Dataset columns: Unnamed: 0 (int64, values 0 to 16k), text_prompt (string, lengths 110 to 62.1k), code_prompt (string, lengths 37 to 152k).
2,000
Given the following text description, write Python code to implement the functionality described below step by step Description: TensorFlow Cloud - Putting it all together In this example, we will use all of the features outlined in the Keras cloud guide to train a state-of-the-art model to classify dog breeds using feature extraction. Let's begin by installing TensorFlow Cloud and importing a few important packages. Setup Step1: Cloud Configuration In order to run TensorFlow Cloud from a Colab notebook, we'll need to upload our authentication key and specify our Cloud storage bucket for image building and publishing. Step2: Model Creation Dataset preprocessing We'll be loading our training data from TensorFlow Datasets Step3: Let's visualize this dataset Step4: Here we will resize and rescale our images to fit into our model's input, as well as create batches. Step5: Model Architecture We're using ResNet50 pretrained on ImageNet, from the Keras Applications module. Step6: Callbacks using Cloud Storage Step7: Here, we're using the tfc.remote() flag to designate a smaller number of epochs than intended for the full training job when running locally. This enables easy debugging on Colab. Step8: Our model requires two additional libraries. We'll create a requirements.txt which specifies those libraries Step9: Let's add a job label so we can document our job logs later Step10: Train on Cloud All that's left to do is run our model on Cloud. To recap, our run() call enables Step11: Evaluate your model We'll use the cloud storage directories we saved for callbacks in order to load tensorboard and retrieve the saved model. Tensorboard logs can be used to monitor training performance in real-time
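Before the full listing, a minimal sketch of the local-versus-cloud switch the description refers to; the epoch counts are illustrative, and tfc.remote() is the only TensorFlow Cloud call assumed here.

import tensorflow_cloud as tfc

# tfc.remote() is False while prototyping locally (e.g. in Colab) and True once
# the submitted job is running on Cloud, so the workload can be shrunk for debugging.
epochs = 500 if tfc.remote() else 1   # illustrative values, matching the pattern used below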
Python Code: !pip install tensorflow-cloud import datetime import os import matplotlib.pyplot as plt import tensorflow as tf import tensorflow_cloud as tfc import tensorflow_datasets as tfds from tensorflow import keras from tensorflow.keras import layers from tensorflow.keras.models import Model Explanation: TensorFlow Cloud - Putting it all together In this example, we will use all of the features outlined in the Keras cloud guide to train a state-of-the-art model to classify dog breeds using feature extraction. Let's begin by installing TensorFlow Cloud and importing a few important packages. Setup End of explanation if not tfc.remote(): from google.colab import files key_upload = files.upload() key_path = list(key_upload.keys())[0] os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = key_path os.system(f"gcloud auth activate-service-account --key-file {key_path}") GCP_BUCKET = "[your-bucket-name]" #@param {type:"string"} Explanation: Cloud Configuration In order to run TensorFlow Cloud from a Colab notebook, we'll need to upload our authentication key and specify our Cloud storage bucket for image building and publishing. End of explanation (ds_train, ds_test), metadata = tfds.load( "stanford_dogs", split=["train", "test"], shuffle_files=True, with_info=True, as_supervised=True, ) NUM_CLASSES = metadata.features["label"].num_classes Explanation: Model Creation Dataset preprocessing We'll be loading our training data from TensorFlow Datasets: End of explanation print("Number of training samples: %d" % tf.data.experimental.cardinality(ds_train)) print("Number of test samples: %d" % tf.data.experimental.cardinality(ds_test)) print("Number of classes: %d" % NUM_CLASSES) plt.figure(figsize=(10, 10)) for i, (image, label) in enumerate(ds_train.take(9)): ax = plt.subplot(3, 3, i + 1) plt.imshow(image) plt.title(int(label)) plt.axis("off") Explanation: Let's visualize this dataset: End of explanation IMG_SIZE = 224 BATCH_SIZE = 64 BUFFER_SIZE = 2 size = (IMG_SIZE, IMG_SIZE) ds_train = ds_train.map(lambda image, label: (tf.image.resize(image, size), label)) ds_test = ds_test.map(lambda image, label: (tf.image.resize(image, size), label)) def input_preprocess(image, label): image = tf.keras.applications.resnet50.preprocess_input(image) return image, label ds_train = ds_train.map( input_preprocess, num_parallel_calls=tf.data.experimental.AUTOTUNE ) ds_train = ds_train.batch(batch_size=BATCH_SIZE, drop_remainder=True) ds_train = ds_train.prefetch(tf.data.experimental.AUTOTUNE) ds_test = ds_test.map(input_preprocess) ds_test = ds_test.batch(batch_size=BATCH_SIZE, drop_remainder=True) Explanation: Here we will resize and rescale our images to fit into our model's input, as well as create batches. End of explanation inputs = tf.keras.layers.Input(shape=(IMG_SIZE, IMG_SIZE, 3)) base_model = tf.keras.applications.ResNet50( weights="imagenet", include_top=False, input_tensor=inputs ) x = tf.keras.layers.GlobalAveragePooling2D()(base_model.output) x = tf.keras.layers.Dropout(0.5)(x) outputs = tf.keras.layers.Dense(NUM_CLASSES)(x) model = tf.keras.Model(inputs, outputs) base_model.trainable = False Explanation: Model Architecture We're using ResNet50 pretrained on ImageNet, from the Keras Applications module. 
End of explanation MODEL_PATH = "resnet-dogs" checkpoint_path = os.path.join("gs://", GCP_BUCKET, MODEL_PATH, "save_at_{epoch}") tensorboard_path = os.path.join( "gs://", GCP_BUCKET, "logs", datetime.datetime.now().strftime("%Y%m%d-%H%M%S") ) callbacks = [ # TensorBoard will store logs for each epoch and graph performance for us. keras.callbacks.TensorBoard(log_dir=tensorboard_path, histogram_freq=1), # ModelCheckpoint will save models after each epoch for retrieval later. keras.callbacks.ModelCheckpoint(checkpoint_path), # EarlyStopping will terminate training when val_loss ceases to improve. keras.callbacks.EarlyStopping(monitor="val_loss", patience=3), ] model.compile( optimizer=tf.keras.optimizers.Adam(learning_rate=1e-2), loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=["accuracy"], ) Explanation: Callbacks using Cloud Storage End of explanation if tfc.remote(): epochs = 500 train_data = ds_train test_data = ds_test else: epochs = 1 train_data = ds_train.take(5) test_data = ds_test.take(5) callbacks = None model.fit( train_data, epochs=epochs, callbacks=callbacks, validation_data=test_data, verbose=2 ) if tfc.remote(): SAVE_PATH = os.path.join("gs://", GCP_BUCKET, MODEL_PATH) model.save(SAVE_PATH) Explanation: Here, we're using the tfc.remote() flag to designate a smaller number of epochs than intended for the full training job when running locally. This enables easy debugging on Colab. End of explanation requirements = ["tensorflow-datasets", "matplotlib"] f = open("requirements.txt", 'w') f.write('\n'.join(requirements)) f.close() Explanation: Our model requires two additional libraries. We'll create a requirements.txt which specifies those libraries: End of explanation job_labels = {"job":"resnet-dogs"} Explanation: Let's add a job label so we can document our job logs later: End of explanation tfc.run( requirements_txt="requirements.txt", distribution_strategy="auto", chief_config=tfc.MachineConfig( cpu_cores=8, memory=30, accelerator_type=tfc.AcceleratorType.NVIDIA_TESLA_T4, accelerator_count=2, ), docker_config=tfc.DockerConfig( image_build_bucket=GCP_BUCKET, ), job_labels=job_labels, stream_logs=True, ) Explanation: Train on Cloud All that's left to do is run our model on Cloud. To recap, our run() call enables: - A model that will be trained and stored on Cloud, including checkpoints - Tensorboard callback logs that will be accessible through tensorboard.dev - Specific python library requirements that will be fulfilled - Customizable job labels for log documentation - Real-time streaming logs printed in Colab - Deeply customizable machine configuration (ours will use two Tesla T4s) - An automatic resolution of distribution strategy for this configuration End of explanation !tensorboard dev upload --logdir $tensorboard_path --name "ResNet Dogs" if tfc.remote(): model = tf.keras.models.load_model(SAVE_PATH) model.evaluate(test_data) Explanation: Evaluate your model We'll use the cloud storage directories we saved for callbacks in order to load tensorboard and retrieve the saved model. Tensorboard logs can be used to monitor training performance in real-time End of explanation
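As a small optional follow-up (not part of the original notebook), the checkpoints written by the ModelCheckpoint callback can be listed straight from the bucket. This assumes the GCP_BUCKET and "resnet-dogs" MODEL_PATH defined above, valid GCS credentials, and tf.io.gfile support for gs:// paths.

import tensorflow as tf

# List what ModelCheckpoint wrote to Cloud Storage after the remote run.
for path in tf.io.gfile.glob(f"gs://{GCP_BUCKET}/resnet-dogs/*"):
    print(path)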
2,001
Given the following text description, write Python code to implement the functionality described below step by step Description: These examples are tests for scc_info on alternating automata. Step1: universal edges are handled as if they were many distinct existential edges from the point of view of scc_info, so the acceptance / rejection status is not always meaningful. Step2: A corner case for the dot printer
Python Code: from IPython.display import display import spot spot.setup(show_default='.bas') spot.automaton(''' HOA: v1 States: 2 Start: 0&1 AP: 2 "a" "b" acc-name: Buchi Acceptance: 1 Inf(0) --BODY-- State: 0 [0] 0 [!0] 1 State: 1 [1] 1 {0} --END-- ''') Explanation: These examples are tests for scc_info on alternating automata. End of explanation spot.automaton(''' HOA: v1 States: 2 Start: 0&1 AP: 2 "a" "b" Acceptance: 1 Fin(0) --BODY-- State: 0 [0] 0&1 {0} State: 1 [1] 1 --END-- ''') spot.automaton(''' HOA: v1 States: 2 Start: 0&1 AP: 2 "a" "b" Acceptance: 1 Fin(0) --BODY-- State: 0 [0] 0 {0} [!0] 1 State: 1 [1] 1&0 --END-- ''') spot.automaton(''' HOA: v1 States: 2 Start: 0 AP: 2 "a" "b" Acceptance: 1 Fin(0) --BODY-- State: 0 [0] 0 [!0] 1 {0} State: 1 [1] 1&0 --END-- ''') spot.automaton(''' HOA: v1 States: 2 Start: 0 AP: 2 "a" "b" Acceptance: 1 Fin(0) --BODY-- State: 0 [0] 0 {0} [!0] 1 State: 1 [1] 1&0 {0} --END-- ''') Explanation: universal edges are handled as if they were many distinct existencial edges from the point of view of scc_info, so the acceptance / rejection status is not always meaningful. End of explanation for a in spot.automata(''' HOA: v1 States: 3 Start: 0 AP: 2 "a" "b" Acceptance: 1 Fin(0) --BODY-- State: 0 [0] 1&2 State: 1 [1] 1&2 {0} State: 2 [1] 2 --END-- HOA: v1 States: 3 Start: 0 AP: 2 "a" "b" Acceptance: 1 Fin(0) --BODY-- State: 0 [0] 1&2 State: 1 [1] 1 {0} State: 2 [1] 2 --END-- '''): display(a) a = spot.automaton(''' HOA: v1 States: 3 Start: 0&2 AP: 2 "a" "b" Acceptance: 1 Fin(0) spot.highlight.edges: 2 2 --BODY-- State: 0 [0] 1&2 State: 1 [1] 1&2 {0} State: 2 [1] 1&2 --END-- ''') display(a, a.show('.basy')) Explanation: A corner case for the dot printer End of explanation
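The cells above only display the automata; the sketch below (not in the original notebook) shows how scc_info itself might be queried on one of them. The method names scc_count and is_accepting_scc are assumed from Spot's Python bindings.

si = spot.scc_info(spot.automaton('''
HOA: v1
States: 2
Start: 0&1
AP: 2 "a" "b"
Acceptance: 1 Fin(0)
--BODY--
State: 0
[0] 0&1 {0}
State: 1
[1] 1
--END--
'''))
for n in range(si.scc_count()):
    # As noted above, with universal edges this status is not always meaningful;
    # the loop only shows how the per-SCC information is retrieved.
    print(n, "accepting" if si.is_accepting_scc(n) else "rejecting or inconclusive")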
2,002
Given the following text description, write Python code to implement the functionality described below step by step Description: Skip-gram word2vec In this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like machine translation. Readings Here are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material. A really good conceptual overview of word2vec from Chris McCormick First word2vec paper from Mikolov et al. NIPS paper with improvements for word2vec also from Mikolov et al. An implementation of word2vec from Thushan Ganegedara TensorFlow word2vec tutorial Word embeddings When you're dealing with words in text, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient, you'll have one element set to 1 and the other 50,000 set to 0. The matrix multiplication going into the first hidden layer will have almost all of the resulting values be zero. This a huge waste of computation. To solve this problem and greatly increase the efficiency of our networks, we use what are called embeddings. Embeddings are just a fully connected layer like you've seen before. We call this layer the embedding layer and the weights are embedding weights. We skip the multiplication into the embedding layer by instead directly grabbing the hidden layer values from the weight matrix. We can do this because the multiplication of a one-hot encoded vector with a matrix returns the row of the matrix corresponding the index of the "on" input unit. Instead of doing the matrix multiplication, we use the weight matrix as a lookup table. We encode the words as integers, for example "heart" is encoded as 958, "mind" as 18094. Then to get hidden layer values for "heart", you just take the 958th row of the embedding matrix. This process is called an embedding lookup and the number of hidden units is the embedding dimension. <img src='assets/tokenize_lookup.png' width=500> There is nothing magical going on here. The embedding lookup table is just a weight matrix. The embedding layer is just a hidden layer. The lookup is just a shortcut for the matrix multiplication. The lookup table is trained just like any weight matrix as well. Embeddings aren't only used for words of course. You can use them for any model where you have a massive number of classes. A particular type of model called Word2Vec uses the embedding layer to find vector representations of words that contain semantic meaning. Word2Vec The word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as "black", "white", and "red" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram. <img src="assets/word2vec_architectures.png" width="500"> In this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts. First up, importing packages. 
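Before loading any data, here is a tiny stand-alone illustration (not part of the original notebook) of the claim above that an embedding lookup is just row selection: multiplying a one-hot vector by the weight matrix returns the same row that direct indexing does.

import numpy as np

vocab_size, embed_dim = 5, 3
weights = np.random.rand(vocab_size, embed_dim)   # toy "embedding" weight matrix
one_hot = np.zeros(vocab_size)
one_hot[2] = 1                                    # word encoded as integer 2

# The matrix product and the row lookup give identical hidden-layer values.
assert np.allclose(one_hot @ weights, weights[2])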
Step1: Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space. Step2: Preprocessing Here I'm fixing up the text to make training easier. This comes from the utils module I wrote. The preprocess function coverts any punctuation into tokens, so a period is changed to &lt;PERIOD&gt;. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it. Step3: And here I'm creating dictionaries to covert words to integers and backwards, integers to words. The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words. Step4: Subsampling Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by $$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$ where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset. I'm going to leave this up to you as an exercise. Check out my solution to see how I did it. Exercise Step5: Making batches Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$. From Mikolov et al. Step6: Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function by the way, helps save memory. Step7: Building the graph From Chris McCormick's blog, we can see the general structure of our network. The input words are passed in as one-hot encoded vectors. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal. The idea here is to train the hidden layer weight matrix to find efficient representations for our words. We can discard the softmax layer becuase we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset. I'm going to have you build the graph in stages now. First off, creating the inputs and labels placeholders like normal. Exercise Step8: Embedding The embedding matrix has a size of the number of words by the number of units in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using tokenized data for our inputs, usually as integers, where the number of tokens is the number of words in our vocabulary. 
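As a quick numeric check of the subsampling formula introduced a few steps above (the frequency used here is made up purely for illustration):

import math

t = 1e-5       # threshold parameter
f_the = 0.05   # suppose "the" accounts for 5% of all tokens
p_drop = 1 - math.sqrt(t / f_the)
print(round(p_drop, 3))   # ~0.986, i.e. "the" is discarded roughly 98.6% of the time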
Exercise Step9: Negative sampling For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called "negative sampling". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss. Exercise Step10: Validation This code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings. Step11: Restore the trained network if you need to Step12: Visualizing the word vectors Below we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local stucture. Check out this post from Christopher Olah to learn more about T-SNE and other ways to visualize high-dimensional data.
Python Code: import time import numpy as np import tensorflow as tf import utils Explanation: Skip-gram word2vec In this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like machine translation. Readings Here are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material. A really good conceptual overview of word2vec from Chris McCormick First word2vec paper from Mikolov et al. NIPS paper with improvements for word2vec also from Mikolov et al. An implementation of word2vec from Thushan Ganegedara TensorFlow word2vec tutorial Word embeddings When you're dealing with words in text, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient, you'll have one element set to 1 and the other 50,000 set to 0. The matrix multiplication going into the first hidden layer will have almost all of the resulting values be zero. This a huge waste of computation. To solve this problem and greatly increase the efficiency of our networks, we use what are called embeddings. Embeddings are just a fully connected layer like you've seen before. We call this layer the embedding layer and the weights are embedding weights. We skip the multiplication into the embedding layer by instead directly grabbing the hidden layer values from the weight matrix. We can do this because the multiplication of a one-hot encoded vector with a matrix returns the row of the matrix corresponding the index of the "on" input unit. Instead of doing the matrix multiplication, we use the weight matrix as a lookup table. We encode the words as integers, for example "heart" is encoded as 958, "mind" as 18094. Then to get hidden layer values for "heart", you just take the 958th row of the embedding matrix. This process is called an embedding lookup and the number of hidden units is the embedding dimension. <img src='assets/tokenize_lookup.png' width=500> There is nothing magical going on here. The embedding lookup table is just a weight matrix. The embedding layer is just a hidden layer. The lookup is just a shortcut for the matrix multiplication. The lookup table is trained just like any weight matrix as well. Embeddings aren't only used for words of course. You can use them for any model where you have a massive number of classes. A particular type of model called Word2Vec uses the embedding layer to find vector representations of words that contain semantic meaning. Word2Vec The word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as "black", "white", and "red" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram. <img src="assets/word2vec_architectures.png" width="500"> In this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts. First up, importing packages. 
End of explanation from urllib.request import urlretrieve from os.path import isfile, isdir from tqdm import tqdm import zipfile dataset_folder_path = 'data' dataset_filename = 'text8.zip' dataset_name = 'Text8 Dataset' class DLProgress(tqdm): last_block = 0 def hook(self, block_num=1, block_size=1, total_size=None): self.total = total_size self.update((block_num - self.last_block) * block_size) self.last_block = block_num if not isfile(dataset_filename): with DLProgress(unit='B', unit_scale=True, miniters=1, desc=dataset_name) as pbar: urlretrieve( 'http://mattmahoney.net/dc/text8.zip', dataset_filename, pbar.hook) if not isdir(dataset_folder_path): with zipfile.ZipFile(dataset_filename) as zip_ref: zip_ref.extractall(dataset_folder_path) with open('data/text8') as f: text = f.read() Explanation: Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space. End of explanation words = utils.preprocess(text) print(words[:30]) print("Total words: {}".format(len(words))) print("Unique words: {}".format(len(set(words)))) Explanation: Preprocessing Here I'm fixing up the text to make training easier. This comes from the utils module I wrote. The preprocess function coverts any punctuation into tokens, so a period is changed to &lt;PERIOD&gt;. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it. End of explanation vocab_to_int, int_to_vocab = utils.create_lookup_tables(words) int_words = [vocab_to_int[word] for word in words] Explanation: And here I'm creating dictionaries to covert words to integers and backwards, integers to words. The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words. End of explanation from collections import Counter import random threshold = 1e-5 word_counts = Counter(int_words) total_count = len(int_words) freqs = {word: count/total_count for word, count in word_counts.items()} p_drop = {word: 1 - np.sqrt(threshold/freqs[word]) for word in word_counts} train_words = [word for word in int_words if random.random() < (1 - p_drop[word])] Explanation: Subsampling Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by $$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$ where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset. I'm going to leave this up to you as an exercise. Check out my solution to see how I did it. Exercise: Implement subsampling for the words in int_words. That is, go through int_words and discard each word given the probablility $P(w_i)$ shown above. Note that $P(w_i)$ is that probability that a word is discarded. Assign the subsampled data to train_words. 
End of explanation def get_target(words, idx, window_size=5): ''' Get a list of words in a window around an index. ''' R = np.random.randint(1, window_size+1) start = idx - R if (idx - R) > 0 else 0 stop = idx + R target_words = set(words[start:idx] + words[idx+1:stop+1]) return list(target_words) idx = len(train_words)-1 train_words[idx] get_target(train_words, idx) t = train_words[idx-5:idx]+[train_words[idx]]+train_words[idx+1:idx+5+1] t Explanation: Making batches Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$. From Mikolov et al.: "Since the more distant words are usually less related to the current word than those close to it, we give less weight to the distant words by sampling less from those words in our training examples... If we choose $C = 5$, for each training word we will select randomly a number $R$ in range $< 1; C >$, and then use $R$ words from history and $R$ words from the future of the current word as correct labels." Exercise: Implement a function get_target that receives a list of words, an index, and a window size, then returns a list of words in the window around the index. Make sure to use the algorithm described above, where you chose a random number of words to from the window. End of explanation def get_batches(words, batch_size, window_size=5): ''' Create a generator of word batches as a tuple (inputs, targets) ''' n_batches = len(words)//batch_size # only full batches words = words[:n_batches*batch_size] for idx in range(0, len(words), batch_size): x, y = [], [] batch = words[idx:idx+batch_size] for ii in range(len(batch)): batch_x = batch[ii] batch_y = get_target(batch, ii, window_size) y.extend(batch_y) x.extend([batch_x]*len(batch_y)) yield x, y Explanation: Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function by the way, helps save memory. End of explanation train_graph = tf.Graph() with train_graph.as_default(): inputs = tf.placeholder(tf.int32, [None], name='inputs') labels = tf.placeholder(tf.int32, [None, None], name='labels') Explanation: Building the graph From Chris McCormick's blog, we can see the general structure of our network. The input words are passed in as one-hot encoded vectors. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal. The idea here is to train the hidden layer weight matrix to find efficient representations for our words. We can discard the softmax layer becuase we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset. I'm going to have you build the graph in stages now. First off, creating the inputs and labels placeholders like normal. Exercise: Assign inputs and labels using tf.placeholder. We're going to be passing in integers, so set the data types to tf.int32. The batches we're passing in will have varying sizes, so set the batch sizes to [None]. To make things work later, you'll need to set the second dimension of labels to None or 1. 
End of explanation n_vocab = len(int_to_vocab) n_embedding = 200 # Number of embedding features with train_graph.as_default(): embedding = tf.Variable(tf.random_uniform((n_vocab, n_embedding), -1, 1)) embed = tf.nn.embedding_lookup(embedding, inputs) Explanation: Embedding The embedding matrix has a size of the number of words by the number of units in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using tokenized data for our inputs, usually as integers, where the number of tokens is the number of words in our vocabulary. Exercise: Tensorflow provides a convenient function tf.nn.embedding_lookup that does this lookup for us. You pass in the embedding matrix and a tensor of integers, then it returns rows in the matrix corresponding to those integers. Below, set the number of embedding features you'll use (200 is a good start), create the embedding matrix variable, and use tf.nn.embedding_lookup to get the embedding tensors. For the embedding matrix, I suggest you initialize it with a uniform random numbers between -1 and 1 using tf.random_uniform. End of explanation # Number of negative labels to sample n_sampled = 100 with train_graph.as_default(): softmax_w = tf.Variable(tf.truncated_normal((n_vocab, n_embedding), stddev=0.1)) softmax_b = tf.Variable(tf.zeros(n_vocab)) # Calculate the loss using negative sampling loss = tf.nn.sampled_softmax_loss(softmax_w, softmax_b, labels, embed, n_sampled, n_vocab) cost = tf.reduce_mean(loss) optimizer = tf.train.AdamOptimizer().minimize(cost) Explanation: Negative sampling For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called "negative sampling". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss. Exercise: Below, create weights and biases for the softmax layer. Then, use tf.nn.sampled_softmax_loss to calculate the loss. Be sure to read the documentation to figure out how it works. End of explanation with train_graph.as_default(): ## From Thushan Ganegedara's implementation valid_size = 16 # Random set of words to evaluate similarity on. valid_window = 100 # pick 8 samples from (0,100) and (1000,1100) each ranges. 
lower id implies more frequent valid_examples = np.array(random.sample(range(valid_window), valid_size//2)) valid_examples = np.append(valid_examples, random.sample(range(1000,1000+valid_window), valid_size//2)) valid_dataset = tf.constant(valid_examples, dtype=tf.int32) # We use the cosine distance: norm = tf.sqrt(tf.reduce_sum(tf.square(embedding), 1, keep_dims=True)) normalized_embedding = embedding / norm valid_embedding = tf.nn.embedding_lookup(normalized_embedding, valid_dataset) similarity = tf.matmul(valid_embedding, tf.transpose(normalized_embedding)) # If the checkpoints directory doesn't exist: !mkdir checkpoints epochs = 10 batch_size = 1000 window_size = 10 with train_graph.as_default(): saver = tf.train.Saver() with tf.Session(graph=train_graph) as sess: iteration = 1 loss = 0 sess.run(tf.global_variables_initializer()) for e in range(1, epochs+1): batches = get_batches(train_words, batch_size, window_size) start = time.time() for x, y in batches: feed = {inputs: x, labels: np.array(y)[:, None]} train_loss, _ = sess.run([cost, optimizer], feed_dict=feed) loss += train_loss if iteration % 100 == 0: end = time.time() print("Epoch {}/{}".format(e, epochs), "Iteration: {}".format(iteration), "Avg. Training loss: {:.4f}".format(loss/100), "{:.4f} sec/batch".format((end-start)/100)) loss = 0 start = time.time() if iteration % 1000 == 0: # note that this is expensive (~20% slowdown if computed every 500 steps) sim = similarity.eval() for i in range(valid_size): valid_word = int_to_vocab[valid_examples[i]] top_k = 8 # number of nearest neighbors nearest = (-sim[i, :]).argsort()[1:top_k+1] log = 'Nearest to %s:' % valid_word for k in range(top_k): close_word = int_to_vocab[nearest[k]] log = '%s %s,' % (log, close_word) print(log) iteration += 1 save_path = saver.save(sess, "checkpoints/text8.ckpt") embed_mat = sess.run(normalized_embedding) Explanation: Validation This code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings. End of explanation with train_graph.as_default(): saver = tf.train.Saver() with tf.Session(graph=train_graph) as sess: saver.restore(sess, tf.train.latest_checkpoint('checkpoints')) embed_mat = sess.run(embedding) Explanation: Restore the trained network if you need to: End of explanation %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt from sklearn.manifold import TSNE viz_words = 500 tsne = TSNE() embed_tsne = tsne.fit_transform(embed_mat[:viz_words, :]) fig, ax = plt.subplots(figsize=(14, 14)) for idx in range(viz_words): plt.scatter(*embed_tsne[idx, :], color='steelblue') plt.annotate(int_to_vocab[idx], (embed_tsne[idx, 0], embed_tsne[idx, 1]), alpha=0.7) Explanation: Visualizing the word vectors Below we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local stucture. Check out this post from Christopher Olah to learn more about T-SNE and other ways to visualize high-dimensional data. End of explanation
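As a small post-hoc check (not in the original notebook), the saved matrix can also be queried directly for a word's nearest neighbours. The helper normalises embed_mat itself, so it works whether or not the restored matrix was already normalised; the query word is only an example and must exist in vocab_to_int.

def nearest(word, k=8):
    # Cosine similarity of one word's vector against every row of the embedding matrix.
    mat = embed_mat / np.linalg.norm(embed_mat, axis=1, keepdims=True)
    sims = mat @ mat[vocab_to_int[word]]
    return [int_to_vocab[i] for i in (-sims).argsort()[1:k + 1]]

print(nearest('history'))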
2,003
Given the following text description, write Python code to implement the functionality described below step by step Description: This notebook is part of the nbsphinx documentation Step1: A simple output Step2: The standard output stream Step3: Normal output + standard output Step4: The standard error stream is highlighted and displayed just below the code cell. The standard output stream comes afterwards (with no special highlighting). Finally, the "normal" output is displayed. Step5: <div class="alert alert-info"> Note Using the IPython kernel, the order is actually mixed up, see https Step6: Special Display Formats See IPython example notebook. Local Image Files Step7: See also SVG support for LaTeX. Step8: Image URLs Step9: Math Step10: Plots The output formats for Matplotlib plots can be customized via the IPython configuration file ipython_kernel_config.py. This file can be either in the directory where your notebook is located (see the ipython_kernel_config.py in this directory), or in your profile directory (typically ~/.ipython/profile_default/ipython_kernel_config.py). To find out your IPython profile directory, use this command Step11: Alternatively, the figure format(s) can also be chosen directly in the notebook (which overrides the setting in nbsphinx_execute_arguments and in the IPython configuration) Step12: If you want to use PNG images, but with HiDPI resolution, use the special 'png2x' (a.k.a. 'retina') format (which also looks nice in the LaTeX output) Step13: Pandas Dataframes Pandas dataframes should be displayed as nicely formatted HTML tables (if you are using HTML output). Step14: For LaTeX output, however, the plain text output is used by default. To get nice LaTeX tables, a few settings have to be changed Step15: This is not enabled by default because of Pandas issue #12182. The generated LaTeX tables utilize the booktabs package, so you have to make sure that package is loaded in the preamble with Step16: The longtable package is already used by Sphinx, so you don't have to manually load it in the preamble. Finally, if you want to use LaTeX math expressions in your dataframe, you'll have to disable escaping Step17: The above settings should have no influence on the HTML output, but the LaTeX output should now look nicer Step19: Markdown Content Step20: YouTube Videos Step21: Interactive Widgets (HTML only) The basic widget infrastructure is provided by the ipywidgets module. More advanced widgets are available in separate packages, see for example https Step22: A widget typically consists of a so-called "model" and a "view" into that model. If you display a widget multiple times, all instances act as a "view" into the same "model". That means that their state is synchronized. You can move either one of these sliders to try this out Step23: You can also link different widgets. Widgets can be linked via the kernel (which of course only works while a kernel is running) or directly in the client (which even works in the rendered HTML pages). Widgets can be linked uni- or bi-directionally. Examples for all 4 combinations are shown here Step24: <div class="alert alert-info"> Other Languages The examples shown here are using Python, but the widget technology can also be used with different Jupyter kernels (i.e. with different programming languages). </div> Troubleshooting To obtain more information if widgets are not displayed as expected, you will need to look at the error message in the web browser console. 
To figure out how to open the web browser console, you may look at the web browser documentation Step25: Unsupported Output Types If a code cell produces data with an unsupported MIME type, the Jupyter Notebook doesn't generate any output. nbsphinx, however, shows a warning message. Step26: ANSI Colors The standard output and standard error streams may contain ANSI escape sequences to change the text and background colors. Step27: The following code showing the 8 basic ANSI colors is based on https Step28: ANSI also supports a set of 256 indexed colors. The following code showing all of them is based on http Step29: You can even use 24-bit RGB colors
Python Code: # 2 empty lines before, 1 after Explanation: This notebook is part of the nbsphinx documentation: https://nbsphinx.readthedocs.io/. Code Cells Code, Output, Streams An empty code cell: Two empty lines: Leading/trailing empty lines: End of explanation 6 * 7 Explanation: A simple output: End of explanation print('Hello, world!') Explanation: The standard output stream: End of explanation print('Hello, world!') 6 * 7 Explanation: Normal output + standard output End of explanation import sys print("I'll appear on the standard error stream", file=sys.stderr) print("I'll appear on the standard output stream") "I'm the 'normal' output" Explanation: The standard error stream is highlighted and displayed just below the code cell. The standard output stream comes afterwards (with no special highlighting). Finally, the "normal" output is displayed. End of explanation %%bash for i in 1 2 3 do echo $i done Explanation: <div class="alert alert-info"> Note Using the IPython kernel, the order is actually mixed up, see https://github.com/ipython/ipykernel/issues/280. </div> Cell Magics IPython can handle code in other languages by means of cell magics: End of explanation from IPython.display import Image i = Image(filename='images/notebook_icon.png') i display(i) Explanation: Special Display Formats See IPython example notebook. Local Image Files End of explanation from IPython.display import SVG SVG(filename='images/python_logo.svg') Explanation: See also SVG support for LaTeX. End of explanation Image(url='https://www.python.org/static/img/python-logo-large.png') Image(url='https://www.python.org/static/img/python-logo-large.png', embed=True) Image(url='https://jupyter.org/assets/homepage/main-logo.svg') Explanation: Image URLs End of explanation from IPython.display import Math eq = Math(r'\int\limits_{-\infty}^\infty f(x) \delta(x - x_0) dx = f(x_0)') eq display(eq) from IPython.display import Latex Latex(r'This is a \LaTeX{} equation: $a^2 + b^2 = c^2$') %%latex \begin{equation} \int\limits_{-\infty}^\infty f(x) \delta(x - x_0) dx = f(x_0) \end{equation} Explanation: Math End of explanation import matplotlib.pyplot as plt fig, ax = plt.subplots(figsize=[6, 3]) ax.plot([4, 9, 7, 20, 6, 33, 13, 23, 16, 62, 8]); Explanation: Plots The output formats for Matplotlib plots can be customized via the IPython configuration file ipython_kernel_config.py. This file can be either in the directory where your notebook is located (see the ipython_kernel_config.py in this directory), or in your profile directory (typically ~/.ipython/profile_default/ipython_kernel_config.py). To find out your IPython profile directory, use this command: python3 -m IPython profile locate A local ipython_kernel_config.py in the notebook directory also works on https://mybinder.org/. Alternatively, you can create a file with those settings in a file named .ipython/profile_default/ipython_kernel_config.py in your repository. If you want to use SVG images for Matplotlib plots, add this line to your IPython configuration file: python c.InlineBackend.figure_formats = {'svg'} If you want SVG images, but also want nice plots when exporting to LaTeX/PDF, you can select: python c.InlineBackend.figure_formats = {'svg', 'pdf'} If you want to use the default PNG plots or HiDPI plots using 'png2x' (a.k.a. 'retina'), make sure to set this: python c.InlineBackend.rc = {'figure.dpi': 96} This is needed because the default 'figure.dpi' value of 72 is only valid for the Qt Console. 
If you are planning to store your SVG plots as part of your notebooks, you should also have a look at the 'svg.hashsalt' setting. For more details on these and other settings, have a look at Default Values for Matplotlib's "inline" Backend. If you for some reason can't use a ipython_kernel_config.py file, you can also change these settings with nbsphinx_execute_arguments in your conf.py file: python nbsphinx_execute_arguments = [ "--InlineBackend.figure_formats={'svg', 'pdf'}", "--InlineBackend.rc=figure.dpi=96", ] In the following example, nbsphinx should use an SVG image in the HTML output and a PDF image for LaTeX/PDF output. End of explanation %config InlineBackend.figure_formats = ['png'] fig Explanation: Alternatively, the figure format(s) can also be chosen directly in the notebook (which overrides the setting in nbsphinx_execute_arguments and in the IPython configuration): End of explanation %config InlineBackend.figure_formats = ['png2x'] fig Explanation: If you want to use PNG images, but with HiDPI resolution, use the special 'png2x' (a.k.a. 'retina') format (which also looks nice in the LaTeX output): End of explanation import numpy as np import pandas as pd df = pd.DataFrame(np.random.randint(0, 100, size=[5, 4]), columns=['a', 'b', 'c', 'd']) df Explanation: Pandas Dataframes Pandas dataframes should be displayed as nicely formatted HTML tables (if you are using HTML output). End of explanation pd.set_option('display.latex.repr', True) Explanation: For LaTeX output, however, the plain text output is used by default. To get nice LaTeX tables, a few settings have to be changed: End of explanation pd.set_option('display.latex.longtable', True) Explanation: This is not enabled by default because of Pandas issue #12182. The generated LaTeX tables utilize the booktabs package, so you have to make sure that package is loaded in the preamble with: \usepackage{booktabs} In order to allow page breaks within tables, you should use: End of explanation pd.set_option('display.latex.escape', False) Explanation: The longtable package is already used by Sphinx, so you don't have to manually load it in the preamble. Finally, if you want to use LaTeX math expressions in your dataframe, you'll have to disable escaping: End of explanation df = pd.DataFrame(np.random.randint(0, 100, size=[10, 4]), columns=[r'$\alpha$', r'$\beta$', r'$\gamma$', r'$\delta$']) df Explanation: The above settings should have no influence on the HTML output, but the LaTeX output should now look nicer: End of explanation from IPython.display import Markdown md = Markdown( # Markdown It *should* show up as **formatted** text with things like [links] and images. [links]: https://jupyter.org/ ![Jupyter notebook icon](images/notebook_icon.png) ## Markdown Extensions There might also be mathematical equations like $a^2 + b^2 = c^2$ and even tables: A | B | A and B ------|-------|-------- False | False | False True | False | False False | True | False True | True | True ) md Explanation: Markdown Content End of explanation from IPython.display import YouTubeVideo YouTubeVideo('9_OIs49m56E') Explanation: YouTube Videos End of explanation import ipywidgets as w slider = w.IntSlider() slider.value = 42 slider Explanation: Interactive Widgets (HTML only) The basic widget infrastructure is provided by the ipywidgets module. More advanced widgets are available in separate packages, see for example https://jupyter.org/widgets. The JavaScript code which is needed to display Jupyter widgets is loaded automatically (using RequireJS). 
If you want to use non-default URLs or local files, you can use the nbsphinx_widgets_path and nbsphinx_requirejs_path settings. End of explanation slider Explanation: A widget typically consists of a so-called "model" and a "view" into that model. If you display a widget multiple times, all instances act as a "view" into the same "model". That means that their state is synchronized. You can move either one of these sliders to try this out: End of explanation link = w.IntSlider(description='link') w.link((slider, 'value'), (link, 'value')) jslink = w.IntSlider(description='jslink') w.jslink((slider, 'value'), (jslink, 'value')) dlink = w.IntSlider(description='dlink') w.dlink((slider, 'value'), (dlink, 'value')) jsdlink = w.IntSlider(description='jsdlink') w.jsdlink((slider, 'value'), (jsdlink, 'value')) w.VBox([link, jslink, dlink, jsdlink]) tabs = w.Tab() for idx, obj in enumerate([df, fig, eq, i, md, slider]): out = w.Output() with out: display(obj) tabs.children += out, tabs.set_title(idx, obj.__class__.__name__) tabs Explanation: You can also link different widgets. Widgets can be linked via the kernel (which of course only works while a kernel is running) or directly in the client (which even works in the rendered HTML pages). Widgets can be linked uni- or bi-directionally. Examples for all 4 combinations are shown here: End of explanation %%javascript var text = document.createTextNode("Hello, I was generated with JavaScript!"); // Content appended to "element" will be visible in the output area: element.appendChild(text); Explanation: <div class="alert alert-info"> Other Languages The examples shown here are using Python, but the widget technology can also be used with different Jupyter kernels (i.e. with different programming languages). </div> Troubleshooting To obtain more information if widgets are not displayed as expected, you will need to look at the error message in the web browser console. To figure out how to open the web browser console, you may look at the web browser documentation: Chrome: https://developer.chrome.com/docs/devtools/open/#console Firefox: https://developer.mozilla.org/en-US/docs/Tools/Web_Console#opening-the-web-console The error is most probably linked to the JavaScript files not being loaded or loaded in the wrong order within the HTML file. To analyze the error, you can inspect the HTML file within the web browser (e.g.: right-click on the page and select View Page Source) and look at the &lt;head&gt; section of the page. That section should contain some JavaScript libraries. Those relevant for widgets are: ```html <!-- require.js is a mandatory dependency for jupyter-widgets --> <script crossorigin="anonymous" integrity="sha256-Ae2Vz/4ePdIu6ZyI/5ZGsYnb+m0JlOmKPjt6XZ9JJkA=" src="https://cdnjs.cloudflare.com/ajax/libs/require.js/2.3.4/require.min.js"></script> <!-- jupyter-widgets JavaScript --> <script type="text/javascript" src="https://unpkg.com/@jupyter-widgets/html-manager@^0.18.0/dist/embed-amd.js"></script> <!-- JavaScript containing custom Jupyter widgets --> <script src="../_static/embed-widgets.js"></script> ``` The two first elements are mandatory. The third one is required only if you designed your own widgets but did not publish them on npm.js. If those libraries appear in a different order, the widgets won't be displayed. Here is a list of possible solutions: If the widgets are not displayed, see #519. If the widgets are displayed multiple times, see #378. 
Arbitrary JavaScript Output (HTML only) End of explanation display({ 'text/x-python': 'print("Hello, world!")', 'text/x-haskell': 'main = putStrLn "Hello, world!"', }, raw=True) Explanation: Unsupported Output Types If a code cell produces data with an unsupported MIME type, the Jupyter Notebook doesn't generate any output. nbsphinx, however, shows a warning message. End of explanation print('BEWARE: \x1b[1;33;41mugly colors\x1b[m!', file=sys.stderr) print('AB\x1b[43mCD\x1b[35mEF\x1b[1mGH\x1b[4mIJ\x1b[7m' 'KL\x1b[49mMN\x1b[39mOP\x1b[22mQR\x1b[24mST\x1b[27mUV') Explanation: ANSI Colors The standard output and standard error streams may contain ANSI escape sequences to change the text and background colors. End of explanation text = ' XYZ ' formatstring = '\x1b[{}m' + text + '\x1b[m' print(' ' * 6 + ' ' * len(text) + ''.join('{:^{}}'.format(bg, len(text)) for bg in range(40, 48))) for fg in range(30, 38): for bold in False, True: fg_code = ('1;' if bold else '') + str(fg) print(' {:>4} '.format(fg_code) + formatstring.format(fg_code) + ''.join(formatstring.format(fg_code + ';' + str(bg)) for bg in range(40, 48))) Explanation: The following code showing the 8 basic ANSI colors is based on https://tldp.org/HOWTO/Bash-Prompt-HOWTO/x329.html. Each of the 8 colors has an "intense" variation, which is used for bold text. End of explanation formatstring = '\x1b[38;5;{0};48;5;{0}mX\x1b[1mX\x1b[m' print(' + ' + ''.join('{:2}'.format(i) for i in range(36))) print(' 0 ' + ''.join(formatstring.format(i) for i in range(16))) for i in range(7): i = i * 36 + 16 print('{:3} '.format(i) + ''.join(formatstring.format(i + j) for j in range(36) if i + j < 256)) Explanation: ANSI also supports a set of 256 indexed colors. The following code showing all of them is based on http://bitmote.com/index.php?post/2012/11/19/Using-ANSI-Color-Codes-to-Colorize-Your-Bash-Prompt-on-Linux. End of explanation start = 255, 0, 0 end = 0, 0, 255 length = 79 out = [] for i in range(length): rgb = [start[c] + int(i * (end[c] - start[c]) / length) for c in range(3)] out.append('\x1b[' '38;2;{rgb[2]};{rgb[1]};{rgb[0]};' '48;2;{rgb[0]};{rgb[1]};{rgb[2]}mX\x1b[m'.format(rgb=rgb)) print(''.join(out)) Explanation: You can even use 24-bit RGB colors: End of explanation
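For readers decoding the sequences used above, a short anatomy note (not from the original notebook): each colour change is ESC [ parameters m, with the parameters given as semicolon-separated codes.

ESC = '\x1b'
print(f'{ESC}[1;31mbold red text{ESC}[m')           # 1 = bold, 31 = red foreground
print(f'{ESC}[48;5;208mindexed background{ESC}[m')  # 48;5;N = background from the 256-colour table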
2,004
Given the following text description, write Python code to implement the functionality described below step by step Description: Factorization Machine example Step1: At first we'll test only with the bare minimum userId, itemId and rating columns. Step2: So we have the user ids, item ids and the respective ratings in the 3 columns. Next we need to separate the rating column since we are going to predict that. Also we need to explicitly set the column data type to string for userId and itemId so that the model treats them as categorical variables, not integers. We'll do this for both the train and test sets. Step3: Next we'll train the model. We choose 60 iterations here. Tweak the hyperparameters to get the best performance. Step4: Now we'll try with all the columns and train our model on the whole dataset. Step5: This time, we also need to change the data type of the columns gender and occupation to string so that they are treated as categorical variables and hence one-hot encoded. Step6: Before training, we also need to normalize the age column since the values are greatly different from the other columns and hence will hamper the performance of the model. We choose min-max normalization here.
Python Code: import numpy as np import pandas as pd from sklearn.metrics import mean_squared_error from reco.datasets import loadMovieLens100k from reco.recommender import FM Explanation: Factorization Machine example End of explanation train, test, _, _ = loadMovieLens100k(train_test_split=True) print(train.head()) Explanation: At first we'll test only with the bare minimum userId, itemId and rating columns. End of explanation y_train = train['rating'] train.drop(['rating'], axis=1, inplace=True) train['userId'] = train['userId'].astype('str') train['itemId'] = train['itemId'].astype('str') y_test = test['rating'] test.drop(['rating'], axis=1, inplace=True) test['userId'] = test['userId'].astype('str') test['itemId'] = test['itemId'].astype('str') Explanation: So we have the user ids, item ids and the respective ratings in the 3 columns. Next we need to separate the rating column since we are going to predict that. Also we need to explicitly set the column data type to string for userId and itemId so that the model treats them as categorical variables, not integers. We'll do this for both the train and test sets. End of explanation f = FM(k=10, iterations = 60, learning_rate = 0.003, regularizer=0.005) f.fit(X=train, y=y_train) y_pred = f.predict(test) print("MSE: {}".format(mean_squared_error(y_test, y_pred))) Explanation: Next we'll train the model. We choose 60 iterations here. Tweak the hyperparameters to get the best performance. End of explanation train, test, _, _ = loadMovieLens100k(train_test_split=True, all_columns=True) print(train.head()) Explanation: Now we'll try with all the columns and train our model on the whole dataset. End of explanation y_train = train['rating'] train.drop(['rating'], axis=1, inplace=True) train['userId'] = train['userId'].astype('str') train['itemId'] = train['itemId'].astype('str') train['gender'] = train['gender'].astype('str') train['occupation'] = train['occupation'].astype('str') y_test = test['rating'] test.drop(['rating'], axis=1, inplace=True) test['userId'] = test['userId'].astype('str') test['itemId'] = test['itemId'].astype('str') test['gender'] = test['gender'].astype('str') test['occupation'] = test['occupation'].astype('str') Explanation: This time, we also need to change the data type of the columns gender and occupation to string so that they are treated as categorical variables and hence one-hot encoded. End of explanation train['age'] = (train['age']-train['age'].min())/(train['age'].max()-train['age'].min()) test['age'] = (test['age']-test['age'].min())/(test['age'].max()-test['age'].min()) f = FM(k=10, iterations = 60, learning_rate = 0.003, regularizer=0.005) f.fit(X=train, y=y_train) y_pred = f.predict(test) print("MSE: {}".format(mean_squared_error(y_test, y_pred))) Explanation: Before training, we also need to normalize the age column since the values are greatly different from the other columns and hence will hamper the performance of the model. We choose min-max normaliztion here. End of explanation
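To make explicit what the FM model above is fitting, here is a sketch of the standard degree-2 factorization machine prediction. This is the textbook formulation, not the reco library's internal code, and the names are illustrative.

import numpy as np

def fm_predict(x, w0, w, V):
    # y(x) = w0 + sum_i w_i * x_i + sum_{i<j} <V_i, V_j> * x_i * x_j,
    # with the pairwise term computed through the usual O(k * n) identity.
    linear = w0 + x @ w
    pairwise = 0.5 * np.sum((x @ V) ** 2 - (x ** 2) @ (V ** 2))
    return linear + pairwise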
2,005
Given the following text description, write Python code to implement the functionality described below step by step Description: ES-DOC CMIP6 Model Properties - Seaice MIP Era Step1: Document Authors Set document authors Step2: Document Contributors Specify document contributors Step3: Document Publication Specify document publication status Step4: Document Table of Contents 1. Key Properties --&gt; Model 2. Key Properties --&gt; Variables 3. Key Properties --&gt; Seawater Properties 4. Key Properties --&gt; Resolution 5. Key Properties --&gt; Tuning Applied 6. Key Properties --&gt; Key Parameter Values 7. Key Properties --&gt; Assumptions 8. Key Properties --&gt; Conservation 9. Grid --&gt; Discretisation --&gt; Horizontal 10. Grid --&gt; Discretisation --&gt; Vertical 11. Grid --&gt; Seaice Categories 12. Grid --&gt; Snow On Seaice 13. Dynamics 14. Thermodynamics --&gt; Energy 15. Thermodynamics --&gt; Mass 16. Thermodynamics --&gt; Salt 17. Thermodynamics --&gt; Salt --&gt; Mass Transport 18. Thermodynamics --&gt; Salt --&gt; Thermodynamics 19. Thermodynamics --&gt; Ice Thickness Distribution 20. Thermodynamics --&gt; Ice Floe Size Distribution 21. Thermodynamics --&gt; Melt Ponds 22. Thermodynamics --&gt; Snow Processes 23. Radiative Processes 1. Key Properties --&gt; Model Name of seaice model used. 1.1. Model Overview Is Required Step5: 1.2. Model Name Is Required Step6: 2. Key Properties --&gt; Variables List of prognostic variable in the sea ice model. 2.1. Prognostic Is Required Step7: 3. Key Properties --&gt; Seawater Properties Properties of seawater relevant to sea ice 3.1. Ocean Freezing Point Is Required Step8: 3.2. Ocean Freezing Point Value Is Required Step9: 4. Key Properties --&gt; Resolution Resolution of the sea ice grid 4.1. Name Is Required Step10: 4.2. Canonical Horizontal Resolution Is Required Step11: 4.3. Number Of Horizontal Gridpoints Is Required Step12: 5. Key Properties --&gt; Tuning Applied Tuning applied to sea ice model component 5.1. Description Is Required Step13: 5.2. Target Is Required Step14: 5.3. Simulations Is Required Step15: 5.4. Metrics Used Is Required Step16: 5.5. Variables Is Required Step17: 6. Key Properties --&gt; Key Parameter Values Values of key parameters 6.1. Typical Parameters Is Required Step18: 6.2. Additional Parameters Is Required Step19: 7. Key Properties --&gt; Assumptions Assumptions made in the sea ice model 7.1. Description Is Required Step20: 7.2. On Diagnostic Variables Is Required Step21: 7.3. Missing Processes Is Required Step22: 8. Key Properties --&gt; Conservation Conservation in the sea ice component 8.1. Description Is Required Step23: 8.2. Properties Is Required Step24: 8.3. Budget Is Required Step25: 8.4. Was Flux Correction Used Is Required Step26: 8.5. Corrected Conserved Prognostic Variables Is Required Step27: 9. Grid --&gt; Discretisation --&gt; Horizontal Sea ice discretisation in the horizontal 9.1. Grid Is Required Step28: 9.2. Grid Type Is Required Step29: 9.3. Scheme Is Required Step30: 9.4. Thermodynamics Time Step Is Required Step31: 9.5. Dynamics Time Step Is Required Step32: 9.6. Additional Details Is Required Step33: 10. Grid --&gt; Discretisation --&gt; Vertical Sea ice vertical properties 10.1. Layering Is Required Step34: 10.2. Number Of Layers Is Required Step35: 10.3. Additional Details Is Required Step36: 11. Grid --&gt; Seaice Categories What method is used to represent sea ice categories ? 11.1. Has Mulitple Categories Is Required Step37: 11.2. Number Of Categories Is Required Step38: 11.3. 
Category Limits Is Required Step39: 11.4. Ice Thickness Distribution Scheme Is Required Step40: 11.5. Other Is Required Step41: 12. Grid --&gt; Snow On Seaice Snow on sea ice details 12.1. Has Snow On Ice Is Required Step42: 12.2. Number Of Snow Levels Is Required Step43: 12.3. Snow Fraction Is Required Step44: 12.4. Additional Details Is Required Step45: 13. Dynamics Sea Ice Dynamics 13.1. Horizontal Transport Is Required Step46: 13.2. Transport In Thickness Space Is Required Step47: 13.3. Ice Strength Formulation Is Required Step48: 13.4. Redistribution Is Required Step49: 13.5. Rheology Is Required Step50: 14. Thermodynamics --&gt; Energy Processes related to energy in sea ice thermodynamics 14.1. Enthalpy Formulation Is Required Step51: 14.2. Thermal Conductivity Is Required Step52: 14.3. Heat Diffusion Is Required Step53: 14.4. Basal Heat Flux Is Required Step54: 14.5. Fixed Salinity Value Is Required Step55: 14.6. Heat Content Of Precipitation Is Required Step56: 14.7. Precipitation Effects On Salinity Is Required Step57: 15. Thermodynamics --&gt; Mass Processes related to mass in sea ice thermodynamics 15.1. New Ice Formation Is Required Step58: 15.2. Ice Vertical Growth And Melt Is Required Step59: 15.3. Ice Lateral Melting Is Required Step60: 15.4. Ice Surface Sublimation Is Required Step61: 15.5. Frazil Ice Is Required Step62: 16. Thermodynamics --&gt; Salt Processes related to salt in sea ice thermodynamics. 16.1. Has Multiple Sea Ice Salinities Is Required Step63: 16.2. Sea Ice Salinity Thermal Impacts Is Required Step64: 17. Thermodynamics --&gt; Salt --&gt; Mass Transport Mass transport of salt 17.1. Salinity Type Is Required Step65: 17.2. Constant Salinity Value Is Required Step66: 17.3. Additional Details Is Required Step67: 18. Thermodynamics --&gt; Salt --&gt; Thermodynamics Salt thermodynamics 18.1. Salinity Type Is Required Step68: 18.2. Constant Salinity Value Is Required Step69: 18.3. Additional Details Is Required Step70: 19. Thermodynamics --&gt; Ice Thickness Distribution Ice thickness distribution details. 19.1. Representation Is Required Step71: 20. Thermodynamics --&gt; Ice Floe Size Distribution Ice floe-size distribution details. 20.1. Representation Is Required Step72: 20.2. Additional Details Is Required Step73: 21. Thermodynamics --&gt; Melt Ponds Characteristics of melt ponds. 21.1. Are Included Is Required Step74: 21.2. Formulation Is Required Step75: 21.3. Impacts Is Required Step76: 22. Thermodynamics --&gt; Snow Processes Thermodynamic processes in snow on sea ice 22.1. Has Snow Aging Is Required Step77: 22.2. Snow Aging Scheme Is Required Step78: 22.3. Has Snow Ice Formation Is Required Step79: 22.4. Snow Ice Formation Scheme Is Required Step80: 22.5. Redistribution Is Required Step81: 22.6. Heat Diffusion Is Required Step82: 23. Radiative Processes Sea Ice Radiative Processes 23.1. Surface Albedo Is Required Step83: 23.2. Ice Radiation Transmission Is Required
Python Code: # DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'cas', 'fgoals-f3-l', 'seaice') Explanation: ES-DOC CMIP6 Model Properties - Seaice MIP Era: CMIP6 Institute: CAS Source ID: FGOALS-F3-L Topic: Seaice Sub-Topics: Dynamics, Thermodynamics, Radiative Processes. Properties: 80 (63 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:53:44 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) Explanation: Document Authors Set document authors End of explanation # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) Explanation: Document Contributors Specify document contributors End of explanation # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) Explanation: Document Publication Specify document publication status End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.model.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: Document Table of Contents 1. Key Properties --&gt; Model 2. Key Properties --&gt; Variables 3. Key Properties --&gt; Seawater Properties 4. Key Properties --&gt; Resolution 5. Key Properties --&gt; Tuning Applied 6. Key Properties --&gt; Key Parameter Values 7. Key Properties --&gt; Assumptions 8. Key Properties --&gt; Conservation 9. Grid --&gt; Discretisation --&gt; Horizontal 10. Grid --&gt; Discretisation --&gt; Vertical 11. Grid --&gt; Seaice Categories 12. Grid --&gt; Snow On Seaice 13. Dynamics 14. Thermodynamics --&gt; Energy 15. Thermodynamics --&gt; Mass 16. Thermodynamics --&gt; Salt 17. Thermodynamics --&gt; Salt --&gt; Mass Transport 18. Thermodynamics --&gt; Salt --&gt; Thermodynamics 19. Thermodynamics --&gt; Ice Thickness Distribution 20. Thermodynamics --&gt; Ice Floe Size Distribution 21. Thermodynamics --&gt; Melt Ponds 22. Thermodynamics --&gt; Snow Processes 23. Radiative Processes 1. Key Properties --&gt; Model Name of seaice model used. 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of sea ice model. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.model.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.variables.prognostic') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Sea ice temperature" # "Sea ice concentration" # "Sea ice thickness" # "Sea ice volume per grid cell area" # "Sea ice u-velocity" # "Sea ice v-velocity" # "Sea ice enthalpy" # "Internal ice stress" # "Salinity" # "Snow temperature" # "Snow depth" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 2. Key Properties --&gt; Variables List of prognostic variable in the sea ice model. 2.1. 
Prognostic Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of prognostic variables in the sea ice component. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "TEOS-10" # "Constant" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 3. Key Properties --&gt; Seawater Properties Properties of seawater relevant to sea ice 3.1. Ocean Freezing Point Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 3.2. Ocean Freezing Point Value Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If using a constant seawater freezing point, specify this value. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.resolution.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4. Key Properties --&gt; Resolution Resolution of the sea ice grid 4.1. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4.2. Canonical Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.3. Number Of Horizontal Gridpoints Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Total number of horizontal (XY) points (or degrees of freedom) on computational grid. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5. Key Properties --&gt; Tuning Applied Tuning applied to sea ice model component 5.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.2. Target Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.3. Simulations Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 *Which simulations had tuning applied, e.g. all, not historical, only pi-control? * End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.4. Metrics Used Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List any observed metrics used in tuning model/parameters End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.5. Variables Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Which variables were changed during the tuning process? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Ice strength (P*) in units of N m{-2}" # "Snow conductivity (ks) in units of W m{-1} K{-1} " # "Minimum thickness of ice created in leads (h0) in units of m" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 6. Key Properties --&gt; Key Parameter Values Values of key parameters 6.1. Typical Parameters Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N What values were specificed for the following parameters if used? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.2. Additional Parameters Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If you have any additional paramterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.assumptions.description') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7. Key Properties --&gt; Assumptions Assumptions made in the sea ice model 7.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General overview description of any key assumptions made in this model. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.2. 
On Diagnostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.3. Missing Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.conservation.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8. Key Properties --&gt; Conservation Conservation in the sea ice component 8.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Provide a general description of conservation methodology. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.conservation.properties') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Energy" # "Mass" # "Salt" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 8.2. Properties Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Properties conserved in sea ice by the numerical schemes. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.conservation.budget') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.3. Budget Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 For each conserved property, specify the output variables which close the related budgets. as a comma separated list. For example: Conserved property, variable1, variable2, variable3 End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 8.4. Was Flux Correction Used Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does conservation involved flux correction? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.5. Corrected Conserved Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List any variables which are conserved by more than the numerical scheme alone. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Ocean grid" # "Atmosphere Grid" # "Own Grid" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 9. Grid --&gt; Discretisation --&gt; Horizontal Sea ice discretisation in the horizontal 9.1. 
Grid Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Grid on which sea ice is horizontal discretised? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Structured grid" # "Unstructured grid" # "Adaptive grid" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 9.2. Grid Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the type of sea ice grid? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Finite differences" # "Finite elements" # "Finite volumes" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 9.3. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the advection scheme? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 9.4. Thermodynamics Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the time step in the sea ice model thermodynamic component in seconds. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 9.5. Dynamics Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the time step in the sea ice model dynamic component in seconds. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.6. Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify any additional horizontal discretisation details. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Zero-layer" # "Two-layers" # "Multi-layers" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 10. Grid --&gt; Discretisation --&gt; Vertical Sea ice vertical properties 10.1. Layering Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 10.2. Number Of Layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 If using multi-layers specify how many. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 10.3. 
Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify any additional vertical grid details. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 11. Grid --&gt; Seaice Categories What method is used to represent sea ice categories ? 11.1. Has Mulitple Categories Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Set to true if the sea ice model has multiple sea ice categories. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 11.2. Number Of Categories Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 If using sea ice categories specify how many. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11.3. Category Limits Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 If using sea ice categories specify each of the category limits. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11.4. Ice Thickness Distribution Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the sea ice thickness distribution scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.seaice_categories.other') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11.5. Other Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If the sea ice model does not use sea ice categories specify any additional details. For example models that paramterise the ice thickness distribution ITD (i.e there is no explicit ITD) but there is assumed distribution and fluxes are computed accordingly. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 12. Grid --&gt; Snow On Seaice Snow on sea ice details 12.1. Has Snow On Ice Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is snow on ice represented in this model? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 12.2. Number Of Snow Levels Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of vertical levels of snow on ice? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 12.3. 
Snow Fraction Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how the snow fraction on sea ice is determined End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 12.4. Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify any additional details related to snow on ice. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.dynamics.horizontal_transport') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Incremental Re-mapping" # "Prather" # "Eulerian" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13. Dynamics Sea Ice Dynamics 13.1. Horizontal Transport Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the method of horizontal advection of sea ice? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Incremental Re-mapping" # "Prather" # "Eulerian" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.2. Transport In Thickness Space Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the method of sea ice transport in thickness space (i.e. in thickness categories)? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Hibler 1979" # "Rothrock 1975" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.3. Ice Strength Formulation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Which method of sea ice strength formulation is used? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.dynamics.redistribution') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Rafting" # "Ridging" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.4. Redistribution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Which processes can redistribute sea ice (including thickness)? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.dynamics.rheology') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Free-drift" # "Mohr-Coloumb" # "Visco-plastic" # "Elastic-visco-plastic" # "Elastic-anisotropic-plastic" # "Granular" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.5. Rheology Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Rheology, what is the ice deformation formulation? End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Pure ice latent heat (Semtner 0-layer)" # "Pure ice latent and sensible heat" # "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)" # "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14. Thermodynamics --&gt; Energy Processes related to energy in sea ice thermodynamics 14.1. Enthalpy Formulation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the energy formulation? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Pure ice" # "Saline ice" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14.2. Thermal Conductivity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What type of thermal conductivity is used? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Conduction fluxes" # "Conduction and radiation heat fluxes" # "Conduction, radiation and latent heat transport" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14.3. Heat Diffusion Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the method of heat diffusion? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Heat Reservoir" # "Thermal Fixed Salinity" # "Thermal Varying Salinity" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14.4. Basal Heat Flux Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method by which basal ocean heat flux is handled? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 14.5. Fixed Salinity Value Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 14.6. Heat Content Of Precipitation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method by which the heat content of precipitation is handled. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 14.7. 
Precipitation Effects On Salinity Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15. Thermodynamics --&gt; Mass Processes related to mass in sea ice thermodynamics 15.1. New Ice Formation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method by which new sea ice is formed in open water. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15.2. Ice Vertical Growth And Melt Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method that governs the vertical growth and melt of sea ice. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Floe-size dependent (Bitz et al 2001)" # "Virtual thin ice melting (for single-category)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.3. Ice Lateral Melting Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the method of sea ice lateral melting? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15.4. Ice Surface Sublimation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method that governs sea ice surface sublimation. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15.5. Frazil Ice Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method of frazil ice formation. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 16. Thermodynamics --&gt; Salt Processes related to salt in sea ice thermodynamics. 16.1. Has Multiple Sea Ice Salinities Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 16.2. Sea Ice Salinity Thermal Impacts Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does sea ice salinity impact the thermal properties of sea ice? 
End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Prescribed salinity profile" # "Prognostic salinity profile" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17. Thermodynamics --&gt; Salt --&gt; Mass Transport Mass transport of salt 17.1. Salinity Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How is salinity determined in the mass transport of salt calculation? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 17.2. Constant Salinity Value Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If using a constant salinity value specify this value in PSU? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.3. Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the salinity profile used. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Prescribed salinity profile" # "Prognostic salinity profile" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 18. Thermodynamics --&gt; Salt --&gt; Thermodynamics Salt thermodynamics 18.1. Salinity Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How is salinity determined in the thermodynamic calculation? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 18.2. Constant Salinity Value Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If using a constant salinity value specify this value in PSU? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 18.3. Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the salinity profile used. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Explicit" # "Virtual (enhancement of thermal conductivity, thin ice melting)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 19. Thermodynamics --&gt; Ice Thickness Distribution Ice thickness distribution details. 19.1. Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How is the sea ice thickness distribution represented? End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Explicit" # "Parameterised" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 20. Thermodynamics --&gt; Ice Floe Size Distribution Ice floe-size distribution details. 20.1. Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How is the sea ice floe-size represented? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 20.2. Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Please provide further details on any parameterisation of floe-size. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 21. Thermodynamics --&gt; Melt Ponds Characteristics of melt ponds. 21.1. Are Included Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Are melt ponds included in the sea ice model? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Flocco and Feltham (2010)" # "Level-ice melt ponds" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 21.2. Formulation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What method of melt pond formulation is used? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Albedo" # "Freshwater" # "Heat" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 21.3. Impacts Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N What do melt ponds have an impact on? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging') # PROPERTY VALUE(S): # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 22. Thermodynamics --&gt; Snow Processes Thermodynamic processes in snow on sea ice 22.1. Has Snow Aging Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Set to True if the sea ice model has a snow aging scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 22.2. Snow Aging Scheme Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the snow aging scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 22.3. 
Has Snow Ice Formation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Set to True if the sea ice model has snow ice formation. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 22.4. Snow Ice Formation Scheme Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the snow ice formation scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 22.5. Redistribution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the impact of ridging on snow cover? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Single-layered heat diffusion" # "Multi-layered heat diffusion" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 22.6. Heat Diffusion Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the heat diffusion through snow methodology in sea ice thermodynamics? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Delta-Eddington" # "Parameterized" # "Multi-band albedo" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 23. Radiative Processes Sea Ice Radiative Processes 23.1. Surface Albedo Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method used to handle surface albedo. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Delta-Eddington" # "Exponential attenuation" # "Ice radiation transmission per category" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 23.2. Ice Radiation Transmission Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Method by which solar radiation through sea ice is handled. End of explanation
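A minimal illustrative sketch of how two of the TODO cells in the code prompt above might be completed, reusing only the calls the prompt itself shows (DOC.set_id, DOC.set_value). The chosen values ("TEOS-10" and 4) are placeholders for illustration, not the documented FGOALS-F3-L sea-ice configuration.
# Illustrative sketch only: placeholder values, not the real FGOALS-F3-L answers.
# 3.1 Ocean freezing point (ENUM): pass one of the listed valid choices as a string.
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
DOC.set_value("TEOS-10")
# 10.2 Number of vertical layers (INTEGER): pass a bare number.
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
DOC.set_value(4)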
2,006
Given the following text description, write Python code to implement the functionality described below step by step Description: Datasets Datasets tell PHOEBE how and at what times to compute the model. In some cases these will include the actual observational data, and in other cases may only include the times at which you want to compute a synthetic model. Adding a dataset - even if it doesn't contain any observational data - is required in order to compute a synthetic model (which will be described in the following Compute Tutorial). Setup Let's first make sure we have the latest version of PHOEBE 2.2 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release). Step1: As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details. Step2: Adding a Dataset from Arrays To add a dataset, you need to provide the function in phoebe.parameters.dataset for the particular type of data you're dealing with, as well as any of your "observed" arrays. The current available methods include Step3: As you could probably predict by now, add_dataset can either take a function or the name of a function in phoebe.parameters.dataset. The following line would do the same thing (and we'll pass overwrite=True to avoid the error of overwriting dataset='orb01'). Step4: You may notice that add_dataset does take some time to complete. In the background, the passband is being loaded (when applicable) and many parameters are created and attached to the Bundle. If you do not provide a list of component(s), they will be assumed for you based on the dataset method. LCs (light curves) and meshes can only attach at the system level (component=None), for instance, whereas RVs and ORBs can attach for each star. Step5: Here we added an RV dataset and can see that it was automatically created for both stars in our system. Under-the-hood, another entry is created for component='_default'. The default parameters hold the values that will be replicated if a new component is added to the system in the future. In order to see these hidden parameters, you need to pass check_default=False to any filter-type call (and note that '_default' is no longer exposed when calling .components). Also note that for set_value_all, this is automatically set to False. Since we did not explicitly state that we only wanted the primary and secondary components, the time array on '_default' is filled as well. If we were then to add a tertiary component, its RVs would automatically be computed because of this replicated time array. Step6: With Observations Loading datasets with observations is (nearly) as simple. Passing arrays to any of the dataset columns will apply it to all of the same components in which the time will be applied (see the 'Without Observations' section above for more details). This make perfect sense for fluxes in light curves where the time and flux arrays are both at the system level Step7: For datasets which attach to individual components, however, this isn't always the desired behavior. For a single-lined RV where we only attach to one component, everything is as expected. Step8: However, for a double-lined RV we probably don't want to do the following Step9: Instead we want to pass different arrays to the 'rvs@primary' and 'rvs@secondary'. 
This can be done by explicitly stating the components in a dictionary sent to that argument Step10: Alternatively, you could of course not pass the values while calling add_dataset and instead call the set_value method after and explicitly state the components at that time. For more details see the add_dataset API docs. With Passband Options Passband options follow the exact same rules as dataset columns. Sending a single value to the argument will apply it to each component in which the time array is attached (either based on the list of components sent or the defaults from the dataset method). Note that for light curves, in particular, this rule gets slightly bent. The dataset arrays for light curves are attached at the system level, always. The passband-dependent options, however, exist for each star in the system. So, that value will get passed to each star if the component is not explicitly provided. Step11: As you might expect, if you want to pass different values to different components, simply provide them in a dictionary. Step12: Note here that we didn't explicitly override the defaults for '_default', so they used the phoebe-wide defaults. If you wanted to set a value for the ld_coeffs of any star added in the future, you would have to provide a value for '_default' in the dictionary as well. Step13: This syntax may seem a bit bulky - but alternatively you can add the dataset without providing values and then change the values individually using dictionary access or set_value. Adding a Dataset from a File Manually from Arrays For now, the only way to load data from a file is to do the parsing externally and pass the arrays on (as in the previous section). Here we'll load times, fluxes, and errors of a light curve from an external file and then pass them on to a newly created dataset. Since this is a light curve, it will automatically know that you want the summed light from all copmonents in the hierarchy. Step14: Enabling and Disabling Datasets See the Compute Tutorial Dealing with Phases Datasets will no longer accept phases. It is the user's responsibility to convert phased data into times given an ephemeris. But it's still useful to be able to convert times to phases (and vice versa) and be able to plot in phase. Those conversions can be handled via b.get_ephemeris, b.to_phase, and b.to_time. Step15: All of these by default use the period in the top-level of the current hierarchy, but accept a component keyword argument if you'd like the ephemeris of an inner-orbit or the rotational ephemeris of a star in the system. We'll see how plotting works later, but if you manually wanted to plot the dataset with phases, all you'd need to do is Step16: or Step17: Although it isn't possible to attach data in phase-space, it is possible (new in PHOEBE 2.2) to tell PHOEBE at which phases to compute the model by setting compute_phases. Note that this overrides the value of times when the model is computed. Step18: The usage of compute_phases (as well as compute_times) will be discussed in further detail in the compute tutorial and the advanced Step19: Removing Datasets Removing a dataset will remove matching parameters in either the dataset, model, or constraint contexts. This action is permanent and not undo-able via Undo/Redo. Step20: The simplest way to remove a dataset is by its dataset tag Step21: But remove_dataset also takes any other tag(s) that could be sent to filter.
Python Code: !pip install -I "phoebe>=2.2,<2.3" Explanation: Datasets Datasets tell PHOEBE how and at what times to compute the model. In some cases these will include the actual observational data, and in other cases may only include the times at which you want to compute a synthetic model. Adding a dataset - even if it doesn't contain any observational data - is required in order to compute a synthetic model (which will be described in the following Compute Tutorial). Setup Let's first make sure we have the latest version of PHOEBE 2.2 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release). End of explanation import phoebe from phoebe import u # units import numpy as np import matplotlib.pyplot as plt logger = phoebe.logger() b = phoebe.default_binary() Explanation: As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details. End of explanation b.add_dataset(phoebe.dataset.orb, compute_times=np.linspace(0,10,20), dataset='orb01', component=['primary', 'secondary']) Explanation: Adding a Dataset from Arrays To add a dataset, you need to provide the function in phoebe.parameters.dataset for the particular type of data you're dealing with, as well as any of your "observed" arrays. The current available methods include: lc light curves (tutorial) rv radial velocity curves (tutorial) lp spectral line profiles (tutorial) orb orbit/positional data (tutorial) mesh discretized mesh of stars (tutorial) Without Observations The simplest case of adding a dataset is when you do not have observational "data" and only want to compute a synthetic model. Here all you need to provide is an array of times and information about the type of data and how to compute it. This situation will almost always be the case for orbits and meshes - its unlikely that you have observed positions and velocities for each of your components, but you still may like to store that information for plotting or diagnostic purposes. Here we'll do just that - we'll add an orbit dataset which will track the positions and velocities of both our 'primary' and 'secondary' stars (by their component tags) at each of the provided times. NEW in PHOEBE 2.2: Unlike other datasets, the mesh and orb dataset cannot accept actual observations, so there is no times parameter, only the compute_times and compute_phases parameters. For more details on these, see the Compute Times & Phases tutorial. End of explanation b.add_dataset('orb', compute_times=np.linspace(0,10,20), component=['primary', 'secondary'], dataset='orb01', overwrite=True) Explanation: As you could probably predict by now, add_dataset can either take a function or the name of a function in phoebe.parameters.dataset. The following line would do the same thing (and we'll pass overwrite=True to avoid the error of overwriting dataset='orb01'). End of explanation b.add_dataset('rv', times=np.linspace(0,10,20), dataset='rv01') print(b.filter(qualifier='times', dataset='rv01').components) Explanation: You may notice that add_dataset does take some time to complete. In the background, the passband is being loaded (when applicable) and many parameters are created and attached to the Bundle. If you do not provide a list of component(s), they will be assumed for you based on the dataset method. LCs (light curves) and meshes can only attach at the system level (component=None), for instance, whereas RVs and ORBs can attach for each star. 
End of explanation print(b.filter(qualifier='times', dataset='rv01', check_default=False).components) print(b.get('times@_default@rv01', check_default=False)) Explanation: Here we added an RV dataset and can see that it was automatically created for both stars in our system. Under-the-hood, another entry is created for component='_default'. The default parameters hold the values that will be replicated if a new component is added to the system in the future. In order to see these hidden parameters, you need to pass check_default=False to any filter-type call (and note that '_default' is no longer exposed when calling .components). Also note that for set_value_all, this is automatically set to False. Since we did not explicitly state that we only wanted the primary and secondary components, the time array on '_default' is filled as well. If we were then to add a tertiary component, its RVs would automatically be computed because of this replicated time array. End of explanation b.add_dataset('lc', times=[0,1], fluxes=[1,0.5], dataset='lc01') print(b['fluxes@lc01@dataset']) Explanation: With Observations Loading datasets with observations is (nearly) as simple. Passing arrays to any of the dataset columns will apply it to all of the same components in which the time will be applied (see the 'Without Observations' section above for more details). This make perfect sense for fluxes in light curves where the time and flux arrays are both at the system level: End of explanation b.add_dataset('rv', times=[0,1], rvs=[-3,3], component='primary', dataset='rv01', overwrite=True) print(b['rvs@rv01']) Explanation: For datasets which attach to individual components, however, this isn't always the desired behavior. For a single-lined RV where we only attach to one component, everything is as expected. End of explanation b.add_dataset('rv', times=[0,0.5,1], rvs=[-3,3], dataset='rv02') print(b['rvs@rv02']) Explanation: However, for a double-lined RV we probably don't want to do the following: End of explanation b.add_dataset('rv', times=[0,0.5,1], rvs={'primary': [-3,3], 'secondary': [4,-4]}, dataset='rv02', overwrite=True) print(b['rvs@rv02']) Explanation: Instead we want to pass different arrays to the 'rvs@primary' and 'rvs@secondary'. This can be done by explicitly stating the components in a dictionary sent to that argument: End of explanation b.add_dataset('lc', times=[0,1], ld_func='logarithmic', dataset='lc01', overwrite=True) print(b['times@lc01']) print(b['ld_func@lc01']) Explanation: Alternatively, you could of course not pass the values while calling add_dataset and instead call the set_value method after and explicitly state the components at that time. For more details see the add_dataset API docs. With Passband Options Passband options follow the exact same rules as dataset columns. Sending a single value to the argument will apply it to each component in which the time array is attached (either based on the list of components sent or the defaults from the dataset method). Note that for light curves, in particular, this rule gets slightly bent. The dataset arrays for light curves are attached at the system level, always. The passband-dependent options, however, exist for each star in the system. So, that value will get passed to each star if the component is not explicitly provided. 
End of explanation b.add_dataset('lc', times=[0,1], ld_mode='manual', ld_func={'primary': 'logarithmic', 'secondary': 'quadratic'}, dataset='lc01', overwrite=True) print(b['ld_func@lc01']) Explanation: As you might expect, if you want to pass different values to different components, simply provide them in a dictionary. End of explanation print(b.filter('ld_func@lc01', check_default=False)) Explanation: Note here that we didn't explicitly override the defaults for '_default', so they used the phoebe-wide defaults. If you wanted to set a value for the ld_coeffs of any star added in the future, you would have to provide a value for '_default' in the dictionary as well. End of explanation times, fluxes, sigmas = np.loadtxt('test.lc.in', unpack=True) b.add_dataset(phoebe.dataset.lc, times=times, fluxes=fluxes, sigmas=sigmas, dataset='lc01', overwrite=True) Explanation: This syntax may seem a bit bulky - but alternatively you can add the dataset without providing values and then change the values individually using dictionary access or set_value. Adding a Dataset from a File Manually from Arrays For now, the only way to load data from a file is to do the parsing externally and pass the arrays on (as in the previous section). Here we'll load times, fluxes, and errors of a light curve from an external file and then pass them on to a newly created dataset. Since this is a light curve, it will automatically know that you want the summed light from all copmonents in the hierarchy. End of explanation print(b.get_ephemeris()) print(b.to_phase(0.0)) print(b.to_time(-0.25)) Explanation: Enabling and Disabling Datasets See the Compute Tutorial Dealing with Phases Datasets will no longer accept phases. It is the user's responsibility to convert phased data into times given an ephemeris. But it's still useful to be able to convert times to phases (and vice versa) and be able to plot in phase. Those conversions can be handled via b.get_ephemeris, b.to_phase, and b.to_time. End of explanation print(b.to_phase(b['times@primary@rv01'])) Explanation: All of these by default use the period in the top-level of the current hierarchy, but accept a component keyword argument if you'd like the ephemeris of an inner-orbit or the rotational ephemeris of a star in the system. We'll see how plotting works later, but if you manually wanted to plot the dataset with phases, all you'd need to do is: End of explanation print(b.to_phase('times@primary@rv01')) Explanation: or End of explanation b.add_dataset('lc', compute_phases=np.linspace(0,1,11), dataset='lc01', overwrite=True) Explanation: Although it isn't possible to attach data in phase-space, it is possible (new in PHOEBE 2.2) to tell PHOEBE at which phases to compute the model by setting compute_phases. Note that this overrides the value of times when the model is computed. End of explanation b.add_dataset('lc', times=[0], dataset='lc01', overwrite=True) print(b['compute_phases@lc01']) b.flip_constraint('compute_phases', dataset='lc01', solve_for='compute_times') b.set_value('compute_phases', dataset='lc01', value=np.linspace(0,1,101)) Explanation: The usage of compute_phases (as well as compute_times) will be discussed in further detail in the compute tutorial and the advanced: compute times & phases tutorial. Note also that although you can pass compute_phases directly to add_dataset, if you do not, it will be constrained by compute_times by default. In this case, you would need to flip the constraint before setting compute_phases. 
See the constraints tutorial and the flip_constraint API docs for more details on flipping constraints. End of explanation print(b.datasets) Explanation: Removing Datasets Removing a dataset will remove matching parameters in either the dataset, model, or constraint contexts. This action is permanent and not undo-able via Undo/Redo. End of explanation b.remove_dataset('lc01') print(b.datasets) Explanation: The simplest way to remove a dataset is by its dataset tag: End of explanation b.remove_dataset(kind='rv') print(b.datasets) Explanation: But remove_dataset also takes any other tag(s) that could be sent to filter. End of explanation
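As a plain-NumPy footnote to the phase handling above, the sketch below folds an array of observation times with a simple linear ephemeris, which is essentially the arithmetic behind converting times to phases. The 2.5 d period, the zero point, and the [-0.5, 0.5) phase range are illustrative assumptions here, not values read from the bundle; for real work the bundle's own get_ephemeris and to_phase should be used.

import numpy as np

def fold_times(times, period, t0=0.0):
    # Map times onto orbital phase with a linear ephemeris, using the
    # common [-0.5, 0.5) convention for the phase range.
    phases = np.mod((np.asarray(times, dtype=float) - t0) / period, 1.0)
    phases[phases >= 0.5] -= 1.0
    return phases

times = np.linspace(0, 10, 21)            # days, illustrative only
phases = fold_times(times, period=2.5)    # assumed 2.5 d period
order = np.argsort(phases)                # phase-sorted, ready for plotting
print(phases[order])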
2,007
Given the following text description, write Python code to implement the functionality described below step by step Description: Titanic Step1: 2. Loading/Examining the Data <a class="anchor" id="second-bullet"></a> Step2: 3. All the Features! <a class="anchor" id="third-bullet"></a> We will be extracting the features with custom functions. This isn't necessary for all the features for this project, but we want to leave the possibilty for further development open for the future. 3a. Extracting Titles from Names <a class="anchor" id="third-first"></a> While the Name feature itself may not appear to be useful at first glance, we can tease out additional features that may be useful for predicting survival on the Titanic. We will extract a Title from each name, as that carries information about social and marital status (which in turn may relate to survival). Step3: 3b. Treating Missing Ports of Embarkation <a class="anchor" id="third-second"></a> Next, let's see if there are any rows that are missing ports of embarkation. Step4: We have two passengers in the training set that are missing ports of embarkation, while we are not missing any in the test set. <br> The features which may allow us to assign a port of embarkation based on the data that we do have are Pclass, Fare, and Cabin. However, since we are missing so much of the Cabin column (more on that later), let's focus in on the other two. Step5: Although Southampton was the most popular port of embarkation, there was a greater fraction of passengers in the first ticket class from Cherbourg who paid 80.00 for their tickets. Therefore, we will assign 'C' to the missing values for Embarked. We will also recast Embarked as a numerical feature. Step6: 3c. Handling Missing Fares <a class="anchor" id="third-third"></a> We will perform a similar analysis to see if there are any missing fares. Step7: This time, the Fare column in the training set is complete, but we are missing that information for one passenger in the test set. Since we do have PClass and Embarked, however, we will assign a fare based on the distribution of fares for those particular values of PClass and Embarked. Step8: After examining the distribution of Fare restricted to the specified values of Pclass and Fare, we will use the median for the missing fare (as it falls very close the fare corresponding to the peak of the distribution). Step9: 3d. Cabin Number Step10: What can we do with this data (rather, the lack thereof)? For now, let's just pull out the first letter of each cabin number (including NaNs), cast them as numbers, and hope they improve the performance of our classifier. Step11: 3e. Quick Fixes <a class="anchor" id="third-fifth"></a> Prior to the last step (which is arguably the largest one), we need to tie up a few remaining loose ends Step12: 3f. Imputing Missing Ages <a class="anchor" id="third-sixth"></a> It is expected that age will be an important feature; however, a number of ages are missing. Attempting to predict the ages with a simple model was not very successful. We decided to follow the recommendations for imputation from this article. Step13: This paper provides an introduction to the MICE method with a focus on practical aspects and challenges in using this method. We have chosen to use the MICE implementation from fancyimpute. Step15: 4. Interaction Terms <a class="anchor" id="fourth-bullet"></a> We will employ the PolynomialFeatures function to obtain all the possible combinations of features at second order. QUICK EXPLAINER WITH SOME MATH HERE
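Before the full solution below, here is a small warm-up showing what the title-extraction regex in that solution pulls out of a name string. The three names are only examples written in the same "Last, Title. First" layout as the passenger manifest; nothing here reads the actual train.csv.

import pandas as pd

# Example names in the same "Last, Title. First" layout as the manifest.
names = pd.Series([
    "Braund, Mr. Owen Harris",
    "Heikkinen, Miss. Laina",
    "Oliva y Ocana, Dona. Fermina",
])

# The same pattern the solution uses: a word followed by a period, after a space.
titles = names.str.extract(r' ([A-Za-z]+)\.', expand=False)
print(titles.tolist())   # ['Mr', 'Miss', 'Dona']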
Python Code: # data analysis and wrangling import pandas as pd import numpy as np import scipy # visualization import matplotlib.pyplot as plt import seaborn as sns # machine learning from sklearn.svm import SVC from sklearn import preprocessing import fancyimpute from sklearn.model_selection import train_test_split from sklearn.model_selection import RandomizedSearchCV from sklearn.metrics import classification_report from sklearn.model_selection import cross_val_score from sklearn.feature_selection import SelectKBest from sklearn.feature_selection import f_classif %matplotlib inline Explanation: Titanic: Revisiting Disaster An Exploration into the Data using Python Data Science on the Hill (Michael Hoffman and Charlies Bonfield) 1. Introduction <a class="anchor" id="first-bullet"></a> From our previous work on this dataset, it seems like the best way forward is to include new features with explanatory power. The first way to do that would be to use more sophisticated parsing of the available data (particulary the name feature) and extract new features from that. Secondly, we can include interactions up to a specified order -increasing the features in a systematic way. We will explore the second method in this revistation of the titanic dataset. Skip to section 4. if you have already seen the previous work. End of explanation # Load the data. training_data = pd.read_csv('train.csv') test_data = pd.read_csv('test.csv') # Examine the first few rows of data in the training set. training_data.head() Explanation: 2. Loading/Examining the Data <a class="anchor" id="second-bullet"></a> End of explanation # Extract title from names, then assign to one of five classes. # Function based on code from: https://www.kaggle.com/startupsci/titanic/titanic-data-science-solutions def add_title(data): data['Title'] = data.Name.str.extract(' ([A-Za-z]+)\.', expand=False) data.Title = data.Title.replace(['Lady', 'Countess','Capt', 'Col','Don', 'Dr', 'Major', 'Rev', 'Sir', 'Jonkheer', 'Dona'], 'Rare') data.Title = data.Title.replace('Mlle', 'Miss') data.Title = data.Title.replace('Ms', 'Miss') data.Title = data.Title.replace('Mme', 'Mrs') # Map from strings to numerical variables. title_mapping = {"Mr": 1, "Miss": 2, "Mrs": 3, "Master": 4, "Rare": 5} data.Title = data.Title.map(title_mapping) data.Title = data.Title.fillna(0) return data Explanation: 3. All the Features! <a class="anchor" id="third-bullet"></a> We will be extracting the features with custom functions. This isn't necessary for all the features for this project, but we want to leave the possibilty for further development open for the future. 3a. Extracting Titles from Names <a class="anchor" id="third-first"></a> While the Name feature itself may not appear to be useful at first glance, we can tease out additional features that may be useful for predicting survival on the Titanic. We will extract a Title from each name, as that carries information about social and marital status (which in turn may relate to survival). End of explanation missing_emb_training = training_data[pd.isnull(training_data.Embarked) == True] missing_emb_test = test_data[pd.isnull(test_data.Embarked) == True] missing_emb_training.head() missing_emb_test.head() Explanation: 3b. Treating Missing Ports of Embarkation <a class="anchor" id="third-second"></a> Next, let's see if there are any rows that are missing ports of embarkation. 
End of explanation grid = sns.FacetGrid(training_data[training_data.Pclass == 1], col='Embarked', size=2.2, aspect=1.6) grid.map(plt.hist, 'Fare', alpha=.5, bins=20) grid.map(plt.axvline, x=80.0, color='red', ls='dashed') grid.add_legend(); Explanation: We have two passengers in the training set that are missing ports of embarkation, while we are not missing any in the test set. <br> The features which may allow us to assign a port of embarkation based on the data that we do have are Pclass, Fare, and Cabin. However, since we are missing so much of the Cabin column (more on that later), let's focus in on the other two. End of explanation # Recast port of departure as numerical feature. def simplify_embark(data): # Two missing values, assign Cherbourg as port of departure. data.Embarked = data.Embarked.fillna('C') le = preprocessing.LabelEncoder().fit(data.Embarked) data.Embarked = le.transform(data.Embarked) return data Explanation: Although Southampton was the most popular port of embarkation, there was a greater fraction of passengers in the first ticket class from Cherbourg who paid 80.00 for their tickets. Therefore, we will assign 'C' to the missing values for Embarked. We will also recast Embarked as a numerical feature. End of explanation missing_fare_training = training_data[np.isnan(training_data['Fare'])] missing_fare_test = test_data[np.isnan(test_data['Fare'])] missing_fare_training.head() missing_fare_test.head() Explanation: 3c. Handling Missing Fares <a class="anchor" id="third-third"></a> We will perform a similar analysis to see if there are any missing fares. End of explanation restricted_training = training_data[(training_data.Pclass == 3) & (training_data.Embarked == 'S')] restricted_test = test_data[(test_data.Pclass == 3) & (test_data.Embarked == 'S')] restricted_test = restricted_test[~np.isnan(restricted_test.Fare)] # Leave out poor Mr. Storey combine = [restricted_training, restricted_test] combine = pd.concat(combine) # Find median fare, plot over resulting distribution. fare_med = np.median(combine.Fare) sns.kdeplot(combine.Fare, shade=True) plt.axvline(fare_med, color='r', ls='dashed', lw='1', label='Median') plt.legend(); Explanation: This time, the Fare column in the training set is complete, but we are missing that information for one passenger in the test set. Since we do have PClass and Embarked, however, we will assign a fare based on the distribution of fares for those particular values of PClass and Embarked. End of explanation test_data['Fare'] = test_data['Fare'].fillna(fare_med) Explanation: After examining the distribution of Fare restricted to the specified values of Pclass and Fare, we will use the median for the missing fare (as it falls very close the fare corresponding to the peak of the distribution). End of explanation missing_cabin_training = np.size(training_data.Cabin[pd.isnull(training_data.Cabin) == True]) / np.size(training_data.Cabin) * 100.0 missing_cabin_test = np.size(test_data.Cabin[pd.isnull(test_data.Cabin) == True]) / np.size(test_data.Cabin) * 100.0 print('Percentage of Missing Cabin Numbers (Training): %0.1f' % missing_cabin_training) print('Percentage of Missing Cabin Numbers (Test): %0.1f' % missing_cabin_test) Explanation: 3d. Cabin Number: Relevant or Not? 
<a class="anchor" id="third-fourth"></a> When we first encountered the data, we figured that Cabin would be one of the most important features in predicting survival, as it would not be unreasonable to think of it as a proxy for a passenger's position on the Titanic relative to the lifeboats (distance to deck, distance to nearest stairwell, social class, etc.). Unfortunately, much of this data is missing: End of explanation ## Set of functions to transform features into more convenient format. # # Code performs three separate tasks: # (1). Pull out the first letter of the cabin feature. # Code taken from: https://www.kaggle.com/jeffd23/titanic/scikit-learn-ml-from-start-to-finish # (2). Recasts cabin feature as number. def simplify_cabins(data): data.Cabin = data.Cabin.fillna('N') data.Cabin = data.Cabin.apply(lambda x: x[0]) #cabin_mapping = {'N': 0, 'A': 1, 'B': 1, 'C': 1, 'D': 1, 'E': 1, # 'F': 1, 'G': 1, 'T': 1} #data['Cabin_Known'] = data.Cabin.map(cabin_mapping) le = preprocessing.LabelEncoder().fit(data.Cabin) data.Cabin = le.transform(data.Cabin) return data Explanation: What can we do with this data (rather, the lack thereof)? For now, let's just pull out the first letter of each cabin number (including NaNs), cast them as numbers, and hope they improve the performance of our classifier. End of explanation # Recast sex as numerical feature. def simplify_sex(data): sex_mapping = {'male': 0, 'female': 1} data.Sex = data.Sex.map(sex_mapping).astype(int) return data # Drop all unwanted features (name, ticket). def drop_features(data): return data.drop(['Name','Ticket'], axis=1) # Perform all feature transformations. def transform_all(data): data = add_title(data) data = simplify_embark(data) data = simplify_cabins(data) data = simplify_sex(data) data = drop_features(data) return data training_data = transform_all(training_data) test_data = transform_all(test_data) all_data = [training_data, test_data] combined_data = pd.concat(all_data) # Inspect data. combined_data.head() Explanation: 3e. Quick Fixes <a class="anchor" id="third-fifth"></a> Prior to the last step (which is arguably the largest one), we need to tie up a few remaining loose ends: - Recast Sex as numerical feature. - Drop unwanted features. - Name: We've taken out the information that we need (Title). - Ticket: There appears to be no rhyme or reason to the data in this column, so we remove it from our analysis. - Combine training/test data prior to age imputation. End of explanation null_ages = pd.isnull(combined_data.Age) known_ages = pd.notnull(combined_data.Age) initial_dist = combined_data.Age[known_ages] Explanation: 3f. Imputing Missing Ages <a class="anchor" id="third-sixth"></a> It is expected that age will be an important feature; however, a number of ages are missing. Attempting to predict the ages with a simple model was not very successful. We decided to follow the recommendations for imputation from this article. End of explanation def impute_ages(data): drop_survived = data.drop(['Survived'], axis=1) column_titles = list(drop_survived) mice_results = fancyimpute.MICE().complete(np.array(drop_survived)) results = pd.DataFrame(mice_results, columns=column_titles) results['Survived'] = list(data['Survived']) return results complete_data = impute_ages(combined_data) complete_data.Age = complete_data.Age[~(complete_data.Age).index.duplicated(keep='first')] Explanation: This paper provides an introduction to the MICE method with a focus on practical aspects and challenges in using this method. 
We have chosen to use the MICE implementation from fancyimpute. End of explanation # Transform age and fare data to have mean zero and variance 1.0. scaler = preprocessing.StandardScaler() select = 'Age Fare'.split() complete_data[select] = scaler.fit_transform(complete_data[select]) training_data = complete_data[:891] test_data = complete_data[891:].drop('Survived', axis=1) # drop uninformative data and the target feature droplist = 'Survived PassengerId'.split() data = training_data.drop(droplist, axis=1) # Define features and target values X, y = data, training_data['Survived'] # generate the polynomial features poly = preprocessing.PolynomialFeatures(2) X = pd.DataFrame(poly.fit_transform(X)).drop(0, axis=1) # feature selection features = SelectKBest(f_classif, k=12).fit(X,y) # print(sorted(list(zip(features.scores_, X.columns)), reverse=True)) X_new = pd.DataFrame(features.transform(X)) X_new.head() # ---------------------------------- # Split the data X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0) # Support Vector Machines # # # Set the parameters by cross-validation # param_dist = {'C': scipy.stats.uniform(0.1, 1000), 'gamma': scipy.stats.uniform(.001, 1.0), # 'kernel': ['rbf'], 'class_weight':['balanced', None]} # # clf = SVC() # # # run randomized search # n_iter_search = 10000 # random_search = RandomizedSearchCV(clf, param_distributions=param_dist, # n_iter=n_iter_search, n_jobs=-1, cv=4) # # start = time() # random_search.fit(X, y) # print("RandomizedSearchCV took %.2f seconds for %d candidates" # " parameter settings." % ((time() - start), n_iter_search)) # report(random_search.cv_results_) # exit() RandomizedSearchCV took 4851.48 seconds for 10000 candidates parameter settings. Model with rank: 1 Mean validation score: 0.833 (std: 0.013) Parameters: {'kernel': 'rbf', 'C': 107.54222939713921, 'gamma': 0.013379109762586716, 'class_weight': None} Model with rank: 2 Mean validation score: 0.832 (std: 0.012) Parameters: {'kernel': 'rbf', 'C': 154.85033872208422, 'gamma': 0.010852578446979289, 'class_weight': None} Model with rank: 2 Mean validation score: 0.832 (std: 0.012) Parameters: {'kernel': 'rbf', 'C': 142.60506747360913, 'gamma': 0.011625955252680842, 'class_weight': None} params = {'kernel': 'rbf', 'C': 107.54222939713921, 'gamma': 0.013379109762586716, 'class_weight': None} clf = SVC(**params) scores = cross_val_score(clf, X, y, cv=4, n_jobs=-1) print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2)) droplist = 'PassengerId'.split() clf.fit(X,y) predictions = clf.predict(test_data.drop(droplist, axis=1)) #print(predictions) print('Predicted Number of Survivors: %d' % int(np.sum(predictions))) # output .csv for upload # submission = pd.DataFrame({ # "PassengerId": test_data['PassengerId'].astype(int), # "Survived": predictions.astype(int) # }) # # submission.to_csv('../submission.csv', index=False) Explanation: 4. Interaction Terms <a class="anchor" id="fourth-bullet"></a> We will employ the PolynomialFeatures function to obtain all the possible combinations of features at second order. QUICK EXPLAINER WITH SOME MATH HERE End of explanation
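As a small companion to the interaction-term step above, the sketch below runs PolynomialFeatures(degree=2) on a two-column toy array so the generated columns are easy to read off. The toy values merely stand in for scaled columns such as Age and Fare; the real pipeline works on the full feature matrix.

import numpy as np
from sklearn.preprocessing import PolynomialFeatures

# Two toy features standing in for scaled columns such as Age and Fare.
X = np.array([[1.0, 2.0],
              [3.0, 4.0]])

poly = PolynomialFeatures(degree=2)
X2 = poly.fit_transform(X)

# powers_[i] holds the exponent of each input feature in output column i,
# so [1, 1] is the interaction term x1 * x2 and [2, 0] is x1 squared.
print(poly.powers_)
print(X2)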
2,008
Given the following text description, write Python code to implement the functionality described below step by step Description: <a href="https Step1: Data Step2: Markov chain Monte Carlo (MCMC) We set up the model in numpyro and run MCMC. Note that the log_rate parameter doesn't have the obs=... argument set, since it is latent. Step3: We can summarize the MCMC results by plotting our inferred model (here we're showing the 1- and 2-sigma credible regions), and compare it to the known ground truth Step4: Stochastic variational inference (SVI) For larger datasets, it is faster to use stochastic variational inference (SVI) instead of MCMC. Step5: As above, we can plot our inferred conditional model and compare it to the ground truth
Python Code: try: import tinygp except ImportError: %pip install -q tinygp try: import numpyro except ImportError: # It is much faster to use CPU than GPU. # This is because Colab has multiple CPU cores, so can run the 2 MCMC chains in parallel %pip uninstall -y jax jaxlib %pip install -q numpyro jax jaxlib #%pip install numpyro[cuda] -f https://storage.googleapis.com/jax-releases/jax_releases.html try: import arviz except ImportError: %pip install arviz Explanation: <a href="https://colab.research.google.com/github/probml/probml-notebooks/blob/main/notebooks/gp_poisson_1d.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> GP with a Poisson Likelihood https://tinygp.readthedocs.io/en/latest/tutorials/likelihoods.html We use the tinygp library to define the model, and the numpyro library to do inference, using either MCMC or SVI. End of explanation import numpy as np import matplotlib.pyplot as plt random = np.random.default_rng(203618) x = np.linspace(-3, 3, 20) true_log_rate = 2 * np.cos(2 * x) y = random.poisson(np.exp(true_log_rate)) plt.plot(x, y, ".k", label="data") plt.plot(x, np.exp(true_log_rate), "C1", label="true rate") plt.legend(loc=2) plt.xlabel("x") _ = plt.ylabel("counts") plt.savefig("gp-poisson-data.pdf") Explanation: Data End of explanation import jax import jax.numpy as jnp try: import numpyro except ModuleNotFoundError: %pip install -qq numpyro import numpyro import numpyro.distributions as dist try: from tinygp import kernels, GaussianProcess except ModuleNotFoundError: %pip install -qq tinygp from tinygp import kernels, GaussianProcess # We'll enable float64 support here for better numerical performance from jax.config import config config.update("jax_enable_x64", True) def model(x, y=None): # The parameters of the GP model mean = numpyro.sample("mean", dist.Normal(0.0, 2.0)) sigma = numpyro.sample("sigma", dist.HalfNormal(3.0)) rho = numpyro.sample("rho", dist.HalfNormal(10.0)) # Set up the kernel and GP objects kernel = sigma**2 * kernels.Matern52(rho) gp = GaussianProcess(kernel, x, diag=1e-5, mean=mean) # This parameter has shape (num_data,) and it encodes our beliefs about # the process rate in each bin log_rate = numpyro.sample("log_rate", gp.numpyro_dist()) # Finally, our observation model is Poisson numpyro.sample("obs", dist.Poisson(jnp.exp(log_rate)), obs=y) # Run the MCMC nuts_kernel = numpyro.infer.NUTS(model, target_accept_prob=0.9) mcmc = numpyro.infer.MCMC( nuts_kernel, num_warmup=500, num_samples=500, num_chains=2, progress_bar=False, ) rng_key = jax.random.PRNGKey(55873) mcmc.run(rng_key, x, y=y) samples = mcmc.get_samples() Explanation: Markov chain Monte Carlo (MCMC) We set up the model in numpyro and run MCMC. Note that the log_rate parameter doesn't have the obs=... argument set, since it is latent. 
End of explanation q = np.percentile(samples["log_rate"], [5, 25, 50, 75, 95], axis=0) plt.plot(x, np.exp(q[2]), color="C0", label="MCMC inferred rate") plt.fill_between(x, np.exp(q[0]), np.exp(q[-1]), alpha=0.3, lw=0, color="C0") plt.fill_between(x, np.exp(q[1]), np.exp(q[-2]), alpha=0.3, lw=0, color="C0") plt.plot(x, np.exp(true_log_rate), "--", color="C1", label="true rate") plt.plot(x, y, ".k", label="data") plt.legend(loc=2) plt.xlabel("x") _ = plt.ylabel("counts") plt.savefig("gp-poisson-mcmc.pdf") Explanation: We can summarize the MCMC results by plotting our inferred model (here we're showing the 1- and 2-sigma credible regions), and compare it to the known ground truth: End of explanation def guide(x, y=None): numpyro.param("mean", jnp.zeros(())) numpyro.param("sigma", jnp.ones(()), constraint=dist.constraints.positive) numpyro.param("rho", 2 * jnp.ones(()), constraint=dist.constraints.positive) mu = numpyro.param("log_rate_mu", jnp.zeros_like(x) if y is None else jnp.log(y + 1)) sigma = numpyro.param( "log_rate_sigma", jnp.ones_like(x), constraint=dist.constraints.positive, ) numpyro.sample("log_rate", dist.Normal(mu, sigma)) optim = numpyro.optim.Adam(0.01) svi = numpyro.infer.SVI(model, guide, optim, numpyro.infer.Trace_ELBO(10)) results = svi.run(jax.random.PRNGKey(55873), 3000, x, y=y, progress_bar=False) Explanation: Stochastic variational inference (SVI) For larger datasets, it is faster to use stochastic variational inference (SVI) instead of MCMC. End of explanation mu = results.params["log_rate_mu"] sigma = results.params["log_rate_sigma"] plt.plot(x, np.exp(mu), color="C0", label="VI inferred rate") plt.fill_between( x, np.exp(mu - 2 * sigma), np.exp(mu + 2 * sigma), alpha=0.3, lw=0, color="C0", ) plt.fill_between(x, np.exp(mu - sigma), np.exp(mu + sigma), alpha=0.3, lw=0, color="C0") plt.plot(x, np.exp(true_log_rate), "--", color="C1", label="true rate") plt.plot(x, y, ".k", label="data") plt.legend(loc=2) plt.xlabel("x") _ = plt.ylabel("counts") plt.savefig("gp-poisson-svi.pdf") Explanation: As above, we can plot our inferred conditional model and compare it to the ground truth: End of explanation
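For reference, here is a minimal sketch of the Matern-5/2 covariance that the term sigma**2 * kernels.Matern52(rho) represents, written in plain NumPy. It assumes a unit-variance Matern-5/2 kernel with a single length scale rho; tinygp's internal conventions may differ in detail, so this is only meant to show the shape of the covariance, not to replace the library call.

import numpy as np

def matern52(r, rho):
    # Unit-variance Matern-5/2 covariance as a function of separation r.
    z = np.sqrt(5.0) * np.abs(r) / rho
    return (1.0 + z + z ** 2 / 3.0) * np.exp(-z)

x = np.linspace(-3, 3, 20)
r = x[:, None] - x[None, :]            # pairwise separations
K = 0.5 ** 2 * matern52(r, rho=1.0)    # sigma = 0.5, rho = 1.0, both illustrative
print(K.shape, np.allclose(K, K.T))    # a symmetric covariance matrix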
2,009
Given the following text description, write Python code to implement the functionality described below step by step
Description:
Goal One
Ingest a csv file as pure text... (just 500 chars)
Step1: Then as a list of lines... (just one line)
Step2: Then as a data frame... (just Avatar)
Step3: Goal Two
Right now, the file is in a 'narrow' format. In other words, several interesting bits are collapsed into a single field. Let's attempt to make the data frame a 'wide' format, with all the collapsed items expanded horizontally.
References
Step4: Goal Three
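To make the 'narrow' versus 'wide' idea concrete before the full solution, here is a toy frame with two made-up movies whose genres are packed into one JSON string per row, expanded into one indicator column per genre. The row contents are invented; only the {"id": ..., "name": ...} layout mirrors the real file.

import json
import pandas as pd

# Two invented rows whose 'genres' cell packs several values into one JSON
# string, which is what makes the raw table 'narrow'.
toy = pd.DataFrame({
    "title": ["Movie A", "Movie B"],
    "genres": ['[{"id": 1, "name": "Action"}, {"id": 2, "name": "Drama"}]',
               '[{"id": 2, "name": "Drama"}]'],
})

toy["genres"] = toy["genres"].apply(json.loads)
names = toy["genres"].apply(lambda lst: [g["name"] for g in lst])

# One indicator column per genre gives the 'wide' layout.
wide = toy.join(pd.get_dummies(names.explode()).groupby(level=0).max())
print(wide)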
Python Code: with open('tmdb_5000_movies.csv','r') as f: rtext='' for line in f: rtext += line rtext[:500] Explanation: Goal One Injest a csv file as pure text... (just 500 chars) End of explanation with open('tmdb_5000_movies.csv','r') as f: lines = [line for line in f] lines[0] Explanation: Then as a list of lines... (just one line) End of explanation import pandas as pd df = pd.read_csv("tmdb_5000_movies.csv") df.query('id == 19995') Explanation: Then as a data frame... (just Avatar) End of explanation import json import pandas as pd import numpy as np df = pd.read_csv("tmdb_5000_movies.csv") #convert to json json_columns = ['genres', 'keywords', 'production_countries', 'production_companies', 'spoken_languages'] for column in json_columns: df[column] = df[column].apply(json.loads) def get_unique_inner_json(feature): tmp = [] for i, row in df[feature].iteritems(): for x in range(0,len(df[feature].iloc[i])): tmp.append(df[feature].iloc[i][x]['name']) unique_values = set(tmp) return unique_values def widen_data(df, feature): unique_json = get_unique_inner_json(feature) tmp = [] #rearrange genres for i, row in df.iterrows(): for x in range(0,len(row[feature])): for val in unique_json: if row[feature][x]['name'] == val: row[val] = 1 tmp.append(row) new_df = pd.DataFrame(tmp) new_df[list(unique_json)] = new_df[list(unique_json)].fillna(value=0) return new_df genres_arranged_df = widen_data(df, "genres") genres_arranged_df[list(get_unique_inner_json("genres"))] = genres_arranged_df[list(get_unique_inner_json("genres"))].astype(int) genres_arranged_df.query('title == "Avatar"') Explanation: Goal Two Right now, the file is in a 'narrow' format. In other words, several interesting bits are collapsed into a single field. Let's attempt to make the data frame a 'wide' format. All the collapsed items expanded horizontally. References: https://www.kaggle.com/fabiendaniel/film-recommendation-engine http://www.jeannicholashould.com/tidy-data-in-python.html End of explanation genres_long_df = pd.melt(genres_arranged_df, id_vars=df.columns, value_vars=get_unique_inner_json("genres"), var_name="genre", value_name="genre_val") genres_long_df = genres_long_df[genres_long_df['genre_val'] == 1] genres_long_df.query('title == "Avatar"') Explanation: Goal Three End of explanation
2,010
Given the following text description, write Python code to implement the functionality described below step by step Description: Random forest Out-of-bag score Feature importances Линейные классификаторы $$a(x) = sign(\left<w^Tx\right> - w_0)$$ Step1: Градиентный спуск¶ $$M_i(w, w_0) = y_i(\left<x, w\right> - w_0)$$ $$\sum_{i=1}^l \mathscr{L}(M(x_i)) \to min$$
Python Code: def get_grid(data, step=0.1): x_min, x_max = data.x.min() - 1, data.x.max() + 1 y_min, y_max = data.y.min() - 1, data.y.max() + 1 return np.meshgrid(np.arange(x_min, x_max, step), np.arange(y_min, y_max, step)) from sklearn.cross_validation import cross_val_score def get_score(X, y, cl): return cross_val_score(cl, X, y, cv=5, scoring='mean_squared_error').mean() def plot_linear_border(cl, X, plot, borders=1): x_limits = (np.min(X.x) - borders, np.max(X.x) + borders) y_limits = (np.min(X.y) - borders, np.max(X.y) + borders) line_x = np.linspace(*x_limits, num=2) line_y = (-line_x * cl.coef_[0, 0] - cl.intercept_) / cl.coef_[0, 1] plot.plot(line_x, line_y, c='r', lw=2) plot.fill_between(line_x, line_y, -100, color='r') plot.fill_between(line_x, line_y, 100, color='yellow') plot.autoscale(tight=True) plot.set_xlim(*x_limits) plot.set_ylim(*y_limits) def show_classifier(X, y, cl, feature_modifier=lambda x: x, proba=True, print_score=False, borders=1): fig, ax = plt.subplots(1, 1) xys = c_[ravel(xs), ravel(ys)] cl.fit(feature_modifier(X), y) if print_score: print("MSE = {}".format(get_score(feature_modifier(X), y, cl))) if proba: predicted = cl.predict_proba(feature_modifier(pd.DataFrame(xys, columns=('x', 'y'))))[:,1].reshape(xs.shape) else: predicted = cl.predict(feature_modifier(pd.DataFrame(xys, columns=('x', 'y')))).reshape(xs.shape) plot_linear_border(cl, X, ax, borders=borders) ax.scatter(X.x, X.y, c=y, **scatter_args) return cl n = 200 random = np.random.RandomState(17) df1 = pd.DataFrame(data=random.multivariate_normal((0,0), [[1, 0.3], [0.3, 0.7]], n), columns=['x', 'y']) df1['target'] = 0 df2 = pd.DataFrame(data=random.multivariate_normal((1,2), [[1, -0.5], [-0.5, 1.6]], n), columns=['x', 'y']) df2['target'] = 1 data = pd.concat([df1, df2], ignore_index=True) features = data[['x', 'y']] data.plot(kind='scatter', x='x', y='y', c='target', colormap='autumn', alpha=0.75, colorbar=False); from sklearn.svm import LinearSVC big_grid = get_grid(features, 0.1) show_classifier(features, data.target, LinearSVC(), proba=False); Explanation: Random forest Out-of-bag score Feature importances Линейные классификаторы $$a(x) = sign(\left<w^Tx\right> - w_0)$$ End of explanation from sklearn.linear_model import SGDClassifier random = np.random.RandomState(11) n_iters = 20 figure(figsize=(10, 8 * n_iters)) xys = c_[ravel(xs), ravel(ys)] clf = SGDClassifier(alpha=1, l1_ratio=0) train_objects = data.ix[random.choice(data.index, n_iters)] for iteration in range(n_iters): new_object = train_objects.iloc[iteration] clf = clf.partial_fit([new_object[['x', 'y']]], [new_object.target], classes=[0, 1]) ax = subplot(n_iters, 1, iteration + 1) title("objets count = {}".format(iteration + 1)) predicted = clf.predict(xys).reshape(xs.shape) plot_linear_border(clf, features, ax) processed_objects = train_objects.head(iteration + 1) scatter(processed_objects.x, processed_objects.y, c=processed_objects.target, alpha=0.5, **scatter_args) scatter(new_object.x, new_object.y, marker='x', lw='20') Explanation: Градиентный спуск¶ $$M_i(w, w_0) = y_i(\left<x, w\right> - w_0)$$ $$\sum_{i=1}^l \mathscr{L}(M(x_i)) \to min$$ End of explanation
2,011
Given the following text description, write Python code to implement the functionality described below step by step Description: EECS 445 - Introduction to Machine Learning Lecture 2 Step1: TODAY Step2: Basic matrix multiplication Step3: Matrix Transpose The transpose $A^T$ of a matrix $A$ is what you get from "swapping" rows and columns $$A \in \mathbb{R}^{n \times m} \implies A^T \in \mathbb{R}^{m \times n}$$ $$(A^T){i,j} Step4: A matrix $A$ is symmetric if we have $A^\top = A$ Step5: Some easy ways to get a symmetric matrix Step6: Determinant + Trace The determinant of a square matrix $A$, denoted $|A|$, has the following recursive structure Step7: How to solve Ax=&#955;x $$ A\underline { x } =\lambda \underline { x } \ \left( A-\lambda I \right) \underline { x } =\underline { 0 } $$ The only solution to this equation is for A-&#955;I to be singular and therefor have a determinant of zero $$ \left|{A}-\lambda{I}\right|=0 $$ $|A-\lambda I|$ is a polynomial of the variable $\lambda$ and is called the characteristic polynomial of $A$. The eigenvalues are the roots of the equation of $ \left|{A}-\lambda{I}\right|=0 $. They may be complex numbers. There will be n $\lambda$'s for an $n\times n$ matrix (some of which may be of equal value) Given an eigenvalue $\lambda$, its eigenvectors are the null space of $A-\lambda I$. Example eigenvalue problem Step8: This will have the following (symbolic) determinant polynomial Step9: Eigenvalue example We can solve the polynomial $\lambda^2 - 6\lambda + 8$ with python Step10: I now have two eigenvalues of 2 and 4 I can get the two eigenvectors, $x_1$ and $x_2$, by solving $$ (A - 2I)x_1 = 0 \text{ and } (A - 4I)x_2 = 0 $$ I need to find a vector in the null space of $A - 2I$ and $A - 4I$. Getting eigenvalues/vectors using numpy Step11: Trace related to eigenvals Let $A$ be a matrix whose eigenvalues are $\lambda_1, \ldots, \lambda_n$ Then we have the trace of satisfying $$\text{tr}(A) = \sum_{i=1}^n \lambda_i$$ Step12: Determinant related to eigenvals Let $A$ be a matrix whose eigenvalues are $\lambda_1, \ldots, \lambda_n$ Then we have the trace of satisfying $$|A| = \prod_{i=1}^n \lambda_i$$ Step13: Singular Value Decomposition Any matrix (symmetric, non-symmetric, etc.) $A \in \mathbb{R}^{n\times m}$ admits a singular value decomposition (SVD) The decomposition has three factors, $U \in \mathbb{R}^{n \times n}$, $\Sigma \in \mathbb{R}^{n \times m}$, and $V \in \mathbb{R}^{m \times m}$ $$A = U \Sigma V^\top$$ $U$ and $V$ are both orthonormal matrices, and $\Sigma$ is diagonal SVD Example Step14: Let's show Sigma from the SVD output Step15: And we can show the orthonormal bases $U$ and $V$ Step16: Properties of the SVD $$\text{SVD Step17: Wikipedia
Python Code: from IPython.core.display import HTML, Image from IPython.display import YouTubeVideo from sympy import init_printing, Matrix, symbols, Rational import sympy as sym from warnings import filterwarnings init_printing(use_latex = 'mathjax') filterwarnings('ignore') %pylab inline import numpy as np Explanation: EECS 445 - Introduction to Machine Learning Lecture 2: Linear Algebra and Optimization Date: September 12, 2016 Instructor: Jacob Abernethy and Jia Deng End of explanation a11, a12, a13, a21, a22, a23, a31, a32, a33, b11, b12, b13, b21, b22, b23, b31, b32, b33 = symbols('a11 a12 a13 a21 a22 a23 a31 a32 a33 b11 b12 b13 b21 b22 b23 b31 b32 b33') Explanation: TODAY: Fast overview of linear algebra + convexity we're going to do: * Vectors and norms * Matrices * Positive definite matrices * Eigendecomposition * Singular Value Decomposition End of explanation A = Matrix([[a11, a12, a13], [a21, a22, a23]]) B = Matrix([[b11, b12], [b21, b22], [b31, b32]]) A, B C = A * B C A = Matrix([[3, 1], [1, 3]]) B = Matrix ([[1, 2], [1,4]]) C1 = A * B C2 = B * A C1, C2 Explanation: Basic matrix multiplication End of explanation A = Matrix ([[1, 2], [3,4], [5,6]]) B = Matrix ([[1, 2], [3,4]]) A, A.transpose(), B, B.transpose() Explanation: Matrix Transpose The transpose $A^T$ of a matrix $A$ is what you get from "swapping" rows and columns $$A \in \mathbb{R}^{n \times m} \implies A^T \in \mathbb{R}^{m \times n}$$ $$(A^T){i,j} := A{j,i}$$ End of explanation A = Matrix ([[1, 2], [2,1]]) A, A.transpose() Explanation: A matrix $A$ is symmetric if we have $A^\top = A$ End of explanation X = np.array([[3, 1], [1, 3]]) Matrix(X) # putting Matrix() around X is just for pretty printing Xinv = np.linalg.inv(X) Matrix(Xinv), Matrix(Xinv.dot(X)), Matrix(X.dot(Xinv)) # Should give the identity matrix Explanation: Some easy ways to get a symmetric matrix: $$A + A^\top, \quad A A^\top, \quad A^\top A $$ Transpose properties Obvious properties of the transpose: $(A + B)^T = A^T + B^T$ $(AB)^T = A^T B^T$ (......right?) No! Careful! $(AB)^T = B^T A^T$ Rank of a matrix Linear independent vectors: no vector can be represented as a linear combination of other vectors. rank($A$) (the rank of a m-by-n matrix $A$) is The maximal number of linearly independent columns = The maximal number of linearly independent rows col($A$), the column space of a m-by-n matrix $A$, is the set of all possible linear combinations of its column vectors. row($A$), the row space of a m-by-n matrix $A$, is the set of all possible linear combinations of its row vectors. 
rank($A$) = dimension of col($A$) = dimension of row($A$) We can still talk about Rank for non-square matrices If $A$ is n by m, then rank($A$) ≤ min(m,n) If rank($A$) = n, then $A$ has full row rank If rank($A$) = m, then $A$ has full column rank Vector Norms A norm measures the "length" of a vector We usually use notation $\|x\|$ to denote the norm of $x$ A norm is a function $f : \mathbb{R}^n \to \mathbb{R}$ such that: $f(x) \geq 0$ for all $x$ $f(x) = 0 \Longleftrightarrow x = 0$ $f(tx) = |t| f(x)$ for all $x$ $f(x + y) \leq f(x) + f(y)$ for all $x$ and $y$ (Triangle Inequality) Examples of norms Perhaps the most common norm is the Euclidean norm $$ \|x\|_2 := \sqrt{x_1^2 + x_2^2 + \ldots x_n^2}$$ This is a special case of the $p$-norm: $$ \|x\|_p := \left(|x_1|^p + \ldots + |x_n|^p\right)^{1/p}$$ There's also the so-called infinity norm $$ \|x\|\infty := \max{i=1,\ldots, n} |x_i|$$ A vector $x$ is said to be normalized if $\|x\| = 1$ Matrix inversion The inverse $A^{-1}$ of a square matrix $A$ is the unique matrix such that $A A^{-1} = A^{-1}A = I$ The inverse doesn't always exist! (For example, when $A$ not full-rank) If $A$ and $B$ are invertible, then $AB$ is invertible and $(AB)^{-1} = B^{-1} A^{-1}$ If $A$ is invertible, then $A^T$ is invertible and $(A^T)^{-1} = (A^{-1})^T$ End of explanation lamda = symbols('lamda') # Note that lambda is a reserved word in python, so we use lamda (without the b) Explanation: Determinant + Trace The determinant of a square matrix $A$, denoted $|A|$, has the following recursive structure: $$\begin{vmatrix} a & b & c & d\e & f & g & h\i & j & k & l\m & n & o & p \end{vmatrix}=a\begin{vmatrix} f & g & h\j & k & l\n & o & p \end{vmatrix} b\begin{vmatrix} e & g & h\i & k & l\m & o & p \end{vmatrix}\ +c\begin{vmatrix} e & f & h\i & j & l\m & n & p \end{vmatrix} -d\begin{vmatrix} e & f & g\i & j & k\m & n & o \end{vmatrix}. $$ $|A| \neq 0$ if and only if $A$ is invertible (non-singular). The trace of a matrix, denoted tr$(A)$, is defined as the sum of the diagonal elements of $A$ Orthogonal + Normalized = Orthonormal Two vectors $x,y$ are orthogonal if $x^T y = 0$ A square matrix $U \in \mathbb{R}^{n \times n}$ is orthogonal if all columns $U_1, \ldots, U_n$ are orthogonal to each other (i.e. $U_i^\top U_j = 0$ for $i \ne j$) $U$ is orthonormal if it is orthogonal and the columns are normalized, i.e. $\|U_i\|_2 = 1$ for every $i$. If $U$ is orthonormal, then $U^T U = I$, that is, $U^{-1} = U^T$. Positive Definiteness We say a symmetric matrix $A$ is positive definite if $$ x^\top A x > 0 \text{ for all } x \ne 0$$ We say a matrix is positive semi-definite (PSD) if $$ x^\top A x \geq 0 \text{ for all } x$$ A matrix that is positive definite gives us a norm. Let $$ \| x \|_A := \sqrt{x ^\top A x} $$ Eigenvalues and Eigenvectors What are eigenvectors? A Matrix is a mathematical object that acts on a (column) vector, resulting in a new vector, i.e. Ax=b An eigenvector is non-zero vector such that the resulting vector is parallel to x (some multiple of x) $$ {A}\underline{x}=\lambda \underline{x} $$ $\lambda$ is called an eigenvalue. The eigenvectors with an eigenvalue of zero are the vectors in the nullspace of $A$. If A is singular (takes some non-zero vector into 0) then zero is an eigenvalue. 
End of explanation A = Matrix([[3, 1], [1, 3]]) I = sym.eye(2) A, I # Printing A and the 2-by-2 identity matrix to the screen (A - lamda * I) # Printing A minus lambda times the identity matrix to the screen Explanation: How to solve Ax=&#955;x $$ A\underline { x } =\lambda \underline { x } \ \left( A-\lambda I \right) \underline { x } =\underline { 0 } $$ The only solution to this equation is for A-&#955;I to be singular and therefor have a determinant of zero $$ \left|{A}-\lambda{I}\right|=0 $$ $|A-\lambda I|$ is a polynomial of the variable $\lambda$ and is called the characteristic polynomial of $A$. The eigenvalues are the roots of the equation of $ \left|{A}-\lambda{I}\right|=0 $. They may be complex numbers. There will be n $\lambda$'s for an $n\times n$ matrix (some of which may be of equal value) Given an eigenvalue $\lambda$, its eigenvectors are the null space of $A-\lambda I$. Example eigenvalue problem End of explanation (A - lamda * I).det() Explanation: This will have the following (symbolic) determinant polynomial End of explanation ((A - lamda * I).det()).factor() Explanation: Eigenvalue example We can solve the polynomial $\lambda^2 - 6\lambda + 8$ with python: End of explanation X = np.array([[3, 1], [1, 3]]) Matrix(X) eigenvals, eigenvecs = np.linalg.eig(X) Matrix(eigenvecs) Matrix(eigenvals) Explanation: I now have two eigenvalues of 2 and 4 I can get the two eigenvectors, $x_1$ and $x_2$, by solving $$ (A - 2I)x_1 = 0 \text{ and } (A - 4I)x_2 = 0 $$ I need to find a vector in the null space of $A - 2I$ and $A - 4I$. Getting eigenvalues/vectors using numpy End of explanation X = np.random.randn(5,10) A = X.dot(X.T) # For fun, let's look at A = X * X^T eigenvals, eigvecs = np.linalg.eig(A) # Compute eigenvalues of A sum_of_eigs = sum(eigenvals) # Sum the eigenvalues trace_of_A = A.trace() # Look at the trace (sum_of_eigs, trace_of_A) # Are they the same? Explanation: Trace related to eigenvals Let $A$ be a matrix whose eigenvalues are $\lambda_1, \ldots, \lambda_n$ Then we have the trace of satisfying $$\text{tr}(A) = \sum_{i=1}^n \lambda_i$$ End of explanation # We'll use the same matrix A as before prod_of_eigs = np.prod(eigenvals) # Sum the eigenvalues determinant = np.linalg.det(A) # Look at the trace (prod_of_eigs, determinant) # Are they the same? Explanation: Determinant related to eigenvals Let $A$ be a matrix whose eigenvalues are $\lambda_1, \ldots, \lambda_n$ Then we have the trace of satisfying $$|A| = \prod_{i=1}^n \lambda_i$$ End of explanation A = np.array([[4, 4], [-3, 3]]) Matrix(A) Explanation: Singular Value Decomposition Any matrix (symmetric, non-symmetric, etc.) 
$A \in \mathbb{R}^{n\times m}$ admits a singular value decomposition (SVD) The decomposition has three factors, $U \in \mathbb{R}^{n \times n}$, $\Sigma \in \mathbb{R}^{n \times m}$, and $V \in \mathbb{R}^{m \times m}$ $$A = U \Sigma V^\top$$ $U$ and $V$ are both orthonormal matrices, and $\Sigma$ is diagonal SVD Example End of explanation U, Sigma_diags, V = np.linalg.svd(A) Matrix(np.diag(Sigma_diags)) # Numpy's SVD only returns diagonals, here I'm showing full Sigma Explanation: Let's show Sigma from the SVD output End of explanation U,V = np.round(U,decimals=5), np.round(V,decimals=5) Matrix(U), Matrix(V) # I rounded the values for clarity Explanation: And we can show the orthonormal bases $U$ and $V$ End of explanation Image(url='https://upload.wikimedia.org/wikipedia/commons/e/e9/Singular_value_decomposition.gif') Explanation: Properties of the SVD $$\text{SVD:} \quad \quad A = U \Sigma V^\top$$ * The singular values of $A$ are the diagonal elements of $\Sigma$ * The singular values of $A$ are the square roots of the eigenvalues of both $A^\top A$ and $A A^\top$ * The left-singular vectors of $A$, i.e. the columns of $U$, are the eigenvectors of $A A^\top$ * The right-singular vectors of $A$, i.e. the columns of $V$, are the eigenvectors of $A^\top A$ $$M = U\Sigma V^T$$ End of explanation Image(url='http://www.probabilitycourse.com/images/chapter6/Convex_b.png', width=400) Explanation: Wikipedia: Visualization of the SVD of a 2d matrix M. First, we see the unit disc in blue together with the two canonical unit vectors. We then see the action of M, which distorts the disk to an ellipse. The SVD decomposes M into three simple transformations: an initial rotation V∗, a scaling Σ along the coordinate axes, and a final rotation U. The lengths σ1 and σ2 are singular values of M. Functions and Convexity Let $f$ be a function mapping $\mathbb{R}^{n} \to \mathbb{R}$, and assume $f$ is twice differentiable. The gradient and hessian of $f$, denoted $\nabla f(x)$ and $\nabla^2 f(x)$, are the vector an matrix functions: $$\nabla f(x) = \begin{bmatrix}\frac{\partial f}{\partial x_1} \ \vdots \ \frac{\partial f}{\partial x_n}\end{bmatrix} \quad \quad \quad \nabla^2 f(x) = \begin{bmatrix}\frac{\partial^2 f}{\partial x_1^2} & \ldots & \frac{\partial^2 f}{\partial x_1 \partial x_n} \ \vdots & & \vdots \\frac{\partial^2 f}{\partial x_1 \partial x_n} & \ldots & \frac{\partial^2 f}{\partial x_n^2}\end{bmatrix}$$ Note: the hessian is always symmetric! Gradients of three simple functions Let $b$ be some vector, and $A$ be some matrix $f(x) = b^\top x \implies \nabla_x f(x) = b$ $f(x) = x^\top A x \implies \nabla_x f(x) = 2 A x$ $f(x) = x^\top A x \implies \nabla^2_x f(x) = 2 A $ Convex functions We say that a function $f$ is convex if, for any distinct pair of points $x,y$ we have $$f\left(\frac{x + y}{2}\right) \leq \frac{f(x)}{2} + \frac{f(y)}{2}$$ End of explanation
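The properties listed for the SVD are easy to confirm numerically. The check below uses a random 4 x 3 matrix (the sizes are chosen arbitrarily) to verify that the singular values are the square roots of the eigenvalues of A^T A, that U and V are orthonormal, and that U Sigma V^T reconstructs A.

import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 3))

U, s, Vt = np.linalg.svd(A, full_matrices=True)

# Singular values are the square roots of the eigenvalues of A^T A.
evals = np.linalg.eigvalsh(A.T @ A)[::-1]          # descending order
print(np.allclose(s, np.sqrt(np.clip(evals, 0.0, None))))

# U and V are orthonormal, and the factors reconstruct A.
print(np.allclose(U.T @ U, np.eye(4)), np.allclose(Vt @ Vt.T, np.eye(3)))
Sigma = np.zeros((4, 3))
Sigma[:3, :3] = np.diag(s)
print(np.allclose(U @ Sigma @ Vt, A))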
2,012
Given the following text description, write Python code to implement the functionality described below step by step Description: Medical Text Classification with IPython Notebook The purpose of this notebook is to show a simple medical text classification workflow using IPython notebook. Standard imports and settings Step6: Few words about IPython Notebook Some of the main features of the IPython Notebook app include Step7: Data exploration and preparation Read the data from the sqlite databases into Pandas dataframes Step8: Output a random sample of 20 records Step9: Creating binary classification dataset As we see, each document is assigned one class or more. In this exercise I would like to implement a simple binary classification workflow, so I define 2 binary classes Step10: Let's take another look at the data Step11: How many records do we have in the test / train datasets ? Step12: Data distribution Step13: The accuracy of a dumb classifier that classifies all the documents as False is >80% Baseline classifier We define a trivial baseline classifier that simply look for the string 'Cardio' in the text Step14: Testing our baseline classifier Step15: Comparing actual values to the predictions of the baseline classifier Step16: Assessing Classifier Performance Confusion matrix and derivative estimators Step17: Sensitivity (Recall) or true positive rate (TPR) $=\frac{TP}{TP+FN}$ Specificity (SPC) or true negative rate $=\frac{TN}{TN+FP}$ Precision or positive predictive value (PPV) $=\frac{TP}{TP+FP}$ Confusion matrix and derivatives of the baseline classifier Step18: Some code to prettify the printout of the confusion matrix in the notebook Step19: Even though the accuracy is OK, the recall is misearable. Let's build a real classifier Building a ML classifier Vectorizing textual data using Bag-of-Words (or Bag-of-Ngrams) Our vector space is a dictionary of all the N-grams in our set of documents. Defining tokenizer to extract better features from the text Step20: Vectorize the test data Step21: What is the dimentionality of our vector space? Step22: We have 6286 documents in our training set, each has 355860 features (the combination of all the uni/bi-grams) The first 200 features Step23: At this point of the workflow, there are many feature selection and engineering techniques we could apply. Some generic, some based on natural language processing and some unique to the medical domain. However, to keep it simple for now, let's move on to the classifier Step24: Most Predictive Features A cool feature of the Naive Bayes classifier is that it can list for us most relevant features for each class. These are the features that are most relevant to the positive documents. Some of them are trivial English words, that will also appear in the list of features relevant to the Negative documents. However - we also see some domain specific features, such as arteri and coronari Step25: Test the classifier with the test data Step26: We have 7643 documents in our test set, each has 355860 features - exactly the same features we set while processing the training set of course. Step27: Confusion matrix and derivatives of the classifier Step28: As expected we improved the accuracy compared to the baseline classifier, and dramatically improved the recall on the positive documents - from 29% to 78%. 
Before we move on to optimize the classifier, let's look at some other interesting output type of Naive Bayes - some insightful output besides the predictions on the test data Get the indices of the elements that with wrong predictions Step29: Probabilities of a particular classification Bayesian models like the Naive Bayes classifier have the nice property that they compute probabilities of a particular classification -- the predict_proba and predict_log_proba methods of MultinomialNB compute these probabilities. Step30: The output of clf_nb.predict_proba(counts_test) is a (N example, 2) array. The first column gives the probability $P(Y=0)$ or $P(False)$, and the second gives $P(Y=1)$ or $P(True)$. Commonly it is more comfortable to work with the log values of the probabilities - the results of clf_nb.predict_log_proba Calculate and plot the differences between the True and False probabilities Step31: Distribution of the predictions on the test set, as function of the difference between the True/False log probabilities Step32: Another view on the distribution of the prediction scores of our classifier Divide the test set into 2 sets - True (blue) / False(red) - and plot them in a scatter plot Step33: Optimizing the classification pipeline Scorer function When optimizing parameters for classification pipeline we can write or define our own scoring function. The optimizing routine GridSearchCV which runs a brute force parameter search, will select the parameters that got the highest score. What do we want to optimize exactly? When classifing medical documents, we sometimes want to maximize sensitivity (recall) while keeping specificity and accuracy in check. In other words - the sensitivity is our target optimization parameter. To maximize sensitivity, I'll define a scorer function that returns the recall score. Step34: Now we build a full pipeline and use the GridSearchCV sklearn utility to fit the best paramaters for the pipeline. Normally I use longer lists of parameters, but that takes hours to run. NOTE - we will use GridSearchCV default 3-fold cross-validation. Other cross-validation schemes can be defined if needed. Step35: Try the optimized classifier on the test set and output the performance parameters
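Before the full listing that follows, here is a toy, self-contained illustration of the multinomial Naive Bayes scoring rule written out above, using the same smoothing alpha = 0.1 as the classifier in the solution. The three-word vocabulary, the per-class counts, and the class priors are invented numbers chosen only to keep the arithmetic easy to follow.

import numpy as np

alpha = 0.1                                # same smoothing as MultinomialNB below
counts_pos = np.array([8, 1, 1])           # toy word counts in the True class
counts_neg = np.array([2, 5, 3])           # toy word counts in the False class
prior_pos, prior_neg = 0.2, 0.8            # toy class priors P(Y)

def word_log_probs(counts, alpha):
    # P(w_i) = (count_i + alpha) / (overall count + alpha * N), in log space.
    return np.log((counts + alpha) / (counts.sum() + alpha * len(counts)))

doc = np.array([3, 0, 1])                  # word counts of one toy document
log_pos = np.log(prior_pos) + doc @ word_log_probs(counts_pos, alpha)
log_neg = np.log(prior_neg) + doc @ word_log_probs(counts_neg, alpha)
print(log_pos > log_neg)                   # classify as True when this holds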
Python Code: %matplotlib inline import matplotlib as mpl mpl.rcParams['font.size'] = 16.0 import matplotlib.pyplot as plt import numpy as np import pandas as pd # process data with pandas dataframe pd.set_option('display.width', 500) pd.set_option('display.max_columns', 100) pd.set_option('display.notebook_repr_html', True) import seaborn as sns sns.set_style("whitegrid") sns.set_context("poster") import time import os Explanation: Medical Text Classification with IPython Notebook The purpose of this notebook is to show a simple medical text classification workflow using IPython notebook. Standard imports and settings End of explanation import sqlite3 as sqlite def get_all_tables(c): Helper function - Gets a list of all the tables in the database. all_tables = [] c.execute('SELECT name FROM SQLITE_MASTER WHERE type = "table"') for tbl in c: all_tables.append(tbl[0]) return all_tables def drop_tables(c, tables): Helper function - Checks that the specified tables exist, and for those that do, drop them. all_tables = get_all_tables(c) for t in tables: if t in all_tables: c.execute('DROP TABLE %s' % t) def create_documents_table(c): Helper function - This function uses SQL to create the Documents table drop_tables(c, [ 'Documents' ]) c.execute(CREATE TABLE Documents ( DOCID TEXT PRIMARY KEY, NOTE_TEXT TEXT, CATEGORY TEXT )) def add_document(c, docid, text, category): Helper function - Adds one document to sql Documents database c.execute('insert or replace into Documents values ( ?, ?, ? )', (docid, text, category)) def ohsumed2sqlite(src_root_dir, dest_sqlite): start_time = time.time() print 'Converting ohsumed directory structure {0} to sqlite database'.format(src_root_dir) conn_out = sqlite.connect(dest_sqlite) c_out = conn_out.cursor() fout = open(dest_sqlite, 'w') fout.close() create_documents_table(c_out) # Process the ohsumed directory dict = {} for root, dirs, files in os.walk(src_root_dir): for f in files: category = os.path.basename(root) # directory name is the category with open (os.path.join(root, f), "r") as cur_file: data=cur_file.read() if f in dict: category = dict[f] + ',' + category dict[f] = category add_document(c_out, f, data, category) conn_out.commit() c_out.close() print("--- ohsumed2sqlite %s seconds ---" % (time.time() - start_time)) # Convert the training data ohsumed2sqlite('.\\Data\\ohsumed-first-20000-docs\\training', 'training.sqlite') # Convert the test data ohsumed2sqlite('.\\Data\\ohsumed-first-20000-docs\\test', 'test.sqlite') Explanation: Few words about IPython Notebook Some of the main features of the IPython Notebook app include: In-browser editing for code, with automatic syntax highlighting, indentation, and tab completion. The ability to execute code from the browser, with the results of computations attached to the code which generated them. Displaying the result of computation using rich media representations, such as HTML, LaTeX, PNG, SVG, etc, up to publication-quality figures content. In-browser editing for rich text using the Markdown markup language, which can provide commentary for the code. The ability to easily include mathematical notation within markdown cells using LaTeX. Personal advantages for my projects: Great support for Interpretable Data Science - This greatly contributes to the process of harnessing the medical expertise of the users to train the algorithms better. 
Architecture of IPython notebook <img src="images/ipython_architecture.png"> The data The Ohsumed test collection (available at ftp://medir.ohsu.edu/pub/ohsumed) is a subset of the MEDLINE database, which is a bibliographic database of important, peer-reviewed medical literature maintained by the National Library of Medicine. The initial subset I consider in the project is the collection consisting of the first 20,000 documents from the 50,216 medical abstracts of the year 1991. The classification scheme consists of the 23 Medical Subject Headings (MeSH) categories of cardiovascular diseases group: | Category | Description | | ------------- | ------------- | | C01 | Bacterial Infections and Mycoses | | C02 | Virus Diseases | | C03 | Parasitic Diseases | | C04 | Neoplasms | | C05 | Musculoskeletal Diseases | | C06 | Digestive System Diseases | | C07 | Stomatognathic Diseases | | C08 | Respiratory Tract Diseases | | C09 | Otorhinolaryngologic Diseases | | C10 | Nervous System Diseases | | C11 | Eye Diseases | | C12 | Urologic and Male Genital Diseases | | C13 | Female Genital Diseases and Pregnancy Complications | | C14 | Cardiovascular Diseases | | C15 | Hemic and Lymphatic Diseases | | C16 | Neonatal Diseases and Abnormalities | | C17 | Skin and Connective Tissue Diseases | | C18 | Nutritional and Metabolic Diseases | | C19 | Endocrine Diseases | | C20 | Immunologic Diseases | | C21 | Disorders of Environmental Origin | | C22 | Animal Diseases | | C23 | Pathological Conditions, Signs and Symptoms | Downloading the data: Cardiovascular diseases abstracts (the first 20,000 abstracts of the year 1991) The following code assumes that the data is extracted into folder with the name 'Data' in the same folder of the IPython notebook Storing the data in a sql database The following code iterates over the extracted data and converts the ohsumed directory structure to a single sqlite databse with the data. The original dataset is already divided to Test and Training datasets. We will keep this division. End of explanation con = sqlite.connect('training.sqlite') df_train = pd.read_sql_query("SELECT * from Documents", con) con.close() con = sqlite.connect('test.sqlite') df_test = pd.read_sql_query("SELECT * from Documents", con) con.close() Explanation: Data exploration and preparation Read the data from the sqlite databases into Pandas dataframes End of explanation pd.options.display.max_colwidth = 200 df_train.sample(20) Explanation: Output a random sample of 20 records End of explanation import re def ConvertCategoryColToBinVal(df, poscat): df['CATEGORY'] = df['CATEGORY'].apply(lambda x: bool(re.search(poscat, x, re.IGNORECASE) and True)) ConvertCategoryColToBinVal(df_train, 'C14') ConvertCategoryColToBinVal(df_test, 'C14') Explanation: Creating binary classification dataset As we see, each document is assigned one class or more. In this exercise I would like to implement a simple binary classification workflow, so I define 2 binary classes: * Positive / True - Documents that belong to class C14 - Cardiovascular Diseases * Negative / False - Documents that do not belong to C14 The following code converts that data to a binary classification dataset: End of explanation df_train.sample(20) Explanation: Let's take another look at the data: End of explanation print 'Training data has: ', len(df_train.index), ' documents' print 'Test data has: ', len(df_test.index), ' documents' Explanation: How many records do we have in the test / train datasets ? 
End of explanation plt.axis('equal') plt.pie( df_train.CATEGORY.value_counts().tolist(), labels=['False', 'True'], autopct='%1.1f%%', colors=("#E13F29", "#D69A80")); Explanation: Data distribution End of explanation import re def baseline_cpr_classifier(txt): if re.search('Cardio', txt, re.IGNORECASE): return True else: return False Explanation: The accuracy of a dumb classifier that classifies all the documents as False is >80% Baseline classifier We define a trivial baseline classifier that simply look for the string 'Cardio' in the text End of explanation Ytrue = df_test.CATEGORY.tolist() Ypred = [] for index, row in df_test.iterrows(): Ypred.append(baseline_cpr_classifier(row['NOTE_TEXT'])) Explanation: Testing our baseline classifier End of explanation dictY = {'Actual' : Ytrue, 'Predicted': Ypred} dfY = pd.DataFrame.from_dict(dictY) dfY.sample(20) Explanation: Comparing actual values to the predictions of the baseline classifier End of explanation from ipy_table import * confusion_matrix_binary = [ ['', 'Predicted 0', 'Predicted 1'], ['Actual 0', 'True Negative', 'False Positive'], ['Actual 1', 'False Negative', 'True Positive'] ] make_table(confusion_matrix_binary) apply_theme('basic_both') Explanation: Assessing Classifier Performance Confusion matrix and derivative estimators End of explanation from sklearn.metrics import confusion_matrix conf_mat = confusion_matrix(Ytrue, Ypred) print conf_mat print sum(conf_mat[0]) Explanation: Sensitivity (Recall) or true positive rate (TPR) $=\frac{TP}{TP+FN}$ Specificity (SPC) or true negative rate $=\frac{TN}{TN+FP}$ Precision or positive predictive value (PPV) $=\frac{TP}{TP+FP}$ Confusion matrix and derivatives of the baseline classifier End of explanation def render_confusion_matrix(ytrue, ypred): return pd.crosstab(pd.Series(ytrue), pd.Series(ypred), rownames=['Actual'], colnames=['Predicted'], margins=True) render_confusion_matrix(Ytrue, Ypred) from sklearn.metrics import classification_report, accuracy_score print print 'Accuracy: ', accuracy_score(Ytrue, Ypred) print print classification_report(Ytrue, Ypred) Explanation: Some code to prettify the printout of the confusion matrix in the notebook: End of explanation from nltk.stem.porter import * from nltk.tokenize import word_tokenize import string stemmer = PorterStemmer() def stem_tokens(tokens, stemmer): stemmed = [] for item in tokens: stemmed.append(stemmer.stem(item)) return stemmed def tokenize(text): text = "".join([ch for ch in text if ch not in string.punctuation]) tokens = word_tokenize(text) tokens = [item for item in tokens if item.isalpha()] stems = stem_tokens(tokens, stemmer) return stems Explanation: Even though the accuracy is OK, the recall is misearable. Let's build a real classifier Building a ML classifier Vectorizing textual data using Bag-of-Words (or Bag-of-Ngrams) Our vector space is a dictionary of all the N-grams in our set of documents. Defining tokenizer to extract better features from the text End of explanation X_train = df_train['NOTE_TEXT'].tolist() Y_train = df_train.CATEGORY.tolist() vectorizer = CountVectorizer(tokenizer=tokenize, ngram_range=(1,2)) wcounts = vectorizer.fit_transform(X_train) Explanation: Vectorize the test data End of explanation wcounts Explanation: What is the dimentionality of our vector space? 
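A minimal sketch answering this question numerically. It assumes the wcounts matrix built above, and spells out the CountVectorizer import from sklearn.feature_extraction.text that the vectorizer cell relies on but does not show: the shape gives documents by features, and nnz shows how sparse the bag-of-n-grams matrix is.

# Import assumed by the vectorizer cell above (it is not shown there).
from sklearn.feature_extraction.text import CountVectorizer

n_docs, n_features = wcounts.shape
print('documents: %d, n-gram features: %d' % (n_docs, n_features))
# Bag-of-n-grams matrices are stored sparse; only a tiny fraction is non-zero.
print('non-zero entries: %d (%.4f%% dense)'
      % (wcounts.nnz, 100.0 * wcounts.nnz / (float(n_docs) * n_features)))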
End of explanation feats = vectorizer.get_feature_names() feats[:200] Explanation: We have 6286 documents in our training set, each has 355860 features (the combination of all the uni/bi-grams) The first 200 features End of explanation from sklearn.naive_bayes import MultinomialNB clf_nb = MultinomialNB(alpha=0.1) clf_nb.fit(wcounts, Y_train) Explanation: At this point of the workflow, there are many feature selection and engineering techniques we could apply. Some generic, some based on natural language processing and some unique to the medical domain. However, to keep it simple for now, let's move on to the classifier: Naive Bayes classifier Naive conditional independence assumption: The counts of individual features (or n-grams) are independent Bayes' rule is used to calculate the conditional probabilities for each class/feature pair: $P(Y|W)=\frac{P(W|Y) \cdot P(Y)}{P(W)}$ $\implies P(Y|w_{1}, w_{2}, ..., w_{N})=\frac{1}{P(W)} \cdot P(Y) \cdot \prod_{k=1}^{N} P(w_{k}|Y)$ Since the probability of all the features is not dependent on the probability of Y and all we care about is finding the Y that is most likely, we can drop $\frac{1}{P(W)}$ and stay with: $P(Y|w_{1}, w_{2}, ..., w_{N})=P(Y) \cdot \prod_{k=1}^{N} P(w_{k}|Y)$ We estimate the probabilities from the training set using multinomial distribution with a smoothing factor alpha: $P(w_{i}) = \frac{count_{i}+alpha}{overallcount_{i}+alpha \cdot N}$ <img width="575" src="words_proba.png"/> Fit (aka train) a classifier with some ad-hoc parameters on the test data End of explanation pf = [(clf_nb.feature_log_prob_[1, i], feats[i]) for i in range(len(feats))] pf.sort(reverse=True) for p in pf[:25]: print 'Positive word %.2f: %s' % (p[0], p[1]) Explanation: Most Predictive Features A cool feature of the Naive Bayes classifier is that it can list for us most relevant features for each class. These are the features that are most relevant to the positive documents. Some of them are trivial English words, that will also appear in the list of features relevant to the Negative documents. However - we also see some domain specific features, such as arteri and coronari : End of explanation # Read the data from pandas dataframe to an array X_test = df_test['NOTE_TEXT'].tolist() Y_test = df_test.CATEGORY.tolist() # Convert the text to arrays of numbers counts_test = vectorizer.transform(X_test) counts_test Explanation: Test the classifier with the test data End of explanation # Predict the values of the test set predictions = clf_nb.predict(counts_test) # Look at the first 20 predictions predictions[:20] Explanation: We have 7643 documents in our test set, each has 355860 features - exactly the same features we set while processing the training set of course. End of explanation render_confusion_matrix(Y_test, predictions) print print 'Accuracy: ', accuracy_score(Y_test, predictions) print print classification_report(Y_test, predictions) Explanation: Confusion matrix and derivatives of the classifier End of explanation iwrong_predictions = [i for i,v in enumerate(zip(Y_test, predictions)) if v[0] != v[1]] iwrong_predictions[:20] Explanation: As expected we improved the accuracy compared to the baseline classifier, and dramatically improved the recall on the positive documents - from 29% to 78%. 
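The headline numbers quoted here follow directly from the confusion-matrix definitions given earlier. As a hedged sketch (assuming the Y_test and predictions arrays from the cells above), the same sensitivity, specificity and precision can be recomputed from the raw counts:

from sklearn.metrics import confusion_matrix

# For binary labels, ravel() unpacks the matrix as tn, fp, fn, tp.
tn, fp, fn, tp = confusion_matrix(Y_test, predictions).ravel()
sensitivity = tp / float(tp + fn)   # recall on the positive (C14) class
specificity = tn / float(tn + fp)
precision = tp / float(tp + fp)
print('sensitivity %.3f, specificity %.3f, precision %.3f'
      % (sensitivity, specificity, precision))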
Before we move on to optimize the classifier, let's look at some other interesting output type of Naive Bayes - some insightful output besides the predictions on the test data Get the indices of the elements that with wrong predictions End of explanation proba = clf_nb.predict_proba(counts_test) log_proba = clf_nb.predict_log_proba(counts_test) proba[:10] Explanation: Probabilities of a particular classification Bayesian models like the Naive Bayes classifier have the nice property that they compute probabilities of a particular classification -- the predict_proba and predict_log_proba methods of MultinomialNB compute these probabilities. End of explanation diff_prob = proba[:,1] - proba[:,0] diff_log_proba = log_proba[:,1] - log_proba[:,0] print 'diff_prob:\n' print 'mean:',np.mean(diff_prob) print 'std:', np.std(diff_prob) print '\ndiff_log_prob:\n' print 'mean:', np.mean(diff_log_proba) print 'std:', np.std(diff_log_proba) diff_log_proba[-10:] Explanation: The output of clf_nb.predict_proba(counts_test) is a (N example, 2) array. The first column gives the probability $P(Y=0)$ or $P(False)$, and the second gives $P(Y=1)$ or $P(True)$. Commonly it is more comfortable to work with the log values of the probabilities - the results of clf_nb.predict_log_proba Calculate and plot the differences between the True and False probabilities End of explanation # Plot histogram. plt.hist(diff_log_proba, range=[-500, 500], bins=30, normed=True, alpha=0.5) plt.axvline(0, color='r') Explanation: Distribution of the predictions on the test set, as function of the difference between the True/False log probabilities End of explanation pospts = [v[1] for i,v in enumerate(zip(Y_test, diff_log_proba)) if v[0] == 1] negpts = [v[1] for i,v in enumerate(zip(Y_test, diff_log_proba)) if v[0] == 0] import random plt.scatter( pospts, np.random.uniform(0.9, 1.1, len(pospts)), color='blue') plt.scatter( negpts, np.random.uniform(0.9, 1.1, len(negpts)), color='red') plt.xlim(-500, 450) plt.ylim(0.8, 1.2) plt.axvline(0, color='r') values = np.array(diff_log_proba[iwrong_predictions]) plt.axvspan( np.mean(values)-2*np.std(values), np.mean(values)+2*np.std(values), facecolor='b', alpha=0.1) plt.axvline(np.mean(values), linewidth=1); Explanation: Another view on the distribution of the prediction scores of our classifier Divide the test set into 2 sets - True (blue) / False(red) - and plot them in a scatter plot End of explanation from sklearn.metrics import recall_score, make_scorer # Define scorer function that returns the recall score recall_scorer = make_scorer(recall_score) Explanation: Optimizing the classification pipeline Scorer function When optimizing parameters for classification pipeline we can write or define our own scoring function. The optimizing routine GridSearchCV which runs a brute force parameter search, will select the parameters that got the highest score. What do we want to optimize exactly? When classifing medical documents, we sometimes want to maximize sensitivity (recall) while keeping specificity and accuracy in check. In other words - the sensitivity is our target optimization parameter. To maximize sensitivity, I'll define a scorer function that returns the recall score. 
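One caveat worth keeping in mind: a degenerate classifier that predicts True for every document gets perfect recall, so optimizing recall alone can be gamed. A hedged alternative (just an option, not what is used in the grid search below) is an F-beta scorer that still favours recall but keeps precision in the objective:

from sklearn.metrics import fbeta_score, make_scorer

# F2 weighs recall twice as heavily as precision.
f2_scorer = make_scorer(fbeta_score, beta=2)
# Recent scikit-learn versions also accept the shortcut scoring='recall',
# which is equivalent to the hand-rolled recall_scorer defined above.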
End of explanation from sklearn.feature_extraction.text import TfidfTransformer from sklearn.grid_search import GridSearchCV from sklearn.pipeline import Pipeline pipeline = Pipeline([ ('vect', CountVectorizer(tokenizer=tokenize)), ('clf', MultinomialNB()) ]) # Define some possible parameters for feature extraction and for the classifier parameters = { 'vect__max_features': (20000, 30000), 'vect__ngram_range': ((1, 1), (1, 2)), # unigrams or bigrams 'clf__alpha': (0.05, 0.1, 0.2) } # find the best parameters for both the feature extraction and the # classifier, based on the scorer function we defined grid_search = GridSearchCV(pipeline, parameters, verbose=1, scoring=recall_scorer) grid_search.fit(df_train['NOTE_TEXT'].tolist(), df_train.CATEGORY.tolist()) print("Best parameters set found on development set:") print print(grid_search.best_params_) Explanation: Now we build a full pipeline and use the GridSearchCV sklearn utility to fit the best paramaters for the pipeline. Normally I use longer lists of parameters, but that takes hours to run. NOTE - we will use GridSearchCV default 3-fold cross-validation. Other cross-validation schemes can be defined if needed. End of explanation # Use the optimal classifier to make predictions on the test set opt_predictions = grid_search.predict(df_test['NOTE_TEXT'].tolist()) render_confusion_matrix(df_test.CATEGORY.tolist(), opt_predictions) print print 'Accuracy: ', accuracy_score(df_test.CATEGORY.tolist(), opt_predictions) print print classification_report(df_test.CATEGORY.tolist(), opt_predictions) Explanation: Try the optimized classifier on the test set and output the performance parameters End of explanation
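Two small follow-ups, offered as a hedged sketch rather than part of the original workflow: on scikit-learn >= 0.18 the grid-search utilities live in sklearn.model_selection (the sklearn.grid_search module imported above was removed in 0.20), and the fitted pipeline can be persisted so the search does not have to be re-run; the output filename here is arbitrary.

# Drop-in replacement import for newer scikit-learn versions.
from sklearn.model_selection import GridSearchCV
import joblib  # older installs used: from sklearn.externals import joblib

# grid_search.best_estimator_ is the pipeline refitted with the best parameters.
joblib.dump(grid_search.best_estimator_, 'cardio_pipeline.joblib')  # arbitrary filename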
2,013
Given the following text description, write Python code to implement the functionality described below step by step Description: Background information on filtering Here we give some background information on filtering in general, and how it is done in MNE-Python in particular. Recommended reading for practical applications of digital filter design can be found in Parks & Burrus (1987) Step1: Take for example an ideal low-pass filter, which would give a magnitude response of 1 in the pass-band (up to frequency $f_p$) and a magnitude response of 0 in the stop-band (down to frequency $f_s$) such that $f_p=f_s=40$ Hz here (shown to a lower limit of -60 dB for simplicity) Step2: This filter hypothetically achieves zero ripple in the frequency domain, perfect attenuation, and perfect steepness. However, due to the discontinuity in the frequency response, the filter would require infinite ringing in the time domain (i.e., infinite order) to be realized. Another way to think of this is that a rectangular window in the frequency domain is actually a sinc_ function in the time domain, which requires an infinite number of samples (and thus infinite time) to represent. So although this filter has ideal frequency suppression, it has poor time-domain characteristics. Let's try to naïvely make a brick-wall filter of length 0.1 s, and look at the filter itself in the time domain and the frequency domain Step3: This is not so good! Making the filter 10 times longer (1 s) gets us a slightly better stop-band suppression, but still has a lot of ringing in the time domain. Note the x-axis is an order of magnitude longer here, and the filter has a correspondingly much longer group delay (again equal to half the filter length, or 0.5 seconds) Step4: Let's make the stop-band tighter still with a longer filter (10 s), with a resulting larger x-axis Step5: Now we have very sharp frequency suppression, but our filter rings for the entire 10 seconds. So this naïve method is probably not a good way to build our low-pass filter. Fortunately, there are multiple established methods to design FIR filters based on desired response characteristics. These include Step6: Accepting a shallower roll-off of the filter in the frequency domain makes our time-domain response potentially much better. We end up with a more gradual slope through the transition region, but a much cleaner time domain signal. Here again for the 1 s filter Step7: Since our lowpass is around 40 Hz with a 10 Hz transition, we can actually use a shorter filter (5 cycles at 10 Hz = 0.5 s) and still get acceptable stop-band attenuation Step8: But if we shorten the filter too much (2 cycles of 10 Hz = 0.2 s), our effective stop frequency gets pushed out past 60 Hz Step9: If we want a filter that is only 0.1 seconds long, we should probably use something more like a 25 Hz transition band (0.2 s = 5 cycles @ 25 Hz) Step10: So far, we have only discussed non-causal filtering, which means that each sample at each time point $t$ is filtered using samples that come after ($t + \Delta t$) and before ($t - \Delta t$) the current time point $t$. In this sense, each sample is influenced by samples that come both before and after it. This is useful in many cases, especially because it does not delay the timing of events. However, sometimes it can be beneficial to use causal filtering, whereby each sample $t$ is filtered only using time points that came after it. 
Note that the delay is variable (whereas for linear/zero-phase filters it is constant) but small in the pass-band. Unlike zero-phase filters, which require time-shifting backward the output of a linear-phase filtering stage (and thus becoming non-causal), minimum-phase filters do not require any compensation to achieve small delays in the pass-band. Note that as an artifact of the minimum phase filter construction step, the filter does not end up being as steep as the linear/zero-phase version. We can construct a minimum-phase filter from our existing linear-phase filter with the Step11: Applying FIR filters Now lets look at some practical effects of these filters by applying them to some data. Let's construct a Gaussian-windowed sinusoid (i.e., Morlet imaginary part) plus noise (random and line). Note that the original clean signal contains frequency content in both the pass band and transition bands of our low-pass filter. Step12: Filter it with a shallow cutoff, linear-phase FIR (which allows us to compensate for the constant filter delay) Step13: Filter it with a different design method fir_design="firwin2", and also compensate for the constant filter delay. This method does not produce quite as sharp a transition compared to fir_design="firwin", despite being twice as long Step14: Let's also filter with the MNE-Python 0.13 default, which is a long-duration, steep cutoff FIR that gets applied twice Step15: Let's also filter it with the MNE-C default, which is a long-duration steep-slope FIR filter designed using frequency-domain techniques Step16: And now an example of a minimum-phase filter Step18: Both the MNE-Python 0.13 and MNE-C filters have excellent frequency attenuation, but it comes at a cost of potential ringing (long-lasting ripples) in the time domain. Ringing can occur with steep filters, especially in signals with frequency content around the transition band. Our Morlet wavelet signal has power in our transition band, and the time-domain ringing is thus more pronounced for the steep-slope, long-duration filter than the shorter, shallower-slope filter Step19: IIR filters MNE-Python also offers IIR filtering functionality that is based on the methods from Step20: The falloff of this filter is not very steep. <div class="alert alert-info"><h4>Note</h4><p>Here we have made use of second-order sections (SOS) by using Step21: There are other types of IIR filters that we can use. For a complete list, check out the documentation for Step22: If we can live with even more ripple, we can get it slightly steeper, but the impulse response begins to ring substantially longer (note the different x-axis scale) Step23: Applying IIR filters Now let's look at how our shallow and steep Butterworth IIR filters perform on our Morlet signal from before Step24: Some pitfalls of filtering Multiple recent papers have noted potential risks of drawing errant inferences due to misapplication of filters. Low-pass problems Filters in general, especially those that are non-causal (zero-phase), can make activity appear to occur earlier or later than it truly did. As mentioned in VanRullen (2011) Step25: Similarly, in a P300 paradigm reported by Kappenman & Luck (2010) Step26: In response, Maess et al. (2016) Step27: Both groups seem to acknowledge that the choices of filtering cutoffs, and perhaps even the application of baseline correction, depend on the characteristics of the data being investigated, especially when it comes to
Python Code: import numpy as np from numpy.fft import fft, fftfreq from scipy import signal import matplotlib.pyplot as plt from mne.time_frequency.tfr import morlet from mne.viz import plot_filter, plot_ideal_filter import mne sfreq = 1000. f_p = 40. flim = (1., sfreq / 2.) # limits for plotting Explanation: Background information on filtering Here we give some background information on filtering in general, and how it is done in MNE-Python in particular. Recommended reading for practical applications of digital filter design can be found in Parks & Burrus (1987) :footcite:ParksBurrus1987 and Ifeachor & Jervis (2002) :footcite:IfeachorJervis2002, and for filtering in an M/EEG context we recommend reading Widmann et al. (2015) :footcite:WidmannEtAl2015. <div class="alert alert-info"><h4>Note</h4><p>This tutorial goes pretty deep into the mathematics of filtering and the design decisions that go into choosing a filter. If you just want to know how to apply the default filters in MNE-Python to your data, skip this tutorial and read `tut-filter-resample` instead (but someday, you should come back and read this one too 🙂).</p></div> Problem statement Practical issues with filtering electrophysiological data are covered in Widmann et al. (2012) :footcite:WidmannSchroger2012, where they conclude with this statement: Filtering can result in considerable distortions of the time course (and amplitude) of a signal as demonstrated by VanRullen (2011) :footcite:`VanRullen2011`. Thus, filtering should not be used lightly. However, if effects of filtering are cautiously considered and filter artifacts are minimized, a valid interpretation of the temporal dynamics of filtered electrophysiological data is possible and signals missed otherwise can be detected with filtering. In other words, filtering can increase signal-to-noise ratio (SNR), but if it is not used carefully, it can distort data. Here we hope to cover some filtering basics so users can better understand filtering trade-offs and why MNE-Python has chosen particular defaults. Filtering basics Let's get some of the basic math down. In the frequency domain, digital filters have a transfer function that is given by: \begin{align}H(z) &= \frac{b_0 + b_1 z^{-1} + b_2 z^{-2} + \ldots + b_M z^{-M}} {1 + a_1 z^{-1} + a_2 z^{-2} + \ldots + a_N z^{-M}} \ &= \frac{\sum_{k=0}^Mb_kz^{-k}}{\sum_{k=1}^Na_kz^{-k}}\end{align} In the time domain, the numerator coefficients $b_k$ and denominator coefficients $a_k$ can be used to obtain our output data $y(n)$ in terms of our input data $x(n)$ as: \begin{align}:label: summations y(n) &amp;= b_0 x(n) + b_1 x(n-1) + \ldots + b_M x(n-M) - a_1 y(n-1) - a_2 y(n - 2) - \ldots - a_N y(n - N)\\ &amp;= \sum_{k=0}^M b_k x(n-k) - \sum_{k=1}^N a_k y(n-k)\end{align} In other words, the output at time $n$ is determined by a sum over 1. the numerator coefficients $b_k$, which get multiplied by the previous input values $x(n-k)$, and 2. the denominator coefficients $a_k$, which get multiplied by the previous output values $y(n-k)$. Note that these summations correspond to (1) a weighted moving average and (2) an autoregression. Filters are broken into two classes: FIR_ (finite impulse response) and IIR_ (infinite impulse response) based on these coefficients. FIR filters use a finite number of numerator coefficients $b_k$ ($\forall k, a_k=0$), and thus each output value of $y(n)$ depends only on the $M$ previous input values. 
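To make the two summations above concrete, a small hedged sketch with arbitrary toy coefficients: for an FIR filter (all a_k = 0 and a_0 = 1) the difference equation reduces to a convolution with the b_k, which scipy.signal.lfilter and numpy.convolve agree on, while a non-zero a_1 adds the feedback term that makes the impulse response effectively infinite.

import numpy as np
from scipy import signal

rng = np.random.RandomState(0)
x = rng.randn(100)

b = np.array([0.25, 0.25, 0.25, 0.25])       # toy 4-tap FIR (moving average)
y_fir = signal.lfilter(b, [1.], x)           # y(n) = sum_k b_k x(n-k)
assert np.allclose(y_fir, np.convolve(x, b)[:len(x)])

y_iir = signal.lfilter([1.], [1., -0.9], x)  # y(n) = x(n) + 0.9 y(n-1)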
IIR filters depend on the previous input and output values, and thus can have effectively infinite impulse responses. As outlined in Parks & Burrus (1987) :footcite:ParksBurrus1987, FIR and IIR have different trade-offs: * A causal FIR filter can be linear-phase -- i.e., the same time delay across all frequencies -- whereas a causal IIR filter cannot. The phase and group delay characteristics are also usually better for FIR filters. * IIR filters can generally have a steeper cutoff than an FIR filter of equivalent order. * IIR filters are generally less numerically stable, in part due to accumulating error (due to its recursive calculations). In MNE-Python we default to using FIR filtering. As noted in Widmann et al. (2015) :footcite:WidmannEtAl2015: Despite IIR filters often being considered as computationally more efficient, they are recommended only when high throughput and sharp cutoffs are required (Ifeachor and Jervis, 2002 :footcite:`IfeachorJervis2002`, p. 321)... FIR filters are easier to control, are always stable, have a well-defined passband, can be corrected to zero-phase without additional computations, and can be converted to minimum-phase. We therefore recommend FIR filters for most purposes in electrophysiological data analysis. When designing a filter (FIR or IIR), there are always trade-offs that need to be considered, including but not limited to: 1. Ripple in the pass-band 2. Attenuation of the stop-band 3. Steepness of roll-off 4. Filter order (i.e., length for FIR filters) 5. Time-domain ringing In general, the sharper something is in frequency, the broader it is in time, and vice-versa. This is a fundamental time-frequency trade-off, and it will show up below. FIR Filters First, we will focus on FIR filters, which are the default filters used by MNE-Python. Designing FIR filters Here we'll try to design a low-pass filter and look at trade-offs in terms of time- and frequency-domain filter characteristics. Later, in tut_effect_on_signals, we'll look at how such filters can affect signals when they are used. First let's import some useful tools for filtering, and set some default values for our data that are reasonable for M/EEG. End of explanation nyq = sfreq / 2. # the Nyquist frequency is half our sample rate freq = [0, f_p, f_p, nyq] gain = [1, 1, 0, 0] third_height = np.array(plt.rcParams['figure.figsize']) * [1, 1. / 3.] ax = plt.subplots(1, figsize=third_height)[1] plot_ideal_filter(freq, gain, ax, title='Ideal %s Hz lowpass' % f_p, flim=flim) Explanation: Take for example an ideal low-pass filter, which would give a magnitude response of 1 in the pass-band (up to frequency $f_p$) and a magnitude response of 0 in the stop-band (down to frequency $f_s$) such that $f_p=f_s=40$ Hz here (shown to a lower limit of -60 dB for simplicity): End of explanation n = int(round(0.1 * sfreq)) n -= n % 2 - 1 # make it odd t = np.arange(-(n // 2), n // 2 + 1) / sfreq # center our sinc h = np.sinc(2 * f_p * t) / (4 * np.pi) plot_filter(h, sfreq, freq, gain, 'Sinc (0.1 s)', flim=flim, compensate=True) Explanation: This filter hypothetically achieves zero ripple in the frequency domain, perfect attenuation, and perfect steepness. However, due to the discontinuity in the frequency response, the filter would require infinite ringing in the time domain (i.e., infinite order) to be realized. 
Another way to think of this is that a rectangular window in the frequency domain is actually a sinc_ function in the time domain, which requires an infinite number of samples (and thus infinite time) to represent. So although this filter has ideal frequency suppression, it has poor time-domain characteristics. Let's try to naïvely make a brick-wall filter of length 0.1 s, and look at the filter itself in the time domain and the frequency domain: End of explanation n = int(round(1. * sfreq)) n -= n % 2 - 1 # make it odd t = np.arange(-(n // 2), n // 2 + 1) / sfreq h = np.sinc(2 * f_p * t) / (4 * np.pi) plot_filter(h, sfreq, freq, gain, 'Sinc (1.0 s)', flim=flim, compensate=True) Explanation: This is not so good! Making the filter 10 times longer (1 s) gets us a slightly better stop-band suppression, but still has a lot of ringing in the time domain. Note the x-axis is an order of magnitude longer here, and the filter has a correspondingly much longer group delay (again equal to half the filter length, or 0.5 seconds): End of explanation n = int(round(10. * sfreq)) n -= n % 2 - 1 # make it odd t = np.arange(-(n // 2), n // 2 + 1) / sfreq h = np.sinc(2 * f_p * t) / (4 * np.pi) plot_filter(h, sfreq, freq, gain, 'Sinc (10.0 s)', flim=flim, compensate=True) Explanation: Let's make the stop-band tighter still with a longer filter (10 s), with a resulting larger x-axis: End of explanation trans_bandwidth = 10 # 10 Hz transition band f_s = f_p + trans_bandwidth # = 50 Hz freq = [0., f_p, f_s, nyq] gain = [1., 1., 0., 0.] ax = plt.subplots(1, figsize=third_height)[1] title = '%s Hz lowpass with a %s Hz transition' % (f_p, trans_bandwidth) plot_ideal_filter(freq, gain, ax, title=title, flim=flim) Explanation: Now we have very sharp frequency suppression, but our filter rings for the entire 10 seconds. So this naïve method is probably not a good way to build our low-pass filter. Fortunately, there are multiple established methods to design FIR filters based on desired response characteristics. These include: 1. The Remez_ algorithm (:func:`scipy.signal.remez`, `MATLAB firpm`_) 2. Windowed FIR design (:func:`scipy.signal.firwin2`, :func:`scipy.signal.firwin`, and `MATLAB fir2`_) 3. Least squares designs (:func:`scipy.signal.firls`, `MATLAB firls`_) 4. Frequency-domain design (construct filter in Fourier domain and use an :func:`IFFT &lt;numpy.fft.ifft&gt;` to invert it) <div class="alert alert-info"><h4>Note</h4><p>Remez and least squares designs have advantages when there are "do not care" regions in our frequency response. However, we want well controlled responses in all frequency regions. Frequency-domain construction is good when an arbitrary response is desired, but generally less clean (due to sampling issues) than a windowed approach for more straightforward filter applications. 
Since our filters (low-pass, high-pass, band-pass, band-stop) are fairly simple and we require precise control of all frequency regions, we will primarily use and explore windowed FIR design.</p></div> If we relax our frequency-domain filter requirements a little bit, we can use these functions to construct a lowpass filter that instead has a transition band, or a region between the pass frequency $f_p$ and stop frequency $f_s$, e.g.: End of explanation h = signal.firwin2(n, freq, gain, nyq=nyq) plot_filter(h, sfreq, freq, gain, 'Windowed 10 Hz transition (1.0 s)', flim=flim, compensate=True) Explanation: Accepting a shallower roll-off of the filter in the frequency domain makes our time-domain response potentially much better. We end up with a more gradual slope through the transition region, but a much cleaner time domain signal. Here again for the 1 s filter: End of explanation n = int(round(sfreq * 0.5)) + 1 h = signal.firwin2(n, freq, gain, nyq=nyq) plot_filter(h, sfreq, freq, gain, 'Windowed 10 Hz transition (0.5 s)', flim=flim, compensate=True) Explanation: Since our lowpass is around 40 Hz with a 10 Hz transition, we can actually use a shorter filter (5 cycles at 10 Hz = 0.5 s) and still get acceptable stop-band attenuation: End of explanation n = int(round(sfreq * 0.2)) + 1 h = signal.firwin2(n, freq, gain, nyq=nyq) plot_filter(h, sfreq, freq, gain, 'Windowed 10 Hz transition (0.2 s)', flim=flim, compensate=True) Explanation: But if we shorten the filter too much (2 cycles of 10 Hz = 0.2 s), our effective stop frequency gets pushed out past 60 Hz: End of explanation trans_bandwidth = 25 f_s = f_p + trans_bandwidth freq = [0, f_p, f_s, nyq] h = signal.firwin2(n, freq, gain, nyq=nyq) plot_filter(h, sfreq, freq, gain, 'Windowed 50 Hz transition (0.2 s)', flim=flim, compensate=True) Explanation: If we want a filter that is only 0.1 seconds long, we should probably use something more like a 25 Hz transition band (0.2 s = 5 cycles @ 25 Hz): End of explanation h_min = signal.minimum_phase(h) plot_filter(h_min, sfreq, freq, gain, 'Minimum-phase', flim=flim) Explanation: So far, we have only discussed non-causal filtering, which means that each sample at each time point $t$ is filtered using samples that come after ($t + \Delta t$) and before ($t - \Delta t$) the current time point $t$. In this sense, each sample is influenced by samples that come both before and after it. This is useful in many cases, especially because it does not delay the timing of events. However, sometimes it can be beneficial to use causal filtering, whereby each sample $t$ is filtered only using time points that came after it. Note that the delay is variable (whereas for linear/zero-phase filters it is constant) but small in the pass-band. Unlike zero-phase filters, which require time-shifting backward the output of a linear-phase filtering stage (and thus becoming non-causal), minimum-phase filters do not require any compensation to achieve small delays in the pass-band. Note that as an artifact of the minimum phase filter construction step, the filter does not end up being as steep as the linear/zero-phase version. We can construct a minimum-phase filter from our existing linear-phase filter with the :func:scipy.signal.minimum_phase function, and note that the falloff is not as steep: End of explanation dur = 10. center = 2. 
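A causal filter's output at time t uses only the present and past samples. As a hedged illustration of the distinction (toy numbers: a 101-tap, 40 Hz windowed low-pass at 1 kHz), the same FIR taps can be applied causally with lfilter, which delays everything by about half the filter length, or forward-and-backward with filtfilt, which has no net delay but uses future samples and applies the magnitude response twice:

import numpy as np
from scipy import signal

sfreq = 1000.
h = signal.firwin(101, 40. / (sfreq / 2.))   # linear-phase low-pass FIR
x = np.random.RandomState(0).randn(2000)

y_causal = signal.lfilter(h, [1.], x)        # delayed by ~len(h)//2 samples
y_zero_phase = signal.filtfilt(h, [1.], x)   # forward-backward, no net delay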
morlet_freq = f_p tlim = [center - 0.2, center + 0.2] tticks = [tlim[0], center, tlim[1]] flim = [20, 70] x = np.zeros(int(sfreq * dur) + 1) blip = morlet(sfreq, [morlet_freq], n_cycles=7)[0].imag / 20. n_onset = int(center * sfreq) - len(blip) // 2 x[n_onset:n_onset + len(blip)] += blip x_orig = x.copy() rng = np.random.RandomState(0) x += rng.randn(len(x)) / 1000. x += np.sin(2. * np.pi * 60. * np.arange(len(x)) / sfreq) / 2000. Explanation: Applying FIR filters Now lets look at some practical effects of these filters by applying them to some data. Let's construct a Gaussian-windowed sinusoid (i.e., Morlet imaginary part) plus noise (random and line). Note that the original clean signal contains frequency content in both the pass band and transition bands of our low-pass filter. End of explanation transition_band = 0.25 * f_p f_s = f_p + transition_band freq = [0., f_p, f_s, sfreq / 2.] gain = [1., 1., 0., 0.] # This would be equivalent: h = mne.filter.create_filter(x, sfreq, l_freq=None, h_freq=f_p, fir_design='firwin', verbose=True) x_v16 = np.convolve(h, x) # this is the linear->zero phase, causal-to-non-causal conversion / shift x_v16 = x_v16[len(h) // 2:] plot_filter(h, sfreq, freq, gain, 'MNE-Python 0.16 default', flim=flim, compensate=True) Explanation: Filter it with a shallow cutoff, linear-phase FIR (which allows us to compensate for the constant filter delay): End of explanation transition_band = 0.25 * f_p f_s = f_p + transition_band freq = [0., f_p, f_s, sfreq / 2.] gain = [1., 1., 0., 0.] # This would be equivalent: # filter_dur = 6.6 / transition_band # sec # n = int(sfreq * filter_dur) # h = signal.firwin2(n, freq, gain, nyq=sfreq / 2.) h = mne.filter.create_filter(x, sfreq, l_freq=None, h_freq=f_p, fir_design='firwin2', verbose=True) x_v14 = np.convolve(h, x)[len(h) // 2:] plot_filter(h, sfreq, freq, gain, 'MNE-Python 0.14 default', flim=flim, compensate=True) Explanation: Filter it with a different design method fir_design="firwin2", and also compensate for the constant filter delay. This method does not produce quite as sharp a transition compared to fir_design="firwin", despite being twice as long: End of explanation transition_band = 0.5 # Hz f_s = f_p + transition_band filter_dur = 10. # sec freq = [0., f_p, f_s, sfreq / 2.] gain = [1., 1., 0., 0.] # This would be equivalent # n = int(sfreq * filter_dur) # h = signal.firwin2(n, freq, gain, nyq=sfreq / 2.) h = mne.filter.create_filter(x, sfreq, l_freq=None, h_freq=f_p, h_trans_bandwidth=transition_band, filter_length='%ss' % filter_dur, fir_design='firwin2', verbose=True) x_v13 = np.convolve(np.convolve(h, x)[::-1], h)[::-1][len(h) - 1:-len(h) - 1] # the effective h is one that is applied to the time-reversed version of itself h_eff = np.convolve(h, h[::-1]) plot_filter(h_eff, sfreq, freq, gain, 'MNE-Python 0.13 default', flim=flim, compensate=True) Explanation: Let's also filter with the MNE-Python 0.13 default, which is a long-duration, steep cutoff FIR that gets applied twice: End of explanation h = mne.filter.design_mne_c_filter(sfreq, l_freq=None, h_freq=f_p + 2.5) x_mne_c = np.convolve(h, x)[len(h) // 2:] transition_band = 5 # Hz (default in MNE-C) f_s = f_p + transition_band freq = [0., f_p, f_s, sfreq / 2.] gain = [1., 1., 0., 0.] 
plot_filter(h, sfreq, freq, gain, 'MNE-C default', flim=flim, compensate=True) Explanation: Let's also filter it with the MNE-C default, which is a long-duration steep-slope FIR filter designed using frequency-domain techniques: End of explanation h = mne.filter.create_filter(x, sfreq, l_freq=None, h_freq=f_p, phase='minimum', fir_design='firwin', verbose=True) x_min = np.convolve(h, x) transition_band = 0.25 * f_p f_s = f_p + transition_band filter_dur = 6.6 / transition_band # sec n = int(sfreq * filter_dur) freq = [0., f_p, f_s, sfreq / 2.] gain = [1., 1., 0., 0.] plot_filter(h, sfreq, freq, gain, 'Minimum-phase filter', flim=flim) Explanation: And now an example of a minimum-phase filter: End of explanation axes = plt.subplots(1, 2)[1] def plot_signal(x, offset): Plot a signal. t = np.arange(len(x)) / sfreq axes[0].plot(t, x + offset) axes[0].set(xlabel='Time (s)', xlim=t[[0, -1]]) X = fft(x) freqs = fftfreq(len(x), 1. / sfreq) mask = freqs >= 0 X = X[mask] freqs = freqs[mask] axes[1].plot(freqs, 20 * np.log10(np.maximum(np.abs(X), 1e-16))) axes[1].set(xlim=flim) yscale = 30 yticklabels = ['Original', 'Noisy', 'FIR-firwin (0.16)', 'FIR-firwin2 (0.14)', 'FIR-steep (0.13)', 'FIR-steep (MNE-C)', 'Minimum-phase'] yticks = -np.arange(len(yticklabels)) / yscale plot_signal(x_orig, offset=yticks[0]) plot_signal(x, offset=yticks[1]) plot_signal(x_v16, offset=yticks[2]) plot_signal(x_v14, offset=yticks[3]) plot_signal(x_v13, offset=yticks[4]) plot_signal(x_mne_c, offset=yticks[5]) plot_signal(x_min, offset=yticks[6]) axes[0].set(xlim=tlim, title='FIR, Lowpass=%d Hz' % f_p, xticks=tticks, ylim=[-len(yticks) / yscale, 1. / yscale], yticks=yticks, yticklabels=yticklabels) for text in axes[0].get_yticklabels(): text.set(rotation=45, size=8) axes[1].set(xlim=flim, ylim=(-60, 10), xlabel='Frequency (Hz)', ylabel='Magnitude (dB)') mne.viz.tight_layout() plt.show() Explanation: Both the MNE-Python 0.13 and MNE-C filters have excellent frequency attenuation, but it comes at a cost of potential ringing (long-lasting ripples) in the time domain. Ringing can occur with steep filters, especially in signals with frequency content around the transition band. Our Morlet wavelet signal has power in our transition band, and the time-domain ringing is thus more pronounced for the steep-slope, long-duration filter than the shorter, shallower-slope filter: End of explanation sos = signal.iirfilter(2, f_p / nyq, btype='low', ftype='butter', output='sos') plot_filter(dict(sos=sos), sfreq, freq, gain, 'Butterworth order=2', flim=flim, compensate=True) x_shallow = signal.sosfiltfilt(sos, x) del sos Explanation: IIR filters MNE-Python also offers IIR filtering functionality that is based on the methods from :mod:scipy.signal. Specifically, we use the general-purpose functions :func:scipy.signal.iirfilter and :func:scipy.signal.iirdesign, which provide unified interfaces to IIR filter design. Designing IIR filters Let's continue with our design of a 40 Hz low-pass filter and look at some trade-offs of different IIR filters. Often the default IIR filter is a Butterworth filter_, which is designed to have a maximally flat pass-band. Let's look at a few filter orders, i.e., a few different number of coefficients used and therefore steepness of the filter: <div class="alert alert-info"><h4>Note</h4><p>Notice that the group delay (which is related to the phase) of the IIR filters below are not constant. 
In the FIR case, we can design so-called linear-phase filters that have a constant group delay, and thus compensate for the delay (making the filter non-causal) if necessary. This cannot be done with IIR filters, as they have a non-linear phase (non-constant group delay). As the filter order increases, the phase distortion near and in the transition band worsens. However, if non-causal (forward-backward) filtering can be used, e.g. with :func:`scipy.signal.filtfilt`, these phase issues can theoretically be mitigated.</p></div> End of explanation iir_params = dict(order=8, ftype='butter') filt = mne.filter.create_filter(x, sfreq, l_freq=None, h_freq=f_p, method='iir', iir_params=iir_params, verbose=True) plot_filter(filt, sfreq, freq, gain, 'Butterworth order=8', flim=flim, compensate=True) x_steep = signal.sosfiltfilt(filt['sos'], x) Explanation: The falloff of this filter is not very steep. <div class="alert alert-info"><h4>Note</h4><p>Here we have made use of second-order sections (SOS) by using :func:`scipy.signal.sosfilt` and, under the hood, :func:`scipy.signal.zpk2sos` when passing the ``output='sos'`` keyword argument to :func:`scipy.signal.iirfilter`. The filter definitions given `above <tut_filtering_basics>` use the polynomial numerator/denominator (sometimes called "tf") form ``(b, a)``, which are theoretically equivalent to the SOS form used here. In practice, however, the SOS form can give much better results due to issues with numerical precision (see :func:`scipy.signal.sosfilt` for an example), so SOS should be used whenever possible.</p></div> Let's increase the order, and note that now we have better attenuation, with a longer impulse response. Let's also switch to using the MNE filter design function, which simplifies a few things and gives us some information about the resulting filter: End of explanation iir_params.update(ftype='cheby1', rp=1., # dB of acceptable pass-band ripple ) filt = mne.filter.create_filter(x, sfreq, l_freq=None, h_freq=f_p, method='iir', iir_params=iir_params, verbose=True) plot_filter(filt, sfreq, freq, gain, 'Chebychev-1 order=8, ripple=1 dB', flim=flim, compensate=True) Explanation: There are other types of IIR filters that we can use. For a complete list, check out the documentation for :func:scipy.signal.iirdesign. Let's try a Chebychev (type I) filter, which trades off ripple in the pass-band to get better attenuation in the stop-band: End of explanation iir_params['rp'] = 6. filt = mne.filter.create_filter(x, sfreq, l_freq=None, h_freq=f_p, method='iir', iir_params=iir_params, verbose=True) plot_filter(filt, sfreq, freq, gain, 'Chebychev-1 order=8, ripple=6 dB', flim=flim, compensate=True) Explanation: If we can live with even more ripple, we can get it slightly steeper, but the impulse response begins to ring substantially longer (note the different x-axis scale): End of explanation axes = plt.subplots(1, 2)[1] yticks = np.arange(4) / -30. 
yticklabels = ['Original', 'Noisy', 'Butterworth-2', 'Butterworth-8'] plot_signal(x_orig, offset=yticks[0]) plot_signal(x, offset=yticks[1]) plot_signal(x_shallow, offset=yticks[2]) plot_signal(x_steep, offset=yticks[3]) axes[0].set(xlim=tlim, title='IIR, Lowpass=%d Hz' % f_p, xticks=tticks, ylim=[-0.125, 0.025], yticks=yticks, yticklabels=yticklabels,) for text in axes[0].get_yticklabels(): text.set(rotation=45, size=8) axes[1].set(xlim=flim, ylim=(-60, 10), xlabel='Frequency (Hz)', ylabel='Magnitude (dB)') mne.viz.adjust_axes(axes) mne.viz.tight_layout() plt.show() Explanation: Applying IIR filters Now let's look at how our shallow and steep Butterworth IIR filters perform on our Morlet signal from before: End of explanation x = np.zeros(int(2 * sfreq)) t = np.arange(0, len(x)) / sfreq - 0.2 onset = np.where(t >= 0.5)[0][0] cos_t = np.arange(0, int(sfreq * 0.8)) / sfreq sig = 2.5 - 2.5 * np.cos(2 * np.pi * (1. / 0.8) * cos_t) x[onset:onset + len(sig)] = sig iir_lp_30 = signal.iirfilter(2, 30. / sfreq, btype='lowpass') iir_hp_p1 = signal.iirfilter(2, 0.1 / sfreq, btype='highpass') iir_lp_2 = signal.iirfilter(2, 2. / sfreq, btype='lowpass') iir_hp_2 = signal.iirfilter(2, 2. / sfreq, btype='highpass') x_lp_30 = signal.filtfilt(iir_lp_30[0], iir_lp_30[1], x, padlen=0) x_hp_p1 = signal.filtfilt(iir_hp_p1[0], iir_hp_p1[1], x, padlen=0) x_lp_2 = signal.filtfilt(iir_lp_2[0], iir_lp_2[1], x, padlen=0) x_hp_2 = signal.filtfilt(iir_hp_2[0], iir_hp_2[1], x, padlen=0) xlim = t[[0, -1]] ylim = [-2, 6] xlabel = 'Time (sec)' ylabel = r'Amplitude ($\mu$V)' tticks = [0, 0.5, 1.3, t[-1]] axes = plt.subplots(2, 2)[1].ravel() for ax, x_f, title in zip(axes, [x_lp_2, x_lp_30, x_hp_2, x_hp_p1], ['LP$_2$', 'LP$_{30}$', 'HP$_2$', 'LP$_{0.1}$']): ax.plot(t, x, color='0.5') ax.plot(t, x_f, color='k', linestyle='--') ax.set(ylim=ylim, xlim=xlim, xticks=tticks, title=title, xlabel=xlabel, ylabel=ylabel) mne.viz.adjust_axes(axes) mne.viz.tight_layout() plt.show() Explanation: Some pitfalls of filtering Multiple recent papers have noted potential risks of drawing errant inferences due to misapplication of filters. Low-pass problems Filters in general, especially those that are non-causal (zero-phase), can make activity appear to occur earlier or later than it truly did. As mentioned in VanRullen (2011) :footcite:VanRullen2011, investigations of commonly (at the time) used low-pass filters created artifacts when they were applied to simulated data. However, such deleterious effects were minimal in many real-world examples in Rousselet (2012) :footcite:Rousselet2012. Perhaps more revealing, it was noted in Widmann & Schröger (2012) :footcite:WidmannSchroger2012 that the problematic low-pass filters from VanRullen (2011) :footcite:VanRullen2011: Used a least-squares design (like :func:scipy.signal.firls) that included "do-not-care" transition regions, which can lead to uncontrolled behavior. Had a filter length that was independent of the transition bandwidth, which can cause excessive ringing and signal distortion. High-pass problems When it comes to high-pass filtering, using corner frequencies above 0.1 Hz were found in Acunzo et al. (2012) :footcite:AcunzoEtAl2012 to: "... generate a systematic bias easily leading to misinterpretations of neural activity.” In a related paper, Widmann et al. (2015) :footcite:WidmannEtAl2015 also came to suggest a 0.1 Hz highpass. More evidence followed in Tanner et al. (2015) :footcite:TannerEtAl2015 of such distortions. 
Using data from language ERP studies of semantic and syntactic processing (i.e., N400 and P600), using a high-pass above 0.3 Hz caused significant effects to be introduced implausibly early when compared to the unfiltered data. From this, the authors suggested the optimal high-pass value for language processing to be 0.1 Hz. We can recreate a problematic simulation from Tanner et al. (2015) :footcite:TannerEtAl2015: "The simulated component is a single-cycle cosine wave with an amplitude of 5µV [sic], onset of 500 ms poststimulus, and duration of 800 ms. The simulated component was embedded in 20 s of zero values to avoid filtering edge effects... Distortions [were] caused by 2 Hz low-pass and high-pass filters... No visible distortion to the original waveform [occurred] with 30 Hz low-pass and 0.01 Hz high-pass filters... Filter frequencies correspond to the half-amplitude (-6 dB) cutoff (12 dB/octave roll-off)." <div class="alert alert-info"><h4>Note</h4><p>This simulated signal contains energy not just within the pass-band, but also within the transition and stop-bands -- perhaps most easily understood because the signal has a non-zero DC value, but also because it is a shifted cosine that has been *windowed* (here multiplied by a rectangular window), which makes the cosine and DC frequencies spread to other frequencies (multiplication in time is convolution in frequency, so multiplying by a rectangular window in the time domain means convolving a sinc function with the impulses at DC and the cosine frequency in the frequency domain).</p></div> End of explanation def baseline_plot(x): all_axes = plt.subplots(3, 2)[1] for ri, (axes, freq) in enumerate(zip(all_axes, [0.1, 0.3, 0.5])): for ci, ax in enumerate(axes): if ci == 0: iir_hp = signal.iirfilter(4, freq / sfreq, btype='highpass', output='sos') x_hp = signal.sosfiltfilt(iir_hp, x, padlen=0) else: x_hp -= x_hp[t < 0].mean() ax.plot(t, x, color='0.5') ax.plot(t, x_hp, color='k', linestyle='--') if ri == 0: ax.set(title=('No ' if ci == 0 else '') + 'Baseline Correction') ax.set(xticks=tticks, ylim=ylim, xlim=xlim, xlabel=xlabel) ax.set_ylabel('%0.1f Hz' % freq, rotation=0, horizontalalignment='right') mne.viz.adjust_axes(axes) mne.viz.tight_layout() plt.suptitle(title) plt.show() baseline_plot(x) Explanation: Similarly, in a P300 paradigm reported by Kappenman & Luck (2010) :footcite:KappenmanLuck2010, they found that applying a 1 Hz high-pass decreased the probability of finding a significant difference in the N100 response, likely because the P300 response was smeared (and inverted) in time by the high-pass filter such that it tended to cancel out the increased N100. However, they nonetheless note that some high-passing can still be useful to deal with drifts in the data. Even though these papers generally advise a 0.1 Hz or lower frequency for a high-pass, it is important to keep in mind (as most authors note) that filtering choices should depend on the frequency content of both the signal(s) of interest and the noise to be suppressed. For example, in some of the MNE-Python examples involving the sample-dataset dataset, high-pass values of around 1 Hz are used when looking at auditory or visual N100 responses, because we analyze standard (not deviant) trials and thus expect that contamination by later or slower components will be limited. Baseline problems (or solutions?) In an evolving discussion, Tanner et al. (2015) :footcite:TannerEtAl2015 suggest using baseline correction to remove slow drifts in data. However, Maess et al. 
(2016) :footcite:MaessEtAl2016 suggest that baseline correction, which is a form of high-passing, does not offer substantial advantages over standard high-pass filtering. Tanner et al. (2016) :footcite:TannerEtAl2016 rebutted that baseline correction can correct for problems with filtering. To see what they mean, consider again our old simulated signal x from before: End of explanation n_pre = (t < 0).sum() sig_pre = 1 - np.cos(2 * np.pi * np.arange(n_pre) / (0.5 * n_pre)) x[:n_pre] += sig_pre baseline_plot(x) Explanation: In response, Maess et al. (2016) :footcite:MaessEtAl2016a note that these simulations do not address cases of pre-stimulus activity that is shared across conditions, as applying baseline correction will effectively copy the topology outside the baseline period. We can see this if we give our signal x with some consistent pre-stimulus activity, which makes everything look bad. <div class="alert alert-info"><h4>Note</h4><p>An important thing to keep in mind with these plots is that they are for a single simulated sensor. In multi-electrode recordings the topology (i.e., spatial pattern) of the pre-stimulus activity will leak into the post-stimulus period. This will likely create a spatially varying distortion of the time-domain signals, as the averaged pre-stimulus spatial pattern gets subtracted from the sensor time courses.</p></div> Putting some activity in the baseline period: End of explanation # Use the same settings as when calling e.g., `raw.filter()` fir_coefs = mne.filter.create_filter( data=None, # data is only used for sanity checking, not strictly needed sfreq=1000., # sfreq of your data in Hz l_freq=None, h_freq=40., # assuming a lowpass of 40 Hz method='fir', fir_window='hamming', fir_design='firwin', verbose=True) # See the printed log for the transition bandwidth and filter length. # Alternatively, get the filter length through: filter_length = fir_coefs.shape[0] Explanation: Both groups seem to acknowledge that the choices of filtering cutoffs, and perhaps even the application of baseline correction, depend on the characteristics of the data being investigated, especially when it comes to: The frequency content of the underlying evoked activity relative to the filtering parameters. The validity of the assumption of no consistent evoked activity in the baseline period. We thus recommend carefully applying baseline correction and/or high-pass values based on the characteristics of the data to be analyzed. Filtering defaults Defaults in MNE-Python Most often, filtering in MNE-Python is done at the :class:mne.io.Raw level, and thus :func:mne.io.Raw.filter is used. This function under the hood (among other things) calls :func:mne.filter.filter_data to actually filter the data, which by default applies a zero-phase FIR filter designed using :func:scipy.signal.firwin. In Widmann et al. (2015) :footcite:WidmannEtAl2015, they suggest a specific set of parameters to use for high-pass filtering, including: "... 
providing a transition bandwidth of 25% of the lower passband edge but, where possible, not lower than 2 Hz and otherwise the distance from the passband edge to the critical frequency.” In practice, this means that for each high-pass value l_freq or low-pass value h_freq below, you would get this corresponding l_trans_bandwidth or h_trans_bandwidth, respectively, if the sample rate were 100 Hz (i.e., Nyquist frequency of 50 Hz): +------------------+-------------------+-------------------+ | l_freq or h_freq | l_trans_bandwidth | h_trans_bandwidth | +==================+===================+===================+ | 0.01 | 0.01 | 2.0 | +------------------+-------------------+-------------------+ | 0.1 | 0.1 | 2.0 | +------------------+-------------------+-------------------+ | 1.0 | 1.0 | 2.0 | +------------------+-------------------+-------------------+ | 2.0 | 2.0 | 2.0 | +------------------+-------------------+-------------------+ | 4.0 | 2.0 | 2.0 | +------------------+-------------------+-------------------+ | 8.0 | 2.0 | 2.0 | +------------------+-------------------+-------------------+ | 10.0 | 2.5 | 2.5 | +------------------+-------------------+-------------------+ | 20.0 | 5.0 | 5.0 | +------------------+-------------------+-------------------+ | 40.0 | 10.0 | 10.0 | +------------------+-------------------+-------------------+ | 50.0 | 12.5 | 12.5 | +------------------+-------------------+-------------------+ MNE-Python has adopted this definition for its high-pass (and low-pass) transition bandwidth choices when using l_trans_bandwidth='auto' and h_trans_bandwidth='auto'. To choose the filter length automatically with filter_length='auto', the reciprocal of the shortest transition bandwidth is used to ensure decent attenuation at the stop frequency. Specifically, the reciprocal (in samples) is multiplied by 3.1, 3.3, or 5.0 for the Hann, Hamming, or Blackman windows, respectively, as selected by the fir_window argument for fir_design='firwin', and double these for fir_design='firwin2' mode. <div class="alert alert-info"><h4>Note</h4><p>For ``fir_design='firwin2'``, the multiplicative factors are doubled compared to what is given in Ifeachor & Jervis (2002) :footcite:`IfeachorJervis2002` (p. 357), as :func:`scipy.signal.firwin2` has a smearing effect on the frequency response, which we compensate for by increasing the filter length. This is why ``fir_desgin='firwin'`` is preferred to ``fir_design='firwin2'``.</p></div> In 0.14, we default to using a Hamming window in filter design, as it provides up to 53 dB of stop-band attenuation with small pass-band ripple. <div class="alert alert-info"><h4>Note</h4><p>In band-pass applications, often a low-pass filter can operate effectively with fewer samples than the high-pass filter, so it is advisable to apply the high-pass and low-pass separately when using ``fir_design='firwin2'``. For design mode ``fir_design='firwin'``, there is no need to separate the operations, as the lowpass and highpass elements are constructed separately to meet the transition band requirements.</p></div> For more information on how to use the MNE-Python filtering functions with real data, consult the preprocessing tutorial on tut-filter-resample. Defaults in MNE-C MNE-C by default uses: 5 Hz transition band for low-pass filters. 3-sample transition band for high-pass filters. Filter length of 8197 samples. 
The filter is designed in the frequency domain, creating a linear-phase filter such that the delay is compensated for as is done with the MNE-Python phase='zero' filtering option. Squared-cosine ramps are used in the transition regions. Because these are used in place of more gradual (e.g., linear) transitions, a given transition width will result in more temporal ringing but also more rapid attenuation than the same transition width in windowed FIR designs. The default filter length will generally have excellent attenuation but long ringing for the sample rates typically encountered in M/EEG data (e.g. 500-2000 Hz). Defaults in other software A good but possibly outdated comparison of filtering in various software packages is available in Widmann et al. (2015) :footcite:WidmannEtAl2015. Briefly: EEGLAB MNE-Python 0.14 defaults to behavior very similar to that of EEGLAB (see the EEGLAB filtering FAQ_ for more information). FieldTrip By default FieldTrip applies a forward-backward Butterworth IIR filter of order 4 (band-pass and band-stop filters) or 2 (for low-pass and high-pass filters). Similar filters can be achieved in MNE-Python when filtering with :meth:raw.filter(..., method='iir') &lt;mne.io.Raw.filter&gt; (see also :func:mne.filter.construct_iir_filter for options). For more information, see e.g. the FieldTrip band-pass documentation &lt;ftbp_&gt;_. Reporting Filters On page 45 in Widmann et al. (2015) :footcite:WidmannEtAl2015, there is a convenient list of important filter parameters that should be reported with each publication: Filter type (high-pass, low-pass, band-pass, band-stop, FIR, IIR) Cutoff frequency (including definition) Filter order (or length) Roll-off or transition bandwidth Passband ripple and stopband attenuation Filter delay (zero-phase, linear-phase, non-linear phase) and causality Direction of computation (one-pass forward/reverse, or two-pass forward and reverse) In the following, we will address how to deal with these parameters in MNE: Filter type Depending on the function or method used, the filter type can be specified. To name an example, in :func:mne.filter.create_filter, the relevant arguments would be l_freq, h_freq, method, and if the method is FIR fir_window and fir_design. Cutoff frequency The cutoff of FIR filters in MNE is defined as half-amplitude cutoff in the middle of the transition band. That is, if you construct a lowpass FIR filter with h_freq = 40, the filter function will provide a transition bandwidth that depends on the h_trans_bandwidth argument. The desired half-amplitude cutoff of the lowpass FIR filter is then at h_freq + transition_bandwidth/2.. Filter length (order) and transition bandwidth (roll-off) In the tut_filtering_in_python section, we have already talked about the default filter lengths and transition bandwidths that are used when no custom values are specified using the respective filter function's arguments. If you want to find out about the filter length and transition bandwidth that were used through the 'auto' setting, you can use :func:mne.filter.create_filter to print out the settings once more: End of explanation
2,014
Given the following text description, write Python code to implement the functionality described below step by step Description: pomegranate / libpgm comparison authors Step1: Let's first compare the two packages based on the number of variables. Step2: We can see many expected results from this graph. libpgm implements a quadratic-time algorithm, and it appears that the growth is roughly quadratic. The exact algorithm in pomegranate is an exponential-time algorithm, and so while it does have an efficient implementation that makes it much faster than libpgm's algorithm for small numbers of variables, eventually it will become slower. Lastly, the Chow-Liu tree algorithm is far faster than both other algorithms because it only finds the best tree approximation. While it is a quadratic-time algorithm, it is also a simpler one. Let's now compare the speed on different numbers of samples.
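Before the timing code below, here is a short illustrative sketch of the idea behind the Chow-Liu approximation mentioned above: score every pair of variables by mutual information and keep a maximum-weight spanning tree. This is the concept only, not pomegranate's implementation.
import numpy as np
from sklearn.metrics import mutual_info_score
from scipy.sparse.csgraph import minimum_spanning_tree

X = np.random.randint(2, size=(1000, 6))           # toy binary data
d = X.shape[1]
mi = np.zeros((d, d))
for i in range(d):
    for j in range(i + 1, d):
        mi[i, j] = mutual_info_score(X[:, i], X[:, j])

# SciPy only offers a *minimum* spanning tree, so negate the weights;
# a tiny offset keeps zero-MI pairs from being dropped as missing edges.
tree = minimum_spanning_tree(-(mi + 1e-12))
print(np.transpose(np.nonzero(tree.toarray())))    # edges of the tree skeleton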
Python Code: %pylab inline import seaborn, time seaborn.set_style('whitegrid') Explanation: pomegranate / libpgm comparison authors: Jacob Schreiber ([email protected]) <a href="https://github.com/CyberPoint/libpgm">libpgm</a> is a python package for creating and using Bayesian networks. I was unable to figure out how to use libpgm to do inference properly without raising errors, but I was able to get structure learning working. libpgm uses constraints for structure learning, a process which is not probabilistic, but can be asymptoptically more efficient (between O(n^2) and O(n^3) as opposed to exponential). To my knowledge, they do not have exact structure learning implemented, likely due to the super-exponential nature of the naive algorithm. pomegranate has both the exact structure learning problem, and the Chow-Liu tree approximation, implemented. The exact structure learning problem uses an efficient dynamic programming solution to reduce the complexity from super-exponential to exponential in time with the number of variables. The Chow-Liu tree approximation finds the best tree which spans all variables. Lets compare the structure learning task in pomegranate versus the structure learning task in libpgm for different numbers of variables to compare these speed of the two packages. End of explanation from pomegranate import BayesianNetwork from libpgm.pgmlearner import PGMLearner libpgm_time = [] pomegranate_time = [] pomegranate_cl_time = [] for i in range(2, 15): tic = time.time() X = numpy.random.randint(2, size=(10000, i)) model = BayesianNetwork.from_samples(X, algorithm='exact') pomegranate_time.append(time.time() - tic) tic = time.time() model = BayesianNetwork.from_samples(X, algorithm='chow-liu') pomegranate_cl_time.append(time.time() - tic) X = [{j : X[i, j] for j in range(X.shape[1])} for i in range(X.shape[0])] learner = PGMLearner() tic = time.time() model = learner.discrete_constraint_estimatestruct(X) libpgm_time.append(time.time() - tic) plt.figure(figsize=(14, 6)) plt.title("Bayesian Network Structure Learning Time", fontsize=16) plt.xlabel("Number of Variables", fontsize=14) plt.ylabel("Time (s)", fontsize=14) plt.plot(range(2, 15), libpgm_time, c='c', label="libpgm") plt.plot(range(2, 15), pomegranate_time, c='m', label="pomegranate exact") plt.plot(range(2, 15), pomegranate_cl_time, c='r', label="pomegranate chow liu") plt.legend(loc=2, fontsize=14) plt.xticks(fontsize=14) plt.yticks(fontsize=14) plt.show() Explanation: Lets first compare the two packages based on number of variables. 
End of explanation libpgm_time = [] pomegranate_time = [] pomegranate_cl_time = [] x = 10, 25, 100, 250, 1000, 2500, 10000, 25000, 100000, 250000, 1000000 for i in x: tic = time.time() X = numpy.random.randint(2, size=(i, 10)) model = BayesianNetwork.from_samples(X, algorithm='exact') pomegranate_time.append(time.time() - tic) tic = time.time() model = BayesianNetwork.from_samples(X, algorithm='chow-liu') pomegranate_cl_time.append(time.time() - tic) X = [{j : X[i, j] for j in range(X.shape[1])} for i in range(X.shape[0])] learner = PGMLearner() tic = time.time() model = learner.discrete_constraint_estimatestruct(X) libpgm_time.append(time.time() - tic) plt.figure(figsize=(14, 6)) plt.title("Bayesian Network Structure Learning Time", fontsize=16) plt.xlabel("Number of Samples", fontsize=14) plt.ylabel("Time (s)", fontsize=14) plt.plot(x, libpgm_time, c='c', label="libpgm") plt.plot(x, pomegranate_time, c='m', label="pomegranate exact") plt.plot(x, pomegranate_cl_time, c='r', label="pomegranate chow liu") plt.legend(loc=2, fontsize=14) plt.xticks(fontsize=14) plt.yticks(fontsize=14) plt.show() Explanation: We can see many expected results from this graph. libpgm implements a quadratic time algorithm, and it appears that the growth is roughly quadratic. The exact algorithm in pomegranate is an exponential time algorithm, and so while it does have an efficient implementation that causes the it to be much faster than libpgm's algorithm for small numbers of variables, eventually it will become slower. Lastly, the chow-liu tree algorithm is far faster than both other algorithms because it is finding the best tree approximation. While it is a quadratic time algorithm, it is also a simpler one. Lets now compare the speed on different numbers of samples. End of explanation
2,015
Given the following text description, write Python code to implement the functionality described below step by step Description: Exercise 03 Analyze the baby names dataset using pandas Step1: segment the data into boy and girl names Step2: Analyzing the popularity of a name over time
Python Code: %matplotlib inline import pandas as pd import numpy as np from matplotlib import pyplot as plt # Load dataset names = pd.read_csv('baby-names2.csv') names.head() names[names.year == 1993].head() Explanation: Exercise 03 Analyze the baby names dataset using pandas End of explanation boys = names[names.sex == 'boy'].copy() girls = names[names.sex == 'girl'].copy() Explanation: segment the data into boy and girl names End of explanation william = boys[boys['name']=='William'] plt.plot(range(william.shape[0]), william['prop']) plt.xticks(range(william.shape[0])[::5], william['year'].values[::5], rotation='vertical') plt.ylim([0, 0.1]) plt.show() Daniel = boys[boys['name']=='Daniel'] plt.plot(range(Daniel.shape[0]), Daniel['prop']) plt.xticks(range(Daniel.shape[0])[::5], Daniel['year'].values[::5], rotation='vertical') plt.ylim([0, 0.1]) plt.show() Explanation: Analyzing the popularity of a name over time End of explanation
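An alternative way to get the same popularity-over-time plots, assuming the same 'baby-names2.csv' columns (name, sex, year, prop); this is a sketch, not part of the original exercise.
import pandas as pd
import matplotlib.pyplot as plt

names = pd.read_csv('baby-names2.csv')
trend = (names[names.sex == 'boy']
         .pivot_table(index='year', columns='name', values='prop'))
trend[['William', 'Daniel']].plot(ylim=(0, 0.1))   # year-indexed, so no manual xticks needed
plt.ylabel('proportion of births')
plt.show()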
2,016
Given the following text description, write Python code to implement the functionality described below step by step Description: <img align="left" src="imgs/logo.jpg" width="50px" style="margin-right Step1: I. Background A. Preprocessing the Database In a real application, there is a lot of data preparation, parsing, and database loading that needs to be completed before we dive into writing labeling functions. Here we've pre-generated a database instance for you. All candidates and gold labels (i.e., human-generated labels) are queried from this database for use in the the tutorial. See our preprocessing tutorial <a href="Workshop_5_Advanced_Preprocessing.ipynb">Workshop 5 Advanced Preprocessing</a> for more details on how this database is built. B. Using a Development Set of Human-labeled Data In our setting, we will use the phrase development set to refer to a set of examples (here, a subset of our training set) which we label by hand and use to help us develop and refine labeling functions. Unlike the test set, which we do not look at and use for final evaluation, we can inspect the development set while writing labeling functions. This is a list of {-1,1} labels. Step2: C. Data Exploration How do we come up with good keywords and patterns to encode as labeling functions? One way is to manually explore our training data. Here we load a subset of our training candidates into a SentenceNgramViewer object to examine candidates in their parent context. Our goal is to build an intuition for patterns and keywords that are predictive of a candidate's true label. Step3: D. Labeling Function Metrics 1. Coverage One simple metric we can compute quickly is our coverage, the number of candidates labeled by our LF, on our training set (or any other set). 2. Precision / Recall / F1 If we have gold labeled data, we can also compute standard precision, recall, and F1 metrics for the output of a single labeling function. These metrics are computed over 4 error buckets Step4: Viewing Error Buckets If we have gold labeled data, we can evaluate formal metrics. It's useful to view specific errors for a given LF input in the SentenceNgramViewer. Below, we'll compute our empirical scores using human-labeled development set data and then look at any false positive matches by our LF_marriage LF. We can see below from our scores that this LF isn't very accurate -- only 36% precision! Step5: Other Search Contexts We can also search other sentence contexts, such as a window of text to the left or right of our candidate spans. Step6: 4. Regular Expression Factory Sometimes we want to express more generic textual patterns to match against candidates. Perhaps we want to match a specific phrase like 'power couple' or look for modifier prefixes like 'ex' wife, husband, etc. We can generate this supervision in the same way as above using sets of regular expressions -- a formal language for string matching. Step7: B. Distant Supervision Labeling Functions In addition to using factories that encode pattern matching heuristics, we can also write labeling functions that distantly supervise examples. Here, we'll load in a list of known spouse pairs and check to see if the candidate pair matches one of these. DBpedia http Step9: C. Writing Custom Labeling Functions The strength of LFs is that you can write any arbitrary function and use it to supervise a classification task. This approach can combine many of the same strategies discussed above or encode other information. 
For example, we observe that when mentions of person names occur far apart in a sentence, this is a good indicator that the candidate's label is False. Step10: labeled = coverage(session, LF_too_far_apart, split=1) score(session, LF_too_far_apart, split=1, gold=L_gold_dev) D. Composing Labeling Functions Another useful technique for writing LFs is composing multiple, weaker LFs together. For example, our LF_marriage example above has low precision. Instead of modifying LF_marriage, we'll compose it with our LF_too_far_apart from above. LF_marriage TP Step11: VI. Development Sandbox A. Writing Your Own Labeling Functions Using the information above, write your own labeling functions for this task. Step12: B. Applying Labeling Functions Next, we need to actually run the LFs over all of our training candidates, producing a set of Labels and LabelKeys (just the names of the LFs) in the database. We'll do this using the LabelAnnotator class, a UDF which we will again run with UDFRunner. 1. Preparing your Labeling Functions First we put all our labeling functions into list Step13: Then we setup the label annotator class Step14: 2. Generating the Label Matrix Step15: 3. Label Matrix Empirical Accuracies If we have a small set of human-labeled data
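The coverage and precision/recall bookkeeping described above can be reproduced with plain NumPy. The vote and gold arrays here are made up for illustration and are not the workshop's helper functions.
import numpy as np

lf_votes = np.array([1, 0, -1, 1, 0, 1, -1, 0])    # one LF: 1/-1 votes, 0 = abstain
gold     = np.array([1, 1, -1, -1, 1, 1, 1, -1])   # hand labels on the same candidates

coverage = np.mean(lf_votes != 0)                  # fraction of candidates labeled at all
tp = np.sum((lf_votes == 1) & (gold == 1))
fp = np.sum((lf_votes == 1) & (gold == -1))
fn = np.sum((lf_votes == -1) & (gold == 1))
precision = tp / max(tp + fp, 1)
recall = tp / max(tp + fn, 1)
f1 = 2 * precision * recall / max(precision + recall, 1e-9)
print(coverage, precision, recall, f1)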
Python Code: %load_ext autoreload %autoreload 2 %matplotlib inline import re import sys import numpy as np # Connect to the database backend and initalize a Snorkel session from lib.init import * from lib.scoring import * from lib.lf_factories import * from snorkel.lf_helpers import test_LF from snorkel.annotations import load_gold_labels from snorkel.lf_helpers import ( get_left_tokens, get_right_tokens, get_between_tokens, get_text_between, get_tagged_text, ) # initialize our candidate type definition Spouse = candidate_subclass('Spouse', ['person1', 'person2']) Explanation: <img align="left" src="imgs/logo.jpg" width="50px" style="margin-right:10px"> Snorkel Workshop: Extracting Spouse Relations <br> from the News Part 2: Writing Labeling Functions In Snorkel, our primary interface through which we provide training signal to the end extraction model we are training is by writing labeling functions (LFs) (as opposed to hand-labeling massive training sets). We'll go through some examples for our spouse extraction task below. A labeling function isn't anything special. It's just a Python function that accepts a Candidate as the input argument and returns 1 if it says the Candidate should be marked as true, -1 if it says the Candidate should be marked as false, and 0 if it doesn't know how to vote and abstains. In practice, many labeling functions are unipolar: it labels only 1s and 0s, or it labels only -1s and 0s. Recall that our goal is to ultimately train a high-performance classification model that predicts which of our Candidates are true mentions of spouse relations. It turns out that we can do this by writing potentially low-quality labeling functions! End of explanation L_gold_dev = load_gold_labels(session, annotator_name='gold', split=1) Explanation: I. Background A. Preprocessing the Database In a real application, there is a lot of data preparation, parsing, and database loading that needs to be completed before we dive into writing labeling functions. Here we've pre-generated a database instance for you. All candidates and gold labels (i.e., human-generated labels) are queried from this database for use in the the tutorial. See our preprocessing tutorial <a href="Workshop_5_Advanced_Preprocessing.ipynb">Workshop 5 Advanced Preprocessing</a> for more details on how this database is built. B. Using a Development Set of Human-labeled Data In our setting, we will use the phrase development set to refer to a set of examples (here, a subset of our training set) which we label by hand and use to help us develop and refine labeling functions. Unlike the test set, which we do not look at and use for final evaluation, we can inspect the development set while writing labeling functions. This is a list of {-1,1} labels. End of explanation from snorkel.viewer import SentenceNgramViewer # load our list of training & development candidates train_cands = session.query(Candidate).filter(Candidate.split == 0).all() dev_cands = session.query(Candidate).filter(Candidate.split == 1).all() SentenceNgramViewer(train_cands[0:500], session, n_per_page=1) Explanation: C. Data Exploration How do we come up with good keywords and patterns to encode as labeling functions? One way is to manually explore our training data. Here we load a subset of our training candidates into a SentenceNgramViewer object to examine candidates in their parent context. Our goal is to build an intuition for patterns and keywords that are predictive of a candidate's true label. 
End of explanation marriage = {'husband', 'wife'} # we'll initialize our LFG and test its coverage on training candidates LF_marriage = MatchTerms(name='marriage', terms=marriage, label=1, search='between').lf() # what candidates are covered by this LF? labeled = coverage(session, LF_marriage, split=0) # now let's view what this LF labeled SentenceNgramViewer(labeled, session, n_per_page=1) Explanation: D. Labeling Function Metrics 1. Coverage One simple metric we can compute quickly is our coverage, the number of candidates labeled by our LF, on our training set (or any other set). 2. Precision / Recall / F1 If we have gold labeled data, we can also compute standard precision, recall, and F1 metrics for the output of a single labeling function. These metrics are computed over 4 error buckets: True Positives (tp), False Positives (fp), True Negatives (tn), and False Negatives (fn). \begin{equation} precision = \frac{tp}{(tp + fp)} \end{equation} \begin{equation} recall = \frac{tp}{(tp + fn)} \end{equation} \begin{equation} F1 = 2 \cdot \frac{ (precision \cdot recall)}{(precision + recall)} \end{equation} II. Labeling Functions A. Pattern Matching Labeling Functions One powerful form of labeling function design is defining sets of keywords or regular expressions that, as a human labeler, you know are correlated with the true label. In the terminology of Bayesian inference, this can be thought of as defining a prior over your word features. For example, we could define a dictionary of terms that occur between person names in a candidate. One simple dictionary of terms indicating a true relation could be: marriage = {'husband', 'wife'} We can then write a labeling function that checks for a match with these terms in the text that occurs between person names. def LF_marriage_terms_between(c): return 1 if len(marriage.intersection(get_between_tokens(c))) &gt; 0 else 0 The idea is that we can easily create dictionaries that encode themes or categories descibing all kinds of relationships between 2 people and then use these objects to weakly supervise our classification task. other_relationship = {'boyfriend', 'girlfriend'} IMPORTANT Good labeling functions manage a trade-off between high coverage and high precision. When constructing your dictionaries, think about building larger, noiser sets of terms instead of relying on 1 or 2 keywords. Sometimes a single word can be very predictive (e.g., ex-wife) but it's almost always better to define something more general, such as a regular expression pattern capturing any string with the ex- prefix. 1. Labeling Function Factories The above is a reasonable way to write labeling functions. However, this type of design pattern is so common that we rely on another abstraction to help us build LFs more quickly: labeling function factories. Factories accept simple inputs, like dictionaries or a set of regular expressions, and automatically builds labeling functions for you. The MatchTerms and MatchRegex factories require a few parameter definitions to setup: name: a string that describes the category of terms/regular expressons label: patterns correlate with a True or False label (1 or -1) search: search a specific part of the sentence ('left'|'right'|'between'|'sentence') window: the length of tokens to match against for ('left'|'right') search spaces 2. Term Matching Factory We illustrate below how you can use the MatchTerms factory to create and test an LF on training candidates. 
When examining candidates in the SentenceNgramViewer, notice that husband or wife always occurs between person names. That is the supervision signal encoded by this LF! End of explanation tp, fp, tn, fn = error_analysis(session, LF_marriage, split=1, gold=L_gold_dev) # now let's view what this LF labeled SentenceNgramViewer(fp, session, n_per_page=1) Explanation: Viewing Error Buckets If we have gold labeled data, we can evaluate formal metrics. It's useful to view specific errors for a given LF input in the SentenceNgramViewer. Below, we'll compute our empirical scores using human-labeled development set data and then look at any false positive matches by our LF_marriage LF. We can see below from our scores that this LF isn't very accurate -- only 36% precision! End of explanation other_relationship = {'boyfriend', 'girlfriend'} LF_other_relationship = MatchTerms(name='other_relationship', terms=other_relationship, label=-1, search='left', window=1).lf() labeled = coverage(session, LF_other_relationship, split=1) # now let's view what this LF labeled SentenceNgramViewer(labeled, session, n_per_page=1) Explanation: Other Search Contexts We can also search other sentence contexts, such as a window of text to the left or right of our candidate spans. End of explanation exes_rgxs = {' ex[- ](husband|wife)'} LF_exes = MatchRegex(name='exes', rgxs=exes_rgxs, label=-1, search='between').lf() labeled = coverage(session, LF_exes, split=1) # now let's view what this LF labeled SentenceNgramViewer(labeled, session, n_per_page=1) Explanation: 4. Regular Expression Factory Sometimes we want to express more generic textual patterns to match against candidates. Perhaps we want to match a specific phrase like 'power couple' or look for modifier prefixes like 'ex' wife, husband, etc. We can generate this supervision in the same way as above using sets of regular expressions -- a formal language for string matching. End of explanation from lib.dbpedia import known_spouses list(known_spouses)[0:5] LF_distant_supervision = DistantSupervision("dbpedia", kb=known_spouses).lf() labeled = coverage(session, LF_distant_supervision, split=1) # score out LF against dev set labels score(session, LF_distant_supervision, split=1, gold=L_gold_dev) SentenceNgramViewer(labeled, session, n_per_page=1) Explanation: B. Distant Supervision Labeling Functions In addition to using factories that encode pattern matching heuristics, we can also write labeling functions that distantly supervise examples. Here, we'll load in a list of known spouse pairs and check to see if the candidate pair matches one of these. DBpedia http://wiki.dbpedia.org/ Out database of known spouses comes from DBpedia, which is a community-driven resource similar to Wikipedia but for curating structured data. We'll use a preprocessed snapshot as our knowledge base for all labeling function development. We can look at some of the example entries from DBPedia and use them in a simple distant supervision labeling function. End of explanation def LF_too_far_apart(c): Person mentions occur at a distance > 50 words return -1 if len(list(get_between_tokens(c))) > 50 else 0 Explanation: C. Writing Custom Labeling Functions The strength of LFs is that you can write any arbitrary function and use it to supervise a classification task. This approach can combine many of the same strategies discussed above or encode other information. 
For example, we observe that when mentions of person names occur far apart in a sentence, this is a good indicator that the candidate's label is False. End of explanation def LF_marriage_and_too_far_apart(c): return 1 if LF_too_far_apart(c) != -1 and LF_marriage(c) == 1 else 0 LF_marriage_and_not_same_person = lambda c: LF_too_far_apart(c) != -1 and LF_marriage(c) score(session, LF_marriage_and_too_far_apart, split=1, gold=L_gold_dev) Explanation: labeled = coverage(session, LF_too_far_apart, split=1) score(session, LF_too_far_apart, split=1, gold=L_gold_dev) D. Composing Labeling Functions Another useful technique for writing LFs is composing multiple, weaker LFs together. For example, our LF_marriage example above has low precision. Instead of modifying LF_marriage, we'll compose it with our LF_too_far_apart from above. LF_marriage TP: 63 | FP: 114 LF_marriage AND NOT LF_too_far_apart TP: 60 | FP: 86 We missed 3 true candidates, but we cut our false positive rate by 28 candidates! End of explanation # # PLACE YOUR LFs HERE # Explanation: VI. Development Sandbox A. Writing Your Own Labeling Functions Using the information above, write your own labeling functions for this task. End of explanation LFs = [ # Place your lf function variable names here # NOTE: Below are the demo LFs- for demonstrating the syntax only, and will result in a poor score! # Add your LFs to increase the performace!! LF_marriage, LF_other_relationship, LF_exes, LF_distant_supervision, LF_too_far_apart, LF_marriage_and_too_far_apart ] Explanation: B. Applying Labeling Functions Next, we need to actually run the LFs over all of our training candidates, producing a set of Labels and LabelKeys (just the names of the LFs) in the database. We'll do this using the LabelAnnotator class, a UDF which we will again run with UDFRunner. 1. Preparing your Labeling Functions First we put all our labeling functions into list: End of explanation from snorkel.annotations import LabelAnnotator labeler = LabelAnnotator(lfs=LFs) Explanation: Then we setup the label annotator class: End of explanation np.random.seed(1701) %time L_train = labeler.apply(split=0, parallelism=1) print(L_train.shape) %time L_dev = labeler.apply_existing(split=1, parallelism=1) print(L_dev.shape) L_train.lf_stats(session) Explanation: 2. Generating the Label Matrix End of explanation L_dev.lf_stats(session, labels=L_gold_dev.toarray().ravel()) Explanation: 3. Label Matrix Empirical Accuracies If we have a small set of human-labeled data End of explanation
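One simple way to sanity-check a label matrix like L_train outside of Snorkel is an unweighted majority vote per candidate; this is a hedged aside, not the generative model the workshop actually trains.
import numpy as np

L = np.array([[ 1,  0, -1],
              [ 1,  1,  0],
              [ 0, -1, -1],
              [ 0,  0,  0]])        # hypothetical candidates x labeling functions
majority = np.sign(L.sum(axis=1))   # 0 means the LFs abstained or tied on that candidate
print(majority)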
2,017
Given the following text description, write Python code to implement the functionality described below step by step Description: Description Determining how differences in our isopycnic cfg conditions vary in meaningful ways from those of Clay et al., 2003. Eur Biophys J Needed to determine whether the Clay et al., 2003 function describing diffusion is applicable to our data standard conditions from Step1: Beckman XL-A Beckman model E Angular velocity (omega) Step2: Meselson et al., 1957 equation on s.d. of band due to diffusion \begin{equation} \sigma^2 = \frac{RT}{M_{PX_n}\bar{\upsilon}{PX_n} (\frac{dp}{dr}){r_0} \omega^2 r_0} \end{equation} R = gas constant T = temperature (C) M = molecular weight PX_n = macromolecular electrolyte v = partial specific volume (mL/g) w = angular velocity r_0 = distance between the band center and rotor center dp/dr = density gradient Time to equilibrium Step3: Plotting band s.d. as defined the ultra-cfg technical manual density gradient \begin{equation} \frac{d\rho}{dr} = \frac{\omega^2r}{\beta} \end{equation} band standard deviation \begin{equation} \sigma^2 = \frac{\theta}{M_{app}}\frac{RT}{(\frac{d\rho}{dr})_{eff} \omega^2r_o} \end{equation} combined \begin{equation} \sigma^2 = \frac{\theta}{M_{app}}\frac{RT}{\frac{\omega^4r_o^2}{\beta}} \end{equation} buoyant density of a molecule \begin{equation} \theta = \rho_i + \frac{\omega^2}{2\beta}(r_o^2 - r_1^2) \end{equation} standard deviation due to diffusion (Clay et al., 2003) \begin{equation} \sigma_{diffusion}^2 = \Big(\frac{100%}{0.098}\Big)^2 \frac{\rho RT}{\beta_B^2GM_{Cs}} \frac{1}{1000l} \end{equation} Step4: Notes
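For orientation before the R code below, the two rotor quantities quoted in the description — the angular velocity and the isoconcentration point for the stated VTi90 radii — written in plain Python with the same formulas as the R version.
import math

def angular_velocity(rpm):
    return 2.0 * math.pi * rpm / 60.0          # rad/s

r_top, r_bottom = 57.9 / 10, 71.1 / 10         # cm
iso_point = math.sqrt((r_top**2 + r_top * r_bottom + r_bottom**2) / 3.0)
print(angular_velocity(55000), angular_velocity(35000), iso_point)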
Python Code: import numpy as np %load_ext rpy2.ipython %%R library(ggplot2) library(dplyr) Explanation: Description Determining how differences in our isopycnic cfg conditions vary in meaningful ways from those of Clay et al., 2003. Eur Biophys J Needed to determine whether the Clay et al., 2003 function describing diffusion is applicable to our data standard conditions from: Clay et al., 2003. Eur Biophys J Standard conditions: 44k rev/min for Beckman XL-A An-50 Ti Rotor 44.77k rev/min for Beckman model E 35k rev/min for preparative ultra-cfg & fractionation De Sario et al., 1995: vertical rotor: VTi90 (Beckman) 35k rpm for 16.5 h Our conditions: speed (R) = 55k rev/min radius top/bottom (cm) = 2.6, 4.85 angular velocity: w = ((2 * 3.14159 * R)/60)^2 TLA110 rotor End of explanation # angular velocity: our setup angular_vel_f = lambda R: (2 * 3.14159 * R / 60) print angular_vel_f(55000) # angular velocity: De Sario et al., 1995 print angular_vel_f(35000) Explanation: Beckman XL-A Beckman model E Angular velocity (omega) End of explanation %%R -w 14 -h 6 -u in library(ggplot2) library(reshape) library(grid) # radius top,bottom (cm) r.top = 57.9 / 10 r.bottom = 71.1 / 10 # isoconcentration point I = sqrt((r.top^2 + r.top * r.bottom + r.bottom^2)/3) # rpm R = 35000 # particle density D = 1.70 # beta^o B = 1.14e9 # dna in bp from 0.1kb - 100kb L = seq(100, 100000, 100) # angular velocity ## 2*pi*rpm / 60 w = ((2 * 3.14159 * R)/60)^2 # DNA GC content G.C = seq(0.1, 0.9, 0.05) # Molecular weight # M.W in relation to GC content (dsDNA) A = 313.2 T = 304.2 C = 289.2 G = 329.2 GC = G + C AT = A + T #GC2MW = function(x){ x*GC + (1-x)*AT + 157 } # assuming 5' monophosphate on end of molecules GC2MW = function(x){ x*GC + (1-x)*AT } M.W = sapply(G.C, GC2MW) # buoyant density ## GC_fraction = (p - 1.66) / 0.098 GC2buoyant.density = function(x){ (x * 0.098) + 1.66 } B.D = GC2buoyant.density(G.C) # radius of the isoconcentration point from cfg center (AKA: r.p) ## position of the particle at equilibrium buoyant.density2radius = function(x){ sqrt( ((x-D)*2*B/w) + I^2 ) } P = buoyant.density2radius(B.D) # calculating S S.fun = function(L){ 2.8 + (0.00834 * (L*M.W)^0.479) } S = t(sapply( L, S.fun )) # calculating T T = matrix(ncol=17, nrow=length(L)) for(i in 1:ncol(S)){ T[,i] = 1.13e14 * B * (D-1) / (R^4 * P[i]^2 * S[,i]) } ## formating T = as.data.frame(T) colnames(T) = G.C T$dna_size__kb = L / 1000 T.m = melt(T, id.vars=c('dna_size__kb')) colnames(T.m) = c('dna_size__kb', 'GC_content', 'time__h') #T.m$GC_content = as.numeric(as.character(T.m$GC_content)) ## plotting p = ggplot(T.m, aes(dna_size__kb, time__h, color=GC_content, group=GC_content)) + geom_line() + scale_y_continuous(limits=c(0,175)) + labs(x='DNA length (kb)', y='Time (hr)') + scale_color_discrete(name='GC content') + #geom_hline(yintercept=66, linetype='dashed', alpha=0.5) + theme( text = element_text(size=18) ) #print(p) # plotting at small scale p.sub = ggplot(T.m, aes(dna_size__kb, time__h, color=GC_content, group=GC_content)) + geom_line() + scale_x_continuous(limits=c(0,5)) + scale_y_continuous(limits=c(0,175)) + labs(x='DNA length (kb)', y='Time (hr)') + scale_color_discrete(name='GC content') + #geom_hline(yintercept=66, linetype='dashed', alpha=0.5) + theme( text = element_text(size=14), legend.position = 'none' ) vp = viewport(width=0.43, height=0.52, x = 0.65, y = 0.68) print(p) print(p.sub, vp=vp) Explanation: Meselson et al., 1957 equation on s.d. 
of band due to diffusion \begin{equation} \sigma^2 = \frac{RT}{M_{PX_n}\bar{\upsilon}{PX_n} (\frac{dp}{dr}){r_0} \omega^2 r_0} \end{equation} R = gas constant T = temperature (C) M = molecular weight PX_n = macromolecular electrolyte v = partial specific volume (mL/g) w = angular velocity r_0 = distance between the band center and rotor center dp/dr = density gradient Time to equilibrium: vertical rotor radius_max - radius_min = width_of_tube VTi90 Rotor radius_max = 71.1 mm radius_min = 57.9 mm End of explanation %%R # gas constant R = 8.3144621e7 #J / mol*K # temp T = 273.15 + 23 # 23oC # rotor speed (rpm) S = 55000 # beta^o beta = 1.14 * 10^-9 #beta = 1.195 * 10^-10 # G G = 7.87 * 10^10 #cgs # angular velocity ## 2*pi*rpm / 60 omega = 2 * pi * S /60 # GC GC = seq(0,1,0.1) # lengths lens = seq(1000, 100000, 10000) # molecular weight GC2MW.dry = function(x){ A = 313.2 T = 304.2 C = 289.2 G = 329.2 GC = G + C AT = A + T x*GC + (1-x)*AT } M.dry = sapply(GC, GC2MW) GC2MW.dryCS = function(n){ #n = number of bases #base pair = 665 daltons #base pair per dry cesium DNA = 665 * 4/3 ~= 882 return(n * 882) } M.dryCS = sapply(lens, GC2MW.dryCS) # BD GC2BD = function(x){ (x * 0.098) + 1.66 } rho = sapply(GC, GC2BD) # sd calc_s.d = function(p=1.72, L=50000, T=298, B=1.195e9, G=7.87e-10, M=882){ R = 8.3145e7 x = (100 / 0.098)^2 * ((p*R*T)/(B^2*G*L*M)) return(x) } # run p=seq(1.7, 1.75, 0.01) L=seq(1000, 50000, 1000) m = outer(p, L, calc_s.d) rownames(m) = p colnames(m) = L %%R # gas constant R = 8.3144621e7 #J / mol*K # temp T = 273.15 + 23 # 23oC # rotor speed (rpm) S = 55000 # beta^o beta = 1.14 * 10^-9 #beta = 1.195 * 10^-10 # G G = 7.87 * 10^10 #cgs # angular velocity ## 2*pi*rpm / 60 omega = 2 * pi * S /60 # GC GC = seq(0,1,0.1) # lengths lens = seq(1000, 100000, 10000) # molecular weight GC2MW.dry = function(x){ A = 313.2 T = 304.2 C = 289.2 G = 329.2 GC = G + C AT = A + T x*GC + (1-x)*AT } #GC2MW = function(x){ x*GC + (1-x)*AT } M.dry = sapply(GC, GC2MW.dry) GC2MW.dryCS = function(n){ #n = number of bases #base pair = 665 daltons #base pair per dry cesium DNA = 665 * 4/3 ~= 882 return(n * 882) } M.dryCS = sapply(lens, GC2MW.dryCS) # BD GC2BD = function(x){ (x * 0.098) + 1.66 } rho = sapply(GC, GC2BD) # sd calc_s.d = function(p=1.72, L=50000, T=298, B=1.195e9, G=7.87e-10, M=882){ R = 8.3145e7 x = (100 / 0.098)^2 * ((p*R*T)/(B^2*G*L*M)) return(sqrt(x)) } # run p=seq(1.7, 1.75, 0.01) L=seq(500, 50000, 500) m = outer(p, L, calc_s.d) rownames(m) = p colnames(m) = L %%R heatmap(m, Rowv=NA, Colv=NA) %%R -w 500 -h 350 df = as.data.frame(list('fragment_length'=as.numeric(colnames(m)), 'GC_sd'=m[1,])) #df$GC_sd = sqrt(df$GC_var) ggplot(df, aes(fragment_length, GC_sd)) + geom_line() + geom_vline(xintercept=4000, linetype='dashed', alpha=0.6) + labs(x='fragment length (bp)', y='G+C s.d.') + theme( text = element_text(size=16) ) Explanation: Plotting band s.d. 
as defined the ultra-cfg technical manual density gradient \begin{equation} \frac{d\rho}{dr} = \frac{\omega^2r}{\beta} \end{equation} band standard deviation \begin{equation} \sigma^2 = \frac{\theta}{M_{app}}\frac{RT}{(\frac{d\rho}{dr})_{eff} \omega^2r_o} \end{equation} combined \begin{equation} \sigma^2 = \frac{\theta}{M_{app}}\frac{RT}{\frac{\omega^4r_o^2}{\beta}} \end{equation} buoyant density of a molecule \begin{equation} \theta = \rho_i + \frac{\omega^2}{2\beta}(r_o^2 - r_1^2) \end{equation} standard deviation due to diffusion (Clay et al., 2003) \begin{equation} \sigma_{diffusion}^2 = \Big(\frac{100%}{0.098}\Big)^2 \frac{\rho RT}{\beta_B^2GM_{Cs}} \frac{1}{1000l} \end{equation} End of explanation %%R calc_s.d = function(p=1.72, L=50000, T=298, B=1.195e9, G=7.87e-10, M=882){ R = 8.3145e7 sigma_sq = (p*R*T)/(B^2*G*L*M) return(sqrt(sigma_sq)) } # run p=seq(1.7, 1.75, 0.01) L=seq(500, 50000, 500) m = outer(p, L, calc_s.d) rownames(m) = p colnames(m) = L head(m) %%R heatmap(m, Rowv=NA, Colv=NA) %%R -w 500 -h 350 BD50 = 0.098 * 0.5 + 1.66 df = as.data.frame(list('fragment_length'=as.numeric(colnames(m)), 'BD_sd'=m[BD50,])) ggplot(df, aes(fragment_length, BD_sd)) + geom_line() + geom_vline(xintercept=4000, linetype='dashed', alpha=0.6) + labs(x='fragment length (bp)', y='G+C s.d.') + theme( text = element_text(size=16) ) Explanation: Notes: Small fragment size (<4000 bp) leads to large standard deviations in realized G+C End of explanation
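A direct Python transcription (sketch) of the calc_s.d helper above, using the same constants as the R code, for readers who want the Clay et al. (2003) diffusion standard deviation outside of R.
import numpy as np

def gc_sd_diffusion(rho=1.72, frag_len=50000, T=298.0,
                    beta=1.195e9, G=7.87e-10, M=882.0):
    R = 8.3145e7                               # gas constant (cgs), as in the R code
    var = (100.0 / 0.098) ** 2 * (rho * R * T) / (beta ** 2 * G * frag_len * M)
    return np.sqrt(var)

for length in (500, 4000, 50000):
    print(length, gc_sd_diffusion(frag_len=length))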
2,018
Given the following text description, write Python code to implement the functionality described below step by step Description: Artifact Correction with ICA ICA finds directions in the feature space corresponding to projections with high non-Gaussianity. We thus obtain a decomposition into independent components, and the artifact's contribution is localized in only a small number of components. These components have to be correctly identified and removed. If EOG or ECG recordings are available, they can be used in ICA to automatically select the corresponding artifact components from the decomposition. To do so, you have to first build an Epoch object around blink or heartbeat event. Step1: Before applying artifact correction please learn about your actual artifacts by reading tut_artifacts_detect. Fit ICA ICA parameters Step2: Define the ICA object instance Step3: we avoid fitting ICA on crazy environmental artifacts that would dominate the variance and decomposition Step4: Plot ICA components Step5: Component properties Let's take a closer look at properties of first three independent components. Step6: we can see that the data were filtered so the spectrum plot is not very informative, let's change that Step7: we can also take a look at multiple different components at once Step8: Instead of opening individual figures with component properties, we can also pass an instance of Raw or Epochs in inst arument to ica.plot_components. This would allow us to open component properties interactively by clicking on individual component topomaps. In the notebook this woks only when running matplotlib in interactive mode (%matplotlib). Step9: Advanced artifact detection Let's use a more efficient way to find artefacts Step10: We can take a look at the properties of that component, now using the data epoched with respect to EOG events. We will also use a little bit of smoothing along the trials axis in the epochs image Step11: That component is showing a prototypical average vertical EOG time course. Pay attention to the labels, a customized read-out of the mne.preprocessing.ICA.labels_ Step12: These labels were used by the plotters and are added automatically by artifact detection functions. You can also manually edit them to annotate components. Now let's see how we would modify our signals if we removed this component from the data Step13: Exercise Step14: What if we don't have an EOG channel? We could either Step15: The idea behind corrmap is that artefact patterns are similar across subjects and can thus be identified by correlating the different patterns resulting from each solution with a template. The procedure is therefore semi-automatic. Step16: Remember, don't do this at home! Start by reading in a collection of ICA solutions instead. Something like Step17: We use our original ICA as reference. Step18: Investigate our reference ICA Step19: Which one is the bad EOG component? Here we rely on our previous detection algorithm. You would need to decide yourself if no automatic detection was available. Step20: Indeed it looks like an EOG, also in the average time course. We construct a list where our reference run is the first element. Then we can detect similar components from the other runs (the other ICA objects) using Step21: Now we can run the CORRMAP algorithm. Step22: Nice, we have found similar ICs from the other (simulated) runs! In this way, you can detect a type of artifact semi-automatically for example for all subjects in a study. 
The detected template can also be retrieved as an array and stored; this array can be used as an alternative template to
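The end-to-end EOG rejection loop that this tutorial builds up to can be sketched in a few lines; this assumes a Raw object raw and a fitted ICA instance ica like the ones constructed in the code below, and is a summary rather than a replacement for the full walkthrough.
from mne.preprocessing import create_eog_epochs

eog_epochs = create_eog_epochs(raw, reject=dict(mag=5e-12, grad=4000e-13))
eog_inds, scores = ica.find_bads_eog(eog_epochs)   # correlate sources with the EOG channel
ica.exclude.extend(eog_inds)
raw_clean = ica.apply(raw.copy())                  # project out the marked components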
Python Code: import numpy as np import mne from mne.datasets import sample from mne.preprocessing import ICA from mne.preprocessing import create_eog_epochs, create_ecg_epochs # getting some data ready data_path = sample.data_path() raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif' raw = mne.io.read_raw_fif(raw_fname, preload=True) raw.filter(1, 40, n_jobs=2) # 1Hz high pass is often helpful for fitting ICA picks_meg = mne.pick_types(raw.info, meg=True, eeg=False, eog=False, stim=False, exclude='bads') Explanation: Artifact Correction with ICA ICA finds directions in the feature space corresponding to projections with high non-Gaussianity. We thus obtain a decomposition into independent components, and the artifact's contribution is localized in only a small number of components. These components have to be correctly identified and removed. If EOG or ECG recordings are available, they can be used in ICA to automatically select the corresponding artifact components from the decomposition. To do so, you have to first build an Epoch object around blink or heartbeat event. End of explanation n_components = 25 # if float, select n_components by explained variance of PCA method = 'fastica' # for comparison with EEGLAB try "extended-infomax" here decim = 3 # we need sufficient statistics, not all time points -> saves time # we will also set state of the random number generator - ICA is a # non-deterministic algorithm, but we want to have the same decomposition # and the same order of components each time this tutorial is run random_state = 23 Explanation: Before applying artifact correction please learn about your actual artifacts by reading tut_artifacts_detect. Fit ICA ICA parameters: End of explanation ica = ICA(n_components=n_components, method=method, random_state=random_state) print(ica) Explanation: Define the ICA object instance End of explanation reject = dict(mag=5e-12, grad=4000e-13) ica.fit(raw, picks=picks_meg, decim=decim, reject=reject) print(ica) Explanation: we avoid fitting ICA on crazy environmental artifacts that would dominate the variance and decomposition End of explanation ica.plot_components() # can you spot some potential bad guys? Explanation: Plot ICA components End of explanation # first, component 0: ica.plot_properties(raw, picks=0) Explanation: Component properties Let's take a closer look at properties of first three independent components. End of explanation ica.plot_properties(raw, picks=0, psd_args={'fmax': 35.}) Explanation: we can see that the data were filtered so the spectrum plot is not very informative, let's change that: End of explanation ica.plot_properties(raw, picks=[1, 2], psd_args={'fmax': 35.}) Explanation: we can also take a look at multiple different components at once: End of explanation # uncomment the code below to test the inteactive mode of plot_components: # ica.plot_components(picks=range(10), inst=raw) Explanation: Instead of opening individual figures with component properties, we can also pass an instance of Raw or Epochs in inst arument to ica.plot_components. This would allow us to open component properties interactively by clicking on individual component topomaps. In the notebook this woks only when running matplotlib in interactive mode (%matplotlib). 
End of explanation eog_average = create_eog_epochs(raw, reject=dict(mag=5e-12, grad=4000e-13), picks=picks_meg).average() # We simplify things by setting the maximum number of components to reject n_max_eog = 1 # here we bet on finding the vertical EOG components eog_epochs = create_eog_epochs(raw, reject=reject) # get single EOG trials eog_inds, scores = ica.find_bads_eog(eog_epochs) # find via correlation ica.plot_scores(scores, exclude=eog_inds) # look at r scores of components # we can see that only one component is highly correlated and that this # component got detected by our correlation analysis (red). ica.plot_sources(eog_average, exclude=eog_inds) # look at source time course Explanation: Advanced artifact detection Let's use a more efficient way to find artefacts End of explanation ica.plot_properties(eog_epochs, picks=eog_inds, psd_args={'fmax': 35.}, image_args={'sigma': 1.}) Explanation: We can take a look at the properties of that component, now using the data epoched with respect to EOG events. We will also use a little bit of smoothing along the trials axis in the epochs image: End of explanation print(ica.labels_) Explanation: That component is showing a prototypical average vertical EOG time course. Pay attention to the labels, a customized read-out of the mne.preprocessing.ICA.labels_: End of explanation ica.plot_overlay(eog_average, exclude=eog_inds, show=False) # red -> before, black -> after. Yes! We remove quite a lot! # to definitely register this component as a bad one to be removed # there is the ``ica.exclude`` attribute, a simple Python list ica.exclude.extend(eog_inds) # from now on the ICA will reject this component even if no exclude # parameter is passed, and this information will be stored to disk # on saving # uncomment this for reading and writing # ica.save('my-ica.fif') # ica = read_ica('my-ica.fif') Explanation: These labels were used by the plotters and are added automatically by artifact detection functions. You can also manually edit them to annotate components. Now let's see how we would modify our signals if we removed this component from the data End of explanation ecg_epochs = create_ecg_epochs(raw, tmin=-.5, tmax=.5) ecg_inds, scores = ica.find_bads_ecg(ecg_epochs, method='ctps') ica.plot_properties(ecg_epochs, picks=ecg_inds, psd_args={'fmax': 35.}) Explanation: Exercise: find and remove ECG artifacts using ICA! End of explanation from mne.preprocessing.ica import corrmap # noqa Explanation: What if we don't have an EOG channel? We could either: make a bipolar reference from frontal EEG sensors and use as virtual EOG channel. This can be tricky though as you can only hope that the frontal EEG channels only reflect EOG and not brain dynamics in the prefrontal cortex. go for a semi-automated approach, using template matching. In MNE-Python option 2 is easily achievable and it might give better results, so let's have a look at it. 
End of explanation # We'll start by simulating a group of subjects or runs from a subject start, stop = [0, len(raw.times) - 1] intervals = np.linspace(start, stop, 4, dtype=int) icas_from_other_data = list() raw.pick_types(meg=True, eeg=False) # take only MEG channels for ii, start in enumerate(intervals): if ii + 1 < len(intervals): stop = intervals[ii + 1] print('fitting ICA from {0} to {1} seconds'.format(start, stop)) this_ica = ICA(n_components=n_components, method=method).fit( raw, start=start, stop=stop, reject=reject) icas_from_other_data.append(this_ica) Explanation: The idea behind corrmap is that artefact patterns are similar across subjects and can thus be identified by correlating the different patterns resulting from each solution with a template. The procedure is therefore semi-automatic. :func:mne.preprocessing.corrmap hence takes a list of ICA solutions and a template, that can be an index or an array. As we don't have different subjects or runs available today, here we will simulate ICA solutions from different subjects by fitting ICA models to different parts of the same recording. Then we will use one of the components from our original ICA as a template in order to detect sufficiently similar components in the simulated ICAs. The following block of code simulates having ICA solutions from different runs/subjects so it should not be used in real analysis - use independent data sets instead. End of explanation print(icas_from_other_data) Explanation: Remember, don't do this at home! Start by reading in a collection of ICA solutions instead. Something like: icas = [mne.preprocessing.read_ica(fname) for fname in ica_fnames] End of explanation reference_ica = ica Explanation: We use our original ICA as reference. End of explanation reference_ica.plot_components() Explanation: Investigate our reference ICA: End of explanation reference_ica.plot_sources(eog_average, exclude=eog_inds) Explanation: Which one is the bad EOG component? Here we rely on our previous detection algorithm. You would need to decide yourself if no automatic detection was available. End of explanation icas = [reference_ica] + icas_from_other_data template = (0, eog_inds[0]) Explanation: Indeed it looks like an EOG, also in the average time course. We construct a list where our reference run is the first element. Then we can detect similar components from the other runs (the other ICA objects) using :func:mne.preprocessing.corrmap. So our template must be a tuple like (reference_run_index, component_index): End of explanation fig_template, fig_detected = corrmap(icas, template=template, label="blinks", show=True, threshold=.8, ch_type='mag') Explanation: Now we can run the CORRMAP algorithm. End of explanation eog_component = reference_ica.get_components()[:, eog_inds[0]] # If you calculate a new ICA solution, you can provide this array instead of # specifying the template in reference to the list of ICA objects you want # to run CORRMAP on. (Of course, the retrieved component map arrays can # also be used for other purposes than artifact correction.) # # You can also use SSP to correct for artifacts. It is a bit simpler and # faster but also less precise than ICA and requires that you know the event # timing of your artifact. # See :ref:`tut_artifacts_correct_ssp`. Explanation: Nice, we have found similar ICs from the other (simulated) runs! In this way, you can detect a type of artifact semi-automatically for example for all subjects in a study. 
The detected template can also be retrieved as an array and stored; this array can be used as an alternative template to :func:mne.preprocessing.corrmap. End of explanation
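As a pointer for the SSP alternative mentioned in the last note, here is a minimal sketch with MNE's EOG projector helper, reusing the raw object from this tutorial.
from mne.preprocessing import compute_proj_eog

projs, eog_events = compute_proj_eog(raw, n_grad=1, n_mag=1, n_eeg=0)
raw.add_proj(projs)            # SSP vectors are applied later (e.g. via apply_proj)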
2,019
Given the following text description, write Python code to implement the functionality described below step by step Description: Coin games Step1: Playing the games Game 1 Game 1 represents a Markov chain on a countable state space that follows a random walk. If we denote by the random variable $X_n$ the bankroll at toss $n$ with $X_0 = k$, then the sequence $\lbrace X_n Step2: Omitting the detailed proof, taking the limit $N \to \infty$ gives the correct result for the case without a house limit, $$ \lim_{N \to \infty} u_k = \cases{\left(\frac{1-p}{p}\right)^k, \qquad p > \frac{1}{2} \ 1, \,\,\,\,\qquad\qquad p \leq \frac{1}{2} } $$ Therefore, in a fair and unfavourably biased game the gambler always loses no matter the initial bankroll size. In a game that is favourably biased, however, there is a finite probability to never go bankcrupt, which becomes better the larger the initial bankroll and more favourable the bias. Step3: To find the expected number of games to be played before reaching $0$ or $N$, $E_k \equiv E(Y_k)$, where the random variable $Y_k$ is the number of games needed starting with bankroll $k$, we can again condition on the first game Step4: Game 2 Let $X_N$ be the random variable denoting the number of heads in a series of $N$ coin tosses. The corresponding probability density function is the binomial distribution, $$ X_N \sim \text{Binomial}(p, N),$$ where $p = 0.5$ for a fair coin. The mean is $Np$ and the variance is $Np(1-p)$. For large $N$, certainly for $N = 1000$, the Central Limit Theorem asserts that the binomial distribution can be approximated by a Gaussian Step5: We see that the needed number of heads, 550 out of 1000 tosses, is 3.16 standard deviations away from the mean. The probability $P(X_{1000} > 550) \approx 0.0008.$ The expectation becomes $$E\text{(game 2)} = -10 + 20\times 0.0008 \approx -9.98.$$ This game should certainly not be played. The bet can be taken when $E\text{(game 2)}>0$, for example when the game costs EUR 0.016 or less with a payout of EUR 20, or pays EUR 12500 or more for the bet price of EUR 10. Game 3 This time we are dealing with a Discrete Time Quantum Walk. There are many interesting features to study, but let us focus our attention here to calculating the expected winnings when gambling in a quantum casino, similarly in spirit to Game 1. Let us introduce our gambler named $\left|{\psi}\right\rangle$ who is in a quantum superposition of 'lucky' (L) and 'unlucky' (U), $$\left|{\psi}\right\rangle = g_1 \left|{L}\right\rangle + g_2 \left|{U}\right\rangle,$$ where $g_{1,2} \in \mathbb{C}$ such that $|g_1|^2 + |g_2|^2 = 1$. The quantum casino named $\left|{\phi}\right\rangle$, like the classical one, accommodates for all non-zero bankrolls (B) within the house limit $N$ ($\mathbb{N}_N = \lbrace 1, \ldots, N-1 \rbrace$)
Python Code: import numpy as np import matplotlib.pyplot as plt Explanation: Coin games: classical and quantum In this notebook we play a set of interesting coin tossing games using coins obeying classical (games 1-2) and quantum (game 3) mechanics. Game 1: Gambler's ruin A gambler enters the casino with a bankroll of size $k$, and repeatedly plays a game where one wins 1 with probability $p$ and loses 1 with probability $1 - p$. The gambler stops playing if the bankroll reaches $0$ or the house limit $N$. We can ask questions such as <ol> <li>What is the probability that the gambler loses it all / hits the house limit / neither?</li> <li>What is the probability that the gambler loses it all in a game without a house limit?</li> <li>How many games will the gambler need to play on average to leave the casino?</li> </ol> Game 2: Questionable gamble We are presented with the following bet: On flipping 1000 fair coins, if 550 or more land on heads we win EUR 20; otherwise the bet is lost. Should we play this game for EUR 10? Game 3: Gambling in a quantum casino (flipping a Hadamard coin) We encountered a classical random walk in Game 1. What happens when the gambler is allowed to simultaneosly win and lose conditional on the outcome of a quantum coin? What happens when we gamble in a casino obeying quantum laws? We learned in Game 1 what happens in the classical casino where the gambler either wins or loses $-$ and not both at the same time $-$ in a given game, but in the quantum world things are not quite so simple. End of explanation u = lambda k,p,N: (((1-p)/p)**k -((1-p)/p)**N) / (1 - ((1-p)/p)**N); # Evaluating u for p < 1/2 runs into numerical problems with the large fractions. Let us regroup: u1 = lambda k,p,N: (((1-p)/p)**(k-N) -1) / (((1-p)/p)**(-N) - 1); uhalf = lambda k,N: 1 - k/N; def SimulateGame1_1(k,p,N): NGames = 50; # number of games we simulate. The higher the closer to the 'frequentist' prob. ret = []; for ktmp in k: ktmp = int(ktmp); Nruins = 0; # number of ruins we have encountered for this k for i1 in range(NGames): ktmp1 = ktmp; while (True): if (ktmp1 == 0): Nruins += 1; break; if (ktmp1 == N): break; if (np.random.uniform(0,1) <= p): ktmp1 += 1; else: ktmp1 += -1; ret.append(Nruins/NGames); # prob of ruin for this k return ret; N = 100; krange=np.linspace(0, N, num=100); plist = [0.51,0.55,0.7]; p1list = [0.3,0.45,0.49]; for p in p1list: plt.plot(krange, u1(krange,p,N), linewidth=2, label=" p = %f "%(p)); plt.plot(krange, SimulateGame1_1(krange,p,N),color='#c42d41', alpha=0.6); plt.plot(krange, uhalf(krange,N), linewidth=2, label=" p = 0.5 "); plt.plot(krange, SimulateGame1_1(krange,0.5,N),color='#c42d41', alpha=0.6, label = 'Simulated'); for p in plist: plt.plot(krange, u(krange,p,N), linewidth=2, label=" p = %f "%(p)); plt.plot(krange, SimulateGame1_1(krange,p,N),color='#c42d41', alpha=0.6); plt.legend(bbox_to_anchor=(1.04,1), loc="upper left"); plt.title('Gambler\'s ruin, house limit = %d'%(N)); plt.xlabel('Initial bankroll');plt.ylabel('Probability of ruin'); plt.show(); Explanation: Playing the games Game 1 Game 1 represents a Markov chain on a countable state space that follows a random walk. If we denote by the random variable $X_n$ the bankroll at toss $n$ with $X_0 = k$, then the sequence $\lbrace X_n : n \in \mathbb{N} \rbrace$ is a Markov process. The state at toss $n+1$ only depends on the state at $n$; there is no memory. Let's denote the probability that the gambler loses it all with a bankroll of $k$ by $u_k$. 
To obtain the asked probabilities, we can condition on the first toss using the law of total probability: $$\begin{split}u_k &= \text{P(ruin | win first)P(win first) + P(ruin | lose first)P(lose first)} \ &= u_{k + 1}p + u_{k - 1}(1 - p) \end{split} $$ This is defined for $0 < k < M$ with the boundary conditions $u_0 = 1$ and $u_N = 0$. The solution is $$ u_k = \cases{\frac{\left(\frac{1-p}{p}\right)^k - \left(\frac{1-p}{p}\right)^N}{1 - \left(\frac{1-p}{p}\right)^N}, \qquad p \neq \frac{1}{2} \ 1 - \frac{k}{N}, \,\,\,\,\qquad\qquad p = \frac{1}{2} } $$ Similarly, the probability $v_k$ that the gambler reaches the house limit starting with a bankroll of $k$ is defined by the same recurrence relation, but the boundary conditions are $v_0 = 0$ and $v_N = 1$. We find $v_k = 1 - u_k$. The probability that the gambler plays forever is zero. End of explanation uNinf = lambda k,p: ((1-p)/p)**k; uNinfhalf = lambda k: k**0; krange=np.linspace(0, 5000, num=1000); plist = [0.5001,0.501,0.6]; plt.plot(krange, uNinfhalf(krange), linewidth=2, label=" p =< 0.5 "); for p in plist: plt.plot(krange, uNinf(krange,p), linewidth=2, label=" p = %f "%(p)); plt.legend(loc="best"); plt.title('Gambler\'s ruin, no house limit'); plt.xlabel('Initial bankroll');plt.ylabel('Probability of ruin'); plt.show(); Explanation: Omitting the detailed proof, taking the limit $N \to \infty$ gives the correct result for the case without a house limit, $$ \lim_{N \to \infty} u_k = \cases{\left(\frac{1-p}{p}\right)^k, \qquad p > \frac{1}{2} \ 1, \,\,\,\,\qquad\qquad p \leq \frac{1}{2} } $$ Therefore, in a fair and unfavourably biased game the gambler always loses no matter the initial bankroll size. In a game that is favourably biased, however, there is a finite probability to never go bankcrupt, which becomes better the larger the initial bankroll and more favourable the bias. End of explanation E = lambda k,p,N: (N/(2*p - 1))*((1 - ((1-p)/p)**k)/(1 - ((1-p)/p)**N)) - k/(2*p - 1); # Evaluating E for p < 1/2 runs into numerical problems with the large fractions. Let us regroup: E1 = lambda k,p,N: (N/(2*p - 1))*(((1 - ((1-p)/p)**k)*((1-p)/p)**(-N))/(((1-p)/p)**(-N) - 1)) - k/(2*p - 1); Ehalf = lambda k,N: N*k - k**2; def SimulateGame1_3(k,p,N): NGames = 100; # number of games we simulate, and then average over. 
ret = []; for ktmp in k: ktmp = int(ktmp); playresults = []; # temp array of results that we finally average over for i1 in range(NGames): Nplays = 1; # number of games we have managed to play for this k ktmp1 = ktmp; while (True): if (ktmp1 == 0) or (ktmp1 == N): break; if (np.random.uniform(0,1) <= p): ktmp1 += 1; else: ktmp1 += -1; Nplays += 1; playresults.append(Nplays); ret.append(np.mean(playresults)); # Expected number of games for this k return ret; N = 100; krange=np.linspace(0, N, num=100); plist = [0.51,0.55,0.7]; p1list = [0.3,0.45,0.49]; for p in p1list: plt.plot(krange, E1(krange,p,N), linewidth=2, label=" p = %f "%(p)); plt.plot(krange, SimulateGame1_3(krange,p,N),color='#c42d41', alpha=0.6); plt.plot(krange, Ehalf(krange,N), linewidth=2, label=" p = 0.5 "); plt.plot(krange, SimulateGame1_3(krange,0.5,N),color='#c42d41', alpha=0.6, label = 'Simulated'); for p in plist: plt.plot(krange, E(krange,p,N), linewidth=2, label=" p = %f "%(p)); plt.plot(krange, SimulateGame1_3(krange,p,N),color='#c42d41', alpha=0.6); plt.legend(bbox_to_anchor=(1.04,1), loc="upper left"); plt.title('Gambler\'s ruin, house limit = %d'%(N)); plt.xlabel('Initial bankroll');plt.ylabel('Expected number of games'); plt.show(); Explanation: To find the expected number of games to be played before reaching $0$ or $N$, $E_k \equiv E(Y_k)$, where the random variable $Y_k$ is the number of games needed starting with bankroll $k$, we can again condition on the first game: $$\begin{split}E_k &= \text{E(}Y_k\text{ | win first)P(win first) + E(}Y_k\text{ | lose first)P(lose first)} \ &= (1 + E_{k + 1})p + (1 + E_{k - 1})(1 - p) \end{split} $$ with the boundary conditions $E_0 = E_N = 0$. Using the standard theory of recurrence relations, we find $$ E_k = \cases{\frac{N}{2 p - 1} \frac{1- \left(\frac{1-p}{p}\right)^k}{1 - \left(\frac{1-p}{p}\right)^N} - \frac{k}{2 p - 1}, \qquad p \neq \frac{1}{2} \ N k - k^2, \,\,\,\,\qquad\qquad p = \frac{1}{2} } $$ End of explanation from scipy.stats import norm Gaussian = lambda x,mu,var: np.exp(-(x - mu)**2.0 / (2.0 * var))/np.sqrt(2.0*np.pi*var); xmin = 350; xmax = 650; xrange=np.linspace(xmin, xmax, num=xmax - xmin); headprob = 0.5; Ntosses = 1000; mub = Ntosses*headprob; varb = Ntosses*headprob*(1 - headprob); # We must normalize the Gaussian: z = (550 - mub)/np.sqrt(varb); print(z); print(1-norm.cdf(z)); plt.xlim(xmin,xmax); plt.ylim(0,0.03); plt.plot(xrange, Gaussian(xrange, mub, varb),color='#c42d41',alpha=0.8,linewidth=2); plt.xlabel('Number of heads');plt.ylabel('Proportion of bets'); plt.annotate('Very rare event', xy=(550, 0.0005), xytext=(565, 0.005), arrowprops=dict(facecolor='black', shrink=0.05), ) plt.title('Game 2'); plt.show(); Explanation: Game 2 Let $X_N$ be the random variable denoting the number of heads in a series of $N$ coin tosses. The corresponding probability density function is the binomial distribution, $$ X_N \sim \text{Binomial}(p, N),$$ where $p = 0.5$ for a fair coin. The mean is $Np$ and the variance is $Np(1-p)$. For large $N$, certainly for $N = 1000$, the Central Limit Theorem asserts that the binomial distribution can be approximated by a Gaussian: $$ X_N \sim \text{N}[Np, Np(1-p)].$$ To answer the question of whether or not to take the bet, let us consider the expectation of its payout: $$\begin{split}E\text{(game 2)} &= E(\text{win})P(\text{win}) + E(\text{lose})P(\text{lose}) \ &= (-10 + 20)P(\text{win}) + (-10)P(\text{lose}) \ &= -10 + 20P(\text{win})\ &= -10 + 20P(X_{1000} > 550). 
\end{split}$$ Let us evaluate the probability numerically: End of explanation from scipy import sparse # Matrix representations of the operators bvecU = np.array([1,0]); # gambler basis vectors bvecL = np.array([0,1]); NBankR = 40; # House limit BankRmatdim = NBankR - 1; bvecBankR = np.arange(1,NBankR); matWin = sparse.spdiags([1]*(BankRmatdim -1),-1,BankRmatdim ,BankRmatdim ); matLose = sparse.spdiags([1]*(BankRmatdim -1),1,BankRmatdim ,BankRmatdim ); matBankRId = sparse.identity(BankRmatdim); matLL = sparse.csr_matrix(np.outer(bvecL,bvecL)); matUU = sparse.csr_matrix(np.outer(bvecU,bvecU)); matHad = sparse.csr_matrix(np.array([[1,1],[1,-1]])/np.sqrt(2)); matG = sparse.kron(matWin, matLL) + sparse.kron(matLose, matUU); matS = sparse.csr_matrix(matG.dot(sparse.kron(matBankRId, matHad))); # Definition of the initial state vecGamblerT0 = bvecL; # The initial gambler luck state, in this case fully 'lucky' vecCasinoT0 = np.eye(BankRmatdim)[:,int(BankRmatdim/2)]; # The initial bankroll state, only site int(BankRmatdim/2) = 1 stateT0 = np.kron(vecCasinoT0,vecGamblerT0); # The initial total system state (casino + gambler) # Gambling events def Propagate(T,sT0): '''Iterates for T gambles.''' if (T == 0): s = sT0; else: s = matS.dot(sT0); for i in range(T-1): s = matS.dot(s); return s; Tlist = [0,1,2,3,10,20]; i = 1; plt.figure(figsize=(15,6)); plt.subplots_adjust(hspace=0.35); for T in Tlist: state = Propagate(T,stateT0); # The resulting array has meaning only in terms of our chosen basis. # To compute the total probability of having a bankroll value k, we # must trace over the gambler's internal state. state = [np.abs(i2)**2 for i2 in state] # In Python list comprehension is preferred state_BankR = np.array(state[::2]) + np.array(state[1::2]); # Trace over the gambler's Hilbert space # state_BankR contains the probabilities for the bank roll values plt.subplot(2,3,i); i += 1; plt.xlim(0,NBankR); plt.xlabel('Bankroll'); plt.ylabel('Probability'); plt.bar(bvecBankR,state_BankR,width=1.0, color='#42aaf4', alpha=0.9, label=" T = %d "%(T)); plt.legend(loc="upper left"); plt.suptitle('Bankroll at a quantum casino after T plays and starting fully \'lucky\''); plt.show(); # Let us also plot the evolution of the expected bankroll as a function of T expWin = []; for T in range(20): state = Propagate(T,stateT0); state = [np.abs(i2)**2 for i2 in state] state_BankR = np.array(state[::2]) + np.array(state[1::2]); expWin.append(np.sum(bvecBankR*state_BankR)); plt.xlabel('Number of games'); plt.ylabel('Expected bankroll'); plt.plot(range(20),expWin); plt.show(); Explanation: We see that the needed number of heads, 550 out of 1000 tosses, is 3.16 standard deviations away from the mean. The probability $P(X_{1000} > 550) \approx 0.0008.$ The expectation becomes $$E\text{(game 2)} = -10 + 20\times 0.0008 \approx -9.98.$$ This game should certainly not be played. The bet can be taken when $E\text{(game 2)}>0$, for example when the game costs EUR 0.016 or less with a payout of EUR 20, or pays EUR 12500 or more for the bet price of EUR 10. Game 3 This time we are dealing with a Discrete Time Quantum Walk. There are many interesting features to study, but let us focus our attention here to calculating the expected winnings when gambling in a quantum casino, similarly in spirit to Game 1. 
Let us introduce our gambler named $\left|{\psi}\right\rangle$ who is in a quantum superposition of 'lucky' (L) and 'unlucky' (U), $$\left|{\psi}\right\rangle = g_1 \left|{L}\right\rangle + g_2 \left|{U}\right\rangle,$$ where $g_{1,2} \in \mathbb{C}$ such that $|g_1|^2 + |g_2|^2 = 1$. The quantum casino named $\left|{\phi}\right\rangle$, like the classical one, accommodates all non-zero bankrolls (B) within the house limit $N$ ($\mathbb{N}_N = \lbrace 1, \ldots, N-1 \rbrace$): $$\left|{\phi}\right\rangle = \sum_{B \in \mathbb{N}_N} c_B \left|{B}\right\rangle,$$ where $c_B \in \mathbb{C}$ such that $\sum_{B \in \mathbb{N}_N} |c_B|^2 = 1$. In a relaxed casino $\mathbb{N}_N \to \mathbb{Z}$. Assuming the gambler and the casino are not entangled, the total state of the system, $\left|{\Psi}\right\rangle$, is then the product state $$\left|{\Psi}\right\rangle = \left|{\phi}\right\rangle \otimes \left|{\psi}\right\rangle.$$ The game operates as follows: First, the gambler flips a quantum coin that determines the luck state. For the component of the gambler in the state 'lucky' the bankroll jumps up by 1; conversely, if the gambler is in the state 'unlucky', the bankroll jumps down by 1. The gamble operator $G$ is then $$G = \sum_{B \in \mathbb{N}_N} \left|{B+1}\right\rangle\left\langle{B}\right| \otimes \left|{L}\right\rangle\left\langle{L}\right| + \sum_{B \in \mathbb{N}_N} \left|{B-1}\right\rangle\left\langle{B}\right| \otimes \left|{U}\right\rangle\left\langle{U}\right|. $$ The coin toss that determines the state of the gambler is taken to be the Hadamard coin, which in the basis $\left|{U}\right\rangle = (1,0)^\mathrm{T}$ and $\left|{L}\right\rangle = (0,1)^\mathrm{T}$ reads $$H = \frac{1}{\sqrt{2}}\begin{pmatrix}1 & \;\;1\ 1 & -1\\end{pmatrix}. $$ The total operator progressing the walk by one step, $S$, is then given by $$\begin{split} S &= G(\text{Id} \otimes H), \end{split} $$ where the identity operation on the bankroll space reflects the fact that the bankroll is not modified during the coin flip $H$. The state after $k$ gambles is $$S^k \ket{\Psi_0}, $$ where $\ket{\Psi_0}$ is the initial total state. In our chosen basis $\left|{U}\right\rangle\left\langle{U}\right| = \begin{pmatrix}1 & 0\ 0 & 0\\end{pmatrix}$ and $\left|{L}\right\rangle\left\langle{L}\right| = \begin{pmatrix}0 & 0\ 0 & 1\\end{pmatrix}$. Let us adopt the basis $\left|{B}\right\rangle = (0, \ldots, 1, \ldots, 0)^\mathrm{T}$ where the '1' occurs at position $B \in \mathbb{N}_N = \lbrace 1, \ldots, N-1 \rbrace$, and the remaining entries of the $(N-1)$-dimensional vector are zero. Then $$ \begin{split} \sum_{B \in \mathbb{N}_N} \left|{B-1}\right\rangle\left\langle{B}\right| &= \begin{pmatrix}0 & 1 & & & \ & 0 & 1& & \ & & \ddots & & \ & & & 0 & 1\ & & & & 0 \end{pmatrix}, \ & \ \sum_{B \in \mathbb{N}_N} \left|{B+1}\right\rangle\left\langle{B}\right| &= \begin{pmatrix}0 & & & & \ 1 & 0 & & & \ & & \ddots & & \ & & 1 & 0 & \ & & & 1 & 0 \end{pmatrix}, \end{split} $$ where the matrices are $(N - 1) \times (N - 1)$ in size. End of explanation
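As a quick sanity check of the operator algebra above, one can build small dense versions of these matrices and apply $S$ once to a gambler who starts fully 'lucky' at bankroll $k$: the amplitude should split evenly between $(k-1, U)$ and $(k+1, L)$. This is only an illustrative sketch with variable names of my own choosing (the notebook's sparse implementation is in the code block above):
import numpy as np
N = 8; dim = N - 1                                    # bankroll sites 1 .. N-1
up = np.eye(dim, k=-1)                                # sum_B |B+1><B|  (win)
down = np.eye(dim, k=+1)                              # sum_B |B-1><B|  (lose)
LL, UU = np.diag([0, 1]), np.diag([1, 0])             # |L><L| and |U><U| in the (U, L) basis
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
S = (np.kron(up, LL) + np.kron(down, UU)) @ np.kron(np.eye(dim), H)
k = 4
psi0 = np.kron(np.eye(dim)[:, k - 1], np.array([0.0, 1.0]))   # |B=k> (x) |L>
psi1 = S @ psi0
print(np.nonzero(psi1)[0], psi1[np.nonzero(psi1)])    # +1/sqrt(2) at (k-1, U), -1/sqrt(2) at (k+1, L)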
2,020
Given the following text description, write Python code to implement the functionality described below step by step Description: \title{Bitwise Behavior in myHDL Step1: myHDL Bit Indexing Bit Indexing is the act of selecting or assigning one of the bits in a Bit Vector Expected Indexing Selection Behavior Step2: which shows that when selecting a single bit from a BitVector that selection [0] is the Least Significant Bit (LSB) (inclusive behavior) while for the Most Significant Bit (MSB) will be the index of the BitVector length -1 (noninclusive behavior) Attempted Selection with Python Negative Warping Step3: This means that negative indexing using python's list selection wrap around is NOT implemented in a myHDL intbv Step4: nor is the negative wrapping supported by the use of the modbv Selecting above the MSB Step5: Thus selecting above the MSB will generate a 0 if the Bit Vector is not signed where as selecting above the MSB for a signed bit will produce a 1. Bit Selection of Signal Step7: The difference is that outside of a generator, bit selection of a signal using [] only returns a value and not a signal that is only returned using (). This is important to know since only a Signal can be converted to registers/wires in the conversion from myHDL to Verilog/VHDL myHDL Bit Selection Demo Step9: Bit Assignment Note Step10: Note that if the for loop range was increased beyond 7 an error would be triggered. Step11: Verilog Conversion Verilog Conversion Error Line 24 in the conversion of BitSelectDemo to BitSelectDemo.v is incorrect. The myHDL source line is RefS=Signal(intbv(-93)[8 Step12: \begin{figure} \centerline{\includegraphics[width=10cm]{BitSelectDemo_v_RTL.png}} \caption{\label{fig Step14: \begin{figure} \centerline{\includegraphics[width=10cm]{BitSelectDemo_vhd_RTL.png}} \caption{\label{fig Step15: Verilog Testbench Verilog Testbench Conversion Issue This testbench will work after assign RefS = 8'd-93; is changed to assign RefS = -8'd93; Step16: VHDL Testbench VHDL Testbench Conversion Issue This Testbench is not working in Vivado Step17: myHDL shift (&lt;&lt;/&gt;&gt;) behavior Left Shift (<<) Left Shifting with intbv Step18: Left Shifting with signed intbv Step19: Left Shifting with modbv Step20: As can be seen, Left shifting tacks on a number of zeros equivalent to the shift increment to the end of the binary expression for the value. This then increases the size of the needed register that the resulting value needs to set into for each left shift that does not undergo right bit cutoff Right Shift (&gt;&gt;) Right Shifting with intbv Step21: Right Shifting with signed intbv Step22: Right Shifting with modbv Step24: As can be seen, the right shift moves values (shifts) to the right by the shift increment while preserving the length of the register that is being shifted. While this means that overflow is not going to be in encountered. 
Right shifting trades that vulnerability for information loss as any information carried in the leftmost bits gets lost as it is shifted right beyond of the length of the register myHDL Shifting Demo Module Step26: myHDL Testing Step27: Verilog Conversion Unfortunately this is an unsynthesizable module as is due assign RefVal = 8'd-55; needing to be changed to assign RefVal = -8'd55; after wich the module is synthesizable Step28: \begin{figure} \centerline{\includegraphics[width=10cm]{ShiftingDemo_v_RTL.png}} \caption{\label{fig Step30: \begin{figure} \centerline{\includegraphics[width=10cm]{ShiftingDemo_vhd_RTL.png}} \caption{\label{fig Step31: Verilog Testbench Verilog Testbench Conversion Issue This Testbench will work after assign RefVal = 8'd-55; is changed to assign RefVal = -8'd55; Step32: VHDL Testbench VHDL Testbench Conversion Issue This Testbench is not working in Vivado Step33: myHDL concat behavior The concat function is an abbreviated name for the full name of concatenation which is that action that this operator performs by joining the bits of all the signals that are arguments to it into a new concatenated single binary Step35: myHDL concat Demo Step37: myHDL Testing Step38: Verilog Conversion Step39: \begin{figure} \centerline{\includegraphics[width=10cm]{ConcatDemo_v_RTL.png}} \caption{\label{fig Step41: \begin{figure} \centerline{\includegraphics[width=10cm]{ConcatDemo_vhd_RTL.png}} \caption{\label{fig Step42: Verilog Testbench Step43: VHDL Testbench VHDL Testbench Conversion Issue This Testbench is not working in Vivado Step44: myHDL Bitslicing Behavior These example values come from the future work with floating point implemented in fixed point architecture which is incredibly important for Digital Signal Processing as will be shown. For now, just understand that the example are based on multiplying two Q4.4 (8bit fixed point) number resulting in Q8.8 (16bit fixed point) product Slicing intbv the following is an example of truncation from 16bit to 8bit rounding that shows how bit slicing works in myHDL. The truncation bit slicing keeps values from the far left (Most Significant Bit (MSB) ) to the rightmost specified bit (Least Significant Bit (LSB)) Step45: Slicing Signed intbv Step46: Slicing modbv Step48: myHDL BitSlicing Demo Module Step50: myHDL Testing Step51: Verilog Conversion Verilog Conversion Issue The following is unsynthesizable since Verilog requires that the indexes in bit slicing (aka Part-selects) be constant values. Along with the error in assign RefVal = 16'd-1749; However, the generated Verilog code from BitSlicingDemo does hold merit in showing how the index values are mapped from myHDL to Verilog Step52: VHDL Conversion VHDL Conversion Issue The following is unsynthesizable since VHDL requires that the indexes in bit slicing (aka Part-selects) be constant values. However, the generated VHDL code from BitSlicingDemo does hold merit in showing how the index values are mapped from myHDL to VHDL Step54: myHDL to Verilog/VHDL Testbench Step55: Verilog Testbench Step56: VHDL Testbench
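Before the notebook code begins, here is a small standalone warm-up for the fixed-point slicing steps described above; the numbers and names are my own and it only assumes the myhdl package is installed. Multiplying two Q4.4 values held in 8-bit vectors gives a Q8.8 product in a 16-bit vector, and slicing [12:4] truncates it back to Q4.4:
from myhdl import intbv
a = intbv(0b00011000)[8:]             # Q4.4 encoding of 1.5  (1.5 * 16 = 24)
b = intbv(0b00100100)[8:]             # Q4.4 encoding of 2.25 (2.25 * 16 = 36)
prod = intbv(int(a) * int(b))[16:]    # Q8.8 product: 864 = 3.375 * 256
print(bin(int(prod)), int(prod[12:4]))   # prod[12:4] keeps bits 11..4, giving 54 = 3.375 * 16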
Python Code: #This notebook also uses the `(some) LaTeX environments for Jupyter` #https://github.com/ProfFan/latex_envs wich is part of the #jupyter_contrib_nbextensions package from myhdl import * from myhdlpeek import Peeker import numpy as np import pandas as pd import matplotlib.pyplot as plt %matplotlib inline from sympy import * init_printing() import random #https://github.com/jrjohansson/version_information %load_ext version_information %version_information myhdl, myhdlpeek, numpy, pandas, matplotlib, sympy, random #helper functions to read in the .v and .vhd generated files into python def VerilogTextReader(loc, printresult=True): with open(f'{loc}.v', 'r') as vText: VerilogText=vText.read() if printresult: print(f'***Verilog modual from {loc}.v***\n\n', VerilogText) return VerilogText def VHDLTextReader(loc, printresult=True): with open(f'{loc}.vhd', 'r') as vText: VerilogText=vText.read() if printresult: print(f'***VHDL modual from {loc}.vhd***\n\n', VerilogText) return VerilogText CountVal=17 BitSize=int(np.log2(CountVal))+1; BitSize Explanation: \title{Bitwise Behavior in myHDL: Selecting, Shifting, Concatenation, Slicing} \author{Steven K Armour} \maketitle <h1>Table of Contents<span class="tocSkip"></span></h1> <div class="toc" style="margin-top: 1em;"><ul class="toc-item"><li><span><a href="#References" data-toc-modified-id="References-1"><span class="toc-item-num">1&nbsp;&nbsp;</span>References</a></span></li><li><span><a href="#Libraries-and-Helper-functions" data-toc-modified-id="Libraries-and-Helper-functions-2"><span class="toc-item-num">2&nbsp;&nbsp;</span>Libraries and Helper functions</a></span></li><li><span><a href="#myHDL-Bit-Indexing" data-toc-modified-id="myHDL-Bit-Indexing-3"><span class="toc-item-num">3&nbsp;&nbsp;</span>myHDL Bit Indexing</a></span><ul class="toc-item"><li><span><a href="#Expected-Indexing-Selection-Behavior" data-toc-modified-id="Expected-Indexing-Selection-Behavior-3.1"><span class="toc-item-num">3.1&nbsp;&nbsp;</span>Expected Indexing Selection Behavior</a></span></li><li><span><a href="#Attempted-Selection-with-Python-Negative-Warping" data-toc-modified-id="Attempted-Selection-with-Python-Negative-Warping-3.2"><span class="toc-item-num">3.2&nbsp;&nbsp;</span>Attempted Selection with Python Negative Warping</a></span></li><li><span><a href="#Selecting-above-the-MSB" data-toc-modified-id="Selecting-above-the-MSB-3.3"><span class="toc-item-num">3.3&nbsp;&nbsp;</span>Selecting above the MSB</a></span></li><li><span><a href="#Bit-Selection-of-Signal" data-toc-modified-id="Bit-Selection-of-Signal-3.4"><span class="toc-item-num">3.4&nbsp;&nbsp;</span>Bit Selection of <code>Signal</code></a></span></li><li><span><a href="#myHDL-Bit-Selection-Demo" data-toc-modified-id="myHDL-Bit-Selection-Demo-3.5"><span class="toc-item-num">3.5&nbsp;&nbsp;</span>myHDL Bit Selection Demo</a></span><ul class="toc-item"><li><span><a href="#Bit-Assignment" data-toc-modified-id="Bit-Assignment-3.5.1"><span class="toc-item-num">3.5.1&nbsp;&nbsp;</span>Bit Assignment</a></span></li></ul></li><li><span><a href="#myHDL-Testing" data-toc-modified-id="myHDL-Testing-3.6"><span class="toc-item-num">3.6&nbsp;&nbsp;</span>myHDL Testing</a></span></li><li><span><a href="#Verilog-Conversion" data-toc-modified-id="Verilog-Conversion-3.7"><span class="toc-item-num">3.7&nbsp;&nbsp;</span>Verilog Conversion</a></span><ul class="toc-item"><li><span><a href="#Verilog-Conversion-Error" data-toc-modified-id="Verilog-Conversion-Error-3.7.1"><span 
class="toc-item-num">3.7.1&nbsp;&nbsp;</span>Verilog Conversion Error</a></span></li></ul></li><li><span><a href="#VHDL-Conversion" data-toc-modified-id="VHDL-Conversion-3.8"><span class="toc-item-num">3.8&nbsp;&nbsp;</span>VHDL Conversion</a></span><ul class="toc-item"><li><span><a href="#VHDL-Conversion-Issue" data-toc-modified-id="VHDL-Conversion-Issue-3.8.1"><span class="toc-item-num">3.8.1&nbsp;&nbsp;</span>VHDL Conversion Issue</a></span></li></ul></li><li><span><a href="#myHDL-to-Verilog/VHDL-Testbench" data-toc-modified-id="myHDL-to-Verilog/VHDL-Testbench-3.9"><span class="toc-item-num">3.9&nbsp;&nbsp;</span>myHDL to Verilog/VHDL Testbench</a></span><ul class="toc-item"><li><span><a href="#Verilog-Testbench" data-toc-modified-id="Verilog-Testbench-3.9.1"><span class="toc-item-num">3.9.1&nbsp;&nbsp;</span>Verilog Testbench</a></span><ul class="toc-item"><li><span><a href="#Verilog-Testbench-Conversion-Issue" data-toc-modified-id="Verilog-Testbench-Conversion-Issue-3.9.1.1"><span class="toc-item-num">3.9.1.1&nbsp;&nbsp;</span>Verilog Testbench Conversion Issue</a></span></li></ul></li><li><span><a href="#VHDL-Testbench" data-toc-modified-id="VHDL-Testbench-3.9.2"><span class="toc-item-num">3.9.2&nbsp;&nbsp;</span>VHDL Testbench</a></span><ul class="toc-item"><li><span><a href="#VHDL-Testbench-Conversion-Issue" data-toc-modified-id="VHDL-Testbench-Conversion-Issue-3.9.2.1"><span class="toc-item-num">3.9.2.1&nbsp;&nbsp;</span>VHDL Testbench Conversion Issue</a></span></li></ul></li></ul></li></ul></li><li><span><a href="#myHDL-shift-(&lt;&lt;/&gt;&gt;)-behavior" data-toc-modified-id="myHDL-shift-(<</>>)-behavior-4"><span class="toc-item-num">4&nbsp;&nbsp;</span>myHDL shift (<code>&lt;&lt;</code>/<code>&gt;&gt;</code>) behavior</a></span><ul class="toc-item"><li><span><a href="#Left-Shift-(&lt;&lt;)" data-toc-modified-id="Left-Shift-(<<)-4.1"><span class="toc-item-num">4.1&nbsp;&nbsp;</span>Left Shift (&lt;&lt;)</a></span><ul class="toc-item"><li><span><a href="#Left-Shifting-with-intbv" data-toc-modified-id="Left-Shifting-with-intbv-4.1.1"><span class="toc-item-num">4.1.1&nbsp;&nbsp;</span>Left Shifting with <code>intbv</code></a></span></li><li><span><a href="#Left-Shifting-with-signed-intbv" data-toc-modified-id="Left-Shifting-with-signed-intbv-4.1.2"><span class="toc-item-num">4.1.2&nbsp;&nbsp;</span>Left Shifting with signed <code>intbv</code></a></span></li><li><span><a href="#Left-Shifting-with-modbv" data-toc-modified-id="Left-Shifting-with-modbv-4.1.3"><span class="toc-item-num">4.1.3&nbsp;&nbsp;</span>Left Shifting with <code>modbv</code></a></span></li></ul></li><li><span><a href="#Right-Shift-(&gt;&gt;)" data-toc-modified-id="Right-Shift-(>>)-4.2"><span class="toc-item-num">4.2&nbsp;&nbsp;</span>Right Shift (<code>&gt;&gt;</code>)</a></span><ul class="toc-item"><li><span><a href="#Right-Shifting-with-intbv" data-toc-modified-id="Right-Shifting-with-intbv-4.2.1"><span class="toc-item-num">4.2.1&nbsp;&nbsp;</span>Right Shifting with <code>intbv</code></a></span></li><li><span><a href="#Right-Shifting-with-signed-intbv" data-toc-modified-id="Right-Shifting-with-signed-intbv-4.2.2"><span class="toc-item-num">4.2.2&nbsp;&nbsp;</span>Right Shifting with signed <code>intbv</code></a></span></li><li><span><a href="#Right-Shifting-with-modbv" data-toc-modified-id="Right-Shifting-with-modbv-4.2.3"><span class="toc-item-num">4.2.3&nbsp;&nbsp;</span>Right Shifting with <code>modbv</code></a></span></li></ul></li><li><span><a href="#myHDL-Shifting-Demo-Module" 
data-toc-modified-id="myHDL-Shifting-Demo-Module-4.3"><span class="toc-item-num">4.3&nbsp;&nbsp;</span>myHDL Shifting Demo Module</a></span></li><li><span><a href="#myHDL-Testing" data-toc-modified-id="myHDL-Testing-4.4"><span class="toc-item-num">4.4&nbsp;&nbsp;</span>myHDL Testing</a></span></li><li><span><a href="#Verilog-Conversion" data-toc-modified-id="Verilog-Conversion-4.5"><span class="toc-item-num">4.5&nbsp;&nbsp;</span>Verilog Conversion</a></span></li><li><span><a href="#VHDL-Conversion" data-toc-modified-id="VHDL-Conversion-4.6"><span class="toc-item-num">4.6&nbsp;&nbsp;</span>VHDL Conversion</a></span></li><li><span><a href="#myHDL-to-Verilog/VHDL-Testbench" data-toc-modified-id="myHDL-to-Verilog/VHDL-Testbench-4.7"><span class="toc-item-num">4.7&nbsp;&nbsp;</span>myHDL to Verilog/VHDL Testbench</a></span><ul class="toc-item"><li><span><a href="#Verilog-Testbench" data-toc-modified-id="Verilog-Testbench-4.7.1"><span class="toc-item-num">4.7.1&nbsp;&nbsp;</span>Verilog Testbench</a></span><ul class="toc-item"><li><span><a href="#Verilog-Testbench-Conversion-Issue" data-toc-modified-id="Verilog-Testbench-Conversion-Issue-4.7.1.1"><span class="toc-item-num">4.7.1.1&nbsp;&nbsp;</span>Verilog Testbench Conversion Issue</a></span></li></ul></li><li><span><a href="#VHDL-Testbench" data-toc-modified-id="VHDL-Testbench-4.7.2"><span class="toc-item-num">4.7.2&nbsp;&nbsp;</span>VHDL Testbench</a></span><ul class="toc-item"><li><span><a href="#VHDL-Testbench-Conversion-Issue" data-toc-modified-id="VHDL-Testbench-Conversion-Issue-4.7.2.1"><span class="toc-item-num">4.7.2.1&nbsp;&nbsp;</span>VHDL Testbench Conversion Issue</a></span></li></ul></li></ul></li></ul></li><li><span><a href="#myHDL-concat--behavior" data-toc-modified-id="myHDL-concat--behavior-5"><span class="toc-item-num">5&nbsp;&nbsp;</span>myHDL <code>concat</code> behavior</a></span><ul class="toc-item"><li><span><a href="#myHDL-concat-Demo" data-toc-modified-id="myHDL-concat-Demo-5.1"><span class="toc-item-num">5.1&nbsp;&nbsp;</span>myHDL <code>concat</code> Demo</a></span></li><li><span><a href="#myHDL-Testing" data-toc-modified-id="myHDL-Testing-5.2"><span class="toc-item-num">5.2&nbsp;&nbsp;</span>myHDL Testing</a></span></li><li><span><a href="#Verilog-Conversion" data-toc-modified-id="Verilog-Conversion-5.3"><span class="toc-item-num">5.3&nbsp;&nbsp;</span>Verilog Conversion</a></span></li><li><span><a href="#VHDL-Conversion" data-toc-modified-id="VHDL-Conversion-5.4"><span class="toc-item-num">5.4&nbsp;&nbsp;</span>VHDL Conversion</a></span></li><li><span><a href="#myHDL-to-Verilog/VHDL-Testbench" data-toc-modified-id="myHDL-to-Verilog/VHDL-Testbench-5.5"><span class="toc-item-num">5.5&nbsp;&nbsp;</span>myHDL to Verilog/VHDL Testbench</a></span><ul class="toc-item"><li><span><a href="#Verilog-Testbench" data-toc-modified-id="Verilog-Testbench-5.5.1"><span class="toc-item-num">5.5.1&nbsp;&nbsp;</span>Verilog Testbench</a></span></li><li><span><a href="#VHDL-Testbench" data-toc-modified-id="VHDL-Testbench-5.5.2"><span class="toc-item-num">5.5.2&nbsp;&nbsp;</span>VHDL Testbench</a></span><ul class="toc-item"><li><span><a href="#VHDL-Testbench-Conversion-Issue" data-toc-modified-id="VHDL-Testbench-Conversion-Issue-5.5.2.1"><span class="toc-item-num">5.5.2.1&nbsp;&nbsp;</span>VHDL Testbench Conversion Issue</a></span></li></ul></li></ul></li></ul></li><li><span><a href="#myHDL-Bitslicing-Behavior" data-toc-modified-id="myHDL-Bitslicing-Behavior-6"><span class="toc-item-num">6&nbsp;&nbsp;</span>myHDL Bitslicing 
Behavior</a></span><ul class="toc-item"><li><span><a href="#Slicing-intbv" data-toc-modified-id="Slicing-intbv-6.1"><span class="toc-item-num">6.1&nbsp;&nbsp;</span>Slicing <code>intbv</code></a></span></li><li><span><a href="#Slicing-Signed-intbv" data-toc-modified-id="Slicing-Signed-intbv-6.2"><span class="toc-item-num">6.2&nbsp;&nbsp;</span>Slicing Signed <code>intbv</code></a></span></li><li><span><a href="#Slicing-modbv" data-toc-modified-id="Slicing-modbv-6.3"><span class="toc-item-num">6.3&nbsp;&nbsp;</span>Slicing <code>modbv</code></a></span></li><li><span><a href="#myHDL-BitSlicing-Demo-Module" data-toc-modified-id="myHDL-BitSlicing-Demo-Module-6.4"><span class="toc-item-num">6.4&nbsp;&nbsp;</span>myHDL BitSlicing Demo Module</a></span></li><li><span><a href="#myHDL-Testing" data-toc-modified-id="myHDL-Testing-6.5"><span class="toc-item-num">6.5&nbsp;&nbsp;</span>myHDL Testing</a></span></li><li><span><a href="#Verilog-Conversion" data-toc-modified-id="Verilog-Conversion-6.6"><span class="toc-item-num">6.6&nbsp;&nbsp;</span>Verilog Conversion</a></span><ul class="toc-item"><li><span><a href="#Verilog-Conversion-Issue" data-toc-modified-id="Verilog-Conversion-Issue-6.6.1"><span class="toc-item-num">6.6.1&nbsp;&nbsp;</span>Verilog Conversion Issue</a></span></li></ul></li><li><span><a href="#VHDL-Conversion" data-toc-modified-id="VHDL-Conversion-6.7"><span class="toc-item-num">6.7&nbsp;&nbsp;</span>VHDL Conversion</a></span><ul class="toc-item"><li><span><a href="#VHDL-Conversion-Issue" data-toc-modified-id="VHDL-Conversion-Issue-6.7.1"><span class="toc-item-num">6.7.1&nbsp;&nbsp;</span>VHDL Conversion Issue</a></span></li></ul></li><li><span><a href="#myHDL-to-Verilog/VHDL-Testbench" data-toc-modified-id="myHDL-to-Verilog/VHDL-Testbench-6.8"><span class="toc-item-num">6.8&nbsp;&nbsp;</span>myHDL to Verilog/VHDL Testbench</a></span><ul class="toc-item"><li><span><a href="#Verilog-Testbench" data-toc-modified-id="Verilog-Testbench-6.8.1"><span class="toc-item-num">6.8.1&nbsp;&nbsp;</span>Verilog Testbench</a></span></li><li><span><a href="#VHDL-Testbench" data-toc-modified-id="VHDL-Testbench-6.8.2"><span class="toc-item-num">6.8.2&nbsp;&nbsp;</span>VHDL Testbench</a></span></li></ul></li></ul></li></ul></div> References @misc{myhdl_2018, title={Hardware-oriented types — MyHDL 0.10 documentation}, url={http://docs.myhdl.org/en/stable/manual/hwtypes.html}, journal={Docs.myhdl.org}, author={myHDL}, year={2018} }, @misc{vandenbout_2018, title={pygmyhdl 0.0.3 documentation}, url={https://xesscorp.github.io/pygmyhdl/docs/_build/singlehtml/index.html}, journal={Xesscorp.github.io}, author={Vandenbout, Dave}, year={2018} } Libraries and Helper functions End of explanation TV=intbv(-93)[8:].signed() print(f'Value:{int(TV)}, Binary {bin(TV)}') for i in range(len(TV)): print(f'Bit from LSB: {i}, Selected Bit: {int(TV[i])}') Explanation: myHDL Bit Indexing Bit Indexing is the act of selecting or assigning one of the bits in a Bit Vector Expected Indexing Selection Behavior End of explanation try: TV[-1] except ValueError: print("ValueError: negative shift count") Explanation: which shows that when selecting a single bit from a BitVector that selection [0] is the Least Significant Bit (LSB) (inclusive behavior) while for the Most Significant Bit (MSB) will be the index of the BitVector length -1 (noninclusive behavior) Attempted Selection with Python Negative Warping End of explanation TV=modbv(-93)[8:].signed() print(f'Value:{int(TV)}, Binary {bin(TV)}') try: TV[-1] except ValueError: 
print("ValueError: negative shift count") Explanation: This means that negative indexing using python's list selection wrap around is NOT implemented in a myHDL intbv End of explanation TV=intbv(93)[8:] TV_S=intbv(-93)[8:].signed() TV_M=modbv(-93)[8:].signed() print(f'`intbv`:Value:{int(TV)}, Binary {bin(TV)}, [8]:{int(TV[8])}, [9]:{int(TV[9])}') print(f'`intbv signed`:Value:{int(TV_S)}, Binary {bin(TV_S)}, [8]:{int(TV_S[8])}, [9]:{int(TV_S[9])}') print(f'`modbv`:Value:{int(TV_M)}, Binary {bin(TV_M)}, [8]:{int(TV_M[8])}, [9]:{int(TV_M[9])}') Explanation: nor is the negative wrapping supported by the use of the modbv Selecting above the MSB End of explanation TV=Signal(intbv(93)[8:]) TV[0], TV(0), TV[9], TV(9) Explanation: Thus selecting above the MSB will generate a 0 if the Bit Vector is not signed where as selecting above the MSB for a signed bit will produce a 1. Bit Selection of Signal End of explanation @block def BitSelectDemo(Index, Res, SignRes): Bit Selection Demo Input: Index(4BitVec): value for selection from internal refrances Output: Res(8BitVec): BitVector with Bit Location set from `Index` from refrance internal 8Bit `intbv` with value 93 SignRes(8BitVec Signed): signed BitVector with Bit Location set from `Index` from refrance internal signed 8Bit `intbv` with value -93 Ref=Signal(intbv(93)[8:]) RefS=Signal(intbv(-93)[8:].signed()) @always_comb def logic(): Res.next[Index]=Ref[Index] SignRes.next[Index]=RefS[Index] return instances() Explanation: The difference is that outside of a generator, bit selection of a signal using [] only returns a value and not a signal that is only returned using (). This is important to know since only a Signal can be converted to registers/wires in the conversion from myHDL to Verilog/VHDL myHDL Bit Selection Demo End of explanation Peeker.clear() Index=Signal(intbv(0)[4:]); Peeker(Index, 'Index') Res=Signal(intbv(0)[8:]); Peeker(Res, 'Res') SignRes=Signal(intbv(0)[8:].signed()); Peeker(SignRes, 'SignRes') DUT=BitSelectDemo(Index, Res, SignRes) def BitSelectDemo_TB(): myHDL only Testbench @instance def stimules(): for i in range(7): Index.next=i yield delay(1) raise StopSimulation() return instances() sim=Simulation(DUT, BitSelectDemo_TB(), *Peeker.instances()).run() Explanation: Bit Assignment Note: that in the above the module also shows how to perform bit selection assignment. The output signal Res or SignRes is assigned a value from the References at position Index but then the bit from the references is set to position Index in the outputs. Notice that the syntax is Variable.next[index]= The same structure is also used in setting bit slices so that for a big slice assignment is Variable.next[MSB:LSB]= myHDL Testing End of explanation Peeker.to_wavedrom('Index', 'Res', 'SignRes') BitSelectDemoData=Peeker.to_dataframe() BitSelectDemoData['Res Bin']=BitSelectDemoData['Res'].apply(lambda Row: bin(Row, 8), 1) BitSelectDemoData['SignRes Bin']=BitSelectDemoData['SignRes'].apply(lambda Row: bin(Row, 8), 1) BitSelectDemoData=BitSelectDemoData[['Index', 'Res', 'Res Bin', 'SignRes', 'SignRes Bin']] BitSelectDemoData Explanation: Note that if the for loop range was increased beyond 7 an error would be triggered. End of explanation DUT.convert() VerilogTextReader('BitSelectDemo'); Explanation: Verilog Conversion Verilog Conversion Error Line 24 in the conversion of BitSelectDemo to BitSelectDemo.v is incorrect. 
The myHDL source line is RefS=Signal(intbv(-93)[8:].signed()) but the converted line becomes assign RefS = 8'd-93; but this needs to instead become ``` assign RefS = -8'd93; `` inBitSelectDemo.v` End of explanation DUT.convert('VHDL') VHDLTextReader('BitSelectDemo'); Explanation: \begin{figure} \centerline{\includegraphics[width=10cm]{BitSelectDemo_v_RTL.png}} \caption{\label{fig:BSDVRTL} BitSelectDemo Verilog RTL schematic with corrected errors; Xilinx Vivado 2017.4} \end{figure} \begin{figure} \centerline{\includegraphics[width=10cm]{BitSelectDemo_v_SYN.png}} \caption{\label{fig:BSDVHDSYN} BitSelectDemo Verilog Synthesized Schematic with corrected errors; Xilinx Vivado 2017.4} \end{figure} VHDL Conversion VHDL Conversion Issue The resulting BitSelectDemo.vhd from BitSelectDemo contains a line that calls from a libary work.pck_myhdl_010.all that is created when this file is ran. Make sure to import this file along with BitSelectDemo.vhd. End of explanation @block def BitSelectDemo_TB_V_VHDL(): myHDL -> Verilog/VHDL Testbench for `BitSelectDemo` Index=Signal(intbv(0)[4:]) Res=Signal(intbv(0)[8:]) SignRes=Signal(intbv(0)[8:].signed()) @always_comb def print_data(): print(Index, Res, SignRes) DUT=BitSelectDemo(Index, Res, SignRes) @instance def stimules(): for i in range(7): Index.next=i yield delay(1) raise StopSimulation() return instances() TB=BitSelectDemo_TB_V_VHDL() Explanation: \begin{figure} \centerline{\includegraphics[width=10cm]{BitSelectDemo_vhd_RTL.png}} \caption{\label{fig:BSDVHDRTL} BitSelectDemo VHDL RTL schematic with corrected errors; Xilinx Vivado 2017.4} \end{figure} \begin{figure} \centerline{\includegraphics[width=10cm]{BitSelectDemo_vhd_SYN.png}} \caption{\label{fig:BSDVHDSYN} BitSelectDemo VHDL Synthesized Schematic with corrected errrors; Xilinx Vivado 2017.4} \end{figure} myHDL to Verilog/VHDL Testbench End of explanation TB.convert(hdl="Verilog", initial_values=True) VerilogTextReader('BitSelectDemo_TB_V_VHDL'); Explanation: Verilog Testbench Verilog Testbench Conversion Issue This testbench will work after assign RefS = 8'd-93; is changed to assign RefS = -8'd93; End of explanation TB.convert(hdl="VHDL", initial_values=True) VHDLTextReader('BitSelectDemo_TB_V_VHDL'); Explanation: VHDL Testbench VHDL Testbench Conversion Issue This Testbench is not working in Vivado End of explanation #Left Shift test with intbv #intialize TV=intbv(52)[8:] print(TV, bin(TV, 8)) #demenstrate left shifting with intbv for i in range(8): LSRes=TV<<i print(f'Left Shift<<{i}; Binary: {bin(LSRes)}, BitLen: {len(bin(LSRes))}, Value:{LSRes}') Explanation: myHDL shift (&lt;&lt;/&gt;&gt;) behavior Left Shift (<<) Left Shifting with intbv End of explanation #Left Shift test with intbv signed #intialize TV=intbv(-52)[8:].signed() print(TV, bin(TV, 8)) #demenstrate left shifting with intbv signed for i in range(8): LSRes=(TV<<i).signed() print(f'Left Shift<<{i}; Binary: {bin(LSRes)}, BitLen: {len(bin(LSRes))}, Value:{LSRes}') Explanation: Left Shifting with signed intbv End of explanation #Left Shift test with modbv #intialize TV=modbv(52)[8:] print(TV, bin(TV, 8)) #demenstrate left shifting with modbv for i in range(8): LSRes=(TV<<i).signed() print(f'Left Shift<<{i}; Binary: {bin(LSRes)}, BitLen: {len(bin(LSRes))}, Value:{LSRes}') Explanation: Left Shifting with modbv End of explanation #Right Shift test with intbv #intialize TV=intbv(52)[8:] print(TV, bin(TV, 8)) #demenstrate left shifting with intbv for i in range(8): LSRes=TV>>i print(f'Right Shift>>{i}; Binary: {bin(LSRes)}, BitLen: 
{len(bin(LSRes))}, Value:{LSRes}') Explanation: As can be seen, Left shifting tacks on a number of zeros equivalent to the shift increment to the end of the binary expression for the value. This then increases the size of the needed register that the resulting value needs to set into for each left shift that does not undergo right bit cutoff Right Shift (&gt;&gt;) Right Shifting with intbv End of explanation #Right Shift test with intbv signed #intialize TV=intbv(-52)[8:].signed() print(TV, bin(TV, 8)) #demenstrate left shifting with intbv signed for i in range(8): LSRes=(TV>>i) print(f'Right Shift>>{i}; Binary: {bin(LSRes)}, BitLen: {len(bin(LSRes))}, Value:{LSRes}') Explanation: Right Shifting with signed intbv End of explanation #Right Shift test with modbv #intialize TV=modbv(52)[8:] print(TV, bin(TV, 8)) #demenstrate left shifting with modbv for i in range(8): LSRes=(TV>>i) print(f'Right Shift>>{i}; Binary: {bin(LSRes)}, BitLen: {len(bin(LSRes))}, Value:{LSRes}') Explanation: Right Shifting with modbv End of explanation @block def ShiftingDemo(ShiftVal, RSRes, LSRes): Module to Demo Shifting Behavior in myHDL refrance value -55 8Bit Input: ShiftVal(4BitVec): shift amount, for this demo to not use values greater then 7 Output: RSRes(8BitVec Signed): output of Right Shifting LSRes (15BitVec Signed): output of Left Shifting RefVal=Signal(intbv(-55)[8:].signed()) @always_comb def logic(): RSRes.next=RefVal>>ShiftVal LSRes.next=RefVal<<ShiftVal return instances() Explanation: As can be seen, the right shift moves values (shifts) to the right by the shift increment while preserving the length of the register that is being shifted. While this means that overflow is not going to be in encountered. Right shifting trades that vulnerability for information loss as any information carried in the leftmost bits gets lost as it is shifted right beyond of the length of the register myHDL Shifting Demo Module End of explanation Peeker.clear() ShiftVal=Signal(intbv()[4:]); Peeker(ShiftVal, 'ShiftVal') RSRes=Signal(intbv()[8:].signed()); Peeker(RSRes, 'RSRes') LSRes=Signal(intbv()[15:].signed()); Peeker(LSRes, 'LSRes') DUT=ShiftingDemo(ShiftVal, RSRes, LSRes) def ShiftingDemo_TB(): myHDL only Testbench for `ShiftingDemo` @instance def stimules(): for i in range(8): ShiftVal.next=i yield delay(1) raise StopSimulation() return instances() sim=Simulation(DUT, ShiftingDemo_TB(), *Peeker.instances()).run() Peeker.to_wavedrom('ShiftVal', 'LSRes', 'RSRes'); Peeker.to_dataframe()[['ShiftVal', 'LSRes', 'RSRes']] Explanation: myHDL Testing End of explanation DUT.convert() VerilogTextReader('ShiftingDemo'); Explanation: Verilog Conversion Unfortunately this is an unsynthesizable module as is due assign RefVal = 8'd-55; needing to be changed to assign RefVal = -8'd55; after wich the module is synthesizable End of explanation DUT.convert(hdl='VHDL') VHDLTextReader('ShiftingDemo'); Explanation: \begin{figure} \centerline{\includegraphics[width=10cm]{ShiftingDemo_v_RTL.png}} \caption{\label{fig:SDVRTL} ShiftingDemo Verilog RTL schematic with corrected errors; Xilinx Vivado 2017.4} \end{figure} \begin{figure} \centerline{\includegraphics[width=10cm]{ShiftingDemo_v_SYN.png}} \caption{\label{fig:SDVSYN} ShiftingDemo Verilog Synthesized Schematic with corrected errors; Xilinx Vivado 2017.4} \end{figure} VHDL Conversion End of explanation @block def ShiftingDemo_TB_V_VHDL(): myHDL -> verilog/VHDL testbench for `ShiftingDemo` ShiftVal=Signal(intbv()[4:]) RSRes=Signal(intbv()[8:].signed()) LSRes=Signal(intbv()[15:].signed()) 
@always_comb def print_data(): print(ShiftVal, RSRes, LSRes) DUT=ShiftingDemo(ShiftVal, RSRes, LSRes) @instance def stimules(): for i in range(8): ShiftVal.next=i yield delay(1) raise StopSimulation() return instances() TB=ShiftingDemo_TB_V_VHDL() Explanation: \begin{figure} \centerline{\includegraphics[width=10cm]{ShiftingDemo_vhd_RTL.png}} \caption{\label{fig:SDVHDRTL} ShiftingDemo VHDL RTL schematic with corrected errors; Xilinx Vivado 2017.4} \end{figure} \begin{figure} \centerline{\includegraphics[width=10cm]{ShiftingDemo_vhd_SYN.png}} \caption{\label{fig:SDVHDSYN} ShiftingDemo VHDL Synthesized Schematic with corrected errors; Xilinx Vivado 2017.4} \end{figure} myHDL to Verilog/VHDL Testbench End of explanation TB.convert(hdl="Verilog", initial_values=True) VerilogTextReader('ShiftingDemo_TB_V_VHDL'); Explanation: Verilog Testbench Verilog Testbench Conversion Issue This Testbench will work after assign RefVal = 8'd-55; is changed to assign RefVal = -8'd55; End of explanation TB.convert(hdl="VHDL", initial_values=True) VHDLTextReader('ShiftingDemo_TB_V_VHDL'); Explanation: VHDL Testbench VHDL Testbench Conversion Issue This Testbench is not working in Vivado End of explanation RefVal=intbv(25)[6:]; RefVal, bin(RefVal, 6) Result=concat(True, RefVal); Result, bin(Result) ResultSigned=concat(True, RefVal).signed(); ResultSigned, bin(ResultSigned) Explanation: myHDL concat behavior The concat function is an abbreviated name for the full name of concatenation which is that action that this operator performs by joining the bits of all the signals that are arguments to it into a new concatenated single binary End of explanation @block def ConcatDemo(Res, ResS): `concat` demo Input: None Ouput: Res(7BitVec): concat result Res(7BitVec Signed): concat result that is signed RefVal=Signal(intbv(25)[6:]) @always_comb def logic(): Res.next=concat(True, RefVal) ResS.next=concat(True, RefVal).signed() return instances() Explanation: myHDL concat Demo End of explanation Peeker.clear() Res=Signal(intbv(0)[7:]); Peeker(Res, 'Res') ResS=Signal(intbv(0)[7:].signed()); Peeker(ResS, ResS) DUT=ConcatDemo(Res, ResS) def ConcatDemo_TB(): myHDL only Testbench for `ConcatDemo` @instance def stimules(): for i in range(2): yield delay(1) raise StopSimulation() return instances() sim=Simulation(DUT, ConcatDemo_TB(), *Peeker.instances()).run() Peeker.to_wavedrom() Explanation: myHDL Testing End of explanation DUT.convert() VerilogTextReader('ConcatDemo'); Explanation: Verilog Conversion End of explanation DUT.convert('VHDL') VHDLTextReader('ConcatDemo'); Explanation: \begin{figure} \centerline{\includegraphics[width=10cm]{ConcatDemo_v_RTL.png}} \caption{\label{fig:CDVRTL} ConcatDemo Verilog RTL schematic; Xilinx Vivado 2017.4} \end{figure} \begin{figure} \centerline{\includegraphics[width=10cm]{ConcatDemo_v_SYN.png}} \caption{\label{fig:CDVSYN} ConcatDemo Verilog Synthesized Schematic; Xilinx Vivado 2017.4} \end{figure} VHDL Conversion End of explanation @block def ConcatDemo_TB_V_VHDL(): myHDL-> Verilog/VHDL Testbench Res=Signal(intbv(0)[7:]) ResS=Signal(intbv(0)[7:].signed()) @always_comb def print_data(): print(Res, ResS) DUT=ConcatDemo(Res, ResS) @instance def stimules(): for i in range(2): yield delay(1) raise StopSimulation() return instances() TB=ConcatDemo_TB_V_VHDL() Explanation: \begin{figure} \centerline{\includegraphics[width=10cm]{ConcatDemo_vhd_RTL.png}} \caption{\label{fig:CDVHDRTL} {ConcatDemo VHDL RTL schematic; Xilinx Vivado 2017.4} \end{figure} \begin{figure} 
\centerline{\includegraphics[width=10cm]{ConcatDemo_vhd_SYN.png}} \caption{\label{fig:CDVHDSYN} ConcatDemo VHDL Synthesized Schematic; Xilinx Vivado 2017.4} \end{figure} myHDL to Verilog/VHDL Testbench End of explanation TB.convert(hdl="Verilog", initial_values=True) VerilogTextReader('ConcatDemo_TB_V_VHDL'); Explanation: Verilog Testbench End of explanation TB.convert(hdl="VHDL", initial_values=True) VHDLTextReader('ConcatDemo_TB_V_VHDL'); Explanation: VHDL Testbench VHDL Testbench Conversion Issue This Testbench is not working in Vivado End of explanation TV=intbv(1749)[16:] print(f'int 1749 in bit is {bin(TV, len(TV))}') for j in range(16): try: Trunc=TV[16:j] print(f'Binary: {bin(Trunc, len(Trunc))}, Bits: {len(Trunc)}, rep: {int(Trunc)}, MSB:15, LSB: {j}') except ValueError: print ('MSB {15} is <= LSB {j}') TV=intbv(1749)[16:] print(f'int 1749 in bit is {bin(TV, len(TV))}') for i in reversed(range(16+1)): try: Trunc=TV[i:0] print(f'Binary: {bin(Trunc, len(Trunc))}, Bits: {len(Trunc)}, rep: {int(Trunc)}, MSB:{i}, LSB: {0}') except ValueError: print ('MSB is <= LSB index') Explanation: myHDL Bitslicing Behavior These example values come from the future work with floating point implemented in fixed point architecture which is incredibly important for Digital Signal Processing as will be shown. For now, just understand that the example are based on multiplying two Q4.4 (8bit fixed point) number resulting in Q8.8 (16bit fixed point) product Slicing intbv the following is an example of truncation from 16bit to 8bit rounding that shows how bit slicing works in myHDL. The truncation bit slicing keeps values from the far left (Most Significant Bit (MSB) ) to the rightmost specified bit (Least Significant Bit (LSB)) End of explanation TV=intbv(-1749)[16:].signed() print(f'int -1749 in bit is {bin(TV, len(TV))}') for j in range(16): try: Trunc=TV[16:j].signed() print(f'Binary: {bin(Trunc, len(Trunc))}, Bits: {len(Trunc)}, rep: {int(Trunc)}, MSB:15, LSB: {j}') except ValueError: print ('MSB {15} is <= LSB {j}') TV=intbv(-1749)[16:].signed() print(f'int -1749 in bit is {bin(TV, len(TV))}') for i in reversed(range(16+1)): try: Trunc=TV[i:0].signed() print(f'Binary: {bin(Trunc, len(Trunc))}, Bits: {len(Trunc)}, rep: {int(Trunc)}, MSB:{i}, LSB: {0}') except ValueError: print ('MSB is <= LSB index') Explanation: Slicing Signed intbv End of explanation TV=modbv(1749)[16:] print(f'int 1749 in bit is {bin(TV, len(TV))}') for j in range(16): try: Trunc=TV[16:j] print(f'Binary: {bin(Trunc, len(Trunc))}, Bits: {len(Trunc)}, rep: {int(Trunc)}, MSB:15, LSB: {j}') except ValueError: print ('MSB {15} is <= LSB {j}') TV=modbv(1749)[16:] print(f'int 1749 in bit is {bin(TV, len(TV))}') for i in reversed(range(16+1)): try: Trunc=TV[i:0] print(f'Binary: {bin(Trunc, len(Trunc))}, Bits: {len(Trunc)}, rep: {int(Trunc)}, MSB:{i}, LSB: {0}') except ValueError: print ('MSB is <= LSB index') Explanation: Slicing modbv End of explanation @block def BitSlicingDemo(MSB, LSB, Res): Demenstration Module for Bit Slicing in myHDL Inputs: MSB (5BitVec): Most Signficant Bit Index Must be > LSB, ex: if LSB==0 MSB must range between 1 and 15 LSB (5BitVec): Lest Signficant Bit Index Must be < MSB ex: if MSB==15 LSB must range beteen 0 and 15 Outputs: Res(16BitVec Signed): Result of the slicing operation from Refrance Vales (hard coded in module) -1749 (16BitVec Signed) RefVal=Signal(intbv(-1749)[16:].signed()) @always_comb def logic(): Res.next=RefVal[MSB:LSB].signed() return instances() Explanation: myHDL BitSlicing Demo Module End 
of explanation Peeker.clear() MSB=Signal(intbv(16)[5:]); Peeker(MSB, 'MSB') LSB=Signal(intbv(0)[5:]); Peeker(LSB, 'LSB') Res=Signal(intbv(0)[16:].signed()); Peeker(Res, 'Res') DUT=BitSlicingDemo(MSB, LSB, Res) def BitslicingDemo_TB(): myHDL only Testbench for `BitSlicingDemo` @instance def stimules(): for j in range(15): MSB.next=16 LSB.next=j yield delay(1) for i in reversed(range(1, 16)): MSB.next=i LSB.next=0 yield delay(1) raise StopSimulation() return instances() sim=Simulation(DUT, BitslicingDemo_TB(), *Peeker.instances()).run() Peeker.to_wavedrom('MSB', 'LSB', 'Res') Peeker.to_dataframe()[['MSB', 'LSB', 'Res']] Explanation: myHDL Testing End of explanation DUT.convert() VerilogTextReader('BitSlicingDemo'); Explanation: Verilog Conversion Verilog Conversion Issue The following is unsynthesizable since Verilog requires that the indexes in bit slicing (aka Part-selects) be constant values. Along with the error in assign RefVal = 16'd-1749; However, the generated Verilog code from BitSlicingDemo does hold merit in showing how the index values are mapped from myHDL to Verilog End of explanation DUT.convert(hdl='VHDL') VHDLTextReader('BitSlicingDemo'); Explanation: VHDL Conversion VHDL Conversion Issue The following is unsynthesizable since VHDL requires that the indexes in bit slicing (aka Part-selects) be constant values. However, the generated VHDL code from BitSlicingDemo does hold merit in showing how the index values are mapped from myHDL to VHDL End of explanation @block def BitslicingDemo_TB_V_VHDL(): myHDL -> Verilog/VHDL Testbench for `BitSlicingDemo` MSB=Signal(intbv(16)[5:]) LSB=Signal(intbv(0)[5:]) Res=Signal(intbv(0)[16:].signed()) @always_comb def print_data(): print(MSB, LSB, Res) DUT=BitSlicingDemo(MSB, LSB, Res) @instance def stimules(): for j in range(15): MSB.next=16 LSB.next=j yield delay(1) #!!! reversed is not being converted #for i in reversed(range(1, 16)): # MSB.next=i # LSB.next=0 # yield delay(1) raise StopSimulation() return instances() TB=BitslicingDemo_TB_V_VHDL() Explanation: myHDL to Verilog/VHDL Testbench End of explanation TB.convert(hdl="Verilog", initial_values=True) VerilogTextReader('BitslicingDemo_TB_V_VHDL'); Explanation: Verilog Testbench End of explanation TB.convert(hdl="VHDL", initial_values=True) VHDLTextReader('BitslicingDemo_TB_V_VHDL'); Explanation: VHDL Testbench End of explanation
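To round off the record above, one more standalone sketch of the concat convention it exercises (assuming myhdl is installed; the values are my own): the first argument lands in the most significant position of the result.
from myhdl import intbv, concat
hi = intbv(0b101)[3:]
lo = intbv(0b0011)[4:]
joined = concat(hi, lo)                  # 7-bit result 0b1010011
print(bin(int(joined)), len(joined))     # value 83, width 3 + 4 = 7 bits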
2,021
Given the following text description, write Python code to implement the functionality described below step by step Description: Statements Assessment Solutions Use for, split(), and if to create a Statement that will print out words that start with 's' Step1: Use range() to print all the even numbers from 0 to 10. Step2: Use List comprehension to create a list of all numbers between 1 and 50 that are divisble by 3. Step3: Go through the string below and if the length of a word is even print "even!" Step4: Write a program that prints the integers from 1 to 100. But for multiples of three print "Fizz" instead of the number, and for the multiples of five print "Buzz". For numbers which are multiples of both three and five print "FizzBuzz". Step5: Use List Comprehension to create a list of the first letters of every word in the string below
Python Code: st = 'Print only the words that start with s in this sentence' for word in st.split(): if word[0] == 's': print word Explanation: Statements Assessment Solutions Use for, split(), and if to create a Statement that will print out words that start with 's': End of explanation range(0,11,2) Explanation: Use range() to print all the even numbers from 0 to 10. End of explanation [x for x in range(1,50) if x%3 == 0] Explanation: Use List comprehension to create a list of all numbers between 1 and 50 that are divisble by 3. End of explanation st = 'Print every word in this sentence that has an even number of letters' for word in st.split(): if len(word)%2 == 0: print word+" <-- has an even length!" Explanation: Go through the string below and if the length of a word is even print "even!" End of explanation for num in xrange(1,101): if num % 5 == 0 and num % 3 == 0: print "FizzBuzz" elif num % 3 == 0: print "Fizz" elif num % 5 == 0: print "Buzz" else: print num Explanation: Write a program that prints the integers from 1 to 100. But for multiples of three print "Fizz" instead of the number, and for the multiples of five print "Buzz". For numbers which are multiples of both three and five print "FizzBuzz". End of explanation st = 'Create a list of the first letters of every word in this string' [word[0] for word in st.split()] Explanation: Use List Comprehension to create a list of the first letters of every word in the string below: End of explanation
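One small aside on the solutions above: they are written for Python 2 (bare print statements and xrange). A Python 3 rendering of the FizzBuzz solution, shown here only for comparison and not part of the original notebook, would be:
for num in range(1, 101):
    if num % 3 == 0 and num % 5 == 0:
        print("FizzBuzz")
    elif num % 3 == 0:
        print("Fizz")
    elif num % 5 == 0:
        print("Buzz")
    else:
        print(num)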
2,022
Given the following text description, write Python code to implement the functionality described below step by step Description: RNN Character Model + Lots More This example trains a RNN to create plausible words from a corpus. But it includes lots of interesting "bells and whistles" The data used for training is one of Step1: Network Parameters from Corpus Find the set of characters used in the corpus and construct mappings between characters, integer indices, and one hot encodings Step2: Unigram frequency distribution Step3: Bigram frequency distribution Step4: Trigram frequency distribution Step5: Generate base-line scores Step6: RNN Main Parameters Step7: An RNN 'discriminator' Instead of having a binary 'YES/NO' decision about whether a word is valid (via a lookup in the vocabulary), it may make it simpler to train a word-generator if we can assign a probability that a given word is valid. To do this, let's create a recurrent neural network (RNN) that accepts a (one-hot-encoded) word as input, and (at the end of the sequence) gives us an estimate of the probability that the word is valid. Actually, rather than descriminate according to whether the word is actually valid, let's 'just' try to decide whether it was produced directly from the dictionary or from the generate_bigram_word() source. This can be tested by giving it lists of actual words, and lists of words generated by generate_bigram_word() and seeing whether they can be correctly classified. The decision about what to do in the 12% of cases when the bigram function results in a valid word can be left until later... (since the distribution is so heavily skewed towards producing non-words). Create Training / Testing dataset And a 'batch generator' function that delivers data in the right format for RNN training Step8: Lasagne RNN tutorial (including conventions &amp; rationale) http Step9: Define the Descriminating Network Symbolically Step10: Loss Function for Training Step11: ... and the Training and Prediction functions Step12: Finally, the Discriminator Training Loop Training takes a while Step13: Save the learned parameters Uncomment the pickle.dump() to actually save to disk Step14: Load pretrained weights into network Step15: Check that the Discriminator Network 'works' Step16: Create a Generative network Next, let's build an RNN that produces text, and train it using (a) a pure dictionary look-up, and (b) the correctness signal from the Discriminator above. Plan of attack Step17: Use the Generative Network to create sample words The network above can be used to generate text... The following set-up allows for the output of the RNN at each timestep to be mixed with the letter frequency that the bigram model would suggest - in a proportion bigram_overlay which can vary from 0 (being solely RNN derived) to 1.0 (being solely bigram frequencies, with the RNN output being disregarded). The input is a 'random field' matrix that is used to chose each letter in each slot from the generated probability distribution. Once a space is output for a specific word, then it stops being extended (equivalently, the mask is set to zero going forwards). Once spaces have been observed for all words (or the maximum length reached), the process ends, and a list of the created words is returned. Step18: Remember the initial (random) Network State This will come in handy when we need to reset the network back to 'untrained' later. 
Step19: Now, train the Generator RNN based on the Dictionary itself Once we have an output word, let's reward the RNN based on a specific training signal. We'll encapsulate the training in a function that takes the input signal as a parameter, so that we can try other training schemes (later). Step20: How are we doing? Step21: Use training signal from Discriminator Step22: How are we doing?
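Before the notebook code, a tiny standalone sketch of the probability-mixing idea from the word-generation step above; the name bigram_overlay follows that description, while the helper itself and its arguments are stand-ins of my own:
import numpy as np
def mix_and_sample(rnn_probs, bigram_probs, bigram_overlay, rng=np.random):
    # blend the RNN's per-character distribution with the bigram distribution, renormalize, sample one index
    p = (1.0 - bigram_overlay) * np.asarray(rnn_probs) + bigram_overlay * np.asarray(bigram_probs)
    p = p / p.sum()
    return rng.choice(len(p), p=p)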
Python Code: import numpy as np import theano import lasagne #from lasagne.utils import floatX import pickle import gzip import random import time WORD_LENGTH_MAX = 16 # Load an interesting corpus (vocabulary words with frequencies from 1-billion-word-corpus) : with gzip.open('../data/RNN/ALL_1-vocab.txt.gz') as f: lines = [ l.strip().lower().split() for l in f.readlines() ] lines[0:10] # Here are our characters : '[a-z\- ]' import re invalid_chars = r'[^a-z\- ]' lines_valid = [ l for l in lines if not re.search(invalid_chars, l[0]) ] #lines_valid = lines_valid[0:50000] lines_valid[0:10], len(lines_valid) # /usr/share/dict/linux.words with open('/usr/share/dict/linux.words','rt') as f: linux_words = [ l.strip() for l in f.readlines() ] linux_wordset = set(linux_words) #'united' in wordset lines_filtered = [l for l in lines_valid if len(l[0])>=3 # Require each word to have 3 or more characters and l[0] in linux_wordset # Require each word to be found in regular dictionary and len(l[0])<WORD_LENGTH_MAX # And limit length (to avoid crazy roll-out of RNN) ] lines_filtered[0:10], len(lines_filtered) # Split apart the words and their frequencies (Assume these are in sorted order, at least initial few) words = [ l[0] for l in lines_filtered ] wordset = set(words) wordsnp = np.array(words) freqs_raw = np.array( [ int(l[1]) for l in lines_filtered ] ) freq_tot = float(freqs_raw.sum()) # Frequency weighting adjustments freqs = freqs_raw / freq_tot cutoff_index = 30 # All words with highter frequencies will be 'limited' at this level freqs[0:cutoff_index] = freqs[cutoff_index] freqs = freqs / freqs.sum() freqs[0:50] test_cum = np.array( [.1, .5, .9, 1.0] ) test_cum.searchsorted([ .05, 0.45, .9, .95]) # Cumulative frequency, so that we can efficiently pick weighted random words... # using http://docs.scipy.org/doc/numpy/reference/generated/numpy.searchsorted.html freqs_cum = freqs.cumsum() freqs_cum[:10], freqs_cum[-10:], Explanation: RNN Character Model + Lots More This example trains a RNN to create plausible words from a corpus. 
But it includes lots of interesting "bells and whistles" The data used for training is one of : * a vocabulary/dictionary collected from the 1-Billion-Word Corpus * a list of Indian names (voters rolls, by year) : TODO Adversarial networks : http://carpedm20.github.io/faces/ Doing this with RNNs may be pretty novel : https://www.quora.com/Can-generative-adversarial-networks-be-used-in-sequential-data-in-recurrent-neural-networks-How-effective-would-they-be End of explanation CHARS_VALID = "abcdefghijklmnopqrstuvwxyz- " CHARS_SIZE = len(CHARS_VALID) CHAR_TO_IX = {c: i for i, c in enumerate(CHARS_VALID)} IX_TO_CHAR = {i: c for i, c in enumerate(CHARS_VALID)} CHAR_TO_ONEHOT = {c: np.eye(CHARS_SIZE)[i] for i, c in enumerate(CHARS_VALID)} #CHAR_TO_IX Explanation: Network Parameters from Corpus Find the set of characters used in the corpus and construct mappings between characters, integer indices, and one hot encodings End of explanation # Single letter frequencies unigram_freq = np.zeros( (CHARS_SIZE,)) idx_end = CHAR_TO_IX[' '] for i,w in enumerate(words): word_freq = freqs[i] for c in w: unigram_freq[ CHAR_TO_IX[c] ] += word_freq unigram_freq[ idx_end ] += word_freq unigram_freq /= unigram_freq.sum() unigram_freq_cum = unigram_freq.cumsum() [ (CHARS_VALID[i], "%6.3f" % f) for i,f in enumerate(unigram_freq.tolist()) ] #CHARS_VALID[ unigram_freq_cum.searchsorted(0.20) ] def unigram_word(): s=[] while True: idx = np.searchsorted(unigram_freq_cum, np.random.uniform()) c = IX_TO_CHAR[idx] if c==' ': if len(s)>0: break else: continue s.append(c) return ''.join(s) ' '.join([ unigram_word() for i in range(0,20) ]) Explanation: Unigram frequency distribution End of explanation # two-letter frequencies bigram_freq = np.zeros( (CHARS_SIZE,CHARS_SIZE) ) for i,w in enumerate(words): w2 = ' '+w+' ' word_freq = freqs[i] for j in range(0, len(w2)-1): bigram_freq[ CHAR_TO_IX[ w2[j] ], CHAR_TO_IX[ w2[j+1] ] ] += word_freq #[ (CHARS_VALID[i], "%6.3f" % f) for i,f in enumerate(bigram_freq[ CHAR_TO_IX['q'] ].tolist()) ] #bigram_freq.sum(axis=1)[CHAR_TO_IX['q']] bigram_freq /= bigram_freq.sum(axis=1)[:, np.newaxis] # Trick to enable unflattening of sum() bigram_freq_cum = bigram_freq.cumsum(axis=1) #[ (CHARS_VALID[i], "%6.3f" % f) for i,f in enumerate(bigram_freq_cum[ CHAR_TO_IX['q'] ].tolist()) ] #bigram_freq.sum(axis=1)[CHAR_TO_IX['q']] #(bigram_freq/ bigram_freq.sum(axis=1)).sum(axis=0) #bigram_freq.sum(axis=1)[CHAR_TO_IX['q']] #bigram_freq[CHAR_TO_IX['q'], :].sum() #(bigram_freq / bigram_freq.sum(axis=1)[:, np.newaxis]).cumsum(axis=1) #Letter relative frequency for letters following 'q' [ (CHARS_VALID[i], "%6.3f" % f) for i,f in enumerate(bigram_freq[ CHAR_TO_IX['q'] ].tolist()) if f>0.001] #bigram_freq_cum[4] def bigram_word(): s=[] idx_last = CHAR_TO_IX[' '] while True: idx = np.searchsorted(bigram_freq_cum[idx_last], np.random.uniform()) c = IX_TO_CHAR[idx] if c==' ': if len(s)>0: #if len(s)<50: continue break else: continue s.append(c) idx_last=idx return ''.join(s) ' '.join([ bigram_word() for i in range(0,20) ]) Explanation: Bigram frequency distribution End of explanation # Three-letter frequencies trigram_freq = np.zeros( (CHARS_SIZE,CHARS_SIZE,CHARS_SIZE) ) for i,w in enumerate(words): w3 = ' '+w+' ' word_freq = freqs[i] for j in range(0, len(w3)-2): trigram_freq[ CHAR_TO_IX[ w3[j] ], CHAR_TO_IX[ w3[j+1] ], CHAR_TO_IX[ w3[j+2] ] ] += word_freq trigram_freq /= trigram_freq.sum(axis=2)[:, :, np.newaxis] # Trick to enable unflattening of sum() trigram_freq_cum = trigram_freq.cumsum(axis=2) [ "ex-%s 
%6.3f" % (CHARS_VALID[i], f) for i,f in enumerate(trigram_freq[ CHAR_TO_IX['e'], CHAR_TO_IX['x'] ].tolist()) if f>0.001 ] def trigram_word(): s=[] idx_1 = idx_2 = CHAR_TO_IX[' '] while True: idx = np.searchsorted(trigram_freq_cum[idx_1, idx_2], np.random.uniform()) c = IX_TO_CHAR[idx] if c==' ': if len(s)>0: #if len(s)<50: continue break else: continue s.append(c) idx_1, idx_2 = idx_2, idx return ''.join(s) ' '.join([ trigram_word() for i in range(0,20) ]) Explanation: Trigram frequency distribution End of explanation sample_size=10000 ngram_hits = [0,0,0] for w in [ unigram_word() for i in range(0, sample_size) ]: if w in wordset: ngram_hits[0] += 1 #print("%s %s" % (("YES" if w in wordset else " - "), w, )) for w in [ bigram_word() for i in range(0, sample_size) ]: if w in wordset: ngram_hits[1] += 1 #print("%s %s" % (("YES" if w in wordset else " - "), w, )) for w in [ trigram_word() for i in range(0, sample_size) ]: if w in wordset: ngram_hits[2] += 1 #print("%s %s" % (("YES" if w in wordset else " - "), w, )) for i,hits in enumerate(ngram_hits): print("%d-gram : %4.2f%%" % (i+1, hits*100./sample_size )) #[ (i,w) for i,w in enumerate(words) if 'mq' in w] # Find the distribution of unigrams by sampling (sanity check) if False: sample_size=1000 arr=[] for w in [ unigram_word() for i in range(0, sample_size) ]: arr.append(w) s = ' '.join(arr) s_len = len(s) for c in CHARS_VALID: f = len(s.split(c))-1 print("%s -> %6.3f%%" % (c, f*100./s_len)) Explanation: Generate base-line scores End of explanation BATCH_SIZE = 64 RNN_HIDDEN_SIZE = CHARS_SIZE GRAD_CLIP_BOUND = 5.0 Explanation: RNN Main Parameters End of explanation def batch_dictionary(size=BATCH_SIZE/2): uniform_vars = np.random.uniform( size=(size,) ) idx = freqs_cum.searchsorted(uniform_vars) return wordsnp[ idx ].tolist() def batch_bigram(size=BATCH_SIZE/2): return [ bigram_word()[0:WORD_LENGTH_MAX] for i in range(size) ] # Test it out #batch_test = lambda : batch_dictionary(size=4) batch_test = lambda : batch_bigram(size=4) print(batch_test()) print(batch_test()) print(batch_test()) Explanation: An RNN 'discriminator' Instead of having a binary 'YES/NO' decision about whether a word is valid (via a lookup in the vocabulary), it may make it simpler to train a word-generator if we can assign a probability that a given word is valid. To do this, let's create a recurrent neural network (RNN) that accepts a (one-hot-encoded) word as input, and (at the end of the sequence) gives us an estimate of the probability that the word is valid. Actually, rather than descriminate according to whether the word is actually valid, let's 'just' try to decide whether it was produced directly from the dictionary or from the generate_bigram_word() source. This can be tested by giving it lists of actual words, and lists of words generated by generate_bigram_word() and seeing whether they can be correctly classified. The decision about what to do in the 12% of cases when the bigram function results in a valid word can be left until later... (since the distribution is so heavily skewed towards producing non-words). 
Create Training / Testing dataset And a 'batch generator' function that delivers data in the right format for RNN training End of explanation # After sampling a data batch, we transform it into a one hot feature representation with a mask def prep_batch_for_network(batch_of_words): word_max_length = np.array( [ len(w) for w in batch_of_words ]).max() # translate into one-hot matrix, mask values and targets input_values = np.zeros((len(batch_of_words), word_max_length, CHARS_SIZE), dtype='float32') mask_values = np.zeros((len(batch_of_words), word_max_length), dtype='int32') for i, word in enumerate(batch_of_words): for j, c in enumerate(word): input_values[i,j] = CHAR_TO_ONEHOT[ c ] mask_values[i, 0:len(word) ] = 1 return input_values, mask_values Explanation: Lasagne RNN tutorial (including conventions &amp; rationale) http://colinraffel.com/talks/hammer2015recurrent.pdf Lasagne Examples https://github.com/Lasagne/Lasagne/blob/master/lasagne/layers/recurrent.py https://github.com/Lasagne/Recipes/blob/master/examples/lstm_text_generation.py Good blog post series http://www.wildml.com/2015/10/recurrent-neural-network-tutorial-part-4-implementing-a-grulstm-rnn-with-python-and-theano/ End of explanation # Symbolic variables for input. In addition to the usual features and target, # we need initial values for the RNN layer's hidden states disc_input_sym = theano.tensor.tensor3() disc_mask_sym = theano.tensor.imatrix() disc_target_sym = theano.tensor.matrix() # probabilities of being from the dictionary (i.e. a single column matrix) # Our network has two stacked GRU layers processing the input sequence. disc_input = lasagne.layers.InputLayer( (None, None, CHARS_SIZE) ) # batch_size, sequence_len, chars_size disc_mask = lasagne.layers.InputLayer( (None, None, CHARS_SIZE) ) # batch_size, sequence_len, chars_size disc_rnn1 = lasagne.layers.GRULayer(disc_input, num_units=RNN_HIDDEN_SIZE, gradient_steps=-1, grad_clipping=GRAD_CLIP_BOUND, hid_init=lasagne.init.Normal(), learn_init=True, mask_input=disc_mask, only_return_final=True, # Only the state at the last timestep is needed ) disc_decoder = lasagne.layers.DenseLayer(disc_rnn1, num_units=1, nonlinearity=lasagne.nonlinearities.sigmoid ) disc_final = disc_decoder # Finally, the output stage disc_output = lasagne.layers.get_output(disc_final, { disc_input: disc_input_sym, disc_mask: disc_mask_sym, } ) Explanation: Define the Descriminating Network Symbolically End of explanation disc_loss = theano.tensor.nnet.binary_crossentropy(disc_output, disc_target_sym).mean() Explanation: Loss Function for Training End of explanation # For stability during training, gradients are clipped and a total gradient norm constraint is also used #MAX_GRAD_NORM = 15 disc_params = lasagne.layers.get_all_params(disc_final, trainable=True) disc_grads = theano.tensor.grad(disc_loss, disc_params) #disc_grads = [theano.tensor.clip(g, -GRAD_CLIP_BOUND, GRAD_CLIP_BOUND) for g in disc_grads] #disc_grads, disc_norm = lasagne.updates.total_norm_constraint( disc_grads, MAX_GRAD_NORM, return_norm=True) disc_updates = lasagne.updates.adam(disc_grads, disc_params) disc_train = theano.function([disc_input_sym, disc_target_sym, disc_mask_sym], # , disc_rnn1_t0_sym [disc_loss], # , disc_output, norm, hid_out_last, hid2_out_last updates=disc_updates, ) disc_predict = theano.function([disc_input_sym, disc_mask_sym], [disc_output]) print("Discriminator network functions defined") Explanation: ... 
and the Training and Prediction functions End of explanation t0, iterations_complete = time.time(), 0 epochs = 10*1000 t1, iterations_recent = time.time(), iterations_complete for epoch_i in range(epochs): # create a batch of words : half are dictionary, half are from bigram batch_of_words = batch_dictionary() + batch_bigram() # get the one-hot input values and corresponding mask matrix disc_input_values, disc_mask_values = prep_batch_for_network(batch_of_words) # and here are the assocated target values disc_target_values= np.zeros((len(batch_of_words),1), dtype='float32') disc_target_values[ 0:(BATCH_SIZE/2), 0 ] = 1.0 # First half are dictionary values for i, word in enumerate(batch_of_words): if True and i>BATCH_SIZE/2 and word in wordset: disc_target_values[ i , 0 ] = 1.0 # bigram has hit a dictionary word by luck... # Now train the discriminator RNN disc_loss_, = disc_train(disc_input_values, disc_target_values, disc_mask_values) #disc_output_, = disc_predict(disc_input_values, disc_mask_values) iterations_complete += 1 if iterations_complete % 250 == 0: secs_per_batch = float(time.time() - t1)/ (iterations_complete - iterations_recent) eta_in_secs = secs_per_batch*(epochs-epoch_i) print("Iteration {:5d}, loss_train: {:.4f} ({:.1f}s per 1000 batches) eta: {:.0f}m{:02.0f}s".format( iterations_complete, float(disc_loss_), secs_per_batch*1000., np.floor(eta_in_secs/60), np.floor(eta_in_secs % 60) )) #print('Iteration {}, output: {}'.format(iteration, disc_output_, )) # , output: {} t1, iterations_recent = time.time(), iterations_complete print('Iteration {}, ran in {:.1f}sec'.format(iterations_complete, float(time.time() - t0))) Explanation: Finally, the Discriminator Training Loop Training takes a while :: 1000 iteration takes about 20 seconds on a CPU ... you may want to skip this and the next cell, and load the pretrained weights instead End of explanation disc_param_values = lasagne.layers.get_all_param_values(disc_final) disc_param_dictionary = dict( params = disc_param_values, CHARS_VALID = CHARS_VALID, CHAR_TO_IX = CHAR_TO_IX, IX_TO_CHAR = IX_TO_CHAR, ) #pickle.dump(disc_param_dictionary, open('../data/RNN/disc_trained.pkl','w'), protocol=pickle.HIGHEST_PROTOCOL) Explanation: Save the learned parameters Uncomment the pickle.dump() to actually save to disk End of explanation disc_param_dictionary = pickle.load(open('../data/RNN/disc_trained_64x310k.pkl', 'r')) lasagne.layers.set_all_param_values(disc_final, disc_param_dictionary['params']) Explanation: Load pretrained weights into network End of explanation test_text_list = ["shape", "shast", "shaes", "shafg", "shaqw"] test_text_list = ["opposite", "aposite", "apposite", "xposite", "rrwqsite", "deposit", "idilic", "idyllic"] disc_input_values, disc_mask_values = prep_batch_for_network(test_text_list) disc_output_, = disc_predict(disc_input_values, disc_mask_values) for i,v in enumerate(disc_output_.tolist()): print("%s : %5.2f%%" % ((test_text_list[i]+' '*20)[:20], v[0]*100.)) Explanation: Check that the Discriminator Network 'works' End of explanation # Let's pre-calculate the logs of the bigram frequencies, since they may be mixed in below bigram_min_freq = 1e-10 # To prevent underflow in log... bigram_freq_log = np.log( bigram_freq + bigram_min_freq ).astype('float32') # Symbolic variables for input. 
In addition to the usual features and target, gen_input_sym = theano.tensor.ftensor3() gen_mask_sym = theano.tensor.imatrix() gen_words_target_sym = theano.tensor.imatrix() # characters generated (as character indicies) # probabilities of being from the dictionary (i.e. a single column matrix) gen_valid_target_sym = theano.tensor.fmatrix( ) # This is a single mixing parameter (0.0 = pure RNN, 1.0=pure Bigram) gen_bigram_overlay = theano.tensor.fscalar() # This is 'current' since it reflects the bigram field as far as it is known during the call gen_bigram_freq_log_field = theano.tensor.ftensor3() gen_input = lasagne.layers.InputLayer( (None, None, CHARS_SIZE) ) # batch_size, sequence_len, chars_size gen_mask = lasagne.layers.InputLayer( (None, None, CHARS_SIZE) ) # batch_size, sequence_len, chars_size #gen_rnn1_t0 = lasagne.layers.InputLayer( (None, RNN_HIDDEN_SIZE) ) # batch_size, RNN_hidden_size=chars_size #n_batch, n_time_steps, n_features = gen_input.input_var.shape n_batch, n_time_steps, n_features = gen_input_sym.shape gen_rnn1 = lasagne.layers.GRULayer(gen_input, num_units=RNN_HIDDEN_SIZE, gradient_steps=-1, grad_clipping=GRAD_CLIP_BOUND, #hid_init=disc_rnn1_t0, hid_init=lasagne.init.Normal(), learn_init=True, mask_input=gen_mask, only_return_final=False, # Need all of the output states ) # Before the decoder layer, we need to reshape the sequence into the batch dimension, # so that timesteps are decoded independently. gen_reshape = lasagne.layers.ReshapeLayer(gen_rnn1, (-1, RNN_HIDDEN_SIZE) ) gen_prob_raw = lasagne.layers.DenseLayer(gen_reshape, num_units=CHARS_SIZE, nonlinearity=lasagne.nonlinearities.linear # No squashing (yet) ) gen_prob = lasagne.layers.ReshapeLayer(gen_prob_raw, (-1, n_time_steps, CHARS_SIZE)) gen_prob_theano = lasagne.layers.get_output(gen_prob, { gen_input: gen_input_sym, gen_mask: gen_mask_sym, }) gen_prob_mix = gen_bigram_overlay*gen_bigram_freq_log_field + (1.0-gen_bigram_overlay)*gen_prob_theano gen_prob_mix_flattened = theano.tensor.reshape(gen_prob_mix, (-1, CHARS_SIZE)) gen_prob_softmax_flattened = theano.tensor.nnet.nnet.softmax(gen_prob_mix_flattened) #gen_prob_final = lasagne.layers.SliceLayer(gen_prob_raw, indices=(-1), axis=1) # Finally, the output stage - this is for the training (over all the letters in the words) #gen_output = gen_prob_softmax_flattened # And for prediction (which is done incrementally, adding one letter at a time) gen_output_last = gen_prob_softmax_flattened.reshape( (-1, n_time_steps, CHARS_SIZE) )[:, -1] # The generative network is trained by encouraging the outputs across time to match the given sequence of letters # We flatten the sequence into the batch dimension before calculating the loss #def gen_word_cross_ent(net_output, targets): # preds_raw = theano.tensor.reshape(net_output, (-1, CHARS_SIZE)) # preds_softmax = theano.tensor.nnet.nnet.softmax(preds_raw) # targets_flat = theano.tensor.flatten(targets) # cost = theano.tensor.nnet.categorical_crossentropy(preds_softmax, targets_flat) # return cost targets_flat = theano.tensor.flatten(gen_words_target_sym) gen_cross_entropy_flat = theano.tensor.nnet.categorical_crossentropy(gen_prob_softmax_flattened, targets_flat) gen_cross_entropy = theano.tensor.reshape(gen_cross_entropy_flat, (-1, n_time_steps) ) gen_loss_weighted = theano.tensor.dot( gen_valid_target_sym.T, gen_cross_entropy ) gen_loss = gen_loss_weighted.mean() # For stability during training, gradients are clipped and a total gradient norm constraint is also used #MAX_GRAD_NORM = 15 gen_predict = 
theano.function([gen_input_sym, gen_bigram_overlay, gen_bigram_freq_log_field, gen_mask_sym], [gen_output_last]) gen_params = lasagne.layers.get_all_params(gen_prob, trainable=True) gen_grads = theano.tensor.grad(gen_loss, gen_params) #gen_grads = [theano.tensor.clip(g, -GRAD_CLIP_BOUND, GRAD_CLIP_BOUND) for g in gen_grads] #gen_grads, gen_norm = lasagne.updates.total_norm_constraint( gen_grads, MAX_GRAD_NORM, return_norm=True) gen_updates = lasagne.updates.adam(gen_grads, gen_params) gen_train = theano.function([gen_input_sym, gen_bigram_overlay, gen_bigram_freq_log_field, gen_words_target_sym, gen_valid_target_sym, gen_mask_sym], [gen_loss], updates=gen_updates, ) gen_debug = theano.function([gen_input_sym, gen_bigram_overlay, gen_bigram_freq_log_field, gen_words_target_sym, gen_valid_target_sym, gen_mask_sym], [gen_cross_entropy], on_unused_input='ignore' ) print("Generator network functions defined") Explanation: Create a Generative network Next, let's build an RNN that produces text, and train it using (a) a pure dictionary look-up, and (b) the correctness signal from the Discriminator above. Plan of attack : Create a GRU that outputs a character probability distribution for every time step Run the RNN several times : each time is an additional character input longer with the next character chosen according to the probability distribution given and then re-run with the current input words (up to that point) Stop adding characters when they've all reached 'space' This seems very inefficient (since the first RNN steps are being run multiple times on the same starting letters), but is the same as in https://github.com/Lasagne/Recipes/blob/master/examples/lstm_text_generation.py End of explanation def generate_rnn_words(random_field, bigram_overlay=0.0): batch_size, max_word_length = random_field.shape idx_spc = CHAR_TO_IX[' '] def append_indices_as_chars(words_current, idx_list): for i, idx in enumerate(idx_list): if idx == idx_spc: pass # Words end at space #words_current[i] += 'x' else: words_current[i] += IX_TO_CHAR[idx] return words_current # Create a 'first character' by using the bigram transitions from 'space' (this is fair) idx_initial = [ np.searchsorted(bigram_freq_cum[idx_spc], random_field[i, 0]) for i in range(batch_size) ] bigram_freq_log_field = np.zeros( (batch_size, max_word_length, CHARS_SIZE), dtype='float32') bigram_freq_log_field[:,0] = bigram_freq_log[ np.array(idx_initial) , :] words_current = [ '' for _ in range(batch_size) ] words_current = append_indices_as_chars(words_current, idx_initial) col = 1 while True: gen_input_values, gen_mask_values = prep_batch_for_network(words_current) #print(gen_mask_values[:,-1]) #gen_out_, = gen_predict(gen_input_values, gen_mask_values) if gen_input_values.shape[1]<col: # Early termination print("Early termination") col -= 1 break #print(gen_input_values.shape, gen_mask_values.shape, bigram_freq_log_field.shape, col) probs, = gen_predict(gen_input_values, bigram_overlay, bigram_freq_log_field[:,0:col], gen_mask_values) #print(probs[0]) # This output is the final probability[CHARS_SIZE], so let's cumsum it, etc. probs_cum = probs.cumsum(axis=1) idx_next = [ # Only add extra letters if we haven't already passed a space (i.e. 
mask[-1]==0) idx_spc if gen_mask_values[i,-1]==0 else np.searchsorted(probs_cum[i], random_field[i, col]) for i in range(batch_size) ] words_current = append_indices_as_chars(words_current, idx_next) words_current_max_length = np.array( [ len(w) for w in words_current ]).max() # If the words have reached the maximum length, or we didn't extend any of them... if words_current_max_length>=max_word_length: # Finished col += 1 break # Guarded against overflow on length... bigram_freq_log_field[:, col] = bigram_freq_log[ np.array(idx_next) , :] col += 1 return words_current, bigram_freq_log_field[:,0:col] def view_rnn_generator_sample_output(bigram_overlay=0.9): # Create a probability distribution across all potential positions in the output 'field' random_field = np.random.uniform( size=(BATCH_SIZE, WORD_LENGTH_MAX) ) gen_words_output, _underlying_bigram_field = generate_rnn_words(random_field, bigram_overlay=bigram_overlay) print( '\n'.join(gen_words_output)) #print(_underlying_bigram_field) view_rnn_generator_sample_output(bigram_overlay=0.0) view_rnn_generator_sample_output(bigram_overlay=0.9) Explanation: Use the Generative Network to create sample words The network above can be used to generate text... The following set-up allows for the output of the RNN at each timestep to be mixed with the letter frequency that the bigram model would suggest - in a proportion bigram_overlay which can vary from 0 (being solely RNN derived) to 1.0 (being solely bigram frequencies, with the RNN output being disregarded). The input is a 'random field' matrix that is used to chose each letter in each slot from the generated probability distribution. Once a space is output for a specific word, then it stops being extended (equivalently, the mask is set to zero going forwards). Once spaces have been observed for all words (or the maximum length reached), the process ends, and a list of the created words is returned. End of explanation gen_param_values_initial = lasagne.layers.get_all_param_values(gen_prob) Explanation: Remember the initial (random) Network State This will come in handy when we need to reset the network back to 'untrained' later. 
End of explanation def is_good_output_dictionary(output_words): return np.array( [ (1.0 if w in wordset else 0.0) for w in output_words ], dtype='float32' ) t0, iterations_complete = time.time(), 0 def reset_generative_network(): global t0, iterations_complete t0, iterations_complete = time.time(), 0 lasagne.layers.set_all_param_values(gen_prob, gen_param_values_initial) def prep_batch_for_network_output(mask_values, batch_of_words): output_indices = np.zeros(mask_values.shape, dtype='int32') for i, word in enumerate(batch_of_words): word_shifted = word[1:]+' ' for j, c in enumerate(word_shifted): output_indices[i,j] = CHAR_TO_IX[ c ] return output_indices def train_generative_network(is_good_output_function=is_good_output_dictionary, epochs=10*1000, bigram_overlay=0.0): if bigram_overlay>=1.0: print("Cannot train with pure bigrams...") return global t0, iterations_complete t1, iterations_recent = time.time(), iterations_complete for epoch_i in range(epochs): random_field = np.random.uniform( size=(BATCH_SIZE, WORD_LENGTH_MAX) ) gen_words_output, underlying_bigram_field = generate_rnn_words(random_field, bigram_overlay=bigram_overlay) #print(gen_words_output[0]) #print(underlying_bigram_field[0]) # Now, create a training set of input -> output, coupled with an intensity signal # first the step-by-step network inputs gen_input_values, gen_mask_values = prep_batch_for_network(gen_words_output) # now create step-by-step network outputs (strip off first character, add spaces) as *indicies* gen_output_values_int = prep_batch_for_network_output(gen_mask_values, gen_words_output) #print(gen_output_values_int.shape, underlying_bigram_field.shape) #print(gen_output_values_int[0]) # makes sense # And, since we have a set of words, we can also determine their 'goodness' is_good_output = is_good_output_function(gen_words_output) #print(is_good_output[0]) Starts at all zero. i.e. the word[0] is bad # This looks like it is the wrong way 'round... target_valid_row = -(np.array(is_good_output) - 0.5) ## i.e. higher values for more-correct symbols : This goes -ve, and wrong, quickly #target_valid_row = (np.array(is_good_output) - 0.5) #target_valid_row = np.ones( (gen_mask_values.shape[0],), dtype='float32' ) target_valid = target_valid_row[:, np.newaxis] #print(target_valid.shape) if False: # Now debug the generator RNN gen_debug_, = gen_debug(gen_input_values, bigram_overlay, underlying_bigram_field, gen_output_values_int, target_valid, gen_mask_values) print(gen_debug_.shape) print(gen_debug_[0]) #return # Now train the generator RNN gen_loss_, = gen_train( gen_input_values, bigram_overlay, underlying_bigram_field, gen_output_values_int, target_valid, gen_mask_values) #print(gen_loss_) # Hmm - this loss is ~ a character-level loss, and isn't comparable to a word-level score, # which is a pity, since the 'words' seem to get worse, not better... 
iterations_complete += 1 if iterations_complete % 10 == 0: secs_per_batch = float(time.time() - t1)/ (iterations_complete - iterations_recent) eta_in_secs = secs_per_batch*(epochs-epoch_i) print("Iteration {:5d}, loss_train: {:.2f} word-score: {:.2f}% ({:.1f}s per 1000 batches) eta: {:.0f}m{:02.0f}s".format( iterations_complete, float(gen_loss_), float(is_good_output.mean())*100., secs_per_batch*1000., np.floor(eta_in_secs/60), np.floor(eta_in_secs % 60), ) ) print( ' '.join(gen_words_output[:10]) ) #print('Iteration {}, output: {}'.format(iteration, disc_output_, )) # , output: {} t1, iterations_recent = time.time(), iterations_complete print('Iteration {}, ran in {:.1f}sec'.format(iterations_complete, float(time.time() - t0))) #theano.config.exception_verbosity='high' # ... a little pointless with RNNs # See: http://deeplearning.net/software/theano/tutorial/debug_faq.html reset_generative_network() train_generative_network(is_good_output_function=is_good_output_dictionary, epochs=1*1000, bigram_overlay=0.9) Explanation: Now, train the Generator RNN based on the Dictionary itself Once we have an output word, let's reward the RNN based on a specific training signal. We'll encapsulate the training in a function that takes the input signal as a parameter, so that we can try other training schemes (later). End of explanation view_rnn_generator_sample_output(bigram_overlay=0.9) Explanation: How are we doing? End of explanation #def is_good_output_dictionary(output_words): # return np.array( # [ (1.0 if w in wordset else 0.0) for w in output_words ], # dtype='float32' # ) def is_good_output_discriminator(output_words): disc_input_values, disc_mask_values = prep_batch_for_network(output_words) disc_output_, = disc_predict(disc_input_values, disc_mask_values) return disc_output_.reshape( (-1,) ) reset_generative_network() train_generative_network(is_good_output_function=is_good_output_discriminator, epochs=1*1000, bigram_overlay=0.9) #train_generative_network(is_good_output_function=is_good_output_dictionary, epochs=1*1000, bigram_overlay=0.9) Explanation: Use training signal from Discriminator End of explanation view_rnn_generator_sample_output(bigram_overlay=0.9) Explanation: How are we doing? End of explanation
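To compare the trained generator against the unigram/bigram/trigram baselines scored earlier, one rough check is the fraction of generated words that appear in the dictionary; the snippet below is a sketch that reuses only functions and variables already defined above.
# Score RNN-generated words against the dictionary, mirroring the n-gram baseline scoring
random_field = np.random.uniform(size=(BATCH_SIZE, WORD_LENGTH_MAX))
sample_words, _ = generate_rnn_words(random_field, bigram_overlay=0.9)
hits = sum(1 for w in sample_words if w in wordset)
print("RNN generator : %4.2f%% valid words" % (hits * 100. / len(sample_words)))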
2,023
Given the following text description, write Python code to implement the functionality described below step by step Description: JSON Schema Parser Notes on the JSON schema / Traitlets package Goal Step1: OK, now let's write a traitlets class that does the same thing Step4: Roadmap Start by recognizing all simple JSON types in the schema ("string", "number", "integer", "boolean", "null") Next recognize objects containing simple types Next recognize compound simple types (i.e. where type is a list of simple types) Next recognize arrays & enums Next recognize "$ref" definitions Next recognize "anyOf", "oneOf", "allOf" definitions... first is essentially a traitlets Union, second is a Union where only one must match, and "allOf" is essentially a composite object (not sure if traitlets has that...) Note that among these, Vega-Lite only uses "anyOf" Catalog all validation keywords... Implement custom traitlets that support all the various validation keywords for each type. (Validation keywords listed here) Use hypothesis for testing? Challenges & Questions to think about JSONSchema ignores any keys/properties which are not listed in the schema. Traitlets warns, and in the future will raise an error for undefined keys/properties this may be OK... we can just document the fact that traitlets is more prescriptive than JSONschema JSON allows undefined values as well as explicit nulls, which map to None. Traitlets treats None as undefined. How to resolve this? Best option is probably to use an undefined sentinel within the traitlets structure, such that the code knows when to ignore keys & produces dicts which translate directly to the correct JSON Will probably need to define some custom trait types, e.g. Null, and also extend simple trait types to allow for the more extensive validations allowed in JSON Schema. Generate subclasses with the code What version of the schema should we target? Perhaps try to target multiple versions? start with 04 because this is what's supported by jsonschema and used by Vega(-Lite) Ideas Easiest way Step5: Trying it out... Step6: Testing the result
Python Code: import json import jsonschema simple_schema = { "type": "object", "properties": { "foo": {"type": "string"}, "bar": {"type": "number"} } } good_instance = { "foo": "hello world", "bar": 3.141592653, } bad_instance = { "foo" : 42, "bar" : "string" } # Should succeed jsonschema.validate(good_instance, simple_schema) # Should fail try: jsonschema.validate(bad_instance, simple_schema) except jsonschema.ValidationError as err: print(err) Explanation: JSON Schema Parser Notes on the JSON schema / Traitlets package Goal: write a function that, given a JSON Schema, will generate code for traitlets objects which provide equivalent validation. Links JSON Schema Validation Information jsonschema Python package Altair 1.0 parsing code By-Hand Example First confirm that we're doing things correctly with the jsonschema package: End of explanation import traitlets as T class SimpleInstance(T.HasTraits): foo = T.Unicode() bar = T.Float() # Should succeed SimpleInstance(**good_instance) # Should fail try: SimpleInstance(**bad_instance) except T.TraitError as err: print(err) Explanation: OK, now let's write a traitlets class that does the same thing: End of explanation import jinja2 OBJECT_TEMPLATE = {%- for import in cls.imports %} {{ import }} {%- endfor %} class {{ cls.classname }}({{ cls.baseclass }}): {%- for (name, prop) in cls.properties.items() %} {{ name }} = {{ prop.trait_code }} {%- endfor %} class JSONSchema(object): A class to wrap JSON Schema objects and reason about their contents object_template = OBJECT_TEMPLATE def __init__(self, schema, root=None): self.schema = schema self.root = root or schema @property def type(self): # TODO: should the default type be considered object? return self.schema.get('type', 'object') @property def trait_code(self): type_dict = {'string': 'T.Unicode()', 'number': 'T.Float()', 'integer': 'T.Integer()', 'boolean': 'T.Bool()'} if self.type not in type_dict: raise NotImplementedError() return type_dict[self.type] @property def classname(self): # TODO: deal with non-root schemas somehow... if self.schema is self.root: return "RootInstance" else: raise NotImplementedError("Non-root object schema") @property def baseclass(self): return "T.HasTraits" @property def imports(self): return ["import traitlets as T"] @property def properties(self): return {key: JSONSchema(val) for key, val in self.schema.get('properties', {}).items()} def object_code(self): return jinja2.Template(self.object_template).render(cls=self) Explanation: Roadmap Start by recognizing all simple JSON types in the schema ("string", "number", "integer", "boolean", "null") Next recognize objects containing simple types Next recognize compound simple types (i.e. where type is a list of simple types) Next recognize arrays & enums Next recognize "$ref" definitions Next recognize "anyOf", "oneOf", "allOf" definitions... first is essentially a traitlets Union, second is a Union where only one must match, and "allOf" is essentially a composite object (not sure if traitlets has that...) Note that among these, Vega-Lite only uses "anyOf" Catalog all validation keywords... Implement custom traitlets that support all the various validation keywords for each type. (Validation keywords listed here) Use hypothesis for testing? Challenges & Questions to think about JSONSchema ignores any keys/properties which are not listed in the schema. Traitlets warns, and in the future will raise an error for undefined keys/properties this may be OK... 
we can just document the fact that traitlets is more prescriptive than JSONschema JSON allows undefined values as well as explicit nulls, which map to None. Traitlets treats None as undefined. How to resolve this? Best option is probably to use an undefined sentinel within the traitlets structure, such that the code knows when to ignore keys & produces dicts which translate directly to the correct JSON Will probably need to define some custom trait types, e.g. Null, and also extend simple trait types to allow for the more extensive validations allowed in JSON Schema. Generate subclasses with the code What version of the schema should we target? Perhaps try to target multiple versions? start with 04 because this is what's supported by jsonschema and used by Vega(-Lite) Ideas Easiest way: validate everything with a single HasTraits class via jsonschema, splitting out properties into traitlets Interface root schema and all definitions should become their own T.HasTraits class Objects defined inline should also have their own class with a generated anonymous name Use Jinja templating; allow output to one file or multiple files with relative imports root object must have type="object"... this differs from jsonschema Testing test cases should be an increasingly complicated set of jsonschema objects, with test cases that should pass and fail. Perhaps store these in a JSON structure? (With a schema?) An initial prototype Let's try generating some traitlets classes for simple cases End of explanation code = JSONSchema(simple_schema).object_code() print(code) Explanation: Trying it out... End of explanation exec(code) # defines RootInstance # Good instance should validate correctly RootInstance(**good_instance) # Bad instance should raise a TraitError try: RootInstance(**bad_instance) except T.TraitError as err: print(err) Explanation: Testing the result End of explanation
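One possible next step from the roadmap above is handling compound simple types (a type given as a list); the helper below is a sketch of how trait_code could be extended with traitlets' Union, and it deliberately leaves out 'null', which would need a custom trait as noted in the challenges.
# Sketch: map a compound JSON type such as ["string", "number"] to a T.Union trait string
SIMPLE_TRAITS = {'string': 'T.Unicode()', 'number': 'T.Float()',
                 'integer': 'T.Integer()', 'boolean': 'T.Bool()'}

def compound_trait_code(schema_type):
    if isinstance(schema_type, list):
        return 'T.Union([{}])'.format(', '.join(SIMPLE_TRAITS[t] for t in schema_type))
    return SIMPLE_TRAITS[schema_type]

compound_trait_code(["string", "number"])   # -> "T.Union([T.Unicode(), T.Float()])"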
2,024
Given the following text description, write Python code to implement the functionality described below step by step Description: Übungsblatt 1 (Exercise Sheet 1) Step1: Aufgabe 1 (Exercise 1): Given a parametric function $y = f(x)$, $y = 1 + a_1x + a_2x^2$ with parameters $a_1 = 2.0 ± 0.2$, $a_2 = 1.0 ± 0.1$ and correlation coefficient $ρ = −0.8$.
Python Code: %matplotlib inline import numpy as np import matplotlib.pyplot as plt plt.style.use('ggplot') Explanation: Übungsblatt 1: Fehlerrechnung (Exercise Sheet 1: Error Propagation) Aufgabe 1 Aufgabe 2 Aufgabe 3 End of explanation a1, a1_err = 2.0, 0.2 a2, a2_err = 1.0, 0.1 rho = -0.8 Explanation: Aufgabe 1 (Exercise 1): Given a parametric function $y = f(x)$, $y = 1 + a_1x + a_2x^2$ with parameters $a_1 = 2.0 ± 0.2$, $a_2 = 1.0 ± 0.1$ and correlation coefficient $ρ = −0.8$. End of explanation
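For this function the Gaussian error propagation gives $\sigma_y^2 = x^2\sigma_{a_1}^2 + x^4\sigma_{a_2}^2 + 2x^3\rho\,\sigma_{a_1}\sigma_{a_2}$, since $\partial y/\partial a_1 = x$ and $\partial y/\partial a_2 = x^2$. The code below is a sketch of one way Exercise 1 could be evaluated and plotted; it is an assumed approach, not the original solution.
x = np.linspace(-3, 3, 200)
y = 1 + a1 * x + a2 * x**2
# error band from the covariance of (a1, a2)
y_err = np.sqrt((x * a1_err)**2 + (x**2 * a2_err)**2 + 2 * rho * a1_err * a2_err * x**3)
plt.plot(x, y)
plt.fill_between(x, y - y_err, y + y_err, alpha=0.3)
plt.xlabel('x')
plt.ylabel('y')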
2,025
Given the following text description, write Python code to implement the functionality described below step by step Description: Generative Adversarial Network In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits! GANs were first reported on in 2014 from Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out Step1: Model Inputs First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks. Step2: Generator network Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values. Variable Scope Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks. We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again. To use tf.variable_scope, you use a with statement Step3: Discriminator The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer. Step4: Hyperparameters Step5: Build network Now we're building the network from the functions defined above. First is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z. Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes. Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True). Step6: Discriminator and Generator Losses Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will by sigmoid cross-entropys, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like python tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels)) For the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. 
For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth) The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that. Finally, the generator losses are using d_logits_fake, the fake image logits. But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants to discriminator to output ones for fake images. Step7: Optimizers We want to update the generator and discriminator variables separately. So we need to get the variables for each part build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph. For the generator optimizer, we only want to generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep variables to start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance). We can do something similar with the discriminator. All the variables in the discriminator start with discriminator. Then, in the optimizer we pass the variable lists to var_list in the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list. Step8: Training Step9: Training loss Here we'll check out the training losses for the generator and discriminator. Step10: Generator samples from training Here we can view samples of images from the generator. First we'll look at images taken while training. Step11: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 1, 7, 3, 2. Since this is just a sample, it isn't representative of the full range of images this generator can make. Step12: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion! Step13: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise like 1s and 9s. Sampling from the generator We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples!
Python Code: %matplotlib inline import pickle as pkl import numpy as np import tensorflow as tf import matplotlib.pyplot as plt from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets('MNIST_data') Explanation: Generative Adversarial Network In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits! GANs were first reported on in 2014 from Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out: Pix2Pix CycleGAN A whole list The idea behind GANs is that you have two networks, a generator $G$ and a discriminator $D$, competing against each other. The generator makes fake data to pass to the discriminator. The discriminator also sees real data and predicts if the data it's received is real or fake. The generator is trained to fool the discriminator, it wants to output data that looks as close as possible to real data. And the discriminator is trained to figure out which data is real and which is fake. What ends up happening is that the generator learns to make data that is indistiguishable from real data to the discriminator. The general structure of a GAN is shown in the diagram above, using MNIST images as data. The latent sample is a random vector the generator uses to contruct it's fake images. As the generator learns through training, it figures out how to map these random vectors to recognizable images that can foold the discriminator. The output of the discriminator is a sigmoid function, where 0 indicates a fake image and 1 indicates an real image. If you're interested only in generating new images, you can throw out the discriminator after training. Now, let's see how we build this thing in TensorFlow. End of explanation def model_inputs(real_dim, z_dim): inputs_real = tf.placeholder(tf.float32, (None, real_dim), name='input_real') inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z') return inputs_real, inputs_z Explanation: Model Inputs First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks. End of explanation def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01): with tf.variable_scope('generator', reuse=reuse): # Hidden layer h1 = tf.layers.dense(z, n_units, activation=None) # Leaky ReLU h1 = tf.maximum(alpha * h1, h1) # Logits and tanh output logits = tf.layers.dense(h1, out_dim, activation=None) out = tf.tanh(logits) return out Explanation: Generator network Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values. Variable Scope Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks. We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. 
For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again. To use tf.variable_scope, you use a with statement: python with tf.variable_scope('scope_name', reuse=False): # code here Here's more from the TensorFlow documentation to get another look at using tf.variable_scope. Leaky ReLU TensorFlow doesn't provide an operation for leaky ReLUs, so we'll need to make one . For this you can use take the outputs from a linear fully connected layer and pass them to tf.maximum. Typically, a parameter alpha sets the magnitude of the output for negative values. So, the output for negative input (x) values is alpha*x, and the output for positive x is x: $$ f(x) = max(\alpha * x, x) $$ Tanh Output The generator has been found to perform the best with $tanh$ for the generator output. This means that we'll have to rescale the MNIST images to be between -1 and 1, instead of 0 and 1. End of explanation def discriminator(x, n_units=128, reuse=False, alpha=0.01): with tf.variable_scope('discriminator', reuse=reuse): # Hidden layer h1 = tf.layers.dense(x, n_units, activation=None) # Leaky ReLU h1 = tf.maximum(alpha * h1, h1) logits = tf.layers.dense(h1, 1, activation=None) out = tf.sigmoid(logits) return out, logits Explanation: Discriminator The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer. End of explanation # Size of input image to discriminator input_size = 784 # Size of latent vector to generator z_size = 100 # Sizes of hidden layers in generator and discriminator g_hidden_size = 128 d_hidden_size = 128 # Leak factor for leaky ReLU alpha = 0.01 # Smoothing smooth = 0.1 Explanation: Hyperparameters End of explanation tf.reset_default_graph() # Create our input placeholders input_real, input_z = model_inputs(input_size, z_size) # Build the model g_model = generator(input_z, input_size, n_units=g_hidden_size, alpha=alpha) # g_model is the generator output d_model_real, d_logits_real = discriminator(input_real, n_units=d_hidden_size, alpha=alpha) d_model_fake, d_logits_fake = discriminator(g_model, reuse=True, n_units=d_hidden_size, alpha=alpha) Explanation: Build network Now we're building the network from the functions defined above. First is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z. Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes. Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True). 
End of explanation # Calculate losses d_loss_real = tf.reduce_mean( tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=tf.ones_like(d_logits_real) * (1 - smooth))) d_loss_fake = tf.reduce_mean( tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.zeros_like(d_logits_real))) d_loss = d_loss_real + d_loss_fake g_loss = tf.reduce_mean( tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_logits_fake))) Explanation: Discriminator and Generator Losses Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will by sigmoid cross-entropys, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like python tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels)) For the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth) The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that. Finally, the generator losses are using d_logits_fake, the fake image logits. But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants to discriminator to output ones for fake images. End of explanation # Optimizers learning_rate = 0.002 # Get the trainable_variables, split into G and D parts t_vars = tf.trainable_variables() g_vars = [var for var in t_vars if var.name.startswith('generator')] d_vars = [var for var in t_vars if var.name.startswith('discriminator')] d_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars) g_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars) Explanation: Optimizers We want to update the generator and discriminator variables separately. So we need to get the variables for each part build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph. For the generator optimizer, we only want to generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep variables to start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance). We can do something similar with the discriminator. All the variables in the discriminator start with discriminator. Then, in the optimizer we pass the variable lists to var_list in the minimize method. 
This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list. End of explanation batch_size = 100 epochs = 100 samples = [] losses = [] # Only save generator variables saver = tf.train.Saver(var_list=g_vars) with tf.Session() as sess: sess.run(tf.global_variables_initializer()) for e in range(epochs): for ii in range(mnist.train.num_examples//batch_size): batch = mnist.train.next_batch(batch_size) # Get images, reshape and rescale to pass to D batch_images = batch[0].reshape((batch_size, 784)) batch_images = batch_images*2 - 1 # Sample random noise for G batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size)) # Run optimizers _ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z}) _ = sess.run(g_train_opt, feed_dict={input_z: batch_z}) # At the end of each epoch, get the losses and print them out train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images}) train_loss_g = g_loss.eval({input_z: batch_z}) print("Epoch {}/{}...".format(e+1, epochs), "Discriminator Loss: {:.4f}...".format(train_loss_d), "Generator Loss: {:.4f}".format(train_loss_g)) # Save losses to view after training losses.append((train_loss_d, train_loss_g)) # Sample from generator as we're training for viewing afterwards sample_z = np.random.uniform(-1, 1, size=(16, z_size)) gen_samples = sess.run( generator(input_z, input_size, reuse=True), feed_dict={input_z: sample_z}) samples.append(gen_samples) saver.save(sess, './checkpoints/generator.ckpt') # Save training generator samples with open('train_samples.pkl', 'wb') as f: pkl.dump(samples, f) Explanation: Training End of explanation fig, ax = plt.subplots() losses = np.array(losses) plt.plot(losses.T[0], label='Discriminator') plt.plot(losses.T[1], label='Generator') plt.title("Training Losses") plt.legend() Explanation: Training loss Here we'll check out the training losses for the generator and discriminator. End of explanation def view_samples(epoch, samples): fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True) for ax, img in zip(axes.flatten(), samples[epoch]): ax.xaxis.set_visible(False) ax.yaxis.set_visible(False) im = ax.imshow(img.reshape((28,28)), cmap='Greys_r') return fig, axes # Load samples from generator taken while training with open('train_samples.pkl', 'rb') as f: samples = pkl.load(f) Explanation: Generator samples from training Here we can view samples of images from the generator. First we'll look at images taken while training. End of explanation np.array(samples[-1]).shape _ = view_samples(-1, samples) Explanation: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 1, 7, 3, 2. Since this is just a sample, it isn't representative of the full range of images this generator can make. End of explanation rows, cols = 10, 6 fig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True) for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes): for img, ax in zip(sample[::int(len(sample)/cols)], ax_row): ax.imshow(img.reshape((28,28)), cmap='Greys_r') ax.xaxis.set_visible(False) ax.yaxis.set_visible(False) Explanation: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion! 
End of explanation saver = tf.train.Saver(var_list=g_vars) with tf.Session() as sess: saver.restore(sess, tf.train.latest_checkpoint('checkpoints')) sample_z = np.random.uniform(-1, 1, size=(16, z_size)) gen_samples = sess.run( generator(input_z, input_size, reuse=True), feed_dict={input_z: sample_z}) _ = view_samples(0, [gen_samples]) Explanation: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise like 1s and 9s. Sampling from the generator We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples! End of explanation
2,026
Given the following text description, write Python code to implement the functionality described below step by step Description: How to get rid of the bad python code Step1: Handlabeling code Step2: Analysis Given the above percentages, we can calculate how many of the code blocks we think are python, and how many of the python code blocks are captured by the first category (true-true)
Python Code: import json import datetime import tqdm folder = '/dfs/scratch2/fcipollone/stackoverflow/guesslang_and_ast/outfiles' lines = [] guess_and_parse = {} guess_not_parse = {} parse_not_guess = {} total = {} dates = [] for file_num in tqdm.tqdm(range(400)): filename = folder + '/file' + str(file_num) + '.txt' for line in open(filename): line_obj = json.loads(line) for code_block in line_obj['CodeBlocks']: l = int(round(len(code_block['code']),-1)) if l > 10000: l = 10000 gl = code_block['Guesslang'] par = code_block['Parsable'] if par == "True" or gl.strip().lower() == "python": if l not in total: total[l] = [] total[l].append(code_block['code']) if par == "True" and gl.strip().lower() == "python": if l not in guess_and_parse: guess_and_parse[l] = [] guess_and_parse[l].append(code_block['code']) elif par == "True" and not gl.strip().lower() == "python": if l not in parse_not_guess: parse_not_guess[l] = [] parse_not_guess[l].append(code_block['code']) elif not par == "True" and gl.strip().lower() == "python": if l not in guess_not_parse: guess_not_parse[l] = [] guess_not_parse[l].append(code_block['code']) import random desired_code_length = round(100,-1) ''' for i in range(50): print('Guess AND parse') print('Number to choose from',len(guess_and_parse[desired_code_length])) print('-'*5) print(random.sample(guess_and_parse[desired_code_length], 1)[0]) print('*'*100) #print('Guess NOT parse') ''' for i in range(50): print('Number to choose from',len(guess_not_parse[desired_code_length])) print('-'*5) print(random.sample(guess_not_parse[desired_code_length], 1)[0]) print('*'*100) # print('Parse NOT Guess') # print('Number to choose from',len(parse_not_guess[desired_code_length])) # print('-'*5) # print(random.sample(parse_not_guess[desired_code_length], 1)[0]) # print('*'*100) Explanation: How to get rid of the bad python code End of explanation # These are the actual counts for each category (true-true, true-false, false-true) #For 100: #48, 8, 16 #For 500: #50, 19, 12 #For 1000: #50, 13, 7 # These are the percentages for each category for convenience (true-true, true-false, false-true) #For 100: #96, 16, 32 #For 500: #100, 38, 24 #For 1000: #100, 26, 14 Explanation: Handlabeling code End of explanation # Number of code blocks per category python_true = 1695196 python_false = 1471445 other_false = 2234022 # Taking the min probability of each group, and the max probability of each group to get a possible range max_python = python_true*1 + python_false*.38 + other_false*.32 min_python = python_true*.96 + python_false*.16 + other_false*.14 print(max_python) print(min_python) # Taking the ratio of python code to total code -- about half of it isnt python at all! 
print(max_python / (python_true+python_false+other_false)) print(min_python / (python_true+python_false+other_false)) # Taking the ratio of python code captured by the first group, and total estimated python code print(python_true / max_python) print(python_true / min_python) # Conclusion: # Although this method of estimation may be a bit crude, it gives us a range that I'm pretty sure about, and I would # estimate that the true ratio of python code captured is around 65% or more # Qualitative information # python - false (this is when guesslang says python but it does not parse) #2 JSON #17 Other programming languages #18 python #3 ipython shell #10 trace # other - true (this is when it parses but guesslang says something other than python) #34 JSON #9 python #4 comments #3 other programming languages Explanation: Analysis Given the above percentages, we can calculate how many of the code blocks we think are python, and how many of the python code blocks are captured by the first category (true-true) End of explanation
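Acting on this conclusion, a simple filter would keep only the blocks that both guesslang labels as Python and that parse; based on the hand-labeled samples above this bucket is almost entirely Python, at the cost of the roughly one third of Python blocks estimated to fall outside it. The function below is a sketch using the same fields as the extraction loop.
def keep_code_block(code_block):
    # keep only the "guessed python AND parses" bucket
    guessed_python = code_block['Guesslang'].strip().lower() == 'python'
    parses = code_block['Parsable'] == 'True'
    return guessed_python and parses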
2,027
Given the following text description, write Python code to implement the functionality described below step by step Description: Step1: TV Script Generation In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern. Get the Data The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc.. Step3: Explore the Data Play around with view_sentence_range to view different parts of the data. Step6: Implement Preprocessing Functions The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below Step9: Tokenize Punctuation We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!". Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token Step11: Preprocess all the data and save it Running the code cell below will preprocess all the data and save it to file. Step13: Check Point This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. Step15: Build the Neural Network You'll build the components necessary to build a RNN by implementing the following functions below Step18: Input Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders Step21: Build RNN Cell and Initialize Stack one or more BasicLSTMCells in a MultiRNNCell. - The Rnn size should be set using rnn_size - Initalize Cell State using the MultiRNNCell's zero_state() function - Apply the name "initial_state" to the initial state using tf.identity() Return the cell and initial state in the following tuple (Cell, InitialState) Step24: Word Embedding Apply embedding to input_data using TensorFlow. Return the embedded sequence. Step27: Build RNN You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN. - Build the RNN using the tf.nn.dynamic_rnn() - Apply the name "final_state" to the final state using tf.identity() Return the outputs and final_state state in the following tuple (Outputs, FinalState) Step30: Build the Neural Network Apply the functions you implemented above to Step33: Batches Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements Step35: Neural Network Training Hyperparameters Tune the following parameters Step37: Build the Graph Build the graph using the neural network you implemented. Step39: Train Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem. Step41: Save Parameters Save seq_length and save_dir for generating a new TV script. 
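The helper module supplied with the project handles this save step for you; purely to illustrate what persisting these two values involves (an assumption about the helper, not its actual implementation), a pickle-based stand-in could look like this.
import pickle

def save_params(params, path='params.p'):
    # params is the (seq_length, save_dir) tuple described above
    with open(path, 'wb') as f:
        pickle.dump(params, f)

def load_params(path='params.p'):
    with open(path, 'rb') as f:
        return pickle.load(f)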
Step43: Checkpoint Step46: Implement Generate Functions Get Tensors Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names Step49: Choose Word Implement the pick_word() function to select the next word using probabilities. Step51: Generate TV Script This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
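The core idea behind pick_word is easier to see in isolation before reading the full solution: draw an index in proportion to the predicted probabilities and map it back through int_to_vocab. The tiny distribution and vocabulary below are made up purely for illustration.
import numpy as np

probabilities = np.array([0.1, 0.6, 0.3])           # toy distribution over three word ids
int_to_vocab = {0: 'moe', 1: 'homer', 2: 'barney'}  # toy vocabulary
word_id = np.random.choice(len(probabilities), p=probabilities)
print(int_to_vocab[word_id])                        # prints 'homer' most of the time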
Python Code: DON'T MODIFY ANYTHING IN THIS CELL import helper data_dir = './data/simpsons/moes_tavern_lines.txt' text = helper.load_data(data_dir) # Ignore notice, since we don't use it for analysing the data text = text[81:] Explanation: TV Script Generation In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern. Get the Data The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc.. End of explanation view_sentence_range = (0, 10) DON'T MODIFY ANYTHING IN THIS CELL import numpy as np print('Dataset Stats') print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()}))) scenes = text.split('\n\n') print('Number of scenes: {}'.format(len(scenes))) sentence_count_scene = [scene.count('\n') for scene in scenes] print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene))) sentences = [sentence for scene in scenes for sentence in scene.split('\n')] print('Number of lines: {}'.format(len(sentences))) word_count_sentence = [len(sentence.split()) for sentence in sentences] print('Average number of words in each line: {}'.format(np.average(word_count_sentence))) print() print('The sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) Explanation: Explore the Data Play around with view_sentence_range to view different parts of the data. End of explanation import numpy as np import problem_unittests as tests def create_lookup_tables(text): Create lookup tables for vocabulary :param text: The text of tv scripts split into words :return: A tuple of dicts (vocab_to_int, int_to_vocab) # TODO: Implement Function vocab = set(text) vocab_to_int = {c: i for i, c in enumerate(vocab)} int_to_vocab = dict(enumerate(vocab)) return vocab_to_int, int_to_vocab DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_create_lookup_tables(create_lookup_tables) Explanation: Implement Preprocessing Functions The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below: - Lookup Table - Tokenize Punctuation Lookup Table To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries: - Dictionary to go from the words to an id, we'll call vocab_to_int - Dictionary to go from the id to word, we'll call int_to_vocab Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab) End of explanation def token_lookup(): Generate a dict to turn punctuation into a token. :return: Tokenize dictionary where the key is the punctuation and the value is the token # TODO: Implement Function return {'.': '||Period||', ',': '||Comma||', '"': '||Quotation_Mark||', ';': '||Semicolon||', '!': '||Exclamatin_Mark||', '?': '||Question_Mark||', '(': '||Left_Parentheses||', ')': '||Right_Parentheses||', '--': '||Dash||', '\n': '||Return||'} DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_tokenize(token_lookup) Explanation: Tokenize Punctuation We'll be splitting the script into a word array using spaces as delimiters. 
However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!". Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token: - Period ( . ) - Comma ( , ) - Quotation Mark ( " ) - Semicolon ( ; ) - Exclamation mark ( ! ) - Question mark ( ? ) - Left Parentheses ( ( ) - Right Parentheses ( ) ) - Dash ( -- ) - Return ( \n ) This dictionary will be used to token the symbols and add the delimiter (space) around it. This separates the symbols as it's own word, making it easier for the neural network to predict on the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||". End of explanation DON'T MODIFY ANYTHING IN THIS CELL # Preprocess Training, Validation, and Testing Data helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables) Explanation: Preprocess all the data and save it Running the code cell below will preprocess all the data and save it to file. End of explanation DON'T MODIFY ANYTHING IN THIS CELL import helper import numpy as np import problem_unittests as tests int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess() Explanation: Check Point This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. End of explanation DON'T MODIFY ANYTHING IN THIS CELL from distutils.version import LooseVersion import warnings import tensorflow as tf # Check TensorFlow Version assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer' print('TensorFlow Version: {}'.format(tf.__version__)) # Check for a GPU if not tf.test.gpu_device_name(): warnings.warn('No GPU found. Please use a GPU to train your neural network.') else: print('Default GPU Device: {}'.format(tf.test.gpu_device_name())) Explanation: Build the Neural Network You'll build the components necessary to build a RNN by implementing the following functions below: - get_inputs - get_init_cell - get_embed - build_rnn - build_nn - get_batches Check the Version of TensorFlow and Access to GPU End of explanation def get_inputs(): Create TF Placeholders for input, targets, and learning rate. :return: Tuple (input, targets, learning rate) # TODO: Implement Function Input = tf.placeholder(tf.int32, shape = (None, None), name = 'input') Targets = tf.placeholder(tf.int32, shape = (None, None), name = 'targets') LearningRate = tf.placeholder(tf.float32, name = 'learningrate') return Input, Targets, LearningRate DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_get_inputs(get_inputs) Explanation: Input Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders: - Input text placeholder named "input" using the TF Placeholder name parameter. - Targets placeholder - Learning Rate placeholder Return the placeholders in the following tuple (Input, Targets, LearningRate) End of explanation def get_init_cell(batch_size, rnn_size): Create an RNN Cell and initialize it. 
:param batch_size: Size of batches :param rnn_size: Size of RNNs :return: Tuple (cell, initialize state) lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size) #drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob = keep_prob) #cell = tf.contrib.rnn.MultiRNNCell([lstm for _ in range(rnn_size)]) cell = tf.contrib.rnn.MultiRNNCell([lstm]) initial_state = cell.zero_state(batch_size, tf.float32) initial_state = tf.identity(initial_state, name = 'initial_state') return cell, initial_state DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_get_init_cell(get_init_cell) Explanation: Build RNN Cell and Initialize Stack one or more BasicLSTMCells in a MultiRNNCell. - The Rnn size should be set using rnn_size - Initalize Cell State using the MultiRNNCell's zero_state() function - Apply the name "initial_state" to the initial state using tf.identity() Return the cell and initial state in the following tuple (Cell, InitialState) End of explanation def get_embed(input_data, vocab_size, embed_dim): Create embedding for <input_data>. :param input_data: TF placeholder for text input. :param vocab_size: Number of words in vocabulary. :param embed_dim: Number of embedding dimensions :return: Embedded input. # TODO: Implement Function embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim))) embed = tf.nn.embedding_lookup(embedding, input_data) return embed DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_get_embed(get_embed) Explanation: Word Embedding Apply embedding to input_data using TensorFlow. Return the embedded sequence. End of explanation def build_rnn(cell, inputs): Create a RNN using a RNN Cell :param cell: RNN Cell :param inputs: Input text data :return: Tuple (Outputs, Final State) # TODO: Implement Function #def get_init_cell(batch_size, rnn_size): outputs, final_state = tf.nn.dynamic_rnn(cell = cell, inputs = inputs, dtype = tf.float32) final_state = tf.identity(final_state, name = 'final_state') return outputs, final_state DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_build_rnn(build_rnn) Explanation: Build RNN You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN. 
- Build the RNN using the tf.nn.dynamic_rnn() - Apply the name "final_state" to the final state using tf.identity() Return the outputs and final_state state in the following tuple (Outputs, FinalState) End of explanation def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim): Build part of the neural network :param cell: RNN cell :param rnn_size: Size of rnns :param input_data: Input data :param vocab_size: Vocabulary size :param embed_dim: Number of embedding dimensions :return: Tuple (Logits, FinalState) # TODO: Implement Function #print("building Network: rnn_size = {:d} input_data shape = {} vocab_size = {:d} embed_dim = {:d}".format(rnn_size,input_data.get_shape(), vocab_size, embed_dim)) embeddings = get_embed(input_data = input_data, vocab_size = vocab_size, embed_dim = embed_dim) lstm_outputs, final_state = build_rnn(cell = cell, inputs = embeddings) #print("lstm_outputs shape = {}".format(lstm_outputs.get_shape())) # apply fully connected layer logits = tf.contrib.layers.fully_connected(inputs = lstm_outputs, num_outputs = vocab_size, activation_fn = None) return logits, final_state DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_build_nn(build_nn) Explanation: Build the Neural Network Apply the functions you implemented above to: - Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function. - Build RNN using cell and your build_rnn(cell, inputs) function. - Apply a fully connected layer with a linear activation and vocab_size as the number of outputs. Return the logits and final state in the following tuple (Logits, FinalState) End of explanation def get_batches(int_text, batch_size, seq_length): Return batches of input and target :param int_text: Text with the words replaced by their ids :param batch_size: The size of batch (number of sequences per batch) :param seq_length: The length of sequence :return: Batches as a Numpy array # TODO: Implement Function # how many batches do we get from the input? #print("creating batches!") #print("int_text length = {:d} batch_size = {:d} seq_length = {:d}".format(len(int_text), batch_size, seq_length)) num_batches = int(len(int_text) // (batch_size*seq_length)) # throw away words at end that would make incomplete batch int_text = int_text[:(batch_size * seq_length * num_batches)] if num_batches == 0: print("Warning! not enough input data to create a batch!") raise ValueError # form feature and target arrays # targets is input shifted 1, then take first value in input list and put it in last value of target array features = np.array(int_text) targets = np.zeros_like(features) targets[:-1], targets[-1] = features[1:], features[0] #print('features:\n', features) #print('targets:\n', targets) # create x,y batches as rows of sequences x_batches = np.split(features.reshape(batch_size, -1), num_batches, axis = 1) y_batches = np.split(targets.reshape(batch_size, -1), num_batches, axis = 1) # zip em up batches = np.array(list(zip(x_batches, y_batches))) return batches DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_get_batches(get_batches) Explanation: Batches Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). 
Each batch contains two elements: - The first element is a single batch of input with the shape [batch size, sequence length] - The second element is a single batch of targets with the shape [batch size, sequence length] If you can't fill the last batch with enough data, drop the last batch. For exmple, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2) would return a Numpy array of the following: ``` [ # First Batch [ # Batch of Input [[ 1 2], [ 7 8], [13 14]] # Batch of targets [[ 2 3], [ 8 9], [14 15]] ] # Second Batch [ # Batch of Input [[ 3 4], [ 9 10], [15 16]] # Batch of targets [[ 4 5], [10 11], [16 17]] ] # Third Batch [ # Batch of Input [[ 5 6], [11 12], [17 18]] # Batch of targets [[ 6 7], [12 13], [18 1]] ] ] ``` Notice that the last target value in the last batch is the first input value of the first batch. In this case, 1. This is a common technique used when creating sequence batches, although it is rather unintuitive. End of explanation # Number of Epochs # watch validation error, set num_epochs to where val_err starts increasing num_epochs = 200 # Batch Size # 32,64,128,256 suggested batch_size = 128 # RNN Size rnn_size = 200 # Embedding Dimension Size embed_dim = 300 # Sequence Length seq_length = 16 # Learning Rate learning_rate = 0.01 # Show stats for every n number of batches show_every_n_batches = batch_size DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE save_dir = './save' Explanation: Neural Network Training Hyperparameters Tune the following parameters: Set num_epochs to the number of epochs. Set batch_size to the batch size. Set rnn_size to the size of the RNNs. Set embed_dim to the size of the embedding. Set seq_length to the length of sequence. Set learning_rate to the learning rate. Set show_every_n_batches to the number of batches the neural network should print progress. End of explanation DON'T MODIFY ANYTHING IN THIS CELL from tensorflow.contrib import seq2seq train_graph = tf.Graph() with train_graph.as_default(): vocab_size = len(int_to_vocab) input_text, targets, lr = get_inputs() input_data_shape = tf.shape(input_text) cell, initial_state = get_init_cell(input_data_shape[0], rnn_size) logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim) # why not working # Probabilities for generating words probs = tf.nn.softmax(logits, name='probs') # Loss function cost = seq2seq.sequence_loss( logits, targets, tf.ones([input_data_shape[0], input_data_shape[1]])) # Optimizer optimizer = tf.train.AdamOptimizer(lr) # Gradient Clipping gradients = optimizer.compute_gradients(cost) capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None] train_op = optimizer.apply_gradients(capped_gradients) Explanation: Build the Graph Build the graph using the neural network you implemented. 
End of explanation DON'T MODIFY ANYTHING IN THIS CELL batches = get_batches(int_text, batch_size, seq_length) with tf.Session(graph=train_graph) as sess: sess.run(tf.global_variables_initializer()) for epoch_i in range(num_epochs): state = sess.run(initial_state, {input_text: batches[0][0]}) for batch_i, (x, y) in enumerate(batches): feed = { input_text: x, targets: y, initial_state: state, lr: learning_rate} train_loss, state, _ = sess.run([cost, final_state, train_op], feed) # Show every <show_every_n_batches> batches if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0: print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format( epoch_i, batch_i, len(batches), train_loss)) # Save Model saver = tf.train.Saver() saver.save(sess, save_dir) print('Model Trained and Saved') Explanation: Train Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem. End of explanation DON'T MODIFY ANYTHING IN THIS CELL # Save parameters for checkpoint helper.save_params((seq_length, save_dir)) Explanation: Save Parameters Save seq_length and save_dir for generating a new TV script. End of explanation DON'T MODIFY ANYTHING IN THIS CELL import tensorflow as tf import numpy as np import helper import problem_unittests as tests _, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess() seq_length, load_dir = helper.load_params() Explanation: Checkpoint End of explanation def get_tensors(loaded_graph): Get input, initial state, final state, and probabilities tensor from <loaded_graph> :param loaded_graph: TensorFlow graph loaded from file :return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor) # TODO: Implement Function input_tensor = loaded_graph.get_tensor_by_name('input:0') initial_state = loaded_graph.get_tensor_by_name('initial_state:0') final_state = loaded_graph.get_tensor_by_name('final_state:0') probs = loaded_graph.get_tensor_by_name('probs:0') return input_tensor, initial_state, final_state, probs DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_get_tensors(get_tensors) Explanation: Implement Generate Functions Get Tensors Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names: - "input:0" - "initial_state:0" - "final_state:0" - "probs:0" Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor) End of explanation def pick_word(probabilities, int_to_vocab): Pick the next word in the generated text :param probabilities: Probabilites of the next word :param int_to_vocab: Dictionary of word ids as the keys and words as the values :return: String of the predicted word # TODO: Implement Function #print('probs: ', probabilities) #print('int_to_vocab: ', int_to_vocab) next_word_index = np.random.choice(a = np.array(range(len(probabilities))), p = probabilities) return int_to_vocab[next_word_index] DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_pick_word(pick_word) Explanation: Choose Word Implement the pick_word() function to select the next word using probabilities. 
End of explanation gen_length = 200 # homer_simpson, moe_szyslak, or Barney_Gumble prime_word = 'moe_szyslak' DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load saved model loader = tf.train.import_meta_graph(load_dir + '.meta') loader.restore(sess, load_dir) # Get Tensors from loaded model input_text, initial_state, final_state, probs = get_tensors(loaded_graph) # Sentences generation setup gen_sentences = [prime_word + ':'] prev_state = sess.run(initial_state, {input_text: np.array([[1]])}) # Generate sentences for n in range(gen_length): # Dynamic Input dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]] dyn_seq_length = len(dyn_input[0]) # Get Prediction probabilities, prev_state = sess.run( [probs, final_state], {input_text: dyn_input, initial_state: prev_state}) pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab) gen_sentences.append(pred_word) # Remove tokens tv_script = ' '.join(gen_sentences) for key, token in token_dict.items(): ending = ' ' if key in ['\n', '(', '"'] else '' tv_script = tv_script.replace(' ' + token.lower(), key) tv_script = tv_script.replace('\n ', '\n') tv_script = tv_script.replace('( ', '(') print(tv_script) Explanation: Generate TV Script This will generate the TV script for you. Set gen_length to the length of TV script you want to generate. End of explanation
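If the generated script reads as too repetitive or too random, one optional refinement (not required by the project) is to rescale the predicted probabilities with a temperature before sampling. The sketch below assumes the same probabilities and int_to_vocab arguments that pick_word receives.
import numpy as np

def pick_word_with_temperature(probabilities, int_to_vocab, temperature=0.7):
    # temperature < 1 sharpens the distribution, > 1 flattens it
    logits = np.log(np.asarray(probabilities) + 1e-10) / temperature
    scaled = np.exp(logits - np.max(logits))
    scaled = scaled / scaled.sum()
    return int_to_vocab[np.random.choice(len(scaled), p=scaled)]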
2,028
Given the following text description, write Python code to implement the functionality described below step by step Description: Basics of Linear Regression Analysis Regression analysis is the task of quantifying the relationship between input data (the independent variable) $x$ and the corresponding output data (the dependent variable) $y$. Regression analysis includes deterministic models (Deterministic Model) and probabilistic models (Probabilistic Model). A deterministic model is simply the process of building a function that computes the dependent variable $y$ corresponding to an independent variable $x$. $$ \hat{y} = f \left( x; \{ x_1, y_1, x_2, y_2, \cdots, x_N, y_N \} \right) = f (x; D) = f(x) $$ Here $ \{ x_1, y_1, x_2, y_2, \cdots, x_N, y_N \} $ is the past data used to estimate the model coefficients. If the function is a linear function, the procedure is called linear regression analysis. $$ \hat{y} = w_0 + w_1 x_1 + w_2 x_2 + \cdots + w_D x_D $$ Augmentation In general, before running a regression it may be necessary to fold the constant term into the independent variables as shown below. This is called feature augmentation. $$ x_i = \begin{bmatrix} x_{i1} \\ x_{i2} \\ \vdots \\ x_{iD} \end{bmatrix} \rightarrow x_{i,a} = \begin{bmatrix} 1 \\ x_{i1} \\ x_{i2} \\ \vdots \\ x_{iD} \end{bmatrix} $$ With augmentation, a vector whose elements are all 1 is added to the feature matrix. $$ X = \begin{bmatrix} x_{11} & x_{12} & \cdots & x_{1D} \\ x_{21} & x_{22} & \cdots & x_{2D} \\ \vdots & \vdots & \vdots & \vdots \\ x_{N1} & x_{N2} & \cdots & x_{ND} \\ \end{bmatrix} \rightarrow X_a = \begin{bmatrix} 1 & x_{11} & x_{12} & \cdots & x_{1D} \\ 1 & x_{21} & x_{22} & \cdots & x_{2D} \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ 1 & x_{N1} & x_{N2} & \cdots & x_{ND} \\ \end{bmatrix} $$ Augmentation also increases the dimension of the weight vector, so the whole expression simplifies as follows. $$ w_0 + w_1 x_1 + w_2 x_2 = \begin{bmatrix} 1 & x_1 & x_2 \end{bmatrix} \begin{bmatrix} w_0 \\ w_1 \\ w_2 \end{bmatrix} = x_a^T w $$ Step1: OLS (Ordinary Least Squares) OLS is the most basic deterministic regression method; it finds the weight vector that minimizes the Residual Sum of Squares (RSS) by differentiation. Residual $$ e_i = {y}_i - x_i^T w $$ Stacking (Vector Form) $$ e = {y} - Xw $$ Residual Sum of Squares (RSS) $$\begin{eqnarray} \text{RSS} &=& \sum (y_i - \hat{y}_i)^2 \\ &=& \sum e_i^2 = e^Te \\ &=& (y - Xw)^T(y - Xw) \\ &=& y^Ty - 2y^T X w + w^TX^TXw \end{eqnarray}$$ Minimize using Gradient $$ \dfrac{\partial \text{RSS}}{\partial w} = -2 X^T y + 2 X^TX w = 0 $$ $$ X^TX w = X^T y $$ $$ w = (X^TX)^{-1} X^T y $$ Here, the gradient expression above set to zero is called the Normal equation. $$ X^T y - X^TX w = 0 $$ From the Normal equation we can read off the following property of the residuals. $$ X^T (y - X w ) = X^T e = 0 $$ Step2: Linear regression analysis with the scikit-learn package When running a linear regression with the sklearn package, use the LinearRegression class from the linear_model subpackage. http Step3: Boston Housing Price Step4: Linear regression analysis with statsmodels In the statsmodels package, linear regression is carried out with the OLS class. http Step5: The RegressionResults class stores the analysis results in various attributes, so the user can pick out and reuse them later. Step6: statsmodels also provides a variety of regression-result plots. plot_fit(results, exog_idx) Plot fit against one regressor. abline_plot([intercept, ...]) Plots a line given an intercept and slope. influence_plot(results[, ...]) Plot of influence in regression. plot_leverage_resid2(results) Plots leverage statistics vs. plot_partregress(endog, ...) Plot partial regression for a single regressor. plot_ccpr(results, exog_idx) Plot CCPR against one regressor. plot_regress_exog(results, ...) Plot regression results against one regressor.
Python Code: import numpy as np import matplotlib.pyplot as plt import pandas as pd %matplotlib inline from sklearn.datasets import make_regression bias = 100 X0, y, coef = make_regression(n_samples=100, n_features=1, bias=bias, noise=10, coef=True, random_state=1) X = np.hstack([np.ones_like(X0), X0]) np.ones_like(X0)[:5] # no.ones_like(X0) : X0 사이즈와 동일한데 내용물은 1인 행렬 생성 X[:5] Explanation: 선형 회귀 분석의 기초 회귀 분석(regression analysis)은 입력 자료(독립 변수) $x$와 이에 대응하는 출력 자료(종속 변수) $y$간의 관계를 정량과 하기 위한 작업이다. 회귀 분석에는 결정론적 모형(Deterministic Model)과 확률적 모형(Probabilistic Model)이 있다. 결정론적 모형은 단순히 독립 변수 $x$에 대해 대응하는 종속 변수 $y$를 계산하는 함수를 만드는 과정이다. $$ \hat{y} = f \left( x; { x_1, y_1, x_2, y_2, \cdots, x_N, y_N } \right) = f (x; D) = f(x) $$ 여기에서 $ { x_1, y_1, x_2, y_2, \cdots, x_N, y_N } $ 는 모형 계수 추정을 위한 과거 자료이다. 만약 함수가 선형 함수이면 선형 회귀 분석(linear regression analysis)이라고 한다. $$ \hat{y} = w_0 + w_1 x_1 + w_2 x_2 + \cdots + w_D x_D $$ Augmentation 일반적으로 회귀 분석에 앞서 다음과 같이 상수항을 독립 변수에 포함하는 작업이 필요할 수 있다. 이를 feature augmentation이라고 한다. $$ x_i = \begin{bmatrix} x_{i1} \ x_{i2} \ \vdots \ x_{iD} \end{bmatrix} \rightarrow x_{i,a} = \begin{bmatrix} 1 \ x_{i1} \ x_{i2} \ \vdots \ x_{iD} \end{bmatrix} $$ augmentation을 하게 되면 모든 원소가 1인 벡터를 feature matrix 에 추가된다. $$ X = \begin{bmatrix} x_{11} & x_{12} & \cdots & x_{1D} \ x_{21} & x_{22} & \cdots & x_{2D} \ \vdots & \vdots & \vdots & \vdots \ x_{N1} & x_{N2} & \cdots & x_{ND} \ \end{bmatrix} \rightarrow X_a = \begin{bmatrix} 1 & x_{11} & x_{12} & \cdots & x_{1D} \ 1 & x_{21} & x_{22} & \cdots & x_{2D} \ \vdots & \vdots & \vdots & \vdots & \vdots \ 1 & x_{N1} & x_{N2} & \cdots & x_{ND} \ \end{bmatrix} $$ augmentation을 하면 가중치 벡터(weight vector)도 차원이 증가하여 전체 수식이 다음과 같이 단순화 된다. $$ w_0 + w_1 x_1 + w_2 x_2 \begin{bmatrix} 1 & x_1 & x_2 \end{bmatrix} \begin{bmatrix} w_0 \ w_1 \ w_2 \end{bmatrix} = x_a^T w $$ End of explanation y = y.reshape(len(y), 1) w = np.dot(np.dot(np.linalg.inv(np.dot(X.T, X)), X.T), y) print("bias:", bias) print("coef:", coef) print("w:\n", w) w = np.linalg.lstsq(X, y)[0] w xx = np.linspace(np.min(X0) - 1, np.max(X0) + 1, 1000) XX = np.vstack([np.ones(xx.shape[0]), xx.T]).T yy = np.dot(XX, w) plt.scatter(X0, y) plt.plot(xx, yy, 'r-') plt.show() Explanation: OLS (Ordinary Least Squares) OLS는 가장 기본적인 결정론적 회귀 방법으로 Residual Sum of Squares(RSS)를 최소화하는 가중치 벡터 값을 미분을 통해 구한다. Residual 잔차 $$ e_i = {y}_i - x_i^T w $$ Stacking (Vector Form) $$ e = {y} - Xw $$ Residual Sum of Squares (RSS) $$\begin{eqnarray} \text{RSS} &=& \sum (y_i - \hat{y}_i)^2 \ &=& \sum e_i^2 = e^Te \ &=& (y - Xw)^T(y - Xw) \ &=& y^Ty - 2y^T X w + w^TX^TXw \end{eqnarray}$$ Minimize using Gradient $$ \dfrac{\partial \text{RSS}}{\partial w} = -2 X^T y + 2 X^TX w = 0 $$ $$ X^TX w = X^T y $$ $$ w = (X^TX)^{-1} X^T y $$ 여기에서 그레디언트를 나타내는 다음 식을 Normal equation 이라고 한다. $$ X^T y - X^TX w = 0 $$ Normal equation 에서 잔차에 대한 다음 특성을 알 수 있다. 
$$ X^T (y - X w ) = X^T e = 0 $$ End of explanation from sklearn.datasets import load_diabetes diabetes = load_diabetes() dfX_diabetes = pd.DataFrame(diabetes.data, columns=["X%d" % (i+1) for i in range(np.shape(diabetes.data)[1])]) dfy_diabetes = pd.DataFrame(diabetes.target, columns=["target"]) df_diabetes0 = pd.concat([dfX_diabetes, dfy_diabetes], axis=1) df_diabetes0.tail() from sklearn.linear_model import LinearRegression model_diabetes = LinearRegression().fit(diabetes.data, diabetes.target) print(model_diabetes.coef_) print(model_diabetes.intercept_) predictions = model_diabetes.predict(diabetes.data) plt.scatter(diabetes.target, predictions) plt.xlabel("target") plt.ylabel("prediction") plt.show() mean_abs_error = (np.abs(((diabetes.target - predictions)/diabetes.target)*100)).mean() print("MAE: %.2f%%" % (mean_abs_error)) import sklearn as sk sk.metrics.median_absolute_error(diabetes.target, predictions) sk.metrics.mean_squared_error(diabetes.target, predictions) Explanation: scikit-learn 패키지를 사용한 선형 회귀 분석 sklearn 패키지를 사용하여 선형 회귀 분석을 하는 경우에는 linear_model 서브 패키지의 LinearRegression 클래스를 사용한다. http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html 입력 인수 fit_intercept : 불리언, 옵션 상수상 추가 여부 normalize : 불리언, 옵션 회귀 분석전에 정규화 여부 속성 coef_ : 추정된 가중치 벡터 intercept_ : 추정된 상수항 Diabetes Regression End of explanation from sklearn.datasets import load_boston boston = load_boston() dfX_boston = pd.DataFrame(boston.data, columns=boston.feature_names) dfy_boston = pd.DataFrame(boston.target, columns=["MEDV"]) df_boston0 = pd.concat([dfX_boston, dfy_boston], axis=1) df_boston0.tail() model_boston = LinearRegression().fit(boston.data, boston.target) print(model_boston.coef_) print(model_boston.intercept_) predictions = model_boston.predict(boston.data) plt.scatter(boston.target, predictions) plt.xlabel("target") plt.ylabel("prediction") plt.show() mean_abs_error = (np.abs(((boston.target - predictions)/boston.target)*100)).mean() print("MAE: %.2f%%" % (mean_abs_error)) sk.metrics.median_absolute_error(boston.target, predictions) sk.metrics.mean_squared_error(boston.target, predictions) Explanation: Boston Housing Price End of explanation df_diabetes = sm.add_constant(df_diabetes0) df_diabetes.tail() model_diabetes2 = sm.OLS(df_diabetes.ix[:, -1], df_diabetes.ix[:, :-1]) result_diabetes2 = model_diabetes2.fit() result_diabetes2 print(result_diabetes2.summary()) df_boston = sm.add_constant(df_boston0) model_boston2 = sm.OLS(df_boston.ix[:, -1], df_boston.ix[:, :-1]) result_boston2 = model_boston2.fit() print(result_boston2.summary()) Explanation: statsmodels 를 사용한 선형 회귀 분석 statsmodels 패키지에서는 OLS 클래스를 사용하여 선형 회귀 분석을 실시한다. http://www.statsmodels.org/dev/generated/statsmodels.regression.linear_model.OLS.html statsmodels.regression.linear_model.OLS(endog, exog=None) 입력 인수 endog : 종속 변수. 1차원 배열 exog : 독립 변수, 2차원 배열. statsmodels 의 OLS 클래스는 자동으로 상수항을 만들어주지 않기 때문에 사용자가 add_constant 명령으로 상수항을 추가해야 한다. 모형 객체가 생성되면 fit, predict 메서드를 사용하여 추정 및 예측을 실시한다. 예측 결과는 RegressionResults 클래스 객체로 출력되면 summary 메서드로 결과 보고서를 볼 수 있다. End of explanation dir(result_boston2) Explanation: RegressionResults 클래스는 분석 결과를 다양한 속성에 저장해주므로 추후 사용자가 선택하여 활용할 수 있다. End of explanation sm.graphics.plot_fit(result_boston2, "CRIM") plt.show() Explanation: statsmodel는 다양한 회귀 분석 결과 플롯도 제공한다. plot_fit(results, exog_idx) Plot fit against one regressor. abline_plot([intercept, ...]) Plots a line given an intercept and slope. influence_plot(results[, ...]) Plot of influence in regression. 
plot_leverage_resid2(results) Plots leverage statistics vs. plot_partregress(endog, ...) Plot partial regression for a single regressor. plot_ccpr(results, exog_idx) Plot CCPR against one regressor. plot_regress_exog(results, ...) Plot regression results against one regressor. End of explanation
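To make the plot list above concrete, here is a short sketch that calls a few of these helpers on the Boston model fitted above (result_boston2), assuming statsmodels.api has been imported as sm as in the earlier cells; the exact figure layout depends on your statsmodels version.
fig = plt.figure(figsize=(10, 8))
sm.graphics.plot_regress_exog(result_boston2, "CRIM", fig=fig)
plt.show()

sm.graphics.influence_plot(result_boston2)
plt.show()

sm.graphics.plot_leverage_resid2(result_boston2)
plt.show()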
2,029
Given the following text description, write Python code to implement the functionality described below step by step Description: Jupyter Notebook Jupyter Notebook evolved out of iPython and is aimed at providing a platform for easy sharing, interaction, and development of open-source software, standards and services. Although primarily and originally used for Python interactions, you can interact with multiple programming languages using Jupyter (it originally expanded from iPython to include Julia, Python, and R). Jupyter notebooks are a great way to create, test, and share live code, as well as equations and visualizations with markdown text. To include more languages, check out the kernels that are supported and their installation instructions. Installation To get started, download and install Jupyter. Installation of Anaconda is recommended by Jupyter because it will also cover your Python and notebook installation at the same time. If you prefer, you can also install Jupyter with pip, but this is only suggested for advanced Python users. There is extensive documentation to help with installation and download available on the Jupyter and Anaconda pages. Once installed, launch the Anaconda Navigator and launch Jupyter notebooks. Also open the Anaconda Prompt window. We will use this window to install extra widgets. In your Anaconda Prompt window, type the statement below and press enter, then continue by following the prompts. conda install -c conda-forge ipyleaflet In your Anaconda Prompt window, type the statement below and press enter, then continue by following the prompts. conda install -c conda-forge bqplot In your Anaconda Prompt window, type the statement below and press enter, then continue by following the prompts. conda install -c conda-forge pythreejs Check out this cheatsheet by DataCamp to help with the interface. Help Dropdown Spend some time going through these help features. Markdown Markdown can be easily used in Jupyter. Simply change the dropdown for the cell to "Markdown". If you need a refresher, here is another good cheatsheet. Special Commands Special commands can change the shell to run different commands; for example, you can run bash commands and magic commands directly within a new cell. Step1: Magic commands: a single percent sign means the cell's arguments come from the same line, while two percent signs mean the entire cell is used for the argument. The lsmagic command can be used to list all other commands. Step2: Matplotlib examples can be found here. Step3: Python To get started, let's practice some very basic Python scripts. Start with trying to print "Hello World". Once you have entered your script, hit Shift+Enter. This will execute your code and create or send you to the next cell. Once you have done that, try checking out a cheatsheet to help get you running with another basic code snippet. Don't be afraid to try something new. You won't break the shell; it will simply return an error if something is wrong. There are many cheatsheets out there if you are uncomfortable writing your own snippet. Just do a quick Google search for a function and copy/paste/execute it in a cell below. Step4: For this next example, we will import the pandas library. Most Python scripts use some sort of library or module that contains global variables, source files, and functions. These libraries or modules can be called and executed with your code. Step5: The math functions can be called by saying "import math" before your code.
This way Jupyter, and any Python script for that matter, knows to use those functions in the following code. Step7: Widgets Step9: Equations You can use LaTeX within Jupyter. Check out the LaTeX cheatsheet! One alternative to LaTeX is prettyPy or sympy. These are all great ways to write out equations, although the pros and cons of each need to be weighed before using. For example, prettyPy does not evaluate expressions, but you also don't have to initialize variables.
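sympy is mentioned above as an alternative but is not demonstrated in the code that follows, so here is a small self-contained sketch (added for illustration, not part of the original notebook).
import sympy as sp

sp.init_printing()                              # render results as typeset math in the notebook
x = sp.symbols('x')
expr = sp.integrate(sp.sin(x) * sp.exp(x), x)
expr                                            # displays exp(x)*sin(x)/2 - exp(x)*cos(x)/2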
Python Code: !conda list Explanation: Jupyter Notebook Jupyter Notebook that evolved out of iPython and is aimed at providing a platform for easy sharing, interaction, and development of open-source software, standards and services. Althought primarily and originally used for phyton interactions, you can interact with multiple programming languages using Jupyter (although orignially expanded from iPython to includ Julia, Pythion, and R). Jupyter notebooks is a great way to create, test, and share live code, as well as equations and visualizations with markdown text. To inlcude more languages, check out kernels that are supported and their installation instructions. Installation To get started, download and install Jupyter. Installation of Anaconda is reccomened by Jupyter becuase it will also cover your python and notebook installation at the same time. If you prefer, you can also install Jupyter with pip, but this is only suggested for advanced Python users. There is extensive documentation to help with installation and download availble on the Jupyter and Anaconda pages. Once installed, launch the Anacdona Navigator and launch Jupter notebooks. Also open the Anaconda Prompt window. We will use this window to install extra widgets. In your Anaconda Prompt window, type the statement below and press enter, continue by following the prompts. conda install -c conda-forge ipyleaflet In your Anaconda Prompt window, type the statement below and press enter, continue by following the prompts. conda install -c conda-forge bqplot In your Anaconda Prompt window, type the statement below and press enter, continue by following the prompts. conda install -c conda-forge pythreejs Check out this cheatsheet by DataCamp to help with the interface. Help Dropdown Spend some time going through these help features. Markdown Markdown can be easily used in Jupyter. Simply change the dropdown for the cell to "Markdown". If you need a refresher, here is another good cheatsheet. Special Commands Special commands can change shell to run different commands, for example you can run bash commands and magic commands directly within a new cell. End of explanation %lsmagic %matplotlib inline Explanation: Magic commands- Single percent sign means the cells arguments come from the same line, 2 mean the entire cell is used for the entire argument. lsmagic command can be used to list all other commands. End of explanation import numpy as np import matplotlib import matplotlib.pyplot as plt # Fixing random state for reproducibility np.random.seed(19680801) matplotlib.rcParams['axes.unicode_minus'] = False fig, ax = plt.subplots() ax.plot(10*np.random.randn(100), 10*np.random.randn(100), 'o') ax.set_title('Using hyphen instead of Unicode minus') plt.show() import matplotlib.pyplot as plt from matplotlib import collections, colors, transforms import numpy as np nverts = 50 npts = 100 # Make some spirals r = np.arange(nverts) theta = np.linspace(0, 2*np.pi, nverts) xx = r * np.sin(theta) yy = r * np.cos(theta) spiral = list(zip(xx, yy)) # Make some offsets # Fixing random state for reproducibility rs = np.random.RandomState(19680801) xo = rs.randn(npts) yo = rs.randn(npts) xyo = list(zip(xo, yo)) # Make a list of colors cycling through the default series. 
colors = [colors.to_rgba(c) for c in plt.rcParams['axes.prop_cycle'].by_key()['color']] fig, axes = plt.subplots(2, 2) fig.subplots_adjust(top=0.92, left=0.07, right=0.97, hspace=0.3, wspace=0.3) ((ax1, ax2), (ax3, ax4)) = axes # unpack the axes col = collections.LineCollection([spiral], offsets=xyo, transOffset=ax1.transData) trans = fig.dpi_scale_trans + transforms.Affine2D().scale(1.0/72.0) col.set_transform(trans) # the points to pixels transform # Note: the first argument to the collection initializer # must be a list of sequences of x,y tuples; we have only # one sequence, but we still have to put it in a list. ax1.add_collection(col, autolim=True) # autolim=True enables autoscaling. For collections with # offsets like this, it is neither efficient nor accurate, # but it is good enough to generate a plot that you can use # as a starting point. If you know beforehand the range of # x and y that you want to show, it is better to set them # explicitly, leave out the autolim kwarg (or set it to False), # and omit the 'ax1.autoscale_view()' call below. # Make a transform for the line segments such that their size is # given in points: col.set_color(colors) ax1.autoscale_view() # See comment above, after ax1.add_collection. ax1.set_title('LineCollection using offsets') # The same data as above, but fill the curves. col = collections.PolyCollection([spiral], offsets=xyo, transOffset=ax2.transData) trans = transforms.Affine2D().scale(fig.dpi/72.0) col.set_transform(trans) # the points to pixels transform ax2.add_collection(col, autolim=True) col.set_color(colors) ax2.autoscale_view() ax2.set_title('PolyCollection using offsets') # 7-sided regular polygons col = collections.RegularPolyCollection( 7, sizes=np.abs(xx) * 10.0, offsets=xyo, transOffset=ax3.transData) trans = transforms.Affine2D().scale(fig.dpi / 72.0) col.set_transform(trans) # the points to pixels transform ax3.add_collection(col, autolim=True) col.set_color(colors) ax3.autoscale_view() ax3.set_title('RegularPolyCollection using offsets') # Simulate a series of ocean current profiles, successively # offset by 0.1 m/s so that they form what is sometimes called # a "waterfall" plot or a "stagger" plot. nverts = 60 ncurves = 20 offs = (0.1, 0.0) yy = np.linspace(0, 2*np.pi, nverts) ym = np.max(yy) xx = (0.2 + (ym - yy)/ym)**2 * np.cos(yy - 0.4)*0.5 segs = [] for i in range(ncurves): xxx = xx + 0.02*rs.randn(nverts) curve = list(zip(xxx, yy*100)) segs.append(curve) col = collections.LineCollection(segs, offsets=offs) ax4.add_collection(col, autolim=True) col.set_color(colors) ax4.autoscale_view() ax4.set_title('Successive data offsets') ax4.set_xlabel('Zonal velocity component (m/s)') ax4.set_ylabel('Depth (m)') # Reverse the y-axis so depth increases downward ax4.set_ylim(ax4.get_ylim()[::-1]) plt.show() %%HTML <iframe src="https://giphy.com/embed/3o7qE32pRVNYJYKGBO" width="480" height="269" frameBorder="0" class="giphy-embed" allowFullScreen></iframe><p><a href="https://giphy.com/gifs/veep-3o7qE32pRVNYJYKGBO">via GIPHY</a></p> Explanation: Matplotlib examples can be found here. End of explanation print ('hello world'); print('this is neat') print('This is cool!') "abc"*52 Explanation: Python To get started, let's practice some very basic python scripts. Start with trying to print "Hello World". Once you have entered your script hit Shift+Enter. This will execute your code and create or send you to the next cell. Once you have done that try checking out a Cheatsheet to help get you running with another basic code snippet. 
Don't be afraid to try something new. You won't break the shell, it will simply return an error if something is wrong. There are many cheatsheets out there if you are uncomfortable writing your own snippet. Just do a quick google search for a function and copy/paste/execute in a cell below. End of explanation import pandas as pd df = pd.DataFrame() a=range(11) b=range(10,21) c=range(20,31) df['a']=a df['b']=b df['c']=c df Explanation: For this next example, we will import the pandas library. Most python scripts use some sort of library or module that contains global variables, source files, and functions. These libraries or modules can be called and executed with your code. End of explanation import math # Input list. values = [0.9999999, 1, 2, 3] # Sum values in list. r = sum(values) print(r) # Sum values with fsum. r = math.fsum(values) print(r) import math value1 = 9 value2 = 16 value3 = 100 # Use sqrt method. print(math.sqrt(value1)) print(math.sqrt(value2)) print(math.sqrt(value3)) Explanation: The math functions can be called by saying "import math" before your code. This way jupyter, and any python script for that matter, knows to use those functions in the following code. End of explanation import numpy as np import bqplot.pyplot as plt size = 100 plt.figure(title='Scatter plot with colors') plt.scatter(np.random.randn(size), np.random.randn(size), color=np.random.randn(size)) plt.show() from ipyleaflet import Map Map(center=[36.146956, -86.779788], zoom=10) from ipyleaflet import Map import json Map(center=[36.146956, -86.779788], zoom=10) from pythreejs import * f = function f(origu,origv) { // scale u and v to the ranges I want: [0, 2*pi] var u = 2*Math.PI*origu; var v = 2*Math.PI*origv; var x = Math.sin(u); var y = Math.cos(v); var z = Math.cos(u+v); return new THREE.Vector3(x,y,z) } surf_g = ParametricGeometry(func=f); surf = Mesh(geometry=surf_g, material=LambertMaterial(color='green', side='FrontSide')) surf2 = Mesh(geometry=surf_g, material=LambertMaterial(color='yellow', side='BackSide')) scene = Scene(children=[surf, surf2, AmbientLight(color='#777777')]) c = PerspectiveCamera(position=[2.5, 2.5, 2.5], up=[0, 0, 1], children=[DirectionalLight(color='white', position=[3, 5, 1], intensity=0.6)]) Renderer(camera=c, scene=scene, controls=[OrbitControls(controlling=c)]) Explanation: Widgets End of explanation from IPython.display import display, Math, Latex display(Math(r'F(k) = \int_{-\infty}^{\infty} f(x) e^{2\pi i k} dx')) from IPython.display import Latex Latex(r\begin{eqnarray} \nabla \times \vec{\mathbf{B}} -\, \frac1c\, \frac{\partial\vec{\mathbf{E}}}{\partial t} & = \frac{4\pi}{c}\vec{\mathbf{j}} \\ \nabla \cdot \vec{\mathbf{E}} & = 4 \pi \rho \\ \nabla \times \vec{\mathbf{E}}\, +\, \frac1c\, \frac{\partial\vec{\mathbf{B}}}{\partial t} & = \vec{\mathbf{0}} \\ \nabla \cdot \vec{\mathbf{B}} & = 0 \end{eqnarray}) Explanation: Equations You can use LaTex within Jupyter. Check out the LaTex cheatsheet! One alternative to LaTeX is prettyPy or sympy. These are all great ways to write out equations, although the pros and cons of each need to be weighed before using. For example prettyPy does not evaluate expressions but you also don't have to initialize variables. End of explanation
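Beyond the mapping and plotting widgets installed above, the ipywidgets package that ships with Jupyter provides interactive controls with very little code; this short example is an addition for illustration and was not part of the original notebook.
from ipywidgets import interact

def square(n):
    return n * n

interact(square, n=(0, 10))   # renders a slider and re-evaluates square() as it moves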
2,030
Given the following text description, write Python code to implement the functionality described below step by step Description: TF-Slim Walkthrough This notebook will walk you through the basics of using TF-Slim to define, train and evaluate neural networks on various tasks. It assumes a basic knowledge of neural networks. Table of contents <a href="#Install">Installation and setup</a><br> <a href='#MLP'>Creating your first neural network with TF-Slim</a><br> <a href='#ReadingTFSlimDatasets'>Reading Data with TF-Slim</a><br> <a href='#CNN'>Training a convolutional neural network (CNN)</a><br> <a href='#Pretained'>Using pre-trained models</a><br> Installation and setup <a id='Install'></a> Since the stable release of TF 1.0, the latest version of slim has been available as tf.contrib.slim. To test that your installation is working, execute the following command; it should run without raising any errors. python -c "import tensorflow.contrib.slim as slim; eval = slim.evaluation.evaluate_once" Although, to use TF-Slim for image classification (as we do in this notebook), you also have to install the TF-Slim image models library from here. Let's suppose you install this into a directory called TF_MODELS. Then you should change directory to TF_MODELS/slim before running this notebook, so that these files are in your python path. To check you've got these two steps to work, just execute the cell below. If it complains about unknown modules, restart the notebook after moving to the TF-Slim models directory. Step2: Creating your first neural network with TF-Slim <a id='MLP'></a> Below we give some code to create a simple multilayer perceptron (MLP) which can be used for regression problems. The model has 2 hidden layers. The output is a single node. When this function is called, it will create various nodes, and silently add them to whichever global TF graph is currently in scope. When a node which corresponds to a layer with adjustable parameters (eg., a fully connected layer) is created, additional parameter variable nodes are silently created, and added to the graph. (We will discuss how to train the parameters later.) We use variable scope to put all the nodes under a common name, so that the graph has some hierarchical structure. This is useful when we want to visualize the TF graph in tensorboard, or if we want to query related variables. The fully connected layers all use the same L2 weight decay and ReLu activations, as specified by arg_scope. (However, the final layer overrides these defaults, and uses an identity activation function.) We also illustrate how to add a dropout layer after the first fully connected layer (FC1). Note that at test time, we do not drop out nodes, but instead use the average activations; hence we need to know whether the model is being constructed for training or testing, since the computational graph will be different in the two cases (although the variables, storing the model parameters, will be shared, since they have the same name/scope). Step3: Let's create the model and examine its structure. We create a TF graph and call regression_model(), which adds nodes (tensors) to the graph. We then examine their shape, and print the names of all the model variables which have been implicitly created inside of each layer. We see that the names of the variables follow the scopes that we specified. Step4: Let's create some 1d regression data . We will train and test the model on some noisy observations of a nonlinear function. 
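The data-generating assumptions behind that 1d regression set are worth seeing explicitly. The walkthrough's own produce_batch helper appears in the code below; this stand-alone NumPy version mirrors it (sin(x) + 5 plus Gaussian noise) purely for illustration.
import numpy as np

def make_noisy_sin(n, noise=0.3, seed=0):
    rng = np.random.RandomState(seed)
    xs = (rng.random_sample((n, 1)) * 10).astype(np.float32)
    ys = (np.sin(xs) + 5 + rng.normal(scale=noise, size=xs.shape)).astype(np.float32)
    return xs, ys

x_demo, y_demo = make_noisy_sin(200)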
Step5: Let's fit the model to the data The user has to specify the loss function and the optimizer, and slim does the rest. In particular, the slim.learning.train function does the following Step6: Training with multiple loss functions. Sometimes we have multiple objectives we want to simultaneously optimize. In slim, it is easy to add more losses, as we show below. (We do not optimize the total loss in this example, but we show how to compute it.) Step7: Let's load the saved model and use it for prediction. Step8: Let's compute various evaluation metrics on the test set. In TF-Slim termiology, losses are optimized, but metrics (which may not be differentiable, e.g., precision and recall) are just measured. As an illustration, the code below computes mean squared error and mean absolute error metrics on the test set. Each metric declaration creates several local variables (which must be initialized via tf.initialize_local_variables()) and returns both a value_op and an update_op. When evaluated, the value_op returns the current value of the metric. The update_op loads a new batch of data, runs the model, obtains the predictions and accumulates the metric statistics appropriately before returning the current value of the metric. We store these value nodes and update nodes in 2 dictionaries. After creating the metric nodes, we can pass them to slim.evaluation.evaluation, which repeatedly evaluates these nodes the specified number of times. (This allows us to compute the evaluation in a streaming fashion across minibatches, which is usefulf for large datasets.) Finally, we print the final value of each metric. Step9: Reading Data with TF-Slim <a id='ReadingTFSlimDatasets'></a> Reading data with TF-Slim has two main components Step10: Display some of the data. Step11: Convolutional neural nets (CNNs). <a id='CNN'></a> In this section, we show how to train an image classifier using a simple CNN. Define the model. Below we define a simple CNN. Note that the output layer is linear function - we will apply softmax transformation externally to the model, either in the loss function (for training), or in the prediction function (during testing). Step12: Apply the model to some randomly generated images. Step14: Train the model on the Flowers dataset. Before starting, make sure you've run the code to <a href="#DownloadFlowers">Download the Flowers</a> dataset. Now, we'll get a sense of what it looks like to use TF-Slim's training functions found in learning.py. First, we'll create a function, load_batch, that loads batches of dataset from a dataset. Next, we'll train a model for a single step (just to demonstrate the API), and evaluate the results. Step15: Evaluate some metrics. As we discussed above, we can compute various metrics besides the loss. Below we show how to compute prediction accuracy of the trained model, as well as top-5 classification accuracy. (The difference between evaluation and evaluation_loop is that the latter writes the results to a log directory, so they can be viewed in tensorboard.) Step16: Using pre-trained models <a id='Pretrained'></a> Neural nets work best when they have many parameters, making them very flexible function approximators. However, this means they must be trained on big datasets. Since this process is slow, we provide various pre-trained models - see the list here. You can either use these models as-is, or you can perform "surgery" on them, to modify them for some other task. 
For example, it is common to "chop off" the final pre-softmax layer, and replace it with a new set of weights corresponding to some new set of labels. You can then quickly fine-tune the new model on a small new dataset. We illustrate this below, using inception-v1 as the base model. While models like Inception V3 are more powerful, Inception V1 is used for speed purposes. Take into account that VGG and ResNet final layers have only 1000 outputs rather than 1001. The ImageNet dataset provided has an empty background class which can be used to fine-tune the model to other tasks. VGG and ResNet models provided here don't use that class. We provide two examples of using pretrained models Step17: Apply Pre-trained Inception V1 model to Images. We have to convert each image to the size expected by the model checkpoint. There is no easy way to determine this size from the checkpoint itself. So we use a preprocessor to enforce this. Step18: Download the VGG-16 checkpoint Step19: Apply Pre-trained VGG-16 model to Images. We have to convert each image to the size expected by the model checkpoint. There is no easy way to determine this size from the checkpoint itself. So we use a preprocessor to enforce this. Pay attention to the difference caused by 1000 classes instead of 1001. Step21: Fine-tune the model on a different set of labels. We will fine-tune the inception model on the Flowers dataset. Step22: Apply fine-tuned model to some images.
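As a rough illustration of the resizing step described above: the real inception_preprocessing and vgg_preprocessing modules in the models repository also crop and rescale pixel values, so treat this only as a sketch, and 224 is the default input size assumed here for Inception V1 and VGG-16.
import tensorflow as tf

def resize_for_checkpoint(image, height=224, width=224):
    # image: a single height x width x channels uint8 image tensor
    image = tf.image.convert_image_dtype(image, tf.float32)
    image = tf.image.resize_images(image, [height, width])
    return image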
Python Code: from __future__ import absolute_import from __future__ import division from __future__ import print_function import matplotlib %matplotlib inline import matplotlib.pyplot as plt import math import numpy as np import tensorflow as tf import time from datasets import dataset_utils # Main slim library from tensorflow.contrib import slim Explanation: TF-Slim Walkthrough This notebook will walk you through the basics of using TF-Slim to define, train and evaluate neural networks on various tasks. It assumes a basic knowledge of neural networks. Table of contents <a href="#Install">Installation and setup</a><br> <a href='#MLP'>Creating your first neural network with TF-Slim</a><br> <a href='#ReadingTFSlimDatasets'>Reading Data with TF-Slim</a><br> <a href='#CNN'>Training a convolutional neural network (CNN)</a><br> <a href='#Pretained'>Using pre-trained models</a><br> Installation and setup <a id='Install'></a> Since the stable release of TF 1.0, the latest version of slim has been available as tf.contrib.slim. To test that your installation is working, execute the following command; it should run without raising any errors. python -c "import tensorflow.contrib.slim as slim; eval = slim.evaluation.evaluate_once" Although, to use TF-Slim for image classification (as we do in this notebook), you also have to install the TF-Slim image models library from here. Let's suppose you install this into a directory called TF_MODELS. Then you should change directory to TF_MODELS/slim before running this notebook, so that these files are in your python path. To check you've got these two steps to work, just execute the cell below. If it complains about unknown modules, restart the notebook after moving to the TF-Slim models directory. End of explanation def regression_model(inputs, is_training=True, scope="deep_regression"): Creates the regression model. Args: inputs: A node that yields a `Tensor` of size [batch_size, dimensions]. is_training: Whether or not we're currently training the model. scope: An optional variable_op scope for the model. Returns: predictions: 1-D `Tensor` of shape [batch_size] of responses. end_points: A dict of end points representing the hidden layers. with tf.variable_scope(scope, 'deep_regression', [inputs]): end_points = {} # Set the default weight _regularizer and acvitation for each fully_connected layer. with slim.arg_scope([slim.fully_connected], activation_fn=tf.nn.relu, weights_regularizer=slim.l2_regularizer(0.01)): # Creates a fully connected layer from the inputs with 32 hidden units. net = slim.fully_connected(inputs, 32, scope='fc1') end_points['fc1'] = net # Adds a dropout layer to prevent over-fitting. net = slim.dropout(net, 0.8, is_training=is_training) # Adds another fully connected layer with 16 hidden units. net = slim.fully_connected(net, 16, scope='fc2') end_points['fc2'] = net # Creates a fully-connected layer with a single hidden unit. Note that the # layer is made linear by setting activation_fn=None. predictions = slim.fully_connected(net, 1, activation_fn=None, scope='prediction') end_points['out'] = predictions return predictions, end_points Explanation: Creating your first neural network with TF-Slim <a id='MLP'></a> Below we give some code to create a simple multilayer perceptron (MLP) which can be used for regression problems. The model has 2 hidden layers. The output is a single node. When this function is called, it will create various nodes, and silently add them to whichever global TF graph is currently in scope. 
When a node which corresponds to a layer with adjustable parameters (eg., a fully connected layer) is created, additional parameter variable nodes are silently created, and added to the graph. (We will discuss how to train the parameters later.) We use variable scope to put all the nodes under a common name, so that the graph has some hierarchical structure. This is useful when we want to visualize the TF graph in tensorboard, or if we want to query related variables. The fully connected layers all use the same L2 weight decay and ReLu activations, as specified by arg_scope. (However, the final layer overrides these defaults, and uses an identity activation function.) We also illustrate how to add a dropout layer after the first fully connected layer (FC1). Note that at test time, we do not drop out nodes, but instead use the average activations; hence we need to know whether the model is being constructed for training or testing, since the computational graph will be different in the two cases (although the variables, storing the model parameters, will be shared, since they have the same name/scope). End of explanation with tf.Graph().as_default(): # Dummy placeholders for arbitrary number of 1d inputs and outputs inputs = tf.placeholder(tf.float32, shape=(None, 1)) outputs = tf.placeholder(tf.float32, shape=(None, 1)) # Build model predictions, end_points = regression_model(inputs) # Print name and shape of each tensor. print("Layers") for k, v in end_points.items(): print('name = {}, shape = {}'.format(v.name, v.get_shape())) # Print name and shape of parameter nodes (values not yet initialized) print("\n") print("Parameters") for v in slim.get_model_variables(): print('name = {}, shape = {}'.format(v.name, v.get_shape())) Explanation: Let's create the model and examine its structure. We create a TF graph and call regression_model(), which adds nodes (tensors) to the graph. We then examine their shape, and print the names of all the model variables which have been implicitly created inside of each layer. We see that the names of the variables follow the scopes that we specified. End of explanation def produce_batch(batch_size, noise=0.3): xs = np.random.random(size=[batch_size, 1]) * 10 ys = np.sin(xs) + 5 + np.random.normal(size=[batch_size, 1], scale=noise) return [xs.astype(np.float32), ys.astype(np.float32)] x_train, y_train = produce_batch(200) x_test, y_test = produce_batch(200) plt.scatter(x_train, y_train) Explanation: Let's create some 1d regression data . We will train and test the model on some noisy observations of a nonlinear function. End of explanation def convert_data_to_tensors(x, y): inputs = tf.constant(x) inputs.set_shape([None, 1]) outputs = tf.constant(y) outputs.set_shape([None, 1]) return inputs, outputs # The following snippet trains the regression model using a mean_squared_error loss. ckpt_dir = '/tmp/regression_model/' with tf.Graph().as_default(): tf.logging.set_verbosity(tf.logging.INFO) inputs, targets = convert_data_to_tensors(x_train, y_train) # Make the model. predictions, nodes = regression_model(inputs, is_training=True) # Add the loss function to the graph. loss = tf.losses.mean_squared_error(labels=targets, predictions=predictions) # The total loss is the uers's loss plus any regularization losses. total_loss = slim.losses.get_total_loss() # Specify the optimizer and create the train op: optimizer = tf.train.AdamOptimizer(learning_rate=0.005) train_op = slim.learning.create_train_op(total_loss, optimizer) # Run the training inside a session. 
final_loss = slim.learning.train( train_op, logdir=ckpt_dir, number_of_steps=5000, save_summaries_secs=5, log_every_n_steps=500) print("Finished training. Last batch loss:", final_loss) print("Checkpoint saved in %s" % ckpt_dir) Explanation: Let's fit the model to the data The user has to specify the loss function and the optimizer, and slim does the rest. In particular, the slim.learning.train function does the following: For each iteration, evaluate the train_op, which updates the parameters using the optimizer applied to the current minibatch. Also, update the global_step. Occasionally store the model checkpoint in the specified directory. This is useful in case your machine crashes - then you can simply restart from the specified checkpoint. End of explanation with tf.Graph().as_default(): inputs, targets = convert_data_to_tensors(x_train, y_train) predictions, end_points = regression_model(inputs, is_training=True) # Add multiple loss nodes. mean_squared_error_loss = tf.losses.mean_squared_error(labels=targets, predictions=predictions) absolute_difference_loss = slim.losses.absolute_difference(predictions, targets) # The following two ways to compute the total loss are equivalent regularization_loss = tf.add_n(slim.losses.get_regularization_losses()) total_loss1 = mean_squared_error_loss + absolute_difference_loss + regularization_loss # Regularization Loss is included in the total loss by default. # This is good for training, but not for testing. total_loss2 = slim.losses.get_total_loss(add_regularization_losses=True) init_op = tf.global_variables_initializer() with tf.Session() as sess: sess.run(init_op) # Will initialize the parameters with random weights. total_loss1, total_loss2 = sess.run([total_loss1, total_loss2]) print('Total Loss1: %f' % total_loss1) print('Total Loss2: %f' % total_loss2) print('Regularization Losses:') for loss in slim.losses.get_regularization_losses(): print(loss) print('Loss Functions:') for loss in slim.losses.get_losses(): print(loss) Explanation: Training with multiple loss functions. Sometimes we have multiple objectives we want to simultaneously optimize. In slim, it is easy to add more losses, as we show below. (We do not optimize the total loss in this example, but we show how to compute it.) End of explanation with tf.Graph().as_default(): inputs, targets = convert_data_to_tensors(x_test, y_test) # Create the model structure. (Parameters will be loaded below.) predictions, end_points = regression_model(inputs, is_training=False) # Make a session which restores the old parameters from a checkpoint. sv = tf.train.Supervisor(logdir=ckpt_dir) with sv.managed_session() as sess: inputs, predictions, targets = sess.run([inputs, predictions, targets]) plt.scatter(inputs, targets, c='r'); plt.scatter(inputs, predictions, c='b'); plt.title('red=true, blue=predicted') Explanation: Let's load the saved model and use it for prediction. End of explanation with tf.Graph().as_default(): inputs, targets = convert_data_to_tensors(x_test, y_test) predictions, end_points = regression_model(inputs, is_training=False) # Specify metrics to evaluate: names_to_value_nodes, names_to_update_nodes = slim.metrics.aggregate_metric_map({ 'Mean Squared Error': slim.metrics.streaming_mean_squared_error(predictions, targets), 'Mean Absolute Error': slim.metrics.streaming_mean_absolute_error(predictions, targets) }) # Make a session which restores the old graph parameters, and then run eval. 
sv = tf.train.Supervisor(logdir=ckpt_dir) with sv.managed_session() as sess: metric_values = slim.evaluation.evaluation( sess, num_evals=1, # Single pass over data eval_op=names_to_update_nodes.values(), final_op=names_to_value_nodes.values()) names_to_values = dict(zip(names_to_value_nodes.keys(), metric_values)) for key, value in names_to_values.items(): print('%s: %f' % (key, value)) Explanation: Let's compute various evaluation metrics on the test set. In TF-Slim termiology, losses are optimized, but metrics (which may not be differentiable, e.g., precision and recall) are just measured. As an illustration, the code below computes mean squared error and mean absolute error metrics on the test set. Each metric declaration creates several local variables (which must be initialized via tf.initialize_local_variables()) and returns both a value_op and an update_op. When evaluated, the value_op returns the current value of the metric. The update_op loads a new batch of data, runs the model, obtains the predictions and accumulates the metric statistics appropriately before returning the current value of the metric. We store these value nodes and update nodes in 2 dictionaries. After creating the metric nodes, we can pass them to slim.evaluation.evaluation, which repeatedly evaluates these nodes the specified number of times. (This allows us to compute the evaluation in a streaming fashion across minibatches, which is usefulf for large datasets.) Finally, we print the final value of each metric. End of explanation import tensorflow as tf from datasets import dataset_utils url = "http://download.tensorflow.org/data/flowers.tar.gz" flowers_data_dir = '/tmp/flowers' if not tf.gfile.Exists(flowers_data_dir): tf.gfile.MakeDirs(flowers_data_dir) dataset_utils.download_and_uncompress_tarball(url, flowers_data_dir) Explanation: Reading Data with TF-Slim <a id='ReadingTFSlimDatasets'></a> Reading data with TF-Slim has two main components: A Dataset and a DatasetDataProvider. The former is a descriptor of a dataset, while the latter performs the actions necessary for actually reading the data. Lets look at each one in detail: Dataset A TF-Slim Dataset contains descriptive information about a dataset necessary for reading it, such as the list of data files and how to decode them. It also contains metadata including class labels, the size of the train/test splits and descriptions of the tensors that the dataset provides. For example, some datasets contain images with labels. Others augment this data with bounding box annotations, etc. The Dataset object allows us to write generic code using the same API, regardless of the data content and encoding type. TF-Slim's Dataset works especially well when the data is stored as a (possibly sharded) TFRecords file, where each record contains a tf.train.Example protocol buffer. TF-Slim uses a consistent convention for naming the keys and values inside each Example record. DatasetDataProvider A DatasetDataProvider is a class which actually reads the data from a dataset. It is highly configurable to read the data in various ways that may make a big impact on the efficiency of your training process. For example, it can be single or multi-threaded. If your data is sharded across many files, it can read each files serially, or from every file simultaneously. Demo: The Flowers Dataset For convenience, we've include scripts to convert several common image datasets into TFRecord format and have provided the Dataset descriptor files necessary for reading them. 
We demonstrate how easy it is to use these dataset via the Flowers dataset below. Download the Flowers Dataset <a id='DownloadFlowers'></a> We've made available a tarball of the Flowers dataset which has already been converted to TFRecord format. End of explanation from datasets import flowers import tensorflow as tf from tensorflow.contrib import slim with tf.Graph().as_default(): dataset = flowers.get_split('train', flowers_data_dir) data_provider = slim.dataset_data_provider.DatasetDataProvider( dataset, common_queue_capacity=32, common_queue_min=1) image, label = data_provider.get(['image', 'label']) with tf.Session() as sess: with slim.queues.QueueRunners(sess): for i in range(4): np_image, np_label = sess.run([image, label]) height, width, _ = np_image.shape class_name = name = dataset.labels_to_names[np_label] plt.figure() plt.imshow(np_image) plt.title('%s, %d x %d' % (name, height, width)) plt.axis('off') plt.show() Explanation: Display some of the data. End of explanation def my_cnn(images, num_classes, is_training): # is_training is not used... with slim.arg_scope([slim.max_pool2d], kernel_size=[3, 3], stride=2): net = slim.conv2d(images, 64, [5, 5]) net = slim.max_pool2d(net) net = slim.conv2d(net, 64, [5, 5]) net = slim.max_pool2d(net) net = slim.flatten(net) net = slim.fully_connected(net, 192) net = slim.fully_connected(net, num_classes, activation_fn=None) return net Explanation: Convolutional neural nets (CNNs). <a id='CNN'></a> In this section, we show how to train an image classifier using a simple CNN. Define the model. Below we define a simple CNN. Note that the output layer is linear function - we will apply softmax transformation externally to the model, either in the loss function (for training), or in the prediction function (during testing). End of explanation import tensorflow as tf with tf.Graph().as_default(): # The model can handle any input size because the first layer is convolutional. # The size of the model is determined when image_node is first passed into the my_cnn function. # Once the variables are initialized, the size of all the weight matrices is fixed. # Because of the fully connected layers, this means that all subsequent images must have the same # input size as the first image. batch_size, height, width, channels = 3, 28, 28, 3 images = tf.random_uniform([batch_size, height, width, channels], maxval=1) # Create the model. num_classes = 10 logits = my_cnn(images, num_classes, is_training=True) probabilities = tf.nn.softmax(logits) # Initialize all the variables (including parameters) randomly. init_op = tf.global_variables_initializer() with tf.Session() as sess: # Run the init_op, evaluate the model outputs and print the results: sess.run(init_op) probabilities = sess.run(probabilities) print('Probabilities Shape:') print(probabilities.shape) # batch_size x num_classes print('\nProbabilities:') print(probabilities) print('\nSumming across all classes (Should equal 1):') print(np.sum(probabilities, 1)) # Each row sums to 1 Explanation: Apply the model to some randomly generated images. End of explanation from preprocessing import inception_preprocessing import tensorflow as tf from tensorflow.contrib import slim def load_batch(dataset, batch_size=32, height=299, width=299, is_training=False): Loads a single batch of data. Args: dataset: The dataset to load. batch_size: The number of images in the batch. height: The size of each image after preprocessing. width: The size of each image after preprocessing. 
is_training: Whether or not we're currently training or evaluating. Returns: images: A Tensor of size [batch_size, height, width, 3], image samples that have been preprocessed. images_raw: A Tensor of size [batch_size, height, width, 3], image samples that can be used for visualization. labels: A Tensor of size [batch_size], whose values range between 0 and dataset.num_classes. data_provider = slim.dataset_data_provider.DatasetDataProvider( dataset, common_queue_capacity=32, common_queue_min=8) image_raw, label = data_provider.get(['image', 'label']) # Preprocess image for usage by Inception. image = inception_preprocessing.preprocess_image(image_raw, height, width, is_training=is_training) # Preprocess the image for display purposes. image_raw = tf.expand_dims(image_raw, 0) image_raw = tf.image.resize_images(image_raw, [height, width]) image_raw = tf.squeeze(image_raw) # Batch it up. images, images_raw, labels = tf.train.batch( [image, image_raw, label], batch_size=batch_size, num_threads=1, capacity=2 * batch_size) return images, images_raw, labels from datasets import flowers # This might take a few minutes. train_dir = '/tmp/tfslim_model/' print('Will save model to %s' % train_dir) with tf.Graph().as_default(): tf.logging.set_verbosity(tf.logging.INFO) dataset = flowers.get_split('train', flowers_data_dir) images, _, labels = load_batch(dataset) # Create the model: logits = my_cnn(images, num_classes=dataset.num_classes, is_training=True) # Specify the loss function: one_hot_labels = slim.one_hot_encoding(labels, dataset.num_classes) slim.losses.softmax_cross_entropy(logits, one_hot_labels) total_loss = slim.losses.get_total_loss() # Create some summaries to visualize the training process: tf.summary.scalar('losses/Total Loss', total_loss) # Specify the optimizer and create the train op: optimizer = tf.train.AdamOptimizer(learning_rate=0.01) train_op = slim.learning.create_train_op(total_loss, optimizer) # Run the training: final_loss = slim.learning.train( train_op, logdir=train_dir, number_of_steps=1, # For speed, we just do 1 epoch save_summaries_secs=1) print('Finished training. Final batch loss %d' % final_loss) Explanation: Train the model on the Flowers dataset. Before starting, make sure you've run the code to <a href="#DownloadFlowers">Download the Flowers</a> dataset. Now, we'll get a sense of what it looks like to use TF-Slim's training functions found in learning.py. First, we'll create a function, load_batch, that loads batches of dataset from a dataset. Next, we'll train a model for a single step (just to demonstrate the API), and evaluate the results. End of explanation from datasets import flowers # This might take a few minutes. 
with tf.Graph().as_default(): tf.logging.set_verbosity(tf.logging.DEBUG) dataset = flowers.get_split('train', flowers_data_dir) images, _, labels = load_batch(dataset) logits = my_cnn(images, num_classes=dataset.num_classes, is_training=False) predictions = tf.argmax(logits, 1) # Define the metrics: names_to_values, names_to_updates = slim.metrics.aggregate_metric_map({ 'eval/Accuracy': slim.metrics.streaming_accuracy(predictions, labels), 'eval/Recall@5': slim.metrics.streaming_recall_at_k(logits, labels, 5), }) print('Running evaluation Loop...') checkpoint_path = tf.train.latest_checkpoint(train_dir) metric_values = slim.evaluation.evaluate_once( master='', checkpoint_path=checkpoint_path, logdir=train_dir, eval_op=names_to_updates.values(), final_op=names_to_values.values()) names_to_values = dict(zip(names_to_values.keys(), metric_values)) for name in names_to_values: print('%s: %f' % (name, names_to_values[name])) Explanation: Evaluate some metrics. As we discussed above, we can compute various metrics besides the loss. Below we show how to compute prediction accuracy of the trained model, as well as top-5 classification accuracy. (The difference between evaluation and evaluation_loop is that the latter writes the results to a log directory, so they can be viewed in tensorboard.) End of explanation from datasets import dataset_utils url = "http://download.tensorflow.org/models/inception_v1_2016_08_28.tar.gz" checkpoints_dir = '/tmp/checkpoints' if not tf.gfile.Exists(checkpoints_dir): tf.gfile.MakeDirs(checkpoints_dir) dataset_utils.download_and_uncompress_tarball(url, checkpoints_dir) Explanation: Using pre-trained models <a id='Pretrained'></a> Neural nets work best when they have many parameters, making them very flexible function approximators. However, this means they must be trained on big datasets. Since this process is slow, we provide various pre-trained models - see the list here. You can either use these models as-is, or you can perform "surgery" on them, to modify them for some other task. For example, it is common to "chop off" the final pre-softmax layer, and replace it with a new set of weights corresponding to some new set of labels. You can then quickly fine tune the new model on a small new dataset. We illustrate this below, using inception-v1 as the base model. While models like Inception V3 are more powerful, Inception V1 is used for speed purposes. Take into account that VGG and ResNet final layers have only 1000 outputs rather than 1001. The ImageNet dataset provied has an empty background class which can be used to fine-tune the model to other tasks. VGG and ResNet models provided here don't use that class. We provide two examples of using pretrained models: Inception V1 and VGG-19 models to highlight this difference. 
Download the Inception V1 checkpoint End of explanation import numpy as np import os import tensorflow as tf try: import urllib2 except ImportError: import urllib.request as urllib from datasets import imagenet from nets import inception from preprocessing import inception_preprocessing from tensorflow.contrib import slim image_size = inception.inception_v1.default_image_size with tf.Graph().as_default(): url = 'https://upload.wikimedia.org/wikipedia/commons/7/70/EnglishCockerSpaniel_simon.jpg' image_string = urllib.urlopen(url).read() image = tf.image.decode_jpeg(image_string, channels=3) processed_image = inception_preprocessing.preprocess_image(image, image_size, image_size, is_training=False) processed_images = tf.expand_dims(processed_image, 0) # Create the model, use the default arg scope to configure the batch norm parameters. with slim.arg_scope(inception.inception_v1_arg_scope()): logits, _ = inception.inception_v1(processed_images, num_classes=1001, is_training=False) probabilities = tf.nn.softmax(logits) init_fn = slim.assign_from_checkpoint_fn( os.path.join(checkpoints_dir, 'inception_v1.ckpt'), slim.get_model_variables('InceptionV1')) with tf.Session() as sess: init_fn(sess) np_image, probabilities = sess.run([image, probabilities]) probabilities = probabilities[0, 0:] sorted_inds = [i[0] for i in sorted(enumerate(-probabilities), key=lambda x:x[1])] plt.figure() plt.imshow(np_image.astype(np.uint8)) plt.axis('off') plt.show() names = imagenet.create_readable_names_for_imagenet_labels() for i in range(5): index = sorted_inds[i] print('Probability %0.2f%% => [%s]' % (probabilities[index] * 100, names[index])) Explanation: Apply Pre-trained Inception V1 model to Images. We have to convert each image to the size expected by the model checkpoint. There is no easy way to determine this size from the checkpoint itself. So we use a preprocessor to enforce this. End of explanation from datasets import dataset_utils import tensorflow as tf url = "http://download.tensorflow.org/models/vgg_16_2016_08_28.tar.gz" checkpoints_dir = '/tmp/checkpoints' if not tf.gfile.Exists(checkpoints_dir): tf.gfile.MakeDirs(checkpoints_dir) dataset_utils.download_and_uncompress_tarball(url, checkpoints_dir) Explanation: Download the VGG-16 checkpoint End of explanation import numpy as np import os import tensorflow as tf try: import urllib2 except ImportError: import urllib.request as urllib from datasets import imagenet from nets import vgg from preprocessing import vgg_preprocessing from tensorflow.contrib import slim image_size = vgg.vgg_16.default_image_size with tf.Graph().as_default(): url = 'https://upload.wikimedia.org/wikipedia/commons/d/d9/First_Student_IC_school_bus_202076.jpg' image_string = urllib.urlopen(url).read() image = tf.image.decode_jpeg(image_string, channels=3) processed_image = vgg_preprocessing.preprocess_image(image, image_size, image_size, is_training=False) processed_images = tf.expand_dims(processed_image, 0) # Create the model, use the default arg scope to configure the batch norm parameters. with slim.arg_scope(vgg.vgg_arg_scope()): # 1000 classes instead of 1001. 
logits, _ = vgg.vgg_16(processed_images, num_classes=1000, is_training=False) probabilities = tf.nn.softmax(logits) init_fn = slim.assign_from_checkpoint_fn( os.path.join(checkpoints_dir, 'vgg_16.ckpt'), slim.get_model_variables('vgg_16')) with tf.Session() as sess: init_fn(sess) np_image, probabilities = sess.run([image, probabilities]) probabilities = probabilities[0, 0:] sorted_inds = [i[0] for i in sorted(enumerate(-probabilities), key=lambda x:x[1])] plt.figure() plt.imshow(np_image.astype(np.uint8)) plt.axis('off') plt.show() names = imagenet.create_readable_names_for_imagenet_labels() for i in range(5): index = sorted_inds[i] # Shift the index of a class name by one. print('Probability %0.2f%% => [%s]' % (probabilities[index] * 100, names[index+1])) Explanation: Apply Pre-trained VGG-16 model to Images. We have to convert each image to the size expected by the model checkpoint. There is no easy way to determine this size from the checkpoint itself. So we use a preprocessor to enforce this. Pay attention to the difference caused by 1000 classes instead of 1001. End of explanation # Note that this may take several minutes. import os from datasets import flowers from nets import inception from preprocessing import inception_preprocessing from tensorflow.contrib import slim image_size = inception.inception_v1.default_image_size def get_init_fn(): Returns a function run by the chief worker to warm-start the training. checkpoint_exclude_scopes=["InceptionV1/Logits", "InceptionV1/AuxLogits"] exclusions = [scope.strip() for scope in checkpoint_exclude_scopes] variables_to_restore = [] for var in slim.get_model_variables(): excluded = False for exclusion in exclusions: if var.op.name.startswith(exclusion): excluded = True break if not excluded: variables_to_restore.append(var) return slim.assign_from_checkpoint_fn( os.path.join(checkpoints_dir, 'inception_v1.ckpt'), variables_to_restore) train_dir = '/tmp/inception_finetuned/' with tf.Graph().as_default(): tf.logging.set_verbosity(tf.logging.INFO) dataset = flowers.get_split('train', flowers_data_dir) images, _, labels = load_batch(dataset, height=image_size, width=image_size) # Create the model, use the default arg scope to configure the batch norm parameters. with slim.arg_scope(inception.inception_v1_arg_scope()): logits, _ = inception.inception_v1(images, num_classes=dataset.num_classes, is_training=True) # Specify the loss function: one_hot_labels = slim.one_hot_encoding(labels, dataset.num_classes) slim.losses.softmax_cross_entropy(logits, one_hot_labels) total_loss = slim.losses.get_total_loss() # Create some summaries to visualize the training process: tf.summary.scalar('losses/Total Loss', total_loss) # Specify the optimizer and create the train op: optimizer = tf.train.AdamOptimizer(learning_rate=0.01) train_op = slim.learning.create_train_op(total_loss, optimizer) # Run the training: final_loss = slim.learning.train( train_op, logdir=train_dir, init_fn=get_init_fn(), number_of_steps=2) print('Finished training. Last batch loss %f' % final_loss) Explanation: Fine-tune the model on a different set of labels. We will fine tune the inception model on the Flowers dataset. 
End of explanation import numpy as np import tensorflow as tf from datasets import flowers from nets import inception from tensorflow.contrib import slim image_size = inception.inception_v1.default_image_size batch_size = 3 with tf.Graph().as_default(): tf.logging.set_verbosity(tf.logging.INFO) dataset = flowers.get_split('train', flowers_data_dir) images, images_raw, labels = load_batch(dataset, height=image_size, width=image_size) # Create the model, use the default arg scope to configure the batch norm parameters. with slim.arg_scope(inception.inception_v1_arg_scope()): logits, _ = inception.inception_v1(images, num_classes=dataset.num_classes, is_training=True) probabilities = tf.nn.softmax(logits) checkpoint_path = tf.train.latest_checkpoint(train_dir) init_fn = slim.assign_from_checkpoint_fn( checkpoint_path, slim.get_variables_to_restore()) with tf.Session() as sess: with slim.queues.QueueRunners(sess): sess.run(tf.initialize_local_variables()) init_fn(sess) np_probabilities, np_images_raw, np_labels = sess.run([probabilities, images_raw, labels]) for i in range(batch_size): image = np_images_raw[i, :, :, :] true_label = np_labels[i] predicted_label = np.argmax(np_probabilities[i, :]) predicted_name = dataset.labels_to_names[predicted_label] true_name = dataset.labels_to_names[true_label] plt.figure() plt.imshow(image.astype(np.uint8)) plt.title('Ground Truth: [%s], Prediction [%s]' % (true_name, predicted_name)) plt.axis('off') plt.show() Explanation: Apply fine tuned model to some images. End of explanation
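Having fine-tuned and applied the model, one could also score it with the streaming metrics introduced earlier in this walkthrough. The following is only a rough sketch (it reuses names defined above — dataset, load_batch, image_size, train_dir — and the same slim imports, and has not been run here):

with tf.Graph().as_default():
    images, _, labels = load_batch(dataset, height=image_size, width=image_size)
    # Rebuild the fine-tuned network in inference mode.
    with slim.arg_scope(inception.inception_v1_arg_scope()):
        logits, _ = inception.inception_v1(images, num_classes=dataset.num_classes,
                                           is_training=False)
    predictions = tf.argmax(logits, 1)
    # Each streaming metric returns a value_op and an update_op.
    names_to_values, names_to_updates = slim.metrics.aggregate_metric_map({
        'eval/Accuracy': slim.metrics.streaming_accuracy(predictions, labels),
    })
    metric_values = slim.evaluation.evaluate_once(
        master='',
        checkpoint_path=tf.train.latest_checkpoint(train_dir),
        logdir=train_dir,
        eval_op=list(names_to_updates.values()),
        final_op=list(names_to_values.values()))
    print(dict(zip(names_to_values.keys(), metric_values)))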
2,031
Given the following text description, write Python code to implement the functionality described below step by step Description: Testing of playing pyguessgame. Generates random numbers and plays a game. Create two random lists of numbers (0-9, 10-19, 20-29, etc., up to 100). Compare the two lists: if a number matches, mark it as a win; if not, mark it as a loss. Debian Step1: Makes a dict with keys pointing to the tens numbers. The value needs to be updated with the list of random numbers; currently it just adds the number one. How to add the random number list?
Python Code: #for ronum in ranumlis: # print ronum randict = dict() othgues = [] othlow = 0 othhigh = 9 for ranez in range(10): randxz = random.randint(othlow, othhigh) othgues.append(randxz) othlow = (othlow + 10) othhigh = (othhigh + 10) #print othgues tenlis = ['zero', 'ten', 'twenty', 'thirty', 'fourty', 'fifty', 'sixty', 'seventy', 'eighty', 'ninety'] #for telis in tenlis: # for diez in dieci: # print telis #randict Explanation: Testing of playing pyguessgame. Generates random numbers and plays a game. Create two random lists of numbers 0/9,10/19,20/29 etc to 100. Compare the two lists. If win mark, if lose mark. Debian End of explanation for ronum in ranumlis: #print ronum if ronum in othgues: print (str(ronum) + ' You Win!') else: print (str(ronum) + ' You Lose!') #dieci = dict() #for ranz in range(10): #print str(ranz) + str(1)# # dieci.update({str(ranz) + str(1): str(ranz)}) # for numz in range(10): #print str(ranz) + str(numz) # print numz #print zetoo #for diez in dieci: # print diez #for sinum in ranumlis: # print str(sinum) + (str('\n')) #if str(sinum) in othhigh: # print 'Win' #import os #os.system('sudo adduser joemanz --disabled-login --quiet -D') #uslis = os.listdir('/home/wcmckee/signinlca/usernames/') #print ('User List: ') #for usl in uslis: # print usl # os.system('sudo adduser ' + usl + ' ' + '--disabled-login --quiet') # os.system('sudo mv /home/wcmckee/signinlca/usernames/' + usl + ' ' + '/home/' + usl + ' ') #print dieci Explanation: Makes dict with keys pointing to the 10s numbers. The value needs the list of random numbers updated. Currently it just adds the number one. How to add the random number list? End of explanation
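One possible answer to the question above ("How to add the random number list?") — a minimal sketch, reusing the tenlis labels from the code and assuming each tens bucket should hold its own small list of random draws (the count of three values per bucket is arbitrary):

import random

tenlis = ['zero', 'ten', 'twenty', 'thirty', 'fourty', 'fifty',
          'sixty', 'seventy', 'eighty', 'ninety']

randict = dict()
low, high = 0, 9
for name in tenlis:
    # Each key maps to a list of random numbers drawn from its own decade.
    randict[name] = [random.randint(low, high) for _ in range(3)]
    low += 10
    high += 10

print(randict)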
2,032
Given the following text description, write Python code to implement the functionality described below step by step Description: <a href='http Step1: Declaring elements in a function If we write a function that accepts one or more parameters and constructs an element, we can build plots that do things like Step2: The function defines a number of parameters that will change the signal, but using the default parameters the function outputs a Curve like this Step3: HoloMaps The HoloMap is the first container type we will start working with, because it is often the starting point of a parameter exploration. HoloMaps allow exploring a parameter space sampled at specific, discrete values, and can easily be created using a dictionary comprehension. When declaring a HoloMap, just ensure the length and ordering of the key tuple matches the key dimensions Step4: Note how the keys in our HoloMap map on to two automatically generated sliders. HoloViews supports two types of widgets by default Step5: Apart from their simplicity and generality, one of the key features of HoloMaps is that they can be exported to a static HTML file, GIF, or video, because every combination of the sliders (parameter values) has been pre-computed already. This very convenient feature of pre-computation becomes a liability for very large or densely sampled parameter spaces, however, leading to the DynamicMap type discussed next. Summary HoloMaps allow declaring a parameter space The default widgets provide a slider for numeric types and a dropdown menu for non-numeric types. HoloMap works well for small or sparsely sampled parameter spaces, exporting to static files DynamicMap A [DynamicMap]((holoviews.org/reference/containers/bokeh/DynamicMap.html) is very similar to a HoloMap except that it evaluates the function lazily. This property makes DynamicMap require a live, running Python server, not just an HTML-serving web site or email, and it may be slow if each frame is slower to compute than it is to display. However, because of these properties, DynamicMap allows exploring arbitrarily large parameter spaces, dynamically generating each element as needed to satisfy a request from the user. The key dimensions kdims must match the arguments of the function Step6: Faceting parameter spaces Casting HoloMaps and DynamicMaps let you explore a multidimensional parameter space by looking at one point in that space at a time, which is often but not always sufficient. If you want to see more data at once, you can facet the HoloMap to put some data points side by side or overlaid to facilitate comparison. One easy way to do that is to cast your HoloMap into a GridSpace, NdLayout, or NdOverlay container Step7: Faceting with methods Using the .overlay, .grid and .layout methods we can facet multi-dimensional data by a specific dimension Step8: Using these methods with a DynamicMap requires special attention, because a dynamic map can return an infinite number of different values along its dimensions, unlike a HoloMap. Obviously, HoloViews could not comply with such a request, but these methods are perfectly legal with DynamicMap if you also define which specific dimension values you need, using the .redim.values method Step9: Optional Slicing and indexing HoloMaps and other containers also allow you to easily index or select by key, allowing you to Step10: You can do the same using the select method
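As a warm-up to the notebook code that follows, the dictionary-comprehension pattern described above can be sketched in isolation (a hypothetical one-dimensional amplitude example, not the FM-modulation curves used below):

import numpy as np
import holoviews as hv
hv.extension('bokeh')

xs = np.linspace(0, 1, 100)
# One Curve per amplitude; each dictionary key lines up with the single key dimension.
hmap = hv.HoloMap({a: hv.Curve((xs, a * np.sin(10 * xs))) for a in [1, 2, 3]},
                  kdims=['amplitude'])
hmap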
Python Code: import numpy as np import holoviews as hv hv.extension('bokeh') %opts Curve Area [width=600] Explanation: <a href='http://www.holoviews.org'><img src="assets/hv+bk.png" alt="HV+BK logos" width="40%;" align="left"/></a> <div style="float:right;"><h2>03. Exploration with Containers</h2></div> In the first two sections of this tutorial we discovered how to declare static elements and compose them one by one into composite objects, allowing us to quickly visualize data as we explore it. However, many datasets contain numerous additional dimensions of data, such as the same measurement repeated across a large number of different settings or parameter values. To address these common situations, HoloViews provides ontainers that allow you to explore extra dimensions of your data using widgets, as animations, or by "faceting" it (splitting it into "small multiples") in various ways. To begin with we will discover how we can quickly explore the parameters of a function by having it return an element and then evaluating the function over the parameter space. End of explanation def fm_modulation(f_carrier=110, f_mod=110, mod_index=1, length=0.1, sampleRate=3000): x = np.arange(0, length, 1.0/sampleRate) y = np.sin(2*np.pi*f_carrier*x + mod_index*np.sin(2*np.pi*f_mod*x)) return hv.Curve((x, y), kdims=['Time'], vdims=['Amplitude']) Explanation: Declaring elements in a function If we write a function that accepts one or more parameters and constructs an element, we can build plots that do things like: Loading data from disk as needed Querying data from an API Calculating data from a mathematical function Generating data from a simulation As a basic example, let's declare a function that generates a frequency-modulated signal and returns a Curve element: End of explanation fm_modulation() Explanation: The function defines a number of parameters that will change the signal, but using the default parameters the function outputs a Curve like this: End of explanation carrier_frequencies = [10, 20, 110, 220, 330] modulation_frequencies = [110, 220, 330] hmap = hv.HoloMap({(fc, fm): fm_modulation(fc, fm) for fc in carrier_frequencies for fm in modulation_frequencies}, kdims=['fc', 'fm']) hmap Explanation: HoloMaps The HoloMap is the first container type we will start working with, because it is often the starting point of a parameter exploration. HoloMaps allow exploring a parameter space sampled at specific, discrete values, and can easily be created using a dictionary comprehension. When declaring a HoloMap, just ensure the length and ordering of the key tuple matches the key dimensions: End of explanation # Exercise: Try changing the function below to return an ``Area`` or ``Scatter`` element, # in the same way `fm_modulation` returned a ``Curve`` element. def fm_modulation2(f_carrier=220, f_mod=110, mod_index=1, length=0.1, sampleRate=3000): x = np.arange(0,length, 1.0/sampleRate) y = np.sin(2*np.pi*f_carrier*x + mod_index*np.sin(2*np.pi*f_mod*x)) # Then declare a HoloMap like above and assign it to a ``exercise_hmap`` variable and display that Explanation: Note how the keys in our HoloMap map on to two automatically generated sliders. HoloViews supports two types of widgets by default: numeric sliders, or a dropdown selection menu for all non-numeric types. These sliders appear because a HoloMap can display only a single Element at one time, and the user must thus select which of the available elements to show at any one time. 
End of explanation %%opts Curve (color='red') dmap = hv.DynamicMap(fm_modulation, kdims=['f_carrier', 'f_mod', 'mod_index']) dmap = dmap.redim.range(f_carrier=((10, 110)), f_mod=(10, 110), mod_index=(0.1, 2)) dmap # Exercise: Declare a DynamicMap using the function from the previous exercise and name it ``exercise_dmap`` # Exercise (Optional): Use the ``.redim.step`` method and a floating point range to modify the slider step Explanation: Apart from their simplicity and generality, one of the key features of HoloMaps is that they can be exported to a static HTML file, GIF, or video, because every combination of the sliders (parameter values) has been pre-computed already. This very convenient feature of pre-computation becomes a liability for very large or densely sampled parameter spaces, however, leading to the DynamicMap type discussed next. Summary HoloMaps allow declaring a parameter space The default widgets provide a slider for numeric types and a dropdown menu for non-numeric types. HoloMap works well for small or sparsely sampled parameter spaces, exporting to static files DynamicMap A [DynamicMap]((holoviews.org/reference/containers/bokeh/DynamicMap.html) is very similar to a HoloMap except that it evaluates the function lazily. This property makes DynamicMap require a live, running Python server, not just an HTML-serving web site or email, and it may be slow if each frame is slower to compute than it is to display. However, because of these properties, DynamicMap allows exploring arbitrarily large parameter spaces, dynamically generating each element as needed to satisfy a request from the user. The key dimensions kdims must match the arguments of the function: End of explanation %%opts Curve [width=150] hv.GridSpace(hmap).opts() # Exercise: Try casting your ``exercise_hmap`` HoloMap from the first exercise to an ``NdLayout`` or # ``NdOverlay``, guessing from the name what the resulting organization will be before testing it. Explanation: Faceting parameter spaces Casting HoloMaps and DynamicMaps let you explore a multidimensional parameter space by looking at one point in that space at a time, which is often but not always sufficient. If you want to see more data at once, you can facet the HoloMap to put some data points side by side or overlaid to facilitate comparison. One easy way to do that is to cast your HoloMap into a GridSpace, NdLayout, or NdOverlay container: End of explanation hmap.overlay('fm') Explanation: Faceting with methods Using the .overlay, .grid and .layout methods we can facet multi-dimensional data by a specific dimension: End of explanation %%opts Curve [width=150] dmap.redim.values(f_mod=[10, 20, 30], f_carrier=[10, 20, 30]).overlay('f_mod').grid('f_carrier').opts() # Exercise: Facet the ``exercise_dmap`` DynamicMap using ``.overlay`` and ``.grid`` # Hint: Use the .redim.values method to set discrete values for ``f_mod`` and ``f_carrier`` dimensions Explanation: Using these methods with a DynamicMap requires special attention, because a dynamic map can return an infinite number of different values along its dimensions, unlike a HoloMap. 
Obviously, HoloViews could not comply with such a request, but these methods are perfectly legal with DynamicMap if you also define which specific dimension values you need, using the .redim.values method: End of explanation %%opts Curve [width=300] hmap[10, 110] + hmap[10, 200:].overlay() + hmap[[10, 110], 110].overlay() Explanation: Optional Slicing and indexing HoloMaps and other containers also allow you to easily index or select by key, allowing you to: select a specific key: obj[10, 110] select a slice: obj[10, 200:] select multiple values: obj[[10, 110], 110] End of explanation (hmap.select(fc=10, fm=110) + hmap.select(fc=10, fm=(200, None)).overlay() + hmap.select(fc=[10, 110], fm=110).overlay()) # Exercise: Try selecting two carrier frequencies and two modulation frequencies on the ``exercise_hmap`` Explanation: You can do the same using the select method: End of explanation
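For completeness, the NdLayout and NdOverlay casts mentioned in the faceting discussion (and left as an exercise above) would look roughly like this, assuming the same hmap object:

# Side-by-side panels, one per remaining key...
hv.NdLayout(hmap.select(fc=[10, 110], fm=110))

# ...or all curves overlaid on a single set of axes.
hv.NdOverlay(hmap.select(fc=[10, 110], fm=110))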
2,033
Given the following text description, write Python code to implement the functionality described below step by step Description: Let's revise Step1: Multi-class classification The goal of multi-class classification is to assign an instance to one of a set of classes. scikit-learn uses a strategy called one-vs.-all, or one-vs.-the-rest, to support multi-class classification. One-vs.-all classification uses one binary classifier for each of the possible classes. The class that is predicted with the greatest confidence is assigned to the instance.
Python Code: # reading the data df = pd.read_csv("data/fertility_Diagnosis.txt", delimiter=',', header=None) df.iloc[:4,0:9] X_train, X_test, y_train, y_test = train_test_split(df.iloc[:,0:9], df[9], test_size=0.1) pipeline = Pipeline([('clf', LogisticRegression())]) parameters = { 'clf__penalty': ('l1', 'l2'), 'clf__C': (0.01, 0.1, 1, 10) } grid_search = GridSearchCV(pipeline, parameters, n_jobs=5, verbose=True, scoring='accuracy', cv = 5) grid_search.fit(X_train, y_train) print( 'Best score: %0.3f' % grid_search.best_score_) print( 'Best parameters set:') best_parameters = grid_search.best_estimator_.get_params() for param_name in sorted(parameters.keys()): print( '\t%s: %r' % (param_name, best_parameters[param_name])) y_pred = grid_search.predict(X_test) #print((y_pred), (y_test)) y_test = [2 if x=='N' else 1 for x in y_test] y_pred = [2 if x=='N' else 1 for x in y_pred] #print((y_pred), (y_test)) print( 'Accuracy:', accuracy_score(y_test, y_pred)) print( 'Precision:', precision_score(y_test, y_pred)) print( 'Recall:', recall_score(y_test, y_pred)) Explanation: Lets revise End of explanation movie = pd.read_csv("data/movie_train.tsv", delimiter="\t") movie[:10] print(movie['Sentiment'].describe()) print(movie['Sentiment'].value_counts()) def movie_rank(): pipeline = Pipeline([('vect', TfidfVectorizer(stop_words='english')), ('clf', LogisticRegression()) ]) parameters = {'vect__max_df': (0.25, 0.5), 'vect__ngram_range': ((1, 1), (1, 2)), 'vect__use_idf': (True, False), 'clf__C': (0.1, 1, 10),} movie=pd.read_csv('data/movie_train.tsv', header=0, delimiter='\t') X, y = movie['Phrase'], movie['Sentiment'].as_matrix() #print(X[:3]) #print(y[:3]) X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state = 19) #print(X_train[:3]) #print(y_train[:3]) grid_search = GridSearchCV(pipeline, parameters, n_jobs=2, verbose=1, scoring='accuracy') grid_search.fit(X_train, y_train) print( 'Best score: %0.3f' % grid_search.best_score_) print( 'Best parameters set:') best_parameters = grid_search.best_estimator_.get_params() for param_name in sorted(parameters.keys()): print( '\t%s: %r' % (param_name, best_parameters[param_name])) predictions = grid_search.predict(X_test) print ('Accuracy:', accuracy_score(y_test, predictions)) print ('Confusion Matrix:', confusion_matrix(y_test, predictions)) print ('Classification Report:', classification_report(y_test, predictions)) movie_rank() Explanation: Multi-class classification The goal of multi-class classification is to assign an instance to one of the set of classes. scikit-learn uses a strategy called one-vs.-all, or one-vs.-the-rest, to support multi-class classification. Onevs.- all classification uses one binary classifier for each of the possible classes. The class that is predicted with the greatest confidence is assigned to the instance. End of explanation
2,034
Given the following text description, write Python code to implement the functionality described below step by step Description: Facies classification using Random Forest Classifier (submission 3) <a rel="license" href="https Step1: Load training data Step2: Build features In the real world it would be unusual to have neutron-density cross-plot porosity (i.e. PHIND) without the corresponding raw input curves, namely bulk density and neutron porosity, as we have in this contest dataset. So as part of the feature engineering process, I back-calculate estimates of those raw curves from the provided DeltaPHI and PHIND curves. One issue with this approach though is that cross-plot porosity differs between vendors, toolstrings, and software packages, and it is not known exactly how the PHIND in this dataset was computed. So I make the assumption here that PHIND ≈ sum of squares porosity, which is usually an adequate approximation of neutron-density crossplot porosity. That equation looks like this Step3: Because solving the sum of squares equation involved the quadratic formula, in some cases imaginary numbers result due to porosities being negative, which is what the warning below is about. Step4: Regress missing PE values Step5: Apply regression model to missing PE values and merge back into dataframe Step6: Compute UMAA for lithology model Step7: Umaa Rhomaa plot Just for fun, below is a basic Umaa-Rhomaa plot to view relative abundances of quartz, calcite, dolomite, and clay. The red triangle represents a ternary solution for QTZ, CAL, and DOL, while the green triangle represents a solution for QTZ, CAL, and CLAY (illite). Step8: Here I use matrix inversion to "solve" the ternary plot for each lithologic component. Essentially each datapoint is a mix of the three components defined by the ternary diagram, with abundances of each defined by the relative distances from each endpoint. I use a GR cutoff of 40 API to determine when to use either the QTZ-CAL-DOL or QTZ-CAL-CLAY ternary solutions. In other words, it is assumed that below 40 API, there is 0% clay, and above 40 API there is 0% dolomite, and also that these four lithologic components are the only components in these rocks. Admittedly it's not a great assumption, especially since the ternary plot indicates other stuff is going on. For example the high Umaa datapoints near the Calcite endpoint may indicate some heavy minerals (e.g., pyrite) or even barite-weighted mud. The "pull" of datapoints to the northwest quadrant probably reflects some gas effect, so my lithologies in those gassy zones will be skewed. Step9: Plot facies by formation to see if the Formation feature will be useful Step10: Group formations by similar facies distributions Step11: Make dummy variables from the categorical Formation feature Step12: Compute Archie water saturation Step14: Get distances between wells Step15: Add latitude and longitude as features, add distances to every other well as features Step16: First guess at facies using KNN Step17: Fit RandomForect model and apply LeavePGroupsOut test There is some bad log data in this dataset which I'd guess is due to rugose hole. PHIND gets as high at 80%, which is certainly spurious, so I'll remove data with cross-plot porosity greater than 40% from the dataset. CROSS H CATTLE well also looks pretty different from the others so I'm going to remove it from the training set. Step18: Apply model to validation dataset Load validation data (vd), build features, and use the classfier from above to predict facies. 
Ultimately the PE_EST curve seemed to be slightly more predictive than the PE curve proper (?). I use that instead of PE in the classifier, so I need to compute it with the validation data.
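For reference, the Archie relation evaluated in the water-saturation step of the code below is, in its usual form,

$$S_w = \sqrt{\frac{a\,R_w}{\phi^{2}\,R_t}}$$

where, as an assumption about how the inputs are defined here, the constant 0.08 in the code stands in for the product $a\,R_w$, PHIND is used as the porosity term $\phi$, and the deep resistivity is recovered as $R_t = 10^{\,\mathrm{ILD\_log10}}$. This is shown only as a reading aid for the archie() function, not as a statement about the original author's parameter choices.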
Python Code: import pandas as pd import numpy as np from math import radians, cos, sin, asin, sqrt import itertools from sklearn import neighbors from sklearn import preprocessing from sklearn import ensemble from sklearn.model_selection import LeaveOneGroupOut, LeavePGroupsOut import inversion import matplotlib.pyplot as plt import seaborn as sns %matplotlib inline Explanation: Facies classification using Random Forest Classifier (submission 3) <a rel="license" href="https://creativecommons.org/licenses/by-sa/4.0/"> <img alt="Creative Commons License BY-SA" align="left" src="https://i.creativecommons.org/l/by-sa/4.0/88x31.png"> </a> <br> Dan Hallau End of explanation df = pd.read_csv('../facies_vectors.csv') Explanation: Load training data End of explanation def estimate_dphi(df): return ((4*(df['PHIND']**2) - (df['DeltaPHI']**2))**0.5 - df['DeltaPHI']) / 2 def estimate_rhob(df): return (2.71 - (df['DPHI_EST']/100) * 1.71) def estimate_nphi(df): return df['DPHI_EST'] + df['DeltaPHI'] def compute_rhomaa(df): return (df['RHOB_EST'] - (df['PHIND'] / 100)) / (1 - df['PHIND'] / 100) def compute_umaa(df): return ((df['PE'] * df['RHOB_EST']) - (df['PHIND']/100 * 0.398)) / (1 - df['PHIND'] / 100) Explanation: Build features In the real world it would be unusual to have neutron-density cross-plot porosity (i.e. PHIND) without the corresponding raw input curves, namely bulk density and neutron porosity, as we have in this contest dataset. So as part of the feature engineering process, I back-calculate estimates of those raw curves from the provided DeltaPHI and PHIND curves. One issue with this approach though is that cross-plot porosity differs between vendors, toolstrings, and software packages, and it is not known exactly how the PHIND in this dataset was computed. So I make the assumption here that PHIND ≈ sum of squares porosity, which is usually an adequate approximation of neutron-density crossplot porosity. That equation looks like this: $$PHIND = \sqrt{\frac{NPHI^2 + DPHI^2}{2}}$$ and it is assumed here that DeltaPHI is: $$DeltaPHI = NPHI - DPHI$$ The functions below use the relationships from the above equations (...two equations, two unknowns...) to estimate NPHI and DPHI (and consequently RHOB). Once we have RHOB, we can use it combined with PE to estimate apparent grain density (RHOMAA) and apparent photoelectric capture cross-section (UMAA), which are useful in lithology estimations from well logs. End of explanation df['DPHI_EST'] = df.apply(lambda x: estimate_dphi(x), axis=1).astype(float) df['RHOB_EST'] = df.apply(lambda x: estimate_rhob(x), axis=1) df['NPHI_EST'] = df.apply(lambda x: estimate_nphi(x), axis=1) df['RHOMAA_EST'] = df.apply(lambda x: compute_rhomaa(x), axis=1) Explanation: Because solving the sum of squares equation involved the quadratic formula, in some cases imaginary numbers result due to porosities being negative, which is what the warning below is about. 
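To spell out the algebra behind estimate_dphi above (a short derivation added for clarity, not part of the original notebook): substituting $NPHI = DPHI + DeltaPHI$ into the sum-of-squares equation gives a quadratic in $DPHI$,

$$2\,DPHI^{2} + 2\,DeltaPHI\cdot DPHI + \left(DeltaPHI^{2} - 2\,PHIND^{2}\right) = 0,$$

whose positive root is

$$DPHI = \frac{\sqrt{4\,PHIND^{2} - DeltaPHI^{2}} - DeltaPHI}{2},$$

which is exactly the expression returned by estimate_dphi; when the discriminant $4\,PHIND^{2} - DeltaPHI^{2}$ goes negative, you get the imaginary-number warning discussed in the next step.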
End of explanation pe = df.dropna() PE = pe['PE'].values wells = pe['Well Name'].values drop_list_pe = ['Formation', 'Well Name', 'Facies', 'Depth', 'PE', 'RELPOS'] fv_pe = pe.drop(drop_list_pe, axis=1).values X_pe = preprocessing.StandardScaler().fit(fv_pe).transform(fv_pe) y_pe = PE reg = neighbors.KNeighborsRegressor(n_neighbors=40, weights='distance') logo = LeaveOneGroupOut() f1knn_pe = [] for train, test in logo.split(X_pe, y_pe, groups=wells): well_name = wells[test[0]] reg.fit(X_pe[train], y_pe[train]) score = reg.fit(X_pe[train], y_pe[train]).score(X_pe[test], y_pe[test]) print("{:>20s} {:.3f}".format(well_name, score)) f1knn_pe.append(score) print("-Average leave-one-well-out F1 Score: %6f" % (np.mean(f1knn_pe))) Explanation: Regress missing PE values End of explanation reg.fit(X_pe, y_pe) fv_apply = df.drop(drop_list_pe, axis=1).values X_apply = preprocessing.StandardScaler().fit(fv_apply).transform(fv_apply) df['PE_EST'] = reg.predict(X_apply) df.PE = df.PE.combine_first(df.PE_EST) Explanation: Apply regression model to missing PE values and merge back into dataframe: End of explanation df['UMAA_EST'] = df.apply(lambda x: compute_umaa(x), axis=1) Explanation: Compute UMAA for lithology model End of explanation df[df.GR < 125].plot(kind='scatter', x='UMAA_EST', y='RHOMAA_EST', c='GR', figsize=(8,6)) plt.ylim(3.1, 2.2) plt.xlim(0.0, 17.0) plt.plot([4.8, 9.0, 13.8, 4.8], [2.65, 2.87, 2.71, 2.65], c='r') plt.plot([4.8, 11.9, 13.8, 4.8], [2.65, 3.06, 2.71, 2.65], c='g') plt.scatter([4.8], [2.65], s=50, c='r') plt.scatter([9.0], [2.87], s=50, c='r') plt.scatter([13.8], [2.71], s=50, c='r') plt.scatter([11.9], [3.06], s=50, c='g') plt.text(2.8, 2.65, 'Quartz', backgroundcolor='w') plt.text(14.4, 2.71, 'Calcite', backgroundcolor='w') plt.text(9.6, 2.87, 'Dolomite', backgroundcolor='w') plt.text(12.5, 3.06, 'Illite', backgroundcolor='w') plt.text(7.0, 2.55, "gas effect", ha="center", va="center", rotation=-55, size=8, bbox=dict(boxstyle="larrow,pad=0.3", fc="pink", ec="red", lw=2)) plt.text(15.0, 2.78, "barite?", ha="center", va="center", rotation=0, size=8, bbox=dict(boxstyle="rarrow,pad=0.3", fc="yellow", ec="orange", lw=2)) Explanation: Umaa Rhomaa plot Just for fun, below is a basic Umaa-Rhomaa plot to view relative abundances of quartz, calcite, dolomite, and clay. The red triangle represents a ternary solution for QTZ, CAL, and DOL, while the green triangle represents a solution for QTZ, CAL, and CLAY (illite). End of explanation # QTZ-CAL-CLAY ur1 = inversion.UmaaRhomaa() ur1.set_dol_uma(11.9) ur1.set_dol_rhoma(3.06) # QTZ-CAL-DOL ur2 = inversion.UmaaRhomaa() df['UR_QTZ'] = np.nan df['UR_CLY'] = np.nan df['UR_CAL'] = np.nan df['UR_DOL'] = np.nan df.ix[df.GR >= 40, 'UR_QTZ'] = df.ix[df.GR >= 40].apply(lambda x: ur1.get_qtz(x.UMAA_EST, x.RHOMAA_EST), axis=1) df.ix[df.GR >= 40, 'UR_CLY'] = df.ix[df.GR >= 40].apply(lambda x: ur1.get_dol(x.UMAA_EST, x.RHOMAA_EST), axis=1) df.ix[df.GR >= 40, 'UR_CAL'] = df.ix[df.GR >= 40].apply(lambda x: ur1.get_cal(x.UMAA_EST, x.RHOMAA_EST), axis=1) df.ix[df.GR >= 40, 'UR_DOL'] = 0 df.ix[df.GR < 40, 'UR_QTZ'] = df.ix[df.GR < 40].apply(lambda x: ur2.get_qtz(x.UMAA_EST, x.RHOMAA_EST), axis=1) df.ix[df.GR < 40, 'UR_DOL'] = df.ix[df.GR < 40].apply(lambda x: ur2.get_dol(x.UMAA_EST, x.RHOMAA_EST), axis=1) df.ix[df.GR < 40, 'UR_CAL'] = df.ix[df.GR < 40].apply(lambda x: ur2.get_cal(x.UMAA_EST, x.RHOMAA_EST), axis=1) df.ix[df.GR < 40, 'UR_CLY'] = 0 Explanation: Here I use matrix inversion to "solve" the ternary plot for each lithologic component. 
Essentially each datapoint is a mix of the three components defined by the ternary diagram, with abundances of each defined by the relative distances from each endpoint. I use a GR cutoff of 40 API to determine when to use either the QTZ-CAL-DOL or QTZ-CAL-CLAY ternary solutions. In other words, it is assumed that below 40 API, there is 0% clay, and above 40 API there is 0% dolomite, and also that these four lithologic components are the only components in these rocks. Admittedly it's not a great assumption, especially since the ternary plot indicates other stuff is going on. For example the high Umaa datapoints near the Calcite endpoint may indicate some heavy minerals (e.g., pyrite) or even barite-weighted mud. The "pull" of datapoints to the northwest quadrant probably reflects some gas effect, so my lithologies in those gassy zones will be skewed. End of explanation facies_colors = ['#F4D03F', '#F5B041','#DC7633','#6E2C00', '#1B4F72','#2E86C1', '#AED6F1', '#A569BD', '#196F3D'] fms = df.Formation.unique() fig, ax = plt.subplots(int(len(fms) / 2), 2, sharey=True, sharex=True, figsize=(5,10)) for i, fm in enumerate(fms): facies_counts = df[df.Formation == fm]['Facies'].value_counts().sort_index() colors = [facies_colors[i-1] for i in facies_counts.index] ax[int(i/2), i%2].bar(facies_counts.index, height=facies_counts, color=colors) ax[int(i/2), i%2].set_title(fm, size=8) Explanation: Plot facies by formation to see if the Formation feature will be useful End of explanation fm_groups = [['A1 SH', 'B1 SH', 'B2 SH', 'B3 SH', 'B4 SH'], ['B5 SH', 'C SH'], ['A1 LM', 'C LM'], ['B1 LM', 'B3 LM', 'B4 LM'], ['B2 LM', 'B5 LM']] fm_group_dict = {fm:i for i, l in enumerate(fm_groups) for fm in l} df['FM_GRP'] = df.Formation.map(fm_group_dict) Explanation: Group formations by similar facies distributions End of explanation df = pd.get_dummies(df, prefix='FM_GRP', columns=['FM_GRP']) Explanation: Make dummy variables from the categorical Formation feature End of explanation def archie(df): return np.sqrt(0.08 / ((df.PHIND ** 2) * (10 ** df.ILD_log10))) df['SW'] = df.apply(lambda x: archie(x), axis=1) Explanation: Compute Archie water saturation End of explanation # modified from jesper latlong = pd.DataFrame({"SHRIMPLIN": [37.978076, -100.987305], # "ALEXANDER D": [37.6747257, -101.1675259], # "SHANKLE": [38.0633799, -101.3920543], # "LUKE G U": [37.4499614, -101.6121913], # "KIMZEY A": [37.12289, -101.39697], # "CROSS H CATTLE": [37.9105826, -101.6464517], # "NOLAN": [37.7866294, -101.0451641], #? "NEWBY": [37.3172442, -101.3546995], # "CHURCHMAN BIBLE": [37.3497658, -101.1060761], #? "STUART": [37.4857262, -101.1391063], # "CRAWFORD": [37.1893654, -101.1494994], #? 
"Recruit F9": [0,0]}) def haversine(lon1, lat1, lon2, lat2): Calculate the great circle distance between two points on the earth (specified in decimal degrees) # convert decimal degrees to radians lon1, lat1, lon2, lat2 = map(radians, [lon1, lat1, lon2, lat2]) # haversine formula dlon = lon2 - lon1 dlat = lat2 - lat1 a = sin(dlat/2)**2 + cos(lat1) * cos(lat2) * sin(dlon/2)**2 c = 2 * asin(sqrt(a)) km = 6367 * c return km def get_lat(df): return latlong[df['Well Name']][0] def get_long(df): return latlong[df['Well Name']][1] Explanation: Get distances between wells End of explanation df['LAT'] = df.apply(lambda x: get_lat(x), axis=1) df['LON'] = df.apply(lambda x: get_long(x), axis=1) dist_dict = {} for k in latlong: dict_name = k + '_DISTANCES' k_dict = {} lat1 = latlong[k][0] lon1 = latlong[k][1] for l in latlong: lat2 = latlong[l][0] lon2 = latlong[l][1] if l == 'Recruit F9': dist = haversine(0, 0, 0, 0) elif k == "Recruit F9": dist = haversine(0, 0, 0, 0) else: dist = haversine(lon1, lat1, lon2, lat2) k_dict[l] = dist dist_dict[dict_name] = k_dict for i in dist_dict: df[i] = np.nan for j in dist_dict[i]: df.loc[df['Well Name'] == j, i] = dist_dict[i][j] Explanation: Add latitude and longitude as features, add distances to every other well as features End of explanation df0 = df[(df.PHIND <= 40) & (df['Well Name'] != 'CROSS H CATTLE')] facies = df0['Facies'].values wells = df0['Well Name'].values keep_list0 = ['GR', 'ILD_log10', 'PHIND', 'PE', 'NM_M', 'RELPOS', 'RHOB_EST', 'UR_CLY', 'UR_CAL'] fv0 = df0[keep_list0].values clf0 = neighbors.KNeighborsClassifier(n_neighbors=56, weights='distance') X0 = preprocessing.StandardScaler().fit(fv0).transform(fv0) y0 = facies logo = LeaveOneGroupOut() f1knn0 = [] clf0.fit(X0, y0) X1 = preprocessing.StandardScaler().fit(df[keep_list0].values).transform(df[keep_list0].values) knn_pred = clf0.predict(X1) df['KNN_FACIES'] = knn_pred Explanation: First guess at facies using KNN End of explanation df1 = df.dropna() df1 = df1[(df1['Well Name'] != 'CROSS H CATTLE') & (df.PHIND < 40.0)] facies = df1['Facies'].values wells = df1['Well Name'].values drop_list = ['Formation', 'Well Name', 'Facies', 'Depth', 'DPHI_EST', 'NPHI_EST', 'DeltaPHI', 'UMAA_EST', 'UR_QTZ', 'PE_EST', 'Recruit F9_DISTANCES', 'KIMZEY A_DISTANCES', 'NEWBY_DISTANCES', 'ALEXANDER D_DISTANCES', 'NOLAN_DISTANCES', 'FM_GRP_3'] fv = df1.drop(drop_list, axis=1).values X = preprocessing.StandardScaler().fit(fv).transform(fv) y = facies ne_grid = [150] mf_grid = [10] md_grid = [None] msl_grid = [5] mss_grid = [20] keys = ['n_estimators', 'max_features', 'max_depth', 'min_samples_leaf', 'min_samples_split'] param_sets = itertools.product(ne_grid, mf_grid, md_grid, msl_grid, mss_grid) param_grid = [dict(zip(keys, i)) for i in param_sets] clf_list = [] for i, d in enumerate(param_grid): clf = ensemble.RandomForestClassifier(n_estimators=d['n_estimators'], class_weight='balanced', min_samples_leaf=d['min_samples_leaf'], min_samples_split=d['min_samples_split'], max_features=d['max_features'], max_depth=d['max_depth'], n_jobs=-1) lpgo = LeavePGroupsOut(n_groups=2) f1rfc = [] for train, test in lpgo.split(X, y, groups=wells): clf.fit(X[train], y[train]) score = clf.fit(X[train], y[train]).score(X[test], y[test]) f1rfc.append(score) print("Average leave-two-wells-out F1 Score: %6f" % (np.mean(f1rfc))) clf_list.append((clf, np.mean(f1rfc))) np.max([i[1] for i in clf_list]) list(zip(df1.drop(drop_list, axis=1).columns, clf.feature_importances_)) Explanation: Fit RandomForect model and apply LeavePGroupsOut 
test There is some bad log data in this dataset which I'd guess is due to rugose hole. PHIND gets as high at 80%, which is certainly spurious, so I'll remove data with cross-plot porosity greater than 40% from the dataset. CROSS H CATTLE well also looks pretty different from the others so I'm going to remove it from the training set. End of explanation # refit model to entire training set clf.fit(X, y) # load validation data vd = pd.read_csv('../validation_data_nofacies.csv') # compute extra log data features vd['DPHI_EST'] = vd.apply(lambda x: estimate_dphi(x), axis=1).astype(float) vd['RHOB_EST'] = vd.apply(lambda x: estimate_rhob(x), axis=1) vd['NPHI_EST'] = vd.apply(lambda x: estimate_nphi(x), axis=1) vd['RHOMAA_EST'] = vd.apply(lambda x: compute_rhomaa(x), axis=1) # predict missing PE values drop_list_vd = ['Formation', 'Well Name', 'Depth', 'PE', 'RELPOS'] fv_vd = vd.drop(drop_list_vd, axis=1).values X_vd = preprocessing.StandardScaler().fit(fv_vd).transform(fv_vd) vd['PE_EST'] = reg.predict(X_vd) vd.PE = vd.PE.combine_first(vd.PE_EST) vd['UMAA_EST'] = vd.apply(lambda x: compute_umaa(x), axis=1) # Estimate lithology using Umaa Rhomaa solution vd['UR_QTZ'] = np.nan vd['UR_CLY'] = np.nan vd['UR_CAL'] = np.nan vd['UR_DOL'] = np.nan vd.ix[vd.GR >= 40, 'UR_QTZ'] = vd.ix[vd.GR >= 40].apply(lambda x: ur1.get_qtz(x.UMAA_EST, x.RHOMAA_EST), axis=1) vd.ix[vd.GR >= 40, 'UR_CLY'] = vd.ix[vd.GR >= 40].apply(lambda x: ur1.get_dol(x.UMAA_EST, x.RHOMAA_EST), axis=1) vd.ix[vd.GR >= 40, 'UR_CAL'] = vd.ix[vd.GR >= 40].apply(lambda x: ur1.get_cal(x.UMAA_EST, x.RHOMAA_EST), axis=1) vd.ix[vd.GR >= 40, 'UR_DOL'] = 0 vd.ix[vd.GR < 40, 'UR_QTZ'] = vd.ix[vd.GR < 40].apply(lambda x: ur2.get_qtz(x.UMAA_EST, x.RHOMAA_EST), axis=1) vd.ix[vd.GR < 40, 'UR_DOL'] = vd.ix[vd.GR < 40].apply(lambda x: ur2.get_dol(x.UMAA_EST, x.RHOMAA_EST), axis=1) vd.ix[vd.GR < 40, 'UR_CAL'] = vd.ix[vd.GR < 40].apply(lambda x: ur2.get_cal(x.UMAA_EST, x.RHOMAA_EST), axis=1) vd.ix[vd.GR < 40, 'UR_CLY'] = 0 # Formation grouping vd['FM_GRP'] = vd.Formation.map(fm_group_dict) vd = pd.get_dummies(vd, prefix='FM_GRP', columns=['FM_GRP']) # Water saturation vd['SW'] = vd.apply(lambda x: archie(x), axis=1) # Lat-long features vd['LAT'] = vd.apply(lambda x: get_lat(x), axis=1) vd['LON'] = vd.apply(lambda x: get_long(x), axis=1) for i in dist_dict: vd[i] = np.nan for j in dist_dict[i]: vd.loc[vd['Well Name'] == j, i] = dist_dict[i][j] # Compute first guess at facies with KNN X2 = preprocessing.StandardScaler().fit(vd[keep_list0].values).transform(vd[keep_list0].values) vd['KNN_FACIES'] = clf0.predict(X2) # Apply final model drop_list1 = ['Formation', 'Well Name', 'Depth', 'DPHI_EST', 'NPHI_EST', 'DeltaPHI', 'UMAA_EST', 'UR_QTZ', 'PE', 'Recruit F9_DISTANCES', 'KIMZEY A_DISTANCES', 'NEWBY_DISTANCES', 'ALEXANDER D_DISTANCES', 'NOLAN_DISTANCES', 'FM_GRP_3'] fv_vd1 = vd.drop(drop_list1, axis=1).values X_vd1 = preprocessing.StandardScaler().fit(fv_vd1).transform(fv_vd1) vd_predicted_facies = clf.predict(X_vd1) vd['Facies'] = vd_predicted_facies vd.to_csv('RFC_submission_3_predictions.csv') vd_predicted_facies Explanation: Apply model to validation dataset Load validation data (vd), build features, and use the classfier from above to predict facies. Ultimately the PE_EST curve seemed to be slightly more predictive than the PE curve proper (?). I use that instead of PE in the classifer so I need to compute it with the validation data. End of explanation
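A note on the ternary "solve" step above: the inversion.UmaaRhomaa class comes from an external helper module that is not shown in this notebook, so the linear algebra stays hidden. The sketch below shows the idea it presumably wraps, using the end-member (Umaa, Rhomaa) values read off the plot earlier; treat it as an illustration rather than the actual helper, and note that points falling outside the triangle come back with negative fractions.

import numpy as np

# End members from the ternary plot above, as (Umaa, Rhomaa) pairs
QTZ, CAL, DOL = (4.8, 2.65), (13.8, 2.71), (9.0, 2.87)

def ternary_fractions(umaa, rhomaa, e1=QTZ, e2=CAL, e3=DOL):
    # Mixture model: the measured point is a weighted sum of the three
    # end members, with the weights (volume fractions) summing to one.
    A = np.array([[e1[0], e2[0], e3[0]],
                  [e1[1], e2[1], e3[1]],
                  [1.0,   1.0,   1.0]])
    b = np.array([umaa, rhomaa, 1.0])
    return np.linalg.solve(A, b)  # [fraction_e1, fraction_e2, fraction_e3]

print(ternary_fractions(9.0, 2.75))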
2,035
Given the following text description, write Python code to implement the functionality described below step by step Description: After running your Pylearn2 models, it's probably not best to compare them on the score they get on the validation set, as that is used in the training process; so could be the victim of overfitting. It would be better to run the model over the test set, which is supposed to be a holdout set used to compare models. We could rerun all our models with a monitor on this value, but for models we've already run, it might be more useful to be able to pull out this value for just that pickle. This is likely to be wasted effort, because it seems like the kind of thing that should already exist in Pylearn2. Unfortunately, since I can't find it and it seems fairly simple to implement I'm just going to go ahead and write it. Hopefully, this will also help us figure out what's going wrong with some submissions, that turn out to be incredibly bad; for example, those using augmentation. Step1: Loading data and model Initialise, loading the settings and the test dataset we're going to be using Step2: Setting up forward pass Now we've loaded the data and the model we're going to set up a forward pass through the data in the same way we do it in the test.py script Step3: Compute probabilities The following is the same as the code in test.py that applies the processing. Step4: Of course, it's strange that there are any zeros at all. Hopefully they'll go away when we start averaging. Score before averaging We can score the model before averaging by just using the class labels as they were going to be used for training. Using Sklearn's utility for calculating log_loss Step5: Score after averaging In test.py we take the least intelligent approach to dealing with averaging over the different augmented versions. Basically, we just assume that whatever the augmentation factor is, the labels must repeat over that step size, so we can just collapse those into a single vector of probabilities. First, we should check that assumption Step6: There are no zeros in there now!
Python Code: import pylearn2.utils import pylearn2.config import theano import neukrill_net.dense_dataset import neukrill_net.utils import numpy as np %matplotlib inline import matplotlib.pyplot as plt import holoviews as hl %load_ext holoviews.ipython Explanation: After running your Pylearn2 models, it's probably not best to compare them on the score they get on the validation set, as that is used in the training process; so could be the victim of overfitting. It would be better to run the model over the test set, which is supposed to be a holdout set used to compare models. We could rerun all our models with a monitor on this value, but for models we've already run, it might be more useful to be able to pull out this value for just that pickle. This is likely to be wasted effort, because it seems like the kind of thing that should already exist in Pylearn2. Unfortunately, since I can't find it and it seems fairly simple to implement I'm just going to go ahead and write it. Hopefully, this will also help us figure out what's going wrong with some submissions, that turn out to be incredibly bad; for example, those using augmentation. End of explanation cd .. settings = neukrill_net.utils.Settings("settings.json") run_settings = neukrill_net.utils.load_run_settings( "run_settings/alexnet_based_norm_global_8aug.json", settings, force=True) %%time # loading the model model = pylearn2.utils.serial.load(run_settings['pickle abspath']) reload(neukrill_net.dense_dataset) %%time # loading the data dataset = neukrill_net.dense_dataset.DensePNGDataset(settings_path=run_settings['settings_path'], run_settings=run_settings['run_settings_path'], train_or_predict='train', training_set_mode='test', force=True) Explanation: Loading data and model Initialise, loading the settings and the test dataset we're going to be using: End of explanation # find allowed batch size over 1000 (want big batches) # (Theano has to have fixed batch size and we don't want leftover) batch_size=1000 while dataset.X.shape[0]%batch_size != 0: batch_size += 1 n_batches = int(dataset.X.shape[0]/batch_size) %%time # set this batch size model.set_batch_size(batch_size) # compile Theano function X = model.get_input_space().make_batch_theano() Y = model.fprop(X) f = theano.function([X],Y) Explanation: Setting up forward pass Now we've loaded the data and the model we're going to set up a forward pass through the data in the same way we do it in the test.py script: pick a batch size, compile a Theano function and then iterate over the whole dataset in batches, filling an array of predictions. End of explanation %%time y = np.zeros((dataset.X.shape[0],len(settings.classes))) for i in xrange(n_batches): print("Batch {0} of {1}".format(i+1,n_batches)) x_arg = dataset.X[i*batch_size:(i+1)*batch_size,:] if X.ndim > 2: x_arg = dataset.get_topological_view(x_arg) y[i*batch_size:(i+1)*batch_size,:] = (f(x_arg.astype(X.dtype).T)) plt.scatter(np.where(y == 0)[1],np.where(y==0)[0]) Explanation: Compute probabilities The following is the same as the code in test.py that applies the processing. End of explanation import sklearn.metrics sklearn.metrics.log_loss(dataset.y,y) Explanation: Of course, it's strange that there are any zeros at all. Hopefully they'll go away when we start averaging. Score before averaging We can score the model before averaging by just using the class labels as they were going to be used for training. 
Using Sklearn's utility for calculating log_loss: End of explanation # augmentation factor af = 8 for low,high in zip(range(0,dataset.y.shape[0],af),range(af,dataset.y.shape[0]+af,af)): first = dataset.y[low][0] if any(first != i for i in dataset.y[low:high].ravel()): print("Labels do not match at:", (low,high)) break y_collapsed = np.zeros((int(dataset.X.shape[0]/af), len(settings.classes))) for i,(low,high) in enumerate(zip(range(0,dataset.y.shape[0],af), range(af,dataset.y.shape[0]+af,af))): y_collapsed[i,:] = np.mean(y[low:high,:], axis=0) plt.scatter(np.where(y_collapsed == 0)[1],np.where(y_collapsed == 0)[0]) Explanation: Score after averaging In test.py we take the least intelligent approach to dealing with averaging over the different augmented versions. Basically, we just assume that whatever the augmentation factor is, the labels must repeat over that step size, so we can just collapse those into a single vector of probabilities. First, we should check that assumption: End of explanation labels_collapsed = dataset.y[range(0,dataset.y.shape[0],af)] labels_collapsed.shape sklearn.metrics.log_loss(labels_collapsed,y_collapsed) Explanation: There are no zeros in there now! End of explanation
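The window loop above does the job, but the "average every af consecutive rows" assumption can also be stated in one vectorized step, which is faster and harder to get wrong. This sketch reuses y, dataset and af exactly as defined in the cells above and, like the original, assumes the number of rows is an exact multiple of the augmentation factor.

import numpy as np

af = 8  # augmentation factor, as above
# y has shape (n_samples * af, n_classes): group consecutive blocks of af rows and average them
y_collapsed = y.reshape(-1, af, y.shape[1]).mean(axis=1)
# the matching label for each block is the label of its first row
labels_collapsed = dataset.y[::af]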
2,036
Given the following text description, write Python code to implement the functionality described below step by step Description: Interact Exercise 2 Imports Step1: Plotting with parameters Write a plot_sin1(a, b) function that plots $sin(ax+b)$ over the interval $[0,4\pi]$. Customize your visualization to make it effective and beautiful. Customize the box, grid, spines and ticks to match the requirements of this data. Use enough points along the x-axis to get a smooth plot. For the x-axis tick locations use integer multiples of $\pi$. For the x-axis tick labels use multiples of pi using LaTeX Step2: Then use interact to create a user interface for exploring your function Step3: In matplotlib, the line style and color can be set with a third argument to plot. Examples of this argument Step4: Use interact to create a UI for plot_sine2. Use a slider for a and b as above. Use a drop down menu for selecting the line style between a dotted blue line line, black circles and red triangles.
Python Code: %matplotlib inline from matplotlib import pyplot as plt import numpy as np from IPython.html.widgets import interact, interactive, fixed from IPython.display import display Explanation: Interact Exercise 2 Imports End of explanation # YOUR CODE HERE #raise NotImplementedError() def plot_sine1(a,b): x = np.arange(0,4*np.pi,0.1) plt.plot(x,np.sin(a*x+b)) plt.xlim(0,np.pi*4) plt.tick_params(axis='y', right='off', direction='out') plt.tick_params(axis='x', top='off', direction='out') plt.title('Sine') plt.ylabel('y(x)') plt.xlabel('x') plt.box(False) plt.grid(True) plot_sine1(5, 3.4) Explanation: Plotting with parameters Write a plot_sin1(a, b) function that plots $sin(ax+b)$ over the interval $[0,4\pi]$. Customize your visualization to make it effective and beautiful. Customize the box, grid, spines and ticks to match the requirements of this data. Use enough points along the x-axis to get a smooth plot. For the x-axis tick locations use integer multiples of $\pi$. For the x-axis tick labels use multiples of pi using LaTeX: $3\pi$. End of explanation # YOUR CODE HERE #raise NotImplementedError() interact(plot_sine1, a=(0.0,5.0), b = (-5.0,5.0)) assert True # leave this for grading the plot_sine1 exercise Explanation: Then use interact to create a user interface for exploring your function: a should be a floating point slider over the interval $[0.0,5.0]$ with steps of $0.1$. b should be a floating point slider over the interval $[-5.0,5.0]$ with steps of $0.1$. End of explanation # YOUR CODE HERE #raise NotImplementedError() def plot_sine2(a, b, style): t = np.arange(0, 4*np.pi, 0.04) plt.plot(t, np.sin(a*t+b), style) plt.xlim(0, np.pi*4) plt.tick_params(axis='y', right='off', direction='out') plt.tick_params(axis='x', top='off', direction='out') plt.title('Sine') plt.ylabel('y(x)') plt.xlabel('x') plt.box(False) plt.grid(True) plot_sine2(4.0, -1.0, 'r--') Explanation: In matplotlib, the line style and color can be set with a third argument to plot. Examples of this argument: dashed red: r-- blue circles: bo dotted black: k. Write a plot_sine2(a, b, style) function that has a third style argument that allows you to set the line style of the plot. The style should default to a blue line. End of explanation # YOUR CODE HERE #raise NotImplementedError() interact(plot_sine2, a=(0.0,5.0), b=(-5.0,5.0), style={'dotted blue line': 'b--', 'black circles': 'ko','red triangles':'r^'}); assert True # leave this for grading the plot_sine2 exercise Explanation: Use interact to create a UI for plot_sine2. Use a slider for a and b as above. Use a drop down menu for selecting the line style between a dotted blue line line, black circles and red triangles. End of explanation
2,037
Given the following text description, write Python code to implement the functionality described below step by step Description: Deep learning for Natural Language Processing Simple text representations, bag of words Word embedding and... not just another word2vec this time 1-dimensional convolutions for text Aggregating several data sources "the hard way" Solving ~somewhat~ real ML problem with ~almost~ end-to-end deep learning Special thanks to Irina Golzmann for help with technical part. NLTK You will require nltk v3.2 to solve this assignment It is really important that the version is 3.2, otherwize russian tokenizer might not work Install/update * sudo pip install --upgrade nltk==3.2 * If you don't remember when was the last pip upgrade, sudo pip install --upgrade pip If for some reason you can't or won't switch to nltk v3.2, just make sure that russian words are tokenized properly with RegeExpTokenizer. For students with low-RAM machines This assignment can be accomplished with even the low-tier hardware (<= 4Gb RAM) If that is the case, turn flag "low_RAM_mode" below to True If you have around 8GB memory, it is unlikely that you will feel constrained by memory. In case you are using a PC from last millenia, consider setting very_low_RAM=True Step1: Dataset Ex-kaggle-competition on prohibited content detection There goes the description - https Step2: Step3: Balance-out the classes Vast majority of data samples are non-prohibited 250k banned out of 4kk Let's just downsample random 250k legal samples to make further steps less computationally demanding If you aim for high Kaggle score, consider a smarter approach to that. Step4: Tokenizing First, we create a dictionary of all existing words. Assign each word a number - it's Id Step5: Remove rare tokens We are unlikely to make use of words that are only seen a few times throughout the corpora. Again, if you want to beat Kaggle competition metrics, consider doing something better. Step6: Replace words with IDs Set a maximum length for titles and descriptions. * If string is longer that that limit - crop it, if less - pad with zeros. * Thus we obtain a matrix of size [n_samples]x[max_length] * Element at i,j - is an identifier of word j within sample i Step7: Data format examples Step8: As you can see, our preprocessing is somewhat crude. Let us see if that is enough for our network Non-sequences Some data features are not text samples. E.g. price, # urls, category, etc They require a separate preprocessing. Step9: Split data into training and test Step10: Save preprocessed data [optional] The next tab can be used to stash all the essential data matrices and get rid of the rest of the data. Highly recommended if you have less than 1.5GB RAM left To do that, you need to first run it with save_prepared_data=True, then restart the notebook and only run this tab with read_prepared_data=True. Step11: Train the monster Since we have several data sources, our neural network may differ from what you used to work with. Separate input for titles cnn+global max or RNN Separate input for description cnn+global max or RNN Separate input for categorical features обычные полносвязные слои или какие-нибудь трюки These three inputs must be blended somehow - concatenated or added. Output Step12: NN architecture Step13: Loss function The standard way Step14: Determinitic prediction In case we use stochastic elements, e.g. 
dropout or noize Compile a separate set of functions with deterministic prediction (deterministic = True) Unless you think there's no neet for dropout there ofc. Btw is there? Step15: Coffee-lation Step16: Training loop The regular way with loops over minibatches Since the dataset is huge, we define epoch as some fixed amount of samples isntead of all dataset Step17: Tweaking guide batch_size - how many samples are processed per function call optimization gets slower, but more stable, as you increase it. May consider increasing it halfway through training minibatches_per_epoch - max amount of minibatches per epoch Does not affect training. Lesser value means more frequent and less stable printing Setting it to less than 10 is only meaningfull if you want to make sure your NN does not break down after one epoch n_epochs - total amount of epochs to train for n_epochs = 10**10 and manual interrupting is still an option Tips Step18: Final evaluation Evaluate network over the entire test set
Python Code: low_RAM_mode = True very_low_RAM = False #If you have <3GB RAM, set BOTH to true import pandas as pd import numpy as np import matplotlib.pyplot as plt %matplotlib inline Explanation: Deep learning for Natural Language Processing Simple text representations, bag of words Word embedding and... not just another word2vec this time 1-dimensional convolutions for text Aggregating several data sources "the hard way" Solving ~somewhat~ real ML problem with ~almost~ end-to-end deep learning Special thanks to Irina Golzmann for help with technical part. NLTK You will require nltk v3.2 to solve this assignment It is really important that the version is 3.2, otherwize russian tokenizer might not work Install/update * sudo pip install --upgrade nltk==3.2 * If you don't remember when was the last pip upgrade, sudo pip install --upgrade pip If for some reason you can't or won't switch to nltk v3.2, just make sure that russian words are tokenized properly with RegeExpTokenizer. For students with low-RAM machines This assignment can be accomplished with even the low-tier hardware (<= 4Gb RAM) If that is the case, turn flag "low_RAM_mode" below to True If you have around 8GB memory, it is unlikely that you will feel constrained by memory. In case you are using a PC from last millenia, consider setting very_low_RAM=True End of explanation if not low_RAM_mode: # a lot of ram df = pd.read_csv("avito_train.tsv",sep='\t') else: #aroung 4GB ram df = pd.read_csv("avito_train_1kk.tsv",sep='\t') print (df.shape, df.is_blocked.mean()) df[:5] Explanation: Dataset Ex-kaggle-competition on prohibited content detection There goes the description - https://www.kaggle.com/c/avito-prohibited-content Download High-RAM mode, * Download avito_train.tsv from competition data files Low-RAM-mode, * Download downsampled dataset from here * archive https://yadi.sk/d/l0p4lameqw3W8 * raw https://yadi.sk/d/I1v7mZ6Sqw2WK (in case you feel masochistic) What's inside Different kinds of features: * 2 text fields - title and description * Special features - price, number of e-mails, phones, etc * Category and subcategory - unsurprisingly, categorical features * Attributes - more factors Only 1 binary target whether or not such advertisement contains prohibited materials * criminal, misleading, human reproduction-related, etc * diving into the data may result in prolonged sleep disorders End of explanation print ("Blocked ratio",df.is_blocked.mean()) print ("Count:",len(df)) Explanation: End of explanation df.groupby(['category'])['is_blocked'].mean(), df.category.value_counts() #downsample data = df.copy() df = data[data.is_blocked == 1] data = data[data.is_blocked == 0] df.shape for category in data.category.value_counts().keys(): to_add = data[data.category == category] n_samples = df.category.value_counts()[category] indexes = np.random.choice(np.arange(to_add.shape[0]), size=n_samples) df = df.append(to_add.iloc[indexes]) print ("Blocked ratio:",df.is_blocked.mean()) print ("Count:",len(df)) df.groupby('category')['is_blocked'].mean() assert df.is_blocked.mean() < 0.51 assert df.is_blocked.mean() > 0.49 assert len(df) <= 560000 print ("All tests passed") #In case your RAM-o-meter is in the red if very_low_ram: data = data[::2] Explanation: Balance-out the classes Vast majority of data samples are non-prohibited 250k banned out of 4kk Let's just downsample random 250k legal samples to make further steps less computationally demanding If you aim for high Kaggle score, consider a smarter approach to that. 
End of explanation from nltk.tokenize import RegexpTokenizer from collections import Counter,defaultdict tokenizer = RegexpTokenizer(r"\w+") #Dictionary of tokens token_counts = Counter() #All texts all_texts = np.hstack([df.description.values,df.title.values]) #Compute token frequencies for s in all_texts: if type(s) is not str: continue s = s.lower() tokens = tokenizer.tokenize(s) for token in tokens: token_counts[token] +=1 Explanation: Tokenizing First, we create a dictionary of all existing words. Assign each word a number - it's Id End of explanation #Word frequency distribution, just for kicks _ = plt.hist(list(token_counts.values()),range=[0,50],bins=50) #Select only the tokens that had at least 10 occurences in the corpora. #Use token_counts. min_count = 10 tokens = dict(token for token in list(token_counts.items()) if token[1] > min_count) token_to_id = {t:i+1 for i,t in enumerate(tokens)} null_token = "NULL" token_to_id[null_token] = 0 print ("# Tokens:",len(token_to_id)) if len(token_to_id) < 30000: print ("Alarm! It seems like there are too few tokens. Make sure you updated NLTK and applied correct thresholds -- unless you now what you're doing, ofc") if len(token_to_id) > 1000000: print ("Alarm! Too many tokens. You might have messed up when pruning rare ones -- unless you know what you're doin' ofc") Explanation: Remove rare tokens We are unlikely to make use of words that are only seen a few times throughout the corpora. Again, if you want to beat Kaggle competition metrics, consider doing something better. End of explanation def vectorize(strings, token_to_id, max_len=150): token_matrix = [] for s in strings: if type(s) is not str: token_matrix.append([0]*max_len) continue s = s.lower() tokens = tokenizer.tokenize(s) token_ids = list(map(lambda token: token_to_id.get(token,0), tokens))[:max_len] token_ids += [0]*(max_len - len(token_ids)) token_matrix.append(token_ids) return np.array(token_matrix) desc_tokens = vectorize(df.description.values,token_to_id,max_len = 150) title_tokens = vectorize(df.title.values,token_to_id,max_len = 15) Explanation: Replace words with IDs Set a maximum length for titles and descriptions. * If string is longer that that limit - crop it, if less - pad with zeros. * Thus we obtain a matrix of size [n_samples]x[max_length] * Element at i,j - is an identifier of word j within sample i End of explanation print ("Размер матрицы:",title_tokens.shape) for title, tokens in zip(df.title.values[:3],title_tokens[:3]): print (title,'->', tokens[:10],'...') Explanation: Data format examples End of explanation #All numeric features df_numerical_features = df[["phones_cnt","emails_cnt","urls_cnt","price"]] #One-hot-encoded category and subcategory from sklearn.feature_extraction import DictVectorizer categories = [] data_cat_subcat = df[["category","subcategory"]].values categories = [{"category":cat_name, "subcategory":subcat_name} for cat_name, subcat_name in data_cat_subcat] vectorizer = DictVectorizer(sparse=False) cat_one_hot = vectorizer.fit_transform(categories) cat_one_hot = pd.DataFrame(cat_one_hot,columns=vectorizer.feature_names_) df_non_text = pd.merge( df_numerical_features,cat_one_hot,on = np.arange(len(cat_one_hot)) ) del df_non_text["key_0"] Explanation: As you can see, our preprocessing is somewhat crude. Let us see if that is enough for our network Non-sequences Some data features are not text samples. E.g. price, # urls, category, etc They require a separate preprocessing. 
End of explanation #Target variable - whether or not sample contains prohibited material target = df.is_blocked.values.astype('int32') #Preprocessed titles title_tokens = title_tokens.astype('int32') #Preprocessed tokens desc_tokens = desc_tokens.astype('int32') #Non-sequences df_non_text = df_non_text.astype('float32') from sklearn.model_selection import train_test_split #Split into training and test set. #Difficulty selector: #Easy: split randomly #Medium: select test set items that have item_ids strictly above that of training set #Hard: do whatever you want, but score yourself using kaggle private leaderboard data_tuple = train_test_split(title_tokens, desc_tokens, df_non_text, target, test_size=0.1, random_state=123) title_tr,title_ts,desc_tr,desc_ts,nontext_tr,nontext_ts,target_tr,target_ts = data_tuple Explanation: Split data into training and test End of explanation save_prepared_data = False #save read_prepared_data = True #load #but not both at once assert not (save_prepared_data and read_prepared_data) if save_prepared_data: print ("Saving preprocessed data (may take up to 3 minutes)") import pickle with open("preprocessed_data.pcl",'wb') as fout: pickle.dump(data_tuple, fout) with open("token_to_id.pcl",'wb') as fout: pickle.dump(token_to_id,fout) print ("готово") elif read_prepared_data: print ("Reading saved data...") import pickle with open("preprocessed_data.pcl",'rb') as fin: data_tuple = pickle.load(fin) title_tr,title_ts,desc_tr,desc_ts,nontext_tr,nontext_ts,target_tr,target_ts = data_tuple with open("token_to_id.pcl",'rb') as fin: token_to_id = pickle.load(fin) #Re-importing libraries to allow staring noteboook from here import pandas as pd import numpy as np import matplotlib.pyplot as plt %matplotlib inline print ("done") Explanation: Save preprocessed data [optional] The next tab can be used to stash all the essential data matrices and get rid of the rest of the data. Highly recommended if you have less than 1.5GB RAM left To do that, you need to first run it with save_prepared_data=True, then restart the notebook and only run this tab with read_prepared_data=True. End of explanation #libraries import lasagne from theano import tensor as T import theano #3 inputs and a refere output title_token_ids = T.matrix("title_token_ids",dtype='int32') desc_token_ids = T.matrix("desc_token_ids",dtype='int32') categories = T.matrix("categories",dtype='float32') target_y = T.ivector("is_blocked") Explanation: Train the monster Since we have several data sources, our neural network may differ from what you used to work with. Separate input for titles cnn+global max or RNN Separate input for description cnn+global max or RNN Separate input for categorical features обычные полносвязные слои или какие-нибудь трюки These three inputs must be blended somehow - concatenated or added. Output: a simple binary classification 1 sigmoidal with binary_crossentropy 2 softmax with categorical_crossentropy - essentially the same as previous one 1 neuron without nonlinearity (lambda x: x) + hinge loss End of explanation title_inp = lasagne.layers.InputLayer((None,title_tr.shape[1]),input_var=title_token_ids) descr_inp = lasagne.layers.InputLayer((None,desc_tr.shape[1]),input_var=desc_token_ids) cat_inp = lasagne.layers.InputLayer((None,nontext_tr.shape[1]), input_var=categories) # Descriptions #word-wise embedding. We recommend to start from some 64 and improving after you are certain it works. 
descr_nn = lasagne.layers.EmbeddingLayer(descr_inp, input_size=len(token_to_id)+1, output_size=128) #reshape from [batch, time, unit] to [batch,unit,time] to allow 1d convolution over time descr_nn = lasagne.layers.DimshuffleLayer(descr_nn, [0,2,1]) descr_nn = lasagne.layers.Conv1DLayer(descr_nn, 64, filter_size=5) #pool over time descr_nn = lasagne.layers.GlobalPoolLayer(descr_nn,T.max) #Possible improvements here are adding several parallel convs with different filter sizes or stacking them the usual way #1dconv -> 1d max pool ->1dconv and finally global pool # Titles title_nn = lasagne.layers.EmbeddingLayer(title_inp, input_size=len(token_to_id)+1, output_size=128) title_nn = lasagne.layers.DimshuffleLayer(title_nn, [0,2,1]) title_nn = lasagne.layers.Conv1DLayer(title_nn, 64, filter_size=5) title_nn = lasagne.layers.GlobalPoolLayer(title_nn,T.max) # Non-sequences cat_nn = lasagne.layers.DenseLayer(cat_inp, 512) cat_nn = lasagne.layers.DenseLayer(cat_nn, 128) nn = lasagne.layers.concat([descr_nn, title_nn, cat_nn]) nn = lasagne.layers.DenseLayer(nn, 1024) nn = lasagne.layers.DropoutLayer(nn,p=5e-2) nn = lasagne.layers.DenseLayer(nn, 1, nonlinearity=lasagne.nonlinearities.linear) Explanation: NN architecture End of explanation #All trainable params weights = lasagne.layers.get_all_params(nn,trainable=True) #Simple NN prediction prediction = lasagne.layers.get_output(nn)[:,0] #Hinge loss loss = lasagne.objectives.binary_hinge_loss(prediction,target_y,log_odds=True).mean() #Weight optimization step updates = lasagne.updates.adam(loss, weights) Explanation: Loss function The standard way: prediction loss updates training and evaluation functions Hinge loss $ L_i = \max(0, \delta - t_i p_i) $ delta is a tunable parameter: how far should a neuron be in the positive margin area for us to stop bothering about it Function description may mention some +-1 limitations - this is not neccessary, at least as long as hinge loss has a default flag binary = True End of explanation #deterministic version det_prediction = lasagne.layers.get_output(nn,deterministic=True)[:,0] #equivalent loss function det_loss = lasagne.objectives.binary_hinge_loss(det_prediction, target_y, log_odds=True).mean() Explanation: Determinitic prediction In case we use stochastic elements, e.g. dropout or noize Compile a separate set of functions with deterministic prediction (deterministic = True) Unless you think there's no neet for dropout there ofc. Btw is there? 
End of explanation train_fun = theano.function([desc_token_ids,title_token_ids,categories,target_y],[loss,prediction],updates = updates) eval_fun = theano.function([desc_token_ids,title_token_ids,categories,target_y],[det_loss,det_prediction]) Explanation: Coffee-lation End of explanation #average precision at K from oracle import APatK, score # Out good old minibatch iterator now supports arbitrary amount of arrays (X,y,z) def iterate_minibatches(*arrays,**kwargs): batchsize=kwargs.get("batchsize",100) shuffle = kwargs.get("shuffle",True) if shuffle: indices = np.arange(len(arrays[0])) np.random.shuffle(indices) for start_idx in range(0, len(arrays[0]) - batchsize + 1, batchsize): if shuffle: excerpt = indices[start_idx:start_idx + batchsize] else: excerpt = slice(start_idx, start_idx + batchsize) yield [np.array(arr)[excerpt] for arr in arrays] Explanation: Training loop The regular way with loops over minibatches Since the dataset is huge, we define epoch as some fixed amount of samples isntead of all dataset End of explanation from sklearn.metrics import roc_auc_score, accuracy_score n_epochs = 5 batch_size = 500 minibatches_per_epoch = 100 for i in range(n_epochs): #training epoch_y_true = [] epoch_y_pred = [] b_c = b_loss = 0 for j, (b_desc,b_title,b_cat, b_y) in enumerate( iterate_minibatches(desc_tr,title_tr,nontext_tr,target_tr,batchsize=batch_size,shuffle=True)): if j > minibatches_per_epoch:break loss,pred_probas = train_fun(b_desc,b_title,b_cat,b_y) b_loss += loss b_c +=1 epoch_y_true.append(b_y) epoch_y_pred.append(pred_probas) epoch_y_true = np.concatenate(epoch_y_true) epoch_y_pred = np.concatenate(epoch_y_pred) print ("Train:") print ('\tloss:',b_loss/b_c) print ('\tacc:',accuracy_score(epoch_y_true,epoch_y_pred>0.)) print ('\tauc:',roc_auc_score(epoch_y_true,epoch_y_pred)) print ('\tap@k:',APatK(epoch_y_true,epoch_y_pred,K = int(len(epoch_y_pred)*0.025)+1)) #evaluation epoch_y_true = [] epoch_y_pred = [] b_c = b_loss = 0 for j, (b_desc,b_title,b_cat, b_y) in enumerate( iterate_minibatches(desc_ts,title_ts,nontext_tr,target_ts,batchsize=batch_size,shuffle=True)): if j > minibatches_per_epoch: break loss,pred_probas = eval_fun(b_desc,b_title,b_cat,b_y) b_loss += loss b_c +=1 epoch_y_true.append(b_y) epoch_y_pred.append(pred_probas) epoch_y_true = np.concatenate(epoch_y_true) epoch_y_pred = np.concatenate(epoch_y_pred) print ("Val:") print ('\tloss:',b_loss/b_c) print ('\tacc:',accuracy_score(epoch_y_true,epoch_y_pred>0.)) print ('\tauc:',roc_auc_score(epoch_y_true,epoch_y_pred)) print ('\tap@k:',APatK(epoch_y_true,epoch_y_pred,K = int(len(epoch_y_pred)*0.025)+1)) print ("If you are seeing this, it's time to backup your notebook. No, really, 'tis too easy to mess up everything without noticing. ") Explanation: Tweaking guide batch_size - how many samples are processed per function call optimization gets slower, but more stable, as you increase it. May consider increasing it halfway through training minibatches_per_epoch - max amount of minibatches per epoch Does not affect training. Lesser value means more frequent and less stable printing Setting it to less than 10 is only meaningfull if you want to make sure your NN does not break down after one epoch n_epochs - total amount of epochs to train for n_epochs = 10**10 and manual interrupting is still an option Tips: With small minibatches_per_epoch, network quality may jump around 0.5 for several epochs AUC is the most stable of all three metrics Average Precision at top 2.5% (APatK) - is the least stable. 
If batch_size*minibatches_per_epoch < 10k, it behaves as a uniform random variable. Plotting metrics over training time may be a good way to analyze which architectures work better. Once you are sure your network aint gonna crash, it's worth letting it train for a few hours of an average laptop's time to see it's true potential End of explanation #evaluation epoch_y_true = [] epoch_y_pred = [] b_c = b_loss = 0 for j, (b_desc,b_title,b_cat, b_y) in enumerate( iterate_minibatches(desc_ts,title_ts,nontext_ts,target_ts,batchsize=batch_size,shuffle=True)): loss,pred_probas = eval_fun(b_desc,b_title,b_cat,b_y) b_loss += loss b_c +=1 epoch_y_true.append(b_y) epoch_y_pred.append(pred_probas) epoch_y_true = np.concatenate(epoch_y_true) epoch_y_pred = np.concatenate(epoch_y_pred) final_accuracy = accuracy_score(epoch_y_true,epoch_y_pred>0) final_auc = roc_auc_score(epoch_y_true,epoch_y_pred) final_apatk = APatK(epoch_y_true,epoch_y_pred,K = int(len(epoch_y_pred)*0.025)+1) print ("Scores:") print ('\tloss:',b_loss/b_c) print ('\tacc:',final_accuracy) print ('\tauc:',final_auc) print ('\tap@k:',final_apatk) score(final_accuracy,final_auc,final_apatk) Explanation: Final evaluation Evaluate network over the entire test set End of explanation
2,038
Given the following text description, write Python code to implement the functionality described below step by step Description: Decorators Introduction A decorator is the name used for a software design pattern. Decorators dynamically alter the functionality of a function, method, or class without having to directly use subclasses or change the source code of the function being decorated. Python decorator is a specific change to the Python syntax that allows us to more conveniently alter functions and methods (and possibly classes in a future version). This supports more readable applications of the DecoratorPattern but also other uses as well. Step1: Function Decorators A function decorator is applied to a function definition by placing it on the line before that function definition begins Step2: !!! Order Matters !!! Step3: Decorators with arguments Using Functions We can create a function which can take an argument and Step4: Class Decorators Bound methods Unless you tell it not to, Python will create what is called a bound method when a function is an attribute of a class and you access it via an instance of a class. This may sound complicated but it does exactly what you want. Step5: staticmethod() A static method is a way of suppressing the creation of a bound method when accessing a function. Step6: When we call a static method we don’t get any additional arguments. Step7: classmethod A class method is like a bound method except that the class of the instance is passed as an argument rather than the instance itself.
Python Code: def bread(test_funct): def hyderabad(): print("</''''''\>") test_funct() print("<\______/>") return hyderabad def ingredients(test_funct): def chennai(): print("#tomatoes#") test_funct() print("~salad~") return chennai def cheese(food="--Say Cheese--"): print(food) ch = bread(test_funct=cheese) ch() inn = bread(ingredients(cheese)) inn() Explanation: Decorators Introduction A decorator is the name used for a software design pattern. Decorators dynamically alter the functionality of a function, method, or class without having to directly use subclasses or change the source code of the function being decorated. Python decorator is a specific change to the Python syntax that allows us to more conveniently alter functions and methods (and possibly classes in a future version). This supports more readable applications of the DecoratorPattern but also other uses as well. End of explanation @bread @ingredients def sandwich(food="--Say Cheese--"): print(food) sandwich() Explanation: Function Decorators A function decorator is applied to a function definition by placing it on the line before that function definition begins End of explanation @ingredients @bread def sandwich(food="--Say Cheese--"): print(food) sandwich() @bread @ingredients def hotdog(food="Jam"): print(food) hotdog() def diet_sandwitch(inner_func): def inner(): print("salad") return inner @ingredients @diet_sandwitch def sandwich(food="--Say Cheese--"): print(food) sandwich() Explanation: !!! Order Matters !!! End of explanation def Names(test_funct): def inner(): print("{Hello}") print("\tA-Priya") print("\tManish Gupta") print("\tNeha", end="\n\t") test_funct() print("(/Hello}") return inner @Names def print_AShanti(): print("A-Shanti") print_AShanti() Explanation: Decorators with arguments Using Functions We can create a function which can take an argument and End of explanation class A(object): def method(*argv): return argv a = A() a.method a.method('an arg') Explanation: Class Decorators Bound methods Unless you tell it not to, Python will create what is called a bound method when a function is an attribute of a class and you access it via an instance of a class. This may sound complicated but it does exactly what you want. End of explanation class A(object): @staticmethod def method(*argv): return argv a = A() a.method Explanation: staticmethod() A static method is a way of suppressing the creation of a bound method when accessing a function. End of explanation a.method('an arg') Explanation: When we call a static method we don’t get any additional arguments. End of explanation class A(object): @classmethod def method(*argv): return argv a = A() a.method a.method('an arg') def test(strg): print("Name: ", strg) def hello(func, name): print("Ja") func(name) hello(test, "Mayank") class B(object): @classmethod def method(*argv): return argv a = B() a.method() Explanation: classmethod A class method is like a bound method except that the class of the instance is passed as an argument rather than the instance itself. End of explanation
2,039
Given the following text description, write Python code to implement the functionality described below step by step Description: PyBroom Example - Simple This notebook is part of pybroom. This notebook shows the simplest usage of pybroom when performing a curve fit of a single dataset. Possible applications are only hinted. For a more complex (and interesting!) example using multiple datasets see pybroom-example-multi-datasets. Step1: Create Noisy Data Step2: Model Fitting Step3: Fit result from an lmfit Model can be inspected with with fit_report or params.pretty_print() Step4: These methods a re convenient but extracting the data from the lmfit object requires some work and the knowledge of lmfit object structure. pybroom comes to help, extracting data from fit results and returning pandas DataFrame in tidy format that can be much more easily manipulated, filtered and plotted. Glance Glancing at the fit results (dropping some verbose columns) Step5: The glance function returns a DataFrame with one row per fit-result object. Application Idea If you fit N models to the same dataset you can compare statistics such as reduced-$\chi^2$ Or, fitting several with several methods (and datasets) you can study the convergence properties using reduced-$\chi^2$, number of function evaluation and success rate. Tidy Tidy fit results for all the parameters Step6: The tidy function returns one row for each parameter. Step7: Augment Tidy dataframe with data function of the independent variable ('x'). Columns include the data being fitted, best fit, best fit components, residuals, etc. Step8: The augment function returns one row for each data point.
Python Code: import numpy as np from numpy import sqrt, pi, exp, linspace from lmfit import Model import matplotlib.pyplot as plt %matplotlib inline %config InlineBackend.figure_format='retina' # for hi-dpi displays import lmfit print('lmfit: %s' % lmfit.__version__) import pybroom as br Explanation: PyBroom Example - Simple This notebook is part of pybroom. This notebook shows the simplest usage of pybroom when performing a curve fit of a single dataset. Possible applications are only hinted. For a more complex (and interesting!) example using multiple datasets see pybroom-example-multi-datasets. End of explanation x = np.linspace(-10, 10, 101) peak1 = lmfit.models.GaussianModel(prefix='p1_') peak2 = lmfit.models.GaussianModel(prefix='p2_') model = peak1 + peak2 params = model.make_params(p1_amplitude=1, p2_amplitude=1, p1_sigma=1, p2_sigma=1) y_data = model.eval(x=x, p1_center=-1, p2_center=2, p1_sigma=0.5, p2_sigma=1, p1_amplitude=1, p2_amplitude=2) y_data.shape y_data += np.random.randn(*y_data.shape)/10 plt.plot(x, y_data) Explanation: Create Noisy Data End of explanation params = model.make_params(p1_center=0, p2_center=3, p1_sigma=0.5, p2_sigma=1, p1_amplitude=1, p2_amplitude=2) result = model.fit(y_data, x=x, params=params) Explanation: Model Fitting End of explanation print(result.fit_report()) result.params.pretty_print() Explanation: Fit result from an lmfit Model can be inspected with with fit_report or params.pretty_print(): End of explanation dg = br.glance(result) dg.drop('model', 1).drop('message', 1) Explanation: These methods a re convenient but extracting the data from the lmfit object requires some work and the knowledge of lmfit object structure. pybroom comes to help, extracting data from fit results and returning pandas DataFrame in tidy format that can be much more easily manipulated, filtered and plotted. Glance Glancing at the fit results (dropping some verbose columns): End of explanation dt = br.tidy(result) dt Explanation: The glance function returns a DataFrame with one row per fit-result object. Application Idea If you fit N models to the same dataset you can compare statistics such as reduced-$\chi^2$ Or, fitting several with several methods (and datasets) you can study the convergence properties using reduced-$\chi^2$, number of function evaluation and success rate. Tidy Tidy fit results for all the parameters: End of explanation dt.loc[dt.name == 'p1_center'] Explanation: The tidy function returns one row for each parameter. End of explanation da = br.augment(result) da.head() Explanation: Augment Tidy dataframe with data function of the independent variable ('x'). Columns include the data being fitted, best fit, best fit components, residuals, etc. End of explanation d = br.augment(result) fig, ax = plt.subplots(2, 1, figsize=(7, 8)) ax[1].plot('x', 'data', data=d, marker='o', ls='None') ax[1].plot('x', "Model(gaussian, prefix='p1_')", data=d, lw=2, ls='--') ax[1].plot('x', "Model(gaussian, prefix='p2_')", data=d, lw=2, ls='--') ax[1].plot('x', 'best_fit', data=d, lw=2) ax[0].plot('x', 'residual', data=d); Explanation: The augment function returns one row for each data point. End of explanation
2,040
Given the following text description, write Python code to implement the functionality described below step by step Description: Table of Contents 1. Demonstration of the numpy.polynomial package 1.1 And especially a small hand-made pretty printing function for Polynomial objects 1.2 First goal Step1: And we can then define the monome $X$ as P([0, 1]), defined by the list of its coefficients in the canonical basis $(X_i)_{i \in \mathbb{N}}$ Step2: The main issue with this output (either poly([ 0. 1.]) or Polynomial([ 0., 1.], [-1, 1], [-1, 1])) is its lack of sexyness Step4: 1.2 First goal Step5: We can check its behavior on some small polynomials Step6: We can create a more complicated polynomial $Q_4(X) = (1 + 2 X + 17 X^3) ^ {12}$ and check that it is also nicely printed Step8: 1.3 Second goal Step9: We can quickly try the same examples as before Step10: But we want the $\LaTeX{}$ code to be pretty-printed by IPython, not just displayed like this. For this, the internal function Latex from the module IPython.display is required Step11: Allright! It starts to look like what we wanted! Let's try with a bigger polynomial, as we did before Step12: Way nicer! Yay! 1.4 A bonus for the end Step13: Once our special printer has been loaded, all polynomials will be represented by their mathematical form instead (as $\LaTeX{}$ code displayed with MathJax) Step14: One a last example
Python Code: from numpy.polynomial import Polynomial as P Explanation: Table of Contents 1. Demonstration of the numpy.polynomial package 1.1 And especially a small hand-made pretty printing function for Polynomial objects 1.2 First goal: pretty print in ASCII text 1.3 Second goal: pretty-print in $\LaTeX{}$ code 1.4 A bonus for the end: add this pretty-printer as the default one in IPython: 1. Demonstration of the numpy.polynomial package 1.1 And especially a small hand-made pretty printing function for Polynomial objects For both Python 2 and Python 3, numpy has a very nice module to work with polynomials: numpy.polynomial. If you are not familiar with it, I highly recommend you to read this tutorial. Note: this small tutorial was inspired by this question asked on StackOverflow (and by my answer). This Jupyter Notebook was written by Lilian Besson. Now, let assume you already know everything about Python, and the numpy.polynomial package. So you know that it should be imported like this: End of explanation X = P([0, 1]) print("We defined the monome X =", X) X Explanation: And we can then define the monome $X$ as P([0, 1]), defined by the list of its coefficients in the canonical basis $(X_i)_{i \in \mathbb{N}}$: End of explanation Q = 1 + 17 * X ** 3 print("Q(X) =", Q) Q Explanation: The main issue with this output (either poly([ 0. 1.]) or Polynomial([ 0., 1.], [-1, 1], [-1, 1])) is its lack of sexyness: it gives the required information (coefficients, domain, window etc, see here for more details) but it is too far from the mathematical notation. If we define $Q(X) = 1 + 17 X^3$, we would like Python (or IPython, or in this case, the Jupyter Notebook) to display this polynomial nicely, either as ASCII text: 1 + 17 * X**3 (valid Python code), or as a nice $\LaTeX{}$ code: 1 + 17 X^3. End of explanation def prettyprintPolynomial(p): Small function to print nicely the polynomial p as we write it in maths, in ASCII text. coefs = p.coef # List of coefficient, sorted by increasing degrees res = "" # The resulting string for i, a in enumerate(coefs): if int(a) == a: # Remove the trailing .0 a = int(a) if i == 0: # First coefficient, no need for X if a > 0: res += "{a} + ".format(a=a) elif a < 0: # Negative a is printed like (a) res += "({a}) + ".format(a=a) # a = 0 is not displayed elif i == 1: # Second coefficient, only X and not X**i if a == 1: # a = 1 does not need to be displayed res += "X + " elif a > 0: res += "{a} * X + ".format(a=a) elif a < 0: res += "({a}) * X + ".format(a=a) else: if a == 1: res += "X**{i} + ".format(i=i) elif a > 0: res += "{a} * X**{i} + ".format(a=a, i=i) elif a < 0: res += "({a}) * X**{i} + ".format(a=a, i=i) return res[:-3] if res else "" Explanation: 1.2 First goal: pretty print in ASCII text Our first task will be to implement a small function, or an overload of the numpy.polynomial.Polynomial class to be able to pretty-print a Polynomial object nicely. Note: more details, along with a documentation, some examples and a fully pylint-compatible code are available on this Bitbucket snippet: bitbucket.org/snippets/lbesson/j6dpz. The function prettyprintPolynomial defined below implements a natural strategy to print a polynomial: it prints the coefficients, as a_i * X**i, in increasing order. This function takes care of all the special cases: it removes a trailing .0 if one coefficient is an integer, it displays the coefficient between ( and ) if it is negative, it only displays the non-zero coefficients, if one coefficient is 1, no need to display it. 
End of explanation print("X =", prettyprintPolynomial(X)) print("Q(X) =", prettyprintPolynomial(Q)) Q3 = -1 - 2*X - 17*X**3 print("Q_3(X) =", prettyprintPolynomial(Q3)) print("- Q_3(X) =", prettyprintPolynomial(-Q3)) Explanation: We can check its behavior on some small polynomials: End of explanation Q4 = (1 + 2*X + 17*X**3) ** 12 print("Q_4(X) =", prettyprintPolynomial(Q4)) Explanation: We can create a more complicated polynomial $Q_4(X) = (1 + 2 X + 17 X^3) ^ {12}$ and check that it is also nicely printed: End of explanation def Polynomial_to_LaTeX(p): Small function to print nicely the polynomial p as we write it in maths, in LaTeX code. coefs = p.coef # List of coefficient, sorted by increasing degrees res = "" # The resulting string for i, a in enumerate(coefs): if int(a) == a: # Remove the trailing .0 a = int(a) if i == 0: # First coefficient, no need for X if a > 0: res += "{a} + ".format(a=a) elif a < 0: # Negative a is printed like (a) res += "({a}) + ".format(a=a) # a = 0 is not displayed elif i == 1: # Second coefficient, only X and not X**i if a == 1: # a = 1 does not need to be displayed res += "X + " elif a > 0: res += "{a} \;X + ".format(a=a) elif a < 0: res += "({a}) \;X + ".format(a=a) else: if a == 1: # A special care needs to be addressed to put the exponent in {..} in LaTeX res += "X^{i} + ".format(i="{%d}" % i) elif a > 0: res += "{a} \;X^{i} + ".format(a=a, i="{%d}" % i) elif a < 0: res += "({a}) \;X^{i} + ".format(a=a, i="{%d}" % i) return "$" + res[:-3] + "$" if res else "" Explanation: 1.3 Second goal: pretty-print in $\LaTeX{}$ code We will simply modify the function prettyprintPolynomial to use $\LaTeX{}$ code instead of ASCII text: the string has to be between $, the multiplications are without symbols (e.g., $17 X$ for 17 * X), and the powers are with the ^ symbol instead of ** : End of explanation print("X =", Polynomial_to_LaTeX(X)) print("Q(X) =", Polynomial_to_LaTeX(Q)) Explanation: We can quickly try the same examples as before: End of explanation from IPython.display import Latex print("X =") Latex(Polynomial_to_LaTeX(X)) print("Q(X) =") Latex(Polynomial_to_LaTeX(Q)) Explanation: But we want the $\LaTeX{}$ code to be pretty-printed by IPython, not just displayed like this. For this, the internal function Latex from the module IPython.display is required: End of explanation Q4 = (1 + 2*X + 17*X**3) ** 12 print("Q_4(X) =") Latex(Polynomial_to_LaTeX(Q4)) Explanation: Allright! It starts to look like what we wanted! Let's try with a bigger polynomial, as we did before: End of explanation ip = get_ipython() latex_formatter = ip.display_formatter.formatters['text/latex'] latex_formatter.for_type_by_name('numpy.polynomial.polynomial', 'Polynomial', Polynomial_to_LaTeX) Explanation: Way nicer! Yay! 1.4 A bonus for the end: add this pretty-printer as the default one in IPython: This manipulation is showed in IPython's documentation. But we can configure IPython to do this automatically for us as follows. We hook into the IPython display system and instruct it to use Polynomial_to_LaTeX for the latex mimetype, when encountering objects of the Polynomial type defined in the numpy.polynomial.polynomial module: End of explanation X Explanation: Once our special printer has been loaded, all polynomials will be represented by their mathematical form instead (as $\LaTeX{}$ code displayed with MathJax): End of explanation P = ((1 + X**2)**16)**16 % (1 - X**16) print("str(P) =", str(P)) print("repr(P) =", repr(P)) P Explanation: One a last example: End of explanation
2,041
Given the following text description, write Python code to implement the functionality described below step by step Description: <img width=400 src="http Step1: More beautiful code through vectorisation pure python with list comprehension Step2: Using numpy Step3: Finding the point with the smallest distance Step4: Small example timings Step5: Basic math Step6: Attention Step7: Most normal python functions with basic operators like *, +, ** simply work because of operator overloading Step8: Useful properties Step9: Arbitrary dimension arrays Step10: Reduction operations Numpy has many operations, which reduce dimensionality of arrays Step11: Standard Deviation Step12: Standard error of the mean Step13: Sample Standard Deviation Step14: Most of the numpy functions are also methods of the array Step15: Difference between neighbor elements Step16: Reductions on multi-dimensional arrays Step17: Exercise 1 Write a function that calculates the analytical linear regression for a set of x and y values. Reminder Step18: Helpers for creating arrays Step19: Numpy Indexing Element access Slicing Step20: Changing array content Step21: Using slices on both sides Step22: Transposing inverts the order of the dimensions Step23: Masks A boolean array can be used to select only the element where it contains True. Very powerfull tool to select certain elements that fullfill a certain condition Step24: Random numbers numpy has a larger number of distributions builtin Step25: Calculating pi through monte-carlo simulation We draw random numbers in a square with length of the sides of 2 We count the points which are inside the circle of radius 1 The area of the square is $$ A_\mathrm{square} = a^2 = 4 $$ The area of the circle is $$ A_\mathrm{circle} = \pi r^2 = \pi $$ With $$ \frac{n_\mathrm{circle}}{n_\mathrm{square}} = \frac{A_\mathrm{circle}}{A_\mathrm{square}} $$ We can calculate pi Step26: Exercise Draw 10000 gaussian random numbers with mean of $\mu = 2$ and standard deviation of $\sigma = 3$ Calculate the mean and the standard deviation of the sample What percentage of the numbers are outside of $[\mu - \sigma, \mu + \sigma]$? How many of the numbers are $> 0$? Calculate the mean and the standard deviation of all numbers ${} > 0$ Step27: Exercise Monte-Carlo uncertainty propagation The hubble constant as measured by PLANCK is $$ H_0 = (67.74 \pm 0.47)\,\frac{\mathrm{km}}{\mathrm{s}\cdot\mathrm{Mpc}} $$ Estimate mean and the uncertainty of the velocity of a galaxy which is measured to be $(500 \pm 100)\,\mathrm{Mpc}$ away using monte carlo methods Step28: Simple io functions Step29: Problems Everything is a float Way larger file than necessary because of too much digits for floats No column names Numpy recarrays Numpy recarrays can store columns of different types Rows are addressed by integer index Columns are addressed by strings Solution for our io problem → Column names, different types Step30: Linear algebra Numpy offers a lot of linear algebra functionality, mostly wrapping LAPACK Step31: Numpy matrices Numpy also has a matrix class, with operator overloading suited for matrices
Python Code: import numpy as np Explanation: <img width=400 src="http://www.numpy.org/_static/numpy_logo.png" alt="Numpy"/> Why do we need numpy? You may have heard "Python is slow", this is true when it concerns looping over many small python objects Python is dynamically typed and everything is an object, even an int. There are no primitive types. Numpy's main feature is the ndarray class, a fixed length, homogeniously typed array class. Numpy implements a lot of functionality in fast c, cython and fortran code to work on these arrays python with vectorized operations using numpy can be blazingly fast See: Python is not C But the most important reason: More beautiful code End of explanation voltages = [10.1, 15.1, 9.5] currents = [1.2, 2.4, 5.2] resistances = [U * I for U, I in zip(voltages, currents)] resistances Explanation: More beautiful code through vectorisation pure python with list comprehension End of explanation U = np.array([10.1, 15.1, 9.5]) I = np.array([1.2, 2.4, 5.2]) R = U * I R Explanation: Using numpy End of explanation import math def euclidean_distance(p1, p2): return math.sqrt((p1[0] - p2[0])**2 + (p1[1] - p2[1])**2) point = (1, 2) points = [(3, 2), (4, 2), (3, 0)] min_distance = float('inf') for other in points: distance = euclidean_distance(point, other) if distance < min_distance: closest = other min_distance = distance print(min_distance, closest) point = np.array([1, 2]) points = np.array([(3, 2), (4, 2), (3, 0)]) distance = np.linalg.norm(point - points, axis=1) idx = np.argmin(distance) print(distance[idx], points[idx]) Explanation: Finding the point with the smallest distance End of explanation import math def var(data): ''' knuth's algorithm for one-pass calculation of the variance Avoids rounding errors of large numbers when doing the naive approach of `sum(v**2 for v in data) - sum(v)**2` ''' n = 0 mean = 0.0 m2 = 0.0 if len(data) < 2: return float('nan') for value in data: n += 1 delta = value - mean mean += delta / n delta2 = value - mean m2 += delta * delta2 return m2 / n list(range(10)) %%timeit l = list(range(1000)) var(l) %%timeit a = np.arange(1000) # array with numbers 0,...,999 np.var(a) Explanation: Small example timings End of explanation # create a numpy array from a python a python list a = np.array([1.0, 3.5, 7.1, 4, 6]) 2 * a a**2 a**a np.cos(a) Explanation: Basic math: vectorized Operations on numpy arrays work vectorized, element-by-element Lose your loops End of explanation math.cos(a) Explanation: Attention: You need the cos from numpy! 
End of explanation def poly(x): return x + 2 * x**2 - x**3 poly(a) poly(np.e), poly(np.pi) Explanation: Most normal python functions with basic operators like *, +, ** simply work because of operator overloading: End of explanation len(a) a.shape a.dtype a.ndim Explanation: Useful properties End of explanation # two-dimensional array y = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) y + y ## since python 3.5 @ is matrix product y @ y # Broadcasting, changing array dimensions to fit the larger one y + np.array([1, 2, 3]) Explanation: Arbitrary dimension arrays End of explanation x = np.random.normal(0, 1, 1000) np.sum(x) np.prod(x) np.mean(x) Explanation: Reduction operations Numpy has many operations, which reduce dimensionality of arrays End of explanation np.std(x) Explanation: Standard Deviation End of explanation np.std(x, ddof=1) / np.sqrt(len(x)) Explanation: Standard error of the mean End of explanation np.std(x, ddof=1) Explanation: Sample Standard Deviation End of explanation x.mean(), x.std(), x.max(), x.min() Explanation: Most of the numpy functions are also methods of the array End of explanation z = np.arange(10)**2 diff_z = np.diff(z) print(z) print(diff_z) Explanation: Difference between neighbor elements End of explanation array2d = np.arange(20).reshape(4, 5) array2d np.sum(array2d, axis=0) np.var(array2d, axis=1) Explanation: Reductions on multi-dimensional arrays End of explanation # %load 04_01_numpy_solutions/exercise_linear.py x = np.linspace(0, 1, 50) y = 5 * np.random.normal(x, 0.1) + 2 # see section on random numbers later a, b = linear_regression(x, y) a, b Explanation: Exercise 1 Write a function that calculates the analytical linear regression for a set of x and y values. Reminder: $$ f(x) = a \cdot x + b$$ with $$ \hat{a} = \frac{\mathrm{Cov}(x, y)}{\mathrm{Var}(x)} \ \hat{b} = \bar{y} - \hat{a} \cdot \bar{x} $$ End of explanation np.zeros(10) np.ones((5, 2)) np.full(5, np.nan) np.empty(10) # attention, uninitialised memory, be carefull np.linspace(-2, 1, 1) # like range() for arrays: np.arange(5) np.arange(2, 10, 2) np.logspace(-4, 5, 10) np.logspace(1, 4, 4, base=2) Explanation: Helpers for creating arrays End of explanation x = np.arange(0, 10) # like lists: x # like lists: x[0] # all elements with indices ≥1 and <4: x[1:4] # negative indices count from the end x[-1], x[-2] # combination: x[3:-2] # step size x[::2] # trick for reversal: negative step x[::-1] y = np.array([x, x + 10, x + 20, x + 30]) y # only one index ⇒ one-dimensional array y[2] # other axis: (: alone means the whole axis) y[:, 3] # inspecting the number of elements per axis: y[:, 1:3].shape Explanation: Numpy Indexing Element access Slicing End of explanation y y[:, 3] = 0 y Explanation: Changing array content End of explanation y[:,0] = x[3:7] y Explanation: Using slices on both sides End of explanation y y.shape y.T y.T.shape Explanation: Transposing inverts the order of the dimensions End of explanation a = np.linspace(0, 2, 11) b = np.random.normal(0, 1, 11) print(b >= 0) print(a[b >= 0]) a[[0, 2]] = np.nan a a[np.isnan(a)] = -1 a Explanation: Masks A boolean array can be used to select only the element where it contains True. 
Very powerfull tool to select certain elements that fullfill a certain condition End of explanation np.random.uniform(-1, 1, 10) np.random.normal(0, 5, (2, 10)) np.random.poisson(5, 2) Explanation: Random numbers numpy has a larger number of distributions builtin End of explanation n_square = 10000000 x = np.random.uniform(-1, 1, n_square) y = np.random.uniform(-1, 1, n_square) radius = np.sqrt(x**2 + y**2) n_circle = np.sum(radius <= 1.0) print(4 * n_circle / n_square) Explanation: Calculating pi through monte-carlo simulation We draw random numbers in a square with length of the sides of 2 We count the points which are inside the circle of radius 1 The area of the square is $$ A_\mathrm{square} = a^2 = 4 $$ The area of the circle is $$ A_\mathrm{circle} = \pi r^2 = \pi $$ With $$ \frac{n_\mathrm{circle}}{n_\mathrm{square}} = \frac{A_\mathrm{circle}}{A_\mathrm{square}} $$ We can calculate pi: $$ \pi = 4 \frac{n_\mathrm{circle}}{n_\mathrm{square}} $$ End of explanation # %load 04_01_numpy_solutions/exercise_gaussian.py Explanation: Exercise Draw 10000 gaussian random numbers with mean of $\mu = 2$ and standard deviation of $\sigma = 3$ Calculate the mean and the standard deviation of the sample What percentage of the numbers are outside of $[\mu - \sigma, \mu + \sigma]$? How many of the numbers are $> 0$? Calculate the mean and the standard deviation of all numbers ${} > 0$ End of explanation # %load 04_01_numpy_solutions/exercise_hubble.py Explanation: Exercise Monte-Carlo uncertainty propagation The hubble constant as measured by PLANCK is $$ H_0 = (67.74 \pm 0.47)\,\frac{\mathrm{km}}{\mathrm{s}\cdot\mathrm{Mpc}} $$ Estimate mean and the uncertainty of the velocity of a galaxy which is measured to be $(500 \pm 100)\,\mathrm{Mpc}$ away using monte carlo methods End of explanation idx = np.arange(100) x = np.random.normal(0, 1e5, 100) y = np.random.normal(0, 1, 100) n = np.random.poisson(20, 100) idx.shape, x.shape, y.shape, n.shape np.savetxt( 'data.txt', np.column_stack([idx, x, y, n]), ) !head data.txt # Load back the data, unpack=True is needed to read the data columnwise and not row-wise idx, x, y, n = np.genfromtxt('data.txt', unpack=True) idx.dtype, x.dtype Explanation: Simple io functions End of explanation # for more options on formatting see # https://docs.scipy.org/doc/numpy/reference/generated/numpy.savetxt.html data = np.savetxt( 'data.csv', np.column_stack([idx, x, y, n]), delimiter=',', # true csv file header=','.join(['idx', 'x', 'y', 'n']), fmt=['%d', '%.4g', '%.4g', '%d'], # One formatter for each column ) !head data.csv data = np.genfromtxt( 'data.csv', names=True, # load column names from first row dtype=None, # Automagically determince best data type for each column delimiter=',', ) data[:10] data[0] data['n'] data.dtype Explanation: Problems Everything is a float Way larger file than necessary because of too much digits for floats No column names Numpy recarrays Numpy recarrays can store columns of different types Rows are addressed by integer index Columns are addressed by strings Solution for our io problem → Column names, different types End of explanation # symmetric matrix, use eigh # If not symmetric, use eig mat = np.array([ [4, 2, 0], [2, 1, -3], [0, -3, 4] ]) eig_vals, eig_vecs = np.linalg.eigh(mat) eig_vals, eig_vecs np.linalg.inv(mat) Explanation: Linear algebra Numpy offers a lot of linear algebra functionality, mostly wrapping LAPACK End of explanation mat = np.matrix(mat) mat.T mat ** 2 mat * 5 mat.I mat * np.matrix([1, 2, 3]).T Explanation: Numpy 
matrices Numpy also has a matrix class, with operator overloading suited for matrices End of explanation
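Exercise 1 in this section loads its reference solution from a file that is not shown here (# %load 04_01_numpy_solutions/exercise_linear.py), so the following is only a sketch of how such a linear_regression helper could look, not the original solution file. The name and signature are taken from the call linear_regression(x, y) used above; the body simply applies the covariance/variance formulas quoted in the exercise.

import numpy as np

def linear_regression(x, y):
    # Analytical least-squares fit of y = a * x + b
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    a = np.cov(x, y, ddof=0)[0, 1] / np.var(x)   # a_hat = Cov(x, y) / Var(x)
    b = y.mean() - a * x.mean()                  # b_hat = mean(y) - a_hat * mean(x)
    return a, b

x = np.linspace(0, 1, 50)
y = 5 * np.random.normal(x, 0.1) + 2
a, b = linear_regression(x, y)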
2,042
Given the following text description, write Python code to implement the functionality described below step by step Description: Example 2. Statics Step1: A block of weight $W = 200\; kgf$ is to be held in place by a pair of cables. Determine the tension in each cable, given that the position coordinates of points A, B and C are A(0, -20 cm, 40 cm), B(-40 cm, 50 cm, 0 cm) and C(45 cm, 40 cm, 0 cm), respectively. Assume there is no friction between the ramp and the counterweight. Step2: If we call $\vec{T_{B}}$ and $\vec{T_{C}}$ the tensions in cables AB and AC (the unknowns), $\vec{W}$ the weight of the block and $\vec{N}$ the vector normal to the plane of the ramp, then writing the equilibrium of the block gives Step3: and for cable AC Step4: By analogy, the normal force N can be written as Step5: What remains is to solve the system of equilibrium equations. To do so we write it in the form Step6: Solving the system Step7: Note. Problem taken from Mecánica Vectorial para Ingenieros: Estática (Vector Mechanics for Engineers: Statics), 6th edition, Beer, Ferdinand; Johnston, Russell.
Python Code: from IPython.display import Image,Latex Explanation: Example 2. Statics End of explanation Image(filename='FIGURES/Rampa.png',width=250) Explanation: A block of weight $W = 200\; kgf$ is to be held in place by a pair of cables. Determine the tension in each cable, given that the position coordinates of points A, B and C are A(0, -20 cm, 40 cm), B(-40 cm, 50 cm, 0 cm) and C(45 cm, 40 cm, 0 cm), respectively. Assume there is no friction between the ramp and the counterweight. End of explanation from numpy import array, sqrt, cross, dot, arctan, cos, sin from scipy.linalg import inv R_AB = [-40., 70., -40.] MR_AB = sqrt(dot (R_AB,R_AB)) nB = R_AB / MR_AB print 'Magnitud =', MR_AB, ',', 'nB =', nB.round(2) Explanation: If we call $\vec{T_{B}}$ and $\vec{T_{C}}$ the tensions in cables AB and AC (the unknowns), $\vec{W}$ the weight of the block and $\vec{N}$ the vector normal to the plane of the ramp, then the equilibrium of the block gives: $$\begin{align} &\sum \; F_{X} = T_{BX} + T_{CX} = 0\\ &\sum \; F_{Y} = - W + T_{BY} + T_{CY} + N_{Y} = 0\\ &\sum \; F_{Z} = T_{BZ} + T_{CZ} + N_{Z} = 0 \end{align}$$ On the other hand, each tension vector can be written as $\vec{T} = T \; \hat{n}$, where $\hat{n}$ is a unit direction vector. For the tension in cable AB this gives: $\vec{T_B} = T_B \; \hat{n_B} = T_B \;\frac{\vec{R_{AB}}}{MR_{AB}}$ End of explanation R_AC = [45., 60., -40.] MR_AC = sqrt(dot (R_AC,R_AC)) nC = R_AC / MR_AC print 'Magnitud =', MR_AC, ',', 'nC =', nC.round(2) Explanation: and for cable AC: $\vec{T_C} = T_C \; \hat{n_C} = T_C \;\frac{\vec{R_{AC}}}{MR_{AC}}$ End of explanation Np = [0.0, cos(arctan(40./80.)),sin(arctan(40./80.))] Explanation: By analogy, the normal force N can be written as $\vec{N} = N \; \hat{n_p}$, where the components of $\hat{n_p}$ are determined by the inclination of the ramp. This gives the following. End of explanation A = array([[nB[0], nC[0], Np[0]], [nB[1],nC[1], Np[1]], [nB[2], nC[2], Np[2]]]) B = [0.0, 200., 0] print 'A=', A.round(2) print print 'B=', B Explanation: What remains is to solve the system of equilibrium equations. To do so we write it in the form: $$AX = B$$ End of explanation X = dot(inv(A),B) print '[TB, TC, N]=', X.round(2) Explanation: Solving the system End of explanation from IPython.core.display import HTML def css_styling(): styles = open('./custom_barba.css', 'r').read() return HTML(styles) css_styling() Explanation: Note. Problem taken from Mecánica Vectorial para Ingenieros: Estática (Vector Mechanics for Engineers: Statics), 6th edition, Beer, Ferdinand; Johnston, Russell. End of explanation
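A small side note on the final step above: for a 3x3 system like this one, solving A X = B directly is generally preferred over forming the inverse explicitly. The sketch below assumes the A and B built in the cells above are still defined; it should give the same tensions.

import numpy as np

# Equivalent to X = dot(inv(A), B), but without computing the inverse
X = np.linalg.solve(A, B)
print('[TB, TC, N] =', X.round(2))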
2,043
Given the following text description, write Python code to implement the functionality described below step by step Description: Notebook accompanying pbpython article - Pandas Grouper and Agg Functions Explained Step1: Read in the sample sales file then convert the date column to a proper date time column Step2: Example showing how resample can be used along with set_index Step3: A more complex example with a groupby Step4: A simpler example using pd.Grouper Step5: Some more examples using various off set alisases - http Step6: Now show how to use the new .agg function First, how to get summary stats without agg Step7: Using .agg for sums and means across multiple columns Step8: Passing a dictionary containing different operations per column Step9: Using custom functions Step10: Clean up the naming in the output by defining the name for get_max Step11: Using an OrderedDictionary to maintain column order
Python Code: import pandas as pd import collections Explanation: Notebook accompanying pbpython article - Pandas Grouper and Agg Functions Explained End of explanation df = pd.read_excel("https://github.com/chris1610/pbpython/blob/master/data/sample-salesv3.xlsx?raw=True") df["date"] = pd.to_datetime(df['date']) df.head() df.dtypes Explanation: Read in the sample sales file then convert the date column to a proper date time column End of explanation df.set_index('date').resample('M')["ext price"].sum() Explanation: Example showing how resample can be used along with set_index End of explanation df.set_index('date').groupby('name')["ext price"].resample("M").sum().head(20) Explanation: A more complex example with a groupby End of explanation df.groupby(['name', pd.Grouper(key='date', freq='M')])['ext price'].sum().head(20) df.groupby(['name', pd.Grouper(key='date', freq='A-DEC')])['ext price'].sum() # This works but is kind of slow and probably not that useful for this data set #df.groupby(['name', pd.Grouper(key='date', freq='60s')])['ext price'].sum() Explanation: A simpler example using pd.Grouper End of explanation df.groupby(['name', pd.Grouper(key='date', freq='W-MON')])['ext price'].sum() df.groupby(['name', 'sku', pd.Grouper(key='date', freq='A-DEC')])['ext price'].sum() Explanation: Some more examples using various off set alisases - http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases End of explanation df[["ext price", "quantity"]].sum() df[["ext price", "quantity"]].mean() Explanation: Now show how to use the new .agg function First, how to get summary stats without agg End of explanation df[["ext price", "quantity", "unit price"]].agg(['sum', 'mean']) Explanation: Using .agg for sums and means across multiple columns End of explanation df.agg({'ext price': ['sum', 'mean'], 'quantity': ['sum', 'mean'], 'unit price': ['mean']}) Explanation: Passing a dictionary containing different operations per column End of explanation get_max = lambda x: x.value_counts(dropna=False).index[0] df.agg({'ext price': ['sum', 'mean'], 'quantity': ['sum', 'mean'], 'unit price': ['mean'], 'sku': [get_max]}) Explanation: Using custom functions End of explanation get_max.__name__ = "most frequent" df.agg({'ext price': ['sum', 'mean'], 'quantity': ['sum', 'mean'], 'unit price': ['mean'], 'sku': [get_max]}) Explanation: Clean up the naming in the output by defining the name for get_max End of explanation f = collections.OrderedDict([('ext price', ['sum', 'mean']), ('quantity', ['sum', 'mean']), ('sku', [get_max])]) df.agg(f) Explanation: Using an OrderedDictionary to maintain column order End of explanation
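In newer pandas versions (0.25 and later) the same per-column summaries can be written with named aggregation, which avoids the OrderedDict workaround used above. A short sketch reusing the columns of the sales frame, assuming df is still in scope:

summary = df.groupby(['name', pd.Grouper(key='date', freq='M')]).agg(
    total_ext_price=('ext price', 'sum'),
    mean_ext_price=('ext price', 'mean'),
    total_quantity=('quantity', 'sum'),
)
summary.head()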
2,044
Given the following text description, write Python code to implement the functionality described below step by step Description: Step1: Parsing STIX Content Parsing STIX content is as easy as calling the parse() function on a JSON string, dictionary, or file-like object. It will automatically determine the type of the object. The STIX objects within bundle objects will be parsed as well. Parsing a string Step2: Parsing a dictionary Step3: Parsing a file-like object Step4: Parsing Custom STIX Content Parsing custom STIX objects and/or STIX objects with custom properties is also completed easily with parse(). Just supply the keyword argument allow_custom=True. When allow_custom is specified, parse() will attempt to convert the supplied STIX content to known STIX 2 domain objects and/or previously defined custom STIX 2 objects. If the conversion cannot be completed (and allow_custom is specified), parse() will treat the supplied STIX 2 content as valid STIX 2 objects and return them. This is an axiomatic possibility as the stix2 library cannot guarantee proper processing of unknown custom STIX 2 objects that were explicitly flagged to be allowed, and thus may not be valid. <div class="alert alert-warning"> **Warning** Specifying allow_custom may lead to critical errors if further processing (searching, filtering, modifying etc...) of the custom content occurs where the custom content supplied is not valid STIX 2 </div> For examples of parsing STIX 2 objects with custom STIX properties, see Custom STIX Content
Python Code: from stix2 import parse input_string = { "type": "observed-data", "id": "observed-data--b67d30ff-02ac-498a-92f9-32f845f448cf", "spec_version": "2.1", "created": "2016-04-06T19:58:16.000Z", "modified": "2016-04-06T19:58:16.000Z", "first_observed": "2015-12-21T19:00:00Z", "last_observed": "2015-12-21T19:00:00Z", "number_observed": 50, "object_refs": [ "file--5d2dc832-b137-4e8c-97b2-5b00c18be611" ] } obj = parse(input_string) print(type(obj)) print(obj.serialize(pretty=True)) Explanation: Parsing STIX Content Parsing STIX content is as easy as calling the parse() function on a JSON string, dictionary, or file-like object. It will automatically determine the type of the object. The STIX objects within bundle objects will be parsed as well. Parsing a string End of explanation input_dict = { "type": "identity", "id": "identity--311b2d2d-f010-4473-83ec-1edf84858f4c", "spec_version": "2.1", "created": "2015-12-21T19:59:11Z", "modified": "2015-12-21T19:59:11Z", "name": "Cole Powers", "identity_class": "individual" } obj = parse(input_dict) print(type(obj)) print(obj.serialize(pretty=True)) Explanation: Parsing a dictionary End of explanation file_handle = open("/tmp/stix2_store/course-of-action/course-of-action--d9727aee-48b8-4fdb-89e2-4c49746ba4dd/20170531213041022744.json") obj = parse(file_handle) print(type(obj)) print(obj.serialize(pretty=True)) Explanation: Parsing a file-like object End of explanation from taxii2client import Collection from stix2 import CompositeDataSource, FileSystemSource, TAXIICollectionSource # to allow for the retrieval of unknown custom STIX2 content, # just create *Stores/*Sources with the 'allow_custom' flag # create FileSystemStore fs = FileSystemSource("/path/to/stix2_data/", allow_custom=True) # create TAXIICollectionSource colxn = Collection('http://taxii_url') ts = TAXIICollectionSource(colxn, allow_custom=True) Explanation: Parsing Custom STIX Content Parsing custom STIX objects and/or STIX objects with custom properties is also completed easily with parse(). Just supply the keyword argument allow_custom=True. When allow_custom is specified, parse() will attempt to convert the supplied STIX content to known STIX 2 domain objects and/or previously defined custom STIX 2 objects. If the conversion cannot be completed (and allow_custom is specified), parse() will treat the supplied STIX 2 content as valid STIX 2 objects and return them. This is an axiomatic possibility as the stix2 library cannot guarantee proper processing of unknown custom STIX 2 objects that were explicitly flagged to be allowed, and thus may not be valid. <div class="alert alert-warning"> **Warning** Specifying allow_custom may lead to critical errors if further processing (searching, filtering, modifying etc...) of the custom content occurs where the custom content supplied is not valid STIX 2 </div> For examples of parsing STIX 2 objects with custom STIX properties, see Custom STIX Content: Custom Properties For examples of parsing defined custom STIX 2 objects, see Custom STIX Content: Custom STIX Object Types For retrieving STIX 2 content from a source (e.g. file system, TAXII) that may possibly have custom STIX 2 content unknown to the user, the user can create a STIX 2 DataStore/Source with the flag allow_custom=True. 
As mentioned, this will configure the DataStore/Source to allow unknown STIX 2 content to be returned (albeit not converted to full STIX 2 domain objects and properties); the stix2 library may still refuse to process that unknown content if it turns out not to be valid STIX 2. End of explanation
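For completeness, a minimal sketch of what parsing content that carries a custom property could look like with the allow_custom flag discussed above. The x_acme_severity property is an invented example used only for illustration; the rest of the dictionary reuses the identity object from earlier in this guide.

from stix2 import parse

custom_input = {
    "type": "identity",
    "id": "identity--311b2d2d-f010-4473-83ec-1edf84858f4c",
    "spec_version": "2.1",
    "created": "2015-12-21T19:59:11Z",
    "modified": "2015-12-21T19:59:11Z",
    "name": "Cole Powers",
    "identity_class": "individual",
    "x_acme_severity": "high",  # hypothetical custom property, accepted because of allow_custom=True
}

obj = parse(custom_input, allow_custom=True)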
2,045
Given the following text description, write Python code to implement the functionality described below step by step Description: Day 4 pre-class assignment Goals for today's pre-class assignment Use the pyplot module to make a figure Use the NumPy module to manipulate arrays of data Assignment instructions Watch the videos below, do the readings linked to below the videos, and complete the assigned programming problems. Please get started early, and come to office hours if you have any questions! Recall that to make notebook cells that have Python code in them do something, hold down the 'shift' key and then press the 'enter' key (you'll have to do this to get the YouTube videos to run). To edit a cell (to add answers, for example) you double-click on the cell, add your text, and then enter it by holding down 'shift' and pressing 'enter' This assignment is due by 11 Step1: Useful references Step2: Useful references The NumPy Quick Start Guide An introduction to numpy Question 2
Python Code: # Imports the functionality that we need to display YouTube videos in a Jupyter Notebook. # You need to run this cell before you run ANY of the YouTube videos. from IPython.display import YouTubeVideo # Display a specific YouTube video, with a given width and height. # WE STRONGLY RECOMMEND that you can watch the video in full-screen mode # (much higher resolution) by clicking the little box in the bottom-right # corner of the video. YouTubeVideo("chBLLNBGoEE",width=640,height=360) # modules and pyplot Explanation: Day 4 pre-class assignment Goals for today's pre-class assignment Use the pyplot module to make a figure Use the NumPy module to manipulate arrays of data Assignment instructions Watch the videos below, do the readings linked to below the videos, and complete the assigned programming problems. Please get started early, and come to office hours if you have any questions! Recall that to make notebook cells that have Python code in them do something, hold down the 'shift' key and then press the 'enter' key (you'll have to do this to get the YouTube videos to run). To edit a cell (to add answers, for example) you double-click on the cell, add your text, and then enter it by holding down 'shift' and pressing 'enter' This assignment is due by 11:59 p.m. the day before class, and should be uploaded into the "Pre-class assignments" dropbox folder for Day 4. Submission instructions can be found at the end of the notebook. Some possibly useful links IPython tutorial -- this contains some very useful suggestions about IPython commands that allow you to get help on Python, figure out what specific variables or objects do, and many other things. Jupyter notebook tips and tricks -- some clever things you can do with Jupyter notebooks Another Jupyter notebook tip page End of explanation # imports the pyplot module from matplotlib import matplotlib.pyplot as plt # ensures that the plots made by matplotlib/pyplot show up in the notebook! %matplotlib inline x1 = [2,4,6,8,10,12,14,16,18] y1 = [10,8.25,7.5,7,6.5,7,7.5,8.25,10] x2 = [5, 15] y2 = [15, 15] # put your plotting commands here! # Display a specific YouTube video, with a given width and height. # WE STRONGLY RECOMMEND that you can watch the video in full-screen mode # (much higher resolution) by clicking the little box in the bottom-right # corner of the video. YouTubeVideo("BTXyE3KLIOs",width=640,height=360) # numpy Explanation: Useful references: The Python math module - contains a list of math commands The matplotlib website - where you can find all information about the matplotlib package The matplotlib gallery - lots of cool visualizations that can be made with matplotlib The Pyplot tutorial - the pyplot tutorial. A good place to get started. A summary of pyplot commands - more extensive documentation on pyplot. Question 1: The cell below contains four lists of data, which correspond to two sets of X and Y values. Use matplotlib to make a plot of these datasets. The first pair of lists (x1, y1) should be drawn with a thick blue dashed line and the second pair of lists (x2, y2) should be drawn with red diamonds. Make the width and height of the plot a bit wider than the region occupied by the points, so that you can see the shape that is drawn. Add axis labels to the x and y axes, with whatever text you like. Use the Pyplot Tutorial for inspiration, and in particular you can find instructions on how to create different types of characters and line types in the pyplot 'plot' command documentation. 
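One possible way to attempt Question 1 is sketched below; it assumes the x1, y1, x2, y2 lists and the plt import from the cell above. The format strings follow the wording of the question (a thick blue dashed line for the first dataset, red diamonds for the second), and the axis labels are placeholders you can change.

plt.plot(x1, y1, 'b--', linewidth=4)   # thick blue dashed line
plt.plot(x2, y2, 'rD')                 # red diamonds
plt.xlim(0, 20)                        # a bit wider than the data
plt.ylim(0, 20)
plt.xlabel('x values')
plt.ylabel('y values')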
End of explanation # put your code here! Explanation: Useful references The NumPy Quick Start Guide An introduction to numpy Question 2: In the cell below, import the numpy module and then create two arrays with the same number of elements in each one (pick the number of elements and the value of each yourself, making sure they're numbers and not strings!). Then, add those arrays together and store it in a third array, and print it out. Sort the values of the third array, and print that out again. End of explanation
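A possible sketch for Question 2, with arbitrary example values (any two equal-length numeric arrays work):

import numpy as np

first = np.array([4.0, 1.0, 3.0, 2.0])
second = np.array([10.0, 40.0, 20.0, 30.0])

third = first + second          # element-wise sum of the two arrays
print(third)

third_sorted = np.sort(third)   # sorted copy of the summed array
print(third_sorted)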
2,046
Given the following text description, write Python code to implement the functionality described below step by step Description: List comprehensions Q Step1: Compare it with this Step2: As with many Python statements, you can almost read-off the meaning of this statement in plain English Step3: Conditionals Step4: Dictionary comprehension Step5: Keyword arguments in functions (kwargs) Q Step6: Cleaning up after model runs Q Step10: Aside Step11: Initialise the object m Step12: No results yet Step13: Run the model Step14: Now, delete all *.pyc files in the working directory Step15: Take-home code snippet (using path.py) Step16: Or using os module Step17: Remove the directory Step18: Polar ~~stereoscopic~~ stereographic maps Q Step19: Now, let's try the same North Polar Stereographic projection plot with cartopy. Step20: References A Whirlwind Tour of Python by Jake VanderPlas (O'Reilly Media, 2016) Cartopy projections
Python Code: L = [] for n in range(12): L.append(n ** 2) L Explanation: List comprehensions Q: Why doesn't the list comprehension syntax make any sense to me / how can list comprehension be "more readable" than not using a list comprehension. List comprehensions are simply a way to compress a list-building for-loop into a single short, readable line. For example, here is a loop that constructs a list of the first 12 square integers: End of explanation L = [n ** 2 for n in range(12)] L L = [n ** 0.5 for n in [11, 22, 33]] L Explanation: Compare it with this: End of explanation [(i, j) for i in range(2) for j in range(3)] L = [] for i in range(2): for j in range(3): if i == 0: L.append((i, j)) L Explanation: As with many Python statements, you can almost read-off the meaning of this statement in plain English: "construct a list consisting of the square of n for each n up to 12". This basic syntax, then, is [expr for var in iterable], where expr is any valid expression, var is a variable name, and iterable is any iterable Python object. Multiple loops End of explanation [val for val in range(20) if val % 3 == 0 and val != 0] [val if val % 2 else -val for val in range(20) if val % 3] Explanation: Conditionals End of explanation d = {k: v for k, v in zip('abc', range(3))} d {k: v for k, v in d.items() if k in ['a', 'c']} Explanation: Dictionary comprehension End of explanation def catch_all(*args, **kwargs): print("Positional (required) arguments:\t", args) print("Keyword (optional) arguments:\t", kwargs) # print(kwargs['pi']) catch_all(1, 2, 3, a=4, b=5) catch_all('a', keyword=2) inputs = (1, 2, 3) keywords = {'pi': 3.14, 'e': 2.71} catch_all(*inputs, **keywords) def fun(a=1, pi=3, e=2): print(a, pi, e) fun(**keywords) class MyClass: def __init__(self, **kwargs): self.__dict__.update(kwargs) data = dict(var=123) a = MyClass(blahblah=456) a.blahblah Explanation: Keyword arguments in functions (kwargs) Q: How to optimise function interface with keyword arguments? End of explanation from path import Path from tempfile import mkdtemp import uuid Explanation: Cleaning up after model runs Q: What is the best way to delete unwanted files within Jupyter notebook workflow? Say I need to run the model multiple times and want to insure that every new model run uses only newly created files. Method 1: have a Makefile run make clean from a terminal or make clean using subprocess Example of a Makefile ``` .PHONY: clean clean: -rm *.pyc ``` Method 2: End of explanation class MockModel(object): Mock model that generates empty files def __init__(self, workdir=None): Initialise the model Kwargs: workdir: working directory path; defaults to a temporary directory if workdir is None: self.workdir = Path(mkdtemp()) else: self.workdir = Path(workdir) # .abspath() self.workdir.makedirs_p() def run(self, n, extensions=['tmp']): Run the model Args: n: int, number of random files to generate assert isinstance(n, int), 'n must be an integer!' self.files = [] for ext in extensions: for _ in range(n): f = self.workdir / 'junk_{filename}.{ext}'.format(filename=uuid.uuid4(), ext=ext) with f.open('wb') as fout: random_data = Path('/dev/urandom').open('rb').read(1024) fout.write(random_data) self.files.append(f) def show_results(self, pattern='*'): return [x for x in sorted(self.workdir.glob(pattern)) if x.isfile()] def clean_all(self, pattern): for f in self.workdir.glob(pattern): f.remove() def purge(self): self.workdir.rmtree_p() Explanation: Aside: what is UUID? 
* A universally unique identifier (UUID) is a 128-bit number used to identify information in computer systems. * UUID version 4 End of explanation m = MockModel('such_folder_wow') Explanation: Initialise the object m End of explanation m.show_results() Explanation: No results yet: End of explanation m.run(2, extensions=['temp', 'pyc', 'out']) m.show_results() Explanation: Run the model End of explanation m.clean_all('*.pyc') Explanation: Now, delete all *.pyc files in the working directory End of explanation def clean_dir(dirname, pattern): d = Path(dirname) for file in d.glob(pattern): file.remove() Explanation: Take-home code snippet (using path.py) End of explanation import os from glob import glob def clean_dir(dirname, pattern): absdir = os.path.abspath(dirname) p = '{absdir}/{pattern}'.format(absdir=absdir, pattern=pattern) for file in glob(p): os.remove(file) clean_dir('such_folder_wow', '*.so') Explanation: Or using os module: End of explanation m.purge() Explanation: Remove the directory: End of explanation import matplotlib.pyplot as plt import numpy as np from mpl_toolkits.basemap import Basemap fig, ax = plt.subplots() mymap = Basemap(ax=ax, projection='npstere', boundinglat=60, round='True', lon_0=0, lat_0=90) mymap.drawcoastlines() mymap.drawmeridians(np.arange(-180, 181, 60), latmax=90); mymap.drawparallels(np.arange(-90, 91, 5)); Explanation: Polar ~~stereoscopic~~ stereographic maps Q: What is the best way to create polar stereographic plots? I will be creating a lot soon and currently using cf plots to do this. What other ways are there? Let's have a look at cf-plot! http://ajheaps.github.io/cf-plot/gallery.html https://github.com/ajheaps/cf-plot/blob/master/cfplot/cfplot.py#L4803 We can see that under the hood it uses basemap library: https://matplotlib.org/basemap/ End of explanation import cartopy.crs as ccrs %matplotlib inline plt.rcParams['figure.figsize'] = (8, 8) fig = plt.figure() ax = fig.add_subplot(111, projection=ccrs.Stereographic(central_latitude=90)) ax.set_extent([-180, 180, 60, 90], ccrs.PlateCarree()) ax.gridlines() ax.coastlines() import matplotlib.path as mpath fig = plt.figure() ax = fig.add_subplot(111, projection=ccrs.NorthPolarStereo()) ax.coastlines() ax.set_extent([-180, 180, 60, 90], ccrs.PlateCarree()) ax.gridlines() # Compute a circle in axes coordinates, which we can use as a boundary # for the map. We can pan/zoom as much as we like - the boundary will be # permanently circular. theta = np.linspace(0, 2*np.pi, 100) center, radius = [0.5, 0.5], 0.5 verts = np.vstack([np.sin(theta), np.cos(theta)]).T circle = mpath.Path(verts * radius + center) ax.set_boundary(circle, transform=ax.transAxes) Explanation: Now, let's try the same North Polar Stereographic projection plot with cartopy. End of explanation HTML(html) Explanation: References A Whirlwind Tour of Python by Jake VanderPlas (O'Reilly Media, 2016) Cartopy projections End of explanation
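For newer codebases the same cleanup helper can also be written with the standard-library pathlib module instead of path.py or os plus glob. A sketch with the same behaviour as the clean_dir functions above, assuming the target directory exists:

from pathlib import Path

def clean_dir(dirname, pattern):
    # Delete every file in `dirname` matching `pattern`, e.g. '*.pyc'
    for file in Path(dirname).glob(pattern):
        if file.is_file():
            file.unlink()

clean_dir('such_folder_wow', '*.pyc')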
2,047
Given the following text description, write Python code to implement the functionality described below step by step Description: After moving the sensors How do things look after the sensors have been positioned away from the surprise heat source? Let's look at a day's worth of data. Step1: Much better! The spread looks to be < 0.5F (consistent with the ±0.2C in the specifications). That's good enough for the needs of the project. How did humidity fare? Step2: Looks like about a 3% spread, which is within the ±2%RH specified range. Setting aside the minor issue of calibration, and assuming the sensors differ by a more-or-less constant amount, we should be able to normalize the readings. (Yeah, yeah, we've already downsampled...)
Python Code: %matplotlib inline import matplotlib matplotlib.rcParams['figure.figsize'] = (12, 5) import pandas as pd df = pd.read_csv('after-sensor-move.csv', header=None, names=['time', 'mac', 'f', 'h'], parse_dates=[0]) per_sensor_f = df.pivot(index='time', columns='mac', values='f') downsampled_f = per_sensor_f.resample('2T').mean() downsampled_f.plot(); Explanation: After moving the sensors How do things look after the sensors have been positioned away from the surprise heat source? Let's look at a day's worth of data. End of explanation per_sensor_h = df.pivot(index='time', columns='mac', values='h') downsampled_h = per_sensor_h.resample('2T').mean() downsampled_h.plot(); Explanation: Much better! The spread looks to be < 0.5F (consistent with the ±0.2C in the specifications). That's good enough for the needs of the project. How did humidity fare? End of explanation means = {} for c in downsampled_h.columns: means[c] = downsampled_h[c].mean() mean_means = sum(means.values()) / len(means) mean_means adjusted_h = downsampled_h.copy() for c in adjusted_h.columns: adjusted_h[c] -= (means[c] - mean_means) adjusted_h.plot(); Explanation: Looks like about a 3% spread, which is within the ±2%RH specified range. Setting aside the minor issue of calibration, and assuming the sensors differ by a more-or-less constant amount, we should be able to normalize the readings. (Yeah, yeah, we've already downsampled...) End of explanation
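The per-column offset removal in the last cell can also be expressed directly with pandas column arithmetic. The sketch below should produce the same adjusted frame, assuming downsampled_h from above is still defined: subtracting a Series from a DataFrame aligns on column labels, so each sensor column gets its own offset removed.

col_means = downsampled_h.mean()        # one mean per sensor column
overall_mean = col_means.mean()         # mean of the per-sensor means
adjusted_h = downsampled_h - (col_means - overall_mean)
adjusted_h.plot();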
2,048
Given the following text description, write Python code to implement the functionality described below step by step Description: Intelligent Systems Assignment 2 Bayes' net inference Names Step1: a. Bayes' net for instant perception and position. Build a Bayes' net that represent the relationships between the random variables. Based on it, write an expression for the joint probability distribution of all the variables. $P(X, E_{N}, E_{S}, E_{W},E_{E}) = P(X)P(E_{N}|X)P(E_{S}|X)P(E_{W}|X)P(E_{E}|X)$ b. Probability functions calculated from the instant model. Assuming an uniform distribution for the Pacman position probability, write functions to calculate the following probabilities Step2: ii. $P(E_{E}=e_{E}|E_{N}=e_{N},E_{S}=E_{S})$ Step3: iii. $P(S)$, where $S\subseteq{e_{N},e_{S},e_{E},e_{W}}$ Step4: c. Bayes' net for dynamic perception and position. Now we will consider a scenario where the Pacman moves a finite number of steps $n$. In this case we have $n$ different variables for the positions $X_{1},\dots,X_{n}$, as well as for each one of the perceptions, e.g. $E_{N_{1}},\dots,E_{N_{n}}$ for the north perception. For the initial Pacman position, assume an uniform distribution among the valid positions. Also assume that at each time step the Pacman choses, to move, one of the valid neighbor positions with uniform probability. Draw the corresponding Bayes' net for $n=4$. d. Probability functions calculated from the dynamic model. Assuming an uniform distribution for the Pacman position probability, write functions to calculate the following probabilities Step5: ii. $P(X_{2}=x_{2}|E_{2}=e_{2},E_{3}=e_{3},E_{4}=e_{4})$ Step6: iii. $P(E_{4}=e_{4}|E_{1}=e_{1},E_{2}=e_{2},E_{3}=e_{3})$ Step7: iv. $P(E_{E_{2}}=e_{E_{2}}|E_{N_{2}}=e_{N_{2}},E_{S_{2}}=E_{S_{2}})$ Step8: Test functions You can use the following functions to test your solutions.
Python Code: class Directions: NORTH = 'North' SOUTH = 'South' EAST = 'East' WEST = 'West' STOP = 'Stop' Explanation: Intelligent Systems Assignment 2 Bayes' net inference Names: IDs: End of explanation def getMapa(): mapa = [[0] * 6 for i in range(1, 6)] mapa[1][1] = 1 mapa[1][3] = 1 mapa[1][4] = 1 mapa[3][1] = 1 mapa[3][3] = 1 mapa[3][4] = 1 return mapa def getMap(): mapa = getMapa() matriz = [[None] * 6 for i in range(1, 6)] px = 1 / float(24) for x in range(0, 5): for y in range(0, 6): if(mapa [x][y] == 1): p = 0.0 else: p = px if(x == 0): n = True elif(mapa[x - 1][y] == 1): n = True else: n = False if(x == 4): s = True elif(mapa[x + 1][y] == 1): s = True else: s = False if(y == 0): l = True elif(mapa[x][y - 1] == 1): l = True else: l = False if(y == 5): r = True elif(mapa[x][y + 1] == 1): r = True else: r = False matriz[x][y] = [n, l, p, r, s] return matriz def P_1(eps, E_N, E_S): ''' Calculates: P(X=x|E_{N}=e_{N},E_{S}=e_{S}) Arguments: E_N, E_S \in {True,False} 0 <= eps <= 1 (epsilon) ''' truePerception = 1 - eps; falsePerception = eps; matrix = getMap() den = 0 for i in range(len(matrix)): row = matrix[i] for j in range(len(row)): n, l, p, r, s = row[j] pn = falsePerception ps = falsePerception if n == E_N: pn = truePerception if s == E_S: ps = truePerception den += (p * pn * ps) pd = {(x, y):0 for x in range(1, 7) for y in range(1, 6)} for i in range(len(matrix)): row = matrix[i] for j in range(len(row)): n, l, p, r, s = row[j] pn = falsePerception ps = falsePerception if n == E_N: pn = truePerception if s == E_S: ps = truePerception p = (p * pn * ps) / den row[j] = [n, l, p, r, s] # Cambiar a coordenadas cartesianas pd[(j + 1, 5 - i)] = p return pd P_1(0.0, True, False) Explanation: a. Bayes' net for instant perception and position. Build a Bayes' net that represent the relationships between the random variables. Based on it, write an expression for the joint probability distribution of all the variables. $P(X, E_{N}, E_{S}, E_{W},E_{E}) = P(X)P(E_{N}|X)P(E_{S}|X)P(E_{W}|X)P(E_{E}|X)$ b. Probability functions calculated from the instant model. Assuming an uniform distribution for the Pacman position probability, write functions to calculate the following probabilities: i. $P(X=x|E_{N}=e_{N},E_{S}=e_{S}) = \dfrac{P(X=x)P(E_{N}=e_{N}|X=x)P(E_{N}=e_{N}|X=x)}{\sum\limits_{x} P(X=x)P(E_{N}=e_{N}|X=x)P(E_{N}=e_{N}|X=x)}$ End of explanation def P_2(eps, E_N, E_S): ''' Calculates: P(E_{E}=e_{E}|E_{N}=e_{N},E_{S}=E_{S}) Arguments: E_N, E_S \in {True,False} 0 <= eps <= 1 ''' truePerception = 1 - eps; falsePerception = eps; mapa = getMapa() matrix = getMap() den = 0 for i in range(len(matrix)): row = matrix[i] for j in range(len(row)): n, l, p, r, s = row[j] pn = falsePerception ps = falsePerception pr = truePerception if n == E_N: pn = truePerception if s == E_S: ps = truePerception if r == True: pr = truePerception else: pr = falsePerception if mapa[i][j]==0: wall=1 else: wall=0 pr = truePerception den += (pr* pn * ps * wall) #print den # print 'den ',den count=0 for i in range(len(matrix)): row = matrix[i] for j in range(len(row)): n, l, p, r, s = row[j] pn = falsePerception ps = falsePerception pr = falsePerception if n == E_N: pn = truePerception if s == E_S: ps = truePerception # print r if mapa[i][j]==0: wall=1 else: wall=0 if r == True: pr = truePerception else: pr = falsePerception pr = (pr * pn * ps*wall) #/den #print i,' ',j,' ',pr,' ',pr/den count += pr # print count pr = count/den # print pr pd = {True:pr, False:(1-pr)} return pd P_2(0.0, True, False) Explanation: ii. 
$P(E_{E}=e_{E}|E_{N}=e_{N},E_{S}=E_{S})$ End of explanation def P_3(eps, S): ''' Calculates: P(S), where S\subseteq\{e_{N},e_{S},e_{E},e_{W}\} Arguments: S a dictionary with keywords in Directions and values in {True,False} 0 <= eps <= 1 ''' # for i in range(len(S)): # print S[i] mapa = getMapa() matrix = getMap() truePerception = 1 - eps; falsePerception = eps; pb=0 if(len(S)==1): for i in range(len(matrix)): row = matrix[i] for j in range(len(row)): n, l, p, r, s = row[j] pr = falsePerception pn = falsePerception pl = falsePerception ps = falsePerception if mapa[i][j]==0: wall=1 else: wall=0 if S.get(Directions.EAST) != None: if r == S.get(Directions.EAST): pr = truePerception pb += (pr*wall*p) elif S.get(Directions.WEST) != None: if l == S.get(Directions.WEST): pl = truePerception pb += (pl*wall*p) elif S.get(Directions.SOUTH) != None: if s == S.get(Directions.SOUTH): ps = truePerception pb += (ps*wall*p) elif S.get(Directions.NORTH) != None: if n == S.get(Directions.NORTH): pn = truePerception pb += (pn*wall*p) elif(len(S)==2): for i in range(len(matrix)): row = matrix[i] for j in range(len(row)): n, l, p, r, s = row[j] pr = falsePerception pn = falsePerception pl = falsePerception ps = falsePerception if mapa[i][j]==0: wall=1 else: wall=0 if S.get(Directions.EAST) != None and S.get(Directions.WEST) != None: if r == S.get(Directions.EAST): pr = truePerception if l == S.get(Directions.WEST): pl = truePerception pb += (pr*pl*wall*p) elif S.get(Directions.EAST) != None and S.get(Directions.SOUTH) != None: if r == S.get(Directions.EAST): pr = truePerception if s == S.get(Directions.SOUTH): ps = truePerception pb += (pr*ps*wall*p) elif S.get(Directions.EAST) != None and S.get(Directions.NORTH) != None: if r == S.get(Directions.EAST): pr = truePerception if n == S.get(Directions.NORTH): pn = truePerception pb += (pr*pn*wall*p) elif S.get(Directions.WEST) != None and S.get(Directions.SOUTH) != None: if l == S.get(Directions.WEST): pl = truePerception if s == S.get(Directions.SOUTH): ps = truePerception pb += (pl*ps*wall*p) elif S.get(Directions.WEST) != None and S.get(Directions.NORTH) != None: if l == S.get(Directions.WEST): pl = truePerception if n == S.get(Directions.NORTH): pn = truePerception pb += (pl*pn*wall*p) elif S.get(Directions.NORTH) != None and S.get(Directions.SOUTH) != None: if n == S.get(Directions.NORTH): pn = truePerception if s == S.get(Directions.SOUTH): ps = truePerception pb += (pn*ps*wall*p) elif(len(S)==3): for i in range(len(matrix)): row = matrix[i] for j in range(len(row)): n, l, p, r, s = row[j] pr = falsePerception pn = falsePerception pl = falsePerception ps = falsePerception if mapa[i][j]==0: wall=1 else: wall=0 if S.get(Directions.EAST) != None and S.get(Directions.WEST) != None and S.get(Directions.SOUTH) != None: if r == S.get(Directions.EAST): pr = truePerception if l == S.get(Directions.WEST): pl = truePerception if s == S.get(Directions.SOUTH): ps = truePerception pb += (pr*pl*ps*wall*p) elif S.get(Directions.EAST) != None and S.get(Directions.WEST) and S.get(Directions.NORTH) != None: if r == S.get(Directions.EAST): pr = truePerception if l == S.get(Directions.WEST): pl = truePerception if n == S.get(Directions.NORTH): pn = truePerception pb += (pr*pl*pn*wall*p) elif S.get(Directions.EAST) != None and S.get(Directions.NORTH) != None and S.get(Directions.SOUTH) != None: if r == S.get(Directions.EAST): pr = truePerception if n == S.get(Directions.NORTH): pn = truePerception if s == S.get(Directions.SOUTH): ps = truePerception pb += (pr*pn*ps*wall*p) elif 
S.get(Directions.WEST) != None and S.get(Directions.NORTH) != None and S.get(Directions.SOUTH) != None: if l == S.get(Directions.WEST): pl = truePerception if n == S.get(Directions.NORTH): pn = truePerception if s == S.get(Directions.SOUTH): ps = truePerception pb += (pl*pn*ps*wall*p) elif(len(S)==4): for i in range(len(matrix)): row = matrix[i] for j in range(len(row)): n, l, p, r, s = row[j] pr = falsePerception pn = falsePerception pl = falsePerception ps = falsePerception if mapa[i][j]==0: wall=1 else: wall=0 if S.get(Directions.EAST) != None and S.get(Directions.WEST) != None and S.get(Directions.SOUTH) != None and S.get(Directions.NORTH) != None: if r == S.get(Directions.EAST): pr = truePerception if l == S.get(Directions.WEST): pl = truePerception if s == S.get(Directions.SOUTH): ps = truePerception if n == S.get(Directions.NORTH): pn = truePerception pb += (pr*pl*ps*pn*wall*p) # print pb return pb P_3(0.0, {Directions.EAST: True, Directions.WEST: True}) Explanation: iii. $P(S)$, where $S\subseteq{e_{N},e_{S},e_{E},e_{W}}$ End of explanation def P_4(eps, E_1, E_3): import numpy as np pos= np.random.randint(24) count=0 x1=1/24 for i in range(len(matrix)): row = matrix[i] for j in range(len(row)): n, l, p, r, s = row[j] count++ pr = falsePerception pn = falsePerception pl = falsePerception ps = falsePerception if mapa[i][j]==0: wall=1 else: wall=0 if count == pos: ''' Calculates: P(X_{4}=x_{4}|E_{1}=e_{1},E_{3}=e_{3}) Arguments: E_1, E_3 dictionaries of type Directions --> {True,False} 0 <= eps <= 1 ''' pd = {(x,y):0 for x in range(1,7) for y in range(1,6)} return pd E_1 = {Directions.NORTH: True, Directions.SOUTH: False, Directions.EAST: True, Directions.WEST: False} E_3 = {Directions.NORTH: True, Directions.SOUTH: True, Directions.EAST: False, Directions.WEST: False} P_4(0.1, E_1, E_3) Explanation: c. Bayes' net for dynamic perception and position. Now we will consider a scenario where the Pacman moves a finite number of steps $n$. In this case we have $n$ different variables for the positions $X_{1},\dots,X_{n}$, as well as for each one of the perceptions, e.g. $E_{N_{1}},\dots,E_{N_{n}}$ for the north perception. For the initial Pacman position, assume an uniform distribution among the valid positions. Also assume that at each time step the Pacman choses, to move, one of the valid neighbor positions with uniform probability. Draw the corresponding Bayes' net for $n=4$. d. Probability functions calculated from the dynamic model. Assuming an uniform distribution for the Pacman position probability, write functions to calculate the following probabilities: i. $P(X_{4}=x_{4}|E_{1}=e_{1},E_{3}=e_{3})$ End of explanation def P_5(eps, E_2, E_3, E_4): ''' Calculates: P(X_{2}=x_{2}|E_{2}=e_{2},E_{3}=e_{3},E_{4}=e_{4}) Arguments: E_2, E_3, E_4 dictionaries of type Directions --> {True,False} 0 <= eps <= 1 ''' pd = {(x,y):0 for x in range(1,7) for y in range(1,6)} return pd E_2 = {Directions.NORTH: True, Directions.SOUTH: False, Directions.EAST: True, Directions.WEST: False} E_3 = {Directions.NORTH: True, Directions.SOUTH: True, Directions.EAST: False, Directions.WEST: False} E_4 = {Directions.NORTH: True, Directions.SOUTH: False, Directions.EAST: True, Directions.WEST: False} P_5(0.1, E_2, E_3, E_4) Explanation: ii. 
$P(X_{2}=x_{2}|E_{2}=e_{2},E_{3}=e_{3},E_{4}=e_{4})$ End of explanation def P_6(eps, E_1, E_2, E_3): ''' Calculates: P(E_{4}=e_{4}|E_{1}=e_{1},E_{2}=e_{2},E_{3}=e_{3}) Arguments: E_1, E_2, E_3 dictionaries of type Directions --> {True,False} 0 <= eps <= 1 ''' pd = {(n, s, e, w): 0 for n in [False, True] for s in [False, True] for e in [False, True] for w in [False, True]} return pd E_1 = {Directions.NORTH: True, Directions.SOUTH: False, Directions.EAST: True, Directions.WEST: False} E_2 = {Directions.NORTH: True, Directions.SOUTH: False, Directions.EAST: True, Directions.WEST: False} E_3 = {Directions.NORTH: True, Directions.SOUTH: True, Directions.EAST: False, Directions.WEST: False} P_6(0.1, E_1, E_2, E_3) Explanation: iii. $P(E_{4}=e_{4}|E_{1}=e_{1},E_{2}=e_{2},E_{3}=e_{3})$ End of explanation def P_7(eps, E_N, E_S): ''' Calculates: P(E_{E_{2}}=e_{E_{2}}|E_{N_{2}}=e_{N_{2}},E_{S_{2}}=E_{S_{2}}) Arguments: E_N_2, E_S_2 \in {True,False} 0 <= eps <= 1 ''' pd = {True:0, False:0} return pd P_7(0.1, True, False) Explanation: iv. $P(E_{E_{2}}=e_{E_{2}}|E_{N_{2}}=e_{N_{2}},E_{S_{2}}=E_{S_{2}})$ End of explanation def approx_equal(val1, val2): return abs(val1-val2) <= 0.00001 def test_P_1(): pd = P_1(0.0, True, True) assert approx_equal(pd[(2, 1)], 0.1111111111111111) assert approx_equal(pd[(3, 1)], 0) pd = P_1(0.3, True, False) assert approx_equal(pd[(2, 1)], 0.03804347826086956) assert approx_equal(pd[(3, 1)], 0.016304347826086956) def test_P_2(): pd = P_2(0.0, True, True) assert approx_equal(pd[False], 1.0) pd = P_2(0.3, True, False) assert approx_equal(pd[False], 0.5514492753623188) def test_P_3(): pd = P_3(0.1, {Directions.EAST: True, Directions.WEST: True}) assert approx_equal(pd, 0.2299999999999999) pd = P_3(0.1, {Directions.EAST: True}) assert approx_equal(pd, 0.3999999999999999) pd = P_3(0.2, {Directions.EAST: False, Directions.WEST: True, Directions.SOUTH: True}) assert approx_equal(pd, 0.0980000000000000) def test_P_4(): E_1 = {Directions.NORTH: False, Directions.SOUTH: False, Directions.EAST: True, Directions.WEST: True} E_3 = {Directions.NORTH: False, Directions.SOUTH: False, Directions.EAST: True, Directions.WEST: True} pd = P_4(0.0, E_1, E_3) assert approx_equal(pd[(6, 3)], 0.1842105263157895) assert approx_equal(pd[(4, 3)], 0.0) pd = P_4(0.2, E_1, E_3) assert approx_equal(pd[(6, 3)], 0.17777843398830864) assert approx_equal(pd[(4, 3)], 0.000578430282649176) E_1 = {Directions.NORTH: True, Directions.SOUTH: False, Directions.EAST: True, Directions.WEST: False} E_3 = {Directions.NORTH: False, Directions.SOUTH: False, Directions.EAST: True, Directions.WEST: False} pd = P_4(0.0, E_1, E_3) assert approx_equal(pd[(6, 2)], 0.3333333333333333) assert approx_equal(pd[(4, 3)], 0.0) def test_P_5(): E_2 = {Directions.NORTH: True, Directions.SOUTH: True, Directions.EAST: False, Directions.WEST: False} E_3 = {Directions.NORTH: True, Directions.SOUTH: False, Directions.EAST: False, Directions.WEST: False} E_4 = {Directions.NORTH: True, Directions.SOUTH: True, Directions.EAST: False, Directions.WEST: False} pd = P_5(0, E_2, E_3, E_4) assert approx_equal(pd[(2, 5)], 0.5) assert approx_equal(pd[(4, 3)], 0.0) pd = P_5(0.3, E_2, E_3, E_4) assert approx_equal(pd[(2, 5)], 0.1739661245168835) assert approx_equal(pd[(4, 3)], 0.0787991740545979) def test_P_7(): pd = P_7(0.0, True, False) assert approx_equal(pd[False], 0.7142857142857143) pd = P_7(0.3, False, False) assert approx_equal(pd[False], 0.5023529411764706) h test_P_1() Explanation: Test functions You can use the following functions to test your 
solutions. End of explanation
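As a cross-check of part b.i, the posterior from the formula quoted above (uniform prior over the free cells, each sensor reading correct with probability 1 - eps) can be restated compactly. This is only a sketch that reuses the getMap helper defined earlier; it is not a replacement for the assignment functions.

def P_1_alt(eps, E_N, E_S):
    # P(X=x | E_N, E_S) is proportional to P(X=x) * P(E_N | X=x) * P(E_S | X=x)
    matrix = getMap()
    unnormalized = {}
    for i, row in enumerate(matrix):
        for j, (n, l, p, r, s) in enumerate(row):
            pn = (1 - eps) if n == E_N else eps
            ps = (1 - eps) if s == E_S else eps
            unnormalized[(j + 1, 5 - i)] = p * pn * ps   # same (x, y) convention as P_1
    total = sum(unnormalized.values())
    return {pos: (val / total if total else 0.0) for pos, val in unnormalized.items()}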
2,049
Given the following text description, write Python code to implement the functionality described below step by step Description: Language Processing and Python Computing with Language Step1: concordance is a view that shows every occurrence of a word alongside some context Step2: similar shows other words that appear in a similar context to the entered word Step3: text 1 (Melville) uses monstrous very differently from text 2 (Austen) Text 2 Step4: trying out other words... Step5: Lexical Dispersion Plot Determining the location of words in text (how many words from beginning does this word appear?) -- using dispersion_plot Step6: Generating some random text in the style of text3 -- using generate() not yet supported in NLTK 3.0 Step7: 1.4 Counting Vocabulary Count the number of tokens using len Step8: View/count vocabulary using set(text_obj) Step9: Calculating lexical richness of the text Step10: Count how often a word occurs in the text Step11: Compute what percentage of the text is taken up by a specific word Step12: Define some simple functions to calculate these values Step13: A Closer Look at Python Step14: List Concatenation Step15: Indexing Lists (...and Text objects) Step16: Computing with Language Step17: Frequency Distributions Step18: 50 most frequent words account for almost half of the book Step19: Fine-grained Selection of Words Looking at long words of a text (maybe these will be more meaningful words?) Step20: words that are longer than 7 characters and occur more than 7 times Step21: collocation - sequence of words that occur together unusually often (red wine is a collocation, vs. the wine is not) Step22: collocations are just frequent bigrams -- we want to focus on the cases that involve rare words** collocations() returns bigrams that occur more often than expected, based on word frequency Step23: counting other things word length distribution in text1 Step24: words of length 3 (~50k) make up ~20% of all words in the book Back to Python Step25: Only include alphabetic words -- no punctuation
Python Code: %matplotlib inline import matplotlib.pyplot as plt import nltk from nltk.book import * text1 text2 Explanation: Language Processing and Python Computing with Language: Texts and Words Ran the following in python3 interpreter: import nltk nltk.download() Select book to download corpora for NLTK Book End of explanation text1.concordance("monstrous") text2.concordance("affection") text3.concordance("lived") Explanation: concordance is a view that shows every occurrence of a word alongside some context End of explanation text1.similar("monstrous") text2.similar("monstrous") Explanation: similar shows other words that appear in a similar context to the entered word End of explanation text2.common_contexts(["monstrous", "very"]) Explanation: text 1 (Melville) uses monstrous very differently from text 2 (Austen) Text 2: monstrous has positive connotations, sometimes functions as an intensifier like very common_contexts shows contexts that are shared by two or more words End of explanation text2.similar("affection") text2.common_contexts(["affection", "regard"]) Explanation: trying out other words... End of explanation plt.figure(figsize=(18,10)) text4.dispersion_plot(["citizens", "democracy", "freedom", "duties", "America", "liberty", "constitution"]) Explanation: Lexical Dispersion Plot Determining the location of words in text (how many words from beginning does this word appear?) -- using dispersion_plot End of explanation # (not available in NLTK 3.0) # text3.generate() Explanation: Generating some random text in the style of text3 -- using generate() not yet supported in NLTK 3.0 End of explanation len(text3) Explanation: 1.4 Counting Vocabulary Count the number of tokens using len End of explanation len(set(text3)) # first 50 sorted(set(text3))[:50] Explanation: View/count vocabulary using set(text_obj) End of explanation len(set(text3)) / len(text3) Explanation: Calculating lexical richness of the text End of explanation text3.count("smote") Explanation: Count how often a word occurs in the text End of explanation 100 * text4.count('a') / len(text4) text5.count('lol') 100 * text5.count('lol') / len(text5) Explanation: Compute what percentage of the text is taken up by a specific word End of explanation def lexical_diversity(text): return len(set(text)) / len(text) def percentage(count, total): return 100 * count / total lexical_diversity(text3), lexical_diversity(text5) percentage(text4.count('a'), len(text4)) Explanation: Define some simple functions to calculate these values End of explanation sent1 sent2 lexical_diversity(sent1) Explanation: A Closer Look at Python: Texts as Lists of Words skipping some basic python parts of this section... 
End of explanation ['Monty', 'Python'] + ['and', 'the', 'Holy', 'Grail'] Explanation: List Concatenation End of explanation text4[173] text4.index('awaken') text5[16715:16735] text6[1600:1625] Explanation: Indexing Lists (...and Text objects) End of explanation saying = 'After all is said and done more is said than done'.split() tokens = sorted(set(saying)) tokens[-2:] Explanation: Computing with Language: Simple Statistics End of explanation fdist1 = FreqDist(text1) print(fdist1) fdist1.most_common(50) fdist1['whale'] Explanation: Frequency Distributions End of explanation plt.figure(figsize=(18,10)) fdist1.plot(50, cumulative=True) Explanation: 50 most frequent words account for almost half of the book End of explanation V = set(text1) long_words = [w for w in V if len(w) > 15] sorted(long_words) Explanation: Fine-grained Selection of Words Looking at long words of a text (maybe these will be more meaningful words?) End of explanation fdist5 = FreqDist(text5) sorted(w for w in set(text5) if len(w) > 7 and fdist5[w] > 7) Explanation: words that are longer than 7 characters and occur more than 7 times End of explanation list(nltk.bigrams(['more', 'is', 'said', 'than', 'done'])) # bigrams() returns a generator Explanation: collocation - sequence of words that occur together unusually often (red wine is a collocation, vs. the wine is not) End of explanation text4.collocations() text8.collocations() Explanation: collocations are just frequent bigrams -- we want to focus on the cases that involve rare words** collocations() returns bigrams that occur more often than expected, based on word frequency End of explanation [len(w) for w in text1][:10] fdist = FreqDist(len(w) for w in text1) print(fdist) fdist fdist.most_common() fdist.max() fdist[3] fdist.freq(3) Explanation: counting other things word length distribution in text1 End of explanation len(text1) len(set(text1)) len(set(word.lower() for word in text1)) Explanation: words of length 3 (~50k) make up ~20% of all words in the book Back to Python: Making Decisions and Taking Control skipping basic python stuff More accurate vocabulary size counting -- convert all strings to lowercase End of explanation len(set(word.lower() for word in text1 if word.isalpha())) Explanation: Only include alphabetic words -- no punctuation End of explanation
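The last two cells contrast the raw vocabulary size with the lowercased, alphabetic-only count. As a small illustrative sketch (the helper name normalized_vocab_size is my own; everything else uses only the NLTK texts already loaded above), the same comparison can be packaged as a reusable function:

```python
def normalized_vocab_size(text):
    """Vocabulary size after lowercasing and dropping punctuation/numbers."""
    return len(set(word.lower() for word in text if word.isalpha()))

# Compare raw type counts with normalized counts for a few of the loaded books.
for text in [text1, text2, text3]:
    print(text, len(set(text)), normalized_vocab_size(text))
```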
2,050
Given the following text description, write Python code to implement the functionality described below step by step Description: Basic Ultrastorage Step1: Creating storage systems A Storage system is a collection of storage units. It is the responsability of the storage system to add items to the various storage units. Step2: Create a storage system. Without any argument, the storage unit is given a unique name and a transfert rate of 1. Step3: Initially, the storage unit does not contain any storage unit as shown by $\texttt{storage unit=()}$. A name and/or a transfert rate may be given. Step4: Adding and removing storage units Use the add_storage_unit(self, capacity, cpu, name=None) method to add a storage unit to the storage system. The first two arguments are mandatory. Step5: Use the remove_storage_unit(self, storage_unit_name) method to remove a storage unit. Step6: More on creating a strorage system The storage_system_builder(capacities, cpus, name=None, transfert_rate=1., storage_unit_names=None) function is a convenient function for both creating a storage system and adding storage units at the same time. Storage unitsare defined by giving the capacities and CPUs of the storage units to be added to the newly created storage system. Step7: Some contrained storage systems are defined Step8: Semi-homegeneous storage systems Step9: Homegeneous storage systems Step10: Adding and deleting items Add items to the storage system using the add_item(self, storage_unit_name, item) method.
Python Code: import IPython IPython.__version__ import ultrastorage ultrastorage.__version__ Explanation: Basic Ultrastorage: Storage system For reproducibility. End of explanation from ultrastorage.storagesystem import StorageSystem from ultrastorage.item import Item Explanation: Creating storage systems A Storage system is a collection of storage units. It is the responsability of the storage system to add items to the various storage units. End of explanation storage_system = StorageSystem() print("storage system={}".format(storage_system)) Explanation: Create a storage system. Without any argument, the storage unit is given a unique name and a transfert rate of 1. End of explanation storage_system = StorageSystem(name="myStorageSystem", transfert_rate=0.5) print("storage system={}".format(storage_system)) Explanation: Initially, the storage unit does not contain any storage unit as shown by $\texttt{storage unit=()}$. A name and/or a transfert rate may be given. End of explanation storage_system = StorageSystem(name="myStorageSystem") # storage unit specifications capacity = 100 cpu = 1 name = "myFirstStorageUnit" # add the storage unit to the storage system storage_system.add_storage_unit(capacity, cpu, name) print("storage system={}".format(storage_system)) storage_system.add_storage_unit(2000, 2) storage_system.add_storage_unit(3000, 3) print(storage_system) Explanation: Adding and removing storage units Use the add_storage_unit(self, capacity, cpu, name=None) method to add a storage unit to the storage system. The first two arguments are mandatory. End of explanation storage_system.remove_storage_unit(name) print("storage system={}".format(storage_system)) Explanation: Use the remove_storage_unit(self, storage_unit_name) method to remove a storage unit. End of explanation from ultrastorage.storagesystem import storage_system_builder capacities = [1000, 2000, 3000] cpus = [1, 2, 3] storage_system = storage_system_builder(capacities, cpus) print("storage system={}".format(storage_system)) Explanation: More on creating a strorage system The storage_system_builder(capacities, cpus, name=None, transfert_rate=1., storage_unit_names=None) function is a convenient function for both creating a storage system and adding storage units at the same time. Storage unitsare defined by giving the capacities and CPUs of the storage units to be added to the newly created storage system. End of explanation from ultrastorage.storagesystem import RegularStorageSystem # a regular storage system hosting storage units of capacity 1000 capacity = 1000 regular_storage_system = RegularStorageSystem(capacity) # add some storage units cpus = [1, 2, 3] for cpu in cpus: regular_storage_system.add_storage_unit(cpu) print("storage system={}".format(regular_storage_system)) from ultrastorage.storagesystem import regular_storage_system_builder # a regular storage system hosting 3 storage units of capacity 1000 each capacity = 1000 cpus = [1, 2, 3] regular_storage_system = regular_storage_system_builder(capacity, cpus) print("storage system={}".format(regular_storage_system)) Explanation: Some contrained storage systems are defined: a regular storage system is composed of storage units of the same capacity (by they may differ in the number of CPUs they have), a semi-homogeneous storage system is composed of storage units with the same number of CPUs (bu they may differ in the capacity they have), a homegeneous storage system is composed of storage units with the same capacity and the same number of CPUs. 
Regular storage systems End of explanation from ultrastorage.storagesystem import SemiHomogeneousStorageSystem # a semi-homogeneous storage system hosting storage units with 2 CPUs cpus = 2 semi_homogeneous_storage_system = SemiHomogeneousStorageSystem(cpus) # add some storage units capacities = [1000, 2000, 3000] for capacity in capacities: semi_homogeneous_storage_system.add_storage_unit(capacity) print("storage system={}".format(semi_homogeneous_storage_system)) from ultrastorage.storagesystem import semi_homogeneous_storage_system_builder # a semi-homogeneous storage system hosting 3 storage units with 2 CPUs each cpu = 2 capacities = [1000, 2000, 3000] semi_homogeneous_storage_system = semi_homogeneous_storage_system_builder(cpu, capacities) print("storage system={}".format(semi_homogeneous_storage_system)) Explanation: Semi-homegeneous storage systems End of explanation from ultrastorage.storagesystem import HomogeneousStorageSystem # an homogeneous storage system hosting storage units with 2 CPUs and capacity 1000 capacity = 1000 cpu = 2 homogeneous_storage_system = HomogeneousStorageSystem(capacity, cpu) # add some storage units number_of_storage_units = 3 for _ in range(number_of_storage_units): homogeneous_storage_system.add_storage_unit() print("storage system={}".format(homogeneous_storage_system)) from ultrastorage.storagesystem import homogeneous_storage_system_builder # an homogeneous storage system hosting 3 storage units with capacity 1000 and 2 CPUs number_of_storage_units = 3 capacity = 1000 cpu = 2 homogeneous_storage_system = homogeneous_storage_system_builder(number_of_storage_units, capacity, cpu) print("storage system={}".format(homogeneous_storage_system)) Explanation: Homegeneous storage systems End of explanation # create a new storage system storage_system = StorageSystem() # add one storage unit names "myStorageUnit" to the storage system capacity = 1000 cpu = 2 name = "myStorageUnit" storage_system.add_storage_unit(capacity, cpu, name) print("storage system={}".format(storage_system)) item = Item(400) storage_system.add_item(name, item) print("storage system={}".format(storage_system)) Explanation: Adding and deleting items Add items to the storage system using the add_item(self, storage_unit_name, item) method. End of explanation
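As a closing sketch that ties the builder and item methods together, the snippet below relies only on the signatures quoted in this section (storage_system_builder(capacities, cpus, name=None, transfert_rate=1., storage_unit_names=None), add_item(storage_unit_name, item) and Item); the unit names and item sizes are example values of my own.

```python
from ultrastorage.storagesystem import storage_system_builder
from ultrastorage.item import Item

# Build a storage system with explicitly named units, then place items on them.
capacities = [1000, 2000]
cpus = [1, 2]
unit_names = ["unitA", "unitB"]  # example names passed via storage_unit_names
system = storage_system_builder(capacities, cpus, name="demoSystem",
                                storage_unit_names=unit_names)

system.add_item("unitA", Item(250))   # item of size 250 on the first unit
system.add_item("unitB", Item(1500))  # item of size 1500 on the second unit
print(system)
```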
2,051
Given the following text description, write Python code to implement the functionality described below step by step Description: 0. Calibrate MCA Channels to sources of known emission energy Step1: 1. Test how the energy of scattered atoms varies with scattering angle Step2: 2. Use (1) to determine keV mass of electron Step3: 3. Which of Thomson and Klein-Nishina differential cross section is a better description?
Python Code: CS137Peaks = np.array([165.85]) #Channel Number of photopeak CS137Energy = np.array([661.7]) #Accepted value of emission energy BA133Peaks = np.array([21.59, 76.76, 90.52]) BA133Energy = np.array([81.0, 302.9, 356.0]) Mn54Peaks = np.array([207.72]) Mn54Energy = np.array([834.8]) Na22Peaks = np.array([128.84]) Na22Energy = np.array([511.0]) CO57Peaks = np.array([31.98]) CO57Energy = np.array([122.1]) Peaks = np.hstack([CS137Peaks,BA133Peaks,Mn54Peaks,Na22Peaks,CO57Peaks]) Energy = np.hstack([CS137Energy,BA133Energy,Mn54Energy,Na22Energy,CO57Energy]) plt.figure(figsize=(10,6)); plt.scatter(Peaks,Energy); plt.xlabel('MCA Number',fontsize=20); plt.ylabel('Energy (keV)',fontsize = 20); plt.xticks(size = 13); plt.yticks(size = 13); #plt.savefig('Sample') def myfun(N,a,b,c): ans = a + b*N + c*N**2 # this is y, "the function to be fit" return ans p0 = [-2,1,0] xlots = np.linspace(0,240) # need lots of data points for smooth curve yfit = np.zeros((len(Peaks),xlots.size)) plsq, pcov = curve_fit(myfun, Peaks, Energy, p0) # curve fit returns p and covariance matrix # these give the parameters and the uncertainties a = plsq[0] da = np.sqrt(pcov[0,0]) b = plsq[1] db = np.sqrt(pcov[1,1]) c = plsq[2] dc = np.sqrt(pcov[2,2]) yfit = myfun(xlots,plsq[0],plsq[1],plsq[2]) # use fit results for a, b, c print('a = %.7f +/- %.7f' % (plsq[0], np.sqrt(pcov[0,0]))) print('b = %.7f +/- %.7f' % (plsq[1], np.sqrt(pcov[1,1]))) print('c = %.7f +/- %.7f' % (plsq[2], np.sqrt(pcov[2,2]))) plt.figure(figsize=(10,6)); plt.scatter(Peaks,Energy); plt.xlim(0,240) plt.ylim(0,1000) plt.xlabel('x (mm)'); plt.ylabel('y (mm)'); plt.plot(xlots,yfit); plt.legend(['data','Fit'],loc='lower right'); plt.text(5,900,'a = %.1f +/- %.1f keV' % (plsq[0], np.sqrt(pcov[0,0])),size=17) plt.text(5,800,'b = %.2f +/- %.2f keV MCA$^{-1}$' % (plsq[1], np.sqrt(pcov[1,1])),size=17) plt.text(5,700,'c = %.1f +/- %.1f keV MCA$^{-2}$' % (plsq[2]*1e3, np.sqrt(pcov[2,2])*1e3),size=17) plt.xlabel('MCA Number',fontsize=20); plt.ylabel('Energy (keV)',fontsize = 20); plt.xticks(size = 13); plt.yticks(size = 13); plt.savefig('LinearMCAFit') Explanation: 0. Calibrate MCA Channels to sources of known emission energy End of explanation N = np.array([102.20, 85.85, 121.57, 140.34, 127.77, 115.69]) #Photopeak channel of scattered rays dN = np.array([5.37, 8.01, 5.13, 5.54, 8.91, 5.5]) #Uncertainty in channel number theta = np.array([60, 75, 45, 30, 40, 50])*np.pi/180 #Scattering angle entered in degrees converted to radians afterwords def deltaE(N,dN): daN = np.sqrt((da/a)**2 + (dN/N)**2)*(a*N) dbN2 = np.sqrt((db/b)**2 + 4*(dN/N)**2)*(b*N**2) dcN3 = np.sqrt((dc/c**2) + 9*(dN/N)**2)*(c*N**3) dEMeas = np.sqrt(daN**2 + dbN2**2 + dcN3**2)*1e-3 #Convert to KeV return dEMeas EMeas = myfun(N,a,b,c) EMeas dEMeas = deltaE(N,dN) dEMeas Eo = 661.7 #Initial keV energy of gamma rays (before scattering) mc2 = 511 #electron mass in keV def ECompton(Eo,mc2,theta): return Eo/(1+(Eo/mc2)*(1-np.cos(theta))) EComp = ECompton(Eo,mc2,theta) EComp thetas = np.linspace(0,np.pi,50); plt.figure(figsize=(10,6)); plt.plot(thetas,ECompton(Eo,mc2,thetas),label='Compton'); plt.errorbar(theta,EMeas,yerr = dEMeas,fmt='none'); plt.scatter(theta,EMeas,label='Measured',color='k'); plt.legend(); plt.xlabel('Scattering Angle [Radians]',fontsize=20); plt.ylabel('Final Energy (keV)',fontsize = 20); plt.xticks(size = 13); plt.yticks(size = 13); plt.xlim(0,np.pi); plt.savefig('ComptonEnergy') Explanation: 1. 
Test how the energy of scattered atoms varies with scattering angle End of explanation y = np.array([1/entry for entry in EMeas]) dy = np.array([dEMeas[i]/EMeas[i]**2 for i in np.arange(len(EMeas))]) x = np.array([1-np.cos(entry) for entry in theta]) plt.figure(figsize=(10,6)); plt.scatter(x + 1/Eo,y); plt.errorbar(x + 1/Eo,y,dy,fmt='none') plt.xlabel('(1-cos(theta))',fontsize=20); plt.ylabel('(1/Ef)',fontsize = 20); plt.xticks(size = 13); plt.yticks(size = 13); plt.ylim(0.0015,0.0035); def myfun2(x,mc2): # x = (1-np.cos(theta)) return 1/Eo + (1/mc2)*x p02 = [511] xlots2 = np.linspace(0,1) # need lots of data points for smooth curve yfit = np.zeros((len(Peaks),xlots2.size)) plsq, pcov = curve_fit(myfun2, np.array([1-np.cos(entry) for entry in theta]), np.array([1/entry for entry in EMeas]), p02) # curve fit returns p and covariance matrix # these give the parameters and the uncertainties mc2Meas = plsq[0] dmc2Meas = np.sqrt(pcov[0,0]) yfit2 = myfun2(xlots2,plsq[0]) # use fit results for a, b, c print('mc2Meas = (%.1f +/- %.1f) keV/c2' % (plsq[0], np.sqrt(pcov[0,0]))) plt.figure(figsize=(10,6)); plt.scatter(x + 1/Eo,y,label='Measured'); plt.errorbar(x + 1/Eo,y,dy,fmt='none') plt.plot(xlots2,yfit2,label='Fit') plt.legend(loc='upper left') plt.xlabel('(1-cos(theta))',fontsize=20); plt.ylabel('(1/$E_f$)',fontsize = 20); plt.xticks(size = 13); plt.yticks(size = 13); plt.ylim(0.0015,0.0031); plt.xlim(0,0.81) plt.text(0.01,0.0027,'$mc^2$ = (%.0f +/- %.0f) keV/$c^2$' % (plsq[0], np.sqrt(pcov[0,0])),size=17) plt.savefig('ElectronMass') Explanation: 2. Use (1) to determine keV mass of electron End of explanation EMeas #For determining efficiency from manual Counts = np.array([2446, 1513, 3357, 3231, 1285, 1944]) #This is the detector efficiency which is a function of the incoming gamma energy (EMeas) e = np.array([0.6, 0.67, 0.52, 0.475, 0.51, 0.55]) Counts = np.array([Counts[i]/e[i] for i in np.arange(len(Counts))]) unc = np.array([np.sqrt(entry) for entry in Counts]) Time = np.array([1531.76, 1952.72, 1970.43, 629.12, 663.42, 750.65]) Rates = np.array([Counts[i]/Time[i] for i in np.arange(len(Counts))]) unc = np.array([unc[i]/Time[i] for i in np.arange(len(Counts))]) def Thomson(theta): ro = 2.82*1e-15 return (1/2)*(ro**2)*(1+np.cos(theta)**2)*1.20e30 #set b = 1 def KleinNishina(theta): ro = 2.82*1e-15 gamma = Eo/mc2 return (1/2)*(ro**2)*(1+np.cos(theta)**2)*((1+gamma*(1-np.cos(theta)))**(-2))*(1+((gamma*(1-np.cos(theta)))**2)/((1+np.cos(theta)**2)*(1+gamma*(1-np.cos(theta)))))*1.20e30 thetas = np.linspace(0,np.pi/2,50); plt.figure(figsize=(10,6)); plt.plot(thetas,Thomson(thetas),label='Thomson'); plt.plot(thetas,KleinNishina(thetas),label='Klein-Nishina'); plt.scatter(theta,Rates,label='Measured',marker = '.',color='red') plt.errorbar(theta,Rates,unc,fmt='none') plt.legend(); plt.xlabel('Scattering Angle [Radians]',fontsize=20); plt.ylabel('Count Rate [$s^{-1}$]',fontsize = 20); plt.xticks(size = 13); plt.yticks(size = 13); plt.xlim(0,np.pi/2); plt.savefig('ThomsonKleinNishina') Explanation: 3. Which of Thomson and Klein-Nishina differential cross section is a better description? End of explanation
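For reference, the linear model fitted by myfun2 above follows directly from the Compton relation used in ECompton: with incident energy $E_0$, scattering angle $\theta$ and electron rest energy $mc^2$,

$$E_f = \frac{E_0}{1 + \frac{E_0}{mc^2}\,(1-\cos\theta)} \quad\Longrightarrow\quad \frac{1}{E_f} = \frac{1}{E_0} + \frac{1}{mc^2}\,(1-\cos\theta),$$

so plotting $1/E_f$ against $(1-\cos\theta)$ gives a straight line whose slope is $1/(mc^2)$, which is exactly the quantity extracted by the electron-mass fit.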
2,052
Given the following text description, write Python code to implement the functionality described below step by step Description: Training a classifier This is it. You have seen how to define neural networks, compute loss and make updates to the weights of the network. Now you might be thinking, What about data? Generally, when you have to deal with image, text, audio or video data, you can use standard python packages that load data into a numpy array. Then you can convert this array into a torch.*Tensor. For images, packages such as Pillow, OpenCV are useful. For audio, packages such as scipy and librosa For text, either raw Python or Cython based loading, or NLTK and SpaCy are useful. Specifically for vision, we have created a package called torchvision, that has data loaders for common datasets such as Imagenet, CIFAR10, MNIST, etc. and data transformers for images, viz., torchvision.datasets and torch.utils.data.DataLoader. This provides a huge convenience and avoids writing boilerplate code. For this tutorial, we will use the CIFAR10 dataset. It has the classes Step1: The output of torchvision datasets are PILImage images of range [0, 1]. We transform them to Tensors of normalized range [-1, 1] Step2: Let us show some of the training images, for fun. Step3: Define a Convolution Neural Network ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Copy the neural network from the Neural Networks section before and modify it to take 3-channel images (instead of 1-channel images as it was defined). Step4: Define a Loss function and optimizer ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Let's use a Classification Cross-Entropy loss and SGD with momentum Step5: Train the network ^^^^^^^^^^^^^^^^^^^^ This is when things start to get interesting. We simply have to loop over our data iterator, and feed the inputs to the network and optimize
Python Code: import torch import torchvision import torchvision.transforms as transforms Explanation: Training a classifier This is it. You have seen how to define neural networks, compute loss and make updates to the weights of the network. Now you might be thinking, What about data? Generally, when you have to deal with image, text, audio or video data, you can use standard python packages that load data into a numpy array. Then you can convert this array into a torch.*Tensor. For images, packages such as Pillow, OpenCV are useful. For audio, packages such as scipy and librosa For text, either raw Python or Cython based loading, or NLTK and SpaCy are useful. Specifically for vision, we have created a package called torchvision, that has data loaders for common datasets such as Imagenet, CIFAR10, MNIST, etc. and data transformers for images, viz., torchvision.datasets and torch.utils.data.DataLoader. This provides a huge convenience and avoids writing boilerplate code. For this tutorial, we will use the CIFAR10 dataset. It has the classes: ‘airplane’, ‘automobile’, ‘bird’, ‘cat’, ‘deer’, ‘dog’, ‘frog’, ‘horse’, ‘ship’, ‘truck’. The images in CIFAR-10 are of size 3x32x32, i.e. 3-channel color images of 32x32 pixels in size. .. figure:: /_static/img/cifar10.png :alt: cifar10 cifar10 Training an image classifier We will do the following steps in order: Load and normalizing the CIFAR10 training and test datasets using torchvision Define a Convolution Neural Network Define a loss function Train the network on the training data Test the network on the test data Loading and normalizing CIFAR10 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Using torchvision, it’s extremely easy to load CIFAR10. End of explanation transform = transforms.Compose( [transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]) trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform) trainloader = torch.utils.data.DataLoader(trainset, batch_size=4, shuffle=True, num_workers=2) testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform) testloader = torch.utils.data.DataLoader(testset, batch_size=4, shuffle=False, num_workers=2) classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck') Explanation: The output of torchvision datasets are PILImage images of range [0, 1]. We transform them to Tensors of normalized range [-1, 1] End of explanation import matplotlib.pyplot as plt import numpy as np # functions to show an image def imshow(img): img = img / 2 + 0.5 # unnormalize npimg = img.numpy() plt.imshow(np.transpose(npimg, (1, 2, 0))) # get some random training images dataiter = iter(trainloader) images, labels = dataiter.next() # show images imshow(torchvision.utils.make_grid(images)) # print labels print(' '.join('%5s' % classes[labels[j]] for j in range(4))) Explanation: Let us show some of the training images, for fun. 
End of explanation from torch.autograd import Variable import torch.nn as nn import torch.nn.functional as F class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.conv1 = nn.Conv2d(3, 16, 5) self.pool = nn.MaxPool2d(2, 2) self.conv2 = nn.Conv2d(16, 16, 5) self.fc1 = nn.Linear(16 * 5 * 5, 120) self.fc2 = nn.Linear(120, 84) self.fc3 = nn.Linear(84, 10) def forward(self, x): x = self.pool(F.relu(self.conv1(x))) x = self.pool(F.relu(self.conv2(x))) x = x.view(-1, 16 * 5 * 5) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = self.fc3(x) return x net = Net() Explanation: Define a Convolution Neural Network ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Copy the neural network from the Neural Networks section before and modify it to take 3-channel images (instead of 1-channel images as it was defined). End of explanation import torch.optim as optim criterion = nn.CrossEntropyLoss() optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9) Explanation: Define a Loss function and optimizer ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Let's use a Classification Cross-Entropy loss and SGD with momentum End of explanation for epoch in range(5): # loop over the dataset multiple times running_loss = 0.0 for i, data in enumerate(trainloader, 0): # get the inputs inputs, labels = data # wrap them in Variable inputs, labels = Variable(inputs), Variable(labels) # zero the parameter gradients optimizer.zero_grad() # forward + backward + optimize outputs = net(inputs) loss = criterion(outputs, labels) loss.backward() optimizer.step() # print statistics running_loss += loss.data[0] if i % 2000 == 1999: # print every 2000 mini-batches print('[%d, %5d] loss: %.3f' % (epoch + 1, i + 1, running_loss / 2000)) running_loss = 0.0 print('Finished Training') class_correct = list(0. for i in range(10)) class_total = list(0. for i in range(10)) for data in testloader: images, labels = data outputs = net(Variable(images)) _, predicted = torch.max(outputs.data, 1) c = (predicted == labels).squeeze() for i in range(4): label = labels[i] class_correct[label] += c[i] class_total[label] += 1 for i in range(10): print('Accuracy of %5s : %2d %%' % ( classes[i], 100 * class_correct[i] / class_total[i])) Explanation: Train the network ^^^^^^^^^^^^^^^^^^^^ This is when things start to get interesting. We simply have to loop over our data iterator, and feed the inputs to the network and optimize End of explanation
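The loop above reports per-class accuracy; the description also asks for testing the network on the full test set, so here is a minimal sketch of the overall accuracy computation. It reuses only the net, testloader and Variable objects defined above and keeps the same Variable-era API as the rest of this example.

```python
# Overall accuracy on the CIFAR-10 test set.
correct = 0
total = 0
for data in testloader:
    images, labels = data
    outputs = net(Variable(images))
    _, predicted = torch.max(outputs.data, 1)
    total += labels.size(0)
    correct += (predicted == labels).sum()

print('Accuracy of the network on the test images: %d %%' % (100 * correct / total))
```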
2,053
Given the following text description, write Python code to implement the functionality described below step by step Description: Tutorial on RVM Regression In this tutorial we play around with linear regression in form of Relevance Vector Machines (RVMs) using linear and localized kernels. And heeeere we go! Step1: First things first, let's set up up the database to regress. Step2: 1. Single Regression 1.1 Linear Kernel Neat now let's test whether we can regress that data using a polynomial feature space. Step3: 1.2 Localized Kernel Indeed that seemed to work. But what about a Gaussian feature space, will it be able to fit the Gaussian? Step4: 2. Repeated Regressions Indeed using a Gaussian basis set, for some mysterious reason, gave a closer estimate to the real data with tighter confidence intervals. Now let's do the same again for both kernels but multiple times initializing the hyperparmaeters such that we sample them from distributions as well. 2.1 Linear Kernel Step5: 2.2 Localized kernel
Python Code: %matplotlib inline from linear_model import RelevanceVectorMachine, distribution_wrapper, GaussianFeatures, \ FourierFeatures, repeated_regression, plot_summary from sklearn import preprocessing import numpy as np from scipy import stats import matplotlib# import matplotlib.pylab as plt matplotlib.rc('text', usetex=True) matplotlib.rcParams['text.latex.preamble']=[r"\usepackage{amsmath}"] Explanation: Tutorial on RVM Regression In this tutorial we play around with linear regression in form of Relevance Vector Machines (RVMs) using linear and localized kernels. And heeeere we go! End of explanation x = np.linspace(-np.pi,np.pi,100) x_pred = np.linspace(-1.5*np.pi,1.5*np.pi,200) epsilon = stats.norm(loc=0,scale=0.01) noise = epsilon.rvs(size=x.shape[0]) t = np.exp(-x**2) + noise fig = plt.figure(figsize=(5,5)) plt.plot(x,t,'ro',markerfacecolor="None",label="data") plt.xlabel("input") plt.ylabel("output") plt.legend(loc=0) plt.show() Explanation: First things first, let's set up up the database to regress. End of explanation # choosing the feature space k = 5 trafo = preprocessing.PolynomialFeatures(k) X = trafo.fit_transform(x.reshape((-1,1))) # initializing hyperparameters init_beta = 1./ np.var(t) # (that's the default start) init_alphas = np.ones(X.shape[1]) init_alphas[1:] = np.inf # setting up the model regression class model = RelevanceVectorMachine(n_iter=250,verbose=False,compute_score=True,init_beta=init_beta, init_alphas=init_alphas) # regress model.fit(X,t) # predict X_pred = trafo.fit_transform(x_pred.reshape((-1,1))) y, yerr = model.predict(X_pred,return_std=True) fig = plt.figure() ax = fig.add_subplot(121) ax.plot(x,t,'ro',label="data",markerfacecolor="None") ax.fill_between(x_pred,y-2*yerr,y+2*yerr,alpha=.5,label="95\%") ax.plot(x_pred,y,'-',label="estimate") plt.legend(loc=0) ax.set_xlabel("input") ax.set_ylabel("output") ax1 = fig.add_subplot(122) ax1.plot(model.mse_,'-') ax1.set_xlabel("iteration") ax1.set_ylabel("MSE") plt.tight_layout() plt.show() Explanation: 1. Single Regression 1.1 Linear Kernel Neat now let's test whether we can regress that data using a polynomial feature space. End of explanation # choosing the feature space trafo = GaussianFeatures(k=30,mu0=-3,dmu=.2) X = trafo.fit_transform(x.reshape((-1,1))) # initializing hyperparameters init_beta = 1./ np.var(t) # (that's the default start) init_alphas = np.ones(X.shape[1]) init_alphas[1:] = np.inf # setting up the model regression class model = RelevanceVectorMachine(n_iter=250,verbose=False,compute_score=True,init_beta=init_beta, init_alphas=init_alphas) # regress model.fit(X,t) # predict X_pred = trafo.fit_transform(x_pred.reshape((-1,1))) y, yerr = model.predict(X_pred,return_std=True) fig = plt.figure() ax = fig.add_subplot(121) ax.plot(x,t,'ro',label="data",markerfacecolor="None") ax.fill_between(x_pred,y-2*yerr,y+2*yerr,alpha=.5,label="95\%") ax.plot(x_pred,y,'-',label="estimate") plt.legend(loc=0) ax.set_xlabel("input") ax.set_ylabel("output") ax1 = fig.add_subplot(122) ax1.plot(model.mse_,'-') ax1.set_xlabel("iteration") ax1.set_ylabel("MSE") plt.tight_layout() plt.show() Explanation: 1.2 Localized Kernel Indeed that seemed to work. But what about a Gaussian feature space, will it be able to fit the Gaussian? 
End of explanation # choosing the feature space k = 5 trafo = preprocessing.PolynomialFeatures(k) X = trafo.fit_transform(x.reshape((-1,1))) base_trafo = trafo.fit_transform # initializing hyperparameters using callable distributions giving new hyperparameters # with every call (useful for repeated regression) init_beta = distribution_wrapper(stats.halfnorm(scale=1),size=1,single=True) init_alphas = distribution_wrapper(stats.halfnorm(scale=1),single=False) model_type = RelevanceVectorMachine model_kwargs = dict(n_iter=250,verbose=False,compute_score=True,init_beta=init_beta, init_alphas=init_alphas,fit_intercept=False) Nruns = 100 runtimes, coefs, models = repeated_regression(x,base_trafo,model_type,t=t, model_kwargs=model_kwargs,Nruns=Nruns, return_coefs=True,return_models=True) plot_summary(models,noise,x,t,X,coefs,base_trafo) Explanation: 2. Repeated Regressions Indeed using a Gaussian basis set, for some mysterious reason, gave a closer estimate to the real data with tighter confidence intervals. Now let's do the same again for both kernels but multiple times initializing the hyperparmaeters such that we sample them from distributions as well. 2.1 Linear Kernel End of explanation # choosing the feature space trafo = GaussianFeatures(k=30,mu0=-3,dmu=.2) base_trafo = trafo.fit_transform # initializing hyperparameters using callable distributions giving new hyperparameters # with every call (useful for repeated regression) init_beta = distribution_wrapper(stats.halfnorm(scale=1),size=1,single=True) init_alphas = distribution_wrapper(stats.halfnorm(scale=1),single=False) model_type = RelevanceVectorMachine model_kwargs = dict(n_iter=250,verbose=False,compute_score=True,init_beta=init_beta, init_alphas=init_alphas,fit_intercept=False) Nruns = 100 runtimes, coefs, models = repeated_regression(x,base_trafo,model_type,t=t, model_kwargs=model_kwargs,Nruns=Nruns, return_coefs=True,return_models=True) X = base_trafo(x.reshape((-1,1))) plot_summary(models,noise,x,t,X,coefs,base_trafo) Explanation: 2.2 Localized kernel End of explanation
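To make the sampled-hyperparameter idea concrete, the short sketch below simply draws a few values from the same half-normal prior that is handed to distribution_wrapper; it uses only scipy.stats, and the printed numbers will of course differ between runs.

```python
from scipy import stats

# Each repeated regression starts from freshly drawn positive initial values;
# these samples illustrate what the half-normal prior with scale=1 produces.
halfnorm = stats.halfnorm(scale=1)
print("example initial hyperparameter draws:", halfnorm.rvs(size=5))
```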
2,054
Given the following text description, write Python code to implement the functionality described below step by step Description: Intro to Thinc for beginners Step1: There are also some optional extras to install, depending on whether you want to run this on GPU, and depending on which of the integrations you want to test. Step2: If you're running the notebook on GPU, the first thing to do is use Thinc's prefer_gpu helper to make sure we're performing operations on GPU if available. The function should be called right after importing Thinc, and it returns a boolean indicating whether the GPU has been activated. If you want to test out an integration with another library, you should check that it can access the GPU too. Step3: We’ve prepared a separate package ml-datasets with loaders for some common datasets, including MNIST. So we can set up the data as follows Step4: Now let’s define a model with two Relu-activated hidden layers, followed by a softmax-activated output layer. We’ll also add dropout after the two hidden layers, to help the model generalize better. The chain combinator is like Sequential in PyTorch or Keras Step5: After creating the model, we can call the Model.initialize method, passing in a small batch of input data X and a small batch of output data Y. This allows Thinc to infer the missing dimensions Step6: Next we need to create an optimizer, and make several passes over the data, randomly selecting paired batches of the inputs and labels each time. While some machine learning libraries provide a single .fit() method to train a model all at once, Thinc puts you in charge of shuffling and batching your data, with the help of a few handy utility methods. model.ops.xp is an instance of either numpy or cupy, depending on whether you run the code on CPU or GPU. Step7: Let's wrap the training code in a function, so we can reuse it later Step8: Operator overloading for more concise model definitions Thinc allows you to overload operators and bind arbitrary functions to Python operators like +, *, but also &gt;&gt; or @. The Model.define_operators contextmanager takes a dict of operators mapped to functions – typically combinators like chain. The operators are only valid for the with block. This lets us define the model like this Step10: If your model definitions are very complex, mapping combinators to operators can help you keep the code readable and concise. You can find more examples of model definitions with overloaded operators in the docs. (Also note that you don't have to use this syntax!) Using config files Configuration is a huge problem for machine learning code, because you may want to expose almost any detail of any function as a hyperparameter. The setting you want to expose might be arbitrarily far down in your call stack. Default values also become hard to change without breaking backwards compatibility. To solve this problem, Thinc provides a config system that lets you easily describe arbitrary trees of objects. The objects can be created via function calls you register using a simple decorator syntax. The config can include values like hyperparameters or training settings (whatever you need), or references to functions and the values of their arguments. Thinc will then construct the config bottom-up – so you can define one function with its arguments, and then pass the return value into another function. 💡 You can keep the config as a string in your Python script, or save it to a file like config.cfg. 
To load a config from a string, you can use Config.from_str. To load from a file, you can use Config.from_disk. The following examples all use strings so we can include them in the notebook. Step11: When you open the config with Config.from_str, Thinc will parse it as a dict and fill in the references to values defined in other sections. For example, ${hyper_params Step13: If function arguments are missing or have incompatible types, Thinc will raise an error and tell you what's wrong. Configs can also define nested blocks using the . notation. In this example, optimizer.learn_rate defines the learn_rate argument of the optimizer block. Instead of a float, the learning rate can also be a generator – for instance, a linear warm-up rate Step14: Calling registry.resolve will now construct the objects bottom-up Step16: This gives you a loaded optimizer using the settings defined in the config, which you can then use in your script. How you set up your config and what you do with the result is entirely up to you. Thinc just gives you a dictionary of objects back and makes no assumptions about what they "mean". This means that you can also choose the names of the config sections – the only thing that needs to stay consistent are the names of the function arguments. Configuring the MNIST model Here's a config describing the model we defined above. The values in the hyper_params section can be referenced in other sections to keep them consistent. The * is used for positional arguments – in this case, the arguments to the chain function, two Relu layers and one softmax layer. Step17: When you call registry.resolve, Thinc will first create the three layers using the specified arguments populated by the hyperparameters. It will then pass the return values (the layer objects) to chain. It will also create an optimizer. All other values, like the training config, will be passed through as a regular dict. Your training code can now look like this Step18: If you want to change a hyperparamter or experiment with a different optimizer, all you need to change is the config. For each experiment you run, you can save a config and you'll be able to reproduce it later. Programming via config vs. registering custom functions The config system is very powerful and lets you define complex relationships, including model definitions with levels of nested layers. However, it's not always a good idea to program entirely in your config – this just replaces one problem (messy and hard to maintain code) with another one (messy and hard to maintain configs). So ultimately, it's about finding the best possible trade-off. If you've written a layer or model definition you're happy with, you can use Thinc's function registry to register it and assign it a string name. Your function can take any arguments that can later be defined in the config. Adding type hints ensures that config settings will be parsed and validated before they're passed into the function, so you don't end up with incompatible settings and confusing failures later on. Here's the MNIST model, defined as a custom layer Step20: In the config, we can now refer to it by name and set its arguments. This makes the config maintainable and compact, while still allowing you to change and record the hyperparameters. Step22: If you don't want to hard-code the dataset being used, you can also wrap it in a registry function. This lets you refer to it by name in the config, and makes it easy to swap it out. 
In your config, you can then load the data in its own section, or as a subsection of training. Step23: Wrapping TensorFlow, PyTorch and MXNet models The previous example showed how to define the model directly in Thinc, which is pretty straightforward. But you can also define your model using a machine learning library of your choice and wrap it as a Thinc model. This gives your layers a unified interface so you can easily mix and match them, and also lets you take advantage of the config system and type hints. Thinc currently ships with built-in wrappers for PyTorch, TensorFlow and MXNet. Wrapping TensorFlow models Here's the same model definition in TensorFlow Step24: You can now use the same training code to train the model Step25: Wrapping PyTorch models Here's the PyTorch version. Thinc's PyTorchWrapper wraps the model and turns it into a regular Thinc Model. Step26: You can now use the same training code to train the model Step27: Wrapping MXNet models Here's the MXNet version. Thinc's MXNetWrapper wraps the model and turns it into a regular Thinc Model. MXNet doesn't provide a Softmax layer but a .softmax() operation/method for prediction and it integrates an internal softmax during training. So to be able to integrate it with the rest of the components, you combine it with a Softmax() Thinc layer using the chain combinator. Make sure you initialize() the MXNet model and the Thinc model. Step28: And train it the same way
Python Code: !pip install "thinc==8.0.0rc6.dev0" "ml_datasets>=0.2.0a0" "tqdm>=4.41" Explanation: Intro to Thinc for beginners: defining a simple model and config & wrapping PyTorch, TensorFlow and MXNet This example shows how to get started with Thinc, using the "hello world" of neural network models: recognizing handwritten digits from the MNIST dataset. For comparison, here's the same model implemented in other frameworks: PyTorch version, TensorFlow version. In this notebook, we'll walk through creating and training the model, using config files, registering custom functions and wrapping models defined in PyTorch, TensorFlow and MXNet. This tutorial is aimed at beginners, but it assumes basic knowledge of machine learning concepts and terminology. End of explanation import thinc.util # If you want to run this notebook on GPU, you'll need to install cupy. if not thinc.util.has_cupy: !pip install "cupy-cuda101" import thinc.util # If you want to try out the tensorflow integration, you'll need to install that. # You'll either need to do tensorflow or tensorflow-gpu, depending on your # requirements. if not thinc.util.has_tensorflow: !pip install "tensorflow-gpu>=2" import thinc.util # If you want to try out the PyTorch integration, you'll need to install it. if not thinc.util.has_torch: !pip install "torch" import thinc.util # If you want to try out the MxNet integration, you'll need to install it. if not thinc.util.has_mxnet: !pip install "mxnet" Explanation: There are also some optional extras to install, depending on whether you want to run this on GPU, and depending on which of the integrations you want to test. End of explanation from thinc.api import prefer_gpu import thinc.util print("Thinc GPU?", prefer_gpu()) if thinc.util.has_tensorflow: import tensorflow as tf print("Tensorflow GPU?", bool(tf.config.experimental.list_physical_devices('GPU'))) Explanation: If you're running the notebook on GPU, the first thing to do is use Thinc's prefer_gpu helper to make sure we're performing operations on GPU if available. The function should be called right after importing Thinc, and it returns a boolean indicating whether the GPU has been activated. If you want to test out an integration with another library, you should check that it can access the GPU too. End of explanation import ml_datasets (train_X, train_Y), (dev_X, dev_Y) = ml_datasets.mnist() print(f"Training size={len(train_X)}, dev size={len(dev_X)}") Explanation: We’ve prepared a separate package ml-datasets with loaders for some common datasets, including MNIST. So we can set up the data as follows: End of explanation from thinc.api import chain, Relu, Softmax n_hidden = 32 dropout = 0.2 model = chain( Relu(nO=n_hidden, dropout=dropout), Relu(nO=n_hidden, dropout=dropout), Softmax() ) Explanation: Now let’s define a model with two Relu-activated hidden layers, followed by a softmax-activated output layer. We’ll also add dropout after the two hidden layers, to help the model generalize better. The chain combinator is like Sequential in PyTorch or Keras: it combines a list of layers together with a feed-forward relationship. 
End of explanation # making sure the data is on the right device train_X = model.ops.asarray(train_X) train_Y = model.ops.asarray(train_Y) dev_X = model.ops.asarray(dev_X) dev_Y = model.ops.asarray(dev_Y) model.initialize(X=train_X[:5], Y=train_Y[:5]) nI = model.get_dim("nI") nO = model.get_dim("nO") print(f"Initialized model with input dimension nI={nI} and output dimension nO={nO}") Explanation: After creating the model, we can call the Model.initialize method, passing in a small batch of input data X and a small batch of output data Y. This allows Thinc to infer the missing dimensions: when we defined the model, we didn’t tell it the input size nI or the output size nO. When passing in the data, make sure it is on the right device by calling model.ops.asarray which will e.g. transform the arrays to cupy when running on GPU. End of explanation from thinc.api import Adam, fix_random_seed from tqdm.notebook import tqdm fix_random_seed(0) optimizer = Adam(0.001) batch_size = 128 print("Measuring performance across iterations:") for i in range(10): batches = model.ops.multibatch(batch_size, train_X, train_Y, shuffle=True) for X, Y in tqdm(batches, leave=False): Yh, backprop = model.begin_update(X) backprop(Yh - Y) model.finish_update(optimizer) # Evaluate and print progress correct = 0 total = 0 for X, Y in model.ops.multibatch(batch_size, dev_X, dev_Y): Yh = model.predict(X) correct += (Yh.argmax(axis=1) == Y.argmax(axis=1)).sum() total += Yh.shape[0] score = correct / total print(f" {i} {float(score):.3f}") Explanation: Next we need to create an optimizer, and make several passes over the data, randomly selecting paired batches of the inputs and labels each time. While some machine learning libraries provide a single .fit() method to train a model all at once, Thinc puts you in charge of shuffling and batching your data, with the help of a few handy utility methods. model.ops.xp is an instance of either numpy or cupy, depending on whether you run the code on CPU or GPU. End of explanation def train_model(data, model, optimizer, n_iter, batch_size): (train_X, train_Y), (dev_X, dev_Y) = data indices = model.ops.xp.arange(train_X.shape[0], dtype="i") for i in range(n_iter): batches = model.ops.multibatch(batch_size, train_X, train_Y, shuffle=True) for X, Y in tqdm(batches, leave=False): Yh, backprop = model.begin_update(X) backprop(Yh - Y) model.finish_update(optimizer) # Evaluate and print progress correct = 0 total = 0 for X, Y in model.ops.multibatch(batch_size, dev_X, dev_Y): Yh = model.predict(X) correct += (Yh.argmax(axis=1) == Y.argmax(axis=1)).sum() total += Yh.shape[0] score = correct / total print(f" {i} {float(score):.3f}") Explanation: Let's wrap the training code in a function, so we can reuse it later: End of explanation from thinc.api import Model, chain, Relu, Softmax n_hidden = 32 dropout = 0.2 with Model.define_operators({">>": chain}): model = Relu(nO=n_hidden, dropout=dropout) >> Relu(nO=n_hidden, dropout=dropout) >> Softmax() Explanation: Operator overloading for more concise model definitions Thinc allows you to overload operators and bind arbitrary functions to Python operators like +, *, but also &gt;&gt; or @. The Model.define_operators contextmanager takes a dict of operators mapped to functions – typically combinators like chain. The operators are only valid for the with block. 
This lets us define the model like this: End of explanation from thinc.api import Config, registry EXAMPLE_CONFIG1 = [hyper_params] learn_rate = 0.001 [optimizer] @optimizers = "Adam.v1" learn_rate = ${hyper_params:learn_rate} config1 = Config().from_str(EXAMPLE_CONFIG1) config1 Explanation: If your model definitions are very complex, mapping combinators to operators can help you keep the code readable and concise. You can find more examples of model definitions with overloaded operators in the docs. (Also note that you don't have to use this syntax!) Using config files Configuration is a huge problem for machine learning code, because you may want to expose almost any detail of any function as a hyperparameter. The setting you want to expose might be arbitrarily far down in your call stack. Default values also become hard to change without breaking backwards compatibility. To solve this problem, Thinc provides a config system that lets you easily describe arbitrary trees of objects. The objects can be created via function calls you register using a simple decorator syntax. The config can include values like hyperparameters or training settings (whatever you need), or references to functions and the values of their arguments. Thinc will then construct the config bottom-up – so you can define one function with its arguments, and then pass the return value into another function. 💡 You can keep the config as a string in your Python script, or save it to a file like config.cfg. To load a config from a string, you can use Config.from_str. To load from a file, you can use Config.from_disk. The following examples all use strings so we can include them in the notebook. End of explanation loaded_config1 = registry.resolve(config1) loaded_config1 Explanation: When you open the config with Config.from_str, Thinc will parse it as a dict and fill in the references to values defined in other sections. For example, ${hyper_params:learn_rate} is substituted with 0.001. Keys starting with @ are references to registered functions. For example, @optimizers = "Adam.v1" refers to the function registered under the name "Adam.v1", a function creating an Adam optimizer. The function takes one argument, the learn_rate. Calling registry.resolve will resolve the config and create the functions it defines. End of explanation EXAMPLE_CONFIG2 = [optimizer] @optimizers = "Adam.v1" [optimizer.learn_rate] @schedules = "warmup_linear.v1" initial_rate = 2e-5 warmup_steps = 1000 total_steps = 10000 config2 = Config().from_str(EXAMPLE_CONFIG2) config2 Explanation: If function arguments are missing or have incompatible types, Thinc will raise an error and tell you what's wrong. Configs can also define nested blocks using the . notation. In this example, optimizer.learn_rate defines the learn_rate argument of the optimizer block. Instead of a float, the learning rate can also be a generator – for instance, a linear warm-up rate: End of explanation loaded_config2 = registry.resolve(config2) loaded_config2 Explanation: Calling registry.resolve will now construct the objects bottom-up: first, it will create the schedule with the given arguments. Next, it will create the optimizer and pass in the schedule as the learn_rate argument. 
End of explanation CONFIG = [hyper_params] n_hidden = 32 dropout = 0.2 learn_rate = 0.001 [model] @layers = "chain.v1" [model.*.relu1] @layers = "Relu.v1" nO = ${hyper_params:n_hidden} dropout = ${hyper_params:dropout} [model.*.relu2] @layers = "Relu.v1" nO = ${hyper_params:n_hidden} dropout = ${hyper_params:dropout} [model.*.softmax] @layers = "Softmax.v1" [optimizer] @optimizers = "Adam.v1" learn_rate = ${hyper_params:learn_rate} [training] n_iter = 10 batch_size = 128 config = Config().from_str(CONFIG) config loaded_config = registry.resolve(config) loaded_config Explanation: This gives you a loaded optimizer using the settings defined in the config, which you can then use in your script. How you set up your config and what you do with the result is entirely up to you. Thinc just gives you a dictionary of objects back and makes no assumptions about what they "mean". This means that you can also choose the names of the config sections – the only thing that needs to stay consistent are the names of the function arguments. Configuring the MNIST model Here's a config describing the model we defined above. The values in the hyper_params section can be referenced in other sections to keep them consistent. The * is used for positional arguments – in this case, the arguments to the chain function, two Relu layers and one softmax layer. End of explanation model = loaded_config["model"] optimizer = loaded_config["optimizer"] n_iter = loaded_config["training"]["n_iter"] batch_size = loaded_config["training"]["batch_size"] model.initialize(X=train_X[:5], Y=train_Y[:5]) train_model(((train_X, train_Y), (dev_X, dev_Y)), model, optimizer, n_iter, batch_size) Explanation: When you call registry.resolve, Thinc will first create the three layers using the specified arguments populated by the hyperparameters. It will then pass the return values (the layer objects) to chain. It will also create an optimizer. All other values, like the training config, will be passed through as a regular dict. Your training code can now look like this: End of explanation import thinc @thinc.registry.layers("MNIST.v1") def create_mnist(nO: int, dropout: float): return chain( Relu(nO, dropout=dropout), Relu(nO, dropout=dropout), Softmax() ) Explanation: If you want to change a hyperparamter or experiment with a different optimizer, all you need to change is the config. For each experiment you run, you can save a config and you'll be able to reproduce it later. Programming via config vs. registering custom functions The config system is very powerful and lets you define complex relationships, including model definitions with levels of nested layers. However, it's not always a good idea to program entirely in your config – this just replaces one problem (messy and hard to maintain code) with another one (messy and hard to maintain configs). So ultimately, it's about finding the best possible trade-off. If you've written a layer or model definition you're happy with, you can use Thinc's function registry to register it and assign it a string name. Your function can take any arguments that can later be defined in the config. Adding type hints ensures that config settings will be parsed and validated before they're passed into the function, so you don't end up with incompatible settings and confusing failures later on. 
Here's the MNIST model, defined as a custom layer: End of explanation CONFIG2 = [model] @layers = "MNIST.v1" nO = 32 dropout = 0.2 [optimizer] @optimizers = "Adam.v1" learn_rate = 0.001 [training] n_iter = 10 batch_size = 128 config = Config().from_str(CONFIG2) config loaded_config = registry.resolve(config) loaded_config Explanation: In the config, we can now refer to it by name and set its arguments. This makes the config maintainable and compact, while still allowing you to change and record the hyperparameters. End of explanation @thinc.registry.datasets("mnist_data.v1") def mnist(): return ml_datasets.mnist() CONFIG3 = [model] @layers = "MNIST.v1" nO = 32 dropout = 0.2 [optimizer] @optimizers = "Adam.v1" learn_rate = 0.001 [training] n_iter = 10 batch_size = 128 [training.data] @datasets = "mnist_data.v1" config = Config().from_str(CONFIG3) loaded_config = registry.resolve(config) loaded_config model = loaded_config["model"] optimizer = loaded_config["optimizer"] n_iter = loaded_config["training"]["n_iter"] batch_size = loaded_config["training"]["batch_size"] (train_X, train_Y), (dev_X, dev_Y) = loaded_config["training"]["data"] # After loading the data from config, they might still need to be moved to the right device train_X = model.ops.asarray(train_X) train_Y = model.ops.asarray(train_Y) dev_X = model.ops.asarray(dev_X) dev_Y = model.ops.asarray(dev_Y) model.initialize(X=train_X[:5], Y=train_Y[:5]) train_model(((train_X, train_Y), (dev_X, dev_Y)), model, optimizer, n_iter, batch_size) Explanation: If you don't want to hard-code the dataset being used, you can also wrap it in a registry function. This lets you refer to it by name in the config, and makes it easy to swap it out. In your config, you can then load the data in its own section, or as a subsection of training. End of explanation from tensorflow.keras.layers import Dense, Dropout from tensorflow.keras.models import Sequential from thinc.api import TensorFlowWrapper, Adam width = 32 nO = 10 nI = 784 dropout = 0.2 tf_model = Sequential() tf_model.add(Dense(width, activation="relu", input_shape=(nI,))) tf_model.add(Dropout(dropout)) tf_model.add(Dense(width, activation="relu", input_shape=(nI,))) tf_model.add(Dropout(dropout)) tf_model.add(Dense(nO, activation="softmax")) wrapped_tf_model = TensorFlowWrapper(tf_model) wrapped_tf_model Explanation: Wrapping TensorFlow, PyTorch and MXNet models The previous example showed how to define the model directly in Thinc, which is pretty straightforward. But you can also define your model using a machine learning library of your choice and wrap it as a Thinc model. This gives your layers a unified interface so you can easily mix and match them, and also lets you take advantage of the config system and type hints. Thinc currently ships with built-in wrappers for PyTorch, TensorFlow and MXNet. Wrapping TensorFlow models Here's the same model definition in TensorFlow: a Sequential layer (equivalent of Thinc's chain) with two Relu layers and dropout, and an output layer with a softmax activation. Thinc's TensorFlowWrapper wraps the model and turns it into a regular Thinc Model. 
End of explanation data = ml_datasets.mnist() optimizer = Adam(0.001) wrapped_tf_model.initialize(X=train_X[:5], Y=train_Y[:5]) train_model(data, wrapped_tf_model, optimizer, n_iter=10, batch_size=128) Explanation: You can now use the same training code to train the model: End of explanation import torch import torch.nn import torch.nn.functional as F from thinc.api import PyTorchWrapper, Adam width = 32 nO = 10 nI = 784 dropout = 0.2 class PyTorchModel(torch.nn.Module): def __init__(self, width, nO, nI, dropout): super(PyTorchModel, self).__init__() self.dropout1 = torch.nn.Dropout2d(dropout) self.dropout2 = torch.nn.Dropout2d(dropout) self.fc1 = torch.nn.Linear(nI, width) self.fc2 = torch.nn.Linear(width, nO) def forward(self, x): x = F.relu(x) x = self.dropout1(x) x = self.fc1(x) x = F.relu(x) x = self.dropout2(x) x = self.fc2(x) output = F.log_softmax(x, dim=1) return output wrapped_pt_model = PyTorchWrapper(PyTorchModel(width, nO, nI, dropout)) wrapped_pt_model Explanation: Wrapping PyTorch models Here's the PyTorch version. Thinc's PyTorchWrapper wraps the model and turns it into a regular Thinc Model. End of explanation data = ml_datasets.mnist() optimizer = Adam(0.001) wrapped_pt_model.initialize(X=train_X[:5], Y=train_Y[:5]) train_model(data, wrapped_pt_model, optimizer, n_iter=10, batch_size=128) Explanation: You can now use the same training code to train the model: End of explanation !pip install "mxnet>=1.5.1,<1.6.0" from mxnet.gluon.nn import Dense, Sequential, Dropout from thinc.api import MXNetWrapper, chain, Softmax import thinc.util assert thinc.util.has_mxnet width = 32 nO = 10 nI = 784 dropout = 0.2 mx_model = Sequential() mx_model.add(Dense(width, activation="relu")) mx_model.add(Dropout(dropout)) mx_model.add(Dense(width, activation="relu")) mx_model.add(Dropout(dropout)) mx_model.add(Dense(nO)) mx_model.initialize() wrapped_mx_model = chain(MXNetWrapper(mx_model), Softmax()) wrapped_mx_model Explanation: Wrapping MXNet models Here's the MXNet version. Thinc's MXNetWrapper wraps the model and turns it into a regular Thinc Model. MXNet doesn't provide a Softmax layer but a .softmax() operation/method for prediction and it integrates an internal softmax during training. So to be able to integrate it with the rest of the components, you combine it with a Softmax() Thinc layer using the chain combinator. Make sure you initialize() the MXNet model and the Thinc model. End of explanation data = ml_datasets.mnist() optimizer = Adam(0.001) wrapped_mx_model.initialize(X=train_X[:5], Y=train_Y[:5]) train_model(data, wrapped_mx_model, optimizer, n_iter=10, batch_size=128) Explanation: And train it the same way: End of explanation
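Because each wrapper returns an ordinary Thinc model, a wrapped model can also sit behind a registered factory and be selected from the config, just like the pure-Thinc MNIST.v1 layer above. A sketch — the name WrappedTFMNIST.v1 and the factory are made up for illustration:
import thinc
from thinc.api import TensorFlowWrapper

@thinc.registry.layers("WrappedTFMNIST.v1")
def create_wrapped_tf_mnist(width: int, nO: int, nI: int, dropout: float):
    from tensorflow.keras.layers import Dense, Dropout
    from tensorflow.keras.models import Sequential
    tf_model = Sequential()
    tf_model.add(Dense(width, activation="relu", input_shape=(nI,)))
    tf_model.add(Dropout(dropout))
    tf_model.add(Dense(width, activation="relu"))
    tf_model.add(Dropout(dropout))
    tf_model.add(Dense(nO, activation="softmax"))
    # The wrapper makes the Keras model look like any other Thinc layer
    return TensorFlowWrapper(tf_model)

# In a config it can then be referenced like any registered layer:
# [model]
# @layers = "WrappedTFMNIST.v1"
# width = 32
# nO = 10
# nI = 784
# dropout = 0.2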
2,055
Given the following text description, write Python code to implement the functionality described below step by step Description: K Nearest Neighbor (KNN) is a popular non-parametric method. The prediction (for regression/classification) is obtained by looking into the K closest memorized examples. The algorithm itself can be summarized into three steps Step1: Let's first look at a demo for classification problem. The Iris data set with three differe classes will be loaded here Step2: We can split the original data for test purposes Step3: We will define an object of the KNearestNeighbors() class to train the model and predict for test examples Step4: Let's then look at a demo for regression problem. The Boston housing data will be loaded here Step5: Again, let's split the entire data into training and test sets Step6: Now we will utilize the KNearestNeighbors() class again to train the model and predict for test examples
Python Code: import numpy as np import operator class KNearestNeighbors(): def __init__(self, k, model_type='regression', weights='uniform'): # model_type can be either 'classification' or 'regression' # weights = 'uniform', the K nearest neighbors are equally weighted # weights = 'distance', the K nearest entries are weighted by inverse of the distance self.model_type = model_type self.k = k self.weights = weights self.X_train = None self.y_train = None def _dist(self, example1, example2): # calculate euclidean distance between two examples if len(example1) != len(example2): print "Inconsistent Dimension!" return return np.sqrt(sum(np.power(np.array(example1) - np.array(example2), 2))) def _find_neighbors(self, test_instance): # find K nearest neighbors for a test instance # this function return a list of K nearest neighbors for this test instance, # each element of the list is another list of distance and target m, n = self.X_train.shape neighbors = [[self._dist(self.X_train[i, :], test_instance), self.y_train[i]] for i in range(m)] neighbors.sort(key=lambda x: x[0]) return neighbors[:self.k] def fit(self, X, y): # no parameters learning in model fitting process for KNN # just to store all the training instances self.X_train = X self.y_train = y return self def predict(self, X): # predict using KNN algorithm X = np.array(X) # if only have one test example to predict if len(X.shape) == 1: X = X[np.newaxis, :] m = X.shape[0] y_predict = np.zeros((m, 1)) # for regression problems, depending on the weights ('uniform' or 'distance'), # it will perform average or weighted average based on inverse of distance if self.model_type == 'regression': for i in range(m): distance_mat = np.array(self._find_neighbors(X[i, :])) if self.weights == 'distance': y_predict[i] = np.average(distance_mat[:, 1], weights=1.0/distance_mat[:, 0]) else: y_predict[i] = np.average(distance_mat[:, 1]) # for classification, we will apply majority vote for prediction # it still offer two options in terms of the weights else: for i in range(m): votes = {} distance_mat = np.array(self._find_neighbors(X[i, :])) for j in range(self.k): if self.weights == 'distance': votes[distance_mat[j, 1]] = votes.get(distance_mat[j, 1], 0) \ + 1.0 / distance_mat[j, 0] else: votes[distance_mat[j, 1]] = votes.get(distance_mat[j, 1], 0) + 1.0 sorted_votes = sorted(votes.iteritems(), key=operator.itemgetter(1), reverse=True) y_predict[i] = sorted_votes[0][0] y_predict = y_predict.astype(int) return y_predict.ravel() Explanation: K Nearest Neighbor (KNN) is a popular non-parametric method. The prediction (for regression/classification) is obtained by looking into the K closest memorized examples. The algorithm itself can be summarized into three steps: Select a positive integer K along with a new example Select K entries in the training databse which are closest to the new example For regression problem, we perform an average or weighted average of the response of these closest training examples to make the prediction. For classification scenarios, we do a majority vote within the traning entries to assign the label to the new example The following class KNearestNeighbors() implements this idea. End of explanation from sklearn.datasets import load_iris iris = load_iris() X = iris['data'] y = iris['target'] print X.shape print y.shape print "Number of Classes: {}".format(len(np.unique(y))) Explanation: Let's first look at a demo for classification problem. 
The Iris data set with three differe classes will be loaded here: End of explanation from sklearn.cross_validation import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=26) Explanation: We can split the original data for test purposes End of explanation knn = KNearestNeighbors(k=3, model_type='classification', weights='uniform') knn = knn.fit(X_train, y_train) y_predict = knn.predict(X_test) print "True Values: {}".format(y_test) print "Predicted Values: {}".format(y_predict) print "Prediction Accuracy: {:.2%}".format(np.mean((y_predict == y_test).astype(float))) Explanation: We will define an object of the KNearestNeighbors() class to train the model and predict for test examples: End of explanation from sklearn.datasets import load_boston boston = load_boston() X = boston['data'] y = boston['target'] print X.shape print y.shape Explanation: Let's then look at a demo for regression problem. The Boston housing data will be loaded here: End of explanation from sklearn.cross_validation import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.025, random_state=26) Explanation: Again, let's split the entire data into training and test sets: End of explanation knn = KNearestNeighbors(k=20, model_type='regression', weights='uniform') knn.fit(X_train, y_train) y_predict = knn.predict(X_test) print "True Values:\n{}".format([round(elem, 1) for elem in y_test]) print "Predicted Values:\n{}".format([round(elem, 1) for elem in y_predict]) print "RMSE is {:.4}".format(np.sqrt(np.mean((y_test.reshape((len(y_test), 1)) - y_predict) ** 2))) Explanation: Now we will utilize the KNearestNeighbors() class again to train the model and predict for test examples: End of explanation
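As an optional sanity check (a sketch, assuming scikit-learn is installed), the same split can be fed to scikit-learn's KNeighborsRegressor with matching settings; its RMSE should land close to the value above.
from sklearn.neighbors import KNeighborsRegressor
sk_knn = KNeighborsRegressor(n_neighbors=20, weights='uniform')
sk_knn.fit(X_train, y_train)
sk_pred = sk_knn.predict(X_test)  # returns a 1-D array, so no reshape is needed
print("sklearn RMSE is {:.4}".format(np.sqrt(np.mean((y_test - sk_pred) ** 2))))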
2,056
Given the following text description, write Python code to implement the functionality described below step by step Description: Visualização de dados com Python 1 - Turbo introdução aos gráficos Cleuton Sampaio, DataLearningHub Nesta lição veremos a parte básica de geração de gráficos, com formatação e posicionamento dos gráficos mais comuns. Visualização de dados é um aspecto muito importante de um trabalho de Ciência de Dados, e nem todos conhecem os tipos de gráfico mais utilizados, além das técnicas e bibliotecas utilizadas para isto. Quando se têm muitas variáveis, a visualização torna-se impossível. Nestes casos, é possível aplicar técnicas de redução de dimensionalidade, algo que está fora do escopo deste trabalho. Barras, pizzas e linhas Os gráficos mais comuns em pesquisas são estes Step1: Vamos calcular a média de temperatura de cada cidade e utilizá-la para gerar um gráfico Step2: Agora, vamos criar um gráfico de barras utilizando o módulo pyplot. Primeiramente, mostraremos um gráfico bem simples Step3: Gerar um gráfico de linhas pode ajudar a entender a evolução dos dados ao longo do tempo. Vamos gerar gráficos de linhas com as temperaturas das três cidades. Para começar, vamos gerar um gráfico de linha com uma só cidade Step4: É muito comum compararmos variações de dados, e podemos fazer isso desenhando os gráficos lado a lado. Podem ser gráficos do mesmo tipo ou de diferentes tipos, além de poderem ser em várias linhas e colunas. Para isto, construímos uma instância da classe Figure separadamente, e cada instância de Axes também Step5: Outra maneira interessante de comparar séries de dados é criar um gráfico com múltiplas séries. Vejamos como fazer isso Step6: Aqui, foram utilizadas linhas de cores diferentes, mas podemos mudar a legenda e a forma dos gráficos Step7: Mas podemos dar maior destaque Step8: As propriedades abaixo podem ser utilizadas para diferenciar as linhas Step9: Vamos analisar esse dataframe Step10: Tem algo errado! Alguns valores estão asima de 1.000. Deve ser erro de dataset. Vamos acertar isso Step11: Podemos construir gráficos em linha de cada um deles usando o plot, mas, na verdade qualquer tipo de gráfico pode ser gerado com dataframes. Step12: É... Uma das grandes vantagens da visualização, mesmo simples, é constatarmos uma correlação positiva entre o valor do dólar e o desemprego. Mas cuidado para não tomar isso como relação de causa e efeito! Há outros fatores que influenciam ambos! Para ilustrar isso, e mostrar como criar gráficos de dispersão (scatter) vamos plotar um gráfico com o dólar no eixo X e o desemprego no Y Step13: Podemos ver que até há alguma correlação aparente, mas em algum momento, depois do valor de R$ 3,00, a taxa de desemprego deu um salto. Isso prova que faltam variáveis explicativas no modelo. Para encerrar essa lição, vejamos como gerar gráficos de pizza. Para começar, vamos "inventar" um dataframe de vendas mensais de produtos Step14: Bom, imaginemos que cada coluna representa as vendas de um dos produtos, e cada linha seja um dia. Vamos gerar um gráfico de pizza com as vendas. Step15: E podemos "explodir" um ou mais pedaços. Por exemplo, vamos separar o pedaço maior, do produto "C"
Python Code: import numpy as np %matplotlib inline temp_cidade1 = np.array([33.15,32.08,32.10,33.25,33.01,33.05,32.00,31.10,32.27,33.81]) temp_cidade2 = np.array([35.17,36.23,35.22,34.33,35.78,36.31,36.03,36.23,36.35,35.25]) temp_cidade3 = np.array([22.17,23.25,24.22,22.31,23.18,23.31,24.11,23.53,24.38,21.25]) Explanation: Visualização de dados com Python 1 - Turbo introdução aos gráficos Cleuton Sampaio, DataLearningHub Nesta lição veremos a parte básica de geração de gráficos, com formatação e posicionamento dos gráficos mais comuns. Visualização de dados é um aspecto muito importante de um trabalho de Ciência de Dados, e nem todos conhecem os tipos de gráfico mais utilizados, além das técnicas e bibliotecas utilizadas para isto. Quando se têm muitas variáveis, a visualização torna-se impossível. Nestes casos, é possível aplicar técnicas de redução de dimensionalidade, algo que está fora do escopo deste trabalho. Barras, pizzas e linhas Os gráficos mais comuns em pesquisas são estes: Barras, Pizzas e Linhas, e você pode criá-los facilmente com a Matplotlib. A primeira coisa a fazer é instalar a matplotlib: pip install matplotlib Se você estiver desenvolvendo dentro do Jupyter, então tem que usar um comando mágico para informar que o gráfico gerado deverá ser inserido no próprio notebook: %matplotlib inline Vamos mostrar um exemplo bem simples. Imaginemos dados coletados de temperaturas em 3 cidades diferentes. Temos listas contendo medições de temperaturas de cada cidade. Vamos usar Numpy para criar vetores e podermos trabalhar numericamente com eles: End of explanation medias = [np.mean(temp_cidade1), np.mean(temp_cidade2), np.mean(temp_cidade3)] # Valores para o gráfico nomes = ['Cidade Um', 'Cidade Dois', 'Cidade Três'] # Nomes para o gráfico Explanation: Vamos calcular a média de temperatura de cada cidade e utilizá-la para gerar um gráfico: End of explanation import matplotlib.pyplot as plt fig, ax = plt.subplots() # Retorna a figura do gráfico e o objeto de elementos gráficos (axes) ax.bar([0,1,2], medias, align='center') # Criamos um gráfico passando a posição dos elementos ax.set_xticks([0,1,2]) # Indica a posição de cada rótulo no eixo X ax.set_xticklabels(nomes) # Nomes das cidades ax.set_title('Média das temperaturas') # Título do gráfico ax.yaxis.grid(True) # Se é para mostrar a grade dos valores Y plt.show() # Gera o gráfico Explanation: Agora, vamos criar um gráfico de barras utilizando o módulo pyplot. Primeiramente, mostraremos um gráfico bem simples: End of explanation fig, ax = plt.subplots() ax.plot(temp_cidade1) ax.set_title('Temperaturas da Cidade 1') # Título do gráfico ax.yaxis.grid(True) plt.show() Explanation: Gerar um gráfico de linhas pode ajudar a entender a evolução dos dados ao longo do tempo. Vamos gerar gráficos de linhas com as temperaturas das três cidades. 
Para começar, vamos gerar um gráfico de linha com uma só cidade: End of explanation fig = plt.figure(figsize=(20, 5)) # Largura e altura da figura em polegadas grade = fig.add_gridspec(1, 3) # Criamos uma grade com 1 linha e 3 colunas (pode ser com várias linhas também) ax1 = fig.add_subplot(grade[0, 0]) # Primeira linha, primeira coluna ax2 = fig.add_subplot(grade[0, 1]) # Primeira linha, segunda coluna ax3 = fig.add_subplot(grade[0, 2]) # Primeira linha, terceira coluna ax1.plot(temp_cidade1) ax1.set_title('Temperaturas da Cidade 1') # Título do gráfico 1 ax1.yaxis.grid(True) ax2.plot(temp_cidade2) ax2.set_title('Temperaturas da Cidade 2') # Título do gráfico 2 ax2.yaxis.grid(True) ax3.plot(temp_cidade3) ax3.set_title('Temperaturas da Cidade 3') # Título do gráfico 3 ax3.yaxis.grid(True) plt.show() Explanation: É muito comum compararmos variações de dados, e podemos fazer isso desenhando os gráficos lado a lado. Podem ser gráficos do mesmo tipo ou de diferentes tipos, além de poderem ser em várias linhas e colunas. Para isto, construímos uma instância da classe Figure separadamente, e cada instância de Axes também: End of explanation fig, ax = plt.subplots() ax.plot(temp_cidade1) ax.plot(temp_cidade2) ax.plot(temp_cidade3) ax.set_title('Temperaturas das Cidades 1,2 e 3') # Título do gráfico ax.yaxis.grid(True) plt.show() Explanation: Outra maneira interessante de comparar séries de dados é criar um gráfico com múltiplas séries. Vejamos como fazer isso: End of explanation fig, ax = plt.subplots() ax.plot(temp_cidade1, marker='^') # Marcadores em triângulos ax.plot(temp_cidade2, marker='o') # Marcadores em círculos ax.plot(temp_cidade3, marker='.') # Marcadores em pontos ax.set_title('Temperaturas das Cidades 1,2 e 3') # Título do gráfico ax.yaxis.grid(True) plt.show() Explanation: Aqui, foram utilizadas linhas de cores diferentes, mas podemos mudar a legenda e a forma dos gráficos: End of explanation fig, ax = plt.subplots() ax.plot(temp_cidade1, color="red",markerfacecolor='pink', marker='^', linewidth=4, markersize=12, label='Cidade1') ax.plot(temp_cidade2, color="skyblue",markerfacecolor='blue', marker='o', linewidth=4, markersize=12, label='Cidade2') ax.plot(temp_cidade3, color="green", linewidth=4, linestyle='dashed', label='Cidade3') ax.set_title('Temperaturas das Cidades 1,2 e 3') # Título do gráfico ax.yaxis.grid(True) plt.legend() plt.show() Explanation: Mas podemos dar maior destaque: End of explanation import pandas as pd dolar = pd.read_csv('../datasets/dolar.csv') desemprego = pd.read_csv('../datasets/desemprego.csv') dolar.head() Explanation: As propriedades abaixo podem ser utilizadas para diferenciar as linhas: - color: Cor da linha - marker: Tipo de marcador - markerfacecolor: Cor de preenchimento do marcador - linewidth: Largura da linha - markersize: Tamanho do marcador - label: Rótulo da série de dados - linestyle: Estilo da linha de dados E temos o método "legend()" que cria a legenda do gráfico Mas, geralmente, quando estamos criando um modelo, usamos dataframes Pandas, com vários atributos. Dá para gerar gráfico diretamente a partir deles? Sim, é claro. Veremos um exemplo simples. Primeiramente, vamos importar o pandas e depois ler dois datasets CSV (Os datasets estão em: https://github.com/cleuton/datascience/tree/master/datasets End of explanation dolar.describe() Explanation: Vamos analisar esse dataframe: End of explanation dolar.loc[dolar.Dolar > 1000, ['Dolar']] = dolar['Dolar'] / 1000 desemprego.head() Explanation: Tem algo errado! 
Alguns valores estão asima de 1.000. Deve ser erro de dataset. Vamos acertar isso: End of explanation fig = plt.figure(figsize=(20, 5)) # Largura e altura da figura em polegadas grade = fig.add_gridspec(1, 2) # Criamos uma grade com 1 linha e 3 colunas (pode ser com várias linhas também) ax1 = fig.add_subplot(grade[0, 0]) # Primeira linha, primeira coluna ax2 = fig.add_subplot(grade[0, 1]) # Primeira linha, segunda coluna ax1.plot(dolar['Periodo'],dolar['Dolar']) # Potamos o valor do dólar por período ax1.set_title('Evolução da cotação do Dólar') ax1.yaxis.grid(True) ax2.plot(desemprego['Periodo'],desemprego['Desemprego']) ax2.set_title('Evolução da taxa de desemprego') # Evolução da taxa de desemprego por período ax2.yaxis.grid(True) Explanation: Podemos construir gráficos em linha de cada um deles usando o plot, mas, na verdade qualquer tipo de gráfico pode ser gerado com dataframes. End of explanation fig, ax = plt.subplots() ax.scatter(dolar['Dolar'],desemprego['Desemprego']) ax.set_xlabel("Valor do dólar") ax.set_ylabel("Taxa de desemprego") plt.show() Explanation: É... Uma das grandes vantagens da visualização, mesmo simples, é constatarmos uma correlação positiva entre o valor do dólar e o desemprego. Mas cuidado para não tomar isso como relação de causa e efeito! Há outros fatores que influenciam ambos! Para ilustrar isso, e mostrar como criar gráficos de dispersão (scatter) vamos plotar um gráfico com o dólar no eixo X e o desemprego no Y: End of explanation df_vendas = pd.DataFrame(np.random.randint(0,100,size=(100, 4)), columns=list('ABCD')) df_vendas.head() Explanation: Podemos ver que até há alguma correlação aparente, mas em algum momento, depois do valor de R$ 3,00, a taxa de desemprego deu um salto. Isso prova que faltam variáveis explicativas no modelo. Para encerrar essa lição, vejamos como gerar gráficos de pizza. Para começar, vamos "inventar" um dataframe de vendas mensais de produtos: End of explanation list(df_vendas.columns) fig, ax = plt.subplots() totais = df_vendas.sum() ax.pie(totais, labels=list(df_vendas.columns),autopct='%1.1f%%') ax.set_title('Vendas do período') plt.show() Explanation: Bom, imaginemos que cada coluna representa as vendas de um dos produtos, e cada linha seja um dia. Vamos gerar um gráfico de pizza com as vendas. End of explanation listexplode = [0]*len(totais) # criamos uma lista contendo zeros: Um para cada fatia da pizza imax = totais.idxmax() # Pegamos o índice do produto com maior quantidade de vendas ix = list(df_vendas.columns).index(imax) # Agora, transformamos este índice em posição da lista listexplode[ix]=0.1 # Modificamos a especificação de destaque da fatia com maior valor fig, ax = plt.subplots() ax.pie(totais, labels=list(df_vendas.columns),autopct='%1.1f%%', explode=listexplode) ax.set_title('Vendas do período') plt.show() Explanation: E podemos "explodir" um ou mais pedaços. Por exemplo, vamos separar o pedaço maior, do produto "C": End of explanation
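One more view of the same sales totals, offered only as a sketch (it assumes df_vendas and matplotlib are still in scope and a matplotlib version that accepts string categories): a sorted horizontal bar chart often reads more precisely than a pie when the slices are similar in size.
totais_ordenados = df_vendas.sum().sort_values()
fig, ax = plt.subplots()
ax.barh(list(totais_ordenados.index), totais_ordenados.values)  # one bar per product column
ax.set_title('Vendas do período por produto')
ax.xaxis.grid(True)
plt.show()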
2,057
Given the following text description, write Python code to implement the functionality described below step by step Description: 2 Step1: 3 Step2: 4 Step3: 6 Step4: 7 Step5: 9 Step6: 10 Step7: 11 Step8: 13 Step9: 14 Step10: 16 Step11: 17
Python Code: # Let's parse the data from the last mission as an example. # First, we open the wait times file from the last mission. f = open("crime_rates.csv", 'r') data = f.read() rows = data.split('\n') full_data = [] for row in rows: split_row = row.split(",") full_data.append(split_row) weather_data = [] f = open("la_weather.csv", 'r') data = f.read() rows = data.split('\n') full_data = [] for row in rows: split_row = row.split(",") weather_data.append(split_row) print(weather_data[:10]) Explanation: 2: Parsing the file Instructions Open "la_weather.csv", parse it, and assign the result to weather_data. Answer End of explanation # The "days" column in our data isn't extremely useful for our task, so we need to just grab the second column, with the weather. # We looped over lists before, and this is how we will extract the second column. lolist = [[1,2],[3,4],[5,6],[7,8]] second_column = [] for item in lolist: # Each item in lolist is a list. # We can get just the second column value by indexing the item. value = item[1] second_column.append(value) # second_column is now a list containing only values from the second column of lolist. print(second_column) # Let's read in our weather data again. weather_data = [] f = open("la_weather.csv", 'r') data = f.read() rows = data.split('\n') for row in rows: split_row = row.split(",") weather_data.append(split_row) weather_column = [] for row in weather_data: val = row[1] weather_column.append(val) print(weather_column) Explanation: 3: Getting a single column from the data Instructions Get all of the values in the second column and append them to weather_column. Answer End of explanation weather = weather_column # In order to make it easier to use the weather column that we just parsed, we're going to automatically include it from now on. # It's been specially added before our code runs. # We can interact with it normally -- it's a list. print(weather[0]) count = len(weather) print(count) Explanation: 4: Pre-defined variables Instructions Loop over the weather variable, and set count equal to the number of items in weather. Answer End of explanation # Let's practice with some list slicing. a = [4,5,6,7,8] # New list containing index 2 and 3. print(a[2:4]) # New list with no elements. print(a[2:2]) # New list containing only index 2. print(a[2:3]) slice_me = [7,6,4,5,6] slice1 = slice_me[2:4] slice2 = slice_me[1:2] slice3 = slice_me[3:] print(slice1, slice2, slice3) Explanation: 6: Practice slicing a list Instructions Assign a slice containing index 2 and 3 from slice_me to slice1. Assign a slice containing index 1 from slice_me to slice2. Assign a slice containing index 3 and 4 from slice_me to slice3. Answer End of explanation new_weather = weather[1:] print(new_weather) Explanation: 7: Removing our header Instructions The weather data is in the weather variable. Slice the data and remove the header. The slice can end at 367. Assign the result to new_weather. Answer End of explanation # We can make a dictionary with curly braces. dictionary_one = {} # The we can add keys and values. dictionary_one["key_one"] = 2 print(dictionary_one) # Keys and values can be anything. # And dictionaries can have multiple keys dictionary_one[10] = 5 dictionary_one[5.2] = "hello" print(dictionary_one) dictionary_two = { "test": 5, 10: "hello" } print(dictionary_two) Explanation: 9: Making a dictionary Instructions Assign the value 5 to the key "test" in dictionary_two. Assign the value "hello" to the key 10 in dictionary_two. 
Answer End of explanation dictionary_one = {} dictionary_one["test"] = 10 dictionary_one["key"] = "fly" # We can retrieve values from dictionaries with square brackets. print(dictionary_one["test"]) print(dictionary_one["key"]) dictionary_two = {} dictionary_two["key1"] = "high" dictionary_two["key2"] = 10 dictionary_two["key3"] = 5.6 a, b, c = dictionary_two["key1"], dictionary_two["key2"], dictionary_two["key3"] print(a, b, c) Explanation: 10: Indexing a dictionary Instructions Assign the value in "key1" in dictionary_two to a. Assign the value in "key2" in dictionary_two to b. Assign the value in "key3" in dictionary_two to c. Answer End of explanation # We can define dictionaries that already contain values. # All we do is add in keys and values separated by colons. # We have to separate pairs of keys and values with commas. a = {"key1": 10, "key2": "indubitably", "key3": "dataquest", 3: 5.6} # a is initialized with those keys and values, so we can access them. print(a["key1"]) # Another example b = {4: "robin", 5: "bluebird", 6: "sparrow"} print(b[4]) c = { 7: "raven", 8: "goose", 9: "duck" } d = { "morning": 9, "afternoon": 14, "evening": 19, "night": 23 } print(c, d) Explanation: 11: Defining a dictionary with values Instructions Make a dictionary c with the keys 7, 8, and 9 corresponding to the values "raven", "goose", and "duck". Make a dictionary d with the keys "morning", "afternoon", "evening", and "night" corresponding to the values 9, 14, 19, and 23 respectively. Answer End of explanation # We can check if values are in lists using the in statement. the_list = [10,60,-5,8] # This is True because 10 is in the_list print(10 in the_list) # This is True because -5 is in the_list print(-5 in the_list) # This is False because 9 isn't in the_list print(9 in the_list) # We can assign the results of an in statement to a variable. # Just like any other boolean. a = 7 in the_list list2 = [8, 5.6, 70, 800] c, d, e = 9 in list2, 8 in list2, -1 in list2 print(c, d, e) Explanation: 13: Testing if items are in a list Instructions Check if 9 is in list2, and assign the result to c. Check if 8 is in list2, and assign the result to d. Check if -1 is in list2, and assign the result to e. Answer End of explanation # We can check if a key is in a dictionary with the in statement. the_dict = {"robin": "red", "cardinal": "red", "oriole": "orange", "lark": "blue"} # This is True print("robin" in the_dict) # This is False print("crow" in the_dict) # We can also assign the boolean to a variable a = "cardinal" in the_dict print(a) dict2 = {"mercury": 1, "venus": 2, "earth": 3, "mars": 4} b = "jupiter" in dict2 c = "earth" in dict2 print(b, c) Explanation: 14: More uses for the in statement Instructions Check whether "jupiter" is a key in dict2 and assign the result to b. Check whether "earth" is a key in dict2 and assign the result to c. Answer End of explanation # The code in an else statement will be executed if the if statement boolean is False. # This will print "Not 7!" a = 6 # a doesn't equal 7, so this is False. if a == 7: print(a) else: print("Not 7!") # This will print "Nintendo is the best!" video_game = "Mario" # video_game is "Mario", so this is True if video_game == "Mario": print("Nintendo is the best!") else: print("Sony is the best!") season = "Spring" if season == "Summer": print("It's hot!") else: print("It might be hot!") Explanation: 16: Practicing with the else statement Instructions Write an if statement that prints "It's hot!" 
when the season is "Summer" Add an else statement to the if that prints "It might be hot!". Answer End of explanation # We can count how many times items appear in a list using dictionaries. pantry = ["apple", "orange", "grape", "apple", "orange", "apple", "tomato", "potato", "grape"] # Create an empty dictionary pantry_counts = {} # Loop through the whole list for item in pantry: # If the list item is already a key in the dictionary, then add 1 to the value of that key. # This is because we've seen the item again, so our count goes up. if item in pantry_counts: pantry_counts[item] = pantry_counts[item] + 1 else: # If the item isn't already a key in the count dictionary, then add the key, and set the value to 1. # We set the value to 1 because we are seeing the item, so it's occured once already in the list. pantry_counts[item] = 1 print(pantry_counts) us_presidents = ["Adams", "Bush", "Clinton", "Obama", "Harrison", "Taft", "Bush", "Adams", "Wilson", "Roosevelt", "Roosevelt"] us_president_counts = {} for p in us_presidents: if p not in us_president_counts: us_president_counts[p] = 0 us_president_counts[p] += 1 print(us_president_counts) Explanation: 17: Counting with dictionaries Instructions Count how many times each presidential last name appears in us_presidents. Assign the counts to us_president_counts. Answer End of explanation
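The check-then-initialize-then-increment pattern above is common enough that Python's standard library wraps it up. A short sketch with collections.Counter, which should produce the same tallies as us_president_counts:
from collections import Counter
us_president_counter = Counter(us_presidents)
print(dict(us_president_counter))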
2,058
Given the following text description, write Python code to implement the functionality described below step by step Description: Advanced Step1: Helper functions to make the code more readable. Step2: Create model Step3: Run training
Python Code: # Author: Robert Guthrie import torch import torch.autograd as autograd import torch.nn as nn import torch.optim as optim torch.manual_seed(1) Explanation: Advanced: Making Dynamic Decisions and the Bi-LSTM CRF Dynamic versus Static Deep Learning Toolkits Pytorch is a dynamic neural network kit. Another example of a dynamic kit is Dynet &lt;https://github.com/clab/dynet&gt;__ (I mention this because working with Pytorch and Dynet is similar. If you see an example in Dynet, it will probably help you implement it in Pytorch). The opposite is the static tool kit, which includes Theano, Keras, TensorFlow, etc. The core difference is the following: In a static toolkit, you define a computation graph once, compile it, and then stream instances to it. In a dynamic toolkit, you define a computation graph for each instance. It is never compiled and is executed on-the-fly Without a lot of experience, it is difficult to appreciate the difference. One example is to suppose we want to build a deep constituent parser. Suppose our model involves roughly the following steps: We build the tree bottom up Tag the root nodes (the words of the sentence) From there, use a neural network and the embeddings of the words to find combinations that form constituents. Whenever you form a new constituent, use some sort of technique to get an embedding of the constituent. In this case, our network architecture will depend completely on the input sentence. In the sentence "The green cat scratched the wall", at some point in the model, we will want to combine the span $(i,j,r) = (1, 3, \text{NP})$ (that is, an NP constituent spans word 1 to word 3, in this case "The green cat"). However, another sentence might be "Somewhere, the big fat cat scratched the wall". In this sentence, we will want to form the constituent $(2, 4, NP)$ at some point. The constituents we will want to form will depend on the instance. If we just compile the computation graph once, as in a static toolkit, it will be exceptionally difficult or impossible to program this logic. In a dynamic toolkit though, there isn't just 1 pre-defined computation graph. There can be a new computation graph for each instance, so this problem goes away. Dynamic toolkits also have the advantage of being easier to debug and the code more closely resembling the host language (by that I mean that Pytorch and Dynet look more like actual Python code than Keras or Theano). Bi-LSTM Conditional Random Field Discussion For this section, we will see a full, complicated example of a Bi-LSTM Conditional Random Field for named-entity recognition. The LSTM tagger above is typically sufficient for part-of-speech tagging, but a sequence model like the CRF is really essential for strong performance on NER. Familiarity with CRF's is assumed. Although this name sounds scary, all the model is is a CRF but where an LSTM provides the features. This is an advanced model though, far more complicated than any earlier model in this tutorial. If you want to skip it, that is fine. To see if you're ready, see if you can: Write the recurrence for the viterbi variable at step i for tag k. Modify the above recurrence to compute the forward variables instead. Modify again the above recurrence to compute the forward variables in log-space (hint: log-sum-exp) If you can do those three things, you should be able to understand the code below. Recall that the CRF computes a conditional probability. Let $y$ be a tag sequence and $x$ an input sequence of words. 
Then we compute \begin{align}P(y|x) = \frac{\exp{(\text{Score}(x, y)})}{\sum_{y'} \exp{(\text{Score}(x, y')})}\end{align} Where the score is determined by defining some log potentials $\log \psi_i(x,y)$ such that \begin{align}\text{Score}(x,y) = \sum_i \log \psi_i(x,y)\end{align} To make the partition function tractable, the potentials must look only at local features. In the Bi-LSTM CRF, we define two kinds of potentials: emission and transition. The emission potential for the word at index $i$ comes from the hidden state of the Bi-LSTM at timestep $i$. The transition scores are stored in a $|T|x|T|$ matrix $\textbf{P}$, where $T$ is the tag set. In my implementation, $\textbf{P}_{j,k}$ is the score of transitioning to tag $j$ from tag $k$. So: \begin{align}\text{Score}(x,y) = \sum_i \log \psi_\text{EMIT}(y_i \rightarrow x_i) + \log \psi_\text{TRANS}(y_{i-1} \rightarrow y_i)\end{align} \begin{align}= \sum_i h_i[y_i] + \textbf{P}{y_i, y{i-1}}\end{align} where in this second expression, we think of the tags as being assigned unique non-negative indices. If the above discussion was too brief, you can check out this &lt;http://www.cs.columbia.edu/%7Emcollins/crf.pdf&gt;__ write up from Michael Collins on CRFs. Implementation Notes The example below implements the forward algorithm in log space to compute the partition function, and the viterbi algorithm to decode. Backpropagation will compute the gradients automatically for us. We don't have to do anything by hand. The implementation is not optimized. If you understand what is going on, you'll probably quickly see that iterating over the next tag in the forward algorithm could probably be done in one big operation. I wanted to code to be more readable. If you want to make the relevant change, you could probably use this tagger for real tasks. End of explanation def to_scalar(var): # returns a python float return var.view(-1).data.tolist()[0] def argmax(vec): # return the argmax as a python int _, idx = torch.max(vec, 1) return to_scalar(idx) def prepare_sequence(seq, to_ix): idxs = [to_ix[w] for w in seq] tensor = torch.LongTensor(idxs) return autograd.Variable(tensor) # Compute log sum exp in a numerically stable way for the forward algorithm def log_sum_exp(vec): max_score = vec[0, argmax(vec)] max_score_broadcast = max_score.view(1, -1).expand(1, vec.size()[1]) return max_score + \ torch.log(torch.sum(torch.exp(vec - max_score_broadcast))) Explanation: Helper functions to make the code more readable. End of explanation class BiLSTM_CRF(nn.Module): def __init__(self, vocab_size, tag_to_ix, embedding_dim, hidden_dim): super(BiLSTM_CRF, self).__init__() self.embedding_dim = embedding_dim self.hidden_dim = hidden_dim self.vocab_size = vocab_size self.tag_to_ix = tag_to_ix self.tagset_size = len(tag_to_ix) self.word_embeds = nn.Embedding(vocab_size, embedding_dim) self.lstm = nn.LSTM(embedding_dim, hidden_dim // 2, num_layers=1, bidirectional=True) # Maps the output of the LSTM into tag space. self.hidden2tag = nn.Linear(hidden_dim, self.tagset_size) # Matrix of transition parameters. Entry i,j is the score of # transitioning *to* i *from* j. 
self.transitions = nn.Parameter( torch.randn(self.tagset_size, self.tagset_size)) # These two statements enforce the constraint that we never transfer # to the start tag and we never transfer from the stop tag self.transitions.data[tag_to_ix[START_TAG], :] = -10000 self.transitions.data[:, tag_to_ix[STOP_TAG]] = -10000 self.hidden = self.init_hidden() def init_hidden(self): return (autograd.Variable(torch.randn(2, 1, self.hidden_dim // 2)), autograd.Variable(torch.randn(2, 1, self.hidden_dim // 2))) def _forward_alg(self, feats): # Do the forward algorithm to compute the partition function init_alphas = torch.Tensor(1, self.tagset_size).fill_(-10000.) # START_TAG has all of the score. init_alphas[0][self.tag_to_ix[START_TAG]] = 0. # Wrap in a variable so that we will get automatic backprop forward_var = autograd.Variable(init_alphas) # Iterate through the sentence for feat in feats: alphas_t = [] # The forward variables at this timestep for next_tag in range(self.tagset_size): # broadcast the emission score: it is the same regardless of # the previous tag emit_score = feat[next_tag].view( 1, -1).expand(1, self.tagset_size) # the ith entry of trans_score is the score of transitioning to # next_tag from i trans_score = self.transitions[next_tag].view(1, -1) # The ith entry of next_tag_var is the value for the # edge (i -> next_tag) before we do log-sum-exp next_tag_var = forward_var + trans_score + emit_score # The forward variable for this tag is log-sum-exp of all the # scores. alphas_t.append(log_sum_exp(next_tag_var)) forward_var = torch.cat(alphas_t).view(1, -1) terminal_var = forward_var + self.transitions[self.tag_to_ix[STOP_TAG]] alpha = log_sum_exp(terminal_var) return alpha def _get_lstm_features(self, sentence): self.hidden = self.init_hidden() embeds = self.word_embeds(sentence).view(len(sentence), 1, -1) lstm_out, self.hidden = self.lstm(embeds, self.hidden) lstm_out = lstm_out.view(len(sentence), self.hidden_dim) lstm_feats = self.hidden2tag(lstm_out) return lstm_feats def _score_sentence(self, feats, tags): # Gives the score of a provided tag sequence score = autograd.Variable(torch.Tensor([0])) tags = torch.cat([torch.LongTensor([self.tag_to_ix[START_TAG]]), tags]) for i, feat in enumerate(feats): score = score + \ self.transitions[tags[i + 1], tags[i]] + feat[tags[i + 1]] score = score + self.transitions[self.tag_to_ix[STOP_TAG], tags[-1]] return score def _viterbi_decode(self, feats): backpointers = [] # Initialize the viterbi variables in log space init_vvars = torch.Tensor(1, self.tagset_size).fill_(-10000.) init_vvars[0][self.tag_to_ix[START_TAG]] = 0 # forward_var at step i holds the viterbi variables for step i-1 forward_var = autograd.Variable(init_vvars) for feat in feats: bptrs_t = [] # holds the backpointers for this step viterbivars_t = [] # holds the viterbi variables for this step for next_tag in range(self.tagset_size): # next_tag_var[i] holds the viterbi variable for tag i at the # previous step, plus the score of transitioning # from tag i to next_tag. 
# We don't include the emission scores here because the max # does not depend on them (we add them in below) next_tag_var = forward_var + self.transitions[next_tag] best_tag_id = argmax(next_tag_var) bptrs_t.append(best_tag_id) viterbivars_t.append(next_tag_var[0][best_tag_id]) # Now add in the emission scores, and assign forward_var to the set # of viterbi variables we just computed forward_var = (torch.cat(viterbivars_t) + feat).view(1, -1) backpointers.append(bptrs_t) # Transition to STOP_TAG terminal_var = forward_var + self.transitions[self.tag_to_ix[STOP_TAG]] best_tag_id = argmax(terminal_var) path_score = terminal_var[0][best_tag_id] # Follow the back pointers to decode the best path. best_path = [best_tag_id] for bptrs_t in reversed(backpointers): best_tag_id = bptrs_t[best_tag_id] best_path.append(best_tag_id) # Pop off the start tag (we dont want to return that to the caller) start = best_path.pop() assert start == self.tag_to_ix[START_TAG] # Sanity check best_path.reverse() return path_score, best_path def neg_log_likelihood(self, sentence, tags): feats = self._get_lstm_features(sentence) forward_score = self._forward_alg(feats) gold_score = self._score_sentence(feats, tags) return forward_score - gold_score def forward(self, sentence): # dont confuse this with _forward_alg above. # Get the emission scores from the BiLSTM lstm_feats = self._get_lstm_features(sentence) # Find the best path, given the features. score, tag_seq = self._viterbi_decode(lstm_feats) return score, tag_seq Explanation: Create model End of explanation START_TAG = "<START>" STOP_TAG = "<STOP>" EMBEDDING_DIM = 5 HIDDEN_DIM = 4 # Make up some training data training_data = [( "the wall street journal reported today that apple corporation made money".split(), "B I I I O O O B I O O".split() ), ( "georgia tech is a university in georgia".split(), "B I O O O O B".split() )] word_to_ix = {} for sentence, tags in training_data: for word in sentence: if word not in word_to_ix: word_to_ix[word] = len(word_to_ix) tag_to_ix = {"B": 0, "I": 1, "O": 2, START_TAG: 3, STOP_TAG: 4} model = BiLSTM_CRF(len(word_to_ix), tag_to_ix, EMBEDDING_DIM, HIDDEN_DIM) optimizer = optim.SGD(model.parameters(), lr=0.01, weight_decay=1e-4) # Check predictions before training precheck_sent = prepare_sequence(training_data[0][0], word_to_ix) precheck_tags = torch.LongTensor([tag_to_ix[t] for t in training_data[0][1]]) print(model(precheck_sent)) # Make sure prepare_sequence from earlier in the LSTM section is loaded for epoch in range( 300): # again, normally you would NOT do 300 epochs, it is toy data for sentence, tags in training_data: # Step 1. Remember that Pytorch accumulates gradients. # We need to clear them out before each instance model.zero_grad() # Step 2. Get our inputs ready for the network, that is, # turn them into Variables of word indices. sentence_in = prepare_sequence(sentence, word_to_ix) targets = torch.LongTensor([tag_to_ix[t] for t in tags]) # Step 3. Run our forward pass. neg_log_likelihood = model.neg_log_likelihood(sentence_in, targets) # Step 4. Compute the loss, gradients, and update the parameters by # calling optimizer.step() neg_log_likelihood.backward() optimizer.step() # Check predictions after training precheck_sent = prepare_sequence(training_data[0][0], word_to_ix) print(model(precheck_sent)) # We got it! Explanation: Run training End of explanation
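The implementation notes above mention that the loop over next_tag in the forward algorithm could be collapsed into one big operation per timestep. For reference, a sketch of that vectorized variant — a hypothetical drop-in method for BiLSTM_CRF, assuming a PyTorch version that provides torch.logsumexp:
def _forward_alg_vectorized(self, feats):
    init_alphas = torch.Tensor(1, self.tagset_size).fill_(-10000.)
    init_alphas[0][self.tag_to_ix[START_TAG]] = 0.
    forward_var = autograd.Variable(init_alphas)  # shape (1, tagset_size)
    for feat in feats:
        # scores[next_tag, prev_tag] = forward_var[prev_tag]
        #   + transitions[next_tag, prev_tag] + feat[next_tag]
        scores = forward_var + self.transitions + feat.view(-1, 1)
        forward_var = torch.logsumexp(scores, dim=1).view(1, -1)
    terminal_var = forward_var + self.transitions[self.tag_to_ix[STOP_TAG]]
    return torch.logsumexp(terminal_var, dim=1)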
2,059
Given the following text description, write Python code to implement the functionality described below step by step Description: We compute confusion matrix using the final object (not just the sklearn svm). But we do it without sound segmentation. Step1: FILTERING THRESHOLD ..
Python Code: files = glob.glob('/mnt/protolab_innov/data/sounds/dataset_demo/*/*.wav') files = glob.glob('/home/lgeorge/Downloads/dataset/*/*.wav') _class = [os.path.basename(f).split('-')[0] for f in files] df = pd.DataFrame(zip(_class, files), columns=['classname', 'filename']) mask_to_remove = df.filename.str.contains('BlowNose') | df.filename.str.contains('SmokeDetector') mask_to_remove |= df.filename.str.contains('Laugh') mask_to_remove |= df.filename.str.contains('RobotNoisePushed') df = df[~mask_to_remove] #df = df[~df.filename.str.contains('Romeo')] # we remove file recorded on romeo from the database.. because there are in 44100Hz not 48000 print(df.classname.value_counts()) df.shape[0] %load_ext autoreload %autoreload from sound_classification.evaluate_classification import get_expected_predicted_stratified_fold from sklearn.cross_validation import StratifiedKFold n_folds = 3 stratified_fold = StratifiedKFold(df.classname, n_folds) # we use only 3 fold.. as we have only 16 values on some data folds = list(stratified_fold) expected, predicted, labels, filenames = get_expected_predicted_stratified_fold(stratified_fold, df, calibrate_score=True, window_block=None) # si on veut cropper #expected, predicted, labels, filenames = get_expected_predicted_stratified_fold(stratified_fold, df, calibrate_score=True, window_block=1.0, keep_first_slice_only=True) predicted_class = [x.class_predicted for x in predicted] %load_ext autoreload %autoreload %pylab notebook #import seaborn as sns #sns.reset_orig() import sound_classification.evaluate_classification sound_classification.evaluate_classification.print_report(expected, predicted_class, labels) # REORDERING COLUMNS.. %pylab inline labels_bis = labels.copy() labels_bis = ['ApplauseLight', 'ClapHand', 'DeskBell', 'DoorBell01', 'DoorBell02', 'FakeSneeze', 'FireAlarmFr', 'NoisePaper', 'TacTac', 'ToyChicken', 'ToyGiraffe', 'ToyMaracas', 'ToyPig', 'Whistle', 'HumanCaressHead', 'HumanScratchHead', 'VoiceAlex', 'VoiceLaurent'] labels_bis = np.array(labels_bis, dtype=np.object) %load_ext autoreload %autoreload import seaborn as sns sns.reset_orig() import sound_classification.evaluate_classification pylab.figure() fig = sound_classification.evaluate_classification.print_report(expected, predicted_class, labels_bis) fig.savefig('/tmp/final/conf_mat_withou_threshold.png', dpi=600) type(labels) labels.shape np.array(labels_bis, dtype=np.object).dtype labels.dtype Explanation: We compute confusion matrix using the final object (not just the sklearn svm). But we do it without sound segmentation. End of explanation prediction_df = pd.DataFrame([[x.confidence, x.score, x.class_predicted, x.timestamp_start, expected_val, filename] for x, expected_val, filename in zip(predicted, expected, filenames)], columns=['confidence', 'score', 'class_predicted', 'timestamp_start', 'expected', 'filename']) mask_wrong = prediction_df.score < 0.9 #prediction_df[mask_wrong].class_predicted = 'UNKNOWN' prediction_df.class_predicted[mask_wrong] = 'UNKNOWN' %pylab notebook new_labels = np.concatenate([labels_bis, ['UNKNOWN']]) pylab.figure() sound_classification.evaluate_classification.print_report(list(prediction_df.expected) + ['UNKNOWN'], list(prediction_df.class_predicted) + ['UNKNOWN'], new_labels ) pylab.savefig('/tmp/final/conf_mat_with_threshold.png', dpi=600) np.sum(prediction_df.class_predicted == 'UNKNOWN') #np.sum(mask_wrong) Explanation: FILTERING THRESHOLD .. End of explanation
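To see how the 0.9 cut-off trades accuracy on the kept predictions against the number of sounds pushed to UNKNOWN, a small sweep can be run over candidate thresholds. This is only a sketch; it rebuilds a fresh frame from the raw predicted objects so the in-place UNKNOWN relabelling above does not interfere.
import pandas as pd
sweep_df = pd.DataFrame({
    "score": [x.score for x in predicted],
    "class_predicted": [x.class_predicted for x in predicted],
    "expected": list(expected),
})
for threshold in [0.5, 0.7, 0.9, 0.95]:
    kept = sweep_df[sweep_df.score >= threshold]
    accuracy = (kept.class_predicted == kept.expected).mean()
    print("threshold=%.2f kept=%d rejected=%d accuracy_on_kept=%.3f"
          % (threshold, len(kept), len(sweep_df) - len(kept), accuracy))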
2,060
Given the following text description, write Python code to implement the functionality described below step by step Description: Advanced Step1: Default Animations By passing animate=True to b.show(), b.savefig(), or the final call to b.plot() along with save=filename or show=True will create an animation instead of a static plot. Alternatively, you can call afig.animate() on the returned afig object returned by b.plot(). Step2: Note that like the rest of the examples below, this is simply the animated version of the exact same call to plot Providing Times To override the default times explained above, pass a list or array to the times keyword. For synthetic models, highlight mode will be enabled by default and the provided time does not need to be one that is computed - the value will be interpolated if it is not. However, for plotting meshes, the exact time must be stored in the synthetic meshes or they will not be drawn. This is especially usefully in cases where you may not want to repeat the first and last frame for a looping gif, or where you want a smoother animation by interpolation. In this example we'll plot all but the last time so that the loop doesn't have a repeated frame. In this example, times[ Step3: Plotting Options By default, time highlighting is turned on. See the plotting tutorial for details on 'highlight' and 'uncover' options. Any additional arguments (colors, linestyle, etc) are passed to the plot call for EACH frame and for EVERY plotting call. Step4: Step5: Disabling Fixed Limits By default, as can be seen above in the mesh animation, the limits of the axes are automatically set so that they are fixed throughout the animation. Sometimes this may not be desired. By setting xlim='frame' (and/or ylim='frame'), the axes limits are determined automatically per-frame instead of fixed throughout the animation. For more information and other options see the autofig tutorial on limits Step6: 3D axes Plotting to 3D axes are supported. In addition to the options for static plots, animations also support passing a list for the range of elevation/azimuth (in degrees) throughout the animation.
Python Code: #!pip install -I "phoebe>=2.3,<2.4" import phoebe from phoebe import u # units import numpy as np import matplotlib.pyplot as plt logger = phoebe.logger() b = phoebe.default_binary() times = np.linspace(0,1,51) b.add_dataset('lc', compute_times=times, dataset='lc01') b.add_dataset('orb', compute_times=times, dataset='orb01') b.add_dataset('mesh', compute_times=times, dataset='mesh01', columns=['teffs']) b.run_compute(irrad_method='none') Explanation: Advanced: Animations NOTE: this tutorial may take a while to load in a browser as there are many embedded animations and also takes significant time to run and create all animations. Setup Let's first make sure we have the latest version of PHOEBE 2.3 installed (uncomment this line if running in an online notebook session such as colab). End of explanation afig, mplanim = b.plot(y={'orb': 'ws'}, animate=True, save='animations_1.gif', save_kwargs={'writer': 'imagemagick'}) Explanation: Default Animations By passing animate=True to b.show(), b.savefig(), or the final call to b.plot() along with save=filename or show=True will create an animation instead of a static plot. Alternatively, you can call afig.animate() on the returned afig object returned by b.plot(). End of explanation afig, mplanim = b.plot(y={'orb': 'ws'}, times=times[:-1:2], animate=True, save='animations_2.gif', save_kwargs={'writer': 'imagemagick'}) Explanation: Note that like the rest of the examples below, this is simply the animated version of the exact same call to plot Providing Times To override the default times explained above, pass a list or array to the times keyword. For synthetic models, highlight mode will be enabled by default and the provided time does not need to be one that is computed - the value will be interpolated if it is not. However, for plotting meshes, the exact time must be stored in the synthetic meshes or they will not be drawn. This is especially usefully in cases where you may not want to repeat the first and last frame for a looping gif, or where you want a smoother animation by interpolation. In this example we'll plot all but the last time so that the loop doesn't have a repeated frame. In this example, times[:-1:2] means skip the last time and only use every-other time. This option is not available from run_compute - a frame will be drawn for each computed time. End of explanation afig, mplanim = b['lc01@model'].plot(times=times[:-1], uncover=True,\ c='r', linestyle=':',\ highlight_marker='s', highlight_color='g', animate=True, save='animations_3.gif', save_kwargs={'writer': 'imagemagick'}) Explanation: Plotting Options By default, time highlighting is turned on. See the plotting tutorial for details on 'highlight' and 'uncover' options. Any additional arguments (colors, linestyle, etc) are passed to the plot call for EACH frame and for EVERY plotting call. End of explanation afig, mplanim = b['mesh01@model'].plot(times=times[:-1], fc='teffs', ec='None', animate=True, save='animations_4.gif', save_kwargs={'writer': 'imagemagick'}) Explanation: End of explanation afig, mplanim = b['lc01@model'].plot(times=times[:-1], uncover=True, xlim='frame', animate=True, save='animations_5.gif', save_kwargs={'writer': 'imagemagick'}) Explanation: Disabling Fixed Limits By default, as can be seen above in the mesh animation, the limits of the axes are automatically set so that they are fixed throughout the animation. Sometimes this may not be desired. 
By setting xlim='frame' (and/or ylim='frame'), the axes limits are determined automatically per-frame instead of being fixed throughout the animation. For more information and other options see the autofig tutorial on limits End of explanation afig, mplanim = b['orb01@model'].plot(times=times[:-1], projection='3d', azim=[0, 360], elev=[-20,20], animate=True, save='animations_6.gif', save_kwargs={'writer': 'imagemagick'}) Explanation: 3D axes Plotting to 3D axes is supported. In addition to the options for static plots, animations also support passing a list for the range of elevation/azimuth (in degrees) throughout the animation. End of explanation
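A practical note, offered as an assumption rather than documented behaviour: the examples above rely on the external imagemagick writer, and if that binary is not installed, matplotlib's bundled Pillow writer may work as a fallback for GIF output, since save_kwargs appears to be passed straight through to matplotlib's Animation.save.
afig, mplanim = b['orb01@model'].plot(times=times[:-1], projection='3d', azim=[0, 360], elev=[-20, 20],
                                      animate=True, save='animations_6_pillow.gif',
                                      save_kwargs={'writer': 'pillow'})  # 'pillow' is matplotlib's built-in GIF writer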
2,061
Given the following text description, write Python code to implement the functionality described below step by step Description: Bayesian Signed-Rank Test Module signrank in bayesiantests computes the Bayesian equivalent of the Wilcoxon signed-rank test. It returns probabilities that, based on the measured performance, one model is better than another or vice versa or they are within the region of practical equivalence. This notebook demonstrates the use of the module. We will load the classification accuracies of the naive Bayesian classifier and AODE on 54 UCI datasets from the file Data/accuracy_nbc_aode.csv. For simplicity, we will skip the header row and the column with data set names. Step1: Functions in the module accept the following arguments. x Step2: The first value (left) is the probability that the first classifier (the left column of x) has a higher score than the second (or that the differences are negative, if x is given as a vector). In the above case, the right (AODE) performs worse than naive Bayes with a probability of 0.88, and they are practically equivalent with a probability of 0.12. If we add arguments verbose and names, the function also prints out the probabilities. Step3: The posterior distribution can be plotted out Step4: Checking sensitivity to the prior To check the effect of the prior, let us a put a greater prior on the left. Step5: ... and on the right Step6: The prior with a strength of 1 has negligible effect. Only a much stronger prior on the left would shift the probabilities toward NBC
Python Code: import numpy as np scores = np.loadtxt('Data/accuracy_nbc_aode.csv', delimiter=',', skiprows=1, usecols=(1, 2)) names = ("NBC", "AODE") Explanation: Bayesian Signed-Rank Test Module signrank in bayesiantests computes the Bayesian equivalent of the Wilcoxon signed-rank test. It returns probabilities that, based on the measured performance, one model is better than another or vice versa or they are within the region of practical equivalence. This notebook demonstrates the use of the module. We will load the classification accuracies of the naive Bayesian classifier and AODE on 54 UCI datasets from the file Data/accuracy_nbc_aode.csv. For simplicity, we will skip the header row and the column with data set names. End of explanation import bayesiantests as bt left, within, right = bt.signrank(scores, rope=0.01,rho=1/10) print(left, within, right) Explanation: Functions in the module accept the following arguments. x: a 2-d array with scores of two models (each row corresponding to a data set) or a vector of differences. rope: the region of practical equivalence. We consider two classifiers equivalent if the difference in their performance is smaller than rope. prior_strength: the prior strength for the Dirichlet distribution. Default is 0.6. prior_place: the region into which the prior is placed. Default is bayesiantests.ROPE, the other options are bayesiantests.LEFT and bayesiantests.RIGHT. nsamples: the number of Monte Carlo samples used to approximate the posterior. names: the names of the two classifiers; if x is a vector of differences, positive values mean that the second (right) model had a higher score. Summarizing probabilities Function signrank(x, rope, prior_strength=0.6, prior_place=ROPE, nsamples=50000, verbose=False, names=('C1', 'C2')) computes the Bayesian signed-rank test and returns the probabilities that the difference (the score of the first classifier minus the score of the first) is negative, within rope or positive. End of explanation left, within, right = bt.signrank(scores, rope=0.01, verbose=True, names=names) Explanation: The first value (left) is the probability that the first classifier (the left column of x) has a higher score than the second (or that the differences are negative, if x is given as a vector). In the above case, the right (AODE) performs worse than naive Bayes with a probability of 0.88, and they are practically equivalent with a probability of 0.12. If we add arguments verbose and names, the function also prints out the probabilities. End of explanation %matplotlib inline import matplotlib.pyplot as plt samples = bt.signrank_MC(scores, rope=0.01) fig = bt.plot_posterior(samples,names) plt.show() Explanation: The posterior distribution can be plotted out: 1. using the function signrank_MC(x, rope, prior_strength=1, prior_place=ROPE, nsamples=50000) we generate the samples of the posterior 2. using the function plot_posterior(samples,names=('C1', 'C2')) we then plot the posterior in the probability simplex End of explanation samples = bt.signrank_MC(scores, rope=0.01, prior_strength=0.6, prior_place=bt.LEFT) fig = bt.plot_posterior(samples,names) plt.show() Explanation: Checking sensitivity to the prior To check the effect of the prior, let us a put a greater prior on the left. End of explanation samples = bt.signrank_MC(scores, rope=0.01, prior_strength=0.6, prior_place=bt.RIGHT) fig = bt.plot_posterior(samples,names) plt.show() Explanation: ... 
and on the right End of explanation samples = bt.signrank_MC(scores, rope=0.01, prior_strength=6, prior_place=bt.LEFT) fig = bt.plot_posterior(samples,names) plt.show() Explanation: The prior with a strength of 1 has negligible effect. Only a much stronger prior on the left would shift the probabilities toward NBC: End of explanation
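Since the documentation above notes that x may also be a plain vector of differences, here is a minimal added sketch (reusing the same file, API and naming conventions shown above) of the equivalent call:
import numpy as np
import bayesiantests as bt

# Load the two accuracy columns as before, then reduce them to a single
# vector of differences (positive values mean the second model, AODE, scored higher).
scores = np.loadtxt('Data/accuracy_nbc_aode.csv', delimiter=',', skiprows=1, usecols=(1, 2))
diff = scores[:, 1] - scores[:, 0]
left, within, right = bt.signrank(diff, rope=0.01, verbose=True, names=("NBC", "AODE"))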
2,062
Given the following text description, write Python code to implement the functionality described. Description: You have to write a function which validates a given date string and returns True if the date is valid otherwise False. The date is valid if all of the following rules are satisfied: 1. The date string is not empty. 2. The number of days is not less than 1 or higher than 31 days for months 1,3,5,7,8,10,12. And the number of days is not less than 1 or higher than 30 days for months 4,6,9,11. And, the number of days is not less than 1 or higher than 29 for the month 2. 3. The months should not be less than 1 or higher than 12. 4. The date should be in the format: mm-dd-yyyy for example: valid_date('03-11-2000') => True valid_date('15-01-2012') => False valid_date('04-0-2040') => False valid_date('06-04-2020') => True valid_date('06/04/2020') => False
Python Code: def valid_date(date): try: date = date.strip() month, day, year = date.split('-') month, day, year = int(month), int(day), int(year) if month < 1 or month > 12: return False if month in [1,3,5,7,8,10,12] and (day < 1 or day > 31): return False if month in [4,6,9,11] and (day < 1 or day > 30): return False if month == 2 and (day < 1 or day > 29): return False except: return False return True
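A small usage check, mirroring the examples given in the description above (assumes the valid_date function just defined):
print(valid_date('03-11-2000'))  # expected True
print(valid_date('15-01-2012'))  # expected False (month out of range)
print(valid_date('04-0-2040'))   # expected False (day out of range)
print(valid_date('06-04-2020'))  # expected True
print(valid_date('06/04/2020'))  # expected False (wrong separator)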
2,063
Given the following text description, write Python code to implement the functionality described below step by step Description: Word Embeddings Word embedding is a mapping of a word to a d-dimensional vector space. This real valued vector representation captures semantic and syntactic features. Polyglot offers a simple interface to load several formats of word embeddings. Step1: Formats The Embedding class can read word embeddings from different sources Step2: Nearest Neighbors A common way to investigate the space captured by the embeddings is to query for the nearest neighbors of any word. Step3: To calculate the distance between a word and its neighbors, we can call the distances method Step4: The word embeddings are not unit vectors; in fact, the more frequent the word is, the larger the norm of its own vector. Step5: This could be problematic for some applications and training algorithms. We can normalize them by $L_2$ norms to get unit vectors and reduce the effects of word frequency, as follows Step6: Vocabulary Expansion Step7: Not all the words are available in the dictionary defined by the word embeddings. Sometimes it would be useful to map new words to similar ones that we have embeddings for. Case Expansion For example, the word GREEN is not available in the embeddings, Step8: we would like to return the vector that represents the word Green; to do that we apply a case expansion Step9: Digit Expansion We reduce the size of the vocabulary while training the embeddings by grouping special classes of words. One common case of such grouping is digits. Every digit in the training corpus gets replaced by the symbol #. For example, a number like 123.54 becomes ###.##. Therefore, querying the embedding for a new number like 434 will result in a failure Step10: To fix that, we apply another type of vocabulary expansion, DigitExpander. It will map any number to a sequence of #s. Step11: As expected, the neighbors of the new number 434 will be other numbers
Python Code: from polyglot.mapping import Embedding Explanation: Word Embeddings Word embedding is a mapping of a word to a d-dimensional vector space. This real valued vector representation captures semantic and syntactic features. Polyglot offers a simple interface to load several formats of word embeddings. End of explanation embeddings = Embedding.load("/home/rmyeid/polyglot_data/embeddings2/en/embeddings_pkl.tar.bz2") Explanation: Formats The Embedding class can read word embeddings from different sources: Gensim word2vec objects: (from_gensim method) Word2vec binary/text models: (from_word2vec method) polyglot pickle files: (load method) End of explanation neighbors = embeddings.nearest_neighbors("green") neighbors Explanation: Nearest Neighbors A common way to investigate the space capture by the embeddings is to query for the nearest neightbors of any word. End of explanation embeddings.distances("green", neighbors) Explanation: to calculate the distance between a word and the nieghbors, we can call the distances method End of explanation %matplotlib inline import matplotlib.pyplot as plt import numpy as np norms = np.linalg.norm(embeddings.vectors, axis=1) window = 300 smooth_line = np.convolve(norms, np.ones(window)/float(window), mode='valid') plt.plot(smooth_line) plt.xlabel("Word Rank"); _ = plt.ylabel("$L_2$ norm") Explanation: The word embeddings are not unit vectors, actually the more frequent the word is the larger the norm of its own vector. End of explanation embeddings = embeddings.normalize_words() neighbors = embeddings.nearest_neighbors("green") for w,d in zip(neighbors, embeddings.distances("green", neighbors)): print("{:<8}{:.4f}".format(w,d)) Explanation: This could be problematic for some applications and training algorithms. We can normalize them by $L_2$ norms to get unit vectors to reduce effects of word frequency, as the following End of explanation from polyglot.mapping import CaseExpander, DigitExpander Explanation: Vocabulary Expansion End of explanation "GREEN" in embeddings Explanation: Not all the words are available in the dictionary defined by the word embeddings. Sometimes it would be useful to map new words to similar ones that we have embeddings for. Case Expansion For example, the word GREEN is not available in the embeddings, End of explanation embeddings.apply_expansion(CaseExpander) "GREEN" in embeddings embeddings.nearest_neighbors("GREEN") Explanation: we would like to return the vector that represents the word Green, to do that we apply a case expansion: End of explanation "434" in embeddings Explanation: Digit Expansion We reduce the size of the vocabulary while training the embeddings by grouping special classes of words. Once common case of such grouping is digits. Every digit in the training corpus get replaced by the symbol #. For example, a number like 123.54 becomes ###.##. Therefore, querying the embedding for a new number like 434 will result in a failure End of explanation embeddings.apply_expansion(DigitExpander) "434" in embeddings Explanation: To fix that, we apply another type of vocabulary expansion DigitExpander. It will map any number to a sequence of #s. End of explanation embeddings.nearest_neighbors("434") Explanation: As expected, the neighbors of the new number 434 will be other numbers: End of explanation
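As an added aside (plain numpy, not part of the polyglot API shown above): once the vectors are L2-normalized, the cosine similarity of two word vectors is simply their dot product; a small helper for rows taken from embeddings.vectors:
import numpy as np

def cosine_similarity(v1, v2):
    # v1, v2: two word vectors, e.g. rows of embeddings.vectors
    v1 = np.asarray(v1, dtype=float)
    v2 = np.asarray(v2, dtype=float)
    return float(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)))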
2,064
Given the following text description, write Python code to implement the functionality described below step by step Description: Copyright 2019 The TensorFlow Authors. Step1: Text classification with TensorFlow Lite Model Maker <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https Step2: Import the required packages. Step3: Get the data path Download the dataset for this tutorial. Step4: You can also upload your own dataset to work through this tutorial. Upload your dataset by using the left sidebar in Colab. <img src="https Step5: Step 2. Load train and test data specific to an on-device ML app and preprocess the data according to a specific model_spec. Step6: Step 3. Customize the TensorFlow model. Step7: Step 4. Evaluate the model. Step8: Step 5. Export as a TensorFlow Lite model with metadata. Since MobileBERT is too big for on-device applications, use dynamic range quantization on the model to compress it by almost 4x with minimal performance degradation. Step9: You can also download the model using the left sidebar in Colab. After executing the 5 steps above, you can further use the TensorFlow Lite model file in on-device applications using BertNLClassifier API in TensorFlow Lite Task Library. The following sections walk through the example step by step to show more detail. Choose a model_spec that Represents a Model for Text Classifier Each model_spec object represents a specific model for the text classifier. TensorFlow Lite Model Maker currently supports MobileBERT, averaging word embeddings and BERT-Base models. Supported Model | Name of model_spec | Model Description --- | --- | --- MobileBERT | 'mobilebert_classifier' | 4.3x smaller and 5.5x faster than BERT-Base while achieving competitive results, suitable for on-device applications. BERT-Base | 'bert_classifier' | Standard BERT model that is widely used in NLP tasks. averaging word embedding | 'average_word_vec' | Averaging text word embeddings with RELU activation. This tutorial uses a smaller model, average_word_vec that you can retrain multiple times to demonstrate the process. Step10: Load Input Data Specific to an On-device ML App The SST-2 (Stanford Sentiment Treebank) is one of the tasks in the GLUE benchmark. It contains 67,349 movie reviews for training and 872 movie reviews for validation. The dataset has two classes Step11: The SST-2 dataset has train.tsv for training and dev.tsv for validation. The files have the following format Step12: The Model Maker library also supports the from_folder() method to load data. It assumes that the text data of the same class are in the same subdirectory and that the subfolder name is the class name. Each text file contains one movie review sample. The class_labels parameter is used to specify which the subfolders. Customize the TensorFlow Model Create a custom text classifier model based on the loaded data. Step13: Examine the detailed model structure. Step14: Evaluate the Customized Model Evaluate the model with the test data and get its loss and accuracy. Step15: Export as a TensorFlow Lite Model Convert the existing model to TensorFlow Lite model format with metadata that you can later use in an on-device ML application. The label file and the vocab file are embedded in metadata. The default TFLite filename is model.tflite. Step16: The TensorFlow Lite model file can be used in the text classification reference app using NLClassifier API in TensorFlow Lite Task Library. 
The allowed export formats can be one or a list of the following Step17: You can evaluate the tflite model with the evaluate_tflite method to get its accuracy. Step18: Advanced Usage The create function is the driver function that the Model Maker library uses to create models. The model_spec parameter defines the model specification. The AverageWordVecModelSpec and BertClassifierModelSpec classes are currently supported. The create function comprises the following steps Step19: Get the preprocessed data. Step20: Train the new model. Step21: You can also adjust the MobileBERT model. The model parameters you can adjust are Step22: Tune the training hyperparameters You can also tune the training hyperparameters like epochs and batch_size that affect the model accuracy. For instance, epochs Step23: Evaluate the newly retrained model with 20 training epochs. Step24: Change the Model Architecture You can change the model by changing the model_spec. The following shows how to change to the BERT-Base model. Change the model_spec to the BERT-Base model for the text classifier.
Python Code: #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. Explanation: Copyright 2019 The TensorFlow Authors. End of explanation !pip install tflite-model-maker Explanation: Text classification with TensorFlow Lite Model Maker <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/lite/tutorials/model_maker_text_classification"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/tutorials/model_maker_text_classification.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/tutorials/model_maker_text_classification.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/tensorflow/tensorflow/lite/g3doc/tutorials/model_maker_text_classification.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> The TensorFlow Lite Model Maker library simplifies the process of adapting and converting a TensorFlow model to particular input data when deploying this model for on-device ML applications. This notebook shows an end-to-end example that utilizes the Model Maker library to illustrate the adaptation and conversion of a commonly-used text classification model to classify movie reviews on a mobile device. The text classification model classifies text into predefined categories.The inputs should be preprocessed text and the outputs are the probabilities of the categories. The dataset used in this tutorial are positive and negative movie reviews. Prerequisites Install the required packages To run this example, install the required packages, including the Model Maker package from the GitHub repo. End of explanation import numpy as np import os import tensorflow as tf assert tf.__version__.startswith('2') from tflite_model_maker import configs from tflite_model_maker import ExportFormat from tflite_model_maker import model_spec from tflite_model_maker import text_classifier from tflite_model_maker import TextClassifierDataLoader Explanation: Import the required packages. End of explanation data_dir = tf.keras.utils.get_file( fname='SST-2.zip', origin='https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FSST-2.zip?alt=media&token=aabc5f6b-e466-44a2-b9b4-cf6337f84ac8', extract=True) data_dir = os.path.join(os.path.dirname(data_dir), 'SST-2') Explanation: Get the data path Download the dataset for this tutorial. End of explanation spec = model_spec.get('mobilebert_classifier') Explanation: You can also upload your own dataset to work through this tutorial. Upload your dataset by using the left sidebar in Colab. 
<img src="https://storage.googleapis.com/download.tensorflow.org/models/tflite/screenshots/model_maker_text_classification.png" alt="Upload File" width="800" hspace="100"> If you prefer not to upload your dataset to the cloud, you can also locally run the library by following the guide. End-to-End Workflow This workflow consists of five steps as outlined below: Step 1. Choose a model specification that represents a text classification model. This tutorial uses MobileBERT as an example. End of explanation train_data = TextClassifierDataLoader.from_csv( filename=os.path.join(os.path.join(data_dir, 'train.tsv')), text_column='sentence', label_column='label', model_spec=spec, delimiter='\t', is_training=True) test_data = TextClassifierDataLoader.from_csv( filename=os.path.join(os.path.join(data_dir, 'dev.tsv')), text_column='sentence', label_column='label', model_spec=spec, delimiter='\t', is_training=False) Explanation: Step 2. Load train and test data specific to an on-device ML app and preprocess the data according to a specific model_spec. End of explanation model = text_classifier.create(train_data, model_spec=spec) Explanation: Step 3. Customize the TensorFlow model. End of explanation loss, acc = model.evaluate(test_data) Explanation: Step 4. Evaluate the model. End of explanation config = configs.QuantizationConfig.create_dynamic_range_quantization(optimizations=[tf.lite.Optimize.OPTIMIZE_FOR_LATENCY]) config._experimental_new_quantizer = True model.export(export_dir='mobilebert/', quantization_config=config) Explanation: Step 5. Export as a TensorFlow Lite model with metadata. Since MobileBERT is too big for on-device applications, use dynamic range quantization on the model to compress it by almost 4x with minimal performance degradation. End of explanation spec = model_spec.get('average_word_vec') Explanation: You can also download the model using the left sidebar in Colab. After executing the 5 steps above, you can further use the TensorFlow Lite model file in on-device applications using BertNLClassifier API in TensorFlow Lite Task Library. The following sections walk through the example step by step to show more detail. Choose a model_spec that Represents a Model for Text Classifier Each model_spec object represents a specific model for the text classifier. TensorFlow Lite Model Maker currently supports MobileBERT, averaging word embeddings and BERT-Base models. Supported Model | Name of model_spec | Model Description --- | --- | --- MobileBERT | 'mobilebert_classifier' | 4.3x smaller and 5.5x faster than BERT-Base while achieving competitive results, suitable for on-device applications. BERT-Base | 'bert_classifier' | Standard BERT model that is widely used in NLP tasks. averaging word embedding | 'average_word_vec' | Averaging text word embeddings with RELU activation. This tutorial uses a smaller model, average_word_vec that you can retrain multiple times to demonstrate the process. End of explanation data_dir = tf.keras.utils.get_file( fname='SST-2.zip', origin='https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FSST-2.zip?alt=media&token=aabc5f6b-e466-44a2-b9b4-cf6337f84ac8', extract=True) data_dir = os.path.join(os.path.dirname(data_dir), 'SST-2') Explanation: Load Input Data Specific to an On-device ML App The SST-2 (Stanford Sentiment Treebank) is one of the tasks in the GLUE benchmark. It contains 67,349 movie reviews for training and 872 movie reviews for validation. 
The dataset has two classes: positive and negative movie reviews. Download the archived version of the dataset and extract it. End of explanation train_data = TextClassifierDataLoader.from_csv( filename=os.path.join(os.path.join(data_dir, 'train.tsv')), text_column='sentence', label_column='label', model_spec=spec, delimiter='\t', is_training=True) test_data = TextClassifierDataLoader.from_csv( filename=os.path.join(os.path.join(data_dir, 'dev.tsv')), text_column='sentence', label_column='label', model_spec=spec, delimiter='\t', is_training=False) Explanation: The SST-2 dataset has train.tsv for training and dev.tsv for validation. The files have the following format: sentence | label --- | --- it 's a charming and often affecting journey . | 1 unflinchingly bleak and desperate | 0 A positive review is labeled 1 and a negative review is labeled 0. Use the TestClassifierDataLoader.from_csv method to load the data. End of explanation model = text_classifier.create(train_data, model_spec=spec, epochs=10) Explanation: The Model Maker library also supports the from_folder() method to load data. It assumes that the text data of the same class are in the same subdirectory and that the subfolder name is the class name. Each text file contains one movie review sample. The class_labels parameter is used to specify which the subfolders. Customize the TensorFlow Model Create a custom text classifier model based on the loaded data. End of explanation model.summary() Explanation: Examine the detailed model structure. End of explanation loss, acc = model.evaluate(test_data) Explanation: Evaluate the Customized Model Evaluate the model with the test data and get its loss and accuracy. End of explanation model.export(export_dir='average_word_vec/') Explanation: Export as a TensorFlow Lite Model Convert the existing model to TensorFlow Lite model format with metadata that you can later use in an on-device ML application. The label file and the vocab file are embedded in metadata. The default TFLite filename is model.tflite. End of explanation model.export(export_dir='average_word_vec/', export_format=[ExportFormat.LABEL, ExportFormat.VOCAB]) Explanation: The TensorFlow Lite model file can be used in the text classification reference app using NLClassifier API in TensorFlow Lite Task Library. The allowed export formats can be one or a list of the following: ExportFormat.TFLITE ExportFormat.LABEL ExportFormat.VOCAB ExportFormat.SAVED_MODEL By default, it just exports TensorFlow Lite model with metadata. You can also selectively export different files. For instance, exporting only the label file and vocab file as follows: End of explanation accuracy = model.evaluate_tflite('average_word_vec/model.tflite', test_data) Explanation: You can evalute the tflite model with evaluate_tflite method to get its accuracy. End of explanation new_model_spec = model_spec.AverageWordVecModelSpec(wordvec_dim=32) Explanation: Advanced Usage The create function is the driver function that the Model Maker library uses to create models. The model_spec parameter defines the model specification. The AverageWordVecModelSpec and BertClassifierModelSpec classes are currently supported. The create function comprises of the following steps: Creates the model for the text classifier according to model_spec. Trains the classifier model. The default epochs and the default batch size are set by the default_training_epochs and default_batch_size variables in the model_spec object. 
This section covers advanced usage topics like adjusting the model and the training hyperparameters. Adjust the model You can adjust the model infrastructure like the wordvec_dim and the seq_len variables in the AverageWordVecModelSpec class. For example, you can train the model with a larger value of wordvec_dim. Note that you must construct a new model_spec if you modify the model. End of explanation new_train_data = TextClassifierDataLoader.from_csv( filename=os.path.join(os.path.join(data_dir, 'train.tsv')), text_column='sentence', label_column='label', model_spec=new_model_spec, delimiter='\t', is_training=True) Explanation: Get the preprocessed data. End of explanation model = text_classifier.create(new_train_data, model_spec=new_model_spec) Explanation: Train the new model. End of explanation new_model_spec = model_spec.get('mobilebert_classifier') new_model_spec.seq_len = 256 Explanation: You can also adjust the MobileBERT model. The model parameters you can adjust are: seq_len: Length of the sequence to feed into the model. initializer_range: The standard deviation of the truncated_normal_initializer for initializing all weight matrices. trainable: Boolean that specifies whether the pre-trained layer is trainable. The training pipeline parameters you can adjust are: model_dir: The location of the model checkpoint files. If not set, a temporary directory will be used. dropout_rate: The dropout rate. learning_rate: The initial learning rate for the Adam optimizer. tpu: TPU address to connect to. For instance, you can set the seq_len=256 (default is 128). This allows the model to classify longer text. End of explanation model = text_classifier.create(train_data, model_spec=spec, epochs=20) Explanation: Tune the training hyperparameters You can also tune the training hyperparameters like epochs and batch_size that affect the model accuracy. For instance, epochs: more epochs could achieve better accuracy, but may lead to overfitting. batch_size: the number of samples to use in one training step. For example, you can train with more epochs. End of explanation loss, accuracy = model.evaluate(test_data) Explanation: Evaluate the newly retrained model with 20 training epochs. End of explanation spec = model_spec.get('bert_classifier') Explanation: Change the Model Architecture You can change the model by changing the model_spec. The following shows how to change to BERT-Base model. Change the model_spec to BERT-Base model for the text classifier. End of explanation
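As an optional sanity check (not part of the original tutorial), the exported file can be opened with the generic TensorFlow Lite interpreter to inspect its input and output tensors:
import tensorflow as tf

# Assumes the model was exported to 'average_word_vec/model.tflite' as shown above.
interpreter = tf.lite.Interpreter(model_path='average_word_vec/model.tflite')
interpreter.allocate_tensors()
print(interpreter.get_input_details())
print(interpreter.get_output_details())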
2,065
Given the following text description, write Python code to implement the functionality described below step by step Description: CBOE VXN Index In this notebook, we'll take a look at the CBOE VXN Index dataset, available on the Quantopian Store. This dataset spans 02 Feb 2001 through the current day. This data has a daily frequency. CBOE VXN measures market expectations of near-term volatility conveyed by NASDAQ-100 Index option prices Notebook Contents There are two ways to access the data and you'll find both of them listed below. Just click on the section you'd like to read through. <a href='#interactive'><strong>Interactive overview</strong></a> Step1: Let's go over the columns Step2: <a id='pipeline'></a> Pipeline Overview Accessing the data in your algorithms & research The only method for accessing partner data within algorithms running on Quantopian is via the pipeline API. Different data sets work differently but in the case of this data, you can add this data to your pipeline as follows Step3: Now that we've imported the data, let's take a look at which fields are available for each dataset. You'll find the dataset, the available fields, and the datatypes for each of those fields. Step4: Now that we know what fields we have access to, let's see what this data looks like when we run it through Pipeline. This is constructed the same way as you would in the backtester. For more information on using Pipeline in Research view this thread Step5: Here, you'll notice that each security is mapped to the corresponding value, so you could grab any security to get what you need. Taking what we've seen from above, let's see how we'd move that into the backtester.
Python Code: # For use in Quantopian Research, exploring interactively from quantopian.interactive.data.quandl import cboe_vxn as dataset # import data operations from odo import odo # import other libraries we will use import pandas as pd # Let's use blaze to understand the data a bit using Blaze dshape() dataset.dshape # And how many rows are there? # N.B. we're using a Blaze function to do this, not len() dataset.count() # Let's see what the data looks like. We'll grab the first three rows. dataset[:3] Explanation: CBOE VXN Index In this notebook, we'll take a look at the CBOE VXN Index dataset, available on the Quantopian Store. This dataset spans 02 Feb 2001 through the current day. This data has a daily frequency. CBOE VXN measures market expectations of near-term volatility conveyed by NASDAQ-100 Index option prices Notebook Contents There are two ways to access the data and you'll find both of them listed below. Just click on the section you'd like to read through. <a href='#interactive'><strong>Interactive overview</strong></a>: This is only available on Research and uses blaze to give you access to large amounts of data. Recommended for exploration and plotting. <a href='#pipeline'><strong>Pipeline overview</strong></a>: Data is made available through pipeline which is available on both the Research & Backtesting environment. Recommended for custom factor development and moving back & forth between research/backtesting. Limits One key caveat: we limit the number of results returned from any given expression to 10,000 to protect against runaway memory usage. To be clear, you have access to all the data server side. We are limiting the size of the responses back from Blaze. With preamble in place, let's get started: <a id='interactive'></a> Interactive Overview Accessing the data with Blaze and Interactive on Research Partner datasets are available on Quantopian Research through an API service known as Blaze. Blaze provides the Quantopian user with a convenient interface to access very large datasets, in an interactive, generic manner. Blaze provides an important function for accessing these datasets. Some of these sets are many millions of records. Bringing that data directly into Quantopian Research directly just is not viable. So Blaze allows us to provide a simple querying interface and shift the burden over to the server side. It is common to use Blaze to reduce your dataset in size, convert it over to Pandas and then to use Pandas for further computation, manipulation and visualization. Helpful links: * Query building for Blaze * Pandas-to-Blaze dictionary * SQL-to-Blaze dictionary. 
Once you've limited the size of your Blaze object, you can convert it to a Pandas DataFrames using: from odo import odo odo(expr, pandas.DataFrame) To see how this data can be used in your algorithm, search for the Pipeline Overview section of this notebook or head straight to <a href='#pipeline'>Pipeline Overview</a> End of explanation # Plotting this DataFrame df = odo(dataset, pd.DataFrame) df.head(5) # So we can plot it, we'll set the index as the `asof_date` df['asof_date'] = pd.to_datetime(df['asof_date']) df = df.set_index(['asof_date']) df.head(5) import matplotlib.pyplot as plt df['open_'].plot(label=str(dataset)) plt.ylabel(str(dataset)) plt.legend() plt.title("Graphing %s since %s" % (str(dataset), min(df.index))) Explanation: Let's go over the columns: - open: open price for VXN - high: daily high for VXN - low: daily low for VXN - close: close price for VXN - asof_date: the timeframe to which this data applies - timestamp: this is our timestamp on when we registered the data. We've done much of the data processing for you. Fields like timestamp are standardized across all our Store Datasets, so the datasets are easy to combine. We can select columns and rows with ease. Below, we'll do a simple plot. End of explanation # Import necessary Pipeline modules from quantopian.pipeline import Pipeline from quantopian.research import run_pipeline from quantopian.pipeline.factors import AverageDollarVolume # Import the datasets available from quantopian.pipeline.data.quandl import cboe_vxn Explanation: <a id='pipeline'></a> Pipeline Overview Accessing the data in your algorithms & research The only method for accessing partner data within algorithms running on Quantopian is via the pipeline API. Different data sets work differently but in the case of this data, you can add this data to your pipeline as follows: Import the data set here from quantopian.pipeline.data.quandl import cboe_vxn Then in intialize() you could do something simple like adding the raw value of one of the fields to your pipeline: pipe.add(cboe_vxn.open_.latest, 'open') Pipeline usage is very similar between the backtester and Research so let's go over how to import this data through pipeline and view its outputs. End of explanation print "Here are the list of available fields per dataset:" print "---------------------------------------------------\n" def _print_fields(dataset): print "Dataset: %s\n" % dataset.__name__ print "Fields:" for field in list(dataset.columns): print "%s - %s" % (field.name, field.dtype) print "\n" _print_fields(cboe_vxn) print "---------------------------------------------------\n" Explanation: Now that we've imported the data, let's take a look at which fields are available for each dataset. You'll find the dataset, the available fields, and the datatypes for each of those fields. End of explanation pipe = Pipeline() pipe.add(cboe_vxn.open_.latest, 'open_vxn') # Setting some basic liquidity strings (just for good habit) dollar_volume = AverageDollarVolume(window_length=20) top_1000_most_liquid = dollar_volume.rank(ascending=False) < 1000 pipe.set_screen(top_1000_most_liquid & cboe_vxn.open_.latest.notnan()) # The show_graph() method of pipeline objects produces a graph to show how it is being calculated. 
pipe.show_graph(format='png') # run_pipeline will show the output of your pipeline pipe_output = run_pipeline(pipe, start_date='2013-11-01', end_date='2013-11-25') pipe_output Explanation: Now that we know what fields we have access to, let's see what this data looks like when we run it through Pipeline. This is constructed the same way as you would in the backtester. For more information on using Pipeline in Research view this thread: https://www.quantopian.com/posts/pipeline-in-research-build-test-and-visualize-your-factors-and-filters End of explanation # This section is only importable in the backtester from quantopian.algorithm import attach_pipeline, pipeline_output # General pipeline imports from quantopian.pipeline import Pipeline from quantopian.pipeline.factors import AverageDollarVolume # For use in your algorithms via the pipeline API from quantopian.pipeline.data.quandl import cboe_vxn def make_pipeline(): # Create our pipeline pipe = Pipeline() # Screen out penny stocks and low liquidity securities. dollar_volume = AverageDollarVolume(window_length=20) is_liquid = dollar_volume.rank(ascending=False) < 1000 # Create the mask that we will use for our percentile methods. base_universe = (is_liquid) # Add the datasets available pipe.add(cboe_vxn.open_.latest, 'vxn_open') # Set our pipeline screens pipe.set_screen(is_liquid) return pipe def initialize(context): attach_pipeline(make_pipeline(), "pipeline") def before_trading_start(context, data): results = pipeline_output('pipeline') Explanation: Here, you'll notice that each security is mapped to the corresponding value, so you could grab any security to get what you need. Taking what we've seen from above, let's see how we'd move that into the backtester. End of explanation
2,066
Given the following text description, write Python code to implement the functionality described below step by step Description: Create your own region Creating own regions is straightforward. Import regionmask and check the version Step1: Import numpy Assume you have two custom regions in the US, you can easily use these to create Regions Step2: If you want to set the names and abbrevs yourself you can still do that Step3: Again we can plot the outline of the defined regions Step4: and obtain a mask Step5: Use shapely Polygon You can also define the region with shapely polygons (see geopandas tutorial how to work with shapefiles). Step6: Create Regions with MultiPolygon and interiors Create two discontiguous regions and combine them to one. Add a hole to one of the regions Step7: Create Polygons, a MultiPolygon, and finally Regions Step8: Create a mask Step9: and plot it
Python Code: import cartopy.crs as ccrs import numpy as np import matplotlib.pyplot as plt import regionmask regionmask.__version__ Explanation: Create your own region Creating own regions is straightforward. Import regionmask and check the version: End of explanation US1 = np.array([[-100.0, 30], [-100, 40], [-120, 35]]) US2 = np.array([[-100.0, 30], [-80, 30], [-80, 40], [-100, 40]]) regionmask.Regions([US1, US2]) Explanation: Import numpy Assume you have two custom regions in the US, you can easily use these to create Regions: End of explanation names = ["US_west", "US_east"] abbrevs = ["USw", "USe"] USregions = regionmask.Regions([US1, US2], names=names, abbrevs=abbrevs, name="US") USregions Explanation: If you want to set the names and abbrevs yourself you can still do that: End of explanation ax = USregions.plot(label="abbrev") # fine tune the extent ax.set_extent([225, 300, 25, 45], crs=ccrs.PlateCarree()) Explanation: Again we can plot the outline of the defined regions End of explanation import numpy as np # define lat/ lon grid lon = np.arange(200.5, 330, 1) lat = np.arange(74.5, 15, -1) mask = USregions.mask(lon, lat) ax = plt.subplot(111, projection=ccrs.PlateCarree()) h = mask.plot( transform=ccrs.PlateCarree(), cmap="Paired", add_colorbar=False, vmax=12, ) ax.coastlines() # add the outlines of the regions USregions.plot_regions(ax=ax, add_label=False) ax.set_extent([225, 300, 25, 45], crs=ccrs.PlateCarree()) Explanation: and obtain a mask: End of explanation from shapely.geometry import Polygon, MultiPolygon US1_poly = Polygon(US1) US2_poly = Polygon(US2) US1_poly, US2_poly USregions_poly = regionmask.Regions([US1_poly, US2_poly]) USregions_poly Explanation: Use shapely Polygon You can also define the region with shapely polygons (see geopandas tutorial how to work with shapefiles). End of explanation US1_shifted = US1 - (5, 0) US2_hole = np.array([[-98.0, 33], [-92, 33], [-92, 37], [-98, 37], [-98.0, 33]]) Explanation: Create Regions with MultiPolygon and interiors Create two discontiguous regions and combine them to one. Add a hole to one of the regions End of explanation US1_poly = Polygon(US1_shifted) US2_poly = Polygon(US2, holes=[US2_hole]) US_multipoly = MultiPolygon([US1_poly, US2_poly]) USregions_poly = regionmask.Regions([US_multipoly]) USregions_poly.plot(); Explanation: Create Polygons, a MultiPolygon, and finally Regions End of explanation mask = USregions_poly.mask(lon, lat) Explanation: Create a mask: End of explanation ax = plt.subplot(111, projection=ccrs.PlateCarree()) mask.plot(transform=ccrs.PlateCarree(), add_colorbar=False) ax.coastlines() # fine tune the extent ax.set_extent([225, 300, 25, 45], crs=ccrs.PlateCarree()) Explanation: and plot it: End of explanation
2,067
Given the following text description, write Python code to implement the functionality described below step by step Description: Using a feature representation learned for signature images This notebook contains code to pre-process signature images and to obtain feature-vectors using the learned feature representation on the GPDS dataset Step1: Pre-processing a single image Step2: Processing multiple images and obtaining feature vectors Step3: Using the CNN to obtain the feature representations Step4: Inspecting the learned features The feature vectors have size 2048 Step5: Using SPP models (signatures from different sizes) For the SPP models, we can use images of any size as input, to obtain a feature vector of a fixed size. Note that in the paper we obtained better results by padding small images to a fixed canvas size, and processed larger images in their original size. More information can be found in the paper
Python Code: import numpy as np # Functions to load and pre-process the images: from scipy.misc import imread, imsave from preprocess.normalize import normalize_image, resize_image, crop_center, preprocess_signature # Functions to load the CNN model import signet from cnn_model import CNNModel # Functions for plotting: import matplotlib.pyplot as plt %matplotlib inline plt.rcParams['image.cmap'] = 'Greys' Explanation: Using a feature representation learned for signature images This notebook contains code to pre-process signature images and to obtain feature-vectors using the learned feature representation on the GPDS dataset End of explanation original = imread('data/some_signature.png') # Manually normalizing the image following the steps provided in the paper. # These steps are also implemented in preprocess.normalize.preprocess_signature normalized = 255 - normalize_image(original, size=(952, 1360)) resized = resize_image(normalized, (170, 242)) cropped = crop_center(resized, (150,220)) # Visualizing the intermediate steps f, ax = plt.subplots(4,1, figsize=(6,15)) ax[0].imshow(original, cmap='Greys_r') ax[1].imshow(normalized) ax[2].imshow(resized) ax[3].imshow(cropped) ax[0].set_title('Original') ax[1].set_title('Background removed/centered') ax[2].set_title('Resized') ax[3].set_title('Cropped center of the image') Explanation: Pre-processing a single image End of explanation user1_sigs = [imread('data/a%d.png' % i) for i in [1,2]] user2_sigs = [imread('data/b%d.png' % i) for i in [1,2]] canvas_size = (952, 1360) processed_user1_sigs = np.array([preprocess_signature(sig, canvas_size) for sig in user1_sigs]) processed_user2_sigs = np.array([preprocess_signature(sig, canvas_size) for sig in user2_sigs]) # Shows pre-processed samples of the two users f, ax = plt.subplots(2,2, figsize=(10,6)) ax[0,0].imshow(processed_user1_sigs[0]) ax[0,1].imshow(processed_user1_sigs[1]) ax[1,0].imshow(processed_user2_sigs[0]) ax[1,1].imshow(processed_user2_sigs[1]) Explanation: Processing multiple images and obtaining feature vectors End of explanation # Path to the learned weights model_weight_path = 'models/signet.pkl' # Instantiate the model model = CNNModel(signet, model_weight_path) # Obtain the features. Note that you can process multiple images at the same time user1_features = model.get_feature_vector_multiple(processed_user1_sigs, layer='fc2') user2_features = model.get_feature_vector_multiple(processed_user2_sigs, layer='fc2') Explanation: Using the CNN to obtain the feature representations End of explanation user1_features.shape print('Euclidean distance between signatures from the same user') print(np.linalg.norm(user1_features[0] - user1_features[1])) print(np.linalg.norm(user2_features[0] - user2_features[1])) print('Euclidean distance between signatures from different users') dists = [np.linalg.norm(u1 - u2) for u1 in user1_features for u2 in user2_features] print(dists) # Other models: # model_weight_path = 'models/signetf_lambda0.95.pkl' # model_weight_path = 'models/signetf_lambda0.999.pkl' Explanation: Inspecting the learned features The feature vectors have size 2048: End of explanation from preprocess.normalize import remove_background # To illustrate that images from any size can be used, let's process the signatures just # by removing the background and inverting the image normalized_spp = 255 - remove_background(original) plt.imshow(normalized_spp) # Note that now we need to use lists instead of numpy arrays, since the images will have different sizes. 
# We will also process each image individually processed_user1_sigs_spp = [255-remove_background(sig) for sig in user1_sigs] processed_user2_sigs_spp = [255-remove_background(sig) for sig in user2_sigs] # Shows pre-processed samples of the two users f, ax = plt.subplots(2,2, figsize=(10,6)) ax[0,0].imshow(processed_user1_sigs_spp[0]) ax[0,1].imshow(processed_user1_sigs_spp[1]) ax[1,0].imshow(processed_user2_sigs_spp[0]) ax[1,1].imshow(processed_user2_sigs_spp[1]) import signet_spp_300dpi # Instantiate the model model = CNNModel(signet_spp_300dpi, 'models/signet_spp_300dpi.pkl') # Obtain the features. Note that we need to process them individually here since they have different sizes user1_features_spp = [model.get_feature_vector(sig, layer='fc2') for sig in processed_user1_sigs_spp] user2_features_spp = [model.get_feature_vector(sig, layer='fc2') for sig in processed_user2_sigs_spp] print('Euclidean distance between signatures from the same user') print(np.linalg.norm(user1_features_spp[0] - user1_features_spp[1])) print(np.linalg.norm(user2_features_spp[0] - user2_features_spp[1])) print('Euclidean distance between signatures from different users') dists = [np.linalg.norm(u1 - u2) for u1 in user1_features_spp for u2 in user2_features_spp] print(dists) Explanation: Using SPP models (signatures from different sizes) For the SPP models, we can use images of any size as input, to obtain a feature vector of a fixed size. Note that in the paper we obtained better results by padding small images to a fixed canvas size, and processed larger images in their original size. More information can be found in the paper: https://arxiv.org/abs/1804.00448 End of explanation
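A naive added sketch (not from the original notebook) of how the feature distances computed above could be turned into a verification decision; the threshold value is hypothetical and would be tuned on validation data:
import numpy as np

def same_writer(feat_a, feat_b, threshold=1.0):
    # threshold is a placeholder; choose it from genuine/forgery distance distributions
    return np.linalg.norm(np.asarray(feat_a) - np.asarray(feat_b)) < threshold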
2,068
Given the following text description, write Python code to implement the functionality described below step by step Description: Introdução à Programação em Python Vetores e Matrizes Na matemática, uma Matriz é uma tabela de valores com $m$ linhas e $n$ colunas. $$ \left( \begin{array}{cccc} 1 & 2 & 3 & 4 \ 5 & 6 & 7 & 8 \ 9 & 10 & 11 & 12 \end{array} \right) $$ Um Vetor é uma Matriz com apenas $1$ coluna. $$ \left( \begin{array}{c} 1 \ 2 \ 3 \ \end{array} \right) $$ Essas estruturas matemáticas possuem seu conjunto próprio de operações Step1: Alternativamente poderíamos implementar funções que realizassem tais operações Step2: Mas a biblioteca numpy contém o tipo array que permite trabalhar com vetores e matrizes de maneira simplificada. Step3: Analogamente com matrizes Step4: Para acessar os elementos utilizamos pares de colchetes exatamente como nas listas. Para matrizes colocamos os dois índices dentro dos colchetes. Step5: Adicionalmente podemos passar uma lista de índices que queremos acessar Step6: O tamanho de um vetor ou matriz pode ser obtido com o comando .shape. Step7: Podemos criar uma Matriz com todos elementos iguais a $0$ ou $1$ ou até mesmo uma constante $c$. Step8: A biblioteca numpy conta com todas as funções da biblioteca math e outras diversas funções úteis para cálculo numérico e álgebra. Step11: Vamos estimar a entropia de uma moeda honesta e de uma moeda desonesta utilizando Método de Monte Carlo Step13: Simulação de Incêndio em Floresta Dada uma floresta representada por uma matriz $m \times n$ em que os elementos são Step15: Crie uma função que faz o seguinte Step16: Agora vamos fazer cada uma dessas funções em separado. Começando pela FocoIncendio(F). Step17: Agora vamos implementar a função que verifica se ainda tem fogo na floresta Step18: Finalmente implementamos a função AtualizaFloresta(F) Step20: Finalmente realizaremos um método de Monte Carlo para estimar o percentual de árvores restantes em um incêndio em uma floresta de densidade $p$. Para cada densidade na lista [0.1, 0.2, ..., 0.9] e para cada repetição da simulação, crie uma floresta com essa densidade e simule o incêndio. Calcule o percentual de árvores sobrevientes (média da matriz F) e calcule a média desse percentual em relação a todas as simulações. Step21: Esse experimento mostra que em uma floresta com até 40% da área ocupada com árvores, um incêndio danifica apenas uma pequena região. Iniciando em 50%, um incêndio elimina praticamente toda a floresta. Isso é conhecido como ponto crítico. Análise da valorização do dólar Vamos realizar algumas análises estátisticas da valorização diária do dólar no período de 19/09/2014 até 17/03/2015 em relação ao Real e ao Euro. Para isso, vamos utilizar o arquivo dolar.csv contendo as informações do câmbio. Vejamos o formato desse arquivo. Step22: Cada linha representa uma das datas analisadas, a primeira coluna indica qual é data (sendo 0 a primeira data), a segunda coluna é o valor do Euro em relação ao Dólar e a terceira coluna o valor do Real em função do Dólar. Notem que as colunas estão separadas por ';' A leitura de um arquivo nesse formato para uma Matriz do numpy é feita com o comando genfromtxt. Essa função pode receber diversos parâmetros, porém apenas dois são de nosso interesse Step23: Vamos salvar a projeção para os próximos 60 dias! Para isso temos que criar uma matriz chamada proj de tamanho 60x3. Na primeira coluna vamos inserir os valores de 180 até 240; na segunda coluna a projeção para o Euro e na terceira coluna a projeção para o Real. 
Para salvar um arquivo csv usando o numpy utilizamos o comando savetxt
Python Code: u = [1,2,3] v = [4,5,6] print(u + v) print(u + 1) Explanation: Introdução à Programação em Python Vetores e Matrizes Na matemática, uma Matriz é uma tabela de valores com $m$ linhas e $n$ colunas. $$ \left( \begin{array}{cccc} 1 & 2 & 3 & 4 \ 5 & 6 & 7 & 8 \ 9 & 10 & 11 & 12 \end{array} \right) $$ Um Vetor é uma Matriz com apenas $1$ coluna. $$ \left( \begin{array}{c} 1 \ 2 \ 3 \ \end{array} \right) $$ Essas estruturas matemáticas possuem seu conjunto próprio de operações: Operações elemento-a-elemento: realizar as operações matemáticas $+, -, \times,\div$ elemento a elemento entre duas matrizes. $$ \left( \begin{array}{cc} 1 & 2 \ 3 & 4 \ \end{array} \right) + \left( \begin{array}{cc} 4 & 5 \ 6 & 7 \ \end{array} \right) = \left( \begin{array}{cc} 5 & 7 \ 9 & 11 \ \end{array} \right) $$ Operações com constantes: realizar as operações matemáticas de uma constante com todos os valores da matriz. $$ \left( \begin{array}{cc} 1 & 2 \ 3 & 4 \ \end{array} \right) + 1 = \left( \begin{array}{cc} 2 & 3 \ 4 & 5 \ \end{array} \right) $$ Operações próprias: operações específicas dessas estruturas como produto interno, externo de vetores e multiplicação de matrizes. $$ \left( \begin{array}{c} 1 \ 2 \ 3 \ \end{array} \right) \cdot \left( \begin{array}{c} 3 \ 2 \ 1 \ \end{array} \right) = 1 \cdot 3 + 2 \cdot 2 + 3 \cdot 1 = 36 $$ $$ \left( \begin{array}{c} 1 \ 2 \ 3 \ \end{array} \right) \cdot \left( \begin{array}{ccc} 3 & 2 & 1 \end{array} \right) = \left( \begin{array}{ccc} 1 \cdot 3 & 1 \cdot 2 & 1 \cdot 1 \ 2 \cdot 3 & 2 \cdot 2 & 2 \cdot 1 \ 3 \cdot 3 & 3 \cdot 2 & 3 \cdot 1 \ \end{array} \right) $$ $$ \left( \begin{array}{cc} 1 & 2 \ 3 & 4 \ \end{array} \right) \times \left( \begin{array}{cc} 4 & 5 \ 6 & 7 \ \end{array} \right) = \left( \begin{array}{cc} 1 \cdot 4 + 2 \cdot 6 & 1 \cdot 5 + 2 \cdot 7 \ 3 \cdot 4 + 4 \cdot 6 & 3 \cdot 5 + 4 \cdot 7\ \end{array} \right) $$ Com nosso conhecimento atual poderíamos representar vetores e matrizes como listas, porém tais operações não funcionariam. End of explanation def SomaVetores(u,v): z = [] if len(u)!=len(v): return z for i in range(len(u)): z.append( u[i] + v[i] ) return z print (SomaVetores(u,v)) Explanation: Alternativamente poderíamos implementar funções que realizassem tais operações: End of explanation import numpy as np u = np.array([1,2,3]) v = np.array([4,5,6]) print (u + v) print (u + 1) print (u*v )# produto elemento a elemento print (np.dot(u,v)) # produto interno print (np.outer(u,v) )# produto externo Explanation: Mas a biblioteca numpy contém o tipo array que permite trabalhar com vetores e matrizes de maneira simplificada. End of explanation A = np.array( [ [1,2], [3,4] ] ) B = np.array( [ [5,6], [7,8] ] ) print (A+B, '\n') print (A*B, '\n' ) # multiplicação elemento a elemento print (np.dot(A,B), '\n' )# multiplicação de matrizes print (A.T, '\n') # transposta print (np.linalg.inv(A), '\n' ) # inversa Explanation: Analogamente com matrizes: End of explanation print (u[0]) print (v[0:1]) print (A[0,0]) print (A[0,:]) Explanation: Para acessar os elementos utilizamos pares de colchetes exatamente como nas listas. Para matrizes colocamos os dois índices dentro dos colchetes. End of explanation u = np.array([i*i for i in range(10)]) print (u[ [1,3,5] ]) Explanation: Adicionalmente podemos passar uma lista de índices que queremos acessar: End of explanation print (A.shape) Explanation: O tamanho de um vetor ou matriz pode ser obtido com o comando .shape. 
End of explanation m,n = 5,3 A = np.zeros((m,n)) # repare no parentes duplo B = np.ones((m,n)) C = np.ones((m,n))*3 print (A, '\n') print (B, '\n') print (C, '\n') Explanation: Podemos criar uma Matriz com todos elementos iguais a $0$ ou $1$ ou até mesmo uma constante $c$. End of explanation print (np.cos(C), '\n') print (np.arange(0,1,0.1), '\n') D = np.random.random( (m,n) ) # matriz de números aleatórios de tamanho m x n print (D) print (D[ D<0.5]) # somente os elementos menores que 0.5 idx = np.where(D<0.5) # índices dos elementos < 0.5 print (idx, '\n') print (D[idx]) print (D.max(), D.argmax()) # máximo valor e índice do máximo print (D.min(), D.argmin()) # mínimo valor e índice do mínimo print( D.mean(), D.std(), D.var()) # média e desvio-padrão e variância. print (D.mean( axis=0 )) # média das colunas print (D.mean(axis=1) ) # média das linhas print (D.sum(axis=0)) # soma das colunas # matriz aleatória gaussiana de média 1.7 e variância 0.5 D = np.random.normal( loc=1.7, scale=0.5, size=(m,n) ) print (D) # random.choice() escolhe um elemento da lista dada a dist. de probabilidade p print (np.random.choice( ['cara', 'coroa'], p=[0.5,0.5] )) # moeda honesta print (np.random.choice( ['cara', 'coroa'], p=[0.8,0.2] )) # moeda desonesta Explanation: A biblioteca numpy conta com todas as funções da biblioteca math e outras diversas funções úteis para cálculo numérico e álgebra. End of explanation def MonteCarlo( p, maxit ): Retorna a probabilidade de cara e coroa utilizando a função random.choice com probabilidade p caras = 0 for it in range(maxit): moeda = np.random.choice( ['cara','coroa'], p=p ) if moeda == 'cara': caras += 1 return np.array( [caras/float(maxit), 1.0 - caras/float(maxit)] ) def Entropia( p ): Calcula a entropia do vetor de probabilidades p return -(p*np.log2(p)).sum() honesta = np.array([0.5, 0.5]) desonesta = np.array([0.8,0.2]) print( Entropia( MonteCarlo( honesta, 10000 ) ), Entropia( honesta )) print (Entropia( MonteCarlo( desonesta, 10000 ) ) , Entropia( desonesta)) Explanation: Vamos estimar a entropia de uma moeda honesta e de uma moeda desonesta utilizando Método de Monte Carlo: End of explanation def CriaFloresta( m,n,p ): Cria uma floresta com 0s e 1s return np.random.choice( [0,1], size=(m,n), p=[1.0-p,p]) Explanation: Simulação de Incêndio em Floresta Dada uma floresta representada por uma matriz $m \times n$ em que os elementos são: $0$ caso aquele ponto esteja vazio $1$ se contém uma árvore $2$ se a árvore está pegando fogo Utilizando a função random.choice() da biblioteca numpy, inicialize a matriz com $0$'s e $1$'s de tal forma que uma árvore é criada com probabilidade $p$ para criação de árvores. Utilize a opção size para determinar o tamanho da matriz. End of explanation def SimulaIncendio( F ): Simula o incêndio em uma floresta FocoIncendio(F) while temFogo(F): AtualizaFloresta(F) return F Explanation: Crie uma função que faz o seguinte: Bota fogo em um ponto aleatório da floresta Repita enquanto existir fogo: Para cada árvore pegando fogo, espalha fogo para as árvores vizinhas, apagando o fogo dessa árvore em seguida Podemos imaginar o código da seguinte maneira: End of explanation def FocoIncendio(F): # vamos pegar todos os pares (i,j) de índices contendo o valor 1 i,j = np.where(F==1) # escolhe um deles para ser o foco de incêndio quantos = len( i ) foco = np.random.choice( range(quantos) ) F[ i[foco], j[foco] ] = 2 Explanation: Agora vamos fazer cada uma dessas funções em separado. Começando pela FocoIncendio(F). 
End of explanation def temFogo(F): return len(np.where(F==2)[0]) > 0 Explanation: Agora vamos implementar a função que verifica se ainda tem fogo na floresta: End of explanation def AtualizaFloresta(F): # verifica os pontos de incêndio I, J = np.where(F==2) # para cada foco de incêndio for i,j in zip(I, J): EspalhaFogo(F, i, j) ApagaFogo(F, i, j) def EspalhaFogo(F, i, j): # queremos verificar esses vizinhos de cada ponto de incêndio vizinhos = [ (-1,-1), (-1,0), (-1,1), (0,-1), (0,1), (1,-1), (1,0), (1,1) ] for k,l in vizinhos: # se está dentro do limite e for árvore, bota fogo e avisa que botou fogo if DentroFloresta(F, i+k, j+l) and temArvore(F, i+k, j+l): F[i+k][j+l] = 2 def temArvore(F, i, j): return F[i][j] == 1 def DentroFloresta(F, i, j): return 0 <= i < F.shape[0] and 0 <= j < F.shape[1] def ApagaFogo(F, i, j): F[i,j] = 0 # apaga o fogo F = CriaFloresta( 10,10, 0.3 ) F = SimulaIncendio(F) print (F) # Vamos plotar a floresta %pylab inline import matplotlib.pyplot as plt plt.imshow(F, cmap="hot") # o comando imshow plota uma matriz de valores entre 0 e 1 com tonalidades de cores plt.show(); Explanation: Finalmente implementamos a função AtualizaFloresta(F): End of explanation def MonteCarlo( m, n, N ): Simulação de Monte Carlo para verificar a maior densidade em que um incêndio não causa muito estrago em uma floresta # queremos verificar as probabilidades de 0.1 até 0.9 Probabilidades = np.arange(0.1,1.0,0.1) PF = [] # para cada probabilidade repete o processo de Monte Carlo for p in Probabilidades: media = Simula(p, m, n, N) PF.append( (p, media) ) return PF def Simula(p, m, n, N): pf = 0 for it in range(N): F = CriaFloresta( m,n,p ) F = SimulaIncendio(F) pf += F.mean() return pf/N PF = MonteCarlo( 50,50,500 ) print ('Inicial \t Final') for pf in PF: print( '%.2f \t\t %.2f' % (pf[0], pf[1])) Explanation: Finalmente realizaremos um método de Monte Carlo para estimar o percentual de árvores restantes em um incêndio em uma floresta de densidade $p$. Para cada densidade na lista [0.1, 0.2, ..., 0.9] e para cada repetição da simulação, crie uma floresta com essa densidade e simule o incêndio. Calcule o percentual de árvores sobrevientes (média da matriz F) e calcule a média desse percentual em relação a todas as simulações. End of explanation !head dolar.csv Explanation: Esse experimento mostra que em uma floresta com até 40% da área ocupada com árvores, um incêndio danifica apenas uma pequena região. Iniciando em 50%, um incêndio elimina praticamente toda a floresta. Isso é conhecido como ponto crítico. Análise da valorização do dólar Vamos realizar algumas análises estátisticas da valorização diária do dólar no período de 19/09/2014 até 17/03/2015 em relação ao Real e ao Euro. Para isso, vamos utilizar o arquivo dolar.csv contendo as informações do câmbio. Vejamos o formato desse arquivo. End of explanation import numpy as np data = np.genfromtxt('dolar.csv', delimiter=';') print (data[:10,:]) # imprime os 10 primeiros registros print (data.shape[0], "registros") # As médias do US$/EURO e US$/R$ são: print ("Média: ", data[:,1].mean(), data[:,2].mean()) # E o desvio-padrão: print ("Desvio-padrão: ", data[:,1].std(), data[:,2].std()) # Vamos plotar a tendência de cada um # O eixo x é a primeira coluna [0,1,2...] 
and the y axis will be the second and third columns plt.figure(figsize=(10,10)) plt.style.use('fivethirtyeight') plt.title("Valorização do Dólar") plt.plot(data[:,0], data[:,1], color='green', label="Euro") plt.plot(data[:,0], data[:,2], color='blue', label="Real") plt.legend() plt.show() polyEUR = np.polyfit( data[:,0], data[:,1], 5 ) polyR = np.polyfit( data[:,0], data[:,2], 5 ) print (polyEUR) print( polyR) polyEURf = np.poly1d(polyEUR) polyRf = np.poly1d(polyR) plt.figure(figsize=(10,10)) plt.style.use('fivethirtyeight') plt.title("Valorização do Dólar") plt.plot(data[:,0], polyEURf(data[:,0]), color='green', label="Euro fit", linewidth=5) plt.plot(data[:,0], polyRf(data[:,0]), color='blue', label="Real fit", linewidth=5) plt.plot(data[:,0], data[:,1], 'o', color='green', label="Euro", markersize=10, fillstyle='none') plt.plot(data[:,0], data[:,2], 'o', color='blue', label="Real", markersize=10, fillstyle='none') plt.legend() plt.show() # Let's see what lies ahead: print ("Próximos dez dias do Real: ", polyRf(range(180,190))) print() print( "Próximos dez dias do Euro: ", polyEURf(range(180,190))) Explanation: Each row represents one of the analysed dates: the first column indicates which date it is (0 being the first date), the second column is the value of the Euro against the Dollar, and the third column is the value of the Real against the Dollar. Note that the columns are separated by ';'. Reading a file in this format into a numpy matrix is done with the genfromtxt command. This function accepts several parameters, but only two are of interest to us: genfromtxt( file_name, delimiter = 'delimiter' ) Our delimiter is ';' End of explanation # Let's save the projections: proj = np.zeros( (60,3) ) proj[:,0] = range(180,180+60) proj[:,1] = polyEURf(proj[:,0]) proj[:,2] = polyRf(proj[:,0]) np.savetxt('dolarProj60.csv', proj, delimiter=';', fmt=['%d', '%.4f', '%.4f']) !head dolarProj60.csv Explanation: Let us save the projection for the next 60 days! To do that we have to create a matrix called proj of size 60x3. In the first column we insert the values from 180 to 240; in the second column the projection for the Euro and in the third column the projection for the Real. To save a csv file with numpy we use the savetxt command: savetxt( file_name, matrix, delimiter='delimiter', fmt=[ format_list ]) The format list gives the data type of each column of the matrix; in our case the first column is an integer ( %d) and the two following columns are floats formatted with 4 decimal places ( %.4f). End of explanation
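A cautionary note from the editor: degree-5 polynomial fits can behave erratically outside the fitted interval, so the 60-day projections saved above are best read as an exercise rather than a forecast. A small self-contained illustration on synthetic data (not dolar.csv):
```
import numpy as np

# Synthetic series: a degree-5 polynomial fitted to 180 points of an almost flat,
# noisy signal, then evaluated 20 and 60 points past the fitted range.
rng = np.random.default_rng(0)
x = np.arange(180)
y = 2.5 + 0.001 * x + rng.normal(0, 0.02, size=x.size)
p = np.poly1d(np.polyfit(x, y, 5))
print(p(179), p(200), p(240))   # last fitted point vs. two extrapolated points
```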
2,069
Given the following text description, write Python code to implement the functionality described below step by step Description: Denoise algorithm This notebook defines the denoise algorithm (step C defined in Towsey 2013) and compares the speed of different implementations. This is a step in processing recordings of the natural environment that "better preserves the structural integrity of complex acoustic events (e.g. bird calls) but removes noise from background locations further removed from that event (Towsey 2013)." Towsey, Michael W. (2013) Noise removal from wave-forms and spectrograms derived from natural recordings of the environment. Required packages numba <br /> scipy <br /> numpy <br /> matplotlib <br /> pyprind Import statements Step1: python implementation using pure python Step2: scipy implementation using scipy.ndimage.generic_filter—the custom callback function is just-in-time compiled by numba Step3: numba implementation of a universal function via numba.guvectorize Step4: serial version Step5: parallel version Step6: check results test the implementations on a randomly generated dataset and verfiy that all the results are the same Step7: check if the different implementations produce the same result Step8: plot results Step9: profile for different data sizes time the different implementations on different dataset sizes Step10: plot profile results
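To make the neighbourhood rule concrete before the timed implementations below, here is a minimal toy version of the step described above (a 9 x 3 frequency-by-time neighbourhood with a mean threshold of 10); the array sizes and values used here are illustrative only.
```
import numpy as np

# Toy version of the rule: replace the centre value by the local minimum when the
# local mean is below the threshold, otherwise keep the centre value.
def denoise_pixel(spec, f, t, thresh=10):
    hood = spec[f - 4:f + 5, t - 1:t + 2]
    return hood.min() if hood.mean() < thresh else spec[f, t]

spec = np.random.rand(20, 30) * 5     # quiet background, local means well below 10
spec[6:15, 10:20] = 27                # one loud rectangular "event"
print(denoise_pixel(spec, 10, 15))    # inside the event: mean >= 10, centre kept (27.0)
print(denoise_pixel(spec, 5, 3))      # background: mean < 10, replaced by the local min
```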
Python Code: import numpy as np from scipy.ndimage import generic_filter from numba import jit, guvectorize, float64 import pyprind import matplotlib.pyplot as plt %matplotlib inline Explanation: Denoise algorithm This notebook defines the denoise algorithm (step C defined in Towsey 2013) and compares the speed of different implementations. This is a step in processing recordings of the natural environment that "better preserves the structural integrity of complex acoustic events (e.g. bird calls) but removes noise from background locations further removed from that event (Towsey 2013)." Towsey, Michael W. (2013) Noise removal from wave-forms and spectrograms derived from natural recordings of the environment. Required packages numba <br /> scipy <br /> numpy <br /> matplotlib <br /> pyprind Import statements End of explanation def denoise(a, b): for channel in range(2): for f_band in range(4, a.shape[1] - 4): for t_step in range(1, a.shape[2] - 1): neighborhood = a[channel, f_band - 4:f_band + 5, t_step - 1:t_step + 2] if neighborhood.mean() < 10: b[channel, f_band, t_step] = neighborhood.min() else: b[channel, f_band, t_step] = neighborhood[4, 1] return b Explanation: python implementation using pure python End of explanation @jit(nopython=True) def filter_denoise(neighborhood): if neighborhood.mean() < 10: return neighborhood.min() else: return neighborhood[13] def denoise_scipy(a, b): for channel in range(2): b[channel] = generic_filter(input=a[channel], function=filter_denoise, size=(9, 3), mode='constant') return b Explanation: scipy implementation using scipy.ndimage.generic_filter—the custom callback function is just-in-time compiled by numba End of explanation # just removed return statement def denoise_guvectorize(a, b): for channel in range(2): for f_band in range(4, a.shape[1] - 4): for t_step in range(1, a.shape[2] - 1): neighborhood = a[channel, f_band - 4:f_band + 5, t_step - 1:t_step + 2] if neighborhood.mean() < 10: b[channel, f_band, t_step] = neighborhood.min() else: b[channel, f_band, t_step] = neighborhood[4, 1] Explanation: numba implementation of a universal function via numba.guvectorize End of explanation denoise_numba = guvectorize('float64[:,:,:], float64[:,:,:]', '(c,f,t)->(c,f,t)', nopython=True)(denoise_guvectorize) Explanation: serial version End of explanation denoise_parallel = guvectorize('float64[:,:,:], float64[:,:,:]', '(c,f,t)->(c,f,t)', nopython=True, target='parallel')(denoise_guvectorize) Explanation: parallel version End of explanation size = 100 data = np.random.rand(2, size, int(size*1.5)) data[:, int(size/4):int(size/2), int(size/4):int(size/2)] = 27 result_python = denoise(data, np.zeros_like(data)) result_scipy = denoise_scipy(data, np.zeros_like(data)) result_numba = denoise_numba(data, np.zeros_like(data)) result_parallel = denoise_parallel(data, np.zeros_like(data)) Explanation: check results test the implementations on a randomly generated dataset and verfiy that all the results are the same End of explanation assert np.allclose(result_python, result_scipy) and np.allclose(result_python, result_numba) Explanation: check if the different implementations produce the same result End of explanation fig, ax = plt.subplots(2,2) fig.set_figheight(8) fig.set_figwidth(12) im1 = ax[0, 0].imshow(data[0], cmap='viridis', interpolation='none', vmax=1) t1 = ax[0, 0].set_title('data') im2 = ax[0, 1].imshow(result_python[0], cmap='viridis', interpolation='none', vmax=1) t1 = ax[0, 1].set_title('pure python') im3 = ax[1, 0].imshow(result_scipy[0], 
cmap='viridis', interpolation='none', vmax=1) t1 = ax[1, 0].set_title('scipy') im4 = ax[1, 1].imshow(result_numba[0], cmap='viridis', interpolation='none', vmax=1) t1 = ax[1, 1].set_title('numba') Explanation: plot results End of explanation sizes = [30, 50, 100, 200, 400, 800, 1600] progress_bar = pyprind.ProgBar(iterations=len(sizes), track_time=True, stream=1, monitor=True) t_python = np.empty_like(sizes, dtype=np.float64) t_scipy = np.empty_like(sizes, dtype=np.float64) t_numba = np.empty_like(sizes, dtype=np.float64) t_parallel = np.empty_like(sizes, dtype=np.float64) for size in range(len(sizes)): progress_bar.update(item_id=sizes[size]) data = np.random.rand(2, sizes[size], sizes[size])*0.75 t_1 = %timeit -oq denoise(data, np.zeros_like(data)) t_2 = %timeit -oq denoise_scipy(data, np.zeros_like(data)) t_3 = %timeit -oq denoise_numba(data, np.zeros_like(data)) t_4 = %timeit -oq denoise_parallel(data, np.zeros_like(data)) t_python[size] = t_1.best t_scipy[size] = t_2.best t_numba[size] = t_3.best t_parallel[size] = t_4.best Explanation: profile for different data sizes time the different implementations on different dataset sizes End of explanation fig, ax = plt.subplots(figsize=(15,5)) p1 = ax.loglog(sizes, t_python, color='black', marker='.', label='python') p2 = ax.loglog(sizes, t_scipy, color='blue', marker='.', label='scipy') p3 = ax.loglog(sizes, t_numba, color='green', marker='.', label='numba') p4 = ax.loglog(sizes, t_parallel, color='red', marker='.', label='parallel') lx = ax.set_xlabel("data array size (2 x n x n elements)") ly = ax.set_ylabel("time (seconds)") t1 = ax.set_title("running times of the 'denoise' algorithm") ax.grid(True, which='major') l = ax.legend() Explanation: plot profile results End of explanation
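For readers new to numba, the sketch below is a minimal generalized-ufunc example written in the same style as the guvectorize call used above; it is independent of the denoise data and only illustrates how the type signature and the layout string fit together.
```
import numpy as np
from numba import guvectorize

# Minimal sketch: the first argument lists the dtypes, the layout string '(n)->(n)'
# maps the input dimension to the output, and the result array is filled in place.
def _running_max(x, out):
    m = x[0]
    for i in range(x.shape[0]):
        if x[i] > m:
            m = x[i]
        out[i] = m

running_max = guvectorize('float64[:], float64[:]', '(n)->(n)', nopython=True)(_running_max)
print(running_max(np.array([1.0, 3.0, 2.0, 5.0, 4.0])))   # [1. 3. 3. 5. 5.]
```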
2,070
Given the following text description, write Python code to implement the functionality described below step by step Description: Introduction to Classification The goal of a classification task is to predict whether a given observation in a dataset (e.g. a text in a collection of text files) possesses some particular property or attribute (e.g. was written by a woman). To make these predictions, we generally measure the attributes of several labelled data observations, then compare new unlabelled observations to those measurements. For example, let's examine a small corpus of text files from the Philosophical Transactions, a scientific periodical published in England since 1665. Each of our samples from this corpus is either an archeological or medical text. Let's look at a sample archeological file Step1: Next let's look at a sample medical file Step2: As we can see above, our archeological files are stored in data_a/archeological while our medical files are stored in data_a/medical. Let's keep this in mind below. Given this data, we might build a classifier to determine whether a given text is from the collection is from the archeological or medical genre. Our first approach to this task will use a simple method--we will simply count the number of times the word "bones" occurs in the text. If that word occurs more than 5 times, we will classify the text as an archeological text; otherwise, we'll classify the text as medical Step3: These results show our simple classification model does a pretty good job of classifying a text as archeological or medical! As we recall, however, this classifier is based on counts of a single word in each text file. As you might guess, increasing the number of words that the model uses will increase the power of our model. Let's see how to use multiple word counts in a classifier model below. Working with Multiple Dimensions In the example above, we used the counts of a single word to classify whether a document was a mathematical or medical text. Another way of stating that fact is to say we used just a single "dimension" of data to perform our classification--where "dimension" here refers to the count of a single word. We can plot that single dimension of data as follows Step4: As we can see, the archeological texts tend to have higher counts of the word "bones" than medical texts. If we add the counts of the word "fossil" on the y axis of our plot, we can display a 2D data model Step5: Comparing this plot to the one above it, we can see adding a second "dimension" of data increases the separation between our two document types. This is good, from a model-building perspective, as the greater the separation between our classes, the easier it will be for an algorithm to classify the observations from either class. We'll continue to use this 2D data model as we explore K-Nearest Neighbors classification below. <h1 style='color Step6: <details> <summary>Solution</summary> We can create this plot by modifying the code above slightly Step7: The classifier above takes as input the counts of the words "bones" and "fossil" for a single text, then predicts the genre label for that text based on those counts. Stated more precisely, the classifier.fit() method takes a 2D list counts and a 1D list labels. The counts 2D list is a list of lists, with one sublist for each text in the corpus. Each of those sublists contains a sequence of numbers that represent the number of times a given words occurs in the given text file. 
Note that the order of the word counts in these sublists is very important--each sublist must include the counts of each word being counted in the same order. Now that we have trained our classifier, we can now ask this classifier to classify text records for which we don't have genre labels. Let's suppose we have a text file that includes the word "bones" five times and the word "fossil" once. We can predict the genre of this text with the following method Step8: Given a text with 5 counts of the word "bones" and 1 count of the word "fossil", the model classifies that text as a "medical" rather than "mathematical" text. That's all it takes to classify texts with Python! <h1 style='color Step9: <details> <summary>Solution</summary> We can train this classifier with the following code Step10: For each pixel in the plot above, we retrieve the 3 closest points with known labels. We then use a majority vote of those labels to assign the label of the pixel. This is exactly analogous to predicting a label for unlabelled point&mdash;in both cases, we take a majority vote of the 3 closest points with known labels. Working in this way, we can use labelled data to classify unlabelled data. That's all there is to K-Nearest Neighbors classification! It's worth noting that K-Nearest Neighbors is only one of many popular classification algorithms. From a high-level point of view, each classification algorithm works in a similar way&mdash;each requires a certain number of observations with known labels, and each uses those labelled observations to classify unlabelled observations. However, different classification algorithms use different logic to assign unlabelled observations to groups, which means different classification algorithms have very different decision boundaries. In the chart below [source], each row plots the decision boundaries several classifiers give the same dataset. Notice how some classifiers work better with certain data shapes Step11: counts above is a list with the words contained in our Term Document Matrix. Note that the order matters! Just as we saw in our example above, counts is an example of a 2D array, or a list of lists. counts contains one sublist for each input document, and each of those sublists has a count for each word in labels. In this way, counts and labels work together to express the Term Document Matrix for our input data. Once we have the Term Document Matrix counts, we can use that matrix to create a classifier that predicts the labels of unlabelled observations. Let's see how this looks in practice Step12: That's all it takes to create a Term Document Matrix of the word counts in each of our documents. We can investigate that Term Document Matrix by printing counts and words Step13: As we can see, our list of words contains a large number of terms, only a sample of which is printed above. We can also see that our word counts matrix has many rows and columns (signified by the ellipses above), most of which contain 0-value cells (as most words don't occur in most documents). We can now use the same syntax we used above to train a KNN classifier on that corpus Step14: Just as before, we can then use our classifier to predict the genre of certain text files. In the example below, we'll ask the classifier to predict the genre of each file in data_c. Step15: The results look pretty good! We can see our classifier correctly identifies the genre of our four new texts, despite the fact that the classifier had never seen those files before. 
This means our classifier has "learned" some of the features that diffentiate archeological and medical texts! <h1 style='color Step16: <h4 style='color Step17: <details> <summary>Solution</summary> We can accomplish these two goals with the following
Python Code: import os open(os.path.join('data_a', 'archeological', '10.2307_104838.txt')).read() Explanation: Introduction to Classification The goal of a classification task is to predict whether a given observation in a dataset (e.g. a text in a collection of text files) possesses some particular property or attribute (e.g. was written by a woman). To make these predictions, we generally measure the attributes of several labelled data observations, then compare new unlabelled observations to those measurements. For example, let's examine a small corpus of text files from the Philosophical Transactions, a scientific periodical published in England since 1665. Each of our samples from this corpus is either an archeological or medical text. Let's look at a sample archeological file: End of explanation import os open(os.path.join('data_a', 'medical', '10.2307_105060.txt')).read() Explanation: Next let's look at a sample medical file: End of explanation import os import glob import collections # specify the files to process - nb: * matches all files and directories path = os.path.join('data_a', '*', '*.txt') # iterate over each file matched above for i in glob.glob(path): # read the current file text = open(i).read() # split the current file into a list of words words = text.split() # count the number of times each word in the file occurs counts = collections.Counter(words) # classify this file as a mathematical or medical text prediction = 'archeological' if counts['bones'] > 5 else 'medical' # print the prediction print(prediction, '--', i) Explanation: As we can see above, our archeological files are stored in data_a/archeological while our medical files are stored in data_a/medical. Let's keep this in mind below. Given this data, we might build a classifier to determine whether a given text is from the collection is from the archeological or medical genre. Our first approach to this task will use a simple method--we will simply count the number of times the word "bones" occurs in the text. If that word occurs more than 5 times, we will classify the text as an archeological text; otherwise, we'll classify the text as medical: End of explanation %matplotlib inline import matplotlib.pyplot as plt x_values = [] y_values = [] colors = [] # specify the files to process - nb: * matches all files and directories path = os.path.join('data_a', '*', '*.txt') # iterate over each file matched above for i in glob.glob(path): # read the current file text = open(i).read() # add this text's count of the first keyword to the x axis values x_values.append( text.count('bones') ) # use the same y axis value for all texts y_values.append(0) # make the dot blue if it's a medical dot, else red if it's a mathematical dot colors.append('blue' if 'medical' in i else 'red') # plot the 1D distribution plt.title('Samples from the Philosophical Transactions') plt.xlabel('Count of the word "bones"') plt.ylabel('') plt.scatter(x_values, y_values, c=colors) Explanation: These results show our simple classification model does a pretty good job of classifying a text as archeological or medical! As we recall, however, this classifier is based on counts of a single word in each text file. As you might guess, increasing the number of words that the model uses will increase the power of our model. Let's see how to use multiple word counts in a classifier model below. Working with Multiple Dimensions In the example above, we used the counts of a single word to classify whether a document was a mathematical or medical text. 
Another way of stating that fact is to say we used just a single "dimension" of data to perform our classification--where "dimension" here refers to the count of a single word. We can plot that single dimension of data as follows: End of explanation x_values = [] y_values = [] colors = [] # specify the files to process - nb: * matches all files and directories path = os.path.join('data_a', '*', '*.txt') # iterate over each file matched above for i in glob.glob(path): # read the current file text = open(i).read() # add this text's count of the first keyword to the x axis values x_values.append( text.count('bones') ) # add this text's count of the second keyword to the y axis values y_values.append( text.count('fossil') ) # make the dot blue if it's a medical dot, else red if it's a mathematical dot colors.append('blue' if 'medical' in i else 'red') # plot the 1D distribution plt.title('Samples from the Philosophical Transactions') plt.xlabel('Count of the word "bones"') plt.ylabel('Count of the word "fossil"') plt.scatter(x_values, y_values, c=colors) Explanation: As we can see, the archeological texts tend to have higher counts of the word "bones" than medical texts. If we add the counts of the word "fossil" on the y axis of our plot, we can display a 2D data model: End of explanation # type your code here Explanation: Comparing this plot to the one above it, we can see adding a second "dimension" of data increases the separation between our two document types. This is good, from a model-building perspective, as the greater the separation between our classes, the easier it will be for an algorithm to classify the observations from either class. We'll continue to use this 2D data model as we explore K-Nearest Neighbors classification below. <h1 style='color:green'>Reviewing Multidimensional Classification</h1> Using the approach we discussed above, see if you can plot the counts of the words "parallax" and "angle" in data_b/*/*.txt. That folder contains texts that are either astronomical or geometrical. Color the astronomical texts blue and the non-astronomical texts red. Hint: once again, you may find it helpful to copy the code block we used above and modify that code block! 
End of explanation from sklearn.neighbors import KNeighborsClassifier import numpy as np import glob import os # create the 2D dataset we used above counts = [] labels = [] # identify the data we will use for our analysis path = os.path.join('data_a', '*', '*.txt') # iterate over each file for i in glob.glob(path): # get the text content for this file text = open(i).read() # add the counts to our data counts.append([ text.count('bones'), text.count('fossil'), ]) # add the label for this text to our data labels.append('archeological' if 'archeological' in i else 'medical') # create a KNN classifier using 3 as the value of K classifier = KNeighborsClassifier(3) # "fit" the classifier by showing it our labelled data classifier.fit(counts, labels) Explanation: <details> <summary>Solution</summary> We can create this plot by modifying the code above slightly: ``` x_values = [] y_values = [] colors = [] # specify the files to process - nb: * matches all files and directories path = os.path.join('data_b', '*', '*.txt') # iterate over each file matched above for i in glob.glob(path): # read the current file text = open(i).read() # add this text's count of the first keyword to the x axis values x_values.append( text.count('parallax') ) # add this text's count of the second keyword to the x axis values y_values.append( text.count('angle') ) # make the dot blue if it's a medical dot, else red if it's a mathematical dot colors.append('blue' if 'astronomical' in i else 'red') # plot the 1D distribution plt.title('Samples from the Philosophical Transactions') plt.xlabel('Count of the word "parallax"') plt.ylabel('Count of the word "angle"') plt.scatter(x_values, y_values, c=colors) ``` </details> Classification using K-Nearest Neighbors In our initial classification experiment, we classified texts as "archeological" if they included the word "bones" more than five times. That algorithm worked pretty well, but it required us to choose a keyword and then specify the threshold (>5) counts of our selected word, which is pretty clumsy. As it turns out, one can use algorithmic classification techniques that don't require one to specify any thresholds. There are many such classification algorithms, but we will focus on just one of them: K-Nearest Neighbors. With a K-Nearest Neighbors Classifier, we start with a labelled dataset (e.g. a collection of texts with genre labels). We then add new, unlabelled observations to the dataset. For each of these new observations, we consult the K labelled observations to which the unlabelled observation is closest, where K is an odd integer we use for all classifications. We then find the most common label among those K observations (the "K nearest neighbors") and give the new observation that label. The following diagram shows this scenario. Our new observation (represented by the question mark) has some points near it that are labelled with a triangle or star. Suppose we have chosen to use 3 for our value of K. In that case, we consult the 3 nearest labelled points near the question mark. Those 3 nearest neighbors have labels: star, triangle, triangle. Using a majority vote, we give the question mark a triangle label. <img src='./images/knn.gif'> Examining the plot above, we can see that if K were set to 1, we would classify the question mark as a star, but if K is 3 or 5, we would classify the question mark as a triangle. That is to say, K is an important parameter in a K Nearest Neighbors classifier. 
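A tiny from-scratch illustration of the majority vote just described, using synthetic 2D points rather than the word counts above, shows how the choice of K can flip the predicted label:
```
import numpy as np
from collections import Counter

# Synthetic points: sort by Euclidean distance to the query, then take a majority
# vote over the k closest labels.
points = np.array([[1, 1], [2, 1], [3, 3], [4, 4], [4, 3]])
labels = ['star', 'triangle', 'triangle', 'star', 'star']
query = np.array([3, 2])

def knn_vote(k):
    order = np.argsort(np.linalg.norm(points - query, axis=1))
    return Counter(labels[i] for i in order[:k]).most_common(1)[0][0]

for k in (1, 3, 5):
    print(k, knn_vote(k))   # the label changes between k=3 and k=5 for these points
```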
To show how to execute this classification in Python, let's show how we can use our labelled book data to classify an unlabelled text record: End of explanation classifier.predict([[5, 1]]) Explanation: The classifier above takes as input the counts of the words "bones" and "fossil" for a single text, then predicts the genre label for that text based on those counts. Stated more precisely, the classifier.fit() method takes a 2D list counts and a 1D list labels. The counts 2D list is a list of lists, with one sublist for each text in the corpus. Each of those sublists contains a sequence of numbers that represent the number of times a given words occurs in the given text file. Note that the order of the word counts in these sublists is very important--each sublist must include the counts of each word being counted in the same order. Now that we have trained our classifier, we can now ask this classifier to classify text records for which we don't have genre labels. Let's suppose we have a text file that includes the word "bones" five times and the word "fossil" once. We can predict the genre of this text with the following method: End of explanation # type your code here Explanation: Given a text with 5 counts of the word "bones" and 1 count of the word "fossil", the model classifies that text as a "medical" rather than "mathematical" text. That's all it takes to classify texts with Python! <h1 style='color:green'>Reviewing KNN Classification</h1> Challenge 1: Using the approach we discussed above, see if you can train your own classifier. To train this classifier, use the counts of the words "parallax" and "angle" within files in data_b/*/*.txt. Each text in the astronomical directory should have a class label of astronomical, and each text in the geometrical directory should have a class label geometrical. Challenge 2: Once you have trained your classifier, see if you can predict the genre of a text that contains 10 instances of the word "parallax" and 1 instance of the word "angle". Hint: once again, you may find it helpful to copy the code block we used above and modify that code block! 
End of explanation %matplotlib inline import matplotlib.pyplot as plt import helpers # create a KNN classifier using 3 as the value of K classifier = KNeighborsClassifier(3) # "fit" the classifier by showing it our labelled data classifier.fit(counts, labels) # use a helper function to plot the trained classifier's decision boundary helpers.plot_decision_boundary(classifier, counts, labels) plt.title('K-Nearest Neighbors: Classifying the Philosophical Transactions') plt.xlabel('occurrences of word bones') plt.ylabel('occurrences of word fossil') plt.show() Explanation: <details> <summary>Solution</summary> We can train this classifier with the following code: ``` from sklearn.neighbors import KNeighborsClassifier import numpy as np import glob import os # create the 2D dataset we used above counts = [] labels = [] # identify the data we will use for our analysis path = os.path.join('data_b', '*', '*.txt') # iterate over each file for i in glob.glob(path): # get the text content for this file text = open(i).read() # add the counts to our data counts.append([ text.count('parallax'), text.count('angle'), ]) # add the label for this text to our data labels.append('astronomical' if 'astronomical' in i else 'geometrical') # create a KNN classifier using 3 as the value of K classifier = KNeighborsClassifier(3) # "fit" the classifier by showing it our labelled data classifier.fit(counts, labels) # predict the genre label for a text that contains 10 instances of parallax and 1 instance of angle classifier.predict([[10, 1]]) ``` </details> Decision Boundaries The classification example above shows how we can classify just a single point in space, but suppose we want to analyze the way a classifier would classify each possible point in some space. To do so, we can transform our space into a grid of units, then classify each point in that grid. Analyzing a space in this way is known as identifying a classifier's decision boundary, because this analysis shows one the boundaries between different classification outcomes in the space. This kind of analysis is very helpful in training machine learning models, because studying a classifier's decision boundary can help one see how to improve the classifier. Let's plot our classifier's decision boundary below: End of explanation from sklearn.feature_extraction.text import CountVectorizer # use a "vectorizer" from sklearn to count our word occurrences vectorizer = CountVectorizer() # specify a list of strings to process -- each string represents a document corpus = [ 'This is the first document.', 'This document is the second document.', 'And this is the third one.', 'Is this the first document?', ] # process the corpus into counts, which has one row per document and one column per unique word counts = vectorizer.fit_transform(corpus).toarray() # obtain the words that correspond to the columns in `counts` words = vectorizer.get_feature_names() # print the words print(words) # print the word counts in each row print(counts) Explanation: For each pixel in the plot above, we retrieve the 3 closest points with known labels. We then use a majority vote of those labels to assign the label of the pixel. This is exactly analogous to predicting a label for unlabelled point&mdash;in both cases, we take a majority vote of the 3 closest points with known labels. Working in this way, we can use labelled data to classify unlabelled data. That's all there is to K-Nearest Neighbors classification! 
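The plot above relies on a local helpers module. For readers who do not have it, the following rough sketch shows the idea behind such a plot, namely classifying every point of a grid and colouring the result; the training data here is synthetic, and with the real data you would substitute the word counts and genre labels.
```
import numpy as np
import matplotlib.pyplot as plt
from sklearn.neighbors import KNeighborsClassifier

# Sketch of a decision-boundary plot without the helpers module.
X = np.array([[1, 1], [2, 3], [2, 1], [8, 7], [9, 9], [7, 8]])
y = np.array([0, 0, 0, 1, 1, 1])
clf = KNeighborsClassifier(3).fit(X, y)

# classify every point of a fine grid, then colour the grid by predicted class
xx, yy = np.meshgrid(np.arange(0, 10, 0.1), np.arange(0, 10, 0.1))
zz = clf.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)

plt.contourf(xx, yy, zz, alpha=0.3)
plt.scatter(X[:, 0], X[:, 1], c=y)
plt.show()
```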
It's worth noting that K-Nearest Neighbors is only one of many popular classification algorithms. From a high-level point of view, each classification algorithm works in a similar way&mdash;each requires a certain number of observations with known labels, and each uses those labelled observations to classify unlabelled observations. However, different classification algorithms use different logic to assign unlabelled observations to groups, which means different classification algorithms have very different decision boundaries. In the chart below [source], each row plots the decision boundaries several classifiers give the same dataset. Notice how some classifiers work better with certain data shapes: <img src='images/scikit_decision_boundaries.png'> For an intuitive introduction to many of these classifiers, including Support Vector Machines, Decision Trees, Neural Networks, and Naive Bayes classifiers, see Luis Serrano's introduction to machine learning video. Term Document Matrices and Vector Space Models In our simple classifier above, we used the count of a single word to predict the genre of each text in our sample corpus. As you can imagine, however, using more words allows us to make better predictions. Let's see how this works below. To create a classifier from multiple words, we typically make what is known as a "Term Document Matrix" (TDM). A Term Document Matrix records the number of times each unique word type in a corpus occurs in each document within that corpus: <img src='./images/term-document-matrix.png'> The TDM above indicates that document "D1" includes the word "complexity" 2 times, the word "algorithm" 3 times, and so on. This TDM could be an Excel spreadsheet, with one row for each unique term within the corpus and one column for each document within the corpus. Since we are working in Python, however, we will build our Term Document Matrix with code: End of explanation from sklearn.feature_extraction.text import CountVectorizer # use a "vectorizer" from sklearn to count our word occurrences vectorizer = CountVectorizer() # create a list that will hold the text content of each file corpus = [] # create a list that will hold the class label for each file labels = [] # loop over the documents to process for i in glob.glob( os.path.join('data_a', '*', '*.txt') ): # add the words from this document to the corpus corpus.append( open(i).read() ) # add the class label for this file labels.append('archeological' if 'archeological' in i else 'medical') # process the corpus into X, which has one row per document and one column per unique word counts = vectorizer.fit_transform(corpus).toarray() # identify the column labels words = vectorizer.get_feature_names() Explanation: counts above is a list with the words contained in our Term Document Matrix. Note that the order matters! Just as we saw in our example above, counts is an example of a 2D array, or a list of lists. counts contains one sublist for each input document, and each of those sublists has a count for each word in labels. In this way, counts and labels work together to express the Term Document Matrix for our input data. Once we have the Term Document Matrix counts, we can use that matrix to create a classifier that predicts the labels of unlabelled observations. 
Let's see how this looks in practice: End of explanation # print a sample of the column labels print(words[5000:5050]) # print a sample of the word counts in each row print(counts[:10]) Explanation: That's all it takes to create a Term Document Matrix of the word counts in each of our documents. We can investigate that Term Document Matrix by printing counts and words: End of explanation # create a KNN classifier using 3 as the value of K classifier = KNeighborsClassifier(3) # "fit" the classifier by showing it our labelled data classifier.fit(counts, labels) Explanation: As we can see, our list of words contains a large number of terms, only a sample of which is printed above. We can also see that our word counts matrix has many rows and columns (signified by the ellipses above), most of which contain 0-value cells (as most words don't occur in most documents). We can now use the same syntax we used above to train a KNN classifier on that corpus: End of explanation # identify the files to process path = os.path.join('data_c', '*', '*.txt') # iterate over each file for i in glob.glob(path): # read the current file text = open(i).read() # count the words in this file word_counts = collections.Counter(text.split()) # get the word counts for this file text_counts = [word_counts.get(i, 0) for i in words] # predict the genre for this file prediction = classifier.predict([text_counts]) # print the file path and the prediction print(prediction, '--', i) Explanation: Just as before, we can then use our classifier to predict the genre of certain text files. In the example below, we'll ask the classifier to predict the genre of each file in data_c. End of explanation import glob import os # create a list that will hold the text content of each file corpus = [] # create a list that will hold the class labels for each file labels = [] # identify the files to process path = os.path.join('federalist-papers', '*', '*.txt') # iterate over each file for i in glob.glob(path): # determine the label for this file if 'hamilton' in i: label = 'Hamilton' elif 'madison' in i: label = 'Madison' elif 'jay' in i: label = 'Jay' else: continue # skip the disputed papers # store the text content for this file corpus.append( open(i).read() ) # store the label for this file labels.append(label) Explanation: The results look pretty good! We can see our classifier correctly identifies the genre of our four new texts, despite the fact that the classifier had never seen those files before. This means our classifier has "learned" some of the features that diffentiate archeological and medical texts! <h1 style='color:green'>Reviewing Classification: Classifying the Federalist Papers</h1> We've now covered a lot of ground on classification. We've discussed: 1) How to transform a list of files into a Term Document Matrix 2) How to identify the words that correspond to each column in teh Term Document Matrix 3) How to train a KNN classifier on that Term Document Matrix 4) How to use that trained classifier to predict the class label for unlabelled observations Now's our chance to put all of this together in a mini project challenge. Our goal is to files from The Federalist Papers to predict the author of the anonymously published Federalist Papers. The Federalist Files text files are available in: federalist-papers. If you look inside that directory, you'll see there are four subfolders: hamilton, jay, madison, and disputed. 
The first three of these contain the Federalist papers of known authorship, while disputed contains files of uncertain authorship. Just as we did in our examples above, we'll use the labelled observations to predict the unlabelled observations. Let's get started! Getting Started Our first step will be to create our word count matrix and our list of class labels for labelled text files. End of explanation from sklearn.feature_extraction.text import CountVectorizer from sklearn.neighbors import KNeighborsClassifier vectorizer = CountVectorizer() classifier = KNeighborsClassifier(3) Explanation: <h4 style='color:green'>Challenge One: Training the Classifier</h4> See if you can use the vectorizer below to obtain a variable counts that indicates the number of times each word occurs in each document. Then see if you can use that variable counts to train the classifier defined below: End of explanation # count the word occurrences in each document counts = vectorizer.fit_transform(corpus).toarray() # fit the classifier on the counts and labels classifier.fit(counts, labels) # obtain a list of the words in this TDM words = vectorizer.get_feature_names() # specify the path to the disputed papers path = os.path.join('federalist-papers', 'disputed', '*') # iterate over each disputed paper for i in glob.glob(path): # read the file text = open(i).read() # count the words in the file word_counts = collections.Counter(text.split()) # get the word counts for the file text_counts = [word_counts.get(i, 0) for i in words] # type your code here... Explanation: <details> <summary>Solution</summary> We can accomplish these two goals with the following: ``` counts = vectorizer.fit_transform(corpus).toarray() classifier.fit(counts, labels) ``` </details> <h4 style='color:green'>Challenge Two: Classifying Disputed Papers</h4> We now have a trained KNN classifier. To predict the class label ("Hamilton", "Madison", or "Jay") for our so-called disputed texts, we now need to read in each of those texts, create a list of word counts using the word order in words, and then ask the classifier to predict the label for that sequence of word counts. See if you can update the code below to predict the class label for each disputed document. Hint: you'll need to use the classifier and the text_counts variables! End of explanation
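One possible completion of Challenge Two, offered as a hedged sketch rather than the official solution: it assumes the fitted classifier and the words list produced by the vectorizer in the cells above, and prints one predicted author per disputed file.
```
import os
import glob
import collections

# Hypothetical completion: `classifier` and `words` are assumed to come from the
# cells above; the loop mirrors the structure of the challenge code.
def predict_disputed(classifier, words, path=os.path.join('federalist-papers', 'disputed', '*')):
    for fname in glob.glob(path):
        word_counts = collections.Counter(open(fname).read().split())
        text_counts = [word_counts.get(w, 0) for w in words]
        print(classifier.predict([text_counts])[0], '--', fname)
```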
2,071
Given the following text description, write Python code to implement the functionality described below step by step Description: Quickstart We will be working on a mutagenicity dataset, released by Kazius et al.. 4337 compounds, provided as the file mols.sdf, were subjected to the AMES test. The results are given in labels.csv. We will clean the molecules, perform a brief chemical space analysis before finally assessing potential predictive models built on the data. Imports scikit-chem imports all subpackages with the main package, so all we need to do is import the main package, skchem. We will also need pandas. Step1: Loading the data We can use skchem.read_sdf to import the sdf file Step2: And pandas to import the labels. Step3: Quickly check the class balance Step4: And binarize them Step5: The classes are (mercifully) quite balanced. Cleaning The data is unlikely to be canonicalized, and potentially contain broken or difficult molecules, so we will now clean it. Standardization The first step is to apply a Transformer to canonicalize the representations. Specifically, we will use the ChemAxon Standardizer wrapper. Some compounds are likely to fail this procedure, however they are likely to still be valid structures, so we will use the keep_failed configuration option on the object to keep these, rather than returning a None, or raising an error. Step6: Filter undesirable molecules Next, we will remove molecules that are likely to not work well with the circular descriptors that we will use. These are usually large or inorganic molecules. To do this, we will use some Filters, which implement the filter method. Step7: Optimize Geometry We would like to calculate some features that require three dimensional coordinates, so we will next calculate three dimensional conformers using the Universal Force Field. Additionally, some compounds may be unfeasible - these should be dropped from the dataset. In order to do this, we will use the transform_filter method Step8: As we can see, we get a warning that 3 molecules failed to embed, have been dropped. If we didn't care about the warnings, we could have set the warn_on_fail property to False (or set it using a keyword argument at initialization). Conversely, if we really cared about failures, we could have set error_on_fail to True, which would raise an Error if any Mols failed to embed. Visualize Chemical Space scikit-chem adds a custom mol accessor to pandas.Series, which provides a shorthand for calling methods on all Mols in the collection. This is analogous to the str accessor Step9: We will use this function to binarize the labels Step10: Amongst other options, it is provides access to chemical space plotting functionality. This will featurize the molecules using a passed featurizer (or a string shortcut), and a dimensionality reduction technique to reduce the feature space to two dimensions, which are then plotted. In this example, we use circular Morgan fingerprints, reduced by t-SNE to visualize structural diversity in the dataset. Step11: The data appears to be reasonably separable in structural space, so we may suspect that Morgan fingerprints will be a good representation for modelling the data. Featurizing the data As previously noted, Morgan fingerprints would be a good fit for this data. To calculate them, we will use the MorganFeaturizer class, which is a Transformer. 
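For readers without scikit-chem installed, the same kind of circular (Morgan) fingerprint can be computed directly with RDKit; the radius and bit length below are common defaults, not values taken from this guide.
```
from rdkit import Chem
from rdkit.Chem import AllChem

# Circular (Morgan) fingerprint of a single molecule, computed with RDKit.
mol = Chem.MolFromSmiles('CCO')
fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)
print(fp.GetNumOnBits())
```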
Step12: Pipelining If this process appeared unnecessarily laborious (as it should!), scikit-chem provides a Pipeline class that will sequentially apply objects passed to it. For this example, we could have simply performed Step13: Modelling the data In this section, we will try building some basic scikit-learn models on the data. Partitioning the data To decide on the best model to use, we should perform some model selection. This will require comparing the relative performance of a selection of candidate models, each trained on the same train set and evaluated on a validation set. In cheminformatics, partitioning datasets usually requires some thought, as chemical datasets usually vastly overrepresent certain scaffolds, and underrepresent others. In order to get as unbiased an estimate of performance as possible, one can either downsample compounds in a region of high density, or artificially favor splits that pool in the same split molecules that are too close in chemical space. scikit-chem provides this functionality in the SimThresholdSplit class, which applies single-link hierarchical clustering to produce a large number of clusters consisting of highly similar compounds. These clusters are then randomly assigned to the desired splits, such that no split contains compounds that are more similar to compounds in any other split than the clustering threshold. Step14: Model selection Step15: Random Forests appear to work best (although we should have chosen hyperparameters using Random or Grid search). Assessing the Final performance
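A minimal sketch of the grid search alluded to above; it runs on synthetic arrays so it is self-contained, and with the real features you would pass X_train and y_train instead.
```
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Grid search over a small hyperparameter grid with 3-fold cross-validation.
X = np.random.rand(100, 16)
y = np.random.randint(0, 2, 100)
search = GridSearchCV(RandomForestClassifier(),
                      {'n_estimators': [50, 100, 200], 'max_depth': [None, 8, 16]},
                      cv=3)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```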
Python Code: import skchem import pandas as pd Explanation: Quickstart We will be working on a mutagenicity dataset, released by Kazius et al.. 4337 compounds, provided as the file mols.sdf, were subjected to the AMES test. The results are given in labels.csv. We will clean the molecules, perform a brief chemical space analysis before finally assessing potential predictive models built on the data. Imports scikit-chem imports all subpackages with the main package, so all we need to do is import the main package, skchem. We will also need pandas. End of explanation ms_raw = skchem.read_sdf('mols.sdf'); ms_raw Explanation: Loading the data We can use skchem.read_sdf to import the sdf file: End of explanation y_raw = pd.read_csv('labels.csv').set_index('name').squeeze(); y_raw Explanation: And pandas to import the labels. End of explanation y_raw.value_counts().plot.bar() Explanation: Quickly check the class balance: End of explanation y_bin = (y_raw == 'mutagen').astype(np.uint8); y_bin Explanation: And binarize them: End of explanation std = skchem.standardizers.ChemAxonStandardizer(keep_failed=True) ms = std.transform(ms_raw); ms Explanation: The classes are (mercifully) quite balanced. Cleaning The data is unlikely to be canonicalized, and potentially contain broken or difficult molecules, so we will now clean it. Standardization The first step is to apply a Transformer to canonicalize the representations. Specifically, we will use the ChemAxon Standardizer wrapper. Some compounds are likely to fail this procedure, however they are likely to still be valid structures, so we will use the keep_failed configuration option on the object to keep these, rather than returning a None, or raising an error. End of explanation of = skchem.filters.OrganicFilter() ms, y = of.filter(ms, y) mf = skchem.filters.MassFilter(above=100, below=900) ms, y = mf.filter(ms, y) nf = skchem.filters.AtomNumberFilter(above=5, below=100, include_hydrogens=True) ms, y = nf.filter(ms, y) Explanation: Filter undesirable molecules Next, we will remove molecules that are likely to not work well with the circular descriptors that we will use. These are usually large or inorganic molecules. To do this, we will use some Filters, which implement the filter method. End of explanation uff = skchem.forcefields.UFF() ms, y = uff.transform_filter(ms, y) len(ms) Explanation: Optimize Geometry We would like to calculate some features that require three dimensional coordinates, so we will next calculate three dimensional conformers using the Universal Force Field. Additionally, some compounds may be unfeasible - these should be dropped from the dataset. In order to do this, we will use the transform_filter method: End of explanation y_raw.str.get_dummies() Explanation: As we can see, we get a warning that 3 molecules failed to embed, have been dropped. If we didn't care about the warnings, we could have set the warn_on_fail property to False (or set it using a keyword argument at initialization). Conversely, if we really cared about failures, we could have set error_on_fail to True, which would raise an Error if any Mols failed to embed. Visualize Chemical Space scikit-chem adds a custom mol accessor to pandas.Series, which provides a shorthand for calling methods on all Mols in the collection. 
This is analogous to the str accessor: End of explanation y = y.str.get_dummies()['mutagen'] Explanation: We will use this function to binarize the labels: End of explanation ms.mol.visualize(fper='morgan', dim_red='tsne', dim_red_kw={'method': 'exact'}, c=y, cmap='copper') Explanation: Amongst other options, it is provides access to chemical space plotting functionality. This will featurize the molecules using a passed featurizer (or a string shortcut), and a dimensionality reduction technique to reduce the feature space to two dimensions, which are then plotted. In this example, we use circular Morgan fingerprints, reduced by t-SNE to visualize structural diversity in the dataset. End of explanation mf = skchem.descriptors.MorganFeaturizer() X, y = mf.transform(ms, y); X Explanation: The data appears to be reasonably separable in structural space, so we may suspect that Morgan fingerprints will be a good representation for modelling the data. Featurizing the data As previously noted, Morgan fingerprints would be a good fit for this data. To calculate them, we will use the MorganFeaturizer class, which is a Transformer. End of explanation pipeline = skchem.pipeline.Pipeline([ skchem.standardizers.ChemAxonStandardizer(keep_failed=True), skchem.forcefields.UFF(), skchem.filters.OrganicFilter(), skchem.filters.MassFilter(above=100, below=1000), skchem.filters.AtomNumberFilter(above=5, below=100), skchem.descriptors.MorganFeaturizer() ]) X, y = pipeline.transform_filter(ms_raw, y_raw) Explanation: Pipelining If this process appeared unnecessarily laborious (as it should!), scikit-chem provides a Pipeline class that will sequentially apply objects passed to it. For this example, we could have simply performed: End of explanation cv = skchem.cross_validation.SimThresholdSplit(fper=None, n_jobs=4).fit(X) train, valid, test = cv.split((60, 20, 20)) X_train, X_valid, X_test = X[train], X[valid], X[test] y_train, y_valid, y_test = y[train], y[valid], y[test] Explanation: Modelling the data In this section, we will try building some basic scikit-learn models on the data. Partitioning the data To decide on the best model to use, we should perform some model selection. This will require comparing the relative performance of a selection of candidate molecules each trained on the same train set, and evaluated on a validation set. In cheminformatics, partitioning datasets usually requires some thought, as chemical datasets usually vastly overrepresent certain scaffolds, and underrepresent others. In order to get as unbiased an estimate of performance as possible, one can either downsample compounds in a region of high density, or artifically favor splits that pool in the same split molecules that are too close in chemical space. scikit-chem provides this functionality in the SimThresholdSplit class, which applies single link heirachical clustering to produce a large number of clusters consisting of highly similar compounds. These clusters are then randomly assigned to the desired splits, such that no split contains compounds that are more similar to compounds in any other split than the clustering threshold. 
End of explanation import sklearn.ensemble import sklearn.linear_model import sklearn.naive_bayes rf = sklearn.ensemble.RandomForestClassifier(n_estimators=100) nb = sklearn.naive_bayes.BernoulliNB() lr = sklearn.linear_model.LogisticRegression() X_train.shape, y_train.shape rf_score = rf.fit(X_train, y_train).score(X_valid, y_valid) nb_score = nb.fit(X_train, y_train).score(X_valid, y_valid) lr_score = lr.fit(X_train, y_train).score(X_valid, y_valid) print(rf_score, nb_score, lr_score) Explanation: Model selection End of explanation rf.fit(X_train.append(X_valid), y_train.append(y_valid)).score(X_test, y_test) Explanation: Random Forests appear to work best (although we should have chosen hyperparameters using Random or Grid search). Assessing the Final performance End of explanation
2,072
Given the following text description, write Python code to implement the functionality described below step by step Description: Plotting the full vector-valued MNE solution The source space that is used for the inverse computation defines a set of dipoles, distributed across the cortex. When visualizing a source estimate, it is sometimes useful to show the dipole directions in addition to their estimated magnitude. This can be accomplished by computing a Step1: Plot the source estimate Step2: Plot the activation in the direction of maximal power for this data Step3: The normal is very similar Step4: You can also do this with a fixed-orientation inverse. It looks a lot like the result above because the loose=0.2 orientation constraint keeps sources close to fixed orientation
Python Code: # Author: Marijn van Vliet <[email protected]> # # License: BSD-3-Clause import numpy as np import mne from mne.datasets import sample from mne.minimum_norm import read_inverse_operator, apply_inverse print(__doc__) data_path = sample.data_path() subjects_dir = data_path + '/subjects' smoothing_steps = 7 # Read evoked data fname_evoked = data_path + '/MEG/sample/sample_audvis-ave.fif' evoked = mne.read_evokeds(fname_evoked, condition=0, baseline=(None, 0)) # Read inverse solution fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif' inv = read_inverse_operator(fname_inv) # Apply inverse solution, set pick_ori='vector' to obtain a # :class:`mne.VectorSourceEstimate` object snr = 3.0 lambda2 = 1.0 / snr ** 2 stc = apply_inverse(evoked, inv, lambda2, 'dSPM', pick_ori='vector') # Use peak getter to move visualization to the time point of the peak magnitude _, peak_time = stc.magnitude().get_peak(hemi='lh') Explanation: Plotting the full vector-valued MNE solution The source space that is used for the inverse computation defines a set of dipoles, distributed across the cortex. When visualizing a source estimate, it is sometimes useful to show the dipole directions in addition to their estimated magnitude. This can be accomplished by computing a :class:mne.VectorSourceEstimate and plotting it with :meth:stc.plot &lt;mne.VectorSourceEstimate.plot&gt;, which uses :func:~mne.viz.plot_vector_source_estimates under the hood rather than :func:~mne.viz.plot_source_estimates. It can also be instructive to visualize the actual dipole/activation locations in 3D space in a glass brain, as opposed to activations imposed on an inflated surface (as typically done in :meth:mne.SourceEstimate.plot), as it allows you to get a better sense of the underlying source geometry. End of explanation brain = stc.plot( initial_time=peak_time, hemi='lh', subjects_dir=subjects_dir, smoothing_steps=smoothing_steps) # You can save a brain movie with: # brain.save_movie(time_dilation=20, tmin=0.05, tmax=0.16, framerate=10, # interpolation='linear', time_viewer=True) Explanation: Plot the source estimate: End of explanation stc_max, directions = stc.project('pca', src=inv['src']) # These directions must by design be close to the normals because this # inverse was computed with loose=0.2 print('Absolute cosine similarity between source normals and directions: ' f'{np.abs(np.sum(directions * inv["source_nn"][2::3], axis=-1)).mean()}') brain_max = stc_max.plot( initial_time=peak_time, hemi='lh', subjects_dir=subjects_dir, time_label='Max power', smoothing_steps=smoothing_steps) Explanation: Plot the activation in the direction of maximal power for this data: End of explanation brain_normal = stc.project('normal', inv['src'])[0].plot( initial_time=peak_time, hemi='lh', subjects_dir=subjects_dir, time_label='Normal', smoothing_steps=smoothing_steps) Explanation: The normal is very similar: End of explanation fname_inv_fixed = ( data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-fixed-inv.fif') inv_fixed = read_inverse_operator(fname_inv_fixed) stc_fixed = apply_inverse( evoked, inv_fixed, lambda2, 'dSPM', pick_ori='vector') brain_fixed = stc_fixed.plot( initial_time=peak_time, hemi='lh', subjects_dir=subjects_dir, smoothing_steps=smoothing_steps) Explanation: You can also do this with a fixed-orientation inverse. It looks a lot like the result above because the loose=0.2 orientation constraint keeps sources close to fixed orientation: End of explanation
2,073
Given the following text description, write Python code to implement the functionality described below step by step Description: Convolutions In this notebook, we explore the concept of convolutional neural networks. You may want to read this wikipedia page if you're not familiar with the concept of a convolution. In a convolutional neural network 1. Definition of the (discrete) convolution You may read Wikipedia's web page If we consider two functions $f$ and $g$ taking values from $\mathbb{Z} \to \mathbb{R}$ then Step1: 2. Derive the Convolution !! As we use it, the convolution is parametrised by two vectors $x$ and $w$ and outputs a vector $z$. We have Step2: Train a convolutional neural net Step3: Applied to image We demonstrate how 2D convolutions apply to images (In this case, we design the kernels of the convolutions).
Python Code: # Which is easily implemented in Python:
def _convolve(x, w, type='valid'):
    # x and w are 1-D numpy vectors; 'valid' output has len(x) - len(w) + 1 entries
    conv = []
    if type == 'valid':
        for i in range(len(x) - len(w) + 1):
            conv.append((x[i: i + len(w)] * w).sum())
    return np.array(conv)

def convolve(X, w):
    # Convolves every row of the batch X with the kernel w
    w = np.array(w)
    X = np.array(X)
    conv = []
    for i in range(len(X)):
        conv.append(_convolve(X[i], w))
    return np.array(conv)
Explanation: Convolutions In this notebook, we explore the concept of convolutional neural networks. You may want to read this wikipedia page if you're not familiar with the concept of a convolution. In a convolutional neural network 1. Definition of the (discrete) convolution You may read Wikipedia's web page If we consider two functions $f$ and $g$ taking values from $\mathbb{Z} \to \mathbb{R}$ then: $ (f * g)[n] = \sum_{m = -\infty}^{+\infty} f[m] \cdot g[n - m] $ In our case, we consider the two vectors $x$ and $w$: $ x = (x_1, x_2, ..., x_{n-1}, x_n) $ $ w = (w_1, w_2) $ And get: $ x * w = (w_1 x_1 + w_2 x_2, w_1 x_2 + w_2 x_3, ..., w_1 x_{n-1} + w_2 x_n)$ Deep learning subtlety: In most deep learning frameworks, you'll get to choose between three paddings: - Same: $(f*g)$ has the same shape as x (we pad the entry with zeros) - Valid: $(f*g)$ has the shape of x minus the shape of w plus 1 (no padding on x) - Causal: $(f*g)(n_t)$ does not depend on any $(n_{t+1})$
End of explanation
from utils import *
import utils
reload(utils)
from utils import *
(x_train, y_train), (x_test, y_test) = load_up_down(50)
plt.plot(x_train.T)
plt.show()
Explanation: 2. Derive the Convolution !! As we use it, the convolution is parametrised by two vectors $x$ and $w$ and outputs a vector $z$. We have: $ x * w = z$ $ z_i = (w_1 x_i + w_2 x_{i+1})$ We want to derive $z$ with respect to some weights $w_j$: $\frac{\partial z_i}{\partial w_j} = x_{i+j-1}$ $\frac{\partial z_i}{\partial w} = (x_{i}, x_{i+1})$ Example of convolutions: We consider a classification problem where we want to distinguish 2 signals.
One is going upward and the other is going downwards End of explanation # Rename y_silver to X and y_gold to Y X, Y = [x_train, ], y_train # Initilize the parameters Ws = [0.5, 0.5] alphas = (0.01, 0.01) # Load Trainer t = Trainer(X, Y, Ws, alphas) # Define Prediction and Loss t.pred = lambda X : convolve(X[0], (t.Ws[0], t.Ws[1])).mean(axis=1) t.loss = lambda : (np.power((t.Y - t.pred(t.X)), 2) * 1 / 2.).mean() print t.pred(X) t.acc = lambda X, Y : t.pred(X) # Define the gradient functions dl_dp = lambda : -(t.Y - t.pred(X)) dl_dw0 = lambda : (t.X[0][:-1]).mean() dl_dw1 = lambda : (t.X[0][1:]).mean() t.dWs = (dl_dw0, dl_dw1) # Start training anim = t.animated_train(is_notebook=True) from IPython.display import HTML HTML(anim.to_html5_video()) t.loss() Explanation: Train a convolutional neural net End of explanation from scipy import signal # Load MNIST (x_train, y_train), (x_test, y_test) = load_MNIST() img = x_train[2] # Design the kernels kernels = [[[-1, 2, -1],[-1, 2, -1],[-1, 2, -1]], [[-1, -1, -1],[2, 2, 2],[-1, -1, -1]], [[2, -1, -1],[-1, 2, -1],[-1, -1, 2]], [[-1, -1, 2],[-1, 2, -1],[2, -1, -1]], ] # Plot and convolve them to the image for i, k in enumerate(kernels): i = i*2+1 plt.subplot(3,4,i) plt.imshow(k, cmap='gray') plt.subplot(3,4,i+1) conv = signal.convolve2d(img, k) plt.imshow(conv > 1.5, cmap='gray') plt.subplot(349) plt.imshow(img, cmap='gray') plt.show() Explanation: Applied to image We demonstrate how 2D convolutions applies to images (In this case, we designe the kernels of the conlutions). End of explanation
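As a quick sanity check on the hand-rolled routine above, the same 'valid' result can be reproduced with NumPy. Note that np.convolve flips the kernel (a true convolution), while the implementation above is a sliding dot product (cross-correlation), so the kernel has to be reversed for the comparison. This is a hedged sketch that only assumes NumPy, not the notebook's utils module.
import numpy as np

x = np.array([0., 1., 2., 3., 4.])
w = np.array([0.25, 0.75])

# Sliding dot product, 'valid' mode: output length is len(x) - len(w) + 1.
manual = np.array([(x[i:i + len(w)] * w).sum()
                   for i in range(len(x) - len(w) + 1)])

# np.convolve applies the flipped kernel, so reverse w to match the manual version.
reference = np.convolve(x, w[::-1], mode='valid')

assert np.allclose(manual, reference)
print(manual)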
2,074
Given the following text description, write Python code to implement the functionality described below step by step Description: Convolutional Autoencoder Sticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data. Step1: Network Architecture The encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below. <img src='assets/convolutional_autoencoder.png' width=500px> Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data. What's going on with the decoder Okay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers aren't. Usually, you'll see transposed convolution layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 path in a transposed convolution layer. The TensorFlow API provides us with an easy way to create the layers, tf.nn.conv2d_transpose. However, transposed convolution layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In this Distill article from Augustus Odena, et al, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with tf.image.resize_images, followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling. Exercise Step2: Training As before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays. Step3: Denoising As I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practive. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images. 
Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before. Exercise Step4: Checking out the performance Here I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.
Python Code: %matplotlib inline import numpy as np import tensorflow as tf import matplotlib.pyplot as plt from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets('MNIST_data', validation_size=0) img = mnist.train.images[2] plt.imshow(img.reshape((28, 28)), cmap='Greys_r') Explanation: Convolutional Autoencoder Sticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data. End of explanation learning_rate = 0.001 # Input and target placeholders inputs_ = tf.placeholder(tf.float32, [None, 28, 28, 1], name = 'input') targets_ = tf.placeholder(tf.float32, [None, 28, 28, 1], name = 'targets') ### Encoder conv1 = tf.layers.conv2d(inputs_, 16, 3, 1, padding='same', activation=tf.nn.relu) # Now 28x28x16 maxpool1 = tf.layers.max_pooling2d(conv1, 2, 2, padding='valid') # Now 14x14x16 conv2 = tf.layers.conv2d(maxpool1, 8, 3, 1, padding='same', activation=tf.nn.relu) # Now 14x14x8 maxpool2 = tf.layers.max_pooling2d(conv2, 2, 2, padding='valid') # Now 7x7x8 conv3 = tf.layers.conv2d(maxpool2, 8, 3, 1, padding='same', activation=tf.nn.relu) # Now 7x7x8 encoded = tf.layers.max_pooling2d(conv3, 2, 2, padding='same') # Now 4x4x8 ### Decoder upsample1 = tf.image.resize_nearest_neighbor(encoded, [7, 7]) # Now 7x7x8 conv4 = tf.layers.conv2d(upsample1, 8, 3, 1, padding='same', activation=tf.nn.relu) # Now 7x7x8 upsample2 = tf.image.resize_nearest_neighbor(conv4, [14, 14]) # Now 14x14x8 conv5 = tf.layers.conv2d(upsample2, 8, 3, 1, padding='same', activation=tf.nn.relu) # Now 14x14x8 upsample3 = tf.image.resize_nearest_neighbor(conv5, [28, 28]) # Now 28x28x8 conv6 = tf.layers.conv2d(upsample3, 16, 3, 1, padding='same', activation=tf.nn.relu) # Now 28x28x16 logits = tf.layers.conv2d(upsample3, 1, 3, 1, padding='same') #Now 28x28x1 # Pass logits through sigmoid to get reconstructed image decoded = tf.sigmoid(logits) # Pass logits through sigmoid and calculate the cross-entropy loss loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits) # Get cost and define the optimizer cost = tf.reduce_mean(loss) opt = tf.train.AdamOptimizer(learning_rate).minimize(cost) Explanation: Network Architecture The encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below. <img src='assets/convolutional_autoencoder.png' width=500px> Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data. What's going on with the decoder Okay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers aren't. Usually, you'll see transposed convolution layers used to increase the width and height of the layers. 
They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 path in a transposed convolution layer. The TensorFlow API provides us with an easy way to create the layers, tf.nn.conv2d_transpose. However, transposed convolution layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In this Distill article from Augustus Odena, et al, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with tf.image.resize_images, followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling. Exercise: Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used the reduce the width and height. A stride of 2 will reduce the size by 2. Odena et al claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in tf.image.resize_images or use tf.image.resize_nearest_neighbor. End of explanation sess = tf.Session() epochs = 20 batch_size = 200 sess.run(tf.global_variables_initializer()) for e in range(epochs): for ii in range(mnist.train.num_examples//batch_size): batch = mnist.train.next_batch(batch_size) imgs = batch[0].reshape((-1, 28, 28, 1)) batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs, targets_: imgs}) print("Epoch: {}/{}...".format(e+1, epochs), "Training loss: {:.4f}".format(batch_cost)) fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4)) in_imgs = mnist.test.images[:10] reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))}) for images, row in zip([in_imgs, reconstructed], axes): for img, ax in zip(images, row): ax.imshow(img.reshape((28, 28)), cmap='Greys_r') ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) fig.tight_layout(pad=0.1) sess.close() Explanation: Training As before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays. 
End of explanation learning_rate = 0.001 inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs') targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets') ### Encoder conv1 = tf.layers.conv2d(inputs_, 32, 3, 1, padding='same', activation=tf.nn.relu) # Now 28x28x32 maxpool1 = tf.layers.max_pooling2d(conv1, 2, 2, padding='valid') # Now 14x14x32 conv2 = tf.layers.conv2d(maxpool1, 32, 3, 1, padding='same', activation=tf.nn.relu) # Now 14x14x32 maxpool2 = tf.layers.max_pooling2d(conv2, 2, 2, padding='valid') # Now 7x7x32 conv3 = tf.layers.conv2d(maxpool2, 16, 3, 1, padding='same', activation=tf.nn.relu) # Now 7x7x16 encoded = tf.layers.max_pooling2d(conv2, 2, 2, padding='valid') # Now 4x4x16 ### Decoder upsample1 = tf.image.resize_nearest_neighbor(encoded, [7, 7]) # Now 7x7x16 conv4 = tf.layers.conv2d(upsample1, 16, 3, 1, padding='same', activation=tf.nn.relu) # Now 7x7x16 upsample2 = tf.image.resize_nearest_neighbor(conv4, [14, 14]) # Now 14x14x16 conv5 = tf.layers.conv2d(upsample2, 32, 3, 1, padding='same', activation=tf.nn.relu) # Now 14x14x32 upsample3 = tf.image.resize_nearest_neighbor(conv5, [28, 28]) # Now 28x28x32 conv6 = tf.layers.conv2d(upsample3, 32, 3, 1, padding='same', activation=tf.nn.relu) # Now 28x28x32 logits = tf.layers.conv2d(conv6, 1, 3, 1, padding='same') #Now 28x28x1 # Pass logits through sigmoid to get reconstructed image decoded = tf.sigmoid(logits) # Pass logits through sigmoid and calculate the cross-entropy loss loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits) # Get cost and define the optimizer cost = tf.reduce_mean(loss) opt = tf.train.AdamOptimizer(learning_rate).minimize(cost) sess = tf.Session() epochs = 20 batch_size = 200 # Set's how much noise we're adding to the MNIST images noise_factor = 0.5 sess.run(tf.global_variables_initializer()) for e in range(epochs): for ii in range(mnist.train.num_examples//batch_size): batch = mnist.train.next_batch(batch_size) # Get images from the batch imgs = batch[0].reshape((-1, 28, 28, 1)) # Add random noise to the input images noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape) # Clip the images to be between 0 and 1 noisy_imgs = np.clip(noisy_imgs, 0., 1.) # Noisy images as inputs, original images as targets batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs, targets_: imgs}) print("Epoch: {}/{}...".format(e+1, epochs), "Training loss: {:.4f}".format(batch_cost)) Explanation: Denoising As I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practive. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images. Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before. Exercise: Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers. 
End of explanation noise_factor = 0.5 fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4)) in_imgs = mnist.test.images[:10] noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape) noisy_imgs = np.clip(noisy_imgs, 0., 1.) reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))}) for images, row in zip([noisy_imgs, reconstructed], axes): for img, ax in zip(images, row): ax.imshow(img.reshape((28, 28)), cmap='Greys_r') ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) fig.tight_layout(pad=0.1) Explanation: Checking out the performance Here I'm adding noise to the test images and passing them through the autoencoder. It does a suprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is. End of explanation
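The resize-then-convolve idea from the Distill discussion can also be wrapped in a small helper so the decoder stays readable. This is a hedged sketch using the same TF1-style tf.layers and tf.image API as the notebook above; the helper name and its defaults are my own, not part of the original exercise.
import tensorflow as tf

def upsample_conv(x, target_size, filters, kernel_size=3):
    # Nearest-neighbor resize followed by a stride-1 convolution.
    # Avoids the checkerboard artifacts that transposed convolutions can
    # produce when the kernel size is not a multiple of the stride.
    x = tf.image.resize_nearest_neighbor(x, target_size)
    return tf.layers.conv2d(x, filters, kernel_size, strides=1,
                            padding='same', activation=tf.nn.relu)

# Example decoder path from the 4x4x8 bottleneck back toward 28x28:
# x = upsample_conv(encoded, [7, 7], 8)
# x = upsample_conv(x, [14, 14], 8)
# x = upsample_conv(x, [28, 28], 16)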
2,075
Given the following text description, write Python code to implement the functionality described below step by step Description: Atlanta Police Department The Atlanta Police Department provides Part 1 crime data at http Step1: Review Step2: We need to enter the descriptions for each entry in our dictionary manually... Step3: Convert Time Columns Please refer to the following resources for working with time series data in pandas Step4: What's the date range of the data? Step5: Number of crimes reported each year Step6: Looks like most of the data is actually from 2009-2017! Let's throw the rest away... Step7: Crime Over Time Has the number of crimes in Atlanta changed over time? Are some areas more affected by crime than others? Do different types of crime correlate with each other? Number of Crimes Over Time, with Pivot Tables Step8: This gives us a timeline for different types of crime reported in Atlanta. By itself, this can be useful, but we are more interested in aggregate statistics. Let's get the number of crimes by month... Step9: Average number of crimes per month, for each year Step10: Explanation of boxplot Step11: More on pandas datetime objects Step12: Can you pick out the seasonal variation in number of crimes per year? Suppose we are not interested in seasonal trends, but want to see if the number of reported crimes is changing year over year. We could simply add the number of crimes together to get number of crimes reported each year. Step13: Correlation In Number of Crimes Over Time You can use the "corr" method in Pandas to find the correlation between columns of a dataframe. Step14: Visualizing the correlation... Step15: Crimes By Place Beats and Zones The City of Atlanta is divided into 6 zones, each with 12 to 14 beats.
Python Code: import numpy as np import pandas as pd %matplotlib inline import matplotlib.pyplot as plt # load data set df = pd.read_csv('/home/data/APD/COBRA-YTD-multiyear.csv.gz') print "Shape of table: ", df.shape Explanation: Atlanta Police Department The Atlanta Police Department provides Part 1 crime data at http://www.atlantapd.org/i-want-to/crime-data-downloads A recent copy of the data file is stored in the cluster. <span style="color: red; font-weight: bold;">Please, do not copy this data file into your home directory!</span> Introduction This notebooks leads into an exploration of public crime data provided by the Atlanta Police Department. The original data set and supplemental information can be found at http://www.atlantapd.org/i-want-to/crime-data-downloads The data set is available on ARC, please, don't download into your home directory on ARC! End of explanation dataDict = pd.DataFrame({'DataType': df.dtypes.values, 'Description': '', }, index=df.columns.values) Explanation: Review: Creating a data key Let's look at the structure of this table. We're actually creating some text output that can be used to create a data dictionary. End of explanation dataDict.loc['MI_PRINX'].Description = '' # type: int64 dataDict.loc['offense_id'].Description = 'Unique ID in the format YYDDDNNNN with the year YY, the day of the year DDD and a counter NNNN' # type: int64 dataDict.loc['rpt_date'].Description = 'Date the crime was reported' # type: object dataDict.loc['occur_date'].Description = 'Estimated date when the crime occured' # type: object dataDict.loc['occur_time'].Description = 'Estimated time when the crime occured' # type: object dataDict.loc['poss_date'].Description = '' # type: object dataDict.loc['poss_time'].Description = '' # type: object dataDict.loc['beat'].Description = '' # type: int64 dataDict.loc['apt_office_prefix'].Description = '' # type: object dataDict.loc['apt_office_num'].Description = '' # type: object dataDict.loc['location'].Description = '' # type: object dataDict.loc['MinOfucr'].Description = '' # type: int64 dataDict.loc['MinOfibr_code'].Description = '' # type: object dataDict.loc['dispo_code'].Description = '' # type: object dataDict.loc['MaxOfnum_victims'].Description = '' # type: float64 dataDict.loc['Shift'].Description = 'Zones have 8 or 10 hour shifts' # type: object dataDict.loc['Avg Day'].Description = '' # type: object dataDict.loc['loc_type'].Description = '' # type: float64 dataDict.loc['UC2 Literal'].Description = '' # type: object dataDict.loc['neighborhood'].Description = '' # type: object dataDict.loc['npu'].Description = '' # type: object dataDict.loc['x'].Description = '' # type: float64 dataDict.loc['y'].Description = '' # type: float64 dataDict.to_csv("COBRA_Data_Dictionary.csv") dataDict Explanation: We need to enter the descriptions for each entry in our dictionary manually... 
End of explanation # function currying def fixdatetime(fld): def _fix(s): date_col = '%s_date' % fld # "rpt_date" time_col = '%s_time' % fld # "rpt_time" if time_col in s.index: return str(s[date_col])+' '+str(s[time_col]) else: return str(s[date_col])+' 00:00:00' return _fix for col in ['rpt', 'occur', 'poss']: datser = df.apply(fixdatetime(col), axis=1) df['%s_dt'%col] = pd.to_datetime(datser, format="%m/%d/%Y %H:%M:%S", errors='coerce') df[["MI_PRINX", "offense_id", "beat", "UC2 Literal", "neighborhood", "rpt_dt", "occur_dt", "poss_dt"]].head() Explanation: Convert Time Columns Please refer to the following resources for working with time series data in pandas: - https://pandas.pydata.org/pandas-docs/stable/timeseries.html - https://pandas.pydata.org/pandas-docs/stable/api.html#id10 End of explanation print df.occur_dt.min(), '---', df.occur_dt.max() Explanation: What's the date range of the data? End of explanation # resample is like "groupby" for time df.resample('A-DEC', closed='right', on='occur_dt').offense_id.count() # df['Year'] = df.occur_dt.map(lambda d: d.year) # df2 = df[(df.Year>=2010) & (df.Year<=2017)] # df2.shape, df.shape Explanation: Number of crimes reported each year: End of explanation df = df[df.occur_dt>='01/01/2009'] Explanation: Looks like most of the data is actually from 2009-2017! Let's throw the rest away... End of explanation df[["occur_dt", "UC2 Literal", "offense_id"]].head() # Pivoting the table: # index = nolumn that the new table will be indexed by # columns = column whose unique values will form the new column names # values = values used to fill the table (default = all columns other than those given in index and columns) df_ct = df.pivot_table(index="occur_dt", columns="UC2 Literal", values="offense_id") df_ct.head() Explanation: Crime Over Time Has the number of crimes in Atlanta changed over time? Are some areas more affected by crime than others? Do different types of crime correlate with each other? Number of Crimes Over Time, with Pivot Tables End of explanation df_ct = df_ct.resample("1M", closed="right").count() df_ct.head() Explanation: This gives us a timeline for different types of crime reported in Atlanta. By itself, this can be useful, but we are more interested in aggregate statistics. Let's get the number of crimes by month... End of explanation ax = df_ct.plot.box(figsize=(13,4), rot=45) plt.ylabel("Total Reported Crimes by Month") Explanation: Average number of crimes per month, for each year: End of explanation ## In-class exercise: # Make a boxplot of the number of reported crimes, aggregating by week. df_wk = df.pivot_table(index="occur_dt", columns="UC2 Literal", values="offense_id") df_wk = df_wk.resample("W-SUN", closed='right').count() df_wk.plot.box(figsize=(13,4), rot=45) plt.ylabel("Total Reported Crimes by Week") Explanation: Explanation of boxplot: From the matplotlib documentation (http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.boxplot): The box extends from the lower to upper quartile values of the data, with a line at the median. The whiskers extend from the box to show the range of the data. Flier points are those past the end of the whiskers. Whiskers: IQR is the interquartile range (Q3-Q1). The upper whisker will extend to last datum less than Q3 + whisIQR (where the default value for whis is 1.5). Similarly, the lower whisker will extend to the first datum greater than Q1 - whisIQR. Beyond the whiskers, data are considered outliers and are plotted as individual points. 
End of explanation ax = df_ct.plot(figsize=(10,5), style='-o') ax.get_legend().set_bbox_to_anchor((1, 1)) plt.ylabel("Total Reported Crimes by Month") ax.vlines(pd.date_range("12/31/2009", "12/31/2017", freq="A-JAN"), 0, 900) Explanation: More on pandas datetime objects: http://pandas-docs.github.io/pandas-docs-travis/timeseries.html#dateoffset-objects http://pandas-docs.github.io/pandas-docs-travis/timeseries.html#anchored-offsets Crimes over time Let's take a look at a time series plot of the number of crimes over time... End of explanation ann_cr = df_ct.resample("A-DEC", closed="right").sum() ax = ann_cr[ann_cr.index<"01/01/2017"].plot(figsize=(10,5), style='-o') ax.get_legend().set_bbox_to_anchor((1, 1)) Explanation: Can you pick out the seasonal variation in number of crimes per year? Suppose we are not interested in seasonal trends, but want to see if the number of reported crimes is changing year over year. We could simply add the number of crimes together to get number of crimes reported each year. End of explanation crime_corr = df_ct.corr() crime_corr Explanation: Correlation In Number of Crimes Over Time You can use the "corr" method in Pandas to find the correlation between columns of a dataframe. End of explanation plt.matshow(crime_corr); plt.yticks(range(len(crime_corr.columns)), crime_corr.columns); plt.xticks(range(len(crime_corr.columns)), crime_corr.columns, rotation=90); plt.colorbar(); Explanation: Visualizing the correlation... End of explanation df['Zone'] = df['beat']//100 df['Year'] = df.occur_dt.apply(lambda x: x.year) df_cp = df.pivot_table(index="Zone", columns="UC2 Literal", values="offense_id", aggfunc=lambda x: np.count_nonzero(~np.isnan(x))) df_cp df_cp = df.pivot_table(index=["Year","Zone"], columns="UC2 Literal", values="offense_id", aggfunc=lambda x: np.count_nonzero(~np.isnan(x))) df_cp df_cp = df_cp[np.logical_and([x >= 1 for x in zip(*df_cp.index.values)[1]], [x <= 6 for x in zip(*df_cp.index.values)[1]])].fillna(0) df_cp.head(20) # A MUCH PRETTIER way to do the same thing: df_cp = df_cp.loc[(slice(None), slice(1,6)),:].fillna(0) df_cp.head(20) ## slicing on a multi-index # get data for 2009-2010, for zones 1-3 df_cp.loc[(slice(2009,2010), slice(1,5,2)),:] ## In-class exercise: # Show all robbery data for 2011, 2013, and 2015, for zones 4-6 df_cp.loc[(slice(2011,2015,2), slice(4,6)), "ROBBERY-COMMERCIAL":"ROBBERY-RESIDENCE"] df_cp.filter(like='ROBBERY').loc[(slice(2011,2015,2), slice(4,6)), :] ## In-class exercise: # Count the total number of crimes in each zone df_cp.groupby(level=1).sum() help(df_cp.plot) ## In-class exercise: # Plot the number of pedestrian robberies in each zone in 2016 df_cp.loc[(slice(2016,2016), slice(None)), "ROBBERY-PEDESTRIAN"].plot.bar() plt.xticks(range(6), range(1,7), rotation=0); plt.xlabel("Zone"); plt.ylabel("Ped. Robberies in 2016"); ## In-class exercise: # What is the average annual number of crimes in each zone (for each type of crime)? # Hint: use "groupby" with a "level" argument. df_cp.groupby(level=1).mean() ##### Shapefile stuff ######## import sys try: from osgeo import ogr, osr, gdal except: sys.exit('ERROR: cannot find GDAL/OGR modules') Explanation: Crimes By Place Beats and Zones The City of Atlanta is divided into 6 zones, each with 12 to 14 beats. End of explanation
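Since the crime frame already carries a parsed occur_dt column, the year/month/day groupby used above can also be expressed with a time-based grouper, which keeps the result indexed by real timestamps instead of integer tuples. A hedged sketch assuming the df built in this notebook (with occur_dt converted to datetime) is still in memory:
import pandas as pd

# Count reported crimes per calendar day without unpacking year/month/day by hand.
daily_counts = (df.set_index('occur_dt')
                  .groupby(pd.Grouper(freq='D'))
                  .offense_id.count())

print(daily_counts.describe())
# The busiest days (top 5% by report count) can then be inspected directly.
print(daily_counts[daily_counts > daily_counts.quantile(0.95)].head())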
2,076
Given the following text description, write Python code to implement the functionality described below step by step Description: PS Orthotile and Landsat 8 Crossovers Have you ever wanted to compare PS images to Landsat 8 images? Both image collections are made available via the Planet API. However, it takes a bit of work to identify crossovers - that is, images of the same area that were collected within a reasonable time difference of each other. Also, you may be interested in filtering out some imagery, e.g. cloudy images. This notebook walks you through the process of finding crossovers between PS Orthotiles and Landsat 8 scenes. In this notebook, we specify 'crossovers' as images that have been taken within 1hr of eachother. This time gap is sufficiently small that we expect the atmospheric conditions won't change much (this assumption doesn't always hold, but is the best we can do for now). We also filter out cloudy images and constrain our search to images collected in 2017, January 1 through August 23. Step1: Define AOI Define the AOI as a geojson polygon. This can be done at geojson.io. If you use geojson.io, only copy the single aoi feature, not the entire feature collection. Step2: Build Request Build the Planet API Filter request for the Landsat 8 and PS Orthotile imagery taken in 2017 through August 23. Step3: Search Planet API The client is how we interact with the planet api. It is created with the user-specific api key, which is pulled from $PL_API_KEY environment variable. Create the client then use it to search for PS Orthotile and Landsat 8 scenes. Save a subset of the metadata provided by Planet API as our 'scene'. Step4: In processing the items to scenes, we are only using a small subset of the product metadata. Step5: Investigate Landsat Scenes There are quite a few Landsat 8 scenes that are returned by our query. What do the footprints look like relative to our AOI and what is the collection time of the scenes? Step6: Show Landsat 8 Footprints on Map Step7: This AOI is located in a region covered by 3 different path/row tiles. This means there is 3x the coverage than in regions only covered by one path/row tile. This is particularly lucky! What about the within each path/row tile. How long and how consistent is the Landsat 8 collect period for each path/row? Step8: It looks like the collection period is 16 days, which lines up with the Landsat 8 mission description. path/row 43/33 is missing one image which causes an unusually long collect period. What this means is that we don't need to look at every Landsat 8 scene collect time to find crossovers with Planet scenes. We could look at the first scene for each path/row, then look at every 16 day increment. However, we will need to account for dropped Landsat 8 scenes in some way. What is the time difference between the tiles? Step9: So the tiles that are in the same path are very close (24sec) together from the same day. Therefore, we would want to only use one tile and pick the best image. Tiles that are in different paths are 7 days apart. Therefore, we want to keep tiles from different paths, as they represent unique crossovers. Investigate PS Orthotiles There are also quite a few PS Orthotiles that match our query. Some of those scenes may not have much overlap with our AOI. We will want to filter those out. 
Also, we are interested in knowing how many unique days of coverage we have, so we will group PS Orthotiles by collect day, since we may have days with more than one collect (due to multiple PS satellites collecting imagery). Step10: What about overlap? We really only want images that overlap over 20% of the AOI. Note Step11: Ideally, PS scenes have daily coverage over all regions. How many days have PS coverage and how many PS scenes were taken on the same day? Step12: Looks like the multiple collects on the same day are just a few minutes apart. They are likely crossovers between different PS satellites. Cool! Since we only want to use one PS image for a crossover, we will choose the best collect for days with multiple collects. Find Crossovers Now that we have the PS Orthotiles filtered to what we want and have investigated the Landsat 8 scenes, let's look for crossovers between the two. First we find concurrent crossovers, PS and Landsat collects that occur within 1 hour of each other. Step13: Now that we have the crossovers, what we are really interested in is the IDs of the Landsat and PS scenes, as well as how much they overlap the AOI. Step14: Next, we filter to overlaps that cover a significant portion of the AOI. Step15: Browsing through the crossovers, we see that in some instances, multiple crossovers take place on the same day. Really, we are interested in 'unique crossovers', that is, crossovers that take place on unique days. Therefore, we will look at the concurrent crossovers by day. Step16: There are 6 unique crossovers between Landsat 8 and PS that cover over 90% of our AOI between January and August in 2017. Not bad! That is definitely enough to perform comparison. Display Crossovers Let's take a quick look at the crossovers we found to make sure that they don't look cloudy, hazy, or have any other quality issues that would affect the comparison.
Python Code: # Notebook dependencies from __future__ import print_function import datetime import json import os import ipyleaflet as ipyl import ipywidgets as ipyw from IPython.core.display import HTML from IPython.display import display import pandas as pd from planet import api from planet.api import filters from shapely import geometry as sgeom Explanation: PS Orthotile and Landsat 8 Crossovers Have you ever wanted to compare PS images to Landsat 8 images? Both image collections are made available via the Planet API. However, it takes a bit of work to identify crossovers - that is, images of the same area that were collected within a reasonable time difference of each other. Also, you may be interested in filtering out some imagery, e.g. cloudy images. This notebook walks you through the process of finding crossovers between PS Orthotiles and Landsat 8 scenes. In this notebook, we specify 'crossovers' as images that have been taken within 1hr of eachother. This time gap is sufficiently small that we expect the atmospheric conditions won't change much (this assumption doesn't always hold, but is the best we can do for now). We also filter out cloudy images and constrain our search to images collected in 2017, January 1 through August 23. End of explanation aoi = {u'geometry': {u'type': u'Polygon', u'coordinates': [[[-121.3113248348236, 38.28911976564886], [-121.3113248348236, 38.34622533958], [-121.2344205379486, 38.34622533958], [-121.2344205379486, 38.28911976564886], [-121.3113248348236, 38.28911976564886]]]}, u'type': u'Feature', u'properties': {u'style': {u'opacity': 0.5, u'fillOpacity': 0.2, u'noClip': False, u'weight': 4, u'color': u'blue', u'lineCap': None, u'dashArray': None, u'smoothFactor': 1, u'stroke': True, u'fillColor': None, u'clickable': True, u'lineJoin': None, u'fill': True}}} json.dumps(aoi) Explanation: Define AOI Define the AOI as a geojson polygon. This can be done at geojson.io. If you use geojson.io, only copy the single aoi feature, not the entire feature collection. End of explanation # define the date range for imagery start_date = datetime.datetime(year=2017,month=1,day=1) stop_date = datetime.datetime(year=2017,month=8,day=23) # filters.build_search_request() item types: # Landsat 8 - 'Landsat8L1G' # Sentinel - 'Sentinel2L1C' # PS Orthotile = 'PSOrthoTile' def build_landsat_request(aoi_geom, start_date, stop_date): query = filters.and_filter( filters.geom_filter(aoi_geom), filters.range_filter('cloud_cover', lt=5), # ensure has all assets, unfortunately also filters 'L1TP' # filters.string_filter('quality_category', 'standard'), filters.range_filter('sun_elevation', gt=0), # filter out Landsat night scenes filters.date_range('acquired', gt=start_date), filters.date_range('acquired', lt=stop_date) ) return filters.build_search_request(query, ['Landsat8L1G']) def build_ps_request(aoi_geom, start_date, stop_date): query = filters.and_filter( filters.geom_filter(aoi_geom), filters.range_filter('cloud_cover', lt=0.05), filters.date_range('acquired', gt=start_date), filters.date_range('acquired', lt=stop_date) ) return filters.build_search_request(query, ['PSOrthoTile']) print(build_landsat_request(aoi['geometry'], start_date, stop_date)) print(build_ps_request(aoi['geometry'], start_date, stop_date)) Explanation: Build Request Build the Planet API Filter request for the Landsat 8 and PS Orthotile imagery taken in 2017 through August 23. 
End of explanation def get_api_key(): return os.environ['PL_API_KEY'] # quick check that key is defined assert get_api_key(), "PL_API_KEY not defined." def create_client(): return api.ClientV1(api_key=get_api_key()) def search_pl_api(request, limit=500): client = create_client() result = client.quick_search(request) # note that this returns a generator return result.items_iter(limit=limit) items = list(search_pl_api(build_ps_request(aoi['geometry'], start_date, stop_date))) print(len(items)) # uncomment below to see entire metadata for a PS orthotile # print(json.dumps(items[0], indent=4)) del items items = list(search_pl_api(build_landsat_request(aoi['geometry'], start_date, stop_date))) print(len(items)) # uncomment below to see entire metadata for a landsat scene # print(json.dumps(items[0], indent=4)) del items Explanation: Search Planet API The client is how we interact with the planet api. It is created with the user-specific api key, which is pulled from $PL_API_KEY environment variable. Create the client then use it to search for PS Orthotile and Landsat 8 scenes. Save a subset of the metadata provided by Planet API as our 'scene'. End of explanation def items_to_scenes(items): item_types = [] def _get_props(item): props = item['properties'] props.update({ 'thumbnail': item['_links']['thumbnail'], 'item_type': item['properties']['item_type'], 'id': item['id'], 'acquired': item['properties']['acquired'], 'footprint': item['geometry'] }) return props scenes = pd.DataFrame(data=[_get_props(i) for i in items]) # acquired column to index, it is unique and will be used a lot for processing scenes.index = pd.to_datetime(scenes['acquired']) del scenes['acquired'] scenes.sort_index(inplace=True) return scenes scenes = items_to_scenes(search_pl_api(build_landsat_request(aoi['geometry'], start_date, stop_date))) # display(scenes[:1]) print(scenes.thumbnail.tolist()[0]) del scenes Explanation: In processing the items to scenes, we are only using a small subset of the product metadata. End of explanation landsat_scenes = items_to_scenes(search_pl_api(build_landsat_request(aoi['geometry'], start_date, stop_date))) # How many Landsat 8 scenes match the query? print(len(landsat_scenes)) Explanation: Investigate Landsat Scenes There are quite a few Landsat 8 scenes that are returned by our query. What do the footprints look like relative to our AOI and what is the collection time of the scenes? 
End of explanation def landsat_scenes_to_features_layer(scenes): features_style = { 'color': 'grey', 'weight': 1, 'fillColor': 'grey', 'fillOpacity': 0.15} features = [{"geometry": r.footprint, "type": "Feature", "properties": {"style": features_style, "wrs_path": r.wrs_path, "wrs_row": r.wrs_row}} for r in scenes.itertuples()] return features def create_landsat_hover_handler(scenes, label): def hover_handler(event=None, id=None, properties=None): wrs_path = properties['wrs_path'] wrs_row = properties['wrs_row'] path_row_query = 'wrs_path=={} and wrs_row=={}'.format(wrs_path, wrs_row) count = len(scenes.query(path_row_query)) label.value = 'path: {}, row: {}, count: {}'.format(wrs_path, wrs_row, count) return hover_handler def create_landsat_feature_layer(scenes, label): features = landsat_scenes_to_features_layer(scenes) # Footprint feature layer feature_collection = { "type": "FeatureCollection", "features": features } feature_layer = ipyl.GeoJSON(data=feature_collection) feature_layer.on_hover(create_landsat_hover_handler(scenes, label)) return feature_layer # Initialize map using parameters from above map # and deleting map instance if it exists try: del fp_map except NameError: pass zoom = 6 center = [38.28993659801203, -120.14648437499999] # lat/lon # Create map, adding box drawing controls # Reuse parameters if map already exists try: center = fp_map.center zoom = fp_map.zoom print(zoom) print(center) except NameError: pass # Change tile layer to one that makes it easier to see crop features # Layer selected using https://leaflet-extras.github.io/leaflet-providers/preview/ map_tiles = ipyl.TileLayer(url='http://{s}.basemaps.cartocdn.com/light_all/{z}/{x}/{y}.png') fp_map = ipyl.Map( center=center, zoom=zoom, default_tiles = map_tiles ) label = ipyw.Label(layout=ipyw.Layout(width='100%')) fp_map.add_layer(create_landsat_feature_layer(landsat_scenes, label)) # landsat layer fp_map.add_layer(ipyl.GeoJSON(data=aoi)) # aoi layer # Display map and label ipyw.VBox([fp_map, label]) Explanation: Show Landsat 8 Footprints on Map End of explanation def time_diff_stats(group): time_diff = group.index.to_series().diff() # time difference between rows in group stats = {'median': time_diff.median(), 'mean': time_diff.mean(), 'std': time_diff.std(), 'count': time_diff.count(), 'min': time_diff.min(), 'max': time_diff.max()} return pd.Series(stats) landsat_scenes.groupby(['wrs_path', 'wrs_row']).apply(time_diff_stats) Explanation: This AOI is located in a region covered by 3 different path/row tiles. This means there is 3x the coverage than in regions only covered by one path/row tile. This is particularly lucky! What about the within each path/row tile. How long and how consistent is the Landsat 8 collect period for each path/row? 
End of explanation def find_closest(date_time, data_frame): # inspired by: # https://stackoverflow.com/questions/36933725/pandas-time-series-join-by-closest-time time_deltas = (data_frame.index - date_time).to_series().reset_index(drop=True).abs() idx_min = time_deltas.idxmin() min_delta = time_deltas[idx_min] return (idx_min, min_delta) def closest_time(group): '''group: data frame with acquisition time as index''' inquiry_date = datetime.datetime(year=2017,month=3,day=7) idx, _ = find_closest(inquiry_date, group) return group.index.to_series().iloc[idx] # for accurate results, we look at the closest time for each path/row tile to a given time # using just the first entry could result in a longer time gap between collects due to # the timing of the first entries landsat_scenes.groupby(['wrs_path', 'wrs_row']).apply(closest_time) Explanation: It looks like the collection period is 16 days, which lines up with the Landsat 8 mission description. path/row 43/33 is missing one image which causes an unusually long collect period. What this means is that we don't need to look at every Landsat 8 scene collect time to find crossovers with Planet scenes. We could look at the first scene for each path/row, then look at every 16 day increment. However, we will need to account for dropped Landsat 8 scenes in some way. What is the time difference between the tiles? End of explanation all_ps_scenes = items_to_scenes(search_pl_api(build_ps_request(aoi['geometry'], start_date, stop_date))) # How many PS scenes match query? print(len(all_ps_scenes)) all_ps_scenes[:1] Explanation: So the tiles that are in the same path are very close (24sec) together from the same day. Therefore, we would want to only use one tile and pick the best image. Tiles that are in different paths are 7 days apart. Therefore, we want to keep tiles from different paths, as they represent unique crossovers. Investigate PS Orthotiles There are also quite a few PS Orthotiles that match our query. Some of those scenes may not have much overlap with our AOI. We will want to filter those out. Also, we are interested in knowing how many unique days of coverage we have, so we will group PS Orthotiles by collect day, since we may have days with more than one collect (due multiple PS satellites collecting imagery). End of explanation def aoi_overlap_percent(footprint, aoi): aoi_shape = sgeom.shape(aoi['geometry']) footprint_shape = sgeom.shape(footprint) overlap = aoi_shape.intersection(footprint_shape) return overlap.area / aoi_shape.area overlap_percent = all_ps_scenes.footprint.apply(aoi_overlap_percent, args=(aoi,)) all_ps_scenes = all_ps_scenes.assign(overlap_percent = overlap_percent) all_ps_scenes.head() print(len(all_ps_scenes)) ps_scenes = all_ps_scenes[all_ps_scenes.overlap_percent > 0.20] print(len(ps_scenes)) Explanation: What about overlap? We really only want images that overlap over 20% of the AOI. Note: we do this calculation in WGS84, the geographic coordinate system supported by geojson. The calculation of coverage expects that the geometries entered are 2D, which WGS84 is not. This will cause a small inaccuracy in the coverage area calculation, but not enough to bother us here. 
End of explanation # ps_scenes.index.to_series().head() # ps_scenes.filter(items=['id']).groupby(pd.Grouper(freq='D')).agg('count') # Use PS acquisition year, month, and day as index and group by those indices # https://stackoverflow.com/questions/14646336/pandas-grouping-intra-day-timeseries-by-date daily_ps_scenes = ps_scenes.index.to_series().groupby([ps_scenes.index.year, ps_scenes.index.month, ps_scenes.index.day]) daily_count = daily_ps_scenes.agg('count') daily_count.index.names = ['y', 'm', 'd'] # How many days is the count greater than 1? daily_multiple_count = daily_count[daily_count > 1] print('Out of {} days of coverage, {} days have multiple collects.'.format( \ len(daily_count), len(daily_multiple_count))) daily_multiple_count.head() def scenes_and_count(group): entry = {'count': len(group), 'acquisition_time': group.index.tolist()} return pd.DataFrame(entry) daily_count_and_scenes = daily_ps_scenes.apply(scenes_and_count) # need to rename indices because right now multiple are called 'acquired', which # causes a bug when we try to run the query daily_count_and_scenes.index.names = ['y', 'm', 'd', 'num'] multiplecoverage = daily_count_and_scenes.query('count > 1') multiplecoverage.query('m == 7') # look at just occurrence in July Explanation: Ideally, PS scenes have daily coverage over all regions. How many days have PS coverage and how many PS scenes were taken on the same day? End of explanation def find_crossovers(acquired_time, landsat_scenes): '''landsat_scenes: pandas dataframe with acquisition time as index''' closest_idx, closest_delta = find_closest(acquired_time, landsat_scenes) closest_landsat = landsat_scenes.iloc[closest_idx] crossover = {'landsat_acquisition': closest_landsat.name, 'delta': closest_delta} return pd.Series(crossover) # fetch PS scenes ps_scenes = items_to_scenes(search_pl_api(build_ps_request(aoi['geometry'], start_date, stop_date))) # for each PS scene, find the closest Landsat scene crossovers = ps_scenes.index.to_series().apply(find_crossovers, args=(landsat_scenes,)) # filter to crossovers within 1hr concurrent_crossovers = crossovers[crossovers['delta'] < pd.Timedelta('1 hours')] print(len(concurrent_crossovers)) concurrent_crossovers Explanation: Looks like the multiple collects on the same day are just a few minutes apart. They are likely crossovers between different PS satellites. Cool! Since we only want to us one PS image for a crossover, we will chose the best collect for days with multiple collects. Find Crossovers Now that we have the PS Orthotiles filtered to what we want and have investigated the Landsat 8 scenes, let's look for crossovers between the two. First we find concurrent crossovers, PS and Landsat collects that occur within 1hour of each other. 
End of explanation def get_crossover_info(crossovers, aoi): def get_scene_info(acquisition_time, scenes): scene = scenes.loc[acquisition_time] scene_info = {'id': scene.id, 'thumbnail': scene.thumbnail, # we are going to use the footprints as shapes so convert to shapes now 'footprint': sgeom.shape(scene.footprint)} return pd.Series(scene_info) landsat_info = crossovers.landsat_acquisition.apply(get_scene_info, args=(landsat_scenes,)) ps_info = crossovers.index.to_series().apply(get_scene_info, args=(ps_scenes,)) footprint_info = pd.DataFrame({'landsat': landsat_info.footprint, 'ps': ps_info.footprint}) overlaps = footprint_info.apply(lambda x: x.landsat.intersection(x.ps), axis=1) aoi_shape = sgeom.shape(aoi['geometry']) overlap_percent = overlaps.apply(lambda x: x.intersection(aoi_shape).area / aoi_shape.area) crossover_info = pd.DataFrame({'overlap': overlaps, 'overlap_percent': overlap_percent, 'ps_id': ps_info.id, 'ps_thumbnail': ps_info.thumbnail, 'landsat_id': landsat_info.id, 'landsat_thumbnail': landsat_info.thumbnail}) return crossover_info crossover_info = get_crossover_info(concurrent_crossovers, aoi) print(len(crossover_info)) Explanation: Now that we have the crossovers, what we are really interested in is the IDs of the landsat and PS scenes, as well as how much they overlap the AOI. End of explanation significant_crossovers_info = crossover_info[crossover_info.overlap_percent > 0.9] print(len(significant_crossovers_info)) significant_crossovers_info Explanation: Next, we filter to overlaps that cover a significant portion of the AOI. End of explanation def group_by_day(data_frame): return data_frame.groupby([data_frame.index.year, data_frame.index.month, data_frame.index.day]) unique_crossover_days = group_by_day(significant_crossovers_info.index.to_series()).count() print(len(unique_crossover_days)) print(unique_crossover_days) Explanation: Browsing through the crossovers, we see that in some instances, multiple crossovers take place on the same day. Really, we are interested in 'unique crossovers', that is, crossovers that take place on unique days. Therefore, we will look at the concurrent crossovers by day. End of explanation # https://stackoverflow.com/questions/36006136/how-to-display-images-in-a-row-with-ipython-display def make_html(image): return '<img src="{0}" alt="{0}"style="display:inline;margin:1px"/>' \ .format(image) def display_thumbnails(row): print(row.name) display(HTML(''.join(make_html(t) for t in (row.ps_thumbnail, row.landsat_thumbnail)))) _ = significant_crossovers_info.apply(display_thumbnails, axis=1) Explanation: There are 6 unique crossovers between Landsat 8 and PS that cover over 90% of our AOI between January and August in 2017. Not bad! That is definitely enough to perform comparison. Display Crossovers Let's take a quick look at the crossovers we found to make sure that they don't look cloudy, hazy, or have any other quality issues that would affect the comparison. End of explanation
2,077
Given the following text description, write Python code to implement the functionality described below step by step Description: Sample, Explore, and Clean Taxifare Dataset Learning Objectives - Practice querying BigQuery - Sample from large dataset in a reproducible way - Practice exploring data using Pandas - Identify corrupt data and clean accordingly Introduction In this notebook, we will explore a dataset corresponding to taxi rides in New York City to build a Machine Learning model that estimates taxi fares. The idea is to suggest a likely fare to taxi riders so that they are not surprised, and so that they can protest if the charge is much higher than expected. Such a model would also be useful for ride-hailing apps that quote you the trip price in advance. Set up environment variables and load necessary libraries Step1: View data schema and size Our dataset is hosted in BigQuery Step2: Preview data (alternate way) Alternatively we can use BigQuery's web UI to execute queries. Open the web UI Paste the above query minus the %%bigquery part into the Query Editor Click the 'Run' button or type 'CTRL + ENTER' to execute the query Query results will be displayed below the Query editor. Sample data repeatably There's one issue with using RAND() &lt; N to sample. It's non-deterministic. Each time you run the query above you'll get a different sample. Since repeatability is key to data science, let's instead use a hash function (which is deterministic by definition) and then sample the using the modulo operation on the hashed value. We obtain our hash values using Step4: Load sample into Pandas dataframe The advantage of querying BigQuery directly as opposed to the web UI is that we can supplement SQL analysis with Python analysis. A popular Python library for data analysis on structured data is Pandas, and the primary data strucure in Pandas is called a DataFrame. To store BigQuery results in a Pandas DataFrame we have have to query the data with a slightly differently syntax. Import the google.cloud bigquery module Create a variable called bq which is equal to the BigQuery Client bigquery.Client() Store the desired SQL query as a Python string Execute bq.query(query_string).to_dataframe() where query_string is what you created in the previous step This will take about a minute Tip Step5: Explore datafame Step6: The Python variable trips is now a Pandas DataFrame. The .head() function above prints the first 5 rows of a DataFrame. The rows in the DataFrame may be in a different order than when using %%bq query, but the data is the same. It would be useful to understand the distribution of each of our columns, which is to say the mean, min, max, standard deviation etc.. A DataFrame's .describe() method provides this. By default it only analyzes numeric columns. To include stats about non-numeric column use describe(include='all'). Step7: Distribution analysis Do you notice anything off about the data? Pay attention to min and max. Latitudes should be between -90 and 90, and longitudes should be between -180 and 180, so clearly some of this data is bad. Further more some trip fares are negative and some passenger counts are 0 which doesn't seem right. We'll clean this up later. Investigate trip distance Looks like some trip distances are 0 as well, let's investigate this. Step8: It appears that trips are being charged substantial fares despite having 0 distance. Let's graph trip_distance vs fare_amount using the Pandas .plot() method to corroborate. 
Step9: It appears that we have a lot of invalid data that is being coded as zero distance and some fare amounts that are definitely illegitimate. Let's remove them from our analysis. We can do this by modifying the BigQuery query to keep only trips longer than zero miles and fare amounts that are at least the minimum cab fare ($2.50). Identify correct label Should we use fare_amount or total_amount as our label? What's the difference? To make this clear, let's look at some trips that included a toll. Exercise 3 Use the pandas DataFrame indexing to look at a subset of the trips dataframe created above where the tolls_amount is positive. Hint Step10: What do you see looking at the samples above? Does total_amount always reflect the fare amount + tolls_amount + tip? Why would there be a discrepancy? To account for this, we will use the sum of fare_amount and tolls_amount Select useful fields What fields do you see that may be useful in modeling taxifare? They should be Related to the objective Available at prediction time Related to the objective For example we know passenger_count shouldn't have any effect on fare because fare is calculated by time and distance. Best to eliminate it to reduce the amount of noise in the data and make the job of the ML algorithm easier. If you're not sure whether a column is related to the objective, err on the side of keeping it and let the ML algorithm figure out whether it's useful or not. Available at prediction time For example trip_distance is certainly related to the objective, but we can't know the value until a trip is completed (depends on the route taken), so it can't be used for prediction. We will use the following pickup_datetime, pickup_longitude, pickup_latitude, dropoff_longitude, and dropoff_latitude. Clean the data We need to do some clean-up of the data
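As a rough preview of what that clean-up means in pandas terms, here is a sketch applied to the sampled trips DataFrame; it is illustration only (the real filtering happens inside the BigQuery query), and the bounds simply mirror the ones used later in this notebook: NYC-plausible longitudes and latitudes, fares of at least $2.50, and non-zero distances and passenger counts.

# Sketch: the clean-up rules expressed as a pandas filter on the sample.
clean = trips[
    trips['pickup_longitude'].between(-78, -70)
    & trips['dropoff_longitude'].between(-78, -70)
    & trips['pickup_latitude'].between(37, 45)
    & trips['dropoff_latitude'].between(37, 45)
    & (trips['fare_amount'] >= 2.5)
    & (trips['trip_distance'] > 0)
    & (trips['passenger_count'] > 0)
]
print(len(trips), len(clean))  # how many rows the rules would drop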
Python Code: from google.cloud import bigquery PROJECT = !gcloud config get-value project PROJECT = PROJECT[0] %env PROJECT=$PROJECT Explanation: Sample, Explore, and Clean Taxifare Dataset Learning Objectives - Practice querying BigQuery - Sample from large dataset in a reproducible way - Practice exploring data using Pandas - Identify corrupt data and clean accordingly Introduction In this notebook, we will explore a dataset corresponding to taxi rides in New York City to build a Machine Learning model that estimates taxi fares. The idea is to suggest a likely fare to taxi riders so that they are not surprised, and so that they can protest if the charge is much higher than expected. Such a model would also be useful for ride-hailing apps that quote you the trip price in advance. Set up environment variables and load necessary libraries End of explanation %%bigquery --project $PROJECT #standardSQL SELECT * FROM `nyc-tlc.yellow.trips` WHERE RAND() < .0000001 -- sample a small fraction of the data Explanation: View data schema and size Our dataset is hosted in BigQuery: Google's petabyte scale, SQL queryable, fully managed cloud data warehouse. It is a publically available dataset, meaning anyone with a GCP account has access. Click here to acess the dataset. In the web UI, below the query editor, you will see the schema of the dataset. What fields are available, what does each mean? Click the 'details' tab. How big is the dataset? Preview data Let's see what a few rows of our data looks like. Any cell that starts with %%bigquery will be interpreted as a SQL query that is executed on BigQuery, and the result is printed to our notebook. BigQuery supports two flavors of SQL syntax: legacy SQL and standard SQL. The preferred is standard SQL because it complies with the official SQL:2011 standard. To instruct BigQuery to interpret our syntax as such we start the query with #standardSQL. There are over 1 Billion rows in this dataset and it's 130GB large, so let's retrieve a small sample End of explanation %%bigquery --project $PROJECT # TODO: Your code goes here Explanation: Preview data (alternate way) Alternatively we can use BigQuery's web UI to execute queries. Open the web UI Paste the above query minus the %%bigquery part into the Query Editor Click the 'Run' button or type 'CTRL + ENTER' to execute the query Query results will be displayed below the Query editor. Sample data repeatably There's one issue with using RAND() &lt; N to sample. It's non-deterministic. Each time you run the query above you'll get a different sample. Since repeatability is key to data science, let's instead use a hash function (which is deterministic by definition) and then sample the using the modulo operation on the hashed value. We obtain our hash values using: FARM_FINGERPRINT(CAST(hashkey AS STRING)) Working from inside out: CAST(): Casts hashkey to string because our hash function only works on strings FARM_FINGERPRINT(): Hashes strings to 64bit integers The hashkey should be: Unrelated to the objective Sufficiently high cardinality Given these properties we can sample our data repeatably using the modulo operation. To get a 1% sample: WHERE ABS(MOD(hashvalue, 100)) = 0 To get a different 1% sample change the remainder condition, for example: WHERE ABS(MOD(hashvalue, 100)) = 55 To get a 20% sample: WHERE ABS(MOD(hashvalue, 100)) &lt; 20 Alternatively: WHERE ABS(MOD(hashvalue, 5)) = 0 And so forth... We'll use pickup_datetime as our hash key because it meets our desired properties. 
If such a column doesn't exist in the data you can synthesize a hashkey by concatenating multiple columns. Below we sample 1/5000th of the data. The syntax is admittedly less elegant than RAND() &lt; N, but now each time you run the query you'll get the same result. *Tech note: Taking absolute value doubles the chances of hash collisions but since there are 2^64 possible hash values and less than 2^30 hash keys the collision risk is negligable. Exercise 1 Modify the BigQuery query above to produce a repeatable sample of the taxi fare data. Replace the RAND operation above with a FARM_FINGERPRINT operation that will yield a repeatable 1/5000th sample of the data. End of explanation bq = # TODO: Your code goes here query_string = # TODO: Your code goes here trips = # TODO: Your code goes here Explanation: Load sample into Pandas dataframe The advantage of querying BigQuery directly as opposed to the web UI is that we can supplement SQL analysis with Python analysis. A popular Python library for data analysis on structured data is Pandas, and the primary data strucure in Pandas is called a DataFrame. To store BigQuery results in a Pandas DataFrame we have have to query the data with a slightly differently syntax. Import the google.cloud bigquery module Create a variable called bq which is equal to the BigQuery Client bigquery.Client() Store the desired SQL query as a Python string Execute bq.query(query_string).to_dataframe() where query_string is what you created in the previous step This will take about a minute Tip: Use triple quotes for a multi-line string in Python Tip: You can measure execution time of a cell by starting that cell with %%time Exercise 2 Store the results of the query you created in the previous TODO above in a Pandas DataFrame called trips. You will need to import the bigquery module from Google Cloud and store the query as a string before executing the query. Then, - Create a variable called bq which contains the BigQuery Client - Copy/paste the query string from above - Use the BigQuery Client to execute the query and save it to a Pandas dataframe End of explanation print(type(trips)) trips.head() Explanation: Explore datafame End of explanation trips.describe() Explanation: The Python variable trips is now a Pandas DataFrame. The .head() function above prints the first 5 rows of a DataFrame. The rows in the DataFrame may be in a different order than when using %%bq query, but the data is the same. It would be useful to understand the distribution of each of our columns, which is to say the mean, min, max, standard deviation etc.. A DataFrame's .describe() method provides this. By default it only analyzes numeric columns. To include stats about non-numeric column use describe(include='all'). End of explanation # first 10 rows with trip_distance == 0 trips[trips["trip_distance"] == 0][:10] Explanation: Distribution analysis Do you notice anything off about the data? Pay attention to min and max. Latitudes should be between -90 and 90, and longitudes should be between -180 and 180, so clearly some of this data is bad. Further more some trip fares are negative and some passenger counts are 0 which doesn't seem right. We'll clean this up later. Investigate trip distance Looks like some trip distances are 0 as well, let's investigate this. End of explanation %matplotlib inline trips.plot(x="trip_distance", y="fare_amount", kind="scatter") Explanation: It appears that trips are being charged substantial fares despite having 0 distance. 
Let's graph trip_distance vs fare_amount using the Pandas .plot() method to corroborate. End of explanation # TODO: Your code goes here Explanation: It appears that we have a lot of invalid data that is being coded as zero distance and some fare amounts that are definitely illegitimate. Let's remove them from our analysis. We can do this by modifying the BigQuery query to keep only trips longer than zero miles and fare amounts that are at least the minimum cab fare ($2.50). Identify correct label Should we use fare_amount or total_amount as our label? What's the difference? To make this clear let's look at some trips that included a toll. Exercise 3 Use the pandas DataFrame indexing to look at a subset of the trips dataframe created above where the tolls_amount is positive. Hint: You can index the dataframe over values which have trips['tolls_amount'] &gt; 0. End of explanation %%bigquery --project $PROJECT #standardSQL SELECT (tolls_amount + fare_amount) AS fare_amount, -- create label that is the sum of fare_amount and tolls_amount pickup_datetime, pickup_longitude, pickup_latitude, dropoff_longitude, dropoff_latitude FROM `nyc-tlc.yellow.trips` WHERE -- Clean Data trip_distance > 0 AND passenger_count > 0 TODO: Your code goes here TODO: Your code goes here -- create a repeatable 1/5000th sample AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 5000)) = 1 Explanation: What do you see looking at the samples above? Does total_amount always reflect the fare amount + tolls_amount + tip? Why would there be a discrepancy? To account for this, we will use the sum of fare_amount and tolls_amount Select useful fields What fields do you see that may be useful in modeling taxifare? They should be Related to the objective Available at prediction time Related to the objective For example we know passenger_count shouldn't have any affect on fare because fare is calculated by time and distance. Best to eliminate it to reduce the amount of noise in the data and make the job of the ML algorithm easier. If you're not sure whether a column is related to the objective, err on the side of keeping it and let the ML algorithm figure out whether it's useful or not. Available at prediction time For example trip_distance is certainly related to the objective, but we can't know the value until a trip is completed (depends on the route taken), so it can't be used for prediction. We will use the following pickup_datetime, pickup_longitude, pickup_latitude, dropoff_longitude, and dropoff_latitude. Clean the data We need to do some clean-up of the data: Filter to latitudes and longitudes that are reasonable for NYC the pickup longitude and dropoff_longitude should lie between -70 degrees and -78 degrees the pickup_latitude and dropoff_latitude should lie between 37 degrees and 45 degrees We shouldn't include fare amounts less than $2.50 Trip distances and passenger counts should be non-zero Have the label reflect the sum of fare_amount and tolls_amount Let's change the BigQuery query appropriately, and only return the fields we'll use in our model. End of explanation
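One optional sanity check, not part of the original lab: because the sample is keyed on a hash rather than RAND(), pulling it twice should return exactly the same rows. A sketch, assuming the bq client and query_string from Exercise 2 are still in scope:

# Run the hash-sampled query twice and confirm the result is identical.
trips_a = bq.query(query_string).to_dataframe()
trips_b = bq.query(query_string).to_dataframe()
print(len(trips_a) == len(trips_b))
print(trips_a.sort_values('pickup_datetime').reset_index(drop=True)
      .equals(trips_b.sort_values('pickup_datetime').reset_index(drop=True)))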
2,078
Given the following text description, write Python code to implement the functionality described below step by step Description: VARMAX models This is a notebook stub for VARMAX models. Full development will be done after impulse response functions are available. Step1: Model specification The VARMAX class in Statsmodels allows estimation of VAR, VMA, and VARMA models (through the order argument), optionally with a constant term (via the trend argument). Exogenous regressors may also be included (as usual in Statsmodels, by the exog argument), and in this way a time trend may be added. Finally, the class allows measurement error (via the measurement_error argument) and allows specifying either a diagonal or unstructured innovation covariance matrix (via the error_cov_type argument). Example 1 Step2: Example 2 Step3: Caution
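For readers who want to stay inside statsmodels itself, here is a hedged sketch of the VARX(2) fit described above using statsmodels' own VARMAX class; the code in this notebook actually uses the dismalpy wrapper (dp.ssm.VARMAX), so the names below mirror that code and everything else is an assumption:

# Sketch: the same VARX(2) specification via statsmodels' state space VARMAX.
from statsmodels.tsa.statespace.varmax import VARMAX

# endog has columns ['dln_inv', 'dln_inc']; exog is the extra regressor series.
mod_sm = VARMAX(endog[['dln_inv', 'dln_inc']], exog=exog,
                order=(2, 0), trend='n')   # 'n' = no constant (older releases spell it 'nc')
res_sm = mod_sm.fit(maxiter=1000, disp=False)
print(res_sm.summary())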
Python Code: %matplotlib inline import numpy as np import pandas as pd import statsmodels.api as sm import dismalpy as dp import matplotlib.pyplot as plt dta = pd.read_stata('data/lutkepohl2.dta') dta.index = dta.qtr endog = dta.ix['1960-04-01':'1978-10-01', ['dln_inv', 'dln_inc', 'dln_consump']] Explanation: VARMAX models This is a notebook stub for VARMAX models. Full development will be done after impulse response functions are available. End of explanation exog = pd.Series(np.arange(len(endog)), index=endog.index, name='trend') exog = endog['dln_consump'] mod = dp.ssm.VARMAX(endog[['dln_inv', 'dln_inc']], order=(2,0), trend='nc', exog=exog) res = mod.fit(maxiter=1000) print(res.summary()) Explanation: Model specification The VARMAX class in Statsmodels allows estimation of VAR, VMA, and VARMA models (through the order argument), optionally with a constant term (via the trend argument). Exogenous regressors may also be included (as usual in Statsmodels, by the exog argument), and in this way a time trend may be added. Finally, the class allows measurement error (via the measurement_error argument) and allows specifying either a diagonal or unstructured innovation covariance matrix (via the error_cov_type argument). Example 1: VAR Below is a simple VARX(2) model in two endogenous variables and an exogenous series, but no constant term. Notice that we needed to allow for more iterations than the default (which is maxiter=50) in order for the likelihood estimation to converge. This is not unusual in VAR models which have to estimate a large number of parameters, often on a relatively small number of time series: this model, for example, estimates 27 parameters off of 75 observations of 3 variables. End of explanation mod = dp.ssm.VARMAX(endog[['dln_inv', 'dln_inc']], order=(0,2), error_cov_type='diagonal') res = mod.fit(maxiter=1000) print(res.summary()) Explanation: Example 2: VMA A vector moving average model can also be formulated. Below we show a VMA(2) on the same data, but where the innovations to the process are uncorrelated. In this example we leave out the exogenous regressor but now include the constant term. End of explanation mod = dp.ssm.VARMAX(endog[['dln_inv', 'dln_inc']], order=(1,1)) res = mod.fit(maxiter=1000) print(res.summary()) Explanation: Caution: VARMA(p,q) specifications Although the model allows estimating VARMA(p,q) specifications, these models are not identified without additional restrictions on the representation matrices, which are not built-in. For this reason, it is recommended that the user proceed with error (and indeed a warning is issued when these models are specified). Nonetheless, they may in some circumstances provide useful information. End of explanation
2,079
Given the following text description, write Python code to implement the functionality described below step by step Description: Introduction to kgof This notebook will introduce you to kgof (kernel goodness-of-fit), a Python package implementing a linear-time kernel-based goodness-of-fit test as described in A Linear-Time Kernel Goodness-of-Fit Test Wittawat Jitkrittum, Wenkai Xu, Zoltan Szabo, Kenji Fukumizu, Arthur Gretton NIPS 2017 https Step2: In kgof, we use autograd to compute derivatives for our optimization problem. So instead of import numpy as np make sure you use import autograd.numpy as np Goodness-of-fit test Given a known probability density $p$ (model) and a sample ${ \mathbf{x}i }{i=1}^n \sim q$ where $q$ is an unknown density, a goodness-of-fit test proposes a null hypothesis $H_0 Step3: Notice that the function computes the log of an unnormalized density. This works fine as our test only requires $\nabla_{\mathbf{x}} \log p(\mathbf{x})$ which does not depend on the normalizer. The gradient $\nabla_{\mathbf{x}} \log p(\mathbf{x})$ will be automatically computed by autograd. In kgof package, a model $p$ can be specified by implementing the class density.UnnormalizedDensity. Implementing this directly is a bit tedious, however. An easier way is to use the function density.from_log_den(d, f) which takes as input 2 arguments Step4: Next, let us draw some sample from $q$. Step5: Plot the data from q Step6: All the implemented tests take the data in the form of a data.Data object. This is just an encapsulation of the sample X. To construct data.Data we do the following Step7: Now that we have the data, let us randomly split it into two disjoint halves Step8: Let us optimize the parameters of the test on tr. The optimization relies on autograd to compute the gradient. We will use a Gaussian kernel for the test. Step9: The optimization procedure returns back V_opt Step10: Let us use these optimized parameters to construct the FSSD test. Our test using a Gaussian kernels is implemented in kgof.goftest.GaussFSSD. Step11: Perform the goodness-of-fit test on the testing data te. Step12: It can be seen that the test correctly rejects $H_0$ with a very small p-value. Learned features Let us check the optimized test locations. We will plot the training data, the learned feature(s) and the contour of the unnormalized density of $p$. Step14: Here, the learned feature(s) indicate that the data do not match the tail profile of $p$. If you would like to see the optimization surface, see the notebook fssd_locs_surface.ipynb. Exercise Go back to where we sample the data from $q$, and change m (mean of the first coordinate of $q$) to 0. This will make $p=q$ so that $H_0$ is now true. Run the whole procedure again and verify that the test will not reject $H_0$. (Technically, the probability of rejecting is about $\alpha$.) Note that when the test fails to reject, the learned features are not interpretable. They will be arbitrary. Important note A few points worth mentioning The FSSD test requires that the derivative of $\log p$ exists. The test requires a technical condition called the "vanishing boundary" condition for it to be consistent. The condition is $\lim_{\|\mathbf{x} \|\to \infty} p(\mathbf{x}) \mathbf{g}(\mathbf{x}) = \mathbf{0}$ where $\mathbf{g}$ is the so called the Stein witness function (see the paper) which depends on the kernel and $\nabla_{\mathbf{x}} \log p(\mathbf{x})$. For a density $p$ which has support everywhere e.g., Gaussian, there is no problem at all. 
However, for a density defined on a domain with a boundary, one has to be careful. For example, if $p$ is a Gamma density defined on the positive orthant of $\mathbb{R}$, the density itself can actually be evaluated on negative points. Looking at the way the Gamma density is written, there is nothing that tells the test that it cannot be evaluated on negative orthant. Therefore, if $p$ is Gamma, and the observed sample also follows $p$ (i.e., $H_0$ is true), the test will still reject $H_0$! The reason is that the data do not match the left tail (in the negative region!) of the Gamma. It is necessary to include the fact that negative region has 0 density into the density itself. Specify $p$ directly with its gradient As mentioned, the FSSD test requires only $\nabla_{\mathbf{x}} \log p(\mathbf{x})$, and not even $\log p(\mathbf{x})$. If your model is such that it is easier to specify with its gradient, then this can be done as well. For instance, $\nabla_{\mathbf{x}} \log p(\mathbf{x})$ when $p(\mathbf{x}) = \mathcal{N}(\mathbf{0}, \mathbf{I})$ is given by $-\mathbf{x}$. Step15: The UnnormalizedDensity can then be constructed with the following code. This defines the same model as before, and therefore will give the same test result.
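To make that boundary point concrete, one hedged sketch of encoding the support directly into a Gamma log-density follows; the shape and rate values are hypothetical, and this only illustrates the bookkeeping, not a full treatment of the boundary condition:

# Sketch: give zero density (log-density -inf) to points outside the support.
import autograd.numpy as np

shape_k, rate = 2.0, 1.0      # hypothetical Gamma parameters

def gamma_log_den(X):
    x = X[:, 0]
    x_safe = np.where(x > 0, x, 1.0)                      # avoid log of non-positive values
    unden = (shape_k - 1.0) * np.log(x_safe) - rate * x_safe
    return np.where(x > 0, unden, -np.inf)                # explicitly zero density for x <= 0

# p_gamma = density.from_log_den(1, gamma_log_den)        # then proceed as with the Gaussian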
Python Code: %load_ext autoreload %autoreload 2 %matplotlib inline import kgof import kgof.data as data import kgof.density as density import kgof.goftest as gof import kgof.kernel as kernel import kgof.util as util import matplotlib import matplotlib.pyplot as plt import autograd.numpy as np import scipy.stats as stats Explanation: Introduction to kgof This notebook will introduce you to kgof (kernel goodness-of-fit), a Python package implementing a linear-time kernel-based goodness-of-fit test as described in A Linear-Time Kernel Goodness-of-Fit Test Wittawat Jitkrittum, Wenkai Xu, Zoltan Szabo, Kenji Fukumizu, Arthur Gretton NIPS 2017 https://arxiv.org/abs/1705.07673 See the Github page for more information. Make sure that you have kgof included in Python's search path. In particular the following import statements should not produce any fatal error. End of explanation # Assume two dimensions. d = 2 def isogauss_log_den(X): Evaluate the log density at the points (rows) in X of the standard isotropic Gaussian. Note that the density is NOT normalized. X: n x d nd-array return a length-n array mean = np.zeros(d) variance = 1 unden = -np.sum((X-mean)**2, 1)/(2.0*variance) return unden Explanation: In kgof, we use autograd to compute derivatives for our optimization problem. So instead of import numpy as np make sure you use import autograd.numpy as np Goodness-of-fit test Given a known probability density $p$ (model) and a sample ${ \mathbf{x}i }{i=1}^n \sim q$ where $q$ is an unknown density, a goodness-of-fit test proposes a null hypothesis $H_0: p = q$ against the alternative hypothesis $H_1: p \neq q$. In other words, it tests whether or not the sample ${ \mathbf{x}i }{i=1}^n $ is distributed according to a known $p$. Our test relies on a new test statistic called The Finite-Set Stein Discrepancy (FSSD) which is a discrepancy measure between a density and a sample. Unique features of our new goodness-of-fit test are It makes only a few mild assumptions on the distributions $p$ and $q$. The model $p$ can take almost any form. The normalizer of $p$ is not assumed known. The test only assesses the goodness of $p$ through $\nabla_{\mathbf{x}} \log p(\mathbf{x})$ i.e., the first derivative of the log density. The runtime complexity of the full procedure (both parameter tuning and the actual test) is $\mathcal{O}(n)$ i.e., linear in the sample size. It returns a set of points (features) which indicate where $p$ fails to fit the data. For demonstration purpose, let us consider a simple two-dimensional toy problem where $p$ is the standard Gaussian. A simple Gaussian model Let us assume that $p(\mathbf{x}) = \mathcal{N}(\mathbf{0}, \mathbf{I})$ in $\mathbb{R}^2$ (two-dimensional space). The data ${ \mathbf{x}i }{i=1}^n \sim q = \mathcal{N}([m, 0], \mathbf{I})$ where $m$ specifies the mean of the first coordinate of $q$. From this setting, if $m\neq 0$, then $H_1$ is true and the test should reject $H_0$. Let us first construct the log density function for our model. End of explanation # p is an UnnormalizedDensity object p = density.from_log_den(d, isogauss_log_den) Explanation: Notice that the function computes the log of an unnormalized density. This works fine as our test only requires $\nabla_{\mathbf{x}} \log p(\mathbf{x})$ which does not depend on the normalizer. The gradient $\nabla_{\mathbf{x}} \log p(\mathbf{x})$ will be automatically computed by autograd. In kgof package, a model $p$ can be specified by implementing the class density.UnnormalizedDensity. 
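For completeness, a hypothetical sketch of that direct route, subclassing density.UnnormalizedDensity for the same isotropic Gaussian, is shown next; the exact set of abstract methods may differ between kgof versions, which is part of why the helper described below is easier:

# Hypothetical sketch only: the abstract interface is assumed, not checked.
class IsoGaussDensity(density.UnnormalizedDensity):
    def __init__(self, d):
        self.d_ = d
    def log_den(self, X):
        # same unnormalized log density as isogauss_log_den above
        return -np.sum(X**2, 1) / 2.0
    def dim(self):
        return self.d_

# p_direct = IsoGaussDensity(d)   # only valid if no other abstract methods are required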
Implementing this directly is a bit tedious, however. An easier way is to use the function density.from_log_den(d, f) which takes as input 2 arguments: d: the dimension of the input space f: a function taking in a 2D numpy array of size n x d and producing a one-dimensional array of size n for the n values of the log unnormalized density. Let us construct an UnnormalizedDensity which is the object representing a model. All the implemented goodness-of-fit tests take this object as an input. End of explanation # Let's assume that m = 1. # If m=0, then p=q and H_0 is true. # m = 0 m = 1 # Draw n points from q seed = 4 np.random.seed(seed) n = 400 X = np.random.randn(n, 2) + np.array([m, 0]) Explanation: Next, let us draw some sample from $q$. End of explanation plt.plot(X[:, 0], X[:, 1], 'ko', label='Data from $q$') plt.legend() Explanation: Plot the data from q End of explanation # dat will be fed to the test. dat = data.Data(X) Explanation: All the implemented tests take the data in the form of a data.Data object. This is just an encapsulation of the sample X. To construct data.Data we do the following End of explanation # We will use 20% of the data for parameter tuning, and 80% for testing. tr, te = dat.split_tr_te(tr_proportion=0.2, seed=2) Explanation: Now that we have the data, let us randomly split it into two disjoint halves: tr and te. The training set tr will be used for parameter optimization. The testing set te will be used for the actual goodness-of-fit test. tr and te are again of type data.Data. End of explanation # J is the number of test locations (or features). Typically not larger than 10. J = 1 # There are many options for the optimization. # Almost all of them have default values. # Here, we will list a few to give you a sense of what you can control. # Full options can be found in gof.GaussFSSD.optimize_locs_widths(..) opts = { 'reg': 1e-2, # regularization parameter in the optimization objective 'max_iter': 50, # maximum number of gradient ascent iterations 'tol_fun':1e-7, # termination tolerance of the objective } # make sure to give tr (NOT te). # do the optimization with the options in opts. V_opt, gw_opt, opt_info = gof.GaussFSSD.optimize_auto_init(p, tr, J, **opts) Explanation: Let us optimize the parameters of the test on tr. The optimization relies on autograd to compute the gradient. We will use a Gaussian kernel for the test. End of explanation opt_info Explanation: The optimization procedure returns back V_opt: optimized test locations (features). A $J \times d$ numpy array. gw_opt: optimized Gaussian width (for the Gaussian kernel). A floating point number. opt_info: a dictionary containing information gathered during the optimization. End of explanation # alpha = significance level of the test alpha = 0.01 fssd_opt = gof.GaussFSSD(p, gw_opt, V_opt, alpha) Explanation: Let us use these optimized parameters to construct the FSSD test. Our test using a Gaussian kernels is implemented in kgof.goftest.GaussFSSD. End of explanation # return a dictionary of testing results test_result = fssd_opt.perform_test(te) test_result Explanation: Perform the goodness-of-fit test on the testing data te. 
End of explanation # xtr is an n x d numpy array xtr = tr.data() # training data plt.plot(xtr[:, 0], xtr[:, 1], 'ko', label='Training data') # feature plt.plot(V_opt[:, 0], V_opt[:, 1], 'r*', label='Learned feature(s)', markersize=20) max0, max1 = np.max(xtr, 0) min0, min1 = np.min(xtr, 0) sd0, sd1 = ((max0-min0)*0.4, (max1-min1)*0.4) # form a test location grid to try nd0 = 30 nd1 = 30 loc0_cands = np.linspace(min0-sd0/2, max0+sd0/2, nd0) loc1_cands = np.linspace(min1-sd1/2, max1+sd1/2, nd1) lloc0, lloc1 = np.meshgrid(loc0_cands, loc1_cands) # nd1 x nd0 x 2 loc3d = np.dstack((lloc0, lloc1)) # #candidates x 2 all_loc2s = np.reshape(loc3d, (-1, 2) ) den_grid = np.exp(p.log_den(all_loc2s)) den_grid = np.reshape(den_grid, (nd1, nd0)) # plt.figure(figsize=(10, 6)) # Plot the unnormalized density CS = plt.contour(lloc0, lloc1, den_grid, alpha=0.7) plt.legend(numpoints=1, loc='best') Explanation: It can be seen that the test correctly rejects $H_0$ with a very small p-value. Learned features Let us check the optimized test locations. We will plot the training data, the learned feature(s) and the contour of the unnormalized density of $p$. End of explanation def isogauss_grad_log(X): Evaluate the gradient of the log density of N(0, I) at the points (rows) in X. X: n x d nd-array Return an n x d numpy array of gradients return -X Explanation: Here, the learned feature(s) indicate that the data do not match the tail profile of $p$. If you would like to see the optimization surface, see the notebook fssd_locs_surface.ipynb. Exercise Go back to where we sample the data from $q$, and change m (mean of the first coordinate of $q$) to 0. This will make $p=q$ so that $H_0$ is now true. Run the whole procedure again and verify that the test will not reject $H_0$. (Technically, the probability of rejecting is about $\alpha$.) Note that when the test fails to reject, the learned features are not interpretable. They will be arbitrary. Important note A few points worth mentioning The FSSD test requires that the derivative of $\log p$ exists. The test requires a technical condition called the "vanishing boundary" condition for it to be consistent. The condition is $\lim_{\|\mathbf{x} \|\to \infty} p(\mathbf{x}) \mathbf{g}(\mathbf{x}) = \mathbf{0}$ where $\mathbf{g}$ is the so called the Stein witness function (see the paper) which depends on the kernel and $\nabla_{\mathbf{x}} \log p(\mathbf{x})$. For a density $p$ which has support everywhere e.g., Gaussian, there is no problem at all. However, for a density defined on a domain with a boundary, one has to be careful. For example, if $p$ is a Gamma density defined on the positive orthant of $\mathbb{R}$, the density itself can actually be evaluated on negative points. Looking at the way the Gamma density is written, there is nothing that tells the test that it cannot be evaluated on negative orthant. Therefore, if $p$ is Gamma, and the observed sample also follows $p$ (i.e., $H_0$ is true), the test will still reject $H_0$! The reason is that the data do not match the left tail (in the negative region!) of the Gamma. It is necessary to include the fact that negative region has 0 density into the density itself. Specify $p$ directly with its gradient As mentioned, the FSSD test requires only $\nabla_{\mathbf{x}} \log p(\mathbf{x})$, and not even $\log p(\mathbf{x})$. If your model is such that it is easier to specify with its gradient, then this can be done as well. 
For instance, $\nabla_{\mathbf{x}} \log p(\mathbf{x})$ when $p(\mathbf{x}) = \mathcal{N}(\mathbf{0}, \mathbf{I})$ is given by $-\mathbf{x}$. End of explanation p1 = density.from_grad_log(d, isogauss_grad_log) Explanation: The UnnormalizedDensity can then be constructed with the following code. This defines the same model as before, and therefore will give the same test result. End of explanation
2,080
Given the following text description, write Python code to implement the functionality described below step by step Description: Skip-gram word2vec In this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like machine translation. Readings Here are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material. A really good conceptual overview of word2vec from Chris McCormick First word2vec paper from Mikolov et al. NIPS paper with improvements for word2vec also from Mikolov et al. An implementation of word2vec from Thushan Ganegedara TensorFlow word2vec tutorial Word embeddings When you're dealing with words in text, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient, you'll have one element set to 1 and the other 50,000 set to 0. The matrix multiplication going into the first hidden layer will have almost all of the resulting values be zero. This a huge waste of computation. To solve this problem and greatly increase the efficiency of our networks, we use what are called embeddings. Embeddings are just a fully connected layer like you've seen before. We call this layer the embedding layer and the weights are embedding weights. We skip the multiplication into the embedding layer by instead directly grabbing the hidden layer values from the weight matrix. We can do this because the multiplication of a one-hot encoded vector with a matrix returns the row of the matrix corresponding the index of the "on" input unit. Instead of doing the matrix multiplication, we use the weight matrix as a lookup table. We encode the words as integers, for example "heart" is encoded as 958, "mind" as 18094. Then to get hidden layer values for "heart", you just take the 958th row of the embedding matrix. This process is called an embedding lookup and the number of hidden units is the embedding dimension. <img src='assets/tokenize_lookup.png' width=500> There is nothing magical going on here. The embedding lookup table is just a weight matrix. The embedding layer is just a hidden layer. The lookup is just a shortcut for the matrix multiplication. The lookup table is trained just like any weight matrix as well. Embeddings aren't only used for words of course. You can use them for any model where you have a massive number of classes. A particular type of model called Word2Vec uses the embedding layer to find vector representations of words that contain semantic meaning. Word2Vec The word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as "black", "white", and "red" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram. <img src="assets/word2vec_architectures.png" width="500"> In this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts. First up, importing packages. 
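Before any of that, here is a quick numeric illustration (an aside, not part of the original notebook) of the lookup-table idea described above: multiplying a one-hot vector by the weight matrix returns the same thing as indexing a row.

# Tiny demo: one-hot matmul versus direct row lookup.
import numpy as np

vocab_size, embed_dim = 5, 3
W = np.random.randn(vocab_size, embed_dim)   # embedding weight matrix
idx = 2                                      # integer-encoded word
one_hot = np.zeros(vocab_size)
one_hot[idx] = 1

print(np.allclose(one_hot @ W, W[idx]))      # True: the lookup skips the matmul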
Step1: Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space. Step2: Preprocessing Here I'm fixing up the text to make training easier. This comes from the utils module I wrote. The preprocess function coverts any punctuation into tokens, so a period is changed to &lt;PERIOD&gt;. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it. Step3: And here I'm creating dictionaries to covert words to integers and backwards, integers to words. The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words. Step4: Subsampling Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by $$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$ where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset. I'm going to leave this up to you as an exercise. This is more of a programming challenge, than about deep learning specifically. But, being able to prepare your data for your network is an important skill to have. Check out my solution to see how I did it. Exercise Step5: Making batches Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$. From Mikolov et al. Step6: Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function by the way, helps save memory. Step7: Building the graph From Chris McCormick's blog, we can see the general structure of our network. The input words are passed in as integers. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal. The idea here is to train the hidden layer weight matrix to find efficient representations for our words. We can discard the softmax layer becuase we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset. I'm going to have you build the graph in stages now. First off, creating the inputs and labels placeholders like normal. Exercise Step8: Embedding The embedding matrix has a size of the number of words by the number of units in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. 
Remember that we're using tokenized data for our inputs, usually as integers, where the number of tokens is the number of words in our vocabulary. Exercise Step9: Negative sampling For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called "negative sampling". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss. Exercise Step10: Validation This code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings. Step11: Training Below is the code to train the network. Every 100 batches it reports the training loss. Every 1000 batches, it'll print out the validation words. Step12: Restore the trained network if you need to Step13: Visualizing the word vectors Below we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local stucture. Check out this post from Christopher Olah to learn more about T-SNE and other ways to visualize high-dimensional data.
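As a side note, the closest-words check can also be run after training in plain numpy on the saved embedding matrix; the sketch below assumes embed_mat, vocab_to_int and int_to_vocab from this notebook and picks an arbitrary probe word:

# Sketch: cosine-similarity neighbours straight from the embedding matrix.
import numpy as np

normed = embed_mat / np.linalg.norm(embed_mat, axis=1, keepdims=True)
probe = vocab_to_int['dog']                 # any word in the vocabulary works
sims = normed @ normed[probe]
nearest = np.argsort(-sims)[1:9]            # top 8 neighbours, skipping the word itself
print([int_to_vocab[i] for i in nearest])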
Python Code: import time import numpy as np import tensorflow as tf import utils Explanation: Skip-gram word2vec In this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like machine translation. Readings Here are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material. A really good conceptual overview of word2vec from Chris McCormick First word2vec paper from Mikolov et al. NIPS paper with improvements for word2vec also from Mikolov et al. An implementation of word2vec from Thushan Ganegedara TensorFlow word2vec tutorial Word embeddings When you're dealing with words in text, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient, you'll have one element set to 1 and the other 50,000 set to 0. The matrix multiplication going into the first hidden layer will have almost all of the resulting values be zero. This a huge waste of computation. To solve this problem and greatly increase the efficiency of our networks, we use what are called embeddings. Embeddings are just a fully connected layer like you've seen before. We call this layer the embedding layer and the weights are embedding weights. We skip the multiplication into the embedding layer by instead directly grabbing the hidden layer values from the weight matrix. We can do this because the multiplication of a one-hot encoded vector with a matrix returns the row of the matrix corresponding the index of the "on" input unit. Instead of doing the matrix multiplication, we use the weight matrix as a lookup table. We encode the words as integers, for example "heart" is encoded as 958, "mind" as 18094. Then to get hidden layer values for "heart", you just take the 958th row of the embedding matrix. This process is called an embedding lookup and the number of hidden units is the embedding dimension. <img src='assets/tokenize_lookup.png' width=500> There is nothing magical going on here. The embedding lookup table is just a weight matrix. The embedding layer is just a hidden layer. The lookup is just a shortcut for the matrix multiplication. The lookup table is trained just like any weight matrix as well. Embeddings aren't only used for words of course. You can use them for any model where you have a massive number of classes. A particular type of model called Word2Vec uses the embedding layer to find vector representations of words that contain semantic meaning. Word2Vec The word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as "black", "white", and "red" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram. <img src="assets/word2vec_architectures.png" width="500"> In this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts. First up, importing packages. 
End of explanation from urllib.request import urlretrieve from os.path import isfile, isdir from tqdm import tqdm import zipfile dataset_folder_path = 'data' dataset_filename = 'text8.zip' dataset_name = 'Text8 Dataset' class DLProgress(tqdm): last_block = 0 def hook(self, block_num=1, block_size=1, total_size=None): self.total = total_size self.update((block_num - self.last_block) * block_size) self.last_block = block_num if not isfile(dataset_filename): with DLProgress(unit='B', unit_scale=True, miniters=1, desc=dataset_name) as pbar: urlretrieve( 'http://mattmahoney.net/dc/text8.zip', dataset_filename, pbar.hook) if not isdir(dataset_folder_path): with zipfile.ZipFile(dataset_filename) as zip_ref: zip_ref.extractall(dataset_folder_path) with open('data/text8') as f: text = f.read() Explanation: Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space. End of explanation words = utils.preprocess(text) print(words[:30]) print("Total words: {}".format(len(words))) print("Unique words: {}".format(len(set(words)))) Explanation: Preprocessing Here I'm fixing up the text to make training easier. This comes from the utils module I wrote. The preprocess function coverts any punctuation into tokens, so a period is changed to &lt;PERIOD&gt;. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it. End of explanation vocab_to_int, int_to_vocab = utils.create_lookup_tables(words) int_words = [vocab_to_int[word] for word in words] list(int_to_vocab.items())[:5] Explanation: And here I'm creating dictionaries to covert words to integers and backwards, integers to words. The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words. End of explanation ## Your code here from collections import Counter import random freq = Counter(int_words) p_drop = {word : (1 - np.sqrt(0.01/freq[word])) for word in int_words} train_words = [word for word in int_words if p_drop[word] < random.random()] # The final subsampled word list len(train_words) Explanation: Subsampling Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by $$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$ where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset. I'm going to leave this up to you as an exercise. This is more of a programming challenge, than about deep learning specifically. But, being able to prepare your data for your network is an important skill to have. Check out my solution to see how I did it. Exercise: Implement subsampling for the words in int_words. That is, go through int_words and discard each word given the probablility $P(w_i)$ shown above. 
Note that $P(w_i)$ is the probability that a word is discarded. Assign the subsampled data to train_words. End of explanation def get_target(words, idx, window_size=5): ''' Get a list of words in a window around an index. ''' # Your code here random_range = random.choice(range(1,window_size+1)) start = idx-random_range if (idx-random_range) > 0 else 0 target_list = list(set(words[start:idx] + words[idx+1 : idx+random_range+1])) return target_list Explanation: Making batches Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$. From Mikolov et al.: "Since the more distant words are usually less related to the current word than those close to it, we give less weight to the distant words by sampling less from those words in our training examples... If we choose $C = 5$, for each training word we will select randomly a number $R$ in range $< 1; C >$, and then use $R$ words from history and $R$ words from the future of the current word as correct labels." Exercise: Implement a function get_target that receives a list of words, an index, and a window size, then returns a list of words in the window around the index. Make sure to use the algorithm described above, where you choose a random number of words from the window. End of explanation def get_batches(words, batch_size, window_size=5): ''' Create a generator of word batches as a tuple (inputs, targets) ''' n_batches = len(words)//batch_size # only full batches words = words[:n_batches*batch_size] for idx in range(0, len(words), batch_size): x, y = [], [] batch = words[idx:idx+batch_size] for ii in range(len(batch)): batch_x = batch[ii] batch_y = get_target(batch, ii, window_size) y.extend(batch_y) x.extend([batch_x]*len(batch_y)) yield x, y Explanation: Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function by the way, helps save memory. End of explanation train_graph = tf.Graph() with train_graph.as_default(): inputs = tf.placeholder(tf.int32, [None], name='inputs') labels = tf.placeholder(tf.int32, [None, 1], name='labels') Explanation: Building the graph From Chris McCormick's blog, we can see the general structure of our network. The input words are passed in as integers. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal. The idea here is to train the hidden layer weight matrix to find efficient representations for our words. We can discard the softmax layer becuase we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset. I'm going to have you build the graph in stages now. First off, creating the inputs and labels placeholders like normal. Exercise: Assign inputs and labels using tf.placeholder. We're going to be passing in integers, so set the data types to tf.int32. The batches we're passing in will have varying sizes, so set the batch sizes to [None]. To make things work later, you'll need to set the second dimension of labels to None or 1. 
End of explanation n_vocab = len(int_to_vocab) n_embedding = 300 # Number of embedding features with train_graph.as_default(): embedding = tf.Variable(tf.random_uniform([n_vocab, n_embedding], -1, 1))# create embedding weight matrix here embed = tf.nn.embedding_lookup(embedding, inputs)# use tf.nn.embedding_lookup to get the hidden layer output Explanation: Embedding The embedding matrix has a size of the number of words by the number of units in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using tokenized data for our inputs, usually as integers, where the number of tokens is the number of words in our vocabulary. Exercise: Tensorflow provides a convenient function tf.nn.embedding_lookup that does this lookup for us. You pass in the embedding matrix and a tensor of integers, then it returns rows in the matrix corresponding to those integers. Below, set the number of embedding features you'll use (200 is a good start), create the embedding matrix variable, and use tf.nn.embedding_lookup to get the embedding tensors. For the embedding matrix, I suggest you initialize it with a uniform random numbers between -1 and 1 using tf.random_uniform. End of explanation # Number of negative labels to sample n_sampled = 100 with train_graph.as_default(): softmax_w = tf.Variable(tf.truncated_normal((n_vocab, n_embedding), stddev=0.1)) softmax_b = tf.Variable(tf.zeros(n_vocab)) # Calculate the loss using negative sampling loss = tf.nn.sampled_softmax_loss(softmax_w, softmax_b, labels, embed, n_sampled, n_vocab) cost = tf.reduce_mean(loss) optimizer = tf.train.AdamOptimizer().minimize(cost) Explanation: Negative sampling For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called "negative sampling". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss. Exercise: Below, create weights and biases for the softmax layer. Then, use tf.nn.sampled_softmax_loss to calculate the loss. Be sure to read the documentation to figure out how it works. End of explanation with train_graph.as_default(): ## From Thushan Ganegedara's implementation valid_size = 16 # Random set of words to evaluate similarity on. valid_window = 100 # pick 8 samples from (0,100) and (1000,1100) each ranges. lower id implies more frequent valid_examples = np.array(random.sample(range(valid_window), valid_size//2)) valid_examples = np.append(valid_examples, random.sample(range(1000,1000+valid_window), valid_size//2)) valid_dataset = tf.constant(valid_examples, dtype=tf.int32) # We use the cosine distance: norm = tf.sqrt(tf.reduce_sum(tf.square(embedding), 1, keep_dims=True)) normalized_embedding = embedding / norm valid_embedding = tf.nn.embedding_lookup(normalized_embedding, valid_dataset) similarity = tf.matmul(valid_embedding, tf.transpose(normalized_embedding)) # If the checkpoints directory doesn't exist: !mkdir checkpoints Explanation: Validation This code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and few uncommon words. 
Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings. End of explanation epochs = 10 batch_size = 1000 window_size = 10 with train_graph.as_default(): saver = tf.train.Saver() with tf.Session(graph=train_graph) as sess: iteration = 1 loss = 0 sess.run(tf.global_variables_initializer()) for e in range(1, epochs+1): batches = get_batches(train_words, batch_size, window_size) start = time.time() for x, y in batches: feed = {inputs: x, labels: np.array(y)[:, None]} train_loss, _ = sess.run([cost, optimizer], feed_dict=feed) loss += train_loss if iteration % 100 == 0: end = time.time() print("Epoch {}/{}".format(e, epochs), "Iteration: {}".format(iteration), "Avg. Training loss: {:.4f}".format(loss/100), "{:.4f} sec/batch".format((end-start)/100)) loss = 0 start = time.time() if iteration % 1000 == 0: ## From Thushan Ganegedara's implementation # note that this is expensive (~20% slowdown if computed every 500 steps) sim = similarity.eval() for i in range(valid_size): valid_word = int_to_vocab[valid_examples[i]] top_k = 8 # number of nearest neighbors nearest = (-sim[i, :]).argsort()[1:top_k+1] log = 'Nearest to %s:' % valid_word for k in range(top_k): close_word = int_to_vocab[nearest[k]] log = '%s %s,' % (log, close_word) print(log) iteration += 1 save_path = saver.save(sess, "checkpoints/text8.ckpt") embed_mat = sess.run(normalized_embedding) Explanation: Training Below is the code to train the network. Every 100 batches it reports the training loss. Every 1000 batches, it'll print out the validation words. End of explanation with train_graph.as_default(): saver = tf.train.Saver() with tf.Session(graph=train_graph) as sess: saver.restore(sess, tf.train.latest_checkpoint('checkpoints')) embed_mat = sess.run(embedding) Explanation: Restore the trained network if you need to: End of explanation %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt from sklearn.manifold import TSNE viz_words = 500 tsne = TSNE() embed_tsne = tsne.fit_transform(embed_mat[:viz_words, :]) fig, ax = plt.subplots(figsize=(14, 14)) for idx in range(viz_words): plt.scatter(*embed_tsne[idx, :], color='steelblue') plt.annotate(int_to_vocab[idx], (embed_tsne[idx, 0], embed_tsne[idx, 1]), alpha=0.7) Explanation: Visualizing the word vectors Below we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local stucture. Check out this post from Christopher Olah to learn more about T-SNE and other ways to visualize high-dimensional data. End of explanation
2,081
Given the following text description, write Python code to implement the functionality described below step by step Description: prelim_month_human - confusion matrix old file name Step1: Setup - Imports Back to Table of Contents Step2: Setup - Initialize Django Back to Table of Contents First, initialize my dev django project, so I can run code in this notebook that references my django models and can talk to the database using my project's settings. You need to have installed your virtualenv with django as a kernel, then select that kernel for this notebook. Step3: Import any sourcenet or context_analysis models or classes. Step4: Setup - Tools Back to Table of Contents Write functions here to do math, so that we can reuse said tools below. Build Confusion Matrix Data Back to Table of Contents A basic confusion matrix ( https Step5: Person detection - confusion matrix Step6: Person lookup Back to Table of Contents For each person detected across the set of articles, look at whether the automated coder correctly looked up the person (so compare person IDs). Person lookup - build value lists Back to Table of Contents First, build lists of ground truth and predicted values per person. Step7: Person lookup - confusion matrix Step8: Person Types For each person type, will build binary lists of yes or no where each person type value will in turn be the value of interest, and positive or negative is whether the coder found the current person to be of that type (positive/1) or any other type (negative/0). Function Step9: Person type - Authors Back to Table of Contents For each person detected across the set of articles, look at whether the automated coder assigned the correct type. Person type - Authors - build value lists Back to Table of Contents First, build lists of ground truth and predicted values per person. Step10: Person type - Authors - confusion matrix Step11: Person type - Subjects Back to Table of Contents For each person detected across the set of articles classified by Ground truth as a subject, look at whether the automated coder assigned the correct person type. Person type - Subjects - build value lists Back to Table of Contents First, build lists of ground truth and predicted values per person. Step12: Person type - Subjects - confusion matrix Step13: Person type - Sources Back to Table of Contents For each person detected across the set of articles classified by Ground truth as a source, look at whether the automated coder assigned the correct person type. Person type - Sources - build value lists Back to Table of Contents First, build lists of ground truth and predicted values per person. Step14: Person type - Sources - confusion matrix
Python Code: # set the label we'll be looking at throughout current_label = "prelim_month_human" Explanation: prelim_month_human - confusion matrix old file name: 2017.10.21 - work log - prelim_month_human - confusion matrix Confusion matrix for data where coder 1 is ground truth, coder 2 is uncorrected human coding. Table of Contents: Setup (Imports; Initialize Django; Tools); Build Confusion Matrix Data (Person detection; Person lookup; Person Types: Authors, Subjects, Sources); TODO. Setup Back to Table of Contents
End of explanation # declare variables reliability_names_label = None label_in_list = [] reliability_names_qs = None ground_truth_coder_index = 1 predicted_coder_index = 2 # processing column_name = "" predicted_value = -1 predicted_list = [] ground_truth_value = -1 ground_truth_list = [] reliability_names_instance = None # set label reliability_names_label = current_label # "prelim_month_human" # lookup Reliability_Names for selected label label_in_list.append( reliability_names_label ) reliability_names_qs = Reliability_Names.objects.filter( label__in = label_in_list ) print( "Found " + str( reliability_names_qs.count() ) + " rows with label in " + str( label_in_list ) ) # loop over records predicted_value = -1 predicted_list = [] ground_truth_value = -1 ground_truth_list = [] ground_truth_positive_count = 0 predicted_positive_count = 0 true_positive_count = 0 false_positive_count = 0 ground_truth_negative_count = 0 predicted_negative_count = 0 true_negative_count = 0 false_negative_count = 0 for reliability_names_instance in reliability_names_qs: # get detected flag from ground truth and predicted columns and add them to list. # ==> ground truth column_name = Reliability_Names.FIELD_NAME_PREFIX_CODER column_name += str( ground_truth_coder_index ) column_name += "_" + Reliability_Names.FIELD_NAME_SUFFIX_DETECTED ground_truth_value = getattr( reliability_names_instance, column_name ) ground_truth_list.append( ground_truth_value ) # ==> predicted column_name = Reliability_Names.FIELD_NAME_PREFIX_CODER column_name += str( predicted_coder_index ) column_name += "_" + Reliability_Names.FIELD_NAME_SUFFIX_DETECTED predicted_value = getattr( reliability_names_instance, column_name ) predicted_list.append( predicted_value ) #-- END loop over Reliability_Names instances. --# print( "==> population values count: " + str( len( ground_truth_list ) ) ) print( "==> predicted values count: " + str( len( predicted_list ) ) ) print( "==> percentage agreement = " + str( StatsHelper.percentage_agreement( ground_truth_list, predicted_list ) ) ) print( "==> population values: " + str( len( ground_truth_list ) ) ) list_name = "ACTUAL_VALUE_LIST" string_list = map( str, ground_truth_list ) list_values = ", ".join( string_list ) print( list_name + " = [ " + list_values + " ]" ) print( "==> predicted values count: " + str( len( predicted_list ) ) ) list_name = "PREDICTED_VALUE_LIST" string_list = map( str, predicted_list ) list_values = ", ".join( string_list ) print( list_name + " = [ " + list_values + " ]" ) Explanation: Setup - Tools Back to Table of Contents Write functions here to do math, so that we can reuse said tools below. Build Confusion Matrix Data Back to Table of Contents A basic confusion matrix ( https://en.wikipedia.org/wiki/Confusion_matrix ) contains counts of true positives, true negatives, false positives, and false negatives for a given binary or boolean (yes/no) classification decision you are asking someone or something to make. To create a confusion matrix, you need two associated vectors containing classification decisions (0s and 1s), one that contains ground truth, and one that contains values predicted by whatever coder you are testing. For each associated pair of values: Start with the predicted value: positive (1) or negative (0). Look at the corresponding ground truth value. If they match, it is "true". If not, it is "false". So, predicted 1 and ground_truth 1 is a "true positive". 
Add one to the counter for the class of prediction: "true positive", "true negative", "false positive", "false negative". Once you have your basic confusion matrix, the counts of true positives, true negatives, false positives, and false negatives can then be used to calculate a set of different scores and values one can use to assess the quality of predictive models. These scores include "precision and recall", "accuracy", an "F1 score" (a harmonic mean), and a "diagnostic odds ratio", among many others. Person detection Back to Table of Contents For each person detected across the set of articles, look at whether the automated coder correctly detected the person, independent of eventual lookup or person type. Person detection - build value lists Back to Table of Contents First, build lists of ground truth and predicted values per person. End of explanation confusion_matrix = pandas_ml.ConfusionMatrix( ground_truth_list, predicted_list ) print("Confusion matrix:\n%s" % confusion_matrix) confusion_matrix.print_stats() stats_dict = confusion_matrix.stats() print( str( stats_dict ) ) print( str( stats_dict[ 'TPR' ] ) ) # get counts in variables true_positive_count = confusion_matrix.TP false_positive_count = confusion_matrix.FP true_negative_count = confusion_matrix.TN false_negative_count = confusion_matrix.FN # and derive population and predicted counts ground_truth_positive_count = true_positive_count + false_negative_count predicted_positive_count = true_positive_count + false_positive_count ground_truth_negative_count = true_negative_count + false_positive_count predicted_negative_count = true_negative_count + false_negative_count print( "==> Predicted positives: " + str( predicted_positive_count ) + " ( " + str( ( true_positive_count + false_positive_count ) ) + " )" ) print( "==> Ground truth positives: " + str( ground_truth_positive_count ) + " ( " + str( ( true_positive_count + false_negative_count ) ) + " )" ) print( "==> True positives: " + str( true_positive_count ) ) print( "==> False positives: " + str( false_positive_count ) ) print( "==> Predicted negatives: " + str( predicted_negative_count ) + " ( " + str( ( true_negative_count + false_negative_count ) ) + " )" ) print( "==> Ground truth negatives: " + str( ground_truth_negative_count ) + " ( " + str( ( true_negative_count + false_positive_count ) ) + " )" ) print( "==> True negatives: " + str( true_negative_count ) ) print( "==> False negatives: " + str( false_negative_count ) ) print( "==> Precision (true positive/predicted positive): " + str( ( true_positive_count / predicted_positive_count ) ) ) print( "==> Recall (true positive/ground truth positive): " + str( ( true_positive_count / ground_truth_positive_count ) ) ) confusion_helper = ConfusionMatrixHelper.populate_confusion_matrix( ground_truth_list, predicted_list ) print( str( confusion_helper ) ) Explanation: Person detection - confusion matrix End of explanation # declare variables reliability_names_label = None label_in_list = [] reliability_names_qs = None ground_truth_coder_index = 1 predicted_coder_index = 2 # processing column_name = "" predicted_value = -1 predicted_list = [] ground_truth_value = -1 ground_truth_list = [] reliability_names_instance = None # set label reliability_names_label = current_label # "prelim_month_human" # lookup Reliability_Names for selected label label_in_list.append( reliability_names_label ) reliability_names_qs = Reliability_Names.objects.filter( label__in = label_in_list ) print( "Found " + str( reliability_names_qs.count() ) 
+ " rows with label in " + str( label_in_list ) ) # loop over records predicted_value = -1 predicted_list = [] ground_truth_value = -1 ground_truth_list = [] ground_truth_positive_count = 0 predicted_positive_count = 0 true_positive_count = 0 false_positive_count = 0 ground_truth_negative_count = 0 predicted_negative_count = 0 true_negative_count = 0 false_negative_count = 0 for reliability_names_instance in reliability_names_qs: # get person_id from ground truth and predicted columns and add them to list. # ==> ground truth column_name = Reliability_Names.FIELD_NAME_PREFIX_CODER column_name += str( ground_truth_coder_index ) column_name += "_" + Reliability_Names.FIELD_NAME_SUFFIX_PERSON_ID ground_truth_value = getattr( reliability_names_instance, column_name ) ground_truth_list.append( ground_truth_value ) # ==> predicted column_name = Reliability_Names.FIELD_NAME_PREFIX_CODER column_name += str( predicted_coder_index ) column_name += "_" + Reliability_Names.FIELD_NAME_SUFFIX_PERSON_ID predicted_value = getattr( reliability_names_instance, column_name ) predicted_list.append( predicted_value ) #-- END loop over Reliability_Names instances. --# print( "==> population values count: " + str( len( ground_truth_list ) ) ) print( "==> predicted values count: " + str( len( predicted_list ) ) ) print( "==> percentage agreement = " + str( StatsHelper.percentage_agreement( ground_truth_list, predicted_list ) ) ) print( "==> population values: " + str( len( ground_truth_list ) ) ) list_name = "ACTUAL_VALUE_LIST" string_list = map( str, ground_truth_list ) list_values = ", ".join( string_list ) print( list_name + " = [ " + list_values + " ]" ) print( "==> predicted values count: " + str( len( predicted_list ) ) ) list_name = "PREDICTED_VALUE_LIST" string_list = map( str, predicted_list ) list_values = ", ".join( string_list ) print( list_name + " = [ " + list_values + " ]" ) Explanation: Person lookup Back to Table of Contents For each person detected across the set of articles, look at whether the automated coder correctly looked up the person (so compare person IDs). Person lookup - build value lists Back to Table of Contents First, build lists of ground truth and predicted values per person. End of explanation confusion_helper = ConfusionMatrixHelper.populate_confusion_matrix( ground_truth_list, predicted_list ) print( str( confusion_helper ) ) Explanation: Person lookup - confusion matrix End of explanation def build_confusion_lists( column_name_suffix_IN, desired_value_IN, label_list_IN = [ "prelim_month_human", ], ground_truth_coder_index_IN = 1, predicted_coder_index_IN = 2, debug_flag_IN = False ): ''' Accepts suffix of column name of interest and desired value. Also accepts optional labels list, indexes of ground_truth and predicted coder users, and a debug flag. Uses these values to loop over records whose label matches the on in the list passed in. For each, in the specified column, checks to see if the ground_truth and predicted values match the desired value. If so, positive, so 1 is stored for the row. If no, negative, so 0 is stored for the row. Returns dictionary with value lists inside, ground truth values list mapped to key "ground_truth" and predicted values list mapped to key "predicted". 
''' # return reference lists_OUT = {} # declare variables reliability_names_label = None label_in_list = [] reliability_names_qs = None ground_truth_coder_index = -1 predicted_coder_index = -1 # processing debug_flag = False desired_column_suffix = None desired_value = None ground_truth_column_name = None ground_truth_column_value = None ground_truth_value = -1 ground_truth_list = [] predicted_column_name = None predicted_column_value = None predicted_value = -1 predicted_list = [] reliability_names_instance = None # got required values? # column name suffix? if ( column_name_suffix_IN is not None ): # desired value? if ( desired_value_IN is not None ): # ==> initialize desired_column_suffix = column_name_suffix_IN desired_value = desired_value_IN label_in_list = label_list_IN ground_truth_coder_index = ground_truth_coder_index_IN predicted_coder_index = predicted_coder_index_IN debug_flag = debug_flag_IN # create ground truth column name ground_truth_column_name = Reliability_Names.FIELD_NAME_PREFIX_CODER ground_truth_column_name += str( ground_truth_coder_index ) ground_truth_column_name += "_" + desired_column_suffix # create predicted column name. predicted_column_name = Reliability_Names.FIELD_NAME_PREFIX_CODER predicted_column_name += str( predicted_coder_index ) predicted_column_name += "_" + desired_column_suffix # ==> processing # lookup Reliability_Names for selected label(s) reliability_names_qs = Reliability_Names.objects.filter( label__in = label_in_list ) print( "Found " + str( reliability_names_qs.count() ) + " rows with label in " + str( label_in_list ) ) # reset all lists and values. ground_truth_column_value = "" ground_truth_value = -1 ground_truth_list = [] predicted_column_value = "" predicted_value = -1 predicted_list = [] # loop over records to build ground_truth and predicted value lists # where 1 = value matching desired value in multi-value categorical # variable and 0 = any value other than the desired value. for reliability_names_instance in reliability_names_qs: # get detected flag from ground truth and predicted columns and add them to list. # ==> ground truth # get column value. ground_truth_column_value = getattr( reliability_names_instance, ground_truth_column_name ) # does it match desired value? if ( ground_truth_column_value == desired_value ): # it does - True (or positive or 1!)! ground_truth_value = 1 else: # it does not - False (or negative or 0!)! ground_truth_value = 0 #-- END check to see if current value matches desired value. --# # add value to list. ground_truth_list.append( ground_truth_value ) # ==> predicted # get column value. predicted_column_value = getattr( reliability_names_instance, predicted_column_name ) # does it match desired value? if ( predicted_column_value == desired_value ): # it does - True (or positive or 1!)! predicted_value = 1 else: # it does not - False (or negative or 0!)! predicted_value = 0 #-- END check to see if current value matches desired value. --# # add to predicted list. predicted_list.append( predicted_value ) if ( debug_flag == True ): print( "----> gt: " + str( ground_truth_column_value ) + " ( " + str( ground_truth_value ) + " ) - p: " + str( predicted_column_value ) + " ( " + str( predicted_value ) + " )" ) #-- END DEBUG --# #-- END loop over Reliability_Names instances. --# else: print( "ERROR - you must specify a desired value." ) #-- END check to see if desired value passed in. --# else: print( "ERROR - you must provide the suffix of the column you want to examine." 
) #-- END check to see if column name suffix passed in. --# # package up and return lists. lists_OUT[ "ground_truth" ] = ground_truth_list lists_OUT[ "predicted" ] = predicted_list return lists_OUT #-- END function build_confusion_lists() --# print( "Function build_confusion_lists() defined at " + str( datetime.datetime.now() ) ) Explanation: Person Types For each person type, will build binary lists of yes or no where each person type value will in turn be the value of interest, and positive or negative is whether the coder found the current person to be of that type (positive/1) or any other type (negative/0). Function: build confusion lists for a given categorical value End of explanation confusion_lists = build_confusion_lists( Reliability_Names.FIELD_NAME_SUFFIX_PERSON_TYPE, Reliability_Names.PERSON_TYPE_AUTHOR ) ground_truth_list = confusion_lists.get( "ground_truth", None ) predicted_list = confusion_lists.get( "predicted", None ) print( "==> population values count: " + str( len( ground_truth_list ) ) ) print( "==> predicted values count: " + str( len( predicted_list ) ) ) print( "==> percentage agreement = " + str( StatsHelper.percentage_agreement( ground_truth_list, predicted_list ) ) ) print( "==> population values: " + str( len( ground_truth_list ) ) ) list_name = "ACTUAL_VALUE_LIST" string_list = map( str, ground_truth_list ) list_values = ", ".join( string_list ) print( list_name + " = [ " + list_values + " ]" ) print( "==> predicted values count: " + str( len( predicted_list ) ) ) list_name = "PREDICTED_VALUE_LIST" string_list = map( str, predicted_list ) list_values = ", ".join( string_list ) print( list_name + " = [ " + list_values + " ]" ) Explanation: Person type - Authors Back to Table of Contents For each person detected across the set of articles, look at whether the automated coder assigned the correct type. Person type - Authors - build value lists Back to Table of Contents First, build lists of ground truth and predicted values per person. End of explanation confusion_helper = ConfusionMatrixHelper.populate_confusion_matrix( ground_truth_list, predicted_list ) print( str( confusion_helper ) ) Explanation: Person type - Authors - confusion matrix End of explanation # subjects = "mentioned" confusion_lists = build_confusion_lists( Reliability_Names.FIELD_NAME_SUFFIX_PERSON_TYPE, Reliability_Names.SUBJECT_TYPE_MENTIONED ) ground_truth_list = confusion_lists.get( "ground_truth", None ) predicted_list = confusion_lists.get( "predicted", None ) print( "==> population values count: " + str( len( ground_truth_list ) ) ) print( "==> predicted values count: " + str( len( predicted_list ) ) ) print( "==> percentage agreement = " + str( StatsHelper.percentage_agreement( ground_truth_list, predicted_list ) ) ) print( "==> population values: " + str( len( ground_truth_list ) ) ) list_name = "ACTUAL_VALUE_LIST" string_list = map( str, ground_truth_list ) list_values = ", ".join( string_list ) print( list_name + " = [ " + list_values + " ]" ) print( "==> predicted values count: " + str( len( predicted_list ) ) ) list_name = "PREDICTED_VALUE_LIST" string_list = map( str, predicted_list ) list_values = ", ".join( string_list ) print( list_name + " = [ " + list_values + " ]" ) Explanation: Person type - Subjects Back to Table of Contents For each person detected across the set of articles classified by Ground truth as a subject, look at whether the automated coder assigned the correct person type. 
Person type - Subjects - build value lists Back to Table of Contents First, build lists of ground truth and predicted values per person. End of explanation confusion_helper = ConfusionMatrixHelper.populate_confusion_matrix( ground_truth_list, predicted_list ) print( str( confusion_helper ) ) Explanation: Person type - Subjects - confusion matrix End of explanation # subjects = "mentioned" confusion_lists = build_confusion_lists( Reliability_Names.FIELD_NAME_SUFFIX_PERSON_TYPE, Reliability_Names.SUBJECT_TYPE_QUOTED ) ground_truth_list = confusion_lists.get( "ground_truth", None ) predicted_list = confusion_lists.get( "predicted", None ) print( "==> population values count: " + str( len( ground_truth_list ) ) ) print( "==> predicted values count: " + str( len( predicted_list ) ) ) print( "==> percentage agreement = " + str( StatsHelper.percentage_agreement( ground_truth_list, predicted_list ) ) ) print( "==> population values: " + str( len( ground_truth_list ) ) ) list_name = "ACTUAL_VALUE_LIST" string_list = map( str, ground_truth_list ) list_values = ", ".join( string_list ) print( list_name + " = [ " + list_values + " ]" ) print( "==> predicted values count: " + str( len( predicted_list ) ) ) list_name = "PREDICTED_VALUE_LIST" string_list = map( str, predicted_list ) list_values = ", ".join( string_list ) print( list_name + " = [ " + list_values + " ]" ) Explanation: Person type - Sources Back to Table of Contents For each person detected across the set of articles classified by Ground truth as a source, look at whether the automated coder assigned the correct person type. Person type - Sources - build value lists Back to Table of Contents First, build lists of ground truth and predicted values per person. End of explanation confusion_helper = ConfusionMatrixHelper.populate_confusion_matrix( ground_truth_list, predicted_list, calc_type_IN = ConfusionMatrixHelper.CALC_TYPE_PANDAS_ML ) print( str( confusion_helper ) ) Explanation: Person type - Sources - confusion matrix End of explanation
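As a cross-check on the helper classes used above, the four cells of a binary confusion matrix can also be tallied directly from the two value lists, following the counting procedure described at the start of this section. A minimal sketch, assuming ground_truth_list and predicted_list hold matched 0/1 values as in the person-type cells above:

from collections import Counter

def basic_confusion(ground_truth_list, predicted_list):
    # count (ground truth, predicted) pairs: (1,1)=TP, (0,1)=FP, (0,0)=TN, (1,0)=FN
    pairs = Counter(zip(ground_truth_list, predicted_list))
    tp, fp = pairs[(1, 1)], pairs[(0, 1)]
    tn, fn = pairs[(0, 0)], pairs[(1, 0)]
    precision = float(tp) / (tp + fp) if (tp + fp) > 0 else 0.0
    recall = float(tp) / (tp + fn) if (tp + fn) > 0 else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) > 0 else 0.0
    return {"TP": tp, "FP": fp, "TN": tn, "FN": fn, "precision": precision, "recall": recall, "F1": f1}

print(basic_confusion(ground_truth_list, predicted_list))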
2,082
Given the following text description, write Python code to implement the functionality described below step by step Description: Exercises 3 1 Exercise Write a Python expression to retrieve the value of the element with key 'Hola' from a dictionary d. * Check that if d is {} the execution raises an error. * And what if d is {'Hola' Step1: 2 Exercise Given two dictionaries d1 and d2, write a Python function called fusion that merges the two dictionaries passed as parameters. You can use the update function. Test the function with the dictionaries d1 = {1 Step2: 3 Exercise Given the list of the most populated cities in Italy it
Python Code: d1 = { } d2 = {'Hola': ['Hi','Hello'], 'Adios': ['Bye'] } # Sol: d2['Hola'] Explanation: Exercises 3 1 Exercise Write a Python expression to retrieve the value of the element with key 'Hola' from a dictionary d. * Check that if d is {} the execution raises an error. * And what if d is {'Hola': ['Hi','Hello'], 'Adios': ['Bye']} ? End of explanation # Sol: def fusion(): d1 = {1: 'A', 2:'B', 3:'C'} d2 = {4: 'Aa', 5:'Ba', 6:'Ca'} d1.update(d2) return d1 fusion() Explanation: 2 Exercise Given two dictionaries d1 and d2, write a Python function called fusion that merges the two dictionaries passed as parameters. You can use the update function. Test the function with the dictionaries d1 = {1: 'A', 2:'B', 3:'C'}, d2 = {4: 'Aa', 5:'Ba', 6:'Ca'} Use the len function to retrieve the number of elements in the new dictionary Test the function with the dictionaries d1 = {1: 'A', 2:'B', 3:'C'}, d2 = {2: 'Aa', 3:'Ba'} Use the len function to retrieve the number of elements in the new dictionary End of explanation # Sol: # Define the list with the cities given in the exercise statement it = [ 'Roma', 'Milán', 'Nápoles', 'Turín', 'Palermo' , 'Génova', 'Bolonia', 'Florencia', 'Bari', 'Catania', 'Verona'] # Define a variable to hold a list built from a range [0, length of the list) # If no start is given, the range begins at 0 and ends at 10 # If the start is set to 1, we have to add +1 to the length of the list as an offset pos_ciudad = range(1, len(it)+1) resultado = list(zip(pos_ciudad, it)) resultado dic = dict(resultado) dic Explanation: 3 Exercise Given the list of the most populated cities in Italy it: it = [ 'Roma', 'Milán', 'Nápoles', 'Turín', 'Palermo' , 'Génova', 'Bolonia', 'Florencia', 'Bari', 'Catania'] Create a dictionary where the key is the position each city occupies in the list. To do so, follow these steps: Create a sequence of integers with the range function. The start of the sequence is zero and the end of the sequence is the length of the list of Italian cities. Create a list m of tuples of the form (pos, city). Use the zip function. Use the dict function to build the dictionary from the list m. Write a Python expression to retrieve the fifth most populated Italian city. End of explanation
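As a related illustration for the first exercise, dict.get() offers a way to look up a key without raising KeyError when the dictionary is empty; a small sketch using the same d1 and d2 as above:

d1 = {}
d2 = {'Hola': ['Hi', 'Hello'], 'Adios': ['Bye']}

# d1['Hola'] raises KeyError because the key is missing;
# .get() returns None (or a chosen default) instead of failing
print(d1.get('Hola'))
print(d2.get('Hola', []))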
2,083
Given the following text description, write Python code to implement the functionality described below step by step Description: A First Look at the SDSS Photometric "Galaxy" Catalog The Sloan Digital Sky Survey imaged over 10,000 sq degrees of sky (about 25% of the total), automatically detecting, measuring and cataloging millions of "objects". While the primary data products of the SDSS was (and still are) its spectroscopic surveys, the photometric survey provides an important testing ground for dealing with pure imaging surveys like those being carried out by DES and that is planned with LSST. Let's download part of the SDSS photometric object catalog and explore it. SDSS data release 12 (DR12) is described at the SDSS3 website and in the survey paper by Alam et al 2015. We will use the SDSS DR12 SQL query interface. For help designing queries, the sample queries page is invaluable, and you will probably want to check out the links to the "schema browser" at some point as well. Notice the "check syntax only" button on the SQL query interface Step1: Notice Step2: Visualizing Data in N-dimensions This is, in general, difficult. Looking at all possible 1 and 2-dimensional histograms/scatter plots helps a lot. Color coding can bring in a 3rd dimension (and even a 4th). Interactive plots and movies are also well worth thinking about. <br> Here we'll follow a multi-dimensional visualization example due to Josh Bloom at UC Berkeley Step3: Size-magnitude Let's zoom in and look at the objects' (log) sizes and magnitudes.
Python Code: %load_ext autoreload %autoreload 2 import numpy as np import SDSS import pandas as pd import matplotlib %matplotlib inline objects = "SELECT top 10000 \ ra, \ dec, \ dered_u as u, \ dered_g as g, \ dered_r as r, \ dered_i as i, \ petroR50_i AS size \ FROM PhotoObjAll \ WHERE \ ((type = '3' OR type = '6') AND \ ra > 185.0 AND ra < 185.2 AND \ dec > 15.0 AND dec < 15.2)" print objects # Download data. This can take a while... sdssdata = SDSS.select(objects) sdssdata Explanation: A First Look at the SDSS Photometric "Galaxy" Catalog The Sloan Digital Sky Survey imaged over 10,000 sq degrees of sky (about 25% of the total), automatically detecting, measuring and cataloging millions of "objects". While the primary data products of the SDSS was (and still are) its spectroscopic surveys, the photometric survey provides an important testing ground for dealing with pure imaging surveys like those being carried out by DES and that is planned with LSST. Let's download part of the SDSS photometric object catalog and explore it. SDSS data release 12 (DR12) is described at the SDSS3 website and in the survey paper by Alam et al 2015. We will use the SDSS DR12 SQL query interface. For help designing queries, the sample queries page is invaluable, and you will probably want to check out the links to the "schema browser" at some point as well. Notice the "check syntax only" button on the SQL query interface: this is very useful for debugging SQL queries. Small test queries can be executed directly in the browser. Larger ones (involving more than a few tens of thousands of objects, or that involve a lot of processing) should be submitted via the CasJobs system. Try the browser first, and move to CasJobs when you need to. End of explanation !mkdir -p downloads sdssdata.to_csv("downloads/SDSSobjects.csv") Explanation: Notice: * Some values are large and negative - indicating a problem with the automated measurement routine. We will need to deal with these. * Sizes are "effective radii" in arcseconds. The typical resolution ("point spread function" effective radius) in an SDSS image is around 0.7". Let's save this download for further use. End of explanation # We'll use astronomical g-r color as the colorizer, and then plot # position, magnitude, size and color against each other. data = pd.read_csv("downloads/SDSSobjects.csv",usecols=["ra","dec","u","g",\ "r","i","size"]) # Filter out objects with bad magnitude or size measurements: data = data[(data["u"] > 0) & (data["g"] > 0) & (data["r"] > 0) & (data["i"] > 0) & (data["size"] > 0)] # Log size, and g-r color, will be more useful: data['log_size'] = np.log10(data['size']) data['g-r_color'] = data['g'] - data['r'] # Drop the things we're not so interested in: del data['u'], data['g'], data['r'], data['size'] data.head() # Get ready to plot: pd.set_option('display.max_columns', None) # !pip install --upgrade seaborn import seaborn as sns sns.set() def plot_everything(data,colorizer,vmin=0.0,vmax=10.0): # Truncate the color map to retain contrast between faint objects. norm = matplotlib.colors.Normalize(vmin=vmin, vmax=vmax) cmap = matplotlib.cm.jet m = matplotlib.cm.ScalarMappable(norm=norm, cmap=cmap) plot = pd.scatter_matrix(data, alpha=0.2,figsize=[15,15],color=m.to_rgba(data[colorizer])) return plot_everything(data,'g-r_color',vmin=-1.0, vmax=3.0) Explanation: Visualizing Data in N-dimensions This is, in general, difficult. Looking at all possible 1 and 2-dimensional histograms/scatter plots helps a lot. 
Color coding can bring in a 3rd dimension (and even a 4th). Interactive plots and movies are also well worth thinking about. <br> Here we'll follow a multi-dimensional visualization example due to Josh Bloom at UC Berkeley: End of explanation zoom = data.copy() del zoom['ra'],zoom['dec'],zoom['g-r_color'] plot_everything(zoom,'i',vmin=15.0, vmax=21.5) Explanation: Size-magnitude Let's zoom in and look at the objects' (log) sizes and magnitudes. End of explanation
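A similar matrix view can be produced with seaborn's pairplot, which handles the color dimension through its hue argument. A minimal sketch, assuming the data DataFrame built above, with the continuous g-r color binned into a few discrete labels:

import numpy as np
import seaborn as sns

plot_data = data.copy()
# bin the continuous g-r color into three coarse labels so it can drive the hue
plot_data['color_bin'] = np.digitize(plot_data['g-r_color'], bins=[0.0, 0.7, 1.4])
sns.pairplot(plot_data, vars=['i', 'log_size', 'g-r_color'], hue='color_bin',
             plot_kws={'alpha': 0.3, 's': 10})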
2,084
Given the following text description, write Python code to implement the functionality described below step by step Description: What is Apache Spark? distributed framework in-memory data structures data processing it improves (most of the times) Hadoop workloads Spark enables data scientists to tackle problems with larger data sizes than they could before with tools like R or Pandas First Steps with Apache Spark Interactive Programming First of all check that PySpark is running properly. You can check if PySpark is correctly loaded Step1: The first thing to note is that with Spark all computation is parallelized by means of distributed data structures that are spread through the cluster. These collections are called Resilient Distributed Datasets (RDD). We will talk more about RDD, as they are the main piece in Spark. As we have successfully loaded the Spark Context, we are ready to do some interactive analysis. We can read a simple file Step2: This is a very simple first example, where we create an RDD (variable lines) and then we apply some operations (count and first) in a parallel manner. It has to be noted, that as we are running all our examples in a single computer the parallelization is not applied. In the next section we will cover the core Spark concepts that allow Spark users to do parallel computation. Core Spark Concepts We will talk about Spark applications that are in charge of loading data and applying some distributed computation over it. Every application has a driver program that launches parallel operations to the cluster. In the case of interactive programming, the driver program is the shell (or Notebook) itself. The "access point" to Spark from the driver program is the Spark Context object. Once we have an Spark Context we can use it to build RDDs. In the previous examples we used sc.textFile() to represent the lines of the textFile. Then we run different operations over the RDD lines. To run these operations over RDDs, driver programs manage different nodes called executors. For example, for the count operation, it is possible to run count in different ranges of the file. Spark's API allows passing functions to its operators to run them on the cluster. For example, we could extend our example by filtering the lines in the file that contain a word, such as individuum. Step3: RDD Basics An RDD can be defined as a distributed collection of elements. All work done with Spark can be summarized as creating, transforming and applying operations over RDDs to compute a result. Under the hood, Spark automatically distributes the data contained in RDDs across your cluster and parallelizes the operations you perform on them. RDD properties Step4: It is important to note that once we have an RDD, we can run two kind of operations Step5: Transformations and actions are very different because of the way Spark computes RDDs. Transformations are defined in a lazy manner this is they are only computed once they are used in an action. Step6: The drawback is that Spark recomputes again the RDD at each action application. This means that the computing effort over an already computed RDD may be lost. To mitigate this drawback, the user can take the decision of persisting the RDD after computing it the first time, Spark will store the RDD contents in memory (partitioned across the machines in your cluster), and reuse them in future actions. Persisting RDDs on disk instead of memory is also possible. 
Let's see an example on the impact of persisting Step7: RDD Operations We have already seen that RDDs have two basic operations Step8: filter applies the lambda function to each line in lines RDD, only lines that accomplish the condition that the length is greater than zero are in lines_nonempty variable (this RDD is not computed yet!) flatMap applies the lambda function to each element of the RDD and then the result is flattened (i.e. a list of lists would be converted to a simple list) Actions are operations that return an object to the driver program or write to external storage, they kick a computation. Examples Step9: Actions are the operations that return a final value to the driver program or write data to an external storage system. Actions force the evaluation of the transformations required for the RDD they were called on, since they need to actually produce output. Returning to the previous example, until we call count over words and words persisted, the RDD are not computed. See that we persisted words_persisted, and until its second computation we cannot see the impact of persisting that RDD in memory. If we want to see a part of the RDD, we can use take, and to have the full RDD we can use collect. Step10: Question Step11: Working with common Spark transformations The two most common transformations you will likely be using are map and filter. The map() transformation takes in a function and applies it to each element in the RDD with the result of the function being the new value of each element in the resulting RDD. The filter() transformation takes in a function and returns an RDD that only has elements that pass the filter() function. Sometimes map() returns nested lists, to flatten these nested lists we can use flatMap(). So, flatMap() is called individually for each element in our input RDD. Instead of returning a single element, we return an iterator with our return values. Rather than producing an RDD of iterators, we get back an RDD that consists of the elements from all of the iterators. Set operations distinct() transformation to produce a new RDD with only distinct items. Note that distinct() is expensive, however, as it requires shuffling all the data over the network to ensure that we receive only one copy of each element RDD.union(other) back an RDD consisting of the data from both sources. Unlike the mathematical union(), if there are duplicates in the input RDDs, the result of Spark’s union() will contain duplicates (which we can fix if desired with distinct()). RDD.intersection(other) returns only elements in both RDDs. intersection() also removes all duplicates (including duplicates from a single RDD) while running. While intersection() and union() are two similar concepts, the performance of intersection() is much worse since it requires a shuffle over the network to identify common elements. RDD.subtract(other) function takes in another RDD and returns an RDD that has only values present in the first RDD and not the second RDD. Like intersection(), it performs a shuffle. RDD.cartesian(other) transformation returns all possible pairs of (a,b) where a is in the source RDD and b is in the other RDD. The Cartesian product can be useful when we wish to consider the similarity between all possible pairs, such as computing every user’s expected interest in each offer. We can also take the Cartesian product of an RDD with itself, which can be useful for tasks like user similarity. Be warned, however, that the Cartesian product is very expensive for large RDDs. 
Actions reduce() Step12: Exercise 2 Step13: Exercise 4 Step14: Exercise 5 Step15: Exercise 6 Step16: Exercise 7 Step17: Exercise 8 Step18: Exercise 9 Step19: Exercise 10 Step20: Spark Key/Value Pairs Spark provides special operations on RDDs containing key/value pairs. These RDDs are called pair RDDs, but are simple RDDs with an special structure. In Python, for the functions on keyed data to work we need to return an RDD composed of tuples. Exercise 1 Step21: Exercise 2 Step22: Exercise 3 Step23: Transformations on Pair RDDs Since pair RDDs contain tuples, we need to pass functions that operate on tuples rather than on individual elements. reduceByKey(func) Step24: Exercise 2 Step25: Exercise 3 Step26: Exercise 4 Step27: Exercise 5 Step28: Exercise 6 Step29: Exercise 7 Step30: Joins Some of the most useful operations we get with keyed data comes from using it together with other keyed data. Joining data together is probably one of the most common operations on a pair RDD, and we have a full range of options including right and left outer joins, cross joins, and inner joins. Inner Join Only keys that are present in both pair RDDs are output. When there are multiple values for the same key in one of the inputs, the resulting pair RDD will have an entry for every possible pair of values with that key from the two input RDDs Exercise Step31: Left and Right outer Joins Sometimes we don’t need the key to be present in both RDDs to want it in our result. For example, imagine that our list of countries is not complete, and we don't want to miss data if it a country is not present in both RDDs. leftOuterJoin(other) and rightOuterJoin(other) both join pair RDDs together by key, where one of the pair RDDs can be missing the key. With leftOuterJoin() the resulting pair RDD has entries for each key in the source RDD. The value associated with each key in the result is a tuple of the value from the source RDD and an Option for the value from the other pair RDD. In Python, if a value isn’t present None is used; and if the value is present the regular value, without any wrapper, is used. As with join(), we can have multiple entries for each key; when this occurs, we get the Cartesian product between the two lists of values. rightOuterJoin() is almost identical to leftOuterJoin() except the key must be present in the other RDD and the tuple has an option for the source rather than the other RDD. Exercise Step32: Exercise Step33: Inspect the dataset with life expectancy. Step34: We have some missing data, that we have to complete, but we have quite a lot of data, let's follow. Inspect the results of GDP and life expectancy and join them. Is there some data missing? Step35: Sort Data sortByKey() Step36: Actions over Pair RDDs countByKey() Step37: Data Partitioning (from Step38: The application periodically combines this table with a smaller file representing events that happened in the past five minutes—say, a table of (UserID, LinkInfo) pairs for users who have clicked a link on a website in those five minutes. Step39: For example, we may wish to count how many users visited a link that was not to one of their subscribed topics. We can perform this combination with Spark’s join() operation, which can be used to group the User Info and LinkInfo pairs for each UserID by key. Step40: Imagine that we want to count the number of visits to non-subscribed visits using a function. Step41: This code will run fine as is, but it will be inefficient. 
This is because the join() operation, called each time process_new_logs() is invoked, does not know anything about how the keys are partitioned in the datasets. By default, this operation will hash all the keys of both datasets, sending elements with the same key hash across the network to the same machine, and then join together the elements with the same key on that machine (see figure below). Because we expect the rdd_userinfo table to be much larger than the small log of events seen every five minutes, this wastes a lot of work
Python Code: import pyspark sc = pyspark.SparkContext(appName="my_spark_app") Explanation: What is Apache Spark? distributed framework in-memory data structures data processing it improves (most of the times) Hadoop workloads Spark enables data scientists to tackle problems with larger data sizes than they could before with tools like R or Pandas First Steps with Apache Spark Interactive Programming First of all check that PySpark is running properly. You can check if PySpark is correctly loaded: In case it is not, you can follow these posts: Windows (IPython): http://jmdvinodjmd.blogspot.com.es/2015/08/installing-ipython-notebook-with-apache.html Windows (Jupyter): http://www.ithinkcloud.com/tutorials/tutorial-on-how-to-install-apache-spark-on-windows/ End of explanation lines = sc.textFile("../data/people.csv") lines.count() lines.first() Explanation: The first thing to note is that with Spark all computation is parallelized by means of distributed data structures that are spread through the cluster. These collections are called Resilient Distributed Datasets (RDD). We will talk more about RDD, as they are the main piece in Spark. As we have successfully loaded the Spark Context, we are ready to do some interactive analysis. We can read a simple file: End of explanation lines = sc.textFile("../data/people.csv") filtered_lines = lines.filter(lambda line: "individuum" in line) filtered_lines.first() Explanation: This is a very simple first example, where we create an RDD (variable lines) and then we apply some operations (count and first) in a parallel manner. It has to be noted, that as we are running all our examples in a single computer the parallelization is not applied. In the next section we will cover the core Spark concepts that allow Spark users to do parallel computation. Core Spark Concepts We will talk about Spark applications that are in charge of loading data and applying some distributed computation over it. Every application has a driver program that launches parallel operations to the cluster. In the case of interactive programming, the driver program is the shell (or Notebook) itself. The "access point" to Spark from the driver program is the Spark Context object. Once we have an Spark Context we can use it to build RDDs. In the previous examples we used sc.textFile() to represent the lines of the textFile. Then we run different operations over the RDD lines. To run these operations over RDDs, driver programs manage different nodes called executors. For example, for the count operation, it is possible to run count in different ranges of the file. Spark's API allows passing functions to its operators to run them on the cluster. For example, we could extend our example by filtering the lines in the file that contain a word, such as individuum. End of explanation # loading an external dataset lines = sc.textFile("../data/people.csv") print(type(lines)) # applying a transformation to an existing RDD filtered_lines = lines.filter(lambda line: "individuum" in line) print(type(filtered_lines)) Explanation: RDD Basics An RDD can be defined as a distributed collection of elements. All work done with Spark can be summarized as creating, transforming and applying operations over RDDs to compute a result. Under the hood, Spark automatically distributes the data contained in RDDs across your cluster and parallelizes the operations you perform on them. 
RDD properties: * it is an immutable distributed collection of objects * it is split into multiple partitions * it is computed on different nodes of the cluster * it can contain any type of Python object (user defined ones included) An RDD can be created in two ways: 1. loading an external dataset 2. distributing a collection of objects in the driver program We have already seen the two ways of creating an RDD. End of explanation # if we print lines we get only this print(lines) # when we perform an action, then we get the result action_result = lines.first() print(type(action_result)) action_result Explanation: It is important to note that once we have an RDD, we can run two kind of operations: * transformations: construct a new RDD from a previous one. For example, by filtering lines RDD we create a new RDD that holds the lines that contain "individuum" string. Note that the returning result is an RDD. * actions: compute a result based on an RDD, and returns the result to the driver program or stores it to an external storage system (e.g. HDFS). Note that the returning result is not an RDD but another variable type. Notice how when we go to print it, it prints out that it is an RDD and that the type is a PipelinedRDD not a list of values as we might expect. That's because we haven't performed an action yet, we've only performed a transformation. End of explanation # filtered_lines is not computed until the next action is applied over it # it make sense when working with big data sets, as it is not necessary to # transform the whole RDD to get an action over a subset # Spark doesn't even reads the complete file! filtered_lines.first() Explanation: Transformations and actions are very different because of the way Spark computes RDDs. Transformations are defined in a lazy manner this is they are only computed once they are used in an action. End of explanation import time lines = sc.textFile("../data/REFERENCE/*") lines_nonempty = lines.filter( lambda x: len(x) > 0 ) words = lines_nonempty.flatMap(lambda x: x.split()) words_persisted = lines_nonempty.flatMap(lambda x: x.split()) t1 = time.time() words.count() print("Word count 1:",time.time() - t1) t1 = time.time() words.count() print("Word count 2:",time.time() - t1) t1 = time.time() words_persisted.persist() words_persisted.count() print("Word count persisted 1:",time.time() - t1) t1 = time.time() words_persisted.count() print("Word count persisted 2:", time.time() - t1) Explanation: The drawback is that Spark recomputes again the RDD at each action application. This means that the computing effort over an already computed RDD may be lost. To mitigate this drawback, the user can take the decision of persisting the RDD after computing it the first time, Spark will store the RDD contents in memory (partitioned across the machines in your cluster), and reuse them in future actions. Persisting RDDs on disk instead of memory is also possible. 
Let's see an example on the impact of persisting: End of explanation # load a file lines = sc.textFile("../data/REFERENCE/*") # make a transformation filtering positive length lines lines_nonempty = lines.filter( lambda x: len(x) > 0 ) print("-> lines_nonepmty is: {} and if we print it we get\n {}".format(type(lines_nonempty), lines_nonempty)) # we transform again words = lines_nonempty.flatMap(lambda x: x.split()) print("-> words is: {} and if we print it we get\n {}".format(type(words), words)) words_persisted = lines_nonempty.flatMap(lambda x: x.split()) print("-> words_persisted is: {} and if we print it we get\n {}".format(type(words_persisted), words_persisted)) final_result = words.take(10) print("-> final_result is: {} and if we print it we get\n {}".format(type(final_result), final_result)) Explanation: RDD Operations We have already seen that RDDs have two basic operations: transformations and actions. Transformations are operations that return a new RDD. Examples: filter, map. Remember that , transformed RDDs are computed lazily, only when you use them in an action. Lazy evaluation means that when we call a transformation on an RDD (for instance, calling map()), the operation is not immediately performed. Instead, Spark internally records metadata to indicate that this operation has been requested. Loading data into an RDD is lazily evaluated in the same way trans formations are. So, when we call sc.textFile(), the data is not loaded until it is necessary. As with transformations, the operation (in this case, reading the data) can occur multiple times. Take in mind that transformations DO HAVE impact over computation time. Many transformations are element-wise; that is, they work on one element at a time; but this is not true for all transformations. End of explanation import time # we checkpint the initial time t1 = time.time() words.count() # and count the time expmended on the computation print("Word count 1:",time.time() - t1) t1 = time.time() words.count() print("Word count 2:",time.time() - t1) t1 = time.time() words_persisted.persist() words_persisted.count() print("Word count persisted 1:",time.time() - t1) t1 = time.time() words_persisted.count() print("Word count persisted 2:", time.time() - t1) Explanation: filter applies the lambda function to each line in lines RDD, only lines that accomplish the condition that the length is greater than zero are in lines_nonempty variable (this RDD is not computed yet!) flatMap applies the lambda function to each element of the RDD and then the result is flattened (i.e. a list of lists would be converted to a simple list) Actions are operations that return an object to the driver program or write to external storage, they kick a computation. Examples: first, count. End of explanation lines = sc.textFile("../data/people.csv") print("-> Three elements:\n", lines.take(3)) print("-> The whole RDD:\n", lines.collect()) Explanation: Actions are the operations that return a final value to the driver program or write data to an external storage system. Actions force the evaluation of the transformations required for the RDD they were called on, since they need to actually produce output. Returning to the previous example, until we call count over words and words persisted, the RDD are not computed. See that we persisted words_persisted, and until its second computation we cannot see the impact of persisting that RDD in memory. If we want to see a part of the RDD, we can use take, and to have the full RDD we can use collect. 
End of explanation lines = sc.textFile("../data/people.csv") # we create a lambda function to apply tp all lines of the dataset # WARNING, see that after splitting we get only the first element first_cells = lines.map(lambda x: x.split(",")[0]) print(first_cells.collect()) # we can define a function as well def get_cell(x): return x.split(",")[0] first_cells = lines.map(get_cell) print(first_cells.collect()) Explanation: Question: Why is not a god idea to collect an RDD? Passing functions to Spark Most of Spark’s transformations, and some of its actions, depend on passing in functions that are used by Spark to compute data. In Python, we have three options for passing functions into Spark. * For shorter functions, we can pass in lambda expressions * We can pass in top-level functions, or * Locally defined functions. End of explanation import urllib3 def download_file(csv_line): link = csv_line[0] http = urllib3.PoolManager() r = http.request('GET', link, preload_content=False) response = r.read() return response books_info = sc.textFile("../data/books.csv").map(lambda x: x.split(",")) print(books_info.take(10)) books_content = books_info.map(download_file) print(books_content.take(10)[1][:100]) Explanation: Working with common Spark transformations The two most common transformations you will likely be using are map and filter. The map() transformation takes in a function and applies it to each element in the RDD with the result of the function being the new value of each element in the resulting RDD. The filter() transformation takes in a function and returns an RDD that only has elements that pass the filter() function. Sometimes map() returns nested lists, to flatten these nested lists we can use flatMap(). So, flatMap() is called individually for each element in our input RDD. Instead of returning a single element, we return an iterator with our return values. Rather than producing an RDD of iterators, we get back an RDD that consists of the elements from all of the iterators. Set operations distinct() transformation to produce a new RDD with only distinct items. Note that distinct() is expensive, however, as it requires shuffling all the data over the network to ensure that we receive only one copy of each element RDD.union(other) back an RDD consisting of the data from both sources. Unlike the mathematical union(), if there are duplicates in the input RDDs, the result of Spark’s union() will contain duplicates (which we can fix if desired with distinct()). RDD.intersection(other) returns only elements in both RDDs. intersection() also removes all duplicates (including duplicates from a single RDD) while running. While intersection() and union() are two similar concepts, the performance of intersection() is much worse since it requires a shuffle over the network to identify common elements. RDD.subtract(other) function takes in another RDD and returns an RDD that has only values present in the first RDD and not the second RDD. Like intersection(), it performs a shuffle. RDD.cartesian(other) transformation returns all possible pairs of (a,b) where a is in the source RDD and b is in the other RDD. The Cartesian product can be useful when we wish to consider the similarity between all possible pairs, such as computing every user’s expected interest in each offer. We can also take the Cartesian product of an RDD with itself, which can be useful for tasks like user similarity. Be warned, however, that the Cartesian product is very expensive for large RDDs. 
Actions reduce(): which takes a function that operates on two elements of the type in your RDD and returns a new element of the same type. aggregate(): takes an initial zero value of the type we want to return. We then supply a function to combine the elements from our RDD with the accumulator. Finally, we need to supply a second function to merge two accumulators, given that each node accumulates its own results locally. To know more: http://stackoverflow.com/questions/28240706/explain-the-aggregate-functionality-in-spark http://atlantageek.com/2015/05/30/python-aggregate-rdd/ collect(): returns the entire RDD’s contents. collect() is commonly used in unit tests where the entire contents of the RDD are expected to fit in memory, as that makes it easy to compare the value of our RDD with our expected result. take(n): returns n elements from the RDD and attempts to minimize the number of partitions it accesses, so it may represent a biased collection top(): will use the default ordering on the data, but we can supply our own comparison function to extract the top elements. Exercises We have a file (../data/books.csv) with a lot of links to books. We want to perform an analysis to the books and its contents. Exercise 1: Download all books, from books.csv using the map function. Exercise 2: Identify transformations and actions. When the returned data is calculated? Exercise 3: Imagine that you only want to download Dickens books, how would you do that? Which is the impact of not persisting dickens_books_content? Exercise 4: Use flatMap() in the resulting RDD of the previous exercise, how the result is different? Exercise 5: You want to know the different books authors there are. Exercise 6: Return Poe's and Dickens' books URLs (use union function). Exercise 7: Return the list of books without Dickens' and Poe's books. Exercise 8: Count the number of books using reduce function. For the following two exercices, we will use ../data/Sacramentorealestatetransactions.csv Exercise 9: Compute the mean price of estates from csv containing Sacramento's estate price using aggregate function. Exercise 10: Get top 5 highest and lowest prices in Sacramento estate's transactions Exercise 1: Download all books, from books.csv using the map function. Answer 1: End of explanation import re def is_dickens(csv_line): link = csv_line[0] t = re.match("http://www.textfiles.com/etext/AUTHORS/DICKENS/",link) return t != None dickens_books_info = books_info.filter(is_dickens) print(dickens_books_info.take(4)) dickens_books_content = dickens_books_info.map(download_file) # take into consideration that each time an action is performed over dickens_book_content, the file is downloaded # this has a big impact into calculations print(dickens_books_content.take(2)[1][:100]) Explanation: Exercise 2: Identify transformations and actions. When the returned data is calculated? Answer 2: If we consider the text reading as a transformation... Transformations: * books_info = sc.textFile("../data/books.csv").map(lambda x: x.split(",")) * books_content = books_info.map(lambda x: download_file(x[0])) Actions: * print books_info.take(10) * print books_content.take(1)[0][:100] Computation is carried out in actions. In this case we take advantage of it, as for downloading data we only apply the function to one element of the books_content RDD Exercise 3: Imagine that you only want to download Dickens books, how would you do that? Which is the impact of not persisting dickens_books_content? 
Answer 3: End of explanation flat_content = dickens_books_info.map(lambda x: x) print(flat_content.take(4)) flat_content = dickens_books_info.flatMap(lambda x: x) print(flat_content.take(4)) Explanation: Exercise 4: Use flatMap() in the resulting RDD of the previous exercise, how the result is different? Answer 4: End of explanation def get_author(csv_line): link = csv_line[0] t = re.match("http://www.textfiles.com/etext/AUTHORS/(\w+)/",link) if t: return t.group(1) return u'UNKNOWN' authors = books_info.map(get_author) authors.distinct().collect() Explanation: Exercise 5: You want to know the different books authors there are. Answer 5: End of explanation import re def get_author_and_link(csv_line): link = csv_line[0] t = re.match("http://www.textfiles.com/etext/AUTHORS/(\w+)/",link) if t: return (t.group(1), link) return (u'UNKNOWN',link) authors_links = books_info.map(get_author_and_link) # not very efficient dickens_books = authors_links.filter(lambda x: x[0]=="DICKENS") poes_books = authors_links.filter(lambda x: x[0]=="POE") poes_dickens_books = poes_books.union(dickens_books) # sample is a transformation that returns an RDD sampled over the original RDD # https://spark.apache.org/docs/1.1.1/api/python/pyspark.rdd.RDD-class.html poes_dickens_books.sample(True,0.05).collect() # takeSample is an action, returning a sampled subset of the RDD poes_dickens_books.takeSample(True,10) Explanation: Exercise 6: Return Poe's and Dickens' books URLs (use union function). Answer 6 End of explanation authors_links.subtract(poes_dickens_books).map(lambda x: x[0]).distinct().collect() Explanation: Exercise 7: Return the list of books without Dickens' and Poe's books. Answer 7: End of explanation authors_links.map(lambda x: 1).reduce(lambda x,y: x+y) == authors_links.count() # let's see this approach more in detail # this transformation generates an rdd of 1, one per element in the RDD authors_map = authors_links.map(lambda x: 1) authors_map.takeSample(True,10) # with reduce, we pass a function with two parameters which is applied by pairs # inside the the function we specify which operation we perform with the two parameters # the result is then returned and the action is applied again using the result until there is only one element in the resulting # this is a very efficient way to do a summation in parallel # using a functional approach # we could define any operation inside the function authors_map.reduce(lambda x,y: x*y) Explanation: Exercise 8: Count the number of books using reduce function. Answer 8 End of explanation sacramento_estate_csv = sc.textFile("../data/Sacramentorealestatetransactions.csv") header = sacramento_estate_csv.first() # first load the data # we know that the price is in column 9 sacramento_estate = sacramento_estate_csv.filter(lambda x: x != header)\ .map(lambda x: x.split(","))\ .map(lambda x: int(x[9])) sacramento_estate.takeSample(True, 10) seqOp = (lambda x,y: (x[0] + y, x[1] + 1)) combOp = (lambda x,y: (x[0] + y[0], x[1] + y[1])) total_sum, number = sacramento_estate.aggregate((0,0),seqOp,combOp) mean = float(total_sum)/number mean Explanation: Exercise 9: Compute the mean price of estates from csv containing Sacramento's estate price using aggregate function. 
Answer 9 End of explanation print(sacramento_estate.top(5)) print(sacramento_estate.top(5, key=lambda x: -x)) Explanation: Exercise 10: Get top 5 highest and lowest prices in Sacramento estate's transactions Answer 10 End of explanation import re def get_author_data(csv_line): link = csv_line[0] t = re.match("http://www.textfiles.com/etext/AUTHORS/(\w+)/",link) if t: return (t.group(1), csv_line) return (u'UNKNOWN', csv_line) books_info = sc.textFile("../data/books.csv").map(lambda x: x.split(",")) authors_info = books_info.map(get_author_data) print(authors_info.take(5)) Explanation: Spark Key/Value Pairs Spark provides special operations on RDDs containing key/value pairs. These RDDs are called pair RDDs, but are simple RDDs with an special structure. In Python, for the functions on keyed data to work we need to return an RDD composed of tuples. Exercise 1: Create a pair RDD from our books information data, having author as key and the rest of the information as value. (Hint: the answer is very similar to the previous section Exercise 6) Exercise 2: Check that pair RDDs are also RDDs and that common RDD operations work as well. Filter elements with author equals to "UNKNOWN" from previous RDD. Exercise 3: Check mapValue in Spark API (http://spark.apache.org/docs/latest/api/python/pyspark.html#pyspark.RDD.mapValues) function that works on pair RDDs. Exercise 1: Create a pair RDD from our books information data, having author as key and the rest of the information as value. (Hint: the answer is very similar to the previous section Exercise 6) Answer 1: End of explanation authors_info.filter(lambda x: x[0] != "UNKNOWN").take(3) Explanation: Exercise 2: Check that pair RDDs are also RDDs and that common RDD operations work as well. Filter elements with author equals to "UNKNOWN" from previous RDD. Answer 2: The operations over pair RDDs will also be slightly different. But take into account that pair RDDs are just special RDDs that some operations can be applied, however common RDDs also fork for them. End of explanation authors_info.mapValues(lambda x: x[2]).take(5) Explanation: Exercise 3: Check mapValue in Spark API (http://spark.apache.org/docs/latest/api/python/pyspark.html#pyspark.RDD.mapValues) function that works on pair RDDs. Answer 3: Sometimes is awkward to work with pairs, and Spark provides a map function that operates over values. End of explanation # first get each book size, keyed by author authors_data = authors_info.mapValues(lambda x: int(x[2])) authors_data.take(5) # ther reduce summing authors_data.reduceByKey(lambda y,x: y+x).collect() Explanation: Transformations on Pair RDDs Since pair RDDs contain tuples, we need to pass functions that operate on tuples rather than on individual elements. reduceByKey(func): Combine values with the same key. groupByKey(): Group values with the same key. combineByKey(createCombiner, mergeValue, mergeCombiners, partitioner): Combine values with the same key using a different result type. keys(): return RDD keys values(): return RDD values groupBy(): takes a function that it applies to every element in the source RDD and uses the result to determine the key. cogroup(): over two RDDs sharing the same key type, K, with the respective value types V and W gives us back RDD[(K,(Iterable[V], Iterable[W]))]. If one of the RDDs doesn’t have elements for a given key that is present in the other RDD, the corresponding Iterable is simply empty. cogroup() gives us the power to group data from multiple RDDs. 
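Before moving on to the exercises, the sketch below (toy data, not part of the original notebook; output order may vary) shows the most common of these pair RDD transformations side by side:

pairs = sc.parallelize([("a", 1), ("b", 2), ("a", 3)])
other = sc.parallelize([("a", "x"), ("c", "y")])
print(pairs.reduceByKey(lambda x, y: x + y).collect())  # [('a', 4), ('b', 2)]
print(pairs.groupByKey().mapValues(list).collect())     # [('a', [1, 3]), ('b', [2])]
print(pairs.keys().collect(), pairs.values().collect())
# cogroup returns, per key, one iterable of values from each RDD
print([(k, (list(v), list(w))) for k, (v, w) in pairs.cogroup(other).collect()])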
Exercise 1: Get the total size of files for each author. Exercise 2: Get the top 5 authors with more data. Exercise 3: Try the combineByKey() with a randomly generated set of 5 values for 4 keys. Get the average value of the random variable for each key. Exercise 4: Compute the average book size per author using combineByKey(). If you were an English Literature student and your teacher says: "Pick one Author and I'll randomly pick a book for you to read", what would be a Data Scientist answer? Exercise 5: All Spark books have the word count example. Let's count words over all our books! (This might take some time) Exercise 6: Group author data by author surname initial. How many authors have we grouped? Exercise 7: Generate a pair RDD with alphabet letters in upper case as key, and empty list as value. Then group the previous RDD with this new one. Exercise 1: Get the total size of files for each author. Answer 1 End of explanation authors_data.reduceByKey(lambda y,x: y+x).top(5,key=lambda x: x[1]) Explanation: Exercise 2: Get the top 5 authors with more data. Answer 2: End of explanation import numpy as np # generate the data for pair in list(zip(np.arange(5).tolist()*5, np.random.normal(0,1,5*5))): print(pair) rdd = sc.parallelize(zip(np.arange(5).tolist()*5, np.random.normal(0,1,5*5))) createCombiner = lambda value: (value,1) # you can check what createCombiner does # rdd.mapValues(createCombiner).collect() # here x is the combiner (sum,count) and value is value in the # initial RDD (the random variable) mergeValue = lambda x, value: (x[0] + value, x[1] + 1) # here, all combiners are summed (sum,count) mergeCombiner = lambda x, y: (x[0] + y[0], x[1] + y[1]) sumCount = rdd.combineByKey(createCombiner, mergeValue, mergeCombiner) print(sumCount.collect()) sumCount.mapValues(lambda x: x[0]/x[1]).collect() Explanation: Exercise 3: Try the combineByKey() with a randomly generated set of 5 values for 4 keys. Get the average value of the random variable for each key. Answer 3: End of explanation createCombiner = lambda value: (value,1) # you can check what createCombiner does # rdd.mapValues(createCombiner).collect() # here x is the combiner (sum,count) and value is value in the # initial RDD (the random variable) mergeValue = lambda x, value: (x[0] + value, x[1] + 1) # here, all combiners are summed (sum,count) mergeCombiner = lambda x, y: (x[0] + y[0], x[1] + y[1]) sumCount = authors_data.combineByKey(createCombiner, mergeValue, mergeCombiner) print(sumCount.mapValues(lambda x: x[0]/x[1]).collect()) # I would choose the author with lowest average book size print(sumCount.mapValues(lambda x: x[0]/x[1]).top(5,lambda x: -x[1])) Explanation: Exercise 4: Compute the average book size per author using combineByKey(). If you were an English Literature student and your teacher says: "Pick one Author and I'll randomly pick a book for you to read", what would be a Data Scientist answer? 
Answer 4: End of explanation import urllib3 import re def download_file(csv_line): link = csv_line[0] http = urllib3.PoolManager() r = http.request('GET', link, preload_content=False) response = r.read() return str(response) books_info = sc.textFile("../data/books.csv").map(lambda x: x.split(",")) #books_content = books_info.map(download_file) # while trying the function use only two samples books_content = sc.parallelize(books_info.map(download_file).take(2)) words_rdd = books_content.flatMap(lambda x: x.split(" ")).\ flatMap(lambda x: x.split("\r\n")).\ map(lambda x: re.sub('[^0-9a-zA-Z]+', '', x).lower()).\ filter(lambda x: x != '') words_rdd.map(lambda x: (x,1)).reduceByKey(lambda x,y: x+y).top(5, key=lambda x: x[1]) Explanation: Exercise 5: All Spark books have the word count example. Let's count words over all our books! (This might take some time) Answer 5: End of explanation print(authors_info.groupBy(lambda x: x[0][0]).collect()) authors_info.map(lambda x: x[0]).distinct().\ map(lambda x: (x[0],1)).\ reduceByKey(lambda x,y: x+y).\ filter(lambda x: x[1]>1).\ collect() Explanation: Exercise 6: Group author data by author surname initial. How many authors have we grouped? Answer 6: End of explanation import string sc.parallelize(list(string.ascii_uppercase)).\ map(lambda x: (x,[])).\ cogroup(authors_info.groupBy(lambda x: x[0][0])).\ take(5) Explanation: Exercise 7: Generate a pair RDD with alphabet letters in upper case as key, and empty list as value. Then group the previous RDD with this new one. Answer 7: End of explanation #more info: https://www.worlddata.info/downloads/ rdd_countries = sc.textFile("../data/countries_data_clean.csv").map(lambda x: x.split(",")) #more info: http://data.worldbank.org/data-catalog/GDP-ranking-table rdd_gdp = sc.textFile("../data/countries_GDP_clean.csv").map(lambda x: x.split(";")) # check rdds size hyp_final_rdd_num = rdd_gdp.count() if rdd_countries.count() > rdd_gdp.count() else rdd_countries.count() print("The final number of elements in the joined rdd should be: ", hyp_final_rdd_num) p_rdd_gdp = rdd_gdp.map(lambda x: (x[3],x)) p_rdd_countries = rdd_countries.map(lambda x: (x[1],x)) print(p_rdd_countries.take(1)) print(p_rdd_gdp.take(1)) p_rdd_contry_data = p_rdd_countries.join(p_rdd_gdp) final_join_rdd_size = p_rdd_contry_data.count() hyp = hyp_final_rdd_num == final_join_rdd_size print("The initial hypothesis is ", hyp) if not hyp: print("The final joined rdd size is ", final_join_rdd_size) Explanation: Joins Some of the most useful operations we get with keyed data comes from using it together with other keyed data. Joining data together is probably one of the most common operations on a pair RDD, and we have a full range of options including right and left outer joins, cross joins, and inner joins. Inner Join Only keys that are present in both pair RDDs are output. When there are multiple values for the same key in one of the inputs, the resulting pair RDD will have an entry for every possible pair of values with that key from the two input RDDs Exercise: Take countries_data_clean.csv and countries_GDP_clean.csv and join them using country name as key. Before doing the join, please, check how many element should the resulting pair RDD have. After the join, check if the initial hypothesis was true. In case it is not, what is the reason? How would you resolve that problem? 
End of explanation n = 5 rdd_1 = sc.parallelize([(x,1) for x in range(n)]) rdd_2 = sc.parallelize([(x*2,1) for x in range(n)]) print("rdd_1: ",rdd_1.collect()) print("rdd_2: ",rdd_2.collect()) print("leftOuterJoin: ",rdd_1.leftOuterJoin(rdd_2).collect()) print("rightOuterJoin: ",rdd_1.rightOuterJoin(rdd_2).collect()) print("join: ", rdd_1.join(rdd_2).collect()) #explore what hapens if a key is present twice or more rdd_3 = sc.parallelize([(x*2,1) for x in range(n)] + [(4,2),(6,4)]) print("rdd_3: ",rdd_3.collect()) print("join: ", rdd_2.join(rdd_3).collect()) Explanation: Left and Right outer Joins Sometimes we don’t need the key to be present in both RDDs to want it in our result. For example, imagine that our list of countries is not complete, and we don't want to miss data if it a country is not present in both RDDs. leftOuterJoin(other) and rightOuterJoin(other) both join pair RDDs together by key, where one of the pair RDDs can be missing the key. With leftOuterJoin() the resulting pair RDD has entries for each key in the source RDD. The value associated with each key in the result is a tuple of the value from the source RDD and an Option for the value from the other pair RDD. In Python, if a value isn’t present None is used; and if the value is present the regular value, without any wrapper, is used. As with join(), we can have multiple entries for each key; when this occurs, we get the Cartesian product between the two lists of values. rightOuterJoin() is almost identical to leftOuterJoin() except the key must be present in the other RDD and the tuple has an option for the source rather than the other RDD. Exercise: Use two simple RDDs to show the results of left and right outer join. End of explanation rdd_gdp = sc.textFile("../data/countries_GDP_clean.csv").map(lambda x: x.split(";")) rdd_gdp.take(2) #generate a pair rdd with countrycode and GDP rdd_cc_gdp = rdd_gdp.map(lambda x: (x[1],x[4])) rdd_cc_gdp.take(2) Explanation: Exercise: Generate two pair RDDs with country info: 1. A first one with country code and GDP 2. A second one with country code and life expectancy Then join them to have a pair RDD with country code plus GDP and life expentancy. Answer: Inspect the dataset with GDP. End of explanation rdd_countries = sc.textFile("../data/countries_data_clean.csv").map(lambda x: x.split(",")) print(rdd_countries.take(2)) #generate a pair rdd with countrycode and lifexpectancy #(more info in https://www.worlddata.info/downloads/) #we don't have countrycode in this dataset, but let's try to add it #we have a dataset with countrynames and countrycodes #let's take countryname and ISO 3166-1 alpha3 code rdd_cc = sc.textFile("../data/countrycodes.csv").\ map(lambda x: x.split(";")).\ map(lambda x: (x[0].strip("\""),x[4].strip("\""))).\ filter(lambda x: x[0] != 'Country (en)') print(rdd_cc.take(2)) rdd_cc_info = rdd_countries.map(lambda x: (x[1],x[16])) rdd_cc_info.take(2) #let's count and see if something is missing print(rdd_cc.count()) print(rdd_cc_info.count()) #take only values, the name is no longer needed rdd_name_cc_le = rdd_cc_info.leftOuterJoin(rdd_cc) rdd_cc_le = rdd_name_cc_le.map(lambda x: x[1]) print(rdd_cc_le.take(5)) print(rdd_cc_le.count()) #what is missing? rdd_name_cc_le.filter(lambda x: x[1][1] == None).collect() #how can we solve this problem?? Explanation: Inspect the dataset with life expectancy. 
End of explanation print("Is there some data missing?", rdd_cc_gdp.count() != rdd_cc_le.count()) print("GDP dataset: ", rdd_cc_gdp.count()) print("Life expectancy dataset: ", rdd_cc_le.count()) #lets try to see what happens print(rdd_cc_le.take(10)) print (rdd_cc_gdp.take(10)) rdd_cc_gdp_le = rdd_cc_le.map(lambda x: (x[1],x[0])).leftOuterJoin(rdd_cc_gdp) #we have some countries that the data is missing # we have to check if this data is available # or there is any error rdd_cc_gdp_le.take(10) Explanation: We have some missing data, that we have to complete, but we have quite a lot of data, let's follow. Inspect the results of GDP and life expectancy and join them. Is there some data missing? End of explanation p_rdd_contry_data.sortByKey().take(2) Explanation: Sort Data sortByKey(): We can sort an RDD with key/value pairs provided that there is an ordering defined on the key. Once we have sorted our data, any subsequent call on the sorted data to collect() or save() will result in ordered data. Exercise: Sort country data by key. End of explanation p_rdd_contry_data.countByKey()["Andorra"] p_rdd_contry_data.collectAsMap()["Andorra"] #p_rdd_contry_data.lookup("Andorra") Explanation: Actions over Pair RDDs countByKey(): Count the number of elements for each key. collectAsMap(): Collect the result as a map to provide easy lookup. lookup(key): Return all values associated with the provided key. Exercises: 1. Count countries RDD by key 2. Collect countries RDD as map 3. Lookup Andorra info in countries RDD End of explanation rdd_userinfo = sc.textFile("../data/users_events_example/user_info_1000users_20topics.csv")\ .filter(lambda x: len(x)>0)\ .map(lambda x: (x.split(",")[0],x.split(",")[1].split("|"))) rdd_userinfo.take(2) Explanation: Data Partitioning (from: Learning Spark - O'Reilly) Spark programs can choose to control their RDDs’ partitioning to reduce communication. Partitioning will not be helpful in all applications— for example, if a given RDD is scanned only once, there is no point in partitioning it in advance. It is useful only when a dataset is reused multiple times in key-oriented operations such as joins. Spark’s partitioning is available on all RDDs of key/value pairs, and causes the system to group elements based on a function of each key. Spark does not give explicit control of which worker node each key goes to (partly because the system is designed to work even if specific nodes fail), it lets the program ensure that a set of keys will appear together on some node. Example: As a simple example, consider an application that keeps a large table of user information in memory—say, an RDD of (UserID, UserInfo) pairs, where UserInfo contains a list of topics the user is subscribed to. End of explanation rdd_userevents = sc.textFile("../data/users_events_example/userevents_*.log")\ .filter(lambda x: len(x))\ .map(lambda x: (x.split(",")[1], [x.split(",")[2]])) print(rdd_userevents.take(2)) Explanation: The application periodically combines this table with a smaller file representing events that happened in the past five minutes—say, a table of (UserID, LinkInfo) pairs for users who have clicked a link on a website in those five minutes. 
End of explanation rdd_joined = rdd_userinfo.join(rdd_userevents) print(rdd_joined.count()) print(rdd_joined.filter(lambda x: (x[1][1][0] not in x[1][0])).count()) print(rdd_joined.filter(lambda x: (x[1][1][0] in x[1][0])).count()) Explanation: For example, we may wish to count how many users visited a link that was not to one of their subscribed topics. We can perform this combination with Spark’s join() operation, which can be used to group the User Info and LinkInfo pairs for each UserID by key. End of explanation rdd_userinfo = sc.textFile("../data/users_events_example/user_info_1000users_20topics.csv")\ .filter(lambda x: len(x)>0)\ .map(lambda x: (x.split(",")[0],x.split(",")[1].split("|"))).persist() def process_new_logs(event_fite_path): rdd_userevents = sc.textFile(event_fite_path)\ .filter(lambda x: len(x))\ .map(lambda x: (x.split(",")[1], [x.split(",")[2]])) rdd_joined = rdd_userinfo.join(rdd_userevents) print("Number of visits to non-subscribed topics: ", rdd_joined.filter(lambda x: (x[1][1][0] not in x[1][0])).count()) process_new_logs("../data/users_events_example/userevents_01012016000500.log") Explanation: Imagine that we want to count the number of visits to non-subscribed visits using a function. End of explanation rdd_userinfo = sc.textFile("../data/users_events_example/user_info_1000users_20topics.csv")\ .filter(lambda x: len(x)>0)\ .map(lambda x: (x.split(",")[0],x.split(",")[1].split("|"))).partitionBy(10) rdd_userinfo Explanation: This code will run fine as is, but it will be inefficient. This is because the join() operation, called each time process_new_logs() is invoked, does not know anything about how the keys are partitioned in the datasets. By default, this operation will hash all the keys of both datasets, sending elements with the same key hash across the network to the same machine, and then join together the elements with the same key on that machine (see figure below). Because we expect the rdd_userinfo table to be much larger than the small log of events seen every five minutes, this wastes a lot of work: the rdd_userinfo table is hashed and shuffled across the network on every call, even though it doesn’t change. Fixing this is simple: just use the partitionBy() transformation on rdd_userinfo to hash-partition it at the start of the program. We do this by passing a spark.HashPartitioner object to partitionBy. End of explanation
2,085
Given the following text description, write Python code to implement the functionality described below step by step Description: Whole dataset Step1: Data pre-processing Load the data set and separate the dataset into predictors and targets. The first column of targets is the EUI that needs to be predicted; the third column is the sample weighting. Step2: The missing values need to be imputed before fitting the model. Separate numeric values from the other values, as they are imputed differently Step3: Exploratory analysis Plot the Total EUI distribution to give an indication of consumption conditions Step4: Model establishment Separating data into training and testing sets Step5: A series of functions is defined to fit and evaluate the model Step6: The optimum model is first analyzed using a self-defined method Step7: The selected features are extracted Analyzing features Step8: Find the names of the selected features Step9: The selected features are translated as shown below Step10: We map the features to the files Step11: All the features are also mapped Step12: To improve the interpretability and understand which are the most important features, feature selection is performed Step13: Significance test
Python Code: from sklearn import linear_model import csv import numpy as np from matplotlib import pyplot as plt from sklearn.preprocessing import Imputer from sklearn.linear_model import lasso_path from sklearn.linear_model import LassoCV from sklearn.metrics import r2_score from sklearn.metrics import mean_squared_error %matplotlib inline Explanation: Whole dataset End of explanation data = np.genfromtxt('ALL_ATRIB_DATA.csv',delimiter=',',skip_header=1) Predictor = data[:,0:159] Target = data[:, 159:162] Explanation: Data pre-processing Load the data set and seperate the dataset into predictors and targets. the first column of targets is the EUI need to be predicted, the third column is the sample weighting. End of explanation all_predictor = list(range(159)) numeric_index = [0, 18, 20, 38, 40, 45, 47, 58, 70, 79, 103, 104, 106, 110, 113, 136, 138, 147, 156, 157, 158] bi_no_index = [item for item in all_predictor if item not in numeric_index] print(len(numeric_index)) print(len(bi_no_index)) len(numeric_index) + len(bi_no_index) == len(all_predictor) # Binary and categorical values are imputed as most frequendt imp_bi = Imputer(missing_values='NaN', strategy='most_frequent', axis = 0) imp_bi.fit(Predictor[:,bi_no_index]) Predictor[:,bi_no_index] = imp_bi.transform(Predictor[:,bi_no_index]) #Numeric values are imputed as median (to elimanates influence of extreme values) imp_num = Imputer(missing_values='NaN', strategy='median', axis = 0) imp_num.fit(Predictor[:,numeric_index]) Predictor[:,numeric_index] = imp_num.transform(Predictor[:,numeric_index]) imp_tar = Imputer(missing_values = 'NaN', strategy = 'median', axis = 0) imp_tar.fit(Target) Target = imp_tar.transform(Target) Explanation: The missing values to be imputed before fit the model Separate numeric values to other values as they are imputed differently End of explanation plt.figure(figsize=(10, 8)) plt.hist(Target[:,0], bins = 20) plt.ylabel("Number of buildings") plt.xlabel("Total EUI (MBTU/sqft)") plt.title("Total EUI distribution") plt.figure(figsize=(10, 8)) plt.boxplot(Target[:,0], notch=True, sym='bd', vert=False) plt.xlabel("Building EUI") plt.ylabel("Building") plt.title("Building EUI") Explanation: Exploratory analysis Plot the Total EUI distribution to give an indication of consumption condition End of explanation trainPredictor = Predictor[0:len(Predictor)//2] testPredictor = Predictor[len(Predictor)//2: len(Predictor)] trainTarget = Target[0:len(Target)//2] testTarget = Target[len(Target)//2:len(Target)] print(len(trainPredictor) + len(testPredictor) == len(Predictor)) print(len(trainTarget) + len(testTarget) == len(Target)) print(len(trainPredictor) == len(trainTarget)) print(len(testPredictor) == len(testTarget)) print(trainPredictor.shape) print(trainTarget.shape) Explanation: Model establishement Seperating data into training and testing set End of explanation #This function gives back the Lasso regression coefficient after the model is fitted def lasso_fit(alpha, predictor, Target): clf = linear_model.Lasso(alpha=alpha) clf.fit(predictor,Target) coefficient = clf.coef_ return coefficient # the function returns the predicted y matrix of test dataset def lasso_results(alpha_input, train_X, train_y, test_X, test_y): clf = linear_model.Lasso(alpha=alpha_input) clf.fit(train_X,train_y) # a column of ones is added to the design matrix to fit the intercept oneMatrix = np.ones((len(test_X),1)) DesignMatrix = np.concatenate((test_X, oneMatrix),axis = 1) coefficients = np.concatenate((clf.coef_ , [clf.intercept_]), axis 
= 0) testResults = np.dot(DesignMatrix, coefficients) return testResults # the function returns the evaluator of the lasso fit (r_square, mse) def lasso_test(alpha_input, train_X, train_y, test_X, test_y, testWeight): r_square = [] mse = [] for a in alpha_input: testResults = lasso_results(a, train_X, train_y, test_X, test_y) r_square.append(r2_score(test_y, testResults, sample_weight = testWeight)) mse.append(mean_squared_error(test_y, testResults, sample_weight =testWeight)) index = mse.index(min(mse)) evaluator = np.stack((r_square, mse), axis = 1) return {"evaluator": evaluator, "r_square": r_square[index], "MSE": mse[index], "alpha":alpha_input[index]} #find best fit using LassoCV def lasso_cros(alpha_input, train_X, train_Y, test_X, test_Y): clf2 = LassoCV(alphas = alpha_range, cv = 5) clf2.fit(train_X, train_Y) plt.figure(figsize=(15,10)) plt.plot(np.log10(clf2.alphas_), clf2.mse_path_[:], '--') plt.plot(np.log10(clf2.alphas_), clf2.mse_path_.mean(axis=-1), 'k-') plt.show() return {'alpha':clf2.alpha_, 'r_square':clf2.score(train_X, train_Y), 'intercept':clf2.intercept_, 'Minimum MSE': min(clf2.mse_path_.mean(axis=-1))} #this function give the number of features selected under each alpha def num_feature(alpha_input, train_X, train_y): num_feature = [] for alpha_input in alpha_input: clf = linear_model.Lasso(alpha=alpha_input) clf.fit(train_X,train_y) num = find_features(clf.coef_)["count"] num_feature.append(num) return num_feature #This function give the features selected (non-zero coefficient) def find_features(coeff): index = [] count = 0 for i in range(len(coeff)): if coeff[i] != 0: index.append(i) count += 1 return {'index':index, 'count': count} Explanation: A series functions are defined to fit and evaluate the model End of explanation #define alpha range alpha_range = np.logspace(-2, 2, num = 1000, base = 10) #extract MSE and number of features selected from given alpha range evaluators = lasso_test(alpha_range, trainPredictor, trainTarget[:,0], testPredictor, testTarget[:,0], testTarget[:,2])["evaluator"] #and number of features selected from given alpha range num_feature = num_feature(alpha_range, trainPredictor, trainTarget[:,0]) #Plot the results plt.figure(figsize=(8,10)) plt.subplot(211) plt.plot(np.log10(np.logspace(-2, 2, num = 1000, base =10)), evaluators[:,1], 'k-') plt.ylabel("Mean Square Error") plt.xlabel("log10(alpha)") plt.title("Change of Mearn Square Error with Tuning Parameter") plt.subplot(212) plt.plot(np.log10(np.logspace(-2, 2, num = 1000, base =10)), num_feature, 'k-') plt.ylabel("Number of features selected") plt.xlabel("log10(alpha)") plt.title("Change of Number of features selected with Tuning Parameter") plt.show() # the Model is auto selected by the LassoCV function lasso_cros(alpha_range, trainPredictor, trainTarget[:,0], testPredictor, testTarget[:,0]) # the predicted value and real value are compared testResults = lasso_results(0.30302710828663965, trainPredictor, trainTarget[:,0], testPredictor, testTarget[:,0]) fig2= plt.figure(figsize=(10, 8)) plt.plot(list(range(2608)), testResults, 'ko', label = 'Predicted EUI') plt.plot(list(range(2608)), testTarget[:,0], 'ro', label = 'Actual EUI') plt.ylim(-100, 800) plt.xlim(0, len(testResults)) plt.xlabel("Buildings") plt.ylabel("Total EUI (MBTU/sqft)") plt.title("Predicted EUI vs. 
Actual EUI") plt.legend() Explanation: The optimum model is first analyzed using self-defined method End of explanation #Attributes selected alpha = lasso_cros(alpha_range, trainPredictor, trainTarget[:,0], testPredictor, testTarget[:,0])["alpha"] coefficient = lasso_fit(alpha, trainPredictor, trainTarget[:,0]) features = find_features(coefficient)["index"] Explanation: The selected features are extracted Analyzing features End of explanation import csv File = open('ALL_ATRIB_DATA_code.csv','r') reader = csv.reader(File, delimiter = ',') code = list(reader) code = np.array(code) Explanation: find the name of selected features End of explanation code[features,1] #This function maps the feature to the file def file_map(feature_index, code): basic = [] file1 = [] file2 = [] file3 = [] file4 = [] file5 = [] file6 = [] file7 = [] for i in feature_index: if code[i, 3] == "Basic info.": basic.append(code[i,1]) elif code[i, 3] == "File (1)": file1.append(code[i, 1]) elif code[i, 3] == "File (2)": file2.append(code[i, 1]) elif code[i, 3] == "File (3)": file3.append(code[i, 1]) elif code[i, 3] == "File (4)": file4.append(code[i, 1]) elif code[i, 3] == "File (5)": file5.append(code[i, 1]) elif code[i, 3] == "File (6)": file6.append(code[i, 1]) elif code[i, 3] == "File (7)": file7.append(code[i, 1]) print "The total number of features selected is \t{}.".format(len(feature_index)) print "The number of features in Basic info is \t{}.".format(len(basic)) print "The number of features in File 1 is \t{}.".format(len(file1)) print "The number of features in File 2 is \t{}.".format(len(file2)) print "The number of features in File 3 is \t{}.".format(len(file3)) print "The number of features in File 4 is \t{}.".format(len(file4)) print "The number of features in File 5 is \t{}.".format(len(file5)) print "The number of features in File 6 is \t{}.".format(len(file6)) print "The number of features in File 7 is \t{}.".format(len(file7)) return None Explanation: The features selected is translated as shown below End of explanation file_map(features, code) Explanation: We map the feature to the files End of explanation file_map(list(range(159)), code) Explanation: All the features are also mapped End of explanation # this function returns the targeted number of features selected def feature_reduc(alpha_input, train_X, train_y, threshold): feature_num = len(train_X[0]) while feature_num > threshold: clf = linear_model.Lasso(alpha=alpha_input) clf.fit(train_X,train_y) feature_index = find_features(clf.coef_)["index"] feature_num = len(feature_index) alpha_input = alpha_input * 1.2 return {'alpha':alpha_input, 'feature_index': feature_index} # Target 1 most important feature code[feature_reduc(10, Predictor, Target[:,0],1)["feature_index"],1] # Target 3 most important feature code[feature_reduc(10, Predictor, Target[:,0],3)["feature_index"],1] # Target 5 most important feature code[feature_reduc(10, Predictor, Target[:,0],5)["feature_index"],1] # Target 10 most important feature code[feature_reduc(10, Predictor, Target[:,0],10)["feature_index"],1] Explanation: To improve the inpterpretability and understand what is the most important features, feature selection is performed End of explanation # the MSE of the optimum model is used MSE = 10595.655356917619 #alpha obtained from previous model coef = coefficient = lasso_fit(0.30302710828663965, trainPredictor, trainTarget[:,0]) Y_pred=lasso_results(0.30372635797, trainPredictor, trainTarget[:,0], testPredictor, testTarget[:,0]) from scipy.stats import t 
s2=MSE*np.linalg.inv(np.dot(np.transpose(testPredictor),testPredictor)) ss2=np.diag(s2) ss=np.sqrt(ss2) stu=t.isf((1-0.95)/2,2608-159) Max=np.matrix(coef).T+np.matrix(stu*ss).T Min=np.matrix(coef).T-np.matrix(stu*ss).T beta_min_max=np.concatenate((Min,Max),axis=1) print(beta_min_max) from scipy.stats import t stu=t.isf((1-0.95)/2,2608-159) T=coef/ss print('T-test critical values at the 95% confidence level are between -{0:.3f} and {0:.3f}'.format(stu)) num=np.where((T<-stu)|(T>stu)) Stu_p=stu*np.ones(159) x=np.linspace(1,159,num=159) plt.plot(x,T,x,Stu_p,x,-Stu_p) num_reject=np.matrix(num).shape print(num_reject) #print[code[num,1]] print(numeric_index) print(num) print(code[num,1]) Explanation: Significance test End of explanation
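A caveat worth adding (this note and the sketch are an addition, not part of the original analysis): classical t-intervals assume the model was fixed before looking at the data, while the Lasso both selects and shrinks coefficients, so the test above is only a rough guide. A simple complementary check is to bootstrap the fit and see how often each feature keeps a non-zero coefficient; the sketch below reuses the variables defined earlier and an alpha close to the one used above:

n_boot = 100
selected = np.zeros(trainPredictor.shape[1])
for _ in range(n_boot):
    idx = np.random.randint(0, len(trainPredictor), len(trainPredictor))
    boot_fit = linear_model.Lasso(alpha=0.303).fit(trainPredictor[idx], trainTarget[idx, 0])
    selected += (boot_fit.coef_ != 0)
selection_frequency = selected / n_boot
# features that survive in more than 90% of the resamples
print(code[np.where(selection_frequency > 0.9)[0], 1])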
2,086
Given the following text description, write Python code to implement the functionality described below step by step Description: Deep Convolutional GANs In this notebook, you'll build a GAN using convolutional layers in the generator and discriminator. This is called a Deep Convolutional GAN, or DCGAN for short. The DCGAN architecture was first explored last year and has seen impressive results in generating new images; you can read the original paper here. You'll be training DCGAN on the Street View House Numbers (SVHN) dataset. These are color images of house numbers collected from Google street view. SVHN images are in color and much more variable than MNIST. So, we'll need a deeper and more powerful network. This is accomplished through using convolutional layers in the discriminator and generator. It's also necessary to use batch normalization to get the convolutional networks to train. The only real changes compared to what you saw previously are in the generator and discriminator, otherwise the rest of the implementation is the same. Step1: Getting the data Here you can download the SVHN dataset. Run the cell above and it'll download to your machine. Step2: These SVHN files are .mat files typically used with Matlab. However, we can load them in with scipy.io.loadmat which we imported above. Step3: Here I'm showing a small sample of the images. Each of these is 32x32 with 3 color channels (RGB). These are the real images we'll pass to the discriminator and what the generator will eventually fake. Step4: Here we need to do a bit of preprocessing and get the images into a form where we can pass batches to the network. First off, we need to rescale the images to a range of -1 to 1, since the output of our generator is also in that range. We also have a set of test and validation images which could be used if we're trying to identify the numbers in the images. Step5: Network Inputs Here, just creating some placeholders like normal. Step6: Generator Here you'll build the generator network. The input will be our noise vector z as before. Also as before, the output will be a $tanh$ output, but this time with size 32x32 which is the size of our SVHN images. What's new here is we'll use convolutional layers to create our new images. The first layer is a fully connected layer which is reshaped into a deep and narrow layer, something like 4x4x1024 as in the original DCGAN paper. Then we use batch normalization and a leaky ReLU activation. Next is a transposed convolution where typically you'd halve the depth and double the width and height of the previous layer. Again, we use batch normalization and leaky ReLU. For each of these layers, the general scheme is convolution > batch norm > leaky ReLU. You keep stacking layers up like this until you get the final transposed convolution layer with shape 32x32x3. Below is the architecture used in the original DCGAN paper Step7: Discriminator Here you'll build the discriminator. This is basically just a convolutional classifier like you've built before. The inputs to the discriminator are 32x32x3 tensors/images. You'll want a few convolutional layers, then a fully connected layer for the output. As before, we want a sigmoid output, and you'll need to return the logits as well. For the depths of the convolutional layers I suggest starting with 16, 32, 64 filters in the first layer, then double the depth as you add layers. Note that in the DCGAN paper, they did all the downsampling using only strided convolutional layers with no maxpool layers.
You'll also want to use batch normalization with tf.layers.batch_normalization on each layer except the first convolutional and output layers. Again, each layer should look something like convolution > batch norm > leaky ReLU. Note Step8: Model Loss Calculating the loss like before, nothing new here. Step9: Optimizers Not much new here, but notice how the train operations are wrapped in a with tf.control_dependencies block so the batch normalization layers can update their population statistics. Step10: Building the model Here we can use the functions we defined above to build the model as a class. This will make it easier to move the network around in our code since the nodes and operations in the graph are packaged in one object. Step11: Here is a function for displaying generated images. Step12: And another function we can use to train our network. Notice when we call generator to create the samples to display, we set training to False. That's so the batch normalization layers will use the population statistics rather than the batch statistics. Also notice that we set the net.input_real placeholder when we run the generator's optimizer. The generator doesn't actually use it, but we'd get an error without it because of the tf.control_dependencies block we created in model_opt. Step13: Hyperparameters GANs are very sensitive to hyperparameters. A lot of experimentation goes into finding the best hyperparameters such that the generator and discriminator don't overpower each other. Try out your own hyperparameters or read the DCGAN paper to see what worked for them.
Python Code: %matplotlib inline import pickle as pkl import matplotlib.pyplot as plt import numpy as np from scipy.io import loadmat import tensorflow as tf !mkdir data Explanation: Deep Convolutional GANs In this notebook, you'll build a GAN using convolutional layers in the generator and discriminator. This is called a Deep Convolutional GAN, or DCGAN for short. The DCGAN architecture was first explored last year and has seen impressive results in generating new images, you can read the original paper here. You'll be training DCGAN on the Street View House Numbers (SVHN) dataset. These are color images of house numbers collected from Google street view. SVHN images are in color and much more variable than MNIST. So, we'll need a deeper and more powerful network. This is accomplished through using convolutional layers in the discriminator and generator. It's also necessary to use batch normalization to get the convolutional networks to train. The only real changes compared to what you saw previously are in the generator and discriminator, otherwise the rest of the implementation is the same. End of explanation from urllib.request import urlretrieve from os.path import isfile, isdir from tqdm import tqdm data_dir = 'data/' if not isdir(data_dir): raise Exception("Data directory doesn't exist!") class DLProgress(tqdm): last_block = 0 def hook(self, block_num=1, block_size=1, total_size=None): self.total = total_size self.update((block_num - self.last_block) * block_size) self.last_block = block_num if not isfile(data_dir + "train_32x32.mat"): with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Training Set') as pbar: urlretrieve( 'http://ufldl.stanford.edu/housenumbers/train_32x32.mat', data_dir + 'train_32x32.mat', pbar.hook) if not isfile(data_dir + "test_32x32.mat"): with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Training Set') as pbar: urlretrieve( 'http://ufldl.stanford.edu/housenumbers/test_32x32.mat', data_dir + 'test_32x32.mat', pbar.hook) Explanation: Getting the data Here you can download the SVHN dataset. Run the cell above and it'll download to your machine. End of explanation trainset = loadmat(data_dir + 'train_32x32.mat') testset = loadmat(data_dir + 'test_32x32.mat') Explanation: These SVHN files are .mat files typically used with Matlab. However, we can load them in with scipy.io.loadmat which we imported above. End of explanation idx = np.random.randint(0, trainset['X'].shape[3], size=36) fig, axes = plt.subplots(6, 6, sharex=True, sharey=True, figsize=(5,5),) for ii, ax in zip(idx, axes.flatten()): ax.imshow(trainset['X'][:,:,:,ii], aspect='equal') ax.xaxis.set_visible(False) ax.yaxis.set_visible(False) plt.subplots_adjust(wspace=0, hspace=0) Explanation: Here I'm showing a small sample of the images. Each of these is 32x32 with 3 color channels (RGB). These are the real images we'll pass to the discriminator and what the generator will eventually fake. 
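If you want to sanity-check the raw arrays first (a small optional aside, not part of the original notebook), the loaded .mat dictionaries can be inspected directly; note that SVHN labels run from 1 to 10, with 10 standing for the digit 0:

print(trainset['X'].shape, trainset['X'].dtype)  # (32, 32, 3, num_images), uint8
print(testset['X'].shape)
print(np.unique(trainset['y']))                  # label values 1 through 10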
End of explanation def scale(x, feature_range=(-1, 1)): # scale to (0, 1) x = ((x - x.min())/(255 - x.min())) # scale to feature_range min, max = feature_range x = x * (max - min) + min return x class Dataset: def __init__(self, train, test, val_frac=0.5, shuffle=False, scale_func=None): split_idx = int(len(test['y'])*(1 - val_frac)) self.test_x, self.valid_x = test['X'][:,:,:,:split_idx], test['X'][:,:,:,split_idx:] self.test_y, self.valid_y = test['y'][:split_idx], test['y'][split_idx:] self.train_x, self.train_y = train['X'], train['y'] self.train_x = np.rollaxis(self.train_x, 3) self.valid_x = np.rollaxis(self.valid_x, 3) self.test_x = np.rollaxis(self.test_x, 3) if scale_func is None: self.scaler = scale else: self.scaler = scale_func self.shuffle = shuffle def batches(self, batch_size): if self.shuffle: idx = np.arange(len(dataset.train_x)) np.random.shuffle(idx) self.train_x = self.train_x[idx] self.train_y = self.train_y[idx] n_batches = len(self.train_y)//batch_size for ii in range(0, len(self.train_y), batch_size): x = self.train_x[ii:ii+batch_size] y = self.train_y[ii:ii+batch_size] yield self.scaler(x), y Explanation: Here we need to do a bit of preprocessing and getting the images into a form where we can pass batches to the network. First off, we need to rescale the images to a range of -1 to 1, since the output of our generator is also in that range. We also have a set of test and validation images which could be used if we're trying to identify the numbers in the images. End of explanation def model_inputs(real_dim, z_dim): inputs_real = tf.placeholder(tf.float32, (None, *real_dim), name='input_real') inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z') return inputs_real, inputs_z Explanation: Network Inputs Here, just creating some placeholders like normal. End of explanation def generator(z, output_dim, reuse=False, alpha=0.2, training=True): with tf.variable_scope('generator', reuse=reuse): # First fully connected layer x1 = tf.layers.dense(z, 4*4*512) # Reshape it to start the convolutional stack x1 = tf.reshape(x1, (-1, 4, 4, 512)) x1 = tf.layers.batch_normalization(x1, training=training) x1 = tf.maximum(alpha * x1, x1) # 4x4x512 now x2 = tf.layers.conv2d_transpose(x1, 256, 5, strides=2, padding='same') x2 = tf.layers.batch_normalization(x2, training=training) x2 = tf.maximum(alpha * x2, x2) # 8x8x256 now x3 = tf.layers.conv2d_transpose(x2, 128, 5, strides=2, padding='same') x3 = tf.layers.batch_normalization(x3, training=training) x3 = tf.maximum(alpha * x3, x3) # 16x16x128 now # Output layer logits = tf.layers.conv2d_transpose(x3, output_dim, 5, strides=2, padding='same') # 32x32x3 now out = tf.tanh(logits) return out Explanation: Generator Here you'll build the generator network. The input will be our noise vector z as before. Also as before, the output will be a $tanh$ output, but this time with size 32x32 which is the size of our SVHN images. What's new here is we'll use convolutional layers to create our new images. The first layer is a fully connected layer which is reshaped into a deep and narrow layer, something like 4x4x1024 as in the original DCGAN paper. Then we use batch normalization and a leaky ReLU activation. Next is a transposed convolution where typically you'd halve the depth and double the width and height of the previous layer. Again, we use batch normalization and leaky ReLU. For each of these layers, the general scheme is convolution > batch norm > leaky ReLU. 
You keep stack layers up like this until you get the final transposed convolution layer with shape 32x32x3. Below is the archicture used in the original DCGAN paper: Note that the final layer here is 64x64x3, while for our SVHN dataset, we only want it to be 32x32x3. End of explanation def discriminator(x, reuse=False, alpha=0.2): with tf.variable_scope('discriminator', reuse=reuse): # Input layer is 32x32x3 x1 = tf.layers.conv2d(x, 64, 5, strides=2, padding='same') relu1 = tf.maximum(alpha * x1, x1) # 16x16x64 x2 = tf.layers.conv2d(relu1, 128, 5, strides=2, padding='same') bn2 = tf.layers.batch_normalization(x2, training=True) relu2 = tf.maximum(alpha * bn2, bn2) # 8x8x128 x3 = tf.layers.conv2d(relu2, 256, 5, strides=2, padding='same') bn3 = tf.layers.batch_normalization(x3, training=True) relu3 = tf.maximum(alpha * bn3, bn3) # 4x4x256 # Flatten it flat = tf.reshape(relu3, (-1, 4*4*256)) logits = tf.layers.dense(flat, 1) out = tf.sigmoid(logits) return out, logits Explanation: Discriminator Here you'll build the discriminator. This is basically just a convolutional classifier like you've build before. The input to the discriminator are 32x32x3 tensors/images. You'll want a few convolutional layers, then a fully connected layer for the output. As before, we want a sigmoid output, and you'll need to return the logits as well. For the depths of the convolutional layers I suggest starting with 16, 32, 64 filters in the first layer, then double the depth as you add layers. Note that in the DCGAN paper, they did all the downsampling using only strided convolutional layers with no maxpool layers. You'll also want to use batch normalization with tf.layers.batch_normalization on each layer except the first convolutional and output layers. Again, each layer should look something like convolution > batch norm > leaky ReLU. Note: in this project, your batch normalization layers will always use batch statistics. (That is, always set training to True.) That's because we are only interested in using the discriminator to help train the generator. However, if you wanted to use the discriminator for inference later, then you would need to set the training parameter appropriately. End of explanation def model_loss(input_real, input_z, output_dim, alpha=0.2): Get the loss for the discriminator and generator :param input_real: Images from the real dataset :param input_z: Z input :param out_channel_dim: The number of channels in the output image :return: A tuple of (discriminator loss, generator loss) g_model = generator(input_z, output_dim, alpha=alpha) d_model_real, d_logits_real = discriminator(input_real, alpha=alpha) d_model_fake, d_logits_fake = discriminator(g_model, reuse=True, alpha=alpha) d_loss_real = tf.reduce_mean( tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=tf.ones_like(d_model_real))) d_loss_fake = tf.reduce_mean( tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.zeros_like(d_model_fake))) g_loss = tf.reduce_mean( tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_model_fake))) d_loss = d_loss_real + d_loss_fake return d_loss, g_loss Explanation: Model Loss Calculating the loss like before, nothing new here. 
End of explanation def model_opt(d_loss, g_loss, learning_rate, beta1): Get optimization operations :param d_loss: Discriminator loss Tensor :param g_loss: Generator loss Tensor :param learning_rate: Learning Rate Placeholder :param beta1: The exponential decay rate for the 1st moment in the optimizer :return: A tuple of (discriminator training operation, generator training operation) # Get weights and bias to update t_vars = tf.trainable_variables() d_vars = [var for var in t_vars if var.name.startswith('discriminator')] g_vars = [var for var in t_vars if var.name.startswith('generator')] # Optimize with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)): d_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(d_loss, var_list=d_vars) g_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(g_loss, var_list=g_vars) return d_train_opt, g_train_opt Explanation: Optimizers Not much new here, but notice how the train operations are wrapped in a with tf.control_dependencies block so the batch normalization layers can update their population statistics. End of explanation class GAN: def __init__(self, real_size, z_size, learning_rate, alpha=0.2, beta1=0.5): tf.reset_default_graph() self.input_real, self.input_z = model_inputs(real_size, z_size) self.d_loss, self.g_loss = model_loss(self.input_real, self.input_z, real_size[2], alpha=0.2) self.d_opt, self.g_opt = model_opt(self.d_loss, self.g_loss, learning_rate, beta1) Explanation: Building the model Here we can use the functions we defined about to build the model as a class. This will make it easier to move the network around in our code since the nodes and operations in the graph are packaged in one object. End of explanation def view_samples(epoch, samples, nrows, ncols, figsize=(5,5)): fig, axes = plt.subplots(figsize=figsize, nrows=nrows, ncols=ncols, sharey=True, sharex=True) for ax, img in zip(axes.flatten(), samples[epoch]): ax.axis('off') img = ((img - img.min())*255 / (img.max() - img.min())).astype(np.uint8) ax.set_adjustable('box-forced') im = ax.imshow(img, aspect='equal') plt.subplots_adjust(wspace=0, hspace=0) return fig, axes Explanation: Here is a function for displaying generated images. 
End of explanation def train(net, dataset, epochs, batch_size, print_every=10, show_every=100, figsize=(5,5)): saver = tf.train.Saver() sample_z = np.random.uniform(-1, 1, size=(72, z_size)) samples, losses = [], [] steps = 0 with tf.Session() as sess: sess.run(tf.global_variables_initializer()) for e in range(epochs): for x, y in dataset.batches(batch_size): steps += 1 # Sample random noise for G batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size)) # Run optimizers _ = sess.run(net.d_opt, feed_dict={net.input_real: x, net.input_z: batch_z}) _ = sess.run(net.g_opt, feed_dict={net.input_z: batch_z, net.input_real: x}) if steps % print_every == 0: # At the end of each epoch, get the losses and print them out train_loss_d = net.d_loss.eval({net.input_z: batch_z, net.input_real: x}) train_loss_g = net.g_loss.eval({net.input_z: batch_z}) print("Epoch {}/{}...".format(e+1, epochs), "Discriminator Loss: {:.4f}...".format(train_loss_d), "Generator Loss: {:.4f}".format(train_loss_g)) # Save losses to view after training losses.append((train_loss_d, train_loss_g)) if steps % show_every == 0: gen_samples = sess.run( generator(net.input_z, 3, reuse=True, training=False), feed_dict={net.input_z: sample_z}) samples.append(gen_samples) _ = view_samples(-1, samples, 6, 12, figsize=figsize) plt.show() saver.save(sess, './checkpoints/generator.ckpt') with open('samples.pkl', 'wb') as f: pkl.dump(samples, f) return losses, samples Explanation: And another function we can use to train our network. Notice when we call generator to create the samples to display, we set training to False. That's so the batch normalization layers will use the population statistics rather than the batch statistics. Also notice that we set the net.input_real placeholder when we run the generator's optimizer. The generator doesn't actually use it, but we'd get an errror without it because of the tf.control_dependencies block we created in model_opt. End of explanation # ORIGINAL real_size = (32,32,3) z_size = 100 learning_rate = 0.001 batch_size = 64 epochs = 1 alpha = 0.01 beta1 = 0.9 # BEST - HERE real_size = (32,32,3) z_size = 100 learning_rate = 0.0002 batch_size = 128 epochs = 1 # 25 alpha = 0.2 beta1 = 0.5 # Create the network net = GAN(real_size, z_size, learning_rate, alpha=alpha, beta1=beta1) dataset = Dataset(trainset, testset) losses, samples = train(net, dataset, epochs, batch_size, figsize=(10,5)) fig, ax = plt.subplots() losses = np.array(losses) plt.plot(losses.T[0], label='Discriminator', alpha=0.5) plt.plot(losses.T[1], label='Generator', alpha=0.5) plt.title("Training Losses") plt.legend() fig, ax = plt.subplots() losses = np.array(losses) plt.plot(losses.T[0], label='Discriminator', alpha=0.5) plt.plot(losses.T[1], label='Generator', alpha=0.5) plt.title("Training Losses") plt.legend() _ = view_samples(-1, samples, 6, 12, figsize=(10,5)) _ = view_samples(-1, samples, 6, 12, figsize=(10,5)) Explanation: Hyperparameters GANs are very senstive to hyperparameters. A lot of experimentation goes into finding the best hyperparameters such that the generator and discriminator don't overpower each other. Try out your own hyperparameters or read the DCGAN paper to see what worked for them. End of explanation
2,087
Given the following text description, write Python code to implement the functionality described below step by step Description: Wissenschaftliches Python Tutorial Nachdem wir uns im Python Tutorial um die Grundlagen gekümmert haben, wollen wir uns nun mit einigen Bibliotheken beschäftigen, die das wissenschaftliche Arbeiten erleichtern. Diese sind Numpy für effiziente Berechnungen auf strukturierten Daten Matplotlib bietet eine einfache Möglichkeit Daten schön darzustellen Scipy enthält mathematische Funktionen und Algorithmen für statistische Berechnungen, Fits, etc. Zunächst laden wir die Bibliotheken. Sollte dabei ein Fehler auftreten, stell bitte sicher, dass bei der Installation alles geklappt hat und du kein Paket vergessen hast. Die erste Zeile mit dem %-Zeichen ist sogenannte "Magie", die dafür sorgt, dass Plots im Notebook dargestellt werden. Step1: Numpy Step2: Numpy Arrays unterstützen arithmetische Operationen, die wiederum effizient implementiert sind. Beispielsweise lassen sich zwei Arrays (elementweise) addieren sofern sie die gleichen Dimensionen haben. Step3: Um einen Überblick über alle Features von Numpy zu bekommen, können wir die Hilfe zu Rate ziehen. Zusätzlich zur help Funktion bietet IPython auch die ?-Magie mit einer besseren Integration in Jupyter Step4: Für die Übungsaufgaben werden wir häufig Zufallszahlen brauchen. Dafür bietet sich die Verwendung von np.random an. Step5: Matplotlib Step6: Falls dir dieser Plot zu steril ist, können wir den Stil der bekannten R-Bibliothek ggplot2 verwenden. Step7: Wir wollen nun die Anzahl Bins erhöhen und zusätzlich das Histogramm normieren, damit wir die normierte Verteilungsfunktion (PDF) eintragen können. Step8: Scipy Step9: Das sieht doch schon mal hübsch aus. Zum Abschluss wollen wir noch Unsicherheiten auf die Bins berechnen und in das Histogramm eintragen. Um es einfach zu halten, verwenden wir nicht die normierte PDF, sondern skalieren unsere PDF auf unsere Daten.
Python Code: %matplotlib inline import numpy as np import matplotlib.pyplot as plt import scipy Explanation: Wissenschaftliches Python Tutorial Nachdem wir uns im Python Tutorial um die Grundlagen gekümmert haben, wollen wir uns nun mit einigen Bibliotheken beschäftigen, die das wissenschaftliche Arbeiten erleichtern. Diese sind Numpy für effiziente Berechnungen auf strukturierten Daten Matplotlib bietet eine einfache Möglichkeit Daten schön darzustellen Scipy enthält mathematische Funktionen und Algorithmen für statistische Berechnungen, Fits, etc. Zunächst laden wir die Bibliotheken. Sollte dabei ein Fehler auftreten, stell bitte sicher, dass bei der Installation alles geklappt hat und du kein Paket vergessen hast. Die erste Zeile mit dem %-Zeichen ist sogenannte "Magie", die dafür sorgt, dass Plots im Notebook dargestellt werden. End of explanation xs = np.array([1, 2, 3, 4]) # Konvertiert eine Python-Liste in ein Numpy-Array print(xs) ys = np.arange(4) # Erzeugt ein Array analog zur `range` Funktion print(ys) Explanation: Numpy: Arrays und effiziente Berechnungen Das Herzstück von Numpy ist das Array. Dieser Datentyp repräsentiert eine Matrix und ist unter der Haube in C implementiert. Dabei wird großer Wert auf effiziente Speichernutzung gelegt. Der gängige Satz "Python ist viel zu langsam" ist also nicht zwingend wahr. Wir können Arrays auf verschiedene Arten erzeugen. End of explanation xs + ys Explanation: Numpy Arrays unterstützen arithmetische Operationen, die wiederum effizient implementiert sind. Beispielsweise lassen sich zwei Arrays (elementweise) addieren sofern sie die gleichen Dimensionen haben. End of explanation np? Explanation: Um einen Überblick über alle Features von Numpy zu bekommen, können wir die Hilfe zu Rate ziehen. Zusätzlich zur help Funktion bietet IPython auch die ?-Magie mit einer besseren Integration in Jupyter End of explanation np.random? n_events = 10000 gauss = np.random.normal(2, 3, size=n_events) # Erzeuge 10000 Gauß-verteilte Zufallszahlen # mit µ=2 und σ=3. Explanation: Für die Übungsaufgaben werden wir häufig Zufallszahlen brauchen. Dafür bietet sich die Verwendung von np.random an. End of explanation plt.hist(gauss) plt.xlabel('Wert') plt.ylabel('Absolute Häufigkeit') Explanation: Matplotlib: Schöne Plots Matplotlib bietet sehr intuitive Funktionen um Daten darzustellen. Die sehr ausführliche Dokumentation bietet einen guten Überblick. Wir benutzen an dieser Stelle nur das pyplot Submodul, das uns ein einfaches Interface für die Kernfunktionalität bietet. In der Matplotlib Galerie finden sich viele schöne Beispiele mit Codeschnipseln. Um unsere Gauß-verteilten Zufallszahlen zu histogrammieren benutzen wir einfach plt.hist. Außerdem setzen wir gleich Achsenbeschriftungen. End of explanation plt.style.use('ggplot') Explanation: Falls dir dieser Plot zu steril ist, können wir den Stil der bekannten R-Bibliothek ggplot2 verwenden. End of explanation plt.hist(gauss, bins=20, normed=True) plt.xlabel('Wert') plt.ylabel('Relative Häufigkeit') Explanation: Wir wollen nun die Anzahl Bins erhöhen und zusätzlich das Histogramm normieren, damit wir die normierte Verteilungsfunktion (PDF) eintragen können. End of explanation import scipy.stats pdf = scipy.stats.norm(2, 3).pdf xs = np.linspace(-15, 15, 5000) # Erzeuge 5000 äquidistante Werte im Interval [-15, 15). 
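# (Added note, not part of the original tutorial:) the frozen distribution object
# scipy.stats.norm(2, 3) also exposes .cdf and .ppf; for example, the probability
# of a draw falling within one standard deviation of the mean is about 0.683:
# scipy.stats.norm(2, 3).cdf(5) - scipy.stats.norm(2, 3).cdf(-1)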
plt.hist(gauss, bins=20, normed=True, label='Werte') plt.plot(xs, pdf(xs), label='PDF') plt.xlabel('Wert') plt.ylabel('Relative Häufigkeit') plt.legend() Explanation: Scipy: Statistische Funktionen und mehr Die PDF erhalten wir ebenfalls aus Scipy. Um sie plotten zu können, müssen wir sie auf eine Reihe von Werten anwenden um Datenpunkte zu erhalten. Hier zeigt sich erneut die Stärke von Numpy: wir können einfach die Funktion auf das ganze Array anwenden und erhalten ein Array von Ergebnissen. Scipy ist modular aufgebaut, so dass wir mit unserem obigen Import nicht alle Untermodule enthalten. Wir müssen das Statistikmodul explizit importieren. End of explanation bins, edges = np.histogram(gauss, bins=20) bin_width = edges[1] - edges[0] # Alle Bins haben die gleiche Breite centres = edges[:-1] + bin_width / 2 def scaled_pdf(x): return bin_width * n_events * pdf(x) plt.errorbar( # Typisches "Teilchenphysikerhistorgamm" centres, # x bins, # y xerr=bin_width/2, # Unsicherheit auf x: hier Breite der Bins yerr=np.sqrt(bins), # Unsicherheit auf y fmt='o', # Benutze Punkte statt Linien zur Darstellung label='Data' ) plt.plot(xs, scaled_pdf(xs), label='PDF') plt.xlabel('Wert') plt.ylabel('Relative Häufigkeit') plt.ylim(-100, 2000) # Manuelles Setzen des sichtbaren vertikalen Ausschnittes plt.legend() Explanation: Das sieht doch schon mal hübsch aus. Zum Abschluss wollen wir noch Unsicherheiten auf die Bins berechnen und in das Histogramm eintragen. Um es einfach zu halten, verwenden wir nicht die normierte PDF, sondern skalieren unsere PDF auf unsere Daten. End of explanation
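The introduction above also mentions fitting routines in Scipy. As a small added sketch (assuming the gauss sample generated earlier is still in memory), the maximum-likelihood estimates of µ and σ can be recovered with scipy.stats.norm.fit and compared to the true values 2 and 3:

import scipy.stats

mu_hat, sigma_hat = scipy.stats.norm.fit(gauss)  # ML estimates from the 10000 draws
print("mu_hat    = {:.3f} (true value 2)".format(mu_hat))
print("sigma_hat = {:.3f} (true value 3)".format(sigma_hat))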
2,088
Given the following text description, write Python code to implement the functionality described below step by step Description: Setup Step1: Code Day 1 Step2: We will form the direction map since they are finite. Step3: Day 2 Step4: part2 You finally arrive at the bathroom (it's a several minute walk from the lobby so visitors can behold the many fancy conference rooms and water coolers on this floor) and go to punch in the code. Much to your bladder's dismay, the keypad is not at all like you imagined it. Instead, you are confronted with the result of hundreds of man-hours of bathroom-keypad-design meetings Step5: Day3 squares With Three Sides part1 Now that you can think clearly, you move deeper into the labyrinth of hallways and office furniture that makes up this part of Easter Bunny HQ. This must be a graphic design department; the walls are covered in specifications for triangles. Or are they? The design document gives the side lengths of each triangle it describes, but... 5 10 25? Some of these aren't triangles. You can't help but mark the impossible ones. In a valid triangle, the sum of any two sides must be larger than the remaining side. For example, the "triangle" given above is impossible, because 5 + 10 is not larger than 25. In your puzzle input, how many of the listed triangles are possible? Step7: part2 Now that you've helpfully marked up their design documents, it occurs to you that triangles are specified in groups of three vertically. Each set of three numbers in a column specifies a triangle. Rows are unrelated. For example, given the following specification, numbers with the same hundreds digit would be part of the same triangle Step8: Day4 part1 Step9: part2 With all the decoy data out of the way, it's time to decrypt this list and get moving. The room names are encrypted by a state-of-the-art shift cipher, which is nearly unbreakable without the right software. However, the information kiosk designers at Easter Bunny HQ were not expecting to deal with a master cryptographer like yourself. To decrypt a room name, rotate each letter forward through the alphabet a number of times equal to the room's sector ID. A becomes B, B becomes C, Z becomes A, and so on. Dashes become spaces. For example, the real name for qzmt-zixmtkozy-ivhz-343 is very encrypted name. What is the sector ID of the room where North Pole objects are stored?
Python Code: import sys import os import re import collections import itertools import bcolz import pickle import numpy as np import pandas as pd import gc import random import smart_open import h5py import csv import tensorflow as tf import gensim import string import datetime as dt from tqdm import tqdm_notebook as tqdm import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns random_state_number = 967898 Explanation: Setup End of explanation ! cat day1_input.txt input_data = None with open("day1_input.txt") as f: input_data = f.read().strip().split() input_data = [w.strip(",") for w in input_data ] Explanation: Code Day 1: Inverse Captcha The captcha requires you to review a sequence of digits (your puzzle input) and find the sum of all digits that match the next digit in the list. The list is circular, so the digit after the last digit is the first digit in the list. End of explanation directions = { ("N","R") : ("E",0,1), ("N","L") : ("W",0,-1), ("W","R") : ("N",1,1), ("W","L") : ("S",1,-1), ("E","R") : ("S",1,-1), ("E","L") : ("N",1,1), ("S","R") : ("W",0,-1), ("S","L") : ("E",0,1) } def get_distance(data): d,pos = "N",[0,0] for code in data: d1,v = code[0], int(code[1:]) d,i,m = directions[(d, code[0])] pos[i] += m*v #print(code,d,v,pos) return sum([abs(n) for n in pos]) data = ["R2", "R2", "R2"] get_distance(data) data = ["R5", "L5", "R5", "R3"] get_distance(data) get_distance(input_data) Explanation: We will form the direction map since they are finite. End of explanation input_data = None with open("day2_input.txt") as f: input_data = f.read().strip().split() def get_codes(data, keypad, keypad_max_size, start_index=(1,1), verbose=False): r,c = start_index digit = "" for codes in data: if verbose: print(" ",codes) for code in codes: if verbose: print(" before",r,c,keypad[r][c]) if code == 'R' and c+1 < keypad_max_size and keypad[r][c+1] is not None: c += 1 elif code == 'L' and c-1 >= 0 and keypad[r][c-1] is not None: c -= 1 elif code == 'U' and r-1 >= 0 and keypad[r-1][c] is not None: r -= 1 elif code == 'D' and r+1 < keypad_max_size and keypad[r+1][c] is not None: r += 1 if verbose: print(" after",code,r,c,keypad[r][c]) digit += str(keypad[r][c]) return digit sample = ["ULL", "RRDDD", "LURDL", "UUUUD"] keypad = [[1,2,3],[4,5,6],[7,8,9]] get_codes(sample, keypad, keypad_max_size=3) keypad = [[1,2,3],[4,5,6],[7,8,9]] get_codes(input_data, keypad, keypad_max_size=3) Explanation: Day 2: Bathroom Security part1 You arrive at Easter Bunny Headquarters under cover of darkness. However, you left in such a rush that you forgot to use the bathroom! Fancy office buildings like this one usually have keypad locks on their bathrooms, so you search the front desk for the code. "In order to improve security," the document you find says, "bathroom codes will no longer be written down. Instead, please memorize and follow the procedure below to access the bathrooms." The document goes on to explain that each button to be pressed can be found by starting on the previous button and moving to adjacent buttons on the keypad: U moves up, D moves down, L moves left, and R moves right. Each line of instructions corresponds to one button, starting at the previous button (or, for the first line, the "5" button); press whatever button you're on at the end of each line. If a move doesn't lead to a button, ignore it. You can't hold it much longer, so you decide to figure out the code as you walk to the bathroom. 
You picture a keypad like this: 1 2 3 4 5 6 7 8 9 Suppose your instructions are: ULL RRDDD LURDL UUUUD You start at "5" and move up (to "2"), left (to "1"), and left (you can't, and stay on "1"), so the first button is 1. Starting from the previous button ("1"), you move right twice (to "3") and then down three times (stopping at "9" after two moves and ignoring the third), ending up with 9. Continuing from "9", you move left, up, right, down, and left, ending with 8. Finally, you move up four times (stopping at "2"), then down once, ending with 5. So, in this example, the bathroom code is 1985. Your puzzle input is the instructions from the document you found at the front desk. What is the bathroom code? End of explanation input_data = None with open("day21_input.txt") as f: input_data = f.read().strip().split() keypad = [[None, None, 1, None, None], [None, 2, 3, 4, None], [ 5, 6, 7, 8, None], [None, 'A', 'B', 'C', None], [None, None, 'D', None, None]] sample = ["ULL", "RRDDD", "LURDL", "UUUUD"] get_codes(sample, keypad, keypad_max_size=5, start_index=(2,0), verbose=False) get_codes(input_data, keypad, keypad_max_size=5, start_index=(2,0), verbose=False) Explanation: part2 You finally arrive at the bathroom (it's a several minute walk from the lobby so visitors can behold the many fancy conference rooms and water coolers on this floor) and go to punch in the code. Much to your bladder's dismay, the keypad is not at all like you imagined it. Instead, you are confronted with the result of hundreds of man-hours of bathroom-keypad-design meetings: 1 2 3 4 5 6 7 8 9 A B C D You still start at "5" and stop when you're at an edge, but given the same instructions as above, the outcome is very different: You start at "5" and don't move at all (up and left are both edges), ending at 5. Continuing from "5", you move right twice and down three times (through "6", "7", "B", "D", "D"), ending at D. Then, from "D", you move five more times (through "D", "B", "C", "C", "B"), ending at B. Finally, after five more moves, you end at 3. So, given the actual keypad layout, the code would be 5DB3. Using the same instructions in your puzzle input, what is the correct bathroom code? Although it hasn't changed, you can still get your puzzle input. End of explanation input_data = None with open("day3_input.txt") as f: input_data = f.read().strip().split("\n") input_data = [list(map(int, l.strip().split())) for l in input_data] result = [ (sides[0]+sides[1] > sides[2]) and (sides[2]+sides[1] > sides[0]) and (sides[0]+sides[2] > sides[1]) for sides in input_data] sum(result) Explanation: Day3 squares With Three Sides part1 Now that you can think clearly, you move deeper into the labyrinth of hallways and office furniture that makes up this part of Easter Bunny HQ. This must be a graphic design department; the walls are covered in specifications for triangles. Or are they? The design document gives the side lengths of each triangle it describes, but... 5 10 25? Some of these aren't triangles. You can't help but mark the impossible ones. In a valid triangle, the sum of any two sides must be larger than the remaining side. For example, the "triangle" given above is impossible, because 5 + 10 is not larger than 25. In your puzzle input, how many of the listed triangles are possible? 
End of explanation input_data = None with open("day31_input.txt") as f: input_data = f.read().strip().split("\n") input_data = [list(map(int, l.strip().split())) for l in input_data] input_data[:5] def chunks(l, n): Yield successive n-sized chunks from l. for i in range(0, len(l), n): yield l[i:i + n] single_list = [input_data[r][c] for c in [0,1,2] for r in range(len(input_data))] result = [ (sides[0]+sides[1] > sides[2]) and (sides[2]+sides[1] > sides[0]) and (sides[0]+sides[2] > sides[1]) for sides in chunks(single_list, 3)] sum(result) Explanation: part2 Now that you've helpfully marked up their design documents, it occurs to you that triangles are specified in groups of three vertically. Each set of three numbers in a column specifies a triangle. Rows are unrelated. For example, given the following specification, numbers with the same hundreds digit would be part of the same triangle: 101 301 501 102 302 502 103 303 503 201 401 601 202 402 602 203 403 603 In your puzzle input, and instead reading by columns, how many of the listed triangles are possible? End of explanation input_data = None with open("day4_input.txt") as f: input_data = f.read().strip().split("\n") len(input_data), input_data[:5] answer = 0 for code in input_data: m = re.match(r'(.+)-(\d+)\[([a-z]*)\]', code) code, sector, checksum = m.groups() code = code.replace("-","") counts = collections.Counter(code).most_common() counts.sort(key=lambda k: (-k[1], k[0])) if ''.join([ch for ch,_ in counts[:5]]) == checksum: answer += int(sector) answer Explanation: Day4 part1: Security Through Obscurity Finally, you come across an information kiosk with a list of rooms. Of course, the list is encrypted and full of decoy data, but the instructions to decode the list are barely hidden nearby. Better remove the decoy data first. Each room consists of an encrypted name (lowercase letters separated by dashes) followed by a dash, a sector ID, and a checksum in square brackets. A room is real (not a decoy) if the checksum is the five most common letters in the encrypted name, in order, with ties broken by alphabetization. For example: aaaaa-bbb-z-y-x-123[abxyz] is a real room because the most common letters are a (5), b (3), and then a tie between x, y, and z, which are listed alphabetically. a-b-c-d-e-f-g-h-987[abcde] is a real room because although the letters are all tied (1 of each), the first five are listed alphabetically. not-a-real-room-404[oarel] is a real room. totally-real-room-200[decoy] is not. Of the real rooms from the list above, the sum of their sector IDs is 1514. What is the sum of the sector IDs of the real rooms? End of explanation for code in input_data: m = re.match(r'(.+)-(\d+)\[([a-z]*)\]', code) code, sector, checksum = m.groups() sector = int(sector) code = code.replace("-","") counts = collections.Counter(code).most_common() counts.sort(key=lambda k: (-k[1], k[0])) string_maps = string.ascii_lowercase cipher_table = str.maketrans(string_maps, string_maps[sector%26:] + string_maps[:sector%26]) if ''.join([ch for ch,_ in counts[:5]]) == checksum: if "north" in code.translate(cipher_table): print(code.translate(cipher_table)) print("sector",sector) Explanation: part2 With all the decoy data out of the way, it's time to decrypt this list and get moving. The room names are encrypted by a state-of-the-art shift cipher, which is nearly unbreakable without the right software. However, the information kiosk designers at Easter Bunny HQ were not expecting to deal with a master cryptographer like yourself. 
To decrypt a room name, rotate each letter forward through the alphabet a number of times equal to the room's sector ID. A becomes B, B becomes C, Z becomes A, and so on. Dashes become spaces. For example, the real name for qzmt-zixmtkozy-ivhz-343 is very encrypted name. What is the sector ID of the room where North Pole objects are stored? End of explanation
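As a small optional refactor (my addition), the shift-cipher logic from part 2 can be wrapped in a standalone helper; applied to the example room name quoted in the puzzle text it should print "very encrypted name".

import string

def decrypt_room_name(name, sector):
    # Rotate each lowercase letter forward by sector (mod 26); dashes become spaces.
    shift = sector % 26
    table = str.maketrans(string.ascii_lowercase,
                          string.ascii_lowercase[shift:] + string.ascii_lowercase[:shift])
    return name.replace("-", " ").translate(table)

print(decrypt_room_name("qzmt-zixmtkozy-ivhz", 343))  # -> very encrypted name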
2,089
Given the following text description, write Python code to implement the functionality described below step by step Description: Gradient Checking Welcome to the final assignment for this week! In this assignment you will learn to implement and use gradient checking. You are part of a team working to make mobile payments available globally, and are asked to build a deep learning model to detect fraud--whenever someone makes a payment, you want to see if the payment might be fraudulent, such as if the user's account has been taken over by a hacker. But backpropagation is quite challenging to implement, and sometimes has bugs. Because this is a mission-critical application, your company's CEO wants to be really certain that your implementation of backpropagation is correct. Your CEO says, "Give me a proof that your backpropagation is actually working!" To give this reassurance, you are going to use "gradient checking". Let's do it! Step2: 1) How does gradient checking work? Backpropagation computes the gradients $\frac{\partial J}{\partial \theta}$, where $\theta$ denotes the parameters of the model. $J$ is computed using forward propagation and your loss function. Because forward propagation is relatively easy to implement, you're confident you got that right, and so you're almost 100% sure that you're computing the cost $J$ correctly. Thus, you can use your code for computing $J$ to verify the code for computing $\frac{\partial J}{\partial \theta}$. Let's look back at the definition of a derivative (or gradient) Step4: Expected Output Step6: Expected Output Step8: Expected Output Step10: Now, run backward propagation. Step12: You obtained some results on the fraud detection test set but you are not 100% sure of your model. Nobody's perfect! Let's implement gradient checking to verify if your gradients are correct. How does gradient checking work?. As in 1) and 2), you want to compare "gradapprox" to the gradient computed by backpropagation. The formula is still
Python Code: # Packages import numpy as np from testCases import * from gc_utils import sigmoid, relu, dictionary_to_vector, vector_to_dictionary, gradients_to_vector Explanation: Gradient Checking Welcome to the final assignment for this week! In this assignment you will learn to implement and use gradient checking. You are part of a team working to make mobile payments available globally, and are asked to build a deep learning model to detect fraud--whenever someone makes a payment, you want to see if the payment might be fraudulent, such as if the user's account has been taken over by a hacker. But backpropagation is quite challenging to implement, and sometimes has bugs. Because this is a mission-critical application, your company's CEO wants to be really certain that your implementation of backpropagation is correct. Your CEO says, "Give me a proof that your backpropagation is actually working!" To give this reassurance, you are going to use "gradient checking". Let's do it! End of explanation # GRADED FUNCTION: forward_propagation def forward_propagation(x, theta): Implement the linear forward propagation (compute J) presented in Figure 1 (J(theta) = theta * x) Arguments: x -- a real-valued input theta -- our parameter, a real number as well Returns: J -- the value of function J, computed using the formula J(theta) = theta * x ### START CODE HERE ### (approx. 1 line) J = x*theta ### END CODE HERE ### return J x, theta = 2, 4 J = forward_propagation(x, theta) print ("J = " + str(J)) Explanation: 1) How does gradient checking work? Backpropagation computes the gradients $\frac{\partial J}{\partial \theta}$, where $\theta$ denotes the parameters of the model. $J$ is computed using forward propagation and your loss function. Because forward propagation is relatively easy to implement, you're confident you got that right, and so you're almost 100% sure that you're computing the cost $J$ correctly. Thus, you can use your code for computing $J$ to verify the code for computing $\frac{\partial J}{\partial \theta}$. Let's look back at the definition of a derivative (or gradient): $$ \frac{\partial J}{\partial \theta} = \lim_{\varepsilon \to 0} \frac{J(\theta + \varepsilon) - J(\theta - \varepsilon)}{2 \varepsilon} \tag{1}$$ If you're not familiar with the "$\displaystyle \lim_{\varepsilon \to 0}$" notation, it's just a way of saying "when $\varepsilon$ is really really small." We know the following: $\frac{\partial J}{\partial \theta}$ is what you want to make sure you're computing correctly. You can compute $J(\theta + \varepsilon)$ and $J(\theta - \varepsilon)$ (in the case that $\theta$ is a real number), since you're confident your implementation for $J$ is correct. Lets use equation (1) and a small value for $\varepsilon$ to convince your CEO that your code for computing $\frac{\partial J}{\partial \theta}$ is correct! 2) 1-dimensional gradient checking Consider a 1D linear function $J(\theta) = \theta x$. The model contains only a single real-valued parameter $\theta$, and takes $x$ as input. You will implement code to compute $J(.)$ and its derivative $\frac{\partial J}{\partial \theta}$. You will then use gradient checking to make sure your derivative computation for $J$ is correct. <img src="images/1Dgrad_kiank.png" style="width:600px;height:250px;"> <caption><center> <u> Figure 1 </u>: 1D linear model<br> </center></caption> The diagram above shows the key computation steps: First start with $x$, then evaluate the function $J(x)$ ("forward propagation"). 
Then compute the derivative $\frac{\partial J}{\partial \theta}$ ("backward propagation"). Exercise: implement "forward propagation" and "backward propagation" for this simple function. I.e., compute both $J(.)$ ("forward propagation") and its derivative with respect to $\theta$ ("backward propagation"), in two separate functions. End of explanation # GRADED FUNCTION: backward_propagation def backward_propagation(x, theta): Computes the derivative of J with respect to theta (see Figure 1). Arguments: x -- a real-valued input theta -- our parameter, a real number as well Returns: dtheta -- the gradient of the cost with respect to theta ### START CODE HERE ### (approx. 1 line) dtheta = x ### END CODE HERE ### return dtheta x, theta = 2, 4 dtheta = backward_propagation(x, theta) print ("dtheta = " + str(dtheta)) Explanation: Expected Output: <table style=> <tr> <td> ** J ** </td> <td> 8</td> </tr> </table> Exercise: Now, implement the backward propagation step (derivative computation) of Figure 1. That is, compute the derivative of $J(\theta) = \theta x$ with respect to $\theta$. To save you from doing the calculus, you should get $dtheta = \frac { \partial J }{ \partial \theta} = x$. End of explanation # GRADED FUNCTION: gradient_check def gradient_check(x, theta, epsilon = 1e-7): Implement the backward propagation presented in Figure 1. Arguments: x -- a real-valued input theta -- our parameter, a real number as well epsilon -- tiny shift to the input to compute approximated gradient with formula(1) Returns: difference -- difference (2) between the approximated gradient and the backward propagation gradient # Compute gradapprox using left side of formula (1). epsilon is small enough, you don't need to worry about the limit. ### START CODE HERE ### (approx. 5 lines) thetaplus = theta+epsilon # Step 1 thetaminus = theta-epsilon # Step 2 J_plus = forward_propagation(x, thetaplus) # Step 3 J_minus = forward_propagation(x, thetaminus) # Step 4 gradapprox = (J_plus-J_minus)/(2*epsilon) # Step 5 ### END CODE HERE ### # Check if gradapprox is close enough to the output of backward_propagation() ### START CODE HERE ### (approx. 1 line) grad = backward_propagation(x, theta) ### END CODE HERE ### ### START CODE HERE ### (approx. 1 line) numerator = np.linalg.norm(grad-gradapprox) # Step 1' denominator = np.linalg.norm(grad)+np.linalg.norm(gradapprox) # Step 2' difference = numerator/denominator # Step 3' ### END CODE HERE ### if difference < 1e-7: print ("The gradient is correct!") else: print ("The gradient is wrong!") return difference x, theta = 2, 4 difference = gradient_check(x, theta) print("difference = " + str(difference)) Explanation: Expected Output: <table> <tr> <td> ** dtheta ** </td> <td> 2 </td> </tr> </table> Exercise: To show that the backward_propagation() function is correctly computing the gradient $\frac{\partial J}{\partial \theta}$, let's implement gradient checking. Instructions: - First compute "gradapprox" using the formula above (1) and a small value of $\varepsilon$. Here are the Steps to follow: 1. $\theta^{+} = \theta + \varepsilon$ 2. $\theta^{-} = \theta - \varepsilon$ 3. $J^{+} = J(\theta^{+})$ 4. $J^{-} = J(\theta^{-})$ 5. 
$gradapprox = \frac{J^{+} - J^{-}}{2 \varepsilon}$ - Then compute the gradient using backward propagation, and store the result in a variable "grad" - Finally, compute the relative difference between "gradapprox" and the "grad" using the following formula: $$ difference = \frac {\mid\mid grad - gradapprox \mid\mid_2}{\mid\mid grad \mid\mid_2 + \mid\mid gradapprox \mid\mid_2} \tag{2}$$ You will need 3 Steps to compute this formula: - 1'. compute the numerator using np.linalg.norm(...) - 2'. compute the denominator. You will need to call np.linalg.norm(...) twice. - 3'. divide them. - If this difference is small (say less than $10^{-7}$), you can be quite confident that you have computed your gradient correctly. Otherwise, there may be a mistake in the gradient computation. End of explanation def forward_propagation_n(X, Y, parameters): Implements the forward propagation (and computes the cost) presented in Figure 3. Arguments: X -- training set for m examples Y -- labels for m examples parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3": W1 -- weight matrix of shape (5, 4) b1 -- bias vector of shape (5, 1) W2 -- weight matrix of shape (3, 5) b2 -- bias vector of shape (3, 1) W3 -- weight matrix of shape (1, 3) b3 -- bias vector of shape (1, 1) Returns: cost -- the cost function (logistic cost for one example) # retrieve parameters m = X.shape[1] W1 = parameters["W1"] b1 = parameters["b1"] W2 = parameters["W2"] b2 = parameters["b2"] W3 = parameters["W3"] b3 = parameters["b3"] # LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID Z1 = np.dot(W1, X) + b1 A1 = relu(Z1) Z2 = np.dot(W2, A1) + b2 A2 = relu(Z2) Z3 = np.dot(W3, A2) + b3 A3 = sigmoid(Z3) # Cost logprobs = np.multiply(-np.log(A3),Y) + np.multiply(-np.log(1 - A3), 1 - Y) cost = 1./m * np.sum(logprobs) cache = (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) return cost, cache Explanation: Expected Output: The gradient is correct! <table> <tr> <td> ** difference ** </td> <td> 2.9193358103083e-10 </td> </tr> </table> Congrats, the difference is smaller than the $10^{-7}$ threshold. So you can have high confidence that you've correctly computed the gradient in backward_propagation(). Now, in the more general case, your cost function $J$ has more than a single 1D input. When you are training a neural network, $\theta$ actually consists of multiple matrices $W^{[l]}$ and biases $b^{[l]}$! It is important to know how to do a gradient check with higher-dimensional inputs. Let's do it! 3) N-dimensional gradient checking The following figure describes the forward and backward propagation of your fraud detection model. <img src="images/NDgrad_kiank.png" style="width:600px;height:400px;"> <caption><center> <u> Figure 2 </u>: deep neural network<br>LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID</center></caption> Let's look at your implementations for forward propagation and backward propagation. End of explanation def backward_propagation_n(X, Y, cache): Implement the backward propagation presented in figure 2. Arguments: X -- input datapoint, of shape (input size, 1) Y -- true "label" cache -- cache output from forward_propagation_n() Returns: gradients -- A dictionary with the gradients of the cost with respect to each parameter, activation and pre-activation variables. 
m = X.shape[1] (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache dZ3 = A3 - Y dW3 = 1./m * np.dot(dZ3, A2.T) db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True) dA2 = np.dot(W3.T, dZ3) dZ2 = np.multiply(dA2, np.int64(A2 > 0)) dW2 = 1./m * np.dot(dZ2, A1.T) * 2 db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True) dA1 = np.dot(W2.T, dZ2) dZ1 = np.multiply(dA1, np.int64(A1 > 0)) dW1 = 1./m * np.dot(dZ1, X.T) db1 = 4./m * np.sum(dZ1, axis=1, keepdims = True) gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3, "dA2": dA2, "dZ2": dZ2, "dW2": dW2, "db2": db2, "dA1": dA1, "dZ1": dZ1, "dW1": dW1, "db1": db1} return gradients Explanation: Now, run backward propagation. End of explanation # GRADED FUNCTION: gradient_check_n def gradient_check_n(parameters, gradients, X, Y, epsilon = 1e-7): Checks if backward_propagation_n computes correctly the gradient of the cost output by forward_propagation_n Arguments: parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3": grad -- output of backward_propagation_n, contains gradients of the cost with respect to the parameters. x -- input datapoint, of shape (input size, 1) y -- true "label" epsilon -- tiny shift to the input to compute approximated gradient with formula(1) Returns: difference -- difference (2) between the approximated gradient and the backward propagation gradient # Set-up variables parameters_values, _ = dictionary_to_vector(parameters) grad = gradients_to_vector(gradients) num_parameters = parameters_values.shape[0] J_plus = np.zeros((num_parameters, 1)) J_minus = np.zeros((num_parameters, 1)) gradapprox = np.zeros((num_parameters, 1)) # Compute gradapprox for i in range(num_parameters): # Compute J_plus[i]. Inputs: "parameters_values, epsilon". Output = "J_plus[i]". # "_" is used because the function you have to outputs two parameters but we only care about the first one ### START CODE HERE ### (approx. 3 lines) thetaplus = np.copy(parameters_values) # Step 1 thetaplus[i][0] = thetaplus[i][0]+epsilon # Step 2 J_plus[i], _ = forward_propagation_n(X, Y,vector_to_dictionary(thetaplus)) # Step 3 ### END CODE HERE ### # Compute J_minus[i]. Inputs: "parameters_values, epsilon". Output = "J_minus[i]". ### START CODE HERE ### (approx. 3 lines) thetaminus = np.copy(parameters_values) # Step 1 thetaminus[i][0] = thetaminus[i][0]-epsilon # Step 2 J_minus[i], _ = forward_propagation_n(X, Y,vector_to_dictionary(thetaminus)) # Step 3 ### END CODE HERE ### # Compute gradapprox[i] ### START CODE HERE ### (approx. 1 line) gradapprox[i] = (J_plus[i] - J_minus[i])/(2*epsilon) ### END CODE HERE ### # Compare gradapprox to backward propagation gradients by computing difference. ### START CODE HERE ### (approx. 1 line) numerator = np.linalg.norm(grad-gradapprox, ord=2) # Step 1' denominator = np.linalg.norm(grad, ord=2)+np.linalg.norm(gradapprox, ord=2) # Step 2' difference = numerator/denominator # Step 3' ### END CODE HERE ### if difference > 2e-7: print ("\033[93m" + "There is a mistake in the backward propagation! difference = " + str(difference) + "\033[0m") else: print ("\033[92m" + "Your backward propagation works perfectly fine! difference = " + str(difference) + "\033[0m") return difference X, Y, parameters = gradient_check_n_test_case() cost, cache = forward_propagation_n(X, Y, parameters) gradients = backward_propagation_n(X, Y, cache) difference = gradient_check_n(parameters, gradients, X, Y) Explanation: You obtained some results on the fraud detection test set but you are not 100% sure of your model. 
Nobody's perfect! Let's implement gradient checking to verify if your gradients are correct. How does gradient checking work?. As in 1) and 2), you want to compare "gradapprox" to the gradient computed by backpropagation. The formula is still: $$ \frac{\partial J}{\partial \theta} = \lim_{\varepsilon \to 0} \frac{J(\theta + \varepsilon) - J(\theta - \varepsilon)}{2 \varepsilon} \tag{1}$$ However, $\theta$ is not a scalar anymore. It is a dictionary called "parameters". We implemented a function "dictionary_to_vector()" for you. It converts the "parameters" dictionary into a vector called "values", obtained by reshaping all parameters (W1, b1, W2, b2, W3, b3) into vectors and concatenating them. The inverse function is "vector_to_dictionary" which outputs back the "parameters" dictionary. <img src="images/dictionary_to_vector.png" style="width:600px;height:400px;"> <caption><center> <u> Figure 2 </u>: dictionary_to_vector() and vector_to_dictionary()<br> You will need these functions in gradient_check_n()</center></caption> We have also converted the "gradients" dictionary into a vector "grad" using gradients_to_vector(). You don't need to worry about that. Exercise: Implement gradient_check_n(). Instructions: Here is pseudo-code that will help you implement the gradient check. For each i in num_parameters: - To compute J_plus[i]: 1. Set $\theta^{+}$ to np.copy(parameters_values) 2. Set $\theta^{+}_i$ to $\theta^{+}_i + \varepsilon$ 3. Calculate $J^{+}_i$ using to forward_propagation_n(x, y, vector_to_dictionary($\theta^{+}$ )). - To compute J_minus[i]: do the same thing with $\theta^{-}$ - Compute $gradapprox[i] = \frac{J^{+}_i - J^{-}_i}{2 \varepsilon}$ Thus, you get a vector gradapprox, where gradapprox[i] is an approximation of the gradient with respect to parameter_values[i]. You can now compare this gradapprox vector to the gradients vector from backpropagation. Just like for the 1D case (Steps 1', 2', 3'), compute: $$ difference = \frac {\| grad - gradapprox \|_2}{\| grad \|_2 + \| gradapprox \|_2 } \tag{3}$$ End of explanation
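The warning printed above is expected here: the backward pass was seeded with two deliberate errors, visible in backward_propagation_n (the stray * 2 on dW2 and the 4./m on db1). A sketch of just the corrected lines, assuming the rest of the function is left as is — with these the reported difference should drop back below the threshold:

# Inside backward_propagation_n (corrected sketch):
dW2 = 1./m * np.dot(dZ2, A1.T)                   # remove the spurious "* 2"
db1 = 1./m * np.sum(dZ1, axis=1, keepdims=True)  # 1./m instead of 4./m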
2,090
Given the following text description, write Python code to implement the functionality described below step by step Description: Tutorial 1 This is a direct port of the R dftools tutorial to Python. Objective of tutorial Step1: Download the HI-mass data of Westmeier et al. 2017 Step2: There are 31 galaxies in this sample, hence the array has 31 rows. This data can be recast into the log-masses $x$, normally used by pydftools. We assume the mass uncertainties to be normal in $x$ and determine their amplitude using linear error propagation. We also define the vector of effective volumes Step3: Now fit these data. We first must create a Data and Selection object Step4: and show the fitted parameters Step5: The dashed line in the bottom panel shows the effective volume as a function of mass, recovered from the 31 values of veff. By default an effective volume of 0 for masses smaller than the smallest observed mass, and identical to the maximum volume for masses larger than the largest observed mass. If a better model is available from survey-specific considerations, then this information can be exploited to improve the fit. In this example, we replace the assumption of veff=0 for x&lt;xmin, by veff=max(0,(x−6.53)∗75) Step6: Now fit again Step7: and see the best fit solution Step8: As can be seen the parameters have change very slightly due to the modified effecive volume at the lowest masses. The printed parameters have symmetric Gaussian uncertainties, determined in the Lapace approximation (i.e. by inverting the Hessian matrix of the modified likelihood function). To allow for non-Gaussian parameter posteriors, we now refit the data while bootstrapping it 10^3 times Step9: Finally, let’s produce the plot with 68% and 95% confidence regions around the best fit. Also change fit color to red, change data color to black, remove posterior data, remove effective volume line, and adjust binning of input data. Then, add HIPASS and ALFALFA lines. Step10: and write the bes-fitting parameters
Python Code: %matplotlib inline import pydftools as df from pydftools.plotting import mfplot import numpy as np from urllib.request import Request, urlopen # For getting the data online from IPython.display import display, Math, Latex, Markdown, TextDisplayObject Explanation: Tutorial 1 This is a direct port of the R dftools tutorial to Python. Objective of tutorial: Illuystrate the basic functionality of pydftools by reproducing the HI mass function in Fig. 7 of Westmeier et al. 2017 (https://arxiv.org/pdf/1709.00780.pdf). Load the relevant libraries: End of explanation req = Request('http://quantumholism.com/dftools/westmeier2017.txt', headers={'User-Agent': 'Mozilla/5.0'}) data = urlopen(req) data = np.genfromtxt(data, skip_header=1) Explanation: Download the HI-mass data of Westmeier et al. 2017: End of explanation x = np.log10(data[:,0]) x_err = data[:,1]/data[:,0]/np.log(10) veff_values = data[:,2] Explanation: There are 31 galaxies in this sample, hence the array has 31 rows. This data can be recast into the log-masses $x$, normally used by pydftools. We assume the mass uncertainties to be normal in $x$ and determine their amplitude using linear error propagation. We also define the vector of effective volumes: End of explanation data = df.dffit.Data(x = x, x_err=x_err) selection = df.selection.SelectionVeffPoints(veff=veff_values, xval = x, xmin = 5, xmax = 13) survey = df.DFFit(data = data, selection=selection, grid_dx = 0.01) mfplot(survey, xlim=(10**6.63, 5e10), ylim=(1e-3, 1), show_bias_correction=False); Explanation: Now fit these data. We first must create a Data and Selection object: End of explanation display(Markdown(survey.fit_summary(format_for_notebook=True))) Explanation: and show the fitted parameters: End of explanation def veff_extrap(x): veff_max = np.max(veff_values) return np.clip((x-6.53)*75, 0,veff_max) selection = df.selection.SelectionVeffPoints(veff=veff_values, xval = x, veff_extrap=veff_extrap, xmin = 5, xmax = 13) Explanation: The dashed line in the bottom panel shows the effective volume as a function of mass, recovered from the 31 values of veff. By default an effective volume of 0 for masses smaller than the smallest observed mass, and identical to the maximum volume for masses larger than the largest observed mass. If a better model is available from survey-specific considerations, then this information can be exploited to improve the fit. In this example, we replace the assumption of veff=0 for x&lt;xmin, by veff=max(0,(x−6.53)∗75): End of explanation survey = df.DFFit(data = data, selection=selection, grid_dx = 0.01) Explanation: Now fit again: End of explanation display(Markdown(survey.fit_summary(format_for_notebook=True))) Explanation: and see the best fit solution: End of explanation survey.resample(n_bootstrap = 1000) Explanation: As can be seen the parameters have change very slightly due to the modified effecive volume at the lowest masses. The printed parameters have symmetric Gaussian uncertainties, determined in the Lapace approximation (i.e. by inverting the Hessian matrix of the modified likelihood function). 
To allow for non-Gaussian parameter posteriors, we now refit the data while bootstrapping it 10^3 times: End of explanation fig, ax = mfplot(survey,xlim=(2e6,5e10),ylim=(1e-3,1),uncertainty_type=3, col_fit='red',col_data='black',show_posterior_data=False, ls_veff='none', nbins=6,bin_xmin=6.5,bin_xmax=9.5, show_bias_correction=False, xpower10=True) x = survey.grid.x[0] ax[0].plot(10**x, survey.model.gdf(x,[np.log10(6.0e-3),9.80,-1.37]), ls='--',lw=1.5, color='C0', label="HIPASS") ax[0].plot(10**x, survey.model.gdf(x,[np.log10(4.8e-3),9.96,-1.33]), ls='--',lw=1.5, color='C1', label="ALFALFA") ax[0].legend() Explanation: Finally, let’s produce the plot with 68% and 95% confidence regions around the best fit. Also change fit color to red, change data color to black, remove posterior data, remove effective volume line, and adjust binning of input data. Then, add HIPASS and ALFALFA lines. End of explanation display(Markdown(survey.fit_summary(format_for_notebook=True))) Explanation: and write the bes-fitting parameters: End of explanation
2,091
Given the following text description, write Python code to implement the functionality described below step by step Description: Loading and modifying a SMIRNOFF-format force field This notebook illustrates how to load a SMIRNOFF-format force field, apply it to an example molecule, get the energy, then manipulate the parameters in the force field and update the energy. Prep some utility functions/import stuff Step2: Define utility function we'll use to get energy of an OpenMM system Step3: Example 1 Step4: Get positions for use below Step5: Load the Parsley force field file Step6: Generate an Open Force Field Toolkit Topology containing only this molecule Step7: Parameterize the molecule, creating an OpenMM system. Note that the charges generated in this step do not depend on the input conformation of parameterized molecules. See the FAQ for more information. Step8: Calculate energy before parameter modification Step9: Get parameters for the C-O-H angle Step10: Modify the parameters Step11: Evaluate energy after parameter modification Step12: Print out energy Step13: Example 2 Step14: Select a solvent box to parameterize. <div class="alert alert-block alert-warning"> <b>Note Step15: Provide a "complete" (including bond orders, charges, and stereochemistry) representation of each molecule that might be in the PDB. This is necessary because a PDB representation of a molecule does not contain sufficient information for parameterization. Step16: Create an Open Force Field Toolkit Topology object by matching the Open Force Field molecules defined above to those in the PDB <div class="alert alert-block alert-warning"> <b>Note Step17: Change the long-range van der Waals method to be PME Step18: The Open Force Field Toolkit applies vdW parameters using a SMIRKS-based typing scheme. Inspect the first few vdW parameters. These can be changed programmatically, as shown in example 1. Step19: Now recompute the energy of the system using PME for long-range vdW interactions <div class="alert alert-block alert-warning"> <b>Note
Python Code: from openff.toolkit.topology import Molecule, Topology from openff.toolkit.typing.engines.smirnoff.forcefield import ForceField from openff.toolkit.utils import get_data_file_path from simtk import openmm, unit import numpy as np Explanation: Loading and modifying a SMIRNOFF-format force field This notebook illustrates how to load a SMIRNOFF-format force field, apply it to an example molecule, get the energy, then manipulate the parameters in the force field and update the energy. Prep some utility functions/import stuff End of explanation def get_energy(system, positions): Return the potential energy. Parameters ---------- system : simtk.openmm.System The system to check positions : simtk.unit.Quantity of dimension (natoms,3) with units of length The positions to use Returns --------- energy integrator = openmm.VerletIntegrator(1.0 * unit.femtoseconds) context = openmm.Context(system, integrator) context.setPositions(positions) state = context.getState(getEnergy=True) energy = state.getPotentialEnergy().in_units_of(unit.kilocalories_per_mole) return energy Explanation: Define utility function we'll use to get energy of an OpenMM system End of explanation molecule = Molecule.from_file(get_data_file_path('molecules/ethanol.sdf')) Explanation: Example 1: Load a molecule and evaluate its energy before and after a parameter modification In this example, we load a single ethanol molecule with geometry information, parameterize it using the original "Parsley" (openff-1.0.0) force field, and evaluate its energy. We then modify the parameter that is applied to the C-O-H angle and re-evaluate the energy. Load a molecule End of explanation positions = molecule.conformers[0] Explanation: Get positions for use below End of explanation ff = ForceField('openff_unconstrained-1.0.0.offxml') Explanation: Load the Parsley force field file End of explanation topology = molecule.to_topology() Explanation: Generate an Open Force Field Toolkit Topology containing only this molecule End of explanation orig_system = ff.create_openmm_system(topology) Explanation: Parameterize the molecule, creating an OpenMM system. Note that the charges generated in this step do not depend on the input conformation of parameterized molecules. See the FAQ for more information. End of explanation orig_energy = get_energy(orig_system, positions) print(f"Original energy: {orig_energy}") Explanation: Calculate energy before parameter modification End of explanation smirks = '[*:1]-[#8:2]-[*:3]' # SMIRKS for the parameter to retrieve parameter = ff.get_parameter_handler('Angles').parameters[smirks] Explanation: Get parameters for the C-O-H angle End of explanation parameter.k *= 0.9 parameter.angle *= 1.1 Explanation: Modify the parameters End of explanation new_system = ff.create_openmm_system(topology) new_energy = get_energy(new_system, positions) Explanation: Evaluate energy after parameter modification End of explanation print(f"Original energy: {orig_energy}. New energy: {new_energy}") Explanation: Print out energy End of explanation # Create a new ForceField containing the original "Parsley" parameter set: forcefield = ForceField('openff_unconstrained-1.0.0.offxml') # Inspect the long-range van der Waals method: vdw_handler = forcefield.get_parameter_handler('vdW') print(f"The vdW method is currently set to: {vdw_handler.method}") Explanation: Example 2: Inspect and manipulate nonbonded treatment The SMIRNOFF spec aims to specify all aspects of a system that contribute to the energy within the ForceField object. 
This includes the treatment of long-range electrostatics and van der Waals interactions. This may be different for some users, as other packages set these parameters at runtime, for example in an AMBER mdin file or GROMACS MDP file. This example evaluates the energy of a periodic box of solvent molecules using the "standard" "Parsley" (openff-1.0.0) settings (PME for electrostatics, 9 Angstrom cutoff for vdW interactions). It then changes the ForceField's vdW treatment method to "PME" and re-evaluates the energy. <div class="alert alert-block alert-warning"> <b>Note:</b> The Open Force Field Toolkit ensures that its `create_openmm_system` function produces a system that employs the `ForceField`-specified nonbonded treatment. However, operations which convert this system to AMBER or GROMACS-format topologies/structures are likely to lose these details, as there is no equivalent data field in those objects. In the future we will work on developing robust ways to create other system formats which preserve all details of a `ForceField` object. </div> End of explanation from simtk.openmm import app # A 239-molecule mixture of cyclohexane and ethanol pdbfile = app.PDBFile(get_data_file_path('systems/packmol_boxes/cyclohexane_ethanol_0.4_0.6.pdb')) # A 340-molecule mixture of propane, methane, and butanol. #pdbfile = app.PDBFile(get_data_file_path('systems/packmol_boxes/propane_methane_butanol_0.2_0.3_0.5.pdb')) # One cyclohexane in a box of roughly 1400 waters #pdbfile = app.PDBFile(get_data_file_path('systems/packmol_boxes/cyclohexane_water.pdb')) # One ethanol in a box of roughly 1300 waters #pdbfile = app.PDBFile(get_data_file_path('systems/packmol_boxes/ethanol_water.pdb')) Explanation: Select a solvent box to parameterize. <div class="alert alert-block alert-warning"> <b>Note:</b> This process will parameterize water using Parsley parameters. We do not recommend this, and instead suggest parameterizing water externally with a model like TIP3P and merging systems using ParmEd. </div> End of explanation molecules = [Molecule.from_smiles('C'), # methane Molecule.from_smiles('CCC'),# propane Molecule.from_smiles('CCCCO'), # butanol Molecule.from_smiles('O'), # water Molecule.from_smiles('CCO'), #ethanol Molecule.from_smiles('C1CCCCC1'), #cyclohexane ] Explanation: Provide a "complete" (including bond orders, charges, and stereochemistry) representation of each molecule that might be in the PDB. This is necessary because a PDB representation of a molecule does not contain sufficient information for parameterization. End of explanation top = Topology.from_openmm(pdbfile.topology, unique_molecules=molecules) orig_system = forcefield.create_openmm_system(top) orig_energy = get_energy(orig_system, pdbfile.getPositions()) print(f"Original energy: {orig_energy}") Explanation: Create an Open Force Field Toolkit Topology object by matching the Open Force Field molecules defined above to those in the PDB <div class="alert alert-block alert-warning"> <b>Note:</b> This function is currently unoptimized and may take a minute to run. </div> End of explanation vdw_handler.method = 'PME' print(f"The vdW method is currently set to: {vdw_handler.method}") Explanation: Change the long-range van der Waals method to be PME End of explanation for vdw_param in forcefield.get_parameter_handler('vdW').parameters[0:3]: print(vdw_param) Explanation: The Open Force Field Toolkit applies vdW parameters using a SMIRKS-based typing scheme. Inspect the first few vdW parameters. 
These can be changed programmatically, as shown in example 1. End of explanation new_system = forcefield.create_openmm_system(top) new_energy = get_energy(new_system, pdbfile.getPositions()) print(f"Original energy (with LJ cutoff): {orig_energy}") print(f"New energy (using LJ PME): {new_energy}") Explanation: Now recompute the energy of the system using PME for long-range vdW interactions <div class="alert alert-block alert-warning"> <b>Note:</b> This function is currently unoptimized and may take a minute to run. </div> End of explanation
2,092
Given the following text description, write Python code to implement the functionality described below step by step Description: 不均一分散 『Rによる計量経済学』第6章「不均一分散」をPythonで実行する。 テキスト付属データセット(「k0601.csv」等)については出版社サイトよりダウンロードしてください。 例題6-1 「k0601.csv」を用いた均一分散のデータである場合の回帰分析。 BP統計量による不均一分散の有無の仮説検定を行います。 Step1: Breush-Pagan Test BP統計量を計算するにあたって、statsmodels.stats.diagnostic.het_breushpagan()を使う。 ``` statsmodels.stats.diagnostic.het_breushpagan(resid, exog_het) Parameters Step2: これの1つ目がBP統計量で、2つ目がP値。 見やすいように整えておく。 Step3: よって、 BP=0.14630.1463となり0に近い値であることがわかる。 P値は0.702となり有意水準10%でも帰無仮説BP=0を棄却することはできず、均一分散であると結論することができます。 例題6-2 「k0602.csv」を用いた不均一分散のデータである場合の回帰分析。 BP統計量による不均一分散の有無の仮説検定を行います。 データを読み込みます。 Step4: P値は0.003より有意水準1%でも帰無仮説BP=0を棄却することができ、不均一分散であると結論することができます。 例題6-3 「k0602.csv」を用いた不均一分散のデータである場合の回帰分析。 不均一分散が存在する場合、変数を対数化することによって不均一分散の状態の解消を行います。 先ほど読み込んだ「k0602.csv」のデータから変数の対数化を行います。 dataの中に新しいコラム「lnX」「lnY」を追加します。 Step5: あれ?まだ不均一分散残ってますね笑 ちなみに本書では何故か対数化する過程で「k0602.csv」の元のX,Yデータが変わってて対数化による不均一分散の解消が成功しているかのごとくなっています笑 「k0602.csv」と「k0603.csv」のデータ比較。 Step6: やっぱりX, Yの値が変わってる笑 とりあえず「k0603.csv」の場合の回帰も行います。 Step7: 例題6-4 「k0602.csv」を用いた不均一分散のデータである場合の回帰分析。 不均一分散が存在する場合、第3の変数を基準とした比率に変換することによって不均一分散の状態の解消を行います。 例題6-4で使う「k0604.csv」も「k0602.csv」とデータが異なるので、「k0602.csv」を基にしたデータの加工は省き、直接行います。 Step8: ちなみにデータの中身はこのようになっていて、 XZ = X / Z YZ = Y / Z で簡単に求めることが出来ます。
Python Code: %matplotlib inline # -*- coding:utf-8 -*- from __future__ import print_function import numpy as np import pandas as pd import statsmodels.api as sm import statsmodels.stats.diagnostic as smsdia import matplotlib.pyplot as plt import warnings warnings.filterwarnings('ignore') # Load the data data = pd.read_csv('example/k0601.csv') # Explanatory variable X = data['X'] X = sm.add_constant(X) X # Dependent (explained) variable Y = data['Y'] Y # Run OLS (Ordinary Least Squares) model = sm.OLS(Y,X) results = model.fit() print(results.summary()) # Plot the data and the fitted line plt.plot(data["X"], data["Y"], 'o', label="data") plt.plot(data["X"], results.fittedvalues, label="OLS") plt.xlim(min(data["X"])-1, max(data["X"])+1) plt.ylim(min(data["Y"])-1, max(data["Y"])+1) plt.title('Example6-1') plt.legend(loc=2) plt.show() # Residuals results.resid # Dependent variable (endogenous) http://statsmodels.sourceforge.net/devel/endog_exog.html results.model.endog # Explanatory variables (exogenous) results.model.exog Explanation: Heteroscedasticity This notebook works through Chapter 6, "Heteroscedasticity", of 『Rによる計量経済学』 (Econometrics with R) in Python. The datasets that accompany the textbook ("k0601.csv" and so on) can be downloaded from the publisher's website. Example 6-1 Regression analysis of "k0601.csv", a dataset with homoscedastic errors. We test for the presence of heteroscedasticity using the Breusch-Pagan (BP) statistic. End of explanation
smsdia.het_breushpagan(results.resid, results.model.exog) Explanation: Breusch-Pagan Test To compute the BP statistic we use statsmodels.stats.diagnostic.het_breushpagan(). ``` statsmodels.stats.diagnostic.het_breushpagan(resid, exog_het) Parameters: resid : arraylike, (nobs,) For the Breush-Pagan test, this should be the residual of a regression. exog_het : array_like, (nobs, nvars) This contains variables that might create data dependent heteroscedasticity. Returns: lm : float lagrange multiplier statistic lm_pvalue :float : p-value of lagrange multiplier test fvalue : float f-statistic of the hypothesis that the error variance does not depend on x f_pvalue : float p-value for the f-statistic ``` End of explanation
BP = smsdia.het_breushpagan(results.resid, results.model.exog) print('Breusch-Pagan test : ') print('BP = ', BP[0]) print('p-value = ', BP[1]) Explanation: The first element of the result is the BP statistic and the second is its p-value. We format them for readability. End of explanation
data = pd.read_csv('example/k0602.csv') # Explanatory variable X = data['X'] X = sm.add_constant(X) X # Dependent (explained) variable Y = data['Y'] Y # Run OLS (Ordinary Least Squares) model = sm.OLS(Y,X) results = model.fit() print(results.summary()) # Plot the data and the fitted line plt.plot(data["X"], data["Y"], 'o', label="data") plt.plot(data["X"], results.fittedvalues, label="OLS") plt.xlim(min(data["X"])-1, max(data["X"])+1) plt.ylim(min(data["Y"])-1, max(data["Y"])+1) plt.title('Example6-2') plt.legend(loc=2) plt.show() BP = smsdia.het_breushpagan(results.resid, results.model.exog) print('Breusch-Pagan test : ') print('BP = ', BP[0]) print('p-value = ', BP[1]) Explanation: Thus BP = 0.1463, which is close to 0. The p-value is 0.702, so the null hypothesis of homoscedasticity cannot be rejected even at the 10% significance level, and we conclude that the errors are homoscedastic. Example 6-2 Regression analysis of "k0602.csv", a dataset with heteroscedastic errors. We test for the presence of heteroscedasticity using the BP statistic. First, load the data. End of explanation
data["lnX"] = np.log(data["X"]) data["lnY"] = np.log(data["Y"]) data # Explanatory variable X = data['lnX'] X = sm.add_constant(X) X # Dependent (explained) variable Y = data['lnY'] Y # Run OLS (Ordinary Least Squares) model = sm.OLS(Y,X) results = model.fit() print(results.summary()) # Plot the data and the fitted line plt.plot(data["lnX"], data["lnY"], 'o', label="data") plt.plot(data["lnX"], results.fittedvalues, label="OLS") plt.xlim(min(data["lnX"])-0.5, max(data["lnX"])+0.5) plt.ylim(min(data["lnY"])-0.5, max(data["lnY"])+0.5) plt.title('Example6-3') plt.legend(loc=2) plt.show() # BP test BP = smsdia.het_breushpagan(results.resid, results.model.exog) print('Breusch-Pagan test : ') print('BP = ', BP[0]) print('p-value = ', BP[1]) Explanation: The p-value is 0.003, so the null hypothesis of homoscedasticity is rejected even at the 1% significance level, and we conclude that the errors are heteroscedastic. Example 6-3 Regression analysis of "k0602.csv", a dataset with heteroscedastic errors. When heteroscedasticity is present, one remedy is to log-transform the variables. We log-transform the "k0602.csv" data loaded above, adding new columns "lnX" and "lnY" to data. End of explanation
data6_2 = pd.read_csv('example/k0602.csv') data6_3 = pd.read_csv('example/k0603.csv') data6_2 data6_3 Explanation: Oddly, the heteroscedasticity is still there. In the textbook, the original X and Y values of "k0602.csv" are changed during the log-transformation step, which makes it look as though the transformation successfully removed the heteroscedasticity. Below we compare the "k0602.csv" and "k0603.csv" data. End of explanation
X = data6_3['lnX'] X = sm.add_constant(X) Y = data6_3['lnY'] model = sm.OLS(Y,X) results = model.fit() print(results.summary()) plt.plot(data6_3["lnX"], data6_3["lnY"], 'o', label="data") plt.plot(data6_3["lnX"], results.fittedvalues, label="OLS") plt.xlim(min(data6_3["lnX"])-0.5, max(data6_3["lnX"])+0.5) plt.ylim(min(data6_3["lnY"])-0.5, max(data6_3["lnY"])+0.5) plt.title('Example6-3') plt.legend(loc=2) plt.show() BP = smsdia.het_breushpagan(results.resid, results.model.exog) print('Breusch-Pagan test : ') print('BP = ', BP[0]) print('p-value = ', BP[1]) Explanation: The X and Y values are indeed different. For completeness, we also run the regression on the "k0603.csv" data. End of explanation
data = pd.read_csv('example/k0604.csv') Explanation: Example 6-4 Regression analysis of "k0602.csv", a dataset with heteroscedastic errors. When heteroscedasticity is present, another remedy is to convert the variables into ratios relative to a third variable. The "k0604.csv" file used in Example 6-4 also differs from "k0602.csv", so instead of deriving it from "k0602.csv" we work with it directly. End of explanation
data X = data['XZ'] X = sm.add_constant(X) Y = data['YZ'] model = sm.OLS(Y,X) results = model.fit() print(results.summary()) plt.plot(data["XZ"], data["YZ"], 'o', label="data") plt.plot(data["XZ"], results.fittedvalues, label="OLS") plt.xlim(min(data["XZ"])-0.1, max(data["XZ"])+0.1) plt.ylim(min(data["YZ"])-0.1, max(data["YZ"])+0.1) plt.title('Example6-4') plt.legend(loc=2) plt.show() BP = smsdia.het_breushpagan(results.resid, results.model.exog) print('Breusch-Pagan test : ') print('BP = ', BP[0]) print('p-value = ', BP[1]) Explanation: The data look like this; the ratios are obtained simply as XZ = X / Z and YZ = Y / Z. End of explanation
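As a side note on what het_breushpagan reports: the BP statistic used throughout this chapter is simply n·R² from an auxiliary regression of the squared OLS residuals on the explanatory variables. The minimal sketch below reproduces that computation by hand on synthetic heteroscedastic data (not the textbook's CSV files), so the printed numbers are illustrative only.

```python
# Hand-rolled Breusch-Pagan test on synthetic data whose error variance grows with x.
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.RandomState(0)
n = 100
x = rng.uniform(1, 10, size=n)
y = 2 + 0.5 * x + rng.normal(scale=0.3 * x, size=n)

X = sm.add_constant(x)
resid = sm.OLS(y, X).fit().resid

# Auxiliary regression: squared residuals on the same regressors.
aux = sm.OLS(resid ** 2, X).fit()
bp_stat = n * aux.rsquared                        # LM statistic = n * R^2
p_value = stats.chi2.sf(bp_stat, X.shape[1] - 1)  # df = regressors excluding the constant

print('BP = ', bp_stat)
print('p-value = ', p_value)
# On the same inputs, these reproduce the first two values returned by het_breushpagan.
```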
2,093
Given the following text description, write Python code to implement the functionality described below step by step Description: Data exploration We use the dataset found at https Step1: Loading data ... just works. read_csv() comes with many convenient arguments, such as skiprows, nrows, na_values, etc. Note that, alternatively, we could have run df = pd.read_csv('https Step2: Selecting data We are already familiar with column selection. The [ ] syntax is the most basic way of indexing. Step3: Columns can also be accessed as attributes (as long as they have a valid Python name). Step4: We can select elements of a DataFrame either by label (with the .loc attribute) or by position (with the .iloc attribute). Row and column indices take the usual order (first and second place, respectively). Step5: Slicing works too. Step6: Label indexing is more natural than positional indexing (think of a function call, where keyword arguments are easier to work with than positional arguments). Step7: Often we want to select data based on certain conditions. Step8: Subsets can be selected by callable functions (returning valid indexers). The following function performs a selection by label (along country and g_whoregion). Step9: So it can serve as a column indexer. Step10: The following function filters for data where the number of cases is greater than 100,000. Step11: So it can serve as a row indexer. Step12: We may want to select or mask data while preserving the original shape. Step13: Hands-on exercises Select the rows of df where country is Greece, age is at most 24, and year is 2000. Name it df1. Write the df1 DataFrame to a CSV file located in the data/ subdirectory. (Hint Step14: Indexing Step15: We could specify that the first (unnamed) column should be used as the index (row labels). Step16: Remember we learnt set_index() in the previous section? We also have reset_index() at our disposal. Step17: And we are back to a default index for this DataFrame. The original index is stored in its own column.
Python Code: import pandas as pd df = pd.read_csv('../data/tidy_who.csv') Explanation: Data exploration We use the dataset found at https://github.com/mkcor/data-wrangling/blob/master/data/tidy_who.csv (see the notebook at the root of that repo for the generation of this dataset). End of explanation df.head() df.shape df.sample(10) df.describe() df['g_whoregion'].unique() df['country'].nunique() Explanation: Loading data ... just works. read_csv() comes with many convenient arguments, such as skiprows, nrows, na_values, etc. Note that, alternatively, we could have run df = pd.read_csv('https://raw.githubusercontent.com/mkcor/data-wrangling/master/data/tidy_who.csv'). End of explanation df['country'].head(3) Explanation: Selecting data We are already familiar with column selection. The [ ] syntax is the most basic way of indexing. End of explanation df.country[1000:1003] Explanation: Columns can also be accessed as attributes (as long as they have a valid Python name). End of explanation df.loc[0, 'country'] df.loc[df.shape[0] - 1, 'country'] df.iloc[0, 0] df.iloc[df.shape[0] - 1, 0] Explanation: We can select elements of a DataFrame either by label (with the .loc attribute) or by position (with the .iloc attribute). Row and column indices take the usual order (first and second place, respectively). End of explanation df.loc[:5, 'country'] Explanation: Slicing works too. End of explanation df.loc[:5, 'country':'type'] df.iloc[:5, :5] Explanation: Label indexing is more natural than positional indexing (think of a function call, where keyword arguments are easier to work with than positional arguments). End of explanation cond = df.year < 1981 df[cond].shape df[cond & (df.country == 'Argentina') & (df.type == 'rel') & (df.sex == 'm')] gr_and_it = df.country.isin(['Greece', 'Italy']) df[gr_and_it].tail() Explanation: Often we want to select data based on certain conditions. End of explanation lambda x: ['country', 'g_whoregion'] Explanation: Subsets can be selected by callable functions (returning valid indexers). The following function performs a selection by label (along country and g_whoregion). End of explanation df.loc[:3, lambda x: ['country', 'g_whoregion']] Explanation: So it can serve as a column indexer. End of explanation lambda x: x.cases > 100000 Explanation: The following function filters for data where the number of cases is greater than 100,000. End of explanation great = df.loc[lambda x: x.cases > 100000, :] great df.cases.loc[lambda x: x > 100000] Explanation: So it can serve as a row indexer. End of explanation great.where(great.country == 'India') great.mask(great.country == 'India') Explanation: We may want to select or mask data while preserving the original shape. End of explanation df1 = df[(df.country == 'Greece') & (df.year == 2000) & (df.age_range.isin([14, 1524]))] df1.to_csv('../data/df1.csv') df2 = pd.read_csv('../data/df1.csv') Explanation: Hands-on exercises Select the rows of df where country is Greece, age is at most 24, and year is 2000. Name it df1. Write the df1 DataFrame to a CSV file located in the data/ subdirectory. (Hint: The method is named to_csv.) Read this CSV file into a DataFrame named df2. What do you notice about the index? (Feel free to fire up a Terminal and look at the CSV file.) End of explanation df1.index df2.index Explanation: Indexing End of explanation pd.read_csv('../data/df1.csv', index_col=0) Explanation: We could specify that the first (unnamed) column should be used as the index (row labels). 
End of explanation df1.reset_index() Explanation: Remember we learnt set_index() in the previous section? We also have reset_index() at our disposal. End of explanation df1.reset_index().index Explanation: And we are back to a default index for this DataFrame. The original index is stored in its own column. End of explanation
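A short follow-up on the index question from the hands-on exercise: the extra unnamed column appears because to_csv() writes the row index by default. Below is a minimal sketch of the two usual remedies, shown on a throwaway frame rather than the WHO data (the file paths are placeholders):

```python
import pandas as pd

df1 = pd.DataFrame({"country": ["Greece", "Greece"], "cases": [10, 20]})

# Option 1: don't write the index at all.
df1.to_csv("../data/df1_noindex.csv", index=False)

# Option 2: write it, but tell read_csv which column holds the row labels.
df1.to_csv("../data/df1_withindex.csv")
df2 = pd.read_csv("../data/df1_withindex.csv", index_col=0)

print(df2.index)  # integer row labels matching the original frame
```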
2,094
Given the following text description, write Python code to implement the functionality described below step by step Description: Logic Test A notebook to test the classes and methods within SeaFallLogic.py. Step1: Ship test Create a ship object and change its values. Step2: Island Test See how class inheritance works on an island site.
Python Code: %matplotlib inline import numpy import matplotlib from matplotlib.patches import Circle, Wedge, Polygon from matplotlib.collections import PatchCollection import matplotlib.pyplot as plt import matplotlib.patches as mpatches import matplotlib.lines as mlines import matplotlib.path as mpath import numpy as np import seaborn as sns import networkx as nx import pandas as pd import SeaFallLogic Explanation: Logic Test A notebook to test the classes and methods within SeaFallLogic.py. End of explanation
class Ship(): # Rules, pg 8, "Province Boards" also include information about ships def __init__(self): self.damage = [] # hold, a list of objects with max length hold self.hold = [] # upgrades, a list of upgrade objects of max length 2 self.upgrades = [] # values (explore, hold, raid, sail) self._values = (1, 1, 1, 1) # vmax is the maximum number values can reach for (explore, hold, raid, # sail) self._vmax = (5, 5, 5, 5) @property def values(self): return self._values @values.setter def values(self, values): if not isinstance(values, tuple): err_str = ("Not a valid data type. The data type should be a tuple" " of 4 length.") raise ValueError(err_str) elif len(values) != 4: err_str = ("Not a valid data type. The data type should be a tuple" " of 4 length.") raise ValueError(err_str) for val, vmax in zip(values, self.vmax): if val > vmax: raise ValueError("A ship value exceeds its max.") self._values = values @property def vmax(self): return self._vmax @vmax.setter def vmax(self, vmax_tuple): if not isinstance(vmax_tuple, tuple): err_str = ("Not a valid data type. The data type should be a tuple" " of 4 length.") raise ValueError(err_str) elif len(vmax_tuple) != 4: err_str = ("Not a valid data type. The data type should be a tuple" " of 4 length.") raise ValueError(err_str) for val, vmax in zip((5, 5, 5, 5), vmax_tuple): if val > vmax: raise ValueError("The maximum ship values are never less than (5, 5, 5, 5).") self._vmax = vmax_tuple ship = Ship() ship.values Explanation: Ship test Create a ship object and change its values. End of explanation
class Site(): def __init__(self, dangerous=False, defense=0): # Rules, pg 10, "Dangerous Sites" self.dangerous = dangerous # Rules, pg 10, "Starting an Endeavor" # Rules, pg 7, "Defense" self.defense = defense class IslandSiteMine(Site): def __init__(self, dangerous=False, defense=0, gold=0): super().__init__(dangerous=dangerous, defense=defense) self.gold = gold mine = IslandSiteMine() mine.dangerous mine.defense mine2 = IslandSiteMine(dangerous=True, defense=10, gold=6) mine2.gold class Goods(): valid_goods = { "iron", "linen", "spice", "wood" } def __init__(self): pass Goods.valid_goods "iron" in Goods.valid_goods Explanation: Island Test See how class inheritance works on an island site. End of explanation
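Going back to the Ship class defined at the top of this notebook, a quick way to confirm that the values setter really enforces its constraints is to feed it bad input and catch the ValueError. A minimal sketch, assuming the class definition above has been run:

```python
ship = Ship()

# A valid update: a 4-tuple within the default vmax of (5, 5, 5, 5).
ship.values = (2, 3, 1, 4)
print(ship.values)

# Invalid updates are rejected with a ValueError.
for bad in [(6, 1, 1, 1),   # exceeds vmax
            (1, 1, 1),      # wrong length
            [1, 1, 1, 1]]:  # wrong type (list, not tuple)
    try:
        ship.values = bad
    except ValueError as err:
        print("rejected", bad, "->", err)
```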
2,095
Given the following text description, write Python code to implement the functionality described below step by step Description: Per session and per user analysis Analysis of users. Table of Contents Preparation Function tests User metrics checks Preparation <a id=preparation /> Step1: Per-session analysis Step2: Per user analysis Step3: Common analysis Switch here between users and sessions. Step4: getCheckpointsTimes tinkering Step5: Function tests <a id=functests /> Step6: getUserCheckpoints tinkering Step7: getDiscrepancyGameGForm tinkering Step8: userId = getRandomRedMetricsGUID() userId = '"72002481-18a1-4de2-8749-553bbabe119e"' def getDiscrepancyGameGForm( userId ) Step9: sorted, unique values in series1 that are not in series2 np.setdiff1d(series1.values, series2.values) user has answered questions whose answer they haven't seen in the game gameNotEnough = pd.Series(np.setdiff1d(gformVal.values, gameVal.values)) Step10: if gameVal.values.size!=0 Step11: getCheckpointsTimesUser tinkering Step12: print('second pass') previous = '' for checkpointName in thisSessionTimes.index Step13: getPlayedTimeSessionMode tinkering Step14: getPlayedTimeSession tinkering Step15: mergePlayedTimes tinkering and test Step16: getPlayedTimeUser tinkering Step17: getDeaths tinkering Step18: getDeathsUser tinkering Step19: getUserCraftEventsTotal tinkering Step20: getUserEventsTotal tinkering Step21: getSessionDataPreview tinkering Step22: getUserDataPreview tinkering Step23: Checks on user metrics <a id=checkusermetrics /> Sequence of actions sandbox crafting equip device unequip device add PCONS add 6 add Ampicillin add T &gt; auto craft &gt; auto equip remove T &gt; auto unequip add T &gt; auto equip add 12 &gt; auto craft &gt; auto equip add 6 &gt; auto equip exit crafting dies &gt; auto unequip set language to english Step24: getUserDataVector tinkering Step25: Making sense of temporality of answers of multi-session users What is the behaviour of users who played multiple times?
Python Code: %run "../Functions/3. Per session and per user analysis.ipynb" rmdf152.head() Explanation: Per session and per user analysis Analysis of users. Table of Contents Preparation Function tests User metrics checks Preparation <a id=preparation /> End of explanation testSessionId = "fab3ea03-6ff1-483f-a90a-74ff47d0b556" perSession = rmdf152[rmdf152['type']=='reach'].loc[:,perSessionRelevantColumns] perSession = perSession[perSession['sessionId']==testSessionId] perSession = perSession[perSession['section'].str.startswith('tutorial', na=False)] perSession allSessions = getAllSessions( rmdf152, True ) allSessions.head() allSessions[allSessions['sessionId']==testSessionId] allSessions[allSessions['userId']=='e2f8d5e4-cccd-4d1a-909b-c9c92f6b83c1'] Explanation: Per-session analysis End of explanation # English-speaking user who answered the questionnaire - cf 'Google form analysis.ipynb'. localplayerguid = '8d352896-a3f1-471c-8439-0f426df901c1' #localplayerguid = '7037c5b2-c286-498e-9784-9a061c778609' #localplayerguid = '5c4939b5-425b-4d19-b5d2-0384a515539e' #localplayerguid = '7825d421-d668-4481-898a-46b51efe40f0' #localplayerguid = 'acb9c989-b4a6-4c4d-81cc-6b5783ec71d8' localplayerguid perUserRelevantColumns = ['sessionId', 'serverTime', 'section'] sessionsList = getAllSessionsOfUser(rmdf152, localplayerguid, True) sessionsList # List all 'reach' events with those sessionIds. perUser = rmdf152[rmdf152['type']=='reach'].loc[:,perUserRelevantColumns] perUser = perUser[perUser['sessionId'].isin(sessionsList['sessionId'])] perUser = perUser[perUser['section'].str.startswith('tutorial', na=False)] perUser.describe() perUser.head() Explanation: Per user analysis End of explanation #sectionsList = perSession sectionsList = perUser Explanation: Common analysis Switch here between users and sessions. 
End of explanation testUser = getRandomGFormGUID() testSession = getRandomSessionGUID( _userId = testUser ) timedSections1 = getCheckpointsTimes(testSession) timedSections1 sessionId = testSession _rmDF = rmdf152 testCounter = 0 # Returns a given session's checkpoints, the first server time at which they were reached, and completion time #def getCheckpointsTimes( sessionId, _rmDF = rmdf152 ): reachEvents = _rmDF[_rmDF['type']=='reach'].loc[:,perSessionRelevantColumns] perSession = reachEvents[reachEvents['sessionId']==sessionId] perSession = perSession[perSession['section'].str.startswith('tutorial', na=False)] timedSections = pd.DataFrame(data=0, columns=timedSectionsReachedColumns,index=timedSectionsIndex) timedSections['firstReached'] = pd.Timestamp(0, tz='utc') timedSections['firstCompletionDuration'] = pd.Timedelta.max if(len(perSession) > 0): timedSections["firstReached"] = perSession.groupby("section").agg({ "serverTime": np.min }) timedSections["firstCompletionDuration"] = timedSections["firstReached"].diff() if(timedSections.loc["tutorial1.Checkpoint00","firstReached"] != pd.Timestamp(0, tz='utc')): timedSections.loc["tutorial1.Checkpoint00","firstCompletionDuration"] = \ pd.Timedelta(0) timedSections["firstReached"] = timedSections["firstReached"].fillna(pd.Timestamp(0, tz='utc')) timedSections["firstCompletionDuration"] = timedSections["firstCompletionDuration"].fillna(pd.Timedelta.max) timedSections len(timedSections) chapter = "tutorial1.Checkpoint01" time = '' if(not chapter in timedSections.index): print("no timed sections") else: time = timedSections.loc[chapter,"firstCompletionDuration"] time timedSections1 == timedSections reachEvents.iloc[0,0] Explanation: getCheckpointsTimes tinkering End of explanation #'7412a447-8177-48e9-82c5-cb31032f76a9': didn't answer testUser = getRandomGFormGUID() testResult = getUserDataVector(testUser) print(testUser) testResult testResult[testUser]['death'] testResult = getUserDataVector('e2f8d5e4-cccd-4d1a-909b-c9c92f6b83c1') testResult testResult = getUserDataVector('8d352896-a3f1-471c-8439-0f426df901c1') testResult gformNotEnough = [] print(gformNotEnough) gformNotEnough.append(5) print(gformNotEnough) gformNotEnough = pd.Series(gformNotEnough) print(gformNotEnough) gformNotEnough = np.array([]) print(gformNotEnough) gformNotEnough = np.append(gformNotEnough, [5]) print(gformNotEnough) gformNotEnough = pd.Series(gformNotEnough) print(gformNotEnough) testNonVal = pd.Series(['tutorial1.Checkpoint13']) Explanation: Function tests <a id=functests /> End of explanation userId = getRandomRedMetricsGUID() _rmDF = rmdf152 # Returns a given user's unique reached checkpoints #def getUserCheckpoints( userId, _rmDF = rmdf152 ): #print("getUserCheckpoints(" + str(userId) + ")") # List of associated sessions sessionsList = getAllSessionsOfUser( _rmDF, userId, True ) #print("sessionsList=" + str(sessionsList)) # List all 'reach' events with those sessionIds. 
reachEvents = _rmDF[_rmDF['type']=='reach'].loc[:,perSessionRelevantColumns] perUser = reachEvents[reachEvents['sessionId'].isin(sessionsList['sessionId'].values)] perUser = perUser[perUser['section'].str.startswith('tutorial', na=False)] pd.Series(perUser['section'].unique()) Explanation: getUserCheckpoints tinkering End of explanation gformNonVal = getNonValidatedCheckpoints(userId) gformVal = getValidatedCheckpoints(userId) gameVal = getUserCheckpoints(userId) print(str(gformNonVal)) print() print(str(gformVal)) print() print(str(gameVal)) Explanation: getDiscrepancyGameGForm tinkering End of explanation randomguid = getRandomRedMetricsGUID() randomguid gformNonVal = getNonValidatedCheckpoints(randomguid) gformNonVal gformVal = getValidatedCheckpoints(randomguid) gformVal gameVal = getUserCheckpoints( randomguid ) gameVal Explanation: userId = getRandomRedMetricsGUID() userId = '"72002481-18a1-4de2-8749-553bbabe119e"' def getDiscrepancyGameGForm( userId ): if(hasAnswered(userId)): gformNonVal = getNonValidatedCheckpoints(userId) gformVal = getValidatedCheckpoints(userId) gameVal = getUserCheckpoints(userId) #sorted, unique values in series1 that are not in series2 #np.setdiff1d(series1.values, series2.values) #user has answered questions whose answer they haven't seen in the game gameNotEnough = pd.Series(np.setdiff1d(gformVal.values, gameVal.values)) #user has not answered questions whose answer they have seen in the game gformNotEnough = [] maxGameVal = '' if gameVal.values.size!=0: gameVal.values.max() for nonVal in gformNonVal.values: if nonVal &gt;= maxGameVal: gformNotEnough.append(nonVal) gformNotEnough = pd.Series(gformNotEnough) result = (gameNotEnough, gformNotEnough) else: result = ([],[]) result End of explanation #user has not answered questions whose answer they have seen in the game gformNotEnough = [] maxGameVal = '' Explanation: sorted, unique values in series1 that are not in series2 np.setdiff1d(series1.values, series2.values) user has answered questions whose answer they haven't seen in the game gameNotEnough = pd.Series(np.setdiff1d(gformVal.values, gameVal.values)) End of explanation test = getValidatedCheckpoints(localplayerguid) test maxValue = '' if (len(test) > 0): maxValue = test.values.max() maxValue getNonValidatedCheckpoints(localplayerguid) testlocalplayerguid = '7412a447-8177-48e9-82c5-cb31032f76a9' test = pd.DataFrame({ 'section' : ['tutorial1.Checkpoint00', 'tutorial1.Checkpoint01', 'tutorial1.Checkpoint02'], 'serverTime' : ['0', '1', '2'], 'firstReached' : ['0', '1', '2'], 'firstCompletionDuration' : ['0', '1', '2'], }) test #pd.DataFrame({ 'A' : 1., # 'B' : pd.Timestamp('20130102'), # 'C' : pd.Series(1,index=list(range(4)),dtype='float32'), # 'D' : np.array([3] * 4,dtype='int32'), # 'E' : pd.Categorical(["test","train","test","train"]), # 'F' : 'foo' }) Explanation: if gameVal.values.size!=0: gameVal.values.max() for nonVal in gformNonVal.values: if nonVal >= maxGameVal: gformNotEnough.append(nonVal) gformNotEnough = pd.Series(gformNotEnough) getDiscrepancyGameGForm( randomguid ) End of explanation # incomplete game #_userId = '958a0e85-1634-4559-bce6-d6af28b7e649' _userId = 'dfe8f036-8641-4d6c-8411-8a8346bb0402' #_userId = getRandomRedMetricsGUID() _sessionsList = [] _rmDF = rmdf152 # Returns a given user's checkpoints, the first server time at which they were reached, and completion time #def getCheckpointsTimesUser( _userId, _sessionsList = [], _rmDF = rmdf152 ): # List of associated sessions if( len(_sessionsList) == 0): _sessionsList = 
getAllSessionsOfUser( _rmDF, _userId, True ) # Call getCheckpointsTimes on all sessions associated with user, # then merge by taking oldest checkpoint completion _timedSections = pd.DataFrame(data=0, columns=timedSectionsReachedColumns,index=timedSectionsIndex) _timedSections["firstReached"] = pd.Timestamp(0, tz='utc') _timedSections["firstCompletionDuration"] = pd.Timedelta.max # merge # for each checkpoint reached, update if necessary for _sessionId in _sessionsList['sessionId']: _thisSessionTimes = getCheckpointsTimes( _sessionId ) for _checkpointName in _thisSessionTimes.index: if ((_thisSessionTimes.loc[_checkpointName, 'firstReached'] != pd.Timestamp(0, tz='utc')) and ((_timedSections.loc[_checkpointName, 'firstReached'] == pd.Timestamp(0, tz='utc')) or (_timedSections.loc[_checkpointName, 'firstReached'] > _thisSessionTimes.loc[_checkpointName, 'firstReached'])) ): _timedSections.loc[_checkpointName, 'firstReached'] = _thisSessionTimes.loc[_checkpointName, 'firstReached'] _timedSections.loc[_checkpointName, 'firstCompletionDuration'] = _thisSessionTimes.loc[_checkpointName, 'firstCompletionDuration'] _timedSections Explanation: getCheckpointsTimesUser tinkering End of explanation testUser = "3fe0632f-b218-41c3-adfd-27083f271c19" testSession = getRandomSessionGUID( _userId = testUser ) _rmDF[_rmDF['sessionId']==sessionId] length = 1 allUserIds = np.array(rmdf152['userId'].unique()) allUserIds = [i for i in allUserIds if not i in ['nan', np.nan, 'null']] for user in allUserIds: testUser = user #getRandomGFormGUID() testSession = getRandomSessionGUID( _userId = testUser ) #testUser = '8172f20e-c29b-4fda-9245-61ab05a84792' if testSession != '': sessionId = testSession #print(sessionId) _rmDF = rmdf152 # Returns a given session's total playtime and day count #def getPlayedTimeSession( sessionId, _rmDF = rmdf152 ): sessionEvents = _rmDF[_rmDF['sessionId']==sessionId] sessionTimesTutorial = sessionEvents[sessionEvents['section'].str.startswith('tutorial', na=False)]['userTime'] #sessionTimesTutorial = sessionTimesTutorial.groupby(sessionTimesTutorial).diff() sessionTimesTutorial.index = sessionTimesTutorial.values sessionTimesTutorial = sessionTimesTutorial.groupby(pd.TimeGrouper('D')).agg({ "start": np.min, "end": np.max }) #, pd.TimeGrouper('D') #sessionEventsSandbox = sessionEvents[sessionEvents['section'].str.startswith('sandbox', na=False)] #print([0,0]) #type(sessionTimesTutorial),sessionTimesTutorial,testUser length = len(sessionTimesTutorial.index) if (length > 1): print("user = " + str(testUser) + " session = " + str(testSession) + " length = " + str(length)) # checks #usersWithSeveralSessions = [] #for userId in allUserIds: # count = countSessions(userId, False, [], rmdf152) # if(count > 3): # usersWithSeveralSessions.append(userId) #print("userId="+str(userId)+" : " + str(count)) #rmdf152[rmdf152['userId']=='57e2b6b7-c308-4492-9228-f753d5b3044c']['customData.platform'].unique() #rmdf152[rmdf152['userId']=='57e2b6b7-c308-4492-9228-f753d5b3044c'] #userId = 'deb089c0-9be3-4b75-9b27-28963c77b10c' #for userId in usersWithSeveralSessions: # print(str(userId)+" :") # for sessionId in getAllSessionsOfUser(rmdf152, userId)['sessionId']: # print(str(sessionId)+" : " + str(getPlayedTimeSession(sessionId))) # print() Explanation: print('second pass') previous = '' for checkpointName in thisSessionTimes.index: if(checkpointName != "tutorial1.Checkpoint00"): if( timedSections.loc[previous,"firstReached"] != pd.Timestamp(0) and timedSections.loc[checkpointName,"firstReached"] != 
pd.Timestamp(0) ): timedSections.loc[checkpointName,"firstCompletionDuration"] =\ timedSections.loc[checkpointName,"firstReached"] - timedSections.loc[previous,"firstReached"] previous = checkpointName timedSections["firstCompletionDuration"] = timedSections["firstReached"].diff() timedSections getPlayedTimeSession tinkering End of explanation testSession = "7ea5d49a-14f3-40b8-b9c4-d3d52eb0c4e1" #4 #sessionEvents = pd.DataFrame(columns=_rmDF.columns) sessionEvents = rmdf152[rmdf152['sessionId']==testSession] mode = 'tutorial' #def getPlayedTimeSessionMode(sessionEvents, mode): sessionTimes = sessionEvents[sessionEvents['section'].str.startswith(mode, na=False)]['userTime'] sessionTimes.index = sessionTimes.values daysSpent = set() totalSpentTime = pd.Timedelta(0) if(len(sessionTimes) > 0): sessionTimes = sessionTimes.groupby(pd.TimeGrouper('D')).agg({ "start": np.min, "end": np.max }) daysSpent = set(sessionTimes.index) sessionTimes['played'] = sessionTimes['end'] - sessionTimes['start'] totalSpentTime = sessionTimes['played'].sum() {'daysSpent': daysSpent, 'totalSpentTime': totalSpentTime} getPlayedTimeSessionMode(sessionEvents, 'tutorial') getPlayedTimeSessionMode(pd.DataFrame(columns=_rmDF.columns), 'tutorial') Explanation: getPlayedTimeSessionMode tinkering End of explanation #testUser = user #getRandomGFormGUID() #testSession = getRandomSessionGUID( _userId = testUser ) #testUser = '8172f20e-c29b-4fda-9245-61ab05a84792' #testSession = "1d16f3f2-2f76-49ee-bb37-9742ed54287a" #5 + NaT testSession = "7ea5d49a-14f3-40b8-b9c4-d3d52eb0c4e1" #4 sessionId = testSession #print(sessionId) _rmDF = rmdf152 # Returns a given session's total playtime and day count #def getPlayedTimeSession( sessionId, _rmDF = rmdf152 ): sessionEvents = _rmDF[_rmDF['sessionId']==sessionId] tutorialTime = getPlayedTimeSessionMode(sessionEvents, 'tutorial') sandboxTime = getPlayedTimeSessionMode(sessionEvents, 'sandbox') {'tutorial': tutorialTime, 'sandbox': sandboxTime} getPlayedTimeSession('', _rmDF = _rmDF) Explanation: getPlayedTimeSession tinkering End of explanation a = getPlayedTimeSession("054a96ca-c2f1-4967-9b77-6ce4c33c9d33") b = getPlayedTimeSession("e5421d6c-2f55-4279-8d82-bbafbe16d635") a,b c = {'sandbox': { 'daysSpent': { pd.Timestamp('2017-06-07 00:00:00', freq='D'), pd.Timestamp('2017-06-08 00:00:00', freq='D'), pd.Timestamp('2017-06-09 00:00:00', freq='D'), pd.Timestamp('2017-06-10 00:00:00', freq='D'), pd.Timestamp('2017-06-11 00:00:00', freq='D'), }, 'totalSpentTime': pd.Timedelta('0 days 00:09:34.662000') }, 'tutorial': { 'daysSpent': { pd.Timestamp('2017-06-07 00:00:00', freq='D'), pd.Timestamp('2017-06-08 00:00:00', freq='D'), pd.Timestamp('2017-06-09 00:00:00', freq='D'), pd.Timestamp('2017-06-10 00:00:00', freq='D'), pd.Timestamp('2017-06-11 00:00:00', freq='D'), pd.Timestamp('2017-06-12 00:00:00', freq='D'), }, 'totalSpentTime': pd.Timedelta('0 days 00:00:11.007000') } } d = {'sandbox': { 'daysSpent': { pd.Timestamp('2017-06-06 00:00:00', freq='D'), pd.Timestamp('2017-06-07 00:00:00', freq='D'), pd.Timestamp('2017-06-08 00:00:00', freq='D'), pd.Timestamp('2017-06-09 00:00:00', freq='D'), pd.Timestamp('2017-06-10 00:00:00', freq='D'), }, 'totalSpentTime': pd.Timedelta('0 days 00:09:34.662000') }, 'tutorial': { 'daysSpent': { pd.Timestamp('2017-06-05 00:00:00', freq='D'), pd.Timestamp('2017-06-06 00:00:00', freq='D'), pd.Timestamp('2017-06-07 00:00:00', freq='D'), pd.Timestamp('2017-06-08 00:00:00', freq='D'), pd.Timestamp('2017-06-09 00:00:00', freq='D'), pd.Timestamp('2017-06-10 00:00:00', 
freq='D'), }, 'totalSpentTime': pd.Timedelta('0 days 00:00:11.007000') } } c['tutorial']['daysSpent'] | d['tutorial']['daysSpent'] #a = getPlayedTimeSession("054a96ca-c2f1-4967-9b77-6ce4c33c9d33") #b = getPlayedTimeSession("e5421d6c-2f55-4279-8d82-bbafbe16d635") a = c b = d #print(a['sandbox']['daysSpent'], a['sandbox']['totalSpentTime'],\ #a['tutorial']['daysSpent'], a['tutorial']['totalSpentTime'],\ #b['sandbox']['daysSpent'], b['sandbox']['totalSpentTime'],\ #b['tutorial']['daysSpent'], b['tutorial']['totalSpentTime']) #print(a,b) #def mergePlayedTimes(a, b): result = a.copy() for gameMode in a: result[gameMode] = { 'totalSpentTime': a[gameMode]['totalSpentTime'] + b[gameMode]['totalSpentTime'], 'daysSpent': np.unique(a[gameMode]['daysSpent'] | b[gameMode]['daysSpent']), } result Explanation: mergePlayedTimes tinkering and test End of explanation #userId = 'ae72a4cb-244e-475c-80ea-11a410266645' userId = '6bc0f58c-26ed-4be9-9596-2a9ad8d11d67' _sessionsList = [] _rmDF = rmdf152 # Returns a given user's total playtime and day count #def getPlayedTimeUser( userId, _sessionsList = [], _rmDF = rmdf152 ): result = getPlayedTimeSession('', _rmDF = _rmDF) if(len(_sessionsList) == 0): _sessionsList = getAllSessionsOfUser(_rmDF, userId) for session in _sessionsList['sessionId']: #for session in ["e5421d6c-2f55-4279-8d82-bbafbe16d635","e5421d6c-2f55-4279-8d82-bbafbe16d635","e5421d6c-2f55-4279-8d82-bbafbe16d635"]: playedTimes = getPlayedTimeSession(session, _rmDF) result = mergePlayedTimes(result, playedTimes) result Explanation: getPlayedTimeUser tinkering End of explanation sessionId = "fab3ea03-6ff1-483f-a90a-74ff47d0b556" _rmDF = rmdf152 # Returns a given session's checkpoints, and death count #def getDeaths( sessionId, _rmDF = rmdf152 ): deathEvents = _rmDF[_rmDF['type']=='death'].loc[:,perSessionRelevantColumns] perSession = deathEvents[deathEvents['sessionId']==sessionId] perSession = perSession[perSession['section'].str.startswith('tutorial', na=False)] deathsSections = perSession.groupby("section").size().reset_index(name='deathsCount') deathsSections Explanation: getDeaths tinkering End of explanation userId = 'ae72a4cb-244e-475c-80ea-11a410266645' _rmDF = rmdf152 #def getDeathsUser( userId, _rmDF = rmdf152 ): #print("getDeathsUser(" + str(userId) + ")") # List of associated sessions sessionsList = getAllSessionsOfUser( _rmDF, userId, True ) #print("sessionsList=" + str(sessionsList)) # Call getDeaths on all sessions associated with user, # then merge by adding deathsSections = pd.DataFrame(0, columns=timedSectionsDeathsColumns,index=timedSectionsIndex) for sessionId in sessionsList['sessionId']: #print("processing user " + str(userId) + " with session " + str(sessionId)) deaths = getDeaths( sessionId ) # merge # for each checkpoint reached, update if necessary for index in deaths.index: #print("index=" + str(index)) checkpointName = deaths['section'][index] #print("checkpointName=" + str(checkpointName)) #print("deaths['deathsCount']["+str(index)+"]=" + str(deaths['deathsCount'][index])) deathsSections['deathsCount'][checkpointName] = deathsSections['deathsCount'][checkpointName] + deaths['deathsCount'][index] deathsSections Explanation: getDeathsUser tinkering End of explanation # craftEventCodes = list(["equip","unequip","add","remove"]) eventCode = 'equip' userId = getRandomRedMetricsGUID() sessionsList=[] _rmDF = rmdf152 #def getUserCraftEventsTotal( eventCode, userId, sessionsList=[], _rmDF = rmdf152 ): if(len(sessionsList) == 0): sessionsList = getAllSessionsOfUser( _rmDF, userId, 
True ) result = 0 if eventCode in craftEventCodes: eventType = craftEventsColumns['eventType'][eventCode] events = _rmDF[_rmDF['type']==eventType] events = events[events[craftEventsColumns['column'][eventCode]].notnull()] perSession = events[events['sessionId'].isin(sessionsList['sessionId'])] result = len(perSession) else: print("incorrect event code '" + eventCode + "'") result, userId Explanation: getUserCraftEventsTotal tinkering End of explanation eventType = 'death' #userId = 'e2f8d5e4-cccd-4d1a-909b-c9c92f6b83c1' userId = getRandomRedMetricsGUID() sessionsList=[] _rmDF = rmdf152 #def getUserEventsTotal( eventType, userId, sessionsList=[], _rmDF = rmdf152 ): if(len(sessionsList) == 0): sessionsList = getAllSessionsOfUser( _rmDF, userId, True ) sessionEvents = _rmDF[_rmDF['type']==eventType] perSession = sessionEvents[sessionEvents['sessionId'].isin(sessionsList['sessionId'])] len(perSession) Explanation: getUserEventsTotal tinkering End of explanation userId = getSurveysOfBiologists(gform)[localplayerguidkey].iloc[2] #sample = gform[gform[localplayerguidkey] == userId] _rmDF[_rmDF['sessionId'] == _sessionId]['type'].value_counts() _rmDF = rmdf152 sessions = getAllSessionsOfUser( _rmDF, userId, True ) _sessionId = sessions['sessionId'].iloc[0] # for per-session, manual analysis #def getSessionDataPreview( _sessionId, _rmDF ): _logs = _rmDF[_rmDF['sessionId'] == _sessionId] _timedEvents = _logs['userTime'] _timedEvents = _timedEvents.sort_values() _platform = _logs['customData.platform'].dropna().values if(len(_platform) > 0): _platform = _platform[0] else: _platform = '' _events = _logs['type'].value_counts() result = { 'first' : _timedEvents.iloc[0], 'last' : _timedEvents.iloc[-1], 'platform' : _platform, 'events' : _events } print(result) events, first, last, platform, = result.values() first, last, platform, events Explanation: getSessionDataPreview tinkering End of explanation userId = getSurveysOfBiologists(gform)[localplayerguidkey].iloc[2] #sample = gform[gform[localplayerguidkey] == userId] events, first, last, platform events sdp = getSessionDataPreview(_sessionId, _rmDF = _rmDF) sdp #userId = getRandomGFormGUID() _rmDF = rmdf152 scoreLabel = 'score' # for per-user, manual analysis #def getUserDataPreview( userId, _rmDF = rmdf152 ): result = pd.DataFrame( columns = [userId] ) # [ ] RM result.loc['REDMETRICS ANALYSIS'] = ' ' # [ ] sessions count sessions = getAllSessionsOfUser( _rmDF, userId, True ) result.loc['sessions', userId] = len(sessions) # [ ] first event date result.loc['firstEvent', userId] = getFirstEventDate( userId ) # [ ] time played # [ ] dates played # [ ] first played, last played sessionIds = sessions['sessionId'] for _sessionIdIndex in range(0, len(sessions['sessionId'])): _sessionId = sessionIds.iloc[_sessionIdIndex] sdp = getSessionDataPreview(_sessionId, _rmDF = _rmDF) result.loc['session' + str(_sessionIdIndex) + ' platform',userId] = sdp['platform'] result.loc['session' + str(_sessionIdIndex) + ' first',userId] = sdp['first'] result.loc['session' + str(_sessionIdIndex) + ' last',userId] = sdp['last'] result.loc['session' + str(_sessionIdIndex) + ' events',userId] = str(sdp['events']) # [ ] best chapter # [ ] counts of events: deaths, crafts,... 
# [ ] GF result.loc['GFORM ANALYSIS'] = ' ' # [ ] score(s) score = getScore( userId ) for _temporality in score.columns: _score = score.loc[scoreLabel,_temporality] if(len(_score)>0): if(_temporality == 'before'): _score = _score[len(_score)-1] else: _score = _score[0] else: _score = np.nan result.loc[scoreLabel+_temporality,userId] = _score # [ ] progression # [ ] demographics result.loc[scoreLabel+'s',userId] = str(score.values) gfDataPreview = getGFormDataPreview(userId, gform) features = {1: 'date', 2: 'temporality RM', 3: 'temporality GF', 4: 'score', 5: 'genderAge'} for key in gfDataPreview: for featureKey in features: result.loc[key + ' ' + features[featureKey]] = str(gfDataPreview[key][features[featureKey]]) index = 0 for match in gfDataPreview[key]['demographic matches']: result.loc[key + ' demographic match ' + str(index)] = repr(match) index += 1 result answerTemporalities #getUserDataPreview(undefinedId) for undefinedId in gform[gform['Temporality'] == answerTemporalities[2]][localplayerguidkey]: getUserDataPreview(undefinedId) Explanation: getUserDataPreview tinkering End of explanation rdfcrafttest = pd.read_csv("../../data/2017-10-10.craft-test.csv") rdfcrafttest = getNormalizedRedMetricsCSV(rdfcrafttest) rdfcrafttest craftEventsColumns craftEventsColumns['column']['equip'] type(craftEventCodes) test = np.unique(np.concatenate((perSessionRelevantColumns, [craftEventsColumns['column']['equip']]))) test # user 344 adds #'e2f8d5e4-cccd-4d1a-909b-c9c92f6b83c1' # one of its sessions # fab3ea03-6ff1-483f-a90a-74ff47d0b556 # # user 22 adds #'8d352896-a3f1-471c-8439-0f426df901c1' # # session test craftSessionTest = getSectionsCraftEvents('equip', "fab3ea03-6ff1-483f-a90a-74ff47d0b556") # user test craftUserTest = getUserSectionsCraftEvents('equip', 'e2f8d5e4-cccd-4d1a-909b-c9c92f6b83c1') # user count test craftUserTestCount = getUserSectionsCraftEventsTotal('equip', 'e2f8d5e4-cccd-4d1a-909b-c9c92f6b83c1') craftUserTestCount print("craftSessionTest=" + str(craftSessionTest)) print("craftUserTest=" + str(craftUserTest)) print("craftUserTestCount=" + str(craftUserTestCount)) columnName = craftEventsColumns['column']['equip'] columnName result = list([]) for entry in rmdf152[columnName]: if not pd.isnull(entry): result.append(entry) result #rmdf152[columnName].notnull() sectionsEvents = pd.DataFrame(0, columns=eventSectionsCountColumns, index=range(0)) sectionsEvents #events = rmdf152[rmdf152['type']==eventType and not rmdf152[craftEventsColumns['column'][eventCode]].isnull()].loc[:,perSessionRelevantColumns] Explanation: Checks on user metrics <a id=checkusermetrics /> Sequence of actions sandbox crafting equip device unequip device add PCONS add 6 add Ampicillin add T &gt; auto craft &gt; auto equip remove T &gt; auto unequip add T &gt; auto equip add 12 &gt; auto craft &gt; auto equip add 6 &gt; auto equip exit crafting dies &gt; auto unequip set language to english End of explanation testUser = getRandomGFormGUID() print(testUser) #testResult = getUserDataVector(testUser) #testResult userId = getRandomGFormGUID() #userId = '1f27519a-971f-4e39-bac7-9920bfc4b05b' #undefined temporality #userId = 'e2f8d5e4-cccd-4d1a-909b-c9c92f6b83c1' #has not answered print(userId) _source = correctAnswers _rmDF = rmdf152 #def getUserDataVector( userId, _source = [], _rmDF = rmdf152 ): sessionsList = getAllSessionsOfUser( _rmDF, userId, True ) columnName = str(userId) data = pd.DataFrame(0, columns=[columnName],index=userDataVectorIndex) score = getScore( userId ) for _temporality in score.columns: 
_score = score.loc[scoreLabel,_temporality] if(len(_score)>0): if(_temporality == 'before'): _score = _score[len(_score)-1] else: _score = _score[0] else: _score = np.nan data.loc[scoreLabel+_temporality,columnName] = _score data.loc['sessionsCount',columnName] = countSessions( userId, False, sessionsList, _rmDF = _rmDF) for eventName in simpleEvents: if eventName in craftEventCodes: data.loc[eventName,columnName] = getUserCraftEventsTotal(eventName, userId, sessionsList) else: data.loc[eventName,columnName] = getUserEventsTotal(eventName, userId, sessionsList) data.loc['maxChapter', columnName] = int(pd.Series(data = 'tutorial1.Checkpoint00')\ .append(getUserCheckpoints(userId, _rmDF = _rmDF))\ .max()[-2:]) # time spent on each chapter times = getCheckpointsTimesUser(userId) completionTime = 0 chapterTime = pd.Series() for chapter in timedSectionsIndex: deltaTime = times.loc[chapter,"firstCompletionDuration"].total_seconds() chapterTime.loc[int(chapter[-2:])] = deltaTime completionTime += deltaTime # efficiency = (1 + #unlockedchapters)/(time * (1 + #death + #craft + #add + #equip)) data.loc['efficiency', columnName] = np.log(( 1 + data.loc['maxChapter', columnName] ) / \ (completionTime \ * ( 1\ + data.loc['death', columnName] \ + data.loc['craft', columnName]\ + data.loc['add', columnName]\ + data.loc['equip', columnName]\ )\ )) playedTime = getPlayedTimeUser(userId, _rmDF = _rmDF) data.loc['thoroughness', columnName] = \ data.loc['craft', columnName]\ * data.loc['pickup', columnName]\ * ( 1 + np.power(len(playedTime['sandbox']['daysSpent']),2)) totalSpentTime = playedTime['tutorial']['totalSpentTime'] + playedTime['sandbox']['totalSpentTime'] totalSpentDays = len(playedTime['tutorial']['daysSpent'] | playedTime['sandbox']['daysSpent']) data.loc['fun', columnName] = np.log(\ max(1,\ totalSpentTime.total_seconds() * np.power(totalSpentDays,2) )) data.loc['completionTime', columnName] = completionTime for time in chapterTime.index: data.loc[time,columnName] = chapterTime.loc[time] if(len(_source) != 0): if(hasAnswered(userId)): gformLine = gform[gform[localplayerguidkey] == userId] afters = gformLine[gformLine['Temporality'] == 'after'] if(len(afters) > 0): gformLine = afters.iloc[0] else: befores = gformLine[gformLine['Temporality'] == 'before'] if(len(befores) > 0): gformLine = befores.iloc[len(befores)-1] else: gformLine = gformLine.iloc[len(gformLine)-1] # add data from the gform: binary score on each question gformData = getBinarized(gformLine, _source = _source) for question in gformData.index: data.loc[question,columnName] = gformData.loc[question] else: print("warning: user " + userId + " has never answered the survey") print(str(data)) max((1,2)) max(1,(totalSpentTime.total_seconds()* np.power(totalSpentDays,2))) data.loc['fun', columnName] = np.log(max(1,totalSpentTime.total_seconds()* np.power(totalSpentDays,2))) #testUID = "bfdfd356-5d6f-4696-a2f1-c1dc338aa64b" # sessionsCount == 4 userId = getRandomGFormGUID() getUserDataVector(userId) sessionsCounts = getUserSessionsCounts(rmdf152) playersResponders = sessionsCounts[sessionsCounts['userId'].isin(getAllResponders())] len(sessionsCounts), len(playersResponders) playersResponders testUID = playersResponders[playersResponders['counts']==2]['userId'].values[0] answerTimestamps = gform[gform[localplayerguidkey] == testUID]['Timestamp'] Explanation: getUserDataVector tinkering End of explanation import pytz, datetime local = pytz.timezone ("Europe/Berlin") sample = getAllResponders() for userId in sample: sessions = 
getAllSessionsOfUser(rmdf152,userId) if(len(sessions) > 1): print("------------------user " + userId + " ------------------") print() answerTimestamps = gform[gform[localplayerguidkey] == userId]['Timestamp'] for sessionIndex in sessions.index: sessionId = sessions.loc[sessionIndex, 'sessionId'] _logs = rmdf152[rmdf152['sessionId'] == sessionId] _logs = _logs[_logs.index.isin(_logs['section'].dropna().index)] _timedEvents = _logs['userTime'] _timedEvents = _timedEvents.sort_values() print("session " + str(sessionIndex)) if(len(_timedEvents) > 0): print("\tstart: " + str(_timedEvents[0])) print("\tend: " + str(_timedEvents[-1])) print() for answerTimestampIndex in answerTimestamps.index: survey = answerTimestamps.loc[answerTimestampIndex] utc_dt = survey.astimezone (pytz.utc) print("\tsurvey" + str(answerTimestampIndex)) print("\t" + str(utc_dt)) if(len(_timedEvents) > 0): if((_timedEvents[0] > utc_dt) and (_timedEvents[-1] > utc_dt)): print("\tanswered before playing") elif((_timedEvents[0] < utc_dt) and (_timedEvents[-1] < utc_dt)): print("\tanswered after playing") else: print("\tundefined: overlap") print("\t" + str((_timedEvents[0] > utc_dt, _timedEvents[-1] > utc_dt))) else: print("\tundefined: no event") print() print() print() print() print() _logs = rmdf152[rmdf152['sessionId'] == sessionId][['type', 'userTime', 'section']].values[0] _logs _timedEvents[0], _timedEvents[-1], survey survey < _timedEvents[0], survey < _timedEvents[-1] str((_timedEvents[0] < survey, _timedEvents[-1] > survey)) times eventName getUserSectionsEvents( 'start', userId, sessionsList ) perSession = perSession[perSession['section'].str.startswith('tutorial', na=False)] Explanation: Making sense of temporality of answers of multi-session users What is the behaviour of users who played multiple times? End of explanation
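The before/after/overlap test in the loop above can be distilled into a small helper. This is only a sketch of that classification rule: the timestamps below are made up, not taken from rmdf152 or the survey data.

```python
import pandas as pd

def classify_answer(survey_utc, first_event, last_event):
    """Classify a questionnaire answer relative to one play session."""
    if first_event > survey_utc and last_event > survey_utc:
        return "answered before playing"
    if first_event < survey_utc and last_event < survey_utc:
        return "answered after playing"
    return "undefined: overlap"

# Hypothetical, timezone-aware example values.
first = pd.Timestamp("2017-06-07 10:00:00", tz="UTC")
last = pd.Timestamp("2017-06-07 10:42:00", tz="UTC")
survey = pd.Timestamp("2017-06-07 11:15:00", tz="UTC")

print(classify_answer(survey, first, last))  # answered after playing
```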
2,096
Given the following text description, write Python code to implement the functionality described below step by step Description: Planar data classification with one hidden layer Welcome to your week 3 programming assignment. It's time to build your first neural network, which will have a hidden layer. You will see a big difference between this model and the one you implemented using logistic regression. You will learn how to Step1: 2 - Dataset First, let's get the dataset you will work on. The following code will load a "flower" 2-class dataset into variables X and Y. Step2: Visualize the dataset using matplotlib. The data looks like a "flower" with some red (label y=0) and some blue (y=1) points. Your goal is to build a model to fit this data. Step3: You have Step4: Expected Output Step5: You can now plot the decision boundary of these models. Run the code below. Step7: Expected Output Step9: Expected Output (these are not the sizes you will use for your network, they are just used to assess the function you've just coded). <table style="width Step11: Expected Output Step13: Expected Output Step15: Expected Output Step17: Expected output Step19: Expected Output Step21: Expected Output Step22: Expected Output Step23: Expected Output Step24: Expected Output Step25: Interpretation
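Two pieces of math that the steps below lean on are worth writing out for reference. First, because the hidden layer uses tanh, its derivative can be expressed through the activation itself, $$\frac{d}{dz}\tanh(z) = 1 - \tanh^2(z) = 1 - a^2,$$ which is why the backward pass computes this term as (1 - np.power(A1, 2)). Second, the vectorized quantities computed in the backpropagation step, written with the dVar naming used in the code (with $m$ the number of examples, $*$ elementwise multiplication, and the square on $A^{[1]}$ also elementwise), are: $$dZ^{[2]} = A^{[2]} - Y, \qquad dW^{[2]} = \frac{1}{m} dZ^{[2]} A^{[1]T}, \qquad db^{[2]} = \frac{1}{m}\sum_{i=1}^{m} dZ^{[2](i)}$$ $$dZ^{[1]} = W^{[2]T} dZ^{[2]} * \left(1 - A^{[1]2}\right), \qquad dW^{[1]} = \frac{1}{m} dZ^{[1]} X^{T}, \qquad db^{[1]} = \frac{1}{m}\sum_{i=1}^{m} dZ^{[1](i)}$$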
Python Code: # Package imports import numpy as np import matplotlib.pyplot as plt from testCases import * import sklearn import sklearn.datasets import sklearn.linear_model from planar_utils import plot_decision_boundary, sigmoid, load_planar_dataset, load_extra_datasets %matplotlib inline np.random.seed(1) # set a seed so that the results are consistent Explanation: Planar data classification with one hidden layer Welcome to your week 3 programming assignment. It's time to build your first neural network, which will have a hidden layer. You will see a big difference between this model and the one you implemented using logistic regression. You will learn how to: - Implement a 2-class classification neural network with a single hidden layer - Use units with a non-linear activation function, such as tanh - Compute the cross entropy loss - Implement forward and backward propagation 1 - Packages Let's first import all the packages that you will need during this assignment. - numpy is the fundamental package for scientific computing with Python. - sklearn provides simple and efficient tools for data mining and data analysis. - matplotlib is a library for plotting graphs in Python. - testCases provides some test examples to assess the correctness of your functions - planar_utils provide various useful functions used in this assignment End of explanation X, Y = load_planar_dataset() # Y = Y[0,:] Explanation: 2 - Dataset First, let's get the dataset you will work on. The following code will load a "flower" 2-class dataset into variables X and Y. End of explanation print(X.shape) # Visualize the data: plt.scatter(X[0, :], X[1, :], c=Y[0,], s=40, cmap=plt.cm.Spectral); Explanation: Visualize the dataset using matplotlib. The data looks like a "flower" with some red (label y=0) and some blue (y=1) points. Your goal is to build a model to fit this data. End of explanation ### START CODE HERE ### (≈ 3 lines of code) shape_X = X.shape shape_Y = Y.shape m = shape_Y[1] # training set size ### END CODE HERE ### print ('The shape of X is: ' + str(shape_X)) print ('The shape of Y is: ' + str(shape_Y)) print ('I have m = %d training examples!' % (m)) Explanation: You have: - a numpy-array (matrix) X that contains your features (x1, x2) - a numpy-array (vector) Y that contains your labels (red:0, blue:1). Lets first get a better sense of what our data is like. Exercise: How many training examples do you have? In addition, what is the shape of the variables X and Y? Hint: How do you get the shape of a numpy array? (help) End of explanation # Train the logistic regression classifier clf = sklearn.linear_model.LogisticRegressionCV(); clf.fit(X.T, Y[0,:].T); Explanation: Expected Output: <table style="width:20%"> <tr> <td>**shape of X**</td> <td> (2, 400) </td> </tr> <tr> <td>**shape of Y**</td> <td>(1, 400) </td> </tr> <tr> <td>**m**</td> <td> 400 </td> </tr> </table> 3 - Simple Logistic Regression Before building a full neural network, lets first see how logistic regression performs on this problem. You can use sklearn's built-in functions to do that. Run the code below to train a logistic regression classifier on the dataset. 
End of explanation # Plot the decision boundary for logistic regression plot_decision_boundary(lambda x: clf.predict(x), X, Y[0,:]) plt.title("Logistic Regression") # Print accuracy LR_predictions = clf.predict(X.T) print ('Accuracy of logistic regression: %d ' % float((np.dot(Y,LR_predictions) + np.dot(1-Y,1-LR_predictions))/float(Y.size)*100) + '% ' + "(percentage of correctly labelled datapoints)") Explanation: You can now plot the decision boundary of these models. Run the code below. End of explanation # GRADED FUNCTION: layer_sizes def layer_sizes(X, Y): Arguments: X -- input dataset of shape (input size, number of examples) Y -- labels of shape (output size, number of examples) Returns: n_x -- the size of the input layer n_h -- the size of the hidden layer n_y -- the size of the output layer ### START CODE HERE ### (≈ 3 lines of code) n_x = X.shape[0] # size of input layer n_h = 4 n_y = Y.shape[0] # size of output layer ### END CODE HERE ### return (n_x, n_h, n_y) X_assess, Y_assess = layer_sizes_test_case() (n_x, n_h, n_y) = layer_sizes(X_assess, Y_assess) print("The size of the input layer is: n_x = " + str(n_x)) print("The size of the hidden layer is: n_h = " + str(n_h)) print("The size of the output layer is: n_y = " + str(n_y)) Explanation: Expected Output: <table style="width:20%"> <tr> <td>**Accuracy**</td> <td> 47% </td> </tr> </table> Interpretation: The dataset is not linearly separable, so logistic regression doesn't perform well. Hopefully a neural network will do better. Let's try this now! 4 - Neural Network model Logistic regression did not work well on the "flower dataset". You are going to train a Neural Network with a single hidden layer. Here is our model: <img src="images/classification_kiank.png" style="width:600px;height:300px;"> Mathematically: For one example $x^{(i)}$: $$z^{[1] (i)} = W^{[1]} x^{(i)} + b^{[1] (i)}\tag{1}$$ $$a^{[1] (i)} = \tanh(z^{[1] (i)})\tag{2}$$ $$z^{[2] (i)} = W^{[2]} a^{[1] (i)} + b^{[2] (i)}\tag{3}$$ $$\hat{y}^{(i)} = a^{[2] (i)} = \sigma(z^{ [2] (i)})\tag{4}$$ $$y^{(i)}_{prediction} = \begin{cases} 1 & \mbox{if } a^{2} > 0.5 \ 0 & \mbox{otherwise } \end{cases}\tag{5}$$ Given the predictions on all the examples, you can also compute the cost $J$ as follows: $$J = - \frac{1}{m} \sum\limits_{i = 0}^{m} \large\left(\small y^{(i)}\log\left(a^{[2] (i)}\right) + (1-y^{(i)})\log\left(1- a^{[2] (i)}\right) \large \right) \small \tag{6}$$ Reminder: The general methodology to build a Neural Network is to: 1. Define the neural network structure ( # of input units, # of hidden units, etc). 2. Initialize the model's parameters 3. Loop: - Implement forward propagation - Compute loss - Implement backward propagation to get the gradients - Update parameters (gradient descent) You often build helper functions to compute steps 1-3 and then merge them into one function we call nn_model(). Once you've built nn_model() and learnt the right parameters, you can make predictions on new data. 4.1 - Defining the neural network structure Exercise: Define three variables: - n_x: the size of the input layer - n_h: the size of the hidden layer (set this to 4) - n_y: the size of the output layer Hint: Use shapes of X and Y to find n_x and n_y. Also, hard code the hidden layer size to be 4. 
End of explanation # GRADED FUNCTION: initialize_parameters def initialize_parameters(n_x, n_h, n_y): Argument: n_x -- size of the input layer n_h -- size of the hidden layer n_y -- size of the output layer Returns: params -- python dictionary containing your parameters: W1 -- weight matrix of shape (n_h, n_x) b1 -- bias vector of shape (n_h, 1) W2 -- weight matrix of shape (n_y, n_h) b2 -- bias vector of shape (n_y, 1) np.random.seed(2) # we set up a seed so that your output matches ours although the initialization is random. ### START CODE HERE ### (≈ 4 lines of code) # W1 = np.random.randn(n_h, n_x) * 0.01 b1 = np.zeros((n_h, 1)) W2 = np.random.randn(n_y, n_h) * 0.01 b2 = np.zeros((n_y, 1)) ### END CODE HERE ### assert (W1.shape == (n_h, n_x)) assert (b1.shape == (n_h, 1)) assert (W2.shape == (n_y, n_h)) assert (b2.shape == (n_y, 1)) parameters = {"W1": W1, "b1": b1, "W2": W2, "b2": b2} return parameters n_x, n_h, n_y = initialize_parameters_test_case() parameters = initialize_parameters(n_x, n_h, n_y) print("W1 = " + str(parameters["W1"])) print("b1 = " + str(parameters["b1"])) print("W2 = " + str(parameters["W2"])) print("b2 = " + str(parameters["b2"])) Explanation: Expected Output (these are not the sizes you will use for your network, they are just used to assess the function you've just coded). <table style="width:20%"> <tr> <td>**n_x**</td> <td> 5 </td> </tr> <tr> <td>**n_h**</td> <td> 4 </td> </tr> <tr> <td>**n_y**</td> <td> 2 </td> </tr> </table> 4.2 - Initialize the model's parameters Exercise: Implement the function initialize_parameters(). Instructions: - Make sure your parameters' sizes are right. Refer to the neural network figure above if needed. - You will initialize the weights matrices with random values. - Use: np.random.randn(a,b) * 0.01 to randomly initialize a matrix of shape (a,b). - You will initialize the bias vectors as zeros. - Use: np.zeros((a,b)) to initialize a matrix of shape (a,b) with zeros. End of explanation # GRADED FUNCTION: forward_propagation def forward_propagation(X, parameters): Argument: X -- input data of size (n_x, m) parameters -- python dictionary containing your parameters (output of initialization function) Returns: A2 -- The sigmoid output of the second activation cache -- a dictionary containing "Z1", "A1", "Z2" and "A2" # Retrieve each parameter from the dictionary "parameters" ### START CODE HERE ### (≈ 4 lines of code) W1 = parameters['W1'] b1 = parameters['b1'] W2 = parameters['W2'] b2 = parameters['b2'] ### END CODE HERE ### # Implement Forward Propagation to calculate A2 (probabilities) ### START CODE HERE ### (≈ 4 lines of code) Z1 = np.dot(W1, X) + b1 A1 = np.tanh(Z1) Z2 = np.dot(W2, A1) + b2 A2 = sigmoid(Z2) ### END CODE HERE ### assert(A2.shape == (1, X.shape[1])) cache = {"Z1": Z1, "A1": A1, "Z2": Z2, "A2": A2} return A2, cache X_assess, parameters = forward_propagation_test_case() A2, cache = forward_propagation(X_assess, parameters) # Note: we use the mean here just to make sure that your output matches ours. print(np.mean(cache['Z1']) ,np.mean(cache['A1']),np.mean(cache['Z2']),np.mean(cache['A2'])) Explanation: Expected Output: <table style="width:90%"> <tr> <td>**W1**</td> <td> [[-0.00416758 -0.00056267] [-0.02136196 0.01640271] [-0.01793436 -0.00841747] [ 0.00502881 -0.01245288]] </td> </tr> <tr> <td>**b1**</td> <td> [[ 0.] [ 0.] [ 0.] 
[ 0.]] </td> </tr> <tr> <td>**W2**</td> <td> [[-0.01057952 -0.00909008 0.00551454 0.02292208]]</td> </tr> <tr> <td>**b2**</td> <td> [[ 0.]] </td> </tr> </table> 4.3 - The Loop Question: Implement forward_propagation(). Instructions: - Look above at the mathematical representation of your classifier. - You can use the function sigmoid(). It is built-in (imported) in the notebook. - You can use the function np.tanh(). It is part of the numpy library. - The steps you have to implement are: 1. Retrieve each parameter from the dictionary "parameters" (which is the output of initialize_parameters()) by using parameters[".."]. 2. Implement Forward Propagation. Compute $Z^{[1]}, A^{[1]}, Z^{[2]}$ and $A^{[2]}$ (the vector of all your predictions on all the examples in the training set). - Values needed in the backpropagation are stored in "cache". The cache will be given as an input to the backpropagation function. End of explanation # GRADED FUNCTION: compute_cost def compute_cost(A2, Y, parameters): Computes the cross-entropy cost given in equation (13) Arguments: A2 -- The sigmoid output of the second activation, of shape (1, number of examples) Y -- "true" labels vector of shape (1, number of examples) parameters -- python dictionary containing your parameters W1, b1, W2 and b2 Returns: cost -- cross-entropy cost given equation (13) m = Y.shape[1] # number of example # Retrieve W1 and W2 from parameters ### START CODE HERE ### (≈ 2 lines of code) W1 = parameters['W1'] W2 = parameters['W2'] ### END CODE HERE ### # Compute the cross-entropy cost ### START CODE HERE ### (≈ 2 lines of code) cost = - np.mean((np.multiply(np.log(A2), Y) + np.multiply(np.log(1 - A2), 1 - Y))) ### END CODE HERE ### cost = np.squeeze(cost) # makes sure cost is the dimension we expect. # E.g., turns [[17]] into 17 assert(isinstance(cost, float)) return cost A2, Y_assess, parameters = compute_cost_test_case() print("cost = " + str(compute_cost(A2, Y_assess, parameters))) Explanation: Expected Output: <table style="width:55%"> <tr> <td> -0.000499755777742 -0.000496963353232 0.000438187450959 0.500109546852 </td> </tr> </table> Now that you have computed $A^{[2]}$ (in the Python variable "A2"), which contains $a^{2}$ for every example, you can compute the cost function as follows: $$J = - \frac{1}{m} \sum\limits_{i = 0}^{m} \large{(} \small y^{(i)}\log\left(a^{[2] (i)}\right) + (1-y^{(i)})\log\left(1- a^{[2] (i)}\right) \large{)} \small\tag{13}$$ Exercise: Implement compute_cost() to compute the value of the cost $J$. Instructions: - There are many ways to implement the cross-entropy loss. To help you, we give you how we would have implemented $- \sum\limits_{i=0}^{m} y^{(i)}\log(a^{2})$: python logprobs = np.multiply(np.log(A2),Y) cost = - np.sum(logprobs) # no need to use a for loop! (you can use either np.multiply() and then np.sum() or directly np.dot()). End of explanation # GRADED FUNCTION: backward_propagation def backward_propagation(parameters, cache, X, Y): Implement the backward propagation using the instructions above. Arguments: parameters -- python dictionary containing our parameters cache -- a dictionary containing "Z1", "A1", "Z2" and "A2". X -- input data of shape (2, number of examples) Y -- "true" labels vector of shape (1, number of examples) Returns: grads -- python dictionary containing your gradients with respect to different parameters m = X.shape[1] # First, retrieve W1 and W2 from the dictionary "parameters". 
### START CODE HERE ### (≈ 2 lines of code) W1 = parameters['W1'] W2 = parameters['W2'] ### END CODE HERE ### # Retrieve also A1 and A2 from dictionary "cache". ### START CODE HERE ### (≈ 2 lines of code) A1 = cache['A1'] A2 = cache['A2'] ### END CODE HERE ### # Backward propagation: calculate dW1, db1, dW2, db2. ### START CODE HERE ### (≈ 6 lines of code, corresponding to 6 equations on slide above) dZ2 = A2 - Y dW2 = np.dot(dZ2, A1.T) / m db2 = np.mean(dZ2, axis = 1, keepdims = True) dZ1 = np.dot(W2.T, dZ2) * (1 - np.power(A1,2)) dW1 = np.dot(dZ1, X.T) / m db1 = np.mean(dZ1, axis = 1, keepdims = True) ### END CODE HERE ### grads = {"dW1": dW1, "db1": db1, "dW2": dW2, "db2": db2} return grads parameters, cache, X_assess, Y_assess = backward_propagation_test_case() grads = backward_propagation(parameters, cache, X_assess, Y_assess) print ("dW1 = "+ str(grads["dW1"])) print ("db1 = "+ str(grads["db1"])) print ("dW2 = "+ str(grads["dW2"])) print ("db2 = "+ str(grads["db2"])) Explanation: Expected Output: <table style="width:20%"> <tr> <td>**cost**</td> <td> 0.692919893776 </td> </tr> </table> Using the cache computed during forward propagation, you can now implement backward propagation. Question: Implement the function backward_propagation(). Instructions: Backpropagation is usually the hardest (most mathematical) part in deep learning. To help you, here again is the slide from the lecture on backpropagation. You'll want to use the six equations on the right of this slide, since you are building a vectorized implementation. <img src="images/grad_summary.png" style="width:600px;height:300px;"> <!-- $\frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)} } = \frac{1}{m} (a^{[2](i)} - y^{(i)})$ $\frac{\partial \mathcal{J} }{ \partial W_2 } = \frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)} } a^{[1] (i) T} $ $\frac{\partial \mathcal{J} }{ \partial b_2 } = \sum_i{\frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)}}}$ $\frac{\partial \mathcal{J} }{ \partial z_{1}^{(i)} } = W_2^T \frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)} } * ( 1 - a^{[1] (i) 2}) $ $\frac{\partial \mathcal{J} }{ \partial W_1 } = \frac{\partial \mathcal{J} }{ \partial z_{1}^{(i)} } X^T $ $\frac{\partial \mathcal{J} _i }{ \partial b_1 } = \sum_i{\frac{\partial \mathcal{J} }{ \partial z_{1}^{(i)}}}$ - Note that $*$ denotes elementwise multiplication. - The notation you will use is common in deep learning coding: - dW1 = $\frac{\partial \mathcal{J} }{ \partial W_1 }$ - db1 = $\frac{\partial \mathcal{J} }{ \partial b_1 }$ - dW2 = $\frac{\partial \mathcal{J} }{ \partial W_2 }$ - db2 = $\frac{\partial \mathcal{J} }{ \partial b_2 }$ !--> Tips: To compute dZ1 you'll need to compute $g^{[1]'}(Z^{[1]})$. Since $g^{[1]}(.)$ is the tanh activation function, if $a = g^{[1]}(z)$ then $g^{[1]'}(z) = 1-a^2$. So you can compute $g^{[1]'}(Z^{[1]})$ using (1 - np.power(A1, 2)). 
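As a quick illustration (this check is my own addition, not part of the graded assignment), the identity $g^{[1]'}(z) = 1 - \tanh(z)^2$ can be verified numerically against a central finite difference:
import numpy as np
# Hypothetical standalone sanity check of the tanh derivative identity used above.
z = np.linspace(-3, 3, 7)
a = np.tanh(z)
analytic = 1 - a**2                                          # g'(z) = 1 - tanh(z)^2
eps = 1e-6
numeric = (np.tanh(z + eps) - np.tanh(z - eps)) / (2 * eps)  # central difference approximation
print(np.allclose(analytic, numeric))                        # expected: True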
End of explanation # GRADED FUNCTION: update_parameters def update_parameters(parameters, grads, learning_rate = 1.2): Updates parameters using the gradient descent update rule given above Arguments: parameters -- python dictionary containing your parameters grads -- python dictionary containing your gradients Returns: parameters -- python dictionary containing your updated parameters # Retrieve each parameter from the dictionary "parameters" ### START CODE HERE ### (≈ 4 lines of code) W1 = parameters['W1'] b1 = parameters['b1'] W2 = parameters['W2'] b2 = parameters['b2'] ### END CODE HERE ### # Retrieve each gradient from the dictionary "grads" ### START CODE HERE ### (≈ 4 lines of code) dW1 = grads['dW1'] db1 = grads['db1'] dW2 = grads['dW2'] db2 = grads['db2'] ## END CODE HERE ### # Update rule for each parameter ### START CODE HERE ### (≈ 4 lines of code) W1 -= learning_rate * dW1 b1 -= learning_rate * db1 W2 -= learning_rate * dW2 b2 -= learning_rate * db2 ### END CODE HERE ### parameters = {"W1": W1, "b1": b1, "W2": W2, "b2": b2} return parameters parameters, grads = update_parameters_test_case() parameters = update_parameters(parameters, grads) print("W1 = " + str(parameters["W1"])) print("b1 = " + str(parameters["b1"])) print("W2 = " + str(parameters["W2"])) print("b2 = " + str(parameters["b2"])) Explanation: Expected output: <table style="width:80%"> <tr> <td>**dW1**</td> <td> [[ 0.01018708 -0.00708701] [ 0.00873447 -0.0060768 ] [-0.00530847 0.00369379] [-0.02206365 0.01535126]] </td> </tr> <tr> <td>**db1**</td> <td> [[-0.00069728] [-0.00060606] [ 0.000364 ] [ 0.00151207]] </td> </tr> <tr> <td>**dW2**</td> <td> [[ 0.00363613 0.03153604 0.01162914 -0.01318316]] </td> </tr> <tr> <td>**db2**</td> <td> [[ 0.06589489]] </td> </tr> </table> Question: Implement the update rule. Use gradient descent. You have to use (dW1, db1, dW2, db2) in order to update (W1, b1, W2, b2). General gradient descent rule: $ \theta = \theta - \alpha \frac{\partial J }{ \partial \theta }$ where $\alpha$ is the learning rate and $\theta$ represents a parameter. Illustration: The gradient descent algorithm with a good learning rate (converging) and a bad learning rate (diverging). Images courtesy of Adam Harley. <img src="images/sgd.gif" style="width:400;height:400;"> <img src="images/sgd_bad.gif" style="width:400;height:400;"> End of explanation # GRADED FUNCTION: nn_model def nn_model(X, Y, n_h, num_iterations = 10000, print_cost=False): Arguments: X -- dataset of shape (2, number of examples) Y -- labels of shape (1, number of examples) n_h -- size of the hidden layer num_iterations -- Number of iterations in gradient descent loop print_cost -- if True, print the cost every 1000 iterations Returns: parameters -- parameters learnt by the model. They can then be used to predict. np.random.seed(3) n_x = layer_sizes(X, Y)[0] n_y = layer_sizes(X, Y)[2] # Initialize parameters, then retrieve W1, b1, W2, b2. Inputs: "n_x, n_h, n_y". Outputs = "W1, b1, W2, b2, parameters". ### START CODE HERE ### (≈ 5 lines of code) parameters = initialize_parameters(n_x, n_h, n_y) W1 = parameters['W1'] b1 = parameters['b1'] W2 = parameters['W2'] b2 = parameters['b2'] ### END CODE HERE ### # Loop (gradient descent) import pdb for i in range(0, num_iterations): ### START CODE HERE ### (≈ 4 lines of code) # Forward propagation. Inputs: "X, parameters". Outputs: "A2, cache". A2, cache = forward_propagation(X, parameters) # Cost function. Inputs: "A2, Y, parameters". Outputs: "cost". 
cost = compute_cost(A2, Y, parameters) # Backpropagation. Inputs: "parameters, cache, X, Y". Outputs: "grads". grads = backward_propagation(parameters, cache, X, Y) # Gradient descent parameter update. Inputs: "parameters, grads". Outputs: "parameters". parameters = update_parameters(parameters, grads) ### END CODE HERE ### # Print the cost every 1000 iterations if print_cost and i % 1000 == 0: print ("Cost after iteration %i: %f" %(i, cost)) return parameters X_assess, Y_assess = nn_model_test_case() parameters = nn_model(X_assess, Y_assess, 4, num_iterations=10000, print_cost=False) print("W1 = " + str(parameters["W1"])) print("b1 = " + str(parameters["b1"])) print("W2 = " + str(parameters["W2"])) print("b2 = " + str(parameters["b2"])) Explanation: Expected Output: <table style="width:80%"> <tr> <td>**W1**</td> <td> [[-0.00643025 0.01936718] [-0.02410458 0.03978052] [-0.01653973 -0.02096177] [ 0.01046864 -0.05990141]]</td> </tr> <tr> <td>**b1**</td> <td> [[ -1.02420756e-06] [ 1.27373948e-05] [ 8.32996807e-07] [ -3.20136836e-06]]</td> </tr> <tr> <td>**W2**</td> <td> [[-0.01041081 -0.04463285 0.01758031 0.04747113]] </td> </tr> <tr> <td>**b2**</td> <td> [[ 0.00010457]] </td> </tr> </table> 4.4 - Integrate parts 4.1, 4.2 and 4.3 in nn_model() Question: Build your neural network model in nn_model(). Instructions: The neural network model has to use the previous functions in the right order. End of explanation # GRADED FUNCTION: predict def predict(parameters, X): Using the learned parameters, predicts a class for each example in X Arguments: parameters -- python dictionary containing your parameters X -- input data of size (n_x, m) Returns predictions -- vector of predictions of our model (red: 0 / blue: 1) # Computes probabilities using forward propagation, and classifies to 0/1 using 0.5 as the threshold. ### START CODE HERE ### (≈ 2 lines of code) A2, cache = forward_propagation(X, parameters) predictions = A2 >= 0.5 ### END CODE HERE ### return predictions parameters, X_assess = predict_test_case() predictions = predict(parameters, X_assess) print("predictions mean = " + str(np.mean(predictions))) Explanation: Expected Output: <table style="width:90%"> <tr> <td>**W1**</td> <td> [[-4.18494056 5.33220609] [-7.52989382 1.24306181] [-4.1929459 5.32632331] [ 7.52983719 -1.24309422]]</td> </tr> <tr> <td>**b1**</td> <td> [[ 2.32926819] [ 3.79458998] [ 2.33002577] [-3.79468846]]</td> </tr> <tr> <td>**W2**</td> <td> [[-6033.83672146 -6008.12980822 -6033.10095287 6008.06637269]] </td> </tr> <tr> <td>**b2**</td> <td> [[-52.66607724]] </td> </tr> </table> 4.5 Predictions Question: Use your model to predict by building predict(). Use forward propagation to predict results. Reminder: predictions = $y_{prediction} = \mathbb 1 \text{{activation > 0.5}} = \begin{cases} 1 & \text{if}\ activation > 0.5 \ 0 & \text{otherwise} \end{cases}$ As an example, if you would like to set the entries of a matrix X to 0 and 1 based on a threshold you would do: X_new = (X &gt; threshold) End of explanation # Build a model with a n_h-dimensional hidden layer parameters = nn_model(X, Y, n_h = 4, num_iterations = 10000, print_cost=True) # Plot the decision boundary plot_decision_boundary(lambda x: predict(parameters, x.T), X, Y[0,]) plt.title("Decision Boundary for hidden layer size " + str(4)) Explanation: Expected Output: <table style="width:40%"> <tr> <td>**predictions mean**</td> <td> 0.666666666667 </td> </tr> </table> It is time to run the model and see how it performs on a planar dataset. 
Run the following code to test your model with a single hidden layer of $n_h$ hidden units. End of explanation # Print accuracy predictions = predict(parameters, X) print ('Accuracy: %d' % float((np.dot(Y,predictions.T) + np.dot(1-Y,1-predictions.T))/float(Y.size)*100) + '%') Explanation: Expected Output: <table style="width:40%"> <tr> <td>**Cost after iteration 9000**</td> <td> 0.218607 </td> </tr> </table> End of explanation # This may take about 2 minutes to run plt.figure(figsize=(16, 32)) hidden_layer_sizes = [1, 2, 3, 4, 5, 20, 50] for i, n_h in enumerate(hidden_layer_sizes): plt.subplot(5, 2, i+1) plt.title('Hidden Layer of size %d' % n_h) parameters = nn_model(X, Y, n_h, num_iterations = 5000) plot_decision_boundary(lambda x: predict(parameters, x.T), X, Y[0,]) predictions = predict(parameters, X) accuracy = float((np.dot(Y,predictions.T) + np.dot(1-Y,1-predictions.T))/float(Y.size)*100) print ("Accuracy for {} hidden units: {} %".format(n_h, accuracy)) Explanation: Expected Output: <table style="width:15%"> <tr> <td>**Accuracy**</td> <td> 90% </td> </tr> </table> Accuracy is really high compared to Logistic Regression. The model has learnt the leaf patterns of the flower! Neural networks are able to learn even highly non-linear decision boundaries, unlike logistic regression. Now, let's try out several hidden layer sizes. 4.6 - Tuning hidden layer size (optional/ungraded exercise) Run the following code. It may take 1-2 minutes. You will observe different behaviors of the model for various hidden layer sizes. End of explanation # Datasets noisy_circles, noisy_moons, blobs, gaussian_quantiles, no_structure = load_extra_datasets() datasets = {"noisy_circles": noisy_circles, "noisy_moons": noisy_moons, "blobs": blobs, "gaussian_quantiles": gaussian_quantiles} ### START CODE HERE ### (choose your dataset) dataset = "noisy_moons" ### END CODE HERE ### X, Y = datasets[dataset] X, Y = X.T, Y.reshape(1, Y.shape[0]) # make blobs binary if dataset == "blobs": Y = Y%2 # Visualize the data plt.scatter(X[0, :], X[1, :], c=Y[0,], s=40, cmap=plt.cm.Spectral); Explanation: Interpretation: - The larger models (with more hidden units) are able to fit the training set better, until eventually the largest models overfit the data. - The best hidden layer size seems to be around n_h = 5. Indeed, a value around here seems to fits the data well without also incurring noticable overfitting. - You will also learn later about regularization, which lets you use very large models (such as n_h = 50) without much overfitting. Optional questions: Note: Remember to submit the assignment but clicking the blue "Submit Assignment" button at the upper-right. Some optional/ungraded questions that you can explore if you wish: - What happens when you change the tanh activation for a sigmoid activation or a ReLU activation? - Play with the learning_rate. What happens? - What if we change the dataset? (See part 5 below!) <font color='blue'> You've learnt to: - Build a complete neural network with a hidden layer - Make a good use of a non-linear unit - Implemented forward propagation and backpropagation, and trained a neural network - See the impact of varying the hidden layer size, including overfitting. Nice work! 5) Performance on other datasets If you want, you can rerun the whole notebook (minus the dataset part) for each of the following datasets. End of explanation
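To make that last step concrete, here is a minimal sketch (my own addition, reusing only functions already defined in this notebook: nn_model, predict and plot_decision_boundary, plus the dataset variable chosen above) of refitting the classifier on the newly selected dataset and reporting its accuracy:
# Sketch: retrain the same network on the dataset selected above and report accuracy.
parameters = nn_model(X, Y, n_h=5, num_iterations=10000, print_cost=True)
plot_decision_boundary(lambda x: predict(parameters, x.T), X, Y[0,])
plt.title("Decision Boundary for the '" + dataset + "' dataset")
predictions = predict(parameters, X)
# Same accuracy formula as used earlier in the notebook.
accuracy = float((np.dot(Y, predictions.T) + np.dot(1 - Y, 1 - predictions.T)) / float(Y.size) * 100)
print("Accuracy: {} %".format(accuracy))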
2,097
Given the following text description, write Python code to implement the functionality described. Description: Check if all the digits of the given number are the same Python3 program for the above approach ; Function to check if all the digits in the number N are the same or not ; Get the length of N ; Form the number M of the type K * 111... where K is the rightmost digit of N ; Check if the numbers are equal ; Otherwise ; Driver Code
Python Code: import math def checkSameDigits(N): length = int(math.log10(N)) + 1 ; M = (int(math.pow(10, length)) - 1) // (10 - 1) ; M *= N % 10 ; if (M == N): return "Yes" ; return "No" ; if __name__ == "__main__": N = 222 ; print(checkSameDigits(N)) ;
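For comparison (an alternative sketch I am adding, not part of the original solution), the same check can be expressed by comparing the decimal digits of N directly; its results should agree with checkSameDigits above:
# Alternative sketch: compare the digits of N as characters.
def checkSameDigitsStr(N):
    s = str(N)
    # All digits equal iff the first digit accounts for every position.
    return "Yes" if s.count(s[0]) == len(s) else "No"

if __name__ == "__main__":
    for n in (222, 223, 7, 1111):
        print(n, checkSameDigitsStr(n))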
2,098
Given the following text description, write Python code to implement the functionality described below step by step Description: Distribution of local maxima in a Gaussian Random Field In this notebook, I apply the distribution of local maxima of Cheng & Schwartzman. I reproduce the figure with the distribution in 1D, 2D and 3D and then check how much the distribution fits with simulated data. Check code for peak distribution of Cheng&Schwartzman Below I defined the formulae of Cheng&Schwartzman in arXiv Step1: Define formulae Step2: Apply formulae to a range of x-values Step3: Figure 1 from paper Step4: Apply the distribution to simulated data, extracted peaks with FSL I now simulate random field, extract peaks with FSL and compare these simulated peaks with the theoretical distribution. Step5: Are the peaks independent? Below, I take a random sample of peaks to compute distances for computational ease. With 10K peaks, it already takes 15 minutes to compute al distances. Step6: Compute distances between peaks and the difference in their height. Step7: Take the mean of heights in bins of 1.
Python Code: % matplotlib inline import numpy as np import math import nibabel as nib import scipy.stats as stats import matplotlib.pyplot as plt from nipy.labs.utils.simul_multisubject_fmri_dataset import surrogate_3d_dataset import palettable.colorbrewer as cb from nipype.interfaces import fsl import os import pandas as pd import scipy.integrate as integrate Explanation: Distribution of local maxima in a Gaussian Random Field In this notebook, I apply the distribution of local maxima of Cheng & Schwartzman. I reproduce the figure with the distribution in 1D, 2D and 3D and then check how much the distribution fits with simulated data. Check code for peak distribution of Cheng&Schwartzman Below I defined the formulae of Cheng&Schwartzman in arXiv:1503.01328v1. On page 3.3 the density functions are displayed for 1D, 2D and 3D. Consequently, I apply these formulae to a range of x-values, which reproduces Figure 1. End of explanation def peakdens1D(x,k): f1 = (3-k**2)**0.5/(6*math.pi)**0.5*np.exp(-3*x**2/(2*(3-k**2))) f2 = 2*k*x*math.pi**0.5/6**0.5*stats.norm.pdf(x)*stats.norm.cdf(k*x/(3-k**2)**0.5) out = f1+f2 return out def peakdens2D(x,k): f1 = 3**0.5*k**2*(x**2-1)*stats.norm.pdf(x)*stats.norm.cdf(k*x/(2-k**2)**0.5) f2 = k*x*(3*(2-k**2))**0.5/(2*math.pi) * np.exp(-x**2/(2-k**2)) f31 = 6**0.5/(math.pi*(3-k**2))**0.5*np.exp(-3*x**2/(2*(3-k**2))) f32 = stats.norm.cdf(k*x/((3-k**2)*(2-k**2))**0.5) out = f1+f2+f31*f32 return out def peakdens3D(x,k): fd1 = 144*stats.norm.pdf(x)/(29*6**(0.5)-36) fd211 = k**2.*((1.-k**2.)**3. + 6.*(1.-k**2.)**2. + 12.*(1.-k**2.)+24.)*x**2. / (4.*(3.-k**2.)**2.) fd212 = (2.*(1.-k**2.)**3. + 3.*(1.-k**2.)**2.+6.*(1.-k**2.)) / (4.*(3.-k**2.)) fd213 = 3./2. fd21 = (fd211 + fd212 + fd213) fd22 = np.exp(-k**2.*x**2./(2.*(3.-k**2.))) / (2.*(3.-k**2.))**(0.5) fd23 = stats.norm.cdf(2.*k*x / ((3.-k**2.)*(5.-3.*k**2.))**(0.5)) fd2 = fd21*fd22*fd23 fd31 = (k**2.*(2.-k**2.))/4.*x**2. - k**2.*(1.-k**2.)/2. - 1. fd32 = np.exp(-k**2.*x**2./(2.*(2.-k**2.))) / (2.*(2.-k**2.))**(0.5) fd33 = stats.norm.cdf(k*x / ((2.-k**2.)*(5.-3.*k**2.))**(0.5)) fd3 = fd31 * fd32 * fd33 fd41 = (7.-k**2.) + (1-k**2)*(3.*(1.-k**2.)**2. + 12.*(1.-k**2.) + 28.)/(2.*(3.-k**2.)) fd42 = k*x / (4.*math.pi**(0.5)*(3.-k**2.)*(5.-3.*k**2)**0.5) fd43 = np.exp(-3.*k**2.*x**2/(2.*(5-3.*k**2.))) fd4 = fd41*fd42 * fd43 fd51 = math.pi**0.5*k**3./4.*x*(x**2.-3.) 
f521low = np.array([-10.,-10.]) f521up = np.array([0.,k*x/2.**(0.5)]) f521mu = np.array([0.,0.]) f521sigma = np.array([[3./2., -1.],[-1.,(3.-k**2.)/2.]]) fd521,i = stats.mvn.mvnun(f521low,f521up,f521mu,f521sigma) f522low = np.array([-10.,-10.]) f522up = np.array([0.,k*x/2.**(0.5)]) f522mu = np.array([0.,0.]) f522sigma = np.array([[3./2., -1./2.],[-1./2.,(2.-k**2.)/2.]]) fd522,i = stats.mvn.mvnun(f522low,f522up,f522mu,f522sigma) fd5 = fd51*(fd521+fd522) out = fd1*(fd2+fd3+fd4+fd5) return out Explanation: Define formulae End of explanation xs = np.arange(-4,10,0.01).tolist() ys_3d_k01 = [] ys_3d_k05 = [] ys_3d_k1 = [] ys_2d_k01 = [] ys_2d_k05 = [] ys_2d_k1 = [] ys_1d_k01 = [] ys_1d_k05 = [] ys_1d_k1 = [] for x in xs: ys_1d_k01.append(peakdens1D(x,0.1)) ys_1d_k05.append(peakdens1D(x,0.5)) ys_1d_k1.append(peakdens1D(x,1)) ys_2d_k01.append(peakdens2D(x,0.1)) ys_2d_k05.append(peakdens2D(x,0.5)) ys_2d_k1.append(peakdens2D(x,1)) ys_3d_k01.append(peakdens3D(x,0.1)) ys_3d_k05.append(peakdens3D(x,0.5)) ys_3d_k1.append(peakdens3D(x,1)) Explanation: Apply formulae to a range of x-values End of explanation plt.figure(figsize=(7,5)) plt.plot(xs,ys_1d_k01,color="black",ls=":",lw=2) plt.plot(xs,ys_1d_k05,color="black",ls="--",lw=2) plt.plot(xs,ys_1d_k1,color="black",ls="-",lw=2) plt.plot(xs,ys_2d_k01,color="blue",ls=":",lw=2) plt.plot(xs,ys_2d_k05,color="blue",ls="--",lw=2) plt.plot(xs,ys_2d_k1,color="blue",ls="-",lw=2) plt.plot(xs,ys_3d_k01,color="red",ls=":",lw=2) plt.plot(xs,ys_3d_k05,color="red",ls="--",lw=2) plt.plot(xs,ys_3d_k1,color="red",ls="-",lw=2) plt.ylim([-0.1,0.55]) plt.xlim([-4,4]) plt.show() Explanation: Figure 1 from paper End of explanation os.chdir("/Users/Joke/Documents/Onderzoek/ProjectsOngoing/Power/WORKDIR/") sm=1 smooth_FWHM = 3 smooth_sd = smooth_FWHM/(2*math.sqrt(2*math.log(2))) data = surrogate_3d_dataset(n_subj=1,sk=smooth_sd,shape=(500,500,500),noise_level=1) minimum = data.min() newdata = data - minimum #little trick because fsl.model.Cluster ignores negative values img=nib.Nifti1Image(newdata,np.eye(4)) img.to_filename(os.path.join("RF_"+str(sm)+".nii.gz")) cl=fsl.model.Cluster() cl.inputs.threshold = 0 cl.inputs.in_file=os.path.join("RF_"+str(sm)+".nii.gz") cl.inputs.out_localmax_txt_file=os.path.join("locmax_"+str(sm)+".txt") cl.inputs.num_maxima=10000000 cl.inputs.connectivity=26 cl.inputs.terminal_output='none' cl.run() plt.figure(figsize=(6,4)) plt.imshow(data[1:20,1:20,1]) plt.colorbar() plt.show() peaks = pd.read_csv("locmax_"+str(1)+".txt",sep="\t").drop('Unnamed: 5',1) peaks.Value = peaks.Value + minimum 500.**3/len(peaks) twocol = cb.qualitative.Paired_12.mpl_colors plt.figure(figsize=(7,5)) plt.hist(peaks.Value,lw=0,facecolor=twocol[0],normed=True,bins=np.arange(-5,5,0.1),label="observed distribution") plt.xlim([-2,5]) plt.ylim([0,0.6]) plt.plot(xs,ys_3d_k1,color=twocol[1],lw=3,label="theoretical distribution") plt.title("histogram") plt.xlabel("peak height") plt.ylabel("density") plt.legend(loc="upper left",frameon=False) plt.show() peaks[1:5] Explanation: Apply the distribution to simulated data, extracted peaks with FSL I now simulate random field, extract peaks with FSL and compare these simulated peaks with the theoretical distribution. End of explanation ss = 10000 smpl = np.random.choice(len(peaks),ss,replace=False) peaksmpl = peaks.loc[smpl].reset_index() Explanation: Are the peaks independent? Below, I take a random sample of peaks to compute distances for computational ease. With 10K peaks, it already takes 15 minutes to compute al distances. 
End of explanation dist = [] diff = [] for p in range(ss): for q in range(p+1,ss): xd = peaksmpl.x[q]-peaksmpl.x[p] yd = peaksmpl.y[q]-peaksmpl.y[p] zd = peaksmpl.z[q]-peaksmpl.z[p] if not any(x > 20 or x < -20 for x in [xd,yd,zd]): dist.append(np.sqrt(xd**2+yd**2+zd**2)) diff.append(abs(peaksmpl.Value[p]-peaksmpl.Value[q])) Explanation: Compute distances between peaks and the difference in their height. End of explanation mn = [] ds = np.arange(start=2,stop=100) for d in ds: mn.append(np.mean(np.array(diff)[np.round(np.array(dist))==d])) twocol = cb.qualitative.Paired_12.mpl_colors plt.figure(figsize=(7,5)) plt.plot(dist,diff,"r.",color=twocol[0],linewidth=0,label="combination of 2 points") plt.xlim([2,20]) plt.plot(ds,mn,color=twocol[1],lw=4,label="average over all points in bins with width 1") plt.title("Are peaks independent?") plt.xlabel("Distance between peaks") plt.ylabel("Difference between peaks heights") plt.legend(loc="upper left",frameon=False) plt.show() np.min(dist) def nulprobdensEC(exc,peaks): f0 = exc*np.exp(-exc*(peaks-exc)) return f0 def peakp(x): y = [] iterator = (x,) if not isinstance(x, (tuple, list)) else x for i in iterator: y.append(integrate.quad(lambda x:peakdens3D(x,1),-20,i)[0]) return y fig,axs=plt.subplots(1,5,figsize=(13,3)) fig.subplots_adjust(hspace = .5, wspace=0.3) axs=axs.ravel() thresholds=[2,2.5,3,3.5,4] bins=np.arange(2,5,0.5) x=np.arange(2,10,0.1) twocol=cb.qualitative.Paired_10.mpl_colors for i in range(5): thr=thresholds[i] axs[i].hist(peaks.Value[peaks.Value>thr],lw=0,facecolor=twocol[i*2-2],normed=True,bins=np.arange(thr,5,0.1)) axs[i].set_xlim([thr,5]) axs[i].set_ylim([0,3]) xn = x[x>thr] ynb = nulprobdensEC(thr,xn) ycs = [] for n in xn: ycs.append(peakdens3D(n,1)/(1-peakp(thr)[0])) axs[i].plot(xn,ycs,color=twocol[i*2-1],lw=3,label="C&S") axs[i].plot(xn,ynb,color=twocol[i*2-1],lw=3,linestyle="--",label="EC") axs[i].set_title("threshold:"+str(thr)) axs[i].set_xticks(np.arange(thr,5,0.5)) axs[i].set_yticks([1,2]) axs[i].legend(loc="upper right",frameon=False) axs[i].set_xlabel("peak height") axs[i].set_ylabel("density") plt.show() Explanation: Take the mean of heights in bins of 1. End of explanation
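As a side note (my own suggestion, not something the original notebook does), the pairwise double loop above could be vectorized with scipy.spatial.distance.pdist, which is typically much faster than looping in Python; for roughly 10k sampled peaks the two condensed distance arrays take several hundred MB of memory, and the column names assume the peaksmpl frame used in the loop:
# Rough sketch of a vectorized alternative to the pairwise double loop.
from scipy.spatial.distance import pdist

coords = peaksmpl[["x", "y", "z"]].values.astype(float)
vals = peaksmpl["Value"].values.astype(float)

dist_all = pdist(coords)               # all pairwise Euclidean distances between peaks
diff_all = pdist(vals.reshape(-1, 1))  # all pairwise absolute height differences
keep = dist_all <= 20                  # roughly mirrors the +/-20 voxel window used in the loop
dist_vec, diff_vec = dist_all[keep], diff_all[keep]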
2,099
Given the following text description, write Python code to implement the functionality described below step by step Description: ES-DOC CMIP6 Model Properties - Toplevel MIP Era Step1: Document Authors Set document authors Step2: Document Contributors Specify document contributors Step3: Document Publication Specify document publication status Step4: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Flux Correction 3. Key Properties --&gt; Genealogy 4. Key Properties --&gt; Software Properties 5. Key Properties --&gt; Coupling 6. Key Properties --&gt; Tuning Applied 7. Key Properties --&gt; Conservation --&gt; Heat 8. Key Properties --&gt; Conservation --&gt; Fresh Water 9. Key Properties --&gt; Conservation --&gt; Salt 10. Key Properties --&gt; Conservation --&gt; Momentum 11. Radiative Forcings 12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2 13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4 14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O 15. Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3 16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3 17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC 18. Radiative Forcings --&gt; Aerosols --&gt; SO4 19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon 20. Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon 21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate 22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect 23. Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect 24. Radiative Forcings --&gt; Aerosols --&gt; Dust 25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic 26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic 27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt 28. Radiative Forcings --&gt; Other --&gt; Land Use 29. Radiative Forcings --&gt; Other --&gt; Solar 1. Key Properties Key properties of the model 1.1. Model Overview Is Required Step5: 1.2. Model Name Is Required Step6: 2. Key Properties --&gt; Flux Correction Flux correction properties of the model 2.1. Details Is Required Step7: 3. Key Properties --&gt; Genealogy Genealogy and history of the model 3.1. Year Released Is Required Step8: 3.2. CMIP3 Parent Is Required Step9: 3.3. CMIP5 Parent Is Required Step10: 3.4. Previous Name Is Required Step11: 4. Key Properties --&gt; Software Properties Software properties of model 4.1. Repository Is Required Step12: 4.2. Code Version Is Required Step13: 4.3. Code Languages Is Required Step14: 4.4. Components Structure Is Required Step15: 4.5. Coupler Is Required Step16: 5. Key Properties --&gt; Coupling ** 5.1. Overview Is Required Step17: 5.2. Atmosphere Double Flux Is Required Step18: 5.3. Atmosphere Fluxes Calculation Grid Is Required Step19: 5.4. Atmosphere Relative Winds Is Required Step20: 6. Key Properties --&gt; Tuning Applied Tuning methodology for model 6.1. Description Is Required Step21: 6.2. Global Mean Metrics Used Is Required Step22: 6.3. Regional Metrics Used Is Required Step23: 6.4. Trend Metrics Used Is Required Step24: 6.5. Energy Balance Is Required Step25: 6.6. Fresh Water Balance Is Required Step26: 7. Key Properties --&gt; Conservation --&gt; Heat Global heat convervation properties of the model 7.1. Global Is Required Step27: 7.2. Atmos Ocean Interface Is Required Step28: 7.3. Atmos Land Interface Is Required Step29: 7.4. Atmos Sea-ice Interface Is Required Step30: 7.5. Ocean Seaice Interface Is Required Step31: 7.6. 
Land Ocean Interface Is Required Step32: 8. Key Properties --&gt; Conservation --&gt; Fresh Water Global fresh water convervation properties of the model 8.1. Global Is Required Step33: 8.2. Atmos Ocean Interface Is Required Step34: 8.3. Atmos Land Interface Is Required Step35: 8.4. Atmos Sea-ice Interface Is Required Step36: 8.5. Ocean Seaice Interface Is Required Step37: 8.6. Runoff Is Required Step38: 8.7. Iceberg Calving Is Required Step39: 8.8. Endoreic Basins Is Required Step40: 8.9. Snow Accumulation Is Required Step41: 9. Key Properties --&gt; Conservation --&gt; Salt Global salt convervation properties of the model 9.1. Ocean Seaice Interface Is Required Step42: 10. Key Properties --&gt; Conservation --&gt; Momentum Global momentum convervation properties of the model 10.1. Details Is Required Step43: 11. Radiative Forcings Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5) 11.1. Overview Is Required Step44: 12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2 Carbon dioxide forcing 12.1. Provision Is Required Step45: 12.2. Additional Information Is Required Step46: 13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4 Methane forcing 13.1. Provision Is Required Step47: 13.2. Additional Information Is Required Step48: 14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O Nitrous oxide forcing 14.1. Provision Is Required Step49: 14.2. Additional Information Is Required Step50: 15. Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3 Troposheric ozone forcing 15.1. Provision Is Required Step51: 15.2. Additional Information Is Required Step52: 16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3 Stratospheric ozone forcing 16.1. Provision Is Required Step53: 16.2. Additional Information Is Required Step54: 17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC Ozone-depleting and non-ozone-depleting fluorinated gases forcing 17.1. Provision Is Required Step55: 17.2. Equivalence Concentration Is Required Step56: 17.3. Additional Information Is Required Step57: 18. Radiative Forcings --&gt; Aerosols --&gt; SO4 SO4 aerosol forcing 18.1. Provision Is Required Step58: 18.2. Additional Information Is Required Step59: 19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon Black carbon aerosol forcing 19.1. Provision Is Required Step60: 19.2. Additional Information Is Required Step61: 20. Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon Organic carbon aerosol forcing 20.1. Provision Is Required Step62: 20.2. Additional Information Is Required Step63: 21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate Nitrate forcing 21.1. Provision Is Required Step64: 21.2. Additional Information Is Required Step65: 22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect Cloud albedo effect forcing (RFaci) 22.1. Provision Is Required Step66: 22.2. Aerosol Effect On Ice Clouds Is Required Step67: 22.3. Additional Information Is Required Step68: 23. Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect Cloud lifetime effect forcing (ERFaci) 23.1. Provision Is Required Step69: 23.2. Aerosol Effect On Ice Clouds Is Required Step70: 23.3. RFaci From Sulfate Only Is Required Step71: 23.4. Additional Information Is Required Step72: 24. Radiative Forcings --&gt; Aerosols --&gt; Dust Dust forcing 24.1. Provision Is Required Step73: 24.2. Additional Information Is Required Step74: 25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic Tropospheric volcanic forcing 25.1. 
Provision Is Required Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation Is Required Step76: 25.3. Future Explosive Volcanic Aerosol Implementation Is Required Step77: 25.4. Additional Information Is Required Step78: 26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic Stratospheric volcanic forcing 26.1. Provision Is Required Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation Is Required Step80: 26.3. Future Explosive Volcanic Aerosol Implementation Is Required Step81: 26.4. Additional Information Is Required Step82: 27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt Sea salt forcing 27.1. Provision Is Required Step83: 27.2. Additional Information Is Required Step84: 28. Radiative Forcings --&gt; Other --&gt; Land Use Land use forcing 28.1. Provision Is Required Step85: 28.2. Crop Change Only Is Required Step86: 28.3. Additional Information Is Required Step87: 29. Radiative Forcings --&gt; Other --&gt; Solar Solar forcing 29.1. Provision Is Required Step88: 29.2. Additional Information Is Required
Python Code: # DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'ncc', 'sandbox-3', 'toplevel') Explanation: ES-DOC CMIP6 Model Properties - Toplevel MIP Era: CMIP6 Institute: NCC Source ID: SANDBOX-3 Sub-Topics: Radiative Forcings. Properties: 85 (42 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:54:25 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) Explanation: Document Authors Set document authors End of explanation # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) Explanation: Document Contributors Specify document contributors End of explanation # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) Explanation: Document Publication Specify document publication status End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Flux Correction 3. Key Properties --&gt; Genealogy 4. Key Properties --&gt; Software Properties 5. Key Properties --&gt; Coupling 6. Key Properties --&gt; Tuning Applied 7. Key Properties --&gt; Conservation --&gt; Heat 8. Key Properties --&gt; Conservation --&gt; Fresh Water 9. Key Properties --&gt; Conservation --&gt; Salt 10. Key Properties --&gt; Conservation --&gt; Momentum 11. Radiative Forcings 12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2 13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4 14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O 15. Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3 16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3 17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC 18. Radiative Forcings --&gt; Aerosols --&gt; SO4 19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon 20. Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon 21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate 22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect 23. Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect 24. Radiative Forcings --&gt; Aerosols --&gt; Dust 25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic 26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic 27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt 28. Radiative Forcings --&gt; Other --&gt; Land Use 29. Radiative Forcings --&gt; Other --&gt; Solar 1. Key Properties Key properties of the model 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Top level overview of coupled model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of coupled model. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2. Key Properties --&gt; Flux Correction Flux correction properties of the model 2.1. Details Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how flux corrections are applied in the model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3. Key Properties --&gt; Genealogy Genealogy and history of the model 3.1. Year Released Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Year the model was released End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.2. CMIP3 Parent Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 CMIP3 parent if any End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.3. CMIP5 Parent Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 CMIP5 parent if any End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.4. Previous Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Previously known as End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4. Key Properties --&gt; Software Properties Software properties of model 4.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4.4. 
Components Structure Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "OASIS" # "OASIS3-MCT" # "ESMF" # "NUOPC" # "Bespoke" # "Unknown" # "None" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 4.5. Coupler Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Overarching coupling framework for model. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.coupling.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5. Key Properties --&gt; Coupling ** 5.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of coupling in the model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 5.2. Atmosphere Double Flux Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Atmosphere grid" # "Ocean grid" # "Specific coupler grid" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 5.3. Atmosphere Fluxes Calculation Grid Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Where are the air-sea fluxes calculated End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 5.4. Atmosphere Relative Winds Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6. Key Properties --&gt; Tuning Applied Tuning methodology for model 6.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.2. Global Mean Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List set of metrics/diagnostics of the global mean state used in tuning model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.3. Regional Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.4. Trend Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List observed trend metrics/diagnostics used in tuning model/component (such as 20th century) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.5. Energy Balance Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.6. Fresh Water Balance Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7. Key Properties --&gt; Conservation --&gt; Heat Global heat convervation properties of the model 7.1. Global Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how heat is conserved globally End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.2. Atmos Ocean Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how heat is conserved at the atmosphere/ocean coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.3. 
Atmos Land Interface Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how heat is conserved at the atmosphere/land coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.4. Atmos Sea-ice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.5. Ocean Seaice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how heat is conserved at the ocean/sea-ice coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.6. Land Ocean Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how heat is conserved at the land/ocean coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8. Key Properties --&gt; Conservation --&gt; Fresh Water Global fresh water convervation properties of the model 8.1. Global Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how fresh_water is conserved globally End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.2. Atmos Ocean Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how fresh_water is conserved at the atmosphere/ocean coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.3. Atmos Land Interface Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how fresh water is conserved at the atmosphere/land coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.4. Atmos Sea-ice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.5. Ocean Seaice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.6. Runoff Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe how runoff is distributed and conserved End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.7. Iceberg Calving Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how iceberg calving is modeled and conserved End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.8. Endoreic Basins Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how endoreic basins (no ocean access) are treated End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.9. Snow Accumulation Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe how snow accumulation over land and over sea-ice is treated End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9. Key Properties --&gt; Conservation --&gt; Salt Global salt convervation properties of the model 9.1. Ocean Seaice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how salt is conserved at the ocean/sea-ice coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 10. Key Properties --&gt; Conservation --&gt; Momentum Global momentum convervation properties of the model 10.1. Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how momentum is conserved in the model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11. Radiative Forcings Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5) 11.1. 
Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of radiative forcings (GHG and aerosols) implementation in model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2 Carbon dioxide forcing 12.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 12.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4 Methane forcing 13.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 13.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O Nitrous oxide forcing 14.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 14.2. 
Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15. Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3 Troposheric ozone forcing 15.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3 Stratospheric ozone forcing 16.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 16.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC Ozone-depleting and non-ozone-depleting fluorinated gases forcing 17.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! 
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices: "N/A", "Option 1", "Option 2", "Option 3", "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE    Type: ENUM    Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Information
Is Required: FALSE    Type: STRING    Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices: "N/A", "M", "Y", "E", "ES", "C", "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE    Type: ENUM    Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Additional Information
Is Required: FALSE    Type: STRING    Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices: "N/A", "M", "Y", "E", "ES", "C", "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE    Type: ENUM    Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Additional Information
Is Required: FALSE    Type: STRING    Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
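The additional_information properties are optional free text. As a hypothetical illustration of the mechanics only (the note below is placeholder wording, not a description of any model):
# Hypothetical fill of an optional STRING property
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
DOC.set_value("Emissions prescribed from the standard CMIP6 dataset; see the model description paper for details.")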
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices: "N/A", "M", "Y", "E", "ES", "C", "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE    Type: ENUM    Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Information
Is Required: FALSE    Type: STRING    Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices: "N/A", "M", "Y", "E", "ES", "C", "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE    Type: ENUM    Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21.2. Additional Information
Is Required: FALSE    Type: STRING    Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices: "N/A", "M", "Y", "E", "ES", "C", "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE    Type: ENUM    Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices: True, False
# TODO - please enter value(s)
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE    Type: BOOLEAN    Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
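Boolean properties such as 22.2 take an unquoted Python boolean rather than a quoted string; a hypothetical fill (the value is a placeholder, not a claim about any model) looks like:
# Hypothetical fill of a BOOLEAN property - use True or False, unquoted
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
DOC.set_value(True)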
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.3. Additional Information
Is Required: FALSE    Type: STRING    Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices: "N/A", "M", "Y", "E", "ES", "C", "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE    Type: ENUM    Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices: True, False
# TODO - please enter value(s)
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE    Type: BOOLEAN    Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices: True, False
# TODO - please enter value(s)
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE    Type: BOOLEAN    Cardinality: 1.1
Radiative forcing from aerosol cloud interactions from sulfate aerosol only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Additional Information
Is Required: FALSE    Type: STRING    Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices: "N/A", "M", "Y", "E", "ES", "C", "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE    Type: ENUM    Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24.2. Additional Information
Is Required: FALSE    Type: STRING    Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices: "N/A", "M", "Y", "E", "ES", "C", "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE    Type: ENUM    Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices: "Type A", "Type B", "Type C", "Type D", "Type E", "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE    Type: ENUM    Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices: "Type A", "Type B", "Type C", "Type D", "Type E", "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE    Type: ENUM    Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25.4. Additional Information
Is Required: FALSE    Type: STRING    Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices: "N/A", "M", "Y", "E", "ES", "C", "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE    Type: ENUM    Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
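The explosive-volcanic-aerosol implementation properties (25.2/25.3 above, 26.2/26.3 below) are single-valued ENUMs whose listed choices are generic placeholders. Purely to illustrate the mechanics, and assuming DOC.set_id can simply be re-pointed at the next property exactly as the per-cell pattern does, a pair of fills might read:
# Hypothetical fills - "Type A" is just one of the placeholder choices listed in these cells
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
DOC.set_value("Type A")
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
DOC.set_value("Type A")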
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices: "Type A", "Type B", "Type C", "Type D", "Type E", "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE    Type: ENUM    Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices: "Type A", "Type B", "Type C", "Type D", "Type E", "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE    Type: ENUM    Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Additional Information
Is Required: FALSE    Type: STRING    Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices: "N/A", "M", "Y", "E", "ES", "C", "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE    Type: ENUM    Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Additional Information
Is Required: FALSE    Type: STRING    Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices: "N/A", "M", "Y", "E", "ES", "C", "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE    Type: ENUM    Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices: True, False
# TODO - please enter value(s)
Explanation: 28.2. Crop Change Only
Is Required: TRUE    Type: BOOLEAN    Cardinality: 1.1
Land use change represented via crop change only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Additional Information
Is Required: FALSE    Type: STRING    Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices: "N/A", "irradiance", "proton", "electron", "cosmic ray", "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE    Type: ENUM    Cardinality: 1.N
How solar forcing is provided
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Additional Information
Is Required: FALSE    Type: STRING    Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
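Because every forcing agent in sections 12-29 repeats the same provision / additional-information pattern, the routine entries can be filled from a small table, keeping only the unusual cases as hand-edited cells. This is a sketch only: the values are placeholders, and it assumes DOC.set_id / DOC.set_value behave the same when called in a loop as they do in the individual cells above.
# Sketch: batch-fill provision ENUMs from a dict of {agent: choice}; all values are placeholders
PREFIX = 'cmip6.toplevel.radiative_forcings'
provisions = {
    'greenhouse_gases.CO2': 'Y',
    'greenhouse_gases.CH4': 'Y',
    'aerosols.SO4': 'E',
    'other.solar': 'irradiance',
}
for agent, choice in provisions.items():
    DOC.set_id('{0}.{1}.provision'.format(PREFIX, agent))
    DOC.set_value(choice)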