Dataset columns: Unnamed: 0 (int64, values 0 to 16k); text_prompt (string, lengths 110 to 62.1k); code_prompt (string, lengths 37 to 152k).
5,000
Given the following text description, write Python code to implement the functionality described below step by step Description: XGBoost HP Tuning on AI Platform This notebook trains a model on AI Platform using Hyperparameter Tuning to predict a car's Miles Per Gallon. It uses the Auto MPG Data Set from the UCI Machine Learning Repository. Citation Step1: The data The Auto MPG Data Set that this sample uses for training is provided by the UC Irvine Machine Learning Repository. We have hosted the data on a public GCS bucket gs Step2: Load the hyperparameter values that are passed to the model during training. In this tutorial, the XGBoost regressor is used, because it has several parameters that can be used to help demonstrate how to choose HP tuning values. (The ranges of values are set below in the configuration file for the HP tuning values.) Step3: Add code to download the data from GCS In this case, using the publicly hosted data, AI Platform will then be able to use the data when training your model. Step4: Use the Hyperparameters Use the Hyperparameter values passed in those arguments to set the corresponding hyperparameters in your application's XGBoost code. Step5: Report the mean accuracy as the hyperparameter tuning objective metric. Step6: Export and save the model to GCS Step7: Part 2 Step8: Next, we need to set the HP tuning values used to train our model. Check HyperparameterSpec for more info. In this config file several key things are set Step9: Lastly, we need to install the dependencies used in our model. Check adding_standard_pypi_dependencies for more info. To do this, AI Platform uses a setup.py file to install your dependencies. Step10: Part 3 Step11: Submit the training job. Step12: [Optional] StackDriver Logging You can view the logs for your training job
Python Code: # Replace <PROJECT_ID> and <BUCKET_ID> with proper Project and Bucket ID's: %env PROJECT_ID <PROJECT_ID> %env BUCKET_ID <BUCKET_ID> %env JOB_DIR gs://<BUCKET_ID>/xgboost_job_dir %env REGION us-central1 %env TRAINER_PACKAGE_PATH ./auto_mpg_hp_tuning %env MAIN_TRAINER_MODULE auto_mpg_hp_tuning.train %env RUNTIME_VERSION 1.9 %env PYTHON_VERSION 3.5 %env HPTUNING_CONFIG hptuning_config.yaml ! mkdir auto_mpg_hp_tuning Explanation: XGBoost HP Tuning on AI Platform This notebook trains a model on Ai Platform using Hyperparameter Tuning to predict a car's Miles Per Gallon. It uses Auto MPG Data Set from UCI Machine Learning Repository. Citation: Dua, D. and Karra Taniskidou, E. (2017). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science. How to train your model on AI Platform with HP tuning. Using HP Tuning for training can be done in a few steps: 1. Create your python model file 1. Add argument parsing for the hyperparameter values. (These values are chosen for you in this notebook) 1. Add code to download your data from Google Cloud Storage so that AI Platform can use it 1. Add code to track the performance of your hyperparameter values. 1. Add code to export and save the model to Google Cloud Storage once AI Platform finishes training the model 1. Prepare a package 1. Submit the training job Prerequisites Before you jump in, let’s cover some of the different tools you’ll be using to get HP tuning up and running on AI Platform. Google Cloud Platform lets you build and host applications and websites, store data, and analyze data on Google's scalable infrastructure. AI Platform is a managed service that enables you to easily build machine learning models that work on any type of data, of any size. Google Cloud Storage (GCS) is a unified object storage for developers and enterprises, from live data serving to data analytics/ML to data archiving. Cloud SDK is a command line tool which allows you to interact with Google Cloud products. In order to run this notebook, make sure that Cloud SDK is installed in the same environment as your Jupyter kernel. Overview of Hyperparameter Tuning - Hyperparameter tuning takes advantage of the processing infrastructure of Google Cloud Platform to test different hyperparameter configurations when training your model. Part 0: Setup Create a project on GCP Create a Google Cloud Storage Bucket Enable AI Platform Training and Prediction and Compute Engine APIs Install Cloud SDK Install XGBoost [Optional: used if running locally] Install pandas [Optional: used if running locally] Install cloudml-hypertune [Optional: used if running locally] These variables will be needed for the following steps. * TRAINER_PACKAGE_PATH &lt;./auto_mpg_hp_tuning&gt; - A packaged training application that will be staged in a Google Cloud Storage location. The model file created below is placed inside this package path. * MAIN_TRAINER_MODULE &lt;auto_mpg_hp_tuning.train&gt; - Tells AI Platform which file to execute. This is formatted as follows <folder_name.python_file_name> * JOB_DIR &lt;gs://$BUCKET_ID/xgboost_learn_job_dir&gt; - The path to a Google Cloud Storage location to use for job output. * RUNTIME_VERSION &lt;1.9&gt; - The version of AI Platform to use for the job. If you don't specify a runtime version, the training service uses the default AI Platform runtime version 1.0. See the list of runtime versions for more information. 
* PYTHON_VERSION &lt;3.5&gt; - The Python version to use for the job. Python 3.5 is available with runtime version 1.4 or greater. If you don't specify a Python version, the training service uses Python 2.7. * HPTUNING_CONFIG &lt;hptuning_config.yaml&gt; - Path to the job configuration file. Replace: * PROJECT_ID &lt;YOUR_PROJECT_ID&gt; - with your project's id. Use the PROJECT_ID that matches your Google Cloud Platform project. * BUCKET_ID &lt;YOUR_BUCKET_ID&gt; - with the bucket id you created above. * JOB_DIR &lt;gs://YOUR_BUCKET_ID/xgboost_job_dir&gt; - with the bucket id you created above. * REGION &lt;REGION&gt; - select a region from here or use the default 'us-central1'. The region is where the model will be deployed. End of explanation %%writefile ./auto_mpg_hp_tuning/train.py import argparse import datetime import os import pandas as pd import subprocess import pickle from google.cloud import storage import hypertune import xgboost as xgb from random import shuffle def split_dataframe(dataframe, rate=0.8): indices = dataframe.index.values.tolist() length = len(dataframe) shuffle(indices) train_size = int(length * rate) train_indices = indices[:train_size] test_indices = indices[train_size:] return dataframe.iloc[train_indices], dataframe.iloc[test_indices] Explanation: The data The Auto MPG Data Set that this sample uses for training is provided by the UC Irvine Machine Learning Repository. We have hosted the data on a public GCS bucket gs://cloud-samples-data/ml-engine/auto_mpg/. The data has been pre-processed to remove rows with incomplete data so as not to create additional steps for this notebook. Training file is auto-mpg.data Note: Your typical development process with your own data would require you to upload your data to GCS so that AI Platform can access that data. However, in this case, we have put the data on GCS to avoid the steps of having you download the data from UC Irvine and then upload the data to GCS. Citation: Dua, D. and Karra Taniskidou, E. (2017). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science. Disclaimer This dataset is provided by a third party. Google provides no representation, warranty, or other guarantees about the validity or any other aspects of this dataset. Part 1: Create your python model file First, we'll create the python model file (provided below) that we'll upload to AI Platform. This is similar to your normal process for creating a XGBoost model. However, there are a few key differences: 1. Downloading the data from GCS at the start of your file, so that AI Platform can access the data. 1. Exporting/saving the model to GCS at the end of your file, so that you can use it for predictions. 1. Define a command-line argument in your main training module for each tuned hyperparameter. 1. Use the value passed in those arguments to set the corresponding hyperparameter in your application's XGBoost code. 1. Use cloudml-hypertune to track your training jobs metrics. The code in this file first handles the hyperparameters passed to the file from AI Platform. Then it loads the data into a pandas DataFrame that can be used by XGBoost. Then the model is fit against the training data and the metrics for that data are shared with AI Platform. Lastly, Python's built in pickle library is used to save the model to a file that can be uploaded to AI Platform's prediction service. 
Note: In normal practice you would want to test your model locally on a small dataset to ensure that it works, before using it with your larger dataset on AI Platform. This avoids wasted time and costs. Setup the imports and helper functions End of explanation %%writefile -a ./auto_mpg_hp_tuning/train.py parser = argparse.ArgumentParser() parser.add_argument( '--job-dir', # handled automatically by AI Platform help='GCS location to write checkpoints and export models', required=True ) parser.add_argument( '--max_depth', # Specified in the config file help='Maximum depth of the XGBoost tree. default: 3', default=3, type=int ) parser.add_argument( '--n_estimators', # Specified in the config file help='Number of estimators to be created. default: 100', default=100, type=int ) parser.add_argument( '--booster', # Specified in the config file help='which booster to use: gbtree, gblinear or dart. default: gbtree', default='gbtree', type=str ) args = parser.parse_args() Explanation: Load the hyperparameter values that are passed to the model during training. In this tutorial, the Lasso regressor is used, because it has several parameters that can be used to help demonstrate how to choose HP tuning values. (The range of values are set below in the configuration file for the HP tuning values.) End of explanation %%writefile -a ./auto_mpg_hp_tuning/train.py # Public bucket holding the auto mpg data bucket = storage.Client().bucket('cloud-samples-data') # Path to the data inside the public bucket blob = bucket.blob('ml-engine/auto_mpg/auto-mpg.data') # Download the data blob.download_to_filename('auto-mpg.data') # --------------------------------------- # This is where your model code would go. Below is an example model using the auto mpg dataset. # --------------------------------------- # Define the format of your input data including unused columns # (These are the columns from the auto-mpg data files) COLUMNS = [ 'mpg', 'cylinders', 'displacement', 'horsepower', 'weight', 'acceleration', 'model-year', 'origin', 'car-name' ] FEATURES = [ 'cylinders', 'displacement', 'horsepower', 'weight', 'acceleration', 'model-year', 'origin' ] TARGET = 'mpg' # Load the training auto mpg dataset with open('./auto-mpg.data', 'r') as train_data: raw_training_data = pd.read_csv(train_data, header=None, names=COLUMNS, delim_whitespace=True) raw_training_data = raw_training_data[FEATURES + [TARGET]] train_df, test_df = split_dataframe(raw_training_data, 0.8) Explanation: Add code to download the data from GCS In this case, using the publicly hosted data,AI Platform will then be able to use the data when training your model. End of explanation %%writefile -a ./auto_mpg_hp_tuning/train.py # Create the regressor, here we will use a Lasso Regressor to demonstrate the use of HP Tuning. # Here is where we set the variables used during HP Tuning from # the parameters passed into the python script regressor = xgb.XGBRegressor(max_depth=args.max_depth, n_estimators=args.n_estimators, booster=args.booster ) # Transform the features and fit them to the regressor regressor.fit(train_df[FEATURES], train_df[TARGET]) Explanation: Use the Hyperparameters Use the Hyperparameter values passed in those arguments to set the corresponding hyperparameters in your application's XGBoost code. End of explanation %%writefile -a ./auto_mpg_hp_tuning/train.py # Calculate the mean accuracy on the given test data and labels. score = regressor.score(test_df[FEATURES], test_df[TARGET]) # The default name of the metric is training/hptuning/metric. 
# We recommend that you assign a custom name. The only functional difference is that # if you use a custom name, you must set the hyperparameterMetricTag value in the # HyperparameterSpec object in your job request to match your chosen name. # https://cloud.google.com/ml-engine/reference/rest/v1/projects.jobs#HyperparameterSpec hpt = hypertune.HyperTune() hpt.report_hyperparameter_tuning_metric( hyperparameter_metric_tag='my_metric_tag', metric_value=score, global_step=1000) Explanation: Report the mean accuracy as hyperparameter tuning objective metric. End of explanation %%writefile -a ./auto_mpg_hp_tuning/train.py # Export the model to a file model_filename = 'model.pkl' with open(model_filename, "wb") as f: pickle.dump(regressor, f) # Example: job_dir = 'gs://BUCKET_ID/xgboost_job_dir/1' job_dir = args.job_dir.replace('gs://', '') # Remove the 'gs://' # Get the Bucket Id bucket_id = job_dir.split('/')[0] # Get the path bucket_path = job_dir[len('{}/'.format(bucket_id)):] # Example: 'xgboost_job_dir/1' # Upload the model to GCS bucket = storage.Client().bucket(bucket_id) blob = bucket.blob('{}/{}'.format( bucket_path, model_filename)) blob.upload_from_filename(model_filename) Explanation: Export and save the model to GCS End of explanation %%writefile ./auto_mpg_hp_tuning/__init__.py #!/usr/bin/env python # Copyright 2018 Google LLC # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # Note that __init__.py can be an empty file. Explanation: Part 2: Create Trainer Package with Hyperparameter Tuning Next we need to build the Trainer Package, which holds all your code and dependencies need to train your model on AI Platform. First, we create an empty __init__.py file. End of explanation %%writefile ./hptuning_config.yaml #!/usr/bin/env python # Copyright 2018 Google LLC # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # hyperparam.yaml trainingInput: hyperparameters: goal: MAXIMIZE maxTrials: 30 maxParallelTrials: 5 hyperparameterMetricTag: my_metric_tag enableTrialEarlyStopping: TRUE params: - parameterName: max_depth type: INTEGER minValue: 3 maxValue: 8 - parameterName: n_estimators type: INTEGER minValue: 50 maxValue: 200 - parameterName: booster type: CATEGORICAL categoricalValues: [ "gbtree", "gblinear", "dart" ] Explanation: Next, we need to set the hp tuning values used to train our model. Check HyperparameterSpec for more info. In this config file several key things are set: * maxTrials - How many training trials should be attempted to optimize the specified hyperparameters. 
* maxParallelTrials: 5 - The number of training trials to run concurrently. * params - The set of parameters to tune.. These are the different parameters to pass into your model and the specified ranges you wish to try. * parameterName - The parameter name must be unique amongst all ParameterConfigs * type - The type of the parameter. [INTEGER, DOUBLE, ...] * minValue & maxValue - The range of values that this parameter could be. * scaleType - How the parameter should be scaled to the hypercube. Leave unset for categorical parameters. Some kind of scaling is strongly recommended for real or integral parameters (e.g., UNIT_LINEAR_SCALE). End of explanation %%writefile ./setup.py #!/usr/bin/env python # Copyright 2018 Google LLC # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from setuptools import find_packages from setuptools import setup REQUIRED_PACKAGES = ['cloudml-hypertune'] setup( name='auto_mpg_hp_tuning', version='0.1', install_requires=REQUIRED_PACKAGES, packages=find_packages(), include_package_data=True, description='Auto MPG XGBoost HP tuning training application' ) Explanation: Lastly, we need to install the dependencies used in our model. Check adding_standard_pypi_dependencies for more info. To do this, AI Platform uses a setup.py file to install your dependencies. End of explanation ! gcloud config set project $PROJECT_ID Explanation: Part 3: Submit Training Job Next we need to submit the job for training on AI Platform. We'll use gcloud to submit the job which has the following flags: job-name - A name to use for the job (mixed-case letters, numbers, and underscores only, starting with a letter). In this case: auto_mpg_hp_tuning_$(date +"%Y%m%d_%H%M%S") job-dir - The path to a Google Cloud Storage location to use for job output. package-path - A packaged training application that is staged in a Google Cloud Storage location. If you are using the gcloud command-line tool, this step is largely automated. module-name - The name of the main module in your trainer package. The main module is the Python file you call to start the application. If you use the gcloud command to submit your job, specify the main module name in the --module-name argument. Refer to Python Packages to figure out the module name. region - The Google Cloud Compute region where you want your job to run. You should run your training job in the same region as the Cloud Storage bucket that stores your training data. Select a region from here or use the default 'us-central1'. runtime-version - The version of AI Platform to use for the job. If you don't specify a runtime version, the training service uses the default AI Platform runtime version 1.0. See the list of runtime versions for more information. python-version - The Python version to use for the job. Python 3.5 is available with runtime version 1.4 or greater. If you don't specify a Python version, the training service uses Python 2.7. scale-tier - A scale tier specifying the type of processing cluster to run your job on. 
This can be the CUSTOM scale tier, in which case you also explicitly specify the number and type of machines to use. config - Path to the job configuration file. This file should be a YAML document (JSON also accepted) containing a Job resource as defined in the API Note: Check to make sure gcloud is set to the current PROJECT_ID End of explanation ! gcloud ml-engine jobs submit training auto_mpg_hp_tuning_$(date +"%Y%m%d_%H%M%S") \ --job-dir $JOB_DIR \ --package-path $TRAINER_PACKAGE_PATH \ --module-name $MAIN_TRAINER_MODULE \ --region $REGION \ --runtime-version=$RUNTIME_VERSION \ --python-version=$PYTHON_VERSION \ --scale-tier basic \ --config $HPTUNING_CONFIG Explanation: Submit the training job. End of explanation ! gsutil ls $JOB_DIR/* Explanation: [Optional] StackDriver Logging You can view the logs for your training job: 1. Go to https://console.cloud.google.com/ 1. Select "Logging" in left-hand pane 1. In left-hand pane, go to "AI Platform" and select Jobs 1. In filter by prefix, use the value of $JOB_NAME to view the logs On the logging page of your model, you can view the different results for each HP tuning job. Example: { "trialId": "15", "hyperparameters": { "booster": "dart", "max_depth": "7", "n_estimators": "102" }, "finalMetric": { "trainingStep": "1000", "objectiveValue": 0.9259230441279733 } } [Optional] Verify Model File in GCS View the contents of the destination model folder to verify that all 30 model files have indeed been uploaded to GCS. Note: The model can take a few minutes to train and show up in GCS. End of explanation
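As a follow-on to the record above: once a tuning trial has written its model.pkl under the job directory, the pickled XGBoost regressor can be pulled back down and used for local predictions. The sketch below is not part of the original notebook; the bucket name and trial path are placeholders, and it only assumes the libraries already used above (google-cloud-storage, pickle, pandas, the XGBoost scikit-learn wrapper).

import pickle
import pandas as pd
from google.cloud import storage

# Placeholder bucket and trial path -- substitute your own BUCKET_ID and trial number.
bucket = storage.Client().bucket('<BUCKET_ID>')
blob = bucket.blob('xgboost_job_dir/15/model.pkl')
blob.download_to_filename('model.pkl')

# Load the pickled XGBRegressor exported by train.py.
with open('model.pkl', 'rb') as f:
    regressor = pickle.load(f)

# One made-up car, with the same feature columns the trainer was fit on.
sample = pd.DataFrame([{'cylinders': 4, 'displacement': 119.0, 'horsepower': 82.0,
                        'weight': 2720.0, 'acceleration': 19.4,
                        'model-year': 82, 'origin': 1}])
print(regressor.predict(sample))  # predicted miles per gallon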
5,001
Given the following text description, write Python code to implement the functionality described below step by step Description: Bay Area Bike Share Analysis Introduction Tip Step1: Condensing the Trip Data The first step is to look at the structure of the dataset to see if there's any data wrangling we should perform. The below cell will read in the sampled data file that you created in the previous cell, and print out the first few rows of the table. Step3: In this exploration, we're going to concentrate on factors in the trip data that affect the number of trips that are taken. Let's focus down on a few selected columns Step5: You can now use the mapping to condense the trip data to the selected columns noted above. This will be performed in the summarise_data() function below. As part of this function, the datetime module is used to parse the timestamp strings from the original data file as datetime objects (strptime), which can then be output in a different string format (strftime). The parsed objects also have a variety of attributes and methods to quickly obtain There are two tasks that you will need to complete to finish the summarise_data() function. First, you should perform an operation to convert the trip durations from being in terms of seconds to being in terms of minutes. (There are 60 seconds in a minute.) Secondly, you will need to create the columns for the year, month, hour, and day of the week. Take a look at the documentation for datetime objects in the datetime module. Find the appropriate attributes and method to complete the below code. Step6: Question 3 Step7: Tip Step8: You should see that there are over 27,000 trips in the first month, and that the average trip duration is larger than the median trip duration (the point where 50% of trips are shorter, and 50% are longer). In fact, the mean is larger than the 75% shortest durations. This will be interesting to look at later on. Let's start looking at how those trips are divided by subscription type. One easy way to build an intuition about the data is to plot it. We'll use the usage_plot() function for this. The second argument of the function allows us to count up the trips across a selected variable, displaying the information in a plot. The expression below will show how many customer and how many subscriber trips were made. Try it out! Step9: Seems like there's about 50% more trips made by subscribers in the first month than customers. Let's try a different variable now. What does the distribution of trip durations look like? Step10: Looks pretty strange, doesn't it? Take a look at the duration values on the x-axis. Most rides are expected to be 30 minutes or less, since there are overage charges for taking extra time in a single trip. The first bar spans durations up to about 1000 minutes, or over 16 hours. Based on the statistics we got out of usage_stats(), we should have expected some trips with very long durations that bring the average to be so much higher than the median Step11: This is looking better! You can see that most trips are indeed less than 30 minutes in length, but there's more that you can do to improve the presentation. Since the minimum duration is not 0, the left hand bar is slighly above 0. We want to be able to tell where there is a clear boundary at 30 minutes, so it will look nicer if we have bin sizes and bin boundaries that correspond to some number of minutes. Fortunately, you can use the optional "boundary" and "bin_width" parameters to adjust the plot. 
By setting "boundary" to 0, one of the bin edges (in this case the left-most bin) will start at 0 rather than the minimum trip duration. And by setting "bin_width" to 5, each bar will count up data points in five-minute intervals. Step12: Question 4 Step13: Since the summarise_data() function has created a standalone file, the above cell will not need to be run a second time, even if you close the notebook and start a new session. You can just load in the dataset and then explore things from there. Step14: Now it's your turn to explore the new dataset with usage_stats() and usage_plot() and report your findings! Here's a refresher on how to use the usage_plot() function Step15: Explore some different variables using the functions above and take note of some trends you find. Feel free to create additional cells if you want to explore the dataset in other ways or multiple ways. Tip Step16: Question 5a
Python Code: # import all necessary packages and functions. import csv from datetime import datetime import numpy as np import pandas as pd from babs_datacheck import question_3 from babs_visualizations import usage_stats, usage_plot from IPython.display import display import matplotlib.pyplot as plt %matplotlib inline # file locations file_in = '201402_trip_data.csv' file_out = '201309_trip_data.csv' with open(file_out, 'w') as f_out, open(file_in, 'r') as f_in: # set up csv reader and writer objects in_reader = csv.reader(f_in) out_writer = csv.writer(f_out) # write rows from in-file to out-file until specified date reached while True: datarow = next(in_reader) # trip start dates in 3rd column, m/d/yyyy HH:MM formats if datarow[2][:9] == '10/1/2013': break out_writer.writerow(datarow) Explanation: Bay Area Bike Share Analysis Introduction Tip: Quoted sections like this will provide helpful instructions on how to navigate and use an iPython notebook. Bay Area Bike Share is a company that provides on-demand bike rentals for customers in San Francisco, Redwood City, Palo Alto, Mountain View, and San Jose. Users can unlock bikes from a variety of stations throughout each city, and return them to any station within the same city. Users pay for the service either through a yearly subscription or by purchasing 3-day or 24-hour passes. Users can make an unlimited number of trips, with trips under thirty minutes in length having no additional charge; longer trips will incur overtime fees. In this project, you will put yourself in the shoes of a data analyst performing an exploratory analysis on the data. You will take a look at two of the major parts of the data analysis process: data wrangling and exploratory data analysis. But before you even start looking at data, think about some questions you might want to understand about the bike share data. Consider, for example, if you were working for Bay Area Bike Share: what kinds of information would you want to know about in order to make smarter business decisions? Or you might think about if you were a user of the bike share service. What factors might influence how you would want to use the service? Question 1: Write at least two questions you think could be answered by data. Answer: The demographic (age, occupation, income, etc...) of the customers. The travel distances. The hotspots (the most popular beginning and ending points of the trips). Using Visualizations to Communicate Findings in Data As a data analyst, the ability to effectively communicate findings is a key part of the job. After all, your best analysis is only as good as your ability to communicate it. In 2014, Bay Area Bike Share held an Open Data Challenge to encourage data analysts to create visualizations based on their open data set. You’ll create your own visualizations in this project, but first, take a look at the submission winner for Best Analysis from Tyler Field. Read through the entire report to answer the following question: Question 2: What visualizations do you think provide the most interesting insights? Are you able to answer either of the questions you identified above based on Tyler’s analysis? Why or why not? Answer: I am most interested in his "where" portion, which the starting and ending stations, and the most popular trips. The visualizations are very insightful and able to fully answer most of my questions, except for the demographic, maybe because it isn't included in the dataset (but could still be guessed based on the starting and ending stations). 
Data Wrangling Now it's time to explore the data for yourself. Year 1 and Year 2 data from the Bay Area Bike Share's Open Data page have already been provided with the project materials; you don't need to download anything extra. The data comes in three parts: the first half of Year 1 (files starting 201402), the second half of Year 1 (files starting 201408), and all of Year 2 (files starting 201508). There are three main datafiles associated with each part: trip data showing information about each trip taken in the system (*_trip_data.csv), information about the stations in the system (*_station_data.csv), and daily weather data for each city in the system (*_weather_data.csv). When dealing with a lot of data, it can be useful to start by working with only a sample of the data. This way, it will be much easier to check that our data wrangling steps are working since our code will take less time to complete. Once we are satisfied with the way things are working, we can then set things up to work on the dataset as a whole. Since the bulk of the data is contained in the trip information, we should target looking at a subset of the trip data to help us get our bearings. You'll start by looking at only the first month of the bike trip data, from 2013-08-29 to 2013-09-30. The code below will take the data from the first half of the first year, then write the first month's worth of data to an output file. This code exploits the fact that the data is sorted by date (though it should be noted that the first two days are sorted by trip time, rather than being completely chronological). First, load all of the packages and functions that you'll be using in your analysis by running the first code cell below. Then, run the second code cell to read a subset of the first trip data file, and write a new file containing just the subset we are initially interested in. Tip: You can run a code cell like you formatted Markdown cells by clicking on the cell and using the keyboard shortcut Shift + Enter or Shift + Return. Alternatively, a code cell can be executed using the Play button in the toolbar after selecting it. While the cell is running, you will see an asterisk in the message to the left of the cell, i.e. In [*]:. The asterisk will change into a number to show that execution has completed, e.g. In [1]. If there is output, it will show up as Out [1]:, with an appropriate number to match the "In" number. End of explanation sample_data = pd.read_csv('201309_trip_data.csv') display(sample_data.head()) Explanation: Condensing the Trip Data The first step is to look at the structure of the dataset to see if there's any data wrangling we should perform. The below cell will read in the sampled data file that you created in the previous cell, and print out the first few rows of the table. End of explanation # Display the first few rows of the station data file. station_info = pd.read_csv('201402_station_data.csv') display(station_info.head()) # This function will be called by another function later on to create the mapping. def create_station_mapping(station_data): Create a mapping from station IDs to cities, returning the result as a dictionary. 
station_map = {} for data_file in station_data: with open(data_file, 'r') as f_in: # set up csv reader object - note that we are using DictReader, which # takes the first row of the file as a header row for each row's # dictionary keys weather_reader = csv.DictReader(f_in) for row in weather_reader: station_map[row['station_id']] = row['landmark'] return station_map Explanation: In this exploration, we're going to concentrate on factors in the trip data that affect the number of trips that are taken. Let's focus down on a few selected columns: the trip duration, start time, start terminal, end terminal, and subscription type. Start time will be divided into year, month, and hour components. We will also add a column for the day of the week and abstract the start and end terminal to be the start and end city. Let's tackle the lattermost part of the wrangling process first. Run the below code cell to see how the station information is structured, then observe how the code will create the station-city mapping. Note that the station mapping is set up as a function, create_station_mapping(). Since it is possible that more stations are added or dropped over time, this function will allow us to combine the station information across all three parts of our data when we are ready to explore everything. End of explanation def summarise_data(trip_in, station_data, trip_out): This function takes trip and station information and outputs a new data file with a condensed summary of major trip information. The trip_in and station_data arguments will be lists of data files for the trip and station information, respectively, while trip_out specifies the location to which the summarized data will be written. # generate dictionary of station - city mapping station_map = create_station_mapping(station_data) with open(trip_out, 'w') as f_out: # set up csv writer object out_colnames = ['duration', 'start_date', 'start_year', 'start_month', 'start_hour', 'weekday', 'start_city', 'end_city', 'subscription_type'] trip_writer = csv.DictWriter(f_out, fieldnames = out_colnames) trip_writer.writeheader() for data_file in trip_in: with open(data_file, 'r') as f_in: # set up csv reader object trip_reader = csv.DictReader(f_in) # collect data from and process each row for row in trip_reader: new_point = {} # convert duration units from seconds to minutes ### Question 3a: Add a mathematical operation below ### ### to convert durations from seconds to minutes. ### new_point['duration'] = float(row['Duration'])/60 # reformat datestrings into multiple columns ### Question 3b: Fill in the blanks below to generate ### ### the expected time values. ### trip_date = datetime.strptime(row['Start Date'], '%m/%d/%Y %H:%M') new_point['start_date'] = trip_date.strftime('%Y-%m-%d') new_point['start_year'] = trip_date.year new_point['start_month'] = trip_date.month new_point['start_hour'] = trip_date.hour new_point['weekday'] = trip_date.weekday() # remap start and end terminal with start and end city new_point['start_city'] = station_map[row['Start Terminal']] new_point['end_city'] = station_map[row['End Terminal']] # two different column names for subscribers depending on file if 'Subscription Type' in row: new_point['subscription_type'] = row['Subscription Type'] else: new_point['subscription_type'] = row['Subscriber Type'] # write the processed information to the output file. trip_writer.writerow(new_point) Explanation: You can now use the mapping to condense the trip data to the selected columns noted above. 
This will be performed in the summarise_data() function below. As part of this function, the datetime module is used to parse the timestamp strings from the original data file as datetime objects (strptime), which can then be output in a different string format (strftime). The parsed objects also have a variety of attributes and methods to quickly obtain There are two tasks that you will need to complete to finish the summarise_data() function. First, you should perform an operation to convert the trip durations from being in terms of seconds to being in terms of minutes. (There are 60 seconds in a minute.) Secondly, you will need to create the columns for the year, month, hour, and day of the week. Take a look at the documentation for datetime objects in the datetime module. Find the appropriate attributes and method to complete the below code. End of explanation # Process the data by running the function we wrote above. station_data = ['201402_station_data.csv'] trip_in = ['201309_trip_data.csv'] trip_out = '201309_trip_summary.csv' summarise_data(trip_in, station_data, trip_out) # Load in the data file and print out the first few rows sample_data = pd.read_csv(trip_out) display(sample_data.head()) # Verify the dataframe by counting data points matching each of the time features. question_3(sample_data) Explanation: Question 3: Run the below code block to call the summarise_data() function you finished in the above cell. It will take the data contained in the files listed in the trip_in and station_data variables, and write a new file at the location specified in the trip_out variable. If you've performed the data wrangling correctly, the below code block will print out the first few lines of the dataframe and a message verifying that the data point counts are correct. End of explanation trip_data = pd.read_csv('201309_trip_summary.csv') usage_stats(trip_data) Explanation: Tip: If you save a jupyter Notebook, the output from running code blocks will also be saved. However, the state of your workspace will be reset once a new session is started. Make sure that you run all of the necessary code blocks from your previous session to reestablish variables and functions before picking up where you last left off. Exploratory Data Analysis Now that you have some data saved to a file, let's look at some initial trends in the data. Some code has already been written for you in the babs_visualizations.py script to help summarize and visualize the data; this has been imported as the functions usage_stats() and usage_plot(). In this section we'll walk through some of the things you can do with the functions, and you'll use the functions for yourself in the last part of the project. First, run the following cell to load the data, then use the usage_stats() function to see the total number of trips made in the first month of operations, along with some statistics regarding how long trips took. End of explanation usage_plot(trip_data, 'subscription_type') Explanation: You should see that there are over 27,000 trips in the first month, and that the average trip duration is larger than the median trip duration (the point where 50% of trips are shorter, and 50% are longer). In fact, the mean is larger than the 75% shortest durations. This will be interesting to look at later on. Let's start looking at how those trips are divided by subscription type. One easy way to build an intuition about the data is to plot it. We'll use the usage_plot() function for this. 
The second argument of the function allows us to count up the trips across a selected variable, displaying the information in a plot. The expression below will show how many customer and how many subscriber trips were made. Try it out! End of explanation usage_plot(trip_data, 'duration') Explanation: Seems like there's about 50% more trips made by subscribers in the first month than customers. Let's try a different variable now. What does the distribution of trip durations look like? End of explanation usage_plot(trip_data, 'duration', ['duration < 60']) Explanation: Looks pretty strange, doesn't it? Take a look at the duration values on the x-axis. Most rides are expected to be 30 minutes or less, since there are overage charges for taking extra time in a single trip. The first bar spans durations up to about 1000 minutes, or over 16 hours. Based on the statistics we got out of usage_stats(), we should have expected some trips with very long durations that bring the average to be so much higher than the median: the plot shows this in a dramatic, but unhelpful way. When exploring the data, you will often need to work with visualization function parameters in order to make the data easier to understand. Here's where the third argument of the usage_plot() function comes in. Filters can be set for data points as a list of conditions. Let's start by limiting things to trips of less than 60 minutes. End of explanation usage_plot(trip_data, 'duration', ['duration < 60'], boundary = 0, bin_width = 5) Explanation: This is looking better! You can see that most trips are indeed less than 30 minutes in length, but there's more that you can do to improve the presentation. Since the minimum duration is not 0, the left hand bar is slighly above 0. We want to be able to tell where there is a clear boundary at 30 minutes, so it will look nicer if we have bin sizes and bin boundaries that correspond to some number of minutes. Fortunately, you can use the optional "boundary" and "bin_width" parameters to adjust the plot. By setting "boundary" to 0, one of the bin edges (in this case the left-most bin) will start at 0 rather than the minimum trip duration. And by setting "bin_width" to 5, each bar will count up data points in five-minute intervals. End of explanation station_data = ['201402_station_data.csv', '201408_station_data.csv', '201508_station_data.csv' ] trip_in = ['201402_trip_data.csv', '201408_trip_data.csv', '201508_trip_data.csv' ] trip_out = 'babs_y1_y2_summary.csv' # This function will take in the station data and trip data and # write out a new data file to the name listed above in trip_out. summarise_data(trip_in, station_data, trip_out) Explanation: Question 4: Which five-minute trip duration shows the most number of trips? Approximately how many trips were made in this range? Answer: Approximately 9,000 trips were made in the 5-10 minutes range. Visual adjustments like this might be small, but they can go a long way in helping you understand the data and convey your findings to others. Performing Your Own Analysis Now that you've done some exploration on a small sample of the dataset, it's time to go ahead and put together all of the data in a single file and see what trends you can find. The code below will use the same summarise_data() function as before to process data. After running the cell below, you'll have processed all the data into a single data file. 
Note that the function will not display any output while it runs, and this can take a while to complete since you have much more data than the sample you worked with above. End of explanation trip_data = pd.read_csv('babs_y1_y2_summary.csv') display(trip_data.head()) Explanation: Since the summarise_data() function has created a standalone file, the above cell will not need to be run a second time, even if you close the notebook and start a new session. You can just load in the dataset and then explore things from there. End of explanation usage_stats(trip_data) usage_plot(trip_data, 'start_city', boundary = 0, bin_width = 1) usage_plot(trip_data, 'start_hour', boundary = 0, bin_width = 1) usage_plot(trip_data, 'start_month', boundary = 0, bin_width = 1) usage_plot(trip_data, 'weekday', boundary = 0, bin_width = 1) Explanation: Now it's your turn to explore the new dataset with usage_stats() and usage_plot() and report your findings! Here's a refresher on how to use the usage_plot() function: first argument (required): loaded dataframe from which data will be analyzed. second argument (required): variable on which trip counts will be divided. third argument (optional): data filters limiting the data points that will be counted. Filters should be given as a list of conditions, each element should be a string in the following format: '&lt;field&gt; &lt;op&gt; &lt;value&gt;' using one of the following operations: >, <, >=, <=, ==, !=. Data points must satisfy all conditions to be counted or visualized. For example, ["duration &lt; 15", "start_city == 'San Francisco'"] retains only trips that originated in San Francisco and are less than 15 minutes long. If data is being split on a numeric variable (thus creating a histogram), some additional parameters may be set by keyword. - "n_bins" specifies the number of bars in the resultant plot (default is 10). - "bin_width" specifies the width of each bar (default divides the range of the data by number of bins). "n_bins" and "bin_width" cannot be used simultaneously. - "boundary" specifies where one of the bar edges will be placed; other bar edges will be placed around that value (this may result in an additional bar being plotted). This argument may be used alongside the "n_bins" and "bin_width" arguments. You can also add some customization to the usage_stats() function as well. The second argument of the function can be used to set up filter conditions, just like how they are set up in usage_plot(). End of explanation # Final Plot 1 usage_plot(trip_data, 'start_hour', boundary = 0, bin_width = 1) Explanation: Explore some different variables using the functions above and take note of some trends you find. Feel free to create additional cells if you want to explore the dataset in other ways or multiple ways. Tip: In order to add additional cells to a notebook, you can use the "Insert Cell Above" and "Insert Cell Below" options from the menu bar above. There is also an icon in the toolbar for adding new cells, with additional icons for moving the cells up and down the document. By default, new cells are of the code type; you can also specify the cell type (e.g. Code or Markdown) of selected cells from the Cell menu or the dropdown in the toolbar. One you're done with your explorations, copy the two visualizations you found most interesting into the cells below, then answer the following questions with a few sentences describing what you found and why you selected the figures. 
Make sure that you adjust the number of bins or the bin limits so that they effectively convey data findings. Feel free to supplement this with any additional numbers generated from usage_stats() or place multiple visualizations to support your observations. End of explanation # Final Plot 2 usage_plot(trip_data, 'duration', ['duration < 60'], boundary = 0, bin_width = 5) Explanation: Question 5a: What is interesting about the above visualization? Why did you select it? Answer: The plot shows the usage of the service during the day. We can see that busiest hours are 7-10AM and 4-7PM, which are the time people commute to and from work/school. There are also a fair share of trips happen during the day, but there is no significant increase during lunch time (I guess people don't bike for lunch). End of explanation
5,002
Given the following text description, write Python code to implement the functionality described below step by step Description: CIFAR10 是另外一個 dataset, 和 mnist 一樣,有十種類別(飛機、汽車、鳥、貓、鹿、狗、青蛙、馬、船、卡車) https Step1: 查看一下資料 Step2: Q 將之前的 logistic regression 套用過來看看 將之前的 cnn model 套用過來看看 (注意資料格式, channel x H x W 還是 H x W x channel) 試試看改善準確度 增加 Dropout (https
Python Code: import keras from keras.models import Sequential from PIL import Image import numpy as np import tarfile # 下載 dataset url = "https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz" import os import urllib from urllib.request import urlretrieve def reporthook(a,b,c): print("\rdownloading: %5.1f%%"%(a*b*100.0/c), end="") tar_gz = "cifar-10-python.tar.gz" if not os.path.isfile(tar_gz): print('Downloading data from %s' % url) urlretrieve(url, tar_gz, reporthook=reporthook) # 讀取 dataset # 只有 train 和 test 沒有 validation import pickle train_X=[] train_y=[] tar_gz = "cifar-10-python.tar.gz" with tarfile.open(tar_gz) as tarf: for i in range(1, 6): dataset = "cifar-10-batches-py/data_batch_%d"%i print("load",dataset) with tarf.extractfile(dataset) as f: result = pickle.load(f, encoding='latin1') train_X.extend(result['data']/255) train_y.extend(result['labels']) train_X=np.float32(train_X) train_y=np.int32(train_y) dataset = "cifar-10-batches-py/test_batch" print("load",dataset) with tarf.extractfile(dataset) as f: result = pickle.load(f, encoding='latin1') test_X=np.float32(result['data']/255) test_y=np.int32(result['labels']) train_Y = np.eye(10)[train_y] test_Y = np.eye(10)[test_y] # or # from keras.datasets import cifar10 # from keras.utils import np_utils # (train_X, train_y), (test_X, test_y) = cifar10.load_data() # train_Y = np_utils.to_categorical(train_y, 10) # test_Y = np_utils.to_categorical(test_y, 10) Explanation: CIFAR10 是另外一個 dataset, 和 mnist 一樣,有十種類別(飛機、汽車、鳥、貓、鹿、狗、青蛙、馬、船、卡車) https://www.cs.toronto.edu/~kriz/cifar.html End of explanation train_X.shape # channels x 高 x 寬 (顏色) 3*32*32 from IPython.display import display def showX(X): int_X = (X*255).clip(0,255).astype('uint8') # N*3072 -> N*3*32*32 -> 32 * 32N * 3 int_X_reshape = np.moveaxis(int_X.reshape(-1,3,32,32), 1, 3) int_X_reshape = int_X_reshape.swapaxes(0,1).reshape(32,-1, 3) display(Image.fromarray(int_X_reshape)) # 訓練資料, X 的前 20 筆 showX(train_X[:20]) print(train_y[:20]) name_array = np.array("飛機、汽車、鳥、貓、鹿、狗、青蛙、馬、船、卡車".split('、')) print(name_array[train_y[:20]]) Explanation: 查看一下資料 End of explanation # 參考答案 # %load q_cifar10_logistic.py # 參考答案 # %load q_cifar10_cnn.py Explanation: Q 將之前的 logistic regression 套用過來看看 將之前的 cnn model 套用過來看看 (注意資料格式, channel x H x W 還是 H x W x channel) 試試看改善準確度 增加 Dropout (https://keras.io/layers/core/#dropout) 增加 BatchNormaliztion (https://keras.io/layers/normalization/) activation 換成其他的? End of explanation
5,003
Given the following text description, write Python code to implement the functionality described below step by step Description: Oregon Curriculum Network <br /> Discovering Math with Python FOCUSING ON THE S FACTOR $\phi$ DOODLES Using $\LaTeX$ <a data-flickr-embed="true" href="https Step1: In showing off the Decimal type, I'm advertising high precision, but not "infinite precision". Please be tolerant of our epsilons (tiny abberations). Step2: JITTERBUG TRANSFORMATION We call this the S Factor by the way. VE Step3: "SMALLGUY" Above is an expression for the volume of said Icosa in tetravolumes. We may think of it as "two applications of the S-Factor bigger" than a smaller cubocta, with edges, get this, equal in magnitude to the volume of the edge 2 icosa. David Koski and I got to calling this cubocta "SmallGuy" (feel free to substitute your own moniker). The Concentric Hierarchy has a Sesame Street flavor (kids' TV show) in some walkx of life, lending to our penchant for colloquialisms. Step4: Another way to reach the SmallGuy is to start with the volume 20 cubocta and shrink its edges by the S Factor, which means volume shrinks by a factor of the reciprocal of said S Factor to the 3rd power or $1/s_factor ^{3}$ Step5: RHOMBIC TRIACONTAHEDRON (RT) <a data-flickr-embed="true" href="https Step6: S3 is our conversion constant for going between XYZ cube volumes and IVM tetra volumes. The two mensuration systems each have their own unit volume, by convention a .5 radius edge cube versus a 1.0 diametered edged tetrahedron, or use edges 1 and 2 if preferred, their ratio will be the same, with the cube a bit bigger. <a data-flickr-embed="true" href="https Step7: E MODULE <a data-flickr-embed="true" href="https Step8: The S factor again, yes? $\sqrt{2}-(\sqrt{2}(\phi^{-3}))= 2\sqrt{2}(\phi^{-2})$ = S Factor. Another expression for the S Factor is $24E + 8e3$ where E means emod, and $e3$ means $E * \phi^{-3}$. Step9: S MODULE Now lets shrink the 20 volumed VE by halving all edges, reducing volume by a factor of 8, to 2.5 Step10: <a data-flickr-embed="true" href="https Step11: The figure below is an S-Factor radius, meaning from the center to each diamond face center on the surface. <a data-flickr-embed="true" href="https
Python Code: from math import sqrt as rt2 from decimal import Decimal, getcontext context = getcontext() context.prec = 50 one = Decimal(1) # 28 digits of precision by default, more on tap two = Decimal(2) three = Decimal(3) five = Decimal(5) nine = Decimal(9) eight = Decimal(8) sqrt2 = two.sqrt() sqrt5 = five.sqrt() Ø = (one + sqrt5)/two S3 = (nine/eight).sqrt() # Got Synergetics? Explanation: Oregon Curriculum Network <br /> Discovering Math with Python FOCUSING ON THE S FACTOR $\phi$ DOODLES Using $\LaTeX$ <a data-flickr-embed="true" href="https://www.flickr.com/photos/kirbyurner/37271936814/in/photolist-D17NrN-YMAqPs-YMAqhf-YeD8Xh-ZPBUSB-Y3ouSi-XhBkg1-WtCTD7-WtFwVg-VhXZoN-VFDUfa-UXW4tu-VakADe-TWeHGx-TVX4PF-UzeozQ-UtGD6C-TZAAJw-TH8XUh-TRFNEP-Sv4eHf-SZBzQK-SkVmHf-RVG9nU-RYjrxF-Gcea6D-QVaS2M-QtgU8h-QcgRgB-MG4f5j-LMZHu8-LySA48-KVPQgq-KyFu4d-JqMX3i-JrLuuS-HsbyEL-HsbzWo-Hsbwff-Dc4EtD-CM5VMh-DzQtqC-AmvTwh-uyTQjq-sDbPe3-rb6EdZ-qcA5rT-puPHA3-qapc5R-qafBJE" title="Eyeballing Code"><img src="https://farm5.staticflickr.com/4492/37271936814_18e7e72217.jpg" width="500" height="281" alt="Eyeballing Code"></a><script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"></script> First, some identity checks (not proofs), using Decimal objects: $\sqrt{2}-(\sqrt{2}(\phi^{-3}))= 2\sqrt{2}(\phi^{-2})$ $(\phi^{-2})+(\phi^{-3})+(\phi^{2}) = 1$ End of explanation (Ø**-2) + (Ø**-3) + ( Ø**-2) sqrt2 - sqrt2 * Ø**-3 Explanation: In showing off the Decimal type, I'm advertising high precision, but not "infinite precision". Please be tolerant of our epsilons (tiny abberations). End of explanation two * sqrt2 * (Ø**-2) icosa = five * sqrt2 * Ø ** 2 icosa ve = Decimal(20) s_factor = ve / icosa s_factor # see? Explanation: JITTERBUG TRANSFORMATION We call this the S Factor by the way. VE:Icosa :: S:E is what to remember. VE is the 12-around-1 nuclear sphere based agglomeration whereas Icosa is dervied from Jitterbugging, a mathematical transformation with a more technical name if you're a math snob (I can be). <a data-flickr-embed="true" href="https://www.flickr.com/photos/kirbyurner/8393394058/in/photolist-dMGkCE-5y2HNo" title="Jitterbug Transformation"><img src="https://farm9.staticflickr.com/8195/8393394058_b096cc1e20.jpg" width="500" height="375" alt="Jitterbug Transformation"></a><script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"></script> End of explanation SmallGuy = icosa * one/s_factor * one/s_factor SmallGuy Explanation: "SMALLGUY" Above is an expression for the volume of said Icosa in tetravolumes. We may think of it as "two applications of the S-Factor bigger" than a smaller cubocta, with edges, get this, equal in magnitude to the volume of the edge 2 icosa. David Koski and I got to calling this cubocta "SmallGuy" (feel free to substitute your own moniker). The Concentric Hierarchy has a Sesame Street flavor (kids' TV show) in some walkx of life, lending to our penchant for colloquialisms. 
End of explanation ve * (one/s_factor)**3 SmallGuy_edge = two * (one/s_factor) # effect on edges SmallGuy_edge Explanation: Another way to reach the SmallGuy is to start with the volume 20 cubocta and shrink its edges by the S Factor, which means volume shrinks by a factor of the reciprocal of said S Factor to the 3rd power or $1/s_factor ^{3}$ End of explanation superRT = ve * S3 superRT Explanation: RHOMBIC TRIACONTAHEDRON (RT) <a data-flickr-embed="true" href="https://www.flickr.com/photos/kirbyurner/4148457444/in/photolist-27mJSE2-9AU59Y-97TTvV-8dyvmP-8batBx-8awATh-7YZyV3-7nfvNf-7jzVRo-7jzVSm-5STUzp-5DsYfo-5zTRpB-JoybP-Joybi-6BkWWK" title="Rhombic Triacontahedron"><img src="https://farm3.staticflickr.com/2662/4148457444_60e88eee55.jpg" width="454" height="354" alt="Rhombic Triacontahedron"></a><script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"></script> End of explanation Decimal('15') * sqrt2 Explanation: S3 is our conversion constant for going between XYZ cube volumes and IVM tetra volumes. The two mensuration systems each have their own unit volume, by convention a .5 radius edge cube versus a 1.0 diametered edged tetrahedron, or use edges 1 and 2 if preferred, their ratio will be the same, with the cube a bit bigger. <a data-flickr-embed="true" href="https://www.flickr.com/photos/kirbyurner/9530922237/in/album-72157624750749042/" title="Regular and Right Tetrahedrons Compared"><img src="https://farm4.staticflickr.com/3790/9530922237_dcc672a68a.jpg" width="500" height="375" alt="Regular and Right Tetrahedrons Compared"></a><script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"></script> SuperRT is the RT (rhombic triacontahedron) formed by the Icosa and its dual, the Pentagonal Dodecahedron, the two five-fold symmetric shapes in the Platonic set of five polys. The Icosa we're talking about is the one above, derived from the VE of volume 20, through Jitterbugging. If we shrink SuperRT down by $\phi^{-3}$ volume-wise (all edges are now $\phi^{-1}$ their initial length), and carve it into 120 modules (60 left, 60 right), then lo and behold, we have the E modules. Another expression for SuperRT volume is $15\sqrt{2}$. End of explanation emod = (superRT * Ø**-3)/Decimal(120) emod smod = emod * s_factor smod smod/emod Explanation: E MODULE <a data-flickr-embed="true" href="https://www.flickr.com/photos/kirbyurner/39286474584/in/photolist-22RBsb5-CwvwpP-vLby2U-ujipN3-f75zUP-aDSfHf-8ryEix-8ryECF-7mcmne-5zTRjp-5zY9gA-7k4Eid-7k4Em5-7jZLe2-7jZLhp-7k4Ejf" title="module_studies"><img src="https://farm5.staticflickr.com/4626/39286474584_2df7913f05.jpg" width="500" height="271" alt="module_studies"></a><script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"></script> End of explanation Decimal(24) * emod + Decimal(8) * emod * Ø**-3 Explanation: The S factor again, yes? $\sqrt{2}-(\sqrt{2}(\phi^{-3}))= 2\sqrt{2}(\phi^{-2})$ = S Factor. Another expression for the S Factor is $24E + 8e3$ where E means emod, and $e3$ means $E * \phi^{-3}$. 
End of explanation small_ve = ve / Decimal(8) Explanation: S MODULE Now lets shrink the 20 volumed VE by halving all edges, reducing volume by a factor of 8, to 2.5 End of explanation skew_icosa = small_ve * s_factor * s_factor skew_icosa skew_icosa + (24 * smod) Explanation: <a data-flickr-embed="true" href="https://www.flickr.com/photos/kirbyurner/6335726352/in/photolist-vLby2U-ujipN3-f75zUP-aDSfHf-8ryEix-8ryECF-7mcmne-5zTRjp-5zY9gA-7k4Eid-7k4Em5-7jZLe2-7jZLhp-7k4Ejf" title="S Module"><img src="https://farm7.staticflickr.com/6114/6335726352_902009df40.jpg" width="500" height="441" alt="S Module"></a><script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"></script> As every grade schooler knows, if at all aware of their heritage, said VE inscribes inside the octahedron of volume 4, as does an Icosahedron with flush faces. We do a kind of jitterbugging that makes the VE larger instead of smaller. Two applications of the S Factor does the trick. End of explanation Decimal(60) * smod + Decimal(20) * smod * Ø**-3 Decimal(20) * Ø**-4 Explanation: The figure below is an S-Factor radius, meaning from the center to each diamond face center on the surface. <a data-flickr-embed="true" href="https://www.flickr.com/photos/kirbyurner/37516435114/in/photolist-Zacx7b-f763Kv-PjPfT-PjPdz" title="S-Factor Radius"><img src="https://farm5.staticflickr.com/4551/37516435114_d4687b378c.jpg" width="500" height="312" alt="S-Factor Radius"></a><script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"></script> <div align="center">by D.B. Koski using vZome</div> David Koski writes (on Facebook): The volume 4, edge 2 octahedron, has a volume of 4 tetrahedral units or 84S + 20s3 modules S = $(\phi^{-5})/2$ = .045084 s3 = $(\phi^{-8})/2$ = .010643 The icosahedron inside of this octahedron has a volume of 84S+20s3 - 24S = 60S+20s3 = 2.917960 = $20(\phi^{-4})$. Surprisingly, this icosahedron has an edge of 1.08036 or the Sfactor! End of explanation
5,004
Given the following text description, write Python code to implement the functionality described below step by step Description: Introduction to Python Goals Step1: Everything is an Object Everything in Python is considered an object. A string, a list, a function and even a number is an object. For example, you can define a variable to reference a string and then access the methods available for the string object. If you press the tab key after the variable name and period, you will see the methods available for it. Step2: Variables In Python, when you define/create a variable, you are basically creating a reference to an object (i.e string,list,etc). If you want to define/create a new variable from the original variable, you will be creating another reference to the original object rather than copying the contents of the first variable to the second one. Step3: Therefore, if you update the original variable (a), the new variable (b) will automatically reference the updated object. Step4: A variable can have a short name (like x and y) or a more descriptive name (age, dog, owner). Rules for Python variables Step5: Data Types As any other object, you can get information about its type via the built-in function type(). Step6: Combining variables and operations Step7: Binary Operators and Comparisons Step8: The Respective Print statement Step9: Control Flows References Step10: An if statement can be optionally followed by one or more elif blocks and a catch-all else block if all of the conditions are False Step11: Loops For The for statement is used to iterate over the elements of a sequence (such as a string, tuple or list) or other iterable object. Step12: While A while loop allows you to execute a block of code until a condition evaluates to false or the loop is ended with a break command. Step13: Data structures References Step14: The list data type has some more methods an you can find them here. One in particular is the list.append() which allows you to add an item to the end of the list. Equivalent to a[len(a) Step15: You can modify the list values too Step16: Dictionaries Dictionaries are sometimes found in other languages as “associative memories” or “associative arrays”. Dictionaries are indexed by keys, which can be any immutable type; strings and numbers can always be keys. It is best to think of a dictionary as a set of key Step17: Tuples A tuple consists of a number of values separated by commas On output tuples are always enclosed in parentheses, so that nested tuples are interpreted correctly; they may be input with or without surrounding parentheses, although often parentheses are necessary anyway (if the tuple is part of a larger expression). Step18: Tuples are immutable, and usually contain a heterogeneous sequence of elements that are accessed via unpacking or indexing. Lists are mutable, and their elements are usually homogeneous and are accessed by iterating over the list. Step19: Slicing You can select sections of most sequence types by using slice notation, which in its basic form consists of start Step20: Functions Functions allow you to organize and reuse code blocks. If you repeat the same code across several conditions, you could make that code block a function and re-use it. Functions are declared with the def keyword and returned from with the return keyword Step21: Modules References Step22: You can save the code above in a file named math.py. I created the file for you already in the current folder. 
All you have to do is import the math_ops.py file Step23: You can get a list of the current installed modules too Step24: Let's import the datetime module. Step25: Explore the datetime available methods. You can do that by typing the module name, a period after that and pressing the tab key or by using the the built-in functionn dir() as shown below Step26: You can also import a module with a custom name
Python Code: for x in list(range(5)): print("One number per loop..") print(x) if x > 2: print("The number is greater than 2") print("----------------------------") Explanation: Introduction to Python Goals: Learn basic Python operations Understand differences in data structures Get familiarized with conditional statements and loops Learn to create custom functions and import python modules Main Reference: McKinney, Wes. Python for Data Analysis: Data Wrangling with Pandas, NumPy, and IPython. O'Reilly Media. Kindle Edition Indentation Python code is structured by indentation (tabs or spaces) instead of braces which is what other languages normally use. In addition, a colon (:) is used to define the start of an indented code block. End of explanation a = "pedro" a.capitalize() Explanation: Everything is an Object Everything in Python is considered an object. A string, a list, a function and even a number is an object. For example, you can define a variable to reference a string and then access the methods available for the string object. If you press the tab key after the variable name and period, you will see the methods available for it. End of explanation a = [1,2,3] b = a b Explanation: Variables In Python, when you define/create a variable, you are basically creating a reference to an object (i.e string,list,etc). If you want to define/create a new variable from the original variable, you will be creating another reference to the original object rather than copying the contents of the first variable to the second one. End of explanation a.append(4) b Explanation: Therefore, if you update the original variable (a), the new variable (b) will automatically reference the updated object. End of explanation dog_name = 'Pedro' age = 3 is_vaccinated = True birth_year = 2015 is_vaccinated dog_name Explanation: A variable can have a short name (like x and y) or a more descriptive name (age, dog, owner). Rules for Python variables: * A variable name must start with a letter or the underscore character * A variable name cannot start with a number * A variable name can only contain alpha-numeric characters and underscores (A-z, 0-9, and _ ) * Variable names are case-sensitive (age, Age and AGE are three different variables) Reference:https://www.w3schools.com/python/python_variables.asp End of explanation type(age) type(dog_name) type(is_vaccinated) Explanation: Data Types As any other object, you can get information about its type via the built-in function type(). End of explanation x = 4 y = 10 x-y x*y y/x y**x x>y x==y y>=x Explanation: Combining variables and operations End of explanation 2+4 5*6 5>3 Explanation: Binary Operators and Comparisons End of explanation print("Hello Helk!") Explanation: The Respective Print statement End of explanation print("x = " + str(x)) print("y = " + str(y)) if x==y: print('yes') else: print('no') Explanation: Control Flows References: * https://docs.python.org/3/tutorial/controlflow.html * https://docs.python.org/3/reference/compound_stmts.html#the-if-statement If,elif,else statements The if statement is used for conditional execution It selects exactly one of the suites by evaluating the expressions one by one until one is found to be true; then that suite is executed. If all expressions are false, the suite of the else clause, if present, is executed. 
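Conditions can also be combined with the boolean operators and, or and not — a short added example using the x and y values defined earlier:
# Combining conditions with and / or / not
if x > 0 and y > 0:
    print("Both numbers are positive")
if x > 100 or y > 5:
    print("At least one condition is true")
if not x == y:
    print("x and y are different")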
End of explanation if x==y: print('They are equal') elif x > y: print("It is grater than") else: print("None of the conditionals were true") Explanation: An if statement can be optionally followed by one or more elif blocks and a catch-all else block if all of the conditions are False : End of explanation my_dog_list=['Pedro',3,True,2015] for i in range(0,10): print(i*10) Explanation: Loops For The for statement is used to iterate over the elements of a sequence (such as a string, tuple or list) or other iterable object. End of explanation i = 1 while i <= 5: print(i ** 2) i += 1 i = 1 while i > 0: if i > 5: break print(i ** 2) i += 1 Explanation: While A while loop allows you to execute a block of code until a condition evaluates to false or the loop is ended with a break command. End of explanation my_dog_list=['Pedro',3,True,2015] my_dog_list[0] my_dog_list[2:4] print("My dog's name is " + str(my_dog_list[0]) + " and he is " + str(my_dog_list[1]) + " years old.") Explanation: Data structures References: * https://docs.python.org/3/tutorial/datastructures.html * https://python.swaroopch.com/data_structures.html Lists Lists are data structures that allow you to define an ordered collection of items. Lists are constructed with square brackets, separating items with commas: [a, b, c]. Lists are mutable objects which means that you can modify the values contained in them. The elements of a list can be of different types (string, integer, etc) End of explanation my_dog_list.append("tennis balls") my_dog_list Explanation: The list data type has some more methods an you can find them here. One in particular is the list.append() which allows you to add an item to the end of the list. Equivalent to a[len(a):] = [x]. End of explanation my_dog_list[1] = 4 my_dog_list Explanation: You can modify the list values too: End of explanation my_dog_dict={'name':'Pedro','age':3,'is_vaccinated':True,'birth_year':2015} my_dog_dict my_dog_dict['age'] my_dog_dict.keys() my_dog_dict.values() Explanation: Dictionaries Dictionaries are sometimes found in other languages as “associative memories” or “associative arrays”. Dictionaries are indexed by keys, which can be any immutable type; strings and numbers can always be keys. It is best to think of a dictionary as a set of key: value pairs, with the requirement that the keys are unique (within one dictionary). A pair of braces creates an empty dictionary: {}. Remember that key-value pairs in a dictionary are not ordered in any manner. If you want a particular order, then you will have to sort them yourself before using it. End of explanation my_dog_tuple=('Pedro',3,True,2015) my_dog_tuple Explanation: Tuples A tuple consists of a number of values separated by commas On output tuples are always enclosed in parentheses, so that nested tuples are interpreted correctly; they may be input with or without surrounding parentheses, although often parentheses are necessary anyway (if the tuple is part of a larger expression). End of explanation my_dog_tuple[1] Explanation: Tuples are immutable, and usually contain a heterogeneous sequence of elements that are accessed via unpacking or indexing. Lists are mutable, and their elements are usually homogeneous and are accessed by iterating over the list. 
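A tuple can also be unpacked back into separate variables in a single assignment — a small added example based on my_dog_tuple from the cell above:
# Tuple unpacking (illustrative example)
name, age, vaccinated, year = my_dog_tuple
print(name)
print(year)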
End of explanation seq = [ 7 , 2 , 3 , 7 , 5 , 6 , 0 , 1 ] seq [ 1 : 5 ] Explanation: Slicing You can select sections of most sequence types by using slice notation, which in its basic form consists of start:stop passed to the indexing operator [] End of explanation def square(n): return n ** 2 print("Square root of 2 is " + str(square(2))) number_list = [1,2,3,4,5] for number in number_list: sn = square(number) print("Square root of " + str(number) + " is " + str(sn)) Explanation: Functions Functions allow you to organize and reuse code blocks. If you repeat the same code across several conditions, you could make that code block a function and re-use it. Functions are declared with the def keyword and returned from with the return keyword: End of explanation def square(n): return n ** 2 def cube(n): return n ** 3 Explanation: Modules References: * https://docs.python.org/3/tutorial/modules.html#modules If you quit from the Python interpreter and enter it again, the definitions you have made (functions and variables) are lost. Therefore, if you want to write a somewhat longer program, you are better off using a text editor to prepare the input for the interpreter and running it with that file as input instead. Let's say we define two functions: End of explanation import math_ops for number in number_list: sn = square(number) cn = cube(number) print("Square root of " + str(number) + " is " + str(sn)) print("Cube root of " + str(number) + " is " + str(cn)) print("-------------------------") Explanation: You can save the code above in a file named math.py. I created the file for you already in the current folder. All you have to do is import the math_ops.py file End of explanation help('modules') Explanation: You can get a list of the current installed modules too End of explanation import datetime Explanation: Let's import the datetime module. End of explanation dir(datetime) Explanation: Explore the datetime available methods. You can do that by typing the module name, a period after that and pressing the tab key or by using the the built-in functionn dir() as shown below: End of explanation import datetime as dt dir(dt) Explanation: You can also import a module with a custom name End of explanation
5,005
Given the following text description, write Python code to implement the functionality described below step by step Description: Plotting Matrices At some point you may need to plot a matrix. This works a bit differently from regular plots. Step1: In the plot above each cell of the matrix corresponds to one of the coloured grids, with the colour indicating the cell value. Step2: It's possible to smooth the plot by utilizing interpolation. This isn't something that I would recommend though as it hides the structure of your data. Note however that this USED to be the default behaviour of the implot function in earlier versions of Matplotlib. Increasing the sampling density is a better way of generating a nicer plot. Step3: The color scale used to represent the data can also be modified using the cmap keyword argument. Step4: There are lots of different colormaps to choose from.
Python Code: def create_matrix(size): mat = np.zeros((size, size)) for i in range(size): for j in range (size): mat[i, j] = i * j return mat create_matrix(4) mat = create_matrix(20) plt.imshow(mat) plt.colorbar() # Adds a colorbar to the plot to aid in interpretation. plt.xlabel("x") plt.ylabel("y") plt.title("Matrix Plot") Explanation: Plotting Matrices At some point you may need to plot a matrix. This works a bit differently from regular plots. End of explanation mat = create_matrix(20) plt.imshow(mat, interpolation="spline16") plt.colorbar() plt.xlabel("x") plt.ylabel("y") plt.title("Matrix Plot") Explanation: In the plot above each cell of the matrix corresponds to one of the coloured grids, with the colour indicating the cell value. End of explanation mat = create_matrix(60) plt.imshow(mat) plt.colorbar() plt.xlabel("x") plt.ylabel("y") plt.title("Denser Matrix Plot") Explanation: It's possible to smooth the plot by utilizing interpolation. This isn't something that I would recommend though as it hides the structure of your data. Note however that this USED to be the default behaviour of the implot function in earlier versions of Matplotlib. Increasing the sampling density is a better way of generating a nicer plot. End of explanation import matplotlib.cm as cm plt.imshow(mat, cmap=cm.Reds) plt.colorbar() plt.xlabel("x") plt.ylabel("y") plt.title("Switching Color Scale ") Explanation: The color scale used to represent the data can also be modified using the cmap keyword argument. End of explanation plt.imshow(mat, cmap=cm.winter_r) plt.colorbar() plt.xlabel("x") plt.ylabel("y") plt.title("Switching Color Scale ") Explanation: There are lots of different colormaps to choose from. End of explanation
5,006
Given the following text description, write Python code to implement the functionality described below step by step Description: FRETBursts - 8-spot smFRET burst analysis This notebook is part of a tutorial series for the FRETBursts burst analysis software. For a step-by-step introduction to FRETBursts usage please refer to us-ALEX smFRET burst analysis. In this notebook we present a typical FRETBursts workflow for multi-spot smFRET burst analysis. Briefly, we show how to perform background estimation, burst search, burst selection, FRET histograms, and FRET efficiency fit using different methods. Loading the software Step1: Downloading the sample data file The complete example dataset can be downloaded from here. Here we download an 8-spot smFRET measurement file using the download_file in FRETBursts Step2: Selecting a data file Step3: Data load and Burst search Load and process the data Step4: For convenience we can set the correction coefficients right away so that they will be used in the subsequent analysis. The correction coefficients are Step5: NOTE Step6: Perform a background plot as a function of the channel Step7: Let's take a look at the photon waiting times histograms and at the fitted background rates Step8: Using dplot exactly in the same way as for the single-spot data has now generated 8 subplots, one for each channel. Let's plot a timetrace for the background to see is there are significant variations during the measurement Step9: We can look at the timetrace of the photon stream (binning) Step10: We can also open the same plot in an interactive window that allows scrolling (uncomment the following lines) Step11: Burst selection and FRET Selecting bursts by burst size (select_bursts.size) Step12: FRET Fitting 2-Gaussian mixture Let's fit the $E$ histogram with a 2-Gaussians model Step13: The fitted parameters are stored in a pandas DataFrame Step14: Weighted Expectation Maximization The expectation maximization (EM) method is particularly suited to resolve population mixtures. Note that the EM algorithm does not fit the histogram but the $E$ distribution with no binning. FRETBursts include a weighted version of the EM algorithm that can take into account the burst size. The algorithm and benchmarks with the 2-Gaussian histogram fit are reported here. You can find the EM algorithm in fretbursts/fit/gaussian_fit.py or typing Step15: The fitted parameters for each channel are stored in the fit_E_res attribute Step16: The model function is stored in Step17: Let's plot the histogram and the model with parameters from the EM fit Step18: Comparing 2-Gaussian and EM fit To quickly compare the 2-Gaussians with the EM fit we convert the EM fit results in a DataFrame Step19: And we compute the difference between the two sets of parameters
Python Code: from fretbursts import * sns = init_notebook() import lmfit; lmfit.__version__ import phconvert; phconvert.__version__ Explanation: FRETBursts - 8-spot smFRET burst analysis This notebook is part of a tutorial series for the FRETBursts burst analysis software. For a step-by-step introduction to FRETBursts usage please refer to us-ALEX smFRET burst analysis. In this notebook we present a typical FRETBursts workflow for multi-spot smFRET burst analysis. Briefly, we show how to perform background estimation, burst search, burst selection, FRET histograms, and FRET efficiency fit using different methods. Loading the software End of explanation url = 'http://files.figshare.com/2182604/12d_New_30p_320mW_steer_3.hdf5' download_file(url, save_dir='./data') Explanation: Downloading the sample data file The complete example dataset can be downloaded from here. Here we download an 8-spot smFRET measurement file using the download_file in FRETBursts: End of explanation filename = "./data/12d_New_30p_320mW_steer_3.hdf5" import os assert os.path.exists(filename) Explanation: Selecting a data file End of explanation d = loader.photon_hdf5(filename) Explanation: Data load and Burst search Load and process the data: End of explanation d.leakage = 0.038 d.gamma = 0.43 Explanation: For convenience we can set the correction coefficients right away so that they will be used in the subsequent analysis. The correction coefficients are: leakage or bleed-through: leakage direct excitation: dir_ex (ALEX-only) gamma-factor gamma The direct excitation cannot be applied to non-ALEX (single-laser) smFRET measurements (like the current one). End of explanation d.calc_bg(bg.exp_fit, time_s=30, tail_min_us='auto', F_bg=1.7) d.burst_search(L=10, m=10, F=7) Explanation: NOTE: at any later moment, after burst search, a simple reassignment of these coefficient will update the burst data with the new correction values. Compute background and burst search: End of explanation mch_plot_bg(d) Explanation: Perform a background plot as a function of the channel: End of explanation dplot(d, hist_bg); Explanation: Let's take a look at the photon waiting times histograms and at the fitted background rates: End of explanation dplot(d, timetrace_bg); Explanation: Using dplot exactly in the same way as for the single-spot data has now generated 8 subplots, one for each channel. Let's plot a timetrace for the background to see is there are significant variations during the measurement: End of explanation dplot(d, timetrace) xlim(2, 3); ylim(-100, 100); Explanation: We can look at the timetrace of the photon stream (binning): End of explanation #%matplotlib qt #dplot(d, timetrace, scroll=True); #ylim(-100, 100) #%matplotlib inline Explanation: We can also open the same plot in an interactive window that allows scrolling (uncomment the following lines): End of explanation gamma = d.gamma gamma d.gamma = 1 ds = d.select_bursts(select_bursts.size, th1=30, gamma=1) dplot(ds, hist_fret); ds = d.select_bursts(select_bursts.size, th1=25, gamma=gamma, donor_ref=False) dplot(ds, hist_fret); ds = d.select_bursts(select_bursts.size, th1=25, gamma=gamma) dplot(ds, hist_fret, weights='size', gamma=gamma); dplot(ds, scatter_fret_nd_na); ylim(0,200); Explanation: Burst selection and FRET Selecting bursts by burst size (select_bursts.size) End of explanation ds.gamma = 1. 
bext.bursts_fitter(ds, weights=None) ds.E_fitter.fit_histogram(mfit.factory_two_gaussians(), verbose=False) Explanation: FRET Fitting 2-Gaussian mixture Let's fit the $E$ histogram with a 2-Gaussians model: End of explanation ds.E_fitter.params dplot(ds, hist_fret, weights=None, show_model=True, show_fit_stats=True, fit_from='p2_center'); Explanation: The fitted parameters are stored in a pandas DataFrame: End of explanation # bl.two_gaussian_fit_EM?? EM_results = ds.fit_E_two_gauss_EM(weights=None, gamma=1.) EM_results Explanation: Weighted Expectation Maximization The expectation maximization (EM) method is particularly suited to resolve population mixtures. Note that the EM algorithm does not fit the histogram but the $E$ distribution with no binning. FRETBursts include a weighted version of the EM algorithm that can take into account the burst size. The algorithm and benchmarks with the 2-Gaussian histogram fit are reported here. You can find the EM algorithm in fretbursts/fit/gaussian_fit.py or typing: bl.two_gaussian_fit_EM?? End of explanation ds.fit_E_name, ds.fit_E_res Explanation: The fitted parameters for each channel are stored in the fit_E_res attribute: End of explanation ds.fit_E_model Explanation: The model function is stored in: End of explanation AX = dplot(ds, hist_fret, weights=None) x = np.r_[-0.2: 1.2 : 0.01] for ich, (ax, E_fit) in enumerate(zip(AX.ravel(), EM_results)): ax.axvline(E_fit, ls='--', color='r') ax.plot(x, ds.fit_E_model(x, ds.fit_E_res[ich])) print('E mean: %.2f%% E delta: %.2f%%' %\ (EM_results.mean()*100, (EM_results.max() - EM_results.min())*100)) Explanation: Let's plot the histogram and the model with parameters from the EM fit: End of explanation import pandas as pd EM_results = pd.DataFrame(ds.fit_E_res, columns=['p1_center', 'p1_sigma', 'p2_center', 'p2_sigma', 'p1_amplitude']) EM_results * 100 ds.E_fitter.params * 100 Explanation: Comparing 2-Gaussian and EM fit To quickly compare the 2-Gaussians with the EM fit we convert the EM fit results in a DataFrame: End of explanation (ds.E_fitter.params - EM_results) * 100 Explanation: And we compute the difference between the two sets of parameters: End of explanation
5,007
Given the following text description, write Python code to implement the functionality described below step by step Description: <img src="images/logo.jpg" style="display Step1: <p style="text-align Step2: <p style="text-align Step3: <p style="text-align Step4: <p style="text-align Step5: <p style="text-align Step6: <p style="text-align Step7: <p style="text-align Step8: <p style="text-align Step9: <p style="text-align Step10: <p style="text-align Step11: <p style="text-align Step12: <p style="text-align Step52: <p style="text-align
Python Code: class Animal: pass class Mammal(Animal): pass class Bat(Mammal): pass class Rabbit(Mammal): pass Explanation: <img src="images/logo.jpg" style="display: block; margin-left: auto; margin-right: auto;" alt="לוגו של מיזם לימוד הפייתון. נחש מצויר בצבעי צהוב וכחול, הנע בין האותיות של שם הקורס: לומדים פייתון. הסלוגן המופיע מעל לשם הקורס הוא מיזם חינמי ללימוד תכנות בעברית."> <span style="text-align: right; direction: rtl; float: right;">ירושה – חלק 2</span> <span style="text-align: right; direction: rtl; float: right; clear: both;">הקדמה</span> <p style="text-align: right; direction: rtl; float: right; clear: both;"> בשיעור הקודם למדנו על <dfn>ירושה</dfn> – מנגנון תכנותי שמאפשר יצירת מחלקה על בסיס תכונותיה ופעולותיה של מחלקה אחרת.<br> סקרנו באילו מקרים נכון לבצע ירושה, ודיברנו על חילוקי הדעות ועל הסיבוכים האפשריים שירושה עלולה לגרום. </p> <p style="text-align: right; direction: rtl; float: right; clear: both;"> חקרנו את היכולת של תת־מחלקה להגדיר מחדש פעולות של מחלקת־העל וקראנו לכך "דריסה" של פעולה.<br> דיברנו גם על הפונקציה <var>super</var> שמאפשרת לנו לקרוא בקלות לפעולות במחלקת־העל. </p> <p style="text-align: right; direction: rtl; float: right; clear: both;"> ירושה היא כר פורה לשיח בין תיאורטיקנים של מדעי המחשב.<br> נכתבו עליה מילים רבות, והיא נושא מרכזי בדיונים על הנדסת תוכנה.<br> במחברת זו נעמיק ונסקור שימושים וטכניקות שהתפתחו מתוך רעיון הירושה במחלקות. </p> <span style="text-align: right; direction: rtl; float: right; clear: both;">שימושים נפוצים לירושה</span> <span style="text-align: right; direction: rtl; float: right; clear: both;">רמות ירושה מרובות</span> <p style="text-align: right; direction: rtl; float: right; clear: both;"> הרעיון הבסיסי ביותר בירושה הוא העובדה שיכולה להיות יותר מרמה אחת של ירושה.<br> קרי: אם מחלקה ב ירשה ממחלקה א, מחלקה ג יכולה לרשת ממחלקה ב. </p> <p style="text-align: right; direction: rtl; float: right; clear: both;"> בקטע הקוד הבא, לדוגמה, המחלקה יונק (<var>Mammal</var>) יורשת מהמחלקה "חיה" (<var>Animal</var>).<br> המחלקות עטלף (<var>Bat</var>) וארנב (<var>Rabbit</var>) יורשות מהמחלקה "יונק". </p> End of explanation Bat.mro() Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;"> במקרה כזה, המחלקות <var>Bat</var> ו־<var>Rabbit</var> ירשו הן את התכונות והפעולות של המחלקה <var>Mammal</var>, והן את אלו של <var>Animal</var>. </p> <figure> <img src="images/multilevel_inheritance.svg?v=1" style="max-width: 400px; margin-right: auto; margin-left: auto; text-align: center;" alt="בתמונה יש שתי תיבות. השמאלית עם הכותרת Song, ובה התכונות והפעולות השייכות למחלקה. אליה יוצא חץ מהתיבה הימנית, עם הכותרת Acrostic, ובה מופיעות אותן תכונות ופעולות בצבע אפור (כדי להמחיש שהן נלקחות מתוך המחלקה Song)."/> <figcaption style="margin-top: 2rem; text-align: center; direction: rtl;">דוגמה לרמות ירושה מרובות: המחלקות <var>Rabbit</var> ו־<var>Bat</var> יורשות ממחלקה שיורשת ממחלקה נוספת. </figcaption> </figure> <p style="text-align: right; direction: rtl; float: right; clear: both;"> אפשר לקבל את שרשרת מחלקות־העל של מחלקה מסוימת לפי סדרן בעזרת <code dir="ltr">class_name.mro()</code>: </p> End of explanation class Dog: def __init__(self, name, gender): self.name = name self.gender = gender def make_sound(self): print("Woof") def __str__(self): return f"I'm {self.name} the dog!" class Cow: def __init__(self, name, gender): self.name = name self.gender = gender def make_sound(self): print("Muuuuuuuuuu") def __str__(self): return f"I'm {self.name} the cow!" 
class Dove: def __init__(self, name, gender): self.name = name self.gender = gender def make_sound(self): print("Kukukuku!") def __str__(self): return f"I'm {self.name} the dove!" print(Dove("Rexi", "Female")) Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;"> בפנייה לפעולה כלשהי, תחפש פייתון את הפעולה במחלקה הראשונה שמופיעה ב־MRO.<br> אם הפעולה לא מופיעה שם, תיגש פייתון למחלקה שאחריה, כך עד שהיא תגיע ל־<var>object</var> שתמיד יהיה בראש השרשרת.<br> אם הפעולה לא קיימת באף אחת מהמחלקות שמופיעות ב־MRO, פייתון תזרוק <var>NameError</var>. </p> <p style="text-align: right; direction: rtl; float: right; clear: both;"> מבחינה הנדסית, מומלץ להימנע ככל האפשר מירושה מרובת רמות כשאין בכך צורך ממשי.<br> ירושה מרובת רמות תגדיל את הסיכוי לתסמונת מחלקת־העל השברירית, תקשה על בדיקת התוכנית ותיצור בעיות תחזוקה בעתיד. </p> <span style="text-align: right; direction: rtl; float: right; clear: both;">מחלקה מופשטת</span> <p style="text-align: right; direction: rtl; float: right; clear: both;"> כל חיה משמיעה צליל האופייני לה: כלב נובח, פרה גועה ויונה הומה.<br> נממש מחלקה עבור כל חיה: </p> End of explanation class Animal: def __init__(self, name, gender): self.name = name self.gender = gender class Dog(Animal): def make_sound(self): print("Woof") def __str__(self): return f"I'm {self.name} the dog!" class Cow(Animal): def make_sound(self): print("Muuuuuuuuuu") def __str__(self): return f"I'm {self.name} the cow!" class Dove(Animal): def make_sound(self): print("Kukukuku!") def __str__(self): return f"I'm {self.name} the dove!" print(Dove("Rexi", "Female")) Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;"> נשים לב שצורת כל המחלקות דומה – מקרה קלאסי לירושה: </p> End of explanation class Animal: def __init__(self, name, gender): self.name = name self.gender = gender def make_sound(self): pass def __str__(self): pass class Dog(Animal): def make_sound(self): print("Woof") def __str__(self): return f"I'm {self.name} the dog!" class Cow(Animal): def make_sound(self): print("Muuuuuuuuuu") def __str__(self): return f"I'm {self.name} the cow!" class Dove(Animal): def make_sound(self): print("Kukukuku!") def __str__(self): return f"I'm {self.name} the dove!" print(Dove("Rexi", "Female")) Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;"> לעיתים קרובות אנחנו רוצים לממש קבוצת מחלקות שיש להן את אותן תכונות ופעולות – בדיוק כמו במקרה של <var>Dog</var>, <var>Cow</var> ו־<var>Dove</var>.<br> במקרה כזה נפנה באופן טבעי לירושה, שבה מחלקת־העל תכיל את התכונות והפעולות המשותפות לכל המחלקות. </p> <p style="text-align: right; direction: rtl; float: right; clear: both;"> החיסרון במחלקת <var>Animal</var> הוא שכעת אפשר ליצור ישירות ממנה מופעים.<br> זה לא מה שהתכוונו שיקרה. המחלקה הזו קיימת רק כדי לייצג רעיון מופשט של חיה, ולאפשר ירושת תכונות ופעולות ממקור אחד.<br> מטרת התוכנית היא לאפשר לבעלי חיים להשמיע קול, ולכן אין משמעות ביצירת מופע מהמחלקה <var>Animal</var> – שהרי ל"חיה" כרעיון מופשט אין קול.<br> </p> <p style="text-align: right; direction: rtl; float: right; clear: both;"> חיסרון נוסף הוא שמחלקות יכולות לרשת ממחלקת <var>Animal</var> מבלי לממש את הפעולה <var>make_sound</var>.<br> אם היינו מתכנתים פיסת קוד כללית שמטפלת בחיות, ייתכן שזה היה תקין, אבל לא זה המקרה בקוד שלמעלה.<br> מטרת התוכנה שלנו הייתה מלכתחילה לייצג קולות של חיות, ומחלקה שיורשת מ־<var>Animal</var> ולא מממשת את <var>make_sound</var> היא פתח לבאגים בעתיד. 
</p> <p style="text-align: right; direction: rtl; float: right; clear: both;"> הפתרון לשתי הבעיות שהוצגו כאן נקרא בהנדסת תוכנה "<dfn>מחלקה מופשטת</dfn>" (<dfn>Abstract Class</dfn>).<br> זהו מצב שבו אנחנו משתמשים במחלקת־על ל־3 צרכים: </p> <ol style="text-align: right; direction: rtl; float: right; clear: both;"> <li>איגוד של תתי־מחלקות והורשת תכונות ופעולות לכולן (ירושה קלאסית).</li> <li>אכיפה שכל תתי־המחלקות יממשו פעולות שהגדרנו במחלקת־העל כפעולות שחייבים לממש.</li> <li>אכיפה שתאסור יצירת מופעים ממחלקת־העל המופשטת. יצירת מופעים תתאפשר רק מתתי־המחלקות שיורשות אותה.</li> </ol> <p style="text-align: right; direction: rtl; float: right; clear: both;"> נהוג לכתוב במחלקת־העל המופשטת את כותרות הפעולות שעל תת־המחלקות שיורשות ממנה לממש: </p> End of explanation from abc import ABC, abstractmethod class Animal(ABC): @abstractmethod def __init__(self, name, gender): self.name = name self.gender = gender @abstractmethod def make_sound(self): pass @abstractmethod def __str__(self): pass class Dog(Animal): def __init__(self, **kwargs): super().__init__(**kwargs) def make_sound(self): print("Woof") def __str__(self): return f"I'm {self.name} the dog!" class Cow(Animal): def __init__(self, **kwargs): super().__init__(**kwargs) def make_sound(self): print("Muuuuuuuuuu") def __str__(self): return f"I'm {self.name} the cow!" class Dove(Animal): def __init__(self, **kwargs): super().__init__(**kwargs) def make_sound(self): print("Kukukuku!") def __str__(self): return f"I'm {self.name} the dove!" print(Dove(name="Rexi", gender="Female")) Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;"> הרעיון של מחלקה מופשטת כה נפוץ, שהמודול הפייתוני <var>abc</var> (קיצור של Abstract Base Class) מאפשר לנו להגדיר מחלקה כמופשטת.<br> נציץ ב<a href="https://docs.python.org/3/library/abc.html">תיעוד</a> של המודול וננסה לעקוב אחריו, למרות כל השטרודלים המוזרים שיש שם: </p> <ul style="text-align: right; direction: rtl; float: right; clear: both;"> <li>המחלקה המופשטת שלנו תבצע ירושה מהמחלקה <var>ABC</var> שבמודול <var>abc</var>.</li> <li>מעל כל פעולה מופשטת (כזו שמשמשת רק לירושה ולא עומדת בפני עצמה) נוסיף <code dir="ltr">@abstractmethod</code>.</li> </ul> End of explanation Animal(name="Rexi", gender="Female") Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;"> חדי העין ישימו לב שהחזרנו את פעולת האתחול <code dir="ltr">__init__</code> לכל תתי־המחלקות.<br> זה קרה כיוון שהגדרנו את <code dir="ltr">__init__</code> כפעולה מופשטת במחלקת <var>Animal</var>.<br> להגדרה של פעולה כמופשטת שתי השלכות מיידיות: </p> <ul style="text-align: right; direction: rtl; float: right; clear: both;"> <li>תתי־המחלקות שיורשות מהמחלקה המופשטת, חייבות לממש את כל הפעולות שמוגדרות במחלקת־העל כמופשטות.</li> <li>גישה ישירה לפעולה המופשטת במחלקה שבה היא הוגדרה במקור כמופשטת, תביא לזריקת שגיאה.</li> </ul> <p style="text-align: right; direction: rtl; float: right; clear: both;"> עכשיו כשנרצה ליצור מופע מ־<var>Animal</var>, נגלה שזה בלתי אפשרי, כיוון ש־<code dir="ltr">Animal.__init__</code> מוגדרת כמופשטת: </p> End of explanation class Fox(Animal): def __init__(self, **kwargs): super().__init__(**kwargs) # What does the fox say? # (there is no make_sound method) def __str__(self): return f"I'm {self.name} the fox!" 
Fox(name="Ylvis", gender="Male") Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;"> אם ננסה ליצור תת־מחלקה שיורשת מ־<var>Animal</var> ולא מממשת את אחת הפעולות המופשטות שלה, נגלה שלא נוכל ליצור ממנה מופעים.<br> פייתון תזרוק שגיאה שאחת הפעולות המופשטות לא מוגדרת בתת־המחלקה: </p> End of explanation class Weapon: def __init__(self, strength, **kwargs): super().__init__(**kwargs) self.strength = strength def attack(self, enemy): enemy.decrease_health_points(self.strength) class Vehicle: def __init__(self, max_speed, **kwargs): super().__init__(**kwargs) self.max_speed = max_speed def estimate_arrival_time(self, distance): return distance / (self.max_speed * 80 / 100) class Tank(Weapon, Vehicle): def __init__(self, name, **kwargs): super().__init__(**kwargs) self.name = name Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;"> עוד דבר מעניין שכדאי לשים לב אליו הוא השימוש הכבד ב־<code dir="ltr">**kwargs</code>.<br> כיוון שהפכנו את <code dir="ltr">__init__</code> למופשטת, אנחנו חייבים לממש אותה בכל תתי־המחלקות שיורשות מ־<var>Animal</var>.<br> למרות זאת, ה־<code dir="ltr">__init__</code> "המעניינת" שעושה השמות לתוך תכונות המופע היא זו של מחלקת־העל <var>Animal</var>,<br> זו שמקבלת את הפרמטרים <var>name</var> ו־<var>gender</var> ומשנה לפיהם את מצב המופע. </p> <p style="text-align: right; direction: rtl; float: right; clear: both;"> ביצירת מופע של אחת מהחיות, נרצה להעביר את הפרמטרים <var>name</var> ו־<var>gender</var> שמיועדים ל־<code dir="ltr">Animal.__init__</code>.<br> אבל ביצירת מופע של יונה, של פרה או של כלב אנחנו נקרא בפועל ל־<code dir="ltr">__init__</code> שמימשו תתי־המחלקות.<br> בשלב הזה, תתי־המחלקות צריכות למצוא דרך לקבל את הפרמטרים הרלוונטיים ולהעביר אותם למחלקת־העל שעושה השמות ל־<var>self</var>.<br> כדי להעביר את כל הפרמטרים שמחלקת־העל צריכה, גם אם חתימת הפעולה <code dir="ltr">Animal.__init__</code> תשתנה בעתיד, אנחנו משתמשים ב־<code dir="ltr">**kwargs</code>. </p> <p style="text-align: right; direction: rtl; float: right; clear: both;"> השימוש במחלקות מופשטות נפוץ בתכנות כדי לאפשר הרחבה נוחה של התוכנה.<br> מערכות שמאפשרות למתכנתים להרחיב את יכולותיהן בעזרת תוספים, לדוגמה, ישתמשו במחלקות מופשטות כדי להגדיר למתכנת דרך להתממשק עם הקוד של התוכנה. </p> <span style="text-align: right; direction: rtl; float: right; clear: both;">תרגיל ביניים: Friendliness Pellets</span> <p style="text-align: right; direction: rtl; float: right; clear: both;"> בחנות הפרחים של מושניק מוכרים זרי פרחים.<br> נמכר עם אגרטל או בלעדיו, ויש בו 3–30 פרחים מסוגים שונים: ורדים, סייפנים וסחלבים.<br> לכל אחד מהפרחים צבע שונה. </p> <p style="text-align: right; direction: rtl; float: right; clear: both;"> המחיר של סייפן הוא 4 ש"ח ושל סחלב 10 ש"ח.<br> המחיר של ורד נקבע לפי הצבע שלו: ורד לבן עולה 5 ש"ח, וורד אדום עולה 6 ש"ח.<br> עבור זר עם אגרטל, על הלקוח להוסיף 20 ש"ח. </p> <p style="text-align: right; direction: rtl; float: right; clear: both;"> כדי למשוך לקוחות חדשים, מדי פעם מושניק נותן הנחה על הזרים.<br> לכל אחת מההנחות מושניק מצרף הערה שמסבירה מה הסיבה שבגינה ניתנה ההנחה.<br> ישנם שני סוגים של הנחות בחנות: הנחה באחוזים והנחה שקלית.<br> לכל זר יכולות להיות כמה הנחות, אשר יחושבו לפי סדר צירופן. </p> <p style="text-align: right; direction: rtl; float: right; clear: both;"> לדוגמה, אם הזר עלה 200 ש"ח ומושניק החליט לתת הנחה של 10 אחוזים, הזר יעלה כעת 180 ש"ח.<br> אם מושניק החליט לתת הנחה נוספת על הזר, הפעם של 30 ש"ח, מחירו של הזר יהיה כעת 150 ש"ח. 
</p> <p style="text-align: right; direction: rtl; float: right; clear: both;"> אפשרו ללקוחות שרוצים חשבונית עבור רכישתם לקבל פירוט מודפס של חישוב המחיר של הזר. </p> <span style="text-align: right; direction: rtl; float: right; clear: both;">ירושה מרובה</span> <p style="text-align: right; direction: rtl; float: right; clear: both;"> ישר ולעניין – יש לי חדשות משוגעות עבורכם: כל מחלקה יכולה לרשת מיותר ממחלקה אחת! </p> <p style="text-align: right; direction: rtl; float: right; clear: both;"> ניצור מחלקה המייצגת כלי נשק, ומחלקה אחרת המייצגת כלי רכב.<br> מחלקת כלי הנשק תכלול את כוח כלי הנשק (<var>strength</var>) ופעולת תקיפה (<var>attack</var>).<br> מחלקת כלי הרכב תכלול את המהירות המרבית של כלי הרכב (<var>max_speed</var>) ופעולה של הערכת זמן הגעה משוער (<var>estimate_arrival_time</var>). </p> <p style="text-align: right; direction: rtl; float: right; clear: both;"> אם נרצה ליצור מחלקת טנק, לדוגמה, ייתכן שנרצה לרשת גם ממחלקת כלי הנשק וגם ממחלקת כלי הרכב.<br> פייתון מאפשרת לנו לעשות זאת בקלות יחסית: </p> End of explanation tank = Tank(name="Merkava Mk 4M Windbreaker", max_speed=64, strength=17.7) distance = 101 # km to Eilat # לטנק יש תכונות של טנק, של כלי נשק ושל כלי רכב print(f"Tank name: {tank.name}") print(f"Hours from here to Eilat: {tank.estimate_arrival_time(distance)}") print(f"Weapon strength: {tank.strength}") Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;"> בדוגמה שלמעלה מחלקת <var>Tank</var> יורשת את התכונות ואת הפעולות, הן של מחלקת <var>Vehicle</var> והן של מחלקת <var>Weapon</var>.<br> כל מה שהיינו צריכים לעשות זה לציין בסוגריים שאחרי המחלקה <var>Tank</var> את כל המחלקות שמהן אנחנו רוצים לרשת, כשהן מופרדות בפסיק זו מזו.<br> ניצור טנק לדוגמה, ונראה איך הוא קיבל את התכונות של כל מחלקות־העל שמהן הוא ירש: </p> End of explanation Tank.mro() Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;"> לפני שנסביר בהרחבה איך הכול עובד, נעצור רגע כדי להגיד שירושה מרובה היא נושא שיחה נפיץ.<br> בגלל הסיבוכיות והשכבות הנוספות שהוא מוסיף לכל מופע, תמצאו מתכנתים רבים <a href="https://softwareengineering.stackexchange.com/q/218458/224112">שמתנגדים בתוקף לרעיון הירושה המרובה</a>.<br> </p> <p style="text-align: right; direction: rtl; float: right; clear: both;"> נציג בזריזות את אחת הבעיות הקלאסיות שנובעות מירושה מרובה – <a href="https://en.wikipedia.org/wiki/Multiple_inheritance#The_diamond_problem">בעיית היהלום</a>.<br> בבעיה זו מוצגת תת־מחלקה שיורשת מ־2 מחלקות־על, שהן עצמן תת־מחלקות של מחלקה נוספת.<br> לדוגמה: מחלקת "כפתור" יורשת ממחלקת "לחיץ" וממחלקת "מלבן", שיורשות ממחלקת "אובייקט". </p> <figure> <img src="images/diamond_problem.svg?v=0" style="max-width: 450px; margin-right: auto; margin-left: auto; text-align: center;" alt="עץ ירושה בצורת מעוין. בחלק העליון מוצגת המחלקה 'אובייקט', מתוכה יוצאים שני חצים למחלקות 'לחיץ' ו'מלבן' שנמצאות תחתיה. משתיהן יוצאים חצים לתת־המחלקה 'כפתור' שיורשת מהן וממוקמת הכי נמוך בתמונה."/> <figcaption style="margin-top: 2rem; text-align: center; direction: rtl;"> עץ ירושה בצורת יהלום המדגים את הבעיה המתוארת. 
</figcaption> </figure> <p style="text-align: right; direction: rtl; float: right; clear: both;"> אם גם "לחיץ" וגם "מלבן" מימשו את פעולת <code>__str__</code> אבל מחלקת "כפתור" לא, באיזו גרסה של <code>__str__</code> מחלקת "כפתור" צריכה להשתמש?<br> ומה בנוגע למצב שבו גם "לחיץ" וגם "אובייקט" מימשו את הפעולה?<br> </p> <p style="text-align: right; direction: rtl; float: right; clear: both;"> כפי שוודאי הצלחתם להבין, ירושה מרובה יכולה להכניס אותנו להרבה "פינות" שלא מוגדרות היטב.<br> מהסיבה הזו פייתון החליטה לעשות מעשה, וליישר בכוח את העץ.<br> נביט בשרשרת הירושה של מחלקת <var>Tank</var>: </p> End of explanation class Clickable: def __init__(self, **kwargs): super().__init__(**kwargs) self.clicks = 0 def click(self): self.clicks = self.clicks + 1 class LargeRectangle: def __init__(self, width=10, height=5, **kwargs): super().__init__(**kwargs) self.width = width self.height = height def size(self): return self.width * self.height class Button(Clickable, LargeRectangle): pass Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;"> אף על פי שטכנית היינו מצפים לראות עץ שבו <var>Weapon</var> ו־<var>Vehicle</var> נמצאות באותה רמה ומצביעות על <var>Tank</var>,<br> בפועל פייתון "משטחת" את עץ הירושה כך ש־<var>Tank</var> יורשת מ־<var>Weapon</var> שיורשת מ־<var>Vehicle</var>.<br> הסדר של המחלקות לאחר השיטוח נקבע לפי אלגוריתם שנקרא <a href="https://en.wikipedia.org/wiki/C3_linearization">C3 Linearization</a>, אבל זה לא משהו שבאמת חשוב לדעת בשלב הזה. </p> <figure> <img src="images/multiple_inheritance.svg?v=4" style="max-width: 900px; margin-right: auto; margin-left: auto; text-align: center;" alt="בתמונה יש שני צדדים המופרדים בקו מקווקו. כותרת הצד הימני היא 'ירושה מרובה לפי הספר', ובה רואים את מחלקות Weapon ו־Vehicle באותו גובה, כששתיהן מצביעות ישירות למחלקת Tank. כותרת הצד השמאלי של התמונה היא 'ירושה בפייתון', ובה רואים את מחלקת Tank יורשת ממחלקת Vehicle שיורשת ממחלקת Weapon."/> <figcaption style="margin-top: 2rem; text-align: center; direction: rtl;">דוגמה להשטחה שפייתון עושה לירושה מרובה. </figcaption> </figure> <span style="text-align: right; direction: rtl; float: right; clear: both;">Mixin־ים</span> <p style="text-align: right; direction: rtl; float: right; clear: both;"> רעיון נפוץ המבוסס על ירושה מרובה הוא Mixin.<br> <dfn>Mixin</dfn> היא מחלקה שאין לה תכלית בפני עצמה, והיא קיימת כדי "לתרום" תכונות ופעולות למחלקה שתירש אותה.<br> לרוב נשתמש בשתי Mixins לפחות, כדי ליצור מהן מחלקות מורכבות יותר, שכוללות את הפעולות והתכונות של כל ה־Mixins.<br> לדוגמה: ניצור מחלקת "כפתור" (<var>Button</var>) שיורשת ממחלקת "מלבן גדול" (<var>LargeRectangle</var>) וממחלקת "לחיץ" (<var>Clickable</var>). </p> End of explanation buy_now = Button() buy_now.click() print(f"Button size (from LargeRectangle class): {buy_now.size()}") print(f"Button clicks (from Clickable class): {buy_now.clicks}") Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;"> ניצור כפתור ונראה שהוא אכן קיבל את התכונות והפעולות משתי המחלקות: </p> End of explanation from abc import ABC, abstractmethod class Color: Describe the game pieces' color BLACK = 0 WHITE = 1 def enemy_of(color): Return the opponent color. if color == Color.BLACK: return Color.WHITE return Color.BLACK class Board: Create and maintain the game board. # Some functions below will not work well with altered board size. BOARD_SIZE = (8, 8) def __init__(self): self.reset() def get_square(self, row, col): Return the game piece by its position on board. 
If there is no piece in this position, or if the position does not exist - return False. if self.is_valid_square(row, col): return self.board[row][col] def set_square(self, row, col, piece): Place piece on board. self.board[row][col] = piece def is_valid_square(self, row, column): Return True if square in board bounds, False otherwise. row_exists = row in range(self.BOARD_SIZE[0]) column_exists = column in range(self.BOARD_SIZE[1]) return row_exists and column_exists def is_empty_square(self, square): Return True if square is unoccupied, False otherwise. An empty square is a square which has no game piece on it. If the square is out of board bounds, we consider it empty. return self.get_square(*square) is None def _generate_back_row(self, color): Place player's first row pieces on board. row_by_color = {Color.BLACK: 0, Color.WHITE: self.BOARD_SIZE[0] - 1} row = row_by_color[color] order = (Rook, Knight, Bishop, Queen, King, Bishop, Knight, Rook) params = {'color': color, 'row': row} return [order[i](col=i, **params) for i in range(self.BOARD_SIZE[0])] def _generate_pawns_row(self, color): Place player's pawns row on board. row_by_color = {Color.BLACK: 1, Color.WHITE: self.BOARD_SIZE[0] - 2} row = row_by_color[color] params = {'color': color, 'row': row} return [Pawn(col=i, **params) for i in range(self.BOARD_SIZE[0])] def get_pieces(self, color=None): Yield the player's pieces. If color is unspecified (None), yield all pieces on board. for row in self.board: for square in row: if square is not None and (color in (None, square.color)): yield square def reset(self): Set traditional board and pieces in initial positions. self.board = [ self._generate_back_row(Color.BLACK), self._generate_pawns_row(Color.BLACK), [None] * self.BOARD_SIZE[0], [None] * self.BOARD_SIZE[0], [None] * self.BOARD_SIZE[0], [None] * self.BOARD_SIZE[0], self._generate_pawns_row(Color.WHITE), self._generate_back_row(Color.WHITE), ] def move(self, source, destination): Move a piece from its place to a designated location. piece = self.get_square(*source) return piece.move(board=self, destination=destination) def __str__(self): Return current state of the board for display purposes. printable = "" for row in self.board: for col in row: if col is None: printable = printable + " ▭ " else: printable = printable + f" {col} " printable = printable + '\n' return printable class Piece(ABC): Represent a general chess piece. def __init__(self, color, row, col, **kwargs): super().__init__(**kwargs) self.color = color self.row = row self.col = col self.moved = False self.directions = set() def is_possible_target(self, board, target): Return True if the move is legal, False otherwise. A move is considered legal if the piece can move from its current location to the target location. is_target_valid = board.is_valid_square(*target) is_empty_square = board.is_empty_square(target) is_hitting_enemy = self.is_enemy(board.get_square(*target)) return is_target_valid and (is_empty_square or is_hitting_enemy) @abstractmethod def get_valid_moves(self, board): Yield the valid target positions the piece can travel to. pass def get_position(self): Return piece current position. return self.row, self.col def is_enemy(self, piece): Return if the piece belongs to the opponent. if piece is None: return False return piece.color != self.color def move(self, board, destination): Change piece position on the board. Return True if the piece's position has successfully changed. Return False otherwise. 
if not self.is_possible_target(board, destination): return False if destination not in self.get_valid_moves(board): return False board.set_square(*self.get_position(), None) board.set_square(*destination, self) self.row, self.col = destination self.moved = True return True def get_squares_threatens(self, board): Get all the squares which this piece threatens. This is usually just where the piece can go, but sometimes the piece threat squares which are different than the squares it can travel to. for move in self.get_valid_moves(board): yield move @abstractmethod def __str__(self): pass class WalksDiagonallyMixin: Define diagonal movement on the board. This mixin should be used only in a Piece subclasses. Its purpose is to add possible movement directions to a specific kind of game piece. def __init__(self, **kwargs): super().__init__(**kwargs) self.directions.update({ (-1, -1), (1, -1), (-1, 1), (1, 1), }) class WalksStraightMixin: Define straight movement on the board. This mixin should be used only in a Piece subclasses. Its purpose is to add possible movement directions to a specific kind of game piece. def __init__(self, **kwargs): super().__init__(**kwargs) self.directions.update({ (0, -1), (-1, 0), (1, 0), (0, 1), }) class WalksMultipleStepsMixin: Define a same-direction, multiple-step movement on the board. This mixin should be used only on a Piece subclasses. Its purpose is to allow a piece to travel long distances based on a single-step pattern. For example, the bishop can move diagonally up to 7 squares per turn (in an orthodox chess game). This mixin allows it if the `directions` property is set to the 4 possible diagonal steps. It does so by overriding the get_valid_moves method and uses the instance `directions` property to determine the possible step for the piece. def get_valid_moves(self, board, **kwargs): Yield the valid target positions the piece can travel to. for row_change, col_change in self.directions: steps = 1 stop_searching_in_this_direction = False while not stop_searching_in_this_direction: new_row = self.row + row_change * steps new_col = self.col + col_change * steps target = (new_row, new_col) is_valid_target = self.is_possible_target(board, target) if is_valid_target: yield target steps = steps + 1 is_hit_enemy = self.is_enemy(board.get_square(*target)) if not is_valid_target or (is_valid_target and is_hit_enemy): stop_searching_in_this_direction = True class Bishop(WalksDiagonallyMixin, WalksMultipleStepsMixin, Piece): A classic Bishop chess piece. The bishop moves any number of blank squares diagonally. def __str__(self): if self.color == Color.WHITE: return '♗' return '♝' class Rook(WalksStraightMixin, WalksMultipleStepsMixin, Piece): A classic Rook chess piece. The rook moves any number of blank squares straight. def __str__(self): if self.color == Color.WHITE: return '♖' return '♜' class Queen( WalksStraightMixin, WalksDiagonallyMixin, WalksMultipleStepsMixin, Piece, ): A classic Queen chess piece. The queen moves any number of blank squares straight or diagonally. def __str__(self): if self.color == Color.WHITE: return '♕' return '♛' class Pawn(Piece): A classic Pawn chess piece. A pawn moves straight forward one square, if that square is empty. If it has not yet moved, a pawn also has the option of moving two squares straight forward, provided both squares are empty. Pawns can only move forward. A pawn can capture an enemy piece on either of the two squares diagonally in front of the pawn. 
It cannot move to those squares if they are empty, nor to capture an enemy in front of it. A pawn can also be involved in en-passant or in promotion, which is yet to be implemented on this version of the game. DIRECTION_BY_COLOR = {Color.BLACK: 1, Color.WHITE: -1} def __init__(self, **kwargs): super().__init__(**kwargs) self.forward = self.DIRECTION_BY_COLOR[self.color] def _get_regular_walk(self): Return position after a single step forward. return self.row + self.forward, self.col def _get_double_walk(self): Return position after a double step forward. src_row, src_col = self.get_position() return (src_row + self.forward * 2, src_col) def _get_diagonal_walks(self): Returns position after a diagonal move. This only happens when hitting an enemy. It could also happen on "en-passant", which is unimplemented feature for now. src_row, src_col = self.get_position() return ( (src_row + self.forward, src_col + 1), (src_row + self.forward, src_col - 1), ) def is_possible_target(self, board, target): Return True if the Pawn's move is legal, False otherwise. This one is a bit more complicated than the usual case. Pawns can only move forward. They also can move two ranks forward if they have yet to move. Not like the other pieces, pawns can't hit the enemy using their regular movement. They have to hit it diagonally, and can't take a step forward if the enemy is just in front of them. is_valid_move = board.is_valid_square(*target) is_step_forward = ( board.is_empty_square(target) and target == self._get_regular_walk() ) is_valid_double_step_forward = ( board.is_empty_square(target) and not self.moved and target == self._get_double_walk() and self.is_possible_target(board, self._get_regular_walk()) ) is_hitting_enemy = ( self.is_enemy(board.get_square(*target)) and target in self._get_diagonal_walks() ) return is_valid_move and ( is_step_forward or is_valid_double_step_forward or is_hitting_enemy ) def get_squares_threatens(self, board, **kwargs): Get all the squares which the pawn can attack. for square in self._get_diagonal_walks(): if board.is_valid_square(*square): yield square def get_valid_moves(self, board, **kwargs): Yield the valid target positions the piece can travel to. The Pawn case is a special one - see is_possible_target's documentation for further details. targets = ( self._get_regular_walk(), self._get_double_walk(), *self._get_diagonal_walks(), ) for target in targets: if self.is_possible_target(board, target): yield target def __str__(self): if self.color == Color.WHITE: return '♙' return '♟' class Knight(Piece): A classic Knight chess piece. Can travel to the nearest square not on the same rank, file, or diagonal. It is not blocked by other pieces: it jumps to the new location. def __init__(self, **kwargs): super().__init__(**kwargs) self.directions.update({ (-2, 1), (-1, 2), (1, 2), (2, 1), # Upper part (-2, -1), (-1, -2), (1, -2), (2, -1), # Lower part }) def get_valid_moves(self, board, **kwargs): super().get_valid_moves(board, **kwargs) for add_row, add_col in self.directions: target = (add_row + self.row, add_col + self.col) if self.is_possible_target(board, target): yield target def __str__(self): if self.color == Color.WHITE: return '♘' return '♞' class King(WalksStraightMixin, WalksDiagonallyMixin, Piece): A classic King chess piece. Can travel one step, either diagonally or straight. It cannot travel to places where he will be threatened. def _get_threatened_squares(self, board): Yield positions in which the king will be captured. 
enemy = Color.enemy_of(self.color) for piece in board.get_pieces(color=enemy): for move in piece.get_squares_threatens(board): yield move def is_possible_target(self, board, target): Return True if the king's move is legal, False otherwise. The king should not move to a square that the enemy threatens. is_regular_valid = super().is_possible_target(board, target) threatened_squares = self._get_threatened_squares(board) return is_regular_valid and target not in threatened_squares def get_valid_moves(self, board, **kwargs): super().get_valid_moves(board, **kwargs) for add_row, add_col in self.directions: target = (add_row + self.row, add_col + self.col) if self.is_possible_target(board, target): yield target def get_squares_threatens(self, board): Get all the squares that this piece may move to. This method is especially useful to see if other kings fall into this piece's territory. To prevent recursion, this function returns all squares we threat even if we can't go there. For example, take a scenario where the White Bishop is in B2, and the Black King is in B3. The White King is in D3, but it is allowed to go into C3 to threaten the black king if the white bishop protects it. for direction in self.directions: row, col = self.get_position() row = row + direction[0] col = col + direction[1] if board.is_valid_square(row, col): yield (row, col) def __str__(self): if self.color == Color.WHITE: return '♔' return '♚' Explanation: <p style="text-align: right; direction: rtl; float: right; clear: both;"> בחלק משפות התכנות האחרות ל־Mixins יש תחביר מיוחד, אולם בפייתון משתמשים פשוט בירושה מרובה.<br> מהסיבה הזו, בין היתר, ההבדל בין Mixins לבין ירושה מרובה עלול להיראות מעורפל מעט.<br> הדגש ב־Mixins הוא שהן מוגדרות כארגז תכונות או פעולות שאפשר לרשת, והן לא מיועדות לכך שיצרו ישירות מהן מופעים.<br> אפשר להגיד שכל מקרה של Mixins משתמש בירושה מרובה, אך לא כל מקרה של ירושה מרובה כולל Mixins. </p> <span style="text-align: right; direction: rtl; float: right; clear: both;">מונחים</span> <dl style="text-align: right; direction: rtl; float: right; clear: both;"> <dt>מחלקה מופשטת (Abstract Class)</dt> <dd> מחלקה שלא נהוג ליצור ממנה מופעים.<br> המחלקה המופשטת תגדיר פעולות ללא מימוש שנקראות "פעולות מופשטות".<br> המחלקות שיורשות מהמחלקה המופשטת יממשו את הפעולות הללו.<br> </dd> <dt>ירושה מרובה (Multiple Inheritance)</dt> <dd> ירושה שמחלקה מבצעת ממחלקות רבות, במטרה לקבל את תכונותיהן ופעולותיהן של כמה מחלקות־על. </dd> <dt>Mixin</dt> <dd> מחלקה שמטרתה לספק תכונות ופעולות למחלקות היורשות אותה, ואין לה שימוש כשהיא עומדת בפני עצמה.<br> לרוב נעשה שימוש ביותר מ־Mixin אחת בעזרת ירושה מרובה. </dd> </dl> <span style="text-align: right; direction: rtl; float: right; clear: both;">תרגיל לדוגמה</span> <span style="text-align: right; direction: rtl; float: right; clear: both;">הקדמה</span> <p style="text-align: right; direction: rtl; float: right; clear: both;"> שחמט הוא משחק שבו שני שחקנים מתמודדים זה מול זה על לכידת מלכו של היריב.<br> לכל אחד מהשחקנים יש צבא שמורכב מכלי משחק, ועליהם לעשות בצבא שימוש מושכל כדי להשיג יתרון על פני השחקן השני.<br> כדי להבדיל בין כלי המשחק של השחקנים, כלי המשחק של שחקן אחד לבנים ואילו של השחקן השני שחורים. </p> <p style="text-align: right; direction: rtl; float: right; clear: both;"> ממשו לוח של משחק שחמט בגודל 8 על 8 משבצות.<br> </p> <figure> <img src="images/chessboard.svg" style="max-width: 200px; margin-right: auto; margin-left: auto; text-align: center;" alt="תמונה של לוח שחמט בגודל 8 על 8, ללא כלים עליו. הריבוע השמאלי עליון הוא לבן, והריבועים הסמוכים אליו צבועים בחום. 
השורה העליונה ביותר ממוספרת 8, זו שתחתיה 7 וכן הלאה. השורה התחתונה ביותר ממוספרת 1. הטור השמאלי ביותר מסומן באות א, זה שמימינו מסומן באות ב, וכך הלאה. הטור הימני ביותר מסומן באות ח."/> <figcaption style="margin-top: 2rem; text-align: center; direction: rtl;">לוח שחמט סטנדרטי ללא כלים עליו.<br>מקור: ויקיפדיה. </figcaption> </figure> <p style="text-align: right; direction: rtl; float: right; clear: both;"> בשחמט 6 סוגים של כלי משחק: רגלי, צריח, פרש, רץ, מלך ומלכה.<br> בפתיחת המשחק, שורות 1 ו־2 מלאות בכליו של השחקן הלבן ושורות 7 ו־8 מלאות בכליו של השחקן השחור.<br> הכלים מסודרים על הלוח כדלקמן: </p> <ul style="text-align: right; direction: rtl; float: right; clear: both;"> <li>על כל המשבצות בשורה 2 ובשורה 7 מונחים רגלים.</li> <li>בשורות 1 ו־8, בטורים א ו־ח מונחים צריחים.</li> <li>בשורות 1 ו־8, בטורים ב ו־ז מונחים פרשים.</li> <li>בשורות 1 ו־8, בטורים ג ו־ו מונחים רצים.</li> <li>בשורות 1 ו־8, בטור ד מונחת המלכה.</li> <li>בשורות 1 ו־8, בטור ה מונח המלך.</li> </ul> <p style="text-align: right; direction: rtl; float: right; clear: both;"> <a href="https://en.wikipedia.org/wiki/Rules_of_chess#Movement">חוקי התנועה</a> של הכלים מפורטים להלן: </p> <ul style="text-align: right; direction: rtl; float: right; clear: both;"> <li>מלכה יכולה לנוע מספר בלתי מוגבל של משבצות לכל כיוון, לצדדים או באלכסון.</li> <li>צריח יכול לנוע מספר בלתי מוגבל של צעדים לצדדים: אנכי או אופקי.</li> <li>רץ יכול לנוע מספר בלתי מוגבל של צעדים באלכסון.</li> <li>פרש יכול לנוע שני צעדים אופקית ואז עוד צעד אנכית, או שני צעדים אנכית ואז עוד צעד אופקית.</li> <li>רגלי יכול לנוע צעד אחד לכיוון היריב. אם הוא עדיין לא נע ממקומו ההתחלתי, הוא יכול לבחור לעשות שני צעדים לכיוון היריב.</li> <li>מלך יכול לנוע לכל משבצת הסמוכה לו, כל עוד כלי היריב לא יכולים להגיע למשבצת הזו בתורם.</li> </ul> <p style="text-align: right; direction: rtl; float: right; clear: both;"> כלי בתנועה לא יכול "לדלג" מעל כלי משחק אחרים – אם כלי נמצא בדרכו, הכלי שבתנועה לא יוכל לגשת למשבצות שנמצאות מעבר לאותו כלי חוסם.<br> יוצא דופן לכלל זה הוא פרש – שיכול לדלג מעל כלים אחרים.<br> אם הכלי החוסם הוא כלי של האויב, הכלי בתנועה רשאי לעבור ולעמוד על המשבצת של הכלי החוסם (נקרא גם "להכות אותו"), ולהסיר את הכלי החוסם מהמשחק.<br> יוצא דופן לכלל זה הוא רגלי – הוא לא יכול להכות חייל שחוסם אותו מלפנים, והוא כן יכול להכות חייל שנמצא באלכסון הימני או השמאלי של כיוון ההתקדמות שלו. </p> <p style="text-align: right; direction: rtl; float: right; clear: both;"> השחמטאים שביניכם יכולים להתעלם כרגע ממהלכים "מיוחדים", כמו הכאה דרך הילוכו, הצרחה או הכתרה.<br> כמו כן, בינתיים נתעלם מהחוק שקובע שאם המלך מאוים, השחקן חייב לבצע מהלך שיסיר ממנו את האיום. </p> <span style="text-align: right; direction: rtl; float: right; clear: both;">התרגיל</span> <p style="text-align: right; direction: rtl; float: right; clear: both;"> ממשו מחלקת <var>Board</var> שתכיל תכונה בשם <var>board</var>.<br> ביצירת הלוח, ייווצר לוח תקני מאויש בכלי משחק כפי שהוסבר לעיל. 
</p> <p style="text-align: right; direction: rtl; float: right; clear: both;"> לצורך כך, צרו מחלקה כללית בשם <var>Piece</var> המייצגת כלי משחק בשחמט.<br> לכל כלי משחק צבע (<var>color</var>), השורה שבה הוא נמצא (<var>row</var>) והעמודה שבה הוא נמצא (<var>column</var>).<br> צרו גם מחלקות עבור כל אחד מהכלים: <var>Pawn</var> (רגלי), <var>Rook</var> (צריח), <var>Knight</var> (פרש), <var>Bishop</var> (רץ), <var>Queen</var> (מלכה) ו־<var>King</var> (מלך).<br> </p> <p style="text-align: right; direction: rtl; float: right; clear: both;"> אפשרו לכל אחד מכלי המשחק לזוז על הלוח לפי החוקיות שתוארה מעלה.<br> דאגו שקריאה ל־<var>Board</var> תדפיס לוח ועליו כלי משחק לפי המצב העדכני של הלוח. </p> <span style="text-align: right; direction: rtl; float: right; clear: both;">פתרון אפשרי</span> <p style="text-align: right; direction: rtl; float: right; clear: both;"> נפתח ונאמר שאם אתם אמיצים במידה מספקת – אנחנו ממליצים לכם לנסות לפתור את התרגיל בעצמכם.<br> הוא בהחלט לא פשוט ועלול לקחת לא מעט זמן, ולכן הפתרון שלנו מוגש פה: </p> End of explanation
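A minimal, self-contained sketch of the mixin idea used by the solution above, where WalksStraightMixin and WalksDiagonallyMixin are combined in Queen through plain multiple inheritance. The class names below are illustrative only and are not part of the exercise solution; the point is that each mixin contributes its step offsets through a cooperative __init__, and the MRO decides the order those calls run in.

class Base:
    def __init__(self, **kwargs):
        self.directions = set()

class StraightStepsMixin:
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.directions.update({(0, 1), (0, -1), (1, 0), (-1, 0)})

class DiagonalStepsMixin:
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.directions.update({(1, 1), (1, -1), (-1, 1), (-1, -1)})

class QueenLike(StraightStepsMixin, DiagonalStepsMixin, Base):
    pass

piece = QueenLike()
print(len(piece.directions))                        # 8: both mixins contributed their step offsets
print([cls.__name__ for cls in QueenLike.__mro__])  # the MRO fixes the order the cooperative __init__ calls run in

Because every __init__ forwards to super().__init__(**kwargs), each mixin in the chain gets a chance to add its directions before the subclass is used; dropping that super() call in any one mixin would silently skip the rest of the chain.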
5,008
Given the following text description, write Python code to implement the functionality described below step by step Description: ES-DOC CMIP6 Model Properties - Landice MIP Era Step1: Document Authors Set document authors Step2: Document Contributors Specify document contributors Step3: Document Publication Specify document publication status Step4: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Software Properties 3. Grid 4. Glaciers 5. Ice 6. Ice --&gt; Mass Balance 7. Ice --&gt; Mass Balance --&gt; Basal 8. Ice --&gt; Mass Balance --&gt; Frontal 9. Ice --&gt; Dynamics 1. Key Properties Land ice key properties 1.1. Overview Is Required Step5: 1.2. Model Name Is Required Step6: 1.3. Ice Albedo Is Required Step7: 1.4. Atmospheric Coupling Variables Is Required Step8: 1.5. Oceanic Coupling Variables Is Required Step9: 1.6. Prognostic Variables Is Required Step10: 2. Key Properties --&gt; Software Properties Software properties of land ice code 2.1. Repository Is Required Step11: 2.2. Code Version Is Required Step12: 2.3. Code Languages Is Required Step13: 3. Grid Land ice grid 3.1. Overview Is Required Step14: 3.2. Adaptive Grid Is Required Step15: 3.3. Base Resolution Is Required Step16: 3.4. Resolution Limit Is Required Step17: 3.5. Projection Is Required Step18: 4. Glaciers Land ice glaciers 4.1. Overview Is Required Step19: 4.2. Description Is Required Step20: 4.3. Dynamic Areal Extent Is Required Step21: 5. Ice Ice sheet and ice shelf 5.1. Overview Is Required Step22: 5.2. Grounding Line Method Is Required Step23: 5.3. Ice Sheet Is Required Step24: 5.4. Ice Shelf Is Required Step25: 6. Ice --&gt; Mass Balance Description of the surface mass balance treatment 6.1. Surface Mass Balance Is Required Step26: 7. Ice --&gt; Mass Balance --&gt; Basal Description of basal melting 7.1. Bedrock Is Required Step27: 7.2. Ocean Is Required Step28: 8. Ice --&gt; Mass Balance --&gt; Frontal Description of claving/melting from the ice shelf front 8.1. Calving Is Required Step29: 8.2. Melting Is Required Step30: 9. Ice --&gt; Dynamics ** 9.1. Description Is Required Step31: 9.2. Approximation Is Required Step32: 9.3. Adaptive Timestep Is Required Step33: 9.4. Timestep Is Required
Python Code: # DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'test-institute-2', 'sandbox-1', 'landice') Explanation: ES-DOC CMIP6 Model Properties - Landice MIP Era: CMIP6 Institute: TEST-INSTITUTE-2 Source ID: SANDBOX-1 Topic: Landice Sub-Topics: Glaciers, Ice. Properties: 30 (21 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:54:44 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) Explanation: Document Authors Set document authors End of explanation # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) Explanation: Document Contributors Specify document contributors End of explanation # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) Explanation: Document Publication Specify document publication status End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Software Properties 3. Grid 4. Glaciers 5. Ice 6. Ice --&gt; Mass Balance 7. Ice --&gt; Mass Balance --&gt; Basal 8. Ice --&gt; Mass Balance --&gt; Frontal 9. Ice --&gt; Dynamics 1. Key Properties Land ice key properties 1.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of land surface model. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of land surface model code End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.ice_albedo') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "prescribed" # "function of ice age" # "function of ice density" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.3. Ice Albedo Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Specify how ice albedo is modelled End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.4. Atmospheric Coupling Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Which variables are passed between the atmosphere and ice (e.g. orography, ice mass) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.5. Oceanic Coupling Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Which variables are passed between the ocean and ice End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.landice.key_properties.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "ice velocity" # "ice thickness" # "ice temperature" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.6. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Which variables are prognostically calculated in the ice model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2. Key Properties --&gt; Software Properties Software properties of land ice code 2.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.grid.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3. Grid Land ice grid 3.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of the grid in the land ice scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.grid.adaptive_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 3.2. Adaptive Grid Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is an adative grid being used? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.grid.base_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 3.3. Base Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The base resolution (in metres), before any adaption End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.grid.resolution_limit') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 3.4. Resolution Limit Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If an adaptive grid is being used, what is the limit of the resolution (in metres) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.grid.projection') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.5. Projection Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The projection of the land ice grid (e.g. 
albers_equal_area) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.glaciers.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4. Glaciers Land ice glaciers 4.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of glaciers in the land ice scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.glaciers.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4.2. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the treatment of glaciers, if any End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 4.3. Dynamic Areal Extent Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Does the model include a dynamic glacial extent? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5. Ice Ice sheet and ice shelf 5.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of the ice sheet and ice shelf in the land ice scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.grounding_line_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "grounding line prescribed" # "flux prescribed (Schoof)" # "fixed grid size" # "moving grid" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 5.2. Grounding Line Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.ice_sheet') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 5.3. Ice Sheet Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Are ice sheets simulated? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.ice_shelf') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 5.4. Ice Shelf Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Are ice shelves simulated? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6. Ice --&gt; Mass Balance Description of the surface mass balance treatment 6.1. Surface Mass Balance Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how and where the surface mass balance (SMB) is calulated. Include the temporal coupling frequeny from the atmosphere, whether or not a seperate SMB model is used, and if so details of this model, such as its resolution End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7. Ice --&gt; Mass Balance --&gt; Basal Description of basal melting 7.1. Bedrock Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the implementation of basal melting over bedrock End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.2. Ocean Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the implementation of basal melting over the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8. Ice --&gt; Mass Balance --&gt; Frontal Description of claving/melting from the ice shelf front 8.1. Calving Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the implementation of calving from the front of the ice shelf End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.2. Melting Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the implementation of melting from the front of the ice shelf End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.dynamics.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9. Ice --&gt; Dynamics ** 9.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description if ice sheet and ice shelf dynamics End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.dynamics.approximation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "SIA" # "SAA" # "full stokes" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 9.2. Approximation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Approximation type used in modelling ice dynamics End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 9.3. Adaptive Timestep Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there an adaptive time scheme for the ice scheme? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.dynamics.timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 9.4. Timestep Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Timestep (in seconds) of the ice scheme. If the timestep is adaptive, then state a representative timestep. End of explanation
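The cells above are template stubs where every DOC.set_value call is left for the model documenter to fill in. As a hedged illustration of how such a stub is typically completed, here is a sketch using the set_author, set_id and set_value calls shown in the template; the name, email and property values are purely illustrative placeholders, not a description of any real CMIP6 model.

# Illustrative values only - not a real CMIP6 model description.
DOC.set_author("Jane Doe", "jane.doe@example.org")

DOC.set_id('cmip6.landice.key_properties.overview')
DOC.set_value("Illustrative land-ice component with prognostic ice thickness, velocity and temperature.")

DOC.set_id('cmip6.landice.key_properties.ice_albedo')
DOC.set_value("prescribed")   # one of the valid choices listed for property 1.3 above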
5,009
Given the following text description, write Python code to implement the functionality described below step by step Description: Python 3 Functions Step1: We can unpack a list or tuple into positional arguments using a star * Step2: Similarly, we can use double star ** to unpack a dictionary into keyword arguments. Step3: Comprehensions With <a href="http://python-3-patterns-idioms-test.readthedocs.io/en/latest/Comprehensions.html">comprehensions</a>, we can build a sequence based on another iterable. Step4: Dictionary comprehension
Python Code: import math def euclidean_distance(x1, y1, x2, y2): return math.sqrt((x1 - x2) ** 2 + (y1-y2) ** 2) euclidean_distance(0,0,1,1) Explanation: Python 3 Functions End of explanation values_list = [0,0,1,1] euclidean_distance(*values_list) values_tuple = (0,0,1,1) euclidean_distance(*values_tuple) Explanation: We can unpack a list or tuple into positional arguments using a star *: End of explanation values_dict = { 'x1': 0, 'y1': 0, 'x2': 1, 'y2': 1 } euclidean_distance(**values_dict) list(zip([1,2,3,4,5,6])) Explanation: Similarly, we can use double star ** to unpack a dictionary into keyword arguments. End of explanation # List comprehension [num ** 2 for num in range(-10, 11)] [num ** 2 for num in range(-10, 11) if num > 0] # Set comprehension names = [ 'Bob', 'JOHN', 'alice', 'bob', 'ALICE', 'J', 'Bob' ] { name[0].upper() + name[1:].lower() for name in names if len(name) > 1 } Explanation: Comprehensions With <a href="http://python-3-patterns-idioms-test.readthedocs.io/en/latest/Comprehensions.html">comprehensions</a>, we can build a sequence based on another iterable. End of explanation s = "Action Is Eloquence" counts = dict() for char in s: counts[char] = counts.get(char, 0) + 1 counts freq = { k.lower() : counts.get(k.lower(), 0) + counts.get(k.upper(), 0) for k in counts.keys() if k.isalpha() } freq Explanation: Dictionary comprehension End of explanation
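A short follow-up sketch (the point values and variable names are illustrative): the two unpacking forms can be mixed in a single call, and the comprehension syntax above also has a lazy counterpart, the generator expression.

import math

def euclidean_distance(x1, y1, x2, y2):
    return math.sqrt((x1 - x2) ** 2 + (y1 - y2) ** 2)

point_a = (0, 0)                # positional values, unpacked with *
point_b = {'x2': 3, 'y2': 4}    # keyword values, unpacked with **
print(euclidean_distance(*point_a, **point_b))   # 5.0

# A generator expression looks like a list comprehension but yields values lazily.
squares = (num ** 2 for num in range(5))
print(list(squares))   # [0, 1, 4, 9, 16]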
5,010
Given the following text description, write Python code to implement the functionality described below step by step Description: [1-1] Import the modules used to create animations and switch to a mode that can display them. Step1: [1-2] Draw an animation of a ball moving along the x-axis at a constant speed. The animated GIF file "animation01.gif" is also created at the same time. Step2: [1-3] Draw an animation of three random walks. The animated GIF file "animation02.gif" is also created at the same time.
Python Code: import numpy as np import matplotlib.pyplot as plt import matplotlib.animation as animation from numpy.random import randint %matplotlib nbagg Explanation: [1-1] Import the modules used to create animations and switch to a mode that can display them. End of explanation fig = plt.figure(figsize=(6,2)) subplot = fig.add_subplot(1,1,1) subplot.set_xlim(0,50) subplot.set_ylim(-1,1) x = 0 images = [] for _ in range(50): image = subplot.scatter([x],[0]) images.append([image]) x += 1 ani = animation.ArtistAnimation(fig, images, interval=100) ani.save('animation01.gif', writer='imagemagick', fps=10) Explanation: [1-2] Draw an animation of a ball moving along the x-axis at a constant speed. The animated GIF file "animation01.gif" is also created at the same time. End of explanation fig = plt.figure(figsize=(6,4)) subplot = fig.add_subplot(1,1,1) y1s, y2s, y3s = [], [], [] y1, y2, y3 = 0, 0, 0 images = [] for t in range(100): y1s.append(y1) y2s.append(y2) y3s.append(y3) image1, = subplot.plot(range(t+1), y1s, color='blue') image2, = subplot.plot(range(t+1), y2s, color='green') image3, = subplot.plot(range(t+1), y3s, color='red') images.append([image1, image2, image3]) y1 += randint(-1,2) y2 += randint(-1,2) y3 += randint(-1,2) ani = animation.ArtistAnimation(fig, images, interval=100) ani.save('animation02.gif', writer='imagemagick', fps=10) Explanation: [1-3] Draw an animation of three random walks. The animated GIF file "animation02.gif" is also created at the same time. End of explanation
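As a hedged alternative to collecting Artist lists, matplotlib also provides FuncAnimation, where a callback redraws the figure for each frame. The sketch below reproduces the moving-ball example in that style; the output file name is arbitrary, and it assumes the same imagemagick writer used above is available.

import matplotlib.pyplot as plt
import matplotlib.animation as animation

fig = plt.figure(figsize=(6, 2))
ax = fig.add_subplot(1, 1, 1)
ax.set_xlim(0, 50)
ax.set_ylim(-1, 1)
point, = ax.plot([], [], 'o')

def update(frame):
    # Move the ball one unit to the right per frame.
    point.set_data([frame], [0])
    return point,

ani = animation.FuncAnimation(fig, update, frames=50, interval=100)
ani.save('animation03.gif', writer='imagemagick', fps=10)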
5,011
Given the following text description, write Python code to implement the functionality described below step by step Description: <h2> Finding specific text in a corpus of scanned documents </h2> Step1: Here are a few of the images we are going to search. <img src="https Step2: <h2> Translating a large document in parallel. </h2> As the number of items increases, we need to parallelize the cells. Here, we translate Alice in Wonderland into Spanish. <p> This cell creates a worker that calls the Translate API. This has to be done on each Spark worker; this is why the import is within the function. Step3: Now we are ready to execute the above function in parallel on the Spark cluster Step4: Note Step5: <h2> Sentiment analysis in parallel </h2> Here, we do sentiment analysis on a bunch of text in parallel. This is similar to the translate. Step7: But this time, instead of processing a text file, let's process data from BigQuery. We will pull articles on JavaScript from Hacker News and look at the sentiment associated with them.
Python Code: from googleapiclient.discovery import build import subprocess images = subprocess.check_output(["gsutil", "ls", "gs://{}/unstructured/photos".format(BUCKET)]) images = list(filter(None,images.split('\n'))) print(images) Explanation: <h2> Finding specific text in a corpus of scanned documents </h2> End of explanation # Running Vision API to find images that have a specific search term import base64 SEARCH_TERM = u"1321" for IMAGE in images: print(IMAGE) vservice = build('vision', 'v1', developerKey=APIKEY) request = vservice.images().annotate(body={ 'requests': [{ 'image': { 'source': { 'gcs_image_uri': IMAGE } }, 'features': [{ 'type': 'TEXT_DETECTION', 'maxResults': 100, }] }], }) outputs = request.execute(num_retries=3) # print outputs if 'responses' in outputs and len(outputs['responses']) > 0 and 'textAnnotations' in outputs['responses'][0]: for output in outputs['responses'][0]['textAnnotations']: if SEARCH_TERM in output['description']: print("image={} contains the following text: {}".format(IMAGE, output['description'])) Explanation: Here are a few of the images we are going to search. <img src="https://storage.googleapis.com/cloud-training-demos-ml/unstructured/photos/snapshot1.png" /> <img src="https://storage.googleapis.com/cloud-training-demos-ml/unstructured/photos/snapshot2.png" /> <img src="https://storage.googleapis.com/cloud-training-demos-ml/unstructured/photos/snapshot5.png" /> End of explanation def executeTranslate(inputs): from googleapiclient.discovery import build service = build('translate', 'v2', developerKey=APIKEY) translator = service.translations() outputs = translator.list(source='en', target='es', q=inputs).execute() return outputs['translations'][0]['translatedText'] print("Added executeTranslate() function.") s Explanation: <h2> Translating a large document in parallel. </h2> As the number of items increases, we need to parallelize the cells. Here, we translate Alice in Wonderland into Spanish. <p> This cell creates a worker that calls the Translate API. This has to be done on each Spark worker; this is why the import is within the function. End of explanation alice = sc.textFile("gs://cpb103-public-files/alice-short-transformed.txt") alice = alice.map(lambda x: x.split(".")) for eachSentence in alice.take(10): print("{0}".format(eachSentence)) Explanation: Now we are ready to execute the above function in parallel on the Spark cluster End of explanation aliceTranslated = alice.map(executeTranslate) for eachSentance in aliceTranslated.take(10): print("{0}".format(eachSentance)) Explanation: Note: The book has also been transformed so all the new lines have been removed. This allows the book to be imported as a long string. The text is then split on the periods to create an array of strings. The loop just shows the input. <p> This code runs the translation in parallel on the Spark cluster and shows the results. End of explanation def executeSentimentAnalysis(quote): from googleapiclient.discovery import build lservice = build('language', 'v1beta1', developerKey=APIKEY) response = lservice.documents().analyzeSentiment( body={ 'document': { 'type': 'PLAIN_TEXT', 'content': quote } }).execute() return response print("Added executeSentimentAnalysis() function.") Explanation: <h2> Sentiment analysis in parallel </h2> Here, we do sentiment analysis on a bunch of text in parallel. This is similar to the translate. End of explanation import pandas as pd from pandas.io import gbq print("Imports run.") print 'Running query...' 
df = gbq.read_gbq( SELECT title, text FROM [bigquery-public-data:hacker_news.stories] where text > " " and title contains("JavaScript") LIMIT 10 , project_id=PROJECT_ID) #Convert Pandas DataFrame to RDD rdd = sqlContext.createDataFrame(df).rdd print(rdd.take(2)) # extract text field from Dictionary comments = rdd.map(lambda x: x[1]) sentiments = comments.map(executeSentimentAnalysis) for sentiment in sentiments.collect(): print("Score:{0} and Magnitude:{1}".format(sentiment['documentSentiment']['score'], sentiment['documentSentiment']['magnitude'])) Explanation: But this time, instead of processing a text file, let's process data from BigQuery. We will pull articles on JavaScript from Hacker News and look at the sentiment associated with them. End of explanation
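A small hedged follow-up, assuming the `sentiments` RDD produced in the cell above: the per-article documentSentiment scores can be reduced to a single average with ordinary Spark transformations instead of printing each one.

# Assumes `sentiments` is the RDD of Natural Language API responses built above.
scores = sentiments.map(lambda s: s['documentSentiment']['score'])
total, count = scores.map(lambda x: (x, 1)).reduce(lambda a, b: (a[0] + b[0], a[1] + b[1]))
print("Average sentiment score over {0} articles: {1:.3f}".format(count, total / count))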
5,012
Given the following text description, write Python code to implement the functionality described below step by step Description: More API Examples This notebook contains EVEN MORE API examples so you can get an idea of the types of services available. There's a world of API's out there for the taking, and we cannot teach them all to you. We can only teach you how they work in general... the details are 100% up to you! You should get your own API keys as appropriate. No guarantees my keys will work for you Caller Id/ Get a location for a Phone number This uses the cosmin phone number lookup API as found on https://market.mashape.com/explore Step1: Get current exchange rates This example uses http://fixer.io to get the current currency exchange rates. Step2: GeoIP lookup Step3: An API for sentiment analysis... Process some text and more here: http://text-processing.com Step4: Searching iTunes Here's an example of the iTunes search API. I'm searching for "Mandatory fun" and printing out the track names. Step5: Earthquakes anyone? Here's an example of the significant earthquakes from the past week. Information on this API can be found here: http://earthquake.usgs.gov/earthquakes/feed/v1.0/geojson.php Step6: Spotify The spotify example shows you how to call an API which uses the OAUTH2 protocol. This is a two step process. The first request, you request a token, and in the second request you call the api with that token. Twitter, Facebook, Google, and many other services use this approach. Typically you will use the client credentials flow, which does not explicitly require the user to consent. https://developer.spotify.com/documentation/general/guides/authorization-guide/
Python Code: import requests phone = input("Enter your phone number: ") params = { 'phone' : phone } headers={ "X-Mashape-Key": "sNi0LJs3rBmshZL7KQOrRWXZqIsBp1XUjhnjsnYUsE6iKo14Nc", "Accept": "application/json" } response = requests.get("https://cosmin-us-phone-number-lookup.p.mashape.com/get.php", params=params, headers=headers ) phone_data = response.json() phone_data Explanation: More API Examples This notebook contains EVEN MORE API examples so you can get an idea of the types of services available. There's a world of API's out there for the taking, and we cannot teach them all to you. We can only teach you how they work in general... the details are 100% up to you! You should get your own API keys as appropriate. No guarantees my keys will work for you Caller Id/ Get a location for a Phone number This uses the cosmin phone number lookup API as found on https://market.mashape.com/explore This api requires headers to be passed into the get() request. The API key and the requested output of json are sent into the header. Enter a phone number as input like 3154432911 and then the API will output JSON data consisting of caller ID data and GPS coordinates. End of explanation import requests apikey = '159f1a48ad7a3d6f4dbe5d5a71c2135c' # get your own at fixer.io params = { 'access_key': apikey } # US Dollars response = requests.get("http://data.fixer.io/api/latest", params=params ) rates = response.json() rates Explanation: Get current exchange rates This example uses http://fixer.io to get the current currency exchange rates. End of explanation import requests ip = "128.230.182.170" apikey = 'f9117fcd34312f9083a020af5836e337' # get your own at ipstack.com params = { 'access_key': apikey } # US Dollars url = f"http://api.snoopi.io/{ip}" response = requests.get( url, params=params ) rates = response.json() rates Explanation: GeoIP lookup: Find the lat/lng of an IP Address Every computer on the internet has a unique IP Address. This service when given an IP address will return back where that IP Address is located. Pretty handy API which is commonly used with mobile devices to determine approximate location when the GPS is turned off. End of explanation # sentiment message = input("How are you feeling today? ") url = 'http://text-processing.com/api/sentiment/' options = { 'text' : message} response = requests.post(url, data = options) sentiment = response.json() print(sentiment) Explanation: An API for sentiment analysis... Process some text and more here: http://text-processing.com End of explanation term = 'Mandatory Fun' params = { 'term' : term } response = requests.get('https://itunes.apple.com/search', params = params) search = response.json() for r in search['results']: print(r['trackName']) Explanation: Searching iTunes Here's an example of the iTunes search API. I'm searching for "Mandatory fun" and printing out the track names. End of explanation response = requests.get('https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/significant_week.geojson') quakes = response.json() for q in quakes['features']: print(q['properties']['title']) Explanation: Earthquakes anyone? Here's an example of the significant earthquakes from the past week. 
Information on this API can be found here: http://earthquake.usgs.gov/earthquakes/feed/v1.0/geojson.php End of explanation from base64 import b64encode # USE YOUR OWN CREDENTIALS THESE ARE EXAMPLES client_id = "413fe60240a7ad1881bcca301a345" client_secret = "f6eae3c49a8a9a5c82cb00cfb153" # Step one, get the access token payload = { 'grant_type' : 'client_credentials'} response = requests.post("https://accounts.spotify.com/api/token", auth=(client_id,client_secret),data=payload) token = response.json()['access_token'] print(f"Access token: {token}") # Step two and beyond, use the access token to call the api url = "https://api.spotify.com/v1/tracks/2TpxZ7JUBn3uw46aR7qd6V" header = {"Authorization" : f"Bearer {token}"} response = requests.get(url, headers=header) response.json() Explanation: Spotify The spotify example shows you how to call an API which uses the OAUTH2 prococol. This is a two step process. The first request, you request a token, and in the second request you call the api with that token. Twitter, Facebook, Google, and many other services use this approach. Typically you will use the client credentials flow, which does not explicitly require the user to consent. https://developer.spotify.com/documentation/general/guides/authorization-guide/ API's that use this approach will issue you a client id and a client secret. The id is always the same but the secret may be changed. We use that client id and client secret to get an bearer access token. Notice how we pass into the post a named argument auth= which authenticates with the client id/secret. Next we use the bearer access token to make subsequent calls to the api. End of explanation
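Because every call after the token request repeats the same Authorization header, one common pattern (a sketch, assuming the `token` variable obtained above) is to attach the bearer token to a requests.Session once and reuse that session for all subsequent API calls.

import requests

# Assumes `token` holds the access token obtained in the cell above.
session = requests.Session()
session.headers.update({"Authorization": "Bearer {}".format(token)})

# The same track endpoint used above, now called through the pre-authorized session.
track = session.get("https://api.spotify.com/v1/tracks/2TpxZ7JUBn3uw46aR7qd6V").json()
print(track.get("name"))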
5,013
Given the following text description, write Python code to implement the functionality described below step by step Description: Step1: High Order Taylor Maps I (original by Dario Izzo - extended by Ekin Ozturk) Building upon the notebook here, we show the use of desolver for numerically integrating the system of differential equations $\dot{\mathbf y} = \mathbf f(\mathbf y)$ Step2: We perform the numerical integration using floats (the standard way) Step3: We perform the numerical integration using gduals (to get a HOTM) Step4: We visualize the HOTM Step5: How much faster is now to evaluate the Map rather than perform a new numerical integration?
Python Code: %matplotlib inline from matplotlib import pyplot as plt import os import numpy as np os.environ['DES_BACKEND'] = 'numpy' import desolver as de import desolver.backend as D from desolver.backend import gdual_double as gdual T = 1e-3 @de.rhs_prettifier(equ_repr="[vr, -1/r**2 + r*vt**2, vt, -2*vt*vr/r]", md_repr=r$$ \begin{array}{l} \dot r = v_r \\ \dot v_r = - \frac 1{r^2} + r v_\theta^2\\ \dot \theta = v_\theta \\ \dot v_\theta = -2 \frac{v_\theta v_r}{r} \end{array} $$) def eom_kep_polar(t,y,**kwargs): return D.array([y[1], - 1 / y[0] / y[0] + y[0] * y[3]*y[3], y[3], -2*y[3]*y[1]/y[0] - T]) eom_kep_polar # The initial conditions ic = [1.,0.1,0.,1.] Explanation: High Order Taylor Maps I (original by Dario Izzo - extended by Ekin Ozturk) Building upon the notebook here, we show the use of desolver for numerically integrating the system of differential equations $\dot{\mathbf y} = \mathbf f(\mathbf y)$: $$ \begin{array}{l} \dot r = v_r \ \dot v_r = - \frac 1{r^2} + r v_\theta^2\ \dot \theta = v_\theta \ \dot v_\theta = -2 \frac{v_\theta v_r}{r} \end{array} $$ which describe, in non dimensional units, the Keplerian motion of a mass point object around some primary body. We show how we can build a high order Taylor map (HOTM, indicated with $\mathcal M$) representing the final state of the system at the time $T$ as a function of the initial conditions. In other words, we build a polinomial representation of the relation $\mathbf y(T) = \mathbf f(\mathbf y(0), T)$. Writing the initial conditions as $\mathbf y(0) = \overline {\mathbf y}(0) + \mathbf {dy}$, our HOTM will be written as: $$ \mathbf y(T) = \mathcal M(\mathbf {dy}) $$ and will be valid in a neighbourhood of $\overline {\mathbf y}(0)$. Importing Stuff End of explanation D.set_float_fmt('float64') float_integration = de.OdeSystem(eom_kep_polar, y0=ic, dense_output=False, t=(0, 5.), dt=0.01, rtol=1e-12, atol=1e-12, constants=dict()) float_integration.set_method("RK45") float_integration.integrate(eta=True) # Here we transform from polar to cartesian coordinates # to then plot y = float_integration.y cx = [it[0]*np.sin(it[2]) for it in y.astype(np.float64)] cy = [it[0]*np.cos(it[2]) for it in y.astype(np.float64)] plt.plot(cx,cy) plt.title("Orbit resulting from the chosen initial conditions") plt.xlabel("x") plt.ylabel("y") Explanation: We perform the numerical integration using floats (the standard way) End of explanation # Order of the Taylor Map. 
If we have 4 variables the number of terms in the Taylor expansion in 329 at order 7 order = 5 # We now define the initial conditions as gdual (not float) ic_g = [gdual(ic[0], "r", order), gdual(ic[1], "vr", order), gdual(ic[2], "t", order), gdual(ic[3], "vt", order)] import time start_time = time.time() D.set_float_fmt('gdual_double') gdual_integration = de.OdeSystem(eom_kep_polar, y0=ic_g, dense_output=False, t=(0, 5.), dt=0.01, rtol=1e-12, atol=1e-12, constants=dict()) gdual_integration.set_method("RK45") gdual_integration.integrate(eta=True) print("--- %s seconds ---" % (time.time() - start_time)) # We extract the last point yf = gdual_integration.y[-1] # And unpack it into some convinient names rf,vrf,tf,vtf = yf # We compute the final cartesian components xf = rf * D.sin(tf) yf = rf * D.cos(tf) # Note that you can get the latex representation of the gdual print(xf._repr_latex_()) print("xf (latex):") xf # We can extract the value of the polinomial when $\mathbf {dy} = 0$ print("Final x from the gdual integration", xf.constant_cf) print("Final y from the gdual integration", yf.constant_cf) # And check its indeed the result of the 'reference' trajectory (the lineariation point) print("\nFinal x from the float integration", cx[-1]) print("Final y from the float integration", cy[-1]) Explanation: We perform the numerical integration using gduals (to get a HOTM) End of explanation # Let us now visualize the Taylor map by creating a grid of perturbations on the initial conditions and # evaluating the map for those values Npoints = 10 # 10000 points epsilon = 1e-3 grid = np.arange(-epsilon,epsilon,2*epsilon/Npoints) nxf = [0] * len(grid)**4 nyf = [0] * len(grid)**4 i=0 import time start_time = time.time() for dr in grid: for dt in grid: for dvr in grid: for dvt in grid: nxf[i] = xf.evaluate({"dr":dr, "dt":dt, "dvr":dvr,"dvt":dvt}) nyf[i] = yf.evaluate({"dr":dr, "dt":dt, "dvr":dvr,"dvt":dvt}) i = i+1 print("--- %s seconds ---" % (time.time() - start_time)) f, axarr = plt.subplots(1,3,figsize=(15,5)) # Normal plot of the final map axarr[0].plot(nxf,nyf,'.') axarr[0].plot(cx,cy) axarr[0].set_title("The map") # Zoomed plot of the final map (equal axis) axarr[1].plot(nxf,nyf,'.') axarr[1].plot(cx,cy) axarr[1].set_xlim([cx[-1] - 0.1, cx[-1] + 0.1]) axarr[1].set_ylim([cy[-1] - 0.1, cy[-1] + 0.1]) axarr[1].set_title("Zoom") # Zoomed plot of the final map (unequal axis) axarr[2].plot(nxf,nyf,'.') axarr[2].plot(cx,cy) axarr[2].set_xlim([cx[-1] - 0.01, cx[-1] + 0.01]) axarr[2].set_ylim([cy[-1] - 0.1, cy[-1] + 0.1]) axarr[2].set_title("Stretch") #axarr[1].set_xlim([cx[-1] - 0.1, cx[-1] + 0.1]) #axarr[1].set_ylim([cy[-1] - 0.1, cy[-1] + 0.1]) Explanation: We visualize the HOTM End of explanation # First we profile the method evaluate (note that you need to call the method 4 times to get the full state) %timeit xf.evaluate({"dr":epsilon, "dt":epsilon, "dvr":epsilon,"dvt":epsilon}) # Then we profile the Runge-Kutta 4 integrator %%timeit D.set_float_fmt('float64') float_integration = de.OdeSystem(eom_kep_polar, y0=[it + epsilon for it in ic], dense_output=False, t=(0, 5.), dt=0.01, rtol=1e-12, atol=1e-12, constants=dict()) float_integration.set_method("RK45") float_integration.integrate(eta=False) # It seems the speedup is 2-3 orders of magnitude, but did we loose precision? 
# We plot the error in the final result as computed by the HOTM and by the Runge-Kutta # as a function of the distance from the original initial conditions out = [] pert = np.arange(0,0.1,1e-3) for epsilon in pert: res_map_xf = xf.evaluate({"dr":epsilon, "dt":epsilon, "dvr":epsilon,"dvt":epsilon}) res_int = de.OdeSystem(eom_kep_polar, y0=[it + epsilon for it in ic], dense_output=False, t=(0, 5.), dt=0.01, rtol=1e-12, atol=1e-12, constants=dict()) res_int.set_method("RK45") res_int.integrate() res_int_x = [it.y[0]*np.sin(it.y[2]) for it in res_int] res_int_xf = res_int_x[-1] out.append(np.abs(res_map_xf - res_int_xf)) plt.semilogy(pert,out) plt.title("Error introduced by the use of the polynomial") plt.xlabel("Perturbation of the initial conditions") plt.ylabel("Error in estimating the final state (x)") Explanation: How much faster is now to evaluate the Map rather than perform a new numerical integration? End of explanation
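A small convenience sketch, assuming the gdual expansions xf and yf built above: wrapping the four-variable evaluation in one helper makes it explicit that "flying the map" is just a single polynomial evaluation per perturbation of the initial conditions.

def evaluate_map(dr=0.0, dvr=0.0, dt=0.0, dvt=0.0):
    # Evaluate the Taylor map at a given perturbation of the initial conditions.
    pert = {"dr": dr, "dvr": dvr, "dt": dt, "dvt": dvt}
    return xf.evaluate(pert), yf.evaluate(pert)

# No perturbation reproduces the reference final state ...
print(evaluate_map())
# ... and any nearby initial condition costs only one cheap polynomial evaluation.
print(evaluate_map(dr=1e-3, dvt=-1e-3))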
5,014
Given the following text description, write Python code to implement the functionality described below step by step Description: Introduction to "Doing Science" in Python for REAL Beginners Python is one of many languages you can use for research and HW purposes. In the next few days, we will work through many of the tool, tips, and tricks that we as graduate students (and PhD researchers) use on a daily basis. We will NOT attempt to teach you all of Python--there isn't time. We will however build up a set of code(s) that will allow you to read and write data, make beautiful publish-worthy plots, fit a line (or any function) to data, and set up algorithms. You will also begin to learn the syntax of Python and can hopefuly apply this knowledge to your current and future work. Before we begin, a few words on navigating the iPython Notebook Step1: A. Numbers and Calculations Note Step2: These simple operations on numbers in Python 3 works exactly as you'd expect, but that's not true across all programming languages. For example, in Python 2, an older version of Python that is still used often in scientific programming Step3: Next, let's create a list of numbers and do math to that list. Step4: How many elements or numbers does the list c contain? Yes, this is easy to count now, but you will eventually work with lists that contains MANY numbers. To get the length of a list (or array), use len(). Step5: Now, some math... Let's square each value in c and put those values in a new list called d. To square a variable (or number), you use **. So $3^{**}2=9$. The rest of the math operations (+ / - x) are 'normal.' Step6: This should not have worked. Why? The short answer is that a list is very useful, but it is not an array. However, you can convert your lists to arrays (and back again if you feel you need to). In order to do this conversion (and just about anything else), we need something extra. Python is a fantastic language because it is very powerful and flexible. Also, it is like modular furniture or modular building. You have the Python foundation and choose which modules you want/need and load them before you start working. One of the most loved here at UMD is the NumPy module (pronounced num-pie). This is the something extra (the module), that we need. For more information, see www.numpy.org/. When we plot this data below, we will also need a module for plotting. First, let us import NumPy. Step7: To convert our list $c = [0,1,2,3,4,5,6,7,8,9]$ to an array we use numpy.array(), Step8: Great! However, typing numpy over and over again can get tiresome, so we can import it and give it a shorter name. It is common to use the following Step9: In this notation, converting a list to an array would be np.array(c). B. Our first plot! Now lets make a quick plot! We'll use the values from our list turned array, $c$, and plot $c^2$. Step10: C. Arrays of numbers You can also create arrays of numbers using NumPy. Two very useful ways to create arrays are by selecting the interval you care about (say, 0 to 10) and either specifying how far apart you want your values to be (numpy.arange), OR the total number of values you want (np.linspace). Let's look at some examples. Step11: Next make an array with endpoints 0 and 1 (include 0 and 1), that has 50 values in it. You can use either (both?) np.arange or np.linspace. Which is easier to you? How many numbers do you get? Are these numbers integers or floats (decimal place)? 
Step12: Next make an array with endpoints 0 and 2.5 (include 0 and 2.5), that has values spaced out in increments of 0.05. For example Step13: Next, let's plot these two arrays. Call them $a$ and $b$, or $x$ and $y$ (whichever you prefer--this is your code!), for example Step14: For all the possible plotting symbols, see Step15: D. A more complicated function Step16: Would this code be easy to edit for other temperatures? Now, let's look at code that does the same thing, but is more documented Step17: Plotting Multiple Curves Step18: Next, let's have you try an example. We mentioned above that wavelength and frequency are related by the speed of light, $$ c = \lambda \nu. $$ We also described a blackbody in terms of the wavelength. However, we can also describe a blackbody in terms of frequency, $$ B(\nu,T) = \dfrac{2h\nu^3}{c^2} \dfrac{1}{e^{\frac{h\nu}{kT}}-1}. $$ We can do this because $$ B_\nu d\nu = B_\lambda d\lambda $$ where $$ \nu = \frac{c}{\lambda} \quad \text{and} \quad \frac{d\nu}{d\lambda} = \left| - \frac{c}{\lambda^2}\right|. $$ EXERCISE
Python Code: #test cell Explanation: Introduction to "Doing Science" in Python for REAL Beginners Python is one of many languages you can use for research and HW purposes. In the next few days, we will work through many of the tool, tips, and tricks that we as graduate students (and PhD researchers) use on a daily basis. We will NOT attempt to teach you all of Python--there isn't time. We will however build up a set of code(s) that will allow you to read and write data, make beautiful publish-worthy plots, fit a line (or any function) to data, and set up algorithms. You will also begin to learn the syntax of Python and can hopefuly apply this knowledge to your current and future work. Before we begin, a few words on navigating the iPython Notebook: There are two main types of cells : Code and Text In "code" cells "#" at the beginning of a line marks the line as comment In "code" cells every non commented line is intepreted In "code" cells, commands that are preceded by % are "magics" and are special commands in Ipython to add some functionality to the runtime interactive environment. Shift+Return shortcut to execute a cell Alt+Return shortcut to execute a cell and create another one below Here you can find a complete documentation about the notebook. http://ipython.org/ipython-doc/1/interactive/notebook.html In particular have a look at the section about the keyboard shortcuts. And remember that : Indentation has a meaning ( we'll talk about this when we cover loops) Indexes start from 0 ( similar to C ) We will discuss more about these concepts while doing things. Let's get started now!!!! End of explanation ## You can use Python as a calculator: 5*7 #This is a comment and does not affect your code. #You can have as many as you want. #No worries. 5+7 5-7 5/7 Explanation: A. Numbers and Calculations Note: To insert comments to yourself (this is always a great idea), use the # symbol. End of explanation a2 = 10 b = 7 print(a2) print(b) print(a*b , a+b, a/b) a = 5. b = 7 print(a*b, a+b, a/b) Explanation: These simple operations on numbers in Python 3 works exactly as you'd expect, but that's not true across all programming languages. For example, in Python 2, an older version of Python that is still used often in scientific programming: $5/7$ $\neq$ $5./7$ The two calculations below would be equal on most calculators, but they are not equal to Python 2 and many other languages. The first, $5/7$ is division between integers and the answer will be an integer. The second, $5./7$ is division between a float (a number with a decimal) and an integer. In Python 2, $5/7$ = $0$ Since division of integers must return an integer, in this case 0. This is not the same for 5./7, which is $5./7$ = $0.7142857142857143$ This is something to keep in mind when working with programming languages, but Python 3 takes care of this for you. However, for the sake of consistency, it is best to use float division rather than integer division. Let's assign some variables and print() them to the screen. End of explanation c = [0,1,2,3,4,5,6,7,8,9,10,11] print(c) Explanation: Next, let's create a list of numbers and do math to that list. End of explanation len(c) Explanation: How many elements or numbers does the list c contain? Yes, this is easy to count now, but you will eventually work with lists that contains MANY numbers. To get the length of a list (or array), use len(). End of explanation d = c**2 Explanation: Now, some math... Let's square each value in c and put those values in a new list called d. 
To square a variable (or number), you use **. So $3^{**}2=9$. The rest of the math operations (+ / - x) are 'normal.' End of explanation import sys sys.path Explanation: This should not have worked. Why? The short answer is that a list is very useful, but it is not an array. However, you can convert your lists to arrays (and back again if you feel you need to). In order to do this conversion (and just about anything else), we need something extra. Python is a fantastic language because it is very powerful and flexible. Also, it is like modular furniture or modular building. You have the Python foundation and choose which modules you want/need and load them before you start working. One of the most loved here at UMD is the NumPy module (pronounced num-pie). This is the something extra (the module), that we need. For more information, see www.numpy.org/. When we plot this data below, we will also need a module for plotting. First, let us import NumPy. End of explanation c = np.array(c) d = c**2 print(d) Explanation: To convert our list $c = [0,1,2,3,4,5,6,7,8,9]$ to an array we use numpy.array(), End of explanation import numpy as np Explanation: Great! However, typing numpy over and over again can get tiresome, so we can import it and give it a shorter name. It is common to use the following: End of explanation %matplotlib inline import matplotlib import matplotlib.pyplot as plt x = c y = d p = plt.plot(x,y) p = plt.plot(x,y**2) Explanation: In this notation, converting a list to an array would be np.array(c). B. Our first plot! Now lets make a quick plot! We'll use the values from our list turned array, $c$, and plot $c^2$. End of explanation np.arange(0,10,2) #here the step size is 2. So you'll get even numbers. np.linspace(0,10,2) #here you're asking for 2 values. Guess what they'll be! Explanation: C. Arrays of numbers You can also create arrays of numbers using NumPy. Two very useful ways to create arrays are by selecting the interval you care about (say, 0 to 10) and either specifying how far apart you want your values to be (numpy.arange), OR the total number of values you want (np.linspace). Let's look at some examples. End of explanation np.linspace(0,1,50) np.arange(0,1.02,.02) Explanation: Next make an array with endpoints 0 and 1 (include 0 and 1), that has 50 values in it. You can use either (both?) np.arange or np.linspace. Which is easier to you? How many numbers do you get? Are these numbers integers or floats (decimal place)? End of explanation np.arange(0,2.55,0.05) Explanation: Next make an array with endpoints 0 and 2.5 (include 0 and 2.5), that has values spaced out in increments of 0.05. For example: 0, 0.05, 0.1, 0.15... You can use either np.arange or np.linspace. Which is easier to you? How many numbers do you get? Are these numbers integers or floats (decimal place)? End of explanation import numpy as np %matplotlib inline import matplotlib import matplotlib.pyplot as plt ema = np.linspace(0,1,50) bob = np.arange(0,2.5,0.05) # Clear the plotting field. plt.clf() # No need to add anything inside these parentheses. plt.plot(ema,bob,'b*') # The 'ro' says you want to use Red o plotting symbols. Explanation: Next, let's plot these two arrays. Call them $a$ and $b$, or $x$ and $y$ (whichever you prefer--this is your code!), for example: a = np.linspace(). Fill in the missing bits in the code below. 
End of explanation import numpy as np %matplotlib inline import matplotlib import matplotlib.pyplot as plt x = np.linspace(-1,1,10) y = np.sqrt(x) print(x) print(y) # Clear the plotting field. plt.clf() # No need to add anything inside these parentheses. plt.xlim([0,1.1]) plt.plot(x,y,'ro') # The 'ro' says you want to use Red o plotting symbols. plt.xlabel('my x') plt.ylabel('my y') plt.title('Happy new year') #Would you like to save your plot? Uncomment the below line. Here, we use savefig('nameOffigure') #It should save to the folder you are currently working out of. plt.savefig('MyFirstFigure.jpg') #What if we had error bars to deal with? We already have a built in solution that works much like plt.plot! #Below is a commented example of what an errorbar plot line would be for an array of errors called, dy. #plt.errorbar(x,y,y_err=dy) Explanation: For all the possible plotting symbols, see: http://matplotlib.org/api/markers_api.html. Next, let's plot the positive half of a circle. Let's also add labels in using plt.title(), plt.xlabel(), and plt.ylabel(). End of explanation import numpy as np import matplotlib import matplotlib.pyplot as plt x = np.linspace(100,2000,10000)*1e-9 #wavelength, we want a range of 100 nm to 10000 nm, but in METERS Blam = 2.0*6.626e-34*2.998e8**2/x**5/(np.exp(6.626e-34*2.998e8/(x*1.381e-23*5800.0))-1.0) plt.clf() p = plt.plot(x*1e9,Blam) #we multiply by 1e9 so that the x axis shows nm xl = plt.xlabel('Wavelength (nm)') yl = plt.ylabel('Spectral Radiance ()') #What are the units? Explanation: D. A more complicated function: The Planck Blackbody Curve Any object at a temperature above absolute zero (at 0 Kelvin, atoms stop moving), emits light of all wavelengths with varying degrees of efficiency. Recall that light is electromagnetic radiation. We see this radiation with our eyes when the wavelength is 400-700 nm, and we feel it as heat when the wavelength is 700 nm - 1mm. A blackbody is an object that is a perfect emitter. This means that it perfectly absorbs any energy that hits it and re-emits this energy over ALL wavelengths. The spectrum of a blackbody looks like a rainbow to our eyes. The expression for this "rainbow" over all wavelengths is given by, $$ B(\lambda,T) = \dfrac{2hc^2}{\lambda^5} \dfrac{1}{e^{\frac{hc}{\lambda kT}}-1} $$ where $\lambda$ is the wavelength, $T$ is the temperature of the object, $h$ is Planck's constant, $c$ is the speed of light, and $k$ is the Boltzmann constant. https://en.wikipedia.org/wiki/Black-body_radiation Plotting a blackbody curve: First, let us look at code that might be confusing to read later on. End of explanation import numpy as np import matplotlib import matplotlib.pyplot as plt # Constants in MKS (meters, kilograms, & seconds) h = 6.626e-34 # J s c = 2.998e8 # m/s k = 1.381e-23 # J/K # Let's pick the sun. YOU will need to PICK the temperature and then a range of # frequencies or wavelengths that "make sense" for that temperature. # We know that the sun peaks in visible part of the spectrum. This wavelength # is close to 500 nm. Let's have the domain (x values) go from 100 nm to 2000 nm. # 1 nm = 10^-9 m = 10^-7 cm. lam = np.linspace(100,2000,10000)*1e-9 #wavelength in nm nu = c/lam T = 5800.0 exp = np.exp(h*c/(lam*k*T)) num = 2.0 * h * c**2 denom = lam**5 * (exp - 1.0) Blam = num/denom plt.clf() p = plt.plot(lam*1e9,Blam) xl = plt.xlabel('Wavelength (nm)') yl = plt.ylabel(r'Spectral Radiance (W m$^{-3}$)') #What are the units? # Try a log-log plot. 
#p = plt.loglog(wav,Bnu) Explanation: Would this code be easy to edit for other temperatures? Now, let's look at code that does the same thing, but is more documented: End of explanation import numpy as np import matplotlib import matplotlib.pyplot as plt # Constants in MKS (meters, kilograms, & seconds) h = 6.626e-34 # c = 2.998e8 # m/s k = 1.381e-23 # J/K # Let's try to recreate the plot above. # Pick temperatures: T1 = 7000 K , T2= 5800 K, and T3 = 4000 K. # Let's have the domain (x values) go from 100 nm to 2000 nm. # 1 nm = 10^-9 m. wav = np.linspace(100,2000,10000)*1e-9 #in meters T1 = 7000. T2 = 5800. T3 = 4000. num = 2.0 * h * c**2 exp1 = np.exp(h*c/(wav*k*T1)) denom1 = wav**5 * (exp1 - 1.0) exp2 = np.exp(h*c/(wav*k*T2)) denom2 = wav**5 * (exp2 - 1.0) exp3 = np.exp(h*c/(wav*k*T3)) denom3 = wav**5 * (exp3 - 1.0) Bnu1 = num/denom1 Bnu2 = num/denom2 Bnu3 = num/denom3 plt.clf() p1 = plt.plot(wav*1e9,Bnu1,label='T =7000 K') p2 = plt.plot(wav*1e9,Bnu2,label='T = 5800 K') p3 = plt.plot(wav*1e9,Bnu3,label='T = 4000 K') xl = plt.xlabel('Wavelength (nm)') yl = plt.ylabel(r'Spectral Radiance (W m$^{-3}$)') l = plt.legend(loc='lower right') Explanation: Plotting Multiple Curves: End of explanation import numpy as np import matplotlib import matplotlib.pyplot as plt # Constants in MKS (meters, kilograms, & seconds) h = 6.626e-34 # c = 2.998e8 # m/s k = 1.381e-23 # J/K # Let's try to recreate the plot above. # Pick three temperatures. # Decide on a domain in Hertz (frequency) that makes sense. # c = nu x lambda, nu = c/lambda #### Put your code here ### Explanation: Next, let's have you try an example. We mentioned above that wavelength and frequency are related by the speed of light, $$ c = \lambda \nu. $$ We also described a blackbody in terms of the wavelength. However, we can also describe a blackbody in terms of frequency, $$ B(\nu,T) = \dfrac{2h\nu^3}{c^2} \dfrac{1}{e^{\frac{h\nu}{kT}}-1}. $$ We can do this because $$ B_\nu d\nu = B_\lambda d\lambda $$ where $$ \nu = \frac{c}{\lambda} \quad \text{and} \quad \frac{d\nu}{d\lambda} = \left| - \frac{c}{\lambda^2}\right|. $$ EXERCISE: Make the same plot above (for 3 separate temperatures) with the x axis showing frequency (in Hertz). End of explanation
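As a worked illustration (not part of the original notebook), here is one way the "#### Put your code here ###" stub above could be filled in. The three temperatures follow the earlier wavelength example (7000 K, 5800 K and 4000 K), and the frequency range is just $\nu = c/\lambda$ evaluated over the 100 nm to 2000 nm window used before; both are reasonable choices rather than values prescribed by the exercise.

```python
import numpy as np
import matplotlib.pyplot as plt

# Constants in MKS (meters, kilograms, & seconds)
h = 6.626e-34   # Planck constant, J s
c = 2.998e8     # speed of light, m/s
k = 1.381e-23   # Boltzmann constant, J/K

# Frequency domain: nu = c/lambda for lambda between 100 nm and 2000 nm
nu = np.linspace(c/2000e-9, c/100e-9, 10000)   # roughly 1.5e14 Hz to 3e15 Hz

plt.clf()
for T in (7000.0, 5800.0, 4000.0):
    # Planck law in its frequency form, B(nu, T)
    Bnu = 2.0*h*nu**3/c**2 / (np.exp(h*nu/(k*T)) - 1.0)
    plt.plot(nu, Bnu, label='T = %d K' % T)

plt.xlabel('Frequency (Hz)')
plt.ylabel(r'Spectral Radiance (W m$^{-2}$ Hz$^{-1}$ sr$^{-1}$)')
plt.title('Blackbody Spectral Radiance vs. Frequency')
plt.legend(loc='upper right')
```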
5,015
Given the following text description, write Python code to implement the functionality described below step by step Description: Step3: Identify Fraud from Enron email Machine Learning Project Tools library Step5: Tester module Step6: Data exploration and cleaning Step7: Basic statistics Step8: This shows that the dataset is stored as a dictionary. Each person in the dataset is represented by a key-value pair. There are 146 such pairs. The value is itself a dictionary containing 21 key-value pairs corresponding to the financial and email features. Feature exploration Step9: Let's have a look at missing values Step10: The column loan_advances has 142 missing values out of 146 observations.It is unlikely to be useful in the model we are trying to build. Step11: The entry 'Eugene Lockhart' has only NAs, except for poi which has a meaningful value. This matches the content of the file enron61702insiderpay.pdf, which shows that all his values are zero. In a sense, this person is an outlier, however we have to decide whether we want to retain im in the data or not. In other words, do we believe that this is a correct observaton or an error, and if we think it is correct, it is usefull to keep an observation that has only zeros? My view on this is that the observation is probably correct (there is a number of other individuals with very few non-zero features) and might be useful to the model, so I will retain it. In this dataset, missing values obviously mean zero. However, when working with financial data, one often has to convert values to their logarithm. With zeros and negative numbers, this leads to undefined values. We will therefore replace all NAs with a very small number (1.e-5). Step12: restricted_stock_deferred seems to have negative values only according to enron61702insiderpay.pdf, however its maximum value is $15,456,290. Let's investigate Step13: When comparing these values with the pdf file, I realize that the data is shifted to the left by one column, hence the errors. Presumably, there might be other such occurences so I now need to go through the data and manually fix these. Data cleaning Step14: There are only two problematic observations. Let's correct them manually Step15: This confirms we successfully cleaned up the data. Look for outliers Step16: I will now make a scatter plot of the first two variables Step17: There is an observation that immediately stands out. It corresponds to the highest values of both salary and deferral_payments. When checking the numbers against the document named enron61702insiderpay.pdf, we see that these values correspond to the 'TOTAL' line and are therefore an artefact of the data collection process rather than an actual observation. Step18: The data looks a lot more sensible now. There are still two significant outliers but they correspond to actual staff members (Jeffrey Skelling and Mark Frevert). As often with financial data, we might need to opt for log scales in further exploration. But for now, we are trying to identify outliers so we will stick to linear scales. Let's continue to plot observations Step19: Wow, now we have someone whose total payments is one order of magnitude above everyone else's. This is Kenneth Lay; the bulk of the payments come from Loan Advances. We will need to make a decision as to whether we want to keep him in the data or not... Let's see what this plot looks like with a logarithmic y-scale Step20: Now there seems to be a correlation between the two variables. 
Step21: Again, Kenneth Lay stands out with a total stock value of over $49mil. This is a real observations, so I decide to keep it for now. Step22: The two variables seem associated but the relationship may not be linear, even taking the log of total_stock_value. Let's now look at email features Step23: One employee stands out as extremely verbose! They sent almost 3 times as many messages as they received. Let's find out who they were Step24: The Wikipedia page about Vince Kaminski tells us he was Managing Director for Research and repetedly voiced objections to Enron's practices, warning that a single event could trigger a cascade of provision clauses in creditor contracts that would quickly lead to the demise of Enron. He was unfortunately proved right... Would this explain the discrepancy between the number of emails sent andb received? A detailed analysis of his emails might give us some insight into this but this is outside the scope of this project. Step25: Again, let's look at who the two outliers are Step26: David Delainey is a POI; in fact he was amongst the first convicted employees of Enron. John Lavorato is not a POI, and seems to have privately expressed concerns about some of Enron's behaviour. Look at individual variable distributions Step27: Many salaries are 0 -- presumably, not current Enron employees. Salaries are distributed in a fairly normal way, if we exclude the 0 values. Step28: For deferral_payments, I had to use a log transformation to get something resembling a normal distribution. Note that the count of non-zero values is low. Step29: Again, a logarithmic transformation is required to have a roughly normal distribution of the non-zero values. Step30: The number of non-zero values does not justify using this predictor in the model. Step31: Here again, I used a log transformation. Step32: Look for associations between features and labels Step33: It looks like salary and bonus might be good predictors. expenses seem less significant. Finally, director_fees seems useful because none of the POI seems to have received any director fees. However, this also applies to many non-POIs so it is not enough for perfect prediction. Moreover, very few employees received these fees so the information might not be very significant. Step34: Let's continue our investigation with the next set of predictors. For these predictors, a logarithmic y scale is more adequate Step35: Some of these variables have very few non-zero values, such as loan advances for instance. deferral_payment might not be a very strong predictor, but the other variables seem all significant. However we need to be warry of not duplicating information, so we should probably not include total_payments (which is the sum of all other pay-related features), or conversely we should only keep the total but not (all) the elements making it up. Let's continue with the stock value features Step36: It appears that only non-POI have non-zero values for restricted_stock_deferred. However the number of non-zero observations is low. The other predictors all seem useful, but total_stock_value is the sum of all of them so we may need to choose whether to keep the total or the individual predictors that make it up. Finally, let's have a look at the email features Step37: All these predictors seem relevant to predict the POI status of a member of staff. 
Note that there seems to be a circular logic in these features Step38: Let's see if these variables teach us anything Step39: The boxes for POI / non-POI in the first plot overlap quite a lot, meaning the predictor might not be as useful as others, but median values are quite diffent. It seems that on average, POI tend to send far less emails than they receive, which is intuitively consistent with senior executives being cc'd on a lot of conversations. Feature selection Based on the analysis above, I will make a feature selection for my first model, bearing in mind the need to keep the feature count as low as possible. There are also some variables that need to be converted to their logarithmic values. Note
Python Code: #!/usr/bin/python A general tool for converting data from the dictionary format to an (n x k) python list that's ready for training an sklearn algorithm n--no. of key-value pairs in dictonary k--no. of features being extracted dictionary keys are names of persons in dataset dictionary values are dictionaries, where each key-value pair in the dict is the name of a feature, and its value for that person In addition to converting a dictionary to a numpy array, you may want to separate the labels from the features--this is what targetFeatureSplit is for so, if you want to have the poi label as the target, and the features you want to use are the person's salary and bonus, here's what you would do: feature_list = ["poi", "salary", "bonus"] data_array = featureFormat( data_dictionary, feature_list ) label, features = targetFeatureSplit(data_array) the line above (targetFeatureSplit) assumes that the label is the _first_ item in feature_list--very important that poi is listed first! import numpy as np import pickle import sys from sklearn.cross_validation import StratifiedShuffleSplit sys.path.append("../tools/") #from feature_format import featureFormat, targetFeatureSplit def featureFormat( dictionary, features, remove_NaN=True, remove_all_zeroes=True, remove_any_zeroes=False, sort_keys = False): convert dictionary to numpy array of features remove_NaN = True will convert "NaN" string to 0.0 remove_all_zeroes = True will omit any data points for which all the features you seek are 0.0 remove_any_zeroes = True will omit any data points for which any of the features you seek are 0.0 sort_keys = True sorts keys by alphabetical order. Setting the value as a string opens the corresponding pickle file with a preset key order (this is used for Python 3 compatibility, and sort_keys should be left as False for the course mini-projects). NOTE: first feature is assumed to be 'poi' and is not checked for removal for zero or missing values. return_list = [] # Key order - first branch is for Python 3 compatibility on mini-projects, # second branch is for compatibility on final project. if isinstance(sort_keys, str): import pickle keys = pickle.load(open(sort_keys, "rb")) elif sort_keys: keys = sorted(dictionary.keys()) else: keys = dictionary.keys() for key in keys: tmp_list = [] for feature in features: try: dictionary[key][feature] except KeyError: print "error: key ", feature, " not present" return value = dictionary[key][feature] if value=="NaN" and remove_NaN: value = 0 tmp_list.append( float(value) ) # Logic for deciding whether or not to add the data point. append = True # exclude 'poi' class as criteria. if features[0] == 'poi': test_list = tmp_list[1:] else: test_list = tmp_list ### if all features are zero and you want to remove ### data points that are all zero, do that here if remove_all_zeroes: append = False for item in test_list: if item != 0 and item != "NaN": append = True break ### if any features for a given data point are zero ### and you want to remove data points with any zeroes, ### handle that here if remove_any_zeroes: if 0 in test_list or "NaN" in test_list: append = False ### Append the data point if flagged for addition. 
if append: return_list.append( np.array(tmp_list) ) return np.array(return_list) def targetFeatureSplit( data ): given a numpy array like the one returned from featureFormat, separate out the first feature and put it into its own list (this should be the quantity you want to predict) return targets and features as separate lists (sklearn can generally handle both lists and numpy arrays as input formats when training/predicting) target = [] features = [] for item in data: target.append( item[0] ) features.append( item[1:] ) return target, features Explanation: Identify Fraud from Enron email Machine Learning Project Tools library End of explanation #!/usr/bin/pickle a basic script for importing student's POI identifier, and checking the results that they get from it requires that the algorithm, dataset, and features list be written to my_classifier.pkl, my_dataset.pkl, and my_feature_list.pkl, respectively that process should happen at the end of poi_id.py import pickle import sys from sklearn.cross_validation import StratifiedShuffleSplit sys.path.append("../tools/") #from feature_format import featureFormat, targetFeatureSplit PERF_FORMAT_STRING = "\ \tAccuracy: {:>0.{display_precision}f}\tPrecision: {:>0.{display_precision}f}\t\ Recall: {:>0.{display_precision}f}\tF1: {:>0.{display_precision}f}\tF2: {:>0.{display_precision}f}" RESULTS_FORMAT_STRING = "\tTotal predictions: {:4d}\tTrue positives: {:4d}\tFalse positives: {:4d}\ \tFalse negatives: {:4d}\tTrue negatives: {:4d}" def test_classifier(clf, dataset, feature_list, folds = 1000): data = featureFormat(dataset, feature_list, sort_keys = True) labels, features = targetFeatureSplit(data) print labels print features cv = StratifiedShuffleSplit(labels, folds, random_state = 42) true_negatives = 0 false_negatives = 0 true_positives = 0 false_positives = 0 for train_idx, test_idx in cv: features_train = [] features_test = [] labels_train = [] labels_test = [] for ii in train_idx: features_train.append( features[ii] ) labels_train.append( labels[ii] ) for jj in test_idx: features_test.append( features[jj] ) labels_test.append( labels[jj] ) ### fit the classifier using training set, and test on test set clf.fit(features_train, labels_train) predictions = clf.predict(features_test) for prediction, truth in zip(predictions, labels_test): if prediction == 0 and truth == 0: true_negatives += 1 elif prediction == 0 and truth == 1: false_negatives += 1 elif prediction == 1 and truth == 0: false_positives += 1 elif prediction == 1 and truth == 1: true_positives += 1 else: print "Warning: Found a predicted label not == 0 or 1." print "All predictions should take value 0 or 1." 
print "Evaluating performance for processed predictions:" break #print true_negatives, false_negatives, true_positives, false_positives try: total_predictions = true_negatives + false_negatives + false_positives + true_positives accuracy = 1.0*(true_positives + true_negatives)/total_predictions precision = 1.0*true_positives/(true_positives+false_positives) recall = 1.0*true_positives/(true_positives+false_negatives) f1 = 2.0 * true_positives/(2*true_positives + false_positives+false_negatives) f2 = (1+2.0*2.0) * precision*recall/(4*precision + recall) print clf print PERF_FORMAT_STRING.format(accuracy, precision, recall, f1, f2, display_precision = 5) print RESULTS_FORMAT_STRING.format(total_predictions, true_positives, false_positives, false_negatives, true_negatives) print "" except: print "Got a divide by zero when trying out:", clf print "Precision or recall may be undefined due to a lack of true positive predicitons." CLF_PICKLE_FILENAME = "my_classifier.pkl" DATASET_PICKLE_FILENAME = "my_dataset.pkl" FEATURE_LIST_FILENAME = "my_feature_list.pkl" def dump_classifier_and_data(clf, dataset, feature_list): with open(CLF_PICKLE_FILENAME, "w") as clf_outfile: pickle.dump(clf, clf_outfile) with open(DATASET_PICKLE_FILENAME, "w") as dataset_outfile: pickle.dump(dataset, dataset_outfile) with open(FEATURE_LIST_FILENAME, "w") as featurelist_outfile: pickle.dump(feature_list, featurelist_outfile) def load_classifier_and_data(): with open(CLF_PICKLE_FILENAME, "r") as clf_infile: clf = pickle.load(clf_infile) with open(DATASET_PICKLE_FILENAME, "r") as dataset_infile: dataset = pickle.load(dataset_infile) with open(FEATURE_LIST_FILENAME, "r") as featurelist_infile: feature_list = pickle.load(featurelist_infile) return clf, dataset, feature_list def main(): ### load up student's classifier, dataset, and feature_list clf, dataset, feature_list = load_classifier_and_data() ### Run testing script test_classifier(clf, dataset, feature_list) #if __name__ == '__main__': # main() Explanation: Tester module End of explanation import matplotlib.pyplot as plt import pandas as pd % matplotlib inline # Load data from pickle files: with open('final_project_dataset.pkl', 'rb') as f: data_dict = pickle.load(f) Explanation: Data exploration and cleaning End of explanation print "Dataset type:", type(data_dict) print "Number of key-value pairs in dictionary:", len(data_dict) print "List of keys in dictionary:", data_dict.keys() print "Number of elements in a key-value pair:", len(data_dict['SHANKMAN JEFFREY A']) print "Example of contents of a key-value pair:", data_dict['SHANKMAN JEFFREY A'] Explanation: Basic statistics: End of explanation features_list = ['poi', 'salary', 'bonus', 'long_term_incentive', 'deferred_income', 'deferral_payments', 'loan_advances','other', 'expenses', 'director_fees', 'total_payments', 'exercised_stock_options', 'restricted_stock', 'restricted_stock_deferred', 'total_stock_value', 'from_messages', 'to_messages', 'from_poi_to_this_person', 'from_this_person_to_poi', 'shared_receipt_with_poi'] data_df = pd.DataFrame.from_dict(data_dict, orient = 'index', dtype = float) Explanation: This shows that the dataset is stored as a dictionary. Each person in the dataset is represented by a key-value pair. There are 146 such pairs. The value is itself a dictionary containing 21 key-value pairs corresponding to the financial and email features. Feature exploration: We first need to convert the dataset into a Panda dataframe for convenience. 
End of explanation data_df.isnull().sum(axis = 0).sort_values(ascending = False) Explanation: Let's have a look at missing values: End of explanation data_df.isnull().sum(axis =1).sort_values(ascending = False) data_df.loc['LOCKHART EUGENE E', :] Explanation: The column loan_advances has 142 missing values out of 146 observations.It is unlikely to be useful in the model we are trying to build. End of explanation data_df.fillna(1.e-5, inplace = True) data_df = data_df[features_list] data_df.describe() Explanation: The entry 'Eugene Lockhart' has only NAs, except for poi which has a meaningful value. This matches the content of the file enron61702insiderpay.pdf, which shows that all his values are zero. In a sense, this person is an outlier, however we have to decide whether we want to retain im in the data or not. In other words, do we believe that this is a correct observaton or an error, and if we think it is correct, it is usefull to keep an observation that has only zeros? My view on this is that the observation is probably correct (there is a number of other individuals with very few non-zero features) and might be useful to the model, so I will retain it. In this dataset, missing values obviously mean zero. However, when working with financial data, one often has to convert values to their logarithm. With zeros and negative numbers, this leads to undefined values. We will therefore replace all NAs with a very small number (1.e-5). End of explanation data_df[data_df['restricted_stock_deferred'] == np.max(data_df['restricted_stock_deferred'])] Explanation: restricted_stock_deferred seems to have negative values only according to enron61702insiderpay.pdf, however its maximum value is $15,456,290. Let's investigate: End of explanation data_df[(np.floor(data_df['salary'] + data_df['bonus'] + data_df['long_term_incentive'] + data_df['deferred_income'] + \ data_df['deferral_payments'] + data_df['loan_advances'] + data_df['other'] + data_df['expenses'] + \ data_df['director_fees']) != np.floor(data_df['total_payments'])) | \ (np.floor(data_df['exercised_stock_options'] + data_df['restricted_stock'] + \ data_df['restricted_stock_deferred']) != np.floor(data_df['total_stock_value']))] Explanation: When comparing these values with the pdf file, I realize that the data is shifted to the left by one column, hence the errors. Presumably, there might be other such occurences so I now need to go through the data and manually fix these. Data cleaning: An easy way to detect discrepancies such as described above is to check that totals (payments and stock value) are equal to the sum of their components. End of explanation # Robert Belfer: for j in xrange(1, 14): data_df.ix['BELFER ROBERT', j] = data_df.ix['BELFER ROBERT', j + 1] data_df.ix['BELFER ROBERT', 14] = 1.e-5 # Sanjay Bhatnagar: for j in xrange(14, 2, -1): data_df.ix['BHATNAGAR SANJAY', j] = data_df.ix['BHATNAGAR SANJAY', j - 1] data_df.ix['BHATNAGAR SANJAY', 1] = 1.e-5 data_df.loc[['BELFER ROBERT', 'BHATNAGAR SANJAY']] Explanation: There are only two problematic observations. Let's correct them manually: End of explanation data_df = data_df.drop(['THE TRAVEL AGENCY IN THE PARK']) Explanation: This confirms we successfully cleaned up the data. Look for outliers: In our list of DataFrame indexes shown above, we can see a name that is obviously not a real person: 'THE TRAVEL AGENCY IN THE PARK'. Some research show that this is a travel agency that was contracted to Enron while related to the wife of one of Enron's executives. 
There might be conflict of interest here, but we since we are investigating persons and not suppliers, I chose to drop this observation. End of explanation sp = data_df.plot.scatter(x = 'salary', y = 'deferral_payments', c = 'poi', edgecolors = 'Blue', s = 50) Explanation: I will now make a scatter plot of the first two variables: End of explanation # Drop the 'TOTAL' row: data_df = data_df.drop(['TOTAL']) data_df.describe() sp = data_df.plot.scatter(x = 'salary', y = 'deferral_payments', c = 'poi', edgecolors = 'Blue', s = 50) Explanation: There is an observation that immediately stands out. It corresponds to the highest values of both salary and deferral_payments. When checking the numbers against the document named enron61702insiderpay.pdf, we see that these values correspond to the 'TOTAL' line and are therefore an artefact of the data collection process rather than an actual observation. End of explanation sp = data_df.plot.scatter(x = 'salary', y = 'bonus', c = 'poi', edgecolors = 'Blue', s = 50) sp = data_df.plot.scatter(x = 'salary', y = 'expenses', c = 'poi', edgecolors = 'Blue', s = 50) sp = data_df.plot.scatter(x = 'salary', y = 'total_payments', c = 'poi', edgecolors = 'Blue', s = 50) Explanation: The data looks a lot more sensible now. There are still two significant outliers but they correspond to actual staff members (Jeffrey Skelling and Mark Frevert). As often with financial data, we might need to opt for log scales in further exploration. But for now, we are trying to identify outliers so we will stick to linear scales. Let's continue to plot observations: End of explanation sp = data_df.plot.scatter(x = 'salary', y = 'total_payments', c = 'poi', edgecolors = 'Blue', s = 50) sp.set_yscale('log') sp.set_ylim(1.0e4, 1.5e8) Explanation: Wow, now we have someone whose total payments is one order of magnitude above everyone else's. This is Kenneth Lay; the bulk of the payments come from Loan Advances. We will need to make a decision as to whether we want to keep him in the data or not... Let's see what this plot looks like with a logarithmic y-scale: End of explanation sp = data_df.plot.scatter(x = 'salary', y = 'total_stock_value', c = 'poi', edgecolors = 'Blue', s = 50) Explanation: Now there seems to be a correlation between the two variables. End of explanation sp = data_df.plot.scatter(x = 'salary', y = 'total_stock_value', c = 'poi', edgecolors = 'Blue', s = 50) sp.set_yscale('log') sp.set_ylim(1.0e4, 1.5e8) Explanation: Again, Kenneth Lay stands out with a total stock value of over $49mil. This is a real observations, so I decide to keep it for now. End of explanation sp = data_df.plot.scatter(x = 'to_messages', y = 'from_messages', c = 'poi', edgecolors = 'Blue', s = 50) Explanation: The two variables seem associated but the relationship may not be linear, even taking the log of total_stock_value. Let's now look at email features: End of explanation data_df[data_df['from_messages'] == np.max(data_df['from_messages'])] Explanation: One employee stands out as extremely verbose! They sent almost 3 times as many messages as they received. 
Let's find out who they were: End of explanation sp = data_df.plot.scatter(x = 'from_poi_to_this_person', y = 'from_this_person_to_poi', c = 'poi', edgecolors = 'Blue', s = 50) Explanation: The Wikipedia page about Vince Kaminski tells us he was Managing Director for Research and repetedly voiced objections to Enron's practices, warning that a single event could trigger a cascade of provision clauses in creditor contracts that would quickly lead to the demise of Enron. He was unfortunately proved right... Would this explain the discrepancy between the number of emails sent andb received? A detailed analysis of his emails might give us some insight into this but this is outside the scope of this project. End of explanation data_df[data_df['from_this_person_to_poi'] == np.max(data_df['from_this_person_to_poi'])] data_df[data_df['from_poi_to_this_person'] == np.max(data_df['from_poi_to_this_person'])] Explanation: Again, let's look at who the two outliers are: End of explanation fig, ax = plt.subplots() data_df.hist('salary', ax = ax, bins = 50) ax.set_ylim(0, 20) Explanation: David Delainey is a POI; in fact he was amongst the first convicted employees of Enron. John Lavorato is not a POI, and seems to have privately expressed concerns about some of Enron's behaviour. Look at individual variable distributions: In this section we will plot histograms for each feature variable and apply log transformations where required to make them closer to a normal distribution, which helps many machine learning models. End of explanation fig, ax = plt.subplots() data_df.hist('deferral_payments', ax = ax, bins = np.logspace(np.log10(1.e-5), np.log10(2.e6), 50)) ax.set_xscale('log') ax.set_xlim(1e3, None) ax.set_ylim(0, 10) Explanation: Many salaries are 0 -- presumably, not current Enron employees. Salaries are distributed in a fairly normal way, if we exclude the 0 values. End of explanation fig, ax = plt.subplots() data_df.hist('total_payments', ax = ax, bins = np.logspace(np.log10(1.e-5), np.log10(2.e8), 50)) ax.set_xscale('log') ax.set_xlim(1.e2, None) ax.set_ylim(0, None) Explanation: For deferral_payments, I had to use a log transformation to get something resembling a normal distribution. Note that the count of non-zero values is low. End of explanation fig, ax = plt.subplots() data_df.hist('loan_advances', ax = ax, bins = 50) Explanation: Again, a logarithmic transformation is required to have a roughly normal distribution of the non-zero values. End of explanation fig, ax = plt.subplots() data_df.hist('bonus', ax = ax, bins = np.logspace(np.log10(1.e-5), np.log10(2.e6), 100)) ax.set_xscale('log') ax.set_xlim(1e4, None) ax.set_ylim(0, 20) Explanation: The number of non-zero values does not justify using this predictor in the model. End of explanation fig, ax = plt.subplots() data_df.hist('restricted_stock_deferred', ax = ax, bins = 50) Explanation: Here again, I used a log transformation. End of explanation fig, ax = plt.subplots(2, 2, sharey = False, figsize = (10, 15)) bp = data_df.boxplot(['salary', 'bonus', 'expenses', 'director_fees'], by = 'poi', ax = ax) Explanation: Look for associations between features and labels: In this section, we will boxplot the poi label against the feature variables to try and find the most relevant features to select in our model. End of explanation data_df.loc[:, ['director_fees', 'poi']] Explanation: It looks like salary and bonus might be good predictors. expenses seem less significant. 
Finally, director_fees seems useful because none of the POI seems to have received any director fees. However, this also applies to many non-POIs so it is not enough for perfect prediction. Moreover, very few employees received these fees so the information might not be very significant. End of explanation data_df.loc[:, 'deferred_income'] = np.abs(data_df.loc[:, 'deferred_income']) fig, axes = plt.subplots(2, 3, sharey = False, figsize = (15, 15)) bp = data_df.boxplot(['deferral_payments', 'loan_advances', 'deferred_income', 'long_term_incentive', 'other', 'total_payments'], by = 'poi', ax = axes) for i in range(2): for j in range(3): axes[i][j].set_yscale('log') axes[i][j].set_ylim(1000, None) Explanation: Let's continue our investigation with the next set of predictors. For these predictors, a logarithmic y scale is more adequate: End of explanation data_df.loc[:, 'restricted_stock_deferred'] = np.abs(data_df.loc[:, 'restricted_stock_deferred']) fig, axes = plt.subplots(2, 2, sharey = False, figsize = (10, 10)) bp = data_df.boxplot(['restricted_stock_deferred', 'exercised_stock_options', 'restricted_stock', 'total_stock_value'], by = 'poi', ax = axes) for i in range(2): for j in range(2): axes[i][j].set_yscale('log') axes[i][j].set_ylim(1000, None) Explanation: Some of these variables have very few non-zero values, such as loan advances for instance. deferral_payment might not be a very strong predictor, but the other variables seem all significant. However we need to be warry of not duplicating information, so we should probably not include total_payments (which is the sum of all other pay-related features), or conversely we should only keep the total but not (all) the elements making it up. Let's continue with the stock value features: End of explanation fig, axes = plt.subplots(1, 5, sharey = False, figsize = (20, 10)) bp = data_df.boxplot(['to_messages', 'from_messages', 'from_poi_to_this_person', 'from_this_person_to_poi', 'shared_receipt_with_poi'], by = 'poi', ax = axes) for i in range(5): axes[i].set_yscale('log') axes[i].set_ylim(1, None) Explanation: It appears that only non-POI have non-zero values for restricted_stock_deferred. However the number of non-zero observations is low. The other predictors all seem useful, but total_stock_value is the sum of all of them so we may need to choose whether to keep the total or the individual predictors that make it up. Finally, let's have a look at the email features: End of explanation extended_data = data_df.loc[:, ['poi', 'salary', 'bonus', 'expenses', 'director_fees']] extended_data.loc[:, 'sent_vs_received'] = data_df.loc[:, 'from_messages'] / data_df.loc[:, 'to_messages'] extended_data.loc[:, 'total_emails']= data_df.loc[:, 'from_messages'] + data_df.loc[:, 'to_messages'] extended_data.loc[:, 'emails_with_poi'] = data_df.loc[:, 'from_this_person_to_poi'] + \ data_df.loc[:, 'from_poi_to_this_person'] + \ data_df.loc[:, 'shared_receipt_with_poi'] Explanation: All these predictors seem relevant to predict the POI status of a member of staff. Note that there seems to be a circular logic in these features: To predict whether or not someone is a POI, we look at whether they sent emails or received emails from other POIs, which implies that we already know if they are POIs or not... Synthetic features Given the small dataset size, I would like to restrict the number of predictors to as low a number as possible. To that end, I will try to aggregate some of the variables in a meaningful way. 
My first idea is to use the email features to look at from / to ratios and total number of emails involving POIs. End of explanation fig, axes = plt.subplots(1, 3, sharey = False, figsize = (15, 10)) bp = extended_data.boxplot(['sent_vs_received', 'total_emails', 'emails_with_poi'], by = 'poi', ax = axes) Explanation: Let's see if these variables teach us anything: End of explanation # Create and export final dataset extended_data.loc[:, 'log_bonus'] = np.log(data_df.loc[:, 'bonus']) extended_data.loc[:, 'log_deferred_income'] = np.log(data_df.loc[:, 'deferred_income']) extended_data.loc[:, 'log_long_term_incentive'] = np.log(data_df.loc[:, 'long_term_incentive']) extended_data.loc[:, 'log_other'] = np.log(data_df.loc[:, 'other']) extended_data.loc[:, 'log_restricted_stock_deferred'] = np.log(data_df.loc[:, 'restricted_stock_deferred']) extended_data.loc[:, 'log_total_stock_value'] = np.log(data_df.loc[:, 'total_stock_value']) # List of features used in the model features_list = list(extended_data.columns) # Put dataset into the dict format expected by the test module my_data_dict = extended_data.to_dict(orient = 'index') print features_list extended_data.describe() extended_data.columns Explanation: The boxes for POI / non-POI in the first plot overlap quite a lot, meaning the predictor might not be as useful as others, but median values are quite diffent. It seems that on average, POI tend to send far less emails than they receive, which is intuitively consistent with senior executives being cc'd on a lot of conversations. Feature selection Based on the analysis above, I will make a feature selection for my first model, bearing in mind the need to keep the feature count as low as possible. There are also some variables that need to be converted to their logarithmic values. Note: By accident, I discovered that keeping both Selected features: salary log(bonus) expenses director_fees log(deferred_income) log(long_term_incentive) log(other) log(restricted_stock_deferred) log(total_stock_value) sent_vs_received total_emails emails_with_poi End of explanation
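The notebook stops after building `features_list` and `my_data_dict`. As a sketch of how those objects could be handed to the evaluation helpers defined earlier in this project (`test_classifier` and `dump_classifier_and_data`), something like the following would work; the choice of Gaussian Naive Bayes is only a placeholder baseline, not the classifier actually selected for the project.

```python
from sklearn.naive_bayes import GaussianNB

# Placeholder baseline classifier; any scikit-learn estimator could be used here.
clf = GaussianNB()

# Cross-validated accuracy/precision/recall via the StratifiedShuffleSplit tester
# defined above (featureFormat assumes 'poi' is the first entry of features_list,
# which it is for the extended_data columns built in the last cell).
test_classifier(clf, my_data_dict, features_list, folds=1000)

# Write my_classifier.pkl, my_dataset.pkl and my_feature_list.pkl so the tester's
# main() routine can reload and re-score them.
dump_classifier_and_data(clf, my_data_dict, features_list)
```

Precision and recall printed by `test_classifier` are the numbers worth watching here, so whichever classifier eventually replaces the placeholder should be compared on those rather than on accuracy alone.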
5,016
Given the following text description, write Python code to implement the functionality described below step by step Description: Simple Linear Regression We need to read our data from a <tt>csv</tt> file. The module csv offers a number of functions for reading and writing a <tt>csv</tt> file. Step1: The data we want to read is contained in the <tt>csv</tt> file cars.csv, which is located in the subdirectory Python. In this file, the first column has the miles per gallon, while the engine displacement is given in the third column. On MacOs and Linux systems we can peek at this file via the next cell. Step2: In order to read the file we use the method DictReader from the module csv. The DictReader returns a dictionary for every row of the csv file. The keys of this dictionary are the column headers of the csv file. When reading this file, we convert miles per gallon into km per litre and cubic inches into litres. Step3: Now kpl is a list of floating point numbers specifying the <em style="color Step4: The number of data pairs of the form $\langle x, y \rangle$ that we have read is stored in the variable m. Step5: In order to be able to plot the fuel efficiency versus the engine displacement we turn the lists displacement and mpg into numpy arrays. This is also usefull in order to compute the coefficients $\vartheta_0$ and $\vartheta_1$ later. Step6: Since <em style="color Step7: We compute the average engine displacement according to the formula Step8: We compute the average fuel consumption according to the formula Step9: The coefficient $\vartheta_1$ is computed according to the formula Step10: The coefficient $\vartheta_0$ is computed according to the formula Step11: Let us plot the line $y(x) = ϑ0 + ϑ1 \cdot x$ together with our data Step12: We see there is quite a bit of variation and apparently the engine displacement explains only a part of the fuel consumption. In order to compute the coefficient of determination, i.e. the statistics $R^2$, we first compute the total sum of squares TSS according to the following formula Step13: Next, we compute the residual sum of squares RSS as follows Step14: Now $R^2$ is calculated via the formula
Python Code: import csv Explanation: Simple Linear Regression We need to read our data from a <tt>csv</tt> file. The module csv offers a number of functions for reading and writing a <tt>csv</tt> file. End of explanation !cat cars.csv || type cars.csv Explanation: The data we want to read is contained in the <tt>csv</tt> file cars.csv, which is located in the subdirectory Python. In this file, the first column has the miles per gallon, while the engine displacement is given in the third column. On MacOs and Linux systems we can peek at this file via the next cell. End of explanation with open('cars.csv') as handle: reader = csv.DictReader(handle, delimiter=',') kpl = [] # kilometer per litre displacement = [] # engine displacement for row in reader: x = float(row['displacement']) * 0.0163871 y = float(row['mpg']) * 1.60934 / 3.78541 print(f'{row["name"]:35s}: displacement = {x:5.2f} litres, kpl = {y:5.2f} km per litres') displacement.append(x) kpl .append(y) Explanation: In order to read the file we use the method DictReader from the module csv. The DictReader returns a dictionary for every row of the csv file. The keys of this dictionary are the column headers of the csv file. When reading this file, we convert miles per gallon into km per litre and cubic inches into litres. End of explanation kpl[:5] displacement[:5] Explanation: Now kpl is a list of floating point numbers specifying the <em style="color:blue;">fuel efficiency</em>, while the list displacement contains the corresponding <em style="color:blue;">engine displacements</em> measured in litres. We display these values for the first 5 cars. End of explanation m = len(displacement) m Explanation: The number of data pairs of the form $\langle x, y \rangle$ that we have read is stored in the variable m. End of explanation import numpy as np import matplotlib.pyplot as plt import seaborn as sns Explanation: In order to be able to plot the fuel efficiency versus the engine displacement we turn the lists displacement and mpg into numpy arrays. This is also usefull in order to compute the coefficients $\vartheta_0$ and $\vartheta_1$ later. End of explanation X = np.array(displacement) Y = np.array([100 / y for y in kpl]) plt.figure(figsize=(12, 10)) sns.set(style='whitegrid') plt.scatter(X, Y, c='b', s=4) # 'b' is short for blue plt.xlabel('engine displacement in litres') plt.ylabel('litre per 100 km') plt.title('Fuel Consumption vs. 
Engine Displacement') Explanation: Since <em style="color:blue;">kilometres per litre</em> is the inverse of the fuel consumption, the vector Y is defined as follows: End of explanation xMean = np.mean(X) xMean Explanation: We compute the average engine displacement according to the formula: $$ \bar{\mathbf{x}} = \frac{1}{m} \cdot \sum\limits_{i=1}^m x_i $$ End of explanation yMean = np.mean(Y) yMean Explanation: We compute the average fuel consumption according to the formula: $$ \bar{\mathbf{y}} = \frac{1}{m} \cdot \sum\limits_{i=1}^m y_i $$ End of explanation ϑ1 = np.sum((X - xMean) * (Y - yMean)) / np.sum((X - xMean) ** 2) ϑ1 Explanation: The coefficient $\vartheta_1$ is computed according to the formula: $$ \vartheta_1 = \frac{\sum\limits_{i=1}^m \bigl(x_i - \bar{\mathbf{x}}\bigr) \cdot \bigl(y_i - \bar{\mathbf{y}}\bigr)}{ \sum\limits_{i=1}^m \bigl(x_i - \bar{\mathbf{x}}\bigr)^2} $$ End of explanation ϑ0 = yMean - ϑ1 * xMean ϑ0 Explanation: The coefficient $\vartheta_0$ is computed according to the formula: $$ \vartheta_0 = \bar{\mathbf{y}} - \vartheta_1 \cdot \bar{\mathbf{x}} $$ End of explanation xMax = max(X) + 0.2 plt.figure(figsize=(12, 10)) sns.set(style='whitegrid') plt.scatter(X, Y, c='b') plt.plot([0, xMax], [ϑ0, ϑ0 + ϑ1 * xMax], c='r') plt.xlabel('engine displacement in cubic inches') plt.ylabel('fuel consumption in litres per 100 km') plt.title('Fuel Consumption versus Engine Displacement') Explanation: Let us plot the line $y(x) = ϑ0 + ϑ1 \cdot x$ together with our data: End of explanation TSS = np.sum((Y - yMean) ** 2) TSS Explanation: We see there is quite a bit of variation and apparently the engine displacement explains only a part of the fuel consumption. In order to compute the coefficient of determination, i.e. the statistics $R^2$, we first compute the total sum of squares TSS according to the following formula: $$ \mathtt{TSS} := \sum\limits_{i=1}^m \bigl(y_i - \bar{\mathbf{y}}\bigr)^2 $$ End of explanation RSS = np.sum((ϑ1 * X + ϑ0 - Y) ** 2) RSS Explanation: Next, we compute the residual sum of squares RSS as follows: $$ \mathtt{RSS} := \sum\limits_{i=1}^m \bigl(\vartheta_1 \cdot x_i + \vartheta_0 - y_i\bigr)^2 $$ End of explanation R2 = 1 - RSS/TSS R2 Explanation: Now $R^2$ is calculated via the formula: $$ R^2 = 1 - \frac{\mathtt{RSS}}{\mathtt{TSS}}$$ End of explanation
5,017
Given the following text description, write Python code to implement the functionality described below step by step Description: Topic Modeling wiht Latent Semantic Analysis Latent Semantic Analysis (LSA) is a method for reducing the dimnesionality of documents treated as a bag of words. It is used for document classification, clustering and retrieval. For example, LSA can be used to search for prior art given a new patent application. In this homework, we will implement a small library for simple latent semantic analysis as a practical example of the application of SVD. The ideas are very similar to PCA. We will implement a toy example of LSA to get familiar with the ideas. If you want to use LSA or similar methods for statiscal language analyis, the most efficient Python library is probably gensim - this also provides an online algorithm - i.e. the training information can be continuously updated. Other useful functions for processing natural language can be found in the Natural Lnaguage Toolkit. Note Step8: Exercise 1 (10 points). Calculating pairwise distance matrices. Suppose we want to construct a distance matrix between the rows of a matrix. For example, given the matrix python M = np.array([[1,2,3],[4,5,6]]) the distance matrix using Euclidean distance as the measure would be python [[ 0.000 1.414 2.828] [ 1.414 0.000 1.414] [ 2.828 1.414 0.000]] if $M$ was a collection of column vectors. Write a function to calculate the pairwise-distance matrix given the matrix $M$ and some arbitrary distance function. Your functions should have the following signature Step13: Exercise 2 (20 points). Write 3 functions to calculate the term frequency (tf), the inverse document frequency (idf) and the product (tf-idf). Each function should take a single argument docs, which is a dictionary of (key=identifier, value=dcoument text) pairs, and return an appropriately sized array. Convert '-' to ' ' (space), remove punctuation, convert text to lowercase and split on whitespace to generate a collection of terms from the dcoument text. tf = the number of occurrences of term $i$ in document $j$ idf = $\log \frac{n}{1 + \text{df}_i}$ where $n$ is the total number of documents and $\text{df}_i$ is the number of documents in which term $i$ occurs. Print the table of tf-idf values for the following document collection ``` s1 = "The quick brown fox" s2 = "Brown fox jumps over the jumps jumps jumps" s3 = "The the the lazy dog elephant." s4 = "The the the the the dog peacock lion tiger elephant" docs = {'s1' Step15: Exercise 3 (20 points). Write a function that takes a matrix $M$ and an integer $k$ as arguments, and reconstructs a reduced matrix using only the $k$ largest singular values. Use the scipy.linagl.svd function to perform the decomposition. This is the least squares approximation to the matrix $M$ in $k$ dimensions. Apply the function you just wrote to the following term-frequency matrix for a set of $9$ documents using $k=2$ and print the reconstructed matrix $M'$. 
M = np.array([[1, 0, 0, 1, 0, 0, 0, 0, 0],
              [1, 0, 1, 0, 0, 0, 0, 0, 0],
              [1, 1, 0, 0, 0, 0, 0, 0, 0],
              [0, 1, 1, 0, 1, 0, 0, 0, 0],
              [0, 1, 1, 2, 0, 0, 0, 0, 0],
              [0, 1, 0, 0, 1, 0, 0, 0, 0],
              [0, 1, 0, 0, 1, 0, 0, 0, 0],
              [0, 0, 1, 1, 0, 0, 0, 0, 0],
              [0, 1, 0, 0, 0, 0, 0, 0, 1],
              [0, 0, 0, 0, 0, 1, 1, 1, 0],
              [0, 0, 0, 0, 0, 0, 1, 1, 1],
              [0, 0, 0, 0, 0, 0, 0, 1, 1]])

Calculate the pairwise correlation matrix for the original matrix M and the reconstructed matrix using $k=2$ singular values (you may use scipy.stats.spearmanr to do the calculations). Consider the first 5 sets of documents as one group $G1$ and the last 4 as another group $G2$ (i.e. first 5 and last 4 columns). What is the average within-group correlation for $G1$, $G2$ and the average cross-group correlation for G1-G2 using either $M$ or $M'$? (Do not include self-correlation in the within-group calculations.)
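For reference (this note is an addition, not part of the original exercise text), the reduced matrix being asked for is the truncated SVD

$$ M_k = U_k \Sigma_k V_k^{T} = \sum_{i=1}^{k} \sigma_i\, u_i v_i^{T}, $$

and by the Eckart-Young theorem $M_k$ minimises $\lVert M - A \rVert_F$ over all matrices $A$ of rank at most $k$; this is the precise sense in which it is the least-squares approximation to $M$ in $k$ dimensions.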
Python Code: %matplotlib inline import numpy as np import matplotlib.pyplot as plt import pandas as pd import scipy.linalg as la import scipy.stats as st Explanation: Topic Modeling wiht Latent Semantic Analysis Latent Semantic Analysis (LSA) is a method for reducing the dimnesionality of documents treated as a bag of words. It is used for document classification, clustering and retrieval. For example, LSA can be used to search for prior art given a new patent application. In this homework, we will implement a small library for simple latent semantic analysis as a practical example of the application of SVD. The ideas are very similar to PCA. We will implement a toy example of LSA to get familiar with the ideas. If you want to use LSA or similar methods for statiscal language analyis, the most efficient Python library is probably gensim - this also provides an online algorithm - i.e. the training information can be continuously updated. Other useful functions for processing natural language can be found in the Natural Lnaguage Toolkit. Note: The SVD from scipy.linalg performs a full decomposition, which is inefficient since we only need to decompose until we get the first k singluar values. If the SVD from scipy.linalg is too slow, please use the sparsesvd function from the sparsesvd package to perform SVD instead. You can install in the usual way with !pip install sparsesvd Then import the following python from sparsesvd import sparsesvd from scipy.sparse import csc_matrix and use as follows python sparsesvd(csc_matrix(M), k=10) End of explanation # Questions 1.1 to 1.5 def squared_euclidean_norm(u, axis=-1): return (u**2).sum(axis) def euclidean_norm(u, axis=-1): return np.sqrt(squared_euclidean_norm(u, axis)) def squared_euclidean_dist(u, v, axis=-1): Returns squared Euclidean distance between two vectors. return squared_euclidean_norm(u-v, axis) def euclidean_dist(u, v, axis=-1): Return Euclidean distacne between two vectors. return np.sqrt(squared_euclidean_dist(u, v, axis)) def cosine_dist(u, v, axis=-1): Returns cosine of angle betwwen two vectors. # return 1 - np.dot(u, v)/(la.norm(u)*la.norm(v)) return 1 - (u * v).sum(axis)/(euclidean_norm(u, axis) * euclidean_norm(v, axis)) def loop_row_pdist(M, f): REturns pairwise-distance matrix assuming M consists of row vectors.. nrows, ncols = M.shape return np.array([[f(M[u,:], M[v,:]) for u in range(nrows)] for v in range(nrows)]) def loop_col_pdist(M, f): REturns pairwise-distance matrix assuming M consists of column vectors.. nrows, ncols = M.shape return np.array([[f(M[:,u], M[:,v]) for u in range(ncols)] for v in range(ncols)]) def broadcast_row_pdist(M, f): REturns pairwise-distance matrix assuming M consists of row vectors.. return f(M[None,:,:], M[:,None,:]) def broadcast_col_pdist(M, f): REturns pairwise-distance matrix assuming M consists of column vectors.. return f(M[:,None,:], M[:,:,None], axis=0) # Q1 checking reuslts M = np.array([[1,2,3],[4,5,6]]) # dist = euclidean_dist for dist in (cosine_dist, euclidean_dist, squared_euclidean_dist): print(loop_row_pdist(M, dist), '\n') print(broadcast_row_pdist(M, dist), '\n') print(loop_col_pdist(M, dist), '\n') print(broadcast_col_pdist(M, dist)) Explanation: Exercise 1 (10 points). Calculating pairwise distance matrices. Suppose we want to construct a distance matrix between the rows of a matrix. 
For example, given the matrix python M = np.array([[1,2,3],[4,5,6]]) the distance matrix using Euclidean distance as the measure would be python [[ 0.000 1.414 2.828] [ 1.414 0.000 1.414] [ 2.828 1.414 0.000]] if $M$ was a collection of column vectors. Write a function to calculate the pairwise-distance matrix given the matrix $M$ and some arbitrary distance function. Your functions should have the following signature: def func_name(M, distance_func): pass Write a distance function for the Euclidean, squared Euclidean and cosine measures. Write the function using looping for M as a collection of row vectors. Write the function using looping for M as a collection of column vectors. Wrtie the function using broadcasting for M as a colleciton of row vectors. Write the function using broadcasting for M as a colleciton of column vectors. For 3 and 4, try to avoid using transposition (but if you get stuck, there will be no penalty for using transpoition). Check that all four functions give the same result when applied to the given matrix $M$. End of explanation # The tf() function is optional - it can also be coded directly into tfs() # Questino 2.1 def tf(doc): Returns the number of times each term occurs in a dcoument. We preprocess the document to strip punctuation and convert to lowercase. Terms are found by splitting on whitespace. from collections import Counter from string import punctuation table = dict.fromkeys(map(ord, punctuation)) terms = doc.lower().replace('-', ' ').translate(table).split() return Counter(terms) def tfs(docs): Create a term freqeuncy dataframe from a dictionary of documents. from operator import add df = pd.DataFrame({k: tf(v) for k, v in docs.items()}).fillna(0) return df # Question 2.2 def idf(docs): Find inverse document frequecny series from a dictionry of doucmnets. term_freq = tfs(docs) num_docs = len(docs) doc_freq = (term_freq > 0).sum(axis=1) return np.log(num_docs/(1 + doc_freq)) # Question 2.3 def tf_idf(docs): Return the product of the term-frequency and inverse document freqeucny. return tfs(docs).mul(idf(docs), axis=0) # Question 2.4 s1 = "The quick brown fox" s2 = "Brown fox jumps over the jumps jumps jumps" s3 = "The the the lazy dog elephant." s4 = "The the the the the dog peacock lion tiger elephant" docs = {'s1': s1, 's2': s2, 's3': s3, 's4': s4} tf_idf(docs) Explanation: Exercise 2 (20 points). Write 3 functions to calculate the term frequency (tf), the inverse document frequency (idf) and the product (tf-idf). Each function should take a single argument docs, which is a dictionary of (key=identifier, value=dcoument text) pairs, and return an appropriately sized array. Convert '-' to ' ' (space), remove punctuation, convert text to lowercase and split on whitespace to generate a collection of terms from the dcoument text. tf = the number of occurrences of term $i$ in document $j$ idf = $\log \frac{n}{1 + \text{df}_i}$ where $n$ is the total number of documents and $\text{df}_i$ is the number of documents in which term $i$ occurs. Print the table of tf-idf values for the following document collection ``` s1 = "The quick brown fox" s2 = "Brown fox jumps over the jumps jumps jumps" s3 = "The the the lazy dog elephant." 
s4 = "The the the the the dog peacock lion tiger elephant" docs = {'s1': s1, 's2': s2, 's3': s3, 's4': s4} ``` End of explanation # Question 3.1 def svd_projection(M, k): Returns the matrix M reconstructed using only k singluar values U, s, V = la.svd(M, full_matrices=False) s[k:] = 0 M_ = U.dot(np.diag(s).dot(V)) try: return pd.DataFrame(M_, index=M.index, columns=M.columns) except AttributeError: return M_ # Qeustion 3.2 M = np.array([[1, 0, 0, 1, 0, 0, 0, 0, 0], [1, 0, 1, 0, 0, 0, 0, 0, 0], [1, 1, 0, 0, 0, 0, 0, 0, 0], [0, 1, 1, 0, 1, 0, 0, 0, 0], [0, 1, 1, 2, 0, 0, 0, 0, 0], [0, 1, 0, 0, 1, 0, 0, 0, 0], [0, 1, 0, 0, 1, 0, 0, 0, 0], [0, 0, 1, 1, 0, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0, 0, 0, 1], [0, 0, 0, 0, 0, 1, 1, 1, 0], [0, 0, 0, 0, 0, 0, 1, 1, 1], [0, 0, 0, 0, 0, 0, 0, 1, 1]]) Md = svd_projection(M, 2) Md # Question 3.3 # Results for full-rank matrix (not graded - just here for comparison) rho, pval = st.spearmanr(M) np.mean(rho[:5, :5][np.tril_indices_from(rho[:5, :5], 1)]), \ np.mean(rho[5:, 5:][np.tril_indices_from(rho[5:, 5:], 1)]), \ rho[5:, :5].mean() rho # Results after LSA (graded) # G1/G1, G2/G2 and G1/G2 average correlation rho, pval = st.spearmanr(Md) np.mean(rho[:5, :5][np.tril_indices_from(rho[:5, :5], 1)]), \ np.mean(rho[5:, 5:][np.tril_indices_from(rho[5:, 5:], 1)]), \ rho[5:, :5].mean() rho Explanation: Exercise 3 (20 points). Write a function that takes a matrix $M$ and an integer $k$ as arguments, and reconstructs a reduced matrix using only the $k$ largest singular values. Use the scipy.linagl.svd function to perform the decomposition. This is the least squares approximation to the matrix $M$ in $k$ dimensions. Apply the function you just wrote to the following term-frequency matrix for a set of $9$ documents using $k=2$ and print the reconstructed matrix $M'$. M = np.array([[1, 0, 0, 1, 0, 0, 0, 0, 0], [1, 0, 1, 0, 0, 0, 0, 0, 0], [1, 1, 0, 0, 0, 0, 0, 0, 0], [0, 1, 1, 0, 1, 0, 0, 0, 0], [0, 1, 1, 2, 0, 0, 0, 0, 0], [0, 1, 0, 0, 1, 0, 0, 0, 0], [0, 1, 0, 0, 1, 0, 0, 0, 0], [0, 0, 1, 1, 0, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0, 0, 0, 1], [0, 0, 0, 0, 0, 1, 1, 1, 0], [0, 0, 0, 0, 0, 0, 1, 1, 1], [0, 0, 0, 0, 0, 0, 0, 1, 1]]) Calculate the pairwise correlation matrix for the original matrix M and the reconstructed matrix using $k=2$ singular values (you may use scipy.stats.spearmanr to do the calculations). Consider the fist 5 sets of documents as one group $G1$ and the last 4 as another group $G2$ (i.e. first 5 and last 4 columns). What is the average within group correlation for $G1$, $G2$ and the average cross-group correlation for G1-G2 using either $M$ or $M'$. (Do not include self-correlation in the within-group calculations.). End of explanation # Quesiton 4.1 import pickle docs = pickle.load(open('pubmed.pic', 'rb')) df = tf_idf(docs) df.shape # Question 4.2 k = 10 T, s, D = la.svd(df) print(T.shape, s.shape, D.shape, '\n') df_10 = T[:,:k].dot(np.diag(s[:k]).dot(D[:k,:])) assert(df.shape == df_10.shape) df_10 # Question 4.2 (alternative solution 1 setting unwanted singluar values to zero) T, s, D = la.svd(df, full_matrices=False) print(T.shape, s.shape, D.shape, '\n') s[10:] = 0 df_10 = T.dot(np.diag(s).dot(D)) assert(df.shape == df_10.shape) df_10 ! 
pip install sparsesvd # Question 4.2 (alternative solution 2 using sparsesvd) from scipy.sparse import csc_matrix from sparsesvd import sparsesvd k = 10 T, s, D = sparsesvd(csc_matrix(df), k=k) print(T.shape, s.shape, D.shape, '\n') df_10 = T.T.dot(np.diag(s).dot(D)) assert(df.shape == df_10.shape) print(df_10) # Question 4.3 from scipy.cluster.hierarchy import linkage, dendrogram from scipy.spatial.distance import pdist, squareform plt.figure(figsize=(16,36)) T, s, D = sparsesvd(csc_matrix(df), k=100) x = np.diag(s).dot(D).T data_dist = pdist(x, metric='cosine') # computing the distance data_link = linkage(data_dist) # computing the linkage labels = [c[:40] for c in df.columns[:]] dendrogram(data_link, orientation='right', labels=labels); # Quesiton 4.4 k = 10 T, s, D = sparsesvd(csc_matrix(df), k=100) doc = {'mystery': open('mystery.txt').read()} terms = tf_idf(doc) query_terms = df.join(terms).fillna(0)['mystery'] q = query_terms.T.dot(T.T.dot(np.diag(1.0/s))) ranked_docs = df.columns[np.argsort(cosine_dist(q, x))][::-1] print("Query article:", ) print(' '.join(line.strip() for line in doc['mystery'].splitlines()[:2])) print() print("Most similar") print('='*80) for i, title in enumerate(ranked_docs[:10]): print('%03d' % i, title) print() print("Most dissimilar") print('='*80) for i, title in enumerate(ranked_docs[-10:]): print('%03d' % (len(docs) - i), title) Explanation: Exercise 4 (40 points). Clustering with LSA Begin by loading a pubmed database of selected article titles using 'pickle'. With the following: import pickle docs = pickle.load(open('pubmed.pic', 'rb')) Create a tf-idf matrix for every term that appears at least once in any of the documents. What is the shape of the tf-idf matrix? Perform SVD on the tf-idf matrix to obtain $U \Sigma V^T$ (often written as $T \Sigma D^T$ in this context with $T$ representing the terms and $D$ representing the documents). If we set all but the top $k$ singular values to 0, the reconstructed matrix is essentially $U_k \Sigma_k V_k^T$, where $U_k$ is $m \times k$, $\Sigma_k$ is $k \times k$ and $V_k^T$ is $k \times n$. Terms in this reduced space are represented by $U_k \Sigma_k$ and documents by $\Sigma_k V^T_k$. Reconstruct the matrix using the first $k=10$ singular values. Use agglomerative hierachical clustering with complete linkage to plot a dendrogram and comment on the likely number of document clusters with $k = 100$. Use the dendrogram function from SciPy . Determine how similar each of the original documents is to the new document mystery.txt. Since $A = U \Sigma V^T$, we also have $V = A^T U S^{-1}$ using orthogonality and the rule for transposing matrix products. This suggests that in order to map the new document to the same concept space, first find the tf-idf vector $v$ for the new document - this must contain all (and only) the terms present in the existing tf-idx matrix. Then the query vector $q$ is given by $v^T U_k \Sigma_k^{-1}$. Find the 10 documents most similar to the new document and the 10 most dissimilar. End of explanation
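Editor's note: to make the broadcasting variant of the pairwise-distance exercise above concrete, here is a small NumPy-only sketch. The helper names (`pdist_rows`, `pdist_cols`) are ours for illustration and are not part of any graded solution.

```python
import numpy as np

def pdist_rows(M):
    """Pairwise Euclidean distances between the row vectors of M via broadcasting."""
    # M[:, None, :] - M[None, :, :] has shape (n, n, p): entry (i, j) is row_i - row_j.
    diffs = M[:, None, :] - M[None, :, :]
    return np.sqrt((diffs ** 2).sum(axis=-1))

def pdist_cols(M):
    """Pairwise Euclidean distances between the column vectors of M via broadcasting."""
    # M[:, :, None] - M[:, None, :] has shape (p, n, n); summing over axis 0 collapses the features.
    diffs = M[:, :, None] - M[:, None, :]
    return np.sqrt((diffs ** 2).sum(axis=0))

M = np.array([[1, 2, 3],
              [4, 5, 6]])
print(pdist_cols(M))   # reproduces the 3x3 matrix quoted in the exercise (0, 1.414, 2.828, ...)
print(pdist_rows(M))   # 2x2 matrix for the two row vectors
```

The squared-Euclidean version simply drops the square root; a cosine measure is easier to build from the normalized dot products `M @ M.T` (or `M.T @ M` for column vectors) than from the difference tensor.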
5,018
Given the following text description, write Python code to implement the functionality described below step by step Description: This problem originated from a blog post I wrote for DataCamp on graph optimization here. The algorithm I sketched out there for solving the Chinese Problem on the Sleeping Giant state park trail network has since been formalized into the postman_problems python library. I've also added the Rural Postman solver that is implemented here. So the three main enhancements in this post from the original DataCamp article and my second iteration published here updating to networkx 2.0 are Step1: Create Graph from OSM Step2: Adding edges that don't exist on OSM, but should Step3: Adding distance to OSM graph Using the haversine formula to calculate distance between each edge. Step4: Create graph of required trails only A simple heuristic with a couple tweaks is all we need to create the graph with required edges Step5: Viz Sleeping Giant Trails All trails required for the Giant Master Step6: <iframe src="https Step7: Add attributes for supplementary edges Step8: Ensuring that we're left with one single connected component Step9: Viz Connected Component The map below visualizes the required edges and nodes of interest (intersections and dead-ends where degree != 2) Step10: <iframe src="https Step11: <iframe src="https Step12: Contract Edges We could run the RPP algorithm on the graph as-is with >5000 edges. However, we can simplify computation by contracting edges into logical trail segments first. More details on the intuition and methodology in the 50 states post. Step13: Edge contraction reduces the number of edges fed to the RPP algorithm by a factor of ~40. Step14: Solve CPP First, let's see how well the Chinese Postman solution works. Create CPP edgelist Step15: Start node The route is designed to start at the far east end of the park on the Blue trail (node '735393342'). While the CPP and RPP solutions will return a Eulerian circuit (loop back to the starting node), we could truncate this last long doublebacking segment when actually running the route <img src="https Step16: CPP Stats (distances in meters) Step17: Solve RPP With the CPP as benchmark, let's see how well we do when we allow for optional edges in the route. Step18: Required vs optional edge counts (1=required and 0=optional) Step19: Solve RPP Step20: RPP Stats (distances in meters) Step21: Leveraging the optional roads and trails, we're able to shave a about 3 miles off the CPP route. Total mileage checks in at 30.71, just under a 50K (30.1 miles). Step22: Viz RPP Solution Step23: Create graph from RPP solution Step24: Viz Step25: <iframe src="https Step26: Edge walks per color Step27: <iframe src="https
Python Code: import mplleaflet import networkx as nx import pandas as pd import matplotlib.pyplot as plt from collections import Counter # can be found in https://github.com/brooksandrew/postman_problems_examples from osm2nx import read_osm, haversine from graph import contract_edges, create_rpp_edgelist from postman_problems.tests.utils import create_mock_csv_from_dataframe from postman_problems.solver import rpp, cpp from postman_problems.stats import calculate_postman_solution_stats Explanation: This problem originated from a blog post I wrote for DataCamp on graph optimization here. The algorithm I sketched out there for solving the Chinese Problem on the Sleeping Giant state park trail network has since been formalized into the postman_problems python library. I've also added the Rural Postman solver that is implemented here. So the three main enhancements in this post from the original DataCamp article and my second iteration published here updating to networkx 2.0 are: 1. OpenStreetMap for graph data and visualization. 2. Implementing the Rural Postman algorithm to consider optional edges. 3. Leveraging the postman_problems library. This code, notebook and data for this post can be found in the postman_problems_examples repo. The motivation and background around this problem is written up more thoroughly in the previous posts and postman_problems. Table of Contents Table of Contents {:toc} End of explanation # load OSM to a directed NX g_d = read_osm('sleepinggiant.osm') # create an undirected graph g = g_d.to_undirected() Explanation: Create Graph from OSM End of explanation g.add_edge('2318082790', '2318082832', id='white_horseshoe_fix_1') Explanation: Adding edges that don't exist on OSM, but should End of explanation for e in g.edges(data=True): e[2]['distance'] = haversine(g.node[e[0]]['lon'], g.node[e[0]]['lat'], g.node[e[1]]['lon'], g.node[e[1]]['lat']) Explanation: Adding distance to OSM graph Using the haversine formula to calculate distance between each edge. End of explanation g_t = g.copy() for e in g.edges(data=True): # remove non trails name = e[2]['name'] if 'name' in e[2] else '' if ('Trail' not in name.split()) or (name is None): g_t.remove_edge(e[0], e[1]) # remove non Sleeping Giant trails elif name in [ 'Farmington Canal Linear Trail', 'Farmington Canal Heritage Trail', 'Montowese Trail', '(white blazes)']: g_t.remove_edge(e[0], e[1]) # cleaning up nodes left without edges for n in nx.isolates(g_t.copy()): g_t.remove_node(n) Explanation: Create graph of required trails only A simple heuristic with a couple tweaks is all we need to create the graph with required edges: Keep any edge with 'Trail' in the name attribute. Manually remove the handful of trails that are not part of the required Giant Master route. 
End of explanation fig, ax = plt.subplots(figsize=(1,8)) pos = {k: (g_t.node[k]['lon'], g_t.node[k]['lat']) for k in g_t.nodes()} nx.draw_networkx_edges(g_t, pos, width=2.5, edge_color='black', alpha=0.7) mplleaflet.save_html(fig, 'maps/sleepinggiant_trails_only.html') Explanation: Viz Sleeping Giant Trails All trails required for the Giant Master: End of explanation edge_ids_to_add = [ '223082783', '223077827', '40636272', '223082785', '222868698', '223083721', '222947116', '222711152', '222711155', '222860964', '223083718', '222867540', 'white_horseshoe_fix_1' ] edge_ids_to_remove = [ '17220599' ] Explanation: <iframe src="https://cdn.rawgit.com/brooksandrew/postman_problems_examples/master/sleepinggiant/maps/sleepinggiant_trails_only.html" height="400" width="750"></iframe> Connect Edges In order to run the RPP algorithm from postman_problems, the required edges of the graph must form a single connected component. We're almost there with the Sleeping Giant trail map as-is, so we'll just connect a few components manually. Here's an example of a few floating components (southwest corner of park): <img src="https://github.com/brooksandrew/postman_problems_examples/raw/master/sleepinggiant/fig/sleepinggiant_disconnected_components.png" width="500"> OpenStreetMap makes finding these edge (way) IDs simple. Once grabbing the ? cursor, you can click on any edge to retrieve IDs and attributes. <img src="https://github.com/brooksandrew/postman_problems_examples/raw/master/sleepinggiant/fig/osm_edge_lookup.png" width="1000"> Define OSM edges to add and remove from graph End of explanation for e in g.edges(data=True): way_id = e[2].get('id').split('-')[0] if way_id in edge_ids_to_add: g_t.add_edge(e[0], e[1], **e[2]) g_t.add_node(e[0], lat=g.node[e[0]]['lat'], lon=g.node[e[0]]['lon']) g_t.add_node(e[1], lat=g.node[e[1]]['lat'], lon=g.node[e[1]]['lon']) if way_id in edge_ids_to_remove: if g_t.has_edge(e[0], e[1]): g_t.remove_edge(e[0], e[1]) for n in nx.isolates(g_t.copy()): g_t.remove_node(n) Explanation: Add attributes for supplementary edges End of explanation len(list(nx.connected_components(g_t))) Explanation: Ensuring that we're left with one single connected component: End of explanation fig, ax = plt.subplots(figsize=(1,12)) # edges pos = {k: (g_t.node[k].get('lon'), g_t.node[k].get('lat')) for k in g_t.nodes()} nx.draw_networkx_edges(g_t, pos, width=3.0, edge_color='black', alpha=0.6) # nodes (intersections and dead-ends) pos_x = {k: (g_t.node[k]['lon'], g_t.node[k]['lat']) for k in g_t.nodes() if (g_t.degree(k)==1) | (g_t.degree(k)>2)} nx.draw_networkx_nodes(g_t, pos_x, nodelist=pos_x.keys(), node_size=35.0, node_color='red', alpha=0.9) mplleaflet.save_html(fig, 'maps/trails_only_intersections.html') Explanation: Viz Connected Component The map below visualizes the required edges and nodes of interest (intersections and dead-ends where degree != 2): End of explanation name2color = { 'Green Trail': 'green', 'Quinnipiac Trail': 'blue', 'Tower Trail': 'black', 'Yellow Trail': 'yellow', 'Red Square Trail': 'red', 'White/Blue Trail Link': 'lightblue', 'Orange Trail': 'orange', 'Mount Carmel Avenue': 'black', 'Violet Trail': 'violet', 'blue Trail': 'blue', 'Red Triangle Trail': 'red', 'Blue Trail': 'blue', 'Blue/Violet Trail Link': 'purple', 'Red Circle Trail': 'red', 'White Trail': 'gray', 'Red Diamond Trail': 'red', 'Yellow/Green Trail Link': 'yellowgreen', 'Nature Trail': 'forestgreen', 'Red Hexagon Trail': 'red', None: 'black' } fig, ax = plt.subplots(figsize=(1,10)) pos = {k: 
(g_t.node[k]['lon'], g_t.node[k]['lat']) for k in g_t.nodes()} e_color = [name2color[e[2].get('name')] for e in g_t.edges(data=True)] nx.draw_networkx_edges(g_t, pos, width=3.0, edge_color=e_color, alpha=0.5) nx.draw_networkx_nodes(g_t, pos_x, nodelist=pos_x.keys(), node_size=30.0, node_color='black', alpha=0.9) mplleaflet.save_html(fig, 'maps/trails_only_color.html', tiles='cartodb_positron') Explanation: <iframe src="https://cdn.rawgit.com/brooksandrew/postman_problems_examples/master/sleepinggiant/maps/trails_only_intersections.html" height="400" width="750"></iframe> Viz Trail Color Because we can and it's pretty. End of explanation print('{:0.2f} miles of required trail.'.format(sum([e[2]['distance']/1609.34 for e in g_t.edges(data=True)]))) Explanation: <iframe src="https://cdn.rawgit.com/brooksandrew/postman_problems_examples/master/sleepinggiant/maps/trails_only_color.html" height="400" width="750"></iframe> Check distance This is strikingly close (within 0.25 miles) to what I calculated manually with some guess work from the SG trail map on the first pass at this problem here, before leveraging OSM. End of explanation print('Number of edges in trail graph: {}'.format(len(g_t.edges()))) # intialize contracted graph g_tc = nx.MultiGraph() # add contracted edges to graph for ce in contract_edges(g_t, 'distance'): start_node, end_node, distance, path = ce contracted_edge = { 'start_node': start_node, 'end_node': end_node, 'distance': distance, 'name': g[path[0]][path[1]].get('name'), 'required': 1, 'path': path } g_tc.add_edge(start_node, end_node, **contracted_edge) g_tc.node[start_node]['lat'] = g.node[start_node]['lat'] g_tc.node[start_node]['lon'] = g.node[start_node]['lon'] g_tc.node[end_node]['lat'] = g.node[end_node]['lat'] g_tc.node[end_node]['lon'] = g.node[end_node]['lon'] Explanation: Contract Edges We could run the RPP algorithm on the graph as-is with >5000 edges. However, we can simplify computation by contracting edges into logical trail segments first. More details on the intuition and methodology in the 50 states post. End of explanation print('Number of edges in contracted trail graoh: {}'.format(len(g_tc.edges()))) Explanation: Edge contraction reduces the number of edges fed to the RPP algorithm by a factor of ~40. End of explanation # create list with edge attributes and "from" & "to" nodes tmp = [] for e in g_tc.edges(data=True): tmpi = e[2].copy() # so we don't mess w original graph tmpi['start_node'] = e[0] tmpi['end_node'] = e[1] tmp.append(tmpi) # create dataframe w node1 and node2 in order eldf = pd.DataFrame(tmp) eldf = eldf[['start_node', 'end_node'] + list(set(eldf.columns)-{'start_node', 'end_node'})] # create edgelist mock CSV elfn = create_mock_csv_from_dataframe(eldf) Explanation: Solve CPP First, let's see how well the Chinese Postman solution works. Create CPP edgelist End of explanation circuit_cpp, gcpp = cpp(elfn, start_node='735393342') Explanation: Start node The route is designed to start at the far east end of the park on the Blue trail (node '735393342'). 
While the CPP and RPP solutions will return a Eulerian circuit (loop back to the starting node), we could truncate this last long doublebacking segment when actually running the route <img src="https://github.com/brooksandrew/postman_problems_examples/raw/master/sleepinggiant/fig/sleepinggiant_starting_node.png" width="600"> Solve End of explanation cpp_stats = calculate_postman_solution_stats(circuit_cpp) cpp_stats print('Miles in CPP solution: {:0.2f}'.format(cpp_stats['distance_walked']/1609.34)) Explanation: CPP Stats (distances in meters) End of explanation %%time dfrpp = create_rpp_edgelist(g_tc, graph_full=g, edge_weight='distance', max_distance=2500) Explanation: Solve RPP With the CPP as benchmark, let's see how well we do when we allow for optional edges in the route. End of explanation Counter( dfrpp['required']) Explanation: Required vs optional edge counts (1=required and 0=optional) End of explanation # create mockfilename elfn = create_mock_csv_from_dataframe(dfrpp) %%time # solve circuit_rpp, grpp = rpp(elfn, start_node='735393342') Explanation: Solve RPP End of explanation rpp_stats = calculate_postman_solution_stats(circuit_rpp) rpp_stats Explanation: RPP Stats (distances in meters) End of explanation print('Miles in RPP solution: {:0.2f}'.format(rpp_stats['distance_walked']/1609.34)) Explanation: Leveraging the optional roads and trails, we're able to shave a about 3 miles off the CPP route. Total mileage checks in at 30.71, just under a 50K (30.1 miles). End of explanation # hack to convert 'path' from str back to list. Caused by `create_mock_csv_from_dataframe` for e in circuit_rpp: if type(e[3]['path']) == str: exec('e[3]["path"]=' + e[3]["path"]) Explanation: Viz RPP Solution End of explanation g_tcg = g_tc.copy() # calc shortest path between optional nodes and add to graph for e in circuit_rpp: granular_type = 'trail' if e[3]['required'] else 'optional' # add granular optional edges to g_tcg path = e[3]['path'] for pair in list(zip(path[:-1], path[1:])): if (g_tcg.has_edge(pair[0], pair[1])) and (g_tcg[pair[0]][pair[1]][0].get('granular_type') == 'optional'): g_tcg[pair[0]][pair[1]][0]['granular_type'] = 'trail' else: g_tcg.add_edge(pair[0], pair[1], granular='True', granular_type=granular_type) # add granular nodes from optional edge paths to g_tcg for n in path: g_tcg.add_node(n, lat=g.node[n]['lat'], lon=g.node[n]['lon']) Explanation: Create graph from RPP solution End of explanation fig, ax = plt.subplots(figsize=(1,8)) pos = {k: (g_tcg.node[k].get('lon'), g_tcg.node[k].get('lat')) for k in g_tcg.nodes()} el_opt = [e for e in g_tcg.edges(data=True) if e[2].get('granular_type') == 'optional'] nx.draw_networkx_edges(g_tcg, pos, edgelist=el_opt, width=6.0, edge_color='blue', alpha=1.0) el_tr = [e for e in g_tcg.edges(data=True) if e[2].get('granular_type') == 'trail'] nx.draw_networkx_edges(g_tcg, pos, edgelist=el_tr, width=3.0, edge_color='black', alpha=0.8) mplleaflet.save_html(fig, 'maps/rpp_solution_opt_edges.html', tiles='cartodb_positron') Explanation: Viz: RPP optional edges The RPP algorithm picks up some logical shortcuts using the optional trails and a couple short stretches of road. 
<font color='black'>black</font>: required trails <font color='blue'>blue</font>: optional trails and roads End of explanation ## Create graph directly from rpp_circuit and original graph w lat/lon (g) color_seq = [None, 'black', 'magenta', 'orange', 'yellow'] grppviz = nx.MultiGraph() for e in circuit_rpp: for n1, n2 in zip(e[3]['path'][:-1], e[3]['path'][1:]): if grppviz.has_edge(n1, n2): grppviz[n1][n2][0]['linewidth'] += 2 grppviz[n1][n2][0]['cnt'] += 1 else: grppviz.add_edge(n1, n2, linewidth=2.5) grppviz[n1][n2][0]['color_st'] = 'black' if g_t.has_edge(n1, n2) else 'red' grppviz[n1][n2][0]['cnt'] = 1 grppviz.add_node(n1, lat=g.node[n1]['lat'], lon=g.node[n1]['lon']) grppviz.add_node(n2, lat=g.node[n2]['lat'], lon=g.node[n2]['lon']) for e in grppviz.edges(data=True): e[2]['color_cnt'] = color_seq[1] if 'cnt' not in e[2] else color_seq[e[2]['cnt'] ] Explanation: <iframe src="https://cdn.rawgit.com/brooksandrew/postman_problems_examples/master/sleepinggiant/maps/rpp_solution_opt_edges.html" height="400" width="750"></iframe> Viz: RPP edges counts End of explanation fig, ax = plt.subplots(figsize=(1,10)) pos = {k: (grppviz.node[k]['lon'], grppviz.node[k]['lat']) for k in grppviz.nodes()} e_width = [e[2]['linewidth'] for e in grppviz.edges(data=True)] e_color = [e[2]['color_cnt'] for e in grppviz.edges(data=True)] nx.draw_networkx_edges(grppviz, pos, width=e_width, edge_color=e_color, alpha=0.7) mplleaflet.save_html(fig, 'maps/rpp_solution_edge_cnts.html', tiles='cartodb_positron') Explanation: Edge walks per color: <font color='black'>black</font>: 1 <br> <font color='magenta'>magenta</font>: 2 <br> End of explanation geojson = {'features':[], 'type': 'FeatureCollection'} time = 0 path = list(reversed(circuit_rpp[0][3]['path'])) for e in circuit_rpp: if e[3]['path'][0] != path[-1]: path = list(reversed(e[3]['path'])) else: path = e[3]['path'] for n in path: time += 1 doc = {'type': 'Feature', 'properties': { 'latitude': g.node[n]['lat'], 'longitude': g.node[n]['lon'], 'time': time, 'id': e[3].get('id') }, 'geometry':{ 'type': 'Point', 'coordinates': [g.node[n]['lon'], g.node[n]['lat']] } } geojson['features'].append(doc) with open('circuit_rpp.geojson','w') as f: json.dump(geojson, f) Explanation: <iframe src="https://cdn.rawgit.com/brooksandrew/postman_problems_examples/master/sleepinggiant/maps/rpp_solution_edge_cnts.html" height="400" width="750"></iframe> Create geojson solution Used for the forthcoming D3 route animation. End of explanation
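Editor's note: to see the core Chinese Postman intuition behind this example without the OSM data or the postman_problems internals, here is a self-contained toy sketch using only plain networkx calls. The graph and its distances are made up; the real solver pairs up all odd-degree intersections by minimum-weight matching, which this sketch reduces to a single pair.

```python
import networkx as nx

# Toy trail network; the edge attribute plays the role of 'distance' above.
G = nx.Graph()
G.add_weighted_edges_from(
    [('a', 'b', 2), ('b', 'c', 1), ('c', 'd', 2), ('d', 'a', 1), ('a', 'c', 3)],
    weight='distance')

# Nodes of odd degree are what force a postman route to repeat edges.
odd_nodes = [n for n, d in G.degree() if d % 2 == 1]
print('odd-degree nodes:', odd_nodes)   # ['a', 'c'] for this toy graph

# Duplicate the shortest path between the odd pair so every degree becomes even.
path = nx.shortest_path(G, 'a', 'c', weight='distance')
eulerian = nx.MultiGraph(G)
eulerian.add_edges_from(zip(path[:-1], path[1:]))

print('eulerian now?', nx.is_eulerian(eulerian))
print(list(nx.eulerian_circuit(eulerian, source='a')))
```

At park scale the same idea applies: pair odd-degree intersections by shortest path, duplicate those paths, then walk an Eulerian circuit on the augmented multigraph.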
5,019
Given the following text description, write Python code to implement the functionality described below step by step Description: Demonstration of distribution reweighting hep_ml.reweight contains methods to reweight distributions. Typically we use reweighting of monte-carlo to fight drawbacks of simulation, though there are many applications. In this example we reweight multidimensional distibutions Step1: Downloading data Step2: prepare train and test samples train part is used to train reweighting rule test part is used to evaluate reweighting rule comparing the following things Step3: Original distributions KS = Kolmogorov-Smirnov distance Step4: train part of original distribution Step5: test part for target distribution Step6: Bins-based reweighting in n dimensions Typical way to reweight distributions is based on bins. Step7: Gradient Boosted Reweighter This algorithm is inspired by gradient boosting and is able to fight curse of dimensionality. It uses decision trees and special loss functiion (ReweightLossFunction). GBReweighter supports negative weights (to reweight MC to splotted real data). Step8: Comparing some simple expressions Step9: GB-discrimination let's check how well the classifier is able to distinguish these distributions. ROC AUC is taken as measure of quality. For this puprose we split data into train and test, then train a classifier do distinguish these distributions. If ROC AUC = 0.5 on test, distibutions are equal, if ROC AUC = 1.0, they are ideally separable. Step10: Folding reweighter With FoldingReweighter one can simpler do cross-validation and at the end obtain unbiased weights for the whole original samples Step11: GB discrimination for reweighting rule
Python Code: %pylab inline figsize(16, 8) import root_numpy import pandas from hep_ml import reweight Explanation: Demonstration of distribution reweighting hep_ml.reweight contains methods to reweight distributions. Typically we use reweighting of monte-carlo to fight drawbacks of simulation, though there are many applications. In this example we reweight multidimensional distibutions: original and target, the aim is to find new weights for original distribution, such that these multidimensional distributions will coincide. These is a toy example without real physical meaning. Pay attention: equality of distibutions for each feature $\neq$ equality of multivariate dist All samples are divided into training and validation part. Training part is used to fit reweighting rule and test part is used to estimate reweighting quality. End of explanation storage = 'https://github.com/arogozhnikov/hep_ml/blob/data/data_to_download/' !wget -O ../data/MC_distribution.root -nc $storage/MC_distribution.root?raw=true !wget -O ../data/RD_distribution.root -nc $storage/RD_distribution.root?raw=true columns = ['hSPD', 'pt_b', 'pt_phi', 'vchi2_b', 'mu_pt_sum'] original = root_numpy.root2array('../data/MC_distribution.root', branches=columns) target = root_numpy.root2array('../data/RD_distribution.root', branches=columns) original = pandas.DataFrame(original) target = pandas.DataFrame(target) original_weights = numpy.ones(len(original)) Explanation: Downloading data End of explanation from sklearn.cross_validation import train_test_split # divide original samples into training ant test parts original_train, original_test = train_test_split(original) # divide target samples into training ant test parts target_train, target_test = train_test_split(target) original_weights_train = numpy.ones(len(original_train)) original_weights_test = numpy.ones(len(original_test)) from hep_ml.metrics_utils import ks_2samp_weighted hist_settings = {'bins': 100, 'normed': True, 'alpha': 0.7} def draw_distributions(original, target, new_original_weights): for id, column in enumerate(columns, 1): xlim = numpy.percentile(numpy.hstack([target[column]]), [0.01, 99.99]) subplot(2, 3, id) hist(original[column], weights=new_original_weights, range=xlim, **hist_settings) hist(target[column], range=xlim, **hist_settings) title(column) print 'KS over ', column, ' = ', ks_2samp_weighted(original[column], target[column], weights1=new_original_weights, weights2=numpy.ones(len(target), dtype=float)) Explanation: prepare train and test samples train part is used to train reweighting rule test part is used to evaluate reweighting rule comparing the following things: Kolmogorov-Smirnov distances for 1d projections n-dim distibutions using ML (see below). End of explanation # pay attention, actually we have very few data len(original), len(target) draw_distributions(original, target, original_weights) Explanation: Original distributions KS = Kolmogorov-Smirnov distance End of explanation draw_distributions(original_train, target_train, original_weights_train) Explanation: train part of original distribution End of explanation draw_distributions(original_test, target_test, original_weights_test) Explanation: test part for target distribution End of explanation bins_reweighter = reweight.BinsReweighter(n_bins=20, n_neighs=1.) 
bins_reweighter.fit(original_train, target_train) bins_weights_test = bins_reweighter.predict_weights(original_test) # validate reweighting rule on the test part comparing 1d projections draw_distributions(original_test, target_test, bins_weights_test) Explanation: Bins-based reweighting in n dimensions Typical way to reweight distributions is based on bins. End of explanation reweighter = reweight.GBReweighter(n_estimators=50, learning_rate=0.1, max_depth=3, min_samples_leaf=1000, gb_args={'subsample': 0.4}) reweighter.fit(original_train, target_train) gb_weights_test = reweighter.predict_weights(original_test) # validate reweighting rule on the test part comparing 1d projections draw_distributions(original_test, target_test, gb_weights_test) Explanation: Gradient Boosted Reweighter This algorithm is inspired by gradient boosting and is able to fight curse of dimensionality. It uses decision trees and special loss functiion (ReweightLossFunction). GBReweighter supports negative weights (to reweight MC to splotted real data). End of explanation def check_ks_of_expression(expression): col_original = original_test.eval(expression, engine='python') col_target = target_test.eval(expression, engine='python') w_target = numpy.ones(len(col_target), dtype='float') print 'No reweight KS:', ks_2samp_weighted(col_original, col_target, weights1=original_weights_test, weights2=w_target) print 'Bins reweight KS:', ks_2samp_weighted(col_original, col_target, weights1=bins_weights_test, weights2=w_target) print 'GB Reweight KS:', ks_2samp_weighted(col_original, col_target, weights1=gb_weights_test, weights2=w_target) check_ks_of_expression('hSPD') check_ks_of_expression('hSPD * pt_phi') check_ks_of_expression('hSPD * pt_phi * vchi2_b') check_ks_of_expression('pt_b * pt_phi / hSPD ') check_ks_of_expression('hSPD * pt_b * vchi2_b / pt_phi') Explanation: Comparing some simple expressions: the most interesting is checking some other variables in multidimensional distributions (those are expressed via original variables). End of explanation from sklearn.ensemble import GradientBoostingClassifier from sklearn.cross_validation import train_test_split from sklearn.metrics import roc_auc_score data = numpy.concatenate([original_test, target_test]) labels = numpy.array([0] * len(original_test) + [1] * len(target_test)) weights = {} weights['original'] = original_weights_test weights['bins'] = bins_weights_test weights['gb_weights'] = gb_weights_test for name, new_weights in weights.items(): W = numpy.concatenate([new_weights / new_weights.sum() * len(target_test), [1] * len(target_test)]) Xtr, Xts, Ytr, Yts, Wtr, Wts = train_test_split(data, labels, W, random_state=42, train_size=0.51) clf = GradientBoostingClassifier(subsample=0.3, n_estimators=50).fit(Xtr, Ytr, sample_weight=Wtr) print name, roc_auc_score(Yts, clf.predict_proba(Xts)[:, 1], sample_weight=Wts) Explanation: GB-discrimination let's check how well the classifier is able to distinguish these distributions. ROC AUC is taken as measure of quality. For this puprose we split data into train and test, then train a classifier do distinguish these distributions. If ROC AUC = 0.5 on test, distibutions are equal, if ROC AUC = 1.0, they are ideally separable. 
End of explanation from hep_ml.reweight import FoldingReweighter # define base reweighter reweighter_base = reweight.GBReweighter(n_estimators=50, learning_rate=0.1, max_depth=3, min_samples_leaf=1000, gb_args={'subsample': 0.4}) reweighter = reweight.FoldingReweighter(reweighter_base, n_folds=2) # it is not needed divide data into train/test parts; rewighter can be train on the whole samples reweighter.fit(original, target) # predict method provides unbiased weights prediction for the whole sample # folding reweighter contains two reweighters, each is trained on one half of samples # during predictions each reweighter predicts another half of samples not used in training folding_weights = reweighter.predict_weights(original) draw_distributions(original, target, folding_weights) Explanation: Folding reweighter With FoldingReweighter one can simpler do cross-validation and at the end obtain unbiased weights for the whole original samples End of explanation data = numpy.concatenate([original, target]) labels = numpy.array([0] * len(original) + [1] * len(target)) weights = {} weights['original'] = original_weights weights['2-folding'] = folding_weights for name, new_weights in weights.items(): W = numpy.concatenate([new_weights / new_weights.sum() * len(target), [1] * len(target)]) Xtr, Xts, Ytr, Yts, Wtr, Wts = train_test_split(data, labels, W, random_state=42, train_size=0.51) clf = GradientBoostingClassifier(subsample=0.3, n_estimators=30).fit(Xtr, Ytr, sample_weight=Wtr) print name, roc_auc_score(Yts, clf.predict_proba(Xts)[:, 1], sample_weight=Wts) Explanation: GB discrimination for reweighting rule End of explanation
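Editor's note: for readers new to the idea, the bins-based reweighter used above boils down to a per-bin density ratio. Here is a minimal one-dimensional NumPy-only sketch with toy Gaussian samples standing in for simulation and data; nothing below touches the hep_ml internals.

```python
import numpy as np

rng = np.random.RandomState(0)
original = rng.normal(0.0, 1.2, size=20000)   # stands in for Monte Carlo
target   = rng.normal(0.5, 1.0, size=20000)   # stands in for real data

# 1-D version of the bins reweighter: weight = target density / original density per bin.
edges = np.linspace(-4, 4, 21)
h_orig, _ = np.histogram(original, bins=edges, density=True)
h_targ, _ = np.histogram(target, bins=edges, density=True)
ratio = np.where(h_orig > 0, h_targ / np.maximum(h_orig, 1e-12), 1.0)

# Look up each original event's bin and assign that bin's ratio as its weight.
idx = np.clip(np.digitize(original, edges) - 1, 0, len(ratio) - 1)
weights = ratio[idx]

print('unweighted mean: ', original.mean())
print('reweighted mean: ', np.average(original, weights=weights))  # should move toward ~0.5
```

The GBReweighter in the example in effect generalizes this to many dimensions at once by letting boosted decision trees define the "bins".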
5,020
Given the following text description, write Python code to implement the functionality described below step by step Description: Discerning Haggis 2016-ml-contest submission Author Step1: Convenience functions Step2: Load, treat and color data We try smoothing the data using several windows. Step3: Retrain and predict Finally we train a neural network using all data available, and apply it to our blind test.
Python Code: %matplotlib inline import pandas as pd import numpy as np import matplotlib as mpl import matplotlib.pyplot as plt import matplotlib.colors as colors from mpl_toolkits.axes_grid1 import make_axes_locatable import seaborn as sns sns.set(style='whitegrid', rc={'lines.linewidth': 2.5, 'figure.figsize': (10, 8), 'text.usetex': False, # 'font.family': 'sans-serif', # 'font.sans-serif': 'Optima LT Std', }) from pandas import set_option set_option("display.max_rows", 10) pd.options.mode.chained_assignment = None from sklearn import preprocessing from sklearn.model_selection import train_test_split from sklearn.neural_network import MLPClassifier from sklearn.metrics import confusion_matrix from scipy.stats import truncnorm Explanation: Discerning Haggis 2016-ml-contest submission Author: Carlos Alberto da Costa Filho, University of Edinburgh Load libraries End of explanation def make_facies_log_plot(logs, facies_colors): #make sure logs are sorted by depth logs = logs.sort_values(by='Depth') cmap_facies = colors.ListedColormap( facies_colors[0:len(facies_colors)], 'indexed') ztop=logs.Depth.min(); zbot=logs.Depth.max() cluster=np.repeat(np.expand_dims(logs['Facies'].values,1), 100, 1) f, ax = plt.subplots(nrows=1, ncols=6, figsize=(8, 12)) ax[0].plot(logs.GR, logs.Depth, '-g') ax[1].plot(logs.ILD_log10, logs.Depth, '-') ax[2].plot(logs.DeltaPHI, logs.Depth, '-', color='0.5') ax[3].plot(logs.PHIND, logs.Depth, '-', color='r') ax[4].plot(logs.PE, logs.Depth, '-', color='black') im=ax[5].imshow(cluster, interpolation='none', aspect='auto', cmap=cmap_facies,vmin=1,vmax=9) divider = make_axes_locatable(ax[5]) cax = divider.append_axes("right", size="20%", pad=0.05) cbar=plt.colorbar(im, cax=cax) cbar.set_label((17*' ').join([' SS ', 'CSiS', 'FSiS', 'SiSh', ' MS ', ' WS ', ' D ', ' PS ', ' BS '])) cbar.set_ticks(range(0,1)); cbar.set_ticklabels('') for i in range(len(ax)-1): ax[i].set_ylim(ztop,zbot) ax[i].invert_yaxis() ax[i].grid() ax[i].locator_params(axis='x', nbins=3) ax[0].set_xlabel("GR") ax[0].set_xlim(logs.GR.min(),logs.GR.max()) ax[1].set_xlabel("ILD_log10") ax[1].set_xlim(logs.ILD_log10.min(),logs.ILD_log10.max()) ax[2].set_xlabel("DeltaPHI") ax[2].set_xlim(logs.DeltaPHI.min(),logs.DeltaPHI.max()) ax[3].set_xlabel("PHIND") ax[3].set_xlim(logs.PHIND.min(),logs.PHIND.max()) ax[4].set_xlabel("PE") ax[4].set_xlim(logs.PE.min(),logs.PE.max()) ax[5].set_xlabel('Facies') ax[1].set_yticklabels([]); ax[2].set_yticklabels([]); ax[3].set_yticklabels([]) ax[4].set_yticklabels([]); ax[5].set_yticklabels([]) ax[5].set_xticklabels([]) f.suptitle('Well: %s'%logs.iloc[0]['Well Name'], fontsize=14,y=0.94) def accuracy(conf): total_correct = 0. nb_classes = conf.shape[0] for i in np.arange(0,nb_classes): total_correct += conf[i][i] acc = total_correct/sum(sum(conf)) return acc adjacent_facies = np.array([[1], [0,2], [1], [4], [3,5], [4,6,7], [5,7], [5,6,8], [6,7]]) def accuracy_adjacent(conf, adjacent_facies): nb_classes = conf.shape[0] total_correct = 0. 
for i in np.arange(0,nb_classes): total_correct += conf[i][i] for j in adjacent_facies[i]: total_correct += conf[i][j] return total_correct / sum(sum(conf)) Explanation: Convenience functions End of explanation validationFull = pd.read_csv('../validation_data_nofacies.csv') training_data = pd.read_csv('../facies_vectors.csv') # Treat Data training_data.fillna(training_data.mean(),inplace=True) training_data['Well Name'] = training_data['Well Name'].astype('category') training_data['Formation'] = training_data['Formation'].astype('category') training_data['Well Name'].unique() training_data.describe() # Color Data # 1=sandstone 2=c_siltstone 3=f_siltstone # 4=marine_silt_shale 5=mudstone 6=wackestone 7=dolomite # 8=packstone 9=bafflestone facies_colors = ['#F4D03F', '#F5B041','#DC7633','#6E2C00', '#1B4F72','#2E86C1', '#AED6F1', '#A569BD', '#196F3D'] facies_labels = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS', 'WS', 'D','PS', 'BS'] #facies_color_map is a dictionary that maps facies labels #to their respective colors facies_color_map = {} for ind, label in enumerate(facies_labels): facies_color_map[label] = facies_colors[ind] def label_facies(row, labels): return labels[ row['Facies'] -1] training_data.loc[:,'FaciesLabels'] = training_data.apply(lambda row: label_facies(row, facies_labels), axis=1) #make_facies_log_plot( # training_data[training_data['Well Name'] == 'SHRIMPLIN'], # facies_colors) correct_facies_labels = training_data['Facies'].values feature_vectors = training_data.drop(['Formation', 'Well Name', 'Depth','Facies','FaciesLabels'], axis=1) feature_vectors.describe() scaler = preprocessing.StandardScaler().fit(feature_vectors) scaled_features = scaler.transform(feature_vectors) X_train, X_test, y_train, y_test = train_test_split(scaled_features, correct_facies_labels, test_size=0.2) clf = MLPClassifier(solver='lbfgs', alpha=.1, hidden_layer_sizes=(300,300,300)) clf.fit(X_train,y_train) conf_te = confusion_matrix(y_test, clf.predict(X_test)) print('Predicted accuracy: %.3f%%' % (100*accuracy(conf_te),)) Explanation: Load, treat and color data We try smoothing the data using several windows. End of explanation clf_final = MLPClassifier(solver='lbfgs', alpha=0.1, hidden_layer_sizes=(300,300,300)) clf_final.fit(scaled_features,correct_facies_labels) validationFullsm = validationFull.copy() validation_features = validationFullsm.drop(['Formation', 'Well Name', 'Depth'], axis=1) scaled_validation = scaler.transform(validation_features) validation_output = clf_final.predict(scaled_validation) validationFull['Facies']=validation_output validationFull.to_csv('well_data_with_facies_DH_sub3.csv') Explanation: Retrain and predict Finally we train a neural network using all data available, and apply it to our blind test. End of explanation
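Editor's note: the hyperparameters above (alpha=0.1, three hidden layers of 300 units) are taken as given. One hedged way to sanity-check such choices is cross-validation with the scaler wrapped inside a pipeline, so it is refit on each training fold. The data below is a synthetic stand-in, not the facies logs, the smaller hidden layers are only to keep the sketch fast, and the module paths assume a reasonably recent scikit-learn.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

# Stand-in data with the same shape idea as the facies problem (7 log features, 9 classes).
X, y = make_classification(n_samples=2000, n_features=7, n_informative=5,
                           n_classes=9, n_clusters_per_class=1, random_state=0)

# Bundling the scaler with the classifier keeps "fit the scaler on train only"
# automatic inside every cross-validation fold.
model = make_pipeline(StandardScaler(),
                      MLPClassifier(solver='lbfgs', alpha=0.1,
                                    hidden_layer_sizes=(100, 100), max_iter=500))

scores = cross_val_score(model, X, y, cv=5)
print('fold accuracies:', np.round(scores, 3))
```

Swapping `cross_val_score` for `GridSearchCV` over `alpha` and `hidden_layer_sizes` would turn this sanity check into an actual hyperparameter search.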
5,021
Given the following text description, write Python code to implement the functionality described below step by step Description: Bienvenid@s a Jupyter Los cuadernos de Jupyter son una herramienta interactiva que te permite preparar documentos con código ejecutable, ecuaciones, texto, imágenes, videos, entre otros, que te ayuda a enriquecer o explicar la lógica detallada de tu código. Los cuadernos de jupyter son comúnmente usados en Step1: Cada celda la puedes usar para escribir el código que tu quieras y si de repente se te olvida alguna función o tienes duda de si el nombre es correcto IPython es muy amable en ese sentido. Para saber acerca de una función, es decir cuál es su salida o los parámetros que necesita puedes usar el signo de interrogación al final del nombre de la función. Ejercicio 2 En la siguiente celda busca las siguientes funciones Step2: Ejercicio 3 Empieza a escribir las primeras tres letras de cada elemento de la celda anterior y presiona tab para ver si se puede autocompletar También hay funciones mágicas que nos permitirán hacer diversas tareas como mostrar las gráficas que se produzcan en el código dentro de una celda, medir el tiempo de ejecución del código y cambiar del directorio de trabajo, entre otras. para ver qué funciones mágicas hay en Jupyter sólo tienes que escribir python %magic Todas las funciones "mágicas" empiezan con el signo de porcentaje % Gráficas Ahora veremos unos ejemplos de gráficas y cómo hacerlas interactivas. Estos ejemplos fueron tomados de la libreta para demostración de nature Step4: La gráfica que estás viendo sigue la siguiente ecuación $$y=x^2$$ Ejercicio 4 Edita el código de arriba y vuélvelo a correr pero ahora intenta reemplazar la expresión
Python Code: # Lo primero que ejecutarás será 'Hola Jupyter' print('Hola Jupyter') Explanation: Bienvenid@s a Jupyter Los cuadernos de Jupyter son una herramienta interactiva que te permite preparar documentos con código ejecutable, ecuaciones, texto, imágenes, videos, entre otros, que te ayuda a enriquecer o explicar la lógica detallada de tu código. Los cuadernos de jupyter son comúnmente usados en: ciencia, análisis de datos y educación. Eso no significa que son de uso exclusivo para esos casos, en mi experiencia los cuadernos me han ayudado a organizar mis pensamientos y visualizar mejor los datos que analizo. Tal vez te preguntes ¿qué puede hacer Jupyter por mí? Bueno, aquí hay unas cuántas ventajas de usar Jupyter: Trabajar en tu lenguaje preferido. Jupyter tiene soporte para más de 40 lenguajes de programación, incluyendo los que son populares para el análisis de datos como Python (obviamente), R, Julia y Scala. Así que si por algún extraño motivo decides que no te gustó python (lo dudo :P) puedes seguir disfrutando las libretas de jupyter. Compartir tus libretas con quien quieras. En donde quieras y como quieras. Esto significa que puedes escribir tareas y reportes en tus libretas y mandarlos a alguien específico o subirlos a tu repositorio de Github y que todo el mundo sepa lo que hiciste. Esto es como reproducibilidad al máximo... Convierte tus libretas a archivos de diferentas formatos. Puedes convertir tus libretas a documentos estáticos como HTML, LaTeX, PDF, Markdown, reStructuredText,etc. La documentación está en este vínculo y la forma fácil de hacerlo es: Click en "File" Pon el cursos sobre "Download as" Selecciona la opción que prefieras. Widgets interactivos Puedes crear salidas con videos, barras para cambiar valores... Exploraremos esto más adelante. Ahora que ya sabes lo que Jupyter es, vamos a usarlo!! Formato y ejecución de celdas los cuadernos de Jupyter se manejan con celdas y es importante saber que cada celda puede ser de distintos tipos, unas van a ser de código (ya sea python, r, julia o el kernel deseado) y otras de markdown. Lo primero que vas a hacer es dar click en la celda anterior y en esta para que notes que ambas son celdas de markdown. Puedes notar que hay formas para que en el documento aparezcan enlaces, listas, palabras resaltadas en negritas, encabezados. Aunado a esto puedes insertar fórmulas en LaTex, código en HTML, imágenes, tablas y código no ejecutable pero resaltado con la sintaxis apropiada. Ejemplos con estilos usados normalmente Encabezados Si escribes esto: # Encabezado 1 ## Encabezado 2 ### Encabezado 3 ... ###### Encabezado 6 Obtienes esto: Encabezado 1 Encabezado 2 Encabezado 3 ... Encabezado 6 Modificaciones en las letras Puedes hacer que las letras se vean en negritas, cursivas o tachadas de la siguiente forma: **Negritas** o __Negritas__ *Cursivas* o _Cursivas_ ~~Tachado~~ Negrita, Negrita, Cursiva, Cursiva, ~~Tachada~~ Listas Puedes listar objetos ya sea con puntos o con números de la siguiente forma - Objeto 1 - Sub objeto - Otro sub objeto - Objeto 2 1. Objeto 1 2. Objeto 2 Objeto 1 sub objeto otro sub objeto Objeto 2 Enlaces a páginas Para insertar enlaces a páginas relevantes puedes hacerlo directamente copiando y pegando el url o puedes hacer que una palabra esté ligada a un vínculo de a siguiente forma [palabra](dirección) Cheatsheet de Markdown. 
En esta página puedes encontrar la forma de hacer muchas cosas más Ejercicio 1 Crea una tabla que tenga como encabezados: Miembro Edad Género De todos los miembros de tu familia Las celdas por default van a estar en modo de código pero puedes cambiarla de dos formas, una en la barra que está llena de íconos hay una parte que dice Code, esta nada más tienes que hacer un click y cambiarlo a Markdown. La seguna es usando el teclado y lo que debes de hacer es seleccionar tu celda, presionar la tecla Esc, presionar la letra m y presionar Enter para empezar a escribir... Una vez que terminaste de escribir, puedes ejecutar cada celda ya sea presionando Shift + Enter o con el boton de "play" en la barra de herramientas Ya que vimos la parte de Markdown veamos la parte de código. Cada vez que creas una libreta nueva tu puedes decir en qué lenguaje quieres tu kernel, en nuestro caso sólo veremos python 3 porque no hemos instalado ningún otro kernel para jupyter pero si te interesa puedes ver cómo hacerlo aquí Así que el código que veremos a continuación es en python. End of explanation variable = 50 saludo = 'Hola' Explanation: Cada celda la puedes usar para escribir el código que tu quieras y si de repente se te olvida alguna función o tienes duda de si el nombre es correcto IPython es muy amable en ese sentido. Para saber acerca de una función, es decir cuál es su salida o los parámetros que necesita puedes usar el signo de interrogación al final del nombre de la función. Ejercicio 2 En la siguiente celda busca las siguientes funciones: sum, max, round, mean. No olvides ejecutar la celda después de haber escrito las funciones. Como te pudiste dar cuenta, cuando no encuentra la función te da un error... En IPython, y por lo tanto en Jupyter, hay una utilidad de completar con Tab. Esto quiere decir que si tu empiezas a escribir el nombre de una variable, función o atributo, no tienes que escribirlo todo, puedes empezar con unas cuantas letras y automáticamente (si es lo único que empieza de esa forma) lo va a completar por ti. Todos los flojos y/o olvidadizos amamos esta herramienta. En caso de que haya varias opciones no se va a completar, pero si lo vuelves a oprimir te va a mostrar en la celda todas las opciones que tienes... End of explanation # Importa matplotlib (paquete para graficar) y numpy (paquete para arreglos). # Fíjate en el la función mágica para que aparezca nuestra gráfica en la celda. %matplotlib inline import matplotlib.pyplot as plt import numpy as np # Crea un arreglo de 30 valores para x que va de 0 a 5. x = np.linspace(0, 5, 30) y = x**2 # grafica y versus x fig, ax = plt.subplots(nrows=1, ncols=1) ax.plot(x, y, color='red') ax.set_xlabel('x') ax.set_ylabel('y') ax.set_title('A simple graph of $y=x^2$') Explanation: Ejercicio 3 Empieza a escribir las primeras tres letras de cada elemento de la celda anterior y presiona tab para ver si se puede autocompletar También hay funciones mágicas que nos permitirán hacer diversas tareas como mostrar las gráficas que se produzcan en el código dentro de una celda, medir el tiempo de ejecución del código y cambiar del directorio de trabajo, entre otras. para ver qué funciones mágicas hay en Jupyter sólo tienes que escribir python %magic Todas las funciones "mágicas" empiezan con el signo de porcentaje % Gráficas Ahora veremos unos ejemplos de gráficas y cómo hacerlas interactivas. Estos ejemplos fueron tomados de la libreta para demostración de nature End of explanation # Importa matplotlib y numpy # con la misma "magia". 
%matplotlib inline import matplotlib.pyplot as plt import numpy as np # Importa la función interactiva de IPython usada # para construir los widgets interactivos from IPython.html.widgets import interact def plot_sine(frequency=4.0, grid_points=12, plot_original=True): Grafica muestras discretas de una curva sinoidal en ``[0, 1]``. x = np.linspace(0, 1, grid_points + 2) y = np.sin(2 * frequency * np.pi * x) xf = np.linspace(0, 1, 1000) yf = np.sin(2 * frequency * np.pi * xf) fig, ax = plt.subplots(figsize=(8, 6)) ax.set_xlabel('x') ax.set_ylabel('signal') ax.set_title('Aliasing in discretely sampled periodic signal') if plot_original: ax.plot(xf, yf, color='red', linestyle='solid', linewidth=2) ax.plot(x, y, marker='o', linewidth=2) # la función interactiva construye automáticamente una interfase de usuario para explorar # la gráfica de la función de seno. interact(plot_sine, frequency=(1.0, 22.0, 0.5), grid_points=(10, 16, 1), plot_original=True) Explanation: La gráfica que estás viendo sigue la siguiente ecuación $$y=x^2$$ Ejercicio 4 Edita el código de arriba y vuélvelo a correr pero ahora intenta reemplazar la expresión: y = x**2 con: y=np.sin(x) Gráficas interactivas End of explanation
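Editor's note on the widget import in this example: in current Jupyter installations the `interact` helper usually lives in the `ipywidgets` package rather than `IPython.html.widgets`, which is deprecated. A minimal English-language sketch of the same pattern, assuming `ipywidgets` is installed:

```python
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from ipywidgets import interact  # modern location of the interact helper

def plot_power(exponent=2.0):
    """Plot y = x**exponent on [0, 5]; moving the slider redraws the curve."""
    x = np.linspace(0, 5, 200)
    fig, ax = plt.subplots(figsize=(6, 4))
    ax.plot(x, x**exponent, color='red')
    ax.set_xlabel('x')
    ax.set_ylabel('y')
    ax.set_title('y = x**{:.1f}'.format(exponent))

# A (min, max, step) tuple tells interact to build a slider, exactly as in plot_sine above.
interact(plot_power, exponent=(0.5, 4.0, 0.5))
```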
5,022
Given the following text description, write Python code to implement the functionality described below step by step Description: 2018/9/15-16 WNixalo https Step1: the torch.Tensor.sum(dim) call takes an integer argument as the axis along which to sum. This applies to NumPy arrays as well. In this case xb.sum(-1) will turn a 64x784 tensor into a size 64 tensor. This creates a tensor with each element being the total sum of its corresponding size 784 (28x28 flattened) image from the minibatch. Step2: torch.unsqueeze returns a tensor with a dimension of size 1 inserted at the specified position. the returned tensor shares the smae underlying data with this tensor. Step3: taking a look at what .unsqueeze does; what does the tensor look like right before unsqueeze is applied to it? Step4: making sure I didn't need parentheses there Step5: Okay so .unsqueeze turns the size 64 tensor into a 64x1 tensor, so it's nicely packaged up with the first element being the 64-long vector ... or something like that right? Step6: The unsqueezed tensor doesn't look as 'nice'.. I guess. So it's packaged into a single column vector because we'll need that for the linear algebra we'll do to it later yeah? Step7: Oh this is cool. I was wondering how .unsqeeze worked for tensors with multiple items in multiple dimensions (ie Step8: So .unsqueeze turns our size 64x10 ... ohhhhhhhh I misread Step9: Oh this is where I was confused. I'm not throwing xb into Log Softmax. I'm throwing xb • w + bias. The shape going into the log softmax function is not 64x784, it's 64x10. Yeah that makes sense. well duh it has to. Each value in the tensor is an activation for a class, for each image in the minibatch. So by the magic of machine learning, each activation encapsulates the effect of the weights and biases on that input element with respect to that class. So that means that the .unsqueeze oepration is not going to be giving a 64x784 vector. Step10: Note the loss equals that in cell Out[25] above as it should. Back to teasing this apart by hand. The minibatch Step11: The minibatch's activations as they head into the Log Softmax Step12: The minibatch activations after the Log Softmax and before heading into Negative Log Likelihood Step13: The loss value computed via NLL on the Log Softmax activations Step14: Okay. Now questions. What is indexing input by [range(target.shape[0]), target] supposed to be doing? I established before that A[range(n)] is valid if n ≤ A.shape[0]. So what's going on is I'm range-indexing the 1st dimension of the LogSoftmax activations with the length of the target tensor, and the rest of the dimension indices being the ..target tensor itself? That means the index is this Step15: Okay. What does it look like when I index a tensor – forget range-idx for now – with another tensor? Step16: Okay.. Step17: Uh, moment of truth Step18: Oof course. What happened. Is it.. yes. I'm indexing the wrong array. Also no value in target is greater than the number of classes ... oh... oh ffs. Okay. I range index by the length of target's first dim to get the entire first dim of the LogSoftmax activations, and each vector in that index is itself indexed by the value of the target. Less-shitty English Step19: When the activations are activating, only the weights and biases are having a say. Right? Step20: Right. Now what about the Log Softmax operation itself? Well okay I can simulate this by hand Step21: umm.... ... So it's torch.tensor not torch.Tensor? Got a lot of errors trying to specify a datatype with capital T. 
Alright then. Step22: Good it works. Now to change things. The question was if any of the dropped values (non-target index) had any effect on the loss - since the loss was only calculated on error from the correct target. Basically
Python Code: from pathlib import Path import requests data_path = Path('data') path = data_path/'mnist' path.mkdir(parents=True, exist_ok=True) url = 'http://deeplearning.net/data/mnist/' filename = 'mnist.pkl.gz' (path/filename) if not (path/filename).exists(): content = requests.get(url+filename).content (path/filename).open('wb').write(content) import pickle, gzip with gzip.open(path/filename, 'rb') as f: ((x_train, y_train), (x_valid, y_valid), _) = pickle.load(f, encoding='latin-1') %matplotlib inline from matplotlib import pyplot import numpy as np pyplot.imshow(x_train[0].reshape((28,28)), cmap="gray") x_train.shape import torch x_train,y_train,x_valid,y_valid = map(torch.tensor, (x_train,y_train,x_valid,y_valid)) n,c = x_train.shape x_train, x_train.shape, y_train.min(), y_train.max() import math weights = torch.rand(784, 10)/math.sqrt(784) weights.requires_grad_() bias = torch.zeros(10, requires_grad=True) def log_softmax(x): return x - x.exp().sum(-1).log().unsqueeze(-1) def model(xb): return log_softmax(xb @ weights + bias) xb.shape, xb.sum(-1).shape Explanation: 2018/9/15-16 WNixalo https://github.com/fastai/fastai_v1/blob/master/dev_nb/001a_nn_basics.ipynb End of explanation bs = 64 xb = x_train[0:bs] # a mini-batch from x preds = model(xb) preds[0], preds.shape def nll(input, target): return -input[range(target.shape[0]), target].mean() loss_func = nll yb = y_train[0:bs] loss_func(preds, yb) preds[0] ((x_train[0:bs]@weights+bias) - (x_train[0:bs]@weights+bias).exp().sum(-1).log().unsqueeze(-1))[0] preds[0] nll(preds, yb) -preds[range(yb.shape[0]), yb].mean() type(preds) preds[range(0)] preds[0] preds[range(1)] preds[range(2)] preds[:2] type(preds) np.array([[range(10)]])[range(1)] A = np.array([[range(10)]]) A.shape A[range(2)] A.shape len(A[0]) A.shape[0] A[0] A[range(1)] xb.sum() xb.numpy().sum(-1) xb.sum(-1) Explanation: the torch.Tensor.sum(dim) call takes an integer argument as the axis along which to sum. This applies to NumPy arrays as well. In this case xb.sum(-1) will turn a 64x784 tensor into a size 64 tensor. This creates a tensor with each element being the total sum of its corresponding size 784 (28x28 flattened) image from the minibatch. End of explanation xb.sum(-1) xb[0].sum() Explanation: torch.unsqueeze returns a tensor with a dimension of size 1 inserted at the specified position. the returned tensor shares the smae underlying data with this tensor. End of explanation xb.exp().sum(-1).log() xb.exp().sum(-1).log()[0] Explanation: taking a look at what .unsqueeze does; what does the tensor look like right before unsqueeze is applied to it? End of explanation (xb.exp().sum(-1).log())[0] xb.exp().sum(-1).log().unsqueeze(-1)[:10] np.array([i for i in range(10)]).shape torch.Tensor([i for i in range(10)]).shape xb.exp().sum(-1).log().unsqueeze(-1).numpy().shape Explanation: making sure I didn't need parentheses there End of explanation xb.exp().sum(-1).log()[:10] Explanation: Okay so .unsqueeze turns the size 64 tensor into a 64x1 tensor, so it's nicely packaged up with the first element being the 64-long vector ... or something like that right? End of explanation preds.unsqueeze(-1).shape Explanation: The unsqueezed tensor doesn't look as 'nice'.. I guess. So it's packaged into a single column vector because we'll need that for the linear algebra we'll do to it later yeah? End of explanation preds.unsqueeze(-1)[:2] Explanation: Oh this is cool. 
I was wondering how .unsqeeze worked for tensors with multiple items in multiple dimensions (ie: not just a single row vector). Well this is what it does: End of explanation # logsoftmax(xb) ls_xb = log_softmax(xb) log_softmax(xb@weights+bias)[0] (xb@weights).shape xb.shape (xb@weights).shape Explanation: So .unsqueeze turns our size 64x10 ... ohhhhhhhh I misread: torch.unsqueeze returns a tensor with a dimension of size 1 inserted at the specified position. doesn't mean it repackages the original tensor into a 1-dimensional tensor. I was wonder how it knew how long to make it (you'd have to just concatenate everything, but then in what order?). No, a size-1 dimension is inserted where you tell it. So if it's an (X,Y) matrix, you go and give it a Z dimension, but that Z only contains the original (X,Y), ie: the only thing added is a dimension. Okay, interesting. Not exactly sure yet why we want 3 dimensions, but I kinda get it. Is it related to our data being 28x28x1? Wait isn't PyTorch's ordering N x [C x H x W] ? So it's unrelated then? Or useful for returning 64x784 to 64x28x28? I think that's not the case? Don't know. So what's up with the input[range(.. thing?: End of explanation # for reference: xb = x_train[0:bs] yb = y_train[0:bs] def log_softmax(x): return x - x.exp().sum(-1).log().unsqueeze(-1) def model(xb): return log_softmax(xb @ weights + bias) preds = model(xb) def nll(input, target): return -input[range(target.shape[0]), target].mean() loss = nll(preds, yb) loss Explanation: Oh this is where I was confused. I'm not throwing xb into Log Softmax. I'm throwing xb • w + bias. The shape going into the log softmax function is not 64x784, it's 64x10. Yeah that makes sense. well duh it has to. Each value in the tensor is an activation for a class, for each image in the minibatch. So by the magic of machine learning, each activation encapsulates the effect of the weights and biases on that input element with respect to that class. So that means that the .unsqueeze oepration is not going to be giving a 64x784 vector. End of explanation xb, xb.shape Explanation: Note the loss equals that in cell Out[25] above as it should. Back to teasing this apart by hand. The minibatch: End of explanation (xb @ weights + bias)[:2] (xb @ weights + bias).shape Explanation: The minibatch's activations as they head into the Log Softmax: End of explanation log_softmax(xb@weights+bias)[:2] log_softmax(xb@weights+bias).shape Explanation: The minibatch activations after the Log Softmax and before heading into Negative Log Likelihood: End of explanation nll(log_softmax(xb@weights+bias), yb) Explanation: The loss value computed via NLL on the Log Softmax activations: End of explanation [range(yb.shape[0]), yb] Explanation: Okay. Now questions. What is indexing input by [range(target.shape[0]), target] supposed to be doing? I established before that A[range(n)] is valid if n ≤ A.shape[0]. So what's going on is I'm range-indexing the 1st dimension of the LogSoftmax activations with the length of the target tensor, and the rest of the dimension indices being the ..target tensor itself? That means the index is this: End of explanation xb[yb] Explanation: Okay. What does it look like when I index a tensor – forget range-idx for now – with another tensor? End of explanation xb.shape, yb.shape array_1 = np.array([[str(j)+str(i) for i in range(10)] for j in range(5)]) array_1 array_2 = np.array([i for i in range(len(array_1[0]))]) array_2 Explanation: Okay.. 
End of explanation array_1[range(array_2.shape[0]), array_2] Explanation: Uh, moment of truth: End of explanation # for reference (again): xb = x_train[0:bs] yb = y_train[0:bs] def log_softmax(x): return x - x.exp().sum(-1).log().unsqueeze(-1) def model(xb): return log_softmax(xb @ weights + bias) preds = model(xb) def nll(input, target): return -input[range(target.shape[0]), target].mean() loss = nll(preds, yb) Explanation: Oof course. What happened. Is it.. yes. I'm indexing the wrong array. Also no value in target is greater than the number of classes ... oh... oh ffs. Okay. I range index by the length of target's first dim to get the entire first dim of the LogSoftmax activations, and each vector in that index is itself indexed by the value of the target. Less-shitty English: take the first dimension of the activations; that should be batch_size x num_classes activations; so: num_classes values in each of batch_size vectors; Now for each of those vectors, pull out the value indexed by the corresponding index-value in the target tensor. Oh I see. So just now I was confused that there was redundant work being done. yeah kinda. It's Linear Algebra. See, the weights and biases produce the entire output-activations tensor. Meaning: the dot-product & addition operation creates probabilities for every class for every image in the minibatch. Yeah that can be a lot; linalg exists in a block-like world & it's easy to get carried away (I think). And that answers another question: the loss function here only cares about how wrong the correct class was. Looks like the incorrect classes are totally ignored (hence a bit of mental hesitation for me because it looks like 90% of the information is being thrown away (it is)). Now, that's not what's going on when the Log Softmax is being computed. Gotta think about that a moment.. could activations for non-target classes affect the target-activations during the Log Softmax step, before they're disgarded in the NLL? xb - xb.exp().sum(-1).log().unsqueeze(-1) is the magic line (xb is x in the definition). End of explanation xb.shape, weights.shape np.array([[1,1,1],[2,2,2],[3,3,3]]) @ np.array([[1],[2],[3]]) np.array([[1,1,1],[2,2,2],[-11,0,3]]) @ np.array([[1],[2],[3]]) Explanation: When the activations are activating, only the weights and biases are having a say. Right? End of explanation yb.type() # batch size of 3 xb_tmp = np.array([[1,1,1,1,1],[2,2,2,2,2],[3,3,3,3,3]]) yb_tmp = np.array([0,1,2]) # 4 classes c = 4 w_tmp = np.array([[i for i in range(c)] for j in range(xb_tmp.shape[1])]) xb_tmp = torch.Tensor(xb_tmp) yb_tmp = torch.tensor(yb_tmp, dtype=torch.int64) # see: https://pytorch.org/docs/stable/tensors.html#torch-tensor w_tmp = torch.Tensor(w_tmp) Explanation: Right. Now what about the Log Softmax operation itself? Well okay I can simulate this by hand: End of explanation torch.tensor([[1, 2, 3]],dtype=torch.int32) xb_tmp.shape, yb_tmp.shape, w_tmp.shape xb.shape, yb.shape, weights.shape actv_tmp = log_softmax(xb_tmp @ w_tmp) actv_tmp nll(actv_tmp, yb_tmp) Explanation: umm.... ... So it's torch.tensor not torch.Tensor? Got a lot of errors trying to specify a datatype with capital T. Alright then. 
End of explanation # batch size of 3 xb_tmp = np.array([[0,1,1,0,0]]) yb_tmp = np.array([1]) # 4 classes c = 4 w_tmp = np.array([[i for i in range(c)] for j in range(xb_tmp.shape[1])]) xb_tmp = torch.Tensor(xb_tmp) yb_tmp = torch.tensor(yb_tmp, dtype=torch.int64) # see: https://pytorch.org/docs/stable/tensors.html#torch-tensor w_tmp = torch.Tensor(w_tmp) xb_tmp @ w_tmp # LogSoftmax(activations) actv_tmp = log_softmax(xb_tmp @ w_tmp) actv_tmp # NLL Loss loss = nll(actv_tmp, yb_tmp) loss def cross_test(x, y): # batch size of 3 xb_tmp = np.array(x) yb_tmp = np.array(y) # 4 classes c = 4 w_tmp = np.array([[i for i in range(c)] for j in range(xb_tmp.shape[1])]) xb_tmp = torch.Tensor(xb_tmp) yb_tmp = torch.tensor(yb_tmp, dtype=torch.int64) # see: https://pytorch.org/docs/stable/tensors.html#torch-tensor w_tmp = torch.Tensor(w_tmp) print(f'Activation: {xb_tmp @ w_tmp}') # LogSoftmax(activations) actv_tmp = log_softmax(xb_tmp @ w_tmp) print(f'Log Softmax: {actv_tmp}') # NLL Loss loss = nll(actv_tmp, yb_tmp) print(f'NLL Loss: {loss}') w_tmp cross_test([[1,1,1,1,1]], [1]) cross_test([[1,1,1,1,0]], [1]) cross_test([[1,1,1,0,0]], [1]) cross_test([[1,1,1,1,0]], [1]) cross_test([[1,1,0,0,0]], [1]) Explanation: Good it works. Now to change things. The question was if any of the dropped values (non-target index) had any effect on the loss - since the loss was only calculated on error from the correct target. Basically: is there any lateral flow of information? So I'll check this by editing values in the softmax activation that are not of the correct index. Wait that shouldn't have an effect anyway. No the question is if information earlier in the stream had an effect later on. It is 4:12 am.. Aha. My question was if the activations that created the non-target class probabilities had any effect on target classes. Which is asking if there is crossing of information in the ... oh. I confused myself with the minibatches. Ignore those, there'd be something very wrong if there was cross-talk between them. I want to know if there is cross-talk within an individual tensor as it travels through the model. End of explanation
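A follow-on sanity check, not part of the original notebook: the hand-rolled log_softmax + nll pair explored above should agree with PyTorch's built-ins, since F.cross_entropy fuses exactly those two steps. The random logits and targets below are stand-ins for xb @ weights + bias and yb, so treat this as an illustrative sketch rather than part of the lesson.

import torch
import torch.nn.functional as F

torch.manual_seed(0)
logits = torch.randn(64, 10)            # stand-in for xb @ weights + bias
targets = torch.randint(0, 10, (64,))   # stand-in for yb

def log_softmax(x): return x - x.exp().sum(-1).log().unsqueeze(-1)
def nll(input, target): return -input[range(target.shape[0]), target].mean()

manual = nll(log_softmax(logits), targets)
builtin = F.cross_entropy(logits, targets)                        # fused log-softmax + NLL
also_builtin = F.nll_loss(F.log_softmax(logits, dim=-1), targets)

print(manual.item(), builtin.item(), also_builtin.item())
assert torch.allclose(manual, builtin, atol=1e-5)

Note that the hand-rolled log_softmax skips the usual max-subtraction trick, so it can overflow for large activations even though it matches here.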
5,023
Given the following text problem statement, write Python code to implement the functionality described below in problem statement Problem: write a function encode_output() to one-hot encode English output sequences.
Python Code: # one hot encode target sequence def encode_output(sequences, vocab_size): ylist = list() for sequence in sequences: encoded = to_categorical(sequence, num_classes=vocab_size) ylist.append(encoded) y = array(ylist) y = y.reshape(sequences.shape[0], sequences.shape[1], vocab_size) return y
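A minimal usage sketch, not part of the original snippet: it assumes to_categorical comes from keras.utils (or tensorflow.keras.utils on newer installs) and array from numpy, since the function above relies on both already being in scope; the toy batch and vocabulary size are made up for illustration.

from numpy import array
from keras.utils import to_categorical   # assumption: use tensorflow.keras.utils on TF2-style installs

def encode_output(sequences, vocab_size):
    ylist = list()
    for sequence in sequences:
        encoded = to_categorical(sequence, num_classes=vocab_size)
        ylist.append(encoded)
    y = array(ylist)
    y = y.reshape(sequences.shape[0], sequences.shape[1], vocab_size)
    return y

# toy batch: 3 integer-encoded English sequences padded to length 5, vocabulary of 10 tokens
eng_vocab_size = 10
Y = array([[1, 2, 3, 0, 0], [4, 5, 6, 7, 0], [8, 9, 1, 0, 0]])
print(encode_output(Y, eng_vocab_size).shape)   # (3, 5, 10)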
5,024
Given the following text description, write Python code to implement the functionality described below step by step Description: Step1: Some questions 1. Change coins Problem
Python Code: # change coins # TODO: find all the combinations instead of just the number of combinations? # TODO: find the min number of coins used? def change_coins(coins: list, n: int, m: int): :param m: the amount of money :param n: the number of coin types dp = [0 for i in range(m+1)] dp[0] = 1 # base case, m = 0, method is 1 for coin in coins: for money in range(coin, m+1): # start at `coin` so money-coin never indexes below zero dp[money] += dp[money-coin] return dp[m] # test coins = [1, 2] n = 2 m = 4 print(change_coins(coins, n, m)) Explanation: Some questions 1. Change coins Problem: Assume we have $n$ kinds of coins as change ${c_1, c_2, ..., c_n}$ and $m$ is the amount of money that needs to be changed. How many ways are there to change the money? Example n = 3, {1, 2, 5} m = 100 End of explanation # Method 1 def count_1(n: int, m: int): :param n: the number of boards :param m: the number of colors # base case if n <= 0: return 0 if n <= 2: return m ** n # e.g. 2 boards with 2 colors give 4 ways sum = 0 sum += (m-1)*count_1(n-2, m) # why m-1: the last color must differ from the previous one sum += (m-1)*count_1(n-1, m) return sum # test n, m = 5, 3 print(count_1(n, m)) Explanation: 2. Plant flowers Problem: You are painting a fence made of N boards. You have 2 buckets of paint - black and white. You cannot paint more than 2 adjacent boards with the same paint. How many ways are there to paint the fence? Reference: https://codereview.stackexchange.com/questions/63614/count-number-of-ways-to-paint-a-fence-with-n-posts-using-k-colors End of explanation # 3. Jumping rabbit mem = {} def jump_rabit(m: int): if m <= 3: return m-1 if m in mem.keys(): return mem[m] else: mem[m] = jump_rabit(m-1) + jump_rabit(m-2) return mem[m] # test m = 3 print(jump_rabit(m)) Explanation: 3. Jumping rabbit! Problem: The road map for the rabbit is a list of length $m$; the rabbit can jump either 1 step or 2 steps per jump. How many distinct jump sequences get the rabbit to the end (it never jumps back)? End of explanation
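The second TODO above (minimum number of coins) can be handled with the same kind of table; the sketch below is an addition, not part of the original solution, and the test values are arbitrary.

def min_coins(coins, m):
    INF = float('inf')
    dp = [0] + [INF] * m                  # dp[x] = fewest coins that sum to x
    for x in range(1, m + 1):
        for c in coins:
            if c <= x and dp[x - c] + 1 < dp[x]:
                dp[x] = dp[x - c] + 1
    return dp[m] if dp[m] != INF else None

# test
print(min_coins([1, 2, 5], 11))   # 3  (5 + 5 + 1)
print(min_coins([2], 3))          # None (3 cannot be made from 2-coins)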
5,025
Given the following text description, write Python code to implement the functionality described below step by step Description: Collections Module The collections module is a built-in module that implements specialized container data types providing alternatives to Python’s general purpose built-in containers. We've already gone over the basics Step1: Counter() with lists Step2: Counter with strings Step3: Counter with words in a sentence Step4: Common patterns when using the Counter() object Step5: defaultdict defaultdict is a dictionary like object which provides all methods provided by dictionary but takes first argument (default_factory) as default data type for the dictionary. Using defaultdict is faster than doing the same using dict.set_default method. A defaultdict will never raise a KeyError. Any key that does not exist gets the value returned by the default factory. Step6: Can also initialize with default values Step7: OrderedDict An OrderedDict is a dictionary subclass that remembers the order in which its contents are added. Fro example a normal dictionary Step8: An Ordered Dictionary Step9: Equality with an Ordered Dictionary A regular dict looks at its contents when testing for equality. An OrderedDict also considers the order the items were added. A normal Dictionary Step10: An Ordered Dictionary Step11: namedtuple The standard tuple uses numerical indexes to access its members, for example Step12: For simple use cases, this is usually enough. On the other hand, remembering which index should be used for each value can lead to errors, especially if the tuple has a lot of fields and is constructed far from where it is used. A namedtuple assigns names, as well as the numerical index, to each member. Each kind of namedtuple is represented by its own class, created by using the namedtuple() factory function. The arguments are the name of the new class and a string containing the names of the elements. You can basically think of namedtuples as a very quick way of creating a new object/class type with some attribute fields. For example Step13: We construct the namedtuple by first passing the object type name (Dog) and then passing a string with the variety of fields as a string with spaces between the field names. We can then call on the various attributes
Python Code: from collections import Counter Explanation: Collections Module The collections module is a built-in module that implements specialized container data types providing alternatives to Python’s general purpose built-in containers. We've already gone over the basics: dict, list, set, and tuple. Now we'll learn about the alternatives that the collections module provides. Counter Counter is a dict subclass which helps count hash-able objects. Inside of it elements are stored as dictionary keys and the counts of the objects are stored as the value. Lets see how it can be used: End of explanation l = [1,2,2,2,2,3,3,3,1,2,1,12,3,2,32,1,21,1,223,1] Counter(l) Explanation: Counter() with lists End of explanation Counter('aabsbsbsbhshhbbsbs') Explanation: Counter with strings End of explanation s = 'How many times does each word show up in this sentence word times each each word' words = s.split() Counter(words) # Methods with Counter() c = Counter(words) c.most_common(2) Explanation: Counter with words in a sentence End of explanation sum(c.values()) # total of all counts c.clear() # reset all counts list(c) # list unique elements set(c) # convert to a set dict(c) # convert to a regular dictionary c.items() # convert to a list of (elem, cnt) pairs Counter(dict(list_of_pairs)) # convert from a list of (elem, cnt) pairs c.most_common()[:-n-1:-1] # n least common elements c += Counter() # remove zero and negative counts Explanation: Common patterns when using the Counter() object End of explanation from collections import defaultdict d = {} d['one'] d = defaultdict(object) d['one'] for item in d: print item Explanation: defaultdict defaultdict is a dictionary like object which provides all methods provided by dictionary but takes first argument (default_factory) as default data type for the dictionary. Using defaultdict is faster than doing the same using dict.set_default method. A defaultdict will never raise a KeyError. Any key that does not exist gets the value returned by the default factory. End of explanation d = defaultdict(lambda: 0) d['one'] Explanation: Can also initialize with default values: End of explanation print 'Normal dictionary:' d = {} d['a'] = 'A' d['b'] = 'B' d['c'] = 'C' d['d'] = 'D' d['e'] = 'E' for k, v in d.items(): print k, v Explanation: OrderedDict An OrderedDict is a dictionary subclass that remembers the order in which its contents are added. Fro example a normal dictionary: End of explanation print 'OrderedDict:' d = collections.OrderedDict() d['a'] = 'A' d['b'] = 'B' d['c'] = 'C' d['d'] = 'D' d['e'] = 'E' for k, v in d.items(): print k, v Explanation: An Ordered Dictionary: End of explanation print 'Dictionaries are equal? ' d1 = {} d1['a'] = 'A' d1['b'] = 'B' d2 = {} d2['b'] = 'B' d2['a'] = 'A' print d1 == d2 Explanation: Equality with an Ordered Dictionary A regular dict looks at its contents when testing for equality. An OrderedDict also considers the order the items were added. A normal Dictionary: End of explanation print 'Dictionaries are equal? 
' d1 = collections.OrderedDict() d1['a'] = 'A' d1['b'] = 'B' d2 = collections.OrderedDict() d2['b'] = 'B' d2['a'] = 'A' print d1 == d2 Explanation: An Ordered Dictionary: End of explanation t = (12,13,14) t[0] Explanation: namedtuple The standard tuple uses numerical indexes to access its members, for example: End of explanation from collections import namedtuple Dog = namedtuple('Dog','age breed name') sam = Dog(age=2,breed='Lab',name='Sammy') frank = Dog(age=2,breed='Shepard',name="Frankie") Explanation: For simple use cases, this is usually enough. On the other hand, remembering which index should be used for each value can lead to errors, especially if the tuple has a lot of fields and is constructed far from where it is used. A namedtuple assigns names, as well as the numerical index, to each member. Each kind of namedtuple is represented by its own class, created by using the namedtuple() factory function. The arguments are the name of the new class and a string containing the names of the elements. You can basically think of namedtuples as a very quick way of creating a new object/class type with some attribute fields. For example: End of explanation sam sam.age sam.breed sam[0] Explanation: We construct the namedtuple by first passing the object type name (Dog) and then passing a string with the variety of fields as a string with spaces between the field names. We can then call on the various attributes: End of explanation
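Two small follow-on patterns that often come up with these containers; this block is an addition written in Python 3 style (print as a function), unlike the Python 2 prints used in the lesson above.

from collections import defaultdict, namedtuple

# defaultdict(list): group values under a key with no key-existence checks
grades = [('math', 90), ('art', 75), ('math', 85), ('art', 60)]
by_subject = defaultdict(list)
for subject, score in grades:
    by_subject[subject].append(score)
print(dict(by_subject))            # {'math': [90, 85], 'art': [75, 60]}

# namedtuple helpers: _asdict() gives a dict view, _replace() a modified copy
Dog = namedtuple('Dog', 'age breed name')
sam = Dog(age=2, breed='Lab', name='Sammy')
print(sam._asdict())
print(sam._replace(age=3))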
5,026
Given the following text description, write Python code to implement the functionality described below step by step Description: Running EnergyPlus from Eppy It would be great if we could run EnergyPlus directly from our IDF wouldn’t it? Well here’s how we can. Step1: if you are in a terminal, you will see something like this Step4: Note Step7: Running in parallel processes using Generators Maybe you want to run a 100 or a 1000 simulations. The code above will not let you do that, since it will try to load 1000 files into memory. Now you need to use generators (python's secret sauce. if you don't know this, you need to look into it). Here is a code using generators. Now you can simulate a 1000 files gmulti.py Step10: True Multi-processing What if you want to run your simulations on multiple computers. What if those computers are on other networks (some at home and the other in your office and others in your server room) and some on the cloud. There is an experimental repository where you can do this. Keep an eye on this Step11: Debugging and reporting problems Debugging issues with IDF.run() used to be difficult, since you needed to go and hunt for the eplusout.err file, and the error message returned was not at all helpful. Now the output from EnergyPlus is returned in the error message, as well as the location and contents of eplusout.err. For example, this is the error message produced when running an IDF which contains an “HVACTemplate
Python Code: # you would normaly install eppy by doing # python setup.py install # or # pip install eppy # or # easy_install eppy # if you have not done so, uncomment the following three lines import sys # pathnameto_eppy = 'c:/eppy' pathnameto_eppy = '../' sys.path.append(pathnameto_eppy) from eppy.modeleditor import IDF iddfile = "/Applications/EnergyPlus-8-3-0/Energy+.idd" IDF.setiddname(iddfile) idfname = "/Applications/EnergyPlus-8-3-0/ExampleFiles/BasicsFiles/Exercise1A.idf" epwfile = "/Applications/EnergyPlus-8-3-0/WeatherData/USA_IL_Chicago-OHare.Intl.AP.725300_TMY3.epw" idf = IDF(idfname, epwfile) idf.run() Explanation: Running EnergyPlus from Eppy It would be great if we could run EnergyPlus directly from our IDF wouldn’t it? Well here’s how we can. End of explanation help(idf.run) Explanation: if you are in a terminal, you will see something like this:: Processing Data Dictionary Processing Input File Initializing Simulation Reporting Surfaces Beginning Primary Simulation Initializing New Environment Parameters Warming up {1} Warming up {2} Warming up {3} Warming up {4} Warming up {5} Warming up {6} Starting Simulation at 07/21 for CHICAGO_IL_USA COOLING .4% CONDITIONS DB=&gt;MWB Initializing New Environment Parameters Warming up {1} Warming up {2} Warming up {3} Warming up {4} Warming up {5} Warming up {6} Starting Simulation at 01/21 for CHICAGO_IL_USA HEATING 99.6% CONDITIONS Writing final SQL reports EnergyPlus Run Time=00hr 00min 0.24sec It’s as simple as that to run using the EnergyPlus defaults, but all the EnergyPlus command line interface options are also supported. To get a description of the options available, as well as the defaults you can call the Python built-in help function on the IDF.run method and it will print a full description of the options to the console. End of explanation multiprocessing runs import os from eppy.modeleditor import IDF from eppy.runner.run_functions import runIDFs def make_eplaunch_options(idf): Make options for run, so that it runs like EPLaunch on Windows idfversion = idf.idfobjects['version'][0].Version_Identifier.split('.') idfversion.extend([0] * (3 - len(idfversion))) idfversionstr = '-'.join([str(item) for item in idfversion]) fname = idf.idfname options = { 'ep_version':idfversionstr, # runIDFs needs the version number 'output_prefix':os.path.basename(fname).split('.')[0], 'output_suffix':'C', 'output_directory':os.path.dirname(fname), 'readvars':True, 'expandobjects':True } return options def main(): iddfile = "/Applications/EnergyPlus-9-3-0/Energy+.idd" # change this for your operating system IDF.setiddname(iddfile) epwfile = "USA_CA_San.Francisco.Intl.AP.724940_TMY3.epw" runs = [] # File is from the Examples Folder idfname = "HVACTemplate-5ZoneBaseboardHeat.idf" idf = IDF(idfname, epwfile) theoptions = make_eplaunch_options(idf) runs.append([idf, theoptions]) # copy of previous file idfname = "HVACTemplate-5ZoneBaseboardHeat1.idf" idf = IDF(idfname, epwfile) theoptions = make_eplaunch_options(idf) runs.append([idf, theoptions]) num_CPUs = 2 runIDFs(runs, num_CPUs) if __name__ == '__main__': main() Explanation: Note: idf.run() works for E+ version >= 8.3 Running in parallel processes If you have acomputer with multiple cores, you may want to use all the cores. EnergyPlus allows you to run simulations on multiple cores. 
Here is an example script of how use eppy to run on multiple cores End of explanation multiprocessing runs using generators instead of a list when you are running a 100 files you have to use generators import os from eppy.modeleditor import IDF from eppy.runner.run_functions import runIDFs def make_eplaunch_options(idf): Make options for run, so that it runs like EPLaunch on Windows idfversion = idf.idfobjects['version'][0].Version_Identifier.split('.') idfversion.extend([0] * (3 - len(idfversion))) idfversionstr = '-'.join([str(item) for item in idfversion]) fname = idf.idfname options = { 'ep_version':idfversionstr, # runIDFs needs the version number 'output_prefix':os.path.basename(fname).split('.')[0], 'output_suffix':'C', 'output_directory':os.path.dirname(fname), 'readvars':True, 'expandobjects':True } return options def main(): iddfile = "/Applications/EnergyPlus-9-3-0/Energy+.idd" IDF.setiddname(iddfile) epwfile = "USA_CA_San.Francisco.Intl.AP.724940_TMY3.epw" # File is from the Examples Folder idfname1 = "HVACTemplate-5ZoneBaseboardHeat.idf" # copy of previous file idfname2 = "HVACTemplate-5ZoneBaseboardHeat1.idf" fnames = [idfname1, idfname1] idfs = (IDF(fname, epwfile) for fname in fnames) runs = ((idf, make_eplaunch_options(idf) ) for idf in idfs) num_CPUs = 2 runIDFs(runs, num_CPUs) if __name__ == '__main__': main() Explanation: Running in parallel processes using Generators Maybe you want to run a 100 or a 1000 simulations. The code above will not let you do that, since it will try to load 1000 files into memory. Now you need to use generators (python's secret sauce. if you don't know this, you need to look into it). Here is a code using generators. Now you can simulate a 1000 files gmulti.py End of explanation single run EPLaunch style import os from eppy.modeleditor import IDF from eppy.runner.run_functions import runIDFs def make_eplaunch_options(idf): Make options for run, so that it runs like EPLaunch on Windows idfversion = idf.idfobjects['version'][0].Version_Identifier.split('.') idfversion.extend([0] * (3 - len(idfversion))) idfversionstr = '-'.join([str(item) for item in idfversion]) fname = idf.idfname options = { # 'ep_version':idfversionstr, # runIDFs needs the version number # idf.run does not need the above arg # you can leave it there and it will be fine :-) 'output_prefix':os.path.basename(fname).split('.')[0], 'output_suffix':'C', 'output_directory':os.path.dirname(fname), 'readvars':True, 'expandobjects':True } return options def main(): iddfile = "/Applications/EnergyPlus-9-3-0/Energy+.idd" # change this for your operating system and E+ version IDF.setiddname(iddfile) epwfile = "USA_CA_San.Francisco.Intl.AP.724940_TMY3.epw" # File is from the Examples Folder idfname = "HVACTemplate-5ZoneBaseboardHeat.idf" idf = IDF(idfname, epwfile) theoptions = make_eplaunch_options(idf) idf.run(**theoptions) if __name__ == '__main__': main() Explanation: True Multi-processing What if you want to run your simulations on multiple computers. What if those computers are on other networks (some at home and the other in your office and others in your server room) and some on the cloud. There is an experimental repository where you can do this. Keep an eye on this: https://github.com/pyenergyplus/zeppy Make idf.run() work like EPLaunch I like the function make_eplaunch_options. Can I use it to do a single run ? Yes! You can. An Explanation: EPLaunch is an application that comes with EnergyPlus on the Windows Platform. 
It has a default functionality that people become familiar with and come to expect. make_eplaunch_options set up the idf.run() arguments so that it behaves in the same way as EPLaunch. Here is the Sample code below. Modify and use it for your needs End of explanation E eppy.runner.run_functions.EnergyPlusRunError: E Program terminated: EnergyPlus Terminated--Error(s) Detected. E E Contents of EnergyPlus error file at C:\Users\jamiebull1\git\eppy\eppy\tests\test_dir\eplusout.err E Program Version,EnergyPlus, Version 8.9.0-40101eaafd, YMD=2018.10.14 20:49, E ** Severe ** Line: 107 You must run the ExpandObjects program for "HVACTemplate:Thermostat" E ** Fatal ** Errors occurred on processing input file. Preceding condition(s) cause termination. E ...Summary of Errors that led to program termination: E ..... Reference severe error count=1 E ..... Last severe error=Line: 107 You must run the ExpandObjects program for "HVACTemplate:Thermostat" E ************* Warning: Node connection errors not checked - most system input has not been read (see previous warning). E ************* Fatal error -- final processing. Program exited before simulations began. See previous error messages. E ************* EnergyPlus Warmup Error Summary. During Warmup: 0 Warning; 0 Severe Errors. E ************* EnergyPlus Sizing Error Summary. During Sizing: 0 Warning; 0 Severe Errors. E ************* EnergyPlus Terminated--Fatal Error Detected. 0 Warning; 1 Severe Errors; Elapsed Time=00hr 00min 0.16sec Explanation: Debugging and reporting problems Debugging issues with IDF.run() used to be difficult, since you needed to go and hunt for the eplusout.err file, and the error message returned was not at all helpful. Now the output from EnergyPlus is returned in the error message, as well as the location and contents of eplusout.err. For example, this is the error message produced when running an IDF which contains an “HVACTemplate:Thermostat” object without passing expand_objects=True to idf.run(): End of explanation
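A hedged sketch, not part of the original tutorial: since failed runs now raise an exception whose message embeds the EnergyPlus console output and eplusout.err, a thin wrapper can log that and re-raise. The import path for the exception class is inferred from the traceback quoted above and should be treated as an assumption.

from eppy.runner.run_functions import EnergyPlusRunError   # assumption: class path taken from the traceback above

def run_and_report(idf, **run_options):
    try:
        idf.run(**run_options)
    except EnergyPlusRunError as exc:
        # the exception message already contains the console output and the eplusout.err contents
        print("EnergyPlus run failed:")
        print(exc)
        raise

Something like run_and_report(idf, **make_eplaunch_options(idf)) would slot into the single-run example above.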
5,027
Given the following text description, write Python code to implement the functionality described below step by step Description: Tensor Network Random Unitary Evolution This example demonstrates some features of TensorNetwork manipulation as well as the use of MatrixProductState.gate, based on 'evolving' an intitial MPS with many random nearest neighbour unitaries. Step1: First we specify how sites we want, how many gates to apply, and some other parameters Step2: We generate a unique tag for each gate we will apply, which we can also use to address all the gates only. Then we apply each gate to the MPS inplace Step3: To make the graph a bit neater we can supply some fixed positions Step4: When fixing graphs, it might also be necessary to play with the spring parameter k Step5: We can see the 'lightcone' effect of adding propagate_tags='sites. Next let's form the norm overlap, and add one tag to all the gates, and another to all the non-gate tensors Step6: Again, it's a bit messy so we can specify some positions for some tensors Step7: iterations can also be increased if the graph is not relaxing well. Step8: Later color tags take precedence over earlier ones. Since this circuit is still relatively low depth, we can fully contract it as well Step9: Manually perform partial trace Here, to perform the partial trace we need to do two things. (i) Make a copy of the vector to be the 'bra' with different indices, (ii) match up the subsystem indices we want to trace out in the 'ket' and 'bra' Step10: Again we can graph this Step11: Estimate Subsystem Entropy We can treat this whole reduced density matrix as an effective linear operator, $A$, then calculate for example its entropy as a spectral sum function, $-\text{Tr}(A \log_2 A)$. First we set the left and right indices, and turn it into a scipy.sparse.linalg.LinearOperator Step12: This can be quite slow, so wise to check progress Step13: Which yields the final entropy (in bits) of the central 20 qubits as
Python Code: %matplotlib inline from quimb.tensor import * from quimb import * import numpy as np Explanation: Tensor Network Random Unitary Evolution This example demonstrates some features of TensorNetwork manipulation as well as the use of MatrixProductState.gate, based on 'evolving' an intitial MPS with many random nearest neighbour unitaries. End of explanation # the initial state n = 50 cyclic = False chi = 4 # intial bond dimension psi = MPS_rand_state(n, chi, cyclic=cyclic, tags='KET', dtype='complex128') # the gates n_gates = 5 * n gates = [rand_uni(4) for _ in range(n_gates)] u_tags = [f'U{i}' for i in range(n_gates)] Explanation: First we specify how sites we want, how many gates to apply, and some other parameters: End of explanation for U, t in zip(gates, u_tags): # generate a random coordinate i = np.random.randint(0, n - int(not cyclic)) # apply the next gate to the coordinate # propagate_tags='sites' (the default in fact) specifies that the # new gate tensor should inherit the site tags from tensors it acts on psi.gate_(U, where=[i, i + 1], tags=t, propagate_tags='sites') psi.graph(color=['KET']) Explanation: We generate a unique tag for each gate we will apply, which we can also use to address all the gates only. Then we apply each gate to the MPS inplace: End of explanation fix = { # [key - tags that uniquely locate a tensor]: [val - (x, y) coord] **{('KET', f'I{i}'): (i, +10) for i in range(n)}, # can also use a external index, 'k0' etc, as a key to fix it **{f'k{i}': (i, -10) for i in range(n)}, } Explanation: To make the graph a bit neater we can supply some fixed positions: End of explanation psi.graph(fix=fix, k=0.001, color=['I5', 'I15', 'I25', 'I35', 'I45']) Explanation: When fixing graphs, it might also be necessary to play with the spring parameter k: End of explanation psiH = psi.H psiH.retag_({'KET': 'BRA'}) # specify this to distinguish norm = (psiH | psi) norm.add_tag('UGs', where=u_tags, which='any') norm.add_tag('VEC0', where=u_tags, which='!any') norm.graph(color=['VEC0', 'UGs']) Explanation: We can see the 'lightcone' effect of adding propagate_tags='sites. Next let's form the norm overlap, and add one tag to all the gates, and another to all the non-gate tensors: End of explanation fix = { **{(f'I{i}', 'KET', 'VEC0'): (i, -20) for i in range(n)}, **{(f'I{i}', 'BRA', 'VEC0'): (i, +20) for i in range(n)}, } Explanation: Again, it's a bit messy so we can specify some positions for some tensors: End of explanation (psiH | psi).graph( color=['VEC0', 'UGs', 'I5', 'I15', 'I25', 'I35', 'I45'], node_size=30, iterations=500, fix=fix, k=0.0001) Explanation: iterations can also be increased if the graph is not relaxing well. End of explanation # this calculates an opimized path for the contraction, which is cached # the path can also be inspected with `print(expr)` expr = (psi.H | psi).contract(all, get='path-info') %%time (psi.H | psi) ^ all Explanation: Later color tags take precedence over earlier ones. 
Since this circuit is still relatively low depth, we can fully contract it as well: End of explanation # make a 'bra' vector copy with 'upper' indices psiH = psi.H psiH.retag_({'KET': 'BRA'}) # this automatically reindexes the TN psiH.site_ind_id = 'b{}' # define two subsystems sysa = range(15, 35) sysb = [i for i in range(n) if i not in sysa] # join indices for sysb only psi.reindex_sites('dummy_ptr{}', sysb, inplace=True) psiH.reindex_sites('dummy_ptr{}', sysb, inplace=True) rho_ab = (psiH | psi) rho_ab fix = { **{f'k{i}': (i, -10) for i in range(n)}, **{(f'I{i}', 'KET', 'VEC0'): (i, 0) for i in range(n)}, **{(f'I{i}', 'BRA', 'VEC0'): (i, 10) for i in range(n)}, **{f'b{i}': (i, 20) for i in range(n)}, } Explanation: Manually perform partial trace Here, to perform the partial trace we need to do two things. (i) Make a copy of the vector to be the 'bra' with different indices, (ii) match up the subsystem indices we want to trace out in the 'ket' and 'bra': End of explanation rho_ab.graph(color=['VEC0'] + [f'I{i}' for i in sysa], iterations=500, fix=fix, k=0.001) Explanation: Again we can graph this: End of explanation right_ix = [f'b{i}' for i in sysa] left_ix = [f'k{i}' for i in sysa] rho_ab_lo = rho_ab.aslinearoperator(left_ix, right_ix) rho_ab_lo Explanation: Estimate Subsystem Entropy We can treat this whole reduced density matrix as an effective linear operator, $A$, then calculate for example its entropy as a spectral sum function, $-\text{Tr}(A \log_2 A)$. First we set the left and right indices, and turn it into a scipy.sparse.linalg.LinearOperator: End of explanation S_a = - approx_spectral_function(rho_ab_lo, f=xlogx, verbosity=1, R=10) Explanation: This can be quite slow, so wise to check progress: End of explanation S_a Explanation: Which yields the final entropy (in bits) of the central 20 qubits as: End of explanation
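As a side note, not in the original notebook: for a small density matrix the spectral sum -Tr(A log2 A) that approx_spectral_function estimates stochastically can be computed exactly from eigenvalues with plain numpy, which makes a handy sanity check on toy cases.

import numpy as np

def exact_entropy_bits(rho):
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]                # drop numerical zeros
    return float(-np.sum(lam * np.log2(lam)))

# toy check: the maximally mixed 2-qubit state has entropy of exactly 2 bits
rho_mixed = np.eye(4) / 4
print(exact_entropy_bits(rho_mixed))      # 2.0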
5,028
Given the following text description, write Python code to implement the functionality described below step by step Description: Computational model for group analysis Demo code for Ito et al., 2017. Generates exact figures from Supplementary Fig. 3, and several comparable figures to Fig. 4. Author Step1: ESSENTIAL parameters to modify Identify data directory Indicate number of threads to perform analyses (this originally was performed on a large linux servers with > 20 CPUs) Step2: Basic simulation parameters Step3: 1.0 Construct sample network matrix and visualize group FC matrices 1.1 Construct and visualize synaptic matrix for a single sample subject (Fig. 4A) We generate a random synaptic matrix using the model.py module for demonstration Step4: 1.2 Visualize group average resting-state FC from simulated data (analogous to Fig. 4B) Visualize both Pearson FC and multiple linear regression Step5: 2.0 Compute out-of-network intrinsic FC (analogous to Fig. 4D) Step6: 3.0 Run group analysis on network-to-network information transfer mapping output using simulated data (Supplementary Fig. 3A-D) Note Step7: 3.2 Statistical testing on results and plot Step8: 4.0 Run group analysis on network-to-network information transfer mapping output using SVM decoding (as opposed to predicted-to-actual RSA analysis) (Supplementary Fig. 3E-H) 4.1 Network-to-network information transfer mapping on simulated neural data USING SVMs Step9: 4.2 Statistical testing on results and plot
Python Code: import numpy as np import sys sys.path.append('utils/') import os os.environ['OMP_NUM_THREADS'] = str(1) import matplotlib.pyplot as plt % matplotlib inline import scipy.stats as stats import statsmodels.api as sm import multiprocessing as mp import sklearn.preprocessing as preprocessing import sklearn.svm as svm import statsmodels.sandbox.stats.multicomp as mc # Import custom modules import multregressionconnectivity as mreg import model import analysis from matplotlib.colors import Normalize # Code to generate a normalized midpoint for plt.imshow visualization function class MidpointNormalize(Normalize): def __init__(self, vmin=None, vmax=None, midpoint=None, clip=False): self.midpoint = midpoint Normalize.__init__(self, vmin, vmax, clip) def __call__(self, value, clip=None): # I'm ignoring masked values and all kinds of edge cases to make a # simple example... x, y = [self.vmin, self.midpoint, self.vmax], [0, 0.5, 1] return np.ma.masked_array(np.interp(value, x, y)) Explanation: Computational model for group analysis Demo code for Ito et al., 2017. Generates exact figures from Supplementary Fig. 3, and several comparable figures to Fig. 4. Author: Takuya Ito ([email protected]) Ito T, Kulkarni KR, Schultz DH, Mill RD, Chen RH, Solomyak LI, Cole MW (2017). Cognitive task information is transferred between brain regions via resting-state network topology. bioRxiv. https://doi.org/10.1101/101782 Summary: Reads in data generated from running simulations on a compute cluster (20 simulations/subjects). For each simulated subject, we run a resting-state simulation, a task-state simulation (for topdown hub stimulation), a second task-state simulation (for simultaneous topdown and bottomup network stimulation), and perform the information transfer mapping procedure for each task. Each task consists for 4 different task conditions. Simulations are run using a network with five communities, comprising of a single hub community and four local communities. We employ a firing rate code model, and simulate functional MRI data by convolving the simulated signal with a hemodynamic response function (defined in model.py module). See Supplemental materials/methods for a full description. The model (see Stern et al., 2014) $$ \frac{dx_{i}}{dt} \tau_{i} = -x_{i}(t) + s \hspace{3 pt} \phi \hspace{1 pt} \bigg{(} x_i(t) \bigg{)} + g \bigg{(} \sum_{j\neq i}^{N} W_{ij} \hspace{3 pt} \phi \hspace{1 pt} \bigg{(} x_{j}(t) \bigg{)} \bigg{)} + I_{i}(t)$$ where $x_i$ is the activity of region $i$, $\tau_{i}$ is the time constant for region $i$, $s$ is the recurrent (local) coupling, $g$ is the global coupling parameter, $\phi$ is the bounded transfer function (in this scenario is the hyperbolic tangent), $W_{xy}$ is the synaptic connectivity matrix, and $I$ is the task-stimulation (if any). Simulation description (see Methods for full description) N.B. All simulations were performed separately using the provided modules; data provided are a direct product for running the provided model. This notebook is configured to run analyses in parallel using the multiprocessing module in python. 
1.0 Load and visualize synaptic connectivity, resting-state FC 2.0 Compute out-of-network degree centrality 3.0 Load in network-to-network activity flow mapping predictions; perform predicted-to-actual similarity analysis (RSA) 4.0 Seconday analysis: Load in network-to-network activity flow mapping predictions; perform predicted-to-actual similarity analysis (using SVMs) Parameters: global coupling parameter g = 1.0 local coupling parameter s = 1.0 Sampling rate of 10ms Additional notes: Simulation data was generated using the function model.subjectSimulationAndSaveToFile() See help(model.subjectSimulationAndSaveToFile) for more details Simulations were performed on a compute cluster at Rutgers University (NM3 compute cluster) A single subject simulation takes about ~10-12 minutes to complete for model.subjectSimulationAndSaveToFile 0 - Import modules and define essential parameters End of explanation # Specify the directory to read in provided data # If data file was unzippd in current working directory this shouldn't need to be changed datadir = 'ItoEtAl2017_Simulations/' # Specify number of CPUs to process on (using multiprocessing module in python) nproc = 10 # Output file to save generated figures (from this notebook) outputdir = './figures/' # default with output in current working directory if not os.path.exists(outputdir): os.makedirs(outputdir) Explanation: ESSENTIAL parameters to modify Identify data directory Indicate number of threads to perform analyses (this originally was performed on a large linux servers with > 20 CPUs) End of explanation nsubjs = range(0,20) # number of simulations (i.e., subject numbers) nblocks = 20 # number of blocks per task condition #### Define the condition numbers associated with each task # Conditions 1-4 are for top-down stimulation only (i.e., task 1) topdown_only = range(1,5) # Conditions 5-9 are simultaneous top-down (hub-network) and bottom-up (local-network) stimulation (i.e., task 2) topdown_and_bottomup = range(5,9) Explanation: Basic simulation parameters End of explanation #### Set up subject networks #### # Parameters for subject's networks ncommunities = 5 innetwork_dsity = .35 outnetwork_dsity = .05 hubnetwork_dsity = .20 nodespercommunity = 50 totalnodes = nodespercommunity*ncommunities ########## # Construct structural matrix W = model.generateStructuralNetwork(ncommunities=ncommunities, innetwork_dsity=innetwork_dsity, outnetwork_dsity=outnetwork_dsity, hubnetwork_dsity=hubnetwork_dsity, nodespercommunity=nodespercommunity, showplot=False) # Construct synaptic matrix G = model.generateSynapticNetwork(W, showplot=False) # Define community affiliation vector Ci = np.repeat(np.arange(ncommunities),nodespercommunity) # Plot figure plt.figure() # norm = MidpointNormalize(midpoint=0) plt.imshow(G,origin='lower',interpolation='none') plt.xlabel('Regions') plt.ylabel('Regions') plt.title('Synaptic Weight Matrix', y=1.04, fontsize=18) plt.colorbar() Explanation: 1.0 Construct sample network matrix and visualize group FC matrices 1.1 Construct and visualize synaptic matrix for a single sample subject (Fig. 
4A) We generate a random synaptic matrix using the model.py module for demonstration End of explanation fcmat_pearson = np.zeros((totalnodes,totalnodes,len(nsubjs))) fcmat_multreg = np.zeros((totalnodes,totalnodes,len(nsubjs))) ########## # Load in subject FC data scount = 0 for subj in nsubjs: indir = datadir + '/restfc/' # Load in pearson FC matrix filename1 = 'subj' + str(subj) + '_restfc_pearson.txt' fcmat_pearson[:,:,scount] = np.loadtxt(indir + filename1, delimiter=',') # Loda in multreg FC matrix filename2 = 'subj' + str(subj) + '_restfc_multreg.txt' fcmat_multreg[:,:,scount] = np.loadtxt(indir + filename2, delimiter=',') scount += 1 ########## # Plot group FC averages plt.figure() avg = np.mean(fcmat_pearson,axis=2) np.fill_diagonal(avg,0) plt.imshow(avg ,origin='lower',interpolation='none')#,vmin=0) plt.xlabel('Regions') plt.ylabel('Regions') plt.title('Group Rest FC Matrix\nPearson FC', y=1.04, fontsize=18) plt.colorbar() plt.tight_layout() plt.figure() avg = np.mean(fcmat_multreg,axis=2) np.fill_diagonal(avg,0) plt.imshow(avg ,origin='lower',interpolation='none')#,vmin=-.08,vmax=.08) plt.xlabel('Regions') plt.ylabel('Regions') plt.title('Group Rest FC Matrix\nMultiple Regression FC', y=1.04, fontsize=18) plt.colorbar() plt.tight_layout() Explanation: 1.2 Visualize group average resting-state FC from simulated data (analogous to Fig. 4B) Visualize both Pearson FC and multiple linear regression End of explanation outofnet_intrinsicFC = np.zeros((ncommunities,len(nsubjs))) indices = np.arange(nodespercommunity*ncommunities) ########## # Calculate average out-of-network degree across subjects scount = 0 for subj in nsubjs: for net in range(0,ncommunities): # if net == hubnet: continue net_ind = np.where(Ci==net)[0] net_ind.shape = (len(net_ind),1) outofnet_ind = np.setxor1d(net_ind,indices) outofnet_ind.shape = (len(outofnet_ind),1) outofnet_intrinsicFC[net,scount] = np.mean(fcmat_multreg[net_ind, outofnet_ind.T, scount]) scount += 1 # Compute average stats fcmean = np.mean(outofnet_intrinsicFC,axis=1) fcerr = np.std(outofnet_intrinsicFC,axis=1)/np.sqrt(len(nsubjs)) ########## # Plot figure fig = plt.bar(range(len(fcmean)), fcmean, yerr=fcerr) # fig = plt.ylim([.09,0.10]) fig = plt.xticks(np.arange(.4,5.4,1.0),['FlexHub', 'Net1', 'Net2', 'Net3', 'Net4'],fontsize=14) fig = plt.ylabel('Multiple Regression FC', fontsize=16) fig = plt.xlabel('Networks', fontsize=16) fig = plt.title("Average Out-Of-Network IntrinsicFC\nSimulated Resting-State Data", fontsize=18, y=1.02) fig = plt.tight_layout() Explanation: 2.0 Compute out-of-network intrinsic FC (analogous to Fig. 
4D) End of explanation # Empty variables for topdown task analysis ite_topdown = np.zeros((ncommunities,ncommunities,len(nsubjs))) # Empty variables for topdown and bottomup task analysis ite_topdownbottomup = np.zeros((ncommunities,ncommunities,len(nsubjs))) ########## # Run predicted-to-actual similarity for every network-to-network configuration (using RSA approach) for i in range(ncommunities): for j in range(ncommunities): if i==j: continue fromnet = i net = j nblocks = nblocks ## First run on topdown only task conditions inputs = [] for subj in nsubjs: inputs.append((subj,net,fromnet,topdown_only,nblocks,Ci,nodespercommunity,datadir)) # Run multiprocessing across subjects pool = mp.Pool(processes=nproc) results_topdown = pool.map_async(analysis.predictedToActualRSA, inputs).get() pool.close() pool.join() ## Second run on topdown and bottomup task conditions inputs = [] for subj in nsubjs: inputs.append((subj,net,fromnet,topdown_and_bottomup,nblocks,Ci,nodespercommunity,datadir)) # Run multiprocessing pool = mp.Pool(processes=nproc) results_topdownbottomup = pool.map_async(analysis.predictedToActualRSA, inputs).get() pool.close() pool.join() ## Get results and store in network X network X subjects matrix scount = 0 for subj in nsubjs: # Obtain topdown task results ite = results_topdown[scount] ite_topdown[i,j,scount] = ite # Obtain topdown and bottom up task results ite = results_topdownbottomup[scount] ite_topdownbottomup[i,j,scount] = ite scount += 1 Explanation: 3.0 Run group analysis on network-to-network information transfer mapping output using simulated data (Supplementary Fig. 3A-D) Note: Activity flow mapping procedure (and subsequent data) was generated on the compute cluster. Code that generated data is included in model.py. We demonstrate the 'predicted-to-actual' similarity analysis below. 3.1 Network-to-network information transfer mapping on simulated neural data Region-to-region activity flow mapping is performed already (with provided simulation data); we only perform the predicted-to-actual similarity analysis. Perform for two tasks: (1) topdown only task conditions (stimulation of hub network only); (2) topdown and bottom up task conditions (stimulation of both hub network and local networks). 
End of explanation # Instantiate empty result matrices tmat_topdown = np.zeros((ncommunities,ncommunities)) pmat_topdown = np.ones((ncommunities,ncommunities)) tmat_topdownbottomup = np.zeros((ncommunities,ncommunities)) pmat_topdownbottomup = np.ones((ncommunities,ncommunities)) # Run t-tests for every network-to-network configuration for i in range(ncommunities): for j in range(ncommunities): if i==j: continue ########## ## Run statistical test for first task (topdown only stim) t, p = stats.ttest_1samp(ite_topdown[i,j,:],0) tmat_topdown[i,j] = t # Make p-value one-sided (for one-sided t-test) if t > 0: p = p/2.0 else: p = 1-p/2.0 pmat_topdown[i,j] = p ########## ## Run statistical test for second task (topdown and bottomup stim) t, p = stats.ttest_1samp(ite_topdownbottomup[i,j,:],0) # Make p-value one-sided (for one-sided t-test) tmat_topdownbottomup[i,j] = t if t > 0: p = p/2.0 else: p = 1-p/2.0 pmat_topdownbottomup[i,j] = p ########## # Run FDR correction on p-values (exclude diagonal values) ## TopDown Task qmat_topdown = np.ones((ncommunities,ncommunities)) triu_ind = np.triu_indices(ncommunities,k=1) tril_ind = np.tril_indices(ncommunities,k=-1) all_ps = np.hstack((pmat_topdown[triu_ind],pmat_topdown[tril_ind])) h, all_qs = mc.fdrcorrection0(all_ps) # the first half of all qs belong to triu, second half belongs to tril qmat_topdown[triu_ind] = all_qs[:len(triu_ind[0])] qmat_topdown[tril_ind] = all_qs[len(tril_ind[0]):] binary_mat_topdown = qmat_topdown < .05 ## TopDown and BottomUp Task qmat_topdownbottomup = np.ones((ncommunities,ncommunities)) triu_ind = np.triu_indices(ncommunities,k=1) tril_ind = np.tril_indices(ncommunities,k=-1) all_ps = np.hstack((pmat_topdownbottomup[triu_ind],pmat_topdownbottomup[tril_ind])) h, all_qs = mc.fdrcorrection0(all_ps) # the first half of all qs belong to triu, second half belongs to tril qmat_topdownbottomup[triu_ind] = all_qs[:len(triu_ind[0])] qmat_topdownbottomup[tril_ind] = all_qs[len(tril_ind[0]):] binary_mat_topdownbottomup = qmat_topdownbottomup < .05 ########## # Plot figures for topdown task # (Unthresholded plot) plt.figure(figsize=(12,10)) plt.subplot(121) norm = MidpointNormalize(midpoint=0) plt.imshow(np.mean(ite_topdown,axis=2),norm=norm,origin='lower',interpolation='None',cmap='bwr') plt.title('Network-to-Network ITE (using RSA) (Unthresholded)\nTopDown Tasks',fontsize=16, y=1.02) plt.colorbar(fraction=.046) plt.yticks(range(ncommunities), ['FlexHub', 'Net1', 'Net2', 'Net3', 'Net4']) plt.xticks(range(ncommunities), ['FlexHub', 'Net1', 'Net2', 'Net3', 'Net4']) plt.ylabel('Network ActFlow FROM',fontsize=15) plt.xlabel('Network ActFlow TO',fontsize=15) # (Thresholded plot) plt.subplot(122) threshold_acc = np.multiply(binary_mat_topdown,np.mean(ite_topdown,axis=2)) norm = MidpointNormalize(midpoint=0) plt.imshow(threshold_acc,norm=norm,origin='lower',interpolation='None',cmap='bwr') plt.title('Network-to-Network ITE (using RSA) (Thresholded)\nTopDown Tasks',fontsize=16, y=1.02) plt.colorbar(fraction=.046) plt.yticks(range(ncommunities), ['FlexHub', 'Net1', 'Net2', 'Net3', 'Net4']) plt.xticks(range(ncommunities), ['FlexHub', 'Net1', 'Net2', 'Net3', 'Net4']) plt.ylabel('Network ActFlow FROM',fontsize=15) plt.xlabel('Network ActFlow TO',fontsize=15) plt.tight_layout() plt.savefig(outputdir + 'SFig_CompModel_RSA_topdownOnly.pdf') ########## # Plot figures for topdown and bottomup task # (Unthresholded plot) plt.figure(figsize=(12,10)) ((12,10)) plt.subplot(121) norm = MidpointNormalize(midpoint=0) 
plt.imshow(np.mean(ite_topdownbottomup,axis=2),origin='lower',interpolation='None',norm=norm,cmap='bwr') plt.title('Network-to-Network ITE (using RSA) (Unthresholded)\nTopDownBottomUp Tasks',fontsize=16, y=1.02) plt.colorbar(fraction=.046) plt.yticks(range(ncommunities), ['FlexHub', 'Net1', 'Net2', 'Net3', 'Net4']) plt.xticks(range(ncommunities), ['FlexHub', 'Net1', 'Net2', 'Net3', 'Net4']) plt.ylabel('Network ActFlow FROM',fontsize=15) plt.xlabel('Network ActFlow TO',fontsize=15) # (Thresholded plot) plt.subplot(122) threshold_acc = np.multiply(binary_mat_topdownbottomup,np.mean(ite_topdownbottomup,axis=2)) norm = MidpointNormalize(midpoint=0) plt.imshow(threshold_acc,origin='lower',interpolation='None',norm=norm,cmap='bwr') plt.title('Network-to-Network ITE (using RSA)(Thresholded)\nTopDownBottomUp Tasks',fontsize=16, y=1.02) plt.colorbar(fraction=.046) plt.yticks(range(ncommunities), ['FlexHub', 'Net1', 'Net2', 'Net3', 'Net4']) plt.xticks(range(ncommunities), ['FlexHub', 'Net1', 'Net2', 'Net3', 'Net4']) plt.ylabel('Network ActFlow FROM',fontsize=15) plt.xlabel('Network ActFlow TO',fontsize=15) plt.tight_layout() plt.savefig(outputdir + 'SFig_CompModel_RSA_topdownbottomup.pdf') Explanation: 3.2 Statistical testing on results and plot End of explanation # Empty variables for topdown task analysis svm_topdown = np.zeros((ncommunities,ncommunities,len(nsubjs))) # Empty variables for topdown and bottomup task analysis svm_topdownbottomup = np.zeros((ncommunities,ncommunities,len(nsubjs))) ########## # Run predicted-to-actual similarity for every network-to-network configuration (using RSA approach) for i in range(ncommunities): for j in range(ncommunities): if i==j: continue fromnet = i net = j nblocks = nblocks ## First run on topdown only task conditions inputs = [] for subj in nsubjs: inputs.append((subj,net,fromnet,topdown_only,nblocks,Ci,nodespercommunity,datadir)) # Run multiprocessing across subjects pool = mp.Pool(processes=nproc) results_topdown = pool.map_async(analysis.predictedToActualSVM, inputs).get() pool.close() pool.join() ## Second run on topdown and bottomup task conditions inputs = [] for subj in nsubjs: inputs.append((subj,net,fromnet,topdown_and_bottomup,nblocks,Ci,nodespercommunity,datadir)) # Run multiprocessing pool = mp.Pool(processes=nproc) results_topdownbottomup = pool.map_async(analysis.predictedToActualSVM, inputs).get() pool.close() pool.join() ## Get results and store in network X network X subjects matrix scount = 0 for subj in nsubjs: # Obtain topdown task results svm = results_topdown[scount] svm_topdown[i,j,scount] = svm # Obtain topdown and bottom up task results svm = results_topdownbottomup[scount] svm_topdownbottomup[i,j,scount] = svm scount += 1 Explanation: 4.0 Run group analysis on network-to-network information transfer mapping output using SVM decoding (as opposed to predicted-to-actual RSA analysis) (Supplementary Fig. 
3E-H) 4.1 Network-to-network information transfer mapping on simulated neural data USING SVMs End of explanation # Instantiate empty result matrices tmat_topdown_svm = np.zeros((ncommunities,ncommunities)) pmat_topdown_svm = np.ones((ncommunities,ncommunities)) tmat_topdownbottomup_svm = np.zeros((ncommunities,ncommunities)) pmat_topdownbottomup_svm = np.ones((ncommunities,ncommunities)) # Perform accuracy decoding t-test against chance, which is 25% for a 4-way classification chance = .25 for i in range(ncommunities): for j in range(ncommunities): if i==j: continue # Run statistical test for first task (topdown only stim) t, p = stats.ttest_1samp(svm_topdown[i,j,:],chance) tmat_topdown_svm[i,j] = t # Make p-value one-sided (for one-sided t-test) if t > 0: p = p/2.0 else: p = 1-p/2.0 pmat_topdown_svm[i,j] = p # Run statistical test for second task (topdown and bottomup stim) t, p = stats.ttest_1samp(svm_topdownbottomup[i,j,:],chance) tmat_topdownbottomup_svm[i,j] = t # Make p-value one-sided (for one-sided t-test) if t > 0: p = p/2.0 else: p = 1-p/2.0 pmat_topdownbottomup_svm[i,j] = p ## TopDown Tasks # Run FDR correction on p-values (Don't get diagonal values) qmat_topdown_svm = np.ones((ncommunities,ncommunities)) triu_ind = np.triu_indices(ncommunities,k=1) tril_ind = np.tril_indices(ncommunities,k=-1) all_ps = np.hstack((pmat_topdown_svm[triu_ind],pmat_topdown_svm[tril_ind])) h, all_qs = mc.fdrcorrection0(all_ps) # the first half of all qs belong to triu, second half belongs to tril qmat_topdown_svm[triu_ind] = all_qs[:len(triu_ind[0])] qmat_topdown_svm[tril_ind] = all_qs[len(tril_ind[0]):] binary_mat_topdown_svm = qmat_topdown_svm < .05 ## TopDown and BottomUp Tasks # Run FDR correction on p-values (Don't get diagonal values) qmat_topdownbottomup_svm = np.ones((ncommunities,ncommunities)) triu_ind = np.triu_indices(ncommunities,k=1) tril_ind = np.tril_indices(ncommunities,k=-1) all_ps = np.hstack((pmat_topdownbottomup_svm[triu_ind],pmat_topdownbottomup_svm[tril_ind])) h, all_qs = mc.fdrcorrection0(all_ps) # the first half of all qs belong to triu, second half belongs to tril qmat_topdownbottomup_svm[triu_ind] = all_qs[:len(triu_ind[0])] qmat_topdownbottomup_svm[tril_ind] = all_qs[len(tril_ind[0]):] binary_mat_topdownbottomup_svm = qmat_topdownbottomup_svm < .05 #### ## Plot figures for Top Down Task # Unthresholded map plt.figure(figsize=(12,10)) plt.subplot(121) mat = np.mean(svm_topdown,axis=2) norm = MidpointNormalize(midpoint=0) plt.imshow(mat,norm=norm,origin='lower',interpolation='None',cmap='bwr') plt.title('Network-to-Network ITE (using SVMs) (Unthresholded)\nTopDown Tasks',fontsize=16, y=1.02) plt.colorbar(fraction=0.046) plt.yticks(range(ncommunities), ['FlexHub', 'Net1', 'Net2', 'Net3', 'Net4']) plt.xticks(range(ncommunities), ['FlexHub', 'Net1', 'Net2', 'Net3', 'Net4']) plt.ylabel('Network ActFlow FROM',fontsize=15) plt.xlabel('Network ActFlow TO',fontsize=15) plt.tight_layout() plt.savefig(outputdir + 'SFig_CompModel_SVM_topdownOnly_Unthresholded.pdf') # Thresholded map plt.subplot(122) mat = np.mean(svm_topdown,axis=2) mat = np.multiply(binary_mat_topdown_svm,mat) norm = MidpointNormalize(midpoint=0) plt.imshow(mat,norm=norm,origin='lower',interpolation='None',cmap='bwr') plt.title('Network-to-Network ITE (using SVMs) (Thresholded)\nTopDown Tasks',fontsize=16, y=1.02) plt.colorbar(fraction=0.046) plt.yticks(range(ncommunities), ['FlexHub', 'Net1', 'Net2', 'Net3', 'Net4']) plt.xticks(range(ncommunities), ['FlexHub', 'Net1', 'Net2', 'Net3', 'Net4']) plt.ylabel('Network 
ActFlow FROM',fontsize=15) plt.xlabel('Network ActFlow TO',fontsize=15) plt.tight_layout() plt.savefig(outputdir + 'SFig_CompModel_SVM_topdownOnly.pdf') #### ## Plot figures for Top Down AND Bottom Up Task # Unthresholded map plt.figure(figsize=(12,10)) plt.subplot(121) mat = np.mean(svm_topdownbottomup,axis=2) norm = MidpointNormalize(midpoint=0) plt.imshow(mat,origin='lower',interpolation='None',norm=norm,cmap='bwr') plt.title('Network-to-Network ITE (using SVMs) (Unthresholded)\nTopDownBottomUp Tasks',fontsize=16, y=1.02) plt.colorbar(fraction=0.046) plt.yticks(range(ncommunities), ['FlexHub', 'Net1', 'Net2', 'Net3', 'Net4']) plt.xticks(range(ncommunities), ['FlexHub', 'Net1', 'Net2', 'Net3', 'Net4']) plt.ylabel('Network ActFlow FROM',fontsize=15) plt.xlabel('Network ActFlow TO',fontsize=15) # Thresholded map plt.subplot(122) mat = np.mean(svm_topdownbottomup,axis=2) mat = np.multiply(binary_mat_topdownbottomup_svm,mat) norm = MidpointNormalize(midpoint=0) plt.imshow(mat,origin='lower',interpolation='None',norm=norm,cmap='bwr') plt.title('Network-to-Network ITE (using SVMs) (Thresholded)\nTopDownBottomUp Tasks',fontsize=16, y=1.02) plt.colorbar(fraction=0.046) plt.yticks(range(ncommunities), ['FlexHub', 'Net1', 'Net2', 'Net3', 'Net4']) plt.xticks(range(ncommunities), ['FlexHub', 'Net1', 'Net2', 'Net3', 'Net4']) plt.ylabel('Network ActFlow FROM',fontsize=15) plt.xlabel('Network ActFlow TO',fontsize=15) plt.tight_layout() plt.savefig(outputdir + 'SFig_CompModel_SVM_topdownbottomup.pdf') Explanation: 4.2 Statistical testing on results and plot End of explanation
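The one-sided t-test plus FDR pattern is written out twice above (once for the RSA matrices and again for the SVM accuracies); a condensed helper, added here only as an illustrative sketch, makes the two-sided-to-one-sided p-value conversion explicit. popmean would be 0 for the RSA matrices and 0.25 (4-way chance) for the SVM decoding accuracies.

import numpy as np
import scipy.stats as stats
import statsmodels.sandbox.stats.multicomp as mc

def onesided_ttest_fdr(ite, popmean=0.0, alpha=0.05):
    # ite: communities x communities x subjects array
    n = ite.shape[0]
    tmat = np.zeros((n, n))
    pmat = np.ones((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            t, p = stats.ttest_1samp(ite[i, j, :], popmean)
            tmat[i, j] = t
            pmat[i, j] = p / 2.0 if t > 0 else 1.0 - p / 2.0   # one-sided (greater than popmean)
    offdiag = ~np.eye(n, dtype=bool)
    h, qs = mc.fdrcorrection0(pmat[offdiag])
    qmat = np.ones((n, n))
    qmat[offdiag] = qs
    return tmat, pmat, qmat < alpha

# e.g. tmat, pmat, sig = onesided_ttest_fdr(ite_topdown)
#      tmat, pmat, sig = onesided_ttest_fdr(svm_topdown, popmean=0.25)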
5,029
Given the following text description, write Python code to implement the functionality described below step by step Description: Basic RNN structure and RNN implementation with Keras When a neural network is used to predict a sequence such as a sentence or a time series, making the predicted value depend on data further in the past requires increasing the size of the vector that represents the sequence. For example, in a language model with a 10,000-word vocabulary, predicting the output for a sequence of the past 100 words would require a 1,000,000-dimensional input vector. An RNN (Recurrent Neural Network) is a network architecture that stores the state of its neurons and feeds it back as input at the next step, which lets it make predictions over long sequences as well. Here we look at the basic structure of an RNN and how to implement one with the Keras Python package. Basic RNN structure In an ordinary feedforward network the output vector $y$ is obtained by applying an activation function to the product of the input vector $x$ and the weight matrix $U$: $$ y = \sigma ( U x ) $$ where $\sigma$ denotes the activation function. An MLP (Multi-Layer Perceptron) with one hidden layer can be written as $$ h = \sigma(U x) $$ $$ o = \sigma(V h) $$ where $h$ is the hidden-layer vector, $o$ is the output vector, $U$ is the weight matrix from the input to the hidden layer, and $V$ is the weight matrix from the hidden layer to the output. An RNN produces a state vector $s$ in addition to the output vector $o$. The state vector resembles a hidden-layer vector, but it depends not only on the input $x$ but also on the state vector from the previous step, and the output vector depends on the state vector: $$ s_t = \sigma(Ux_t + Ws_{t-1}) $$ $$ o_t = \sigma(Vs_t) $$ where the subscript $t$ indexes the position in the sequence. Unrolled over time steps, an RNN behaves much like an MLP with an unbounded number of hidden layers (see the unrolled-RNN diagram referenced in the notebook). Step1: From this time series we build the input sequences and the target values. Each input sequence is 3 steps long, and the value at the next time step is used as the target. In other words, this is posed as a sequence-to-value (many-to-one) problem in which three values are fed in and the final output should match the target. To use an RNN in Keras, the input data must be a 3-dimensional (ndim=3) tensor of shape (nb_samples, timesteps, input_dim), where nb_samples is the number of samples, timesteps is the length of each sequence, and input_dim is the size of the x vector. Step2: Keras's SimpleRNN class In Keras a neural-network model is built in the following order: create the model as a Sequential class object; add the various layers with the add method; specify the loss function and the optimization method with the compile method; compute the weights with the fit method. The code below shows the simplest architecture, using the SimpleRNN class. Here a SimpleRNN object creates an RNN layer with 10 neurons; the first argument is the number of neurons, the input_dim argument is the size of the vector, and the input_length argument is the length of the sequence. A Dense object is then added to combine the 10 outputs of the SimpleRNN layer into a single real-valued output. Mean squared error is used as the loss function and plain stochastic gradient descent as the optimization method. Step3: First, look at the output before any training is done. Step4: Train with the fit method. Step5: The output after training is as follows. Step6: If the return_sequences argument is set to True when the SimpleRNN object is created, the layer returns the entire output sequence as a 3-D tensor instead of only the last value, so the task can be posed as a sequence-to-sequence problem; the input and output sequences must then have the same length. In this case the following Dense object has to be wrapped in a TimeDistributed wrapper so that it can accept 3-D tensor input. Step7: This time the targets are also 3-step sequences. Step8: The training results are as follows.
Python Code: s = np.sin(2 * np.pi * 0.125 * np.arange(20)) plt.plot(s, 'ro-') plt.xlim(-0.5, 20.5) plt.ylim(-1.1, 1.1) plt.show() Explanation: RNN 기본 구조와 Keras를 사용한 RNN 구현 신경망을 사용하여 문장(sentence)이나 시계열(time series) 데이터와 같은 순서열(sequence)를 예측하는 문제를 푸는 경우, 예측하고자 하는 값이 더 오랜 과거의 데이터에 의존하게 하려면 시퀀스를 나타내는 벡터의 크기를 증가시켜야 한다. 예를 들어 10,000개의 단어로 구성된 단어장을 사용하는 언어 모형에서 과거 100개 단어의 순서열에 대해 출력을 예측하려면 1,000,000 차원의 입력 벡터가 필요하다. RNN(Recurrent Neural Network)는 뉴런의 상태(state)를 저장하고 이를 다음 스텝에서의 입력으로 사용함으로써 긴 순서열에 대해서도 예측을 할 수 있는 신경망 구조이다. 여기에서는 RNN의 기본 구조와 Keras 파이썬 패키지에서 지원하는 RNN 구현 방법에 대해 알아본다. RNN의 기본 구조 일반적인 feedforward 신경망 구조는 다음과 같이 출력 벡터 $y$가 입력 $x$ 벡터와 신경망 가중치 행렬 $U$의 곱에 activation 함수를 적용한 결과로 나타난다. $$ y = \sigma ( U x ) $$ 이 식에서 $\sigma$는 activation 함수를 뜻한다. 하나의 은닉층(hidden layer)을 가지는 MLP(Multi-Layer Perceptron)의 경우에는 다음과 같이 표현할 수 있다. $$ h = \sigma(U x) $$ $$ o = \sigma(V h) $$ 이 식에서 $h$는 은닉층 벡터, $o$은 출력 벡터, $U$는 입력으로부터 은닉층까지의 가중치 행렬, $V$는 은닉층으로부터 츨력까지의 가중치 행렬이다. RNN에서는 출력 벡터 $o$ 이외에도 상태 벡터 $s$를 출력한다. 상태 벡터는 일종의 은닉층 벡터와 비슷하지만 입력 $x$ 뿐 아니라 바로 전단계의 상태 벡터 값에도 의존한다. 출력 벡터는 상태 벡터의 값에 의존한다. $$ s_t = \sigma(Ux_t + Ws_{t-1}) $$ $$ o_t = \sigma(Vs_t) $$ 여기에서 첨자 $t$는 순서열의 순서를 나타낸다. RNN은 시간 스텝에 따라 연결해서 펼쳐놓으면 무한개의 은닉층을 가진 MLP와 유사한 효과가 있다. 그림으로 나타내면 다음과 같다. <img src="http://d3kbpzbmcynnmx.cloudfront.net/wp-content/uploads/2015/09/rnn.jpg" style="width: 80%;"> <small>이미지 출처: http://www.wildml.com/2015/09/recurrent-neural-networks-tutorial-part-1-introduction-to-rnns/</small> 다만 MLP와 달리 상태 벡터의 변환 행렬이 고정되어 있다. 순서열 예측 RNN이 기존 신경망과 가장 다른 점은 순서열을 처리할 수 있다는 점이다. RNN에 입력 벡터 순서열 $x_1, x_2, \ldots, x_n$ 을 차례대로 입력하면 상태 순서열 $s_1, s_2, \ldots, s_n$이 내부적으로 생성되고 출력으로는 출력 순서열 $o_1, o_2, \ldots, o_n$ 이 나온다. 만약 원하는 결과가 출력 순서열 $o_1, o_2, \ldots, o_n$ 이 target 순서열 $y_1, y_2, \ldots, y_n$ 과 같아지는 것이라면 입력 순서열 길이와 출력 순서열 길이가 같은 특수한 경우의 sequnce-to-sequence (many-to-many) 예측 문제가 되고 순서열의 마지막 출력 $o_n$ 값이 $y_n$값과 같아지는 것만 목표라면 단순한 sequence to value (many-to-one) 문제가 된다. <img src="https://deeplearning4j.org/img/rnn_masking_1.png" style="width: 100%;"> <small>이미지 출처: https://deeplearning4j.org/usingrnns</small> Back-Propagation Through Time (BPTT) RNN은 시간에 따라 펼쳐놓으면 구조가 MLP와 유사하기 때문에 Back-Propagation 방법으로 gradient를 계산할 수 있다. 다만 실제로 여러개의 은닉층이 있는 것이 아니라 시간 차원에서 존재하기 때문에 Back-Propagation Through Time (BPTT) 방법이라고 한다. Keras를 사용한 RNN 구현 파이썬용 신경망 패키지인 Keras를 사용해서 RNN을 구현해 보자. Keras 는 theano 나 tensorflow 를 사용하여 신경망을 구현해 줄 수 있도록 하는 고수준 라이브러리다. Keras 는 다양한 형태의 신경망 구조를 블럭 형태로 제공하고 있으며 SimpleRNN, LSTM, GRU 와 같은 RNN 구조도 제공한다. Keras에서 제공하는 RNN 에 대한 자세한 내용은 다음을 참조한다. https://keras.io/layers/recurrent/ 시계열 예측 문제 풀어야 할 문제는 다음과 같은 사인파형 시계열을 입력으로 다음 스텝의 출력을 예측하는 단순한 시계열 예측 문제이다. End of explanation from scipy.linalg import toeplitz S = np.fliplr(toeplitz(np.r_[s[-1], np.zeros(s.shape[0] - 2)], s[::-1])) S[:5, :3] X_train = S[:-1, :3][:, :, np.newaxis] Y_train = S[:-1, 3] X_train.shape, Y_train.shape X_train[:2] Y_train[:2] plt.subplot(211) plt.plot([0, 1, 2], X_train[0].flatten(), 'bo-', label="input sequence") plt.plot([3], Y_train[0], 'ro', label="target") plt.xlim(-0.5, 4.5) plt.ylim(-1.1, 1.1) plt.legend() plt.title("First sample sequence") plt.subplot(212) plt.plot([1, 2, 3], X_train[1].flatten(), 'bo-', label="input sequence") plt.plot([4], Y_train[1], 'ro', label="target") plt.xlim(-0.5, 4.5) plt.ylim(-1.1, 1.1) plt.legend() plt.title("Second sample sequence") plt.tight_layout() plt.show() Explanation: 이 시계열 자료에서 입력 순서열과 target 값을 만든다. 입력 순서열은 3 스텝 크기의 순서열을 사용하고 target으로는 그 다음 시간 스텝의 값을 사용한다. 
즉, 3개의 순서열을 입력한 다음 마지막 출력값이 target과 일치하게 만드는 sequence-to-value (many-to-one) 문제를 풀어보도록 한다. Keras 에서 RNN 을 사용하려면 입력 데이터는 (nb_samples, timesteps, input_dim) 크기를 가지는 ndim=3인 3차원 텐서(tensor) 형태이어야 한다. nb_samples: 자료의 수 timesteps: 순서열의 길이 input_dim: x 벡터의 크기 여기에서는 단일 시계열이므로 input_dim = 1 이고 3 스텝 크기의 순서열을 사용하므로 timesteps = 3 이며 자료의 수는 18 개이다. 다음코드와 같이 원래의 시계열 벡터를 Toeplitz 행렬 형태로 변환하여 3차원 텐서를 만든다. End of explanation from keras.models import Sequential from keras.layers import SimpleRNN, Dense np.random.seed(0) model = Sequential() model.add(SimpleRNN(10, input_dim=1, input_length=3)) model.add(Dense(1)) model.compile(loss='mse', optimizer='sgd') Explanation: Keras의 SimpleRNN 클래스 Keras에서 신경망 모형은 다음과 같은 순서로 만든다. Sequential 클래스 객체인 모형을 생성한다. add 메서드로 다양한 레이어를 추가한다. compile 메서드로 목적함수 및 최적화 방법을 지정한다. fit 메서드로 가중치를 계산한다. 우선 가장 단순한 신경망 구조인 SimpleRNN 클래스를 사용하는 방법은 다음 코드와 같다. 여기에서는 SimpleRNN 클래스 객체로 10개의 뉴런을 가지는 RNN 층을 만든다. 첫번째 인수로는 뉴런의 크기, input_dim 인수로는 벡터의 크기, input_length 인수로는 순서열으 길이를 입력한다. 그 다음으로 SimpleRNN 클래스 객체에서 나오는 10개의 출력값을 하나로 묶어 실수 값을 출력으로 만들기 위해 Dense 클래스 객체를 추가하였다. 손실 함수로는 mean-squred-error를, 최적화 방법으로는 단순한 stochastic gradient descent 방법을 사용한다. End of explanation plt.plot(Y_train, 'ro-', label="target") plt.plot(model.predict(X_train[:,:,:]), 'bs-', label="output") plt.xlim(-0.5, 20.5) plt.ylim(-1.1, 1.1) plt.legend() plt.title("Before training") plt.show() Explanation: 일단 학습을 시키기 이전에 나오는 출력을 살펴보자. End of explanation history = model.fit(X_train, Y_train, nb_epoch=100, verbose=0) plt.plot(history.history["loss"]) plt.title("Loss") plt.show() Explanation: fit 메서드로 학습을 한다. End of explanation plt.plot(Y_train, 'ro-', label="target") plt.plot(model.predict(X_train[:,:,:]), 'bs-', label="output") plt.xlim(-0.5, 20.5) plt.ylim(-1.1, 1.1) plt.legend() plt.title("After training") plt.show() Explanation: 학습을 마친 후의 출력은 다음과 같다. End of explanation from keras.layers import TimeDistributed model2 = Sequential() model2.add(SimpleRNN(20, input_dim=1, input_length=3, return_sequences=True)) model2.add(TimeDistributed(Dense(1))) model2.compile(loss='mse', optimizer='sgd') Explanation: SimpleRNN 클래스 생성시 return_sequences 인수를 True로 하면 출력 순서열 중 마지막 값만 출력하는 것이 아니라 전체 순서열을 3차원 텐서 형태로 출력하므로 sequence-to-sequence 문제로 풀 수 있다. 다만 입력 순서열과 출력 순서열의 크기는 같아야 한다. 다만 이 경우에는 다음에 오는 Dense 클래스 객체를 TimeDistributed wrapper를 사용하여 3차원 텐서 입력을 받을 수 있게 확장해 주어야 한다. End of explanation X_train2 = S[:-3, 0:3][:, :, np.newaxis] Y_train2 = S[:-3, 3:6][:, :, np.newaxis] X_train2.shape, Y_train2.shape X_train2[:2] Y_train2[:2] plt.subplot(211) plt.plot([0, 1, 2], X_train2[0].flatten(), 'bo-', label="input sequence") plt.plot([3, 4, 5], Y_train2[0].flatten(), 'ro-', label="target sequence") plt.xlim(-0.5, 6.5) plt.ylim(-1.1, 1.1) plt.legend() plt.title("First sample sequence") plt.subplot(212) plt.plot([1, 2, 3], X_train2[1].flatten(), 'bo-', label="input sequence") plt.plot([4, 5, 6], Y_train2[1].flatten(), 'ro-', label="target sequence") plt.xlim(-0.5, 6.5) plt.ylim(-1.1, 1.1) plt.legend() plt.title("Second sample sequence") plt.tight_layout() plt.show() history2 = model2.fit(X_train2, Y_train2, nb_epoch=100, verbose=0) plt.plot(history2.history["loss"]) plt.title("Loss") plt.show() Explanation: 이 번에는 출력값도 3개짜리 순서열로 한다. 
End of explanation plt.subplot(211) plt.plot([1, 2, 3], X_train2[1].flatten(), 'bo-', label="input sequence") plt.plot([4, 5, 6], Y_train2[1].flatten(), 'ro-', label="target sequence") plt.plot([4, 5, 6], model2.predict(X_train2[1:2,:,:]).flatten(), 'gs-', label="output sequence") plt.xlim(-0.5, 7.5) plt.ylim(-1.1, 1.1) plt.legend() plt.title("Second sample sequence") plt.subplot(212) plt.plot([2, 3, 4], X_train2[2].flatten(), 'bo-', label="input sequence") plt.plot([5, 6, 7], Y_train2[2].flatten(), 'ro-', label="target sequence") plt.plot([5, 6, 7], model2.predict(X_train2[2:3,:,:]).flatten(), 'gs-', label="output sequence") plt.xlim(-0.5, 7.5) plt.ylim(-1.1, 1.1) plt.legend() plt.title("Third sample sequence") plt.tight_layout() plt.show() Explanation: 학습 결과는 다음과 같다. End of explanation
5,030
Given the following text description, write Python code to implement the functionality described below step by step Description: Simulation with the Shyft API Introduction In Part I of the simulation tutorials, we covered conducting a very simple simulation of an example catchment using configuration files. This is a typical use case, but assumes that you have a model well configured and ready for simulation. In practice, one is likely to be interested in exploring different models, configurations, and data sources. Shyft provides a set of tools to conduct this type of analysis, known as the "multiple working model" hypothesis. This is in fact a key idea of Shyft -- to make it simple to evaluate the impact of the selection of model routine (or forcing data) on the performance of the simulation. In this notebook we walk through a lower level paradigm of working with the toolbox and using the Shyft API directly to conduct the simulations. This notebook is guiding through the simulation process of a catchment. The following steps are described Step1: The Shyft Environment This next step is highly specific on how and where you have installed Shyft. If you have followed the guidelines at github, and cloned the three shyft repositories Step2: 2. Shyft initialization In Part I we used the YAMLSimConfig to read information regarding a preconfigured case. The configuration data for this case was stored in .yaml files. In this tutorial, we will walk through the process of using Shyft directly from the shyft.api and loading data from scratch. We will use the same dataset as an example. The data is stored in the shyft-data repository, so you must be sure to have cloned that in a parallel directory to shyft-doc. We can review the yaml configuration files from first tutorial. In particular, let's start by looking at the datasets_config_file which points to our data. Let's just look at a part of this file under the sources stanza Step3: The way we have structured the example datasets is to have a single netcdf file for each variable (e.g. precipitation, temperature, etc.) and a collection of stations within each file. The file has dimensions of time and station, where station is a unique id. The variables are time, series_name, x, y, z to give coordinates of the station. In addition we use a crs which is a code used to identify the coordinate system used. This is the approach of the example, you may use whatever works for you to keep your geolocation data coordinated. Lastly, the variable precipitation contains the precipitation data. Creating a Shyft source In Shyft we have the concept of a source. The idea is that a source provides data to the simulation. Every cell in the region contains an env_ts attribute, which is a collection of the forcing time series. Currently, environmental time series include Step6: You can explore further all the individual variables, but take note that the p variable has all the metadata required for geolocation as well as units. Further it contains 10 stations with 8760 points. Also note that the time variable contains a units definition which we will use. Step7: Note that in this example, the two values are the same. However, using shyft utctime we assure we keep track of the timezone. If your netcdf time.units attribute is different, Shyft will make the converstion into utctime. Look at the data Before proceeding, we'll take a quick look at our precipitation data to assure it is as we expect. Step8: Create api.TimeSeries Now, we have the data and time, or index value. 
Next we will create a Shyft object type, shyft.api.TimeSeries. To the uninitiated, this step may feel complex. In actuality it is a simple step and enables a great deal of underlying functionality within Shyft. We'll extract the data from the netcdf file and work with the individual data series. NOTE the purpose here is to demonstrate the details, and is a very inefficient way to work. As stated previously, normally one would write their own repository to conduct all of this 'under the covers' using optimized python tools (e.g. xarray, numpy, etc.). The point here is that in the next steps, you can see how to work with individual data series that you may have read into python from, say for example, a ahem text file. Step9: Exploring the Shyft API In the following section we'll explore several components of the shyft.api. We consider the strength of Shyft to lie within this Application Programming Interface, or API. To the uninitiated, it adds quite a degree of complexity. However, once you understand the different components and paradigms of Shyft, you'll see the flexibility the API offers provides a great number of possibilities for exploring hydrologic simulations. The API approach will take a bit more code to get started, but will allow great flexibility later on. Step10: In the first tutorial we discussed the region_model. If you are unfamiliar with this class, we recommend reviewing the description. The concept of Shyft repositories In Shyft, we consider that input data is a "source". Our source data resides in some kind of data serialization... be it a text file, netcdf file, or database... One could have any kind of storage format for the source data. Repositories are Python based interfaces to data. Several have been created within Shyft already, but users are encouraged to create their own. A guiding paradigm to Shyft is that data should live as close to the source as possible (ideally, at the source). The repositories connect to the data source and make the data available to Shyft. Step11: Now we have exposed the repositories that we connected to our region_model during configuration. Having access to the repositories, means that we have access to the input data sources directly (found in geo_ts_repository). We also have several other repositories, including a repository for the interpolation parameters, initial state, and the region_model. We'll explore some of these a bit deeper now. But first we'll expose a few more pieces of information from the region_model while we're at it. Step12: The epsg is simply the domain projection information for our simulation. bbox provides the bounding box coordinates. period gives the total period of the simulation. Lastly, we create a tuple of the 'geolocated timeseries names' or geo_ts_names as it is referred to here. And use this to get the sources out of our repository. Note that these names Step13: The geo_ts_repo is a collection of geolocated timeseries repositories. Note that geo_ts_repo has an attribute Step14: We can explore further and see each element is in itself an api.PrecipitationSource, which has a timeseries (ts). Recall from the first tutorial that we can easily convert the timeseries.time_axis into datetime values for plotting. Let's plot the precip of each of the sources Step15: Before we leave this section, we can also take a quick look at the interpolation_param_repo. 
This is a different type of repository, and it contains the parameters that will be passed to the interpolation algorithm to take a point-source timeseries and interpolate them to the Shyft cells, or in the context of of the API Step16: One quickly recognizes the same input source type keywords that are used as keys to the params dictionary. params is simply a dictionary of dictionaries which contains the parameters used by the interpolation model that is specific for each source type. In closing, one is encouraged to understand well the concept of the repositories. As a user of Shyft, it is likely you'll want to create your own repository to access your data directly rather than creating input files for Shyft. Keep in mind that the repositories are Python code, and not a part of the core C++ code of Shyft. They are designed to provide an interface between the C++ code and potentially more 'pythonic' paradigms. In the following section, you'll see that we populate a C++ class from a repository collection. The ARegionEnvironment class The next thing we'll do is to create an api.ARegionEnvironment class to use in our custom simulation. As the geo_ts_repo was a Python interface that provided a collection of all the timeseries repositories, the region_env is an API type that provides a container of the "sources" of data specific to the model. We will now create an api.ARegionEnvironment from the geo_ts_repo. It may be helpful to think of a region_env as the container of input data for the region_model -- in fact, that is what it is. Step17: What we have done here is to convert our input data from the Python based repositories into a C++ type object that is used in the Shyft core. It may feel redundant to geo_ts_repo, but there are underlying differences. Still, you'll see that now the 'sources' are direct attributes of the region_env class Step18: Interpolation Parameters In the same manner that we need to convert the sources from the Python based container, we'll also create an API object from the interpolation_param_repo. Step19: Okay, now we are set to rebuild our region_model from scratch. In the next few steps we're going to walk through initialization of the region_model to set it up for simulation. Initialization of the region_model The two shyft.api types Step20: Okay, that was simple. Let's look at the timeseries in some individual cells. The following is a bit of a contrived example, but it shows some aspects of the api. We'll plot the temperature series of all the cells in one sub-catchment, and color them by elevation. Step21: As we would expect from the temperature kriging method, we should find higher elevations have colder temperatures. As an exercise you could explore this relationship using a scatter plot. Now we're going to create a function that will read initial states from the initial_state_repo. In practice, this is already done by the ConfgiSimulator, but to demonstrate lower level functions, we'll reset the states of our region_model Step22: Don't worry too much about the function for now, but do take note of the init_state object that we created. This is another container, this time it is a class that contains PTGSKStateWithId objects, which are specific to the model stack implemented in the simulation (in this case PTGSK). If we explore an individual state object, we'll see init_state contains, for each cell in our simulation, the state variables for each 'method' of the method stack. 
The .id of type CellStateId member ensures that the geographic personality of a saved state can be matched to the cells of the model that you now keep in memory (a safeguard). Let's look more closely
Python Code: # Pure python modules and jupyter notebook functionality # first you should import the third-party python modules which you'll use later on # the first line enables that figures are shown inline, directly in the notebook %matplotlib inline import os import datetime as dt import pandas as pd from netCDF4 import Dataset from os import path import sys from matplotlib import pyplot as plt Explanation: Simulation with the Shyft API Introduction In Part I of the simulation tutorials, we covered conducting a very simple simulation of an example catchment using configuration files. This is a typical use case, but assumes that you have a model well configured and ready for simulation. In practice, one is likely to be interested in exploring different models, configurations, and data sources. Shyft provides a set of tools to conduct this type of analysis, known as the "multiple working model" hypothesis. This is in fact a key idea of Shyft -- to make it simple to evaluate the impact of the selection of model routine (or forcing data) on the performance of the simulation. In this notebook we walk through a lower level paradigm of working with the toolbox and using the Shyft API directly to conduct the simulations. This notebook is guiding through the simulation process of a catchment. The following steps are described: Loading required python modules and setting path to SHyFT installation Shyft initialization Running a Shyft simulation with updated parameters Activating the simulation only for selected catchments Setting up different input datasets Changing state collection settings Post processing and extracting results 1. Loading required python modules and setting path to SHyFT installation Shyft requires a number of different modules to be loaded as part of the package. Below, we describe the required steps for loading the modules, and note that some steps are only required for the use of the jupyter notebook. End of explanation # try to auto-configure the path, -will work in all cases where doc and data # are checked out at same level shyft_data_path = path.abspath("../../../shyft-data") if path.exists(shyft_data_path) and 'SHYFT_DATA' not in os.environ: os.environ['SHYFT_DATA']=shyft_data_path # shyft should be available either by it's install in python # or by PYTHONPATH set by user prior to starting notebook. # This is equivalent to the two lines below # shyft_path=path.abspath('../../../shyft') # sys.path.insert(0,shyft_path) # Shyft imports from shyft import api Explanation: The Shyft Environment This next step is highly specific on how and where you have installed Shyft. If you have followed the guidelines at github, and cloned the three shyft repositories: i) shyft, ii) shyft-data, and iii) shyft-doc, then you may need to tell jupyter notebooks where to find shyft. Uncomment the relevant lines below. If you have a 'system' shyft, or used conda install -s sigbjorn shyft to install shyft, then you probably will want to make sure you have set the SHYFT_DATA directory correctly, as otherwise, Shyft will assume the above structure and fail. This has to be done before import shyft. In that case, uncomment the relevant lines below. note: it is most likely that you'll need to do one or the other. 
End of explanation # now let's read the 'precipitation.nc' file into our workspace using a netCDF4 Dataset data_directory = os.path.join(os.environ['SHYFT_DATA'], 'netcdf/orchestration-testdata/') precipitation = Dataset(os.path.join(data_directory, 'precipitation.nc')) # explore the prec object to see what variables it contains. print(precipitation) Explanation: 2. Shyft initialization In Part I we used the YAMLSimConfig to read information regarding a preconfigured case. The configuration data for this case was stored in .yaml files. In this tutorial, we will walk through the process of using Shyft directly from the shyft.api and loading data from scratch. We will use the same dataset as an example. The data is stored in the shyft-data repository, so you must be sure to have cloned that in a parallel directory to shyft-doc. We can review the yaml configuration files from first tutorial. In particular, let's start by looking at the datasets_config_file which points to our data. Let's just look at a part of this file under the sources stanza: sources: - repository: !!python/name:shyft.repository.netcdf.cf_geo_ts_repository.CFDataRepository types: - precipitation params: stations_met: ../shyft-data/netcdf/orchestration-testdata/precipitation.nc selection_criteria: null As described earlier, yaml configuration files provide a great deal of flexibility to configure runs, and save information. This particular example is showing one repository within the file for the type "precipitation". In this tutorial, we're simply interested in the path for the data. Looking at the file you'll see all data sets are stored within: ../shyft-data/netcdf/orchestration-testdata/ Recall that we have either set above, or from installation, have a SHYFT_DATA environment variable. This is still where our data is stored for this example. Next we can use this variable and read in the actually precipitation data using a pure (netCDF4) python approach. Note we continue to work with the example netcdf files, but one could easily use their own data here as well Repositories To use Shyft efficiently, one should familiarize themselves with the concept of repositories. Briefly, here, we'll borrow some of the functions in our example repository used to read the netcdf data. Hopefully, between the examples here, and the information provided in the repositories section, you'll gain enough understanding to start to write your own respositories for your data. Again, refering to the sources stanza above, note that the first key is in fact repository and directs a user to the class that is used to read the data: CFDataRepository. This repository provides more functionality than is required for the example here, for example, for sub-setting data. However, once you have completed this tutorial it is a good idea to take a look at the CFDataRepository to gain an understanding of how they work. But first, let's read that precipitation data... End of explanation # prec contains all the precipitation forcing data we'll use: x = precipitation.variables['x'] y = precipitation.variables['y'] z = precipitation.variables['z'] crs = precipitation.variables['crs'] time = precipitation.variables['time'] prec = precipitation.variables['precipitation'] names = precipitation.variables['series_name'] print(prec) print(time) Explanation: The way we have structured the example datasets is to have a single netcdf file for each variable (e.g. precipitation, temperature, etc.) and a collection of stations within each file. 
The file has dimensions of time and station, where station is a unique id. The variables are time, series_name, x, y, z to give coordinates of the station. In addition we use a crs which is a code used to identify the coordinate system used. This is the approach of the example, you may use whatever works for you to keep your geolocation data coordinated. Lastly, the variable precipitation contains the precipitation data. Creating a Shyft source In Shyft we have the concept of a source. The idea is that a source provides data to the simulation. Every cell in the region contains an env_ts attribute, which is a collection of the forcing time series. Currently, environmental time series include: precipitation temperature rel_hum radiation wind_speed These are the forcing variables required for simulation using the existing algorithms, and for each one there is a Shyft source type. The cells get populated with these forcing data through an interpolation step. This is true both for point time series data or gridded forcing data. In the latter case, the interpolation treats each grid cell as a 'station'. Improvements to this approach are on the development roadmap. We now need to take our time series data read in from the netcdf file, and create api.PrecipitationSource which also requires that we convert our raw numpy time series into api.TimeSeries. The following steps may seem complex, and some may wonder why not work with plain numpy. The reason is for efficiency. Shyft is optimized using specialized data types. We use the api.TimeSeries in order to assure robust and efficient underlying calculations. We will work through a full example for precipitation, and then rely on a repository to load the rest of our data. End of explanation from netcdftime import utime import numpy as np These are the current supported regular time-step intervals delta_t_dic = {'days': api.deltahours(24), 'hours': api.deltahours(1), 'minutes': api.deltaminutes(1), 'seconds': api.Calendar.SECOND} def convert_netcdf_time(time_spec:str, t): Converts supplied numpy array to shyft utctime given netcdf time_spec. Throws exception if time-unit is not supported, i.e. not part of delta_t_dic as specified in this file. Parameters ---------- time_spec: string from netcdef like 'hours since 1970-01-01 00:00:00' t: numpy array Returns ------- numpy array type int64 with new shyft utctime units (seconds since 1970utc) u = utime(time_spec) t_origin = api.Calendar(int(u.tzoffset)).time(u.origin.year, u.origin.month, u.origin.day, u.origin.hour, u.origin.minute, u.origin.second) #print (t[0],t_origin,delta_t_dic[u.units]) delta_t = delta_t_dic[u.units] #return (t_origin + delta_t * t[:]).astype(np.int64) return t_origin + delta_t * t[:].astype(np.int64) # note, the above could have simply been imported: from shyft.repository.netcdf.time_conversion import convert_netcdf_time as cnt Explanation: You can explore further all the individual variables, but take note that the p variable has all the metadata required for geolocation as well as units. Further it contains 10 stations with 8760 points. Also note that the time variable contains a units definition which we will use. End of explanation fig = plt.figure(figsize=(10, 6)) for i, name in enumerate(names): y = prec[:,i] plt.plot(time, y, label=name) plt.title('10 Precipitation Stations') plt.ylabel('Precipitation [mm]') plt.xlabel('UTC Time [seconds since 1970-0-0]') plt.legend() Explanation: Note that in this example, the two values are the same. 
However, using shyft utctime we assure we keep track of the timezone. If your netcdf time.units attribute is different, Shyft will make the converstion into utctime. Look at the data Before proceeding, we'll take a quick look at our precipitation data to assure it is as we expect. End of explanation # we'll start by creating a time_axis: # help(api.TimeAxis) t = time[:] # get the end time of the last interval as an integer: end_t = int(2*t[-1] - t[-2]) # create a time_axis for the simulation time_axis = api.TimeAxis(api.UtcTimeVector.from_numpy(t.astype(int)), end_t) # here we used the call structure: __init__( (object)arg1, (UtcTimeVector)time_points, (int)t_end) -> None # creates a time-axis by specifying the time_points and t-end of the last interval # next convert data time series into Shyft.api.TimeSeries objects: p0 = api.TimeSeries(time_axis, api.DoubleVector.from_numpy(prec[:,0].flatten()), api.POINT_AVERAGE_VALUE) Explanation: Create api.TimeSeries Now, we have the data and time, or index value. Next we will create a Shyft object type, shyft.api.TimeSeries. To the uninitiated, this step may feel complex. In actuality it is a simple step and enables a great deal of underlying functionality within Shyft. We'll extract the data from the netcdf file and work with the individual data series. NOTE the purpose here is to demonstrate the details, and is a very inefficient way to work. As stated previously, normally one would write their own repository to conduct all of this 'under the covers' using optimized python tools (e.g. xarray, numpy, etc.). The point here is that in the next steps, you can see how to work with individual data series that you may have read into python from, say for example, a ahem text file. End of explanation # get the simulator from yaml from shyft.orchestration.configuration.yaml_configs import YAMLSimConfig from shyft.orchestration.simulators.config_simulator import ConfigSimulator config_file_path = os.path.abspath("./nea-config/neanidelva_simulation.yaml") cfg = YAMLSimConfig(config_file_path, "neanidelva") simulator = ConfigSimulator(cfg) # region_model = simulator.region_model region_model_id = simulator.region_model_id interpolation_id = simulator.interpolation_id Explanation: Exploring the Shyft API In the following section we'll explore several components of the shyft.api. We consider the strength of Shyft to lie within this Application Programming Interface, or API. To the uninitiated, it adds quite a degree of complexity. However, once you understand the different components and paradigms of Shyft, you'll see the flexibility the API offers provides a great number of possibilities for exploring hydrologic simulations. The API approach will take a bit more code to get started, but will allow great flexibility later on. End of explanation # expose the repositories region_model_repo = simulator.region_model_repository interpolation_param_repo = simulator.ip_repos geo_ts_repo = simulator.geo_ts_repository initial_state_repo = simulator.initial_state_repo Explanation: In the first tutorial we discussed the region_model. If you are unfamiliar with this class, we recommend reviewing the description. The concept of Shyft repositories In Shyft, we consider that input data is a "source". Our source data resides in some kind of data serialization... be it a text file, netcdf file, or database... One could have any kind of storage format for the source data. Repositories are Python based interfaces to data. 
Several have been created within Shyft already, but users are encouraged to create their own. A guiding paradigm to Shyft is that data should live as close to the source as possible (ideally, at the source). The repositories connect to the data source and make the data available to Shyft. End of explanation epsg = region_model.bounding_region.epsg() bbox = region_model.bounding_region.bounding_polygon(epsg) period = region_model.time_axis.total_period() geo_ts_names = ("temperature", "wind_speed", "precipitation", "relative_humidity", "radiation") sources = geo_ts_repo.get_timeseries(geo_ts_names, period, geo_location_criteria=bbox) Explanation: Now we have exposed the repositories that we connected to our region_model during configuration. Having access to the repositories, means that we have access to the input data sources directly (found in geo_ts_repository). We also have several other repositories, including a repository for the interpolation parameters, initial state, and the region_model. We'll explore some of these a bit deeper now. But first we'll expose a few more pieces of information from the region_model while we're at it. End of explanation # explore geo_ts_repo #help(geo_ts_repo) Explanation: The epsg is simply the domain projection information for our simulation. bbox provides the bounding box coordinates. period gives the total period of the simulation. Lastly, we create a tuple of the 'geolocated timeseries names' or geo_ts_names as it is referred to here. And use this to get the sources out of our repository. Note that these names: temperature wind_speed precipitation relative_humidity radiation Are embedded into Shyft as timeseries names that are required for simulations. In the current implementations, these are the default names used in repositories and, at present, the only forcing data required. If one were to develop new algorithms that reqiured other forcings, you would need to define these in a custom repository. See interfaces.py for more details. Before going further, let's look at what we have so far... We won't look in detail at all the repositories, but let's take a look at the geo_ts_repo: End of explanation # above we already created a `sources` dictionary by # using the `get_timeseries` method. This method takes a # list of the timeseries names as input and a period # of type 'shyft.api._api.UtcPeriod' # it returns a dictionary, keyed by the names of the timeseries prec = sources['precipitation'] # `prec` is now a `api.PrecipitationSourceVector` and if you look # you'll see it 10 elements: print(len(prec)) Explanation: The geo_ts_repo is a collection of geolocated timeseries repositories. Note that geo_ts_repo has an attribute: .geo_ts_repositories... this seems redundant? This is simply a list of the repositories this class is 'managing'. Maybe we want to look at the precipitation input series in more detail. We can get at those via this class. NOTE, this may not be the most typical way to look at your input data (presumably you may have already done this before the simulation working with the raw netcdf files), but in case you wish to see the datasets from the "model" perspective, this is how you gain access. Also, maybe you want to conduct a simulation, then make a data correction. You could do that by accessing the values here. Each of the aforementioned series types have a specialized source vector type in Shyft. In the case of precipitation it is a api.PrecipitationSourceVector. 
If we dig into this, we'll find some aspects familiar from the first tutorial. Let's get the precipitation timeseries out of the repository for the period of the simulation first: End of explanation fig, ax = plt.subplots(figsize=(15,10)) for pr in prec: t,p = [dt.datetime.utcfromtimestamp(t_.start) for t_ in pr.ts.time_axis], pr.ts.values ax.plot(t,p, label=pr.mid_point().x) #uid is empty now, but we reserve for later use fig.autofmt_xdate() ax.legend(title="Precipitation Input Sources") ax.set_ylabel("precip[mm/hr]") Explanation: We can explore further and see each element is in itself an api.PrecipitationSource, which has a timeseries (ts). Recall from the first tutorial that we can easily convert the timeseries.time_axis into datetime values for plotting. Let's plot the precip of each of the sources: End of explanation interpolation_param_repo.params Explanation: Before we leave this section, we can also take a quick look at the interpolation_param_repo. This is a different type of repository, and it contains the parameters that will be passed to the interpolation algorithm to take a point-source timeseries and interpolate them to the Shyft cells, or in the context of of the API: region_model.cells. We'll quickly look at the .params attribute, which is a dictionary. End of explanation def get_region_env(sources_): region_env_ = api.ARegionEnvironment() region_env_.temperature = sources_["temperature"] region_env_.precipitation = sources_["precipitation"] region_env_.radiation = sources_["radiation"] region_env_.wind_speed = sources_["wind_speed"] region_env_.rel_hum = sources_["relative_humidity"] return region_env_ region_env = get_region_env(sources) Explanation: One quickly recognizes the same input source type keywords that are used as keys to the params dictionary. params is simply a dictionary of dictionaries which contains the parameters used by the interpolation model that is specific for each source type. In closing, one is encouraged to understand well the concept of the repositories. As a user of Shyft, it is likely you'll want to create your own repository to access your data directly rather than creating input files for Shyft. Keep in mind that the repositories are Python code, and not a part of the core C++ code of Shyft. They are designed to provide an interface between the C++ code and potentially more 'pythonic' paradigms. In the following section, you'll see that we populate a C++ class from a repository collection. The ARegionEnvironment class The next thing we'll do is to create an api.ARegionEnvironment class to use in our custom simulation. As the geo_ts_repo was a Python interface that provided a collection of all the timeseries repositories, the region_env is an API type that provides a container of the "sources" of data specific to the model. We will now create an api.ARegionEnvironment from the geo_ts_repo. It may be helpful to think of a region_env as the container of input data for the region_model -- in fact, that is what it is. End of explanation print(len(region_env.precipitation)) type(region_env.precipitation[0]) Explanation: What we have done here is to convert our input data from the Python based repositories into a C++ type object that is used in the Shyft core. It may feel redundant to geo_ts_repo, but there are underlying differences. 
Still, you'll see that now the 'sources' are direct attributes of the region_env class: End of explanation interpolation_parameters = interpolation_param_repo.get_parameters(interpolation_id) Explanation: Interpolation Parameters In the same manner that we need to convert the sources from the Python based container, we'll also create an API object from the interpolation_param_repo. End of explanation #region_model.run_interpolation(interpolation_parameters, region_model.time_axis, region_env) region_model.interpolate(interpolation_parameters, region_env) Explanation: Okay, now we are set to rebuild our region_model from scratch. In the next few steps we're going to walk through initialization of the region_model to set it up for simulation. Initialization of the region_model The two shyft.api types: api.ARegionEnvironment and api.InterpolationParameter together are used to initialize the region_model. In the next step, all of the timeseries input sources are interpolated to the geolocated model cells. After this step, each cell is the model has it's own env_ts which contains the timeseries for that cell. Let's first do the interpolation, the we can explore the region_model.cells a bit further. End of explanation from matplotlib.cm import jet as jet from matplotlib.colors import Normalize # get all the cells for one sub-catchment with 'id' == 1228 c1228 = [c for c in region_model.cells if c.geo.catchment_id() == 1228] # for plotting, create an mpl normalizer based on min,max elevation elv = [c.geo.mid_point().z for c in c1228] norm = Normalize(min(elv), max(elv)) #plot with line color a function of elevation fig, ax = plt.subplots(figsize=(15,10)) # here we are cycling through each of the cells in c1228 for dat,elv in zip([c.env_ts.temperature.values for c in c1228], [c.mid_point().z for c in c1228]): ax.plot(dat, color=jet(norm(elv)), label=int(elv)) # the following is just to plot the legend entries and not related to Shyft handles, labels = ax.get_legend_handles_labels() # sort by labels import operator hl = sorted(zip(handles, labels), key=operator.itemgetter(1)) handles2, labels2 = zip(*hl) # show legend, but only every fifth entry ax.legend(handles2[::5], labels2[::5], title='Elevation [m]') Explanation: Okay, that was simple. Let's look at the timeseries in some individual cells. The following is a bit of a contrived example, but it shows some aspects of the api. We'll plot the temperature series of all the cells in one sub-catchment, and color them by elevation. End of explanation # create a function to read the states from the state repository def get_init_state_from_repo(initial_state_repo_, region_model_id_=None, timestamp=None): states = initial_state_repo_.find_state( region_model_id_criteria=region_model_id_, utc_timestamp_criteria=timestamp) if len(states) > 0: state_id = states[0].state_id # most_recent_state i.e. <= start time else: raise Exception('No initial state matching criteria.') return initial_state_repo_.get_state(state_id) init_state = get_init_state_from_repo(initial_state_repo, region_model_id_=region_model_id, timestamp=region_model.time_axis.start) Explanation: As we would expect from the temperature kriging method, we should find higher elevations have colder temperatures. As an exercise you could explore this relationship using a scatter plot. Now we're going to create a function that will read initial states from the initial_state_repo. 
In practice, this is already done by the ConfgiSimulator, but to demonstrate lower level functions, we'll reset the states of our region_model: End of explanation def print_pub_attr(obj): #only public attributes print(f'{obj.__class__.__name__}:\t',[attr for attr in dir(obj) if attr[0] is not '_']) print(len(init_state)) init_state_cell0 = init_state[0] # the identifier print_pub_attr(init_state_cell0.id) # gam snow states print_pub_attr(init_state_cell0.state.gs) #init_state_cell0.kirchner states print_pub_attr(init_state_cell0.state.kirchner) Explanation: Don't worry too much about the function for now, but do take note of the init_state object that we created. This is another container, this time it is a class that contains PTGSKStateWithId objects, which are specific to the model stack implemented in the simulation (in this case PTGSK). If we explore an individual state object, we'll see init_state contains, for each cell in our simulation, the state variables for each 'method' of the method stack. The .id of type CellStateId member ensures that the geographic personality of a saved state can be matched to the cells of the model that you now keep in memory (a safeguard). Let's look more closely: End of explanation
5,031
Given the following text description, write Python code to implement the functionality described below step by step Description: In this example we will begin to cover some of the extraction utilities in sisl allowing one to really go in-depth on analysis of calculations. We will begin by creating a large graphene flake, then subsequently a hole will be created by removing a circular shape. Subsequent to the calculations you are encouraged to explore the sisl toolbox for ways to extract important information regarding your system. Step1: We will also (for physical reasons) remove all dangling bonds, and secondly we will create a list of atomic indices which corresponds to the atoms at the edge of the hole. Step2: Exercises Please carefully go through the RUN.fdf file before running TBtrans, determine what each flag means and what it tells TBtrans to output. Now run TBtrans Step3: As this system is not a pristine periodic system we have a variety of options available for analysis. First and foremost we plot the transmission Step4: The contained data in the *.TBT.nc file is very much dependent on the flags in the fdf-file. To ease the overview of the available output and what is contained in the file one can execute the following block to see the content of the file. Check whether the bulk transmission is present in the output file and if so, add it to the plot above to compare the bulk transmission with the device transmission. There are two electrodes, hence two bulk transmissions. Is there a difference between the two? If so, why, if not, why not? Step5: Density of states We may also plot the Green function density of states as well as the spectral density of states from each electrode Step6: Can you from the above three quantities determine whether there are any localized states in the examined system? HINT Step7: Comparing the two previous figures leaves us with little knowlegde of the DOS ratio. I.e. both plots show the total DOS and they are summed over a different number of atoms (or orbitals if you wish). Instead of showing the total DOS we can normalize the DOS by the number of atoms; $1/N_a$ where $N_a$ is the number of atoms in the selected DOS region. With this normalization we can compare the average DOS on all atoms with the average DOS on only edge atoms. The tbtncSileTBtrans can readily do this for you. Read about the norm keyword in .DOS, and also look at the documentation for the .norm function Step8: DOS depending on the distance from the hole We can further analyze the DOS evolution for atoms farther and farther from the hole. The following DOS analysis will extract DOS (from $\mathbf G$) for the edge atoms, then for the nearest neighbours to the edge atoms, and for the next-nearest neighbours to the edge atoms. Try and extend the plot to contain the DOS of the next-next-nearest neighbours to the edge atoms.
Python Code: graphene = sisl.geom.graphene(orthogonal=True) elec = graphene.tile(25, axis=0) H = sisl.Hamiltonian(elec) H.construct(([0.1, 1.43], [0., -2.7])) H.write('ELEC.nc') device = elec.tile(15, axis=1) device = device.remove( device.close( device.center(what='cell'), R=10.) ) Explanation: In this example we will begin to cover some of the extraction utilities in sisl allowing one to really go in-depth on analysis of calculations. We will begin by creating a large graphene flake, then subsequently a hole will be created by removing a circular shape. Subsequent to the calculations you are encouraged to explore the sisl toolbox for ways to extract important information regarding your system. End of explanation dangling = [ia for ia in device.close(device.center(what='cell'), R=14.) if len(device.close(ia, R=1.43)) < 3] device = device.remove(dangling) edge = [] for ia in device.close(device.center(what='cell'), R=14.): if len(device.close(ia, R=1.43)) < 4: edge.append(ia) edge = np.array(edge) # Pretty-print the list of atoms (for use in sdata) # Note we add 1 to get fortran indices print(sisl.utils.list2str(edge + 1)) Hdev = sisl.Hamiltonian(device) Hdev.construct(([0.1, 1.43], [0, -2.7])) Hdev.geometry.write('device.xyz') Hdev.write('DEVICE.nc') Explanation: We will also (for physical reasons) remove all dangling bonds, and secondly we will create a list of atomic indices which corresponds to the atoms at the edge of the hole. End of explanation tbt = sisl.get_sile('siesta.TBT.nc') Explanation: Exercises Please carefully go through the RUN.fdf file before running TBtrans, determine what each flag means and what it tells TBtrans to output. Now run TBtrans: tbtrans RUN.fdf In addition to the previous example, many more files are now being created (for all files the siesta.TBT.AV* files are the $k$-averaged equivalents. You should, while reading the below list, also be able to specify which of the fdf-flags that are responsible for the creation of which files. siesta.TBT.ADOS_* The $k$-resolved spectral density of states injected from the named electrode siesta.TBT.BDOS_* The $k$-resolved bulk Green function density of states for the named electrode siesta.TBT.BTRANS_* The $k$-resolved bulk transmission through the named electrode siesta.TBT.DOS Green function density of states siesta.TBT.TEIG_&lt;1&gt;-&lt;2&gt; Transmission eigenvalues for electrons injected from &lt;1&gt; and emitted in &lt;2&gt;. This exercise mainly shows a variety of methods useful for extracting data from the *.TBT.nc files in a simple and consistent manner. You are encouraged to play with the routines, because the next example forces you to utilize them. End of explanation plt.plot(tbt.E, tbt.transmission(),label='Av'); plt.plot(tbt.E, tbt.transmission(kavg=tbt.kindex([0,0,0])), label=r'$\Gamma$'); plt.legend() plt.ylabel('Transmission'); plt.xlabel('Energy [eV]'); plt.ylim([0, None]); Explanation: As this system is not a pristine periodic system we have a variety of options available for analysis. First and foremost we plot the transmission: End of explanation print(tbt.info()) Explanation: The contained data in the *.TBT.nc file is very much dependent on the flags in the fdf-file. To ease the overview of the available output and what is contained in the file one can execute the following block to see the content of the file. Check whether the bulk transmission is present in the output file and if so, add it to the plot above to compare the bulk transmission with the device transmission. 
There are two electrodes, hence two bulk transmissions. Is there a difference between the two? If so, why, if not, why not? End of explanation DOS_all = [tbt.DOS(), tbt.ADOS(0), tbt.ADOS(1)] plt.plot(tbt.E, DOS_all[0], label='G'); plt.plot(tbt.E, DOS_all[1], label=r'$A_L$'); plt.plot(tbt.E, DOS_all[2], label=r'$A_R$'); plt.ylim([0, None]); plt.ylabel('Total DOS [1/eV]'); plt.xlabel('Energy [eV]'); plt.legend(); Explanation: Density of states We may also plot the Green function density of states as well as the spectral density of states from each electrode: End of explanation DOS_edge = [tbt.DOS(atoms=edge), tbt.ADOS(0, atoms=edge), tbt.ADOS(1, atoms=edge)] plt.plot(tbt.E, DOS_edge[0], label='G'); plt.plot(tbt.E, DOS_edge[1], label=r'$A_L$'); plt.plot(tbt.E, DOS_edge[2], label=r'$A_R$'); plt.ylim([0, None]); plt.ylabel('DOS on edge atoms [1/eV]'); plt.xlabel('Energy [eV]'); plt.legend(); Explanation: Can you from the above three quantities determine whether there are any localized states in the examined system? HINT: What is the sum of the spectral density of states ($\mathbf A_i$) compared to the Green function ($\mathbf G$) density of states? Examining DOS on individual atoms The total DOS is a measure of the DOS spread out in the entire atomic region. However, TBtrans calculates, and stores all DOS related quantities in orbital resolution. I.e. we are able to post-process the DOS and examine the atom and/or orbital resolved DOS. To do this the .DOS and .ADOS routines have two important keywords, 1) atom and 2) orbital which may be used to extract a subset of the DOS quantities. For details on extracting these subset quantities please read the documentation by executing the following line: help(tbt.DOS) The following code will extract the DOS only on the atoms in the hole edge. End of explanation N_all = tbt.norm(<change here>) N_edge = tbt.norm(<change here>) plt.plot(tbt.E, DOS_all[0] / N_all, label=r'$G$'); plt.plot(tbt.E, DOS_edge[0] / N_edge, label=r'$G_E$'); plt.ylim([0, None]); plt.ylabel('DOS [1/eV/atom]'); plt.xlabel('Energy [eV]'); plt.legend(); Explanation: Comparing the two previous figures leaves us with little knowlegde of the DOS ratio. I.e. both plots show the total DOS and they are summed over a different number of atoms (or orbitals if you wish). Instead of showing the total DOS we can normalize the DOS by the number of atoms; $1/N_a$ where $N_a$ is the number of atoms in the selected DOS region. With this normalization we can compare the average DOS on all atoms with the average DOS on only edge atoms. The tbtncSileTBtrans can readily do this for you. 
Read about the norm keyword in .DOS, and also look at the documentation for the .norm function: help(tbt.DOS) help(tbt.norm) Now create a plot with DOS normalized per atom by editing the below lines, feel free to add the remaining DOS plots to have them all: End of explanation # Get nearest neighbours to the edge atoms n_edge = Hdev.edges(edge, exclude=edge) # Get next-nearest neighbours to the edge atoms nn_edge = Hdev.edges(n_edge, exclude=np.hstack((edge, n_edge))) # Try and create the next-next-nearest neighbours to the edge atoms and add it to the plot plt.plot(tbt.E, tbt.DOS(atoms=edge, norm='atom'), label='edge: G'); plt.plot(tbt.E, tbt.DOS(atoms=n_edge, norm='atom'), label='n-edge: G'); plt.plot(tbt.E, tbt.DOS(atoms=nn_edge, norm='atom'), label='nn-edge: G'); plt.ylim([0, None]); plt.ylabel('DOS [1/eV/atom]'); plt.xlabel('Energy [eV]'); plt.legend(); Explanation: DOS depending on the distance from the hole We can further analyze the DOS evolution for atoms farther and farther from the hole. The following DOS analysis will extract DOS (from $\mathbf G$) for the edge atoms, then for the nearest neighbours to the edge atoms, and for the next-nearest neighbours to the edge atoms. Try and extend the plot to contain the DOS of the next-next-nearest neighbours to the edge atoms. End of explanation
5,032
Given the following text description, write Python code to implement the functionality described below step by step Description: Step1: Code Optimization and Multi Threading Writing Python code is one thing - writing efficient code is a much different thing. Optimizing your code may take a while; if it takes longer to work on the code than it takes to run it, the additional time spent on optimization is a bad investment. However, if you use that specific piece of code on a daily basis, it might be worth to spend more time on the optimization. Code Optimization You can optimize the runtime of your code by identifying bottlenecks Step3: The code works and returns the correct result - but it takes a while to calculate the result. This is in part due to the fact the we cut our function into 10000000 segments; a smaller number would work, too, but it we need this many iterations to signify the impact different functions used here have. Let's see how the runtime changes by changing the code. Lists List functions are useful but very memory consuming Step5: This is only a little bit faster, since there is still a list function in the code Step7: This is only a little bit faster. Numpy Arrays A good approach to saving runtime is always to use numpy functionality. Step8: Using numpy makes a huge difference, since numpy uses libraries written in C, which is much faster than Python. Numpy and Scipy Functions The most important rule to optimize your code is the following Step10: Using scipy.integrate.quad is again significantly faster. This is in part due to the fact that it does not use 10000000 segments over which it calculates the integral. Instead, it uses an adaptive algorithm that estimates (and outputs) the uncertainty on the integral and stop iterating if that uncertainty is smaller than some threshold. General Guideline to Optimize your Code Use the following rules in your coding to make your code run efficiently Step11: The results generated by the different tasks do not rely on each other, i.e., they could in principle be all calculated at the same time. This can be realized using the multiprocessing module
Python Code: import time def square(x): return x**2 def quadrature(func, a, b, n=10000000): use the quadrature rule to determine the integral over the function from a to b # calculate individual elements integral_elements = [func(a)/2.] for k in range(1, n): integral_elements.append(func(a+k*float(b-a)/n)) integral_elements.append(func(b)/2.) # sum up elements result = 0 for element in integral_elements: result += element # normalize result and return return float(b-a)/n*result start = time.time() print quadrature(square, 0, 1) print 'this took', time.time()-start, 's' Explanation: Code Optimization and Multi Threading Writing Python code is one thing - writing efficient code is a much different thing. Optimizing your code may take a while; if it takes longer to work on the code than it takes to run it, the additional time spent on optimization is a bad investment. However, if you use that specific piece of code on a daily basis, it might be worth to spend more time on the optimization. Code Optimization You can optimize the runtime of your code by identifying bottlenecks: find passages of your code that govern its runtime. Consider the following example in which the integral over a function is determined using the quadrature rule (https://en.wikipedia.org/wiki/Numerical_integration#Quadrature_rules_based_on_interpolating_functions): End of explanation def quadrature(func, a, b, n=10000000): use the quadrature rule to determine the integral over the function from a to b result = 0 # calculate individual elements and sum them up result += func(a)/2. for k in range(1, n): result += func(a+k*float(b-a)/n) result += func(b)/2. # normalize result and return return float(b-a)/n*result start = time.time() print quadrature(square, 0, 1) print 'this took', time.time()-start, 's' Explanation: The code works and returns the correct result - but it takes a while to calculate the result. This is in part due to the fact the we cut our function into 10000000 segments; a smaller number would work, too, but it we need this many iterations to signify the impact different functions used here have. Let's see how the runtime changes by changing the code. Lists List functions are useful but very memory consuming: every time something is appended to a list, Python checks how much memory is left at current location in the memory to add future elements. If the memory at the current location runs low, it will move the list in the memory, reserving space for additional list elements up to 1/2 the length of the current list - this is a problem for large lists. In this case, the use of lists is not necessary. Instead of using integral_elements, we can add up the results variable (a simple float value) on-the-fly, which also saves us the second for loop: End of explanation def quadrature(func, a, b, n=10000000): use the quadrature rule to determine the integral over the function from a to b result = 0 k = 1 # calculate individual elements and sum them up result += func(a)/2. while k < n: result += func(a+k*float(b-a)/n) k += 1 result += func(b)/2. # normalize result and return return float(b-a)/n*result start = time.time() print quadrature(square, 0, 1) print 'this took', time.time()-start, 's' Explanation: This is only a little bit faster, since there is still a list function in the code: range. 
Instead of using the for loop, let's try a while loop that uses an integer to count: End of explanation import numpy as np def quadrature(func, a, b, n=10000000): use the quadrature rule to determine the integral over the function from a to b result = 0 # calculate individual elements and sum them up result += func(a)/2. steps = a + np.arange(1, n, 1)*float(b-a)/n result += np.sum(func(steps)) result += func(b)/2. # normalize result and return return float(b-a)/n*result start = time.time() print quadrature(square, 0, 1) print 'this took', time.time()-start, 's' Explanation: This is only a little bit faster. Numpy Arrays A good approach to saving runtime is always to use numpy functionality. End of explanation from scipy.integrate import quad start = time.time() print quad(square, 0, 1) print 'this took', time.time()-start, 's' Explanation: Using numpy makes a huge difference, since numpy uses libraries written in C, which is much faster than Python. Numpy and Scipy Functions The most important rule to optimize your code is the following: see if what you want to do is already implemented in numpy, scipy, or some other module. Usually, people writing code for these modules know what they do and use a very efficient implementation. For instance, quadrature integration is actually already implemented as part of scipy.integrate: End of explanation import time import numpy as np from scipy.integrate import quad def task(x): this simulates a complicated tasks that takes input x and returns some float based on x time.sleep(1) # simulate that calculating the result takes 1 sec return x**2 # an array with input parameters input = np.random.rand(10) start = time.time() results = [] for x in input: results.append(task(x)) print results print 'this took', time.time()-start, 's' Explanation: Using scipy.integrate.quad is again significantly faster. This is in part due to the fact that it does not use 10000000 segments over which it calculates the integral. Instead, it uses an adaptive algorithm that estimates (and outputs) the uncertainty on the integral and stop iterating if that uncertainty is smaller than some threshold. General Guideline to Optimize your Code Use the following rules in your coding to make your code run efficiently: 1. whatever you want to do, check if there is already an existing function available from numpy, scipy, some other module 2. minimize the use of lists; use tuples or numpy arrays (better) instead 3. use numpy functions on arrays - they are especially designed for that and usually run faster than list functions Multiprocessing You will often have to deal with problems that require running the exact same code for a number of different input parameters. The runtime of the entire program will be long, since all processes (i.e., each run using a set of input parameters) will be run sequentially. End of explanation from multiprocessing import Pool pool = Pool() # create a pool object start = time.time() results = pool.map(task, input) # map the function 'task' on the input data print results print 'this took', time.time()-start, 's' Explanation: The results generated by the different tasks do not rely on each other, i.e., they could in principle be all calculated at the same time. This can be realized using the multiprocessing module: End of explanation
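A further optional step, not part of the original notebook: before rewriting code for speed, it can help to confirm where the time is actually spent. The sketch below profiles a deliberately slow helper with the standard-library cProfile module; slow_sum is a made-up stand-in for whatever function dominates your runtime.
import cProfile
import pstats

def slow_sum(n=10000000):
    total = 0
    for k in range(n):
        total += k ** 2
    return total

profiler = cProfile.Profile()
profiler.enable()
slow_sum()
profiler.disable()
# Show the five most expensive calls, sorted by cumulative time
pstats.Stats(profiler).sort_stats('cumulative').print_stats(5)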
5,033
Given the following text description, write Python code to implement the functionality described below step by step Description: ================================================================ Demonstration of how to use ClickableImage / generate_2d_layout. ================================================================ In this example, we open an image file, then use ClickableImage to return 2D locations of mouse clicks (or load a file already created). Then, we use generate_2d_layout to turn those xy positions into a layout for use with plotting topo maps. In this way, you can take arbitrary xy positions and turn them into a plottable layout. Step2: Load data and click
Python Code: # Authors: Christopher Holdgraf <[email protected]> # # License: BSD (3-clause) from scipy.ndimage import imread import numpy as np from matplotlib import pyplot as plt from os import path as op import mne from mne.viz import ClickableImage, add_background_image # noqa from mne.channels import generate_2d_layout # noqa print(__doc__) # Set parameters and paths plt.rcParams['image.cmap'] = 'gray' im_path = op.join(op.dirname(mne.__file__), 'data', 'image', 'mni_brain.gif') # We've already clicked and exported layout_path = op.join(op.dirname(mne.__file__), 'data', 'image') layout_name = 'custom_layout.lout' Explanation: ================================================================ Demonstration of how to use ClickableImage / generate_2d_layout. ================================================================ In this example, we open an image file, then use ClickableImage to return 2D locations of mouse clicks (or load a file already created). Then, we use generate_2d_layout to turn those xy positions into a layout for use with plotting topo maps. In this way, you can take arbitrary xy positions and turn them into a plottable layout. End of explanation im = imread(im_path) plt.imshow(im) This code opens the image so you can click on it. Commented out because we've stored the clicks as a layout file already. # The click coordinates are stored as a list of tuples click = ClickableImage(im) click.plot_clicks() coords = click.coords # Generate a layout from our clicks and normalize by the image lt = generate_2d_layout(np.vstack(coords), bg_image=im) lt.save(layout_path + layout_name) # To save if we want # We've already got the layout, load it lt = mne.channels.read_layout(layout_name, path=layout_path, scale=False) # Create some fake data nchans = len(lt.pos) nepochs = 50 sr = 1000 nsec = 5 events = np.arange(nepochs).reshape([-1, 1]) events = np.hstack([events, np.zeros([nepochs, 2], dtype=int)]) data = np.random.randn(nepochs, nchans, sr * nsec) info = mne.create_info(nchans, sr, ch_types='eeg') epochs = mne.EpochsArray(data, info, events) evoked = epochs.average() # Using the native plot_topo function with the image plotted in the background f = evoked.plot_topo(layout=lt, fig_background=im) Explanation: Load data and click End of explanation
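As a small optional addition (not in the original example): generate_2d_layout also accepts arbitrary xy positions without a background image, which is handy for quickly sanity-checking a layout. The random positions below are purely illustrative.
import numpy as np
from mne.channels import generate_2d_layout

xy = np.random.rand(16, 2)          # 16 made-up sensor positions in [0, 1] x [0, 1]
lt_random = generate_2d_layout(xy)  # build a Layout with no background image
print(lt_random.names[:5])          # auto-generated channel names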
5,034
Given the following text description, write Python code to implement the functionality described below step by step Description: Plot QNM Frequencies under different scenarios to demonstrate Conventions and Properties. Summary Step1: Plot Single QNM Frequency on jf = [-1,1] using tabulated data Step2: Plot the single QNM Step3: Plot -m on separate figure for comparison Step4: Demonstrate Symmetry Property by plotting two QNMs that differ only by the sign of m Step5: But note that coincident solutions correspond to pairs Step6: But what do I mean by coincident solutions?? When solving leaver's equations for a given l and m, there are both positive and neagtive frequency solutions. Let's try to visualize this.
Python Code: # Define which base QNM to use. Note that the same QNM with m --> -m may be used at some point. l,m,n = 2,1,0 # Useful to development: turn module reloading %load_ext autoreload # Inline plotting %matplotlib inline # Force module recompile %autoreload 2 # Import kerr and numpy from kerr import leaver from kerr.formula.zdqnm_frequencies import kappa from numpy import linspace,array,sin,pi,zeros,arange,ones from numpy.linalg import norm from kerr.basics import rgb # Setup plotting backend from mpl_toolkits.mplot3d import Axes3D from matplotlib.pyplot import * import matplotlib.pyplot as my import matplotlib as mpl mpl.rcParams['lines.linewidth'] = 1 mpl.rcParams['font.family'] = 'serif' mpl.rcParams['font.size'] = 12 mpl.rcParams['axes.labelsize'] = 20 #from kerr.formula.zdqnm_sepconstants import SC as scfit Explanation: Plot QNM Frequencies under different scenarios to demonstrate Conventions and Properties. Summary: ... End of explanation # Define a function to plot the real and imaginary parts of the complex frequency given l,m,n. # Defining this function will save save time/code later. def plot_mode(l,m,n,linestyle='-',conj=None): if conj is not None: x_,wc_,sc_ = conj # %%%%%%%%%%%%%%%%%%%%%%%%%%% # # APPLY SYMMETRY RELATIONSHIP # # FOR m --> -1*m # # %%%%%%%%%%%%%%%%%%%%%%%%%%% # wc_ = -wc_.conj() sc_ = sc_.conj() jf_range = 0.99* sin( 0.5*pi * linspace(-1,1,101) ) wc = zeros(jf_range.shape).astype(complex) sc = zeros(jf_range.shape).astype(complex) for k,jf in enumerate(jf_range): wc[k],sc[k] = leaver(jf,l,m,n) fig = figure( figsize=9*array([1,1]) ) grey = 0.8*array([1,1,1]) ax = [0,0,0,0] #x = jf_range x = kappa([jf_range,l,m]) jfzeroline = lambda : axvline( x[jf_range==min(abs(jf_range))], linestyle='--', color=grey ) ax[0]=subplot(2,2,1); jfzeroline() if conj is not None: plot( x_, wc_.real, color=grey, linewidth=4 ) plot( x, wc.real, linestyle ) # xlabel(r'$\kappa_{%i%i}(j_f)$'%(l,m)) ylabel(r'$\mathrm{Re}\,\tilde{\omega}_{%i%i%i}$' % (l,m,n) ) ax[1]=subplot(2,2,2); jfzeroline() gca().yaxis.set_label_position("right"); gca().yaxis.tick_right() if conj is not None: plot( x_, wc_.imag, color=grey, linewidth=4 ) plot( x, wc.imag, linestyle ) # xlabel(r'$\kappa_{%i%i}(j_f)$'%(l,m)) ylabel(r'$\mathrm{Im}\,\tilde{\omega}_{%i%i%i}$' % (l,m,n) ) ax[2]=subplot(2,2,3); jfzeroline() if conj is not None: plot( x_, sc_.real, color=grey, linewidth=4 ) #plot( x, scfit[(l,m,n)](jf_range).real, color=0*grey, linestyle='--', alpha=0.1, linewidth=6 ) plot( x, sc.real, 'm'+linestyle ) xlabel(r'$\kappa_{%i%i}(j_f)$'%(l,m)) ylabel(r'$\mathrm{Re}\,\tilde{K}_{%i%i%i}$' % (l,m,n) ) ax[3]=subplot(2,2,4); jfzeroline() gca().yaxis.set_label_position("right"); gca().yaxis.tick_right() if conj is not None: plot( x_, sc_.imag, color=grey, linewidth=4 ) #plot( x, -scfit[(l,m,n)](jf_range).imag, color=0*grey, linestyle='--', alpha=0.1, linewidth=6 ) plot( x, sc.imag, 'm'+linestyle ) xlabel(r'$\kappa_{%i%i}(j_f)$'%(l,m)) ylabel(r'$\mathrm{Im}\,\tilde{K}_{%i%i%i}$' % (l,m,n) ) show() # return x,wc,sc Explanation: Plot Single QNM Frequency on jf = [-1,1] using tabulated data End of explanation # Plot the desired QNM x,wc,sc = plot_mode(l,m,n) Explanation: Plot the single QNM End of explanation plot_mode(l,-m,n); Explanation: Plot -m on separate figure for comparison End of explanation # plot_mode(l,-m,n,linestyle='--',conj=(x,wc,sc)); Explanation: Demonstrate Symmetry Property by plotting two QNMs that differ only by the sign of m End of explanation # Inline plotting #%matplotlib inline #%matplotlib 
notebook mpl.rcParams['lines.linewidth'] = 1 mpl.rcParams['font.family'] = 'serif' mpl.rcParams['font.size'] = 16 mpl.rcParams['axes.labelsize'] = 20 # jf = 0.68 n_range = arange(4) # wc = zeros(n_range.shape).astype(complex) sc = zeros(n_range.shape).astype(complex) wc_= zeros(n_range.shape).astype(complex) sc_= zeros(n_range.shape).astype(complex) for k,n in enumerate(n_range): wc[k] ,sc[k] = leaver( jf,l, m,n ) wc_[k],sc_[k] = leaver(-jf,l,-m,n ) fig = figure( figsize=8*array([1,1]) ) ms = 8; clr = rgb(n_range.size,jet=True) for k in range( len(n_range) ): plot( wc[k].real, wc[k].imag, 'o', ms=ms, mec=0.3*clr[k], mfc=clr[k], alpha=0.4 ) #plot(-wc[k].real, wc[k].imag, 'ok', ms=ms, alpha=0.4, mfc='none' ) for k in range( len(n_range) ): plot( wc_[k].real, wc_[k].imag, 's', ms=ms, mec=0.3*clr[k], mfc=clr[k], alpha=0.4 ) #plot(-wc_[k].real, wc_[k].imag, 'xk', ms=ms, alpha=0.4, mfc='none' ) # Label axes xlabel(r'$\mathrm{Re}\,\tilde{w}_{%i%in}$' % (l,m) ) ylabel(r'$\mathrm{Im}\,\tilde{w}_{%i%in}$' % (l,m) ) print norm(wc+wc_.conj()) Explanation: But note that coincident solutions correspond to pairs: (jf,l,m,n) and (-jf,l,-m,n) End of explanation # First, let's interpolate the separation constants. This will help a lot with plotting. # This requires interp2d import scipy.interpolate as intpl interp2d = intpl.interp2d from numpy import hstack,vstack from numpy import meshgrid from matplotlib import cm fig = figure( figsize=(15,10) ) ax = fig.add_subplot(111, projection='3d') x = vstack( [wc_.real,wc.real] ) y = vstack( [wc_.imag,wc.imag] ) z = vstack( [sc_.real,sc.real] ) SCR = interp2d(x,y,z) gca().scatter( x,y,z, c='r', marker='o',s=12) print z # Create grid x_range = linspace( min(x.reshape(x.size,)), max(x.reshape(x.size,)) ) y_range = linspace( min(y.reshape(y.size,)), max(y.reshape(y.size,)) ) xx,yy = meshgrid(x_range,y_range) zz = SCR(xx,yy) gca().plot_surface(x_range,y_range,zz,cmap=cm.coolwarm,linewidth=0) Explanation: But what do I mean by coincident solutions?? When solving leaver's equations for a given l and m, there are both positive and neagtive frequency solutions. Let's try to visualize this. End of explanation
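A quick numerical spot-check of the pairing described above fits in a few lines; this is only a sketch and assumes the same leaver function and the l, m values defined at the top of the notebook are still in scope.
jf_test = 0.5
n_test = 0
w_p, s_p = leaver(jf_test, l, m, n_test)
w_m, s_m = leaver(-jf_test, l, -m, n_test)
# If (jf,l,m,n) and (-jf,l,-m,n) are coincident solutions, the frequencies
# should satisfy w_p = -conjugate(w_m), so this difference should be ~0.
print(abs(w_p + w_m.conjugate()))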
5,035
Given the following text description, write Python code to implement the functionality described below step by step Description: Computing various MNE solutions This example shows example fixed- and free-orientation source localizations produced by MNE, dSPM, sLORETA, and eLORETA. Step1: Fixed orientation First let's create a fixed-orientation inverse, with the default weighting. Step2: Let's look at the current estimates using MNE. We'll take the absolute value of the source estimates to simplify the visualization. Step3: Next let's use the default noise normalization, dSPM Step4: And sLORETA Step5: And finally eLORETA Step6: Free orientation Now let's not constrain the orientation of the dipoles at all by creating a free-orientation inverse. Step7: Let's look at the current estimates using MNE. We'll take the absolute value of the source estimates to simplify the visualization. Step8: Next let's use the default noise normalization, dSPM Step9: And sLORETA Step10: And finally eLORETA
Python Code: # Author: Eric Larson <[email protected]> # # License: BSD (3-clause) import mne from mne.datasets import sample from mne.minimum_norm import make_inverse_operator, apply_inverse print(__doc__) data_path = sample.data_path() subjects_dir = data_path + '/subjects' # Read data fname_evoked = data_path + '/MEG/sample/sample_audvis-ave.fif' evoked = mne.read_evokeds(fname_evoked, condition='Left Auditory', baseline=(None, 0)) fname_fwd = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif' fname_cov = data_path + '/MEG/sample/sample_audvis-cov.fif' fwd = mne.read_forward_solution(fname_fwd) cov = mne.read_cov(fname_cov) Explanation: Computing various MNE solutions This example shows example fixed- and free-orientation source localizations produced by MNE, dSPM, sLORETA, and eLORETA. End of explanation inv = make_inverse_operator(evoked.info, fwd, cov, loose=0., depth=0.8, verbose=True) Explanation: Fixed orientation First let's create a fixed-orientation inverse, with the default weighting. End of explanation snr = 3.0 lambda2 = 1.0 / snr ** 2 kwargs = dict(initial_time=0.08, hemi='both', subjects_dir=subjects_dir, size=(600, 600)) stc = abs(apply_inverse(evoked, inv, lambda2, 'MNE', verbose=True)) brain = stc.plot(figure=1, **kwargs) brain.add_text(0.1, 0.9, 'MNE', 'title', font_size=14) Explanation: Let's look at the current estimates using MNE. We'll take the absolute value of the source estimates to simplify the visualization. End of explanation stc = abs(apply_inverse(evoked, inv, lambda2, 'dSPM', verbose=True)) brain = stc.plot(figure=2, **kwargs) brain.add_text(0.1, 0.9, 'dSPM', 'title', font_size=14) Explanation: Next let's use the default noise normalization, dSPM: End of explanation stc = abs(apply_inverse(evoked, inv, lambda2, 'sLORETA', verbose=True)) brain = stc.plot(figure=3, **kwargs) brain.add_text(0.1, 0.9, 'sLORETA', 'title', font_size=14) Explanation: And sLORETA: End of explanation stc = abs(apply_inverse(evoked, inv, lambda2, 'eLORETA', verbose=True)) brain = stc.plot(figure=4, **kwargs) brain.add_text(0.1, 0.9, 'eLORETA', 'title', font_size=14) Explanation: And finally eLORETA: End of explanation inv = make_inverse_operator(evoked.info, fwd, cov, loose=1., depth=0.8, verbose=True) Explanation: Free orientation Now let's not constrain the orientation of the dipoles at all by creating a free-orientation inverse. End of explanation stc = apply_inverse(evoked, inv, lambda2, 'MNE', verbose=True) brain = stc.plot(figure=5, **kwargs) brain.add_text(0.1, 0.9, 'MNE', 'title', font_size=14) Explanation: Let's look at the current estimates using MNE. We'll take the absolute value of the source estimates to simplify the visualization. End of explanation stc = apply_inverse(evoked, inv, lambda2, 'dSPM', verbose=True) brain = stc.plot(figure=6, **kwargs) brain.add_text(0.1, 0.9, 'dSPM', 'title', font_size=14) Explanation: Next let's use the default noise normalization, dSPM: End of explanation stc = apply_inverse(evoked, inv, lambda2, 'sLORETA', verbose=True) brain = stc.plot(figure=7, **kwargs) brain.add_text(0.1, 0.9, 'sLORETA', 'title', font_size=14) Explanation: And sLORETA: End of explanation stc = apply_inverse(evoked, inv, lambda2, 'eLORETA', verbose=True) brain = stc.plot(figure=8, **kwargs) brain.add_text(0.1, 0.9, 'eLORETA', 'title', font_size=14) Explanation: And finally eLORETA: End of explanation
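Since the calls above differ only in the method argument, a compact (and purely optional) variant is to loop over the four methods; this sketch reuses the evoked, inv and lambda2 objects defined earlier and just prints the peak amplitude per method instead of opening more figures.
from mne.minimum_norm import apply_inverse

for method in ('MNE', 'dSPM', 'sLORETA', 'eLORETA'):
    stc = apply_inverse(evoked, inv, lambda2, method, verbose=True)
    print(method, float(abs(stc.data).max()))  # peak absolute source amplitude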
5,036
Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h2>Preparation of numerical data</h2> <h3>Normalization</h3>
Step1: <h3>Standardization</h3>
Step2: When is normalization used and when is standardization used? Standardization assumes that our observations are normally distributed.
<h3>Preparing data with categories (classes)</h3> Categorical Data We use<br> <li>Integer Encoding - LabelEncoder <li>One Hot Encoding - OneHotEncoder
Step3: <h3>Preparing time series of different lengths</h3> We use padding
Step4: Truncating the sequences
Python Code: # import pandas as pd # Zum Arbeiten mit Series # Für die Normalisierung von Daten verwenden wir den MinMaxScaler # Definieren einer Panda Series # data=[11.0,201.0,301.0,41.0,501.0,601.0,701.0,81.0,901.0,1001.0] # Worin besteht hier das Problem ? data=[11.0,201.0,301.0,41.0,501.0,601.0,701.0,81.0,901.0,1001.0] # Übergabe der Series Daten an die Variable series # Ausgabe der series Daten # Welcher Datentyp wird verwendet ? # Ausgabe der shape # # mit dieser shape Struktur (10,) haben NNs Probleme ! # NNs erwarten 2D oder 3D Datenstrukturen ! # Bestimmen des Minimums # Bestimmen des Maximums # Bestimmen der Standardabweichung # Bestimmung des Mittelwertes # Bestimmen der Summe # Übergabe der Werte an die Variable values als numpy array # Ausgabe der Variablen values # Ausgabe der shape dieses arrays # Mit dieser shape haben NNs Probleme # es werden hüfig 2D oder 3D daten erwartet ! # Umwandlung in 2D array # Wichtig: Zuweisung an Variable # Ausgabe der shape # Initialisierung des MinMaxScaler # Trainieren des MinMaxScaler mit scaler.fit() # Ausgabe der Minimum und Maximum Werte print('Minimum: %f, Maximum: %f'% (scaler.data_min_ , scaler.data_max_)) # Umwandeln (Normalisierung der Daten) mit scaler.transform() # Ausgabe der Daten # Übergabe der ursprünglichen Daten mit scaler.inverse_transform() # Ausgabe mit print() Explanation: <h2>Vorbereitung numerischer Daten</h2> <h3>Normalisierung</h3> End of explanation # Zur Standardisierung verwenden wir den StandardScaler from sklearn.preprocessing import StandardScaler # Verwenden der Wurzel - Laden von sqrt aus der Bibliothek math # Definieren einer Panda Series data=[111.0,211.0,301.0,41.0,501.0,601.0,701.0,81.0,901.0,1001.0] # Übergabe der pandas Series an die Variable data series =Series(data) # Ausgabe mit print() # Übergabe der Werte als array an die Variable values values = series.values # Überprüfen der values.shape values.shape # Erzeugen einer vollständigen 2D Struktur mit values.reshape() values.shape # Initialisieren des StandardScaler # Trainieren des StandardScaler # Ausgabe des Mittelwertes, der Varainz und der Standardabweichung # Formattierung der Dezimalzahlen print('Mittelwert: %6.2f, Varianz: %9.2f, Standardabweichung: %6.2f' % (scaler.mean_ , scaler.var_ , sqrt(scaler.var_))) # Standardisierung mit scaler.transform() # Ausgabe mit print() # Welche Eigenschaften haben Standardisierte Daten ? # Übergabe der ursprünglichen Werte und Ausgabe # Dies kann nützlich sein wenn vorhergesagte Werte wieder # in die originale Skalierung verwandelt werden sollen. Explanation: <h3>Standardisierung</h3> End of explanation # Laden der Bibliotheken from numpy import array from numpy import argmax from sklearn.preprocessing import LabelEncoder from sklearn.preprocessing import OneHotEncoder # Definition des Beispiels daten = ['kalt', 'kalt','warm','kalt','heiss','heiss','lauwarm','warm','heiss'] #Übergabe der Werte # Ausgabe mit print # integer encoding # Initialisierung # fit_transform() # Ausgabe mit print() # OneHot Encoding mit sparse = False, categories='auto' # values=values.reshape(len(values),1) # Ausgabe mit print() print(onehot_encoded) Explanation: Wann wird Normalisierung und wann wird Standardisierung verwendet ? Standardisierung setzt voraus, dass unsere Beobachtungen Normalverteilt sind. 
<h3>Preparing data with categories (classes)</h3> Categorical Data We use<br> <li>Integer Encoding - LabelEncoder <li>One Hot Encoding - OneHotEncoder
End of explanation
# Load the library
from keras.preprocessing.sequence import pad_sequences
# Define the sequences (time series of different lengths)
sequences = [[1,2,3,4,5], [1,2,3],[5,8,9,10,11,12],[1]]
# Padding
# Print the result with print()
# Padding at the end of the sequence
# Print the result
Explanation: <h3>Preparing time series of different lengths</h3> We use padding
End of explanation
# Truncate the sequence from the front
# Print the result with print()
print(truncated)
# Truncate the sequence from the back: maxlen=3, truncating='post'
# Print the result with print()
Explanation: Truncating the sequences
End of explanation
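A minimal end-to-end sketch (an addition, not one of the original cells): padding and truncating can be combined in a single pad_sequences call; this assumes the same sequences list defined above.
from keras.preprocessing.sequence import pad_sequences

fixed = pad_sequences(sequences, maxlen=4, padding='post', truncating='post')
print(fixed)  # every row now has exactly 4 entries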
5,037
Given the following text description, write Python code to implement the functionality described below step by step Description: regular expressions Step1: metacharacters special characters that you can use in regular expressions that have a special meaning Step2: define your own character classes inside your regular expression, write [aeiou]` Step3: metacharacters 2 Step4: aside Step5: metacharacters 3 Step6: capturing read the whole corpus in as one big string Step7: using re.search to capture
Python Code: input_str = "Yes, my zip code is 12345. I heard that Gary's zip code is 23456. But 212 is not a zip code." import re zips= re.findall(r"\d{5}", input_str) zips from urllib.request import urlretrieve urlretrieve("https://raw.githubusercontent.com/ledeprogram/courses/master/databases/data/enronsubjects.txt", "enronsubjects.txt") subjects = [x.strip() for x in open("enronsubjects.txt").readlines()] #x.trip()[\n] subjects[:10] subjects[-10] [line for line in subjects if line.startswith("Hi!")] import re [line for line in subjects if re.search("shipping", line)] # if line string match the "" parameter Explanation: regular expressions End of explanation [line for line in subjects if re.search("sh.pping", line)] #. means any single character sh.pping is called class # subjects that contain a time, e.g., 5: 52pm or 12:06am [line for line in subjects if re.search("\d:\d\d\wm", line)] # \d:\d\d\wm a template read character by character [line for line in subjects if re.search("\.\.\.\.\.", line)] # subject lines that have dates, e.g. 12/01/99 [line for line in subjects if re.search("\d\d/\d\d/\d\d", line)] [line for line in subjects if re.search("6/\d\d/\d\d", line)] Explanation: metacharacters special characters that you can use in regular expressions that have a special meaning: they stand in for multiple different characters of the same "class" .: any char \w any alphanumeric char (a-z, A-Z, 0-9) \s any whitespace char ("_", \t,\n) \S any non-whitespace char \d any digit(0-9) . actual period End of explanation [line for line in subjects if re.search("[aeiou][aeiou][aeiou][aeiou]",line)] [line for line in subjects if re.search("F[wW]:", line)] #F followed by either a lowercase w followed by a uppercase W # subjects that contain a time, e.g., 5: 52pm or 12:06am [line for line in subjects if re.search("\d:[012345]\d[apAP][mM]", line)] Explanation: define your own character classes inside your regular expression, write [aeiou]` End of explanation # begin with the New York character #anchor the search of the particular string [line for line in subjects if re.search("^[Nn]ew [Yy]ork", line)] [line for line in subjects if re.search("\.\.\.$", line)] [line for line in subjects if re.search("!!!!!$", line)] # find sequence of characters that match "oil" [line for line in subjects if re.search("\boil\b", line)] Explanation: metacharacters 2: anchors ^ beginning of a str $ end of str \b word boundary End of explanation x = "this is \na test" print(x) x= "this is\t\t\tanother test" print(x) # ascii backspace print("hello there\b\b\b\b\bhi") print("hello\nthere") print("hello\\nthere") normal = "hello\nthere" raw = r"hello\nthere" #don't interpret any escape character in the raw string print("normal:", normal) print("raw:", raw) [line for line in subjects if re.search(r"\boil\b", line)] #r for regular expression, include r for regular expression all the time [line for line in subjects if re.search(r"\b\.\.\.\b", line)] [line for line in subjects if re.search(r"\banti", line)] #\b only search anti at the beginning of the word Explanation: aside: matacharacters and escape characters escape sequences \n: new line; \t: tab \backslash \b: word boundary End of explanation [line for line in subjects if re.search(r"[A-Z]{15,}", line)] [line for line in subjects if re.search(r"[aeiou]{4}", line)] #find words that have 4 characters from aeiou for each line [line for line in subjects if re.search(r"^F[wW]d?:", line)] # find method that begins with F followed by either w or W and either a d or not d is not 
there; ? means find that character d or not [line for line in subjects if re.search(r"[nN]ews.*!$", line)] # * means any characters between ews and ! [line in line in subjects if re.search(r"^R[eE]:.*\b[iI]nvestor", line)] ### more metacharacters: alternation (?:x|y) match either x or y (?:x|y|z) match x,y, or z [line for line in subjects if re.search(r"\b(?:[Cc]at|[kK]itty|[kK]itten)\b", line)] [line for line in subjects if re.search(r"(energy|oil|electricity)\b", line)] Explanation: metacharacters 3: quantifiers {n} matches exactly n times {n.m} matches at least n times, but no more than m times {n,} matches at least n times, but maybe infinite times + match at least onece ({1}) * match zero or more times ? match one time or zero times End of explanation all_subjects = open("enronsubjects.txt").read() all_subjects[:1000] # domain names: foo.org, cheese.net, stuff.come re.findall(r"\b\w+\.(?:come|net|org)\b", all_subjects) ## differences between re.search() yes/no ##re.findall [] input_str = "Yes, my zip code is 12345. I heard that Gary's zip code is 23456. But 212 is not a zip code." re.search(r"\b\d{5}\b", input_str) re.findall(r"New York \b\w+\b", all_subjects) re.findall(r"New York (\b\w+\b)", all_subjects) #the things in (): to group for the findall method Explanation: capturing read the whole corpus in as one big string End of explanation src = "this example has been used 423 times" if re.search(r"\d\d\d\d", src): print("yup") else: print("nope") src = "this example has been used 423 times" match = re.search(r"\d\d\d", src) type(match) print(match.start()) print(match.end()) print(match.group()) for line in subjects: match = re.search(r"[A-Z]{15,}", line) if match: #if find that match print(match.group()) courses=[ ] print "Course catalog report:" for item in courses: match = re.search(r"^(\w+) (\d+): (.*)$", item) print(match.group(1)) #group 1: find the item in first group print("Course dept", match.group(1)) print("Course #", match.group(2)) print("Course title", match.group(3)) Explanation: using re.search to capture End of explanation
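One optional extension of the capturing example above: the same groups can be named, which makes the extraction code easier to read. The course string below is made up purely for illustration.
import re

m = re.search(r"^(?P<dept>\w+) (?P<num>\d+): (?P<title>.*)$", "ENGL 201: The Novel")
if m:
    print(m.group("dept"), m.group("num"), m.group("title"))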
5,038
Given the following text description, write Python code to implement the functionality described below step by step Description: Data Preparation Step1: Set up for hour of day plots Step2: Set up for Day of Week and Weekend vs Weekday plots Step3: Plots Motivation Weekday vs Weekend Step4: The proportion of respondents motivated by work/school drops significantly from weekday to weekend. The proportion of respondents motivated by media increase significantly from weekday to weekend. All other motivations see a very silgtht increase. Motivation by Day of Week Step5: Motivations besides media and work/school maintain a fairly constant proportion throughout the week. The work/school proportion stays at a high contant 20% M-Th, drops until Saturday to almost 10% and then rebounds on Sunday to 15%. Similarly, media stays at a high contant M-F, then Jumps up during the weekend. Motivation Step6: The familiar pattern. Motivations besides media and work/school maintain a fairly constant proportion throughout the day. Work/school maintains the highest proportion during work/school hours. Media has the highest proportion outside of work/school hours. Information Depth Step7: Non of the differences are significant, athough it appears as if a higher proportion of respondents are seeking an in depth reading on the weekend. Information Depth Step8: Not much. Maybe a bump in overview and a dip in fact during the day. Information Depth Step9: Again, no significant change. Weekend readrs may be more likely to familiar with the topic. Information Depth
Python Code: df = load_responses_with_traces() df['click_dt_local'] = df.apply(lambda x: utc_to_local(x['click_dt_utc'], x['geo_data']['timezone']), axis = 1) df = df[df['click_dt_local'].notnull()].copy() print('Num Responses with a timezone', df.shape[0]) Explanation: Data Preparation End of explanation df['local_hour_of_day'] = df['click_dt_local'].apply(lambda x: x.hour) df['local_hour_of_day_div2'] = df['click_dt_local'].apply(lambda x: 2 * int(x.hour / 2)) df['local_hour_of_day_div3'] = df['click_dt_local'].apply(lambda x: 3 * int(x.hour / 3)) df['local_hour_of_day_div4'] = df['click_dt_local'].apply(lambda x: 4 * int(x.hour / 4)) hour_of_day_div2_xticks = ['%d-%d' % e for e in zip(range(0, 24, 2), range(2, 25, 2))] hour_of_day_div3_xticks = ['%d-%d' % e for e in zip(range(0, 24, 3), range(3, 25, 3))] hour_of_day_div4_xticks = ['%d-%d' % e for e in zip(range(0, 24, 4), range(4, 25, 4))] Explanation: Set up for hour of day plots End of explanation def get_day_of_week(t): return t.weekday() def get_day_type(t): if t.weekday() > 4: return 1 else: return 0 df['local_day_of_week'] = df['click_dt_local'].apply(get_day_of_week) df['local_day_type'] = df['click_dt_local'].apply(get_day_type) day_of_week_xticks = [ 'Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday' ] day_type_xticks = ['weekday', 'weekend'] Explanation: Set up for Day of Week and Weekend vs Weekday plots End of explanation d_single_motivation = df[df['motivation'].apply(lambda x: len(x.split('|')) == 1)] print('Num Response Traces with a single motivation:', d_single_motivation.shape) x = 'local_day_type' hue = 'motivation' d = d_single_motivation xticks = day_type_xticks figsize = (4, 6) xlim = (-0.25, 1.25) hue_order = ['media', 'work/school', 'intrinsic learning', 'bored/random', 'conversation', 'current event', 'personal decision'] title = 'Motivation by Weekday vs Weekend' plot_over_time(d, x, xticks, hue, hue_order, figsize, xlim ) Explanation: Plots Motivation Weekday vs Weekend End of explanation x = 'local_day_of_week' hue = 'motivation' d = d_single_motivation xticks = day_of_week_xticks figsize = (6, 6) xlim = (-0.25, 6.25) hue_order = ['media', 'work/school', 'intrinsic learning', 'bored/random', 'conversation', 'current event', 'personal decision'] plot_over_time(d, x, xticks, hue, hue_order, figsize, xlim ) Explanation: The proportion of respondents motivated by work/school drops significantly from weekday to weekend. The proportion of respondents motivated by media increase significantly from weekday to weekend. All other motivations see a very silgtht increase. Motivation by Day of Week End of explanation x = 'local_hour_of_day_div4' hue = 'motivation' d = d_single_motivation d = d[d['local_day_type'] == 0] xticks = hour_of_day_div4_xticks figsize = (8, 6) xlim = (-0.5, 20.5) hue_order = ['media', 'work/school', 'intrinsic learning', 'bored/random', 'conversation', 'current event', 'personal decision'] plot_over_time(d, x, xticks, hue, hue_order, figsize, xlim ) Explanation: Motivations besides media and work/school maintain a fairly constant proportion throughout the week. The work/school proportion stays at a high contant 20% M-Th, drops until Saturday to almost 10% and then rebounds on Sunday to 15%. Similarly, media stays at a high contant M-F, then Jumps up during the weekend. 
Motivation: Hour of Day End of explanation x = 'local_day_type' hue = 'information depth' xticks = day_type_xticks figsize = (4, 8) xlim = (-0.25, 1.25) hue_order = ['in-depth', 'overview', 'fact'] plot_over_time(df, x, xticks, hue, hue_order, figsize, xlim ) Explanation: The familiar pattern. Motivations besides media and work/school maintain a fairly constant proportion throughout the day. Work/school maintains the highest proportion during work/school hours. Media has the highest proportion outside of work/school hours. Information Depth: Weekday vs Weekend End of explanation x = 'local_hour_of_day_div4' hue = 'information depth' d = df[df['local_day_type'] == 0] xticks = hour_of_day_div4_xticks figsize = (8, 6) xlim = (-0.5, 20.5) hue_order = ['in-depth', 'overview', 'fact'] plot_over_time(d, x, xticks, hue, hue_order, figsize, xlim ) Explanation: Non of the differences are significant, athough it appears as if a higher proportion of respondents are seeking an in depth reading on the weekend. Information Depth: Hour of Day End of explanation x = 'local_day_type' hue = 'prior knowledge' xticks = day_type_xticks figsize = (4, 8) xlim = (-0.25, 1.25) hue_order = ['familiar', 'unfamiliar'] plot_over_time(df, x, xticks, hue, hue_order, figsize, xlim ) Explanation: Not much. Maybe a bump in overview and a dip in fact during the day. Information Depth: Weekday vs Weekend End of explanation x = 'local_hour_of_day_div4' hue = 'prior knowledge' d = df[df['local_day_type'] == 0] xticks = hour_of_day_div4_xticks figsize = (8, 6) xlim = (-0.5, 20.5) hue_order = ['familiar', 'unfamiliar'] plot_over_time(d, x, xticks, hue, hue_order, figsize, xlim ) Explanation: Again, no significant change. Weekend readrs may be more likely to familiar with the topic. Information Depth: Hour of Day End of explanation
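The proportions behind these plots can also be checked numerically. The sketch below is an addition, assuming the same df dataframe and its 'prior knowledge' column; it tabulates the proportions with a pandas crosstab so the weekday/weekend comparison can be read off directly.
import pandas as pd

# Row-normalized proportions of prior knowledge by day type (0 = weekday, 1 = weekend)
tab = pd.crosstab(df['local_day_type'], df['prior knowledge'], normalize='index')
print(tab)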
5,039
Given the following text description, write Python code to implement the functionality described below step by step Description: Hypothesis Testing Copyright 2016 Allen Downey License Step1: Part One Suppose you observe an apparent difference between two groups and you want to check whether it might be due to chance. As an example, we'll look at differences between first babies and others. The first module provides code to read data from the National Survey of Family Growth (NSFG). Step2: We'll look at a couple of variables, including pregnancy length and birth weight. The effect size we'll consider is the difference in the means. Other examples might include a correlation between variables or a coefficient in a linear regression. The number that quantifies the size of the effect is called the "test statistic". Step3: For the first example, I extract the pregnancy length for first babies and others. The results are pandas Series objects. Step4: The actual difference in the means is 0.078 weeks, which is only 13 hours. Step5: The null hypothesis is that there is no difference between the groups. We can model that by forming a pooled sample that includes first babies and others. Step6: Then we can simulate the null hypothesis by shuffling the pool and dividing it into two groups, using the same sizes as the actual sample. Step7: The result of running the model is two NumPy arrays with the shuffled pregnancy lengths Step8: Then we compute the same test statistic using the simulated data Step9: If we run the model 1000 times and compute the test statistic, we can see how much the test statistic varies under the null hypothesis. Step10: Here's the sampling distribution of the test statistic under the null hypothesis, with the actual difference in means indicated by a gray line. Step11: The p-value is the probability that the test statistic under the null hypothesis exceeds the actual value. Step20: In this case the result is about 15%, which means that even if there is no difference between the groups, it is plausible that we could see a sample difference as big as 0.078 weeks. We conclude that the apparent effect might be due to chance, so we are not confident that it would appear in the general population, or in another sample from the same population. STOP HERE Part Two We can take the pieces from the previous section and organize them in a class that represents the structure of a hypothesis test. Step25: HypothesisTest is an abstract parent class that encodes the template. Child classes fill in the missing methods. For example, here's the test from the previous section. Step26: Now we can run the test by instantiating a DiffMeansPermute object Step27: And we can plot the sampling distribution of the test statistic under the null hypothesis. Step28: Difference in standard deviation Exercize 1 Step29: Here's the code to test your solution to the previous exercise. Step30: Difference in birth weights Now let's run DiffMeansPermute again to see if there is a difference in birth weight between first babies and others. Step31: In this case, after 1000 attempts, we never see a sample difference as big as the observed difference, so we conclude that the apparent effect is unlikely under the null hypothesis. Under normal circumstances, we can also make the inference that the apparent effect is unlikely to be caused by random sampling. One final note Step32: We expect the mean in both groups to be near 100, but just by random chance, it might be higher or lower. 
Step33: We can use DiffMeansPermute to compute the p-value for this fake data, which is the probability that we would see a difference between the groups as big as what we saw, just by chance. Step34: Now let's check the p-value. If it's less than 0.05, the result is statistically significant, and we can publish it. Otherwise, we can try again. Step36: You can probably see where this is going. If we play this game over and over (or if many researchers play it in parallel), the false positive rate can be as high as 100%. To see this more clearly, let's simulate 100 researchers playing this game. I'll take the code we have so far and wrap it in a function Step37: Now let's run that function 100 times and save the p-values. Step38: On average, we expect to get a false positive about 5 times out of 100. To see why, let's plot the histogram of the p-values we got. Step40: The distribution of p-values is uniform from 0 to 1. So it falls below 5% about 5% of the time. Exercise Step41: Now let's run it 100 times with an actual difference of 5 Step42: With sample size 100 and an actual difference of 5, the power of the test is approximately 65%. That means if we ran this hypothetical experiment 100 times, we'd expect a statistically significant result about 65 times. That's pretty good, but it also means we would NOT get a statistically significant result about 35 times, which is a lot. Again, let's look at the distribution of p-values
Python Code: %matplotlib inline import numpy import scipy.stats import matplotlib.pyplot as plt import first Explanation: Hypothesis Testing Copyright 2016 Allen Downey License: Creative Commons Attribution 4.0 International End of explanation live, firsts, others = first.MakeFrames() Explanation: Part One Suppose you observe an apparent difference between two groups and you want to check whether it might be due to chance. As an example, we'll look at differences between first babies and others. The first module provides code to read data from the National Survey of Family Growth (NSFG). End of explanation def TestStatistic(data): group1, group2 = data test_stat = abs(group1.mean() - group2.mean()) return test_stat Explanation: We'll look at a couple of variables, including pregnancy length and birth weight. The effect size we'll consider is the difference in the means. Other examples might include a correlation between variables or a coefficient in a linear regression. The number that quantifies the size of the effect is called the "test statistic". End of explanation group1 = firsts.prglngth group2 = others.prglngth Explanation: For the first example, I extract the pregnancy length for first babies and others. The results are pandas Series objects. End of explanation actual = TestStatistic((group1, group2)) actual Explanation: The actual difference in the means is 0.078 weeks, which is only 13 hours. End of explanation n, m = len(group1), len(group2) pool = numpy.hstack((group1, group2)) Explanation: The null hypothesis is that there is no difference between the groups. We can model that by forming a pooled sample that includes first babies and others. End of explanation def RunModel(): numpy.random.shuffle(pool) data = pool[:n], pool[n:] return data Explanation: Then we can simulate the null hypothesis by shuffling the pool and dividing it into two groups, using the same sizes as the actual sample. End of explanation RunModel() Explanation: The result of running the model is two NumPy arrays with the shuffled pregnancy lengths: End of explanation TestStatistic(RunModel()) Explanation: Then we compute the same test statistic using the simulated data: End of explanation test_stats = numpy.array([TestStatistic(RunModel()) for i in range(1000)]) test_stats.shape Explanation: If we run the model 1000 times and compute the test statistic, we can see how much the test statistic varies under the null hypothesis. End of explanation plt.axvline(actual, linewidth=3, color='0.8') plt.hist(test_stats, color='C4', alpha=0.5) plt.xlabel('difference in means') plt.ylabel('count'); Explanation: Here's the sampling distribution of the test statistic under the null hypothesis, with the actual difference in means indicated by a gray line. End of explanation pvalue = sum(test_stats >= actual) / len(test_stats) pvalue Explanation: The p-value is the probability that the test statistic under the null hypothesis exceeds the actual value. End of explanation class HypothesisTest(object): Represents a hypothesis test. def __init__(self, data): Initializes. data: data in whatever form is relevant self.data = data self.MakeModel() self.actual = self.TestStatistic(data) self.test_stats = None def PValue(self, iters=1000): Computes the distribution of the test statistic and p-value. 
iters: number of iterations returns: float p-value self.test_stats = numpy.array([self.TestStatistic(self.RunModel()) for _ in range(iters)]) count = sum(self.test_stats >= self.actual) return count / iters def MaxTestStat(self): Returns the largest test statistic seen during simulations. return max(self.test_stats) def PlotHist(self, label=None): Draws a Cdf with vertical lines at the observed test stat. plt.hist(self.test_stats, color='C4', alpha=0.5) plt.axvline(self.actual, linewidth=3, color='0.8') plt.xlabel('test statistic') plt.ylabel('count') def TestStatistic(self, data): Computes the test statistic. data: data in whatever form is relevant raise UnimplementedMethodException() def MakeModel(self): Build a model of the null hypothesis. pass def RunModel(self): Run the model of the null hypothesis. returns: simulated data raise UnimplementedMethodException() Explanation: In this case the result is about 15%, which means that even if there is no difference between the groups, it is plausible that we could see a sample difference as big as 0.078 weeks. We conclude that the apparent effect might be due to chance, so we are not confident that it would appear in the general population, or in another sample from the same population. STOP HERE Part Two We can take the pieces from the previous section and organize them in a class that represents the structure of a hypothesis test. End of explanation class DiffMeansPermute(HypothesisTest): Tests a difference in means by permutation. def TestStatistic(self, data): Computes the test statistic. data: data in whatever form is relevant group1, group2 = data test_stat = abs(group1.mean() - group2.mean()) return test_stat def MakeModel(self): Build a model of the null hypothesis. group1, group2 = self.data self.n, self.m = len(group1), len(group2) self.pool = numpy.hstack((group1, group2)) def RunModel(self): Run the model of the null hypothesis. returns: simulated data numpy.random.shuffle(self.pool) data = self.pool[:self.n], self.pool[self.n:] return data Explanation: HypothesisTest is an abstract parent class that encodes the template. Child classes fill in the missing methods. For example, here's the test from the previous section. End of explanation data = (firsts.prglngth, others.prglngth) ht = DiffMeansPermute(data) p_value = ht.PValue(iters=1000) print('\nmeans permute pregnancy length') print('p-value =', p_value) print('actual =', ht.actual) print('ts max =', ht.MaxTestStat()) Explanation: Now we can run the test by instantiating a DiffMeansPermute object: End of explanation ht.PlotHist() Explanation: And we can plot the sampling distribution of the test statistic under the null hypothesis. End of explanation # Solution goes here Explanation: Difference in standard deviation Exercize 1: Write a class named DiffStdPermute that extends DiffMeansPermute and overrides TestStatistic to compute the difference in standard deviations. Is the difference in standard deviations statistically significant? End of explanation data = (firsts.prglngth, others.prglngth) ht = DiffStdPermute(data) p_value = ht.PValue(iters=1000) print('\nstd permute pregnancy length') print('p-value =', p_value) print('actual =', ht.actual) print('ts max =', ht.MaxTestStat()) Explanation: Here's the code to test your solution to the previous exercise. 
End of explanation data = (firsts.totalwgt_lb.dropna(), others.totalwgt_lb.dropna()) ht = DiffMeansPermute(data) p_value = ht.PValue(iters=1000) print('\nmeans permute birthweight') print('p-value =', p_value) print('actual =', ht.actual) print('ts max =', ht.MaxTestStat()) Explanation: Difference in birth weights Now let's run DiffMeansPermute again to see if there is a difference in birth weight between first babies and others. End of explanation group1 = numpy.random.normal(100, 15, size=100) group2 = numpy.random.normal(100, 15, size=100) Explanation: In this case, after 1000 attempts, we never see a sample difference as big as the observed difference, so we conclude that the apparent effect is unlikely under the null hypothesis. Under normal circumstances, we can also make the inference that the apparent effect is unlikely to be caused by random sampling. One final note: in this case I would report that the p-value is less than 1/1000 or less than 0.001. I would not report p=0, because the apparent effect is not impossible under the null hypothesis; just unlikely. Part Three In this section, we'll explore the dangers of p-hacking by running multiple tests until we find one that's statistically significant. Suppose we want to compare IQs for two groups of people. And suppose that, in fact, the two groups are statistically identical; that is, their IQs are drawn from a normal distribution with mean 100 and standard deviation 15. I'll use numpy.random.normal to generate fake data I might get from running such an experiment: End of explanation group1.mean(), group2.mean() Explanation: We expect the mean in both groups to be near 100, but just by random chance, it might be higher or lower. End of explanation data = (group1, group2) ht = DiffMeansPermute(data) p_value = ht.PValue(iters=1000) p_value Explanation: We can use DiffMeansPermute to compute the p-value for this fake data, which is the probability that we would see a difference between the groups as big as what we saw, just by chance. End of explanation if p_value < 0.05: print('Congratulations! Publish it!') else: print('Too bad! Please try again.') Explanation: Now let's check the p-value. If it's less than 0.05, the result is statistically significant, and we can publish it. Otherwise, we can try again. End of explanation def run_a_test(sample_size=100): Generate random data and run a hypothesis test on it. sample_size: integer returns: p-value group1 = numpy.random.normal(100, 15, size=sample_size) group2 = numpy.random.normal(100, 15, size=sample_size) data = (group1, group2) ht = DiffMeansPermute(data) p_value = ht.PValue(iters=200) return p_value Explanation: You can probably see where this is going. If we play this game over and over (or if many researchers play it in parallel), the false positive rate can be as high as 100%. To see this more clearly, let's simulate 100 researchers playing this game. I'll take the code we have so far and wrap it in a function: End of explanation num_experiments = 100 p_values = numpy.array([run_a_test() for i in range(num_experiments)]) sum(p_values < 0.05) Explanation: Now let's run that function 100 times and save the p-values. End of explanation bins = numpy.linspace(0, 1, 21) bins plt.hist(p_values, bins, color='C4', alpha=0.5) plt.axvline(0.05, linewidth=3, color='0.8') plt.xlabel('p-value') plt.ylabel('count'); Explanation: On average, we expect to get a false positive about 5 times out of 100. To see why, let's plot the histogram of the p-values we got. 
End of explanation def run_a_test2(actual_diff, sample_size=100): Generate random data and run a hypothesis test on it. actual_diff: The actual difference between groups. sample_size: integer returns: p-value group1 = numpy.random.normal(100, 15, size=sample_size) group2 = numpy.random.normal(100 + actual_diff, 15, size=sample_size) data = (group1, group2) ht = DiffMeansPermute(data) p_value = ht.PValue(iters=200) return p_value Explanation: The distribution of p-values is uniform from 0 to 1. So it falls below 5% about 5% of the time. Exercise: If the threshold for statistical signficance is 5%, the probability of a false positive is 5%. You might hope that things would get better with larger sample sizes, but they don't. Run this experiment again with a larger sample size, and see for yourself. Part four In the previous section, we computed the false positive rate, which is the probability of seeing a "statistically significant" result, even if there is no statistical difference between groups. Now let's ask the complementary question: if there really is a difference between groups, what is the chance of seeing a "statistically significant" result? The answer to this question is called the "power" of the test. It depends on the sample size (unlike the false positive rate), and it also depends on how big the actual difference is. We can estimate the power of a test by running simulations similar to the ones in the previous section. Here's a version of run_a_test that takes the actual difference between groups as a parameter: End of explanation p_values = numpy.array([run_a_test2(5) for i in range(100)]) sum(p_values < 0.05) Explanation: Now let's run it 100 times with an actual difference of 5: End of explanation plt.hist(p_values, bins, color='C4', alpha=0.5) plt.axvline(0.05, linewidth=3, color='0.8') plt.xlabel('p-value') plt.ylabel('count'); Explanation: With sample size 100 and an actual difference of 5, the power of the test is approximately 65%. That means if we ran this hypothetical experiment 100 times, we'd expect a statistically significant result about 65 times. That's pretty good, but it also means we would NOT get a statistically significant result about 35 times, which is a lot. Again, let's look at the distribution of p-values: End of explanation
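To see how power depends on sample size, the same helper can be swept over a few sizes. This is a rough sketch only (small iteration counts to keep it fast) and reuses run_a_test2 and numpy from above.
for size in (50, 100, 200):
    p_values = numpy.array([run_a_test2(5, sample_size=size) for i in range(50)])
    print(size, sum(p_values < 0.05) / 50)  # estimated power at this sample size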
5,040
Given the following text description, write Python code to implement the functionality described below step by step Description: Exploratory Data Analysis Using Python and BigQuery Learning Objectives Analyze a Pandas Dataframe Create Seaborn plots for Exploratory Data Analysis in Python Write a SQL query to pick up specific fields from a BigQuery dataset Exploratory Analysis in BigQuery Introduction This lab is an introduction to linear regression using Python and Scikit-Learn. This lab serves as a foundation for more complex algorithms and machine learning models that you will encounter in the course. We will train a linear regression model to predict housing price. Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook. Import Libraries Step1: Load the Dataset Here, we create a directory called usahousing. This directory will hold the dataset that we copy from Google Cloud Storage. Step2: Next, we copy the Usahousing dataset from Google Cloud Storage. Step3: Then we use the "ls" command to list files in the directory. This ensures that the dataset was copied. Step4: Next, we read the dataset into a Pandas dataframe. Step5: Inspect the Data Step6: Let's check for any null values. Step7: Let's take a peek at the first and last five rows of the data for all columns. Step8: Explore the Data Let's create some simple plots to check out the data! Step9: Create a distplot showing "median_house_value". Step10: Create a jointplot showing "median_income" versus "median_house_value". Step11: You can see below that this is the state of California! Step12: Explore and create ML datasets In this notebook, we will explore data corresponding to taxi rides in New York City to build a Machine Learning model in support of a fare-estimation tool. The idea is to suggest a likely fare to taxi riders so that they are not surprised, and so that they can protest if the charge is much higher than expected. Learning Objectives Access and explore a public BigQuery dataset on NYC Taxi Cab rides Visualize your dataset using the Seaborn library <h3> Extract sample data from BigQuery </h3> The dataset that we will use is <a href="https Step13: Let's increase the number of records so that we can do some neat graphs. There is no guarantee about the order in which records are returned, and so no guarantee about which records get returned if we simply increase the LIMIT. To properly sample the dataset, let's use the HASH of the pickup time and return 1 in 100,000 records -- because there are 1 billion records in the data, we should get back approximately 10,000 records if we do this. We will also store the BigQuery result in a Pandas dataframe named "trips" Step14: <h3> Exploring data </h3> Let's explore this dataset and clean it up as necessary. We'll use the Python Seaborn package to visualize graphs and Pandas to do the slicing and filtering. Step15: Hmm ... do you see something wrong with the data that needs addressing? It appears that we have a lot of invalid data that is being coded as zero distance and some fare amounts that are definitely illegitimate. Let's remove them from our analysis. We can do this by modifying the BigQuery query to keep only trips longer than zero miles and fare amounts that are at least the minimum cab fare ($2.50). Note the extra WHERE clauses. Step16: What's up with the streaks around 45 dollars and 50 dollars? 
Those are fixed-amount rides from JFK and La Guardia airports into anywhere in Manhattan, i.e. to be expected. Let's list the data to make sure the values look reasonable. Let's also examine whether the toll amount is captured in the total amount. Step17: Looking at a few samples above, it should be clear that the total amount reflects fare amount, toll and tip somewhat arbitrarily -- this is because when customers pay cash, the tip is not known. So, we'll use the sum of fare_amount + tolls_amount as what needs to be predicted. Tips are discretionary and do not have to be included in our fare estimation tool. Let's also look at the distribution of values within the columns.
Python Code: import os import matplotlib.pyplot as plt import numpy as np import pandas as pd # Seaborn is a Python data visualization library based on matplotlib. import seaborn as sns from google.cloud import bigquery %matplotlib inline Explanation: Exploratory Data Analysis Using Python and BigQuery Learning Objectives Analyze a Pandas Dataframe Create Seaborn plots for Exploratory Data Analysis in Python Write a SQL query to pick up specific fields from a BigQuery dataset Exploratory Analysis in BigQuery Introduction This lab is an introduction to linear regression using Python and Scikit-Learn. This lab serves as a foundation for more complex algorithms and machine learning models that you will encounter in the course. We will train a linear regression model to predict housing price. Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook. Import Libraries End of explanation if not os.path.isdir("../data/explore"): os.makedirs("../data/explore") Explanation: Load the Dataset Here, we create a directory called usahousing. This directory will hold the dataset that we copy from Google Cloud Storage. End of explanation !gsutil cp gs://cloud-training-demos/feat_eng/housing/housing_pre-proc.csv ../data/explore Explanation: Next, we copy the Usahousing dataset from Google Cloud Storage. End of explanation !ls -l ../data/explore Explanation: Then we use the "ls" command to list files in the directory. This ensures that the dataset was copied. End of explanation df_USAhousing = # TODO 1: Your code goes here Explanation: Next, we read the dataset into a Pandas dataframe. End of explanation # Show the first five row. df_USAhousing.head() Explanation: Inspect the Data End of explanation df_USAhousing.isnull().sum() df_stats = df_USAhousing.describe() df_stats = df_stats.transpose() df_stats df_USAhousing.info() Explanation: Let's check for any null values. End of explanation print("Rows : ", df_USAhousing.shape[0]) print("Columns : ", df_USAhousing.shape[1]) print("\nFeatures : \n", df_USAhousing.columns.tolist()) print("\nMissing values : ", df_USAhousing.isnull().sum().values.sum()) print("\nUnique values : \n", df_USAhousing.nunique()) Explanation: Let's take a peek at the first and last five rows of the data for all columns. End of explanation _ = sns.heatmap(df_USAhousing.corr()) Explanation: Explore the Data Let's create some simple plots to check out the data! End of explanation # TODO 2a: Your code goes here sns.set_style("whitegrid") df_USAhousing["median_house_value"].hist(bins=30) plt.xlabel("median_house_value") x = df_USAhousing["median_income"] y = df_USAhousing["median_house_value"] plt.scatter(x, y) plt.show() Explanation: Create a distplot showing "median_house_value". End of explanation # TODO 2b: Your code goes here sns.countplot(x="ocean_proximity", data=df_USAhousing) # takes numeric only? # plt.figure(figsize=(20,20)) g = sns.FacetGrid(df_USAhousing, col="ocean_proximity") _ = g.map(plt.hist, "households") # takes numeric only? # plt.figure(figsize=(20,20)) g = sns.FacetGrid(df_USAhousing, col="ocean_proximity") _ = g.map(plt.hist, "median_income") Explanation: Create a jointplot showing "median_income" versus "median_house_value". End of explanation x = df_USAhousing["latitude"] y = df_USAhousing["longitude"] plt.scatter(x, y) plt.show() Explanation: You can see below that this is the state of California! 
End of explanation %%bigquery SELECT FORMAT_TIMESTAMP( "%Y-%m-%d %H:%M:%S %Z", pickup_datetime) AS pickup_datetime, pickup_longitude, pickup_latitude, dropoff_longitude, dropoff_latitude, passenger_count, trip_distance, tolls_amount, fare_amount, total_amount # TODO 3: Set correct BigQuery public dataset for nyc-tlc yellow taxi cab trips # Tip: For projects with hyphens '-' be sure to escape with backticks `` FROM LIMIT 10 Explanation: Explore and create ML datasets In this notebook, we will explore data corresponding to taxi rides in New York City to build a Machine Learning model in support of a fare-estimation tool. The idea is to suggest a likely fare to taxi riders so that they are not surprised, and so that they can protest if the charge is much higher than expected. Learning Objectives Access and explore a public BigQuery dataset on NYC Taxi Cab rides Visualize your dataset using the Seaborn library <h3> Extract sample data from BigQuery </h3> The dataset that we will use is <a href="https://bigquery.cloud.google.com/table/nyc-tlc:yellow.trips">a BigQuery public dataset</a>. Click on the link, and look at the column names. Switch to the Details tab to verify that the number of records is one billion, and then switch to the Preview tab to look at a few rows. Let's write a SQL query to pick up interesting fields from the dataset. It's a good idea to get the timestamp in a predictable format. End of explanation %%bigquery trips SELECT FORMAT_TIMESTAMP( "%Y-%m-%d %H:%M:%S %Z", pickup_datetime) AS pickup_datetime, pickup_longitude, pickup_latitude, dropoff_longitude, dropoff_latitude, passenger_count, trip_distance, tolls_amount, fare_amount, total_amount FROM `nyc-tlc.yellow.trips` WHERE ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 100000)) = 1 print(len(trips)) # We can slice Pandas dataframes as if they were arrays trips[:10] Explanation: Let's increase the number of records so that we can do some neat graphs. There is no guarantee about the order in which records are returned, and so no guarantee about which records get returned if we simply increase the LIMIT. To properly sample the dataset, let's use the HASH of the pickup time and return 1 in 100,000 records -- because there are 1 billion records in the data, we should get back approximately 10,000 records if we do this. We will also store the BigQuery result in a Pandas dataframe named "trips" End of explanation # TODO 4: Visualize your dataset using the Seaborn library. # Plot the distance of the trip as X and the fare amount as Y. ax = sns.regplot( x="", y="", fit_reg=False, ci=None, truncate=True, data=trips, ) ax.figure.set_size_inches(10, 8) Explanation: <h3> Exploring data </h3> Let's explore this dataset and clean it up as necessary. We'll use the Python Seaborn package to visualize graphs and Pandas to do the slicing and filtering. End of explanation %%bigquery trips SELECT FORMAT_TIMESTAMP( "%Y-%m-%d %H:%M:%S %Z", pickup_datetime) AS pickup_datetime, pickup_longitude, pickup_latitude, dropoff_longitude, dropoff_latitude, passenger_count, trip_distance, tolls_amount, fare_amount, total_amount FROM `nyc-tlc.yellow.trips` WHERE ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 100000)) = 1 # TODO 4a: Filter the data to only include non-zero distance trips and fares above $2.50 AND print(len(trips)) ax = sns.regplot( x="trip_distance", y="fare_amount", fit_reg=False, ci=None, truncate=True, data=trips, ) ax.figure.set_size_inches(10, 8) Explanation: Hmm ... 
do you see something wrong with the data that needs addressing? It appears that we have a lot of invalid data that is being coded as zero distance and some fare amounts that are definitely illegitimate. Let's remove them from our analysis. We can do this by modifying the BigQuery query to keep only trips longer than zero miles and fare amounts that are at least the minimum cab fare ($2.50). Note the extra WHERE clauses. End of explanation tollrides = trips[trips["tolls_amount"] > 0] tollrides[tollrides["pickup_datetime"] == "2012-02-27 09:19:10 UTC"] notollrides = trips[trips["tolls_amount"] == 0] notollrides[notollrides["pickup_datetime"] == "2012-02-27 09:19:10 UTC"] Explanation: What's up with the streaks around 45 dollars and 50 dollars? Those are fixed-amount rides from JFK and La Guardia airports into anywhere in Manhattan, i.e. to be expected. Let's list the data to make sure the values look reasonable. Let's also examine whether the toll amount is captured in the total amount. End of explanation trips.describe() Explanation: Looking at a few samples above, it should be clear that the total amount reflects fare amount, toll and tip somewhat arbitrarily -- this is because when customers pay cash, the tip is not known. So, we'll use the sum of fare_amount + tolls_amount as what needs to be predicted. Tips are discretionary and do not have to be included in our fare estimation tool. Let's also look at the distribution of values within the columns. End of explanation
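The preceding record settles on fare_amount + tolls_amount as the quantity to predict and on discarding zero-distance and below-minimum-fare rows. As a complement, here is a minimal pandas sketch of that preprocessing on the sampled trips DataFrame; the helper name and the target_amount column are my own illustrative choices, not part of the original notebook.

```python
import pandas as pd


def prepare_fare_target(trips: pd.DataFrame) -> pd.DataFrame:
    """Sketch of the cleanup described above: keep plausible rides and
    build the prediction target as fare_amount + tolls_amount."""
    plausible = (trips["trip_distance"] > 0) & (trips["fare_amount"] >= 2.5)
    cleaned = trips[plausible].copy()
    # Tips are discretionary and unknown for cash rides, so they are excluded.
    cleaned["target_amount"] = cleaned["fare_amount"] + cleaned["tolls_amount"]
    return cleaned


# Usage, assuming `trips` was loaded from BigQuery as in the record above:
# trips_clean = prepare_fare_target(trips)
# trips_clean[["trip_distance", "target_amount"]].describe()
```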
5,041
Given the following text description, write Python code to implement the functionality described below step by step Description: This demonstration shows how CCMC/FCIQMC [1,2] calculations with complex wavefunctions and replica tricks can be analysed. For clarity, the extractor, preparator and analyser are defined explicitly, instead of simply using get_results. Step1: Note that this is the original default CCMC/FCIQMC HANDE columns/key mapping in preparator. It is a private variable really, so user access is not encouraged, but shown here for clarity. Step2: Now we execute our executor, preparator and analyser. Note that while we have three output files, we only have two calculations, one was restarted. Since we are merging using UUIDs, the order we are passing the output file names in does not matter. With legacy merge type, this would not work. Order matters there. Here, the first and third output files' data will be merged. Step3: In this analysis model, all replicas [3] are analysed individually. Complex data is dealt with as follows Step4: The data has additional columns with this data Step5: Not everything was run for long enough to be analysed. Step6: This was unsuccessful Step7: Lower initial population, lower shoulder [4]? Step8: There seems to be a trend.
Python Code: from pyhande.data_preparing.hande_ccmc_fciqmc import PrepHandeCcmcFciqmc from pyhande.extracting.extractor import Extractor from pyhande.error_analysing.blocker import Blocker from pyhande.results_viewer.get_results import analyse_data extra = Extractor() # Keep the defaults, merge using UUIDs. prep = PrepHandeCcmcFciqmc() ana = Blocker.inst_hande_ccmc_fciqmc(start_its=[20000, 20000]) # Keep the defaults, find starting iterations automatically, using 'blocking' finder. Explanation: This demonstration shows how CCMC/FCIQMC [1,2] calculations with complex wavefunctions and replica tricks can be analysed. For clarity, the extractor, preparator and analyser are defined explicity, instead of simply using get_results. End of explanation prep._observables_init Explanation: Note that this is the original default CCMC/FCIQMC HANDE columns/key mapping in preparator. It is a private variable really, so user access is not encouraged, but shown here for clarity. End of explanation results = analyse_data(["data/replica_complex_fciqmc_init_pop_10.out.xz", "data/replica_complex_fciqmc_init_pop_100.out.xz", "data/replica_complex_fciqmc_init_pop_10_part2.out.xz"], extra, prep, ana) Explanation: Now we execute our executor, preparator and analyser. Note that while we have three output files, we only have two calculations, one was restarted. Since we are merging using UUIDs, the order we are passing the output file names in does not matter. With legacy merge type, this would not work. Order matters there. Here, the first and third output file's data will be merged. End of explanation prep.observables Explanation: In this analysis model, all replicas [3] are analysed individually. Complex data is dealt with as follows: The mapping of ref_key (the reference population N_0) and sum_key (sum_j H_0j N_j) are adapted, they are now the (negative) magnitudes of the complex equivalents. The ratio of the means of those magnitudes is then the mean projected energy (credits to Charlie Scott for first implementation, see his note in data_preparing/hande_ccmc_fciqmc.py.). End of explanation prep.data Explanation: The data has additional columns with this data: End of explanation results.summary_pretty Explanation: Not everything was analysed long enough to be analysed. End of explanation results.analyser.no_opt_block Explanation: This was unsuccessful: End of explanation results.plot_shoulder() Explanation: Lower initial population, lower shoulder [4]? End of explanation results.add_shoulder() results.add_metadata(['qmc:D0_population']) results.compare_obs(['shoulder height', 'D0_population']) Explanation: There seems to be a trend. End of explanation
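The record above explains that, for complex wavefunctions, the reference-population and projection columns are mapped to the magnitudes of their complex counterparts, and that the mean projected energy comes from the ratio of those means. The following standalone NumPy sketch illustrates that estimator on synthetic data; the array names and values are invented for illustration, and this is not pyhande's internal implementation (which also applies the sign convention and the blocking analysis described above).

```python
import numpy as np

rng = np.random.default_rng(1)
n_its = 2000

# Synthetic complex series standing in for the two QMC columns used in the
# projected-energy estimator: the reference population N_0 and sum_j H_0j N_j.
ref_pop = (50.0 + rng.normal(0.0, 2.0, n_its)) + 1j * rng.normal(0.0, 2.0, n_its)
proj_sum = (-25.0 + rng.normal(0.0, 1.0, n_its)) + 1j * rng.normal(0.0, 1.0, n_its)

# The real-valued columns are replaced by the magnitudes of the complex data ...
ref_mag = np.abs(ref_pop)
proj_mag = np.abs(proj_sum)

# ... and the mean projected energy is estimated from the ratio of the means.
energy_magnitude = proj_mag.mean() / ref_mag.mean()
print(f"|projected energy| estimate: {energy_magnitude:.4f}")
```

In a real calculation the series is serially correlated, which is why the record pairs this estimator with the Blocker error analyser rather than a plain standard error.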
5,042
Given the following text description, write Python code to implement the functionality described below step by step Description: Statistical Moments - Skewness and Kurtosis By Evgenia "Jenny" Nitishinskaya, Maxwell Margenot, and Delaney Granizo-Mackenzie. Part of the Quantopian Lecture Series Step1: Sometimes mean and variance are not enough to describe a distribution. When we calculate variance, we square the deviations around the mean. In the case of large deviations, we do not know whether they are likely to be positive or negative. This is where the skewness and symmetry of a distribution come in. A distribution is <i>symmetric</i> if the parts on either side of the mean are mirror images of each other. For example, the normal distribution is symmetric. The normal distribution with mean $\mu$ and standard deviation $\sigma$ is defined as $$ f(x) = \frac{1}{\sigma \sqrt{2 \pi}} e^{-\frac{(x - \mu)^2}{2 \sigma^2}} $$ We can plot it to confirm that it is symmetric Step2: A distribution which is not symmetric is called <i>skewed</i>. For instance, a distribution can have many small positive and a few large negative values (negatively skewed) or vice versa (positively skewed), and still have a mean of 0. A symmetric distribution has skewness 0. Positively skewed unimodal (one mode) distributions have the property that mean > median > mode. Negatively skewed unimodal distributions are the reverse, with mean < median < mode. All three are equal for a symmetric unimodal distribution. The explicit formula for skewness is Step3: Although skew is less obvious when graphing discrete data sets, we can still compute it. For example, below are the skew, mean, and median for S&P 500 returns 2012-2014. Note that the skew is negative, and so the mean is less than the median. Step4: Kurtosis Kurtosis attempts to measure the shape of the deviation from the mean. Generally, it describes how peaked a distribution is compared the the normal distribution, called mesokurtic. All normal distributions, regardless of mean and variance, have a kurtosis of 3. A leptokurtic distribution (kurtosis > 3) is highly peaked and has fat tails, while a platykurtic distribution (kurtosis < 3) is broad. Sometimes, however, kurtosis in excess of the normal distribution (kurtosis - 3) is used, and this is the default in scipy. A leptokurtic distribution has more frequent large jumps away from the mean than a normal distribution does while a platykurtic distribution has fewer. Step5: The formula for kurtosis is $$ K = \left ( \frac{n(n+1)}{(n-1)(n-2)(n-3)} \frac{\sum_{i=1}^n (X_i - \mu)^4}{\sigma^4} \right ) $$ while excess kurtosis is given by $$ K_E = \left ( \frac{n(n+1)}{(n-1)(n-2)(n-3)} \frac{\sum_{i=1}^n (X_i - \mu)^4}{\sigma^4} \right ) - \frac{3(n-1)^2}{(n-2)(n-3)} $$ For a large number of samples, the excess kurtosis becomes approximately $$ K_E \approx \frac{1}{n} \frac{\sum_{i=1}^n (X_i - \mu)^4}{\sigma^4} - 3 $$ Since above we were considering perfect, continuous distributions, this was the form that kurtosis took. However, for a set of samples drawn for the normal distribution, we would use the first definition, and (excess) kurtosis would only be approximately 0. We can use scipy to find the excess kurtosis of the S&P 500 returns from before. Step6: The histogram of the returns shows significant observations beyond 3 standard deviations away from the mean, multiple large spikes, so we shouldn't be surprised that the kurtosis is indicating a leptokurtic distribution. 
Other standardized moments It's no coincidence that the variance, skewness, and kurtosis take similar forms. They are the first and most important standardized moments, of which the $k$th has the form $$ \frac{E[(X - E[X])^k]}{\sigma^k} $$ The first standardized moment is always 0 $(E[X - E[X]] = E[X] - E[E[X]] = 0)$, so we only care about the second through fourth. All of the standardized moments are dimensionless numbers which describe the distribution, and in particular can be used to quantify how close to normal (having standardized moments $0, \sigma, 0, \sigma^2$) a distribution is. Normality Testing Using Jarque-Bera The Jarque-Bera test is a common statistical test that compares whether sample data has skewness and kurtosis similar to a normal distribution. We can run it here on the S&P 500 returns to find the p-value for them coming from a normal distribution. The Jarque Bera test's null hypothesis is that the data came from a normal distribution. Because of this it can err on the side of not catching a non-normal process if you have a low p-value. To be safe it can be good to increase your cutoff when using the test. Remember to treat p-values as binary and not try to read into them or compare them. We'll use a cutoff of 0.05 for our p-value. Test Calibration Remember that each test is written a little differently across different programming languages. You might not know whether it's the null or alternative hypothesis that the tested data comes from a normal distribution. It is recommended that you use the ? notation plus online searching to find documentation on the test; plus it is often a good idea to calibrate a test by checking it on simulated data and making sure it gives the right answer. Let's do that now. Step7: Great, if properly calibrated we should expect to be wrong $5\%$ of the time at a 0.05 significance level, and this is pretty close. This means that the test is working as we expect.
Python Code: import matplotlib.pyplot as plt import numpy as np import scipy.stats as stats Explanation: Statistical Moments - Skewness and Kurtosis By Evgenia "Jenny" Nitishinskaya, Maxwell Margenot, and Delaney Granizo-Mackenzie. Part of the Quantopian Lecture Series: www.quantopian.com/lectures github.com/quantopian/research_public Notebook released under the Creative Commons Attribution 4.0 License. End of explanation # Plot a normal distribution with mean = 0 and standard deviation = 2 xs = np.linspace(-6,6, 300) normal = stats.norm.pdf(xs) plt.plot(xs, normal); Explanation: Sometimes mean and variance are not enough to describe a distribution. When we calculate variance, we square the deviations around the mean. In the case of large deviations, we do not know whether they are likely to be positive or negative. This is where the skewness and symmetry of a distribution come in. A distribution is <i>symmetric</i> if the parts on either side of the mean are mirror images of each other. For example, the normal distribution is symmetric. The normal distribution with mean $\mu$ and standard deviation $\sigma$ is defined as $$ f(x) = \frac{1}{\sigma \sqrt{2 \pi}} e^{-\frac{(x - \mu)^2}{2 \sigma^2}} $$ We can plot it to confirm that it is symmetric: End of explanation # Generate x-values for which we will plot the distribution xs2 = np.linspace(stats.lognorm.ppf(0.01, .7, loc=-.1), stats.lognorm.ppf(0.99, .7, loc=-.1), 150) # Negatively skewed distribution lognormal = stats.lognorm.pdf(xs2, .7) plt.plot(xs2, lognormal, label='Skew > 0') # Positively skewed distribution plt.plot(xs2, lognormal[::-1], label='Skew < 0') plt.legend(); Explanation: A distribution which is not symmetric is called <i>skewed</i>. For instance, a distribution can have many small positive and a few large negative values (negatively skewed) or vice versa (positively skewed), and still have a mean of 0. A symmetric distribution has skewness 0. Positively skewed unimodal (one mode) distributions have the property that mean > median > mode. Negatively skewed unimodal distributions are the reverse, with mean < median < mode. All three are equal for a symmetric unimodal distribution. The explicit formula for skewness is: $$ S_K = \frac{n}{(n-1)(n-2)} \frac{\sum_{i=1}^n (X_i - \mu)^3}{\sigma^3} $$ Where $n$ is the number of observations, $\mu$ is the arithmetic mean, and $\sigma$ is the standard deviation. The sign of this quantity describes the direction of the skew as described above. We can plot a positively skewed and a negatively skewed distribution to see what they look like. For unimodal distributions, a negative skew typically indicates that the tail is fatter on the left, while a positive skew indicates that the tail is fatter on the right. End of explanation start = '2012-01-01' end = '2015-01-01' pricing = get_pricing('SPY', fields='price', start_date=start, end_date=end) returns = pricing.pct_change()[1:] print 'Skew:', stats.skew(returns) print 'Mean:', np.mean(returns) print 'Median:', np.median(returns) plt.hist(returns, 30); Explanation: Although skew is less obvious when graphing discrete data sets, we can still compute it. For example, below are the skew, mean, and median for S&P 500 returns 2012-2014. Note that the skew is negative, and so the mean is less than the median. 
End of explanation # Plot some example distributions plt.plot(xs,stats.laplace.pdf(xs), label='Leptokurtic') print 'Excess kurtosis of leptokurtic distribution:', (stats.laplace.stats(moments='k')) plt.plot(xs, normal, label='Mesokurtic (normal)') print 'Excess kurtosis of mesokurtic distribution:', (stats.norm.stats(moments='k')) plt.plot(xs,stats.cosine.pdf(xs), label='Platykurtic') print 'Excess kurtosis of platykurtic distribution:', (stats.cosine.stats(moments='k')) plt.legend(); Explanation: Kurtosis Kurtosis attempts to measure the shape of the deviation from the mean. Generally, it describes how peaked a distribution is compared the the normal distribution, called mesokurtic. All normal distributions, regardless of mean and variance, have a kurtosis of 3. A leptokurtic distribution (kurtosis > 3) is highly peaked and has fat tails, while a platykurtic distribution (kurtosis < 3) is broad. Sometimes, however, kurtosis in excess of the normal distribution (kurtosis - 3) is used, and this is the default in scipy. A leptokurtic distribution has more frequent large jumps away from the mean than a normal distribution does while a platykurtic distribution has fewer. End of explanation print "Excess kurtosis of returns: ", stats.kurtosis(returns) Explanation: The formula for kurtosis is $$ K = \left ( \frac{n(n+1)}{(n-1)(n-2)(n-3)} \frac{\sum_{i=1}^n (X_i - \mu)^4}{\sigma^4} \right ) $$ while excess kurtosis is given by $$ K_E = \left ( \frac{n(n+1)}{(n-1)(n-2)(n-3)} \frac{\sum_{i=1}^n (X_i - \mu)^4}{\sigma^4} \right ) - \frac{3(n-1)^2}{(n-2)(n-3)} $$ For a large number of samples, the excess kurtosis becomes approximately $$ K_E \approx \frac{1}{n} \frac{\sum_{i=1}^n (X_i - \mu)^4}{\sigma^4} - 3 $$ Since above we were considering perfect, continuous distributions, this was the form that kurtosis took. However, for a set of samples drawn for the normal distribution, we would use the first definition, and (excess) kurtosis would only be approximately 0. We can use scipy to find the excess kurtosis of the S&P 500 returns from before. End of explanation from statsmodels.stats.stattools import jarque_bera N = 1000 M = 1000 pvalues = np.ndarray((N)) for i in range(N): # Draw M samples from a normal distribution X = np.random.normal(0, 1, M); _, pvalue, _, _ = jarque_bera(X) pvalues[i] = pvalue # count number of pvalues below our default 0.05 cutoff num_significant = len(pvalues[pvalues < 0.05]) print float(num_significant) / N Explanation: The histogram of the returns shows significant observations beyond 3 standard deviations away from the mean, multiple large spikes, so we shouldn't be surprised that the kurtosis is indicating a leptokurtic distribution. Other standardized moments It's no coincidence that the variance, skewness, and kurtosis take similar forms. They are the first and most important standardized moments, of which the $k$th has the form $$ \frac{E[(X - E[X])^k]}{\sigma^k} $$ The first standardized moment is always 0 $(E[X - E[X]] = E[X] - E[E[X]] = 0)$, so we only care about the second through fourth. All of the standardized moments are dimensionless numbers which describe the distribution, and in particular can be used to quantify how close to normal (having standardized moments $0, \sigma, 0, \sigma^2$) a distribution is. Normality Testing Using Jarque-Bera The Jarque-Bera test is a common statistical test that compares whether sample data has skewness and kurtosis similar to a normal distribution. 
We can run it here on the S&P 500 returns to find the p-value for them coming from a normal distribution. The Jarque Bera test's null hypothesis is that the data came from a normal distribution. Because of this it can err on the side of not catching a non-normal process if you have a low p-value. To be safe it can be good to increase your cutoff when using the test. Remember to treat p-values as binary and not try to read into them or compare them. We'll use a cutoff of 0.05 for our p-value. Test Calibration Remember that each test is written a little differently across different programming languages. You might not know whether it's the null or alternative hypothesis that the tested data comes from a normal distribution. It is recommended that you use the ? notation plus online searching to find documentation on the test; plus it is often a good idea to calibrate a test by checking it on simulated data and making sure it gives the right answer. Let's do that now. End of explanation _, pvalue, _, _ = jarque_bera(returns) if pvalue > 0.05: print 'The returns are likely normal.' else: print 'The returns are likely not normal.' Explanation: Great, if properly calibrated we should expect to be wrong $5\%$ of the time at a 0.05 significance level, and this is pretty close. This means that the test is working as we expect. End of explanation
5,043
Given the following text description, write Python code to implement the functionality described below step by step Description: Python for Bioinformatics This Jupyter notebook is intended to be used alongside the book Python for Bioinformatics Note Step1: Listing 13.1 Step2: Listing 13.2 Step3: Listing 13.3 Step4: Listing 13.4 Step5: Listing 13.5 Step6: Listing 13.6 Step7: Listing 13.7
Python Code: !curl https://raw.githubusercontent.com/Serulab/Py4Bio/master/samples/samples.tar.bz2 -o samples.tar.bz2 !mkdir samples !tar xvfj samples.tar.bz2 -C samples import re mo = re.search('hello', 'Hello world, hello Python!') mo.group() mo.span() 'Hello world, hello Python!'.index('hello') import re mo = re.search('[Hh]ello', 'Hello world, hello Python!') mo.group() re.findall("[Hh]ello","Hello world, hello Python,!") re.finditer("[Hh]ello", "Hello world, hello Python,!") mos = re.finditer("[Hh]ello", "Hello world, hello Python,!") for x in mos: print(x.group()) print(x.span()) mo = re.match("hello", "Hello world, hello Python!") print (mo) mo = re.match("Hello", "Hello world, hello Python!") mo mo.group() mo.span() re.findall("[Hh]ello","Hello world, hello Python,!") rgx = re.compile("[Hh]ello") rgx.findall("Hello world, hello Python,!") rgx = re.compile("[Hh]ello") rgx.search("Hello world, hello Python,!") rgx.match("Hello world, hello Python,!") rgx.findall("Hello world, hello Python,!") Explanation: Python for Bioinformatics This Jupyter notebook is intented to be used alongside the book Python for Bioinformatics Note: Before opening the file, this file should be accesible from this Jupyter notebook. In order to do so, the following commands will download these files from Github and extract them into a directory called samples. Chapter 13: Regular Expressions End of explanation import re seq = "ATATAAGATGCGCGCGCTTATGCGCGCA" rgx = re.compile("TAT") i = 1 for mo in rgx.finditer(seq): print('Ocurrence {0}: {1}'.format(i, mo.group())) print('Position: From {0} to {1}'.format(mo.start(), mo.end())) i += 1 import re seq = "ATATAAGATGCGCGCGCTTATGCGCGCA" rgx = re.compile("(GC){3,}") result = rgx.search(seq) result.group() result.groups() rgx = re.compile("((GC){3,})") result = rgx.search(seq) result.groups() # Only the inner group is non-capturing rgx = re.compile("((?:GC){3,})") result = rgx.search(seq) result.groups() rgx = re.compile("TAT") # No group at all. rgx.findall(seq) # This returns a list of matching strings. rgx = re.compile("(GC){3,}") # One group. Return a list rgx.findall(seq) # with the group for each match. rgx = re.compile("((GC){3,})") # Two groups. Return a rgx.findall(seq) # list with tuples for each match. rgx = re.compile("((?:GC){3,})") # Using a non-capturing rgx.findall(seq) # group to get only the matches. 
Explanation: Listing 13.1: findTAT.py: Find the first “TAT” repeat End of explanation import re rgx = re.compile("(?P<TBX>TATA..).*(?P<CGislands>(?:GC){3,})") seq = "ATATAAGATGCGCGCGCTTATGCGCGCA" result = rgx.search(seq) print(result.group('CGislands')) print(result.group('TBX')) Explanation: Listing 13.2: subgroups.py: Find multiple sub-patterns End of explanation import re, sys myregex = re.compile(sys.argv[2]) counter = 0 with open(sys.argv[1]) as fh: for line in fh: if myregex.search(line): counter += 1 print(counter) Explanation: Listing 13.3: regexsys1.py: Count lines with a user-supplied pattern on it End of explanation import re, sys myregex = re.compile(sys.argv[2]) i = 0 with open(sys.argv[1]) as fh: for line in fh: i += len(myregex.findall(line)) print(i) Explanation: Listing 13.4: countinfile.py: Count the occurrences of a pattern in a file End of explanation import re regex = re.compile("(?:GC){3,}") seq="ATGATCGTACTGCGCGCTTCATGTGATGCGCGCGCGCAGACTATAAG" print ("Before:",seq) print ("After:",regex.sub("",seq)) Explanation: Listing 13.5: deletegc.py: Delete GC repeats (more than 3 GC in a row) End of explanation import re pattern = "[LIVM]{2}.RL[DE].{4}RLE" with open('samples/Q5R5X8.fas') as fh: fh.readline() # Discard the first line. seq = "" for line in fh: seq += line.strip() rgx = re.compile(pattern) result = rgx.search(seq) patternfound = result.group() span = result.span() leftpos = span[0]-10 if leftpos<0: leftpos = 0 print(seq[leftpos:span[0]].lower() + patternfound + seq[span[1]:span[1]+10].lower()) Explanation: Listing 13.6: searchinfasta.py: Search a pattern in a FASTA file End of explanation import re regex = re.compile(' |\d|\n|\t') seq = '' for line in open('samples/pMOSBlue.txt'): seq += regex.sub('',line) print (seq) Explanation: Listing 13.7: cleanseq.py: Cleans a DNA sequence End of explanation
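Listing 13.6 reports only the first hit that re.search finds. Where it might help to see every occurrence, here is a hedged variant built on finditer; it assumes the same FASTA layout and file path as the listing, and the loop variable names are mine.

```python
import re

pattern = re.compile("[LIVM]{2}.RL[DE].{4}RLE")

with open("samples/Q5R5X8.fas") as fh:
    fh.readline()  # skip the FASTA header line
    seq = "".join(line.strip() for line in fh)

# finditer yields one match object per non-overlapping occurrence,
# so every hit is reported instead of only the first one.
for n, mo in enumerate(pattern.finditer(seq), start=1):
    print(f"hit {n}: {mo.group()} at positions {mo.start()}-{mo.end()}")
```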
5,044
Given the following text description, write Python code to implement the functionality described below step by step Description: This notebook contains a summary table of the q-learners results. The results are the ones evaluated on the test set, with the learned actions (without learning on the test set) Step1: Predictor Results Step2: Automatic Trader Results Step3: Features that were used in the experiments Step4: Training Results Step5: Test Results without learning in the test set Step6: Test Results with learning in the test set (always keeping causality) Step7: Sharpe increases summarized Step8: Best Agent The Agent with the best sharpe test increase (learning or not learning) was chosen as the "best".
Python Code: # Basic imports import os import pandas as pd import matplotlib.pyplot as plt import numpy as np import sys from time import time import pickle %matplotlib inline %pylab inline pylab.rcParams['figure.figsize'] = (20.0, 10.0) %load_ext autoreload %autoreload 2 sys.path.append('../../') Explanation: This notebook contains a resumed table of the q-learners results. The results are the ones evaluated on the test set, with the learned actions (without learning on the test set) End of explanation pred_results_df = pd.DataFrame({ 1.0: {'train_r2': 0.983486, 'train_mre': 0.008762, 'test_r2': 0.976241, 'test_mre': 0.013906}, 7.0: {'train_r2': 0.906177, 'train_mre': 0.026232, 'test_r2': 0.874892, 'test_mre': 0.034764}, 14.0: {'train_r2': 0.826779, 'train_mre': 0.037349, 'test_r2': 0.758697, 'test_mre': 0.051755}, 28.0: {'train_r2': 0.696077, 'train_mre': 0.052396, 'test_r2': 0.515802, 'test_mre': 0.078545}, 56.0: {'train_r2': 0.494079, 'train_mre': 0.073589, 'test_r2': 0.152134, 'test_mre': 0.108190}, }).T pred_results_df = pred_results_df[['train_r2', 'test_r2', 'train_mre', 'test_mre']] pred_results_df.index.name = 'ahead_days' pred_results_df Explanation: Predictor Results End of explanation features = [ 'dyna', 'states', 'actions', 'training_days', 'epochs', 'predictor', 'random_decrease', ] metrics =[ 'sharpe_ratio', 'cumulative_return', 'epoch_time' ] names = [ 'simple_q_learner', 'simple_q_learner_1000_states', 'simple_q_learner_1000_states_4_actions_full_training', 'simple_q_learner_1000_states_full_training', 'simple_q_learner_100_epochs', 'simple_q_learner_11_actions', 'simple_q_learner_fast_learner', 'simple_q_learner_fast_learner_1000_states', 'simple_q_learner_fast_learner_11_actions', 'simple_q_learner_fast_learner_3_actions', 'simple_q_learner_fast_learner_full_training', 'simple_q_learner_full_training', 'dyna_q_1000_states_full_training', 'dyna_q_learner', 'dyna_q_with_predictor', 'dyna_q_with_predictor_full_training', 'dyna_q_with_predictor_full_training_dyna1', ] feat_data = np.array([ # (dyna, states, actions, training_days, epochs, predictor, random_decrease) [0, 125, 2, 512, 15, False, 0.9999], # simple_q_learner [0, 1000, 2, 512, 15, False, 0.9999], # simple_q_learner_1000_states [0, 1000, 4, 5268, 7, False, 0.9999], # simple_q_learner_1000_states_4_actions_full_training [0, 1000, 2, 5268, 15, False, 0.9999], # simple_q_learner_1000_states_full_training [0, 125, 2, 512, 100, False, 0.9999], # simple_q_learner_100_epochs [0, 125, 11, 512, 10, False, 0.9999], # simple_q_learner_11_actions [0, 125, 2, 512, 4, False, 0.999], # simple_q_learner_fast_learner [0, 1000, 2, 512, 4, False, 0.999], # simple_q_learner_fast_learner_1000_states [0, 125, 11, 512, 4, False, 0.999], # simple_q_learner_fast_learner_11_actions [0, 125, 3, 512, 4, False, 0.999], # simple_q_learner_fast_learner_3_actions [0, 125, 2, 5268, 4, False, 0.999], # simple_q_learner_fast_learner_full_training [0, 125, 2, 5268, 15, False, 0.9999], # simple_q_learner_full_training [20, 1000, 2, 5268, 7, False, 0.9999], # dyna_q_1000_states_full_training [20, 125, 2, 512, 4, False, 0.9999], # dyna_q_learner [20, 125, 2, 512, 4, True, 0.9999], # dyna_q_with_predictor [20, 125, 2, 5268, 4, True, 0.9999], # dyna_q_with_predictor_full_training [1, 125, 2, 5268, 4, True, 0.9999], # dyna_q_with_predictor_full_training_dyna1 ]) experiments_df = pd.DataFrame(feat_data, columns=features, index=names) experiments_df.index.name = 'nb_name' train_res_data = { 'simple_q_learner': {'sharpe': 1.9858481612185834, 'cum_ret': 
0.38359700000000174, 'epoch_time': 18.330891609191895}, 'simple_q_learner_1000_states': {'sharpe': 3.4470302925746776, 'cum_ret': 0.7292610000000004, 'epoch_time': 18.28188133239746}, 'simple_q_learner_1000_states_4_actions_full_training': {'sharpe': 2.2430093688893264, 'cum_ret': 30.14936200000002, 'epoch_time': 157.69741320610046}, 'simple_q_learner_1000_states_full_training': {'sharpe': 2.366638028444387, 'cum_ret': 79.61800199999992, 'epoch_time': 159.31651139259338}, 'simple_q_learner_100_epochs': {'sharpe': 4.093353629096188, 'cum_ret': 0.6627280000000009, 'epoch_time': 9.004882335662842}, 'simple_q_learner_11_actions': {'sharpe': 1.5440407782808305, 'cum_ret': 0.2412700000000001, 'epoch_time': 11.08903431892395}, 'simple_q_learner_fast_learner': {'sharpe': 2.8787265519379908, 'cum_ret':0.5468269999999986, 'epoch_time': 18.931288242340088}, 'simple_q_learner_fast_learner_1000_states': {'sharpe': 2.031446601959524, 'cum_ret': 0.3971230000000021, 'epoch_time': 19.006957530975342}, 'simple_q_learner_fast_learner_11_actions': {'sharpe': 3.241438316121647, 'cum_ret': 0.541966, 'epoch_time': 18.913504123687744}, 'simple_q_learner_fast_learner_3_actions': {'sharpe': 2.9448069674427555, 'cum_ret': 0.4873689999999995, 'epoch_time': 18.46741485595703}, 'simple_q_learner_fast_learner_full_training': {'sharpe': 1.0444534903132408, 'cum_ret': 0.7844770000000019, 'epoch_time': 143.5039553642273}, 'simple_q_full_training': {'sharpe': 1.2592450659232495, 'cum_ret': 1.7391450000000006, 'epoch_time': 115.70198798179626}, 'dyna_q_1000_states_full_training': {'sharpe': 2.2964510954840325, 'cum_ret': 94.75696199999993, 'epoch_time': 242.88240551948547}, 'dyna_q_learner': {'sharpe': 3.706435588713091, 'cum_ret': 0.4938250000000006 , 'epoch_time': 18.87182092666626}, 'dyna_q_with_predictor': {'sharpe': 3.2884867210125845, 'cum_ret': 0.5397989999999993, 'epoch_time': 458.8401937484741}, 'dyna_q_with_predictor_full_training': {'sharpe': 1.0037137587999854, 'cum_ret': 2.565081999999997, 'epoch_time': 7850.391056537628}, 'dyna_q_with_predictor_full_training_dyna1': {'sharpe': 0.48228187419119906, 'cum_ret': 0.1737430000000002, 'epoch_time': 730.5918335914612}, } train_res_data_df = pd.DataFrame(train_res_data).T test_res_data_no_learning = { 'simple_q_learner': {'sharpe': 0.3664203166030617, 'cum_ret': 0.06372499999999937, 'epoch_time': 17.75287628173828}, 'simple_q_learner_1000_states': {'sharpe': -0.013747768227987086, 'cum_ret': -0.013047000000000142, 'epoch_time': 17.661759853363037}, 'simple_q_learner_1000_states_4_actions_full_training': {'sharpe': 0.9400492987950515, 'cum_ret': 0.10791900000000054, 'epoch_time': 13.83948016166687}, 'simple_q_learner_1000_states_full_training': {'sharpe': 1.4827065747174577, 'cum_ret': 0.22123900000000085, 'epoch_time': 9.844955205917358}, 'simple_q_learner_100_epochs': {'sharpe': 0.6420028402682839, 'cum_ret': 0.10032399999999986, 'epoch_time': 9.116246461868286}, 'simple_q_learner_11_actions': {'sharpe': 0.15616450321809833, 'cum_ret': 0.019991000000000758, 'epoch_time': 10.187344551086426}, 'simple_q_learner_fast_learner': {'sharpe': 0.9643510680410812, 'cum_ret': 0.18794100000000125, 'epoch_time': 18.13912320137024}, 'simple_q_learner_fast_learner_1000_states': {'sharpe': 0.8228017709095453, 'cum_ret': 0.16162700000000063, 'epoch_time': 19.452654361724854}, 'simple_q_learner_fast_learner_11_actions': {'sharpe': 0.8238261816524384, 'cum_ret': 0.12766000000000033, 'epoch_time': 18.901001930236816}, 'simple_q_learner_fast_learner_3_actions': {'sharpe': 
0.6332862559879147, 'cum_ret': 0.08036399999999966, 'epoch_time': 19.221533060073853}, 'simple_q_learner_fast_learner_full_training': {'sharpe': 1.2605807833904492, 'cum_ret': 0.056606000000000156, 'epoch_time': 11.412826538085938}, 'simple_q_full_training': {'sharpe': -0.2562905901467118, 'cum_ret': -0.027945999999999693, 'epoch_time': 8.009900569915771}, 'dyna_q_1000_states_full_training': {'sharpe': 0.4267994866360769, 'cum_ret': 0.0652820000000005, 'epoch_time': 14.224964618682861}, 'dyna_q_learner': {'sharpe': 0.5191712068491942, 'cum_ret': 0.07307299999999883, 'epoch_time': 16.431984901428223}, 'dyna_q_with_predictor': {'sharpe': 0.7435489843809434, 'cum_ret': 0.10403399999999974, 'epoch_time': 6.692898988723755}, 'dyna_q_with_predictor_full_training': {'sharpe': -0.33503797163532956, 'cum_ret': -0.029740999999999795, 'epoch_time': 8.51533818244934}, 'dyna_q_with_predictor_full_training_dyna1': {'sharpe': 0.20288841658633258, 'cum_ret': 0.008380000000000276, 'epoch_time': 10.236766338348389}, } test_res_data_no_learning_df = pd.DataFrame(test_res_data_no_learning).T test_res_data_learning = { 'simple_q_learner': {'sharpe': 0.9735950444291429, 'cum_ret': 0.1953619999999998, 'epoch_time': 18.097697019577026}, 'simple_q_learner_1000_states': {'sharpe': -0.0867440896667206, 'cum_ret': -0.027372000000001173, 'epoch_time': 17.762672901153564}, 'simple_q_learner_1000_states_4_actions_full_training': {'sharpe': 1.109613523501088, 'cum_ret': 0.12868000000000057, 'epoch_time': 9.899595499038696}, 'simple_q_learner_1000_states_full_training': {'sharpe': 1.5176752934460862, 'cum_ret': 0.2069550000000011, 'epoch_time': 9.233611106872559}, 'simple_q_learner_100_epochs': {'sharpe': 0.09274627213069256, 'cum_ret': 0.008058000000000565, 'epoch_time': 8.653764009475708}, 'simple_q_learner_11_actions': {'sharpe': 0.4691456599751897, 'cum_ret': 0.07124699999999917, 'epoch_time': 10.827114582061768}, 'simple_q_learner_fast_learner': {'sharpe': 0.6020182964860242, 'cum_ret': 0.09249299999999816, 'epoch_time': 17.882429122924805}, 'simple_q_learner_fast_learner_1000_states': {'sharpe': 0.17618139275375405, 'cum_ret': 0.02545300000000017, 'epoch_time': 15.724592685699463}, 'simple_q_learner_fast_learner_11_actions': {'sharpe': 0.9608337022400049, 'cum_ret': 0.1406880000000006, 'epoch_time': 17.67305564880371}, 'simple_q_learner_fast_learner_3_actions': {'sharpe': 0.3254406127664859, 'cum_ret': 0.04086700000000043, 'epoch_time': 18.100637197494507}, 'simple_q_learner_fast_learner_full_training': {'sharpe': 1.2605807833904492, 'cum_ret': 0.056606000000000156, 'epoch_time': 12.214732885360718}, 'simple_q_full_training': {'sharpe': 0.3139835605580342, 'cum_ret': 0.02497299999999969, 'epoch_time': 7.958802700042725}, 'dyna_q_1000_states_full_training': {'sharpe': 0.48863969848043476, 'cum_ret': 0.06846099999999988, 'epoch_time': 18.820592880249023}, 'dyna_q_learner': {'sharpe': 0.0700928915599047, 'cum_ret': 0.004358999999999114, 'epoch_time': 18.085463523864746}, 'dyna_q_with_predictor': {'sharpe': 0.6954014537549168, 'cum_ret': 0.09154599999999946, 'epoch_time': 338.36568880081177}, 'dyna_q_with_predictor_full_training': {'sharpe': -0.8531759696425502, 'cum_ret': -0.07708900000000052, 'epoch_time': 375.830899477005}, 'dyna_q_with_predictor_full_training_dyna1': {'sharpe': -0.15635735184097058, 'cum_ret': -0.006745999999999919, 'epoch_time': 38.24271035194397}, } test_res_data_learning_df = pd.DataFrame(test_res_data_learning).T train_benchmark_data = { 'simple_q_learner': {'sharpe_bench': 1.601691549431671, 
'cum_ret_bench': 0.4244923418116293}, 'simple_q_learner_1000_states': {'sharpe_bench': 1.601691549431671, 'cum_ret_bench': 0.4244923418116293}, 'simple_q_learner_1000_states_4_actions_full_training': {'sharpe_bench': 0.4566770027925799, 'cum_ret_bench': 3.304502617801047}, 'simple_q_learner_1000_states_full_training': {'sharpe_bench': 0.4566770027925799, 'cum_ret_bench': 3.304502617801047}, 'simple_q_learner_100_epochs': {'sharpe_bench': 1.601691549431671, 'cum_ret_bench': 0.4244923418116293}, 'simple_q_learner_11_actions': {'sharpe_bench': 1.601691549431671, 'cum_ret_bench': 0.4244923418116293}, 'simple_q_learner_fast_learner': {'sharpe_bench': 1.601691549431671, 'cum_ret_bench': 0.4244923418116293}, 'simple_q_learner_fast_learner_1000_states': {'sharpe_bench': 1.601691549431671, 'cum_ret_bench': 0.4244923418116293}, 'simple_q_learner_fast_learner_11_actions': {'sharpe_bench': 1.601691549431671, 'cum_ret_bench': 0.4244923418116293}, 'simple_q_learner_fast_learner_3_actions': {'sharpe_bench': 1.601691549431671, 'cum_ret_bench': 0.4244923418116293}, 'simple_q_learner_fast_learner_full_training': {'sharpe_bench': 0.4566770027925799, 'cum_ret_bench': 3.304502617801047}, 'simple_q_full_training': {'sharpe_bench': 0.4566770027925799, 'cum_ret_bench': 3.304502617801047}, 'dyna_q_1000_states_full_training': {'sharpe_bench': 0.4566770027925799, 'cum_ret_bench': 3.304502617801047}, 'dyna_q_learner': {'sharpe_bench': 1.601691549431671, 'cum_ret_bench': 0.4244923418116293}, 'dyna_q_with_predictor': {'sharpe_bench': 1.601691549431671, 'cum_ret_bench': 0.4244923418116293}, 'dyna_q_with_predictor_full_training': {'sharpe_bench': 0.4566770027925799, 'cum_ret_bench': 3.304502617801047}, 'dyna_q_with_predictor_full_training_dyna1': {'sharpe_bench': 0.4566770027925799, 'cum_ret_bench': 3.304502617801047}, } train_benchmark_data_df = pd.DataFrame(train_benchmark_data).T test_benchmark_data = { 'simple_q_learner': {'sharpe_bench': 0.44271542660031676, 'cum_ret_bench': 0.1070225832012679}, 'simple_q_learner_1000_states': {'sharpe_bench': 0.44271542660031676, 'cum_ret_bench': 0.1070225832012679}, 'simple_q_learner_1000_states_4_actions_full_training': {'sharpe_bench': 0.44271542660031676, 'cum_ret_bench': 0.1070225832012679}, 'simple_q_learner_1000_states_full_training': {'sharpe_bench': 0.44271542660031676, 'cum_ret_bench': 0.1070225832012679}, 'simple_q_learner_100_epochs': {'sharpe_bench': 0.44271542660031676, 'cum_ret_bench': 0.1070225832012679}, 'simple_q_learner_11_actions': {'sharpe_bench': 0.44271542660031676, 'cum_ret_bench': 0.1070225832012679}, 'simple_q_learner_fast_learner': {'sharpe_bench': 0.44271542660031676, 'cum_ret_bench': 0.1070225832012679}, 'simple_q_learner_fast_learner_1000_states': {'sharpe_bench': 0.44271542660031676, 'cum_ret_bench': 0.1070225832012679}, 'simple_q_learner_fast_learner_11_actions': {'sharpe_bench': 0.44271542660031676, 'cum_ret_bench': 0.1070225832012679}, 'simple_q_learner_fast_learner_3_actions': {'sharpe_bench': 0.44271542660031676, 'cum_ret_bench': 0.1070225832012679}, 'simple_q_learner_fast_learner_full_training': {'sharpe_bench': 0.44271542660031676, 'cum_ret_bench': 0.1070225832012679}, 'simple_q_full_training': {'sharpe_bench': 0.44271542660031676, 'cum_ret_bench': 0.1070225832012679}, 'dyna_q_1000_states_full_training': {'sharpe_bench': 0.44271542660031676, 'cum_ret_bench': 0.1070225832012679}, 'dyna_q_learner': {'sharpe_bench': 0.44271542660031676, 'cum_ret_bench': 0.1070225832012679}, 'dyna_q_with_predictor': {'sharpe_bench': 0.2930367522823553, 
'cum_ret_bench': 0.05002151977428149}, 'dyna_q_with_predictor_full_training': {'sharpe_bench': 0.2930367522823553, 'cum_ret_bench': 0.05002151977428149}, 'dyna_q_with_predictor_full_training_dyna1': {'sharpe_bench': 0.3772011734533203, 'cum_ret_bench': 0.07288030223327424}, } test_benchmark_data_df = pd.DataFrame(test_benchmark_data).T Explanation: Automatic Trader Results End of explanation print(experiments_df.shape) experiments_df experiments_df.to_csv('../../data/experiments_df.csv') Explanation: Features that were used in the experiments End of explanation training_res_df = train_res_data_df.join(train_benchmark_data_df) training_res_df.index.name = 'nb_name' training_res_df['sharpe_increase'] = training_res_df['sharpe'] - training_res_df['sharpe_bench'] training_res_df['cum_ret_increase'] = training_res_df['cum_ret'] - training_res_df['cum_ret_bench'] print(training_res_df.shape) training_res_df training_res_df.to_csv('../../data/training_res_df.csv') Explanation: Training Results End of explanation test_no_learn_res_df = test_res_data_no_learning_df.join(test_benchmark_data_df) test_no_learn_res_df.index.name = 'nb_name' test_no_learn_res_df['sharpe_increase'] = test_no_learn_res_df['sharpe'] - test_no_learn_res_df['sharpe_bench'] test_no_learn_res_df['cum_ret_increase'] = test_no_learn_res_df['cum_ret'] - test_no_learn_res_df['cum_ret_bench'] print(test_no_learn_res_df.shape) test_no_learn_res_df test_no_learn_res_df.to_csv('../../data/test_no_learn_res_df.csv') Explanation: Test Results without learning in the test set End of explanation test_learn_res_df = test_res_data_learning_df.join(test_benchmark_data_df) test_learn_res_df.index.name = 'nb_name' test_learn_res_df['sharpe_increase'] = test_learn_res_df['sharpe'] - test_learn_res_df['sharpe_bench'] test_learn_res_df['cum_ret_increase'] = test_learn_res_df['cum_ret'] - test_learn_res_df['cum_ret_bench'] print(test_learn_res_df.shape) test_learn_res_df test_learn_res_df.to_csv('../../data/test_learn_res_df.csv') Explanation: Test Results with learning in the test set (always keeping causality) End of explanation SHARPE_Q = 'sharpe_increase' sharpe_q_df = pd.DataFrame(training_res_df[SHARPE_Q]).rename(columns={SHARPE_Q:'sharpe_i_train'}) sharpe_q_df = sharpe_q_df.join(test_no_learn_res_df[SHARPE_Q].rename('sharpe_i_test_no_learn')) sharpe_q_df = sharpe_q_df.join(test_learn_res_df[SHARPE_Q].rename('sharpe_i_test_learn')) print(sharpe_q_df.shape) sharpe_q_df Explanation: Sharpe increases resumed End of explanation best_agent_name = 'simple_q_learner_1000_states_full_training' pd.DataFrame(experiments_df.loc[best_agent_name]).T indexes = ['training', 'test_no_learn', 'test_learn'] best_agent_df = pd.concat([ training_res_df.loc[best_agent_name], test_no_learn_res_df.loc[best_agent_name], test_learn_res_df.loc[best_agent_name], ], axis=1).T best_agent_df.index = indexes best_agent_df Explanation: Best Agent The Agent with the best sharpe test increase (learning or not learning) was chosen as the "best". End of explanation
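Step8 above picks the "best" agent as the one with the largest Sharpe increase on the test set, with or without learning. Here is a short pandas sketch of that selection, assuming the test_no_learn_res_df and test_learn_res_df frames built earlier; the helper name is mine, and the notebook itself simply hard-codes the chosen agent.

```python
import pandas as pd


def pick_best_agent(test_no_learn: pd.DataFrame, test_learn: pd.DataFrame) -> str:
    """Return the agent whose best test-set Sharpe increase is largest."""
    best_increase = pd.concat(
        [test_no_learn["sharpe_increase"], test_learn["sharpe_increase"]], axis=1
    ).max(axis=1)
    return best_increase.idxmax()


# Usage with the DataFrames assembled above:
# best_agent_name = pick_best_agent(test_no_learn_res_df, test_learn_res_df)
# print(best_agent_name)
```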
5,045
Given the following text description, write Python code to implement the functionality described below step by step Description: Building database I have 6 SS WPS2 genomes, so I'll concatenate sequences into a single combined fasta file Version 1.3 Step1: OK - verified Running Blast query Step2: Optimizing number of threads - No real difference Step6: Defining functions Step7: Making all_WPS-2 Database [ln3] again more scalable way Step8: Reading Raw input file and creating specific input file Step9: After iteration0 is done Step10: Outputting iteration 1 Step11: Repeating what we've done Step12: adding those to the database Step13: Blasting assembled and unasembled contigs Step14: Parsing Blast Outputs Trying pandas instead Step15: Adding "Metric", "Seq_size", "Seq_nt" fields indeces of metagenome sequences and the recruited from blast dataframe are placed in lists of the same size Step16: Number of unique scaffolds found Step17: Evalue sorting Step18: Filtering out by evalue cutoff Step19: Efect of evalue on # of unique scaffolds found Step20: Doing some stats on the recruited data Step21: Retrieving sequences from dataframe list Step22: Adding Seq_nt and Size entries Step24: Indexing approach Less memory intensive, close index after it's done Step25: Filtering Step Step26: Batch export to csv and multiple FASTA
Python Code: from Bio import SeqIO import time import os import shutil import pandas #parameters version = 'v1.4' project_name = 'wps2_bl_metagenome' e_value = 1e-20 iden = 95.0 metric = 50.0 !cat ../ss_genomes/AP_WPS-2_bacterium* > ../ss_genomes/all_AP_WPS-2_bacterium.fna !makeblastdb -in ../ss_genomes/all_AP_WPS-2_bacterium.fna -dbtype nucl -title "combined_ss_WPS2" -out ../blast_db/combined_ss_WPS2/combined_ss_WPS2 -parse_seqids Explanation: Building database I have 6 SS WPS2 genomes, so I'll concatenate sequences into a single combined fasta file Version 1.3: Functional programming Combined assembled and unassembled End of explanation import time start =time.time() !blastp -db test_pp_pmo -query 76969.assembled.faa -out assembl_contigs_vs_test_pp_pmo2.tab -evalue .00001 -outfmt 6 -num_threads 8 print "command was executed in %d seconds"%(time.time()-start) Explanation: OK - verified Running Blast query End of explanation ex_time=list() num_thread=list() for i in [1,2,4,6,8]: start =time.time() !blastp -db test_pp_pmo -query 76969.assembled.faa -out assembl_contigs_vs_test_pp_pmo2.tab -evalue .00001 -outfmt 6 -num_threads {i} elapsed=time.time()-start print 'number of threads: ', i print 'exec time: ', elapsed ex_time.append(elapsed) num_thread.append(i) Explanation: Optimizing number of threads - No real difference End of explanation def parse_contigs_ind(f_name): Returns sequences index from the input files(s) remember to close index object after use handle = open(f_name, "rU") record_dict = SeqIO.index(f_name,"fasta") handle.close() return record_dict #returning specific sequences and overal list def retrive_sequence(contig_lst, rec_dic): Returns list of sequence elements from dictionary/index of SeqIO objects specific to the contig_lst parameter contig_seqs = list() #record_dict = rec_dic #handle.close() for contig in contig_lst: contig_seqs.append(rec_dic[contig].seq.tostring()) return contig_seqs def filter_seq_dict(key_lst, rec_dic): Returns filtered dictionary element from rec_dic according to sequence names passed in key_lst return { key: rec_dic[key] for key in key_lst } def unique_scaffold_topEval(dataframe): #returns pandas series object variables = list(dataframe.columns.values) scaffolds=dict() rows=list() for row in dataframe.itertuples(): #if row[1]=='Ga0073928_10002560': if row[1] not in scaffolds: scaffolds[row[1]]=row else: if row[11]<scaffolds[row[1]][11]: scaffolds[row[1]]=row rows=scaffolds.values() #variables=['quid', 'suid', 'iden', 'alen', 'mism', 'gapo', 'qsta', 'qend', 'ssta', 'send', 'eval', 'bits'] df = pandas.DataFrame([[getattr(i,j) for j in variables] for i in rows], columns = variables) return df Explanation: Defining functions End of explanation iteration = 0 blast_db_homeDir = "../blast_db/" #!mkdir blast_db_homeDir Explanation: Making all_WPS-2 Database [ln3] again more scalable way End of explanation #command_line file of reference sequences reference_0 = "../ss_genomes/all_AP_WPS-2_bacterium.fna" input_dir = "input_files" if os.path.exists(input_dir): shutil.rmtree(input_dir) try: os.mkdir(input_dir) except OSError: raise #returns records_0 = parse_contigs_ind(reference_0) ref_out_0 = "input_files/reference0.fna" with open(ref_out_0, "w") as handle: SeqIO.write(records_0.values(), handle, "fasta") #NO NEED TO CLOSE with statement will automatically close the file records_0.close() Explanation: Reading Raw input file and creating specific input file End of explanation output_1 = "../ss_genomes/iteration1.fna" iteration1_all 
=cutoff_contigs['1e-20_cutoff_contigs'] print len(iteration1_all) len((iteration1_all[iteration1_all['iden']>=95.0])) print len((iteration1_all[iteration1_all['iden']>=95.0])) print len((iteration1_all[iteration1_all['Metric']>=50])) iteration1_filtered = iteration1_all[(iteration1_all['iden']>=95.0)&(iteration1_all['Metric']>=50)] len(iteration1_filtered) names = iteration1_filtered['quid'].tolist() records_0 = dict(parse_contigs_ind(input_0)) #all_records = parse_contigs_ind(assembled_contigs) records_1 = filter_seq_dict(names, all_records) print type(records_1) records_1.update(records_0) len(records_0.items()) len(records_1.items()) Explanation: After iteration0 is done: filter assembled e-20: >=95 iden, >=50 metric add records to output_1 End of explanation #records_0 = parse_contigs_ind(input_0) output_1_0 = "../ss_genomes/iteration1_0.fna" output_1 = "../ss_genomes/iteration1_0.fna" with open(output_1_0, "w") as handle: SeqIO.write(records_0.values(), handle, "fasta") with open(output_1, "w") as handle: SeqIO.write(records_1.values(), handle, "fasta") Explanation: Outputting iteration 1 End of explanation type(records_1) # output_0 = "../ss_genomes/iteration0.fna" # with open(output_0, "w") as handle: # SeqIO.write(records_0.values(), handle, "fasta") len(records_0) Explanation: Repeating what we've done End of explanation (iteration1_all[iteration1_all['iden']>=95.0])&(iteration1_all[iteration1_all['alen']>=50]) dct1 in_db = '../ss_genomes/all_AP_WPS-2_bacterium.fna' title = 'combined_ss_WPS2' outfile = "../blast_db/combined_ss_WPS2/combined_ss_WPS2" !makeblastdb -in {infile} -dbtype nucl -title "{title}" -out {outfile} -parse_seqids Explanation: adding those to the database End of explanation import time q_fname1 = './../IMG Data/76969.assembled.fna' q_fname2 = './../IMG Data/76969.unassembled_illumina.fna' out_name1= 'all_wps2_q_assembled76969.tab' out_name2= 'all_wps2_q_unasembled76969.tab' database = '../blast_db/combined_ss_WPS2/combined_ss_WPS2' # start =time.time() # !blastn -db {database} -query "{q_fname1}" -out {out_name1} -evalue .00001 -outfmt 6 -num_threads 8 # elapsed=time.time()-start # print 'exec time: ', elapsed # start =time.time() # !blastn -db {database} -query "{q_fname2}" -out {out_name2} -evalue .00001 -outfmt 6 -num_threads 8 # elapsed=time.time()-start # print 'exec time: ', elapsed Explanation: Blasting assembled and unasembled contigs End of explanation del recruited_mg recruited_mg = [] blast_cols = ['quid', 'suid', 'iden', 'alen', 'mism', 'gapo', 'qsta', 'qend', 'ssta', 'send', 'eval', 'bits'] recruited_mg_1 = pandas.read_csv(out_name1 ,sep="\t", header=None) recruited_mg_1.columns=['quid', 'suid', 'iden', 'alen', 'mism', 'gapo', 'qsta', 'qend', 'ssta', 'send', 'eval', 'bits'] recruited_mg_2 = pandas.read_csv(out_name2 ,sep="\t", header=None) recruited_mg_2.columns=['quid', 'suid', 'iden', 'alen', 'mism', 'gapo', 'qsta', 'qend', 'ssta', 'send', 'eval', 'bits'] recruited_mg = [recruited_mg_1, recruited_mg_2] len(recruited_mg[0]) len(recruited_mg[1]) Explanation: Parsing Blast Outputs Trying pandas instead End of explanation #For linux use symlink metagenome_1 ="../../pp_metagenome1/IMG Data/76969.assembled.fna" metagenome_2 ="../../pp_metagenome1/IMG Data/76969.unassembled_illumina.fna" #!Beware of closing all records enries eventually all_records_mg_1=parse_contigs_ind(metagenome_1) all_records_mg_2=parse_contigs_ind(metagenome_2) all_records=[all_records_mg_1, all_records_mg_2] start = time.time() 
#wps2_eval_cutoff['Seq_nt']=retrive_sequence(contig_list, all_records) for i in range(len(recruited_mg)): #cutoff_contigs[dataframe]=evalue_filter(cutoff_contigs[dataframe]) recruited_mg[i]=unique_scaffold_topEval(recruited_mg[i]) contig_list = recruited_mg[i]['quid'].tolist() recruited_mg[i]['Seq_nt']=retrive_sequence(contig_list, all_records[i]) recruited_mg[i]['Seq_size']=recruited_mg[i]['Seq_nt'].apply(lambda x: len(x)) recruited_mg[i]['Coverage']=recruited_mg[i]['alen'].apply(lambda x: float(x))/recruited_mg[i]['Seq_size'] recruited_mg[i]['Metric']=recruited_mg[i]['Coverage']*recruited_mg[i]['iden'] recruited_mg[i] = recruited_mg[i][['quid', 'suid', 'iden', 'alen','Coverage','Metric', 'mism', 'gapo', 'qsta', 'qend', 'ssta', 'send', 'eval', 'bits','Seq_size', 'Seq_nt']] #all_records[i].close()# keep open if multiple iterations elapsed=time.time()-start print elapsed # don't close just yet # do after it is not needed for i in range(len(recruited_mg)): all_records[i].close() print len(recruited_mg[0]) print len(recruited_mg[1]) Explanation: Adding "Metric", "Seq_size", "Seq_nt" fields indeces of metagenome sequences and the recruited from blast dataframe are placed in lists of the same size: No of -m (metagenome) sequences End of explanation len(set(recruited_mg[0]['quid'])) len(set(recruited_mg[1]['quid'])) Explanation: Number of unique scaffolds found End of explanation #result = df.sort(['A', 'B'], ascending=[1, 0])\ # wps2_assembled_sort_eval = wps2_assembled.sort_values(by=['eval'], ascending=False) # wps2_assembled_sort_eval Explanation: Evalue sorting End of explanation # eval_cutoff = 1e-20 # wps2_assembled_filtered = wps2_assembled_sort_eval[wps2_assembled_sort_eval['eval']<eval_cutoff] # wps2_assembled_filtered Explanation: Filtering out by evalue cutoff End of explanation # #Doing a # unique_scaffolds = list() # eval_cutoffs = [1e-10, 1e-20, 1e-30, 1e-40, 1e-50, 1e-60, 1e-70, 1e-80, 1e-90, 1e-100, 1e-110, 1e-120, 1e-130, 1e-140, 1e-150] # #for e_val in eval_cutoffs: # for e_val in eval_cutoffs: # wps2_assembled_filtered = wps2_assembled_sort_eval[wps2_assembled_sort_eval['eval']<e_val] # unique_scaffolds.append(len(set(wps2_assembled_filtered['quid']))) # print eval_cutoffs # print unique_scaffolds # import matplotlib # import matplotlib.pyplot as plt # %matplotlib inline # #plt.semilogy(t, np.exp(-t/5.0)) # plt.plot() # plt.semilogx(eval_cutoffs, unique_scaffolds, marker = 'o', basex=10) # plt.title('Number of unique scaffolds as a function of BLAST e-Value') # plt.xticks(eval_cutoffs) # plt.ylabel('No. 
of unique scaffolds') # plt.xlabel('BLAST e-Value') # plt.grid(True) # plt.ylim((0,12000)) # plt.show() Explanation: Efect of evalue on # of unique scaffolds found End of explanation #assembled ranges = range(5, 210, 15) unique_scaffolds = list() eval_ranges = [10**(-x) for x in ranges] assembled_filtered_lst = list() #for e_val in eval_cutoffs: for i in range(len(eval_ranges)): assembled_filtered = recruited_mg[0][recruited_mg[0]['eval']<eval_ranges[i]] assembled_filtered_lst.append(assembled_filtered) #unique_scaffolds.append(len(set(wps2_assembled_filtered['quid']))) unique_scaffolds.append(len(set(assembled_filtered_lst[i]['quid']))) print ranges print eval_ranges print unique_scaffolds import matplotlib import matplotlib.pyplot as plt %matplotlib inline import matplotlib.pylab as pylab params = {'legend.fontsize': 'x-large', 'figure.figsize': (15, 5), 'axes.labelsize': 'x-large', 'axes.titlesize':'x-large', 'xtick.labelsize':'x-large', 'ytick.labelsize':'x-large'} pylab.rcParams.update(params) #plt.semilogy(t, np.exp(-t/5.0)) plt.plot() plt.semilogx(eval_ranges, unique_scaffolds, marker = 'o', basex=10) plt.title('Number of unique assembled scaffolds as a function of BLAST e-Value', fontsize=25) plt.xticks(eval_ranges, fontsize=18) plt.yticks(fontsize=18) plt.ylabel('No. of unique scaffolds', fontsize=20) plt.xlabel('BLAST e-Value', fontsize=20) plt.grid(True) plt.ylim((0,12000)) fig = matplotlib.pyplot.gcf() fig.set_size_inches(18.5, 10.5) fig.savefig('assembled_fn_evalue.pdf', dpi=300) plt.show() Explanation: Doing some stats on the recruited data End of explanation # dataframe_names= [str(x)+"_cutoff_contigs" for x in eval_ranges] # dataframe_names # # cutoff = eval_ranges[3] # # cutoff_contigs = wps2_assembled_filtered_lst[3] # cutoff_contigs={} # # #cutoff_contigs # for i in range(len(eval_ranges)): # cutoff_contigs[dataframe_names[i]]=wps2_assembled_filtered_lst[i] len(cutoff_contigs['1e-50_cutoff_contigs']) print list(cutoff_contigs['1e-50_cutoff_contigs'].columns.values) Explanation: Retrieving sequences from dataframe list End of explanation # from Bio import SeqIO # assembled_contigs = "../IMG Data/76969.assembled.fna" # handle = open(assembled_contigs, "rU") # record_dict = SeqIO.to_dict(SeqIO.parse(handle,"fasta")) # handle.close() # rec = record_dict["Ga0073928_11111377"] # rec Explanation: Adding Seq_nt and Size entries End of explanation # start = time.time() # all_records = parse_contigs_ind(assembled_contigs) # print time.time()-start # all_records.close() # all_records['Ga0073928_10022750'] # all_records=parse_contigs_ind(assembled_contigs) # start = time.time() # #wps2_eval_cutoff['Seq_nt']=retrive_sequence(contig_list, all_records) # for dataframe in cutoff_contigs: # #cutoff_contigs[dataframe]=evalue_filter(cutoff_contigs[dataframe]) # contig_list = cutoff_contigs[dataframe]['quid'].tolist() # cutoff_contigs[dataframe]['Seq_nt']=retrive_sequence(contig_list, all_records) # cutoff_contigs[dataframe]['Seq_size']=cutoff_contigs[dataframe]['Seq_nt'].apply(lambda x: len(x)) # cutoff_contigs[dataframe]['Coverage']=cutoff_contigs[dataframe]['alen'].apply(lambda x: float(x))/cutoff_contigs[dataframe]['Seq_size'] # cutoff_contigs[dataframe]['Metric']=cutoff_contigs[dataframe]['Coverage']*cutoff_contigs[dataframe]['iden'] # cutoff_contigs[dataframe] = cutoff_contigs[dataframe][['quid', 'suid', 'iden', 'alen','Coverage','Metric', 'mism', 'gapo', 'qsta', 'qend', 'ssta', 'send', 'eval', 'bits','Seq_size', 'Seq_nt']] # elapsed=time.time()-start # print elapsed # 
all_records.close() # cutoff_contigs['1e-50_cutoff_contigs'] # cutoff_contigs['1e-50_cutoff_contigs'].loc[7444]['Seq_nt'] # !cutoff_contigs['1e-50_cutoff_contigs'].loc[7444]['Seq_nt'][:30] "../IMG Data/76969.assembled.fna" # #verification by manual searching and extracting the sequence # len(''.join(CCTGTTAGTGCTTCGGCGCCCGATACAATCTCGAGCAGCTCCTTGGTTATCGCGGCTTGA # CGCGCGTTGTTCATAGCGATCGTCAGTTCATCGATCAATTTGCCGGCGTTTTCGGTGGCG # TTGTTCATTGCAAGCAGCTGAGCTGCGTAGAATGACGCATCGGTCTCCAGCATCGCCGAA # AATAGTGTGAACTCCAGATATTTCGGCAAGAGTTTTGAAAGTACGAACTCCGGCGAAGGC # ACGATTTCGACCGCGCCGCGCGGACCCGTGGATGGAGCCGCCGCGGTTTCATGCTCGATC # GGGACAAGCTGCCGTAGCTCGGGTCGCTGCACCATCATCGAGACATGTTTGGACGAAACG # AGGATGATGTCGCCGATCTCTCCGGCGGTGAAATCGGCGGTGACGCTCTGCGCGAGTTCG # TGCGCGGTTTGCAGTTTGGACGGCGCACTCAGCGGCCAGCCCGGCCGATCTCCTAGGCCC # CAGCGGCGCACAGCGTTGCGAGCTTTTATGCCGACGGTGTAAAAACGCGCGTCCGCGTGC # TGACGCCAGAAGCGCTCGGCCTCGCGAATGACGTTGGAGTTGAATGCGCCGGCCAG.split())) # len(cutoff_contigs['1e-50_cutoff_contigs'].loc[7444]['Seq_nt']) recruited_mg[0] # wps2_eval_cutoff['Seq_size']=wps2_eval_cutoff['Seq_nt'].apply(lambda x: len(x)) # wps2_eval_cutoff = wps2_eval_cutoff[['quid', 'suid', 'iden', 'alen', 'mism', 'gapo', 'qsta', 'qend', 'ssta', 'send', 'eval', 'bits','Seq_size', 'Seq_nt']] # wps2_eval_cutoff Explanation: Indexing approach Less memory intensive, close index after it's done: record_dict.close() End of explanation for i in range(len(recruited_mg)): recruited_mg[i]=recruited_mg[i][(recruited_mg[i]['iden']>=iden)&(recruited_mg[i]['Metric']>=metric)&(recruited_mg[i]['eval']<=e_value)] print len(recruited_mg[0]) print len(recruited_mg[1]) Explanation: Filtering Step End of explanation #wps2_eval_cutoff.to_csv(path_or_buf="wps2_eval_cutoff.csv", sep='\t') prefix="iteration1_"+str(version)+"_"+str(e_value)+"_recruited_mg" for i in range(len(recruited_mg)): records= [] outfile1 = prefix+str(i)+".csv" # try: # os.remove(outfile1) # except OSError: # pass recruited_mg[i].to_csv(outfile1, sep='\t') ids = recruited_mg[i]['quid'].tolist() #if len(ids)==len(sequences): for j in range(len(ids)): records.append(all_records[i][ids[j]]) outfile2 = prefix+str(i)+".fasta" # try: # os.remove(outfile2) # except OSError: # pass with open(outfile2, "w") as output_handle: SeqIO.write(records, output_handle, "fasta") #print prefix+dataframe_name Explanation: Batch export to csv and multiple FASTA End of explanation
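A note for readers following this section: the cells above use three helpers that this excerpt never defines, `parse_contigs_ind`, `unique_scaffold_topEval`, and `retrive_sequence` (they presumably come from an earlier cell or an accompanying module). Below is a minimal sketch of what they are assumed to do, using Biopython's `SeqIO.index` for memory-light random access and pandas deduplication; it is an illustration of the expected behaviour, not the notebook's original implementation.

```python
# Illustrative sketches of the assumed helper behaviour (not the original code).
from Bio import SeqIO

def parse_contigs_ind(fasta_file):
    """Lazy, dict-like index of a FASTA file (record id -> SeqRecord), closable with .close()."""
    return SeqIO.index(fasta_file, "fasta")

def unique_scaffold_topEval(df):
    """Keep one BLAST hit per query scaffold ('quid'), the one with the lowest e-value."""
    return df.sort_values(by='eval', ascending=True).drop_duplicates(subset='quid', keep='first')

def retrive_sequence(contig_list, record_index):
    """Return the nucleotide string for every contig id in contig_list."""
    return [str(record_index[contig_id].seq) for contig_id in contig_list]
```

Indexing the FASTA file (rather than loading everything with `SeqIO.to_dict`) keeps only file offsets in memory, which is what makes the "indexing approach" described above practical for large metagenome assemblies.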
Given the following text description, write Python code to implement the functionality described below step by step Description: K-Means Step1: Some of the parameters were don't change in these results, so we can delete them (natural number of clusters, dimensionality and number of iterations). Furthermore, We can delete the rounds column because it becomes useless after averaging the times. Step2: Below is some statistics about the timings for the rounds. The important thing to notice is that there is low variance on the data, which suggests that the results are consistent. Step3: Time analysis This section explores some of the results of the runtimes of the algorithms. Step4: Speedup over NumPy Step5: Speedup over Python
Python Code: # necessary imports %pylab inline import seaborn as sns import pandas as pd # locations of the results results_filename="/home/chiroptera/workspace/QCThesis/CUDA/tests/test1v2/results.csv" #local #results_filename="https://raw.githubusercontent.com/Chiroptera/QCThesis/master/CUDA/tests/test1v2/results.csv" #git repo results = pd.read_csv(results_filename) print "Structure of the results" results.head() N_labels=[1e3,5e3,1e4,5e4,1e5,5e5,1e6,2e6,4e6] K_labels=[5,10,20,30,40,50,100,250,500] Explanation: K-Means: Python vs NumPy vs CUDA Author: Diogo Silva This notebook contains an analysis of the results from executing the different implementations (CUDA, NumPy, Python) of the K-Means algorithm. End of explanation results.drop(['R','NATC','D','iters'], axis=1, inplace=True) results.head() Explanation: Some of the parameters were don't change in these results, so we can delete them (natural number of clusters, dimensionality and number of iterations). Furthermore, We can delete the rounds column because it becomes useless after averaging the times. End of explanation rounds = results.groupby(['type','N','K'],as_index = True) results_mean = rounds.mean() rounds.describe() Explanation: Below is some statistics about the timings for the rounds. The important thing to notice is that there is low variance on the data, which suggests that the results are consistent. End of explanation times = results_mean.loc["cuda"] times['cuda']=times['time'] times['numpy']=results_mean.loc["numpy"] times['python']=results_mean.loc["python"] times['s_cuda_np']=times['numpy']/times['cuda'] times['s_cuda_py']=times['python']/times['cuda'] times['s_np_py']=times['python']/times['numpy'] times a=times.groupby(level='K') #a.get_group(20)['python'].plot(subplots=True,layout=(2,2)) p=a.get_group(20)[['python','numpy','cuda']].plot(title="Time evolution; 20 clusters",logy=True) plt.xticks(range(len(N_labels)),N_labels) plt.xlabel("Cardinality") a.get_group(500)[['python','numpy','cuda']].plot(title="Time evolution; 500 clusters",logy=True) plt.xticks(range(len(N_labels)),N_labels) plt.xlabel("Cardinality") b=times.groupby(level='N') b.get_group(1e5)[['python','numpy','cuda']].plot(title="Time evolution by number of clusters; 1e5 datapoints",logy=True) plt.xticks(range(len(K_labels)),K_labels) plt.xlabel("Number of clusters") b.get_group(1e5)[['numpy','cuda']].plot(title="Time evolution by number of clusters; 1e5 datapoints",logy=True) plt.xticks(range(len(K_labels)),K_labels) plt.xlabel("Number of clusters") b.get_group(4e6)[['numpy','cuda']].plot(title="Time evolution by number of clusters; 4e6 datapoints",logy=True) plt.xticks(range(len(K_labels)),K_labels) plt.xlabel("Number of clusters") Explanation: Time analysis This section explores some of the results of the runtimes of the algorithms. 
End of explanation s_cuda_np = results_mean.loc['numpy'] / results_mean.loc['cuda'] #s_cuda_np['speedup']=s_cuda_np['time'] s_cuda_np.groupby(level=['K']).describe() for key, grp in s_cuda_np.groupby(level=['K']): plt.plot(grp['time'],label=key)#grp.index.levels[0], plt.legend(loc='best') plt.title("Speedup by cardinality") plt.plot([0, 8], [1, 1], 'k-', lw=2) plt.ylabel("Speedup") plt.xlabel("Cardinality") plt.xticks(range(len(N_labels)),N_labels) s_cuda_np.groupby(level=['N']).describe() for key, grp in s_cuda_np.groupby(level=['N']): plt.plot(grp['time'],label=key)#grp.index.levels[0], plt.plot([0, 8], [1, 1], 'k-', lw=2) #slowdown/speedup threshold plt.legend(loc='best') plt.title("Speedup by cardinality") plt.ylabel("Speedup") plt.xlabel("Number of clusters") plt.xticks(range(len(K_labels)),K_labels) Explanation: Speedup over NumPy End of explanation s_cuda_py = results_mean.loc['python'] / results_mean.loc['cuda'] for key, grp in s_cuda_py.groupby(level=['K']): plt.plot(grp['time'],label=key)#grp.index.levels[0], plt.plot([0, 8], [1, 1], 'k-', lw=2) #slowdown/speedup threshold plt.legend(loc='best') plt.title("Speedup by cardinality") plt.ylabel("Speedup") plt.xlabel("Cardinality") plt.xticks(range(len(N_labels)),N_labels) for key, grp in s_cuda_py.groupby(level=['N']): plt.plot(grp['time'],label=key)#grp.index.levels[0], plt.plot([0, 8], [1, 1], 'k-', lw=2) #slowdown/speedup threshold plt.legend(loc='best') plt.title("Speedup by cardinality") plt.ylabel("Speedup") plt.xlabel("Number of clusters") plt.xticks(range(len(K_labels)),K_labels) Explanation: Speedup over Python End of explanation
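As a compact complement to the speedup plots, the same ratios can be summarised numerically. The sketch below assumes the `results_mean` DataFrame built earlier in this notebook (indexed by type, N, K, with a `time` column); the helper name `speedup_summary` is introduced here purely for illustration.

```python
import pandas as pd

def speedup_summary(results_mean, baseline, reference='cuda'):
    # Ratio of baseline time to reference time; values > 1 mean the reference is faster.
    ratio = results_mean.loc[baseline]['time'] / results_mean.loc[reference]['time']
    return ratio.describe()

pd.DataFrame({'vs numpy': speedup_summary(results_mean, 'numpy'),
              'vs python': speedup_summary(results_mean, 'python')})
```

The describe() table gives the mean, spread, and extremes of the speedup across all (N, K) combinations at a glance, which is a handy companion when comparing different test configurations.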
Given the following text description, write Python code to implement the functionality described below step by step Description: IST256 Lesson 07 Files Zybook Ch7 P4E Ch7 Links Participation Step1: A. erry B. berr C. berry D. bey Vote Now Step2: A. iic B. ike C. mic D. iso Vote Now Step3: A. tony B. tiny C. tinyt D. tonyt Vote Now Step4: Writing a To File Step5: Watch Me Code 1 Let’s Write two programs. Save a text message to a file. Retrieve the text message from the file. Check Yourself Step6: A. 1 B. 2 C. 3 D. 4 Vote Now Step7: A. 1 B. 2 C. 3 D. 4 Vote Now Step8: End-To-End Example (Pre-Recorded) How Many Calories in that Beer? Let's write a program to search a data file of 254 popular beers. Given the name of the beer the program will return the number of calories. Watch this here
Python Code: x = input() if x.find("rr")!= -1: y = x[1:] else: y = x[:-1] print(y) Explanation: IST256 Lesson 07 Files Zybook Ch7 P4E Ch7 Links Participation: https://poll.ist256.com Zoom Chat! Agenda Go Over Homework H06 New Stuff The importance of a persistence layer in programming. How to read and write from files. Techniques for reading a file a line at a time. Using exception handling with files. FEQT (Future Exam Questions Training) 1 What is the output of the following code when berry is input on line 1? End of explanation x = input() y = x.split() w = "" for z in y: w = w + z[1] print(w) Explanation: A. erry B. berr C. berry D. bey Vote Now: https://poll.ist256.com FEQT (Future Exam Questions Training) 2 What is the output of the following code when mike is cold is input on line 1? End of explanation x = input() x = x + x x = x.replace("o","i") x = x[:5] print(x) Explanation: A. iic B. ike C. mic D. iso Vote Now: https://poll.ist256.com FEQT (Future Exam Questions Training) 3 What is the output of the following code when tony is input on line 1? End of explanation # all at once with open(filename, 'r') as handle: contents = handle.read() # a line at a time with open(filename, 'r') as handle: for line in handle.readlines(): do_something_with_line Explanation: A. tony B. tiny C. tinyt D. tonyt Vote Now: https://poll.ist256.com Connect Activity Which of the following is not an example of secondary (persistent) memory? A. Flash Memory B. Hard Disk Drive (HDD) C. Random-Access Memory (RAM) D. Solid State Disk (SSD) Vote Now: https://poll.ist256.com Files == Persistence Files add a Persistence Layer to our computing environment where we can store our data after the program completes. Think: Saving a game's progress or saving your work! When our program Stores data, we open the file for writing. When our program Reads data, we open the file for reading. To read or write a file we must first open it, which gives us a special variable called a file handle. We then use the file handle to read or write from the file. The read() function reads from the write() function writes to the file through the file handle. Reading From a File Two approaches... that's it! End of explanation # write mode with open(filename, 'w') as handle: handle.write(something) # append mode with open(filename, 'a') as handle: handle.write(something) Explanation: Writing a To File End of explanation a = "savename.txt" with open(a,'w') as b: c = input("Enter your name: ") b.write(c) Explanation: Watch Me Code 1 Let’s Write two programs. Save a text message to a file. Retrieve the text message from the file. Check Yourself: Which line 1 Which line number creates the file handle? End of explanation with open("sample.txt","r") as f: for line in f.readlines(): print(line) g = "done" Explanation: A. 1 B. 2 C. 3 D. 4 Vote Now: https://poll.ist256.com Watch Me Code 2 Common patterns for reading and writing more than one item to a file. - Input a series of grades, write them to a file one line at a time. - Read in that file one line at a time, print average. Check Yourself: Which line 2 On which line number does the file handle no longer exist? End of explanation try: file = 'data.txt' with open(file,'r') as f: print( f.read() ) except FileNotFoundError: print(f"{file} was not found!") Explanation: A. 1 B. 2 C. 3 D. 4 Vote Now: https://poll.ist256.com Your Operating System and You Files are stored in your secondary memory in folders. When the python program is in the same folder as the file, no path is required. 
When the file is in a different folder, a path is required. Absolute paths point to a file starting at the root of the hard disk. Relative paths point to a file starting at the current place on the hard disk. Python Path Examples <table style="font-size:1.0em;"> <thead><tr> <th>What</th> <th>Windows</th> <th>Mac/Linux</th> </tr></thead> <tbody> <tr> <td><code> File in current folder </code></td> <td> "file.txt" </td> <td> "file.txt"</td> </tr> <tr> <td><code> File up one folder from the current folder </code></td> <td> "../file.txt"</td> <td> "../file.txt"</td> </tr> <tr> <td><code> File in a folder from the current folder </code></td> <td> "folder1/file.txt" </td> <td> "folder1/file.txt" </td> </tr> <tr> <td><code> Absolute path to file in a folder</code></td> <td> "C:/folder1/file.txt" </td> <td> "/folder1/file.txt"</td> </tr> </tbody> </table> Check Yourself: Path - Is this path relative or absolute? "/path/to/folder/file.txt" A. Relative B. Absolute C. Neither D. Not sure Vote Now: https://poll.ist256.com Handling Errors with Try…Except I/O is the ideal use case for exception handling. Don't assume you can read a file! Use try… except! End of explanation file = "a.txt" with open(file,'w'): file.write("Hello") Explanation: End-To-End Example (Pre-Recorded) How Many Calories in that Beer? Let's write a program to search a data file of 254 popular beers. Given the name of the beer the program will return the number of calories. Watch this here: https://youtu.be/s-1ToO0dJIs End-To-End Example A Better Spell Check In this example, we create a better spell checker than the one from small group. read words from a file read text to check from a file. Conclusion Activity : One Question Challenge What is wrong with the following code: End of explanation
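For reference, the answer to the closing one-question challenge: the conclusion-activity snippet never binds the opened file to a name (there is no `as handle` in the `with` statement), and it then calls `write()` on the string variable `file` rather than on a file handle, which raises an AttributeError. A corrected version, offered here as one possible answer rather than as part of the original slides, looks like this:

```python
file = "a.txt"
with open(file, 'w') as handle:   # bind the handle with 'as'
    handle.write("Hello")         # write through the handle, not the filename string
```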
Given the following text description, write Python code to implement the functionality described below step by step Description: Plotting a line chart in matplotlib Step1: Plotting a line chart from a Pandas object Step2: Creating bar charts Step3: Creating a pie chart Step4: Defining elements of a plot Defining axes, ticks, and grids Step5: Plot Formatting Defining plot color, customizing line styles, setting marker styles Step6: Note Step7: Create labels and annotations labeling plot features, adding a legend, annotating plot Step8: Time series visualizations Step9: Histograms, box plots, and scatter plots
Python Code: x=range(1,10) y=[1,2,3,4,0,4,3,2,1] plt.plot(x,y) Explanation: Plotting a line chart in matplotlib End of explanation # address = some data set # cars = pd.read_csv(address) # cars.columns = ['car_names','mpg','cyl','disp','hp','drat','wt','qsec','vs','am',gear',carb'] #mpg = cars['mpg'] #mpg.plot() Explanation: Plotting a line chart from a Pandas object End of explanation plt.bar(x,y) #Creating bar chart from pandas object #mpg.plot(kind='bar') Explanation: Creating bar charts: End of explanation x=[1,2,3,4,0.5] plt.pie(x) plt.show() #plt.savefig('pie_chart.jpeg') Saves plot as jpeg in wd #plt.show() Explanation: Creating a pie chart End of explanation x=range(1,10) y=[1,2,3,4,0,4,3,2,1] fig=plt.figure() ax = fig.add_axes([.1, .1,1,1]) ax.plot(x,y) fig = plt.figure() ax = fig.add_axes([.1, .1,1,1]) ax.set_xlim([1,9]) ax.set_ylim([0,5]) ax.set_xticks([0,1,2,4,5,6,8,9,10]) ax.set_yticks([0,1,2,3,4,5]) ax.plot(x,y) fig = plt.figure() ax = fig.add_axes([.1, .1,1,1]) ax.set_xlim([1,9]) ax.set_ylim([0,5]) ax.grid() #seems like this takes away grid? different than tutorial ax.plot(x,y) fig = plt.figure() fig, (ax1,ax2)=plt.subplots(1,2) ax1.plot(x) ax2.plot(x,y) Explanation: Defining elements of a plot Defining axes, ticks, and grids End of explanation x = range(1,10) y= [1,2,3,4,0.5,4,3,2,1] plt.bar(x,y) wide=[0.5,0.5,0.5,0.9,0.9,0.5,0.5,0.9,0.9] color = ['salmon'] plt.bar(x, y, width=wide, color=color, align='center') Explanation: Plot Formatting Defining plot color, customizing line styles, setting marker styles End of explanation z = [1,2,3,4,0.5] color_theme = ['#A9A9A9', '#FFA07A', '#B0E0E6','#FFE4CA','#BDB76B'] #hex codes plt.pie(z, colors = color_theme) plt.show() #line styles x1= range(0,10) y1=[10,9,8,7,6,5,4,3,2,1] plt.plot(x,y,ls = 'steps', lw = 5) plt.plot(x1,y1, ls = '--', lw = 10) #plot markers plt.plot(x,y,marker = '1', mew=20) plt.plot(x1,y1, marker = '+', mew=15) Explanation: Note: with pandas objects color_theme = ['darkgray','lightsalmon','powderblue'] df.plot(color=color_theme) End of explanation #functional method x= range(1,10) y=[1,2,3,4,0.5,4,3,2,1] plt.bar(x,y) plt.xlabel('your x-axis label') plt.ylabel('your y-axis label') z= [1,2,3,4,0.5] veh_type = ['bicycle', 'motorbike','car','van','stroller'] plt.pie(z, labels= veh_type) plt.show() #object oriented has ways of doing this too #uses cars dataset #didn't copy down everything #fig = plt.figure() #ax = fig.add_axes([]) #mpg.plot() #ax.set_xticks(range(32)) #ax.set_xticklabels() #ax.set_title('Title goes here') #ax.set_xlabel('car names') #ax.set_ylabel('miles/gal') #ax.legend(loc='best') #add legend plt.pie(z) plt.legend(veh_type, loc='best') plt.show() #annotate #object oriented: ax.annotate('Toyota Corolla', xy=(19,33,9), xytext=(21,35), #arrowprops = dict(facecolor='black',shrink=0.05)) Explanation: Create labels and annotations labeling plot features, adding a legend, annotating plot End of explanation #address = 'address' #df = pd.read_csv(address, index_col='Order Date', parse_dates=True) #df.head() #df2 = df.sample(n=100, random_state=25, axis=0) #plt.xlabel('Order Date') #plt.ylabel('Order Quantity') #plt.title('Superstore Sales') #df2['Order Quantity'].plot() Explanation: Time series visualizations End of explanation from pandas.tools.plotting import scatter_matrix #pandas dataset import #mpg.plot(kind='hist') # or plt.hist(mpg) with plt.show() #with seaborn #sb.distplot(mpg) #scatterplots #cars.plot(kind='scatter',x='hp',y='mpg',c=['darkgray'], s=150) #sb.regplot(x='hp', y='mpg', data=cars, 
scatter=True) #seaborn automatically creates trend line #sb.pairplot(cars) #get subset of data using dataframes: #cars_df = pd.DataFrame((cars.ix[:,(1,3,4,6)].values), columns = ['mpg', 'disp','hp','wt']) #boxplots #cars.boxplot(column='mpg', by='am') #in seaborn #sb.boxplot(x='am', y='mpg', data=cars, palette='hls') Explanation: Histograms, box plots, and scatter plots End of explanation
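The histogram, box-plot, and scatter examples above are left commented out because they depend on the `cars` dataset, which is never loaded in this notebook. The short, self-contained sketch below reproduces the same three plot types on synthetic data; the `demo` frame and its columns are invented purely for illustration.

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Synthetic stand-in for the missing 'cars' data: horsepower vs. fuel economy.
rng = np.random.RandomState(0)
demo = pd.DataFrame({'hp': rng.uniform(50, 250, 100)})
demo['mpg'] = 45 - 0.1 * demo['hp'] + rng.normal(0, 2, 100)

fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(12, 3))
demo['mpg'].plot(kind='hist', ax=ax1, title='Histogram')             # distribution of mpg
demo.boxplot(column='mpg', ax=ax2)                                   # box plot of mpg
demo.plot(kind='scatter', x='hp', y='mpg', ax=ax3, title='Scatter')  # hp vs mpg
plt.show()
```

Note also that in current pandas releases `scatter_matrix` is imported from `pandas.plotting`; the `pandas.tools.plotting` path used above only works in older versions.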
Given the following text description, write Python code to implement the functionality described below step by step Description: ES-DOC CMIP6 Model Properties - Atmos MIP Era Step1: Document Authors Set document authors Step2: Document Contributors Specify document contributors Step3: Document Publication Specify document publication status Step4: Document Table of Contents 1. Key Properties --&gt; Overview 2. Key Properties --&gt; Resolution 3. Key Properties --&gt; Timestepping 4. Key Properties --&gt; Orography 5. Grid --&gt; Discretisation 6. Grid --&gt; Discretisation --&gt; Horizontal 7. Grid --&gt; Discretisation --&gt; Vertical 8. Dynamical Core 9. Dynamical Core --&gt; Top Boundary 10. Dynamical Core --&gt; Lateral Boundary 11. Dynamical Core --&gt; Diffusion Horizontal 12. Dynamical Core --&gt; Advection Tracers 13. Dynamical Core --&gt; Advection Momentum 14. Radiation 15. Radiation --&gt; Shortwave Radiation 16. Radiation --&gt; Shortwave GHG 17. Radiation --&gt; Shortwave Cloud Ice 18. Radiation --&gt; Shortwave Cloud Liquid 19. Radiation --&gt; Shortwave Cloud Inhomogeneity 20. Radiation --&gt; Shortwave Aerosols 21. Radiation --&gt; Shortwave Gases 22. Radiation --&gt; Longwave Radiation 23. Radiation --&gt; Longwave GHG 24. Radiation --&gt; Longwave Cloud Ice 25. Radiation --&gt; Longwave Cloud Liquid 26. Radiation --&gt; Longwave Cloud Inhomogeneity 27. Radiation --&gt; Longwave Aerosols 28. Radiation --&gt; Longwave Gases 29. Turbulence Convection 30. Turbulence Convection --&gt; Boundary Layer Turbulence 31. Turbulence Convection --&gt; Deep Convection 32. Turbulence Convection --&gt; Shallow Convection 33. Microphysics Precipitation 34. Microphysics Precipitation --&gt; Large Scale Precipitation 35. Microphysics Precipitation --&gt; Large Scale Cloud Microphysics 36. Cloud Scheme 37. Cloud Scheme --&gt; Optical Cloud Properties 38. Cloud Scheme --&gt; Sub Grid Scale Water Distribution 39. Cloud Scheme --&gt; Sub Grid Scale Ice Distribution 40. Observation Simulation 41. Observation Simulation --&gt; Isscp Attributes 42. Observation Simulation --&gt; Cosp Attributes 43. Observation Simulation --&gt; Radar Inputs 44. Observation Simulation --&gt; Lidar Inputs 45. Gravity Waves 46. Gravity Waves --&gt; Orographic Gravity Waves 47. Gravity Waves --&gt; Non Orographic Gravity Waves 48. Solar 49. Solar --&gt; Solar Pathways 50. Solar --&gt; Solar Constant 51. Solar --&gt; Orbital Parameters 52. Solar --&gt; Insolation Ozone 53. Volcanos 54. Volcanos --&gt; Volcanoes Treatment 1. Key Properties --&gt; Overview Top level key properties 1.1. Model Overview Is Required Step5: 1.2. Model Name Is Required Step6: 1.3. Model Family Is Required Step7: 1.4. Basic Approximations Is Required Step8: 2. Key Properties --&gt; Resolution Characteristics of the model resolution 2.1. Horizontal Resolution Name Is Required Step9: 2.2. Canonical Horizontal Resolution Is Required Step10: 2.3. Range Horizontal Resolution Is Required Step11: 2.4. Number Of Vertical Levels Is Required Step12: 2.5. High Top Is Required Step13: 3. Key Properties --&gt; Timestepping Characteristics of the atmosphere model time stepping 3.1. Timestep Dynamics Is Required Step14: 3.2. Timestep Shortwave Radiative Transfer Is Required Step15: 3.3. Timestep Longwave Radiative Transfer Is Required Step16: 4. Key Properties --&gt; Orography Characteristics of the model orography 4.1. Type Is Required Step17: 4.2. Changes Is Required Step18: 5. Grid --&gt; Discretisation Atmosphere grid discretisation 5.1. 
Overview Is Required Step19: 6. Grid --&gt; Discretisation --&gt; Horizontal Atmosphere discretisation in the horizontal 6.1. Scheme Type Is Required Step20: 6.2. Scheme Method Is Required Step21: 6.3. Scheme Order Is Required Step22: 6.4. Horizontal Pole Is Required Step23: 6.5. Grid Type Is Required Step24: 7. Grid --&gt; Discretisation --&gt; Vertical Atmosphere discretisation in the vertical 7.1. Coordinate Type Is Required Step25: 8. Dynamical Core Characteristics of the dynamical core 8.1. Overview Is Required Step26: 8.2. Name Is Required Step27: 8.3. Timestepping Type Is Required Step28: 8.4. Prognostic Variables Is Required Step29: 9. Dynamical Core --&gt; Top Boundary Type of boundary layer at the top of the model 9.1. Top Boundary Condition Is Required Step30: 9.2. Top Heat Is Required Step31: 9.3. Top Wind Is Required Step32: 10. Dynamical Core --&gt; Lateral Boundary Type of lateral boundary condition (if the model is a regional model) 10.1. Condition Is Required Step33: 11. Dynamical Core --&gt; Diffusion Horizontal Horizontal diffusion scheme 11.1. Scheme Name Is Required Step34: 11.2. Scheme Method Is Required Step35: 12. Dynamical Core --&gt; Advection Tracers Tracer advection scheme 12.1. Scheme Name Is Required Step36: 12.2. Scheme Characteristics Is Required Step37: 12.3. Conserved Quantities Is Required Step38: 12.4. Conservation Method Is Required Step39: 13. Dynamical Core --&gt; Advection Momentum Momentum advection scheme 13.1. Scheme Name Is Required Step40: 13.2. Scheme Characteristics Is Required Step41: 13.3. Scheme Staggering Type Is Required Step42: 13.4. Conserved Quantities Is Required Step43: 13.5. Conservation Method Is Required Step44: 14. Radiation Characteristics of the atmosphere radiation process 14.1. Aerosols Is Required Step45: 15. Radiation --&gt; Shortwave Radiation Properties of the shortwave radiation scheme 15.1. Overview Is Required Step46: 15.2. Name Is Required Step47: 15.3. Spectral Integration Is Required Step48: 15.4. Transport Calculation Is Required Step49: 15.5. Spectral Intervals Is Required Step50: 16. Radiation --&gt; Shortwave GHG Representation of greenhouse gases in the shortwave radiation scheme 16.1. Greenhouse Gas Complexity Is Required Step51: 16.2. ODS Is Required Step52: 16.3. Other Flourinated Gases Is Required Step53: 17. Radiation --&gt; Shortwave Cloud Ice Shortwave radiative properties of ice crystals in clouds 17.1. General Interactions Is Required Step54: 17.2. Physical Representation Is Required Step55: 17.3. Optical Methods Is Required Step56: 18. Radiation --&gt; Shortwave Cloud Liquid Shortwave radiative properties of liquid droplets in clouds 18.1. General Interactions Is Required Step57: 18.2. Physical Representation Is Required Step58: 18.3. Optical Methods Is Required Step59: 19. Radiation --&gt; Shortwave Cloud Inhomogeneity Cloud inhomogeneity in the shortwave radiation scheme 19.1. Cloud Inhomogeneity Is Required Step60: 20. Radiation --&gt; Shortwave Aerosols Shortwave radiative properties of aerosols 20.1. General Interactions Is Required Step61: 20.2. Physical Representation Is Required Step62: 20.3. Optical Methods Is Required Step63: 21. Radiation --&gt; Shortwave Gases Shortwave radiative properties of gases 21.1. General Interactions Is Required Step64: 22. Radiation --&gt; Longwave Radiation Properties of the longwave radiation scheme 22.1. Overview Is Required Step65: 22.2. Name Is Required Step66: 22.3. Spectral Integration Is Required Step67: 22.4. 
Transport Calculation Is Required Step68: 22.5. Spectral Intervals Is Required Step69: 23. Radiation --&gt; Longwave GHG Representation of greenhouse gases in the longwave radiation scheme 23.1. Greenhouse Gas Complexity Is Required Step70: 23.2. ODS Is Required Step71: 23.3. Other Flourinated Gases Is Required Step72: 24. Radiation --&gt; Longwave Cloud Ice Longwave radiative properties of ice crystals in clouds 24.1. General Interactions Is Required Step73: 24.2. Physical Reprenstation Is Required Step74: 24.3. Optical Methods Is Required Step75: 25. Radiation --&gt; Longwave Cloud Liquid Longwave radiative properties of liquid droplets in clouds 25.1. General Interactions Is Required Step76: 25.2. Physical Representation Is Required Step77: 25.3. Optical Methods Is Required Step78: 26. Radiation --&gt; Longwave Cloud Inhomogeneity Cloud inhomogeneity in the longwave radiation scheme 26.1. Cloud Inhomogeneity Is Required Step79: 27. Radiation --&gt; Longwave Aerosols Longwave radiative properties of aerosols 27.1. General Interactions Is Required Step80: 27.2. Physical Representation Is Required Step81: 27.3. Optical Methods Is Required Step82: 28. Radiation --&gt; Longwave Gases Longwave radiative properties of gases 28.1. General Interactions Is Required Step83: 29. Turbulence Convection Atmosphere Convective Turbulence and Clouds 29.1. Overview Is Required Step84: 30. Turbulence Convection --&gt; Boundary Layer Turbulence Properties of the boundary layer turbulence scheme 30.1. Scheme Name Is Required Step85: 30.2. Scheme Type Is Required Step86: 30.3. Closure Order Is Required Step87: 30.4. Counter Gradient Is Required Step88: 31. Turbulence Convection --&gt; Deep Convection Properties of the deep convection scheme 31.1. Scheme Name Is Required Step89: 31.2. Scheme Type Is Required Step90: 31.3. Scheme Method Is Required Step91: 31.4. Processes Is Required Step92: 31.5. Microphysics Is Required Step93: 32. Turbulence Convection --&gt; Shallow Convection Properties of the shallow convection scheme 32.1. Scheme Name Is Required Step94: 32.2. Scheme Type Is Required Step95: 32.3. Scheme Method Is Required Step96: 32.4. Processes Is Required Step97: 32.5. Microphysics Is Required Step98: 33. Microphysics Precipitation Large Scale Cloud Microphysics and Precipitation 33.1. Overview Is Required Step99: 34. Microphysics Precipitation --&gt; Large Scale Precipitation Properties of the large scale precipitation scheme 34.1. Scheme Name Is Required Step100: 34.2. Hydrometeors Is Required Step101: 35. Microphysics Precipitation --&gt; Large Scale Cloud Microphysics Properties of the large scale cloud microphysics scheme 35.1. Scheme Name Is Required Step102: 35.2. Processes Is Required Step103: 36. Cloud Scheme Characteristics of the cloud scheme 36.1. Overview Is Required Step104: 36.2. Name Is Required Step105: 36.3. Atmos Coupling Is Required Step106: 36.4. Uses Separate Treatment Is Required Step107: 36.5. Processes Is Required Step108: 36.6. Prognostic Scheme Is Required Step109: 36.7. Diagnostic Scheme Is Required Step110: 36.8. Prognostic Variables Is Required Step111: 37. Cloud Scheme --&gt; Optical Cloud Properties Optical cloud properties 37.1. Cloud Overlap Method Is Required Step112: 37.2. Cloud Inhomogeneity Is Required Step113: 38. Cloud Scheme --&gt; Sub Grid Scale Water Distribution Sub-grid scale water distribution 38.1. Type Is Required Step114: 38.2. Function Name Is Required Step115: 38.3. Function Order Is Required Step116: 38.4. Convection Coupling Is Required Step117: 39. 
Cloud Scheme --&gt; Sub Grid Scale Ice Distribution Sub-grid scale ice distribution 39.1. Type Is Required Step118: 39.2. Function Name Is Required Step119: 39.3. Function Order Is Required Step120: 39.4. Convection Coupling Is Required Step121: 40. Observation Simulation Characteristics of observation simulation 40.1. Overview Is Required Step122: 41. Observation Simulation --&gt; Isscp Attributes ISSCP Characteristics 41.1. Top Height Estimation Method Is Required Step123: 41.2. Top Height Direction Is Required Step124: 42. Observation Simulation --&gt; Cosp Attributes CFMIP Observational Simulator Package attributes 42.1. Run Configuration Is Required Step125: 42.2. Number Of Grid Points Is Required Step126: 42.3. Number Of Sub Columns Is Required Step127: 42.4. Number Of Levels Is Required Step128: 43. Observation Simulation --&gt; Radar Inputs Characteristics of the cloud radar simulator 43.1. Frequency Is Required Step129: 43.2. Type Is Required Step130: 43.3. Gas Absorption Is Required Step131: 43.4. Effective Radius Is Required Step132: 44. Observation Simulation --&gt; Lidar Inputs Characteristics of the cloud lidar simulator 44.1. Ice Types Is Required Step133: 44.2. Overlap Is Required Step134: 45. Gravity Waves Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources. 45.1. Overview Is Required Step135: 45.2. Sponge Layer Is Required Step136: 45.3. Background Is Required Step137: 45.4. Subgrid Scale Orography Is Required Step138: 46. Gravity Waves --&gt; Orographic Gravity Waves Gravity waves generated due to the presence of orography 46.1. Name Is Required Step139: 46.2. Source Mechanisms Is Required Step140: 46.3. Calculation Method Is Required Step141: 46.4. Propagation Scheme Is Required Step142: 46.5. Dissipation Scheme Is Required Step143: 47. Gravity Waves --&gt; Non Orographic Gravity Waves Gravity waves generated by non-orographic processes. 47.1. Name Is Required Step144: 47.2. Source Mechanisms Is Required Step145: 47.3. Calculation Method Is Required Step146: 47.4. Propagation Scheme Is Required Step147: 47.5. Dissipation Scheme Is Required Step148: 48. Solar Top of atmosphere solar insolation characteristics 48.1. Overview Is Required Step149: 49. Solar --&gt; Solar Pathways Pathways for solar forcing of the atmosphere 49.1. Pathways Is Required Step150: 50. Solar --&gt; Solar Constant Solar constant and top of atmosphere insolation characteristics 50.1. Type Is Required Step151: 50.2. Fixed Value Is Required Step152: 50.3. Transient Characteristics Is Required Step153: 51. Solar --&gt; Orbital Parameters Orbital parameters and top of atmosphere insolation characteristics 51.1. Type Is Required Step154: 51.2. Fixed Reference Date Is Required Step155: 51.3. Transient Method Is Required Step156: 51.4. Computation Method Is Required Step157: 52. Solar --&gt; Insolation Ozone Impact of solar insolation on stratospheric ozone 52.1. Solar Ozone Impact Is Required Step158: 53. Volcanos Characteristics of the implementation of volcanoes 53.1. Overview Is Required Step159: 54. Volcanos --&gt; Volcanoes Treatment Treatment of volcanoes in the atmosphere 54.1. Volcanoes Implementation Is Required
Python Code: # DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'snu', 'sam0-unicon', 'atmos') Explanation: ES-DOC CMIP6 Model Properties - Atmos MIP Era: CMIP6 Institute: SNU Source ID: SAM0-UNICON Topic: Atmos Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos. Properties: 156 (127 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:54:38 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) Explanation: Document Authors Set document authors End of explanation # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) Explanation: Document Contributors Specify document contributors End of explanation # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) Explanation: Document Publication Specify document publication status End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.overview.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: Document Table of Contents 1. Key Properties --&gt; Overview 2. Key Properties --&gt; Resolution 3. Key Properties --&gt; Timestepping 4. Key Properties --&gt; Orography 5. Grid --&gt; Discretisation 6. Grid --&gt; Discretisation --&gt; Horizontal 7. Grid --&gt; Discretisation --&gt; Vertical 8. Dynamical Core 9. Dynamical Core --&gt; Top Boundary 10. Dynamical Core --&gt; Lateral Boundary 11. Dynamical Core --&gt; Diffusion Horizontal 12. Dynamical Core --&gt; Advection Tracers 13. Dynamical Core --&gt; Advection Momentum 14. Radiation 15. Radiation --&gt; Shortwave Radiation 16. Radiation --&gt; Shortwave GHG 17. Radiation --&gt; Shortwave Cloud Ice 18. Radiation --&gt; Shortwave Cloud Liquid 19. Radiation --&gt; Shortwave Cloud Inhomogeneity 20. Radiation --&gt; Shortwave Aerosols 21. Radiation --&gt; Shortwave Gases 22. Radiation --&gt; Longwave Radiation 23. Radiation --&gt; Longwave GHG 24. Radiation --&gt; Longwave Cloud Ice 25. Radiation --&gt; Longwave Cloud Liquid 26. Radiation --&gt; Longwave Cloud Inhomogeneity 27. Radiation --&gt; Longwave Aerosols 28. Radiation --&gt; Longwave Gases 29. Turbulence Convection 30. Turbulence Convection --&gt; Boundary Layer Turbulence 31. Turbulence Convection --&gt; Deep Convection 32. Turbulence Convection --&gt; Shallow Convection 33. Microphysics Precipitation 34. Microphysics Precipitation --&gt; Large Scale Precipitation 35. Microphysics Precipitation --&gt; Large Scale Cloud Microphysics 36. Cloud Scheme 37. Cloud Scheme --&gt; Optical Cloud Properties 38. Cloud Scheme --&gt; Sub Grid Scale Water Distribution 39. Cloud Scheme --&gt; Sub Grid Scale Ice Distribution 40. Observation Simulation 41. Observation Simulation --&gt; Isscp Attributes 42. Observation Simulation --&gt; Cosp Attributes 43. Observation Simulation --&gt; Radar Inputs 44. Observation Simulation --&gt; Lidar Inputs 45. Gravity Waves 46. Gravity Waves --&gt; Orographic Gravity Waves 47. Gravity Waves --&gt; Non Orographic Gravity Waves 48. Solar 49. Solar --&gt; Solar Pathways 50. Solar --&gt; Solar Constant 51. Solar --&gt; Orbital Parameters 52. Solar --&gt; Insolation Ozone 53. Volcanos 54. 
Volcanos --&gt; Volcanoes Treatment 1. Key Properties --&gt; Overview Top level key properties 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of atmosphere model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.overview.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.overview.model_family') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "AGCM" # "ARCM" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.3. Model Family Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of atmospheric model. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "primitive equations" # "non-hydrostatic" # "anelastic" # "Boussinesq" # "hydrostatic" # "quasi-hydrostatic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.4. Basic Approximations Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Basic approximations made in the atmosphere. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2. Key Properties --&gt; Resolution Characteristics of the model resolution 2.1. Horizontal Resolution Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2.2. Canonical Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2.3. Range Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Range of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 2.4. Number Of Vertical Levels Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of vertical levels resolved on the computational grid. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 2.5. High Top Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3. Key Properties --&gt; Timestepping Characteristics of the atmosphere model time stepping 3.1. Timestep Dynamics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Timestep for the dynamics, e.g. 30 min. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.2. Timestep Shortwave Radiative Transfer Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for the shortwave radiative transfer, e.g. 1.5 hours. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.3. Timestep Longwave Radiative Transfer Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for the longwave radiative transfer, e.g. 3 hours. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.orography.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "present day" # "modified" # TODO - please enter value(s) Explanation: 4. Key Properties --&gt; Orography Characteristics of the model orography 4.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time adaptation of the orography. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.orography.changes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "related to ice sheets" # "related to tectonics" # "modified mean" # "modified variance if taken into account in model (cf gravity waves)" # TODO - please enter value(s) Explanation: 4.2. Changes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N If the orography type is modified describe the time adaptation changes. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5. Grid --&gt; Discretisation Atmosphere grid discretisation 5.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of grid discretisation in the atmosphere End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "spectral" # "fixed grid" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 6. 
Grid --&gt; Discretisation --&gt; Horizontal Atmosphere discretisation in the horizontal 6.1. Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal discretisation type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "finite elements" # "finite volumes" # "finite difference" # "centered finite difference" # TODO - please enter value(s) Explanation: 6.2. Scheme Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal discretisation method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "second" # "third" # "fourth" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 6.3. Scheme Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal discretisation function order End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "filter" # "pole rotation" # "artificial island" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 6.4. Horizontal Pole Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Horizontal discretisation pole singularity treatment End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Gaussian" # "Latitude-Longitude" # "Cubed-Sphere" # "Icosahedral" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 6.5. Grid Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal grid type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "isobaric" # "sigma" # "hybrid sigma-pressure" # "hybrid pressure" # "vertically lagrangian" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 7. Grid --&gt; Discretisation --&gt; Vertical Atmosphere discretisation in the vertical 7.1. Coordinate Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Type of vertical coordinate system End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8. Dynamical Core Characteristics of the dynamical core 8.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of atmosphere dynamical core End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.2. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the dynamical core of the model. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Adams-Bashforth" # "explicit" # "implicit" # "semi-implicit" # "leap frog" # "multi-step" # "Runge Kutta fifth order" # "Runge Kutta second order" # "Runge Kutta third order" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 8.3. Timestepping Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Timestepping framework type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "surface pressure" # "wind components" # "divergence/curl" # "temperature" # "potential temperature" # "total water" # "water vapour" # "water liquid" # "water ice" # "total water moments" # "clouds" # "radiation" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 8.4. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of the model prognostic variables End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "sponge layer" # "radiation boundary condition" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 9. Dynamical Core --&gt; Top Boundary Type of boundary layer at the top of the model 9.1. Top Boundary Condition Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Top boundary condition End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.2. Top Heat Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Top boundary heat treatment End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.3. Top Wind Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Top boundary wind treatment End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "sponge layer" # "radiation boundary condition" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 10. Dynamical Core --&gt; Lateral Boundary Type of lateral boundary condition (if the model is a regional model) 10.1. Condition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Type of lateral boundary condition End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11. Dynamical Core --&gt; Diffusion Horizontal Horizontal diffusion scheme 11.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Horizontal diffusion scheme name End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "iterated Laplacian" # "bi-harmonic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11.2. Scheme Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal diffusion scheme method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Heun" # "Roe and VanLeer" # "Roe and Superbee" # "Prather" # "UTOPIA" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 12. Dynamical Core --&gt; Advection Tracers Tracer advection scheme 12.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Tracer advection scheme name End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Eulerian" # "modified Euler" # "Lagrangian" # "semi-Lagrangian" # "cubic semi-Lagrangian" # "quintic semi-Lagrangian" # "mass-conserving" # "finite volume" # "flux-corrected" # "linear" # "quadratic" # "quartic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 12.2. Scheme Characteristics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Tracer advection scheme characteristics End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "dry mass" # "tracer mass" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 12.3. Conserved Quantities Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Tracer advection scheme conserved quantities End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "conservation fixer" # "Priestley algorithm" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 12.4. Conservation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Tracer advection scheme conservation method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "VanLeer" # "Janjic" # "SUPG (Streamline Upwind Petrov-Galerkin)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13. Dynamical Core --&gt; Advection Momentum Momentum advection scheme 13.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Momentum advection schemes name End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "2nd order" # "4th order" # "cell-centred" # "staggered grid" # "semi-staggered grid" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.2. 
Scheme Characteristics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Momentum advection scheme characteristics End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Arakawa B-grid" # "Arakawa C-grid" # "Arakawa D-grid" # "Arakawa E-grid" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.3. Scheme Staggering Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Momentum advection scheme staggering type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Angular momentum" # "Horizontal momentum" # "Enstrophy" # "Mass" # "Total energy" # "Vorticity" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.4. Conserved Quantities Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Momentum advection scheme conserved quantities End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "conservation fixer" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.5. Conservation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Momentum advection scheme conservation method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.aerosols') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "sulphate" # "nitrate" # "sea salt" # "dust" # "ice" # "organic" # "BC (black carbon / soot)" # "SOA (secondary organic aerosols)" # "POM (particulate organic matter)" # "polar stratospheric ice" # "NAT (nitric acid trihydrate)" # "NAD (nitric acid dihydrate)" # "STS (supercooled ternary solution aerosol particle)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14. Radiation Characteristics of the atmosphere radiation process 14.1. Aerosols Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Aerosols whose radiative effect is taken into account in the atmosphere model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15. Radiation --&gt; Shortwave Radiation Properties of the shortwave radiation scheme 15.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of shortwave radiation in the atmosphere End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15.2. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the shortwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "wide-band model" # "correlated-k" # "exponential sum fitting" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.3. Spectral Integration Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Shortwave radiation scheme spectral integration End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "two-stream" # "layer interaction" # "bulk" # "adaptive" # "multi-stream" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.4. Transport Calculation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Shortwave radiation transport calculation methods End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 15.5. Spectral Intervals Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Shortwave radiation scheme number of spectral intervals End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CO2" # "CH4" # "N2O" # "CFC-11 eq" # "CFC-12 eq" # "HFC-134a eq" # "Explicit ODSs" # "Explicit other fluorinated gases" # "O3" # "H2O" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 16. Radiation --&gt; Shortwave GHG Representation of greenhouse gases in the shortwave radiation scheme 16.1. Greenhouse Gas Complexity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CFC-12" # "CFC-11" # "CFC-113" # "CFC-114" # "CFC-115" # "HCFC-22" # "HCFC-141b" # "HCFC-142b" # "Halon-1211" # "Halon-1301" # "Halon-2402" # "methyl chloroform" # "carbon tetrachloride" # "methyl chloride" # "methylene chloride" # "chloroform" # "methyl bromide" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 16.2. ODS Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "HFC-134a" # "HFC-23" # "HFC-32" # "HFC-125" # "HFC-143a" # "HFC-152a" # "HFC-227ea" # "HFC-236fa" # "HFC-245fa" # "HFC-365mfc" # "HFC-43-10mee" # "CF4" # "C2F6" # "C3F8" # "C4F10" # "C5F12" # "C6F14" # "C7F16" # "C8F18" # "c-C4F8" # "NF3" # "SF6" # "SO2F2" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 16.3. 
Other Flourinated Gases Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Other flourinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17. Radiation --&gt; Shortwave Cloud Ice Shortwave radiative properties of ice crystals in clouds 17.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General shortwave radiative interactions with cloud ice crystals End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "bi-modal size distribution" # "ensemble of ice crystals" # "mean projected area" # "ice water path" # "crystal asymmetry" # "crystal aspect ratio" # "effective crystal radius" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.2. Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of cloud ice crystals in the shortwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "T-matrix" # "geometric optics" # "finite difference time domain (FDTD)" # "Mie theory" # "anomalous diffraction approximation" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to cloud ice crystals in the shortwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 18. Radiation --&gt; Shortwave Cloud Liquid Shortwave radiative properties of liquid droplets in clouds 18.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General shortwave radiative interactions with cloud liquid droplets End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "cloud droplet number concentration" # "effective cloud droplet radii" # "droplet size distribution" # "liquid water path" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 18.2. Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of cloud liquid droplets in the shortwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "geometric optics" # "Mie theory" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 18.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Monte Carlo Independent Column Approximation" # "Triplecloud" # "analytic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 19. Radiation --&gt; Shortwave Cloud Inhomogeneity Cloud inhomogeneity in the shortwave radiation scheme 19.1. Cloud Inhomogeneity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method for taking into account horizontal cloud inhomogeneity End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 20. Radiation --&gt; Shortwave Aerosols Shortwave radiative properties of aerosols 20.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General shortwave radiative interactions with aerosols End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "number concentration" # "effective radii" # "size distribution" # "asymmetry" # "aspect ratio" # "mixing state" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 20.2. Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of aerosols in the shortwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "T-matrix" # "geometric optics" # "finite difference time domain (FDTD)" # "Mie theory" # "anomalous diffraction approximation" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 20.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to aerosols in the shortwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 21. Radiation --&gt; Shortwave Gases Shortwave radiative properties of gases 21.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General shortwave radiative interactions with gases End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 22. Radiation --&gt; Longwave Radiation Properties of the longwave radiation scheme 22.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of longwave radiation in the atmosphere End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 22.2. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the longwave radiation scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "wide-band model" # "correlated-k" # "exponential sum fitting" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 22.3. Spectral Integration Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Longwave radiation scheme spectral integration End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "two-stream" # "layer interaction" # "bulk" # "adaptive" # "multi-stream" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 22.4. Transport Calculation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Longwave radiation transport calculation methods End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 22.5. Spectral Intervals Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Longwave radiation scheme number of spectral intervals End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CO2" # "CH4" # "N2O" # "CFC-11 eq" # "CFC-12 eq" # "HFC-134a eq" # "Explicit ODSs" # "Explicit other fluorinated gases" # "O3" # "H2O" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 23. Radiation --&gt; Longwave GHG Representation of greenhouse gases in the longwave radiation scheme 23.1. Greenhouse Gas Complexity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CFC-12" # "CFC-11" # "CFC-113" # "CFC-114" # "CFC-115" # "HCFC-22" # "HCFC-141b" # "HCFC-142b" # "Halon-1211" # "Halon-1301" # "Halon-2402" # "methyl chloroform" # "carbon tetrachloride" # "methyl chloride" # "methylene chloride" # "chloroform" # "methyl bromide" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 23.2. 
ODS Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "HFC-134a" # "HFC-23" # "HFC-32" # "HFC-125" # "HFC-143a" # "HFC-152a" # "HFC-227ea" # "HFC-236fa" # "HFC-245fa" # "HFC-365mfc" # "HFC-43-10mee" # "CF4" # "C2F6" # "C3F8" # "C4F10" # "C5F12" # "C6F14" # "C7F16" # "C8F18" # "c-C4F8" # "NF3" # "SF6" # "SO2F2" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 23.3. Other Flourinated Gases Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Other flourinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 24. Radiation --&gt; Longwave Cloud Ice Longwave radiative properties of ice crystals in clouds 24.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General longwave radiative interactions with cloud ice crystals End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "bi-modal size distribution" # "ensemble of ice crystals" # "mean projected area" # "ice water path" # "crystal asymmetry" # "crystal aspect ratio" # "effective crystal radius" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 24.2. Physical Reprenstation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of cloud ice crystals in the longwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "T-matrix" # "geometric optics" # "finite difference time domain (FDTD)" # "Mie theory" # "anomalous diffraction approximation" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 24.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to cloud ice crystals in the longwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25. Radiation --&gt; Longwave Cloud Liquid Longwave radiative properties of liquid droplets in clouds 25.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General longwave radiative interactions with cloud liquid droplets End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "cloud droplet number concentration" # "effective cloud droplet radii" # "droplet size distribution" # "liquid water path" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25.2. Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of cloud liquid droplets in the longwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "geometric optics" # "Mie theory" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to cloud liquid droplets in the longwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Monte Carlo Independent Column Approximation" # "Triplecloud" # "analytic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 26. Radiation --&gt; Longwave Cloud Inhomogeneity Cloud inhomogeneity in the longwave radiation scheme 26.1. Cloud Inhomogeneity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method for taking into account horizontal cloud inhomogeneity End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 27. Radiation --&gt; Longwave Aerosols Longwave radiative properties of aerosols 27.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General longwave radiative interactions with aerosols End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "number concentration" # "effective radii" # "size distribution" # "asymmetry" # "aspect ratio" # "mixing state" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 27.2. Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of aerosols in the longwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "T-matrix" # "geometric optics" # "finite difference time domain (FDTD)" # "Mie theory" # "anomalous diffraction approximation" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 27.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to aerosols in the longwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 28. Radiation --&gt; Longwave Gases Longwave radiative properties of gases 28.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General longwave radiative interactions with gases End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 29. Turbulence Convection Atmosphere Convective Turbulence and Clouds 29.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of atmosphere convection and turbulence End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Mellor-Yamada" # "Holtslag-Boville" # "EDMF" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 30. Turbulence Convection --&gt; Boundary Layer Turbulence Properties of the boundary layer turbulence scheme 30.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Boundary layer turbulence scheme name End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "TKE prognostic" # "TKE diagnostic" # "TKE coupled with water" # "vertical profile of Kz" # "non-local diffusion" # "Monin-Obukhov similarity" # "Coastal Buddy Scheme" # "Coupled with convection" # "Coupled with gravity waves" # "Depth capped at cloud base" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 30.2. Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Boundary layer turbulence scheme type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 30.3. Closure Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Boundary layer turbulence scheme closure order End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 30.4. Counter Gradient Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Uses boundary layer turbulence scheme counter gradient End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 31. Turbulence Convection --&gt; Deep Convection Properties of the deep convection scheme 31.1. 
Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Deep convection scheme name End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "mass-flux" # "adjustment" # "plume ensemble" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 31.2. Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Deep convection scheme type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CAPE" # "bulk" # "ensemble" # "CAPE/WFN based" # "TKE/CIN based" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 31.3. Scheme Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Deep convection scheme method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "vertical momentum transport" # "convective momentum transport" # "entrainment" # "detrainment" # "penetrative convection" # "updrafts" # "downdrafts" # "radiative effect of anvils" # "re-evaporation of convective precipitation" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 31.4. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical processes taken into account in the parameterisation of deep convection End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "tuning parameter based" # "single moment" # "two moment" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 31.5. Microphysics Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeor and water vapor from updrafts End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 32. Turbulence Convection --&gt; Shallow Convection Properties of the shallow convection scheme 32.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Shallow convection scheme name End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "mass-flux" # "cumulus-capped boundary layer" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 32.2. Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N shallow convection scheme type End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "same as deep (unified)" # "included in boundary layer turbulence" # "separate diagnosis" # TODO - please enter value(s) Explanation: 32.3. Scheme Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 shallow convection scheme method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "convective momentum transport" # "entrainment" # "detrainment" # "penetrative convection" # "re-evaporation of convective precipitation" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 32.4. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical processes taken into account in the parameterisation of shallow convection End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "tuning parameter based" # "single moment" # "two moment" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 32.5. Microphysics Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Microphysics scheme for shallow convection End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 33. Microphysics Precipitation Large Scale Cloud Microphysics and Precipitation 33.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of large scale cloud microphysics and precipitation End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 34. Microphysics Precipitation --&gt; Large Scale Precipitation Properties of the large scale precipitation scheme 34.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name of the large scale precipitation parameterisation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "liquid rain" # "snow" # "hail" # "graupel" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 34.2. Hydrometeors Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Precipitating hydrometeors taken into account in the large scale precipitation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 35. Microphysics Precipitation --&gt; Large Scale Cloud Microphysics Properties of the large scale cloud microphysics scheme 35.1. 
Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name of the microphysics parameterisation scheme used for large scale clouds. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "mixed phase" # "cloud droplets" # "cloud ice" # "ice nucleation" # "water vapour deposition" # "effect of raindrops" # "effect of snow" # "effect of graupel" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 35.2. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Large scale cloud microphysics processes End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 36. Cloud Scheme Characteristics of the cloud scheme 36.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of the atmosphere cloud scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 36.2. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the cloud scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "atmosphere_radiation" # "atmosphere_microphysics_precipitation" # "atmosphere_turbulence_convection" # "atmosphere_gravity_waves" # "atmosphere_solar" # "atmosphere_volcano" # "atmosphere_cloud_simulator" # TODO - please enter value(s) Explanation: 36.3. Atmos Coupling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Atmosphere components that are linked to the cloud scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 36.4. Uses Separate Treatment Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "entrainment" # "detrainment" # "bulk cloud" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 36.5. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Processes included in the cloud scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 36.6. Prognostic Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the cloud scheme a prognostic scheme? End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 36.7. Diagnostic Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the cloud scheme a diagnostic scheme? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "cloud amount" # "liquid" # "ice" # "rain" # "snow" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 36.8. Prognostic Variables Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List the prognostic variables used by the cloud scheme, if applicable. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "random" # "maximum" # "maximum-random" # "exponential" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 37. Cloud Scheme --&gt; Optical Cloud Properties Optical cloud properties 37.1. Cloud Overlap Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Method for taking into account overlapping of cloud layers End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.2. Cloud Inhomogeneity Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Method for taking into account cloud inhomogeneity End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # TODO - please enter value(s) Explanation: 38. Cloud Scheme --&gt; Sub Grid Scale Water Distribution Sub-grid scale water distribution 38.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale water distribution type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 38.2. Function Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale water distribution function name End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 38.3. Function Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale water distribution function type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "coupled with deep" # "coupled with shallow" # "not coupled with convection" # TODO - please enter value(s) Explanation: 38.4. 
Convection Coupling Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Sub-grid scale water distribution coupling with convection End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # TODO - please enter value(s) Explanation: 39. Cloud Scheme --&gt; Sub Grid Scale Ice Distribution Sub-grid scale ice distribution 39.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale ice distribution type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 39.2. Function Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale ice distribution function name End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 39.3. Function Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale ice distribution function type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "coupled with deep" # "coupled with shallow" # "not coupled with convection" # TODO - please enter value(s) Explanation: 39.4. Convection Coupling Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Sub-grid scale ice distribution coupling with convection End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 40. Observation Simulation Characteristics of observation simulation 40.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of observation simulator characteristics End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "no adjustment" # "IR brightness" # "visible optical depth" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 41. Observation Simulation --&gt; Isscp Attributes ISSCP Characteristics 41.1. Top Height Estimation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Cloud simulator ISSCP top height estimation method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "lowest altitude level" # "highest altitude level" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 41.2.
Top Height Direction Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator ISSCP top height direction End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Inline" # "Offline" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 42. Observation Simulation --&gt; Cosp Attributes CFMIP Observational Simulator Package attributes 42.1. Run Configuration Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator COSP run configuration End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 42.2. Number Of Grid Points Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator COSP number of grid points End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 42.3. Number Of Sub Columns Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator COSP number of sub-columns used to simulate sub-grid variability End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 42.4. Number Of Levels Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator COSP number of levels End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 43. Observation Simulation --&gt; Radar Inputs Characteristics of the cloud radar simulator 43.1. Frequency Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator radar frequency (Hz) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "surface" # "space borne" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 43.2. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator radar type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 43.3. Gas Absorption Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator radar uses gas absorption End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 43.4.
Effective Radius Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator radar uses effective radius End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "ice spheres" # "ice non-spherical" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 44. Observation Simulation --&gt; Lidar Inputs Characteristics of the cloud lidar simulator 44.1. Ice Types Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator lidar ice type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "max" # "random" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 44.2. Overlap Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Cloud simulator lidar overlap End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 45. Gravity Waves Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources. 45.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of gravity wave parameterisation in the atmosphere End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Rayleigh friction" # "Diffusive sponge layer" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 45.2. Sponge Layer Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sponge layer in the upper levels in order to avoid gravity wave reflection at the top. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "continuous spectrum" # "discrete spectrum" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 45.3. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background wave distribution End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "effect on drag" # "effect on lifting" # "enhanced topography" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 45.4. Subgrid Scale Orography Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Subgrid scale orography effects taken into account. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 46. Gravity Waves --&gt; Orographic Gravity Waves Gravity waves generated due to the presence of orography 46.1. 
Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the orographic gravity wave scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "linear mountain waves" # "hydraulic jump" # "envelope orography" # "low level flow blocking" # "statistical sub-grid scale variance" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 46.2. Source Mechanisms Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Orographic gravity wave source mechanisms End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "non-linear calculation" # "more than two cardinal directions" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 46.3. Calculation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Orographic gravity wave calculation method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "linear theory" # "non-linear theory" # "includes boundary layer ducting" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 46.4. Propagation Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Orographic gravity wave propagation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "total wave" # "single wave" # "spectral" # "linear" # "wave saturation vs Richardson number" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 46.5. Dissipation Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Orographic gravity wave dissipation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 47. Gravity Waves --&gt; Non Orographic Gravity Waves Gravity waves generated by non-orographic processes. 47.1. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the non-orographic gravity wave scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "convection" # "precipitation" # "background spectrum" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 47.2. Source Mechanisms Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Non-orographic gravity wave source mechanisms End of explanation # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "spatially dependent" # "temporally dependent" # TODO - please enter value(s) Explanation: 47.3. Calculation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Non-orographic gravity wave calculation method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "linear theory" # "non-linear theory" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 47.4. Propagation Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Non-orographic gravity wave propagation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "total wave" # "single wave" # "spectral" # "linear" # "wave saturation vs Richardson number" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 47.5. Dissipation Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Non-orographic gravity wave dissipation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 48. Solar Top of atmosphere solar insolation characteristics 48.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of solar insolation of the atmosphere End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "SW radiation" # "precipitating energetic particles" # "cosmic rays" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 49. Solar --&gt; Solar Pathways Pathways for solar forcing of the atmosphere 49.1. Pathways Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Pathways for the solar forcing of the atmosphere model domain End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.solar_constant.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "fixed" # "transient" # TODO - please enter value(s) Explanation: 50. Solar --&gt; Solar Constant Solar constant and top of atmosphere insolation characteristics 50.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time adaptation of the solar constant. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 50.2. Fixed Value Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If the solar constant is fixed, enter the value of the solar constant (W m-2). End of explanation # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 50.3. Transient Characteristics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 solar constant transient characteristics (W m-2) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.orbital_parameters.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "fixed" # "transient" # TODO - please enter value(s) Explanation: 51. Solar --&gt; Orbital Parameters Orbital parameters and top of atmosphere insolation characteristics 51.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time adaptation of orbital parameters End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 51.2. Fixed Reference Date Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Reference date for fixed orbital parameters (yyyy) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 51.3. Transient Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of transient orbital parameters End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Berger 1978" # "Laskar 2004" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 51.4. Computation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method used for computing orbital parameters. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 52. Solar --&gt; Insolation Ozone Impact of solar insolation on stratospheric ozone 52.1. Solar Ozone Impact Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does top of atmosphere insolation impact on stratospheric ozone? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.volcanos.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 53. Volcanos Characteristics of the implementation of volcanoes 53.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of the implementation of volcanic effects in the atmosphere End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "high frequency solar constant anomaly" # "stratospheric aerosols optical thickness" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 54. Volcanos --&gt; Volcanoes Treatment Treatment of volcanoes in the atmosphere 54.1. 
Volcanoes Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How volcanic effects are modeled in the atmosphere. End of explanation
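For illustration only, one of the questionnaire cells above would look like the following once a value has been entered; the chosen value is an assumed example drawn from the cell's own list of valid choices, not a statement about any particular model.
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Example value only (assumption); replace with the choice that matches the documented model
DOC.set_value("stratospheric aerosols optical thickness")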
5,050
Given the following text description, write Python code to implement the functionality described below step by step Description: Author Step1: Now that we already have general idea of Data Set. Let's work with features Features 'pixel0' to 'pixel783' Step2: First let's try to plot digit from first 5 data point Step3: PCA transformation Now, we shall try to reduce dimensionality of all features from 'pixel0' to 'pixel783' by using Principal component analysis (PCA), PCA is an orthogonal linear transformation that turns a set of possibly correlated variables into a new set of variables that are as uncorrelated as possible. there are several class to implement different kind of PCA in sklearn but we will work with PCA class. Step4: Number of features has reduced from 784 to 87 Step5: Predicting with Kmeans Step6: Preparing for kaggle submission Step7: Performance Evaluation Splitting train data set Step8: Evaluating model using splitted data set Step9: Hyperparameters Class KMeans has a parameter 'n_clusters', representing the number of clusters to form as well as the number of centroids to generate. We could use elbow method to select number of clusters in KMeans model.
Python Code: import pandas as pd train_data=pd.read_csv('/train.csv') train_data.head() train_data.tail() train_data.dtypes train_data.info() Explanation: Author : Vu Tran. Other info is on github Kaggle Competition: Digit Recognizer Info from Competition Site Description Evaluation Data Set First attempt: Working with data: exploring labeled Data Set Features 'pixel0' to 'pixel783' Training Kmeans Predicting with Kmeans Preparing for kaggle submission Performance Evaluation Splitting train data set Evaluating performance using splitted data set Hyperparameters Second attempt (in progress) Info from Competition Site Description The goal in this competition is to take an image of a handwritten single digit, and determine what that digit is. As the competition progresses, we will release tutorials which explain different machine learning algorithms and help you to get started. The data for this competition were taken from the MNIST dataset. The MNIST ("Modified National Institute of Standards and Technology") dataset is a classic within the Machine Learning community that has been extensively studied. More detail about the dataset, including Machine Learning algorithms that have been tried on it and their levels of success, can be found at http://yann.lecun.com/exdb/mnist/index.html. Evaluation Data Set Data Files File Name | Available Formats ------------------|------------------ train | .csv (73.22 mb) test | .csv (48.75 mb) The data files train.csv and test.csv contain gray-scale images of hand-drawn digits, from zero through nine. Each image is 28 pixels in height and 28 pixels in width, for a total of 784 pixels in total. Each pixel has a single pixel-value associated with it, indicating the lightness or darkness of that pixel, with higher numbers meaning darker. This pixel-value is an integer between 0 and 255, inclusive. The training data set, (train.csv), has 785 columns. The first column, called "label", is the digit that was drawn by the user. The rest of the columns contain the pixel-values of the associated image. Each pixel column in the training set has a name like pixelx, where x is an integer between 0 and 783, inclusive. To locate this pixel on the image, suppose that we have decomposed x as x = i * 28 + j, where i and j are integers between 0 and 27, inclusive. Then pixelx is located on row i and column j of a 28 x 28 matrix, (indexing by zero). For example, pixel31 indicates the pixel that is in the fourth column from the left, and the second row from the top, as in the ascii-diagram below. Visually, if we omit the "pixel" prefix, the pixels make up the image like this: 000 001 002 003 ... 026 027 028 029 030 031 ... 054 055 056 057 058 059 ... 082 083 | | | | ... | | 728 729 730 731 ... 754 755 756 757 758 759 ... 782 783 The test data set, (test.csv), is the same as the training set, except that it does not contain the "label" column. Your submission file should be in the following format: For each of the 28000 images in the test set, output a single line with the digit you predict. For example, if you predict that the first image is of a 3, the second image is of a 7, and the third image is of a 8, then your submission file would look like: 3 7 8 (27997 more lines) The evaluation metric for this contest is the categorization accuracy, or the proportion of test images that are correctly classified. For example, a categorization accuracy of 0.97 indicates that you have correctly classified all but 3% of the images. 
First attempt Working with Data We will first explore the labled DataSet. End of explanation train_features=train_data.values[:,1:] train_target=train_data.label train_target[:5] Explanation: Now that we already have general idea of Data Set. Let's work with features Features 'pixel0' to 'pixel783' End of explanation %matplotlib inline import matplotlib.pyplot as plt import numpy as np def plot_img(sample): fig=plt.figure(figsize=(10,10)) fig.subplots_adjust(left=0, right=1, bottom=0, top=1,hspace=0.05, wspace=0.05) for i in range(sample.shape[0]): img=np.reshape(sample[i],(28,28)) p=fig.add_subplot(sample.shape[0],sample.shape[0],i+1, xticks=[], yticks=[]) p.imshow(img,cmap=plt.cm.bone) plot_img(train_features[0:5]) Explanation: First let's try to plot digit from first 5 data point End of explanation from sklearn.decomposition import PCA my_pca=PCA(n_components=0.9) pca_train_features=my_pca.fit_transform(train_features) pca_train_features.shape Explanation: PCA transformation Now, we shall try to reduce dimensionality of all features from 'pixel0' to 'pixel783' by using Principal component analysis (PCA), PCA is an orthogonal linear transformation that turns a set of possibly correlated variables into a new set of variables that are as uncorrelated as possible. there are several class to implement different kind of PCA in sklearn but we will work with PCA class. End of explanation from sklearn.cluster import KMeans model=KMeans(init='k-means++') model.fit(pca_train_features,train_target) Explanation: Number of features has reduced from 784 to 87 End of explanation #features from test data test_data=pd.read_csv('/test.csv') test_features=test_data.values[:,;] pca_test_features=my_pca.transform(test_features) #predicting original test set. prediction=model.predict(pca_test_features) Explanation: Predicting with Kmeans End of explanation #preparing submission file pd.DataFrame({"ImageId": range(1,len(prediction)+1), "Label": prediction}).to_csv('first_attempt.csv', index=False, header=True) Explanation: Preparing for kaggle submission End of explanation from sklearn.cross_validation import train_test_split # Split 80-20 train vs test data split_train_features, split_test_features, split_train_target, split_test_target= train_test_split(train_features,train_target,test_size=0.20,random_state=0) Explanation: Performance Evaluation Splitting train data set End of explanation from sklearn.metrics import accuracy_score from sklearn.decomposition import PCA from sklearn.cluster import KMeans #pre-processing split train my_pca=PCA(n_components=0.9) pca_split_train_features=my_pca.fit_transform(split_train_features) #pre-processing split test features pca_split_test_features=my_pca.transform(split_test_features) #fit and predict using split data model=KMeans(init='k-means++') model.fit(pca_split_train_features,split_train_target) split_prediction=model.predict(pca_split_test_features) score=accuracy_score(split_test_target, split_prediction) print (score(split_test_target, split_prediction)) Explanation: Evaluating model using splitted data set End of explanation from scipy import cluster #using elbow method to select number of clusters in KMeans model initial = [cluster.vq.kmeans(pca_train_features,i) for i in range(1,10)] fig=plt.figure() ax=fig.add_subplot(111) ax.plot([var for (cent,var) in initial]) plt.grid(True) plt.xlabel('Number of clusters') plt.ylabel('distortion') plt.title('Elbow for K-means clustering') plt.show() Explanation: Hyperparameters Class KMeans has a parameter 'n_clusters', 
representing the number of clusters to form as well as the number of centroids to generate. We could use the elbow method to select the number of clusters in the KMeans model. End of explanation
5,051
Given the following text description, write Python code to implement the functionality described below step by step Description: Author Step1: Yeah! Here are the percentages! Dataset Sanity We will check for data discrepancies. Some variables (FAP, Application type) are expressed as codes and name. So we will check for their agreement. We'll also investigate missing or weird data. First, let's have a quick summary of the data. Step2: Good news, Everything seems to be up here! Let's see if there are any discrepancies between names and codes. Starting with FAP (Job types). Step3: So far so good, Perfect concordance between FAP codes and FAP names. It is worthy to note that these 195 FAP represents only a subset of the entire FAP (225) as they are described here. What about concordance between application type codes and names? Step4: We also have a 1 to 1 correspondance between application type codes and names. Is there anything going weird between FAP code and the application type rank? Like two #1? Step5: Nothing like that here. Are there any weird values for ranks? Step6: Nope, they are going from first to fourth and we've already seen that there are set for every FAP. However there may be misassigned (e.g. an application type with fourth rank showing up alone). Step7: Everything is in order! Let's take care of the new comer... The percentage. Basic stats? Step8: Here application modes not observed (0%) are not represented. So note that the mean has no real meaning here. Let's end by a manual check for dataset adequacy with Pôle Emploi website. Application modes rank for "Tuyauteurs" on the 09/01/2017 at IMT is Step9: Yay! We have a match! Maybe we should check that the sum of the percentages are close to 100%? Step10: For one FAP, the percentages sum to ~100%. Conclusion I guess we're done for basic sanity checks. Everything looks sane! Basic Overview First let's see what are these application modes. Step11: The different possibilities are not super precise... Thus, only 'Candidature spontanée', 'Réseau...' and 'Intermédiaires du placement can be directly useful. For some FAP, only one, two or three application modes have been observed. Let's have a look to how often this appears. Step12: 70% the job types have data for the 4 application modes. But we can still find some for which only 1 (<10%) or 2 modes (~10%) are observed. So what is the application mode that is the most frequently ranked first? Step13: Network seems to be slightly better than other modes. Nothing new for the Bayesian people... However placement agencies gathers also 30% of the first rank modes. Let's use percentages now by having a glimpse on the modes that represent more than half of the observations per job type. Step14: No doubt, when one channel is doing most of the job it's pretty often the network (36%). But except for the "others" category, other modes appears also to be successful. Conclusion Application mode definitions are not very granular. As we already knew, network ranks first, both when considering the ranks and taking into account application modes with more than 50% of the observations. This make us think that there is space of personalisation. Next step will pursue on hos this dataset could be useful to us. Recommendations 1. Basic For example, we could investigate what are the job types for which application mode really makes the difference? Let's start with the easiest. When only one mode shows up. 
Step15: The case of independant farmers that mostly apply spontaneously for jobs opens the question of the scope of this application mode ('Candidature spontanée'). Maybe it also includes people that create or take over companies. Here, for clear cut combinations of job type/application modes, use of professional or personal network is less represented. What are the other jobs for which the application modes are really determinant? First we'll have a look to the gap between the modes ranked first and second. How clear cut it is? Step16: Most of the time (124/195) there are less than a 20% difference between the first and the second application mode. But on right hand we can clearly see thats some application modes are highly recommended for certain jobs. Let's have a look to application modes that gather more than 60% difference between first and second. Step17: Network is still there. But there are some job types for which we could recommend to have a look to placement agencies. Same for spontaneous application, maybe we could suggest hints to investigate the region specific ecosystem or job boards. One of the main question was, what is the added value of percentages over ranks. So let's have a look to the first rank. How diverse it is? Step18: For half of the job types, the application mode ranked first represents less than half of the recruitment channels observed. Is it more relevant to propose more than one, let's say two? Step19: It may be like pushing open doors here, but some users may benefits for more than one application mode suggestion (2). And for most job types, the first two modes gather more than 65% of the observations. What are the jobs for which the first modes are almost equally relevant? Step20: 33 job types have less than a 15% difference between the first and the second application mode while both modes gather more than 70% of the observations. They include some public service job types (school principals or professors) that you can access by "concours" (permanent position) or spontaneous application/placement agencies (fixed-term contract). Users are already aware of these possibilities. But for other jobs like "cashiers", pushing both advice (spontaneous application and network) seems like a good strategy! We know that for 70% of the job types the 4 application modes have been observed. For which job type the latest mode may be interesting? Let's see what is the distribution of the percentages of the least observed mode. Step21: The maximum percentage observed for the application mode ranked last, is 21%. What are the job types for which the latest mode percentage is not that ridiculous (75th percentile)? Step22: Some of these results seem to be contradictory with our knowledge Step23: There is only 20 job types for which Network has not been reported as an application mode. When it has been reported, it is usually ranked as the first or second application mode. Among the jobs for which Network is ranked second for application mode Step24: There are only 2 cases where the Network is ranked second with more than 36% of the observations. Even if it won't concern that much job types, a threshold at 40% seems reasonable. What about job types for which the Network mode does not reach this threshold? Step25: It sounds sensible that these job types do not put network as their top mode of application. We expect the advice to be included in the two stars section. However, it appears that we could improve the phrasing for less qualified workers. 
The Job types that are above the 40% threshold are Step26: For these jobs, it appears relevant that the network advice is put in the user priorities. Another way to personnalize the Network advice could be to put am emphasis on the observed percentage for the job types for which the advantage is striking.
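A small illustrative pandas sketch of the thresholding discussed above — flagging job types whose "Réseau" share clears the 40% cut-off. The column names mirror the ones used later in this analysis, but the rows and percentages below are invented toy values, not the real IMT data:
import pandas as pd

# Toy rows shaped like the modes table; the percentages are invented.
modes = pd.DataFrame({
    'FAP_CODE': ['A1Z40', 'A1Z40', 'B2Z41', 'B2Z41', 'C3Z42', 'C3Z42'],
    'APPLICATION_TYPE_NAME': ['Réseau', 'Candidature spontanée',
                              'Réseau', 'Candidature spontanée',
                              'Réseau', 'Candidature spontanée'],
    'APPLICATION_TYPE_ORDER': [1, 2, 2, 1, 1, 2],
    'RECRUT_PERCENT': [55.0, 45.0, 30.0, 70.0, 58.0, 42.0],
})

# Job types where the network channel reaches the 40% cut-off get the "priority" advice.
network = modes[modes.APPLICATION_TYPE_NAME == 'Réseau']
priority = network[network.RECRUT_PERCENT >= 40]
print(priority[['FAP_CODE', 'RECRUT_PERCENT']])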
Python Code: import os from os import path import pandas as pd import seaborn as _ DATA_FOLDER = os.getenv('DATA_FOLDER') modes = pd.read_csv(path.join(DATA_FOLDER, 'imt/application_modes.csv')) modes.head() Explanation: Author: Marie Laure, [email protected] Application modes (IMT) Dataset retrieved from emploi-store-dev The IMT dataset provides regional statistics about different jobs. Here we are interested in the "application modes" subset of this dataset. It gathers the means by which people find jobs. Previously, we retrieved IMT data by scraping the IMT website. Concerning application modes, the present dataset not only proposes application modes ranks (as before) but also percentages per FAP codes. As an exploratory step, we are interested in understanding what is the added value of the percentages compared to the ranks. This dataset can be obtained with the following command: docker-compose run --rm data-analysis-prepare make data/imt/application_modes.csv Loading and General View First let's load the csv file: End of explanation modes.describe(include='all').head(2) Explanation: Yeah! Here are the percentages! Dataset Sanity We will check for data discrepancies. Some variables (FAP, Application type) are expressed as codes and name. So we will check for their agreement. We'll also investigate missing or weird data. First, let's have a quick summary of the data. End of explanation modes.groupby('FAP_CODE').FAP_NAME.nunique().value_counts() Explanation: Good news, Everything seems to be up here! Let's see if there are any discrepancies between names and codes. Starting with FAP (Job types). End of explanation modes.groupby('APPLICATION_TYPE_CODE').APPLICATION_TYPE_NAME.nunique().value_counts() Explanation: So far so good, Perfect concordance between FAP codes and FAP names. It is worthy to note that these 195 FAP represents only a subset of the entire FAP (225) as they are described here. What about concordance between application type codes and names? End of explanation modes.groupby('FAP_CODE').APPLICATION_TYPE_CODE.value_counts().value_counts() Explanation: We also have a 1 to 1 correspondance between application type codes and names. Is there anything going weird between FAP code and the application type rank? Like two #1? End of explanation modes.APPLICATION_TYPE_ORDER.unique() Explanation: Nothing like that here. Are there any weird values for ranks? End of explanation def check_order(fap_modes): num_modes = len(fap_modes) if num_modes == 1: if fap_modes.iloc[0].RECRUT_PERCENT != 100: raise Exception ('Single observations should have 100% percentage') if fap_modes.iloc[0].APPLICATION_TYPE_ORDER != 1: raise Exception ('Single observations should be ranked first') return for i in range(num_modes - 1): if int(fap_modes.APPLICATION_TYPE_ORDER.iloc[i]) != i + 1: raise Exception ('Rank order not consistent') if fap_modes.RECRUT_PERCENT.iloc[i] < \ fap_modes.RECRUT_PERCENT.iloc[i + 1]: raise Exception ('Percentage order not consistent') modes.sort_values(\ 'APPLICATION_TYPE_ORDER').groupby('FAP_CODE').apply(check_order); Explanation: Nope, they are going from first to fourth and we've already seen that there are set for every FAP. However there may be misassigned (e.g. an application type with fourth rank showing up alone). End of explanation modes.RECRUT_PERCENT.describe() Explanation: Everything is in order! Let's take care of the new comer... The percentage. Basic stats? 
End of explanation modes[modes.FAP_CODE == 'D2Z41'] Explanation: Here application modes not observed (0%) are not represented. So note that the mean has no real meaning here. Let's end by a manual check for dataset adequacy with Pôle Emploi website. Application modes rank for "Tuyauteurs" on the 09/01/2017 at IMT is: 1. Candidature spontanée Here we have: End of explanation def sum_percentages(fap_modes): num_modes = len(fap_modes) sum = 0.0 for i in range(num_modes): sum += fap_modes.RECRUT_PERCENT.iloc[i] if sum < 99.9 or sum > 100.1: print('{} {}'.format(fap_modes.FAP_CODE, sum)) modes.groupby('FAP_CODE').apply(sum_percentages) Explanation: Yay! We have a match! Maybe we should check that the sum of the percentages are close to 100%? End of explanation pd.options.display.max_colwidth = 100 modes.APPLICATION_TYPE_NAME.drop_duplicates().to_frame() Explanation: For one FAP, the percentages sum to ~100%. Conclusion I guess we're done for basic sanity checks. Everything looks sane! Basic Overview First let's see what are these application modes. End of explanation modes.groupby('FAP_CODE').size().value_counts(normalize=True).plot(kind='bar'); Explanation: The different possibilities are not super precise... Thus, only 'Candidature spontanée', 'Réseau...' and 'Intermédiaires du placement can be directly useful. For some FAP, only one, two or three application modes have been observed. Let's have a look to how often this appears. End of explanation modes[modes.APPLICATION_TYPE_ORDER == 1]\ .APPLICATION_TYPE_NAME.value_counts(normalize=True)\ .plot.pie(figsize=(6, 6), label=''); Explanation: 70% the job types have data for the 4 application modes. But we can still find some for which only 1 (<10%) or 2 modes (~10%) are observed. So what is the application mode that is the most frequently ranked first? End of explanation modes[modes.RECRUT_PERCENT >= 50].APPLICATION_TYPE_NAME\ .value_counts(normalize=True)\ .plot.pie(figsize=(6, 6), label=''); Explanation: Network seems to be slightly better than other modes. Nothing new for the Bayesian people... However placement agencies gathers also 30% of the first rank modes. Let's use percentages now by having a glimpse on the modes that represent more than half of the observations per job type. End of explanation total_modes = modes.groupby('FAP_CODE').size() modes['total_modes'] = modes.FAP_CODE.map(total_modes) modes[modes.total_modes == 1][['APPLICATION_TYPE_NAME','FAP_NAME']] Explanation: No doubt, when one channel is doing most of the job it's pretty often the network (36%). But except for the "others" category, other modes appears also to be successful. Conclusion Application mode definitions are not very granular. As we already knew, network ranks first, both when considering the ranks and taking into account application modes with more than 50% of the observations. This make us think that there is space of personalisation. Next step will pursue on hos this dataset could be useful to us. Recommendations 1. Basic For example, we could investigate what are the job types for which application mode really makes the difference? Let's start with the easiest. When only one mode shows up. 
End of explanation def compute_top2_diff(fap_modes): if len(fap_modes) == 1: return 100 return fap_modes.iloc[0] - fap_modes.iloc[1] top2_diff = modes.sort_values(\ 'APPLICATION_TYPE_ORDER').groupby('FAP_CODE').RECRUT_PERCENT.apply(compute_top2_diff) top2_diff.hist(); Explanation: The case of independant farmers that mostly apply spontaneously for jobs opens the question of the scope of this application mode ('Candidature spontanée'). Maybe it also includes people that create or take over companies. Here, for clear cut combinations of job type/application modes, use of professional or personal network is less represented. What are the other jobs for which the application modes are really determinant? First we'll have a look to the gap between the modes ranked first and second. How clear cut it is? End of explanation modes['top2_diff'] = modes.FAP_CODE.map(top2_diff) modes[modes.top2_diff >= 60].APPLICATION_TYPE_NAME.value_counts() Explanation: Most of the time (124/195) there are less than a 20% difference between the first and the second application mode. But on right hand we can clearly see thats some application modes are highly recommended for certain jobs. Let's have a look to application modes that gather more than 60% difference between first and second. End of explanation modes[modes.APPLICATION_TYPE_ORDER == 1].RECRUT_PERCENT.describe() Explanation: Network is still there. But there are some job types for which we could recommend to have a look to placement agencies. Same for spontaneous application, maybe we could suggest hints to investigate the region specific ecosystem or job boards. One of the main question was, what is the added value of percentages over ranks. So let's have a look to the first rank. How diverse it is? End of explanation def compute_top2_sum(fap_modes): if len(fap_modes) == 1: return 100 return fap_modes.iloc[0] + fap_modes.iloc[1] top2_sum = modes.sort_values(\ 'APPLICATION_TYPE_ORDER').groupby('FAP_CODE').RECRUT_PERCENT.apply(compute_top2_sum) top2_sum.hist(); Explanation: For half of the job types, the application mode ranked first represents less than half of the recruitment channels observed. Is it more relevant to propose more than one, let's say two? End of explanation modes['top2_sum'] = modes.FAP_CODE.map(top2_sum) modes[(modes.top2_sum > 70) & (modes.top2_diff < 15) & (modes.APPLICATION_TYPE_ORDER < 3)].\ sort_values(['FAP_CODE', 'top2_sum'], ascending = False) Explanation: It may be like pushing open doors here, but some users may benefits for more than one application mode suggestion (2). And for most job types, the first two modes gather more than 65% of the observations. What are the jobs for which the first modes are almost equally relevant? End of explanation last_modes = modes[modes.APPLICATION_TYPE_ORDER == 4] last_modes.RECRUT_PERCENT.plot(kind ='box'); Explanation: 33 job types have less than a 15% difference between the first and the second application mode while both modes gather more than 70% of the observations. They include some public service job types (school principals or professors) that you can access by "concours" (permanent position) or spontaneous application/placement agencies (fixed-term contract). Users are already aware of these possibilities. But for other jobs like "cashiers", pushing both advice (spontaneous application and network) seems like a good strategy! We know that for 70% of the job types the 4 application modes have been observed. For which job type the latest mode may be interesting? 
Let's see what is the distribution of the percentages of the least observed mode. End of explanation last_modes[last_modes.RECRUT_PERCENT > 16][['APPLICATION_TYPE_NAME', 'FAP_NAME', 'RECRUT_PERCENT']].\ sort_values('RECRUT_PERCENT', ascending = False) Explanation: The maximum percentage observed for the application mode ranked last, is 21%. What are the job types for which the latest mode percentage is not that ridiculous (75th percentile)? End of explanation modes[modes.APPLICATION_TYPE_CODE == 'R2'].APPLICATION_TYPE_ORDER.value_counts() Explanation: Some of these results seem to be contradictory with our knowledge: e.g. the fact that network is ranked last for law professionals. Thus, we keep our main strategy of promoting the Network. However it seems relevant to drop spontaneous advice when users are interested in jobs in which this mode is almost never observed (here less than 16%). 2. Network Here at Bayes, we are conviced that Network is really important. Let's see if and how the application modes reported by newly recruited people enforce our statement. First, how often is the network reported as the way newly recruited people? End of explanation network_ranked_second = modes[(modes.APPLICATION_TYPE_ORDER == 2) & (modes.APPLICATION_TYPE_CODE == 'R2')] network_ranked_second.RECRUT_PERCENT.hist(); Explanation: There is only 20 job types for which Network has not been reported as an application mode. When it has been reported, it is usually ranked as the first or second application mode. Among the jobs for which Network is ranked second for application mode End of explanation network_ranked_second[network_ranked_second.RECRUT_PERCENT < 40].\ sort_values('RECRUT_PERCENT', ascending=False)[['FAP_NAME', 'RECRUT_PERCENT']] Explanation: There are only 2 cases where the Network is ranked second with more than 36% of the observations. Even if it won't concern that much job types, a threshold at 40% seems reasonable. What about job types for which the Network mode does not reach this threshold? End of explanation network_ranked_second[network_ranked_second.RECRUT_PERCENT >= 40].\ FAP_NAME.to_frame() Explanation: It sounds sensible that these job types do not put network as their top mode of application. We expect the advice to be included in the two stars section. However, it appears that we could improve the phrasing for less qualified workers. The Job types that are above the 40% threshold are: End of explanation network_ranked_first = modes[(modes.APPLICATION_TYPE_ORDER == 1) & (modes.APPLICATION_TYPE_CODE == 'R2')] network_ranked_first_ordered = network_ranked_first[['FAP_NAME', 'RECRUT_PERCENT', 'top2_diff', 'total_modes']].\ sort_values('RECRUT_PERCENT', ascending=False) network_ranked_first_ordered.head(10) Explanation: For these jobs, it appears relevant that the network advice is put in the user priorities. Another way to personnalize the Network advice could be to put am emphasis on the observed percentage for the job types for which the advantage is striking. End of explanation
5,052
Given the following text description, write Python code to implement the functionality described below step by step Description: DiscreteDP Example Step4: Optimal solution We skip the description of the model, just writing down the Bellman equation Step6: For comparison, let us also consider the implementation with the DiscreteDP class Step7: The following paramter values are from lakemodel_example.py. Step8: Let us check that the results coincide Step9: Take a look at the optimal solution for $c = 40$ for example Step10: Performance comparison Step11: Optimal unemployment insurance policy We compute the optimal level of unemployment insurance as in the lecture, mimicking lakemodel_example.py.
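As a warm-up for the job-search formulation that follows, here is a minimal toy usage of the DiscreteDP class on a two-state, two-action problem. The constructor and solve call mirror how the class is used below, but the reward and transition numbers are made up purely for illustration:
import numpy as np
from quantecon.markov import DiscreteDP

# Two states, two actions; rewards R[s, a] and transitions Q[s, a, s'] are arbitrary toy numbers.
R = np.array([[5.0, 10.0],
              [-1.0, 2.0]])
Q = np.array([[[0.9, 0.1], [0.3, 0.7]],
              [[0.6, 0.4], [0.2, 0.8]]])
beta = 0.95

ddp = DiscreteDP(R, Q, beta)
res = ddp.solve(method='policy_iteration')

print(res.v)      # optimal value for each state
print(res.sigma)  # optimal action for each state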
Python Code: %matplotlib inline from __future__ import division, print_function import numpy as np import scipy.stats import scipy.optimize import scipy.sparse from numba import jit import matplotlib.pyplot as plt from quantecon.markov import DiscreteDP Explanation: DiscreteDP Example: Job Search Daisuke Oyama Faculty of Economics, University of Tokyo We study an optimal stopping problem, in the context of job search as discussed in http://quant-econ.net/py/lake_model.html. End of explanation class JobSearchModel(object): Job search model. Parameters ---------- w : array_like(float, ndim=1) Array containing wage levels. Must be ordered in ascending order. pdf : array_like(float, ndim=1) Wage distribution. beta : scalar(float) Discount factor alaph :scalar(float) Firing probability. gamma : scalar(float) Wage offer arrival probability. rho: scalar(float) Degree of (constant) relative risk aversion. def __init__(self, w, pdf, beta, alpha=0, gamma=1, rho=0): w = np.asarray(w) self.pdf = np.asarray(pdf) self.beta, self.alpha, self.gamma, self.rho = beta, alpha, gamma, rho self.u_w = self.u(w) def u(self, y): y must be array_like. rho = self.rho small_number = -9999999 y = np.asarray(y, dtype=float) nonpositive = (y <= 0) if rho == 1: util = np.log(y) else: util = (y**(1 - rho) - 1)/(1 - rho) util[nonpositive] = small_number return util def solve(self, c, *args, **kwargs): Solve directly s_star and U and V_s. S = len(self.u_w) a0 = 1 - (1 - self.alpha) * self.beta a1 = self.beta * self.gamma coeff = a1 / a0 u_c = self.u(np.array([c]))[0] s_star = _bisect(self.u_w, self.pdf, u_c, coeff) C = np.zeros(S, dtype=int) C[s_star:] = 1 U = a0 * u_c + a1 * self.u_w[s_star:].dot(self.pdf[s_star:]) U /= a0 + a1 * self.pdf[s_star:].sum() U /= 1 - self.beta V = np.empty(S) V[:s_star] = U V[s_star:] = (self.u_w[s_star:] + self.alpha * self.beta * U) / a0 return V, U, C def stationary_distribution(self, C): lamb = self.pdf.dot(C) * self.gamma pi = np.array([self.alpha, lamb]) pi /= pi.sum() return pi @jit(nopython=True) def _bisect(u_w, pdf, u_c, coeff): lo = -1 hi = len(u_w) while(lo < hi-1): m = (lo + hi) // 2 lhs = u_w[m] - u_c rhs = 0 for i in range(m+1, len(u_w)): rhs += (u_w[i] - u_w[m]) * pdf[i] rhs *= coeff if lhs > rhs: hi = m else: lo = m return hi Explanation: Optimal solution We skip the description of the model, just writing down the Bellman equation: $$ \begin{aligned} U &= u(c) + \beta \left[(1 - \gamma) U + \gamma E[V_s]\right], \ V_s &= \max\left{U, u(w_s) + \beta \left[(1 - \alpha) V_s + \alpha U\right] \right}. \end{aligned} $$ For this class of problem, we can characterize the solution analytically. The optimal policy $\sigma^$ is monotone; it is characterized by a threshold $s^$, for which $\sigma^(s) = 1$ if and only if $s \geq s^$, where actions $0$ and $1$ represent "reject" and "accept", respectively. The threshold is defined as follows: Let $$ \begin{aligned} g(s) &= u(w_s) - u(c), \ h(s) &= \frac{\beta \gamma}{1 - \beta (1 - \alpha)} \sum_{s' \geq s} p_s u(w_s). \end{aligned} $$ It is easy to see that $g$ is increasing and $h$ is decreasing. Then the threshold $s^$ is such that $s \geq s^$ if and only if $g(s) > h(s)$. 
Given $s^$, the optimal values can be computed as follows: $$ \begin{aligned} U &= \frac{{1 - (1 - \alpha) \beta} u(c) + \beta \gamma \sum_{s \geq s^} p_s u(w_s)} {(1 - \beta) \left[{1 - (1 - \alpha) \beta} + \beta \gamma \sum_{s \geq s^} p_s\right]}, \ V_s &= \begin{cases} U & \text{if $s < s^$} \ \dfrac{u(w_s) + \alpha \beta U}{1 - (1 - \alpha) \beta} & \text{if $s \geq s^*$}. \end{cases} \end{aligned} $$ The optimal policy defines a Markov chain over ${\text{unemployed}, \text{employed}}$. Its stationary distribution is $\pi = \left(\frac{\alpha}{\alpha + \lambda}, \frac{\lambda}{\alpha + \lambda}\right)$, where $\lambda = \gamma \sum_{s \geq s^*} p(w_s)$; note that the flow from unemployed to employed is $\lambda$, while the flow from employed to unemployed is $\alpha$. The expected value at the stationary distribution is $$ \pi_0 U + \pi_1 \frac{\sum_{s \geq s^} p_s V_s}{\sum_{s \geq s^} p_s}. $$ The following implements the job search problem with the analytical solution above: End of explanation class JobSearchModelDiscreteDP(JobSearchModel): Job search model with DiscreteDP. def __init__(self, w, pdf, beta, alpha=0, gamma=1, rho=0): super(JobSearchModelDiscreteDP, self).__init__(w, pdf, beta, alpha, gamma, rho) # Number of states # s = 0, ..., len(w)-1: wage w[s] offered, s = len(w): no offer num_states = len(w) + 1 # Number of actions: 0: reject, 1: accept num_actions = 2 L = num_states*num_actions - 1 s_indices, a_indices = np.empty(L), np.empty(L) s_indices[-1], a_indices[-1] = len(w), 0 s_indices[:-1] = np.repeat(np.arange(len(w)), num_actions) a_indices[:-1] = np.tile(np.arange(num_actions), len(w)) R0 = np.zeros(L) R0[[num_actions*i+1 for i in range(len(w))]] = self.u_w Q = scipy.sparse.lil_matrix((L, num_states)) it = np.nditer((s_indices, a_indices)) for (s, a) in it: i = it.iterindex if a == 0: Q[i, -1] = 1 - self.gamma Q[i, :len(w)] = self.pdf*self.gamma else: # if a == 1 Q[i, s], Q[i, -1] = 1 - self.alpha, self.alpha self.ddp = DiscreteDP(R0, Q, beta, s_indices, a_indices) self.num_iter = None def solve(self, c, *args, **kwargs): n, m = self.ddp.num_states, self.ddp.num_actions self.ddp.R[[m*i for i in range(n)]] = self.u(np.array([c]))[0] res = self.ddp.solve(*args, **kwargs) V = res.v[:-1] # Values of jobs U = res.v[-1] # Value of unemployed C = res.sigma[:-1] self.num_iter = res.num_iter return V, U, C Explanation: For comparison, let us also consider the implementation with the DiscreteDP class: End of explanation w = np.linspace(0, 175, 201) # wage grid # compute probability of each wage level logw_dist = scipy.stats.norm(np.log(20.),1) cdf = logw_dist.cdf(np.log(w)) pdf = cdf[1:]-cdf[:-1] pdf /= pdf.sum() w = (w[1:] + w[:-1])/2 gamma = 1 alpha = 0.013 # Monthly alpha_q = (1-(1-alpha)**3) # Quarterly beta = 0.99 rho = 2 # risk-aversion js = JobSearchModel(w, pdf, beta, alpha_q, gamma, rho) js_ddp = JobSearchModelDiscreteDP(w, pdf, beta, alpha_q, gamma, rho) Explanation: The following paramter values are from lakemodel_example.py. 
End of explanation cs = np.linspace(1, 75, 25) bools = [] for c in cs: V, U, C = js.solve(c=c) V1, U1, C1 = js_ddp.solve(c=c) bools.append(np.allclose(V, V1)) bools.append(np.allclose(U, U1)) bools.append(np.array_equal(C, C1)) print(all(bools)) Explanation: Let us check that the results coincide: End of explanation c = 40 V, U, C = js.solve(c=c) s_star = len(w) - C.sum() print(r"Optimal policy: Accept if and only if w >= {0}".format(w[s_star])) fig, ax = plt.subplots(figsize=(8,5)) ax.plot(w, V, label=r'$V$') ax.plot((w[0], w[-1]), (U, U), 'r--', label=r'$U$') ax.set_xlabel('Wage') ax.set_ylabel('Value') ax.set_title('Optimal value function') plt.legend(loc=2) plt.show() Explanation: Take a look at the optimal solution for $c = 40$ for example: End of explanation c = 40 %timeit js.solve(c=c) %timeit js_ddp.solve(c=c) Explanation: Performance comparison: End of explanation class UnemploymentInsurancePolicy(object): def __init__(self, w, pdf, beta, alpha=0, gamma=1, rho=0): self.w, self.pdf, self.beta, self.alpha, self.gamma, self.rho = \ w, pdf, beta, alpha, gamma, rho def solve_job_search_model(self, c, T): js = JobSearchModel(self.w-T, self.pdf, self.beta, self.alpha, self.gamma, self.rho) V, U, C = js.solve(c=c-T) pi = js.stationary_distribution(C) return V, U, C, pi def implement(self, c): def budget_balance(T): _, _, _, pi = self.solve_job_search_model(c, T) return T - pi[0]*c # Budget balancing tax given c T = scipy.optimize.brentq(budget_balance, 0, c) V, U, C, pi = self.solve_job_search_model(c, T) EV = (C*V).dot(self.pdf)/(C.dot(self.pdf)) W = pi[0] * U + pi[1] * EV return T, W, pi uip = UnemploymentInsurancePolicy(w, pdf, beta, alpha_q, gamma, rho) grid_size = 501 #25 cvec = np.linspace(5, 135, grid_size) Ts, Ws = np.empty(grid_size), np.empty(grid_size) pis = np.empty((grid_size, 2)) for i, c in enumerate(cvec): T, W, pi = uip.implement(c=c) Ts[i], Ws[i], pis[i] = T, W, pi i_max = Ws.argmax() print('Optimal unemployment benefit:', cvec[i_max]) def plot(ax, y_vec, title): ax.plot(cvec, y_vec) ax.set_xlabel(r"$c$") ax.vlines(cvec[i_max], ax.get_ylim()[0], y_vec[i_max], "k", "-.") ax.set_title(title) fig, axes = plt.subplots(2, 2, figsize=(10, 6)) plot(axes[0, 0], Ws, "Welfare") plot(axes[0, 1], Ts, "Taxes") plot(axes[1, 0], pis[:, 1], "Employment Rate") plot(axes[1, 1], pis[:, 0], "Unemployment Rate") plt.tight_layout() plt.show() fig, ax = plt.subplots() plot(ax, cvec-Ts, "Net Compensation") plt.show() Explanation: Optimal unemployment insurance policy We compute the optimal level of unemployment insurance as in the lecture, mimicking lakemodel_example.py. End of explanation
5,053
Given the following text description, write Python code to implement the functionality described below step by step Description: This Colab demonstrates how to use the FuzzBench analysis library to show experiment results that might not be included in the default report. Get the data Each report contains a link to the raw data, e.g., see the bottom of our sample report Step1: Get the code Step2: Experiment results Step3: Top level results Step4: Rank by median on benchmarks, then by average rank Step5: Rank by pair-wise statistical test wins on benchmarks, then by average rank Step6: Rank by median on benchmarks, then by avereage normalized score Step7: Rank by average rank on benchmarks, then by avereage rank Step8: Benchmark level results
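One of the rankings listed above ("rank by median on benchmarks, then by average rank") can be reproduced with plain pandas. The sketch below assumes a long-format table with columns named benchmark, fuzzer and edges_covered, which may differ from the actual CSV schema, and the coverage numbers are invented:
import pandas as pd

# Toy stand-in for the experiment data (column names and values are assumptions).
data = pd.DataFrame({
    'benchmark': ['bm1'] * 4 + ['bm2'] * 4,
    'fuzzer': ['afl', 'afl', 'libfuzzer', 'libfuzzer'] * 2,
    'edges_covered': [100, 120, 150, 140, 80, 90, 70, 75],
})

# Median coverage of each fuzzer on each benchmark.
medians = data.groupby(['benchmark', 'fuzzer'])['edges_covered'].median().unstack()

# Rank fuzzers within each benchmark (1 = best), then average the ranks across benchmarks.
ranks = medians.rank(axis=1, ascending=False)
print(ranks.mean(axis=0).sort_values())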
Python Code: !wget https://www.fuzzbench.com/reports/sample/data.csv.gz Explanation: This Colab demonstrates how to use the FuzzBench analysis library to show experiment results that might not be included in the default report. Get the data Each report contains a link to the raw data, e.g., see the bottom of our sample report: https://www.fuzzbench.com/reports/sample/index.html. Find all our published reports at https://www.fuzzbench.com/reports. End of explanation # Install requirements. !pip install Orange3 pandas scikit-posthocs scipy seaborn # Get FuzzBench !git clone https://github.com/google/fuzzbench.git # Add fuzzbench to PYTHONPATH. import sys; sys.path.append('fuzzbench') Explanation: Get the code End of explanation import pandas from fuzzbench.analysis import experiment_results, plotting from IPython.display import SVG, Image # Load the data and initialize ExperimentResults. experiment_data = pandas.read_csv('data.csv.gz') fuzzer_names = experiment_data.fuzzer.unique() plotter = plotting.Plotter(fuzzer_names) results = experiment_results.ExperimentResults(experiment_data, '.', plotter) Explanation: Experiment results End of explanation results.summary_table Explanation: Top level results End of explanation # The critial difference plot visualizes this ranking SVG(results.critical_difference_plot) results.rank_by_median_and_average_rank.to_frame() Explanation: Rank by median on benchmarks, then by average rank End of explanation results.rank_by_stat_test_wins_and_average_rank.to_frame() Explanation: Rank by pair-wise statistical test wins on benchmarks, then by average rank End of explanation results.rank_by_median_and_average_normalized_score.to_frame() Explanation: Rank by median on benchmarks, then by avereage normalized score End of explanation results.rank_by_average_rank_and_average_rank.to_frame() Explanation: Rank by average rank on benchmarks, then by avereage rank End of explanation # List benchmarks benchmarks = {b.name:b for b in results.benchmarks} for benchmark_name in benchmarks.keys(): print(benchmark_name) sqlite = benchmarks['sqlite3_ossfuzz'] SVG(sqlite.violin_plot) SVG(sqlite.coverage_growth_plot) SVG(sqlite.mann_whitney_plot) # Show p values sqlite.mann_whitney_p_values Explanation: Benchmark level results End of explanation
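For the pair-wise statistical-test ranking mentioned above, the underlying comparison between two fuzzers on one benchmark is typically a Mann-Whitney U test over their per-trial coverage samples. A generic scipy illustration, with invented sample values rather than real trial results:
from scipy.stats import mannwhitneyu

# Final coverage reached by each trial of two fuzzers on one benchmark (toy numbers).
fuzzer_a = [1010, 1032, 998, 1054, 1021]
fuzzer_b = [957, 989, 1003, 962, 978]

stat, p_value = mannwhitneyu(fuzzer_a, fuzzer_b, alternative='two-sided')
print(stat, p_value)  # a small p-value counts as a "win" for the better-performing fuzzer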
5,054
Given the following text description, write Python code to implement the functionality described below step by step Description: Instalación Para el correcto funcionamiento del código realizado para este proyecto es necesario seguir las siguientes indicaciones Step1: Para la extracción y manejo de datos se creó una matriz con los datos necesarios para su correcta consulta,esto se realiza con la función Datos_necesarios() ubicada en consultas.py Consulta Para realizar la consulta debe tener en cuenta los siguientes requerimientos Step2: ¿Cómo funciona el programa? Los códigos realizados para este proyecto de aula estan implementados en python y utilizan los siguientes paquetes Step3: El codigo contenido en GenArEst.py lo que hara es generar una serie de archivos- necesarios para las estadisticas descriptiva e inferencial. Estadistica
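A rough, illustrative version of the lookup that the Consulta() function is described as performing, written with pandas instead of the project's own matrix code; the tiny table below stands in for tzr.csv, and its column names are assumptions about the 911 file rather than something taken from the repository:
import pandas as pd

# Tiny toy stand-in for tzr.csv (column names are an assumption about the 911 data).
calls = pd.DataFrame({
    'title': ['EMS: FALL VICTIM', 'Fire: BUILDING FIRE', 'Traffic: VEHICLE ACCIDENT'],
    'desc': ['FALL VICTIM', 'BUILDING FIRE', 'VEHICLE ACCIDENT'],
    'timeStamp': ['2015-12-10 08:07:00', '2015-12-10 17:40:00', '2015-10-10 14:39:21'],
})

def consulta(column, value):
    # Return every row whose chosen column contains the searched text (case-insensitive).
    return calls[calls[column].astype(str).str.contains(value, case=False, na=False)]

print(consulta('title', 'EMS'))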
Python Code: %%bash python descarga.py Explanation: Instalación Para el correcto funcionamiento del código realizado para este proyecto es necesario seguir las siguientes indicaciones: 1. Instalar los paquetes beautifulsoup4 y Requests en Python: + pip install beautifulsoup4 Requests. 2. Instalar Numpy. + sudo pip install numpy 4. Instalar Ipython. + pip install ipython 5. Instalar R. + apt install -y r-base 5. Intalar Gnuplot. + apt -y install gnuplot 1. Ahora, para dar soporte al uso de gnuplot en el notebook: * pip install --upgrade --no-cache-dir git+https://github.com/has2k1/gnuplot_kernel.git@master 6. Clonar el repositorio: + git clone https://github.com/manuela98/Emergencias_911_.git Inicialización Dirijase a la carpeta Codigo y ejecute descarga.py para descargar el archivo tzr.csv que contienen los datos con los que se trabajara. End of explanation #Ejemplo from IPython.display import display, Markdown import consulta as c display(Markdown(c.mark(c.Consulta()))) Explanation: Para la extracción y manejo de datos se creó una matriz con los datos necesarios para su correcta consulta,esto se realiza con la función Datos_necesarios() ubicada en consultas.py Consulta Para realizar la consulta debe tener en cuenta los siguientes requerimientos: Escribir las siguientes lineas de código en una celda posterior a esta: from IPython.display import display, Markdown import consulta as c display(Markdown(c.Consulta())) Al ejecutar la celda, aparecerá un cuadro en el cuál deberá ingresar el tipo de dato que desea buscar. Inmediatamente despues aparecerá otro cuadro en donde debe ingresar lo que desea buscar. Las opciones son las siguites: Tittle : Sí su búsqueda es por tipo de emergencia. Ejemplos: EMS, Fire, Traffic Description : Sí su búsqueda es por descripción de la emergencia. Ejemplos: FALL VICTIM , RESPIRATORY EMERGENCY , VEHICLE ACCIDENT Date : Sí su búsqueda es por le fecha en que ocurrió la llamada. Ejemplos: 2015-12-10 , 2015-10-10 Hour : Sí su búsqueda es por la hora en que ocurrió la llamada. Ejemplos: 08:07:00 , 08:47:13 Si ha realizado la búsqueda de manera correcta se mostrará una tabla con las coincidencias del dato ingresado. la tabla le brindará una información completa que contiene Titulo, descripción, fecha y hora independientemente de cual haya sido el tipo de dato a buscar. End of explanation %%bash python GenArEst.py Explanation: ¿Cómo funciona el programa? Los códigos realizados para este proyecto de aula estan implementados en python y utilizan los siguientes paquetes: + Requests + Beautifulsoup4 + Webbrowser + numpy + Ipython descarga.py Este archivo es el encargado de la extracción web de la base de datos que se descargará como archivo de texto plano con el nombre de tzr.csv. Este código ingresa a la página http://montcoalert.org/gettingdata/ que es nuestra fuente de datos y busca la url del archivo tzr.csv que esta nos facilita y lo descarga en el directorio Codigo. consulta.py Este archivo contiene dos funciones: + Datos_necesarios():Carga el archivo de texto a una matriz y luego selecciona las columnas de los datos con mas relevancia en este caso y lo carga a la matriz donde se realiza la consulta. + Consulta():Solicita al usuario que ingrese el tipo de dato y el dato de búsqueda,luego llama a la función Datos_necesarios() que se encarga de buscar las coincidencias del dato ingresado en la matriz. Inmediatamente después genera una tabla markdown con los resultados de la búsqueda. 
Estadistica-Analisis de Datos Despúes de realizar la instalacion,inicializacion indicadas anteriormente(si no esta en el directorio Codigo dirigase a el) y ejecute GenArEst.py como se muestra a continuación. End of explanation %%bash Rscript Estadistica.R #Estes archivo contiene el codigo que genera la estadistica y las imagenes. Explanation: El codigo contenido en GenArEst.py lo que hara es generar una serie de archivos- necesarios para las estadisticas descriptiva e inferencial. Estadistica: Se tomo un conjunto de datos de interes para realizar estadistica basica con r(media,max,min,desviacion estandar)tanto por dias como por meses: End of explanation
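The descriptive statistics that Estadistica.R is said to produce (mean, max, min and standard deviation, per day and per month) can also be sketched in pandas; everything below, including the daily counts, is illustrative rather than the project's actual R code or data:
import pandas as pd

# Toy daily call counts (invented numbers).
daily = pd.DataFrame({
    'date': pd.date_range('2015-12-01', periods=90, freq='D'),
    'calls': [110, 95, 130] * 30,
})

# Per-month summary: mean, max, min and standard deviation of the daily counts.
monthly_stats = (daily
                 .groupby(daily['date'].dt.to_period('M'))['calls']
                 .agg(['mean', 'max', 'min', 'std']))
print(monthly_stats)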
5,055
Given the following text description, write Python code to implement the functionality described below step by step Description: Using FastText via Gensim This tutorial is about using fastText model in Gensim. There are two ways you can use fastText in Gensim - Gensim's native implementation of fastText and Gensim wrapper for fastText's original C++ code. Here, we'll learn to work with fastText library for training word-embedding models, saving & loading them and performing similarity operations & vector lookups analogous to Word2Vec. When to use FastText? The main principle behind fastText is that the morphological structure of a word carries important information about the meaning of the word, which is not taken into account by traditional word embeddings, which train a unique word embedding for every individual word. This is especially significant for morphologically rich languages (German, Turkish) in which a single word can have a large number of morphological forms, each of which might occur rarely, thus making it hard to train good word embeddings. fastText attempts to solve this by treating each word as the aggregation of its subwords. For the sake of simplicity and language-independence, subwords are taken to be the character ngrams of the word. The vector for a word is simply taken to be the sum of all vectors of its component char-ngrams. According to a detailed comparison of Word2Vec and FastText in this notebook, fastText does significantly better on syntactic tasks as compared to the original Word2Vec, especially when the size of the training corpus is small. Word2Vec slightly outperforms FastText on semantic tasks though. The differences grow smaller as the size of training corpus increases. Training time for fastText is significantly higher than the Gensim version of Word2Vec (15min 42s vs 6min 42s on text8, 17 mil tokens, 5 epochs, and a vector size of 100). fastText can be used to obtain vectors for out-of-vocabulary (OOV) words, by summing up vectors for its component char-ngrams, provided at least one of the char-ngrams was present in the training data. Training models For the following examples, we'll use the Lee Corpus (which you already have if you've installed gensim) for training our model. For using the wrapper for fastText, you need to have fastText setup locally to be able to train models. See installation instructions for fastText if you don't have fastText installed already. Using Gensim's implementation of fastText Step1: Using wrapper for fastText's C++ code Step2: Training hyperparameters Hyperparameters for training the model follow the same pattern as Word2Vec. FastText supports the folllowing parameters from the original word2vec - - model Step3: The save_word2vec_method causes the vectors for ngrams to be lost. As a result, a model loaded in this way will behave as a regular word2vec model. Word vector lookup Note Step4: The word vector lookup operation only works if at least one of the component character ngrams is present in the training corpus. For example - Step5: The in operation works slightly differently from the original word2vec. It tests whether a vector for the given word exists or not, not whether the word is present in the word vocabulary. To test whether a word is present in the training word vocabulary - Step6: Similarity operations Similarity operations work the same way as word2vec. Out-of-vocabulary words can also be used, provided they have at least one character ngram present in the training data. 
Step7: Syntactically similar words generally have high similarity in fastText models, since a large number of the component char-ngrams will be the same. As a result, fastText generally does better at syntactic tasks than Word2Vec. A detailed comparison is provided here. Other similarity operations
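A compact end-to-end illustration of the similarity behaviour described above, reusing the same (older) gensim FastText calls that this tutorial relies on; the toy corpus is made up, so the numbers it returns will not be meaningful, and a real corpus such as the Lee corpus is needed for sensible neighbours:
from gensim.models.fasttext import FastText as FT_gensim

# Tiny invented corpus, repeated so training has something to iterate over.
sentences = [
    ['night', 'nights', 'nightly', 'knight'],
    ['day', 'days', 'daily', 'today'],
] * 50

model = FT_gensim(size=50, min_count=1, min_n=3, max_n=6)
model.build_vocab(sentences)
model.train(sentences, total_examples=model.corpus_count, epochs=model.iter)

# In-vocabulary and out-of-vocabulary lookups both work, because char ngrams are shared.
print(model.similarity('night', 'nights'))
print('nightsss' in model.wv.vocab)   # False: not a vocabulary word...
print(model['nightsss'][:5])          # ...but a vector can still be composed from its ngrams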
Python Code: import gensim import os from gensim.models.word2vec import LineSentence from gensim.models.fasttext import FastText as FT_gensim # Set file names for train and test data data_dir = '{}'.format(os.sep).join([gensim.__path__[0], 'test', 'test_data']) + os.sep lee_train_file = data_dir + 'lee_background.cor' lee_data = LineSentence(lee_train_file) model_gensim = FT_gensim(size=100) # build the vocabulary model_gensim.build_vocab(lee_data) # train the model model_gensim.train(lee_data, total_examples=model_gensim.corpus_count, epochs=model_gensim.iter) print(model_gensim) Explanation: Using FastText via Gensim This tutorial is about using fastText model in Gensim. There are two ways you can use fastText in Gensim - Gensim's native implementation of fastText and Gensim wrapper for fastText's original C++ code. Here, we'll learn to work with fastText library for training word-embedding models, saving & loading them and performing similarity operations & vector lookups analogous to Word2Vec. When to use FastText? The main principle behind fastText is that the morphological structure of a word carries important information about the meaning of the word, which is not taken into account by traditional word embeddings, which train a unique word embedding for every individual word. This is especially significant for morphologically rich languages (German, Turkish) in which a single word can have a large number of morphological forms, each of which might occur rarely, thus making it hard to train good word embeddings. fastText attempts to solve this by treating each word as the aggregation of its subwords. For the sake of simplicity and language-independence, subwords are taken to be the character ngrams of the word. The vector for a word is simply taken to be the sum of all vectors of its component char-ngrams. According to a detailed comparison of Word2Vec and FastText in this notebook, fastText does significantly better on syntactic tasks as compared to the original Word2Vec, especially when the size of the training corpus is small. Word2Vec slightly outperforms FastText on semantic tasks though. The differences grow smaller as the size of training corpus increases. Training time for fastText is significantly higher than the Gensim version of Word2Vec (15min 42s vs 6min 42s on text8, 17 mil tokens, 5 epochs, and a vector size of 100). fastText can be used to obtain vectors for out-of-vocabulary (OOV) words, by summing up vectors for its component char-ngrams, provided at least one of the char-ngrams was present in the training data. Training models For the following examples, we'll use the Lee Corpus (which you already have if you've installed gensim) for training our model. For using the wrapper for fastText, you need to have fastText setup locally to be able to train models. See installation instructions for fastText if you don't have fastText installed already. 
Using Gensim's implementation of fastText End of explanation from gensim.models.wrappers.fasttext import FastText as FT_wrapper # Set FastText home to the path to the FastText executable ft_home = '/home/chinmaya/GSOC/Gensim/fastText/fasttext' # train the model model_wrapper = FT_wrapper.train(ft_home, lee_train_file) print(model_wrapper) Explanation: Using wrapper for fastText's C++ code End of explanation # saving a model trained via Gensim's fastText implementation model_gensim.save('saved_model_gensim') loaded_model = FT_gensim.load('saved_model_gensim') print(loaded_model) # saving a model trained via fastText wrapper model_wrapper.save('saved_model_wrapper') loaded_model = FT_wrapper.load('saved_model_wrapper') print(loaded_model) Explanation: Training hyperparameters Hyperparameters for training the model follow the same pattern as Word2Vec. FastText supports the folllowing parameters from the original word2vec - - model: Training architecture. Allowed values: cbow, skipgram (Default cbow) - size: Size of embeddings to be learnt (Default 100) - alpha: Initial learning rate (Default 0.025) - window: Context window size (Default 5) - min_count: Ignore words with number of occurrences below this (Default 5) - loss: Training objective. Allowed values: ns, hs, softmax (Default ns) - sample: Threshold for downsampling higher-frequency words (Default 0.001) - negative: Number of negative words to sample, for ns (Default 5) - iter: Number of epochs (Default 5) - sorted_vocab: Sort vocab by descending frequency (Default 1) - threads: Number of threads to use (Default 12) In addition, FastText has three additional parameters - - min_n: min length of char ngrams (Default 3) - max_n: max length of char ngrams (Default 6) - bucket: number of buckets used for hashing ngrams (Default 2000000) Parameters min_n and max_n control the lengths of character ngrams that each word is broken down into while training and looking up embeddings. If max_n is set to 0, or to be lesser than min_n, no character ngrams are used, and the model effectively reduces to Word2Vec. To bound the memory requirements of the model being trained, a hashing function is used that maps ngrams to integers in 1 to K. For hashing these character sequences, the Fowler-Noll-Vo hashing function (FNV-1a variant) is employed. Note: As in the case of Word2Vec, you can continue to train your model while using Gensim's native implementation of fastText. However, continuation of training with fastText models while using the wrapper is not supported. Saving/loading models Models can be saved and loaded via the load and save methods. End of explanation print('night' in model_wrapper.wv.vocab) print('nights' in model_wrapper.wv.vocab) print(model_wrapper['night']) print(model_wrapper['nights']) Explanation: The save_word2vec_method causes the vectors for ngrams to be lost. As a result, a model loaded in this way will behave as a regular word2vec model. Word vector lookup Note: Operations like word vector lookups and similarity queries can be performed in exactly the same manner for both the implementations of fastText so they have been demonstrated using only the fastText wrapper here. FastText models support vector lookups for out-of-vocabulary words by summing up character ngrams belonging to the word. 
End of explanation # Raises a KeyError since none of the character ngrams of the word `axe` are present in the training data model_wrapper['axe'] Explanation: The word vector lookup operation only works if at least one of the component character ngrams is present in the training corpus. For example - End of explanation # Tests if word present in vocab print("word" in model_wrapper.wv.vocab) # Tests if vector present for word print("word" in model_wrapper) Explanation: The in operation works slightly differently from the original word2vec. It tests whether a vector for the given word exists or not, not whether the word is present in the word vocabulary. To test whether a word is present in the training word vocabulary - End of explanation print("nights" in model_wrapper.wv.vocab) print("night" in model_wrapper.wv.vocab) model_wrapper.similarity("night", "nights") Explanation: Similarity operations Similarity operations work the same way as word2vec. Out-of-vocabulary words can also be used, provided they have at least one character ngram present in the training data. End of explanation # The example training corpus is a toy corpus, results are not expected to be good, for proof-of-concept only model_wrapper.most_similar("nights") model_wrapper.n_similarity(['sushi', 'shop'], ['japanese', 'restaurant']) model_wrapper.doesnt_match("breakfast cereal dinner lunch".split()) model_wrapper.most_similar(positive=['baghdad', 'england'], negative=['london']) question_file_path = data_dir + 'questions-words.txt' model_wrapper.accuracy(questions=question_file_path) # Word Movers distance sentence_obama = 'Obama speaks to the media in Illinois'.lower().split() sentence_president = 'The president greets the press in Chicago'.lower().split() # Remove their stopwords. from nltk.corpus import stopwords stopwords = stopwords.words('english') sentence_obama = [w for w in sentence_obama if w not in stopwords] sentence_president = [w for w in sentence_president if w not in stopwords] # Compute WMD. distance = model_wrapper.wmdistance(sentence_obama, sentence_president) distance Explanation: Syntactically similar words generally have high similarity in fastText models, since a large number of the component char-ngrams will be the same. As a result, fastText generally does better at syntactic tasks than Word2Vec. A detailed comparison is provided here. Other similarity operations End of explanation
5,056
Given the following text description, write Python code to implement the functionality described below step by step Description: Detailed A/B Test Experiment Problem There are 2 options for the landing page Step1: Data Overview Each pgeview is a unique cookie Control group Step2: Step 1 - Choose Metrics Invariate Metrics Ck = unique daily cookies (pageviews) on a page Dmin = 3000 Cl = unique daily clicks on the free trial button Dmin = 240 CTP = Cl/Ck, free trial button click through probability Dmin = 0.01 Evaluation Metrics GConversion = enrolled/Cl It's gross conversion Dmin = 0.01 Probability of daily enrolled among daily clicked free trial button Retention = paid/enrolled Dmin = 0.01 Probability of daily paid among daily enrolled NConversion = paid/Cl It's net conversion Dmin = 0.01 Probability of daily paid among daily clicked free trial button Step 2.1 - Estimate Metrics Baseline Values The baseline values are the values of these metrics before the change. These values are given by the data provider Udacity, it's their rough estimation. Step4: Step 2.2 Estimate Standard Deviation & Sample Size Estimate Standard Deviation of Metrics This is for later estimating sample size, confidence interval. <b>The more variant of a metric, the more difficult to reach to a significant result.</b> In order to estimate variance, <b>assume metrics probabilities p_hat are binomially distributed</b>, (probability density function PDF is binomially distribution), then standard deviation is Step5: Gross Conversion std p_hat = enrolled/Cl probability of daily enrolled among daily clicked free trial button Step6: Revention std Step7: Net Conversion std Step8: Estimate Sample Size Hypothesis H0 Step9: Gross Conversion Sample Size Calculated sample_size means estimated number of enrolled in each group In order to get estimated page_views (unique cookies), needs to use sample_size/(400/5000) * 2 for both control and experiments groups multuple 2 means for both groups Step10: Retention Sample Size Calculated sample_size means estimated number of paid in each group In order to get estimated page_views (unique cookies), needs to use sample_size/(82.5/5000) * 2 for both control and experiments groups multuple 2 means for both groups The required page_views is too large, if 40000 views a day, it will take 100 days to get the data. So this metric Retention might be given up. Step11: Net Conversion Sample Size Calculated sample_size means estimated number of paid in each group In order to get estimated page_views (unique cookies), needs to use sample_size/(400/5000) * 2 for both control and experiments groups multuple 2 means for both groups Assume 40000 page_views per day, in order to get such amount page views, it takes about 3 weeks. Step12: Step 3 - Control Group vs. Experiment Group Step13: Step 3.1 - Differences in Invariant Metrics (Sanity Check) The goal is to verify the experiment is conducted as expected, and won't be affected by other factors. Also to make sure the data collection is correct. Invariant Metrics Ck = unique daily cookies (pageviews) on a page Cl = unique daily clicks on the free trial button CTP = Cl/Ck, free trial button click through probability <b>We need to compare the invariant metrics in both groups, to make sure the differences are not significant.</b> Step14: Compare pageviews We want to verify that the difference between pageview counts in the 2 groups are not significant. When sample size n is large enough, it can be approximated as normal distribution. 
We want to test that the observed p_hat = control group pageviews/both groups pageviews is not significantly different from p=0.5. Margin of Error ME = Z1-α/2 * std We need to calculate ME at the 95% confidence level Then we will get Confidence Interval CI = [p_hat - ME, p_hat + ME] If p=0.5 is within CI, then the difference between the 2 groups is expected Step15: Compare clicks Similar to the pageviews comparison above. Step16: Compare CTP (Click Through Probability) Because CTP is a proportion in the population, we need to use the pooled standard deviation to calculate the margin of error. p_pool = (experiment_clicks + control_clicks)/(experiment_pageviews + control_pageviews) std_pool = sqrt(p_pool*(1-p_pool)*(1/experiment_pageviews + 1/control_pageviews)) Step17: Summary for Sanity Check We have checked all 3 invariant metrics between the 2 groups, and all show that the differences are not significant, so the experiment with these 2 groups should be less likely to be affected by other factors. Step 3.2 - Differences in Evaluation Metrics Similar to the sanity check above, here we check the differences in the evaluation metrics between the 2 groups, to see Step18: Compare Gross Conversion The method here is almost the same as what's used in "Compare CTP" above. <b>Observation</b> As we can see in the result, the change in the experiment group is both statistically and practically significant. From the control group to the experiment group, there is a 2.06% decrease in enrollment, and it's significant. So fewer people will enroll when the website shows the free trial option, compared to the access-to-materials option. Step19: Compare Net Conversion <b>Observation</b> As we can see, it's not statistically significant but it is practically significant. From the control group to the experiment group, there is a 0.49% drop. Practical significance here means there will be fewer payments for the free trial option, compared with the access-to-materials option. This drop will affect the business. Step20: Step 3.3 - Differences in Trending Sign Test The purpose of this test is to check whether the decrease/increase trend is evident in the daily data. prob(success) = (n!/(x! * (n-x)!)) * pow(p, x) * pow(1-p, n-x) x - number of "success", success means when experiment group increased from control group in the record n - total records p - the probability of being success, this is binomial distribution, so p=0.5 p-value is the sum of prob(success) from each success record. When p-value is smaller than alpha, then the success is significant. This is the <b>online calculator</b> to get p-value
Python Code: import pandas as pd import math import numpy as np from scipy.stats import norm Explanation: Detailed A/B Test Experiment Problem There are 2 options for the landing page: "start free trial" If the visitor clicks this option, will be asked for credit card info, and after 14 days they will be charged automatically. "access course materials" If the visitor clicks this option, they can watch videos and take the quiz for free, but won't get coaching nor certificate. The goal for A/B test here is to see which option could help maximize the course completion. Hmm, in fact this can be hard to guess, since when it's free students may not finish the course, even though there might be more clicks; when it's paid trail, students may be more likely to finish the course even though there might be less visitors. Hypothesis H0: P_control - P_experiment = 0 P_control is the conversion rate of control group, P_experiment is the conversion rate of experiment group. H1: P_control - P_experiment = d d is the detectable effect When the results appear to be significant, we reject H0. Metrics Invariate Metrics They are majorly used for sanity check. Pick those metrics considered not to change, and make sire they won't change dramatically between the control and experiment groups. Evaluation Metrics The metrics are expected to see changes between the control and the experiment group.These metrics are related to the business goals. Each metrics has a Dmin, indicating the min change that's practically significant to the business. Overall Process Choose invariant metrics and evaluation metrics. Estimation of baselines and Sample Size Estimate population metrics baseline Estimate sample metrics baseline Estimate sample size for each evaluation metrics, only keep the metrics that have practical amount of sample size. Verify null hypothese - Control Group vs. Experiment Group Sanity check with invariant metrics Differences in evaluation metrics, using confidence interval, Dmin to check both statistical, practical significance. Differences in trending, using p_value to check statistical significance. End of explanation # Control group control_df = pd.read_csv('ab_control.csv') control_df.head() # Experiment group experiment_df = pd.read_csv('ab_experiment.csv') experiment_df.head() control_df.describe() experiment_df.describe() Explanation: Data Overview Each pgeview is a unique cookie Control group: "access course materials" option Experiment group: "free trail" option End of explanation baseline = {'Cookies': 40000, 'Clicks': 3200, 'Enrollments': 660, 'CTP': 0.08, 'GConversion': 0.20625, 'Retention': 0.53, 'NConversion': 0.109313} Explanation: Step 1 - Choose Metrics Invariate Metrics Ck = unique daily cookies (pageviews) on a page Dmin = 3000 Cl = unique daily clicks on the free trial button Dmin = 240 CTP = Cl/Ck, free trial button click through probability Dmin = 0.01 Evaluation Metrics GConversion = enrolled/Cl It's gross conversion Dmin = 0.01 Probability of daily enrolled among daily clicked free trial button Retention = paid/enrolled Dmin = 0.01 Probability of daily paid among daily enrolled NConversion = paid/Cl It's net conversion Dmin = 0.01 Probability of daily paid among daily clicked free trial button Step 2.1 - Estimate Metrics Baseline Values The baseline values are the values of these metrics before the change. These values are given by the data provider Udacity, it's their rough estimation. 
End of explanation sample_baseline = baseline.copy() sample_baseline['Cookies'] = 5000 # assume sample size is 5000 sample_baseline['Clicks'] = baseline['Clicks'] * 5000/baseline['Cookies'] sample_baseline['Enrollments'] = baseline['Enrollments'] * 5000/baseline['Cookies'] sample_baseline def get_binomial_std(p_hat, n): p_hat: baseline probability of the event to occur n: sample size return: the standard deviation std = round(math.sqrt(p_hat*(1-p_hat)/n),4) return std Explanation: Step 2.2 Estimate Standard Deviation & Sample Size Estimate Standard Deviation of Metrics This is for later estimating sample size, confidence interval. <b>The more variant of a metric, the more difficult to reach to a significant result.</b> In order to estimate variance, <b>assume metrics probabilities p_hat are binomially distributed</b>, (probability density function PDF is binomially distribution), then standard deviation is: std = sqrt(p_hat*(1-p_hat)/n) p_hat: baseline probability of the event to occur n: sample size The reason we assume it's binomial distribution for probability density function is because, the logic here is if not option A then option B. The assumption is only valid when the unit of diversion of the experiment is equal to the unit of analysis (the denominator of the metric formula). If the assumption is not valid, the calculated std can be different and better to estimate empirically. End of explanation gross_conversion = {} gross_conversion['d_min'] = 0.01 gross_conversion['p_hat'] = sample_baseline['GConversion'] gross_conversion['n'] = sample_baseline['Clicks'] gross_conversion['std'] = get_binomial_std(gross_conversion['p_hat'], gross_conversion['n']) gross_conversion Explanation: Gross Conversion std p_hat = enrolled/Cl probability of daily enrolled among daily clicked free trial button End of explanation retention = {} retention['d_min'] = 0.01 retention['p_hat'] = sample_baseline['Retention'] retention['n'] = sample_baseline['Enrollments'] retention['std'] = get_binomial_std(retention['p_hat'], retention['n']) retention Explanation: Revention std End of explanation net_conversion = {} net_conversion['d_min'] = 0.0075 net_conversion['p_hat'] = sample_baseline['NConversion'] net_conversion['n'] = sample_baseline['Clicks'] net_conversion['std'] = get_binomial_std(net_conversion['p_hat'], net_conversion['n']) net_conversion Explanation: Net Conversion std End of explanation def get_z_score(alpha): return norm.ppf(alpha) def get_stds(p, d): std1 = math.sqrt(2*p*(1-p)) std2 = math.sqrt(p*(1-p) + (p+d)*(1-(p+d))) std_lst = [std1, std2] return std_lst def get_sample_size(std_lst, alpha, beta, d): n = pow(get_z_score(1-alpha/2)*std_lst[0] + get_z_score(1-beta)*std_lst[1], 2)/pow(d,2) return n alpha = 0.05 beta = 0.2 Explanation: Estimate Sample Size Hypothesis H0: P_control - P_experiment = 0 P_control is the conversion rate of control group, P_experiment is the conversion rate of experiment group. 
H1: P_control - P_experiment = d d is the detectable effect Sample Size Formula n = pow(Z1-α/2 * std1 + Z1-β * std2, 2)/pow(d, 2) Z1-α/2 is the Z score for 1-α/2, α is the probability of Type I error Z1-β is the Z score for 1-β (Power), β is the probability of Type II error std1 = sqrt(2p*(1-p)) std2 = sqrt(p*(1-p) + (p+d)*(1-(p+d))) p is the baseline conversion rate, it's the p_hat from above d is the detectable effect, it's the d_min from above This is the <b>online calculator</b> for sample size: https://www.evanmiller.org/ab-testing/sample-size.html Given p, d, α and 1-β End of explanation gross_conversion['sample_size'] = round(get_sample_size(get_stds(gross_conversion['p_hat'], gross_conversion['d_min']), alpha, beta, gross_conversion['d_min'])) gross_conversion['page_views'] = 2*round(gross_conversion['sample_size']/(gross_conversion['n']/5000)) gross_conversion Explanation: Gross Conversion Sample Size Calculated sample_size means estimated number of enrolled in each group In order to get estimated page_views (unique cookies), needs to use sample_size/(400/5000) * 2 for both control and experiments groups multuple 2 means for both groups End of explanation retention['sample_size'] = round(get_sample_size(get_stds(retention['p_hat'], retention['d_min']), alpha, beta, retention['d_min'])) retention['page_views'] = 2*round(retention['sample_size']/(retention['n']/5000)) retention Explanation: Retention Sample Size Calculated sample_size means estimated number of paid in each group In order to get estimated page_views (unique cookies), needs to use sample_size/(82.5/5000) * 2 for both control and experiments groups multuple 2 means for both groups The required page_views is too large, if 40000 views a day, it will take 100 days to get the data. So this metric Retention might be given up. End of explanation net_conversion['sample_size'] = round(get_sample_size(get_stds(net_conversion['p_hat'], net_conversion['d_min']), alpha, beta, net_conversion['d_min'])) net_conversion['page_views'] = 2*round(net_conversion['sample_size']/(net_conversion['n']/5000)) net_conversion Explanation: Net Conversion Sample Size Calculated sample_size means estimated number of paid in each group In order to get estimated page_views (unique cookies), needs to use sample_size/(400/5000) * 2 for both control and experiments groups multuple 2 means for both groups Assume 40000 page_views per day, in order to get such amount page views, it takes about 3 weeks. End of explanation control_df.head() experiment_df.head() Explanation: Step 3 - Control Group vs. Experiment Group End of explanation p=0.5 alpha=0.05 def get_std(p, total_size): std = math.sqrt(p*(1-p)/total_size) return std def get_marginOferror(std, alpha): me = round(get_z_score(1-alpha/2)*std, 4) return me Explanation: Step 3.1 - Differences in Invariant Metrics (Sanity Check) The goal is to verify the experiment is conducted as expected, and won't be affected by other factors. Also to make sure the data collection is correct. 
Invariant Metrics Ck = unique daily cookies (pageviews) on a page Cl = unique daily clicks on the free trial button CTP = Cl/Ck, free trial button click through probability <b>We need to compare the invariant metrics in both groups, to make sure the differences are not significant.</b> End of explanation control_pageviews = control_df['Pageviews'].sum() experiment_pageviews = experiment_df['Pageviews'].sum() print(control_pageviews, experiment_pageviews) total_pageviews = control_pageviews + experiment_pageviews p_hat = control_pageviews/(total_pageviews) std = get_std(p, total_pageviews) me = get_marginOferror(std, alpha) print('If ' + str(p) +' is within [' + str(round(p_hat - me, 4)) + ', ' + str(round(p_hat + me, 4)) + '], then the difference is expected.') Explanation: Compare pageviews We want to verify that the difference between pageview counts in the 2 groups are not significant. When sample size n is large enough, it can be approximated as normal distribution. We want to test that pbserved p_hat = control group pageviews/both groups pageviews is not significantly different from p=0.5. Margin of Error ME = Z1-α/2 * std We need to calculate ME at 95% confidence interval Then we will get Confidence Interval CI = [p_hat - ME, p_hat + ME] If p=0.5 is within CI, then the difference between the 2 groups are expected End of explanation control_clicks = control_df['Clicks'].sum() experiment_clicks = experiment_df['Clicks'].sum() print(control_clicks, experiment_clicks) total_clicks = control_clicks + experiment_clicks p_hat = control_clicks/(total_clicks) std = get_std(p, total_clicks) me = get_marginOferror(std, alpha) print('If ' + str(p) +' is within [' + str(round(p_hat - me, 4)) + ', ' + str(round(p_hat + me, 4)) + '], then the difference is expected.') Explanation: Compare clicks Similar to pageviews comparison above. End of explanation control_ctp = control_clicks/control_pageviews experiment_ctp = experiment_clicks/experiment_pageviews p_pool = (control_clicks + experiment_clicks)/(control_pageviews + experiment_pageviews) std_pool = math.sqrt(p_pool*(1-p_pool)*(1/experiment_pageviews + 1/control_pageviews)) me = get_marginOferror(std_pool, alpha) diff = round(experiment_ctp - control_ctp, 4) print('If ' + str(diff) +' is within [' + str(round(0 - me, 4)) + ', ' + str(round(0 + me, 4)) + '], then the difference is expected.') Explanation: Compare CTP (Click Through Probability) Because CTP is a proportion in the population, so we need to use pooled standard deviation to calculate the margin of error. p_pool = (experiment_clicks + control_clicks)/(experiment_pageviews + control_pageviews) std_pool = sqrt(p_pool*(1-p_pool)*(1/experiment_pageviews + 1/control_pageviews)) End of explanation print(control_df.isnull().sum()) print() print(experiment_df.isnull().sum()) Explanation: Summary for Sanity Check We have checked all the 3 invariant metrics between the 2 groups, all showing the differences are not significant, so the experiments of using these 2 groups should be less likely affected by other factors. Step 3.2 - Differences in Evaluation Metrics Similar to the sanity check above, here is to check the differences in evaluation metrics between the 2 groups,to see: Whether the differences are statistically significant. Whether the differences are practically significant. So that the changes are big enough to be beneficial to the business. 
Difference is not included in the confidence interval [Dmin - ME, Dmin + ME] As Step 2 has found, Gross Conversion and Net Conversion can be the evaluation metrics, while Retention is not. All because of the limitation in data collection in reality. Evaluation Metrics GConversion = enrolled/Cl It's gross conversion Dmin = 0.01 Probability of daily enrolled among daily clicked free trial button NConversion = paid/Cl It's net conversion Dmin = 0.01 Probability of daily paid among daily clicked free trial button End of explanation control_clicks = control_df['Clicks'].loc[control_df['Enrollments'].notnull()].sum() experiment_clicks = experiment_df['Clicks'].loc[experiment_df['Enrollments'].notnull()].sum() print('Clicks', control_clicks, experiment_clicks) control_enrolls = control_df['Enrollments'].sum() experiment_enrolls = experiment_df['Enrollments'].sum() print('Enrollments', control_enrolls, experiment_enrolls) control_GC = control_enrolls/control_clicks experiment_GC = experiment_enrolls/experiment_clicks print('Gross Conversion', control_GC, experiment_GC) p_pool = (control_enrolls + experiment_enrolls)/(control_clicks + experiment_clicks) std_pool = math.sqrt(p_pool*(1-p_pool)*(1/control_clicks + 1/experiment_clicks)) me = get_marginOferror(std_pool, alpha) print(p_pool, std_pool, me) # Statistical significance GC_diff = round(experiment_GC - control_GC, 4) print('If ' + str(GC_diff) +' is within [' + str(round(0 - me, 4)) + ', ' + str(round(0 + me, 4)) + '], then the difference is expected, and the change is not significant.') # Practically significance d_min = gross_conversion['d_min'] print('If ' + str(GC_diff) +' is within [' + str(round(d_min - me, 4)) + ', ' + str(round(d_min + me, 4)) + '], then the change is not practically significant.') Explanation: Compare Gross Conversion The method here is almost the same as what's used in "Compare CTP" above. <b>Observation</b> As we can see in the result, the change in experiment group is both statistically and pratically significant. From control group to experiment group, there is 2.06% decrease in enrollment, and it's significant. So less people will enroll when the website is showing free trail option, comparing to asscess to materials option. 
End of explanation control_clicks = control_df['Clicks'].loc[control_df['Payments'].notnull()].sum() experiment_clicks = experiment_df['Clicks'].loc[experiment_df['Payments'].notnull()].sum() print('Clicks', control_clicks, experiment_clicks) control_paid = control_df['Payments'].sum() experiment_paid = experiment_df['Payments'].sum() print('Payments', control_paid, experiment_paid) control_NC = control_paid/control_clicks experiment_NC = experiment_paid/experiment_clicks print('Net Conversion', control_NC, experiment_NC) p_pool = (control_paid + experiment_paid)/(control_clicks + experiment_clicks) std_pool = math.sqrt(p_pool*(1-p_pool)*(1/control_clicks + 1/experiment_clicks)) me = get_marginOferror(std_pool, alpha) print(p_pool, std_pool, me) # Statistical significance NC_diff = round(experiment_NC - control_NC, 4) print('If ' + str(NC_diff) +' is within [' + str(round(0 - me, 4)) + ', ' + str(round(0 + me, 4)) + '], then the difference is expected, and the change is not significant.') # Practically significance d_min = net_conversion['d_min'] print('If ' + str(NC_diff) +' is within [' + str(round(d_min - me, 4)) + ', ' + str(round(d_min + me, 4)) + '], then the change is not practically significant.') Explanation: Compare Net Conversion <b>Observation</b> As we can see, it's not statistically significant but it's practically significant. From control group to experiment group, there is 0.49% drop. Practically it's significant means there will be less payment for free trail option, comparing with access to materials option. This drop will affect the business. End of explanation control_experiment_df = control_df.join(experiment_df, lsuffix='_control', rsuffix='_experiment') print(control_experiment_df.shape) control_experiment_df.head() control_experiment_df.isnull().sum() control_experiment_df.dropna(inplace=True) print(control_experiment_df.shape) control_experiment_df.isnull().sum() # If it's "success", assign 1, otherwise 0 control_experiment_df['GC_increase'] = np.where( control_experiment_df['Enrollments_experiment']/control_experiment_df['Clicks_experiment'] \ > control_experiment_df['Enrollments_control']/control_experiment_df['Clicks_control'], 1, 0) control_experiment_df['NC_increase'] = np.where( control_experiment_df['Payments_experiment']/control_experiment_df['Clicks_experiment'] \ > control_experiment_df['Payments_control']/control_experiment_df['Clicks_control'], 1, 0) control_experiment_df[['GC_increase', 'NC_increase']].head() print(control_experiment_df['GC_increase'].value_counts()) print(control_experiment_df['NC_increase'].value_counts()) GC_success_ct = control_experiment_df['GC_increase'].value_counts()[1] NC_success_ct = control_experiment_df['NC_increase'].value_counts()[1] print(GC_success_ct, NC_success_ct) p = 0.5 alpha = 0.05 n = control_experiment_df.shape[0] print(n) def get_probability(x, n): prob = round(math.factorial(n)/(math.factorial(x)*math.factorial(n-x))*pow(p,x)*pow(1-p, n-x), 4) return prob def get_p_value(x, n): p_value = 0 for i in range(0, x+1): p_value += get_probability(i, n) return round(p_value*2, 4) # 2 side p_value print ("GC Change is significant if", get_p_value(GC_success_ct,n), "is smaller than", alpha) print ("NC Change is significant if", get_p_value(NC_success_ct,n), "is smaller than", alpha) Explanation: Step 3.3 - Differences in Trending Sign Test The purpose of this test is to check whether the decrease/increase trend is evident in daily data. prob(success) = (n!/(x! 
* (n-x)!)) * pow(p, x) * pow(1-p, n-x) x - number of "success", success means when experiment group increased from control group in the record n - total records p - the probability of being success, this is binomial distribution, so p=0.5 p-value is the sum of prob(success) from each success record. When p-value is smaller than alpha, then the success is significant. This is the <b>online calculator</b> to get p-value: https://www.graphpad.com/quickcalcs/binomial1/ Given x, n, p It provides both single & double sided p-value <b>Observation</b> Same as what was found in Step 3.2, the change in Gross Conversion is significant, while the change in Net Conversion is not statistically significant. One thing to note here: whether we check prob(success) or prob(failure), the significance results are the same, even though the p_value can be different. End of explanation
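Added note (not part of the original notebook): the hand-rolled two-sided sign-test p-value above can be cross-checked with SciPy. This sketch assumes SciPy >= 1.7, where the function is called binomtest (older releases expose binom_test instead); SciPy's two-sided p-value is not defined as exactly double the one-sided tail, so small numerical differences from the manual calculation are expected.

# Sketch: cross-check of the manual sign-test p-values (assumes scipy >= 1.7)
from scipy.stats import binomtest

def sign_test_p_value(successes, n_days, p=0.5):
    # Two-sided exact binomial test under the same null hypothesis (p = 0.5)
    return binomtest(successes, n=n_days, p=p, alternative='two-sided').pvalue

# GC_success_ct, NC_success_ct and n are computed in the sign-test cell above
print(sign_test_p_value(GC_success_ct, n))
print(sign_test_p_value(NC_success_ct, n))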
5,057
Given the following text description, write Python code to implement the functionality described below step by step Description: Name Data preparation by using a template to submit a job to Cloud Dataflow Labels GCP, Cloud Dataflow, Kubeflow, Pipeline Summary A Kubeflow Pipeline component to prepare data by using a template to submit a job to Cloud Dataflow. Details Intended use Use this component when you have a pre-built Cloud Dataflow template and want to launch it as a step in a Kubeflow Pipeline. Runtime arguments Argument | Description | Optional | Data type | Accepted values | Default | Step1: Load the component using KFP SDK Step2: Sample Note Step3: Set sample parameters Step4: Example pipeline that uses the component Step5: Compile the pipeline Step6: Submit the pipeline for execution Step7: Inspect the output
Python Code: %%capture --no-stderr KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.14/kfp.tar.gz' !pip3 install $KFP_PACKAGE --upgrade Explanation: Name Data preparation by using a template to submit a job to Cloud Dataflow Labels GCP, Cloud Dataflow, Kubeflow, Pipeline Summary A Kubeflow Pipeline component to prepare data by using a template to submit a job to Cloud Dataflow. Details Intended use Use this component when you have a pre-built Cloud Dataflow template and want to launch it as a step in a Kubeflow Pipeline. Runtime arguments Argument | Description | Optional | Data type | Accepted values | Default | :--- | :---------- | :----------| :----------| :---------- | :----------| project_id | The ID of the Google Cloud Platform (GCP) project to which the job belongs. | No | GCPProjectID | | | gcs_path | The path to a Cloud Storage bucket containing the job creation template. It must be a valid Cloud Storage URL beginning with 'gs://'. | No | GCSPath | | | launch_parameters | The parameters that are required to launch the template. The schema is defined in LaunchTemplateParameters. The parameter jobName is replaced by a generated name. | Yes | Dict | A JSON object which has the same structure as LaunchTemplateParameters | None | location | The regional endpoint to which the job request is directed.| Yes | GCPRegion | | None | staging_dir | The path to the Cloud Storage directory where the staging files are stored. A random subdirectory will be created under the staging directory to keep the job information. This is done so that you can resume the job in case of failure.| Yes | GCSPath | | None | validate_only | If True, the request is validated but not executed. | Yes | Boolean | | False | wait_interval | The number of seconds to wait between calls to get the status of the job. | Yes | Integer | | 30 | Input data schema The input gcs_path must contain a valid Cloud Dataflow template. The template can be created by following the instructions in Creating Templates. You can also use Google-provided templates. Output Name | Description :--- | :---------- job_id | The id of the Cloud Dataflow job that is created. Caution & requirements To use the component, the following requirements must be met: - Cloud Dataflow API is enabled. - The component can authenticate to GCP. Refer to Authenticating Pipelines to GCP for details. - The Kubeflow user service account is a member of: - roles/dataflow.developer role of the project. - roles/storage.objectViewer role of the Cloud Storage Object gcs_path. - roles/storage.objectCreator role of the Cloud Storage Object staging_dir. Detailed description You can execute the template locally by following the instructions in Executing Templates. See the sample code below to learn how to execute the template. Follow these steps to use the component in a pipeline: 1. Install the Kubeflow Pipeline SDK: End of explanation import kfp.components as comp dataflow_template_op = comp.load_component_from_url( 'https://raw.githubusercontent.com/kubeflow/pipelines/01a23ae8672d3b18e88adf3036071496aca3552d/components/gcp/dataflow/launch_template/component.yaml') help(dataflow_template_op) Explanation: Load the component using KFP SDK End of explanation !gsutil cat gs://dataflow-samples/shakespeare/kinglear.txt Explanation: Sample Note: The following sample code works in an IPython notebook or directly in Python code. In this sample, we run a Google-provided word count template from gs://dataflow-templates/latest/Word_Count. 
The template takes a text file as input and outputs word counts to a Cloud Storage bucket. Here is the sample input: End of explanation # Required Parameters PROJECT_ID = '<Please put your project ID here>' GCS_WORKING_DIR = 'gs://<Please put your GCS path here>' # No ending slash # Optional Parameters EXPERIMENT_NAME = 'Dataflow - Launch Template' OUTPUT_PATH = '{}/out/wc'.format(GCS_WORKING_DIR) Explanation: Set sample parameters End of explanation import kfp.dsl as dsl import json @dsl.pipeline( name='Dataflow launch template pipeline', description='Dataflow launch template pipeline' ) def pipeline( project_id = PROJECT_ID, gcs_path = 'gs://dataflow-templates/latest/Word_Count', launch_parameters = json.dumps({ 'parameters': { 'inputFile': 'gs://dataflow-samples/shakespeare/kinglear.txt', 'output': OUTPUT_PATH } }), location = '', validate_only = 'False', staging_dir = GCS_WORKING_DIR, wait_interval = 30): dataflow_template_op( project_id = project_id, gcs_path = gcs_path, launch_parameters = launch_parameters, location = location, validate_only = validate_only, staging_dir = staging_dir, wait_interval = wait_interval) Explanation: Example pipeline that uses the component End of explanation pipeline_func = pipeline pipeline_filename = pipeline_func.__name__ + '.zip' import kfp.compiler as compiler compiler.Compiler().compile(pipeline_func, pipeline_filename) Explanation: Compile the pipeline End of explanation #Specify pipeline argument values arguments = {} #Get or create an experiment and submit a pipeline run import kfp client = kfp.Client() experiment = client.create_experiment(EXPERIMENT_NAME) #Submit a pipeline run run_name = pipeline_func.__name__ + ' run' run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments) Explanation: Submit the pipeline for execution End of explanation !gsutil cat $OUTPUT_PATH* Explanation: Inspect the output End of explanation
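Added note (not part of the original component walkthrough): a small local word count is a convenient way to sanity-check the numbers that gsutil cat $OUTPUT_PATH* prints. This is only a sketch: the local file name is an assumption (copy the sample input first, for example with gsutil cp gs://dataflow-samples/shakespeare/kinglear.txt .), and the template's exact tokenisation and output delimiter may differ slightly.

# Sketch: local word count to eyeball against the Dataflow template's output
import re
from collections import Counter

def local_word_count(path):
    counts = Counter()
    with open(path, encoding='utf-8') as f:
        for line in f:
            counts.update(re.findall(r"[A-Za-z']+", line))
    return counts

counts = local_word_count('kinglear.txt')  # assumed local copy of the sample input
for word, n in counts.most_common(10):
    print(f'{word}: {n}')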
5,058
Given the following text description, write Python code to implement the functionality described below step by step Description: 20 newsgroups classification Here we use the 20 newsgroups text dataset by Ken Lang, which is a dataset of 20,000 messages from 20 different newsgroups. One thousand messages from each newsgroup were sampled randomly and classified by newsgroup. The standard GloVe (Global Vectors for Word Representation) word vector model of the Stanford NLP Group is used for this task. reference Step1: data Step2: data preparation Step3: model Step4: model
Python Code: %autosave 120 import numpy as np np.random.seed(1337) from IPython.display import SVG from keras.models import Model from keras.preprocessing.text import Tokenizer from keras.preprocessing.sequence import pad_sequences from keras.layers import ( Concatenate, Conv1D, Dense, Dropout, Embedding, Flatten, Input, MaxPooling1D ) from keras.utils.np_utils import to_categorical from keras.utils.vis_utils import model_to_dot import matplotlib.pyplot as plt import numpy as np import pandas as pd from pathlib import Path from sklearn.datasets import fetch_20newsgroups import warnings warnings.filterwarnings("ignore") def summary_and_diagram(model): model.summary() return SVG(model_to_dot(model).create(prog='dot', format='svg')) #SVG(model_to_dot(model, show_shapes=True, show_layer_names=True).create(prog='dot', format='svg')) def model_training_plot(history): plt.plot(history.history['acc'], marker='.', label='train') plt.plot(history.history['val_acc'], marker='.', label='validation') plt.title('accuracy') plt.grid(True) plt.xlabel('epoch') plt.ylabel('accuracy') plt.legend(loc='best') plt.show(); %matplotlib inline plt.rcParams["figure.figsize"] = [10, 10] Explanation: 20 newsgroups classification Here we use the 20 newsgroups text dataset by Ken Lang, which is a dataset of 20,000 messages from 20 different newsgroups. One thousand messages from each newsgroup were sampled randomly and classified by newsgroup. The standard GloVe (Global Vectors for Word Representation) word vector model of the Stanford NLP Group is used for this task. reference: GloVe: Global Vectors for Word Representation, Empirical Methods in Natural Language Processing (EMNLP), J. Pennington, R. Socher and C. D. Manning (2014) Bash wget http://nlp.stanford.edu/data/glove.6B.zip unzip glove.6B.zip We use this data to train 1D convolutional neural networks in Keras to classify the messages into one of two newsgroup classes. For the case of one of the models, a dropout probability of 0.1 was applied to convolutional layers in towers and the more standard approach of a dropout probability of 0.5 was applied to the more output dense layer [ref]. 
references/inspirations reference reference reference reference imports End of explanation categories = ['alt.atheism', 'soc.religion.christian'] newsgroups_train = fetch_20newsgroups(subset='train', shuffle=True, categories=categories) print(f'number of training samples: {len(newsgroups_train.data)}') example_sample_data = "\n".join(newsgroups_train.data[0].split("\n")[10:15]) example_sample_category = categories[newsgroups_train.target[0]] print(f'\nexample training sample of category {example_sample_category}:' f'\n\n{example_sample_data}') Explanation: data End of explanation labels = newsgroups_train.target texts = newsgroups_train.data max_sequence_length = 1000 max_words = 20000 tokenizer = Tokenizer(num_words=max_words) tokenizer.fit_on_texts(texts) sequences = tokenizer.texts_to_sequences(texts) word_index = tokenizer.word_index #print(sequences[0][:10]) print(f'{len(word_index)} unique tokens found') labels = to_categorical(np.array(labels)) data = pad_sequences(sequences, maxlen=max_sequence_length) print(f'data tensor shape: {data.shape}\n' f'targets tensor shape: {labels.shape}') indices = np.arange(data.shape[0]); np.random.shuffle(indices) data = data[indices] labels = labels[indices] cross_validation_split = 0.3 nb_validation_samples = int(cross_validation_split * data.shape[0]) x_train = data[:-nb_validation_samples] y_train = labels[:-nb_validation_samples] x_val = data[-nb_validation_samples:] y_val = labels[-nb_validation_samples:] print(f'training samples shape: {x_train.shape}\n' f'validation samples shape: {y_train.shape}\n\n' f'training samples positive/negative reviews: {y_train.sum(axis=0)}\n' f'validation samples positive/negative reviews: {y_val.sum(axis=0)}') embeddings_index = {} with open('glove.6B.100d.txt') as f: for line in f: values = line.split(' ') word = values[0] embeddings_index[word] = np.asarray(values[1:], dtype='float32') print(f'word vectors: {len(embeddings_index)}') word_vector_dimensionality = 100 embedding_matrix = np.random.random( (len(word_index) + 1, word_vector_dimensionality)) for word, i in word_index.items(): embedding_vector = embeddings_index.get(word) if embedding_vector is not None: # Words not in the embedding index are all zero elements. embedding_matrix[i] = embedding_vector print(f'embedding matrix shape: {embedding_matrix.shape}') Explanation: data preparation End of explanation embedding_layer = Embedding(len(word_index) + 1, word_vector_dimensionality, weights=[embedding_matrix], input_length=max_sequence_length, trainable=False) inputs = Input(shape=(max_sequence_length,), dtype='int32') # inputs x = embedding_layer(inputs) # embedded sequences x = Conv1D(128, 5, activation='relu')(x) x = MaxPooling1D(5)(x) x = Conv1D(128, 5, activation='relu')(x) x = MaxPooling1D(5)(x) x = Conv1D(128, 5, activation='relu')(x) x = MaxPooling1D(35)(x) # global max pooling x = Flatten()(x) x = Dense(300, activation='relu')(x) x = Dropout(rate=0.5)(x) preds = Dense(2, activation='softmax', name='preds')(x) model = Model(input=inputs, output=preds) model.compile(loss='categorical_crossentropy', optimizer='nadam', metrics=['acc']) summary_and_diagram(model) %%time history = model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=60, batch_size=32, verbose=False) model_training_plot(history) print(f'max. validation accuracy observed: {max(model.history.history["val_acc"])}') print(f'max. 
validation accuracy history index: {model.history.history["val_acc"].index(max(model.history.history["val_acc"]))}') Explanation: model: convolutional neural network End of explanation embedding_layer = Embedding(len(word_index) + 1, word_vector_dimensionality, weights=[embedding_matrix], input_length=max_sequence_length, trainable=False) inputs = Input(shape=(max_sequence_length,), dtype='int32') x = embedding_layer(inputs) convolutional_layer_towers = [] for kernel_size in [3, 4, 5]: _x = Conv1D(filters=128, kernel_size=kernel_size, activation='relu')(x) _x = Dropout(rate=0.1)(_x) _x = MaxPooling1D(5)(_x) convolutional_layer_towers.append(_x) x = Concatenate(axis=1)(convolutional_layer_towers) x = Conv1D(128, 5, activation='relu')(x) x = MaxPooling1D(5)(x) x = Conv1D(128, 5, activation='relu')(x) x = MaxPooling1D(30)(x) x = Flatten()(x) x = Dense(128, activation='relu')(x) x = Dropout(rate=0.5)(x) preds = Dense(2, activation='softmax', name='preds')(x) model = Model(input=inputs, output=preds) model.compile(loss='categorical_crossentropy', optimizer='nadam', metrics=['acc']) summary_and_diagram(model) %%time history = model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=100, batch_size=32, verbose=False) model_training_plot(history) print(f'max. validation accuracy observed: {max(model.history.history["val_acc"])}') print(f'max. validation accuracy history index: {model.history.history["val_acc"].index(max(model.history.history["val_acc"]))}') Explanation: model: convolutional neural network with multiple towers of varying kernel sizes End of explanation
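Added note (not part of the original notebook): a cheap linear baseline on the same two newsgroups helps put the CNN validation accuracies above in context. This sketch uses scikit-learn only; the 70/30 split roughly mirrors the cross_validation_split used earlier but is not the identical split, so the numbers are indicative rather than directly comparable.

# Sketch: TF-IDF + logistic regression baseline for the same binary task
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

categories = ['alt.atheism', 'soc.religion.christian']
data = fetch_20newsgroups(subset='train', shuffle=True, categories=categories)
X_train, X_val, y_train, y_val = train_test_split(
    data.data, data.target, test_size=0.3, random_state=1337)

vectorizer = TfidfVectorizer(max_features=20000)
clf = LogisticRegression(max_iter=1000)
clf.fit(vectorizer.fit_transform(X_train), y_train)
pred = clf.predict(vectorizer.transform(X_val))
print(f'baseline validation accuracy: {accuracy_score(y_val, pred):.3f}')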
5,059
Given the following text description, write Python code to implement the functionality described below step by step Description: Algorithms Exercise 2 Imports Step2: Peak finding Write a function find_peaks that finds and returns the indices of the local maxima in a sequence. Your function should Step3: Here is a string with the first 10000 digits of $\pi$ (after the decimal). Write code to perform the following
Python Code: %matplotlib inline from matplotlib import pyplot as plt import seaborn as sns import numpy as np Explanation: Algorithms Exercise 2 Imports End of explanation def find_peaks(a): Find the indices of the local maxima in a sequence. peaks = [] data = np.array(a) deriv = np.diff(data) if deriv[0] < 0: peaks.append(0) for i in range(1,len(deriv)): if deriv[i]<0 and deriv[i-1]>0: peaks.append(i) if deriv[-1] >0: peaks.append(len(data)-1) return np.array(peaks) p1 = find_peaks([2,0,1,0,2,0,1]) assert np.allclose(p1, np.array([0,2,4,6])) p2 = find_peaks(np.array([0,1,2,3])) assert np.allclose(p2, np.array([3])) p3 = find_peaks([3,2,1,0]) assert np.allclose(p3, np.array([0])) Explanation: Peak finding Write a function find_peaks that finds and returns the indices of the local maxima in a sequence. Your function should: Properly handle local maxima at the endpoints of the input array. Return a Numpy array of integer indices. Handle any Python iterable as input. End of explanation from sympy import pi, N pi_digits_str = str(N(pi, 10001))[2:] pi_dig_num=np.array(list(pi_digits_str)) pi_dig_num=pi_dig_num.astype(int) peak=find_peaks(pi_dig_num) peakdiff=np.diff(peak) plt.hist(peakdiff, bins=100, width=1, color='k',edgecolor='b', align='right'); plt.xlabel("Distance between adjacent local maxima in pi") plt.ylabel("Counts") plt.title("Distance between adjacent local maxima in pi"); assert True # use this for grading the pi digits histogram Explanation: Here is a string with the first 10000 digits of $\pi$ (after the decimal). Write code to perform the following: Convert that string to a Numpy array of integers. Find the indices of the local maxima in the digits of $\pi$. Use np.diff to find the distances between consequtive local maxima. Visualize that distribution using an appropriately customized histogram. End of explanation
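Added note (not part of the original exercise): the interior maxima found by the custom function can be cross-checked against scipy.signal.find_peaks. SciPy deliberately ignores peaks at the endpoints, so only the interior indices are expected to agree; treat this as an optional sanity check rather than part of the graded solution.

# Sketch: compare interior local maxima with scipy.signal.find_peaks
import numpy as np
from scipy.signal import find_peaks as scipy_find_peaks

a = np.array([2, 0, 1, 0, 2, 0, 1])
custom = find_peaks(a)                                   # function defined above
interior = custom[(custom > 0) & (custom < len(a) - 1)]  # drop endpoint peaks
scipy_peaks, _ = scipy_find_peaks(a)
assert np.array_equal(np.sort(interior), np.sort(scipy_peaks))
print(interior, scipy_peaks)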
5,060
Given the following text description, write Python code to implement the functionality described below step by step Description: Comparing initial point generation methods Holger Nahrstaedt 2020 .. currentmodule Step1: Toy model We will use the Step2: Objective The objective of this example is to find one of these minima in as few iterations as possible. One iteration is defined as one call to the Step3: Note that this can take a few minutes. Step4: This plot shows the value of the minimum found (y axis) as a function of the number of iterations performed so far (x axis). The dashed red line indicates the true value of the minimum of the Step5: n_random_starts = 10 produces the best results
Python Code: print(__doc__) import numpy as np np.random.seed(123) import matplotlib.pyplot as plt Explanation: Comparing initial point generation methods Holger Nahrstaedt 2020 .. currentmodule:: skopt Bayesian optimization or sequential model-based optimization uses a surrogate model to model the expensive to evaluate function func. There are several choices for what kind of surrogate model to use. This notebook compares the performance of: Halton sequence, Hammersly sequence, Sobol' sequence and Latin hypercube sampling as initial points. The purely random point generation is used as a baseline. End of explanation from skopt.benchmarks import hart6 as hart6_ # redefined `hart6` to allow adding arbitrary "noise" dimensions def hart6(x, noise_level=0.): return hart6_(x[:6]) + noise_level * np.random.randn() from skopt.benchmarks import branin as _branin def branin(x, noise_level=0.): return _branin(x) + noise_level * np.random.randn() from matplotlib.pyplot import cm import time from skopt import gp_minimize, forest_minimize, dummy_minimize def plot_convergence(result_list, true_minimum=None, yscale=None, title="Convergence plot"): ax = plt.gca() ax.set_title(title) ax.set_xlabel("Number of calls $n$") ax.set_ylabel(r"$\min f(x)$ after $n$ calls") ax.grid() if yscale is not None: ax.set_yscale(yscale) colors = cm.hsv(np.linspace(0.25, 1.0, len(result_list))) for results, color in zip(result_list, colors): name, results = results n_calls = len(results[0].x_iters) iterations = range(1, n_calls + 1) mins = [[np.min(r.func_vals[:i]) for i in iterations] for r in results] ax.plot(iterations, np.mean(mins, axis=0), c=color, label=name) #ax.errorbar(iterations, np.mean(mins, axis=0), # yerr=np.std(mins, axis=0), c=color, label=name) if true_minimum: ax.axhline(true_minimum, linestyle="--", color="r", lw=1, label="True minimum") ax.legend(loc="best") return ax def run(minimizer, initial_point_generator, n_initial_points=10, n_repeats=1): return [minimizer(func, bounds, n_initial_points=n_initial_points, initial_point_generator=initial_point_generator, n_calls=n_calls, random_state=n) for n in range(n_repeats)] def run_measure(initial_point_generator, n_initial_points=10): start = time.time() # n_repeats must set to a much higher value to obtain meaningful results. n_repeats = 1 res = run(gp_minimize, initial_point_generator, n_initial_points=n_initial_points, n_repeats=n_repeats) duration = time.time() - start # print("%s %s: %.2f s" % (initial_point_generator, # str(init_point_gen_kwargs), # duration)) return res Explanation: Toy model We will use the :class:benchmarks.hart6 function as toy model for the expensive function. In a real world application this function would be unknown and expensive to evaluate. 
End of explanation from functools import partial example = "hart6" if example == "hart6": func = partial(hart6, noise_level=0.1) bounds = [(0., 1.), ] * 6 true_minimum = -3.32237 n_calls = 40 n_initial_points = 10 yscale = None title = "Convergence plot - hart6" else: func = partial(branin, noise_level=2.0) bounds = [(-5.0, 10.0), (0.0, 15.0)] true_minimum = 0.397887 n_calls = 30 n_initial_points = 10 yscale="log" title = "Convergence plot - branin" from skopt.utils import cook_initial_point_generator # Random search dummy_res = run_measure("random", n_initial_points) lhs = cook_initial_point_generator( "lhs", lhs_type="classic", criterion=None) lhs_res = run_measure(lhs, n_initial_points) lhs2 = cook_initial_point_generator("lhs", criterion="maximin") lhs2_res = run_measure(lhs2, n_initial_points) sobol = cook_initial_point_generator("sobol", randomize=False, min_skip=1, max_skip=100) sobol_res = run_measure(sobol, n_initial_points) halton_res = run_measure("halton", n_initial_points) hammersly_res = run_measure("hammersly", n_initial_points) grid_res = run_measure("grid", n_initial_points) Explanation: Objective The objective of this example is to find one of these minima in as few iterations as possible. One iteration is defined as one call to the :class:benchmarks.hart6 function. We will evaluate each model several times using a different seed for the random number generator. Then compare the average performance of these models. This makes the comparison more robust against models that get "lucky". End of explanation plot = plot_convergence([("random", dummy_res), ("lhs", lhs_res), ("lhs_maximin", lhs2_res), ("sobol'", sobol_res), ("halton", halton_res), ("hammersly", hammersly_res), ("grid", grid_res)], true_minimum=true_minimum, yscale=yscale, title=title) plt.show() Explanation: Note that this can take a few minutes. End of explanation lhs2 = cook_initial_point_generator("lhs", criterion="maximin") lhs2_15_res = run_measure(lhs2, 12) lhs2_20_res = run_measure(lhs2, 14) lhs2_25_res = run_measure(lhs2, 16) Explanation: This plot shows the value of the minimum found (y axis) as a function of the number of iterations performed so far (x axis). The dashed red line indicates the true value of the minimum of the :class:benchmarks.hart6 function. Test with different n_random_starts values End of explanation plot = plot_convergence([("random - 10", dummy_res), ("lhs_maximin - 10", lhs2_res), ("lhs_maximin - 12", lhs2_15_res), ("lhs_maximin - 14", lhs2_20_res), ("lhs_maximin - 16", lhs2_25_res)], true_minimum=true_minimum, yscale=yscale, title=title) plt.show() Explanation: n_random_starts = 10 produces the best results End of explanation
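Added note (not part of the original example): the initial designs themselves can be inspected directly with skopt.sampler, independently of gp_minimize, which makes the space-filling differences between the generators easy to see. This sketch assumes scikit-optimize >= 0.8, where the sampler classes live in skopt.sampler.

# Sketch: visualise the raw initial designs on the unit square (skopt >= 0.8)
import numpy as np
import matplotlib.pyplot as plt
from skopt.space import Space
from skopt.sampler import Lhs, Sobol, Halton, Hammersly

space = Space([(0.0, 1.0), (0.0, 1.0)])
samplers = {"lhs_maximin": Lhs(criterion="maximin"),
            "sobol'": Sobol(),
            "halton": Halton(),
            "hammersly": Hammersly()}

fig, axes = plt.subplots(1, len(samplers), figsize=(16, 4))
for ax, (name, sampler) in zip(axes, samplers.items()):
    pts = np.array(sampler.generate(space.dimensions, 16))
    ax.scatter(pts[:, 0], pts[:, 1])
    ax.set_title(name)
plt.show()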
5,061
Given the following text description, write Python code to implement the functionality described below step by step Description: Exploration of Prudential Life Insurance Data Data retrieved from Step1: Separation of columns into categorical, continuous and discrete Step2: Importing life insurance data set Step3: Pre-processing raw dataset for NaN values Step4: Create or import the test data set Step5: Data transformation and extraction Data groupings Step6: Categorical normalization Step7: Grouping of various categorical data sets Histograms and descriptive statistics for Risk Response, Ins_Age, BMI, Wt Step8: Histograms and descriptive statistics for Product_Info_1-7 Step9: Split dataframes into categorical, continuous, discrete, dummy, and response Step10: Descriptive statistics and scatter plot relating Product_Info_2 and Response
Python Code: # Importing libraries %pylab inline %matplotlib inline import pandas as pd import matplotlib.pyplot as plt from matplotlib.colors import LogNorm from sklearn import preprocessing import numpy as np # Convert variable data into categorical, continuous, discrete, # and dummy variable lists the following into a dictionary Explanation: Exploration of Prudential Life Insurance Data Data retrieved from: https://www.kaggle.com/c/prudential-life-insurance-assessment File descriptions: train.csv - the training set, contains the Response values test.csv - the test set, you must predict the Response variable for all rows in this file sample_submission.csv - a sample submission file in the correct format Data fields: Variable | Description -------- | ------------ Id | A unique identifier associated with an application. Product_Info_1-7 | A set of normalized variables relating to the product applied for Ins_Age | Normalized age of applicant Ht | Normalized height of applicant Wt | Normalized weight of applicant BMI | Normalized BMI of applicant Employment_Info_1-6 | A set of normalized variables relating to the employment history of the applicant. InsuredInfo_1-6 | A set of normalized variables providing information about the applicant. Insurance_History_1-9 | A set of normalized variables relating to the insurance history of the applicant. Family_Hist_1-5 | A set of normalized variables relating to the family history of the applicant. Medical_History_1-41 | A set of normalized variables relating to the medical history of the applicant. Medical_Keyword_1-48 | A set of dummy variables relating to the presence of/absence of a medical keyword being associated with the application. Response | This is the target variable, an ordinal variable relating to the final decision associated with an application The following variables are all categorical (nominal): Product_Info_1, Product_Info_2, Product_Info_3, Product_Info_5, Product_Info_6, Product_Info_7, Employment_Info_2, Employment_Info_3, Employment_Info_5, InsuredInfo_1, InsuredInfo_2, InsuredInfo_3, InsuredInfo_4, InsuredInfo_5, InsuredInfo_6, InsuredInfo_7, Insurance_History_1, Insurance_History_2, Insurance_History_3, Insurance_History_4, Insurance_History_7, Insurance_History_8, Insurance_History_9, Family_Hist_1, Medical_History_2, Medical_History_3, Medical_History_4, Medical_History_5, Medical_History_6, Medical_History_7, Medical_History_8, Medical_History_9, Medical_History_11, Medical_History_12, Medical_History_13, Medical_History_14, Medical_History_16, Medical_History_17, Medical_History_18, Medical_History_19, Medical_History_20, Medical_History_21, Medical_History_22, Medical_History_23, Medical_History_25, Medical_History_26, Medical_History_27, Medical_History_28, Medical_History_29, Medical_History_30, Medical_History_31, Medical_History_33, Medical_History_34, Medical_History_35, Medical_History_36, Medical_History_37, Medical_History_38, Medical_History_39, Medical_History_40, Medical_History_41 The following variables are continuous: Product_Info_4, Ins_Age, Ht, Wt, BMI, Employment_Info_1, Employment_Info_4, Employment_Info_6, Insurance_History_5, Family_Hist_2, Family_Hist_3, Family_Hist_4, Family_Hist_5 The following variables are discrete: Medical_History_1, Medical_History_10, Medical_History_15, Medical_History_24, Medical_History_32 Medical_Keyword_1-48 are dummy variables. My thoughts are as follows: The main dependent variable is the Risk Response (1-8) What are variables are correlated to the risk response? 
How do I perform correlation analysis between variables? Import libraries End of explanation s = ["Product_Info_1, Product_Info_2, Product_Info_3, Product_Info_5, Product_Info_6, Product_Info_7, Employment_Info_2, Employment_Info_3, Employment_Info_5, InsuredInfo_1, InsuredInfo_2, InsuredInfo_3, InsuredInfo_4, InsuredInfo_5, InsuredInfo_6, InsuredInfo_7, Insurance_History_1, Insurance_History_2, Insurance_History_3, Insurance_History_4, Insurance_History_7, Insurance_History_8, Insurance_History_9, Family_Hist_1, Medical_History_2, Medical_History_3, Medical_History_4, Medical_History_5, Medical_History_6, Medical_History_7, Medical_History_8, Medical_History_9, Medical_History_11, Medical_History_12, Medical_History_13, Medical_History_14, Medical_History_16, Medical_History_17, Medical_History_18, Medical_History_19, Medical_History_20, Medical_History_21, Medical_History_22, Medical_History_23, Medical_History_25, Medical_History_26, Medical_History_27, Medical_History_28, Medical_History_29, Medical_History_30, Medical_History_31, Medical_History_33, Medical_History_34, Medical_History_35, Medical_History_36, Medical_History_37, Medical_History_38, Medical_History_39, Medical_History_40, Medical_History_41", "Product_Info_4, Ins_Age, Ht, Wt, BMI, Employment_Info_1, Employment_Info_4, Employment_Info_6, Insurance_History_5, Family_Hist_2, Family_Hist_3, Family_Hist_4, Family_Hist_5", "Medical_History_1, Medical_History_10, Medical_History_15, Medical_History_24, Medical_History_32"] varTypes = dict() varTypes['categorical'] = s[0].split(', ') varTypes['continuous'] = s[1].split(', ') varTypes['discrete'] = s[2].split(', ') varTypes['dummy'] = ["Medical_Keyword_"+str(i) for i in range(1,49)] #Prints out each of the the variable types as a check #for i in iter(varTypes['dummy']): #print i Explanation: Seperation of columns into categorical, continous and discrete End of explanation #Import training data d_raw = pd.read_csv('prud_files/train.csv') d = d_raw.copy() len(d.columns) Explanation: Importing life insurance data set End of explanation # Get all the columns that have NaNs d = d_raw.copy() a = pd.isnull(d).sum() nullColumns = a[a>0].index.values #for c in nullColumns: #d[c].fillna(-1) #Determine the min and max values for the NaN columns a = pd.DataFrame(d, columns=nullColumns).describe() a_min = a[3:4] a_max = a[7:8] nullList = ['Family_Hist_4', 'Medical_History_1', 'Medical_History_10', 'Medical_History_15', 'Medical_History_24', 'Medical_History_32'] pd.DataFrame(a_max, columns=nullList) # Convert all NaNs to -1 and sum up all medical keywords across columns df = d.fillna(-1) b = pd.DataFrame(df[varTypes["dummy"]].sum(axis=1), columns=["Medical_Keyword_Sum"]) df= pd.concat([df,b], axis=1, join='outer') Explanation: Pre-processing raw dataset for NaN values End of explanation #Turn split train to test on or off. 
#If on, 10% of the dataset is used for feature training #If off, training set is loaded from file splitTrainToTest = 1 if(splitTrainToTest): d_gb = df.groupby("Response") df_test = pd.DataFrame() for name, group in d_gb: df_test = pd.concat([df_test, group[:len(group)/10]], axis=0, join='outer') print "test data is 10% training data" else: d_test = pd.read_csv('prud_files/test.csv') df_test = d_test.fillna(-1) b = pd.DataFrame(df[varTypes["dummy"]].sum(axis=1), columns=["Medical_Keyword_Sum"]) df_test= pd.concat([df_test,b], axis=1, join='outer') print "test data is prud_files/test.csv" Explanation: Create or import the test data set End of explanation df_cat = df[["Id","Response"]+varTypes["categorical"]] df_disc = df[["Id","Response"]+varTypes["discrete"]] df_cont = df[["Id","Response"]+varTypes["continuous"]] df_dummy = df[["Id","Response"]+varTypes["dummy"]] df_cat_test = df_test[["Id","Response"]+varTypes["categorical"]] df_disc_test = df_test[["Id","Response"]+varTypes["discrete"]] df_cont_test = df_test[["Id","Response"]+varTypes["continuous"]] df_dummy_test = df_test[["Id","Response"]+varTypes["dummy"]] ## Extract categories of each column df_n = df[["Response", "Medical_Keyword_Sum"]+varTypes["categorical"]+varTypes["discrete"]+varTypes["continuous"]].copy() df_test_n = df_test[["Response","Medical_Keyword_Sum"]+varTypes["categorical"]+varTypes["discrete"]+varTypes["continuous"]].copy() #Get all the Product Info 2 categories a = pd.get_dummies(df["Product_Info_2"]).columns.tolist() norm_PI2_dict = dict() #Create an enumerated dictionary of Product Info 2 categories i=1 for c in a: norm_PI2_dict.update({c:i}) i+=1 print norm_PI2_dict df_n = df_n.replace(to_replace={'Product_Info_2':norm_PI2_dict}) df_test_n = df_test_n.replace(to_replace={'Product_Info_2':norm_PI2_dict}) df_n Explanation: Data transformation and extraction Data groupings End of explanation # normalizes a single dataframe column and returns the result def normalize_df(d): min_max_scaler = preprocessing.MinMaxScaler() x = d.values.astype(np.float) #return pd.DataFrame(min_max_scaler.fit_transform(x)) return pd.DataFrame(min_max_scaler.fit_transform(x)) def normalize_cat(d): for x in varTypes["discrete"]: try: a = pd.DataFrame(normalize_df(d_disc[x])) a.columns=[str("n"+x)] d_disc = pd.concat([d_disc, a], axis=1, join='outer') except Exception as e: print e.args print "Error on "+str(x)+" w error: "+str(e) return d_disc def normalize_disc(d_disc): for x in varTypes["discrete"]: try: a = pd.DataFrame(normalize_df(d_disc[x])) a.columns=[str("n"+x)] d_disc = pd.concat([d_disc, a], axis=1, join='outer') except Exception as e: print e.args print "Error on "+str(x)+" w error: "+str(e) return d_disc # t= categorical, discrete, continuous def normalize_cols(d, t = "categorical"): for x in varTypes[t]: try: a = pd.DataFrame(normalize_df(d[x])) a.columns=[str("n"+x)] a = pd.concat(a, axis=1, join='outer') except Exception as e: print e.args print "Error on "+str(x)+" w error: "+str(e) return a def normalize_response(d): a = pd.DataFrame(normalize_df(d["Response"])) a.columns=["nResponse"] #d_cat = pd.concat([d_cat, a], axis=1, join='outer') return a df_n_2 = df_n.copy() df_n_test_2 = df_test_n.copy() df_n_2 = df_n_2[["Response"]+varTypes["categorical"]+varTypes["discrete"]] df_n_test_2 = df_n_test_2[["Response"]+varTypes["categorical"]+varTypes["discrete"]] df_n_2 = df_n_2.apply(normalize_df, axis=1) df_n_test_2 = df_n_test_2.apply(normalize_df, axis=1) df_n_3 = pd.concat([df["Id"],df_n["Medical_Keyword_Sum"],df_n_2, 
df_n[varTypes["continuous"]]],axis=1,join='outer') df_n_test_3 = pd.concat([df_test["Id"],df_test_n["Medical_Keyword_Sum"],df_n_test_2, df_test_n[varTypes["continuous"]]],axis=1,join='outer') train_data = df_n_3.values test_data = df_n_test_3.values from sklearn import linear_model clf = linear_model.Lasso(alpha = 0.1) clf.fit(X_train, Y_train) pred = clf.predict(X_test) print accuracy_score(pred, Y_test) from sklearn.ensemble import RandomForestClassifier model = RandomForestClassifier(n_estimators = 1) #model = model.fit(train_data[0:,2:],train_data[0:,0]) from sklearn.naive_bayes import GaussianNB from sklearn.metrics import accuracy_score clf = GaussianNB() clf.fit(train_data[0:,2:],train_data[0:,0]) pred = clf.predict(X_test) print accuracy_score(pred, Y_test) from sklearn.metrics import accuracy_score df_n.columns.tolist() d_cat = df_cat.copy() d_cat_test = df_cat_test.copy() d_cont = df_cont.copy() d_cont_test = df_cont_test.copy() d_disc = df_disc.copy() d_disc_test = df_disc_test.copy() #df_cont_n = normalize_cols(d_cont, "continuous") #df_cont_test_n = normalize_cols(d_cont_test, "continuous") df_cat_n = normalize_cols(d_cat, "categorical") df_cat_test_n = normalize_cols(d_cat_test, "categorical") df_disc_n = normalize_cols(d_disc, "discrete") df_disc_test_n = normalize_cols(d_disc, "discrete") a = df_cat_n.iloc[:,62:] # TODO: Clump into function #rows are normalized into binary columns of groupings # Define various group by data streams df = d gb_PI2 = df.groupby('Product_Info_1') gb_PI2 = df.groupby('Product_Info_2') gb_Ins_Age = df.groupby('Ins_Age') gb_Ht = df.groupby('Ht') gb_Wt = df.groupby('Wt') gb_response = df.groupby('Response') #Outputs rows the differnet categorical groups for c in df.columns: if (c in varTypes['categorical']): if(c != 'Id'): a = [ str(x)+", " for x in df.groupby(c).groups ] print c + " : " + str(a) df_prod_info = pd.DataFrame(d, columns=["Response"]+ [ "Product_Info_"+str(x) for x in range(1,8)]) df_emp_info = pd.DataFrame(d, columns=["Response"]+ [ "Employment_Info_"+str(x) for x in range(1,6)]) # continous df_bio = pd.DataFrame(d, columns=["Response", "Ins_Age", "Ht", "Wt","BMI"]) # all the values are discrete (0 or 1) df_med_kw = pd.DataFrame(d, columns=["Response"]+ [ "Medical_Keyword_"+str(x) for x in range(1,48)]) Explanation: Categorical normalization End of explanation plt.figure(0) plt.subplot(121) plt.title("Categorical - Histogram for Risk Response") plt.xlabel("Risk Response (1-7)") plt.ylabel("Frequency") plt.hist(df.Response) plt.savefig('images/hist_Response.png') print df.Response.describe() print "" plt.subplot(122) plt.title("Normalized - Histogram for Risk Response") plt.xlabel("Normalized Risk Response (1-7)") plt.ylabel("Frequency") plt.hist(df_cat_n.nResponse) plt.savefig('images/hist_norm_Response.png') print df_cat_n.nResponse.describe() print "" def plotContinuous(d, t): plt.title("Continuous - Histogram for "+ str(t)) plt.xlabel("Normalized "+str(t)+"[0,1]") plt.ylabel("Frequency") plt.hist(d) plt.savefig("images/hist_"+str(t)+".png") #print df.iloc[:,:1].describe() print "" for i in range(i,len(df_cat.columns: plt.figure(1) plotContinuous(df.Ins_Age, "Ins_Age") plt.show() df_disc.describe()[7:8] plt.figure(1) plt.title("Continuous - Histogram for Ins_Age") plt.xlabel("Normalized Ins_Age [0,1]") plt.ylabel("Frequency") plt.hist(df.Ins_Age) plt.savefig('images/hist_Ins_Age.png') print df.Ins_Age.describe() print "" plt.figure(2) plt.title("Continuous - Histogram for BMI") plt.xlabel("Normalized BMI [0,1]") 
plt.ylabel("Frequency") plt.hist(df.BMI) plt.savefig('images/hist_BMI.png') print df.BMI.describe() print "" plt.figure(3) plt.title("Continuous - Histogram for Wt") plt.xlabel("Normalized Wt [0,1]") plt.ylabel("Frequency") plt.hist(df.Wt) plt.savefig('images/hist_Wt.png') print df.Wt.describe() print "" plt.show() plt.show() Explanation: Grouping of various categorical data sets Histograms and descriptive statistics for Risk Response, Ins_Age, BMI, Wt End of explanation k=1 for i in range(1,8): ''' print "The iteration is: "+str(i) print df['Product_Info_'+str(i)].describe() print "" ''' plt.figure(i) if(i == 4): plt.title("Continuous - Histogram for Product_Info_"+str(i)) plt.xlabel("Normalized value: [0,1]") plt.ylabel("Frequency") plt.hist(df['Product_Info_'+str(i)]) plt.savefig('images/hist_Product_Info_'+str(i)+'.png') else: if(i != 2): plt.subplot(1,2,1) plt.title("Cat-Hist- Product_Info_"+str(i)) plt.xlabel("Categories") plt.ylabel("Frequency") plt.hist(df['Product_Info_'+str(i)]) plt.savefig('images/hist_Product_Info_'+str(i)+'.png') plt.subplot(1,2,2) plt.title("Normalized - Histogram of Product_Info_"+str(i)) plt.xlabel("Categories") plt.ylabel("Frequency") plt.hist(df_cat_n['nProduct_Info_'+str(i)]) plt.savefig('images/hist_norm_Product_Info_'+str(i)+'.png') elif(i == 2): plt.title("Cat-Hist Product_Info_"+str(i)) plt.xlabel("Categories") plt.ylabel("Frequency") df.Product_Info_2.value_counts().plot(kind='bar') plt.savefig('images/hist_Product_Info_'+str(i)+'.png') plt.show() Explanation: Histograms and descriptive statistics for Product_Info_1-7 End of explanation catD = df.loc[:,varTypes['categorical']] contD = df.loc[:,varTypes['continuous']] disD = df.loc[:,varTypes['discrete']] dummyD = df.loc[:,varTypes['dummy']] respD = df.loc[:,['id','Response']] Explanation: Split dataframes into categorical, continuous, discrete, dummy, and response End of explanation prod_info = [ "Product_Info_"+str(i) for i in range(1,8)] a = catD.loc[:, prod_info[1]] stats = catD.groupby(prod_info[1]).describe() c = gb_PI2.Response.count() plt.figure(0) plt.scatter(c[0],c[1]) plt.figure(0) plt.title("Histogram of "+"Product_Info_"+str(i)) plt.xlabel("Categories " + str((a.describe())['count'])) plt.ylabel("Frequency") for i in range(1,8): a = catD.loc[:, "Product_Info_"+str(i)] if(i is not 4): print a.describe() print "" plt.figure(i) plt.title("Histogram of "+"Product_Info_"+str(i)) plt.xlabel("Categories " + str((catD.groupby(key).describe())['count'])) plt.ylabel("Frequency") #fig, axes = plt.subplots(nrows = 1, ncols = 2) #catD[key].value_counts(normalize=True).hist(ax=axes[0]); axes[0].set_title("Histogram: "+str(key)) #catD[key].value_counts(normalize=True).hist(cumulative=True,ax=axes[1]); axes[1].set_title("Cumulative HG: "+str(key)) if a.dtype in (np.int64, np.float, float, int): a.hist() # Random functions #catD.Product_Info_1.describe() #catD.loc[:, prod_info].groupby('Product_Info_2').describe() #df[varTypes['categorical']].hist() catD.head(5) #Exploration of the discrete data disD.describe() disD.head(5) #Iterate through each categorical column of data #Perform a 2D histogram later i=0 for key in varTypes['categorical']: #print "The category is: {0} with value_counts: {1} and detailed tuple: {2} ".format(key, l.count(), l) plt.figure(i) plt.title("Histogram of "+str(key)) plt.xlabel("Categories " + str((df.groupby(key).describe())['count'])) #fig, axes = plt.subplots(nrows = 1, ncols = 2) #catD[key].value_counts(normalize=True).hist(ax=axes[0]); axes[0].set_title("Histogram: 
"+str(key)) #catD[key].value_counts(normalize=True).hist(cumulative=True,ax=axes[1]); axes[1].set_title("Cumulative HG: "+str(key)) if df[key].dtype in (np.int64, np.float, float, int): df[key].hist() i+=1 #Iterate through each 'discrete' column of data #Perform a 2D histogram later i=0 for key in varTypes['discrete']: #print "The category is: {0} with value_counts: {1} and detailed tuple: {2} ".format(key, l.count(), l) plt.figure(i) fig, axes = plt.subplots(nrows = 1, ncols = 2) #Histogram based on normalized value counts of the data set disD[key].value_counts().hist(ax=axes[0]); axes[0].set_title("Histogram: "+str(key)) #Cumulative histogram based on normalized value counts of the data set disD[key].value_counts().hist(cumulative=True,ax=axes[1]); axes[1].set_title("Cumulative HG: "+str(key)) i+=1 #2D Histogram i=0 for key in varTypes['categorical']: #print "The category is: {0} with value_counts: {1} and detailed tuple: {2} ".format(key, l.count(), l) plt.figure(i) #fig, axes = plt.subplots(nrows = 1, ncols = 2) x = catD[key].value_counts(normalize=True) y = df['Response'] plt.hist2d(x[1], y, bins=40, norm=LogNorm()) plt.colorbar() #catD[key].value_counts(normalize=True).hist(ax=axes[0]); axes[0].set_title("Histogram: "+str(key)) #catD[key].value_counts(normalize=True).hist(cumulative=True,ax=axes[1]); axes[1].set_title("Cumulative HG: "+str(key)) i+=1 #Iterate through each categorical column of data #Perform a 2D histogram later i=0 for key in varTypes['categorical']: #print "The category is: {0} with value_counts: {1} and detailed tuple: {2} ".format(key, l.count(), l) plt.figure(i) #fig, axes = plt.subplots(nrows = 1, ncols = 2) #catD[key].value_counts(normalize=True).hist(ax=axes[0]); axes[0].set_title("Histogram: "+str(key)) #catD[key].value_counts(normalize=True).hist(cumulative=True,ax=axes[1]); axes[1].set_title("Cumulative HG: "+str(key)) if df[key].dtype in (np.int64, np.float, float, int): #(1.*df[key].value_counts()/len(df[key])).hist() df[key].value_counts(normalize=True).plot(kind='bar') i+=1 df.loc('Product_Info_1') Explanation: Descriptive statistics and scatter plot relating Product_Info_2 and Response End of explanation
5,062
Given the following text description, write Python code to implement the functionality described below step by step Description: Outline Glossary 7. Observing Systems Previous Step1: Import section specific modules Step2: TODO Step3: Figure 7.5.1 Step4: Figure 7.5.2 Step5: Figure 7.5.3 Step6: Figure 7.5.4 Step7: Figure 7.5.5 Step8: Figure 7.5.6 Step9: Figure 7.5.7 Step10: Figure 7.5.8 Step11: Figure 7.7.1 Step12: Figure 7.7.2 Step13: Figure 7.7.3
Python Code: import numpy as np import matplotlib.pyplot as plt %matplotlib inline from IPython.display import HTML HTML('../style/course.css') #apply general CSS Explanation: Outline Glossary 7. Observing Systems Previous: 7.4 Digital Correlators Next: 7.6 Polarization and Antenna Feeds Import standard modules: End of explanation from IPython.display import Image HTML('../style/code_toggle.html') Explanation: Import section specific modules: End of explanation Image(filename='figures/AntennaRadiationDiagram.png', width=400) Explanation: TODO: why is it important to understand the primary beam a direction-dependent effect present the E jones what is intrinsic and apparent flux example: sky field with and without beam attenuation why is it a plane wave? source is in the far field, define far-field distance, define far/near field consider a single dish optics intution of a primary beam: reflecting a plane wave to a focus using a simple parabolic dish simple aperture of a parabolic dish is a disc -> bessel beam pattern ; fourier relation voltage beam, power beam ; coordinate system primary lobe, sidelobes, FWHM/resolution, directivity, gain adding complexity: aperture blockage, secondary reflector aperture efficiency: surface accuracy, blockage, taper, spillover typical types of dishes: primary focus, cassegrain, offset gregorian (with examples) antenna mounts parallactic angle example: primary beam of kat-7 (beam pattern, freq dependence, spatial slice, freq slice) example: primary beam of meerkat (beam pattern, freq dependence, spatial slice, freq slice) extra: pointing accuracy, jitter, deformation the relation of a primary beam of a single dish to the PSF of an interferometric array how is the primary beam used in calibration and imaging? https://www.cv.nrao.edu/course/astr534/ReflectorAntennas.html https://www.cv.nrao.edu/course/astr534/RadioTelescopes.html white book chapter 3 malloux antenna theory 7.5 The Primary Beam (E- and P-Jones) <a id='instrum:sec:pb'></a> The primary beam of an antenna (also known as the radiation pattern) is the directional dependence of the gain of the antenna. The primary beam of the antenna is the most important direction-dependent propagation effect. It has a multiplicative effect in the image plane, and a convolutional effect in the visibility plane, due to the Fourier Transform relationship between the image and visibility planes. End of explanation Image(filename='figures/PrimaryBeam_1410MHz_labeled.png', width=700) Explanation: Figure 7.5.1: Schematic diagram of an antenna radiation pattern (Image taken from https://commons.wikimedia.org/wiki/File:Sidelobes_en.svg).<a id='instrum:fig:rad_pat'></a> Example: Primary beam of the JVLA (Jansky Very Large Array) End of explanation Image(filename='figures/BGvsRadius.png', width=500) Explanation: Figure 7.5.2: Primary beam of the JVLA at 1.41 GHz. <a id='instrum:fig:jvla_pb'></a> For a JVLA antenna, the primary beam gain varies with direction at a given frequency (see Figures 7.5.1 and 7.5.2), and with frequency towards a given direction (see Figures 7.5.3 and 7.5.4; the beam pattern scales with frequency, becoming more compact at higher frequencies). End of explanation Image(url="figures/beam_freq_variation.gif",width=400) Explanation: Figure 7.5.3: Gain across a horizontal cross section through the centre of the beam pattern shown in Figure 7.5.2. The peak at the centre corresponds to the mainlobe, with the first null and the first sidelobe on either side. 
<a id='instrum:fig:pb_horiz_xsec'></a> End of explanation Image(filename='figures/BGvsFreq.png', width=500) Explanation: Figure 7.5.4: Variation of the beam pattern for a JVLA antenna over the frequency range 1.3 $-$ 1.6 GHz. As the frequency increases, the beam pattern becomes more compact. <a id='instrum:fig:pb_freq_gif'></a> End of explanation Image(filename='figures/AltAzAntennaRotation.png', width=700) Explanation: Figure 7.5.5: Variation of the beam gain with frequency at the position of the source marked by a black dot in Figure 7.5.4. <a id='instrum:fig:pb_gain_freq'></a> A JVLA antenna, which has an alt-azimuth mount, rotates relative to the sky during the course of an observation. End of explanation Image(url="figures/beam_rotate.gif",width=400) Explanation: Figure 7.5.6: Relative rotation of the primary beam pattern wrt the sky during the course of an observation. The blue and green sources experience different beam gains at different times, while the beam gain for the red source at the phase centre remains unchanged. <a id='instrum:fig:pb_rot_sky'></a> The rotation of the primary beam causes the beam gain in a given direction to vary with time. End of explanation Image(filename='figures/BGvsHA.png', width=500) Explanation: Figure 7.5.7: Rotation of the beam pattern in Figure 7.5.2 during the course of an observation. <a id='instrum:fig:pb_rot_gif'></a> End of explanation Image(filename='figures/antenna_mounts.png', width=600) Explanation: Figure 7.5.8: Variation of the beam gain as a function of the hour angle at the position of the source marked by a black dot in Figure 7.5.7. <a id='instrum:fig:pb_gain_rot'></a> 7.7 Antenna mounts and parallactic angle <a id='instrum:sec:mounts_and_pa'></a> TODO: P-Jones fold into primary beam section main point: different polarization calibration issues depending on mount introduce transit arrays here? 7.7.1 Antenna mounts <a id='instrum:sec:ant_mounts'></a> Antenna mounts can be of two types: 1. Alt-azimuth mount 1. Equatorial mount An antenna with an alt-azimuth mount tracks a source in the sky by rotating along two axes - altitude (vertical) and azimuth (vertical). An antenna with an equatorial mount tracks a source by rotating about the polar axis (i.e., an axis which points towards the celestial pole). End of explanation Image(filename='figures/AltAzAntennaRotation.png', width=700) Explanation: Figure 7.7.1: Alt-azimuth and equatorial mounts illustrated for the case of an optical telescope (Image taken from http://www.oasi.org.uk/Telescopes/CFTOB/Mounts.gif). <a id='instrum:fig:ant_mounts'></a> For an antenna with an alt-azimuth mount, the antenna primary beam rotates relative to the sky during the course of an observation. For an equatorially-mounted antenna, the relative orientation of the antenna primary beam and the sky remains unchanged throughout the course of an observation. End of explanation Image(filename='figures/EquatorialAntennaRotation.png', width=700) Explanation: Figure 7.7.2: Relative rotation of the beam and the sky during the course of an observation, for an antenna with an alt-azimuth mount. <a id='instrum:fig:altaz'></a> End of explanation Image(filename='figures/ParallacticAngle.png', width=500) Explanation: Figure 7.7.3: Relative rotation of the beam and the sky during the course of an observation, for an antenna with an equatorial mount. 
<a id='instrum:fig:equatorial'></a> 7.7.2 Parallactic angle <a id='instrum:sec:pa'></a> The parallactic angle is the spherical angle between two great circles on the celestial sphere - one passing through the source and the zenith, and the other passing through the source and the north celestial pole. End of explanation
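Since the section closes by defining the parallactic angle in words, a small numerical illustration may help; it is not part of the original notebook, and the site latitude and source declination below are arbitrary example values. It uses the standard spherical-astronomy relation tan(q) = sin(H) / (tan(latitude) cos(declination) - sin(declination) cos(H)), with numpy already imported as np earlier in this section.

def parallactic_angle(hour_angle_rad, dec_rad, lat_rad):
    # Parallactic angle q for a source at the given hour angle and declination,
    # observed from the given geographic latitude (all angles in radians).
    return np.arctan2(np.sin(hour_angle_rad),
                      np.tan(lat_rad) * np.cos(dec_rad)
                      - np.sin(dec_rad) * np.cos(hour_angle_rad))

# Track an (arbitrary) dec = -30 deg source from a latitude = -30 deg site:
hour_angles = np.radians(np.linspace(-90.0, 90.0, 7))
q = parallactic_angle(hour_angles, np.radians(-30.0), np.radians(-30.0))
print(np.degrees(q))

An alt-azimuth mounted antenna sees its beam rotate on the sky by this angle during an observation, while an equatorially mounted antenna does not, which is why the two mounts behave differently in Figures 7.7.2 and 7.7.3.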
5,063
Given the following text description, write Python code to implement the functionality described below step by step Description: Using marigoso to Post a Comment to a Blogger Post A simple tutorial demonstrating how to use marigoso to automatically launch a browser and post a comment to a blog post. Install marigoso Execute the command below to install marigoso. Step1: Launch the browser We can use marigoso Test to launch a browser. Step2: Navigate Once the browser is launched it will display a blank page. We then instruct this browser to navigate to the blog post. Step3: Accept Google's Cookie Policy Once the browser has visited Blogger, it may or may not display the Cookie Policy of Google. We then use the "press_available" function of the browser to accept that policy if it is available. Step4: Switch to iFrame The comment section or comment form of the post is inside an iframe. In order for us to be able to interact with the comment form, we need to use the Selenium function called "switch_to.frame". First we obtain the iframe and then switch into it. Step5: Keyboard Type After switching to the comment iframe, we can finally interact with the comment form. We grab the comment field and enter anything we would like to say. Here we are simulating the action of typing something in the keyboard to enter into the comment form. Step6: Select an Item from a Dropdown Menu We then simulate mouse movement for selecting an item in a dropdown menu. You will not see any actual mouse movement in this case however, because we will just grab the dropdown and instruct it to select the item we want. Step7: After simulating the keyboard typing and dropdown selection, the page should now look like the image below Step8: After clicking the "Publish" button, the page will then be redirected to the Google Sign-in page. What goes beyond here is now an exercise for the reader. Please try it on your own. Write the next code that will automatically log you to your Google Account. Disposing the browser Once your finished, you can quit the the browser using the command below.
Python Code: pip install -U marigoso Explanation: Using marigoso to Post a Comment to a Blogger Post A simple tutorial demonstrating how to use marigoso to automatically launch a browser and post a comment to a blog post. Install marigoso Execute the command below to install marigoso. End of explanation from marigoso import Test browser = Test().launch_browser("Firefox") Explanation: Launch the browser We can use marigoso Test to launch a browser. End of explanation browser.get_url("http://pytestuk.blogspot.co.uk/2015/11/testing.html") Explanation: Navigate Once the browser is launched it will display a blank page. We then instruct this browser to navigate to the blog post. End of explanation browser.press_available("id=cookieChoiceDismiss") Explanation: Accept Google's Cookie Policy Once the browser has visited Blogger, it may or may not display the Cookie Policy of Google. We then use the "press_available" function of the browser to accept that policy if it is available. End of explanation iframe = browser.get_element("css=div#bc_0_0T_box iframe") browser.switch_to.frame(iframe) Explanation: Switch to iFrame The comment section or comment form of the post is inside an iframe. In order for us to be able to interact with the comment form, we need to use the Selenium function called "switch_to.frame". First we obtain the iframe and then switch into it. End of explanation browser.kb_type("id=commentBodyField", "An example of Selenium automation in Python.") Explanation: Keyboard Type After switching to the comment iframe, we can finally interact with the comment form. We grab the comment field and enter anything we would like to say. Here we are simulating the action of typing something in the keyboard to enter into the comment form. End of explanation browser.select_text("id=identityMenu", "Google Account") Explanation: Select an Item from a Dropdown Menu We then simulate mouse movement for selecting an item in a dropdown menu. You will not see any actual mouse movement in this case however, because we will just grab the dropdown and instruct it to select the item we want. End of explanation browser.submit_btn("Publish") Explanation: After simulating the keyboard typing and dropdown selection, the page should now look like the image below: Submit the Form Finally, we submit the form by using the "submit_btn" function, which will simulate pressing the "Publish" button in the page. End of explanation browser.quit() Explanation: After clicking the "Publish" button, the page will then be redirected to the Google Sign-in page. What goes beyond here is now an exercise for the reader. Please try it on your own. Write the next code that will automatically log you to your Google Account. Disposing the browser Once your finished, you can quit the the browser using the command below. End of explanation
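A rough sketch of that reader exercise, reusing only the marigoso calls already shown above (kb_type, press_available); every selector and credential below is a placeholder assumption, since Google's sign-in page changes over time and its element ids should be inspected before relying on them.

EMAIL = "your.account@gmail.com"        # placeholder, not a real account
PASSWORD = "your-password"              # placeholder, not a real password

# Assumed element locators for the two-step Google sign-in form.
browser.kb_type("id=identifierId", EMAIL)                  # email field (assumed id)
browser.press_available("id=identifierNext")               # "Next" button (assumed id)
browser.kb_type("css=input[type='password']", PASSWORD)    # password field (assumed selector)
browser.press_available("id=passwordNext")                 # "Next" button (assumed id)

Storing real credentials in a script is risky; reading them from environment variables keeps them out of version control.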
5,064
Given the following text description, write Python code to implement the functionality described below step by step Description: PS Clock Control This notebook demonstrates how to use the Clocks class to control the PL clocks. By default, there are at most 4 PL clocks enabled in the system. They can all be reprogrammed to valid clock rates. Whenever the overlay is downloaded, the required clocks will also be configured. References Step1: Note Step2: Set Clock Rates The easiest way is to set the attributes directly. Random clock rates are used in the following examples; the clock manager will set the clock rates with best effort. If the desired frequency and the closest possible clock rate differ by more than 1%, a warning will be raised. Step3: Reset Clock Rates Recover the original clock rates. This can be done by simply reloading the overlay (the overlay is downloaded automatically after instantiation).
Python Code: import os, warnings from pynq import PL from pynq import Overlay if not os.path.exists(PL.bitfile_name): warnings.warn('There is no overlay loaded after boot.', UserWarning) Explanation: PS Clock Control This notebook demonstrates how to use Clocks class to control the PL clocks. By default, there are at most 4 PL clocks enabled in the system. They all can be reprogrammed to valid clock rates. Whenever the overlay is downloaded, the required clocks will also be configured. References: https://www.xilinx.com/support/documentation/user_guides/ug585-Zynq-7000-TRM.pdf End of explanation from pynq import Clocks print(f'CPU: {Clocks.cpu_mhz:.6f}MHz') print(f'FCLK0: {Clocks.fclk0_mhz:.6f}MHz') print(f'FCLK1: {Clocks.fclk1_mhz:.6f}MHz') print(f'FCLK2: {Clocks.fclk2_mhz:.6f}MHz') print(f'FCLK3: {Clocks.fclk3_mhz:.6f}MHz') Explanation: Note: If you see a warning message in the above cell, it means that no overlay has been loaded after boot, hence the PL server is not aware of the current status of the PL. In that case you won't be able to run this notebook until you manually load an overlay at least once using: python from pynq import Overlay ol = Overlay('your_overlay.bit') If you do not see any warning message, you can safely proceed. Show All Clocks The following example shows all the current clock rates on the board. End of explanation Clocks.fclk0_mhz = 27.123456 Clocks.fclk1_mhz = 31.436546 Clocks.fclk2_mhz = 14.597643 Clocks.fclk3_mhz = 0.251954 print(f'CPU: {Clocks.cpu_mhz:.6f}MHz') print(f'FCLK0: {Clocks.fclk0_mhz:.6f}MHz') print(f'FCLK1: {Clocks.fclk1_mhz:.6f}MHz') print(f'FCLK2: {Clocks.fclk2_mhz:.6f}MHz') print(f'FCLK3: {Clocks.fclk3_mhz:.6f}MHz') Explanation: Set Clock Rates The easiest way is to set the attributes directly. Random clock rates are used in the following examples; the clock manager will set the clock rates with best effort. If the desired frequency and the closest possible clock rate differs more than 1%, a warning will be raised. End of explanation _ = Overlay(PL.bitfile_name) print(f'FCLK0: {Clocks.fclk0_mhz:.6f}MHz') print(f'FCLK1: {Clocks.fclk1_mhz:.6f}MHz') print(f'FCLK2: {Clocks.fclk2_mhz:.6f}MHz') print(f'FCLK3: {Clocks.fclk3_mhz:.6f}MHz') Explanation: Reset Clock Rates Recover the original clock rates. This can be done by simply reloading the overlay (overlay will be downloaded automatically after instantiation). End of explanation
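As a small extension of the reset step, the rates can also be restored without reloading the overlay by saving and rewriting the fclk attributes shown above; this is a sketch of one way to do it, not something the notebook itself provides, and the experimental rate is arbitrary.

# Save the current PL clock rates, experiment, then restore the saved values.
saved_rates = {name: getattr(Clocks, name)
               for name in ('fclk0_mhz', 'fclk1_mhz', 'fclk2_mhz', 'fclk3_mhz')}
try:
    Clocks.fclk0_mhz = 100.0    # any experimental rate
    print(f'FCLK0 now {Clocks.fclk0_mhz:.6f}MHz')
finally:
    for name, mhz in saved_rates.items():
        setattr(Clocks, name, mhz)   # write each saved rate back
print(f'FCLK0 restored to {Clocks.fclk0_mhz:.6f}MHz')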
5,065
Given the following text description, write Python code to implement the functionality described below step by step Description: ES-DOC CMIP6 Model Properties - Atmos MIP Era Step1: Document Authors Set document authors Step2: Document Contributors Specify document contributors Step3: Document Publication Specify document publication status Step4: Document Table of Contents 1. Key Properties --&gt; Overview 2. Key Properties --&gt; Resolution 3. Key Properties --&gt; Timestepping 4. Key Properties --&gt; Orography 5. Grid --&gt; Discretisation 6. Grid --&gt; Discretisation --&gt; Horizontal 7. Grid --&gt; Discretisation --&gt; Vertical 8. Dynamical Core 9. Dynamical Core --&gt; Top Boundary 10. Dynamical Core --&gt; Lateral Boundary 11. Dynamical Core --&gt; Diffusion Horizontal 12. Dynamical Core --&gt; Advection Tracers 13. Dynamical Core --&gt; Advection Momentum 14. Radiation 15. Radiation --&gt; Shortwave Radiation 16. Radiation --&gt; Shortwave GHG 17. Radiation --&gt; Shortwave Cloud Ice 18. Radiation --&gt; Shortwave Cloud Liquid 19. Radiation --&gt; Shortwave Cloud Inhomogeneity 20. Radiation --&gt; Shortwave Aerosols 21. Radiation --&gt; Shortwave Gases 22. Radiation --&gt; Longwave Radiation 23. Radiation --&gt; Longwave GHG 24. Radiation --&gt; Longwave Cloud Ice 25. Radiation --&gt; Longwave Cloud Liquid 26. Radiation --&gt; Longwave Cloud Inhomogeneity 27. Radiation --&gt; Longwave Aerosols 28. Radiation --&gt; Longwave Gases 29. Turbulence Convection 30. Turbulence Convection --&gt; Boundary Layer Turbulence 31. Turbulence Convection --&gt; Deep Convection 32. Turbulence Convection --&gt; Shallow Convection 33. Microphysics Precipitation 34. Microphysics Precipitation --&gt; Large Scale Precipitation 35. Microphysics Precipitation --&gt; Large Scale Cloud Microphysics 36. Cloud Scheme 37. Cloud Scheme --&gt; Optical Cloud Properties 38. Cloud Scheme --&gt; Sub Grid Scale Water Distribution 39. Cloud Scheme --&gt; Sub Grid Scale Ice Distribution 40. Observation Simulation 41. Observation Simulation --&gt; Isscp Attributes 42. Observation Simulation --&gt; Cosp Attributes 43. Observation Simulation --&gt; Radar Inputs 44. Observation Simulation --&gt; Lidar Inputs 45. Gravity Waves 46. Gravity Waves --&gt; Orographic Gravity Waves 47. Gravity Waves --&gt; Non Orographic Gravity Waves 48. Solar 49. Solar --&gt; Solar Pathways 50. Solar --&gt; Solar Constant 51. Solar --&gt; Orbital Parameters 52. Solar --&gt; Insolation Ozone 53. Volcanos 54. Volcanos --&gt; Volcanoes Treatment 1. Key Properties --&gt; Overview Top level key properties 1.1. Model Overview Is Required Step5: 1.2. Model Name Is Required Step6: 1.3. Model Family Is Required Step7: 1.4. Basic Approximations Is Required Step8: 2. Key Properties --&gt; Resolution Characteristics of the model resolution 2.1. Horizontal Resolution Name Is Required Step9: 2.2. Canonical Horizontal Resolution Is Required Step10: 2.3. Range Horizontal Resolution Is Required Step11: 2.4. Number Of Vertical Levels Is Required Step12: 2.5. High Top Is Required Step13: 3. Key Properties --&gt; Timestepping Characteristics of the atmosphere model time stepping 3.1. Timestep Dynamics Is Required Step14: 3.2. Timestep Shortwave Radiative Transfer Is Required Step15: 3.3. Timestep Longwave Radiative Transfer Is Required Step16: 4. Key Properties --&gt; Orography Characteristics of the model orography 4.1. Type Is Required Step17: 4.2. Changes Is Required Step18: 5. Grid --&gt; Discretisation Atmosphere grid discretisation 5.1. 
Overview Is Required Step19: 6. Grid --&gt; Discretisation --&gt; Horizontal Atmosphere discretisation in the horizontal 6.1. Scheme Type Is Required Step20: 6.2. Scheme Method Is Required Step21: 6.3. Scheme Order Is Required Step22: 6.4. Horizontal Pole Is Required Step23: 6.5. Grid Type Is Required Step24: 7. Grid --&gt; Discretisation --&gt; Vertical Atmosphere discretisation in the vertical 7.1. Coordinate Type Is Required Step25: 8. Dynamical Core Characteristics of the dynamical core 8.1. Overview Is Required Step26: 8.2. Name Is Required Step27: 8.3. Timestepping Type Is Required Step28: 8.4. Prognostic Variables Is Required Step29: 9. Dynamical Core --&gt; Top Boundary Type of boundary layer at the top of the model 9.1. Top Boundary Condition Is Required Step30: 9.2. Top Heat Is Required Step31: 9.3. Top Wind Is Required Step32: 10. Dynamical Core --&gt; Lateral Boundary Type of lateral boundary condition (if the model is a regional model) 10.1. Condition Is Required Step33: 11. Dynamical Core --&gt; Diffusion Horizontal Horizontal diffusion scheme 11.1. Scheme Name Is Required Step34: 11.2. Scheme Method Is Required Step35: 12. Dynamical Core --&gt; Advection Tracers Tracer advection scheme 12.1. Scheme Name Is Required Step36: 12.2. Scheme Characteristics Is Required Step37: 12.3. Conserved Quantities Is Required Step38: 12.4. Conservation Method Is Required Step39: 13. Dynamical Core --&gt; Advection Momentum Momentum advection scheme 13.1. Scheme Name Is Required Step40: 13.2. Scheme Characteristics Is Required Step41: 13.3. Scheme Staggering Type Is Required Step42: 13.4. Conserved Quantities Is Required Step43: 13.5. Conservation Method Is Required Step44: 14. Radiation Characteristics of the atmosphere radiation process 14.1. Aerosols Is Required Step45: 15. Radiation --&gt; Shortwave Radiation Properties of the shortwave radiation scheme 15.1. Overview Is Required Step46: 15.2. Name Is Required Step47: 15.3. Spectral Integration Is Required Step48: 15.4. Transport Calculation Is Required Step49: 15.5. Spectral Intervals Is Required Step50: 16. Radiation --&gt; Shortwave GHG Representation of greenhouse gases in the shortwave radiation scheme 16.1. Greenhouse Gas Complexity Is Required Step51: 16.2. ODS Is Required Step52: 16.3. Other Flourinated Gases Is Required Step53: 17. Radiation --&gt; Shortwave Cloud Ice Shortwave radiative properties of ice crystals in clouds 17.1. General Interactions Is Required Step54: 17.2. Physical Representation Is Required Step55: 17.3. Optical Methods Is Required Step56: 18. Radiation --&gt; Shortwave Cloud Liquid Shortwave radiative properties of liquid droplets in clouds 18.1. General Interactions Is Required Step57: 18.2. Physical Representation Is Required Step58: 18.3. Optical Methods Is Required Step59: 19. Radiation --&gt; Shortwave Cloud Inhomogeneity Cloud inhomogeneity in the shortwave radiation scheme 19.1. Cloud Inhomogeneity Is Required Step60: 20. Radiation --&gt; Shortwave Aerosols Shortwave radiative properties of aerosols 20.1. General Interactions Is Required Step61: 20.2. Physical Representation Is Required Step62: 20.3. Optical Methods Is Required Step63: 21. Radiation --&gt; Shortwave Gases Shortwave radiative properties of gases 21.1. General Interactions Is Required Step64: 22. Radiation --&gt; Longwave Radiation Properties of the longwave radiation scheme 22.1. Overview Is Required Step65: 22.2. Name Is Required Step66: 22.3. Spectral Integration Is Required Step67: 22.4. 
Transport Calculation Is Required Step68: 22.5. Spectral Intervals Is Required Step69: 23. Radiation --&gt; Longwave GHG Representation of greenhouse gases in the longwave radiation scheme 23.1. Greenhouse Gas Complexity Is Required Step70: 23.2. ODS Is Required Step71: 23.3. Other Flourinated Gases Is Required Step72: 24. Radiation --&gt; Longwave Cloud Ice Longwave radiative properties of ice crystals in clouds 24.1. General Interactions Is Required Step73: 24.2. Physical Reprenstation Is Required Step74: 24.3. Optical Methods Is Required Step75: 25. Radiation --&gt; Longwave Cloud Liquid Longwave radiative properties of liquid droplets in clouds 25.1. General Interactions Is Required Step76: 25.2. Physical Representation Is Required Step77: 25.3. Optical Methods Is Required Step78: 26. Radiation --&gt; Longwave Cloud Inhomogeneity Cloud inhomogeneity in the longwave radiation scheme 26.1. Cloud Inhomogeneity Is Required Step79: 27. Radiation --&gt; Longwave Aerosols Longwave radiative properties of aerosols 27.1. General Interactions Is Required Step80: 27.2. Physical Representation Is Required Step81: 27.3. Optical Methods Is Required Step82: 28. Radiation --&gt; Longwave Gases Longwave radiative properties of gases 28.1. General Interactions Is Required Step83: 29. Turbulence Convection Atmosphere Convective Turbulence and Clouds 29.1. Overview Is Required Step84: 30. Turbulence Convection --&gt; Boundary Layer Turbulence Properties of the boundary layer turbulence scheme 30.1. Scheme Name Is Required Step85: 30.2. Scheme Type Is Required Step86: 30.3. Closure Order Is Required Step87: 30.4. Counter Gradient Is Required Step88: 31. Turbulence Convection --&gt; Deep Convection Properties of the deep convection scheme 31.1. Scheme Name Is Required Step89: 31.2. Scheme Type Is Required Step90: 31.3. Scheme Method Is Required Step91: 31.4. Processes Is Required Step92: 31.5. Microphysics Is Required Step93: 32. Turbulence Convection --&gt; Shallow Convection Properties of the shallow convection scheme 32.1. Scheme Name Is Required Step94: 32.2. Scheme Type Is Required Step95: 32.3. Scheme Method Is Required Step96: 32.4. Processes Is Required Step97: 32.5. Microphysics Is Required Step98: 33. Microphysics Precipitation Large Scale Cloud Microphysics and Precipitation 33.1. Overview Is Required Step99: 34. Microphysics Precipitation --&gt; Large Scale Precipitation Properties of the large scale precipitation scheme 34.1. Scheme Name Is Required Step100: 34.2. Hydrometeors Is Required Step101: 35. Microphysics Precipitation --&gt; Large Scale Cloud Microphysics Properties of the large scale cloud microphysics scheme 35.1. Scheme Name Is Required Step102: 35.2. Processes Is Required Step103: 36. Cloud Scheme Characteristics of the cloud scheme 36.1. Overview Is Required Step104: 36.2. Name Is Required Step105: 36.3. Atmos Coupling Is Required Step106: 36.4. Uses Separate Treatment Is Required Step107: 36.5. Processes Is Required Step108: 36.6. Prognostic Scheme Is Required Step109: 36.7. Diagnostic Scheme Is Required Step110: 36.8. Prognostic Variables Is Required Step111: 37. Cloud Scheme --&gt; Optical Cloud Properties Optical cloud properties 37.1. Cloud Overlap Method Is Required Step112: 37.2. Cloud Inhomogeneity Is Required Step113: 38. Cloud Scheme --&gt; Sub Grid Scale Water Distribution Sub-grid scale water distribution 38.1. Type Is Required Step114: 38.2. Function Name Is Required Step115: 38.3. Function Order Is Required Step116: 38.4. Convection Coupling Is Required Step117: 39. 
Cloud Scheme --&gt; Sub Grid Scale Ice Distribution Sub-grid scale ice distribution 39.1. Type Is Required Step118: 39.2. Function Name Is Required Step119: 39.3. Function Order Is Required Step120: 39.4. Convection Coupling Is Required Step121: 40. Observation Simulation Characteristics of observation simulation 40.1. Overview Is Required Step122: 41. Observation Simulation --&gt; Isscp Attributes ISSCP Characteristics 41.1. Top Height Estimation Method Is Required Step123: 41.2. Top Height Direction Is Required Step124: 42. Observation Simulation --&gt; Cosp Attributes CFMIP Observational Simulator Package attributes 42.1. Run Configuration Is Required Step125: 42.2. Number Of Grid Points Is Required Step126: 42.3. Number Of Sub Columns Is Required Step127: 42.4. Number Of Levels Is Required Step128: 43. Observation Simulation --&gt; Radar Inputs Characteristics of the cloud radar simulator 43.1. Frequency Is Required Step129: 43.2. Type Is Required Step130: 43.3. Gas Absorption Is Required Step131: 43.4. Effective Radius Is Required Step132: 44. Observation Simulation --&gt; Lidar Inputs Characteristics of the cloud lidar simulator 44.1. Ice Types Is Required Step133: 44.2. Overlap Is Required Step134: 45. Gravity Waves Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources. 45.1. Overview Is Required Step135: 45.2. Sponge Layer Is Required Step136: 45.3. Background Is Required Step137: 45.4. Subgrid Scale Orography Is Required Step138: 46. Gravity Waves --&gt; Orographic Gravity Waves Gravity waves generated due to the presence of orography 46.1. Name Is Required Step139: 46.2. Source Mechanisms Is Required Step140: 46.3. Calculation Method Is Required Step141: 46.4. Propagation Scheme Is Required Step142: 46.5. Dissipation Scheme Is Required Step143: 47. Gravity Waves --&gt; Non Orographic Gravity Waves Gravity waves generated by non-orographic processes. 47.1. Name Is Required Step144: 47.2. Source Mechanisms Is Required Step145: 47.3. Calculation Method Is Required Step146: 47.4. Propagation Scheme Is Required Step147: 47.5. Dissipation Scheme Is Required Step148: 48. Solar Top of atmosphere solar insolation characteristics 48.1. Overview Is Required Step149: 49. Solar --&gt; Solar Pathways Pathways for solar forcing of the atmosphere 49.1. Pathways Is Required Step150: 50. Solar --&gt; Solar Constant Solar constant and top of atmosphere insolation characteristics 50.1. Type Is Required Step151: 50.2. Fixed Value Is Required Step152: 50.3. Transient Characteristics Is Required Step153: 51. Solar --&gt; Orbital Parameters Orbital parameters and top of atmosphere insolation characteristics 51.1. Type Is Required Step154: 51.2. Fixed Reference Date Is Required Step155: 51.3. Transient Method Is Required Step156: 51.4. Computation Method Is Required Step157: 52. Solar --&gt; Insolation Ozone Impact of solar insolation on stratospheric ozone 52.1. Solar Ozone Impact Is Required Step158: 53. Volcanos Characteristics of the implementation of volcanoes 53.1. Overview Is Required Step159: 54. Volcanos --&gt; Volcanoes Treatment Treatment of volcanoes in the atmosphere 54.1. Volcanoes Implementation Is Required
Python Code: # DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'noaa-gfdl', 'gfdl-am4', 'atmos') Explanation: ES-DOC CMIP6 Model Properties - Atmos MIP Era: CMIP6 Institute: NOAA-GFDL Source ID: GFDL-AM4 Topic: Atmos Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos. Properties: 156 (127 required) Model descriptions: Model description details Initialized From: CMIP5:GFDL-CM3 Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-20 15:02:34 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) Explanation: Document Authors Set document authors End of explanation # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) Explanation: Document Contributors Specify document contributors End of explanation # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) Explanation: Document Publication Specify document publication status End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.overview.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: Document Table of Contents 1. Key Properties --&gt; Overview 2. Key Properties --&gt; Resolution 3. Key Properties --&gt; Timestepping 4. Key Properties --&gt; Orography 5. Grid --&gt; Discretisation 6. Grid --&gt; Discretisation --&gt; Horizontal 7. Grid --&gt; Discretisation --&gt; Vertical 8. Dynamical Core 9. Dynamical Core --&gt; Top Boundary 10. Dynamical Core --&gt; Lateral Boundary 11. Dynamical Core --&gt; Diffusion Horizontal 12. Dynamical Core --&gt; Advection Tracers 13. Dynamical Core --&gt; Advection Momentum 14. Radiation 15. Radiation --&gt; Shortwave Radiation 16. Radiation --&gt; Shortwave GHG 17. Radiation --&gt; Shortwave Cloud Ice 18. Radiation --&gt; Shortwave Cloud Liquid 19. Radiation --&gt; Shortwave Cloud Inhomogeneity 20. Radiation --&gt; Shortwave Aerosols 21. Radiation --&gt; Shortwave Gases 22. Radiation --&gt; Longwave Radiation 23. Radiation --&gt; Longwave GHG 24. Radiation --&gt; Longwave Cloud Ice 25. Radiation --&gt; Longwave Cloud Liquid 26. Radiation --&gt; Longwave Cloud Inhomogeneity 27. Radiation --&gt; Longwave Aerosols 28. Radiation --&gt; Longwave Gases 29. Turbulence Convection 30. Turbulence Convection --&gt; Boundary Layer Turbulence 31. Turbulence Convection --&gt; Deep Convection 32. Turbulence Convection --&gt; Shallow Convection 33. Microphysics Precipitation 34. Microphysics Precipitation --&gt; Large Scale Precipitation 35. Microphysics Precipitation --&gt; Large Scale Cloud Microphysics 36. Cloud Scheme 37. Cloud Scheme --&gt; Optical Cloud Properties 38. Cloud Scheme --&gt; Sub Grid Scale Water Distribution 39. Cloud Scheme --&gt; Sub Grid Scale Ice Distribution 40. Observation Simulation 41. Observation Simulation --&gt; Isscp Attributes 42. Observation Simulation --&gt; Cosp Attributes 43. Observation Simulation --&gt; Radar Inputs 44. Observation Simulation --&gt; Lidar Inputs 45. Gravity Waves 46. Gravity Waves --&gt; Orographic Gravity Waves 47. Gravity Waves --&gt; Non Orographic Gravity Waves 48. Solar 49. Solar --&gt; Solar Pathways 50. Solar --&gt; Solar Constant 51. Solar --&gt; Orbital Parameters 52. 
Solar --&gt; Insolation Ozone 53. Volcanos 54. Volcanos --&gt; Volcanoes Treatment 1. Key Properties --&gt; Overview Top level key properties 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of atmosphere model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.overview.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.overview.model_family') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "AGCM" # "ARCM" # "Other: [Please specify]" DOC.set_value("AGCM") Explanation: 1.3. Model Family Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of atmospheric model. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "primitive equations" # "non-hydrostatic" # "anelastic" # "Boussinesq" # "hydrostatic" # "quasi-hydrostatic" # "Other: [Please specify]" DOC.set_value("hydrostatic") DOC.set_value("primitive equations") Explanation: 1.4. Basic Approximations Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Basic approximations made in the atmosphere. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2. Key Properties --&gt; Resolution Characteristics of the model resolution 2.1. Horizontal Resolution Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2.2. Canonical Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2.3. Range Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Range of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 2.4. Number Of Vertical Levels Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of vertical levels resolved on the computational grid. 
End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.resolution.high_top') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 2.5. High Top Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") DOC.set_value("Advection_tracer = 30 min, physics = 30 min") Explanation: 3. Key Properties --&gt; Timestepping Characteristics of the atmosphere model time stepping 3.1. Timestep Dynamics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Timestep for the dynamics, e.g. 30 min. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") DOC.set_value("3 hours") Explanation: 3.2. Timestep Shortwave Radiative Transfer Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for the shortwave radiative transfer, e.g. 1.5 hours. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.3. Timestep Longwave Radiative Transfer Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for the longwave radiative transfer, e.g. 3 hours. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.orography.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "present day" # "modified" DOC.set_value("present day") Explanation: 4. Key Properties --&gt; Orography Characteristics of the model orography 4.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time adaptation of the orography. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.key_properties.orography.changes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "related to ice sheets" # "related to tectonics" # "modified mean" # "modified variance if taken into account in model (cf gravity waves)" # TODO - please enter value(s) Explanation: 4.2. Changes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N If the orography type is modified describe the time adaptation changes. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5. Grid --&gt; Discretisation Atmosphere grid discretisation 5.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of grid discretisation in the atmosphere End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "spectral" # "fixed grid" # "Other: [Please specify]" DOC.set_value("fixed grid") Explanation: 6. Grid --&gt; Discretisation --&gt; Horizontal Atmosphere discretisation in the horizontal 6.1. Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal discretisation type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "finite elements" # "finite volumes" # "finite difference" # "centered finite difference" DOC.set_value("finite volumes") Explanation: 6.2. Scheme Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal discretisation method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "second" # "third" # "fourth" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 6.3. Scheme Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal discretisation function order End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "filter" # "pole rotation" # "artificial island" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 6.4. Horizontal Pole Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Horizontal discretisation pole singularity treatment End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Gaussian" # "Latitude-Longitude" # "Cubed-Sphere" # "Icosahedral" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 6.5. Grid Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal grid type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "isobaric" # "sigma" # "hybrid sigma-pressure" # "hybrid pressure" # "vertically lagrangian" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 7. Grid --&gt; Discretisation --&gt; Vertical Atmosphere discretisation in the vertical 7.1. Coordinate Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Type of vertical coordinate system End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8. Dynamical Core Characteristics of the dynamical core 8.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of atmosphere dynamical core End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.dynamical_core.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.2. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the dynamical core of the model. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Adams-Bashforth" # "explicit" # "implicit" # "semi-implicit" # "leap frog" # "multi-step" # "Runge Kutta fifth order" # "Runge Kutta second order" # "Runge Kutta third order" # "Other: [Please specify]" DOC.set_value("explicit") Explanation: 8.3. Timestepping Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Timestepping framework type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "surface pressure" # "wind components" # "divergence/curl" # "temperature" # "potential temperature" # "total water" # "water vapour" # "water liquid" # "water ice" # "total water moments" # "clouds" # "radiation" # "Other: [Please specify]" DOC.set_value("Other: vapour/solid/liquid") DOC.set_value("clouds") DOC.set_value("potential temperature") DOC.set_value("wind components") Explanation: 8.4. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of the model prognostic variables End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "sponge layer" # "radiation boundary condition" # "Other: [Please specify]" DOC.set_value("sponge layer") Explanation: 9. Dynamical Core --&gt; Top Boundary Type of boundary layer at the top of the model 9.1. Top Boundary Condition Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Top boundary condition End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") DOC.set_value("Zero flux") Explanation: 9.2. Top Heat Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Top boundary heat treatment End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.3. Top Wind Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Top boundary wind treatment End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "sponge layer" # "radiation boundary condition" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 10. Dynamical Core --&gt; Lateral Boundary Type of lateral boundary condition (if the model is a regional model) 10.1. Condition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Type of lateral boundary condition End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") DOC.set_value("Monotonic constraint and divergence damping") Explanation: 11. Dynamical Core --&gt; Diffusion Horizontal Horizontal diffusion scheme 11.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Horizontal diffusion scheme name End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "iterated Laplacian" # "bi-harmonic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11.2. Scheme Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal diffusion scheme method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Heun" # "Roe and VanLeer" # "Roe and Superbee" # "Prather" # "UTOPIA" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 12. Dynamical Core --&gt; Advection Tracers Tracer advection scheme 12.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Tracer advection scheme name End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Eulerian" # "modified Euler" # "Lagrangian" # "semi-Lagrangian" # "cubic semi-Lagrangian" # "quintic semi-Lagrangian" # "mass-conserving" # "finite volume" # "flux-corrected" # "linear" # "quadratic" # "quartic" # "Other: [Please specify]" DOC.set_value("finite volume") Explanation: 12.2. Scheme Characteristics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Tracer advection scheme characteristics End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "dry mass" # "tracer mass" # "Other: [Please specify]" DOC.set_value("Other: mass") Explanation: 12.3. Conserved Quantities Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Tracer advection scheme conserved quantities End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "conservation fixer" # "Priestley algorithm" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 12.4. Conservation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Tracer advection scheme conservation method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "VanLeer" # "Janjic" # "SUPG (Streamline Upwind Petrov-Galerkin)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13. Dynamical Core --&gt; Advection Momentum Momentum advection scheme 13.1. 
Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Momentum advection schemes name End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "2nd order" # "4th order" # "cell-centred" # "staggered grid" # "semi-staggered grid" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.2. Scheme Characteristics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Momentum advection scheme characteristics End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Arakawa B-grid" # "Arakawa C-grid" # "Arakawa D-grid" # "Arakawa E-grid" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.3. Scheme Staggering Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Momentum advection scheme staggering type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Angular momentum" # "Horizontal momentum" # "Enstrophy" # "Mass" # "Total energy" # "Vorticity" # "Other: [Please specify]" DOC.set_value("Vorticity") Explanation: 13.4. Conserved Quantities Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Momentum advection scheme conserved quantities End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "conservation fixer" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.5. Conservation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Momentum advection scheme conservation method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.aerosols') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "sulphate" # "nitrate" # "sea salt" # "dust" # "ice" # "organic" # "BC (black carbon / soot)" # "SOA (secondary organic aerosols)" # "POM (particulate organic matter)" # "polar stratospheric ice" # "NAT (nitric acid trihydrate)" # "NAD (nitric acid dihydrate)" # "STS (supercooled ternary solution aerosol particle)" # "Other: [Please specify]" DOC.set_value("BC (black carbon / soot)") DOC.set_value("dust") DOC.set_value("organic") DOC.set_value("sea salt") DOC.set_value("sulphate") Explanation: 14. Radiation Characteristics of the atmosphere radiation process 14.1. Aerosols Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Aerosols whose radiative effect is taken into account in the atmosphere model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15. Radiation --&gt; Shortwave Radiation Properties of the shortwave radiation scheme 15.1. 
Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of shortwave radiation in the atmosphere End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15.2. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the shortwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "wide-band model" # "correlated-k" # "exponential sum fitting" # "Other: [Please specify]" DOC.set_value("wide-band model") Explanation: 15.3. Spectral Integration Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Shortwave radiation scheme spectral integration End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "two-stream" # "layer interaction" # "bulk" # "adaptive" # "multi-stream" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.4. Transport Calculation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Shortwave radiation transport calculation methods End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) DOC.set_value(18) Explanation: 15.5. Spectral Intervals Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Shortwave radiation scheme number of spectral intervals End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CO2" # "CH4" # "N2O" # "CFC-11 eq" # "CFC-12 eq" # "HFC-134a eq" # "Explicit ODSs" # "Explicit other fluorinated gases" # "O3" # "H2O" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 16. Radiation --&gt; Shortwave GHG Representation of greenhouse gases in the shortwave radiation scheme 16.1. Greenhouse Gas Complexity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CFC-12" # "CFC-11" # "CFC-113" # "CFC-114" # "CFC-115" # "HCFC-22" # "HCFC-141b" # "HCFC-142b" # "Halon-1211" # "Halon-1301" # "Halon-2402" # "methyl chloroform" # "carbon tetrachloride" # "methyl chloride" # "methylene chloride" # "chloroform" # "methyl bromide" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 16.2. ODS Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "HFC-134a" # "HFC-23" # "HFC-32" # "HFC-125" # "HFC-143a" # "HFC-152a" # "HFC-227ea" # "HFC-236fa" # "HFC-245fa" # "HFC-365mfc" # "HFC-43-10mee" # "CF4" # "C2F6" # "C3F8" # "C4F10" # "C5F12" # "C6F14" # "C7F16" # "C8F18" # "c-C4F8" # "NF3" # "SF6" # "SO2F2" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 16.3. Other Flourinated Gases Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Other flourinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17. Radiation --&gt; Shortwave Cloud Ice Shortwave radiative properties of ice crystals in clouds 17.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General shortwave radiative interactions with cloud ice crystals End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "bi-modal size distribution" # "ensemble of ice crystals" # "mean projected area" # "ice water path" # "crystal asymmetry" # "crystal aspect ratio" # "effective crystal radius" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.2. Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of cloud ice crystals in the shortwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "T-matrix" # "geometric optics" # "finite difference time domain (FDTD)" # "Mie theory" # "anomalous diffraction approximation" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to cloud ice crystals in the shortwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 18. Radiation --&gt; Shortwave Cloud Liquid Shortwave radiative properties of liquid droplets in clouds 18.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General shortwave radiative interactions with cloud liquid droplets End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "cloud droplet number concentration" # "effective cloud droplet radii" # "droplet size distribution" # "liquid water path" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 18.2. Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of cloud liquid droplets in the shortwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "geometric optics" # "Mie theory" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 18.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Monte Carlo Independent Column Approximation" # "Triplecloud" # "analytic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 19. Radiation --&gt; Shortwave Cloud Inhomogeneity Cloud inhomogeneity in the shortwave radiation scheme 19.1. Cloud Inhomogeneity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method for taking into account horizontal cloud inhomogeneity End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 20. Radiation --&gt; Shortwave Aerosols Shortwave radiative properties of aerosols 20.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General shortwave radiative interactions with aerosols End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "number concentration" # "effective radii" # "size distribution" # "asymmetry" # "aspect ratio" # "mixing state" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 20.2. Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of aerosols in the shortwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "T-matrix" # "geometric optics" # "finite difference time domain (FDTD)" # "Mie theory" # "anomalous diffraction approximation" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 20.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to aerosols in the shortwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 21. Radiation --&gt; Shortwave Gases Shortwave radiative properties of gases 21.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General shortwave radiative interactions with gases End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 22. Radiation --&gt; Longwave Radiation Properties of the longwave radiation scheme 22.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of longwave radiation in the atmosphere End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 22.2. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the longwave radiation scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "wide-band model" # "correlated-k" # "exponential sum fitting" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 22.3. Spectral Integration Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Longwave radiation scheme spectral integration End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "two-stream" # "layer interaction" # "bulk" # "adaptive" # "multi-stream" # "Other: [Please specify]" DOC.set_value("layer interaction") Explanation: 22.4. Transport Calculation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Longwave radiation transport calculation methods End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) DOC.set_value(10) Explanation: 22.5. Spectral Intervals Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Longwave radiation scheme number of spectral intervals End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CO2" # "CH4" # "N2O" # "CFC-11 eq" # "CFC-12 eq" # "HFC-134a eq" # "Explicit ODSs" # "Explicit other fluorinated gases" # "O3" # "H2O" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 23. Radiation --&gt; Longwave GHG Representation of greenhouse gases in the longwave radiation scheme 23.1. 
Greenhouse Gas Complexity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CFC-12" # "CFC-11" # "CFC-113" # "CFC-114" # "CFC-115" # "HCFC-22" # "HCFC-141b" # "HCFC-142b" # "Halon-1211" # "Halon-1301" # "Halon-2402" # "methyl chloroform" # "carbon tetrachloride" # "methyl chloride" # "methylene chloride" # "chloroform" # "methyl bromide" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 23.2. ODS Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "HFC-134a" # "HFC-23" # "HFC-32" # "HFC-125" # "HFC-143a" # "HFC-152a" # "HFC-227ea" # "HFC-236fa" # "HFC-245fa" # "HFC-365mfc" # "HFC-43-10mee" # "CF4" # "C2F6" # "C3F8" # "C4F10" # "C5F12" # "C6F14" # "C7F16" # "C8F18" # "c-C4F8" # "NF3" # "SF6" # "SO2F2" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 23.3. Other Flourinated Gases Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Other flourinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 24. Radiation --&gt; Longwave Cloud Ice Longwave radiative properties of ice crystals in clouds 24.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General longwave radiative interactions with cloud ice crystals End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "bi-modal size distribution" # "ensemble of ice crystals" # "mean projected area" # "ice water path" # "crystal asymmetry" # "crystal aspect ratio" # "effective crystal radius" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 24.2. Physical Reprenstation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of cloud ice crystals in the longwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "T-matrix" # "geometric optics" # "finite difference time domain (FDTD)" # "Mie theory" # "anomalous diffraction approximation" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 24.3. 
Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to cloud ice crystals in the longwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25. Radiation --&gt; Longwave Cloud Liquid Longwave radiative properties of liquid droplets in clouds 25.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General longwave radiative interactions with cloud liquid droplets End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "cloud droplet number concentration" # "effective cloud droplet radii" # "droplet size distribution" # "liquid water path" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25.2. Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of cloud liquid droplets in the longwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "geometric optics" # "Mie theory" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to cloud liquid droplets in the longwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Monte Carlo Independent Column Approximation" # "Triplecloud" # "analytic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 26. Radiation --&gt; Longwave Cloud Inhomogeneity Cloud inhomogeneity in the longwave radiation scheme 26.1. Cloud Inhomogeneity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method for taking into account horizontal cloud inhomogeneity End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 27. Radiation --&gt; Longwave Aerosols Longwave radiative properties of aerosols 27.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General longwave radiative interactions with aerosols End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "number concentration" # "effective radii" # "size distribution" # "asymmetry" # "aspect ratio" # "mixing state" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 27.2. 
Physical Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical representation of aerosols in the longwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "T-matrix" # "geometric optics" # "finite difference time domain (FDTD)" # "Mie theory" # "anomalous diffraction approximation" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 27.3. Optical Methods Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Optical methods applicable to aerosols in the longwave radiation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "scattering" # "emission/absorption" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 28. Radiation --&gt; Longwave Gases Longwave radiative properties of gases 28.1. General Interactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General longwave radiative interactions with gases End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 29. Turbulence Convection Atmosphere Convective Turbulence and Clouds 29.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of atmosphere convection and turbulence End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Mellor-Yamada" # "Holtslag-Boville" # "EDMF" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 30. Turbulence Convection --&gt; Boundary Layer Turbulence Properties of the boundary layer turbulence scheme 30.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Boundary layer turbulence scheme name End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "TKE prognostic" # "TKE diagnostic" # "TKE coupled with water" # "vertical profile of Kz" # "non-local diffusion" # "Monin-Obukhov similarity" # "Coastal Buddy Scheme" # "Coupled with convection" # "Coupled with gravity waves" # "Depth capped at cloud base" # "Other: [Please specify]" DOC.set_value("vertical profile of Kz") Explanation: 30.2. Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Boundary layer turbulence scheme type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 30.3. Closure Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Boundary layer turbulence scheme closure order End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False DOC.set_value(True) Explanation: 30.4. Counter Gradient Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Uses boundary layer turbulence scheme counter gradient End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") DOC.set_value("Donner(1993) deep cumulus") Explanation: 31. Turbulence Convection --&gt; Deep Convection Properties of the deep convection scheme 31.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Deep convection scheme name End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "mass-flux" # "adjustment" # "plume ensemble" # "Other: [Please specify]" DOC.set_value("mass-flux") Explanation: 31.2. Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Deep convection scheme type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "CAPE" # "bulk" # "ensemble" # "CAPE/WFN based" # "TKE/CIN based" # "Other: [Please specify]" DOC.set_value("CAPE") Explanation: 31.3. Scheme Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Deep convection scheme method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "vertical momentum transport" # "convective momentum transport" # "entrainment" # "detrainment" # "penetrative convection" # "updrafts" # "downdrafts" # "radiative effect of anvils" # "re-evaporation of convective precipitation" # "Other: [Please specify]" DOC.set_value("convective momentum transport") DOC.set_value("detrainment") DOC.set_value("entrainment") DOC.set_value("penetrative convection") DOC.set_value("radiative effect of anvils") DOC.set_value("updrafts") DOC.set_value("vertical momentum transport") Explanation: 31.4. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical processes taken into account in the parameterisation of deep convection End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "tuning parameter based" # "single moment" # "two moment" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 31.5. Microphysics Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeor and water vapor from updrafts End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") DOC.set_value("Bretherton et al. 
(2004)") Explanation: 32. Turbulence Convection --&gt; Shallow Convection Properties of the shallow convection scheme 32.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Shallow convection scheme name End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "mass-flux" # "cumulus-capped boundary layer" # "Other: [Please specify]" DOC.set_value("mass-flux") Explanation: 32.2. Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N shallow convection scheme type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "same as deep (unified)" # "included in boundary layer turbulence" # "separate diagnosis" DOC.set_value("separate diagnosis") Explanation: 32.3. Scheme Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 shallow convection scheme method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "convective momentum transport" # "entrainment" # "detrainment" # "penetrative convection" # "re-evaporation of convective precipitation" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 32.4. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Physical processes taken into account in the parameterisation of shallow convection End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "tuning parameter based" # "single moment" # "two moment" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 32.5. Microphysics Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Microphysics scheme for shallow convection End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 33. Microphysics Precipitation Large Scale Cloud Microphysics and Precipitation 33.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of large scale cloud microphysics and precipitation End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") DOC.set_value("Tiedtke (1993)") Explanation: 34. Microphysics Precipitation --&gt; Large Scale Precipitation Properties of the large scale precipitation scheme 34.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name of the large scale precipitation parameterisation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "liquid rain" # "snow" # "hail" # "graupel" # "Other: [Please specify]" DOC.set_value("liquid rain") DOC.set_value("snow") Explanation: 34.2. Hydrometeors Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Precipitating hydrometeors taken into account in the large scale precipitation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") DOC.set_value("Rotstayn (1997) and Ming et al. (2006)") Explanation: 35. Microphysics Precipitation --&gt; Large Scale Cloud Microphysics Properties of the large scale cloud microphysics scheme 35.1. Scheme Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name of the microphysics parameterisation scheme used for large scale clouds. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "mixed phase" # "cloud droplets" # "cloud ice" # "ice nucleation" # "water vapour deposition" # "effect of raindrops" # "effect of snow" # "effect of graupel" # "Other: [Please specify]" DOC.set_value("cloud droplets") DOC.set_value("cloud ice") DOC.set_value("mixed phase") DOC.set_value("water vapour deposition") Explanation: 35.2. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Large scale cloud microphysics processes End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 36. Cloud Scheme Characteristics of the cloud scheme 36.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of the atmosphere cloud scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 36.2. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the cloud scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "atmosphere_radiation" # "atmosphere_microphysics_precipitation" # "atmosphere_turbulence_convection" # "atmosphere_gravity_waves" # "atmosphere_solar" # "atmosphere_volcano" # "atmosphere_cloud_simulator" # TODO - please enter value(s) Explanation: 36.3. Atmos Coupling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Atmosphere components that are linked to the cloud scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False DOC.set_value(True) Explanation: 36.4. 
Uses Separate Treatment Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "entrainment" # "detrainment" # "bulk cloud" # "Other: [Please specify]" DOC.set_value("Other: prognostic cloud area, liquid, and ice for stratiform; convective areas from cumulus parameterizations") Explanation: 36.5. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Processes included in the cloud scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 36.6. Prognostic Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the cloud scheme a prognostic scheme? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 36.7. Diagnostic Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the cloud scheme a diagnostic scheme? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "cloud amount" # "liquid" # "ice" # "rain" # "snow" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 36.8. Prognostic Variables Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List the prognostic variables used by the cloud scheme, if applicable. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "random" # "maximum" # "maximum-random" # "exponential" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 37. Cloud Scheme --&gt; Optical Cloud Properties Optical cloud properties 37.1. Cloud Overlap Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Method for taking into account overlapping of cloud layers End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.2. Cloud Inhomogeneity Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Method for taking into account cloud inhomogeneity End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # TODO - please enter value(s) Explanation: 38. Cloud Scheme --&gt; Sub Grid Scale Water Distribution Sub-grid scale water distribution 38.1. 
Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale water distribution type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") DOC.set_value("Tiedtke (1993) prognostic for stratiform; Donner et al. (2001) , Bretherton et al. (2004), and Wilcox and Donner (2007) for convective") Explanation: 38.2. Function Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale water distribution function name End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 38.3. Function Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale water distribution function type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "coupled with deep" # "coupled with shallow" # "not coupled with convection" DOC.set_value("coupled with deep") DOC.set_value("coupled with shallow") Explanation: 38.4. Convection Coupling Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Sub-grid scale water distribution coupling with convection End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # TODO - please enter value(s) Explanation: 39. Cloud Scheme --&gt; Sub Grid Scale Ice Distribution Sub-grid scale ice distribution 39.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale ice distribution type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 39.2. Function Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale ice distribution function name End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 39.3. Function Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sub-grid scale ice distribution function type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "coupled with deep" # "coupled with shallow" # "not coupled with convection" # TODO - please enter value(s) Explanation: 39.4. Convection Coupling Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Sub-grid scale ice distribution coupling with convection End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.observation_simulation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 40. Observation Simulation Characteristics of observation simulation 40.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of observation simulator characteristics End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "no adjustment" # "IR brightness" # "visible optical depth" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 41. Observation Simulation --&gt; Isscp Attributes ISSCP Characteristics 41.1. Top Height Estimation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Cloud simulator ISSCP top height estimation method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "lowest altitude level" # "highest altitude level" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 41.2. Top Height Direction Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator ISSCP top height direction End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Inline" # "Offline" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 42. Observation Simulation --&gt; Cosp Attributes CFMIP Observational Simulator Package attributes 42.1. Run Configuration Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator COSP run configuration End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 42.2. Number Of Grid Points Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator COSP number of grid points End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 42.3. Number Of Sub Columns Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator COSP number of sub-columns used to simulate sub-grid variability End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 42.4. Number Of Levels Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator COSP number of levels End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 43.
Observation Simulation --&gt; Radar Inputs Characteristics of the cloud radar simulator 43.1. Frequency Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator radar frequency (Hz) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "surface" # "space borne" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 43.2. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator radar type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 43.3. Gas Absorption Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator radar uses gas absorption End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 43.4. Effective Radius Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator radar uses effective radius End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "ice spheres" # "ice non-spherical" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 44. Observation Simulation --&gt; Lidar Inputs Characteristics of the cloud lidar simulator 44.1. Ice Types Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Cloud simulator lidar ice type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "max" # "random" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 44.2. Overlap Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Cloud simulator lidar overlap End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 45. Gravity Waves Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources. 45.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of gravity wave parameterisation in the atmosphere End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Rayleigh friction" # "Diffusive sponge layer" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 45.2. Sponge Layer Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Sponge layer in the upper levels in order to avoid gravity wave reflection at the top. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmos.gravity_waves.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "continuous spectrum" # "discrete spectrum" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 45.3. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background wave distribution End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "effect on drag" # "effect on lifting" # "enhanced topography" # "Other: [Please specify]" DOC.set_value("effect on drag") Explanation: 45.4. Subgrid Scale Orography Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Subgrid scale orography effects taken into account. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 46. Gravity Waves --&gt; Orographic Gravity Waves Gravity waves generated due to the presence of orography 46.1. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the orographic gravity wave scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "linear mountain waves" # "hydraulic jump" # "envelope orography" # "low level flow blocking" # "statistical sub-grid scale variance" # "Other: [Please specify]" DOC.set_value("statistical sub-grid scale variance") Explanation: 46.2. Source Mechanisms Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Orographic gravity wave source mechanisms End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "non-linear calculation" # "more than two cardinal directions" # "Other: [Please specify]" DOC.set_value("non-linear calculation") Explanation: 46.3. Calculation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Orographic gravity wave calculation method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "linear theory" # "non-linear theory" # "includes boundary layer ducting" # "Other: [Please specify]" DOC.set_value("linear theory") Explanation: 46.4. Propagation Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Orographic gravity wave propagation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "total wave" # "single wave" # "spectral" # "linear" # "wave saturation vs Richardson number" # "Other: [Please specify]" DOC.set_value("single wave") Explanation: 46.5.
Dissipation Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Orographic gravity wave dissipation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 47. Gravity Waves --&gt; Non Orographic Gravity Waves Gravity waves generated by non-orographic processes. 47.1. Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Commonly used name for the non-orographic gravity wave scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "convection" # "precipitation" # "background spectrum" # "Other: [Please specify]" DOC.set_value("background spectrum") Explanation: 47.2. Source Mechanisms Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Non-orographic gravity wave source mechanisms End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "spatially dependent" # "temporally dependent" DOC.set_value("spatially dependent") Explanation: 47.3. Calculation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Non-orographic gravity wave calculation method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "linear theory" # "non-linear theory" # "Other: [Please specify]" DOC.set_value("linear theory") Explanation: 47.4. Propagation Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Non-orographic gravity wave propagation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "total wave" # "single wave" # "spectral" # "linear" # "wave saturation vs Richardson number" # "Other: [Please specify]" DOC.set_value("single wave") Explanation: 47.5. Dissipation Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Non-orographic gravity wave dissipation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 48. Solar Top of atmosphere solar insolation characteristics 48.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of solar insolation of the atmosphere End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "SW radiation" # "precipitating energetic particles" # "cosmic rays" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 49. Solar --&gt; Solar Pathways Pathways for solar forcing of the atmosphere 49.1.
Pathways Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Pathways for the solar forcing of the atmosphere model domain End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.solar_constant.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "fixed" # "transient" DOC.set_value("transient") Explanation: 50. Solar --&gt; Solar Constant Solar constant and top of atmosphere insolation characteristics 50.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time adaptation of the solar constant. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 50.2. Fixed Value Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If the solar constant is fixed, enter the value of the solar constant (W m-2). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") DOC.set_value("From Kopp et al. (2005, Solar Physics)") Explanation: 50.3. Transient Characteristics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 solar constant transient characteristics (W m-2) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.orbital_parameters.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "fixed" # "transient" DOC.set_value("fixed") Explanation: 51. Solar --&gt; Orbital Parameters Orbital parameters and top of atmosphere insolation characteristics 51.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time adaptation of orbital parameters End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) DOC.set_value(23) Explanation: 51.2. Fixed Reference Date Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Reference date for fixed orbital parameters (yyyy) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 51.3. Transient Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of transient orbital parameters End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Berger 1978" # "Laskar 2004" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 51.4. Computation Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method used for computing orbital parameters. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False DOC.set_value(True) Explanation: 52. Solar --&gt; Insolation Ozone Impact of solar insolation on stratospheric ozone 52.1. 
Solar Ozone Impact Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does top of atmosphere insolation impact on stratospheric ozone? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.volcanos.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 53. Volcanos Characteristics of the implementation of volcanoes 53.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview description of the implementation of volcanic effects in the atmosphere End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "high frequency solar constant anomaly" # "stratospheric aerosols optical thickness" # "Other: [Please specify]" DOC.set_value("Other: via stratospheric aerosols optical thickness") Explanation: 54. Volcanos --&gt; Volcanoes Treatment Treatment of volcanoes in the atmosphere 54.1. Volcanoes Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How volcanic effects are modeled in the atmosphere. End of explanation
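The cells above repeat the same two-call pattern many times: DOC.set_id(...) followed by one or more DOC.set_value(...) calls. As an optional sketch (not part of the original notebook) that boilerplate can be collapsed into a small helper driven by a dict. It assumes only the DOC object and the two methods already used above; the property ids and the first two values are copied from those cells, and the volcanoes overview string is a placeholder, not an official value.

```python
# Sketch: fill several ES-DOC properties from one dict instead of repeating
# DOC.set_id / DOC.set_value pairs by hand. Uses only the two methods shown above.
def set_properties(doc, values_by_id):
    for property_id, values in values_by_id.items():
        doc.set_id(property_id)
        if not isinstance(values, (list, tuple)):  # allow a single value or a list (cardinality 1.N)
            values = [values]
        for value in values:
            doc.set_value(value)

set_properties(DOC, {
    'cmip6.atmos.solar.solar_constant.type': 'transient',   # value set above
    'cmip6.atmos.solar.orbital_parameters.type': 'fixed',   # value set above
    'cmip6.atmos.volcanos.overview': 'Volcanic forcing applied as a stratospheric aerosol optical depth time series.',  # placeholder text
})
```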
5,066
Given the following text description, write Python code to implement the functionality described below step by step Description: Q3 This question will focusing on indexing lists and dictionaries directly, no loops needed. A Reassign index to be the middle index of the list list_of_numbers. DO NOT hard-code a number (hard-coding means using literals; see L2, slide 27). Step1: B Reassign middle to be the middle value of the list list_of_numbers. DO NOT hard-code a number (hard-coding means using literals; see L2, slide 27). Step2: C Write a single boolean statement that tests whether the defined key exists in the defined dictionary. Step3: D Write a single boolean statement that tests whether the defined value exists in the defined dictionary. Step4: E Write code to split the following string sentence into words, and store only the second-to-last word in the variable last_word.
Python Code: import numpy as np np.random.seed(8948394) list_of_numbers = np.random.randint(10, size = 100000).tolist() index = -1 ### BEGIN SOLUTION ### END SOLUTION Explanation: Q3 This question will focusing on indexing lists and dictionaries directly, no loops needed. A Reassign index to be the middle index of the list list_of_numbers. DO NOT hard-code a number (hard-coding means using literals; see L2, slide 27). End of explanation import numpy as np np.random.seed(95448) list_of_numbers = np.random.randint(10, size = 100000).tolist() middle = -1 ### BEGIN SOLUTION ### END SOLUTION Explanation: B Reassign middle to be the middle value of the list list_of_numbers. DO NOT hard-code a number (hard-coding means using literals; see L2, slide 27). End of explanation dictionary = {"voila": "in", "view": "a", "humble": "vaudvillian", "veteran": "cast", "vicariously": "as", "both": "victim", "and": "villain"} key = "villain" ### BEGIN SOLUTION ### END SOLUTION Explanation: C Write a single boolean statement that tests whether the defined key exists in the defined dictionary. End of explanation dictionary = {"voila": "in", "view": "a", "humble": "vaudvillian", "veteran": "cast", "vicariously": "as", "both": "victim", "and": "villain"} value = "villain" ### BEGIN SOLUTION ### END SOLUTION Explanation: D Write a single boolean statement that tests whether the defined value exists in the defined dictionary. End of explanation sentence = "A data scientist is someone who is better at computer science than a statistician, and better at statistics than a computer scientist." last_word = None ### BEGIN SOLUTION ### END SOLUTION Explanation: E Write code to split the following string sentence into words, and store only the second-to-last word in the variable last_word. End of explanation
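The solution blocks above are intentionally left empty between the ### BEGIN SOLUTION and ### END SOLUTION markers. The sketch below shows one possible way to fill them, reusing the variables already defined by the starter code (list_of_numbers, dictionary, key, value, sentence); the names key_exists and value_exists are only illustrative, since parts C and D merely ask for a boolean expression.

```python
# One possible set of answers for parts A-E (assumes the starter cells above were run).

# A: middle index of the list, computed rather than hard-coded
index = len(list_of_numbers) // 2

# B: the value sitting at that middle index
middle = list_of_numbers[len(list_of_numbers) // 2]

# C: single boolean expression: is `key` one of the dictionary's keys?
key_exists = key in dictionary

# D: single boolean expression: is `value` one of the dictionary's values?
value_exists = value in dictionary.values()

# E: split on whitespace and take the second-to-last word
last_word = sentence.split()[-2]
```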
5,067
Given the following text description, write Python code to implement the functionality described below step by step Description: In this post, we'll look at a couple of statistics functions in Python. These statistics functions are part of the Python Standard Library in the statistics module. The four functions we'll use in this post are common in statistics Step1: Calculate the mean To calculate the mean, or average of our test scores, use the statistics module's mean() function. Step2: Calculate the median To calculate the median, or middle value of our test scores, use the statistics module's median() function. If there are an odd number of values, median() returns the middle value. If there are an even number of values median() returns an average of the two middle values. Step3: Calculate the mode To calculate the mode, or most often value of our test scores, use the statistics module's mode() function. If there is more than one number which occurs most often, mode() returns an error. ```python mode([1, 1, 2, 2, 3]) StatisticsError Step4: Calculate the standard deviation To calculate the standard deviation, or spread of the test scores, use the statistics module's stdev() function. A large standard deviation indicates the data is spread out; a small standard deviation indicates the data is clustered close together. Step5: Alternatively, we can import the whole statistics module at once (all the functions in the staticsitics module) using the the line
Python Code: from statistics import mean, median, mode, stdev test_scores = [60 , 83, 83, 91, 100] Explanation: In this post, we'll look at a couple of statistics functions in Python. These statistics functions are part of the Python Standard Library in the statistics module. The four functions we'll use in this post are common in statistics: mean - average value median - middle value mode - most often value standard deviation - spread of values To access Python's statistics functions, we need to import the functions from the statistics module using the statement: python from statistics import mean, median, mode, stdev After the import statement, the functions mean(), median(), mode() and stdev()(standard deviation) can be used. Since the statistics module is part of the Python Standard Library, no external packages need to be installed. Let's imagine we have a data set of 5 test scores. The test scores are 60, 83, 91 and 100. These test scores can be stored in a Python list. Python lists are defined with square brackets [ ]. Elements in Python lists are separated with commas. End of explanation mean(test_scores) Explanation: Calculate the mean To calculate the mean, or average of our test scores, use the statistics module's mean() function. End of explanation median(test_scores) 83 Explanation: Calculate the median To calculate the median, or middle value of our test scores, use the statistics module's median() function. If there are an odd number of values, median() returns the middle value. If there are an even number of values median() returns an average of the two middle values. End of explanation mode(test_scores) Explanation: Calculate the mode To calculate the mode, or most often value of our test scores, use the statistics module's mode() function. If there is more than one number which occurs most often, mode() returns an error. ```python mode([1, 1, 2, 2, 3]) StatisticsError: no unique mode; found 2 equally common values ``` If there is no value that occurs most often (all the values are unique or occur the same number of times), mode() also returns an error. ```python mode([1,2,3]) StatisticsError: no unique mode; found 3 equally common values ``` End of explanation stdev(test_scores) Explanation: Calculate the standard deviation To calculate the standard deviation, or spread of the test scores, use the statistics module's stdev() function. A large standard deviation indicates the data is spread out; a small standard deviation indicates the data is clustered close together. End of explanation import statistics test_scores = [60 , 83, 83, 91, 100] statistics.mean(test_scores) statistics.median(test_scores) statistics.mode(test_scores) statistics.stdev(test_scores) Explanation: Alternatively, we can import the whole statistics module at once (all the functions in the staticsitics module) using the the line: python import statistics Then to use the functions from the module, we need to call the names statistics.mean(), statistics.median(), statistics.mode(), and statistics.stdev(). See below: End of explanation
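The warning above about mode() raising StatisticsError reflects Python 3.7 and earlier; from Python 3.8 onward mode() instead returns the first of the tied values, and the module gains multimode(). The sketch below is only an illustration of how one might cope with ties on either interpreter version.

```python
from statistics import mode, StatisticsError

tied_values = [1, 1, 2, 2, 3]

# Python <= 3.7: mode() raises StatisticsError for the tie below.
# Python >= 3.8: mode() returns the first mode it meets (1), so the except branch never runs.
try:
    print(mode(tied_values))
except StatisticsError as err:
    print("no unique mode:", err)

# multimode() (added in Python 3.8) returns every tied value instead of raising.
try:
    from statistics import multimode
    print(multimode(tied_values))   # [1, 2]
except ImportError:
    pass  # running on a Python version older than 3.8
```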
5,068
Given the following text description, write Python code to implement the functionality described below step by step Description: (OTBALN)= 2.1 Operaciones y transformaciones básicas del Álgebra Lineal Numérica ```{admonition} Notas para contenedor de docker Step1: a)Para hacer ceros por debajo del pivote $a_1 = -2$ Step2: ```{margin} Recuerda la definición de $\ell_1=(0, \frac{a_2}{a_1}, \frac{a_3}{a_1})^T$ ``` Step3: ```{margin} Usamos $e_1$ pues se desea hacer ceros en las entradas debajo de la primera. ``` Step4: ```{margin} Observa que por la definición de la transformación de Gauss, no necesitamos construir a la matriz $L_1$, directamente se tiene $L_1a = a - \ell_1 e_1^Ta$. ``` Step5: A continuación se muestra que el producto $L_1 a$ si se construye $L_1$ es equivalente a lo anterior Step6: b) Para hacer ceros por debajo del pivote $a_2 = 3$ Step7: ```{margin} Recuerda la definición de $\ell_2=(0, 0, \frac{a_3}{a_2})^T$ ``` Step8: ```{margin} Usamos $e_2$ pues se desea hacer ceros en las entradas debajo de la segunda. ``` Step9: ```{margin} Observa que por la definición de la transformación de Gauss, no necesitamos construir a la matriz $L_2$, directamente se tiene $L_2a = a - \ell_2 e_2^Ta$. ``` Step10: A continuación se muestra que el producto $L_2 a$ si se construye $L_2$ es equivalente a lo anterior Step11: (EG2)= Ejemplo aplicando transformaciones de Gauss a una matriz Si tenemos una matriz $A \in \mathbb{R}^{3 \times 3}$ y queremos hacer ceros por debajo de su diagonal y tener una forma triangular superior, realizamos los productos matriciales Step12: Para hacer ceros por debajo del pivote $a_{11} = -1$ Step13: ```{margin} Recuerda la definición de $\ell_1=(0, \frac{a_{21}}{a_{11}}, \frac{a_{31}}{a_{11}})^T$ ``` Step14: ```{margin} Observa que por la definición de la transformación de Gauss, no necesitamos construir a la matriz $L_1$, directamente se tiene $L_1 A[1 Step15: Y se debe aplicar $L_1$ a las columnas número 2 y 3 de $A$ para completar el producto $L_1A$ Step16: ```{margin} Aplicando $L_1$ a la tercer columna de $A$ Step17: A continuación se muestra que el producto $L_1 A$ si se construye $L_1$ es equivalente a lo anterior Step18: ```{admonition} Observación Step19: ```{margin} Este es el primer renglón de $A$. ``` Step20: ```{margin} Tomando el primer renglón del producto $L_1A$. ``` Step21: por lo que la multiplicación $L_1A$ entonces modifica del segundo renglón de $A$ en adelante y de la segunda columna de $A$ en adelante. ```{admonition} Observación Step22: ```{margin} El resultado de este producto es un escalar. ``` Step23: y puede escribirse de forma compacta Step24: Entonces los productos $\ell_1 e_1^T A[1 Step25: $$\ell_1A[0,3]$$ Step26: ```{admonition} Observación Step27: Y finalmente la aplicación de $L_1$ al segundo renglón y segunda columna en adelante de $A$ queda Step28: Compárese con Step29: Entonces sólo falta colocar el primer renglón y primera columna al producto. Para esto combinamos columnas y renglones en numpy con column_stack y row_stack Step30: que es el resultado de Step31: Lo que falta para obtener una matriz triangular superior es hacer la multiplicación $L_2L_1A$. Para este caso la matriz $L_2=I_3 - \ell_2e_2^T$ utiliza $\ell_2 = \left( 0, 0, \frac{a^{(1)}{32}}{a^{(1)}{22}} \right )^T$ donde Step32: Utilizamos la definición $v=x-||x||_2e_1$ con $e_1=(1,0,0)^T$ vector canónico para construir al vector de Householder Step33: ```{margin} Recuerda la definición de $\beta = \frac{2}{v^Tv}$ para $v$ no unitario. 
``` Step34: ```{margin} Observa que por la definición de la reflexión de Householder, no necesitamos construir a la matriz $R_H$, directamente se tiene $R_H x = x - \beta vv^Tx$. ``` Hacemos ceros por debajo de la primera entrada de $x$ haciendo la multiplicación matriz-vector $R_Hx$ Step35: El resultado de $R_Hx$ es $(||x||_2,0,0)^T$ con $||x||_2$ dada por Step36: ```{admonition} Observación Step37: Ejemplo aplicando reflectores de Householder a un vector Considérese al mismo vector $x$ del ejemplo anterior y el mismo objetivo "Definir un reflector de Householder para hacer ceros por debajo de $x_1$". Otra opción para construir al vector de Householder es $v=x+||x||_2e_1$ con $e_1=(1,0,0)^T$ vector canónico Step38: ```{margin} Recuerda la definición de $\beta = \frac{2}{v^Tv}$ para $v$ no unitario. ``` Step39: ```{margin} Observa que por la definición de la reflexión de Householder, no necesitamos construir a la matriz $R_H$, directamente se tiene $R_H x = x - \beta vv^Tx$. ``` Hacemos ceros por debajo de la primera entrada de $x$ haciendo la multiplicación matriz-vector $R_Hx$ Step40: ```{admonition} Observación Step41: ```{margin} Usamos $e_1$ pues se desea hacer ceros en las entradas debajo de la primera entrada de la primera columna de $A$ Step42: ```{margin} Recuerda la definición de $v= A[1 Step43: ```{margin} Recuerda la definición de $\beta = \frac{2}{v^Tv}$ para $v$ no unitario. ``` Step44: La siguiente lista la utilizamos para guardar a las partes esenciales del vector de Householder y las betas Step45: ```{margin} Observa que por la definición de la reflexión de Householder, no necesitamos construir a la matriz $R_1$, directamente se tiene $R_1 A[1 Step46: ```{margin} Recuerda $A^{(1)} = R_1 A^{(0)}$. ``` Step47: ```{admonition} Observación Step48: A continuación queremos hacer ceros debajo de la segunda entrada de la segunda columna de $A^{(1)}$. ```{margin} Usamos $e_1$ pues se desea hacer ceros en las entradas debajo de la segunda entrada de la segunda columna de $A^{(1)}$ Step49: ```{margin} Recuerda la definición de $v= A[2 Step50: ```{margin} Recuerda la definición de $\beta = \frac{2}{v^Tv}$ para $v$ no unitario. ``` Step51: ```{margin} Observa que por la definición de la reflexión de Householder, no necesitamos construir a la matriz $R_2$, directamente se tiene $R_2A[2 Step52: ```{margin} Recuerda $A^{(2)} = R_2 A^{(1)}$ pero sólo operamos en $A^{(2)}[2 Step53: ```{margin} Se preserva la norma $2$ o Euclidiana de $A[2 Step54: A continuación queremos hacer ceros debajo de la tercera entrada de la tercera columna de $A^{(2)}$. Step55: ```{margin} Recuerda $A^{(3)} = R_3 A^{(2)}$ pero sólo operamos en $A^{(2)}[3 Step56: Entonces sólo falta colocar los renglones y columnas para tener a la matriz $A^{(3)}$. Para esto combinamos columnas y renglones en numpy con column_stack y row_stack Step57: La matriz $A^{(3)} = R_3 R_2 R_1 A^{(0)}$ es Step58: Podemos verificar lo anterior comparando con la matriz $R$ de la factorización $QR$ de $A$ Step59: ```{admonition} Ejercicio Step60: Podemos verificar lo anterior comparando con la matriz $Q$ de la factorización $QR$ de $A$ Step61: (TROT)= Transformaciones de rotación En esta sección suponemos que $A \in \mathbb{R}^{m \times n}$ y $A$ es una matriz con entradas $a_{ij} \in \mathbb{R}\forall i=1,2,\dots,m, j=1, 2, \dots, n$. 
Si $u, v \in \mathbb{R}^2-{0}$ con $\ell = ||u||_2 = ||v||_2$ y se desea rotar al vector $u$ en sentido contrario a las manecillas del reloj por un ángulo $\theta$ para llevarlo a la dirección de $v$ Step62: La matriz $R_O$ es Step63: ```{admonition} Observación Step64: Entrada $a_{21}$, plano $(1,2)$ Step65: ```{margin} Extraemos sólo los renglones a los que se les aplicará la matriz de rotación. ``` Step66: Hacemos copia para un fácil manejo de los índices y matrices modificadas. Podríamos también usar numpy.view. Step67: ```{margin} $A^{(1)} = R_{12}^\theta A^{(0)}$. ``` Step68: ```{margin} Se preserva la norma 2 o Euclidiana de $A[1 Step69: Entrada $a_{31}$, plano $(1,3)$ Step70: ```{margin} Extraemos sólo los renglones a los que se les aplicará la matriz de rotación. ``` Step71: ```{margin} $A^{(2)} = R_{13}^\theta A^{(1)}$. ``` Step72: ```{margin} Se preserva la norma 2 o Euclidiana de $A[1 Step73: Entrada $a_{41}$, plano $(1,4)$ Step74: ```{margin} Extraemos sólo los renglones a los que se les aplicará la matriz de rotación. ``` Step75: ```{margin} $A^{(3)} = R_{14}^\theta A^{(2)}$. ``` Step76: ```{margin} Se preserva la norma 2 o Euclidiana de $A[1 Step77: Entrada $a_{32}$, plano $(2,3)$ Step78: ```{margin} Extraemos sólo los renglones a los que se les aplicará la matriz de rotación. ``` Step79: ```{margin} $A^{(4)} = R_{23}^\theta A^{(3)}$. ``` Step80: ```{margin} Se preserva la norma 2 o Euclidiana de $A[1
Python Code: import numpy as np import math np.set_printoptions(precision=3, suppress=True) Explanation: (OTBALN)= 2.1 Operaciones y transformaciones básicas del Álgebra Lineal Numérica ```{admonition} Notas para contenedor de docker: Comando de docker para ejecución de la nota de forma local: nota: cambiar &lt;ruta a mi directorio&gt; por la ruta de directorio que se desea mapear a /datos dentro del contenedor de docker y &lt;versión imagen de docker&gt; por la versión más actualizada que se presenta en la documentación. docker run --rm -v &lt;ruta a mi directorio&gt;:/datos --name jupyterlab_optimizacion -p 8888:8888 -d palmoreck/jupyterlab_optimizacion:&lt;versión imagen de docker&gt; password para jupyterlab: qwerty Detener el contenedor de docker: docker stop jupyterlab_optimizacion Documentación de la imagen de docker palmoreck/jupyterlab_optimizacion:&lt;versión imagen de docker&gt; en liga. ``` Nota generada a partir de liga1, liga2 y liga3. ```{admonition} Al final de esta nota la comunidad lectora: :class: tip Entenderá cómo utilizar transformaciones típicas en el álgebra lineal numérica en la que se basan muchos de los algoritmos del análisis numérico. En específico aprenderá cómo aplicar las transformaciones de Gauss, reflexiones de Householder y rotaciones Givens a vectores y matrices. Se familizarizará con la notación vectorial y matricial de las operaciones básicas del álgebra lineal numérica. ``` Las operaciones básicas del Álgebra Lineal Numérica podemos dividirlas en vectoriales y matriciales. Vectoriales Transponer: $\mathbb{R}^{n \times 1} \rightarrow \mathbb{R} ^{1 \times n}$: $y = x^T$ entonces $x = \left[ \begin{array}{c} x_1 \ x_2 \ \vdots \ x_n \end{array} \right ]$ y se tiene: $y = x^T = [x_1, x_2, \dots, x_n].$ Suma: $\mathbb{R}^n \times \mathbb{R} ^n \rightarrow \mathbb{R}^n$: $z = x + y$ entonces $z_i = x_i + y_i$ Multiplicación por un escalar: $\mathbb{R} \times \mathbb{R} ^n \rightarrow \mathbb{R}^n$: $y = \alpha x$ entonces $y_i = \alpha x_i$. Producto interno estándar o producto punto: $\mathbb{R}^n \times \mathbb{R} ^n \rightarrow \mathbb{R}$: $c = x^Ty$ entonces $c = \displaystyle \sum_{i=1}^n x_i y_i$. Multiplicación point wise: $\mathbb{R}^n \times \mathbb{R} ^n \rightarrow \mathbb{R}^n$: $z = x.*y$ entonces $z_i = x_i y_i$. División point wise: $\mathbb{R}^n \times \mathbb{R} ^n \rightarrow \mathbb{R}^n$: $z = x./y$ entonces $z_i = x_i /y_i$ con $y_i \neq 0$. Producto exterior o outer product: $\mathbb{R}^n \times \mathbb{R} ^n \rightarrow \mathbb{R}^{n \times n}$: $A = xy^T$ entonces $A[i, :] = x_i y^T$ con $A[i,:]$ el $i$-ésimo renglón de $A$. Matriciales Transponer: $\mathbb{R}^{m \times n} \rightarrow \mathbb{R}^{n \times m}$: $C = A^T$ entonces $c_{ij} = a_{ji}$. Sumar: $\mathbb{R}^{m \times n} \times \mathbb{R}^{m \times n} \rightarrow \mathbb{R}^{m \times n}$: $C = A + B$ entonces $c_{ij} = a_{ij} + b_{ij}$. Multiplicación por un escalar: $\mathbb{R} \times \mathbb{R}^{m \times n} \rightarrow \mathbb{R}^{m \times n}$: $C = \alpha A$ entonces $c_{ij} = \alpha a_{ij}$ Multiplicación por un vector: $\mathbb{R}^{m \times n} \times \mathbb{R}^{n} \rightarrow \mathbb{R}^{m}$: $y = Ax$ entonces $y_i = \displaystyle \sum_{j=1}^n a_{ij}x_j$. Multiplicación entre matrices: $\mathbb{R}^{m \times k} \times \mathbb{R}^{k \times n} \rightarrow \mathbb{R}^{m \times n}$: $C = AB$ entonces $c_{ij} = \displaystyle \sum_{r=1}^k a_{ir}b_{rj}$. 
Multiplicación point wise: $\mathbb{R}^{m \times n} \times \mathbb{R}^{m \times n} \rightarrow \mathbb{R}^{m \times n}$: $C = A.*B$ entonces $c_{ij} = a_{ij}b_{ij}$. División point wise: $\mathbb{R}^{m \times n} \times \mathbb{R}^{m \times n} \rightarrow \mathbb{R}^{m \times n}$: $C = A./B$ entonces $c_{ij} = a_{ij}/b_{ij}$ con $b_{ij} \neq 0$. Como ejemplos de transformaciones básicas del Álgebra Lineal Numérica se encuentran: (TGAUSS)= Transformaciones de Gauss En esta sección suponemos que $A \in \mathbb{R}^{n \times n}$ y $A$ es una matriz con entradas $a_{ij} \in \mathbb{R},\forall i,j=1,2,\dots,n$. ```{margin} Como ejemplo de vector canónico tenemos: $e_1=(1,0)^T$ en $\mathbb{R}^2$ o $e_3 = (0,0,1,0,0)$ en $\mathbb{R}^5$. ``` Considérese al vector $a \in \mathbb{R}^{n}$ y $e_k \in \mathbb{R}^n$ el $k$-ésimo vector canónico: vector con un $1$ en la posición $k$ y ceros en las entradas restantes. ```{admonition} Definición Una transformación de Gauss está definida de forma general como $L_k = I_n - \ell_ke_k^T$ con $\ell_k = (0,0,\dots,\ell_{k+1,k},\dots,\ell_{n,k})^T$ y $\ell_{i,k}=\frac{a_{ik}}{a_{kk}} \forall i=k+1,\dots,n$. $a_{kk}$ se le nombra pivote y debe ser diferente de cero. ``` Las transformaciones de Gauss se utilizan para hacer ceros por debajo del pivote. (EG1)= Ejemplo aplicando transformaciones de Gauss a un vector Considérese al vector $a=(-2,3,4)^T$. Definir una transformación de Gauss para hacer ceros por debajo de $a_1$ y otra transformación de Gauss para hacer cero la entrada $a_3$ Solución: End of explanation a = np.array([-2,3,4]) pivote = a[0] Explanation: a)Para hacer ceros por debajo del pivote $a_1 = -2$: End of explanation l1 = np.array([0,a[1]/pivote, a[2]/pivote]) Explanation: ```{margin} Recuerda la definición de $\ell_1=(0, \frac{a_2}{a_1}, \frac{a_3}{a_1})^T$ ``` End of explanation e1 = np.array([1,0,0]) Explanation: ```{margin} Usamos $e_1$ pues se desea hacer ceros en las entradas debajo de la primera. ``` End of explanation L1_a = a-l1*(e1.dot(a)) print(L1_a) Explanation: ```{margin} Observa que por la definición de la transformación de Gauss, no necesitamos construir a la matriz $L_1$, directamente se tiene $L_1a = a - \ell_1 e_1^Ta$. ``` End of explanation L1 = np.eye(3) - np.outer(l1,e1) print(L1) print(L1@a) Explanation: A continuación se muestra que el producto $L_1 a$ si se construye $L_1$ es equivalente a lo anterior: {margin} $L_1 = I_3 - \ell_1 e_1^T$. End of explanation a = np.array([-2,3,4]) pivote = a[1] Explanation: b) Para hacer ceros por debajo del pivote $a_2 = 3$: End of explanation l2 = np.array([0,0, a[2]/pivote]) Explanation: ```{margin} Recuerda la definición de $\ell_2=(0, 0, \frac{a_3}{a_2})^T$ ``` End of explanation e2 = np.array([0,1,0]) Explanation: ```{margin} Usamos $e_2$ pues se desea hacer ceros en las entradas debajo de la segunda. ``` End of explanation L2_a = a-l2*(e2.dot(a)) print(L2_a) Explanation: ```{margin} Observa que por la definición de la transformación de Gauss, no necesitamos construir a la matriz $L_2$, directamente se tiene $L_2a = a - \ell_2 e_2^Ta$. ``` End of explanation L2 = np.eye(3) - np.outer(l2,e2) print(L2) print(L2@a) Explanation: A continuación se muestra que el producto $L_2 a$ si se construye $L_2$ es equivalente a lo anterior: ```{margin} $L_2 = I_3 - \ell_2 e_2^T$. 
``` End of explanation A = np.array([[-1, 2, 5], [4, 5, -7], [3, 0, 8]], dtype=float) print(A) Explanation: (EG2)= Ejemplo aplicando transformaciones de Gauss a una matriz Si tenemos una matriz $A \in \mathbb{R}^{3 \times 3}$ y queremos hacer ceros por debajo de su diagonal y tener una forma triangular superior, realizamos los productos matriciales: $$L_2 L_1 A$$ donde: $L_1, L_2$ son transformaciones de Gauss. Posterior a realizar el producto $L_2 L_1 A$ se obtiene una matriz triangular superior: $$ L_2L_1A = \left [ \begin{array}{ccc} * & * & *\ 0 & * & * \ 0 & 0 & *  \end{array} \right ] $$ Ejemplo: a) Utilizando $L_1$ End of explanation pivote = A[0, 0] Explanation: Para hacer ceros por debajo del pivote $a_{11} = -1$: End of explanation l1 = np.array([0,A[1,0]/pivote, A[2,0]/pivote]) e1 = np.array([1,0,0]) Explanation: ```{margin} Recuerda la definición de $\ell_1=(0, \frac{a_{21}}{a_{11}}, \frac{a_{31}}{a_{11}})^T$ ``` End of explanation L1_A_1 = A[:,0]-l1*(e1.dot(A[:,0])) print(L1_A_1) Explanation: ```{margin} Observa que por la definición de la transformación de Gauss, no necesitamos construir a la matriz $L_1$, directamente se tiene $L_1 A[1:3,1] = A[1:3,1] - \ell_1 e_1^T A[1:3,1]$. ``` End of explanation L1_A_2 = A[:,1]-l1*(e1.dot(A[:,1])) print(L1_A_2) Explanation: Y se debe aplicar $L_1$ a las columnas número 2 y 3 de $A$ para completar el producto $L_1A$: ```{margin} Aplicando $L_1$ a la segunda columna de $A$: $A[1:3,2]$. ``` End of explanation L1_A_3 = A[:,2]-l1*(e1.dot(A[:,2])) print(L1_A_3) Explanation: ```{margin} Aplicando $L_1$ a la tercer columna de $A$: $A[1:3,3]$. ``` End of explanation L1 = np.eye(3) - np.outer(l1,e1) print(L1) print(L1 @ A) Explanation: A continuación se muestra que el producto $L_1 A$ si se construye $L_1$ es equivalente a lo anterior: {margin} $L_1 = I_3 - \ell_1 e_1^T$. End of explanation print(A) Explanation: ```{admonition} Observación :class: tip Al aplicar $L_1$ a la primer columna de $A$ siempre obtenemos ceros por debajo del pivote que en este caso es $a_{11}$. ``` (EG2.1)= Después de hacer la multiplicación $L_1A$ en cualquiera de los dos casos (construyendo o no explícitamente $L_1$) no se modifica el primer renglón de $A$: End of explanation print(A[0,:]) Explanation: ```{margin} Este es el primer renglón de $A$. ``` End of explanation print((L1 @ A)[0,:]) Explanation: ```{margin} Tomando el primer renglón del producto $L_1A$. ``` End of explanation print(e1.dot(A[:, 1])) Explanation: por lo que la multiplicación $L_1A$ entonces modifica del segundo renglón de $A$ en adelante y de la segunda columna de $A$ en adelante. ```{admonition} Observación :class: tip Dada la forma de $L_1 = I_3 - \ell_1e_1^T$, al hacer la multiplicación por la segunda y tercer columna de $A$ se tiene: $$e_1^T A[1:3,2] = A[0,2]$$ $$e_1^T A[1:3,3] = A[0,3]$$ respectivamente. ``` ```{margin} El resultado de este producto es un escalar. ``` End of explanation print(e1.dot(A[:, 2])) Explanation: ```{margin} El resultado de este producto es un escalar. 
``` End of explanation print(A[0, 1:3]) #observe that we have to use 2+1=3 as the second number after ":" in 1:3 print(A[0, 1:]) #also we could have use this statement Explanation: y puede escribirse de forma compacta: $$e_1^T A[1:3,2:3] = A[0, 2:3]$$ End of explanation print(l1*A[0, 1]) Explanation: Entonces los productos $\ell_1 e_1^T A[1 : 3,2]$ y $\ell_1 e_1^T A[1 :3 ,3]$ quedan respectivamente como: $$\ell_1A[0, 2]$$ End of explanation print(l1*A[0, 2]) Explanation: $$\ell_1A[0,3]$$ End of explanation print(np.outer(l1[1:3],A[0,1:3])) print(np.outer(l1[1:],A[0,1:])) #also we could have use this statement Explanation: ```{admonition} Observación :class: tip En los dos cálculos anteriores, las primeras entradas son iguales a $0$ por lo que es consistente con el hecho que únicamente se modifican dos entradas de la segunda y tercer columna de $A$. ``` De forma compacta y aprovechando funciones en NumPy como np.outer se puede calcular lo anterior como: End of explanation print(A[1:, 1:] - np.outer(l1[1:],A[0,1:])) Explanation: Y finalmente la aplicación de $L_1$ al segundo renglón y segunda columna en adelante de $A$ queda: ```{margin} Observa que por la definición de la transformación de Gauss, no necesitamos construir a la matriz $L_1$, directamente se tiene $L_1 A = A - \ell_1 e_1^T A$ y podemos aprovechar lo anterior para sólo operar de la segunda columna y segundo renglón en adelante. ``` End of explanation print(L1 @ A) Explanation: Compárese con: End of explanation A_aux = A[1:, 1:] - np.outer(l1[1:],A[0,1:]) m, n = A.shape number_of_zeros = m-1 A_aux_2 = np.column_stack((np.zeros(number_of_zeros), A_aux)) # stack two zeros print(A_aux_2) A_aux_3 = np.row_stack((A[0, :], A_aux_2)) print(A_aux_3) Explanation: Entonces sólo falta colocar el primer renglón y primera columna al producto. Para esto combinamos columnas y renglones en numpy con column_stack y row_stack: End of explanation print(L1 @ A) Explanation: que es el resultado de: End of explanation x = np.array([1,2,3]) print(x) Explanation: Lo que falta para obtener una matriz triangular superior es hacer la multiplicación $L_2L_1A$. Para este caso la matriz $L_2=I_3 - \ell_2e_2^T$ utiliza $\ell_2 = \left( 0, 0, \frac{a^{(1)}{32}}{a^{(1)}{22}} \right )^T$ donde: $a^{(1)}_{ij}$ son las entradas de $A^{(1)} = L_1A^{(0)}$ y $A^{(0)}=A$. ```{admonition} Ejercicio :class: tip Calcular el producto $L_2 L_1 A$ para la matriz anterior y para la matriz: $$ A = \left [ \begin{array}{ccc} 1 & 4 & -2 \ -3 & 9 & 8 \ 5 & 1 & -6 \end{array} \right] $$ tomando en cuenta que en este caso $L_2$ sólo opera del segundo renglón y segunda columna en adelante: <img src="https://dl.dropboxusercontent.com/s/su4z0obupk95vql/transf_Gauss_outer_product.png?dl=0" heigth="550" width="550"> y obtener una matriz triangular superior en cada ejercicio. ``` ```{admonition} Comentarios Las transformaciones de Gauss se utilizan para la fase de eliminación del método de eliminación Gaussiana o también llamada factorización $LU$. Ver Gaussian elimination. La factorización $P, L, U$ que es la $LU$ con permutaciones por pivoteo parcial es un método estable numéricamente respecto al redondeo en la práctica pero inestable en la teoría. ``` (MATORTMATCOLORTONO)= Matriz ortogonal y matriz con columnas ortonormales Un conjunto de vectores ${x_1, \dots, x_p}$ en $\mathbb{R}^m$ ($x_i \in \mathbb{R}^m$)es ortogonal si $x_i^Tx_j=0$ $\forall i\neq j$. 
Por ejemplo, para un conjunto de $2$ vectores $x_1,x_2$ en $\mathbb{R}^3$ esto se visualiza: <img src="https://dl.dropboxusercontent.com/s/cekagqnxe0grvu4/vectores_ortogonales.png?dl=0" heigth="550" width="550"> ```{admonition} Comentarios Si el conjunto ${x_1,\dots,x_n}$ en $\mathbb{R}^m$ satisface $x_i^Tx_j= \delta_{ij}= \begin{cases} 1 &\text{ si } i=j,\ 0 &\text{ si } i\neq j \end{cases}$, ver Kronecker_delta se le nombra conjunto ortonormal, esto es, constituye un conjunto ortogonal y cada elemento del conjunto tiene norma $2$ o Euclidiana igual a $1$: $||x_i||_2 = 1, \forall i=1,\dots,n$. Si definimos a la matriz $X$ con columnas dadas por cada uno de los vectores del conjunto ${x_1,\dots, x_n}$: $X=(x_1, \dots , x_n) \in \mathbb{R}^{m \times n}$ entonces la propiedad de que cada par de columnas satisfaga $x_i^Tx_j=\delta_{ij}$ se puede escribir en notación matricial como $X^TX = I_n$ con $I_n$ la matriz identidad de tamaño $n$ si $n \leq m$ o bien $XX^T=I_m$ si $m \leq n$. A la matriz $X$ se le nombra matriz con columnas ortonormales. Si cada $x_i$ está en $\mathbb{R}^n$ (en lugar de $\mathbb{R}^m$) entonces construímos a la matriz $X$ como el punto anterior con la diferencia que $X \in \mathbb{R}^{n \times n}$. En este caso $X$ se le nombra matriz ortogonal. Entre las propiedades más importantes de las matrices ortogonales o con columnas ortonormales es que son isometrías bajo la norma $2$ o Euclidiana y multiplicar por tales matrices es estable numéricamente bajo el redondeo, ver {ref}Condición de un problema y estabilidad de un algoritmo &lt;CPEA&gt;. ``` (TREF)= Transformaciones de reflexión En esta sección suponemos que $A \in \mathbb{R}^{m \times n}$ y $A$ es una matriz con entradas $a_{ij} \in \mathbb{R}, \forall i=1,2,\dots,m, j=1, 2, \dots, n$. Reflectores de Householder ```{margin} Recuerda que $u^\perp = {x \in \mathbb{R}^m| u^Tx=0}$ es un subespacio de $\mathbb{R}^m$ de dimensión $m-1$ y es el complemento ortogonal de $u$. ``` ```{admonition} Definición Las reflexiones de Householder son matrices simétricas, ortogonales y se construyen a partir de un vector $v \neq 0$ definiendo: $$R_H = I_m-\beta v v^T$$ con $v \in \mathbb{R}^m - {0}$ y $\beta = \frac{2}{v^Tv}$. El vector $v$ se llama vector de Householder. La multiplicación $R_Hx$ representa la reflexión del vector $x \in \mathbb{R}^m$ a través del hiperplano $v^\perp$. ``` ```{admonition} Comentario Algunas propiedades de las reflexiones de Householder son: $R_H^TR_H = R_H^2 = I_m$, $R_H^{-1}=R_H$, $det(R_H)=-1$. ``` ```{sidebar} Proyector ortogonal elemental En este dibujo se utiliza el proyector ortogonal elemental sobre el complemento ortogonal $u^\perp$ definido como: $P=I_m- u u^T$ y $Px$ es la proyección ortogonal de $x$ sobre $u^\perp$ . Los proyectores ortogonales elementales no son matrices ortogonales, son singulares, son simétricas y $P^2=P$. El proyector ortogonal elemental de $x$ sobre $u^\perp$ tienen $rank$ igual a $m-1$ y el proyector ortogonal de $x$ sobre $span{u}$ definido por $I_m-P=uu^T$ tienen $rank$ igual a $1$. <img src="https://dl.dropboxusercontent.com/s/itjn9edajx4g2ql/elementary_projector_drawing.png?dl=0" heigth="350" width="350"> Recuerda que $span{u}$ es el conjunto generado por $u$. Se define como el conjunto de combinaciones lineales de $u$: $span{u} = \left {k u | k \in \mathbb{R} \forall i =1,\dots,m \right }$. 
``` Un dibujo que ayuda a visualizar el reflector elemental alrededor de $u^\perp$ en el que se utiliza $u \in \mathbb{R}^m - {0}$ , $||u||_2 = 1$ y $R_H=I_m-2 u u^T$ es el siguiente : <img src="https://dl.dropboxusercontent.com/s/o3oht181nm8lfit/householder_drawing.png?dl=0" heigth="350" width="350"> Las reflexiones de Householder pueden utilizarse para hacer ceros por debajo de una entrada de un vector. Ejemplo aplicando reflectores de Householder a un vector Considérese al vector $x=(1,2,3)^T$. Definir un reflector de Householder para hacer ceros por debajo de $x_1$. End of explanation e1 = np.array([1,0,0]) v = x-np.linalg.norm(x)*e1 Explanation: Utilizamos la definición $v=x-||x||_2e_1$ con $e_1=(1,0,0)^T$ vector canónico para construir al vector de Householder: ```{margin} Usamos $e_1$ pues se desea hacer ceros en las entradas debajo de la primera. ``` End of explanation beta = 2/v.dot(v) Explanation: ```{margin} Recuerda la definición de $\beta = \frac{2}{v^Tv}$ para $v$ no unitario. ``` End of explanation print(x-beta*v*(v.dot(x))) Explanation: ```{margin} Observa que por la definición de la reflexión de Householder, no necesitamos construir a la matriz $R_H$, directamente se tiene $R_H x = x - \beta vv^Tx$. ``` Hacemos ceros por debajo de la primera entrada de $x$ haciendo la multiplicación matriz-vector $R_Hx$: End of explanation print(np.linalg.norm(x)) Explanation: El resultado de $R_Hx$ es $(||x||_2,0,0)^T$ con $||x||_2$ dada por: End of explanation R_H = np.eye(3)-beta*np.outer(v,np.transpose(v)) print(R_H) print(R_H@x) Explanation: ```{admonition} Observación :class: tip Observa que se preserva la norma $2$ o Euclidiana del vector, las matrices de reflexión de Householder son matrices ortogonales y por tanto isometrías: $||R_Hv||_2=||v||_2$. Observa que a diferencia de las transformaciones de Gauss con las reflexiones de Householder en general se modifica la primera entrada, ver {ref}Ejemplo aplicando transformaciones de Gauss a un vector &lt;EG1&gt;. ``` A continuación se muestra que el producto $R_Hx$ si se construye $R_H$ es equivalente a lo anterior: ```{margin} $R_H = I_3 - \beta v v^T$. ``` End of explanation e1 = np.array([1,0,0]) v = x+np.linalg.norm(x)*e1 Explanation: Ejemplo aplicando reflectores de Householder a un vector Considérese al mismo vector $x$ del ejemplo anterior y el mismo objetivo "Definir un reflector de Householder para hacer ceros por debajo de $x_1$". Otra opción para construir al vector de Householder es $v=x+||x||_2e_1$ con $e_1=(1,0,0)^T$ vector canónico: ```{margin} Usamos $e_1$ pues se desea hacer ceros en las entradas debajo de la primera. ``` End of explanation beta = 2/v.dot(v) Explanation: ```{margin} Recuerda la definición de $\beta = \frac{2}{v^Tv}$ para $v$ no unitario. ``` End of explanation print(x-beta*v*(v.dot(x))) Explanation: ```{margin} Observa que por la definición de la reflexión de Householder, no necesitamos construir a la matriz $R_H$, directamente se tiene $R_H x = x - \beta vv^Tx$. ``` Hacemos ceros por debajo de la primera entrada de $x$ haciendo la multiplicación matriz-vector $R_Hx$: End of explanation A = np.array([[3 ,2, -1], [2 ,3 ,2], [-1, 2 ,3], [2 ,1 ,4]], dtype = float) print(A) Explanation: ```{admonition} Observación :class: tip Observa que difieren en signo las primeras entradas al utilizar $v=x + ||x||_2 e_1$ o $v=x - ||x||_2 e_1$. ``` ¿Cuál definición del vector de Householder usar? 
En cualquiera de las dos definiciones del vector de Householder $v=x \pm ||x||_2 e_1$, la multiplicación $R_Hx$ refleja $x$ en el primer eje coordenado (pues se usa $e_1$): <img src="https://dl.dropboxusercontent.com/s/bfk7gojxm93ah5s/householder_2_posibilites.png?dl=0" heigth="400" width="400"> El vector $v^+ = - u_0^+ = x-||x||_2e_1$ se utiliza para construir la transformación de Householder que refleja $x$ respecto al subespacio $H^+$ (que en el dibujo es una recta que cruza el origen). Análogamente se tiene una situación similar con el vector $v^- = -u_0^- = x+||x||_2e_1$ y el subespacio $H^-$. Para reducir los errores por redondeo y evitar el problema de cancelación en la aritmética de punto flotante (ver Sistema de punto flotante) se utiliza: $$v = x+signo(x_1)||x||_2e_1$$ donde: $signo(x_1) = \begin{cases} 1 &\text{ si } x_1 \geq 0 ,\ -1 &\text{ si } x_1 < 0 \end{cases}.$ La idea de la definción anterior con la función $signo(\cdot)$ es que la reflexión (en el dibujo anterior $-||x||_2e_1$ o $||x||_2e_1$) sea lo más alejada posible de $x$. En el dibujo anterior como $x_1, x_2>0$ entonces se refleja respecto al subespacio $H^-$ quedando su reflexión igual a $-||x||_2e_1$. ```{admonition} Comentarios Otra forma de lidiar con el problema de cancelación es definiendo a la primera componente del vector de Householder $v_1$ como $v_1=x_1-||x||_2$ y haciendo una manipulación algebraica como sigue: $$v_1=x_1-||x||_2 = \frac{x_1^2-||x||_2^2}{x_1+||x||_2} = -\frac{x_2^2+x_3^2+\dots + x_m^2}{x_1+||x||_2}.$$ En la implementación del cálculo del vector de Householder, es útil que $v_1=1$ y así únicamente se almacenará $v[2:m]$. Al vector $v[2:m]$ se le nombra parte esencial del vector de Householder. Las transformaciones de reflexión de Householder se utilizan para la factorización QR. Ver QR decomposition, la cual es una factorización estable numéricamente bajo el redondeo. ``` ```{admonition} Ejercicio :class: tip Reflejar al vector $v=\left [\begin{array}{c}1 \1 \\end{array}\right ]$ utilizando al vector de Householder $u = \left [\begin{array}{c}\frac{-4}{3}\\frac{2}{3}\end{array}\right ]$ para construir $R_H$. Graficar $v, u, u^\perp$ y el vector reflejado. ``` Ejemplo aplicando reflectores de Householder a una matriz Las reflexiones de Householder se utilizan para hacer ceros por debajo de la diagonal a una matriz y tener una forma triangular superior (mismo objetivo que las transformaciones de Gauss, ver {ref}Ejemplo aplicando transformaciones de Gauss a una matriz &lt;EG2&gt;). Por ejemplo si se han hecho ceros por debajo del elemento $a_{11}$ y se quieren hacer ceros debajo de $a_{22}^{(1)}$: $$\begin{array}{l} R_2A^{(1)} = R_2 \left[ \begin{array}{cccc} * & * & * & \ 0 & * & * & \ 0 & * & * & * \ 0 & * & * & * \ 0 & * & * & * \end{array} \right] = \left[ \begin{array}{cccc} * & * & * & \ 0 & * & * & \ 0 & 0 & * & * \ 0 & 0 & * & * \ 0 & 0 & * & * \end{array} \right] := A^{(2)} \end{array} $$ donde: $a^{(1)}_{ij}$ son las entradas de $A^{(1)} = R_1A^{(0)}$ y $A^{(0)}=A$, $R_1$ es matriz de reflexión de Householder. En este caso $$R_2 = \left [ \begin{array}{cc} 1 & 0 \ 0 & \hat{R_2} \end{array} \right ] $$ con $\hat{R}2$ una matriz de reflexión de Householder que hace ceros por debajo de de $a{22}^{(1)}$. Se tienen las siguientes propiedades de $R_2$: No modifica el primer renglón de $A^{(1)}$. No destruye los ceros de la primer columna de $A^{(1)}$. $R_2$ es una matriz de reflexión de Householder. 
```{admonition} Observación :class: tip Para la implementación computacional no se inserta $\hat{R}_2$ en $R_2$, en lugar de esto se aplica $\hat{R}_2$ a la submatriz $A^{(1)}[2:m, 2:m]$. ``` Considérese a la matriz $A \in \mathbb{R}^{4 \times 3}$: $$A = \left [ \begin{array}{ccc} 3 & 2 & -1 \ 2 & 3 & 2 \ -1 & 2 & 3 \ 2 & 1 & 4 \end{array} \right ] $$ y aplíquense reflexiones de Householder para llevarla a una forma triangular superior. End of explanation e1 = np.array([1,0,0,0]) Explanation: ```{margin} Usamos $e_1$ pues se desea hacer ceros en las entradas debajo de la primera entrada de la primera columna de $A$: $A[1:4,1]$. ``` End of explanation v = A[:,0] + np.linalg.norm(A[:,0])*e1 print(v) Explanation: ```{margin} Recuerda la definición de $v= A[1:4,1] + signo(A[1,1])||A[1:4,1]||_2e_1$. ``` End of explanation beta = 2/v.dot(v) print(beta) Explanation: ```{margin} Recuerda la definición de $\beta = \frac{2}{v^Tv}$ para $v$ no unitario. ``` End of explanation l_betas = [] l_v_Householder = [] l_betas.append(beta) l_v_Householder.append(v) Explanation: La siguiente lista la utilizamos para guardar a las partes esenciales del vector de Householder y las betas: End of explanation print(A[:,0] - beta*v*v.dot(A[:,0])) Explanation: ```{margin} Observa que por la definición de la reflexión de Householder, no necesitamos construir a la matriz $R_1$, directamente se tiene $R_1 A[1:4,1] = A[1:4,1] - \beta vv^TA[1:4,1]$. ``` End of explanation A1 = A[:,0:]-beta*np.outer(v,v.dot(A[:,0:])) print(A1) Explanation: ```{margin} Recuerda $A^{(1)} = R_1 A^{(0)}$. ``` End of explanation print(np.linalg.norm(A1[:,0])) print(np.linalg.norm(A[:,0])) Explanation: ```{admonition} Observación :class: tip Observa que a diferencia de las transformaciones de Gauss la reflexión de Householder $R_1$ sí modifica el primer renglón de $A^{(0)}$, ver {ref}Después de hacer la multiplicación... &lt;EG2.1&gt;. ``` ```{margin} Se preserva la norma $2$ o Euclidiana de $A[1:4,1]$. ``` End of explanation e1 = np.array([1, 0, 0]) Explanation: A continuación queremos hacer ceros debajo de la segunda entrada de la segunda columna de $A^{(1)}$. ```{margin} Usamos $e_1$ pues se desea hacer ceros en las entradas debajo de la segunda entrada de la segunda columna de $A^{(1)}$: $A^{(1)}[2:4,2]$. ``` End of explanation v = A1[1:,1] + np.linalg.norm(A1[1:,1])*e1 print(v) Explanation: ```{margin} Recuerda la definición de $v= A[2:4,2] + signo(A[2,2])||A[2:4,2]||_2e_1$. ``` End of explanation beta = 2/v.dot(v) l_betas l_v_Householder l_betas.append(beta) l_v_Householder.append(v) Explanation: ```{margin} Recuerda la definición de $\beta = \frac{2}{v^Tv}$ para $v$ no unitario. ``` End of explanation print(A1[1:,1] - beta*v*v.dot(A1[1:,1])) Explanation: ```{margin} Observa que por la definición de la reflexión de Householder, no necesitamos construir a la matriz $R_2$, directamente se tiene $R_2A[2:4,2] = A[2:4,2] - \beta vv^TA[2:4,2]$. ``` End of explanation A2_aux = A1[1:,1:]-beta*np.outer(v,v.dot(A1[1:,1:])) print(A2_aux) Explanation: ```{margin} Recuerda $A^{(2)} = R_2 A^{(1)}$ pero sólo operamos en $A^{(2)}[2:4, 2:3]$. ``` End of explanation print(np.linalg.norm(A1[1:,1])) Explanation: ```{margin} Se preserva la norma $2$ o Euclidiana de $A[2:4,2]$. ``` End of explanation e1 = np.array([1, 0]) v = A2_aux[1:,1] + np.linalg.norm(A2_aux[1:,1])*e1 print(v) beta = 2/v.dot(v) l_betas.append(beta) l_v_Householder.append(v) Explanation: A continuación queremos hacer ceros debajo de la tercera entrada de la tercera columna de $A^{(2)}$. 
End of explanation A3_aux = A2_aux[1:,1]-beta*v*v.dot(A2_aux[1:,1]) print(A3_aux) print(np.linalg.norm(A2_aux[1:,1])) Explanation: ```{margin} Recuerda $A^{(3)} = R_3 A^{(2)}$ pero sólo operamos en $A^{(2)}[3:4, 3]$. ``` End of explanation m,n = A.shape number_of_zeros = m-2 A3_aux_2 = np.column_stack((np.zeros(number_of_zeros), A3_aux)) print(A3_aux_2) A3_aux_3 = np.row_stack((A2_aux[0, 0:], A3_aux_2)) print(A3_aux_3) number_of_zeros = m-1 A3_aux_4 = np.column_stack((np.zeros(number_of_zeros), A3_aux_3)) print(A3_aux_4) Explanation: Entonces sólo falta colocar los renglones y columnas para tener a la matriz $A^{(3)}$. Para esto combinamos columnas y renglones en numpy con column_stack y row_stack: End of explanation A3 = np.row_stack((A1[0, 0:], A3_aux_4)) print(A3) Explanation: La matriz $A^{(3)} = R_3 R_2 R_1 A^{(0)}$ es: End of explanation q,r = np.linalg.qr(A) print("Q:") print(q) print("R:") print(r) Explanation: Podemos verificar lo anterior comparando con la matriz $R$ de la factorización $QR$ de $A$: End of explanation print(l_betas) print(l_v_Householder) Q_Householder = np.eye(m) for j in range(n-1,-1,-1): v = l_v_Householder[j] Q_Householder[j:m, j:m] = Q_Householder[j:m, j:m] - l_betas[j]*np.outer(v, v.dot(Q_Householder[j:m,j:m])) print(Q_Householder) Explanation: ```{admonition} Ejercicio :class: tip Aplicar reflexiones de Householder a la matriz $$A = \left [ \begin{array}{cccc} 4 & 1 & -2 & 2 \ 1 & 2 & 0 & 1\ -2 & 0 & 3 & -2 \ 2 & 1 & -2 & -1 \end{array} \right ] $$ para obtener una matriz triangular superior. ``` Cálculo del factor $Q$ en la factorización $QR$ con reflexiones de Householder Con los vectores de Householder puede construirse el factor $Q$ para la factorización $QR$ utilizando el producto: $$Q = R_1 R_2 \cdots R_n$$ donde: $R_i \in \mathbb{R}^{m \times m}$ es reflexión de Householder. ```{admonition} Observación :class: tip Un mejor uso de almacenamiento es utilizar las partes esenciales de los vectores de Householder. ``` En el caso del ejemplo anterior utilizamos las listas: End of explanation q,r = np.linalg.qr(A, "complete") print("Q:") print(q) print("R:") print(r) Explanation: Podemos verificar lo anterior comparando con la matriz $Q$ de la factorización $QR$ de $A$: End of explanation v=np.array([1,1]) Explanation: (TROT)= Transformaciones de rotación En esta sección suponemos que $A \in \mathbb{R}^{m \times n}$ y $A$ es una matriz con entradas $a_{ij} \in \mathbb{R}\forall i=1,2,\dots,m, j=1, 2, \dots, n$. Si $u, v \in \mathbb{R}^2-{0}$ con $\ell = ||u||_2 = ||v||_2$ y se desea rotar al vector $u$ en sentido contrario a las manecillas del reloj por un ángulo $\theta$ para llevarlo a la dirección de $v$: <img src="https://dl.dropboxusercontent.com/s/vq8eu0yga2x7cb2/rotation_1.png?dl=0" heigth="500" width="500"> A partir de las relaciones anteriores como $cos(\phi)=\frac{u_1}{\ell}, sen(\phi)=\frac{u_2}{\ell}$ se tiene: $v_1 = (cos\theta)u_1-(sen\theta)u_2$, $v_2=(sen\theta)u_1+(cos\theta)u_2$ equivalentemente: $$\begin{array}{l} \left[\begin{array}{c} v_1\ v_2 \end{array} \right] = \left[ \begin{array}{cc} cos\theta & -sen\theta\ sen\theta & cos\theta \end{array} \right] \cdot \left[\begin{array}{c} u_1\ u_2 \end{array} \right] \end{array} $$ ```{admonition} Definición La matriz $R_O$: $$R_O= \left[ \begin{array}{cc} cos\theta & -sen\theta\ sen\theta & cos\theta \end{array} \right] $$ se nombra matriz de rotación o rotaciones Givens, es una matriz ortogonal pues $R_O^TR_O=I_2$. 
La multiplicación $v=R_Ou$ es una rotación en sentido contrario a las manecillas del reloj, de hecho cumple $det(R_O)=1$. La multiplicación $u=R_O^Tv$ es una rotación en sentido de las manecillas del reloj y el ángulo asociado es $-\theta$. ``` Ejemplo aplicando rotaciones Givens a un vector Rotar al vector $v=(1,1)^T$ un ángulo de $45^o$ en sentido contrario a las manecillas del reloj. End of explanation theta=math.pi/4 R_O=np.array([[math.cos(theta), -math.sin(theta)], [math.sin(theta), math.cos(theta)]]) print(R_O) print(R_O@v) print(np.linalg.norm(v)) Explanation: La matriz $R_O$ es: $$R_O = \left[ \begin{array}{cc} cos(\frac{\pi}{4}) & -sen(\frac{\pi}{4})\ sen(\frac{\pi}{4}) & cos(\frac{\pi}{4}) \end{array} \right ] $$ End of explanation A = np.array([[4, 1, -2, 2], [1, 2, 0, 1], [-2, 0, 3, -2], [2, 1, -2, -1]], dtype=float) Explanation: ```{admonition} Observación :class: tip Observa que se preserva la norma $2$ o Euclidiana del vector, las matrices de rotación Givens son matrices ortogonales y por tanto isometrías: $||R_0v||_2=||v||_2$. ``` En el ejemplo anterior se hizo cero la entrada $v_1$ de $v$. Las matrices de rotación se utilizan para hacer ceros en entradas de un vector. Por ejemplo si $v=(v_1,v_2)^T$ y se desea hacer cero la entrada $v_2$ de $v$ se puede utilizar la matriz de rotación: $$R_O = \left[ \begin{array}{cc} \frac{v_1}{\sqrt{v_1^2+v_2^2}} & \frac{v_2}{\sqrt{v_1^2+v_2^2}}\ -\frac{v_2}{\sqrt{v_1^2+v_2^2}} & \frac{v_1}{\sqrt{v_1^2+v_2^2}} \end{array} \right ] $$ pues: $$\begin{array}{l} \left[ \begin{array}{cc} \frac{v_1}{\sqrt{v_1^2+v_2^2}} & \frac{v_2}{\sqrt{v_1^2+v_2^2}}\ -\frac{v_2}{\sqrt{v_1^2+v_2^2}} & \frac{v_1}{\sqrt{v_1^2+v_2^2}} \end{array} \right ] \cdot \left[\begin{array}{c} v_1\ v_2 \end{array} \right]= \left[ \begin{array}{c} \frac{v_1^2+v_2^2}{\sqrt{v_1^2+v_2^2}}\ \frac{-v_1v_2+v_1v_2}{\sqrt{v_1^2+v_2^2}} \end{array} \right ] = \left[ \begin{array}{c} \frac{v_1^2+v_2^2}{\sqrt{v_1^2+v_2^2}}\ 0 \end{array} \right ]= \left[ \begin{array}{c} ||v||_2\ 0 \end{array} \right ] \end{array} $$ Y definiendo $cos(\theta)=\frac{v_1}{\sqrt{v_1^2+v_2^2}}, sen(\theta)=\frac{v_2}{\sqrt{v_1^2+v_2^2}}$ se tiene : $$ R_O=\left[ \begin{array}{cc} cos\theta & sen\theta\ -sen\theta & cos\theta \end{array} \right] $$ que en el ejemplo anterior como $v=(1,1)^T$ entonces: $cos(\theta)=\frac{1}{\sqrt{2}}, sen(\theta)=\frac{1}{\sqrt{2}}$ por lo que $\theta=\frac{\pi}{4}$ y: $$ R_O=\left[ \begin{array}{cc} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}}\ -\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \end{array} \right] $$ que es una matriz de rotación para un ángulo que gira en sentido de las manecillas del reloj. Para hacer cero la entrada $v_1$ de $v$ hay que usar: $$\begin{array}{l} R_O=\left[ \begin{array}{cc} cos\theta & -sen\theta\ sen\theta & cos\theta \end{array} \right] =\left[ \begin{array}{cc} \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}}\ \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \end{array} \right] \end{array} $$ que es una matriz de rotación para un ángulo que gira en sentido contrario de las manecillas del reloj. ```{admonition} Ejercicio :class: tip Usar una matriz de rotación Givens para rotar al vector $v = (-3, 4)^T$ un ángulo de $\frac{\pi}{3}$ en sentido de las manecillas del reloj. Graficar $v$ y el vector rotado. ``` Ejemplo aplicando rotaciones Givens a una matriz Las rotaciones Givens permiten hacer ceros en entradas de una matriz que son seleccionadas. 
Por ejemplo si se desea hacer cero la entrada $x_4$ de $x \in \mathbb{R}^4$, se definen $cos\theta = \frac{x_2}{\sqrt{x_2^2 + x_4^2}}, sen\theta = \frac{x_4}{\sqrt{x_2^2 + x_4^2}}$ y $$ R_{24}^\theta= \left [ \begin{array}{cccc} 1 & 0 & 0 & 0\ 0 & cos\theta & 0 & sen\theta \ 0 & 0 & 1 & 0 \ 0 & -sen\theta & 0 & cos\theta \end{array} \right ] $$ entonces: $$ R_{24}^\theta x = \begin{array}{l} \left [ \begin{array}{cccc} 1 & 0 & 0 & 0\ 0 & cos\theta & 0 & sen\theta \ 0 & 0 & 1 & 0 \ 0 & -sen\theta & 0 & cos\theta \end{array} \right ] \left [ \begin{array}{c} x_1 \ x_2 \ x_3 \ x_4 \end{array} \right ] = \left [ \begin{array}{c} x_1 \ \sqrt{x_2^2 + x_4^2} \ x_3 \ 0 \end{array} \right ] \end{array} $$ Y se escribe que se hizo una rotación en el plano $(2,4)$. ```{admonition} Observación :class: tip Obsérvese que sólo se modificaron dos entradas de $x$: $x_2, x_4$ por lo que el mismo efecto se obtiene al hacer la multiplicación: $$ \begin{array}{l} \left[ \begin{array}{cc} cos\theta & -sen\theta\ sen\theta & cos\theta \end{array} \right] \left [ \begin{array}{c} x_2\ x_4 \end{array} \right ] \end{array} $$ para tales entradas. ``` Considérese a la matriz $A \in \mathbb{R}^{4 \times 4}$: $$A = \left [ \begin{array}{cccc} 4 & 1 & -2 & 2 \ 1 & 2 & 0 & 1\ -2 & 0 & 3 & -2 \ 2 & 1 & -2 & -1 \end{array} \right ] $$ y aplíquense rotaciones Givens para hacer ceros en las entradas debajo de la diagonal de $A$ y tener una matriz triangular superior. End of explanation idx_1 = 0 idx_2 = 1 idx_column = 0 print(A) a_11 = A[idx_1,idx_column] a_21 = A[idx_2,idx_column] norm = math.sqrt(a_11**2 + a_21**2) cos_theta = a_11/norm sen_theta = a_21/norm R12 = np.array([[cos_theta, sen_theta], [-sen_theta, cos_theta]]) print(R12) Explanation: Entrada $a_{21}$, plano $(1,2)$: End of explanation A_subset = np.row_stack((A[idx_1,:], A[idx_2,:])) print(A_subset) print(R12@A_subset) A1_aux = R12@A_subset print(A1_aux) Explanation: ```{margin} Extraemos sólo los renglones a los que se les aplicará la matriz de rotación. ``` End of explanation A1 = A.copy() A1[idx_1, :] = A1_aux[0, :] A1[idx_2, :] = A1_aux[1, :] Explanation: Hacemos copia para un fácil manejo de los índices y matrices modificadas. Podríamos también usar numpy.view. End of explanation print(A1) print(A) Explanation: ```{margin} $A^{(1)} = R_{12}^\theta A^{(0)}$. ``` End of explanation print(np.linalg.norm(A1[:, idx_column])) print(np.linalg.norm(A[:, idx_column])) Explanation: ```{margin} Se preserva la norma 2 o Euclidiana de $A[1:4,1]$. ``` End of explanation idx_1 = 0 idx_2 = 2 idx_column = 0 a_11 = A1[idx_1, idx_column] a_31 = A1[idx_2, idx_column] norm = math.sqrt(a_11**2 + a_31**2) cos_theta = a_11/norm sen_theta = a_31/norm R13 = np.array([[cos_theta, sen_theta], [-sen_theta, cos_theta]]) print(R13) Explanation: Entrada $a_{31}$, plano $(1,3)$: End of explanation A1_subset = np.row_stack((A1[idx_1,:], A1[idx_2,:])) print(A1_subset) print(R13@A1_subset) A2_aux = R13@A1_subset print(A2_aux) A2 = A1.copy() A2[idx_1, :] = A2_aux[0, :] A2[idx_2, :] = A2_aux[1, :] Explanation: ```{margin} Extraemos sólo los renglones a los que se les aplicará la matriz de rotación. ``` End of explanation print(A2) print(A1) print(A) Explanation: ```{margin} $A^{(2)} = R_{13}^\theta A^{(1)}$. ``` End of explanation print(np.linalg.norm(A2[:, idx_column])) print(np.linalg.norm(A[:, idx_column])) Explanation: ```{margin} Se preserva la norma 2 o Euclidiana de $A[1:4,1]$. 
``` End of explanation idx_1 = 0 idx_2 = 3 idx_column = 0 a_11 = A2[idx_1, idx_column] a_41 = A2[idx_2, idx_column] norm = math.sqrt(a_11**2 + a_41**2) cos_theta = a_11/norm sen_theta = a_41/norm R14 = np.array([[cos_theta, sen_theta], [-sen_theta, cos_theta]]) print(R14) Explanation: Entrada $a_{41}$, plano $(1,4)$: End of explanation A2_subset = np.row_stack((A2[idx_1,:], A2[idx_2,:])) print(A2_subset) print(R14@A2_subset) A3_aux = R14@A2_subset print(A3_aux) A3 = A2.copy() A3[idx_1, :] = A3_aux[0, :] A3[idx_2, :] = A3_aux[1, :] Explanation: ```{margin} Extraemos sólo los renglones a los que se les aplicará la matriz de rotación. ``` End of explanation print(A3) print(A2) Explanation: ```{margin} $A^{(3)} = R_{14}^\theta A^{(2)}$. ``` End of explanation print(np.linalg.norm(A3[:, idx_column])) print(np.linalg.norm(A[:, idx_column])) Explanation: ```{margin} Se preserva la norma 2 o Euclidiana de $A[1:4,1]$. ``` End of explanation idx_1 = 1 idx_2 = 2 idx_column = 1 a_22 = A3[idx_1, idx_column] a_32 = A3[idx_2, idx_column] norm = math.sqrt(a_22**2 + a_32**2) cos_theta = a_22/norm sen_theta = a_32/norm R23 = np.array([[cos_theta, sen_theta], [-sen_theta, cos_theta]]) print(R23) Explanation: Entrada $a_{32}$, plano $(2,3)$: End of explanation A3_subset = np.row_stack((A3[idx_1,:], A3[idx_2,:])) print(A3_subset) print(R23@A3_subset) A4_aux = R23@A3_subset print(A4_aux) A4 = A3.copy() A4[idx_1, :] = A4_aux[0, :] A4[idx_2, :] = A4_aux[1, :] Explanation: ```{margin} Extraemos sólo los renglones a los que se les aplicará la matriz de rotación. ``` End of explanation print(A4) print(A3) print(A2) Explanation: ```{margin} $A^{(4)} = R_{23}^\theta A^{(3)}$. ``` End of explanation print(np.linalg.norm(A4[:, idx_column])) print(np.linalg.norm(A[:, idx_column])) Explanation: ```{margin} Se preserva la norma 2 o Euclidiana de $A[1:4,2]$. ``` End of explanation
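The notebook above zeroes the subdiagonal entries one Givens rotation at a time. As a closing sketch (written in English and not part of the original material), the same steps can be wrapped in one routine that triangularises a matrix and can be checked against numpy.linalg.qr; the function name givens_triangularize is just an illustrative choice, and the test matrix is the 4 by 4 example used above.

```python
import numpy as np

def givens_triangularize(A):
    """Apply Givens rotations to return an upper triangular R with R = Q^T A."""
    R = np.array(A, dtype=float, copy=True)
    m, n = R.shape
    for j in range(n):                       # column whose subdiagonal is being zeroed
        for i in range(j + 1, m):            # row of the entry to eliminate
            a, b = R[j, j], R[i, j]
            norm = np.hypot(a, b)
            if norm == 0.0:
                continue
            c, s = a / norm, b / norm
            G = np.array([[c, s], [-s, c]])  # acts only on rows j and i, as in the examples above
            R[[j, i], :] = G @ R[[j, i], :]
    return R

A = np.array([[4, 1, -2, 2],
              [1, 2, 0, 1],
              [-2, 0, 3, -2],
              [2, 1, -2, -1]], dtype=float)
print(np.round(givens_triangularize(A), 3))
print(np.round(np.linalg.qr(A)[1], 3))       # should agree up to the sign of each row
```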
5,069
Given the following text description, write Python code to implement the functionality described below step by step Description: Lesson 27 Step1: They can be used in combination Step2: The . character matches any character. Step3: The .* is therefore used to match anything, any number of any character Step4: .* is greedy by default, but you can activate non-greedy mode with .*? Step5: .* matches any character except the newline (\n) character. Step6: We can use the paramater re.DOTALL can set to truly match any character Step7: Similiarily, re.IGNORECASE or re.I to ignore case
Python Code: import re beginsWithTheHelloRegex = re.compile(r'^Hello') # String must start exactly with 'Hello' print(beginsWithTheHelloRegex.findall('Hello there')) print(beginsWithTheHelloRegex.findall('Wait, did he say Hello just now?')) print(beginsWithTheHelloRegex.findall('He said Hello')) endsWithTheHelloRegex = re.compile(r'Hello$') # String must end exactly with 'Hello' print(endsWithTheHelloRegex.findall('Hello there')) print(beginsWithTheHelloRegex.findall('Wait, did he say Hello just now?')) print(endsWithTheHelloRegex.findall('He said Hello')) Explanation: Lesson 27: RegEx .* Dot-Star, ^ Caret, & $ Dollar Sign Characters Besides just turning a class negative, the ^ character can also define the start of a string. The $ character can be used in combination to define the end of a string. End of explanation allDigitsRegex = re.compile(r'^\d+$') # Must start and end with a digit, with at least 1 or more digits inbetween print(allDigitsRegex.findall('2153234623462561514')) # Matches entire string print(allDigitsRegex.findall('21532346234letters!62561514')) # No match, doesn't end with string Explanation: They can be used in combination: End of explanation atRegex = re.compile(r'.at') # Any single character followed by at print(atRegex.findall('The cat in the hat sat on the flat mat.')) # matches anything ending with at atRegex = re.compile(r'.{2}at') # Any two characters followed by at print(atRegex.findall('The cat in the hat sat on the flat mat.')) # matches anything ending with at, including spaces Explanation: The . character matches any character. End of explanation name = 'First Name: Al, Last Name: Sweigart' # To pull names from this string would require a lot of indexing code name2 = 'First Name: Vivek, Last Name: Menon' # To pull names from this string would require a lot of indexing code nameRegex = re.compile(r'First Name: (.*), Last Name: (.*)') # Matches anything in this groups formatted exactly like this print(nameRegex.findall(name)) print(nameRegex.findall(name2)) Explanation: The .* is therefore used to match anything, any number of any character: End of explanation serve = '<To serve humans> for dinner.>' greedyRegex = re.compile(r'<(.*)>') # Looking for any length match, between brackets. nongreedyRegex = re.compile(r'<(.*?)>') # Looking for any length match, between brackets. print(greedyRegex.findall(serve)) # Matches the longest string print(nongreedyRegex.findall(serve)) # Matches the shortest string Explanation: .* is greedy by default, but you can activate non-greedy mode with .*? End of explanation primeDirectives = 'Serve the public trust.\nProtect the innocent.\nUphold the law.' print(primeDirectives) dotStar = re.compile(r'.*') print(dotStar.findall(primeDirectives)) Explanation: .* matches any character except the newline (\n) character. End of explanation dotStar = re.compile(r'.*', re.DOTALL) print(dotStar.findall(primeDirectives)) Explanation: We can use the paramater re.DOTALL can set to truly match any character: End of explanation vowelRegex = re.compile(r'[aeiou]', re.I) # Match any vowel, regardless of case print(vowelRegex.findall('Al, why does your programming book talk about RoboCop so much?')) Explanation: Similiarily, re.IGNORECASE or re.I to ignore case: End of explanation
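A short follow-up sketch (it assumes the cells above were run, so rd3 and score2 are in scope): the first observation, an ROC AUC below 0.5, simply means the score ranks the two classes in reverse, because negating a continuous score maps an AUC of a to 1 - a.

```python
from sklearn.metrics import roc_auc_score

auc_raw = roc_auc_score(rd3, score2)        # below 0.5 by construction (rd3 is anti-correlated with rd2)
auc_flipped = roc_auc_score(rd3, -score2)   # the same information, used with the opposite sign
print(auc_raw, auc_flipped, auc_raw + auc_flipped)   # the sum is 1.0 up to floating point error
```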
5,070
Given the following text problem statement, write Python code to implement the functionality described below in problem statement Problem: I have the following DF
Problem: import pandas as pd df = pd.DataFrame({'Date':['2019-01-01','2019-02-08','2019-02-08', '2019-03-08']}) df['Date'] = pd.to_datetime(df['Date']) df['Date'] = df['Date'].dt.strftime('%b-%Y')
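A quick verification sketch (my addition, not part of the original problem statement): after the strftime call the Date column holds plain strings such as 'Jan-2019', so any further date arithmetic should be done before this step.
import pandas as pd

df = pd.DataFrame({'Date': ['2019-01-01', '2019-02-08', '2019-02-08', '2019-03-08']})
df['Date'] = pd.to_datetime(df['Date'])
df['Date'] = df['Date'].dt.strftime('%b-%Y')
print(df['Date'].tolist())  # ['Jan-2019', 'Feb-2019', 'Feb-2019', 'Mar-2019']
print(df['Date'].dtype)     # object, i.e. strings rather than datetimes
# Alternative applied to the datetime column (before strftime): df['Date'].dt.to_period('M')
# keeps a period dtype but renders as 2019-01 rather than Jan-2019.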
5,071
Given the following text description, write Python code to implement the functionality described below step by step Description: Toy (counter-)example for anomaly decomposition This is a carefully crafted example to demonstrate two possibly counter-intuitive results in an anomaly decomposition study Step1: Introducing 3 real subsystems and generating labels for them (1 - good, 0 - anomaly). Step2: $\mathrm{score}_i$ variables represent the classifier's prediction for each of the subdetectors. By construction $\mathrm{score}_i$ has high discriminative power against the $i$-th detector. Step3: Observations
Python Code: import numpy as np %matplotlib inline import matplotlib.pyplot as plt plt.style.use('ggplot') Explanation: Toy (counter-)example for anomaly decomposition This is a carefully crafted example to demonstrate two possibly counter-intuitive results in anomaly decomposition study: - ROC AUC < 0.5 - AUC for 'all good' is lower than for some other 'subsystems'. End of explanation ### Good 80% of the time rd1 = np.random.binomial(1, p=0.8, size=1000) ### Good 80% of the time rd2 = np.random.binomial(1, p=0.8, size=1000) ### This detector is anti-correlated with the second detector rd3 = np.where(rd2 == 1, np.random.binomial(1, p = 0.7, size=1000), 1) ### 1&2 Good rd12 = rd1 * rd2 ### 1&3 Good rd13 = rd1 * rd3 ### 2&3 good rd23 = rd2 * rd3 ### All good rd123 = rd12 * rd3 np.mean(rd123) Explanation: Introducing 3 real subsystems and generating labels for them (1 - good, 0 - anomaly). End of explanation ### just some noise introduced into the true labels. score1 = np.random.normal(scale=0.2, size=1000) + rd1 score2 = np.random.normal(scale=0.3, size=1000) + rd2 score3 = np.random.normal(scale=0.4, size=1000) + rd3 from sklearn.metrics import roc_auc_score roc_aucs = np.ndarray(shape=(3, 7)) for i, score in enumerate([score1, score2, score3]): for j, rd in enumerate([rd1, rd2, rd3, rd12, rd13, rd23, rd123]): roc_aucs[i, j] = roc_auc_score(rd, score) plt.figure(figsize=(12, 6)) plt.imshow(roc_aucs) plt.yticks(np.arange(3), ['score 1', 'score 2', 'score 3'], fontsize=16) plt.xticks( np.arange(7), ['1 good', '2 good', '3 good', '1 & 2 good', '1 & 3 good', '2 & 3 good', 'all good'], fontsize=14, rotation= 45 ) plt.title('ROC AUC') plt.xlabel('True state of the subsystems', fontsize=18) plt.ylabel('Predicted state', fontsize=18) plt.colorbar() Explanation: $\mathrm{score}_i$ variables represent classifier's prediction for each of the subdetectors. By construction $\mathrm{score}_i$ has high discriminative power against $i$-th detector. End of explanation metrics = np.ndarray(shape=(3, 7, 4)) for i, score in enumerate([score1, score2, score3]): pred = score > 0.5 for j, rd in enumerate([rd1, rd2, rd3, rd12, rd13, rd23, rd123]): tp = float(np.sum((pred == 1) & (rd == 1))) fp = float(np.sum((pred == 1) & (rd == 0))) tn = float(np.sum((pred == 0) & (rd == 0))) fn = float(np.sum((pred == 0) & (rd == 1))) precision = tp / (tp + fp) recall = tp / (tp + fn) spc = tn / (tn + fp) p = tn / (tn + fn) roc = roc_auc_score(rd, score) metrics[i, j] = (precision, recall, spc, p) for j, rd in enumerate(['1', '2', '3', '1&2', '1&3', '2&3', 'all good']): plt.figure() plt.imshow(metrics[:, j, :], vmin=0, vmax=1) plt.title('subsystems: %s' % rd) plt.xticks(np.arange(4), ['precision', 'recall', 'SPC', 'negative\npredictive value']) plt.yticks(np.arange(3), ['score 1', 'score 2', 'score 3'], fontsize=16) plt.colorbar() plt.show() Explanation: Observations: $\mathrm{score}_i$, indeed, have high ROC AUC against corresponding true states; $\mathrm{score}_2$ has ROC AUC < 0.5 against subsystem 3; the same for $\mathrm{score}_3$ vs subsystem 2; 'all good' has relatevely low score in spite of each score having high ROC AUC against corersponding subsystem. End of explanation
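A short extra check (my addition, not in the original notebook): the anti-correlation built into subsystems 2 and 3 is what drives the counter-intuitive ROC AUC values, and we can make that explicit with the variables already defined above.
# rd2/rd3 are negatively correlated by construction, rd1/rd3 are independent
print(np.corrcoef(rd2, rd3)[0, 1])   # clearly negative
print(np.corrcoef(rd1, rd3)[0, 1])   # close to zero
# score 2 tracks subsystem 2, so against subsystem 3 it falls below 0.5
print(roc_auc_score(rd3, score2))
# against 'all good' even a good per-subsystem score is diluted by the other subsystems
print(roc_auc_score(rd123, score1))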
5,072
Given the following text description, write Python code to implement the functionality described below step by step Description: Tools for Game Theory Daisuke Oyama Faculty of Economics, University of Tokyo This notebook demonstrates the functionalities of the game_theory module. Step1: Normal Form Games An $N$-player normal form game is a triplet $g = (I, (A_i){i \in I}, (u_i){i \in I})$ where $I = {0, \ldots, N-1}$ is the set of players, $A_i = {0, \ldots, n_i-1}$ is the set of actions of player $i \in I$, and $u_i \colon A_i \times A_{i+1} \times \cdots \times A_{i+N-1} \to \mathbb{R}$ is the payoff function of player $i \in I$, where $i+j$ is understood modulo $N$. Note that we adopt the convention that the $0$-th argument of the payoff function $u_i$ is player $i$'s own action and the $j$-th argument, $j = 1, \ldots, N-1$, is player ($i+j$)'s action (modulo $N$). In our module, a normal form game and a player are represented by the classes NormalFormGame and Player, respectively. A Player carries the player's payoff function and implements in particular a method that returns the best response action(s) given an action of the opponent player, or a profile of actions of the opponents if there are more than one. A NormalFormGame is in effect a container of Player instances. Creating a NormalFormGame There are several ways to create a NormalFormGame instance. The first is to pass an array of payoffs for all the players, i.e., an $(N+1)$-dimenstional array of shape $(n_0, \ldots, n_{N-1}, N)$ whose $(a_0, \ldots, a_{N-1})$-entry contains an array of the $N$ payoff values for the action profile $(a_0, \ldots, a_{N-1})$. As an example, consider the following game ("Matching Pennies") Step2: If a square matrix (2-dimensional array) is given, then it is considered to be a symmetric two-player game. Consider the following game (symmetric $2 \times 2$ "Coordination Game") Step3: Another example ("Rock-Paper-Scissors") Step4: The second is to specify the sizes of the action sets of the players to create a NormalFormGame instance filled with payoff zeros, and then set the payoff values to each entry. Let us construct the following game ("Prisoners' Dilemma") Step5: Finally, a NormalFormGame instance can be constructed by giving an array of Player instances, as explained in the next section. Creating a Player A Player instance is created by passing an array of dimension $N$ that represents the player's payoff function ("payoff array"). Consider the following game (a variant of "Battle of the Sexes") Step6: Beware that in payoff_array[h, k], h refers to the player's own action, while k refers to the opponent player's action. Step7: Passing an array of Player instances is the third way to create a NormalFormGame instance Step9: More than two players The game_theory module also supports games with more than two players. Let us consider the following version of $N$-player Cournot Game. There are $N$ firms (players) which produce a homogeneous good with common constant marginal cost $c \geq 0$. Each firm $i$ simultaneously determines the quantity $q_i \geq 0$ (action) of the good to produce. The inverse demand function is given by the linear function $P(Q) = a - Q$, $a > 0$, where $Q = q_0 + \cdots + q_{N-1}$ is the aggregate supply. Then the profit (payoff) for firm $i$ is given by $$ u_i(q_i, q_{i+1}, \ldots, q_{i+N-1}) = P(Q) q_i - c q_i = \left(a - c - \sum_{j \neq i} q_j - q_i\right) q_i. 
$$ Theoretically, the set of actions, i.e., available quantities, may be the set of all nonnegative real numbers $\mathbb{R}$ (or a bounded interval $[0, \bar{q}]$ with some upper bound $\bar{q}$), but for computation on a computer we have to discretize the action space and only allow for finitely many grid points. The following script creates a NormalFormGame instance of the Cournot game as described above, assuming that the (common) grid of possible quantity values is stored in an array q_grid. Step10: Here's a simple example with three firms, marginal cost $20$, and inverse demand function $80 - Q$, where the feasible quantity values are assumed to be $10$ and $15$. Step11: Nash Equilibrium A Nash equilibrium of a normal form game is a profile of actions where the action of each player is a best response to the others'. The Player object has a method best_response. Consider the Matching Pennies game g_MP defined above. For example, player 0's best response to the opponent's action 1 is Step12: Player 0's best responses to the opponent's mixed action [0.5, 0.5] (we know they are 0 and 1) Step13: For this game, we know that ([0.5, 0.5], [0.5, 0.5]) is a (unique) Nash equilibrium. Step15: Finding Nash equilibria Our module does not have sophisticated algorithms to compute Nash equilibria... One might look at Gambit, which implements several such algorithms. Brute force For small games, we can find pure action Nash equilibria by brute force. Step16: Matching Pennies Step17: Coordination game Step18: Rock-Paper-Scissors Step19: Battle of the Sexes Step20: Prisoners' Dillema Step21: Cournot game Step23: Sequential best response In some games, such as "supermodular games" and "potential games", the process of sequential best responses converges to a Nash equilibrium. Here's a script to find one pure Nash equilibrium by sequential best response, if it converges. Step24: A Cournot game with linear demand is known to be a potential game, for which sequential best response converges to a Nash equilibrium. Let us try a bigger instance Step25: The limit action profile is indeed a Nash equilibrium Step26: In fact, the game has other Nash equilibria (because of our choice of grid points and parameter values) Step27: Make it bigger Step28: Sequential best response does not converge in all games
Python Code: from __future__ import division, print_function import numpy as np from normal_form_game import NormalFormGame, Player Explanation: Tools for Game Theory Daisuke Oyama Faculty of Economics, University of Tokyo This notebook demonstrates the functionalities of the game_theory module. End of explanation matching_pennies_bimatrix = [[(1, -1), (-1, 1)], [(-1, 1), (1, -1)]] g_MP = NormalFormGame(matching_pennies_bimatrix) print(g_MP) print(g_MP.players[0]) # Player instance for player 0 print(g_MP.players[1]) # Player instance for player 1 g_MP.players[0].payoff_array # Player 0's payoff array g_MP.players[1].payoff_array # Player 1's payoff array Explanation: Normal Form Games An $N$-player normal form game is a triplet $g = (I, (A_i){i \in I}, (u_i){i \in I})$ where $I = {0, \ldots, N-1}$ is the set of players, $A_i = {0, \ldots, n_i-1}$ is the set of actions of player $i \in I$, and $u_i \colon A_i \times A_{i+1} \times \cdots \times A_{i+N-1} \to \mathbb{R}$ is the payoff function of player $i \in I$, where $i+j$ is understood modulo $N$. Note that we adopt the convention that the $0$-th argument of the payoff function $u_i$ is player $i$'s own action and the $j$-th argument, $j = 1, \ldots, N-1$, is player ($i+j$)'s action (modulo $N$). In our module, a normal form game and a player are represented by the classes NormalFormGame and Player, respectively. A Player carries the player's payoff function and implements in particular a method that returns the best response action(s) given an action of the opponent player, or a profile of actions of the opponents if there are more than one. A NormalFormGame is in effect a container of Player instances. Creating a NormalFormGame There are several ways to create a NormalFormGame instance. The first is to pass an array of payoffs for all the players, i.e., an $(N+1)$-dimenstional array of shape $(n_0, \ldots, n_{N-1}, N)$ whose $(a_0, \ldots, a_{N-1})$-entry contains an array of the $N$ payoff values for the action profile $(a_0, \ldots, a_{N-1})$. As an example, consider the following game ("Matching Pennies"): $ \begin{bmatrix} 1, -1 & -1, 1 \ -1, 1 & 1, -1 \end{bmatrix} $ End of explanation coordination_game_matrix = [[4, 0], [3, 2]] # square matrix g_Coo = NormalFormGame(coordination_game_matrix) print(g_Coo) g_Coo.players[0].payoff_array # Player 0's payoff array g_Coo.players[1].payoff_array # Player 1's payoff array Explanation: If a square matrix (2-dimensional array) is given, then it is considered to be a symmetric two-player game. Consider the following game (symmetric $2 \times 2$ "Coordination Game"): $ \begin{bmatrix} 4, 4 & 0, 3 \ 3, 0 & 2, 2 \end{bmatrix} $ End of explanation RPS_matrix = [[ 0, -1, 1], [ 1, 0, -1], [-1, 1, 0]] g_RPS = NormalFormGame(RPS_matrix) print(g_RPS) Explanation: Another example ("Rock-Paper-Scissors"): $ \begin{bmatrix} 0, 0 & -1, 1 & 1, -1 \ 1, -1 & 0, 0 & -1, 1 \ -1, 1 & 1, -1 & 0, 0 \end{bmatrix} $ End of explanation g_PD = NormalFormGame((2, 2)) # There are 2 players, each of whom has 2 actions g_PD[0, 0] = 1, 1 g_PD[0, 1] = -2, 3 g_PD[1, 0] = 3, -2 g_PD[1, 1] = 0, 0 print(g_PD) Explanation: The second is to specify the sizes of the action sets of the players to create a NormalFormGame instance filled with payoff zeros, and then set the payoff values to each entry. 
Let us construct the following game ("Prisoners' Dilemma"): $ \begin{bmatrix} 1, 1 & -2, 3 \ 3, -2 & 0, 0 \end{bmatrix} $ End of explanation player0 = Player([[3, 1], [0, 2]]) player1 = Player([[2, 0], [1, 3]]) Explanation: Finally, a NormalFormGame instance can be constructed by giving an array of Player instances, as explained in the next section. Creating a Player A Player instance is created by passing an array of dimension $N$ that represents the player's payoff function ("payoff array"). Consider the following game (a variant of "Battle of the Sexes"): $ \begin{bmatrix} 3, 2 & 1, 1 \ 0, 0 & 2, 3 \end{bmatrix} $ End of explanation player0.payoff_array player1.payoff_array Explanation: Beware that in payoff_array[h, k], h refers to the player's own action, while k refers to the opponent player's action. End of explanation g_BoS = NormalFormGame((player0, player1)) print(g_BoS) Explanation: Passing an array of Player instances is the third way to create a NormalFormGame instance: End of explanation from quantecon.cartesian import cartesian def cournot(a, c, N, q_grid): Create a `NormalFormGame` instance for the symmetric N-player Cournot game with linear inverse demand a - Q and constant marginal cost c. Parameters ---------- a : scalar Intercept of the demand curve c : scalar Common constant marginal cost N : scalar(int) Number of firms q_grid : array_like(scalar) Array containing the set of possible quantities Returns ------- NormalFormGame NormalFormGame instance representing the Cournot game q_grid = np.asarray(q_grid) payoff_array = \ cartesian([q_grid]*N).sum(axis=-1).reshape([len(q_grid)]*N) * (-1) + \ (a - c) payoff_array *= q_grid.reshape([len(q_grid)] + [1]*(N-1)) payoff_array += 0 # To get rid of the minus sign of -0 player = Player(payoff_array) return NormalFormGame([player for i in range(N)]) Explanation: More than two players The game_theory module also supports games with more than two players. Let us consider the following version of $N$-player Cournot Game. There are $N$ firms (players) which produce a homogeneous good with common constant marginal cost $c \geq 0$. Each firm $i$ simultaneously determines the quantity $q_i \geq 0$ (action) of the good to produce. The inverse demand function is given by the linear function $P(Q) = a - Q$, $a > 0$, where $Q = q_0 + \cdots + q_{N-1}$ is the aggregate supply. Then the profit (payoff) for firm $i$ is given by $$ u_i(q_i, q_{i+1}, \ldots, q_{i+N-1}) = P(Q) q_i - c q_i = \left(a - c - \sum_{j \neq i} q_j - q_i\right) q_i. $$ Theoretically, the set of actions, i.e., available quantities, may be the set of all nonnegative real numbers $\mathbb{R}$ (or a bounded interval $[0, \bar{q}]$ with some upper bound $\bar{q}$), but for computation on a computer we have to discretize the action space and only allow for finitely many grid points. The following script creates a NormalFormGame instance of the Cournot game as described above, assuming that the (common) grid of possible quantity values is stored in an array q_grid. End of explanation a, c = 80, 20 N = 3 q_grid = [10, 15] # [1/3 of Monopoly quantity, Nash equilibrium quantity] g_Cou = cournot(a, c, N, q_grid) print(g_Cou) print(g_Cou.players[0]) g_Cou.nums_actions Explanation: Here's a simple example with three firms, marginal cost $20$, and inverse demand function $80 - Q$, where the feasible quantity values are assumed to be $10$ and $15$. 
End of explanation g_MP.players[0].best_response(1) Explanation: Nash Equilibrium A Nash equilibrium of a normal form game is a profile of actions where the action of each player is a best response to the others'. The Player object has a method best_response. Consider the Matching Pennies game g_MP defined above. For example, player 0's best response to the opponent's action 1 is: End of explanation # By default, returns the best response action with the smallest index g_MP.players[0].best_response([0.5, 0.5]) # With tie_breaking='random', returns randomly one of the best responses g_MP.players[0].best_response([0.5, 0.5], tie_breaking='random') # Try several times # With tie_breaking=False, returns an array of all the best responses g_MP.players[0].best_response([0.5, 0.5], tie_breaking=False) Explanation: Player 0's best responses to the opponent's mixed action [0.5, 0.5] (we know they are 0 and 1): End of explanation g_MP.is_nash(([0.5, 0.5], [0.5, 0.5])) g_MP.is_nash((0, 0)) g_MP.is_nash((0, [0.5, 0.5])) Explanation: For this game, we know that ([0.5, 0.5], [0.5, 0.5]) is a (unique) Nash equilibrium. End of explanation def find_pure_nash_brute(g): Find all pure Nash equilibria of a normal form game by brute force. Parameters ---------- g : NormalFormGame NEs = [] for a in np.ndindex(*g.nums_actions): if g.is_nash(a): NEs.append(a) num_NEs = len(NEs) if num_NEs == 0: msg = 'no pure Nash equilibrium' elif num_NEs == 1: msg = '1 pure Nash equilibrium:\n{0}'.format(NEs) else: msg = '{0} pure Nash equilibria:\n{1}'.format(num_NEs, NEs) print('The game has ' + msg) Explanation: Finding Nash equilibria Our module does not have sophisticated algorithms to compute Nash equilibria... One might look at Gambit, which implements several such algorithms. Brute force For small games, we can find pure action Nash equilibria by brute force. End of explanation find_pure_nash_brute(g_MP) Explanation: Matching Pennies: End of explanation find_pure_nash_brute(g_Coo) Explanation: Coordination game: End of explanation find_pure_nash_brute(g_RPS) Explanation: Rock-Paper-Scissors: End of explanation find_pure_nash_brute(g_BoS) Explanation: Battle of the Sexes: End of explanation find_pure_nash_brute(g_PD) Explanation: Prisoners' Dillema: End of explanation find_pure_nash_brute(g_Cou) Explanation: Cournot game: End of explanation def sequential_best_response(g, init_actions=None, tie_breaking='smallest', verbose=True): Find a pure Nash equilibrium of a normal form game by sequential best response. Parameters ---------- g : NormalFormGame init_actions : array_like(int), optional(default=[0, ..., 0]) The initial action profile. tie_breaking : {'smallest', 'random'}, optional(default='smallest') verbose: bool, optional(default=True) If True, print the intermediate process. 
N = g.N # Number of players a = np.empty(N, dtype=int) # Action profile if init_actions is None: init_actions = [0] * N a[:] = init_actions if verbose: print('init_actions: {0}'.format(a)) new_a = np.empty(N, dtype=int) max_iter = np.prod(g.nums_actions) for t in range(max_iter): new_a[:] = a for i, player in enumerate(g.players): if N == 2: a_except_i = new_a[1-i] else: a_except_i = new_a[np.arange(i+1, i+N) % N] new_a[i] = player.best_response(a_except_i, tie_breaking=tie_breaking) if verbose: print('player {0}: {1}'.format(i, new_a)) if np.array_equal(new_a, a): return a else: a[:] = new_a print('No pure Nash equilibrium found') return None Explanation: Sequential best response In some games, such as "supermodular games" and "potential games", the process of sequential best responses converges to a Nash equilibrium. Here's a script to find one pure Nash equilibrium by sequential best response, if it converges. End of explanation a, c = 80, 20 N = 3 q_grid = np.linspace(0, a-c, 13) # [0, 5, 10, ..., 60] g_Cou = cournot(a, c, N, q_grid) a_star = sequential_best_response(g_Cou) # By default, start with (0, 0, 0) print('Nash equilibrium indices: {0}'.format(a_star)) print('Nash equilibrium quantities: {0}'.format(q_grid[a_star])) # Start with the largest actions (12, 12, 12) sequential_best_response(g_Cou, init_actions=(12, 12, 12)) Explanation: A Cournot game with linear demand is known to be a potential game, for which sequential best response converges to a Nash equilibrium. Let us try a bigger instance: End of explanation g_Cou.is_nash(a_star) Explanation: The limit action profile is indeed a Nash equilibrium: End of explanation find_pure_nash_brute(g_Cou) Explanation: In fact, the game has other Nash equilibria (because of our choice of grid points and parameter values): End of explanation N = 4 q_grid = np.linspace(0, a-c, 61) # [0, 1, 2, ..., 60] g_Cou = cournot(a, c, N, q_grid) sequential_best_response(g_Cou) sequential_best_response(g_Cou, init_actions=(0, 0, 0, 30)) Explanation: Make it bigger: End of explanation print(g_MP) # Matching Pennies sequential_best_response(g_MP) Explanation: Sequential best response does not converge in all games: End of explanation
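An additional check (my addition, assuming is_nash accepts mixed actions for g_RPS exactly as it did for g_MP above): Rock-Paper-Scissors has no pure Nash equilibrium, but its symmetric mixed equilibrium can be verified directly.
print(g_RPS.is_nash(([1/3, 1/3, 1/3], [1/3, 1/3, 1/3])))  # expected: True
print(g_RPS.is_nash(([1/2, 1/2, 0], [1/2, 1/2, 0])))      # expected: False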
5,073
Given the following text description, write Python code to implement the functionality described below step by step Description: Fibonacci Numbers A Fibonacci number F(n) is computed as the sum of the two numbers preceding it in a Fibonacci sequence (0), 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, ..., for example, F(10) = 55. More formally, we can define a Fibonacci number F(n) as $F(n) = F(n-1) + F(n-2)$, for integers $n > 1$ Step1: However, it is unfortunately a terribly inefficient algorithm with an exponential running time of $O(2^n)$. The main reason it is so slow is that we recompute the Fibonacci numbers in $F(n) = F(n-1) + F(n-2)$ repeatedly, as shown in the recursive tree below Step2: (If you are interested in other approaches, I recommend you take a look at the pages on Wikipedia and Wolfram.) To get a rough idea of the running times of each of our implementations, let's use the %timeit magic for F(30). Step3: Finally, let's benchmark our implementations for varying sizes of n
Python Code: def fibo_recurse(n): if n <= 1: return n else: return fibo_recurse(n-1) + fibo_recurse(n-2) print(fibo_recurse(0)) print(fibo_recurse(1)) print(fibo_recurse(10)) Explanation: Fibonacci Numbers A Fibonacci number F(n) is computed as the sum of the two numbers preceeding it in a Fibonacci sequence (0), 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, ..., for example, F(10) = 55. More formally, we can define a Fibonacci number F(n) as $F(n) = F(n-1) + F(n-2)$, for integers $n > 1$: $$F(n)= \begin{cases} 0 & n=0, \ 1, & n=1, \ F(n-1) + F(n-2), & n > 1. \end{cases}$$ The Fibonacci sequence was named after Leanardo Fibonacci, who used the Fibonacci sequence to study rabit populations in the 12th century. I highly recommend reading the excellent articles on Wikipedia and Wolfram, which discuss the interesting facts about the Fibonacci number in great detail. The recursive Fibonacci number computation is a typical text book example of a recursive algorithm: End of explanation def fibo_dynamic(n): f, f_minus_1 = 0, 1 for i in range(n): f_minus_1, f = f, f + f_minus_1 return f print(fibo_dynamic(0)) print(fibo_dynamic(1)) print(fibo_dynamic(10)) Explanation: However, it is unfortunately a terribly inefficient algorithm with an exponential running time of $O(2^n)$. The main problem why it is so slow is that we recompute Fibonacci number $F(n) = F(n-1) + F(n-2)$ repeatedly as shown in the recursive tree below: For example, assuming $n \geq 2$ we have $O(2^{n-1}) + O(2^{n-2}) + O(1) = O(2^n)$ for $F(n) = F(n-1) + F(n-2)$, where $O(1)$ is for adding to Fibonacci numbers together. A more efficient approach to compute a Fibonacci number is a dynamic approach with linear runtime, $O(n)$: End of explanation %timeit -r 3 -n 10 fibo_recurse(n=30) %timeit -r 3 -n 10 fibo_dynamic(n=30) Explanation: (If you are interested in other approaches, I recommend you take a look at the pages on Wikipedia and Wolfram.) To get a rough idea of the running times of each of our implementations, let's use the %timeit magic for F(30). End of explanation import timeit funcs = ['fibo_recurse', 'fibo_dynamic'] orders_n = list(range(0, 50, 10)) times_n = {f:[] for f in funcs} for n in orders_n: for f in funcs: times_n[f].append(min(timeit.Timer('%s(n)' % f, 'from __main__ import %s, n' % f) .repeat(repeat=3, number=5))) %matplotlib inline import matplotlib.pyplot as plt def plot_timing(): labels = [('fibo_recurse', 'fibo_recurse'), ('fibo_dynamic', 'fibo_dynamic')] plt.rcParams.update({'font.size': 12}) fig = plt.figure(figsize=(10, 8)) for lb in labels: plt.plot(orders_n, times_n[lb[0]], alpha=0.5, label=lb[1], marker='o', lw=3) plt.xlabel('sample size n') plt.ylabel('time per computation in milliseconds [ms]') plt.legend(loc=2) plt.ylim([-1, 300]) plt.grid() plt.show() plot_timing() Explanation: Finally, let's benchmark our implementations for varying sizes of n: End of explanation
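A third variant (my addition, not in the original text, assuming Python 3): memoising the naive recursion with functools.lru_cache also yields linear running time, at the cost of caching all intermediate results.
from functools import lru_cache

@lru_cache(maxsize=None)
def fibo_memo(n):
    # same recursion as fibo_recurse, but each F(k) is computed only once
    if n <= 1:
        return n
    return fibo_memo(n-1) + fibo_memo(n-2)

print(fibo_memo(10))  # 55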
5,074
Given the following text description, write Python code to implement the functionality described below step by step Description: Non supervised learning Autoencoders Suppose we have only a set of unlabeled training examples $x_1,x_2,x_3, \dots $, where $x_i \in \Re^n$. An autoencoder neural network is an unsupervised learning algorithm that applies backpropagation and uses a loss function that is optimal when setting the target values to be equal to the inputs, $y_i=x_i$. To build an autoencoder, you need three things Step1: Let's also create a separate encoder model and a separate decoder model Step2: Let's prepare our input data. Step3: Adding depth and sparsity constraint on the encoded representations In the previous example, the representations were only constrained by the size of the hidden layer (32). In such a situation, what typically happens is that the hidden layer is learning an approximation of PCA (principal component analysis). But another way to constrain the representations to be compact is to add a sparsity contraint on the activity of the hidden representations, so fewer units would "fire" at a given time. In Keras, this can be done by adding an activity_regularizer to our Dense layer Step4: Convolutional Autoencoders Since our inputs are images, it makes sense to use convolutional neural networks (convnets) as encoders and decoders. Step5: Example Step6: Variational Autoencoders A variational autoencoder is an autoencoder that adds probabilistic constraints on the representations being learned. When using probabilistic models, compressed representation is called latent variable model. So, instead of learning a function this model is learning a probabilistic distribution function that models your data. Why? Standard autoencoders are not suited to work as a generative model. If you pick a random value for your decoder you won't get necessarily a good reconstruction Step7: Up to now we have an encoder that takes images and produce (the parameters of) a pdf in the latent space. The decoder takes points in the latent space and return reconstructions. How do we connect both models? By sampling from the produced distribution! <center> <img src="images/vae4.png" alt="" style="width Step8: Now we can create the decoder net Step9: Lastly, from this model, we can do three things Step11: In order to be coherent with our previous definitions, we must assure that points sampled fron the latent space fit a standard normal distribition, but the encoder is producing non standard normal distributions. So, we must add a constraint for getting something like this Step12: Training a VAE How do we train a model that have a sampling step? <center> <img src="images/vae_sampling.png" alt="" style="width
Python Code: # Source: Adapted from https://blog.keras.io/building-autoencoders-in-keras.html from keras.layers import Input, Dense from keras.models import Model # this is the size of our encoded representations encoding_dim = 32 # 32 floats -> compression of factor 24.5, # assuming the input is 784 floats input_img = Input(shape=(784,)) # encoded representation of the input encoding_layer = Dense(encoding_dim, activation='relu') encoded = encoding_layer(input_img) # lossy reconstruction of the input decoding_layer = Dense(784, activation='sigmoid') decoded = decoding_layer(encoded) # this model maps an input to its reconstruction autoencoder = Model(input_img, decoded) Explanation: Non supervised learning Autoencoders Suppose we have only a set of unlabeled training examples $x_1,x_2,x_3, \dots $, where $x_i \in \Re^n$. An autoencoder neural network is an unsupervised learning algorithm that applies backpropagation and uses a loss function that is optimal when setting the target values to be equal to the inputs, $y_i=x_i$. To build an autoencoder, you need three things: an encoding function, a decoding function, and a distance function between the amount of information loss between the compressed representation of your data and the decompressed representation. <center> <img src="images/autoencoder.jpg" alt="" style="width: 700px;"/> Source: https://blog.keras.io/building-autoencoders-in-keras.html </center> Two practical applications of autoencoders are data denoising, and dimensionality reduction for data visualization. With appropriate dimensionality and sparsity constraints, autoencoders can learn data projections that are more interesting than PCA or other basic techniques. We'll start simple, with a single fully-connected neural layer as encoder and as decoder: End of explanation # this model maps an input to its encoded representation encoding_model = Model(input_img, encoded) # create a placeholder for an encoded input # and create the decoder model encoded_input = Input(shape=(encoding_dim,)) decoding_model = Model(encoded_input, decoding_layer(encoded_input)) autoencoder.compile(optimizer='adam', loss='mse') Explanation: Let's also create a separate encoder model and a separate decoder model: End of explanation from keras.datasets import mnist import numpy as np (x_train, _), (x_test, _) = mnist.load_data() x_train = x_train.astype('float32') / 255. x_test = x_test.astype('float32') / 255. x_train = x_train.reshape((len(x_train), np.prod(x_train.shape[1:]))) x_test = x_test.reshape((len(x_test), np.prod(x_test.shape[1:]))) print x_train.shape print x_test.shape autoencoder.fit(x_train, x_train, nb_epoch=15, batch_size=256, shuffle=True, validation_data=(x_test, x_test)) # encode and decode some digits # note that we take them from the *test* set encoded_imgs = encoding_model.predict(x_test) decoded_imgs = decoding_model.predict(encoded_imgs) import matplotlib.pyplot as plt %matplotlib inline n = 10 # how many digits we will display plt.figure(figsize=(10, 4)) for i in range(n): # display original ax = plt.subplot(2, n, i + 1) plt.imshow(x_test[i].reshape(28, 28)) plt.gray() ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) # display reconstruction ax = plt.subplot(2, n, i + 1 + n) plt.imshow(decoded_imgs[i].reshape(28, 28)) plt.gray() ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) plt.show() Explanation: Let's prepare our input data. 
End of explanation #autoencoder.reset_states() #encoder.reset_states() #decoder.reset_states() from keras import regularizers from keras import optimizers from keras.regularizers import l2, activity_l1 from keras.layers import Input, Dense from keras.models import Model input_img = Input(shape=(784,)) encoded = Dense(128, activation='relu')(input_img) encoded = Dense(64, activation='relu')(encoded) encoded = Dense(32, activation='relu')(encoded) decoded = Dense(64, activation='relu')(encoded) decoded = Dense(128, activation='relu')(decoded) decoded = Dense(784, activation='sigmoid')(decoded) autoencoder = Model(input_img, decoded) autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy', activity_regularizer=regularizers.l1(10e-5)) autoencoder.fit(x_train, x_train, nb_epoch=100, batch_size=256, shuffle=True, validation_data=(x_test, x_test)) Explanation: Adding depth and sparsity constraint on the encoded representations In the previous example, the representations were only constrained by the size of the hidden layer (32). In such a situation, what typically happens is that the hidden layer is learning an approximation of PCA (principal component analysis). But another way to constrain the representations to be compact is to add a sparsity contraint on the activity of the hidden representations, so fewer units would "fire" at a given time. In Keras, this can be done by adding an activity_regularizer to our Dense layer: End of explanation from keras.layers import Input, Dense, Conv2D, MaxPooling2D, UpSampling2D from keras.models import Model from keras import backend as K input_img = Input(shape=(28, 28, 1)) x = Conv2D(16, 3, 3, activation='relu', border_mode='same')(input_img) x = MaxPooling2D((2, 2), border_mode='same')(x) x = Conv2D(8, 3, 3, activation='relu', border_mode='same')(x) x = MaxPooling2D((2, 2), border_mode='same')(x) x = Conv2D(8, 3, 3, activation='relu', border_mode='same')(x) encoded = MaxPooling2D((2, 2), border_mode='same')(x) # at this point the representation is (4, 4, 8) i.e. 128-dimensional x = Conv2D(8, 3, 3, activation='relu', border_mode='same')(encoded) x = UpSampling2D((2, 2))(x) x = Conv2D(8, 3, 3, activation='relu', border_mode='same')(x) x = UpSampling2D((2, 2))(x) x = Conv2D(16, 3, 3, activation='relu')(x) x = UpSampling2D((2, 2))(x) decoded = Conv2D(1, 3, 3, activation='sigmoid', border_mode='same')(x) # at this point the representation is (28, 28, 1) i.e. 784-dimensional autoencoder = Model(input_img, decoded) autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy') from keras.datasets import mnist import numpy as np (x_train, _), (x_test, _) = mnist.load_data() x_train = x_train.astype('float32') / 255. x_test = x_test.astype('float32') / 255. 
x_train = np.reshape(x_train, (len(x_train), 28, 28, 1)) # adapt this if using `channels_first` image data format x_test = np.reshape(x_test, (len(x_test), 28, 28, 1)) # adapt this if using `channels_first` image data format from keras.callbacks import TensorBoard autoencoder.fit(x_train, x_train, nb_epoch=50, batch_size=128, shuffle=True, validation_data=(x_test, x_test), callbacks=[TensorBoard(log_dir='/tmp/autoencoder')]) decoded_imgs = autoencoder.predict(x_test) import matplotlib.pyplot as plt n = 10 plt.figure(figsize=(10, 2)) for i in range(1,n): # display original ax = plt.subplot(2, n, i) plt.imshow(x_test[i].reshape(28, 28)) plt.gray() ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) # display reconstruction ax = plt.subplot(2, n, i + n) plt.imshow(decoded_imgs[i].reshape(28, 28)) plt.gray() ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) plt.show() Explanation: Convolutional Autoencoders Since our inputs are images, it makes sense to use convolutional neural networks (convnets) as encoders and decoders. End of explanation import matplotlib.pyplot as plt import numpy as np from keras.datasets import mnist from keras.layers import Dense from keras.models import Sequential (x_train, _), (x_test, _) = mnist.load_data() x_train = x_train.astype('float32') / 255. x_test = x_test.astype('float32') / 255. x_train = np.reshape(x_train, (len(x_train), 784)) x_test = np.reshape(x_test, (len(x_test), 784)) noise_factor = 0.5 x_train_noisy = x_train + noise_factor * np.random.normal(loc=0.0, scale=1.0, size=x_train.shape) x_test_noisy = x_test + noise_factor * np.random.normal(loc=0.0, scale=1.0, size=x_test.shape) x_train_noisy = np.clip(x_train_noisy, 0., 1.) x_test_noisy = np.clip(x_test_noisy, 0., 1.) n = 10 plt.figure(figsize=(20, 2)) for i in range(n): ax = plt.subplot(1, n, i+1) plt.imshow(x_test_noisy[i].reshape(28, 28)) plt.gray() ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) plt.show() model = Sequential() model.add(Dense(128, activation='relu', input_dim=784)) model.add(Dense(64, activation='relu')) model.add(Dense(32, activation='relu')) model.add(Dense(64, activation='relu')) model.add(Dense(128, activation='relu')) model.add(Dense(784, activation='sigmoid')) model.compile(optimizer='adam', loss='binary_crossentropy') model.fit(x_train_noisy, x_train, nb_epoch=100, batch_size=256, shuffle=True, validation_data=(x_test_noisy, x_test)) decoded_imgs = model.predict(x_test) n = 10 plt.figure(figsize=(20, 6)) for i in range(1, n): # display original ax = plt.subplot(3, n, i) plt.imshow(x_test[i].reshape(28, 28)) plt.gray() ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) # display noisy ax = plt.subplot(3, n, i + n) plt.imshow(x_test_noisy[i].reshape(28, 28)) plt.gray() ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) # display reconstruction ax = plt.subplot(3, n, i + 2*n) plt.imshow(decoded_imgs[i].reshape(28, 28)) plt.gray() ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) plt.show() Explanation: Example: Image denoising <center> <img src="images/denoised_digits.png" alt="" style="width: 700px;"/> (Source: https://blog.keras.io/building-autoencoders-in-keras.html) </center> End of explanation # vae architecture from tensorflow.examples.tutorials.mnist import input_data from keras.layers import Input, Dense, Lambda from keras.models import Model from keras.objectives import binary_crossentropy from keras.callbacks import LearningRateScheduler import numpy as np 
import matplotlib.pyplot as plt import keras.backend as K import tensorflow as tf m = 50 n_z = 2 n_epoch = 100 # encoder inputs = Input(shape=(784,)) h_q = Dense(512, activation='relu')(inputs) mu = Dense(n_z, activation='linear')(h_q) log_sigma = Dense(n_z, activation='linear')(h_q) Explanation: Variational Autoencoders A variational autoencoder is an autoencoder that adds probabilistic constraints on the representations being learned. When using probabilistic models, compressed representation is called latent variable model. So, instead of learning a function this model is learning a probabilistic distribution function that models your data. Why? Standard autoencoders are not suited to work as a generative model. If you pick a random value for your decoder you won't get necessarily a good reconstruction: the value can far away from any previous value the network has seen before! That's why attaching a probabilistic model to the compressed representation is a good idea! For sake of simplicity, let's use a standard normal distribution to define the distribution of inputs ($\mathbf V$) the decoder will receive. The architecture of a variational autoencoder (VAE) is thus: <center> <img src="images/vae1.png" alt="" style="width: 300px;"/> (Source: http://ijdykeman.github.io/ml/2016/12/21/cvae.html) </center> We want the decoder to take any point taken from a standard normal distribution to return a reasonable element of our dataset: <center> <img src="images/vae2.png" alt="" style="width: 500px;"/> (Source: http://ijdykeman.github.io/ml/2016/12/21/cvae.html) </center> Let's consider the encoder role in this architecture. In a traditional autoencoder, the encoder model takes a sample from data and returns a single point in the latent space, which is then passed to the decoder. What information is encoded in the latent space? In VAE the encoder instead produces (the parameters of) a probability distribution in the latent space: <center> <img src="images/vae3.png" alt="" style="width: 500px;"/> (Source: http://ijdykeman.github.io/ml/2016/12/21/cvae.html) </center> These distributions are (non standard) Gaussians of the same dimensionality as the latent space. First, let’s implement the encoder net, which takes input $X$ and outputs two things: $\mu(X)$ and $\Sigma(X)$, the parameters of the Gaussian. Our encoder will be a neural net with one hidden layer. Our latent variable is two dimensional, so that we could easily visualize it. End of explanation def sample_z(args): mu, log_sigma = args eps = K.random_normal(shape=(m, n_z), mean=0., std=1.) return mu + K.exp(log_sigma / 2) * eps # Sample z z = Lambda(sample_z)([mu, log_sigma]) Explanation: Up to now we have an encoder that takes images and produce (the parameters of) a pdf in the latent space. The decoder takes points in the latent space and return reconstructions. How do we connect both models? By sampling from the produced distribution! <center> <img src="images/vae4.png" alt="" style="width: 400px;"/> (Source: http://ijdykeman.github.io/ml/2016/12/21/cvae.html) </center> To this end we will implement a random variate reparameterisation: the substitution of a random variable by a deterministic transformation of a simpler random variable. There are several methods by which non-uniform random numbers, or random variates, can be generated. 
The most popular methods are the one-liners, which give us the simple tools to generate random variates in one line of code, following the classic paper by Luc Devroye (Luc Devroye, Random variate generation in one line of code, Proceedings of the 28th conference on Winter simulation, 1996). In the case of a Gaussian, we can use the following algorithm: + Generate $\epsilon \sim \mathcal{N}(0;1)$. + Compute a sample from $\mathcal{N}(\mu; RR^T)$ as $\mu + R \epsilon$. End of explanation decoder_hidden = Dense(512, activation='relu') h_p = decoder_hidden(z) decoder_out = Dense(784, activation='sigmoid') outputs = decoder_out(h_p) Explanation: Now we can create the decoder net: End of explanation # Overall VAE model, for reconstruction and training vae = Model(inputs, outputs) # Encoder model, to encode input into latent variable # We use the mean as the output as it is the center point, the representative of the gaussian encoder = Model(inputs, mu) # Generator model, generate new data given latent variable z d_in = Input(shape=(n_z,)) d_h = decoder_hidden(d_in) d_out = decoder_out(d_h) decoder = Model(d_in, d_out) Explanation: Lastly, from this model, we can do three things: reconstruct inputs, encode inputs into latent variables, and generate data from latent variable. End of explanation def vae_loss(y_true, y_pred): Calculate loss = reconstruction loss + KL loss for each data in minibatch recon = K.sum(K.binary_crossentropy(y_pred, y_true), axis=1) # D_KL(Q(z|X) || P(z|X)); # calculate in closed form as both dist. are Gaussian kl = 0.5 * K.sum(K.exp(log_sigma) + K.square(mu) - 1. - log_sigma, axis=1) return recon + kl Explanation: In order to be coherent with our previous definitions, we must assure that points sampled fron the latent space fit a standard normal distribition, but the encoder is producing non standard normal distributions. So, we must add a constraint for getting something like this: <center> <img src="images/vae5.png" alt="" style="width: 700px;"/> (Source: http://ijdykeman.github.io/ml/2016/12/21/cvae.html) </center> In order to impose this constraint in the loss function by using the Kullback-Leibler divergence. The Kullback–Leibler divergence is a measure of how one probability distribution diverges from a second expected probability distribution. For discrete probability distributions $P$ and $Q$, the Kullback–Leibler divergence from $Q$ to $P$ is defined to be $$ D_{\mathrm {KL} }(P\|Q)=\sum _{i}P(i)\,\log {\frac {P(i)}{Q(i)}}. $$ The rest of the loss function must take into account the "reconstruction" error. End of explanation from keras.datasets import mnist (x_train, _), (x_test, y_test) = mnist.load_data() x_train = x_train.astype('float32') / 255. x_test = x_test.astype('float32') / 255. 
x_train = np.reshape(x_train, (len(x_train), 784)) x_test = np.reshape(x_test, (len(x_test), 784)) vae.compile(optimizer='adam', loss=vae_loss) vae.fit(x_train, x_train, batch_size=m, nb_epoch=n_epoch) encoded_imgs = encoder.predict(x_test) decoded_imgs = decoder.predict(encoded_imgs) import matplotlib.pyplot as plt %matplotlib inline n = 10 # how many digits we will display plt.figure(figsize=(10, 4)) for i in range(n): # display original ax = plt.subplot(2, n, i + 1) plt.imshow(x_test[i].reshape(28, 28)) plt.gray() ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) # display reconstruction ax = plt.subplot(2, n, i + 1 + n) plt.imshow(decoded_imgs[i].reshape(28, 28)) plt.gray() ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) plt.show() plt.scatter(encoded_imgs[:,0], encoded_imgs[:,1], c=y_test, cmap=plt.cm.get_cmap("jet", 10)) plt.colorbar(ticks=range(10)) Explanation: Training a VAE How do we train a model that have a sampling step? <center> <img src="images/vae_sampling.png" alt="" style="width: 800px;"/> </center> In fact this is not a problem! By using the one-liner method for sampling we have expressed the latent distribution in a way that its parameters are factored out of the parameters of the random variable so that backpropagation can be used to find the optimal parameters of the latent distribution. For this reason this method is called reparametrization trick. By using this trick we can train end-to-end a VAE with backpropagation. End of explanation
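A generation sketch (my addition, not in the original notebook): because the decoder was trained against an (approximately) standard normal latent space, we can sample z directly from N(0, I) and decode it to produce new digit-like images.
# sample a random point in the 2-D latent space and decode it
z_sample = np.random.normal(size=(1, n_z))
generated = decoder.predict(z_sample)
plt.imshow(generated.reshape(28, 28), cmap='gray')
plt.show()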
5,075
Given the following text description, write Python code to implement the functionality described below step by step Description: Pandas and SQL Pandas is a widely used python package for data analysis. We will mainly focus on pandas.DataFrame. Informally, you can think of a dataframe as an advanced relational table (or a spreadsheet in an Excel file). With it, you can easily perform many useful tasks quickly, such as aggregate analysis, and data enrichment or selection. In this notebook, we focus on supporting common SQL operations on dataframes. More documentation can be found at * "Comparison with SQL" Step1: Import data We can easily create a DataFrame from a CSV or a tab-delimited file. You can create a dataframe object in other ways, e.g., from multiple Series or from a dictionary. Then we can * use shape to see the number of rows and columns * use head(x) or tail(x) to view the first or last x rows of the DataFrame. The default value of x is 5. * pandas is also smart in that it will intelligently print part of the dataframe content if it is too large. Step2: SELECT Columns can be identified by their index (0-based) or their name. Step3: WHERE Step4: When dealing with multiple conditions, pandas uses & and |. Each condition must be bracketed. Step5: Under the hood, the conditions determine a Boolean array. Step6: GROUP BY In pandas, we use groupby() to split a dataset into groups; we can apply some function (e.g., aggregation) and combine the groups together. We use size() to return the number of rows of each group (like COUNT in SQL) Step7: We use agg() to apply multiple functions at once, and pass a list of columns to groupby() to group multiple columns Step8: JOIN Step9: We use pd.merge() to join two DataFrames, where you can specify the join key Step10: We can also specify the join type Step11: UNION Step12: use pd.concat() to union two tables without removing duplicates (i.e., UNION ALL in SQL) Step13: use drop_duplicates() to remove duplicate rows Step14: Adding Columns Step15: Pivoting and Visualization
Python Code: import pandas as pd import numpy as np import plotly.plotly as py import plotly.graph_objs as go Explanation: Pandas and SQL Pandas is a widely used python package for data analysis. We will mainly focus on pandas.DataFrame. Informally, you can think of a dataframe as an advanced relational table (or a spreadsheet in an Excel file). There, you can easily perform many useful tasks quickly, such as aggregate analysis, and data enrichment or selection. In this notebook, we focus on supporting common SQL operations on dataframes. More documentation can be found at * "Comparison with SQL": http://pandas.pydata.org/pandas-docs/stable/comparison_with_sql.html * Dataframe/Pandas documentation: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html We also use the plotly plotting library. You need to install it, register an account, and perform initialization (c.f., https://plot.ly/python/getting-started/), before the following code can work. Alternatively, just comment out plotly import below and the code that generates the plot. Import Modules End of explanation df = pd.read_csv('./asset/lecture_data.txt', sep='\t') # to read an excel file, use read_excel() df.head() df df.describe() Explanation: Import data We can easily create a DataFrame from a CSV or a tab-delimited file. You can creata a dataframe object in other ways, e.g., from multiple Series or from a dictionary. Then we can * use shape to see the number of rows and columns * use head(x) or tail(x) to view the first or last x rows of the dataFrame. The default value of x is 5. * pandas is also smart in that it will intelligently print part of the dataframe content if it is too large. End of explanation df.location.head() # show location and dollars_sold df[[0, 3]].head() # df[['location','dollars_sold']].head() # also okay Explanation: SELECT Columns can be identified by its index (0-based) or its name. End of explanation df[df['location'] == 'Vancouver'].head() Explanation: WHERE End of explanation # location at Vancouver and dollars_sold more than 500, except the 1st quarter df[(df['location'] == 'Vancouver') & (df['time'] != 'Q1') & (df['dollars_sold'] > 500)] Explanation: When dealing with multiple conditions, pandas uses &amp; and |. Each condition must be bracketed. End of explanation df['time'] == 'Q1' Explanation: Under the hood, the conditions determines a Boolean array. End of explanation df.groupby('location').size() Explanation: GROUP BY In pandas, we uses groupby() to split a dataset into groups; we can apply some function (e.g., aggregation) and combine the groups together. 
We use size() to return the number of rows of each group (like COUNT in SQL) End of explanation df.groupby(['location','item']).agg({'dollars_sold': [np.mean,np.sum]}) Explanation: We use agg() to apply multiple functions at once, and pass a list of columns to groupby() to grouping multiple columns End of explanation # import another table about the population of each city df2 = pd.read_csv('./asset/population_1.txt', sep='\t') df2.head() Explanation: JOIN End of explanation pd.merge(df,df2,on='location').head() Explanation: We use pd.merge() to join two DataFrames, where you can specify the join key End of explanation pd.merge(df,df2,on='location',how='left').tail() Explanation: We can also specify the join type End of explanation # import another part of the population table df3=pd.read_csv('./asset/population_2.txt', sep='\t') df3 Explanation: UNION End of explanation pd.concat([df2,df3]) Explanation: use pd.concat() to union two tables without removing duplicates (i.e., UNION ALL in SQL) End of explanation pd.concat([df2,df3]).drop_duplicates() Explanation: use drop_duplicates() to remove duplicate rows End of explanation df_city = pd.concat([df2,df3]).drop_duplicates() df_city['big city'] = pd.Series(df_city['population'] > 1000000, index=df_city.index) df_city Explanation: Adding Columns End of explanation table = pd.pivot_table(df, index = 'location', columns = 'time', aggfunc=np.sum) table pd.pivot_table(df, index = ['location', 'item'], columns = 'time', aggfunc=np.sum, margins=True) trace1 = go.Bar( x=table.index, y=table.dollars_sold.Q1, name='Q1' ) trace2 = go.Bar( x=table.index, y=table.dollars_sold.Q2, name='Q2' ) trace3 = go.Bar( x=table.index, y=table.dollars_sold.Q3, name='Q3' ) trace4 = go.Bar( x=table.index, y=table.dollars_sold.Q4, name='Q4' ) data = [trace1, trace2, trace3, trace4] layout = go.Layout( barmode='group' ) fig = go.Figure(data=data, layout=layout) py.iplot(fig, filename='sales-plot') Explanation: Pivoting and Visualization End of explanation
5,076
Given the following text description, write Python code to implement the functionality described below step by step Description: Data science with IBM Planning Analytics Cubike example - Part 2 and 3 Welcome to the second part of the Data Science with TM1/Planning Analytics. In Part 1 , we uploaded in our TM1 cubes the weather data from a web service. Now that we have all the data we need in TM1, we can start to analyse it. The Python community provides lots of tools which make data science easy and fun. Why using Python for Data Science? Top-notch, free data analysis libraries. Free (and good) libraries to access data from systems or web. Lots of research in new disciplines like, Machine Learning (Tensorflow) or Natural Language Processing (NLTK). TM1 (as the Information Warehouse) is the ideal data source for Data Science Objective The objective of this article is to explore the impact of seasonal effects, weather and public holidays on our operative business. To do that we are going to follow these steps Step1: Import all Python librairies we need in this example Step2: Step 2 Step3: Load 2017 Bike Shares by Month Before we start with the analysis, we first need to bring data from TM1 into our notebook. We start with data from the view 2017 Counts by Month of the Bike Shares cube. To load data from the cube view into python we execute the following code Step4: Our cellset, given back by TM1, is stored in the variable data. To convert this data set into a pandas dataframe, we use the TM1py function Utils.build_pandas_dataframe_from_cellset Step5: Working with a pandas dataframe is much more convenient than working with a raw data set. A pandas dataframe comes with lots of features which will help us to manipulate data. We now need to rearrange the dataframe. We need to reduce the dimensionality of our data from 4 (Time, Version, City, Measure) to 2 (Time, City). This should make our life easier down the road. Step6: To show the rearranged dataframe, we can just type df into a jupyter cell and execute it. Step7: Plot Bike Shares by Month Let's plot a barchart from our dataframe, to explore and understand the monthly distributions throughout the cities, visually. Using the popular charting library Plotly, we can create an interactive barplot with just 7 lines of python code Step8: Step 2 - Conclusion As expected, the seasons have a massive impact on our bike sharing business. In the warmer months we have substantially more usage than in colder months. Also interesting is that the seasons seem to impact cities differently. While the relation between Summer and Winter months in NYC and Washington DC is approximately 1/2, in Chicago it is close to 1/5! Let's dig deeper into the relationships between the cities and the Temperature! Step 3 Step9: TM1 returns missing values as NaN, to work with this data set in Python need to replace the NaN values with 0. Step10: We need now to convert the Date coming from our Date dimension into a date format that Pandas can understand. 
We use df_w.Date to focus only on the Date column Step11: The last step is to rearrange the data Step12: Let's print 10 sample records from the dataframe using df_w.sample(10) Step13: Plot data Step14: Load 2014 to 2017 Bike Shares from TM1 Cube Load number of Bike Shares by day from View <span style="color Step15: Rearrange the data Step16: Let's print 5 sample records from the dataframe using df_b.sample(5) Step17: Correlation of Bike Shares between the Cities Pandas dataframes come with very handy and easy to use tools for data analysis. To calculate the correlation between the different columns (cities) in our dataframe, we can just call the corr function on our dataframe. df_b.corr() Step18: As one would expect, the bike shares by day across our three cities are all strongly correlated. To analyze the relationship between the average temperature and the bike shares by city, we need to query the daily historic average temperatures from our TM1 cube into python. We execute the following code to create a dataframe (we call it Step19: Step 3 - Conclusion Temperature and Bike shares are strongly correlated in every city. For the forecasting (part 3 of this series) to be effective we will need a model that can take seasonal effects (e.g. temperature) into account. The intensity with which temperature affects the bike shares varies by city. For the forecasting we will need to create different models by city. Step 4 Step20: Step 4 - Conclusion Analyzing the plot visually we make a few statements Step21: To calculate the fitted line we use the linregress function from the popular Scipy stats module. Note that the function not only returns us the slope and intercept of the fitted line, but also three measures (R squared, P Value and the Standard Error), that quantify how well the fitted line matches the observations. Step22: Load Holidays and Weekends Now we need to query Public Holidays and Weekends from TM1 through two small MDX Queries, and merge them into a list. This list we call non_work_days. Step23: Plot Scatterplot with Trendline How does Temperature impact our business? Scatterplot of Temperature against Bike Shares. Now we can create a new scatterplot, that includes the fitted line (orange), the working days (blue) and the non-working days (green). <b>Workingdays</b> in lightblue <b>Non workingdays</b> in green <b>Fitted Line</b> in orange Step24: When we repeat this exercise for Chicago and Washington DC we see a similar picture Step25: Step 5 - Conclusion In all three cities the vast majority of the green points lay under the fitted line. On Non-working days there is generally less usage than our fitted linear regression model (Temperature ~ Bike Count) predicts. For the forecasting (part 3 of this series) to be effective we will need to take weekdays and public holidays into account. Part 3 Step26: In the Part 2 of this series, we already loaded the actuals from the Bike Sharing cube into Python. We called the variable df_b. Before we can use this data to fit our Prophet model, we must make sure we arrange the data in a correct format. The dataframe that Prophet needs has two columns Step27: We use the tail() pandas function on our dataframe (df_nyc) to display the last 5 rows of data Step28: Step 2 Step29: Now we can fit our model, by executing the fit method on our model and passing the dataframe, that we arranged in step 1. Step30: This is where Prophet is actually doing all the hard work, the curve-fitting. 
Under the hood Prophet uses Stan to run the statistical calculations as fast as possible. Step 3 Step31: Then we use the predict function on our model. As the argument to that function, we pass the dataframe future. Step32: Done! The forecast is ready. Let's look at what Prophet predicted ! We select the following columns on the dataframe and print out the last 5 records Step33: Step 4 Step34: To get an even further understanding of our fitted model, we can plot each of the model components. This is shown in the plot below. In the top panel we see the linear growth term. This term contains changepoints (either determined independently by Prophet or preset by the user) so that the rate of growth is allowed to vary over time. The second panel shows the effect that public holidays have on our bike shares. The final two panels show the estimated yearly and weekly trends of the model Step35: Conclusion on this analysis Step36: Once our data set is ready, we use the TM1py function tm1.cubes.cells.write_values to send the data to our cube Bike Shares
Python Code: import configparser config = configparser.ConfigParser() config.read(r'..\..\config.ini') Explanation: Data science with IBM Planning Analytics Cubike example - Part 2 and 3 Welcome to the second part of the Data Science with TM1/Planning Analytics. In Part 1 , we uploaded in our TM1 cubes the weather data from a web service. Now that we have all the data we need in TM1, we can start to analyse it. The Python community provides lots of tools which make data science easy and fun. Why using Python for Data Science? Top-notch, free data analysis libraries. Free (and good) libraries to access data from systems or web. Lots of research in new disciplines like, Machine Learning (Tensorflow) or Natural Language Processing (NLTK). TM1 (as the Information Warehouse) is the ideal data source for Data Science Objective The objective of this article is to explore the impact of seasonal effects, weather and public holidays on our operative business. To do that we are going to follow these steps: Load and visualize monthly bike shares by city Explore seasonal and regional trends Analyze relationship between average temperatures and bike shares by day Analyze the impact of Non-working days vs. Working days. Step 1:Import TM1 config and librairies The first step is to define the TM1 connection settings which you find at the top of the notebook: End of explanation from copy import deepcopy from datetime import datetime # TM1py from TM1py.Services import TM1Service from TM1py.Utils import Utils # Data-Analysis Libraries import pandas as pd import numpy as np from scipy import stats # Ploting Libraries import matplotlib import matplotlib.pyplot as plt plt.style.use('ggplot') %matplotlib inline import plotly.offline as py import plotly.graph_objs as go py.init_notebook_mode() import plotly.tools as tls # Facebook's Prophet from fbprophet import Prophet # supress warnings import warnings warnings.filterwarnings('ignore') Explanation: Import all Python librairies we need in this example: * Pandas provides high-performance and easy-to-use data structures. * Numpy introduces (fast) vector based data types into python. * SciPy scientific computing in python (e.g. Linear Regression). * Matplotlib is a plotting library. * ploty a charting library to make interactive, publication-quality graphs online. * PyStan on Windows required for Prophet. * Prophet is a tool for producing high quality forecasts for time series data. * TM1py the python package for TM1. End of explanation tm1 = TM1Service(**config['tm1srv01']) Explanation: Step 2: Load, visualize 2017 monthly bike shares by city Establish Connection to TM1 with TM1py Instantiate the TM1 Service. Establish Connection to TM1 Model, that runs on AWS End of explanation cube_name = 'Bike Shares' view_name = '2017 Counts by Month' data = tm1.cubes.cells.execute_view( cube_name=cube_name, view_name=view_name, private=False) Explanation: Load 2017 Bike Shares by Month Before we start with the analysis, we first need to bring data from TM1 into our notebook. We start with data from the view 2017 Counts by Month of the Bike Shares cube. To load data from the cube view into python we execute the following code: Query data from View <span style="color:SteelBlue">2017 Counts by Month</span> from Cube <span style="color:SteelBlue">Bike Shares</span> Build DataFrame and rearrange content in DataFrame End of explanation df = Utils.build_pandas_dataframe_from_cellset( data, multiindex=False) Explanation: Our cellset, given back by TM1, is stored in the variable data. 
To convert this data set into a pandas dataframe, we use the TM1py function Utils.build_pandas_dataframe_from_cellset: End of explanation df['Values'] = df["Values"].replace(np.NaN, 0) for city in ('NYC', 'Chicago', 'Washington'): df[city] = df.apply(lambda row: row["Values"] if row["City"] == city else None, axis=1) df.drop(columns=["Values"], inplace=True) df = df.groupby("Date").sum() Explanation: Working with a pandas dataframe is much more convenient than working with a raw data set. A pandas dataframe comes with lots of features which will help us to manipulate data. We now need to rearrange the dataframe. We need to reduce the dimensionality of our data from 4 (Time, Version, City, Measure) to 2 (Time, City). This should make our life easier down the road. End of explanation df Explanation: To show the rearranged dataframe, we can just type df into a jupyter cell and execute it. End of explanation cities = ('NYC', 'Chicago', 'Washington') # define Data for plot data = [go.Bar(x=df.index, y=df[city].values, name=city) for city in cities] # define Layout. stack vs. group ! layout = go.Layout( barmode='stack', title="Bike Shares 2017" ) # plot fig = go.Figure(data=data, layout=layout) py.iplot(fig) Explanation: Plot Bike Shares by Month Let's plot a barchart from our dataframe, to explore and understand the monthly distributions throughout the cities, visually. Using the popular charting library Plotly, we can create an interactive barplot with just 7 lines of python code: End of explanation cube_name = 'Weather Data' view_name = '2014 to 2017 Average by Day' data = tm1.cubes.cells.execute_view( cube_name=cube_name, view_name=view_name, private=False) df_w = Utils.build_pandas_dataframe_from_cellset( cellset=data, multiindex=False) Explanation: Step 2 - Conclusion As expected, the seasons have a massive impact on our bike sharing business. In the warmer months we have substantially more usage than in colder months. Also interesting is that the seasons seem to impact cities differently. While the relation between Summer and Winter months in NYC and Washington DC is approximately 1/2, in Chicago it is close to 1/5! Let's dig deeper into the relationships between the cities and the Temperature! Step 3: Explore seasonal and regional trends As one would expect, the bike shares by day across our three cities are all strongly correlated. To analyze the relationship between the average temperature and the bike shares by city, we need to query the daily historic average temperatures from our TM1 cube into python. We execute the following code to create a dataframe (we call it: df_w) based on the cubeview 2014 to 2017 Average by Day of the Weather Data cube: End of explanation # Replace missing values with 0... df_w['Values'] = df_w["Values"].replace(np.NaN, 0) Explanation: TM1 returns missing values as NaN, to work with this data set in Python need to replace the NaN values with 0. End of explanation # Convert Date to pandas time df_w.Date = pd.to_datetime(df_w.Date) Explanation: We need now to convert the Date coming from our Date dimension into a date format that Pandas can understand. 
We use df_w.Date to focus only on the Date column: End of explanation # Rearrange Weather Data in DataFrame for city in ('NYC', 'Chicago', 'Washington'): df_w[city] = df_w.apply(lambda row: row["Values"] if row["City"] == city else None, axis=1) df_w.drop(columns=["Values"], inplace=True) df_w = df_w.groupby("Date").sum() Explanation: The last step is to rearrange the data: End of explanation df_w.head(10) Explanation: Let's print 10 sample records from the dataframe using df_w.sample(10): End of explanation trace_nyc = go.Scatter( x=df_w.index, y=df_w['NYC'], name = "NYC", line = dict(color = '#17BECF'), opacity = 0.8) trace_chicago = go.Scatter( x=df_w.index, y=df_w['Chicago'], name = "Chicago", line = dict(color = '#7F7F7F'), opacity = 0.8) trace_washington = go.Scatter( x=df_w.index, y=df_w['Washington'], name = "Washington", opacity = 0.8) data = [trace_nyc, trace_chicago, trace_washington] layout = dict( title = "Temperature by day by city", xaxis = dict( range = ['2017-01-01','2017-12-31']) ) fig = dict(data=data, layout=layout) py.iplot(fig, filename = "Manually Set Range") Explanation: Plot data End of explanation cube_name = 'Bike Shares' view_name = '2014 to 2017 Counts by Day' data = tm1.cubes.cells.execute_view(cube_name=cube_name, view_name=view_name, private=False) df_b = Utils.build_pandas_dataframe_from_cellset(data, multiindex=False) df_b['Values'] = df_b["Values"].replace(np.NaN, 0) Explanation: Load 2014 to 2017 Bike Shares from TM1 Cube Load number of Bike Shares by day from View <span style="color:SteelBlue">2014 to 2017 Counts By Day</span> from Cube <span style="color:SteelBlue">Bike Shares</span> Get the values from the 2014 to 2017 Counts by Day and replace the NaN with 0 values: End of explanation # Rearrange content in DataFrame for city in ('NYC', 'Chicago', 'Washington'): df_b[city] = df_b.apply(lambda row: row["Values"] if row["City"] == city else None, axis=1) df_b.drop(columns=["Values"], inplace=True) df_b = df_b.groupby("Date").sum() Explanation: Rearrange the data End of explanation df_b.sample(5) Explanation: Let's print 5 sample records from the dataframe using df_b.sample(5): End of explanation df_b.corr() Explanation: Correlation of Bike Shares between the Cities Pandas dataframes come with very handy and easy to use tools for data analysis. To calculate the correlation between the different columns (cities) in our dataframe, we can just call the corr function on our dataframe. df_b.corr(): End of explanation df_b.corrwith(df_w) Explanation: As one would expect, the bike shares by day across our three cities are all strongly correlated. To analyze the relationship between the average temperature and the bike shares by city, we need to query the daily historic average temperatures from our TM1 cube into python. We execute the following code to create a dataframe (we call it: df_w) based on the cubeview 2014 to 2017 Average by Day of the Weather Data cube: Correlation Between Temperature and Bike Shares by city Correlation between two DataFrames (df_b, df_w) that share the same index (Date) End of explanation cities = ('NYC', 'Chicago', 'Washington') colors = ( 'rgba(222, 167, 14, 0.5)','rgba(31, 156, 157, 0.5)', 'rgba(181, 77, 52, 0.5)') # Scatterplot per city data = [go.Scatter( x = df_w[city].values, y = df_b[city].values, mode = 'markers', marker = dict( color = color ), text= df_w.index, name=city )for (city, color) in zip (cities, colors)] # Plot and embed in jupyter notebook! 
py.iplot(data) Explanation: Step 3 - Conclusion Temperature and Bike shares are strongly correlated in every city. For the forecasting (part 3 of this series) to be effective we will need a model that can take seasonal effects (e.g. temperature) into account. The intensity with which temperature affects the bike shares varies by city. For the forecasting we will need to create different models by city. Step 4: Analyze relationship between average temperature and bike shares by day Let's visualize the relationship between temperature and bike shares in a Scatterplot. From our two dataframes: df_w (average temperature by day) and df_b (bike shares per day) we can create a scatterplot in just a few lines of code: End of explanation city = "NYC" Explanation: Step 4 - Conclusion Analyzing the plot visually we make a few statements: Among the three cities, the distribution in Chicago is the closest to a linear model . Judging visually, we could draw a neat line through that point cloud For Washington DC we can recognize an interseting trend, that for temperatures of approx. 25 degrees and higher the bike count stagnates. The distribution for NYC is less homogeneous. A simple linear model would not sufficiently explain the bike shares count. Let's quantify those finding and take a closer look at the how non-working days impact our operative business. Step 5: Analyze the impact of Non-working days vs. Working days. To analyze the impact of Public holidays and weekends, we will focus on one city at a time. Linear Regression First we want to create a linear regression between the average temperatures and the bike shares for NYC. End of explanation x, y = df_w[city].values, df_b[city].values slope, intercept, r_value, p_value, std_err = stats.linregress(x, y) print("y = %.2fx + (%.2f)" % (slope, intercept)) Explanation: To calculate the fitted line we use the linregress function from the popular Scipy stats module. Note that the function not only returns us the slope and intercept of the fitted line, but also three measures (R squared, P Value and the Standard Error), that quantify how well the fitted line matches the observations. End of explanation mdx = "{ FILTER ( { TM1SubsetAll([Date]) }, [Public Holidays].([City].[NYC]) = 1) }" public_holidays = tm1.dimensions.execute_mdx("Date", mdx) mdx = "{FILTER( {TM1SUBSETALL( [Date] )}, [Date].[Weekday] > '5')}" weekends = tm1.dimensions.execute_mdx("Date", mdx) non_work_days = public_holidays + weekends Explanation: Load Holidays and Weekends Now we need to query Public Holidays and Weekends from TM1 through two small MDX Queries, and merge them into a list. This list we call non_work_days. End of explanation working_days = go.Scatter( x = df_w[city].values, y = df_b[city].values, mode = 'markers', marker = dict(color = 'LightBlue'), text= df_w.index, name="Working Days" ) non_working_days = go.Scatter( x = df_w[city][df_w.index.isin(non_work_days)].values, y = df_b[city][df_w.index.isin(non_work_days)].values, mode = 'markers', marker = dict(color = 'green'), text= df_w[df_w.index.isin(non_work_days)].index, name="Non Working Days" ) line = go.Scatter( x = df_w[city].values, y = df_w[city].values * slope + intercept, mode = 'lines', marker = dict(color = 'orange'), name = 'Trendline' ) data = [working_days, non_working_days, line] layout = go.Layout(title=city) figure = go.Figure(data=data, layout=layout) py.iplot(figure) Explanation: Plot Scatterplot with Trendline How does Temperature impact our business? 
Scatterplot of Temperature against Bike Shares. Now we can create a new scatterplot, that includes the fitted line (orange), the working days (blue) and the non-working days (green). <b>Workingdays</b> in lightblue <b>Non workingdays</b> in green <b>Fitted Line</b> in orange End of explanation d = dict() for city in ("NYC", "Chicago", "Washington"): x, y = df_w[city].values, df_b[city].values slope, intercept, r_value, p_value, std_err = stats.linregress(x, y) d[city] = deepcopy((r_value**2, std_err, p_value)) pd.DataFrame(data=list(d.values()), columns=['R-Squared', 'Standard Error', 'P Value'], index=d.keys()) Explanation: When we repeat this exercise for Chicago and Washington DC we see a similar picture: The fitted line matches the points more (Chicago, Washington DC) or less (NYC) good and the majority of the green points lay underneath the fitted line. Quantify Goodness-of-Fit of the model <b>R Squared</b> Relative measure, how well the points match the line. Value of 1: All points are on the Line <b>Standard Error</b> Absolute measure of the typical distance that the data points fall from the regression line <b>P Value</b> Tests against Nullhypothesis: that the coefficient is equal to zero (no effect) End of explanation city = 'NYC' Explanation: Step 5 - Conclusion In all three cities the vast majority of the green points lay under the fitted line. On Non-working days there is generally less usage than our fitted linear regression model (Temperature ~ Bike Count) predicts. For the forecasting (part 3 of this series) to be effective we will need to take weekdays and public holidays into account. Part 3: Timeseries Forecasting Welcome to the last part of the articles series about Data Science with TM1/Planning Analytics and Python. In Part 1 we loaded weather data from the NOOA web service into our TM1 cubes. In Part 2, by analyzing the data with Pandas and Ploty, we've learned that There are strong seasonal trends throughout the year Public Holidays and weekends have a negative impact on the bike shares Temperature and Bike shares are strongly correlated in every city. The intensity with which temperature affects the bike shares varies by city. Washington DC is the city that is least affected by the weather. Objective In this article, we are going to explain how to use Facebook's Prophet to create a two year demand forecast for bike sharing, based on four years of actuals from our TM1 cube. Before we start with the implemenation let's quickly discuss what Prophet is. Prophet The idea behind the prophet package is to decompose a time series of data into the following three components: Trends: these are non-periodic and systematic trends in the data, Seasonal effects: these are modelled as daily or yearly periodicities in the data (optionally also hourly), and Holidays / one-off effects: one-off effects for days like: Black Friday, Christmas, etc. Based on our historic data, Prophet fits a model, where each of these components contribute additively to the observed time series. In other words, the number of bike shares on a given day is the sum of the trend component, the seasonal component and the one-off effects. 
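Written out generically (placeholder symbols, not Prophet's internal notation), that additive model is y(t) = g(t) + s(t) + h(t) + e(t), where g(t) is the trend term, s(t) the periodic seasonal term, h(t) the public-holiday effects and e(t) the leftover noise.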
We are going to focus on NYC: End of explanation holidays = pd.DataFrame({ 'holiday': 'Public Holidays', 'ds': pd.to_datetime(public_holidays), 'lower_window': 0, 'upper_window': 0, }) df_nyc = df_b[city].reset_index() df_nyc.rename(columns={'Date': 'ds', city: 'y'}, inplace=True) Explanation: In the Part 2 of this series, we already loaded the actuals from the Bike Sharing cube into Python. We called the variable df_b. Before we can use this data to fit our Prophet model, we must make sure we arrange the data in a correct format. The dataframe that Prophet needs has two columns: ds: dates y: numeric values We execute the following code to arrange our dataframe. End of explanation df_nyc.tail() Explanation: We use the tail() pandas function on our dataframe (df_nyc) to display the last 5 rows of data: End of explanation m = Prophet(holidays = holidays, daily_seasonality=False) Explanation: Step 2: Fitting the model Now that we have the data ready, and a high level understanding of the seasonal trends in our data, we are ready to fit our model! First we need to instantiate Prophet. We are passing two arguments to the constructor of the Prophet model: The public holidays that we want Prophet to take into account (they come from a TM1 cube through MDX. More details in the Jupyter notebook) Whether or not Prophet should model intraday seasonality End of explanation m.fit(df_nyc); Explanation: Now we can fit our model, by executing the fit method on our model and passing the dataframe, that we arranged in step 1. End of explanation future = m.make_future_dataframe(periods=365*2) Explanation: This is where Prophet is actually doing all the hard work, the curve-fitting. Under the hood Prophet uses Stan to run the statistical calculations as fast as possible. Step 3: Use Facebook's Prophet to forecast the next 2 years We can use the fitted Prophet model, to predict values for the future. First we need to specify how many days we would like to forecast forward. This code block creates a dataframe with the sized window of future dates. End of explanation forecast = m.predict(future) Explanation: Then we use the predict function on our model. As the argument to that function, we pass the dataframe future. End of explanation forecast[['ds', 'yhat', 'yhat_lower', 'yhat_upper']].tail() Explanation: Done! The forecast is ready. Let's look at what Prophet predicted ! We select the following columns on the dataframe and print out the last 5 records:: ds (the date) yhat (the predicted value) yhat_lower (the lower bound of the confidence interval) yhat_upper (the upper bound of the confidence interval) The following code is going to print the last 5 records: End of explanation pd.plotting.register_matplotlib_converters() m.plot(forecast); Explanation: Step 4: Analysing the forecast We can interrogate the model a bit to understand what it is doing. The best way to do this is to see how the model fits existing data and what the forecast looks like. This is shown in the plot below. The black dots correspond to the historic number of bike shares each day (2014-2018). The dark blue line represents the estimated number of shares, projected with the fitted model. The light blue lines correspond to the 80% confidence interval for the models predictions. Judging visually, the model has done a good job of picking up the yearly seasonality and the overall trend. The forecast for 2019 and 2020 looks plausible! 
End of explanation m.plot_components(forecast); Explanation: To get an even further understanding of our fitted model, we can plot each of the model components. This is shown in the plot below. In the top panel we see the linear growth term. This term contains changepoints (either determined independently by Prophet or preset by the user) so that the rate of growth is allowed to vary over time. The second panel shows the effect that public holidays have on our bike shares. The final two panels show the estimated yearly and weekly trends of the model: End of explanation cells = {} for index, row in forecast.iterrows(): date = str(row['ds'])[0:10] cells['Prophet Forecast', date, city, 'Count'] = round(row['yhat']) cells['Prophet Forecast', date, city, 'Count Lower'] = round(row['yhat_lower']) cells['Prophet Forecast', date, city, 'Count Upper'] = round(row['yhat_upper']) Explanation: Conclusion on this analysis: An overall global trend of growth from 2015, that slowed down slightly after 2016. Public holidays lead to a fall in the usage of the bikes A strong weekly seasonality: Our bikes are used mostly during the week – presumably for commuting. A strong yearly seasonality with a peak in summer/ automn and a drop in winter. Step 5: The last step is to send the data back to TM1 Before sending the data back to TM1, we need to rearrange the data so it matches the dimensions in our cube: Version: Prophet Forecast Date: date City: city Bike Shares Measures: Count for yhat Count Lower for yhat_lower Count Upper for yhat_upper To rearrange the data for TM1 we execute the following code. End of explanation tm1.cubes.cells.write_values('Bike Shares', cells) Explanation: Once our data set is ready, we use the TM1py function tm1.cubes.cells.write_values to send the data to our cube Bike Shares: End of explanation
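One general point about the write-back, stated as a reminder rather than anything specific to this model: tm1.cubes.cells.write_values expects a dictionary whose keys are tuples of element names in the cube's dimension order (here Version, Date, City, Bike Shares Measures), exactly as the cells dictionary above is built. A single illustrative entry (the date and the number below are made-up values) would look like cells[('Prophet Forecast', '2018-12-31', 'NYC', 'Count')] = 1234.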
5,077
Given the following text description, write Python code to implement the functionality described below step by step Description: ES-DOC CMIP6 Model Properties - Landice MIP Era Step1: Document Authors Set document authors Step2: Document Contributors Specify document contributors Step3: Document Publication Specify document publication status Step4: Document Table of Contents 1. Key Properties 2. Key Properties --> Software Properties 3. Grid 4. Glaciers 5. Ice 6. Ice --> Mass Balance 7. Ice --> Mass Balance --> Basal 8. Ice --> Mass Balance --> Frontal 9. Ice --> Dynamics 1. Key Properties Land ice key properties 1.1. Overview Is Required Step5: 1.2. Model Name Is Required Step6: 1.3. Ice Albedo Is Required Step7: 1.4. Atmospheric Coupling Variables Is Required Step8: 1.5. Oceanic Coupling Variables Is Required Step9: 1.6. Prognostic Variables Is Required Step10: 2. Key Properties --> Software Properties Software properties of land ice code 2.1. Repository Is Required Step11: 2.2. Code Version Is Required Step12: 2.3. Code Languages Is Required Step13: 3. Grid Land ice grid 3.1. Overview Is Required Step14: 3.2. Adaptive Grid Is Required Step15: 3.3. Base Resolution Is Required Step16: 3.4. Resolution Limit Is Required Step17: 3.5. Projection Is Required Step18: 4. Glaciers Land ice glaciers 4.1. Overview Is Required Step19: 4.2. Description Is Required Step20: 4.3. Dynamic Areal Extent Is Required Step21: 5. Ice Ice sheet and ice shelf 5.1. Overview Is Required Step22: 5.2. Grounding Line Method Is Required Step23: 5.3. Ice Sheet Is Required Step24: 5.4. Ice Shelf Is Required Step25: 6. Ice --> Mass Balance Description of the surface mass balance treatment 6.1. Surface Mass Balance Is Required Step26: 7. Ice --> Mass Balance --> Basal Description of basal melting 7.1. Bedrock Is Required Step27: 7.2. Ocean Is Required Step28: 8. Ice --> Mass Balance --> Frontal Description of calving/melting from the ice shelf front 8.1. Calving Is Required Step29: 8.2. Melting Is Required Step30: 9. Ice --> Dynamics 9.1. Description Is Required Step31: 9.2. Approximation Is Required Step32: 9.3. Adaptive Timestep Is Required Step33: 9.4. Timestep Is Required
Python Code: # DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'inpe', 'sandbox-1', 'landice') Explanation: ES-DOC CMIP6 Model Properties - Landice MIP Era: CMIP6 Institute: INPE Source ID: SANDBOX-1 Topic: Landice Sub-Topics: Glaciers, Ice. Properties: 30 (21 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:54:06 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) Explanation: Document Authors Set document authors End of explanation # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) Explanation: Document Contributors Specify document contributors End of explanation # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) Explanation: Document Publication Specify document publication status End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Software Properties 3. Grid 4. Glaciers 5. Ice 6. Ice --&gt; Mass Balance 7. Ice --&gt; Mass Balance --&gt; Basal 8. Ice --&gt; Mass Balance --&gt; Frontal 9. Ice --&gt; Dynamics 1. Key Properties Land ice key properties 1.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of land surface model. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of land surface model code End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.ice_albedo') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "prescribed" # "function of ice age" # "function of ice density" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.3. Ice Albedo Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Specify how ice albedo is modelled End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.4. Atmospheric Coupling Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Which variables are passed between the atmosphere and ice (e.g. orography, ice mass) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.5. Oceanic Coupling Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Which variables are passed between the ocean and ice End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.landice.key_properties.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "ice velocity" # "ice thickness" # "ice temperature" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.6. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Which variables are prognostically calculated in the ice model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2. Key Properties --&gt; Software Properties Software properties of land ice code 2.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.grid.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3. Grid Land ice grid 3.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of the grid in the land ice scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.grid.adaptive_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 3.2. Adaptive Grid Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is an adative grid being used? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.grid.base_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 3.3. Base Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The base resolution (in metres), before any adaption End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.grid.resolution_limit') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 3.4. Resolution Limit Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If an adaptive grid is being used, what is the limit of the resolution (in metres) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.grid.projection') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.5. Projection Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The projection of the land ice grid (e.g. 
albers_equal_area) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.glaciers.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4. Glaciers Land ice glaciers 4.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of glaciers in the land ice scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.glaciers.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4.2. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the treatment of glaciers, if any End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 4.3. Dynamic Areal Extent Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Does the model include a dynamic glacial extent? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5. Ice Ice sheet and ice shelf 5.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of the ice sheet and ice shelf in the land ice scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.grounding_line_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "grounding line prescribed" # "flux prescribed (Schoof)" # "fixed grid size" # "moving grid" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 5.2. Grounding Line Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.ice_sheet') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 5.3. Ice Sheet Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Are ice sheets simulated? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.ice_shelf') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 5.4. Ice Shelf Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Are ice shelves simulated? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6. Ice --&gt; Mass Balance Description of the surface mass balance treatment 6.1. Surface Mass Balance Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how and where the surface mass balance (SMB) is calulated. Include the temporal coupling frequeny from the atmosphere, whether or not a seperate SMB model is used, and if so details of this model, such as its resolution End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7. Ice --&gt; Mass Balance --&gt; Basal Description of basal melting 7.1. Bedrock Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the implementation of basal melting over bedrock End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.2. Ocean Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the implementation of basal melting over the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8. Ice --&gt; Mass Balance --&gt; Frontal Description of claving/melting from the ice shelf front 8.1. Calving Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the implementation of calving from the front of the ice shelf End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.2. Melting Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the implementation of melting from the front of the ice shelf End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.dynamics.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9. Ice --&gt; Dynamics ** 9.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description if ice sheet and ice shelf dynamics End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.dynamics.approximation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "SIA" # "SAA" # "full stokes" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 9.2. Approximation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Approximation type used in modelling ice dynamics End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 9.3. Adaptive Timestep Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there an adaptive time scheme for the ice scheme? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.landice.ice.dynamics.timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 9.4. Timestep Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Timestep (in seconds) of the ice scheme. If the timestep is adaptive, then state a representative timestep. End of explanation
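For readers unsure how these TODO cells are meant to be completed: each one is filled in by replacing the commented DOC.set_value call with a real value of the stated type and cardinality. A purely illustrative example for the integer-typed timestep property (the number below is a placeholder, not an actual property of this model) would be:
# Illustrative only -- substitute the model's real ice-dynamics timestep in seconds.
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
DOC.set_value(3600)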
5,078
Given the following text description, write Python code to implement the functionality described below step by step Description: Step1: Modular neural nets In the previous exercise, we computed the loss and gradient for a two-layer neural network in a single monolithic function. This isn't very difficult for a small two-layer network, but would be tedious and error-prone for larger networks. Ideally we want to build networks using a more modular design so that we can snap together different types of layers and loss functions in order to quickly experiment with different architectures. In this exercise we will implement this approach, and develop a number of different layer types in isolation that can then be easily plugged together. For each layer we will implement forward and backward functions. The forward function will receive data, weights, and other parameters, and will return both an output and a cache object that stores data needed for the backward pass. The backward function will recieve upstream derivatives and the cache object, and will return gradients with respect to the data and all of the weights. This will allow us to write code that looks like this Step2: Affine layer Step3: Affine layer Step4: ReLU layer Step5: ReLU layer Step6: Loss layers Step7: Convolution layer Step9: Aside Step10: Convolution layer Step11: Max pooling layer Step12: Max pooling layer Step13: Fast layers Making convolution and pooling layers fast can be challenging. To spare you the pain, we've provided fast implementations of the forward and backward passes for convolution and pooling layers in the file cs231n/fast_layers.py. The fast convolution implementation depends on a Cython extension; to compile it you need to run the following from the cs231n directory Step14: Sandwich layers There are a couple common layer "sandwiches" that frequently appear in ConvNets. For example convolutional layers are frequently followed by ReLU and pooling, and affine layers are frequently followed by ReLU. To make it more convenient to use these common patterns, we have defined several convenience layers in the file cs231n/layer_utils.py. Lets grad-check them to make sure that they work correctly
Python Code: # As usual, a bit of setup import numpy as np import matplotlib.pyplot as plt from cs231n.gradient_check import eval_numerical_gradient_array, eval_numerical_gradient from cs231n.layers import * %matplotlib inline plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'gray' # for auto-reloading external modules # see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython %load_ext autoreload %autoreload 2 def rel_error(x, y): returns relative error return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y)))) Explanation: Modular neural nets In the previous exercise, we computed the loss and gradient for a two-layer neural network in a single monolithic function. This isn't very difficult for a small two-layer network, but would be tedious and error-prone for larger networks. Ideally we want to build networks using a more modular design so that we can snap together different types of layers and loss functions in order to quickly experiment with different architectures. In this exercise we will implement this approach, and develop a number of different layer types in isolation that can then be easily plugged together. For each layer we will implement forward and backward functions. The forward function will receive data, weights, and other parameters, and will return both an output and a cache object that stores data needed for the backward pass. The backward function will recieve upstream derivatives and the cache object, and will return gradients with respect to the data and all of the weights. This will allow us to write code that looks like this: ```python def two_layer_net(X, W1, b1, W2, b2, reg): # Forward pass; compute scores s1, fc1_cache = affine_forward(X, W1, b1) a1, relu_cache = relu_forward(s1) scores, fc2_cache = affine_forward(a1, W2, b2) # Loss functions return data loss and gradients on scores data_loss, dscores = svm_loss(scores, y) # Compute backward pass da1, dW2, db2 = affine_backward(dscores, fc2_cache) ds1 = relu_backward(da1, relu_cache) dX, dW1, db1 = affine_backward(ds1, fc1_cache) # A real network would add regularization here # Return loss and gradients return loss, dW1, db1, dW2, db2 ``` End of explanation # Test the affine_forward function num_inputs = 2 input_shape = (4, 5, 6) output_dim = 3 input_size = num_inputs * np.prod(input_shape) weight_size = output_dim * np.prod(input_shape) x = np.linspace(-0.1, 0.5, num=input_size).reshape(num_inputs, *input_shape) w = np.linspace(-0.2, 0.3, num=weight_size).reshape(np.prod(input_shape), output_dim) b = np.linspace(-0.3, 0.1, num=output_dim) out, _ = affine_forward(x, w, b) correct_out = np.array([[ 1.49834967, 1.70660132, 1.91485297], [ 3.25553199, 3.5141327, 3.77273342]]) # Compare your output with ours. The error should be around 1e-9. print 'Testing affine_forward function:' print 'difference: ', rel_error(out, correct_out) Explanation: Affine layer: forward Open the file cs231n/layers.py and implement the affine_forward function. 
Once you are done we will test your can test your implementation by running the following: End of explanation # Test the affine_backward function x = np.random.randn(10, 2, 3) w = np.random.randn(6, 5) b = np.random.randn(5) dout = np.random.randn(10, 5) dx_num = eval_numerical_gradient_array(lambda x: affine_forward(x, w, b)[0], x, dout) dw_num = eval_numerical_gradient_array(lambda w: affine_forward(x, w, b)[0], w, dout) db_num = eval_numerical_gradient_array(lambda b: affine_forward(x, w, b)[0], b, dout) _, cache = affine_forward(x, w, b) dx, dw, db = affine_backward(dout, cache) # The error should be less than 1e-10 print 'Testing affine_backward function:' print 'dx error: ', rel_error(dx_num, dx) print 'dw error: ', rel_error(dw_num, dw) print 'db error: ', rel_error(db_num, db) Explanation: Affine layer: backward Now implement the affine_backward function. You can test your implementation using numeric gradient checking. End of explanation # Test the relu_forward function x = np.linspace(-0.5, 0.5, num=12).reshape(3, 4) out, _ = relu_forward(x) correct_out = np.array([[ 0., 0., 0., 0., ], [ 0., 0., 0.04545455, 0.13636364,], [ 0.22727273, 0.31818182, 0.40909091, 0.5, ]]) # Compare your output with ours. The error should be around 1e-8 print 'Testing relu_forward function:' print 'difference: ', rel_error(out, correct_out) Explanation: ReLU layer: forward Implement the relu_forward function and test your implementation by running the following: End of explanation x = np.random.randn(10, 10) dout = np.random.randn(*x.shape) dx_num = eval_numerical_gradient_array(lambda x: relu_forward(x)[0], x, dout) _, cache = relu_forward(x) dx = relu_backward(dout, cache) # The error should be around 1e-12 print 'Testing relu_backward function:' print 'dx error: ', rel_error(dx_num, dx) Explanation: ReLU layer: backward Implement the relu_backward function and test your implementation using numeric gradient checking: End of explanation num_classes, num_inputs = 10, 50 x = 0.001 * np.random.randn(num_inputs, num_classes) y = np.random.randint(num_classes, size=num_inputs) dx_num = eval_numerical_gradient(lambda x: svm_loss(x, y)[0], x, verbose=False) loss, dx = svm_loss(x, y) # Test svm_loss function. Loss should be around 9 and dx error should be 1e-9 print 'Testing svm_loss:' print 'loss: ', loss print 'dx error: ', rel_error(dx_num, dx) dx_num = eval_numerical_gradient(lambda x: softmax_loss(x, y)[0], x, verbose=False) loss, dx = softmax_loss(x, y) # Test softmax_loss function. Loss should be 2.3 and dx error should be 1e-8 print '\nTesting softmax_loss:' print 'loss: ', loss print 'dx error: ', rel_error(dx_num, dx) Explanation: Loss layers: Softmax and SVM You implemented these loss functions in the last assignment, so we'll give them to you for free here. It's still a good idea to test them to make sure they work correctly. 
End of explanation x_shape = (2, 3, 4, 4) w_shape = (3, 3, 4, 4) x = np.linspace(-0.1, 0.5, num=np.prod(x_shape)).reshape(x_shape) w = np.linspace(-0.2, 0.3, num=np.prod(w_shape)).reshape(w_shape) b = np.linspace(-0.1, 0.2, num=3) conv_param = {'stride': 2, 'pad': 1} out, _ = conv_forward_naive(x, w, b, conv_param) correct_out = np.array([[[[[-0.08759809, -0.10987781], [-0.18387192, -0.2109216 ]], [[ 0.21027089, 0.21661097], [ 0.22847626, 0.23004637]], [[ 0.50813986, 0.54309974], [ 0.64082444, 0.67101435]]], [[[-0.98053589, -1.03143541], [-1.19128892, -1.24695841]], [[ 0.69108355, 0.66880383], [ 0.59480972, 0.56776003]], [[ 2.36270298, 2.36904306], [ 2.38090835, 2.38247847]]]]]) # Compare your output to ours; difference should be around 1e-8 print 'Testing conv_forward_naive' print 'difference: ', rel_error(out, correct_out) Explanation: Convolution layer: forward naive We are now ready to implement the forward pass for a convolutional layer. Implement the function conv_forward_naive in the file cs231n/layers.py. You don't have to worry too much about efficiency at this point; just write the code in whatever way you find most clear. You can test your implementation by running the following: End of explanation from scipy.misc import imread, imresize kitten, puppy = imread('kitten.jpg'), imread('puppy.jpg') # kitten is wide, and puppy is already square d = kitten.shape[1] - kitten.shape[0] kitten_cropped = kitten[:, d/2:-d/2, :] img_size = 200 # Make this smaller if it runs too slow x = np.zeros((2, 3, img_size, img_size)) x[0, :, :, :] = imresize(puppy, (img_size, img_size)).transpose((2, 0, 1)) x[1, :, :, :] = imresize(kitten_cropped, (img_size, img_size)).transpose((2, 0, 1)) # Set up a convolutional weights holding 2 filters, each 3x3 w = np.zeros((2, 3, 3, 3)) # The first filter converts the image to grayscale. # Set up the red, green, and blue channels of the filter. w[0, 0, :, :] = [[0, 0, 0], [0, 0.3, 0], [0, 0, 0]] w[0, 1, :, :] = [[0, 0, 0], [0, 0.6, 0], [0, 0, 0]] w[0, 2, :, :] = [[0, 0, 0], [0, 0.1, 0], [0, 0, 0]] # Second filter detects horizontal edges in the blue channel. w[1, 2, :, :] = [[1, 2, 1], [0, 0, 0], [-1, -2, -1]] # Vector of biases. We don't need any bias for the grayscale # filter, but for the edge detection filter we want to add 128 # to each output so that nothing is negative. b = np.array([0, 128]) # Compute the result of convolving each input in x with each filter in w, # offsetting by b, and storing the results in out. 
out, _ = conv_forward_naive(x, w, b, {'stride': 1, 'pad': 1}) def imshow_noax(img, normalize=True): Tiny helper to show images as uint8 and remove axis labels if normalize: img_max, img_min = np.max(img), np.min(img) img = 255.0 * (img - img_min) / (img_max - img_min) plt.imshow(img.astype('uint8')) plt.gca().axis('off') # Show the original images and the results of the conv operation plt.subplot(2, 3, 1) imshow_noax(puppy, normalize=False) plt.title('Original image') plt.subplot(2, 3, 2) imshow_noax(out[0, 0]) plt.title('Grayscale') plt.subplot(2, 3, 3) imshow_noax(out[0, 1]) plt.title('Edges') plt.subplot(2, 3, 4) imshow_noax(kitten_cropped, normalize=False) plt.subplot(2, 3, 5) imshow_noax(out[1, 0]) plt.subplot(2, 3, 6) imshow_noax(out[1, 1]) plt.show() Explanation: Aside: Image processing via convolutions As fun way to both check your implementation and gain a better understanding of the type of operation that convolutional layers can perform, we will set up an input containing two images and manually set up filters that perform common image processing operations (grayscale conversion and edge detection). The convolution forward pass will apply these operations to each of the input images. We can then visualize the results as a sanity check. End of explanation x = np.random.randn(4, 3, 5, 5) w = np.random.randn(2, 3, 3, 3) b = np.random.randn(2,) dout = np.random.randn(4, 2, 5, 5) conv_param = {'stride': 1, 'pad': 1} dx_num = eval_numerical_gradient_array(lambda x: conv_forward_naive(x, w, b, conv_param)[0], x, dout) dw_num = eval_numerical_gradient_array(lambda w: conv_forward_naive(x, w, b, conv_param)[0], w, dout) db_num = eval_numerical_gradient_array(lambda b: conv_forward_naive(x, w, b, conv_param)[0], b, dout) out, cache = conv_forward_naive(x, w, b, conv_param) dx, dw, db = conv_backward_naive(dout, cache) # Your errors should be around 1e-9' print 'Testing conv_backward_naive function' print 'dx error: ', rel_error(dx, dx_num) print 'dw error: ', rel_error(dw, dw_num) print 'db error: ', rel_error(db, db_num) Explanation: Convolution layer: backward naive Next you need to implement the function conv_backward_naive in the file cs231n/layers.py. As usual, we will check your implementation with numeric gradient checking. End of explanation x_shape = (2, 3, 4, 4) x = np.linspace(-0.3, 0.4, num=np.prod(x_shape)).reshape(x_shape) pool_param = {'pool_width': 2, 'pool_height': 2, 'stride': 2} out, _ = max_pool_forward_naive(x, pool_param) correct_out = np.array([[[[-0.26315789, -0.24842105], [-0.20421053, -0.18947368]], [[-0.14526316, -0.13052632], [-0.08631579, -0.07157895]], [[-0.02736842, -0.01263158], [ 0.03157895, 0.04631579]]], [[[ 0.09052632, 0.10526316], [ 0.14947368, 0.16421053]], [[ 0.20842105, 0.22315789], [ 0.26736842, 0.28210526]], [[ 0.32631579, 0.34105263], [ 0.38526316, 0.4 ]]]]) # Compare your output with ours. Difference should be around 1e-8. print 'Testing max_pool_forward_naive function:' print 'difference: ', rel_error(out, correct_out) Explanation: Max pooling layer: forward naive The last layer we need for a basic convolutional neural network is the max pooling layer. First implement the forward pass in the function max_pool_forward_naive in the file cs231n/layers.py. 
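As a reminder of what the numeric checks in the next cells are built on (a standard technique, not anything specific to this assignment): the gradient estimates come from centered differences, df/dx ≈ (f(x + h) - f(x - h)) / (2 * h) for a small h, combined with the upstream derivative dout via the chain rule, and rel_error then compares that estimate against the analytic gradients returned by your backward pass.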
End of explanation x = np.random.randn(3, 2, 8, 8) dout = np.random.randn(3, 2, 4, 4) pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2} dx_num = eval_numerical_gradient_array(lambda x: max_pool_forward_naive(x, pool_param)[0], x, dout) out, cache = max_pool_forward_naive(x, pool_param) dx = max_pool_backward_naive(dout, cache) # Your error should be around 1e-12 print 'Testing max_pool_backward_naive function:' print 'dx error: ', rel_error(dx, dx_num) Explanation: Max pooling layer: backward naive Implement the backward pass for a max pooling layer in the function max_pool_backward_naive in the file cs231n/layers.py. As always we check the correctness of the backward pass using numerical gradient checking. End of explanation from cs231n.fast_layers import conv_forward_fast, conv_backward_fast from time import time x = np.random.randn(100, 3, 31, 31) w = np.random.randn(25, 3, 3, 3) b = np.random.randn(25,) dout = np.random.randn(100, 25, 16, 16) conv_param = {'stride': 2, 'pad': 1} t0 = time() out_naive, cache_naive = conv_forward_naive(x, w, b, conv_param) t1 = time() out_fast, cache_fast = conv_forward_fast(x, w, b, conv_param) t2 = time() print 'Testing conv_forward_fast:' print 'Naive: %fs' % (t1 - t0) print 'Fast: %fs' % (t2 - t1) print 'Speedup: %fx' % ((t1 - t0) / (t2 - t1)) print 'Difference: ', rel_error(out_naive, out_fast) t0 = time() dx_naive, dw_naive, db_naive = conv_backward_naive(dout, cache_naive) t1 = time() dx_fast, dw_fast, db_fast = conv_backward_fast(dout, cache_fast) t2 = time() print '\nTesting conv_backward_fast:' print 'Naive: %fs' % (t1 - t0) print 'Fast: %fs' % (t2 - t1) print 'Speedup: %fx' % ((t1 - t0) / (t2 - t1)) print 'dx difference: ', rel_error(dx_naive, dx_fast) print 'dw difference: ', rel_error(dw_naive, dw_fast) print 'db difference: ', rel_error(db_naive, db_fast) from cs231n.fast_layers import max_pool_forward_fast, max_pool_backward_fast x = np.random.randn(100, 3, 32, 32) dout = np.random.randn(100, 3, 16, 16) pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2} t0 = time() out_naive, cache_naive = max_pool_forward_naive(x, pool_param) t1 = time() out_fast, cache_fast = max_pool_forward_fast(x, pool_param) t2 = time() print 'Testing pool_forward_fast:' print 'Naive: %fs' % (t1 - t0) print 'fast: %fs' % (t2 - t1) print 'speedup: %fx' % ((t1 - t0) / (t2 - t1)) print 'difference: ', rel_error(out_naive, out_fast) t0 = time() dx_naive = max_pool_backward_naive(dout, cache_naive) t1 = time() dx_fast = max_pool_backward_fast(dout, cache_fast) t2 = time() print '\nTesting pool_backward_fast:' print 'Naive: %fs' % (t1 - t0) print 'speedup: %fx' % ((t1 - t0) / (t2 - t1)) print 'dx difference: ', rel_error(dx_naive, dx_fast) Explanation: Fast layers Making convolution and pooling layers fast can be challenging. To spare you the pain, we've provided fast implementations of the forward and backward passes for convolution and pooling layers in the file cs231n/fast_layers.py. The fast convolution implementation depends on a Cython extension; to compile it you need to run the following from the cs231n directory: bash python setup.py build_ext --inplace The API for the fast versions of the convolution and pooling layers is exactly the same as the naive versions that you implemented above: the forward pass receives data, weights, and parameters and produces outputs and a cache object; the backward pass recieves upstream derivatives and the cache object and produces gradients with respect to the data and weights. 
NOTE: The fast implementation for pooling will only perform optimally if the pooling regions are non-overlapping and tile the input. If these conditions are not met then the fast pooling implementation will not be much faster than the naive implementation. You can compare the performance of the naive and fast versions of these layers by running the following: End of explanation from cs231n.layer_utils import conv_relu_pool_forward, conv_relu_pool_backward x = np.random.randn(2, 3, 16, 16) w = np.random.randn(3, 3, 3, 3) b = np.random.randn(3,) dout = np.random.randn(2, 3, 8, 8) conv_param = {'stride': 1, 'pad': 1} pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2} out, cache = conv_relu_pool_forward(x, w, b, conv_param, pool_param) dx, dw, db = conv_relu_pool_backward(dout, cache) dx_num = eval_numerical_gradient_array(lambda x: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], x, dout) dw_num = eval_numerical_gradient_array(lambda w: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], w, dout) db_num = eval_numerical_gradient_array(lambda b: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], b, dout) print 'Testing conv_relu_pool_forward:' print 'dx error: ', rel_error(dx_num, dx) print 'dw error: ', rel_error(dw_num, dw) print 'db error: ', rel_error(db_num, db) from cs231n.layer_utils import conv_relu_forward, conv_relu_backward x = np.random.randn(2, 3, 8, 8) w = np.random.randn(3, 3, 3, 3) b = np.random.randn(3,) dout = np.random.randn(2, 3, 8, 8) conv_param = {'stride': 1, 'pad': 1} out, cache = conv_relu_forward(x, w, b, conv_param) dx, dw, db = conv_relu_backward(dout, cache) dx_num = eval_numerical_gradient_array(lambda x: conv_relu_forward(x, w, b, conv_param)[0], x, dout) dw_num = eval_numerical_gradient_array(lambda w: conv_relu_forward(x, w, b, conv_param)[0], w, dout) db_num = eval_numerical_gradient_array(lambda b: conv_relu_forward(x, w, b, conv_param)[0], b, dout) print 'Testing conv_relu_forward:' print 'dx error: ', rel_error(dx_num, dx) print 'dw error: ', rel_error(dw_num, dw) print 'db error: ', rel_error(db_num, db) from cs231n.layer_utils import affine_relu_forward, affine_relu_backward x = np.random.randn(2, 3, 4) w = np.random.randn(12, 10) b = np.random.randn(10) dout = np.random.randn(2, 10) out, cache = affine_relu_forward(x, w, b) dx, dw, db = affine_relu_backward(dout, cache) dx_num = eval_numerical_gradient_array(lambda x: affine_relu_forward(x, w, b)[0], x, dout) dw_num = eval_numerical_gradient_array(lambda w: affine_relu_forward(x, w, b)[0], w, dout) db_num = eval_numerical_gradient_array(lambda b: affine_relu_forward(x, w, b)[0], b, dout) print 'Testing affine_relu_forward:' print 'dx error: ', rel_error(dx_num, dx) print 'dw error: ', rel_error(dw_num, dw) print 'db error: ', rel_error(db_num, db) Explanation: Sandwich layers There are a couple common layer "sandwiches" that frequently appear in ConvNets. For example convolutional layers are frequently followed by ReLU and pooling, and affine layers are frequently followed by ReLU. To make it more convenient to use these common patterns, we have defined several convenience layers in the file cs231n/layer_utils.py. Lets grad-check them to make sure that they work correctly: End of explanation
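One more easy sanity check for the naive implementations above, stated here as a general reminder (standard convolution arithmetic): with input height H, filter height HH, padding pad and stride, the convolution output height is 1 + (H + 2 * pad - HH) / stride, and for max pooling it is 1 + (H - pool_height) / stride; the same formulas apply to the width. For example, the (100, 25, 16, 16) output in the speed test above follows from 1 + (31 + 2 * 1 - 3) / 2 = 16. If the shapes coming out of your forward passes do not match these, the tests will typically error out on a shape mismatch before any numerical comparison happens.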
5,079
Given the following text description, write Python code to implement the functionality described below step by step Description: Hello, TensorFlow A beginner-level, getting started, basic introduction to TensorFlow TensorFlow is a general-purpose system for graph-based computation. A typical use is machine learning. In this notebook, we'll introduce the basic concepts of TensorFlow using some simple examples. TensorFlow gets its name from tensors, which are arrays of arbitrary dimensionality. A vector is a 1-d array and is known as a 1st-order tensor. A matrix is a 2-d array and a 2nd-order tensor. The "flow" part of the name refers to computation flowing through a graph. Training and inference in a neural network, for example, involves the propagation of matrix computations through many nodes in a computational graph. When you think of doing things in TensorFlow, you might want to think of creating tensors (like matrices), adding operations (that output other tensors), and then executing the computation (running the computational graph). In particular, it's important to realize that when you add an operation on tensors, it doesn't execute immediately. Rather, TensorFlow waits for you to define all the operations you want to perform. Then, TensorFlow optimizes the computation graph, deciding how to execute the computation, before generating the data. Because of this, a tensor in TensorFlow isn't so much holding the data as a placeholder for holding the data, waiting for the data to arrive when a computation is executed. Adding two vectors in TensorFlow Let's start with something that should be simple. Let's add two length four vectors (two 1st-order tensors) Step1: What we're doing is creating two vectors, [1.0, 1.0, 1.0, 1.0] and [2.0, 2.0, 2.0, 2.0], and then adding them. Here's equivalent code in raw Python and using numpy Step2: Details of adding two vectors in TensorFlow The example above of adding two vectors involves a lot more than it seems, so let's look at it in more depth. import tensorflow as tf This import brings TensorFlow's public API into our IPython runtime environment. with tf.Session() Step3: This version uses constant in a way similar to numpy's fill, specifying the optional shape and having the values copied out across it. The add operator supports operator overloading, so you could try writing it inline as input1 + input2 instead as well as experimenting with other operators. Step4: Adding two matrices Next, let's do something very similar, adding two matrices Step5: Recall that you can pass numpy or Python arrays into constant. In this example, the matrix with values from 1 to 6 is created in numpy and passed into constant, but TensorFlow also has range, reshape, and tofloat operators. Doing this entirely within TensorFlow could be more efficient if this was a very large matrix. Try experimenting with this code a bit -- maybe modifying some of the values, using the numpy version, doing this using, adding another operation, or doing this using TensorFlow's range function. Multiplying matrices Let's move on to matrix multiplication. This time, let's use a bit vector and some random values, which is a good step toward some of what we'll need to do for regression and neural networks. Step6: Above, we're taking a 1 x 4 vector [1 0 0 1] and multiplying it by a 4 by 2 matrix full of random values from a normal distribution (mean 0, stdev 1). The output is a 1 x 2 matrix. You might try modifying this example. 
Running the cell multiple times will generate new random weights and a new output. Or, change the input (e.g., to [0 0 0 1]) and run the cell again. Or, try initializing the weights using the TensorFlow op, e.g., random_normal, instead of using numpy to generate the random weights. What we have here is the basics of a simple neural network already. If we read in the input features along with some expected output, and change the weights based on the error in the output each time, that's a neural network. Use of variables Let's look at adding two small matrices in a loop, not by creating new tensors every time, but by updating the existing values and then re-running the computation graph on the new data. This happens a lot with machine learning models, where we change some parameters each time, such as taking a gradient descent step on some weights, and then perform the same computations over and over again.
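To make "change the weights based on the error" concrete before looking at the TensorFlow code, here is a plain-numpy sketch of that update loop for a single linear neuron. The learning rate and squared-error update below are assumptions added for illustration, not something this notebook defines:

import numpy as np

x = np.array([1.0, 0.0, 0.0, 1.0])   # input features
y_true = 1.0                          # expected output
w = np.random.randn(4)                # weights to be learned

for step in range(5):
    y_pred = x.dot(w)                 # forward pass
    error = y_pred - y_true
    w -= 0.1 * error * x              # nudge the weights against the error
    print(step, error)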
Python Code: from __future__ import print_function import tensorflow as tf with tf.Session(): input1 = tf.constant([1.0, 1.0, 1.0, 1.0]) input2 = tf.constant([2.0, 2.0, 2.0, 2.0]) output = tf.add(input1, input2) result = output.eval() print("result: ", result) Explanation: Hello, TensorFlow A beginner-level, getting started, basic introduction to TensorFlow TensorFlow is a general-purpose system for graph-based computation. A typical use is machine learning. In this notebook, we'll introduce the basic concepts of TensorFlow using some simple examples. TensorFlow gets its name from tensors, which are arrays of arbitrary dimensionality. A vector is a 1-d array and is known as a 1st-order tensor. A matrix is a 2-d array and a 2nd-order tensor. The "flow" part of the name refers to computation flowing through a graph. Training and inference in a neural network, for example, involves the propagation of matrix computations through many nodes in a computational graph. When you think of doing things in TensorFlow, you might want to think of creating tensors (like matrices), adding operations (that output other tensors), and then executing the computation (running the computational graph). In particular, it's important to realize that when you add an operation on tensors, it doesn't execute immediately. Rather, TensorFlow waits for you to define all the operations you want to perform. Then, TensorFlow optimizes the computation graph, deciding how to execute the computation, before generating the data. Because of this, a tensor in TensorFlow isn't so much holding the data as a placeholder for holding the data, waiting for the data to arrive when a computation is executed. Adding two vectors in TensorFlow Let's start with something that should be simple. Let's add two length four vectors (two 1st-order tensors): $\begin{bmatrix} 1. & 1. & 1. & 1.\end{bmatrix} + \begin{bmatrix} 2. & 2. & 2. & 2.\end{bmatrix} = \begin{bmatrix} 3. & 3. & 3. & 3.\end{bmatrix}$ End of explanation print([x + y for x, y in zip([1.0] * 4, [2.0] * 4)]) import numpy as np x, y = np.full(4, 1.0), np.full(4, 2.0) print("{} + {} = {}".format(x, y, x + y)) Explanation: What we're doing is creating two vectors, [1.0, 1.0, 1.0, 1.0] and [2.0, 2.0, 2.0, 2.0], and then adding them. Here's equivalent code in raw Python and using numpy: End of explanation import tensorflow as tf with tf.Session(): input1 = tf.constant(1.0, shape=[4]) input2 = tf.constant(2.0, shape=[4]) input3 = tf.constant(3.0, shape=[4]) output = tf.add(tf.add(input1, input2), input3) result = output.eval() print(result) Explanation: Details of adding two vectors in TensorFlow The example above of adding two vectors involves a lot more than it seems, so let's look at it in more depth. import tensorflow as tf This import brings TensorFlow's public API into our IPython runtime environment. with tf.Session(): When you run an operation in TensorFlow, you need to do it in the context of a Session. A session holds the computation graph, which contains the tensors and the operations. When you create tensors and operations, they are not executed immediately, but wait for other operations and tensors to be added to the graph, only executing when finally requested to produce the results of the session. Deferring the execution like this provides additional opportunities for parallelism and optimization, as TensorFlow can decide how to combine operations and where to run them after TensorFlow knows about all the operations. 
input1 = tf.constant([1.0, 1.0, 1.0, 1.0]) input2 = tf.constant([2.0, 2.0, 2.0, 2.0]) The next two lines create tensors using a convenience function called constant, which is similar to numpy's array and numpy's full. If you look at the code for constant, you can see the details of what it is doing to create the tensor. In summary, it creates a tensor of the necessary shape and applies the constant operator to it to fill it with the provided values. The values to constant can be Python or numpy arrays. constant can take an optional shape parameter, which works similarly to numpy's fill if provided, and an optional name parameter, which can be used to put a more human-readable label on the operation in the TensorFlow operation graph. output = tf.add(input1, input2) You might think add just adds the two vectors now, but it doesn't quite do that. What it does is put the add operation into the computational graph. The results of the addition aren't available yet. They've been put in the computation graph, but the computation graph hasn't been executed yet. result = output.eval() print result eval() is also slightly more complicated than it looks. Yes, it does get the value of the vector (tensor) that results from the addition. It returns this as a numpy array, which can then be printed. But, it's important to realize it also runs the computation graph at this point, because we demanded the output from the operation node of the graph; to produce that, it had to run the computation graph. So, this is the point where the addition is actually performed, not when add was called, as add just put the addition operation into the TensorFlow computation graph. Multiple operations To use TensorFlow, you add operations on tensors that produce tensors to the computation graph, then execute that graph to run all those operations and calculate the values of all the tensors in the graph. Here's a simple example with two operations: End of explanation with tf.Session(): input1 = tf.constant(1.0, shape=[4]) input2 = tf.constant(2.0, shape=[4]) output = input1 + input2 print(output.eval()) Explanation: This version uses constant in a way similar to numpy's fill, specifying the optional shape and having the values copied out across it. The add operator supports operator overloading, so you could try writing it inline as input1 + input2 instead as well as experimenting with other operators. End of explanation import tensorflow as tf import numpy as np with tf.Session(): input1 = tf.constant(1.0, shape=[2, 3]) input2 = tf.constant(np.reshape(np.arange(1.0, 7.0, dtype=np.float32), (2, 3))) output = tf.add(input1, input2) print(output.eval()) Explanation: Adding two matrices Next, let's do something very similar, adding two matrices: $\begin{bmatrix} 1. & 1. & 1. \ 1. & 1. & 1. \ \end{bmatrix} + \begin{bmatrix} 1. & 2. & 3. \ 4. & 5. & 6. \ \end{bmatrix} = \begin{bmatrix} 2. & 3. & 4. \ 5. & 6. & 7. \ \end{bmatrix}$ End of explanation #@test {"output": "ignore"} import tensorflow as tf import numpy as np with tf.Session(): input_features = tf.constant(np.reshape([1, 0, 0, 1], (1, 4)).astype(np.float32)) weights = tf.constant(np.random.randn(4, 2).astype(np.float32)) output = tf.matmul(input_features, weights) print("Input:") print(input_features.eval()) print("Weights:") print(weights.eval()) print("Output:") print(output.eval()) Explanation: Recall that you can pass numpy or Python arrays into constant. 
In this example, the matrix with values from 1 to 6 is created in numpy and passed into constant, but TensorFlow also has range, reshape, and tofloat operators. Doing this entirely within TensorFlow could be more efficient if this was a very large matrix. Try experimenting with this code a bit -- maybe modifying some of the values, using the numpy version, doing this using, adding another operation, or doing this using TensorFlow's range function. Multiplying matrices Let's move on to matrix multiplication. This time, let's use a bit vector and some random values, which is a good step toward some of what we'll need to do for regression and neural networks. End of explanation #@test {"output": "ignore"} import tensorflow as tf import numpy as np with tf.Session() as sess: # Set up two variables, total and weights, that we'll change repeatedly. total = tf.Variable(tf.zeros([1, 2])) weights = tf.Variable(tf.random_uniform([1,2])) # Initialize the variables we defined above. tf.global_variables_initializer().run() # This only adds the operators to the graph right now. The assignment # and addition operations are not performed yet. update_weights = tf.assign(weights, tf.random_uniform([1, 2], -1.0, 1.0)) update_total = tf.assign(total, tf.add(total, weights)) for _ in range(5): # Actually run the operation graph, so randomly generate weights and then # add them into the total. Order does matter here. We need to update # the weights before updating the total. sess.run(update_weights) sess.run(update_total) print(weights.eval(), total.eval()) Explanation: Above, we're taking a 1 x 4 vector [1 0 0 1] and multiplying it by a 4 by 2 matrix full of random values from a normal distribution (mean 0, stdev 1). The output is a 1 x 2 matrix. You might try modifying this example. Running the cell multiple times will generate new random weights and a new output. Or, change the input, e.g., to [0 0 0 1]), and run the cell again. Or, try initializing the weights using the TensorFlow op, e.g., random_normal, instead of using numpy to generate the random weights. What we have here is the basics of a simple neural network already. If we are reading in the input features, along with some expected output, and change the weights based on the error with the output each time, that's a neural network. Use of variables Let's look at adding two small matrices in a loop, not by creating new tensors every time, but by updating the existing values and then re-running the computation graph on the new data. This happens a lot with machine learning models, where we change some parameters each time such as gradient descent on some weights and then perform the same computations over and over again. End of explanation
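As a small follow-up to the deferred-execution point made in the comments above (a minimal sketch using the same TensorFlow 1.x API as the rest of this notebook), note that an op prints as a graph node until the session actually runs it:

import tensorflow as tf

with tf.Session():
    a = tf.constant([1.0, 2.0])
    b = tf.constant([3.0, 4.0])
    total = tf.add(a, b)
    print(total)         # just a Tensor handle into the graph, no values yet
    print(total.eval())  # running the graph produces [4. 6.]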
5,080
Given the following text description, write Python code to implement the functionality described below step by step Description: This notebook is based on this TensorFlow tutorial. It's been modified to use a SummaryWriter so we can track the training process using TensorBoard. For a nice getting started with TensorBoard tutorial, check this out. Step2: Download the text corpus. Step4: Read the data into a string. Step5: Build the dictionary and replace rare words with UNK token. Step6: Function to generate a training batch for the skip-gram model. Step7: Build and train a skip-gram model. Step8: Start TensorBoard Start TensorBoard while the training is running (or after it's done) by pointing it to the directory in which the summaries were written. The script is configured to write them to this location. sh $ tensorboard --logdir=/tmp/word2vec_basic/summaries Open a browser to http Step9: How-to find the 'nearby' words for a specific given word Here's a function to find the 'nearby' words for a specific word. E.g., picking "six" as the word may give a result like this (after about 100K training steps, for higher accuracy, try running for 500k)
Python Code: # These are all the modules we'll be using later. Make sure you can import them # before proceeding further. %matplotlib inline from __future__ import absolute_import from __future__ import division from __future__ import print_function import collections import math import os import random import time import zipfile import numpy as np from six.moves import urllib from six.moves import xrange # pylint: disable=redefined-builtin import tensorflow as tf from sklearn.manifold import TSNE Explanation: This notebook is based on this TensorFlow tutorial. It's been modified to use a SummaryWriter so we can track the training process using TensorBoard. For a nice getting started with TensorBoard tutorial, check this out. End of explanation url = 'http://mattmahoney.net/dc/' def maybe_download(filename, expected_bytes): Download a file if not present, and make sure it's the right size. if not os.path.exists(filename): filename, _ = urllib.request.urlretrieve(url + filename, filename) statinfo = os.stat(filename) if statinfo.st_size == expected_bytes: print('Found and verified %s' % filename) else: print(statinfo.st_size) raise Exception( 'Failed to verify ' + filename + '. Can you get to it with a browser?') return filename filename = maybe_download('text8.zip', 31344016) Explanation: Download the text corpus. End of explanation def read_data(filename): Extract the first file enclosed in a zip file as a list of words with zipfile.ZipFile(filename) as f: data = tf.compat.as_str(f.read(f.namelist()[0])).split() return data words = read_data(filename) print('Data size %d' % len(words)) Explanation: Read the data into a string. End of explanation vocabulary_size = 50000 def build_dataset(words): count = [['UNK', -1]] count.extend(collections.Counter(words).most_common(vocabulary_size - 1)) dictionary = dict() for word, _ in count: dictionary[word] = len(dictionary) data = list() unk_count = 0 for word in words: if word in dictionary: index = dictionary[word] else: index = 0 # dictionary['UNK'] unk_count = unk_count + 1 data.append(index) count[0][1] = unk_count reverse_dictionary = dict(zip(dictionary.values(), dictionary.keys())) return data, count, dictionary, reverse_dictionary data, count, dictionary, reverse_dictionary = build_dataset(words) print('Most common words (+UNK)', count[:5]) print('Sample data', data[:10], [reverse_dictionary[i] for i in data[:10]]) del words # Hint to reduce memory. Explanation: Build the dictionary and replace rare words with UNK token. 
End of explanation data_index = 0 def generate_batch(batch_size, num_skips, skip_window): global data_index assert batch_size % num_skips == 0 assert num_skips <= 2 * skip_window batch = np.ndarray(shape=(batch_size), dtype=np.int32) labels = np.ndarray(shape=(batch_size, 1), dtype=np.int32) span = 2 * skip_window + 1 # [ skip_window target skip_window ] buffer = collections.deque(maxlen=span) for _ in range(span): buffer.append(data[data_index]) data_index = (data_index + 1) % len(data) for i in range(batch_size // num_skips): target = skip_window # target label at the center of the buffer targets_to_avoid = [skip_window] for j in range(num_skips): while target in targets_to_avoid: target = random.randint(0, span - 1) targets_to_avoid.append(target) batch[i * num_skips + j] = buffer[skip_window] labels[i * num_skips + j, 0] = buffer[target] buffer.append(data[data_index]) data_index = (data_index + 1) % len(data) return batch, labels print('data:', [reverse_dictionary[di] for di in data[:8]]) for num_skips, skip_window in [(2, 1), (4, 2)]: data_index = 0 batch, labels = generate_batch(batch_size=8, num_skips=num_skips, skip_window=skip_window) print('\nwith num_skips = %d and skip_window = %d:' % (num_skips, skip_window)) print(' batch:', [reverse_dictionary[bi] for bi in batch]) print(' labels:', [reverse_dictionary[li] for li in labels.reshape(8)]) Explanation: Function to generate a training batch for the skip-gram model. End of explanation batch_size = 128 embedding_size = 128 # Dimension of the embedding vector. skip_window = 1 # How many words to consider left and right. num_skips = 2 # How many times to reuse an input to generate a label. # We pick a random validation set to sample nearest neighbors. Here we limit the # validation samples to the words that have a low numeric ID, which by # construction are also the most frequent. valid_size = 16 # Random set of words to evaluate similarity on. valid_window = 100 # Only pick dev samples in the head of the distribution. valid_examples = np.random.choice(valid_window, valid_size, replace=False) num_sampled = 64 # Number of negative examples to sample. graph = tf.Graph() with graph.as_default(): # Input data. train_inputs = tf.placeholder(tf.int32, shape=[batch_size]) train_labels = tf.placeholder(tf.int32, shape=[batch_size, 1]) valid_dataset = tf.constant(valid_examples, dtype=tf.int32) # Ops and variables pinned to the CPU because of missing GPU implementation with tf.device('/cpu:0'): # Look up embeddings for inputs. embeddings = tf.Variable( tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0)) embed = tf.nn.embedding_lookup(embeddings, train_inputs) # Construct the variables for the NCE loss nce_weights = tf.Variable( tf.truncated_normal([vocabulary_size, embedding_size], stddev=1.0 / math.sqrt(embedding_size))) nce_biases = tf.Variable(tf.zeros([vocabulary_size])) # Compute the average NCE loss for the batch. # tf.nce_loss automatically draws a new sample of the negative labels each # time we evaluate the loss. loss = tf.reduce_mean( tf.nn.nce_loss(nce_weights, nce_biases, embed, train_labels, num_sampled, vocabulary_size)) # Construct the SGD optimizer using a learning rate of 1.0. optimizer = tf.train.GradientDescentOptimizer(1.0).minimize(loss) # Compute the cosine similarity between minibatch examples and all embeddings. 
norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), 1, keep_dims=True)) normalized_embeddings = embeddings / norm valid_embeddings = tf.nn.embedding_lookup( normalized_embeddings, valid_dataset) similarity = tf.matmul( valid_embeddings, normalized_embeddings, transpose_b=True) # Define info to be used by the SummaryWriter. This will let TensorBoard # plot loss values during the training process. loss_summary = tf.scalar_summary("loss", loss) train_summary_op = tf.merge_summary([loss_summary]) # Add variable initializer. init = tf.initialize_all_variables() print("finished building graph.") # Begin training. num_steps = 100001 session = tf.InteractiveSession(graph=graph) # We must initialize all variables before we use them. init.run() print("Initialized") # Directory in which to write summary information. # You can point TensorBoard to this directory via: # $ tensorboard --logdir=/tmp/word2vec_basic/summaries # Tensorflow assumes this directory already exists, so we need to create it. timestamp = str(int(time.time())) if not os.path.exists(os.path.join("/tmp/word2vec_basic", "summaries", timestamp)): os.makedirs(os.path.join("/tmp/word2vec_basic", "summaries", timestamp)) # Create the SummaryWriter train_summary_writer = tf.train.SummaryWriter( os.path.join( "/tmp/word2vec_basic", "summaries", timestamp), session.graph) average_loss = 0 for step in xrange(num_steps): batch_inputs, batch_labels = generate_batch( batch_size, num_skips, skip_window) feed_dict = {train_inputs: batch_inputs, train_labels: batch_labels} # We perform one update step by evaluating the optimizer op (including it # in the list of returned values for session.run() # Also evaluate the training summary op. _, loss_val, tsummary = session.run( [optimizer, loss, train_summary_op], feed_dict=feed_dict) average_loss += loss_val # Write the evaluated summary info to the SummaryWriter. This info will # then show up in the TensorBoard events. train_summary_writer.add_summary(tsummary, step) if step % 2000 == 0: if step > 0: average_loss /= 2000 # The average loss is an estimate of the loss over the last 2000 batches. print("Average loss at step ", step, ": ", average_loss) average_loss = 0 # Note that this is expensive (~20% slowdown if computed every 500 steps) if step % 10000 == 0: sim = similarity.eval() for i in xrange(valid_size): valid_word = reverse_dictionary[valid_examples[i]] top_k = 8 # number of nearest neighbors nearest = (-sim[i, :]).argsort()[1:top_k + 1] log_str = "Nearest to %s:" % valid_word for k in xrange(top_k): close_word = reverse_dictionary[nearest[k]] log_str = "%s %s," % (log_str, close_word) print(log_str) final_embeddings = normalized_embeddings.eval() print("finished training.") Explanation: Build and train a skip-gram model. End of explanation # Visualize the embeddings. 
def plot_with_labels(low_dim_embs, labels, filename='tsne.png'): assert low_dim_embs.shape[0] >= len(labels), "More labels than embeddings" plt.figure(figsize=(18, 18)) # in inches for i, label in enumerate(labels): x, y = low_dim_embs[i, :] plt.scatter(x, y) plt.annotate(label, xy=(x, y), xytext=(5, 2), textcoords='offset points', ha='right', va='bottom') plt.savefig(filename) try: from sklearn.manifold import TSNE import matplotlib.pyplot as plt tsne = TSNE(perplexity=30, n_components=2, init='pca', n_iter=5000) plot_only = 500 low_dim_embs = tsne.fit_transform(final_embeddings[:plot_only, :]) labels = [reverse_dictionary[i] for i in xrange(plot_only)] plot_with_labels(low_dim_embs, labels) except ImportError: print("Please install sklearn and matplotlib to visualize embeddings.") Explanation: Start TensorBoard Start TensorBoard while the training is running (or after it's done) by pointing it to the directory in which the summaries were written. The script is configured to write them to this location. sh $ tensorboard --logdir=/tmp/word2vec_basic/summaries Open a browser to http://localhost:6006/. End of explanation test_word = 'six' test_word_idx = dictionary[test_word] print("Found word {} at index {}".format(test_word, test_word_idx)) test_embeddings = tf.nn.embedding_lookup(normalized_embeddings, [test_word_idx]) test_similarity = tf.matmul(test_embeddings, normalized_embeddings, transpose_b=True) top_k = 8 # number of nearest neighbors # Extra: eval the 'test word' similarity sim = test_similarity.eval() nearest = (-sim[0, :]).argsort()[1:top_k + 1] print("Nearest to {}:".format(test_word)) for k in xrange(top_k): close_word = reverse_dictionary[nearest[k]] print (close_word) Explanation: How-to find the 'nearby' words for a specific given word Here's a function to find the 'nearby' words for a specific word. E.g., picking "six" as the word may give a result like this (after about 100K training steps, for higher accuracy, try running for 500k): ``` Found word six at index 22 Nearest to six: seven four eight five nine three two zero ``` End of explanation
5,081
Given the following text description, write Python code to implement the functionality described below step by step Description: TensorFlow实战Titanic解析 一、数据读入及预处理 1. 使用pandas读入csv文件,读入为pands.DataFrame对象 Step1: 2. 预处理 剔除空数据 将'Sex'字段转换为int类型 选取数值类型的字段,抛弃字符串类型字段 Step2: 3. 将训练数据切分为训练集(training set)和验证集(validation set) Step3: 二、构建计算图 逻辑回归 逻辑回归是形式最简单,并且最容易理解的分类器之一。从数学上,逻辑回归的预测函数可以表示为如下公式: y = softmax(xW + b) 其中,x为输入向量,是大小为d×1的列向量,d是特征数。W是大小为的c×d权重矩阵,c是分类类别数目。b是偏置向量,为c×1列向量。softmax在数学定义里,是指一种归一化指数函数。它将一个k维的向量x按照公式 的形式将向量中的元素转换为(0, 1)的区间。机器学习领域常使用这种方法将类似判别函数的置信度值转换为概率形式(如判别超平面的距离等)。softmax函数常用于输出层,用于指定唯一的分类输出。 1. 使用placeholder声明输入占位符 TensorFlow设计了数据Feed机制。也就是说计算程序并不会直接交互执行,而是在声明过程只做计算图的构建。所以,此时并不会触碰真实的数据,而只是通过placeholder算子声明一个输入数据的占位符,在后面真正运行计算时,才用数据替换占位符。 声明占位符placeholder需要给定三个参数,分别是输入数据的元素类型dtype、维度形状shape和占位符名称标识name。 Step4: 2. 声明参数变量 变量的声明方式是直接定义tf.Variable()对象。 初始化变量对象有两种方式,一种是从protocol buffer结构VariableDef中反序列化,另一种是通过参数指定初始值。最简单的方式就是向下面程序这样,为变量传入初始值。初始值必须是一个tensor对象,或是可以通过convert_to_tensor()方法转换成tensor的Python对象。TensorFlow提供了多种构造随机tensor的方法,可以构造全零tensor、随机正态分布tensor等。定义变量会保留初始值的维度形状。 Step5: 3. 构造前向传播计算图 使用算子构建由输入计算出标签的计算过程。 在计算图的构建过程中,TensorFlow会自动推算每一个节点的输入输出形状。若无法运算,比如两个行列数不同的矩阵相加,则会直接报错。 Step6: 4. 声明代价函数 使用交叉熵(cross entropy)作为代价函数。 Step7: NOTE 在计算交叉熵的时候,对模型输出值 y_pred 加上了一个很小的误差值(在上面程序中是 1e-10),这是因为当 y_pred 十分接近真值 y_true 的时候,也就是 y_pred 的值非常接近 0 或 1 的取值时,计算会得到负无穷 -inf,从而导致输出非法,并进一步导致无法计算梯度,迭代陷入崩溃。要解决这个问题有三种办法: 在计算时,直接加入一个极小的误差值,使计算合法。这样可以避免计算,但存在的问题是加入误差后相当于y_pred的值会突破1。在示例代码中使用了这种方案; 使用 clip() 函数,当 y_pred 接近 0 时,将其赋值成为极小误差值。也就是将 y_pred 的取值范围限定在的范围内; 当计算交叉熵的计算出现 nan 值时,显式地将cost设置为0。这种方式回避了 函数计算的问题,而是在最终的代价函数上进行容错处理。 5. 加入优化算法 TensorFlow内置了多种经典的优化算法,如随机梯度下降算法(SGD,Stochastic Gradient Descent)、动量算法(Momentum)、Adagrad算法、ADAM算法、RMSProp算法等。优化器内部会自动构建梯度计算和反向传播部分的计算图。 一般对于优化算法,最关键的参数是学习率(learning rate),对于学习率的设置是一门技术。同时,不同优化算法在不同问题上可能会有不同的收敛速度,在解决实际问题时可以做多种尝试。 Step8: 6. (optional) 计算准确率 Step9: 三、构建训练迭代 & 执行训练 启动Session,代入数据进行计算。训练结束后使用验证集评估训练效果 Step10: 四、存储和加载模型参数 变量的存储和读取是通过tf.train.Saver类来完成的。Saver对象在初始化时,为计算图加入了用于存储和加载变量的算子,并可以通过参数指定是要存储哪些变量。Saver对象的save()和restore()方法是触发图中算子的入口。 Step11: TensorBoard TensorBoard是TensorFlow配套的可视化工具,可以用来帮助理解复杂的模型和检查实现中的错误。 TensorBoard的工作方式是启动一个WEB服务,该服务进程从TensorFlow程序执行所得的事件日志文件(event files)中读取概要(summary)数据,然后将数据在网页中绘制成可视化的图表。概要数据主要包括以下几种类别: 1. 标量数据,如准确率、代价损失值,使用tf.summary.scalar加入记录算子; 2. 参数数据,如参数矩阵weights、偏置矩阵bias,一般使用tf.summary.histogram记录; 3. 图像数据,用tf.summary.image加入记录算子; 4. 音频数据,用tf.summary.audio加入记录算子; 5. 计算图结构,在定义tf.summary.FileWriter对象时自动记录。 可以通过TensorBoard展示的完整程序:
Python Code: import os import numpy as np import pandas as pd import tensorflow as tf # read data from file data = pd.read_csv('data/train.csv') print(data.info()) Explanation: TensorFlow实战Titanic解析 一、数据读入及预处理 1. 使用pandas读入csv文件,读入为pands.DataFrame对象 End of explanation # fill nan values with 0 data = data.fillna(0) # convert ['male', 'female'] values of Sex to [1, 0] data['Sex'] = data['Sex'].apply(lambda s: 1 if s == 'male' else 0) # 'Survived' is the label of one class, # add 'Deceased' as the other class data['Deceased'] = data['Survived'].apply(lambda s: 1 - s) # select features and labels for training dataset_X = data[['Sex', 'Age', 'Pclass', 'SibSp', 'Parch', 'Fare']] dataset_Y = data[['Deceased', 'Survived']] print(dataset_X) print(dataset_Y) Explanation: 2. 预处理 剔除空数据 将'Sex'字段转换为int类型 选取数值类型的字段,抛弃字符串类型字段 End of explanation from sklearn.model_selection import train_test_split # split training data and validation set data X_train, X_val, y_train, y_val = train_test_split(dataset_X.as_matrix(), dataset_Y.as_matrix(), test_size=0.2, random_state=42) Explanation: 3. 将训练数据切分为训练集(training set)和验证集(validation set) End of explanation # 声明输入数据占位符 # shape参数的第一个元素为None,表示可以同时放入任意条记录 X = tf.placeholder(tf.float32, shape=[None, 6], name='input') y = tf.placeholder(tf.float32, shape=[None, 2], name='label') Explanation: 二、构建计算图 逻辑回归 逻辑回归是形式最简单,并且最容易理解的分类器之一。从数学上,逻辑回归的预测函数可以表示为如下公式: y = softmax(xW + b) 其中,x为输入向量,是大小为d×1的列向量,d是特征数。W是大小为的c×d权重矩阵,c是分类类别数目。b是偏置向量,为c×1列向量。softmax在数学定义里,是指一种归一化指数函数。它将一个k维的向量x按照公式 的形式将向量中的元素转换为(0, 1)的区间。机器学习领域常使用这种方法将类似判别函数的置信度值转换为概率形式(如判别超平面的距离等)。softmax函数常用于输出层,用于指定唯一的分类输出。 1. 使用placeholder声明输入占位符 TensorFlow设计了数据Feed机制。也就是说计算程序并不会直接交互执行,而是在声明过程只做计算图的构建。所以,此时并不会触碰真实的数据,而只是通过placeholder算子声明一个输入数据的占位符,在后面真正运行计算时,才用数据替换占位符。 声明占位符placeholder需要给定三个参数,分别是输入数据的元素类型dtype、维度形状shape和占位符名称标识name。 End of explanation # 声明变量 weights = tf.Variable(tf.random_normal([6, 2]), name='weights') bias = tf.Variable(tf.zeros([2]), name='bias') Explanation: 2. 声明参数变量 变量的声明方式是直接定义tf.Variable()对象。 初始化变量对象有两种方式,一种是从protocol buffer结构VariableDef中反序列化,另一种是通过参数指定初始值。最简单的方式就是向下面程序这样,为变量传入初始值。初始值必须是一个tensor对象,或是可以通过convert_to_tensor()方法转换成tensor的Python对象。TensorFlow提供了多种构造随机tensor的方法,可以构造全零tensor、随机正态分布tensor等。定义变量会保留初始值的维度形状。 End of explanation y_pred = tf.nn.softmax(tf.matmul(X, weights) + bias) Explanation: 3. 构造前向传播计算图 使用算子构建由输入计算出标签的计算过程。 在计算图的构建过程中,TensorFlow会自动推算每一个节点的输入输出形状。若无法运算,比如两个行列数不同的矩阵相加,则会直接报错。 End of explanation # 使用交叉熵作为代价函数 cross_entropy = - tf.reduce_sum(y * tf.log(y_pred + 1e-10), reduction_indices=1) # 批量样本的代价值为所有样本交叉熵的平均值 cost = tf.reduce_mean(cross_entropy) Explanation: 4. 声明代价函数 使用交叉熵(cross entropy)作为代价函数。 End of explanation # 使用随机梯度下降算法优化器来最小化代价,系统自动构建反向传播部分的计算图 train_op = tf.train.GradientDescentOptimizer(0.001).minimize(cost) Explanation: NOTE 在计算交叉熵的时候,对模型输出值 y_pred 加上了一个很小的误差值(在上面程序中是 1e-10),这是因为当 y_pred 十分接近真值 y_true 的时候,也就是 y_pred 的值非常接近 0 或 1 的取值时,计算会得到负无穷 -inf,从而导致输出非法,并进一步导致无法计算梯度,迭代陷入崩溃。要解决这个问题有三种办法: 在计算时,直接加入一个极小的误差值,使计算合法。这样可以避免计算,但存在的问题是加入误差后相当于y_pred的值会突破1。在示例代码中使用了这种方案; 使用 clip() 函数,当 y_pred 接近 0 时,将其赋值成为极小误差值。也就是将 y_pred 的取值范围限定在的范围内; 当计算交叉熵的计算出现 nan 值时,显式地将cost设置为0。这种方式回避了 函数计算的问题,而是在最终的代价函数上进行容错处理。 5. 
加入优化算法 TensorFlow内置了多种经典的优化算法,如随机梯度下降算法(SGD,Stochastic Gradient Descent)、动量算法(Momentum)、Adagrad算法、ADAM算法、RMSProp算法等。优化器内部会自动构建梯度计算和反向传播部分的计算图。 一般对于优化算法,最关键的参数是学习率(learning rate),对于学习率的设置是一门技术。同时,不同优化算法在不同问题上可能会有不同的收敛速度,在解决实际问题时可以做多种尝试。 End of explanation # 计算准确率 correct_pred = tf.equal(tf.argmax(y, 1), tf.argmax(y_pred, 1)) acc_op = tf.reduce_mean(tf.cast(correct_pred, tf.float32)) Explanation: 6. (optional) 计算准确率 End of explanation with tf.Session() as sess: # variables have to be initialized at the first place tf.global_variables_initializer().run() # training loop for epoch in range(10): total_loss = 0. for i in range(len(X_train)): # prepare feed data and run feed_dict = {X: [X_train[i]], y: [y_train[i]]} _, loss = sess.run([train_op, cost], feed_dict=feed_dict) total_loss += loss # display loss per epoch print('Epoch: %04d, total loss=%.9f' % (epoch + 1, total_loss)) print 'Training complete!' # Accuracy calculated by TensorFlow accuracy = sess.run(acc_op, feed_dict={X: X_val, y: y_val}) print("Accuracy on validation set: %.9f" % accuracy) # Accuracy calculated by NumPy pred = sess.run(y_pred, feed_dict={X: X_val}) correct = np.equal(np.argmax(pred, 1), np.argmax(y_val, 1)) numpy_accuracy = np.mean(correct.astype(np.float32)) print("Accuracy on validation set (numpy): %.9f" % numpy_accuracy) Explanation: 三、构建训练迭代 & 执行训练 启动Session,代入数据进行计算。训练结束后使用验证集评估训练效果 End of explanation # 训练步数记录 global_step = tf.Variable(0, name='global_step', trainable=False) # 存档入口 saver = tf.train.Saver() # 在Saver声明之后定义的变量将不会被存储 # non_storable_variable = tf.Variable(777) ckpt_dir = './ckpt_dir' if not os.path.exists(ckpt_dir): os.makedirs(ckpt_dir) with tf.Session() as sess: tf.global_variables_initializer().run() # 加载模型存档 ckpt = tf.train.get_checkpoint_state(ckpt_dir) if ckpt and ckpt.model_checkpoint_path: print('Restoring from checkpoint: %s' % ckpt.model_checkpoint_path) saver.restore(sess, ckpt.model_checkpoint_path) start = global_step.eval() for epoch in range(start, start + 10): total_loss = 0. 
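        # (Added note.) Each iteration below feeds a single example, so every
        # session.run call sees a batch of size one. That is simple but slow;
        # the TensorBoard version at the end of the notebook feeds mini-batches
        # of FLAGS.batch_size rows instead.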
for i in range(0, len(X_train)): feed_dict = { X: [X_train[i]], y: [y_train[i]] } _, loss = sess.run([train_op, cost], feed_dict=feed_dict) total_loss += loss print('Epoch: %04d, loss=%.9f' % (epoch + 1, total_loss)) # 模型存档 global_step.assign(epoch).eval() saver.save(sess, ckpt_dir + '/logistic.ckpt', global_step=global_step) print('Training complete!') Explanation: 四、存储和加载模型参数 变量的存储和读取是通过tf.train.Saver类来完成的。Saver对象在初始化时,为计算图加入了用于存储和加载变量的算子,并可以通过参数指定是要存储哪些变量。Saver对象的save()和restore()方法是触发图中算子的入口。 End of explanation ################################ # Constructing Dataflow Graph ################################ # arguments that can be set in command line tf.app.flags.DEFINE_integer('epochs', 10, 'Training epochs') tf.app.flags.DEFINE_integer('batch_size', 10, 'size of mini-batch') FLAGS = tf.app.flags.FLAGS with tf.name_scope('input'): # create symbolic variables X = tf.placeholder(tf.float32, shape=[None, 6]) y_true = tf.placeholder(tf.float32, shape=[None, 2]) with tf.name_scope('classifier'): # weights and bias are the variables to be trained weights = tf.Variable(tf.random_normal([6, 2])) bias = tf.Variable(tf.zeros([2])) y_pred = tf.nn.softmax(tf.matmul(X, weights) + bias) # add histogram summaries for weights, view on tensorboard tf.summary.histogram('weights', weights) tf.summary.histogram('bias', bias) # Minimise cost using cross entropy # NOTE: add a epsilon(1e-10) when calculate log(y_pred), # otherwise the result will be -inf with tf.name_scope('cost'): cross_entropy = - tf.reduce_sum(y_true * tf.log(y_pred + 1e-10), reduction_indices=1) cost = tf.reduce_mean(cross_entropy) tf.summary.scalar('loss', cost) # use gradient descent optimizer to minimize cost train_op = tf.train.GradientDescentOptimizer(0.001).minimize(cost) with tf.name_scope('accuracy'): correct_pred = tf.equal(tf.argmax(y_true, 1), tf.argmax(y_pred, 1)) acc_op = tf.reduce_mean(tf.cast(correct_pred, tf.float32)) # Add scalar summary for accuracy tf.summary.scalar('accuracy', acc_op) global_step = tf.Variable(0, name='global_step', trainable=False) # use saver to save and restore model saver = tf.train.Saver() # this variable won't be stored, since it is declared after tf.train.Saver() non_storable_variable = tf.Variable(777) ckpt_dir = './ckpt_dir' if not os.path.exists(ckpt_dir): os.makedirs(ckpt_dir) ################################ # Training the model ################################ # use session to run the calculation with tf.Session() as sess: # create a log writer. run 'tensorboard --logdir=./logs' writer = tf.summary.FileWriter('./logs', sess.graph) merged = tf.summary.merge_all() # variables have to be initialized at the first place tf.global_variables_initializer().run() # restore variables from checkpoint if exists ckpt = tf.train.get_checkpoint_state(ckpt_dir) if ckpt and ckpt.model_checkpoint_path: print('Restoring from checkpoint: %s' % ckpt.model_checkpoint_path) saver.restore(sess, ckpt.model_checkpoint_path) start = global_step.eval() # training loop for epoch in range(start, start + FLAGS.epochs): total_loss = 0. 
for i in range(0, len(X_train), FLAGS.batch_size): # train with mini-batch feed_dict = { X: X_train[i: i + FLAGS.batch_size], y_true: y_train[i: i + FLAGS.batch_size] } _, loss = sess.run([train_op, cost], feed_dict=feed_dict) total_loss += loss # display loss per epoch print('Epoch: %04d, loss=%.9f' % (epoch + 1, total_loss)) summary, accuracy = sess.run([merged, acc_op], feed_dict={X: X_val, y_true: y_val}) writer.add_summary(summary, epoch) # Write summary print('Accuracy on validation set: %.9f' % accuracy) # set and update(eval) global_step with epoch global_step.assign(epoch).eval() saver.save(sess, ckpt_dir + '/logistic.ckpt', global_step=global_step) print('Training complete!') Explanation: TensorBoard TensorBoard是TensorFlow配套的可视化工具,可以用来帮助理解复杂的模型和检查实现中的错误。 TensorBoard的工作方式是启动一个WEB服务,该服务进程从TensorFlow程序执行所得的事件日志文件(event files)中读取概要(summary)数据,然后将数据在网页中绘制成可视化的图表。概要数据主要包括以下几种类别: 1. 标量数据,如准确率、代价损失值,使用tf.summary.scalar加入记录算子; 2. 参数数据,如参数矩阵weights、偏置矩阵bias,一般使用tf.summary.histogram记录; 3. 图像数据,用tf.summary.image加入记录算子; 4. 音频数据,用tf.summary.audio加入记录算子; 5. 计算图结构,在定义tf.summary.FileWriter对象时自动记录。 可以通过TensorBoard展示的完整程序: End of explanation
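Returning to the NOTE earlier about log(0): the second option it lists, clipping y_pred instead of adding an epsilon, would look roughly like the sketch below. This is an illustration that assumes the y_true and y_pred tensors defined in the program above; it is not code from the original notebook.

y_clipped = tf.clip_by_value(y_pred, 1e-10, 1.0)
cross_entropy = - tf.reduce_sum(y_true * tf.log(y_clipped), reduction_indices=1)
cost = tf.reduce_mean(cross_entropy)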
5,082
Given the following text description, write Python code to implement the functionality described below step by step Description: VAEs on sparse data The following notebook provides an example of how to load a dataset, setup parameters for it, create the model and train it for a few epochs. In the notebook, we will use with the RCV1 dataset (assuming it has been setup previously). For details on how to set it up, run python rcv2.py in the optvaedatasets folder Step1: Model Parameters The model parameters have been saved here, we'll load them and look at them These are what the model will be built based on Step2: For the moment, we will leave everything as is. Some worthwhile parameters to note Step3: Load dataset Lets load the RCV1(v2) dataset and visualize how the dataset &lt;dict&gt; is structured We'll need to append some parameters from the dataset into the default parameters dict that we will use to create the model Also, compute the idf vectors for the entire dataset (the term frequencies will be multiplied dynamically) inside the model Step4: Setup Create directory for configuration files. The configuration file for a single experiment is in the pickle file. We will use this directory to save checkpoint files as well Step5: Training the model We can now train the model we created This is the overall setup for the file train.py
Python Code: import sys,os,glob from collections import OrderedDict import numpy as np from utils.misc import readPickle, createIfAbsent sys.path.append('../') from optvaedatasets.load import loadDataset as loadDataset_OVAE from sklearn.feature_extraction.text import TfidfTransformer Explanation: VAEs on sparse data The following notebook provides an example of how to load a dataset, setup parameters for it, create the model and train it for a few epochs. In the notebook, we will use with the RCV1 dataset (assuming it has been setup previously). For details on how to set it up, run python rcv2.py in the optvaedatasets folder End of explanation default_params = readPickle('../optvaeutils/default_settings.pkl')[0] for k in default_params: print '(',k,default_params[k],')', print Explanation: Model Parameters The model parameters have been saved here, we'll load them and look at them These are what the model will be built based on End of explanation default_params['opt_type'] = 'finopt' #set to finopt to optimize var. params, none otherwise default_params['n_steps'] = 5 #temporary directory where checkpoints are saved default_params['savedir'] = './tmp' Explanation: For the moment, we will leave everything as is. Some worthwhile parameters to note: * n_steps: Number of steps of optimizing $\psi(x)$, the local variational parameters as output by the inference network. We'll set this to 10 below for the moment. * dim_stochastic: Number of latent dimensions. End of explanation dset = loadDataset_OVAE('rcv2') #Visualize structure of dataset dict for k in dset: print k, type(dset[k]), if hasattr(dset[k],'shape'): print dset[k].shape elif type(dset[k]) is not list: print dset[k] else: print #Add parameters to default_params for k in ['dim_observations','data_type']: default_params[k] = dset[k] default_params['max_word_count'] =dset['train'].max() #Create IDF additional_attrs = {} tfidf = TfidfTransformer(norm=None) tfidf.fit(dset['train']) additional_attrs['idf'] = tfidf.idf_ from optvaemodels.vae import VAE as Model import optvaemodels.vae_learn as Learn import optvaemodels.vae_evaluate as Evaluate Explanation: Load dataset Lets load the RCV1(v2) dataset and visualize how the dataset &lt;dict&gt; is structured We'll need to append some parameters from the dataset into the default parameters dict that we will use to create the model Also, compute the idf vectors for the entire dataset (the term frequencies will be multiplied dynamically) inside the model End of explanation default_params['savedir']+='-rcv2-'+default_params['opt_type'] createIfAbsent(default_params['savedir']) pfile= default_params['savedir']+'/'+default_params['unique_id']+'-config.pkl' print 'Training model from scratch. Parameters in: ',pfile model = Model(default_params, paramFile = pfile, additional_attrs = additional_attrs) Explanation: Setup Create directory for configuration files. The configuration file for a single experiment is in the pickle file. 
We will use this directory to save checkpoint files as well End of explanation savef = os.path.join(default_params['savedir'],default_params['unique_id']) #Prefix for saving in checkpoint directory savedata = Learn.learn( model, dataset = dset['train'], epoch_start = 0 , epoch_end = 3, #epochs -- set w/ default_params['epochs'] batch_size = default_params['batch_size'], #batch size savefreq = default_params['savefreq'], #frequency of saving savefile = savef, dataset_eval= dset['valid'] ) for k in savedata: print k, type(savedata[k]), savedata[k].shape Explanation: Training the model We can now train the model we created This is the overall setup for the file train.py End of explanation
5,083
Given the following text description, write Python code to implement the functionality described below step by step Description: <!--BOOK_INFORMATION--> <img align="left" style="padding-right Step1: Once this submodule is imported, a three-dimensional axes can be created by passing the keyword projection='3d' to any of the normal axes creation routines Step2: With this three-dimensional axes enabled, we can now plot a variety of three-dimensional plot types. Three-dimensional plotting is one of the functionalities that benefits immensely from viewing figures interactively rather than statically in the notebook; recall that to use interactive figures, you can use %matplotlib notebook rather than %matplotlib inline when running this code. Three-dimensional Points and Lines The most basic three-dimensional plot is a line or collection of scatter plot created from sets of (x, y, z) triples. In analogy with the more common two-dimensional plots discussed earlier, these can be created using the ax.plot3D and ax.scatter3D functions. The call signature for these is nearly identical to that of their two-dimensional counterparts, so you can refer to Simple Line Plots and Simple Scatter Plots for more information on controlling the output. Here we'll plot a trigonometric spiral, along with some points drawn randomly near the line Step3: Notice that by default, the scatter points have their transparency adjusted to give a sense of depth on the page. While the three-dimensional effect is sometimes difficult to see within a static image, an interactive view can lead to some nice intuition about the layout of the points. Three-dimensional Contour Plots Analogous to the contour plots we explored in Density and Contour Plots, mplot3d contains tools to create three-dimensional relief plots using the same inputs. Like two-dimensional ax.contour plots, ax.contour3D requires all the input data to be in the form of two-dimensional regular grids, with the Z data evaluated at each point. Here we'll show a three-dimensional contour diagram of a three-dimensional sinusoidal function Step4: Sometimes the default viewing angle is not optimal, in which case we can use the view_init method to set the elevation and azimuthal angles. In the following example, we'll use an elevation of 60 degrees (that is, 60 degrees above the x-y plane) and an azimuth of 35 degrees (that is, rotated 35 degrees counter-clockwise about the z-axis) Step5: Again, note that this type of rotation can be accomplished interactively by clicking and dragging when using one of Matplotlib's interactive backends. Wireframes and Surface Plots Two other types of three-dimensional plots that work on gridded data are wireframes and surface plots. These take a grid of values and project it onto the specified three-dimensional surface, and can make the resulting three-dimensional forms quite easy to visualize. Here's an example of using a wireframe Step6: A surface plot is like a wireframe plot, but each face of the wireframe is a filled polygon. Adding a colormap to the filled polygons can aid perception of the topology of the surface being visualized Step7: Note that though the grid of values for a surface plot needs to be two-dimensional, it need not be rectilinear. 
Here is an example of creating a partial polar grid, which when used with the surface3D plot can give us a slice into the function we're visualizing Step8: Surface Triangulations For some applications, the evenly sampled grids required by the above routines is overly restrictive and inconvenient. In these situations, the triangulation-based plots can be very useful. What if rather than an even draw from a Cartesian or a polar grid, we instead have a set of random draws? Step9: We could create a scatter plot of the points to get an idea of the surface we're sampling from Step10: This leaves a lot to be desired. The function that will help us in this case is ax.plot_trisurf, which creates a surface by first finding a set of triangles formed between adjacent points (remember that x, y, and z here are one-dimensional arrays) Step11: The result is certainly not as clean as when it is plotted with a grid, but the flexibility of such a triangulation allows for some really interesting three-dimensional plots. For example, it is actually possible to plot a three-dimensional Möbius strip using this, as we'll see next. Example Step12: Now from this parametrization, we must determine the (x, y, z) positions of the embedded strip. Thinking about it, we might realize that there are two rotations happening Step13: Now we use our recollection of trigonometry to derive the three-dimensional embedding. We'll define $r$, the distance of each point from the center, and use this to find the embedded $(x, y, z)$ coordinates Step14: Finally, to plot the object, we must make sure the triangulation is correct. The best way to do this is to define the triangulation within the underlying parametrization, and then let Matplotlib project this triangulation into the three-dimensional space of the Möbius strip. This can be accomplished as follows
Python Code: from mpl_toolkits import mplot3d Explanation: <!--BOOK_INFORMATION--> <img align="left" style="padding-right:10px;" src="figures/PDSH-cover-small.png"> This notebook contains an excerpt from the Python Data Science Handbook by Jake VanderPlas; the content is available on GitHub. The text is released under the CC-BY-NC-ND license, and code is released under the MIT license. If you find this content useful, please consider supporting the work by buying the book! No changes were made to the contents of this notebook from the original. <!--NAVIGATION--> < Customizing Matplotlib: Configurations and Stylesheets | Contents | Geographic Data with Basemap > Three-Dimensional Plotting in Matplotlib Matplotlib was initially designed with only two-dimensional plotting in mind. Around the time of the 1.0 release, some three-dimensional plotting utilities were built on top of Matplotlib's two-dimensional display, and the result is a convenient (if somewhat limited) set of tools for three-dimensional data visualization. three-dimensional plots are enabled by importing the mplot3d toolkit, included with the main Matplotlib installation: End of explanation %matplotlib inline import numpy as np import matplotlib.pyplot as plt fig = plt.figure() ax = plt.axes(projection='3d') Explanation: Once this submodule is imported, a three-dimensional axes can be created by passing the keyword projection='3d' to any of the normal axes creation routines: End of explanation ax = plt.axes(projection='3d') # Data for a three-dimensional line zline = np.linspace(0, 15, 1000) xline = np.sin(zline) yline = np.cos(zline) ax.plot3D(xline, yline, zline, 'gray') # Data for three-dimensional scattered points zdata = 15 * np.random.random(100) xdata = np.sin(zdata) + 0.1 * np.random.randn(100) ydata = np.cos(zdata) + 0.1 * np.random.randn(100) ax.scatter3D(xdata, ydata, zdata, c=zdata, cmap='Greens'); Explanation: With this three-dimensional axes enabled, we can now plot a variety of three-dimensional plot types. Three-dimensional plotting is one of the functionalities that benefits immensely from viewing figures interactively rather than statically in the notebook; recall that to use interactive figures, you can use %matplotlib notebook rather than %matplotlib inline when running this code. Three-dimensional Points and Lines The most basic three-dimensional plot is a line or collection of scatter plot created from sets of (x, y, z) triples. In analogy with the more common two-dimensional plots discussed earlier, these can be created using the ax.plot3D and ax.scatter3D functions. The call signature for these is nearly identical to that of their two-dimensional counterparts, so you can refer to Simple Line Plots and Simple Scatter Plots for more information on controlling the output. Here we'll plot a trigonometric spiral, along with some points drawn randomly near the line: End of explanation def f(x, y): return np.sin(np.sqrt(x ** 2 + y ** 2)) x = np.linspace(-6, 6, 30) y = np.linspace(-6, 6, 30) X, Y = np.meshgrid(x, y) Z = f(X, Y) fig = plt.figure() ax = plt.axes(projection='3d') ax.contour3D(X, Y, Z, 50, cmap='binary') ax.set_xlabel('x') ax.set_ylabel('y') ax.set_zlabel('z'); Explanation: Notice that by default, the scatter points have their transparency adjusted to give a sense of depth on the page. While the three-dimensional effect is sometimes difficult to see within a static image, an interactive view can lead to some nice intuition about the layout of the points. 
Three-dimensional Contour Plots Analogous to the contour plots we explored in Density and Contour Plots, mplot3d contains tools to create three-dimensional relief plots using the same inputs. Like two-dimensional ax.contour plots, ax.contour3D requires all the input data to be in the form of two-dimensional regular grids, with the Z data evaluated at each point. Here we'll show a three-dimensional contour diagram of a three-dimensional sinusoidal function: End of explanation ax.view_init(60, 35) fig Explanation: Sometimes the default viewing angle is not optimal, in which case we can use the view_init method to set the elevation and azimuthal angles. In the following example, we'll use an elevation of 60 degrees (that is, 60 degrees above the x-y plane) and an azimuth of 35 degrees (that is, rotated 35 degrees counter-clockwise about the z-axis): End of explanation fig = plt.figure() ax = plt.axes(projection='3d') ax.plot_wireframe(X, Y, Z, color='black') ax.set_title('wireframe'); Explanation: Again, note that this type of rotation can be accomplished interactively by clicking and dragging when using one of Matplotlib's interactive backends. Wireframes and Surface Plots Two other types of three-dimensional plots that work on gridded data are wireframes and surface plots. These take a grid of values and project it onto the specified three-dimensional surface, and can make the resulting three-dimensional forms quite easy to visualize. Here's an example of using a wireframe: End of explanation ax = plt.axes(projection='3d') ax.plot_surface(X, Y, Z, rstride=1, cstride=1, cmap='viridis', edgecolor='none') ax.set_title('surface'); Explanation: A surface plot is like a wireframe plot, but each face of the wireframe is a filled polygon. Adding a colormap to the filled polygons can aid perception of the topology of the surface being visualized: End of explanation r = np.linspace(0, 6, 20) theta = np.linspace(-0.9 * np.pi, 0.8 * np.pi, 40) r, theta = np.meshgrid(r, theta) X = r * np.sin(theta) Y = r * np.cos(theta) Z = f(X, Y) ax = plt.axes(projection='3d') ax.plot_surface(X, Y, Z, rstride=1, cstride=1, cmap='viridis', edgecolor='none'); Explanation: Note that though the grid of values for a surface plot needs to be two-dimensional, it need not be rectilinear. Here is an example of creating a partial polar grid, which when used with the surface3D plot can give us a slice into the function we're visualizing: End of explanation theta = 2 * np.pi * np.random.random(1000) r = 6 * np.random.random(1000) x = np.ravel(r * np.sin(theta)) y = np.ravel(r * np.cos(theta)) z = f(x, y) Explanation: Surface Triangulations For some applications, the evenly sampled grids required by the above routines is overly restrictive and inconvenient. In these situations, the triangulation-based plots can be very useful. What if rather than an even draw from a Cartesian or a polar grid, we instead have a set of random draws? End of explanation ax = plt.axes(projection='3d') ax.scatter(x, y, z, c=z, cmap='viridis', linewidth=0.5); Explanation: We could create a scatter plot of the points to get an idea of the surface we're sampling from: End of explanation ax = plt.axes(projection='3d') ax.plot_trisurf(x, y, z, cmap='viridis', edgecolor='none'); Explanation: This leaves a lot to be desired. 
The function that will help us in this case is ax.plot_trisurf, which creates a surface by first finding a set of triangles formed between adjacent points (remember that x, y, and z here are one-dimensional arrays): End of explanation theta = np.linspace(0, 2 * np.pi, 30) w = np.linspace(-0.25, 0.25, 8) w, theta = np.meshgrid(w, theta) Explanation: The result is certainly not as clean as when it is plotted with a grid, but the flexibility of such a triangulation allows for some really interesting three-dimensional plots. For example, it is actually possible to plot a three-dimensional Möbius strip using this, as we'll see next. Example: Visualizing a Möbius strip A Möbius strip is similar to a strip of paper glued into a loop with a half-twist. Topologically, it's quite interesting because despite appearances it has only a single side! Here we will visualize such an object using Matplotlib's three-dimensional tools. The key to creating the Möbius strip is to think about it's parametrization: it's a two-dimensional strip, so we need two intrinsic dimensions. Let's call them $\theta$, which ranges from $0$ to $2\pi$ around the loop, and $w$ which ranges from -1 to 1 across the width of the strip: End of explanation phi = 0.5 * theta Explanation: Now from this parametrization, we must determine the (x, y, z) positions of the embedded strip. Thinking about it, we might realize that there are two rotations happening: one is the position of the loop about its center (what we've called $\theta$), while the other is the twisting of the strip about its axis (we'll call this $\phi$). For a Möbius strip, we must have the strip makes half a twist during a full loop, or $\Delta\phi = \Delta\theta/2$. End of explanation # radius in x-y plane r = 1 + w * np.cos(phi) x = np.ravel(r * np.cos(theta)) y = np.ravel(r * np.sin(theta)) z = np.ravel(w * np.sin(phi)) Explanation: Now we use our recollection of trigonometry to derive the three-dimensional embedding. We'll define $r$, the distance of each point from the center, and use this to find the embedded $(x, y, z)$ coordinates: End of explanation # triangulate in the underlying parametrization from matplotlib.tri import Triangulation tri = Triangulation(np.ravel(w), np.ravel(theta)) ax = plt.axes(projection='3d') ax.plot_trisurf(x, y, z, triangles=tri.triangles, cmap='viridis', linewidths=0.2); ax.set_xlim(-1, 1); ax.set_ylim(-1, 1); ax.set_zlim(-1, 1); Explanation: Finally, to plot the object, we must make sure the triangulation is correct. The best way to do this is to define the triangulation within the underlying parametrization, and then let Matplotlib project this triangulation into the three-dimensional space of the Möbius strip. This can be accomplished as follows: End of explanation
5,084
Given the following text description, write Python code to implement the functionality described below step by step Description: Corrections and revisions Variable names and types Step1: So pay attention to the names given to formal parameters; it can also be useful to guard against mistakes by using the try statement, which provides exception handling Step2: Strings and Lists Classic mistakes with character strings A list comprehension can be used inside a str, but sometimes it is dangerous to look for too compact solutions. Here the misuse of a list comprehension is the cause of the problem Step3: Looking for a more explicit solution, we can include a conditional test Step4: We can also use a try statement, sometimes called a catch; here we catch a ZeroDivisionError Step5: Character strings are first of all useful for reworking the presentation of a table Step6: in this example we make the following call Step7: Classic mistakes with lists Step8: Visualizing a physical phenomenon Visualizing a physical phenomenon $$B(x)=\frac{\mu_0}{4\pi} I_0 \frac{r^2}{2 (r^2 +x^2)^{3/2}}$$ The Biot-Savart equation
Python Code: def tronquer_1( l ): return l[1:] l=[1,2,3] tronquer_1(l) Explanation: Corrections et révisions Nom de variables et type End of explanation def tronquer_liste( ma_liste ): try: return ma_liste[1:] except TypeError: print("Cette fonction n'accepte que des listes ou des chaînes de caractères") l=[1,2,3] tronquer_liste(l) Explanation: Donc attention aux nom donné aux paramêtres formels, il peut être aussi utile de se premunir contre des fautes, en utilisant la commande try qui permet la gestion des exceptions End of explanation def extraire_avant_premier(s): return s[ : str.find(s,",") ] print( extraire_avant_premier("chat") ) ## cha Explanation: Chaînes et Listes Erreurs classiques sur les Chaines de caractères A list comprehension can be used inside a str, but sometimes it is dangerous to look for too compact solutions. Ici la mauvaise utilisation d'une liste de comprehension est la raison du problème End of explanation def extraire_avant_premier(s): if str.find(s,",") == -1: return s return s[ : str.find(s,",") ] print( extraire_avant_premier("chat") ) ## Affiche chat Explanation: En cherchant une solution plus détaillé, on peut inclure un test conditionnel End of explanation def extraire_avant_premier(s): try: 1/(1+str.find(s,",")) return s[ : str.find(s,",") ] except ZeroDivisionError: return s print( extraire_avant_premier("chat, chien") ) ## Affiche chat Explanation: On peut également utiliser une déclaration comme try on appelle cela un catch On peut utiliser un catch de ZeroDivisionError End of explanation def completer_2(s,n): return(s.ljust(n,' ')) Explanation: Les chaines de caractères, sont d'abord utiles pour retravailler la présentation d'un tableau End of explanation Mon_Nom1 = 'DE SAINT-EXUPERY' Mon_Nom2 = 'CAMUS' #? Mon_Nom.ljust print(Mon_Nom1.ljust(25,' '),'present !') print(Mon_Nom2.ljust(25,' '),"c'est moi") Explanation: dans cet exemple on va faire l'appel End of explanation #Les Delistes def tronquer_2(n): if n== []: return [] else: del n[0] return n liste_1 = [1,2,3] liste_2 = tronquer_2( liste_1 ) print( liste_1 ) ## affiche [2,3] print( liste_2 ) ## affiche [2,3] #Les Popistes def tronquer_2(a): if a == [] : return [] a.pop(0) return a liste_1 = [1,2,3] liste_2 = tronquer_2( liste_1 ) print( liste_1 ) ## affiche [2,3] print( liste_2 ) ## affiche [2,3] Explanation: Erreurs classiques sur les listes End of explanation # -*- coding: utf-8 -*- import numpy as np # Constantes my0=4*np.pi*1e-7; # permeabilite du vide I0=-1; # intensité du courant # le courant circule de gauche a droite # Dimensions d=25*1e-3 # Diametre de la spire (m) r=d/2 # rayon de la spire (m) segments=100 # discretisation de la spire alpha = 2*np.pi/(segments-1) # discretisation de l'angle # initialisation de la spire x=[i*0 for i in range(segments)] y=[r*np.sin(i*alpha) for i in range(segments)] z=[-r*np.cos(i*alpha) for i in range(segments)] # Definition du sens du positif du courant : gauche -> droite # pour le calcul les longeurs sont exprimees en m x_spire=np.array([x]); y_spire=np.array([y]); z_spire=np.array([z]); %matplotlib inline #%%%%%%%%%%%%%%% Affichage de la spire en 3D %%%%%%%%%%%%%%%%# from mpl_toolkits.mplot3d import Axes3D import matplotlib.pyplot as plt import matplotlib.patches as patches fig = plt.figure() ax = fig.gca(projection='3d') #titre du graphe plt.title ('Simulation numerique d une spire', color='b') # spire plt.plot(y_spire [0], z_spire[0], 'g.', linewidth=1, label='spire') plt.plot(y_spire, z_spire, 'g.', linewidth=1) # vecteur champ magnetique Bx 
plt.plot(([0.003,0.003]),([0,0]),([-.05,0.05]), 'b-',label='axe de la spire',linewidth=1) plt.plot(([0.003,0.003]),([0,0]),([-.01,0.02]), 'm-',label='champ magnetique',linewidth=1) plt.plot([0.003],[0],[0.025], 'm', marker='^', linewidth=2) ax.text(0.004,0,0,'Bx', color='m') #courant plt.plot([0],[0.0125],[0], 'r', marker='<') ax.text(0,0.0125,0.01,'I', color='r') #legende du graphe plt.legend (loc='lower left', prop={'size':8}) plt.show() #%%%%%%%%%% Calcul du champ magnetique d apres Biot et Savart ndp=50 # Nombre de points # limites x xmin,xmax=-0.05,0.05 # limites y ymin, ymax=-0.05, 0.05 # limites z zmin, zmax=-0.05,0.05 dx=(xmax-xmin)/(ndp-1) #increment x dy=(ymax-ymin)/(ndp-1) # increment y dz=(zmax-zmin)/(ndp-1) # increment z #%%%%%%%%%%%%%%% Calcul magnetostatique %%%%%%%%%%%%%%%%%%%%%%%%%%%# bxf=np.zeros(ndp) # initialisation de la composante Bx du champ byf=np.zeros(ndp) # initialisation de la composante By du champ bzf=np.zeros(ndp) # initialisation de la composante Bz du champ I0f1=my0*I0/(4*np.pi) # Magnetostatique (on multiplie le courant #I0 par mu0/(4.pi)) # Integration du champ induit en un point de l'axe bleu # par le courant circulant sur chaque segment de la spire verte bfx,bfy,bfz=0,0,0 nseg=np.size(z_spire)-1 for i in range(ndp): #Initialisation des positions xM=(xmin+i*dx) yM,zM=0,0 #Initialisation des champs locaux bfx,bfy,bfz=0,0,0 R=np.array([xM,yM,zM]) # vecteur position sur # le point qui doit etre calcule # en integrant la contribution # de tous les courants le long # de la boucle verte for wseg in range(nseg): xs=x_spire[0][wseg] ys=y_spire[0][wseg] zs=z_spire[0][wseg] Rs=np.array([xs, ys, zs]) drsx=(x_spire[0][wseg+1]-x_spire[0][wseg]) drsy=(y_spire[0][wseg+1]-y_spire[0][wseg]) drsz=(z_spire[0][wseg+1]-z_spire[0][wseg]) drs=np.array([drsx, drsy, drsz]) #direction du courant Delta_R= Rs - R #vecteur entre l'élement de spire et #le point où est calcul le champ Delta_Rdist=np.sqrt(Delta_R[0]**2+Delta_R[1]**2+Delta_R[2]**2) #Delta_Rdis2=Delta_Rdist**2 Delta_Rdis3=Delta_Rdist**3 b2=1.0/Delta_Rdis3 b12=I0f1*b2*(-1) # Produit vectoriel Delta_Rxdrs_x=Delta_R[1]*drsz-Delta_R[2]*drsy Delta_Rxdrs_y=Delta_R[2]*drsx-Delta_R[0]*drsz Delta_Rxdrs_z=Delta_R[0]*drsy-Delta_R[1]*drsx #Intégration bfx=bfx+b12*Delta_Rxdrs_x bfy=bfy+b12*Delta_Rxdrs_y bfz=bfz+b12*Delta_Rxdrs_z # Il faut utiliser un champ defini comme 3 listes : # une liste pour chaque abscisse bxf[i]+=bfx byf[i]+=bfy bzf[i]+=bfz #%%%%%%%%%%% Modele Theorique %%%%%%%%%%%%%%%%%%# #r=d/2; # rayon de la spire en mm #r=r*1e-3; # rayon de la spire en m #valeur absolue de la composante Bx du champ magnetique en fonction de la position sur l axe de la spire bx_analytique=[abs(my0*I0)*(r)**2/(2*((r)**2+(x)**2)**(3/2)) for x in np.linspace(xmin, xmax, ndp, endpoint = True)] #%%%%%%%%%%% Visualisation %%%%%%%%%%%%%%%%%%%%%%%# #trace de la valeur absolue de la composante Bx du champ magnetique en fonction de x plt.plot(np.linspace(xmin, xmax, ndp, endpoint = True) , bxf,'bo') plt.plot(np.linspace(xmin, xmax, ndp, endpoint = True) , bx_analytique,'r-') plt.title ('Champ magnetique Bx le long de l axe Ox de la spire', color='b') plt.xlabel('position sur l axe Ox', color='b') plt.ylabel ('Bx', color='b') #etude de la composante bx1 du champ magnetique en differents points du plan xOy #initialisation du champ magnétique bx1 bx1=0 by1=0 #Initialisation de l energie Em=0 Emf=0 #Indice pour les listes l=0 #ndp-1 intervalles ndp1=41 #### pour verification de Bx sur l'axe Ox avec yM,zM=0,0 ##BD seul cas ou la comparaison analytique est 
possible #### #initialisation des listes des coordonnees des points s = (ndp1,ndp1) xM1f=np.zeros(s) yM1f=np.zeros(s) #zM1f=np.zeros(ndp1*ndp1) u = np.zeros(s) X = np.zeros(s) Y = np.zeros(s) #initialisation de la liste des valeurs de la composante Bx1 en chaque point bx1f=np.zeros(s) #initialisation de la liste des valeurs de la composante By1 en chaque point by1f=np.zeros(s) #initialisation de la liste des valeurs de la composante By1 en chaque point bz1f=np.zeros(s) #calcul exact du coefficient b0 de l'integrale servant a calculer Bx #selon la formule: b0=mu0*I0*r/(2*pi) b0=2*10**(-7)*I0*r #coordonnees des points xM1f = np.linspace(-.03,.03,ndp1) yM1f = np.linspace(-.03,.03,ndp1) (X,Y) = np.meshgrid(xM1f,yM1f) #calcul de Bx1 en des points regulierement espaces du plan xoy autour de la spire ##BD Physiquement ce calcul doit s'appuyer sur un modele numerique car l'equation analytique est injustifiable des que l'on est excentre #Initialisation des positions xM, yM,zM=0,0,0 for i in range(ndp1): #Initialisation des champs locaux bfx,bfy,bfz=0,0,0 for j in range(ndp1): xM=X[i][j] yM=Y[i][j] #Initialisation des champs locaux bfx,bfy,bfz=0,0,0 R=np.array([xM,yM,zM]) # vecteur position sur # le point qui doit etre calcule # en integrant la contribution # de tous les courants le long # de la boucle verte for wseg in range(nseg): xs=x_spire[0][wseg] ys=y_spire[0][wseg] zs=z_spire[0][wseg] Rs=np.array([xs, ys, zs]) drsx=(x_spire[0][wseg+1]-x_spire[0][wseg]) drsy=(y_spire[0][wseg+1]-y_spire[0][wseg]) drsz=(z_spire[0][wseg+1]-z_spire[0][wseg]) drs=np.array([drsx, drsy, drsz]) #direction du courant Delta_R= Rs - R #vecteur entre l'élement de spire et #le point où est calcul le champ Delta_Rdist=np.sqrt(Delta_R[0]**2+Delta_R[1]**2+Delta_R[2]**2) #Delta_Rdis2=Delta_Rdist**2 Delta_Rdis3=Delta_Rdist**3 b2=1.0/Delta_Rdis3 b12=I0f1*b2*(-1) # Produit vectoriel Delta_Rxdrs_x=Delta_R[1]*drsz-Delta_R[2]*drsy Delta_Rxdrs_y=Delta_R[2]*drsx-Delta_R[0]*drsz Delta_Rxdrs_z=Delta_R[0]*drsy-Delta_R[1]*drsx #Intégration bfx=bfx+b12*Delta_Rxdrs_x bfy=bfy+b12*Delta_Rxdrs_y bfz=bfz+b12*Delta_Rxdrs_z # Il faut utiliser un champ defini comme 3 listes : # une liste pour chaque abscisse bx1f[i][j]+=bfx by1f[i][j]+=bfy bz1f[i][j]+=bfz #tracé des vecteurs Bx en chaque point #tracé des vecteurs B dans le plan xOy pour vérifier les lignes de champ magnétique q = plt.quiver(X,Y,bx1f,by1f,color='r') plt.title ('Cartographie des composantes Bx,By dans le plan xOy',color='b') plt.xlabel ('axe y',color='b') plt.ylabel ('axe x',color='b') plt.show() Explanation: Visualisation d'un phénomène physique Visualisation d'un phénomène physique $$B(x)=\frac{\mu_o}{4\pi}.I_o.\frac{r^2}{2 (r^2 +x^2)^{3/2} }$$ L'equation de Biot-et-Savart End of explanation
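For a quick sanity check of the simulation above, the analytic on-axis field can be computed directly from the closed form that the notebook's own bx_analytique line uses, B(x) = mu0*|I0|*r^2 / (2*(r^2 + x^2)^(3/2)); note that this closed form carries no extra 1/(4*pi) factor, unlike the displayed formula. A short self-contained sketch reusing the notebook's constants (mu0 = 4*pi*1e-7, I0 = -1 A, loop diameter 25 mm); the variable names here are illustrative.

import numpy as np
import matplotlib.pyplot as plt

mu0 = 4 * np.pi * 1e-7      # vacuum permeability (T*m/A)
I0 = -1.0                   # loop current (A), same sign convention as the notebook
r = 25e-3 / 2               # loop radius (m)

x = np.linspace(-0.05, 0.05, 200)
# analytic on-axis field of a circular current loop
Bx = np.abs(mu0 * I0) * r ** 2 / (2 * (r ** 2 + x ** 2) ** 1.5)

plt.plot(x, Bx, 'r-')
plt.xlabel('position x on the loop axis (m)')
plt.ylabel('|Bx| (T)')
plt.title('Analytic on-axis field of a circular current loop')
plt.show()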
5,085
Given the following text description, write Python code to implement the functionality described below step by step Description: Multimodal entailment Author Step1: Imports Step2: Define a label map Step3: Collect the dataset The original dataset is available here. It comes with URLs of images which are hosted on Twitter's photo storage system called the Photo Blob Storage (PBS for short). We will be working with the downloaded images along with additional data that comes with the original dataset. Thanks to Nilabhra Roy Chowdhury who worked on preparing the image data. Step4: Read the dataset and apply basic preprocessing Step5: The columns we are interested in are the following Step6: Dataset visualization Step7: Train/test split The dataset suffers from class imbalance problem. We can confirm that in the following cell. Step8: To account for that we will go for a stratified split. Step9: Data input pipeline TensorFlow Hub provides variety of BERT family of models. Each of those models comes with a corresponding preprocessing layer. You can learn more about these models and their preprocessing layers from this resource. To keep the runtime of this example relatively short, we will use a smaller variant of the original BERT model. Step11: Our text preprocessing code mostly comes from this tutorial. You are highly encouraged to check out the tutorial to learn more about the input preprocessing. Step12: Run the preprocessor on a sample input Step13: We will now create tf.data.Dataset objects from the dataframes. Note that the text inputs will be preprocessed as a part of the data input pipeline. But the preprocessing modules can also be a part of their corresponding BERT models. This helps reduce the training/serving skew and lets our models operate with raw text inputs. Follow this tutorial to learn more about how to incorporate the preprocessing modules directly inside the models. Step14: Preprocessing utilities Step15: Create the final datasets Step16: Model building utilities Our final model will accept two images along with their text counterparts. While the images will be directly fed to the model the text inputs will first be preprocessed and then will make it into the model. Below is a visual illustration of this approach Step17: Vision encoder utilities Step18: Text encoder utilities Step19: Multimodal model utilities Step20: You can inspect the structure of the individual encoders as well by setting the expand_nested argument of plot_model() to True. You are encouraged to play with the different hyperparameters involved in building this model and observe how the final performance is affected. Compile and train the model Step21: Evaluate the model
Python Code: !pip install -q tensorflow_text Explanation: Multimodal entailment Author: Sayak Paul<br> Date created: 2021/08/08<br> Last modified: 2021/08/15<br> Description: Training a multimodal model for predicting entailment. Introduction In this example, we will build and train a model for predicting multimodal entailment. We will be using the multimodal entailment dataset recently introduced by Google Research. What is multimodal entailment? On social media platforms, to audit and moderate content we may want to find answers to the following questions in near real-time: Does a given piece of information contradict the other? Does a given piece of information imply the other? In NLP, this task is called analyzing textual entailment. However, that's only when the information comes from text content. In practice, it's often the case the information available comes not just from text content, but from a multimodal combination of text, images, audio, video, etc. Multimodal entailment is simply the extension of textual entailment to a variety of new input modalities. Requirements This example requires TensorFlow 2.5 or higher. In addition, TensorFlow Hub and TensorFlow Text are required for the BERT model (Devlin et al.). These libraries can be installed using the following command: End of explanation from sklearn.model_selection import train_test_split import matplotlib.pyplot as plt import pandas as pd import numpy as np import os import tensorflow as tf import tensorflow_hub as hub import tensorflow_text as text from tensorflow import keras Explanation: Imports End of explanation label_map = {"Contradictory": 0, "Implies": 1, "NoEntailment": 2} Explanation: Define a label map End of explanation image_base_path = keras.utils.get_file( "tweet_images", "https://github.com/sayakpaul/Multimodal-Entailment-Baseline/releases/download/v1.0.0/tweet_images.tar.gz", untar=True, ) Explanation: Collect the dataset The original dataset is available here. It comes with URLs of images which are hosted on Twitter's photo storage system called the Photo Blob Storage (PBS for short). We will be working with the downloaded images along with additional data that comes with the original dataset. Thanks to Nilabhra Roy Chowdhury who worked on preparing the image data. End of explanation df = pd.read_csv( "https://github.com/sayakpaul/Multimodal-Entailment-Baseline/raw/main/csvs/tweets.csv" ) df.sample(10) Explanation: Read the dataset and apply basic preprocessing End of explanation images_one_paths = [] images_two_paths = [] for idx in range(len(df)): current_row = df.iloc[idx] id_1 = current_row["id_1"] id_2 = current_row["id_2"] extentsion_one = current_row["image_1"].split(".")[-1] extentsion_two = current_row["image_2"].split(".")[-1] image_one_path = os.path.join(image_base_path, str(id_1) + f".{extentsion_one}") image_two_path = os.path.join(image_base_path, str(id_2) + f".{extentsion_two}") images_one_paths.append(image_one_path) images_two_paths.append(image_two_path) df["image_1_path"] = images_one_paths df["image_2_path"] = images_two_paths # Create another column containing the integer ids of # the string labels. df["label_idx"] = df["label"].apply(lambda x: label_map[x]) Explanation: The columns we are interested in are the following: text_1 image_1 text_2 image_2 label The entailment task is formulated as the following: Given the pairs of (text_1, image_1) and (text_2, image_2) do they entail (or not entail or contradict) each other? We have the images already downloaded. 
image_1 is downloaded as id1 as its filename and image2 is downloaded as id2 as its filename. In the next step, we will add two more columns to df - filepaths of image_1s and image_2s. End of explanation def visualize(idx): current_row = df.iloc[idx] image_1 = plt.imread(current_row["image_1_path"]) image_2 = plt.imread(current_row["image_2_path"]) text_1 = current_row["text_1"] text_2 = current_row["text_2"] label = current_row["label"] plt.subplot(1, 2, 1) plt.imshow(image_1) plt.axis("off") plt.title("Image One") plt.subplot(1, 2, 2) plt.imshow(image_1) plt.axis("off") plt.title("Image Two") plt.show() print(f"Text one: {text_1}") print(f"Text two: {text_2}") print(f"Label: {label}") random_idx = np.random.choice(len(df)) visualize(random_idx) random_idx = np.random.choice(len(df)) visualize(random_idx) Explanation: Dataset visualization End of explanation df["label"].value_counts() Explanation: Train/test split The dataset suffers from class imbalance problem. We can confirm that in the following cell. End of explanation # 10% for test train_df, test_df = train_test_split( df, test_size=0.1, stratify=df["label"].values, random_state=42 ) # 5% for validation train_df, val_df = train_test_split( train_df, test_size=0.05, stratify=train_df["label"].values, random_state=42 ) print(f"Total training examples: {len(train_df)}") print(f"Total validation examples: {len(val_df)}") print(f"Total test examples: {len(test_df)}") Explanation: To account for that we will go for a stratified split. End of explanation # Define TF Hub paths to the BERT encoder and its preprocessor bert_model_path = ( "https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-2_H-256_A-4/1" ) bert_preprocess_path = "https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3" Explanation: Data input pipeline TensorFlow Hub provides variety of BERT family of models. Each of those models comes with a corresponding preprocessing layer. You can learn more about these models and their preprocessing layers from this resource. To keep the runtime of this example relatively short, we will use a smaller variant of the original BERT model. End of explanation def make_bert_preprocessing_model(sentence_features, seq_length=128): Returns Model mapping string features to BERT inputs. Args: sentence_features: A list with the names of string-valued features. seq_length: An integer that defines the sequence length of BERT inputs. Returns: A Keras Model that can be called on a list or dict of string Tensors (with the order or names, resp., given by sentence_features) and returns a dict of tensors for input to BERT. input_segments = [ tf.keras.layers.Input(shape=(), dtype=tf.string, name=ft) for ft in sentence_features ] # Tokenize the text to word pieces. bert_preprocess = hub.load(bert_preprocess_path) tokenizer = hub.KerasLayer(bert_preprocess.tokenize, name="tokenizer") segments = [tokenizer(s) for s in input_segments] # Optional: Trim segments in a smart way to fit seq_length. # Simple cases (like this example) can skip this step and let # the next step apply a default truncation to approximately equal lengths. truncated_segments = segments # Pack inputs. The details (start/end token ids, dict of output tensors) # are model-dependent, so this gets loaded from the SavedModel. 
packer = hub.KerasLayer( bert_preprocess.bert_pack_inputs, arguments=dict(seq_length=seq_length), name="packer", ) model_inputs = packer(truncated_segments) return keras.Model(input_segments, model_inputs) bert_preprocess_model = make_bert_preprocessing_model(["text_1", "text_2"]) keras.utils.plot_model(bert_preprocess_model, show_shapes=True, show_dtype=True) Explanation: Our text preprocessing code mostly comes from this tutorial. You are highly encouraged to check out the tutorial to learn more about the input preprocessing. End of explanation idx = np.random.choice(len(train_df)) row = train_df.iloc[idx] sample_text_1, sample_text_2 = row["text_1"], row["text_2"] print(f"Text 1: {sample_text_1}") print(f"Text 2: {sample_text_2}") test_text = [np.array([sample_text_1]), np.array([sample_text_2])] text_preprocessed = bert_preprocess_model(test_text) print("Keys : ", list(text_preprocessed.keys())) print("Shape Word Ids : ", text_preprocessed["input_word_ids"].shape) print("Word Ids : ", text_preprocessed["input_word_ids"][0, :16]) print("Shape Mask : ", text_preprocessed["input_mask"].shape) print("Input Mask : ", text_preprocessed["input_mask"][0, :16]) print("Shape Type Ids : ", text_preprocessed["input_type_ids"].shape) print("Type Ids : ", text_preprocessed["input_type_ids"][0, :16]) Explanation: Run the preprocessor on a sample input End of explanation def dataframe_to_dataset(dataframe): columns = ["image_1_path", "image_2_path", "text_1", "text_2", "label_idx"] dataframe = dataframe[columns].copy() labels = dataframe.pop("label_idx") ds = tf.data.Dataset.from_tensor_slices((dict(dataframe), labels)) ds = ds.shuffle(buffer_size=len(dataframe)) return ds Explanation: We will now create tf.data.Dataset objects from the dataframes. Note that the text inputs will be preprocessed as a part of the data input pipeline. But the preprocessing modules can also be a part of their corresponding BERT models. This helps reduce the training/serving skew and lets our models operate with raw text inputs. Follow this tutorial to learn more about how to incorporate the preprocessing modules directly inside the models. 
End of explanation resize = (128, 128) bert_input_features = ["input_word_ids", "input_type_ids", "input_mask"] def preprocess_image(image_path): extension = tf.strings.split(image_path)[-1] image = tf.io.read_file(image_path) if extension == b"jpg": image = tf.image.decode_jpeg(image, 3) else: image = tf.image.decode_png(image, 3) image = tf.image.resize(image, resize) return image def preprocess_text(text_1, text_2): text_1 = tf.convert_to_tensor([text_1]) text_2 = tf.convert_to_tensor([text_2]) output = bert_preprocess_model([text_1, text_2]) output = {feature: tf.squeeze(output[feature]) for feature in bert_input_features} return output def preprocess_text_and_image(sample): image_1 = preprocess_image(sample["image_1_path"]) image_2 = preprocess_image(sample["image_2_path"]) text = preprocess_text(sample["text_1"], sample["text_2"]) return {"image_1": image_1, "image_2": image_2, "text": text} Explanation: Preprocessing utilities End of explanation batch_size = 32 auto = tf.data.AUTOTUNE def prepare_dataset(dataframe, training=True): ds = dataframe_to_dataset(dataframe) if training: ds = ds.shuffle(len(train_df)) ds = ds.map(lambda x, y: (preprocess_text_and_image(x), y)).cache() ds = ds.batch(batch_size).prefetch(auto) return ds train_ds = prepare_dataset(train_df) validation_ds = prepare_dataset(val_df, False) test_ds = prepare_dataset(test_df, False) Explanation: Create the final datasets End of explanation def project_embeddings( embeddings, num_projection_layers, projection_dims, dropout_rate ): projected_embeddings = keras.layers.Dense(units=projection_dims)(embeddings) for _ in range(num_projection_layers): x = tf.nn.gelu(projected_embeddings) x = keras.layers.Dense(projection_dims)(x) x = keras.layers.Dropout(dropout_rate)(x) x = keras.layers.Add()([projected_embeddings, x]) projected_embeddings = keras.layers.LayerNormalization()(x) return projected_embeddings Explanation: Model building utilities Our final model will accept two images along with their text counterparts. While the images will be directly fed to the model the text inputs will first be preprocessed and then will make it into the model. Below is a visual illustration of this approach: The model consists of the following elements: A standalone encoder for the images. We will use a ResNet50V2 pre-trained on the ImageNet-1k dataset for this. A standalone encoder for the images. A pre-trained BERT will be used for this. After extracting the individual embeddings, they will be projected in an identical space. Finally, their projections will be concatenated and be fed to the final classification layer. This is a multi-class classification problem involving the following classes: NoEntailment Implies Contradictory project_embeddings(), create_vision_encoder(), and create_text_encoder() utilities are referred from this example. Projection utilities End of explanation def create_vision_encoder( num_projection_layers, projection_dims, dropout_rate, trainable=False ): # Load the pre-trained ResNet50V2 model to be used as the base encoder. resnet_v2 = keras.applications.ResNet50V2( include_top=False, weights="imagenet", pooling="avg" ) # Set the trainability of the base encoder. for layer in resnet_v2.layers: layer.trainable = trainable # Receive the images as inputs. image_1 = keras.Input(shape=(128, 128, 3), name="image_1") image_2 = keras.Input(shape=(128, 128, 3), name="image_2") # Preprocess the input image. 
preprocessed_1 = keras.applications.resnet_v2.preprocess_input(image_1) preprocessed_2 = keras.applications.resnet_v2.preprocess_input(image_2) # Generate the embeddings for the images using the resnet_v2 model # concatenate them. embeddings_1 = resnet_v2(preprocessed_1) embeddings_2 = resnet_v2(preprocessed_2) embeddings = keras.layers.Concatenate()([embeddings_1, embeddings_2]) # Project the embeddings produced by the model. outputs = project_embeddings( embeddings, num_projection_layers, projection_dims, dropout_rate ) # Create the vision encoder model. return keras.Model([image_1, image_2], outputs, name="vision_encoder") Explanation: Vision encoder utilities End of explanation def create_text_encoder( num_projection_layers, projection_dims, dropout_rate, trainable=False ): # Load the pre-trained BERT model to be used as the base encoder. bert = hub.KerasLayer(bert_model_path, name="bert",) # Set the trainability of the base encoder. bert.trainable = trainable # Receive the text as inputs. bert_input_features = ["input_type_ids", "input_mask", "input_word_ids"] inputs = { feature: keras.Input(shape=(128,), dtype=tf.int32, name=feature) for feature in bert_input_features } # Generate embeddings for the preprocessed text using the BERT model. embeddings = bert(inputs)["pooled_output"] # Project the embeddings produced by the model. outputs = project_embeddings( embeddings, num_projection_layers, projection_dims, dropout_rate ) # Create the text encoder model. return keras.Model(inputs, outputs, name="text_encoder") Explanation: Text encoder utilities End of explanation def create_multimodal_model( num_projection_layers=1, projection_dims=256, dropout_rate=0.1, vision_trainable=False, text_trainable=False, ): # Receive the images as inputs. image_1 = keras.Input(shape=(128, 128, 3), name="image_1") image_2 = keras.Input(shape=(128, 128, 3), name="image_2") # Receive the text as inputs. bert_input_features = ["input_type_ids", "input_mask", "input_word_ids"] text_inputs = { feature: keras.Input(shape=(128,), dtype=tf.int32, name=feature) for feature in bert_input_features } # Create the encoders. vision_encoder = create_vision_encoder( num_projection_layers, projection_dims, dropout_rate, vision_trainable ) text_encoder = create_text_encoder( num_projection_layers, projection_dims, dropout_rate, text_trainable ) # Fetch the embedding projections. vision_projections = vision_encoder([image_1, image_2]) text_projections = text_encoder(text_inputs) # Concatenate the projections and pass through the classification layer. concatenated = keras.layers.Concatenate()([vision_projections, text_projections]) outputs = keras.layers.Dense(3, activation="softmax")(concatenated) return keras.Model([image_1, image_2, text_inputs], outputs) multimodal_model = create_multimodal_model() keras.utils.plot_model(multimodal_model, show_shapes=True) Explanation: Multimodal model utilities End of explanation multimodal_model.compile( optimizer="adam", loss="sparse_categorical_crossentropy", metrics="accuracy" ) history = multimodal_model.fit(train_ds, validation_data=validation_ds, epochs=10) Explanation: You can inspect the structure of the individual encoders as well by setting the expand_nested argument of plot_model() to True. You are encouraged to play with the different hyperparameters involved in building this model and observe how the final performance is affected. 
Compile and train the model End of explanation _, acc = multimodal_model.evaluate(test_ds) print(f"Accuracy on the test set: {round(acc * 100, 2)}%.") Explanation: Evaluate the model End of explanation
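After evaluation, predictions can be mapped back to the string labels through label_map. A minimal inference sketch, assuming the multimodal_model, test_ds, and label_map defined above; the slicing of the first five examples is purely illustrative.

import numpy as np

idx_to_label = {v: k for k, v in label_map.items()}

# predict() accepts the tf.data.Dataset directly and ignores its labels
probabilities = multimodal_model.predict(test_ds)
predicted_indices = np.argmax(probabilities, axis=1)
true_indices = np.concatenate([labels.numpy() for _, labels in test_ds])

for true_idx, pred_idx in zip(true_indices[:5], predicted_indices[:5]):
    print("true:", idx_to_label[int(true_idx)], "| predicted:", idx_to_label[int(pred_idx)])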
5,086
Given the following text description, write Python code to implement the functionality described below step by step Description: Create BigQuery stored procedures This notebook is the second of two notebooks that guide you through completing the prerequisites for running the Real-time Item-to-item Recommendation with BigQuery ML Matrix Factorization and ScaNN solution. Use this notebook to create the following stored procedures that are needed by the solution Step1: Import libraries Step2: Configure GCP environment settings Update the following variables to reflect the values for your GCP environment Step3: Authenticate your GCP account This is required if you run the notebook in Colab. If you use an AI Platform notebook, you should already be authenticated. Step4: Create the stored procedure dependencies Step5: Create the stored procedures Run the scripts that create the BigQuery stored procedures. Step6: List the stored procedures
Python Code: !pip install -q -U google-cloud-bigquery pyarrow Explanation: Create BigQuery stored procedures This notebook is the second of two notebooks that guide you through completing the prerequisites for running the Real-time Item-to-item Recommendation with BigQuery ML Matrix Factorization and ScaNN solution. Use this notebook to create the following stored procedures that are needed by the solution: sp_ComputePMI - Computes pointwise mutual information (PMI) from item co-occurence data. This data is used by a matrix factorization model to learn item embeddings. sp_TrainItemMatchingModel - Creates the item_embedding_model matrix factorization model. This model learns item embeddings based on the PMI data computed by sp_ComputePMI. sp_ExractEmbeddings - Extracts the item embedding values from the item_embedding_model model, aggregates these values to produce a single embedding vector for each item, and stores these vectors in the item_embeddings table. The vector data is later exported to Cloud Storage to be used for item embedding lookup. Before starting this notebook, you must run the 00_prep_bq_and_datastore notebook to complete the first part of the prerequisites. After completing this notebook, you can run the solution either step-by-step or with a TFX pipeline: To start running the solution step-by-step, run the 01_train_bqml_mf_pmi notebook to create item embeddings. To run the solution by using a TFX pipeline, run the tfx01_interactive notebook to create the pipeline. Setup Install the required Python packages, configure the environment variables, and authenticate your GCP account. End of explanation import os from google.cloud import bigquery Explanation: Import libraries End of explanation PROJECT_ID = 'yourProject' # Change to your project. BUCKET = 'yourBucketName' # Change to the bucket you created. SQL_SCRIPTS_DIR = 'sql_scripts' BQ_DATASET_NAME = 'recommendations' !gcloud config set project $PROJECT_ID Explanation: Configure GCP environment settings Update the following variables to reflect the values for your GCP environment: PROJECT_ID: The ID of the Google Cloud project you are using to implement this solution. BUCKET: The name of the Cloud Storage bucket you created to use with this solution. The BUCKET value should be just the bucket name, so myBucket rather than gs://myBucket. End of explanation try: from google.colab import auth auth.authenticate_user() print("Colab user is authenticated.") except: pass Explanation: Authenticate your GCP account This is required if you run the notebook in Colab. If you use an AI Platform notebook, you should already be authenticated. 
End of explanation %%bigquery --project $PROJECT_ID CREATE TABLE IF NOT EXISTS recommendations.item_cooc AS SELECT 0 AS item1_Id, 0 AS item2_Id, 0 AS cooc, 0 AS pmi; %%bigquery --project $PROJECT_ID CREATE MODEL IF NOT EXISTS recommendations.item_matching_model OPTIONS( MODEL_TYPE='matrix_factorization', USER_COL='item1_Id', ITEM_COL='item2_Id', RATING_COL='score' ) AS SELECT 0 AS item1_Id, 0 AS item2_Id, 0 AS score; Explanation: Create the stored procedure dependencies End of explanation client = bigquery.Client(project=PROJECT_ID) sql_scripts = dict() for script_file in [file for file in os.listdir(SQL_SCRIPTS_DIR) if '.sql' in file]: script_file_path = os.path.join(SQL_SCRIPTS_DIR, script_file) sql_script = open(script_file_path, 'r').read() sql_script = sql_script.replace('@DATASET_NAME', BQ_DATASET_NAME) sql_scripts[script_file] = sql_script for script_file in sql_scripts: print(f'Executing {script_file} script...') query = sql_scripts[script_file] query_job = client.query(query) result = query_job.result() print('Done.') Explanation: Create the stored procedures Run the scripts that create the BigQuery stored procedures. End of explanation query = f'SELECT * FROM {BQ_DATASET_NAME}.INFORMATION_SCHEMA.ROUTINES;' query_job = client.query(query) query_job.result().to_dataframe() Explanation: List the stored procedures End of explanation
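It can also be worth confirming that the placeholder dependencies created earlier are in place before moving on to the solution notebooks. A small sketch reusing the client, PROJECT_ID, and BQ_DATASET_NAME defined above; treat it as an optional check rather than part of the original notebook.

# Confirm the placeholder dependency table has rows and the placeholder model is listed.
count_query = f"SELECT COUNT(*) AS row_count FROM `{PROJECT_ID}.{BQ_DATASET_NAME}.item_cooc`"
print(client.query(count_query).result().to_dataframe())

for model in client.list_models(f"{PROJECT_ID}.{BQ_DATASET_NAME}"):
    print("Found model:", model.model_id)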
5,087
Given the following text description, write Python code to implement the functionality described below step by step Description: This notebook can be used to test the basic features of a Catalog object, which is defined by the class definition in skyofstars/catalogs.py. This will only work if you have installed sky-of-stars as a package, as described in the README. Step1: We defined a new kind of Python variable by writing a class definition for a Catalog. We can create a new one of these objects as follows.
Python Code: # import everything from skyofstars from skyofstars.examples import * example = create_test_catalog() Explanation: This notebook can be used to test the basic features of a Catalog object, which is defined by the class definition in skyofstars/catalogs.py. This will only work if you have installed sky-of-stars as a package, as described in the README. End of explanation example.coordinates = example.coordinates.transform_to('icrs') print(example.coordinates.icrs.ra.rad) # we can also access all the functions we defined *inside* the Catalog example.plot_celestial(s=5, alpha=0.2) # we can list all the stuff defined inside the Catalog like this: dir(example) Explanation: We defined a new kind of Python variable by writing a class definition for a Catalog. We can create a new one of these objects as follows. End of explanation
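The coordinates attribute used above behaves like an astropy SkyCoord: it supports transform_to and exposes .icrs. For reference, the same pattern with plain astropy, independent of the skyofstars package; the positions below are made up for illustration only.

from astropy import units as u
from astropy.coordinates import SkyCoord

# two made-up positions, for illustration only
stars = SkyCoord(ra=[10.68, 83.82] * u.deg, dec=[41.27, -5.39] * u.deg, frame='icrs')

# same kind of call as example.coordinates.transform_to('icrs') above
galactic = stars.transform_to('galactic')
print(galactic.l.deg, galactic.b.deg)
print(stars.icrs.ra.rad)  # right ascension in radians, as printed in the notebook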
5,088
Given the following text description, write Python code to implement the functionality described below step by step Description: This work was done by Harsh Gupta as part of his internship at The Center for Internet & Society India Step2: Encrypted Media Extension Diversity Analysis Encrypted Media Extension (EME) is the controvertial draft standard at W3C which aims to aims to prevent copyright infrigement in digital video but opens up door for lots of issues regarding security, accessibility, privacy and interoperability. This notebook tries to analyze if the interests of the important stakeholders were well represented in the debate that happened on public-html mailing list of W3C. Methodology Any emails with EME, Encrypted Media or Digital Rights Managagement in the subject line is considered to about EME. Then each of the participant is categorized on the basis of region of the world they belong to and their employeer's interest to the debate. Notes about the participants can be found here. Region Methodology Step3: Notice that there is absolutely no one from Asia, Africa or South America. This is important because the DRM laws, attitude towards IP vary considerably across the world.
Python Code: import bigbang.mailman as mailman import bigbang.process as process from bigbang.archive import Archive import pandas as pd import datetime from commonregex import CommonRegex import matplotlib.pyplot as plt %matplotlib inline Explanation: This work was done by Harsh Gupta as part of his internship at The Center for Internet & Society India End of explanation def filter_messages(df, column, keywords): filters = [] for keyword in keywords: filters.append(df[column].str.contains(keyword, case=False)) return df[reduce(lambda p, q: p | q, filters)] # Get the Archieves pd.options.display.mpl_style = 'default' # pandas has a set of preferred graph formatting options mlist = mailman.open_list_archives("https://lists.w3.org/Archives/Public/public-html/", archive_dir="./archives") # The spaces around eme are **very** important otherwise it can catch things like "emerging", "implement" etc eme_messages = filter_messages(mlist, 'Subject', [' EME ', 'Encrypted Media', 'Digital Rights Managagement']) eme_activites = Archive.get_activity(Archive(eme_messages)) eme_activites.sum(0).sum() # XXX: Bugzilla might also contain discussions eme_activites.drop("[email protected]", axis=1, inplace=True) # Remove Dupicate senders levdf = process.sorted_matrix(eme_activites) consolidates = [] # gather pairs of names which have a distance of less than 10 for col in levdf.columns: for index, value in levdf.loc[levdf[col] < 10, col].iteritems(): if index != col: # the name shouldn't be a pair for itself consolidates.append((col, index)) # Handpick special cases which aren't covered with string matching consolidates.extend([(u'Kornel Lesi\u0144ski <[email protected]>', u'wrong string <[email protected]>'), (u'Charles McCathie Nevile <[email protected]>', u'Charles McCathieNevile <[email protected]>')]) eme_activites = process.consolidate_senders_activity(eme_activites, consolidates) sender_categories = pd.read_csv('people_tag.csv',delimiter=',', encoding="utf-8-sig") # match sender using email only sender_categories['email'] = map(lambda x: CommonRegex(x).emails[0].lower(), sender_categories['name_email']) sender_categories.index = sender_categories['email'] cat_dicts = { "region":{ 1: "Asia", 2: "Australia and New Zealand", 3: "Europe", 4: "Africa", 5: "North America", 6: "South America" }, "work":{ 1: "Foss Browser Developer", 2: "Content Provider", 3: "DRM platform provider", 4: "Accessibility", 5: "Security Researcher", 6: "Other W3C Empoyee", 7: "Privacy", 8: "None of the above" }, "gender":{ 1: "Female", 2: "Male" } } def get_cat_val_func(cat): Given category type, returns a function which gives the category value for a sender. def _get_cat_val(sender): try: sender_email = CommonRegex(sender).emails[0].lower() return cat_dicts[cat][sender_categories.loc[sender_email][cat]] except KeyError: return "Unknow" return _get_cat_val grouped = eme_activites.groupby(get_cat_val_func("region"), axis=1) print("Emails sent per region\n") print(grouped.sum().sum()) print("Total emails: %s" % grouped.sum().sum().sum()) print("Participants per region") for group in grouped.groups: print "%s: %s" % (group,len(grouped.get_group(group).sum())) print("Total participants: %s" % len(eme_activites.columns)) Explanation: Encrypted Media Extension Diversity Analysis Encrypted Media Extension (EME) is the controvertial draft standard at W3C which aims to aims to prevent copyright infrigement in digital video but opens up door for lots of issues regarding security, accessibility, privacy and interoperability. 
This notebook tries to analyze if the interests of the important stakeholders were well represented in the debate that happened on public-html mailing list of W3C. Methodology Any emails with EME, Encrypted Media or Digital Rights Managagement in the subject line is considered to about EME. Then each of the participant is categorized on the basis of region of the world they belong to and their employeer's interest to the debate. Notes about the participants can be found here. Region Methodology: Look up their personal website and social media accounts (Twitter, LinkedIn, Github) and see if it mentions the country they live in. (Works in Most of the cases) If the person's email has uses a country specific top level domain, assume that as the country If github profile is available look up the timezone on last 5 commits. For people who have moved from their home country consider the country where they live now. Work Methodology Look up their personal website and social media accounts (Twitter, LinkedIn, Github) and see if it mentions the employer and categorize accordingly. People who work on Accessibility, Privacy or Security but also fit into first three categories are categorized in one of the first three categories. For example someone who works on privacy in Google will be placed in "DRM platform provider" instead of "Privacy". If no other category can be assigned, then assign "None of the Above" Other Notes Google's position is very interesting, it is DRM provider as a browser manufacturer but also a content provider in Youtube and fair number of Google Employers are against EME due to other concerns. I've categorized Christian as Content provider because he works on Youtube, and I've placed everyone else as DRM provider. End of explanation grouped = eme_activites.groupby(get_cat_val_func("work"), axis=1) print("Emails sent per work category") print(grouped.sum().sum()) print("Participants per work category") for group in grouped.groups: print "%s: %s" % (group,len(grouped.get_group(group).sum())) grouped = eme_activites.groupby(get_cat_val_func("gender"), axis=1) print("Emails sent per Gender") print(grouped.sum().sum()) print("Participants per work category") for group in grouped.groups: print "%s: %s" % (group,len(grouped.get_group(group).sum())) Explanation: Notice that there is absolutely no one from Asia, Africa or South America. This is important because the DRM laws, attitude towards IP vary considerably across the world. End of explanation
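Beyond the per-category totals, the same eme_activites table can show how the discussion volume evolved over time. A short sketch, assuming (as in BigBang's usual activity tables) that the rows are indexed by message date; the monthly resampling frequency is just one reasonable choice.

import matplotlib.pyplot as plt

# Monthly volume of EME-related messages, summed over all senders.
monthly_volume = eme_activites.sum(axis=1).resample('M').sum()
monthly_volume.plot(kind='bar', figsize=(12, 4), title='EME-related messages per month')
plt.ylabel('messages')
plt.show()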
5,089
Given the following text description, write Python code to implement the functionality described below step by step Description: LAB 2b Step1: Set environment variables so that we can use them throughout the entire lab. We will be using our project ID for our bucket. Step2: The source dataset Our dataset is hosted in BigQuery. The CDC's Natality data has details on US births from 1969 to 2008 and is a publically available dataset, meaning anyone with a GCP account has access. Click here to access the dataset. The natality dataset is relatively large at almost 138 million rows and 31 columns, but simple to understand. weight_pounds is the target, the continuous value we’ll train a model to predict. Create a BigQuery Dataset and Google Cloud Storage Bucket A BigQuery dataset is a container for tables, views, and models built with BigQuery ML. Let's create one called babyweight if we have not already done so in an earlier lab. We'll do the same for a GCS bucket for our project too. Step3: Create the training and evaluation data tables Since there is already a publicly available dataset, we can simply create the training and evaluation data tables using this raw input data. First we are going to create a subset of the data limiting our columns to weight_pounds, is_male, mother_age, plurality, and gestation_weeks as well as some simple filtering and a column to hash on for repeatable splitting. Note Step4: Lab Task #2 Step5: Lab Task #3 Step6: Split augmented dataset into eval dataset Exercise Step7: Verify table creation Verify that you created the dataset and training data table. Step8: Lab Task #4 Step9: Verify CSV creation Verify that we correctly created the CSV files in our bucket.
Python Code: import os from google.cloud import bigquery Explanation: LAB 2b: Prepare babyweight dataset. Learning Objectives Setup up the environment Preprocess natality dataset Augment natality dataset Create the train and eval tables in BigQuery Export data from BigQuery to GCS in CSV format Introduction In this notebook, we will prepare the babyweight dataset for model development and training to predict the weight of a baby before it is born. We will use BigQuery to perform data augmentation and preprocessing which will be used for AutoML Tables, BigQuery ML, and Keras models trained on Cloud AI Platform. In this lab, we will set up the environment, create the project dataset, preprocess and augment natality dataset, create the train and eval tables in BigQuery, and export data from BigQuery to GCS in CSV format. Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook. Set up environment variables and load necessary libraries Import necessary libraries. End of explanation PROJECT = !gcloud config list --format 'value(core.project)' PROJECT = PROJECT[0] BUCKET = PROJECT REGION = "us-central1" os.environ["BUCKET"] = BUCKET os.environ["REGION"] = REGION Explanation: Set environment variables so that we can use them throughout the entire lab. We will be using our project ID for our bucket. End of explanation %%bash ## Create a BigQuery dataset for babyweight if it doesn't exist datasetexists=$(bq ls -d | grep -w ) # TODO: Add dataset name if [ -n "$datasetexists" ]; then echo -e "BigQuery dataset already exists, let's not recreate it." else echo "Creating BigQuery dataset titled: babyweight" bq --location=US mk --dataset \ --description "Babyweight" \ $PROJECT:# TODO: Add dataset name echo "Here are your current datasets:" bq ls fi ## Create GCS bucket if it doesn't exist already... exists=$(gsutil ls -d | grep -w gs://${BUCKET}/) if [ -n "$exists" ]; then echo -e "Bucket exists, let's not recreate it." else echo "Creating a new GCS bucket." gsutil mb -l ${REGION} gs://${BUCKET} echo "Here are your current buckets:" gsutil ls fi Explanation: The source dataset Our dataset is hosted in BigQuery. The CDC's Natality data has details on US births from 1969 to 2008 and is a publically available dataset, meaning anyone with a GCP account has access. Click here to access the dataset. The natality dataset is relatively large at almost 138 million rows and 31 columns, but simple to understand. weight_pounds is the target, the continuous value we’ll train a model to predict. Create a BigQuery Dataset and Google Cloud Storage Bucket A BigQuery dataset is a container for tables, views, and models built with BigQuery ML. Let's create one called babyweight if we have not already done so in an earlier lab. We'll do the same for a GCS bucket for our project too. End of explanation %%bigquery CREATE OR REPLACE TABLE babyweight.babyweight_data AS SELECT # TODO: Add selected raw features and preprocessed features FROM publicdata.samples.natality WHERE # TODO: Add filters Explanation: Create the training and evaluation data tables Since there is already a publicly available dataset, we can simply create the training and evaluation data tables using this raw input data. First we are going to create a subset of the data limiting our columns to weight_pounds, is_male, mother_age, plurality, and gestation_weeks as well as some simple filtering and a column to hash on for repeatable splitting. 
Note: The dataset in the create table code below is the one created previously, e.g. "babyweight". Lab Task #1: Preprocess and filter dataset We have some preprocessing and filtering we would like to do to get our data in the right format for training. Preprocessing: * Cast is_male from BOOL to STRING * Cast plurality from INTEGER to STRING where [1, 2, 3, 4, 5] becomes ["Single(1)", "Twins(2)", "Triplets(3)", "Quadruplets(4)", "Quintuplets(5)"] * Add hashcolumn hashing on year and month Filtering: * Only want data for years later than 2000 * Only want baby weights greater than 0 * Only want mothers whose age is greater than 0 * Only want plurality to be greater than 0 * Only want the number of weeks of gestation to be greater than 0 End of explanation %%bigquery CREATE OR REPLACE TABLE babyweight.babyweight_augmented_data AS SELECT weight_pounds, is_male, mother_age, plurality, gestation_weeks, hashmonth FROM babyweight.babyweight_data UNION ALL SELECT # TODO: Replace is_male and plurality as indicated above FROM babyweight.babyweight_data Explanation: Lab Task #2: Augment dataset to simulate missing data Now we want to augment our dataset with our simulated babyweight data by setting all gender information to Unknown and setting plurality of all non-single births to Multiple(2+). End of explanation %%bigquery CREATE OR REPLACE TABLE babyweight.babyweight_data_train AS SELECT weight_pounds, is_male, mother_age, plurality, gestation_weeks FROM babyweight.babyweight_augmented_data WHERE # TODO: Modulo hashmonth to be approximately 75% of the data Explanation: Lab Task #3: Split augmented dataset into train and eval sets Using hashmonth, apply a modulo to get approximately a 75/25 train/eval split. Split augmented dataset into train dataset Exercise: RUN the query to create the training data table. End of explanation %%bigquery CREATE OR REPLACE TABLE babyweight.babyweight_data_eval AS SELECT weight_pounds, is_male, mother_age, plurality, gestation_weeks FROM babyweight.babyweight_augmented_data WHERE # TODO: Modulo hashmonth to be approximately 25% of the data Explanation: Split augmented dataset into eval dataset Exercise: RUN the query to create the evaluation data table. End of explanation %%bigquery -- LIMIT 0 is a free query; this allows us to check that the table exists. SELECT * FROM babyweight.babyweight_data_train LIMIT 0 %%bigquery -- LIMIT 0 is a free query; this allows us to check that the table exists. SELECT * FROM babyweight.babyweight_data_eval LIMIT 0 Explanation: Verify table creation Verify that you created the dataset and training data table. End of explanation # Construct a BigQuery client object. client = bigquery.Client() dataset_name = # TODO: Add dataset name # Create dataset reference object dataset_ref = client.dataset( dataset_id=dataset_name, project=client.project) # Export both train and eval tables for step in [ # TODO: Loop over train and eval ]: destination_uri = os.path.join( "gs://", BUCKET, dataset_name, "data", "{}*.csv".format(step)) table_name = "babyweight_data_{}".format(step) table_ref = dataset_ref.table(table_name) extract_job = client.extract_table( table_ref, destination_uri, # Location must match that of the source table. location="US", ) # API request extract_job.result() # Waits for job to complete. 
print("Exported {}:{}.{} to {}".format( client.project, dataset_name, table_name, destination_uri)) Explanation: Lab Task #4: Export from BigQuery to CSVs in GCS Use BigQuery Python API to export our train and eval tables to Google Cloud Storage in the CSV format to be used later for TensorFlow/Keras training. We'll want to use the dataset we've been using above as well as repeat the process for both training and evaluation data. End of explanation %%bash gsutil ls gs://${BUCKET}/babyweight/data/*.csv %%bash gsutil cat gs://${BUCKET}/babyweight/data/train000000000000.csv | head -5 %%bash gsutil cat gs://${BUCKET}/babyweight/data/eval000000000000.csv | head -5 Explanation: Verify CSV creation Verify that we correctly created the CSV files in our bucket. End of explanation
5,090
Given the following text description, write Python code to implement the functionality described below step by step Description: Table of Contents <p><div class="lev1 toc-item"><a href="#&quot;Living-in-a-noisy-world...&quot;,-using-James-Powell's-(dutc)-rwatch-module" data-toc-modified-id="&quot;Living-in-a-noisy-world...&quot;,-using-James-Powell's-(dutc)-rwatch-module-1"><span class="toc-item-num">1&nbsp;&nbsp;</span><em>"Living in a noisy world..."</em>, using James Powell's (<code>dutc</code>) <code>rwatch</code> module</a></div><div class="lev2 toc-item"><a href="#Requirements-and-links" data-toc-modified-id="Requirements-and-links-11"><span class="toc-item-num">1.1&nbsp;&nbsp;</span>Requirements and links</a></div><div class="lev2 toc-item"><a href="#Defining-a-debugging-context-manager,-just-to-try" data-toc-modified-id="Defining-a-debugging-context-manager,-just-to-try-12"><span class="toc-item-num">1.2&nbsp;&nbsp;</span>Defining a debugging context manager, just to try</a></div><div class="lev3 toc-item"><a href="#Watching-just-one-object" data-toc-modified-id="Watching-just-one-object-121"><span class="toc-item-num">1.2.1&nbsp;&nbsp;</span>Watching just one object</a></div><div class="lev3 toc-item"><a href="#Can-we-delete-the-rwatch-?" data-toc-modified-id="Can-we-delete-the-rwatch-?-122"><span class="toc-item-num">1.2.2&nbsp;&nbsp;</span>Can we delete the <code>rwatch</code> ?</a></div><div class="lev3 toc-item"><a href="#More-useful-debuggin-information" data-toc-modified-id="More-useful-debuggin-information-123"><span class="toc-item-num">1.2.3&nbsp;&nbsp;</span>More useful debuggin information</a></div><div class="lev3 toc-item"><a href="#Watching-any-object" data-toc-modified-id="Watching-any-object-124"><span class="toc-item-num">1.2.4&nbsp;&nbsp;</span>Watching <em>any</em> object</a></div><div class="lev3 toc-item"><a href="#A-first-context-manager-to-have-debugging-for-one-object" data-toc-modified-id="A-first-context-manager-to-have-debugging-for-one-object-125"><span class="toc-item-num">1.2.5&nbsp;&nbsp;</span>A first context manager to have debugging for <em>one</em> object</a></div><div class="lev3 toc-item"><a href="#A-second-context-manager-to-debug-any-object" data-toc-modified-id="A-second-context-manager-to-debug-any-object-126"><span class="toc-item-num">1.2.6&nbsp;&nbsp;</span>A second context manager to debug <em>any</em> object</a></div><div class="lev2 toc-item"><a href="#Defining-a-context-manager-to-add-white-noise" data-toc-modified-id="Defining-a-context-manager-to-add-white-noise-13"><span class="toc-item-num">1.3&nbsp;&nbsp;</span>Defining a context manager to add white noise</a></div><div class="lev3 toc-item"><a href="#Capturing-any-numerical-value" data-toc-modified-id="Capturing-any-numerical-value-131"><span class="toc-item-num">1.3.1&nbsp;&nbsp;</span>Capturing any numerical value</a></div><div class="lev3 toc-item"><a href="#Adding-a-white-noise-for-numbers" data-toc-modified-id="Adding-a-white-noise-for-numbers-132"><span class="toc-item-num">1.3.2&nbsp;&nbsp;</span>Adding a white noise for numbers</a></div><div class="lev3 toc-item"><a href="#WhiteNoiseComplex-context-manager" data-toc-modified-id="WhiteNoiseComplex-context-manager-133"><span class="toc-item-num">1.3.3&nbsp;&nbsp;</span><code>WhiteNoiseComplex</code> context manager</a></div><div class="lev2 toc-item"><a href="#Defining-a-generic-noisy-context-manager" data-toc-modified-id="Defining-a-generic-noisy-context-manager-14"><span 
class="toc-item-num">1.4&nbsp;&nbsp;</span>Defining a generic <code>noisy</code> context manager</a></div><div class="lev2 toc-item"><a href="#Conclusion" data-toc-modified-id="Conclusion-15"><span class="toc-item-num">1.5&nbsp;&nbsp;</span>Conclusion</a></div> # *"Living in a noisy world..."*, using James Powell's (`dutc`) `rwatch` module Goal Step1: Then, the core part will be to install and import James Powell (dutc) rwatch module. If you don't have it installed Step2: Anyhow, if rwatch is installed, we can import it, and it enables two new functions in the sys module Step3: Finally, we need the collections module and its defaultdict magical datastructure. Step4: Defining a debugging context manager, just to try This is the first example given in James presentation at PyCon Canada 2016. We will first define and add a rwatch for just one object, let say a variable x, and then for any object using defaultdict. From there, writing a context manager that enables this feature only locally is easy. Watching just one object Write the function, that needs two argument frame, obj, and should return obj, Install it... Check it! Step5: That's awesome, it works! Can we delete the rwatch ? Sure! Step6: We can also delete rwatches that are not defined, without a failure Step7: More useful debuggin information What is this frame thing? It is described in the documentation of the inspect module. We can actually use it to display some useful information about the object and where was it called etc. Step8: That can be quite useful! Watching any object We can actually pass a defaultdict to the setrwatch function, so that any object will have a rwatch! Warning Step9: Clearly, there is a lot of strings involved, mainly because this is a notebook and not the simple Python interpreter, so filtering on the info.filename as I did was smart. Step10: But obviously, having this for all objects is incredibly verbose! Step11: Let check that one, on a very simple example (which runs in less than 20 micro seconds) Step12: It seems to work very well! But it slows down everything, obviously the filtering takes time (for every object!) Computing 123 + 134 = 257 took about 10 miliseconds! That's just CRAZY! A first context manager to have debugging for one object It would be nice to be able to turn on and off this debugging tool whenever you want. Well, it turns out that context managers are exactly meant for that! They are simple classes with just a __enter__() and __exit__() special methods. First, let us write a context manager to debug ONE object. Step13: We can check it Step14: The first debug information shows line 5, which is the line where print(z) is. A second context manager to debug any object Easy Step15: It will probably break everything in the notebook, but works in a basic Python interpreter. Step16: The 5th debug information printed is Access to 0 (@0xXXX) at &lt;ipython-input-41-XXX&gt; Step17: We also see here the None and {} objects being given to the context manager (see the __enter__ method at first, and __exit__ at the end). Defining a context manager to add white noise Basically, we will do as above, but instead of debug information, a white noise sampled from a Normal distribution (i.e., $\sim \mathcal{N}(0, 1)$) will be added to any number. Capturing any numerical value To capture both integers and float numbers, the numbers.Number abstract class is useful. Step18: Adding a white noise for numbers This is very simple. 
But I want to be safe, so it will only work if the frame indicates that the number does not come from a file, as previously. Step19: Let us try it out! Step20: It seems to work! Let's do it for any number then... ... Sadly, it's actually breaking the interpreter, which obviously has to have access to non-noisy constants and numbers to work! We can lower the risk by only adding noise to complex numbers. I guess the interpreter doesn't need complex numbers, right? Step21: Awesome! « Now, the real world is non noisy, but the complex one is! » That's one sentence I thought I would never say! WhiteNoiseComplex context manager To stay cautious, I only add noise to complex numbers. Step22: And it works as expected Step23: Defining a generic noisy context manager This will be a very simple change from the previous one, by letting the Noisy class accept any noisy function, which takes obj and returns a noisy version of obj, only for complex-valued objects.
Python Code: import numpy as np np.random.seed(1234) np.random.normal() Explanation: Table of Contents <p><div class="lev1 toc-item"><a href="#&quot;Living-in-a-noisy-world...&quot;,-using-James-Powell's-(dutc)-rwatch-module" data-toc-modified-id="&quot;Living-in-a-noisy-world...&quot;,-using-James-Powell's-(dutc)-rwatch-module-1"><span class="toc-item-num">1&nbsp;&nbsp;</span><em>"Living in a noisy world..."</em>, using James Powell's (<code>dutc</code>) <code>rwatch</code> module</a></div><div class="lev2 toc-item"><a href="#Requirements-and-links" data-toc-modified-id="Requirements-and-links-11"><span class="toc-item-num">1.1&nbsp;&nbsp;</span>Requirements and links</a></div><div class="lev2 toc-item"><a href="#Defining-a-debugging-context-manager,-just-to-try" data-toc-modified-id="Defining-a-debugging-context-manager,-just-to-try-12"><span class="toc-item-num">1.2&nbsp;&nbsp;</span>Defining a debugging context manager, just to try</a></div><div class="lev3 toc-item"><a href="#Watching-just-one-object" data-toc-modified-id="Watching-just-one-object-121"><span class="toc-item-num">1.2.1&nbsp;&nbsp;</span>Watching just one object</a></div><div class="lev3 toc-item"><a href="#Can-we-delete-the-rwatch-?" data-toc-modified-id="Can-we-delete-the-rwatch-?-122"><span class="toc-item-num">1.2.2&nbsp;&nbsp;</span>Can we delete the <code>rwatch</code> ?</a></div><div class="lev3 toc-item"><a href="#More-useful-debuggin-information" data-toc-modified-id="More-useful-debuggin-information-123"><span class="toc-item-num">1.2.3&nbsp;&nbsp;</span>More useful debuggin information</a></div><div class="lev3 toc-item"><a href="#Watching-any-object" data-toc-modified-id="Watching-any-object-124"><span class="toc-item-num">1.2.4&nbsp;&nbsp;</span>Watching <em>any</em> object</a></div><div class="lev3 toc-item"><a href="#A-first-context-manager-to-have-debugging-for-one-object" data-toc-modified-id="A-first-context-manager-to-have-debugging-for-one-object-125"><span class="toc-item-num">1.2.5&nbsp;&nbsp;</span>A first context manager to have debugging for <em>one</em> object</a></div><div class="lev3 toc-item"><a href="#A-second-context-manager-to-debug-any-object" data-toc-modified-id="A-second-context-manager-to-debug-any-object-126"><span class="toc-item-num">1.2.6&nbsp;&nbsp;</span>A second context manager to debug <em>any</em> object</a></div><div class="lev2 toc-item"><a href="#Defining-a-context-manager-to-add-white-noise" data-toc-modified-id="Defining-a-context-manager-to-add-white-noise-13"><span class="toc-item-num">1.3&nbsp;&nbsp;</span>Defining a context manager to add white noise</a></div><div class="lev3 toc-item"><a href="#Capturing-any-numerical-value" data-toc-modified-id="Capturing-any-numerical-value-131"><span class="toc-item-num">1.3.1&nbsp;&nbsp;</span>Capturing any numerical value</a></div><div class="lev3 toc-item"><a href="#Adding-a-white-noise-for-numbers" data-toc-modified-id="Adding-a-white-noise-for-numbers-132"><span class="toc-item-num">1.3.2&nbsp;&nbsp;</span>Adding a white noise for numbers</a></div><div class="lev3 toc-item"><a href="#WhiteNoiseComplex-context-manager" data-toc-modified-id="WhiteNoiseComplex-context-manager-133"><span class="toc-item-num">1.3.3&nbsp;&nbsp;</span><code>WhiteNoiseComplex</code> context manager</a></div><div class="lev2 toc-item"><a href="#Defining-a-generic-noisy-context-manager" data-toc-modified-id="Defining-a-generic-noisy-context-manager-14"><span class="toc-item-num">1.4&nbsp;&nbsp;</span>Defining a generic <code>noisy</code> context 
manager</a></div><div class="lev2 toc-item"><a href="#Conclusion" data-toc-modified-id="Conclusion-15"><span class="toc-item-num">1.5&nbsp;&nbsp;</span>Conclusion</a></div> # *"Living in a noisy world..."*, using James Powell's (`dutc`) `rwatch` module Goal : I want to write a [context manager](https://docs.python.org/3.5/library/stdtypes.html#typecontextmanager) for [Python 3.5+](https://www.Python.org/), so that inside the context manager, every number is seen noisy (with a white Gaussian noise, for instance). > It will be like being drunk, except that your it will be my Python interpretor and not me ! For instance, I will like to have this feature: ```python >>> x = 120193 >>> print(x) 120193 >>> np.random.seed(1234) >>> with WhiteNoise(): >>> print(x) 120193.47143516373249306 ``` ---- ## Requirements and links First, we will need [numpy](https://docs.scipy.org/doc/numpy/user/whatisnumpy.html) to have some random number generator, as `numpy.random.normal` to have some 1D Gaussian noise. End of explanation %%bash tmpdir=$(mktemp -d) cd $tmpdir git clone https://github.com/dutc/rwatch cd rwatch/src/ make ls -larth ./rwatch.so file ./rwatch.so # cp ./rwatch.so /where/ver/you/need/ # ~/publis/notebook/ for me Explanation: Then, the core part will be to install and import James Powell (dutc) rwatch module. If you don't have it installed : Be sure to have CPython 3.5. rwatch patches the CPython eval loop, so it's fixed to specific versions of Python & the author lazy about keeping it updated. Then pip install dutc-rwatch. It should work, but it fails for me. (alternative) You can just cd /tmp/ &amp;&amp; git clone https://github.com/dutc/rwatch &amp;&amp; cd rwatch/src/ &amp;&amp; make and copy the rwatch.so dynamic library wherever you need... End of explanation import rwatch from sys import setrwatch, getrwatch setrwatch({}) # clean any previously installed rwatch getrwatch() Explanation: Anyhow, if rwatch is installed, we can import it, and it enables two new functions in the sys module: End of explanation from collections import defaultdict Explanation: Finally, we need the collections module and its defaultdict magical datastructure. End of explanation def basic_view(frame, obj): print("Python saw the object {} from frame {}".format(obj, frame)) return obj x = "I am alive!" setrwatch({ id(x): basic_view }) print(x) Explanation: Defining a debugging context manager, just to try This is the first example given in James presentation at PyCon Canada 2016. We will first define and add a rwatch for just one object, let say a variable x, and then for any object using defaultdict. From there, writing a context manager that enables this feature only locally is easy. Watching just one object Write the function, that needs two argument frame, obj, and should return obj, Install it... Check it! End of explanation def delrwatch(idobj): getrwatch().pop(idobj, None) print(x) delrwatch(id(x)) print(x) # no more rwatch on this! print(x) # no more rwatch on this! Explanation: That's awesome, it works! Can we delete the rwatch ? Sure! End of explanation y = "I am Zorro !" print(y) delrwatch(y) # No issue! 
print(y) Explanation: We can also delete rwatches that are not defined, without a failure: End of explanation from inspect import getframeinfo def debug_view(frame, obj): info = getframeinfo(frame) msg = '- Access to {!r} (@{}) at {}:{}:{}' print(msg.format(obj, hex(id(obj)), info.filename, info.lineno, info.function)) return obj setrwatch({}) setrwatch({ id(x): debug_view }) getrwatch() print(x) Explanation: More useful debuggin information What is this frame thing? It is described in the documentation of the inspect module. We can actually use it to display some useful information about the object and where was it called etc. End of explanation setrwatch({}) def debug_view_for_str(frame, obj): if isinstance(obj, str): info = getframeinfo(frame) if '<stdin>' in info.filename or '<ipython-' in info.filename: msg = '- Access to {!r} (@{}) at {}:{}:{}' print(msg.format(obj, hex(id(obj)), info.filename, info.lineno, info.function)) return obj setrwatch(defaultdict(lambda: debug_view_for_str)) print(x) Explanation: That can be quite useful! Watching any object We can actually pass a defaultdict to the setrwatch function, so that any object will have a rwatch! Warning: obviously, this will crazily slowdown your interpreter! So let be cautious, and only deal with strings here. But I want to be safe, so it will only works if the frame indicate that the variable does not come from a file. End of explanation setrwatch({}) Explanation: Clearly, there is a lot of strings involved, mainly because this is a notebook and not the simple Python interpreter, so filtering on the info.filename as I did was smart. End of explanation def debug_view_for_any_object(frame, obj): info = getframeinfo(frame) if '<stdin>' in info.filename or '<ipython-' in info.filename: msg = '- Access to {!r} (@{}) at {}:{}:{}' print(msg.format(obj, hex(id(obj)), info.filename, info.lineno, info.function)) return obj Explanation: But obviously, having this for all objects is incredibly verbose! End of explanation print(x) %time 123 + 134 setrwatch({}) setrwatch(defaultdict(lambda: debug_view_for_any_object)) print(x) %time 123 + 134 setrwatch({}) Explanation: Let check that one, on a very simple example (which runs in less than 20 micro seconds): End of explanation class InspectThisObject(object): def __init__(self, obj): self.idobj = id(obj) def __enter__(self): getrwatch()[self.idobj] = debug_view def __exit__(self, exc_type, exc_val, exc_tb): delrwatch(self.idobj) Explanation: It seems to work very well! But it slows down everything, obviously the filtering takes time (for every object!) Computing 123 + 134 = 257 took about 10 miliseconds! That's just CRAZY! A first context manager to have debugging for one object It would be nice to be able to turn on and off this debugging tool whenever you want. Well, it turns out that context managers are exactly meant for that! They are simple classes with just a __enter__() and __exit__() special methods. First, let us write a context manager to debug ONE object. End of explanation z = "I am Batman!" print(z) with InspectThisObject(z): print(z) print(z) Explanation: We can check it: End of explanation class InspectAllObjects(object): def __init__(self): pass def __enter__(self): setrwatch(defaultdict(lambda: debug_view_for_any_object)) def __exit__(self, exc_type, exc_val, exc_tb): setrwatch({}) Explanation: The first debug information shows line 5, which is the line where print(z) is. 
A second context manager to debug any object Easy: End of explanation with InspectAllObjects(): print(0) Explanation: It will probably break everything in the notebook, but works in a basic Python interpreter. End of explanation with InspectAllObjects(): print("Darth Vader -- No Luke, I am your Father!") print("Luke -- I have a father? Yay! Let's eat cookies together!") Explanation: The 5th debug information printed is Access to 0 (@0xXXX) at &lt;ipython-input-41-XXX&gt;:2:&lt;module&gt;, showing the access in line #2 of the constant 0. End of explanation from numbers import Number Explanation: We also see here the None and {} objects being given to the context manager (see the __enter__ method at first, and __exit__ at the end). Defining a context manager to add white noise Basically, we will do as above, but instead of debug information, a white noise sampled from a Normal distribution (i.e., $\sim \mathcal{N}(0, 1)$) will be added to any number. Capturing any numerical value To capture both integers and float numbers, the numbers.Number abstract class is useful. End of explanation def add_white_noise_to_numbers(frame, obj): if isinstance(obj, Number): info = getframeinfo(frame) if '<stdin>' in info.filename or '<ipython-' in info.filename: return obj + np.random.normal() return obj Explanation: Adding a white noise for numbers This is very simple. But I want to be safe, so it will only works if the frame indicate that the number does not come from a file, as previously. End of explanation np.random.seed(1234) setrwatch({}) x = 1234 print(x) getrwatch()[id(x)] = add_white_noise_to_numbers print(x) # huhoww, that's noisy! print(10 * x + x + x**2) # and noise propagate! setrwatch({}) print(x) print(10 * x + x + x**2) Explanation: Let us try it out! End of explanation def add_white_noise_to_complex(frame, obj): if isinstance(obj, complex): info = getframeinfo(frame) if '<stdin>' in info.filename or '<ipython-' in info.filename: return obj + np.random.normal() + np.random.normal() * 1j return obj np.random.seed(1234) setrwatch({}) y = 1234j print(y) setrwatch(defaultdict(lambda: add_white_noise_to_complex)) print(y) # huhoww, that's noisy! setrwatch({}) print(y) Explanation: It seems to work! Let's do it for any number then... ... Sadly, it's actually breaking the interpreter, which obviously has to have access to non-noisy constants and numbers to work ! We can lower the risk by only adding noise to complex numbers. I guess the interpreter doesn't need complex numbers, write? End of explanation class WhiteNoiseComplex(object): def __init__(self): pass def __enter__(self): setrwatch(defaultdict(lambda: add_white_noise_to_complex)) def __exit__(self, exc_type, exc_val, exc_tb): setrwatch({}) Explanation: Awesome! « Now, the real world is non noisy, but the complex one is! » That's one sentence I thought I would never say! WhiteNoiseComplex context manager To stay cautious, I only add noise to complex numbers. End of explanation np.random.seed(120193) print(120193, 120193j) with WhiteNoiseComplex(): print(120193, 120193j) # Huhoo, noisy! print(120193, 120193j) print(0*1j) with WhiteNoiseComplex(): print(0*1j) # Huhoo, noisy! 
print(0*1j) Explanation: And it works as expected: End of explanation class Noisy(object): def __init__(self, noise): def add_white_noise_to_complex(frame, obj): if isinstance(obj, complex): info = getframeinfo(frame) if '<stdin>' in info.filename or '<ipython-' in info.filename: return noise(obj) return obj self.rwatch = add_white_noise_to_complex def __enter__(self): setrwatch(defaultdict(lambda: self.rwatch)) def __exit__(self, exc_type, exc_val, exc_tb): setrwatch({}) print(1j) with Noisy(lambda obj: obj + np.random.normal()): print(1j) print(1j) print(1j) with Noisy(lambda obj: obj * np.random.normal()): print(1j) print(1j) print(1j) with Noisy(lambda obj: obj + np.random.normal(10, 0.1) + np.random.normal(10, 0.1) * 1j): print(1j) print(1j) Explanation: Defining a generic noisy context manager This will be a very simple change from the previous one, by letting the Noisy class accept any noisy function, which takes obj and return a noisy version of obj, only for complex-valued objects. End of explanation
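An aside on the notebook above (my sketch, not part of the original row): the InspectThisObject pattern generalises to a contextlib-based helper with a pluggable callback. It assumes the dutc rwatch patch is installed and that a watch table has already been set with setrwatch, as the notebook does at the top; the name watched and its arguments are mine.

```python
from contextlib import contextmanager
from sys import getrwatch  # only exists once the dutc rwatch patch is installed

@contextmanager
def watched(obj, callback):
    """Temporarily register `callback` as the rwatch for one object."""
    getrwatch()[id(obj)] = callback        # same pattern as InspectThisObject.__enter__ above
    try:
        yield obj
    finally:
        getrwatch().pop(id(obj), None)     # same cleanup as delrwatch above

# usage sketch, reusing the debug_view callback defined in the notebook:
# with watched(x, debug_view):
#     print(x)
```

Unlike setrwatch({}) in __exit__, popping only our own entry leaves any other installed watches untouched.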
5,091
Given the following text description, write Python code to implement the functionality described below step by step Description: Probability distributions and random number generation Step1: Exercises (1) Simulate the result of rolling two dice. As in the example below, generate an array of random integer pairs between 1 and 6. Step2: (2) Prepare 10 rolls of two dice. As in the example below, generate an array containing 10 pairs (lists) of integers between 1 and 6 and store it in the variable dice. Step3: (3) For each result stored in the variable dice, compute the sum of the two dice, as in the example below. (Store the results in a list.) Step4: (4) Prepare 1000 rolls of two dice and show the counts of each sum from 2 to 12 in a histogram. Hint: the options bins=11, range=(1.5, 12.5) give a clean plot. Step5: (5) For 10 evenly spaced points in the range 0 ≤ x ≤ 1, data_x = np.linspace(0,1,10), create an array holding the values of sin(2πx) and store it in the variable data_y. Then create an array that adds random noise drawn from a normal distribution with standard deviation 0.3 to each value of data_y, store it in the variable data_t, and show (data_x, data_t) as a scatter plot.
Python Code: import numpy as np import matplotlib.pyplot as plt import pandas as pd from pandas import Series, DataFrame Explanation: Probability distributions and random number generation End of explanation from numpy.random import randint randint(1,7,2) Explanation: Exercises (1) Simulate the result of rolling two dice. As in the example below, generate an array of random integer pairs between 1 and 6. End of explanation dice = randint(1,7,[10,2]) dice Explanation: (2) Prepare 10 rolls of two dice. As in the example below, generate an array containing 10 pairs (lists) of integers between 1 and 6 and store it in the variable dice. End of explanation [a+b for (a,b) in dice] Explanation: (3) For each result stored in the variable dice, compute the sum of the two dice, as in the example below. (Store the results in a list.) End of explanation dice = randint(1,7,[1000,2]) sums = [a+b for (a,b) in dice] plt.hist(sums, bins=11, range=(1.5, 12.5)) Explanation: (4) Prepare 1000 rolls of two dice and show the counts of each sum from 2 to 12 in a histogram. Hint: the options bins=11, range=(1.5, 12.5) give a clean plot. End of explanation from numpy.random import normal data_x = np.linspace(0,1,10) data_y = np.sin(2*np.pi*data_x) data_t = data_y + normal(loc=0, scale=0.3, size=len(data_y)) plt.scatter(data_x, data_t) Explanation: (5) For 10 evenly spaced points in the range 0 ≤ x ≤ 1, data_x = np.linspace(0,1,10), create an array holding the values of sin(2πx) and store it in the variable data_y. Then create an array that adds random noise drawn from a normal distribution with standard deviation 0.3 to each value of data_y, store it in the variable data_t, and show (data_x, data_t) as a scatter plot. End of explanation
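A small cross-check for exercise (4) above (my addition, not part of the original exercise): the simulated frequencies of the two-dice sums can be compared with the exact probabilities (6 - |s - 7|)/36.

```python
import numpy as np
from numpy.random import randint

rolls = randint(1, 7, size=(100000, 2))    # 100,000 rolls of two dice
sums = rolls.sum(axis=1)

for s in range(2, 13):
    exact = (6 - abs(s - 7)) / 36          # ways to roll a sum of s, out of 36
    simulated = np.mean(sums == s)         # empirical frequency
    print(f"{s:2d}: simulated={simulated:.4f}  exact={exact:.4f}")
```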
5,092
Given the following text description, write Python code to implement the functionality described below step by step Description: Step 1 - Observe the Real Data The data below are the real observations. We can see 2 peaks in the data, one with the center around 120 and the other with the center around 200. It looks like a binomial assignment in which each observation falls into either cluster 1 or cluster 2 Step1: Step 2 - Generate the Data with Priors Assign Clusters pm.Categorical Its parameter is a k-length array of probabilities that must sum to one Its value attribute is an integer between 0 and k-1 randomly chosen according to the crafted array of probabilities (In our case k=2) Since we don't know the prior probability of assigning a point to either cluster, we use a uniform distribution here Step2: Assign μ, σ of the Cluster to each Observation Step3: Define Sampling Methods to Explore Define Space Metropolis() for our continuous variables CategoricalGibbsMetropolis() for the categorical variable Sample 25000 iterations The results are saved in trace, which records the value of the specified vars for each observation These vars are p, sds, centers and assignment NOTE Step4: Check the probability of which cluster a point belongs to Step5: Autocorrelation & Convergence A chain that is not exploring the space well will exhibit very high autocorrelation. Visually, if the trace seems to meander like a river, and not settle down, the chain will have high autocorrelation. This does not imply that a converged MCMC has low autocorrelation. Hence low autocorrelation is not necessary for convergence, but it is sufficient. PyMC3 has a built-in autocorrelation plotting function in the plots module. Another issue can arise if there is high autocorrelation between posterior samples. Many post-processing algorithms require samples to be independent of each other. This can be solved, or at least reduced, by only returning to the user every nth sample -- <b>Thinning</b>
Python Code: figsize(12.5, 4) data = np.loadtxt("../data/mixture_data.csv", delimiter=",") plt.hist(data, bins=20, color="g", histtype="stepfilled", alpha=0.8) plt.title("The Distribution of Mixture_Data Dataset") plt.ylim([0, None]); print(data[:10], "...") Explanation: Step 1 - Observe the Real Data The Data below is the real observations. We can see 2 peaks in the data, one with the center around 120 and the other with the center around 200. Felt like binominal distribution that an observation falls into either cluster 1 or cluster 2 End of explanation with pm.Model() as model: p1 = pm.Uniform('p', 0, 1) p2 = 1 - p1 p = T.stack([p1, p2]) assignment = pm.Categorical("assignment", p, shape=data.shape[0], testval=np.random.randint(0, 2, data.shape[0])) print("prior assignment, with p = %.2f:" % p1.tag.test_value) print(assignment.tag.test_value[:10]) Explanation: Step 2 - Generate the Data with Priors Assign Clusters pm.Categorical Its parameter is a k-length array of probabilities that must sum to one Its value attribute is a integer between 0 and k-1 randomly chosen according to the crafted array of probabilities (In our case k=2) Since we don't know the probability to assign to which cluster in prior, so use uniform distribution here End of explanation with model: sds = pm.Uniform("sds", 0, 100, shape=2) centers = pm.Normal("centers", mu=np.array([120, 200]), sd=np.array([10, 10]), shape=2) center_i = pm.Deterministic('center_i', centers[assignment]) sd_i = pm.Deterministic('sd_i', sds[assignment]) # combine with the real observations: observations = pm.Normal("obs", mu=center_i, sd=sd_i, observed=data) print("Random assignments: ", assignment.tag.test_value[:10], "...") print("Assigned center: ", center_i.tag.test_value[:10], "...") print("Assigned standard deviation: ", sd_i.tag.test_value[:10], "...") print("Observations: ", observations.tag.test_value[:10], "...") print(data[:10], "...") print(sds.tag.test_value) print(centers.tag.test_value) Explanation: Assign μ, σ of the Cluster to each Observation End of explanation with model: step1 = pm.Metropolis(vars=[p, sds, centers]) step2 = pm.CategoricalGibbsMetropolis(vars=[assignment]) trace = pm.sample(25000, step=[step1, step2]) print(trace['centers'].shape) print(trace['p'].shape) print(trace['sds'].shape) print(trace['assignment'].shape) plt.figure(figsize=(12,3)) lw = 1 center_trace = trace["centers"] plt.plot(center_trace[:, 0], label="cluster 0", c='g', lw=lw) plt.plot(center_trace[:, 1], label="cluster 1", c='purple', lw=lw) plt.xlabel('Steps') plt.title("Traces of Centers") leg = plt.legend(loc="upper right") leg.get_frame().set_alpha(0.7) plt.figure(figsize=(12,3)) lw = 1 center_trace = trace["sds"] plt.plot(std_trace[:, 0], label="cluster 0", c='g', lw=lw) plt.plot(std_trace[:, 1], label="cluster 1", c='purple', lw=lw) plt.xlabel('Steps') plt.title('Traces of Standard Deviations') leg = plt.legend(loc="upper right") leg.get_frame().set_alpha(0.7) plt.figure(figsize=(12,3)) lw = 1 p_trace = trace["p"] plt.plot(p_trace, label="cluster 0", c='g', lw=lw) plt.xlabel('Steps') plt.title('Traces of Cluster 0 Assignment Frequency') leg = plt.legend(loc="upper right") leg.get_frame().set_alpha(0.7) # Sample 50,000 more to achieve more convergence with model: trace = pm.sample(50000, step=[step1, step2], trace=trace) figsize(12, 3) prev_center_trace = trace["centers"][:25000] center_trace = trace["centers"][25000:] x = np.arange(25000) plt.plot(x, prev_center_trace[:, 0], label="previous center 0", lw=lw, alpha=0.4, c='green') 
plt.plot(x, prev_center_trace[:, 1], label="previous center 1", lw=lw, alpha=0.4, c='purple') x = np.arange(25000, 300000) plt.plot(x, center_trace[:, 0], label="new center 0", lw=lw, c="green") plt.plot(x, center_trace[:, 1], label="new center 1", lw=lw, c="purple") plt.title("Previous & New traces of Centers") leg = plt.legend(loc="upper right") leg.get_frame().set_alpha(0.8) plt.xlabel("Steps"); # Plot posterior clusters figsize(12, 3) plt.title("Posterior of Centers") plt.hist(center_trace[:, 0], bins=30, histtype="stepfilled", label='center 0', color='g') plt.hist(center_trace[:, 1], bins=30, histtype="stepfilled", label='center 1', color='purple') plt.legend() figsize(12, 3) sds_trace = trace["sds"][25000:] plt.title("Posterior of Standard Deviations") plt.hist(sds_trace[:, 0], bins=30, histtype="stepfilled", label='sds 0', color='g', alpha=0.7) plt.hist(sds_trace[:, 1], bins=30, histtype="stepfilled", label='sds 1', color='purple', alpha=0.7) plt.legend() import matplotlib as mpl cmap = mpl.colors.LinearSegmentedColormap.from_list("BMH", ['green', 'purple']) assign_trace = trace["assignment"] plt.scatter(data, 1-assign_trace.mean(axis=0), cmap=cmap, c=assign_trace.mean(axis=0), s=50) plt.ylim(-0.05, 1.05) plt.xlim(35, 300) plt.title("Probability of data point belonging to cluster 0") plt.ylabel("probability") plt.xlabel("value of data point"); Explanation: Define Sampling Methods to Explore Define Space Metropolis() for our continuous variables CategoricalGibbsMetropolis() for categorical variable Sample 25000 iterations The results are saved in trace, which records the value of specificed vars for each observation These vars are, p, sds, centers and assignment NOTE: I tried find_map() before sample(), but it simply doesn't work End of explanation norm_pdf = stats.norm.pdf p_trace = trace["p"][25000:] x = 175 v1 = (1-p_trace) * norm_pdf(x, loc=center_trace[:, 1], scale=sds_trace[:, 1]) > \ p_trace * norm_pdf(x, loc=center_trace[:, 0], scale=sds_trace[:, 0]) v0 = (1-p_trace) * norm_pdf(x, loc=center_trace[:, 1], scale=sds_trace[:, 1]) < \ p_trace * norm_pdf(x, loc=center_trace[:, 0], scale=sds_trace[:, 0]) print("Probability of belonging to cluster 1:", v1.mean()) print("Probability of belonging to cluster 0:", v0.mean()) Explanation: Check the probability of which cluster a point belongs to End of explanation pm.plots.autocorrplot(data=trace, var_names=["centers"]) pm.plots.traceplot(data=trace, var_names=["centers"]) pm.plots.plot_posterior(data=trace["centers"][:,0]) pm.plots.plot_posterior(data=trace["centers"][:,1]) Explanation: Autocorrelation & Convergence A chain that is not exploring the space well will exhibit very high autocorrelation. Visually, if the trace seems to meander like a river, and not settle down, the chain will have high autocorrelation. This does not imply that a converged MCMC has low autocorrelation. Hence low autocorrelation is not necessary for convergence, but it is sufficient. PyMC3 has a built-in autocorrelation plotting function in the plots module. Another issue can arise if there is high-autocorrelation between posterior samples. Many post-processing algorithms require samples to be independent of each other. This can be solved, or at least reduced, by only returning to the user every nth sample -- <b>Tinning</b> End of explanation
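On the thinning remark above: thinning is just keeping every nth stored draw. A minimal sketch (my addition, not from the original notebook), assuming trace is the PyMC3 trace built above; the stride of 10 and the variable names are arbitrary choices of mine.

```python
import numpy as np

thin = 10                                       # keep every 10th draw; the stride is arbitrary
centers_thinned = trace["centers"][::thin]
assignment_thinned = trace["assignment"][::thin]

print(trace["centers"].shape, "->", centers_thinned.shape)
# posterior summaries are then computed on the thinned draws as usual
print(np.percentile(centers_thinned[:, 0], [5, 50, 95]))
```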
5,093
Given the following text description, write Python code to implement the functionality described below step by step Description: Random Sampling Credits Step1: Part One Suppose we want to estimate the average weight of men and women in the U.S. And we want to quantify the uncertainty of the estimate. One approach is to simulate many experiments and see how much the results vary from one experiment to the next. I'll start with the unrealistic assumption that we know the actual distribution of weights in the population. Then I'll show how to solve the problem without that assumption. Based on data from the BRFSS, I found that the distribution of weight in kg for women in the U.S. is well modeled by a lognormal distribution with the following parameters Step2: Here's what that distribution looks like Step3: make_sample draws a random sample from this distribution. The result is a NumPy array. Step4: Here's an example with n=100. The mean and std of the sample are close to the mean and std of the population, but not exact. Step5: We want to estimate the average weight in the population, so the "sample statistic" we'll use is the mean Step6: One iteration of "the experiment" is to collect a sample of 100 women and compute their average weight. We can simulate running this experiment many times, and collect a list of sample statistics. The result is a NumPy array. Step7: The next line runs the simulation 1000 times and puts the results in sample_means Step8: Let's look at the distribution of the sample means. This distribution shows how much the results vary from one experiment to the next. Remember that this distribution is not the same as the distribution of weight in the population. This is the distribution of results across repeated imaginary experiments. Step9: The mean of the sample means is close to the actual population mean, which is nice, but not actually the important part. Step10: The standard deviation of the sample means quantifies the variability from one experiment to the next, and reflects the precision of the estimate. This quantity is called the "standard error". Step11: We can also use the distribution of sample means to compute a "90% confidence interval", which contains 90% of the experimental results Step12: The following function takes an array of sample statistics and prints the SE and CI Step13: And here's what that looks like Step14: Now we'd like to see what happens as we vary the sample size, n. The following function takes n, runs 1000 simulated experiments, and summarizes the results. Step15: Here's a test run with n=100 Step16: Now we can use interact to run plot_sample_stats with different values of n. Note Step17: This framework works with any other quantity we want to estimate. By changing sample_stat, you can compute the SE and CI for any sample statistic. As an exercise, fill in sample_stat below with any of these statistics Step24: Part Two So far we have shown that if we know the actual distribution of the population, we can compute the sampling distribution for any sample statistic, and from that we can compute SE and CI. But in real life we don't know the actual distribution of the population. If we did, we wouldn't need to estimate it! In real life, we use the sample to build a model of the population distribution, then use the model to generate the sampling distribution. A simple and popular way to do that is "resampling," which means we use the sample itself as a model of the population distribution and draw samples from it. 
Before we go on, I want to collect some of the code from Part One and organize it as a class. This class represents a framework for computing sampling distributions. Step25: The following function instantiates a Resampler and runs it. Step26: Here's a test run with n=100 Step27: Now we can use plot_resampled_stats in an interaction Step30: Exercise Step31: Test your code using the cell below Step32: When your StdResampler is working, you should be able to interact with it Step33: Part Three We can extend this framework to compute SE and CI for a difference in means. For example, men are heavier than women on average. Here's the women's distribution again (from BRFSS data) Step34: And here's the men's distribution Step35: I'll simulate a sample of 100 men and 100 women Step36: The difference in means should be about 17 kg, but will vary from one random sample to the next Step38: Here's the function that computes Cohen's $d$ again Step39: The difference in weight between men and women is about 1 standard deviation Step40: Now we can write a version of the Resampler that computes the sampling distribution of $d$. Step41: Now we can instantiate a CohenResampler and plot the sampling distribution.
Python Code: from __future__ import print_function, division import numpy import scipy.stats import matplotlib.pyplot as pyplot from IPython.html.widgets import interact, fixed from IPython.html import widgets # seed the random number generator so we all get the same results numpy.random.seed(18) # some nicer colors from http://colorbrewer2.org/ COLOR1 = '#7fc97f' COLOR2 = '#beaed4' COLOR3 = '#fdc086' COLOR4 = '#ffff99' COLOR5 = '#386cb0' %matplotlib inline Explanation: Random Sampling Credits: Forked from CompStats by Allen Downey. License: Creative Commons Attribution 4.0 International. End of explanation weight = scipy.stats.lognorm(0.23, 0, 70.8) weight.mean(), weight.std() Explanation: Part One Suppose we want to estimate the average weight of men and women in the U.S. And we want to quantify the uncertainty of the estimate. One approach is to simulate many experiments and see how much the results vary from one experiment to the next. I'll start with the unrealistic assumption that we know the actual distribution of weights in the population. Then I'll show how to solve the problem without that assumption. Based on data from the BRFSS, I found that the distribution of weight in kg for women in the U.S. is well modeled by a lognormal distribution with the following parameters: End of explanation xs = numpy.linspace(20, 160, 100) ys = weight.pdf(xs) pyplot.plot(xs, ys, linewidth=4, color=COLOR1) pyplot.xlabel('weight (kg)') pyplot.ylabel('PDF') None Explanation: Here's what that distribution looks like: End of explanation def make_sample(n=100): sample = weight.rvs(n) return sample Explanation: make_sample draws a random sample from this distribution. The result is a NumPy array. End of explanation sample = make_sample(n=100) sample.mean(), sample.std() Explanation: Here's an example with n=100. The mean and std of the sample are close to the mean and std of the population, but not exact. End of explanation def sample_stat(sample): return sample.mean() Explanation: We want to estimate the average weight in the population, so the "sample statistic" we'll use is the mean: End of explanation def compute_sample_statistics(n=100, iters=1000): stats = [sample_stat(make_sample(n)) for i in range(iters)] return numpy.array(stats) Explanation: One iteration of "the experiment" is to collect a sample of 100 women and compute their average weight. We can simulate running this experiment many times, and collect a list of sample statistics. The result is a NumPy array. End of explanation sample_means = compute_sample_statistics(n=100, iters=1000) Explanation: The next line runs the simulation 1000 times and puts the results in sample_means: End of explanation pyplot.hist(sample_means, color=COLOR5) pyplot.xlabel('sample mean (n=100)') pyplot.ylabel('count') None Explanation: Let's look at the distribution of the sample means. This distribution shows how much the results vary from one experiment to the next. Remember that this distribution is not the same as the distribution of weight in the population. This is the distribution of results across repeated imaginary experiments. End of explanation sample_means.mean() Explanation: The mean of the sample means is close to the actual population mean, which is nice, but not actually the important part. End of explanation std_err = sample_means.std() std_err Explanation: The standard deviation of the sample means quantifies the variability from one experiment to the next, and reflects the precision of the estimate. 
This quantity is called the "standard error". End of explanation conf_int = numpy.percentile(sample_means, [5, 95]) conf_int Explanation: We can also use the distribution of sample means to compute a "90% confidence interval", which contains 90% of the experimental results: End of explanation def summarize_sampling_distribution(sample_stats): print('SE', sample_stats.std()) print('90% CI', numpy.percentile(sample_stats, [5, 95])) Explanation: The following function takes an array of sample statistics and prints the SE and CI: End of explanation summarize_sampling_distribution(sample_means) Explanation: And here's what that looks like: End of explanation def plot_sample_stats(n, xlim=None): sample_stats = compute_sample_statistics(n, iters=1000) summarize_sampling_distribution(sample_stats) pyplot.hist(sample_stats, color=COLOR2) pyplot.xlabel('sample statistic') pyplot.xlim(xlim) Explanation: Now we'd like to see what happens as we vary the sample size, n. The following function takes n, runs 1000 simulated experiments, and summarizes the results. End of explanation plot_sample_stats(100) Explanation: Here's a test run with n=100: End of explanation def sample_stat(sample): return sample.mean() slider = widgets.IntSliderWidget(min=10, max=1000, value=100) interact(plot_sample_stats, n=slider, xlim=fixed([55, 95])) None Explanation: Now we can use interact to run plot_sample_stats with different values of n. Note: xlim sets the limits of the x-axis so the figure doesn't get rescaled as we vary n. End of explanation def sample_stat(sample): # TODO: replace the following line with another sample statistic return sample.mean() slider = widgets.IntSliderWidget(min=10, max=1000, value=100) interact(plot_sample_stats, n=slider, xlim=fixed([0, 100])) None Explanation: This framework works with any other quantity we want to estimate. By changing sample_stat, you can compute the SE and CI for any sample statistic. As an exercise, fill in sample_stat below with any of these statistics: Standard deviation of the sample. Coefficient of variation, which is the sample standard deviation divided by the sample standard mean. Min or Max Median (which is the 50th percentile) 10th or 90th percentile. Interquartile range (IQR), which is the difference between the 75th and 25th percentiles. NumPy array methods you might find useful include std, min, max, and percentile. Depending on the results, you might want to adjust xlim. End of explanation class Resampler(object): Represents a framework for computing sampling distributions. def __init__(self, sample, xlim=None): Stores the actual sample. self.sample = sample self.n = len(sample) self.xlim = xlim def resample(self): Generates a new sample by choosing from the original sample with replacement. new_sample = numpy.random.choice(self.sample, self.n, replace=True) return new_sample def sample_stat(self, sample): Computes a sample statistic using the original sample or a simulated sample. return sample.mean() def compute_sample_statistics(self, iters=1000): Simulates many experiments and collects the resulting sample statistics. stats = [self.sample_stat(self.resample()) for i in range(iters)] return numpy.array(stats) def plot_sample_stats(self): Runs simulated experiments and summarizes the results. 
sample_stats = self.compute_sample_statistics() summarize_sampling_distribution(sample_stats) pyplot.hist(sample_stats, color=COLOR2) pyplot.xlabel('sample statistic') pyplot.xlim(self.xlim) Explanation: Part Two So far we have shown that if we know the actual distribution of the population, we can compute the sampling distribution for any sample statistic, and from that we can compute SE and CI. But in real life we don't know the actual distribution of the population. If we did, we wouldn't need to estimate it! In real life, we use the sample to build a model of the population distribution, then use the model to generate the sampling distribution. A simple and popular way to do that is "resampling," which means we use the sample itself as a model of the population distribution and draw samples from it. Before we go on, I want to collect some of the code from Part One and organize it as a class. This class represents a framework for computing sampling distributions. End of explanation def plot_resampled_stats(n=100): sample = weight.rvs(n) resampler = Resampler(sample, xlim=[55, 95]) resampler.plot_sample_stats() Explanation: The following function instantiates a Resampler and runs it. End of explanation plot_resampled_stats(100) Explanation: Here's a test run with n=100 End of explanation slider = widgets.IntSliderWidget(min=10, max=1000, value=100) interact(plot_resampled_stats, n=slider, xlim=fixed([1, 15])) None Explanation: Now we can use plot_resampled_stats in an interaction: End of explanation class StdResampler(Resampler): Computes the sampling distribution of the standard deviation. def sample_stat(self, sample): Computes a sample statistic using the original sample or a simulated sample. return sample.std() Explanation: Exercise: write a new class called StdResampler that inherits from Resampler and overrides sample_stat so it computes the standard deviation of the resampled data. End of explanation def plot_resampled_stats(n=100): sample = weight.rvs(n) resampler = StdResampler(sample, xlim=[0, 100]) resampler.plot_sample_stats() plot_resampled_stats() Explanation: Test your code using the cell below: End of explanation slider = widgets.IntSliderWidget(min=10, max=1000, value=100) interact(plot_resampled_stats, n=slider) None Explanation: When your StdResampler is working, you should be able to interact with it: End of explanation female_weight = scipy.stats.lognorm(0.23, 0, 70.8) female_weight.mean(), female_weight.std() Explanation: Part Three We can extend this framework to compute SE and CI for a difference in means. For example, men are heavier than women on average. Here's the women's distribution again (from BRFSS data): End of explanation male_weight = scipy.stats.lognorm(0.20, 0, 87.3) male_weight.mean(), male_weight.std() Explanation: And here's the men's distribution: End of explanation female_sample = female_weight.rvs(100) male_sample = male_weight.rvs(100) Explanation: I'll simulate a sample of 100 men and 100 women: End of explanation male_sample.mean() - female_sample.mean() Explanation: The difference in means should be about 17 kg, but will vary from one random sample to the next: End of explanation def CohenEffectSize(group1, group2): Compute Cohen's d. 
group1: Series or NumPy array group2: Series or NumPy array returns: float diff = group1.mean() - group2.mean() n1, n2 = len(group1), len(group2) var1 = group1.var() var2 = group2.var() pooled_var = (n1 * var1 + n2 * var2) / (n1 + n2) d = diff / numpy.sqrt(pooled_var) return d Explanation: Here's the function that computes Cohen's $d$ again: End of explanation CohenEffectSize(male_sample, female_sample) Explanation: The difference in weight between men and women is about 1 standard deviation: End of explanation class CohenResampler(Resampler): def __init__(self, group1, group2, xlim=None): self.group1 = group1 self.group2 = group2 self.xlim = xlim def resample(self): group1 = numpy.random.choice(self.group1, len(self.group1), replace=True) group2 = numpy.random.choice(self.group2, len(self.group2), replace=True) return group1, group2 def sample_stat(self, groups): group1, group2 = groups return CohenEffectSize(group1, group2) # NOTE: The following functions are the same as the ones in Resampler, # so I could just inherit them, but I'm including them for readability def compute_sample_statistics(self, iters=1000): stats = [self.sample_stat(self.resample()) for i in range(iters)] return numpy.array(stats) def plot_sample_stats(self): sample_stats = self.compute_sample_statistics() summarize_sampling_distribution(sample_stats) pyplot.hist(sample_stats, color=COLOR2) pyplot.xlabel('sample statistic') pyplot.xlim(self.xlim) Explanation: Now we can write a version of the Resampler that computes the sampling distribution of $d$. End of explanation resampler = CohenResampler(male_sample, female_sample) resampler.plot_sample_stats() Explanation: Now we can instantiate a CohenResampler and plot the sampling distribution. End of explanation
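For reference (my sketch, not part of the notebook above): the resampling idea in Part Two can be written without the class machinery, using only numpy. The data here is a stand-in drawn from a lognormal with roughly the same parameters as the women's weight distribution above; the function name bootstrap, the sample size and the iteration count are my own choices.

```python
import numpy as np

def bootstrap(sample, stat=np.mean, iters=1000):
    """Resample with replacement and collect the chosen statistic."""
    stats = [stat(np.random.choice(sample, len(sample), replace=True))
             for _ in range(iters)]
    return np.array(stats)

sample = np.random.lognormal(mean=np.log(70.8), sigma=0.23, size=100)  # stand-in data
boot = bootstrap(sample)
print("SE:", boot.std())
print("90% CI:", np.percentile(boot, [5, 95]))
```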
5,094
Given the following text description, write Python code to implement the functionality described below step by step Description: Theoretical Efficiency of Read Until Enrichment The "Read Until" feature of the Oxford Nanopore sequencing technology means a program can see the data coming in at each pore and, depending on that data, reject the molecule inside a certain pore. The actual performance of such a method depends on a lot of factors Step1: 50% ham in sample Step2: 10% ham in sample Step3: 1% ham in sample
Python Code: import matplotlib.pyplot as plt import numpy as np %matplotlib inline def sim_ru(ham_frequency, ham_duration, accuracy): # Monte-Carlo Style n = 1000000 ham = np.random.random(size=n)<ham_frequency durations = np.ones(n) accurate = np.random.random(size=n)<accuracy durations[ham & accurate] = ham_duration durations[~ham & ~accurate] = ham_duration return (np.sum(durations[ham]) / np.sum(durations)) def sim_ru2(ham_frequency, ham_duration, accuracy): # exact calculation long = ((ham_frequency* accuracy) + (1-ham_frequency)*(1-accuracy)) * ham_duration short = ((ham_frequency* (1-accuracy)) + (1-ham_frequency)*(accuracy)) * 1.0 ham = (ham_frequency* accuracy)*ham_duration + (ham_frequency* (1-accuracy))*1 return ham / (long+short) def make_plot(ham_frequency): f, ax = plt.subplots() f.set_figwidth(14) f.set_figheight(6) ax.set_ylim(0,1) x = np.arange(0.5, 1.0,0.001) y = np.zeros(len(x)) handles = [] for j in reversed([2.5,5,10,20,40]): for i in range(len(x)): y[i] = sim_ru2(ham_frequency, j, x[i]) handles.append(ax.plot(x,y, label = "%.1f" % j)) ax.grid() f.suptitle("Ratio of desired data over total data for different values of \"desired length\"/\"rejected length\" ") ax.legend(loc=0); ax.xaxis.set_label_text("Detection Accuracy"); ax.yaxis.set_label_text("Desired Output / Total Output"); Explanation: Theoretical Efficiency of Read Until Enrichment The "Read Until" feature of the Oxford Nanopore sequencing technology means a program can see the data coming in at each pore and, dependend on that data, reject the molecule inside a certain pore. The actual performance of such a method depends on a lot of factors: ratio of desireable over undesireable molecules in the sample accuracy of detection length of event data necessary for the decision latency of event data reaching the controlling program delay between decision and ejecting the molecule time until the pore can accept a new molecule length of DNA strands in the sample In this notebook I boiled it down to three parameters: ham_frequency is the frequency of desired molecules ham_duration is the scale by which the desired molecules are read "longer" accuracy is the accuracy of the classification The analogy to spam detection is chosen because "ham/spam" makes for catchier variable names. This computation considers time and "amount of data" as equivalent. In reality, event speeds vary a lot, but in the long run, duration of reads and length of the strands correlate very strongly. The result of the computation is the ratio of desired time/data over undesired time/data, which is hopefully higher than the original ham_frequency. End of explanation make_plot(0.5) Explanation: 50% ham in sample End of explanation make_plot(0.1) Explanation: 10% ham in sample End of explanation make_plot(0.01) Explanation: 1% ham in sample End of explanation
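A quick consistency check (my addition) between the Monte-Carlo estimator sim_ru and the exact calculation sim_ru2 defined in the row above; the parameter triples are arbitrary and chosen only for illustration.

```python
# relies on sim_ru and sim_ru2 exactly as defined above
for ham_frequency, ham_duration, accuracy in [(0.5, 10, 0.9), (0.1, 20, 0.8), (0.01, 40, 0.95)]:
    mc = sim_ru(ham_frequency, ham_duration, accuracy)      # stochastic estimate (1,000,000 draws)
    exact = sim_ru2(ham_frequency, ham_duration, accuracy)  # closed-form value
    print(f"freq={ham_frequency:<5} dur={ham_duration:<3} acc={accuracy}: "
          f"monte-carlo={mc:.4f}  exact={exact:.4f}")
```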
5,095
Given the following text problem statement, write Python code to implement the functionality described below in problem statement Problem: Can I use strings as input for a DecisionTreeClassifier?
Problem: import numpy as np import pandas as pd from sklearn.tree import DecisionTreeClassifier X = [['asdf', '1'], ['asdf', '0']] clf = DecisionTreeClassifier() from sklearn.feature_extraction import DictVectorizer X = [dict(enumerate(x)) for x in X] vect = DictVectorizer(sparse=False) new_X = vect.fit_transform(X)
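The snippet above stops after vectorising the string features; a hedged continuation (my addition: the labels y and the query row are invented for illustration) shows the one-hot encoded features actually feeding the tree, and that new rows must go through the same fitted DictVectorizer.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.tree import DecisionTreeClassifier

X = [['asdf', '1'], ['asdf', '0']]
y = [0, 1]                                    # hypothetical labels, not in the original question

vect = DictVectorizer(sparse=False)
new_X = vect.fit_transform([dict(enumerate(row)) for row in X])

clf = DecisionTreeClassifier()
clf.fit(new_X, y)

# new rows must be encoded with the *same* fitted vectorizer
query = vect.transform(dict(enumerate(['asdf', '1'])))
print(clf.predict(query))
```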
5,096
Given the following text description, write Python code to implement the functionality described below step by step Description: ES-DOC CMIP6 Model Properties - Toplevel MIP Era Step1: Document Authors Set document authors Step2: Document Contributors Specify document contributors Step3: Document Publication Specify document publication status Step4: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Flux Correction 3. Key Properties --&gt; Genealogy 4. Key Properties --&gt; Software Properties 5. Key Properties --&gt; Coupling 6. Key Properties --&gt; Tuning Applied 7. Key Properties --&gt; Conservation --&gt; Heat 8. Key Properties --&gt; Conservation --&gt; Fresh Water 9. Key Properties --&gt; Conservation --&gt; Salt 10. Key Properties --&gt; Conservation --&gt; Momentum 11. Radiative Forcings 12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2 13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4 14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O 15. Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3 16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3 17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC 18. Radiative Forcings --&gt; Aerosols --&gt; SO4 19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon 20. Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon 21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate 22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect 23. Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect 24. Radiative Forcings --&gt; Aerosols --&gt; Dust 25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic 26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic 27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt 28. Radiative Forcings --&gt; Other --&gt; Land Use 29. Radiative Forcings --&gt; Other --&gt; Solar 1. Key Properties Key properties of the model 1.1. Model Overview Is Required Step5: 1.2. Model Name Is Required Step6: 2. Key Properties --&gt; Flux Correction Flux correction properties of the model 2.1. Details Is Required Step7: 3. Key Properties --&gt; Genealogy Genealogy and history of the model 3.1. Year Released Is Required Step8: 3.2. CMIP3 Parent Is Required Step9: 3.3. CMIP5 Parent Is Required Step10: 3.4. Previous Name Is Required Step11: 4. Key Properties --&gt; Software Properties Software properties of model 4.1. Repository Is Required Step12: 4.2. Code Version Is Required Step13: 4.3. Code Languages Is Required Step14: 4.4. Components Structure Is Required Step15: 4.5. Coupler Is Required Step16: 5. Key Properties --&gt; Coupling ** 5.1. Overview Is Required Step17: 5.2. Atmosphere Double Flux Is Required Step18: 5.3. Atmosphere Fluxes Calculation Grid Is Required Step19: 5.4. Atmosphere Relative Winds Is Required Step20: 6. Key Properties --&gt; Tuning Applied Tuning methodology for model 6.1. Description Is Required Step21: 6.2. Global Mean Metrics Used Is Required Step22: 6.3. Regional Metrics Used Is Required Step23: 6.4. Trend Metrics Used Is Required Step24: 6.5. Energy Balance Is Required Step25: 6.6. Fresh Water Balance Is Required Step26: 7. Key Properties --&gt; Conservation --&gt; Heat Global heat convervation properties of the model 7.1. Global Is Required Step27: 7.2. Atmos Ocean Interface Is Required Step28: 7.3. Atmos Land Interface Is Required Step29: 7.4. Atmos Sea-ice Interface Is Required Step30: 7.5. Ocean Seaice Interface Is Required Step31: 7.6. 
Land Ocean Interface Is Required Step32: 8. Key Properties --&gt; Conservation --&gt; Fresh Water Global fresh water convervation properties of the model 8.1. Global Is Required Step33: 8.2. Atmos Ocean Interface Is Required Step34: 8.3. Atmos Land Interface Is Required Step35: 8.4. Atmos Sea-ice Interface Is Required Step36: 8.5. Ocean Seaice Interface Is Required Step37: 8.6. Runoff Is Required Step38: 8.7. Iceberg Calving Is Required Step39: 8.8. Endoreic Basins Is Required Step40: 8.9. Snow Accumulation Is Required Step41: 9. Key Properties --&gt; Conservation --&gt; Salt Global salt convervation properties of the model 9.1. Ocean Seaice Interface Is Required Step42: 10. Key Properties --&gt; Conservation --&gt; Momentum Global momentum convervation properties of the model 10.1. Details Is Required Step43: 11. Radiative Forcings Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5) 11.1. Overview Is Required Step44: 12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2 Carbon dioxide forcing 12.1. Provision Is Required Step45: 12.2. Additional Information Is Required Step46: 13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4 Methane forcing 13.1. Provision Is Required Step47: 13.2. Additional Information Is Required Step48: 14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O Nitrous oxide forcing 14.1. Provision Is Required Step49: 14.2. Additional Information Is Required Step50: 15. Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3 Troposheric ozone forcing 15.1. Provision Is Required Step51: 15.2. Additional Information Is Required Step52: 16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3 Stratospheric ozone forcing 16.1. Provision Is Required Step53: 16.2. Additional Information Is Required Step54: 17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC Ozone-depleting and non-ozone-depleting fluorinated gases forcing 17.1. Provision Is Required Step55: 17.2. Equivalence Concentration Is Required Step56: 17.3. Additional Information Is Required Step57: 18. Radiative Forcings --&gt; Aerosols --&gt; SO4 SO4 aerosol forcing 18.1. Provision Is Required Step58: 18.2. Additional Information Is Required Step59: 19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon Black carbon aerosol forcing 19.1. Provision Is Required Step60: 19.2. Additional Information Is Required Step61: 20. Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon Organic carbon aerosol forcing 20.1. Provision Is Required Step62: 20.2. Additional Information Is Required Step63: 21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate Nitrate forcing 21.1. Provision Is Required Step64: 21.2. Additional Information Is Required Step65: 22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect Cloud albedo effect forcing (RFaci) 22.1. Provision Is Required Step66: 22.2. Aerosol Effect On Ice Clouds Is Required Step67: 22.3. Additional Information Is Required Step68: 23. Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect Cloud lifetime effect forcing (ERFaci) 23.1. Provision Is Required Step69: 23.2. Aerosol Effect On Ice Clouds Is Required Step70: 23.3. RFaci From Sulfate Only Is Required Step71: 23.4. Additional Information Is Required Step72: 24. Radiative Forcings --&gt; Aerosols --&gt; Dust Dust forcing 24.1. Provision Is Required Step73: 24.2. Additional Information Is Required Step74: 25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic Tropospheric volcanic forcing 25.1. 
Provision Is Required Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation Is Required Step76: 25.3. Future Explosive Volcanic Aerosol Implementation Is Required Step77: 25.4. Additional Information Is Required Step78: 26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic Stratospheric volcanic forcing 26.1. Provision Is Required Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation Is Required Step80: 26.3. Future Explosive Volcanic Aerosol Implementation Is Required Step81: 26.4. Additional Information Is Required Step82: 27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt Sea salt forcing 27.1. Provision Is Required Step83: 27.2. Additional Information Is Required Step84: 28. Radiative Forcings --&gt; Other --&gt; Land Use Land use forcing 28.1. Provision Is Required Step85: 28.2. Crop Change Only Is Required Step86: 28.3. Additional Information Is Required Step87: 29. Radiative Forcings --&gt; Other --&gt; Solar Solar forcing 29.1. Provision Is Required Step88: 29.2. Additional Information Is Required
Python Code: # DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'csir-csiro', 'sandbox-3', 'toplevel') Explanation: ES-DOC CMIP6 Model Properties - Toplevel MIP Era: CMIP6 Institute: CSIR-CSIRO Source ID: SANDBOX-3 Sub-Topics: Radiative Forcings. Properties: 85 (42 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:53:54 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) Explanation: Document Authors Set document authors End of explanation # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) Explanation: Document Contributors Specify document contributors End of explanation # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) Explanation: Document Publication Specify document publication status End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Flux Correction 3. Key Properties --&gt; Genealogy 4. Key Properties --&gt; Software Properties 5. Key Properties --&gt; Coupling 6. Key Properties --&gt; Tuning Applied 7. Key Properties --&gt; Conservation --&gt; Heat 8. Key Properties --&gt; Conservation --&gt; Fresh Water 9. Key Properties --&gt; Conservation --&gt; Salt 10. Key Properties --&gt; Conservation --&gt; Momentum 11. Radiative Forcings 12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2 13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4 14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O 15. Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3 16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3 17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC 18. Radiative Forcings --&gt; Aerosols --&gt; SO4 19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon 20. Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon 21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate 22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect 23. Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect 24. Radiative Forcings --&gt; Aerosols --&gt; Dust 25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic 26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic 27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt 28. Radiative Forcings --&gt; Other --&gt; Land Use 29. Radiative Forcings --&gt; Other --&gt; Solar 1. Key Properties Key properties of the model 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Top level overview of coupled model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of coupled model. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2. Key Properties --&gt; Flux Correction Flux correction properties of the model 2.1. Details Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how flux corrections are applied in the model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3. Key Properties --&gt; Genealogy Genealogy and history of the model 3.1. Year Released Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Year the model was released End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.2. CMIP3 Parent Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 CMIP3 parent if any End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.3. CMIP5 Parent Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 CMIP5 parent if any End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.4. Previous Name Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Previously known as End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4. Key Properties --&gt; Software Properties Software properties of model 4.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4.4. 
Components Structure Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "OASIS" # "OASIS3-MCT" # "ESMF" # "NUOPC" # "Bespoke" # "Unknown" # "None" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 4.5. Coupler Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Overarching coupling framework for model. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.coupling.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5. Key Properties --&gt; Coupling ** 5.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of coupling in the model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 5.2. Atmosphere Double Flux Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Atmosphere grid" # "Ocean grid" # "Specific coupler grid" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 5.3. Atmosphere Fluxes Calculation Grid Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Where are the air-sea fluxes calculated End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 5.4. Atmosphere Relative Winds Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6. Key Properties --&gt; Tuning Applied Tuning methodology for model 6.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.2. Global Mean Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List set of metrics/diagnostics of the global mean state used in tuning model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.3. Regional Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.4. Trend Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List observed trend metrics/diagnostics used in tuning model/component (such as 20th century) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.5. Energy Balance Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.6. Fresh Water Balance Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7. Key Properties --&gt; Conservation --&gt; Heat Global heat convervation properties of the model 7.1. Global Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how heat is conserved globally End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.2. Atmos Ocean Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how heat is conserved at the atmosphere/ocean coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.3. 
Atmos Land Interface Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how heat is conserved at the atmosphere/land coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.4. Atmos Sea-ice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.5. Ocean Seaice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how heat is conserved at the ocean/sea-ice coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.6. Land Ocean Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how heat is conserved at the land/ocean coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8. Key Properties --&gt; Conservation --&gt; Fresh Water Global fresh water convervation properties of the model 8.1. Global Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how fresh_water is conserved globally End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.2. Atmos Ocean Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how fresh_water is conserved at the atmosphere/ocean coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.3. Atmos Land Interface Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how fresh water is conserved at the atmosphere/land coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.4. Atmos Sea-ice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.5. Ocean Seaice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.6. Runoff Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe how runoff is distributed and conserved End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.7. Iceberg Calving Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how iceberg calving is modeled and conserved End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.8. Endoreic Basins Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how endoreic basins (no ocean access) are treated End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.9. Snow Accumulation Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe how snow accumulation over land and over sea-ice is treated End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9. Key Properties --&gt; Conservation --&gt; Salt Global salt convervation properties of the model 9.1. Ocean Seaice Interface Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how salt is conserved at the ocean/sea-ice coupling interface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 10. Key Properties --&gt; Conservation --&gt; Momentum Global momentum convervation properties of the model 10.1. Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how momentum is conserved in the model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11. Radiative Forcings Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5) 11.1. 
Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of radiative forcings (GHG and aerosols) implementation in model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2 Carbon dioxide forcing 12.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 12.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4 Methane forcing 13.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 13.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O Nitrous oxide forcing 14.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 14.2. 
Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15. Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3 Troposheric ozone forcing 15.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3 Stratospheric ozone forcing 16.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 16.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC Ozone-depleting and non-ozone-depleting fluorinated gases forcing 17.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "Option 1" # "Option 2" # "Option 3" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.2. Equivalence Concentration Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Details of any equivalence concentrations used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.3. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 18. Radiative Forcings --&gt; Aerosols --&gt; SO4 SO4 aerosol forcing 18.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 18.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon Black carbon aerosol forcing 19.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 19.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 20. Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon Organic carbon aerosol forcing 20.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 20.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate Nitrate forcing 21.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 21.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect Cloud albedo effect forcing (RFaci) 22.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 22.2. Aerosol Effect On Ice Clouds Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Radiative effects of aerosols on ice clouds are represented? End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 22.3. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 23. Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect Cloud lifetime effect forcing (ERFaci) 23.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 23.2. Aerosol Effect On Ice Clouds Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Radiative effects of aerosols on ice clouds are represented? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 23.3. RFaci From Sulfate Only Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Radiative forcing from aerosol cloud interactions from sulfate aerosol only? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 23.4. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 24. Radiative Forcings --&gt; Aerosols --&gt; Dust Dust forcing 24.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 24.2. 
Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic Tropospheric volcanic forcing 25.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Type A" # "Type B" # "Type C" # "Type D" # "Type E" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How explosive volcanic aerosol is implemented in historical simulations End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Type A" # "Type B" # "Type C" # "Type D" # "Type E" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How explosive volcanic aerosol is implemented in future simulations End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 25.4. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic Stratospheric volcanic forcing 26.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Type A" # "Type B" # "Type C" # "Type D" # "Type E" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How explosive volcanic aerosol is implemented in historical simulations End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Type A" # "Type B" # "Type C" # "Type D" # "Type E" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How explosive volcanic aerosol is implemented in future simulations End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 26.4. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt Sea salt forcing 27.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 27.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "M" # "Y" # "E" # "ES" # "C" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 28. Radiative Forcings --&gt; Other --&gt; Land Use Land use forcing 28.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 28.2. Crop Change Only Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Land use change represented via crop change only? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 28.3. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "N/A" # "irradiance" # "proton" # "electron" # "cosmic ray" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 29. Radiative Forcings --&gt; Other --&gt; Solar Solar forcing 29.1. Provision Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N How solar forcing is provided End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 29.2. Additional Information Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.). End of explanation
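To make the pattern above concrete, here is a minimal sketch of what a filled-in pair of cells could look like, reusing only the DOC.set_id / DOC.set_value calls already shown; the chosen values ("irradiance" for the solar provision, True for crop-change-only land use) are illustrative placeholders, not the actual SANDBOX-3 answers.
# Hypothetical filled-in example -- values are illustrative only.
# 29.1. Solar forcing provision (ENUM, one of the listed valid choices)
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
DOC.set_value("irradiance")
# 28.2. Land use change represented via crop change only? (BOOLEAN)
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
DOC.set_value(True)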
5,097
Given the following text description, write Python code to implement the functionality described below step by step Description: Terminal Velocity by Paulo Marques, 2013/09/23 This notebook discusses the drag forces exerted on a body when traveling through air. Step1: We now define the initial conditions and constants of the problem. Step2: As said, let's define a system of ordinary differential equations in its normal form ${\bf f}' = {\bf g}({\bf f}, t)$. (In the code below we substitute $g()$ for $gm()$ so that it doesn't clash with the acceleration of gravity constant - $g$). Step3: Let's define the conditions to numerically solve the problem, including a time vector Step4: Now let's solve the equations numerically and extract the corresponding $y(t)$ and $v(t)$ Step5: Finally, we can plot the solution. Step6: As you can see, the velocity starts at 10 $m/s$, with the ball going up. Its velocity starts decreasing, goes to zero at max height, and then becomes negative as the ball starts coming down. After a while it reaches its maximum speed Step7: Now, with our numerical simulation, the terminal velocity is
Python Code: %pylab inline from scipy.integrate import odeint from math import sqrt, atan Explanation: Terminal Velocity by Paulo Marques, 2013/09/23 This notebook discusses the <a href="http://en.wikipedia.org/wiki/Drag_(physics)">drag forces</a> exerted on a body when traveling through air. A falling body is subject to two forces: a downward force, $\vec{F_g}$, due to gravity, and an upward force $\vec{F_a}$, due to air resistance. Let's consider a body with mass $m$. Assuming an upward oriented YY axis, the following applies: $$ F_g = - g \cdot m $$ $$ F_a = - sgn(v) \cdot {1 \over 2} \rho C_D A v^2 $$ In these formulas, $m$ is the mass of the body, $\rho$ is the density of air ($\approx 1.2 Kg/m^3$), $g$ is the accelaration of gravity ($\approx 9.8 m/s^2$), $v$ is the velocity of the body, $C_D$ is the drag coeficient of the body and $A$ is its cross-section area. $sgn(x)$ is the sign function that given a number returns +1 for positives and -1 for negatives. In the formula it makes the force always contrary to the movement of the body. Thus, the resulting force $\vec{F}$ experienced by the body is: $$ F = - g \cdot m - sgn(v) \cdot {1 \over 2} \rho C_D A v^2 $$ Given that force is directly related to accelaration, and accelaration to velocity: $$ v = \frac{dy}{dt} $$ $$ F = m \cdot \frac{dv}{dt} $$ substituting, we get this differencial equation: $$ m \cdot \frac{dv}{dt} = - g \cdot m - sgn(v) \cdot {1 \over 2} \rho C_D A v^2 $$ In fact, this can be expressed as a system of differential equations in the form ${\bf f}' = {\bf g}({\bf f}, t)$: $$ \begin{cases} \frac{dy}{dt} = v \\ \frac{dv}{dt} = -g - sgn(v) \cdot {1 \over 2} {\rho \over m} C_D A v^2 \end{cases} $$ So, let's simulate this with Python! Let's start by importing some basic libraries: End of explanation # Constants g = 9.8 # Accelaration of gravity p = 1.2 # Density of air # Caracteristics of the problem m = 0.100 # A 100 g ball r = 0.10 # 10 cm radius Cd = 0.5 # Drag coeficient for a small spherical object y0 = 1000.0 # Initial height of the body (1000 m) v0 = 10.0 # Initial velocity of the body (10 m/s^2, going up) A = math.pi*r**2 # Cross-section area of the body Explanation: We now define the initial conditions and constants of the problem. End of explanation sgn = lambda x: math.copysign(1, x) # Auxiliary function to calculate the sign of a number def gm(f, t): (y, v) = f # Extract y and v (i.e., dy/dt) from the f mapping dy_dt = v # The differential equations dv_dt = -1.0*g - sgn(v)*(1./2.)*(p/m)*Cd*A*v**2 return [dy_dt, dv_dt] # Return the derivatives Explanation: As said, let's define a system of ordinary differential equations in its normal form ${\bf f}' = {\bf g}({\bf f}, t)$. (In the code bellow we substitute $g()$ for $gm()$ so that it doesn't clash with the acceleration of gravity constant - $g$). End of explanation # Initial conditions (position and velocity) start = [y0, v0] # Time vector (from 0 to 5 secs) tf = 5.0 t = linspace(0, tf, int(tf*100)) Explanation: Let's define the conditions to numerically solve the problem, including a time vector: End of explanation f = odeint(gm, start, t) y = f[:, 0] v = f[:, 1] Explanation: Now let's solve the equations numericaly and extract the corresponding $y(t)$ and $v(t)$: End of explanation figure(figsize=(14, 6)) subplot(1, 2, 1, title='Velocity over time') xlabel('Time (sec)') ylabel('Velocity (m/sec)') plot(t, v) subplot(1, 2, 2, title='Height over time') xlabel('Time (sec)') ylabel('Height (m)') plot(t, y) Explanation: Finally, we can plot the solution. 
End of explanation vt = sqrt( (2.*m*g) / (p*A*Cd) ) vt Explanation: As you can see, the velocity starts at 10 $m/s$, with the ball going up. Its velocity starts decreasing, goes to zero at max height, and then becomes negative as the ball starts coming down. After a while it reaches its maximum speed: terminal velocity. The theoretical terminal velocity, $V_t$, is: $$ V_t = \sqrt{\frac{2 m g}{\rho A C_D}} $$ Calculating it is easy: End of explanation # The terminal velocity vt_numeric = abs(min(v)) vt_numeric Explanation: Now, with our numerical simulation, the terminal velocity is: End of explanation
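As a self-contained cross-check on this notebook — a sketch using the same constants defined above (m = 0.100 kg, r = 0.10 m, Cd = 0.5, rho = 1.2 kg/m^3, g = 9.8 m/s^2) — the snippet below integrates the same equations over a longer 20 s window (an arbitrary choice, long enough for the speed to saturate) and compares the numerically reached speed with the analytic $V_t$:
import math
import numpy as np
from scipy.integrate import odeint

g, rho = 9.8, 1.2                 # gravity and air density, as above
m, r, Cd = 0.100, 0.10, 0.5       # ball mass, radius and drag coefficient
A = math.pi * r**2                # cross-section area
y0, v0 = 1000.0, 10.0             # same initial height and velocity

def rhs(f, t):
    y, v = f                      # unpack position and velocity
    drag = math.copysign(1, v) * 0.5 * (rho / m) * Cd * A * v**2
    return [v, -g - drag]         # dy/dt = v, dv/dt = -g - drag term

t = np.linspace(0, 20, 2000)
sol = odeint(rhs, [y0, v0], t)
v_terminal_numeric = abs(sol[:, 1].min())
v_terminal_theory = math.sqrt(2 * m * g / (rho * A * Cd))
print(v_terminal_numeric, v_terminal_theory)   # the two values should agree closely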
5,098
Given the following text description, write Python code to implement the functionality described below step by step Description: Gapfilling GrowMatch and SMILEY are gap-filling algorithms, which try to make the minimal number of changes to a model and allow it to simulate growth. For more information, see Kumar et al. Please note that these algorithms are Mixed-Integer Linear Programs, which need solvers such as gurobi or cplex to function correctly. Step1: In this model D-Fructose-6-phosphate is an essential metabolite. We will remove all the reactions using it, and add them to a separate model. Step2: Now, because of these gaps, the model won't grow. Step3: GrowMatch We will use GrowMatch to add back the minimal number of reactions from this set of "universal" reactions (in this case just the ones we removed) to allow it to grow. Step4: We can obtain multiple possible reaction sets by having the algorithm go through multiple iterations. Step5: SMILEY SMILEY is very similar to growMatch, only instead of setting growth as the objective, it sets production of a specific metabolite
Python Code: import cobra.test model = cobra.test.create_test_model("salmonella") Explanation: Gapfilling GrowMatch and SMILEY are gap-filling algorithms, which try to make the minimal number of changes to a model and allow it to simulate growth. For more information, see Kumar et al. Please note that these algorithms are Mixed-Integer Linear Programs, which need solvers such as gurobi or cplex to function correctly. End of explanation # remove some reactions and add them to the universal reactions Universal = cobra.Model("Universal_Reactions") for i in [i.id for i in model.metabolites.f6p_c.reactions]: reaction = model.reactions.get_by_id(i) Universal.add_reaction(reaction.copy()) reaction.remove_from_model() Explanation: In this model D-Fructose-6-phosphate is an essential metabolite. We will remove all the reactions using it, and add them to a separate model. End of explanation model.optimize().f Explanation: Now, because of these gaps, the model won't grow. End of explanation r = cobra.flux_analysis.growMatch(model, Universal) for e in r[0]: print(e.id) Explanation: GrowMatch We will use GrowMatch to add back the minimal number of reactions from this set of "universal" reactions (in this case just the ones we removed) to allow it to grow. End of explanation result = cobra.flux_analysis.growMatch(model, Universal, iterations=4) for i, entries in enumerate(result): print("---- Run %d ----" % (i + 1)) for e in entries: print(e.id) Explanation: We can obtain multiple possible reaction sets by having the algorithm go through multiple iterations. End of explanation r = cobra.flux_analysis.gapfilling.SMILEY(model, "ac_e", Universal) for e in r[0]: print(e.id) Explanation: SMILEY SMILEY is very similar to growMatch, only instead of setting growth as the objective, it sets production of a specific metabolite End of explanation
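One way to sanity-check a gap-filling result — a sketch assuming the same legacy cobra 0.x API used above — is to copy the gapped model, add back only the reactions growMatch proposed, and confirm the objective value becomes non-zero again:
# `model`, `Universal` and `r` are assumed to exist as constructed above.
repaired = model.copy()                    # keep the gapped model untouched
for reaction in r[0]:                      # reactions proposed by growMatch
    repaired.add_reaction(reaction.copy())
print(repaired.optimize().f)               # > 0 means the proposed set closes the gap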
5,099
Given the following text description, write Python code to implement the functionality described below step by step Description: Flux Noise Mask Design General Notes Step1: CPW We want to use the same cpw dimensions for resonator and feedline/purcell filter cpw's so the kinetic inductance correction is the same for everything. Step2: $\lambda/4$ readout resonators IMPAs from Google will be good in the 4-6GHz range. We will aim for resonators near 6GHz, but have a spread from 5-6GHz on the mask. They should be spread every 30MHz or so. Step3: Qubit parameters From Ted Thorbeck's notes Step4: Feedline with and without crossovers Step5: Inductive Coupling From [1], we have the dephasing of a qubit Step6: Purcell Filter Do we even need a purcell filter? [3] Without purcell filter Step7: Loss from XY line From Thorbeck's notes, we have $R_p = R_s(1+Q_s^2)$ and $C_p = C_s\left(\frac{Q_s^2}{1+Q_s^2}\right)$ where $Q_s = \frac{1}{\omega R_s C_s}$ and the "s" and "p" subscript refer to just the coupling capacitor and Z0 of the line in a series or parallel configuration. Combining this with the normal LC of the qubit, we can find the loss
Python Code: qubits = [] for i in range(3): q = qubit.Qubit('Transmon') q.C_g = 3.87e-15 q.C_q = 75.1e-15 q.C_resToGnd = 79.1e-15 qubits.append(q) q = qubit.Qubit('OCSQubit') q.C_g = 2.94e-15 q.C_q = 48.5e-15 q.C_resToGnd = 51.5e-15 qubits.append(q) Explanation: Flux Noise Mask Design General Notes: * Looking at the chip left to right, top to bottom, we have Q1-Q4. * Q4 is the charge sensitive qubit. All others are normal transmons. Still to do: * simulate mutual Problems with mask: * One of the XY (Q2) lines is messed up * capacitances change resonator frequencies so nothing matches * Alex thinks flux bias line might be bad * Resonators need to be moved closer (Z0 -> Z0^2) * Should probably change the test pattern to have less substrate contribution * Purcell filter changes resonator coupling, so Q was wrong. Transmon Selected params: + w = 34, l_c = 90, w_c = 150: + C_q = 75.6fF + C_g = 3.39fF + C_resToGnd = 79.1fF + d_xy = 80: C_xy = 99.5fF Charge Sensitive Selected params: + w = 14, l_c = 90, w_c = 200: + C_q = 48.5fF + C_g = 2.94fF + C_resToGnd = 107fF + d_xy = 50: C_xy = 95.5fF End of explanation cpw = cpwtools.CPW(material='al', w=10., s=7.) print cpw Explanation: CPW We want to use the same cpw dimensions for resonator and feedline/purcell filter cpw's so the kinetic inductance correction is the same for everything. End of explanation l_curve = 2*pi*50/4 coupling_length = 287 tot_length = l_curve*(1+1+2+2+2+2) + 2*1000 + 1156 + 350 + 500 + coupling_length # this coupling length ranges from 45-150 depending on desired Qc. # Plan for 45, can always trombone down L4 = cpwtools.QuarterLResonator(cpw, tot_length) print('The frequency is brought down significantly by the capacitance through to ground through the qubit, ' + 'as well as the self-capacitance of the coupling cap to ground. These capacitances pull down the transmon ' + 'frequency more, so we will set Q3 to have no extension, and set the other qubit frequencies around it.') print('Bare resonator frequency = {:.3f} GHz'.format(L4.fl()/1e9)) print def L4FromQubit(q): L4 = cpwtools.QuarterLResonator(cpw, tot_length) seriesCap = q.C_g*q.C_q/(q.C_g+q.C_q) L4.addCapacitiveCoupling('g', seriesCap, Z0 = 0) L4.addCapacitiveCoupling('c_coupler', q.C_resToGnd, Z0 = 0) return L4 L4 = L4FromQubit(qubits[2]) f0 = L4.fl() for i,q in enumerate(qubits): L4 = L4FromQubit(q) length = L4.setLengthFromFreq(f0 + 0.04e9*[-2, -1, 0, 1][i]) q.C_r = L4.C() q.omega_r = L4.wl() q.omega_q = 2*pi*(f0-1e9) print("{}: l = {:.2f}um f_l = {:.3f}GHz C_r = {:.2f}fF extension = {:.2f}um".format( q.name, 1e6*q.res_length, L4.fl()/1e9, 1e15*L4.C(), (1e6*length - tot_length)/2)) Explanation: $\lambda/4$ readout resonators IMPAs from Google will be good in the 4-6GHz range. We will aim for resonators near 6GHz, but have a spread from 5-6GHz on the mask. They should be spread every 30MHz or so. 
End of explanation qb = deepcopy(qubits[2]) g = 2*pi*50e6 # qubit-resonator coupling in Hz print('Range of C_q on the mask:') print "C_q = 30fF: E_c = {:.2f}MHz".format( qb.E_c(30e6)/(2*pi*hbar)*1e15 ) print "C_q = 95fF: E_c = {:.2f}MHz".format( qb.E_c(95e6)/(2*pi*hbar)*1e15 ) print print('Ideal:') print "Transmon: E_c = 250MHz: C_sigma = C_q + C_g = {:.2f}fF".format( e**2/2/250e6/(2*pi*hbar)*1e15 ) print "Charge Sensitive: E_c = 385MHz: C_sigma = C_q + C_g = {:.2f}fF".format( e**2/2/385e6/(2*pi*hbar)*1e15 ) # With caps chosen from the mask: for q in qubits: print "{}: C_q = {:.2f}fF E_c = {:.2f}MHz E_j = {:.2f}GHz alpha = {:.2f}MHz g = {:.2f}MHz C_g = {:.2f}fF".format( q.name, 1e15*q.C_q, -q.E_c()/(2*pi*hbar)/1e6, q.E_j()/2/pi/hbar/1e9, q.alpha(q.E_c(),q.E_j())/(2*pi)/1e6, g/2/pi/1e6, 1e15*q.cap_g(g)) # We choose the closest g capacitance from the mask for q in qubits: print "{}: C_g = {:.2f}fF g = {:.2f}MHz Chi_0/2pi = {:.2f}MHz Chi/2pi = {:.2f}MHz Q_r = {:.0f} kappa = {:.2f}MHz 1/kappa = {:.0f}ns I_c={:.2f}nA n_crit={:.0f}".format( q.name, 1e15*q.cap_g(q.g()), q.g()/2/pi/1e6, 1e-6*q.Chi_0()/2/pi, 1e-6*q.Chi()/2/pi, q.Q_r(), q.omega_r/q.Q_r()*1e-6/2/pi, q.Q_r()/q.omega_r*1e9, q.I_c()*1e9, ((q.omega_q-q.omega_r)/2/q.g())**2) delta = 380e-6; #2\Delta/e in V Jc = 1e8*673e-9 # A/cm^2 nJJs = [2,1,1,2] for i,q in enumerate(qubits): print("{}: I_c = {:.2f}nA R_N = {:.2f}k width = {} x {:.3f}nm".format(q.name, q.I_c()*1e9, 1e-3*pi/4*delta/q.I_c(), nJJs[i], 1e9*q.I_c()/(1e4*Jc)/100e-9/nJJs[i] )) for q in qubits: print "{}: Ej/Ec = {:.3f} Charge dispersion = {:.3f}MHz".format(q.name, q.E_j()/q.E_c(), q.charge_dispersion()/2/pi/hbar/1e6) # What variation in C_g should be included on mask for the C_q variation we have? for C_q_ in [85e-15, 29e-15, e**2/2/250e6]: for g_ in [2*pi*25e6, 2*pi*50e6, 2*pi*200e6]: qb.C_q = C_q_ print "C_q = {:.2f}fF g = {:.2f}MHz C_g = {:.2f}fF".format( 1e15*C_q_, g_/2/pi/1e6, 1e15*qb.cap_g(g_)) Explanation: Qubit parameters From Ted Thorbeck's notes: $E_c = \frac{e^2}{2C}$, $E_c/\hbar=\alpha=\text{anharmonicity}$ $E_J = \frac{I_o \Phi_0}{2 \pi} $ $\omega_q = \sqrt{8E_JE_c}/\hbar $ $g = \frac{1}{2} \frac{C_g}{\sqrt{(C_q+C_g)(C_r+C_g)}}\sqrt{\omega_r\omega_q}$ We want g in the range 25-200MHz for an ideal anharmonicity $\alpha$=250MHz End of explanation cpw.setKineticInductanceCorrection(False) print cpw cpwx = cpwtools.CPWWithBridges(material='al', w=1e6*cpw.w, s=1e6*cpw.s, bridgeSpacing = 250, bridgeWidth = 3, t_oxide=0.16) cpwx.setKineticInductanceCorrection(False) print cpwx Explanation: Feedline with and without crossovers End of explanation d = 5 MperL = inductiveCoupling.inductiveCoupling.CalcMutual(cpw.w*1e6, cpw.w*1e6, cpw.s*1e6, cpw.s*1e6, d, 10*cpw.w*1e6)[0] for q in qubits: M = 1/(np.sqrt(q.Q_r()*pi/8/cpw.z0()**2)*q.omega_r) print "{} M = {:.2f}pH coupling length = {:.2f}um".format(q.name, M*1e12, M/MperL*1e6) for q in [3000,6000,9000,15000,21000,27000,33000]: print "Q_c={} l_c={:.2f}".format(q,1/(np.sqrt(q*pi/8/cpw.z0()**2)*qubits[2].omega_r)/MperL*1e6) Explanation: Inductive Coupling From [1], we have the dephasing of a qubit: $\Gamma_\phi = \eta\frac{4\chi^2}{\kappa}\bar{n}$, where $\eta=\frac{\kappa^2}{\kappa^2+4\chi^2}$, $\bar{n}=\left(\frac{\Delta}{2g}\right)^2$ $\Gamma_\phi = \frac{4\chi^2\kappa}{\kappa^2+4\chi^2}\left(\frac{\Delta}{2g}\right)^2$ To maximize the efficiency of readout, we want to maximize the rate of information leaving the system (into the readout chain), or equivilently, maximize dephasing. 
$\partial_\kappa\Gamma_\phi = 0 = -\frac{4\chi^2(\kappa^2-4\chi^2)}{(\kappa^2+4\chi^2)^2}$ when $2\chi=\kappa$.
$2\chi = \kappa_r = \omega_r/Q_r$
$ Q_{r,c} = \frac{8Z_0^2}{\pi(\omega M)^2}$ [2]
We want a $Q_c$ of 3k-30k
[1] Yan et al. The flux qubit revisited to enhance coherence and reproducibility. Nature Communications, 7, 1–9. http://doi.org/10.1038/ncomms12964
[2] Matt Beck's Thesis, p.39
End of explanation
l_curve = 2*pi*50/4
tot_length = l_curve*(1+2+2+2+1)*2 + 4*750 + 2569 + 4*450 + 2*106
purcell = cpwtools.HalfLResonator(cpw,tot_length)
purcell.addCapacitiveCoupling('in', 40e-15)
purcell.addCapacitiveCoupling('out', 130e-15)
print( "f_max = {:.3f}GHz Q_in = {:.2f} Q_out = {:.2f}".format( 1e-9*purcell.fl(), purcell.Qc('in'), purcell.Qc('out') ) )
purcell.l = (tot_length + 503*4)*1e-6
print( "f_min = {:.3f}GHz Q_in = {:.2f} Q_out = {:.2f}".format( 1e-9*purcell.fl(), purcell.Qc('in'), purcell.Qc('out') ) )
print
print('The measured purcell filter (no crossovers) seems to be 150-200MHz below expected. This has been accounted for below.')
f0 = (qubits[1].omega_r + qubits[2].omega_r)/2/2/pi
purcell.setLengthFromFreq(f0 + 175e6) # The measured purcell filter (no crossovers) seems to be 150-200MHz below expected.
print "f = {:.2f}GHz l = {:.3f}um offset = {:.3f}um Q_in = {:.2f} Q_out = {:.2f}".format( 1e-9*purcell.fl(), purcell.l*1e6, (purcell.l*1e6-tot_length)/4, purcell.Qc('in'), purcell.Qc('out') )
print "V_out/V_in =", (purcell.Qc('in')/purcell.Qc('out'))**0.5
print "{:.2f}% power lost through input".format( 100*purcell.Ql()/purcell.Qc('in') )
print "{:.2f}% power lost through output".format( 100*purcell.Ql()/purcell.Qc('out') )
print "{:.2f}% power lost internally".format( 100*purcell.Ql()/purcell.Qint() )
print
print "The purcell filter frequency goes up by 310MHz when crossovers are added:"
purcellx = deepcopy(purcell)
purcellx.cpw = cpwx
print "f = {:.2f}GHz l = {:.3f}um Q_in = {:.2f} Q_out = {:.2f}".format( 1e-9*purcellx.fl(), purcellx.l*1e6, purcellx.Qc('in'), purcellx.Qc('out') )
print "Purcell Filter FWHM = {:.2f}MHz".format(2*pi*f0/purcell.Ql()/2/pi/1e6)
print "Purcell Filter Q_l = {:.2f}".format(purcell.Ql())
print
for q in qubits:
    kappa_r = q.omega_r/q.Q_r()
    Delta = q.omega_q - q.omega_r
    print "{}: T1 limit (no purcell) = {:.2f}us T1 limit (purcell) = {:.2f}us".format( q.name, (Delta/q.g())**2/kappa_r * 1e6, (Delta/q.g())**2 * (q.omega_r/q.omega_q) * (2*Delta/q.omega_r*purcell.Ql())**2/kappa_r * 1e6 )
Explanation: Purcell Filter
Do we even need a Purcell filter? [3]
Without Purcell filter: $\kappa_r T_1 \le \left(\frac{\Delta}{g}\right)^2$
With Purcell filter: $\kappa_r T_1 \le \left(\frac{\Delta}{g}\right)^2 \left(\frac{\omega_r}{\omega_q}\right) \left(\frac{2\Delta}{\omega_r/Q_{pf}}\right)^2$
$\kappa_r = \omega_r/Q_r$
With the readout resonators spaced ~30MHz apart, we need a bandwidth of at least 4*30MHz=120MHz. We have a range of readout resonators from 5-6GHz.
[3] Jeffrey et al. Fast accurate state measurement with superconducting qubits. Physical Review Letters, 112(19), 1–5.
http://doi.org/10.1103/PhysRevLett.112.190504
End of explanation
C_q = qubits[2].C_q
L_q = 1/(qubits[2].omega_q**2 * C_q)  # equivalent qubit inductance, from omega_q = 1/sqrt(L_q*C_q)
R_s = 50        # line impedance (Ohms)
C_s = 0.1e-15   # XY coupling capacitance
Q_s = 1/(qubits[2].omega_q * R_s * C_s)  # series Q of the coupling cap and line
R_p = R_s*(1 + Q_s**2)                   # series -> parallel transformation
C_p = C_s * Q_s**2/(1 + Q_s**2)
omega = 1/np.sqrt((C_q+C_p)*L_q)  # qubit frequency shifted by the added C_p
Q_xy = omega*R_p*(C_q+C_p)        # qubit Q set by loading from the XY line
print("f: {:.3f}GHz --> {:.3f}GHz".format( 1e-9/np.sqrt(C_q*L_q)/2/pi, 1e-9*omega/2/pi))
print("Q = {:.2f}".format(Q_xy))
print("1/kappa = {:.2f}us".format(1e6*Q_xy/omega))
Explanation: Loss from XY line
From Thorbeck's notes, we have $R_p = R_s(1+Q_s^2)$ and $C_p = C_s\left(\frac{Q_s^2}{1+Q_s^2}\right)$ where $Q_s = \frac{1}{\omega R_s C_s}$, and the "s" and "p" subscripts refer to the series and parallel representations of just the coupling capacitor and the line's $Z_0$. Combining this parallel RC with the qubit's LC, we can find the loaded Q and hence the loss (T1 limit) from the XY line.
End of explanation
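To see how strongly the choice of XY coupling capacitor sets this limit, here is a small standalone sweep (a sketch with assumed numbers, not taken from the design code: $f_q \approx 4.8$GHz, the charge-sensitive $C_q = 48.5$fF from above, a 50$\Omega$ line, and the small frequency pull from $C_p$ ignored):
import numpy as np
f_q, C_q, R_s = 4.8e9, 48.5e-15, 50.      # assumed qubit frequency, qubit cap, line impedance
omega = 2*np.pi*f_q
for C_s in np.array([0.05, 0.1, 0.2, 0.5])*1e-15:
    Q_s = 1/(omega*R_s*C_s)               # series Q of the coupling cap and line
    R_p = R_s*(1 + Q_s**2)                # parallel-equivalent resistance
    C_p = C_s*Q_s**2/(1 + Q_s**2)         # parallel-equivalent capacitance
    Q_xy = omega*R_p*(C_q + C_p)          # qubit Q set by the XY line loading
    print("C_s = {:.2f}fF 1/kappa = {:.1f}us".format(1e15*C_s, 1e6*Q_xy/omega))
Since $Q_s \gg 1$, $1/\kappa$ scales roughly as $1/C_s^2$, so shrinking the 0.1fF coupler buys T1 headroom quickly, at the cost of a correspondingly weaker XY drive.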