Columns: Unnamed: 0 (int64, 0–16k) · text_prompt (string, lengths 110–62.1k) · code_prompt (string, lengths 37–152k)
5,200
Given the following text description, write Python code to implement the functionality described below step by step Description: Monads Monads are the most feared concept of FP, so I reserve a complete chapter for understanding this concept. What is a monad? Right now, my understanding is that monads are a very flexible concept that basically allows attaching context to an otherwise stateless system. This means that, through a monad, the application of an otherwise pure function can be made dependent on context, so that a function will be executed differently in different contexts. An easy example Step1: I now instantiate an instance of this class with a correctly set street attribute in the address dict. Then, everything works well when we want to query the street address from this company Step2: However, when we want to get the street name when the company doesn't have a street attribute, this lookup will fail and throw an error Step3: What we would normally do to alleviate this issue is to write a function that deals with null values Step4: We now see that we are able to complete the request without an error, returning None if there is no address given or if there is no dict entry for "street" in the address. But wouldn't it be nice to have this handled once and for all? Enter the "Maybe" monad! Step5: Now, we can rewrite get_street as get_street_from_company, using two helper functions
Python Code: class Company(): def __init__(self, name, address=None): self.address = address self.name = name def get_name(self): return self.name def get_address(self): return self.address Explanation: Monads Monads are the most feared concept of FP, so I reserve a complete chapter for understanding this concept. What is a monad? Right now, my understanding is that monads are a very flexible concept that basically allows attaching context to an otherwise stateless system. This means that, through a monad, the application of an otherwise pure function can be made dependent on context, so that a function will be executed differently in different contexts. An easy example: The maybe monad We will start with an easy example: Let's assume we have the task of looking up a street name from a company record. If we'd do it the normal, non-functional way, we'd have to write functions that look up these records and check if the results are not NULL: This example is heavily inspired by https://unpythonic.com/01_06_monads/ The following is a simple company class, where the address attribute is a simple dict containing the detailed address information. End of explanation cp1 = Company(name="Meier GmbH", address={"street":"Herforstweg 4"}) cp1.get_name() cp1.get_address() cp1.get_address().get("street") Explanation: I now instantiate an instance of this class with a correctly set street attribute in the address dict. Then, everything works well when we want to query the street address from this company: End of explanation cp2 = Company("Schultze AG") cp2.get_name() cp2.get_address().get("street") Explanation: However, when we want to get the street name when the company doesn't have a street attribute, this lookup will fail and throw an error: End of explanation def get_street(company): address = company.get_address() if address: if "street" in address: return address.get("street") return None return None get_street(cp2) cp3 = Company(name="Wifi GbR", address={"zipcode": 11476}) get_street(cp3) Explanation: What we would normally do to alleviate this issue is to write a function that deals with null values: End of explanation class Maybe(): def __init__(self, value): self.value = value def bind(self, fn): if self.value is None: return self return fn(self.value) def get_value(self): return self.value Explanation: We now see that we are able to complete the request without an error, returning None if there is no address given or if there is no dict entry for "street" in the address. But wouldn't it be nice to have this handled once and for all? Enter the "Maybe" monad! End of explanation def get_address(company): return Maybe(company.get_address()) def get_street(address): return Maybe(address.get('street')) def get_street_from_company(company): return (Maybe(company) .bind(get_address) .bind(get_street) .get_value()) get_street_from_company(cp1) get_street_from_company(cp3) Explanation: Now, we can rewrite get_street as get_street_from_company, using two helper functions End of explanation
5,201
Given the following text description, write Python code to implement the functionality described below step by step Description: Unlike other programs that have a single programming interface (matlab) or a dominant interface du jour (R with RStudio), Python has a whole ecosystem of programs for writing it. This can be confusing at first: with so much choice, what should you use for your project? This presentation will cover some of the most popular Python interfaces, their pros and cons, and some situations in which one may be preferable to another. We will also discuss some operational details of the Anaconda package management system. You can see a recording of this presentation here Step1: 1. Jupyter notebooks These are our primary teaching tool. Pros Web based interface, easy to maintain Inline figures and markdown cells make great workbooks Encourages self documenting code "magic" functions to interact with operating system Can share interactive notebooks online, e.g. via Binder Cons Harder to automate the scripts Makes a mess in git Requires a GUI to run/efficiently examine the notebooks Also check out jupyterlab. This is the new standard for jupyter. Much more powerful and integrated. All projects written in notebooks can be continued in lab with no changes needed Step2: 2. Integrated Development Environments (IDEs) These are full featured tools for code development. Spyder is very popular among scientists. Especially if you are coming from a matlab or RStudio background, the appearance of this IDE is very familiar and comforting. The whole thing is itself made in Python, which is pretty cool. Pycharm is like Spyder on steroids Pros See variables, file system, command line and code at a glance Loads of plugins (especially Pycharm) Smart autocompletion Code highlighting for e.g. unused imports, missing whitespace Can handle outside programs like git Cons Heavy on OS resources, especially RAM Can be slow to start Step3: 3. Python fresh from the command line Just open up a Python prompt and start coding This is a fairly rare use case unless you are doing something very short. However, it's good to remember that this is available. On pretty much any unix system (Linux, Mac etc) you can get straight to Python from the command line. This can be useful if you're logged in to a remote server and need to execute some Python in a hurry. If you're writing more than a couple of lines however, you'll want to write some .py files and run them Step4: 4. Python in files You can write Python in any text editor program. On UNIX systems vim and emacs remain popular after several decades. Atom is a more user friendly GUI based option. Windows users can try notepad++ for Python support Pros Simple and lightweight Always there for you (especially vim) Super portable scripts Easy to automate with tools like cron Cons Limited autocompletion and error checking No easy way to check workspace (variables, path etc) Working with figures can be difficult (need to save to file and display) Providing inputs to Python scripts run from the command line There are different ways to turn your Python program (.py) into a command-line tool. We will demonstrate two of these options below. sys.argv The sys module is part of the standard Python library and contains functions to access and modify variables of the Python runtime environment.
In this tutorial, we're only demonstrating one of its functions Step5: Now we can run this script called halloween.py in the shell as follows Step6: So, everything after python3 halloween.py ends up as a string in a list returned by sys.argv. The first element of sys.argv is always the name of the program that is being run. argparse With argparse, you can easily supply your Python program with input from the command line in a more user-friendly way. Inputs are supplied to your Python program in the following format Step7: In the terminal, we can provide inputs using the flags we specified Step8: One of the advantages of argparse is that a help function is automatically generated from the "help" argument you supply when adding options Step9: See the documentation and tutorial to find out what else you can do with argparse. 5. Python on the HPC Depending on your research, your data and your computer, you may want to consider running some or most of your analyses and experiments on a High Performance Computer (HPC). While the HPC is running your Python programs, your own machine is not burdened, so you can freely use it for other tasks or shut it off. UEA has its own HPC for research
Python Code: from pygments import highlight from pygments.lexers import PythonLexer from pygments.formatters import HtmlFormatter import IPython Explanation: Unlike other programs that have a single programming interface (matlab) or a dominant interface de jour (R with RStudio), Python has a whole ecosystem of programs for writing it. This can be confusing at first, with so much choice, what should you use for your project? This presentation will cover some of the most popular Python interfaces, their pros and cons, and some situations in which one may be preferable to another. We will also discuss some operational details of the Anaconda package management system. You can see a recording of this presentation here End of explanation # Demo some jupyter stuff here Explanation: 1. Jupyter notebooks These are our primary teaching tool. Pros Web based interface, easy to maintain Inline figures and markdown cells make great workbooks Encourages self documenting code "magic" functions to interact with operating system Can share interactive notebooks online, e.g. via Binder Cons Harder to automate the scripts Makes a mess in git Requires a GUI to run/efficiently examine the notebooks Also check out jupyterlab. This is the new standard for jupyter. Much more powerful and integrated. All projects written in notebooks can be continued in lab with no changes needed End of explanation # Demo an IDE, including code hints and autocompletion Explanation: 2. Integrated Development Environments (IDEs) These are full featured tools for code development. Spyder is very popular among scientists. Especially if you are coming from a matlab or RStudio background, the appearance of this IDE is very familiar and comforting. The whole thing isitself made in Python which is pretty cool Pycharm is like Spyder on steriods Pros See variables, file system, command line and code at a glance Loads of plugins (especially Pycharm) Smart autocompletion Code highlighting for e.g. unused imports, missing whitespace Can handle outside programs like git Cons Heavy on OS resources, especially RAM Can be slow to start End of explanation # Python command line demo Explanation: 3. Python fresh from the command line Just open up a Python prompt and start coding This is a farily rare use case unless you are doing something very short. However, it's good to remember that this is availble. On pretty much any unix system (Linux, Mac etc) you can get straight to Python from the command line. This can be useful if you're logged in to a remote server and need to execute some Python in a hurry. If you're writing more than a couple of lines however, you'll want to write some .py files and run them End of explanation # Show content of a python script with syntax highlighting. Shamelessly copied from jgosmann's answer on # stackoverflow.com/questions/19197931/how-to-show-as-output-cell-the-contents-of-a-py-file-with-syntax-highlighting with open('halloween_sysargv.py') as f: code = f.read() formatter = HtmlFormatter() IPython.display.HTML('<style type="text/css">{}</style>{}'.format( formatter.get_style_defs('.highlight'), highlight(code, PythonLexer(), HtmlFormatter()))) Explanation: 4. Python in files You can write Python in any text editor program. On UNIX systems vim and emacs remain popular after several decades. Atom is a more user friendly GUI based option. 
Windows users can try notepad++ for Python support Pros Simple and lightweight Always there for you (especially vim) Super portable scripts Easy to automate with tools like cron Cons Limited autocompletion and error checking No easy way to check workspace (variables, path etc) Working with figures can be difficult (need to save to file and display) Providing inputs to Python scripts run from the command line There are different ways to turn your Python program (.py) into a commandline tool. We will demonstrate two of these options below. sys.argv The sys module is part of the standard Python library and contains functions to access and modify variables of the Python runtime environment. In this tutorial, we're only demonstrating one of its functions: sys.argv. Let's look at the contents of a python script called halloween_sysargv.py below. It is a very simple demonstration of how to provide numerical, string (for example filenames!) or list inputs to a python program. End of explanation ! python3 halloween_sysargv.py 13 pumpkin cat,bat,spider # the exlamation mark tells Jupyter we're running a shell command. Explanation: Now we can run this script called halloween.py in the shell as follows: End of explanation # Show content of a python script with syntax highlighting. Shamelessly copied from jgosmann's answer on # stackoverflow.com/questions/19197931/how-to-show-as-output-cell-the-contents-of-a-py-file-with-syntax-highlighting with open('halloween_argparse.py') as f: code = f.read() IPython.display.HTML('<style type="text/css">{}</style>{}'.format( formatter.get_style_defs('.highlight'), highlight(code, PythonLexer(), HtmlFormatter()))) Explanation: So, everything after python3 halloween.py ends up as a string in a list returned by sys.argv. The first element of sys.argv is always the name of the program that is being run. argparse With argparse, you can easily supply your Python program with input from commandline in a more user friendly way. Inputs are supplied to your python program in the following format: python myprogram.py -a avalue -b bvalue --option-c cvalue -f The predecessor of argparse is optparse. Content of halloween_argparse.py: End of explanation ! python3 halloween_argparse.py -n 13 --animals=cat,bat,spider,wolf -c # ! running in shell Explanation: In the terminal, we can provide inputs using the flags we specified: End of explanation ! python halloween_argparse.py --help # ! running in shell Explanation: One of the advantages of argparse is that a help function is automatically generated from the "help" argument you supply when adding options: End of explanation HTML(html) Explanation: See the documentation and tutorial to find out what else you can do with argparse. 5. Python on the HPC Depending on your research, your data and your computer, you may want to consider running some or most of your analyses and experiments on a High Performance Computer (HPC). While the HPC is running your Python programs, your own machine is not burdened, so you can freely use it for other tasks or shut it off. UEA has its own HPC for research: the new ADA Cluster. This provides me with an excellent excuse to insert an image of 19th century visionary Ada Lovelace. For more introduction on high performance computing and ADA, please see the UEA Research and Specialist Computing Support help pages. The HPC Team offers to meet with all new users to help you get started. You can use Conda to manage Python environments on ADA. 
Information on how to build and activate conda python environments on ADA can be found here. On an HPC, you can either work interactively or submit batch jobs. When submitting batch jobs (after code development and testing locally or in an interactive session), only the fourth way of Python above is available to you. Providing inputs from the command line will come in handy when submitting (array) jobs. Note that in batch jobs, you need to activate conda environments with source activate myenv instead of the otherwise recommended conda activate myenv. In an interactive session, the recommended ways to work with Python on ADA are options 3 and 4 from above (from the UEA HPC team: "Jupyter Notebooks and IDEs rely on graphical interfaces that have high overheads and therefore generally don't work well on a cluster environment"). The file editors available on ADA are nano, nedit, emacs, Vi and gvim. Anaconda If you are not already familiar with Anaconda, it is a distribution of Python geared toward data scientists that aims to make it quick and easy to manage multiple projects with differing dependencies. With Anaconda you can maintain separate environments for all your projects. Why would you want to do this? Different projects require different packages, and not all of these packages are able to interoperate. Particularly in science, we often need to use legacy software dependent on older modules. If you want to work on one project built in Python 2.7 and your new stuff in 3.8, you'll need to keep them separate on your system so they don't interfere with each other. Anaconda is a very user-friendly way to achieve this. The key to Anaconda is environments. These are collections of Python modules, non-Python programs (like jupyter notebooks, GDAL or Spyder) and a specific version of Python itself. There is no limit to the number of environments you can have. The only requirement is that each one has a unique name on your system. Here's an example environment from our PPD Python course yml name: ppd_python channels: - defaults - conda-forge dependencies: - python=3.8 - ipython - jupyter - numpy - matplotlib - pandas - cartopy - xarray - netcdf4 - seaborn - spyder - tqdm - scipy - iris - plotly - cftime The environment is created from a text file. You need to specify a name, sources and the modules (dependencies) you need. In this case we specified Python=3.8, jupyter to run notebooks and a bunch of modules including numpy, matplotlib and scipy. This should be all anyone needs to replicate the same environment on their machine and run the scripts successfully. If you are sharing code with others, always include an environment file so it runs correctly. We will do a more detailed demo of package management with Anaconda in the future How I start a Python project *Other Hosting Services Are Available Reading If you want a good science environment file to start from, try the one from ppd_python. You'll find some handy conda instructions in the repo description. Click to download the zip You want the environment.yml file. The environment is based on Python 3.8 which will be supported until October 2024 A solid intro to git by Software Carpentry A cool trick with conda for bash users by Leo Uieda. N.B. conda activate is preferred to source activate these days.
Sources Python on the ADA HPC Images Conda image: https://www.imperial.ac.uk/admin-services/ict/self-service/research-support/rcs/support/applications/conda/ Ada Lovelace: https://blogs.scientificamerican.com/observations/ada-lovelace-day-honors-the-first-computer-programmer/ Flow chart made with graphviz End of explanation
5,202
Given the following text description, write Python code to implement the functionality described below step by step Description: $$\newcommand{\xv}{\mathbf{x}} \newcommand{\Xv}{\mathbf{X}} \newcommand{\yv}{\mathbf{y}} \newcommand{\Yv}{\mathbf{Y}} \newcommand{\zv}{\mathbf{z}} \newcommand{\av}{\mathbf{a}} \newcommand{\Wv}{\mathbf{W}} \newcommand{\wv}{\mathbf{w}} \newcommand{\betav}{\mathbf{\beta}} \newcommand{\gv}{\mathbf{g}} \newcommand{\Hv}{\mathbf{H}} \newcommand{\dv}{\mathbf{d}} \newcommand{\Vv}{\mathbf{V}} \newcommand{\vv}{\mathbf{v}} \newcommand{\tv}{\mathbf{t}} \newcommand{\Tv}{\mathbf{T}} \newcommand{\Sv}{\mathbf{S}} \newcommand{\Gv}{\mathbf{G}} \newcommand{\zv}{\mathbf{z}} \newcommand{\Zv}{\mathbf{Z}} \newcommand{\Norm}{\mathcal{N}} \newcommand{\muv}{\boldsymbol{\mu}} \newcommand{\sigmav}{\boldsymbol{\sigma}} \newcommand{\phiv}{\boldsymbol{\phi}} \newcommand{\Phiv}{\boldsymbol{\Phi}} \newcommand{\Sigmav}{\boldsymbol{\Sigma}} \newcommand{\Lambdav}{\boldsymbol{\Lambda}} \newcommand{\half}{\frac{1}{2}} \newcommand{\argmax}[1]{\underset{#1}{\operatorname{argmax}}} \newcommand{\argmin}[1]{\underset{#1}{\operatorname{argmin}}} \newcommand{\dimensionbar}[1]{\underset{#1}{\operatorname{|}}} \newcommand{\dimensionbar}[1]{\underset{#1}{\operatorname{|}}} \newcommand{\grad}{\mathbf{\nabla}} \newcommand{\ebx}[1]{e^{\betav_{#1}^T \xv_n}} \newcommand{\eby}[1]{e^{y_{n,#1}}} \newcommand{\Tiv}{\mathbf{Ti}} \newcommand{\Fv}{\mathbf{F}} \newcommand{\ones}[1]{\mathbf{1}_{#1}} $$ Analysis of Neural Network Classifiers and Bottleneck Networks Step1: Now let's go through the steps of applying a neural network classifier to the Student Alcohol Consumption data set, recently added to the UCI ML Repository. This data set has 1,044 samples, each with 33 attributes. The paper Using data Mining to Predict Secondary School Student Alcohol Consumption describes the application of random forests to this classification problem. This paper will serve as our guide on how to set up the data. Start by downloading the student.zip file, either by clicking on the link in the Data Folder at the Repository, or by using wget. Step2: So, first of all, we see that semi-colons are the field separator, and the first line is column headings. Also, notice that the non-numeric values are surrounded by double quotes. And the third and second to last numeric fields are also surrounded by double quotes. Read the first line into a list of column names. Step3: Now, how do we read the data lines? My usual answer is to use np.loadtxt. The Pandas package is something you should read about and practice using in the future. np.loadtxt allows you to specify a function to call to convert values in each column. The converters are specified as a dictionary with keys for column indices. Here is a definition of converters for each column interspersed with the data set's documentation for each attribute. Step4: Now we can read in both files. Step5: These two data sets are for two different tests, a math test and a portugese test. Let's combine them, and add a first column with value 0 for math and 1 for portugese. Step6: Now we can pull out the two columns that indicate alcohol use. Which ones are they? Step7: Let's look at these values. Add a bit of random noise to each to make them visible. Step8: For the experiments described in the paper linked above these two values are combined into one, then the value is discretized into two binary values, 0 for low alcohol use and 1 for high alcohol use. 
A weighted average of the two is formed, with weekend consumption (the second value) more heavily weighted. Then this value is quantized to 0 if the weighted average is less than 2, 1 otherwise. In the paper, a cutoff of 3 was used. Step9: Now, for the input $X$ matrix. Step10: Now we are ready to do some classifying. Let's use a small neural network. Step11: Let's try different sized nets. Step12: What about deeper nets? Step13: Analysis What patterns are the neural networks finding in the data that allow them to classify this data? One way to answer this is to look at the first hidden layer weight vectors. We can see this in the network diagram. Step14: The largest magnitude weight in the hidden layer is the positive weight on goout on the single hidden unit. goout has values of 1 to 5, with higher values representing more frequently going out. The hidden unit is connected negatively to the output. Considering these weights in isolation, the classifier output is decreased when the goout is higher than average. When goout is lower than average, its standardized value is negative. When multiplied by its positive weight, its contribution to the sum in the second unit is negative. This decreases the value of the output of the unit, when increases the output value sum. So, staying home is correlated with more alcohol use. Make sense? No, it doesn't. What's wrong with our reasoning? This network has one output unit, but remember, there are two classes. The probability of the second class for a sample is 1 minus the probability of this single output unit. So, the output unit in this network outputs estimates the probability of class 0, which represents low alcohol use. So, now it all makes sense. goout is positively connected to the hidden unit, which is negatively connected to the low alcohol class output, or goout is negatively correlated to low alcohol use. Let's check this by plotting the output class, 0 or 1, versus goout. Step15: Yep, it does look like alcohol use is positively correlated with gooout. Step16: We could also take a look at extreme samples, such as the sample that produces the highest probability of high alcohol use, and the sample that produces the lowest probability of alcohol use. Step17: Let's do this again with a larger network of 20 units. Step18: Bottleneck By forcing all information to flow through a very narrow layer, a small-dimensional representation of each sample can be obtained. Step19: We should partition data into training and testing subsets, to see if the two-dimensional mapping learned on training data cleanly separates testing data.
Python Code: import numpy as np import matplotlib.pyplot as plt %matplotlib inline !wget http://www.cs.colostate.edu/~anderson/cs480/notebooks/nn2.tar !tar xvf nn2.tar import neuralnetworks as nn import qdalda import mlutils as ml Explanation: $$\newcommand{\xv}{\mathbf{x}} \newcommand{\Xv}{\mathbf{X}} \newcommand{\yv}{\mathbf{y}} \newcommand{\Yv}{\mathbf{Y}} \newcommand{\zv}{\mathbf{z}} \newcommand{\av}{\mathbf{a}} \newcommand{\Wv}{\mathbf{W}} \newcommand{\wv}{\mathbf{w}} \newcommand{\betav}{\mathbf{\beta}} \newcommand{\gv}{\mathbf{g}} \newcommand{\Hv}{\mathbf{H}} \newcommand{\dv}{\mathbf{d}} \newcommand{\Vv}{\mathbf{V}} \newcommand{\vv}{\mathbf{v}} \newcommand{\tv}{\mathbf{t}} \newcommand{\Tv}{\mathbf{T}} \newcommand{\Sv}{\mathbf{S}} \newcommand{\Gv}{\mathbf{G}} \newcommand{\zv}{\mathbf{z}} \newcommand{\Zv}{\mathbf{Z}} \newcommand{\Norm}{\mathcal{N}} \newcommand{\muv}{\boldsymbol{\mu}} \newcommand{\sigmav}{\boldsymbol{\sigma}} \newcommand{\phiv}{\boldsymbol{\phi}} \newcommand{\Phiv}{\boldsymbol{\Phi}} \newcommand{\Sigmav}{\boldsymbol{\Sigma}} \newcommand{\Lambdav}{\boldsymbol{\Lambda}} \newcommand{\half}{\frac{1}{2}} \newcommand{\argmax}[1]{\underset{#1}{\operatorname{argmax}}} \newcommand{\argmin}[1]{\underset{#1}{\operatorname{argmin}}} \newcommand{\dimensionbar}[1]{\underset{#1}{\operatorname{|}}} \newcommand{\dimensionbar}[1]{\underset{#1}{\operatorname{|}}} \newcommand{\grad}{\mathbf{\nabla}} \newcommand{\ebx}[1]{e^{\betav_{#1}^T \xv_n}} \newcommand{\eby}[1]{e^{y_{n,#1}}} \newcommand{\Tiv}{\mathbf{Ti}} \newcommand{\Fv}{\mathbf{F}} \newcommand{\ones}[1]{\mathbf{1}_{#1}} $$ Analysis of Neural Network Classifiers and Bottleneck Networks End of explanation !wget http://archive.ics.uci.edu/ml/machine-learning-databases/00356/student.zip !unzip -o student.zip !rm student.zip !ls student* !cat student.txt !head -10 student-mat.csv Explanation: Now let's go through the steps of applying a neural network classifier to the Student Alcohol Consumption data set, recently added to the UCI ML Repository. This data set has 1,044 samples, each with 33 attributes. The paper Using data Mining to Predict Secondary School Student Alcohol Consumption describes the application of random forests to this classification problem. This paper will serve as our guide on how to set up the data. Start by downloading the student.zip file, either by clicking on the link in the Data Folder at the Repository, or by using wget. End of explanation with open('student-mat.csv') as file: headings = file.readline().split(';') headings = [head.strip() for head in headings] print(headings) Explanation: So, first of all, we see that semi-colons are the field separator, and the first line is column headings. Also, notice that the non-numeric values are surrounded by double quotes. And the third and second to last numeric fields are also surrounded by double quotes. Read the first line into a list of column names. 
End of explanation # 1 school - student's school (binary: 'GP' - Gabriel Pereira or 'MS' - Mousinho da Silveira) school = {'GP':0, 'MS':1} # 2 sex - student's sex (binary: 'F' - female or 'M' - male) gender = {'F':0, 'M':1} # 3 age - student's age (numeric: from 15 to 22) # 4 address - student's home address type (binary: 'U' - urban or 'R' - rural) address = {'U':0, 'R':1} # 5 famsize - family size (binary: 'LE3' - less or equal to 3 or 'GT3' - greater than 3) famsize = {'LE3':0, 'GT3':1} # 6 Pstatus - parent's cohabitation status (binary: 'T' - living together or 'A' - apart) parentsStatus = {'T':0, 'A':1} # 7 Medu - mother's education (numeric: 0 - none, 1 - primary education (4th grade), 2 – 5th to 9th grade, 3 – secondary education or 4 – higher education) # 8 Fedu - father's education (numeric: 0 - none, 1 - primary education (4th grade), 2 – 5th to 9th grade, 3 – secondary education or 4 – higher education) # 9 Mjob - mother's job (nominal: 'teacher', 'health' care related, civil 'services' (e.g. administrative or police), 'at_home' or 'other') parentJob = {'teacher':0, 'health':1, 'services':2, 'at_home':3, 'other':4} # 10 Fjob - father's job (nominal: 'teacher', 'health' care related, civil 'services' (e.g. administrative or police), 'at_home' or 'other') # 11 reason - reason to choose this school (nominal: close to 'home', school 'reputation', 'course' preference or 'other') reason = {'home':0, 'reputation':1, 'course':2, 'other':3} # 12 guardian - student's guardian (nominal: 'mother', 'father' or 'other') guardian = {'mother':0, 'father':1, 'other':2} # 13 traveltime - home to school travel time (numeric: 1 - <15 min., 2 - 15 to 30 min., 3 - 30 min. to 1 hour, or 4 - >1 hour) # 14 studytime - weekly study time (numeric: 1 - <2 hours, 2 - 2 to 5 hours, 3 - 5 to 10 hours, or 4 - >10 hours) # 15 failures - number of past class failures (numeric: n if 1<=n<3, else 4) # 16 schoolsup - extra educational support (binary: yes or no) noYes = {'no':0, 'yes':1} # 17 famsup - family educational support (binary: yes or no) # 18 paid - extra paid classes within the course subject (Math or Portuguese) (binary: yes or no) # 19 activities - extra-curricular activities (binary: yes or no) # 20 nursery - attended nursery school (binary: yes or no) # 21 higher - wants to take higher education (binary: yes or no) # 22 internet - Internet access at home (binary: yes or no) # 23 romantic - with a romantic relationship (binary: yes or no) # 24 famrel - quality of family relationships (numeric: from 1 - very bad to 5 - excellent) # 25 freetime - free time after school (numeric: from 1 - very low to 5 - very high) # 26 goout - going out with friends (numeric: from 1 - very low to 5 - very high) # 27 Dalc - workday alcohol consumption (numeric: from 1 - very low to 5 - very high) # 28 Walc - weekend alcohol consumption (numeric: from 1 - very low to 5 - very high) # 29 health - current health status (numeric: from 1 - very bad to 5 - very good) # 30 absences - number of school absences (numeric: from 0 to 93) # # these grades are related with the course subject, Math or Portuguese: # 31 G1 - first period grade (numeric: from 0 to 20) # 31 G2 - second period grade (numeric: from 0 to 20) # 32 G3 - final grade (numeric: from 0 to 20, output target) def bs(bytes): '''Convert bytes to string and remove double quotes''' # print(bytes) return str(bytes,'utf-8').replace('"','') converters = {0: lambda x: school[bs(x)], 1: lambda x: gender[bs(x)], 2: lambda age: age, 3: lambda x: address[bs(x)], 4: lambda x: 
famsize[bs(x)], 5: lambda x: parentsStatus[bs(x)], 6: lambda motherEd: motherEd, 7: lambda fatherEd: fatherEd, 8: lambda m: parentJob[bs(m)], 9: lambda f: parentJob[bs(f)], 10: lambda x: reason[bs(x)], 11: lambda x: guardian[bs(x)], 12: lambda travelTime: travelTime, 13: lambda studyTime: studyTime, 14: lambda failures: failures, 15: lambda schoolSupport: noYes[bs(schoolSupport)], 16: lambda famsup: noYes[bs(famsup)], 17: lambda paid: noYes[bs(paid)], 18: lambda activities: noYes[bs(activities)], 19: lambda nursery: noYes[bs(nursery)], 20: lambda higher: noYes[bs(higher)], 21: lambda internet: noYes[bs(internet)], 22: lambda romantic: noYes[bs(romantic)], 23: lambda famrel: famrel, 24: lambda freetime: freetime, 25: lambda goout: goout, 26: lambda workdayAlcohol: workdayAlcohol, 27: lambda weekendAlcohol: weekendAlcohol, 28: lambda health: health, 29: lambda absences: absences, 30: lambda G1: bs(G1), 31: lambda G2: bs(G2), 32: lambda G3: G3} converters[11](b'mother'), converters[11](b'father') Explanation: Now, how do we read the data lines? My usual answer is to use np.loadtxt. The Pandas package is something you should read about and practice using in the future. np.loadtxt allows you to specify a function to call to convert values in each column. The converters are specified as a dictionary with keys for column indices. Here is a definition of converters for each column interspersed with the data set's documentation for each attribute. End of explanation with open('student-mat.csv') as file: headings = file.readline().split(';') headings = [head.strip() for head in headings] datamat = np.loadtxt(file, delimiter=';',converters=converters) datapor = np.loadtxt('student-por.csv', delimiter=';',skiprows=1,converters=converters) datamat.shape, datapor.shape Explanation: Now we can read in both files. End of explanation # add first column to indicate which subject headings = ['subject'] + headings datamat = np.hstack((np.zeros((datamat.shape[0],1)), datamat)) datapor = np.hstack((np.ones((datapor.shape[0],1)), datapor)) data = np.vstack((datamat,datapor)) data.shape Explanation: These two data sets are for two different tests, a math test and a portugese test. Let's combine them, and add a first column with value 0 for math and 1 for portugese. End of explanation [i for i,head in enumerate(headings) if 'alc' in head] headings[27:29] data[:5,27:29] Explanation: Now we can pull out the two columns that indicate alcohol use. Which ones are they? End of explanation alcoholColumns = [26, 27] T2 = data[:,alcoholColumns] T2r = T2 + np.random.normal(0,0.1,size=T2.shape) plt.plot(T2r[:,0],T2r[:,1], 'o', alpha=0.5); Explanation: Let's look at these values. Add a bit of random noise to each to make them visible. End of explanation wavg = (T2[:,0:1] * 2 + T2[:,1:2] * 5) / 7 T = (wavg > 2.0).astype(int) plt.subplot(1,2,1) plt.hist(wavg) plt.subplot(1,2,2) plt.hist(T); Explanation: For the experiments described in the paper linked above these two values are combined into one, then the value is discretized into two binary values, 0 for low alcohol use and 1 for high alcohol use. A weighted average of the two is formed, with weekend consumption (the second value) more heavily weighted. Then this value is quantized to 0 if the weighted average is less than 2, 1 otherwise. In the paper, a cutoff of 3 was used. 
End of explanation X = data.copy() X = np.delete(X, alcoholColumns, axis=1) Xnames = [h for h in headings if 'alc' not in h] print(len(Xnames),Xnames) X.shape, T.shape Explanation: Now, for the input $X$ matrix. End of explanation nnet = nn.NeuralNetworkClassifier(X.shape[1], [2], len(np.unique(T))) nnet.train(X,T,nIterations=100,verbose=True) predictedClass = nnet.use(X) plt.plot(np.exp(-nnet.getErrorTrace())) accuracy = np.sum(predictedClass == T)/ len(T) * 100 print('Accuracy',accuracy) Explanation: Now we are ready to do some classifying. Let's use a small neural network. End of explanation for hidden in [0,1,2,5,10]: nnet = nn.NeuralNetworkClassifier(X.shape[1], [hidden], len(np.unique(T))) nnet.train(X,T,nIterations=100,verbose=False) predictedClass = nnet.use(X) plt.plot(np.exp(-nnet.getErrorTrace()), label='h='+str(hidden)) plt.legend(loc='best') accuracy = np.sum(predictedClass == T)/ len(T) * 100 print(hidden,accuracy) Explanation: Let's try different sized nets. End of explanation plt.figure(figsize=(15,6)) for layers in [1,2,3]: plt.subplot(1,3,layers) for hidden in [1,2,5,10]: hiddens = [hidden]*layers nnet = nn.NeuralNetworkClassifier(X.shape[1], hiddens, len(np.unique(T))) nnet.train(X,T,nIterations=100,verbose=False) predictedClass = nnet.use(X) plt.plot(np.exp(-nnet.getErrorTrace()), label='h='+str(hidden)) plt.legend(loc='best') accuracy = np.sum(predictedClass == T)/ len(T) * 100 print(hiddens,accuracy) plt.title('{} layers'.format(layers)) Explanation: What about deeper nets? End of explanation nnet = nn.NeuralNetworkClassifier(X.shape[1],[1],len(np.unique(T))) nnet.train(X,T,nIterations = 500) plt.plot(np.exp(-nnet.getErrorTrace())) plt.figure(figsize=(6,8)) nnet.draw(inputNames = Xnames) Explanation: Analysis What patterns are the neural networks finding in the data that allow them to classify this data? One way to answer this is to look at the first hidden layer weight vectors. We can see this in the network diagram. End of explanation plt.plot(X[:,-6]+np.random.normal(0,0.05,size=X[:,-6].shape), T+np.random.normal(0,0.05,size=T.shape),'o',alpha=0.2) plt.xlabel('goout') plt.ylabel('alcohol'); Explanation: The largest magnitude weight in the hidden layer is the positive weight on goout on the single hidden unit. goout has values of 1 to 5, with higher values representing more frequently going out. The hidden unit is connected negatively to the output. Considering these weights in isolation, the classifier output is decreased when the goout is higher than average. When goout is lower than average, its standardized value is negative. When multiplied by its positive weight, its contribution to the sum in the second unit is negative. This decreases the value of the output of the unit, when increases the output value sum. So, staying home is correlated with more alcohol use. Make sense? No, it doesn't. What's wrong with our reasoning? This network has one output unit, but remember, there are two classes. The probability of the second class for a sample is 1 minus the probability of this single output unit. So, the output unit in this network outputs estimates the probability of class 0, which represents low alcohol use. So, now it all makes sense. goout is positively connected to the hidden unit, which is negatively connected to the low alcohol class output, or goout is negatively correlated to low alcohol use. Let's check this by plotting the output class, 0 or 1, versus goout. 
End of explanation hidden = 2 plt.figure(figsize=(10,8)) nnet = nn.NeuralNetworkClassifier(X.shape[1],[hidden],len(np.unique(T))) nnet.train(X,T,nIterations = 1000) plt.subplot(1,2,1) plt.plot(np.exp(-nnet.getErrorTrace())) plt.subplot(1,2,2) nnet.draw(inputNames = Xnames) hidden = 5 plt.figure(figsize=(10,8)) nnet = nn.NeuralNetworkClassifier(X.shape[1],[hidden],len(np.unique(T))) nnet.train(X,T,nIterations = 1000) plt.subplot(1,2,1) plt.plot(np.exp(-nnet.getErrorTrace())) plt.subplot(1,2,2) nnet.draw(inputNames = Xnames) Explanation: Yep, it does look like alcohol use is positively correlated with gooout. End of explanation predclass,prob,_ = nnet.use(X,allOutputs=True) prob.shape np.argmax(prob[:,1]) prob[100,:] highest = X[np.argmax(prob[:,1]),:] lowest = X[np.argmin(prob[:,1]),:] print('Hi Alc Lo Alc') for h,l,n in zip(highest,lowest,Xnames): print('{}\t{}\t{}'.format(h,l,n)) Explanation: We could also take a look at extreme samples, such as the sample that produces the highest probability of high alcohol use, and the sample that produces the lowest probability of alcohol use. End of explanation hidden = 20 plt.figure(figsize=(20,10)) nnet = nn.NeuralNetworkClassifier(X.shape[1],[hidden],len(np.unique(T))) nnet.train(X,T,nIterations = 1000) plt.subplot(1,2,1) plt.plot(np.exp(-nnet.getErrorTrace())) plt.subplot(1,2,2) nnet.draw(inputNames = Xnames) predclass,prob,_ = nnet.use(X,allOutputs=True) highest = X[np.argmax(prob[:,1]),:] lowest = X[np.argmin(prob[:,1]),:] print('Hi Alc Lo Alc') for h,l,n in zip(highest,lowest,Xnames): print('{}\t{}\t{}'.format(h,l,n)) Explanation: Let's do this again with a larger network of 20 units. End of explanation hiddens = [10,2,10] plt.figure(figsize=(12,8)) nnet = nn.NeuralNetworkClassifier(X.shape[1],hiddens,len(np.unique(T))) nnet.train(X,T,nIterations = 500) plt.subplot(1,2,1) plt.plot(np.exp(-nnet.getErrorTrace())) plt.subplot(1,2,2) nnet.draw(inputNames = Xnames) predclass,prob,Z = nnet.use(X,allOutputs=True) highest = X[np.argmax(prob[:,1]),:] lowest = X[np.argmin(prob[:,1]),:] print('Hi Alc Lo Alc') for h,l,n in zip(highest,lowest,Xnames): print('{}\t{}\t{}'.format(h,l,n)) len(Z) bottleneck = Z[1] bottleneck.shape plt.figure(figsize=(10,10)) plt.scatter(bottleneck[:,0],bottleneck[:,1],c=T, s=60, alpha=0.3); hiddens = [10,10,2,10,10] plt.figure(figsize=(12,8)) nnet = nn.NeuralNetworkClassifier(X.shape[1],hiddens,len(np.unique(T))) nnet.train(X,T,nIterations = 1000) plt.subplot(1,2,1) plt.plot(np.exp(-nnet.getErrorTrace())) plt.subplot(1,2,2) predclass,prob,Z = nnet.use(X,allOutputs=True) bottleneck = Z[int(len(Z)/2)] plt.scatter(bottleneck[:,0],bottleneck[:,1],c=T, s=60, alpha=0.3); Explanation: Bottleneck By forcing all information to flow through a very narrow layer, a small-dimensional representation of each sample can be obtained. 
End of explanation def percCorrect(predicted,T): return np.sum(predicted == T) / T.shape[0] * 100 for Xtrain,Ttrain,Xtest,Ttest in ml.partitionsKFolds(X,T,4,validation=False,shuffle=True,classification=True): hiddens = [100,50,10,2,10,50,100] nIterations = 10 plt.figure(figsize=(15,8)) nnet = nn.NeuralNetworkClassifier(Xtrain.shape[1],hiddens,len(np.unique(Ttrain))) nnet.train(Xtrain,Ttrain,nIterations = nIterations) plt.subplot(1,3,1) plt.plot(np.exp(-nnet.getErrorTrace())) plt.subplot(1,3,2) predclassTrain,prob,Z = nnet.use(Xtrain,allOutputs=True) bottleneck = Z[int(len(Z)/2)] plt.scatter(bottleneck[:,0],bottleneck[:,1],c=Ttrain, s=60, alpha=0.3); plt.title('Train data') plt.subplot(1,3,3) predclassTest,prob,Z = nnet.use(Xtest,allOutputs=True) bottleneck = Z[int(len(Z)/2)] plt.scatter(bottleneck[:,0],bottleneck[:,1],c=Ttest, s=60, alpha=0.3) plt.title('Test data') print('Hiddens',hiddens,'nIterations',nIterations) print('Train % correct',percCorrect(predclassTrain,Ttrain)) print('Test % correct ',percCorrect(predclassTest,Ttest)) break Explanation: We should partition data into training and testing subsets, to see if the two-dimensional mapping learned on training data cleanly separates testing data. End of explanation
5,203
Given the following text description, write Python code to implement the functionality described below step by step Description: Working with Python Step1: plot() is a versatile command, and will take an arbitrary number of arguments. For example, to plot x versus y, you can issue the command Step2: For every x, y pair of arguments, there is an optional third argument which is the format string that indicates the color and line type of the plot. The letters and symbols of the format string are from MATLAB, and you concatenate a color string with a line style string. The default format string is b-, which is a solid blue line. For example, to plot the above with red circles, you would chose ro. Step3: matplotlib has a few methods in the pyplot module that make creating common types of plots faster and more convenient because they automatically create a Figure and an Axes object. The most widely used are Step4: Exercise 4.1 Re-use the GapMinder dataset to plot, in Jupyter using Matplotlib, from the world data the life expectancy against GDP per capita for 1957 and 2007 using a scatter plot, add title to your graph as well as a legend. BioPython The goal of Biopython is to make it as easy as possible to use Python for bioinformatics by creating high-quality, reusable modules and classes. Biopython features include parsers for various Bioinformatics file formats (BLAST, Clustalw, FASTA, Genbank,...), access to online services (NCBI, Expasy,...), interfaces to common and not-so-common programs (Clustalw, DSSP, MSMS...), a standard sequence class, various clustering modules, a KD tree data structure etc. and documentation as well as a tutorial Step5: We can use functions from Bio.SeqUtils to get idea about a sequence Step6: One letter code protein sequences can be converted into three letter codes using seq3 utility Step7: Alphabets defines how the strings are going to be treated as sequence object. Bio.Alphabet module defines the available alphabets for Biopython. Bio.Alphabet.IUPAC provides basic definition for DNA, RNA and proteins. Step8: Parsing sequence file format Step9: Biopython provides specific functions to allow parsing/reading sequence files. Step10: Sequence objects can be written into files using file handles with the function SeqIO.write(). We need to provide the name of the output sequence file and the sequence file format. Step11: Connecting with biological databases Sequences can be searched and downloaded from public databases.
Python Code: import matplotlib.pyplot as mpyplot mpyplot.plot([1,2,3,4]) mpyplot.ylabel('some numbers') mpyplot.show() Explanation: Working with Python: functions and modules Session 4: Using third party libraries Matplotlib Exercise 4.1 BioPython Working with sequences Connecting with biological databases Exercise 4.2 Matplotlib matplotlib is probably the single most used Python package for graphics. It provides both a very quick way to visualize data from Python and publication-quality figures in many formats. matplotlib.pyplot is a collection of command style functions that make matplotlib work like MATLAB. Each pyplot function makes some change to a figure: e.g., creates a figure, creates a plotting area in a figure, plots some lines in a plotting area, decorates the plot with labels, etc. Let's start with a very simple plot. End of explanation mpyplot.plot([1,2,3,4], [1,4,9,16]) Explanation: plot() is a versatile command, and will take an arbitrary number of arguments. For example, to plot x versus y, you can issue the command: End of explanation import matplotlib.pyplot as mpyplot mpyplot.plot([1,2,3,4], [1,4,9,16], 'ro') mpyplot.axis([0, 6, 0, 20]) mpyplot.show() Explanation: For every x, y pair of arguments, there is an optional third argument which is the format string that indicates the color and line type of the plot. The letters and symbols of the format string are from MATLAB, and you concatenate a color string with a line style string. The default format string is b-, which is a solid blue line. For example, to plot the above with red circles, you would chose ro. End of explanation seq = 'ATGGTGCATCTGACTCCTGAGGAGAAGTCTGCCGTTACTGCCCTGTGGGGCAAGGTG' gc = [40.0, 60.0, 80.0, 60.0, 40.0, 60.0, 40.0, 40.0, 40.0, 60.0, 40.0, 60.0, 60.0, 60.0, 60.0, 60.0, 60.0, 60.0, 60.0, 60.0, 60.0, 40.0, 40.0, 40.0, 40.0, 40.0, 60.0, 60.0, 80.0, 80.0, 80.0, 60.0, 40.0, 40.0, 20.0, 40.0, 60.0, 80.0, 80.0, 80.0, 80.0, 60.0, 60.0, 60.0, 80.0, 80.0, 100.0, 80.0, 60.0, 60.0, 60.0, 40.0, 60.0] window_ids = range(len(gc)) import matplotlib.pyplot as mpyplot mpyplot.plot(window_ids, gc, '--' ) mpyplot.xlabel('5 bases window id along the sequence') mpyplot.ylabel('%GC') mpyplot.title('GC plot for sequence\n' + seq) mpyplot.show() Explanation: matplotlib has a few methods in the pyplot module that make creating common types of plots faster and more convenient because they automatically create a Figure and an Axes object. The most widely used are: mpyplot.bar – creates a bar chart. mpyplot.boxplot – makes a box and whisker plot. mpyplot.hist – makes a histogram. mpyplot.plot – creates a line plot. mpyplot.scatter – makes a scatter plot. Calling any of these methods will automatically setup Figure and Axes objects, and draw the plot. Each of these methods has different parameters that can be passed in to modify the resulting plot. The Pyplot tutorial is where these simple examples above are coming from. More could be learn from it if you wish during your own time. Let's now try to plot the GC content along the chain we have calculated during the previous session, while solving the Exercises 3.3 and 3.4. 
End of explanation # Creating sequence from Bio.Seq import Seq my_seq = Seq("AGTACACTGGT") print(my_seq) print(my_seq[10]) print(my_seq[1:5]) print(len(my_seq)) print(my_seq.count("A")) Explanation: Exercise 4.1 Re-use the GapMinder dataset to plot, in Jupyter using Matplotlib, from the world data the life expectancy against GDP per capita for 1957 and 2007 using a scatter plot, add title to your graph as well as a legend. BioPython The goal of Biopython is to make it as easy as possible to use Python for bioinformatics by creating high-quality, reusable modules and classes. Biopython features include parsers for various Bioinformatics file formats (BLAST, Clustalw, FASTA, Genbank,...), access to online services (NCBI, Expasy,...), interfaces to common and not-so-common programs (Clustalw, DSSP, MSMS...), a standard sequence class, various clustering modules, a KD tree data structure etc. and documentation as well as a tutorial: http://biopython.org/DIST/docs/tutorial/Tutorial.html. Working with sequences We can create a sequence by defining a Seq object with strings. Bio.Seq() takes as input a string and converts in into a Seq object. We can print the sequences, individual residues, lengths and use other functions to get summary statistics. End of explanation # Calculate the molecular weight from Bio.SeqUtils import GC, molecular_weight print(GC(my_seq)) print(molecular_weight(my_seq)) Explanation: We can use functions from Bio.SeqUtils to get idea about a sequence End of explanation from Bio.SeqUtils import seq3 print(seq3(my_seq)) Explanation: One letter code protein sequences can be converted into three letter codes using seq3 utility End of explanation from Bio.Alphabet import IUPAC my_dna = Seq("AGTACATGACTGGTTTAG", IUPAC.unambiguous_dna) print(my_dna) print(my_dna.alphabet) my_dna.complement() my_dna.reverse_complement() my_dna.translate() Explanation: Alphabets defines how the strings are going to be treated as sequence object. Bio.Alphabet module defines the available alphabets for Biopython. Bio.Alphabet.IUPAC provides basic definition for DNA, RNA and proteins. End of explanation with open( "data/glpa.fa" ) as f: print(f.read()) Explanation: Parsing sequence file format: FASTA files Sequence files can be parsed and read the same way we read other files. End of explanation # Reading FASTA files from Bio import SeqIO with open("data/glpa.fa") as f: for protein in SeqIO.parse(f, 'fasta'): print(protein.id) print(protein.seq) Explanation: Biopython provides specific functions to allow parsing/reading sequence files. End of explanation # Writing FASTA files from Bio import SeqIO from Bio.SeqRecord import SeqRecord from Bio.Seq import Seq from Bio.Alphabet import IUPAC sequence = 'MYGKIIFVLLLSEIVSISASSTTGVAMHTSTSSSVTKSYISSQTNDTHKRDTYAATPRAHEVSEISVRTVYPPEEETGERVQLAHHFSEPEITLIIFG' seq = Seq(sequence, IUPAC.protein) protein = [SeqRecord(seq, id="THEID", description='a description'),] with open( "biopython.fa", "w") as f: SeqIO.write(protein, f, 'fasta') with open( "biopython.fa" ) as f: print(f.read()) Explanation: Sequence objects can be written into files using file handles with the function SeqIO.write(). We need to provide the name of the output sequence file and the sequence file format. 
End of explanation # Read FASTA file from NCBI GenBank from Bio import Entrez Entrez.email = '[email protected]' # Always tell NCBI who you are handle = Entrez.efetch(db="nucleotide", id="71066805", rettype="gb") seq_record = SeqIO.read(handle, "gb") handle.close() print(seq_record.id, 'with', len(seq_record.features), 'features') print(seq_record.seq) print(seq_record.format("fasta")) # Read SWISSPROT record from Bio import ExPASy handle = ExPASy.get_sprot_raw('HBB_HUMAN') prot_record = SeqIO.read(handle, "swiss") handle.close() print(prot_record.description) print(prot_record.seq) Explanation: Connecting with biological databases Sequences can be searched and downloaded from public databases. End of explanation
5,204
Given the following text description, write Python code to implement the functionality described below step by step Description: Working with Kafka data During a simulation, the producer and the marketplace are constantly logging sales and the activity on the market to Kafka. These information are organised in topics. In order to estimate customer demand and predict good prices, merchants can use the Kafka API to access this data. The merchants gets the data in form of a pandas DataFrame. If you want to try the following examples, make sure that the Pricewars plattform is running. Either by deploying them individually or by using the docker setup. The following step is specific for this notebook. It is not necessary if your merchant is in the repository root. Step1: Initialize Kafka API You need a merchant token to use the Kafka API. To get one, register the merchant at the marketplace. Step2: It was not possible to connect to the marketplace if you got the following error Step3: Request topic You can request data for specific topics. The most important topics are buyOffer which contains your own sales and marketSituation which contains a history of market situations. The call will return the data in form of a pandas DataFrame. Depending on how active the simulation is and how much data is logged, this can take some time. Step4: This method may return None if it was not possible to obtain the data. For example, this happens if the merchant doesn't have any sales.
Python Code: import sys sys.path.append('../') Explanation: Working with Kafka data During a simulation, the producer and the marketplace are constantly logging sales and the activity on the market to Kafka. These information are organised in topics. In order to estimate customer demand and predict good prices, merchants can use the Kafka API to access this data. The merchants gets the data in form of a pandas DataFrame. If you want to try the following examples, make sure that the Pricewars plattform is running. Either by deploying them individually or by using the docker setup. The following step is specific for this notebook. It is not necessary if your merchant is in the repository root. End of explanation from api import Marketplace marketplace = Marketplace() registration = marketplace.register( 'http://nobody:55000/', merchant_name='kafka_notebook_merchant', algorithm_name='human') registration Explanation: Initialize Kafka API You need a merchant token to use the Kafka API. To get one, register the merchant at the marketplace. End of explanation from api import Kafka kafka = Kafka(token=registration.merchant_token) Explanation: It was not possible to connect to the marketplace if you got the following error: ConnectionError: HTTPConnectionPool(host='marketplace', port=8080) In that case, make sure that the marketplace is running and host and port are correct. If host or port are wrong, you can change it by creating a marketplace object with the host argument: marketplace = Marketplace(host='www.another_host.com:1234') Same is true for the upcoming Kafka API Next, initialize the Kafka API: End of explanation sales_data = kafka.download_topic_data('buyOffer') sales_data.head() Explanation: Request topic You can request data for specific topics. The most important topics are buyOffer which contains your own sales and marketSituation which contains a history of market situations. The call will return the data in form of a pandas DataFrame. Depending on how active the simulation is and how much data is logged, this can take some time. End of explanation len(sales_data) market_situations = kafka.download_topic_data('marketSituation') print(len(market_situations)) market_situations.head() Explanation: This method may return None if it was not possible to obtain the data. For example, this happens if the merchant doesn't have any sales. End of explanation
5,205
Given the following text description, write Python code to implement the functionality described below step by step Description: Copyright 2018 The TensorFlow Authors. Licensed under the Apache License, Version 2.0 (the "License"). DCGAN Step1: Import TensorFlow and enable eager execution Step2: Load the dataset We are going to use the MNIST dataset to train the generator and the discriminator. The generator will then generate handwritten digits. Step3: Use tf.data to create batches and shuffle the dataset Step4: Write the generator and discriminator models Generator It is responsible for creating convincing images that are good enough to fool the discriminator. It consists of Conv2DTranspose (Upsampling) layers. We start with a fully connected layer and upsample the image 2 times so as to reach the desired image size (mnist image size) which is (28, 28, 1). We use leaky relu activation except for the last layer which uses tanh activation. Discriminator The discriminator is responsible for classifying the fake images from the real images. In other words, the discriminator is given generated images (from the generator) and the real MNIST images. The job of the discriminator is to classify these images into fake (generated) and real (MNIST images). Basically the generator should be good enough to fool the discriminator that the generated images are real. Step5: Define the loss functions and the optimizer Discriminator loss The discriminator loss function takes 2 inputs; real images, generated images real_loss is a sigmoid cross entropy loss of the real images and an array of ones (since these are the real images) generated_loss is a sigmoid cross entropy loss of the generated images and an array of zeros (since these are the fake images) Then the total_loss is the sum of real_loss and the generated_loss Generator loss It is a sigmoid cross entropy loss of the generated images and an array of ones The discriminator and the generator optimizers are different since we will train them separately. Step6: Checkpoints (Object-based saving) Step7: Training We start by iterating over the dataset The generator is given noise as an input which when passed through the generator model will output a image looking like a handwritten digit The discriminator is given the real MNIST images as well as the generated images (from the generator). Next, we calculate the generator and the discriminator loss. Then, we calculate the gradients of loss with respect to both the generator and the discriminator variables (inputs) and apply those to the optimizer. Generate Images After training, its time to generate some images! We start by creating noise array as an input to the generator The generator will then convert the noise into handwritten images. Last step is to plot the predictions and voila! Step8: Restore the latest checkpoint Step9: Display an image using the epoch number Step10: Generate a GIF of all the saved images. <!-- TODO(markdaoust) Step11: To downlod the animation from Colab uncomment the code below
Python Code: # to generate gifs !pip install imageio Explanation: Copyright 2018 The TensorFlow Authors. Licensed under the Apache License, Version 2.0 (the "License"). DCGAN: An example with tf.keras and eager <table class="tfo-notebook-buttons" align="left"><td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/generative_examples/dcgan.ipynb"> <img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td><td> <a target="_blank" href="https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/examples/generative_examples/dcgan.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a></td></table> This notebook demonstrates how to generate images of handwritten digits using tf.keras and eager execution. To do so, we use Deep Convolutional Generative Adverserial Networks (DCGAN). This model takes about ~30 seconds per epoch (using tf.contrib.eager.defun to create graph functions) to train on a single Tesla K80 on Colab, as of July 2018. Below is the output generated after training the generator and discriminator models for 150 epochs. End of explanation from __future__ import absolute_import, division, print_function # Import TensorFlow >= 1.10 and enable eager execution import tensorflow as tf tf.enable_eager_execution() import os import time import numpy as np import glob import matplotlib.pyplot as plt import PIL import imageio from IPython import display Explanation: Import TensorFlow and enable eager execution End of explanation (train_images, train_labels), (_, _) = tf.keras.datasets.mnist.load_data() train_images = train_images.reshape(train_images.shape[0], 28, 28, 1).astype('float32') # We are normalizing the images to the range of [-1, 1] train_images = (train_images - 127.5) / 127.5 BUFFER_SIZE = 60000 BATCH_SIZE = 256 Explanation: Load the dataset We are going to use the MNIST dataset to train the generator and the discriminator. The generator will then generate handwritten digits. 
End of explanation train_dataset = tf.data.Dataset.from_tensor_slices(train_images).shuffle(BUFFER_SIZE).batch(BATCH_SIZE) Explanation: Use tf.data to create batches and shuffle the dataset End of explanation class Generator(tf.keras.Model): def __init__(self): super(Generator, self).__init__() self.fc1 = tf.keras.layers.Dense(7*7*64, use_bias=False) self.batchnorm1 = tf.keras.layers.BatchNormalization() self.conv1 = tf.keras.layers.Conv2DTranspose(64, (5, 5), strides=(1, 1), padding='same', use_bias=False) self.batchnorm2 = tf.keras.layers.BatchNormalization() self.conv2 = tf.keras.layers.Conv2DTranspose(32, (5, 5), strides=(2, 2), padding='same', use_bias=False) self.batchnorm3 = tf.keras.layers.BatchNormalization() self.conv3 = tf.keras.layers.Conv2DTranspose(1, (5, 5), strides=(2, 2), padding='same', use_bias=False) def call(self, x, training=True): x = self.fc1(x) x = self.batchnorm1(x, training=training) x = tf.nn.relu(x) x = tf.reshape(x, shape=(-1, 7, 7, 64)) x = self.conv1(x) x = self.batchnorm2(x, training=training) x = tf.nn.relu(x) x = self.conv2(x) x = self.batchnorm3(x, training=training) x = tf.nn.relu(x) x = tf.nn.tanh(self.conv3(x)) return x class Discriminator(tf.keras.Model): def __init__(self): super(Discriminator, self).__init__() self.conv1 = tf.keras.layers.Conv2D(64, (5, 5), strides=(2, 2), padding='same') self.conv2 = tf.keras.layers.Conv2D(128, (5, 5), strides=(2, 2), padding='same') self.dropout = tf.keras.layers.Dropout(0.3) self.flatten = tf.keras.layers.Flatten() self.fc1 = tf.keras.layers.Dense(1) def call(self, x, training=True): x = tf.nn.leaky_relu(self.conv1(x)) x = self.dropout(x, training=training) x = tf.nn.leaky_relu(self.conv2(x)) x = self.dropout(x, training=training) x = self.flatten(x) x = self.fc1(x) return x generator = Generator() discriminator = Discriminator() # Defun gives 10 secs/epoch performance boost generator.call = tf.contrib.eager.defun(generator.call) discriminator.call = tf.contrib.eager.defun(discriminator.call) Explanation: Write the generator and discriminator models Generator It is responsible for creating convincing images that are good enough to fool the discriminator. It consists of Conv2DTranspose (Upsampling) layers. We start with a fully connected layer and upsample the image 2 times so as to reach the desired image size (mnist image size) which is (28, 28, 1). We use leaky relu activation except for the last layer which uses tanh activation. Discriminator The discriminator is responsible for classifying the fake images from the real images. In other words, the discriminator is given generated images (from the generator) and the real MNIST images. The job of the discriminator is to classify these images into fake (generated) and real (MNIST images). Basically the generator should be good enough to fool the discriminator that the generated images are real. 
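Before defining the losses, a quick sanity check can confirm that the two models produce the expected shapes (an illustrative snippet only; the literal 100 matches the noise_dim value defined later in the notebook).
sample_noise = tf.random_normal([1, 100])
sample_image = generator(sample_noise, training=False)
print(sample_image.shape)                                  # expected: (1, 28, 28, 1)
print(discriminator(sample_image, training=False).shape)   # expected: (1, 1)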
End of explanation def discriminator_loss(real_output, generated_output): # [1,1,...,1] with real output since it is true and we want # our generated examples to look like it real_loss = tf.losses.sigmoid_cross_entropy(multi_class_labels=tf.ones_like(real_output), logits=real_output) # [0,0,...,0] with generated images since they are fake generated_loss = tf.losses.sigmoid_cross_entropy(multi_class_labels=tf.zeros_like(generated_output), logits=generated_output) total_loss = real_loss + generated_loss return total_loss def generator_loss(generated_output): return tf.losses.sigmoid_cross_entropy(tf.ones_like(generated_output), generated_output) discriminator_optimizer = tf.train.AdamOptimizer(1e-4) generator_optimizer = tf.train.AdamOptimizer(1e-4) Explanation: Define the loss functions and the optimizer Discriminator loss The discriminator loss function takes 2 inputs; real images, generated images real_loss is a sigmoid cross entropy loss of the real images and an array of ones (since these are the real images) generated_loss is a sigmoid cross entropy loss of the generated images and an array of zeros (since these are the fake images) Then the total_loss is the sum of real_loss and the generated_loss Generator loss It is a sigmoid cross entropy loss of the generated images and an array of ones The discriminator and the generator optimizers are different since we will train them separately. End of explanation checkpoint_dir = './training_checkpoints' checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt") checkpoint = tf.train.Checkpoint(generator_optimizer=generator_optimizer, discriminator_optimizer=discriminator_optimizer, generator=generator, discriminator=discriminator) Explanation: Checkpoints (Object-based saving) End of explanation EPOCHS = 150 noise_dim = 100 num_examples_to_generate = 16 # keeping the random vector constant for generation (prediction) so # it will be easier to see the improvement of the gan. random_vector_for_generation = tf.random_normal([num_examples_to_generate, noise_dim]) def generate_and_save_images(model, epoch, test_input): # make sure the training parameter is set to False because we # don't want to train the batchnorm layer when doing inference. 
predictions = model(test_input, training=False) fig = plt.figure(figsize=(4,4)) for i in range(predictions.shape[0]): plt.subplot(4, 4, i+1) plt.imshow(predictions[i, :, :, 0] * 127.5 + 127.5, cmap='gray') plt.axis('off') plt.savefig('image_at_epoch_{:04d}.png'.format(epoch)) plt.show() def train(dataset, epochs, noise_dim): for epoch in range(epochs): start = time.time() for images in dataset: # generating noise from a uniform distribution noise = tf.random_normal([BATCH_SIZE, noise_dim]) with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape: generated_images = generator(noise, training=True) real_output = discriminator(images, training=True) generated_output = discriminator(generated_images, training=True) gen_loss = generator_loss(generated_output) disc_loss = discriminator_loss(real_output, generated_output) gradients_of_generator = gen_tape.gradient(gen_loss, generator.variables) gradients_of_discriminator = disc_tape.gradient(disc_loss, discriminator.variables) generator_optimizer.apply_gradients(zip(gradients_of_generator, generator.variables)) discriminator_optimizer.apply_gradients(zip(gradients_of_discriminator, discriminator.variables)) if epoch % 1 == 0: display.clear_output(wait=True) generate_and_save_images(generator, epoch + 1, random_vector_for_generation) # saving (checkpoint) the model every 15 epochs if epoch % 15 == 0: checkpoint.save(file_prefix = checkpoint_prefix) print ('Time taken for epoch {} is {} sec'.format(epoch + 1, time.time()-start)) # generating after the final epoch display.clear_output(wait=True) generate_and_save_images(generator, epochs, random_vector_for_generation) train(train_dataset, EPOCHS, noise_dim) Explanation: Training We start by iterating over the dataset The generator is given noise as an input which when passed through the generator model will output a image looking like a handwritten digit The discriminator is given the real MNIST images as well as the generated images (from the generator). Next, we calculate the generator and the discriminator loss. Then, we calculate the gradients of loss with respect to both the generator and the discriminator variables (inputs) and apply those to the optimizer. Generate Images After training, its time to generate some images! We start by creating noise array as an input to the generator The generator will then convert the noise into handwritten images. Last step is to plot the predictions and voila! End of explanation # restoring the latest checkpoint in checkpoint_dir checkpoint.restore(tf.train.latest_checkpoint(checkpoint_dir)) Explanation: Restore the latest checkpoint End of explanation def display_image(epoch_no): return PIL.Image.open('image_at_epoch_{:04d}.png'.format(epoch_no)) display_image(EPOCHS) Explanation: Display an image using the epoch number End of explanation with imageio.get_writer('dcgan.gif', mode='I') as writer: filenames = glob.glob('image*.png') filenames = sorted(filenames) last = -1 for i,filename in enumerate(filenames): frame = 2*(i**0.5) if round(frame) > round(last): last = frame else: continue image = imageio.imread(filename) writer.append_data(image) image = imageio.imread(filename) writer.append_data(image) # this is a hack to display the gif inside the notebook os.system('cp dcgan.gif dcgan.gif.png') display.Image(filename="dcgan.gif.png") Explanation: Generate a GIF of all the saved images. 
<!-- TODO(markdaoust): Remove the hack when Ipython version is updated --> End of explanation #from google.colab import files #files.download('dcgan.gif') Explanation: To download the animation from Colab, uncomment the code below: End of explanation
5,206
Given the following text description, write Python code to implement the functionality described below step by step Description: Step1: Supervised Classification Step2: Read the dataset In this case the training dataset is just a csv file. In case of larger dataset more advanced file fromats like hdf5 are used. Pandas is used to load the files. Step3: Creating training sets Each class of tissue in our pandas framework has a pre assigned label (Module 1). This labels were Step4: X is the feature vector y are the labels Split Training/Validation Step5: Create the classifier For the following example we will consider a SVM classifier. The classifier is provided by the Scikit-Learn library Step6: Run some basic analytics Calculate some basic metrics. Step7: Correct way Fine tune hyperparameters Step8: Debug algorithm with learning curve X_train is randomly split into a training and a test set 3 times (n_iter=3). Each point on the training-score curve is the average of 3 scores where the model was trained and evaluated on the first i training examples. Each point on the cross-validation score curve is the average of 3 scores where the model was trained on the first i training examples and evaluated on all examples of the test set. Step9: Heatmap This will take some time...
Python Code: %matplotlib inline import warnings warnings.filterwarnings('ignore') import os import numpy as np import matplotlib.pyplot as plt from sklearn import svm import pandas as pd from matplotlib.colors import ListedColormap from sklearn.model_selection import StratifiedShuffleSplit from sklearn.model_selection import GridSearchCV from sklearn.model_selection import train_test_split from sklearn.model_selection import ShuffleSplit from sklearn.preprocessing import StandardScaler from sklearn.svm import SVC import sklearn.metrics as metrics from sklearn import tree from IPython.display import Image from sklearn.externals.six import StringIO import pydotplus from matplotlib.colors import Normalize from sklearn.learning_curve import learning_curve from sklearn import preprocessing from sklearn.preprocessing import StandardScaler # Utility function to move the midpoint of a colormap to be around # the values of interest. class MidpointNormalize(Normalize): def __init__(self, vmin=None, vmax=None, midpoint=None, clip=False): self.midpoint = midpoint Normalize.__init__(self, vmin, vmax, clip) def __call__(self, value, clip=None): x, y = [self.vmin, self.midpoint, self.vmax], [0, 0.5, 1] return np.ma.masked_array(np.interp(value, x, y)) def plot_learning_curve(estimator, title, X, y, ylim=None, cv=None, n_jobs=1, train_sizes=np.linspace(.1, 1.0, 5)): Generate a simple plot of the test and training learning curve. Parameters ---------- estimator : object type that implements the "fit" and "predict" methods An object of that type which is cloned for each validation. title : string Title for the chart. X : array-like, shape (n_samples, n_features) Training vector, where n_samples is the number of samples and n_features is the number of features. y : array-like, shape (n_samples) or (n_samples, n_features), optional Target relative to X for classification or regression; None for unsupervised learning. ylim : tuple, shape (ymin, ymax), optional Defines minimum and maximum yvalues plotted. cv : int, cross-validation generator or an iterable, optional Determines the cross-validation splitting strategy. Possible inputs for cv are: - None, to use the default 3-fold cross-validation, - integer, to specify the number of folds. - An object to be used as a cross-validation generator. - An iterable yielding train/test splits. For integer/None inputs, if ``y`` is binary or multiclass, :class:`StratifiedKFold` used. If the estimator is not a classifier or if ``y`` is neither binary nor multiclass, :class:`KFold` is used. Refer :ref:`User Guide <cross_validation>` for the various cross-validators that can be used here. n_jobs : integer, optional Number of jobs to run in parallel (default 1). 
plt.figure() plt.title(title) if ylim is not None: plt.ylim(*ylim) plt.xlabel("Training examples") plt.ylabel("Score") train_sizes, train_scores, test_scores = learning_curve( estimator, X, y, cv=cv, n_jobs=n_jobs, train_sizes=train_sizes) train_scores_mean = np.mean(train_scores, axis=1) train_scores_std = np.std(train_scores, axis=1) test_scores_mean = np.mean(test_scores, axis=1) test_scores_std = np.std(test_scores, axis=1) plt.grid() plt.fill_between(train_sizes, train_scores_mean - train_scores_std, train_scores_mean + train_scores_std, alpha=0.1, color="r") plt.fill_between(train_sizes, test_scores_mean - test_scores_std, test_scores_mean + test_scores_std, alpha=0.1, color="g") plt.plot(train_sizes, train_scores_mean, 'o-', color="r", label="Training score") plt.plot(train_sizes, test_scores_mean, 'o-', color="g", label="Cross-validation score") plt.legend(loc="best") return plt Explanation: Supervised Classification: SVM Import Libraries End of explanation Data=pd.read_csv ('DataExample.csv') # if you need to print or have access to the data as numpy array you can execute the following commands # print (Data) # print(Data.as_matrix(columns=['NAWMpost'])) Explanation: Read the dataset In this case the training dataset is just a csv file. In case of larger dataset more advanced file fromats like hdf5 are used. Pandas is used to load the files. End of explanation ClassBrainTissuepost=(Data['ClassTissuePost'].values) ClassBrainTissuepost= (np.asarray(ClassBrainTissuepost)) ClassBrainTissuepost=ClassBrainTissuepost[~np.isnan(ClassBrainTissuepost)] ClassBrainTissuepre=(Data[['ClassTissuePre']].values) ClassBrainTissuepre= (np.asarray(ClassBrainTissuepre)) ClassBrainTissuepre=ClassBrainTissuepre[~np.isnan(ClassBrainTissuepre)] ClassTUMORpost=(Data[['ClassTumorPost']].values) ClassTUMORpost= (np.asarray(ClassTUMORpost)) ClassTUMORpost=ClassTUMORpost[~np.isnan(ClassTUMORpost)] ClassTUMORpre=(Data[['ClassTumorPre']].values) ClassTUMORpre= (np.asarray(ClassTUMORpre)) ClassTUMORpre=ClassTUMORpre[~np.isnan(ClassTUMORpre)] X_1 = np.stack((ClassBrainTissuepost,ClassBrainTissuepre)) # we only take the first two features. X_2 = np.stack((ClassTUMORpost,ClassTUMORpre)) X=np.concatenate((X_1.transpose(), X_2.transpose()),axis=0) y =np.zeros((np.shape(X))[0]) y[np.shape(X_1)[1]:]=1 X= preprocessing.scale(X) Explanation: Creating training sets Each class of tissue in our pandas framework has a pre assigned label (Module 1). This labels were: - ClassTissuePost - ClassTissuePre - ClassTissueFlair - ClassTumorPost - ClassTumorPre - ClassTumorFlair - ClassEdemaPost - ClassEdemaPre - ClassEdemaFlair For demontration purposes we will create a feature vector that contains the intesities for the tumor and white matter area from the T1w pre and post contrast images. End of explanation X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0) Explanation: X is the feature vector y are the labels Split Training/Validation End of explanation h = .02 # step size in the mesh # we create an instance of SVM and fit out data. 
We do not scale our # data since we want to plot the support vectors C = 1.0 # SVM regularization parameter svc = svm.SVC(kernel='linear', C=C).fit(X, y) rbf_svc = svm.SVC(kernel='rbf', gamma=0.1, C=10).fit(X, y) poly_svc = svm.SVC(kernel='poly', degree=3, C=C).fit(X, y) # create a mesh to plot in x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1 y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1 xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h)) # title for the plots titles = ['SVC with linear kernel', 'SVC with RBF kernel', 'SVC with polynomial (degree 3) kernel'] for i, clf in enumerate((svc, rbf_svc, poly_svc)): # Plot the decision boundary. For that, we will assign a color to each # point in the mesh [x_min, m_max]x[y_min, y_max]. plt.subplot(2, 2, i + 1) plt.subplots_adjust(wspace=0.4, hspace=0.4) Z = clf.predict(np.c_[xx.ravel(), yy.ravel()]) # Put the result into a color plot Z = Z.reshape(xx.shape) plt.contourf(xx, yy, Z, cmap=plt.cm.Paired, alpha=0.8) # Plot also the training points plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.Paired) plt.xlabel('Intensity post contrast') plt.ylabel('Intensity pre contrast') plt.xlim(xx.min(), xx.max()) plt.ylim(yy.min(), yy.max()) plt.xticks(()) plt.yticks(()) plt.title(titles[i]) plt.show() # understanding margins for C in [0.001,1000]: fig = plt.subplot() clf = svm.SVC(C,kernel='linear') clf.fit(X, y) # create a mesh to plot in x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1 y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1 xx = np.linspace(x_min,x_max) # print (xx) xx=np.asarray(xx) # get the separating hyperplane w = clf.coef_[0] # print(w) a = -w[0] / w[1] # print (a) yy = a * xx - (clf.intercept_[0]) / w[1] # print(yy) # plot the parallels to the separating hyperplane that pass through the # support vectors b = clf.support_vectors_[0] yy_down = a * xx + (b[1] - a * b[0]) b = clf.support_vectors_[-1] yy_up = a * xx + (b[1] - a * b[0]) # plot the line, the points, and the nearest vectors to the plane plt.plot(xx, yy, 'k-') plt.plot(xx, yy_down, 'k--') plt.plot(xx, yy_up, 'k--') plt.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1], s=80, facecolors='none') plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.Paired) plt.axis('tight') plt.show() Explanation: Create the classifier For the following example we will consider a SVM classifier. The classifier is provided by the Scikit-Learn library End of explanation print ('C=100') model=svm.SVC(C=100,kernel='linear') model.fit(X_train, y_train) # make predictions expected = y_test predicted = model.predict(X_test) # summarize the fit of the model print(metrics.classification_report(expected, predicted)) print(metrics.confusion_matrix(expected, predicted)) print (20*'---') print ('C=0.0001') model=svm.SVC(C=0.0001,kernel='linear') model.fit(X_train, y_train) # make predictions expected = y_test predicted = model.predict(X_test) # summarize the fit of the model print(metrics.classification_report(expected, predicted)) print(metrics.confusion_matrix(expected, predicted)) Explanation: Run some basic analytics Calculate some basic metrics. 
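If a single summary number is useful alongside the full report, scikit-learn's accuracy_score can be applied to the same predictions (a small illustrative addition, reusing expected and predicted from the cell above).
print(metrics.accuracy_score(expected, predicted))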
End of explanation gamma_val =[0.01, .2,.3,.4,.9] classifier = svm.SVC(kernel='rbf', C=10).fit(X, y) classifier = GridSearchCV(estimator=classifier, cv=5, param_grid=dict(gamma=gamma_val)) classifier.fit(X_train, y_train) Explanation: Correct way Fine tune hyperparameters End of explanation title = 'Learning Curves (SVM, gamma=%.6f)' %classifier.best_estimator_.gamma estimator = svm.SVC(kernel='rbf', C=10, gamma=classifier.best_estimator_.gamma) plot_learning_curve(estimator, title, X_train, y_train, cv=4) plt.show() ### Final evaluation on the test set classifier.score(X_test, y_test) Explanation: Debug algorithm with learning curve X_train is randomly split into a training and a test set 3 times (n_iter=3). Each point on the training-score curve is the average of 3 scores where the model was trained and evaluated on the first i training examples. Each point on the cross-validation score curve is the average of 3 scores where the model was trained on the first i training examples and evaluated on all examples of the test set. End of explanation C_range = np.logspace(-2, 10, 13) gamma_range = np.logspace(-9, 3, 13) param_grid = dict(gamma=gamma_range, C=C_range) cv = StratifiedShuffleSplit(n_splits=5, test_size=0.2, random_state=42) grid_clf = GridSearchCV(SVC(), param_grid=param_grid, cv=cv) grid_clf.fit(X, y) print("The best parameters are %s with a score of %0.2f" % (grid_clf.best_params_, grid_clf.best_score_)) plt.figure(figsize=(8, 6)) scores = grid_clf.cv_results_['mean_test_score'].reshape(len(C_range), len(gamma_range)) plt.figure(figsize=(8, 6)) plt.subplots_adjust(left=.2, right=0.95, bottom=0.15, top=0.95) plt.imshow(scores, interpolation='nearest', cmap=plt.cm.jet, norm=MidpointNormalize(vmin=0.2, midpoint=0.92)) plt.xlabel('gamma') plt.ylabel('C') plt.colorbar() plt.xticks(np.arange(len(gamma_range)), gamma_range, rotation=45) plt.yticks(np.arange(len(C_range)), C_range) plt.title('Validation accuracy') plt.show() Explanation: Heatmap This will take some time... End of explanation
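Because the exhaustive 13 x 13 search above is slow, one pragmatic option (a sketch, not part of the original analysis) is to coarsen the grid and run the cross-validation folds in parallel with n_jobs=-1; the grid sizes chosen here are arbitrary.
coarse_C = np.logspace(-2, 10, 7)
coarse_gamma = np.logspace(-9, 3, 7)
fast_grid = GridSearchCV(SVC(), param_grid=dict(C=coarse_C, gamma=coarse_gamma),
                         cv=cv, n_jobs=-1)   # n_jobs=-1 uses all available cores
fast_grid.fit(X, y)
print(fast_grid.best_params_, fast_grid.best_score_)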
5,207
Given the following text description, write Python code to implement the functionality described below step by step Description: An Introduction to CausalGraphicalModels CausalGraphicalModel is a python module for describing and manipulating Causal Graphical Models and Structural Causal Models. Behind the curtain, it is a light wrapper around the python graph library networkx. This notebook is designed to give a quick overview of the functionality of this package. CausalGraphicalModels Step1: Latent Variables Step2: StructuralCausalModels For Structural Causal Models (SCM) we need to specify the functional form of each node Step3: The only requirement on the functions are Step4: And to access the implied CGM" Step5: And to apply an intervention Step6: And sample from the distribution implied by this intervention
Python Code: from causalgraphicalmodels import CausalGraphicalModel sprinkler = CausalGraphicalModel( nodes=["season", "rain", "sprinkler", "wet", "slippery"], edges=[ ("season", "rain"), ("season", "sprinkler"), ("rain", "wet"), ("sprinkler", "wet"), ("wet", "slippery") ] ) # draw return a graphviz `dot` object, which jupyter can render sprinkler.draw() # get the distribution implied by the graph print(sprinkler.get_distribution()) # check for d-seperation of two nodes sprinkler.is_d_separated("slippery", "season", {"wet"}) # get all the conditional independence relationships implied by a CGM sprinkler.get_all_independence_relationships() # check backdoor adjustment set sprinkler.is_valid_backdoor_adjustment_set("rain", "slippery", {"wet"}) # get all backdoor adjustment sets sprinkler.get_all_backdoor_adjustment_sets("rain", "slippery") # get the graph created by intervening on node "rain" do_sprinkler = sprinkler.do("rain") do_sprinkler.draw() Explanation: An Introduction to CausalGraphicalModels CausalGraphicalModel is a python module for describing and manipulating Causal Graphical Models and Structural Causal Models. Behind the curtain, it is a light wrapper around the python graph library networkx. This notebook is designed to give a quick overview of the functionality of this package. CausalGraphicalModels End of explanation dag_with_latent_variables = CausalGraphicalModel( nodes=["x", "y", "z"], edges=[ ("x", "z"), ("z", "y"), ], latent_edges=[ ("x", "y") ] ) dag_with_latent_variables.draw() # here there are no observed backdoor adjustment sets dag_with_latent_variables.get_all_backdoor_adjustment_sets("x", "y") # but there is a frontdoor adjustment set dag_with_latent_variables.get_all_frontdoor_adjustment_sets("x", "y") Explanation: Latent Variables End of explanation from causalgraphicalmodels import StructuralCausalModel import numpy as np scm = StructuralCausalModel({ "x1": lambda n_samples: np.random.binomial(n=1,p=0.7,size=n_samples), "x2": lambda x1, n_samples: np.random.normal(loc=x1, scale=0.1), "x3": lambda x2, n_samples: x2 ** 2, }) Explanation: StructuralCausalModels For Structural Causal Models (SCM) we need to specify the functional form of each node: End of explanation ds = scm.sample(n_samples=100) ds.head() # and visualise the samples import seaborn as sns %matplotlib inline sns.kdeplot( data=ds.x2, data2=ds.x3, ) Explanation: The only requirement on the functions are: - that variable names are consistent - each function accepts keyword variables in the form of numpy arrays and output numpy arrays of shape [n_samples] - that in addition to it's parents, each function takes a n_samples variables indicating how many samples to generate - that any function acts on each row independently. This ensure that the output samples are independent Wrapping these functions in the StructuralCausalModel object allows us to easily generate samples: End of explanation scm.cgm.draw() Explanation: And to access the implied CGM" End of explanation scm_do = scm.do("x1") scm_do.cgm.draw() Explanation: And to apply an intervention: End of explanation scm_do.sample(n_samples=5, set_values={"x1": np.arange(5)}) Explanation: And sample from the distribution implied by this intervention: End of explanation
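As a final illustration, interventional samples can be compared directly to estimate the effect of an intervention on a downstream variable. This sketch only reuses the API shown above; the sample size and the choice of comparing x1 = 0 against x1 = 1 are arbitrary.
n = 1000
do_x1_0 = scm.do("x1").sample(n_samples=n, set_values={"x1": np.zeros(n)})
do_x1_1 = scm.do("x1").sample(n_samples=n, set_values={"x1": np.ones(n)})
# Average causal effect of x1 on x3 under this particular SCM (should be close to 1).
print(do_x1_1.x3.mean() - do_x1_0.x3.mean())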
5,208
Given the following text description, write Python code to implement the functionality described below step by step Description: Graphical User Interfaces Object oriented programming and particularly inheritance is commonly used for creating GUIs. There are a large number of different frameworks supporting building GUIs. The following are particularly relevant Step1: Although this works, it is visually unappealing. We can improve on this using styles and themes. Step2: As our applications get more complicated we must give greater thought to the layout. The following example comes from the TkDocs site. Step4: Matplotlib For simple programs, displaying data and taking basic input, often a command line application will be much faster to implement than a GUI. The times when I have moved away from the command line it has been to interact with image data and plots. Here, matplotlib often works very well. Either it can be embedded in a larger application or it can be used directly. There are a number of examples on the matplotlib site. Here is one stripped down example of one recent GUI I have used.
Python Code: import tkinter as tk class Application(tk.Frame): def __init__(self, master=None): tk.Frame.__init__(self, master) self.pack() self.createWidgets() def createWidgets(self): self.hi_there = tk.Button(self) self.hi_there["text"] = "Hello World\n(click me)" self.hi_there["command"] = self.say_hi self.hi_there.pack(side="top") self.QUIT = tk.Button(self, text="QUIT", fg="red", command=root.destroy) self.QUIT.pack(side="bottom") def say_hi(self): print("hi there, everyone!") root = tk.Tk() app = Application(master=root) app.mainloop() Explanation: Graphical User Interfaces Object oriented programming and particularly inheritance is commonly used for creating GUIs. There are a large number of different frameworks supporting building GUIs. The following are particularly relevant: TkInter - This is the official/default GUI framework guidata - A GUI framework for dataset display and editing VTK - A GUI framework for data visualization pyqtgraph - A GUI framework for data visualization, easily installed with conda install pyqtgraph matplotlib - As well as creating plots matplotlib can support interaction TkInter TkInter is widely used with plenty of documentation available but may prove somewhat limited for more data intensive applications. Documentation from the standard library Further documentation from python.org TkDocs Let's look at a simple example from the documentation End of explanation import tkinter as tk from tkinter import ttk class Application(ttk.Frame): def __init__(self, master=None): super().__init__(master, padding="3 3 12 12") self.grid(column=0, row=0, ) self.createWidgets() self.master.title('Test') def createWidgets(self): self.hi_there = ttk.Button(self) self.hi_there["text"] = "Hello World\n(click me)" self.hi_there["command"] = self.say_hi self.QUIT = ttk.Button(self, text="QUIT", style='Alert.TButton', command=root.destroy) for child in self.winfo_children(): child.grid_configure(padx=10, pady=10) def say_hi(self): print("hi there, everyone!") root = tk.Tk() app = Application(master=root) s = ttk.Style() s.configure('TButton', font='helvetica 24') s.configure('Alert.TButton', foreground='red') root.mainloop() Explanation: Although this works, it is visually unappealing. We can improve on this using styles and themes. End of explanation from tkinter import * from tkinter import ttk def calculate(*args): try: value = float(feet.get()) meters.set((0.3048 * value * 10000.0 + 0.5)/10000.0) except ValueError: pass root = Tk() root.title("Feet to Meters") mainframe = ttk.Frame(root, padding="3 3 12 12") mainframe.grid(column=0, row=0, sticky=(N, W, E, S)) mainframe.columnconfigure(0, weight=1) mainframe.rowconfigure(0, weight=1) feet = StringVar() meters = StringVar() feet_entry = ttk.Entry(mainframe, width=7, textvariable=feet) feet_entry.grid(column=2, row=1, sticky=(W, E)) ttk.Label(mainframe, textvariable=meters).grid(column=2, row=2, sticky=(W, E)) ttk.Button(mainframe, text="Calculate", command=calculate).grid(column=3, row=3, sticky=W) ttk.Label(mainframe, text="feet").grid(column=3, row=1, sticky=W) ttk.Label(mainframe, text="is equivalent to").grid(column=1, row=2, sticky=E) ttk.Label(mainframe, text="meters").grid(column=3, row=2, sticky=W) for child in mainframe.winfo_children(): child.grid_configure(padx=5, pady=5) feet_entry.focus() root.bind('<Return>', calculate) root.mainloop() Explanation: As our applications get more complicated we must give greater thought to the layout. The following example comes from the TkDocs site. 
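A short aside on the grid options used in that example: sticky pins a widget to the named edges of its grid cell, while columnconfigure and rowconfigure weights decide which cells absorb extra space when the window is resized. The snippet below is a minimal, hypothetical illustration of the idea, not part of the TkDocs example.
demo = Tk()
demo.columnconfigure(0, weight=1)
demo.rowconfigure(0, weight=1)
frame = ttk.Frame(demo, padding=10)
frame.grid(column=0, row=0, sticky=(N, W, E, S))
frame.columnconfigure(1, weight=1)      # column 1 soaks up any extra width
ttk.Label(frame, text="name:").grid(column=0, row=0, sticky=W)
ttk.Entry(frame).grid(column=1, row=0, sticky=(W, E))   # stretches horizontally
demo.mainloop()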
End of explanation Do a mouseclick somewhere, move the mouse to some destination, release the button. This class gives click- and release-events and also draws a line or a box from the click-point to the actual mouseposition (within the same axes) until the button is released. Within the method 'self.ignore()' it is checked wether the button from eventpress and eventrelease are the same. from matplotlib.widgets import RectangleSelector import matplotlib.pyplot as plt import matplotlib.cbook as cbook def line_select_callback(eclick, erelease): 'eclick and erelease are the press and release events' x1, y1 = eclick.xdata, eclick.ydata x2, y2 = erelease.xdata, erelease.ydata print ("(%3.2f, %3.2f) --> (%3.2f, %3.2f)" % (x1, y1, x2, y2)) print (" The button you used were: %s %s" % (eclick.button, erelease.button)) def toggle_selector(event): print (' Key pressed.') if event.key in ['Q', 'q'] and toggle_selector.RS.active: print (' RectangleSelector deactivated.') toggle_selector.RS.set_active(False) if event.key in ['A', 'a'] and not toggle_selector.RS.active: print (' RectangleSelector activated.') toggle_selector.RS.set_active(True) image_file = cbook.get_sample_data('grace_hopper.png') image = plt.imread(image_file) fig, current_ax = plt.subplots() plt.imshow(image) toggle_selector.RS = RectangleSelector(current_ax, line_select_callback, drawtype='box', useblit=True, button=[1,3], # don't use middle button minspanx=5, minspany=5, spancoords='pixels') plt.connect('key_press_event', toggle_selector) plt.show() Explanation: Matplotlib For simple programs, displaying data and taking basic input, often a command line application will be much faster to implement than a GUI. The times when I have moved away from the command line it has been to interact with image data and plots. Here, matplotlib often works very well. Either it can be embedded in a larger application or it can be used directly. There are a number of examples on the matplotlib site. Here is one stripped down example of one recent GUI I have used. End of explanation
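For completeness, here is a minimal sketch of the "embedded in a larger application" route mentioned above, using matplotlib's standard TkAgg backend; the plotted data are arbitrary placeholders.
import tkinter as tk
from matplotlib.figure import Figure
from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg

root = tk.Tk()
root.title("Embedded matplotlib")
fig = Figure(figsize=(4, 3))
ax = fig.add_subplot(111)
ax.plot([0, 1, 2, 3], [0, 1, 4, 9])
canvas = FigureCanvasTkAgg(fig, master=root)   # ties the figure to the Tk window
canvas.draw()
canvas.get_tk_widget().pack(fill="both", expand=True)
root.mainloop()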
5,209
Given the following text description, write Python code to implement the functionality described below step by step Description: <a href="http Step2: Part 1 Step3: Next, let's demonstrate the different sorts of grids we get with different numbers of layers. We'll look at grids with between 3 and 1023 nodes. Step4: We can see the number of nodes grows exponentially with the number of layers. Step6: 1b Step8: 1c Step9: Part 2 Step10: We can see that the majority of the time it took to create the grid, parcels, and the NST component occurs in xarray functions. This is good, as those are likely already efficient. Next we profile the code as we use the run_one_step function that advances the component forward in time. Step12: We can see that the majority of the time it takes to run the component happens in a function called _partition_active_and_storage_layers. This function is used to figure out which parcels are moving and which are not active. Part 3 Step13: Next, we use or new time_code function with a few different grid sizes, a few different parcels per link, and for 10 seconds. Feel free to experiment and change these values. Some of these values have been reduced to ensure that this notebook always works in the Landlab continuous integration. 3b Step14: We make a dataframe and investigate the contents with df.head. We'll use some shorthand for the column and axis names Step15: 3c
Python Code: import cProfile import io import pstats import time import warnings from pstats import SortKey import matplotlib.pyplot as plt import numpy as np import pandas as pd import xarray as xr from landlab.components import FlowDirectorSteepest, NetworkSedimentTransporter from landlab.data_record import DataRecord from landlab.grid.network import NetworkModelGrid from landlab.plot import graph warnings.filterwarnings("ignore") Explanation: <a href="http://landlab.github.io"><img style="float: left" src="../../landlab_header.png"></a> Profiling and Scaling Analysis of the NetworkSedimentTransporter Planning to run the NetworkSedimentTransporter (NST) on a large river network, or for a long time? You might wonder: how efficient is this component? Most field-scale applications of the NST would likley have 100-500 links in the network and be initialized with at least 100 parcels per link (e.g., up to 50k+ parcels). This number of parcels per link are commonly need to capture the full grain size distribution. Because the component may use many parcels on many links, we will want to assess two parts of the components efficiency. First, we will profile the code to see what happens each time the component runs. This is useful to see which parts of the computation take the most time and occur the most often. If code is running too slowly, it is usefull to profile it in order to identify which parts may yield the biggest speedups. The second goal is scaling, or seeing how changes in the number of parcels and links changes run time. If doubling the number of links doubled the run time, we would say that runtime for the component "scales linearly" with the number of links. In computer science this is often called the time complexity of the code/algorithm. The goal of this notebook is to explore profiling and scaling with the NST. We will look at how changing the grid nodes and number of parcels per link impacts the initialization and run time of this component. In order to do this we will need to make some python functions to create variably sized grids. We will also use some common python tools for profiling code. The notebook is organized into three parts: Part 1: Create generic, variable sized grids and parcels. Part 2: Profiling the code. Part 3: Scaling analysis. We begin by importaing all needed python modules. End of explanation def create_node_xy_and_links(n_layers, x0=0.0, y0=0.0, xperc=0.9, dy=1.0): Create node and link structure of a branching binary tree. The tree can have an arbitrary number of layers. For example, a tree with one layer has three nodes and two links: :: * * \ / * The lowest of the nodes is the "origin" node, and has coordinates of `(x0, y0)`. The y spacing between layers is given by `dy`. Finally, in order to ensure that links do not cross and nodes are not co-located a shrinking factor, `xperc` that must be less than 1.0 is specified. Each layer has 2^{layer} nodes: Layer 0 has 1 node, layer 1 has 2 nodes, layer 2 has 4 nodes. A tree with three layers has seven nodes and six links: :: * * * * \ / \ / * * \ / * Parameters ---------- n_layers : int Number of layers of the binary tree x0 : float x coordinate position of the origin node. Default of 0. y0=0. : float y coordinate position of the origin node. Default of 0. xperc : float x direction shrinkage factor to prevent co-location of nodes and corssing links. Must be between 0.0 and 1.0 noninclusive. Default of 0.9. dy : float y direction spacing between layers. Default of 1. 
Returns ------ x_of_node : list Node x coordinates. y_of_node : list Node y coordinates. nodes_at_link : list of (head, tail) tuples Nodes at link tail and head. assert xperc < 1.0 assert xperc > 0.0 nodes_per_layer = np.power(2, np.arange(n_layers + 1)) nnodes = np.sum(nodes_per_layer) x_of_node = [x0] y_of_node = [y0] nodes_at_link = [] id_start_layer = 0 for nl in np.arange(1, n_layers + 1): nodes_last_layer = np.power(2, nl - 1) nodes_this_layer = np.power(2, nl) dx = xperc * (dy) * (0.5 ** (nl - 1)) for ni in range(nodes_last_layer): head_id = id_start_layer + ni tail_id = len(x_of_node) x = x_of_node[head_id] y = y_of_node[head_id] x_of_node.extend([x - dx, x + dx]) y_of_node.extend([y + dy, y + dy]) nodes_at_link.extend([(head_id, tail_id), (head_id, tail_id + 1)]) id_start_layer = len(x_of_node) - nodes_this_layer return x_of_node, y_of_node, nodes_at_link Explanation: Part 1: Variably-sized grids and parcels Part 1a: Grid topology First, we need the ability to create different sizes of network model grids. We will use these different size grids to assess how long it takes the NST to run with different grid sizes. A simple approach is to create a generic grid in which each node has two recievers. Lets start by writing a function that creates the x and y node coordinates and the linking structure for a given number of layers. While these grids do not look like typical river networks, they are topologically similar. We create a function create_node_xy_and_links which takes the following inputs n_layers x0 y0 x_perc dy and returns three arrays: - x_of_node - y_of_node - nodes_at_link These inputs and outputs are defined in the function docstring below. The function was designed to produce output that could be directly provided to the NetworkModelGrid init function. End of explanation example_layers = [1, 3, 5, 7, 9] nodes = [] for i, n_layers in enumerate(example_layers): x_of_node, y_of_node, nodes_at_link = create_node_xy_and_links(n_layers) grid = NetworkModelGrid((y_of_node, x_of_node), nodes_at_link) graph.plot_graph(grid, at="node,link", with_id=False) nodes.append(grid.number_of_nodes) Explanation: Next, let's demonstrate the different sorts of grids we get with different numbers of layers. We'll look at grids with between 3 and 1023 nodes. End of explanation plt.plot(example_layers, nodes) plt.xlabel("Number of Layers") plt.ylabel("Number of Nodes") plt.show() Explanation: We can see the number of nodes grows exponentially with the number of layers. End of explanation def create_nmg_and_fd(n_layers): Create a generic NetworkModelGrid and FlowDirectorSteepest. This function will also add the following fields to the NetworkModelGrid. - topographic__elevation at node - bedrock__elevation at node - flow_depth at link - reach_length at link - channel_width at link. 
Parameters ---------- n_layers : int Number of layers of the binary tree Returns ------- grid : NetworkModelGrid instance fd : FlowDirectorSteepest instance x_of_node, y_of_node, nodes_at_link = create_node_xy_and_links(n_layers) grid = NetworkModelGrid((y_of_node, x_of_node), nodes_at_link) _ = grid.add_field("topographic__elevation", grid.y_of_node.copy(), at="node") _ = grid.add_field("bedrock__elevation", grid.y_of_node.copy(), at="node") _ = grid.add_field( "flow_depth", 2.5 * np.ones(grid.number_of_links), at="link" ) # m _ = grid.add_field( "reach_length", 200.0 * np.ones(grid.number_of_links), at="link" ) # m _ = grid.add_field( "channel_width", 1.0 * np.ones(grid.number_of_links), at="link" ) # m fd = FlowDirectorSteepest(grid) fd.run_one_step() return grid, fd Explanation: 1b: Generic grid. In order to run the NST, our grids need fields related to channel geometry and flow characteristics. Here, we'll create a function that takes only the number of layers and uses the function create_node_xy_and_links that we just created, creates a grid, adds, those fields and populate them with generic values. The NST also needs a FlowDirectorSteepest instance, so we'll create that too. Thus, with these two functions we can specifiy the desired number of layers and get both of these objects we need to instantiate the NST. End of explanation def create_parcels(grid, parcels_per_link=5): Create a generic set of parcels. The NetworkSedimentTransporter requires a set of parcels with some specific attributes (e.g., density) that are used in order to calculate travel distances. This function creates the parcels in the correct format and populates all necessary attributes. Specifically it creates the following attributes: - "abrasion_rate" - "density" - "time_arrival_in_link" - "active_layer" - "location_in_link" - "D" - "volume" Parameters ---------- grid : NetworkModelGrid parcels_per_link : int Number of parcels to create for each link. Default of 5. Returns ------- parcels : DataRecord # element_id is the link on which the parcel begins. element_id = np.repeat(np.arange(grid.number_of_links), parcels_per_link) element_id = np.expand_dims(element_id, axis=1) # scale volume with parcels per link so we end up with a similar quantity of sediment. 
volume = (1.0 / parcels_per_link) * np.ones(np.shape(element_id)) # (m3) active_layer = np.zeros(np.shape(element_id)) # 1= active, 0 = inactive density = 2650 * np.ones(np.size(element_id)) # (kg/m3) abrasion_rate = 0.0 * np.ones(np.size(element_id)) # (mass loss /m) # Lognormal GSD medianD = 0.085 # m mu = np.log(medianD) sigma = np.log(2) # assume that D84 = sigma*D50 np.random.seed(0) D = np.random.lognormal( mu, sigma, np.shape(element_id) ) # (m) the diameter of grains in each parcel time_arrival_in_link = np.random.rand(np.size(element_id), 1) location_in_link = np.random.rand(np.size(element_id), 1) variables = { "abrasion_rate": (["item_id"], abrasion_rate), "density": (["item_id"], density), "time_arrival_in_link": (["item_id", "time"], time_arrival_in_link), "active_layer": (["item_id", "time"], active_layer), "location_in_link": (["item_id", "time"], location_in_link), "D": (["item_id", "time"], D), "volume": (["item_id", "time"], volume), } items = {"grid_element": "link", "element_id": element_id} parcels = DataRecord( grid, items=items, time=[0.0], data_vars=variables, dummy_elements={"link": [NetworkSedimentTransporter.OUT_OF_NETWORK]}, ) return parcels Explanation: 1c: Generic sets of parcels Our second to last step in writing functions to create a generic model grid and set of parcels is to create a function that will create our parcels. The parcels are associated with the grid because parcels must be located on a grid element, and thus we will always need the grid to create the parcels. End of explanation # feel free to change these parameters and see # how it impacts the results nlayer = 5 timesteps = 50 parcels_per_link = 50 # calculate dt and set seed. dt = 60 * 60 * 24 * 12 # length of timestep (seconds) np.random.seed(1234) pr = cProfile.Profile() pr.enable() grid, fd = create_nmg_and_fd(nlayer) parcels = create_parcels(grid, parcels_per_link=parcels_per_link) nst = NetworkSedimentTransporter( grid, parcels, fd, bed_porosity=0.3, g=9.81, fluid_density=1000, transport_method="WilcockCrowe", ) pr.disable() s = io.StringIO() sortby = SortKey.CUMULATIVE ps = pstats.Stats(pr, stream=s).sort_stats(sortby) ps.print_stats() print(s.getvalue()) Explanation: Part 2: Profile the code Next we will create a NST, run it, and profile it using the python built-in profiling tool cProfile. We create our grid and parcels, instantiate the profile, run the model for 60 timesteps, and then compile the results of profiling. We will profile the code in two parts, the instantiation and running. End of explanation pr = cProfile.Profile() pr.enable() for t in range(0, (timesteps * dt), dt): nst.run_one_step(dt) pr.disable() s = io.StringIO() sortby = SortKey.CUMULATIVE ps = pstats.Stats(pr, stream=s).sort_stats(sortby) ps.print_stats() print(s.getvalue()) Explanation: We can see that the majority of the time it took to create the grid, parcels, and the NST component occurs in xarray functions. This is good, as those are likely already efficient. Next we profile the code as we use the run_one_step function that advances the component forward in time. End of explanation def time_code(nlayer=3, parcels_per_link=100, timesteps=10): Time the initializiation and runtime. Parameters ---------- n_layers : int Number of layers of the binary tree used to create the NetworkModelGrid. Default of 3 parcels_per_link : int Number of parcels to create for each link. Default of 100. timesteps : int Number of timesteps. Default of 10. 
Returns ------- (number_of_nodes, parcels_per_link, total_parcels) : tuple Tuple indicating the key inputs in our scaling analysis. The number of nodes, the number of parcels per link, the total number of parcels. init_duration : float Duration of the initiation step, in seconds. r1s_per : float Duration of the average run_one_step call, in seconds init_start = time.time() grid, fd = create_nmg_and_fd(nlayer) parcels = create_parcels(grid, parcels_per_link=parcels_per_link) dt = 60 * 60 * 24 * 12 # length of timestep (seconds) nst = NetworkSedimentTransporter( grid, parcels, fd, bed_porosity=0.3, g=9.81, fluid_density=1000, transport_method="WilcockCrowe", ) init_duration = time.time() - init_start if timesteps > 0: r1s_start = time.time() for t in range(timesteps): nst.run_one_step(dt) r1s_per = (time.time() - r1s_start) / timesteps else: r1s_per = 0.0 return (grid.number_of_nodes, parcels_per_link), init_duration, r1s_per Explanation: We can see that the majority of the time it takes to run the component happens in a function called _partition_active_and_storage_layers. This function is used to figure out which parcels are moving and which are not active. Part 3: Scaling Next we look at how initialization and moving the model forward in time change as we increase the number of nodes and number of parcels on a given link. 3a: Create a timing function. To do this we create one final convenience function that will create everything we need for our scaling analysis, run the model, and report how long it took to initialize and run. End of explanation np.random.seed(345) out = [] # this range for i in reduced for testing. for i in range(2, 5): for j in [10, 20, 50, 100, 200, 500]: print(i, j) (nn, ppl), init, r1s_per = time_code(nlayer=i, parcels_per_link=j, timesteps=10) out.append((nn, ppl, init, r1s_per)) Explanation: Next, we use or new time_code function with a few different grid sizes, a few different parcels per link, and for 10 seconds. Feel free to experiment and change these values. Some of these values have been reduced to ensure that this notebook always works in the Landlab continuous integration. 3b: Time the code while systematically varying inputs We save the results in a list structured to easily make a pandas.Dataframe. End of explanation df = pd.DataFrame(out, columns=["nnodes", "ppl", "init", "r1s_per"]) df = df.pivot(index="nnodes", columns="ppl", values=["init", "r1s_per"]) df.head() Explanation: We make a dataframe and investigate the contents with df.head. We'll use some shorthand for the column and axis names: nnodes = number of nodes in the NetworkModelGrid ppl = parcels per link init = duration of time (in seconds) of instantiating the grid, parcels, and NST r1s_per = duration of time (in seconds) of running the model forward one step in time. End of explanation fig, ax = plt.subplots(1, 2, sharex=True, sharey=True, dpi=300) df["init"].plot(loglog=True, ax=ax[0], title="init duration") ax[0].set_ylabel("duration (s)") ax[0].set_xlabel("Number of Nodes") df["r1s_per"].plot(loglog=True, ax=ax[1], title="run one step duration") ax[1].set_ylabel("duration (s)") ax[1].set_xlabel("Number of Nodes") # plt.savefig("scaling1.png") plt.show() Explanation: 3c: Plot the output to see how the code scales. First, lets plot the duration of init (left) and run one step (right) as a function of the number of nodes and parcels per link. We will put number of nodes on the x axis and make a line for each different nubmer of parcels per link. 
As is common, we will make these plots in log-log space. End of explanation
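One way to summarise these curves (an optional sketch, not in the original notebook) is to estimate the scaling exponent as the slope of a straight-line fit in log-log space; ppl_value below is just one of the parcels-per-link values timed above.
ppl_value = 100
slope, intercept = np.polyfit(np.log(df.index.values.astype(float)),
                              np.log(df[("r1s_per", ppl_value)].values), 1)
print("run_one_step duration scales roughly as nnodes^%.2f" % slope)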
5,210
Given the following text description, write Python code to implement the functionality described below step by step Description: Introduction Step1: In this chapter, We want to introduce you to the wonderful world of graph visualization. You probably have seen graphs that are visualized as hairballs. Apart from communicating how complex the graph is, hairballs don't really communicate much else. As such, my goal by the end of this chapter is to introduce you to what I call rational graph visualization. But before we can do that, let's first make sure we understand how to use NetworkX's drawing facilities to draw graphs to the screen. In a pinch, and for small graphs, it's very handy to have. Hairballs The node-link diagram is the canonical diagram we will see in publications. Nodes are commonly drawn as circles, while edges are drawn s lines. Node-link diagrams are common, and there's a good reason for this Step2: Nodes more tightly connected with one another are clustered together. Initial node placement is done typically at random, so really it's tough to deterministically generate the same figure. If the network is small enough to visualize, and the node labels are small enough to fit in a circle, then you can use the with_labels=True argument to bring some degree of informativeness to the drawing Step3: The downside to drawing graphs this way is that large graphs end up looking like hairballs. Can you imagine a graph with more than the 28 nodes that we have? As you probably can imagine, the default nx.draw(G) is probably not suitable for generating visual insights. Matrix Plot A different way that we can visualize a graph is by visualizing it in its matrix form. The nodes are on the x- and y- axes, and a filled square represent an edge between the nodes. We can draw a graph's matrix form conveniently by using nxviz.MatrixPlot Step4: What can you tell from the graph visualization? A few things are immediately obvious Step5: The Arc Plot forms the basis of the next visualization, the highly popular Circos plot. Circos Plot The Circos Plot was developed by Martin Krzywinski at the BC Cancer Research Center. The nxviz.CircosPlot takes inspiration from the original by joining the two ends of the Arc Plot into a circle. Likewise, we can colour and order nodes by node metadata Step6: Generally speaking, you can think of a Circos Plot as being a more compact and aesthetically pleasing version of Arc Plots. Hive Plot The final plot we'll show is, Hive Plots.
Python Code: from IPython.display import YouTubeVideo YouTubeVideo(id="v9HrR_AF5Zc", width="100%") Explanation: Introduction End of explanation from nams import load_data as cf import networkx as nx import matplotlib.pyplot as plt G = cf.load_seventh_grader_network() nx.draw(G) Explanation: In this chapter, We want to introduce you to the wonderful world of graph visualization. You probably have seen graphs that are visualized as hairballs. Apart from communicating how complex the graph is, hairballs don't really communicate much else. As such, my goal by the end of this chapter is to introduce you to what I call rational graph visualization. But before we can do that, let's first make sure we understand how to use NetworkX's drawing facilities to draw graphs to the screen. In a pinch, and for small graphs, it's very handy to have. Hairballs The node-link diagram is the canonical diagram we will see in publications. Nodes are commonly drawn as circles, while edges are drawn s lines. Node-link diagrams are common, and there's a good reason for this: it's convenient to draw! In NetworkX, we can draw node-link diagrams using: End of explanation G.is_directed() nx.draw(G, with_labels=True) Explanation: Nodes more tightly connected with one another are clustered together. Initial node placement is done typically at random, so really it's tough to deterministically generate the same figure. If the network is small enough to visualize, and the node labels are small enough to fit in a circle, then you can use the with_labels=True argument to bring some degree of informativeness to the drawing: End of explanation import nxviz as nv from nxviz import annotate nv.matrix(G, group_by="gender", node_color_by="gender") annotate.matrix_group(G, group_by="gender") Explanation: The downside to drawing graphs this way is that large graphs end up looking like hairballs. Can you imagine a graph with more than the 28 nodes that we have? As you probably can imagine, the default nx.draw(G) is probably not suitable for generating visual insights. Matrix Plot A different way that we can visualize a graph is by visualizing it in its matrix form. The nodes are on the x- and y- axes, and a filled square represent an edge between the nodes. We can draw a graph's matrix form conveniently by using nxviz.MatrixPlot: End of explanation # a = ArcPlot(G, node_color='gender', node_grouping='gender') nv.arc(G, node_color_by="gender", group_by="gender") annotate.arc_group(G, group_by="gender") Explanation: What can you tell from the graph visualization? A few things are immediately obvious: The diagonal is empty: no student voted for themselves as their favourite. The matrix is asymmetric about the diagonal: this is a directed graph! (An undirected graph would be symmetric about the diagonal.) You might go on to suggest that there is some clustering happening, but without applying a proper clustering algorithm on the adjacency matrix, we would be hard-pressed to know for sure. After all, we can simply re-order the node ordering along the axes to produce a seemingly-random matrix. Arc Plot The Arc Plot is another rational graph visualization. Here, we line up the nodes along a horizontal axis, and draw arcs between nodes if they are connected by an edge. We can also optionally group and colour them by some metadata. In the case of this student graph, we group and colour them by "gender". 
End of explanation nv.circos(G, group_by="gender", node_color_by="gender") annotate.circos_group(G, group_by="gender") Explanation: The Arc Plot forms the basis of the next visualization, the highly popular Circos plot. Circos Plot The Circos Plot was developed by Martin Krzywinski at the BC Cancer Research Center. The nxviz.CircosPlot takes inspiration from the original by joining the two ends of the Arc Plot into a circle. Likewise, we can colour and order nodes by node metadata: End of explanation from nxviz import plots import matplotlib.pyplot as plt nv.hive(G, group_by="gender", node_color_by="gender") annotate.hive_group(G, group_by="gender") Explanation: Generally speaking, you can think of a Circos Plot as being a more compact and aesthetically pleasing version of Arc Plots. Hive Plot The final plot we'll show is, Hive Plots. End of explanation
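One possible point of confusion in the code above is the commented-out ArcPlot(...) line: that is the older object-oriented nxviz API, while nv.arc, nv.circos and nv.hive belong to the newer functional API. If you are pinned to an old nxviz release, the equivalent call probably looks like the sketch below; exact method names vary between versions, so treat this as an assumption rather than a guaranteed recipe.

# Sketch for older nxviz releases only (roughly pre-0.7); method names may differ.
from nxviz import ArcPlot
import matplotlib.pyplot as plt

a = ArcPlot(G, node_color='gender', node_grouping='gender')
a.draw()     # assumed: the old API drew onto the current matplotlib figure
plt.show()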
5,211
Given the following text description, write Python code to implement the functionality described below step by step Description: Modeling and Simulation in Python Insulin minimal model Copyright 2017 Allen Downey License Step1: Data We have data from Pacini and Bergman (1986), "MINMOD Step2: The insulin minimal model In addition to the glucose minimal mode, Pacini and Bergman present an insulin minimal model, in which the concentration of insulin, $I$, is governed by this differential equation Step3: Exercise Step4: Exercise Step6: Exercise Step7: Exercise Step8: Exercise
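Before the reference solution below, a self-contained sketch of the insulin equation dI/dt = -k I(t) + gamma (G(t) - G_T) t integrated with a plain Euler loop may help make the model concrete. The glucose trace G(t) and all constants here are illustrative stand-ins, not values taken from the data file.

# Minimal Euler integration of the insulin minimal model (illustrative values only).
import numpy as np

I0, k, gamma, G_T = 360.0, 0.25, 0.004, 80.0
dt, t_end = 1.0, 180.0

def G(t):
    # stand-in for the interpolated glucose measurements
    return 80.0 + 200.0 * np.exp(-t / 20.0)

ts = np.arange(0.0, t_end + dt, dt)
I = np.empty_like(ts)
I[0] = I0
for n in range(len(ts) - 1):
    t = ts[n]
    dIdt = -k * I[n] + gamma * (G(t) - G_T) * t
    I[n + 1] = I[n] + dIdt * dt

print(I[:5])   # insulin concentration over the first few minutes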
Python Code: # Configure Jupyter so figures appear in the notebook %matplotlib inline # Configure Jupyter to display the assigned value after an assignment %config InteractiveShell.ast_node_interactivity='last_expr_or_assign' # import functions from the modsim.py module from modsim import * Explanation: Modeling and Simulation in Python Insulin minimal model Copyright 2017 Allen Downey License: Creative Commons Attribution 4.0 International End of explanation data = pd.read_csv('data/glucose_insulin.csv', index_col='time'); Explanation: Data We have data from Pacini and Bergman (1986), "MINMOD: a computer program to calculate insulin sensitivity and pancreatic responsivity from the frequently sampled intravenous glucose tolerance test", Computer Methods and Programs in Biomedicine, 23: 113-122.. End of explanation params = Params(I0 = 360, k = 0.25, gamma = 0.004, G_T = 80) # Solution def make_system(params, data): # params might be a Params object or an array, # so we have to unpack it like this I0, k, gamma, G_T = params init = State(I=I0) t_0 = get_first_label(data) t_end = get_last_label(data) G=interpolate(data.glucose) system = System(I0=I0, k=k, gamma=gamma, G_T=G_T, G=G, init=init, t_0=t_0, t_end=t_end, dt=1) return system # Solution system = make_system(params, data) Explanation: The insulin minimal model In addition to the glucose minimal mode, Pacini and Bergman present an insulin minimal model, in which the concentration of insulin, $I$, is governed by this differential equation: $ \frac{dI}{dt} = -k I(t) + \gamma (G(t) - G_T) t $ Exercise: Write a version of make_system that takes the parameters of this model, I0, k, gamma, and G_T as parameters, along with a DataFrame containing the measurements, and returns a System object suitable for use with run_simulation or run_odeint. Use it to make a System object with the following parameters: End of explanation # Solution def slope_func(state, t, system): [I] = state k, gamma = system.k, system.gamma G, G_T = system.G, system.G_T dIdt = -k * I + gamma * (G(t) - G_T) * t return [dIdt] # Solution slope_func(system.init, system.t_0, system) Explanation: Exercise: Write a slope function that takes state, t, system as parameters and returns the derivative of I with respect to time. Test your function with the initial condition $I(0)=360$. End of explanation # Solution results, details = run_ode_solver(system, slope_func) details # Solution results.tail() # Solution plot(results.I, 'g-', label='simulation') plot(data.insulin, 'go', label='insulin data') decorate(xlabel='Time (min)', ylabel='Concentration ($\mu$U/mL)') Explanation: Exercise: Run run_ode_solver with your System object and slope function, and plot the results, along with the measured insulin levels. End of explanation # Solution def error_func(params, data): Computes an array of errors to be minimized. params: sequence of parameters actual: array of values to be matched returns: array of errors print(params) # make a System with the given parameters system = make_system(params, data) # solve the ODE results, details = run_ode_solver(system, slope_func) # compute the difference between the model # results and actual data errors = (results.I - data.insulin).dropna() return TimeSeries(errors.loc[8:]) # Solution error_func(params, data) Explanation: Exercise: Write an error function that takes a sequence of parameters as an argument, along with the DataFrame containing the measurements. 
It should make a System object with the given parameters, run it, and compute the difference between the results of the simulation and the measured values. Test your error function by calling it with the parameters from the previous exercise. Hint: As we did in a previous exercise, you might want to drop the errors for times prior to t=8. End of explanation # Solution best_params, details = leastsq(error_func, params, data) print(details.mesg) # Solution system = make_system(best_params, data) # Solution results, details = run_ode_solver(system, slope_func, t_eval=data.index) details # Solution plot(results.I, 'g-', label='simulation') plot(data.insulin, 'go', label='insulin data') decorate(xlabel='Time (min)', ylabel='Concentration ($\mu$U/mL)') Explanation: Exercise: Use leastsq to find the parameters that best fit the data. Make a System object with those parameters, run it, and plot the results along with the measurements. End of explanation # Solution I0, k, gamma, G_T = best_params # Solution I_max = data.insulin.max() Ib = data.insulin[0] I_max, Ib # Solution # The value of G0 is the best estimate from the glucose model G0 = 289 Gb = data.glucose[0] G0, Gb # Solution phi_1 = (I_max - Ib) / k / (G0 - Gb) phi_1 # Solution phi_2 = gamma * 1e4 phi_2 Explanation: Exercise: Using the best parameters, estimate the sensitivity to glucose of the first and second phase pancreatic responsivity: $ \phi_1 = \frac{I_{max} - I_b}{k (G_0 - G_b)} $ $ \phi_2 = \gamma \times 10^4 $ For $G_0$, use the best estimate from the glucose model, 290. For $G_b$ and $I_b$, use the inital measurements from the data. End of explanation
5,212
Given the following text description, write Python code to implement the functionality described below step by step Description: Linear Regression Algorithms using Apache SystemML This notebook shows Step2: Import SystemML API Step3: Import numpy, sklearn, and define some helper functions Step5: Example 1 Step6: Examine execution plans, and increase data size to obverve changed execution plans Step7: Load diabetes dataset from scikit-learn Step9: Example 2 Step11: Algorithm 2 Step13: Algorithm 3 Step14: Example 3 Step15: Example 4
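As a quick companion to the direct-solve and gradient-descent formulations described above, the sketch below reproduces the core algebra in plain NumPy on synthetic data. It is an illustration of the math only, not part of the SystemML/DML scripts that follow.

# Direct solve of the normal equations, plus one batch gradient-descent step.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_w = np.array([1.5, -2.0, 0.7])
y = X @ true_w + rng.normal(scale=0.1, size=200)

# w = solve(X'X, X'y)
w_direct = np.linalg.solve(X.T @ X, X.T @ y)

# one gradient step with the exact line-search step size alpha = -(r'r)/(r'X'X r)
w = np.zeros(3)
dw = X.T @ (X @ w) - X.T @ y                   # gradient of 0.5*||Xw - y||^2
alpha = -(dw @ dw) / (dw @ (X.T @ (X @ dw)))
w = w + alpha * dw

print(w_direct, w)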
Python Code: !pip show systemml Explanation: Linear Regression Algorithms using Apache SystemML This notebook shows: - Install SystemML Python package and jar file - pip - SystemML 'Hello World' - Example 1: Matrix Multiplication - SystemML script to generate a random matrix, perform matrix multiplication, and compute the sum of the output - Examine execution plans, and increase data size to obverve changed execution plans - Load diabetes dataset from scikit-learn - Example 2: Implement three different algorithms to train linear regression model - Algorithm 1: Linear Regression - Direct Solve (no regularization) - Algorithm 2: Linear Regression - Batch Gradient Descent (no regularization) - Algorithm 3: Linear Regression - Conjugate Gradient (no regularization) - Example 3: Invoke existing SystemML algorithm script LinearRegDS.dml using MLContext API - Example 4: Invoke existing SystemML algorithm using scikit-learn/SparkML pipeline like API - Uninstall/Clean up SystemML Python package and jar file This notebook is supported with SystemML 0.14.0 and above. End of explanation from systemml import MLContext, dml, dmlFromResource ml = MLContext(sc) print ("Spark Version:" + sc.version) print ("SystemML Version:" + ml.version()) print ("SystemML Built-Time:"+ ml.buildTime()) ml.execute(dml(s = 'Hello World!').output("s")).get("s") Explanation: Import SystemML API End of explanation import sys, os, glob, subprocess import matplotlib.pyplot as plt import numpy as np from sklearn import datasets plt.switch_backend('agg') def printLastLogLines(n): fname = max(glob.iglob(os.sep.join([os.environ["HOME"],'/logs/notebook/kernel-pyspark-*.log'])), key=os.path.getctime) print(subprocess.check_output(['tail', '-' + str(n), fname])) Explanation: Import numpy, sklearn, and define some helper functions End of explanation script = X = rand(rows=$nr, cols=1000, sparsity=0.5) A = t(X) %*% X s = sum(A) prog = dml(script).input('$nr', 1e5).output('s') s = ml.execute(prog).get('s') print (s) Explanation: Example 1: Matrix Multiplication SystemML script to generate a random matrix, perform matrix multiplication, and compute the sum of the output End of explanation ml = MLContext(sc) ml = ml.setStatistics(True) # re-execute ML program # printLastLogLines(22) prog = dml(script).input('$nr', 1e6).output('s') out = ml.execute(prog).get('s') print (out) ml = MLContext(sc) ml = ml.setStatistics(False) Explanation: Examine execution plans, and increase data size to obverve changed execution plans End of explanation %matplotlib inline diabetes = datasets.load_diabetes() diabetes_X = diabetes.data[:, np.newaxis, 2] diabetes_X_train = diabetes_X[:-20] diabetes_X_test = diabetes_X[-20:] diabetes_y_train = diabetes.target[:-20].reshape(-1,1) diabetes_y_test = diabetes.target[-20:].reshape(-1,1) plt.scatter(diabetes_X_train, diabetes_y_train, color='black') plt.scatter(diabetes_X_test, diabetes_y_test, color='red') diabetes.data.shape Explanation: Load diabetes dataset from scikit-learn End of explanation script = # add constant feature to X to model intercept X = cbind(X, matrix(1, rows=nrow(X), cols=1)) A = t(X) %*% X b = t(X) %*% y w = solve(A, b) bias = as.scalar(w[nrow(w),1]) w = w[1:nrow(w)-1,] prog = dml(script).input(X=diabetes_X_train, y=diabetes_y_train).output('w', 'bias') w, bias = ml.execute(prog).get('w','bias') w = w.toNumPy() plt.scatter(diabetes_X_train, diabetes_y_train, color='black') plt.scatter(diabetes_X_test, diabetes_y_test, color='red') plt.plot(diabetes_X_test, (w*diabetes_X_test)+bias, color='blue', 
linestyle ='dotted') Explanation: Example 2: Implement three different algorithms to train linear regression model Algorithm 1: Linear Regression - Direct Solve (no regularization) Least squares formulation w* = argminw ||Xw-y||2 = argminw (y - Xw)'(y - Xw) = argminw (w'(X'X)w - w'(X'y))/2 Setting the gradient dw = (X'X)w - (X'y) to 0, w = (X'X)-1(X' y) = solve(X'X, X'y) End of explanation script = # add constant feature to X to model intercepts X = cbind(X, matrix(1, rows=nrow(X), cols=1)) max_iter = 100 w = matrix(0, rows=ncol(X), cols=1) for(i in 1:max_iter){ XtX = t(X) %*% X dw = XtX %*%w - t(X) %*% y alpha = -(t(dw) %*% dw) / (t(dw) %*% XtX %*% dw) w = w + dw*alpha } bias = as.scalar(w[nrow(w),1]) w = w[1:nrow(w)-1,] prog = dml(script).input(X=diabetes_X_train, y=diabetes_y_train).output('w').output('bias') w, bias = ml.execute(prog).get('w', 'bias') w = w.toNumPy() plt.scatter(diabetes_X_train, diabetes_y_train, color='black') plt.scatter(diabetes_X_test, diabetes_y_test, color='red') plt.plot(diabetes_X_test, (w*diabetes_X_test)+bias, color='red', linestyle ='dashed') Explanation: Algorithm 2: Linear Regression - Batch Gradient Descent (no regularization) Algorithm Step 1: Start with an initial point while(not converged) { Step 2: Compute gradient dw. Step 3: Compute stepsize alpha. Step 4: Update: wnew = wold + alpha*dw } Gradient formula dw = r = (X'X)w - (X'y) Step size formula Find number alpha to minimize f(w + alpha*r) alpha = -(r'r)/(r'X'Xr) End of explanation script = # add constant feature to X to model intercepts X = cbind(X, matrix(1, rows=nrow(X), cols=1)) m = ncol(X); i = 1; max_iter = 20; w = matrix (0, rows = m, cols = 1); # initialize weights to 0 dw = - t(X) %*% y; p = - dw; # dw = (X'X)w - (X'y) norm_r2 = sum (dw ^ 2); for(i in 1:max_iter) { q = t(X) %*% (X %*% p) alpha = norm_r2 / sum (p * q); # Minimizes f(w - alpha*r) w = w + alpha * p; # update weights dw = dw + alpha * q; old_norm_r2 = norm_r2; norm_r2 = sum (dw ^ 2); p = -dw + (norm_r2 / old_norm_r2) * p; # next direction - conjugacy to previous direction i = i + 1; } bias = as.scalar(w[nrow(w),1]) w = w[1:nrow(w)-1,] prog = dml(script).input(X=diabetes_X_train, y=diabetes_y_train).output('w').output('bias') w, bias = ml.execute(prog).get('w','bias') w = w.toNumPy() plt.scatter(diabetes_X_train, diabetes_y_train, color='black') plt.scatter(diabetes_X_test, diabetes_y_test, color='red') plt.plot(diabetes_X_test, (w*diabetes_X_test)+bias, color='red', linestyle ='dashed') Explanation: Algorithm 3: Linear Regression - Conjugate Gradient (no regularization) Problem with gradient descent: Takes very similar directions many times Solution: Enforce conjugacy Step 1: Start with an initial point while(not converged) { Step 2: Compute gradient dw. Step 3: Compute stepsize alpha. Step 4: Compute next direction p by enforcing conjugacy with previous direction. 
Step 4: Update: w_new = w_old + alpha*p } End of explanation import os from subprocess import call dirName = os.path.dirname(os.path.realpath("~")) + "/scripts" call(["mkdir", "-p", dirName]) call(["wget", "-N", "-q", "-P", dirName, "https://raw.githubusercontent.com/apache/incubator-systemml/master/scripts/algorithms/LinearRegDS.dml"]) scriptName = dirName + "/LinearRegDS.dml" dml_script = dmlFromResource(scriptName) prog = dml_script.input(X=diabetes_X_train, y=diabetes_y_train).input('$icpt',1.0).output('beta_out') w = ml.execute(prog).get('beta_out') w = w.toNumPy() bias=w[1] print (bias) plt.scatter(diabetes_X_train, diabetes_y_train, color='black') plt.scatter(diabetes_X_test, diabetes_y_test, color='red') plt.plot(diabetes_X_test, (w[0]*diabetes_X_test)+bias, color='red', linestyle ='dashed') Explanation: Example 3: Invoke existing SystemML algorithm script LinearRegDS.dml using MLContext API End of explanation from pyspark.sql import SQLContext from systemml.mllearn import LinearRegression sqlCtx = SQLContext(sc) regr = LinearRegression(sqlCtx) # Train the model using the training sets regr.fit(diabetes_X_train, diabetes_y_train) predictions = regr.predict(diabetes_X_test) # Use the trained model to perform prediction %matplotlib inline plt.scatter(diabetes_X_train, diabetes_y_train, color='black') plt.scatter(diabetes_X_test, diabetes_y_test, color='red') plt.plot(diabetes_X_test, predictions, color='black') Explanation: Example 4: Invoke existing SystemML algorithm using scikit-learn/SparkML pipeline like API mllearn API allows a Python programmer to invoke SystemML's algorithms using scikit-learn like API as well as Spark's MLPipeline API. End of explanation
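As an optional cross-check that is not part of the SystemML material, fitting the same single-feature diabetes data with scikit-learn should give essentially the same coefficients as the linear-regression examples in this notebook; the train split below mirrors the one used here, and the variable names are mine.

# scikit-learn cross-check on the diabetes feature used throughout this notebook.
from sklearn import datasets, linear_model
import numpy as np

diabetes = datasets.load_diabetes()
X = diabetes.data[:, np.newaxis, 2]
X_train, y_train = X[:-20], diabetes.target[:-20]

reg = linear_model.LinearRegression()
reg.fit(X_train, y_train)
print(reg.coef_, reg.intercept_)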
5,213
Given the following text description, write Python code to implement the functionality described below step by step Description: tmp-API-check The clustergrammer_widget class is now being loaded into the Network class. The class and widget instance are saved in th Network instance, net. This allows us to load data, cluster, and finally produce a new widget instance using the widget method. The instance of the widget is saved in net and can be used to grab the data from the clustergram as a Pandas DataFrame using the widget_df method. The exported DataFrame will reflect any filtering or imported categories that were added on the front end. In these examples, we will filter the matrix using the brush crop tool, export the filtered matrix as a DataFrame, and finally visualize this as a new clustergram widget. Step1: Make widget using new API Step2: Above, we have filtered the matrix to a region of interest using the brush cropping tool. Below we will get export this region of interest, defined on the front end, to a DataFrame, df_genes. This demonstrates the two-way communication capabilities of widgets. Step3: Above, we made a new widget visualizing this region of interest. Generate random DataFrame Here we will genrate a DataFrame with random data and visualize it using the widget. Step4: Above, we selected a region of interest using the front-end brush crop tool and export to DataFrame, df_random. Below we will visualize it using a new widget.
Python Code: import numpy as np import pandas as pd from clustergrammer_widget import * net = Network(clustergrammer_widget) Explanation: tmp-API-check The clustergrammer_widget class is now being loaded into the Network class. The class and widget instance are saved in th Network instance, net. This allows us to load data, cluster, and finally produce a new widget instance using the widget method. The instance of the widget is saved in net and can be used to grab the data from the clustergram as a Pandas DataFrame using the widget_df method. The exported DataFrame will reflect any filtering or imported categories that were added on the front end. In these examples, we will filter the matrix using the brush crop tool, export the filtered matrix as a DataFrame, and finally visualize this as a new clustergram widget. End of explanation net.load_file('rc_two_cats.txt') net.cluster() net.widget() Explanation: Make widget using new API End of explanation df_genes = net.widget_df() df_genes.shape net.load_df(df_genes) net.cluster() net.widget() Explanation: Above, we have filtered the matrix to a region of interest using the brush cropping tool. Below we will get export this region of interest, defined on the front end, to a DataFrame, df_genes. This demonstrates the two-way communication capabilities of widgets. End of explanation # generate random matrix num_rows = 500 num_cols = 10 np.random.seed(seed=100) mat = np.random.rand(num_rows, num_cols) # make row and col labels rows = range(num_rows) cols = range(num_cols) rows = [str(i) for i in rows] cols = [str(i) for i in cols] # make dataframe df = pd.DataFrame(data=mat, columns=cols, index=rows) net.load_df(df) net.cluster() net.widget() Explanation: Above, we made a new widget visualizing this region of interest. Generate random DataFrame Here we will genrate a DataFrame with random data and visualize it using the widget. End of explanation df_random = net.widget_df() df_random.shape net.load_df(df_random) net.cluster() net.widget() Explanation: Above, we selected a region of interest using the front-end brush crop tool and export to DataFrame, df_random. Below we will visualize it using a new widget. End of explanation
5,214
Given the following text description, write Python code to implement the functionality described below step by step Description: Step1: For this problem set, we'll be using the Jupyter notebook Step4: Your function should print [1, 4, 9, 16, 25, 36, 49, 64, 81, 100] for $n=10$. Check that it does Step6: Part B (1 point) Using your squares function, write a function that computes the sum of the squares of the numbers from 1 to $n$. Your function should call the squares function -- it should NOT reimplement its functionality. Step9: The sum of squares from 1 to 10 should be 385. Verify that this is the answer you get Step11: Part C (1 point) Using LaTeX math notation, write out the equation that is implemented by your sum_of_squares function. $\sum_{i=1}^n i^2$ Part D (2 points) Find a usecase for your sum_of_squares function and implement that usecase in the cell below.
Python Code: def squares(n): Compute the squares of numbers from 1 to n, such that the ith element of the returned list equals i^2. ### BEGIN SOLUTION if n < 1: raise ValueError("n must be greater than or equal to 1") return [i ** 2 for i in range(1, n + 1)] ### END SOLUTION Explanation: For this problem set, we'll be using the Jupyter notebook: Part A (2 points) Write a function that returns a list of numbers, such that $x_i=i^2$, for $1\leq i \leq n$. Make sure it handles the case where $n<1$ by raising a ValueError. End of explanation squares(10) Check that squares returns the correct output for several inputs from nose.tools import assert_equal assert_equal(squares(1), [1]) assert_equal(squares(2), [1, 4]) assert_equal(squares(10), [1, 4, 9, 16, 25, 36, 49, 64, 81, 100]) assert_equal(squares(11), [1, 4, 9, 16, 25, 36, 49, 64, 81, 100, 121]) Check that squares raises an error for invalid inputs from nose.tools import assert_raises assert_raises(ValueError, squares, 0) assert_raises(ValueError, squares, -4) Explanation: Your function should print [1, 4, 9, 16, 25, 36, 49, 64, 81, 100] for $n=10$. Check that it does: End of explanation def sum_of_squares(n): Compute the sum of the squares of numbers from 1 to n. ### BEGIN SOLUTION return sum(squares(n)) ### END SOLUTION Explanation: Part B (1 point) Using your squares function, write a function that computes the sum of the squares of the numbers from 1 to $n$. Your function should call the squares function -- it should NOT reimplement its functionality. End of explanation sum_of_squares(10) Check that sum_of_squares returns the correct answer for various inputs. assert_equal(sum_of_squares(1), 1) assert_equal(sum_of_squares(2), 5) assert_equal(sum_of_squares(10), 385) assert_equal(sum_of_squares(11), 506) Check that sum_of_squares relies on squares. orig_squares = squares del squares try: assert_raises(NameError, sum_of_squares, 1) except AssertionError: raise AssertionError("sum_of_squares does not use squares") finally: squares = orig_squares Explanation: The sum of squares from 1 to 10 should be 385. Verify that this is the answer you get: End of explanation def pyramidal_number(n): Returns the n^th pyramidal number return sum_of_squares(n) Explanation: Part C (1 point) Using LaTeX math notation, write out the equation that is implemented by your sum_of_squares function. $\sum_{i=1}^n i^2$ Part D (2 points) Find a usecase for your sum_of_squares function and implement that usecase in the cell below. End of explanation
5,215
Given the following text description, write Python code to implement the functionality described below step by step Description: CNN-Project-Exercise We'll be using the CIFAR-10 dataset, which is very famous dataset for image recognition! The CIFAR-10 dataset consists of 60000 32x32 colour images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images. The dataset is divided into five training batches and one test batch, each with 10000 images. The test batch contains exactly 1000 randomly-selected images from each class. The training batches contain the remaining images in random order, but some training batches may contain more images from one class than another. Between them, the training batches contain exactly 5000 images from each class. Follow the Instructions in Bold, if you get stuck somewhere, view the solutions video! Most of the challenge with this project is actually dealing with the data and its dimensions, not from setting up the CNN itself! Step 0 Step1: The archive contains the files data_batch_1, data_batch_2, ..., data_batch_5, as well as test_batch. Each of these files is a Python "pickled" object produced with cPickle. Load the Data. Use the Code Below to load the data Step2: Why the 'b's in front of the string? Bytes literals are always prefixed with 'b' or 'B'; they produce an instance of the bytes type instead of the str type. They may only contain ASCII characters; bytes with a numeric value of 128 or greater must be expressed with escapes. https Step3: Loaded in this way, each of the batch files contains a dictionary with the following elements Step4: Helper Functions for Dealing With Data. Use the provided code below to help with dealing with grabbing the next batch once you've gotten ready to create the Graph Session. Can you break down how it works? Step5: How to use the above code Step6: Creating the Model Import tensorflow Step7: Create 2 placeholders, x and y_true. Their shapes should be Step8: Create one more placeholder called hold_prob. No need for shape here. This placeholder will just hold a single probability for the dropout. Step9: Helper Functions Grab the helper functions from MNIST with CNN (or recreate them here yourself for a hard challenge!). You'll need Step10: Create the Layers Create a convolutional layer and a pooling layer as we did for MNIST. Its up to you what the 2d size of the convolution should be, but the last two digits need to be 3 and 32 because of the 3 color channels and 32 pixels. So for example you could use Step11: Create the next convolutional and pooling layers. The last two dimensions of the convo_2 layer should be 32,64 Step12: Now create a flattened layer by reshaping the pooling layer into [-1,8 * 8 * 64] or [-1,4096] Step13: Create a new full layer using the normal_full_layer function and passing in your flattend convolutional 2 layer with size=1024. (You could also choose to reduce this to something like 512) Step14: Now create the dropout layer with tf.nn.dropout, remember to pass in your hold_prob placeholder. Step15: Finally set the output to y_pred by passing in the dropout layer into the normal_full_layer function. The size should be 10 because of the 10 possible labels Step16: Loss Function Create a cross_entropy loss function Step17: Optimizer Create the optimizer using an Adam Optimizer. Step18: Create a variable to intialize all the global tf variables. Step19: Graph Session Perform the training and test print outs in a Tf session and run your model!
Python Code: # Put file path as a string here CIFAR_DIR = './data./cifar-10-batches-py/' Explanation: CNN-Project-Exercise We'll be using the CIFAR-10 dataset, which is very famous dataset for image recognition! The CIFAR-10 dataset consists of 60000 32x32 colour images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images. The dataset is divided into five training batches and one test batch, each with 10000 images. The test batch contains exactly 1000 randomly-selected images from each class. The training batches contain the remaining images in random order, but some training batches may contain more images from one class than another. Between them, the training batches contain exactly 5000 images from each class. Follow the Instructions in Bold, if you get stuck somewhere, view the solutions video! Most of the challenge with this project is actually dealing with the data and its dimensions, not from setting up the CNN itself! Step 0: Get the Data Note: If you have trouble with this just watch the solutions video. This doesn't really have anything to do with the exercise, its more about setting up your data. Please make sure to watch the solutions video before posting any QA questions. Download the data for CIFAR from here: https://www.cs.toronto.edu/~kriz/cifar.html Specifically the CIFAR-10 python version link: https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz Remember the directory you save the file in! End of explanation def unpickle(file): import pickle with open(file, 'rb') as fo: cifar_dict = pickle.load(fo, encoding='bytes') return cifar_dict dirs = ['batches.meta','data_batch_1','data_batch_2','data_batch_3','data_batch_4','data_batch_5','test_batch'] all_data = [0, 1, 2, 3, 4, 5, 6] for i,direc in zip(all_data,dirs): all_data[i] = unpickle(CIFAR_DIR + direc) batch_meta = all_data[0] data_batch1 = all_data[1] data_batch2 = all_data[2] data_batch3 = all_data[3] data_batch4 = all_data[4] data_batch5 = all_data[5] test_batch = all_data[6] batch_meta Explanation: The archive contains the files data_batch_1, data_batch_2, ..., data_batch_5, as well as test_batch. Each of these files is a Python "pickled" object produced with cPickle. Load the Data. Use the Code Below to load the data: End of explanation data_batch1.keys() Explanation: Why the 'b's in front of the string? Bytes literals are always prefixed with 'b' or 'B'; they produce an instance of the bytes type instead of the str type. They may only contain ASCII characters; bytes with a numeric value of 128 or greater must be expressed with escapes. https://stackoverflow.com/questions/6269765/what-does-the-b-character-do-in-front-of-a-string-literal End of explanation import matplotlib.pyplot as plt %matplotlib inline import numpy as np X = data_batch1[b'data'] # Put the code here that transforms the X array! X = X.reshape(10000, 3, 32, 32).transpose(0, 2, 3, 1).astype("uint8") plt.imshow(X[0]) plt.imshow(X[1]) plt.imshow(X[4]) Explanation: Loaded in this way, each of the batch files contains a dictionary with the following elements: * data -- a 10000x3072 numpy array of uint8s. Each row of the array stores a 32x32 colour image. The first 1024 entries contain the red channel values, the next 1024 the green, and the final 1024 the blue. The image is stored in row-major order, so that the first 32 entries of the array are the red channel values of the first row of the image. * labels -- a list of 10000 numbers in the range 0-9. 
The number at index i indicates the label of the ith image in the array data. The dataset contains another file, called batches.meta. It too contains a Python dictionary object. It has the following entries: label_names -- a 10-element list which gives meaningful names to the numeric labels in the labels array described above. For example, label_names[0] == "airplane", label_names[1] == "automobile", etc. Display a single image using matplotlib. Grab a single image from data_batch1 and display it with plt.imshow(). You'll need to reshape and transpose the numpy array inside the X = data_batch[b'data'] dictionary entry. It should end up looking like this: # Array of all images reshaped and formatted for viewing X = X.reshape(10000, 3, 32, 32).transpose(0,2,3,1).astype("uint8") End of explanation def one_hot_encode(vec, vals = 10): ''' For use to one-hot encode the 10- possible labels ''' n = len(vec) out = np.zeros((n, vals)) out[range(n), vec] = 1 return out class CifarHelper(): def __init__(self): self.i = 0 # Grabs a list of all the data batches for training self.all_train_batches = [data_batch1, data_batch2, data_batch3, data_batch4, data_batch5] # Grabs a list of all the test batches (really just one batch) self.test_batch = [test_batch] # Intialize some empty variables for later on self.training_images = None self.training_labels = None self.test_images = None self.test_labels = None def set_up_images(self): print("Setting Up Training Images and Labels") # Vertically stacks the training images self.training_images = np.vstack([d[b"data"] for d in self.all_train_batches]) train_len = len(self.training_images) # Reshapes and normalizes training images self.training_images = self.training_images.reshape(train_len, 3, 32, 32).transpose(0, 2, 3, 1) / 255 # One hot Encodes the training labels (e.g. [0, 0, 0, 1, 0, 0, 0, 0, 0, 0]) self.training_labels = one_hot_encode(np.hstack([d[b"labels"] for d in self.all_train_batches]), 10) print("Setting Up Test Images and Labels") # Vertically stacks the test images self.test_images = np.vstack([d[b"data"] for d in self.test_batch]) test_len = len(self.test_images) # Reshapes and normalizes test images self.test_images = self.test_images.reshape(test_len, 3, 32, 32).transpose(0, 2, 3, 1)/255 # One hot Encodes the test labels (e.g. [0, 0, 0, 1, 0, 0, 0, 0, 0, 0]) self.test_labels = one_hot_encode(np.hstack([d[b"labels"] for d in self.test_batch]), 10) def next_batch(self, batch_size): # Note that the 100 dimension in the reshape call is set by an assumed batch size of 100 x = self.training_images[self.i: self.i + batch_size].reshape(100, 32, 32, 3) y = self.training_labels[self.i: self.i + batch_size] self.i = (self.i + batch_size) % len(self.training_images) return x, y Explanation: Helper Functions for Dealing With Data. Use the provided code below to help with dealing with grabbing the next batch once you've gotten ready to create the Graph Session. Can you break down how it works? End of explanation # Before Your tf.Session run these two lines ch = CifarHelper() ch.set_up_images() # During your session to grab the next batch use this line # (Just like we did for mnist.train.next_batch) # batch = ch.next_batch(100) Explanation: How to use the above code: End of explanation import tensorflow as tf Explanation: Creating the Model Import tensorflow End of explanation x = tf.placeholder(shape = [None, 32, 32, 3], dtype = tf.float32) y = tf.placeholder(shape = [None, 10], dtype = tf.float32) Explanation: Create 2 placeholders, x and y_true. 
Their shapes should be: x shape = [None,32,32,3] y_true shape = [None,10] End of explanation hold_prob = tf.placeholder(dtype = tf.float32) Explanation: Create one more placeholder called hold_prob. No need for shape here. This placeholder will just hold a single probability for the dropout. End of explanation def initialize_weights(shape): init_random_distribution = tf.truncated_normal(shape, stddev = 0.1) return tf.Variable(init_random_distribution) def initialize_bias(shape): init_bias_vals = tf.constant(0.1, shape=shape) return tf.Variable(init_bias_vals) def conv2d(x, W): return tf.nn.conv2d(x, W, strides = [1, 1, 1, 1], padding = 'SAME') def max_pool_2_by_2(x): return tf.nn.max_pool(x, ksize = [1, 2, 2, 1], strides = [1, 2, 2, 1], padding = 'SAME') def convolutional_layer(input_x, shape): W = initialize_weights(shape) b = initialize_bias([shape[3]]) return tf.nn.relu(conv2d(input_x, W) + b) def fully_connected_layer(input_layer, size): input_size = int(input_layer.get_shape()[1]) W = initialize_weights([input_size, size]) b = initialize_bias([size]) return tf.matmul(input_layer, W) + b Explanation: Helper Functions Grab the helper functions from MNIST with CNN (or recreate them here yourself for a hard challenge!). You'll need: init_weights init_bias conv2d max_pool_2by2 convolutional_layer normal_full_layer End of explanation convo_1 = convolutional_layer(x, shape = [4, 4, 3, 32]) pool_1 = max_pool_2_by_2(convo_1) Explanation: Create the Layers Create a convolutional layer and a pooling layer as we did for MNIST. Its up to you what the 2d size of the convolution should be, but the last two digits need to be 3 and 32 because of the 3 color channels and 32 pixels. So for example you could use: convo_1 = convolutional_layer(x,shape=[4,4,3,32]) End of explanation convo_2 = convolutional_layer(pool_1, shape = [4, 4, 32, 64]) pool_2 = max_pool_2_by_2(convo_2) Explanation: Create the next convolutional and pooling layers. The last two dimensions of the convo_2 layer should be 32,64 End of explanation 8*8*64 flattened = tf.reshape(pool_2, shape = [-1, 8 * 8 * 64]) Explanation: Now create a flattened layer by reshaping the pooling layer into [-1,8 * 8 * 64] or [-1,4096] End of explanation fully_connected_layer_1 = tf.nn.relu(fully_connected_layer(flattened,1024)) Explanation: Create a new full layer using the normal_full_layer function and passing in your flattend convolutional 2 layer with size=1024. (You could also choose to reduce this to something like 512) End of explanation fully_connected_layer_after_dropout = tf.nn.dropout(x = fully_connected_layer_1, keep_prob = hold_prob) Explanation: Now create the dropout layer with tf.nn.dropout, remember to pass in your hold_prob placeholder. End of explanation y_pred = fully_connected_layer(fully_connected_layer_after_dropout, size = 10) Explanation: Finally set the output to y_pred by passing in the dropout layer into the normal_full_layer function. The size should be 10 because of the 10 possible labels End of explanation cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(labels = y, logits = y_pred)) Explanation: Loss Function Create a cross_entropy loss function End of explanation optimizer = tf.train.AdamOptimizer(learning_rate=0.001) train = optimizer.minimize(cross_entropy) Explanation: Optimizer Create the optimizer using an Adam Optimizer. End of explanation init = tf.global_variables_initializer() Explanation: Create a variable to intialize all the global tf variables. 
End of explanation with tf.Session() as sess: sess.run(tf.global_variables_initializer()) for i in range(5000): batch = ch.next_batch(100) sess.run(train, feed_dict={x: batch[0], y: batch[1], hold_prob: 0.5}) # PRINT OUT A MESSAGE EVERY 100 STEPS if i%100 == 0: print('Currently on step {}'.format(i)) print('Accuracy is:') # Test the Train Model matches = tf.equal(tf.argmax(y_pred,1),tf.argmax(y,1)) acc = tf.reduce_mean(tf.cast(matches,tf.float32)) print(sess.run(acc,feed_dict={x: ch.test_images, y: ch.test_labels, hold_prob: 1.0})) print('\n') Explanation: Graph Session Perform the training and test print outs in a Tf session and run your model! End of explanation
5,216
Given the following text description, write Python code to implement the functionality described below step by step Description: Submission Notebook Chris Madeley ToC References Statistical Test Linear Regression (Questions) Visualisation Conclusion Reflection Change Log <b>Revision 1 Step1: 0. References In general, only standard package documentation has been used throughout. A couple of one-liners adapted from stackoverflow answers noted in code where used. 1. Statistical Test 1.1 Which statistical test did you use to analyze the NYC subway data? Did you use a one-tail or a two-tail P value? What is the null hypothesis? What is your p-critical value? The objective of this project, as described in the project details, is to figure out if more people ride the subway when it is raining versus when it is not raining. To evaluate this question through statistical testing, a hypothesis test is used. To perform such a test two opposing hypotheses are constructed Step2: 1.3 What results did you get from this statistical test? These should include the following numerical values Step3: 1.4 What is the significance and interpretation of these results? Given the p-value < 0.05, we can reject the null hypothesis that the average ridership is not greater when it is raining, hence the we can accept the alternative hypothesis the average ridership is greater when it rains. 2. Linear Regression Step4: 2.1 What approach did you use to compute the coefficients theta and produce prediction for ENTRIESn_hourly in your regression model Step5: 2.3 Why did you select these features in your model? The first step was to qualitatively assess which parameters may be useful for the model. This begins with looking at a list of the data, and the type of data, which has been captured, illustrated as follows. Step6: Some parameters are going to be clearly important Step7: The final selection of variables was determined through trial and error of rational combinations of variables. The station popularity was captured in using the stall_num2 variable, since it appears to create a superior model compared with just using UNIT dummies, and because it allowed the creation of combinations. Combining the station with hour was useful, and is intuitive since stations in the CBD will have the greatest patronage and have greater entries in the evening peak hour. A similar logic applies to combining the station and whether it is a weekday. Various combinations of environmental variables were trialled in the model, but none appeared to improve the model accuracy and were subsequently dicarded. Since rain is the focus of this study it was retained, however it was combined with the time of day. The predictive strenght of the model was not really improved with the inclusion of a rain parameter, however combining it with hour appears to improve it's usefulness for providing insight, as will be discussed in section 4. 2.4 What are the parameters (also known as "coefficients" or "weights") of the non-dummy features in your linear regression model? Step8: Due to the use of several combinations, there are very few non-dummy features, with the coefficients illustrated below. Since stall_num2 is also used in several combinations, it's individual coefficient doesn't prove very useful. Step9: However when looking at all the combinations for stall_num2 provides greater insight. Here we can see that activity is greater on weekdays, and greatest in the 16 Step10: Even more interesting are the coefficient for the rain combinations. 
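Before the full analysis below, a minimal illustration of the Mann-Whitney U test on synthetic (not turnstile) data may help; the alternative keyword, available in newer SciPy releases, makes the one-sided direction explicit, and all distribution parameters here are arbitrary.

# Mann-Whitney U test on synthetic skewed samples (illustration only).
import numpy as np
import scipy.stats as sps

rng = np.random.RandomState(0)
rain = rng.lognormal(mean=7.0, sigma=1.0, size=500)
no_rain = rng.lognormal(mean=6.9, sigma=1.0, size=500)

U, p = sps.mannwhitneyu(rain, no_rain, alternative='greater')
print(U, p)   # reject H0 at the 0.05 level if p < 0.05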
These appear to indicate that patronage increases in the 08 Step11: 2.5 What is your model’s R2 (coefficients of determination) value? Step12: The final R-squared value of 0.74 is much greater than earlier models that used UNIT as a dummy variable, which had R-squared values around 0.55. 2.6 What does this R2 value mean for the goodness of fit for your regression model? Do you think this linear model to predict ridership is appropriate for this dataset, given this R2 value? To evaluate the goodness of fit the residuals of the model have been evaluated in two ways. First, a histogram of the residuals has been plotted below. The distribution of residuals is encouragingly symmetric. However efforts to fit a normal distribution found distributions which underestimated the frequency at the mode and tails. Fitting a fat-tailed distribution, such as the Cauchy distribution below, was far more successful. I'm not sure if there's a good reason why it's worked out this way (but would love to hear ideas as to why). Step13: Secondly, a scatterplot of the residuals against the expected values is plotted. As expected, the largest residuals are associated with cases where the traffic is largest. In general the model appears to underpredict the traffic at the busiest of units. Also clear on this plot is how individual stations form a 'streak' of points on the diagonal. This is because the model essentially makes a prediction for each station per hour per for weekdays and weekends. The natural variation of the actual result in this timeframe creates the run of points. Step14: Additionally, note that the condition number for the final model is relatively low, hence there don't appear to be any collinearity issues with this model. By comparison, when UNIT was included as a dummy variable instead, the correlation was weaker and the condition number was up around 220. Step15: In summary, it appears that this linear model has done a reasonable job of predicting ridership in this instance. Clearly some improvements are possible (like fixing the predictions of negative entries!), but given there will always be a degree of random variation, an R-squared value of 0.74 for a linear model seems quite reasonable. To be sure of the model suitability the data should be split into training/test sets. Additionally, more data from extra months could prove beneficial. 3. Visualisation 3.1 One visualization should contain two histograms Step16: Once both plots are normalised, the difference between subway entries when raining and not raining are almost identical. No useful differentiation can be made between the two datasets here. 3.2 One visualization can be more freeform. You should feel free to implement something that we discussed in class (e.g., scatter plots, line plots) or attempt to implement something more advanced if you'd like.
Python Code: # Imports # Numeric Packages from __future__ import division import numpy as np import pandas as pd import scipy.stats as sps # Plotting packages import matplotlib.pyplot as plt from matplotlib import ticker import seaborn as sns %matplotlib inline sns.set_style('whitegrid') sns.set_context('talk') # Other from datetime import datetime, timedelta import statsmodels.api as sm # Import turnstile data and convert datetime column to datetime python objects df = pd.read_csv('turnstile_weather_v2.csv') df['datetime'] = pd.to_datetime(df['datetime']) Explanation: Submission Notebook Chris Madeley ToC References Statistical Test Linear Regression (Questions) Visualisation Conclusion Reflection Change Log <b>Revision 1:</b> Corrections to questions 1.1, 1.4 based on the comments of the first review. <b>Revision 2:</b> Corrections to questions 1.1, 1.4, 4.1 based on the comments of the second review. Overview These answers to the assignment questions have been prepared in a Jupyter (formally IPython) notebook. This was chosen to allow clarity of working, enable reproducability, and as it should be suitable and useful for the target audience, and can be converted to html easily. In general, the code necessary for each question is included below each question, although some blocks of necessary code fall inbetween questions. End of explanation W, p = sps.shapiro(df.ENTRIESn_hourly.tolist()) print 'Probability that data is the realisation of a gaussian random variable: {:.3f}'.format(p) plt.figure(figsize=[8,5]) sns.distplot(df.ENTRIESn_hourly.tolist(), bins=np.arange(0,10001,500), kde=False) plt.xlim(0,10000) plt.yticks(np.arange(0,16001,4000)) plt.title('Histogram of Entry Count') plt.show() Explanation: 0. References In general, only standard package documentation has been used throughout. A couple of one-liners adapted from stackoverflow answers noted in code where used. 1. Statistical Test 1.1 Which statistical test did you use to analyze the NYC subway data? Did you use a one-tail or a two-tail P value? What is the null hypothesis? What is your p-critical value? The objective of this project, as described in the project details, is to figure out if more people ride the subway when it is raining versus when it is not raining. To evaluate this question through statistical testing, a hypothesis test is used. To perform such a test two opposing hypotheses are constructed: the null hypothesis and the alternative hypothesis. A hypothesis test considers one sample of data to determine if there is sufficient evidence to reject the null hypothesis for the entire population from which it came; that the difference in the two underlying populations are different with statistical significance. The test is performed to a 'significance level' which determines the probability of Type 1 error occuring, where Type 1 error is the incorrect rejection of the null hypothesis; a false positive. The null hypothesis is constructed to represent the status quo, where the treatment on a population has no effect on the population, chosen this way because the test controls only for Type 1 error. In the context of this assignment, the null hypothesis for this test is on average, no more people ride the subway compared to when it is not; i.e. 'ridership' is the population and 'rain' is the treatment. i.e. $H_0: \alpha_{raining} \leq \alpha_{not_raining}$ where $\alpha$ represents the average ridership of the subway. Consequently, the alternative hypothesis is given by: $H_1: \alpha_{raining} > \alpha_{not_raining}$. 
Due to the way the hypothesis is framed, that we are only questioning whether ridership increases during rain, a single-tailed test is required. This is because we are only looking for a test statistic that shows an increase in ridership in order to reject the null hypothesis. A significance value of 0.05 has been chosen to reject the null hypothesis for this test, due to it being the most commonly used value for testing. 1.2 Why is this statistical test applicable to the dataset? In particular, consider the assumptions that the test is making about the distribution of ridership in the two samples. The Mann-Whitney U test was chosen for the hypothesis testing due as it is agnostic to the underlying distribution. The entry values are definitely not normally distributed, illustrated below both graphically and using the Shapiro-Wilk test. End of explanation raindata = np.array(df[df.rain==1].ENTRIESn_hourly.tolist()) noraindata = np.array(df[df.rain==0].ENTRIESn_hourly.tolist()) U, p = sps.mannwhitneyu(raindata, noraindata) print 'Results' print '-------' print 'p-value: {:.2f}'.format(p) # Note that p value calculated by scipy is single-tailed print 'Mean with rain: {:.0f}'.format(raindata.mean()) print 'Mean without rain: {:.0f}'.format(noraindata.mean()) Explanation: 1.3 What results did you get from this statistical test? These should include the following numerical values: p-values, as well as the means for each of the two samples under test. End of explanation # Because the hour '0' is actually the entries from 20:00 to 24:00, it makes more sense to label it 24 when plotting data df.datetime -= timedelta(seconds=1) df['day']= df.datetime.apply(lambda x: x.day) df['hour'] = df.datetime.apply(lambda x: x.hour+1) df['weekday'] = df.datetime.apply(lambda x: not bool(x.weekday()//5)) df['day_week'] = df.datetime.apply(lambda x: x.weekday()) # The dataset includes the Memorial Day Public Holiday, which makes more sense to be classify as a weekend. df.loc[df['day']==30,'weekday'] = False Explanation: 1.4 What is the significance and interpretation of these results? Given the p-value < 0.05, we can reject the null hypothesis that the average ridership is not greater when it is raining, hence the we can accept the alternative hypothesis the average ridership is greater when it rains. 2. Linear Regression End of explanation # Create a new column, stall_num2, representing the proportion of entries through a stall across the entire period. total_patrons = df.ENTRIESn_hourly.sum() # Dataframe with the units, and total passing through each unit across the time period total_by_stall = pd.DataFrame(df.groupby('UNIT').ENTRIESn_hourly.sum()) # Create new variable = proportion of total entries total_by_stall['stall_num2'] = total_by_stall.ENTRIESn_hourly/total_patrons # Normalise by mean and standard deviation... fixes orders of magnitude errors in the output total_stall_mean = total_by_stall.stall_num2.mean() total_stall_stddev = total_by_stall.stall_num2.std() total_by_stall.stall_num2 = ( (total_by_stall.stall_num2 - total_stall_mean) / total_stall_stddev ) # Map the new variable back on the original dataframe df['stall_num2'] = df.UNIT.apply(lambda x: total_by_stall.stall_num2[x]) Explanation: 2.1 What approach did you use to compute the coefficients theta and produce prediction for ENTRIESn_hourly in your regression model: Ordinary Least Squares (OLS) was used for the linear regression for this model. 2.2 What features (input variables) did you use in your model? 
Did you use any dummy variables as part of your features? The final fit used in the model includes multiple components, two of which include the custom input stall_num2, described later: ENTRIESn_hourly ~ 'ENTRIESn_hourly ~ rain:C(hour) + stall_num2*C(hour) + stall_num2*weekday' - stall_num2 - includes the effect off the stall (unit) number; - C(hour) - (dummy variable) included using dummy variables, since the the entries across hour vary in a highly nonlinear way; - weekday - true/false value for whether it is a weekday; - rain:C(hour) - rain is included as the focus of the study, however it has been combined with the time of day; - stall_num2 * C(hour) - (dummy variable) interaction between the stall number and time of day; and - stall_num2 * weekday - interaction between the stall number and whether it is a weekday. Additionally, an intercept was included in the model, statsmodels appears to automatically create N-1 dummies when this is included. The variable stall_num2 was created as a substitute to using the UNIT column as a dummy variable. It was clear early on that using UNIT has a large impact on the model accuracy, intuitive given the relative popularity of stalls will be important for predicting their entry count. However, with 240 stalls, a lot of dummy variables are created, and it makes interactions between UNIT and other variables impractical. Additionally, so many dummy variables throws away information relating to the similar response between units of similar popularity. stall_num2 was constructed by calculating the number of entries that passed through each stall as a proportion of total entries for the entire period of the data. These results were then normalised to have μ=0 and σ=1 (although they're not normally distributed) to make the solution matrix well behaved; keep the condition number within normal bounds. End of explanation for i in df.columns.tolist(): print i, Explanation: 2.3 Why did you select these features in your model? The first step was to qualitatively assess which parameters may be useful for the model. This begins with looking at a list of the data, and the type of data, which has been captured, illustrated as follows. End of explanation plt.figure(figsize=[8,6]) corr = df[['ENTRIESn_hourly', 'EXITSn_hourly', 'day_week', # Day of the week (0-6) 'weekday', # Whether it is a weekday or not 'day', # Day of the month 'hour', # In set [4, 8, 12, 16, 20, 24] 'fog', 'precipi', 'rain', 'tempi', 'wspdi']].corr() sns.heatmap(corr) plt.title('Correlation matrix between potential features') plt.show() Explanation: Some parameters are going to be clearly important: - UNIT/station - ridership will vary between entry points; - hour - ridership will definitely be different between peak hour and 4am; and - weekday - it is intutive that there will be more entries on weekdays; this is clearly illustrated in the visualisations in section 3. Additionally, rain needed to be included as a feature due to it being the focus of the overall investigation. Beyond these parameters, I selected a set of numeric features which may have an impact on the result, and initially computed and plotted the correlations between featires in an effort to screen out some multicollinearity prior linear regression. 
The results of this correlation matrix indicated a moderately strong correlations between: - Entries and exits - hence exits is not really suitable for predicting entries, which is somewhat intuitive - Day of the week and weekday - obviously correlated, hence only one should be chosen. - Day of the month and temperature are well correlated, and when plotted show a clear warming trend throughout May. There are also a handful of weaker environmental correlations, such as precipitation and fog, rain and precipitation and rain and temperature. End of explanation # Construct and fit the model mod = sm.OLS.from_formula('ENTRIESn_hourly ~ rain:C(hour) + stall_num2*C(hour) + stall_num2*weekday', data=df) res = mod.fit_regularized() s = res.summary2() Explanation: The final selection of variables was determined through trial and error of rational combinations of variables. The station popularity was captured in using the stall_num2 variable, since it appears to create a superior model compared with just using UNIT dummies, and because it allowed the creation of combinations. Combining the station with hour was useful, and is intuitive since stations in the CBD will have the greatest patronage and have greater entries in the evening peak hour. A similar logic applies to combining the station and whether it is a weekday. Various combinations of environmental variables were trialled in the model, but none appeared to improve the model accuracy and were subsequently dicarded. Since rain is the focus of this study it was retained, however it was combined with the time of day. The predictive strenght of the model was not really improved with the inclusion of a rain parameter, however combining it with hour appears to improve it's usefulness for providing insight, as will be discussed in section 4. 2.4 What are the parameters (also known as "coefficients" or "weights") of the non-dummy features in your linear regression model? End of explanation s.tables[1].ix[['Intercept', 'stall_num2']] Explanation: Due to the use of several combinations, there are very few non-dummy features, with the coefficients illustrated below. Since stall_num2 is also used in several combinations, it's individual coefficient doesn't prove very useful. End of explanation s.tables[1].ix[[i for i in s.tables[1].index if i[:5]=='stall']] Explanation: However when looking at all the combinations for stall_num2 provides greater insight. Here we can see that activity is greater on weekdays, and greatest in the 16:00-20:00hrs block. It is lowest in the 00:00-04:00hrs block, not shown as it was removed by the model due to the generic stall_num2 parameter being there; the other combinations are effectively referenced to the 00:00-04:00hrs block. End of explanation s.tables[1].ix[[i for i in s.tables[1].index if i[:4]=='rain']] Explanation: Even more interesting are the coefficient for the rain combinations. These appear to indicate that patronage increases in the 08:00-12:00 and 16:00-20:00, corresponding to peak hour. Conversely, subway entries are lower at all other times. Could it be that subway usage increases if it is raining when people are travelling to and from work, but decreases otherwise because people prefer not to travel in the rain at all? End of explanation print 'Model Coefficient of Determination (R-squared): {:.3f}'.format(res.rsquared) Explanation: 2.5 What is your model’s R2 (coefficients of determination) value? 
End of explanation residuals = res.resid sns.set_style('whitegrid') sns.distplot(residuals,bins=np.arange(-10000,10001,200), kde = False, # kde_kws={'kernel':'gau', 'gridsize':4000, 'bw':100}, fit=sps.cauchy, fit_kws={'gridsize':4000}) plt.xlim(-5000,5000) plt.title('Distribution of Residuals\nwith fitted cauchy Distribution overlaid') plt.show() Explanation: The final R-squared value of 0.74 is much greater than earlier models that used UNIT as a dummy variable, which had R-squared values around 0.55. 2.6 What does this R2 value mean for the goodness of fit for your regression model? Do you think this linear model to predict ridership is appropriate for this dataset, given this R2 value? To evaluate the goodness of fit the residuals of the model have been evaluated in two ways. First, a histogram of the residuals has been plotted below. The distribution of residuals is encouragingly symmetric. However efforts to fit a normal distribution found distributions which underestimated the frequency at the mode and tails. Fitting a fat-tailed distribution, such as the Cauchy distribution below, was far more successful. I'm not sure if there's a good reason why it's worked out this way (but would love to hear ideas as to why). End of explanation sns.set_style('whitegrid') fig = plt.figure(figsize=[6,6]) plt.xlabel('ENTRIESn_hourly') plt.ylabel('Residuals') plt.scatter(df.ENTRIESn_hourly, residuals, c=(df.stall_num2*total_stall_stddev+total_stall_mean)*100, # denormalise values cmap='YlGnBu') plt.colorbar(label='UNIT Relative Traffic (%)') plt.plot([0,20000],[0,-20000], ls=':', c='0.7', lw=2) # Line to show negative prediction values (i.e. negative entries) plt.xlim(xmin=0) plt.ylim(-20000,25000) plt.xticks(rotation='45') plt.title('Model Residuals vs. Expected Value') plt.show() Explanation: Secondly, a scatterplot of the residuals against the expected values is plotted. As expected, the largest residuals are associated with cases where the traffic is largest. In general the model appears to underpredict the traffic at the busiest of units. Also clear on this plot is how individual stations form a 'streak' of points on the diagonal. This is because the model essentially makes a prediction for each station per hour per for weekdays and weekends. The natural variation of the actual result in this timeframe creates the run of points. End of explanation print 'Condition Number: {:.2f}'.format(res.condition_number) Explanation: Additionally, note that the condition number for the final model is relatively low, hence there don't appear to be any collinearity issues with this model. By comparison, when UNIT was included as a dummy variable instead, the correlation was weaker and the condition number was up around 220. 
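As an aside (not part of the original notebook), the condition number referred to here can be computed directly with numpy for any design matrix; the self-contained sketch below uses synthetic data purely to show how a nearly collinear column inflates it.
import numpy as np

rng = np.random.RandomState(0)
n = 500
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)                    # independent regressor
x3 = x1 + rng.normal(scale=0.01, size=n)   # almost a copy of x1

independent = np.column_stack([np.ones(n), x1, x2])
collinear = np.column_stack([np.ones(n), x1, x2, x3])

print('condition number, independent columns : {:.1f}'.format(np.linalg.cond(independent)))
print('condition number, near-collinear column: {:.1f}'.format(np.linalg.cond(collinear)))
The jump in the second figure is the same kind of symptom a regression summary flags when regressors are close to linearly dependent.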
End of explanation sns.set_style('white') sns.set_context('talk') mydf = df.copy() mydf['rain'] = mydf.rain.apply(lambda x: 'Raining' if x else 'Not Raining') raindata = df[df.rain==1].ENTRIESn_hourly.tolist() noraindata = df[df.rain==0].ENTRIESn_hourly.tolist() fig = plt.figure(figsize=[9,6]) ax = fig.add_subplot(111) plt.hist([raindata,noraindata], normed=True, bins=np.arange(0,11500,1000), color=['dodgerblue', 'indianred'], label=['Raining', 'Not Raining'], align='right') plt.legend() sns.despine(left=True, bottom=True) # http://stackoverflow.com/questions/9767241/setting-a-relative-frequency-in-a-matplotlib-histogram def adjust_y_axis(x, pos): return '{:.0%}'.format(x * 1000) ax.yaxis.set_major_formatter(ticker.FuncFormatter(adjust_y_axis)) plt.title('Histogram of Subway Entries per 4 hour Block per Gate') plt.ylabel('Proportion of Total Entries') plt.xlim(500,10500) plt.xticks(np.arange(1000,10001,1000)) plt.show() Explanation: In summary, it appears that this linear model has done a reasonable job of predicting ridership in this instance. Clearly some improvements are possible (like fixing the predictions of negative entries!), but given there will always be a degree of random variation, an R-squared value of 0.74 for a linear model seems quite reasonable. To be sure of the model suitability the data should be split into training/test sets. Additionally, more data from extra months could prove beneficial. 3. Visualisation 3.1 One visualization should contain two histograms: one of ENTRIESn_hourly for rainy days and one of ENTRIESn_hourly for non-rainy days. End of explanation # Plot to illustrate the average riders per time block for each weekday. # First we need to sum up the entries per hour (category) per weekday across all units. # This is done for every day, whilst retaining the 'day_week' field for convenience. reset_index puts it back into a standard dataframe # For the sake of illustration, memorial day has been excluded since it would incorrectly characterise the Monday ridership mydf = df.copy() mydf = mydf[mydf.day!=30].pivot_table(values='ENTRIESn_hourly', index=['day','day_week','hour'], aggfunc=np.sum).reset_index() # The second pivot takes the daily summed data, and finds the mean for each weekday/hour block. mydf = mydf.pivot_table(values='ENTRIESn_hourly', index='hour', columns='day_week', aggfunc=np.mean) # Generate plout using the seaborn heatplot function. fig = plt.figure(figsize=[9,6]) timelabels = ['Midnight - 4am','4am - 8am','8am - 12pm','12pm - 4pm','4pm - 8pm','8pm - Midnight'] weekdays = ['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun'] plot = sns.heatmap(mydf, yticklabels=timelabels, xticklabels=weekdays) plt.xlabel('') # The axis ticks are descriptive enough to negate the need for axis labels plt.ylabel('') plot.tick_params(labelsize=14) # Make stuff bigger! # Make heatmap ticks bigger http://stackoverflow.com/questions/27832054/change-tick-size-on-colorbar-of-seaborn-heatmap cax = plt.gcf().axes[-1] cax.tick_params(labelsize=14) plt.title('Daily NYC Subway Ridership\n(Data from May 2011)', fontsize=20) plt.show() Explanation: Once both plots are normalised, the difference between subway entries when raining and not raining are almost identical. No useful differentiation can be made between the two datasets here. 3.2 One visualization can be more freeform. You should feel free to implement something that we discussed in class (e.g., scatter plots, line plots) or attempt to implement something more advanced if you'd like. End of explanation
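The summary above notes that a training/test split would be needed to be confident in the model's out-of-sample skill, but no such split appears in the notebook itself. Below is a minimal sketch of how that check could look; it is not from the original analysis, it assumes scikit-learn is available alongside statsmodels, and it runs on a synthetic stand-in frame rather than the MTA data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.model_selection import train_test_split

def out_of_sample_r2(data, formula, response, test_size=0.25, seed=0):
    # Fit an OLS formula on a training split and score R^2 on the held-out rows.
    train, test = train_test_split(data, test_size=test_size, random_state=seed)
    fit = smf.ols(formula, data=train).fit()
    resid = test[response] - fit.predict(test)
    ss_res = (resid ** 2).sum()
    ss_tot = ((test[response] - test[response].mean()) ** 2).sum()
    return 1.0 - ss_res / ss_tot

# Synthetic example; on the real data the formula from section 2.2 and
# ENTRIESn_hourly as the response would be passed instead.
rng = np.random.RandomState(1)
demo = pd.DataFrame({'x': rng.normal(size=400)})
demo['y'] = 3.0 * demo['x'] + rng.normal(scale=0.5, size=400)
print('held-out R-squared: {:.3f}'.format(out_of_sample_r2(demo, 'y ~ x', 'y')))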
5,217
Given the following text description, write Python code to implement the functionality described below step by step Description: Python Lists Quick Reference Table Of Contents <a href="#1.-Construction">Construction</a> <a href="#2.-Accessing-Data">Accessing Data</a> <a href="#3.-Modifying">Modifying</a> <a href="#4.-Sorting">Sorting</a> <a href="#5.-Deep-Copy">Deep Copy</a> <a href="#6.-Equality-and-Identity">Equality and Identity</a> <a href="#7.-List-Comprehensions">List Comprehensions</a> 1. Construction Step1: 2. Accessing Data Step2: 3. Modifying Step3: Setwise list manipulations Step4: 4. Sorting Sort a list in place (modifies but does not return the list) Step5: Return a sorted list (does not modify the original list) Step6: Insert into an already sorted list, and keep it sorted Step7: 5. Deep Copy Step8: 6. Equality and Identity Step9: 7. List Comprehensions
Python Code: #create an empty list
empty_list=[]
empty_list=list()

simpsons = ['homer', 'marge', 'bart']
Explanation: Python Lists Quick Reference Table Of Contents <a href="#1.-Construction">Construction</a> <a href="#2.-Accessing-Data">Accessing Data</a> <a href="#3.-Modifying">Modifying</a> <a href="#4.-Sorting">Sorting</a> <a href="#5.-Deep-Copy">Deep Copy</a> <a href="#6.-Equality-and-Identity">Equality and Identity</a> <a href="#7.-List-Comprehensions">List Comprehensions</a> 1. Construction End of explanation
simpsons[0]
len(simpsons)
# counts the number of instances
simpsons.count('bart')
# returns index of the first reference
simpsons.index('marge')
Explanation: 2. Accessing Data End of explanation
#append to end
simpsons.append('lisa')
simpsons
#append multiple elements to end
simpsons.extend(['itchy','scratchy'])
simpsons
#insert at index (shifts all elements to the right)
simpsons.insert(0, 'maggie')
simpsons
#remove element 0 and return it
simpsons.pop(0)
#remove element at 0 (does not return it)
del simpsons[0]
simpsons
#search for first instance and remove it
simpsons.remove('bart')
simpsons
#replace element
simpsons[0]='krusty'
simpsons
#concatenate lists (slower than the 'extend' method)
neighbors = simpsons + ['ned','rod','todd']
neighbors
Explanation: 3. Modifying End of explanation
neighbors * 2
Explanation: Setwise list manipulations: End of explanation
simpsons.sort()
simpsons
#reverse sort
simpsons.sort(reverse=True)
simpsons
#sort by a key
simpsons.sort(key=len)
simpsons
Explanation: 4. Sorting Sort a list in place (modifies but does not return the list): End of explanation
sorted(simpsons)
sorted(simpsons, reverse=True, key=len)
Explanation: Return a sorted list (does not modify the original list): End of explanation
num = [10, 20, 40, 50]
from bisect import insort
insort(num, 30)
num
Explanation: Insert into an already sorted list, and keep it sorted: End of explanation
new_num = num[:]
new_num=list(num)
new_num
Explanation: 5. Deep Copy End of explanation
# Identity compare: are they referencing the same object?
num is new_num
# Equality compare: do they have the same data?
num == new_num
Explanation: 6. Equality and Identity End of explanation
mylist = [1, 4, -5, 10, -7, 2, 3, -1]
[n for n in mylist if n > 0]
[n for n in mylist if n < 0]
[-5, -7, -1]
import math
[math.sqrt(n) for n in mylist if n > 0]
#clip negative values
[n if n > 0 else 0 for n in mylist]
# nested loops flattened into one list
alphas = 'abc'
digits = '123'
correct = ['a1', 'a2','a3','b1','b2','b3','c1','c2','c3']
answer = [alpha+digit for alpha in alphas for digit in digits]
answer == correct
Explanation: 7. List Comprehensions End of explanation
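One caution on the "Deep Copy" section above: num[:] and list(num) actually create shallow copies, which behave the same as a deep copy for a flat list of numbers but differ once the elements are themselves mutable. A short standard-library sketch of the difference (not part of the original reference):
import copy

nested = [[1, 2], [3, 4]]
shallow = nested[:]           # new outer list, but the inner lists are shared
deep = copy.deepcopy(nested)  # new outer list AND new inner lists

nested[0].append(99)
print(shallow[0])   # [1, 2, 99] - the shallow copy sees the mutation
print(deep[0])      # [1, 2]     - the deep copy is unaffected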
5,218
Given the following text description, write Python code to implement the functionality described below step by step Description: Tic Tac Toe This is the solution for the Milestone Project! A two player game made within a Jupyter Notebook. Feel free to download the notebook to understand how it works! First some imports we'll need to use for displaying output and set the global variables Step1: Next make a function that will reset the board, in this case we'll store values as a list. Step2: Now create a function to display the board, I'll use the num pad as the board reference. Note Step3: Define a function to check for a win by comparing inputs in the board list. Note Step4: Define function to check if the board is already full in case of a tie. (This is straightfoward with our board stored as a list) Just remember index 0 is always empty. Step5: Now define a function to get player input and do various checks on it. Step6: Now have a function that takes in the player's choice (via the ask_player function) then returns the game_state. Step7: Finally put it all together in a function to play the game. Step8: Let's play!
Python Code: # Specifically for the iPython Notebook environment for clearing output. from IPython.display import clear_output # Global variables board = [' '] * 10 game_state = True announce = '' Explanation: Tic Tac Toe This is the solution for the Milestone Project! A two player game made within a Jupyter Notebook. Feel free to download the notebook to understand how it works! First some imports we'll need to use for displaying output and set the global variables End of explanation # Note: Game will ignore the 0 index def reset_board(): global board,game_state board = [' '] * 10 game_state = True Explanation: Next make a function that will reset the board, in this case we'll store values as a list. End of explanation def display_board(): ''' This function prints out the board so the numpad can be used as a reference ''' # Clear current cell output clear_output() # Print board print " "+board[7]+" |"+board[8]+" | "+board[9]+" " print "------------" print " "+board[4]+" |"+board[5]+" | "+board[6]+" " print "------------" print " "+board[1]+" |"+board[2]+" | "+board[3]+" " Explanation: Now create a function to display the board, I'll use the num pad as the board reference. Note: Should probably just make board and player classes later.... End of explanation def win_check(board, player): ''' Check Horizontals,Verticals, and Diagonals for a win ''' if (board[7] == board[8] == board[9] == player) or \ (board[4] == board[5] == board[6] == player) or \ (board[1] == board[2] == board[3] == player) or \ (board[7] == board[4] == board[1] == player) or \ (board[8] == board[5] == board[2] == player) or \ (board[9] == board[6] == board[3] == player) or \ (board[1] == board[5] == board[9] == player) or \ (board[3] == board[5] == board[7] == player): return True else: return False Explanation: Define a function to check for a win by comparing inputs in the board list. Note: Maybe should just have a list of winning combos and cycle through them? End of explanation def full_board_check(board): ''' Function to check if any remaining blanks are in the board ''' if " " in board[1:]: return False else: return True Explanation: Define function to check if the board is already full in case of a tie. (This is straightfoward with our board stored as a list) Just remember index 0 is always empty. End of explanation def ask_player(mark): ''' Asks player where to place X or O mark, checks validity ''' global board req = 'Choose where to place your: ' + mark while True: try: choice = int(raw_input(req)) except ValueError: print("Sorry, please input a number between 1-9.") continue if board[choice] == " ": board[choice] = mark break else: print "That space isn't empty!" continue Explanation: Now define a function to get player input and do various checks on it. End of explanation def player_choice(mark): global board,game_state,announce #Set game blank game announcement announce = '' #Get Player Input mark = str(mark) # Validate input ask_player(mark) #Check for player win if win_check(board,mark): clear_output() display_board() announce = mark +" wins! Congratulations" game_state = False #Show board clear_output() display_board() #Check for a tie if full_board_check(board): announce = "Tie!" game_state = False return game_state,announce Explanation: Now have a function that takes in the player's choice (via the ask_player function) then returns the game_state. 
End of explanation def play_game(): reset_board() global announce # Set marks X='X' O='O' while True: # Show board clear_output() display_board() # Player X turn game_state,announce = player_choice(X) print announce if game_state == False: break # Player O turn game_state,announce = player_choice(O) print announce if game_state == False: break # Ask player for a rematch rematch = raw_input('Would you like to play again? y/n') if rematch == 'y': play_game() else: print "Thanks for playing!" Explanation: Finally put it all together in a function to play the game. End of explanation play_game() Explanation: Let's play! End of explanation
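The note in the win-check explanation above asks whether a list of winning combinations would be cleaner than the long chained comparison. A minimal sketch of that variant follows; the name win_check_v2 is hypothetical so it does not clash with the original function, and it uses the same numpad board convention with index 0 unused.
# Index triples that form a line on the numpad-style board (index 0 unused)
WINNING_LINES = [(7, 8, 9), (4, 5, 6), (1, 2, 3),   # rows
                 (7, 4, 1), (8, 5, 2), (9, 6, 3),   # columns
                 (7, 5, 3), (9, 5, 1)]              # diagonals

def win_check_v2(board, player):
    # A player wins if every cell of any winning line holds their mark.
    return any(all(board[i] == player for i in line) for line in WINNING_LINES)

# Quick check against the same board layout used above
demo_board = [' '] * 10
for i in (7, 5, 3):
    demo_board[i] = 'X'
print(win_check_v2(demo_board, 'X'))   # True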
5,219
Given the following text description, write Python code to implement the functionality described below step by step Description: Run code to get all URLs ``` with open("all_urls.txt", "wb+") as fp Step2: Load expanded data Step3: Extract tweet features
Python Code: len(data) data[0].keys() data[0][u'source'] data[0][u'is_quote_status'] data[0][u'quoted_status']['text'] data[0]['text'] count_quoted = 0 has_coordinates = 0 count_replies = 0 language_ids = defaultdict(int) count_user_locs = 0 user_locs = Counter() count_verified = 0 for d in data: count_quoted += d.get('is_quote_status', 0) coords = d.get(u'coordinates', None) repl_id = d.get(u'in_reply_to_status_id', None) has_coordinates += (coords is not None) count_replies += (repl_id is not None) loc = d['user'].get('location', u'') count_verified += d['user']['verified'] if loc != u'': count_user_locs += 1 user_locs.update([loc]) language_ids[d['lang']] += 1 print count_quoted, has_coordinates, count_replies, count_user_locs, count_verified print("Of {} tweets, {} have coordinates, while {} have user locations, comprising of {} unique locations".format( len(data), has_coordinates, count_user_locs, len(user_locs) )) user_locs.most_common(10) len(data) data[0]['user'] Explanation: Run code to get all URLs ``` with open("all_urls.txt", "wb+") as fp: for url in sorted(filter(lambda x: x[1] != 'twitter.com', unique_urls), key=lambda x: url_types[x[1]], reverse=True): print >> fp, "%s\t%s\t%s" % (url[0], url[1], url_types[url[1]]) ! head all_urls.txt ``` End of explanation df = pd.read_csv("URL_CAT_MAPPINGS.txt", sep="\t") df.head() df['URL_EXP_SUCCESS'] = (df.EXPANDED_STATUS < 2) df.head() URL_DICT = dict(zip(df[df.URL_CATS != 'UNK'].URL, df[df.URL_CATS != 'UNK'].URL_CATS)) URL_MAPS = dict(zip(df.URL, df.URL_DOMAIN)) URL_EXP_SUCCESS = dict(zip(df.URL, df.URL_EXP_SUCCESS)) len(URL_DICT), df.shape, len(URL_MAPS), len(URL_EXP_SUCCESS) df.URL.head().values URL_MAPS['http://bit.ly/1SqTn5d'] found_urls = 0 twitter_urls = 0 total_urls = 0 tid_mapped_urls = [] url_types = defaultdict(int) for d in data: if 'urls' in d['entities']: m_entities = d['entities']['urls'] for m in m_entities: total_urls += 1 m = m['expanded_url'] m_cats = "UNK" if m in URL_DICT: found_urls += 1 m_cats = URL_DICT[m] elif m.startswith("https://twitter.com") or m.startswith("http://twitter.com"): found_urls += 1 twitter_urls += 1 m_cats = "socialmedia|twitter" else: m_type = "failed_url" if URL_EXP_SUCCESS[m]: m_type = URL_MAPS.get(m, "None.com") m_type = m.split("/", 3)[2] #m_type = m_type.split("/", 3)[2] if m_type.startswith("www."): m_type = m_type[4:] url_types[m_type] += 1 tid_mapped_urls.append((d["id"], m, m_cats)) print "Data: %s, Total: %s, Found: %s, Twitter: %s" % (len(data), total_urls, found_urls, twitter_urls) url_types = Counter(url_types) url_types.most_common(10) url_types.most_common(50) sum(url_types.values()) tid_mapped_urls[:10] df_mapped_cats = pd.DataFrame(tid_mapped_urls, columns=["TID", "URL", "CATS"]) df_mapped_cats.head() df_mapped_cats.to_csv("TID_URL_CATS.txt", sep="\t", index=False) ! 
head TID_URL_CATS.txt Explanation: Load expanded data End of explanation def extract_meta_features(x): u_data = x["user"] u_url = u_data['url'] if u_url is not None: u_url = u_data['entities']['url']['urls'][0]['expanded_url'] return (x["id"], x['created_at'], x['retweet_count'], x['favorite_count'], x['in_reply_to_status_id'] is not None, 'quoted_status' in x and x['quoted_status'] is not None, len(x['entities']['hashtags']), len(x['entities']['urls']), len(x['entities']['user_mentions']), 0 if 'media' not in x['entities'] else len(x['entities']['media']), # Has photos u_data['id'], u_data[u'created_at'], u_data[u'listed_count'], u_data[u'favourites_count'], u_data[u'followers_count'], u_data[u'friends_count'], u_data[u'statuses_count'], u_data[u'verified'], u_data[u'location'].replace('\r', ''), u_data[u'name'].replace('\r',''), u_url ) extract_meta_features(data[0]) df_meta = pd.DataFrame((extract_meta_features(d) for d in data), columns=["t_id", "t_created", "t_retweets", "t_favorites", "t_is_reply", "t_is_quote", "t_n_hashtags", "t_n_urls", "t_n_mentions", "t_n_media", "u_id", "u_created", "u_n_listed", "u_n_favorites", "u_n_followers", "u_n_friends", "u_n_statuses", "u_is_verified", "u_location", "u_name", "u_url" ]) df_meta.head() df_meta.dtypes df_meta[df_meta.u_url.apply(lambda x: x is not None)]["u_url"].head() df_meta.to_csv("TID_META.txt", sep="\t", index=False, encoding='utf-8') ! head TID_META.txt df_meta[df_meta.u_url.apply(lambda x: x is not None)]["u_url"].shape df_meta.shape Explanation: Extract tweet features End of explanation
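A natural next step for the df_meta table built above is to convert the raw created_at strings into datetimes so that features such as account age at tweet time can be derived. The sketch below is not part of the original notebook; it assumes the timestamps follow the usual Twitter API format (e.g. 'Wed Aug 10 16:01:19 +0000 2016') and uses the t_created / u_created column names from the TID_META schema above.
import pandas as pd

def add_account_age(df_meta):
    # Parse tweet/user creation times and derive the account's age (in days)
    # at the moment the tweet was posted.
    fmt = '%a %b %d %H:%M:%S +0000 %Y'   # assumed Twitter API timestamp format
    out = df_meta.copy()
    out['t_created_dt'] = pd.to_datetime(out['t_created'], format=fmt)
    out['u_created_dt'] = pd.to_datetime(out['u_created'], format=fmt)
    out['u_account_age_days'] = (out['t_created_dt'] - out['u_created_dt']).dt.days
    return out

# Tiny stand-in frame with the same two columns
demo = pd.DataFrame({'t_created': ['Wed Aug 10 16:01:19 +0000 2016'],
                     'u_created': ['Mon Jan 04 10:00:00 +0000 2010']})
print(add_account_age(demo)[['u_account_age_days']])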
5,220
Given the following text description, write Python code to implement the functionality described below step by step Description: ES-DOC CMIP6 Model Properties - Ocean MIP Era Step1: Document Authors Set document authors Step2: Document Contributors Specify document contributors Step3: Document Publication Specify document publication status Step4: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Seawater Properties 3. Key Properties --&gt; Bathymetry 4. Key Properties --&gt; Nonoceanic Waters 5. Key Properties --&gt; Software Properties 6. Key Properties --&gt; Resolution 7. Key Properties --&gt; Tuning Applied 8. Key Properties --&gt; Conservation 9. Grid 10. Grid --&gt; Discretisation --&gt; Vertical 11. Grid --&gt; Discretisation --&gt; Horizontal 12. Timestepping Framework 13. Timestepping Framework --&gt; Tracers 14. Timestepping Framework --&gt; Baroclinic Dynamics 15. Timestepping Framework --&gt; Barotropic 16. Timestepping Framework --&gt; Vertical Physics 17. Advection 18. Advection --&gt; Momentum 19. Advection --&gt; Lateral Tracers 20. Advection --&gt; Vertical Tracers 21. Lateral Physics 22. Lateral Physics --&gt; Momentum --&gt; Operator 23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff 24. Lateral Physics --&gt; Tracers 25. Lateral Physics --&gt; Tracers --&gt; Operator 26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff 27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity 28. Vertical Physics 29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details 30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers 31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum 32. Vertical Physics --&gt; Interior Mixing --&gt; Details 33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers 34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum 35. Uplow Boundaries --&gt; Free Surface 36. Uplow Boundaries --&gt; Bottom Boundary Layer 37. Boundary Forcing 38. Boundary Forcing --&gt; Momentum --&gt; Bottom Friction 39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction 40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration 41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing 1. Key Properties Ocean key properties 1.1. Model Overview Is Required Step5: 1.2. Model Name Is Required Step6: 1.3. Model Family Is Required Step7: 1.4. Basic Approximations Is Required Step8: 1.5. Prognostic Variables Is Required Step9: 2. Key Properties --&gt; Seawater Properties Physical properties of seawater in ocean 2.1. Eos Type Is Required Step10: 2.2. Eos Functional Temp Is Required Step11: 2.3. Eos Functional Salt Is Required Step12: 2.4. Eos Functional Depth Is Required Step13: 2.5. Ocean Freezing Point Is Required Step14: 2.6. Ocean Specific Heat Is Required Step15: 2.7. Ocean Reference Density Is Required Step16: 3. Key Properties --&gt; Bathymetry Properties of bathymetry in ocean 3.1. Reference Dates Is Required Step17: 3.2. Type Is Required Step18: 3.3. Ocean Smoothing Is Required Step19: 3.4. Source Is Required Step20: 4. Key Properties --&gt; Nonoceanic Waters Non oceanic waters treatement in ocean 4.1. Isolated Seas Is Required Step21: 4.2. River Mouth Is Required Step22: 5. Key Properties --&gt; Software Properties Software properties of ocean code 5.1. Repository Is Required Step23: 5.2. Code Version Is Required Step24: 5.3. Code Languages Is Required Step25: 6. Key Properties --&gt; Resolution Resolution in the ocean grid 6.1. Name Is Required Step26: 6.2. 
Canonical Horizontal Resolution Is Required Step27: 6.3. Range Horizontal Resolution Is Required Step28: 6.4. Number Of Horizontal Gridpoints Is Required Step29: 6.5. Number Of Vertical Levels Is Required Step30: 6.6. Is Adaptive Grid Is Required Step31: 6.7. Thickness Level 1 Is Required Step32: 7. Key Properties --&gt; Tuning Applied Tuning methodology for ocean component 7.1. Description Is Required Step33: 7.2. Global Mean Metrics Used Is Required Step34: 7.3. Regional Metrics Used Is Required Step35: 7.4. Trend Metrics Used Is Required Step36: 8. Key Properties --&gt; Conservation Conservation in the ocean component 8.1. Description Is Required Step37: 8.2. Scheme Is Required Step38: 8.3. Consistency Properties Is Required Step39: 8.4. Corrected Conserved Prognostic Variables Is Required Step40: 8.5. Was Flux Correction Used Is Required Step41: 9. Grid Ocean grid 9.1. Overview Is Required Step42: 10. Grid --&gt; Discretisation --&gt; Vertical Properties of vertical discretisation in ocean 10.1. Coordinates Is Required Step43: 10.2. Partial Steps Is Required Step44: 11. Grid --&gt; Discretisation --&gt; Horizontal Type of horizontal discretisation scheme in ocean 11.1. Type Is Required Step45: 11.2. Staggering Is Required Step46: 11.3. Scheme Is Required Step47: 12. Timestepping Framework Ocean Timestepping Framework 12.1. Overview Is Required Step48: 12.2. Diurnal Cycle Is Required Step49: 13. Timestepping Framework --&gt; Tracers Properties of tracers time stepping in ocean 13.1. Scheme Is Required Step50: 13.2. Time Step Is Required Step51: 14. Timestepping Framework --&gt; Baroclinic Dynamics Baroclinic dynamics in ocean 14.1. Type Is Required Step52: 14.2. Scheme Is Required Step53: 14.3. Time Step Is Required Step54: 15. Timestepping Framework --&gt; Barotropic Barotropic time stepping in ocean 15.1. Splitting Is Required Step55: 15.2. Time Step Is Required Step56: 16. Timestepping Framework --&gt; Vertical Physics Vertical physics time stepping in ocean 16.1. Method Is Required Step57: 17. Advection Ocean advection 17.1. Overview Is Required Step58: 18. Advection --&gt; Momentum Properties of lateral momemtum advection scheme in ocean 18.1. Type Is Required Step59: 18.2. Scheme Name Is Required Step60: 18.3. ALE Is Required Step61: 19. Advection --&gt; Lateral Tracers Properties of lateral tracer advection scheme in ocean 19.1. Order Is Required Step62: 19.2. Flux Limiter Is Required Step63: 19.3. Effective Order Is Required Step64: 19.4. Name Is Required Step65: 19.5. Passive Tracers Is Required Step66: 19.6. Passive Tracers Advection Is Required Step67: 20. Advection --&gt; Vertical Tracers Properties of vertical tracer advection scheme in ocean 20.1. Name Is Required Step68: 20.2. Flux Limiter Is Required Step69: 21. Lateral Physics Ocean lateral physics 21.1. Overview Is Required Step70: 21.2. Scheme Is Required Step71: 22. Lateral Physics --&gt; Momentum --&gt; Operator Properties of lateral physics operator for momentum in ocean 22.1. Direction Is Required Step72: 22.2. Order Is Required Step73: 22.3. Discretisation Is Required Step74: 23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff Properties of eddy viscosity coeff in lateral physics momemtum scheme in the ocean 23.1. Type Is Required Step75: 23.2. Constant Coefficient Is Required Step76: 23.3. Variable Coefficient Is Required Step77: 23.4. Coeff Background Is Required Step78: 23.5. Coeff Backscatter Is Required Step79: 24. 
Lateral Physics --&gt; Tracers Properties of lateral physics for tracers in ocean 24.1. Mesoscale Closure Is Required Step80: 24.2. Submesoscale Mixing Is Required Step81: 25. Lateral Physics --&gt; Tracers --&gt; Operator Properties of lateral physics operator for tracers in ocean 25.1. Direction Is Required Step82: 25.2. Order Is Required Step83: 25.3. Discretisation Is Required Step84: 26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean 26.1. Type Is Required Step85: 26.2. Constant Coefficient Is Required Step86: 26.3. Variable Coefficient Is Required Step87: 26.4. Coeff Background Is Required Step88: 26.5. Coeff Backscatter Is Required Step89: 27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean 27.1. Type Is Required Step90: 27.2. Constant Val Is Required Step91: 27.3. Flux Type Is Required Step92: 27.4. Added Diffusivity Is Required Step93: 28. Vertical Physics Ocean Vertical Physics 28.1. Overview Is Required Step94: 29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details Properties of vertical physics in ocean 29.1. Langmuir Cells Mixing Is Required Step95: 30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers *Properties of boundary layer (BL) mixing on tracers in the ocean * 30.1. Type Is Required Step96: 30.2. Closure Order Is Required Step97: 30.3. Constant Is Required Step98: 30.4. Background Is Required Step99: 31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum *Properties of boundary layer (BL) mixing on momentum in the ocean * 31.1. Type Is Required Step100: 31.2. Closure Order Is Required Step101: 31.3. Constant Is Required Step102: 31.4. Background Is Required Step103: 32. Vertical Physics --&gt; Interior Mixing --&gt; Details *Properties of interior mixing in the ocean * 32.1. Convection Type Is Required Step104: 32.2. Tide Induced Mixing Is Required Step105: 32.3. Double Diffusion Is Required Step106: 32.4. Shear Mixing Is Required Step107: 33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers *Properties of interior mixing on tracers in the ocean * 33.1. Type Is Required Step108: 33.2. Constant Is Required Step109: 33.3. Profile Is Required Step110: 33.4. Background Is Required Step111: 34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum *Properties of interior mixing on momentum in the ocean * 34.1. Type Is Required Step112: 34.2. Constant Is Required Step113: 34.3. Profile Is Required Step114: 34.4. Background Is Required Step115: 35. Uplow Boundaries --&gt; Free Surface Properties of free surface in ocean 35.1. Overview Is Required Step116: 35.2. Scheme Is Required Step117: 35.3. Embeded Seaice Is Required Step118: 36. Uplow Boundaries --&gt; Bottom Boundary Layer Properties of bottom boundary layer in ocean 36.1. Overview Is Required Step119: 36.2. Type Of Bbl Is Required Step120: 36.3. Lateral Mixing Coef Is Required Step121: 36.4. Sill Overflow Is Required Step122: 37. Boundary Forcing Ocean boundary forcing 37.1. Overview Is Required Step123: 37.2. Surface Pressure Is Required Step124: 37.3. Momentum Flux Correction Is Required Step125: 37.4. Tracers Flux Correction Is Required Step126: 37.5. Wave Effects Is Required Step127: 37.6. River Runoff Budget Is Required Step128: 37.7. Geothermal Heating Is Required Step129: 38. 
Boundary Forcing --&gt; Momentum --&gt; Bottom Friction Properties of momentum bottom friction in ocean 38.1. Type Is Required Step130: 39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction Properties of momentum lateral friction in ocean 39.1. Type Is Required Step131: 40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration Properties of sunlight penetration scheme in ocean 40.1. Scheme Is Required Step132: 40.2. Ocean Colour Is Required Step133: 40.3. Extinction Depth Is Required Step134: 41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing Properties of surface fresh water forcing in ocean 41.1. From Atmopshere Is Required Step135: 41.2. From Sea Ice Is Required Step136: 41.3. Forced Mode Restoring Is Required
Python Code: # DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'snu', 'sandbox-1', 'ocean') Explanation: ES-DOC CMIP6 Model Properties - Ocean MIP Era: CMIP6 Institute: SNU Source ID: SANDBOX-1 Topic: Ocean Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing. Properties: 133 (101 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:54:38 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) Explanation: Document Authors Set document authors End of explanation # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) Explanation: Document Contributors Specify document contributors End of explanation # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) Explanation: Document Publication Specify document publication status End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Seawater Properties 3. Key Properties --&gt; Bathymetry 4. Key Properties --&gt; Nonoceanic Waters 5. Key Properties --&gt; Software Properties 6. Key Properties --&gt; Resolution 7. Key Properties --&gt; Tuning Applied 8. Key Properties --&gt; Conservation 9. Grid 10. Grid --&gt; Discretisation --&gt; Vertical 11. Grid --&gt; Discretisation --&gt; Horizontal 12. Timestepping Framework 13. Timestepping Framework --&gt; Tracers 14. Timestepping Framework --&gt; Baroclinic Dynamics 15. Timestepping Framework --&gt; Barotropic 16. Timestepping Framework --&gt; Vertical Physics 17. Advection 18. Advection --&gt; Momentum 19. Advection --&gt; Lateral Tracers 20. Advection --&gt; Vertical Tracers 21. Lateral Physics 22. Lateral Physics --&gt; Momentum --&gt; Operator 23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff 24. Lateral Physics --&gt; Tracers 25. Lateral Physics --&gt; Tracers --&gt; Operator 26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff 27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity 28. Vertical Physics 29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details 30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers 31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum 32. Vertical Physics --&gt; Interior Mixing --&gt; Details 33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers 34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum 35. Uplow Boundaries --&gt; Free Surface 36. Uplow Boundaries --&gt; Bottom Boundary Layer 37. Boundary Forcing 38. Boundary Forcing --&gt; Momentum --&gt; Bottom Friction 39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction 40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration 41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing 1. Key Properties Ocean key properties 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of ocean model. End of explanation # PROPERTY ID - DO NOT EDIT ! 
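The cells in this template are left as TODO placeholders. Purely as an illustration (the values below are invented examples, not SNU's actual metadata), a completed cell follows the same two-step pattern already shown above:
# Illustrative only - replace with real values before publishing
DOC.set_author("Jane Doe", "jane.doe@example.org")

# and, for a property cell, set the id then the value as shown above:
DOC.set_id('cmip6.ocean.key_properties.model_overview')
DOC.set_value("Free-text overview of the ocean component goes here.")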
DOC.set_id('cmip6.ocean.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of ocean model code (NEMO 3.6, MOM 5.0,...) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.model_family') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "OGCM" # "slab ocean" # "mixed layer ocean" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.3. Model Family Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of ocean model. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.basic_approximations') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Primitive equations" # "Non-hydrostatic" # "Boussinesq" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.4. Basic Approximations Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Basic approximations made in the ocean. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Potential temperature" # "Conservative temperature" # "Salinity" # "U-velocity" # "V-velocity" # "W-velocity" # "SSH" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.5. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of prognostic variables in the ocean component. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Linear" # "Wright, 1997" # "Mc Dougall et al." # "Jackett et al. 2006" # "TEOS 2010" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 2. Key Properties --&gt; Seawater Properties Physical properties of seawater in ocean 2.1. Eos Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EOS for sea water End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Potential temperature" # "Conservative temperature" # TODO - please enter value(s) Explanation: 2.2. Eos Functional Temp Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Temperature used in EOS for sea water End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Practical salinity Sp" # "Absolute salinity Sa" # TODO - please enter value(s) Explanation: 2.3. Eos Functional Salt Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Salinity used in EOS for sea water End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Pressure (dbars)" # "Depth (meters)" # TODO - please enter value(s) Explanation: 2.4. Eos Functional Depth Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Depth or pressure used in EOS for sea water ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "TEOS 2010" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 2.5. Ocean Freezing Point Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 2.6. Ocean Specific Heat Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specific heat in ocean (cpocean) in J/(kg K) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 2.7. Ocean Reference Density Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Boussinesq reference density (rhozero) in kg / m3 End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Present day" # "21000 years BP" # "6000 years BP" # "LGM" # "Pliocene" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 3. Key Properties --&gt; Bathymetry Properties of bathymetry in ocean 3.1. Reference Dates Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Reference date of bathymetry End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.type') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 3.2. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the bathymetry fixed in time in the ocean ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.3. Ocean Smoothing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe any smoothing or hand editing of bathymetry in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.source') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.4. Source Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe source of bathymetry in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4. Key Properties --&gt; Nonoceanic Waters Non oceanic waters treatement in ocean 4.1. Isolated Seas Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how isolated seas is performed End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4.2. River Mouth Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how river mouth mixing or estuaries specific treatment is performed End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5. Key Properties --&gt; Software Properties Software properties of ocean code 5.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6. Key Properties --&gt; Resolution Resolution in the ocean grid 6.1. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.2. Canonical Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.3. Range Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 6.4. Number Of Horizontal Gridpoints Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Total number of horizontal (XY) points (or degrees of freedom) on computational grid. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 6.5. Number Of Vertical Levels Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of vertical levels resolved on computational grid. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 6.6. Is Adaptive Grid Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Default is False. Set true if grid resolution changes during execution. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 6.7. Thickness Level 1 Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Thickness of first surface ocean level (in meters) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7. Key Properties --&gt; Tuning Applied Tuning methodology for ocean component 7.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview description of tuning: explain and motivate the main targets and metrics retained. &amp;Document the relative weight given to climate performance metrics versus process oriented metrics, &amp;and on the possible conflicts with parameterization level tuning. In particular describe any struggle &amp;with a parameter value that required pushing it to its limits to solve a particular model deficiency. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.2. Global Mean Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List set of metrics of the global mean state used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.3. Regional Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.4. Trend Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List observed trend metrics used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8. Key Properties --&gt; Conservation Conservation in the ocean component 8.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Brief description of conservation methodology End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.scheme') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Energy" # "Enstrophy" # "Salt" # "Volume of ocean" # "Momentum" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 8.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Properties conserved in the ocean by the numerical schemes End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.3. Consistency Properties Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.4. Corrected Conserved Prognostic Variables Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Set of variables which are conserved by more than the numerical scheme alone. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 8.5. Was Flux Correction Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Does conservation involve flux correction ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9. Grid Ocean grid 9.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of grid in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Z-coordinate" # "Z*-coordinate" # "S-coordinate" # "Isopycnic - sigma 0" # "Isopycnic - sigma 2" # "Isopycnic - sigma 4" # "Isopycnic - other" # "Hybrid / Z+S" # "Hybrid / Z+isopycnic" # "Hybrid / other" # "Pressure referenced (P)" # "P*" # "Z**" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 10. 
Grid --&gt; Discretisation --&gt; Vertical Properties of vertical discretisation in ocean 10.1. Coordinates Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of vertical coordinates in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 10.2. Partial Steps Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Using partial steps with Z or Z vertical coordinate in ocean ?* End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Lat-lon" # "Rotated north pole" # "Two north poles (ORCA-style)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11. Grid --&gt; Discretisation --&gt; Horizontal Type of horizontal discretisation scheme in ocean 11.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal grid type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Arakawa B-grid" # "Arakawa C-grid" # "Arakawa E-grid" # "N/a" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11.2. Staggering Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Horizontal grid staggering type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Finite difference" # "Finite volumes" # "Finite elements" # "Unstructured grid" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11.3. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal discretisation scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 12. Timestepping Framework Ocean Timestepping Framework 12.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of time stepping in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Via coupling" # "Specific treatment" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 12.2. Diurnal Cycle Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Diurnal cycle type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Leap-frog + Asselin filter" # "Leap-frog + Periodic Euler" # "Predictor-corrector" # "Runge-Kutta 2" # "AM3-LF" # "Forward-backward" # "Forward operator" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13. Timestepping Framework --&gt; Tracers Properties of tracers time stepping in ocean 13.1. 
Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Tracers time stepping scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 13.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Tracers time step (in seconds) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Preconditioned conjugate gradient" # "Sub cyling" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14. Timestepping Framework --&gt; Baroclinic Dynamics Baroclinic dynamics in ocean 14.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Baroclinic dynamics type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Leap-frog + Asselin filter" # "Leap-frog + Periodic Euler" # "Predictor-corrector" # "Runge-Kutta 2" # "AM3-LF" # "Forward-backward" # "Forward operator" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Baroclinic dynamics scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 14.3. Time Step Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Baroclinic time step (in seconds) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "split explicit" # "implicit" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15. Timestepping Framework --&gt; Barotropic Barotropic time stepping in ocean 15.1. Splitting Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time splitting method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 15.2. Time Step Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Barotropic time step (in seconds) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 16. Timestepping Framework --&gt; Vertical Physics Vertical physics time stepping in ocean 16.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Details of vertical time stepping in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17. Advection Ocean advection 17.1. 
Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of advection in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.momentum.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Flux form" # "Vector form" # TODO - please enter value(s) Explanation: 18. Advection --&gt; Momentum Properties of lateral momemtum advection scheme in ocean 18.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of lateral momemtum advection scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.momentum.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 18.2. Scheme Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of ocean momemtum advection scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.momentum.ALE') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 18.3. ALE Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Using ALE for vertical advection ? (if vertical coordinates are sigma) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 19. Advection --&gt; Lateral Tracers Properties of lateral tracer advection scheme in ocean 19.1. Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Order of lateral tracer advection scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 19.2. Flux Limiter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Monotonic flux limiter for lateral tracer advection scheme in ocean ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 19.3. Effective Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Effective order of limited lateral tracer advection scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 19.4. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Ideal age" # "CFC 11" # "CFC 12" # "SF6" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 19.5. 
Passive Tracers Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Passive tracers advected End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 19.6. Passive Tracers Advection Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Is advection of passive tracers different than active ? if so, describe. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.vertical_tracers.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 20. Advection --&gt; Vertical Tracers Properties of vertical tracer advection scheme in ocean 20.1. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 20.2. Flux Limiter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Monotonic flux limiter for vertical tracer advection scheme in ocean ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 21. Lateral Physics Ocean lateral physics 21.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of lateral physics in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Eddy active" # "Eddy admitting" # TODO - please enter value(s) Explanation: 21.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of transient eddy representation in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Horizontal" # "Isopycnal" # "Isoneutral" # "Geopotential" # "Iso-level" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 22. Lateral Physics --&gt; Momentum --&gt; Operator Properties of lateral physics operator for momentum in ocean 22.1. Direction Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Direction of lateral physics momemtum scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Harmonic" # "Bi-harmonic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 22.2. Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Order of lateral physics momemtum scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Second order" # "Higher order" # "Flux limiter" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 22.3. Discretisation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Discretisation of lateral physics momemtum scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Space varying" # "Time + space varying (Smagorinsky)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff Properties of eddy viscosity coeff in lateral physics momemtum scheme in the ocean 23.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Lateral physics momemtum eddy viscosity coeff type in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 23.2. Constant Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant, value of eddy viscosity coeff in lateral physics momemtum scheme (in m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 23.3. Variable Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If space-varying, describe variations of eddy viscosity coeff in lateral physics momemtum scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 23.4. Coeff Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe background eddy viscosity coeff in lateral physics momemtum scheme (give values in m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 23.5. Coeff Backscatter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there backscatter in eddy viscosity coeff in lateral physics momemtum scheme ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 24. Lateral Physics --&gt; Tracers Properties of lateral physics for tracers in ocean 24.1. Mesoscale Closure Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there a mesoscale closure in the lateral physics tracers scheme ? End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 24.2. Submesoscale Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there a submesoscale mixing parameterisation (i.e Fox-Kemper) in the lateral physics tracers scheme ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Horizontal" # "Isopycnal" # "Isoneutral" # "Geopotential" # "Iso-level" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25. Lateral Physics --&gt; Tracers --&gt; Operator Properties of lateral physics operator for tracers in ocean 25.1. Direction Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Direction of lateral physics tracers scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Harmonic" # "Bi-harmonic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25.2. Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Order of lateral physics tracers scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Second order" # "Higher order" # "Flux limiter" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25.3. Discretisation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Discretisation of lateral physics tracers scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Space varying" # "Time + space varying (Smagorinsky)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean 26.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Lateral physics tracers eddy diffusity coeff type in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 26.2. Constant Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant, value of eddy diffusity coeff in lateral physics tracers scheme (in m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 26.3. 
Variable Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If space-varying, describe variations of eddy diffusity coeff in lateral physics tracers scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 26.4. Coeff Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe background eddy diffusity coeff in lateral physics tracers scheme (give values in m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 26.5. Coeff Backscatter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there backscatter in eddy diffusity coeff in lateral physics tracers scheme ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "GM" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean 27.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EIV in lateral physics tracers in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 27.2. Constant Val Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If EIV scheme for tracers is constant, specify coefficient value (M2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 27.3. Flux Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EIV flux (advective or skew) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 27.4. Added Diffusivity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EIV added diffusivity (constant, flow dependent or none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 28. Vertical Physics Ocean Vertical Physics 28.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of vertical physics in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details Properties of vertical physics in ocean 29.1. Langmuir Cells Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there Langmuir cells mixing in upper ocean ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure - TKE" # "Turbulent closure - KPP" # "Turbulent closure - Mellor-Yamada" # "Turbulent closure - Bulk Mixed Layer" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers *Properties of boundary layer (BL) mixing on tracers in the ocean * 30.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of boundary layer mixing for tracers in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 30.2. Closure Order Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If turbulent BL mixing of tracers, specific order of closure (0, 1, 2.5, 3) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 30.3. Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant BL mixing of tracers, specific coefficient (m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 30.4. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background BL mixing of tracers coefficient, (schema and value in m2/s - may by none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure - TKE" # "Turbulent closure - KPP" # "Turbulent closure - Mellor-Yamada" # "Turbulent closure - Bulk Mixed Layer" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum *Properties of boundary layer (BL) mixing on momentum in the ocean * 31.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of boundary layer mixing for momentum in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 31.2. Closure Order Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If turbulent BL mixing of momentum, specific order of closure (0, 1, 2.5, 3) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 31.3. Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant BL mixing of momentum, specific coefficient (m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 31.4. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background BL mixing of momentum coefficient, (schema and value in m2/s - may by none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Non-penetrative convective adjustment" # "Enhanced vertical diffusion" # "Included in turbulence closure" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 32. Vertical Physics --&gt; Interior Mixing --&gt; Details *Properties of interior mixing in the ocean * 32.1. Convection Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of vertical convection in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 32.2. Tide Induced Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how tide induced mixing is modelled (barotropic, baroclinic, none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 32.3. Double Diffusion Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there double diffusion End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 32.4. Shear Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there interior shear mixing End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure / TKE" # "Turbulent closure - Mellor-Yamada" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers *Properties of interior mixing on tracers in the ocean * 33.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of interior mixing for tracers in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 33.2. Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant interior mixing of tracers, specific coefficient (m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 33.3. Profile Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the background interior mixing using a vertical profile for tracers (i.e is NOT constant) ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 33.4. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background interior mixing of tracers coefficient, (schema and value in m2/s - may by none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure / TKE" # "Turbulent closure - Mellor-Yamada" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum *Properties of interior mixing on momentum in the ocean * 34.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of interior mixing for momentum in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 34.2. Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant interior mixing of momentum, specific coefficient (m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 34.3. Profile Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the background interior mixing using a vertical profile for momentum (i.e is NOT constant) ? 
End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 34.4. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background interior mixing of momentum coefficient, (schema and value in m2/s - may by none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 35. Uplow Boundaries --&gt; Free Surface Properties of free surface in ocean 35.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of free surface in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Linear implicit" # "Linear filtered" # "Linear semi-explicit" # "Non-linear implicit" # "Non-linear filtered" # "Non-linear semi-explicit" # "Fully explicit" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 35.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Free surface scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 35.3. Embeded Seaice Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the sea-ice embeded in the ocean model (instead of levitating) ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 36. Uplow Boundaries --&gt; Bottom Boundary Layer Properties of bottom boundary layer in ocean 36.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of bottom boundary layer in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Diffusive" # "Acvective" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 36.2. Type Of Bbl Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of bottom boundary layer in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 36.3. Lateral Mixing Coef Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 36.4. 
Sill Overflow Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe any specific treatment of sill overflows End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37. Boundary Forcing Ocean boundary forcing 37.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of boundary forcing in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.2. Surface Pressure Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.3. Momentum Flux Correction Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.4. Tracers Flux Correction Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.5. Wave Effects Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how wave effects are modelled at ocean surface. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.6. River Runoff Budget Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how river runoff from land surface is routed to ocean and any global adjustment done. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.7. Geothermal Heating Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how geothermal heating is present at ocean bottom. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Linear" # "Non-linear" # "Non-linear (drag function of speed of tides)" # "Constant drag coefficient" # "None" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 38. 
Boundary Forcing --&gt; Momentum --&gt; Bottom Friction Properties of momentum bottom friction in ocean 38.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of momentum bottom friction in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Free-slip" # "No-slip" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction Properties of momentum lateral friction in ocean 39.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of momentum lateral friction in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "1 extinction depth" # "2 extinction depth" # "3 extinction depth" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration Properties of sunlight penetration scheme in ocean 40.1. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of sunlight penetration scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 40.2. Ocean Colour Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the ocean sunlight penetration scheme ocean colour dependent ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 40.3. Extinction Depth Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe and list extinctions depths for sunlight penetration scheme (if applicable). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Freshwater flux" # "Virtual salt flux" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing Properties of surface fresh water forcing in ocean 41.1. From Atmopshere Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of surface fresh water forcing from atmos in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Freshwater flux" # "Virtual salt flux" # "Real salt flux" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 41.2. From Sea Ice Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of surface fresh water forcing from sea-ice in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 41.3. Forced Mode Restoring Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of surface salinity restoring in forced mode (OMIP) End of explanation
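The cells in this ocean-model questionnaire are unfilled ES-DOC templates: every DOC.set_value is left as a TODO. Purely for illustration — not as a description of any actual model — a completed cell might look like the sketch below, reusing a property id and a value taken verbatim from the valid choices listed above; the chosen value is hypothetical.
# Hypothetical filled-in cell (illustrative value only, picked from the listed valid choices)
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
DOC.set_value("Freshwater flux")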
5,221
Given the following text description, write Python code to implement the functionality described below step by step Description: In this notebook we'll look at interfacing between the composability and ability to generate complex visualizations that HoloViews provides, the power of pandas library dataframes for manipulating tabular data, and the great-looking statistical plots and analyses provided by the Seaborn library. This tutorial assumes you're already familiar with some of the core concepts of HoloViews, which are explained in the other Tutorials. Step1: We can now select static and animation backends Step2: Visualizing Distributions of Data <a id='Histogram'></a> If import seaborn succeeds, HoloViews will provide a number of additional Element types, including Distribution, Bivariate, TimeSeries, Regression, and DFrame (a Seaborn-visualizable version of the DFrame Element class provided when only pandas is available). We'll start by generating a number of Distribution Elements containing normal distributions with different means and standard deviations and overlaying them. Using the %%opts magic you can specify specific plot and style options as usual; here we deactivate the default histogram and shade the kernel density estimate Step3: Thanks to Seaborn you can choose to plot your distribution as histograms, kernel density estimates, and/or rug plots Step4: We can also visualize the same data with Bivariate distributions Step5: This plot type also has the option of enabling a joint plot with marginal distribution along each axis, and the kind option lets you control whether to visualize the distribution as a scatter, reg, resid, kde or hex plot Step6: Working with TimeSeries data Next let's take a look at the TimeSeries View type, which allows you to visualize statistical time-series data. TimeSeries data can take the form of a number of observations of some dependent variable at multiple timepoints. By controlling the plot and style option the data can be visualized in a number of ways, including confidence intervals, error bars, traces or scatter points. Let's begin by defining a function to generate sine-wave time courses with varying phase and noise levels. Step7: Now we can create HoloMaps of sine and cosine curves with varying levels of observational and independent error. Step8: First let's visualize the sine stack with a confidence interval Step9: And the cosine stack with error bars Step10: Since the %%opts cell magic has applied the style to each object individually, we can now overlay the two with different visualization styles in the same plot Step11: Working with pandas DataFrames In order to make this a little more interesting, we can use some of the real-world datasets provided with the Seaborn library. The holoviews DFrame object can be used to wrap the Seaborn-generated pandas dataframes like this Step12: Iris Data <a id='Box'></a> Let's visualize the relationship between sepal length and width in the Iris flower dataset. Here we can make use of some of the inbuilt Seaborn plot types, starting with a pairplot that can plot each variable in a dataset against each other variable. We can customize this plot further by passing arguments via the style options, to define what plot types the pairplot will use and define the dimension to which we will apply the hue option. 
Step13: When working with a DFrame object directly, you can select particular columns of your DFrame to visualize by supplying x and y parameters corresponding to the Dimensions or columns you want to visualize. Here we'll visualize the sepal_width and sepal_length by species as a box plot and violin plot, respectively. By switching the x and y arguments we can draw either a vertical or horizontal plot. Step14: Titanic passenger data <a id='Correlation'></a> The Titanic passenger data is a truly large dataset, so we can make use of some of the more advanced features of Seaborn and pandas. Above we saw the usage of a pairgrid, which allows you to quickly compare each variable in your dataset. HoloViews also supports Seaborn-based FacetGrids. The FacetGrid specification is simply passed via the style options, where the map keyword should be supplied as a tuple of the plotting function to use and the Dimensions to place on the x axis and y axis. You may also specify the Dimensions to lay out along the rows and columns of the plot, and the hue groups Step15: FacetGrids support most Seaborn and matplotlib plot types
Python Code: import itertools import numpy as np import pandas as pd import seaborn as sb import holoviews as hv np.random.seed(9221999) Explanation: In this notebook we'll look at interfacing between the composability and ability to generate complex visualizations that HoloViews provides, the power of pandas library dataframes for manipulating tabular data, and the great-looking statistical plots and analyses provided by the Seaborn library. This tutorial assumes you're already familiar with some of the core concepts of HoloViews, which are explained in the other Tutorials. End of explanation hv.notebook_extension() %output holomap='widgets' fig='svg' Explanation: We can now select static and animation backends: End of explanation %%opts Distribution (hist=False kde_kws=dict(shade=True)) d1 = 25 * np.random.randn(500) + 450 d2 = 45 * np.random.randn(500) + 540 d3 = 55 * np.random.randn(500) + 590 hv.Distribution(d1, label='Blue') *\ hv.Distribution(d2, label='Red') *\ hv.Distribution(d3, label='Yellow') Explanation: Visualizing Distributions of Data <a id='Histogram'></a> If import seaborn succeeds, HoloViews will provide a number of additional Element types, including Distribution, Bivariate, TimeSeries, Regression, and DFrame (a Seaborn-visualizable version of the DFrame Element class provided when only pandas is available). We'll start by generating a number of Distribution Elements containing normal distributions with different means and standard deviations and overlaying them. Using the %%opts magic you can specify specific plot and style options as usual; here we deactivate the default histogram and shade the kernel density estimate: End of explanation %%opts Distribution (rug=True kde_kws={'color':'indianred','linestyle':'--'}) hv.Distribution(np.random.randn(10), vdims=['Activity']) Explanation: Thanks to Seaborn you can choose to plot your distribution as histograms, kernel density estimates, and/or rug plots: End of explanation %%opts Bivariate (shade=True) Bivariate.A (cmap='Blues') Bivariate.B (cmap='Reds') Bivariate.C (cmap='Greens') hv.Bivariate(np.array([d1, d2]).T, group='A') +\ hv.Bivariate(np.array([d1, d3]).T, group='B') +\ hv.Bivariate(np.array([d2, d3]).T, group='C') Explanation: We can also visualize the same data with Bivariate distributions: End of explanation %%opts Bivariate [joint=True] (kind='kde' cmap='Blues') hv.Bivariate(np.array([d1, d2]).T, group='A') Explanation: This plot type also has the option of enabling a joint plot with marginal distribution along each axis, and the kind option lets you control whether to visualize the distribution as a scatter, reg, resid, kde or hex plot: End of explanation def sine_wave(n_x, obs_err_sd=1.5, tp_err_sd=.3, phase=0): x = np.linspace(0+phase, (n_x - 1) / 2+phase, n_x) y = np.sin(x) + np.random.normal(0, obs_err_sd) + np.random.normal(0, tp_err_sd, n_x) return y Explanation: Working with TimeSeries data Next let's take a look at the TimeSeries View type, which allows you to visualize statistical time-series data. TimeSeries data can take the form of a number of observations of some dependent variable at multiple timepoints. By controlling the plot and style option the data can be visualized in a number of ways, including confidence intervals, error bars, traces or scatter points. Let's begin by defining a function to generate sine-wave time courses with varying phase and noise levels. 
End of explanation sine_stack = hv.HoloMap(kdims=['Observation error','Random error']) cos_stack = hv.HoloMap(kdims=['Observation error', 'Random error']) for oe, te in itertools.product(np.linspace(0.5,2,4), np.linspace(0.5,2,4)): sines = np.array([sine_wave(31, oe, te) for _ in range(20)]) sine_stack[(oe, te)] = hv.TimeSeries(sines, label='Sine', group='Activity', kdims=['Time', 'Observation']) cosines = np.array([sine_wave(31, oe, te, phase=np.pi) for _ in range(20)]) cos_stack[(oe, te)] = hv.TimeSeries(cosines, group='Activity',label='Cosine', kdims=['Time', 'Observation']) Explanation: Now we can create HoloMaps of sine and cosine curves with varying levels of observational and independent error. End of explanation %%opts TimeSeries (ci=95 color='indianred') sine_stack Explanation: First let's visualize the sine stack with a confidence interval: End of explanation %%opts TimeSeries (err_style='ci_bars') cos_stack.last Explanation: And the cosine stack with error bars: End of explanation cos_stack.last * sine_stack.last Explanation: Since the %%opts cell magic has applied the style to each object individually, we can now overlay the two with different visualization styles in the same plot: End of explanation iris = hv.DFrame(sb.load_dataset("iris")) tips = hv.DFrame(sb.load_dataset("tips")) titanic = hv.DFrame(sb.load_dataset("titanic")) %output fig='png' dpi=100 size=150 Explanation: Working with pandas DataFrames In order to make this a little more interesting, we can use some of the real-world datasets provided with the Seaborn library. The holoviews DFrame object can be used to wrap the Seaborn-generated pandas dataframes like this: End of explanation %%opts DFrame (diag_kind='kde' kind='reg' hue='species') iris.clone(label="Iris Data", plot_type='pairplot') Explanation: Iris Data <a id='Box'></a> Let's visualize the relationship between sepal length and width in the Iris flower dataset. Here we can make use of some of the inbuilt Seaborn plot types, starting with a pairplot that can plot each variable in a dataset against each other variable. We can customize this plot further by passing arguments via the style options, to define what plot types the pairplot will use and define the dimension to which we will apply the hue option. End of explanation %%opts DFrame [show_grid=False] iris.clone(x='sepal_width', y='species', plot_type='boxplot') +\ iris.clone(x='species', y='sepal_width', plot_type='violinplot') Explanation: When working with a DFrame object directly, you can select particular columns of your DFrame to visualize by supplying x and y parameters corresponding to the Dimensions or columns you want visualize. Here we'll visualize the sepal_width and sepal_length by species as a box plot and violin plot, respectively. By switching the x and y arguments we can draw either a vertical or horizontal plot. End of explanation %%opts DFrame (map=('barplot', 'alive', 'age') col='class' row='sex' hue='pclass' aspect=1.0) titanic.clone(plot_type='facetgrid') Explanation: Titanic passenger data <a id='Correlation'></a> The Titanic passenger data is a truly large dataset, so we can make use of some of the more advanced features of Seaborn and pandas. Above we saw the usage of a pairgrid, which allows you to quickly compare each variable in your dataset. HoloViews also support Seaborn based FacetGrids. 
The FacetGrid specification is simply passed via the style options, where the map keyword should be supplied as a tuple of the plotting function to use and the Dimensions to place on the x axis and y axis. You may also specify the Dimensions to lay out along the rows and columns of the plot, and the hue groups: End of explanation %%opts DFrame (map=('regplot', 'age', 'fare') col='class' hue='class') titanic.clone(plot_type='facetgrid') Explanation: FacetGrids support most Seaborn and matplotlib plot types: End of explanation
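The facet-grid cells above pass the plotting function and dimensions through HoloViews' style options; for comparison, a rough sketch of what the equivalent call looks like in plain Seaborn (assuming the same titanic dataframe reloaded via sb.load_dataset, mirroring map=('regplot', 'age', 'fare') col='class' hue='class') might be:
# Approximate plain-Seaborn equivalent of the facetgrid spec above (hypothetical sketch)
g = sb.FacetGrid(sb.load_dataset("titanic"), col="class", hue="class")
g.map(sb.regplot, "age", "fare")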
5,222
Given the following text description, write Python code to implement the functionality described below step by step Description: Jump_to notebook introduction in lesson 10 video Early stopping Better callback cancellation Jump_to lesson 10 video Step1: Other callbacks Step2: LR Finder NB Step3: NB Step4: Export
Python Code: x_train,y_train,x_valid,y_valid = get_data() train_ds,valid_ds = Dataset(x_train, y_train),Dataset(x_valid, y_valid) nh,bs = 50,512 c = y_train.max().item()+1 loss_func = F.cross_entropy data = DataBunch(*get_dls(train_ds, valid_ds, bs), c) #export class Callback(): _order=0 def set_runner(self, run): self.run=run def __getattr__(self, k): return getattr(self.run, k) @property def name(self): name = re.sub(r'Callback$', '', self.__class__.__name__) return camel2snake(name or 'callback') def __call__(self, cb_name): f = getattr(self, cb_name, None) if f and f(): return True return False class TrainEvalCallback(Callback): def begin_fit(self): self.run.n_epochs=0. self.run.n_iter=0 def after_batch(self): if not self.in_train: return self.run.n_epochs += 1./self.iters self.run.n_iter += 1 def begin_epoch(self): self.run.n_epochs=self.epoch self.model.train() self.run.in_train=True def begin_validate(self): self.model.eval() self.run.in_train=False class CancelTrainException(Exception): pass class CancelEpochException(Exception): pass class CancelBatchException(Exception): pass #export class Runner(): def __init__(self, cbs=None, cb_funcs=None): self.in_train = False cbs = listify(cbs) for cbf in listify(cb_funcs): cb = cbf() setattr(self, cb.name, cb) cbs.append(cb) self.stop,self.cbs = False,[TrainEvalCallback()]+cbs @property def opt(self): return self.learn.opt @property def model(self): return self.learn.model @property def loss_func(self): return self.learn.loss_func @property def data(self): return self.learn.data def one_batch(self, xb, yb): try: self.xb,self.yb = xb,yb self('begin_batch') self.pred = self.model(self.xb) self('after_pred') self.loss = self.loss_func(self.pred, self.yb) self('after_loss') if not self.in_train: return self.loss.backward() self('after_backward') self.opt.step() self('after_step') self.opt.zero_grad() except CancelBatchException: self('after_cancel_batch') finally: self('after_batch') def all_batches(self, dl): self.iters = len(dl) try: for xb,yb in dl: self.one_batch(xb, yb) except CancelEpochException: self('after_cancel_epoch') def fit(self, epochs, learn): self.epochs,self.learn,self.loss = epochs,learn,tensor(0.) 
try: for cb in self.cbs: cb.set_runner(self) self('begin_fit') for epoch in range(epochs): self.epoch = epoch if not self('begin_epoch'): self.all_batches(self.data.train_dl) with torch.no_grad(): if not self('begin_validate'): self.all_batches(self.data.valid_dl) self('after_epoch') except CancelTrainException: self('after_cancel_train') finally: self('after_fit') self.learn = None def __call__(self, cb_name): res = False for cb in sorted(self.cbs, key=lambda x: x._order): res = cb(cb_name) or res return res learn = create_learner(get_model, loss_func, data) class TestCallback(Callback): _order=1 def after_step(self): print(self.n_iter) if self.n_iter>=10: raise CancelTrainException() run = Runner(cb_funcs=TestCallback) run.fit(3, learn) Explanation: Jump_to notebook introduction in lesson 10 video Early stopping Better callback cancellation Jump_to lesson 10 video End of explanation #export class AvgStatsCallback(Callback): def __init__(self, metrics): self.train_stats,self.valid_stats = AvgStats(metrics,True),AvgStats(metrics,False) def begin_epoch(self): self.train_stats.reset() self.valid_stats.reset() def after_loss(self): stats = self.train_stats if self.in_train else self.valid_stats with torch.no_grad(): stats.accumulate(self.run) def after_epoch(self): print(self.train_stats) print(self.valid_stats) class Recorder(Callback): def begin_fit(self): self.lrs = [[] for _ in self.opt.param_groups] self.losses = [] def after_batch(self): if not self.in_train: return for pg,lr in zip(self.opt.param_groups,self.lrs): lr.append(pg['lr']) self.losses.append(self.loss.detach().cpu()) def plot_lr (self, pgid=-1): plt.plot(self.lrs[pgid]) def plot_loss(self, skip_last=0): plt.plot(self.losses[:len(self.losses)-skip_last]) def plot(self, skip_last=0, pgid=-1): losses = [o.item() for o in self.losses] lrs = self.lrs[pgid] n = len(losses)-skip_last plt.xscale('log') plt.plot(lrs[:n], losses[:n]) class ParamScheduler(Callback): _order=1 def __init__(self, pname, sched_funcs): self.pname,self.sched_funcs = pname,sched_funcs def begin_fit(self): if not isinstance(self.sched_funcs, (list,tuple)): self.sched_funcs = [self.sched_funcs] * len(self.opt.param_groups) def set_param(self): assert len(self.opt.param_groups)==len(self.sched_funcs) for pg,f in zip(self.opt.param_groups,self.sched_funcs): pg[self.pname] = f(self.n_epochs/self.epochs) def begin_batch(self): if self.in_train: self.set_param() Explanation: Other callbacks End of explanation class LR_Find(Callback): _order=1 def __init__(self, max_iter=100, min_lr=1e-6, max_lr=10): self.max_iter,self.min_lr,self.max_lr = max_iter,min_lr,max_lr self.best_loss = 1e9 def begin_batch(self): if not self.in_train: return pos = self.n_iter/self.max_iter lr = self.min_lr * (self.max_lr/self.min_lr) ** pos for pg in self.opt.param_groups: pg['lr'] = lr def after_step(self): if self.n_iter>=self.max_iter or self.loss>self.best_loss*10: raise CancelTrainException() if self.loss < self.best_loss: self.best_loss = self.loss Explanation: LR Finder NB: You may want to also add something that saves the model before running this, and loads it back after running - otherwise you'll lose your weights! Jump_to lesson 10 video End of explanation learn = create_learner(get_model, loss_func, data) run = Runner(cb_funcs=[LR_Find, Recorder]) run.fit(2, learn) run.recorder.plot(skip_last=5) run.recorder.plot_lr() Explanation: NB: In fastai we also use exponential smoothing on the loss. For that reason we check for best_loss*3 instead of best_loss*10. 
End of explanation !python notebook2script.py 05b_early_stopping.ipynb Explanation: Export End of explanation
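The LR Finder note above recommends saving the model before running the finder and restoring it afterwards, since the finder deliberately drives the loss up until training is cancelled. A minimal sketch of that idea, assuming learn.model is an ordinary PyTorch nn.Module as produced by create_learner in the cells above (the snapshot/restore pattern is the only addition; the optimizer state is left untouched in this sketch):
import copy
# Snapshot the weights before the LR finder mutates them ...
saved_state = copy.deepcopy(learn.model.state_dict())
run = Runner(cb_funcs=[LR_Find, Recorder])
run.fit(2, learn)
run.recorder.plot(skip_last=5)
# ... and restore them afterwards so real training starts from the original weights.
learn.model.load_state_dict(saved_state)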
5,223
Given the following text description, write Python code to implement the functionality described below step by step Description: Bayesian Rolling Regression in PyMC3 Author Step1: Let's load the prices of GDX and GLD. Step2: Plotting the prices over time suggests a strong correlation. However, the correlation seems to change over time. Step3: A naive approach would be to estimate a linear model and ignore the time domain. Step4: The posterior predictive plot shows how bad the fit is. Step5: Rolling regression Next, we will build an improved model that will allow for changes in the regression coefficients over time. Specifically, we will assume that intercept and slope follow a random-walk through time. That idea is similar to the stochastic volatility model. $$ \alpha_t \sim \mathcal{N}(\alpha_{t-1}, \sigma_\alpha^2) $$ $$ \beta_t \sim \mathcal{N}(\beta_{t-1}, \sigma_\beta^2) $$ First, let's define the hyper-priors for $\sigma_\alpha^2$ and $\sigma_\beta^2$. This parameter can be interpreted as the volatility in the regression coefficients. Step6: Next, we define the regression parameters that are not a single random variable but rather a random vector with the above stated dependence structure. So as not to fit a coefficient to a single data point, we will chunk the data into bins of 50 and apply the same coefficients to all data points in a single bin. Step7: Perform the regression given coefficients and data and link to the data via the likelihood. Step8: Inference. Despite this being quite a complex model, NUTS handles it well. Step9: Analysis of results $\alpha$, the intercept, does not seem to change over time. Step10: However, the slope does. Step11: The posterior predictive plot shows that we capture the change in regression over time much better.
Python Code: %matplotlib inline import pandas as pd from pandas_datareader import data import numpy as np import pymc3 as pm import matplotlib.pyplot as plt Explanation: Bayesian Rolling Regression in PyMC3 Author: Thomas Wiecki Pairs trading is a famous technique in algorithmic trading that plays two stocks against each other. For this to work, stocks must be correlated (cointegrated). One common example is the price of gold (GLD) and the price of gold mining operations (GDX). End of explanation prices = data.YahooDailyReader(symbols=['GLD', 'GDX'], end='2014-8-1').read().loc['Adj Close', :, :].iloc[:1000] prices.head() Explanation: Lets load the prices of GDX and GLD. End of explanation fig = plt.figure(figsize=(9, 6)) ax = fig.add_subplot(111, xlabel='Price GDX in \$', ylabel='Price GLD in \$') colors = np.linspace(0.1, 1, len(prices)) mymap = plt.get_cmap("winter") sc = ax.scatter(prices.GDX, prices.GLD, c=colors, cmap=mymap, lw=0) cb = plt.colorbar(sc) cb.ax.set_yticklabels([str(p.date()) for p in prices[::len(prices)//10].index]); Explanation: Plotting the prices over time suggests a strong correlation. However, the correlation seems to change over time. End of explanation with pm.Model() as model_reg: pm.glm.glm('GLD ~ GDX', prices) trace_reg = pm.sample(2000) Explanation: A naive approach would be to estimate a linear model and ignore the time domain. End of explanation fig = plt.figure(figsize=(9, 6)) ax = fig.add_subplot(111, xlabel='Price GDX in \$', ylabel='Price GLD in \$', title='Posterior predictive regression lines') sc = ax.scatter(prices.GDX, prices.GLD, c=colors, cmap=mymap, lw=0) pm.glm.plot_posterior_predictive(trace_reg[100:], samples=100, label='posterior predictive regression lines', lm=lambda x, sample: sample['Intercept'] + sample['GDX'] * x, eval=np.linspace(prices.GDX.min(), prices.GDX.max(), 100)) cb = plt.colorbar(sc) cb.ax.set_yticklabels([str(p.date()) for p in prices[::len(prices)//10].index]); ax.legend(loc=0); Explanation: The posterior predictive plot shows how bad the fit is. End of explanation model_randomwalk = pm.Model() with model_randomwalk: # std of random walk, best sampled in log space. sigma_alpha = pm.Exponential('sigma_alpha', 1./.02, testval = .1) sigma_beta = pm.Exponential('sigma_beta', 1./.02, testval = .1) Explanation: Rolling regression Next, we will build an improved model that will allow for changes in the regression coefficients over time. Specifically, we will assume that intercept and slope follow a random-walk through time. That idea is similar to the stochastic volatility model. $$ \alpha_t \sim \mathcal{N}(\alpha_{t-1}, \sigma_\alpha^2) $$ $$ \beta_t \sim \mathcal{N}(\beta_{t-1}, \sigma_\beta^2) $$ First, lets define the hyper-priors for $\sigma_\alpha^2$ and $\sigma_\beta^2$. This parameter can be interpreted as the volatility in the regression coefficients. 
End of explanation import theano.tensor as T # To make the model simpler, we will apply the same coefficient for 50 data points at a time subsample_alpha = 50 subsample_beta = 50 with model_randomwalk: alpha = pm.GaussianRandomWalk('alpha', sigma_alpha**-2, shape=len(prices) // subsample_alpha) beta = pm.GaussianRandomWalk('beta', sigma_beta**-2, shape=len(prices) // subsample_beta) # Make coefficients have the same length as prices alpha_r = T.repeat(alpha, subsample_alpha) beta_r = T.repeat(beta, subsample_beta) Explanation: Next, we define the regression parameters that are not a single random variable but rather a random vector with the above stated dependence structure. So as not to fit a coefficient to a single data point, we will chunk the data into bins of 50 and apply the same coefficients to all data points in a single bin. End of explanation with model_randomwalk: # Define regression regression = alpha_r + beta_r * prices.GDX.values # Assume prices are Normally distributed, the mean comes from the regression. sd = pm.Uniform('sd', 0, 20) likelihood = pm.Normal('y', mu=regression, sd=sd, observed=prices.GLD.values) Explanation: Perform the regression given coefficients and data and link to the data via the likelihood. End of explanation from scipy import optimize with model_randomwalk: # First optimize random walk start = pm.find_MAP(vars=[alpha, beta], fmin=optimize.fmin_l_bfgs_b) # Sample step = pm.NUTS(scaling=start) trace_rw = pm.sample(2000, step, start=start) Explanation: Inference. Despite this being quite a complex model, NUTS handles it wells. End of explanation fig = plt.figure(figsize=(8, 6)) ax = plt.subplot(111, xlabel='time', ylabel='alpha', title='Change of alpha over time.') ax.plot(trace_rw[-1000:][alpha].T, 'r', alpha=.05); ax.set_xticklabels([str(p.date()) for p in prices[::len(prices)//5].index]); Explanation: Analysis of results $\alpha$, the intercept, does not seem to change over time. End of explanation fig = plt.figure(figsize=(8, 6)) ax = fig.add_subplot(111, xlabel='time', ylabel='beta', title='Change of beta over time') ax.plot(trace_rw[-1000:][beta].T, 'b', alpha=.05); ax.set_xticklabels([str(p.date()) for p in prices[::len(prices)//5].index]); Explanation: However, the slope does. End of explanation fig = plt.figure(figsize=(8, 6)) ax = fig.add_subplot(111, xlabel='Price GDX in \$', ylabel='Price GLD in \$', title='Posterior predictive regression lines') colors = np.linspace(0.1, 1, len(prices)) colors_sc = np.linspace(0.1, 1, len(trace_rw[-500::10]['alpha'].T)) mymap = plt.get_cmap('winter') mymap_sc = plt.get_cmap('winter') xi = np.linspace(prices.GDX.min(), prices.GDX.max(), 50) for i, (alpha, beta) in enumerate(zip(trace_rw[-500::10]['alpha'].T, trace_rw[-500::10]['beta'].T)): for a, b in zip(alpha, beta): ax.plot(xi, a + b*xi, alpha=.05, lw=1, c=mymap_sc(colors_sc[i])) sc = ax.scatter(prices.GDX, prices.GLD, label='data', cmap=mymap, c=colors) cb = plt.colorbar(sc) cb.ax.set_yticklabels([str(p.date()) for p in prices[::len(prices)//10].index]); Explanation: The posterior predictive plot shows that we capture the change in regression over time much better. Note that we should have used returns instead of prices. The model would still work the same, but the visualisations would not be quite as clear. End of explanation
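Following up on the closing remark that returns would have been a better modelling target than prices, here is a small illustrative sketch (not part of the original notebook) showing how daily returns could be derived from the same prices DataFrame; the commented lines indicate where they would replace the price series in the random-walk regression above (the random-walk shapes would then need to use len(returns) instead of len(prices)).

returns = prices[['GDX', 'GLD']].pct_change().dropna()
returns.head()

# inside the model, the same structure would then read, e.g.:
#   regression = alpha_r + beta_r * returns.GDX.values
#   likelihood = pm.Normal('y', mu=regression, sd=sd, observed=returns.GLD.values)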
5,224
Given the following text description, write Python code to implement the functionality described below step by step Description: <img src="../Pierian-Data-Logo.PNG"> <br> <strong><center>Copyright 2019. Created by Jose Marcial Portilla.</center></strong> MNIST Code Along with CNN Now that we've seen the results of an artificial neural network model on the <a href='https Step1: Load the MNIST dataset PyTorch makes the MNIST train and test datasets available through <a href='https Step2: Create loaders When working with images, we want relatively small batches; a batch size of 4 is not uncommon. Step3: Define a convolutional model In the previous section we used only fully connected layers, with an input layer of 784 (our flattened 28x28 images), hidden layers of 120 and 84 neurons, and an output size representing 10 possible digits. This time we'll employ two convolutional layers and two pooling layers before feeding data through fully connected hidden layers to our output. The model follows CONV/RELU/POOL/CONV/RELU/POOL/FC/RELU/FC. <div class="alert alert-info"><strong>Let's walk through the steps we're about to take.</strong><br> 1. Extend the base Module class Step4: <div class="alert alert-danger"><strong>This is how the convolution output is passed into the fully connected layers.</strong></div> Now let's run the code. Step5: Including the bias terms for each layer, the total number of parameters being trained is Step6: Define loss function & optimizer Step7: Train the model This time we'll feed the data directly into the model without flattening it first. Step8: Plot the loss and accuracy comparisons Step9: While there may be some overfitting of the training data, there is far less than we saw with the ANN model. Step10: Evaluate Test Data Step11: Recall that our [784,120,84,10] ANN returned an accuracy of 97.25% after 10 epochs. And it used 105,214 parameters to our current 60,074. Display the confusion matrix Step12: Examine the misses We can track the index positions of "missed" predictions, and extract the corresponding image and label. We'll do this in batches to save screen space. Step13: Now that everything is set up, run and re-run the cell below to view all of the missed predictions.<br> Use <kbd>Ctrl+Enter</kbd> to remain on the cell between runs. You'll see a <tt>StopIteration</tt> once all the misses have been seen. Step14: Run a new image through the model We can also pass a single image through the model to obtain a prediction. Pick a number from 0 to 9999, assign it to "x", and we'll use that value to select a number from the MNIST test set.
Python Code: import torch import torch.nn as nn import torch.nn.functional as F from torch.utils.data import DataLoader from torchvision import datasets, transforms from torchvision.utils import make_grid import numpy as np import pandas as pd from sklearn.metrics import confusion_matrix import matplotlib.pyplot as plt %matplotlib inline Explanation: <img src="../Pierian-Data-Logo.PNG"> <br> <strong><center>Copyright 2019. Created by Jose Marcial Portilla.</center></strong> MNIST Code Along with CNN Now that we've seen the results of an artificial neural network model on the <a href='https://en.wikipedia.org/wiki/MNIST_database'>MNIST dataset</a>, let's work the same data with a <a href='https://en.wikipedia.org/wiki/Convolutional_neural_network'>Convolutional Neural Network</a> (CNN). Make sure to watch the theory lectures! You'll want to be comfortable with: * convolutional layers * filters/kernels * pooling * depth, stride and zero-padding Note that in this exercise there is no need to flatten the MNIST data, as a CNN expects 2-dimensional data. Perform standard imports End of explanation transform = transforms.ToTensor() train_data = datasets.MNIST(root='../Data', train=True, download=True, transform=transform) test_data = datasets.MNIST(root='../Data', train=False, download=True, transform=transform) train_data test_data Explanation: Load the MNIST dataset PyTorch makes the MNIST train and test datasets available through <a href='https://pytorch.org/docs/stable/torchvision/index.html'><tt><strong>torchvision</strong></tt></a>. The first time they're called, the datasets will be downloaded onto your computer to the path specified. From that point, torchvision will always look for a local copy before attempting another download. Refer to the previous section for explanations of transformations, batch sizes and <a href='https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader'><tt><strong>DataLoader</strong></tt></a>. End of explanation train_loader = DataLoader(train_data, batch_size=10, shuffle=True) test_loader = DataLoader(test_data, batch_size=10, shuffle=False) Explanation: Create loaders When working with images, we want relatively small batches; a batch size of 4 is not uncommon. End of explanation # Define layers conv1 = nn.Conv2d(1, 6, 3, 1) conv2 = nn.Conv2d(6, 16, 3, 1) # Grab the first MNIST record for i, (X_train, y_train) in enumerate(train_data): break # Create a rank-4 tensor to be passed into the model # (train_loader will have done this already) x = X_train.view(1,1,28,28) print(x.shape) # Perform the first convolution/activation x = F.relu(conv1(x)) print(x.shape) # Run the first pooling layer x = F.max_pool2d(x, 2, 2) print(x.shape) # Perform the second convolution/activation x = F.relu(conv2(x)) print(x.shape) # Run the second pooling layer x = F.max_pool2d(x, 2, 2) print(x.shape) # Flatten the data x = x.view(-1, 5*5*16) print(x.shape) Explanation: Define a convolutional model In the previous section we used only fully connected layers, with an input layer of 784 (our flattened 28x28 images), hidden layers of 120 and 84 neurons, and an output size representing 10 possible digits. This time we'll employ two convolutional layers and two pooling layers before feeding data through fully connected hidden layers to our output. The model follows CONV/RELU/POOL/CONV/RELU/POOL/FC/RELU/FC. <div class="alert alert-info"><strong>Let's walk through the steps we're about to take.</strong><br> 1. 
Extend the base Module class: <tt><font color=black>class ConvolutionalNetwork(nn.Module):<br> &nbsp;&nbsp;&nbsp;&nbsp;def \_\_init\_\_(self):<br> &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;super().\_\_init\_\_()</font></tt><br> 2. Set up the convolutional layers with <a href='https://pytorch.org/docs/stable/nn.html#conv2d'><tt><strong>torch.nn.Conv2d()</strong></tt></a><br><br>The first layer has one input channel (the grayscale color channel). We'll assign 6 output channels for feature extraction. We'll set our kernel size to 3 to make a 3x3 filter, and set the step size to 1.<br> <tt><font color=black>&nbsp;&nbsp;&nbsp;&nbsp;self.conv1 = nn.Conv2d(1, 6, 3, 1)</font></tt><br> The second layer will take our 6 input channels and deliver 16 output channels.<br> <tt><font color=black>&nbsp;&nbsp;&nbsp;&nbsp;self.conv2 = nn.Conv2d(6, 16, 3, 1)</font></tt><br><br> 3. Set up the fully connected layers with <a href='https://pytorch.org/docs/stable/nn.html#linear'><tt><strong>torch.nn.Linear()</strong></tt></a>.<br><br>The input size of (5x5x16) is determined by the effect of our kernels on the input image size. A 3x3 filter applied to a 28x28 image leaves a 1-pixel edge on all four sides. In one layer the size changes from 28x28 to 26x26. We could address this with zero-padding, but since an MNIST image is mostly black at the edges, we should be safe ignoring these pixels. We'll apply the kernel twice, and apply pooling layers twice, so our resulting output will be $\;(((28-2)/2)-2)/2 = 5.5\;$ which rounds down to 5 pixels per side.<br> <tt><font color=black>&nbsp;&nbsp;&nbsp;&nbsp;self.fc1 = nn.Linear(5\*5\*16, 120)</font></tt><br> <tt><font color=black>&nbsp;&nbsp;&nbsp;&nbsp;self.fc2 = nn.Linear(120, 84)</font></tt><br> <tt><font color=black>&nbsp;&nbsp;&nbsp;&nbsp;self.fc3 = nn.Linear(84, 10)</font></tt><br> See below for a more detailed look at this step.<br><br> 4. 
Define the forward method.<br><br>Activations can be applied to the convolutions in one line using <a href='https://pytorch.org/docs/stable/nn.html#id27'><tt><strong>F.relu()</strong></tt></a> and pooling is done using <a href='https://pytorch.org/docs/stable/nn.html#maxpool2d'><tt><strong>F.max_pool2d()</strong></tt></a><br> <tt><font color=black>def forward(self, X):<br> &nbsp;&nbsp;&nbsp;&nbsp;X = F.relu(self.conv1(X))<br> &nbsp;&nbsp;&nbsp;&nbsp;X = F.max_pool2d(X, 2, 2)<br> &nbsp;&nbsp;&nbsp;&nbsp;X = F.relu(self.conv2(X))<br> &nbsp;&nbsp;&nbsp;&nbsp;X = F.max_pool2d(X, 2, 2)<br> </font></tt>Flatten the data for the fully connected layers:<br><tt><font color=black> &nbsp;&nbsp;&nbsp;&nbsp;X = X.view(-1, 5\*5\*16)<br> &nbsp;&nbsp;&nbsp;&nbsp;X = F.relu(self.fc1(X))<br> &nbsp;&nbsp;&nbsp;&nbsp;X = self.fc2(X)<br> &nbsp;&nbsp;&nbsp;&nbsp;return F.log_softmax(X, dim=1)</font></tt> </div> <div class="alert alert-danger"><strong>Breaking down the convolutional layers</strong> (this code is for illustration purposes only.)</div> End of explanation class ConvolutionalNetwork(nn.Module): def __init__(self): super().__init__() self.conv1 = nn.Conv2d(1, 6, 3, 1) self.conv2 = nn.Conv2d(6, 16, 3, 1) self.fc1 = nn.Linear(5*5*16, 120) self.fc2 = nn.Linear(120, 84) self.fc3 = nn.Linear(84,10) def forward(self, X): X = F.relu(self.conv1(X)) X = F.max_pool2d(X, 2, 2) X = F.relu(self.conv2(X)) X = F.max_pool2d(X, 2, 2) X = X.view(-1, 5*5*16) X = F.relu(self.fc1(X)) X = F.relu(self.fc2(X)) X = self.fc3(X) return F.log_softmax(X, dim=1) torch.manual_seed(42) model = ConvolutionalNetwork() model Explanation: <div class="alert alert-danger"><strong>This is how the convolution output is passed into the fully connected layers.</strong></div> Now let's run the code. End of explanation def count_parameters(model): params = [p.numel() for p in model.parameters() if p.requires_grad] for item in params: print(f'{item:>6}') print(f'______\n{sum(params):>6}') count_parameters(model) Explanation: Including the bias terms for each layer, the total number of parameters being trained is:<br> $\quad\begin{split}(1\times6\times3\times3)+6+(6\times16\times3\times3)+16+(400\times120)+120+(120\times84)+84+(84\times10)+10 &=\ 54+6+864+16+48000+120+10080+84+840+10 &= 60,074\end{split}$<br> End of explanation criterion = nn.CrossEntropyLoss() optimizer = torch.optim.Adam(model.parameters(), lr=0.001) Explanation: Define loss function & optimizer End of explanation import time start_time = time.time() epochs = 5 train_losses = [] test_losses = [] train_correct = [] test_correct = [] for i in range(epochs): trn_corr = 0 tst_corr = 0 # Run the training batches for b, (X_train, y_train) in enumerate(train_loader): b+=1 # Apply the model y_pred = model(X_train) # we don't flatten X-train here loss = criterion(y_pred, y_train) # Tally the number of correct predictions predicted = torch.max(y_pred.data, 1)[1] batch_corr = (predicted == y_train).sum() trn_corr += batch_corr # Update parameters optimizer.zero_grad() loss.backward() optimizer.step() # Print interim results if b%600 == 0: print(f'epoch: {i:2} batch: {b:4} [{10*b:6}/60000] loss: {loss.item():10.8f} \ accuracy: {trn_corr.item()*100/(10*b):7.3f}%') train_losses.append(loss) train_correct.append(trn_corr) # Run the testing batches with torch.no_grad(): for b, (X_test, y_test) in enumerate(test_loader): # Apply the model y_val = model(X_test) # Tally the number of correct predictions predicted = torch.max(y_val.data, 1)[1] tst_corr += (predicted == y_test).sum() loss = 
criterion(y_val, y_test) test_losses.append(loss) test_correct.append(tst_corr) print(f'\nDuration: {time.time() - start_time:.0f} seconds') # print the time elapsed Explanation: Train the model This time we'll feed the data directly into the model without flattening it first. End of explanation plt.plot(train_losses, label='training loss') plt.plot(test_losses, label='validation loss') plt.title('Loss at the end of each epoch') plt.legend(); test_losses Explanation: Plot the loss and accuracy comparisons End of explanation plt.plot([t/600 for t in train_correct], label='training accuracy') plt.plot([t/100 for t in test_correct], label='validation accuracy') plt.title('Accuracy at the end of each epoch') plt.legend(); Explanation: While there may be some overfitting of the training data, there is far less than we saw with the ANN model. End of explanation # Extract the data all at once, not in batches test_load_all = DataLoader(test_data, batch_size=10000, shuffle=False) with torch.no_grad(): correct = 0 for X_test, y_test in test_load_all: y_val = model(X_test) # we don't flatten the data this time predicted = torch.max(y_val,1)[1] correct += (predicted == y_test).sum() print(f'Test accuracy: {correct.item()}/{len(test_data)} = {correct.item()*100/(len(test_data)):7.3f}%') Explanation: Evaluate Test Data End of explanation # print a row of values for reference np.set_printoptions(formatter=dict(int=lambda x: f'{x:4}')) print(np.arange(10).reshape(1,10)) print() # print the confusion matrix print(confusion_matrix(predicted.view(-1), y_test.view(-1))) Explanation: Recall that our [784,120,84,10] ANN returned an accuracy of 97.25% after 10 epochs. And it used 105,214 parameters to our current 60,074. Display the confusion matrix End of explanation misses = np.array([]) for i in range(len(predicted.view(-1))): if predicted[i] != y_test[i]: misses = np.append(misses,i).astype('int64') # Display the number of misses len(misses) # Display the first 10 index positions misses[:10] # Set up an iterator to feed batched rows r = 12 # row size row = iter(np.array_split(misses,len(misses)//r+1)) Explanation: Examine the misses We can track the index positions of "missed" predictions, and extract the corresponding image and label. We'll do this in batches to save screen space. End of explanation nextrow = next(row) print("Index:", nextrow) print("Label:", y_test.index_select(0,torch.tensor(nextrow)).numpy()) print("Guess:", predicted.index_select(0,torch.tensor(nextrow)).numpy()) images = X_test.index_select(0,torch.tensor(nextrow)) im = make_grid(images, nrow=r) plt.figure(figsize=(10,4)) plt.imshow(np.transpose(im.numpy(), (1, 2, 0))); Explanation: Now that everything is set up, run and re-run the cell below to view all of the missed predictions.<br> Use <kbd>Ctrl+Enter</kbd> to remain on the cell between runs. You'll see a <tt>StopIteration</tt> once all the misses have been seen. End of explanation x = 2019 plt.figure(figsize=(1,1)) plt.imshow(test_data[x][0].reshape((28,28)), cmap="gist_yarg"); model.eval() with torch.no_grad(): new_pred = model(test_data[x][0].view(1,1,28,28)).argmax() print("Predicted value:",new_pred.item()) Explanation: Run a new image through the model We can also pass a single image through the model to obtain a prediction. Pick a number from 0 to 9999, assign it to "x", and we'll use that value to select a number from the MNIST test set. End of explanation
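The walkthrough above derives the 5x5 feature-map size by hand from (((28-2)/2)-2)/2. A tiny helper (an illustrative addition, not from the original lesson) makes that arithmetic reusable for other image or kernel sizes:

def conv_pool_output_size(size, kernel=3, pool=2, layers=2):
    # an unpadded stride-1 convolution loses (kernel-1) pixels, then pooling divides by the pool size
    for _ in range(layers):
        size = (size - (kernel - 1)) // pool
    return size

print(conv_pool_output_size(28))   # 5 -> matches the 5*5*16 input size of self.fc1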
5,225
Given the following text description, write Python code to implement the functionality described below step by step Description: Sentiment Analysis for Twitter Overview This tutorial is going to introduce some simple tools for detecting sentiment in Tweets. We will be using a set of tools called the Natural Language Toolkit (NLTK). This is collection of software written in the Python programming language. An important design goal behind Python is that it should be easy to read and fun to use, so well-suited for beginners. A similar motivation inspired NLTK Step1: Some of the cells will contain snippets of code that are necessary for the big story to work, but which you don't need to understand. We'll try to make it clear when it's important for you to pay attention to one of the cells. Twitter As you know, people are tweeting all the time. The rate varies, with about 6,000 per second being the average, but when I last checked, the rate was over 10,000 Tweets per second. So, a lot. Twitter kindly allows people to tap into a small sample of this stream &mdash; unless you're able to pay, the sample is at most 1% of the total stream. Here's a tiny snapshow to Tweets, reflecting the Twitter public stream at the point this tutorial was last executed. By using the keywords 'love, hate', we restrict our sample to just those Tweets containing one or both of those words. Step2: Using a Twitter corpus You too can sample Tweets in this way, but you'll need to set up your Twitter API keys according to these instructions, and also install NLTK (and IPython if you want) on your own computer. Since this is a bit of hassle, for the rest of this tutorial, we'll focus our attention on a sample of 20,000 English-language Tweets that were collected at the end of April 2015. In order focus on Tweets about the UK general election, the public stream was filtered with the following set of terms Step3: Sentiment Analysis When we talk about understanding natural language, we often focus on 'who did what to whom'. Yet in many situations, we are more interested in attitudes and opinions. When someone writes about a movie, did they like it or hate it? Is a product review for a water bottle on Amazon positive or negative? Is a Tweet about the US President supportive or critical? We might also care about the intensity of the views expressed Step4: In the next example, we are going to create a table of Tweets using the pandas library. We will use the term data to refer to this table Step5: Next, we will try to add labels for political parties and party leaders in a way that corresponds to the text of the Tweets. However, in some cases, it may not be possible or appropriate to add a label and instead we want to have a 'blank cell' that will be ignored by pandas. We'll do this by inserting a value NaN (Not a Number). Step6: To add a sentiment column, we will use the polarity_scores() method from VADER that we briefly described earlier. We'll only look at the overall 'compound' polarity score. Step7: Let's inspect the 25 most positive Tweets Step8: Let's print out the text of the Tweet in row 15079. Step9: Now let's have a peek at the 25 most negative Tweets. Step10: And here is the text of the Tweet at row 5069 Step11: In the next few examples, we group the Tweets together either by leader or by party, and then look at some summary statistics. Step12: Challenges It's not hard to find examples where something close to full natural language understanding is required to determine the correct polarity. 
<i>David Cameron doesn't seem to have done too badly until now. Otherwise #milifandom and #cleggers would be attacking him for these bad things.</i> <i>Even though I don't like UKIP I'm hating them less and less every day, they do actually have very some good policies.</i>
Python Code: 3 + 4 Explanation: Sentiment Analysis for Twitter Overview This tutorial is going to introduce some simple tools for detecting sentiment in Tweets. We will be using a set of tools called the Natural Language Toolkit (NLTK). This is collection of software written in the Python programming language. An important design goal behind Python is that it should be easy to read and fun to use, so well-suited for beginners. A similar motivation inspired NLTK: it should make complex tasks easy to carry out, and it should be written in a way that would allow users to inspect and understand the code. Why is this relevant? Well, a lot of software these days is built to be easy to use, but hard to inspect. For example, smartphones have a lot of slick apps on them, but very few people have the expertise to look under the hood to find out how they work. NLTK has quite the opposite approach: you are actively encouraged to discover how the code works. However, your level of understanding will depend heavily on how far you get to grips with Python itself. This tutorial is written using the IPython framework. This allows text to be interspersed by fragments of code, occuring in special "cells". Just below is a cell where we are using Python to do a simple calculation: End of explanation import nltk # load up the NLTK library from nltk.twitter import Twitter tw = Twitter() # start a new client that connects to Twitter tw.tweets(keywords='love, hate', limit=25) #filter Tweets from the public stream Explanation: Some of the cells will contain snippets of code that are necessary for the big story to work, but which you don't need to understand. We'll try to make it clear when it's important for you to pay attention to one of the cells. Twitter As you know, people are tweeting all the time. The rate varies, with about 6,000 per second being the average, but when I last checked, the rate was over 10,000 Tweets per second. So, a lot. Twitter kindly allows people to tap into a small sample of this stream &mdash; unless you're able to pay, the sample is at most 1% of the total stream. Here's a tiny snapshow to Tweets, reflecting the Twitter public stream at the point this tutorial was last executed. By using the keywords 'love, hate', we restrict our sample to just those Tweets containing one or both of those words. End of explanation from nltk.corpus import twitter_samples strings = twitter_samples.strings('tweets.20150430-223406.json') for string in strings[:20]: print(string) Explanation: Using a Twitter corpus You too can sample Tweets in this way, but you'll need to set up your Twitter API keys according to these instructions, and also install NLTK (and IPython if you want) on your own computer. Since this is a bit of hassle, for the rest of this tutorial, we'll focus our attention on a sample of 20,000 English-language Tweets that were collected at the end of April 2015. In order focus on Tweets about the UK general election, the public stream was filtered with the following set of terms: david cameron, miliband, milliband, sturgeon, clegg, farage, tory, tories, ukip, snp, libdem The following code cell allows us to get hold of this collection, and prints out the text of the first 15. You don't need to worry about the details of how this happens. End of explanation from nltk.sentiment import SentimentIntensityAnalyzer sia = SentimentIntensityAnalyzer() sia.polarity_scores("I REALLY adore Starwars!!!!! 
:-)") full_tweets = twitter_samples.docs('tweets.20150430-223406.json') Explanation: Sentiment Analysis When we talk about understanding natural language, we often focus on 'who did what to whom'. Yet in many situations, we are more interested in attitudes and opinions. When someone writes about a movie, did they like it or hate it? Is a product review for a water bottle on Amazon positive or negative? Is a Tweet about the US President supportive or critical? We might also care about the intensity of the views expressed: "this is a fine movie" is different from "WOW! This movie is soooooo great!!!!" even though both are positive. Sentiment analysis (or opinion mining) is a broad term for a range of techniques that try to identify the subjective views expressed in texts. Many organisations care deeply about public opinion &mdash; whether these concern commercial products, creative works, or political parties and policies &mdash; and have consequently turned to sentiment analysis as a way of gleaning valuable insights from voluminous bodies of online text. This in turn has stimulated much activity in the area, ranging from academic research to commercial applications and industry-focussed conferences. However, it's worth saying at the outset that sentiment analysis is hard. Although it is designed to work with written text, the way in which people express their feelings is often goes far beyond what they literally say. In spoken language, intonation will be important. And of course we often express emotion using no words at all, as illustrated in this picture from Darwin's book The Expression of the Emotions. <a title="By Charles Darwin (author of volume); unknown photographer of plate [Public domain], via Wikimedia Commons" href="https://commons.wikimedia.org/wiki/File%3APlate_depicting_emotions_of_grief_from_Charles_Darwin's_book_The_Expression_of_the_Emotions.jpg"><img align="center" width="512" alt="Plate depicting emotions of grief from Charles Darwin&'s book The Expression of the Emotions" src="https://upload.wikimedia.org/wikipedia/commons/thumb/0/0c/Plate_depicting_emotions_of_grief_from_Charles_Darwin%27s_book_The_Expression_of_the_Emotions.jpg/512px-Plate_depicting_emotions_of_grief_from_Charles_Darwin%27s_book_The_Expression_of_the_Emotions.jpg"/></a> Classifying sentences Let's say that we want to classify a sentence into one of three categories: positive, negative or neutral. Each of these can be illustrated by posts on Twitter collected during the UK General Election in 2015. <dl> <dt>positive:</dt> <dd><i>Good stuff from Clegg. Clear, passionate & honest about the difficulties of govt but also the difference @LibDems have made.</i></dd> <dt>negative:</dt> <dd><i>Hmm. Ed Miliband being against SNP is a bad move I think. It'll cost him n it is a dumb choice.</i></dd> <dt>neutral:</dt> <dd><i>Why is Ed Milliband trending when him name is Ed Miliband?</i></dd> </dl> The term polarity is often used to refer to whether a piece of text is judged to be positive or negative. The easiest approach to classifying examples like these is to get hold of two lists of words, positive ones such as good, excellent, fine, triumph, well, succeed, ... and negative ones such as bad, poor, dismal, lying, fail, disaster, .... We figure out an overall polarity score based on the ratio of positive tokens to negative ones in a given string. A sentence with neither positive or negative tokens (or possibly an equal number of each) will be categorised as neutral. 
This simple approach is likely to yield the roughly correct results for the Twitter examples above. Things become more complicated when negation enter into the picture. The next example is mildly positive (at least in British English), so we need to ensure that not reverses the polarity of bad in appropriate contexts, <i>Given Miliband personal ratings still 20 points behind Cameron, I'd say that not a bad margin for Labour leader https://t.co/ILQP93VYLF</i> Classifying Tweets with VADER VADER is a system for determining the sentiment of texts which has been incorporated into NLTK. It is based on the idea of looking for positive and negative words, but adds to important new elements. First, it uses a lexicon of 7,500 items which have been manually annotated for both polarity and intensity. Second, the overall score for an input text is computed by using a complex set of rules that take into account not just words (and negation), but also the boosting effect of devices like capitalisation and punctuation. End of explanation import pandas as pd from numpy import nan data = pd.DataFrame() data['text'] = [t['text'] for t in full_tweets] # add a column corresponding to the text of each Tweet Explanation: In the next example, we are going to create a table of Tweets using the pandas library. We will use the term data to refer to this table End of explanation parties = {} parties['conservative'] = set(['osborne', 'portillo', 'pickles', 'tory', 'tories', 'torie', 'voteconservative', 'conservative', 'conservatives', 'bullingdon', 'telegraph']) parties['labour'] = set(['uklabour', 'scottishlabour', 'labour', 'lab', 'murphy']) parties['libdem'] = set(['libdem', 'libdems', 'dems', 'alexander']) parties['ukip'] = set(['ukip', 'davidcoburnukip']) parties['snp'] = set(['salmond', 'snp', 'snpwin', 'votesnp', 'snpbecause', 'scotland', 'scotlands', 'scottish', 'indyref', 'independence', 'celebs4indy']) leaders = {} leaders['cameron'] = set(['cameron', 'david_cameron', 'davidcameron','dave', 'davecamm']) leaders['miliband'] = set(['miliband', 'ed_miliband', 'edmiliband', 'edm', 'milliband', 'ed', 'edforchange', 'edforpm', 'milifandom']) leaders['clegg'] = set(['clegg']) leaders['farage'] = set(['farage', 'nigel_farage', 'nsegel', 'askfarage', 'asknigelfarage', 'asknigelfar']) leaders['sturgeon'] = set(['sturgeon', 'nicola_sturgeon', 'nicolasturgeon', 'nicola']) def tweet_classify(text, keywords): label = nan from nltk.tokenize import wordpunct_tokenize import operator toks = wordpunct_tokenize(text) toks_lower = [t.lower() for t in toks] d = {} for k in keywords: d[k] = len(keywords[k] & set(toks_lower)) best = max(d.items(), key=operator.itemgetter(1)) if best[1] > 0: label = best[0] return label data['party'] = [tweet_classify(row['text'], parties) for index, row in data.iterrows()] data['leader'] = [tweet_classify(row['text'], leaders) for index, row in data.iterrows()] data.head(25) Explanation: Next, we will try to add labels for political parties and party leaders in a way that corresponds to the text of the Tweets. However, in some cases, it may not be possible or appropriate to add a label and instead we want to have a 'blank cell' that will be ignored by pandas. We'll do this by inserting a value NaN (Not a Number). End of explanation data['sentiment'] = [sia.polarity_scores(row['text'])['compound'] for index, row in data.iterrows()] data.describe() # summarise the table Explanation: To add a sentiment column, we will use the polarity_scores() method from VADER that we briefly described earlier. 
We'll only look at the overall 'compound' polarity score. End of explanation data.sort_index(by="sentiment", ascending=False).head(25) Explanation: Let's inspect the 25 most positive Tweets: End of explanation print(data.iloc[15079]['text']) Explanation: Let's print out the text of the Tweet in row 15079. End of explanation data.sort_index(by="sentiment").head(25) Explanation: Now let's have a peek at the 25 most negative Tweets. End of explanation print(data.iloc[5069]['text']) Explanation: And here is the text of the Tweet at row 5069: End of explanation grouped_leader = data['sentiment'].groupby(data['leader']) grouped_leader.mean() grouped_party = data['sentiment'].groupby(data['party']) grouped_party.mean() grouped_leader.count() grouped_party.count() grouped_party.max() grouped_leader.max() Explanation: In the next few examples, we group the Tweets together either by leader or by party, and then look at some summary statistics. End of explanation sia.polarity_scores("David Cameron doesn't seem to have done too badly until now." + "Otherwise #milifandom and #cleggers would be attacking him for these bad things.") Explanation: Challenges It's not hard to find examples where something close to full natural language understanding is required to determine the correct polarity. <i>David Cameron doesn't seem to have done too badly until now. Otherwise #milifandom and #cleggers would be attacking him for these bad things.</i> <i>Even though I don't like UKIP I'm hating them less and less every day, they do actually have very some good policies.</i> End of explanation
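To make the negation point in the 'Challenges' discussion concrete, the following short check (added for illustration) compares VADER's scores for a phrase with and without 'not', reusing the sia analyser created earlier:

for sentence in ["That is a bad margin for the Labour leader.",
                 "That is not a bad margin for the Labour leader."]:
    print(sentence, sia.polarity_scores(sentence))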
5,226
Given the following text description, write Python code to implement the functionality described below step by step Description: The model in theory We are going to use 4 features Step1: Read data Step2: Plot Step3: Price Step4: MACD Step5: Stochastics Oscillator Step6: Average True Range Step7: Create complete DataFrame & Save Data
Python Code: def MACD(df,period1,period2,periodSignal): EMA1 = pd.DataFrame.ewm(df,span=period1).mean() EMA2 = pd.DataFrame.ewm(df,span=period2).mean() MACD = EMA1-EMA2 Signal = pd.DataFrame.ewm(MACD,periodSignal).mean() Histogram = MACD-Signal return Histogram def stochastics_oscillator(df,period): l, h = pd.DataFrame.rolling(df, period).min(), pd.DataFrame.rolling(df, period).max() k = 100 * (df - l) / (h - l) return k def ATR(df,period): ''' Method A: Current High less the current Low ''' df['H-L'] = abs(df['High']-df['Low']) df['H-PC'] = abs(df['High']-df['Price'].shift(1)) df['L-PC'] = abs(df['Low']-df['Price'].shift(1)) TR = df[['H-L','H-PC','L-PC']].max(axis=1) return TR.to_frame() Explanation: The model in theory We are going to use 4 features: The price itself and three extra technical indicators. - MACD (Trend) - Stochastics (Momentum) - Average True Range (Volume) Functions Exponential Moving Average: Is a type of infinite impulse response filter that applies weighting factors which decrease exponentially. The weighting for each older datum decreases exponentially, never reaching zero. <img src="https://www.bionicturtle.com/images/uploads/WindowsLiveWriterGARCHapproachandExponentialsmoothingEWMA_863image_16.png"> MACD: The Moving Average Convergence/Divergence oscillator (MACD) is one of the simplest and most effective momentum indicators available. The MACD turns two trend-following indicators, moving averages, into a momentum oscillator by subtracting the longer moving average from the shorter moving average. <img src="http://i68.tinypic.com/289ie1l.png"> Stochastics oscillator: The Stochastic Oscillator is a momentum indicator that shows the location of the close relative to the high-low range over a set number of periods. <img src="http://i66.tinypic.com/2vam3uo.png"> Average True Range: Is an indicator to measure the volalitility (NOT price direction). 
The largest of: - Method A: Current High less the current Low - Method B: Current High less the previous Close (absolute value) - Method C: Current Low less the previous Close (absolute value) <img src="http://d.stockcharts.com/school/data/media/chart_school/technical_indicators_and_overlays/average_true_range_atr/atr-1-trexam.png" width="400px"> Calculation: <img src="http://i68.tinypic.com/e0kggi.png"> End of explanation df = pd.read_csv('BTCUSD.csv',usecols=[1,2,3,4]) df = df.iloc[::-1] df["Price"] = (df["Price"].str.split()).apply(lambda x: float(x[0].replace(',', ''))) df["Open"] = (df["Open"].str.split()).apply(lambda x: float(x[0].replace(',', ''))) df["High"] = (df["High"].str.split()).apply(lambda x: float(x[0].replace(',', ''))) df["Low"] = (df["Low"].str.split()).apply(lambda x: float(x[0].replace(',', ''))) dfPrices = pd.read_csv('BTCUSD.csv',usecols=[1]) dfPrices = dfPrices.iloc[::-1] dfPrices["Price"] = (dfPrices["Price"].str.split()).apply(lambda x: float(x[0].replace(',', ''))) dfPrices.head(2) Explanation: Read data End of explanation price = dfPrices.iloc[len(dfPrices.index)-60:len(dfPrices.index)].as_matrix().ravel() Explanation: Plot End of explanation prices = dfPrices.iloc[len(dfPrices.index)-60:len(dfPrices.index)].as_matrix().ravel() plt.figure(figsize=(25,7)) plt.plot(prices,label='Test',color='black') plt.title('Price') plt.legend(loc='upper left') plt.show() Explanation: Price End of explanation macd = MACD(dfPrices.iloc[len(dfPrices.index)-60:len(dfPrices.index)],12,26,9) plt.figure(figsize=(25,7)) plt.plot(macd,label='macd',color='red') plt.title('MACD') plt.legend(loc='upper left') plt.show() Explanation: MACD End of explanation stochastics = stochastics_oscillator(dfPrices.iloc[len(dfPrices.index)-60:len(dfPrices.index)],14) plt.figure(figsize=(14,7)) #First 100 points because it's too dense plt.plot(stochastics[0:100],label='Stochastics Oscillator',color='blue') plt.title('Stochastics Oscillator') plt.legend(loc='upper left') plt.show() Explanation: Stochastics Oscillator End of explanation atr = ATR(df.iloc[len(df.index)-60:len(df.index)],14) plt.figure(figsize=(21,7)) #First 100 points because it's too dense plt.plot(atr[0:100],label='ATR',color='green') plt.title('Average True Range') plt.legend(loc='upper left') plt.show() Explanation: Average True Range End of explanation dfPriceShift = dfPrices.shift(-1) dfPriceShift.rename(columns={'Price':'PriceTarget'}, inplace=True) dfPriceShift.head(2) macd = MACD(dfPrices,12,26,9) macd.rename(columns={'Price':'MACD'}, inplace=True) stochastics = stochastics_oscillator(dfPrices,14) stochastics.rename(columns={'Price':'Stochastics'}, inplace=True) atr = ATR(df,14) atr.rename(columns={0:'ATR'}, inplace=True) final_data = pd.concat([dfPrices,dfPriceShift,macd,stochastics,atr], axis=1) # Delete the entries with missing values (where the stochastics couldn't be computed yet) because have a lot of datapoints ;) final_data = final_data.dropna() final_data.info() final_data final_data.to_csv('BTCUSD_TechnicalIndicators.csv',index=False) Explanation: Create complete DataFrame & Save Data End of explanation
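One thing worth noting about the ATR helper above: although it accepts a period argument, it returns the raw True Range and never averages it. If a smoothed Average True Range is wanted, a rolling mean can be applied on top — a possible sketch (Wilder's original smoothing would use an exponential mean instead):

def ATR_smoothed(df, period):
    tr = ATR(df, period)                 # raw True Range from the helper above
    return tr.rolling(period).mean()     # simple rolling average of the True Range

atr_smoothed = ATR_smoothed(df.copy(), 14)   # copy avoids mutating df with the helper's extra columns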
5,227
Given the following text problem statement, write Python code to implement the functionality described below in problem statement Problem: Say that I want to train BaggingClassifier that uses DecisionTreeClassifier:
Problem: import numpy as np import pandas as pd from sklearn.ensemble import BaggingClassifier from sklearn.model_selection import GridSearchCV from sklearn.tree import DecisionTreeClassifier X_train, y_train = load_data() assert type(X_train) == np.ndarray assert type(y_train) == np.ndarray X_test = X_train param_grid = { 'base_estimator__max_depth': [1, 2, 3, 4, 5], 'max_samples': [0.05, 0.1, 0.2, 0.5] } dt = DecisionTreeClassifier(max_depth=1) bc = BaggingClassifier(dt, n_estimators=20, max_samples=0.5, max_features=0.5) clf = GridSearchCV(bc, param_grid) clf.fit(X_train, y_train)
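A brief usage note (not part of the original problem statement): once the grid search has been fitted as above, the tuned ensemble can be inspected and used for prediction through the standard GridSearchCV attributes:

print(clf.best_params_)              # best max_depth / max_samples combination found
print(clf.best_score_)               # its mean cross-validated score
proba = clf.predict_proba(X_test)    # predictions come from the refitted best estimator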
5,228
Given the following text description, write Python code to implement the functionality described below step by step Description: 2.0 - 2.1 Migration Step1: In this tutorial we will review the changes in the PHOEBE mesh structures. We will first explain the changes and then demonstrate them in code. As usual, let us import phoebe and create a default binary bundle Step2: PHOEBE 2.0 had a mesh dataset along with pbmesh and protomesh options you could send to b.run_compute(). These options were quite convenient, but had a few inherit problems Step3: The 'include_times' parameter Similarly, the include_times parameter is a SelectParameter, with the choices being the existing datasets, as well as the t0s mentioned above.
Python Code: !pip install -I "phoebe>=2.1,<2.2" Explanation: 2.0 - 2.1 Migration: Meshes Let's first make sure we have the latest version of PHOEBE 2.1 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release). End of explanation import phoebe b = phoebe.default_binary() Explanation: In this tutorial we will review the changes in the PHOEBE mesh structures. We will first explain the changes and then demonstrate them in code. As usual, let us import phoebe and create a default binary bundle: End of explanation b.add_dataset('mesh') print b.get_parameter('columns').get_choices() b.add_dataset('lc') print b.get_parameter('columns').get_choices() b['columns'] = ['*@lc01', 'teffs'] b.get_parameter('columns').get_value() b.get_parameter('columns').expand_value() Explanation: PHOEBE 2.0 had a mesh dataset along with pbmesh and protomesh options you could send to b.run_compute(). These options were quite convenient, but had a few inherit problems: The protomesh was exposed at only t0 and was in Roche coordinates, despite using the same qualifiers 'x', 'y', 'z'. Passband-dependent parameters were exposed in the mesh if pbmesh=True, but only if the times matched exactly with the passband (lc, rv, etc) dataset. Storing more than a few meshes become very memory intensive due to their large size and the large number of columns. Addressing these shortcomings required a complete redesign of the mesh dataset. The most important changes are: pbmesh and protomesh are no longer valid options to b.run_compute(). Everything is done through the mesh dataset itself, i.e. b.add_dataset('mesh'). The default columns that are computed for each mesh include the elements in both Roche and plane-of-sky coordinate systems. These columns cannot be disabled. The columns parameter in the mesh dataset lists additional columns to be exposed in the model mesh when calling b.run_compute(). See the section on columns below for more details. You can choose whether to expose coordinates in the Roche coordinate system ('xs', 'ys', 'zs') or the plane-of-sky coordinate system ('us', 'vs', 'ws'). When plotting, the default is the plane-of-sky coordinate system, and the axes will be correctly labeled as uvw, whereas in PHOEBE 2.0.x these were still labeled xyz. Note that this also applies to velocities ('vxs', 'vys', 'vzs' vs 'vus', 'vvs', 'vws'). The include_times parameter allows for importing timestamps from other datasets. It also provides support for important orbital times: 't0' (zero-point), 't0_perpass' (periastron passage), 't0_supconj' (superior conjunction) and 't0_ref' (zero-phase reference point). By default, the times parameter is empty. If you do not set times or include_times before calling b.run_compute(), your model will be empty. The 'columns' parameter This parameter is a SelectParameter (a new type of Parameter introduced in PHOEBE 2.1). Its value is one of the values in a list of allowed options. You can list the options by calling param.get_choices() (same as you would for a ChoiceParameter). The value also accepts wildcards, as long as the expression matches at least one of the choices. This allows you to easily select, say, rvs from all datasets, by passing rvs@*. To see the full list of matched options, use param.expand_value(). To demonstrate, let us add a few datasets and look at the available choices for the columns parameter. 
End of explanation print b.get_parameter('include_times').get_value() print b.get_parameter('include_times').get_choices() b['include_times'] = ['lc01', 't0@system'] print b.get_parameter('include_times').get_value() Explanation: The 'include_times' parameter Similarly, the include_times parameter is a SelectParameter, with the choices being the existing datasets, as well as the t0s mentioned above. End of explanation
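The wildcard behaviour described for SelectParameters can be sketched with a short hypothetical snippet in the same spirit as the cells above; it assumes an RV dataset has been added so that 'rvs@*' has something to match (dataset names and the exact expanded values may differ on a real bundle):

b.add_dataset('rv')
b['columns'] = ['rvs@*', 'teffs']
print(b.get_parameter('columns').expand_value())   # the wildcard expands to the matching rv columns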
5,229
Given the following text description, write Python code to implement the functionality described below step by step Description: Regression functions demo notebook If you have not already done so, run the following command to install the statsmodels package Step1: The following function runs a random model with a random independent variable y and four random covariates, using both the statsmodels and scikit-learn packages. The user can compare output from the two tools. Step2: The two models produce the same results. There is no standard regression table type output from sklearn. However, sklearn offers greater features for prediction, by incorporating machine learning functionality. For that reason, we will likely wish to use both packages, for different purposes. The user_model function prompts the user to input a model formula for an OLS regression, then runs the model in statsmodel, and outputs model results and a plot of y data vs. model fitted values. At the prompt, you may either input your own model formula, or copy and paste the following formula as an example
Python Code: from data_cleaning_utils import import_data dat = import_data('../Data/Test/pool82014-10-02cleaned_Subset.csv') Explanation: Regression functions demo notebook If you have not already done so, run the following command to install the statsmodels package: easy_install -U statsmodels Run the following command to install scipy and scikit-learn: conda install scipy conda install scikit-learn Use the data cleaning package to import a data set: End of explanation from regression import compare_OLS compare_OLS(dat) Explanation: The following function runs a random model with a random independent variable y and four random covariates, using both the statsmodels and scikit-learn packages. The user can compare output from the two tools. End of explanation %matplotlib inline from regression import user_model user_model(data=dat) %matplotlib inline import pandas as pd from regression import plot_pairs plot_pairs(data=dat[['XCO2Dpp', 'XCH4Dpp', 'TempC', 'ChlAugL', 'TurbFNU', 'fDOMQSU', 'ODOmgL', 'pH', 'CH4uM', 'CO2uM']], minCorr=0.1, maxCorr=0.95) dat.columns dat.shape[1] Explanation: The two models produce the same results. There is no standard regression table type output from sklearn. However, sklearn offers greater features for prediction, by incorporating machine learning functionality. For that reason, we will likely wish to use both packages, for different purposes. The user_model function prompts the user to input a model formula for an OLS regression, then runs the model in statsmodel, and outputs model results and a plot of y data vs. model fitted values. At the prompt, you may either input your own model formula, or copy and paste the following formula as an example: CO2uM ~ pH + TempC + ChlAugL End of explanation
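For reference, the example formula suggested at the end can also be fitted directly with the statsmodels formula API, independently of the custom user_model helper; this short sketch only assumes that dat contains the named columns:

import statsmodels.formula.api as smf

fit = smf.ols('CO2uM ~ pH + TempC + ChlAugL', data=dat).fit()
print(fit.summary())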
5,230
Given the following text description, write Python code to implement the functionality described below step by step Description: Controlando o programa programas simples são sequencias lineares de operaçoes existem opções para adotar um fluxo menos linear opções para tomar decisões e poder executar uma ou outra instrução Controles de fluxo muitas vezes queremos executar uma instrução se uma condição prévia for satisfeita para isso usamos a instrução if, veja o exemplo abaixo Step1: Note a estrutura da instrução if if expressão Step2: o tamanho da identação pode ser qualquer, mas deve ser consistente dentro de um bloco de modo geral existe um "padrão" de 4 espaços para identação programas como Spyder já fazem a identação automaticamente Veja abaixo alguns testes de condições básicos Step3: If, elif e else Além do if existem outras duas formas de trabalhar com condições. else Step4: a instrução while uma importante variação da instrução if é a instrução while. ele se comporta de maneira similar, através de condições que são checadas no entanto, while executa o bloco de código até a condição ser satisfeita Step5: Break e Continue As vezes pode ser útil quebrar uma sequencia de código. Para blocos de while existe a opção break
Python Code: x=int(input("entre com um numero inteiro não maior que 10 :")) if x > 10: print "oops, vamos arrumar isso..." x = 10 print "seu número é ",x Explanation: Controlando o programa programas simples são sequencias lineares de operaçoes existem opções para adotar um fluxo menos linear opções para tomar decisões e poder executar uma ou outra instrução Controles de fluxo muitas vezes queremos executar uma instrução se uma condição prévia for satisfeita para isso usamos a instrução if, veja o exemplo abaixo: End of explanation x=int(input("entre com um numero inteiro não maior que 10 :")) if x > 10: print "oops, vamos arrumar isso..." x = 10 # aqui está um erro de identação print "seu número é ",x Explanation: Note a estrutura da instrução if if expressão: instruções note em particular a identação End of explanation x = 12 if x>10 or x<1: print "ok" x=8 if x <= 10 and x >= 1: print "ok" Explanation: o tamanho da identação pode ser qualquer, mas deve ser consistente dentro de um bloco de modo geral existe um "padrão" de 4 espaços para identação programas como Spyder já fazem a identação automaticamente Veja abaixo alguns testes de condições básicos: if x == 1 # checar se x igual a 1 if x > 1 # checar se x maior que 1 if x < 1 # checar se x menor que 1 if x >= 1 # checar se x maior ou igual a 1 if x <= 1 # checar se x menor ou igual a 1 if x != 1 # checar se x é diferente de 1 é possível combinar duas condições em uma instrução End of explanation x = 9 if x > 10: print "seu numero é maior que 10" elif x >= 9: print "seu numero é proximo de 10" else: print "seu numero está ok." Explanation: If, elif e else Além do if existem outras duas formas de trabalhar com condições. else: significa "se não satisfeita a condição anterior, faça isso" elif: significa "se não satisfeita a condição anterior, faça isso se esta outra condição for satisfeita" End of explanation x=int(input("entre com um numero inteiro não maior que 10 :")) while x > 10: print "esse numero é maior que 10, tente de novo." x=int(input("entre com um numero inteiro não maior que 10 :")) print "seu numero é :",x Explanation: a instrução while uma importante variação da instrução if é a instrução while. ele se comporta de maneira similar, através de condições que são checadas no entanto, while executa o bloco de código até a condição ser satisfeita End of explanation x = 12 while x > 10: print "O numero é maior que 10, tente de novo." x = int(input("Entre com um numero inteiro menor que 10: ")) if x == 111: break Explanation: Break e Continue As vezes pode ser útil quebrar uma sequencia de código. Para blocos de while existe a opção break End of explanation
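The section is titled 'Break e Continue', but only break is demonstrated above. A minimal illustrative sketch of continue in the same spirit (skipping even numbers inside a while loop):

x = 0
while x < 10:
    x = x + 1
    if x % 2 == 0:
        continue   # pula os numeros pares (skip even numbers)
    print(x)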
5,231
Given the following text description, write Python code to implement the functionality described below step by step Description: Let’s embrace WebAssembly! Presentation made at EuroPython 2018 - Edinburgh (by Almar Klein) This Notebook is Step1: Compiling 'find_prime()' to WASM Note Step2: Run in Browser Step3: Run in NodeJS Step4: Compile 'find_prime()' to WASM then to Machine code Step5: Use case 2 Step6: rocket wasm controlled per javascript <center> <img src='images/rocket_in_js.png' width=800> </center> abstract of rocket.html file <center> <img src='images/github_rocket_wasm_js.png' width=1000> </center> same rocket wasm controlled per python <center> <img src='images/rocket_in_py.png' width=800> </center> Step7: Run Rocket in Python with Qt Step8: This game is not that hard to play ... Let's make an AI! Game in rust compiled in WASM AI in C compiled in WASM Glue in Python
Python Code: # in RISE mode, click <Shift>+<Enter> to execute a cell def find_prime(nth): n = 0 i = -1 while n < nth: i = i + 1 if i <= 1: continue # nope elif i == 2: n = n + 1 else: gotit = 1 for j in range(2, i//2+1): if i % j == 0: gotit = 0 break if gotit == 1: n = n + 1 return i %time find_prime(1000) Explanation: Let’s embrace WebAssembly! Presentation made at EuroPython 2018 - Edinburgh (by Almar Klein) This Notebook is: - a compacted version of original https://github.com/almarklein/rocket_rust_py - best seen in RISE mode: <center><img src='images/start_rise_mode.png' width=800px></center> WASM has a compact binary format And a human readable counterpart: wasm (module (type $print (func (param i32)) (func $main (i32.const 42) (call $print) ) (start $main) ) It's safe Because browsers. Use case 1: Compile a subset of Python to WASM <center><img src='images/pysnippet_to_wasm.png' width=800px></center> End of explanation # in RISE mode, click <Shift>+<Enter> to execute a cell from ppci import wasm from ppci.lang.python import python_to_wasm def main(): print(find_prime(1000)) m = python_to_wasm(main, find_prime) # WASM (somewhat) readable machine code m.show() # WASM binary format m.show_bytes() # WASM interface m.show_interface() Explanation: Compiling 'find_prime()' to WASM Note: The python-to-wasm compiler is just a POC! Assumes a (reliable) wasm-to-native compiler End of explanation wasm.run_wasm_in_notebook(m) Explanation: Run in Browser End of explanation wasm.run_wasm_in_node(m) Explanation: Run in NodeJS End of explanation # this doesn't currently work on a Python 32 bit, when run on a Windows 64 bit @wasm.wasmify def find_prime2(nth): n = 0 i = -1 while n < nth: i = i + 1 if i <= 1: continue # nope elif i == 2: n = n + 1 else: gotit = 1 for j in range(2, i//2+1): if i % j == 0: gotit = 0 break if gotit == 1: n = n + 1 return i %time find_prime2(1000) Explanation: Compile 'find_prime()' to WASM then to Machine code End of explanation from ppci import wasm m = wasm.Module(open(r'wasm/rocket.wasm', 'rb')) m m.show_interface() Explanation: Use case 2: Python as a platform to bind and run WASM modules ... and allow that code to call into Python functions <center><img src='images/py_as_platform.png' width=700px></center> the Rocket game from github.com/aochagavia <center> <!-- <a href='https://thread-safe.nl/rocket/' target='new'> --> <a href='rocket.html' target='new'> <img src='images/github_rocket_wasm.png' width=900> </a> </center> The rocket game is in a single binary WASM file (58 KB) <center> <img src='images/github_rocket_wasm_html.png' width=600> </center> <center> <img src='images/rocketgame.png' width=800> </center> End of explanation # abstract of rocket_qt.py (do not run) class PythonRocketGame: # ... def wasm_sin(self, a:float) -> float: return math.sin(a) def wasm_cos(self, a:float) -> float: return math.cos(a) def wasm_Math_atan(self, a:float) -> float: return math.atan(a) def wasm_clear_screen(self) -> None: # ... def wasm_draw_bullet(self, x:float, y:float) -> None: # ... def wasm_draw_enemy(self, x:float, y:float) -> None: # ... def wasm_draw_particle(self, x:float, y:float, a:float) -> None: # ... def wasm_draw_player(self, x:float, y:float, a:float) -> None: # ... def wasm_draw_score(self, score:float) -> None: # ... 
Explanation: rocket wasm controlled per javascript <center> <img src='images/rocket_in_js.png' width=800> </center> abstract of rocket.html file <center> <img src='images/github_rocket_wasm_js.png' width=1000> </center> same rocket wasm controlled per python <center> <img src='images/rocket_in_py.png' width=800> </center> End of explanation from rocket_qt import QtRocketGame game = QtRocketGame() # you may have to switch to the QT window appearing on the side of your browser session game.run() Explanation: Run Rocket in Python with Qt End of explanation # let's write the AI in C print(open('wasm/ai2.c', 'rt').read()) # use https://wasdk.github.io/WasmFiddle/ to convert ai.c in ai2.wasm from ppci import wasm ai2 = wasm.Module(open('wasm/ai2.wasm', 'rb')) ai2.show_interface() from rocket_ai import AiRocketGame game = AiRocketGame(ai2) game.run() Explanation: This game is not that hard to play ... Let's make an AI! Game in rust compiled in WASM AI in C compiled in WASM Glue in Python End of explanation
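For readers who want to see the kind of decision the AI has to make without touching C or WASM at all, here is a minimal, self-contained Python sketch of one possible steering rule: pick the nearest enemy and decide whether to turn left or right toward it. It is only an illustration of the logic; the function name, the coordinate conventions and the turn encoding below are made up and are not part of the rocket game's actual WASM interface.

```python
import math

def steer_toward(player_x, player_y, player_angle, enemies):
    # Illustrative only: return -1 (turn left), 1 (turn right) or 0 (hold course)
    # to aim the player at the nearest enemy; `enemies` is a list of (x, y) tuples.
    if not enemies:
        return 0
    ex, ey = min(enemies, key=lambda p: (p[0] - player_x) ** 2 + (p[1] - player_y) ** 2)
    target_angle = math.atan2(ey - player_y, ex - player_x)
    # smallest signed difference between the two angles, wrapped to [-pi, pi)
    diff = (target_angle - player_angle + math.pi) % (2 * math.pi) - math.pi
    if abs(diff) < 0.05:  # already pointing (almost) at the target
        return 0
    return 1 if diff > 0 else -1

# tiny usage check with made-up coordinates
print(steer_toward(0.0, 0.0, 0.0, [(10.0, 5.0), (-3.0, 8.0)]))  # prints 1
```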
5,232
Given the following text description, write Python code to implement the functionality described below step by step Description: <center> <img src="../../img/ods_stickers.jpg"> Открытый курс по машинному обучению. Сессия № 2 Автор материала Step1: Теперь перейдем непосредственно к машинному обучению. Данные по кредитному скорингу представлены следующим образом Step2: Напишем функцию, которая будет заменять значения NaN на медиану в каждом столбце таблицы. Step3: Считываем данные Step4: Рассмотрим типы считанных данных Step5: Посмотрим на распределение классов в зависимой переменной Step6: Выберем названия всех признаков, кроме прогнозируемого Step7: Применяем функцию, заменяющую все значения NaN на медианное значение соответствующего столбца. Step8: Разделяем целевой признак и все остальные – получаем обучающую выборку. Step9: Бутстрэп <font color='red'>Задание 2.</font> Сделайте интервальную оценку (на основе бутстрэпа) среднего дохода (MonthlyIncome) клиентов, просрочивших выплату кредита, и отдельно – для вовремя заплативших. Стройте 90% доверительный интервал. Найдите разницу между нижней границей полученного интервала для не просрочивших кредит и верхней границей – для просрочивших. То есть вас просят построить 90%-ые интервалы для дохода "хороших" клиентов $[good_income_lower, good_income_upper]$ и для "плохих" – $[bad_income_lower, bad_income_upper]$ и найти разницу $good_income_lower - bad_income_upper$. Используйте пример из статьи. Поставьте np.random.seed(17). Округлите ответ до целых. <font color='red'>Варианты ответа Step10: Дерево решений, подбор гиперпараметров Одной из основных метрик качества модели является площадь под ROC-кривой. Значения ROC-AUC лежат от 0 до 1. Чем ближе значение ROC-AUC к 1, тем качественнее происходит классификация моделью. Найдите с помощью GridSearchCV гиперпараметры DecisionTreeClassifier, максимизирующие площадь под ROC-кривой. Step11: Используем модуль DecisionTreeClassifier для построения дерева решений. Из-за несбалансированности классов в целевом признаке добавляем параметр балансировки. Используем также параметр random_state=17 для воспроизводимости результатов. Step12: Перебирать будем вот такие значения гиперпараметров Step13: Зафиксируем кросс-валидацию Step14: <font color='red'>Задание 3.</font> Сделайте GridSearch с метрикой ROC AUC по гиперпараметрам из словаря tree_params. Какое максимальное значение ROC AUC получилось (округлите до 2 знаков после разделителя)? Назовем кросс-валидацию устойчивой, если стандартное отклонение метрики качества на кросс-валидации меньше 1%. Получилась ли кросс-валидация устойчивой при оптимальных сочетаниях гиперпараметров (т.е. обеспечивающих максимум среднего значения ROC AUC на кросс-валидации)? <font color='red'>Варианты ответа Step15: Простая реализация случайного леса <font color='red'>Задание 4.</font> Реализуйте свой собственный случайный лес с помощью DecisionTreeClassifier с лучшими параметрами из прошлого задания. В нашем лесу будет 10 деревьев, предсказанные вероятности которых вам нужно усреднить. Краткая спецификация Step16: <font color='red'>Задание 5.</font> Тут сравним нашу собственную реализацию случайного леса с sklearn-овской. Для этого воспользуйтесь RandomForestClassifier(n_jobs=1, random_state=17), укажите все те же значения max_depth и max_features, что и раньше. Какое среднее значение ROC AUC на кросс-валидации мы в итоге получили? Выберите самое близкое значение. 
<font color='red'>Варианты ответа Step17: Случайный лес sklearn, подбор гиперпараметров <font color='red'>Задание 6.</font> В 3 задании мы находили оптимальные гиперпараметры для одного дерева, но может быть, для ансамбля эти параметры дерева не будут оптимальными. Давайте проверим это с помощью GridSearchCV (RandomForestClassifier(random_state=17)). Только теперь расширим перебираемые занчения max_depth до 15 включительно, так как в лесу нужны деревья поглубже (а почему именно – вы поняли из статьи). Какими теперь стали лучшие значения гиперпараметров? <font color='red'>Варианты ответа Step18: Логистическая регрессия, подбор гиперпараметров <font color='red'>Задание 7.</font> Теперь сравним с логистической регрессией (укажем class_weight='balanced' и random_state=17). Сделайте полный перебор по параметру C из широкого диапазона значений np.logspace(-8, 8, 17). Только сделаем это корректно и выстроим пайплайн – сначала масштабирование, затем обучение модели. Разберитесь с пайплайнами и проведите кросс-валидацию. Какое получилось лучшее значение средней ROC AUC? Выберите самое близкое значение. <font color='red'>Варианты ответа Step19: Логистическая регрессия и случайный лес на разреженных признаках В случае небольшого числа признаков случайный лес показал себя лучше логистической регрессии. Однако один из главных недостатков деревьев проявляется при работе с разреженным данными, например с текстами. Давайте сравним логистическую регрессию и случайный лес в новой задаче. Скачайте данные с отзывами к фильмам отсюда. Step20: <font color='red'>Задание 8.</font> Сделайте полный перебор по параметру C из выборки [0.1, 1, 10, 100]. Какое лучшее значение ROC AUC получилось на кросс-валидации? Выберите самое близкое значение. <font color='red'>Варианты ответа Step21: <font color='red'>Задание 9.</font> Теперь попробуем сравнить со случайным лесом. Аналогично делаем перебор и получаем максимальное ROC AUC. Выберите самое близкое значение. <font color='red'>Варианты ответа
Python Code: import math def nCr(n,r): f = math.factorial return f(n) / f(r) / f(n - r) p, N, m, s = 0.8, 7, 4, 0 for i in range(m, N+1): s += nCr(N, i) * p**i * (1 - p) ** (N - i) print(s) Explanation: <center> <img src="../../img/ods_stickers.jpg"> Открытый курс по машинному обучению. Сессия № 2 Автор материала: Data Science интерн Ciklum, студент магистерской программы CSDS UCU Виталий Радченко, программист-исследователь Mail.ru Group, старший преподаватель Факультета Компьютерных Наук ВШЭ Юрий Кашницкий. Материал распространяется на условиях лицензии Creative Commons CC BY-NC-SA 4.0. Можно использовать в любых целях (редактировать, поправлять и брать за основу), кроме коммерческих, но с обязательным упоминанием автора материала. <center> Домашнее задание №5 <center> Случайный лес и логистическая регрессия в задачах кредитного скоринга и классификации отзывов к фильмам Нашей главной задачей будет построение и настройка моделей для задач кредитного скоринга и анализа отзывов к фильмам. Заполните код в клетках (где написано "Ваш код здесь") и ответьте на вопросы в веб-форме. Но для разминки решите первое задание. <font color='red'>Задание 1.</font> В зале суда есть 7 присяжных, каждый из них по отдельности с вероятностью 80% может правильно определить, виновен подсудимый или нет. С какой вероятностью присяжные все вместе вынесут правильный вердикт, если решение принимается большинством голосов? <font color='red'>Варианты ответа:</font> - 20.97% - 80.00% - 83.70% - 96.66% <font color="red">Решение</font>: поскольку большинство голосов – 4, тогда наше m = 4, N = 7, p = 0.8. Подставляем в формулу из статьи $$ \large \mu = \sum_{i=4}^{7}C_7^i0.8^i(1-0.8)^{7-i} $$ После подставления и проделывания всех операций получим ответ 96.66% End of explanation import numpy as np import pandas as pd import matplotlib.pyplot as plt %matplotlib inline Explanation: Теперь перейдем непосредственно к машинному обучению. Данные по кредитному скорингу представлены следующим образом: Прогнозируемая переменная SeriousDlqin2yrs – Человек имел долгие просрочки выплат платежей за 2 года; бинарный признак Независимые признаки age – Возраст заёмщика кредитных средств (число полных лет); тип – integer NumberOfTime30-59DaysPastDueNotWorse – Количество раз, когда человек имел просрочку выплаты других кредитов более 30-59 дней (но не больше) в течение последних двух лет; тип - integer DebtRatio – Ежемесячный отчисления на задолжености(кредиты,алименты и т.д.) / совокупный месячный доход percentage; тип – float MonthlyIncome – Месячный доход в долларах; тип – float NumberOfTimes90DaysLate – Количество раз, когда человек имел просрочку выплаты других кредитов более 90 дней; тип – integer NumberOfTime60-89DaysPastDueNotWorse – Количество раз, когда человек имел просрочку выплаты других кредитов более 60-89 дней (но не больше) в течение последних двух лет; тип – integer NumberOfDependents – Число человек в семье кредитозаёмщика; тип – integer End of explanation def impute_nan_with_median(table): for col in table.columns: table[col]= table[col].fillna(table[col].median()) return table Explanation: Напишем функцию, которая будет заменять значения NaN на медиану в каждом столбце таблицы. 
End of explanation data = pd.read_csv('../../data/credit_scoring_sample.csv', sep=";") data.head() Explanation: Считываем данные End of explanation data.dtypes Explanation: Рассмотрим типы считанных данных End of explanation ax = data['SeriousDlqin2yrs'].hist(orientation='horizontal', color='red') ax.set_xlabel("number_of_observations") ax.set_ylabel("unique_value") ax.set_title("Target distribution") print('Distribution of target:') data['SeriousDlqin2yrs'].value_counts() / data.shape[0] Explanation: Посмотрим на распределение классов в зависимой переменной End of explanation independent_columns_names = data.columns.values independent_columns_names = [x for x in data if x != 'SeriousDlqin2yrs'] independent_columns_names Explanation: Выберем названия всех признаков, кроме прогнозируемого End of explanation table = impute_nan_with_median(data) Explanation: Применяем функцию, заменяющую все значения NaN на медианное значение соответствующего столбца. End of explanation X = table[independent_columns_names] y = table['SeriousDlqin2yrs'] Explanation: Разделяем целевой признак и все остальные – получаем обучающую выборку. End of explanation def get_bootstrap_samples(data, n_samples, seed=0): # функция для генерации подвыборок с помощью бутстрэпа np.random.seed(seed) indices = np.random.randint(0, len(data), (n_samples, len(data))) samples = data[indices] return samples def stat_intervals(stat, alpha): # функция для интервальной оценки boundaries = np.percentile(stat, [100 * alpha / 2., 100 * (1 - alpha / 2.)]) return boundaries # сохранение в отдельные numpy массивы данных по просрочке churn = data[data['SeriousDlqin2yrs'] == 1]['MonthlyIncome'].values not_churn = data[data['SeriousDlqin2yrs'] == 0]['MonthlyIncome'].values # генерируем выборки с помощью бутстрэра и сразу считаем по каждой из них среднее churn_mean_scores = [np.mean(sample) for sample in get_bootstrap_samples(churn, 1000, seed=17)] not_churn_mean_scores = [np.mean(sample) for sample in get_bootstrap_samples(not_churn, 1000, seed=17)] # выводим интервальную оценку среднего print("Mean interval", stat_intervals(churn_mean_scores, 0.1)) print("Mean interval", stat_intervals(not_churn_mean_scores, 0.1)) print("Difference is", stat_intervals(not_churn_mean_scores, 0.1)[0] - stat_intervals(churn_mean_scores, 0.1)[1]) Explanation: Бутстрэп <font color='red'>Задание 2.</font> Сделайте интервальную оценку (на основе бутстрэпа) среднего дохода (MonthlyIncome) клиентов, просрочивших выплату кредита, и отдельно – для вовремя заплативших. Стройте 90% доверительный интервал. Найдите разницу между нижней границей полученного интервала для не просрочивших кредит и верхней границей – для просрочивших. То есть вас просят построить 90%-ые интервалы для дохода "хороших" клиентов $[good_income_lower, good_income_upper]$ и для "плохих" – $[bad_income_lower, bad_income_upper]$ и найти разницу $good_income_lower - bad_income_upper$. Используйте пример из статьи. Поставьте np.random.seed(17). Округлите ответ до целых. <font color='red'>Варианты ответа:</font> - 344 - 424 - 584 - 654 End of explanation from sklearn.tree import DecisionTreeClassifier from sklearn.model_selection import GridSearchCV, StratifiedKFold Explanation: Дерево решений, подбор гиперпараметров Одной из основных метрик качества модели является площадь под ROC-кривой. Значения ROC-AUC лежат от 0 до 1. Чем ближе значение ROC-AUC к 1, тем качественнее происходит классификация моделью. 
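As a quick, self-contained illustration of the ROC AUC metric itself (toy labels and scores, not the credit scoring data):

```python
from sklearn.metrics import roc_auc_score

# toy example: true classes and predicted probabilities of the positive class
y_true = [0, 0, 1, 1, 1, 0]
y_score = [0.1, 0.4, 0.35, 0.8, 0.9, 0.2]
print(roc_auc_score(y_true, y_score))  # ~0.89: most positives are ranked above most negatives
```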
Найдите с помощью GridSearchCV гиперпараметры DecisionTreeClassifier, максимизирующие площадь под ROC-кривой. End of explanation dt = DecisionTreeClassifier(random_state=17, class_weight='balanced') Explanation: Используем модуль DecisionTreeClassifier для построения дерева решений. Из-за несбалансированности классов в целевом признаке добавляем параметр балансировки. Используем также параметр random_state=17 для воспроизводимости результатов. End of explanation max_depth_values = [5, 6, 7, 8, 9] max_features_values = [4, 5, 6, 7] tree_params = {'max_depth': max_depth_values, 'max_features': max_features_values} Explanation: Перебирать будем вот такие значения гиперпараметров: End of explanation skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=17) Explanation: Зафиксируем кросс-валидацию: стратифицированная, 5 разбиений с перемешиванием, не забываем про random_state. End of explanation dt_grid_search = GridSearchCV(dt, tree_params, n_jobs=-1, scoring ='roc_auc', cv=skf) dt_grid_search.fit(X, y) round(float(dt_grid_search.best_score_), 3) dt_grid_search.best_params_ dt_grid_search.cv_results_["std_test_score"][np.argmax(dt_grid_search.cv_results_["mean_test_score"])] Explanation: <font color='red'>Задание 3.</font> Сделайте GridSearch с метрикой ROC AUC по гиперпараметрам из словаря tree_params. Какое максимальное значение ROC AUC получилось (округлите до 2 знаков после разделителя)? Назовем кросс-валидацию устойчивой, если стандартное отклонение метрики качества на кросс-валидации меньше 1%. Получилась ли кросс-валидация устойчивой при оптимальных сочетаниях гиперпараметров (т.е. обеспечивающих максимум среднего значения ROC AUC на кросс-валидации)? <font color='red'>Варианты ответа:</font> - 0.82, нет - 0.84, нет - 0.82, да - 0.84, да End of explanation from sklearn.base import BaseEstimator from sklearn.model_selection import cross_val_score class RandomForestClassifierCustom(BaseEstimator): def __init__(self, n_estimators=10, max_depth=10, max_features=10, random_state=17): self.n_estimators = n_estimators self.max_depth = max_depth self.max_features = max_features self.random_state = random_state self.trees = [] self.feat_ids_by_tree = [] def fit(self, X, y): for i in range(self.n_estimators): np.random.seed(i + self.random_state) feat_to_use_ids = np.random.choice(range(X.shape[1]), self.max_features, replace=False) examples_to_use = list(set(np.random.choice(range(X.shape[0]), X.shape[0], replace=True))) self.feat_ids_by_tree.append(feat_to_use_ids) dt = DecisionTreeClassifier(class_weight='balanced', max_depth=self.max_depth, max_features=self.max_features, random_state = self.random_state) dt.fit(X[examples_to_use, :][:, feat_to_use_ids], y[examples_to_use]) self.trees.append(dt) return self def predict_proba(self, X): predictions = [] for i in range(self.n_estimators): feat_to_use_ids = self.feat_ids_by_tree[i] predictions.append(self.trees[i].predict_proba(X[:,feat_to_use_ids])) return np.mean(predictions, axis=0) rf = RandomForestClassifierCustom(max_depth=7, max_features=6).fit(X.values, y.values) cv_aucs = cross_val_score(RandomForestClassifierCustom(max_depth=7, max_features=6), X.values, y.values, scoring="roc_auc", cv=skf) print("Средняя ROC AUC для собственного случайного леса:", np.mean(cv_aucs)) Explanation: Простая реализация случайного леса <font color='red'>Задание 4.</font> Реализуйте свой собственный случайный лес с помощью DecisionTreeClassifier с лучшими параметрами из прошлого задания. 
В нашем лесу будет 10 деревьев, предсказанные вероятности которых вам нужно усреднить. Краткая спецификация: - Используйте основу ниже - В методе fit в цикле (i от 0 до n_estimators-1) фиксируйте seed, равный (random_state + i). Почему именно так – неважно, главное чтоб на каждой итерации seed был новый, при этом все значения можно было бы воспроизвести - Зафиксировав seed, выберите без замещения max_features признаков, сохраните список выбранных id признаков в self.feat_ids_by_tree - Также сделайте bootstrap-выборку (т.е. с замещением) из множества id объектов - Обучите дерево с теми же max_depth, max_features и random_state, что и у RandomForestClassifierCustom на выборке с нужным подмножеством объектов и признаков - Метод fit возвращает текущий экземпляр класса RandomForestClassifierCustom, то есть self - В методе predict_proba опять нужен цикл по всем деревьям. У тестовой выборки нужно взять те признаки, на которых соответсвующее дерево обучалось, и сделать прогноз вероятностей (predict_proba уже для дерева). Метод должен вернуть усреднение прогнозов по всем деревьям. Проведите кросс-валидацию. Какое получилось среднее значение ROC AUC на кросс-валидации? Выберите самое близкое значение. <font color='red'>Варианты ответа:</font> - 0.823 - 0.833 - 0.843 - 0.853 End of explanation from sklearn.ensemble import RandomForestClassifier cv_aucs = cross_val_score(RandomForestClassifier(n_estimators=10, max_depth=7, max_features=6, random_state=17, n_jobs=-1, class_weight='balanced'), X.values, y.values, scoring="roc_auc", cv=skf) print("Средняя ROC AUC для случайного леса Sklearn:", np.mean(cv_aucs)) Explanation: <font color='red'>Задание 5.</font> Тут сравним нашу собственную реализацию случайного леса с sklearn-овской. Для этого воспользуйтесь RandomForestClassifier(n_jobs=1, random_state=17), укажите все те же значения max_depth и max_features, что и раньше. Какое среднее значение ROC AUC на кросс-валидации мы в итоге получили? Выберите самое близкое значение. <font color='red'>Варианты ответа:</font> - 0.823 - 0.833 - 0.843 - 0.853 End of explanation max_depth_values = range(5, 15) max_features_values = [4, 5, 6, 7] forest_params = {'max_depth': max_depth_values, 'max_features': max_features_values} rf = RandomForestClassifier(random_state=17, n_jobs=-1, class_weight='balanced') rf_grid_search = GridSearchCV(rf, forest_params, n_jobs=-1, scoring='roc_auc', cv=skf) rf_grid_search.fit(X.values, y.values) rf_grid_search.best_score_ rf_grid_search.best_params_ rf_grid_search.cv_results_["std_test_score"][np.argmax(rf_grid_search.cv_results_["mean_test_score"])] Explanation: Случайный лес sklearn, подбор гиперпараметров <font color='red'>Задание 6.</font> В 3 задании мы находили оптимальные гиперпараметры для одного дерева, но может быть, для ансамбля эти параметры дерева не будут оптимальными. Давайте проверим это с помощью GridSearchCV (RandomForestClassifier(random_state=17)). Только теперь расширим перебираемые занчения max_depth до 15 включительно, так как в лесу нужны деревья поглубже (а почему именно – вы поняли из статьи). Какими теперь стали лучшие значения гиперпараметров? 
<font color='red'>Варианты ответа:</font> - max_depth=8, max_features=4 - max_depth=9, max_features=5 - max_depth=10, max_features=6 - max_depth=11, max_features=7 End of explanation from sklearn.pipeline import Pipeline from sklearn.preprocessing import StandardScaler from sklearn.linear_model import LogisticRegression scaler = StandardScaler() logit = LogisticRegression(random_state=17, class_weight='balanced') logit_pipe = Pipeline([('scaler', scaler), ('logit', logit)]) logit_pipe_params = {'logit__C': np.logspace(-8, 8, 17)} logit_pipe_grid_search = GridSearchCV(logit_pipe, logit_pipe_params, n_jobs=-1, scoring ='roc_auc', cv=skf) logit_pipe_grid_search.fit(X.values, y.values) logit_pipe_grid_search.best_score_ logit_pipe_grid_search.best_params_ logit_pipe_grid_search.cv_results_["std_test_score"][np.argmax(logit_pipe_grid_search.cv_results_["mean_test_score"])] Explanation: Логистическая регрессия, подбор гиперпараметров <font color='red'>Задание 7.</font> Теперь сравним с логистической регрессией (укажем class_weight='balanced' и random_state=17). Сделайте полный перебор по параметру C из широкого диапазона значений np.logspace(-8, 8, 17). Только сделаем это корректно и выстроим пайплайн – сначала масштабирование, затем обучение модели. Разберитесь с пайплайнами и проведите кросс-валидацию. Какое получилось лучшее значение средней ROC AUC? Выберите самое близкое значение. <font color='red'>Варианты ответа:</font> - 0.778 - 0.788 - 0.798 - 0.808 End of explanation # Загрузим данные df = pd.read_csv("../../data/movie_reviews_train.csv", nrows=50000) # Разделим данные на текст и целевой признак X_text = df["text"] y_text = df["label"] # Соотношения классов df.label.value_counts() from sklearn.feature_extraction.text import CountVectorizer from sklearn.pipeline import Pipeline # будем разбивать на 3 фолда skf = StratifiedKFold(n_splits=3, shuffle=True, random_state=17) # в Pipeline будем сразу преобразовать наш текст и обучать логистическую регрессию classifier = Pipeline([ ('vectorizer', CountVectorizer(max_features = 100000, ngram_range = (1, 3))), ('clf', LogisticRegression(random_state=17))]) Explanation: Логистическая регрессия и случайный лес на разреженных признаках В случае небольшого числа признаков случайный лес показал себя лучше логистической регрессии. Однако один из главных недостатков деревьев проявляется при работе с разреженным данными, например с текстами. Давайте сравним логистическую регрессию и случайный лес в новой задаче. Скачайте данные с отзывами к фильмам отсюда. End of explanation %%time parameters = {'clf__C': (0.1, 1, 10, 100)} grid_search = GridSearchCV(classifier, parameters, n_jobs=-1, scoring ='roc_auc', cv=skf) grid_search = grid_search.fit(X_text, y_text) grid_search.best_params_ grid_search.best_score_ Explanation: <font color='red'>Задание 8.</font> Сделайте полный перебор по параметру C из выборки [0.1, 1, 10, 100]. Какое лучшее значение ROC AUC получилось на кросс-валидации? Выберите самое близкое значение. 
<font color='red'>Варианты ответа:</font> - 0.74 - 0.75 - 0.84 - 0.85 End of explanation classifier = Pipeline([ ('vectorizer', CountVectorizer(max_features = 100000, ngram_range = (1, 3))), ('clf', RandomForestClassifier(random_state=17, n_jobs=-1))]) min_samples_leaf = [1, 2, 3] max_features = [0.3, 0.5, 0.7] max_depth = [None] %%time parameters = {'clf__max_features': max_features, 'clf__min_samples_leaf': min_samples_leaf, 'clf__max_depth': max_depth} grid_search = GridSearchCV(classifier, parameters, n_jobs=-1, scoring ='roc_auc', cv=skf) grid_search = grid_search.fit(X_text, y_text) grid_search.best_params_ grid_search.best_score_ Explanation: <font color='red'>Задание 9.</font> Теперь попробуем сравнить со случайным лесом. Аналогично делаем перебор и получаем максимальное ROC AUC. Выберите самое близкое значение. <font color='red'>Варианты ответа:</font> - 0.74 - 0.75 - 0.84 - 0.85 End of explanation
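To see the sparse-features effect discussed in this assignment outside of the movie reviews data, here is a minimal, self-contained sketch on synthetic high-dimensional sparse data. The exact scores will vary from run to run and with the parameters, but logistic regression usually holds up noticeably better than a random forest in this regime.

```python
import numpy as np
from scipy.sparse import random as sparse_random
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.RandomState(17)
# 2000 samples, 5000 mostly-zero features with a linear ground truth
X_sparse = sparse_random(2000, 5000, density=0.001, format='csr', random_state=rng)
w = rng.randn(5000)
y = (X_sparse.dot(w) + 0.1 * rng.randn(2000) > 0).astype(int)

for model in [LogisticRegression(max_iter=1000),
              RandomForestClassifier(n_estimators=50, random_state=17, n_jobs=-1)]:
    auc = cross_val_score(model, X_sparse, y, cv=3, scoring='roc_auc').mean()
    print(type(model).__name__, round(auc, 3))
```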
5,233
Given the following text description, write Python code to implement the functionality described below step by step Description: STUDENT LOANS CHALLENGE COURSERA ML CHALLENGE <br> This notebook was created to document the steps taken to solve the Predict Students’ Ability to Repay Educational Loans posted on the Data Science Community in Coursera. The data is aviable at Step1: Acqure Data The data is acquired using pandas (I renamed the file to CollegeScorecardData.csv) Step2: Analyze Data First, let's see a little bit of the data Step3: Find information about the features Let's find more about the data Step4: There are 7703 examples and 1743 features. There are 443 float features that may be numeric, 13 integer features that may be categorical, and 1287 features that are strings, but may be numbers but data was not entered correctly (for example, if there was not data for a given feature, someone could have written "blank"). Given the high number of non numerical features, we need to explore them more. Luckly, there is a dictionary provided with the data, so we can explore it a little bit to learn about the data (The original file was converted do CSV) Step5: There are 1975 entries, but the column NAME OF DATA ELEMENT has only 1734 not nut elements, so something is up. Let's try to explore the dict a little bit more Step6: Nothing suspicius here, lets try again Step7: Aha! It seems that the feature at index 15 is categorical, and that's why the rows that follow it don't have a value under NAME OF DATA ELEMENT. Just for now, let's get rid of those NAN rows. Step8: Lets get the info of the new dict Step9: We are interested primarly in the NAME OF DATA ELEMENT, VARIABLE NAME and API data type. They seem complete. Let's see howe many data types there are Step10: Let's find out how many features have each data type Step11: So in reality, there are 1206 float features, 521 integers, and 7 string features. (For now we assume that the autocomplete type is string). This numbers differ a lot from our previus analisys, in which we had 443 float features, 13 integer features and 1287 features that are strings. Also, we cannot asume that all features of type integer are categorical, for example the ZIP code feature is integer but is not a categorical feature. Let's find more about the autocomplete features Step12: We can see that these autocomplete features can be treated as strings. Delete features that have all their values NaN Step13: Delete features that are meaningless There are features that are meaningless for the problem we are trying to solve. We need to drop these features, but we need a criterion to eliminate them. The criterion that we are going to employ is to eliminate the features that are unique for every entry and don't add information to the problem, for example if we have a unique ID for every institution, this ID doesn't add information to the problem. Also, we need to take in account that there area features that may be unique for every entry, but DOES add relevant information. For example, the tuition fees may be unique and add information. Let's find the ratio of the number of unique values over number of examples Step14: So there are some features in the data that are not explained in the dictionary. Tha is not necessarly an inconvenience, so we won't worry abot this right now. Lets find what those NTP4 features are about Step15: So those NTP4 features are about Average Net prices, so they are defenetly numeric features, and it makes sense to keep them. 
Let's run our previous analysis again with out those features so we can have a cleaner visualization as we lower the threshold Step16: Let's see what are these features about Step17: So UNITID, OPEID, OPEID6, INSTNM, INSTURL, NPCURL and ALIAS are features that have to do with the identity of the institution, so they don't add relevant information to the problem, therfore they will be eliminated. (flag_e) The ZIP code could be useful if it is used to group the schools to some sort of category about it's location. We are not going to to this so we are going to eliminate it as well. Step18: Work on the string and autocmplet data Step19: We already dropped INSTURL and NPCURL. Let's explore the STABBR feature Step20: So this feature has to do with the state where the school is located. Let's explore the ACCREDAGENCY feature Step21: Now les's explore the autocomplete data type Step22: INSTNM and ALIAS where dropped, let's see the CITY feature Step23: So STABBR, ACCREDAGENCY and CITY are features that we are going to keep, but they need to be transformed to an ordinal (using numbers) representation, since the ML algorithms use numbers and not strings. Step24: Let's see how our data looks so far Step25: Although we mapped or eliminated the string features, we still have a lot object (not numeric) data types. Let's work on them Fetures with object dtype Let's try to find a sample of features that should be numbers, but for some reason in the data they are not numbers Step26: We can see that there is a lot of data suppresed for privacy reasons. Also, there are dates, and one of them 12/31/2999 seems to be invalid. Let's go ahead and replace these values with nan, so we will treat it as any nan value. Also, if any column ends having all of its values as Nan, we will delete this column. Step27: Lets find wich features are date features Step28: It seems that SEPAR_DT_MDN don't add valuable information to the problem, so we are going to drop it Step29: Now we will transfore all the object features to numeric Step30: Now we have gotten rid of the object dtype Eliminate features with high number of NaN values We already deleted features with that had all of their value as NaN, but now we will eliminate features with a high percentage of NaN values (more than 90%) Step31: Filling missing data We need to fill the mising data. To do this we need to know if the feature is numeric or categorical. Let's use the dictionary to get that info. Step32: We can see that after the name of a categorical feature, there is at least one item with value NaN. Let's use this to get a list of categorical features Step33: To fill the missing data that belongs to a categorical feature, we will use the most common value of the data (mode). To fill the missing data that belongs to a numeric feature, we will use the the average of the data (mean). Step34: Let's save the data in a file
Python Code: # data analysis and manipulation import numpy as np import pandas as pd np.set_printoptions(threshold=1000) # visualization import seaborn as sns import matplotlib.pyplot as plt #machine learning import tensorflow as tf #Regular expression import re Explanation: STUDENT LOANS CHALLENGE COURSERA ML CHALLENGE <br> This notebook was created to document the steps taken to solve the Predict Students’ Ability to Repay Educational Loans posted on the Data Science Community in Coursera. The data is aviable at: https://ed-public-download.app.cloud.gov/downloads/Most-Recent-Cohorts-All-Data-Elements.csv. Documentation for the data is available at https://collegescorecard.ed.gov/data/documentation/. There is a data dictionary at https://collegescorecard.ed.gov/assets/CollegeScorecardDataDictionary.xlsx. WORKFLOW The Workflow suggested in https://www.kaggle.com/startupsci/titanic-data-science-solutions is going to be followed. The Workflow is the following: Question or problem definition. Acquire training and testing data. Wrangle, prepare, cleanse the data. Analyze, identify patterns, and explore the data. Model, predict and solve the problem. Visualize, report, and present the problem solving steps and final solution. Supply the results. The workflow indicates general sequence of how each stage may follow the other. However, there are use cases with exceptions: We may combine mulitple workflow stages. We may analyze by visualizing data. Perform a stage earlier than indicated. We may analyze data before and after wrangling. Perform a stage multiple times in our workflow. Visualize stage may be used multiple times. Drop a stage altogether. We may not need supply stage to productize or service enable our dataset for a competition. Problem Definition Test to see if a set of institutional features can be used to predict student otucomes, in particular debt repayment. This solution is intended to try to explore to what extent instututional characteristics as well as certain demographic factors can indicate or predict debt repayment. The (US) “College Scorecard” (the data set) includes national data on the earnings of former college graduates and new data on student debt. Import Libraries First import the libraries that are going to be used: End of explanation all_data = pd.read_csv('datasets/CollegeScorecardData.csv') Explanation: Acqure Data The data is acquired using pandas (I renamed the file to CollegeScorecardData.csv) End of explanation all_data.head() Explanation: Analyze Data First, let's see a little bit of the data End of explanation all_data.info() Explanation: Find information about the features Let's find more about the data End of explanation data_dict = pd.read_csv('datasets/CollegeScorecardDataDictionary.csv') data_dict.head() data_dict.tail() data_dict.info() Explanation: There are 7703 examples and 1743 features. There are 443 float features that may be numeric, 13 integer features that may be categorical, and 1287 features that are strings, but may be numbers but data was not entered correctly (for example, if there was not data for a given feature, someone could have written "blank"). Given the high number of non numerical features, we need to explore them more. Luckly, there is a dictionary provided with the data, so we can explore it a little bit to learn about the data (The original file was converted do CSV) End of explanation data_dict[5:10] Explanation: There are 1975 entries, but the column NAME OF DATA ELEMENT has only 1734 not nut elements, so something is up. 
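A quick way to put a number on the gap in the dictionary (a small sketch that uses only the data_dict frame loaded above):

```python
# count dictionary rows that have no NAME OF DATA ELEMENT; the reason for the gap is explored next
missing_names = data_dict['NAME OF DATA ELEMENT'].isnull().sum()
print(missing_names, 'of', len(data_dict), 'rows have no NAME OF DATA ELEMENT')
```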
Let's try to explore the dict a little bit more End of explanation data_dict[10:20] Explanation: Nothing suspicius here, lets try again End of explanation data_dict_no_nan_names = data_dict.dropna(subset=['NAME OF DATA ELEMENT']) data_dict_no_nan_names[10:20] Explanation: Aha! It seems that the feature at index 15 is categorical, and that's why the rows that follow it don't have a value under NAME OF DATA ELEMENT. Just for now, let's get rid of those NAN rows. End of explanation data_dict_no_nan_names.info() Explanation: Lets get the info of the new dict End of explanation data_dict_no_nan_names['API data type'].unique() Explanation: We are interested primarly in the NAME OF DATA ELEMENT, VARIABLE NAME and API data type. They seem complete. Let's see howe many data types there are End of explanation data_dict_no_nan_names['API data type'].value_counts() Explanation: Let's find out how many features have each data type End of explanation data_dict_no_nan_names[data_dict_no_nan_names['API data type'] == 'autocomplete'] Explanation: So in reality, there are 1206 float features, 521 integers, and 7 string features. (For now we assume that the autocomplete type is string). This numbers differ a lot from our previus analisys, in which we had 443 float features, 13 integer features and 1287 features that are strings. Also, we cannot asume that all features of type integer are categorical, for example the ZIP code feature is integer but is not a categorical feature. Let's find more about the autocomplete features: End of explanation all_data_no_na_columns = all_data.dropna(axis=1, how='all') Explanation: We can see that these autocomplete features can be treated as strings. Delete features that have all their values NaN End of explanation # Create a list to save the features that are above a certain threshold features_with_high_ratio = [] # Create a list to save the features in all_data but not in the dict features_not_in_dict = [] #Calculate the ratio for feature in all_data_no_na_columns.columns.values: # Get the row in the dict wich have VARIABLE NAME == feature row_in_dict = data_dict_no_nan_names[data_dict_no_nan_names['VARIABLE NAME'] == feature] # Get the data type of the row data_type_series = row_in_dict['API data type'] #Check if exists in the dict if data_type_series.size > 0: # Get the data type data_type = data_type_series.values[0] # float features (numeric features) are not taken in account if data_type == 'integer' or data_type == 'string' or data_type == 'autocomplete': column = all_data_no_na_columns[feature] column_no_na = column.dropna() r = column_no_na.unique().size / column_no_na.size if r > 0.8: features_with_high_ratio.append(feature) print(str(feature) + ": " + str(r)) #The feature is not in the dict else: features_not_in_dict.append(feature) print ("\nFeatures in data but not in the dictionary:" + str(features_not_in_dict)) Explanation: Delete features that are meaningless There are features that are meaningless for the problem we are trying to solve. We need to drop these features, but we need a criterion to eliminate them. The criterion that we are going to employ is to eliminate the features that are unique for every entry and don't add information to the problem, for example if we have a unique ID for every institution, this ID doesn't add information to the problem. Also, we need to take in account that there area features that may be unique for every entry, but DOES add relevant information. For example, the tuition fees may be unique and add information. 
Let's find the ratio of the number of unique values over number of examples: End of explanation npt4_pub = data_dict_no_nan_names['VARIABLE NAME'] == 'NPT4_PUB' npt41_pub = data_dict_no_nan_names['VARIABLE NAME'] == 'NPT41_PUB' npt42_pub = data_dict_no_nan_names['VARIABLE NAME'] == 'NPT42_PUB' data_dict_no_nan_names[npt4_pub | npt41_pub | npt42_pub ] Explanation: So there are some features in the data that are not explained in the dictionary. Tha is not necessarly an inconvenience, so we won't worry abot this right now. Lets find what those NTP4 features are about End of explanation # Create a list to save the features that are above a certain threshold features_with_high_ratio = [] # Create a list to save the features in all_data but not in the dict features_not_in_dict = [] #Calculate the ratio for feature in all_data_no_na_columns.columns.values: # Get the row in the dict wich have VARIABLE NAME == feature row_in_dict = data_dict_no_nan_names[data_dict_no_nan_names['VARIABLE NAME'] == feature] # Get the data type of the row data_type_series = row_in_dict['API data type'] #Check if exists in the dict if data_type_series.size > 0: # Get the data type data_type = data_type_series.values[0] # float features (numeric features) are not taken in account if (data_type == 'integer' or data_type == 'string' or data_type == 'autocomplete') \ and feature[:4] != 'NPT4': column = all_data_no_na_columns[feature] column_no_na = column.dropna() r = column_no_na.unique().size / column_no_na.size if r > 0.5: features_with_high_ratio.append(feature) print(str(feature) + ": " + str(r)) print(features_with_high_ratio) Explanation: So those NTP4 features are about Average Net prices, so they are defenetly numeric features, and it makes sense to keep them. Let's run our previous analysis again with out those features so we can have a cleaner visualization as we lower the threshold End of explanation high_ratio_features = pd.DataFrame() for feature in features_with_high_ratio: high_ratio_features = high_ratio_features.append(data_dict_no_nan_names[data_dict_no_nan_names['VARIABLE NAME'] == feature]) high_ratio_features Explanation: Let's see what are these features about: End of explanation all_data_no_id_cols = all_data_no_na_columns.drop(['UNITID', 'OPEID', 'OPEID6', 'INSTNM', 'INSTURL', 'NPCURL', 'ALIAS', 'ZIP'], axis = 1) all_data_no_id_cols.head() Explanation: So UNITID, OPEID, OPEID6, INSTNM, INSTURL, NPCURL and ALIAS are features that have to do with the identity of the institution, so they don't add relevant information to the problem, therfore they will be eliminated. (flag_e) The ZIP code could be useful if it is used to group the schools to some sort of category about it's location. We are not going to to this so we are going to eliminate it as well. End of explanation data_dict_no_nan_names[data_dict_no_nan_names['API data type'] == 'string'] Explanation: Work on the string and autocmplet data End of explanation all_data_no_id_cols['STABBR'] Explanation: We already dropped INSTURL and NPCURL. Let's explore the STABBR feature End of explanation all_data_no_id_cols['ACCREDAGENCY'] all_data_no_id_cols['ACCREDAGENCY'].value_counts() Explanation: So this feature has to do with the state where the school is located. 
Let's explore the ACCREDAGENCY feature: End of explanation data_dict_no_nan_names[data_dict_no_nan_names['API data type'] == 'autocomplete'] Explanation: Now les's explore the autocomplete data type: End of explanation all_data_no_id_cols['CITY'] Explanation: INSTNM and ALIAS where dropped, let's see the CITY feature: End of explanation all_data_no_strings = all_data_no_id_cols.copy() #STABBR mapping values = all_data_no_strings['STABBR'].unique() mapping = {} numeric_value = 1 for value in values: mapping[value] = numeric_value numeric_value += 1 all_data_no_strings['STABBR'] = all_data_no_strings['STABBR'].map(mapping) #ACCREDAGENCY mapping values = all_data_no_id_cols['ACCREDAGENCY'].unique() mapping = {} numeric_value = 1 for value in values: mapping[value] = numeric_value numeric_value += 1 all_data_no_strings['ACCREDAGENCY'] = all_data_no_strings['ACCREDAGENCY'].map(mapping) #CITY mapping values = all_data_no_id_cols['CITY'].unique() mapping = {} numeric_value = 1 for value in values: mapping[value] = numeric_value numeric_value += 1 all_data_no_strings['CITY'] = all_data_no_strings['CITY'].map(mapping) all_data_no_strings.head() Explanation: So STABBR, ACCREDAGENCY and CITY are features that we are going to keep, but they need to be transformed to an ordinal (using numbers) representation, since the ML algorithms use numbers and not strings. End of explanation all_data_no_strings.info() Explanation: Let's see how our data looks so far End of explanation regex = re.compile('[0-9]+(\.[0-9]+)?$') words = [] for column in all_data_no_strings: if all_data_no_strings[column].dtypes == 'object': for data in all_data_no_strings[column]: if not regex.match(str(data)): words.append(data) pd.Series(words).value_counts() Explanation: Although we mapped or eliminated the string features, we still have a lot object (not numeric) data types. Let's work on them Fetures with object dtype Let's try to find a sample of features that should be numbers, but for some reason in the data they are not numbers End of explanation all_data_replaced_with_nan = all_data_no_strings.replace(to_replace = 'PrivacySuppressed', value = np.nan) all_data_replaced_with_nan = all_data_replaced_with_nan.replace(to_replace = '12/31/2999', value = np.nan) all_data_replaced_with_nan = all_data_replaced_with_nan.dropna(axis=1, how='all') all_data_replaced_with_nan.info() Explanation: We can see that there is a lot of data suppresed for privacy reasons. Also, there are dates, and one of them 12/31/2999 seems to be invalid. Let's go ahead and replace these values with nan, so we will treat it as any nan value. Also, if any column ends having all of its values as Nan, we will delete this column. 
End of explanation features_with_date = [] for column in all_data_replaced_with_nan: if all_data_replaced_with_nan[column].dtypes == 'object': if all_data_replaced_with_nan[column].str.match('[0-9]{2}/[0-9]{2}/[0-9]{4}').any(): features_with_date.append(column) features_with_date data_dict_no_nan_names[data_dict_no_nan_names['VARIABLE NAME'] == 'SEPAR_DT_MDN'] Explanation: Lets find wich features are date features End of explanation all_data_no_dates = all_data_replaced_with_nan.drop(['SEPAR_DT_MDN'], axis = 1) Explanation: It seems that SEPAR_DT_MDN don't add valuable information to the problem, so we are going to drop it End of explanation all_data_no_objects = all_data_no_dates.copy() for feature in all_data_no_dates: if all_data_no_dates[feature].dtypes == 'object': #Make all data numeric all_data_no_objects[feature] = pd.to_numeric(all_data_no_dates[feature]) all_data_no_objects.info() Explanation: Now we will transfore all the object features to numeric End of explanation high_nan_features = [] for feature in all_data_no_objects: size = all_data_no_objects[feature].size number_of_valid = all_data_no_objects[feature].count() number_of_nan = size - number_of_valid ratio = number_of_nan / size if ratio > 0.9: high_nan_features.append(feature) print (len(high_nan_features)) all_data_no_high_nan = all_data_no_objects.drop(high_nan_features, axis = 1) all_data_no_high_nan.info() Explanation: Now we have gotten rid of the object dtype Eliminate features with high number of NaN values We already deleted features with that had all of their value as NaN, but now we will eliminate features with a high percentage of NaN values (more than 90%) End of explanation data_dict[15:25] Explanation: Filling missing data We need to fill the mising data. To do this we need to know if the feature is numeric or categorical. Let's use the dictionary to get that info. End of explanation categorical_features = [] is_null = data_dict['NAME OF DATA ELEMENT'].isnull() for i in range(len(is_null) - 1): if not is_null[i] and is_null[i+1]: categorical_features.append(data_dict['VARIABLE NAME'][i]) Explanation: We can see that after the name of a categorical feature, there is at least one item with value NaN. Let's use this to get a list of categorical features End of explanation all_data_no_nan = all_data_no_high_nan.copy() for feature in all_data_no_high_nan: if feature in categorical_features: mode = all_data_no_high_nan[feature].mode()[0] all_data_no_nan[feature] = all_data_no_high_nan[feature].fillna(mode) else: mean = all_data_no_high_nan[feature].mean() all_data_no_nan[feature] = all_data_no_high_nan[feature].fillna(mean) all_data_no_nan.head() all_data_no_nan.info() Explanation: To fill the missing data that belongs to a categorical feature, we will use the most common value of the data (mode). To fill the missing data that belongs to a numeric feature, we will use the the average of the data (mean). End of explanation all_data_no_nan.to_csv('datasets/CollegeScorecardDataCleaned.csv', index = False) Explanation: Let's save the data in a file End of explanation
5,234
Given the following text description, write Python code to implement the functionality described below step by step Description: <h1>Table of Contents<span class="tocSkip"></span></h1> <div class="toc" style="margin-top Step1: Each of the 4 dataframes loaded above represents a company's average sales over time. Check the descriptive statistics below. Step2: Can you identify a dataset that is least likely to represent a company's sales over time? Set the following variable to 'Yes' or 'No'. Step3: If you answered 'Yes' which dataset is least likely to represent a company's sales over time? Set the following variable to 1, 2, 3, or 4. Step4: Clue Pandas has a handy function to generate scatterplots
Python Code: # Run the following to import necessary packages and import dataset. Do not use any additional plotting libraries. import pandas as pd import numpy as np import matplotlib import matplotlib.pyplot as plt %matplotlib inline matplotlib.style.use('ggplot') d1 = "dataset/sales1.csv" d2 = "dataset/sales2.csv" d3 = "dataset/sales3.csv" d4 = "dataset/sales4.csv" df1 = pd.read_csv(d1) df2 = pd.read_csv(d2) df3 = pd.read_csv(d3) df4 = pd.read_csv(d4) df1.head(n=5) # Print n number of rows from top of dataset Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1> <div class="toc" style="margin-top: 1em;"><ul class="toc-item"><li><span><a href="#Sales" data-toc-modified-id="Sales-1">Sales</a></span><ul class="toc-item"><li><ul class="toc-item"><li><span><a href="#Instructions-/-Notes:" data-toc-modified-id="Instructions-/-Notes:-1.0.1">Instructions / Notes:</a></span></li></ul></li><li><span><a href="#Each-of-the-4-dataframes-loaded-above-represents-a-company's-average-sales-over-time.-Check-the-descriptive-statistics-below." data-toc-modified-id="Each-of-the-4-dataframes-loaded-above-represents-a-company's-average-sales-over-time.-Check-the-descriptive-statistics-below.-1.1">Each of the 4 dataframes loaded above represents a company's average sales over time. Check the descriptive statistics below.</a></span></li><li><span><a href="#Clue" data-toc-modified-id="Clue-1.2">Clue</a></span><ul class="toc-item"><li><span><a href="#Pandas-has-a-handy-function-to-generate-scatterplots:-https://pandas.pydata.org/pandas-docs/version/0.21/generated/pandas.DataFrame.plot.scatter.html" data-toc-modified-id="Pandas-has-a-handy-function-to-generate-scatterplots:-https://pandas.pydata.org/pandas-docs/version/0.21/generated/pandas.DataFrame.plot.scatter.html-1.2.1">Pandas has a handy function to generate scatterplots: <a href="https://pandas.pydata.org/pandas-docs/version/0.21/generated/pandas.DataFrame.plot.scatter.html" target="_blank">https://pandas.pydata.org/pandas-docs/version/0.21/generated/pandas.DataFrame.plot.scatter.html</a></a></span></li></ul></li></ul></li></ul></div> Sales Instructions / Notes: Read these carefully Read and execute each cell in order, without skipping forward You may create new Jupyter notebook cells to use for e.g. testing, debugging, exploring, etc.- this is encouraged in fact!- just make sure that your final answer dataframes and answers use the set variables outlined below Have fun! End of explanation df1.describe() df2.describe() df3.describe() df4.describe() Explanation: Each of the 4 dataframes loaded above represents a company's average sales over time. Check the descriptive statistics below. End of explanation least_rep_dataset_exists = 'Yes' Explanation: Can you identify a dataset that is least likely to represent a company's sales over time? Set the following variable to 'Yes' or 'No'. End of explanation least_rep_dataset = 4 Explanation: If you answered 'Yes' which dataset is least likely to represent a company's sales over time? Set the following variable to 1, 2, 3, or 4. End of explanation df1.head() # Show your revised analysis below df_all = [df1, df2, df3, df4] for df in df_all: df.plot.scatter('time', 'avg_sales') least_rep_dataset_exists_clue = 'Yes' least_rep_dataset_clue = 4 Explanation: Clue Pandas has a handy function to generate scatterplots: https://pandas.pydata.org/pandas-docs/version/0.21/generated/pandas.DataFrame.plot.scatter.html If this clue changes your answer, try again below. 
Otherwise, if you are confident in your answer above, leave the following untouched. End of explanation
5,235
Given the following text problem statement, write Python code to implement the functionality described below in problem statement Problem: I have two tensors of dimension (2*x, 1). I want to check how many of the last x elements of the two tensors are equal. I think I should be able to do this in a few lines, as with NumPy, but couldn't find a similar function.
Problem: import numpy as np import pandas as pd import torch A, B = load_data() cnt_equal = int((A[int(len(A) / 2):] == B[int(len(A) / 2):]).sum())
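A small self-contained check of the same idiom on made-up tensors (illustrative only; it skips the load_data helper so it can run anywhere):

```python
import torch

x = 3
A = torch.tensor([[1], [2], [3], [10], [20], [30]])   # shape (2*x, 1)
B = torch.tensor([[9], [9], [9], [10], [21], [30]])
cnt_equal = int((A[x:] == B[x:]).sum())               # compare only the last x rows
print(cnt_equal)  # 2 of the last 3 rows match
```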
5,236
Given the following text description, write Python code to implement the functionality described below step by step Description: How to search the IOOS CSW catalog with Python tools This notebook demonstrates a how to query a Catalog Service for the Web (CSW), like the IOOS Catalog, and to parse its results into endpoints that can be used to access the data. Step1: Let's start by creating the search filters. The filter used here constraints the search on a certain geographical region (bounding box), a time span (last week), and some CF variable standard names that represent sea surface temperature. Step2: With these 3 elements it is possible to assemble a OGC Filter Encoding (FE) using the owslib.fes* module. * OWSLib is a Python package for client programming with Open Geospatial Consortium (OGC) web service (hence OWS) interface standards, and their related content models. Step4: The csw object created from CatalogueServiceWeb did not fetched anything yet. It is the method getrecords2 that uses the filter for the search. However, even though there is a maxrecords option, the search is always limited by the server side and there is the need to iterate over multiple calls of getrecords2 to actually retrieve all records. The get_csw_records does exactly that.
Python Code: import os import sys ioos_tools = os.path.join(os.path.pardir) sys.path.append(ioos_tools) Explanation: How to search the IOOS CSW catalog with Python tools This notebook demonstrates a how to query a Catalog Service for the Web (CSW), like the IOOS Catalog, and to parse its results into endpoints that can be used to access the data. End of explanation from datetime import datetime, timedelta import dateutil.parser service_type = 'WMS' min_lon, min_lat = -90.0, 30.0 max_lon, max_lat = -80.0, 40.0 bbox = [min_lon, min_lat, max_lon, max_lat] crs = 'urn:ogc:def:crs:OGC:1.3:CRS84' # Temporal range: Last week. now = datetime.utcnow() start, stop = now - timedelta(days=(7)), now start = dateutil.parser.parse('2017-03-01T00:00:00Z') stop = dateutil.parser.parse('2017-04-01T00:00:00Z') # Ocean Model Names model_names = ['NAM', 'GFS'] Explanation: Let's start by creating the search filters. The filter used here constraints the search on a certain geographical region (bounding box), a time span (last week), and some CF variable standard names that represent sea surface temperature. End of explanation from owslib import fes from ioos_tools.ioos import fes_date_filter kw = dict(wildCard='*', escapeChar='\\', singleChar='?', propertyname='apiso:AnyText') or_filt = fes.Or([fes.PropertyIsLike(literal=('*%s*' % val), **kw) for val in model_names]) kw = dict(wildCard='*', escapeChar='\\', singleChar='?', propertyname='apiso:ServiceType') serviceType = fes.PropertyIsLike(literal=('*%s*' % service_type), **kw) begin, end = fes_date_filter(start, stop) bbox_crs = fes.BBox(bbox, crs=crs) filter_list = [ fes.And( [ bbox_crs, # bounding box begin, end, # start and end date or_filt, # or conditions (CF variable names) serviceType # search only for datasets that have WMS services ] ) ] from owslib.csw import CatalogueServiceWeb endpoint = 'https://data.ioos.us/csw' csw = CatalogueServiceWeb(endpoint, timeout=60) Explanation: With these 3 elements it is possible to assemble a OGC Filter Encoding (FE) using the owslib.fes* module. * OWSLib is a Python package for client programming with Open Geospatial Consortium (OGC) web service (hence OWS) interface standards, and their related content models. End of explanation def get_csw_records(csw, filter_list, pagesize=10, maxrecords=1000): Iterate `maxrecords`/`pagesize` times until the requested value in `maxrecords` is reached. from owslib.fes import SortBy, SortProperty # Iterate over sorted results. sortby = SortBy([SortProperty('dc:title', 'ASC')]) csw_records = {} startposition = 0 nextrecord = getattr(csw, 'results', 1) while nextrecord != 0: csw.getrecords2(constraints=filter_list, startposition=startposition, maxrecords=pagesize, sortby=sortby) csw_records.update(csw.records) if csw.results['nextrecord'] == 0: break startposition += pagesize + 1 # Last one is included. if startposition >= maxrecords: break csw.records.update(csw_records) get_csw_records(csw, filter_list, pagesize=10, maxrecords=1000) records = '\n'.join(csw.records.keys()) print('Found {} records.\n'.format(len(csw.records.keys()))) for key, value in list(csw.records.items()): print('[{}]\n{}\n'.format(value.title, key)) csw.request #write to JSON for use in TerriaJS csw_request = '"{}": {}"'.format('getRecordsTemplate',str(csw.request,'utf-8')) import io import json with io.open('query.json', 'a', encoding='utf-8') as f: f.write(json.dumps(csw_request, ensure_ascii=False)) f.write('\n') Explanation: The csw object created from CatalogueServiceWeb did not fetched anything yet. 
It is the getrecords2 method that actually runs the search with the filter. However, even though there is a maxrecords option, the number of records returned by a single call is capped on the server side, so getrecords2 has to be called repeatedly to retrieve all records. The get_csw_records helper does exactly that. End of explanation
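To make the result easier to scan, here is a small follow-up sketch that groups the fetched records by which model name appears in their title; it relies only on the csw.records dictionary and the title attribute already used above.

```python
# group the fetched records by the model name mentioned in their title
by_model = {name: [] for name in model_names}
for key, record in csw.records.items():
    for name in model_names:
        if name.lower() in record.title.lower():
            by_model[name].append(record.title)

for name, titles in by_model.items():
    print('{}: {} records'.format(name, len(titles)))
```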
5,237
Given the following text description, write Python code to implement the functionality described below step by step Description: 파이썬 기본 자료형 문제 실수(부동소수점)를 하나 입력받아, 그 숫자를 반지름으로 하는 원의 면적과 둘레의 길이를 튜플로 리턴하는 함수 circle_radius를 구현하는 코드를 작성하라, ``` . ``` 문자열 자료형 아래 사이트는 커피 콩의 현재 시세를 보여준다. http Step1: 문제 0부터 1000까지의 숫자들 중에서 홀수이면서 7의 배수인 숫자들의 리스트를 조건제시법으로 생성하는 코드를 작성하라. ``` . ``` 모범답안 Step2: 문제 0부터 1000까지의 숫자들 중에서 홀수이면서 7의 배수인 숫자들을 제곱하여 1을 더한 값들의 리스트를 조건제시법으로 생성하는 코드를 작성하라. 힌트 Step3: csv 파일 읽어들이기 'Seoul_pop2.csv' 파일에는 아래 내용이 저장되어 있다" ```csv 1949년부터 2010년 사이의 서울과 수도권 인구 증가율(%) 구간,서울,수도권 1949-1955,9.12,-5.83 1955-1960,55.88,32.22 1960-1966,55.12,32.76 1966-1970,45.66,28.76 1970-1975,24.51,22.93 1975-1980,21.38,21.69 1980-1985,15.27,18.99 1985-1990,10.15,17.53 1990-1995,-3.64,8.54 1995-2000,-3.55,5.45 2000-2005,-0.93,6.41 2005-2010,-1.34,3.71 ``` 확장자가 csv인 파일은 데이터를 저장하기 위해 주로 사용한다. csv는 Comma-Separated Values의 줄임말로 데이터가 쉼표(콤마)로 구분되어 정리되어 있는 파일을 의미한다. csv 파일을 읽어드리는 방법은 csv 모듈의 reader() 함수를 활용하면 매우 쉽다. reader() 함수의 리턴값은 csv 파일에 저장된 내용을 줄 단위로, 쉼표 단위로 끊어서 2차원 리스트이다. 예를 들어, 아래 코드는 언급된 파일에 저장된 내용의 각 줄을 출력해준다. Step4: 문제 위 코드에서 5번 째 줄을 아래와 같이 하면 오류 발생한다. if row[0][0] == '#' or len(row) == 0 Step5: 문제 아래 모양의 어레이를 생성하는 코드를 작성하라. 단, 언급된 네 개의 함수들만 사용해야 하며, 수동으로 생성된 리스트나 어레이는 허용되지 않는다. $$\left [ \begin{matrix} 2 & 0 & 0 \ 0 & 2 & 0 \ 0 & 0 & 2 \end{matrix} \right ]$$ ``` . ``` 견본답안 Step6: 문제 아래 모양의 어레이를 생성하는 코드를 작성하라. 단, 언급된 네 개의 함수만 사용해야 하며, 수동으로 생성된 리스트나 어레이는 허용되지 않는다. $$\left [ \begin{matrix} 2 & 0 & 0 \ 0 & 4 & 0 \ 0 & 0 & 6 \end{matrix} \right ]$$ ``` . ``` 견본답안 Step7: 넘파이의 linspace() 함수 활용 numpy 모듈의 linspace() 함수는 지정된 구간을 정해진 크기로 일정하게 쪼개는 어래이를 생성한다. 예를 들어, 0부터 3사이의 구간을 균등하게 30개로 쪼개고자 하면 아래와 같이 실행하면 된다. Step8: 문제 0부터 1사이의 구간을 균등하게 10개로 쪼개어 각 항목을 제곱하는 코드를 작성하라. ``` . ``` 견본답안 Step9: 넘파이 활용 기초 2 population.txt 파일은 1900년부터 1920년까지 캐나다 북부지역에서 서식한 산토끼(hare)와 스라소니(lynx)의 숫자, 그리고 채소인 당근(carrot)의 재배숫자를 아래 내용으로 순수 텍스트 데이터로 담고 있다. ``` year hare lynx carrot 1900 30e3 4e3 48300 1901 47.2e3 6.1e3 48200 1902 70.2e3 9.8e3 41500 1903 77.4e3 35.2e3 38200 1904 36.3e3 59.4e3 40600 1905 20.6e3 41.7e3 39800 1906 18.1e3 19e3 38600 1907 21.4e3 13e3 42300 1908 22e3 8.3e3 44500 1909 25.4e3 9.1e3 42100 1910 27.1e3 7.4e3 46000 1911 40.3e3 8e3 46800 1912 57e3 12.3e3 43800 1913 76.6e3 19.5e3 40900 1914 52.3e3 45.7e3 39400 1915 19.5e3 51.1e3 39000 1916 11.2e3 29.7e3 36700 1917 7.6e3 15.8e3 41800 1918 14.6e3 9.7e3 43300 1919 16.2e3 10.1e3 41300 1920 24.7e3 8.6e3 47300 ``` 아래 코드는 연도, 토끼 개체수, 스라소리 개체수, 당근 개체수를 따로따로 떼어 내어 각각 어레이로 변환하여 year, hares, lynxes, carrots 변수에 저장하는 코드이다. Step10: 문제 위 코드에서 np.loadtxt 함수의 작동방식을 간단하게 설명하라. ``` . ``` 문제 위 코드에서 data.T에 대해 간단하게 설명하라. ``` . ``` 아래 코드는 토끼, 스라소니, 당근 각각의 개체수의 연도별 변화를 선그래프로 보여주도록 하는 코드이다.
Python Code: odd_1000 = [x**2 for x in range(0, 1000) if x % 2 == 1] # 리스트의 처음 다섯 개 항목 odd_1000[:5] Explanation: 파이썬 기본 자료형 문제 실수(부동소수점)를 하나 입력받아, 그 숫자를 반지름으로 하는 원의 면적과 둘레의 길이를 튜플로 리턴하는 함수 circle_radius를 구현하는 코드를 작성하라, ``` . ``` 문자열 자료형 아래 사이트는 커피 콩의 현재 시세를 보여준다. http://beans-r-us.appspot.com/prices.html 위 사이트의 내용을 html 소스코드로 보면 다음과 같으며, 검색된 시간의 커피콩의 가격은 Current price of coffee beans 문장이 담겨 있는 줄에 명시되어 있다. ```html <html><head><title>Welcome to the Beans'R'Us Pricing Page</title> <link rel="stylesheet" type="text/css" href="beansrus.css" /> </head><body> <h2>Welcome to the Beans'R'Us Pricing Page</h2> <p>Current price of coffee beans = <strong>$5.94</strong></p> <p>Price valid for 15 minutes from Sun Sep 10 12:21:58 2017.</p> </body></html> ``` 문제 아래 코드가 하는 일을 설명하라. ``` from future import print_function import urllib2 import time def price_setter(b_price, a_price): bean_price = b_price while 5.5 < bean_price < 6.0: time.sleep(1) page = urllib2.urlopen("http://beans-r-us.appspot.com/prices.html") text = page.read().decode("utf8") price_index = text.find("&gt;$") + 2 bean_price_str = text[price_index : price_index + 4] bean_price = float(bean_price_str) print("현재 커피콩 가격이", bean_price, "달러 입니다.") if bean_price &lt;= 5.5: print("아메리카노 가격을", a_price, "달러만큼 인하하세요!") else: print("아메리카노 가격을", a_price, "달러만큼 인상하세요!") ``` ``` .``` 오류 및 예외 처리 문제 아래 코드가 하는 일을 설명하라. ``` number_to_square = raw_input("A number to divide 100: ") try: number = float(number_to_square) print("100을 입력한 값으로 나눈 결과는", 100/number, "입니다.") except ZeroDivisionError: raise ZeroDivisionError('0이 아닌 숫자를 입력하세요.') except ValueError: raise ValueError('숫자를 입력하세요.') ``` ``` .``` 리스트 문제 아래 설명 중에서 리스트 자료형의 성질에 해당하는 항목을 모두 골라라. 가변 자료형이다. 불변 자료형이다. 인덱스와 슬라이싱을 활용하여 항목의 내용을 확인하고 활용할 수 있다. 항목들이 임의의 자료형을 가질 수 있다. 리스트 길이에 제한이 있다. 신성정보 등 중요한 데이터를 보관할 때 사용한다. ``` ``` 견본답안: 1, 3, 4 사전 record_list.txt 파일은 여덟 명의 수영 선수의 50m 기록을 담고 있다. txt player1 21.09 player2 20.32 player3 21.81 player4 22.97 player5 23.29 player6 22.09 player7 21.20 player8 22.16 문제 아래코드가 하는 일을 설명하라. ```python from future import print_function record_f = open("record_list.txt", 'r') record = record_f.read().decode('utf8').split('\n') record_dict = {} for line in record: (player, p_record) = line.split() record_dict[p_record] = player record_f.close() record_list = record_dict.keys() record_list.sort() for i in range(3): item = record_list[i] print(str(i+1) + ":", record_dict[item], item) ``` ``` .``` 튜플 문제 아래 설명 중에서 튜플 자료형의 성질에 해당하는 항목을 모두 골라라. 가변 자료형이다. 불변 자료형이다. 인덱스와 슬라이싱을 활용하여 항목의 내용을 확인하고 활용할 수 있다. 항목들이 임의의 자료형을 가질 수 있다. 튜플 길이에 제한이 있다. 신성정보 등 중요한 데이터를 보관할 때 사용한다. ``` ``` 견본답안: 2, 3, 4, 6 리스트 조건제시법 아래 코드는 0부터 1000 사이의 홀수들의 제곱의 리스트를 조건제시법으로 생성한다 End of explanation odd_3x7 = [x for x in range(0, 1000) if x % 2 == 1 and x % 7 == 0] # 리스트의 처음 다섯 개 항목 odd_3x7[:5] Explanation: 문제 0부터 1000까지의 숫자들 중에서 홀수이면서 7의 배수인 숫자들의 리스트를 조건제시법으로 생성하는 코드를 작성하라. ``` . ``` 모범답안: End of explanation def square_plus1(x): return x**2 + 1 odd_3x7_spl = [square_plus1(x) for x in odd_3x7] # 리스트의 처음 다섯 개 항목 odd_3x7_spl[:5] Explanation: 문제 0부터 1000까지의 숫자들 중에서 홀수이면서 7의 배수인 숫자들을 제곱하여 1을 더한 값들의 리스트를 조건제시법으로 생성하는 코드를 작성하라. 힌트: 아래와 같이 정의된 함수를 활용한다. 
$$f(x) = x^2 + 1$$ ``` .``` 견본답안: End of explanation import csv with open('Seoul_pop2.csv', 'rb') as f: reader = csv.reader(f) for row in reader: if len(row) == 0 or row[0][0] == '#': continue else: print(row) Explanation: csv 파일 읽어들이기 'Seoul_pop2.csv' 파일에는 아래 내용이 저장되어 있다" ```csv 1949년부터 2010년 사이의 서울과 수도권 인구 증가율(%) 구간,서울,수도권 1949-1955,9.12,-5.83 1955-1960,55.88,32.22 1960-1966,55.12,32.76 1966-1970,45.66,28.76 1970-1975,24.51,22.93 1975-1980,21.38,21.69 1980-1985,15.27,18.99 1985-1990,10.15,17.53 1990-1995,-3.64,8.54 1995-2000,-3.55,5.45 2000-2005,-0.93,6.41 2005-2010,-1.34,3.71 ``` 확장자가 csv인 파일은 데이터를 저장하기 위해 주로 사용한다. csv는 Comma-Separated Values의 줄임말로 데이터가 쉼표(콤마)로 구분되어 정리되어 있는 파일을 의미한다. csv 파일을 읽어드리는 방법은 csv 모듈의 reader() 함수를 활용하면 매우 쉽다. reader() 함수의 리턴값은 csv 파일에 저장된 내용을 줄 단위로, 쉼표 단위로 끊어서 2차원 리스트이다. 예를 들어, 아래 코드는 언급된 파일에 저장된 내용의 각 줄을 출력해준다. End of explanation np.arange(3, 10, 3) np.zeros((2,3)) np.ones((2,)) np.diag([1, 2, 3, 4]) np.ones((3,3)) * 2 Explanation: 문제 위 코드에서 5번 째 줄을 아래와 같이 하면 오류 발생한다. if row[0][0] == '#' or len(row) == 0: 이유를 간단하게 설명하라. ``` . ``` 넘파이 활용 기초 1 넘파이 어레이를 생성하는 방법은 몇 개의 기본적인 함수를 이용하면 된다. np.arange() np.zeros() np.ones() np.diag() 예제: End of explanation np.diag(np.ones((3,))*2) Explanation: 문제 아래 모양의 어레이를 생성하는 코드를 작성하라. 단, 언급된 네 개의 함수들만 사용해야 하며, 수동으로 생성된 리스트나 어레이는 허용되지 않는다. $$\left [ \begin{matrix} 2 & 0 & 0 \ 0 & 2 & 0 \ 0 & 0 & 2 \end{matrix} \right ]$$ ``` . ``` 견본답안: End of explanation np.diag(np.arange(2, 7, 2)) Explanation: 문제 아래 모양의 어레이를 생성하는 코드를 작성하라. 단, 언급된 네 개의 함수만 사용해야 하며, 수동으로 생성된 리스트나 어레이는 허용되지 않는다. $$\left [ \begin{matrix} 2 & 0 & 0 \ 0 & 4 & 0 \ 0 & 0 & 6 \end{matrix} \right ]$$ ``` . ``` 견본답안: End of explanation xs = np.linspace(0, 3, 30) xs Explanation: 넘파이의 linspace() 함수 활용 numpy 모듈의 linspace() 함수는 지정된 구간을 정해진 크기로 일정하게 쪼개는 어래이를 생성한다. 예를 들어, 0부터 3사이의 구간을 균등하게 30개로 쪼개고자 하면 아래와 같이 실행하면 된다. End of explanation np.linspace(0,1, 10) ** 2 Explanation: 문제 0부터 1사이의 구간을 균등하게 10개로 쪼개어 각 항목을 제곱하는 코드를 작성하라. ``` . ``` 견본답안: End of explanation data = np.loadtxt('populations.txt') year, hares, lynxes, carrots = data.T Explanation: 넘파이 활용 기초 2 population.txt 파일은 1900년부터 1920년까지 캐나다 북부지역에서 서식한 산토끼(hare)와 스라소니(lynx)의 숫자, 그리고 채소인 당근(carrot)의 재배숫자를 아래 내용으로 순수 텍스트 데이터로 담고 있다. ``` year hare lynx carrot 1900 30e3 4e3 48300 1901 47.2e3 6.1e3 48200 1902 70.2e3 9.8e3 41500 1903 77.4e3 35.2e3 38200 1904 36.3e3 59.4e3 40600 1905 20.6e3 41.7e3 39800 1906 18.1e3 19e3 38600 1907 21.4e3 13e3 42300 1908 22e3 8.3e3 44500 1909 25.4e3 9.1e3 42100 1910 27.1e3 7.4e3 46000 1911 40.3e3 8e3 46800 1912 57e3 12.3e3 43800 1913 76.6e3 19.5e3 40900 1914 52.3e3 45.7e3 39400 1915 19.5e3 51.1e3 39000 1916 11.2e3 29.7e3 36700 1917 7.6e3 15.8e3 41800 1918 14.6e3 9.7e3 43300 1919 16.2e3 10.1e3 41300 1920 24.7e3 8.6e3 47300 ``` 아래 코드는 연도, 토끼 개체수, 스라소리 개체수, 당근 개체수를 따로따로 떼어 내어 각각 어레이로 변환하여 year, hares, lynxes, carrots 변수에 저장하는 코드이다. End of explanation plt.axes([0.2, 0.1, 0.5, 0.8]) plt.plot(year, hares, year, lynxes, year, carrots) plt.legend(('Hare', 'Lynx', 'Carrot'), loc=(1.05, 0.5)) Explanation: 문제 위 코드에서 np.loadtxt 함수의 작동방식을 간단하게 설명하라. ``` . ``` 문제 위 코드에서 data.T에 대해 간단하게 설명하라. ``` . ``` 아래 코드는 토끼, 스라소니, 당근 각각의 개체수의 연도별 변화를 선그래프로 보여주도록 하는 코드이다. End of explanation
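For the two questions just above about np.loadtxt and data.T, a small sketch of what the transpose buys you; it assumes populations.txt has the four whitespace-separated columns listed earlier and that its header line starts with '#' so np.loadtxt skips it as a comment.

```python
import numpy as np

data = np.loadtxt('populations.txt')  # a '#' header line is treated as a comment
print(data.shape)                     # (21, 4): one row per year, one column per series

# data.T swaps rows and columns, so the (21, 4) array becomes (4, 21)
# and its four rows can be unpacked into four 1-D arrays.
year, hares, lynxes, carrots = data.T
print(year[:3])   # [1900. 1901. 1902.]
print(hares[:3])  # [30000. 47200. 70200.]
```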
5,238
Given the following text description, write Python code to implement the functionality described below step by step Description: Plotting data with Python matplotlib is the main plotting library for Python Step1: Simple Plotting Step2: Simple plotting - with style The default style of matplotlib is a bit lacking in style. Some would term it ugly. The new version of matplotlib has added some new styles that you can use in place of the default. Changing the style will affect all of the rest of the plots in the notebook. Examples of the various styles can be found here Step3: In addition, you can specify colors in many different ways Step4: Simple Histograms Step5: You have better control of the plot with the object-oriented interface. While most plt functions translate directly to ax methods (such as plt.plot() → ax.plot(), plt.legend() → ax.legend(), etc.), this is not the case for all commands. In particular, functions to set limits, labels, and titles are slightly modified. For transitioning between matlab-style functions and object-oriented methods, make the following changes Step6: Plotting from multiple external data files Step7: Legend loc codes Step8: An Astronomical Example - Color Magnitude Diagrams Step9: VizieR catalog database Astropy can read data directly from the VizieR catalog database. For example, let us read in the stars from the article UBVRI photometric standard stars around the celestial equator (Landolt 1983 AJ) Step10: Polar Plots Step11: Everyone likes Pie Step12: 3D plots
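Step2 above notes that plt.style.use changes every plot that follows; when that is not what you want, matplotlib also provides a style context manager that scopes the change. A minimal sketch, assuming the standard plt.style.context API and the 'ggplot' style used later in this notebook.

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 2, 100)

# 'ggplot' applies only inside the with-block; figures created
# afterwards fall back to whatever style was active before.
with plt.style.context('ggplot'):
    plt.plot(x, np.cos(2 * np.pi * x) * np.exp(-x))
    plt.title('styled only inside the context')
```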
Python Code: %matplotlib inline import matplotlib.pyplot as plt import numpy as np from astropy.table import QTable Explanation: Plotting data with Python matplotlib is the main plotting library for Python End of explanation t = np.linspace(0,2,100) # 100 points linearly spaced between 0.0 and 2.0 s = np.cos(2*np.pi*t) * np.exp(-t) # s if a function of t plt.plot(t,s) Explanation: Simple Plotting End of explanation plt.style.available plt.style.use('ggplot') plt.plot(t,s) plt.xlabel('time (s)') plt.ylabel('voltage (mV)') plt.title('This is a title') plt.ylim(-1.5,1.5) plt.plot(t, s, color='b', marker='None', linestyle='--'); # adding the ';' at then suppresses the Out[] line mask1 = np.where((s>-0.4) & (s<0)) plt.plot(t, s, color='b', marker='None', linestyle='--') plt.plot(t[mask1],s[mask1],color="g",marker="o",linestyle="None",markersize=8); Explanation: Simple plotting - with style The default style of matplotlib is a bit lacking in style. Some would term it ugly. The new version of matplotlib has added some new styles that you can use in place of the default. Changing the style will effect all of the rest of the plots on the notebook. Examples of the various styles can be found here End of explanation from astropy import units as u from astropy.visualization import quantity_support quantity_support() v = 10 * u.m / u.s t2 = np.linspace(0,10,1000) * u.s y = v * t2 plt.plot(t2,y) Explanation: In addition, you can specify colors in many different ways: Grayscale intensities: color = '0.8' RGB triplets: color = (0.3, 0.1, 0.9) RGB triplets (with transparency): color = (0.3, 0.1, 0.9, 0.4) Hex strings: color = '#7ff00' HTML color names: color = 'Chartreuse' a name from the xkcd color survey prefixed with 'xkcd:' (e.g., 'xkcd:poison green') matplotlib will work with Astropy units End of explanation #Histogram of "h" with 20 bins np.random.seed(42) h = np.random.randn(500) plt.hist(h, bins=20, facecolor='MediumOrchid'); mask2 = np.where(h>0.0) np.random.seed(42) j = np.random.normal(2.0,1.0,300) # normal dist, ave = 2.0, std = 1.0 plt.hist(h[mask2], bins=20, facecolor='#b20010', histtype='stepfilled') plt.hist(j, bins=20, facecolor='#0200b0', histtype='stepfilled', alpha = 0.30); Explanation: Simple Histograms End of explanation fig,ax = plt.subplots(1,1) # One window fig.set_size_inches(11,8.5) # (width,height) - letter paper landscape fig.tight_layout() # Make better use of space on plot ax.set_xlim(0.0,1.5) ax.spines['bottom'].set_position('zero') # Move the bottom axis line to x = 0 ax.set_xlabel("This is X") ax.set_ylabel("This is Y") ax.plot(t, s, color='b', marker='None', linestyle='--') ax.text(0.8, 0.6, 'Bad Wolf', color='green', fontsize=36) # You can place text on the plot ax.vlines(0.4, -0.4, 0.8, color='m', linewidth=3) # vlines(x, ymin, ymax) ax.hlines(0.8, 0.2, 0.6, color='y', linewidth=5) # hlines(y, xmin, xmax) fig.savefig('fig1.png', bbox_inches='tight') import glob glob.glob('*.png') Explanation: You have better control of the plot with the object oriented interface. While most plt functions translate directly to ax methods (such as plt.plot() → ax.plot(), plt.legend() → ax.legend(), etc.), this is not the case for all commands. In particular, functions to set limits, labels, and titles are slightly modified. 
For transitioning between matlab-style functions and object-oriented methods, make the following changes: plt.xlabel() → ax.set_xlabel() plt.ylabel() → ax.set_ylabel() plt.xlim() → ax.set_xlim() plt.ylim() → ax.set_ylim() plt.title() → ax.set_title() End of explanation data_list = glob.glob('./MyData/12_data*.csv') data_list fig,ax = plt.subplots(1,1) # One window fig.set_size_inches(11,8.5) # (width,height) - letter paper landscape fig.tight_layout() # Make better use of space on plot ax.set_xlim(0.0,80.0) ax.set_ylim(15.0,100.0) ax.set_xlabel("This is X") ax.set_ylabel("This is Y") for file in data_list: data = QTable.read(file, format='ascii.csv') ax.plot(data['x'], data['y'],marker="o",linestyle="None",markersize=7,label=file) ax.legend(loc=0,shadow=True); Explanation: Plotting from multiple external data files End of explanation fig, ax = plt.subplots(2,2) # 2 rows 2 columns fig.set_size_inches(11,8.5) # width, height fig.tight_layout() # Make better use of space on plot ax[0,0].plot(t, s, color='b', marker='None', linestyle='--') # Plot at [0,0] ax[0,1].hist(h, bins=20, facecolor='MediumOrchid') # Plot at [0,1] ax[1,0].hist(j,bins=20, facecolor='HotPink', histtype='stepfilled') # Plot at [1,0] ax[1,0].vlines(2.0, 0.0, 50.0, color='xkcd:seafoam green', linewidth=3) ax[1,1].set_xscale('log') # Plot at [1,1] - x-axis set to log ax[1,1].plot(t, s, color='r', marker='None', linestyle='--'); Explanation: Legend loc codes: 0 best 6 center left 1 upper right 7 center right 2 upper left 8 lower center 3 lower left 9 upper center 4 lower right 10 center Subplots subplot(rows,columns) Access each subplot like a matrix. [x,y] For example: subplot(2,2) makes four panels with the coordinates: End of explanation T = QTable.read('M15_Bright.csv', format='ascii.csv') T[0:3] fig, ax = plt.subplots(1,1) # 1 row, 2 colums fig.set_size_inches(15,10) fig.tight_layout() BV = T['Bmag'] - T['Vmag'] V = T['Vmag'] ax.set_xlim(-0.25,1.5) ax.set_ylim(12,19) ax.set_aspect(1/6) # Make 1 unit in X = 6 units in Y ax.invert_yaxis() # Magnitudes increase to smaller values ax.set_xlabel("B-V") ax.set_ylabel("V") ax.plot(BV,V,color="b",marker="o",linestyle="None",markersize=5); # overplotting mask_color = np.where((V < 16.25) & (BV < 0.55)) ax.plot(BV[mask_color], V[mask_color],color="r",marker="o",linestyle="None",markersize=4, alpha=0.5); Explanation: An Astronomical Example - Color Magnitude Diagrams End of explanation from astropy.io import ascii star_table = ascii.read("ftp://cdsarc.u-strasbg.fr/pub/cats/II/118/main", readme="ftp://cdsarc.u-strasbg.fr/pub/cats/II/118/ReadMe") star_table.info star_table[0:3] ra_to_decimal = star_table['RAh'] + (star_table['RAm'] / 60) + (star_table['RAs'] / 3600) dec_to_decimal = star_table['DEd'] + (star_table['DEm'] / 60) + (star_table['DEs'] / 3600) neg_mask = np.where(star_table['DE-'] == "-") dec_to_decimal[neg_mask] *= -1 fig, ax = plt.subplots(1,2) # 1 row, 2 colums fig.set_size_inches(12,5) fig.tight_layout() ax[0].invert_xaxis() # RA is backward ax[0].set_xlabel("RA") ax[0].set_ylabel("Dec") ax[0].plot(ra_to_decimal,dec_to_decimal,color="b",marker="o",linestyle="None",markersize=5) ax[1].invert_yaxis() # Magnitudes are backward ax[1].set_aspect(1/4) ax[1].set_xlabel("B-V") ax[1].set_ylabel("V") ax[1].plot(star_table['B-V'],star_table['Vmag'],color="r",marker="s",linestyle="None",markersize=5); Explanation: VizieR catalog database Astropy can read data directly from the VizieR catalog database. 
For example, let us read in the stars from the article UBVRI photometric standard stars around the celestial equator (Landolt 1983 AJ) End of explanation theta = np.linspace(0,2*np.pi,1000) fig = plt.figure() ax = fig.add_subplot(111,projection='polar') fig.set_size_inches(6,6) # (width,height) in inches fig.tight_layout() # Make better use of space on plot ax.plot(theta,theta/5.0,label="spiral") ax.plot(theta,np.cos(4*theta),label="flower") ax.legend(loc=2, frameon=False); Explanation: Polar Plots End of explanation fig,ax = plt.subplots(1,1) # One window fig.set_size_inches(6,6) # (width,height) in inches fig.tight_layout() # Make better use of space on plot ax.set_aspect('equal') labels = np.array(['John', 'Paul' ,'George' ,'Ringo']) # Name of slices sizes = np.array([0.3, 0.15, 0.45, 0.10]) # Relative size of slices colors = np.array(['r', 'g', 'b', 'c']) # Color of Slices explode = np.array([0, 0, 0.1, 0]) # Offset slice 3 ax.pie(sizes, explode=explode, labels=labels, colors=colors, startangle=90, shadow=True); Explanation: Everyone likes Pie End of explanation from mpl_toolkits.mplot3d import Axes3D fig = plt.figure() ax = fig.add_subplot(111,projection='3d') fig.set_size_inches(9,9) fig.tight_layout() xx = np.cos(3*theta) yy = np.sin(2*theta) ax.plot(theta, xx, yy, c = "Maroon") ax.scatter(theta, xx, yy, c = "Navy", s = 15); ax.view_init(azim = -140, elev = 15) Explanation: 3D plots End of explanation
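To make the plt-to-ax translation listed in Step5 concrete, here is a minimal sketch that sets the same labels, limits, and title through the object-oriented interface; it re-creates the t and s arrays from the start of this notebook and uses only the ax.set_* methods named above.

```python
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 2, 100)
s = np.cos(2 * np.pi * t) * np.exp(-t)

fig, ax = plt.subplots(1, 1)
ax.plot(t, s, color='b', linestyle='--')
ax.set_xlabel('time (s)')        # plt.xlabel()  -> ax.set_xlabel()
ax.set_ylabel('voltage (mV)')    # plt.ylabel()  -> ax.set_ylabel()
ax.set_xlim(0.0, 2.0)            # plt.xlim()    -> ax.set_xlim()
ax.set_ylim(-1.5, 1.5)           # plt.ylim()    -> ax.set_ylim()
ax.set_title('object-oriented version')  # plt.title() -> ax.set_title()
```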
5,239
Given the following text description, write Python code to implement the functionality described below step by step Description: ES-DOC CMIP6 Model Properties - Land MIP Era Step1: Document Authors Set document authors Step2: Document Contributors Specify document contributors Step3: Document Publication Specify document publication status Step4: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Conservation Properties 3. Key Properties --&gt; Timestepping Framework 4. Key Properties --&gt; Software Properties 5. Grid 6. Grid --&gt; Horizontal 7. Grid --&gt; Vertical 8. Soil 9. Soil --&gt; Soil Map 10. Soil --&gt; Snow Free Albedo 11. Soil --&gt; Hydrology 12. Soil --&gt; Hydrology --&gt; Freezing 13. Soil --&gt; Hydrology --&gt; Drainage 14. Soil --&gt; Heat Treatment 15. Snow 16. Snow --&gt; Snow Albedo 17. Vegetation 18. Energy Balance 19. Carbon Cycle 20. Carbon Cycle --&gt; Vegetation 21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis 22. Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration 23. Carbon Cycle --&gt; Vegetation --&gt; Allocation 24. Carbon Cycle --&gt; Vegetation --&gt; Phenology 25. Carbon Cycle --&gt; Vegetation --&gt; Mortality 26. Carbon Cycle --&gt; Litter 27. Carbon Cycle --&gt; Soil 28. Carbon Cycle --&gt; Permafrost Carbon 29. Nitrogen Cycle 30. River Routing 31. River Routing --&gt; Oceanic Discharge 32. Lakes 33. Lakes --&gt; Method 34. Lakes --&gt; Wetlands 1. Key Properties Land surface key properties 1.1. Model Overview Is Required Step5: 1.2. Model Name Is Required Step6: 1.3. Description Is Required Step7: 1.4. Land Atmosphere Flux Exchanges Is Required Step8: 1.5. Atmospheric Coupling Treatment Is Required Step9: 1.6. Land Cover Is Required Step10: 1.7. Land Cover Change Is Required Step11: 1.8. Tiling Is Required Step12: 2. Key Properties --&gt; Conservation Properties TODO 2.1. Energy Is Required Step13: 2.2. Water Is Required Step14: 2.3. Carbon Is Required Step15: 3. Key Properties --&gt; Timestepping Framework TODO 3.1. Timestep Dependent On Atmosphere Is Required Step16: 3.2. Time Step Is Required Step17: 3.3. Timestepping Method Is Required Step18: 4. Key Properties --&gt; Software Properties Software properties of land surface code 4.1. Repository Is Required Step19: 4.2. Code Version Is Required Step20: 4.3. Code Languages Is Required Step21: 5. Grid Land surface grid 5.1. Overview Is Required Step22: 6. Grid --&gt; Horizontal The horizontal grid in the land surface 6.1. Description Is Required Step23: 6.2. Matches Atmosphere Grid Is Required Step24: 7. Grid --&gt; Vertical The vertical grid in the soil 7.1. Description Is Required Step25: 7.2. Total Depth Is Required Step26: 8. Soil Land surface soil 8.1. Overview Is Required Step27: 8.2. Heat Water Coupling Is Required Step28: 8.3. Number Of Soil layers Is Required Step29: 8.4. Prognostic Variables Is Required Step30: 9. Soil --&gt; Soil Map Key properties of the land surface soil map 9.1. Description Is Required Step31: 9.2. Structure Is Required Step32: 9.3. Texture Is Required Step33: 9.4. Organic Matter Is Required Step34: 9.5. Albedo Is Required Step35: 9.6. Water Table Is Required Step36: 9.7. Continuously Varying Soil Depth Is Required Step37: 9.8. Soil Depth Is Required Step38: 10. Soil --&gt; Snow Free Albedo TODO 10.1. Prognostic Is Required Step39: 10.2. Functions Is Required Step40: 10.3. Direct Diffuse Is Required Step41: 10.4. Number Of Wavelength Bands Is Required Step42: 11. 
Soil --&gt; Hydrology Key properties of the land surface soil hydrology 11.1. Description Is Required Step43: 11.2. Time Step Is Required Step44: 11.3. Tiling Is Required Step45: 11.4. Vertical Discretisation Is Required Step46: 11.5. Number Of Ground Water Layers Is Required Step47: 11.6. Lateral Connectivity Is Required Step48: 11.7. Method Is Required Step49: 12. Soil --&gt; Hydrology --&gt; Freezing TODO 12.1. Number Of Ground Ice Layers Is Required Step50: 12.2. Ice Storage Method Is Required Step51: 12.3. Permafrost Is Required Step52: 13. Soil --&gt; Hydrology --&gt; Drainage TODO 13.1. Description Is Required Step53: 13.2. Types Is Required Step54: 14. Soil --&gt; Heat Treatment TODO 14.1. Description Is Required Step55: 14.2. Time Step Is Required Step56: 14.3. Tiling Is Required Step57: 14.4. Vertical Discretisation Is Required Step58: 14.5. Heat Storage Is Required Step59: 14.6. Processes Is Required Step60: 15. Snow Land surface snow 15.1. Overview Is Required Step61: 15.2. Tiling Is Required Step62: 15.3. Number Of Snow Layers Is Required Step63: 15.4. Density Is Required Step64: 15.5. Water Equivalent Is Required Step65: 15.6. Heat Content Is Required Step66: 15.7. Temperature Is Required Step67: 15.8. Liquid Water Content Is Required Step68: 15.9. Snow Cover Fractions Is Required Step69: 15.10. Processes Is Required Step70: 15.11. Prognostic Variables Is Required Step71: 16. Snow --&gt; Snow Albedo TODO 16.1. Type Is Required Step72: 16.2. Functions Is Required Step73: 17. Vegetation Land surface vegetation 17.1. Overview Is Required Step74: 17.2. Time Step Is Required Step75: 17.3. Dynamic Vegetation Is Required Step76: 17.4. Tiling Is Required Step77: 17.5. Vegetation Representation Is Required Step78: 17.6. Vegetation Types Is Required Step79: 17.7. Biome Types Is Required Step80: 17.8. Vegetation Time Variation Is Required Step81: 17.9. Vegetation Map Is Required Step82: 17.10. Interception Is Required Step83: 17.11. Phenology Is Required Step84: 17.12. Phenology Description Is Required Step85: 17.13. Leaf Area Index Is Required Step86: 17.14. Leaf Area Index Description Is Required Step87: 17.15. Biomass Is Required Step88: 17.16. Biomass Description Is Required Step89: 17.17. Biogeography Is Required Step90: 17.18. Biogeography Description Is Required Step91: 17.19. Stomatal Resistance Is Required Step92: 17.20. Stomatal Resistance Description Is Required Step93: 17.21. Prognostic Variables Is Required Step94: 18. Energy Balance Land surface energy balance 18.1. Overview Is Required Step95: 18.2. Tiling Is Required Step96: 18.3. Number Of Surface Temperatures Is Required Step97: 18.4. Evaporation Is Required Step98: 18.5. Processes Is Required Step99: 19. Carbon Cycle Land surface carbon cycle 19.1. Overview Is Required Step100: 19.2. Tiling Is Required Step101: 19.3. Time Step Is Required Step102: 19.4. Anthropogenic Carbon Is Required Step103: 19.5. Prognostic Variables Is Required Step104: 20. Carbon Cycle --&gt; Vegetation TODO 20.1. Number Of Carbon Pools Is Required Step105: 20.2. Carbon Pools Is Required Step106: 20.3. Forest Stand Dynamics Is Required Step107: 21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis TODO 21.1. Method Is Required Step108: 22. Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration TODO 22.1. Maintainance Respiration Is Required Step109: 22.2. Growth Respiration Is Required Step110: 23. Carbon Cycle --&gt; Vegetation --&gt; Allocation TODO 23.1. Method Is Required Step111: 23.2. Allocation Bins Is Required Step112: 23.3. 
Allocation Fractions Is Required Step113: 24. Carbon Cycle --&gt; Vegetation --&gt; Phenology TODO 24.1. Method Is Required Step114: 25. Carbon Cycle --&gt; Vegetation --&gt; Mortality TODO 25.1. Method Is Required Step115: 26. Carbon Cycle --&gt; Litter TODO 26.1. Number Of Carbon Pools Is Required Step116: 26.2. Carbon Pools Is Required Step117: 26.3. Decomposition Is Required Step118: 26.4. Method Is Required Step119: 27. Carbon Cycle --&gt; Soil TODO 27.1. Number Of Carbon Pools Is Required Step120: 27.2. Carbon Pools Is Required Step121: 27.3. Decomposition Is Required Step122: 27.4. Method Is Required Step123: 28. Carbon Cycle --&gt; Permafrost Carbon TODO 28.1. Is Permafrost Included Is Required Step124: 28.2. Emitted Greenhouse Gases Is Required Step125: 28.3. Decomposition Is Required Step126: 28.4. Impact On Soil Properties Is Required Step127: 29. Nitrogen Cycle Land surface nitrogen cycle 29.1. Overview Is Required Step128: 29.2. Tiling Is Required Step129: 29.3. Time Step Is Required Step130: 29.4. Prognostic Variables Is Required Step131: 30. River Routing Land surface river routing 30.1. Overview Is Required Step132: 30.2. Tiling Is Required Step133: 30.3. Time Step Is Required Step134: 30.4. Grid Inherited From Land Surface Is Required Step135: 30.5. Grid Description Is Required Step136: 30.6. Number Of Reservoirs Is Required Step137: 30.7. Water Re Evaporation Is Required Step138: 30.8. Coupled To Atmosphere Is Required Step139: 30.9. Coupled To Land Is Required Step140: 30.10. Quantities Exchanged With Atmosphere Is Required Step141: 30.11. Basin Flow Direction Map Is Required Step142: 30.12. Flooding Is Required Step143: 30.13. Prognostic Variables Is Required Step144: 31. River Routing --&gt; Oceanic Discharge TODO 31.1. Discharge Type Is Required Step145: 31.2. Quantities Transported Is Required Step146: 32. Lakes Land surface lakes 32.1. Overview Is Required Step147: 32.2. Coupling With Rivers Is Required Step148: 32.3. Time Step Is Required Step149: 32.4. Quantities Exchanged With Rivers Is Required Step150: 32.5. Vertical Grid Is Required Step151: 32.6. Prognostic Variables Is Required Step152: 33. Lakes --&gt; Method TODO 33.1. Ice Treatment Is Required Step153: 33.2. Albedo Is Required Step154: 33.3. Dynamics Is Required Step155: 33.4. Dynamic Lake Extent Is Required Step156: 33.5. Endorheic Basins Is Required Step157: 34. Lakes --&gt; Wetlands TODO 34.1. Description Is Required
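The code that follows repeats the same two calls for every property: DOC.set_id with the property identifier, then DOC.set_value with the answer. The same pattern could in principle be driven from a dictionary; this is a sketch only, assuming the DOC object from the setup cell and that repeated set_id/set_value calls behave exactly as in the individual cells (untested), with placeholder values that are not real model answers.

```python
# Hypothetical helper: fill several documented properties in one loop.
# DOC comes from the setup cell (NotebookOutput(...)); the values below
# are placeholders, not answers for any real land-surface model.
answers = {
    'cmip6.land.key_properties.model_name': 'MY-LAND-MODEL',
    'cmip6.land.key_properties.timestepping_framework.time_step': 1800,
}

for prop_id, value in answers.items():
    DOC.set_id(prop_id)
    DOC.set_value(value)
```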
Python Code: # DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'test-institute-2', 'sandbox-1', 'land') Explanation: ES-DOC CMIP6 Model Properties - Land MIP Era: CMIP6 Institute: TEST-INSTITUTE-2 Source ID: SANDBOX-1 Topic: Land Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes. Properties: 154 (96 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:54:44 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) Explanation: Document Authors Set document authors End of explanation # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) Explanation: Document Contributors Specify document contributors End of explanation # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) Explanation: Document Publication Specify document publication status End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Conservation Properties 3. Key Properties --&gt; Timestepping Framework 4. Key Properties --&gt; Software Properties 5. Grid 6. Grid --&gt; Horizontal 7. Grid --&gt; Vertical 8. Soil 9. Soil --&gt; Soil Map 10. Soil --&gt; Snow Free Albedo 11. Soil --&gt; Hydrology 12. Soil --&gt; Hydrology --&gt; Freezing 13. Soil --&gt; Hydrology --&gt; Drainage 14. Soil --&gt; Heat Treatment 15. Snow 16. Snow --&gt; Snow Albedo 17. Vegetation 18. Energy Balance 19. Carbon Cycle 20. Carbon Cycle --&gt; Vegetation 21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis 22. Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration 23. Carbon Cycle --&gt; Vegetation --&gt; Allocation 24. Carbon Cycle --&gt; Vegetation --&gt; Phenology 25. Carbon Cycle --&gt; Vegetation --&gt; Mortality 26. Carbon Cycle --&gt; Litter 27. Carbon Cycle --&gt; Soil 28. Carbon Cycle --&gt; Permafrost Carbon 29. Nitrogen Cycle 30. River Routing 31. River Routing --&gt; Oceanic Discharge 32. Lakes 33. Lakes --&gt; Method 34. Lakes --&gt; Wetlands 1. Key Properties Land surface key properties 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of land surface model. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of land surface model code (e.g. MOSES2.2) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.3. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of the processes modelled (e.g. dymanic vegation, prognostic albedo, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "water" # "energy" # "carbon" # "nitrogen" # "phospherous" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.4. Land Atmosphere Flux Exchanges Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Fluxes exchanged with the atmopshere. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.5. Atmospheric Coupling Treatment Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.land_cover') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "bare soil" # "urban" # "lake" # "land ice" # "lake ice" # "vegetated" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.6. Land Cover Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Types of land cover defined in the land surface model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.land_cover_change') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.7. Land Cover Change Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe how land cover change is managed (e.g. the use of net or gross transitions) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.8. Tiling Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.conservation_properties.energy') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2. Key Properties --&gt; Conservation Properties TODO 2.1. Energy Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.conservation_properties.water') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2.2. Water Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how water is conserved globally and to what level (e.g. within X [units]/year) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2.3. 
Carbon Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 3. Key Properties --&gt; Timestepping Framework TODO 3.1. Timestep Dependent On Atmosphere Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is a time step dependent on the frequency of atmosphere coupling? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 3.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overall timestep of land surface model (i.e. time between calls) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.3. Timestepping Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of time stepping method and associated time step(s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4. Key Properties --&gt; Software Properties Software properties of land surface code 4.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5. Grid Land surface grid 5.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of the grid in the land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.horizontal.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6. Grid --&gt; Horizontal The horizontal grid in the land surface 6.1. 
Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general structure of the horizontal grid (not including any tiling) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 6.2. Matches Atmosphere Grid Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does the horizontal grid match the atmosphere? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.vertical.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7. Grid --&gt; Vertical The vertical grid in the soil 7.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general structure of the vertical grid in the soil (not including any tiling) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.grid.vertical.total_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 7.2. Total Depth Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The total depth of the soil (in metres) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8. Soil Land surface soil 8.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of soil in the land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_water_coupling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.2. Heat Water Coupling Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the coupling between heat and water in the soil End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.number_of_soil layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 8.3. Number Of Soil layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of soil layers End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.4. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the soil scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9. Soil --&gt; Soil Map Key properties of the land surface soil map 9.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of soil map End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.structure') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.2. 
Structure Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil structure map End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.texture') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.3. Texture Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil texture map End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.organic_matter') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.4. Organic Matter Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil organic matter map End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.albedo') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.5. Albedo Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil albedo map End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.water_table') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.6. Water Table Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil water table map, if any End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 9.7. Continuously Varying Soil Depth Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does the soil properties vary continuously with depth? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.soil_map.soil_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.8. Soil Depth Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil depth map End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 10. Soil --&gt; Snow Free Albedo TODO 10.1. Prognostic Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is snow free albedo prognostic? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.snow_free_albedo.functions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "vegetation type" # "soil humidity" # "vegetation state" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 10.2. Functions Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If prognostic, describe the dependancies on snow free albedo calculations End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "distinction between direct and diffuse albedo" # "no distinction between direct and diffuse albedo" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 10.3. Direct Diffuse Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If prognostic, describe the distinction between direct and diffuse albedo End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 10.4. Number Of Wavelength Bands Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If prognostic, enter the number of wavelength bands used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11. Soil --&gt; Hydrology Key properties of the land surface soil hydrology 11.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of the soil hydrological model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 11.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of river soil hydrology in seconds End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11.3. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil hydrology tiling, if any. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11.4. Vertical Discretisation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the typical vertical discretisation End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 11.5. Number Of Ground Water Layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of soil layers that may contain water End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "perfect connectivity" # "Darcian flow" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11.6. Lateral Connectivity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Describe the lateral connectivity between tiles End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.soil.hydrology.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Bucket" # "Force-restore" # "Choisnel" # "Explicit diffusion" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11.7. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The hydrological dynamics scheme in the land surface model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 12. Soil --&gt; Hydrology --&gt; Freezing TODO 12.1. Number Of Ground Ice Layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How many soil layers may contain ground ice End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 12.2. Ice Storage Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method of ice storage End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 12.3. Permafrost Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the treatment of permafrost, if any, within the land surface scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.drainage.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 13. Soil --&gt; Hydrology --&gt; Drainage TODO 13.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General describe how drainage is included in the land surface scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.hydrology.drainage.types') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Gravity drainage" # "Horton mechanism" # "topmodel-based" # "Dunne mechanism" # "Lateral subsurface flow" # "Baseflow from groundwater" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.2. Types Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Different types of runoff represented by the land surface model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 14. Soil --&gt; Heat Treatment TODO 14.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General description of how heat treatment properties are defined End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 14.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of soil heat scheme in seconds End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.soil.heat_treatment.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 14.3. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the soil heat treatment tiling, if any. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 14.4. Vertical Discretisation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the typical vertical discretisation End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Force-restore" # "Explicit diffusion" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14.5. Heat Storage Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specify the method of heat storage End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.soil.heat_treatment.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "soil moisture freeze-thaw" # "coupling with snow temperature" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14.6. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Describe processes included in the treatment of soil heat End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15. Snow Land surface snow 15.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of snow in the land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15.2. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the snow tiling, if any. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.number_of_snow_layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 15.3. Number Of Snow Layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of snow levels used in the land surface scheme/model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.density') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "constant" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.4. Density Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of snow density End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.water_equivalent') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.5. 
Water Equivalent Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of the snow water equivalent End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.heat_content') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.6. Heat Content Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of the heat content of snow End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.temperature') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.7. Temperature Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of snow temperature End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.liquid_water_content') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.8. Liquid Water Content Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Description of the treatment of snow liquid water End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.snow_cover_fractions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "ground snow fraction" # "vegetation snow fraction" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.9. Snow Cover Fractions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Specify cover fractions used in the surface snow scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "snow interception" # "snow melting" # "snow freezing" # "blowing snow" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.10. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Snow related processes in the land surface scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15.11. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the snow scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.snow.snow_albedo.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "prescribed" # "constant" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 16. Snow --&gt; Snow Albedo TODO 16.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the treatment of snow-covered land albedo End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.snow.snow_albedo.functions') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "vegetation type" # "snow age" # "snow density" # "snow grain type" # "aerosol deposition" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 16.2. Functions Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N *If prognostic, * End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17. Vegetation Land surface vegetation 17.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of vegetation in the land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 17.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of vegetation scheme in seconds End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.dynamic_vegetation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 17.3. Dynamic Vegetation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there dynamic evolution of vegetation? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.4. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the vegetation tiling, if any. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.vegetation_representation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "vegetation types" # "biome types" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.5. Vegetation Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Vegetation classification used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.vegetation_types') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "broadleaf tree" # "needleleaf tree" # "C3 grass" # "C4 grass" # "vegetated" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.6. Vegetation Types Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of vegetation types in the classification, if any End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.biome_types') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "evergreen needleleaf forest" # "evergreen broadleaf forest" # "deciduous needleleaf forest" # "deciduous broadleaf forest" # "mixed forest" # "woodland" # "wooded grassland" # "closed shrubland" # "opne shrubland" # "grassland" # "cropland" # "wetlands" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.7. 
Biome Types Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of biome types in the classification, if any End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.vegetation_time_variation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "fixed (not varying)" # "prescribed (varying from files)" # "dynamical (varying from simulation)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.8. Vegetation Time Variation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How the vegetation fractions in each tile are varying with time End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.vegetation_map') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.9. Vegetation Map Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If vegetation fractions are not dynamically updated , describe the vegetation map used (common name and reference, if possible) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.interception') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 17.10. Interception Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is vegetation interception of rainwater represented? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.phenology') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic (vegetation map)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.11. Phenology Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Treatment of vegetation phenology End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.phenology_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.12. Phenology Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of vegetation phenology End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.leaf_area_index') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prescribed" # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.13. Leaf Area Index Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Treatment of vegetation leaf area index End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.leaf_area_index_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.14. Leaf Area Index Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of leaf area index End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.biomass') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.15. 
Biomass Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 *Treatment of vegetation biomass * End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.biomass_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.16. Biomass Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of vegetation biomass End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.biogeography') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.17. Biogeography Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Treatment of vegetation biogeography End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.biogeography_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.18. Biogeography Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of vegetation biogeography End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.stomatal_resistance') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "light" # "temperature" # "water availability" # "CO2" # "O3" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17.19. Stomatal Resistance Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Specify what the vegetation stomatal resistance depends on End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.20. Stomatal Resistance Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of the treatment of vegetation stomatal resistance End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.vegetation.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.21. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the vegetation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.energy_balance.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 18. Energy Balance Land surface energy balance 18.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of energy balance in land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.energy_balance.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 18.2. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the energy balance tiling, if any. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 18.3. Number Of Surface Temperatures Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.energy_balance.evaporation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "alpha" # "beta" # "combined" # "Monteith potential evaporation" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 18.4. Evaporation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Specify the formulation method for land surface evaporation, from soil and vegetation End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.energy_balance.processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "transpiration" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 18.5. Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Describe which processes are included in the energy balance scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 19. Carbon Cycle Land surface carbon cycle 19.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of carbon cycle in land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 19.2. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the carbon cycle tiling, if any. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 19.3. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of carbon cycle in seconds End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "grand slam protocol" # "residence time" # "decay time" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 19.4. Anthropogenic Carbon Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Describe the treament of the anthropogenic carbon pool End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 19.5. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the carbon scheme End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 20. Carbon Cycle --&gt; Vegetation TODO 20.1. Number Of Carbon Pools Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Enter the number of carbon pools used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 20.2. Carbon Pools Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the carbon pools used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 20.3. Forest Stand Dynamics Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the treatment of forest stand dyanmics End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis TODO 21.1. Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen depencence, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 22. Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration TODO 22.1. Maintainance Respiration Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the general method used for maintainence respiration End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 22.2. Growth Respiration Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the general method used for growth respiration End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 23. Carbon Cycle --&gt; Vegetation --&gt; Allocation TODO 23.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general principle behind the allocation scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "leaves + stems + roots" # "leaves + stems + roots (leafy + woody)" # "leaves + fine roots + coarse roots + stems" # "whole plant (no distinction)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 23.2. 
Allocation Bins Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specify distinct carbon bins used in allocation End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "fixed" # "function of vegetation type" # "function of plant allometry" # "explicitly calculated" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 23.3. Allocation Fractions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how the fractions of allocation are calculated End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 24. Carbon Cycle --&gt; Vegetation --&gt; Phenology TODO 24.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general principle behind the phenology scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 25. Carbon Cycle --&gt; Vegetation --&gt; Mortality TODO 25.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general principle behind the mortality scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 26. Carbon Cycle --&gt; Litter TODO 26.1. Number Of Carbon Pools Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Enter the number of carbon pools used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 26.2. Carbon Pools Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the carbon pools used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 26.3. Decomposition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the decomposition methods used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.litter.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 26.4. Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the general method used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 27. Carbon Cycle --&gt; Soil TODO 27.1. Number Of Carbon Pools Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Enter the number of carbon pools used End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 27.2. Carbon Pools Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the carbon pools used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 27.3. Decomposition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the decomposition methods used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.soil.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 27.4. Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the general method used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 28. Carbon Cycle --&gt; Permafrost Carbon TODO 28.1. Is Permafrost Included Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is permafrost included? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 28.2. Emitted Greenhouse Gases Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the GHGs emitted End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 28.3. Decomposition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List the decomposition methods used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 28.4. Impact On Soil Properties Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the impact of permafrost on soil properties End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.nitrogen_cycle.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 29. Nitrogen Cycle Land surface nitrogen cycle 29.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of the nitrogen cycle in the land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.nitrogen_cycle.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 29.2. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the notrogen cycle tiling, if any. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.nitrogen_cycle.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 29.3. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of nitrogen cycle in seconds End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 29.4. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the nitrogen scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 30. River Routing Land surface river routing 30.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of river routing in the land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.tiling') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 30.2. Tiling Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the river routing, if any. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 30.3. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of river routing scheme in seconds End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 30.4. Grid Inherited From Land Surface Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the grid inherited from land surface? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.grid_description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 30.5. Grid Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 General description of grid, if not inherited from land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.number_of_reservoirs') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 30.6. Number Of Reservoirs Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Enter the number of reservoirs End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.water_re_evaporation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "flood plains" # "irrigation" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 30.7. Water Re Evaporation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N TODO End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 30.8. Coupled To Atmosphere Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Is river routing coupled to the atmosphere model component? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.coupled_to_land') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 30.9. Coupled To Land Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the coupling between land and rivers End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "heat" # "water" # "tracers" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 30.10. Quantities Exchanged With Atmosphere Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If couple to atmosphere, which quantities are exchanged between river routing and the atmosphere model components? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "present day" # "adapted for other periods" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 30.11. Basin Flow Direction Map Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What type of basin flow direction map is being used? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.flooding') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 30.12. Flooding Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the representation of flooding, if any End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 30.13. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the river routing End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "direct (large rivers)" # "diffuse" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 31. River Routing --&gt; Oceanic Discharge TODO 31.1. Discharge Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specify how rivers are discharged to the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "heat" # "water" # "tracers" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 31.2. 
Quantities Transported Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Quantities that are exchanged from river-routing to the ocean model component End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 32. Lakes Land surface lakes 32.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of lakes in the land surface End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.coupling_with_rivers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 32.2. Coupling With Rivers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Are lakes coupled to the river routing model component? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 32.3. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time step of lake scheme in seconds End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "heat" # "water" # "tracers" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 32.4. Quantities Exchanged With Rivers Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If coupling with rivers, which quantities are exchanged between the lakes and rivers End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.vertical_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 32.5. Vertical Grid Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the vertical grid of lakes End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 32.6. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List the prognostic variables of the lake scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.method.ice_treatment') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 33. Lakes --&gt; Method TODO 33.1. Ice Treatment Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is lake ice included? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.method.albedo') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "prognostic" # "diagnostic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 33.2. Albedo Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the treatment of lake albedo End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.land.lakes.method.dynamics') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "No lake dynamics" # "vertical" # "horizontal" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 33.3. Dynamics Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Which dynamics of lakes are treated? horizontal, vertical, etc. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 33.4. Dynamic Lake Extent Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is a dynamic lake extent scheme included? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.method.endorheic_basins') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 33.5. Endorheic Basins Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Basins not flowing to ocean included? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.land.lakes.wetlands.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 34. Lakes --&gt; Wetlands TODO 34.1. Description Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the treatment of wetlands, if any End of explanation
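The long run of property cells above all repeats the same two-call pattern, so a compact sketch may be easier to work from when filling several properties at once. The snippet below is a minimal illustration and not part of the generated notebook: it assumes the DOC object initialised at the top of this notebook is still in scope, reuses only the set_id / set_value calls shown above, and the example values for the lake properties are placeholders rather than real model settings. For list-valued (cardinality 1.N) properties it assumes the one-call-per-value convention suggested by the "PROPERTY VALUE(S)" comments.

# Illustrative sketch only: property IDs are copied from the cells above,
# values are placeholders, and DOC is the NotebookOutput object created earlier.
lake_properties = {
    'cmip6.land.lakes.method.ice_treatment': True,
    'cmip6.land.lakes.method.albedo': "prognostic",
    'cmip6.land.lakes.method.dynamics': ["vertical"],          # 1.N property
    'cmip6.land.lakes.method.dynamic_lake_extent': False,
    'cmip6.land.lakes.method.endorheic_basins': False,
    'cmip6.land.lakes.wetlands.description': "Wetlands are not represented.",
}

for property_id, value in lake_properties.items():
    DOC.set_id(property_id)
    if isinstance(value, list):
        # Assumed convention for 1.N properties: one set_value call per entry.
        for item in value:
            DOC.set_value(item)
    else:
        DOC.set_value(value)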
5,240
Given the following text description, write Python code to implement the functionality described below step by step Description: Multisectoral energy system with oemof General description Step1: Import input data Step2: Add entities to energy system Step3: Optimize energy system and plot results Step4: Adding the gas sector In order to add a gas power plant, a gas ressource bus is needed. The gas power plant connects the gas and electricity busses and thereby couples the gas and electricity sector. Step5: Adding the heat sector The heat sector is added and coupled to the electricity sector similarly to the gas sector. The same component, the LinearTransformer, is used to couple the two sectors. Only through its parametrisation it becomes a heater rod or a heat pump. Step6: Adding a heat pump There are different ways to model a heat pump. Here the approach of precalculating a COP and using this as a conversion factor for the LinearTransformer is used. Another approach is to use the LinearN1Transformer that has two inputs - electricity and heat from a heat source. See the solph example "simple_dispatch". Step7: Adding a combined heat and power plant The combined heat and power plant couples the gas, electricity and heat sector. Step8: Adding the mobility sector
Python Code: from oemof.solph import EnergySystem import pandas as pd # initialize energy system energysystem = EnergySystem(timeindex=pd.date_range('1/1/2016', periods=168, freq='H')) Explanation: Multisectoral energy system with oemof General description: The jupyter notebook gives a simple example of how to couple the sectors power, heat and mobility. Installation requirements: This example requires oemof 0.1.4 and jupyter. Install by: pip install oemof==0.1.4 jupyter Create a simple energy system Initialize energy system End of explanation # import example data with scaled demands and feedin timeseries of renewables # as dataframe data = pd.read_csv("data/example_data.csv", sep=",") #print(data.demand_el[0:10]) #print(data.keys()) Explanation: Import input data End of explanation from oemof.solph import Bus, Flow, Sink, Source, LinearTransformer ### BUS # create electricity bus b_el = Bus(label="b_el") # add excess sink to help avoid infeasible problems Sink(label="excess_el", inputs={b_el: Flow()}) Source(label="shortage_el", outputs={b_el: Flow(variable_costs=1000)}) ### DEMAND # add electricity demand Sink(label="demand_el", inputs={b_el: Flow(nominal_value=85, actual_value=data['demand_el'], fixed=True)}) ### SUPPLY # add wind and pv feedin Source(label="wind", outputs={b_el: Flow(actual_value=data['wind'], nominal_value=60, fixed=True)}); Source(label="pv", outputs={b_el: Flow(actual_value=data['pv'], nominal_value=200, fixed=True)}); Explanation: Add entities to energy system End of explanation from oemof.solph import OperationalModel import oemof.outputlib import matplotlib.pyplot as plt def optimize(energysystem): ### optimize # create operational model om = OperationalModel(es=energysystem) # solve using the cbc solver om.solve(solver='cbc', solve_kwargs={'tee': False}) # save LP-file om.write('sector_coupling.lp', io_options={'symbolic_solver_labels': True}) # write back results from optimization object to energysystem om.results(); def plot(energysystem, bus_label, bus_type): # define colors cdict = {'wind': '#00bfff', 'pv': '#ffd700', 'pp_gas': '#8b1a1a', 'pp_chp_extraction': '#838b8b', 'excess_el': '#8b7355', 'shortage_el': '#000000', 'heater_rod': 'darkblue', 'pp_chp': 'green', 'demand_el': 'lightgreen', 'demand_th': '#ce4aff', 'heat_pump': 'red', 'leaving_bev': 'darkred', 'bev_storage': 'orange'} # create multiindex dataframe with result values esplot = oemof.outputlib.DataFramePlot(energy_system=energysystem) # select input results of electrical bus (i.e. 
power delivered by plants) esplot.slice_unstacked(bus_label=bus_label, type=bus_type, date_from='2016-01-03 00:00:00', date_to='2016-01-06 00:00:00') # set colorlist for esplot colorlist = esplot.color_from_dict(cdict) # set plot attributes esplot.plot(color=colorlist, title="January 2016", stacked=True, width=1, kind='bar') esplot.ax.set_ylabel('Power') esplot.ax.set_xlabel('Date') esplot.set_datetime_ticks(tick_distance=24, date_format='%d-%m') esplot.outside_legend(reverse=True) plt.show() optimize(energysystem) plot(energysystem, "b_el", "to_bus") Explanation: Optimize energy system and plot results End of explanation # add gas bus b_gas = Bus(label="b_gas", balanced=False) # add gas power plant LinearTransformer(label="pp_gas", inputs={b_gas: Flow(summed_max_flow=200)}, outputs={b_el: Flow(nominal_value=40, variable_costs=40)}, conversion_factors={b_el: 0.50}); optimize(energysystem) plot(energysystem, "b_el", "to_bus") Explanation: Adding the gas sector In order to add a gas power plant, a gas ressource bus is needed. The gas power plant connects the gas and electricity busses and thereby couples the gas and electricity sector. End of explanation # add heat bus b_heat = Bus(label="b_heat", balanced=True) # add heat demand Sink(label="demand_th", inputs={b_heat: Flow(nominal_value=60, actual_value=data['demand_th'], fixed=True)}) # add heater rod LinearTransformer(label="heater_rod", inputs={b_el: Flow()}, outputs={b_heat: Flow(variable_costs=10)}, conversion_factors={b_heat: 0.98}); optimize(energysystem) plot(energysystem, "b_heat", "to_bus") Explanation: Adding the heat sector The heat sector is added and coupled to the electricity sector similarly to the gas sector. The same component, the LinearTransformer, is used to couple the two sectors. Only through its parametrisation it becomes a heater rod or a heat pump. End of explanation # COP can be calculated beforehand, assuming the heat reservoir temperature # is infinite random timeseries for COP import numpy as np COP = np.random.uniform(low=3.0, high=5.0, size=(168,)) # add heater rod #LinearTransformer(label="heater_rod", # inputs={b_el: Flow()}, # outputs={b_heat: Flow(variable_costs=10)}, # conversion_factors={b_heat: 0.98}); # add heat pump LinearTransformer(label="heat_pump", inputs={b_el: Flow()}, outputs={b_heat: Flow(nominal_value=20, variable_costs=10)}, conversion_factors={b_heat: COP}); optimize(energysystem) plot(energysystem, "b_heat", "to_bus") Explanation: Adding a heat pump There are different ways to model a heat pump. Here the approach of precalculating a COP and using this as a conversion factor for the LinearTransformer is used. Another approach is to use the LinearN1Transformer that has two inputs - electricity and heat from a heat source. See the solph example "simple_dispatch". 
End of explanation # add CHP with fixed ratio of heat and power (back-pressure turbine) LinearTransformer(label='pp_chp', inputs={b_gas: Flow()}, outputs={b_el: Flow(nominal_value=30, variable_costs=42), b_heat: Flow(nominal_value=40)}, conversion_factors={b_el: 0.3, b_heat: 0.4}); from oemof.solph import VariableFractionTransformer # add CHP with variable ratio of heat and power (extraction turbine) VariableFractionTransformer(label='pp_chp_extraction', inputs={b_gas: Flow()}, outputs={b_el: Flow(nominal_value=30, variable_costs=42), b_heat: Flow(nominal_value=40)}, conversion_factors={b_el: 0.3, b_heat: 0.4}, conversion_factor_single_flow={b_el: 0.5}); optimize(energysystem) plot(energysystem, "b_el", "to_bus") Explanation: Adding a combined heat and power plant The combined heat and power plant couples the gas, electricity and heat sector. End of explanation from oemof.solph import Storage charging_power = 20 bev_battery_cap = 50 # add mobility bus b_bev = Bus(label="b_bev", balanced=True) # add transformer to transport electricity from grid to mobility sector LinearTransformer(label="transport_el_bev", inputs={b_el: Flow()}, outputs={b_bev: Flow(variable_costs=10, nominal_value=charging_power, max=data['bev_charging_power'])}, conversion_factors={b_bev: 1.0}) # add BEV storage Storage(label='bev_storage', inputs={b_bev: Flow()}, outputs={b_bev: Flow()}, nominal_capacity=bev_battery_cap, capacity_min=data['bev_cap_min'], capacity_max=data['bev_cap_max'], capacity_loss=0.00, initial_capacity=None, inflow_conversion_factor=1.0, outflow_conversion_factor=1.0, nominal_input_capacity_ratio=1.0, nominal_output_capacity_ratio=1.0, fixed_costs=35) # add sink for leaving vehicles Sink(label="leaving_bev", inputs={b_bev: Flow(nominal_value=bev_battery_cap, actual_value=data['bev_sink'], fixed=True)}) # add source for returning vehicles Source(label="returning_bev", outputs={b_bev: Flow(nominal_value=bev_battery_cap, actual_value=data['bev_source'], fixed=True)}); optimize(energysystem) plot(energysystem, "b_bev", "from_bus") plot(energysystem, "b_el", "to_bus") plot(energysystem, "b_el", "from_bus") Explanation: Adding the mobility sector End of explanation
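The heat-pump step above mentions precalculating a COP and passing it to the LinearTransformer as a conversion factor, but stands in a random time series for it. The sketch below shows one way such a series could actually be precalculated from a heat-source temperature profile using a Carnot-based estimate; the sinusoidal source temperature, the 40 degC sink temperature and the 0.4 quality grade are illustrative assumptions, not values from the example data, and the commented-out LinearTransformer call only indicates how the series would slot into the model built above in place of the random COP.

import numpy as np

# Illustrative heat-source (e.g. ambient air) temperature in degC for the 168 h
# period; in a real study this would come from weather data.
t_source = 5.0 + 5.0 * np.sin(np.linspace(0, 2 * np.pi * 7, 168))
t_sink = 40.0        # assumed flow temperature of the heating system in degC
quality_grade = 0.4  # assumed fraction of the ideal (Carnot) COP that is reached

# Carnot-based COP estimate, evaluated with the sink temperature in Kelvin.
cop_precalc = quality_grade * (t_sink + 273.15) / (t_sink - t_source)

# This array could replace the random series above, e.g.:
# LinearTransformer(label="heat_pump_carnot",
#                   inputs={b_el: Flow()},
#                   outputs={b_heat: Flow(nominal_value=20, variable_costs=10)},
#                   conversion_factors={b_heat: cop_precalc});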
5,241
Given the following text description, write Python code to implement the functionality described below step by step Description: Copyright 2020 The TensorFlow Authors. Step1: Post-training integer quantization with int16 activations <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https Step2: Check that the 16x8 quantization mode is available Step3: Train and export the model Step4: For the example, you trained the model for just a single epoch, so it only trains to ~96% accuracy. Convert to a TensorFlow Lite model Using the Python TFLiteConverter, you can now convert the trained model into a TensorFlow Lite model. Now, convert the model using TFliteConverter into default float32 format Step5: Write it out to a .tflite file Step6: To instead quantize the model to 16x8 quantization mode, first set the optimizations flag to use default optimizations. Then specify that 16x8 quantization mode is the required supported operation in the target specification Step7: As in the case of int8 post-training quantization, it is possible to produce a fully integer quantized model by setting converter options inference_input(output)_type to tf.int16. Set the calibration data Step8: Finally, convert the model as usual. Note, by default the converted model will still use float input and outputs for invocation convenience. Step9: Note how the resulting file is approximately 1/3 the size. Step10: Run the TensorFlow Lite models Run the TensorFlow Lite model using the Python TensorFlow Lite Interpreter. Load the model into the interpreters Step11: Test the models on one image Step12: Evaluate the models Step13: Repeat the evaluation on the 16x8 quantized model
Python Code: #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. Explanation: Copyright 2020 The TensorFlow Authors. End of explanation import logging logging.getLogger("tensorflow").setLevel(logging.DEBUG) import tensorflow as tf from tensorflow import keras import numpy as np import pathlib Explanation: Post-training integer quantization with int16 activations <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/lite/performance/post_training_integer_quant_16x8"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/performance/post_training_integer_quant_16x8.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/performance/post_training_integer_quant_16x8.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/tensorflow/tensorflow/lite/g3doc/performance/post_training_integer_quant_16x8.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> Overview TensorFlow Lite now supports converting activations to 16-bit integer values and weights to 8-bit integer values during model conversion from TensorFlow to TensorFlow Lite's flat buffer format. We refer to this mode as the "16x8 quantization mode". This mode can improve accuracy of the quantized model significantly, when activations are sensitive to the quantization, while still achieving almost 3-4x reduction in model size. Moreover, this fully quantized model can be consumed by integer-only hardware accelerators. Some examples of models that benefit from this mode of the post-training quantization include: * super-resolution, * audio signal processing such as noise cancelling and beamforming, * image de-noising, * HDR reconstruction from a single image In this tutorial, you train an MNIST model from scratch, check its accuracy in TensorFlow, and then convert the model into a Tensorflow Lite flatbuffer using this mode. At the end you check the accuracy of the converted model and compare it to the original float32 model. Note that this example demonstrates the usage of this mode and doesn't show benefits over other available quantization techniques in TensorFlow Lite. Build an MNIST model Setup End of explanation tf.lite.OpsSet.EXPERIMENTAL_TFLITE_BUILTINS_ACTIVATIONS_INT16_WEIGHTS_INT8 Explanation: Check that the 16x8 quantization mode is available End of explanation # Load MNIST dataset mnist = keras.datasets.mnist (train_images, train_labels), (test_images, test_labels) = mnist.load_data() # Normalize the input image so that each pixel value is between 0 to 1. 
train_images = train_images / 255.0 test_images = test_images / 255.0 # Define the model architecture model = keras.Sequential([ keras.layers.InputLayer(input_shape=(28, 28)), keras.layers.Reshape(target_shape=(28, 28, 1)), keras.layers.Conv2D(filters=12, kernel_size=(3, 3), activation=tf.nn.relu), keras.layers.MaxPooling2D(pool_size=(2, 2)), keras.layers.Flatten(), keras.layers.Dense(10) ]) # Train the digit classification model model.compile(optimizer='adam', loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy']) model.fit( train_images, train_labels, epochs=1, validation_data=(test_images, test_labels) ) Explanation: Train and export the model End of explanation converter = tf.lite.TFLiteConverter.from_keras_model(model) tflite_model = converter.convert() Explanation: For the example, you trained the model for just a single epoch, so it only trains to ~96% accuracy. Convert to a TensorFlow Lite model Using the Python TFLiteConverter, you can now convert the trained model into a TensorFlow Lite model. Now, convert the model using TFliteConverter into default float32 format: End of explanation tflite_models_dir = pathlib.Path("/tmp/mnist_tflite_models/") tflite_models_dir.mkdir(exist_ok=True, parents=True) tflite_model_file = tflite_models_dir/"mnist_model.tflite" tflite_model_file.write_bytes(tflite_model) Explanation: Write it out to a .tflite file: End of explanation converter.optimizations = [tf.lite.Optimize.DEFAULT] converter.target_spec.supported_ops = [tf.lite.OpsSet.EXPERIMENTAL_TFLITE_BUILTINS_ACTIVATIONS_INT16_WEIGHTS_INT8] Explanation: To instead quantize the model to 16x8 quantization mode, first set the optimizations flag to use default optimizations. Then specify that 16x8 quantization mode is the required supported operation in the target specification: End of explanation mnist_train, _ = tf.keras.datasets.mnist.load_data() images = tf.cast(mnist_train[0], tf.float32) / 255.0 mnist_ds = tf.data.Dataset.from_tensor_slices((images)).batch(1) def representative_data_gen(): for input_value in mnist_ds.take(100): # Model has only one input so each data point has one element. yield [input_value] converter.representative_dataset = representative_data_gen Explanation: As in the case of int8 post-training quantization, it is possible to produce a fully integer quantized model by setting converter options inference_input(output)_type to tf.int16. Set the calibration data: End of explanation tflite_16x8_model = converter.convert() tflite_model_16x8_file = tflite_models_dir/"mnist_model_quant_16x8.tflite" tflite_model_16x8_file.write_bytes(tflite_16x8_model) Explanation: Finally, convert the model as usual. Note, by default the converted model will still use float input and outputs for invocation convenience. End of explanation !ls -lh {tflite_models_dir} Explanation: Note how the resulting file is approximately 1/3 the size. End of explanation interpreter = tf.lite.Interpreter(model_path=str(tflite_model_file)) interpreter.allocate_tensors() interpreter_16x8 = tf.lite.Interpreter(model_path=str(tflite_model_16x8_file)) interpreter_16x8.allocate_tensors() Explanation: Run the TensorFlow Lite models Run the TensorFlow Lite model using the Python TensorFlow Lite Interpreter. 
Load the model into the interpreters End of explanation test_image = np.expand_dims(test_images[0], axis=0).astype(np.float32) input_index = interpreter.get_input_details()[0]["index"] output_index = interpreter.get_output_details()[0]["index"] interpreter.set_tensor(input_index, test_image) interpreter.invoke() predictions = interpreter.get_tensor(output_index) import matplotlib.pylab as plt plt.imshow(test_images[0]) template = "True:{true}, predicted:{predict}" _ = plt.title(template.format(true= str(test_labels[0]), predict=str(np.argmax(predictions[0])))) plt.grid(False) test_image = np.expand_dims(test_images[0], axis=0).astype(np.float32) input_index = interpreter_16x8.get_input_details()[0]["index"] output_index = interpreter_16x8.get_output_details()[0]["index"] interpreter_16x8.set_tensor(input_index, test_image) interpreter_16x8.invoke() predictions = interpreter_16x8.get_tensor(output_index) plt.imshow(test_images[0]) template = "True:{true}, predicted:{predict}" _ = plt.title(template.format(true= str(test_labels[0]), predict=str(np.argmax(predictions[0])))) plt.grid(False) Explanation: Test the models on one image End of explanation # A helper function to evaluate the TF Lite model using "test" dataset. def evaluate_model(interpreter): input_index = interpreter.get_input_details()[0]["index"] output_index = interpreter.get_output_details()[0]["index"] # Run predictions on every image in the "test" dataset. prediction_digits = [] for test_image in test_images: # Pre-processing: add batch dimension and convert to float32 to match with # the model's input data format. test_image = np.expand_dims(test_image, axis=0).astype(np.float32) interpreter.set_tensor(input_index, test_image) # Run inference. interpreter.invoke() # Post-processing: remove batch dimension and find the digit with highest # probability. output = interpreter.tensor(output_index) digit = np.argmax(output()[0]) prediction_digits.append(digit) # Compare prediction results with ground truth labels to calculate accuracy. accurate_count = 0 for index in range(len(prediction_digits)): if prediction_digits[index] == test_labels[index]: accurate_count += 1 accuracy = accurate_count * 1.0 / len(prediction_digits) return accuracy print(evaluate_model(interpreter)) Explanation: Evaluate the models End of explanation # NOTE: This quantization mode is an experimental post-training mode, # it does not have any optimized kernels implementations or # specialized machine learning hardware accelerators. Therefore, # it could be slower than the float interpreter. print(evaluate_model(interpreter_16x8)) Explanation: Repeat the evaluation on the 16x8 quantized model: End of explanation
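As a quick sanity check that the 16x8 conversion above really produced 8-bit weights and 16-bit activations, the tensor details of both interpreters can be inspected. This small addition is not part of the original tutorial: it reuses the interpreters and file paths created above and assumes get_tensor_details() reports numpy dtype classes, as in recent TensorFlow releases; biases typically land in a wider integer type, so a few int32/int64 tensors are expected alongside the int8 weights and int16 activations.

import collections
import numpy as np

def dtype_histogram(interp):
    # Count how many tensors of each dtype the interpreter holds.
    counts = collections.Counter()
    for detail in interp.get_tensor_details():
        counts[np.dtype(detail['dtype']).name] += 1
    return dict(counts)

print("float model tensor dtypes:", dtype_histogram(interpreter))
print("16x8 model tensor dtypes:", dtype_histogram(interpreter_16x8))

# The file sizes written earlier illustrate the roughly 3x size reduction.
print("float model size (bytes):", tflite_model_file.stat().st_size)
print("16x8 model size (bytes):", tflite_model_16x8_file.stat().st_size)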
5,242
Given the following text description, write Python code to implement the functionality described below step by step Description: ES-DOC CMIP6 Model Properties - Seaice MIP Era Step1: Document Authors Set document authors Step2: Document Contributors Specify document contributors Step3: Document Publication Specify document publication status Step4: Document Table of Contents 1. Key Properties --&gt; Model 2. Key Properties --&gt; Variables 3. Key Properties --&gt; Seawater Properties 4. Key Properties --&gt; Resolution 5. Key Properties --&gt; Tuning Applied 6. Key Properties --&gt; Key Parameter Values 7. Key Properties --&gt; Assumptions 8. Key Properties --&gt; Conservation 9. Grid --&gt; Discretisation --&gt; Horizontal 10. Grid --&gt; Discretisation --&gt; Vertical 11. Grid --&gt; Seaice Categories 12. Grid --&gt; Snow On Seaice 13. Dynamics 14. Thermodynamics --&gt; Energy 15. Thermodynamics --&gt; Mass 16. Thermodynamics --&gt; Salt 17. Thermodynamics --&gt; Salt --&gt; Mass Transport 18. Thermodynamics --&gt; Salt --&gt; Thermodynamics 19. Thermodynamics --&gt; Ice Thickness Distribution 20. Thermodynamics --&gt; Ice Floe Size Distribution 21. Thermodynamics --&gt; Melt Ponds 22. Thermodynamics --&gt; Snow Processes 23. Radiative Processes 1. Key Properties --&gt; Model Name of seaice model used. 1.1. Model Overview Is Required Step5: 1.2. Model Name Is Required Step6: 2. Key Properties --&gt; Variables List of prognostic variable in the sea ice model. 2.1. Prognostic Is Required Step7: 3. Key Properties --&gt; Seawater Properties Properties of seawater relevant to sea ice 3.1. Ocean Freezing Point Is Required Step8: 3.2. Ocean Freezing Point Value Is Required Step9: 4. Key Properties --&gt; Resolution Resolution of the sea ice grid 4.1. Name Is Required Step10: 4.2. Canonical Horizontal Resolution Is Required Step11: 4.3. Number Of Horizontal Gridpoints Is Required Step12: 5. Key Properties --&gt; Tuning Applied Tuning applied to sea ice model component 5.1. Description Is Required Step13: 5.2. Target Is Required Step14: 5.3. Simulations Is Required Step15: 5.4. Metrics Used Is Required Step16: 5.5. Variables Is Required Step17: 6. Key Properties --&gt; Key Parameter Values Values of key parameters 6.1. Typical Parameters Is Required Step18: 6.2. Additional Parameters Is Required Step19: 7. Key Properties --&gt; Assumptions Assumptions made in the sea ice model 7.1. Description Is Required Step20: 7.2. On Diagnostic Variables Is Required Step21: 7.3. Missing Processes Is Required Step22: 8. Key Properties --&gt; Conservation Conservation in the sea ice component 8.1. Description Is Required Step23: 8.2. Properties Is Required Step24: 8.3. Budget Is Required Step25: 8.4. Was Flux Correction Used Is Required Step26: 8.5. Corrected Conserved Prognostic Variables Is Required Step27: 9. Grid --&gt; Discretisation --&gt; Horizontal Sea ice discretisation in the horizontal 9.1. Grid Is Required Step28: 9.2. Grid Type Is Required Step29: 9.3. Scheme Is Required Step30: 9.4. Thermodynamics Time Step Is Required Step31: 9.5. Dynamics Time Step Is Required Step32: 9.6. Additional Details Is Required Step33: 10. Grid --&gt; Discretisation --&gt; Vertical Sea ice vertical properties 10.1. Layering Is Required Step34: 10.2. Number Of Layers Is Required Step35: 10.3. Additional Details Is Required Step36: 11. Grid --&gt; Seaice Categories What method is used to represent sea ice categories ? 11.1. Has Mulitple Categories Is Required Step37: 11.2. Number Of Categories Is Required Step38: 11.3. 
Category Limits Is Required Step39: 11.4. Ice Thickness Distribution Scheme Is Required Step40: 11.5. Other Is Required Step41: 12. Grid --&gt; Snow On Seaice Snow on sea ice details 12.1. Has Snow On Ice Is Required Step42: 12.2. Number Of Snow Levels Is Required Step43: 12.3. Snow Fraction Is Required Step44: 12.4. Additional Details Is Required Step45: 13. Dynamics Sea Ice Dynamics 13.1. Horizontal Transport Is Required Step46: 13.2. Transport In Thickness Space Is Required Step47: 13.3. Ice Strength Formulation Is Required Step48: 13.4. Redistribution Is Required Step49: 13.5. Rheology Is Required Step50: 14. Thermodynamics --&gt; Energy Processes related to energy in sea ice thermodynamics 14.1. Enthalpy Formulation Is Required Step51: 14.2. Thermal Conductivity Is Required Step52: 14.3. Heat Diffusion Is Required Step53: 14.4. Basal Heat Flux Is Required Step54: 14.5. Fixed Salinity Value Is Required Step55: 14.6. Heat Content Of Precipitation Is Required Step56: 14.7. Precipitation Effects On Salinity Is Required Step57: 15. Thermodynamics --&gt; Mass Processes related to mass in sea ice thermodynamics 15.1. New Ice Formation Is Required Step58: 15.2. Ice Vertical Growth And Melt Is Required Step59: 15.3. Ice Lateral Melting Is Required Step60: 15.4. Ice Surface Sublimation Is Required Step61: 15.5. Frazil Ice Is Required Step62: 16. Thermodynamics --&gt; Salt Processes related to salt in sea ice thermodynamics. 16.1. Has Multiple Sea Ice Salinities Is Required Step63: 16.2. Sea Ice Salinity Thermal Impacts Is Required Step64: 17. Thermodynamics --&gt; Salt --&gt; Mass Transport Mass transport of salt 17.1. Salinity Type Is Required Step65: 17.2. Constant Salinity Value Is Required Step66: 17.3. Additional Details Is Required Step67: 18. Thermodynamics --&gt; Salt --&gt; Thermodynamics Salt thermodynamics 18.1. Salinity Type Is Required Step68: 18.2. Constant Salinity Value Is Required Step69: 18.3. Additional Details Is Required Step70: 19. Thermodynamics --&gt; Ice Thickness Distribution Ice thickness distribution details. 19.1. Representation Is Required Step71: 20. Thermodynamics --&gt; Ice Floe Size Distribution Ice floe-size distribution details. 20.1. Representation Is Required Step72: 20.2. Additional Details Is Required Step73: 21. Thermodynamics --&gt; Melt Ponds Characteristics of melt ponds. 21.1. Are Included Is Required Step74: 21.2. Formulation Is Required Step75: 21.3. Impacts Is Required Step76: 22. Thermodynamics --&gt; Snow Processes Thermodynamic processes in snow on sea ice 22.1. Has Snow Aging Is Required Step77: 22.2. Snow Aging Scheme Is Required Step78: 22.3. Has Snow Ice Formation Is Required Step79: 22.4. Snow Ice Formation Scheme Is Required Step80: 22.5. Redistribution Is Required Step81: 22.6. Heat Diffusion Is Required Step82: 23. Radiative Processes Sea Ice Radiative Processes 23.1. Surface Albedo Is Required Step83: 23.2. Ice Radiation Transmission Is Required
Python Code: # DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'ec-earth-consortium', 'sandbox-1', 'seaice') Explanation: ES-DOC CMIP6 Model Properties - Seaice MIP Era: CMIP6 Institute: EC-EARTH-CONSORTIUM Source ID: SANDBOX-1 Topic: Seaice Sub-Topics: Dynamics, Thermodynamics, Radiative Processes. Properties: 80 (63 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:53:59 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) Explanation: Document Authors Set document authors End of explanation # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) Explanation: Document Contributors Specify document contributors End of explanation # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) Explanation: Document Publication Specify document publication status End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.model.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: Document Table of Contents 1. Key Properties --&gt; Model 2. Key Properties --&gt; Variables 3. Key Properties --&gt; Seawater Properties 4. Key Properties --&gt; Resolution 5. Key Properties --&gt; Tuning Applied 6. Key Properties --&gt; Key Parameter Values 7. Key Properties --&gt; Assumptions 8. Key Properties --&gt; Conservation 9. Grid --&gt; Discretisation --&gt; Horizontal 10. Grid --&gt; Discretisation --&gt; Vertical 11. Grid --&gt; Seaice Categories 12. Grid --&gt; Snow On Seaice 13. Dynamics 14. Thermodynamics --&gt; Energy 15. Thermodynamics --&gt; Mass 16. Thermodynamics --&gt; Salt 17. Thermodynamics --&gt; Salt --&gt; Mass Transport 18. Thermodynamics --&gt; Salt --&gt; Thermodynamics 19. Thermodynamics --&gt; Ice Thickness Distribution 20. Thermodynamics --&gt; Ice Floe Size Distribution 21. Thermodynamics --&gt; Melt Ponds 22. Thermodynamics --&gt; Snow Processes 23. Radiative Processes 1. Key Properties --&gt; Model Name of seaice model used. 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of sea ice model. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.model.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.variables.prognostic') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Sea ice temperature" # "Sea ice concentration" # "Sea ice thickness" # "Sea ice volume per grid cell area" # "Sea ice u-velocity" # "Sea ice v-velocity" # "Sea ice enthalpy" # "Internal ice stress" # "Salinity" # "Snow temperature" # "Snow depth" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 2. Key Properties --&gt; Variables List of prognostic variable in the sea ice model. 2.1. 
Prognostic Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of prognostic variables in the sea ice component. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "TEOS-10" # "Constant" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 3. Key Properties --&gt; Seawater Properties Properties of seawater relevant to sea ice 3.1. Ocean Freezing Point Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 3.2. Ocean Freezing Point Value Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If using a constant seawater freezing point, specify this value. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.resolution.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4. Key Properties --&gt; Resolution Resolution of the sea ice grid 4.1. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4.2. Canonical Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.3. Number Of Horizontal Gridpoints Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Total number of horizontal (XY) points (or degrees of freedom) on computational grid. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5. Key Properties --&gt; Tuning Applied Tuning applied to sea ice model component 5.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.2. Target Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.3. Simulations Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 *Which simulations had tuning applied, e.g. all, not historical, only pi-control? * End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.4. Metrics Used Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List any observed metrics used in tuning model/parameters End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.5. Variables Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Which variables were changed during the tuning process? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Ice strength (P*) in units of N m{-2}" # "Snow conductivity (ks) in units of W m{-1} K{-1} " # "Minimum thickness of ice created in leads (h0) in units of m" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 6. Key Properties --&gt; Key Parameter Values Values of key parameters 6.1. Typical Parameters Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N What values were specificed for the following parameters if used? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.2. Additional Parameters Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If you have any additional paramterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.assumptions.description') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7. Key Properties --&gt; Assumptions Assumptions made in the sea ice model 7.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General overview description of any key assumptions made in this model. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.2. 
On Diagnostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.3. Missing Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.conservation.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8. Key Properties --&gt; Conservation Conservation in the sea ice component 8.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Provide a general description of conservation methodology. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.conservation.properties') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Energy" # "Mass" # "Salt" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 8.2. Properties Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Properties conserved in sea ice by the numerical schemes. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.conservation.budget') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.3. Budget Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 For each conserved property, specify the output variables which close the related budgets. as a comma separated list. For example: Conserved property, variable1, variable2, variable3 End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 8.4. Was Flux Correction Used Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does conservation involved flux correction? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.5. Corrected Conserved Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List any variables which are conserved by more than the numerical scheme alone. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Ocean grid" # "Atmosphere Grid" # "Own Grid" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 9. Grid --&gt; Discretisation --&gt; Horizontal Sea ice discretisation in the horizontal 9.1. 
Grid Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Grid on which sea ice is horizontal discretised? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Structured grid" # "Unstructured grid" # "Adaptive grid" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 9.2. Grid Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the type of sea ice grid? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Finite differences" # "Finite elements" # "Finite volumes" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 9.3. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the advection scheme? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 9.4. Thermodynamics Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the time step in the sea ice model thermodynamic component in seconds. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 9.5. Dynamics Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the time step in the sea ice model dynamic component in seconds. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.6. Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify any additional horizontal discretisation details. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Zero-layer" # "Two-layers" # "Multi-layers" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 10. Grid --&gt; Discretisation --&gt; Vertical Sea ice vertical properties 10.1. Layering Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 10.2. Number Of Layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 If using multi-layers specify how many. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 10.3. 
Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify any additional vertical grid details. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 11. Grid --&gt; Seaice Categories What method is used to represent sea ice categories ? 11.1. Has Mulitple Categories Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Set to true if the sea ice model has multiple sea ice categories. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 11.2. Number Of Categories Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 If using sea ice categories specify how many. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11.3. Category Limits Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 If using sea ice categories specify each of the category limits. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11.4. Ice Thickness Distribution Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the sea ice thickness distribution scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.seaice_categories.other') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11.5. Other Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If the sea ice model does not use sea ice categories specify any additional details. For example models that paramterise the ice thickness distribution ITD (i.e there is no explicit ITD) but there is assumed distribution and fluxes are computed accordingly. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 12. Grid --&gt; Snow On Seaice Snow on sea ice details 12.1. Has Snow On Ice Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is snow on ice represented in this model? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 12.2. Number Of Snow Levels Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of vertical levels of snow on ice? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 12.3. 
Snow Fraction Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how the snow fraction on sea ice is determined End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 12.4. Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify any additional details related to snow on ice. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.dynamics.horizontal_transport') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Incremental Re-mapping" # "Prather" # "Eulerian" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13. Dynamics Sea Ice Dynamics 13.1. Horizontal Transport Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the method of horizontal advection of sea ice? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Incremental Re-mapping" # "Prather" # "Eulerian" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.2. Transport In Thickness Space Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the method of sea ice transport in thickness space (i.e. in thickness categories)? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Hibler 1979" # "Rothrock 1975" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.3. Ice Strength Formulation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Which method of sea ice strength formulation is used? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.dynamics.redistribution') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Rafting" # "Ridging" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.4. Redistribution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Which processes can redistribute sea ice (including thickness)? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.dynamics.rheology') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Free-drift" # "Mohr-Coloumb" # "Visco-plastic" # "Elastic-visco-plastic" # "Elastic-anisotropic-plastic" # "Granular" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.5. Rheology Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Rheology, what is the ice deformation formulation? End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Pure ice latent heat (Semtner 0-layer)" # "Pure ice latent and sensible heat" # "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)" # "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14. Thermodynamics --&gt; Energy Processes related to energy in sea ice thermodynamics 14.1. Enthalpy Formulation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the energy formulation? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Pure ice" # "Saline ice" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14.2. Thermal Conductivity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What type of thermal conductivity is used? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Conduction fluxes" # "Conduction and radiation heat fluxes" # "Conduction, radiation and latent heat transport" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14.3. Heat Diffusion Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the method of heat diffusion? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Heat Reservoir" # "Thermal Fixed Salinity" # "Thermal Varying Salinity" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14.4. Basal Heat Flux Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method by which basal ocean heat flux is handled? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 14.5. Fixed Salinity Value Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 14.6. Heat Content Of Precipitation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method by which the heat content of precipitation is handled. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 14.7. 
Precipitation Effects On Salinity Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15. Thermodynamics --&gt; Mass Processes related to mass in sea ice thermodynamics 15.1. New Ice Formation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method by which new sea ice is formed in open water. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15.2. Ice Vertical Growth And Melt Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method that governs the vertical growth and melt of sea ice. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Floe-size dependent (Bitz et al 2001)" # "Virtual thin ice melting (for single-category)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.3. Ice Lateral Melting Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the method of sea ice lateral melting? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15.4. Ice Surface Sublimation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method that governs sea ice surface sublimation. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15.5. Frazil Ice Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method of frazil ice formation. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 16. Thermodynamics --&gt; Salt Processes related to salt in sea ice thermodynamics. 16.1. Has Multiple Sea Ice Salinities Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 16.2. Sea Ice Salinity Thermal Impacts Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does sea ice salinity impact the thermal properties of sea ice? 
End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Prescribed salinity profile" # "Prognostic salinity profile" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17. Thermodynamics --&gt; Salt --&gt; Mass Transport Mass transport of salt 17.1. Salinity Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How is salinity determined in the mass transport of salt calculation? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 17.2. Constant Salinity Value Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If using a constant salinity value specify this value in PSU? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.3. Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the salinity profile used. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Prescribed salinity profile" # "Prognostic salinity profile" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 18. Thermodynamics --&gt; Salt --&gt; Thermodynamics Salt thermodynamics 18.1. Salinity Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How is salinity determined in the thermodynamic calculation? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 18.2. Constant Salinity Value Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If using a constant salinity value specify this value in PSU? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 18.3. Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the salinity profile used. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Explicit" # "Virtual (enhancement of thermal conductivity, thin ice melting)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 19. Thermodynamics --&gt; Ice Thickness Distribution Ice thickness distribution details. 19.1. Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How is the sea ice thickness distribution represented? End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Explicit" # "Parameterised" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 20. Thermodynamics --&gt; Ice Floe Size Distribution Ice floe-size distribution details. 20.1. Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How is the sea ice floe-size represented? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 20.2. Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Please provide further details on any parameterisation of floe-size. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 21. Thermodynamics --&gt; Melt Ponds Characteristics of melt ponds. 21.1. Are Included Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Are melt ponds included in the sea ice model? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Flocco and Feltham (2010)" # "Level-ice melt ponds" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 21.2. Formulation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What method of melt pond formulation is used? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Albedo" # "Freshwater" # "Heat" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 21.3. Impacts Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N What do melt ponds have an impact on? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging') # PROPERTY VALUE(S): # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 22. Thermodynamics --&gt; Snow Processes Thermodynamic processes in snow on sea ice 22.1. Has Snow Aging Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Set to True if the sea ice model has a snow aging scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 22.2. Snow Aging Scheme Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the snow aging scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 22.3. 
Has Snow Ice Formation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Set to True if the sea ice model has snow ice formation. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 22.4. Snow Ice Formation Scheme Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the snow ice formation scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 22.5. Redistribution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the impact of ridging on snow cover? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Single-layered heat diffusion" # "Multi-layered heat diffusion" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 22.6. Heat Diffusion Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the heat diffusion through snow methodology in sea ice thermodynamics? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Delta-Eddington" # "Parameterized" # "Multi-band albedo" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 23. Radiative Processes Sea Ice Radiative Processes 23.1. Surface Albedo Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method used to handle surface albedo. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Delta-Eddington" # "Exponential attenuation" # "Ice radiation transmission per category" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 23.2. Ice Radiation Transmission Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Method by which solar radiation through sea ice is handled. End of explanation
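All of the cells in this notebook are left as `# TODO` placeholders. Purely as an illustration of how one such cell is completed — the value below is a placeholder picked from the cell's own list of valid choices, not the actual EC-Earth/SANDBOX-1 configuration — a filled-in property would look like this:

```python
# Hypothetical example of a completed cell; the chosen value is a placeholder,
# not the real EC-Earth sea-ice configuration.
DOC.set_id('cmip6.seaice.dynamics.rheology')
DOC.set_value("Elastic-visco-plastic")  # must be one of the enumerated valid choices
```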
5,243
Given the following text description, write Python code to implement the functionality described below step by step Description: Example 14 Step1: Create a point source theory RVT motion Step2: Create site profile This is about the simplest profile that we can create. Linear-elastic soil and rock. Step3: Create the site response calculator Step4: Initialize the variations Step5: Specify the output Step6: Perform the calculation Step7: Plot the outputs Create a few plots of the output. Step8: Manipulating output as dataframe If a tuple is passed as the output name, it is used to create pandas.MultiIndex columns. Step9: Let's add names to the dataframe and transform it into a long format. Pandas works better on long formatted tables. Step10: Access the properties of each motion like
Python Code: import matplotlib.pyplot as plt import numpy as np import pandas as pd import pysra %matplotlib inline # Increased figure sizes plt.rcParams["figure.dpi"] = 120 Explanation: Example 14: RVT SRA with multiple motions and simulated profiles Example with multiple input motions and simulated soil profiles. End of explanation motions = [ pysra.motion.SourceTheoryRvtMotion(5.0, 30, "wna"), pysra.motion.SourceTheoryRvtMotion(6.0, 30, "wna"), pysra.motion.SourceTheoryRvtMotion(7.0, 30, "wna"), ] for m in motions: m.calc_fourier_amps() Explanation: Create a point source theory RVT motion End of explanation profile = pysra.site.Profile( [ pysra.site.Layer( pysra.site.DarendeliSoilType(18.0, plas_index=0, ocr=1, stress_mean=100), 10, 400, ), pysra.site.Layer( pysra.site.DarendeliSoilType(18.0, plas_index=0, ocr=1, stress_mean=200), 10, 450, ), pysra.site.Layer( pysra.site.DarendeliSoilType(18.0, plas_index=0, ocr=1, stress_mean=400), 30, 600, ), pysra.site.Layer(pysra.site.SoilType("Rock", 24.0, None, 0.01), 0, 1200), ] ) Explanation: Create site profile This is about the simplest profile that we can create. Linear-elastic soil and rock. End of explanation calc = pysra.propagation.EquivalentLinearCalculator() Explanation: Create the site response calculator End of explanation var_thickness = pysra.variation.ToroThicknessVariation() var_velocity = pysra.variation.DepthDependToroVelVariation.generic_model("USGS C") var_soiltypes = pysra.variation.SpidVariation( -0.5, std_mod_reduc=0.15, std_damping=0.30 ) Explanation: Initialize the variations End of explanation freqs = np.logspace(-1, 2, num=500) outputs = pysra.output.OutputCollection( [ pysra.output.ResponseSpectrumOutput( # Frequency freqs, # Location of the output pysra.output.OutputLocation("outcrop", index=0), # Damping 0.05, ), pysra.output.ResponseSpectrumRatioOutput( # Frequency freqs, # Location in (denominator), pysra.output.OutputLocation("outcrop", index=-1), # Location out (numerator) pysra.output.OutputLocation("outcrop", index=0), # Damping 0.05, ), pysra.output.InitialVelProfile(), pysra.output.MaxAccelProfile() ] ) Explanation: Specify the output End of explanation count = 20 outputs.reset() for i, p in enumerate( pysra.variation.iter_varied_profiles( profile, count, # var_thickness=var_thickness, var_velocity=var_velocity, # var_soiltypes=var_soiltypes ) ): # Here we auto-descretize the profile for wave propagation purposes p = p.auto_discretize() for j, m in enumerate(motions): name = (f"p{i}", f"m{j}") calc(m, p, p.location("outcrop", index=-1)) outputs(calc, name=name) Explanation: Perform the calculation End of explanation for o in outputs: ax = o.plot(style="stats") Explanation: Plot the outputs Create a few plots of the output. End of explanation df = outputs[1].to_dataframe() df Explanation: Manipulating output as dataframe If a tuple is passed as the output name, it is used to create a pandas.MultiIndex columns. End of explanation # Add names for clarity df.columns.names = ("profile", "motion") df.index.name = "freq" # Transform into a long format df = df.reset_index().melt(id_vars="freq") df def calc_stats(group): ln_value = np.log(group["value"]) median = np.exp(np.mean(ln_value)) ln_std = np.std(ln_value) return pd.Series({"median": median, "ln_std": ln_std}) stats = df.groupby(["freq", "motion"]).apply(calc_stats) stats stats = ( stats.reset_index("motion") .pivot(columns="motion") .swaplevel(0, 1, axis=1) .sort_index(axis=1) ) stats Explanation: Lets names to the dataframe and transform into a long format. 
Pandas works better on long formatted tables. End of explanation stats["m0"] fig, axes = plt.subplots(nrows=2, subplot_kw={"xscale": "log"}) for name, g in stats.groupby(level=0, axis=1): for ax, key in zip(axes, ["median", "ln_std"]): ax.plot(g.index, g[(name, key)], label=name) axes[0].set(ylabel="5% Damped, Spec. Ratio") axes[0].legend() axes[1].set(ylabel="ln. Stdev.", xlabel="Frequency (Hz)") fig; Explanation: Access the properties of each motion like: End of explanation
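Since the (profile, motion) name tuples end up as a two-level pandas.MultiIndex on the columns, slices can be taken either by top-level label or with .xs on a named level. The snippet below is a stand-alone illustration of that indexing pattern only — the labels are made up and it does not depend on the pysra objects above:

```python
import numpy as np
import pandas as pd

# Tuple column names -> two-level MultiIndex, as produced by the output collection above.
cols = pd.MultiIndex.from_tuples(
    [("p0", "m0"), ("p0", "m1"), ("p1", "m0"), ("p1", "m1")],
    names=("profile", "motion"),
)
demo = pd.DataFrame(np.random.rand(3, 4), columns=cols)

print(demo["p0"])                              # every motion of profile p0
print(demo.xs("m0", axis=1, level="motion"))   # motion m0 across all profiles
```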
5,244
Given the following text description, write Python code to implement the functionality described below step by step Description: Wiki-Vote Experiments Output Visualization Step1: Parse results Step2: PageRank Seeds Percentage How many times the "Top X" nodes from PageRank have led to the max infection Step3: Avg adopters per seed comparison Step4: Eigenvector Seeds Percentage How many times the "Top X" nodes from Eigenvector have led to the max infection Step5: Avg adopters per seed comparison Step6: Betweenness Seeds Percentage How many times the "Top X" nodes from Betweenness have led to the max infection Step7: Avg adopters per seed comparison Step8: 100 runs adopters comparison Step9: Centrality Measures Averages PageRank avg adopters and seed Step10: Eigenv avg adopters and seed Step11: Betweenness avg adopters and seed
Python Code: #!/usr/bin/python %matplotlib inline import numpy as np import matplotlib.pyplot as plt from stats import parse_results, get_percentage, get_avg_per_seed, draw_pie, draw_bars, draw_bars_comparison, draw_avgs Explanation: Wiki-Vote Experiments Output Visualization End of explanation pr, eigen, bet = parse_results('test_wikivote.txt') Explanation: Parse results End of explanation draw_pie(get_percentage(pr)) Explanation: PageRank Seeds Percentage How many times the "Top X" nodes from PageRank have led to the max infection End of explanation draw_bars_comparison('Avg adopters per seeds', 'Avg adopters', np.array(get_avg_per_seed(pr)+[(0, np.mean(pr[:,1]))])) Explanation: Avg adopters per seed comparison End of explanation draw_pie(get_percentage(eigen)) Explanation: Eigenvector Seeds Percentage How many times the "Top X" nodes from Eigenvector have led to the max infection End of explanation draw_bars_comparison('Avg adopters per seeds', 'Avg adopters', np.array(get_avg_per_seed(eigen)+[(0, np.mean(eigen[:,1]))])) Explanation: Avg adopters per seed comparison End of explanation draw_pie(get_percentage(bet)) Explanation: Betweenness Seeds Percentage How many times the "Top X" nodes from Betweenness have led to the max infection End of explanation draw_bars_comparison('Avg adopters per seeds', 'Avg adopters', np.array(get_avg_per_seed(bet)+[(0, np.mean(bet[:,1]))])) Explanation: Avg adopters per seed comparison End of explanation draw_bars(np.sort(pr.view('i8,i8'), order=['f0'], axis=0).view(np.int), np.sort(eigen.view('i8,i8'), order=['f0'], axis=0).view(np.int), np.sort(bet.view('i8,i8'), order=['f0'], axis=0).view(np.int)) Explanation: 100 runs adopters comparison End of explanation pr_mean = np.mean(pr[:,1]) pr_mean_seed = np.mean(pr[:,0]) print 'Avg Seed:',pr_mean_seed, 'Avg adopters:', pr_mean Explanation: Centrality Measures Averages PageRank avg adopters and seed End of explanation eigen_mean = np.mean(eigen[:,1]) eigen_mean_seed = np.mean(eigen[:,0]) print 'Avg Seed:',eigen_mean_seed, 'Avg adopters:',eigen_mean Explanation: Eigenv avg adopters and seed End of explanation bet_mean = np.mean(bet[:,1]) bet_mean_seed = np.mean(bet[:,0]) print 'Avg Seed:',bet_mean_seed, 'Avg adopters:',bet_mean draw_avgs([pr_mean, eigen_mean, bet_mean]) Explanation: Betweenness avg adopters and seed End of explanation
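The helpers imported from the local stats module (parse_results, get_percentage, get_avg_per_seed and the draw_* functions) are not part of this listing, so their exact behaviour is unknown. Judging only from how they are called — each result array has the seed count in column 0 and the number of adopters in column 1 — minimal stand-ins could look roughly like the following; this is a guess for illustration, not the project's actual implementation:

```python
import numpy as np

def get_percentage(results):
    # Assumed behaviour: for each seed-set size, the share of runs (in percent)
    # in which that size was recorded, e.g. for feeding the pie chart.
    seeds, counts = np.unique(results[:, 0], return_counts=True)
    return {int(s): 100.0 * c / len(results) for s, c in zip(seeds, counts)}

def get_avg_per_seed(results):
    # Assumed behaviour: list of (seed_size, mean adopters over runs with that size),
    # matching how the result is concatenated with [(0, np.mean(results[:, 1]))] above.
    return [(int(s), float(results[results[:, 0] == s, 1].mean()))
            for s in np.unique(results[:, 0])]
```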
5,245
Given the following text description, write Python code to implement the functionality described below step by step Description: Stochastic examples This example is designed to show how to use the stochastic optimization algorithms for discrete and semicontinuous measures from the POT library. Step1: COMPUTE TRANSPORTATION MATRIX FOR SEMI-DUAL PROBLEM DISCRETE CASE Step2: Call the "SAG" method to find the transportation matrix in the discrete case Define the method "SAG", call ot.solve_semi_dual_entropic and plot the results. Step3: SEMICONTINUOUS CASE Step4: Call the "ASGD" method to find the transportation matrix in the semicontinuous case Define the method "ASGD", call ot.solve_semi_dual_entropic and plot the results. Step5: Compare the results with the Sinkhorn algorithm Call the Sinkhorn algorithm from POT Step6: PLOT TRANSPORTATION MATRIX Plot SAG results Step7: Plot ASGD results Step8: Plot Sinkhorn results Step9: COMPUTE TRANSPORTATION MATRIX FOR DUAL PROBLEM SEMICONTINUOUS CASE Step10: Call the "SGD" dual method to find the transportation matrix in the semicontinuous case Call ot.solve_dual_entropic and plot the results. Step11: Compare the results with the Sinkhorn algorithm Call the Sinkhorn algorithm from POT Step12: Plot SGD results Step13: Plot Sinkhorn results
Python Code: # Author: Kilian Fatras <[email protected]> # # License: MIT License import matplotlib.pylab as pl import numpy as np import ot import ot.plot Explanation: Stochastic examples This example is designed to show how to use the stochatic optimization algorithms for descrete and semicontinous measures from the POT library. End of explanation n_source = 7 n_target = 4 reg = 1 numItermax = 1000 a = ot.utils.unif(n_source) b = ot.utils.unif(n_target) rng = np.random.RandomState(0) X_source = rng.randn(n_source, 2) Y_target = rng.randn(n_target, 2) M = ot.dist(X_source, Y_target) Explanation: COMPUTE TRANSPORTATION MATRIX FOR SEMI-DUAL PROBLEM DISCRETE CASE: Sample two discrete measures for the discrete case Define 2 discrete measures a and b, the points where are defined the source and the target measures and finally the cost matrix c. End of explanation method = "SAG" sag_pi = ot.stochastic.solve_semi_dual_entropic(a, b, M, reg, method, numItermax) print(sag_pi) Explanation: Call the "SAG" method to find the transportation matrix in the discrete case Define the method "SAG", call ot.solve_semi_dual_entropic and plot the results. End of explanation n_source = 7 n_target = 4 reg = 1 numItermax = 1000 log = True a = ot.utils.unif(n_source) b = ot.utils.unif(n_target) rng = np.random.RandomState(0) X_source = rng.randn(n_source, 2) Y_target = rng.randn(n_target, 2) M = ot.dist(X_source, Y_target) Explanation: SEMICONTINOUS CASE: Sample one general measure a, one discrete measures b for the semicontinous case Define one general measure a, one discrete measures b, the points where are defined the source and the target measures and finally the cost matrix c. End of explanation method = "ASGD" asgd_pi, log_asgd = ot.stochastic.solve_semi_dual_entropic(a, b, M, reg, method, numItermax, log=log) print(log_asgd['alpha'], log_asgd['beta']) print(asgd_pi) Explanation: Call the "ASGD" method to find the transportation matrix in the semicontinous case Define the method "ASGD", call ot.solve_semi_dual_entropic and plot the results. End of explanation sinkhorn_pi = ot.sinkhorn(a, b, M, reg) print(sinkhorn_pi) Explanation: Compare the results with the Sinkhorn algorithm Call the Sinkhorn algorithm from POT End of explanation pl.figure(4, figsize=(5, 5)) ot.plot.plot1D_mat(a, b, sag_pi, 'semi-dual : OT matrix SAG') pl.show() Explanation: PLOT TRANSPORTATION MATRIX Plot SAG results End of explanation pl.figure(4, figsize=(5, 5)) ot.plot.plot1D_mat(a, b, asgd_pi, 'semi-dual : OT matrix ASGD') pl.show() Explanation: Plot ASGD results End of explanation pl.figure(4, figsize=(5, 5)) ot.plot.plot1D_mat(a, b, sinkhorn_pi, 'OT matrix Sinkhorn') pl.show() Explanation: Plot Sinkhorn results End of explanation n_source = 7 n_target = 4 reg = 1 numItermax = 100000 lr = 0.1 batch_size = 3 log = True a = ot.utils.unif(n_source) b = ot.utils.unif(n_target) rng = np.random.RandomState(0) X_source = rng.randn(n_source, 2) Y_target = rng.randn(n_target, 2) M = ot.dist(X_source, Y_target) Explanation: COMPUTE TRANSPORTATION MATRIX FOR DUAL PROBLEM SEMICONTINOUS CASE: Sample one general measure a, one discrete measures b for the semicontinous case Define one general measure a, one discrete measures b, the points where are defined the source and the target measures and finally the cost matrix c. 
End of explanation sgd_dual_pi, log_sgd = ot.stochastic.solve_dual_entropic(a, b, M, reg, batch_size, numItermax, lr, log=log) print(log_sgd['alpha'], log_sgd['beta']) print(sgd_dual_pi) Explanation: Call the "SGD" dual method to find the transportation matrix in the semicontinous case Call ot.solve_dual_entropic and plot the results. End of explanation sinkhorn_pi = ot.sinkhorn(a, b, M, reg) print(sinkhorn_pi) Explanation: Compare the results with the Sinkhorn algorithm Call the Sinkhorn algorithm from POT End of explanation pl.figure(4, figsize=(5, 5)) ot.plot.plot1D_mat(a, b, sgd_dual_pi, 'dual : OT matrix SGD') pl.show() Explanation: Plot SGD results End of explanation pl.figure(4, figsize=(5, 5)) ot.plot.plot1D_mat(a, b, sinkhorn_pi, 'OT matrix Sinkhorn') pl.show() Explanation: Plot Sinkhorn results End of explanation
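A quick way to compare the stochastic solvers with Sinkhorn, beyond looking at the printed matrices, is to check that each returned plan (approximately) has the prescribed marginals and to compare their transport costs. The snippet below is a small add-on check using the variables already defined above; the stochastic plans only satisfy the constraints approximately, hence the loose tolerance:

```python
# Marginal check: each plan should (roughly) sum to a along rows and b along columns.
for name, pi in [("SAG", sag_pi), ("ASGD", asgd_pi),
                 ("SGD dual", sgd_dual_pi), ("Sinkhorn", sinkhorn_pi)]:
    ok_rows = np.allclose(pi.sum(axis=1), a, atol=1e-2)
    ok_cols = np.allclose(pi.sum(axis=0), b, atol=1e-2)
    print(name, "marginals ok:", ok_rows and ok_cols, " cost:", np.sum(pi * M))
```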
5,246
Given the following text description, write Python code to implement the functionality described below step by step Description: The Knuth-Bendix Completion Algorithm This notebook presents the Knuth-Bendix completion algorithm for transforming a set of equations into a confluent term rewriting system. This notebook is divided into seven sections. - Parsing - Matching - Term Rewriting - Unification - The Lexicographic Path Ordering - Critical Pairs - The Completion Algorithm Parsing To begin, we need a parser that is capable of parsing terms and equations. This parser is implemented in Parser.ipynb and can parse equations of terms and supports the binary operators +, -, *, /, %, and ^ (exponentiation) with the usual precedences. Furthermore, function symbols are supported. It also provides the function to_str for turning terms or equations into strings. All together, it provides the following functions Step1: Back to top Matching The function is_var(t) checks whether the term t is a variable. Variables are represented as nested tuples of the form ($var, name), where name is the name of the variable. Step2: Given a variable name x, the function make_var(x) creates a variable with name x. Step3: Given a term p, a term t, and a substitution σ, the function match(p, t, σ) tries to extend the substitution σ so that the equation $$ p \sigma = t $$ is satisfied. If this is possible, the function returns True and updates the substitution σ so that $p \sigma = t$ holds. Otherwise, the function returns False. Step4: Given a term t (or a set of terms), the function find_variables(t) computes the set of all variables occurring in t. Step5: Given a list of terms L the function find_variables(L) computes the set of all variables occurring in L. Step6: Given a term t and a substitution σ that is represented as a dictionary of the form $$ \sigma = { x_1 Step7: Given a set of terms or equations Ts and a substitution σ, the function apply_set(T, σ) applies the substitution σ to all elements in Ts. Step8: If $\sigma = \big[ x_1 \mapsto s_1, \cdots, x_m \mapsto s_m \big]$ and $\tau = \big[ y_1 \mapsto t_1, \cdots, y_n \mapsto t_n \big]$ are two substitutions that are <em style="color Step9: Back to top Term Rewriting Step10: Given a term s and a set of variables V, the function rename_variables(s, V) renames the variables in s so that they differ from the variables in the set V. This will only work if the number of variables occurring in V times two is less than the number of letters in the latin alphabet, i.e. less than 26. Therefore, the set V must have at most 13 variables. For our examples, this is not a restriction. Step11: The function simplify_step(t, E) takes two arguments Step12: The function normal_form(t, E) takes a term t and a list (or set) of equations E and tries to simplify the term t as much as possible using the equations from E. In the implementation, we have to be careful to rename the variables occurring in E so that they are different from the variables occurring in t. Furthermore, we have to take care that we don't identify different variables in E by accident. Therefore, we rename the variables in E so that they are both different from the variables in t and from the old variables occurring in E. Step13: Given an equation eq and a set of RewriteRules, the function simplify_equations simplifies the equation eq using the given rewrite rules. It returns a pair of terms. 
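The code cells for this notebook are not included in the listing, so as a point of reference here is a minimal sketch of how match and apply could be written for the term representation described above (a variable is a pair ('$var', name), every other term is a tuple whose first component is the function symbol). It is an illustrative reconstruction from the prose, not the notebook's own code:

```python
def is_var(t):
    return isinstance(t, tuple) and len(t) == 2 and t[0] == '$var'

def make_var(x):
    return ('$var', x)

def match(p, t, sigma):
    """Try to extend sigma so that p*sigma == t; update sigma in place, return a bool."""
    if is_var(p):
        x = p[1]
        if x in sigma:
            return sigma[x] == t
        sigma[x] = t
        return True
    if is_var(t) or not isinstance(t, tuple) or p[0] != t[0] or len(p) != len(t):
        return False
    return all(match(pa, ta, sigma) for pa, ta in zip(p[1:], t[1:]))

def apply(t, sigma):
    """Apply the substitution sigma (a dict mapping variable names to terms) to t."""
    if is_var(t):
        return sigma.get(t[1], t)
    return (t[0],) + tuple(apply(arg, sigma) for arg in t[1:])
```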
Step14: Given a new rewrite rule, the function simplify_rules(RewriteRules, rule) tries to simplify all rules in RewriteRules with rule. If an equation eq from RewriteRules can be simplified with rule, it is further simplified with all rules in RewriteRules. Step15: Back to top Unification In this section, we implement the unification algorithm of Martelli and Montanari. Given a variable name x and a term t, the function occurs(x, t) checks whether x occurs in t. Step16: The algorithm implemented below takes a pair (E, σ) as its input. Here E is a set of syntactical equations that need to be solved and σ is a substitution that is initially empty. The pair (E, σ) is then transformed using the rules of Martelli and Montanari. The transformation is successful if the pair (E, σ) can be transformed into a pair of the form ({}, μ). Then μ is the solution to the system of equations E. The rules that can be used to solve a system of syntactical equations are as follows Step17: Given a set of <em style="color Step18: Back to top The Lexicographic Path Ordering In order to turn an equation $s = t$ into a rewrite rules, we have to check whether $s$ is more complex that $t$, so that $s$ should be simplified to $t$, or whether $t$ is more complex than $s$ and we should rewrite $t$ into $s$. To this end, we implement the lexicographic path ordering, which is described in detail below. The function is_simpler(s, t, D) receives three arguments. - s and t are terms. - D is a dictionary mapping function symbols to <u>different</u> natural numbers. We define a total order on function symbols by defining $$ f < g \;\stackrel{_\textrm{def}}{\Longleftrightarrow}\; D[f] < D[g]. $$ The function is_simpler(s, t, D) checks whether s is simpler than t. In order to define this notion, we assume that a total order $<$ is given on the set of function symbols. Then we define $s \prec t$ (read Step19: Given two lists S and T of terms and a dictionary D, the function is_simpler_list(S, T, D) checks whether S is lexicographically simpler than T if the elements of S and T are compared with the lexicographical path ordering $\prec$. It is assumed that S and T have the same length. Step20: Given a pair of term s and t and an Ordering of the function symbols occurring in s and t, the function order_equation orders the equation s = t with respect to the lexicographic path ordering, i.e. in the ordered equation, the right hand side is simpler than the left hand side. If s and t are incomparable, the function raises an exception. Step21: Back to top Critical Pairs The central notion of the Knuth-Bendix algorithm is the notion of a critical pair. Given two equations lhs1 = rhs1 and lhs2 = rhs2, a pair of terms (s, t) is a critical pair of these equations if we have the following Step22: Given a term t and a position u in t, the function subterm(t, u) extracts the subterm that is located at position u, i.e. it computes t/u. The position u is zero-based. Step23: Given a term t, a position u in t and a term s, the function replace_at(t, u, s) replaces the subterm at position u with t. The position u uses zero-based indexing. Step24: Given two equations eq1 and eq2, the function critical_pairs(eq1, eq2) computes the set of all critical pairs between these equations. 
A pair of terms (s, t) is a critical pair of eq1 and eq2 if we have - eq1 has the form lhs1 = rhs1, - eq2 has the form lhs2 = rhs2, - a is a non-trivial position in lhs1, - $\mu = \texttt{mgu}(\texttt{lhs}_1/a, \texttt{lhs}_2) \not= \texttt{None}$, - $s = \texttt{lhs}_1[a \leftarrow \texttt{rhs}_2]\mu$ and $t = \texttt{rhs}_1\mu$. Step25: Back to top The Completion Algorithm Step26: Given an equation eq of the form eq = ('=', lhs, rhs), the function complexity(eq) computes a measure of complexity for the given equation. This measure of complexity is the length of the string that represents the equation. This measure of complexity is later used to choose between equations Step27: Given a set of equations RewriteRules and a single rewrite rule eq, the function all_critical_pairs(RewriteRules, eq) computes the set of all non-trivial critical pairs that can be build by building critical pairs with an equation from RewriteRules and the equation eq. It is assumed that eq is already an element of RewriteRules. Step28: The module heapq provides heap-based priority queues, which are implemented as lists. Step29: Given a file name that contains a set of equations and a dictionary encoding an ordering of the function symbols, the function knuth_bendix_algorithm performs the Knuth-Bendix algorithm
Python Code: %run Parser.ipynb t = parse_term('x * y * z') t to_str(t) eq = parse_equation('i(x) * x = 1') eq to_str(parse_file('Examples/group-theory-1.eqn')) Explanation: The Knuth-Bendix Completion Algorithm This notebook presents the Knuth-Bendix completion algorithm for transforming a set of equations into a confluent term rewriting system. This notebook is divided into seven sections. - Parsing - Matching - Term Rewriting - Unification - The Lexicographic Path Ordering - Critical Pairs - The Completion Algorithm Parsing To begin, we need a parser that is capable of parsing terms and equations. This parser is implemented in Parser.ipynb and can parse equations of terms and supports the binary operators +, -, *, /, %, and ^ (exponentiation) with the usual precedences. Furthermore, function symbols are supported. It also provides the function to_str for turning terms or equations into strings. All together, it provides the following functions: - parse_file(file_name) parses a file containing equations between terms. - parse_equation(s) converts the string s into an equation. - parse_term(s) converts the string s into a term. - to_str(o) converts an object o into a string. The object o either is * a term, * an equation, * a list of equations, * a set of equations, or * a dictionary representing a substitution. Terms and equations are represented as nested tuples. The parser is implemented using the parser generator Ply. End of explanation def is_var(t): return t[0] == '$var' Explanation: Back to top Matching The function is_var(t) checks whether the term t is a variable. Variables are represented as nested tuples of the form ($var, name), where name is the name of the variable. End of explanation def make_var(x): return ('$var', x) Explanation: Given a variable name x, the function make_var(x) creates a variable with name x. End of explanation def match(pattern, term, σ): if is_var(pattern): _, var = pattern if var in σ: return σ[var] == term else: σ[var] = term # extend σ return True if pattern[0] == term[0] and len(pattern) == len(term): return all(match(pattern[i], term[i], σ) for i in range(1, len(pattern))) p = parse_term('i(x) * z') t = parse_term('i(i(y)) * i(y)') σ = {} match(p, t, σ) to_str(σ) Explanation: Given a term p, a term t, and a substitution σ, the function match(p, t, σ) tries to extend the substitution σ so that the equation $$ p \sigma = t $$ is satisfied. If this is possible, the function returns True and updates the substitution σ so that $p \sigma = t$ holds. Otherwise, the function returns False. End of explanation def find_variables(t): if isinstance(t, set): return { var for term in t for var in find_variables(term) } if is_var(t): _, var = t return { var } _, *L = t return find_variables_list(L) Explanation: Given a term t (or a set of terms), the function find_variables(t) computes the set of all variables occurring in t. End of explanation def find_variables_list(L): if L == []: return set() return { x for t in L for x in find_variables(t) } eq = parse_equation('(x * y) * z = x * (y * z)') find_variables(eq) Explanation: Given a list of terms L the function find_variables(L) computes the set of all variables occurring in L. End of explanation def apply(t, σ): "Apply the substitution σ to the term t." 
if is_var(t): _, var = t if var in σ: return σ[var] else: return t else: f, *Ts = t return (f,) + tuple(apply(s, σ) for s in Ts) p = parse_term('i(x) * x') t = parse_term('i(i(y)) * i(y)') σ = {} match(p, t, σ) to_str(apply(p, σ)) Explanation: Given a term t and a substitution σ that is represented as a dictionary of the form $$ \sigma = { x_1: s_1, \cdots, x_n:s_n }, $$ the function apply(t, σ) computes the term that results from replacing the variables $x_i$ with the terms $s_i$ in t for all $i=1,\cdots,n$. This term is written as $t\sigma$ and if $\sigma = { x_1: s_1, \cdots, x_n:s_n }$, then $t\sigma$ is defined by induction on t as follows: - $x_i\sigma := s_i$, - $v\sigma := v$ if $v$ is a variable and $v \not\in {x_1,\cdots,x_n}$, - $f(t_1,\cdots,t_n)\sigma := f(t_1\sigma, \cdots, t_n\sigma)$. End of explanation def apply_set(Ts, σ): return { apply(t, σ) for t in Ts } Explanation: Given a set of terms or equations Ts and a substitution σ, the function apply_set(T, σ) applies the substitution σ to all elements in Ts. End of explanation def compose(σ, τ): Result = { x: apply(s, τ) for (x, s) in σ.items() } Result.update(τ) return Result t1 = parse_term('i(y)') t2 = parse_term('a * b') t3 = parse_term('i(b)') σ = { 'x': t1 } τ = { 'y': t2, 'z': t3 } f'compose({to_str(σ)}, {to_str(τ)}) = {to_str(compose(σ, τ))}' Explanation: If $\sigma = \big[ x_1 \mapsto s_1, \cdots, x_m \mapsto s_m \big]$ and $\tau = \big[ y_1 \mapsto t_1, \cdots, y_n \mapsto t_n \big]$ are two substitutions that are <em style="color:blue;">non-overlapping</em>, i.e. such that ${x_1,\cdots, x_m} \cap {y_1,\cdots,y_n} = {}$ holds, then we define the <em style="color:blue;">composition</em> $\sigma\tau$ of $\sigma$ and $\tau$ as follows: $$\sigma\tau := \big[ x_1 \mapsto s_1\tau, \cdots, x_m \mapsto s_m\tau,\; y_1 \mapsto t_1, \cdots, y_n \mapsto t_n \big]$$ This definition implies that we the following associative law is valid: $$ x(\sigma\tau) = (x\sigma)\tau $$ The function $\texttt{compose}(\sigma, \tau)$ takes two non-overlapping substitutions and computes the composition $\sigma\tau$. End of explanation from string import ascii_lowercase ascii_lowercase Explanation: Back to top Term Rewriting End of explanation def rename_variables(s, Vars): assert len(Vars) <= 13, f'Error: too many variables in {Vars}.' NewVars = set(ascii_lowercase) - Vars NewVars = sorted(list(NewVars)) σ = { x: make_var(NewVars[i]) for (i, x) in enumerate(Vars) } return apply(s, σ) Vars = { 'x', 'y', 'z' } t = parse_term('x * y * z') f'rename_variables({to_str(t)}, {Vars}) = {to_str(rename_variables(t, Vars))}' Explanation: Given a term s and a set of variables V, the function rename_variables(s, V) renames the variables in s so that they differ from the variables in the set V. This will only work if the number of variables occurring in V times two is less than the number of letters in the latin alphabet, i.e. less than 26. Therefore, the set V must have at most 13 variables. For our examples, this is not a restriction. 
End of explanation def simplify_step(t, Equations): if is_var(t): return None for eq in Equations: _, lhs, rhs = eq σ = {} if match(lhs, t, σ): return apply(rhs, σ) f, *args = t simpleArgs = [] change = False for arg in args: simple = simplify_step(arg, Equations) if simple != None: simpleArgs += [simple] change = True else: simpleArgs += [arg] if change: return (f,) + tuple(simpleArgs) return None E = { parse_equation('(x * y) * z = x * (y * z)') } t = parse_term('(a * b) * i(b)') f'simplify_step({to_str(t)}, {to_str(E)}) = {to_str(simplify_step(t, E))}' Explanation: The function simplify_step(t, E) takes two arguments: - t is a term, - E is a set of equations of the form ('=', l, r). The function tries to an equation l = r in E and a subterm s in the term t such that the left hand side l of the equation matches the subterm s using some substitution $\sigma$, i.e. we have $s = l\sigma$. Then the term t is simplified by replacing the subterm s in t by $r\sigma$. More formally, if u is the position of s in t, i.e. t/u = s then t is simplified into the term $$ t[u \mapsto r\sigma] $$ If an appropriate subterm s is found, the simplified term is returned. Otherwise, the function returns None. End of explanation def normal_form(t, E): Vars = find_variables(t) | find_variables(E) NewE = [] for eq in E: NewE += [ rename_variables(eq, Vars) ] while True: s = simplify_step(t, NewE) if s == None: return t t = s l = parse_term('i(b * a)') eq = parse_equation('i(a * c) = i(c) * i(a)') f'normal_form({to_str(l)}, {to_str(eq)}) = {to_str(normal_form(l, {eq}))}' E = parse_file('Examples/group-theory-1.eqn') t = parse_term('(x * i(y)) * y * z') print(f'normal_form({to_str(t)}, {to_str(E)}) = \n{to_str(normal_form(t, E))}') Explanation: The function normal_form(t, E) takes a term t and a list (or set) of equations E and tries to simplify the term t as much as possible using the equations from E. In the implementation, we have to be careful to rename the variables occurring in E so that they are different from the variables occurring in t. Furthermore, we have to take care that we don't identify different variables in E by accident. Therefore, we rename the variables in E so that they are both different from the variables in t and from the old variables occurring in E. End of explanation def simplify_equation(eq, RewriteRules): _, s, t = eq new_s = normal_form(s, RewriteRules) new_t = normal_form(t, RewriteRules) return new_s, new_t Explanation: Given an equation eq and a set of RewriteRules, the function simplify_equations simplifies the equation eq using the given rewrite rules. It returns a pair of terms. End of explanation def simplify_rules(RewriteRules, rule): SimpleEqs = set() for eq in RewriteRules: _, s, t = eq new_s = normal_form(s, { rule }) if new_s != s: new_s = normal_form(new_s, RewriteRules | { rule }) new_t = normal_form(t, { rule }) if new_t != t: new_t = normal_form(new_t, RewriteRules | { rule }) if new_s != new_t: simple = order_equation(new_s, new_t, Ordering) SimpleEqs.add(simple) else: print(f'removed: {to_str(s)} = {to_str(t)}') return SimpleEqs Explanation: Given a new rewrite rule, the function simplify_rules(RewriteRules, rule) tries to simplify all rules in RewriteRules with rule. If an equation eq from RewriteRules can be simplified with rule, it is further simplified with all rules in RewriteRules. 
End of explanation def occurs(x, t): if is_var(t): _, var = t return x == var return any(occurs(x, arg) for arg in t[1:]) Explanation: Back to top Unification In this section, we implement the unification algorithm of Martelli and Montanari. Given a variable name x and a term t, the function occurs(x, t) checks whether x occurs in t. End of explanation def unify(s, t): return solve({('≐', s, t)}, {}) Explanation: The algorithm implemented below takes a pair (E, σ) as its input. Here E is a set of syntactical equations that need to be solved and σ is a substitution that is initially empty. The pair (E, σ) is then transformed using the rules of Martelli and Montanari. The transformation is successful if the pair (E, σ) can be transformed into a pair of the form ({}, μ). Then μ is the solution to the system of equations E. The rules that can be used to solve a system of syntactical equations are as follows: <ol> <li> If $y\in\mathcal{V}$ is a variable that does <b style="color:red;">not</b> occur in the term $t$, then we perform the following reduction: $$ \Big\langle E \cup \big\{ y \doteq t \big\}, \sigma \Big\rangle \quad\leadsto \quad \Big\langle E[y \mapsto t], \sigma\big[ y \mapsto t \big] \Big\rangle $$ </li> <li> If the variable $y$ occurs in the term $t$, then the system of syntactical equations $E \cup \big\{ y \doteq t \big\}$ is not solvable: $$ \Big\langle E \cup \big\{ y \doteq t \big\}, \sigma \Big\rangle\;\leadsto\; \texttt{None} \quad \mbox{if $x \in \textrm{Var}(t)$ and $y \not=t$.}$$ </li> <li> If $y\in\mathcal{V}$ is a variable and $t$ is no variable, then we use the following rule: $$ \Big\langle E \cup \big\{ t \doteq y \big\}, \sigma \Big\rangle \quad\leadsto \quad \Big\langle E \cup \big\{ y \doteq t \big\}, \sigma \Big\rangle. $$ </li> <li> Trivial syntactical equations of variables can be dropped: $$ \Big\langle E \cup \big\{ x \doteq x \big\}, \sigma \Big\rangle \quad\leadsto \quad \Big\langle E, \sigma \Big\rangle. $$ </li> <li> If $f$ is an $n$-ary function symbol, then we have: $$ \Big\langle E \cup \big\{ f(s_1,\cdots,s_n) \doteq f(t_1,\cdots,t_n) \big\}, \sigma \Big\rangle \;\leadsto\; \Big\langle E \cup \big\{ s_1 \doteq t_1, \cdots, s_n \doteq t_n\}, \sigma \Big\rangle. $$ </li> <li> The system of syntactical equations $E \cup \big\{ f(s_1,\cdots,s_m) \doteq g(t_1,\cdots,t_n) \big\}$ has <b style="color:red;">no</b> solution if the function symbols $f$ and $g$ are different: $$ \Big\langle E \cup \big\{ f(s_1,\cdots,s_m) \doteq g(t_1,\cdots,t_n) \big\}, \sigma \Big\rangle \;\leadsto\; \texttt{None} \qquad \mbox{if $f \not= g$}. $$ </ol> Given two terms $s$ and $t$, the function $\texttt{unify}(s, t)$ computes the <em style="color:blue;">most general unifier</em> of $s$ and $t$. 
End of explanation def solve(E, σ): while E != set(): _, s, t = E.pop() if s == t: # remove trivial equations continue if is_var(s): _, x = s if occurs(x, t): return None else: # set x to t E = apply_set(E, { x: t }) σ = compose(σ, { x: t }) elif is_var(t): E.add(('≐', t, s)) else: f , g = s[0] , t[0] sArgs, tArgs = s[1:] , t[1:] m , n = len(sArgs), len(tArgs) if f != g or m != n: return None else: E |= { ('≐', sArgs[i], tArgs[i]) for i in range(m) } return σ s = parse_term('x * i(x) * (y * z)') t = parse_term('a * i(1) * b') f'unify({to_str(s)}, {to_str(t)}) = {to_str(unify(s, t))}' Explanation: Given a set of <em style="color:blue;">syntactical equations</em> $E$ and a substitution $\sigma$, the function $\texttt{solve}(E, \sigma)$ applies the rules of Martelli and Montanari to solve $E$. End of explanation def is_simpler(s, t, D): if is_var(s): _, x = s return s != t and occurs(x, t) if is_var(t): return False f, *sArgs = s g, *tArgs = t if D[f] < D[g]: return all(is_simpler(arg, t, D) for arg in sArgs) if f == g: assert len(sArgs) == len(tArgs) return any(s == arg or is_simpler(s, arg, D) for arg in tArgs) or \ all(is_simpler(arg, t, D) for arg in sArgs) and is_simpler_list(sArgs, tArgs, D) if D[f] > D[g]: return any(s == arg or is_simpler(s, arg, D) for arg in tArgs) assert False, f'Error in is_simpler({s}, {t}, {D}): incomplete ordering.' Explanation: Back to top The Lexicographic Path Ordering In order to turn an equation $s = t$ into a rewrite rules, we have to check whether $s$ is more complex that $t$, so that $s$ should be simplified to $t$, or whether $t$ is more complex than $s$ and we should rewrite $t$ into $s$. To this end, we implement the lexicographic path ordering, which is described in detail below. The function is_simpler(s, t, D) receives three arguments. - s and t are terms. - D is a dictionary mapping function symbols to <u>different</u> natural numbers. We define a total order on function symbols by defining $$ f < g \;\stackrel{_\textrm{def}}{\Longleftrightarrow}\; D[f] < D[g]. $$ The function is_simpler(s, t, D) checks whether s is simpler than t. In order to define this notion, we assume that a total order $<$ is given on the set of function symbols. Then we define $s \prec t$ (read: s is simpler than t) inductively via the following cases: 1. $v \prec t$ if $v$ is a variable occurring in $t$ and $v \not = t$. 2. $f(s_1,\cdots,s_m) \prec g(t_1,\cdots,t_n)$ if * $f < g$ and * $s_i \prec g(t_1,\cdots,t_n)$ for all $i=1,\cdots, m$. 3. $f(s_1,\cdots,s_m) \prec f(t_1,\cdots,t_m)$ if * there exists an $i \in {1,\cdots,n}$ such that $f(s_1,\cdots,s_m) \preceq t_i$ or * both of the following conditions are true: - $s_i \prec f(t_1,\cdots,t_m)$ for all $i=1,\cdots, m$ and - $[s_1, \cdots, s_m] \prec_{\textrm{lex}} [t_1, \cdots, t_m]$. Here, $\prec_{\textrm{lex}}$ denotes the lexicographic extension of the ordering $\prec$ to lists of terms. It is defined as follows: $$ [x] + R_1 \prec_{\textrm{lex}} [y] + R_2 \;\stackrel{\textrm{def}}{\Longleftrightarrow}\; x \prec y \,\vee\, \bigl(x = y \wedge R_1 \prec{\textrm{lex}} R_2\bigr) $$ 4. $f(s_1,\cdots,s_m) \prec g(t_1,\cdots,t_n)$ if * $f > g$ and * there exists an $i \in {1,\cdots,n}$ such that $f(s_1,\cdots,s_m) \preceq t_i$. This ordering is known as the lexicographic path ordering. 
End of explanation def is_simpler_list(S, T, D): if S == [] == T: return False if is_simpler(S[0], T[0], D): return True if S[0] == T[0]: return is_simpler_list(S[1:], T[1:], D) return False Ordering = { '1': 0, '*': 1, 'i': 2 } l = parse_term('(x * y) * z') r = parse_term('x * (y * z)') f'is_simpler({to_str(r)}, {to_str(l)}, {Ordering}) = {is_simpler(r, l, Ordering)}' Explanation: Given two lists S and T of terms and a dictionary D, the function is_simpler_list(S, T, D) checks whether S is lexicographically simpler than T if the elements of S and T are compared with the lexicographical path ordering $\prec$. It is assumed that S and T have the same length. End of explanation def order_equation(s, t, Ordering): if is_simpler(t, s, Ordering): return ('=', s, t) elif is_simpler(s, t, Ordering): return ('=', t, s) else: assert False, f'Error: could not order {to_str(s)} = {to_str(t)}' Explanation: Given a pair of term s and t and an Ordering of the function symbols occurring in s and t, the function order_equation orders the equation s = t with respect to the lexicographic path ordering, i.e. in the ordered equation, the right hand side is simpler than the left hand side. If s and t are incomparable, the function raises an exception. End of explanation def non_triv_positions(t): if is_var(t): return set() _, *args = t Result = { () } for i, arg in enumerate(args): Result |= { (i,) + a for a in non_triv_positions(arg) } return Result t = parse_term('x * i(x) * 1') f'non_triv_positions({to_str(t)}) = {non_triv_positions(t)}' Explanation: Back to top Critical Pairs The central notion of the Knuth-Bendix algorithm is the notion of a critical pair. Given two equations lhs1 = rhs1 and lhs2 = rhs2, a pair of terms (s, t) is a critical pair of these equations if we have the following: - u is a non-trivial position in lhs1, i.e. lhs1/u is not a variable, - The subterm lhs1/u is unifiable with lhs2, i.e. $$\mu = \texttt{mgu}(\texttt{lhs}_1 / a, \texttt{lhs}_2) \not= \texttt{None},$$ - $s = \texttt{lhs}_1[a \leftarrow \texttt{rhs}_2]\mu$ and $t = \texttt{rhs}_1\mu$. The function critical_pairs implemented in this section computes the critical pairs between two rewrite rules. Given a term t, the function positions computes the set $\mathcal{P}os(t)$ of all positions in t that do not point to variables. Such positions are called non-trivial positions. Given a term t, the set $\mathcal{P}os(t)$ of all positions in $t$ is defined by induction on t. 1. $\mathcal{P}os(v) := \bigl{()\bigr} \quad \mbox{if $v$ is a variable} $ 2. $\mathcal{P}os\bigl(f(t_0,\cdots,t_{n-1})\bigr) := \bigl{()\bigr} \cup \bigl{ (i,) + u \mid i \in{0,\cdots,n-1} \wedge u \in \mathcal{P}os(t_i) \bigr} $ Note that since we are programming in Python, positions are zero-based. Given a position $v$ in a term $t$, we define $t/v$ as the subterm of $t$ at position $v$ by induction on $t$: 1. $t/() := t$, 2. $f(t_0,\cdots,t_{n-1})/u := t_i/u\texttt{[1:]}$. End of explanation def subterm(t, u): if len(u) == 0: return t _, *args = t i, *ru = u return subterm(args[i], ru) t = parse_term('x * i(x) * 1') f'subterm({to_str(t)}, (0,1)) = {to_str(subterm(t, (0,1)))}' Explanation: Given a term t and a position u in t, the function subterm(t, u) extracts the subterm that is located at position u, i.e. it computes t/u. The position u is zero-based. 
End of explanation def replace_at(t, u, s): if len(u) == 0: return s i, *ur = u f, *args = t new_args = [] for j, arg in enumerate(args): if j == i: new_args.append(replace_at(arg, ur, s)) else: new_args.append(arg) return (f,) + tuple(new_args) t = parse_term('x * i(x) * 1') s = parse_term('a * b') f'replace_at({to_str(t)}, (0,1), {to_str(s)}) = {to_str(replace_at(t, (0,1), s))}' Explanation: Given a term t, a position u in t and a term s, the function replace_at(t, u, s) replaces the subterm at position u with t. The position u uses zero-based indexing. End of explanation def critical_pairs(eq1, eq2): Vars = find_variables(eq1) eq2 = rename_variables(eq2, Vars) _, lhs1, rhs1 = eq1 _, lhs2, rhs2 = eq2 Result = set() Positions = non_triv_positions(lhs1) for u in Positions: if eq1 == eq2 and u == (): continue s = subterm(lhs1, u) 𝜇 = unify(s, lhs2) if 𝜇 != None: lhs1_new = replace_at(lhs1, u, rhs2) lhs1_new = apply(lhs1_new, 𝜇) rhs1_new = apply(rhs1, 𝜇) Result.add( (lhs1_new, rhs1_new) ) return Result eq1 = parse_equation('i(x) * x = 1') eq2 = parse_equation('(x * y) * z = x * (y * z)') for s, t in critical_pairs(eq2, eq1): print(f'{to_str(s)} = {to_str(t)}') Explanation: Given two equations eq1 and eq2, the function critical_pairs(eq1, eq2) computes the set of all critical pairs between these equations. A pair of terms (s, t) is a critical pair of eq1 and eq2 if we have - eq1 has the form lhs1 = rhs1, - eq2 has the form lhs2 = rhs2, - a is a non-trivial position in lhs1, - $\mu = \texttt{mgu}(\texttt{lhs}_1/a, \texttt{lhs}_2) \not= \texttt{None}$, - $s = \texttt{lhs}_1[a \leftarrow \texttt{rhs}_2]\mu$ and $t = \texttt{rhs}_1\mu$. End of explanation def print_equations(Equations): cnt = 1 for _, l, r in Equations: print(f'{cnt}. {to_str(l)} = {to_str(r)}') cnt += 1 Explanation: Back to top The Completion Algorithm End of explanation def complexity(eq): return len(to_str(eq)) eq = parse_equation('x * i(x) = 1') complexity(eq) Explanation: Given an equation eq of the form eq = ('=', lhs, rhs), the function complexity(eq) computes a measure of complexity for the given equation. This measure of complexity is the length of the string that represents the equation. This measure of complexity is later used to choose between equations: Less complex equations are more interesting and should be considered first when computing critical pairs. End of explanation def all_critical_pairs(RewriteRules, eq): Result = set() for eq1 in RewriteRules: Result |= { ('=', l, r) for l, r in critical_pairs(eq1, eq) } Result |= { ('=', l, r) for l, r in critical_pairs(eq, eq1) } return Result Explanation: Given a set of equations RewriteRules and a single rewrite rule eq, the function all_critical_pairs(RewriteRules, eq) computes the set of all non-trivial critical pairs that can be build by building critical pairs with an equation from RewriteRules and the equation eq. It is assumed that eq is already an element of RewriteRules. End of explanation import heapq as hq Explanation: The module heapq provides heap-based priority queues, which are implemented as lists. 
End of explanation def knuth_bendix_algorithm(file, Ordering): Equations = set() Axioms = set(parse_file(file)) for _, s, t in Axioms: ordered_eq = order_equation(s, t, Ordering) Equations.add(ordered_eq) print(f'given: {to_str(ordered_eq[1])} = {to_str(ordered_eq[2])}') EqtnQue = [] for eq in Equations: hq.heappush(EqtnQue, (complexity(eq), eq) ) RewriteRules = set() while EqtnQue != []: _, lr = hq.heappop(EqtnQue) l, r = simplify_equation(lr, RewriteRules) if l != r: _, l, r = order_equation(l, r, Ordering) print(f'added: {to_str(l)} → {to_str(r)}') NewEqs = all_critical_pairs(RewriteRules | { ('=', l, r) }, lr) for eq in NewEqs: s, t = simplify_equation(eq, RewriteRules) if s != t: new_rule = order_equation(s, t, Ordering) hq.heappush(EqtnQue, (complexity(new_rule), new_rule) ) RewriteRules = simplify_rules(RewriteRules, ('=', l, r)) RewriteRules.add( ('=', l, r) ) return RewriteRules %%time RewriteRules = knuth_bendix_algorithm('Examples/group-theory-1.eqn', Ordering) print() print_equations(RewriteRules) Explanation: Given a file name that contains a set of equations and a dictionary encoding an ordering of the function symbols, the function knuth_bendix_algorithm performs the Knuth-Bendix algorithm: 1. The given equations are ordered. 2. The ordered equations are pushed into the priority queue EqtnQue according to their complexity. 3. The set RewriteRules is initialized as the empty set. 4. As long as the priority queue is not empty, the least complex equation lr is removed from the priority queue and simplified using the known RewriteRules. 5. If the simplified version of lr is non-trivial, all critical pairs between it and the RewriteRules are computed. These critical pairs are pushed onto the priority queue. 6. When no new critical pairs can be found, the set of RewriteRules is returned. This set is then guaranteed to be a confluent set of rewrite rules. End of explanation
5,247
Given the following text description, write Python code to implement the functionality described below step by step Description: Collect and Clean Twitter Data The twitter data was obtained using the Trump Twitter Archive, the data is from 01/20/2017 - 03/02/2018 2 Step1: Using Pandas I will read the twitter json file, convert it to a dataframe, set the index to 'created at' as datetime objects, then write it to a csv Step2: The next step is to add columns with tokenized text and identify twitter specific puncutiations like hashtags and @ mentions Step3: Scrape Data from the Federal Register This has already been done, and all of the pdfs published by the Executive Office of the U.S.A are in the data folder from 2017/01/20 - 2018/03/02 Don't execute this code unless you need more up-to-date information Step4: Create dataframe with the date the pdf was published and the text of each pdf Step5: Create a dictionary using DefaultDict where the date of publication is the key, and the text of the pdf is the value. Step6: Create a list of tuples, where the date is the first entry and the text of a pdf is the second entry, skipping over any values of None Step7: Pickle the dataframe, so that you only need to process the text once
Python Code: # load json twitter data twitter_json = r'data/twitter_01_20_17_to_3-2-18.json' # Convert to pandas dataframe tweet_data = pd.read_json(twitter_json) Explanation: Collect and Clean Twitter Data The twitter data was obtained using the Trump Twitter Archive, the data is from 01/20/2017 - 03/02/2018 2:38 PM MST. I used the Federal Register's website to obtain all of the actions published by the Executive Office for the same time frame. End of explanation # read the json data into a pandas dataframe tweet_data = pd.read_json(twitter_json) # set column 'created_at' to the index tweet_data.set_index('created_at', drop=True, inplace= True) # convert timestamp index to a datetime index pd.to_datetime(tweet_data.index) Explanation: Using Pandas I will read the twitter json file, convert it to a dataframe, set the index to 'created at' as datetime objects, then write it to a csv End of explanation # function to identify hash tags def hash_tag(text): return re.findall(r'(#[^\s]+)', text) # function to identify @mentions def at_tag(text): return re.findall(r'(@[A-Za-z_]+)[^s]', text) # tokenize all the tweet's text tweet_data['text_tokenized'] = tweet_data['text'].apply(lambda x: word_tokenize(x.lower())) # apply hash tag function to text column tweet_data['hash_tags'] = tweet_data['text'].apply(lambda x: hash_tag(x)) # apply at_tag function to text column tweet_data['@_tags'] = tweet_data['text'].apply(lambda x: at_tag(x)) # pickle data tweet_pickle_path = r'data/twitter_01_20_17_to_3-2-18.pickle' tweet_data.to_pickle(tweet_pickle_path) Explanation: The next step is to add columns with tokenized text and identify twitter specific puncutiations like hashtags and @ mentions End of explanation # Define the 2017 and 2018 url that contains all of the Executive Office of the President's published documents executive_office_url_2017 = r'https://www.federalregister.gov/index/2017/executive-office-of-the-president' executive_office_url_2018 = r'https://www.federalregister.gov/index/2018/executive-office-of-the-president' # scrape all urls for pdf documents published in 2017 and 2018 by the U.S.A. 
Executive Office pdf_urls= [] for url in [executive_office_url_2017,executive_office_url_2018]: response = requests.get(url) pattern = re.compile(r'https:.*\.pdf') pdfs = re.findall(pattern, response.text) pdf_urls.append(pdfs) # writes all of the pdfs to the data folder start = 'data/' end = '.pdf' num = 0 for i in range(0,(len(pdf_urls))): for url in pdf_urls[i]: ver = str(num) pdf_path = start + ver + end r = requests.get(url) file = open(pdf_path, 'wb') file.write(r.content) file.close() num = num + 1 Explanation: Scrape Data from the Federal Register This has already been done, and all of the pdfs published by the Executive Office of the U.S.A are in the data folder from 2017/01/20 - 2018/03/02 Don't execute this code unless you need more up-to-date information End of explanation # function to convert pdf to text from stack overflow (https://stackoverflow.com/questions/26494211/extracting-text-from-a-pdf-file-using-pdfminer-in-python/44476759#44476759) def convert_pdf_to_txt(path): rsrcmgr = PDFResourceManager() retstr = io.StringIO() codec = 'utf-8' laparams = LAParams() device = TextConverter(rsrcmgr, retstr, codec=codec, laparams=laparams) fp = open(path, 'rb') interpreter = PDFPageInterpreter(rsrcmgr, device) password = "" maxpages = 0 caching = True pagenos = set() for page in PDFPage.get_pages(fp, pagenos, maxpages=maxpages, password=password, caching=caching, check_extractable=True): interpreter.process_page(page) text = retstr.getvalue() fp.close() device.close() retstr.close() return text # finds the first time the name of a day appears in the txt, and returns that name def find_day(word_generator): day_list = ['Monday,', 'Tuesday,', 'Wednesday,', 'Thursday,', 'Friday,', 'Saturday,', 'Sunday,'] day_name_dict = {'Mon':'Monday,', 'Tue':'Tuesday,','Wed':'Wednesday,','Thu':'Thursday,','Fri':'Friday,','Sat':'Saturday,','Sun':'Sunday,'} day_name = [] for val in word_generator: if val in day_list: num_position = txt.index(val) day_name.append(txt[num_position] + txt[num_position + 1] + txt[num_position +2]) break return day_name_dict[day_name[0]] # takes text and returns the first date in the document def extract_date(txt): word_generator = (word for word in txt.split()) day_name = find_day(word_generator) txt_start = int(txt.index(day_name)) txt_end = txt_start + 40 date_txt = txt[txt_start:txt_end].replace('\n','') cleaned_txt = re.findall('.* \d{4}', date_txt) date_list = cleaned_txt[0].split() clean_date_list = map(lambda x:x.strip(","), date_list) clean_date_string = ", ".join(clean_date_list) date_obj = datetime.strptime(clean_date_string, '%A, %B, %d, %Y') return date_obj Explanation: Create dataframe with the date the pdf was published and the text of each pdf End of explanation start_path = r'data/' end_path = '.pdf' data_dict = defaultdict(list) for i in range(0,270): file_path = start_path + str(i) + end_path txt = convert_pdf_to_txt(file_path) date_obj = extract_date(txt) data_dict[date_obj].append(txt) Explanation: Create a dictionary using DefaultDict where the date of publication is the key, and the text of the pdf is the value. 
End of explanation tuple_lst = [] for k, v in data_dict.items(): if v != None: for text in v: tuple_lst.append((k, text)) # create dataframe from list of tuples fed_reg_dataframe = pd.DataFrame.from_records(tuple_lst, columns=['date','str_text'], index = 'date') # tokenize all the pdf text fed_reg_dataframe['token_text'] = fed_reg_dataframe['str_text'].apply(lambda x: word_tokenize(x.lower())) # final dataframe fed_reg_dataframe[fed_reg_dataframe.index > '2017-01-20'] Explanation: Create a list of tuples, where the date is the first entry and the text of a pdf is the second entry, skipping over any values of None End of explanation # pickle final data fed_reg_data = r'data/fed_reg_data.pickle' final_df.to_pickle(fed_reg_data) Explanation: Pickle the dataframe, so that you only need to process the text once End of explanation
5,248
Given the following text description, write Python code to implement the functionality described below step by step Description: Dotstar LED Dotstar LEDs are individually addressable LED strips for use with Arduinos, Raspberry Pis, and the Minnowboard. It connects to the device through the SPI pins and is driven here by Python. Start by importing the class file for the LEDs Step1: Create Dotstar object You can pass several arguments to the Dotstar class constructor to change the behavior of the LED class. ds = dotstar.Dotstar(led_count=72, bus=0, init_data=0, init_brightness=0) Parameters Step2: Class Methods Now we can make use of the functions in the class to set the colors and intesnity of each LED. The class works by populating a deque with the LED values you want, and then pushing all the data at once to the LED strip. The following methods provide the most basic functionality
Python Code: from pyDrivers import dotstar Explanation: Dotstar LED Dotstar LEDs are individually addressable LED strips for use with Arduinos, Raspberry Pis, and the Minnowboard. It connects to the device through the SPI pins and is driven here by Python. Start by importing the class file for the LEDs: End of explanation ds = dotstar.Dotstar(led_count=72*3,init_brightness=0) Explanation: Create Dotstar object You can pass several arguments to the Dotstar class constructor to change the behavior of the LED class. ds = dotstar.Dotstar(led_count=72, bus=0, init_data=0, init_brightness=0) Parameters: led_count = some_number_of_leds Change the number of LEDs in your strip. Note that this counts the raw number of individual LEDs, not how many strips/devices you have. Make sure this is set so all the LEDs are used. bus = 0 Change the SPI bus. If you do not specify one, it will be initialized on bus 0, which is the default for the Minnowboard. init_data = some_brightness_value + some_hue Change the initial value of the LED strip. By default all the LEDS are initialized to the first color pushed. If you plan on having all the LEDs start off dark, don't set anything here. init_brightness = some_brightness Change the initial brightness of the LEDs. Valid brightness settings range from 0 to 10, representing the intensity of the LEDs from 0% to 100%. If you want the LEDs to start off dark, set this to 0 at the start. Here is a typical initialization, starting all 72 LEDS (or 2 Adafruit Dotstar LED strips connected together) turned off: End of explanation while True: for current_led in range (4, ds.led_count-4): ds.set(current_led-4, 0, 0, 0, 0) ds.set(current_led-2, 10, 100, 0, 0) ds.set(current_led-1, 50, 200, 0, 0) ds.set(current_led, 50, 250, 0, 0) ds.set(current_led+1, 50, 200, 0, 0) ds.set(current_led+2, 50, 150, 0, 0) ds.set(current_led+4, 0, 0, 0, 0) ds.draw() for current_led in range(ds.led_count-5, 4, -1): ds.set(current_led-3,10,100,0,0) ds.set(current_led-2,10,150,0,0) ds.set(current_led-1,50,200,0,0) ds.set(current_led,50,250,0,0) ds.set(current_led+1,50,200,0,0) ds.set(current_led+2,50,150,0,0) ds.set(current_led+4,0,0,0,0) ds.draw() Explanation: Class Methods Now we can make use of the functions in the class to set the colors and intesnity of each LED. The class works by populating a deque with the LED values you want, and then pushing all the data at once to the LED strip. The following methods provide the most basic functionality: Dotstar.set(which_LED, brightness_level, red_hue, blue_hue, green_hue) This function will add the LED to activate to the queue. The brightness and hue options are on a scale of 0 to 256, and the LED selection is from 0 to Dotstar.draw() This funciton draws the created deque to the LED strip. This function will clear the current deque, allowing you to populate another one. Example Run this section to create a sequence of 5 red LEDS that move throughout the length of the LEDs. It looks like the LED array on KITT from Knight Rider. End of explanation
5,249
Given the following text description, write Python code to implement the functionality described below step by step Description: Working with data 2017. Class 1 Contact Javier Garcia-Bernardo [email protected] 0. Structure About Python Data types, structures and code Read csv files to dataframes Basic operations with dataframes My first plots Debugging python Summary Step1: 2. PYTHON Step2: OPERATIONS IN LISTS Step3: Add element Step4: Retrieve element CAREFUL Step5: Get slices - Get a part of list. This is important Step6: Remove element Step7: Search - element in list - Note that this also works for strings (in fact, a string is very similar to a list of characters) Step8: Length Step9: Sort Step10: Sum Step11: Notice that we wrote this_is_a_list.pop(), but sum(this_is_a_list) and sorted(this_is_a_list) This is because .pop() only works in lists (.pop() is a method of the data structure List), while sum() and sorted() work with many different data structures. Some standard functions Step12: CREATING RANGES range(start,stop,step) generates a list of numbers between start and stop (not including the number stop), jumping in steps of size step. See examples below. Useful for example if we want to do one thing many times. We will see their importance on Thursday. They are not really lists but very similar, in the examples I'll convert them to lists to see what's inside. Step13: WORKING WITH STRINGS AND LISTS - Divide a string into words - Join many words in a list into a string - Replace an element - Find an element Step14: ipython help Step15: 2.B.Tuples The same than a list but inmutable (fixed size). You can't add or remove elements. I (almost) never use them, but you may encounter them. Step16: 2.B.Sets One element of each kind (no repeated elements!) -> If you convert a list to a set you find the unique elements. Allows for intersecting Example Step17: 2.B.Dictionary Like in a index, finds a page in the book very very fast. It combiens keys (word in the index) with values (page number associated to the word) Step18: IMPORTANT Step19: When you do unsorted_array[condition], the computer check if the first element of the array condition is True If it is True, then it keeps the first element of unsorted_array. The computer then checks the second element of the condition array. If it is True, then it keeps the second element of unsorted_array And keeps doing this until the end Step20: 2.B.Pandas dataframe I They correspond to excel spreadsheets Part of the pandas package It is a big, complicated library that we will use a lot. They are based on numpy arrays, so anything you can do with numpy arrays you can do with the rows or columns of a dataframe Read/write csv files. It can also read stata or excel files (and sometimes spss), but who likes those programs. Tip Step21:
Python Code: ##Some code to run at the beginning of the file, to be able to show images in the notebook ##Don't worry about this cell #Print the plots in this screen %matplotlib inline #Be able to plot images saved in the hard drive from IPython.display import Image #Make the notebook wider from IPython.core.display import display, HTML display(HTML("<style>.container { width:90% !important; }</style>")) Explanation: Working with data 2017. Class 1 Contact Javier Garcia-Bernardo [email protected] 0. Structure About Python Data types, structures and code Read csv files to dataframes Basic operations with dataframes My first plots Debugging python Summary End of explanation ##this is a list print([1,2,3]) print(type([1,2,3])) # A list can combine several data types this_is_list1 = [3.5,"I'm another string",4] print(this_is_list1) # It can even combine several data structures, for instance a list inside a list this_is_list2 = [3.5,"I'm another string",4,this_is_list1] print(this_is_list2) Explanation: 2. PYTHON: Data types, structures and code Python uses variables and code. 2.1 Variables Variables tell the computer to save something (a number, a string, a spreadsheet) with a name. For instance, if you write variable_name = 3, the computer knows that variable_name is 3. Variables can represents: - 2.1.1 Data types: Numbers, strings and others - 2.1.2 Data structures: - Lists, tables... (which organize data types) 2.2 Code Instructions to modify variables Can be organized in functions 2.1.2 Most common data structures 2.1.2.1 list = notebook (you can add things, take out things, everything is in order). e.g. a list of the numbers 1,2 and 3: [1,2,3] 2.1.2.2 tuple = book (you cannot change it after it's printed). (1,2,3) 2.1.2.3 set = keywords in a paper (you can check if something exists easily). {1,2,3} 2.1.2.4 dictionary = index (you can find content easily). {"a":1, "b":2, "c":3} 2.1.2.5 numpy array = fast list for math. np.array([1,2,3]) 2.1.2.6 pandas dataframe = spreedsheet. np.DataFrame([1,2,3],columns=["a","b","c"]) They have methods = ways to edit the data structure. For example add, delete, find, sort... (= functions in excel) 2.1.2.1 Lists Combines a series of variables in order Fast to add and delete variables, slow to find variables (needs to go over all the elements) End of explanation ## A list this_is_a_list = [1,3,2,"b"] print("Original: ", this_is_a_list) Explanation: OPERATIONS IN LISTS End of explanation ## Add elements this_is_a_list.append("c") print("Added c: ", this_is_a_list) Explanation: Add element End of explanation ## Get element. The first element has index 0, which means that this_is_a_list[0] gets the first element print("Fourth element: ", this_is_a_list[3]) Explanation: Retrieve element CAREFUL: The first element has index 0, which means that this_is_a_list[0] gets the first element and this_is_a_list[1] gets the second element End of explanation this_is_a_list this_is_a_list[1:3] #All list this_is_a_list = [0,1,2,3,4] print(this_is_a_list) #"Second to end element (included)" print("Second to end element: ", this_is_a_list[1:]) #Second to the fourth (included) print("Second to the fourth (included): ",this_is_a_list[1:4]) #First to the last element (not included) print("First to the last element (not included): ",this_is_a_list[:-1]) Explanation: Get slices - Get a part of list. 
This is important End of explanation print("Original: ", this_is_a_list) ## Remove 4th element and save it as removed_element removed_element = this_is_a_list.pop(3) print(removed_element) print("The list is now: ", this_is_a_list) this_is_a_list = [1,2,3,4,5] Explanation: Remove element End of explanation #Search print(3 in this_is_a_list) #Find index print(this_is_a_list.index(4)) Explanation: Search - element in list - Note that this also works for strings (in fact, a string is very similar to a list of characters): "eggs" in "eegs and bacon" End of explanation ## Count the number of elements in a list this_is_a_list = [1, 3, 2] len_this_is_a_list = len(this_is_a_list) #you tell the computer to sum it and save it as `sum_this_is_a_list` print("Length: ", len_this_is_a_list) Explanation: Length End of explanation ## Sort a list this_is_a_list = [1, 3, 2] this_is_a_list = sorted(this_is_a_list) #you tell the computer to sort it, and to save it with the same name print("Sorted: ", this_is_a_list) Explanation: Sort End of explanation ## Sum a list this_is_a_list = [1, 3, 2] sum_this_is_a_list = sum(this_is_a_list) #you tell the computer to sum it and save it as `sum_this_is_a_list` print("Sum: ", sum_this_is_a_list) sum(["1","2"]) Explanation: Sum End of explanation print(float("1")) Explanation: Notice that we wrote this_is_a_list.pop(), but sum(this_is_a_list) and sorted(this_is_a_list) This is because .pop() only works in lists (.pop() is a method of the data structure List), while sum() and sorted() work with many different data structures. Some standard functions: - sum() - len() - sorted() - min() - max() - list() #convert to list - set() #convert to set - dict() #convert to dictionary - tuple() #convert to tuple - float() #convert to float End of explanation ## Create a list [0,1,2,3,4] print(list(range(0,5,1))) print(range(5)) #for all practical issues you don't need to convert them ## Create the list [1,2,3,4] print(list(range(1,5,1))) ## Create the list [1,3,4,5,7,9] print(list(range(1,10,2))) import numpy as np np.arange(0,5,0.5) Explanation: CREATING RANGES range(start,stop,step) generates a list of numbers between start and stop (not including the number stop), jumping in steps of size step. See examples below. Useful for example if we want to do one thing many times. We will see their importance on Thursday. They are not really lists but very similar, in the examples I'll convert them to lists to see what's inside. End of explanation #we create a string and call it "eggs and bacon" our_string = "eggs and bacon" #now we divide it into words, creating a list and saving it with the name our_list_of_wrods our_list_of_words = our_string.split() print(our_list_of_words) #we can do the opossite and join the words using the function "join". " ".join(our_list_of_words) #we can join the words using any character we want in the quoted par of (" ".join) ":::".join(our_list_of_words) #we can also divide a string using other characters instead of space. #for example let's split in the "a"s our_string = "eggs and bacon" print(our_string.split("a")) #we can change parts of the word our_string = "eggs and bacon" print(our_string.replace("bacon","tofu")) #we can find where a word start. 
For example let's find how many characters there are before "and" our_string = "eggs and bacon and" print(our_string.find("and")) #and we can use this to slice the string like we did with the list our_string1 = "10 events" our_string2 = "120 events" our_string3 = "2 events" #we find the index of and index_and1 = our_string1.find("event") index_and2 = our_string2.find("event") index_and3 = our_string3.find("event") print(index_and1) print(index_and2) print(index_and3) print(our_string1) #and keep all the string until that index print(our_string1[:our_string1.find("event")]) print(our_string2[:index_and2]) print(our_string3[:index_and3]) Explanation: WORKING WITH STRINGS AND LISTS - Divide a string into words - Join many words in a list into a string - Replace an element - Find an element End of explanation this_is_a_list? our_string1.find? Explanation: ipython help End of explanation this_is_a_tuple = (1,3,2,"b") print(this_is_a_tuple) this_is_a_list = list(this_is_a_tuple) print(this_is_a_list) #If we try to pop and elment, like with lists, we get an error. this_is_a_tuple.pop(0) Explanation: 2.B.Tuples The same than a list but inmutable (fixed size). You can't add or remove elements. I (almost) never use them, but you may encounter them. End of explanation Image(filename='./images/set-operations-illustrated-with-venn-diagrams.png') {1,2,3} - {2,5,6} #Let's create a list with repeated elements this_is_a_list = [3,1,1,1,2,3] print(this_is_a_list) #Now we convert it to a set and see how the repeated elements are gone this_is_a_set1 = set(this_is_a_list) print(this_is_a_set1) #You can also create sets like this this_is_a_set2 = {1,2,4} print(this_is_a_set2) ## Union print(this_is_a_set1 | this_is_a_set2) ## Intersection print(this_is_a_set1 & this_is_a_set2) ## Diference set_1 - set2 print(this_is_a_set1 - this_is_a_set2) ## Diference set_2 - set1 print(this_is_a_set2 - this_is_a_set1) ## Very useful for words. Imagine we have two articles, one saying "eggs bacon python" and another saying "bacon spam" #We can find which words they have in common print({"eggs","bacon","python"} & {"bacon","spam"}) Explanation: 2.B.Sets One element of each kind (no repeated elements!) -> If you convert a list to a set you find the unique elements. Allows for intersecting Example: Which words are different between two sets End of explanation #First we need to import it import numpy as np #Sum of a list a_list = [0,1,2,3,4,5,6] #Using the standard function print(sum(a_list)) #Using numpy print(np.sum(a_list)) ##How to find the mean() of a list of numbers? mean() does not exist print(mean(a_list)) a_list = [0,1,2,3,4,5,6] a_np_array = np.array(a_list) a_np_array**10 ##but numpy rescues you import numpy as np a_list = [0,1,2,3,4,5,6] #first convert the list to an array. #This is not required when you use np.mean(a_list), but it is required in some other ocassions. 
a_np_array = np.array(a_list) print(type(a_list)) print(type(a_np_array)) print(np.mean(a_np_array)) ##you can take powers print(a_np_array**2) #this would not work with a list ##or square roots print(np.sqrt(a_np_array)) #this would work with a list #or some basic statistics import numpy as np import scipy.stats #another library for more complicated statistics #we create a list with numbers 0,1,2...998,999 numElements = 1000 this_is_an_list = list(range(numElements)) this_is_an_array = np.array(this_is_an_list) print(this_is_an_array[:10]) #print only the 10 first elements to make sure it's okay #and print some stats print(np.mean(this_is_an_array)) print(np.std(this_is_an_array)) print(np.median(this_is_an_array)) print(scipy.stats.mode(this_is_an_array)) print(scipy.stats.skew(this_is_an_array)) Explanation: 2.B.Dictionary Like in a index, finds a page in the book very very fast. It combiens keys (word in the index) with values (page number associated to the word): {key1: value2, key2: value2} The keys can be numbers, strings or tuples, but NOT lists (if you try Python will give the error unhashable key) We won't use dicts today so we'll cover them on Thursday 2.B.Numpy array Part of the numpy package Extremely fast (and easy) to do math You can slice them like lists It gives many cool functions, like mean(), or ** over a list End of explanation #Let's start with this array this_is_an_array = np.array([1,2,3,4,5,6,7,8,9,10]) print(this_is_an_array) #If we want the elements greater or equal (>=) to 5, we could do: print(this_is_an_array[4:]) #However this case was very easy, but what getting the elements >= 5 in here: #[ 5, 9, 4, 3, 2, 8, 6, 7, 10, 1] unsorted_array = np.array([ 5, 9, 4, 3, 2, 8, 6, 7, 10, 1]) print(unsorted_array) unsorted_array >= 5 unsorted_array[np.array([ True, True, False, False, False, True, True, True, True, False])] #We can do the following: unsorted_array[unsorted_array >= 5] #which means keep the elements of unsorted_array that are larger or equal to 5 #unsorted_array[condition] print(unsorted_array[unsorted_array >= 5]) #This is a special kind of slicing where you filter elements with a condition. #Lists do not allow you to do this #How does it work? By creating another array, of the same lenght, with False and Trues print(unsorted_array) print(this_is_an_array >= 5) #the same than comparing 3 >= 5, but numpy compares every number inside the array. Explanation: IMPORTANT: NUMPY ARRAYS ALLOW YOU TO FILTER BY A CONDITION array_name[array_condition] For example, maybe you have a list [1,2,3,4,5,6,7,8,9,10] and you want the elements greater or equal to 5. Numpy can help End of explanation #We can use variables condition = this_is_an_array >= 5 #the computer does the = at the end always print(unsorted_array[condition]) #What if we want to get the numbers between 5 and 9 (6, 7,9)? 
unsorted_array = np.array([ 5, 9, 4, 3, 2, 8, 6, 7, 10, 1]) #We do it with two steps #Step 1: Greater than 5 condition_gt5 = unsorted_array > 5 unsorted_array_gt5 = unsorted_array[condition_gt5] print("Greater than 5", unsorted_array_gt5) #Step 2: Lower than 9 condition_lw9 = unsorted_array_gt5 < 9 #we are using the new array unsorted_array_gt5_lw9 = unsorted_array_gt5[condition_lw9] print("Greater than 5 and lower than 9", unsorted_array_gt5_lw9) unsorted_array = np.array([ 5, 9, 4, 3, 2, 8, 6, 7, 10, 1]) condition_gt5 = unsorted_array > 5 condition_lw9 = unsorted_array < 9 print(unsorted_array[condition_gt5 & condition_lw9]) Explanation: When you do unsorted_array[condition], the computer check if the first element of the array condition is True If it is True, then it keeps the first element of unsorted_array. The computer then checks the second element of the condition array. If it is True, then it keeps the second element of unsorted_array And keeps doing this until the end End of explanation ##first we import it import pandas as pd Explanation: 2.B.Pandas dataframe I They correspond to excel spreadsheets Part of the pandas package It is a big, complicated library that we will use a lot. They are based on numpy arrays, so anything you can do with numpy arrays you can do with the rows or columns of a dataframe Read/write csv files. It can also read stata or excel files (and sometimes spss), but who likes those programs. Tip: Always save spreadsheets as .csv data We'll explore it through examples End of explanation Image(url="http://www.relatably.com/m/img/boring-memes/when-my-friend-tell-me-a-boring-story_o_1588863.jpg") Explanation: End of explanation
5,250
Given the following text description, write Python code to implement the functionality described below step by step Description: Step1: <font size = "5"> Image Registration </font> <hr style="height Step2: Import the usual libraries You can load that library with the code cell above Step3: Load an image stack Step4: Plot Image Stack Either we load the selected file in hte widget above above or a file dialog window appears. This is the point the notebook can be repeated with a new file. Either select a file above again (without running the code cell above) or open a file dialog here Note that the open file dialog might not apear in the foreground! Step5: Complete Registration Takes a while, depending on your computer between 1 and 10 minutes. Step6: Check Drift Step8: Appendix Demon Registration Here we use the Diffeomorphic Demon Non-Rigid Registration as provided by simpleITK. Please Cite
Python Code: import sys from pkg_resources import get_distribution, DistributionNotFound def test_package(package_name): Test if package exists and returns version or -1 try: version = (get_distribution(package_name).version) except (DistributionNotFound, ImportError) as err: version = '-1' return version # Colab setup ------------------ if 'google.colab' in sys.modules: !pip install git+https://github.com/pycroscopy/pyTEMlib/ -q # pyTEMlib setup ------------------ else: if test_package('sidpy') < '0.0.4': print('installing sidpy') !{sys.executable} -m pip install --upgrade sidpy -q if test_package('pyNSID') < '0.0.2': print('installing pyNSID') !{sys.executable} -m pip install --upgrade pyNSID -q if test_package('pycroscopy') < '0': print('installing pyTEMlib') !{sys.executable} -m pip install --upgrade pyTEMlib -q # ------------------------------ print('done') Explanation: <font size = "5"> Image Registration </font> <hr style="height:1px;border-top:4px solid #FF8200" /> by Gerd Duscher and Matthew. F. Chisholm Materials Science & Engineering<br> Joint Institute of Advanced Materials<br> The University of Tennessee, Knoxville Registration of a Stack of Images We us this notebook only for a stack of images. Prerequesites Install pycroscopy End of explanation # import matplotlib and numpy # use "inline" instead of "notebook" for non-interactive # use widget for jupyterlab needs ipympl to be installed import sys if 'google.colab' in sys.modules: %pylab --no-import-all notebook else: %pylab --no-import-all widget from sidpy.io.interface_utils import open_file_dialog from SciFiReaders import DM3Reader import SciFiReaders %load_ext autoreload %autoreload 2 sys.path.insert(0, '../') import pycroscopy as px __notebook__ = 'Image_Registration' __notebook_version__ = '2021_10_04' Explanation: Import the usual libraries You can load that library with the code cell above: End of explanation if 'google.colab' in sys.modules: from google.colab import drive drive.mount("/content/drive") drive_directory = 'drive/MyDrive/' else: drive_directory = '.' file_widget = open_file_dialog(drive_directory) file_widget Explanation: Load an image stack : Please, load an image stack. <br> A stack of images is used to reduce noise, but for an added image the images have to be aligned to compensate for drift and other microscope instabilities. You select here (with the open_file_dialog parameter), whether an open file dialog apears in the code cell below the next one or whether you want to get a list of files (Nion has a weird way of dealing with file names). End of explanation try: main_dataset.h5_dataset.file.close() except: pass dm3_reader = DM3Reader(file_widget.selected) main_dataset = dm3_reader.read() if main_dataset.data_type.name != 'IMAGE_STACK': print(f"Please load an image stack for this notebook, this is an {main_dataset.data_type}") print(main_dataset) main_dataset.dim_0.dimension_type = 'spatial' main_dataset.dim_1.dimension_type = 'spatial' main_dataset.z.dimension_type = 'temporal' main_dataset.plot() # note this needs a view reference for interaction main_dataset._axes frame_dim = [] spatial_dim = [] for i, axis in main_dataset._axes.items(): if axis.dimension_type.name == 'SPATIAL': spatial_dim.append(i) else: frame_dim.append(i) if len(spatial_dim) != 2: print('need two spatial dimensions') if len(frame_dim) != 1: print('need one frame dimensions') Explanation: Plot Image Stack Either we load the selected file in hte widget above above or a file dialog window appears. 
This is the point the notebook can be repeated with a new file. Either select a file above again (without running the code cell above) or open a file dialog here. Note that the open file dialog might not appear in the foreground! End of explanation ## Do all of registration notebook_tags ={'notebook': __notebook__, 'notebook_version': __notebook_version__} non_rigid_registered, rigid_registered_dataset = px.image.complete_registration(main_dataset) non_rigid_registered.plot() non_rigid_registered Explanation: Complete Registration Takes a while, depending on your computer between 1 and 10 minutes. End of explanation scale_x = (rigid_registered_dataset.x[1]-rigid_registered_dataset.x[0])*1. drift = rigid_registered_dataset.metadata['drift'] x = np.linspace(0,drift.shape[0]-1,drift.shape[0]) polynom_degree = 2 # 1 is linear fit, 2 is parabolic fit, ... line_fit_x = np.polyfit(x, drift[:,0], polynom_degree) poly_x = np.poly1d(line_fit_x) line_fit_y = np.polyfit(x, drift[:,1], polynom_degree) poly_y = np.poly1d(line_fit_y) plt.figure() # plot drift and fit of drift plt.axhline(color = 'gray') plt.plot(x, drift[:,0], label = 'drift x') plt.plot(x, drift[:,1], label = 'drift y') plt.plot(x, poly_x(x), label = 'fit_drift_x') plt.plot(x, poly_y(x), label = 'fit_drift_y') plt.legend(); # set second axis in nanometer ax_pixels = plt.gca() ax_pixels.step(1, 1) ax_pm = ax_pixels.twinx() x_1, x_2 = ax_pixels.get_ylim() ax_pm.set_ylim(x_1*scale_x, x_2*scale_x) # add labels ax_pixels.set_ylabel('drift [pixels]') ax_pm.set_ylabel('drift [nm]') ax_pixels.set_xlabel('image number'); plt.tight_layout() Explanation: Check Drift End of explanation import SimpleITK as sitk def DemonReg(cube, verbose = False): Diffeomorphic Demon Non-Rigid Registration Usage: DemReg = DemonReg(cube, verbose = False) Input: cube: stack of images after rigid registration and cropping Output: DemReg: stack of images with non-rigid registration Depends on: SimpleITK and numpy Please Cite: http://www.simpleitk.org/SimpleITK/project/parti.html and T. Vercauteren, X. Pennec, A. Perchant and N. 
Ayache Diffeomorphic Demons Using ITK\'s Finite Difference Solver Hierarchy The Insight Journal, http://hdl.handle.net/1926/510 2007 DemReg = np.zeros_like(cube) nimages = cube.shape[0] print(nimages) # create fixed image by summing over rigid registration fixed_np = np.average(current_dataset, axis=0) fixed = sitk.GetImageFromArray(fixed_np) fixed = sitk.DiscreteGaussian(fixed, 2.0) #demons = sitk.SymmetricForcesDemonsRegistrationFilter() demons = sitk.DiffeomorphicDemonsRegistrationFilter() demons.SetNumberOfIterations(200) demons.SetStandardDeviations(1.0) resampler = sitk.ResampleImageFilter() resampler.SetReferenceImage(fixed); resampler.SetInterpolator(sitk.sitkBspline) resampler.SetDefaultPixelValue(0) done = 0 for i in range(nimages): if done < int((i+1)/nimages*50): done = int((i+1)/nimages*50) sys.stdout.write('\r') # progress output : sys.stdout.write("[%-50s] %d%%" % ('*'*done, 2*done)) sys.stdout.flush() moving = sitk.GetImageFromArray(cube[i]) movingf = sitk.DiscreteGaussian(moving, 2.0) displacementField = demons.Execute(fixed,movingf) outTx = sitk.DisplacementFieldTransform( displacementField ) resampler.SetTransform(outTx) out = resampler.Execute(moving) DemReg[i,:,:] = sitk.GetArrayFromImage(out) #print('image ', i) print(':-)') print('You have succesfully completed Diffeomorphic Demons Registration') return DemReg Explanation: Appendix Demon Registration Here we use the Diffeomorphic Demon Non-Rigid Registration as provided by simpleITK. Please Cite: * simpleITK and T. Vercauteren, X. Pennec, A. Perchant and N. Ayache Diffeomorphic Demons Using ITK\'s Finite Difference Solver Hierarchy The Insight Journal, 2007 This Non-Rigid Registration consists of the following steps: determine reference image For this we use the average of the rigid registered stack this averaged stack is then smeared with a Gaussian of sigma 2 pixel to reduce noise under the assumption that high frequency scan distortions cancel out over several images, we, therefore, obtained the center of mass of the atoms. perform the demon registration filter to determine a distortion matrix each single image of a stack is first smeared with a Gaussian of sigma of 2pixels then the deformation matrix is determined for these images the deformation matrix is a matrix where each pixel has a vector ( x, and y value) for the relative shift of this pixel. This deformation matrix is used to transform the image The transformation is performed on the original image. Important, here, is to set the interpolator method, (the image needs to be interpolated because the new pixels are not on an integer grid.) Let's see what the different interpolators do. |Method | RMS contrast | Standard | Mean | |-------|:--------------|:-------------|:-------| |original |0.1965806 |0.07764114 |0.3949583 |Linear |0.20159315 |0.079470366 |0.39421165 |BSpline |0.20162606 |0.0794831 |0.39421043 |Gaussian |0.14310582 |0.056414302 |0.39421389 |Hamming |0.20163293 |0.07948672 |0.39421496 The Gaussian interpolator is the only one seems to smear the signal. We will use the Bspline method a fast and simple method that does not introduce spurious features and does not smear the signal. Full Code of Demon registration End of explanation
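The interpolator comparison in the table above could be reproduced with a short sketch like the one below. This is an addition, not code from the original notebook: it assumes `fixed`, `moving` and `outTx` as produced inside the DemonReg loop above, and it only reports the standard deviation and mean columns, since the notebook does not spell out how the 'RMS contrast' column was computed.
import numpy as np
import SimpleITK as sitk

# candidate interpolators from the table above
interpolators = {'Linear': sitk.sitkLinear,
                 'BSpline': sitk.sitkBSpline,
                 'Gaussian': sitk.sitkGaussian,
                 'Hamming': sitk.sitkHammingWindowedSinc}

def interpolator_statistics(fixed, moving, transform):
    # resample `moving` onto `fixed` with each interpolator and print simple statistics
    resampler = sitk.ResampleImageFilter()
    resampler.SetReferenceImage(fixed)
    resampler.SetDefaultPixelValue(0)
    resampler.SetTransform(transform)
    for name, interpolator in interpolators.items():
        resampler.SetInterpolator(interpolator)
        out = sitk.GetArrayFromImage(resampler.Execute(moving))
        print(f'{name:10s} standard deviation: {out.std():.8f} mean: {out.mean():.8f}')

# example use inside the registration loop: interpolator_statistics(fixed, moving, outTx)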
5,251
Given the following text description, write Python code to implement the functionality described below step by step Description: <h2> Goal Step1: <h2> Only 11 features are within 5ppm of one-another Step2: <h2> Even for all the features (not just those that were in the dataframe and passed QC), only ~1% are indistinguishable by mass </h2> Step3: <h2> Now let's see if the distributions of these m/z overlapping features are distinct </h2> Just work with the QC'd features (those found in the feature table, not those found only in the peaklist)
Python Code: import pandas as pd import numpy as np import scipy.stats as stats import matplotlib.pyplot as plt from matplotlib.ticker import NullFormatter import seaborn as sns %matplotlib inline # import the data local_path = '/home/irockafe/Dropbox (MIT)/Alm_Lab/projects/' project_path = ('/revo_healthcare/data/processed/Husermet_MTBLS97/'+ 'Husermet_UPLCMS_positive_ion_mode.xlsx') metadata = pd.read_excel(local_path+project_path, sheetname=1, index_col = 0) peaks = pd.read_excel(local_path+project_path, sheetname=2, index_col=0) # samples x features df = pd.read_excel(local_path+project_path, sheetname=3, dtype=np.float64) # Replace X from df column labels df.columns = pd.Series([i.replace('X', '') for i in df.columns], dtype='Int64') df.info() # Peaks is all the features detected, pre-QC, it seems. Select # Only those that made it to the dataframe. print 'df columns', df.columns print 'peak index', peaks.index # Sanity check that all the peaks are accounted for in the # peaklist for i in df.columns: if i not in peaks.index: print ("Oh shit, you couldn't find one of the"+ "df columns in the peaklist index") raise hell else: print "Found {i}".format(i=i) # Make a matrix of the pairwise-ppm difference between peaks def pairwise_ppm_matrix(peak_mz): ''' GOAL - Make a matrix containing pairwise ppm differences from a pandas series INPUT - peak_mz: pandas series with index as feature identifier OUTPUT - matrix of pairwise ppm values. half-full with comparisons Other (redundant) half is nan values. Using nans so you can ask "ppm_matrix < 20" and sum rows/columns to get an answer ''' ppm_pairwise_matrix = pd.DataFrame( np.full([len(peak_mz), len(peak_mz)], np.nan), index=peak_mz.index, columns=peak_mz.index) for i, mz in enumerate(peak_mz): for idx, mz2 in enumerate(peak_mz[i+1:]): j=i+1+idx # min_ppm = abs( (float(mz-mz2)/max(mz,mz2)) * 10**6) ppm_pairwise_matrix.iloc[j,i] = min_ppm return ppm_pairwise_matrix test_mz = pd.Series([1,2,3], index=['a', 'b', 'c'], dtype='float64') print test_mz test_val = pairwise_ppm_matrix(test_mz) should_val = pd.DataFrame({'a': [np.nan, 0.5*10**6, (2.0/3)*10**6], 'b': [np.nan, np.nan, (1.0/3)*10**6], 'c': [np.nan, np.nan, np.nan]}, index=['a', 'b', 'c']) print '\nOutput from test_vals:\n', test_val print '\nShould be this:\n', should_val assert(test_val.all() == should_val.all()).all() if (test_val.all() == should_val.all()).all(): print '\n\nYou passed the test! 
(might be other bugs, but idk)' # Select out the peaks from dataframe (those that presumably passed QC) features = peaks.loc[df.columns] feature_mz = features['mz'] def plot_mz_rt(df, save=False,path=None, rt_bounds=[-1e5,-1e5]): # the random data x = df['rt'] y = df['mz'] print np.max(x) print np.max(y) nullfmt = NullFormatter() # no labels # definitions for the axes left, width = 0.1, 0.65 bottom, height = 0.1, 0.65 bottom_h = left_h = left + width + 0.02 rect_scatter = [left, bottom, width, height] rect_histx = [left, bottom_h, width, 0.2] rect_histy = [left_h, bottom, 0.2, height] # start with a rectangular Figure #fig = plt.figure(1, figsize=(8, 8)) fig = plt.figure(1, figsize=(10,10)) axScatter = plt.axes(rect_scatter) axHistx = plt.axes(rect_histx) axHisty = plt.axes(rect_histy) # no labels axHistx.xaxis.set_major_formatter(nullfmt) axHisty.yaxis.set_major_formatter(nullfmt) # the scatter plot: axScatter.scatter(x, y, s=1) # now determine nice limits by hand: binwidth = 0.25 #xymax = np.max([np.max(np.fabs(x)), np.max(np.fabs(y))]) #lim = (int(xymax/binwidth) + 1) * binwidth x_min = np.min(x)-50 x_max = np.max(x)+50 axScatter.set_xlim(x_min, x_max ) y_min = np.min(y)-50 y_max = np.max(y)+50 axScatter.set_ylim(y_min, y_max) # Add vertical red line between 750-1050 retention time ''' plt.plot([0,1], [0,1], linestyle = '--', lw=2, color='r', label='Luck', alpha=0.5) ''' print 'ymin: ', y_min # Add vertical/horizontal lines to scatter and histograms axScatter.axvline(x=rt_bounds[0], lw=2, color='r', alpha=0.5) axScatter.axvline(x=rt_bounds[1], lw=2, color='r', alpha=0.5) #bins = np.arange(-lim, lim + binwidth, binwidth) bins = 100 axHistx.hist(x, bins=bins) axHisty.hist(y, bins=bins, orientation='horizontal') axHistx.set_xlim(axScatter.get_xlim()) axHisty.set_ylim(axScatter.get_ylim()) axScatter.set_ylabel('m/z', fontsize=30) axScatter.set_xlabel('Retention Time', fontsize=30) axHistx.set_ylabel('# of Features', fontsize=20) axHisty.set_xlabel('# of Features', fontsize=20) if save: plt.savefig(path, format='pdf') plt.show() plot_mz_rt(features) # Do a quick test on own data ppm_matrix = pairwise_ppm_matrix(feature_mz) Explanation: <h2> Goal: Find out number of isomers in husermet data </h2> Then look into how many can be found redundant. Start with positive ion mode End of explanation def plot_ppm_overlaps(ppm_matrix, x_vals): x = x_vals y = (np.array([(ppm_matrix < i).sum().sum() for i in x]) / float(ppm_matrix.shape[0]))*100 plt.scatter(x,y) plt.xlabel('ppm') plt.ylabel('% of overlapping m/z') plt.title('Few overlapping m/z values in Husermet dataset' + ' (# Features = %s)' % ppm_matrix.shape[0]) plt.axvline(5, color='red', alpha=0.2, label='Instrument precision') plt.legend() plt.show() plot_ppm_overlaps(ppm_matrix, range(1,20)) # Check how many of the annotated features (peaks?) # have overlapping m/z # This takes a long time. Maybe try to matrix-ify some of the code? all_feats = pairwise_ppm_matrix(peaks['mz']) Explanation: <h2> Only 11 features are within 5ppm of one-another :) </h2> Unsure how this will stack up between datasets, but 52 possibly isomeric features out of 1000 is not bad... Also, there's a distinct possibility that they removed other redundant features... Check on that below End of explanation plot_ppm_overlaps(all_feats, range(1,20)) Explanation: <h2> Even for all the features (not just those that were in the dataframe and passed QC), only ~1% are indistinguishable by mass </h2> End of explanation # Get the overlapping features... 
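The double loop in pairwise_ppm_matrix is slow for thousands of features (a comment further down even suggests "matrix-ifying" the code). The sketch below is an optional, vectorised alternative and not part of the original notebook; it keeps the same ppm formula and the same lower-triangle/NaN convention as the loop version:
import numpy as np
import pandas as pd

def pairwise_ppm_matrix_fast(peak_mz):
    # peak_mz: pandas Series of m/z values indexed by feature identifier
    mz = peak_mz.values.astype(float)
    diff = np.abs(mz[:, None] - mz[None, :])
    ppm = diff / np.maximum(mz[:, None], mz[None, :]) * 10**6
    # blank out the diagonal and upper triangle so only pairwise (j > i) values remain
    ppm[np.triu_indices_from(ppm)] = np.nan
    return pd.DataFrame(ppm, index=peak_mz.index, columns=peak_mz.index)

# sanity check against the loop version on the small test series
# print(pairwise_ppm_matrix_fast(test_mz))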
# Stack() pivots values and drops nan values - yay! overlapping_mz_pairs = list(ppm_matrix[ppm_matrix < 6].stack().index) len(overlapping_mz_pairs) print overlapping_mz_pairs # write a function to get intensities for these features def plot_overlapping_mz_intensities(df, feature_pair): ''' GOAL - Take in tuple of feature indices, return Intensity values for that pair. INPUT - df - pandas dataframe. A feature table with (samples x features), with column index that has same index as feature_pair feature_pair - Tuple. Contains indexes to get intensity vals OUTPUT - Dataframe of (sample, intensity) for each feature pair ''' feats = df[list(feature_pair)] # convert to tidy data tidy_feats = feats.melt(id_vars=feats.index, value_vars=feats.columns, var_name='feature', value_name='intensity').dropna(axis=1, how='all') # Get mann-whitney values u, pval_u = stats.mannwhitneyu(df[feature_pair[0]], df[feature_pair[1]]) # Convert dtype of intensity values! float..? sns.boxplot(x='feature', y='intensity', data=tidy_feats) ax = sns.stripplot(data=tidy_feats, x='feature', y='intensity', jitter=True) plt.title("mann-whitney: {u}, pval: {pval:.2e}".format( u=u, pval=pval_u)) plt.show() #TODO fix bug here that says # I'm using different-length arrays for i in range(0,len(overlapping_mz_pairs)): plot_overlapping_mz_intensities(df , overlapping_mz_pairs[i]) test = pd.DataFrame({'A': [1,2,3], 'B':[10,20,30], 'C':[100,200,300]}) print test test.melt(id_vars=test.index, value_vars=test.columns, var_name='feature', value_name='intensity').dropna(axis=1) Explanation: <h2> Now let's see if the distributions of these m/z overlapping features are distinct </h2> Just work with the QC'd features (those found in the feature table, not those found only in the peaklist) End of explanation
5,252
Given the following text description, write Python code to implement the functionality described below step by step Description: Custom Generator objects This example should guide you to build your own simple generator. Step1: Basic knowledge We assume that you have completed at least some of the previous examples and have a general idea of how adaptiveMD works. Still, let's recapitulate what we think is the typical way of a simulation. How to execute something To execute something you need a description of the task to be done. This is the Task object. Once you have this you can, use it in a Scheduler which will interpret the Task into some code that the computer understands. It handles all the little things you expect from the task, like registering generated file, etc... And to do so, the Scheduler needs your Resource description which acts like a config for the scheduler When you have a Scheduler (with Resource) you let it execute Task objects. If you know how to build these you are done. That is all you need. What are Generators? Build a task can be cumbersome and often repetative, and a factory for Task objects is extremely useful. These are called Generators (maybe TaskFactory) is a better name?!? In your final scheme where you observe all generated objects and want to build new tasks accordingly you will (almost) never build a Task yourself. You use a generator. A typical example is an Engine. It will generate tasks, that simulate new trajectories, extend existing ones, etc... Basic stuff. The second big class is Analysis. It will use trajectories to generate models or properties of interest to guide your decisions for new trajectories. In this example we will build a simple generator for a task, that uses the mdtraj package to compute some features and store these in the database and in a file. The MDTrajFeaturizer generator First, we think about how this featurizer works if we would not use adaptivemd. The reason is, that we have basically two choices for designing a Task (see example 4 about Task objects). A task that calls bash commands for you A task that calls a python function for you Since we want to call mdtraj functions we use the 2nd and start with a skeleton for this type and store it under my_generator.py Step2: What input does our generator always need? Mdtraj needs a topology unless it is already present. Interestingly, our Trajectory objects know about their topology so we could access these, if our function is to process a Trajectory. This requires the Trajectory to be the input. If we want to process any file, then we might need a topology. The decision if we want the generator to work for a fixed topology is yours. To show how this would work, we do this here. We use a fixed topology per generator that applies to File objects. Second is the feature we want to compute. This is tricky and so we hard code this now. You can think of a better way to represent this. But let's pick the tertiary stucture prediction Step3: The task building Step4: The actual script This script is executed on the HPC for you. And requires mdtraj to be installed on it. Step5: That's it. At least in the simplest form. When you use this to create a Task Step6: We wait and then the Task object has a .output property which now contains the returned result. This can now be used in your execution plans... Step7: Next, we look at improvements Better storing of results Often you want to save the output from your function in the DB in some form or another. 
Though the output is stored, it is not conviniently accessed unless you know the task that was used. For this reason there is a callback function you can set, that can take care of doing a custom handling of the output. The function to be called needs to be a method of the generator and you can give the task the name of the method. The name (str) of the funtion can be set using the then() command. An the default name is then_func. Step8: The function takes exactly 4 parameters project Step9: in that case .output will stay None even after execution Working with Trajectory files and get their properties Note that you always have to write file generation and file analysis/reading that matches. We only store some very general properties of objects with them, e.g. a stride for trajectories. This means you cannot arbitrarily mix code for these. Now we want that this works Step10: This is rather simple Step11: Import! You have no access to the Trajectory object in our remove function. These will be converted to a real path relative to the working directory. This makes sure that you will not have to deal with prefixes, etc. This might change in the future, but. The scripts are considered independent of adaptivemd! Problem with saving your generator to the DB This is not complicated but you need to briefly learn about the mechanism to store complex Python objects in the DB. The general way to Store an instance of a class requires you to subclass from adaptivemd.mongodb.StorableMixin. This provides the class with a __uuid__ attribute that is a unique number for each storable object that is given at creation time. (If we would just store objects using pymongo we would get a number like this, but later). Secondly, it add two functions to_dict() Step12: while this would not work Step13: In the second case you need to overwrite the default function. All of these will work Step14: If you do that, make sure that you really capture all variables. Especially if you subclass from an existing one. You can use super to access the result from the parent class Step15: This is the recommended way to build your custom functions. For completeness we show here what the base TaskGenerator class will do Step16: The only unfamiliar part is the py obj = cls.__new__(cls) StorableMixin.__init__(obj) which needs a little explanation. In most __init__ functions for a TaskGenerator you will construct the initial_staging attribute with some functions. If you would reconstruct by just calling the constructor with the same parameters again, this would result in an equal object as expected and that would work, but not in all regards as expected
Python Code: from adaptivemd import ( Project, Task, File, PythonTask ) project = Project('tutorial') engine = project.generators['openmm'] modeller = project.generators['pyemma'] pdb_file = project.files['initial_pdb'] Explanation: Custom Generator objects This example should guide you to build your own simple generator. End of explanation %% file my_generator.py # This is an example for building your own generator # This file must be added to the project so that it is loaded # when you import `adaptivemd`. Otherwise your worker don't know # about the class! from adaptivemd import Generator class MDTrajFeaturizer(Generator): def __init__(self, {things we always need}): super(PyEMMAAnalysis, self).__init__() # stage file you want to reuse (optional) # self['pdb_file'] = pdb_file # stage = pdb_file.transfer('staging:///') # self['pdb_file_stage'] = stage.target # self.initial_staging.append(stage) @staticmethod def then_func(project, task, data, inputs): # add the output for later reference project.data.add(data) def execute(self, {options per task}): t = PythonTask(self) # get your staged files (optional) # input_pdb = t.link(self['pdb_file_stage'], 'input.pdb') # add the python function call to your script (there can be only one!) t.call( my_script, param1, param2, ... ) return t def my_script(param1, param2, ...): return {"whatever you want to return"} Explanation: Basic knowledge We assume that you have completed at least some of the previous examples and have a general idea of how adaptiveMD works. Still, let's recapitulate what we think is the typical way of a simulation. How to execute something To execute something you need a description of the task to be done. This is the Task object. Once you have this you can, use it in a Scheduler which will interpret the Task into some code that the computer understands. It handles all the little things you expect from the task, like registering generated file, etc... And to do so, the Scheduler needs your Resource description which acts like a config for the scheduler When you have a Scheduler (with Resource) you let it execute Task objects. If you know how to build these you are done. That is all you need. What are Generators? Build a task can be cumbersome and often repetative, and a factory for Task objects is extremely useful. These are called Generators (maybe TaskFactory) is a better name?!? In your final scheme where you observe all generated objects and want to build new tasks accordingly you will (almost) never build a Task yourself. You use a generator. A typical example is an Engine. It will generate tasks, that simulate new trajectories, extend existing ones, etc... Basic stuff. The second big class is Analysis. It will use trajectories to generate models or properties of interest to guide your decisions for new trajectories. In this example we will build a simple generator for a task, that uses the mdtraj package to compute some features and store these in the database and in a file. The MDTrajFeaturizer generator First, we think about how this featurizer works if we would not use adaptivemd. The reason is, that we have basically two choices for designing a Task (see example 4 about Task objects). 
A task that calls bash commands for you A task that calls a python function for you Since we want to call mdtraj functions we use the 2nd and start with a skeleton for this type and store it under my_generator.py End of explanation def __init__(self, pdb_file=None): super(PyEMMAAnalysis, self).__init__() # if we provide a pdb_file it should be used if pdb_file is not None: # stage file you want to reuse (optional) # give the file an internal name self['pdb_file'] = pdb_file # create the transfer from local to staging: stage = pdb_file.transfer('staging:///') # give the staged file an internal name self['pdb_file_stage'] = stage.target # append the transfer action to the initial staging action list self.initial_staging.append(stage) Explanation: What input does our generator always need? Mdtraj needs a topology unless it is already present. Interestingly, our Trajectory objects know about their topology so we could access these, if our function is to process a Trajectory. This requires the Trajectory to be the input. If we want to process any file, then we might need a topology. The decision if we want the generator to work for a fixed topology is yours. To show how this would work, we do this here. We use a fixed topology per generator that applies to File objects. Second is the feature we want to compute. This is tricky and so we hard code this now. You can think of a better way to represent this. But let's pick the tertiary stucture prediction End of explanation def execute(self, file_to_analyze): assert(isinstance(file_to_analyze, File)) t = PythonTask(self) # get your staged files (optional) if self.get('pdb_file_stage'): input_pdb = t.link(self['pdb_file_stage'], 'input.pdb') else: input_pdb = None # add the python function call to your script (there can be only one!) t.call( my_script, file_to_analyze, input_pdb ) return t Explanation: The task building End of explanation def my_script(file_to_analyze, input_pdb): import mdtraj as md traj = md.load(file_to_analyze, top=input_pdb) features = traj.compute_xyz() return features Explanation: The actual script This script is executed on the HPC for you. And requires mdtraj to be installed on it. End of explanation my_generator = MDTrajFeaturizer(pdb_file) task = my_generator.execute(traj.file('master.dcd')) project.queue(task) Explanation: That's it. At least in the simplest form. When you use this to create a Task End of explanation def strategy(): # generate some structures... # yield wait ... # get a traj object task = my_generator.execute(traj.outputs('master')) # wait until the task is done yield task.is_done # print the output output = task.output # do something with the result, store in the DB, etc... Explanation: We wait and then the Task object has a .output property which now contains the returned result. This can now be used in your execution plans... End of explanation def execute(self, ...): t = PythonTask(self) t.then('handle_my_output') @staticmethod def handle_my_output(project, task, data, inputs): print 'Saving data from task', task, 'into model' m = Model(data) project.model.add(m) Explanation: Next, we look at improvements Better storing of results Often you want to save the output from your function in the DB in some form or another. Though the output is stored, it is not conviniently accessed unless you know the task that was used. For this reason there is a callback function you can set, that can take care of doing a custom handling of the output. 
The function to be called needs to be a method of the generator and you can give the task the name of the method. The name (str) of the funtion can be set using the then() command. An the default name is then_func. End of explanation def execute(self, ...): t = PythonTask(self) t.then('handle_my_output') t.store_output = False # default is `True` Explanation: The function takes exactly 4 parameters project: the project in which the task was run. Is used to access the database, etc task: the actual task object that produced the output data: the output returned by the function inputs: the input to the python function call (internally). The data actually transmitted to the worker to run Like in the above example you can do whatever you want with your data, store it, alter it, write it to a file, etc. In case you do not want to additionally save the output (data) in the DB as an object, you can tell the trask not to by setting End of explanation my_generator.execute(traj) Explanation: in that case .output will stay None even after execution Working with Trajectory files and get their properties Note that you always have to write file generation and file analysis/reading that matches. We only store some very general properties of objects with them, e.g. a stride for trajectories. This means you cannot arbitrarily mix code for these. Now we want that this works End of explanation def __init__(self, outtype, pdb_file=None): super(PyEMMAAnalysis, self).__init__() # we store a str that holds the name of the outputtype # this must match the definition self.outtype = outtype # ... def execute(self, traj, *args, **kwargs): t = PythonTask(self) # ... file_location = traj.outputs(self.outtype) # get the trajectory file matching outtype # use the file_location. # ... Explanation: This is rather simple: All you need to do is to extract the actual files from the trajectory object. End of explanation class MyStorableObject(StorableMixin): def __init__(self, state): self.state = state Explanation: Import! You have no access to the Trajectory object in our remove function. These will be converted to a real path relative to the working directory. This makes sure that you will not have to deal with prefixes, etc. This might change in the future, but. The scripts are considered independent of adaptivemd! Problem with saving your generator to the DB This is not complicated but you need to briefly learn about the mechanism to store complex Python objects in the DB. The general way to Store an instance of a class requires you to subclass from adaptivemd.mongodb.StorableMixin. This provides the class with a __uuid__ attribute that is a unique number for each storable object that is given at creation time. (If we would just store objects using pymongo we would get a number like this, but later). Secondly, it add two functions to_dict(): this converts the (immutable) state of the object into a dictionary that is simple enough that it can be stored. Simple enought means, that you can have Python primitives, things like numpy arrays or even other storable objects, but not arbitrary objects in it, like lambda constructs (these are possible but need special treatment) from_dict(): The reverse. It takes the dictionary from to_dict and must return an equivalent object! So, you can do clone = obj.__class__.from_dict(obj.to_dict()) and get an equal object in that it has the same attributes. You could also say a deep copy. 
This is not always trivial and there exists a default implementation, which will make an additional assumption: All necessary attributes have the same parameters in __init__. So, this would correspond to this rule End of explanation class MyStorableObject(StorableMixin): def __init__(self, initial_state): self.state = initial_state Explanation: while this would not work End of explanation # fix `to_dict` to match default `from_dict` class MyStorableObject(StorableMixin): def __init__(self, initial_state): self.state = initial_state def to_dict(self): return { 'initial_state': self.state } # fix `from_dict` to match default `to_dict` class MyStorableObject(StorableMixin): def __init__(self, initial_state): self.state = initial_state @classmethod def from_dict(cls, dct): return cls(initial_state=dct['state']) # fix both `from_dict` and `to_dict` class MyStorableObject(StorableMixin): def __init__(self, initial_state): self.state = initial_state def to_dict(self): return { 'my_state': self.state } @classmethod def from_dict(cls, dct): return cls(initial_state=dct['my_state']) Explanation: In the second case you need to overwrite the default function. All of these will work End of explanation class MyStorableObject(StorableMixin): @classmethod def from_dict(cls, dct): obj = super(MyStorableObject, cls).from_dict(dct) obj.missing_attr1 = dct['missing_attr_key1'] return obj def to_dict(self): dct = super(MyStorableObject, self).to_dict(self) dct.update({ 'missing_attr_key1': self.missing_attr1 }) return dct Explanation: If you do that, make sure that you really capture all variables. Especially if you subclass from an existing one. You can use super to access the result from the parent class End of explanation @classmethod def from_dict(cls, dct): obj = cls.__new__(cls) StorableMixin.__init__(obj) obj._items = dct['_items'] obj.initial_staging = dct['initial_staging'] return obj def to_dict(self): return { '_items': self._items, 'initial_staging': self.initial_staging } Explanation: This is the recommended way to build your custom functions. For completeness we show here what the base TaskGenerator class will do End of explanation project.close() Explanation: The only unfamiliar part is the py obj = cls.__new__(cls) StorableMixin.__init__(obj) which needs a little explanation. In most __init__ functions for a TaskGenerator you will construct the initial_staging attribute with some functions. If you would reconstruct by just calling the constructor with the same parameters again, this would result in an equal object as expected and that would work, but not in all regards as expected: The problem is that if you generate objects that can be stored, these will get new UUIDs and hence are considered different from the ones that you wanted to store. In short, the construction in the __init__ prevents you from getting the real old object back, you always construct something new. This can be solved by not using __init__ but creating an empty object using __new__ and then fixing all attributes to the original state. This is very similar to __setstate__ which we do not use in general to still allow using __init__ which makes sense in most cases where not storable objects are generated. In the following we discuss an existing generator A simple generator A word about this example. While a Task can be created and configured a new class in adaptivemd needs to be part of the project. So we will write discuss the essential parts of the existing code. 
A generator is in essence a factory to create Task objects with a single command. A generator can be initialized with certain files that the created tasks will always need, like an engine will need a topology for each task, etc. It also (as explained briefly before in Example 4) knows about certain callback behaviour of their tasks. Last, a generator allows you to assign a worker only to tasks that were created by a generator. The execution structure Let's look at the code of the PyEMMAAnalysis ```py class PyEMMAAnalysis(Analysis): def init(self, pdb_file): super(PyEMMAAnalysis, self).init() self['pdb_file'] = pdb_file stage = pdb_file.transfer('staging:///') self['pdb_file_stage'] = stage.target self.initial_staging.append(stage) @staticmethod def then_func(project, task, model, inputs): # add the input arguments for later reference model.data['input']['trajectories'] = inputs['files'] model.data['input']['pdb'] = inputs['topfile'] project.models.add(model) def execute( self, trajectories, tica_lag=2, tica_dim=2, msm_states=5, msm_lag=2, stride=1): t = PythonTask(self) input_pdb = t.link(self['pdb_file_stage'], 'input.pdb') t.call( remote_analysis, trajectories=list(trajectories), topfile=input_pdb, tica_lag=tica_lag, tica_dim=tica_dim, msm_states=msm_states, msm_lag=msm_lag, stride=stride ) return t ``` ```py def init(self, pdb_file): # don't forget to call super super(PyEMMAAnalysis, self).init() # a generator also acts like a dictionary for files # this way you can later access certain files you might need # save the pdb_file under the same name self['pdb_file'] = pdb_file # this creates a transfer action like it is used in tasks # and moves the passed pdb_file (usually on the local machein) # to the staging_area root directory stage = pdb_file.transfer('staging:///') # and the new target file (which is also like the original) # on the staging_area is saved unter `pdb_file_stage` # so, we can access both files if we wanted to # note that the original file most likely is in the DB # so we could just skip the stage transfer completely self['pdb_file_stage'] = stage.target # last we add this transfer to the initial_staging which # is done only once per used generator self.initial_staging.append(stage) ``` ```py the kwargs is to keep the exmaple short, you should use explicit parameters and add appropriate docs def execute(self, trajectories, **kwargs): # create the task and set the generator to self, our new generator t = PythonTask(self) # we want to copy the staged file to the worker directory # and name it `input.pdb` input_pdb = t.link(self['pdb_file_stage'], 'input.pdb') # if you chose not to use the staging file and copy it directly you # would use in analogy # input_pdb = t.link(self['pdb_file'], 'input.pdb') # finally we use `.call` and want to call the `remote_analysis` function # which we imported earlier from somewhere t.call( remote_analysis, trajectories=list(trajectories), **kwargs ) return t ``` And finally a call_back function. The name then_func is the default function name to be called. 
```py we use a static method, but you can of course write a normal method @staticmethod the call_backs take these arguments in this order the second parameter is actually a Model object in this case which has a .data attribute def then_func(project, task, model, inputs): # add the input arguments for later reference to the model model.data['input']['trajectories'] = inputs['kwargs']['files'] model.data['input']['pdb'] = inputs['kwargs']['topfile'] # and save the model in the project project.models.add(model) ``` A brief summary and things you need to set to make your generator work ```py class MyGenerator(Analysis): def init(self, {things your generator always needs}): super(MyGenerator, self).init() # Add input files to self self['file1'] = file1 # stage all files to the staging area of you want to keep these # files on the HPC for fn in ['file1', 'file2', ...]: stage = self[fn].transfer('staging:///') self[fn + '_stage'] = stage.target self.initial_staging.append(stage) @staticmethod def then_func(project, task, outputs, inputs): # do something with input and outputs # store something in your project def task_using_python_rpc( self, {arguments}): t = PythonTask(self) # set any task dependencies if you need t.dependencies = [] input1 = t.link(self['file1'], 'alternative_name1') input2 = t.link(self['file2'], 'alternative_name2') ... # add whatever bash stuff you need BEFORE the function call t.append('some bash command') ... # use input1, etc in your function call if you like. It will # be converted to a regular file location you can use t.call( {my_remote_python_function}, files=list(files), ) # add whatever bash stuff you need AFTER the function call t.append('some bash command') ... return t def task_using_bash_argument_call( self, {arguments}): t = Task(self) # set any task dependencies if you need t.dependencies = [] input1 = t.link(self['file1'], 'alternative_name1') input2 = t.link(self['file2'], 'alternative_name2') ... # add more staging t.append({action}) ... # add whatever bash stuff you want to do t.append('some bash command') ... # add whatever staging stuff you need AFTER the function call t.append({action}) ... 
return t ``` The simplified code for the OpenMMEngine ```py class OpenMMEngine(Engine): trajectory_ext = 'dcd' def __init__(self, system_file, integrator_file, pdb_file, args=None): super(OpenMMEngine, self).__init__() self['pdb_file'] = pdb_file self['system_file'] = system_file self['integrator_file'] = integrator_file self['_executable_file'] = exec_file for fn in self.files: stage = self[fn].transfer(Location('staging:///')) self[name + '_stage'] = stage.target self.initial_staging.append(stage) if args is None: args = '-p CPU --store-interval 1' self.args = args # this one only works if you start from a file def task_run_trajectory_from_file(self, target): # we create a special Task, that has some additional functionality t = TrajectoryGenerationTask(self, target) # link all the files we require initial_pdb = t.link(self['pdb_file_stage'], Location('initial.pdb')) t.link(self['system_file_stage']) t.link(self['integrator_file_stage']) t.link(self['_executable_file_stage']) # use the initial PDB to be used input_pdb = t.get(target.frame, 'coordinates.pdb') # this represents our output trajectory output = Trajectory('traj/', target.frame, length=target.length, engine=self) # create the directory so openmmrun can write to it t.touch(output) # build the actual bash command cmd = 'python openmmrun.py {args} -t {pdb} --length {length} {output}'.format( pdb=input_pdb, length=target.length, output=output, args=self.args, ) t.append(cmd) # copy the resulting trajectory directory back to the staging area t.put(output, target) return t ``` End of explanation
5,253
Given the following text description, write Python code to implement the functionality described below step by step Description: Learning Algorithms - Unsupervised Learning Reminder Step1: PCA revisited Step2: The pca.explained_variance_ is like the magnitude of a components influence (amount of variance explained) and the pca.components_ is like the direction of influence for each feature in each component. <p style="text-align Step3: QUESTION Step4: Clustering KMeans finds cluster centers that are the mean of the points within them. Likewise, a point is in a cluster because the cluster center is the closest cluster center for that point. If you don't have ipywidgets package installed, go ahead and install it now by running the cell below uncommented. Step5: <p style="text-align Step6: KMeans employ the <i>Expectation-Maximization</i> algorithm which works as follows Step7: <b>Warning</b>! There is absolutely no guarantee of recovering a ground truth. First, choosing the right number of clusters is hard. Second, the algorithm is sensitive to initialization, and can fall into local minima, although scikit-learn employs several tricks to mitigate this issue.<br> --Taken directly from sklearn docs <img src='imgs/pca1.png' alt="Original PCA with Labels" align="center"> Novelty detection aka anomaly detection QUICK QUESTION
Python Code: %matplotlib inline import numpy as np import matplotlib.pyplot as plt Explanation: Learning Algorithms - Unsupervised Learning Reminder: In machine learning, the problem of unsupervised learning is that of trying to find hidden structure in unlabeled data. Since the training set given to the learner is unlabeled, there is no error or reward signal to evaluate a potential solution. Basically, we are just finding a way to represent the data and get as much information from it that we can. HEY! Remember PCA from above? PCA is actually considered unsupervised learning. We just put it up there because it's a good way to visualize data at the beginning of the ML process. Let's revisit it in a little more detail using the iris dataset. End of explanation from sklearn.decomposition import PCA from sklearn.datasets import load_iris iris = load_iris() # subset data to have only sepal width (cm) and petal length (cm) for simplification X = iris.data[:, 1:3] print(iris.feature_names[1:3]) pca = PCA(n_components = 2) pca.fit(X) print("% of variance attributed to components: "+ \ ', '.join(['%.2f' % (x * 100) for x in pca.explained_variance_ratio_])) print('\ncomponents and amount of variance explained by each feature:', pca.components_) print(pca.mean_) Explanation: PCA revisited End of explanation # plot the original data in X (before PCA) plt.plot(X[:, 0], X[:, 1], 'o', alpha=0.5) # grab the component means to get the center point for plot below means = pca.mean_ # here we use the direction of the components in pca.components_ # and the magnitude of the variance explaine by that component in # pca.explained_variane_ # we plot the vector (manginude and direction) of the components # on top of the original data in X for length, vector in zip(pca.explained_variance_, pca.components_): v = vector * 3 * np.sqrt(length) plt.plot([means[0], v[0]+means[0]], [means[1], v[1]+means[1]], '-k', lw=3) # axis limits plt.xlim(0, max(X[:, 0])+3) plt.ylim(0, max(X[:, 1])+3) # original feature labels of our data X plt.xlabel(iris.feature_names[1]) plt.ylabel(iris.feature_names[2]) Explanation: The pca.explained_variance_ is like the magnitude of a components influence (amount of variance explained) and the pca.components_ is like the direction of influence for each feature in each component. <p style="text-align:right"><i>Code in next cell adapted from Jake VanderPlas's code [here](https://github.com/jakevdp/sklearn_pycon2015)</i></p> End of explanation # get back to our 4D dataset X, y = iris.data, iris.target pca = PCA(n_components = 0.95) # keep 95% of variance X_trans = pca.___(X) # <- fill in the blank print(X.shape) print(X_trans.shape) plt.scatter(X_trans[:, 0], X_trans[:, 1], c=iris.target, edgecolor='none', alpha=0.5, cmap=plt.cm.get_cmap('spring', 10)) plt.ylabel('Component 2') plt.xlabel('Component 1') Explanation: QUESTION: In which direction in the data is the most variance explained? Recall, in the ML 101 module: unsupervised models have a fit(), transform() and/or fit_transform() in sklearn. If you want to both get a fit and new dataset with reduced dimensionality, which would you use below? 
(Fill in blank in code) End of explanation #!pip install ipywidgets from ipywidgets import interact from sklearn.metrics.pairwise import euclidean_distances from sklearn.datasets.samples_generator import make_blobs from sklearn.datasets import load_iris from sklearn.decomposition import PCA iris = load_iris() X, y = iris.data, iris.target pca = PCA(n_components = 2) # keep 2 components which explain most variance X = pca.fit_transform(X) X.shape # I have to tell KMeans how many cluster centers I want n_clusters = 3 # for consistent results when running the methods below random_state = 2 Explanation: Clustering KMeans finds cluster centers that are the mean of the points within them. Likewise, a point is in a cluster because the cluster center is the closest cluster center for that point. If you don't have ipywidgets package installed, go ahead and install it now by running the cell below uncommented. End of explanation def _kmeans_step(frame=0, n_clusters=n_clusters): rng = np.random.RandomState(random_state) labels = np.zeros(X.shape[0]) centers = rng.randn(n_clusters, 2) nsteps = frame // 3 for i in range(nsteps + 1): old_centers = centers if i < nsteps or frame % 3 > 0: dist = euclidean_distances(X, centers) labels = dist.argmin(1) if i < nsteps or frame % 3 > 1: centers = np.array([X[labels == j].mean(0) for j in range(n_clusters)]) nans = np.isnan(centers) centers[nans] = old_centers[nans] # plot the data and cluster centers plt.scatter(X[:, 0], X[:, 1], c=labels, s=50, cmap='rainbow', vmin=0, vmax=n_clusters - 1); plt.scatter(old_centers[:, 0], old_centers[:, 1], marker='o', c=np.arange(n_clusters), s=200, cmap='rainbow') plt.scatter(old_centers[:, 0], old_centers[:, 1], marker='o', c='black', s=50) # plot new centers if third frame if frame % 3 == 2: for i in range(n_clusters): plt.annotate('', centers[i], old_centers[i], arrowprops=dict(arrowstyle='->', linewidth=1)) plt.scatter(centers[:, 0], centers[:, 1], marker='o', c=np.arange(n_clusters), s=200, cmap='rainbow') plt.scatter(centers[:, 0], centers[:, 1], marker='o', c='black', s=50) plt.xlim(-4, 5) plt.ylim(-2, 2) plt.ylabel('PC 2') plt.xlabel('PC 1') if frame % 3 == 1: plt.text(4.5, 1.7, "1. Reassign points to nearest centroid", ha='right', va='top', size=8) elif frame % 3 == 2: plt.text(4.5, 1.7, "2. 
Update centroids to cluster means", ha='right', va='top', size=8) Explanation: <p style="text-align:right"><i>Code in next cell adapted from Jake VanderPlas's code [here](https://github.com/jakevdp/sklearn_pycon2015)</i></p> End of explanation # suppress future warning import warnings warnings.filterwarnings('ignore') min_clusters, max_clusters = 1, 6 interact(_kmeans_step, frame=[0, 20], n_clusters=[min_clusters, max_clusters]) Explanation: KMeans employ the <i>Expectation-Maximization</i> algorithm which works as follows: Guess cluster centers Assign points to nearest cluster Set cluster centers to the mean of points Repeat 1-3 until converged End of explanation %matplotlib inline from matplotlib import rcParams, font_manager rcParams['figure.figsize'] = (14.0, 7.0) fprop = font_manager.FontProperties(size=14) import numpy as np import matplotlib.pyplot as plt import matplotlib.font_manager from sklearn import svm from sklearn.datasets import load_iris from sklearn.cross_validation import train_test_split xx, yy = np.meshgrid(np.linspace(-2, 9, 500), np.linspace(-2,9, 500)) # Iris data iris = load_iris() X, y = iris.data, iris.target labels = iris.feature_names[1:3] X = X[:, 1:3] # split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3, random_state = 0) # make some outliers X_weird = np.random.uniform(low=-2, high=9, size=(20, 2)) # fit the model clf = svm.OneClassSVM(nu=0.1, kernel="rbf", gamma=1, random_state = 0) clf.fit(X_train) # predict labels y_pred_train = clf.predict(X_train) y_pred_test = clf.predict(X_test) y_pred_outliers = clf.predict(X_weird) n_error_train = y_pred_train[y_pred_train == -1].size n_error_test = y_pred_test[y_pred_test == -1].size n_error_outliers = y_pred_outliers[y_pred_outliers == 1].size # plot the line, the points, and the nearest vectors to the plane Z = clf.decision_function(np.c_[xx.ravel(), yy.ravel()]) Z = Z.reshape(xx.shape) plt.title("Novelty Detection aka Anomaly Detection") plt.contourf(xx, yy, Z, levels=np.linspace(Z.min(), 0, 7), cmap=plt.cm.Blues_r) a = plt.contour(xx, yy, Z, levels=[0], linewidths=2, colors='red') plt.contourf(xx, yy, Z, levels=[0, Z.max()], colors='orange') b1 = plt.scatter(X_train[:, 0], X_train[:, 1], c='white') b2 = plt.scatter(X_test[:, 0], X_test[:, 1], c='green') c = plt.scatter(X_outliers[:, 0], X_outliers[:, 1], c='red') plt.axis('tight') plt.xlim((-2, 9)) plt.ylim((-2, 9)) plt.ylabel(labels[1], fontsize = 14) plt.legend([a.collections[0], b1, b2, c], ["learned frontier", "training observations", "new regular observations", "new abnormal observations"], loc="best", prop=fprop) plt.xlabel( "%s\nerror train: %d/200 ; errors novel regular: %d/40 ; " "errors novel abnormal: %d/10" % (labels[0], n_error_train, n_error_test, n_error_outliers), fontsize = 14) Explanation: <b>Warning</b>! There is absolutely no guarantee of recovering a ground truth. First, choosing the right number of clusters is hard. Second, the algorithm is sensitive to initialization, and can fall into local minima, although scikit-learn employs several tricks to mitigate this issue.<br> --Taken directly from sklearn docs <img src='imgs/pca1.png' alt="Original PCA with Labels" align="center"> Novelty detection aka anomaly detection QUICK QUESTION: What is the diffrence between outlier detection and anomaly detection? Below we will use a one-class support vector machine classifier to decide if a point is weird or not given our original data. (The code was adapted from sklearn docs here) End of explanation
5,254
Given the following text description, write Python code to implement the functionality described below step by step Description: Reinforcement Learning (DQN) tutorial Author Step2: Replay Memory We'll be using experience replay memory for training our DQN. It stores the transitions that the agent observes, allowing us to reuse this data later. By sampling from it randomly, the transitions that build up a batch are decorrelated. It has been shown that this greatly stabilizes and improves the DQN training procedure. For this, we're going to need two classses Step3: Now, let's define our model. But first, let quickly recap what a DQN is. DQN algorithm Our environment is deterministic, so all equations presented here are also formulated deterministically for the sake of simplicity. In the reinforcement learning literature, they would also contain expectations over stochastic transitions in the environment. Our aim will be to train a policy that tries to maximize the discounted, cumulative reward $R_{t_0} = \sum_{t=t_0}^{\infty} \gamma^{t - t_0} r_t$, where $R_{t_0}$ is also known as the return. The discount, $\gamma$, should be a constant between $0$ and $1$ that ensures the sum converges. It makes rewards from the uncertain far future less important for our agent than the ones in the near future that it can be fairly confident about. The main idea behind Q-learning is that if we had a function $Q^* Step4: Input extraction ^^^^^^^^^^^^^^^^ The code below are utilities for extracting and processing rendered images from the environment. It uses the torchvision package, which makes it easy to compose image transforms. Once you run the cell it will display an example patch that it extracted. Step5: Training Hyperparameters and utilities ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ This cell instantiates our model and its optimizer, and defines some utilities Step6: Training loop ^^^^^^^^^^^^^ Finally, the code for training our model. Here, you can find an optimize_model function that performs a single step of the optimization. It first samples a batch, concatenates all the tensors into a single one, computes $Q(s_t, a_t)$ and $V(s_{t+1}) = \max_a Q(s_{t+1}, a)$, and combines them into our loss. By defition we set $V(s) = 0$ if $s$ is a terminal state. Step7: Below, you can find the main training loop. At the beginning we reset the environment and initialize the state variable. Then, we sample an action, execute it, observe the next screen and the reward (always 1), and optimize our model once. When the episode ends (our model fails), we restart the loop. Below, num_episodes is set small. You should download the notebook and run lot more epsiodes.
Python Code: import gym import math import random import numpy as np import matplotlib import matplotlib.pyplot as plt from collections import namedtuple from itertools import count from copy import deepcopy from PIL import Image import torch import torch.nn as nn import torch.optim as optim import torch.autograd as autograd import torch.nn.functional as F import torchvision.transforms as T env = gym.make('CartPole-v0') is_ipython = 'inline' in matplotlib.get_backend() if is_ipython: from IPython import display Explanation: Reinforcement Learning (DQN) tutorial Author: Adam Paszke &lt;https://github.com/apaszke&gt;_ This tutorial shows how to use PyTorch to train a Deep Q Learning (DQN) agent on the CartPole-v0 task from the OpenAI Gym &lt;https://gym.openai.com/&gt;__. Task The agent has to decide between two actions - moving the cart left or right - so that the pole attached to it stays upright. You can find an official leaderboard with various algorithms and visualizations at the Gym website &lt;https://gym.openai.com/envs/CartPole-v0&gt;__. .. figure:: /_static/img/cartpole.gif :alt: cartpole cartpole As the agent observes the current state of the environment and chooses an action, the environment transitions to a new state, and also returns a reward that indicates the consequences of the action. In this task, the environment terminates if the pole falls over too far. The CartPole task is designed so that the inputs to the agent are 4 real values representing the environment state (position, velocity, etc.). However, neural networks can solve the task purely by looking at the scene, so we'll use a patch of the screen centered on the cart as an input. Because of this, our results aren't directly comparable to the ones from the official leaderboard - our task is much harder. Unfortunately this does slow down the training, because we have to render all the frames. Strictly speaking, we will present the state as the difference between the current screen patch and the previous one. This will allow the agent to take the velocity of the pole into account from one image. Packages First, let's import needed packages. Firstly, we need gym &lt;https://gym.openai.com/docs&gt;__ for the environment (Install using pip install gym). We'll also use the following from PyTorch: neural networks (torch.nn) optimization (torch.optim) automatic differentiation (torch.autograd) utilities for vision tasks (torchvision - a separate package &lt;https://github.com/pytorch/vision&gt;__). End of explanation # class Transition with tuples accessible by name with . operator (here name class=name instance) Transition = namedtuple('Transition', ('state', 'action', 'next_state', 'reward')) class ReplayMemory(object): def __init__(self, capacity): self.capacity = capacity self.memory = [] self.position = 0 def push(self, *args): Saves a transition. if len(self.memory) < self.capacity: self.memory.append(None) self.memory[self.position] = Transition(*args) self.position = (self.position + 1) % self.capacity def sample(self, batch_size): return random.sample(self.memory, batch_size) def __len__(self): return len(self.memory) Explanation: Replay Memory We'll be using experience replay memory for training our DQN. It stores the transitions that the agent observes, allowing us to reuse this data later. By sampling from it randomly, the transitions that build up a batch are decorrelated. It has been shown that this greatly stabilizes and improves the DQN training procedure. 
For this, we're going to need two classes: Transition - a named tuple representing a single transition in our environment ReplayMemory - a cyclic buffer of bounded size that holds the transitions observed recently. It also implements a .sample() method for selecting a random batch of transitions for training. End of explanation class DQN(nn.Module): def __init__(self): super(DQN, self).__init__() self.conv1 = nn.Conv2d(3, 16, kernel_size=5, stride=2) self.bn1 = nn.BatchNorm2d(16) self.conv2 = nn.Conv2d(16, 32, kernel_size=5, stride=2) self.bn2 = nn.BatchNorm2d(32) self.conv3 = nn.Conv2d(32, 32, kernel_size=5, stride=2) self.bn3 = nn.BatchNorm2d(32) #448 = 32 * H * W, where H and W are the height and width of image after all convolutions self.head = nn.Linear(448, 2) def forward(self, x): x = F.relu(self.bn1(self.conv1(x))) x = F.relu(self.bn2(self.conv2(x))) x = F.relu(self.bn3(self.conv3(x))) return self.head(x.view(x.size(0), -1)) # the size -1 is inferred from other dimensions # after first conv2d, size is Hin = 40; Win = 80; def dim_out(dim_in): ks = 5 stride = 2 return math.floor((dim_in-ks)/stride+1) HH=dim_out(dim_out(dim_out(Hin))) WW=dim_out(dim_out(dim_out(Win))) print(32*HH*WW) Explanation: Now, let's define our model. But first, let's quickly recap what a DQN is. DQN algorithm Our environment is deterministic, so all equations presented here are also formulated deterministically for the sake of simplicity. In the reinforcement learning literature, they would also contain expectations over stochastic transitions in the environment. Our aim will be to train a policy that tries to maximize the discounted, cumulative reward $R_{t_0} = \sum_{t=t_0}^{\infty} \gamma^{t - t_0} r_t$, where $R_{t_0}$ is also known as the return. The discount, $\gamma$, should be a constant between $0$ and $1$ that ensures the sum converges. It makes rewards from the uncertain far future less important for our agent than the ones in the near future that it can be fairly confident about. The main idea behind Q-learning is that if we had a function $Q^*: State \times Action \rightarrow \mathbb{R}$, that could tell us what our return would be, if we were to take an action in a given state, then we could easily construct a policy that maximizes our rewards: \begin{align}\pi^*(s) = \arg\max_a \ Q^*(s, a)\end{align} However, we don't know everything about the world, so we don't have access to $Q^*$. But, since neural networks are universal function approximators, we can simply create one and train it to resemble $Q^*$. For our training update rule, we'll use the fact that every $Q$ function for some policy obeys the Bellman equation: \begin{align}Q^{\pi}(s, a) = r + \gamma Q^{\pi}(s', \pi(s'))\end{align} The difference between the two sides of the equality is known as the temporal difference error, $\delta$: \begin{align}\delta = Q(s, a) - (r + \gamma \max_a Q(s', a))\end{align} To minimise this error, we will use the Huber loss <https://en.wikipedia.org/wiki/Huber_loss>__. The Huber loss acts like the mean squared error when the error is small, but like the mean absolute error when the error is large - this makes it more robust to outliers when the estimates of $Q$ are very noisy.
We calculate this over a batch of transitions, $B$, sampled from the replay memory: \begin{align}\mathcal{L} = \frac{1}{|B|}\sum_{(s, a, s', r) \ \in \ B} \mathcal{L}(\delta)\end{align} \begin{align}\text{where} \quad \mathcal{L}(\delta) = \begin{cases} \frac{1}{2}{\delta^2} & \text{for } |\delta| \le 1, \ |\delta| - \frac{1}{2} & \text{otherwise.} \end{cases}\end{align} Q-network ^^^^^^^^^ Our model will be a convolutional neural network that takes in the difference between the current and previous screen patches. It has two outputs, representing $Q(s, \mathrm{left})$ and $Q(s, \mathrm{right})$ (where $s$ is the input to the network). In effect, the network is trying to predict the quality of taking each action given the current input. End of explanation resize = T.Compose([T.ToPILImage(), T.Scale(40, interpolation=Image.CUBIC), T.ToTensor()]) # This is based on the code from gym. screen_width = 600 def get_cart_location(): world_width = env.unwrapped.x_threshold * 2 scale = screen_width / world_width return int(env.unwrapped.state[0] * scale + screen_width / 2.0) # MIDDLE OF CART def get_screen(): screen = env.render(mode='rgb_array').transpose( (2, 0, 1)) # transpose into torch order (CHW) # Strip off the top and bottom of the screen screen = screen[:, 160:320] view_width = 320 cart_location = get_cart_location() if cart_location < view_width // 2: slice_range = slice(view_width) elif cart_location > (screen_width - view_width // 2): slice_range = slice(-view_width, None) else: slice_range = slice(cart_location - view_width // 2, cart_location + view_width // 2) # Strip off the edges, so that we have a square image centered on a cart screen = screen[:, :, slice_range] # Convert to float, rescare, convert to torch tensor # (this doesn't require a copy) screen = np.ascontiguousarray(screen, dtype=np.float32) / 255 screen = torch.from_numpy(screen) # Resize, and add a batch dimension (BCHW) print(resize(screen).unsqueeze(0).size) return resize(screen).unsqueeze(0) env.reset() plt.imshow(get_screen().squeeze(0).permute( 1, 2, 0).numpy(), interpolation='none') plt.show() Explanation: Input extraction ^^^^^^^^^^^^^^^^ The code below are utilities for extracting and processing rendered images from the environment. It uses the torchvision package, which makes it easy to compose image transforms. Once you run the cell it will display an example patch that it extracted. End of explanation BATCH_SIZE = 128 GAMMA = 0.999 EPS_START = 0.9 EPS_END = 0.05 EPS_DECAY = 200 USE_CUDA = torch.cuda.is_available() model = DQN() memory = ReplayMemory(10000) optimizer = optim.RMSprop(model.parameters()) if USE_CUDA: model.cuda() class Variable(autograd.Variable): def __init__(self, data, *args, **kwargs): if USE_CUDA: data = data.cuda() super(Variable, self).__init__(data, *args, **kwargs) steps_done = 0 def select_action(state): global steps_done sample = random.random() eps_threshold = EPS_END + (EPS_START - EPS_END) * \ math.exp(-1. 
* steps_done / EPS_DECAY) steps_done += 1 if sample > eps_threshold: return model(Variable(state, volatile=True)).data.max(1)[1].cpu() else: return torch.LongTensor([[random.randrange(2)]]) episode_durations = [] def plot_durations(): plt.figure(1) plt.clf() durations_t = torch.Tensor(episode_durations) plt.xlabel('Episode') plt.ylabel('Duration') plt.plot(durations_t.numpy()) # Take 100 episode averages and plot them too if len(durations_t) >= 100: means = durations_t.unfold(0, 100, 1).mean(1).view(-1) means = torch.cat((torch.zeros(99), means)) plt.plot(means.numpy()) if is_ipython: display.clear_output(wait=True) display.display(plt.gcf()) Explanation: Training Hyperparameters and utilities ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ This cell instantiates our model and its optimizer, and defines some utilities: Variable - this is a simple wrapper around torch.autograd.Variable that will automatically send the data to the GPU every time we construct a Variable. select_action - will select an action accordingly to an epsilon greedy policy. Simply put, we'll sometimes use our model for choosing the action, and sometimes we'll just sample one uniformly. The probability of choosing a random action will start at EPS_START and will decay exponentially towards EPS_END. EPS_DECAY controls the rate of the decay. plot_durations - a helper for plotting the durations of episodes, along with an average over the last 100 episodes (the measure used in the official evaluations). The plot will be underneath the cell containing the main training loop, and will update after every episode. End of explanation last_sync = 0 def optimize_model(): global last_sync print("len<batch:",len(memory) < BATCH_SIZE) # if the memory is smaller than wanted, don't do anything and keep building memory if len(memory) < BATCH_SIZE: return transitions = memory.sample(BATCH_SIZE) # Transpose the batch (see http://stackoverflow.com/a/19343/3343043 for # detailed explanation). batch = Transition(*zip(*transitions)) # Compute a mask of non-final states and concatenate the batch elements non_final_mask = torch.ByteTensor( tuple(map(lambda s: s is not None, batch.next_state))) if USE_CUDA: non_final_mask = non_final_mask.cuda() # We don't want to backprop through the expected action values and volatile # will save us on temporarily changing the model parameters' # requires_grad to False! non_final_next_states = Variable(torch.cat([s for s in batch.next_state if s is not None]), volatile=True) state_batch = Variable(torch.cat(batch.state)) action_batch = Variable(torch.cat(batch.action)) reward_batch = Variable(torch.cat(batch.reward)) # Compute Q(s_t, a) - the model computes Q(s_t), then we select the # columns of actions taken print("In optimize: state_batch", state_batch.data.size()) state_action_values = model(state_batch).gather(1, action_batch) # Compute V(s_{t+1})=max_a Q(s_{t+1}, a) for all next states. next_state_values = Variable(torch.zeros(BATCH_SIZE)) next_state_values[non_final_mask] = model(non_final_next_states).max(1)[0] # Now, we don't want to mess up the loss with a volatile flag, so let's # clear it. 
After this, we'll just end up with a Variable that has # requires_grad=False next_state_values.volatile = False # Compute the expected Q values expected_state_action_values = (next_state_values * GAMMA) + reward_batch # Compute Huber loss loss = F.smooth_l1_loss(state_action_values, expected_state_action_values) # Optimize the model optimizer.zero_grad() loss.backward() for param in model.parameters(): param.grad.data.clamp_(-1, 1) optimizer.step() transitions = memory.sample(BATCH_SIZE) batch = Transition(*zip(*transitions)) non_final_next_states = Variable(torch.cat([s for s in batch.next_state if s is not None]), volatile=True) state_batch = Variable(torch.cat(batch.state)) action_batch = Variable(torch.cat(batch.action)) reward_batch = Variable(torch.cat(batch.reward)) #print(state_batch.data.size()) #print(action_batch.data.size()) #print(reward_batch.data.size()) x=state_batch x.view(x.size(0), -1) 40*80*3 Explanation: Training loop ^^^^^^^^^^^^^ Finally, the code for training our model. Here, you can find an optimize_model function that performs a single step of the optimization. It first samples a batch, concatenates all the tensors into a single one, computes $Q(s_t, a_t)$ and $V(s_{t+1}) = \max_a Q(s_{t+1}, a)$, and combines them into our loss. By definition we set $V(s) = 0$ if $s$ is a terminal state. End of explanation num_episodes = 1 for i_episode in range(num_episodes): # Initialize the environment and state env.reset() last_screen = get_screen() current_screen = get_screen() state = current_screen - last_screen for t in count(): print(t) # Select and perform an action action = select_action(state) _, reward, done, _ = env.step(action[0, 0]) reward = torch.Tensor([reward]) # Observe new state last_screen = current_screen current_screen = get_screen() if not done: next_state = current_screen - last_screen else: next_state = None # Store the transition in memory memory.push(state, action, next_state, reward) # Move to the next state state = next_state # Perform one step of the optimization (on the target network) optimize_model() if done: episode_durations.append(t + 1) #plot_durations() break Explanation: Below, you can find the main training loop. At the beginning we reset the environment and initialize the state variable. Then, we sample an action, execute it, observe the next screen and the reward (always 1), and optimize our model once. When the episode ends (our model fails), we restart the loop. Below, num_episodes is set small. You should download the notebook and run a lot more episodes. End of explanation
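As a quick sanity check on the exploration schedule defined above, the short sketch below evaluates the epsilon-greedy threshold used in select_action at a few step counts. It is illustrative only and simply reuses the EPS_START, EPS_END and EPS_DECAY constants from the notebook; the helper name eps_threshold is ours.
import math

EPS_START, EPS_END, EPS_DECAY = 0.9, 0.05, 200

def eps_threshold(steps_done):
    # Same decay formula as select_action: exponential anneal from EPS_START down to EPS_END
    return EPS_END + (EPS_START - EPS_END) * math.exp(-1. * steps_done / EPS_DECAY)

for steps in (0, 100, 200, 500, 1000):
    print(steps, round(eps_threshold(steps), 3))
# At step 0 the agent explores about 90% of the time; by roughly 1000 steps it is close to the 5% floor.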
5,255
Given the following text description, write Python code to implement the functionality described below step by step Description: Plotting the full vector-valued MNE solution The source space that is used for the inverse computation defines a set of dipoles, distributed across the cortex. When visualizing a source estimate, it is sometimes useful to show the dipole directions in addition to their estimated magnitude. This can be accomplished by computing a Step1: Plot the source estimate Step2: Plot the activation in the direction of maximal power for this data Step3: The normal is very similar Step4: You can also do this with a fixed-orientation inverse. It looks a lot like the result above because the loose=0.2 orientation constraint keeps sources close to fixed orientation
Python Code: # Author: Marijn van Vliet <[email protected]> # # License: BSD (3-clause) import numpy as np import mne from mne.datasets import sample from mne.minimum_norm import read_inverse_operator, apply_inverse print(__doc__) data_path = sample.data_path() subjects_dir = data_path + '/subjects' # Read evoked data fname_evoked = data_path + '/MEG/sample/sample_audvis-ave.fif' evoked = mne.read_evokeds(fname_evoked, condition=0, baseline=(None, 0)) # Read inverse solution fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif' inv = read_inverse_operator(fname_inv) # Apply inverse solution, set pick_ori='vector' to obtain a # :class:`mne.VectorSourceEstimate` object snr = 3.0 lambda2 = 1.0 / snr ** 2 stc = apply_inverse(evoked, inv, lambda2, 'dSPM', pick_ori='vector') # Use peak getter to move visualization to the time point of the peak magnitude _, peak_time = stc.magnitude().get_peak(hemi='lh') Explanation: Plotting the full vector-valued MNE solution The source space that is used for the inverse computation defines a set of dipoles, distributed across the cortex. When visualizing a source estimate, it is sometimes useful to show the dipole directions in addition to their estimated magnitude. This can be accomplished by computing a :class:mne.VectorSourceEstimate and plotting it with :meth:stc.plot &lt;mne.VectorSourceEstimate.plot&gt;, which uses :func:~mne.viz.plot_vector_source_estimates under the hood rather than :func:~mne.viz.plot_source_estimates. It can also be instructive to visualize the actual dipole/activation locations in 3D space in a glass brain, as opposed to activations imposed on an inflated surface (as typically done in :meth:mne.SourceEstimate.plot), as it allows you to get a better sense of the underlying source geometry. End of explanation brain = stc.plot( initial_time=peak_time, hemi='lh', subjects_dir=subjects_dir) # You can save a brain movie with: # brain.save_movie(time_dilation=20, tmin=0.05, tmax=0.16, framerate=10, # interpolation='linear', time_viewer=True) Explanation: Plot the source estimate: End of explanation stc_max, directions = stc.project('pca', src=inv['src']) # These directions must by design be close to the normals because this # inverse was computed with loose=0.2 print('Absolute cosine similarity between source normals and directions: ' f'{np.abs(np.sum(directions * inv["source_nn"][2::3], axis=-1)).mean()}') brain_max = stc_max.plot( initial_time=peak_time, hemi='lh', subjects_dir=subjects_dir, time_label='Max power') Explanation: Plot the activation in the direction of maximal power for this data: End of explanation brain_normal = stc.project('normal', inv['src'])[0].plot( initial_time=peak_time, hemi='lh', subjects_dir=subjects_dir, time_label='Normal') Explanation: The normal is very similar: End of explanation fname_inv_fixed = ( data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-fixed-inv.fif') inv_fixed = read_inverse_operator(fname_inv_fixed) stc_fixed = apply_inverse( evoked, inv_fixed, lambda2, 'dSPM', pick_ori='vector') brain_fixed = stc_fixed.plot( initial_time=peak_time, hemi='lh', subjects_dir=subjects_dir) Explanation: You can also do this with a fixed-orientation inverse. It looks a lot like the result above because the loose=0.2 orientation constraint keeps sources close to fixed orientation: End of explanation
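For intuition about what the magnitude of a vector source estimate represents, the sketch below applies the same idea to a synthetic array: each source has three orientation components per time point, and the magnitude is their vector norm. The array here is random stand-in data, not MNE output, and which of the three components corresponds to the surface normal depends on the source space, so treat the slicing as purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
vec_data = rng.standard_normal((5, 3, 4))      # (n_sources, 3 orientation components, n_times)

magnitude = np.linalg.norm(vec_data, axis=1)   # collapse the xyz components -> (n_sources, n_times)
one_component = vec_data[:, 2, :]              # a single fixed orientation, a stand-in for the normal
print(magnitude.shape, one_component.shape)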
5,256
Given the following text description, write Python code to implement the functionality described below step by step Description: <table style="width Step1: Use debugging tools throughout! Don't forget all the fun debugging tools we covered while you work on these exercises. %debug %pdb import q;q.d() And (if necessary) %prun Exercise 1 You'll notice that our dataset actually has two different files, pumps_train_values.csv and pumps_train_labels.csv. We want to load both of these together in a single DataFrame for our exploratory analysis. Create a function that Step4: Exercise 2 Now that we've loaded our data, we want to do some pre-processing before we model. From inspection of the data, we've noticed that there are some numeric values that are probably not valid that we want to replace. Select the relevant columns for modeling. For the purposes of this exercise, we'll select Step6: Exercise 3 Now that we've got a feature matrix, let's train a model! Add a function as defined below to the src/model/train_model.py The function should use sklearn.linear_model.LogisticRegression to train a logistic regression model. In a dataframe with categorical variables pd.get_dummies will do encoding that can be passed to sklearn. The LogisticRegression class in sklearn handles muticlass models automatically, so no need to use get_dummies on status_group. Finally, this method should return a GridSearchCV object that has been run with the following parameters for a logistic regression model
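The exercises above leave replace_value_with_grouped_mean as a skeleton, and the notebook below imports a preprocess_solution module that is not shown. One way that helper might be written is sketched here; it follows the skeleton's docstring but is not the course's official solution, and the example call at the end uses column names taken from the exercise text.
import pandas as pd

def replace_value_with_grouped_mean(df, value, column, to_groupby):
    # Per-group mean of `column`, computed with the invalid `value` excluded
    group_means = df.loc[df[column] != value].groupby(to_groupby)[column].mean()
    # Broadcast each row's group mean back onto the frame and overwrite only the invalid entries
    fill_values = df[to_groupby].map(group_means)
    out = df.copy()
    bad = out[column] == value
    out.loc[bad, column] = fill_values[bad]
    return out

# e.g. replace_value_with_grouped_mean(df, 0, 'population', 'region')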
Python Code: %matplotlib inline from __future__ import print_function import os import pandas as pd import matplotlib.pyplot as plt import seaborn as sns PROJ_ROOT = os.path.join(os.pardir, os.pardir) Explanation: <table style="width:100%; border: 0px solid black;"> <tr style="width: 100%; border: 0px solid black;"> <td style="width:75%; border: 0px solid black;"> <a href="http://www.drivendata.org"> <img src="https://s3.amazonaws.com/drivendata.org/kif-example/img/dd.png" /> </a> </td> </tr> </table> Data Science is Software Developer #lifehacks for the Jupyter Data Scientist Section 3: Refactoring for reusability End of explanation def load_pumps_data(values_path, labels_path): # YOUR CODE HERE pass values = os.path.join(PROJ_ROOT, "data", "raw", "pumps_train_values.csv") labels = os.path.join(PROJ_ROOT, "data", "raw", "pumps_train_labels.csv") df = load_pumps_data(values, labels) assert df.shape == (59400, 40) #SOLUTION def load_pumps_data(values_path, labels_path): train = pd.read_csv(values_path, index_col='id', parse_dates=["date_recorded"]) labels = pd.read_csv(labels_path, index_col='id') return train.join(labels) values = os.path.join(PROJ_ROOT, "data", "raw", "pumps_train_values.csv") labels = os.path.join(PROJ_ROOT, "data", "raw", "pumps_train_labels.csv") df = load_pumps_data(values, labels) assert df.shape == (59400, 40) Explanation: Use debugging tools throughout! Don't forget all the fun debugging tools we covered while you work on these exercises. %debug %pdb import q;q.d() And (if necessary) %prun Exercise 1 You'll notice that our dataset actually has two different files, pumps_train_values.csv and pumps_train_labels.csv. We want to load both of these together in a single DataFrame for our exploratory analysis. Create a function that: - Reads both of the csvs - uses the id column as the index - parses dates of the date_recorded columns - joins the labels and the training set on the id - returns the complete dataframe End of explanation def clean_raw_data(df): Takes a dataframe and performs four steps: - Selects columns for modeling - For numeric variables, replaces 0 values with mean for that region - Fills invalid construction_year values with the mean construction_year - Converts strings to categorical variables :param df: A raw dataframe that has been read into pandas :returns: A dataframe with the preprocessing performed. pass def replace_value_with_grouped_mean(df, value, column, to_groupby): For a given numeric value (e.g., 0) in a particular column, take the mean of column (excluding value) grouped by to_groupby and return that column with the value replaced by that mean. :param df: The dataframe to operate on. :param value: The value in column that should be replaced. :param column: The column in which replacements need to be made. :param to_groupby: Groupby this variable and take the mean of column. Replace value with the group's mean. 
:returns: The data frame with the invalid values replaced pass #SOLUTION # Load the "autoreload" extension %load_ext autoreload # always reload modules marked with "%aimport" %autoreload 1 import os import sys # add the 'src' directory as one where we can import modules src_dir = os.path.join(PROJ_ROOT, 'src') sys.path.append(src_dir) # import my method from the source code %aimport features.preprocess_solution from features.preprocess_solution import clean_raw_data cleaned_df = clean_raw_data(df) # verify construction year assert (cleaned_df.construction_year > 1000).all() # verify filled in other values for numeric_col in ["population", "longitude", "latitude"]: assert (cleaned_df[numeric_col] != 0).all() # verify the types are in the expected types assert (cleaned_df.dtypes .astype(str) .isin(["int64", "float64", "category"])).all() # check some actual values assert cleaned_df.latitude.mean() == -5.970642969008563 assert cleaned_df.longitude.mean() == 35.14119354200863 assert cleaned_df.population.mean() == 277.3070009774711 Explanation: Exercise 2 Now that we've loaded our data, we want to do some pre-processing before we model. From inspection of the data, we've noticed that there are some numeric values that are probably not valid that we want to replace. Select the relevant columns for modeling. For the purposes of this exercise, we'll select: useful_columns = ['amount_tsh', 'gps_height', 'longitude', 'latitude', 'region', 'population', 'construction_year', 'extraction_type_class', 'management_group', 'quality_group', 'source_type', 'waterpoint_type', 'status_group'] Replace longitude, and population where it is 0 with mean for that region. zero_is_bad_value = ['longitude', 'population'] Replace the latitude where it is -2E-8 (a different bad value) with the mean for that region. other_bad_value = ['latitude'] Replace construction_year less than 1000 with the mean construction year. Convert object type (i.e., string) variables to categoricals. Convert the label column into a categorical variable A skeleton for this work is below where clean_raw_data will call replace_value_with_grouped_mean internally. Copy and Paste the skeleton below into a Python file called preprocess.py in src/features/. Import and autoload the methods from that file to run tests on your changes in this notebook. End of explanation def logistic(df): Trains a multinomial logistic regression model to predict the status of a water pump given characteristics about the pump. :param df: The dataframe with the features and the label. :returns: A trained GridSearchCV classifier pass #SOLUTION #import my method from the source code %aimport model.train_model_solution from model.train_model_solution import logistic %%time clf = logistic(cleaned_df) assert clf.best_score_ > 0.5 # Just for fun, let's profile the whole stack and see what's slowest! %prun logistic(clean_raw_data(load_pumps_data(values, labels))) Explanation: Exercise 3 Now that we've got a feature matrix, let's train a model! Add a function as defined below to the src/model/train_model.py The function should use sklearn.linear_model.LogisticRegression to train a logistic regression model. In a dataframe with categorical variables pd.get_dummies will do encoding that can be passed to sklearn. The LogisticRegression class in sklearn handles muticlass models automatically, so no need to use get_dummies on status_group. 
Finally, this method should return a GridSearchCV object that has been run with the following parameters for a logistic regression model: params = {'C': [0.1, 1, 10]} End of explanation
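The train_model_solution module imported above is not part of the notebook, so here is a minimal sketch of what its logistic function might contain, assuming the cleaned dataframe from Exercise 2 and the parameter grid given in the exercise. Column and label names follow the notebook; everything else is an assumption.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

def logistic(df):
    # One-hot encode the categorical features; LogisticRegression handles the multiclass label itself
    X = pd.get_dummies(df.drop('status_group', axis=1))
    y = df['status_group']
    params = {'C': [0.1, 1, 10]}
    clf = GridSearchCV(LogisticRegression(), params)
    clf.fit(X, y)
    return clf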
5,257
Given the following text description, write Python code to implement the functionality described below step by step Description: 엔트로피 엔트로피(entropy)는 확률 변수가 담을 수 있는 정보의 양을 나타내는 값으로 다음과 같이 정의한다. 확률 변수 $X$가 이산 확률 변수이면 $$ H[X] = -\sum_{k=1}^K p(x_k) \log_2 p(x_k) $$ 확률 변수 $X$가 연속 확률 변수이면 $$ H[X] = -\int p(x) \log_2 p(x) \; dx $$ 이 식에서 $p(x)$는 확률 밀도(질량) 함수이다. 엔트로피 계산 예 실제로 엔트로피를 계산해 보자. 만약 이산 확률 변수가 1부터 8까지의 8개의 값 또는 클래스를 가질 수 있고 각각의 클래스에 대한 확률이 다음과 같다고 가정한다. $$ \Big{ \dfrac{1}{2}, \dfrac{1}{4}, \dfrac{1}{8}, \dfrac{1}{16}, \dfrac{1}{64}, \dfrac{1}{64}, \dfrac{1}{64}, \dfrac{1}{64} \Big} $$ 이 때의 엔트로피는 다음과 같다. $$ H = -\dfrac{1}{2}\log_2\dfrac{1}{2} -\dfrac{1}{4}\log_2\dfrac{1}{4} -\dfrac{1}{8}\log_2\dfrac{1}{8} -\dfrac{1}{16}\log_2\dfrac{1}{16} -\dfrac{4}{64}\log_2\dfrac{1}{64} = 2 $$ 만약 모든 가능한 값(클래스) $x_k$에 대해 $p(x_k) = 0$ 또는 $p(x_k) = 1$ 뿐이라면 엔트로피는 0 임을 알 수 있다. 이 경우는 사실 단 하나의 값만 나올 수 있는 경우이므로 정보가 없는 상수값이다. 만약 이산 확률 변수가 가질 수 있는 값(클래스)의 종류가 $2^K$이고 모두 같은 확률을 가진다면 $$ H = -\frac{2^K}{2^K}\log_2\dfrac{1}{2^K} = K $$ 이다. 즉, 엔트로피는 이산 확률 변수가 가질 수 있는 확률 변수가 동일한 값의 가짓수와 같다. (노트의 사례) Step1: 압축 방법 0000000100110... 10 -> 1bit p=0.5(베르누이 분포일 때) 가장 정보가 많은 경우 p=0.01 이라고 하면 0이 엄청나게 많고 1이 적은 경우 압축하는 방법은 0이 몇 번 나오는 지 쓴다. 예를 들어 0이 7번 나오고 1이 2번 나오고 다시 0이 10번 나온다는 식으로 줄여서 쓴다. 그러면 이 숫자를 다시 이진법으로 쓰는 방식 p=0.5일 경우에는 압축의 의미가 없다. 계속 0과 1이 번갈아 나오기 때문 원리는 그렇다. 확률 변수는 수치를 담고 있고 내가 얼마나 많은 정보를 가지고 있는 지를 판단할 수 있다. 표본 데이터가 주어진 경우 확률 변수 모형, 즉 이론적인 확률 밀도(질량) 함수가 아닌 실제 데이터가 주어진 경우에는 확률 밀도(질량) 함수를 추정하여 엔트로피를 계산한다. 예를 들어 데이터가 모두 80개가 있고 그 중 Y = 0 인 데이터가 40개, Y = 1인 데이터가 40개 있는 경우는 $$ P(y=0) = \dfrac{40}{80} = \dfrac{1}{2} $$ $$ P(y=1) = \dfrac{40}{80} = \dfrac{1}{2} $$ $$ H[Y] = -\dfrac{1}{2}\log_2\left(\dfrac{1}{2}\right) -\dfrac{1}{2}\log_2\left(\dfrac{1}{2}\right) = \dfrac{1}{2} + \dfrac{1}{2} = 1 $$ Step2: 만약 데이터가 모두 60개가 있고 그 중 Y= 0 인 데이터가 20개, Y = 1인 데이터가 40개 있는 경우는 $$ P(y=0) = \dfrac{20}{60} = \dfrac{1}{3} $$ $$ P(y=1) = \dfrac{40}{60} = \dfrac{2}{3} $$ $$ H[Y] = -\dfrac{1}{3}\log_2\left(\dfrac{1}{3}\right) -\dfrac{2}{3}\log_2\left(\dfrac{2}{3}\right) = 0.92 $$ Step3: 만약 데이터가 모두 40개가 있고 그 중 Y= 0 인 데이터가 30개, Y = 1인 데이터가 10개 있는 경우는 $$ P(y=0) = \dfrac{30}{40} = \dfrac{3}{4} $$ $$ P(y=1) = \dfrac{10}{40} = \dfrac{1}{4} $$ $$ H[Y] = -\dfrac{3}{4}\log_2\left(\dfrac{3}{4}\right) -\dfrac{1}{4}\log_2\left(\dfrac{1}{4}\right) = 0.81 $$ Step4: 만약 데이터가 모두 20개가 있고 그 중 Y= 0 인 데이터가 20개, Y = 1인 데이터가 0개 있는 경우는 $$ P(y=0) = \dfrac{20}{20} = 1 $$ $$ P(y=1) = \dfrac{0}{20} = 0 $$ $$ H[Y] \rightarrow 0 $$ 조건부 엔트로피 조건부 엔트로피는 다음과 같이 정의한다. $$ H[Y \mid X] = - \sum_i \sum_j \,p(x_i, y_j) \log_2 p(y_j \mid x_i) $$ $$ H[Y \mid X] = -\int \int p(x, y) \log_2 p(y \mid x) \; dxdy $$ 이 식은 조건부 확률 분포의 정의를 사용하여 다음과 같이 고칠 수 있다. $$ H[Y \mid X] = \sum_i \,p(x_i)\,H[Y \mid x_i] $$ $$ H[Y \mid X] = \int p(x)\,H[Y \mid x] \; dx $$ 위에는 X가 선택되지 않은 상황의 엔트로피의 가중합이고 아래는 X가 특정한 값이 선택된 경우의 가중합 (증명) $$ \begin{eqnarray} H[Y \mid X] &=& - \sum_i \sum_j \,p(x_i, y_j) \log_2 p(y_j \mid x_i) \ &=& - \sum_i \sum_j p(y_j \mid x_i) p(x_i) \log_2 p(y_j \mid x_i) \ &=& - \sum_i p(x_i) \sum_j p(y_j \mid x_i) \log_2 p(y_j \mid x_i) \ &=& \sum_i p(x_i) H[Y \mid x_i] \ \end{eqnarray} $$ 조건부 엔트로피와 결합 엔트로피는 다음과 같은 관계를 가진다. 
$$ H[ X, Y ] = H[Y \mid X] + H[X] $$ 조건부 엔트로피 계산의 예 예를 들어 데이터가 모두 80개가 있고 $X$, $Y$ 값이 다음과 같다고 하자 | | Y = 0 | Y = 1 | sum | |-|-|-|-| | X = 0 | 30 | 10 | 40 | | X = 1 | 10 | 30 | 40 | $$ H[Y \mid X ] = p(X=0)\,H[Y \mid X=0] + p(X=1)\,H[Y \mid X=1] = \dfrac{40}{80} \cdot 0.81 + \dfrac{40}{80} \cdot 0.81 = 0.81 $$ 만약 데이터가 모두 80개가 있고 $X$, $Y$ 값이 다음과 같다면 | | Y = 0 | Y = 1 | sum | |-|-|-|-| | X = 0 | 20 | 40 | 60 | | X = 1 | 20 | 0 | 20 | $$ H[Y \mid X ] = p(X=0)\,H[Y \mid X=0] + p(X=1)\,H[Y \mid X=1] = \dfrac{60}{80} \cdot 0.92 + \dfrac{20}{80} \cdot 0 = 0.69 $$ 실습 문제 <img src="5.png.jpg" stype="width
Python Code: -1/6*np.log2(1/6)*6 -1/2*np.log2(1/2)-1/4*np.log2(1/4)-1/8*np.log2(1/8)-1/16*np.log2(1/16)-1/32*np.log2(1/32)-1/32*np.log2(1/32) Explanation: 엔트로피 엔트로피(entropy)는 확률 변수가 담을 수 있는 정보의 양을 나타내는 값으로 다음과 같이 정의한다. 확률 변수 $X$가 이산 확률 변수이면 $$ H[X] = -\sum_{k=1}^K p(x_k) \log_2 p(x_k) $$ 확률 변수 $X$가 연속 확률 변수이면 $$ H[X] = -\int p(x) \log_2 p(x) \; dx $$ 이 식에서 $p(x)$는 확률 밀도(질량) 함수이다. 엔트로피 계산 예 실제로 엔트로피를 계산해 보자. 만약 이산 확률 변수가 1부터 8까지의 8개의 값 또는 클래스를 가질 수 있고 각각의 클래스에 대한 확률이 다음과 같다고 가정한다. $$ \Big{ \dfrac{1}{2}, \dfrac{1}{4}, \dfrac{1}{8}, \dfrac{1}{16}, \dfrac{1}{64}, \dfrac{1}{64}, \dfrac{1}{64}, \dfrac{1}{64} \Big} $$ 이 때의 엔트로피는 다음과 같다. $$ H = -\dfrac{1}{2}\log_2\dfrac{1}{2} -\dfrac{1}{4}\log_2\dfrac{1}{4} -\dfrac{1}{8}\log_2\dfrac{1}{8} -\dfrac{1}{16}\log_2\dfrac{1}{16} -\dfrac{4}{64}\log_2\dfrac{1}{64} = 2 $$ 만약 모든 가능한 값(클래스) $x_k$에 대해 $p(x_k) = 0$ 또는 $p(x_k) = 1$ 뿐이라면 엔트로피는 0 임을 알 수 있다. 이 경우는 사실 단 하나의 값만 나올 수 있는 경우이므로 정보가 없는 상수값이다. 만약 이산 확률 변수가 가질 수 있는 값(클래스)의 종류가 $2^K$이고 모두 같은 확률을 가진다면 $$ H = -\frac{2^K}{2^K}\log_2\dfrac{1}{2^K} = K $$ 이다. 즉, 엔트로피는 이산 확률 변수가 가질 수 있는 확률 변수가 동일한 값의 가짓수와 같다. (노트의 사례) End of explanation -1/2*np.log2(1/2)-1/2*np.log2(1/2) # 이럴 경우에는 아까 말한 0,1 두 개가 똑같아서 압축을 해도 의미가 없는 경우다. Explanation: 압축 방법 0000000100110... 10 -> 1bit p=0.5(베르누이 분포일 때) 가장 정보가 많은 경우 p=0.01 이라고 하면 0이 엄청나게 많고 1이 적은 경우 압축하는 방법은 0이 몇 번 나오는 지 쓴다. 예를 들어 0이 7번 나오고 1이 2번 나오고 다시 0이 10번 나온다는 식으로 줄여서 쓴다. 그러면 이 숫자를 다시 이진법으로 쓰는 방식 p=0.5일 경우에는 압축의 의미가 없다. 계속 0과 1이 번갈아 나오기 때문 원리는 그렇다. 확률 변수는 수치를 담고 있고 내가 얼마나 많은 정보를 가지고 있는 지를 판단할 수 있다. 표본 데이터가 주어진 경우 확률 변수 모형, 즉 이론적인 확률 밀도(질량) 함수가 아닌 실제 데이터가 주어진 경우에는 확률 밀도(질량) 함수를 추정하여 엔트로피를 계산한다. 예를 들어 데이터가 모두 80개가 있고 그 중 Y = 0 인 데이터가 40개, Y = 1인 데이터가 40개 있는 경우는 $$ P(y=0) = \dfrac{40}{80} = \dfrac{1}{2} $$ $$ P(y=1) = \dfrac{40}{80} = \dfrac{1}{2} $$ $$ H[Y] = -\dfrac{1}{2}\log_2\left(\dfrac{1}{2}\right) -\dfrac{1}{2}\log_2\left(\dfrac{1}{2}\right) = \dfrac{1}{2} + \dfrac{1}{2} = 1 $$ End of explanation -1/3*np.log2(1/3)-2/3*np.log2(2/3) Explanation: 만약 데이터가 모두 60개가 있고 그 중 Y= 0 인 데이터가 20개, Y = 1인 데이터가 40개 있는 경우는 $$ P(y=0) = \dfrac{20}{60} = \dfrac{1}{3} $$ $$ P(y=1) = \dfrac{40}{60} = \dfrac{2}{3} $$ $$ H[Y] = -\dfrac{1}{3}\log_2\left(\dfrac{1}{3}\right) -\dfrac{2}{3}\log_2\left(\dfrac{2}{3}\right) = 0.92 $$ End of explanation -3/4*np.log2(3/4)-1/4*np.log2(1/4) Explanation: 만약 데이터가 모두 40개가 있고 그 중 Y= 0 인 데이터가 30개, Y = 1인 데이터가 10개 있는 경우는 $$ P(y=0) = \dfrac{30}{40} = \dfrac{3}{4} $$ $$ P(y=1) = \dfrac{10}{40} = \dfrac{1}{4} $$ $$ H[Y] = -\dfrac{3}{4}\log_2\left(\dfrac{3}{4}\right) -\dfrac{1}{4}\log_2\left(\dfrac{1}{4}\right) = 0.81 $$ End of explanation -(25/100 * (20/25 * np.log2(20/25) + 5/25 * np.log2(5/25)) + 75/100 * (25/75 * np.log2(25/75) + 50/75 * np.log2(50/75))) Explanation: 만약 데이터가 모두 20개가 있고 그 중 Y= 0 인 데이터가 20개, Y = 1인 데이터가 0개 있는 경우는 $$ P(y=0) = \dfrac{20}{20} = 1 $$ $$ P(y=1) = \dfrac{0}{20} = 0 $$ $$ H[Y] \rightarrow 0 $$ 조건부 엔트로피 조건부 엔트로피는 다음과 같이 정의한다. $$ H[Y \mid X] = - \sum_i \sum_j \,p(x_i, y_j) \log_2 p(y_j \mid x_i) $$ $$ H[Y \mid X] = -\int \int p(x, y) \log_2 p(y \mid x) \; dxdy $$ 이 식은 조건부 확률 분포의 정의를 사용하여 다음과 같이 고칠 수 있다. 
$$ H[Y \mid X] = \sum_i \,p(x_i)\,H[Y \mid x_i] $$ $$ H[Y \mid X] = \int p(x)\,H[Y \mid x] \; dx $$ 위에는 X가 선택되지 않은 상황의 엔트로피의 가중합이고 아래는 X가 특정한 값이 선택된 경우의 가중합 (증명) $$ \begin{eqnarray} H[Y \mid X] &=& - \sum_i \sum_j \,p(x_i, y_j) \log_2 p(y_j \mid x_i) \ &=& - \sum_i \sum_j p(y_j \mid x_i) p(x_i) \log_2 p(y_j \mid x_i) \ &=& - \sum_i p(x_i) \sum_j p(y_j \mid x_i) \log_2 p(y_j \mid x_i) \ &=& \sum_i p(x_i) H[Y \mid x_i] \ \end{eqnarray} $$ 조건부 엔트로피와 결합 엔트로피는 다음과 같은 관계를 가진다. $$ H[ X, Y ] = H[Y \mid X] + H[X] $$ 조건부 엔트로피 계산의 예 예를 들어 데이터가 모두 80개가 있고 $X$, $Y$ 값이 다음과 같다고 하자 | | Y = 0 | Y = 1 | sum | |-|-|-|-| | X = 0 | 30 | 10 | 40 | | X = 1 | 10 | 30 | 40 | $$ H[Y \mid X ] = p(X=0)\,H[Y \mid X=0] + p(X=1)\,H[Y \mid X=1] = \dfrac{40}{80} \cdot 0.81 + \dfrac{40}{80} \cdot 0.81 = 0.81 $$ 만약 데이터가 모두 80개가 있고 $X$, $Y$ 값이 다음과 같다면 | | Y = 0 | Y = 1 | sum | |-|-|-|-| | X = 0 | 20 | 40 | 60 | | X = 1 | 20 | 0 | 20 | $$ H[Y \mid X ] = p(X=0)\,H[Y \mid X=0] + p(X=1)\,H[Y \mid X=1] = \dfrac{60}{80} \cdot 0.92 + \dfrac{20}{80} \cdot 0 = 0.69 $$ 실습 문제 <img src="5.png.jpg" stype="width:60%; margin: 0 auto 0 auto;"> End of explanation
5,258
Given the following text description, write Python code to implement the functionality described below step by step Description: Beta Hedging By Evgenia "Jenny" Nitishinskaya and Delaney Granizo-Mackenzie with example algorithms by David Edwards Part of the Quantopian Lecture Series Step1: Now we can perform the regression to find $\alpha$ and $\beta$ Step2: If we plot the line $\alpha + \beta X$, we can see that it does indeed look like the line of best fit Step3: Risk Exposure More generally, this beta gets at the concept of how much risk exposure you take on by holding an asset. If an asset has a high beta exposure to the S&P 500, then while it will do very well while the market is rising, it will do very poorly when the market falls. A high beta corresponds to high speculative risk. You are taking out a more volatile bet. At Quantopian, we value stratgies that have negligible beta exposure to as many factors as possible. What this means is that all of the returns in a strategy lie in the $\alpha$ portion of the model, and are independent of other factors. This is highly desirable, as it means that the strategy is agnostic to market conditions. It will make money equally well in a crash as it will during a bull market. These strategies are the most attractive to individuals with huge cash pools such as endowments and soverign wealth funds. Risk Management The process of reducing exposure to other factors is known as risk management. Hedging is one of the best ways to perform risk management in practice. Hedging If we determine that our portfolio's returns are dependent on the market via this relation $$Y_{portfolio} = \alpha + \beta X_{SPY}$$ then we can take out a short position in SPY to try to cancel out this risk. The amount we take out is $-\beta V$ where $V$ is the total value of our portfolio. This works because if our returns are approximated by $\alpha + \beta X_{SPY}$, then adding a short in SPY will make our new returns be $\alpha + \beta X_{SPY} - \beta X_{SPY} = \alpha$. Our returns are now purely alpha, which is independent of SPY and will suffer no risk exposure to the market. Market Neutral When a stragy exhibits a consistent beta of 0, we say that this strategy is market neutral. Problems with Estimation The problem here is that the beta we estimated is not necessarily going to stay the same as we walk forward in time. As such the amount of short we took out in the SPY may not perfectly hedge our portfolio, and in practice it is quite difficult to reduce beta by a significant amount. We will talk more about problems with estimating parameters in future lectures. In short, each estimate has a stardard error that corresponds with how stable the estimate is within the observed data. Implementing hedging Now that we know how much to hedge, let's see how it affects our returns. We will build our portfolio using the asset and the benchmark, weighing the benchmark by $-\beta$ (negative since we are short in it). Step4: It looks like the portfolio return follows the asset alone fairly closely. We can quantify the difference in their performances by computing the mean returns and the volatilities (standard deviations of returns) for both Step5: We've decreased volatility at the expense of some returns. Let's check that the alpha is the same as before, while the beta has been eliminated Step6: Note that we developed our hedging strategy using historical data. 
We can check that it is still valid out of sample by checking the alpha and beta values of the asset and the hedged portfolio in a different time frame
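Before the notebook's implementation below, the position-sizing arithmetic described above (short -beta * V of the benchmark so that only alpha remains) in a few lines with made-up numbers, purely for illustration.
import numpy as np

beta = 1.5                                   # hypothetical beta estimate from the regression step
portfolio_value = 100000.0
hedge_notional = -beta * portfolio_value     # dollars of SPY to short: -beta * V
print(hedge_notional)                        # -150000.0

r_spy = np.array([0.01, -0.02, 0.005])       # toy benchmark returns
r_asset = 0.001 + beta * r_spy               # an asset that follows alpha + beta * X exactly
print(r_asset - beta * r_spy)                # ~0.001 each day: only alpha is left after hedging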
Python Code: # Import libraries import numpy as np from statsmodels import regression import statsmodels.api as sm import matplotlib.pyplot as plt import math # Get data for the specified period and stocks start = '2014-01-01' end = '2015-01-01' asset = get_pricing('TSLA', fields='price', start_date=start, end_date=end) benchmark = get_pricing('SPY', fields='price', start_date=start, end_date=end) # We have to take the percent changes to get to returns # Get rid of the first (0th) element because it is NAN r_a = asset.pct_change()[1:] r_b = benchmark.pct_change()[1:] # Let's plot them just for fun r_a.plot() r_b.plot() plt.ylabel("Daily Return") plt.legend(); Explanation: Beta Hedging By Evgenia "Jenny" Nitishinskaya and Delaney Granizo-Mackenzie with example algorithms by David Edwards Part of the Quantopian Lecture Series: www.quantopian.com/lectures github.com/quantopian/research_public Factor Models Factor models are a way of explaining the returns of one asset via a linear combination of the returns of other assets. The general form of a factor model is $$Y = \alpha + \beta_1 X_1 + \beta_2 X_2 + \dots + \beta_n X_n$$ This looks familiar, as it is exactly the model type that a linear regression fits. The $X$'s can also be indicators rather than assets. An example might be a analyst estimation. What is Beta? An asset's beta to another asset is just the $\beta$ from the above model. For instance, if we regressed TSLA against the S&P 500 using the model $Y_{TSLA} = \alpha + \beta X$, then TSLA's beta exposure to the S&P 500 would be that beta. If we used the model $Y_{TSLA} = \alpha + \beta X_{SPY} + \beta X_{AAPL}$, then we now have two betas, one is TSLA's exposure to the S&P 500 and one is TSLA's exposure to AAPL. Often "beta" will refer to a stock's beta exposure to the S&P 500. We will use it to mean that unless otherwise specified. End of explanation # Let's define everything in familiar regression terms X = r_b.values # Get just the values, ignore the timestamps Y = r_a.values def linreg(x,y): # We add a constant so that we can also fit an intercept (alpha) to the model # This just adds a column of 1s to our data x = sm.add_constant(x) model = regression.linear_model.OLS(y,x).fit() # Remove the constant now that we're done x = x[:, 1] return model.params[0], model.params[1] alpha, beta = linreg(X,Y) print 'alpha: ' + str(alpha) print 'beta: ' + str(beta) Explanation: Now we can perform the regression to find $\alpha$ and $\beta$: End of explanation X2 = np.linspace(X.min(), X.max(), 100) Y_hat = X2 * beta + alpha plt.scatter(X, Y, alpha=0.3) # Plot the raw data plt.xlabel("SPY Daily Return") plt.ylabel("TSLA Daily Return") # Add the regression line, colored in red plt.plot(X2, Y_hat, 'r', alpha=0.9); Explanation: If we plot the line $\alpha + \beta X$, we can see that it does indeed look like the line of best fit: End of explanation # Construct a portfolio with beta hedging portfolio = -1*beta*r_b + r_a portfolio.name = "TSLA + Hedge" # Plot the returns of the portfolio as well as the asset by itself portfolio.plot(alpha=0.9) r_b.plot(alpha=0.5); r_a.plot(alpha=0.5); plt.ylabel("Daily Return") plt.legend(); Explanation: Risk Exposure More generally, this beta gets at the concept of how much risk exposure you take on by holding an asset. If an asset has a high beta exposure to the S&P 500, then while it will do very well while the market is rising, it will do very poorly when the market falls. A high beta corresponds to high speculative risk. You are taking out a more volatile bet. 
At Quantopian, we value stratgies that have negligible beta exposure to as many factors as possible. What this means is that all of the returns in a strategy lie in the $\alpha$ portion of the model, and are independent of other factors. This is highly desirable, as it means that the strategy is agnostic to market conditions. It will make money equally well in a crash as it will during a bull market. These strategies are the most attractive to individuals with huge cash pools such as endowments and soverign wealth funds. Risk Management The process of reducing exposure to other factors is known as risk management. Hedging is one of the best ways to perform risk management in practice. Hedging If we determine that our portfolio's returns are dependent on the market via this relation $$Y_{portfolio} = \alpha + \beta X_{SPY}$$ then we can take out a short position in SPY to try to cancel out this risk. The amount we take out is $-\beta V$ where $V$ is the total value of our portfolio. This works because if our returns are approximated by $\alpha + \beta X_{SPY}$, then adding a short in SPY will make our new returns be $\alpha + \beta X_{SPY} - \beta X_{SPY} = \alpha$. Our returns are now purely alpha, which is independent of SPY and will suffer no risk exposure to the market. Market Neutral When a stragy exhibits a consistent beta of 0, we say that this strategy is market neutral. Problems with Estimation The problem here is that the beta we estimated is not necessarily going to stay the same as we walk forward in time. As such the amount of short we took out in the SPY may not perfectly hedge our portfolio, and in practice it is quite difficult to reduce beta by a significant amount. We will talk more about problems with estimating parameters in future lectures. In short, each estimate has a stardard error that corresponds with how stable the estimate is within the observed data. Implementing hedging Now that we know how much to hedge, let's see how it affects our returns. We will build our portfolio using the asset and the benchmark, weighing the benchmark by $-\beta$ (negative since we are short in it). End of explanation print "means: ", portfolio.mean(), r_a.mean() print "volatilities: ", portfolio.std(), r_a.std() Explanation: It looks like the portfolio return follows the asset alone fairly closely. We can quantify the difference in their performances by computing the mean returns and the volatilities (standard deviations of returns) for both: End of explanation P = portfolio.values alpha, beta = linreg(X,P) print 'alpha: ' + str(alpha) print 'beta: ' + str(beta) Explanation: We've decreased volatility at the expense of some returns. 
Let's check that the alpha is the same as before, while the beta has been eliminated: End of explanation # Get the alpha and beta estimates over the last year start = '2014-01-01' end = '2015-01-01' asset = get_pricing('TSLA', fields='price', start_date=start, end_date=end) benchmark = get_pricing('SPY', fields='price', start_date=start, end_date=end) r_a = asset.pct_change()[1:] r_b = benchmark.pct_change()[1:] X = r_b.values Y = r_a.values historical_alpha, historical_beta = linreg(X,Y) print 'Asset Historical Estimate:' print 'alpha: ' + str(historical_alpha) print 'beta: ' + str(historical_beta) # Get data for a different time frame: start = '2015-01-01' end = '2015-06-01' asset = get_pricing('TSLA', fields='price', start_date=start, end_date=end) benchmark = get_pricing('SPY', fields='price', start_date=start, end_date=end) # Repeat the process from before to compute alpha and beta for the asset r_a = asset.pct_change()[1:] r_b = benchmark.pct_change()[1:] X = r_b.values Y = r_a.values alpha, beta = linreg(X,Y) print 'Asset Out of Sample Estimate:' print 'alpha: ' + str(alpha) print 'beta: ' + str(beta) # Create hedged portfolio and compute alpha and beta portfolio = -1*historical_beta*r_b + r_a P = portfolio.values alpha, beta = linreg(X,P) print 'Portfolio Out of Sample:' print 'alpha: ' + str(alpha) print 'beta: ' + str(beta) # Plot the returns of the portfolio as well as the asset by itself portfolio.name = "TSLA + Hedge" portfolio.plot(alpha=0.9) r_a.plot(alpha=0.5); r_b.plot(alpha=0.5) plt.ylabel("Daily Return") plt.legend(); Explanation: Note that we developed our hedging strategy using historical data. We can check that it is still valid out of sample by checking the alpha and beta values of the asset and the hedged portfolio in a different time frame: End of explanation
5,259
Given the following text description, write Python code to implement the functionality described below step by step Description: Resampling documentation Step1: create a time series that includes a simple pattern Step2: Downsample the series into 3 minute bins and sum the values of the timestamps falling into a bin Step3: Downsample the series into 3 minute bins as above, but label each bin using the right edge instead of the left Notice the difference in the time indices; the sum in each bin is the same Step4: Downsample the series into 3 minute bins as above, but close the right side of the bin interval "count backwards" from end of time series Step5: Upsample the series into 30 second bins asfreq() Step6: define a custom function to use with resampling Step7: apply custom resampling function
Python Code: # min: minutes my_index = pd.date_range('9/1/2016', periods=9, freq='min') my_index Explanation: Resampling documentation: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.resample.html For arguments to 'freq' parameter, please see Offset Aliases create a date range to use as an index End of explanation my_series = pd.Series(np.arange(9), index=my_index) my_series Explanation: create a time series that includes a simple pattern End of explanation my_series.resample('3min').sum() Explanation: Downsample the series into 3 minute bins and sum the values of the timestamps falling into a bin End of explanation my_series.resample('3min', label='right').sum() Explanation: Downsample the series into 3 minute bins as above, but label each bin using the right edge instead of the left Notice the difference in the time indices; the sum in each bin is the same End of explanation my_series.resample('3min', label='right', closed='right').sum() Explanation: Downsample the series into 3 minute bins as above, but close the right side of the bin interval "count backwards" from end of time series End of explanation #select first 5 rows my_series.resample('30S').asfreq()[0:5] Explanation: Upsample the series into 30 second bins asfreq() End of explanation def custom_arithmetic(array_like): temp = 3 * np.sum(array_like) + 5 return temp Explanation: define a custom function to use with resampling End of explanation my_series.resample('3min').apply(custom_arithmetic) Explanation: apply custom resampling function End of explanation
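Two further resample variations on the same series round out the examples; they are illustrative and use only the standard pandas API, nothing defined elsewhere in the notebook.
import numpy as np
import pandas as pd

my_index = pd.date_range('9/1/2016', periods=9, freq='min')
my_series = pd.Series(np.arange(9), index=my_index)

# Upsample to 30-second bins and forward-fill instead of leaving NaNs
print(my_series.resample('30S').ffill()[0:5])

# Downsample to 3-minute bins, keeping open/high/low/close per bin
print(my_series.resample('3min').ohlc())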
5,260
Given the following text description, write Python code to implement the functionality described below step by step Description: LDA and NMF on New Job-Skill Matrix Step1: LDA and NMF Global arguments Step2: Trainning LDA Step3: Evaluation of LDA on test set by perplexity Step4: Save LDA models Step5: Assignning skill clusters to job posts The clusters are top-$k$ clusters where we either + fix $k$ OR + choose $k$ for each JD such that the cumulative prob of $k$ clusters is larger than a certain threshold. Step6: Cluster assignment analysis We want to see when the cluster assignment to job post is clear or fuzzy. The former (latter) means that we the list of top clusters assigned to the post has at most 3 clusters (more than 3 clusters) respectively. First, we look at those posts with clear assignment Step7: These posts contain lots of skills. Only 25% of them contain no more than 31 skills in each post, so each of the remaining 75% contains at least 31 skills. We can contrast this quartile with the skill distribution in all job posts below. Step8: Examples of clear vs. fuzzy assignment can be seen in result file. Cluster assignment statistics Step9: Correlation between n_top_cluster and n_skill in job posts We can roughly divide job posts into 4 following groups based on the above quartile Step10: Box plot of mixture size Step11: The box plot reveals the following Step12: Probability of top cluster Step13: NMF Step14: Building TF-IDF matrix Need to proceed like LDA i.e. we need to calculate tfidf for trigram skills, remove them, then calculate tfidf for bigram skills, remove then calculate tfidf for unigram skills. Step15: Training Step16: Save models Step17: Evaluation Step18: Model Comparison
Python Code: import ja_helpers as ja_helpers; from ja_helpers import * HOME_DIR = 'd:/larc_projects/job_analytics/'; DATA_DIR = HOME_DIR + 'data/clean/' RES_DIR = HOME_DIR + 'results/skill_cluster/new/' skill_df = pd.read_csv(DATA_DIR + 'skill_index.csv') doc_skill = mmread(DATA_DIR + 'doc_skill.mtx') skills = skill_df['skill'] print('# skills from the skill index: %d' %len(skills)) n_doc = doc_skill.shape[0]; n_skill = doc_skill.shape[1] print ('# skills in matrix doc-skill: %d' %n_skill) print('# documents in matrix doc-skill: %d' %n_doc) ## May not be needed # doc_index = pd.read_csv(DATA_DIR + 'doc_index.csv') # jd_docs = doc_index['doc']; print('# JDs: %d' %len(jd_docs)) Explanation: LDA and NMF on New Job-Skill Matrix End of explanation ks = range(15, 35, 5) # ks = [15] n_top_words = 10 Explanation: LDA and NMF Global arguments: no. of topics: k in {5, 10, ..., 20} no. of top words to be printed out in result End of explanation print('# docs: {}, # skills: {}'.format(n_doc, n_skill)) in_train, in_test = mkPartition(n_doc, p=80) doc_skill = doc_skill.tocsr() lda_X_train, lda_X_test = doc_skill[in_train, :], doc_skill[in_test, :] beta = 0.1 # or 200/W lda = trainLDA(beta, ks, trainning_set=lda_X_train) LDA_DIR = RES_DIR + 'lda/' ks = [20, 30] for k in ks: doc_topic_distr = lda[k].transform(doc_skill) fname = RES_DIR + 'doc_{}topic_distr.mtx'.format(k) with(open(fname, 'w')) as f: mmwrite(f, doc_topic_distr) Explanation: Trainning LDA End of explanation perp_df = testLDA(lda, ks, test_set=lda_X_test) perp_df perp_df.to_csv(LDA_DIR + 'perplexity.csv', index=False) Explanation: Evaluation of LDA on test set by perplexity End of explanation for k in [25, 30]: # word_dist = pd.DataFrame(lda[k].components_).apply(normalize, axis=1) # word_dist.to_csv(LDA_DIR + 'lda_word_dist_{}topics.csv'.format(k), index=False) lda_topics = top_words_df(n_top_words=10, model=lda[k], feature_names=skills) lda_topics.to_csv(LDA_DIR + '{}topics.csv'.format(k), index=False) ks = range(5, 25, 5) for k in ks: topic_word_dist = lda[k].components_ fname = LDA_DIR + 'word_dist_{}_topics.mtx'.format(k) with(open(fname, 'w')) as f: mmwrite(f, topic_word_dist) # nrow = topic_word_dist.shape[0] # for r in range(nrow): # f.write(topic_word_dist[r, :]) Explanation: Save LDA models End of explanation clusters = pd.read_csv(LDA_DIR + 'cluster.csv')['cluster'] n_cluster = len(clusters) doc_index.shape doc_index.head() doc_index.to_csv(DATA_DIR + 'doc_index.csv', index=False) doc_topic_distr = lda[15].transform(doc_skill) with(open(LDA_DIR + 'doc_topic_distr.mtx', 'w')) as f: mmwrite(f, doc_topic_distr) thres = 0.4 # 0.5 t0 = time() # doc_index['top_clusters'] = doc_index.apply(getTopTopics_GT, axis=1, doc_topic_distr=doc_topic_distr, thres=0.5) # doc_index['n_top_cluster_40'] = doc_index.apply(getTopTopics_GT, axis=1, doc_topic_distr=doc_topic_distr, thres=thres) doc_index['prob_top_cluster'] = doc_index.apply(getTopTopicProb, axis=1, doc_topic_distr=doc_topic_distr) print('Done after %.1fs' %(time() - t0)) res = doc_index.query('n_skill >= 2') res.sort_values('n_skill', ascending=False, inplace=True) print('No. 
of JDs in result: %d' %res.shape[0]) res.head() n_sample = 100 res.head(n_sample).to_csv(LDA_DIR + 'new/cluster_100top_docs.csv', index=False) res.tail(n_sample).to_csv(LDA_DIR + 'new/cluster_100bottom_docs.csv', index=False) # res.to_csv(LDA_DIR + 'new/cluster_assign2.csv', index=False) res.rename(columns={'n_top_cluster_40': 'n_top_cluster'}, inplace=True) Explanation: Assignning skill clusters to job posts The clusters are top-$k$ clusters where we either + fix $k$ OR + choose $k$ for each JD such that the cumulative prob of $k$ clusters is larger than a certain threshold. End of explanation clear_assign = res.query('n_top_cluster <= 3'); fuzzy_assign = res.query('n_top_cluster > 3') print('# posts with clear assignment: %d' %clear_assign.shape[0]) print('Distribution of skills in these posts:') quantile(clear_assign['n_skill']) Explanation: Cluster assignment analysis We want to see when the cluster assignment to job post is clear or fuzzy. The former (latter) means that we the list of top clusters assigned to the post has at most 3 clusters (more than 3 clusters) respectively. First, we look at those posts with clear assignment: End of explanation print('Distribution of skills in all posts:') quantile(res['n_skill']) fig = plotSkillDist(res) plt.savefig(LDA_DIR + 'fig/n_skill_hist.jpg') plt.show(); plt.close() Explanation: These posts contain lots of skills. Only 25% of them contain no more than 31 skills in each post, so each of the remaining 75% contains at least 31 skills. We can contrast this quartile with the skill distribution in all job posts below. End of explanation res = pd.read_csv(LDA_DIR + 'new/cluster_assign.csv') res.describe().round(2) Explanation: Examples of clear vs. fuzzy assignment can be seen in result file. Cluster assignment statistics End of explanation g1 = res.query('n_skill < 7'); g2 = res.query('n_skill >= 7 & n_skill < 12') g3 = res.query('n_skill >= 12 & n_skill < 18'); g4 = res.query('n_skill >= 18') print('# posts in 4 groups:'); print(','.join([str(g1.shape[0]), str(g2.shape[0]), str(g3.shape[0]), str(g4.shape[0])])) Explanation: Correlation between n_top_cluster and n_skill in job posts We can roughly divide job posts into 4 following groups based on the above quartile: + G1: $ 2 \le $ n_skill $ \le 7 $; G2: $ 7 < $ n_skill $ \le 12 $ + G3: $ 12 < $ n_skill $ \le 18 $; G4: $ 18 < $ n_skill $ \le 115 $ End of explanation bp = mixtureSizePlot(g1, g2, g3, g4) plt.savefig(LDA_DIR + 'fig/boxplot_mixture_size.pdf'); plt.show(); plt.close() Explanation: Box plot of mixture size End of explanation thres = 0.4 fig = errorBarPlot(res, thres=thres) plt.savefig(LDA_DIR + 'fig/mixture_size_thres{}.jpg'.format(int(thres*100))) plt.show(); plt.close() Explanation: The box plot reveals the following: the median mixture size decreases when we have more skills in job post. This is expected as more skills should give clearer assignment. when $ 2 \le $ n_skill $ \le 7 $ and $ 12 < $ n_skill $ \le 18 $, the mixture size is resp 7 and 6 most of the time. Error bar plot of mixture size End of explanation fig = topClusterProbPlot(g1, g2, g3, g4) plt.savefig(LDA_DIR + 'fig/top_cluster_prob.jpg') plt.show(); plt.close() Explanation: Probability of top cluster End of explanation NMF_DIR = RES_DIR + 'new/nmf/' Explanation: NMF End of explanation ## TODO tf_idf_vect = text_manip.TfidfVectorizer(vocabulary=skills, ngram_range=(1, max_n_word)) n_instance, n_feat = posts.shape[0], len(skills) t0 =time() print('Building tf_idf for %d JDs using %d features (skills)...' 
%(n_instance, n_feat)) doc_skill_tfidf = tf_idf_vect.fit_transform(posts['clean_text']) print('Done after %.1fs' %(time()-t0)) Explanation: Building TF-IDF matrix Need to proceed like LDA i.e. we need to calculate tfidf for trigram skills, remove them, then calculate tfidf for bigram skills, remove then calculate tfidf for unigram skills. End of explanation rnmf = {k: NMF(n_components=k, random_state=0) for k in ks} print( "Fitting NMF using random initialization..." ) print('No. of topics, Error, Running time') rnmf_error = [] for k in ks: t0 = time() rnmf[k].fit(X_train) elapsed = time() - t0 err = rnmf[k].reconstruction_err_ print('%d, %0.1f, %0.1fs' %(k, err, elapsed)) rnmf_error.append(err) # end Explanation: Training End of explanation nmf_features = tf_idf_vect.get_feature_names() pd.DataFrame(nmf_features).to_csv(RES_DIR + 'nmf_features.csv', index=False) for k in ks: top_words = top_words_df(n_top_words, model=rnmf[k],feature_names=nmf_features) top_words.to_csv(RES_DIR + 'nmf_{}_topics.csv'.format(k), index=False) # each word dist is a component in NMF word_dist = pd.DataFrame(rnmf[k].components_).apply(normalize, axis=1) word_dist.to_csv(RES_DIR + 'nmf_word_dist_{}topics.csv'.format(k), index=False) Explanation: Save models: End of explanation print('Calculating test errors of random NMF ...') rnmf_test_error = cal_test_err(mf_models=rnmf) best_k = ks[np.argmin(rnmf_test_error)] print('The best no. of topics is %d' %best_k) rnmf_best = rnmf[best_k] nmf_fig = plotMetrics(train_metric=rnmf_error, test_metric=rnmf_test_error, model_name='NMF') nmf_fig.savefig(RES_DIR + 'nmf.pdf') plt.close(nmf_fig) Explanation: Evaluation End of explanation # Put all model metrics on training & test datasets into 2 data frames model_list = ['LDA', 'randomNMF'] train_metric = pd.DataFrame({'No. of topics': ks, 'LDA': np.divide(lda_scores, 10**6), 'randomNMF': rnmf_error}) test_metric = pd.DataFrame({'No. of topics': ks, 'LDA': perp, 'randomNMF': rnmf_test_error, }) fig = plt.figure(figsize=(10, 6)) for i, model in enumerate(model_list): plt.subplot(2, 2, i+1) plt.subplots_adjust(wspace=.5, hspace=.5) # train metric plt.title(model) plt.plot(ks, train_metric[model], '--') plt.xlabel('No. of topics') if model == 'LDA': plt.ylabel(r'Log likelihood ($\times 10^6$)') else: plt.ylabel(r'$\| X_{train} - W_{train} H \|_2$') plt.grid(True) plt.xticks(ks) # test metric plt.subplot(2, 2, i+3) plt.title(model) plt.plot(ks, test_metric[model], 'r') plt.xlabel('No. of topics') if model == 'LDA': plt.ylabel(r'Perplexity') else: plt.ylabel(r'$\| X_{test} - W_{test} H \|_2$') plt.grid(True) plt.xticks(ks) # end plt.show() fig.savefig(RES_DIR + 'lda_vs_nmf.pdf') plt.close(fig) Explanation: Model Comparison End of explanation
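The ja_helpers module used above (mkPartition, trainLDA, testLDA, and so on) is not included, so the sketch below shows roughly what trainLDA and testLDA might do with scikit-learn, assuming beta is passed as the topic-word prior as in the call above. The (misspelled) trainning_set keyword is kept so the notebook's call would still match; none of this is the author's actual helper code.
from time import time
from sklearn.decomposition import LatentDirichletAllocation

def trainLDA(beta, ks, trainning_set):
    # One LDA model per candidate number of topics, keyed by k (the notebook indexes lda[k])
    models = {}
    for k in ks:
        t0 = time()
        model = LatentDirichletAllocation(n_components=k, topic_word_prior=beta,
                                          learning_method='online', random_state=0)
        model.fit(trainning_set)
        models[k] = model
        print('k=%d fitted in %.1fs' % (k, time() - t0))
    return models

def testLDA(lda, ks, test_set):
    import pandas as pd
    # Held-out perplexity per k; lower is better
    perp = [lda[k].perplexity(test_set) for k in ks]
    return pd.DataFrame({'No. of topics': ks, 'Perplexity': perp})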
5,261
Given the following text description, write Python code to implement the functionality described below step by step Description: Cleaning Your Data Let's take a web access log, and figure out the most-viewed pages on a website from it! Sounds easy, right? Let's set up a regex that lets us parse an Apache access log line Step1: Here's the full path to the log file I'm analyzing; change this if you want to run this stuff yourself Step2: Now we'll whip up a little script to extract the URL in each access, and use a dictionary to count up the number of times each one appears. Then we'll sort it and print out the top 20 pages. What could go wrong? Step3: Hm. The 'request' part of the line is supposed to look something like this Step4: Huh. In addition to empty fields, there's one that just contains garbage. Well, let's modify our script to check for that case Step5: It worked! But, the results don't really make sense. What we really want is pages accessed by real humans looking for news from our little news site. What the heck is xmlrpc.php? A look at the log itself turns up a lot of entries like this Step6: That's starting to look better. But, this is a news site - are people really reading the little blog on it instead of news pages? That doesn't make sense. Let's look at a typical /blog/ entry in the log Step7: Yikes! In addition to '-', there are also a million different web robots accessing the site and polluting my data. Filtering out all of them is really hard, but getting rid of the ones significantly polluting my data in this case should be a matter of getting rid of '-', anything containing "bot" or "spider", and W3 Total Cache. Step8: Now, our new problem is that we're getting a bunch of hits on things that aren't web pages. We're not interested in those, so let's filter out any URL that doesn't end in / (all of the pages on my site are accessed in that manner - again this is applying knowledge about my data to the analysis!)
Python Code: import re format_pat= re.compile( r"(?P<host>[\d\.]+)\s" r"(?P<identity>\S*)\s" r"(?P<user>\S*)\s" r"\[(?P<time>.*?)\]\s" r'"(?P<request>.*?)"\s' r"(?P<status>\d+)\s" r"(?P<bytes>\S*)\s" r'"(?P<referer>.*?)"\s' r'"(?P<user_agent>.*?)"\s*' ) Explanation: Cleaning Your Data Let's take a web access log, and figure out the most-viewed pages on a website from it! Sounds easy, right? Let's set up a regex that lets us parse an Apache access log line: End of explanation logPath = "E:\\sundog-consult\\Udemy\\DataScience\\access_log.txt" Explanation: Here's the full path to the log file I'm analyzing; change this if you want to run this stuff yourself: End of explanation URLCounts = {} with open(logPath, "r") as f: for line in (l.rstrip() for l in f): match= format_pat.match(line) if match: access = match.groupdict() request = access['request'] (action, URL, protocol) = request.split() if URLCounts.has_key(URL): URLCounts[URL] = URLCounts[URL] + 1 else: URLCounts[URL] = 1 results = sorted(URLCounts, key=lambda i: int(URLCounts[i]), reverse=True) for result in results[:20]: print(result + ": " + str(URLCounts[result])) Explanation: Now we'll whip up a little script to extract the URL in each access, and use a dictionary to count up the number of times each one appears. Then we'll sort it and print out the top 20 pages. What could go wrong? End of explanation URLCounts = {} with open(logPath, "r") as f: for line in (l.rstrip() for l in f): match= format_pat.match(line) if match: access = match.groupdict() request = access['request'] fields = request.split() if (len(fields) != 3): print(fields) Explanation: Hm. The 'request' part of the line is supposed to look something like this: GET /blog/ HTTP/1.1 There should be an HTTP action, the URL, and the protocol. But it seems that's not always happening. Let's print out requests that don't contain three items: End of explanation URLCounts = {} with open(logPath, "r") as f: for line in (l.rstrip() for l in f): match= format_pat.match(line) if match: access = match.groupdict() request = access['request'] fields = request.split() if (len(fields) == 3): URL = fields[1] if URLCounts.has_key(URL): URLCounts[URL] = URLCounts[URL] + 1 else: URLCounts[URL] = 1 results = sorted(URLCounts, key=lambda i: int(URLCounts[i]), reverse=True) for result in results[:20]: print(result + ": " + str(URLCounts[result])) Explanation: Huh. In addition to empty fields, there's one that just contains garbage. Well, let's modify our script to check for that case: End of explanation URLCounts = {} with open(logPath, "r") as f: for line in (l.rstrip() for l in f): match= format_pat.match(line) if match: access = match.groupdict() request = access['request'] fields = request.split() if (len(fields) == 3): (action, URL, protocol) = fields if (action == 'GET'): if URLCounts.has_key(URL): URLCounts[URL] = URLCounts[URL] + 1 else: URLCounts[URL] = 1 results = sorted(URLCounts, key=lambda i: int(URLCounts[i]), reverse=True) for result in results[:20]: print(result + ": " + str(URLCounts[result])) Explanation: It worked! But, the results don't really make sense. What we really want is pages accessed by real humans looking for news from our little news site. What the heck is xmlrpc.php? 
A look at the log itself turns up a lot of entries like this: 46.166.139.20 - - [05/Dec/2015:05:19:35 +0000] "POST /xmlrpc.php HTTP/1.0" 200 370 "-" "Mozilla/4.0 (compatible: MSIE 7.0; Windows NT 6.0)" I'm not entirely sure what the script does, but it points out that we're not just processing GET actions. We don't want POSTS, so let's filter those out: End of explanation UserAgents = {} with open(logPath, "r") as f: for line in (l.rstrip() for l in f): match= format_pat.match(line) if match: access = match.groupdict() agent = access['user_agent'] if UserAgents.has_key(agent): UserAgents[agent] = UserAgents[agent] + 1 else: UserAgents[agent] = 1 results = sorted(UserAgents, key=lambda i: int(UserAgents[i]), reverse=True) for result in results: print(result + ": " + str(UserAgents[result])) Explanation: That's starting to look better. But, this is a news site - are people really reading the little blog on it instead of news pages? That doesn't make sense. Let's look at a typical /blog/ entry in the log: 54.165.199.171 - - [05/Dec/2015:09:32:05 +0000] "GET /blog/ HTTP/1.0" 200 31670 "-" "-" Hm. Why is the user agent blank? Seems like some sort of malicious scraper or something. Let's figure out what user agents we are dealing with: End of explanation URLCounts = {} with open(logPath, "r") as f: for line in (l.rstrip() for l in f): match= format_pat.match(line) if match: access = match.groupdict() agent = access['user_agent'] if (not('bot' in agent or 'spider' in agent or 'Bot' in agent or 'Spider' in agent or 'W3 Total Cache' in agent or agent =='-')): request = access['request'] fields = request.split() if (len(fields) == 3): (action, URL, protocol) = fields if (action == 'GET'): if URLCounts.has_key(URL): URLCounts[URL] = URLCounts[URL] + 1 else: URLCounts[URL] = 1 results = sorted(URLCounts, key=lambda i: int(URLCounts[i]), reverse=True) for result in results[:20]: print(result + ": " + str(URLCounts[result])) Explanation: Yikes! In addition to '-', there are also a million different web robots accessing the site and polluting my data. Filtering out all of them is really hard, but getting rid of the ones significantly polluting my data in this case should be a matter of getting rid of '-', anything containing "bot" or "spider", and W3 Total Cache. End of explanation URLCounts = {} with open(logPath, "r") as f: for line in (l.rstrip() for l in f): match= format_pat.match(line) if match: access = match.groupdict() agent = access['user_agent'] if (not('bot' in agent or 'spider' in agent or 'Bot' in agent or 'Spider' in agent or 'W3 Total Cache' in agent or agent =='-')): request = access['request'] fields = request.split() if (len(fields) == 3): (action, URL, protocol) = fields if (URL.endswith("/")): if (action == 'GET'): if URLCounts.has_key(URL): URLCounts[URL] = URLCounts[URL] + 1 else: URLCounts[URL] = 1 results = sorted(URLCounts, key=lambda i: int(URLCounts[i]), reverse=True) for result in results[:20]: print(result + ": " + str(URLCounts[result])) Explanation: Now, our new problem is that we're getting a bunch of hits on things that aren't web pages. We're not interested in those, so let's filter out any URL that doesn't end in / (all of the pages on my site are accessed in that manner - again this is applying knowledge about my data to the analysis!) End of explanation
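One caveat with the scripts above is that dict.has_key only exists in Python 2. A hedged Python 3 sketch of the same counting and filtering idea, using collections.Counter and a placeholder log path (the simplified regex only captures the quoted request field, and the user-agent filtering is left out for brevity):

import re
from collections import Counter

request_pat = re.compile(r'"(?P<request>.*?)"')   # first quoted field is the request
counts = Counter()

with open("access_log.txt") as f:                 # placeholder path
    for line in f:
        match = request_pat.search(line)
        if not match:
            continue
        fields = match.group("request").split()
        # keep only well-formed GET requests for page URLs ending in "/"
        if len(fields) == 3 and fields[0] == "GET" and fields[1].endswith("/"):
            counts[fields[1]] += 1

for url, hits in counts.most_common(20):
    print(url, hits)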
5,262
Given the following text description, write Python code to implement the functionality described below step by step Description: Built-In and custom scoring functions Using built-in scoring functions Step1: Binary confusion matrix Step2: Scorers for cross-validation and grid-search Step3: Defining your own scoring callable From scratch Step4: From a score function Step5: Accessing the estimator
Python Code: from sklearn.datasets import make_classification from sklearn.cross_validation import train_test_split X, y = make_classification(random_state=0) X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0) from sklearn.linear_model import LogisticRegression lr = LogisticRegression() lr.fit(X_train, y_train) print(lr.score(X_test, y_test)) pred = lr.predict(X_test) from sklearn.metrics import confusion_matrix print(confusion_matrix(y_test, pred)) Explanation: Built-In and custom scoring functions Using built-in scoring functions End of explanation from sklearn.metrics import classification_report print(classification_report(y_test, pred)) from sklearn.metrics import precision_score, f1_score print("precision: %f f1_score: %f" % (precision_score(y_test, pred), f1_score(y_test, pred))) from sklearn.metrics import roc_auc_score, average_precision_score, log_loss probs = lr.predict_proba(X_test)[:, 1] print("area under the roc_curve: %f" % roc_auc_score(y_test, probs)) print("average precision: %f" % average_precision_score(y_test, probs)) print("log loss: %f" % log_loss(y_test, probs)) Explanation: Binary confusion matrix: <table> <tr><td>True Positive (TP)</td><td>False Negative (FN) </td></tr> <tr><td>False Positive (FP) </td><td>True Negative (TN) </td></tr> </table> $$ \text{precision} = \frac{TP}{FP + TP} $$ $$ \text{recall} = \frac{TP}{FN + TP} $$ $$ \text{accuracy} = \frac{TP + TN}{FP + FN + TP + TN} $$ $$ f_1 = 2 \frac{\text{precision} \cdot \text{recall}}{\text{precision} + \text{recall}} $$ End of explanation from sklearn.metrics.scorer import SCORERS print(SCORERS.keys()) from sklearn.cross_validation import cross_val_score cross_val_score(LogisticRegression(), X, y) print("Accuracy scoring: %s" % cross_val_score(LogisticRegression(), X, y, scoring="accuracy")) print("F1 scoring: %s" % cross_val_score(LogisticRegression(), X, y, scoring="f1")) print("AUC scoring: %s" % cross_val_score(LogisticRegression(), X, y, scoring="roc_auc")) print("Log loss scoring: %s" % cross_val_score(LogisticRegression(), X, y, scoring="log_loss")) from sklearn.grid_search import GridSearchCV param_grid = {'C': np.logspace(start=-3, stop=3, num=10)} grid_search = GridSearchCV(LogisticRegression(), param_grid, scoring="log_loss") grid_search.fit(X, y) grid_search.grid_scores_ grid_search.best_params_ Explanation: Scorers for cross-validation and grid-search End of explanation def my_accuracy_scoring(est, X, y): return np.mean(est.predict(X) == y) print(cross_val_score(LogisticRegression(), X, y)) print(cross_val_score(LogisticRegression(), X, y, scoring=my_accuracy_scoring)) Explanation: Defining your own scoring callable From scratch End of explanation from sklearn.metrics import fbeta_score fbeta_score(y_test, pred, beta=10) from sklearn.metrics.scorer import make_scorer my_fbeta_scorer = make_scorer(fbeta_score, beta=10) print(cross_val_score(LogisticRegression(), X, y, scoring=my_fbeta_scorer)) Explanation: From a score function End of explanation def my_sparse_scoring(est, X, y): return np.mean(est.predict(X) == y) - np.mean(est.coef_ != 0) from sklearn.grid_search import GridSearchCV from sklearn.svm import LinearSVC grid = GridSearchCV(LinearSVC(C=.01, dual=False), param_grid={'penalty' : ['l1', 'l2']}, scoring=my_sparse_scoring) grid.fit(X, y) print(grid.best_params_) Explanation: Accessing the estimator End of explanation
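One case the examples above do not cover is wrapping a loss, where lower is better, so that cross-validation and grid search can still maximize the score. A small sketch, assuming a recent scikit-learn (model_selection rather than the older cross_validation module):

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import make_scorer, mean_absolute_error
from sklearn.model_selection import cross_val_score

X, y = make_classification(random_state=0)
# greater_is_better=False flips the sign, so a higher score still means a better model
mae_scorer = make_scorer(mean_absolute_error, greater_is_better=False)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, scoring=mae_scorer)
print(scores)   # values come out negative because of the sign flip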
5,263
Given the following text description, write Python code to implement the functionality described below step by step Description: Flowers Image Classification with TensorFlow on Cloud ML Engine This notebook demonstrates how to do image classification from scratch on a flowers dataset using the Estimator API. Step1: Input functions to read JPEG images The key difference between this notebook and the MNIST one is in the input function. In the input function here, we are doing the following Step2: Now, let's do it on ML Engine. Note the --model parameter Step3: Monitoring training with TensorBoard Use this cell to launch tensorboard Step4: Here are my results Step5: To predict with the model, let's take one of the example images that is available on Google Cloud Storage <img src="http Step6: Send it to the prediction service
Python Code: import os PROJECT = "cloud-training-demos" # REPLACE WITH YOUR PROJECT ID BUCKET = "cloud-training-demos-ml" # REPLACE WITH YOUR BUCKET NAME REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1 MODEL_TYPE = "cnn" # do not change these os.environ["PROJECT"] = PROJECT os.environ["BUCKET"] = BUCKET os.environ["REGION"] = REGION os.environ["MODEL_TYPE"] = MODEL_TYPE os.environ["TFVERSION"] = "1.13" # Tensorflow version %%bash gcloud config set project $PROJECT gcloud config set compute/region $REGION Explanation: Flowers Image Classification with TensorFlow on Cloud ML Engine This notebook demonstrates how to do image classification from scratch on a flowers dataset using the Estimator API. End of explanation %%bash rm -rf flowersmodel.tar.gz flowers_trained gcloud ml-engine local train \ --module-name=flowersmodel.task \ --package-path=${PWD}/flowersmodel \ -- \ --output_dir=${PWD}/flowers_trained \ --train_steps=5 \ --learning_rate=0.01 \ --batch_size=2 \ --model=$MODEL_TYPE \ --augment \ --train_data_path=gs://cloud-ml-data/img/flower_photos/train_set.csv \ --eval_data_path=gs://cloud-ml-data/img/flower_photos/eval_set.csv Explanation: Input functions to read JPEG images The key difference between this notebook and the MNIST one is in the input function. In the input function here, we are doing the following: * Reading JPEG images, rather than 2D integer arrays. * Reading in batches of batch_size images rather than slicing our in-memory structure to be batch_size images. * Resizing the images to the expected HEIGHT, WIDTH. Because this is a real-world dataset, the images are of different sizes. We need to preprocess the data to, at the very least, resize them to constant size. Run as a Python module Let's first run it locally for a short while to test the code works. Note the --model parameter End of explanation %%bash OUTDIR=gs://${BUCKET}/flowers/trained_${MODEL_TYPE} JOBNAME=flowers_${MODEL_TYPE}_$(date -u +%y%m%d_%H%M%S) echo $OUTDIR $REGION $JOBNAME gsutil -m rm -rf $OUTDIR gcloud ml-engine jobs submit training $JOBNAME \ --region=$REGION \ --module-name=flowersmodel.task \ --package-path=${PWD}/flowersmodel \ --job-dir=$OUTDIR \ --staging-bucket=gs://$BUCKET \ --scale-tier=BASIC_GPU \ --runtime-version=$TFVERSION \ -- \ --output_dir=$OUTDIR \ --train_steps=1000 \ --learning_rate=0.01 \ --batch_size=40 \ --model=$MODEL_TYPE \ --augment \ --batch_norm \ --train_data_path=gs://cloud-ml-data/img/flower_photos/train_set.csv \ --eval_data_path=gs://cloud-ml-data/img/flower_photos/eval_set.csv Explanation: Now, let's do it on ML Engine. Note the --model parameter End of explanation from google.datalab.ml import TensorBoard TensorBoard().start("gs://{}/flowers/trained_{}".format(BUCKET, MODEL_TYPE)) for pid in TensorBoard.list()["pid"]: TensorBoard().stop(pid) print("Stopped TensorBoard with pid {}".format(pid)) Explanation: Monitoring training with TensorBoard Use this cell to launch tensorboard End of explanation %%bash MODEL_NAME="flowers" MODEL_VERSION=${MODEL_TYPE} MODEL_LOCATION=$(gsutil ls gs://${BUCKET}/flowers/trained_${MODEL_TYPE}/export/exporter | tail -1) echo "Deleting and deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION ... 
this will take a few minutes" #gcloud ml-engine versions delete --quiet ${MODEL_VERSION} --model ${MODEL_NAME} #gcloud ml-engine models delete ${MODEL_NAME} gcloud ml-engine models create ${MODEL_NAME} --regions $REGION gcloud ml-engine versions create ${MODEL_VERSION} --model ${MODEL_NAME} --origin ${MODEL_LOCATION} --runtime-version=$TFVERSION Explanation: Here are my results: Model | Accuracy | Time taken | Run time parameters --- | :---: | --- cnn with batch-norm | 0.582 | 47 min | 1000 steps, LR=0.01, Batch=40 as above, plus augment | 0.615 | 3 hr | 5000 steps, LR=0.01, Batch=40 Deploying and predicting with model Deploy the model: End of explanation %%bash IMAGE_URL=gs://cloud-ml-data/img/flower_photos/sunflowers/1022552002_2b93faf9e7_n.jpg # Copy the image to local disk. gsutil cp $IMAGE_URL flower.jpg # Base64 encode and create request message in json format. python -c 'import base64, sys, json; img = base64.b64encode(open("flower.jpg", "rb").read()).decode(); print(json.dumps({"image_bytes":{"b64": img}}))' &> request.json Explanation: To predict with the model, let's take one of the example images that is available on Google Cloud Storage <img src="http://storage.googleapis.com/cloud-ml-data/img/flower_photos/sunflowers/1022552002_2b93faf9e7_n.jpg" /> The online prediction service expects images to be base64 encoded as described here. End of explanation %%bash gcloud ml-engine predict \ --model=flowers \ --version=${MODEL_TYPE} \ --json-instances=./request.json Explanation: Send it to the prediction service End of explanation
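The request encoding done with python -c in the bash cell above can also be written as a plain Python script. A sketch, assuming a local flower.jpg and the same {"image_bytes": {"b64": ...}} request shape expected by the prediction service:

import base64
import json

with open("flower.jpg", "rb") as f:
    encoded = base64.b64encode(f.read()).decode()

with open("request.json", "w") as out:
    json.dump({"image_bytes": {"b64": encoded}}, out)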
5,264
Given the following text description, write Python code to implement the functionality described below step by step Description: <h1>Table of Contents<span class="tocSkip"></span></h1> <div class="toc"><ul class="toc-item"><li><span><a href="#Using-Turtle-graphics Step1: Import everything from Turtle graphics Step2: Import floor and vector from Free Games Step3: Declare a dictionary with only one entry Step4: Instance Turtle object in instance path. Set is an invisible Step5: Another instance Step6: The maze Step7: See Step8: See Step9: See
Python Code: from random import choice choice([1,2,3]) choice([1,2,3]) Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1> <div class="toc"><ul class="toc-item"><li><span><a href="#Using-Turtle-graphics:-a-Tkinter-based-turtle-graphics-module-for-Python" data-toc-modified-id="Using-Turtle-graphics:-a-Tkinter-based-turtle-graphics-module-for-Python-1"><span class="toc-item-num">1&nbsp;&nbsp;</span>Using <a href="https://docs.python.org/3.6/library/turtle.html" target="_blank">Turtle graphics</a>: a <a href="https://wiki.python.org/moin/TkInter" target="_blank">Tkinter</a>-based turtle graphics module for Python</a></span><ul class="toc-item"><li><span><a href="#PacMan" data-toc-modified-id="PacMan-1.1"><span class="toc-item-num">1.1&nbsp;&nbsp;</span><a href="https://github.com/grantjenks/free-python-games/blob/master/freegames/pacman.py" target="_blank">PacMan</a></a></span></li><li><span><a href="#Exercises-(proposed-by-the-author):" data-toc-modified-id="Exercises-(proposed-by-the-author):-1.2"><span class="toc-item-num">1.2&nbsp;&nbsp;</span>Exercises (proposed by the author):</a></span></li></ul></li></ul></div> Using Turtle graphics: a Tkinter-based turtle graphics module for Python PacMan Example extracted from https://github.com/grantjenks/free-python-games. Import <a href="https://docs.python.org/3.6/library/random.html#functions-for-sequences)">choice</a> from Turtle graphics: End of explanation from turtle import * Explanation: Import everything from Turtle graphics: End of explanation from freegames import floor, vector # Install freegames with 'pip install freegames' floor(1,10) # value to floor, the floor floor(9,10) floor(11,10) floor(-1,10) floor(3,2) import numpy as np import matplotlib.pyplot as plt %matplotlib inline v1 = vector(1, 2) v2 = v1.copy() v2.move(1) print(v1, v2) plt.figure() ax = plt.gca() ax.quiver((0,0), (0,0), (v1.x, v2.x), (v1.y, v2.y), angles='xy', scale_units='xy', scale=1) ax.set_xlim([-5, 5]) ax.set_ylim([-5, 5]) ax.set_xticks(np.arange(-5, 5, 1)) ax.set_yticks(np.arange(-5, 5, 1)) plt.grid() plt.draw() plt.show() v1 = vector(1,2) v2 = v1.copy() v2.rotate(90) print(v1, v2) plt.figure() ax = plt.gca() ax.quiver((0,0), (0,0), (v1.x, v2.x), (v1.y, v2.y), angles='xy', scale_units='xy', scale=1) ax.set_xlim([-5, 5]) ax.set_ylim([-5, 5]) ax.set_xticks(np.arange(-5, 5, 1)) ax.set_yticks(np.arange(-5, 5, 1)) plt.grid() plt.draw() plt.show() Explanation: Import floor and vector from Free Games: End of explanation state = {'score': 0} 'score' in state state['score'] Explanation: Declare a dictionary with only one entry: End of explanation path = Turtle(visible=False) Explanation: Instance Turtle object in instance path. 
Set is an invisible: End of explanation writer = Turtle(visible=False) aim = vector(5, 0) pacman = vector(-40, -80) ghosts = [ [vector(-180, 160), vector(5, 0)], [vector(-180, -160), vector(0, 5)], [vector(100, 160), vector(0, -5)], [vector(100, -160), vector(-5, 0)], ] type(ghosts) type(ghosts[0]) type(ghosts[0][0]) Explanation: Another instance: End of explanation tiles = [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ] len(tiles) type(tiles) import matplotlib.pyplot as plt import numpy as np plt.matshow(np.array(tiles).reshape(20, 20)) plt.show() def square(x, y): "Draw square using path at (x, y)." path.up() path.goto(x, y) path.down() path.begin_fill() for count in range(4): path.forward(20) path.left(90) path.end_fill() square(5,7) path.isvisible() path.showturtle() def offset(point): "Return offset of point in tiles." x = (floor(point.x, 20) + 200) / 20 y = (180 - floor(point.y, 20)) / 20 index = int(x + y * 20) return index print(offset(vector(2,3))) def valid(point): "Return True if point is valid in tiles." index = offset(point) if tiles[index] == 0: return False index = offset(point + 19) if tiles[index] == 0: return False return point.x % 20 == 0 or point.y % 20 == 0 print(valid(vector(2,3))) Explanation: The maze: End of explanation def world(): "Draw world using path." bgcolor('black') path.color('blue') for index in range(len(tiles)): tile = tiles[index] if tile > 0: x = (index % 20) * 20 - 200 y = 180 - (index // 20) * 20 square(x, y) print(x, y, " ",) if tile == 1: path.up() path.goto(x + 10, y + 10) path.dot(2, 'white') #path.speed(10) #world() Explanation: See: * turtle.bgcolor(): Sets or return background color of the TurtleScreen. * turtle.color(): Returns or set pencolor and fillcolor. * turtle.up(): Pulls the pen up – no drawing when moving. * turtle.goto(): Moves turtle to an absolute position. * turtle.dot(): Draws a circular dot with diameter size, using color. End of explanation def move(): "Move pacman and all ghosts." 
writer.undo() writer.write(state['score']) clear() if valid(pacman + aim): pacman.move(aim) index = offset(pacman) if tiles[index] == 1: tiles[index] = 2 state['score'] += 1 x = (index % 20) * 20 - 200 y = 180 - (index // 20) * 20 square(x, y) up() goto(pacman.x + 10, pacman.y + 10) dot(20, 'yellow') for point, course in ghosts: if valid(point + course): point.move(course) else: options = [ vector(5, 0), vector(-5, 0), vector(0, 5), vector(0, -5), ] plan = choice(options) course.x = plan.x course.y = plan.y up() goto(point.x + 10, point.y + 10) dot(20, 'red') update() for point, course in ghosts: if abs(pacman - point) < 20: return ontimer(move, 100) #move() def change(x, y): "Change pacman aim if valid." if valid(pacman + vector(x, y)): aim.x = x aim.y = y Explanation: See: * turtle.undo(): Undoes (repeatedly) the last turtle action(s). * turtle.write(): Writes text - the string representation of arg - at the current turtle position. * turtle.clear(): Deletes the turtle’s drawings from the screen. End of explanation setup(420, 420, 370, 0) hideturtle() tracer(False) writer.goto(160, 160) writer.color('white') writer.write(state['score']) listen() onkey(lambda: change(5, 0), 'Right') onkey(lambda: change(-5, 0), 'Left') onkey(lambda: change(0, 5), 'Up') onkey(lambda: change(0, -5), 'Down') world() move() done() Explanation: See: * turtle.setup(): Sets the size and position of the main window. * turtle.hideturtle(): Makes the turtle invisible. * turtle.tracer(): Turns turtle animation on/off and sets delay for update drawings. * turtle.listen(): Sets focus on TurtleScreen (in order to collect key-events). * turtle.onkey(): Binds a function to key-release event of key. * turtle.done(): Starts event loop - calling Tkinter’s mainloop function. * Lambda functions. End of explanation
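The board arithmetic inside offset() can be tested on its own without the freegames dependency. A dependency-free sketch of the same mapping, assuming the 20-pixel tiles and the 20x20 grid used above (illustrative code, not part of the original game):

def tile_index(x, y):
    # snap to the 20-pixel grid, then map the 400x400 board onto a flat 20x20 list
    col = (x // 20 * 20 + 200) // 20     # 0..19, left to right
    row = (180 - y // 20 * 20) // 20     # 0..19, top to bottom
    return int(row * 20 + col)

print(tile_index(-200, 180))   # 0: the top-left tile
print(tile_index(100, -160))   # a tile in the lower-right part of the board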
5,265
Given the following text description, write Python code to implement the functionality described below step by step Description: Permutations Step4: Helper code Let's start by defining a few functions that will help us construct and inspect automata Step5: All permutations Step6: Window of length d Here we keep track of a coverage vector (C) of length d starting from the leftmost uncovered word (l) [l, C] There are two inference rules Step7: Examples Step8: Input Step9: All permutations Step10: For a toy example we can enumerate the permutations Step11: WLd
Python Code: import fst Explanation: Permutations End of explanation # Let's see the input as a simple linear chain FSA def make_input(srcstr, sigma = None): converts a nonempty string into a linear chain acceptor @param srcstr is a nonempty string @param sigma is the source vocabulary assert(srcstr.split()) return fst.linear_chain(srcstr.split(), sigma) # this function will enumerate all paths in an automaton def enumerate_paths(fsa): paths = [[str(arc.ilabel) for arc in path] for path in fsa.paths()] print len(paths), 'paths:' for path in paths: print ' '.join(path) # I am going to start with a very simple wrapper for a python dictionary that # will help us associate unique ids to items # this wrapper simply offers one aditional method (insert) similar to the insert method of an std::map class ItemFactory(object): def __init__(self): self.nextid_ = 0 self.i2s_ = {} def insert(self, item): Inserts a previously unmapped item. Returns the item's unique id and a flag with the result of the intertion. uid = self.i2s_.get(item, None) if uid is None: uid = self.nextid_ self.nextid_ += 1 self.i2s_[item] = uid return uid, True return uid, False def get(self, item): Returns the item's unique id (assumes the item has been mapped before) return self.i2s_[item] Explanation: Helper code Let's start by defining a few functions that will help us construct and inspect automata End of explanation # This program packs all permutations of an input sentence def Permutations(sentence, sigma=None, delta=None): from collections import deque from itertools import takewhile A = fst.Transducer(isyms=sigma, osyms=delta) I = len(sentence) axiom = tuple([False]*I) ifactory = ItemFactory() ifactory.insert(axiom) Q = deque([axiom]) while Q: ant = Q.popleft() # antecedent (coverage vector) sfrom = ifactory.get(ant) # state id if all(ant): # goal item A[sfrom].final = True # is a final node continue for i in range(I): if not ant[i]: cons = list(ant) cons[i] = True cons = tuple(cons) sto, new = ifactory.insert(cons) if new: Q.append(cons) A.add_arc(sfrom, sto, str(i + 1), sentence[i], 0) return A Explanation: All permutations End of explanation # Let's define a model of translational equivalences that performs word replacement of arbitrary permutations of the input # constrained to a window of length $d$ (see WLd in (Lopez, 2009)) # same strategy in Moses (for phrase-based models) def WLdPermutations(sentence, d = 2, sigma = None, delta = None): from collections import deque from itertools import takewhile A = fst.Transducer(isyms = sigma, osyms = delta) I = len(sentence) axiom = (1, tuple([False]*min(I - 1, d - 1))) ifactory = ItemFactory() ifactory.insert(axiom) Q = deque([axiom]) while Q: ant = Q.popleft() # antecedent l, C = ant # signature sfrom = ifactory.get(ant) # state id if l == I + 1: # goal item A[sfrom].final = True # is a final node continue # adjacent n = 0 if (len(C) == 0 or not C[0]) else sum(takewhile(lambda b : b, C)) # leading ones ll = l + n + 1 CC = list(C[n+1:]) maxlen = min(I - ll, d - 1) if maxlen: m = maxlen - len(CC) # missing positions [CC.append(False) for _ in range(m)] cons = (ll, tuple(CC)) sto, inserted = ifactory.insert(cons) if inserted: Q.append(cons) A.add_arc(sfrom, sto, str(l), sentence[l-1], 0) # non-adjacent ll = l for i in range(l + 1, I + 1): if i - l + 1 > d: # beyond limit break if C[i - l - 1]: # already used continue # free position CC = list(C) CC[i-l-1] = True cons = (ll, tuple(CC)) sto, inserted = ifactory.insert(cons) if inserted: Q.append(cons) A.add_arc(sfrom, sto, 
str(i), sentence[i-1], 0) return A Explanation: Window of length d Here we keep track of a coverage vector (C) of length d starting from the leftmost uncovered word (l) [l, C] There are two inference rules: one moves the window ahead whenever the leftmost uncovered position chances; and another that fills up the window without touching the leftmost input word. End of explanation # Let's create a table for the input vocabulary $\Sigma$ sigma = fst.SymbolTable() # and for the output vocabulary $\Delta$ delta = fst.SymbolTable() Explanation: Examples End of explanation # Let's have a look at the input as an automaton # we call it F ('f' is the cannonical source language) ex1_F = make_input('nosso amigo comum', sigma) ex1_F Explanation: Input End of explanation ex1_all = Permutations('1 2 3 4'.split(), None, sigma) ex1_all Explanation: All permutations End of explanation enumerate_paths(ex1_all) Explanation: For a toy example we can enumerate the permutations End of explanation # these are the permutations of the input according to WL$2$ ex2_WLd2 = WLdPermutations('1 2 3 4'.split(), 2, None, sigma) ex2_WLd2 enumerate_paths(ex2_WLd2) Explanation: WLd End of explanation
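For readers without the pyfst dependency, the same coverage-vector search can be sketched in plain Python, with a bitmask standing in for the tuple of booleans (illustrative code, not from the notebook):

def permutations_by_coverage(n):
    # extend a partial permutation with any position whose bit is still unset
    def expand(prefix, covered):
        if covered == (1 << n) - 1:          # goal item: every position covered
            yield tuple(prefix)
            return
        for i in range(n):
            if not covered & (1 << i):
                yield from expand(prefix + [i + 1], covered | (1 << i))
    return list(expand([], 0))

perms = permutations_by_coverage(4)
print(len(perms))      # 24, matching the enumeration of '1 2 3 4' above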
5,266
Given the following text description, write Python code to implement the functionality described below step by step Description: Toggle Button Menu Example showing how to construct a toggle button widget that can be used to select a cube dimension. Step1: Load cube. Step2: Compose list of options and then construct widget to present them, along with a default option. Display the widget using the IPython display call. Step3: Print selected widget value for clarity.
Python Code: import ipywidgets import IPython.display import iris Explanation: Toggle Button Menu Example showing how to construct a toggle button widget that can be used to select a cube dimension. End of explanation cube = iris.load_cube(iris.sample_data_path('A1B.2098.pp')) print cube Explanation: Load cube. End of explanation coordinates = [(coord.name()) for coord in cube.coords()] dim_x = ipywidgets.ToggleButtons( description='Dimension:', options=coordinates, value='time') IPython.display.display(dim_x) Explanation: Compose list of options and then construct widget to present them, along with a default option. Display the widget using the IPython display call. End of explanation print dim_x.value Explanation: Print selected widget value for clarity. End of explanation
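Rather than reading dim_x.value once at the end, the selection can also be watched as it changes. A short sketch using the widget's observe hook, with the coordinate names hard-coded so the snippet does not depend on the iris sample data:

import ipywidgets

dim_x = ipywidgets.ToggleButtons(description='Dimension:',
                                 options=['time', 'latitude', 'longitude'],
                                 value='time')

def on_change(change):
    # called whenever the 'value' trait is updated by the user
    print('selected dimension:', change['new'])

dim_x.observe(on_change, names='value')
dim_x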
5,267
Given the following text description, write Python code to implement the functionality described below step by step Description: Generate names Struggle to find a name for the variable? Let's see how you'll come up with a name for your son/daughter. Surely no human has expertize over what is a good child name, so let us train NN instead. Dataset contains ~8k human names from different cultures[in latin transcript] Objective (toy problem) Step1: Text processing Step2: Cast everything from symbols into identifiers Step3: Input variables Step4: Build NN You will be building a model that takes token sequence and predicts next token iput sequence one-hot / embedding recurrent layer(s) otput layer(s) that predict output probabilities Step5: Compiling it Step6: generation Simple Step7: Model training Here you can tweak parameters or insert your generation function Once something word-like starts generating, try increasing seq_length
Python Code: start_token = " " with open("names") as f: names = f.read()[:-1].split('\n') names = [start_token+name for name in names] print ('n samples = ',len(names)) for x in names[::1000]: print (x) Explanation: Generate names Struggle to find a name for the variable? Let's see how you'll come up with a name for your son/daughter. Surely no human has expertize over what is a good child name, so let us train NN instead. Dataset contains ~8k human names from different cultures[in latin transcript] Objective (toy problem): learn a generative model over names. End of explanation #all unique characters go here tokens = <all unique characters in the dataset> tokens = list(tokens) print ('n_tokens = ',len(tokens)) #!token_to_id = <dictionary of symbol -> its identifier (index in tokens list)> token_to_id = {t:i for i,t in enumerate(tokens) } #!id_to_token = < dictionary of symbol identifier -> symbol itself> id_to_token = {i:t for i,t in enumerate(tokens)} import matplotlib.pyplot as plt %matplotlib inline plt.hist(list(map(len,names),bins=25)); # truncate names longer than ~80% percentile MAX_LEN = ?! Explanation: Text processing End of explanation names_ix = list(map(lambda name: list(map(token_to_id.get,name)),names)) #crop long names and pad short ones for i in range(len(names_ix)): names_ix[i] = names_ix[i][:MAX_LEN] #crop too long if len(names_ix[i]) < MAX_LEN: names_ix[i] += [token_to_id[" "]]*(MAX_LEN - len(names_ix[i])) #pad too short assert len(set(map(len,names_ix)))==1 names_ix = np.array(names_ix) Explanation: Cast everything from symbols into identifiers End of explanation input_sequence = T.matrix('token sequencea','int32') target_values = T.matrix('actual next token','int32') Explanation: Input variables End of explanation from lasagne.layers import InputLayer,DenseLayer,EmbeddingLayer from lasagne.layers import RecurrentLayer,LSTMLayer,GRULayer,CustomRecurrentLayer l_in = lasagne.layers.InputLayer(shape=(None, None),input_var=input_sequence) #!<Your neural network> l_emb = <embedding layer or one-hot encoding> l_rnn = <some recurrent layer(or several such layers)> #flatten batch and time to be compatible with feedforward layers (will un-flatten later) l_rnn_flat = lasagne.layers.reshape(l_rnn, (-1,l_rnn.output_shape[-1])) l_out = <last dense layer (or several layers), returning probabilities for all possible next tokens> # Model weights weights = lasagne.layers.get_all_params(l_out,trainable=True) print( weights) network_output = <NN output via lasagne> #If you use dropout do not forget to create deterministic version for evaluation predicted_probabilities_flat = network_output correct_answers_flat = target_values.ravel() loss = <Loss function - a simple categorical crossentropy will do, maybe add some regularizer> updates = <your favorite optimizer> Explanation: Build NN You will be building a model that takes token sequence and predicts next token iput sequence one-hot / embedding recurrent layer(s) otput layer(s) that predict output probabilities End of explanation #training train = theano.function([input_sequence, target_values], loss, updates=updates, allow_input_downcast=True) #computing loss without training compute_cost = theano.function([input_sequence, target_values], loss, allow_input_downcast=True) Explanation: Compiling it End of explanation #compile the function that computes probabilities for next token given previous text. 
#reshape back into original shape next_word_probas = network_output.reshape((input_sequence.shape[0],input_sequence.shape[1],len(tokens))) #predictions for next tokens (after sequence end) last_word_probas = next_word_probas[:,-1] probs = theano.function([input_sequence],last_word_probas,allow_input_downcast=True) def generate_sample(seed_phrase=None,N=MAX_LEN,t=1,n_snippets=1): ''' The function generates text given a phrase of length at least SEQ_LENGTH. parameters: sample_fun - max_ or proportional_sample_fun or whatever else you implemented The phrase is set using the variable seed_phrase The optional input "N" is used to set the number of characters of text to predict. ''' if seed_phrase is None: seed_phrase=start_token if len(seed_phrase) > MAX_LEN: seed_phrase = seed_phrase[-MAX_LEN:] assert type(seed_phrase) is str snippets = [] for _ in range(n_snippets): sample_ix = [] x = [token_to_id.get(c,0) for c in seed_phrase] x = np.array([x]) for i in range(N): # Pick the character that got assigned the highest probability p = probs(x).ravel() p = p**t / np.sum(p**t) ix = np.random.choice(np.arange(len(tokens)),p=p) sample_ix.append(ix) x = np.hstack((x[-MAX_LEN+1:],[[ix]])) random_snippet = seed_phrase + ''.join(id_to_token[ix] for ix in sample_ix) snippets.append(random_snippet) print("----\n %s \n----" % '; '.join(snippets)) Explanation: generation Simple: * get initial context(seed), * predict next token probabilities, * sample next token, * add it to the context * repeat from step 2 You'll get a more detailed info on how it works in the homework section. End of explanation def sample_batch(data, batch_size): rows = data[np.random.randint(0,len(data),size=batch_size)] return rows[:,:-1],rows[:,1:] print("Training ...") #total N iterations n_epochs=100 # how many minibatches are there in the epoch batches_per_epoch = 500 #how many training sequences are processed in a single function call batch_size=10 for epoch in xrange(n_epochs): print "Generated names" generate_sample(n_snippets=10) avg_cost = 0; for _ in range(batches_per_epoch): x,y = sample_batch(names_ix,batch_size) avg_cost += train(x, y) print("Epoch {} average loss = {}".format(epoch, avg_cost / batches_per_epoch)) generate_sample(n_snippets=100) generate_sample(seed=" A") Explanation: Model training Here you can tweak parameters or insert your generation function Once something word-like starts generating, try increasing seq_length End of explanation
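The temperature trick inside generate_sample can be tried on its own with plain numpy. A sketch with made-up probabilities, following the same convention as above where t greater than 1 sharpens the distribution and t less than 1 flattens it:

import numpy as np

def sample_with_temperature(p, t=1.0, rng=np.random.default_rng(0)):
    p = np.asarray(p, dtype=float) ** t    # p**t / sum(p**t), as in generate_sample
    p = p / p.sum()
    return rng.choice(len(p), p=p)

probs = [0.6, 0.3, 0.1]
print([sample_with_temperature(probs, t=2.0) for _ in range(5)])   # mostly index 0
print([sample_with_temperature(probs, t=0.2) for _ in range(5)])   # closer to uniform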
5,268
Given the following text problem statement, write Python code to implement the functionality described below in problem statement Problem: I have two data points on a 2-D image grid and the value of some quantity of interest at these two points is known.
Problem: import scipy.interpolate x = [(2,2), (1,2), (2,3), (3,2), (2,1)] y = [5,7,8,10,3] query = [(2.7, 2.3)] result = scipy.interpolate.griddata(x, y, query)
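A short extension of the snippet above: griddata also accepts method='nearest' and method='cubic', and query points that fall outside the convex hull of the samples come back as nan for the non-nearest methods unless fill_value is given.

import scipy.interpolate

pts = [(2, 2), (1, 2), (2, 3), (3, 2), (2, 1)]
vals = [5, 7, 8, 10, 3]
queries = [(2.7, 2.3), (10.0, 10.0)]    # the second point lies outside the sample hull

for method in ('nearest', 'linear', 'cubic'):
    print(method, scipy.interpolate.griddata(pts, vals, queries, method=method))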
5,269
Given the following text description, write Python code to implement the functionality described below step by step Description: k-Nearest Neighbors Introdução O k-Nearest Neighbors (ou KNN) é uma técnica de classificação bem simples que consiste em prever uma classe alvo ao encontrar a(s) classe(s) vizinha(s) mais próxima(s). Neste tutorial iremos apresentar a implementação do algoritmo k-Nearest Neighbors. Usaremos como exemplo o conjunto de dados Abalone, que são um tipo de moluscos gastrópodes. Veja neste link. Este problema deseja predizer a idade de um abalone através de medidas físicas. A idade de um abalone é determinada cortando a concha através do cone, manchando-a e contando o número de anéis através de um microscópio - uma tarefa chata e demorada. Outra medidas que são mais facéis de obter, são usadas para prever a idade, como padrões climáticos e localização (portanto, disponibilidade de alimentos). Passos do Tutorial Tratar os dados Step1: 1.2 Converter Valores Não-numéricos É importante que todos os valores sejam numéricos para que possamos caclular as distâncias euclidianas Abaixo funções que convertem string para float ou string para int Step2: 1.3 Normalizar Algumas colunas têm variância maior que outras, por isso é importante normalizar todas as colunas. Para isso calcularemos primeiro os mínimos e máximos. Step3: 2. Calcular Distância Euclidiana O primeiro passo necessário é calcular a distância entre duas linhas em um conjunto de dados. Linhas de dados são constituídos principalmente por números e uma maneira fácil de calcular a distância entre duas linhas ou vetores de números é desenhar uma linha reta. Isso faz sentido em 2D ou 3D e escala muito bem para maiores dimensões. Podemos calcular a distância da linha reta entre dois vetores usando a medida de distância euclidiana. É calculado como a raiz quadrada da soma do quadrado diferenças entre os dois vetores. Com a distância euclidiana, quanto menos o valor, maior a similaridade. O valor $0$ significa que não há nenhuma diferença entre dois registros. Abaixo temos a função que calcula isso Step4: 3. Obter Vizinhos Os vizinhos para um novo dado no conjunto de dados são as k instâncias mais próximas. Para localizar os vizinhos para um novo dado dentro de um conjunto de dados, devemos primeiro calcular a distância entre cada registro no conjunto de dados para o novo dado. Nós podemos fazer isso usando nossa função de distância acima. Uma vez que as distâncias são calculadas, devemos ordenar todas os registros no conjunto de dados de treinamento por sua distância entre o novo dado. Podemos então selecionar o top k para retornar como os vizinhos mais parecidos. Podemos fazer isso guardando a distância de cada registro no conjunto de dados como uma tupla, ordenar a lista de tuplas pela distância(em ordem decrescente) e depois recuperar os vizinhos. Abaixo está uma função chamada get_neighbors( ) que implementa isso. Step5: 4. Fazer Predições Os vizinhos mais semelhantes coletados do conjunto de dados de treinamento podem ser usados para fazer previsões. No caso da classificação, podemos retornar a classe mais representada entre os vizinhos. Podemos conseguir isso executando a função max( ) na lista de valores de saída dos vizinhos. Dada uma lista de valores de classe observadas nos vizinhos, a função max( ) toma um conjunto de valores de classe únicos e chama a contagem na lista de valores de classe para cada valor de classe no conjunto. Abaixo está a função denominada predict_classification( ) que implementa isso. 
Step6: 5. Avaliar Precisão 5.1 Calcular Acurácea As previsões podem ser comparadas com os valores de classe no conjunto de dados de teste. A acurácia da classificação pode ser calculada como uma relação de precisão entre 0 e 100%. A função accuracy_metric( ) calculará essa relação de precisão. Step7: 5.2 Avaliar Algoritmo Abaixo temos a função que avalia o algoritmo usando uma divisão de validação cruzada. Step8: Caso de Estudo Abalone Aplicaremos o algoritmo k-Nearest Neighbors ao conjunto de dados do Abalone. O primeiro passo é carregar o conjunto de dados e converter os dados carregados para números com os quais podemos usar o cálculo da distância euclidiana. Step9: Um valor de $k = 5$ vizinhos foi usado para fazer previsões. Você pode experimentar com $k$ maiores para aumentar a precisão. Podemos ver que a precisão média de 23% é melhor do que a linha de base de 16%, mas é bastante pobre em geral. Isto é devido ao grande número de classes tornando a precisão um "pobre juiz" de habilidade sobre esse problema. Esse fato, combinado com o fato de que Muitas das classes têm poucos ou um exemplo também tornam o problema desafiador. Nós também podemos modelar o conjunto de dados como um problema de modelagem preditiva de regressão. Isto é porque os valores da classe têm uma relação ordinal natural. Caso de Estudo Abalone como Regressão A regressão pode ser uma maneira mais útil de modelar este problema, dado o grande número de classes e dispersão de alguns valores de classe. Podemos facilmente mudar nosso exemplo acima para regressão por alterando KNN para prever a média dos vizinhos e usando Root Mean Squared Error(erro quadrado médio) para avaliar previsões. Abaixo está um exemplo completo com essas mudanças.
Python Code: from csv import reader from math import sqrt # carregar um arquivo csv def load_csv(filename): dataset = list() with open(filename, 'r') as file: csv_reader = reader(file) for row in csv_reader: if not row: continue dataset.append(row) return dataset # Carregando dataset Abalone filename = 'abalone.csv' dataset = load_csv(filename) print ('Arquivo de dados {0} carregado com {1} linhas e {2} colunas'.format(filename, len(dataset), len(dataset[0]))) Explanation: k-Nearest Neighbors Introdução O k-Nearest Neighbors (ou KNN) é uma técnica de classificação bem simples que consiste em prever uma classe alvo ao encontrar a(s) classe(s) vizinha(s) mais próxima(s). Neste tutorial iremos apresentar a implementação do algoritmo k-Nearest Neighbors. Usaremos como exemplo o conjunto de dados Abalone, que são um tipo de moluscos gastrópodes. Veja neste link. Este problema deseja predizer a idade de um abalone através de medidas físicas. A idade de um abalone é determinada cortando a concha através do cone, manchando-a e contando o número de anéis através de um microscópio - uma tarefa chata e demorada. Outra medidas que são mais facéis de obter, são usadas para prever a idade, como padrões climáticos e localização (portanto, disponibilidade de alimentos). Passos do Tutorial Tratar os dados: carregar os dados do arquivo CSV e tratar o conjunto de dados. Calcular distância euclidiana Obter vizinhos: pegar os $k$ vizinhos mais próximos Fazer predições Avaliar a precisão: avaliar a precisão das previsões feitas para um conjunto de dados de teste como a porcentagem correta de todas as previsões feitas. 1. Tratar Dados 1.1 Carregar arquivo A primeira coisa que precisamos fazer é carregar nosso arquivo de dados. Os dados estão no formato CSV sem linha de cabeçalho. Podemos abrir o arquivo com a função open e ler as linhas de dados usando a função de leitor no módulo csv. Também precisamos converter os atributos que foram carregados como strings em números para que possamos trabalhar com eles. Abaixo está a função load_csv( ) para carregar o conjunto de dados Abalone. End of explanation # Converte string para float def str_column_to_float(dataset, column): for row in dataset: row[column] = float(row[column].strip()) # Converte string para int def str_column_to_int(dataset, column): class_values = [row[column] for row in dataset] unique = set(class_values) lookup = dict() for i, value in enumerate(unique): lookup[value] = i for row in dataset: row[column] = lookup[row[column]] return lookup Explanation: 1.2 Converter Valores Não-numéricos É importante que todos os valores sejam numéricos para que possamos caclular as distâncias euclidianas Abaixo funções que convertem string para float ou string para int End of explanation def dataset_minmax(dataset): minmax = list() for i in range(len(dataset[0])): col_values = [row[i] for row in dataset] value_min = min(col_values) value_max = max(col_values) minmax.append([value_min, value_max]) return minmax def normalize_dataset(dataset, minmax): for row in dataset: for i in range(len(row)): row[i] = (row[i] - minmax[i][0]) / (minmax[i][1] - minmax[i][0]) Explanation: 1.3 Normalizar Algumas colunas têm variância maior que outras, por isso é importante normalizar todas as colunas. Para isso calcularemos primeiro os mínimos e máximos. 
End of explanation from math import sqrt def euclidean_distance(row1, row2): distance = 0.0 for i in range(len(row1)-1): distance += (row1[i] - row2[i])**2 return sqrt(distance) # Teste da função euclidean_distance dataset = [[2.7810836,2.550537003,0], [1.465489372,2.362125076,0], [3.396561688,4.400293529,0], [1.38807019,1.850220317,0], [3.06407232,3.005305973,0], [7.627531214,2.759262235,1], [5.332441248,2.088626775,1], [6.922596716,1.77106367,1], [8.675418651,-0.242068655,1], [7.673756466,3.508563011,1]] row0 = dataset[0] for row in dataset: distance = euclidean_distance(row0, row) print(distance) Explanation: 2. Calcular Distância Euclidiana O primeiro passo necessário é calcular a distância entre duas linhas em um conjunto de dados. Linhas de dados são constituídos principalmente por números e uma maneira fácil de calcular a distância entre duas linhas ou vetores de números é desenhar uma linha reta. Isso faz sentido em 2D ou 3D e escala muito bem para maiores dimensões. Podemos calcular a distância da linha reta entre dois vetores usando a medida de distância euclidiana. É calculado como a raiz quadrada da soma do quadrado diferenças entre os dois vetores. Com a distância euclidiana, quanto menos o valor, maior a similaridade. O valor $0$ significa que não há nenhuma diferença entre dois registros. Abaixo temos a função que calcula isso: End of explanation def get_neighbors(train, test_row, num_neighbors): distances = list() for train_row in train: dist = euclidean_distance(test_row, train_row) distances.append((train_row, dist)) neighbors = list() for i in range(num_neighbors): neighbors.append(distances[i][0]) return neighbors # Teste da função get_neighbors dataset = [[2.7810836,2.550537003,0], [1.465489372,2.362125076,0], [3.396561688,4.400293529,0], [1.38807019,1.850220317,0], [3.06407232,3.005305973,0], [7.627531214,2.759262235,1], [5.332441248,2.088626775,1], [6.922596716,1.77106367,1], [8.675418651,-0.242068655,1], [7.673756466,3.508563011,1]] neighbors = get_neighbors(dataset, dataset[0], 3) for neighbor in neighbors: print(neighbor) Explanation: 3. Obter Vizinhos Os vizinhos para um novo dado no conjunto de dados são as k instâncias mais próximas. Para localizar os vizinhos para um novo dado dentro de um conjunto de dados, devemos primeiro calcular a distância entre cada registro no conjunto de dados para o novo dado. Nós podemos fazer isso usando nossa função de distância acima. Uma vez que as distâncias são calculadas, devemos ordenar todas os registros no conjunto de dados de treinamento por sua distância entre o novo dado. Podemos então selecionar o top k para retornar como os vizinhos mais parecidos. Podemos fazer isso guardando a distância de cada registro no conjunto de dados como uma tupla, ordenar a lista de tuplas pela distância(em ordem decrescente) e depois recuperar os vizinhos. Abaixo está uma função chamada get_neighbors( ) que implementa isso. 
End of explanation def predict_classification(train, test_row, num_neighbors): neighbors = get_neighbors(train, test_row, num_neighbors) output_values = [row[-1] for row in neighbors] prediction = max(set(output_values), key=output_values.count) return prediction # Teste da função predict_classification dataset = [[2.7810836,2.550537003,0], [1.465489372,2.362125076,0], [3.396561688,4.400293529,0], [1.38807019,1.850220317,0], [3.06407232,3.005305973,0], [7.627531214,2.759262235,1], [5.332441248,2.088626775,1], [6.922596716,1.77106367,1], [8.675418651,-0.242068655,1], [7.673756466,3.508563011,1]] prediction = predict_classification(dataset, dataset[0], 3) print( ' Expected %d, Got %d. ' % (dataset[0][-1], prediction)) Explanation: 4. Fazer Predições Os vizinhos mais semelhantes coletados do conjunto de dados de treinamento podem ser usados para fazer previsões. No caso da classificação, podemos retornar a classe mais representada entre os vizinhos. Podemos conseguir isso executando a função max( ) na lista de valores de saída dos vizinhos. Dada uma lista de valores de classe observadas nos vizinhos, a função max( ) toma um conjunto de valores de classe únicos e chama a contagem na lista de valores de classe para cada valor de classe no conjunto. Abaixo está a função denominada predict_classification( ) que implementa isso. End of explanation def accuracy_metric(actual, predicted): correct = 0 for i in range(len(actual)): if actual[i] == predicted[i]: correct += 1 return correct / float(len(actual)) * 100.0 Explanation: 5. Avaliar Precisão 5.1 Calcular Acurácea As previsões podem ser comparadas com os valores de classe no conjunto de dados de teste. A acurácia da classificação pode ser calculada como uma relação de precisão entre 0 e 100%. A função accuracy_metric( ) calculará essa relação de precisão. End of explanation def evaluate_algorithm(dataset, algorithm, n_folds, *args): folds = cross_validation_split(dataset, n_folds) scores = list() for fold in folds: train_set = list(folds) train_set.remove(fold) train_set = sum(train_set, []) test_set = list() for row in fold: row_copy = list(row) test_set.append(row_copy) row_copy[-1] = None predicted = algorithm(train_set, test_set, *args) actual = [row[-1] for row in fold] accuracy = accuracy_metric(actual, predicted) scores.append(accuracy) return scores Explanation: 5.2 Avaliar Algoritmo Abaixo temos a função que avalia o algoritmo usando uma divisão de validação cruzada. 
End of explanation from random import seed from random import randrange from csv import reader from math import sqrt # Carregar um arquivo CSV def load_csv(filename): dataset = list() with open(filename, 'r') as file: csv_reader = reader(file) for row in csv_reader: if not row: continue dataset.append(row) return dataset # Converter string para float def str_column_to_float(dataset, column): for row in dataset: row[column] = float(row[column].strip()) # Converter string para integer def str_column_to_int(dataset, column): class_values = [row[column] for row in dataset] unique = set(class_values) lookup = dict() for i, value in enumerate(unique): lookup[value] = i for row in dataset: row[column] = lookup[row[column]] return lookup # Achar os maximos e minimos de cada coluna def dataset_minmax(dataset): minmax = list() for i in range(len(dataset[0])): col_values = [row[i] for row in dataset] value_min = min(col_values) value_max = max(col_values) minmax.append([value_min, value_max]) return minmax # Normalizar o conjunto de dados def normalize_dataset(dataset, minmax): for row in dataset: for i in range(len(row)): row[i] = (row[i] - minmax[i][0]) / (minmax[i][1] - minmax[i][0]) # Dividir o conjunto de dados em k folds def cross_validation_split(dataset, n_folds): dataset_split = list() dataset_copy = list(dataset) fold_size = int(len(dataset) / n_folds) for i in range(n_folds): fold = list() while len(fold) < fold_size: index = randrange(len(dataset_copy)) fold.append(dataset_copy.pop(index)) dataset_split.append(fold) return dataset_split # Calcular porcentagem de precisao def accuracy_metric(actual, predicted): correct = 0 for i in range(len(actual)): if actual[i] == predicted[i]: correct += 1 return correct / float(len(actual)) * 100.0 # Avaliar o algoritmo usando a validação cruzada def evaluate_algorithm(dataset, algorithm, n_folds, *args): folds = cross_validation_split(dataset, n_folds) scores = list() for fold in folds: train_set = list(folds) train_set.remove(fold) train_set = sum(train_set, []) test_set = list() for row in fold: row_copy = list(row) test_set.append(row_copy) row_copy[-1] = None predicted = algorithm(train_set, test_set, *args) actual = [row[-1] for row in fold] accuracy = accuracy_metric(actual, predicted) scores.append(accuracy) return scores # Calcular a distância euclidiana entre dois vetores def euclidean_distance(row1, row2): distance = 0.0 for i in range(len(row1)-1): distance += (row1[i] - row2[i])**2 return sqrt(distance) # Achar os vizinhos mais proximos def get_neighbors(train, test_row, num_neighbors): distances = list() for train_row in train: dist = euclidean_distance(test_row, train_row) distances.append((train_row, dist)) distances.sort(key=lambda tup: tup[1]) neighbors = list() for i in range(num_neighbors): neighbors.append(distances[i][0]) return neighbors # Fazer a predição com os vizinhos def predict_classification(train, test_row, num_neighbors): neighbors = get_neighbors(train, test_row, num_neighbors) output_values = [row[-1] for row in neighbors] prediction = max(set(output_values), key=output_values.count) return prediction # Algoritmo kNN def k_nearest_neighbors(train, test, num_neighbors): predictions = list() for row in test: output = predict_classification(train, row, num_neighbors) predictions.append(output) return(predictions) # Teste seed(1) # Carregar e preparar o conjunto de dados filename = 'abalone.csv' dataset = load_csv(filename) for i in range(1, len(dataset[0])): str_column_to_float(dataset, i) # Converter primeira conluna 
para integer str_column_to_int(dataset, 0) # Avaliar o algoritmo n_folds = 5 num_neighbors = 5 scores = evaluate_algorithm(dataset, k_nearest_neighbors, n_folds, num_neighbors) print('Scores: %s' % scores) print('Mean Accuracy: %.3f%%' % (sum(scores)/float(len(scores)))) Explanation: Caso de Estudo Abalone Aplicaremos o algoritmo k-Nearest Neighbors ao conjunto de dados do Abalone. O primeiro passo é carregar o conjunto de dados e converter os dados carregados para números com os quais podemos usar o cálculo da distância euclidiana. End of explanation # k-Nearest Neighbors on the Abalone Dataset for Regression from random import seed from random import randrange from csv import reader from math import sqrt # Carregar um arquivo CSV def load_csv(filename): dataset = list() with open(filename, 'r') as file: csv_reader = reader(file) for row in csv_reader: if not row: continue dataset.append(row) return dataset # Converter string para float def str_column_to_float(dataset, column): for row in dataset: row[column] = float(row[column].strip()) # Converter string para integer def str_column_to_int(dataset, column): class_values = [row[column] for row in dataset] unique = set(class_values) lookup = dict() for i, value in enumerate(unique): lookup[value] = i for row in dataset: row[column] = lookup[row[column]] return lookup # Achar os maximos e minimos de cada coluna def dataset_minmax(dataset): minmax = list() for i in range(len(dataset[0])): col_values = [row[i] for row in dataset] value_min = min(col_values) value_max = max(col_values) minmax.append([value_min, value_max]) return minmax # Normalizar o conjunto de dados def normalize_dataset(dataset, minmax): for row in dataset: for i in range(len(row)): row[i] = (row[i] - minmax[i][0]) / (minmax[i][1] - minmax[i][0]) # Dividr o conjunto de dados em k folds def cross_validation_split(dataset, n_folds): dataset_split = list() dataset_copy = list(dataset) fold_size = int(len(dataset) / n_folds) for i in range(n_folds): fold = list() while len(fold) < fold_size: index = randrange(len(dataset_copy)) fold.append(dataset_copy.pop(index)) dataset_split.append(fold) return dataset_split # Calcular erro medio def rmse_metric(actual, predicted): sum_error = 0.0 for i in range(len(actual)): prediction_error = predicted[i] - actual[i] sum_error += (prediction_error ** 2) mean_error = sum_error / float(len(actual)) return sqrt(mean_error) # Avaliar o algoritmo usando a validacao cruzada def evaluate_algorithm(dataset, algorithm, n_folds, *args): folds = cross_validation_split(dataset, n_folds) scores = list() for fold in folds: train_set = list(folds) train_set.remove(fold) train_set = sum(train_set, []) test_set = list() for row in fold: row_copy = list(row) test_set.append(row_copy) row_copy[-1] = None predicted = algorithm(train_set, test_set, *args) actual = [row[-1] for row in fold] rmse = rmse_metric(actual, predicted) scores.append(rmse) return scores # Calcular a distância euclidiana entre dois vetores def euclidean_distance(row1, row2): distance = 0.0 for i in range(len(row1)-1): distance += (row1[i] - row2[i])**2 return sqrt(distance) # Achar os vizinhos mais proximos def get_neighbors(train, test_row, num_neighbors): distances = list() for train_row in train: dist = euclidean_distance(test_row, train_row) distances.append((train_row, dist)) distances.sort(key=lambda tup: tup[1]) neighbors = list() for i in range(num_neighbors): neighbors.append(distances[i][0]) return neighbors # Fazer a predição com os vizinhos def predict_regression(train, 
test_row, num_neighbors): neighbors = get_neighbors(train, test_row, num_neighbors) output_values = [row[-1] for row in neighbors] prediction = sum(output_values) / float(len(output_values)) return prediction # Algoritmo kNN def k_nearest_neighbors(train, test, num_neighbors): predictions = list() for row in test: output = predict_regression(train, row, num_neighbors) predictions.append(output) return(predictions) # Teste seed(1) # Carregar e preparar o conjunto de dados filename = 'abalone.csv' dataset = load_csv(filename) for i in range(1, len(dataset[0])): str_column_to_float(dataset, i) # converter primeira conluna para integer str_column_to_int(dataset, 0) # avaliar o algoritmo n_folds = 5 num_neighbors = 5 scores = evaluate_algorithm(dataset, k_nearest_neighbors, n_folds, num_neighbors) print('Scores: %s' % scores) print('Mean RMSE: %.3f' % (sum(scores)/float(len(scores)))) Explanation: Um valor de $k = 5$ vizinhos foi usado para fazer previsões. Você pode experimentar com $k$ maiores para aumentar a precisão. Podemos ver que a precisão média de 23% é melhor do que a linha de base de 16%, mas é bastante pobre em geral. Isto é devido ao grande número de classes tornando a precisão um "pobre juiz" de habilidade sobre esse problema. Esse fato, combinado com o fato de que Muitas das classes têm poucos ou um exemplo também tornam o problema desafiador. Nós também podemos modelar o conjunto de dados como um problema de modelagem preditiva de regressão. Isto é porque os valores da classe têm uma relação ordinal natural. Caso de Estudo Abalone como Regressão A regressão pode ser uma maneira mais útil de modelar este problema, dado o grande número de classes e dispersão de alguns valores de classe. Podemos facilmente mudar nosso exemplo acima para regressão por alterando KNN para prever a média dos vizinhos e usando Root Mean Squared Error(erro quadrado médio) para avaliar previsões. Abaixo está um exemplo completo com essas mudanças. End of explanation
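The two Abalone scripts above re-implement kNN from scratch. As an optional sanity check, the sketch below runs the same experiment with scikit-learn. It is added here for illustration and is not part of the original tutorial; it assumes pandas and scikit-learn are installed and that abalone.csv has the same layout used above (categorical sex in column 0, rings in the last column). Exact scores will differ slightly because of fold shuffling.

```python
# Optional cross-check of the from-scratch kNN above using scikit-learn.
# Assumptions: pandas and scikit-learn are available; 'abalone.csv' has the
# same layout as in the scripts above (column 0 = sex, last column = rings).
import numpy as np
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier, KNeighborsRegressor

df = pd.read_csv('abalone.csv', header=None)
df[0] = df[0].astype('category').cat.codes      # integer-encode the sex column
X = df.iloc[:, :-1].values
y = df.iloc[:, -1].values

# Classification view: mean accuracy over 5 folds with k=5 neighbors.
acc = cross_val_score(KNeighborsClassifier(n_neighbors=5), X, y, cv=5)
print('Mean Accuracy: %.3f' % acc.mean())

# Regression view: RMSE over 5 folds with k=5 neighbors.
mse = cross_val_score(KNeighborsRegressor(n_neighbors=5), X, y.astype(float),
                      cv=5, scoring='neg_mean_squared_error')
print('Mean RMSE: %.3f' % np.sqrt(-mse).mean())
```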
5,270
Given the following text description, write Python code to implement the functionality described below step by step Description: Using LAMMPS with iPython and Jupyter LAMMPS can be run interactively using iPython easily. This tutorial shows how to set this up. Installation Download the latest version of LAMMPS into a folder (we will call this $LAMMPS_DIR from now on) Compile LAMMPS as a shared library and enable exceptions and PNG support bash cd $LAMMPS_DIR/src make yes-molecule make mpi mode=shlib LMP_INC="-DLAMMPS_PNG -DLAMMPS_EXCEPTIONS" JPG_LIB="-lpng" Create a python virtualenv bash virtualenv testing source testing/bin/activate Inside the virtualenv install the lammps package (testing) cd $LAMMPS_DIR/python (testing) python install.py (testing) cd # move to your working directory Install jupyter and ipython in the virtualenv bash (testing) pip install ipython jupyter Run jupyter notebook bash (testing) jupyter notebook Example Step1: Queries about LAMMPS simulation Step2: Working with LAMMPS Variables Step3: Accessing Atom data
Python Code: from lammps import IPyLammps L = IPyLammps() # 2d circle of particles inside a box with LJ walls import math b = 0 x = 50 y = 20 d = 20 # careful not to slam into wall too hard v = 0.3 w = 0.08 L.units("lj") L.dimension(2) L.atom_style("bond") L.boundary("f f p") L.lattice("hex", 0.85) L.region("box", "block", 0, x, 0, y, -0.5, 0.5) L.create_box(1, "box", "bond/types", 1, "extra/bond/per/atom", 6) L.region("circle", "sphere", d/2.0+1.0, d/2.0/math.sqrt(3.0)+1, 0.0, d/2.0) L.create_atoms(1, "region", "circle") L.mass(1, 1.0) L.velocity("all create 0.5 87287 loop geom") L.velocity("all set", v, w, 0, "sum yes") L.pair_style("lj/cut", 2.5) L.pair_coeff(1, 1, 10.0, 1.0, 2.5) L.bond_style("harmonic") L.bond_coeff(1, 10.0, 1.2) L.create_bonds("many", "all", "all", 1, 1.0, 1.5) L.neighbor(0.3, "bin") L.neigh_modify("delay", 0, "every", 1, "check yes") L.fix(1, "all", "nve") L.fix(2, "all wall/lj93 xlo 0.0 1 1 2.5 xhi", x, "1 1 2.5") L.fix(3, "all wall/lj93 ylo 0.0 1 1 2.5 yhi", y, "1 1 2.5") L.image(zoom=1.8) L.thermo_style("custom step temp epair press") L.thermo(100) output = L.run(40000) L.image(zoom=1.8) Explanation: Using LAMMPS with iPython and Jupyter LAMMPS can be run interactively using iPython easily. This tutorial shows how to set this up. Installation Download the latest version of LAMMPS into a folder (we will calls this $LAMMPS_DIR from now on) Compile LAMMPS as a shared library and enable exceptions and PNG support bash cd $LAMMPS_DIR/src make yes-molecule make mpi mode=shlib LMP_INC="-DLAMMPS_PNG -DLAMMPS_EXCEPTIONS" JPG_LIB="-lpng" Create a python virtualenv bash virtualenv testing source testing/bin/activate Inside the virtualenv install the lammps package (testing) cd $LAMMPS_DIR/python (testing) python install.py (testing) cd # move to your working directory Install jupyter and ipython in the virtualenv bash (testing) pip install ipython jupyter Run jupyter notebook bash (testing) jupyter notebook Example End of explanation L.system L.system.natoms L.system.nbonds L.system.nbondtypes L.communication L.fixes L.computes L.dumps L.groups Explanation: Queries about LAMMPS simulation End of explanation L.variable("a index 2") L.variables L.variable("t equal temp") L.variables import sys if sys.version_info < (3, 0): # In Python 2 'print' is a restricted keyword, which is why you have to use the lmp_print function instead. x = float(L.lmp_print('"${a}"')) else: # In Python 3 the print function can be redefined. # x = float(L.print('"${a}"')") # To avoid a syntax error in Python 2 executions of this notebook, this line is packed into an eval statement x = float(eval("L.print('\"${a}\"')")) x L.variables['t'].value L.eval("v_t/2.0") L.variable("b index a b c") L.variables['b'].value L.eval("v_b") L.variables['b'].definition L.variable("i loop 10") L.variables['i'].value L.next("i") L.variables['i'].value L.eval("ke") Explanation: Working with LAMMPS Variables End of explanation L.atoms[0] [x for x in dir(L.atoms[0]) if not x.startswith('__')] L.atoms[0].position L.atoms[0].id L.atoms[0].velocity L.atoms[0].force L.atoms[0].type Explanation: Accessing Atom data End of explanation
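Since the tutorial above already exposes per-atom data through L.atoms and system-level counts through L.system, a tiny follow-up sketch can combine them. This is an illustration added here, not part of the original notebook; it assumes L.atoms[i].position yields an (x, y, z) triple, as the printed example above suggests, and that all atom masses are equal (they were set to 1.0).

```python
# Illustration (not in the original tutorial): centre of mass of the 2d system
# using only the wrappers shown above. Assumes equal masses (L.mass(1, 1.0))
# and that .position returns an (x, y, z) triple.
n = L.system.natoms
com = [0.0, 0.0, 0.0]
for i in range(n):
    x, y, z = L.atoms[i].position
    com[0] += x / n
    com[1] += y / n
    com[2] += z / n
print("Approximate centre of mass:", com)
```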
5,271
Given the following text description, write Python code to implement the functionality described below step by step Description: *****Not Working******* In this notebook, we will implement the MATLAB imfilter method in Python. Here we will implement four boundary modes, known as (clip, wrap, copy, reflect) in old MATLAB, (0, circular, replicate, symmetric) in new MATLAB, and (constant, wrap, mirror, nearest, reflect) in SciPy. Import Libraries
Python Code: import cv2 import numpy as np import matplotlib.pyplot as plt import scipy.ndimage as scp Explanation: *****Not Working******* In this notebook, we will implement matlab imfilter method in python. Here we will implement four modes know as - (clip, wrap, copy, reflect) in old_matlab, (0, circular, replicate, symmetic) in new_matlab and (constant,wrap and mirror, nearest, reflect) in scipy python. Import Libraries End of explanation img = cv2.imread('paint.jpg', cv2.IMREAD_GRAYSCALE) Explanation: Read Image End of explanation kernal = np.zeros((51,51)) kernal[25,25] = 1 Constant_filter = scp.correlate(img,kernal,mode='constant') Wrap_filter = scp.correlate(img,kernal,mode='wrap') Mirror_filter = scp.correlate(img,kernal,mode='mirror') Nearest_filter = scp.correlate(img,kernal,mode='nearest') Reflect_filter = scp.correlate(img,kernal,mode='reflect') plt.figure(figsize=(10,8)) plt.subplot(2,3,1), plt.imshow(img, cmap='gray'), plt.title('Original') plt.subplot(2,3,2), plt.imshow(Constant_filter, cmap='gray'), plt.title('Constant_filtered') plt.subplot(2,3,3), plt.imshow(Wrap_filter, cmap='gray'), plt.title('Wrap_filtered') plt.subplot(2,3,4), plt.imshow(Mirror_filter, cmap='gray'), plt.title('Mirror_filtered') plt.subplot(2,3,5), plt.imshow(Nearest_filter, cmap='gray'), plt.title('Nearest_filtered') plt.subplot(2,3,6), plt.imshow(Reflect_filter, cmap='gray'), plt.title('Reflect_filtered') plt.show() Explanation: Boundary filters End of explanation
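The mode names in scipy.ndimage do not match the MATLAB imfilter names one-to-one, so a small dispatch helper can make the correspondence described above explicit. The wrapper below is a sketch added for illustration; the name mapping follows the list in the text (clip/0 to constant, circular to wrap, copy/replicate to nearest, reflect/symmetric to reflect) and should be double-checked against the MATLAB documentation before relying on exact edge behaviour.

```python
# Hedged helper (not in the original notebook): call scipy's correlate with a
# MATLAB-style boundary name. The mapping mirrors the list given in the text.
import scipy.ndimage as scp

_MODE_MAP = {
    'clip': 'constant', 0: 'constant',          # zero padding
    'wrap': 'wrap', 'circular': 'wrap',         # periodic boundary
    'copy': 'nearest', 'replicate': 'nearest',  # repeat the edge pixel
    'reflect': 'reflect', 'symmetric': 'reflect',
}

def imfilter(image, kernel, boundary='replicate'):
    """Correlate `image` with `kernel` using a MATLAB-style boundary name."""
    return scp.correlate(image, kernel, mode=_MODE_MAP[boundary])

# Example with the identity kernel defined above:
Replicate_filter = imfilter(img, kernal, boundary='replicate')
```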
5,272
Given the following text description, write Python code to implement the functionality described below step by step Description: Multiple Linear Regression By Evgenia "Jenny" Nitishinskaya, Maxwell Margenot, Delaney Granizo-Mackenzie, and Gilbert Wasserman. Part of the Quantopian Lecture Series Step1: Multiple linear regression generalizes linear regression, allowing the dependent variable to be a linear function of multiple independent variables. As before, we assume that the variable $Y$ is a linear function of $X_1,\ldots, X_k$ Step2: Once we have used this method to determine the coefficients of the regression, we will be able to use new observed values of $X$ to predict values of $Y$. Each coefficient $\beta_j$ tells us how much $Y_i$ will change if we change $X_j$ by one while holding all of the other dependent variables constant. This lets us separate out the contributions of different effects. This is assuming the linear model is the correct one. We start by artificially constructing a $Y$, $X_1$, and $X_2$ in which we know the precise relationship. Step3: We can use the same function from statsmodels as we did for a single linear regression lecture. Step4: The same care must be taken with these results as with partial derivatives. The formula for $Y$ is ostensibly $$X_1 + X_2 = X_1 + X^2 + X_1 = 2 X_1 + X^2$$ Or $2X_1$ plus a parabola. However, the coefficient of $X_1$ is 1. That is because $Y$ changes by 1 if we change $X_1$ by 1 <i>while holding $X_2$ constant</i>. Multiple linear regression separates out contributions from different variables. Similarly, running a linear regression on two securities might give a high $\beta$. However, if we bring in a third security (like SPY, which tracks the S&P 500) as an independent variable, we may find that the correlation between the first two securities is almost entirely due to them both being correlated with the S&P 500. This is useful because the S&P 500 may then be a more reliable predictor of both securities than they were of each other. This method allows us to better gauge the significance between the two securities and the problem is known as confounding. Step5: The next step after running an analysis is determining if we can even trust the results. A good first step is checking to see if anything looks weird in graphs of the independent variables, dependent variables, and predictions. Step6: Evaluation We can get some statistics about the fit from the result returned by the regression Step7: Model Assumptions The validity of these statistics depends on whether or not the assumptions of the linear regression model are satisfied. These are
Python Code: import numpy as np import pandas as pd import statsmodels.api as sm # If the observations are in a dataframe, you can use statsmodels.formulas.api to do the regression instead from statsmodels import regression import matplotlib.pyplot as plt Explanation: Multiple Linear Regression By Evgenia "Jenny" Nitishinskaya, Maxwell Margenot, Delaney Granizo-Mackenzie, and Gilbert Wasserman. Part of the Quantopian Lecture Series: www.quantopian.com/lectures github.com/quantopian/research_public Notebook released under the Creative Commons Attribution 4.0 License. End of explanation Y = np.array([1, 3.5, 4, 8, 12]) Y_hat = np.array([1, 3, 5, 7, 9]) print 'Error ' + str(Y_hat - Y) # Compute squared error SE = (Y_hat - Y) ** 2 print 'Squared Error ' + str(SE) print 'Sum Squared Error ' + str(np.sum(SE)) Explanation: Multiple linear regression generalizes linear regression, allowing the dependent variable to be a linear function of multiple independent variables. As before, we assume that the variable $Y$ is a linear function of $X_1,\ldots, X_k$: $$ Y_i = \beta_0 + \beta_1 X_{1i} + \ldots + \beta_k X_{ki} + \epsilon_i $$ Often in finance the form will be written as follows, but it is just the variable name that changes and otherwise the model is identical. $$ Y_i = \alpha + \beta_1 X_{1i} + \ldots + \beta_k X_{ki} + \epsilon_i $$ For observations $i = 1,2,\ldots, n$. In order to find the plane (or hyperplane) of best fit, we will use the method of ordinary least-squares (OLS), which seeks to minimize the squared error between predictions and observations, $\sum_{i=1}^n \epsilon_i^2$. The square makes positive and negative errors equally bad, and magnifies large errors. It also makes the closed form math behind linear regression nice, but we won't go into that now. For an example of squared error, see the following. Let's say Y is our actual data, and Y_hat is the predictions made by linear regression. End of explanation # Construct a simple linear curve of 1, 2, 3, ... X1 = np.arange(100) # Make a parabola and add X1 to it, this is X2 X2 = np.array([i ** 2 for i in range(100)]) + X1 # This is our real Y, constructed using a linear combination of X1 and X2 Y = X1 + X2 plt.plot(X1, label='X1') plt.plot(X2, label='X2') plt.plot(Y, label='Y') plt.legend(); Explanation: Once we have used this method to determine the coefficients of the regression, we will be able to use new observed values of $X$ to predict values of $Y$. Each coefficient $\beta_j$ tells us how much $Y_i$ will change if we change $X_j$ by one while holding all of the other dependent variables constant. This lets us separate out the contributions of different effects. This is assuming the linear model is the correct one. We start by artificially constructing a $Y$, $X_1$, and $X_2$ in which we know the precise relationship. End of explanation # Use column_stack to combine independent variables, then add a column of ones so we can fit an intercept X = sm.add_constant( np.column_stack( (X1, X2) ) ) # Run the model results = regression.linear_model.OLS(Y, X).fit() print 'Beta_0:', results.params[0] print 'Beta_1:', results.params[1] print 'Beta_2:', results.params[2] Explanation: We can use the same function from statsmodels as we did for a single linear regression lecture. 
End of explanation # Load pricing data for two arbitrarily-chosen assets and SPY start = '2014-01-01' end = '2015-01-01' asset1 = get_pricing('DTV', fields='price', start_date=start, end_date=end) asset2 = get_pricing('FISV', fields='price', start_date=start, end_date=end) benchmark = get_pricing('SPY', fields='price', start_date=start, end_date=end) # First, run a linear regression on the two assets slr = regression.linear_model.OLS(asset1, sm.add_constant(asset2)).fit() print 'SLR beta of asset2:', slr.params[1] # Run multiple linear regression using asset2 and SPY as independent variables mlr = regression.linear_model.OLS(asset1, sm.add_constant(np.column_stack((asset2, benchmark)))).fit() prediction = mlr.params[0] + mlr.params[1]*asset2 + mlr.params[2]*benchmark prediction.name = 'Prediction' print 'MLR beta of asset2:', mlr.params[1], '\nMLR beta of S&P 500:', mlr.params[2] Explanation: The same care must be taken with these results as with partial derivatives. The formula for $Y$ is ostensibly $$X_1 + X_2 = X_1 + X^2 + X_1 = 2 X_1 + X^2$$ Or $2X_1$ plus a parabola. However, the coefficient of $X_1$ is 1. That is because $Y$ changes by 1 if we change $X_1$ by 1 <i>while holding $X_2$ constant</i>. Multiple linear regression separates out contributions from different variables. Similarly, running a linear regression on two securities might give a high $\beta$. However, if we bring in a third security (like SPY, which tracks the S&P 500) as an independent variable, we may find that the correlation between the first two securities is almost entirely due to them both being correlated with the S&P 500. This is useful because the S&P 500 may then be a more reliable predictor of both securities than they were of each other. This method allows us to better gauge the significance between the two securities and the problem is known as confounding. End of explanation # Plot the three variables along with the prediction given by the MLR asset1.plot() asset2.plot() benchmark.plot() prediction.plot(color='y') plt.xlabel('Price') plt.legend(bbox_to_anchor=(1,1), loc=2); # Plot only the dependent variable and the prediction to get a closer look asset1.plot() prediction.plot(color='y') plt.xlabel('Price') plt.legend(); Explanation: The next step after running an analysis is determining if we can even trust the results. A good first step is checking to see if anything looks weird in graphs of the independent variables, dependent variables, and predictions. 
End of explanation mlr.summary() Explanation: Evaluation We can get some statistics about the fit from the result returned by the regression: End of explanation X1 = np.arange(100) X2 = [i**2 for i in range(100)] - X1 X3 = [np.log(i) for i in range(1, 101)] + X2 X4 = 5 * X1 Y = 2 * X1 + 0.5 * X2 + 10 * X3 + X4 plt.plot(X1, label='X1') plt.plot(X2, label='X2') plt.plot(X3, label='X3') plt.plot(X4, label='X4') plt.plot(Y, label='Y') plt.legend(); results = regression.linear_model.OLS(Y, sm.add_constant(np.column_stack((X1,X2,X3,X4)))).fit() print "Beta_0: ", results.params[0] print "Beta_1: ", results.params[1] print "Beta_2: ", results.params[2] print "Beta_3: ", results.params[3] print "Beta_4: ", results.params[4] data = pd.DataFrame(np.column_stack((X1,X2,X3,X4)), columns = ['X1','X2','X3','X4']) response = pd.Series(Y, name='Y') def forward_aic(response, data): # This function will work with pandas dataframes and series # Initialize some variables explanatory = list(data.columns) selected = pd.Series(np.ones(data.shape[0]), name="Intercept") current_score, best_new_score = np.inf, np.inf # Loop while we haven't found a better model while current_score == best_new_score and len(explanatory) != 0: scores_with_elements = [] count = 0 # For each explanatory variable for element in explanatory: # Make a set of explanatory variables including our current best and the new one tmp = pd.concat([selected, data[element]], axis=1) # Test the set result = regression.linear_model.OLS(Y, tmp).fit() score = result.aic scores_with_elements.append((score, element, count)) count += 1 # Sort the scoring list scores_with_elements.sort(reverse = True) # Get the best new variable best_new_score, best_element, index = scores_with_elements.pop() if current_score > best_new_score: # If it's better than the best add it to the set explanatory.pop(index) selected = pd.concat([selected, data[best_element]],axis=1) current_score = best_new_score # Return the final model model = regression.linear_model.OLS(Y, selected).fit() return model result = forward_aic(Y, data) result.summary() Explanation: Model Assumptions The validity of these statistics depends on whether or not the assumptions of the linear regression model are satisfied. These are: * The independent variable is not random. * The variance of the error term is constant across observations. This is important for evaluating the goodness of the fit. * The errors are not autocorrelated. The Durbin-Watson statistic reported by the regression detects this. If it is close to $2$, there is no autocorrelation. * The errors are normally distributed. If this does not hold, we cannot use some of the statistics, such as the F-test. Multiple linear regression also requires an additional assumption: * There is no exact linear relationship between the independent variables. Otherwise, it is impossible to solve for the coefficients $\beta_i$ uniquely, since the same linear equation can be expressed in multiple ways. If there is a linear relationship between any set of independent variables, also known as covariance, we say that they are linear combinations of each other. In the case where they are dependent on each other in this manner, the values of our $\beta_i$ coefficients will be inaccurate for a given $X_i$. The intuition for this can be found in an exteme example where $X_1$ and $X_2$ are 100% covarying. In that case then linear regression can equivalently assign the total coefficient sum in any combination without affecting the predictive capability. 
$$ 1X_1 + 0X_2 = 0.5X_1 + 0.5X_2 = 0X_1 + 1X_2 $$ While our coefficients may be nondescriptive, the ultimate model may still be accurate provided that there is a good overall fit between the independent variables and the dependent variables. The best practice for constructing a model where dependence is a problem is to leave out the less descriptive variables that are correlated with the better ones. This improves the model by reducing the chances of overfitting while bringing the $\beta_i$ estimates closer to their true values. If we confirm that the necessary assumptions of the regression model are satisfied, we can safely use the statistics reported to analyze the fit. For example, the $R^2$ value tells us the fraction of the total variation of $Y$ that is explained by the model. When doing multiple linear regression, however, we prefer to use adjusted $R^2$, which corrects for the small increases in $R^2$ that occur when we add more regression variables to the model, even if they are not significantly correlated with the dependent variable. Adjusted $R^2$ is defined as $$ 1 - (1 - R^2)\frac{n-1}{n-k-1} $$ where $n$ is the number of observations and $k$ is the number of independent variables in the model. Other useful statistics include the F-statistic and the standard error of the estimate. Model Selection Example When deciding on the best possible model of your dependent variable, there are several different methods to turn to. If you use too many explanatory variables, you run the risk of overfitting your model, but if you use too few you may end up with a terrible fit. One of the most prominent methods to decide on a best model is stepwise regression. Forward stepwise regression starts from an empty model and tests each individual variable, selecting the one that results in the best model quality, usually measured with AIC or BIC (lowest is best). It then adds the remaining variables one at a time, testing each subsequent combination of explanatory variables in a regression and calculating the AIC or BIC value at each step. At the end of the regression, the model with the best quality (according to the given measure) is selected and presented as the final, best model. This does have limitations, however. It does not test every single possible combination of variables so it may miss the theoretical best model if a particular variable was written off earlier in performing the algorithm. As such, stepwise regression should be used in combination with your best judgment regarding the model. End of explanation
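Because the adjusted R² formula above is easy to get backwards, a quick numerical check against statsmodels can be reassuring. The snippet below is a sketch added for illustration; it reuses the fitted mlr model from the asset example and assumes a statsmodels version whose RegressionResults expose nobs, rsquared and rsquared_adj.

```python
# Sketch: verify the adjusted R^2 formula above against statsmodels' own value.
# `mlr` is the fitted multiple regression from the asset example earlier.
n = int(mlr.nobs)   # number of observations
k = 2               # independent variables in the model (asset2 and SPY)
r2 = mlr.rsquared
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - k - 1)
print('adjusted R^2 (formula): %f' % adj_r2)
print('adjusted R^2 (statsmodels): %f' % mlr.rsquared_adj)
```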
5,273
Given the following text description, write Python code to implement the functionality described below step by step Description: <h1 align="center">Non-Rigid Registration Step1: Utilities Load utilities that are specific to the POPI data, functions for loading ground truth data, display and the labels for masks. Step2: Loading Data Load all of the images, masks and point data into corresponding lists. If the data is not available locally it will be downloaded from the original remote repository. Take a look at the images. According to the documentation on the POPI site, volume number one corresponds to end inspiration (maximal air volume). Step3: Getting to know your data While the POPI site states that image number 1 is end inspiration, and visual inspection seems to suggest this is correct, we should probably take a look at the lung volumes to ensure that what we expect is indeed what is happening. Which image is end inspiration and which end expiration? Step4: Free Form Deformation This function will align the fixed and moving images using an FFD. If given a mask, the similarity metric will be evaluated using points sampled inside the mask. If given fixed and moving points, the similarity metric value and the target registration errors will be displayed during registration. As this notebook performs intra-modal registration, we use the MeanSquares similarity metric (simple to compute and appropriate for the task). Step5: Perform Registration The following cell allows you to select the images used for registration, runs the registration, and afterwards computes statistics comparing the target registration errors before and after registration and displays a histogram of the TREs. To time the registration, uncomment the timeit magic. <b>Note</b> Step6: Another option for evaluating the registration is to use segmentation. In this case, we transfer the segmentation from one image to the other and compare the overlaps, both visually and quantitatively.
Python Code: import SimpleITK as sitk import registration_utilities as ru import registration_callbacks as rc from __future__ import print_function import matplotlib.pyplot as plt %matplotlib inline from ipywidgets import interact, fixed #utility method that either downloads data from the MIDAS repository or #if already downloaded returns the file name for reading from disk (cached data) from downloaddata import fetch_data as fdata Explanation: <h1 align="center">Non-Rigid Registration: Free Form Deformation</h1> This notebook illustrates the use of the Free Form Deformation (FFD) based non-rigid registration algorithm in SimpleITK. The data we work with is a 4D (3D+time) thoracic-abdominal CT, the Point-validated Pixel-based Breathing Thorax Model (POPI) model. This data consists of a set of temporal CT volumes, a set of masks segmenting each of the CTs to air/body/lung, and a set of corresponding points across the CT volumes. The POPI model is provided by the Léon Bérard Cancer Center & CREATIS Laboratory, Lyon, France. The relevant publication is: J. Vandemeulebroucke, D. Sarrut, P. Clarysse, "The POPI-model, a point-validated pixel-based breathing thorax model", Proc. XVth International Conference on the Use of Computers in Radiation Therapy (ICCR), Toronto, Canada, 2007. The POPI data, and additional 4D CT data sets with reference points are available from the CREATIS Laboratory <a href="http://www.creatis.insa-lyon.fr/rio/popi-model?action=show&redirect=popi">here</a>. End of explanation %run popi_utilities_setup.py Explanation: Utilities Load utilities that are specific to the POPI data, functions for loading ground truth data, display and the labels for masks. End of explanation images = [] masks = [] points = [] for i in range(0,10): image_file_name = 'POPI/meta/{0}0-P.mhd'.format(i) mask_file_name = 'POPI/masks/{0}0-air-body-lungs.mhd'.format(i) points_file_name = 'POPI/landmarks/{0}0-Landmarks.pts'.format(i) images.append(sitk.ReadImage(fdata(image_file_name), sitk.sitkFloat32)) #read and cast to format required for registration masks.append(sitk.ReadImage(fdata(mask_file_name))) points.append(read_POPI_points(fdata(points_file_name))) interact(display_coronal_with_overlay, temporal_slice=(0,len(images)-1), coronal_slice = (0, images[0].GetSize()[1]-1), images = fixed(images), masks = fixed(masks), label=fixed(lung_label), window_min = fixed(-1024), window_max=fixed(976)); Explanation: Loading Data Load all of the images, masks and point data into corresponding lists. If the data is not available locally it will be downloaded from the original remote repository. Take a look at the images. According to the documentation on the POPI site, volume number one corresponds to end inspiration (maximal air volume). End of explanation label_shape_statistics_filter = sitk.LabelShapeStatisticsImageFilter() for i, mask in enumerate(masks): label_shape_statistics_filter.Execute(mask) print('Lung volume in image {0} is {1} liters.'.format(i,0.000001*label_shape_statistics_filter.GetPhysicalSize(lung_label))) Explanation: Geting to know your data While the POPI site states that image number 1 is end inspiration, and visual inspection seems to suggest this is correct, we should probably take a look at the lung volumes to ensure that what we expect is indeed what is happening. Which image is end inspiration and which end expiration? 
End of explanation def bspline_intra_modal_registration(fixed_image, moving_image, fixed_image_mask=None, fixed_points=None, moving_points=None): registration_method = sitk.ImageRegistrationMethod() # Determine the number of Bspline control points using the physical spacing we want for the control grid. grid_physical_spacing = [50.0, 50.0, 50.0] # A control point every 50mm image_physical_size = [size*spacing for size,spacing in zip(fixed_image.GetSize(), fixed_image.GetSpacing())] mesh_size = [int(image_size/grid_spacing + 0.5) \ for image_size,grid_spacing in zip(image_physical_size,grid_physical_spacing)] initial_transform = sitk.BSplineTransformInitializer(image1 = fixed_image, transformDomainMeshSize = mesh_size, order=3) registration_method.SetInitialTransform(initial_transform) registration_method.SetMetricAsMeanSquares() # Settings for metric sampling, usage of a mask is optional. When given a mask the sample points will be # generated inside that region. Also, this implicitly speeds things up as the mask is smaller than the # whole image. registration_method.SetMetricSamplingStrategy(registration_method.RANDOM) registration_method.SetMetricSamplingPercentage(0.01) if fixed_image_mask: registration_method.SetMetricFixedMask(fixed_image_mask) # Multi-resolution framework. registration_method.SetShrinkFactorsPerLevel(shrinkFactors = [4,2,1]) registration_method.SetSmoothingSigmasPerLevel(smoothingSigmas=[2,1,0]) registration_method.SmoothingSigmasAreSpecifiedInPhysicalUnitsOn() registration_method.SetInterpolator(sitk.sitkLinear) registration_method.SetOptimizerAsLBFGSB(gradientConvergenceTolerance=1e-5, numberOfIterations=100) # If corresponding points in the fixed and moving image are given then we display the similarity metric # and the TRE during the registration. if fixed_points and moving_points: registration_method.AddCommand(sitk.sitkStartEvent, rc.metric_and_reference_start_plot) registration_method.AddCommand(sitk.sitkEndEvent, rc.metric_and_reference_end_plot) registration_method.AddCommand(sitk.sitkIterationEvent, lambda: rc.metric_and_reference_plot_values(registration_method, fixed_points, moving_points)) return registration_method.Execute(fixed_image, moving_image) Explanation: Free Form Deformation This function will align the fixed and moving images using a FFD. If given a mask, the similarity metric will be evaluated using points sampled inside the mask. If given fixed and moving points the similarity metric value and the target registration errors will be displayed during registration. As this notebook performs intra-modal registration, we use the MeanSquares similarity metric (simple to compute and appropriate for the task). End of explanation #%%timeit -r1 -n1 # Select the fixed and moving images, valid entries are in [0,9]. 
fixed_image_index = 0 moving_image_index = 7 tx = bspline_intra_modal_registration(fixed_image = images[fixed_image_index], moving_image = images[moving_image_index], fixed_image_mask = (masks[fixed_image_index] == lung_label), fixed_points = points[fixed_image_index], moving_points = points[moving_image_index] ) initial_errors_mean, initial_errors_std, _, initial_errors_max, initial_errors = ru.registration_errors(sitk.Euler3DTransform(), points[fixed_image_index], points[moving_image_index]) final_errors_mean, final_errors_std, _, final_errors_max, final_errors = ru.registration_errors(tx, points[fixed_image_index], points[moving_image_index]) plt.hist(initial_errors, bins=20, alpha=0.5, label='before registration', color='blue') plt.hist(final_errors, bins=20, alpha=0.5, label='after registration', color='green') plt.legend() plt.title('TRE histogram'); print('Initial alignment errors in millimeters, mean(std): {:.2f}({:.2f}), max: {:.2f}'.format(initial_errors_mean, initial_errors_std, initial_errors_max)) print('Final alignment errors in millimeters, mean(std): {:.2f}({:.2f}), max: {:.2f}'.format(final_errors_mean, final_errors_std, final_errors_max)) Explanation: Perform Registration The following cell allows you to select the images used for registration, runs the registration, and afterwards computes statstics comparing the target registration errors before and after registration and displays a histogram of the TREs. To time the registration, uncomment the timeit magic. <b>Note</b>: this creates a seperate scope for the cell. Variables set inside the cell, specifically tx, will become local variables and thus their value is not available in other cells. End of explanation # Transfer the segmentation via the estimated transformation. Use Nearest Neighbor interpolation to retain the labels. transformed_labels = sitk.Resample(masks[moving_image_index], images[fixed_image_index], tx, sitk.sitkNearestNeighbor, 0.0, masks[moving_image_index].GetPixelIDValue()) segmentations_before_and_after = [masks[moving_image_index], transformed_labels] interact(display_coronal_with_label_maps_overlay, coronal_slice = (0, images[0].GetSize()[1]-1), mask_index=(0,len(segmentations_before_and_after)-1), image = fixed(images[fixed_image_index]), masks = fixed(segmentations_before_and_after), label=fixed(lung_label), window_min = fixed(-1024), window_max=fixed(976)); # Compute the Dice coefficient and Hausdorf distance between the segmentations before, and after registration. 
ground_truth = masks[fixed_image_index] == lung_label before_registration = masks[moving_image_index] == lung_label after_registration = transformed_labels == lung_label label_overlap_measures_filter = sitk.LabelOverlapMeasuresImageFilter() label_overlap_measures_filter.Execute(ground_truth, before_registration) print("Dice coefficient before registration: {:.2f}".format(label_overlap_measures_filter.GetDiceCoefficient())) label_overlap_measures_filter.Execute(ground_truth, after_registration) print("Dice coefficient after registration: {:.2f}".format(label_overlap_measures_filter.GetDiceCoefficient())) hausdorff_distance_image_filter = sitk.HausdorffDistanceImageFilter() hausdorff_distance_image_filter.Execute(ground_truth, before_registration) print("Hausdorff distance before registration: {:.2f}".format(hausdorff_distance_image_filter.GetHausdorffDistance())) hausdorff_distance_image_filter.Execute(ground_truth, after_registration) print("Hausdorff distance after registration: {:.2f}".format(hausdorff_distance_image_filter.GetHausdorffDistance())) Explanation: Another option for evaluating the registration is to use segmentation. In this case, we transfer the segmentation from one image to the other and compare the overlaps, both visually, and quantitatively. End of explanation
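Beyond warping the label map, the same estimated transform can be used to resample the moving CT itself onto the fixed image grid, which is handy for visual overlays. The call below is a sketch added for illustration, mirroring the sitk.Resample call used for the labels above but with linear interpolation for intensities; the 0.0 default pixel value is an assumption.

```python
# Sketch (not in the original notebook): resample the moving CT with the
# estimated FFD so it lives on the fixed image grid. Linear interpolation is
# used for intensities; the 0.0 default value is an assumption.
resampled_moving = sitk.Resample(images[moving_image_index],  # image to deform
                                 images[fixed_image_index],   # reference grid
                                 tx,                          # transform estimated above
                                 sitk.sitkLinear,
                                 0.0,
                                 images[moving_image_index].GetPixelIDValue())
```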
5,274
Given the following text description, write Python code to implement the functionality described below step by step Description: A Glance of LSTM structure and embedding layer We will build an LSTM network that learns from characters only. At each time step, the input is a single char. We will see that this LSTM is able to learn words and grammar from a sequence of chars. The following figure shows an unrolled LSTM network and how we generate the embedding of a char. The one-hot to embedding operation is a special case of a fully connected network. <img src="http Step1: Get Data Step2: Sample training data Step3: Train model Step4: Inference from model
Python Code: from lstm import lstm_unroll, lstm_inference_symbol from bucket_io import BucketSentenceIter from rnn_model import LSTMInferenceModel # Read from doc def read_content(path): with open(path) as ins: content = ins.read() return content # Build a vocabulary of what char we have in the content def build_vocab(path): content = read_content(path) content = list(content) idx = 1 # 0 is left for zero-padding the_vocab = {} for word in content: if len(word) == 0: continue if not word in the_vocab: the_vocab[word] = idx idx += 1 return the_vocab # We will assign each char with a special numerical id def text2id(sentence, the_vocab): words = list(sentence) words = [the_vocab[w] for w in words if len(w) > 0] return words # Evaluation def Perplexity(label, pred): label = label.T.reshape((-1,)) loss = 0. for i in range(pred.shape[0]): loss += -np.log(max(1e-10, pred[i][int(label[i])])) return np.exp(loss / label.size) Explanation: A Glance of LSTM structure and embedding layer We will build a LSTM network to learn from char only. At each time, input is a char. We will see this LSTM is able to learn words and grammers from sequence of chars. The following figure is showing an unrolled LSTM network, and how we generate embedding of a char. The one-hot to embedding operation is a special case of fully connected network. <img src="http://data.mxnet.io/mxnet/data/char-rnn_1.png"> <img src="http://data.mxnet.io/mxnet/data/char-rnn_2.png"> End of explanation import os data_url = "http://data.mxnet.io/mxnet/data/char_lstm.zip" os.system("wget %s" % data_url) os.system("unzip -o char_lstm.zip") Explanation: Get Data End of explanation # The batch size for training batch_size = 32 # We can support various length input # For this problem, we cut each input sentence to length of 129 # So we only need fix length bucket buckets = [129] # hidden unit in LSTM cell num_hidden = 512 # embedding dimension, which is, map a char to a 256 dim vector num_embed = 256 # number of lstm layer num_lstm_layer = 3 # we will show a quick demo in 2 epoch # and we will see result by training 75 epoch num_epoch = 2 # learning rate learning_rate = 0.01 # we will use pure sgd without momentum momentum = 0.0 # we can select multi-gpu for training # for this demo we only use one devs = [mx.context.gpu(i) for i in range(1)] # build char vocabluary from input vocab = build_vocab("./obama.txt") # generate symbol for a length def sym_gen(seq_len): return lstm_unroll(num_lstm_layer, seq_len, len(vocab) + 1, num_hidden=num_hidden, num_embed=num_embed, num_label=len(vocab) + 1, dropout=0.2) # initalize states for LSTM init_c = [('l%d_init_c'%l, (batch_size, num_hidden)) for l in range(num_lstm_layer)] init_h = [('l%d_init_h'%l, (batch_size, num_hidden)) for l in range(num_lstm_layer)] init_states = init_c + init_h # we can build an iterator for text data_train = BucketSentenceIter("./obama.txt", vocab, buckets, batch_size, init_states, seperate_char='\n', text2id=text2id, read_content=read_content) # the network symbol symbol = sym_gen(buckets[0]) Explanation: Sample training data: all to Renewal Keynote Address Call to Renewal Pt 1Call to Renewal Part 2 TOPIC: Our Past, Our Future &amp; Vision for America June 28, 2006 Call to Renewal' Keynote Address Complete Text Good morning. I appreciate the opportunity to speak here at the Call to R enewal's Building a Covenant for a New America conference. I've had the opportunity to take a look at your Covenant for a New Ame rica. 
It is filled with outstanding policies and prescriptions for much of what ails this country. So I'd like to congratulate yo u all on the thoughtful presentations you've given so far about poverty and justice in America, and for putting fire under the fe et of the political leadership here in Washington.But today I'd like to talk about the connection between religion and politics a nd perhaps offer some thoughts about how we can sort through some of the often bitter arguments that we've been seeing over the l ast several years.I do so because, as you all know, we can affirm the importance of poverty in the Bible; and we can raise up and pass out this Covenant for a New America. We can talk to the press, and we can discuss the religious call to address poverty and environmental stewardship all we want, but it won't have an impact unless we tackle head-on the mutual suspicion that sometimes LSTM Hyperparameters End of explanation # Train a LSTM network as simple as feedforward network model = mx.model.FeedForward(ctx=devs, symbol=symbol, num_epoch=num_epoch, learning_rate=learning_rate, momentum=momentum, wd=0.0001, initializer=mx.init.Xavier(factor_type="in", magnitude=2.34)) # Fit it model.fit(X=data_train, eval_metric = mx.metric.np(Perplexity), batch_end_callback=mx.callback.Speedometer(batch_size, 50), epoch_end_callback=mx.callback.do_checkpoint("obama")) Explanation: Train model End of explanation # helper strcuture for prediction def MakeRevertVocab(vocab): dic = {} for k, v in vocab.items(): dic[v] = k return dic # make input from char def MakeInput(char, vocab, arr): idx = vocab[char] tmp = np.zeros((1,)) tmp[0] = idx arr[:] = tmp # helper function for random sample def _cdf(weights): total = sum(weights) result = [] cumsum = 0 for w in weights: cumsum += w result.append(cumsum / total) return result def _choice(population, weights): assert len(population) == len(weights) cdf_vals = _cdf(weights) x = random.random() idx = bisect.bisect(cdf_vals, x) return population[idx] # we can use random output or fixed output by choosing largest probability def MakeOutput(prob, vocab, sample=False, temperature=1.): if sample == False: idx = np.argmax(prob, axis=1)[0] else: fix_dict = [""] + [vocab[i] for i in range(1, len(vocab) + 1)] scale_prob = np.clip(prob, 1e-6, 1 - 1e-6) rescale = np.exp(np.log(scale_prob) / temperature) rescale[:] /= rescale.sum() return _choice(fix_dict, rescale[0, :]) try: char = vocab[idx] except: char = '' return char # load from check-point _, arg_params, __ = mx.model.load_checkpoint("obama", 75) # build an inference model model = LSTMInferenceModel(num_lstm_layer, len(vocab) + 1, num_hidden=num_hidden, num_embed=num_embed, num_label=len(vocab) + 1, arg_params=arg_params, ctx=mx.gpu(), dropout=0.2) # generate a sequence of 1200 chars seq_length = 1200 input_ndarray = mx.nd.zeros((1,)) revert_vocab = MakeRevertVocab(vocab) # Feel free to change the starter sentence output ='The joke' random_sample = True new_sentence = True ignore_length = len(output) for i in range(seq_length): if i <= ignore_length - 1: MakeInput(output[i], vocab, input_ndarray) else: MakeInput(output[-1], vocab, input_ndarray) prob = model.forward(input_ndarray, new_sentence) new_sentence = False next_char = MakeOutput(prob, revert_vocab, random_sample) if next_char == '': new_sentence = True if i >= ignore_length - 1: output += next_char # Let's see what we can learned from char in Obama's speech. print(output) Explanation: Inference from model End of explanation
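The claim above that the one-hot-to-embedding step is just a special case of a fully connected layer can be made concrete with a few lines of NumPy: multiplying a one-hot row vector by the embedding matrix simply selects one row of that matrix. The toy sizes below are made up for the demonstration and are unrelated to the real vocab and num_embed used in the model.

```python
# Toy illustration (added here, not part of the original tutorial): a one-hot
# vector times the embedding matrix is just a row lookup.
import numpy as np

toy_vocab, toy_embed = 5, 3                  # made-up sizes for the demo
embed_weight = np.random.randn(toy_vocab, toy_embed)

char_id = 2
one_hot = np.zeros(toy_vocab)
one_hot[char_id] = 1.0

print(np.allclose(one_hot.dot(embed_weight), embed_weight[char_id]))  # True
```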
5,275
Given the following text description, write Python code to implement the functionality described below step by step Description: In depth with SVMs Step1: The rbf kernel has an inverse bandwidth-parameter gamma, where large values of gamma mean a very localized influence for each data point, and small values mean a very global influence. Let's see these two parameters in action Step2: Exercise
Python Code: from sklearn.metrics.pairwise import rbf_kernel line = np.linspace(-3, 3, 100)[:, np.newaxis] kernel_value = rbf_kernel(line, [[0]], gamma=1) plt.plot(line, kernel_value) Explanation: In depth with SVMs: Support Vector Machines SVM stands for "support vector machines". They are efficient and easy to use estimators. They come in two kinds: SVCs, Support Vector Classifiers, for classification problems, and SVRs, Support Vector Regressors, for regression problems. Linear SVMs The SVM module contains LinearSVC, which we already discussed briefly in the section on linear models. Using SVC(kernel="linear") will also yield a linear predictor that is only different in minor technical aspects. Kernel SVMs The real power of SVMs lies in using kernels, which allow for non-linear decision boundaries. A kernel defines a similarity measure between data points. The most common are: linear will give linear decision frontiers. It is the most computationally efficient approach and the one that requires the least amount of data. poly will give decision frontiers that are polynomial. The order of this polynomial is given by the 'order' argument. rbf uses 'radial basis functions' centered at each support vector to assemble a decision frontier. The size of the RBFs ultimately controls the smoothness of the decision frontier. RBFs are the most flexible approach, but also the one that will require the largest amount of data. Predictions in a kernel-SVM are made using the formular $$ \hat{y} = \text{sign}(\alpha_0 + \sum_{j}\alpha_j y_j k(\mathbf{x^{(j)}}, \mathbf{x})) $$ where $\mathbf{x}^{(j)}$ are training samples, $\mathbf{y}^{(j)}$ the corresponding labels, $\mathbf{x}$ is a test-sample to predict on, $k$ is the kernel, and $\alpha$ are learned parameters. What this says is "if $\mathbf{x}$ is similar to $\mathbf{x}^{(j)}$ then they probably have the same label", where the importance of each $\mathbf{x}^{(j)}$ for this decision is learned. [Or something much less intuitive about an infinite dimensional Hilbert-space] Often only few samples have non-zero $\alpha$, these are called the "support vectors" from which SVMs get their name. These are the most discriminant samples. The most important parameter of the SVM is the regularization parameter $C$, which bounds the influence of each individual sample: Low C values: many support vectors... Decision frontier = mean(class A) - mean(class B) High C values: small number of support vectors: Decision frontier fully driven by most discriminant samples The other important parameters are those of the kernel. Let's look at the RBF kernel in more detail: $$k(\mathbf{x}, \mathbf{x'}) = \exp(-\gamma ||\mathbf{x} - \mathbf{x'}||^2)$$ End of explanation from figures import plot_svm_interactive plot_svm_interactive() Explanation: The rbf kernel has an inverse bandwidth-parameter gamma, where large gamma mean a very localized influence for each data point, and small values mean a very global influence. Let's see these two parameters in action: End of explanation from sklearn import datasets digits = datasets.load_digits() X, y = digits.data, digits.target # split the dataset, apply grid-search Explanation: Exercise: tune an SVM on the digits dataset End of explanation
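The exercise above stops at the comment "# split the dataset, apply grid-search". One possible way to finish it is sketched below; it is not the official solution, and it assumes a scikit-learn release where train_test_split and GridSearchCV live in sklearn.model_selection (older versions used sklearn.cross_validation and sklearn.grid_search instead).

```python
# One possible completion of the digits exercise (a sketch, not the official
# answer). Assumes sklearn.model_selection is available.
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.svm import SVC

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

param_grid = {'C': [0.1, 1, 10, 100],
              'gamma': [0.0001, 0.001, 0.01, 0.1]}
grid = GridSearchCV(SVC(kernel='rbf'), param_grid, cv=5)
grid.fit(X_train, y_train)

print('best parameters:', grid.best_params_)
print('test accuracy:', grid.score(X_test, y_test))
```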
5,276
Given the following text description, write Python code to implement the functionality described below step by step Description: Fundamentals of Python written by Gene Kogan This notebook contains a small review of Python basics. We will only review several core concepts in Python with which we will be working a lot. The lecture video for this notebook will discuss some of these basics in more detail. If you are already familiar with Python, you may feel free to skip this. If you are a beginner to Python, it will be very helpful to review a more comprehensive tutorial before moving on. Some online resources for learning python follow Step1: Lists We start with the most basic data containers in Python, lists Step2: Slicing lists Referring to subsets of lists (remember the 0 indexing) Step3: List operations Various methods for lists Step4: Count number of elements in list Step5: Add element to the end of the list Step6: Insert element at index 2 Step7: List comprehensions Convenient way in Python to make lists which are functions of other lists. Note the ** operator is an exponent, so x**2 means $x^2$. New list is squares of z Step8: New list is True/False if element is >3 or not Step9: Dictionaries (Dicts) Another way of storing data, which can be looked up using keys. Step10: Loops For-loops are simple, but we will learn powerful ways to avoid them with Numpy, because they are not optimized for speed. Step11: Get a list of integers between two endpoints with range. Step12: Functions A function is a reusable block of code. Step13: Classes Classes bring object-oriented programming to Python. Each class can specify its constructor using the built-in __init__ method. Step14: Libraries / packages There are many libraries (often called "packages") avaialble for Python for various functions. Let's import two packages Step15: Plotting Let's plot our sine curve. We'll be plotting a lot to make concepts more visually clear in the future.
Python Code: myVariable = 'hello world' print(myVariable) Explanation: Fundamentals of Python written by Gene Kogan This notebook contains a small review of Python basics. We will only review several core concepts in Python with which we will be working a lot. The lecture video for this notebook will discuss some of these basics in more detail. If you are already familiar with Python, you may feel free to skip this. If you are a beginner to Python, it will be very helpful to review a more comprehensive tutorial before moving on. Some online resources for learning python follow: Learn Python the Hard Way Codecademy Learnpython.org How to use these notebooks Whether you are viewing this notebook using Project Jupyter or using Google Colab, you may refer to any guides which explain how to navigate and interact with IPython, Jupyter, or Colab notebooks. The notebook consists of cells which are either plain text/markdown or Python code. The code cells can be executed by hitting the play button or with the keyboard shortcut CTRL+Enter (or SHIFT+Enter to execute and advance to the next cell). To modify a cell, click into it and a text cursor will appear (double click in the case of the markdown/plain text slides). When you first connect to the notebook, you enter into an empty Python shell. As you execute the cells, you introduce variables and run operations on them. If at some point you disconnect, the variables and state of the shell are lost. Note that you are not forced into running the cells in orer from top to bottom, but if you ever receive an error saying a variable is missing, it's probably because you ddiddn't execute an earlier cell declaring that variable, or the notebook disconnected after declaring it. Features in Python A review of core concepts in Python follows: Variables You can assign any item in Python to a variable, to refer to or operate on later. End of explanation # basic list in Python X = [2, 5, 7, -2, 0, 8, 13] # lists can contain anything Y = [99, 'Alice', -14.2903, 'Bob', [1,2,3], 50] # lists are 0-indexed, so index 2 is the third element in X print(X[2]) Explanation: Lists We start with the most basic data containers in Python, lists: End of explanation # slicing y = X[0:2] print(y) y = X[:-2] # -2 means 2 from the end. equivalent here to x[0:4] print(y) Explanation: Slicing lists Referring to subsets of lists (remember the 0 indexing): End of explanation print(X.index(7)) Explanation: List operations Various methods for lists: Find index of element with value 5 End of explanation print(len(X)) Explanation: Count number of elements in list End of explanation X.append(99) print(X) Explanation: Add element to the end of the list End of explanation X.insert(1, 55) print(X) Explanation: Insert element at index 2 End of explanation z = [x**2 for x in X] print(z) Explanation: List comprehensions Convenient way in Python to make lists which are functions of other lists. Note the ** operator is an exponent, so x**2 means $x^2$. New list is squares of z End of explanation z = [x>3 for x in X] print(z) Explanation: New list is True/False if element is >3 or not End of explanation z = {'name':'Gene', 'apples':5, 'oranges':8} print(z['name']) print(z['oranges']) if 'apples' in z: print('yes, the key apples is in the dict z') Explanation: Dictionaries (Dicts) Another way of storing data, which can be looked up using keys. 
End of explanation names = ['Alice', 'Bob', 'Carol', 'David'] for name in names: print('Hi %s' % name) Explanation: Loops For-loops are simple, but we will learn powerful ways to avoid them with Numpy, because they are not optimized for speed. End of explanation for i in range(5, 9): print(i) Explanation: Get a list of integers between two endpoints with range. End of explanation def myFunction(myArgument): print('Hello '+myArgument) myFunction('Alice') myFunction('Bob') Explanation: Functions A function is a reusable block of code. End of explanation class MyClass(object): def __init__(self, message): # constructor self.message = message # assign local variable in object def print_message(self, n_times=2): for i in range(n_times): print('%s' % self.message) M = MyClass('Hello from ml4a!') M.print_message(3) Explanation: Classes Classes bring object-oriented programming to Python. Each class can specify its constructor using the built-in __init__ method. End of explanation import matplotlib.pyplot as plt import math z = math.cos(1) print(z) Explanation: Libraries / packages There are many libraries (often called "packages") avaialble for Python for various functions. Let's import two packages: math and matplotlib.pyplot (with an alias plt). Let's use the math function to calculate the cosine of 1. End of explanation X = [0.1*x for x in range(-50,50)] Y = [math.sin(x) for x in X] # make the figure plt.figure(figsize=(6,6)) plt.plot(X, Y) plt.xlabel('x') plt.ylabel('y = sin(x)') plt.title('My plot title') Explanation: Plotting Let's plot our sine curve. We'll be plotting a lot to make concepts more visually clear in the future. End of explanation
5,277
Given the following text description, write Python code to implement the functionality described below step by step Description: Semantic Segmentation In this exercise we will train an end-to-end convolutional neural network for semantic segmentation. The goal of semantic segmentation is to classify the image on the pixel level. For each pixel we want to determine the class of the object to which it belongs. This is different from image classification which classifies an image as a whole and doesn't tell us the location of the objects. This is why semantic segmentation goes into the category of structured prediction problems. It answers on both the 'what' and 'where' questions while classifcation tells us only 'what'. By classifying each pixel we are infering the structure of the whole scene. Typical examples of input image and target labels for this problem are shown below. Input image | Target image -|- | | 1. Cityscapes dataset Cityscapes dataset contains a diverse set of stereo video sequences recorded in street scenes from 50 different cities, with high quality pixel-level annotations. Dataset contains 2975 training and 500 validation images of size 2048x1024. The test set of 1000 images is evaluated on the server and benchmark is available here. Here we will use downsampled images of size 384x160. The original dataset has 19 classes but we lowered that to 7 by uniting similar classes into broader categories. This makes sense due to low visibility of very small objects in downsampled images. We also have ignore class which we need to ignore during training because those pixels don't belong to any class. Download the prepared dataset here and extract it to the current directory. ID | Class | Color -|-|- 0 | road | purple 1 | building | grey 2 | infrastructure | yellow 3 | nature | green 4 | sky | light blue 5 | person | red 6 | vehicle | dark blue 7 | ignore | black 2. Building the graph Let's begin by importing all the modules and setting the fixed random seed. Step1: Dataset The Dataset class implements an iterator which returns the next batch data in each iteration. Data is already normalized to have zero mean and unit variance. The iteration is terminated when we reach the end of the dataset (one epoch). Step2: Inputs First, we will create input placeholders for Tensorflow computational graph of the model. For a supervised learning model, we need to declare placeholders which will hold input images (x) and target labels (y) of the mini-batches as we feed them to the network. Step3: Model Now we can define the computational graph. Here we will heavily use tf.layers high level API which handles tf.Variable creation for us. The main difference here compared to the classification model is that the network is going to be fully convolutional without any fully connected layers. Brief sketch of the model we are going to define is given below. conv3x3(32) -&gt; 4 x (pool2x2 -&gt; conv3x3(64) -&gt; conv3x3(64)) -&gt; conv1x1(7) -&gt; resize_bilinear -&gt; softmax() -&gt; Loss Step4: Loss Now we are going to implement the build_loss function which will create nodes for loss computation and return the final tf.Tensor representing the scalar loss value. Because segmentation is just classification on a pixel level we can again use the cross entropy loss function \(L\) between the target one-hot distribution \( \mathbf{y} \) and the predicted distribution from a softmax layer \( \mathbf{s} \). But compared to the image classification here we need to define the loss at each pixel. 
Below are the equations describing the loss for just one example (one pixel in our case). $$ L = - \sum_{i=1}^{C} y_i \log(s_i(\mathbf{x})) \\ s_i(\mathbf{x}) = \frac{e^{x_i}}{\sum_{j=1}^{C} e^{x_j}} $$ Step5: Putting it all together Now we can use all the building blocks from above and construct the whole forward pass Tensorflow graph in just a couple of lines. Step6: 3. Training the model Training During training we are going to compute the forward pass first to get the value of the loss function. After that we are doing the backward pass and computing all gradients of the loss w.r.t. the parameters at each layer with backpropagation. Step7: Validation We usually evaluate the semantic segmentation results with the Intersection over Union measure (IoU, aka the Jaccard index). Note that the accuracy we used on the MNIST image classification problem is a bad measure in this case because semantic segmentation datasets are often heavily imbalanced. First we compute IoU for each class in one-vs-all fashion (shown below) and then take the mean IoU (mIoU) over all classes. By taking the mean we are treating all classes as equally important. In order to compute the IoU we are going to do the forward pass on the validation data and collect the confusion matrix first. $$ IoU = \frac{TP}{TP + FN + FP} $$ Step8: Tensorboard $ tensorboard --logdir=local/logs/ 4. Restoring the pretrained network Step9: Day 4 5. Improved model with skip connections In this part we are going to improve on the previous model by adding skip connections. The role of the skip connections will be to restore the information lost due to downsampling.
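A hedged illustration of the per-pixel cross-entropy described above (an editorial sketch, not the course's reference solution; the ignore-class handling and the ignore_id=7 value follow the class table earlier in this description but are assumptions):
import tensorflow as tf

def build_loss_sketch(logits, y, num_classes=7, ignore_id=7):
    # logits: [N, H, W, C] unnormalized scores, y: [N, H, W] integer labels
    labels = tf.reshape(y, [-1])
    scores = tf.reshape(logits, [-1, num_classes])
    valid = tf.not_equal(labels, ignore_id)          # drop pixels of the 'ignore' class
    labels = tf.boolean_mask(labels, valid)
    scores = tf.boolean_mask(scores, valid)
    xent = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=scores)
    return tf.reduce_mean(xent)                      # mean cross-entropy over valid pixels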
Python Code: %matplotlib inline import time from os.path import join import tensorflow as tf import numpy as np import matplotlib.pyplot as plt from sklearn.metrics import confusion_matrix import utils from data import Dataset tf.set_random_seed(31415) tf.logging.set_verbosity(tf.logging.ERROR) plt.rcParams["figure.figsize"] = (15, 5) Explanation: Semantic Segmentation In this exercise we will train an end-to-end convolutional neural network for semantic segmentation. The goal of semantic segmentation is to classify the image on the pixel level. For each pixel we want to determine the class of the object to which it belongs. This is different from image classification which classifies an image as a whole and doesn't tell us the location of the objects. This is why semantic segmentation goes into the category of structured prediction problems. It answers on both the 'what' and 'where' questions while classifcation tells us only 'what'. By classifying each pixel we are infering the structure of the whole scene. Typical examples of input image and target labels for this problem are shown below. Input image | Target image -|- | | 1. Cityscapes dataset Cityscapes dataset contains a diverse set of stereo video sequences recorded in street scenes from 50 different cities, with high quality pixel-level annotations. Dataset contains 2975 training and 500 validation images of size 2048x1024. The test set of 1000 images is evaluated on the server and benchmark is available here. Here we will use downsampled images of size 384x160. The original dataset has 19 classes but we lowered that to 7 by uniting similar classes into broader categories. This makes sense due to low visibility of very small objects in downsampled images. We also have ignore class which we need to ignore during training because those pixels don't belong to any class. Download the prepared dataset here and extract it to the current directory. ID | Class | Color -|-|- 0 | road | purple 1 | building | grey 2 | infrastructure | yellow 3 | nature | green 4 | sky | light blue 5 | person | red 6 | vehicle | dark blue 7 | ignore | black 2. Building the graph Let's begin by importing all the modules and setting the fixed random seed. End of explanation batch_size = 10 num_classes = Dataset.num_classes # create the Dataset for training and validation train_data = Dataset('train', batch_size) val_data = Dataset('val', batch_size, shuffle=False) print('Train shape:', train_data.x.shape) print('Validation shape:', val_data.x.shape) #print('mean = ', train_data.x.mean((0,1,2))) #print('std = ', train_data.x.std((0,1,2))) Explanation: Dataset The Dataset class implements an iterator which returns the next batch data in each iteration. Data is already normalized to have zero mean and unit variance. The iteration is terminated when we reach the end of the dataset (one epoch). End of explanation # store the input image dimensions height = train_data.height width = train_data.width channels = train_data.channels # create placeholders for inputs def build_inputs(): ... Explanation: Inputs First, we will create input placeholders for Tensorflow computational graph of the model. For a supervised learning model, we need to declare placeholders which will hold input images (x) and target labels (y) of the mini-batches as we feed them to the network. End of explanation # helper function which applies conv2d + ReLU with filter size k def conv(x, num_maps, k=3): ... # helper function for 2x2 max pooling with stride=2 def pool(x): ... 
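# Editor's note: one possible way to fill in the conv/pool helpers sketched above,
# using the TF1-era tf.layers API mentioned in the text; this is an illustrative
# guess, not the exercise's reference solution.
import tensorflow as tf

def conv(x, num_maps, k=3):
    # kxk convolution + ReLU; 'same' padding keeps the spatial size unchanged
    return tf.layers.conv2d(x, num_maps, k, padding='same', activation=tf.nn.relu)

def pool(x):
    # 2x2 max pooling with stride 2 halves the spatial resolution
    return tf.layers.max_pooling2d(x, pool_size=2, strides=2)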
# this functions takes the input placeholder and the number of classes, builds the model and returns the logits def build_model(x, num_classes): ... Explanation: Model Now we can define the computational graph. Here we will heavily use tf.layers high level API which handles tf.Variable creation for us. The main difference here compared to the classification model is that the network is going to be fully convolutional without any fully connected layers. Brief sketch of the model we are going to define is given below. conv3x3(32) -&gt; 4 x (pool2x2 -&gt; conv3x3(64) -&gt; conv3x3(64)) -&gt; conv1x1(7) -&gt; resize_bilinear -&gt; softmax() -&gt; Loss End of explanation # this funcions takes logits and targets (y) and builds the loss subgraph def build_loss(logits, y): ... Explanation: Loss Now we are going to implement the build_loss function which will create nodes for loss computation and return the final tf.Tensor representing the scalar loss value. Because segmentation is just classification on a pixel level we can again use the cross entropy loss function \(L\) between the target one-hot distribution \( \mathbf{y} \) and the predicted distribution from a softmax layer \( \mathbf{s} \). But compared to the image classification here we need to define the loss at each pixel. Below are the equations describing the loss for just one example (one pixel in our case). $$ L = - \sum_{i=1}^{C} y_i log(s_j(\mathbf{x})) \ s_i(\mathbf{x}) = \frac{e^{x_i}}{\sum_{j=1}^{C} e^{x_j}} \ $$ End of explanation # create inputs # create model # create loss # we will need argmax predictions for IoU Explanation: Putting it all together Now we can use all the building blocks from above and construct the whole forward pass Tensorflow graph in just a couple of lines. End of explanation # this functions trains the model def train(sess, x, y, y_pred, loss, checkpoint_dir): num_epochs = 30 batch_size = 10 log_dir = 'local/logs' utils.clear_dir(log_dir) utils.clear_dir(checkpoint_dir) learning_rate = 1e-3 decay_power = 1.0 global_step = tf.Variable(0, trainable=False) decay_steps = num_epochs * train_data.num_batches # usually SGD learning rate is decreased over time which enables us # to better fine-tune the parameters when close to solution lr = tf.train.polynomial_decay(learning_rate, global_step, decay_steps, end_learning_rate=0, power=decay_power) ... sess = tf.Session() train(sess, x, y, y_pred, loss, 'local/checkpoint1') Explanation: 3. Training the model Training During training we are going to compute the forward pass first to get the value of the loss function. After that we are doing the backward pass and computing all gradients the loss wrt parameters at each layer with backpropagation. End of explanation def validate(sess, data, x, y, y_pred, loss, draw_steps=0): print('\nValidation phase:') ... return utils.print_stats(conf_mat, 'Validation', Dataset.class_info) sess = tf.Session() train(sess, x, y, y_pred, loss, 'local/checkpoint1') Explanation: Validation We usually evaluate the semantic segmentation results with Intersection over Union measure (IoU aka Jaccard index). Note that accurracy we used on MNIST image classification problem is a bad measure in this case because semantic segmentation datasets are often heavily imbalanced. First we compute IoU for each class in one-vs-all fashion (shown below) and then take the mean IoU (mIoU) over all classes. By taking the mean we are treating all classes as equally important. 
In order to compute the IoU we are going to do the forward pass on validation data collect the confusion matrix first. $$ IOU = \frac{TP}{TP + FN + FP} $$ End of explanation # restore the checkpoint ... Explanation: Tensorboard $ tensorboard --logdir=local/logs/ 4. Restoring the pretrained network End of explanation # upsampling layer def upsample(x, skip, num_maps): # this functions takes the input placeholder and the number of classes, builds the model and returns the logits def build_model(x, num_classes): sess.close() tf.reset_default_graph() # create inputs # create model # create loss # we are going to need argmax predictions for IoU sess = tf.Session() train(sess, x, y, y_pred, loss, 'local/checkpoint2') # restore the checkpoint ... Explanation: Day 4 5. Improved model with skip connections In this part we are going to improve on the previous model by adding skip connections. The role of the skip connections will be to restore the information lost due to downsampling. End of explanation
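# Editor's note: a small stand-alone sketch of the mIoU computation described above,
# mirroring IoU = TP / (TP + FP + FN) on a confusion matrix such as the one produced
# by sklearn.metrics.confusion_matrix (imported earlier); it is not the notebook's
# print_stats implementation.
import numpy as np

def mean_iou(conf_mat):
    # conf_mat[i, j] = number of pixels of true class i predicted as class j
    tp = np.diag(conf_mat).astype(np.float64)
    fp = conf_mat.sum(axis=0) - tp
    fn = conf_mat.sum(axis=1) - tp
    iou = tp / np.maximum(tp + fp + fn, 1e-12)  # per-class IoU, guarded against 0/0
    return iou.mean()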
5,278
Given the following text description, write Python code to implement the functionality described below step by step Description: This notebook performs the same task as DistanceComputtion.ipynb but for the topics, ie it computes the distance matrix for each votation subjects based on the topic modelling results. Step1: We first erase the duplicates and only collect the results of the topic modelling for each votation Step2: We then implement the distance function, which is simply the euclidean distance between the vectors whose entries are the percentage for each topic computed by topic modelling. Step3: We then apply it to every pairs of subjects in order to compute the distance matrix. Step4: We save the matrix. We observe as expected that the diagonal of the distance matrix contains only 0 as the distance between some subject and itself is 0. Step5: We finally compute for each subject the topic which appears the most.
Python Code: import pandas as pd import glob import os import numpy as np import matplotlib.pyplot as plt import sklearn import sklearn.ensemble from sklearn.ensemble import RandomForestClassifier from sklearn.model_selection import cross_val_score, train_test_split, cross_val_predict, learning_curve import sklearn.metrics %matplotlib inline %load_ext autoreload %autoreload 2 # There's a lot of columns in the DF. # Therefore, we add this option so that we can see more columns pd.options.display.max_columns = 100 path = '../datas/treated_data/Vote/' voting_no_topics_df = pd.read_csv(path+'legiid_47-50.csv') voting_no_topics_df = voting_no_topics_df.drop_duplicates(['BillTitle'], keep = 'last') print(len(voting_no_topics_df)) voting_no_topics_df.head(3) voting_no_topics_df.shape path = '../../datas/nlp_results/' voting_df = pd.read_csv(path+'voting_with_topics.csv') print('Entries in the DataFrame',voting_df.shape) #Dropping the useless column voting_df = voting_df.drop('Unnamed: 0',1) #Putting numerical values into the columns that should have numerical values #print(voting_df.columns.values) num_cols = ['Decision', ' armée', ' asile / immigration', ' assurances', ' budget', ' dunno', ' entreprise/ finance', ' environnement', ' famille / enfants', ' imposition', ' politique internationale', ' retraite '] voting_df[num_cols] = voting_df[num_cols].apply(pd.to_numeric) #Inserting the full name at the second position voting_df.insert(2,'Name', voting_df['FirstName'] + ' ' + voting_df['LastName']) voting_df.head(5) Explanation: This notebook performs the same task as DistanceComputtion.ipynb but for the topics, ie it computes the distance matrix for each votation subjects based on the topic modelling results. End of explanation voting_df_copy = voting_df.drop_duplicates(['BillTitle', 'BusinessTitle'], keep = 'last') print(len(voting_df_copy)) for i in voting_df_copy.index: if str(voting_df_copy.loc[i].BillTitle) == 'nan': voting_df_copy.set_value(i,'BillTitle',voting_df_copy.loc[i].BusinessTitle) voting_df_copy = voting_df_copy.drop_duplicates(['BillTitle'], keep = 'last') print(len(voting_df_copy)) voting_subjects = voting_df_copy['BillTitle'].unique() topics = [' armée', ' asile / immigration', ' assurances', ' budget', ' dunno', ' entreprise/ finance', ' environnement', ' famille / enfants', ' imposition', ' politique internationale', ' retraite '] print("{n} subjects voted in the parliament from 2009 to 2015".format(n=voting_subjects.shape[0])) voting_df_copy = voting_df_copy.set_index(['BillTitle']) voting_df_copy = voting_df_copy[topics] voting_df_copy.head() Explanation: We first erase the duplicates and only collect the results of the topic modelling for each votation End of explanation def distance(p1, p2): return np.linalg.norm(p1-p2) Explanation: We then implement the distance function, which is simply the euclidean distance between the vectors whose entries are the percentage for each topic computed by topic modelling. End of explanation n = voting_subjects.shape[0] distanceMatrix = np.zeros((n,n)) for i in range(n): if i % 10 == 0: print("Compute distances from subject " + str(i)) for j in range(n): distanceMatrix[i][j] = distance(voting_df_copy.loc[voting_subjects[i]].values, voting_df_copy.loc[voting_subjects[j]].values) print("Mean distance : {d}".format(d = np.mean(distanceMatrix))) Explanation: We then apply it to every pairs of subjects in order to compute the distance matrix. 
End of explanation import pandas as pd df = pd.DataFrame(distanceMatrix, index = voting_subjects, columns = voting_subjects) df.to_csv("distanceMatrixSubjects.csv") df.head() Explanation: We save the matrix. We observe as expected that the diagonal of the distance matrix contains only 0 as the distance between some subject and itself is 0. End of explanation topic_df = pd.DataFrame(index = voting_subjects) topic_df['Topic'] = voting_df_copy[topics].idxmax(axis=1) topic_df.head() topic_df.to_csv("SubjectTopicMapping.csv") Explanation: We finally compute for each subject the topic which appears the most. End of explanation
5,279
Given the following text description, write Python code to implement the functionality described below step by step Description: ColorScale The colors for the ColorScale can be defined one of two ways Step1: Attributes ColorScales share attributes with the other Scale types Step2: Mid In addition they also have a mid attribute, a value that will be mapped to the middle color. This is especially suited to diverging color schemes. Step3: DateColorScale The DateColorScale is a color scale for dates. It works in every way like the regular ColorScale, except that its min, mid and max attributes — if defined — must be date elements (datetime, numpy or pandas). Step4: Color Schemes Use the following widgets to browse through the available color schemes Step6: Diverging schemes Step7: Non-diverging schemes Step8: OrdinalColorScale The OrdinalColorScale is a color scale for categorical data, i.e. data that does not have an intrinsic order. The scale colors may be specified by the user, or chosen from a set scheme. As of now, the supported color schemes are the colorbrewer categorical schemes, listed here along with their maximum number of colors.
Python Code: import numpy as np import bqplot.pyplot as plt from bqplot import ColorScale, DateColorScale, OrdinalColorScale, ColorAxis # setup data for plotting np.random.seed(0) n = 100 x_data = range(n) y_data = np.cumsum(np.random.randn(n) * 100.0) def create_fig(color_scale, color_data, fig_margin=None): # allow some margin on right for color bar if fig_margin is None: fig_margin = dict(top=50, bottom=70, left=50, right=100) fig = plt.figure(title="Up and Down", fig_margin=fig_margin) # setup color scale plt.scales(scales={"color": color_scale}) # show color bar on right axes_options = {"color": {"orientation": "vertical", "side": "right"}} scat = plt.scatter( x_data, y_data, color=color_data, stroke="black", axes_options=axes_options ) return fig fig = create_fig(ColorScale(), y_data) fig Explanation: ColorScale The colors for the ColorScale can be defined one of two ways: - Manually, by setting the scale's colors attribute to a list of css colors. They can be either: - html colors (link) 'white' - hex '#000000' - rgb 'rgb(0, 0, 0)'. python col_sc = ColorScale(colors=['yellow', 'red']) - Using one of bqplot's color-schemes. As of now we support all the colorbrewer schemes (link), as well as the matplotlib schemes 'viridis', 'magma', 'inferno' and 'plasma'. python col_sc = ColorScale(scheme=['viridis']) The color scale then linearly interpolates between its colors. ColorAxis A ColorAxis, like other Axis types, takes a color scale as input. It can then be displayed in a Figure. python ax_col = ColorAxis(scale=col_sc) End of explanation color_scale = fig.marks[0].scales["color"] color_scale.min = 0 color_scale.reverse = True Explanation: Attributes ColorScales share attributes with the other Scale types: - Their domain can be manually constrained with the min and max attributes - They can be inversed by setting the reverse attribute to True End of explanation color_scale.min = None color_scale.mid = 0 Explanation: Mid In addition they also have a mid attribute, a value that will be mapped to the middle color. This is especially suited to diverging color schemes. End of explanation import pandas as pd fig_margin = dict(top=50, bottom=70, left=50, right=200) date_col_sc = DateColorScale() dates = pd.date_range(start="2015-01-01", periods=n) create_fig(date_col_sc, dates, fig_margin=fig_margin) date_col_sc.min = pd.datetime(2016, 2, 28) Explanation: DateColorScale The DateColorScale is a color scale for dates. It works in every way like the regular ColorScale, except that its min, mid and max attributes — if defined — must be date elements (datetime, numpy or pandas). 
End of explanation from bqplot.market_map import MarketMap from ipywidgets import IntSlider, SelectionSlider, Dropdown from ipywidgets import VBox, HBox, Layout from traitlets import link Explanation: Color Schemes Use the following widgets to browse through the available color schemes End of explanation div_schemes = [ "Spectral", "RdYlGn", "RdBu", "PiYG", "PRGn", "RdYlBu", "BrBG", "RdGy", "PuOr", ] def scheme_inspector(fig, schemes, title=""): Takes a Figure and a list of schemes and returns the Figure along with dropdown to go through the different schemes # Get the color scale col_sc = fig.marks[0].scales["color"] # Create the widgets to select the colorscheme scheme_dd = Dropdown(description="Scheme", options=schemes) def update_scheme(*args): col_sc.scheme = scheme_dd.value scheme_dd.observe(update_scheme, "value") update_scheme() return VBox([scheme_dd, fig]) scheme_inspector(create_fig(ColorScale(), y_data), div_schemes) Explanation: Diverging schemes End of explanation lin_schemes = [ "OrRd", "PuBu", "BuPu", "Oranges", "BuGn", "YlOrBr", "YlGn", "Reds", "RdPu", "Greens", "YlGnBu", "Purples", "GnBu", "Greys", "YlOrRd", "PuRd", "Blues", "PuBuGn", "viridis", "plasma", "inferno", "magma", ] scheme_inspector( create_fig(ColorScale(), y_data), lin_schemes, title="Non-diverging schemes" ) Explanation: Non-diverging schemes End of explanation ord_schemes = { "Set2": 8, "Accent": 8, "Set1": 9, "Set3": 12, "Dark2": 8, "Paired": 12, "Pastel2": 8, "Pastel1": 9, } def partition(array, n_groups): n_elements = len(array) if n_groups > n_elements: return np.arange(n_elements) n_per_group = n_elements // n_groups + (n_elements % n_groups > 0) return np.tile(range(1, n_groups + 1), n_per_group)[:n_elements] # Define the control widgets n_groups_slider = IntSlider(description="n colors", min=3) scheme_dd = Dropdown(description="Scheme", options=ord_schemes) def update_scheme(*args): col_sc.scheme = scheme_dd.label ax_c.label = scheme_dd.label n_groups_slider.max = scheme_dd.value def update_categories(*args): groups = partition(names, n_groups_slider.value) market_map.color = groups market_map.groups = groups n_groups_slider.observe(update_categories, "value") scheme_dd.observe(update_scheme) # Define the bqplot marketmap names = range(100) col_sc = OrdinalColorScale() ax_c = ColorAxis(scale=col_sc) market_map = MarketMap( names=names, display_text=["" for _ in names], scales={"color": col_sc}, axes=[ax_c], layout=Layout(min_width="800px", min_height="600px"), ) update_scheme() update_categories() VBox([HBox([scheme_dd, n_groups_slider]), market_map]) Explanation: OrdinalColorScale The OrdinalColorScale is a color scale for categorical data, i.e. data that does not have an intrinsic order. The scale colors may be specified by the user, or chosen from a set scheme. As of now, the supported color schemes are the colorbrewer categorical schemes, listed here along with their maximum number of colors. End of explanation
5,280
Given the following text description, write Python code to implement the functionality described below step by step Description: Step2: syncID Step3: Read in SERC Reflectance Tile Step4: Extract NIR and VIS bands Now that we have uploaded all the required functions, we can calculate NDVI and plot it. Below we print the center wavelengths that these bands correspond to Step5: Calculate & Plot NDVI Here we see that band 58 represents red visible light, while band 90 is in the NIR portion of the spectrum. Let's extract these two bands from the reflectance array and calculate the ratio using the numpy.divide which divides arrays element-wise. Step6: We can use the function plot_aop_refl to plot this, and choose the seismic color pallette to highlight the difference between positive and negative NDVI values. Since this is a normalized index, the values should range from -1 to +1. Step7: Extract Spectra Using Masks In the second part of this tutorial, we will learn how to extract the average spectra of pixels whose NDVI exceeds a specified threshold value. There are several ways to do this using numpy, including the mask functions numpy.ma, as well as numpy.where and finally using boolean indexing. To start, lets copy the NDVI calculated above and use booleans to create an array only containing NDVI > 0.6. Step8: Calculate the mean spectra, thresholded by NDVI Below we will demonstrate how to calculate statistics on arrays where you have applied a mask numpy.ma. In this example, the function calculates the mean spectra for values that remain after masking out values by a specified threshold. Step9: We can test out this function for various NDVI thresholds. We'll test two together, and you can try out different values on your own. Let's look at the average spectra for healthy vegetation (NDVI > 0.6), and for a lower threshold (NDVI < 0.3). Step10: Finally, we can use pandas to plot the mean spectra. First set up the pandas dataframe. Step11: Plot the masked NDVI dataframe to display the mean spectra for NDVI values that exceed 0.6 and that are less than 0.3
Python Code: import numpy as np import matplotlib.pyplot as plt %matplotlib inline import warnings warnings.filterwarnings('ignore') #don't display warnings # %load ../neon_aop_hyperspectral.py Created on Wed Jun 20 10:34:49 2018 @author: bhass import matplotlib.pyplot as plt import numpy as np import h5py, os, copy def aop_h5refl2array(refl_filename): aop_h5refl2array reads in a NEON AOP reflectance hdf5 file and returns 1. reflectance array (with the no data value and reflectance scale factor applied) 2. dictionary of metadata including spatial information, and wavelengths of the bands -------- Parameters refl_filename -- full or relative path and name of reflectance hdf5 file -------- Returns -------- reflArray: array of reflectance values metadata: dictionary containing the following metadata: bad_band_window1 (tuple) bad_band_window2 (tuple) bands: # of bands (float) data ignore value: value corresponding to no data (float) epsg: coordinate system code (float) map info: coordinate system, datum & ellipsoid, pixel dimensions, and origin coordinates (string) reflectance scale factor: factor by which reflectance is scaled (float) wavelength: wavelength values (float) wavelength unit: 'm' (string) -------- NOTE: This function applies to the NEON hdf5 format implemented in 2016, and should be used for data acquired 2016 and after. Data in earlier NEON hdf5 format (collected prior to 2016) is expected to be re-processed after the 2018 flight season. -------- Example Execution: -------- sercRefl, sercRefl_metadata = h5refl2array('NEON_D02_SERC_DP3_368000_4306000_reflectance.h5') import h5py #Read in reflectance hdf5 file hdf5_file = h5py.File(refl_filename,'r') #Get the site name file_attrs_string = str(list(hdf5_file.items())) file_attrs_string_split = file_attrs_string.split("'") sitename = file_attrs_string_split[1] #Extract the reflectance & wavelength datasets refl = hdf5_file[sitename]['Reflectance'] reflData = refl['Reflectance_Data'] reflRaw = refl['Reflectance_Data'].value #Create dictionary containing relevant metadata information metadata = {} metadata['map info'] = refl['Metadata']['Coordinate_System']['Map_Info'].value metadata['wavelength'] = refl['Metadata']['Spectral_Data']['Wavelength'].value #Extract no data value & scale factor metadata['data ignore value'] = float(reflData.attrs['Data_Ignore_Value']) metadata['reflectance scale factor'] = float(reflData.attrs['Scale_Factor']) #metadata['interleave'] = reflData.attrs['Interleave'] #Apply no data value reflClean = reflRaw.astype(float) arr_size = reflClean.shape if metadata['data ignore value'] in reflRaw: print('% No Data: ',np.round(np.count_nonzero(reflClean==metadata['data ignore value'])*100/(arr_size[0]*arr_size[1]*arr_size[2]),1)) nodata_ind = np.where(reflClean==metadata['data ignore value']) reflClean[nodata_ind]=np.nan #Apply scale factor reflArray = reflClean/metadata['reflectance scale factor'] #Extract spatial extent from attributes metadata['spatial extent'] = reflData.attrs['Spatial_Extent_meters'] #Extract bad band windows metadata['bad band window1'] = (refl.attrs['Band_Window_1_Nanometers']) metadata['bad band window2'] = (refl.attrs['Band_Window_2_Nanometers']) #Extract projection information #metadata['projection'] = refl['Metadata']['Coordinate_System']['Proj4'].value metadata['epsg'] = int(refl['Metadata']['Coordinate_System']['EPSG Code'].value) #Extract map information: spatial extent & resolution (pixel size) mapInfo = refl['Metadata']['Coordinate_System']['Map_Info'].value hdf5_file.close return 
reflArray, metadata def plot_aop_refl(band_array,refl_extent,colorlimit=(0,1),ax=plt.gca(),title='',cbar ='on',cmap_title='',colormap='Greys'): '''plot_refl_data reads in and plots a single band or 3 stacked bands of a reflectance array -------- Parameters -------- band_array: array of reflectance values, created from aop_h5refl2array refl_extent: extent of reflectance data to be plotted (xMin, xMax, yMin, yMax) use metadata['spatial extent'] from aop_h5refl2array function colorlimit: optional, range of values to plot (min,max). - helpful to look at the histogram of reflectance values before plotting to determine colorlimit. ax: optional, default = current axis title: optional; plot title (string) cmap_title: optional; colorbar title colormap: optional (string, see https://matplotlib.org/examples/color/colormaps_reference.html) for list of colormaps -------- Returns -------- plots flightline array of single band of reflectance data -------- Examples: -------- plot_aop_refl(sercb56, sercMetadata['spatial extent'], colorlimit=(0,0.3), title='SERC Band 56 Reflectance', cmap_title='Reflectance', colormap='Greys_r') ''' import matplotlib.pyplot as plt plot = plt.imshow(band_array,extent=refl_extent,clim=colorlimit); if cbar == 'on': cbar = plt.colorbar(plot,aspect=40); plt.set_cmap(colormap); cbar.set_label(cmap_title,rotation=90,labelpad=20) plt.title(title); ax = plt.gca(); ax.ticklabel_format(useOffset=False, style='plain'); #do not use scientific notation for ticklabels rotatexlabels = plt.setp(ax.get_xticklabels(),rotation=90); #rotate x tick labels 90 degrees def stack_rgb(reflArray,bands): red = reflArray[:,:,bands[0]-1] green = reflArray[:,:,bands[1]-1] blue = reflArray[:,:,bands[2]-1] stackedRGB = np.stack((red,green,blue),axis=2) return stackedRGB def plot_aop_rgb(rgbArray,ext,ls_pct=5,plot_title=''): from skimage import exposure pLow, pHigh = np.percentile(rgbArray[~np.isnan(rgbArray)], (ls_pct,100-ls_pct)) img_rescale = exposure.rescale_intensity(rgbArray, in_range=(pLow,pHigh)) plt.imshow(img_rescale,extent=ext) plt.title(plot_title + '\n Linear ' + str(ls_pct) + '% Contrast Stretch'); ax = plt.gca(); ax.ticklabel_format(useOffset=False, style='plain') #do not use scientific notation # rotatexlabels = plt.setp(ax.get_xticklabels(),rotation=90) #rotate x tick labels 90 degree Explanation: syncID: 19e0b890b3c64f46b2189c8273a2e0a4 title: "Calculate NDVI & Extract Spectra Using Masks in Python - Tiled Data" description: "Learn to calculate Normalized Difference Vegetation Index (NDVI) and extract spectral using masks with Python and NEON tiled hyperspectral data products." dateCreated: 2018-07-05 authors: Bridget Hass contributors: Donal O'Leary estimatedTime: 0.5 hours packagesLibraries: numpy, h5py, gdal, matplotlib.pyplot topics: hyperspectral-remote-sensing, HDF5, remote-sensing, languagesTool: python dataProduct: NEON.DP3.30006, NEON.DP3.30008 code1: https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/tutorials/Python/Hyperspectral/indices/Calc_NDVI_Extract_Spectra_Masks_Tiles_py/Calc_NDVI_Extract_Spectra_Masks_Tiles_py.ipynb tutorialSeries: intro-hsi-py-series urlTitle: calc-ndvi-tiles-py In this tutorial, we will calculate the Normalized Difference Vegetation Index (NDVI). This tutorial uses the mosaiced or tiled NEON data product. For a tutorial using the flightline data, please see <a href="/calc-ndvi-py" target="_blank"> Calculate NDVI & Extract Spectra Using Masks in Python - Flightline Data</a>. 
<div id="ds-objectives" markdown="1"> ### Objectives After completing this tutorial, you will be able to: * Calculate NDVI from hyperspectral data in Python. ### Install Python Packages * **numpy** * **pandas** * **gdal** * **matplotlib** * **h5py** ### Download Data To complete this tutorial, you will use data available from the NEON 2017 Data Institute. This tutorial uses the following files: <ul> <li> <a href="https://www.neonscience.org/sites/default/files/neon_aop_spectral_python_functions_tiled_data.zip">neon_aop_spectral_python_functions_tiled_data.zip (10 KB)</a> <- Click to Download</li> <li><a href="https://ndownloader.figshare.com/files/25752665" target="_blank">NEON_D02_SERC_DP3_368000_4306000_reflectance.h5 (618 MB)</a> <- Click to Download</li> </ul> <a href="https://ndownloader.figshare.com/files/25752665" class="link--button link--arrow"> Download Dataset</a> The LiDAR and imagery data used to create this raster teaching data subset were collected over the <a href="http://www.neonscience.org/" target="_blank"> National Ecological Observatory Network's</a> <a href="http://www.neonscience.org/science-design/field-sites/" target="_blank" >field sites</a> and processed at NEON headquarters. The entire dataset can be accessed on the <a href="http://data.neonscience.org" target="_blank"> NEON data portal</a>. </div> Calculate NDVI & Extract Spectra with Masks Background: The Normalized Difference Vegetation Index (NDVI) is a standard band-ratio calculation frequently used to analyze ecological remote sensing data. NDVI indicates whether the remotely-sensed target contains live green vegetation. When sunlight strikes objects, certain wavelengths of the electromagnetic spectrum are absorbed and other wavelengths are reflected. The pigment chlorophyll in plant leaves strongly absorbs visible light (with wavelengths in the range of 400-700 nm) for use in photosynthesis. The cell structure of the leaves, however, strongly reflects near-infrared light (wavelengths ranging from 700 - 1100 nm). Plants reflect up to 60% more light in the near infrared portion of the spectrum than they do in the green portion of the spectrum. By calculating the ratio of Near Infrared (NIR) to Visible (VIS) bands in hyperspectral data, we can obtain a metric of vegetation density and health. The formula for NDVI is: $$NDVI = \frac{(NIR - VIS)}{(NIR+ VIS)}$$ <figure> <a href="https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/graphics/hyperspectral-indices/ndvi_tree.png"> <img src="https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/graphics/hyperspectral-indices/ndvi_tree.png"></a> <figcaption> NDVI is calculated from the visible and near-infrared light reflected by vegetation. Healthy vegetation (left) absorbs most of the visible light that hits it, and reflects a large portion of near-infrared light. Unhealthy or sparse vegetation (right) reflects more visible light and less near-infrared light. Source: <a href="https://www.researchgate.net/figure/266947355_fig1_Figure-1-Green-vegetation-left-absorbs-visible-light-and-reflects-near-infrared-light" target="_blank">Figure 1 in Wu et. al. 2014. PLOS. 
</a> </figcaption> </figure> Start by setting plot preferences and loading the neon_aop_refl_hdf5_functions module: End of explanation # Note you will need to update this filepath for your local machine sercRefl, sercRefl_md = aop_h5refl2array('/Users/olearyd/Git/data/NEON_D02_SERC_DP3_368000_4306000_reflectance.h5') Explanation: Read in SERC Reflectance Tile End of explanation print('band 58 center wavelength (nm): ',sercRefl_md['wavelength'][57]) print('band 90 center wavelength (nm) : ', sercRefl_md['wavelength'][89]) Explanation: Extract NIR and VIS bands Now that we have uploaded all the required functions, we can calculate NDVI and plot it. Below we print the center wavelengths that these bands correspond to: End of explanation vis = sercRefl[:,:,57] nir = sercRefl[:,:,89] ndvi = np.divide((nir-vis),(nir+vis)) Explanation: Calculate & Plot NDVI Here we see that band 58 represents red visible light, while band 90 is in the NIR portion of the spectrum. Let's extract these two bands from the reflectance array and calculate the ratio using the numpy.divide which divides arrays element-wise. End of explanation plot_aop_refl(ndvi,sercRefl_md['spatial extent'], colorlimit = (np.min(ndvi),np.max(ndvi)), title='SERC Subset NDVI \n (VIS = Band 58, NIR = Band 90)', cmap_title='NDVI', colormap='seismic') Explanation: We can use the function plot_aop_refl to plot this, and choose the seismic color pallette to highlight the difference between positive and negative NDVI values. Since this is a normalized index, the values should range from -1 to +1. End of explanation import copy ndvi_gtpt6 = copy.copy(ndvi) #set all pixels with NDVI < 0.6 to nan, keeping only values > 0.6 ndvi_gtpt6[ndvi<0.6] = np.nan print('Mean NDVI > 0.6:',round(np.nanmean(ndvi_gtpt6),2)) plot_aop_refl(ndvi_gtpt6, sercRefl_md['spatial extent'], colorlimit=(0.6,1), title='SERC Subset NDVI > 0.6 \n (VIS = Band 58, NIR = Band 90)', cmap_title='NDVI', colormap='RdYlGn') Explanation: Extract Spectra Using Masks In the second part of this tutorial, we will learn how to extract the average spectra of pixels whose NDVI exceeds a specified threshold value. There are several ways to do this using numpy, including the mask functions numpy.ma, as well as numpy.where and finally using boolean indexing. To start, lets copy the NDVI calculated above and use booleans to create an array only containing NDVI > 0.6. End of explanation import numpy.ma as ma def calculate_mean_masked_spectra(reflArray,ndvi,ndvi_threshold,ineq='>'): mean_masked_refl = np.zeros(reflArray.shape[2]) for i in np.arange(reflArray.shape[2]): refl_band = reflArray[:,:,i] if ineq == '>': ndvi_mask = ma.masked_where((ndvi<=ndvi_threshold) | (np.isnan(ndvi)),ndvi) elif ineq == '<': ndvi_mask = ma.masked_where((ndvi>=ndvi_threshold) | (np.isnan(ndvi)),ndvi) else: print('ERROR: Invalid inequality. Enter < or >') masked_refl = ma.MaskedArray(refl_band,mask=ndvi_mask.mask) mean_masked_refl[i] = ma.mean(masked_refl) return mean_masked_refl Explanation: Calculate the mean spectra, thresholded by NDVI Below we will demonstrate how to calculate statistics on arrays where you have applied a mask numpy.ma. In this example, the function calculates the mean spectra for values that remain after masking out values by a specified threshold. End of explanation sercSpectra_ndvi_gtpt6 = calculate_mean_masked_spectra(sercRefl,ndvi,0.6) sercSpectra_ndvi_ltpt3 = calculate_mean_masked_spectra(sercRefl,ndvi,0.3,ineq='<') Explanation: We can test out this function for various NDVI thresholds. 
We'll test two together, and you can try out different values on your own. Let's look at the average spectra for healthy vegetation (NDVI > 0.6), and for a lower threshold (NDVI < 0.3). End of explanation import pandas #Remove water vapor bad band windows & last 10 bands w = copy.copy(sercRefl_md['wavelength']) w[((w >= 1340) & (w <= 1445)) | ((w >= 1790) & (w <= 1955))]=np.nan w[-10:]=np.nan; nan_ind = np.argwhere(np.isnan(w)) sercSpectra_ndvi_gtpt6[nan_ind] = np.nan sercSpectra_ndvi_ltpt3[nan_ind] = np.nan #Create dataframe with masked NDVI mean spectra sercSpectra_ndvi_df = pandas.DataFrame() sercSpectra_ndvi_df['wavelength'] = w sercSpectra_ndvi_df['mean_refl_ndvi_gtpt6'] = sercSpectra_ndvi_gtpt6 sercSpectra_ndvi_df['mean_refl_ndvi_ltpt3'] = sercSpectra_ndvi_ltpt3 Explanation: Finally, we can use pandas to plot the mean spectra. First set up the pandas dataframe. End of explanation ax = plt.gca(); sercSpectra_ndvi_df.plot(ax=ax,x='wavelength',y='mean_refl_ndvi_gtpt6',color='green', edgecolor='none',kind='scatter',label='NDVI > 0.6',legend=True); sercSpectra_ndvi_df.plot(ax=ax,x='wavelength',y='mean_refl_ndvi_ltpt3',color='red', edgecolor='none',kind='scatter',label='NDVI < 0.3',legend=True); ax.set_title('Mean Spectra of Reflectance Masked by NDVI') ax.set_xlim([np.nanmin(w),np.nanmax(w)]); ax.set_ylim(0,0.45) ax.set_xlabel("Wavelength, nm"); ax.set_ylabel("Reflectance") ax.grid('on'); Explanation: Plot the masked NDVI dataframe to display the mean spectra for NDVI values that exceed 0.6 and that are less than 0.3: End of explanation
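Editorial note: not part of the tutorial, but a small variant of the NDVI ratio above that guards against pixels where NIR + VIS is zero, which plain numpy.divide turns into warnings and inf/NaN values; treat it as an optional sketch.
import numpy as np

def ndvi_safe(nir, vis):
    # same (NIR - VIS) / (NIR + VIS) ratio, but zero denominators come back as NaN
    denom = nir + vis
    with np.errstate(divide='ignore', invalid='ignore'):
        return np.where(denom != 0, (nir - vis) / denom, np.nan)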
5,281
Given the following text description, write Python code to implement the functionality described below step by step Description: Classical Planning Classical Planning Approaches Introduction Planning combines the two major areas of AI Step1: Planning as Planning Graph Search A planning graph is a directed graph organized into levels each of which contains information about the current state of the knowledge base and the possible state-action links to and from that level. The first level contains the initial state with nodes representing each fluent that holds in that level. This level has state-action links linking each state to valid actions in that state. Each action is linked to all its preconditions and its effect states. Based on these effects, the next level is constructed and contains similarly structured information about the next state. In this way, the graph is expanded using state-action links till we reach a state where all the required goals hold true simultaneously. In every planning problem, we are allowed to carry out the no-op action, ie, we can choose no action for a particular state. These are called persistence actions and has effects same as its preconditions. This enables us to carry a state to the next level. Mutual exclusivity (mutex) between two actions means that these cannot be taken together and occurs in the following cases Step2: A planning graph can be used to give better heuristic estimates which can be applied to any of the search techniques. Alternatively, we can search for a solution over the space formed by the planning graph, using an algorithm called GraphPlan. The GraphPlan algorithm repeatedly adds a level to a planning graph. Once all the goals show up as non-mutex in the graph, the algorithm runs backward from the last level to the first searching for a plan that solves the problem. If that fails, it records the (level , goals) pair as a no-good (as in constraint learning for CSPs), expands another level and tries again, terminating with failure when there is no reason to go on. Step3: Planning as State-Space Search The description of a planning problem defines a search problem Step4: Forward State-Space Search Forward search through the space of states, starting in the initial state and using the problem’s actions to search forward for a member of the set of goal states. Step5: Backward Relevant-States Search Backward search through sets of relevant states, starting at the set of states representing the goal and using the inverse of the actions to search backward for the initial state. Step6: Planning as Constraint Satisfaction Problem In forward planning, the search is constrained by the initial state and only uses the goal as a stopping criterion and as a source for heuristics. In regression planning, the search is constrained by the goal and only uses the start state as a stopping criterion and as a source for heuristics. By converting the problem to a constraint satisfaction problem (CSP), the initial state can be used to prune what is not reachable and the goal to prune what is not useful. The CSP will be defined for a finite number of steps; the number of steps can be adjusted to find the shortest plan. One of the CSP methods can then be used to solve the CSP and thus find a plan. To construct a CSP from a planning problem, first choose a fixed planning horizon, which is the number of time steps over which to plan. Suppose the horizon is $k$. 
The CSP has the following variables Step7: Planning as Boolean Satisfiability Problem As shown in <a name="ref-2"/>[2] the translation of a Planning Domain Definition Language (PDDL) description into a Conjunctive Normal Form (CNF) formula is a series of straightforward steps Step8: Experimental Results Blocks World Step9: GraphPlan Step10: ForwardPlan Step11: ForwardPlan with Ignore Delete Lists Heuristic Step12: BackwardPlan Step13: BackwardPlan with Ignore Delete Lists Heuristic Step14: CSPlan Step15: CSPlan with SAT UP Arc Heuristic Step16: SATPlan with DPLL Step17: SATPlan with CDCL Step18: Spare Tire Step19: GraphPlan Step20: ForwardPlan Step21: ForwardPlan with Ignore Delete Lists Heuristic Step22: BackwardPlan Step23: BackwardPlan with Ignore Delete Lists Heuristic Step24: CSPlan Step25: CSPlan with SAT UP Arc Heuristic Step26: SATPlan with DPLL Step27: SATPlan with CDCL Step28: Shopping Problem Step29: GraphPlan Step30: ForwardPlan Step31: ForwardPlan with Ignore Delete Lists Heuristic Step32: BackwardPlan Step33: BackwardPlan with Ignore Delete Lists Heuristic Step34: CSPlan Step35: CSPlan with SAT UP Arc Heuristic Step36: SATPlan with CDCL Step37: Air Cargo Step38: GraphPlan Step39: ForwardPlan Step40: ForwardPlan with Ignore Delete Lists Heuristic Step41: BackwardPlan Step42: BackwardPlan with Ignore Delete Lists Heuristic Step43: CSPlan Step44: CSPlan with SAT UP Arc Heuristic
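To make the CSP encoding above concrete, here is a small stand-alone toy example (editorial, not taken from the aima-python modules the notebook imports): one boolean fluent, one real action plus a no-op, horizon k = 1, with the precondition, effect and frame constraints checked explicitly.
ACTIONS = {
    'Sweep': {'pre': {'Clean': False}, 'eff': {'Clean': True}},
    'NoOp':  {'pre': {},               'eff': {}},
}

def consistent(s0, act, s1):
    spec = ACTIONS[act]
    if any(s0[v] != val for v, val in spec['pre'].items()):   # precondition constraint
        return False
    if any(s1[v] != val for v, val in spec['eff'].items()):   # effect constraint
        return False
    return all(s1[v] == s0[v] for v in s0 if v not in spec['eff'])  # frame constraint

initial, goal = {'Clean': False}, {'Clean': True}
states = [{'Clean': b} for b in (False, True)]
plans = [(s0, a, s1) for s0 in states for a in ACTIONS for s1 in states
         if s0 == initial and s1['Clean'] == goal['Clean'] and consistent(s0, a, s1)]
print(plans)   # [({'Clean': False}, 'Sweep', {'Clean': True})]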
Python Code: from planning import * Explanation: Classical Planning Classical Planning Approaches Introduction Planning combines the two major areas of AI: search and logic. A planner can be seen either as a program that searches for a solution or as one that constructively proves the existence of a solution. Currently, the most popular and effective approaches to fully automated planning are: - searching using a planning graph; - state-space search with heuristics; - translating to a constraint satisfaction (CSP) problem; - translating to a boolean satisfiability (SAT) problem. End of explanation %psource Graph %psource Level Explanation: Planning as Planning Graph Search A planning graph is a directed graph organized into levels each of which contains information about the current state of the knowledge base and the possible state-action links to and from that level. The first level contains the initial state with nodes representing each fluent that holds in that level. This level has state-action links linking each state to valid actions in that state. Each action is linked to all its preconditions and its effect states. Based on these effects, the next level is constructed and contains similarly structured information about the next state. In this way, the graph is expanded using state-action links till we reach a state where all the required goals hold true simultaneously. In every planning problem, we are allowed to carry out the no-op action, ie, we can choose no action for a particular state. These are called persistence actions and has effects same as its preconditions. This enables us to carry a state to the next level. Mutual exclusivity (mutex) between two actions means that these cannot be taken together and occurs in the following cases: - inconsistent effects: one action negates the effect of the other; - interference: one of the effects of an action is the negation of a precondition of the other; - competing needs: one of the preconditions of one action is mutually exclusive with a precondition of the other. We can say that we have reached our goal if none of the goal states in the current level are mutually exclusive. End of explanation %psource GraphPlan Explanation: A planning graph can be used to give better heuristic estimates which can be applied to any of the search techniques. Alternatively, we can search for a solution over the space formed by the planning graph, using an algorithm called GraphPlan. The GraphPlan algorithm repeatedly adds a level to a planning graph. Once all the goals show up as non-mutex in the graph, the algorithm runs backward from the last level to the first searching for a plan that solves the problem. If that fails, it records the (level , goals) pair as a no-good (as in constraint learning for CSPs), expands another level and tries again, terminating with failure when there is no reason to go on. End of explanation from search import * Explanation: Planning as State-Space Search The description of a planning problem defines a search problem: we can search from the initial state through the space of states, looking for a goal. One of the nice advantages of the declarative representation of action schemas is that we can also search backward from the goal, looking for the initial state. However, neither forward nor backward search is efficient without a good heuristic function because the real-world planning problems often have large state spaces. 
A heuristic function $h(s)$ estimates the distance from a state $s$ to the goal and, if it is admissible, ie if does not overestimate, then we can use $A^∗$ search to find optimal solutions. Planning uses a factored representation for states and action schemas which makes it possible to define good domain-independent heuristics to prune the search space. An admissible heuristic can be derived by defining a relaxed problem that is easier to solve. The length of the solution of this easier problem then becomes the heuristic for the original problem. Assume that all goals and preconditions contain only positive literals, ie that the problem is defined according to the Stanford Research Institute Problem Solver (STRIPS) notation: we want to create a relaxed version of the original problem that will be easier to solve by ignoring delete lists from all actions, ie removing all negative literals from effects. As shown in <a name="ref-1"/>[1] the planning graph of a relaxed problem does not contain any mutex relations at all (which is the crucial thing when building a planning graph) and for this reason GraphPlan will never backtrack looking for a solution: for this reason the ignore delete lists heuristic makes it possible to find the optimal solution for relaxed problem in polynomial time through GraphPlan algorithm. End of explanation %psource ForwardPlan Explanation: Forward State-Space Search Forward search through the space of states, starting in the initial state and using the problem’s actions to search forward for a member of the set of goal states. End of explanation %psource BackwardPlan Explanation: Backward Relevant-States Search Backward search through sets of relevant states, starting at the set of states representing the goal and using the inverse of the actions to search backward for the initial state. End of explanation from csp import * %psource CSPlan Explanation: Planning as Constraint Satisfaction Problem In forward planning, the search is constrained by the initial state and only uses the goal as a stopping criterion and as a source for heuristics. In regression planning, the search is constrained by the goal and only uses the start state as a stopping criterion and as a source for heuristics. By converting the problem to a constraint satisfaction problem (CSP), the initial state can be used to prune what is not reachable and the goal to prune what is not useful. The CSP will be defined for a finite number of steps; the number of steps can be adjusted to find the shortest plan. One of the CSP methods can then be used to solve the CSP and thus find a plan. To construct a CSP from a planning problem, first choose a fixed planning horizon, which is the number of time steps over which to plan. Suppose the horizon is $k$. The CSP has the following variables: a state variable for each feature and each time from 0 to $k$. If there are $n$ features for a horizon of $k$, there are $n \cdot (k+1)$ state variables. The domain of the state variable is the domain of the corresponding feature; an action variable, $Action_t$, for each $t$ in the range 0 to $k-1$. The domain of $Action_t$, represents the action that takes the agent from the state at time $t$ to the state at time $t+1$. 
There are several types of constraints: a precondition constraint between a state variable at time $t$ and the variable $Actiont_t$ constrains what actions are legal at time $t$; an effect constraint between $Action_t$ and a state variable at time $t+1$ constrains the values of a state variable that is a direct effect of the action; a frame constraint among a state variable at time $t$, the variable $Action_t$, and the corresponding state variable at time $t+1$ specifies when the variable that does not change as a result of an action has the same value before and after the action; an initial-state constraint constrains a variable on the initial state (at time 0). The initial state is represented as a set of domain constraints on the state variables at time 0; a goal constraint constrains the final state to be a state that satisfies the achievement goal. These are domain constraints on the variables that appear in the goal; a state constraint is a constraint among variables at the same time step. These can include physical constraints on the state or can ensure that states that violate maintenance goals are forbidden. This is extra knowledge beyond the power of the feature-based or PDDL representations of the action. The PDDL representation gives precondition, effect and frame constraints for each time $t$ as follows: for each $Var = v$ in the precondition of action $A$, there is a precondition constraint: $$ Var_t = v \leftarrow Action_t = A $$ that specifies that if the action is to be $A$, $Var_t$ must have value $v$ immediately before. This constraint is violated when $Action_t = A$ and $Var_t \neq v$, and thus is equivalent to $\lnot{(Var_t \neq v \land Action_t = A)}$; or each $Var = v$ in the effect of action $A$, there is a effect constraint: $$ Var_{t+1} = v \leftarrow Action_t = A $$ which is violated when $Action_t = A$ and $Var_{t+1} \neq v$, and thus is equivalent to $\lnot{(Var_{t+1} \neq v \land Action_t = A)}$; for each $Var$, there is a frame constraint, where $As$ is the set of actions that include $Var$ in the effect of the action: $$ Var_{t+1} = Var_t \leftarrow Action_t \notin As $$ which specifies that the feature $Var$ has the same value before and after any action that does not affect $Var$. The CSP representation assumes a fixed planning horizon (ie a fixed number of steps). To find a plan over any number of steps, the algorithm can be run for a horizon of $k = 0, 1, 2, \dots$ until a solution is found. End of explanation from logic import * %psource SATPlan %psource SAT_plan Explanation: Planning as Boolean Satisfiability Problem As shown in <a name="ref-2"/>[2] the translation of a Planning Domain Definition Language (PDDL) description into a Conjunctive Normal Form (CNF) formula is a series of straightforward steps: - propositionalize the actions: replace each action schema with a set of ground actions formed by substituting constants for each of the variables. 
These ground actions are not part of the translation, but will be used in subsequent steps; - define the initial state: assert $F^0$ for every fluent $F$ in the problem’s initial state, and $\lnot{F}$ for every fluent not mentioned in the initial state; - propositionalize the goal: for every variable in the goal, replace the literals that contain the variable with a disjunction over constants; - add successor-state axioms: for each fluent $F$, add an axiom of the form $$ F^{t+1} \iff ActionCausesF^t \lor (F^t \land \lnot{ActionCausesNotF^t}) $$ where $ActionCausesF$ is a disjunction of all the ground actions that have $F$ in their add list, and $ActionCausesNotF$ is a disjunction of all the ground actions that have $F$ in their delete list; - add precondition axioms: for each ground action $A$, add the axiom $A^t \implies PRE(A)^t$, that is, if an action is taken at time $t$, then the preconditions must have been true; - add action exclusion axioms: say that every action is distinct from every other action. A propositional planning procedure implements the basic idea just given but, because the agent does not know how many steps it will take to reach the goal, the algorithm tries each possible number of steps $t$, up to some maximum conceivable plan length $T_{max}$ . In this way, it is guaranteed to find the shortest plan if one exists. Because of the way the propositional planning procedure searches for a solution, this approach cannot be used in a partially observable environment, ie WalkSAT, but would just set the unobservable variables to the values it needs to create a solution. End of explanation %psource three_block_tower Explanation: Experimental Results Blocks World End of explanation %time blocks_world_solution = GraphPlan(three_block_tower()).execute() linearize(blocks_world_solution) Explanation: GraphPlan End of explanation %time blocks_world_solution = uniform_cost_search(ForwardPlan(three_block_tower()), display=True).solution() blocks_world_solution = list(map(lambda action: Expr(action.name, *action.args), blocks_world_solution)) blocks_world_solution Explanation: ForwardPlan End of explanation %time blocks_world_solution = astar_search(ForwardPlan(three_block_tower()), display=True).solution() blocks_world_solution = list(map(lambda action: Expr(action.name, *action.args), blocks_world_solution)) blocks_world_solution Explanation: ForwardPlan with Ignore Delete Lists Heuristic End of explanation %time blocks_world_solution = uniform_cost_search(BackwardPlan(three_block_tower()), display=True).solution() blocks_world_solution = list(map(lambda action: Expr(action.name, *action.args), blocks_world_solution)) blocks_world_solution[::-1] Explanation: BackwardPlan End of explanation %time blocks_world_solution = astar_search(BackwardPlan(three_block_tower()), display=True).solution() blocks_world_solution = list(map(lambda action: Expr(action.name, *action.args), blocks_world_solution)) blocks_world_solution[::-1] Explanation: BackwardPlan with Ignore Delete Lists Heuristic End of explanation %time blocks_world_solution = CSPlan(three_block_tower(), 3, arc_heuristic=no_heuristic) blocks_world_solution Explanation: CSPlan End of explanation %time blocks_world_solution = CSPlan(three_block_tower(), 3, arc_heuristic=sat_up) blocks_world_solution Explanation: CSPlan with SAT UP Arc Heuristic End of explanation %time blocks_world_solution = SATPlan(three_block_tower(), 4, SAT_solver=dpll_satisfiable) blocks_world_solution Explanation: SATPlan with DPLL End of explanation %time 
blocks_world_solution = SATPlan(three_block_tower(), 4, SAT_solver=cdcl_satisfiable) blocks_world_solution Explanation: SATPlan with CDCL End of explanation %psource spare_tire Explanation: Spare Tire End of explanation %time spare_tire_solution = GraphPlan(spare_tire()).execute() linearize(spare_tire_solution) Explanation: GraphPlan End of explanation %time spare_tire_solution = uniform_cost_search(ForwardPlan(spare_tire()), display=True).solution() spare_tire_solution = list(map(lambda action: Expr(action.name, *action.args), spare_tire_solution)) spare_tire_solution Explanation: ForwardPlan End of explanation %time spare_tire_solution = astar_search(ForwardPlan(spare_tire()), display=True).solution() spare_tire_solution = list(map(lambda action: Expr(action.name, *action.args), spare_tire_solution)) spare_tire_solution Explanation: ForwardPlan with Ignore Delete Lists Heuristic End of explanation %time spare_tire_solution = uniform_cost_search(BackwardPlan(spare_tire()), display=True).solution() spare_tire_solution = list(map(lambda action: Expr(action.name, *action.args), spare_tire_solution)) spare_tire_solution[::-1] Explanation: BackwardPlan End of explanation %time spare_tire_solution = astar_search(BackwardPlan(spare_tire()), display=True).solution() spare_tire_solution = list(map(lambda action: Expr(action.name, *action.args), spare_tire_solution)) spare_tire_solution[::-1] Explanation: BackwardPlan with Ignore Delete Lists Heuristic End of explanation %time spare_tire_solution = CSPlan(spare_tire(), 3, arc_heuristic=no_heuristic) spare_tire_solution Explanation: CSPlan End of explanation %time spare_tire_solution = CSPlan(spare_tire(), 3, arc_heuristic=sat_up) spare_tire_solution Explanation: CSPlan with SAT UP Arc Heuristic End of explanation %time spare_tire_solution = SATPlan(spare_tire(), 4, SAT_solver=dpll_satisfiable) spare_tire_solution Explanation: SATPlan with DPLL End of explanation %time spare_tire_solution = SATPlan(spare_tire(), 4, SAT_solver=cdcl_satisfiable) spare_tire_solution Explanation: SATPlan with CDCL End of explanation %psource shopping_problem Explanation: Shopping Problem End of explanation %time shopping_problem_solution = GraphPlan(shopping_problem()).execute() linearize(shopping_problem_solution) Explanation: GraphPlan End of explanation %time shopping_problem_solution = uniform_cost_search(ForwardPlan(shopping_problem()), display=True).solution() shopping_problem_solution = list(map(lambda action: Expr(action.name, *action.args), shopping_problem_solution)) shopping_problem_solution Explanation: ForwardPlan End of explanation %time shopping_problem_solution = astar_search(ForwardPlan(shopping_problem()), display=True).solution() shopping_problem_solution = list(map(lambda action: Expr(action.name, *action.args), shopping_problem_solution)) shopping_problem_solution Explanation: ForwardPlan with Ignore Delete Lists Heuristic End of explanation %time shopping_problem_solution = uniform_cost_search(BackwardPlan(shopping_problem()), display=True).solution() shopping_problem_solution = list(map(lambda action: Expr(action.name, *action.args), shopping_problem_solution)) shopping_problem_solution[::-1] Explanation: BackwardPlan End of explanation %time shopping_problem_solution = astar_search(BackwardPlan(shopping_problem()), display=True).solution() shopping_problem_solution = list(map(lambda action: Expr(action.name, *action.args), shopping_problem_solution)) shopping_problem_solution[::-1] Explanation: BackwardPlan with Ignore Delete Lists Heuristic End 
of explanation %time shopping_problem_solution = CSPlan(shopping_problem(), 5, arc_heuristic=no_heuristic) shopping_problem_solution Explanation: CSPlan End of explanation %time shopping_problem_solution = CSPlan(shopping_problem(), 5, arc_heuristic=sat_up) shopping_problem_solution Explanation: CSPlan with SAT UP Arc Heuristic End of explanation %time shopping_problem_solution = SATPlan(shopping_problem(), 5, SAT_solver=cdcl_satisfiable) shopping_problem_solution Explanation: SATPlan with CDCL End of explanation %psource air_cargo Explanation: Air Cargo End of explanation %time air_cargo_solution = GraphPlan(air_cargo()).execute() linearize(air_cargo_solution) Explanation: GraphPlan End of explanation %time air_cargo_solution = uniform_cost_search(ForwardPlan(air_cargo()), display=True).solution() air_cargo_solution = list(map(lambda action: Expr(action.name, *action.args), air_cargo_solution)) air_cargo_solution Explanation: ForwardPlan End of explanation %time air_cargo_solution = astar_search(ForwardPlan(air_cargo()), display=True).solution() air_cargo_solution = list(map(lambda action: Expr(action.name, *action.args), air_cargo_solution)) air_cargo_solution Explanation: ForwardPlan with Ignore Delete Lists Heuristic End of explanation %time air_cargo_solution = uniform_cost_search(BackwardPlan(air_cargo()), display=True).solution() air_cargo_solution = list(map(lambda action: Expr(action.name, *action.args), air_cargo_solution)) air_cargo_solution[::-1] Explanation: BackwardPlan End of explanation %time air_cargo_solution = astar_search(BackwardPlan(air_cargo()), display=True).solution() air_cargo_solution = list(map(lambda action: Expr(action.name, *action.args), air_cargo_solution)) air_cargo_solution[::-1] Explanation: BackwardPlan with Ignore Delete Lists Heuristic End of explanation %time air_cargo_solution = CSPlan(air_cargo(), 6, arc_heuristic=no_heuristic) air_cargo_solution Explanation: CSPlan End of explanation %time air_cargo_solution = CSPlan(air_cargo(), 6, arc_heuristic=sat_up) air_cargo_solution Explanation: CSPlan with SAT UP Arc Heuristic End of explanation
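The timing cells above benchmark each planner one call at a time; a small helper that loops over the same solver calls makes the comparison systematic. This is only a sketch: the helper name compare_planners is hypothetical, and it assumes the aima-python names already used above (GraphPlan, linearize, CSPlan, spare_tire, sat_up) are still importable in the current session.

import time

def compare_planners(planners):
    # planners maps a label to a zero-argument callable that returns a plan (a list of actions)
    results = {}
    for label, run in planners.items():
        start = time.perf_counter()
        plan = run()
        elapsed = time.perf_counter() - start
        results[label] = (elapsed, plan)
        print('%-20s %8.3f s  %2d actions' % (label, elapsed, len(plan)))
    return results

# Reuses only solver calls that already appear in the cells above
compare_planners({
    'GraphPlan': lambda: linearize(GraphPlan(spare_tire()).execute()),
    'CSPlan (sat_up)': lambda: CSPlan(spare_tire(), 3, arc_heuristic=sat_up),
})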
5,282
Given the following text description, write Python code to implement the functionality described below step by step Description: Recurrent neural networks Import various modules that we need for this notebook (now using Keras 1.0.0) Step1: Load the IMDB dataset, pad the sequences to a fixed length, and convert the class labels. I. Example We read in the IMDB dataset, keeping only the 500 most commonly used terms.
Python Code: %pylab inline import copy import numpy as np import pandas as pd import matplotlib.pyplot as plt from keras.datasets import imdb, reuters from keras.models import Sequential from keras.layers.core import Dense, Dropout, Activation, Flatten from keras.optimizers import SGD, RMSprop from keras.utils import np_utils from keras.layers.convolutional import Convolution1D, MaxPooling1D, ZeroPadding1D, AveragePooling1D from keras.callbacks import EarlyStopping from keras.layers.normalization import BatchNormalization from keras.preprocessing import sequence from keras.layers.embeddings import Embedding from gensim.models import word2vec Explanation: Recurrent neural networks Import various modules that we need for this notebook (now using Keras 1.0.0) End of explanation (X_train, y_train), (X_test, y_test) = imdb.load_data(nb_words=500, maxlen=100, test_split=0.2) X_train = sequence.pad_sequences(X_train, maxlen=100) X_test = sequence.pad_sequences(X_test, maxlen=100) Explanation: Load the IMDB dataset, pad the sequences to a fixed length, and convert the class labels. I. Example We read in the IMDB dataset, keeping only the 500 most commonly used terms. End of explanation
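A minimal classifier on top of the padded IMDB sequences could look like the sketch below. It only uses layers already imported above and assumes the Keras 1.0-era argument names (input_length, nb_epoch); it is an illustrative baseline, not the notebook's own model.

# Sketch: embed the 500-word vocabulary, flatten, and classify the binary sentiment label
model = Sequential()
model.add(Embedding(500, 32, input_length=100))  # 500 matches nb_words in imdb.load_data above
model.add(Flatten())
model.add(Dense(256))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy', optimizer=RMSprop(), metrics=['accuracy'])
model.fit(X_train, y_train, batch_size=32, nb_epoch=5, validation_data=(X_test, y_test))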
5,283
Given the following text description, write Python code to implement the functionality described below step by step Description: Total Calls by Community Area In the WBEZ article and CNT analysis of neighborhood flooding, they used zip code as the primary identifier of geography. While they relied on additional data sources, 311 calls seemed to factor into their analysis. Totaling all of the water in street and basement 311 calls by community area gives Austin as the top area by a significant margin, and it's worth noting that Austin is essentially split into three zip codes, which may have resulted in it not seeming to have as much flooding activity as it does in reality. We will look at zip code to see if the numbers are similar to WBEZ's for 2009-2015 by zip code Step1: Zip Code Data Comparison - WBEZ, Current Data To make sure that the data is at least similar, we plot both next to each other, and they seem to match up overall with some slight differences. Given that this reasonably ensures we're working with the same data, we can plot out the same patterns by neighborhood and see if Austin is still overlooked (given that it is in the zip code breakdown) Data mostly matches up, can see it on map on Gross Gatherings article Step2: Community Area Breakdown for 2009-2015 Looking at the period WBEZ reviewed calls for, it's clear that zip codes are missing a large part of the picture. In reality, Austin is far and away the community area with the most floods, not necessarily Chatham.
Python Code: flood_comm_top = flood_comm_sum.sort_values(by='Count Calls', ascending=False)[:20] flood_comm_top.plot(kind='bar',x='Community Area',y='Count Calls') # WBEZ zip data wbez_zip = pd.read_csv('wbez_flood_311_zip.csv') wbez_zip_top = wbez_zip.sort_values(by='number_of_311_calls',ascending=False)[:20] wbez_zip_top.plot(kind='bar',x='zip_code',y='number_of_311_calls') flood_zip_df = pd.read_csv('311_data/flood_calls_311_zip.csv') flood_zip_df['Created Date'] = pd.to_datetime(flood_zip_df['Created Date']) flood_zip_df = flood_zip_df.set_index(pd.DatetimeIndex(flood_zip_df['Created Date'])) # Setting to same time frame as WBEZ flood_zip_df = flood_zip_df['2009-01-01':'2015-12-30'] flood_zip_df = flood_zip_df[flood_zip_df.columns.values[1:]] flood_zip_stack = pd.DataFrame(flood_zip_df.stack()).reset_index() flood_zip_stack = flood_zip_stack.rename(columns={'level_0':'Created Date','level_1':'Zip Code',0:'Count Calls'}) flood_zip_sum = flood_zip_stack.groupby(['Zip Code'])['Count Calls'].sum() flood_zip_sum = flood_zip_sum.reset_index() flood_zip_sum.head() flood_zip_top = flood_zip_sum.sort_values(by='Count Calls', ascending=False)[:20] flood_zip_top.plot(kind='bar',x='Zip Code',y='Count Calls') Explanation: Total Calls by Community Area In the WBEZ article and CNT analysis of neighborhood flooding, they used zip code as the primary identifier of geography. While the relied on additional data sources, 311 calls seemed to factor into their analysis. Totaling all of the water in street and basement 311 calls by community area gives Austin as the top area by a significant margin, and it's worth noting that Austin is essentially split into three zip codes, which may have resulted in it not seeming to have as much flooding activity as it does in reality. Will look at zip code to see if the numbers are similar to WBEZ's for 2009-2015 by zip code End of explanation fig, axs = plt.subplots(1,2) plt.rcParams["figure.figsize"] = [15, 5] wbez_zip_top.plot(title='WBEZ Data', ax=axs[0], kind='bar',x='zip_code',y='number_of_311_calls') flood_zip_top.plot(title='FOIA Data', ax=axs[1], kind='bar',x='Zip Code',y='Count Calls') flood_comm_time = flood_comm_stack_df.copy() flood_comm_time['Date'] = pd.to_datetime(flood_comm_time['Date']) flood_comm_time = flood_comm_time.set_index(pd.DatetimeIndex(flood_comm_time['Date'])) flood_comm_time = flood_comm_time['2009-01-01':'2015-12-30'] flood_comm_time_sum = flood_comm_time.groupby(['Community Area'])['Count Calls'].sum() flood_comm_time_sum = flood_comm_time_sum.reset_index() flood_comm_time_sum.head() Explanation: Zip Code Data Comparison - WBEZ, Current Data To make sure that the data is at least similar, plotting out both next to each other, and they seem to match up overall with some slight differences. Given that it relatively ensures we're working with the same data, we can plot out the same patterns by neighborhood and see if Austin is still overlooked (given that it is in the zip code breakdown) Data mostly matches up, can see it on map on Gross Gatherings article End of explanation flood_comm_time_top = flood_comm_time_sum.sort_values(by='Count Calls', ascending=False)[:20] flood_comm_time_top.plot(kind='bar',x='Community Area',y='Count Calls') Explanation: Community Area Breakdown for 2009-2015 Looking at the period WBEZ reviewed calls for, it's clear that zip codes are missing a large part of the picture. In reality, Austin is far and away the community area with the most floods, not necessarily Chatham. 
While many areas seem to have significant problems with flooding, using zip codes as a proxy for neighborhoods in this case loses some of the concentration. End of explanation
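To quantify how closely the FOIA counts track the WBEZ counts, rather than only eyeballing the side-by-side bar charts, the two tables can be merged on zip code. This sketch assumes both zip columns can be coerced to the same string dtype; the column names are taken from the frames defined above.

# Align the two sources on zip code and compare call counts directly
wbez_cmp = wbez_zip.rename(columns={'zip_code': 'Zip Code', 'number_of_311_calls': 'WBEZ Calls'})
wbez_cmp['Zip Code'] = wbez_cmp['Zip Code'].astype(str)
foia_cmp = flood_zip_sum.copy()
foia_cmp['Zip Code'] = foia_cmp['Zip Code'].astype(str)
zip_compare = foia_cmp.merge(wbez_cmp, on='Zip Code', how='inner')
zip_compare['Difference'] = zip_compare['Count Calls'] - zip_compare['WBEZ Calls']
zip_compare.sort_values('Difference', ascending=False).head(10)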
5,284
Given the following text description, write Python code to implement the functionality described below step by step Description: Source localization with MNE/dSPM/sLORETA/eLORETA The aim of this tutorial is to teach you how to compute and apply a linear inverse method such as MNE/dSPM/sLORETA/eLORETA on evoked/raw/epochs data. Step1: Process MEG data Step2: Compute regularized noise covariance For more details see tut_compute_covariance. Step3: Compute the evoked response Let's just use MEG channels for simplicity. Step4: Inverse modeling Step5: Compute inverse solution Step6: Visualization View activation time-series Step7: Examine the original data and the residual after fitting Step8: Here we use peak getter to move visualization to the time point of the peak and draw a marker at the maximum peak vertex. Step9: Morph data to average brain Step10: Dipole orientations The pick_ori parameter of the Step11: Note that there is a relationship between the orientation of the dipoles and the surface of the cortex. For this reason, we do not use an inflated cortical surface for visualization, but the original surface used to define the source space. For more information about dipole orientations, see tut-dipole-orientations. Now let's look at each solver
Python Code: # sphinx_gallery_thumbnail_number = 10 import numpy as np import matplotlib.pyplot as plt import mne from mne.datasets import sample from mne.minimum_norm import make_inverse_operator, apply_inverse Explanation: Source localization with MNE/dSPM/sLORETA/eLORETA The aim of this tutorial is to teach you how to compute and apply a linear inverse method such as MNE/dSPM/sLORETA/eLORETA on evoked/raw/epochs data. End of explanation data_path = sample.data_path() raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif' raw = mne.io.read_raw_fif(raw_fname) # already has an average reference events = mne.find_events(raw, stim_channel='STI 014') event_id = dict(aud_l=1) # event trigger and conditions tmin = -0.2 # start of each epoch (200ms before the trigger) tmax = 0.5 # end of each epoch (500ms after the trigger) raw.info['bads'] = ['MEG 2443', 'EEG 053'] baseline = (None, 0) # means from the first instant to t = 0 reject = dict(grad=4000e-13, mag=4e-12, eog=150e-6) epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True, picks=('meg', 'eog'), baseline=baseline, reject=reject) Explanation: Process MEG data End of explanation noise_cov = mne.compute_covariance( epochs, tmax=0., method=['shrunk', 'empirical'], rank=None, verbose=True) fig_cov, fig_spectra = mne.viz.plot_cov(noise_cov, raw.info) Explanation: Compute regularized noise covariance For more details see tut_compute_covariance. End of explanation evoked = epochs.average().pick('meg') evoked.plot(time_unit='s') evoked.plot_topomap(times=np.linspace(0.05, 0.15, 5), ch_type='mag', time_unit='s') # Show whitening evoked.plot_white(noise_cov, time_unit='s') del epochs # to save memory Explanation: Compute the evoked response Let's just use MEG channels for simplicity. End of explanation # Read the forward solution and compute the inverse operator fname_fwd = data_path + '/MEG/sample/sample_audvis-meg-oct-6-fwd.fif' fwd = mne.read_forward_solution(fname_fwd) # make an MEG inverse operator info = evoked.info inverse_operator = make_inverse_operator(info, fwd, noise_cov, loose=0.2, depth=0.8) del fwd # You can write it to disk with:: # # >>> from mne.minimum_norm import write_inverse_operator # >>> write_inverse_operator('sample_audvis-meg-oct-6-inv.fif', # inverse_operator) Explanation: Inverse modeling: MNE/dSPM on evoked and raw data End of explanation method = "dSPM" snr = 3. lambda2 = 1. 
/ snr ** 2 stc, residual = apply_inverse(evoked, inverse_operator, lambda2, method=method, pick_ori=None, return_residual=True, verbose=True) Explanation: Compute inverse solution End of explanation plt.figure() plt.plot(1e3 * stc.times, stc.data[::100, :].T) plt.xlabel('time (ms)') plt.ylabel('%s value' % method) plt.show() Explanation: Visualization View activation time-series End of explanation fig, axes = plt.subplots(2, 1) evoked.plot(axes=axes) for ax in axes: ax.texts = [] for line in ax.lines: line.set_color('#98df81') residual.plot(axes=axes) Explanation: Examine the original data and the residual after fitting: End of explanation vertno_max, time_max = stc.get_peak(hemi='rh') subjects_dir = data_path + '/subjects' surfer_kwargs = dict( hemi='rh', subjects_dir=subjects_dir, clim=dict(kind='value', lims=[8, 12, 15]), views='lateral', initial_time=time_max, time_unit='s', size=(800, 800), smoothing_steps=5) brain = stc.plot(**surfer_kwargs) brain.add_foci(vertno_max, coords_as_verts=True, hemi='rh', color='blue', scale_factor=0.6, alpha=0.5) brain.add_text(0.1, 0.9, 'dSPM (plus location of maximal activation)', 'title', font_size=14) Explanation: Here we use peak getter to move visualization to the time point of the peak and draw a marker at the maximum peak vertex. End of explanation # setup source morph morph = mne.compute_source_morph( src=inverse_operator['src'], subject_from=stc.subject, subject_to='fsaverage', spacing=5, # to ico-5 subjects_dir=subjects_dir) # morph data stc_fsaverage = morph.apply(stc) brain = stc_fsaverage.plot(**surfer_kwargs) brain.add_text(0.1, 0.9, 'Morphed to fsaverage', 'title', font_size=20) del stc_fsaverage Explanation: Morph data to average brain End of explanation stc_vec = apply_inverse(evoked, inverse_operator, lambda2, method=method, pick_ori='vector') brain = stc_vec.plot(**surfer_kwargs) brain.add_text(0.1, 0.9, 'Vector solution', 'title', font_size=20) del stc_vec Explanation: Dipole orientations The pick_ori parameter of the :func:mne.minimum_norm.apply_inverse function controls the orientation of the dipoles. One useful setting is pick_ori='vector', which will return an estimate that does not only contain the source power at each dipole, but also the orientation of the dipoles. End of explanation for mi, (method, lims) in enumerate((('dSPM', [8, 12, 15]), ('sLORETA', [3, 5, 7]), ('eLORETA', [0.75, 1.25, 1.75]),)): surfer_kwargs['clim']['lims'] = lims stc = apply_inverse(evoked, inverse_operator, lambda2, method=method, pick_ori=None) brain = stc.plot(figure=mi, **surfer_kwargs) brain.add_text(0.1, 0.9, method, 'title', font_size=20) del stc Explanation: Note that there is a relationship between the orientation of the dipoles and the surface of the cortex. For this reason, we do not use an inflated cortical surface for visualization, but the original surface used to define the source space. For more information about dipole orientations, see tut-dipole-orientations. Now let's look at each solver: End of explanation
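A quick numeric complement to the residual plot is the fraction of evoked signal variance the inverse solution accounts for. This assumes evoked and residual expose their channel data through the .data attribute, as MNE Evoked objects do; it is a rough sanity check, not an MNE-provided metric.

# Fraction of variance in the evoked response not left in the residual
explained = 1.0 - (residual.data ** 2).sum() / (evoked.data ** 2).sum()
print('Variance explained by the dSPM solution: %.1f%%' % (100 * explained))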
5,285
Given the following text description, write Python code to implement the functionality described below step by step Description: Document retrieval from wikipedia data Fire up GraphLab Create Step1: Load some text data - from wikipedia, pages on people Step2: Data contains Step3: Explore the dataset and checkout the text it contains Exploring the entry for president Obama Step4: Exploring the entry for actor George Clooney Step5: Get the word counts for Obama article Step6: Sort the word counts for the Obama article Turning dictonary of word counts into a table Step7: Sorting the word counts to show most common words at the top Step8: Most common words include uninformative words like "the", "in", "and",... Compute TF-IDF for the corpus To give more weight to informative words, we weigh them by their TF-IDF scores. Step9: Examine the TF-IDF for the Obama article Step10: Words with highest TF-IDF are much more informative. Manually compute distances between a few people Let's manually compare the distances between the articles for a few famous people. Step11: Is Obama closer to Clinton than to Beckham? We will use cosine distance, which is given by (1-cosine_similarity) and find that the article about president Obama is closer to the one about former president Clinton than that of footballer David Beckham. Step12: Build a nearest neighbor model for document retrieval We now create a nearest-neighbors model and apply it to document retrieval. Step13: Applying the nearest-neighbors model for retrieval Who is closest to Obama? Step14: As we can see, president Obama's article is closest to the one about his vice-president Biden, and those of other politicians. Other examples of document retrieval
Python Code: import graphlab graphlab.product_key.set_product_key("7348-CE53-3B3E-DBED-152B-828E-A99E-F303") Explanation: Document retrieval from wikipedia data Fire up GraphLab Create End of explanation people = graphlab.SFrame('people_wiki.gl/people_wiki.gl') Explanation: Load some text data - from wikipedia, pages on people End of explanation people.head() len(people) Explanation: Data contains: link to wikipedia article, name of person, text of article. End of explanation obama = people[people['name'] == 'Barack Obama'] obama obama['text'] Explanation: Explore the dataset and checkout the text it contains Exploring the entry for president Obama End of explanation clooney = people[people['name'] == 'George Clooney'] clooney['text'] Explanation: Exploring the entry for actor George Clooney End of explanation obama['word_count'] = graphlab.text_analytics.count_words(obama['text']) print obama['word_count'] Explanation: Get the word counts for Obama article End of explanation obama_word_count_table = obama[['word_count']].stack('word_count', new_column_name = ['word','count']) Explanation: Sort the word counts for the Obama article Turning dictonary of word counts into a table End of explanation obama_word_count_table.head() obama_word_count_table.sort('count',ascending=False) Explanation: Sorting the word counts to show most common words at the top End of explanation people['word_count'] = graphlab.text_analytics.count_words(people['text']) people.head() tfidf = graphlab.text_analytics.tf_idf(people['word_count']) tfidf people['tfidf'] = tfidf['docs'] Explanation: Most common words include uninformative words like "the", "in", "and",... Compute TF-IDF for the corpus To give more weight to informative words, we weigh them by their TF-IDF scores. End of explanation obama = people[people['name'] == 'Barack Obama'] obama[['tfidf']].stack('tfidf',new_column_name=['word','tfidf']).sort('tfidf',ascending=False) Explanation: Examine the TF-IDF for the Obama article End of explanation clinton = people[people['name'] == 'Bill Clinton'] beckham = people[people['name'] == 'David Beckham'] Explanation: Words with highest TF-IDF are much more informative. Manually compute distances between a few people Let's manually compare the distances between the articles for a few famous people. End of explanation graphlab.distances.cosine(obama['tfidf'][0],clinton['tfidf'][0]) graphlab.distances.cosine(obama['tfidf'][0],beckham['tfidf'][0]) Explanation: Is Obama closer to Clinton than to Beckham? We will use cosine distance, which is given by (1-cosine_similarity) and find that the article about president Obama is closer to the one about former president Clinton than that of footballer David Beckham. End of explanation knn_model = graphlab.nearest_neighbors.create(people,features=['tfidf'],label='name') Explanation: Build a nearest neighbor model for document retrieval We now create a nearest-neighbors model and apply it to document retrieval. End of explanation knn_model.query(obama) Explanation: Applying the nearest-neighbors model for retrieval Who is closest to Obama? 
End of explanation swift = people[people['name'] == 'Taylor Swift'] knn_model.query(swift) jolie = people[people['name'] == 'Angelina Jolie'] knn_model.query(jolie) arnold = people[people['name'] == 'Arnold Schwarzenegger'] knn_model.query(arnold) elton = people[people['name'] == 'Elton John'] elton['word_count'] = graphlab.text_analytics.count_words(elton['text']) print elton['word_count'] elton_word_count_table = elton[['word_count']].stack('word_count', new_column_name = ['word','count']) elton_word_count_table.sort('count',ascending=False) elton[['tfidf']].stack('tfidf',new_column_name=['word','tfidf']).sort('tfidf',ascending=False) victoria = people[people['name'] == 'Victoria Beckham'] graphlab.distances.cosine(elton['tfidf'][0],victoria['tfidf'][0]) paul = people[people['name'] == 'Paul McCartney'] graphlab.distances.cosine(elton['tfidf'][0],paul['tfidf'][0]) knn_model.query(elton, k=None).print_rows(num_rows=30) kwc_model = graphlab.nearest_neighbors.create(people,features=['word_count'],label='name') kwc_model.query(elton) Explanation: As we can see, president Obama's article is closest to the one about his vice-president Biden, and those of other politicians. Other examples of document retrieval End of explanation
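A small convenience wrapper keeps the pairwise distance comparisons above in one place. It reuses only the people SFrame and graphlab.distances.cosine already used in this notebook; the helper name tfidf_distance is not part of GraphLab Create itself.

def tfidf_distance(name_a, name_b):
    # Look up each person's TF-IDF vector and return the cosine distance between them
    row_a = people[people['name'] == name_a]
    row_b = people[people['name'] == name_b]
    return graphlab.distances.cosine(row_a['tfidf'][0], row_b['tfidf'][0])

tfidf_distance('Barack Obama', 'Bill Clinton')
tfidf_distance('Barack Obama', 'David Beckham')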
5,286
Given the following text description, write Python code to implement the functionality described below step by step Description: Modelo de evolución de un Pulsar Binario Cálculo Simbólico de $a$ en función de $e$ Dividiendo las ecuaciones para $\dot{a}$ y $\dot{e}$ podemos eliminar el tiempo de estas expresiones y encontrar una ecuación que relaciona directamente $a$ con $e$ Step1: y ahora exponenciamos Step2: Si definimos $$ g(e) Step3: La constante $C_1$ es determinada por la condición inicial $a(e_0)=a_0$ que, luego de ser reemplazada de vuelta en la solución, entrega la misma expresión encontrada por el otro método. Solución numérica Step4: Usaremos la siguiente adimencionalización de las variables Step5: Usamos los datos del Pulsar de Hulse y Taulor, de acuerdo a lo por Weisberg, Nice y Taylor (2010) (http Step6: Calculamos algunos otros parámetros que nos serán útiles Step7: Definimos también un par de funciones que relacionan el periodo orbital $T$ (en segundos) con el semieje mayor $a$ (de la coordenada relativa, en metros), y viceversa Step8: Dado que aquí resolveremos el sistema de ecuaciones dos veces (con distintas condiciones iniciales), definiremos una función que nos entrega todas las soluciones Step9: Primera integracion Step10: Como vemos, dadas las condiciones iniciales, se requiere un tiempo adimensionalizado del orden de $10^{21}$ para que el sistema colapse (suponiendo que el modelo es válido incluso a pequeñas distancias, cosa que en realidad no es cierta). A continuación, graficamos las cantidades físicas (con dimensiones) Step11: Vemos entonces que el tiempo de colapso es del orden de $10^8$ años. Podemos también graficar cómo evoluciona la excentricidad Step12: Finalmente, graficamos la evolución del periodo orbital del sistema Step13: Graficando la dependencia de $a$ con $e$ Primero definimos la función $g(e)$ y la graficamos Step14: Ahora graficamos $a$ en términos de $e$, tanto para la solución analítica como numérica Step15: Como vemos, en el tiempo de observación del pulsar binario, aproximadamente 30 años, el decaimiento tanto de $a$ como $e$ será en la práctica a una tasa constante (línea recta en el gráfico en función del tiempo). Resolvemos nuevamente el sistema, pero sólo en el intervalo de tiempo de 30 años Step16: Este comportamiento implica que en este intervalo de tiempo de aproximadamente 30 años los valores de $\dot{T}$, $\dot{a}$ y $\dot{e}$ pueden considerarse constantes. Con el valor de $\dot{T}$ podemos modelar el retardo acumulado en el movimiento orbital del sistema. Si $\dot{T}=$ cte. entonces el tiempo transcurrido hasta completar la $n$-ésima revolución es determinado por las relaciones $$ T(t)\approx T_0 + \dot{T}(t-t_0), $$ $$ t_{n+1} \approx t_n + T(t_n), $$ que al ser iteradas implican que \begin{equation} t_n \approx t_0+nT_0+\dot{T}T_0\frac{n(n-1)}{2}+O(\dot{T}{}^2). \end{equation} Por lo tanto, el retardo respecto al valor newtoniano ($t_n^{\rm Newton}=t_0+nT_0$), luego de $n$ revoluciones es dado por \begin{equation} (\Delta t)_n \approx \dot{T}T_0\frac{n(n-1)}{2}+O(\dot{T}{}^2). \end{equation} El valor de $\dot{T}$ puede ser evaluado usando la función dotx que definimos previamente, y con la relación \begin{equation} \dot{T}=\frac{3}{2}\frac{c}{R_\ast}\frac{T}{\tilde{a}}\frac{d\tilde{a}}{d\tilde{t}} \end{equation} Step17: Este valor concuerda con el reportado por Weisberg, Nice y Taylor (2010) (http
Python Code: from sympy import * init_printing(use_unicode=True) a0 = Symbol('a_0') e0 = Symbol('e_0') e = Symbol('e') a = Symbol('a') integrando = Rational(12,19)*((1+Rational(73,24)*e**2+Rational(37,96)*e**4) /(e*(1-e**2)*(1+Rational(121,304)*e**2))) integrando Integral = integrate(integrando,(e,e0,e)) Integral Explanation: Modelo de evolución de un Pulsar Binario Cálculo Simbólico de $a$ en función de $e$ Dividiendo las ecuaciones para $\dot{a}$ y $\dot{e}$ podemos eliminar el tiempo de estas expresiones y encontrar una ecuación que relaciona directamente $a$ con $e$: $$\frac{da}{de}=\frac{12}{19}a\frac{1+(73/24)e^2+(37/96)e^4}{e(1-e^2)[1+(121/304)e^2]}$$ Integrando esta relación respecto a $e$, encontramos: $$ \int_{a_{0}}^{a}\frac{d\bar{a}}{\bar{a}}=\int_{e_{0}}^{e}\frac{12}{19}\frac{1+(73/24)\bar{e}^2+(37/96)\bar{e}^4}{\bar{e}(1-\bar{e}^2)[1+(121/304)\bar{e}^2]}\,d\bar{e} $$ Por lo tanto, $$ \ln \left(\frac{a}{a_{0}}\right)=\int_{e_{0}}^{e}\frac{12}{19}\frac{1+(73/24)\bar{e}^2+(37/96)\bar{e}^4}{\bar{e}(1-\bar{e}^2)[1+(121/304)\bar{e}^2]}\,d\bar{e} $$ Despejando $a$, obtenemos la expresión $$ \bar{a}= a_{0}\exp\left[\int_{e_{0}}^{e}\frac{12}{19}\frac{1+(73/24)\bar{e}^2+(37/96)\bar{e}^4}{\bar{e}(1-\bar{e}^2)[1+(121/304)\bar{e}^2]}\,d\bar{e}\right]. $$ Primero calcularemos la integral en la expresión anterior, usando sympy End of explanation a = a0*exp(Integral) a Explanation: y ahora exponenciamos: End of explanation af = Function('a') solucion = dsolve(Derivative(af(e),e)-af(e)*integrando,af(e)) solucion Explanation: Si definimos $$ g(e):= \frac{e^{12/19}}{1-e^2}\left(1+\frac{121}{304} \right)^{870/2299}, $$ entonces la solución para $a(e)$ puede escribirse como $$ a(e)=a_{0}\frac{g(e)}{g(e_{0})}. $$ Alternativamente, podemos intentar usar la función dsolve de sympy para resolver directamente la EDO determinada por la expresión de $da/dt$ descrita arriba: End of explanation %matplotlib inline import numpy as np from scipy.integrate import odeint import matplotlib.pyplot as plt from __future__ import division Explanation: La constante $C_1$ es determinada por la condición inicial $a(e_0)=a_0$ que, luego de ser reemplazada de vuelta en la solución, entrega la misma expresión encontrada por el otro método. Solución numérica End of explanation def dotx(x,t): a = x[0] e = x[1] return [-(16/(5*a**3))*(1+(73/24)*e**2+(37/96)*e**4)/((1-e**2)**(7/2)), -(76/(15*a**4))*e*(1+(121/304)*e**2)/((1-e**2)**(5/2))] Explanation: Usaremos la siguiente adimencionalización de las variables: $$ \tilde{a}:= \frac{a}{R_{}}, \qquad \tilde{t}:= \frac{c t}{R_{}}, $$ con $$ R_{*}^3:=\frac{4 G^3 \mu M^2}{c^6}. $$ Las ecuaciones adimensionalizadas que describen el decaimiento (y circularización de la órbita) son: \begin{align} \frac{d\tilde{a}}{d\tilde{t}} &= -\frac{16}{5}\frac{1}{\tilde{a}^3}\frac{1}{\left(1-e^2\right)^{7/2}}\left(1+\frac{73}{24}e^2+\frac{37}{96}e^4\right) ,\ \frac{de}{d\tilde{t}} &= -\frac{76}{15}\frac{1}{\tilde{a}^4}\frac{e}{\left(1-e^2\right)^{5/2}}\left(1+\frac{121}{304}e^2\right) . \end{align} Como el sistema es de primer orden, basta definir el vector (bidimensional) solución $x$ por medio de $x[0]:=\tilde{a}$, $x[1]=e$. 
Por lo tanto, la función dotx, que define la derivada temporal de $x$ (en nuestro caso, derivada con respecto al tiempo adimensionalizado $\tilde{t}$, es dada por End of explanation T0_d = 0.322997448911 # periodo inicial, en días e0 = 0.6171334 # excentricidad inicial M_c = 1.3886 # masa de la compañera, en masas solares M_p = 1.4398 # masa del pulsar, en masas solares c = 299792458 # rapidez de la luz, en metros por segundo MGcm3 = 4.925490947E-6 # MG/c^3, en segundos Explanation: Usamos los datos del Pulsar de Hulse y Taulor, de acuerdo a lo por Weisberg, Nice y Taylor (2010) (http://arxiv.org/abs/1011.0718v1): End of explanation m_sol = MGcm3*c # parametro de masa del Sol m=GM/c^2, en metros M = M_c+M_p # masa total, en masas solares mu = (M_c*M_p)/M # masa reducida, en masas solares R_ast = m_sol*(4*mu*M**2)**(1/3) # R_\ast en metros T0_s = T0_d*86400 # periodo inicial, en segundos print('R_ast = '+str(R_ast)+' [m]') print('T0 = '+str(T0_s) + ' [s]') Explanation: Calculamos algunos otros parámetros que nos serán útiles: End of explanation def a(T_s): return (m_sol*M*(c*T_s/(2*np.pi))**2)**(1/3) def T(a_m): return (2*np.pi/c)*(a_m**3/(M*m_sol))**(1/2) a0_m = a(T0_s) # a inicial, en metros at0 = a0_m/R_ast # a tilde inicial print('a0 = '+str(a0_m)+' m') print('at0 = '+str(at0)) Explanation: Definimos también un par de funciones que relacionan el periodo orbital $T$ (en segundos) con el semieje mayor $a$ (de la coordenada relativa, en metros), y viceversa: End of explanation def solucion(x0,tt_int): print 'Se resuelve con at0 = %2.f y e0 = %2.f'%(x0[0],x0[1]) sol = odeint(dotx,x0,tt_int) at_todos = sol[:,0] # verifica si at llega a 2. En caso positivo corta el arreglo de soluciones restriccion = np.where(at_todos<2)[0] if len(restriccion) is not 0: pos_ttmax = restriccion[0] # determina el tiempo en el que at=2 print('Acortando intervalo a tt_max = '+str(tt_int[pos_ttmax])) else: pos_ttmax = len(tt_int) tt = tt_int[:pos_ttmax] t_a = tt*R_ast/c/31557600 # el tiempo, en años at = sol[:pos_ttmax,0] e = sol[:pos_ttmax,1] a_m = at*R_ast # solución de a, en metros T_s = T(a_m) # solución de T, en segundos return tt,t_a,at,e,a_m,T_s Explanation: Dado que aquí resolveremos el sistema de ecuaciones dos veces (con distintas condiciones iniciales), definiremos una función que nos entrega todas las soluciones: End of explanation tt_int_max = 10**22 # tiempo adimensional máximo de integración. Con este valor se llega hasta a=2 tt_int = np.linspace(0,tt_int_max,100000) # tiempos en los que se integrará el sistema print('tt_int_max = '+str(tt_int_max)) x0 = [at0,e0] # valores iniciales tt,t_a,at,e,a_m,T_s = solucion(x0,tt_int) # calcula y asigna valores de la solución fig,eje = plt.subplots(1,1,figsize=(5,5)) eje.plot(tt,at) eje.set_xlabel(r'$\tilde{t}$',fontsize=15) eje.set_ylabel(r'$\tilde{a}$',fontsize=15) plt.grid() Explanation: Primera integracion: hasta el colapso final! End of explanation fig,eje = plt.subplots(1,1,figsize=(5,5)) eje.plot(t_a,a_m) eje.set_title(u'Evolución del semieje mayor',fontsize=15) eje.set_xlabel(r'$t\ (a\~nos)$',fontsize=15) eje.set_ylabel(r'$a (m)$',fontsize=15) plt.grid() Explanation: Como vemos, dadas las condiciones iniciales, se requiere un tiempo adimensionalizado del orden de $10^{21}$ para que el sistema colapse (suponiendo que el modelo es válido incluso a pequeñas distancias, cosa que en realidad no es cierta). 
A continuación, graficamos las cantidades físicas (con dimensiones): End of explanation fig,eje = plt.subplots(1,1,figsize=(5,5)) eje.plot(t_a,e) eje.set_title(u'Evolución de la excentricidad',fontsize=15) eje.set_xlabel(r'$t\ (a\~nos)$',fontsize=15) eje.set_ylabel(r'$e$',fontsize=15) plt.grid() Explanation: Vemos entonces que el tiempo de colapso es del orden de $10^8$ años. Podemos también graficar cómo evoluciona la excentricidad: End of explanation T_h = T_s/3600. # periodo orbital, en horas plt.figure(figsize=(5,5)) plt.plot(t_a,T_h) plt.title(u'Evolución del Periodo orbital',fontsize=15) plt.xlabel(u'$t$ (años)',fontsize=15) plt.ylabel(r'$T$ (horas)',fontsize=15) plt.grid() Explanation: Finalmente, graficamos la evolución del periodo orbital del sistema: End of explanation def g(e): return e**(12/19)*(1+121*e**2/304)**(870/2299)/(1-e**2) fig,eje= plt.subplots(1,1,figsize=(5,5)) ee = np.linspace(0,1,100) eje.plot(ee,g(ee)) eje.set_yscale('log') eje.set_title(r'$g$ versus $e$',fontsize=15) eje.set_xlabel(r'$e$',fontsize=15) eje.set_ylabel(r'$g$',fontsize=15) #plt.legend(loc='best') plt.grid() Explanation: Graficando la dependencia de $a$ con $e$ Primero definimos la función $g(e)$ y la graficamos End of explanation fig,eje= plt.subplots(1,1,figsize=(5,5)) eje.plot(e,at*R_ast, label=u'sol. numérica') ee = np.linspace(min(e),e0,10) a_an = a0_m*g(ee)/g(e0) eje.plot(ee,a_an,'o',label=u'sol. analítica') eje.set_title(u'Semieje mayor v/s excentricidad',fontsize=14) eje.set_xlabel(r'$e$',fontsize=15) eje.set_ylabel(r'$a$',fontsize=15) plt.legend(loc='best') plt.grid() Explanation: Ahora graficamos $a$ en términos de $e$, tanto para la solución analítica como numérica End of explanation t_max_a = 30 # tiempo de integración, en años tt_int_max = 31557600*c*t_max_a/R_ast #tiempo adimensional máximo de integración tt_int = np.linspace(0,tt_int_max,100000) print('tt_int_max = '+str(tt_int_max)) tt,t_a,at,e,a_m,T_s = solucion(x0,tt_int) fig,eje = plt.subplots(1,1,figsize=(5,5)) eje.plot(t_a,at/at0-1) eje.set_title(u'Evolución del semieje mayor',fontsize=15) eje.set_xlabel(r'$t\ (a\~nos)$',fontsize=15) eje.set_ylabel(r'$(a-a_0)/a_0$',fontsize=15) plt.grid() fig,eje = plt.subplots(1,1,figsize=(5,5)) eje.plot(t_a,e-e0) eje.set_title(u'Evolución de la excentricidad',fontsize=15) eje.set_xlabel(r'$t\ (a\~nos)$',fontsize=15) eje.set_ylabel(r'$e$',fontsize=15) plt.grid() T_h = T_s/3600. # periodo orbital, en horas fig,eje = plt.subplots(1,1,figsize=(5,5)) eje.plot(t_a,T_h-T_h[0]) eje.set_title(u'Evolución del periodo orbiral',fontsize=15) eje.set_xlabel(u'$t$ (años)',fontsize=15) eje.set_ylabel(r'$T-T_0$ (horas)',fontsize=15) plt.grid() Explanation: Como vemos, en el tiempo de observación del pulsar binario, aproximadamente 30 años, el decaimiento tanto de $a$ como $e$ será en la práctica a una tasa constante (línea recta en el gráfico en función del tiempo). Resolvemos nuevamente el sistema, pero sólo en el intervalo de tiempo de 30 años: End of explanation dota = dotx(x0,0)[0] dotT = (3/2)*(c/R_ast)*(T0_s/at0)*dota print('dT/dt= ' + str(dotT)) Explanation: Este comportamiento implica que en este intervalo de tiempo de aproximadamente 30 años los valores de $\dot{T}$, $\dot{a}$ y $\dot{e}$ pueden considerarse constantes. Con el valor de $\dot{T}$ podemos modelar el retardo acumulado en el movimiento orbital del sistema. Si $\dot{T}=$ cte. 
entonces el tiempo transcurrido hasta completar la $n$-ésima revolución es determinado por las relaciones $$ T(t)\approx T_0 + \dot{T}(t-t_0), $$ $$ t_{n+1} \approx t_n + T(t_n), $$ que al ser iteradas implican que \begin{equation} t_n \approx t_0+nT_0+\dot{T}T_0\frac{n(n-1)}{2}+O(\dot{T}{}^2). \end{equation} Por lo tanto, el retardo respecto al valor newtoniano ($t_n^{\rm Newton}=t_0+nT_0$), luego de $n$ revoluciones es dado por \begin{equation} (\Delta t)_n \approx \dot{T}T_0\frac{n(n-1)}{2}+O(\dot{T}{}^2). \end{equation} El valor de $\dot{T}$ puede ser evaluado usando la función dotx que definimos previamente, y con la relación \begin{equation} \dot{T}=\frac{3}{2}\frac{c}{R_\ast}\frac{T}{\tilde{a}}\frac{d\tilde{a}}{d\tilde{t}} \end{equation} End of explanation data = np.genfromtxt('data-HT.txt') t_exp = data[:,0]-data[0,0] Delta_t_exp = data[:,1] fig,eje = plt.subplots(1,1,figsize=(5,5)) n = np.arange(40000) t_n = (n*T0_s+dotT*T0_s*n*(n-1)/2.)/31557600. # tiempo, en años Delta_t_n = dotT*T0_s*n*(n-1)/2 #retraso acumulado, en segundos eje.plot(t_n,Delta_t_n, label='RG') eje.hlines(0,0,40, color='red',label='Newtoniano') eje.set_xlabel(u'Tiempo (años)') eje.set_ylabel(r'Retraso acumulado (s)') eje.set_xlim(0,35) eje.set_ylim(-45,1) plt.plot(t_exp,Delta_t_exp,'o',label='Datos') eje.legend(loc=3) plt.grid() Explanation: Este valor concuerda con el reportado por Weisberg, Nice y Taylor (2010) (http://arxiv.org/abs/1011.0718v1), ver ec. (4). Para comparar con los datos observacionales, cargamos los valores del retardo acumulado, obtenidos a partir del gráfico original de Wiesberg, Nice y Taylor (2010) usando WebPlotDigitizer para extraer los valores. End of explanation
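A quick numerical check of the analytic relation a(e) = a0 g(e)/g(e0) against the integrated solution can complement the comparison plot above. This sketch reuses only names defined in the notebook (g, e, at, R_ast, a0_m, e0) and assumes those arrays are still in scope when it runs.

# Maximum relative deviation between the analytic a(e) and the numerically integrated a(e)
a_numeric = at * R_ast
a_analytic = a0_m * g(e) / g(e0)
max_rel_error = np.max(np.abs(a_numeric - a_analytic) / a_analytic)
print('max relative deviation:', max_rel_error)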
5,287
Given the following text description, write Python code to implement the functionality described below step by step Description: Copyright 2020 The TensorFlow Authors. Step1: Question Answer with TensorFlow Lite Model Maker <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https Step2: Import the required packages. Step3: The "End-to-End Overview" demonstrates a simple end-to-end example. The following sections walk through the example step by step to show more detail. Choose a model_spec that represents a model for question answer Each model_spec object represents a specific model for question answer. The Model Maker currently supports MobileBERT and BERT-Base models. Supported Model | Name of model_spec | Model Description --- | --- | --- MobileBERT | 'mobilebert_qa' | 4.3x smaller and 5.5x faster than BERT-Base while achieving competitive results, suitable for on-device scenario. MobileBERT-SQuAD | 'mobilebert_qa_squad' | Same model architecture as MobileBERT model and the initial model is already retrained on SQuAD1.1. BERT-Base | 'bert_qa' | Standard BERT model that widely used in NLP tasks. In this tutorial, MobileBERT-SQuAD is used as an example. Since the model is already retrained on SQuAD1.1, it could coverage faster for question answer task. Step4: Load Input Data Specific to an On-device ML App and Preprocess the Data The TriviaQA is a reading comprehension dataset containing over 650K question-answer-evidence triples. In this tutorial, you will use a subset of this dataset to learn how to use the Model Maker library. To load the data, convert the TriviaQA dataset to the SQuAD1.1 format by running the converter Python script with --sample_size=8000 and a set of web data. Modify the conversion code a little bit by Step5: You can also train the MobileBERT model with your own dataset. If you are running this notebook on Colab, upload your data by using the left sidebar. <img src="https Step6: Customize the TensorFlow Model Create a custom question answer model based on the loaded data. The create function comprises the following steps Step7: Have a look at the detailed model structure. Step8: Evaluate the Customized Model Evaluate the model on the validation data and get a dict of metrics including f1 score and exact match etc. Note that metrics are different for SQuAD1.1 and SQuAD2.0. Step9: Export to TensorFlow Lite Model Convert the existing model to TensorFlow Lite model format that you can later use in an on-device ML application. Since MobileBERT is too big for on-device applications, use dynamic range quantization on the model to compress MobileBERT by 4x with the minimal loss of performance. First, define the quantization configuration Step10: Export the quantized TFLite model according to the quantization config with metadata. The default TFLite model filename is model.tflite. Step11: You can use the TensorFlow Lite model file in the bert_qa reference app using BertQuestionAnswerer API in TensorFlow Lite Task Library by downloading it from the left sidebar on Colab. The allowed export formats can be one or a list of the following Step12: You can also evalute the tflite model with the evaluate_tflite method. This step is expected to take a long time. Step13: Advanced Usage The create function is the critical part of this library in which the model_spec parameter defines the model specification. The BertQAModelSpec class is currently supported. There are 2 models
Python Code: #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. Explanation: Copyright 2020 The TensorFlow Authors. End of explanation !pip install tflite-model-maker Explanation: Question Answer with TensorFlow Lite Model Maker <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/lite/tutorials/model_maker_question_answer"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/tutorials/model_maker_question_answer.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/tutorials/model_maker_question_answer.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/tensorflow/tensorflow/lite/g3doc/tutorials/model_maker_question_answer.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> The TensorFlow Lite Model Maker library simplifies the process of adapting and converting a TensorFlow model to particular input data when deploying this model for on-device ML applications. This notebook shows an end-to-end example that utilizes the Model Maker library to illustrate the adaptation and conversion of a commonly-used question answer model for question answer task. Introduction to Question Answer Task The supported task in this library is extractive question answer task, which means given a passage and a question, the answer is the span in the passage. The image below shows an example for question answer. <p align="center"><img src="https://storage.googleapis.com/download.tensorflow.org/models/tflite/screenshots/model_maker_squad_showcase.png" width="500"></p> <p align="center"> <em>Answers are spans in the passage (image credit: <a href="https://rajpurkar.github.io/mlx/qa-and-squad/">SQuAD blog</a>) </em> </p> As for the model of question answer task, the inputs should be the passage and question pair that are already preprocessed, the outputs should be the start logits and end logits for each token in the passage. The size of input could be set and adjusted according to the length of passage and question. End-to-End Overview The following code snippet demonstrates how to get the model within a few lines of code. The overall process includes 5 steps: (1) choose a model, (2) load data, (3) retrain the model, (4) evaluate, and (5) export it to TensorFlow Lite format. ```python Chooses a model specification that represents the model. spec = model_spec.get('mobilebert_qa') Gets the training data and validation data. 
train_data = QuestionAnswerDataLoader.from_squad(train_data_path, spec, is_training=True) validation_data = QuestionAnswerDataLoader.from_squad(validation_data_path, spec, is_training=False) Fine-tunes the model. model = question_answer.create(train_data, model_spec=spec) Gets the evaluation result. metric = model.evaluate(validation_data) Exports the model to the TensorFlow Lite format with metadata in the export directory. model.export(export_dir) ``` The following sections explain the code in more detail. Prerequisites To run this example, install the required packages, including the Model Maker package from the GitHub repo. End of explanation import numpy as np import os import tensorflow as tf assert tf.__version__.startswith('2') from tflite_model_maker import configs from tflite_model_maker import ExportFormat from tflite_model_maker import model_spec from tflite_model_maker import question_answer from tflite_model_maker import QuestionAnswerDataLoader Explanation: Import the required packages. End of explanation spec = model_spec.get('mobilebert_qa_squad') Explanation: The "End-to-End Overview" demonstrates a simple end-to-end example. The following sections walk through the example step by step to show more detail. Choose a model_spec that represents a model for question answer Each model_spec object represents a specific model for question answer. The Model Maker currently supports MobileBERT and BERT-Base models. Supported Model | Name of model_spec | Model Description --- | --- | --- MobileBERT | 'mobilebert_qa' | 4.3x smaller and 5.5x faster than BERT-Base while achieving competitive results, suitable for on-device scenario. MobileBERT-SQuAD | 'mobilebert_qa_squad' | Same model architecture as MobileBERT model and the initial model is already retrained on SQuAD1.1. BERT-Base | 'bert_qa' | Standard BERT model that widely used in NLP tasks. In this tutorial, MobileBERT-SQuAD is used as an example. Since the model is already retrained on SQuAD1.1, it could coverage faster for question answer task. End of explanation train_data_path = tf.keras.utils.get_file( fname='triviaqa-web-train-8000.json', origin='https://storage.googleapis.com/download.tensorflow.org/models/tflite/dataset/triviaqa-web-train-8000.json') validation_data_path = tf.keras.utils.get_file( fname='triviaqa-verified-web-dev.json', origin='https://storage.googleapis.com/download.tensorflow.org/models/tflite/dataset/triviaqa-verified-web-dev.json') Explanation: Load Input Data Specific to an On-device ML App and Preprocess the Data The TriviaQA is a reading comprehension dataset containing over 650K question-answer-evidence triples. In this tutorial, you will use a subset of this dataset to learn how to use the Model Maker library. To load the data, convert the TriviaQA dataset to the SQuAD1.1 format by running the converter Python script with --sample_size=8000 and a set of web data. Modify the conversion code a little bit by: * Skipping the samples that couldn't find any answer in the context document; * Getting the original answer in the context without uppercase or lowercase. Download the archived version of the already converted dataset. End of explanation train_data = QuestionAnswerDataLoader.from_squad(train_data_path, spec, is_training=True) validation_data = QuestionAnswerDataLoader.from_squad(validation_data_path, spec, is_training=False) Explanation: You can also train the MobileBERT model with your own dataset. If you are running this notebook on Colab, upload your data by using the left sidebar. 
<img src="https://storage.googleapis.com/download.tensorflow.org/models/tflite/screenshots/model_maker_question_answer.png" alt="Upload File" width="800" hspace="100"> If you prefer not to upload your data to the cloud, you can also run the library offline by following the guide. Use the QuestionAnswerDataLoader.from_squad method to load and preprocess the SQuAD format data according to a specific model_spec. You can use either SQuAD2.0 or SQuAD1.1 formats. Setting parameter version_2_with_negative as True means the formats is SQuAD2.0. Otherwise, the format is SQuAD1.1. By default, version_2_with_negative is False. End of explanation model = question_answer.create(train_data, model_spec=spec) Explanation: Customize the TensorFlow Model Create a custom question answer model based on the loaded data. The create function comprises the following steps: Creates the model for question answer according to model_spec. Train the question answer model. The default epochs and the default batch size are set according to two variables default_training_epochs and default_batch_size in the model_spec object. End of explanation model.summary() Explanation: Have a look at the detailed model structure. End of explanation model.evaluate(validation_data) Explanation: Evaluate the Customized Model Evaluate the model on the validation data and get a dict of metrics including f1 score and exact match etc. Note that metrics are different for SQuAD1.1 and SQuAD2.0. End of explanation config = configs.QuantizationConfig.create_dynamic_range_quantization(optimizations=[tf.lite.Optimize.OPTIMIZE_FOR_LATENCY]) config._experimental_new_quantizer = True Explanation: Export to TensorFlow Lite Model Convert the existing model to TensorFlow Lite model format that you can later use in an on-device ML application. Since MobileBERT is too big for on-device applications, use dynamic range quantization on the model to compress MobileBERT by 4x with the minimal loss of performance. First, define the quantization configuration: End of explanation model.export(export_dir='.', quantization_config=config) Explanation: Export the quantized TFLite model according to the quantization config with metadata. The default TFLite model filename is model.tflite. End of explanation model.export(export_dir='.', export_format=ExportFormat.VOCAB) Explanation: You can use the TensorFlow Lite model file in the bert_qa reference app using BertQuestionAnswerer API in TensorFlow Lite Task Library by downloading it from the left sidebar on Colab. The allowed export formats can be one or a list of the following: ExportFormat.TFLITE ExportFormat.VOCAB ExportFormat.SAVED_MODEL By default, it just exports TensorFlow Lite model with metadata. You can also selectively export different files. For instance, exporting only the vocab file as follows: End of explanation model.evaluate_tflite('model.tflite', validation_data) Explanation: You can also evalute the tflite model with the evaluate_tflite method. This step is expected to take a long time. End of explanation new_spec = model_spec.get('mobilebert_qa') new_spec.seq_len = 512 Explanation: Advanced Usage The create function is the critical part of this library in which the model_spec parameter defines the model specification. The BertQAModelSpec class is currently supported. There are 2 models: MobileBERT model, BERT-Base model. The create function comprises the following steps: Creates the model for question answer according to model_spec. Train the question answer model. 
This section describes several advanced topics, including adjusting the model, tuning the training hyperparameters etc. Adjust the model You can adjust the model infrastructure like parameters seq_len and query_len in the BertQAModelSpec class. Adjustable parameters for model: seq_len: Length of the passage to feed into the model. query_len: Length of the question to feed into the model. doc_stride: The stride when doing a sliding window approach to take chunks of the documents. initializer_range: The stdev of the truncated_normal_initializer for initializing all weight matrices. trainable: Boolean, whether pre-trained layer is trainable. Adjustable parameters for training pipeline: model_dir: The location of the model checkpoint files. If not set, temporary directory will be used. dropout_rate: The rate for dropout. learning_rate: The initial learning rate for Adam. predict_batch_size: Batch size for prediction. tpu: TPU address to connect to. Only used if using tpu. For example, you can train the model with a longer sequence length. If you change the model, you must first construct a new model_spec. End of explanation
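Before wiring the exported file into an app, it can help to confirm the tensor shapes the quantized model expects. This is a generic TensorFlow Lite interpreter check rather than a Model Maker feature, and it assumes model.tflite was written to the current directory by the export step above.

# Inspect the input/output signature of the exported TFLite model
interpreter = tf.lite.Interpreter(model_path='model.tflite')
interpreter.allocate_tensors()
for detail in interpreter.get_input_details():
    print('input :', detail['name'], detail['shape'], detail['dtype'])
for detail in interpreter.get_output_details():
    print('output:', detail['name'], detail['shape'], detail['dtype'])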
5,288
Given the following text description, write Python code to implement the functionality described below step by step Description: Summary report on temperature datasets In this notebook we inspect the temperature datasets along with the station metadata. At the end, a figure showing the locations of the sites on a map is generated. Step1: Constants / Parameters Step2: Read the cleaned temperature data for each site Also load the "gaps" file which was generated for each site by the cleaning temperatures script. We will use this information to identify at what point the data no longer contains any gaps bigger than BIG_GAP_LENGTH Step3: The 'good start' column contains the date after which there are no longer any gaps bigger than BIG_GAP_LENGTH Step4: Read the isd-history.txt metadata file Only take rows where the callsign matches one we are interested in. There will be multiple entries for each callsign. Station data formats and id numbers changed over the years. Also, sometimes stations were moved (small distances). These changes resulted in separate entries with the same callsign. Step5: Take just the entry with the most recent data for each station callsign We will use this most recent entry for the site location. If the station was moved, this location won't be precisely correct for older data, but it should be quite close and good enough for our purposes.
Python Code: # boilerplate includes import sys import os import numpy as np import matplotlib as mpl import matplotlib.pyplot as plt #from mpl_toolkits.mplot3d import Axes3D from mpl_toolkits.basemap import Basemap import matplotlib.patheffects as path_effects import pandas as pd import seaborn as sns import datetime # import scipy.interpolate # import re from IPython.display import display, HTML %matplotlib notebook plt.style.use('seaborn-notebook') pd.set_option('display.max_columns', None) Explanation: Summary report on temperature datasets In this notebook we inspect the temperature datasets along with the station metadata. At the end, a figure showing the locations of the sites on a map is generated. End of explanation STATIONS = [ 'KLAX', 'KSFO', 'KSAN', 'KMIA', 'KFAT', 'KJAX', 'KTPA', 'KRIV', 'KIAH', 'KMCO', 'KBUR', # 'KSNA', # Not good # 'KONT', # Not good, 1973+ might be usable but 2001 has an 86 day and 41 day gap. # 'KHST', # Not good # 'KFUL', # Bad ] TEMPERATURE_DATADIR = '../data/temperatures' # The label of the hourly temperature column TEMP_COL = 'AT' # Gaps in the data this big or longer will be considerd 'breaks'... # so the 'good' continuous timeseries will start at the end of the last big gap BIG_GAP_LENGTH = pd.Timedelta('14 days') # # Time range to use for computing normals (30 year, just like NOAA uses) # NORM_IN_START_DATE = '1986-07-01' # NORM_IN_END_DATE = '2016-07-01' # # Time range or normals to output to use when running 'medfoes on normal temperature' (2 years, avoiding leapyears) # NORM_OUT_START_DATE = '2014-01-01' # NORM_OUT_END_DATE = '2015-12-31 23:59:59' def read_isd_history_stations_list(filename, skiprows=22): Read and parse stations information from isd_history.txt file fwfdef = (( ('USAF', (6, str)), ('WBAN', (5, str)), ('STATION NAME', (28, str)), ('CTRY', (4, str)), ('ST', (2, str)), ('CALL', (5, str)), ('LAT', (7, str)), ('LON', (8, str)), ('EVEV', (7, str)), ('BEGIN', (8, str)), ('END', (8, str)), )) names = [] colspecs = [] converters = {} i = 0 for k,v in fwfdef: names.append(k) colspecs.append((i, i+v[0]+1)) i += v[0]+1 converters[k] = v[1] stdf = pd.read_fwf(filename, skiprows=skiprows, names=names, colspecs=colspecs, converters=converters) return stdf Explanation: Constants / Parameters End of explanation #df = pd.DataFrame(columns=['callsign', 'data start', 'data end', 'good start', 'gaps after good start'], tmp = [] for callsign in STATIONS: # load the temperature dataset fn = "{}_AT_cleaned.h5".format(callsign) ot = pd.read_hdf(os.path.join(TEMPERATURE_DATADIR,fn), 'table') # load the gaps information gaps = pd.read_csv(os.path.join(TEMPERATURE_DATADIR,"{}_AT_gaps.tsv".format(callsign)), sep='\t', comment='#', names=['begin','end','length'], parse_dates=[0,1]) # convert length to an actual timedelta gaps['length'] = pd.to_timedelta(gaps['length']) # make sure begin and end have the right timezone association (UTC) gaps['begin'] = gaps['begin'].apply(lambda x: x.tz_localize('UTC')) gaps['end'] = gaps['end'].apply(lambda x: x.tz_localize('UTC')) big_gaps = gaps[gaps['length'] >= BIG_GAP_LENGTH] end_of_last_big_gap = big_gaps['end'].max() if pd.isnull(end_of_last_big_gap): # No big gaps... 
so use start of data end_of_last_big_gap = ot.index[0] num_gaps_after_last_big_gap = gaps[gaps['end'] > end_of_last_big_gap].shape[0] print(callsign, ot.index[0], ot.index[-1], end_of_last_big_gap, num_gaps_after_last_big_gap, sep='\t') tmp.append([ callsign, ot.index[0], ot.index[-1], end_of_last_big_gap, num_gaps_after_last_big_gap, ]) #display(big_gaps) df = pd.DataFrame(tmp, columns=['callsign', 'data start', 'data end', 'good start', 'gaps after good start']).set_index('callsign') Explanation: Read the cleaned temperature data for each site Also load the "gaps" file which was generated for each site by the cleaning temperatures script. We will use this information to identify at what point the data no longer contains any gaps bigger than BIG_GAP_LENGTH End of explanation df['good start'] Explanation: The 'good start' column contains the date after which there are no longer any gaps bigger than BIG_GAP_LENGTH End of explanation historydf = read_isd_history_stations_list( os.path.join(TEMPERATURE_DATADIR,'ISD/isd-history.txt')) df.join(historydf.set_index('CALL')) Explanation: Read the isd-history.txt metadata file Only take rows where the callsign matches one we are interested in. There will be multiple entries for each callsign. Station data formats and id numbers changed over the years. Also, sometimes stations were moved (small distances). These changes resulted in separate entries with the same callsign. End of explanation sthistdf = historydf.set_index('CALL').loc[df.index] sthistdf = sthistdf.reset_index().sort_values(['CALL','END'], ascending=[True,False]).set_index('CALL') foo = sthistdf[~sthistdf.index.duplicated('first')] foo = foo.join(df) foo.drop('END',1, inplace=True) foo.drop('BEGIN',1, inplace=True) foo = foo.reindex(STATIONS) foo # Save the summary table foo.to_csv('stations_summary.csv') tmp = foo[['STATION NAME','ST','LAT','LON', 'EVEV','good start','data end']].sort_values(['ST','LAT'], ascending=[True,False]) display(tmp) Explanation: Take just the entry with the most recent data for each station callsign We will use this most recent entry for the site location. If the station was moved, this is location won't be precisely correct for older data, but is should be quite close and good enough for our purporses. End of explanation
5,289
Given the following text description, write Python code to implement the functionality described below step by step Description: 處理旅程資訊 先照之前的,讀取資料 Step1: 時間的格式固定 Step2: 先用慢動作來解析看看格式 Step3: Q 把上面改成 tqdm.tqdm.pandas(tqdm.tqdm_notebook)? 偵測站 手冊附錄 https Step4: Q 查看一下內容,比方看國道五號 python node_data[node_data['編號'].str.startswith('05')] 畫圖看看 Step5: Q 試試看其他劃法,比方依照方向設定顏色 python colors = node_data.方向.apply({'S'
Python Code: import tqdm import tarfile import pandas from urllib.request import urlopen # 檔案名稱格式 filename_format="M06A_{year:04d}{month:02d}{day:02d}.tar.gz".format xz_filename_format="xz/M06A_{year:04d}{month:02d}{day:02d}.tar.xz".format csv_format = "M06A/{year:04d}{month:02d}{day:02d}/{hour:02d}/TDCS_M06A_{year:04d}{month:02d}{day:02d}_{hour:02d}0000.csv".format # 打開剛才下載的檔案試試 data_config ={"year":2016, "month":12, "day":18} tar = tarfile.open(filename_format(**data_config), 'r') # 如果沒有下載,可以試試看 xz 檔案 #data_dconfig ={"year":2016, "month":11, "day":18} #tar = tarfile.open(xz_filename_format(**data_config), 'r') # 設定欄位名稱 M06A_fields = ['VehicleType', 'DetectionTime_O','GantryID_O', 'DetectionTime_D','GantryID_D ', 'TripLength', 'TripEnd', 'TripInformation'] # 打開裡面 10 點鐘的資料 csv = tar.extractfile(csv_format(hour=10, **data_config)) # 讀進資料 data = pandas.read_csv(csv, names=M06A_fields) # 檢查異常的資料 print("異常資料數:", data[data.TripEnd == 'N'].shape[0]) # 去除異常資料 data = data[data.TripEnd == 'Y'] # 只保留 TripInformation 和 VehicleType data = data[['VehicleType', "TripInformation"]] # 看前五筆 data.head(5) Explanation: 處理旅程資訊 先照之前的,讀取資料 End of explanation import datetime # 用來解析時間格式 def strptime(x): return datetime.datetime.strptime(x, "%Y-%m-%d %H:%M:%S") Explanation: 時間的格式固定 End of explanation data.iloc[0].TripInformation # 切開鎖鏈 _.split("; ") # 切開加鎖 [x.split('+') for x in _] [(strptime(t), node) for t,node in _] # 合在一起看看 for idx, row in data.head(10).iterrows(): trip = row.TripInformation.split("; ") trip = (x.split('+') for x in trip) trip = [(strptime(t), node) for t,node in trip] print(trip) def parse_tripinfo(tripinfo): split1 = tripinfo.split("; ") split2 = (x.split('+') for x in split1) return [(strptime(t), node) for t,node in split2] data.head(10).TripInformation.apply(parse_tripinfo) # progress bar tqdm.tqdm.pandas() # 新增一欄 data['Trip'] = data.TripInformation.progress_apply(parse_tripinfo) Explanation: 先用慢動作來解析看看格式 End of explanation node_data_url = "http://www.freeway.gov.tw/Upload/DownloadFiles/%e5%9c%8b%e9%81%93%e8%a8%88%e8%b2%bb%e9%96%80%e6%9e%b6%e5%ba%a7%e6%a8%99%e5%8f%8a%e9%87%8c%e7%a8%8b%e7%89%8c%e5%83%b9%e8%a1%a8104.09.04%e7%89%88.csv" node_data = pandas.read_csv(urlopen(node_data_url), encoding='big5', header=1) # 簡單清理資料 node_data = node_data[node_data["方向"].apply(lambda x:x in 'NS')] node_data.head(10) Explanation: Q 把上面改成 tqdm.tqdm.pandas(tqdm.tqdm_notebook)? 
偵測站 手冊附錄 https://zh.wikipedia.org/wiki/%E9%AB%98%E9%80%9F%E5%85%AC%E8%B7%AF%E9%9B%BB%E5%AD%90%E6%94%B6%E8%B2%BB%E7%B3%BB%E7%B5%B1_(%E8%87%BA%E7%81%A3)#.E6.94.B6.E8.B2.BB.E9.96.80.E6.9E.B6 交流道服務區里程 http://www.freeway.gov.tw/Publish.aspx?cnid=1906 門架資訊 https://www.freeway.gov.tw/Upload/DownloadFiles/%e5%9c%8b%e9%81%93%e8%a8%88%e8%b2%bb%e9%96%80%e6%9e%b6%e5%ba%a7%e6%a8%99%e5%8f%8a%e9%87%8c%e7%a8%8b%e7%89%8c%e5%83%b9%e8%a1%a8104.09.04%e7%89%88.csv End of explanation %matplotlib inline node_data['經度(東經)'] = node_data['經度(東經)'].astype(float) node_data['緯度(北緯)'] = node_data['緯度(北緯)'].astype(float) node_data.plot.scatter(x='經度(東經)', y='緯度(北緯)') from PIL import Image import numpy as np import matplotlib.pyplot as plt # 網路上的台灣地圖,有經緯度 taiwan_img_url="http://gallery.mjes.ntpc.edu.tw/gallery2/main.php?g2_view=core.DownloadItem&g2_itemId=408&g2_serialNumber=1" taiwan_img = Image.open(urlopen(taiwan_img_url)) taiwan_img # 查看編號的前置碼 set(node_data['編號'].str[:3].tolist()) # 依照路線編號 cfunc = {'01F':"green", '01H':"blue", '03A':"yellow", '03F':"red", '05F':"purple"}.get colors = node_data['編號'].str[:3].apply(cfunc) fig = plt.gcf() fig.set_size_inches(8,8) extent=[118.75,123.05,21.45,25.75] plt.xlim(*extent[:2]) plt.ylim(*extent[2:]) plt.scatter(node_data['經度(東經)'], node_data['緯度(北緯)'], c=colors, alpha=1) plt.imshow(np.array(taiwan_img), extent=extent); Explanation: Q 查看一下內容,比方看國道五號 python node_data[node_data['編號'].str.startswith('05')] 畫圖看看 End of explanation node_data[node_data.編號=="03F-318.7S"] node_data[node_data.編號=="03F-321.1S"] Explanation: Q 試試看其他劃法,比方依照方向設定顏色 python colors = node_data.方向.apply({'S':'red', 'N':'blue'}.get).tolist() 或只畫國道一號、改變 mark。 End of explanation
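Following up on the closing suggestions (color the gantries by direction, or plot a single freeway), here is one possible sketch that combines both ideas. It only relies on the node_data frame and the column names used above (編號, 方向, 經度(東經), 緯度(北緯)); the figure size and marker are arbitrary choices.

import matplotlib.pyplot as plt

# keep only National Freeway No. 1 gantries (IDs starting with '01F') and color by direction
nh1 = node_data[node_data['編號'].str.startswith('01F')]
colors = nh1['方向'].apply({'S': 'red', 'N': 'blue'}.get)

plt.figure(figsize=(6, 8))
plt.scatter(nh1['經度(東經)'], nh1['緯度(北緯)'], c=colors.tolist(), marker='^', s=20)
plt.title('National Freeway No. 1 gantries (S=red, N=blue)')
plt.show()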
5,290
Given the following text description, write Python code to implement the functionality described below step by step Description: Introduction to Linear Regression Adapted from Chapter 3 of An Introduction to Statistical Learning Predictive modeling, using a data samples to make predictions about unobserved or future events, is a common data analytics task. Predictive modeling is considered to be a form of machine learning. Linear regression is a technique for predicting a response/dependent variable based on one or more explanatory/independent variables, or features. The term "linear" refers to the fact that the method models data as a linear combination of explanatory variables. Linear regression, in its simplest form, fits a straight line to the response variable data so that the line minimizes the squared differences (also called errors or residuals) between the actual obbserved response and the predicted point on the line. Since linear regression fits the observed data with a line, it is most effective when the response and the explanatory variable do have a linear relationship. Motivating Example Step1: The features? - TV Step2: There are 200 observations, corresponding to 200 markets. We can try to discover if there is any relationship between the money spend on a specific type of ad, in a given market, and the sales in that market by plotting the sales figures against each category of advertising expenditure. Step3: Questions How can the company selling the product decide on how to spend its advertising money in the future? We first need to answer the following question Step4: Interpreting Model Coefficients Q Step5: The predicted Sales in that market are of 9.409444 * 1000 =~ 9409 widgets Using Statsmodels Step6: Plotting the Least Squares Line Let's make predictions for the smallest and largest observed values of money spent on TV ads, and then use the predicted values to plot the least squares line Step7: Confidence in Linear Regression Models Q Step8: Since we only have a single sample of data, and not the entire population the "true" value of the regression coefficient is either within this interval or it isn't, but there is no way to actually know. We estimate the regression coefficient using the data we have, and then we characterize the uncertainty about that estimate by giving a confidence interval, an interval that will "probably" contain the value coefficient. Note that there is no probability associated with the true value of the regression coefficient being in the given confidence interval! Also note that using 95% confidence intervals is simply a convention. One can create 90% confidence intervals (narrower intervals), 99% confidence intervals (wider intervals), etc. Hypothesis Testing and p-values Closely related to confidence intervals is hypothesis testing. Generally speaking, you start with a null hypothesis and an alternative hypothesis (that is opposite the null). Then, you check whether the data supports rejecting the null hypothesis or failing to reject the null hypothesis. (Note that "failing to reject" the null is not the same as "accepting" the null hypothesis. The alternative hypothesis may indeed be true, except that you just don't have enough data to show that.) As it relates to model coefficients, here is the conventional hypothesis test Step9: If the 95% confidence interval includes zero, the p-value for that coefficient will be greater than 0.05. If the 95% confidence interval does not include zero, the p-value will be less than 0.05. 
Thus, a p-value less than 0.05 is one way to decide whether there is likely a relationship between the feature and the response. (Again, using 0.05 as the cutoff is just a convention.) In this case, the p-value for TV is far less than 0.05, and so we believe that there is a relationship between TV ads and Sales. Note that we generally ignore the p-value for the intercept. How Well Does the Model Fit the data? The most common way to evaluate the overall fit of a linear model to the available data is by calculating the R-squared (a.k.a, "coefficient of determination") value. R-squared has several interpretations Step10: Is that a "good" R-squared value? One cannot generally assess that. What a "good" R-squared value is depends on the domain and therefore R-squared is most useful as a tool for comparing different models. Multiple Linear Regression Simple linear regression can be extended to include multiple explanatory variables Step11: How do we interpret the coefficients? For a given amount of Radio and Newspaper ad spending, an increase of a unit ($1000 dollars) in TV ad spending is associated with an increase in Sales of 45.765 widgets. Other information is available in the model summary output Step12: TV and Radio have significant p-values, whereas Newspaper does not. Thus we reject the null hypothesis for TV and Radio (that there is no association between those features and Sales), and fail to reject the null hypothesis for Newspaper. TV and Radio ad spending are both positively associated with Sales, whereas Newspaper ad spending is slightly negatively associated with Sales. This model has a higher R-squared (0.897) than the previous model, which means that this model provides a better fit to the data than a model that only includes TV. Feature Selection How do I decide which features to include in a linear model? - Try different models and check whether the R-squared value goes up when you add new predictors. What are the drawbacks to this approach? - Linear models rely upon a lot of assumptions (such as the predictors/features being independent), and if those assumptions are violated (which they usually are), R-squared are less reliable. - R-squared is susceptible to overfitting, and thus there is no guarantee that a model with a high R-squared value will generalize well to new data. For example Step13: R-squared will always increase as you add more features to the model, even if they are unrelated to the response. Thus, selecting the model with the highest R-squared is not a reliable approach for choosing the best linear model. There is alternative to R-squared called adjusted R-squared that penalizes model complexity (to control for overfitting), but this approach has its own set of issues. Is there a better approach to feature selection? Cross-validation, which provides a more reliable estimate of out-of-sample error, and thus is better at choosing which model will better generalize to out-of-sample data. Cross-validation can be applied to any type of model, not just linear models. Linear Regression in scikit-learn The work done using Statsmodels can also be using scikit-learn Step14: Handling Categorical Predictors with Two Categories What if one of the predictors was categorical? Let's create a new feature called Size, and randomly assign observations to be small or large Step15: When using scikit-learn, we need to represent all data numerically. 
For example, if the feature we want to represent has only two categories, we create a dummy variable that represents the categories as a binary value Step16: The multiple linear regression including the IsLarge predictor Step17: How do we interpret the coefficient of IsLarge? For a given amount of TV/Radio/Newspaper ad spending, a large market is associated with an average increase in Sales of 51.55 widgets (as compared to sales in a Small market). If we reverse the 0/1 encoding and created the feature 'IsSmall', the coefficient would be the same in absolute value, but negative instead of positive. All that changes is the interpretation of the coefficient. Handling Categorical Predictors with More than Two Categories Let's create a new feature called Area, and randomly assign observations to be rural, suburban, or urban Step18: We have to represent Area numerically, but an encoding such as 0=rural, 1=suburban, 2=urban would not work because that would imply that there is an ordered relationship between suburban and urban. Instead, we can create another dummy variable. Step19: rural is coded as Area_suburban=0 and Area_urban=0 suburban is coded as Area_suburban=1 and Area_urban=0 urban is coded as Area_suburban=0 and Area_urban=1 Only two dummies are needed to captures all of the information about the Area feature.(In general, for a categorical feature with k levels, we create k-1 dummy variables.) Let's include the two new dummy variables in the model
Python Code: # imports import pandas as pd import matplotlib.pyplot as plt %matplotlib inline # read data into a DataFrame data = pd.read_csv('http://www-bcf.usc.edu/~gareth/ISL/Advertising.csv', index_col=0) data.head() Explanation: Introduction to Linear Regression Adapted from Chapter 3 of An Introduction to Statistical Learning Predictive modeling, using a data samples to make predictions about unobserved or future events, is a common data analytics task. Predictive modeling is considered to be a form of machine learning. Linear regression is a technique for predicting a response/dependent variable based on one or more explanatory/independent variables, or features. The term "linear" refers to the fact that the method models data as a linear combination of explanatory variables. Linear regression, in its simplest form, fits a straight line to the response variable data so that the line minimizes the squared differences (also called errors or residuals) between the actual obbserved response and the predicted point on the line. Since linear regression fits the observed data with a line, it is most effective when the response and the explanatory variable do have a linear relationship. Motivating Example: Advertising Data Let us look at data depicting the money(in thousands of dollars) spent on TV, Radio and newspaper ads for a product in a given market, as well as the corresponding sales figures. End of explanation # print the size of the DataFrame object, i.e., the size of the dataset data.shape Explanation: The features? - TV: advertising dollars spent on TV (for a single product, in a given market) - Radio: advertising dollars spent on Radio (for a single product, in a given market) - Newspaper: advertising dollars spent on Newspaper (for a single product, in a given market) What is the response? - Sales: sales of a single product in a given market (in thousands of widgets) End of explanation fig, axs = plt.subplots(1, 3, sharey=True) data.plot(kind='scatter', x='TV', y='Sales', ax=axs[0], figsize=(16, 8)) data.plot(kind='scatter', x='Radio', y='Sales', ax=axs[1]) data.plot(kind='scatter', x='Newspaper', y='Sales', ax=axs[2]) Explanation: There are 200 observations, corresponding to 200 markets. We can try to discover if there is any relationship between the money spend on a specific type of ad, in a given market, and the sales in that market by plotting the sales figures against each category of advertising expenditure. End of explanation import statsmodels.formula.api as sf #create a model with Sales as dependent variable and TV as explanatory variable model = sf.ols('Sales ~ TV', data) #fit the model to the data fitted_model = model.fit() # print the coefficients print(fitted_model.params) Explanation: Questions How can the company selling the product decide on how to spend its advertising money in the future? We first need to answer the following question: "Based on this data, does there apear to be a relationship between ads and sales?" If yes, 1. Which ad types contribute to sales? 2. How strong is the relationship between each ad type and sales? 4. What is the effect of each ad type of sales? 5. Given ad spending in a particular market, can sales be predicted? We will use Linear Regression to try and asnwer these questions. Simple Linear Regression Simple linear regression is an approach for modeling the relatrionship between a dependent variable (a "response") and an explanatory variable, also known as a "predictor" or "feature". 
The relationship is modeled as a linear function $y = \beta_0 + \beta_1x$ whose parameters are estimated from the available data. In the equation above: - $y$ is called the response, regressand, endogenous variable, dependent variable, etc. - $x$ is the feature, regressor, exogenous variable, explanatory variables, predictor, etc. - $\beta_0$ is known as the intercept - $\beta_1$ is the regression coefficient, effect, etc. Together, $\beta_0$ and $\beta_1$ are called paramaters, model/regression coefficients, or effects. To create a model, we must discover/learn/estimate the values of these coefficients. Estimating/Learning Model/Regression Coefficients Regression coefficients are estimated using a variety of methods. The least squares method, which finds the line which minimizes the sum of squared residuals (or "sum of squared errors") is among the most oftenly used. In the pictures below: - The blue dots are the observed values of x and y. - The red line is the least squares line. - The residuals are the distances between the observed values and the least squares line. $\beta_0$ is the intercept of the least squares line (the value of $y$ when $x$=0) $\beta_1$ is the slope of the least squares line, i.e. the ratio of the vertical change (in $y$) and the horizontal change (in $x$). We can use the statsmodels package to estimate the model coefficients for the advertising data: End of explanation 7.032594 + 0.047537*50 Explanation: Interpreting Model Coefficients Q: How do we interpret the coefficient ($\beta_1$) of the explanatory variable "TV"? A: A unit (a thousand dollars) increase in TV ad spending is associated with a 0.047537 unit (a thousand widgets) increase in Sales, i.e., an additional $1000 spent on TV ads is associated with an increase in sales of ~47.5 widgets. Note that it is, in general, possible to have a negative effect, e.g., an increase in TV ad spending to be associated with a decrease in sales. $\beta_1$ would be negative in this case. Using the Model for Prediction Can we use the model we develop to guide advertising spending decisions? For example, if the company spends $50,000 on TV advertising in a new market, what would the model predict for the sales in that market? 
$$y = \beta_0 + \beta_1x$$ $$y = 7.032594 + 0.047537 \times 50$$ End of explanation # create a DataFrame to use with the Statsmodels formula interface New_TV_spending = pd.DataFrame({'TV': [50]}) #check the newly created DataFrame New_TV_spending.head() # use the model created above to predict the sales to be generated by the new TV ad money sales = fitted_model.predict(New_TV_spending) print(sales) Explanation: The predicted Sales in that market are of 9.409444 * 1000 =~ 9409 widgets Using Statsmodels: End of explanation # create a DataFrame with the minimum and maximum values of TV ad money New_TV_money = pd.DataFrame({'TV': [data.TV.min(), data.TV.max()]}) print(New_TV_money.head()) # make predictions for those x values and store them sales_predictions = fitted_model.predict(New_TV_money) print(sales_predictions) # plot the observed data data.plot(kind='scatter', x='TV', y='Sales') # plot the least squares line plt.plot(New_TV_money, sales_predictions, c='red', linewidth=2) Explanation: Plotting the Least Squares Line Let's make predictions for the smallest and largest observed values of money spent on TV ads, and then use the predicted values to plot the least squares line: End of explanation # print the confidence intervals for the model coefficients print(fitted_model.conf_int()) Explanation: Confidence in Linear Regression Models Q: Is linear regression a high bias/low variance model, or a low variance/high bias model? A: High bias/low variance. Under repeated sampling, the line will stay roughly in the same place (low variance), but the average of those models won't do a great job capturing the true relationship (high bias). (A low variance is a useful characteristic when limited training data is available.) We can use Statsmodels to calculate 95% confidence intervals for the model coefficients, which are interpreted as follows: If the population from which this sample was drawn was sampled 100 times, approximately 95 of those confidence intervals would contain the "true" coefficient. End of explanation # print the p-values for the model coefficients fitted_model.pvalues Explanation: Since we only have a single sample of data, and not the entire population the "true" value of the regression coefficient is either within this interval or it isn't, but there is no way to actually know. We estimate the regression coefficient using the data we have, and then we characterize the uncertainty about that estimate by giving a confidence interval, an interval that will "probably" contain the value coefficient. Note that there is no probability associated with the true value of the regression coefficient being in the given confidence interval! Also note that using 95% confidence intervals is simply a convention. One can create 90% confidence intervals (narrower intervals), 99% confidence intervals (wider intervals), etc. Hypothesis Testing and p-values Closely related to confidence intervals is hypothesis testing. Generally speaking, you start with a null hypothesis and an alternative hypothesis (that is opposite the null). Then, you check whether the data supports rejecting the null hypothesis or failing to reject the null hypothesis. (Note that "failing to reject" the null is not the same as "accepting" the null hypothesis. The alternative hypothesis may indeed be true, except that you just don't have enough data to show that.) 
As it relates to model coefficients, here is the conventional hypothesis test: - null hypothesis: There is no relationship between TV ads and Sales (and thus $\beta_1$ equals zero) - alternative hypothesis: There is a relationship between TV ads and Sales (and thus $\beta_1$ is not equal to zero) How do we test this hypothesis? Intuitively, we reject the null (and thus believe the alternative) if the 95% confidence interval does not include zero. Conversely, the p-value represents the probability that the coefficient is actually zero: End of explanation # print the R-squared value for the model fitted_model.rsquared Explanation: If the 95% confidence interval includes zero, the p-value for that coefficient will be greater than 0.05. If the 95% confidence interval does not include zero, the p-value will be less than 0.05. Thus, a p-value less than 0.05 is one way to decide whether there is likely a relationship between the feature and the response. (Again, using 0.05 as the cutoff is just a convention.) In this case, the p-value for TV is far less than 0.05, and so we believe that there is a relationship between TV ads and Sales. Note that we generally ignore the p-value for the intercept. How Well Does the Model Fit the data? The most common way to evaluate the overall fit of a linear model to the available data is by calculating the R-squared (a.k.a, "coefficient of determination") value. R-squared has several interpretations: (1) R-squared ×100 percent of the variation in the dependent variable ($y$) is reduced by taking into account predictor $x$ (2) R-squared is the proportion of variance in the observed data that is "explained" by the model. R-squared is between 0 and 1, and, generally speaking, higher is considered to be better because more variance is accounted for ("explained") by the model. Note, however, that R-squared does not indicate whether a regression model is actually good. You can have a low R-squared value for a good model, or a high R-squared value for a model that does not fit the data! One should evaluate the adequacy of a model by looking at R-squared values as well as residual (i.e., observed value - fitted value) plots, other model statistics, and subject area knowledge. The R-squared value for our simple linear regression model is: End of explanation # create a model with all three features multi_model = sf.ols(formula='Sales ~ TV + Radio + Newspaper', data=data) fitted_multi_model = multi_model.fit() # print the coefficients print(fitted_multi_model.params) Explanation: Is that a "good" R-squared value? One cannot generally assess that. What a "good" R-squared value is depends on the domain and therefore R-squared is most useful as a tool for comparing different models. Multiple Linear Regression Simple linear regression can be extended to include multiple explanatory variables: $y = \beta_0 + \beta_1x_1 + ... + \beta_nx_n$ Each $x$ represents a different predictor/feature, and each predictor has its own coefficient. In our case: $y = \beta_0 + \beta_1 \times TV + \beta_2 \times Radio + \beta_3 \times Newspaper$ Let's use Statsmodels to estimate these coefficients: End of explanation # print a summary of the fitted model fitted_multi_model.summary() Explanation: How do we interpret the coefficients? For a given amount of Radio and Newspaper ad spending, an increase of a unit ($1000 dollars) in TV ad spending is associated with an increase in Sales of 45.765 widgets. 
Other information is available in the model summary output: End of explanation # only include TV and Radio in the model model1 = sf.ols(formula='Sales ~ TV + Radio', data=data).fit() print(model1.rsquared) # add Newspaper to the model (which we believe has no association with Sales) model2 = sf.ols(formula='Sales ~ TV + Radio + Newspaper', data=data).fit() print(model2.rsquared) Explanation: TV and Radio have significant p-values, whereas Newspaper does not. Thus we reject the null hypothesis for TV and Radio (that there is no association between those features and Sales), and fail to reject the null hypothesis for Newspaper. TV and Radio ad spending are both positively associated with Sales, whereas Newspaper ad spending is slightly negatively associated with Sales. This model has a higher R-squared (0.897) than the previous model, which means that this model provides a better fit to the data than a model that only includes TV. Feature Selection How do I decide which features to include in a linear model? - Try different models and check whether the R-squared value goes up when you add new predictors. What are the drawbacks to this approach? - Linear models rely upon a lot of assumptions (such as the predictors/features being independent), and if those assumptions are violated (which they usually are), R-squared are less reliable. - R-squared is susceptible to overfitting, and thus there is no guarantee that a model with a high R-squared value will generalize well to new data. For example: End of explanation # create a DataFrame feature_cols = ['TV', 'Radio', 'Newspaper'] X = data[feature_cols] y = data.Sales from sklearn.linear_model import LinearRegression lm = LinearRegression() lm.fit(X, y) # print intercept and coefficients print(lm.intercept_) print(lm.coef_) # pair the feature names with the coefficients print(zip(feature_cols, lm.coef_)) # predict for a new observation lm.predict([[100, 25, 25]]) # calculate the R-squared lm.score(X, y) Explanation: R-squared will always increase as you add more features to the model, even if they are unrelated to the response. Thus, selecting the model with the highest R-squared is not a reliable approach for choosing the best linear model. There is alternative to R-squared called adjusted R-squared that penalizes model complexity (to control for overfitting), but this approach has its own set of issues. Is there a better approach to feature selection? Cross-validation, which provides a more reliable estimate of out-of-sample error, and thus is better at choosing which model will better generalize to out-of-sample data. Cross-validation can be applied to any type of model, not just linear models. Linear Regression in scikit-learn The work done using Statsmodels can also be using scikit-learn: End of explanation import numpy as np # create a Series of booleans in which roughly half are True #generate len(data) numbers between 0 and 1 numbers = np.random.rand(len(data)) #create and index of 0s and 1s by based on whether the corresponding random number #is greater than 0.5. index_for_large = (numbers > 0.5) #create a new data column called Size and set its values to 'small' data['Size'] = 'small' # change the values of Size to 'large' whenever the corresponding value of the index is 1 data.loc[index_for_large, 'Size'] = 'large' data.head() Explanation: Handling Categorical Predictors with Two Categories What if one of the predictors was categorical? 
Let's create a new feature called Size, and randomly assign observations to be small or large: End of explanation # create a new Series called IsLarge data['IsLarge'] = data.Size.map({'small':0, 'large':1}) data.head() Explanation: When using scikit-learn, we need to represent all data numerically. For example, if the feature we want to represent has only two categories, we create a dummy variable that represents the categories as a binary value: End of explanation # create X and y feature_cols = ['TV', 'Radio', 'Newspaper', 'IsLarge'] X = data[feature_cols] y = data.Sales # instantiate, fit lm = LinearRegression() lm.fit(X, y) # print coefficients list(zip(feature_cols, lm.coef_)) Explanation: The multiple linear regression including the IsLarge predictor: End of explanation # set a seed for reproducibility np.random.seed(123456) # assign roughly one third of observations to each group nums = np.random.rand(len(data)) mask_suburban = (nums > 0.33) & (nums < 0.66) mask_urban = nums > 0.66 data['Area'] = 'rural' data.loc[mask_suburban, 'Area'] = 'suburban' data.loc[mask_urban, 'Area'] = 'urban' data.head() Explanation: How do we interpret the coefficient of IsLarge? For a given amount of TV/Radio/Newspaper ad spending, a large market is associated with an average increase in Sales of 51.55 widgets (as compared to sales in a Small market). If we reverse the 0/1 encoding and created the feature 'IsSmall', the coefficient would be the same in absolute value, but negative instead of positive. All that changes is the interpretation of the coefficient. Handling Categorical Predictors with More than Two Categories Let's create a new feature called Area, and randomly assign observations to be rural, suburban, or urban: End of explanation # create three dummy variables using get_dummies, then exclude the first dummy column area_dummies = pd.get_dummies(data.Area, prefix='Area').iloc[:, 1:] # concatenate the dummy variable columns onto the original DataFrame (axis=0 means rows, axis=1 means columns) data = pd.concat([data, area_dummies], axis=1) data.head() Explanation: We have to represent Area numerically, but an encoding such as 0=rural, 1=suburban, 2=urban would not work because that would imply that there is an ordered relationship between suburban and urban. Instead, we can create another dummy variable. End of explanation # read data into a DataFrame #data = pd.read_csv('http://www-bcf.usc.edu/~gareth/ISL/Advertising.csv', index_col=0) # create X and y feature_cols = ['TV', 'Radio', 'Newspaper', 'IsLarge', 'Area_suburban', 'Area_urban'] X = data[feature_cols] y = data.Sales # instantiate, fit lm = LinearRegression() lm.fit(X, y) # print coefficients list(zip(feature_cols, lm.coef_)) Explanation: rural is coded as Area_suburban=0 and Area_urban=0 suburban is coded as Area_suburban=1 and Area_urban=0 urban is coded as Area_suburban=0 and Area_urban=1 Only two dummies are needed to captures all of the information about the Area feature.(In general, for a categorical feature with k levels, we create k-1 dummy variables.) Let's include the two new dummy variables in the model: End of explanation
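The discussion above recommends cross-validation for feature selection but never demonstrates it. A minimal sketch of that idea follows; it assumes a recent scikit-learn and reuses the data frame loaded earlier, comparing 10-fold RMSE with and without the Newspaper feature.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

def rmse_cv(feature_cols):
    # 10-fold cross-validated root mean squared error for a linear model on these features
    X = data[feature_cols]
    y = data.Sales
    scores = cross_val_score(LinearRegression(), X, y, cv=10,
                             scoring='neg_mean_squared_error')
    return np.sqrt(-scores).mean()

print(rmse_cv(['TV', 'Radio', 'Newspaper']))
print(rmse_cv(['TV', 'Radio']))  # typically a slightly lower error, so Newspaper can be dropped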
5,291
Given the following text description, write Python code to implement the functionality described below step by step Description: Ch 10 Step1: Split the timeseries dataset into two components. The first section will be for training, and the next section will be for testing. Step2: Download some CSV timeseries data. Like the one here https
Python Code: %matplotlib inline import csv import numpy as np import matplotlib.pyplot as plt def load_series(filename, series_idx=1): try: with open(filename) as csvfile: csvreader = csv.reader(csvfile) data = [float(row[series_idx]) for row in csvreader if len(row) > 0] normalized_data = (data - np.mean(data)) / np.std(data) return normalized_data except IOError: return None Explanation: Ch 10: Concept 01 Processing timeseries data Load a CSV file, where each row is a feature vector: End of explanation def split_data(data, percent_train=0.80): num_rows = len(data) train_data, test_data = [], [] for idx, row in enumerate(data): if idx < num_rows * percent_train: train_data.append(row) else: test_data.append(row) return train_data, test_data Explanation: Split the timeseries dataset into two components. The first section will be for training, and the next section will be for testing. End of explanation if __name__=='__main__': # https://datamarket.com/data/set/22u3/international-airline-passengers-monthly-totals-in-thousands-jan-49-dec-60#!ds=22u3&display=line timeseries = load_series('international-airline-passengers.csv') print(np.shape(timeseries)) plt.figure() plt.plot(timeseries) plt.show() Explanation: Download some CSV timeseries data. Like the one here https://datamarket.com/data/set/22u3/international-airline-passengers-monthly-totals-in-thousands-jan-49-dec-60#!ds=22u3&display=line. End of explanation
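split_data is defined above but never exercised in the snippet, so here is a short sketch of how the two helpers would typically be used together (same CSV file and functions as above; the plot styling is arbitrary):

timeseries = load_series('international-airline-passengers.csv')
if timeseries is not None:
    train_data, test_data = split_data(timeseries, percent_train=0.80)
    print(len(train_data), len(test_data))

    # visualize where the train/test boundary falls in the series
    plt.figure()
    plt.plot(range(len(train_data)), train_data, label='train (80%)')
    plt.plot(range(len(train_data), len(train_data) + len(test_data)), test_data, label='test (20%)')
    plt.legend()
    plt.show()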
5,292
Given the following text description, write Python code to implement the functionality described below step by step Description: Optimization Exercise 1 Imports Step1: Hat potential The following potential is often used in Physics and other fields to describe symmetry breaking and is often known as the "hat potential" Step2: Plot this function over the range $x\in\left[-3,3\right]$ with $b=1.0$ and $a=5.0$ Step3: Write code that finds the two local minima of this function for $b=1.0$ and $a=5.0$. Use scipy.optimize.minimize to find the minima. You will have to think carefully about how to get this function to find both minima. Print the x values of the minima. Plot the function as a blue line. On the same axes, show the minima as red circles. Customize your visualization to make it beautiful and effective.
Python Code: %matplotlib inline import matplotlib.pyplot as plt import numpy as np import scipy.optimize as opt Explanation: Optimization Exercise 1 Imports End of explanation # YOUR CODE HERE def hat(x,a,b): v=-1*a*x**2+b*x**4 return v assert hat(0.0, 1.0, 1.0)==0.0 assert hat(0.0, 1.0, 1.0)==0.0 assert hat(1.0, 10.0, 1.0)==-9.0 Explanation: Hat potential The following potential is often used in Physics and other fields to describe symmetry breaking and is often known as the "hat potential": $$ V(x) = -a x^2 + b x^4 $$ Write a function hat(x,a,b) that returns the value of this function: End of explanation x=np.linspace(-3,3) b=1.0 a=5.0 plt.plot(x,hat(x,a,b)) # YOUR CODE HERE x0=-2 a = 5.0 b = 1.0 y=opt.minimize(hat,x0,(a,b)) y.x assert True # leave this to grade the plot Explanation: Plot this function over the range $x\in\left[-3,3\right]$ with $b=1.0$ and $a=5.0$: End of explanation # YOUR CODE HERE x0=-2 a = 5.0 b = 1.0 i=0 y.x mini=[] x=np.linspace(-3,3) for i in x: y=opt.minimize(hat,i,(a,b)) z=int(y.x *100000) if np.any(mini[:] == z): i=i+1 else: mini=np.append(mini,z) mini=mini/100000 mini plt.plot(x,hat(x,a,b),label="Hat Function") plt.plot(mini[0],hat(mini[0],a,b),'ro',label="Minima") plt.plot(mini[1],hat(mini[1],a,b),'ro') plt.xlabel=("X-Axis") plt.ylabel=("Y-Axis") plt.title("Graph of Function and its Local Minima") plt.legend() assert True # leave this for grading the plot Explanation: Write code that finds the two local minima of this function for $b=1.0$ and $a=5.0$. Use scipy.optimize.minimize to find the minima. You will have to think carefully about how to get this function to find both minima. Print the x values of the minima. Plot the function as a blue line. On the same axes, show the minima as red circles. Customize your visualization to make it beatiful and effective. End of explanation
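As a sanity check on the numerically found minima (not part of the exercise code above), the hat potential can also be minimized by hand: setting $dV/dx = -2ax + 4bx^3 = 0$ gives the two wells at $x = \pm\sqrt{a/(2b)}$. The sketch below reuses the hat function defined earlier.

import numpy as np

a, b = 5.0, 1.0
x_min = np.sqrt(a / (2 * b))      # analytic location of each minimum
print(-x_min, x_min)              # approximately -1.5811 and +1.5811
print(hat(x_min, a, b))           # depth of each well: -6.25 for a=5, b=1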
5,293
Given the following text description, write Python code to implement the functionality described below step by step Description: Classic Approach Step1: First Step Step2: Second Step Step3: By just randomly guessing, we get approx. 1/3 right, which is what we expect Step4: Third Step Step5: This is the baseline we have to beat
Python Code: import warnings warnings.filterwarnings('ignore') %matplotlib inline %pylab inline import pandas as pd print(pd.__version__) Explanation: Classic Approach End of explanation df = pd.read_csv('./insurance-customers-300.csv', sep=';') y=df['group'] df.drop('group', axis='columns', inplace=True) X = df.as_matrix() df.describe() Explanation: First Step: Load Data and disassemble for our purposes End of explanation # ignore this, it is just technical code # should come from a lib, consider it to appear magically # http://scikit-learn.org/stable/auto_examples/neighbors/plot_classification.html import matplotlib.pyplot as plt from matplotlib.colors import ListedColormap cmap_print = ListedColormap(['#AA8888', '#004000', '#FFFFDD']) cmap_bold = ListedColormap(['#AA4444', '#006000', '#AAAA00']) cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#FFFFDD']) font_size=25 def meshGrid(x_data, y_data): h = 1 # step size in the mesh x_min, x_max = x_data.min() - 1, x_data.max() + 1 y_min, y_max = y_data.min() - 1, y_data.max() + 1 xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h)) return (xx,yy) def plotPrediction(clf, x_data, y_data, x_label, y_label, colors, title="", mesh=True, fname=None): xx,yy = meshGrid(x_data, y_data) plt.figure(figsize=(20,10)) if clf and mesh: Z = clf.predict(np.c_[yy.ravel(), xx.ravel()]) # Put the result into a color plot Z = Z.reshape(xx.shape) plt.pcolormesh(xx, yy, Z, cmap=cmap_light) plt.xlim(xx.min(), xx.max()) plt.ylim(yy.min(), yy.max()) if fname: plt.scatter(x_data, y_data, c=colors, cmap=cmap_print, s=200, marker='o', edgecolors='k') else: plt.scatter(x_data, y_data, c=colors, cmap=cmap_bold, s=80, marker='o', edgecolors='k') plt.xlabel(x_label, fontsize=font_size) plt.ylabel(y_label, fontsize=font_size) plt.title(title, fontsize=font_size) if fname: plt.savefig(fname) # 0: red # 1: green # 2: yellow class ClassifierBase: def predict(self, X): return np.array([ self.predict_single(x) for x in X]) def score(self, X, y): n = len(y) correct = 0 predictions = self.predict(X) for prediction, ground_truth in zip(predictions, y): if prediction == ground_truth: correct = correct + 1 return correct / n from random import randrange class RandomClassifier(ClassifierBase): def predict_single(self, x): return randrange(3) random_clf = RandomClassifier() plotPrediction(random_clf, X[:, 1], X[:, 0], 'Age', 'Max Speed', y, title="Max Speed vs Age (Random)") Explanation: Second Step: Visualizing Prediction End of explanation random_clf.score(X, y) Explanation: By just randomly guessing, we get approx. 1/3 right, which is what we expect End of explanation class BaseLineClassifier(ClassifierBase): def predict_single(self, x): try: speed, age, km_per_year = x except: speed, age = x km_per_year = 0 if age < 25: if speed > 180: return 0 else: return 2 if age > 75: return 0 if km_per_year > 50: return 0 if km_per_year > 35: return 2 return 1 base_clf = BaseLineClassifier() plotPrediction(base_clf, X[:, 1], X[:, 0], 'Age', 'Max Speed', y, title="Max Speed vs Age with Classification") Explanation: Third Step: Creating a Base Line Creating a naive classifier manually, how much better is it? End of explanation base_clf.score(X, y) Explanation: This is the baseline we have to beat End of explanation
5,294
Given the following text description, write Python code to implement the functionality described below step by step Description: Visualizing Networks The following demonstrates basic use of nupic.frameworks.viz.NetworkVisualizer to visualize a network. Before you begin, you will need to install the otherwise optional dependencies. From the root of nupic repository Step1: Render with nupic.frameworks.viz.NetworkVisualizer, which takes as input any nupic.engine.Network instance Step2: That's interesting, but not necessarily useful if you don't understand dot. Let's capture that output and do something else Step3: outp now contains the rendered output, render to an image with graphviz Step4: In the example above, each three-columned rectangle is a discrete region, the user-defined name for which is in the middle column. The left-hand and right-hand columns are respective inputs and outputs, the names for which, e.g. "bottumUpIn" and "bottomUpOut", are specific to the region type. The arrows indicate links between outputs from one region to the input of another. I know what you're thinking. That's a cool trick, but nobody cares about your contrived example. I want to see something real! Continuing below, I'll instantiate a HTMPredictionModel and visualize it. In this case, I'll use one of the "hotgym" examples. Step5: Same deal as before, create a NetworkVisualizer instance, render to a buffer, then to an image, and finally display it inline.
Python Code: from nupic.engine import Network, Dimensions # Create Network instance network = Network() # Add three TestNode regions to network network.addRegion("region1", "TestNode", "") network.addRegion("region2", "TestNode", "") network.addRegion("region3", "TestNode", "") # Set dimensions on first region region1 = network.getRegions().getByName("region1") region1.setDimensions(Dimensions([1, 1])) # Link regions network.link("region1", "region2", "UniformLink", "") network.link("region2", "region1", "UniformLink", "") network.link("region1", "region3", "UniformLink", "") network.link("region2", "region3", "UniformLink", "") # Initialize network network.initialize() Explanation: Visualizing Networks The following demonstrates basic use of nupic.frameworks.viz.NetworkVisualizer to visualize a network. Before you begin, you will need to install the otherwise optional dependencies. From the root of nupic repository: pip install --user .[viz] Setup a simple network so we have something to work with: End of explanation from nupic.frameworks.viz import NetworkVisualizer # Initialize Network Visualizer viz = NetworkVisualizer(network) # Render to dot (stdout) viz.render() Explanation: Render with nupic.frameworks.viz.NetworkVisualizer, which takes as input any nupic.engine.Network instance: End of explanation from nupic.frameworks.viz import DotRenderer from io import StringIO outp = StringIO() viz.render(renderer=lambda: DotRenderer(outp)) Explanation: That's interesting, but not necessarily useful if you don't understand dot. Let's capture that output and do something else: End of explanation # Render dot to image from graphviz import Source from IPython.display import Image Image(Source(outp.getvalue()).pipe("png")) Explanation: outp now contains the rendered output, render to an image with graphviz: End of explanation from nupic.frameworks.opf.modelfactory import ModelFactory # Note: parameters copied from examples/opf/clients/hotgym/simple/model_params.py model = ModelFactory.create({'aggregationInfo': {'hours': 1, 'microseconds': 0, 'seconds': 0, 'fields': [('consumption', 'sum')], 'weeks': 0, 'months': 0, 'minutes': 0, 'days': 0, 'milliseconds': 0, 'years': 0}, 'model': 'HTMPrediction', 'version': 1, 'predictAheadTime': None, 'modelParams': {'sensorParams': {'verbosity': 0, 'encoders': {'timestamp_timeOfDay': {'type': 'DateEncoder', 'timeOfDay': (21, 1), 'fieldname': u'timestamp', 'name': u'timestamp_timeOfDay'}, u'consumption': {'resolution': 0.88, 'seed': 1, 'fieldname': u'consumption', 'name': u'consumption', 'type': 'RandomDistributedScalarEncoder'}, 'timestamp_weekend': {'type': 'DateEncoder', 'fieldname': u'timestamp', 'name': u'timestamp_weekend', 'weekend': 21}}, 'sensorAutoReset': None}, 'spParams': {'columnCount': 2048, 'spVerbosity': 0, 'spatialImp': 'cpp', 'synPermConnected': 0.1, 'seed': 1956, 'numActiveColumnsPerInhArea': 40, 'globalInhibition': 1, 'inputWidth': 0, 'synPermInactiveDec': 0.005, 'synPermActiveInc': 0.04, 'potentialPct': 0.85, 'boostStrength': 3.0}, 'spEnable': True, 'clParams': {'implementation': 'cpp', 'alpha': 0.1, 'verbosity': 0, 'steps': '1,5', 'regionName': 'SDRClassifierRegion'}, 'inferenceType': 'TemporalMultiStep', 'tmEnable': True, 'tmParams': {'columnCount': 2048, 'activationThreshold': 16, 'pamLength': 1, 'cellsPerColumn': 32, 'permanenceInc': 0.1, 'minThreshold': 12, 'verbosity': 0, 'maxSynapsesPerSegment': 32, 'outputType': 'normal', 'initialPerm': 0.21, 'globalDecay': 0.0, 'maxAge': 0, 'permanenceDec': 0.1, 'seed': 1960, 'newSynapseCount': 20, 
'maxSegmentsPerCell': 128, 'temporalImp': 'cpp', 'inputWidth': 2048}, 'trainSPNetOnlyIfRequested': False}}) Explanation: In the example above, each three-columned rectangle is a discrete region, the user-defined name for which is in the middle column. The left-hand and right-hand columns are respective inputs and outputs, the names for which, e.g. "bottumUpIn" and "bottomUpOut", are specific to the region type. The arrows indicate links between outputs from one region to the input of another. I know what you're thinking. That's a cool trick, but nobody cares about your contrived example. I want to see something real! Continuing below, I'll instantiate a HTMPredictionModel and visualize it. In this case, I'll use one of the "hotgym" examples. End of explanation # New network, new NetworkVisualizer instance viz = NetworkVisualizer(model._netInfo.net) # Render to Dot output to buffer outp = StringIO() viz.render(renderer=lambda: DotRenderer(outp)) # Render Dot to image, display inline Image(Source(outp.getvalue()).pipe("png")) Explanation: Same deal as before, create a NetworkVisualizer instance, render to a buffer, then to an image, and finally display it inline. End of explanation
5,295
Given the following text description, write Python code to implement the functionality described below step by step Description: Copyright 2019 The TensorFlow Authors. Step1: Transferência de Aprendizado com uma ConvNet Pré-Treinada <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https Step2: Pré-Processamento dos Dados Baixar os Dados Use [Conjuntos de dados TensorFlow] (http Step3: O método tfds.load baixa e armazena em cache os dados e retorna um objeto tf.data.Dataset. Esses objetos fornecem métodos poderosos e eficientes para manipular dados e canalizá-los para o seu modelo. Como "cats_vs_dogs" não define padrões para divisões, use o recurso subsplit para dividi-lo em (treinamento, validação, teste) com 80%, 10% e 10% dos dados, respectivamente. Step4: Os objetos tf.data.Dataset resultantes contêm pares (image, label) onde as imagens têm formato variável e 3 canais, e o rótulo é escalar. Step5: Mostre as duas primeiras imagens e rótulos do conjunto de treinamento Step6: Formate os dados Use o módulo tf.image para formatar as imagens para a tarefa. Redimensione as imagens para um tamanho de entrada fixo e redimensione os canais de entrada para um intervalo de [-1,1] <!-- TODO (markdaoust) Step7: Aplique esta função a cada item no conjunto de dados usando o método map Step8: Agora embaralhe e agrupe os dados. Step9: Inspecione um lote de dados Step10: Crie o modelo base a partir das ConvNets pré-treinadas Você criará o modelo base a partir do modelo MobileNet V2 desenvolvido no Google. Isso é pré-treinado no conjunto de dados ImageNet, um grande conjunto de dados composto por 1,4 milhões de imagens e 1000 classes. O ImageNet é um conjunto de dados de treinamento de pesquisa com uma ampla variedade de categorias, como jaca e seringa. Essa base de conhecimento nos ajudará a classificar cães e gatos de nosso conjunto de dados específico. Primeiro, você precisa escolher qual camada do MobileNet V2 usará para extração de características. A última camada de classificação (na parte superior, como a maioria dos diagramas dos modelos de aprendizado de máquina vai de baixo para cima) não é muito útil. Em vez disso, você seguirá a prática comum de depender da última camada antes da operação de nivelamento. Essa camada é chamada de "camada de gargalo". Os recursos da camada de gargalo retêm mais generalidade em comparação com a camada final/superior. Primeiro, instancie um modelo MobileNet V2 pré-carregado com pesos treinados no ImageNet. Ao especificar o argumento include_top = False, você carrega uma rede que não inclui as camadas de classificação na parte superior, o que é ideal para a extração de características. Step11: Este extrator de características converte cada imagem 160x160x3 em um bloco de características 5x5x1280. Veja o que ele faz com o lote de imagens de exemplo Step12: Extração de Características Nesta etapa, você congelará a base convolucional criada a partir da etapa anterior e utilizará como extrator de características. Além disso, você adiciona um classificador sobre ele e treina o classificador de nível superior. Congelar a base convolucional É importante congelar a base convolucional antes de compilar e treinar o modelo. O congelamento (configurando layer.trainable = False) impede que os pesos em uma determinada camada sejam atualizados durante o treinamento. O MobileNet V2 possui muitas camadas, portanto, definir o sinalizador treinável do modelo inteiro como False congelará todas as camadas. 
Step13: Adicionar um cabeçalho de classificação Para gerar previsões a partir do bloco de características, calcule a média dos espaços 5x5, usando uma camada tf.keras.layers.GlobalAveragePooling2D para converter as características em um único vetor de 1280 elementos por imagem. Step14: Aplique uma camada tf.keras.layers.Dense para converter esses recursos em uma única previsão por imagem. Você não precisa de uma função de ativação aqui porque esta previsão será tratada como um logit ou um valor bruto de previsão. Números positivos predizem a classe 1, números negativos predizem a classe 0. Step15: Agora empilhe o extrator de características e essas duas camadas usando um modelo tf.keras.Sequential Step16: Compilar o modelo Você deve compilar o modelo antes de treiná-lo. Como existem duas classes, use uma perda de entropia cruzada binária com from_logits = True, pois o modelo fornece uma saída linear. Step17: Os parâmetros de 2,5 milhões no MobileNet estão congelados, mas existem 1,2 mil parâmetros trainable na camada Dense. Estes são divididos entre dois objetos tf.Variable, os pesos e desvios. Step18: Trainar o modelo Após o treinamento por 10 épocas, você deverá ver ~96% de acurácia. Step19: Curvas de Aprendizado Vamos dar uma olhada nas curvas de aprendizado da acurácia/perda do treinamento e da validação ao usar o modelo base do MobileNet V2 como um extrator de características fixo. Step20: Nota Step21: Compilar o modelo Compile o modelo usando uma taxa de aprendizado muito menor. Step22: Continue treinando o modelo Se você treinou para convergência anteriormente, esta etapa melhorará sua acurácia em alguns pontos percentuais. Step23: Vamos dar uma olhada nas curvas de aprendizado da acurácia/perda do treinamento e da validação ao ajustar as últimas camadas do modelo base do MobileNet V2 e treinar o classificador sobre ele. A perda de validação é muito maior do que a perda de treinamento, portanto, você pode obter um overfitting. Você também pode obter um overfitting, pois o novo conjunto de treinamento é relativamente pequeno e semelhante aos conjuntos de dados originais do MobileNet V2. Após o ajuste fino, o modelo atinge quase 98% de acurácia.
Python Code: #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. #@title MIT License # # Copyright (c) 2017 François Chollet # IGNORE_COPYRIGHT: cleared by OSS licensing # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the "Software"), # to deal in the Software without restriction, including without limitation # the rights to use, copy, modify, merge, publish, distribute, sublicense, # and/or sell copies of the Software, and to permit persons to whom the # Software is furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL # THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING # FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER # DEALINGS IN THE SOFTWARE. Explanation: Copyright 2019 The TensorFlow Authors. End of explanation from __future__ import absolute_import, division, print_function, unicode_literals import os import numpy as np import matplotlib.pyplot as plt try: # %tensorflow_version only exists in Colab. %tensorflow_version 2.x except Exception: pass import tensorflow as tf keras = tf.keras Explanation: Transferência de Aprendizado com uma ConvNet Pré-Treinada <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/tutorials/images/transfer_learning"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />Ver em TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/pt-br/tutorials/images/transfer_learning.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Executar no Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/pt-br/tutorials/images/transfer_learning.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />Visualizar Código Fonte no GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/pt-br/tutorials/images/transfer_learning.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Baixar notebook</a> </td> </table> Neste tutorial, você aprenderá a classificar imagens de cães e gatos usando a transferência de aprendizado de uma rede pré-treinada. Um modelo pré-treinado é uma rede salva que foi treinada anteriormente em um grande conjunto de dados, geralmente em uma tarefa de classificação de imagem em larga escala. 
Você usa o modelo pré-treinado como está ou usa a transferência de aprendizado para personalizar esse modelo para uma determinada tarefa. A intuição por trás da transferência de aprendizado para classificação de imagens é que, se um modelo for treinado em um conjunto de dados grande e geral o suficiente, esse modelo servirá efetivamente como um modelo genérico do mundo visual. Você pode aproveitar esses mapas de características aprendidas sem precisar começar do zero treinando um modelo grande em um grande conjunto de dados. Neste notebook, você tentará duas maneiras de personalizar um modelo pré-treinado: Extração de características: use as representações aprendidas por uma rede anterior para extrair características significativas de novas amostras. Você simplesmente adiciona um novo classificador, que será treinado do zero, sobre o modelo pré-treinado, para que você possa adaptar novamente os mapas de características aprendidas anteriormente para o conjunto de dados. Você não precisa (re) treinar o modelo inteiro. A rede convolucional de base já contém características que são genericamente úteis para classificar imagens. No entanto, a parte final de classificação do modelo pré-treinado é específica para a tarefa de classificação original e subsequentemente específica para o conjunto de classes em que o modelo foi treinado. Ajuste fino: descongele algumas das camadas superiores de uma base modelo congelada e treine em conjunto as camadas de classificação recém-adicionadas e as últimas camadas do modelo base. Isso nos permite "ajustar" as representações de características de ordem superior no modelo base para torná-las mais relevantes para a tarefa específica. Você seguirá o fluxo de trabalho geral de aprendizado de máquina. Examine e entenda os dados Crie um pipeline de entrada, neste caso usando o Keras ImageDataGenerator Componha o modelo    * Carregar no modelo básico pré-treinado (e pesos pré-treinados)    * Empilhe as camadas de classificação na parte superior Treine o modelo Avalie o modelo End of explanation import tensorflow_datasets as tfds tfds.disable_progress_bar() Explanation: Pré-Processamento dos Dados Baixar os Dados Use [Conjuntos de dados TensorFlow] (http://tensorflow.org/datasets) para carregar o conjunto de dados de cães e gatos. O pacote tfds é a maneira mais fácil de carregar dados pré-definidos. Se você possui seus próprios dados e está interessado em importá-los com o TensorFlow, consulte [carregando dados da imagem] (../load_data/images.ipynb) End of explanation SPLIT_WEIGHTS = (8, 1, 1) splits = tfds.Split.TRAIN.subsplit(weighted=SPLIT_WEIGHTS) (raw_train, raw_validation, raw_test), metadata = tfds.load( 'cats_vs_dogs', split=list(splits), with_info=True, as_supervised=True) Explanation: O método tfds.load baixa e armazena em cache os dados e retorna um objeto tf.data.Dataset. Esses objetos fornecem métodos poderosos e eficientes para manipular dados e canalizá-los para o seu modelo. Como "cats_vs_dogs" não define padrões para divisões, use o recurso subsplit para dividi-lo em (treinamento, validação, teste) com 80%, 10% e 10% dos dados, respectivamente. End of explanation print(raw_train) print(raw_validation) print(raw_test) Explanation: Os objetos tf.data.Dataset resultantes contêm pares (image, label) onde as imagens têm formato variável e 3 canais, e o rótulo é escalar. 
End of explanation get_label_name = metadata.features['label'].int2str for image, label in raw_train.take(2): plt.figure() plt.imshow(image) plt.title(get_label_name(label)) Explanation: Mostre as duas primeiras imagens e rótulos do conjunto de treinamento: End of explanation IMG_SIZE = 160 # Todas as imagens serão ajustadas para 160x160 def format_example(image, label): image = tf.cast(image, tf.float32) image = (image/127.5) - 1 image = tf.image.resize(image, (IMG_SIZE, IMG_SIZE)) return image, label Explanation: Formate os dados Use o módulo tf.image para formatar as imagens para a tarefa. Redimensione as imagens para um tamanho de entrada fixo e redimensione os canais de entrada para um intervalo de [-1,1] <!-- TODO (markdaoust): corrige as funções de pré-processamento keras_applications para trabalhar em tf2 --> End of explanation train = raw_train.map(format_example) validation = raw_validation.map(format_example) test = raw_test.map(format_example) Explanation: Aplique esta função a cada item no conjunto de dados usando o método map: End of explanation BATCH_SIZE = 32 SHUFFLE_BUFFER_SIZE = 1000 train_batches = train.shuffle(SHUFFLE_BUFFER_SIZE).batch(BATCH_SIZE) validation_batches = validation.batch(BATCH_SIZE) test_batches = test.batch(BATCH_SIZE) Explanation: Agora embaralhe e agrupe os dados. End of explanation for image_batch, label_batch in train_batches.take(1): pass image_batch.shape Explanation: Inspecione um lote de dados: End of explanation IMG_SHAPE = (IMG_SIZE, IMG_SIZE, 3) # Criar o modelo base a partir do modelo MobileNet V2 pré-treinado base_model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE, include_top=False, weights='imagenet') Explanation: Crie o modelo base a partir das ConvNets pré-treinadas Você criará o modelo base a partir do modelo MobileNet V2 desenvolvido no Google. Isso é pré-treinado no conjunto de dados ImageNet, um grande conjunto de dados composto por 1,4 milhões de imagens e 1000 classes. O ImageNet é um conjunto de dados de treinamento de pesquisa com uma ampla variedade de categorias, como jaca e seringa. Essa base de conhecimento nos ajudará a classificar cães e gatos de nosso conjunto de dados específico. Primeiro, você precisa escolher qual camada do MobileNet V2 usará para extração de características. A última camada de classificação (na parte superior, como a maioria dos diagramas dos modelos de aprendizado de máquina vai de baixo para cima) não é muito útil. Em vez disso, você seguirá a prática comum de depender da última camada antes da operação de nivelamento. Essa camada é chamada de "camada de gargalo". Os recursos da camada de gargalo retêm mais generalidade em comparação com a camada final/superior. Primeiro, instancie um modelo MobileNet V2 pré-carregado com pesos treinados no ImageNet. Ao especificar o argumento include_top = False, você carrega uma rede que não inclui as camadas de classificação na parte superior, o que é ideal para a extração de características. End of explanation feature_batch = base_model(image_batch) print(feature_batch.shape) Explanation: Este extrator de características converte cada imagem 160x160x3 em um bloco de características 5x5x1280. Veja o que ele faz com o lote de imagens de exemplo: End of explanation base_model.trainable = False # Vamos dar uma olhada na arquitetura do modelo base base_model.summary() Explanation: Extração de Características Nesta etapa, você congelará a base convolucional criada a partir da etapa anterior e utilizará como extrator de características. 
Além disso, você adiciona um classificador sobre ele e treina o classificador de nível superior. Congelar a base convolucional É importante congelar a base convolucional antes de compilar e treinar o modelo. O congelamento (configurando layer.trainable = False) impede que os pesos em uma determinada camada sejam atualizados durante o treinamento. O MobileNet V2 possui muitas camadas, portanto, definir o sinalizador treinável do modelo inteiro como False congelará todas as camadas. End of explanation global_average_layer = tf.keras.layers.GlobalAveragePooling2D() feature_batch_average = global_average_layer(feature_batch) print(feature_batch_average.shape) Explanation: Adicionar um cabeçalho de classificação Para gerar previsões a partir do bloco de características, calcule a média dos espaços 5x5, usando uma camada tf.keras.layers.GlobalAveragePooling2D para converter as características em um único vetor de 1280 elementos por imagem. End of explanation prediction_layer = keras.layers.Dense(1) prediction_batch = prediction_layer(feature_batch_average) print(prediction_batch.shape) Explanation: Aplique uma camada tf.keras.layers.Dense para converter esses recursos em uma única previsão por imagem. Você não precisa de uma função de ativação aqui porque esta previsão será tratada como um logit ou um valor bruto de previsão. Números positivos predizem a classe 1, números negativos predizem a classe 0. End of explanation model = tf.keras.Sequential([ base_model, global_average_layer, prediction_layer ]) Explanation: Agora empilhe o extrator de características e essas duas camadas usando um modelo tf.keras.Sequential: End of explanation base_learning_rate = 0.0001 model.compile(optimizer=tf.keras.optimizers.RMSprop(lr=base_learning_rate), loss=tf.keras.losses.BinaryCrossentropy(from_logits=True), metrics=['accuracy']) model.summary() Explanation: Compilar o modelo Você deve compilar o modelo antes de treiná-lo. Como existem duas classes, use uma perda de entropia cruzada binária com from_logits = True, pois o modelo fornece uma saída linear. End of explanation len(model.trainable_variables) Explanation: Os parâmetros de 2,5 milhões no MobileNet estão congelados, mas existem 1,2 mil parâmetros trainable na camada Dense. Estes são divididos entre dois objetos tf.Variable, os pesos e desvios. End of explanation num_train, num_val, num_test = ( metadata.splits['train'].num_examples*weight/10 for weight in SPLIT_WEIGHTS ) initial_epochs = 10 steps_per_epoch = round(num_train)//BATCH_SIZE validation_steps=20 loss0,accuracy0 = model.evaluate(validation_batches, steps = validation_steps) print("initial loss: {:.2f}".format(loss0)) print("initial accuracy: {:.2f}".format(accuracy0)) history = model.fit(train_batches, epochs=initial_epochs, validation_data=validation_batches) Explanation: Trainar o modelo Após o treinamento por 10 épocas, você deverá ver ~96% de acurácia. 
End of explanation acc = history.history['accuracy'] val_acc = history.history['val_accuracy'] loss = history.history['loss'] val_loss = history.history['val_loss'] plt.figure(figsize=(8, 8)) plt.subplot(2, 1, 1) plt.plot(acc, label='Training Accuracy') plt.plot(val_acc, label='Validation Accuracy') plt.legend(loc='lower right') plt.ylabel('Accuracy') plt.ylim([min(plt.ylim()),1]) plt.title('Training and Validation Accuracy') plt.subplot(2, 1, 2) plt.plot(loss, label='Training Loss') plt.plot(val_loss, label='Validation Loss') plt.legend(loc='upper right') plt.ylabel('Cross Entropy') plt.ylim([0,1.0]) plt.title('Training and Validation Loss') plt.xlabel('epoch') plt.show() Explanation: Curvas de Aprendizado Vamos dar uma olhada nas curvas de aprendizado da acurácia/perda do treinamento e da validação ao usar o modelo base do MobileNet V2 como um extrator de características fixo. End of explanation base_model.trainable = True # Vamos dar uma olhada para ver quantas camadas existem no modelo base print("Number of layers in the base model: ", len(base_model.layers)) # Ajuste a partir desta camada em diante fine_tune_at = 100 # Congele todas as camadas antes de `fine_tune_at` for layer in base_model.layers[:fine_tune_at]: layer.trainable = False Explanation: Nota: Se você está se perguntando por que as métricas de validação são claramente melhores que as métricas de treinamento, o principal fator é que camadas como tf.keras.layers.BatchNormalization e tf.keras.layers.Dropout afetam a acurácia durante o treinamento. Eles são desativados ao calcular a perda de validação. Em menor grau, é também porque as métricas de treinamento relatam a média de uma época, enquanto as métricas de validação são avaliadas após a época, portanto, as métricas de validação veem um modelo que foi treinado um pouco mais. Ajuste Fino No experimento de extração de características, você treinava apenas algumas camadas sobre um modelo base do MobileNet V2. Os pesos da rede pré-treinada não foram atualizados durante o treinamento. Uma maneira de aumentar ainda mais o desempenho é treinar (ou "ajustar") os pesos das camadas superiores do modelo pré-treinado, juntamente com o treinamento do classificador adicionado. O processo de treinamento forçará os pesos a serem ajustados com mapas de características genéricas para recursos associados especificamente ao conjunto de dados. Nota: Isso só deve ser tentado depois de você treinar o classificador de nível superior com o modelo pré-treinado definido como não treinável. Se você adicionar um classificador inicializado aleatoriamente sobre um modelo pré-treinado e tentar treinar todas as camadas em conjunto, a magnitude das atualizações de gradiente será muito grande (devido aos pesos aleatórios do classificador) e seu modelo pré-treinado esquecerá o que aprendeu. Além disso, você deve tentar ajustar um pequeno número de camadas superiores em vez de todo o modelo MobileNet. Na maioria das redes convolucionais, quanto mais alta a camada, mais especializada ela é. As primeiras camadas aprendem recursos muito simples e genéricos que generalizam para quase todos os tipos de imagens. À medida que você aumenta, as características são cada vez mais específicas para o conjunto de dados no qual o modelo foi treinado. O objetivo do ajuste fino é adaptar essas características especializados para trabalhar com o novo conjunto de dados, em vez de substituir o aprendizado genérico. 
Descongele as camadas superiores do modelo Tudo o que você precisa fazer é descongelar o base_model e definir as camadas inferiores para que não possam ser treinadas. Em seguida, recompile o modelo (necessário para que essas alterações entrem em vigor) e reinicie o treinamento. End of explanation model.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True), optimizer = tf.keras.optimizers.RMSprop(lr=base_learning_rate/10), metrics=['accuracy']) model.summary() len(model.trainable_variables) Explanation: Compilar o modelo Compile o modelo usando uma taxa de aprendizado muito menor. End of explanation fine_tune_epochs = 10 total_epochs = initial_epochs + fine_tune_epochs history_fine = model.fit(train_batches, epochs=total_epochs, initial_epoch = history.epoch[-1], validation_data=validation_batches) Explanation: Continue treinando o modelo Se você treinou para convergência anteriormente, esta etapa melhorará sua acurácia em alguns pontos percentuais. End of explanation acc += history_fine.history['accuracy'] val_acc += history_fine.history['val_accuracy'] loss += history_fine.history['loss'] val_loss += history_fine.history['val_loss'] plt.figure(figsize=(8, 8)) plt.subplot(2, 1, 1) plt.plot(acc, label='Training Accuracy') plt.plot(val_acc, label='Validation Accuracy') plt.ylim([0.8, 1]) plt.plot([initial_epochs-1,initial_epochs-1], plt.ylim(), label='Start Fine Tuning') plt.legend(loc='lower right') plt.title('Training and Validation Accuracy') plt.subplot(2, 1, 2) plt.plot(loss, label='Training Loss') plt.plot(val_loss, label='Validation Loss') plt.ylim([0, 1.0]) plt.plot([initial_epochs-1,initial_epochs-1], plt.ylim(), label='Start Fine Tuning') plt.legend(loc='upper right') plt.title('Training and Validation Loss') plt.xlabel('epoch') plt.show() Explanation: Vamos dar uma olhada nas curvas de aprendizado da acurácia/perda do treinamento e da validação ao ajustar as últimas camadas do modelo base do MobileNet V2 e treinar o classificador sobre ele. A perda de validação é muito maior do que a perda de treinamento, portanto, você pode obter um overfitting. Você também pode obter um overfitting, pois o novo conjunto de treinamento é relativamente pequeno e semelhante aos conjuntos de dados originais do MobileNet V2. Após o ajuste fino, o modelo atinge quase 98% de acurácia. End of explanation
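The tutorial builds a held-out test_batches split at the start but never scores the final model on it. A minimal sketch of that last check, assuming the fine-tuned `model` and the `test_batches` dataset from the cells above are still in scope:
```
loss, accuracy = model.evaluate(test_batches)
print("test loss: {:.2f}".format(loss))
print("test accuracy: {:.2f}".format(accuracy))
```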
5,296
Given the following text description, write Python code to implement the functionality described below step by step Description: Testing a Model Based on Kevin Markham's video series Step1: Logistic regression Step2: Classification accuracy Step3: Generating an Optimal KNN classifier Look back at 04_model_training and see how high an accuracy you can achieve for different values of n_neighbors. Try to understand why different values do better than others in terms of the pictures we saw in 04_model_training. You can change feature1 and feature2 in the cell below to visualize different projections of the data.
Python Code: # read in the iris data
from sklearn.datasets import load_iris
iris = load_iris()

# create X (features) and y (response)
X = iris.data
y = iris.target
Explanation: Testing a Model
Based on Kevin Markham's video series: Introduction to machine learning with scikit-learn
jupyter notebook 05_model_evaluation_ta.ipynb
End of explanation
# import the class
from sklearn.linear_model import LogisticRegression

# instantiate the model (using the default parameters)
logreg = LogisticRegression()

# fit the model with data
logreg.fit(X, y)

# predict the response values for the observations in X
y_pred = logreg.predict(X)
print(y_pred)
print("{0} predictions".format(len(y_pred)))
Explanation: Logistic regression
End of explanation
# compute classification accuracy for the logistic regression model
from sklearn import metrics
print(metrics.accuracy_score(y, y_pred))
Explanation: Classification accuracy:
Proportion of correct predictions
Common evaluation metric for classification problems
End of explanation
feature1 = 1 # feature on x axis
feature2 = 3 # feature on y axis
data = X
f1vals = X[:,feature1]
f2vals = X[:,feature2]
import numpy as np
targets = dict(zip(range(3), iris.target_names))
features = dict(zip(range(4), iris.feature_names))
%matplotlib inline
import matplotlib.pyplot as plt
colors = ['g', 'r', 'b']
fig = plt.figure(figsize=(8,8))
ax = plt.subplot()
for species in targets.keys():
    f1 = f1vals[np.where(y==species)]
    f2 = f2vals[np.where(y==species)]
    ax.scatter(f1, f2, c=colors[species], label=targets[species], s=40)
ax.set(xlabel=features[feature1], ylabel=features[feature2])
ax.legend()
Explanation: Generating an Optimal KNN classifier
Look back at 04_model_training and see how high an accuracy you can achieve for different values of n_neighbors. Try to understand why different values do better than others in terms of the pictures we saw in 04_model_training. You can change feature1 and feature2 in the cell below to visualize different projections of the data.
End of explanation
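The closing exercise asks how high an accuracy different values of n_neighbors can reach, but no KNN code appears in this notebook. A rough sketch of that experiment, reusing `X`, `y` and `metrics` from the cells above and assuming `KNeighborsClassifier` is the estimator used in 04_model_training:
```
from sklearn.neighbors import KNeighborsClassifier

for k in [1, 5, 15, 25, 50]:
    knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit(X, y)
    print(k, metrics.accuracy_score(y, knn.predict(X)))
```
Note that very small values of k tend to score almost perfectly on the data the model was trained on, which is exactly why training accuracy alone is a weak way to compare settings.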
5,297
Given the following text description, write Python code to implement the functionality described below step by step Description: Class 4 Step1: matplotlib matplotlib is a powerful plotting module that is part of Python's standard library. The website for matplotlib is at http Step2: Next, we want to make sure that the plots that we create are displayed in this notebook. To achieve this we have to issue a command to be interpretted by Jupyter -- called a magic command. A magic command is preceded by a % character. Magics are not Python and will create errs if used outside of the Jupyter notebook Step3: A quick matplotlib example Create a plot of the sine function for x values between -6 and 6. Add axis labels and a title. Step4: The plot function The plot function creates a two-dimensional plot of one variable against another. Step5: Example Create a plot of $f(x) = x^2$ with $x$ between -2 and 2. * Set the linewidth to 3 points * Set the line transparency (alpha) to 0.6 * Set axis labels and title * Add a grid to the plot Step6: Example Create plots of the functions $f(x) = \log x$ (natural log) and $g(x) = 1/x$ between 0.01 and 5 * Set the limits for the $x$-axis to (0,5) * Set the limits for the $y$-axis to (-2,5) * Make the line for $log(x)$ solid blue * Make the line for $1/x$ dashd magenta * Set the linewidth of each line to 3 points * Set the line transparency (alpha) for each line to 0.6 * Set axis labels and title * Add a legend * Add a grid to the plot Step7: Example Consider the linear regression model Step8: Example Create plots of the functions $f(x) = x$, $g(x) = x^2$, and $h(x) = x^3$ for $x$ between -2 and 2 * Use the optional string format argument to format the lines Step9: Figures, axes, and subplots Often we want to create plots with multiple axes or we want to modify the size and shape of the plot areas. To be able to do these things, we need to explicity create a figure and then create the axes within the figure. The best way to see how this works is by example. Example Step10: In the previous example the figure() function creates a new figure and add_subplot() puts a new axis on the figure. The command fig.add_subplot(1,1,1) means divide the figure fig into a 1 by 1 grid and assign the first component of that grid to the variable ax1. Example Step11: Example Step12: Exporting figures to image files Use the plt.savefig() function to save figures to images.
Python Code: # Import numpy import numpy as np # Define T and g T = 40 y0 =50 g = 0.01 # Compute yT using the direct approach and print yT = (1+g)**T*y0 print('Direct approach: ',yT) # Initialize a 1-dimensional array called y that has T+1 zeros y = np.zeros(T+1) # Set the initial value of y to equal y0 y[0] = y0 # Use a for loop to update the values of y one at a time for t in np.arange(T): y[t+1] = (1+g)*y[t] # Print the final value in the array y print('Iterative approach:',y[-1]) Explanation: Class 4: matplotlib (and a quick Numpy example) Brief introduction to the matplotlib module. Preliminary example: Economic growth A country with GDP in year $t-1$ denoted by $y_{t-1}$ and an annual GDP growth rate of $g$, will have GDP in year $t$ given by the recursive equation: \begin{align} y_{t} & = (1+g)y_{t-1} \end{align} Given an initial value of $y_0$, we can find $y_t$ for any given $t$ in one of two ways: 1. By iterating on the equation 2. Or by using substitution and deriving: \begin{align} y_t & = (1+g)^t y_0 \end{align} In this example we'll do both. Example: Economic growth A country with GDP in year $t-1$ denoted by $y_{t-1}$ and an annual GDP growth rate of $g$, will have GDP in year $t$ given by the recursive equation: \begin{align} y_{t} & = (1+g)y_{t-1} \end{align} Given an initial value of $y_0$, we can find $y_t$ for any given $t$ in one of two ways: 1. By iterating on the equation 2. Or by using substitution and deriving: \begin{align} y_t & = (1+g)^t y_0 \end{align} In this example we'll do both. End of explanation # Import matplotlib.pyplot import matplotlib.pyplot as plt Explanation: matplotlib matplotlib is a powerful plotting module that is part of Python's standard library. The website for matplotlib is at http://matplotlib.org/. And you can find a bunch of examples at the following two locations: http://matplotlib.org/examples/index.html and http://matplotlib.org/gallery.html. matplotlib contains a module called pyplot that was written to provide a Matlab-style ploting interface. End of explanation # Magic command for the Jupyter Notebook %matplotlib inline Explanation: Next, we want to make sure that the plots that we create are displayed in this notebook. To achieve this we have to issue a command to be interpretted by Jupyter -- called a magic command. A magic command is preceded by a % character. Magics are not Python and will create errs if used outside of the Jupyter notebook End of explanation # Import numpy as np import numpy as np # Create an array of x values from -6 to 6 x = np.arange(-6,6,0.001) # Create a variable y equal to the sin of x y = np.sin(x) # Use the plot function to plot the plt.plot(x,y) # Add a title and axis labels plt.title('sin(x)') plt.xlabel('x') plt.ylabel('y') Explanation: A quick matplotlib example Create a plot of the sine function for x values between -6 and 6. Add axis labels and a title. End of explanation # Use the help function to see the documentation for plot help(plt.plot) Explanation: The plot function The plot function creates a two-dimensional plot of one variable against another. End of explanation # Create an array of x values from -6 to 6 x = np.arange(-2,2,0.001) # Create a variable y equal to the x squared y = x**2 # Use the plot function to plot the line plt.plot(x,y,linewidth=3,alpha = 0.6) # Add a title and axis labels plt.title('$f(x) = x^2$') plt.xlabel('x') plt.ylabel('y') # Add grid plt.grid() Explanation: Example Create a plot of $f(x) = x^2$ with $x$ between -2 and 2. 
* Set the linewidth to 3 points * Set the line transparency (alpha) to 0.6 * Set axis labels and title * Add a grid to the plot End of explanation # Create an array of x values from -6 to 6 x = np.arange(0.05,5,0.011) # Create y variables y1 = np.log(x) y2 = 1/x # Use the plot function to plot the lines plt.plot(x,y1,'b-',linewidth=3,alpha = 0.6,label='$log(x)$') plt.plot(x,y2,'m--',linewidth=3,alpha = 0.6,label='$1/x$') # Add a title and axis labels plt.title('Two functions') plt.xlabel('x') plt.ylabel('y') # Set axis limits plt.xlim([0,5]) plt.ylim([-2,4]) # legend plt.legend(loc='lower right',ncol=2) # Add grid plt.grid() Explanation: Example Create plots of the functions $f(x) = \log x$ (natural log) and $g(x) = 1/x$ between 0.01 and 5 * Set the limits for the $x$-axis to (0,5) * Set the limits for the $y$-axis to (-2,5) * Make the line for $log(x)$ solid blue * Make the line for $1/x$ dashd magenta * Set the linewidth of each line to 3 points * Set the line transparency (alpha) for each line to 0.6 * Set axis labels and title * Add a legend * Add a grid to the plot End of explanation # Set betas beta0 = 1 beta1 = -0.5 # Create x values x = np.arange(-5,5,0.01) # create epsilon values from the standard normal distribution epsilon = np.random.normal(size=len(x)) # create y y = beta0 + beta1*x+epsilon # plot plt.plot(x,y,'o',alpha = 0.5) # Add a title and axis labels plt.title('Data') plt.xlabel('x') plt.ylabel('y') # Set axis limits plt.xlim([-5,5]) # Add grid plt.grid() Explanation: Example Consider the linear regression model: \begin{align} y_i = \beta_0 + \beta_1 x_i + \epsilon_i \end{align} where $x_i$ is the independent variable, $\epsilon_i$ is a random regression error term, $y_i$ is the dependent variable and $\beta_0$ and $\beta_1$ are constants. Let's simulate the model * Set values for $\beta_0$ and $\beta_1$ * Create an array of $x_i$ values from -5 to 5 * Create an array of $\epsilon_i$ values from the standard normal distribution equal in length to the array of $x_i$s * Create an array of $y_i$s * Plot y against x with either a circle ('o'), triangle ('^'), or square ('s') marker and transparency (alpha) to 0.5 * Add axis lables, a title, and a grid to the plot End of explanation # Create an array of x values from -6 to 6 x = np.arange(-2,2,0.001) # Create y variables y1 = x y2 = x**2 y3 = x**3 # Use the plot function to plot the lines plt.plot(x,y1,'b-',lw=3,label='$x$') plt.plot(x,y2,'g--',lw=3,label='$x^2$') plt.plot(x,y3,'m-.',lw=3,label='$x^3$') # Add a title and axis labels plt.title('Three functions') plt.xlabel('x') plt.ylabel('y') # Add grid plt.grid() # legend plt.legend(loc='lower right',ncol=3) Explanation: Example Create plots of the functions $f(x) = x$, $g(x) = x^2$, and $h(x) = x^3$ for $x$ between -2 and 2 * Use the optional string format argument to format the lines: - $x$: solid blue line - $x^2$: dashed green line - $x^3$: dash-dot magenta line * Set the linewidth of each line to 3 points * Set transparency (alpha) for each line to 0.6 * Add a legend to lower right with 3 columns * Set axis labels and title * Add a grid to the plot End of explanation # Create data x = np.arange(-6,6,0.001) y = np.sin(x) # Create a new figure fig = plt.figure(figsize=(12,4)) # Create axis ax1 = fig.add_subplot(1,1,1) # Plot ax1.plot(x,y,lw=3,alpha = 0.6) # Add grid ax1.grid() Explanation: Figures, axes, and subplots Often we want to create plots with multiple axes or we want to modify the size and shape of the plot areas. 
To be able to do these things, we need to explicity create a figure and then create the axes within the figure. The best way to see how this works is by example. Example: A single plot with double width The default dimensions of a matplotlib figure are 6 inches by 4 inches. As we saw above, this leaves some whitespace on the right side of the figure. Suppose we want to remove that by making the plot area twice as wide. Plot the sine function on -6 to 6 using a figure with dimensions 12 inches by 4 inches End of explanation # Create data x = np.arange(-6,6,0.001) y1 = np.sin(x) y2 = np.cos(x) # Create a new figure fig = plt.figure(figsize=(12,4)) # Create axis 1 and plot with title ax1 = fig.add_subplot(1,2,1) ax1.plot(x,y1,lw=3,alpha = 0.6) ax1.grid() ax1.set_xlabel('x') ax1.set_ylabel('y') ax1.set_title('sin') # Create axis 2 and plot with title ax2 = fig.add_subplot(1,2,2) ax2.plot(x,y2,lw=3,alpha = 0.6) ax2.grid() ax2.set_xlabel('x') ax2.set_ylabel('y') ax2.set_title('sin') Explanation: In the previous example the figure() function creates a new figure and add_subplot() puts a new axis on the figure. The command fig.add_subplot(1,1,1) means divide the figure fig into a 1 by 1 grid and assign the first component of that grid to the variable ax1. Example: Two plots side-by-side Create a new figure with two axes side-by-side and plot the sine function on -6 to 6 on the left axis and the cosine function on -6 to 6 on the right axis. End of explanation # Create data x = np.arange(-2,2,0.001) y1 = x y2 = x**2 y3 = x**3 y4 = x**4 # Create a new figure fig = plt.figure() # Create axis 1 and plot with title ax1 = fig.add_subplot(2,2,1) ax1.plot(x,y1,lw=3,alpha = 0.6) ax1.grid() ax1.set_xlabel('x') ax1.set_ylabel('y') ax1.set_title('$x$') # Create axis 2 and plot with title ax2 = fig.add_subplot(2,2,2) ax2.plot(x,y2,lw=3,alpha = 0.6) ax2.grid() ax2.set_xlabel('x') ax2.set_ylabel('y') ax2.set_title('$x^2$') # Create axis 3 and plot with title ax3 = fig.add_subplot(2,2,3) ax3.plot(x,y3,lw=3,alpha = 0.6) ax3.grid() ax3.set_xlabel('x') ax3.set_ylabel('y') ax3.set_title('$x^3$') # Create axis 4 and plot with title ax4 = fig.add_subplot(2,2,4) ax4.plot(x,y4,lw=3,alpha = 0.6) ax4.grid() ax4.set_xlabel('x') ax4.set_ylabel('y') ax4.set_title('$x^4$') # Adjust margins plt.tight_layout() Explanation: Example: Block of four plots The default dimensions of a matplotlib figure are 6 inches by 4 inches. As we saw above, this leaves some whitespace on the right side of the figure. Suppose we want to remove that by making the plot area twice as wide. Create a new figure with four axes in a two-by-two grid. Plot the following functions on the interval -2 to 2: * $y = x$ * $y = x^2$ * $y = x^3$ * $y = x^4$ Leave the figure size at the default (6in. by 4in.) but run the command plt.tight_layout() to adust the figure's margins after creating your figure, axes, and plots. End of explanation # Create data x = np.arange(-6,6,0.001) y = np.sin(x) # Create a new figure, axis, and plot fig = plt.figure() ax1 = fig.add_subplot(1,1,1) ax1.plot(x,y,lw=3,alpha = 0.6) ax1.grid() # Save plt.savefig('fig_econ129_class04_sine.png',dpi=120) Explanation: Exporting figures to image files Use the plt.savefig() function to save figures to images. End of explanation
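As a small wrap-up, the growth path from the preliminary example can be drawn and exported with the figure/axes interface introduced in this class. A sketch, assuming only the numpy and pyplot imports above; the values y0=50, g=0.01, T=40 are taken from the first cell and the file name is arbitrary:
```
# Recompute the GDP path from the preliminary example and save the figure
T, y0, g = 40, 50, 0.01
gdp = y0*(1+g)**np.arange(T+1)

fig = plt.figure(figsize=(12,4))
ax1 = fig.add_subplot(1,1,1)
ax1.plot(np.arange(T+1), gdp, lw=3, alpha=0.6)
ax1.set_xlabel('year')
ax1.set_ylabel('GDP')
ax1.set_title('GDP with constant growth rate g = 0.01')
ax1.grid()
plt.savefig('fig_econ129_class04_gdp.png', dpi=120)
```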
5,298
Given the following text description, write Python code to implement the functionality described below step by step Description: 1. Tipos de fronteras en Clasificación Primero, se generan los conjuntos de datos con los que se analizarán las distintas fronteras y algoritmos a utilizar Step1: A continuación, se visualizan brevemente los datos Step2: A continuación se define una función para visualizar la frontera que divide a los datos dado un modelo entrenado con el objeto de rápidamente ver los bordes de decisión encontrados por los distintos algoritmos Step3: Linear Discriminant Analysis (LDA) A continuación se entrena un modelo usando LDA. Este algoritmo asume que la función de densidad de cada clase es gaussiana y que además existe una matriz de covarianza $\Sigma$ igual entre las clases. Step4: Con un error de clasificación de entrenamiento igual a Step5: Y con un error de clasificación de testing igual a Step6: Quadratic Discriminant Analysis (QDA) A continuación se entrena un modelo usando QDA. Este algoritmo, al igual que LDA, asume que la densidad de los datos siguen distribuciones gaussianas, buscando encontrar la frontera que maximice la distancia entre dos distribuciones. A diferencia de LDA, QDA no asume nada respecto a las matrices de covarianza de dichas distribuciones. Step7: Con un error de entrenamiento igual a Step8: Y un error de testing igual a Step9: QDA v/s LDA Diferencia cualitativa y teórica La principal diferencia entre QDA y LDA se puede visualizar en las secciones anteriores en donde se puede observar claramente que las fronteras calculadas por LDA son lineales, mientras que las de QDA y en honor a su nombre son cuadráticas. Diferencia cuantitativa A continuación se presentan los errores de clasificación para ambos modelos Step10: Se puede apreciar que, en este caso particular, es QDA el que tiene mejor rendimiento en comparación a LDA. Esto no sorprende debido a la naturaleza de la distribución de los datos que hacen a QDA un mejor estimador. Logistic Regression La regresión logística es una forma de utilizar el concepto regresión lineal para clasificar datos según atributos dados. Esta función hace uso de la función Logit para transformar outputs numéricos a labels. A continuación se define una función para graficar las fronteras seleccionadas por cada máquina permitiendo la interacción con distintos parámetros. Step11: Se entrena un modelo de regresión logística regularizado con la normal $l_2$ (Lasso) Step12: El parámetro $C$ es un parámetro de regularización que actua como el inverzo de la fuerza de regularización para la norma $l_2$ en donde valores cercanos a $1.0$ indican ausencia de regularización y valores cercanos a $0.0$ indican la máxima posible fuerza de regularización. Cuando hay alta regularización, aumenta el número de outliers que son ignorados durante el entrenamiento. SVM Lineal El método de SVM (Support Vector Machine) busca una frontera de decisión maximizando el margen entre las distintas clases o labels del conjunto de datos, procurando a la vez clasificar la mayor cantidad de datos en el conjunto de entrenamiento correctamente. 
Existe un hiperparámetro que mide el tradeoff entre la maximización del margen y el error de entrenamiento que es generalmente denotado por C Se construye el siguiente gráfico con la frontera seleccionada por una SVM de tipo lineal Step13: De manera teórica, el parametro $C$ en la SVM lineal controla el tradeoff entre maximizar el margen entre las dos clases y minimizar el error de clasificación del conjunto de entrenamiento (considerando una función objetivo dada por $CA+B$ y en dónde la constante está dada generalmente por $C = \frac{1}{\lambda}$). Si se utilizan valores de $C$ grandes, el modelo de SVM lineal intentará con más fuerza clasificar correctamente los puntos de entrenamiento, lo que disminuye la maximización del margen y lleva a posible overfitting. Se puede observar en el gráfico interactivo que para un valor de $C$ cercano a 0, se tiene una frontera que clasifica menos puntos de la clase $AZUL$, mientras que valores cercanos a 1 consideran mayor puntos de la clase $AZUL$. Esto es esperado considerando el significado del parámetro $C$. SVM No lineal En el método de SVM no lineal, se utilizan kernels que realizan transformaciones al espacio original de datos a nuevos espacios en donde sea posible tener fronteras linealmente separables. El ejemplo clásico es cuando la frontera es de tipo circular. En este caso, se necesita utilizar un método de kernel para llevar a un espacio lineal a los datos. En este caso, una transformación teórica podria ser $z = x^2 + y^2$ que transforma el espacio euclidiano original en un espacio de coordenadas poalres. Aquí, si es posible encontrar una frontera linealmente separable. A continuación se exploran las fronteras de decisión de distintos kernels (por tanto, supuestos de transformaciones distintos) sujetos a distintos valores del parámetro C. Estos kernels producen fronteras que no son lineales en el espacio original de los datos. Kernel de función radial base o Radial basis function (RBF) Step14: Se puede notar que con este kernel se pueden obtener bajos errores de clasificación debido a que la naturaleza de RBF permite envolver correctamente al conjunto sinusoidal de datos, generando una frontera ideal Kernel polinomial Step15: Este kernel no funciona tan bien en comparación a RBF debido a la naturaleza de la distribución de los datos, por lo que presenta un error de testing mayor Kernel sigmoidal Este kernel se basa en la función sigmoidal y asume dicha distribución de los datos Step16: Como es de esperarse, este kernel no se comporta lo suficientemente bien en comparación a RBF debido a que la forma de las fronteras no es sigmoidal. Además, requiere de altos valores de $C$ para lograr errores pequeños lo que posiblemente indica overfitting del conjunto de datos Árbol de decisión Los árboles de decisión dividen el espacio de los atributos en subconjuntos con el fin de utilizar la heurística de dividir y conquistar, que es ampliamente utilizada en la cienca de la computación. Cada subespacio en el árbol contendrá la clase más probable dentro de una misma sección. El hiperparámetro de este modelo corresponde al a profundidad del árbol que corresponde la número máximo de subcortes posibles del espacio original Step17: Se puede notar que es muy fácil que el árbol de decisión caiga en overfitting. 
En efecto, la profundidad óptima está entre 3 y 4, y valores mayores a estos manifiestan el efecto de overfitting Clasificador de K-vecinos más cercano Este modelo utiliza los k-vecinos más similares de cada dato de entrenamiento para calcular la frontera de decisión. Su parámetro corresponde al número de vecinos $K$. Cuando $K$ es muy pequeño, restringimos la región de comparación haciendo que nuestro clsificador sea más "ciego" a la distribución global de los datos. Este valor pequeño conducirá a overfitting. Cuando $K$ es grande, el clasificador se hace más resistente a outliers, pero se hará más propenso a underfitting
Python Code: # Generacion de los datos para analisis from sklearn.utils import check_random_state import matplotlib.patches as mpatches import numpy as np def build_data(seed, noise_seed=64, n_samples=500, noise=20): n_samples=500 mean = (0,-4) C = np.array([[0.3, 0.1], [0.1, 1.5]]) np.random.seed(seed) datos1 = np.random.multivariate_normal(mean, C, n_samples) outer_circ_x = np.cos(np.linspace(0, np.pi, n_samples))*3 outer_circ_y = np.sin(np.linspace(0, np.pi, n_samples))*3 datos2 = np.vstack((outer_circ_x,outer_circ_y)).T generator = check_random_state(noise_seed) datos2 += generator.normal(scale=0.3, size=datos2.shape) X = np.concatenate((datos1, datos2), axis=0) n = noise #ruido/noise y1 = np.zeros(datos1.shape[0]+n) y2 = np.ones(datos2.shape[0]-n) y = np.concatenate((y1,y2),axis=0) return (X, y) (X, y) = build_data(14) (Xtest, ytest) = build_data(8000, noise_seed=7000) Explanation: 1. Tipos de fronteras en Clasificación Primero, se generan los conjuntos de datos con los que se analizarán las distintas fronteras y algoritmos a utilizar End of explanation # Se grafican los datos obtenidos import matplotlib.pyplot as plt fig = plt.figure(figsize=(12,6)) plt.scatter(X[:, 0], X[:,1], s=50, c=y, cmap=plt.cm.winter) plt.title('Datos generados') plt.show() Explanation: A continuación, se visualizan brevemente los datos End of explanation import matplotlib.pyplot as plt from sklearn.metrics import accuracy_score def visualize_border(model,x,y, title="", x_test=None, y_test=None): fig = plt.figure(figsize=(12,6)) plt.scatter(x[:,0], x[:,1], s=50, c=y, cmap=plt.cm.winter) h = .02 # step size in the mesh x_min, x_max = x[:, 0].min() - 1, x[:, 0].max() + 1 y_min, y_max = x[:, 1].min() - 1, x[:, 1].max() + 1 xx, yy = np.meshgrid(np.arange(x_min, x_max, h),np.arange(y_min, y_max, h)) Z = model.predict(np.c_[xx.ravel(), yy.ravel()]) if x_test is not None and y_test is not None: y_train_pred = model.predict(x) y_test_pred = model.predict(x_test) train_error = (1-accuracy_score(y, y_train_pred)) test_error = (1-accuracy_score(y_test, y_test_pred)) red_patch = mpatches.Patch(color='red', label="Train ME: %f" % train_error) green_patch = mpatches.Patch(color='green', label="Test ME: %f" % test_error) plt.legend(handles=[red_patch, green_patch]) Z = Z.reshape(xx.shape) plt.contour(xx, yy, Z, cmap=plt.cm.Paired) plt.title(title) plt.show() Explanation: A continuación se define una función para visualizar la frontera que divide a los datos dado un modelo entrenado con el objeto de rápidamente ver los bordes de decisión encontrados por los distintos algoritmos End of explanation from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA model_LDA = LDA() model_LDA.fit(X,y) visualize_border(model_LDA,X,y,"LDAs") Explanation: Linear Discriminant Analysis (LDA) A continuación se entrena un modelo usando LDA. Este algoritmo asume que la función de densidad de cada clase es gaussiana y que además existe una matriz de covarianza $\Sigma$ igual entre las clases. End of explanation 1-model_LDA.score(X, y) Explanation: Con un error de clasificación de entrenamiento igual a: End of explanation 1-model_LDA.score(Xtest, ytest) Explanation: Y con un error de clasificación de testing igual a: End of explanation from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis as QDA model_QDA = QDA() model_QDA.fit(X,y) visualize_border(model_QDA,X,y,"QDA") Explanation: Quadratic Discriminant Analysis (QDA) A continuación se entrena un modelo usando QDA. 
Este algoritmo, al igual que LDA, asume que la densidad de los datos siguen distribuciones gaussianas, buscando encontrar la frontera que maximice la distancia entre dos distribuciones. A diferencia de LDA, QDA no asume nada respecto a las matrices de covarianza de dichas distribuciones. End of explanation 1-model_QDA.score(X, y) Explanation: Con un error de entrenamiento igual a: End of explanation (Xtest, ytest) = build_data(8000, noise_seed=7000) 1-model_LDA.score(Xtest, ytest) Explanation: Y un error de testing igual a: End of explanation from sklearn.metrics import accuracy_score y_pred_LDA = model_LDA.predict(X) y_pred_QDA = model_QDA.predict(X) print("Miss Classification Loss for LDA: %f"%(1-accuracy_score(y, y_pred_LDA))) print("Miss Classification Loss for QDA: %f"%(1-accuracy_score(y, y_pred_QDA))) Explanation: QDA v/s LDA Diferencia cualitativa y teórica La principal diferencia entre QDA y LDA se puede visualizar en las secciones anteriores en donde se puede observar claramente que las fronteras calculadas por LDA son lineales, mientras que las de QDA y en honor a su nombre son cuadráticas. Diferencia cuantitativa A continuación se presentan los errores de clasificación para ambos modelos End of explanation from ipywidgets import interactive def visualize_border_interactive(param): model = train_model(param) visualize_border(model,X,y,x_test=Xtest,y_test=ytest) Explanation: Se puede apreciar que, en este caso particular, es QDA el que tiene mejor rendimiento en comparación a LDA. Esto no sorprende debido a la naturaleza de la distribución de los datos que hacen a QDA un mejor estimador. Logistic Regression La regresión logística es una forma de utilizar el concepto regresión lineal para clasificar datos según atributos dados. Esta función hace uso de la función Logit para transformar outputs numéricos a labels. A continuación se define una función para graficar las fronteras seleccionadas por cada máquina permitiendo la interacción con distintos parámetros. End of explanation from sklearn.linear_model import LogisticRegression as LR def train_model(param): model=LR() #define your model model.set_params(C=param,penalty='l2') model.fit(X,y) return model p_min_lr = 0.001 p_max_lr = 1 interactive(visualize_border_interactive,param=(p_min_lr,p_max_lr, 0.001)) import numpy as np import matplotlib.pyplot as plt params = np.arange(0.001, 1.0, 0.001) train_errors = [] test_errors = [] for p in params: model= LR() model.set_params(C=p,penalty='l2') model.fit(X,y) y_train_pred = model.predict(X) y_test_pred = model.predict(Xtest) train_error = (1-accuracy_score(y, y_train_pred)) test_error = (1-accuracy_score(ytest, y_test_pred)) train_errors.append(train_error) test_errors.append(test_error) plt.figure(figsize=(10, 8)) plt.plot(params, train_errors, label="Train Error") plt.plot(params, test_errors, label="Test Error") plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.) 
plt.xlabel('Parámetro') plt.ylabel('Error') plt.show() Explanation: Se entrena un modelo de regresión logística regularizado con la normal $l_2$ (Lasso) End of explanation from sklearn.svm import SVC as SVM def train_model(param): model= SVM() model.set_params(C=param,kernel='linear') model.fit(X,y) return model p_min_svm_linear = 0.001 p_max_svm_linear = 1 interactive(visualize_border_interactive,param=(p_min_svm_linear,p_max_svm_linear, 0.001)) import numpy as np import matplotlib.pyplot as plt params = np.arange(0.001, 1.0, 0.03) train_errors = [] test_errors = [] for p in params: model= SVM() model.set_params(C=p,kernel='linear') model.fit(X,y) y_train_pred = model.predict(X) y_test_pred = model.predict(Xtest) train_error = (1-accuracy_score(y, y_train_pred)) test_error = (1-accuracy_score(ytest, y_test_pred)) train_errors.append(train_error) test_errors.append(test_error) plt.figure(figsize=(10, 8)) plt.plot(params, train_errors, label="Train Error") plt.plot(params, test_errors, label="Test Error") plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.) plt.xlabel('Parámetro') plt.ylabel('Error') plt.show() Explanation: El parámetro $C$ es un parámetro de regularización que actua como el inverzo de la fuerza de regularización para la norma $l_2$ en donde valores cercanos a $1.0$ indican ausencia de regularización y valores cercanos a $0.0$ indican la máxima posible fuerza de regularización. Cuando hay alta regularización, aumenta el número de outliers que son ignorados durante el entrenamiento. SVM Lineal El método de SVM (Support Vector Machine) busca una frontera de decisión maximizando el margen entre las distintas clases o labels del conjunto de datos, procurando a la vez clasificar la mayor cantidad de datos en el conjunto de entrenamiento correctamente. Existe un hiperparámetro que mide el tradeoff entre la maximización del margen y el error de entrenamiento que es generalmente denotado por C Se construye el siguiente gráfico con la frontera seleccionada por una SVM de tipo lineal End of explanation from sklearn.svm import SVC as SVM #SVC is for classification def train_model(param): model= SVM() model.set_params(C=param,kernel='rbf') model.fit(X,y) return model p_min_svm_rbf = 0.001 p_max_svm_rbf = 1 interactive(visualize_border_interactive,param=(p_min_svm_rbf,p_max_svm_rbf, 0.001)) import matplotlib.pyplot as plt params = np.arange(0.001, 1.0, 0.03) train_errors = [] test_errors = [] for p in params: model= SVM() model.set_params(C=p,kernel='rbf') model.fit(X,y) y_train_pred = model.predict(X) y_test_pred = model.predict(Xtest) train_error = (1-accuracy_score(y, y_train_pred)) test_error = (1-accuracy_score(ytest, y_test_pred)) train_errors.append(train_error) test_errors.append(test_error) plt.figure(figsize=(10, 8)) plt.plot(params, train_errors, label="Train Error") plt.plot(params, test_errors, label="Test Error") plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.) plt.xlabel('Parámetro') plt.ylabel('Error') plt.show() Explanation: De manera teórica, el parametro $C$ en la SVM lineal controla el tradeoff entre maximizar el margen entre las dos clases y minimizar el error de clasificación del conjunto de entrenamiento (considerando una función objetivo dada por $CA+B$ y en dónde la constante está dada generalmente por $C = \frac{1}{\lambda}$). Si se utilizan valores de $C$ grandes, el modelo de SVM lineal intentará con más fuerza clasificar correctamente los puntos de entrenamiento, lo que disminuye la maximización del margen y lleva a posible overfitting. 
Se puede observar en el gráfico interactivo que para un valor de $C$ cercano a 0, se tiene una frontera que clasifica menos puntos de la clase $AZUL$, mientras que valores cercanos a 1 consideran mayor puntos de la clase $AZUL$. Esto es esperado considerando el significado del parámetro $C$. SVM No lineal En el método de SVM no lineal, se utilizan kernels que realizan transformaciones al espacio original de datos a nuevos espacios en donde sea posible tener fronteras linealmente separables. El ejemplo clásico es cuando la frontera es de tipo circular. En este caso, se necesita utilizar un método de kernel para llevar a un espacio lineal a los datos. En este caso, una transformación teórica podria ser $z = x^2 + y^2$ que transforma el espacio euclidiano original en un espacio de coordenadas poalres. Aquí, si es posible encontrar una frontera linealmente separable. A continuación se exploran las fronteras de decisión de distintos kernels (por tanto, supuestos de transformaciones distintos) sujetos a distintos valores del parámetro C. Estos kernels producen fronteras que no son lineales en el espacio original de los datos. Kernel de función radial base o Radial basis function (RBF) End of explanation from sklearn.svm import SVC as SVM #SVC is for classification def train_model(param): model= SVM() model.set_params(C=param,kernel='poly') model.fit(X,y) return model p_min_svm_poly = 0.001 p_max_svm_poly = 1 interactive(visualize_border_interactive,param=(p_min_svm_poly,p_max_svm_poly, 0.001)) import matplotlib.pyplot as plt params = np.arange(0.001, 1.0, 0.03) train_errors = [] test_errors = [] for p in params: model= SVM() model.set_params(C=p,kernel='poly') model.fit(X,y) y_train_pred = model.predict(X) y_test_pred = model.predict(Xtest) train_error = (1-accuracy_score(y, y_train_pred)) test_error = (1-accuracy_score(ytest, y_test_pred)) train_errors.append(train_error) test_errors.append(test_error) plt.figure(figsize=(10, 8)) plt.plot(params, train_errors, label="Train Error") plt.plot(params, test_errors, label="Test Error") plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.) plt.xlabel('Parámetro') plt.ylabel('Error') plt.show() Explanation: Se puede notar que con este kernel se pueden obtener bajos errores de clasificación debido a que la naturaleza de RBF permite envolver correctamente al conjunto sinusoidal de datos, generando una frontera ideal Kernel polinomial End of explanation from sklearn.svm import SVC as SVM #SVC is for classification def train_model(param): model= SVM() model.set_params(C=param,kernel='sigmoid') model.fit(X,y) return model p_min_svm_sigmoid = 0.001 p_max_svm_sigmoid = 1 interactive(visualize_border_interactive,param=(p_min_svm_sigmoid,p_max_svm_sigmoid)) import matplotlib.pyplot as plt params = np.arange(0.001, 1.0, 0.03) train_errors = [] test_errors = [] for p in params: model= SVM() model.set_params(C=p,kernel='sigmoid') model.fit(X,y) y_train_pred = model.predict(X) y_test_pred = model.predict(Xtest) train_error = (1-accuracy_score(y, y_train_pred)) test_error = (1-accuracy_score(ytest, y_test_pred)) train_errors.append(train_error) test_errors.append(test_error) plt.figure(figsize=(10, 8)) plt.plot(params, train_errors, label="Train Error") plt.plot(params, test_errors, label="Test Error") plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.) 
plt.xlabel('Parámetro') plt.ylabel('Error') plt.show() Explanation: Este kernel no funciona tan bien en comparación a RBF debido a la naturaleza de la distribución de los datos, por lo que presenta un error de testing mayor Kernel sigmoidal Este kernel se basa en la función sigmoidal y asume dicha distribución de los datos End of explanation from sklearn.tree import DecisionTreeClassifier as Tree def train_model(param): model = Tree() #edit the train_model function model.set_params(max_depth=param,criterion='gini',splitter='best') model.fit(X,y) return model p_tree_min = 1 p_tree_max = 10 interactive(visualize_border_interactive,param=(p_tree_min,p_tree_max)) import matplotlib.pyplot as plt params = np.arange(1, 10, 1) train_errors = [] test_errors = [] for p in params: model= Tree() model.set_params(max_depth=p,criterion='gini',splitter='best') model.fit(X,y) y_train_pred = model.predict(X) y_test_pred = model.predict(Xtest) train_error = (1-accuracy_score(y, y_train_pred)) test_error = (1-accuracy_score(ytest, y_test_pred)) train_errors.append(train_error) test_errors.append(test_error) plt.figure(figsize=(10, 8)) plt.plot(params, train_errors, label="Train Error") plt.plot(params, test_errors, label="Test Error") plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.) plt.xlabel('Parámetro') plt.ylabel('Error') plt.show() Explanation: Como es de esperarse, este kernel no se comporta lo suficientemente bien en comparación a RBF debido a que la forma de las fronteras no es sigmoidal. Además, requiere de altos valores de $C$ para lograr errores pequeños lo que posiblemente indica overfitting del conjunto de datos Árbol de decisión Los árboles de decisión dividen el espacio de los atributos en subconjuntos con el fin de utilizar la heurística de dividir y conquistar, que es ampliamente utilizada en la cienca de la computación. Cada subespacio en el árbol contendrá la clase más probable dentro de una misma sección. El hiperparámetro de este modelo corresponde al a profundidad del árbol que corresponde la número máximo de subcortes posibles del espacio original End of explanation from sklearn.neighbors import KNeighborsClassifier def train_model(param): model = KNeighborsClassifier() model.set_params(n_neighbors=param) model.fit(X,y) return model p_tree_min = 1 p_tree_max = 50 interactive(visualize_border_interactive,param=(p_tree_min,p_tree_max)) import matplotlib.pyplot as plt params = np.arange(1, 50, 1) train_errors = [] test_errors = [] for p in params: model = KNeighborsClassifier() model.set_params(n_neighbors=p) model.fit(X,y) y_train_pred = model.predict(X) y_test_pred = model.predict(Xtest) train_error = (1-accuracy_score(y, y_train_pred)) test_error = (1-accuracy_score(ytest, y_test_pred)) train_errors.append(train_error) test_errors.append(test_error) plt.figure(figsize=(10, 8)) plt.plot(params, train_errors, label="Train Error") plt.plot(params, test_errors, label="Test Error") plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.) plt.xlabel('Parámetro') plt.ylabel('Error') plt.show() Explanation: Se puede notar que es muy fácil que el árbol de decisión caiga en overfitting. En efecto, la profundidad óptima está entre 3 y 4, y valores mayores a estos manifiestan el efecto de overfitting Clasificador de K-vecinos más cercano Este modelo utiliza los k-vecinos más similares de cada dato de entrenamiento para calcular la frontera de decisión. Su parámetro corresponde al número de vecinos $K$. 
Cuando $K$ es muy pequeño, restringimos la región de comparación haciendo que nuestro clasificador sea más "ciego" a la distribución global de los datos. Este valor pequeño conducirá a overfitting. Cuando $K$ es grande, el clasificador se hace más resistente a outliers, pero se hará más propenso a underfitting.
End of explanation
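Each section above repeats the same sweep-and-plot block by hand. One way the repetition could be factored out, assuming the arrays `X`, `y`, `Xtest`, `ytest` and the imports from the earlier cells are still in scope (`sweep_errors` and `model_factory` are names introduced here purely for illustration):
```
def sweep_errors(model_factory, params):
    # model_factory: callable returning an unfitted model for a given hyperparameter value
    train_errors, test_errors = [], []
    for p in params:
        model = model_factory(p)
        model.fit(X, y)
        train_errors.append(1 - accuracy_score(y, model.predict(X)))
        test_errors.append(1 - accuracy_score(ytest, model.predict(Xtest)))
    return train_errors, test_errors

# Example with the K-nearest-neighbours classifier from the last section
params = np.arange(1, 50, 1)
train_errors, test_errors = sweep_errors(
    lambda k: KNeighborsClassifier(n_neighbors=k), params)
```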
5,299
Given the following text description, write Python code to implement the functionality described below step by step Description: Zero-copy communication between C++ and Python Numpy arrays are just C arrays wrapped with metadata in Python. Thus, we can share data between C and Python without even copying. In general, this communication is not overwrite safe Step1: Using it in CMSSW CMSSW can be executed within a Python process, thanks to Chris's PR #17236. Since the configuration language is also in Python, you can build the configuration and start CMSSW in the same Python process. We can get our common block into CMSSW by passing its pointer as part of a ParameterSet. Since this is all one process, that pointer address is still valid when CMSSW launches. Step2: On the C++ side NumpyCommonBlock.h is a header-only library that defines the interface. We pick up the object by casting the pointer Step3: Demonstration In this demo, I loop over AlCaZMuMu muons and fill the arrays with track parameters (before and after adding muon hits to the fit) and display them as Pandas DataFrames as soon as they're full (before CMSSW finishes). The idea is that one would stream data from CMSSW into some Python thing in large blocks (1000 tracks/5000 hits at a time in this example). Bi-directional communication is possible, but I don't know what it could be used for.
Python Code: import numpy import commonblock tracks = commonblock.NumpyCommonBlock( trackermu_qoverp = numpy.zeros(1000, dtype=numpy.double), trackermu_qoverp_err = numpy.zeros(1000, dtype=numpy.double), trackermu_phi = numpy.zeros(1000, dtype=numpy.double), trackermu_eta = numpy.zeros(1000, dtype=numpy.double), trackermu_dxy = numpy.zeros(1000, dtype=numpy.double), trackermu_dz = numpy.zeros(1000, dtype=numpy.double), globalmu_qoverp = numpy.zeros(1000, dtype=numpy.double), globalmu_qoverp_err = numpy.zeros(1000, dtype=numpy.double)) hits = commonblock.NumpyCommonBlock( detid = numpy.zeros(5000, dtype=numpy.uint64), localx = numpy.zeros(5000, dtype=numpy.double), localy = numpy.zeros(5000, dtype=numpy.double), localx_err = numpy.zeros(5000, dtype=numpy.double), localy_err = numpy.zeros(5000, dtype=numpy.double)) Explanation: Zero-copy communication between C++ and Python Numpy arrays are just C arrays wrapped with metadata in Python. Thus, we can share data between C and Python without even copying. In general, this communication is not overwrite safe: Python protects against out-of-range indexes, but C/C++ does not; not type safe: no guarantee that C and Python will interpret bytes in memory the same way, including endianness; not thread safe: no protection at all against concurrent access. But without much overhead, we can wrap a shared array (or collection of arrays) in two APIs— one in C++, one in Python— to provide these protections. commonblock is a nascent library to do this. It passes array lengths and types from Python to C++ via ctypes and uses librt.so (wrapped by prwlock in Python) to implement locks that are usable on both sides. End of explanation import FWCore.ParameterSet.Config as cms process = cms.Process("Demo") process.load("FWCore.MessageService.MessageLogger_cfi") process.maxEvents = cms.untracked.PSet(input = cms.untracked.int32(1000)) process.source = cms.Source( "PoolSource", fileNames = cms.untracked.vstring("file:MuAlZMuMu-2016H-002590494DA0.root")) process.demo = cms.EDAnalyzer( "DemoAnalyzer", tracks = cms.uint64(tracks.pointer()), # pass the arrays to C++ as a pointer hits = cms.uint64(hits.pointer())) process.p = cms.Path(process.demo) Explanation: Using it in CMSSW CMSSW can be executed within a Python process, thanks to Chris's PR #17236. Since the configuration language is also in Python, you can build the configuration and start CMSSW in the same Python process. We can get our common block into CMSSW by passing its pointer as part of a ParameterSet. Since this is all one process, that pointer address is still valid when CMSSW launches. End of explanation import threading import libFWCorePythonFramework import libFWCorePythonParameterSet class CMSSWThread(threading.Thread): def __init__(self, process): super(CMSSWThread, self).__init__() self.process = process def run(self): processDesc = libFWCorePythonParameterSet.ProcessDesc() self.process.fillProcessDesc(processDesc.pset()) cppProcessor = libFWCorePythonFramework.PythonEventProcessor(processDesc) cppProcessor.run() Explanation: On the C++ side NumpyCommonBlock.h is a header-only library that defines the interface. We pick up the object by casting the pointer: tracksBlock = (NumpyCommonBlock*)iConfig.getParameter&lt;unsigned long long&gt;("tracks"); hitsBlock = (NumpyCommonBlock*)iConfig.getParameter&lt;unsigned long long&gt;("hits"); and then get safe accessors to each array with a templated method that checks C++'s compiled type against Python's runtime type. 
``` trackermu_qoverp = tracksBlock->newAccessor<double>("trackermu_qoverp"); trackermu_qoverp_err = tracksBlock->newAccessor<double>("trackermu_qoverp_err"); trackermu_phi = tracksBlock->newAccessor<double>("trackermu_phi"); trackermu_eta = tracksBlock->newAccessor<double>("trackermu_eta"); trackermu_dxy = tracksBlock->newAccessor<double>("trackermu_dxy"); trackermu_dz = tracksBlock->newAccessor<double>("trackermu_dz"); globalmu_qoverp = tracksBlock->newAccessor<double>("globalmu_qoverp"); globalmu_qoverp_err = tracksBlock->newAccessor<double>("globalmu_qoverp_err"); detid = hitsBlock->newAccessor<uint64_t>("detid"); localx = hitsBlock->newAccessor<double>("localx"); localy = hitsBlock->newAccessor<double>("localy"); localx_err = hitsBlock->newAccessor<double>("localx_err"); localy_err = hitsBlock->newAccessor<double>("localy_err"); ``` Running CMSSW Chris's PythonEventProcessor.run() method blocks, so I put it in a thread to let CMSSW and Python run at the same time. I had to release the GIL with PR #18683 to make this work, and that feature will work its way into releases eventually. End of explanation cmsswThread = CMSSWThread(process) cmsswThread.start() tracks.wait(1) # CMSSW notifies that it has filled the tracks array tracks.pandas() hits.pandas() %matplotlib inline tracks.pandas().plot.hist() df = hits.pandas() df[numpy.abs(df.localy) > 0].plot.hexbin(x="localx", y="localy", gridsize=25) Explanation: Demonstration In this demo, I loop over AlCaZMuMu muons and fill the arrays with track parameters (before and after adding muon hits to the fit) and display them as Pandas DataFrames as soon as they're full (before CMSSW finishes). The idea is that one would stream data from CMSSW into some Python thing in large blocks (1000 tracks/5000 hits at a time in this example). Bi-directional communication is possible, but I don't know what it could be used for. End of explanation
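The NumpyCommonBlock internals are not shown in this notebook, but the zero-copy mechanism it relies on can be illustrated with plain numpy and ctypes: once the address of the array's C buffer is shared, writes made through that address are immediately visible to the numpy array, with no copy in between.
```
# Generic illustration of buffer sharing via a raw pointer (not commonblock itself)
import numpy
import ctypes

a = numpy.zeros(5, dtype=numpy.double)
address = a.ctypes.data                            # raw address of the C buffer
view = (ctypes.c_double * len(a)).from_address(address)
view[0] = 42.0                                     # write through the "C side" view
print(a[0])                                        # prints 42.0: same memory, no copy
```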