Given the following text description, write Python code to implement the functionality described below step by step Description: CrowdTruth for Recognizing Textual Entailment Annotation This analysis uses the data gathered in the "Recognizing Textual Entailment" crowdsourcing experiment published in Rion Snow, Brendan O’Connor, Dan Jurafsky, and Andrew Y. Ng Step1: Declaring a pre-processing configuration The pre-processing configuration defines how to interpret the raw crowdsourcing input. To do this, we need to define a configuration class. First, we import the default CrowdTruth configuration class Step2: Our test class inherits the default configuration DefaultConfig, while also declaring some additional attributes that are specific to the Recognizing Textual Entailment task Step3: Pre-processing the input data After declaring the configuration of our input file, we are ready to pre-process the crowd data Step4: Computing the CrowdTruth metrics The pre-processed data can then be used to calculate the CrowdTruth metrics Step5: results is a dict object that contains the quality metrics for the sentences, annotations and crowd workers. The sentence metrics are stored in results["units"] Step6: The uqs column in results["units"] contains the sentence quality scores, capturing the overall workers agreement over each sentences. Here we plot its histogram Step7: Plot the change in unit qualtity score at the beginning of the process and at the end Step8: The unit_annotation_score column in results["units"] contains the sentence-annotation scores, capturing the likelihood that an annotation is expressed in a sentence. For each sentence, we store a dictionary mapping each annotation to its sentence-relation score. Step9: Save unit metrics Step10: The worker metrics are stored in results["workers"] Step11: The wqs columns in results["workers"] contains the worker quality scores, capturing the overall agreement between one worker and all the other workers. Step12: Save the worker metrics Step13: The annotation metrics are stored in results["annotations"]. The aqs column contains the annotation quality scores, capturing the overall worker agreement over one relation. Step14: Example of a very clear unit Step15: Example of an unclear unit Step16: MACE for Recognizing Textual Entailment Annotation We first pre-processed the crowd results to create compatible files for running the MACE tool. Each row in a csv file should point to a unit in the dataset and each column in the csv file should point to a worker. The content of the csv file captures the worker answer for that particular unit (or remains empty if the worker did not annotate that unit). Step17: CrowdTruth vs. MACE on Worker Quality Step18: CrowdTruth vs. MACE vs. Majority Vote on Annotation Performance
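As a toy illustration of the text/hypothesis pairs described above, here is the example quoted from the publication expressed as a small record (the field names are illustrative only, not the actual column names of the processed file):

```python
# Toy record mirroring the example pair quoted above; field names are illustrative only.
example_pair = {
    "text": "Crude Oil Prices Slump",
    "hypothesis": "Oil prices drop",
    "entailed": True,
}
```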
Python Code: import pandas as pd test_data = pd.read_csv("../data/rte.standardized.csv") test_data.head() Explanation: CrowdTruth for Recognizing Textual Entailment Annotation This analysis uses the data gathered in the "Recognizing Textual Entailment" crowdsourcing experiment published in Rion Snow, Brendan O’Connor, Dan Jurafsky, and Andrew Y. Ng: Cheap and fast—but is it good? Evaluating non-expert annotations for natural language tasks. EMNLP 2008, pages 254–263. Task Description: Given two sentences, the crowd has to choose whether the second hypothesis sentence can be inferred from the first sentence (binary choice, true/false). Following, we provide an example from the aforementioned publication: Text: “Crude Oil Prices Slump” Hypothesis: “Oil prices drop” A screenshot of the task as it appeared to workers can be seen at the following repository. The dataset for this task was downloaded from the following repository, which contains the raw output from the crowd on AMT. Currently, you can find the processed input file in the folder named data. Besides the raw crowd annotations, the processed file also contains the text and the hypothesis that needs to be tested with the given text, which were given as input to the crowd. End of explanation import crowdtruth from crowdtruth.configuration import DefaultConfig Explanation: Declaring a pre-processing configuration The pre-processing configuration defines how to interpret the raw crowdsourcing input. To do this, we need to define a configuration class. First, we import the default CrowdTruth configuration class: End of explanation class TestConfig(DefaultConfig): inputColumns = ["gold", "task", "text", "hypothesis"] outputColumns = ["response"] customPlatformColumns = ["!amt_annotation_ids", "orig_id", "!amt_worker_ids", "start", "end"] # processing of a closed task open_ended_task = False annotation_vector = ["relevant", "not_relevant"] def processJudgments(self, judgments): # pre-process output to match the values in annotation_vector for col in self.outputColumns: # transform to lowercase judgments[col] = judgments[col].apply(lambda x: str(x).lower()) return judgments Explanation: Our test class inherits the default configuration DefaultConfig, while also declaring some additional attributes that are specific to the Recognizing Textual Entailment task: inputColumns: list of input columns from the .csv file with the input data outputColumns: list of output columns from the .csv file with the answers from the workers customPlatformColumns: a list of columns from the .csv file that defines a standard annotation tasks, in the following order - judgment id, unit id, worker id, started time, submitted time. This variable is used for input files that do not come from AMT or FigureEight (formarly known as CrowdFlower). annotation_separator: string that separates between the crowd annotations in outputColumns open_ended_task: boolean variable defining whether the task is open-ended (i.e. 
the possible crowd annotations are not known beforehand, like in the case of free text input); in the task that we are processing, workers pick the answers from a pre-defined list, therefore the task is not open ended, and this variable is set to False annotation_vector: list of possible crowd answers, mandatory to declare when open_ended_task is False; for our task, this is the list of relations processJudgments: method that defines processing of the raw crowd data; for this task, we process the crowd answers to correspond to the values in annotation_vector The complete configuration class is declared below: End of explanation data, config = crowdtruth.load( file = "../data/rte.standardized.csv", config = TestConfig() ) data['judgments'].head() Explanation: Pre-processing the input data After declaring the configuration of our input file, we are ready to pre-process the crowd data: End of explanation results = crowdtruth.run(data, config) Explanation: Computing the CrowdTruth metrics The pre-processed data can then be used to calculate the CrowdTruth metrics: End of explanation results["units"].head() Explanation: results is a dict object that contains the quality metrics for the sentences, annotations and crowd workers. The sentence metrics are stored in results["units"]: End of explanation import matplotlib.pyplot as plt %matplotlib inline plt.rcParams['figure.figsize'] = 15, 5 plt.subplot(1, 2, 1) plt.hist(results["units"]["uqs"]) plt.ylim(0,270) plt.xlabel("Sentence Quality Score") plt.ylabel("#Sentences") plt.subplot(1, 2, 2) plt.hist(results["units"]["uqs_initial"]) plt.ylim(0,270) plt.xlabel("Sentence Quality Score Initial") plt.ylabel("# Units") Explanation: The uqs column in results["units"] contains the sentence quality scores, capturing the overall workers agreement over each sentences. Here we plot its histogram: End of explanation import numpy as np sortUQS = results["units"].sort_values(['uqs'], ascending=[1]) sortUQS = sortUQS.reset_index() plt.rcParams['figure.figsize'] = 15, 5 plt.plot(np.arange(sortUQS.shape[0]), sortUQS["uqs_initial"], 'ro', lw = 1, label = "Initial UQS") plt.plot(np.arange(sortUQS.shape[0]), sortUQS["uqs"], 'go', lw = 1, label = "Final UQS") plt.ylabel('Sentence Quality Score') plt.xlabel('Sentence Index') Explanation: Plot the change in unit qualtity score at the beginning of the process and at the end End of explanation results["units"]["unit_annotation_score"].head() Explanation: The unit_annotation_score column in results["units"] contains the sentence-annotation scores, capturing the likelihood that an annotation is expressed in a sentence. For each sentence, we store a dictionary mapping each annotation to its sentence-relation score. 
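As a purely illustrative example (made-up numbers, not values taken from the results above), one unit's entry could look like this:

```python
# Hypothetical sentence-annotation scores for a single unit; the real values come from results["units"].
example_unit_annotation_score = {"relevant": 0.87, "not_relevant": 0.13}
```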
End of explanation rows = [] header = ["orig_id", "gold", "hypothesis", "text", "uqs", "uqs_initial", "true", "false", "true_initial", "false_initial"] units = results["units"].reset_index() for i in range(len(units.index)): row = [units["unit"].iloc[i], units["input.gold"].iloc[i], units["input.hypothesis"].iloc[i], \ units["input.text"].iloc[i], units["uqs"].iloc[i], units["uqs_initial"].iloc[i], \ units["unit_annotation_score"].iloc[i]["relevant"], units["unit_annotation_score"].iloc[i]["not_relevant"], \ units["unit_annotation_score_initial"].iloc[i]["relevant"], units["unit_annotation_score_initial"].iloc[i]["not_relevant"]] rows.append(row) rows = pd.DataFrame(rows, columns=header) rows.to_csv("../data/results/crowdtruth_units_rte.csv", index=False) Explanation: Save unit metrics: End of explanation results["workers"].head() Explanation: The worker metrics are stored in results["workers"]: End of explanation plt.rcParams['figure.figsize'] = 15, 5 plt.subplot(1, 2, 1) plt.hist(results["workers"]["wqs"]) plt.ylim(0,55) plt.xlabel("Worker Quality Score") plt.ylabel("#Workers") plt.subplot(1, 2, 2) plt.hist(results["workers"]["wqs_initial"]) plt.ylim(0,55) plt.xlabel("Worker Quality Score Initial") plt.ylabel("#Workers") Explanation: The wqs columns in results["workers"] contains the worker quality scores, capturing the overall agreement between one worker and all the other workers. End of explanation results["workers"].to_csv("../data/results/crowdtruth_workers_rte.csv", index=True) Explanation: Save the worker metrics: End of explanation results["annotations"] sortedUQS = results["units"].sort_values(["uqs"]) Explanation: The annotation metrics are stored in results["annotations"]. The aqs column contains the annotation quality scores, capturing the overall worker agreement over one relation. End of explanation sortedUQS.tail(1) print("Hypothesis: %s" % sortedUQS["input.hypothesis"].iloc[len(sortedUQS.index)-1]) print("Text: %s" % sortedUQS["input.text"].iloc[len(sortedUQS.index)-1]) print("Expert Answer: %s" % sortedUQS["input.gold"].iloc[len(sortedUQS.index)-1]) print("Crowd Answer with CrowdTruth: %s" % sortedUQS["unit_annotation_score"].iloc[len(sortedUQS.index)-1]) print("Crowd Answer without CrowdTruth: %s" % sortedUQS["unit_annotation_score_initial"].iloc[len(sortedUQS.index)-1]) Explanation: Example of a very clear unit End of explanation sortedUQS.head(1) print("Hypothesis: %s" % sortedUQS["input.hypothesis"].iloc[0]) print("Text: %s" % sortedUQS["input.text"].iloc[0]) print("Expert Answer: %s" % sortedUQS["input.gold"].iloc[0]) print("Crowd Answer with CrowdTruth: %s" % sortedUQS["unit_annotation_score"].iloc[0]) print("Crowd Answer without CrowdTruth: %s" % sortedUQS["unit_annotation_score_initial"].iloc[0]) Explanation: Example of an unclear unit End of explanation import numpy as np test_data = pd.read_csv("../data/mace_rte.standardized.csv", header=None) test_data = test_data.replace(np.nan, '', regex=True) test_data.head() import pandas as pd mace_data = pd.read_csv("../data/results/mace_units_rte.csv") mace_data.head() mace_workers = pd.read_csv("../data/results/mace_workers_rte.csv") mace_workers.head() Explanation: MACE for Recognizing Textual Entailment Annotation We first pre-processed the crowd results to create compatible files for running the MACE tool. Each row in a csv file should point to a unit in the dataset and each column in the csv file should point to a worker. 
The content of the csv file captures the worker answer for that particular unit (or remains empty if the worker did not annotate that unit). End of explanation mace_workers = pd.read_csv("../data/results/mace_workers_rte.csv") crowdtruth_workers = pd.read_csv("../data/results/crowdtruth_workers_rte.csv") mace_workers = mace_workers.sort_values(["worker"]) crowdtruth_workers = crowdtruth_workers.sort_values(["worker"]) %matplotlib inline import matplotlib import matplotlib.pyplot as plt plt.scatter( mace_workers["competence"], crowdtruth_workers["wqs"], ) plt.title("Worker Quality Score") plt.xlabel("MACE") plt.ylabel("CrowdTruth") sortWQS = crowdtruth_workers.sort_values(['wqs'], ascending=[1]) sortWQS = sortWQS.reset_index() worker_ids = list(sortWQS["worker"]) mace_workers = mace_workers.set_index('worker') mace_workers.loc[worker_ids] plt.rcParams['figure.figsize'] = 15, 5 plt.plot(np.arange(sortWQS.shape[0]), sortWQS["wqs"], 'bo', lw = 1, label = "CrowdTruth Worker Score") plt.plot(np.arange(mace_workers.shape[0]), mace_workers["competence"], 'go', lw = 1, label = "MACE Worker Score") plt.ylabel('Worker Quality Score') plt.xlabel('Worker Index') plt.legend() Explanation: CrowdTruth vs. MACE on Worker Quality End of explanation import pandas as pd import numpy as np majvote = pd.read_csv("../data/results/majorityvote_units_rte.csv") mace = pd.read_csv("../data/results/mace_units_rte.csv") crowdtruth = pd.read_csv("../data/results/crowdtruth_units_rte.csv") def compute_F1_score(dataset): nyt_f1 = np.zeros(shape=(100, 2)) for idx in xrange(0, 100): thresh = (idx + 1) / 100.0 tp = 0 fp = 0 tn = 0 fn = 0 for gt_idx in range(0, len(dataset.index)): if dataset['true'].iloc[gt_idx] >= thresh: if dataset['gold'].iloc[gt_idx] == 1: tp = tp + 1.0 else: fp = fp + 1.0 else: if dataset['gold'].iloc[gt_idx] == 1: fn = fn + 1.0 else: tn = tn + 1.0 nyt_f1[idx, 0] = thresh if tp != 0: nyt_f1[idx, 1] = 2.0 * tp / (2.0 * tp + fp + fn) else: nyt_f1[idx, 1] = 0 return nyt_f1 def compute_majority_vote(dataset, crowd_column): tp = 0 fp = 0 tn = 0 fn = 0 for j in range(len(dataset.index)): if dataset['true_initial'].iloc[gt_idx] >= 0.5: if dataset['gold'].iloc[gt_idx] == 1: tp = tp + 1.0 else: fp = fp + 1.0 else: if dataset['gold'].iloc[gt_idx] == 1: fn = fn + 1.0 else: tn = tn + 1.0 return 2.0 * tp / (2.0 * tp + fp + fn) F1_crowdtruth = compute_F1_score(crowdtruth) print(F1_crowdtruth[F1_crowdtruth[:,1].argsort()][-10:]) F1_mace = compute_F1_score(mace) print(F1_mace[F1_mace[:,1].argsort()][-10:]) F1_majority_vote = compute_majority_vote(majvote, 'value') F1_majority_vote Explanation: CrowdTruth vs. MACE vs. Majority Vote on Annotation Performance End of explanation
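The hand-rolled threshold sweep in the evaluation code above can also be cross-checked with scikit-learn's f1_score. A minimal sketch, assuming the same gold and true columns defined earlier (this helper is not part of the original notebook):

```python
import numpy as np
from sklearn.metrics import f1_score

def f1_by_threshold(dataset, thresholds=np.arange(0.01, 1.01, 0.01)):
    # Binarize the crowd "true" score at each threshold and score it against the expert labels.
    gold = (dataset["gold"] == 1)
    return [(t, f1_score(gold, dataset["true"] >= t)) for t in thresholds]
```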
Given the following text description, write Python code to implement the functionality described below step by step Description: VHSE-Based Prediction of Proteasomal Cleavage Sites Xie J, Xu Z, Zhou S, Pan X, Cai S, Yang L, et al. (2013) The VHSE-Based Prediction of Proteasomal Cleavage Sites. PLoS ONE 8(9) Step1: The principal component score Vector of Hydrophobic, Steric, and Electronic properties (VHSE) is a set of amino acid descriptors that come from A new set of amino acid descriptors and its application in peptide QSARs VHSE1 and VHSE2 are related to hydrophobic (H) properties, VHSE3 and VHSE4 to steric (S) properties, and VHSE5 to VHSE8 to electronic (E) properties. Step2: There were eight dataset used in this study. The reference datasets (s1, s3, s5, s7) were converted into the actual datasets used in the analysis (s2, s4, s6, s8) using the vhse vector. The s2 and s4 datasets were used for training the SVM model and the s6 and s8 were used for testing. Step3: Creating the In Vivo Data To create the in vivo training set, the authors Queried the AntiJen database (7,324 MHC-I ligands) Removed ligands with unknown source protein in ExPASy/SWISS-PROT (6036 MHC-I ligands) Removed duplicate ligands (3,148 ligands) Removed the 231 ligands used for test samples by Saxova et al, (2,917 ligands) Removed sequences less than 28 residues (2,607 ligands) to create the cleavage sample set Assigned non-cleavage sites, removed sequences with less than 28 resides (2,480 ligands) to create the non-cleavage sample set This process created 5,087 training samples Step4: Creating the Linear SVM Model The authors measured linear, polynomial, radial basis, and sigmoid kernel and found no significant difference in performance. The linear kernel was chosen for its simplicity and interpretability. The authors did not provide the C value used in their linear model, so I used GridSearchCV to find the best value. Step5: Testing In Vivo SVM Model Step6: Comparing Linear SVM to PAProC, FragPredict, and NetChop Interpreting Model Weights <img src="images/journal.pone.0074506.g002.png" align="left" border="0"/> The VHSE1 variable at the P1 position has the largest positive weight coefficient (10.49) in line with research showing that C-terminal residues are usually hydrophobic to aid in ER transfer and binding to the MHC molecule. There is a mostly positive and mostly negative coefficents upstream and downstream of the cleavage site respectively. This potential difference appears to be conducive to cleavage. Step7: PCA vs. full matrices Principal component analysis (PCA) is a statistical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called principal components Source. Mei et al, creators of the VHSE, applied PCA on 18 hydophobic, 17 steric, and 15 electronic properties. The first 2, 2, and 4 principle components account for 74.33, 78.68, and 77.9% of variability in original matrices. The authors of this paper only used the first principle component from the hydrophobic, steric, and electronic matrices. What performance would the authors have found if used the full matrices instead of PCA features? | Matrix | Features | Sensitivity | Specificity | MCC | |--------|----------|-------------|-------------|------| | VHSE | 3x20=60 | 82.2 | 63.2 | 0.46 | | Full | 50x20=1000 | 81.2 | 64.1 | 0.46 |
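Before the code, a toy illustration of the descriptor encoding described above, using VHSE1 (the first hydrophobic component) values copied from the table that is defined later in this notebook:

```python
# VHSE1 values for three residues, copied from the full table below; this only
# sketches how a peptide fragment turns into a numeric feature vector.
vhse1 = {"A": 0.15, "R": -1.47, "N": -0.99}
fragment = "ARN"
features = [vhse1[aa] for aa in fragment]
print(features)  # [0.15, -1.47, -0.99]
```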
Python Code: import pandas as pd import numpy as np import matplotlib.pyplot as plt from sklearn import svm, metrics from sklearn.preprocessing import MinMaxScaler Explanation: VHSE-Based Prediction of Proteasomal Cleavage Sites Xie J, Xu Z, Zhou S, Pan X, Cai S, Yang L, et al. (2013) The VHSE-Based Prediction of Proteasomal Cleavage Sites. PLoS ONE 8(9): e74506. doi:10.1371/journal.pone.0074506 Abstract: "Prediction of proteasomal cleavage sites has been a focus of computational biology. Up to date, the predictive methods are mostly based on nonlinear classifiers and variables with little physicochemical meanings. In this paper, the physicochemical properties of 14 residues both upstream and downstream of a cleavage site are characterized by VHSE (principal component score vector of hydrophobic, steric, and electronic properties) descriptors. Then, the resulting VHSE descriptors are employed to construct prediction models by support vector machine (SVM). For both in vivo and in vitro datasets, the performance of VHSE-based method is comparatively better than that of the well-known PAProC, MAPPP, and NetChop methods. The results reveal that the hydrophobic property of 10 residues both upstream and downstream of the cleavage site is a dominant factor affecting in vivo and in vitro cleavage specificities, followed by residue’s electronic and steric properties. Furthermore, the difference in hydrophobic potential between residues flanking the cleavage site is proposed to favor substrate cleavages. Overall, the interpretable VHSE-based method provides a preferable way to predict proteasomal cleavage sites." Notes: Databases used in this study to create training and test sets Immune Epitope Database and Analysis Resource NCBI's reference sequence (RefSeq) database AntiJen - a kinetic, thermodynamic and cellular database v2.0 ExPASy/SWISS-PROT Peptide Formation Image by GYassineMrabetTalk. (Own work) [Public domain], <a href="https://commons.wikimedia.org/wiki/File%3APeptidformationball.svg">via Wikimedia Commons</a> Hydrophobic, Steric, and Electronic Properties <img src="images/Amino_Acids.png" align="left" border="0" height="500" width="406" alt="Amino Acids"/> Amino acids grouped by electrically charged, polar uncharged, hydrophobic, and special case sidechains. Each amino acid has a single letter designation. Image by Dancojocari [<a href="http://creativecommons.org/licenses/by-sa/3.0">CC BY-SA 3.0</a> or <a href="http://www.gnu.org/copyleft/fdl.html">GFDL</a>], <a href="https://commons.wikimedia.org/wiki/File%3AAmino_Acids.svg">via Wikimedia Commons</a> Protein Representation (FASTA) Bradykinin is an inflammatory mediator. It is a peptide that causes blood vessels to dilate (enlarge), and therefore causes blood pressure to fall. A class of drugs called ACE inhibitors, which are used to lower blood pressure, increase bradykinin (by inhibiting its degradation) further lowering blood pressure. 
``` sp|P01042|KNG1_HUMAN Kininogen-1 OS=Homo sapiens GN=KNG1 PE=1 SV=2 MKLITILFLCSRLLLSLTQESQSEEIDCNDKDLFKAVDAALKKYNSQNQSNNQFVLYRIT EATKTVGSDTFYSFKYEIKEGDCPVQSGKTWQDCEYKDAAKAATGECTATVGKRSSTKFS VATQTCQITPAEGPVVTAQYDCLGCVHPISTQSPDLEPILRHGIQYFNNNTQHSSLFMLN EVKRAQRQVVAGLNFRITYSIVQTNCSKENFLFLTPDCKSLWNGDTGECTDNAYIDIQLR IASFSQNCDIYPGKDFVQPPTKICVGCPRDIPTNSPELEETLTHTITKLNAENNATFYFK IDNVKKARVQVVAGKKYFIDFVARETTCSKESNEELTESCETKKLGQSLDCNAEVYVVPW EKKIYPTVNCQPLGMISLMKRPPGFSPFRSSRIGEIKEETTVSPPHTSMAPAQDEERDSG KEQGHTRRHDWGHEKQRKHNLGHGHKHERDQGHGHQRGHGLGHGHEQQHGLGHGHKFKLD DDLEHQGGHVLDHGHKHKHGHGHGKHKNKGKKNGKHNGWKTEHLASSSEDSTTPSAQTQE KTEGPTPIPSLAKPGVTVTFSDFQDSDLIATMMPPISPAPIQSDDDWIPDIQIDPNGLSF NPISDFPDTTSPKCPGRPWKSVSEINPTTQMKESYYFDLTDGLS ``` Bradykinin Structure By Yikrazuul (Own work) [Public domain], <a href="https://commons.wikimedia.org/wiki/File%3ABradykinin_structure.svg">via Wikimedia Commons</a> MHC Class I Processing <img src="images/MHC_Class_I_processing.png" align="right" border="0"/> The proteasome digests polypeptides into smaller peptides 5–25 amino acids in length and is the major protease responsible for generating peptide C termini. Transporter associated with Antigen Processing (TAP) binds to peptides of length 9-20 amino acids and transports them into the endoplasmic reticulum (ER). Image by <a href="//commons.wikimedia.org/wiki/User:Scray" title="User:Scray">Scray</a> - <span class="int-own-work" lang="en">Own work</span>, <a href="http://creativecommons.org/licenses/by-sa/3.0" title="Creative Commons Attribution-Share Alike 3.0">CC BY-SA 3.0</a>, <a href="https://commons.wikimedia.org/w/index.php?curid=6251017">Link</a> Text from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2913210/ Cytotoxic T lymphocytes (CTLs) are the effector cells of the adaptive immune response that deal with infected, or malfunctioning, cells. Whereas intracellular pathogens are shielded from antibodies, CTLs are endowed with the ability to recognize and destroy cells harbouring intracellular threats. This obviously requires that information on the intracellular protein metabolism (including that of any intracellular pathogen) be translocated to the outside of the cell, where the CTL reside. To this end, the immune system has created an elaborate system of antigen processing and presentation. During the initial phase of antigen processing, peptide antigens are generated from intracellular pathogens and translocated into the endoplasmic reticulum. In here, these peptide antigens are specifically sampled by major histocompatibility complex (MHC) class I molecules and then exported to the cell surface, where they are presented as stable peptide: MHC I complexes awaiting the arrival of scrutinizing T cells. Hence, identifying which peptides are able to induce CTLs is of general interest for our understanding of the immune system, and of particular interest for the development of vaccines and immunotherapy directed against infectious pathogens, as previously reviewed. Peptide binding to MHC molecules is the key feature in cell-mediated immunity, because it is the peptide–MHC class I complex that can be recognized by the T-cell receptor (TCR) and thereby initiate the immune response. The CTLs are CD8+ T cells, whose TCRs recognize foreign peptides in complex with MHC class I molecules. In addition to peptide binding to MHC molecules, several other events have to be considered to be able to explain why a given peptide is eventually presented at the cell surface. 
Generally, an immunogenic peptide is generated from proteins expressed within the presenting cell, and peptides originating from proteins with high expression rate will normally have a higher chance of being immunogenic, compared with peptides from proteins with a lower expression rate. There are, however, significant exceptions to this generalization, e.g. cross-presentation, but this will be ignored in the following. In the classical MHC class I presenting pathway (see image on right) proteins expressed within a cell will be degraded in the cytosol by the protease complex, named the proteasome. The proteasome digests polypeptides into smaller peptides 5–25 amino acids in length and is the major protease responsible for generating peptide C termini. Some of the peptides that survive further degradation by other cytosolic exopeptidases can be bound by the transporter associated with antigen presentation (TAP), reviewed by Schölz et al. This transporter molecule binds peptides of lengths 9–20 amino acids and transports the peptides into the endoplasmic reticulum, where partially folded MHC molecules [in humans called human leucocyte antigens (HLA)], will complete folding if the peptide is able to bind to the particular allelic MHC molecule. The latter step is furthermore facilitated by the endoplasmic-reticulum-hosted protein tapasin. Each of these steps has been characterized and their individual importance has been related to final presentation on the cell surface. End of explanation # (3-letter, VHSE1, VHSE2, VHSE3, VHSE4, VHSE5, VHSE6, VHSE7, VHSE8) vhse = { "A": ("Ala", 0.15, -1.11, -1.35, -0.92, 0.02, -0.91, 0.36, -0.48), "R": ("Arg", -1.47, 1.45, 1.24, 1.27, 1.55, 1.47, 1.30, 0.83), "N": ("Asn", -0.99, 0.00, -0.37, 0.69, -0.55, 0.85, 0.73, -0.80), "D": ("Asp", -1.15, 0.67, -0.41, -0.01, -2.68, 1.31, 0.03, 0.56), "C": ("Cys", 0.18, -1.67, -0.46, -0.21, 0.00, 1.20, -1.61, -0.19), "Q": ("Gln", -0.96, 0.12, 0.18, 0.16, 0.09, 0.42, -0.20, -0.41), "E": ("Glu", -1.18, 0.40, 0.10, 0.36, -2.16, -0.17, 0.91, 0.02), "G": ("Gly", -0.20, -1.53, -2.63, 2.28, -0.53, -1.18, 2.01, -1.34), "H": ("His", -0.43, -0.25, 0.37, 0.19, 0.51, 1.28, 0.93, 0.65), "I": ("Ile", 1.27, -0.14, 0.30, -1.80, 0.30, -1.61, -0.16, -0.13), "L": ("Leu", 1.36, 0.07, 0.26, -0.80, 0.22, -1.37, 0.08, -0.62), "K": ("Lys", -1.17, 0.70, 0.70, 0.80, 1.64, 0.67, 1.63, 0.13), "M": ("Met", 1.01, -0.53, 0.43, 0.00, 0.23, 0.10, -0.86, -0.68), "F": ("Phe", 1.52, 0.61, 0.96, -0.16, 0.25, 0.28, -1.33, -0.20), "P": ("Pro", 0.22, -0.17, -0.50, 0.05, -0.01, -1.34, -0.19, 3.56), "S": ("Ser", -0.67, -0.86, -1.07, -0.41, -0.32, 0.27, -0.64, 0.11), "T": ("Thr", -0.34, -0.51, -0.55, -1.06, 0.01, -0.01, -0.79, 0.39), "W": ("Trp", 1.50, 2.06, 1.79, 0.75, 0.75, -0.13, -1.06, -0.85), "Y": ("Tyr", 0.61, 1.60, 1.17, 0.73, 0.53, 0.25, -0.96, -0.52), "V": ("Val", 0.76, -0.92, 0.17, -1.91, 0.22, -1.40, -0.24, -0.03)} Explanation: The principal component score Vector of Hydrophobic, Steric, and Electronic properties (VHSE) is a set of amino acid descriptors that come from A new set of amino acid descriptors and its application in peptide QSARs VHSE1 and VHSE2 are related to hydrophobic (H) properties, VHSE3 and VHSE4 to steric (S) properties, and VHSE5 to VHSE8 to electronic (E) properties. End of explanation %ls data/proteasomal_cleavage from aa_props import seq_to_aa_props # Converts the raw input into our X matrix and y vector. 
The 'peptide_key' # and 'activity_key' parameters are the names of the column in the dataframe # for the peptide amino acid string and activity (not cleaved/cleaved) # respectively. The 'sequence_len' allows for varying the number of flanking # amino acids to cleavage site (which is at position 14 of 28 in each cleaved # sample. def dataset_to_X_y(dataframe, peptide_key, activity_key, sequence_len = 28, use_vhse = True): raw_peptide_len = 28 if (sequence_len % 2 or sequence_len > raw_peptide_len or sequence_len <= 0): raise ValueError("sequence_len needs to an even value (0,%d]" % (raw_peptide_len)) X = [] y = [] for (peptide, activity) in zip(dataframe[peptide_key], dataframe[activity_key]): if (len(peptide) != raw_peptide_len): # print "Skipping peptide! len(%s)=%d. Should be len=%d" \ # % (peptide, len(peptide), raw_peptide_len) continue y.append(activity) num_amino_acids_to_clip = (raw_peptide_len - sequence_len) / 2 clipped_peptide = peptide if num_amino_acids_to_clip == 0 else \ peptide[num_amino_acids_to_clip:-num_amino_acids_to_clip] # There is a single peptide in dataset s6 with an "'" in the sequence. # The VHSE values used for it in the study match Proline (P). clipped_peptide = clipped_peptide.replace('\'', 'P') row = [] if use_vhse: for amino_acid in clipped_peptide: row.append(vhse[amino_acid][1]) # hydrophobic row.append(vhse[amino_acid][3]) # steric row.append(vhse[amino_acid][5]) # electric else: row = seq_to_aa_props(clipped_peptide) X.append(row) return (X, y) Explanation: There were eight dataset used in this study. The reference datasets (s1, s3, s5, s7) were converted into the actual datasets used in the analysis (s2, s4, s6, s8) using the vhse vector. The s2 and s4 datasets were used for training the SVM model and the s6 and s8 were used for testing. End of explanation training_set = pd.DataFrame.from_csv("data/proteasomal_cleavage/s2_in_vivo_mhc_1_antijen_swiss_prot_dataset.csv") print training_set.head(3) Explanation: Creating the In Vivo Data To create the in vivo training set, the authors Queried the AntiJen database (7,324 MHC-I ligands) Removed ligands with unknown source protein in ExPASy/SWISS-PROT (6036 MHC-I ligands) Removed duplicate ligands (3,148 ligands) Removed the 231 ligands used for test samples by Saxova et al, (2,917 ligands) Removed sequences less than 28 residues (2,607 ligands) to create the cleavage sample set Assigned non-cleavage sites, removed sequences with less than 28 resides (2,480 ligands) to create the non-cleavage sample set This process created 5,087 training samples: 2,607 cleavage and 2,480 non-cleavage samples. Creating Samples from Ligands and Proteins The C-terminus of the ligand is assumed to be a cleavage site and the midpoint between the N-terminus and C-terminus is assumed to not be a cleavage site. Both the cleavage and non-cleavage sites are at the center position of each sample. <img src="images/creating_samples_from_ligands.png"/> Format of Training Data Each Sequence is 28 residues long, however the authors found the best performance using 20 residues. The Activity is 1 for cleavage and -1 for no cleavage. There are 28 * 8 = 224 features in the raw training set. 
End of explanation from sklearn.model_selection import GridSearchCV from sklearn.feature_selection import RFECV def create_linear_svc_model(parameters, sequence_len = 28, use_vhse = True): scaler = MinMaxScaler() (X_train_unscaled, y_train) = dataset_to_X_y(training_set, \ "Sequence", "Activity", \ sequence_len = sequence_len, \ use_vhse = use_vhse) X_train = pd.DataFrame(scaler.fit_transform(X_train_unscaled)) parameters={'estimator__C': [pow(2, i) for i in xrange(-25, 4, 1)]} svc = svm.LinearSVC() rfe = RFECV(estimator=svc, step=.1, cv=2, scoring='accuracy', n_jobs=8) clf = GridSearchCV(rfe, parameters, scoring='accuracy', n_jobs=8, cv=2, verbose=1) clf.fit(X_train, y_train) # summarize results print("Best: %f using %s" % (clf.best_score_, clf.best_params_)) means = clf.cv_results_['mean_test_score'] stds = clf.cv_results_['std_test_score'] params = clf.cv_results_['params'] for mean, stdev, param in zip(means, stds, params): print("%f (%f) with: %r" % (mean, stdev, param)) # #svr = svm.LinearSVC() #clf = GridSearchCV(svr, parameters, cv=10, scoring='accuracy', n_jobs=1) #clf.fit(X_train, y_train) #print("The best parameters are %s with a score of %0.2f" \ # % (clf.best_params_, clf.best_score_)) return (scaler, clf) (vhse_scaler, vhse_model) = create_linear_svc_model( parameters = {'estimator__C': [pow(2, i) for i in xrange(-25, 4, 1)]}, use_vhse = False) Explanation: Creating the Linear SVM Model The authors measured linear, polynomial, radial basis, and sigmoid kernel and found no significant difference in performance. The linear kernel was chosen for its simplicity and interpretability. The authors did not provide the C value used in their linear model, so I used GridSearchCV to find the best value. End of explanation def test_linear_svc_model(scaler, model, sequence_len = 28, use_vhse = True): testing_set = pd.DataFrame.from_csv("data/proteasomal_cleavage/s6_in_vivo_mhc_1_ligands_dataset.csv") (X_test_prescaled, y_test) = dataset_to_X_y(testing_set, \ "Sequences", "Activity", \ sequence_len = sequence_len,\ use_vhse = use_vhse) X_test = pd.DataFrame(scaler.transform(X_test_prescaled)) y_predicted = model.predict(X_test) accuracy = 100.0 * metrics.accuracy_score(y_test, y_predicted) ((tn, fp), (fn, tp)) = metrics.confusion_matrix(y_test, y_predicted, labels=[-1, 1]) sensitivity = 100.0 * tp/(tp + fn) specificity = 100.0 * tn/(tn + fp) mcc = metrics.matthews_corrcoef(y_test, y_predicted) print "Authors reported performance" print "Acc: 73.5, Sen: 82.3, Spe: 64.8, MCC: 0.48" print "Notebook performance (sequence_len=%d, use_vhse=%s)" % (sequence_len, use_vhse) print "Acc: %.1f, Sen: %.1f, Spe: %.1f, MCC: %.2f" \ %(accuracy, sensitivity, specificity, mcc) test_linear_svc_model(vhse_scaler, vhse_model, use_vhse = False) testing_set = pd.DataFrame.from_csv("data/proteasomal_cleavage/s6_in_vivo_mhc_1_ligands_dataset.csv") (X_test_prescaled, y_test) = dataset_to_X_y(testing_set, \ "Sequences", "Activity", \ sequence_len = 28,\ use_vhse = False) X_test = pd.DataFrame(vhse_scaler.transform(X_test_prescaled)) poslabels = ["-%02d" % (i) for i in range(14, 0, -1)] + ["+%02d" % (i) for i in range(1,15)] # 18 H 17 S 15 E proplables = ["H%02d" % (i) for i in range(18)] + ["S%02d" % (i) for i in range(17)] + ["E%02d" % (i) for i in range(15)] cols = [] for poslabel in poslabels: for proplable in proplables: cols.append("%s%s" % (poslabel, proplable)) X_test.columns = cols for col in X_test.columns[vhse_model.best_estimator_.get_support()]: print col Explanation: Testing In Vivo SVM Model End of 
explanation #h = svr.coef_[:, 0::3] #s = svr.coef_[:, 1::3] #e = svr.coef_[:, 2::3] #%matplotlib notebook #n_groups = h.shape[1] #fig, ax = plt.subplots(figsize=(12,9)) #index = np.arange(n_groups) #bar_width = 0.25 #ax1 = ax.bar(index + bar_width, h.T, bar_width, label="Hydrophobic", color='b') #ax2 = ax.bar(index, s.T, bar_width, label="Steric", color='r') #ax3 = ax.bar(index - bar_width, e.T, bar_width, label="Electronic", color='g') #ax.set_xlim(-bar_width,len(index)+bar_width) #plt.xlabel('Amino Acid Position') #plt.ylabel('SVM Coefficient Value') #plt.title('Hydrophobic, Steric, and Electronic Effect by Amino Acid Position') #plt.xticks(index, range (n_groups/2, 0, -1) + [str(i)+"'" for i in range (1, n_groups/2+1)]) #plt.legend() #plt.tight_layout() #plt.show() Explanation: Comparing Linear SVM to PAProC, FragPredict, and NetChop Interpreting Model Weights <img src="images/journal.pone.0074506.g002.png" align="left" border="0"/> The VHSE1 variable at the P1 position has the largest positive weight coefficient (10.49) in line with research showing that C-terminal residues are usually hydrophobic to aid in ER transfer and binding to the MHC molecule. There is a mostly positive and mostly negative coefficents upstream and downstream of the cleavage site respectively. This potential difference appears to be conducive to cleavage. End of explanation # Performance with no VHSE (no_vhse_scaler, no_vhse_model) = create_linear_svc_model( parameters = {'C': [0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1]}, use_vhse = False) test_linear_svc_model(no_vhse_scaler, no_vhse_model, use_vhse = False) # Performance with more flanking residues and no VHSE (full_flank_scaler, full_flank_model) = create_linear_svc_model( parameters = {'C': [0.0001, 0.003, 0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1]}, use_vhse = False, sequence_len = 28) test_linear_svc_model(full_flank_scaler, full_flank_model, use_vhse = False, sequence_len=28) Explanation: PCA vs. full matrices Principal component analysis (PCA) is a statistical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called principal components Source. Mei et al, creators of the VHSE, applied PCA on 18 hydophobic, 17 steric, and 15 electronic properties. The first 2, 2, and 4 principle components account for 74.33, 78.68, and 77.9% of variability in original matrices. The authors of this paper only used the first principle component from the hydrophobic, steric, and electronic matrices. What performance would the authors have found if used the full matrices instead of PCA features? | Matrix | Features | Sensitivity | Specificity | MCC | |--------|----------|-------------|-------------|------| | VHSE | 3x20=60 | 82.2 | 63.2 | 0.46 | | Full | 50x20=1000 | 81.2 | 64.1 | 0.46 | End of explanation
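To make the PCA discussion above concrete, here is a sketch of how explained variance is inspected with scikit-learn. A random matrix stands in for the real 20x18 hydrophobic property table, which is not reproduced in this notebook:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
props = rng.normal(size=(20, 18))  # placeholder for 20 amino acids x 18 hydrophobic scales

pca = PCA().fit(props)
# Share of variance captured by the first two components; for the real hydrophobic
# matrix this is the quantity reported as 74.33% above.
print(pca.explained_variance_ratio_[:2].sum())
```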
Given the following text description, write Python code to implement the functionality described below step by step Description: Deep Q-learning In this notebook, we'll build a neural network that can learn to play games through reinforcement learning. More specifically, we'll use Q-learning to train an agent to play a game called Cart-Pole. In this game, a freely swinging pole is attached to a cart. The cart can move to the left and right, and the goal is to keep the pole upright as long as possible. We can simulate this game using OpenAI Gym. First, let's check out how OpenAI Gym works. Then, we'll get into training an agent to play the Cart-Pole game. Step1: Note Step2: We interact with the simulation through env. To show the simulation running, you can use env.render() to render one frame. Passing in an action as an integer to env.step will generate the next step in the simulation. You can see how many actions are possible from env.action_space and to get a random action you can use env.action_space.sample(). This is general to all Gym games. In the Cart-Pole game, there are two possible actions, moving the cart left or right. So there are two actions we can take, encoded as 0 and 1. Run the code below to watch the simulation run. Step3: To shut the window showing the simulation, use env.close(). If you ran the simulation above, we can look at the rewards Step4: The game resets after the pole has fallen past a certain angle. For each frame while the simulation is running, it returns a reward of 1.0. The longer the game runs, the more reward we get. Then, our network's goal is to maximize the reward by keeping the pole vertical. It will do this by moving the cart to the left and the right. Q-Network We train our Q-learning agent using the Bellman Equation Step5: Experience replay Reinforcement learning algorithms can have stability issues due to correlations between states. To reduce correlations when training, we can store the agent's experiences and later draw a random mini-batch of those experiences to train on. Here, we'll create a Memory object that will store our experiences, our transitions $<s, a, r, s'>$. This memory will have a maxmium capacity, so we can keep newer experiences in memory while getting rid of older experiences. Then, we'll sample a random mini-batch of transitions $<s, a, r, s'>$ and train on those. Below, I've implemented a Memory object. If you're unfamiliar with deque, this is a double-ended queue. You can think of it like a tube open on both sides. You can put objects in either side of the tube. But if it's full, adding anything more will push an object out the other side. This is a great data structure to use for the memory buffer. Step6: Exploration - Exploitation To learn about the environment and rules of the game, the agent needs to explore by taking random actions. We'll do this by choosing a random action with some probability $\epsilon$ (epsilon). That is, with some probability $\epsilon$ the agent will make a random action and with probability $1 - \epsilon$, the agent will choose an action from $Q(s,a)$. This is called an $\epsilon$-greedy policy. At first, the agent needs to do a lot of exploring. Later when it has learned more, the agent can favor choosing actions based on what it has learned. This is called exploitation. We'll set it up so the agent is more likely to explore early in training, then more likely to exploit later in training. 
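A minimal sketch of the epsilon-greedy choice just described, using the same exponential decay schedule that the training loop later applies (the function and argument names here are illustrative):

```python
import numpy as np

def choose_action(q_values, step, explore_start=1.0, explore_stop=0.01, decay_rate=0.0001):
    # Probability of acting randomly decays from explore_start toward explore_stop.
    epsilon = explore_stop + (explore_start - explore_stop) * np.exp(-decay_rate * step)
    if np.random.rand() < epsilon:
        return np.random.randint(len(q_values))  # explore
    return int(np.argmax(q_values))              # exploit
```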
Q-Learning training algorithm Putting all this together, we can list out the algorithm we'll use to train the network. We'll train the network in episodes. One episode is one simulation of the game. For this game, the goal is to keep the pole upright for 195 frames. So we can start a new episode once meeting that goal. The game ends if the pole tilts over too far, or if the cart moves too far the left or right. When a game ends, we'll start a new episode. Now, to train the agent Step7: Populate the experience memory Here I'm re-initializing the simulation and pre-populating the memory. The agent is taking random actions and storing the transitions in memory. This will help the agent with exploring the game. Step8: Training Below we'll train our agent. If you want to watch it train, uncomment the env.render() line. This is slow because it's rendering the frames slower than the network can train. But, it's cool to watch the agent get better at the game. Step9: Visualizing training Below I'll plot the total rewards for each episode. I'm plotting the rolling average too, in blue. Step10: Testing Let's checkout how our trained agent plays the game.
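Before the full implementation, the target computation at the heart of the training steps above can be sketched in isolation (a self-contained illustration with made-up numbers; the code below computes the same quantity over mini-batches):

```python
import numpy as np

def td_targets(rewards, next_qs, gamma=0.99):
    # r + gamma * max_a' Q(s', a'); terminal transitions should have next_qs zeroed first.
    return rewards + gamma * np.max(next_qs, axis=1)

print(td_targets(np.array([1.0, 1.0]), np.array([[0.5, 0.2], [0.0, 0.0]])))  # [1.495 1.   ]
```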
Python Code: import gym import tensorflow as tf import numpy as np Explanation: Deep Q-learning In this notebook, we'll build a neural network that can learn to play games through reinforcement learning. More specifically, we'll use Q-learning to train an agent to play a game called Cart-Pole. In this game, a freely swinging pole is attached to a cart. The cart can move to the left and right, and the goal is to keep the pole upright as long as possible. We can simulate this game using OpenAI Gym. First, let's check out how OpenAI Gym works. Then, we'll get into training an agent to play the Cart-Pole game. End of explanation # Create the Cart-Pole game environment env = gym.make('CartPole-v0') Explanation: Note: Make sure you have OpenAI Gym cloned into the same directory with this notebook. I've included gym as a submodule, so you can run git submodule --init --recursive to pull the contents into the gym repo. End of explanation env.reset() rewards = [] for _ in range(100): env.render() state, reward, done, info = env.step(env.action_space.sample()) # take a random action rewards.append(reward) if done: rewards = [] env.reset() env.close() Explanation: We interact with the simulation through env. To show the simulation running, you can use env.render() to render one frame. Passing in an action as an integer to env.step will generate the next step in the simulation. You can see how many actions are possible from env.action_space and to get a random action you can use env.action_space.sample(). This is general to all Gym games. In the Cart-Pole game, there are two possible actions, moving the cart left or right. So there are two actions we can take, encoded as 0 and 1. Run the code below to watch the simulation run. End of explanation print(rewards[-20:]) Explanation: To shut the window showing the simulation, use env.close(). If you ran the simulation above, we can look at the rewards: End of explanation class QNetwork: def __init__(self, learning_rate=0.01, state_size=4, action_size=2, hidden_size=10, name='QNetwork'): # state inputs to the Q-network with tf.variable_scope(name): self.inputs_ = tf.placeholder(tf.float32, [None, state_size], name='inputs') # One hot encode the actions to later choose the Q-value for the action self.actions_ = tf.placeholder(tf.int32, [None], name='actions') one_hot_actions = tf.one_hot(self.actions_, action_size) # Target Q values for training self.targetQs_ = tf.placeholder(tf.float32, [None], name='target') # ReLU hidden layers self.fc1 = tf.contrib.layers.fully_connected(self.inputs_, hidden_size) self.fc2 = tf.contrib.layers.fully_connected(self.fc1, hidden_size) # Linear output layer self.output = tf.contrib.layers.fully_connected(self.fc2, action_size, activation_fn=None) ### Train with loss (targetQ - Q)^2 # output has length 2, for two actions. This next line chooses # one value from output (per row) according to the one-hot encoded actions. self.Q = tf.reduce_sum(tf.multiply(self.output, one_hot_actions), axis=1) self.loss = tf.reduce_mean(tf.square(self.targetQs_ - self.Q)) self.opt = tf.train.AdamOptimizer(learning_rate).minimize(self.loss) Explanation: The game resets after the pole has fallen past a certain angle. For each frame while the simulation is running, it returns a reward of 1.0. The longer the game runs, the more reward we get. Then, our network's goal is to maximize the reward by keeping the pole vertical. It will do this by moving the cart to the left and the right. 
Q-Network We train our Q-learning agent using the Bellman Equation: $$ Q(s, a) = r + \gamma \max{Q(s', a')} $$ where $s$ is a state, $a$ is an action, and $s'$ is the next state from state $s$ and action $a$. Before we used this equation to learn values for a Q-table. However, for this game there are a huge number of states available. The state has four values: the position and velocity of the cart, and the position and velocity of the pole. These are all real-valued numbers, so ignoring floating point precisions, you practically have infinite states. Instead of using a table then, we'll replace it with a neural network that will approximate the Q-table lookup function. <img src="assets/deep-q-learning.png" width=450px> Now, our Q value, $Q(s, a)$ is calculated by passing in a state to the network. The output will be Q-values for each available action, with fully connected hidden layers. <img src="assets/q-network.png" width=550px> As I showed before, we can define our targets for training as $\hat{Q}(s,a) = r + \gamma \max{Q(s', a')}$. Then we update the weights by minimizing $(\hat{Q}(s,a) - Q(s,a))^2$. For this Cart-Pole game, we have four inputs, one for each value in the state, and two outputs, one for each action. To get $\hat{Q}$, we'll first choose an action, then simulate the game using that action. This will get us the next state, $s'$, and the reward. With that, we can calculate $\hat{Q}$ then pass it back into the $Q$ network to run the optimizer and update the weights. Below is my implementation of the Q-network. I used two fully connected layers with ReLU activations. Two seems to be good enough, three might be better. Feel free to try it out. End of explanation from collections import deque class Memory(): def __init__(self, max_size = 1000): self.buffer = deque(maxlen=max_size) def add(self, experience): self.buffer.append(experience) def sample(self, batch_size): idx = np.random.choice(np.arange(len(self.buffer)), size=batch_size, replace=False) return [self.buffer[ii] for ii in idx] Explanation: Experience replay Reinforcement learning algorithms can have stability issues due to correlations between states. To reduce correlations when training, we can store the agent's experiences and later draw a random mini-batch of those experiences to train on. Here, we'll create a Memory object that will store our experiences, our transitions $<s, a, r, s'>$. This memory will have a maxmium capacity, so we can keep newer experiences in memory while getting rid of older experiences. Then, we'll sample a random mini-batch of transitions $<s, a, r, s'>$ and train on those. Below, I've implemented a Memory object. If you're unfamiliar with deque, this is a double-ended queue. You can think of it like a tube open on both sides. You can put objects in either side of the tube. But if it's full, adding anything more will push an object out the other side. This is a great data structure to use for the memory buffer. 
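To see that behaviour concretely, a quick standalone deque example:

```python
from collections import deque

buffer = deque(maxlen=3)
for experience in range(5):
    buffer.append(experience)

print(buffer)  # deque([2, 3, 4], maxlen=3): the oldest items were pushed out
```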
End of explanation train_episodes = 1000 # max number of episodes to learn from max_steps = 200 # max steps in an episode gamma = 0.99 # future reward discount # Exploration parameters explore_start = 1.0 # exploration probability at start explore_stop = 0.01 # minimum exploration probability decay_rate = 0.0001 # exponential decay rate for exploration prob # Network parameters hidden_size = 64 # number of units in each Q-network hidden layer learning_rate = 0.0001 # Q-network learning rate # Memory parameters memory_size = 10000 # memory capacity batch_size = 20 # experience mini-batch size pretrain_length = batch_size # number experiences to pretrain the memory tf.reset_default_graph() mainQN = QNetwork(name='main', hidden_size=hidden_size, learning_rate=learning_rate) Explanation: Exploration - Exploitation To learn about the environment and rules of the game, the agent needs to explore by taking random actions. We'll do this by choosing a random action with some probability $\epsilon$ (epsilon). That is, with some probability $\epsilon$ the agent will make a random action and with probability $1 - \epsilon$, the agent will choose an action from $Q(s,a)$. This is called an $\epsilon$-greedy policy. At first, the agent needs to do a lot of exploring. Later when it has learned more, the agent can favor choosing actions based on what it has learned. This is called exploitation. We'll set it up so the agent is more likely to explore early in training, then more likely to exploit later in training. Q-Learning training algorithm Putting all this together, we can list out the algorithm we'll use to train the network. We'll train the network in episodes. One episode is one simulation of the game. For this game, the goal is to keep the pole upright for 195 frames. So we can start a new episode once meeting that goal. The game ends if the pole tilts over too far, or if the cart moves too far the left or right. When a game ends, we'll start a new episode. Now, to train the agent: Initialize the memory $D$ Initialize the action-value network $Q$ with random weights For episode = 1, $M$ do For $t$, $T$ do With probability $\epsilon$ select a random action $a_t$, otherwise select $a_t = \mathrm{argmax}_a Q(s,a)$ Execute action $a_t$ in simulator and observe reward $r_{t+1}$ and new state $s_{t+1}$ Store transition $<s_t, a_t, r_{t+1}, s_{t+1}>$ in memory $D$ Sample random mini-batch from $D$: $<s_j, a_j, r_j, s'_j>$ Set $\hat{Q}j = r_j$ if the episode ends at $j+1$, otherwise set $\hat{Q}_j = r_j + \gamma \max{a'}{Q(s'_j, a')}$ Make a gradient descent step with loss $(\hat{Q}_j - Q(s_j, a_j))^2$ endfor endfor Hyperparameters One of the more difficult aspects of reinforcememt learning are the large number of hyperparameters. Not only are we tuning the network, but we're tuning the simulation. 
End of explanation # Initialize the simulation env.reset() # Take one random step to get the pole and cart moving state, reward, done, _ = env.step(env.action_space.sample()) memory = Memory(max_size=memory_size) # Make a bunch of random actions and store the experiences for ii in range(pretrain_length): # Uncomment the line below to watch the simulation # env.render() # Make a random action action = env.action_space.sample() next_state, reward, done, _ = env.step(action) if done: # The simulation fails so no next state next_state = np.zeros(state.shape) # Add experience to memory memory.add((state, action, reward, next_state)) # Start new episode env.reset() # Take one random step to get the pole and cart moving state, reward, done, _ = env.step(env.action_space.sample()) else: # Add experience to memory memory.add((state, action, reward, next_state)) state = next_state Explanation: Populate the experience memory Here I'm re-initializing the simulation and pre-populating the memory. The agent is taking random actions and storing the transitions in memory. This will help the agent with exploring the game. End of explanation # Now train with experiences saver = tf.train.Saver() rewards_list = [] with tf.Session() as sess: # Initialize variables sess.run(tf.global_variables_initializer()) step = 0 for ep in range(1, train_episodes): total_reward = 0 t = 0 while t < max_steps: step += 1 # Uncomment this next line to watch the training # env.render() # Explore or Exploit explore_p = explore_stop + (explore_start - explore_stop)*np.exp(-decay_rate*step) if explore_p > np.random.rand(): # Make a random action action = env.action_space.sample() else: # Get action from Q-network feed = {mainQN.inputs_: state.reshape((1, *state.shape))} Qs = sess.run(mainQN.output, feed_dict=feed) action = np.argmax(Qs) # Take action, get new state and reward next_state, reward, done, _ = env.step(action) total_reward += reward if done: # the episode ends so no next state next_state = np.zeros(state.shape) t = max_steps print('Episode: {}'.format(ep), 'Total reward: {}'.format(total_reward), 'Training loss: {:.4f}'.format(loss), 'Explore P: {:.4f}'.format(explore_p)) rewards_list.append((ep, total_reward)) # Add experience to memory memory.add((state, action, reward, next_state)) # Start new episode env.reset() # Take one random step to get the pole and cart moving state, reward, done, _ = env.step(env.action_space.sample()) else: # Add experience to memory memory.add((state, action, reward, next_state)) state = next_state t += 1 # Sample mini-batch from memory batch = memory.sample(batch_size) states = np.array([each[0] for each in batch]) actions = np.array([each[1] for each in batch]) rewards = np.array([each[2] for each in batch]) next_states = np.array([each[3] for each in batch]) # Train network target_Qs = sess.run(mainQN.output, feed_dict={mainQN.inputs_: next_states}) # Set target_Qs to 0 for states where episode ends episode_ends = (next_states == np.zeros(states[0].shape)).all(axis=1) target_Qs[episode_ends] = (0, 0) targets = rewards + gamma * np.max(target_Qs, axis=1) loss, _ = sess.run([mainQN.loss, mainQN.opt], feed_dict={mainQN.inputs_: states, mainQN.targetQs_: targets, mainQN.actions_: actions}) saver.save(sess, "checkpoints/cartpole.ckpt") Explanation: Training Below we'll train our agent. If you want to watch it train, uncomment the env.render() line. This is slow because it's rendering the frames slower than the network can train. But, it's cool to watch the agent get better at the game. 
End of explanation %matplotlib inline import matplotlib.pyplot as plt def running_mean(x, N): cumsum = np.cumsum(np.insert(x, 0, 0)) return (cumsum[N:] - cumsum[:-N]) / N eps, rews = np.array(rewards_list).T smoothed_rews = running_mean(rews, 10) plt.plot(eps[-len(smoothed_rews):], smoothed_rews) plt.plot(eps, rews, color='grey', alpha=0.3) plt.xlabel('Episode') plt.ylabel('Total Reward') Explanation: Visualizing training Below I'll plot the total rewards for each episode. I'm plotting the rolling average too, in blue. End of explanation test_episodes = 10 test_max_steps = 400 env.reset() with tf.Session() as sess: saver.restore(sess, tf.train.latest_checkpoint('checkpoints')) for ep in range(1, test_episodes): t = 0 while t < test_max_steps: env.render() # Get action from Q-network feed = {mainQN.inputs_: state.reshape((1, *state.shape))} Qs = sess.run(mainQN.output, feed_dict=feed) action = np.argmax(Qs) # Take action, get new state and reward next_state, reward, done, _ = env.step(action) if done: t = test_max_steps env.reset() # Take one random step to get the pole and cart moving state, reward, done, _ = env.step(env.action_space.sample()) else: state = next_state t += 1 env.close() Explanation: Testing Let's checkout how our trained agent plays the game. End of explanation
Given the following text description, write Python code to implement the functionality described below step by step Description: "or" operator This presentation is inspired by Neil Ludban's use of the "or" operator in Test-Driven Development with Python at last month's technical meeting. split diff view of code on github view of code on github Step1: Python's or operator has some similarities with C's || operator. Both always evaluate the first operand. Both guarantee that the second operand is not evaluated if the first operand is true. C's || operator yields an integer Step2: Python's concept of truthiness Numerical values are false if zero and true if not zero. None is false. Sequences and collections are false if empty, and true if not empty. Step3: Slowly scroll through the output of the following cell, predicting the output of each value before scrolling to reveal the actual output. Step4: There was some confusion and disbelief that True and False are integers, so that was played with. Some folks also did not know about the distinction between the / and // operators in Python 3, so that was played with also. Step5: Now we get to how the "or" operator was used in fizzbuzz(). Step6: Now we get to a more serious discussion of when it is good to use the "or" operator where the operands are not merely the typical case of False or True. The short answer is Step7: Pete Carswell asked about doing the above long expressions with a lamdba, hence the following. Step8: Note that the reduce() evaluates all the elements of its second operand, whereas a big long multiple "or" expression is guaranteed to stop evaluating operands after the first true operand. Then I thought the operator module should eliminate the need for the lamdba, so I explored the operator module. Step9: Unfortunately, I was not able to find an equivalent to the "or" operator. Zach brought up another use of the "or" operator for handling default arguments. Step10: It is too bad that there is not an or= operator. C does not have a ||= operator either. Step11: Zach prefers his code above to code below, with the danger of its mutable default value. Step12: The cell above changes the mutable default argument, as shown below. Step13: Zach's version does not suffer from the mutable default argument problem. Step14: How can I screw up Zach's version? It is sensitive to false arguments. Step15: That can be fixed with a traditional "is None" test. Step16: Maybe a better name for this presentation would be 'this' or 'that'. Step17: Zach reported that the "or" operator in Javascript works like Python. This presentation concentrated on the "or" operator. The "and" operator works like you (should) expect. Explore it on your own.
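The short-circuit guarantee mentioned above can be made visible with a small side-effect check (a hypothetical helper, not part of the original notebook):

```python
def noisy(value):
    print('evaluated:', value)
    return value

noisy('first') or noisy('second')  # only 'first' is evaluated, because it is truthy
noisy('') or noisy('second')       # both run, because '' is falsy
```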
Python Code: # meld is a great visual difference program # http://meldmerge.org/ # the following command relies on the directory structure on my computer # tdd-demo comes from https://github.com/james-prior/tdd-demo/ !cd ~/projects/tdd-demo;git difftool -t meld -y 389df2a^ 389df2a Explanation: "or" operator This presentation is inspired by Neil Ludban's use of the "or" operator in Test-Driven Development with Python at last month's technical meeting. split diff view of code on github view of code on github End of explanation False or False 0 or False False or 0 0 or 0 False or True True or False True or True True or 1 1 or True 1 or 1 1 > 2 3 < 4 # This kind of expression using the "or" operator is very typical, # comprising the vast majority of use. 1 > 2 or 3 < 4 'hello' or 'world' '' or 'world' 'hello' or '' '' or '' '' or None False or 3.14 'False' or 3.14 bool('False' or 3.14) [] or {} '' or [] '' or {} '' or (1, 3) '' or 'False' '' or 'True' '' or True '' or False Explanation: Python's or operator has some similarities with C's || operator. Both always evaluate the first operand. Both guarantee that the second operand is not evaluated if the first operand is true. C's || operator yields an integer: 0 (false) or 1 (true). Python "or" operator yields one of the operands. If the first operand is true, it is the result and the second operand is guaranteed to not be evaluated. If the first operand if false, the second operand is evaluated and is the result. Note that Python returns one of the operands, not merely 0 or 1. Python's concept of truth ... Here are most of the built-in objects considered false: constants defined to be false: None and False. zero of any numeric type: 0, 0.0, 0j, Decimal(0), Fraction(0, 1) empty sequences and collections: '', (), [], {}, set(), range(0) ... Objects that are not false are true. constants defined to be true: True. not zero of any numeric type: 1, -1, 0.01, 1j, Decimal(1), Fraction(-1, 1) non-empty sequences and collections: 'hello', (0,), [0], {0: 0}, set([0]), range(1) For each of the following cells, predict the output. During the meeting, there was much disagreement in predictions, especially in the first four cells below. That led to real learning. End of explanation values = ( None, 0, 0.0, 0j, (), [], {}, set(), False, True, True + True, (True + True + True) / True, 1, -1, 1.e-30, '', 'False', 'True', [], [None], # This fools many people. [0], [0.0], [0j], [1], [1, 2], [[]], # This fools many people. [{}], [()], [], (), (None,), (0,), (0.0,), (0j,), (1,), (1, 2), ([],), ({},), ((),), (), {}, {None: None}, {False: None}, {'False': None}, set(), {None}, {0}, {0.0}, {0j}, {1}, {1, 2}, {()}, ) Explanation: Python's concept of truthiness Numerical values are false if zero and true if not zero. None is false. Sequences and collections are false if empty, and true if not empty. End of explanation for value in values: print(repr(value), type(value)) print(bool(value)) print() Explanation: Slowly scroll through the output of the following cell, predicting the output of each value before scrolling to reveal the actual output. End of explanation True + True True / (True + True) True // (True + True) Explanation: There was some confusion and disbelief that True and False are integers, so that was played with. Some folks also did not know about the distinction between the / and // operators in Python 3, so that was played with also. 
End of explanation '' or 1 '' or 2 'fizz' or 3 'buzz' or 5 'fizz' or 6 'fizzbuzz' or 15 '' or 16 Explanation: Now we get to how the "or" operator was used in fizzbuzz(). End of explanation False or 0 or 0j or 0.0 or [] or {} or set() or None or () False or 0 or 0j or 0.0 or 'false' or [] or {} or set() or None or () Explanation: Now we get to a more serious discussion of when it is good to use the "or" operator where the operands are not merely the typical case of False or True. The short answer is: Use the "or" operator when it makes the code more readable. But that begs the question. When does using the "or" operator make the code more readable? This is a question I have been struggling with. Let's go back to the old code at hand. if not output: return str(n) return output It returns either output or str(n). When said that simply in English, Neil's refactoring is the most readable code. It says most simply and directly what we want. return output or str(n) The problem may be from the biases of experienced programmers like myself who expect the "or" operator to yield only a true or false value, like we expect from other languages such as but not limited to C. Inexperienced folks do not bring such baggage from other languages. For myself, I have decided to absorb and use the idiom like Neil showed us. It is part of learning Python. With a long string of "or"ed stuff, the result is the first true operand. If no operands are true, the result is the last operand. End of explanation from functools import reduce a = ( False, 0, 0j, 0.0, [], {}, 'look ma no hands', set(), None, (), ) reduce(lambda x, y: x or y, a) Explanation: Pete Carswell asked about doing the above long expressions with a lamdba, hence the following. End of explanation import operator [s for s in dir(operator) if 'or' in s] Explanation: Note that the reduce() evaluates all the elements of its second operand, whereas a big long multiple "or" expression is guaranteed to stop evaluating operands after the first true operand. Then I thought the operator module should eliminate the need for the lamdba, so I explored the operator module. End of explanation def foo(p=None): p = p or [1, 2, 3, 4] return p Explanation: Unfortunately, I was not able to find an equivalent to the "or" operator. Zach brought up another use of the "or" operator for handling default arguments. End of explanation foo(5) foo() Explanation: It is too bad that there is not an or= operator. C does not have a ||= operator either. End of explanation def foo(p=[1, 2, 3, 4]): return p foo(3) foo() a = foo() a[1] = 'hi mom' a Explanation: Zach prefers his code above to code below, with the danger of its mutable default value. End of explanation foo() Explanation: The cell above changes the mutable default argument, as shown below. End of explanation def foo(p=None): p = p or [1, 2, 3, 4] return p b = foo() b b[2] = 'this' b foo() Explanation: Zach's version does not suffer from the mutable default argument problem. End of explanation foo([1]) foo([]) foo(0) Explanation: How can I screw up Zach's version? It is sensitive to false arguments. End of explanation def foo(p=None): if p is None: p = [1, 2, 3, 4] return p foo() foo(None) foo([1]) foo([]) foo(0) Explanation: That can be fixed with a traditional "is None" test. End of explanation 'this' or 'that' 'give me liberty' or 'give me death' Explanation: Maybe a better name for this presentation would be 'this' or 'that'. 
End of explanation False and 1 'False' and 1 Explanation: Zach reported that the "or" operator in Javascript works like Python. This presentation concentrated on the "or" operator. The "and" operator works like you (should) expect. Explore it on your own. End of explanation
1,904
Given the following text description, write Python code to implement the functionality described below step by step Description: Superposed epoch analysis in the presence of high internal variability We will be using a 4000yr pre-industrial time series of monthly-mean NINO3.4 SST from the GFDL CM2.1, described in Step1: Applying the tropical year average to this is a little tedious, so we skip some tests and load a up a file provided by Feng. Step2: This shows a nice skewness comparable to observations. Let's define a quantile-based threshold for El Niño and La Niña events Step3: Accidental El Niño composites Under stationary boundary conditions, warm (or cold) events can only appear in composites due to sampling artifacts, which should be larger for small number of key dates. Let us use resampling to evaluate the risk of wrongly identifying "forced" responses when none exists. Here our criterion for identifying warm events is that they exceed the threshold defined above.
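To make the sampling-artifact argument concrete before turning to the model output, here is a toy version of the experiment described above. It is entirely synthetic: Gaussian white noise stands in for the annual NINO3.4 series, the 85th percentile stands in for the warm-event threshold, and the counts of key dates are only illustrative. It shows how composites over a few randomly chosen key dates can exceed an "El Niño" threshold purely by chance, and how that risk shrinks as the number of key dates grows.
import numpy as np

rng = np.random.RandomState(42)
fake_nino34 = rng.standard_normal(4000)      # synthetic stand-in for the 4000-yr series
threshold = np.quantile(fake_nino34, 0.85)   # stand-in warm-event threshold

n_draws = 10000
for n_keys in [5, 10, 20, 50]:
    composites = np.array([
        fake_nino34[rng.choice(fake_nino34.size, size=n_keys, replace=False)].mean()
        for _ in range(n_draws)
    ])
    frac = (composites >= threshold).mean()
    print(f'{n_keys:2d} key dates: {frac:.3f} of random composites exceed the threshold')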
Python Code: %load_ext autoreload %autoreload 2 %matplotlib inline import LMRt import os import numpy as np import seaborn as sns import matplotlib.pyplot as plt from scipy.stats.mstats import mquantiles import xarray as xr from matplotlib import gridspec from scipy.signal import find_peaks import pandas as pd import pickle from tqdm import tqdm import pyleoclim as pyleo with xr.open_dataset('sst_nino34_cm2p1_1860.nc') as ds: print(ds) nino34 = ds['sst_nino34'].values time = ds['time'].values #print(np.shape(nino34)) Explanation: Superposed epoch analysis in the presence of high internal variability We will be using a 4000yr pre-industrial time series of monthly-mean NINO3.4 SST from the GFDL CM2.1, described in: CM2.1 model formulation, and tropical/ENSO evaluation: - Delworth et al. (2006): http://doi.org/10.1175/JCLI3629.1 - Wittenberg et al. (2006): http://doi.org/10.1175/JCLI3631.1 Pre-industrial control simulation, and long-range ENSO modulation & memory: - Wittenberg et al. (2009): http://doi.org/10.1029/2009GL038710 - Wittenberg et al. (2014): http://doi.org/10.1175/JCLI-D-13-00577.1 - Atwood et al. (CD 2017): http://doi.org/10.1007/s00382-016-3477-9 h/t Andrew Wittenberg for providing the simulation. Exploratory analysis End of explanation with open('cm2.1_nino34_TY.pkl', 'rb') as f: year, nino34_ann = pickle.load(f) # ignore last value (NaN) year = year[:-1]; nino34_ann = nino34_ann[:-1] nino34_ann -= np.mean(nino34_ann) fig, ax = plt.subplots() sns.distplot(nino34_ann,ax=ax) sns.despine() ax.set(title='Distribution of annual values',xlabel = 'NINO3.4 SST',ylabel = 'PDF') Explanation: Applying the tropical year average to this is a little tedious, so we skip some tests and load a up a file provided by Feng. End of explanation thre = 0.15 q = np.quantile(nino34_ann,[thre, 1-thre]) nina = np.where(nino34_ann <= q[0]) nino = np.where(nino34_ann >= q[1]) fig, ax = plt.subplots(figsize=[10, 4]) ax.plot(year, nino34_ann, color='gray',linewidth=0.2) ax.plot(year[nino],nino34_ann[nino],'o',alpha=0.6,markersize=3,color='C3') ax.plot(year[nina],nino34_ann[nina],'o',alpha=0.6,markersize=3,color='C0') plt.text(4100,2,'El Niño events',color='C3') plt.text(4100,-2,'La Niña events',color='C0') # ax.set_xlabel('Year') ax.set_ylabel('Niño 3.4') ax.spines['right'].set_visible(False) ax.spines['top'].set_visible(False) len(nina[0]) Explanation: This shows a nice skewness comparable to observations. 
Let's define a quantile-based threshold for El Niño and La Niña events: End of explanation from scipy.stats import gaussian_kde nkeys = [5,10,15,20,50] clr = plt.cm.tab10(np.linspace(0,1,10)) prob = np.empty([len(nkeys),1]) nt = len(year) nMC = 10000 # number of Monte Carlo draws comp = np.empty([len(nkeys),nMC]) fig, ax = plt.subplots(figsize=(8,4)) xm = np.linspace(-2,3,200) for key in nkeys: i = nkeys.index(key) for m in range(nMC): events = np.random.choice(nt, size=[key, 1], replace=False, p=None) comp[i,m] = np.mean(nino34_ann[events],axis=0) x = np.sort(comp[i,:]) # sort it by increasing values kde = gaussian_kde(x,bw_method=0.2) # apply Kernel Density Estimation if any(x>=q[1]): comp_nino = x[x>=q[1]] prob[i] = kde.integrate_box_1d(q[1],5) ax.fill_between(comp_nino,kde(comp_nino),alpha=0.3, color = clr[i]) else: prob[i]=0 ax.plot(xm,kde(xm),linewidth=2,color=clr[i],label=str(key) + ', '+ f'{prob[i][0]:3.4f}') ax.axvline(q[1],linestyle='--',alpha=0.2,color='black') plt.legend(title=r'# key dates, $P(x > x_{crit})$',loc=5,fontsize=10,title_fontsize=12) ax.set_xlim([-2,3]) ax.set_xlabel('Niño 3.4') ax.set_ylabel('Density') ax.spines['right'].set_visible(False) ax.spines['top'].set_visible(False) ax.set_title('Probability of accidentally identifying unforced events as forced') fig.savefig("CM2.1_compositing_accidents.pdf",dpi=200,pad_inches=0.2) Explanation: Accidental El Niño composites Under stationary boundary conditions, warm (or cold) events can only appear in composites due to sampling artifacts, which should be larger for small number of key dates. Let us use resampling to evaluate the risk of wrongly identifying "forced" responses when none exists. Here our criterion for identifying warm events is that they exceed the threshold defined above. End of explanation
1,905
Given the following text description, write Python code to implement the functionality described below step by step Description: Generative Adversarial Network In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits! GANs were first reported on in 2014 from Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out Step1: Model Inputs First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks. Exercise Step2: Generator network Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values. Variable Scope Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks. We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again. To use tf.variable_scope, you use a with statement Step3: Discriminator The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer. Exercise Step4: Hyperparameters Step5: Build network Now we're building the network from the functions defined above. First is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z. Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes. Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True). Exercise Step6: Discriminator and Generator Losses Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will by sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like python tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels)) For the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. 
For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth) The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that. Finally, the generator losses are using d_logits_fake, the fake image logits. But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants to discriminator to output ones for fake images. Exercise Step7: Optimizers We want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph. For the generator optimizer, we only want to generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep variables that start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance). We can do something similar with the discriminator. All the variables in the discriminator start with discriminator. Then, in the optimizer we pass the variable lists to the var_list keyword argument of the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list. Exercise Step8: Training Step9: Training loss Here we'll check out the training losses for the generator and discriminator. Step10: Generator samples from training Here we can view samples of images from the generator. First we'll look at images taken while training. Step11: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make. Step12: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion! Step13: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3. Sampling from the generator We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples!
Python Code: %matplotlib inline import pickle as pkl import numpy as np import tensorflow as tf import matplotlib.pyplot as plt from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets('MNIST_data') Explanation: Generative Adversarial Network In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits! GANs were first reported on in 2014 from Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out: Pix2Pix CycleGAN A whole list The idea behind GANs is that you have two networks, a generator $G$ and a discriminator $D$, competing against each other. The generator makes fake data to pass to the discriminator. The discriminator also sees real data and predicts if the data it's received is real or fake. The generator is trained to fool the discriminator, it wants to output data that looks as close as possible to real data. And the discriminator is trained to figure out which data is real and which is fake. What ends up happening is that the generator learns to make data that is indistiguishable from real data to the discriminator. The general structure of a GAN is shown in the diagram above, using MNIST images as data. The latent sample is a random vector the generator uses to contruct it's fake images. As the generator learns through training, it figures out how to map these random vectors to recognizable images that can fool the discriminator. The output of the discriminator is a sigmoid function, where 0 indicates a fake image and 1 indicates an real image. If you're interested only in generating new images, you can throw out the discriminator after training. Now, let's see how we build this thing in TensorFlow. End of explanation def model_inputs(real_dim, z_dim): inputs_real = tf.placeholder(dtype=tf.float32, shape=(None, real_dim), name="inputs_real") inputs_z = tf.placeholder(dtype=tf.float32, shape=(None, z_dim), name="inputs_z") return inputs_real, inputs_z Explanation: Model Inputs First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks. Exercise: Finish the model_inputs function below. Create the placeholders for inputs_real and inputs_z using the input sizes real_dim and z_dim respectively. End of explanation def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01): ''' Build the generator network. Arguments --------- z : Input tensor for the generator out_dim : Shape of the generator output n_units : Number of units in hidden layer reuse : Reuse the variables with tf.variable_scope alpha : leak parameter for leaky ReLU Returns ------- out, logits: ''' with tf.variable_scope('generator', reuse=reuse): # finish this # Hidden layer h1 = tf.layers.dense(inputs=z, units=n_units, activation=None) # Leaky ReLU h1 = tf.maximum(alpha * h1, h1) # Logits and tanh output logits = tf.layers.dense(h1, out_dim) out = tf.tanh(logits) return out Explanation: Generator network Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. 
A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values. Variable Scope Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks. We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again. To use tf.variable_scope, you use a with statement: python with tf.variable_scope('scope_name', reuse=False): # code here Here's more from the TensorFlow documentation to get another look at using tf.variable_scope. Leaky ReLU TensorFlow doesn't provide an operation for leaky ReLUs, so we'll need to make one . For this you can just take the outputs from a linear fully connected layer and pass them to tf.maximum. Typically, a parameter alpha sets the magnitude of the output for negative values. So, the output for negative input (x) values is alpha*x, and the output for positive x is x: $$ f(x) = max(\alpha * x, x) $$ Tanh Output The generator has been found to perform the best with $tanh$ for the generator output. This means that we'll have to rescale the MNIST images to be between -1 and 1, instead of 0 and 1. Exercise: Implement the generator network in the function below. You'll need to return the tanh output. Make sure to wrap your code in a variable scope, with 'generator' as the scope name, and pass the reuse keyword argument from the function to tf.variable_scope. End of explanation def discriminator(x, n_units=128, reuse=False, alpha=0.01): ''' Build the discriminator network. Arguments --------- x : Input tensor for the discriminator n_units: Number of units in hidden layer reuse : Reuse the variables with tf.variable_scope alpha : leak parameter for leaky ReLU Returns ------- out, logits: ''' with tf.variable_scope('discriminator', reuse=reuse): # finish this # Hidden layer h1 = tf.layers.dense(inputs=x, units=n_units, activation=None) # Leaky ReLU h1 = tf.maximum(alpha * h1, h1) logits = tf.layers.dense(h1, 1, activation=None) out = tf.sigmoid(logits) return out, logits Explanation: Discriminator The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer. Exercise: Implement the discriminator network in the function below. Same as above, you'll need to return both the logits and the sigmoid output. Make sure to wrap your code in a variable scope, with 'discriminator' as the scope name, and pass the reuse keyword argument from the function arguments to tf.variable_scope. 
End of explanation # Size of input image to discriminator input_size = 784 # 28x28 MNIST images flattened # Size of latent vector to generator z_size = 100 # Sizes of hidden layers in generator and discriminator g_hidden_size = 128 d_hidden_size = 128 # Leak factor for leaky ReLU alpha = 0.01 # Label smoothing smooth = 0.1 Explanation: Hyperparameters End of explanation tf.reset_default_graph() # Create our input placeholders input_real, input_z = model_inputs(input_size, z_size) # Generator network here g_model = generator(input_z, input_size) # g_model is the generator output # Disriminator network here d_model_real, d_logits_real = discriminator(input_real) d_model_fake, d_logits_fake = discriminator(g_model, reuse=True) Explanation: Build network Now we're building the network from the functions defined above. First is to get our inputs, input_real, input_z from model_inputs using the sizes of the input and z. Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes. Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True). Exercise: Build the network from the functions you defined earlier. End of explanation # Calculate losses real_labels = tf.ones_like(d_logits_real) * (1 - smooth) fake_labels = tf.zeros_like(d_logits_real) gen_labels = tf.ones_like(d_logits_fake) d_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits( logits=d_logits_real, labels=real_labels)) d_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits( logits=d_logits_fake, labels=fake_labels)) d_loss = d_loss_real + d_loss_fake g_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits( logits=d_logits_fake, labels=gen_labels)) Explanation: Discriminator and Generator Losses Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will by sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like python tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels)) For the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth) The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that. Finally, the generator losses are using d_logits_fake, the fake image logits. 
But, now the labels are all ones. The generator is trying to fool the discriminator, so it wants to discriminator to output ones for fake images. Exercise: Calculate the losses for the discriminator and the generator. There are two discriminator losses, one for real images and one for fake images. For the real image loss, use the real logits and (smoothed) labels of ones. For the fake image loss, use the fake logits with labels of all zeros. The total discriminator loss is the sum of those two losses. Finally, the generator loss again uses the fake logits from the discriminator, but this time the labels are all ones because the generator wants to fool the discriminator. End of explanation # Optimizers learning_rate = 0.002 # Get the trainable_variables, split into G and D parts t_vars = tf.trainable_variables() g_vars = [var for var in t_vars if var.name.find("generator") != -1] d_vars = [var for var in t_vars if var.name.find("discriminator") != -1] d_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars) g_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars) Explanation: Optimizers We want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph. For the generator optimizer, we only want to generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep variables that start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance). We can do something similar with the discriminator. All the variables in the discriminator start with discriminator. Then, in the optimizer we pass the variable lists to the var_list keyword argument of the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list. Exercise: Below, implement the optimizers for the generator and discriminator. First you'll need to get a list of trainable variables, then split that list into two lists, one for the generator variables and another for the discriminator variables. Finally, using AdamOptimizer, create an optimizer for each network that update the network variables separately. 
End of explanation batch_size = 100 epochs = 100 samples = [] losses = [] saver = tf.train.Saver(var_list = g_vars) with tf.Session() as sess: sess.run(tf.global_variables_initializer()) for e in range(epochs): for ii in range(mnist.train.num_examples//batch_size): batch = mnist.train.next_batch(batch_size) # Get images, reshape and rescale to pass to D batch_images = batch[0].reshape((batch_size, 784)) batch_images = batch_images*2 - 1 # Sample random noise for G batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size)) # Run optimizers _ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z}) _ = sess.run(g_train_opt, feed_dict={input_z: batch_z}) # At the end of each epoch, get the losses and print them out train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images}) train_loss_g = g_loss.eval({input_z: batch_z}) print("Epoch {}/{}...".format(e+1, epochs), "Discriminator Loss: {:.4f}...".format(train_loss_d), "Generator Loss: {:.4f}".format(train_loss_g)) # Save losses to view after training losses.append((train_loss_d, train_loss_g)) # Sample from generator as we're training for viewing afterwards sample_z = np.random.uniform(-1, 1, size=(16, z_size)) gen_samples = sess.run( generator(input_z, input_size, reuse=True), feed_dict={input_z: sample_z}) samples.append(gen_samples) saver.save(sess, './checkpoints/generator.ckpt') # Save training generator samples with open('train_samples.pkl', 'wb') as f: pkl.dump(samples, f) Explanation: Training End of explanation %matplotlib inline import matplotlib.pyplot as plt fig, ax = plt.subplots() losses = np.array(losses) plt.plot(losses.T[0], label='Discriminator') plt.plot(losses.T[1], label='Generator') plt.title("Training Losses") plt.legend() Explanation: Training loss Here we'll check out the training losses for the generator and discriminator. End of explanation def view_samples(epoch, samples): fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True) for ax, img in zip(axes.flatten(), samples[epoch]): ax.xaxis.set_visible(False) ax.yaxis.set_visible(False) im = ax.imshow(img.reshape((28,28)), cmap='Greys_r') return fig, axes # Load samples from generator taken while training with open('train_samples.pkl', 'rb') as f: samples = pkl.load(f) Explanation: Generator samples from training Here we can view samples of images from the generator. First we'll look at images taken while training. End of explanation _ = view_samples(-1, samples) Explanation: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make. End of explanation rows, cols = 10, 6 fig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True) for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes): for img, ax in zip(sample[::int(len(sample)/cols)], ax_row): ax.imshow(img.reshape((28,28)), cmap='Greys_r') ax.xaxis.set_visible(False) ax.yaxis.set_visible(False) Explanation: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion! 
End of explanation saver = tf.train.Saver(var_list=g_vars) with tf.Session() as sess: saver.restore(sess, tf.train.latest_checkpoint('checkpoints')) sample_z = np.random.uniform(-1, 1, size=(16, z_size)) gen_samples = sess.run( generator(input_z, input_size, reuse=True), feed_dict={input_z: sample_z}) view_samples(0, [gen_samples]) Explanation: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3. Sampling from the generator We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples! End of explanation
1,906
Given the following text description, write Python code to implement the functionality described below step by step Description: 05 - Continuous Training After testing, compiling, and uploading the pipeline definition to Cloud Storage, the pipeline is executed with respect to a trigger. We use Cloud Functions and Cloud Pub/Sub as a triggering mechanism. The triggering can be scheduled using Cloud Scheduler. The trigger source sends a message to a Cloud Pub/Sub topic that the Cloud Function listens to, and then it submits the pipeline to AI Platform Managed Pipelines to be executed. This notebook covers the following steps Step1: Setup Google Cloud project Step2: Set configurations Step3: 1. Create a Pub/Sub topic Step5: 2. Deploy the Cloud Function Step6: 3. Trigger the pipeline Step7: Wait for a few seconds for the pipeline run to be submitted, then you can see the run in the Cloud Console Step8: 4. Extracting pipeline runs metadata
Python Code: import json import os import logging import tensorflow as tf import tfx import IPython logging.getLogger().setLevel(logging.INFO) print("Tensorflow Version:", tfx.__version__) Explanation: 05 - Continuous Training After testing, compiling, and uploading the pipeline definition to Cloud Storage, the pipeline is executed with respect to a trigger. We use Cloud Functions and Cloud Pub/Sub as a triggering mechanism. The triggering can be scheduled using Cloud Scheduler. The trigger source sends a message to a Cloud Pub/Sub topic that the Cloud Function listens to, and then it submits the pipeline to AI Platform Managed Pipelines to be executed. This notebook covers the following steps: 1. Create the Cloud Pub/Sub topic. 2. Deploy the Cloud Function 3. Test triggering a pipeline. 4. Extracting pipeline run metadata. Setup Import libraries End of explanation PROJECT = '[your-project-id]' # Change to your project id. REGION = 'us-central1' # Change to your region. BUCKET = '[your-bucket-name]' # Change to your bucket name. if PROJECT == "" or PROJECT is None or PROJECT == "[your-project-id]": # Get your GCP project id from gcloud shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null PROJECT = shell_output[0] if BUCKET == "" or BUCKET is None or BUCKET == "[your-bucket-name]": # Get your bucket name to GCP projet id BUCKET = PROJECT print("Project ID:", PROJECT) print("Region:", REGION) print("Bucket name:", BUCKET) Explanation: Setup Google Cloud project End of explanation VERSION = 'v01' DATASET_DISPLAY_NAME = 'chicago-taxi-tips' MODEL_DISPLAY_NAME = f'{DATASET_DISPLAY_NAME}-classifier-{VERSION}' PIPELINE_NAME = f'{MODEL_DISPLAY_NAME}-train-pipeline' PIPELINES_STORE = f'gs://{BUCKET}/{DATASET_DISPLAY_NAME}/compiled_pipelines/' GCS_PIPELINE_FILE_LOCATION = os.path.join(PIPELINES_STORE, f'{PIPELINE_NAME}.json') PUBSUB_TOPIC = f'trigger-{PIPELINE_NAME}' CLOUD_FUNCTION_NAME = f'trigger-{PIPELINE_NAME}-fn' !gsutil ls {GCS_PIPELINE_FILE_LOCATION} Explanation: Set configurations End of explanation !gcloud pubsub topics create {PUBSUB_TOPIC} Explanation: 1. Create a Pub/Sub topic End of explanation ENV_VARS=f\ PROJECT={PROJECT},\ REGION={REGION},\ GCS_PIPELINE_FILE_LOCATION={GCS_PIPELINE_FILE_LOCATION} !echo {ENV_VARS} !rm -r src/pipeline_triggering/.ipynb_checkpoints !gcloud functions deploy {CLOUD_FUNCTION_NAME} \ --region={REGION} \ --trigger-topic={PUBSUB_TOPIC} \ --runtime=python37 \ --source=src/pipeline_triggering\ --entry-point=trigger_pipeline\ --stage-bucket={BUCKET}\ --update-env-vars={ENV_VARS} cloud_fn_url = f"https://console.cloud.google.com/functions/details/{REGION}/{CLOUD_FUNCTION_NAME}" html = f'See the Cloud Function details <a href="{cloud_fn_url}" target="_blank">here</a>.' IPython.display.display(IPython.display.HTML(html)) Explanation: 2. Deploy the Cloud Function End of explanation from google.cloud import pubsub publish_client = pubsub.PublisherClient() topic = f'projects/{PROJECT}/topics/{PUBSUB_TOPIC}' data = { 'num_epochs': 7, 'learning_rate': 0.0015, 'batch_size': 512, 'hidden_units': '256,126' } message = json.dumps(data) _ = publish_client.publish(topic, message.encode()) Explanation: 3. 
Trigger the pipeline End of explanation from kfp.v2.google.client import AIPlatformClient pipeline_client = AIPlatformClient( project_id=PROJECT, region=REGION) job_display_name = pipeline_client.list_jobs()['pipelineJobs'][0]['displayName'] job_url = f"https://console.cloud.google.com/vertex-ai/locations/{REGION}/pipelines/runs/{job_display_name}" html = f'See the Pipeline job <a href="{job_url}" target="_blank">here</a>.' IPython.display.display(IPython.display.HTML(html)) Explanation: Wait for a few seconds for the pipeline run to be submitted, then you can see the run in the Cloud Console End of explanation from google.cloud import aiplatform as vertex_ai pipeline_df = vertex_ai.get_pipeline_df(PIPELINE_NAME) pipeline_df = pipeline_df[pipeline_df.pipeline_name == PIPELINE_NAME] pipeline_df.T Explanation: 4. Extracting pipeline runs metadata End of explanation
1,907
Given the following text description, write Python code to implement the functionality described below step by step Description: CesiumWidget together with CZML library This notebook shows how to use the CesiumWidget together with the CZML library from https Step1: Some data for the viewer to display Step2: Create widget object Step3: Display the widget
Python Code: from CesiumWidget import CesiumWidget import czml Explanation: CesiumWidget together with CZML library This notebook shows how to use the CesiumWidget together with the CZML library from https://github.com/cleder/czml If the CesiumWidget is installed correctly, Cesium should be accessable at: http://localhost:8888/nbextensions/CesiumWidget/cesium/index.html End of explanation # Initialize a document doc = czml.CZML() # Create and append the document packet packet1 = czml.CZMLPacket(id='document',version='1.0') doc.packets.append(packet1) p3 = czml.CZMLPacket(id='test') p3.position = czml.Position(cartographicDegrees = [18.07,59.33, 20]) point = czml.Point(pixelSize=20, show=True) point.color = czml.Color(rgba=(223, 150, 47, 128)) point.show = True p3.point = point l = czml.Label(show=True, text='Stockholm') l.scale = 0.5 p3.label = l doc.packets.append(p3) Explanation: Some data for the viewer to display End of explanation cesiumExample = CesiumWidget(width="100%", czml=tuple(doc.data())) Explanation: Create widget object End of explanation cesiumExample Explanation: Display the widget: End of explanation
1,908
Given the following text description, write Python code to implement the functionality described below step by step Description: Copyright 2018 The TensorFlow Probability Authors. Licensed under the Apache License, Version 2.0 (the "License"); Step1: Linear Mixed Effects Models <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https Step2: Make things Fast! Before we dive in, let's make sure we're using a GPU for this demo. To do this, select "Runtime" -> "Change runtime type" -> "Hardware accelerator" -> "GPU". The following snippet will verify that we have access to a GPU. Step4: Note Step5: We load and preprocess the data set. We hold out 20% of the data so we can evaluate our fitted model on unseen data points. Below we visualize the first few rows. Step6: We set up the data set in terms of a features dictionary of inputs and a labels output corresponding to the ratings. Each feature is encoded as an integer and each label (evaluation rating) is encoded as a floating point number. Step7: Model A typical linear model assumes independence, where any pair of data points has a constant linear relationship. In the InstEval data set, observations arise in groups each of which may have varying slopes and intercepts. Linear mixed effects models, also known as hierarchical linear models or multilevel linear models, capture this phenomenon (Gelman & Hill, 2006). Examples of this phenomenon include Step8: As a Probabilistic graphical program, we can also visualize the model's structure in terms of its computational graph. This graph encodes dataflow across the random variables in the program, making explicit their relationships in terms of a graphical model (Jordan, 2003). As a statistical tool, we might look at the graph in order to better see, for example, that intercept and effect_service are conditionally dependent given ratings; this may be harder to see from the source code if the program is written with classes, cross references across modules, and/or subroutines. As a computational tool, we might also notice latent variables flow into the ratings variable via tf.gather ops. This may be a bottleneck on certain hardware accelerators if indexing Tensors is expensive; visualizing the graph makes this readily apparent. Step9: Parameter Estimation Given data, the goal of inference is to fit the model's fixed effects slope $\beta$, intercept $\alpha$, and variance component parameter $\sigma^2$. The maximum likelihood principle formalizes this task as $$ \max_{\beta, \alpha, \sigma}~\log p(\mathbf{y}\mid \mathbf{X}, \mathbf{Z}; \beta, \alpha, \sigma) = \max_{\beta, \alpha, \sigma}~\log \int p(\eta; \sigma) ~p(\mathbf{y}\mid \mathbf{X}, \mathbf{Z}, \eta; \beta, \alpha)~d\eta. $$ In this tutorial, we use the Monte Carlo EM algorithm to maximize this marginal density (Dempster et al., 1977; Wei and Tanner, 1990).¹ We perform Markov chain Monte Carlo to compute the expectation of the conditional likelihood with respect to the random effects ("E-step"), and we perform gradient descent to maximize the expectation with respect to the parameters ("M-step") Step10: We perform a warm-up stage, which runs one MCMC chain for a number of iterations so that training may be initialized within the posterior's probability mass. We then run a training loop. It jointly runs the E and M-steps and records values during training. Step11: You can also write the warmup for-loop into a tf.while_loop, and the training step into a tf.scan or tf.while_loop for even faster inference. 
For example Step12: Above, we did not run the algorithm until a convergence threshold was detected. To check whether training was sensible, we verify that the loss function indeed tends to converge over training iterations. Step13: We also use a trace plot, which shows the Markov chain Monte Carlo algorithm's trajectory across specific latent dimensions. Below we see that specific instructor effects indeed meaningfully transition away from their initial state and explore the state space. The trace plot also indicates that the effects differ across instructors but with similar mixing behavior. Step14: Criticism Above, we fitted the model. We now look into criticizing its fit using data, which lets us explore and better understand the model. One such technique is a residual plot, which plots the difference between the model's predictions and ground truth for each data point. If the model were correct, then their difference should be standard normally distributed; any deviations from this pattern in the plot indicate model misfit. We build the residual plot by first forming the posterior predictive distribution over ratings, which replaces the prior distribution on the random effects with its posterior given training data. In particular, we run the model forward and intercept its dependence on prior random effects with their inferred posterior means.² Step15: Upon visual inspection, the residuals look somewhat standard-normally distributed. However, the fit is not perfect Step16: To explore how the model makes individual predictions, we look at the histogram of effects for students, instructors, and departments. This lets us understand how individual elements in a data point's feature vector tends to influence the outcome. Not surprisingly, we see below that each student typically has little effect on an instructor's evaluation rating. Interestingly, we see that the department an instructor belongs to has a large effect.
Python Code: #@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" } # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. Explanation: Copyright 2018 The TensorFlow Probability Authors. Licensed under the Apache License, Version 2.0 (the "License"); End of explanation #@title Import and set ups{ display-mode: "form" } import csv import matplotlib.pyplot as plt import numpy as np import pandas as pd import requests import tensorflow.compat.v2 as tf tf.enable_v2_behavior() import tensorflow_probability as tfp tfd = tfp.distributions tfb = tfp.bijectors dtype = tf.float64 %config InlineBackend.figure_format = 'retina' %matplotlib inline plt.style.use('ggplot') Explanation: Linear Mixed Effects Models <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/probability/examples/Linear_Mixed_Effects_Models"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/Linear_Mixed_Effects_Models.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/Linear_Mixed_Effects_Models.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/probability/tensorflow_probability/examples/jupyter_notebooks/Linear_Mixed_Effects_Models.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> A linear mixed effects model is a simple approach for modeling structured linear relationships (Harville, 1997; Laird and Ware, 1982). Each data point consists of inputs of varying type—categorized into groups—and a real-valued output. A linear mixed effects model is a hierarchical model: it shares statistical strength across groups in order to improve inferences about any individual data point. In this tutorial, we demonstrate linear mixed effects models with a real-world example in TensorFlow Probability. We'll use the JointDistributionCoroutine and Markov Chain Monte Carlo (tfp.mcmc) modules. Dependencies & Prerequisites End of explanation if tf.test.gpu_device_name() != '/device:GPU:0': print('WARNING: GPU device not found.') else: print('SUCCESS: Found GPU: {}'.format(tf.test.gpu_device_name())) Explanation: Make things Fast! Before we dive in, let's make sure we're using a GPU for this demo. To do this, select "Runtime" -> "Change runtime type" -> "Hardware accelerator" -> "GPU". The following snippet will verify that we have access to a GPU. End of explanation def load_insteval(): Loads the InstEval data set. 
It contains 73,421 university lecture evaluations by students at ETH Zurich with a total of 2,972 students, 2,160 professors and lecturers, and several student, lecture, and lecturer attributes. Implementation is built from the `observations` Python package. Returns: Tuple of np.ndarray `x_train` with 73,421 rows and 7 columns and dictionary `metadata` of column headers (feature names). url = ('https://raw.github.com/vincentarelbundock/Rdatasets/master/csv/' 'lme4/InstEval.csv') with requests.Session() as s: download = s.get(url) f = download.content.decode().splitlines() iterator = csv.reader(f) columns = next(iterator)[1:] x_train = np.array([row[1:] for row in iterator], dtype=np.int) metadata = {'columns': columns} return x_train, metadata Explanation: Note: if for some reason you cannot access a GPU, this colab will still work. (Training will just take longer.) Data We use the InstEval data set from the popular lme4 package in R (Bates et al., 2015). It is a data set of courses and their evaluation ratings. Each course includes metadata such as students, instructors, and departments, and the response variable of interest is the evaluation rating. End of explanation data, metadata = load_insteval() data = pd.DataFrame(data, columns=metadata['columns']) data = data.rename(columns={'s': 'students', 'd': 'instructors', 'dept': 'departments', 'y': 'ratings'}) data['students'] -= 1 # start index by 0 # Remap categories to start from 0 and end at max(category). data['instructors'] = data['instructors'].astype('category').cat.codes data['departments'] = data['departments'].astype('category').cat.codes train = data.sample(frac=0.8) test = data.drop(train.index) train.head() Explanation: We load and preprocess the data set. We hold out 20% of the data so we can evaluate our fitted model on unseen data points. Below we visualize the first few rows. End of explanation get_value = lambda dataframe, key, dtype: dataframe[key].values.astype(dtype) features_train = { k: get_value(train, key=k, dtype=np.int32) for k in ['students', 'instructors', 'departments', 'service']} labels_train = get_value(train, key='ratings', dtype=np.float32) features_test = {k: get_value(test, key=k, dtype=np.int32) for k in ['students', 'instructors', 'departments', 'service']} labels_test = get_value(test, key='ratings', dtype=np.float32) num_students = max(features_train['students']) + 1 num_instructors = max(features_train['instructors']) + 1 num_departments = max(features_train['departments']) + 1 num_observations = train.shape[0] print("Number of students:", num_students) print("Number of instructors:", num_instructors) print("Number of departments:", num_departments) print("Number of observations:", num_observations) Explanation: We set up the data set in terms of a features dictionary of inputs and a labels output corresponding to the ratings. Each feature is encoded as an integer and each label (evaluation rating) is encoded as a floating point number. End of explanation class LinearMixedEffectModel(tf.Module): def __init__(self): # Set up fixed effects and other parameters. 
# These are free parameters to be optimized in E-steps self._intercept = tf.Variable(0., name="intercept") # alpha in eq self._effect_service = tf.Variable(0., name="effect_service") # beta in eq self._stddev_students = tfp.util.TransformedVariable( 1., bijector=tfb.Exp(), name="stddev_students") # sigma in eq self._stddev_instructors = tfp.util.TransformedVariable( 1., bijector=tfb.Exp(), name="stddev_instructors") # sigma in eq self._stddev_departments = tfp.util.TransformedVariable( 1., bijector=tfb.Exp(), name="stddev_departments") # sigma in eq def __call__(self, features): model = tfd.JointDistributionSequential([ # Set up random effects. tfd.MultivariateNormalDiag( loc=tf.zeros(num_students), scale_identity_multiplier=self._stddev_students), tfd.MultivariateNormalDiag( loc=tf.zeros(num_instructors), scale_identity_multiplier=self._stddev_instructors), tfd.MultivariateNormalDiag( loc=tf.zeros(num_departments), scale_identity_multiplier=self._stddev_departments), # This is the likelihood for the observed. lambda effect_departments, effect_instructors, effect_students: tfd.Independent( tfd.Normal( loc=(self._effect_service * features["service"] + tf.gather(effect_students, features["students"], axis=-1) + tf.gather(effect_instructors, features["instructors"], axis=-1) + tf.gather(effect_departments, features["departments"], axis=-1) + self._intercept), scale=1.), reinterpreted_batch_ndims=1) ]) # To enable tracking of the trainable variables via the created distribution, # we attach a reference to `self`. Since all TFP objects sub-class # `tf.Module`, this means that the following is possible: # LinearMixedEffectModel()(features_train).trainable_variables # ==> tuple of all tf.Variables created by LinearMixedEffectModel. model._to_track = self return model lmm_jointdist = LinearMixedEffectModel() # Conditioned on feature/predictors from the training data lmm_train = lmm_jointdist(features_train) lmm_train.trainable_variables Explanation: Model A typical linear model assumes independence, where any pair of data points has a constant linear relationship. In the InstEval data set, observations arise in groups each of which may have varying slopes and intercepts. Linear mixed effects models, also known as hierarchical linear models or multilevel linear models, capture this phenomenon (Gelman & Hill, 2006). Examples of this phenomenon include: Students. Observations from a student are not independent: some students may systematically give low (or high) lecture ratings. Instructors. Observations from an instructor are not independent: we expect good teachers to generally have good ratings and bad teachers to generally have bad ratings. Departments. Observations from a department are not independent: certain departments may generally have dry material or stricter grading and thus be rated lower than others. To capture this, recall that for a data set of $N\times D$ features $\mathbf{X}$ and $N$ labels $\mathbf{y}$, linear regression posits the model $$ \begin{equation} \mathbf{y} = \mathbf{X}\beta + \alpha + \epsilon, \end{equation} $$ where there is a slope vector $\beta\in\mathbb{R}^D$, intercept $\alpha\in\mathbb{R}$, and random noise $\epsilon\sim\text{Normal}(\mathbf{0}, \mathbf{I})$. We say that $\beta$ and $\alpha$ are "fixed effects": they are effects held constant across the population of data points $(x, y)$. An equivalent formulation of the equation as a likelihood is $\mathbf{y} \sim \text{Normal}(\mathbf{X}\beta + \alpha, \mathbf{I})$. 
This likelihood is maximized during inference in order to find point estimates of $\beta$ and $\alpha$ that fit the data. A linear mixed effects model extends linear regression as $$ \begin{align} \eta &\sim \text{Normal}(\mathbf{0}, \sigma^2 \mathbf{I}), \ \mathbf{y} &= \mathbf{X}\beta + \mathbf{Z}\eta + \alpha + \epsilon. \end{align} $$ where there is still a slope vector $\beta\in\mathbb{R}^P$, intercept $\alpha\in\mathbb{R}$, and random noise $\epsilon\sim\text{Normal}(\mathbf{0}, \mathbf{I})$. In addition, there is a term $\mathbf{Z}\eta$, where $\mathbf{Z}$ is a features matrix and $\eta\in\mathbb{R}^Q$ is a vector of random slopes; $\eta$ is normally distributed with variance component parameter $\sigma^2$. $\mathbf{Z}$ is formed by partitioning the original $N\times D$ features matrix in terms of a new $N\times P$ matrix $\mathbf{X}$ and $N\times Q$ matrix $\mathbf{Z}$, where $P + Q=D$: this partition allows us to model the features separately using the fixed effects $\beta$ and the latent variable $\eta$ respectively. We say the latent variables $\eta$ are "random effects": they are effects that vary across the population (although they may be constant across subpopulations). In particular, because the random effects $\eta$ have mean 0, the data label's mean is captured by $\mathbf{X}\beta + \alpha$. The random effects component $\mathbf{Z}\eta$ captures variations in the data: for example, "Instructor #54 is rated 1.4 points higher than the mean." In this tutorial, we posit the following effects: Fixed effects: service. service is a binary covariate corresponding to whether the course belongs to the instructor's main department. No matter how much additional data we collect, it can only take on values $0$ and $1$. Random effects: students, instructors, and departments. Given more observations from the population of course evaluation ratings, we may be looking at new students, teachers, or departments. In the syntax of R's lme4 package (Bates et al., 2015), the model can be summarized as ratings ~ service + (1|students) + (1|instructors) + (1|departments) + 1 where x denotes a fixed effect,(1|x) denotes a random effect for x, and 1 denotes an intercept term. We implement this model below as a JointDistribution. To have better support for parameter tracking (e.g., we want to track all the tf.Variable in model.trainable_variables), we implement the model template as tf.Module. End of explanation lmm_train.resolve_graph() Explanation: As a Probabilistic graphical program, we can also visualize the model's structure in terms of its computational graph. This graph encodes dataflow across the random variables in the program, making explicit their relationships in terms of a graphical model (Jordan, 2003). As a statistical tool, we might look at the graph in order to better see, for example, that intercept and effect_service are conditionally dependent given ratings; this may be harder to see from the source code if the program is written with classes, cross references across modules, and/or subroutines. As a computational tool, we might also notice latent variables flow into the ratings variable via tf.gather ops. This may be a bottleneck on certain hardware accelerators if indexing Tensors is expensive; visualizing the graph makes this readily apparent. 
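A quick way to complement the graph view is to draw a single sample from the joint prior and look at the shapes that come back. This is only a sketch: it assumes lmm_train from the cells above is still in scope, and the names used in the zip below are labels chosen here purely for printing, not attributes of the distribution.

prior_draw = lmm_train.sample()  # one tensor per node of the joint distribution
for label, value in zip(['effect_students', 'effect_instructors',
                         'effect_departments', 'ratings'], prior_draw):
    print(label, value.shape)  # shapes reflect num_students, num_instructors, etc.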
End of explanation target_log_prob_fn = lambda *x: lmm_train.log_prob(x + (labels_train,)) trainable_variables = lmm_train.trainable_variables current_state = lmm_train.sample()[:-1] # For debugging target_log_prob_fn(*current_state) # Set up E-step (MCMC). hmc = tfp.mcmc.HamiltonianMonteCarlo( target_log_prob_fn=target_log_prob_fn, step_size=0.015, num_leapfrog_steps=3) kernel_results = hmc.bootstrap_results(current_state) @tf.function(autograph=False, jit_compile=True) def one_e_step(current_state, kernel_results): next_state, next_kernel_results = hmc.one_step( current_state=current_state, previous_kernel_results=kernel_results) return next_state, next_kernel_results optimizer = tf.optimizers.Adam(learning_rate=.01) # Set up M-step (gradient descent). @tf.function(autograph=False, jit_compile=True) def one_m_step(current_state): with tf.GradientTape() as tape: loss = -target_log_prob_fn(*current_state) grads = tape.gradient(loss, trainable_variables) optimizer.apply_gradients(zip(grads, trainable_variables)) return loss Explanation: Parameter Estimation Given data, the goal of inference is to fit the model's fixed effects slope $\beta$, intercept $\alpha$, and variance component parameter $\sigma^2$. The maximum likelihood principle formalizes this task as $$ \max_{\beta, \alpha, \sigma}~\log p(\mathbf{y}\mid \mathbf{X}, \mathbf{Z}; \beta, \alpha, \sigma) = \max_{\beta, \alpha, \sigma}~\log \int p(\eta; \sigma) ~p(\mathbf{y}\mid \mathbf{X}, \mathbf{Z}, \eta; \beta, \alpha)~d\eta. $$ In this tutorial, we use the Monte Carlo EM algorithm to maximize this marginal density (Dempster et al., 1977; Wei and Tanner, 1990).¹ We perform Markov chain Monte Carlo to compute the expectation of the conditional likelihood with respect to the random effects ("E-step"), and we perform gradient descent to maximize the expectation with respect to the parameters ("M-step"): For the E-step, we set up Hamiltonian Monte Carlo (HMC). It takes a current state—the student, instructor, and department effects—and returns a new state. We assign the new state to TensorFlow variables, which will denote the state of the HMC chain. For the M-step, we use the posterior sample from HMC to calculate an unbiased estimate of the marginal likelihood up to a constant. We then apply its gradient with respect to the parameters of interest. This produces an unbiased stochastic descent step on the marginal likelihood. We implement it with the Adam TensorFlow optimizer and minimize the negative of the marginal. End of explanation num_warmup_iters = 1000 num_iters = 1500 num_accepted = 0 effect_students_samples = np.zeros([num_iters, num_students]) effect_instructors_samples = np.zeros([num_iters, num_instructors]) effect_departments_samples = np.zeros([num_iters, num_departments]) loss_history = np.zeros([num_iters]) # Run warm-up stage. for t in range(num_warmup_iters): current_state, kernel_results = one_e_step(current_state, kernel_results) num_accepted += kernel_results.is_accepted.numpy() if t % 500 == 0 or t == num_warmup_iters - 1: print("Warm-Up Iteration: {:>3} Acceptance Rate: {:.3f}".format( t, num_accepted / (t + 1))) num_accepted = 0 # reset acceptance rate counter # Run training. 
for t in range(num_iters): # run 5 MCMC iterations before every joint EM update for _ in range(5): current_state, kernel_results = one_e_step(current_state, kernel_results) loss = one_m_step(current_state) effect_students_samples[t, :] = current_state[0].numpy() effect_instructors_samples[t, :] = current_state[1].numpy() effect_departments_samples[t, :] = current_state[2].numpy() num_accepted += kernel_results.is_accepted.numpy() loss_history[t] = loss.numpy() if t % 500 == 0 or t == num_iters - 1: print("Iteration: {:>4} Acceptance Rate: {:.3f} Loss: {:.3f}".format( t, num_accepted / (t + 1), loss_history[t])) Explanation: We perform a warm-up stage, which runs one MCMC chain for a number of iterations so that training may be initialized within the posterior's probability mass. We then run a training loop. It jointly runs the E and M-steps and records values during training. End of explanation @tf.function(autograph=False, jit_compile=True) def run_k_e_steps(k, current_state, kernel_results): _, next_state, next_kernel_results = tf.while_loop( cond=lambda i, state, pkr: i < k, body=lambda i, state, pkr: (i+1, *one_e_step(state, pkr)), loop_vars=(tf.constant(0), current_state, kernel_results) ) return next_state, next_kernel_results Explanation: You can also write the warmup for-loop into a tf.while_loop, and the training step into a tf.scan or tf.while_loop for even faster inference. For example: End of explanation plt.plot(loss_history) plt.ylabel(r'Loss $-\log$ $p(y\mid\mathbf{x})$') plt.xlabel('Iteration') plt.show() Explanation: Above, we did not run the algorithm until a convergence threshold was detected. To check whether training was sensible, we verify that the loss function indeed tends to converge over training iterations. End of explanation for i in range(7): plt.plot(effect_instructors_samples[:, i]) plt.legend([i for i in range(7)], loc='lower right') plt.ylabel('Instructor Effects') plt.xlabel('Iteration') plt.show() Explanation: We also use a trace plot, which shows the Markov chain Monte Carlo algorithm's trajectory across specific latent dimensions. Below we see that specific instructor effects indeed meaningfully transition away from their initial state and explore the state space. The trace plot also indicates that the effects differ across instructors but with similar mixing behavior. End of explanation lmm_test = lmm_jointdist(features_test) [ effect_students_mean, effect_instructors_mean, effect_departments_mean, ] = [ np.mean(x, axis=0).astype(np.float32) for x in [ effect_students_samples, effect_instructors_samples, effect_departments_samples ] ] # Get the posterior predictive distribution (*posterior_conditionals, ratings_posterior), _ = lmm_test.sample_distributions( value=( effect_students_mean, effect_instructors_mean, effect_departments_mean, )) ratings_prediction = ratings_posterior.mean() Explanation: Criticism Above, we fitted the model. We now look into criticizing its fit using data, which lets us explore and better understand the model. One such technique is a residual plot, which plots the difference between the model's predictions and ground truth for each data point. If the model were correct, then their difference should be standard normally distributed; any deviations from this pattern in the plot indicate model misfit. We build the residual plot by first forming the posterior predictive distribution over ratings, which replaces the prior distribution on the random effects with its posterior given training data. 
In particular, we run the model forward and intercept its dependence on prior random effects with their inferred posterior means.² End of explanation plt.title("Residuals for Predicted Ratings on Test Set") plt.xlim(-4, 4) plt.ylim(0, 800) plt.hist(ratings_prediction - labels_test, 75) plt.show() Explanation: Upon visual inspection, the residuals look somewhat standard-normally distributed. However, the fit is not perfect: there is larger probability mass in the tails than a normal distribution, which indicates the model might improve its fit by relaxing its normality assumptions. In particular, although it is most common to use a normal distribution to model ratings in the InstEval data set, a closer look at the data reveals that course evaluation ratings are in fact ordinal values from 1 to 5. This suggests that we should be using an ordinal distribution, or even Categorical if we have enough data to throw away the relative ordering. This is a one-line change to the model above; the same inference code is applicable. End of explanation plt.title("Histogram of Student Effects") plt.hist(effect_students_mean, 75) plt.show() plt.title("Histogram of Instructor Effects") plt.hist(effect_instructors_mean, 75) plt.show() plt.title("Histogram of Department Effects") plt.hist(effect_departments_mean, 75) plt.show() Explanation: To explore how the model makes individual predictions, we look at the histogram of effects for students, instructors, and departments. This lets us understand how individual elements in a data point's feature vector tends to influence the outcome. Not surprisingly, we see below that each student typically has little effect on an instructor's evaluation rating. Interestingly, we see that the department an instructor belongs to has a large effect. End of explanation
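As a small follow-up to the histograms, it can help to see which individual instructors sit in the tails. This sketch assumes the effect_instructors_mean array computed above is still in scope and simply ranks it with NumPy.

import numpy as np

order = np.argsort(effect_instructors_mean)
print("Instructors with the most negative mean effects:", order[:5],
      effect_instructors_mean[order[:5]])
print("Instructors with the most positive mean effects:", order[-5:],
      effect_instructors_mean[order[-5:]])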
1,909
Given the following text description, write Python code to implement the functionality described below step by step Description: Named Entity Recognition Author Step1: Let's take a paragraph from the Wikipedia page of Ada Lovelace as an example. We need to put the text in triple quotes since the text itself contains quoting characters. Step2: First we need to tokenize the text and then we apply the NER tagger. Let's try both, the 3 class version and the 7 class version. Step3: We see that each word is tagged. Tags are for instance ORGANIZATION or PERSON. Very prominently, the O tag appears often. This is the other class (everything that is not an organisation or person, etc.). But it is still an aweful lot of text. Let's just have a look at the non-other entities detected. We do this assuming that adjacent words having the same tag should be collapsed into one named entity.
Python Code: from nltk.tag import StanfordNERTagger from nltk.tokenize import word_tokenize # Adapt those lines to your installation jar_location = '/Users/sech/stanford-ner-2018-10-16/stanford-ner.jar' model_location_3classes = '/Users/sech/stanford-ner-2018-10-16/classifiers/english.all.3class.distsim.crf.ser.gz' model_location_7classes = '/Users/sech/stanford-ner-2018-10-16/classifiers/english.muc.7class.distsim.crf.ser.gz' st3 = StanfordNERTagger(model_location_3classes,jar_location,encoding='utf-8') st7 = StanfordNERTagger(model_location_7classes,jar_location,encoding='utf-8') print(st3) print(st7) Explanation: Named Entity Recognition Author: Christin Seifert, licensed under the Creative Commons Attribution 3.0 Unported License https://creativecommons.org/licenses/by/3.0/ This is a tutorial for NER (named entity recognition). In this tutorial you will see * how to apply a pre-trained named entity recognition model to your text It is assumed that you have some general knowledge on * .. no particular knowledge required. You should be able to read texts, though ;-) Prerequisites. We first need to install the Stanford NER tagger from here. And java also has to be installed. You have to figure out * where the jar file stanford-ner.jar is located * where the pretrained models (e.g. english.all.3class.distsim.crf.ser.gz) is located, this is the subdirectory classifiers * whether the right version of java is installed. On a command line type java -version to see the version. Refer back to the documentation on the stanford nlp page to see which version is needed. You can also test the NER tagger online here. End of explanation text = '''Lovelace became close friends with her tutor Mary Somerville, who introduced her to Charles Babbage in 1833. She had a strong respect and affection for Somerville, and they corresponded for many years. Other acquaintances included the scientists Andrew Crosse, Sir David Brewster, Charles Wheatstone, Michael Faraday and the author Charles Dickens. She was presented at Court at the age of seventeen "and became a popular belle of the season" in part because of her "brilliant mind." By 1834 Ada was a regular at Court and started attending various events. She danced often and was able to charm many people, and was described by most people as being dainty, although John Hobhouse, Byron's friend, described her as "a large, coarse-skinned young woman but with something of my friend's features, particularly the mouth". This description followed their meeting on 24 February 1834 in which Ada made it clear to Hobhouse that she did not like him, probably because of the influence of her mother, which led her to dislike all of her father's friends. This first impression was not to last, and they later became friends.''' print(text) Explanation: Let's take a paragraph from the Wikipedia page of Ada Lovelace as an example. We need to put the text in triple quotes since the text itself contains quoting characters. End of explanation tokenized_text = word_tokenize(text) text_ner3 = st3.tag(tokenized_text) text_ner7 = st7.tag(tokenized_text) print(text_ner3) print(text_ner7) Explanation: First we need to tokenize the text and then we apply the NER tagger. Let's try both, the 3 class version and the 7 class version. 
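Before constructing the taggers, it can save some debugging time to confirm the prerequisites listed above from within Python. This is just a sketch: it reuses the example paths defined above (which will differ on your machine) and shells out to java -version, which prints its output to stderr.

import os
import subprocess

for path in [jar_location, model_location_3classes, model_location_7classes]:
    print(path, '->', 'found' if os.path.exists(path) else 'MISSING')

# Stanford NER is a Java tool, so check which Java the system will call.
result = subprocess.run(['java', '-version'], capture_output=True, text=True)
print(result.stderr.strip())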
End of explanation from itertools import groupby print("**** 3 classes ****") for tag, chunk in groupby(text_ner3, lambda x:x[1]): if tag != "O": print("%-12s"%tag, " ".join(w for w, t in chunk)) print("**** 7 classes ****") for tag, chunk in groupby(text_ner7, lambda x:x[1]): if tag != "O": print("%-12s"%tag, " ".join(w for w, t in chunk)) Explanation: We see that each word is tagged. Tags are for instance ORGANIZATION or PERSON. Very prominently, the O tag appears often. This is the other class (everything that is not an organisation or person, etc.). But it is still an aweful lot of text. Let's just have a look at the non-other entities detected. We do this assuming that adjacent words having the same tag should be collapsed into one named entity. End of explanation
1,910
Given the following text description, write Python code to implement the functionality described below step by step Description: Feature Step1: NLTK tools Step2: Config Automatically discover the paths to various data folders and compose the project structure. Step3: Identifier for storing these features on disk and referring to them later. Step4: Read data Original question sets. Step5: NLTK built-in stopwords. Step6: Build features Step7: Save features
Python Code: from pygoose import * Explanation: Feature: "Jaccard with WHQ" (@dasolmar) Based on the kernel XGB with whq jaccard by David Solis. Imports This utility package imports numpy, pandas, matplotlib and a helper kg module into the root namespace. End of explanation import nltk from collections import Counter from nltk.corpus import stopwords nltk.download('stopwords') Explanation: NLTK tools End of explanation project = kg.Project.discover() Explanation: Config Automatically discover the paths to various data folders and compose the project structure. End of explanation feature_list_id = '3rdparty_dasolmar_whq' Explanation: Identifier for storing these features on disk and referring to them later. End of explanation df_train = pd.read_csv(project.data_dir + 'train.csv').fillna('') df_test = pd.read_csv(project.data_dir + 'test.csv').fillna('') Explanation: Read data Original question sets. End of explanation stops = set(stopwords.words("english")) Explanation: NLTK built-in stopwords. End of explanation # If a word appears only once, we ignore it completely (likely a typo) # Epsilon defines a smoothing constant, which makes the effect of extremely rare words smaller def get_weight(count, eps=10000, min_count=2): return 0 if count < min_count else 1 / (count + eps) def add_word_count(x, df, word): x['das_q1_' + word] = df['question1'].apply(lambda x: (word in str(x).lower())*1) x['das_q2_' + word] = df['question2'].apply(lambda x: (word in str(x).lower())*1) x['das_' + word + '_both'] = x['das_q1_' + word] * x['das_q2_' + word] train_qs = pd.Series(df_train['question1'].tolist() + df_train['question2'].tolist()).astype(str) words = (" ".join(train_qs)).lower().split() counts = Counter(words) weights = {word: get_weight(count) for word, count in counts.items()} def word_shares(row): q1_list = str(row['question1']).lower().split() q1 = set(q1_list) q1words = q1.difference(stops) if len(q1words) == 0: return '0:0:0:0:0:0:0:0' q2_list = str(row['question2']).lower().split() q2 = set(q2_list) q2words = q2.difference(stops) if len(q2words) == 0: return '0:0:0:0:0:0:0:0' words_hamming = sum(1 for i in zip(q1_list, q2_list) if i[0]==i[1])/max(len(q1_list), len(q2_list)) q1stops = q1.intersection(stops) q2stops = q2.intersection(stops) q1_2gram = set([i for i in zip(q1_list, q1_list[1:])]) q2_2gram = set([i for i in zip(q2_list, q2_list[1:])]) shared_2gram = q1_2gram.intersection(q2_2gram) shared_words = q1words.intersection(q2words) shared_weights = [weights.get(w, 0) for w in shared_words] q1_weights = [weights.get(w, 0) for w in q1words] q2_weights = [weights.get(w, 0) for w in q2words] total_weights = q1_weights + q1_weights R1 = np.sum(shared_weights) / np.sum(total_weights) #tfidf share R2 = len(shared_words) / (len(q1words) + len(q2words) - len(shared_words)) #count share R31 = len(q1stops) / len(q1words) #stops in q1 R32 = len(q2stops) / len(q2words) #stops in q2 Rcosine_denominator = (np.sqrt(np.dot(q1_weights,q1_weights))*np.sqrt(np.dot(q2_weights,q2_weights))) Rcosine = np.dot(shared_weights, shared_weights)/Rcosine_denominator if len(q1_2gram) + len(q2_2gram) == 0: R2gram = 0 else: R2gram = len(shared_2gram) / (len(q1_2gram) + len(q2_2gram)) return '{}:{}:{}:{}:{}:{}:{}:{}'.format(R1, R2, len(shared_words), R31, R32, R2gram, Rcosine, words_hamming) df = pd.concat([df_train, df_test]) df['word_shares'] = df.apply(word_shares, axis=1, raw=True) x = pd.DataFrame() x['das_word_match'] = df['word_shares'].apply(lambda x: float(x.split(':')[0])) x['das_word_match_2root'] = 
np.sqrt(x['das_word_match']) x['das_tfidf_word_match'] = df['word_shares'].apply(lambda x: float(x.split(':')[1])) x['das_shared_count'] = df['word_shares'].apply(lambda x: float(x.split(':')[2])) x['das_stops1_ratio'] = df['word_shares'].apply(lambda x: float(x.split(':')[3])) x['das_stops2_ratio'] = df['word_shares'].apply(lambda x: float(x.split(':')[4])) x['das_shared_2gram'] = df['word_shares'].apply(lambda x: float(x.split(':')[5])) x['das_cosine'] = df['word_shares'].apply(lambda x: float(x.split(':')[6])) x['das_words_hamming'] = df['word_shares'].apply(lambda x: float(x.split(':')[7])) x['das_diff_stops_r'] = np.abs(x['das_stops1_ratio'] - x['das_stops2_ratio']) x['das_len_q1'] = df['question1'].apply(lambda x: len(str(x))) x['das_len_q2'] = df['question2'].apply(lambda x: len(str(x))) x['das_diff_len'] = np.abs(x['das_len_q1'] - x['das_len_q2']) x['das_caps_count_q1'] = df['question1'].apply(lambda x:sum(1 for i in str(x) if i.isupper())) x['das_caps_count_q2'] = df['question2'].apply(lambda x:sum(1 for i in str(x) if i.isupper())) x['das_diff_caps'] = np.abs(x['das_caps_count_q1'] - x['das_caps_count_q2']) x['das_len_char_q1'] = df['question1'].apply(lambda x: len(str(x).replace(' ', ''))) x['das_len_char_q2'] = df['question2'].apply(lambda x: len(str(x).replace(' ', ''))) x['das_diff_len_char'] = np.abs(x['das_len_char_q1'] - x['das_len_char_q2']) x['das_len_word_q1'] = df['question1'].apply(lambda x: len(str(x).split())) x['das_len_word_q2'] = df['question2'].apply(lambda x: len(str(x).split())) x['das_diff_len_word'] = np.abs(x['das_len_word_q1'] - x['das_len_word_q2']) x['das_avg_word_len1'] = x['das_len_char_q1'] / x['das_len_word_q1'] x['das_avg_word_len2'] = x['das_len_char_q2'] / x['das_len_word_q2'] x['das_diff_avg_word'] = np.abs(x['das_avg_word_len1'] - x['das_avg_word_len2']) # x['exactly_same'] = (df['question1'] == df['question2']).astype(int) # x['duplicated'] = df.duplicated(['question1','question2']).astype(int) whq_words = ['how', 'what', 'which', 'who', 'where', 'when', 'why'] for whq in whq_words: add_word_count(x, df, whq) whq_columns_q1 = ['das_q1_' + whq for whq in whq_words] whq_columns_q2 = ['das_q2_' + whq for whq in whq_words] x['whq_count_q1'] = x[whq_columns_q1].sum(axis=1) x['whq_count_q2'] = x[whq_columns_q2].sum(axis=1) x['whq_count_diff'] = np.abs(x['whq_count_q1'] - x['whq_count_q2']) feature_names = list(x.columns.values) print("Features: {}".format(feature_names)) X_train = x[:df_train.shape[0]].values X_test = x[df_train.shape[0]:].values Explanation: Build features End of explanation project.save_features(X_train, X_test, feature_names, feature_list_id) Explanation: Save features End of explanation
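As a quick, self-contained sanity check of the Jaccard-style count share packed into word_shares (the second colon-separated value), here is the same ratio computed on a made-up question pair. The questions and the resulting number are purely illustrative; the snippet reuses the stops set defined above.

q1 = set('how do i learn python quickly'.split()) - stops
q2 = set('how can i learn python fast'.split()) - stops
shared = q1 & q2
count_share = len(shared) / (len(q1) + len(q2) - len(shared))  # intersection over union
print(shared, count_share)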
1,911
Given the following text description, write Python code to implement the functionality described below step by step Description: Reinforcement Learning with Policy Gradients using PyTorch Agenda What reinforcement learning is all about? Introduction Deep Reinforcement Learning Methods to solve reinforcement learning problems Insight into Policy Gradients PG Pong Environment (OpenAI Gym) Input (preprocessing) Model (policy network) How to decide what action to take and why with stochastic policy? Learning! Extras Why to discount rewards? Deriving Policy Gradients from score function gradient estimator Policy distribution shifting interpretation Weights visualization Improvement ideas 1. What reinforcement learning is all about? Introduction Reinforcement Learning is a framework to formalize substantial amount of reward-related learning problems An RL algorithm seeks to maximize the agent’s expected return (total future reward), given a previously unknown environment, through a trial-and-error learning process. Our solution will be a policy. Deep Reinforcement Learning Solve RL problems through deep learning. Methods All of them seek to maximize expected return but in different ways. Policy Gradients Expected return Step2: Input (preprocessing) Step4: Model Step5: Stochastic policy We use stochastic policy which means our model produces probability distribution over all actions, π(a | s) = probability of action given state. Then we sample from this distribution in order to get action. Why stochastic policy? Step6: Learning Supervised Learning Maximize log likelihood of true label (e.g. cross-entropy error). Loss Step7: 3. Extras Discounted reward In a more general RL setting we would receive some reward \(r_t\) at every time step. One common choice is to use a discounted reward, so the “eventual reward” in the diagram above would become
Python Code: import gym env = gym.make('Pong-v0').unwrapped observation = env.reset() while True: env.render() observation, reward, done, _ = env.step(action) # Record reward for future training policy.rewards.append(reward) reward_sum += reward Explanation: Reinforcement Learning with Policy Gradients using PyTorch Agenda What reinforcement learning is all about? Introduction Deep Reinforcement Learning Methods to solve reinforcement learning problems Insight into Policy Gradients PG Pong Environment (OpenAI Gym) Input (preprocessing) Model (policy network) How to decide what action to take and why with stochastic policy? Learning! Extras Why to discount rewards? Deriving Policy Gradients from score function gradient estimator Policy distribution shifting interpretation Weights visualization Improvement ideas 1. What reinforcement learning is all about? Introduction Reinforcement Learning is a framework to formalize substantial amount of reward-related learning problems An RL algorithm seeks to maximize the agent’s expected return (total future reward), given a previously unknown environment, through a trial-and-error learning process. Our solution will be a policy. Deep Reinforcement Learning Solve RL problems through deep learning. Methods All of them seek to maximize expected return but in different ways. Policy Gradients Expected return: $$ J\left( \mathbf{\theta}\right) =\mathbf{E}\left[ \sum\nolimits_{t=0}^{T}R_{t}\right] \ R_t - \text{random variable representing reward reached at time } t \ \text{ following policy } \pi \text{ from some initial state} \ T - \text{final time step or end of the episode} $$ All we do is finding gradient estimate of expected return to do stochastic gradient ascend update! Convergence If the gradient estimate is unbiased and learning rates fulfill \(\sum\textstyle_{h=0}^{\infty}\alpha_{h}>0\) and \(\sum\textstyle_{h=0}^{\infty}\alpha_{h}^{2}=\textrm{const}\ ,\) the learning process is guaranteed to converge at least to a local minimum. 2. PG Pong Environment (OpenAI Gym) End of explanation def preprocess(img): Preprocess 210x160x3 uint8 frame into 6400 (80x80) 1D float vector I = img[35:195] # crop I = I[::2, ::2, 0] # downsample by factor of 2 I[I == 144] = 0 # erase background (background type 1) I[I == 109] = 0 # erase background (background type 2) I[I != 0] = 1 # everything else (paddles, ball) just set to 1 return I.astype(np.float).ravel() Explanation: Input (preprocessing) End of explanation class PolicyGradient(nn.Module): It's out model class. def __init__(self, in_dim): super(PolicyGradient, self).__init__() self.hidden = nn.Linear(in_dim, 200) self.out = nn.Linear(200, 3) self.rewards = [] self.actions = [] # Weights initialization for m in self.modules(): if isinstance(m, nn.Linear): # 'n' is number of inputs to each neuron n = len(m.weight.data[1]) # "Xavier" initialization m.weight.data.normal_(0, np.sqrt(2. 
/ n)) m.bias.data.zero_() def forward(self, x): h = F.relu(self.hidden(x)) logits = self.out(h) return F.softmax(logits) def reset(self): del self.rewards[:] del self.actions[:] Explanation: Model End of explanation def get_action(policy, observation): # Get current state, which is difference between current and previous state cur_state = preprocess(observation) state = cur_state - get_action.prev_state \ if get_action.prev_state is not None else np.zeros(len(cur_state)) get_action.prev_state = cur_state var_state = Variable( # Make torch FloatTensor from numpy array and add batch dimension torch.from_numpy(state).type(FloatTensor).unsqueeze(0) ) probabilities = policy(var_state) # Stochastic policy: roll a biased dice to get an action action = probabilities.multinomial() # Record action for future training policy.actions.append(action) # '+ 1' converts action to valid Pong env action return action.data[0, 0] + 1 Explanation: Stochastic policy We use stochastic policy which means our model produces probability distribution over all actions, π(a | s) = probability of action given state. Then we sample from this distribution in order to get action. Why stochastic policy?: We can use the score function gradient estimator, which tries to make good actions more probable. Stochastic environments. Partially observable states. The randomness inherent in the policy leads to exploration, which is crucial for most learning problems. End of explanation # Let's play the game ;) while True: [...] ### Here actions are taken in environment ### action = get_action(policy, observation) observation, reward, done, _ = env.step(action) # Record reward for future training policy.rewards.append(reward) reward_sum += reward ### Here is our reinforcement learning logic ### if done: num_episodes += 1 [...] # Reinforce actions for action, reward in zip(policy.actions, rewards): action.reinforce(reward) # BACKPROP!!! (Gradients are accumulated each episode until update) autograd.backward(policy.actions, [None for a in policy.actions]) ### Here we do weight update each batch ### if num_episodes % HPARAMS.batch_size == 0: optimizer.step() optimizer.zero_grad() print "### Updated parameters! ###" Explanation: Learning Supervised Learning Maximize log likelihood of true label (e.g. cross-entropy error). Loss: \( \sum_i log p(\text{a } \vert \text{ img}) \) Reinforcement Learning Maximize log likelihood of good action and minimize it for bad actions (via advantage or on diagram "eventual reward"). Policy Gradients: Run a policy for a while. See what actions led to high rewards. Increase their probability. Loss: \( \sum_i A_i log p(\text{a } \vert \text{ img}) \) End of explanation # Compute discounted reward discounted_R = [] running_add = 0 for reward in policy.rewards[::-1]: if reward != 0: # Reset the sum, since this was a game boundary (pong specific!) running_add = 0 running_add = running_add * HPARAMS.gamma + reward # "Further" actions have less discounted rewards discounted_R.insert(0, running_add) rewards = FloatTensor(discounted_R) # Standardize rewards rewards = (rewards - rewards.mean()) / \ (rewards.std() + np.finfo(np.float32).eps) # Batch size shouldn't influence update step rewards = rewards / HPARAMS.batch_size Explanation: 3. Extras Discounted reward In a more general RL setting we would receive some reward \(r_t\) at every time step. 
One common choice is to use a discounted reward, so the “eventual reward” in the diagram above would become: $$ R_t = \sum_{k=0}^{\infty} \gamma^k r_{t+k} \ 0 \leq \gamma < 1 $$ But why discounted? * We care more about tomorrow than what will be sometime in the distant future. * In infinite horizont without discount we would get infinite rewards (infinite in this case means troubles). Know your limit!... $$ 0 \leq \gamma < 1 \ R_t = \sum_{k=0}^{\infty} \gamma^k r_{t+k} \leq \sum_{k=0}^{\infty} \gamma^k R_{max} = \frac{R_{max}}{1 - \gamma} $$ Infinite horizont has finite sum of discounted rewards... Why is this important? Maximum Expected Utility (MEU) principle says... A rational agent should chose the action that maximizes its expected utility given its knowlage. Expected utility in state s with respect to policy: $$ U^{\pi}(s) = E[\sum_{t = 0}^{\infty}\gamma^tR(S_t)] \ S_t - \text{random variable representing state reached at time } t \text{ following policy } \pi $$ ...where the expectation is with respect to the probability distribution over state sequences determined by s and π. Comparing infinities could be problematic... End of explanation
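One caveat worth flagging: the action.reinforce(...) / autograd.backward(policy.actions, ...) pattern used in the training loop above comes from early PyTorch and was removed in later releases. Below is a hedged sketch of the equivalent update written with torch.distributions; probs_batch, actions_batch and advantages are stand-in names for the stacked per-step policy outputs, the sampled actions, and the discounted, standardized rewards computed above, not variables defined in the original code.

import torch
from torch.distributions import Categorical

def policy_gradient_loss(probs_batch, actions_batch, advantages):
    # log pi(a_t | s_t) for every recorded step, weighted by its advantage.
    dist = Categorical(probs=probs_batch)        # probs_batch: (T, num_actions)
    log_probs = dist.log_prob(actions_batch)     # actions_batch: (T,) integer actions
    return -(log_probs * advantages).sum()       # minimize the negative objective

# Typical use inside the update step:
# loss = policy_gradient_loss(probs_batch, actions_batch, rewards)
# loss.backward(); optimizer.step(); optimizer.zero_grad()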
1,912
Given the following text description, write Python code to implement the functionality described below step by step Description: Debug Models Using tfdbg Open a Terminal through Jupyter Notebook (Menu Bar -> Terminal -> New Terminal) Run the Next Cell to Display the Code Find the DebugWrapper around the tf.Session sess = tf.Session(config=config) sess = tf_debug.LocalCLIDebugWrapperSession(sess) Step1: Run the following in the Terminal (CPU)
Python Code: %%bash cat /root/src/main/python/debug/debug_model_cpu.py Explanation: Debug Models Using tfdbg Open a Terminal through Jupyter Notebook (Menu Bar -> Terminal -> New Terminal) Run the Next Cell to Display the Code Find the DebugWrapper around the tf.Session sess = tf.Session(config=config) sess = tf_debug.LocalCLIDebugWrapperSession(sess) End of explanation %%bash cat /root/src/main/python/debug/debug_model_gpu.py Explanation: Run the following in the Terminal (CPU): python /root/src/main/python/debug/debug_model_cpu.py End of explanation
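For reference, here is a minimal, self-contained sketch of the wrapping pattern described above, applied to a toy graph rather than the actual contents of debug_model_cpu.py (which are not reproduced here). It assumes TF1-style sessions and the tensorflow.python.debug module; running it drops you into the tfdbg CLI on the first session.run call.

import tensorflow as tf
from tensorflow.python import debug as tf_debug

a = tf.constant([1.0, 2.0], name='a')
b = tf.constant([3.0, 4.0], name='b')
total = tf.add(a, b, name='total')

sess = tf.Session()
sess = tf_debug.LocalCLIDebugWrapperSession(sess)  # wrap the session for tfdbg
print(sess.run(total))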
1,913
Given the following text problem statement, write Python code to implement the functionality described below in problem statement Problem: I have a data frame with one (string) column and I'd like to split it into two (string) columns, with one column header as 'fips' and the other as 'row'.
Problem:
import pandas as pd

df = pd.DataFrame({'row': ['00000 UNITED STATES', '01000 ALABAMA',
                           '01001 Autauga County, AL', '01003 Baldwin County, AL',
                           '01005 Barbour County, AL']})

def g(df):
    return pd.DataFrame(df.row.str.split(' ', 1).tolist(), columns=['fips', 'row'])

df = g(df.copy())
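For reference, the same split can be done with pandas' built-in expand=True. This is a sketch on a rebuilt copy of the original single-column frame (so it stays self-contained); n=1 limits the split to the first space, so the county names keep their trailing ', AL' part.

import pandas as pd

raw = pd.DataFrame({'row': ['00000 UNITED STATES', '01000 ALABAMA',
                            '01001 Autauga County, AL']})
out = raw['row'].str.split(' ', n=1, expand=True)
out.columns = ['fips', 'row']
print(out)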
1,914
Given the following text description, write Python code to implement the functionality described below step by step Description: Step1: I wanted to implement a GAN, but my derivatives didn't work after a lot of tries, so I gave up. If you are available on Monday or Tuesday, can we have a look at them? Generative Adversarial Networks In this notebook, I will try to implement the idea of a GAN for capturing unimodal and bimodal distributions. Step2: As our input vector we will always use a uniform random distribution, but instead of the 0-1 range we will use the 0-100 range to make life a little bit easier for our Generator network. Step3: Generator Network We will use a basic 2-layer FCN for our generator and discriminator; since our dataset is basically one dimensional, it should be enough. Step4: Gradient Check Step5: Let us start with a standard normal distribution as our dataset
Python Code: # As usual, a bit of setup import time, os, json import numpy as np import matplotlib.pyplot as plt from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array %matplotlib inline plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'gray' # for auto-reloading external modules # see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython %load_ext autoreload %autoreload 2 def rel_error(x, y): returns relative error return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y)))) Explanation: I wanted to implement a GAN but my derivatives didn't work after a lot of try, so I gave up. If you are available in Monday or Tuesday can we have a look at them? Generative Adverserial Networks In this notebook, I will try to implement the idea of a GAN, for capturing unimodal and bimodal distributions. End of explanation inputVector = np.random.uniform(-100.,100.,(1000000,1)) Explanation: As our input vector we will always use a uniform random distrubiton, but instead of using between 0-1 we will use 0-100 range to make life a little bit easier for our Generator network. End of explanation from cs231n.classifiers.neural_net import * #we will use the 2layer FCN from HW1 g_inpsize = 1 #we will just take 1 real number uniform distrubtion g_hidsize = 50 # g_outsize = 1 #we will output a real number from our data distribution Generator = GenNet(g_inpsize, g_hidsize, g_outsize) #it is gonna perform cross entropy loss instead of softmax, #since our labels are just real or fake, #it fits our purposes d_inpsize = 1 #first check without batch normaliziton d_hidsize = 50 d_outsize = 1 #again we are just gonna output real or fake Discriminator = DiscNet(d_inpsize, d_hidsize, d_outsize) #we will use slightly modified version of TwoLayerNet #we just added sigmoid at the output layer since all of #our outputs needs to be in range [0,1] Explanation: Generator Network We will use a basic 2 layer FCN for our generator and discriminator, since our dataset is basically one dimensional it should be enough End of explanation from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array D = 1 x = np.random.rand(200, D) y = np.ones((200,1)) W1 = np.random.randn(D, d_hidsize) W2 = np.random.randn(d_hidsize, d_hidsize) W3 = np.random.randn(d_hidsize, d_outsize) b1 = np.random.randn(d_hidsize) b2 = np.random.randn(d_hidsize) b3 = np.random.randn(d_outsize) fx = lambda x: Discriminator.loss(x, y)[0] fW1 = lambda W1: Discriminator.loss(x, y, W1=W1, W2=W2, W3=W3, b1=b1, b2=b2, b3=b3)[0] fW2 = lambda W2: Discriminator.loss(x, y, W1=W1, W2=W2, W3=W3, b1=b1, b2=b2, b3=b3)[0] fW3 = lambda W3: Discriminator.loss(x, y, W1=W1, W2=W2, W3=W3, b1=b1, b2=b2, b3=b3)[0] fb1 = lambda b1: Discriminator.loss(x, y, W1=W1, W2=W2, W3=W3, b1=b1, b2=b2, b3=b3)[0] fb2 = lambda b2: Discriminator.loss(x, y, W1=W1, W2=W2, W3=W3, b1=b1, b2=b2, b3=b3)[0] fb3 = lambda b3: Discriminator.loss(x, y, W1=W1, W2=W2, W3=W3, b1=b1, b2=b2, b3=b3)[0] num_grad = lambda x,y: eval_numerical_gradient(x,y, verbose=False, h=1e-6) dx_num = num_grad(fx, x) dW1_num = num_grad(fW1, W1) dW2_num = num_grad(fW2, W2) dW3_num = num_grad(fW3, W3) db1_num = num_grad(fb1, b1) db2_num = num_grad(fb2, b2) db3_num = num_grad(fb3, b3) loss, grads = Discriminator.loss(x, y, W1=W1, W2=W2, W3=W3, b1=b1, b2=b2, b3=b3) dx, dW1, dW2, dW3, db1, db2, db3 = grads['X'], grads['W1'], grads['W2'], grads['W3'], 
grads['b1'], grads['b2'], grads['b3'] print 'dx error: ', rel_error(dx_num, dx) print 'dW1 error: ', rel_error(dW1_num, dW1) print 'dW2 error: ', rel_error(dW2_num, dW2) print 'dW3 error: ', rel_error(dW3_num, dW3) print 'db1 error: ', rel_error(db1_num, db1) print 'db2 error: ', rel_error(db2_num, db2) print 'db3 error: ', rel_error(db3_num, db3) D = 1 x = np.random.randn(200, D) W1 = np.random.randn(D, g_hidsize) W2 = np.random.randn(g_hidsize, g_hidsize) W3 = np.random.randn(g_hidsize, g_outsize) b1 = np.random.randn(g_hidsize) b2 = np.random.randn(g_hidsize) b3 = np.random.randn(g_outsize) fx = lambda x: Generator.loss(x, Discriminator, W1=W1, W2=W2, W3=W3, b1=b1, b2=b2, b3=b3)[0] fW1 = lambda W1: Generator.loss(x, Discriminator, W1=W1, W2=W2, W3=W3, b1=b1, b2=b2, b3=b3)[0] fW2 = lambda W2: Generator.loss(x, Discriminator, W1=W1, W2=W2, W3=W3, b1=b1, b2=b2, b3=b3)[0] fW3 = lambda W3: Generator.loss(x, Discriminator, W1=W1, W2=W2, W3=W3, b1=b1, b2=b2, b3=b3)[0] fb1 = lambda b1: Generator.loss(x, Discriminator, W1=W1, W2=W2, W3=W3, b1=b1, b2=b2, b3=b3)[0] fb2 = lambda b2: Generator.loss(x, Discriminator, W1=W1, W2=W2, W3=W3, b1=b1, b2=b2, b3=b3)[0] fb3 = lambda b3: Generator.loss(x, Discriminator, W1=W1, W2=W2, W3=W3, b1=b1, b2=b2, b3=b3)[0] num_grad = lambda x,y: eval_numerical_gradient(x,y, verbose=False) dx_num = num_grad(fx, x) dW1_num = num_grad(fW1, W1) dW2_num = num_grad(fW2, W2) dW3_num = num_grad(fW3, W3) db1_num = num_grad(fb1, b1) db2_num = num_grad(fb2, b2) db3_num = num_grad(fb3, b3) loss, grads = Generator.loss(x, Discriminator, W1=W1, W2=W2, W3=W3, b1=b1, b2=b2, b3=b3) dx, dW1, dW2, dW3, db1, db2, db3 = grads['X'], grads['W1'], grads['W2'], grads['W3'], grads['b1'], grads['b2'], grads['b3'] print 'dx error: ', rel_error(dx_num, dx) print 'dW1 error: ', rel_error(dW1_num, dW1) print 'dW2 error: ', rel_error(dW2_num, dW2) print 'dW3 error: ', rel_error(dW3_num, dW3) print 'db1 error: ', rel_error(db1_num, db1) print 'db2 error: ', rel_error(db2_num, db2) print 'db3 error: ', rel_error(db3_num, db3) Explanation: Gradient Check End of explanation N = 1000000 mean = 0.0 std = 1.0 data = np.random.normal(mean, std, (N,1)) _ = plt.hist(data, 100, normed=1) _ = plt.hist(inputVector[np.abs(inputVector)<4], 100, normed=1) g_inpsize = 1 #we will just take 1 real number uniform distrubtion g_hidsize = 50 # g_outsize = 1 #we will output a real number from our data distribution Generator = GenNet(g_inpsize, g_hidsize, g_outsize) #it is gonna perform cross entropy loss instead of softmax, #since our labels are just real or fake, #it fits our purposes d_inpsize = 1 #first check without batch normaliziton d_hidsize = 50 d_outsize = 1 #again we are just gonna output real or fake Discriminator = DiscNet(d_inpsize, d_hidsize, d_outsize) #we will use slightly modified version of TwoLayerNet #we just added sigmoid at the output layer since all of #our outputs needs to be in range [0,1] Trainer = GANTrainer(Generator, Discriminator, data, update_rule='adam', num_epochs=5, batch_size=100, optim_config={ 'learning_rate': 2e-5, }, lr_decay=0.995, verbose=True, print_every=10000) Trainer.train() plt.plot([x[0] for x in Trainer.loss_history]) plt.plot([x[1] for x in Trainer.loss_history]) plt.plot([x[2] for x in Trainer.loss_history]) Explanation: Let us start with a standart distribution as our dataset End of explanation
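Since the note at the top says the analytic derivatives were the sticking point, here is a small self-contained check of the gradient that most often causes trouble in this setup: for a sigmoid output with binary cross-entropy (the discriminator's loss), the gradient with respect to the pre-sigmoid scores reduces to (p - y) per example. This is plain NumPy and deliberately independent of the GenNet/DiscNet classes above.

import numpy as np

def bce_with_logits(z, y):
    p = 1.0 / (1.0 + np.exp(-z))                     # sigmoid
    loss = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
    dz = (p - y) / len(z)                            # analytic gradient of the mean loss
    return loss, dz

z = np.random.randn(5)
y = np.array([1., 0., 1., 1., 0.])
loss, dz = bce_with_logits(z, y)

h = 1e-6
dz_num = np.array([(bce_with_logits(z + h * e, y)[0] -
                    bce_with_logits(z - h * e, y)[0]) / (2 * h)
                   for e in np.eye(len(z))])
print(np.max(np.abs(dz - dz_num)))  # should print a very small number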
1,915
Given the following text description, write Python code to implement the functionality described below step by step Description: Get Started Here are some sample queries. See what BQX can do. Initialization Step1: 1. Simple examples 1.1 Make simple query. Step2: 1.2 Get rid of quotes using Aliases. Step3: 1.3 You'll want WHERE clause. Column alias has overridden operators. It provides syntax highlighting feature on conditions. Step4: 1.4 SUM of column? Of course! Step5: 2. BQX's special features 2.1 Keep it partial. Use it later. Put your query in in-complete state (we call it 'partial query'). Generate variety of queries with Python's power. Step6: 2.2 Escape from bracket hell. I guess you have ever seen a nested query in nested query in nested query with bunch of AS clauses like Step7: 2.3 I WANT MORE, MORE SIMPLE QUERY!!! BQX have SELECT chain feature for simplification. Literally you can chain SELECT clauses and omit FROM clauses. Here is another example which provides identical query shown above, with shorter code.
Python Code: from bqx.query import Query as Q from bqx.parts import Table as T, Column as C from bqx.func import SUM Explanation: Get Started Here are some sample queries. See what BQX can do. Initialization End of explanation q = Q().SELECT('name').FROM('sample_table') print(q.getq()) Explanation: 1. Simple examples 1.1 Make simple query. End of explanation sample_table = T('sample_table') name = C('name') q = Q().SELECT(name).FROM(sample_table) print(q.getq()) Explanation: 1.2 Get rid of quotes using Aliases. End of explanation sample_table = T('sample_table') name = C('name') q = Q().SELECT(name).FROM(sample_table).WHERE(name == 'Hatsune Miku') print(q.getq()) Explanation: 1.3 You'll want WHERE clause. Column alias has overridden operators. It provides syntax highlighting feature on conditions. End of explanation sample_table = T('sample_table') name = C('name') score = C('score') score_sum = SUM(score) q = Q().SELECT(name, score_sum).FROM(sample_table).WHERE(name == 'Hatsune Miku').GROUP_BY(score) print(q.getq()) Explanation: 1.4 SUM of column? Of course! End of explanation sample_tables = [T('table_foo'), T('table_bar'), T('table_baz')] name = C('name') q = Q().SELECT(name) # Query without FROM??? for table in sample_tables: print(q.FROM(table).getq()) # Now it's complete query print() Explanation: 2. BQX's special features 2.1 Keep it partial. Use it later. Put your query in in-complete state (we call it 'partial query'). Generate variety of queries with Python's power. End of explanation # Call AS function manually to define AS clause. x = T('table_x').AS('x') y = T('table_y').AS('y') # You don't have to call AS func all time. # If you say auto_alias is True, AS clause will be auto-generated # next to columns like 'x.pid', 'x.a', 'x.b', 'x.c' declared below. q1 = ( Q(auto_alias=True) .SELECT(x.pid, x.a, x.b, x.c, y.name) .FROM(x) .INNER_JOIN(y) .ON(x.pid == y.pid)) pid, name, a, b, c = C('pid'), C('name'), C('a'), C('b'), C('c') average_calc = ((a + b + c) / 3).AS('average') q2 = ( Q() .SELECT(pid, average_calc, name) .FROM(q1)) average = C('average') q3 = ( Q() .SELECT(average, name) .FROM(q2) .ORDER_BY(name)) print(q3.getq()) Explanation: 2.2 Escape from bracket hell. I guess you have ever seen a nested query in nested query in nested query with bunch of AS clauses like: sql SELECT average, name FROM ( SELECT pid, (a+b+c)/3 AS average, name FROM ( SELECT x.pid AS pid, x.a AS a, x.b AS b, x.c AS c, y.name AS name FROM [dataset.x] AS x INNER JOIN [dataset.y] as y ON x.pid = y.pid ) ) ORDER BY name Here is a solution to this. Sub query reference feature and Auto alias feature is used. End of explanation x = T('table_x').AS('x') y = T('table_y').AS('y') pid, name, average, a, b, c = C('pid'), C('name'), C('average'), C('a'), C('b'), C('c') average_calc = ((a + b + c) / 3).AS('average') q = ( Q(auto_alias=True) .SELECT(x.pid, x.a, x.b, x.c, y.name) .FROM(x) .INNER_JOIN(y) .ON(x.pid == y.pid) .SELECT(pid, average_calc, name) .SELECT(average, name) .ORDER_BY(name)) print(q.getq()) Explanation: 2.3 I WANT MORE, MORE SIMPLE QUERY!!! BQX have SELECT chain feature for simplification. Literally you can chain SELECT clauses and omit FROM clauses. Here is another example which provides identical query shown above, with shorter code. End of explanation
1,916
Given the following text description, write Python code to implement the functionality described below step by step Description: SPLAT Tutorials Step1: Reading in and visualizing spectra SPLAT contains a built-in library of published SpeX prism spectra of ultracool dwarfs. It is also possible to download additional spectral datasets and read in your own spectrum or a spectrum from an website. Once you've read a spectrum into a Spectrum object, you can use the built-in features to visualize the spectrum. Step2: Plotting spectra There are several nice features contained in the splat.plot code and built into the .plot() routine that allows for publication-ready plots of spectra. Here's just a few examples Step3: Spectrum manipulation There are many built-in features for manipulating a spectrum object Step4: Spectral Math The Spectrum object takes care of all of the necessary math steps to add, subtract, multiply and divide spectra Step5: We can also compare spectra to each other using the compareSpectra routine, which returns a comparison statistic (by default chi^2) and a scale factor Step6: Comparing spectra and spectral classification We often want to compare spectra against each other, either to classify or to fit to models. The main function to do this is splat.compareSpectra, which returns the comparison statistic and optimal scale factor, and has many options for modifying and visualizing the comparison. Step7: A more efficient way to accomplish this is to use the built-in splat.classifyByStandard() function which will find the best match among pre-defined standards Step8: Classify by indices You can also use spectral indices to classify spectra; these indices sample specific features, such as molecular absorption bands Step9: Classify gravity Allers & Liu (2013) have published a gravity classification scheme that allows us to distinguish low-gravity (young) obejcts from high-gravity (old) objects Step10: Index measurement SPLAT has built-in functions to do index measurement, including literature-defined index sets and empirical relations to turn these into classifications Step11: Exercise Here's a real science case Step12: Exercise Solution
Python Code: # main splat import import splat import splat.plot as splot import splat.photometry as sphot import splat.empirical as spem # other useful imports import matplotlib.pyplot as plt import numpy as np import pandas import astropy.units as u from astropy.io import fits from astropy.utils.data import download_file # check what version you are using splat.VERSION # check that you have some spectra in the library splat.DB_SOURCES # who has contributed to this code? splat.AUTHORS Explanation: SPLAT Tutorials: Basic Spectral Analysis Authors Adam Burgasser Version date 18 January 2022 Learning Goals Read in a spectrum from the SPLAT database or externally (splat.searchLibrary, splat.getSpectrum) Plot a spectrum (splat.Spectrum.plot) Some basic manipulation of spectra - normalizing, scaling, trimming, changing units, spectral math (splat.Spectrum) Flux calibrate a spectrum (splat.Spectrum.fluxCalibrate) Compare a spectrum to another spectrum (splat.compareSpectrum) Compare a spectrum a set of spectral standards (splat.classifyByStandard) Measure a set of indices to infer a classification (splat.measureIndexSet, splat.classifyByIndex) Keywords spectral archive, spectral analysis, indices, classification Companion Content None Summary In this tutorial, we will examine how to draw a spectrum from the SPLAT library and conduct some basic spectral analyses to that object, including visualization, manipulation of the spectrum, using photometry to flux calibrate or measure the colors of a spectrum, measure spectral indices, and classification. Starting off Let's make sure the code is properly downloaded through the import statements; see http://pono.ucsd.edu/~adam/browndwarfs/splat/ for more detail on the proper installation procedures End of explanation splat.getSpectrum? # grab a random spectrum from the library and plot it # this produces a list of Spectrum objects so we want just the first one sp = splat.getSpectrum(lucky=True)[0] sp.plot() # get some information about this spectrum using info() sp.info() # grab a random L5 dwarf # this produces a list of Spectrum objects so we want just the first one sp = splat.getSpectrum(spt='L5', lucky=True)[0] sp.plot() # grab a very specific spectrum based on its source ID sp = splat.Spectrum(10001) sp.plot() # grab all the spectra of TWA 30A splist = splat.getSpectrum(name='TWA 30A') print(splist) for sp in splist: sp.plot() # grab a spectrum based on a "shortname" (RA and DEC shorthand) sp = splat.getSpectrum(shortname='J0559-1404')[0] sp.plot() # we can also search the library for spectra # this produces a pandas table of the relevant spectra s = splat.searchLibrary(spt=['L5','L9'],snr=50) s # choose one of these spectra sp = splat.Spectrum(s['DATA_KEY'][1]) sp.plot() # read in a spectrum from an online fits file f = download_file('http://pono.ucsd.edu/~adam/data/spex_test/spex_prism_PSOJ0077921+578267_120924.fits',cache="update") sp = splat.Spectrum(file=f,file_type='fits',name='PSOJ0077921+578267') sp.plot() Explanation: Reading in and visualizing spectra SPLAT contains a built-in library of published SpeX prism spectra of ultracool dwarfs. It is also possible to download additional spectral datasets and read in your own spectrum or a spectrum from an website. Once you've read a spectrum into a Spectrum object, you can use the built-in features to visualize the spectrum. 
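One more option hinted at above for reading in your own data: if a spectrum lives in a plain text file, you can build a Spectrum object directly from wavelength and flux arrays, the same constructor pattern used later in this tutorial for the continuum spectrum. The file name, column layout, and units below are placeholders, and the exact keyword set may vary between SPLAT versions.

import numpy as np
import astropy.units as u

wave, flux = np.loadtxt('my_spectrum.txt', unpack=True)  # columns: wavelength, flux
sp_local = splat.Spectrum(wave=wave * u.micron,
                          flux=flux * (u.erg / u.s / u.cm**2 / u.micron),
                          name='my local spectrum')
sp_local.plot()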
End of explanation # get a nice high S/N L4 spectrum sp = splat.getSpectrum(spt='L4',snr=50,lucky=True)[0] sp.plot() # there are some nice addons on the default plot routine # this shows the regions of strong telluric absorption sp.plot(telluric=True) # this shows the locations of key spectral features sp.plot(features=['feh','h2o','co']) # or you can plot a pre-defined set of features sp.plot(ldwarf=True) # you can save you figure to .pdf or .png files sp.plot(ldwarf=True,telluric=True,output='MyPlot.pdf') # you can also plot a set of spectra using splat.plot.plotSpectrum commands # this sequence reads in all of the TWA30B spectrum, normalizes them, and # saves the file as a PDF file in your directory splist = splat.getSpectrum(name = 'TWA 30B') # get all 20 spectra of TWA 30B for sp in splist: sp.normalize([1.0,1.5]) # normalize the spectra legend = [sp.observation_date for sp in splist] # assigned legends to correspond to the observing dates splot.plotSpectrum(splist,multiplot=True,layout=[2,2],multipage=True,legend=legend,yrange=[0,1.2],output='TWA30B.pdf') Explanation: Plotting spectra There are several nice features contained in the splat.plot code and built into the .plot() routine that allows for publication-ready plots of spectra. Here's just a few examples End of explanation # grab a random T5 dwarf sp = splat.getSpectrum(spt='T5', lucky=True)[0] sp.plot(tdwarf=True) # normalize the spectrum to maximum value sp.normalize() sp.plot() # normalize over a specific region sp.normalize([1.5,1.7]) sp.plot() # multiple by a scale factor sp.scale(50) sp.plot() # flux calibrate the spectrum using a photometric magnitude # form SpeX prism spectra these should be filters in the 1-2.5 micron range # such as 2MASS JHKs, UKIDSS JHK, HST F110W/F160W, etc. 
sp.fluxCalibrate('2MASS J',14.5,absolute=True) # the "absolute" flag indicates this is an absolute magnitude sp.plot() # trim the edges sp.trim([0.9,2.3]) sp.plot(telluric=True) # mask part of the spectrum sp.maskFlux([1.8,2.0]) sp.plot() # change the wavelength units sp.toWaveUnit(u.Angstrom) sp.plot() # change the flux units sp.toFluxUnit(u.W/u.m/u.m/u.Angstrom) sp.plot() # change to fnu units (erg/cm2/s/Hz) sp.toFnu() sp.plot() # reset all your changes to go back to the original spectrum sp.reset() sp.plot() Explanation: Spectrum manipulation There are many built-in features for manipulating a spectrum object End of explanation # read in two M-type spectra, normalize them and add them together sp1 = splat.getSpectrum(spt=['M5','M9'],lucky=True)[0] sp2 = splat.getSpectrum(spt=['M5','M9'],lucky=True)[0] sp1.normalize() sp2.normalize() # add together sp3 = sp1+sp2 # plot this up using matplotlib plt.plot(sp1.wave,sp1.flux,'b-') plt.plot(sp2.wave,sp2.flux,'g-') plt.plot(sp3.wave,sp3.flux,'k-') plt.legend([sp1.name,sp2.name,'Sum']) plt.ylim([0,2.2]) plt.xlim([0.8,2.4]) plt.xlabel('Wavelength (micron)') plt.ylabel('Normalized Flux Density') sp3.plot() # read in two M7 spectra, normalize them and subtract them to see differences sp1 = splat.getSpectrum(spt='M7',lucky=True)[0] sp2 = splat.getSpectrum(spt='M7',lucky=True)[0] sp1.normalize() sp2.normalize() # subtract sp3 = sp1-sp2 # plot the individual spectra and their difference in two panels plt.subplot(211) plt.plot(sp1.wave,sp1.flux,'b-') plt.plot(sp2.wave,sp2.flux,'g-') #plt.ylim([0,1.2]) plt.xlim([0.8,2.4]) plt.ylabel('Normalized Flux Density') plt.legend([sp1.name,sp2.name]) plt.subplot(212) plt.plot(sp3.wave,sp3.flux,'k-') plt.legend(['Difference']) plt.plot([0.8,2.4],[0,0],'k--') plt.fill_between(sp3.wave,sp3.noise,-1.*sp3.noise,color='k',alpha=0.3) #plt.ylim([-0.5,0.5]) plt.xlim([0.8,2.4]) plt.xlabel('Wavelength (micron)') plt.ylabel('Difference') # fit part of a spectrum to a line and divide this out fit_range = [0.8,1.15] # read in an L dwarf spectrum and trim sp = splat.getSpectrum(spt='L4',snr=40,lucky=True)[0] sp.trim(fit_range) # fit to a line using np.polyfit fit = np.polyfit(sp.wave.value,sp.flux.value,1) print(fit) # generate a spectrum that is this linear function sp_continuum = splat.Spectrum(wave=sp.wave,flux=np.polyval(fit,sp.wave.value)*sp.flux.unit) # divide out this continuum sp_normalized = sp/sp_continuum # plot the results plt.subplot(211) plt.plot(sp.wave,sp.flux,'k-') plt.plot(sp_continuum.wave,sp_continuum.flux,'g-') plt.ylim([0,np.nanquantile(sp.flux.value,0.98)*1.5]) plt.xlim(fit_range) plt.ylabel('Normalized Flux Density') plt.legend([sp.name,'Continuum']) plt.subplot(212) plt.plot(sp_normalized.wave,sp_normalized.flux,'k-') plt.legend(['Continuum-Corrected']) plt.plot(fit_range,[1,1],'k--') plt.ylim([0.5,1.5]) plt.xlim(fit_range) plt.xlabel('Wavelength (micron)') plt.ylabel('Ratio') Explanation: Spectral Math The Spectrum object takes care of all of the necessary math steps to add, subtract, multiply and divide spectra End of explanation # read in two spectra of similar types sp1 = splat.getSpectrum(spt='L5',lucky=True)[0] sp2 = splat.getSpectrum(spt='L5',lucky=True)[0] chi,scale = splat.compareSpectra(sp1,sp2,plot=True) print(chi,scale) # we can also constrain the range over which the copmarison is made chi,scale = splat.compareSpectra(sp1,sp2,fit_range=[1.0,1.25],plot=True) print(chi,scale) # we can now overplot these by using the scale factor sp2.scale(scale) plt.plot(sp1.wave,sp1.flux,'k-') 
plt.plot(sp2.wave,sp2.flux,'m-') plt.ylim([0,np.quantile(sp1.flux.value,0.98)*1.5]) plt.xlabel('Wavelength ({})'.format(sp1.flux.unit)) plt.ylabel('Flux Density ({})'.format(sp1.flux.unit)) Explanation: We can also compare spectra to each other using the compareSpectra routine, which returns a comparison statistic (by default chi^2) and a scale factor End of explanation # check out the options of compareSpectra splat.compareSpectra? # read in M7 and M8 spectra and compare them sp1 = splat.getSpectrum(spt='M7',lucky=True)[0] sp2 = splat.getSpectrum(spt='M8',lucky=True)[0] splat.compareSpectra(sp1,sp2,plot=True) # limit comparison to a specific range splat.compareSpectra(sp1,sp2,fit_ranges=[0.8,1.0],plot=True) # to compare to spectral standards, you can use the built-in list of these standards splat.initializeStandards() # first read in the standards stdM8 = splat.STDS_DWARF_SPEX['M8.0'] # there are different standard for different instruments splat.compareSpectra(sp2,stdM8,plot=True) Explanation: Comparing spectra and spectral classification We often want to compare spectra against each other, either to classify or to fit to models. The main function to do this is splat.compareSpectra, which returns the comparison statistic and optimal scale factor, and has many options for modifying and visualizing the comparison. End of explanation # learn about the options for this routine splat.classifyByStandard()? # read in a random L5 dwarf sp = splat.getSpectrum(spt='L5',lucky=True)[0] sp.plot() # the easiest way to classify is to use classifyByStandard # this will take some time on the first go as it reads in the standards # the verbose command gives you additional feedback splat.classifyByStandard(sp,plot=True,verbose=True) # here's what the standards are splat.STDS_DWARF_SPEX # we can also vary how the classification is done # this uses the method of Kirkpatrick et al. 
2010, limiting the scaling to the 0.9-1.4 micron region splat.classifyByStandard(sp,method='kirkpatrick',plot=True) # there are other standard sets we can read in splat.initializeStandards(sd=True) splat.STDS_SD_SPEX # try classifying a subdwarf with these #sp = splat.getSpectrum(spt='M7',subdwarf=True,lucky=True)[0] #splat.classifyByStandard(sp,method='kirkpatrick',plot=True) sp = splat.getSpectrum(spt='sdM8',subdwarf=True,lucky=True)[0] splat.classifyByStandard(sp,sd=True,plot=True) Explanation: A more efficient way to accomplish this is to use the built-in splat.classifyByStandard() function which will find the best match among pre-defined standards End of explanation # here's an example of measuring an existing set of indices # it return a dictionary with the index names conneting to the measurement & uncertainty sp = splat.getSpectrum(spt='L4',lucky=True)[0] sp.plot(ldwarf=True) splat.measureIndexSet(sp,set='burgasser') # you can find what index sets are available, and their definitions, using this command spem.info_indices() # Let's classify using the allers2013 set # this will return the mean type and uncertainty splat.classifyByIndex(sp,ref='allers',verbose=True) Explanation: Classify by indices You can also use spectral indices to classify spectra; these indices sample specific features, such as molecular absorption bands End of explanation # grab a young spectrum sp = splat.getSpectrum(spt=['M9','L2'],lowg=True,snr=40,lucky=True)[0] sp.plot() splat.classifyByStandard(sp,method='kirkpatrick',plot=True) splat.classifyGravity(sp,verbose=True) splat.classifyByStandard(sp,lowg=True,plot=True) Explanation: Classify gravity Allers & Liu (2013) have published a gravity classification scheme that allows us to distinguish low-gravity (young) obejcts from high-gravity (old) objects End of explanation # do a basic index measurement # read in a random T5 sp = splat.getSpectrum(spt='T5',lucky=True)[0] # measure the ratio of two regions - first range is numerator second range is denominator ind = splat.measureIndex(sp,[[1.1,1.2],[1.22,1.32]],method='ratio',sample='integrate') print(ind) # you can visualize the placement of these indices by setting plot=True # NOTE: THIS IS CURRENTLY THROWING ERRORS SO DON'T RUN! #ind = splat.measureIndex(sp,[[1.1,1.2],[1.22,1.32]],method='ratio',sample='integrate',plot=True) # measure an index set that is pre-defined in the literature # this returns a dictionary of values splat.measureIndexSet(sp,ref='burgasser',verbose=True) # there is a handy information function to find out what index sets are currently included spem.info_indices() # you can use these indices to classify an object splat.classifyByIndex(sp,ref='burgasser',verbose=True) # indices are also used for gravity classification of young sources sp = splat.getSpectrum(spt='L2',young=True,lucky=True)[0] splat.classifyGravity(sp,verbose=True) # you can compare to alternate standards as well # this command compares to a suite of subdwarf standards splat.classifyByStandard(sp2,plot=True,sd=True) # this command compares to a suite of low gravity standards splat.classifyByStandard(sp2,plot=True,vlg=True) Explanation: Index measurement SPLAT has built-in functions to do index measurement, including literature-defined index sets and empirical relations to turn these into classifications End of explanation # first read in the spectrum of 2MASS J0518-2828 by seaching on the shortname 'J0518-2828' [enter code here] # measure the spectral indices from burgasser et al. 
[enter code here] # determine the spectral type using the kirkpatrick method [enter code here] # read in spectral templates for the primary and secondary types [enter code here] # the absolute magnitudes of these types come from the function splat.empirical.typeToMag mag_L5 = spem.typeToMag('L5','2MASS J',set='filippazzo')[0] mag_T5 = spem.typeToMag('T5','2MASS J',set='filippazzo')[0] print(mag_L5,mag_T5) # now use the magnitudes to scale the template spectra [enter here] # add the template spectra together to make a binary template [enter code here] # now compare the binary template and J0518-2828 spectrum using compareSpectra, and plot the result [enter code here] # BONUS: do the above steps a few times until you get a "best" fit, and plot the # appropriately scaled primary, secondary, binary templates and J0518-2828, and # and the difference between J0518-2828 and the binary template to compare them [enter code here] Explanation: Exercise Here's a real science case: we're going to analyze the spectrum of a known unresolved binary, 2MASS J0518-2828, by measuring its indices, comparing to spectral standards, and then comparing to a binary template constructed from two differently-classified sources (L5 and T5) that are scaled to their spectral type-based absolute J-band magnitudes. The outline of this exercise is in the next few cells; the solution is provided below End of explanation # read in spectrum of known spectral binary sp = splat.getSpectrum(shortname='J0518-2828')[0] sp.normalize() sp.plot() # indices splat.measureIndexSet(sp,'burgasser',verbose=True) # classification spt,spt_e = splat.classifyByStandard(sp,plot=True) print('\nSpectral types: {}+/-{}'.format(spt,spt_e)) # read in template spectra sp1 = splat.getSpectrum(spt='L5',snr=20,binary=False,lucky=True)[0] sp2 = splat.getSpectrum(spt='T5',snr=20,binary=False,lucky=True)[0] # get the right magnitudes from an empirical relation of Filippazzo et al. (2015) # this returns the value and uncertainty mag_L5 = spem.typeToMag('L5','2MASS J',set='filippazzo')[0] mag_T5 = spem.typeToMag('T5','2MASS J',set='filippazzo')[0] print('\nL5 M_J = {}, T5 M_J = {}'.format(mag_L5,mag_T5)) # scale the spectra sp1.fluxCalibrate('2MASS J',mag_L5,absolute=True) sp2.fluxCalibrate('2MASS J',mag_T5,absolute=True) # add them to make a binary sp3 = sp1+sp2 sp3.plot() # do an initial compareSpectra to get the scale factors chi,scl_std = splat.compareSpectra(sp,splat.STDS_DWARF_SPEX[spt]) chi,scl_binary = splat.compareSpectra(sp,sp3) sp3.scale(scl_binary) # read in spectrum of known spectral binary sp = splat.getSpectrum(shortname='J0518-2828')[0] sp.normalize() # indices splat.measureIndexSet(sp,'burgasser',verbose=True) # classification spt,spt_e = splat.classifyByStandard(sp) print('\nSpectral types: {}+/-{}'.format(spt,spt_e)) # read in template spectra sp1 = splat.getSpectrum(spt='L5',snr=20,binary=False,lucky=True)[0] sp2 = splat.getSpectrum(spt='T5',snr=20,binary=False,lucky=True)[0] # get the right magnitudes from an empirical relation of Filippazzo et al. 
(2015) # this returns the value and uncertainty mag_L5 = spem.typeToMag('L5','2MASS J',set='filippazzo')[0] mag_T5 = spem.typeToMag('T5','2MASS J',set='filippazzo')[0] print('\nL5 M_J = {}, T5 M_J = {}'.format(mag_L5,mag_T5)) # scale the spectra sp1.fluxCalibrate('2MASS J',mag_L5,absolute=True) sp2.fluxCalibrate('2MASS J',mag_T5,absolute=True) # add them to make a binary sp3 = sp1+sp2 # do an initial compareSpectra to get the scale factors chi,scl_std = splat.compareSpectra(sp,splat.STDS_DWARF_SPEX[spt]) chi,scl_binary = splat.compareSpectra(sp,sp3) # compute the difference spdiff = sp-sp3 # visualize the results plt.figure(figsize=[6,8]) plt.subplot(211) plt.plot(sp.wave,sp.flux,'k-') plt.plot(splat.STDS_DWARF_SPEX[spt].wave,splat.STDS_DWARF_SPEX[spt].flux*scl_std,'b-') plt.legend(['J0518-2828',spt]) plt.ylim([0,np.nanquantile(sp.flux.value,0.98)*1.5]) plt.xlim([0.8,2.4]) plt.ylabel('Normalized Flux Density') plt.subplot(212) plt.plot(sp.wave,sp.flux,'k-') plt.plot(sp1.wave,sp1.flux*scl_binary,'m-') plt.plot(sp2.wave,sp2.flux*scl_binary,'b-') plt.plot(sp3.wave,sp3.flux*scl_binary,'g-') plt.legend(['J0518-2828','L5','T5','L5+T5']) plt.ylim([0,np.nanquantile(sp.flux.value,0.98)*1.5]) plt.xlim([0.8,2.4]) plt.ylabel('Normalized Flux Density') plt.xlabel('Wavelength') Explanation: Exercise Solution End of explanation
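A quick supplementary note on the comparison step used throughout this notebook: for a data spectrum f, a template g and uncertainties sigma, the statistic chi2(a) = sum((f - a*g)^2 / sigma^2) has a closed-form best-fit scale factor a = sum(f*g/sigma^2) / sum(g^2/sigma^2). The short sketch below reproduces that arithmetic with plain numpy arrays standing in for Spectrum objects; it illustrates the kind of statistic and scale factor compareSpectra reports, it is not a claim about SPLAT's exact internals, and the synthetic spectrum shapes and noise level are made up.

import numpy as np

def scale_and_chi2(flux, template, unc):
    # best-fit scale factor a minimizing chi2(a) = sum((flux - a*template)**2 / unc**2)
    a = np.sum(flux * template / unc**2) / np.sum(template**2 / unc**2)
    chi2 = np.sum((flux - a * template)**2 / unc**2)
    return chi2, a

# synthetic stand-ins for two spectra sampled on a common wavelength grid
wave = np.linspace(0.9, 2.4, 500)
template = np.exp(-0.5 * ((wave - 1.25) / 0.2)**2) + 0.3
rng = np.random.RandomState(0)
flux = 40.0 * template + rng.normal(0.0, 1.0, wave.size)
unc = np.ones_like(flux)

chi2, scale = scale_and_chi2(flux, template, unc)
print(chi2, scale)   # the recovered scale should come out close to 40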
1,917
Given the following text description, write Python code to implement the functionality described below step by step Description: Model-Free Reinforcement Learning Remember how in last Notebook we felt like cheating by using directions calculated from the map of the environment?? Well, model-free reinforcement learning deals with that. Model-free refers to the fact that algorithms unders this category do not need a model of the environment, also known as MDP, to calculate optimal policies. In this notebook, we will look at what is perhaps the most popular model-free reinforcement learning algorithm, q-learning. Q-learning run without needing a map of the environment, it works by balancing the need for exploration with the need for exploiting previously explored knowledge. Let's take a look. Step1: Q-Learning The function below, action_selection is an important aspect of reinforcement learning algorithms. The fact is, when you have possibly conflicting needs, explore vs exploit, you enter into a difficult situation, dilemma. The Exploration vs Exploitation Dilemma is at the core of reinforcement learning and it is good for you to think about it for a little while. How much do you need to explore an environment before you exploit it? In the function below we use one of the many alternatives which is we explore a lot at the begining and decay the amount of exploration as we increase the number of episodes. Let's take a look at what the function looks like Step2: See that? So, at episode 0 we have 100% change of acting randomly, all the way down to 0 when we stop exploring and instead always select the action that we think would maximizing the discounted future rewards. Again, this is a way of doing this, there are many and you surely should be thinking about better ways of doing so. Next, let me show you what Q-Learning looks like Step3: Nice, right? You just pass it an environment, nS and nA are the number of states and actions respectively. Q is a table of states as rows and actions as columns that will hold the expected reward the agent expects to get for taking action 'a' on state 's'. You can see how we initialize Q(s,a)'s to a random value, but also we multiply that by 2. You may ask, why is this? This is called "Optimism in the face of uncertainty" and it is a common reinforcement learning technique for encouraging agents to explore. Think about it on an intuitive level. If you think positively most of the time, if you receive a low balling job offer, you are going to pass on it and potentially get a better offer later. Worst case, you don't find any better offer and after 'adjusting' your estimates you will think an offer like the "low balling" one you got wasn't that bad after all. The same applies to reinforcement learning agent, cool right? Then, I go on a loop for n_episodes using the action_selection function as described above. Don't pay too much attention to the range start and end, that is just the way I get the exploration strategy the way I showed. You should not like it, I don't like it. You will have a chance to make it better. For now, let's unleash this agent and see how it does!!! Step5: Let's look at a couple of the episodes in more detail. Step6: Nice!!! You can see the progress of this agent. From total caos completely sinking into holes, to sliding into the goal fairly consistently. Let's inspect the Values and Policies. Step7: Fair enough, let's close this environment and you will have a chance to submit to your OpenAI account. 
After that, you will have a chance to modify the action_selection to try something different. Step8: Your turn Maybe you want to try an exponential decay?? (http Step9: Use the following code to test your new exploration strategy Step10: Let's redefine the q_learning function we had above and run it against the environment again. Step12: Curious to see how the new agent did?? Let's check it out! Step13: Did it do good??? This isn't an easy thing, take your time. Be sure to look into the Notebook solution if you want an idea. For now, let's take a look at the value function and policy the agent came up with. Step14: Good??? Nice! Let's wrap-up!
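One extra aid for the value-function and policy steps below: V and pi come back as flat arrays of 16 numbers, which are much easier to judge when laid out on the grid. The helper here is a sketch that assumes the standard 4x4 FrozenLake-v0 layout and gym's usual action encoding (0=Left, 1=Down, 2=Right, 3=Up); those assumptions are mine, so adjust the glyph table if your environment or gym version differs.

import numpy as np

ACTION_GLYPHS = {0: '<', 1: 'v', 2: '>', 3: '^'}   # assumed FrozenLake-v0 action encoding

def show_gridworld(V, pi, shape=(4, 4)):
    # lay the flat value function and greedy policy out on the grid
    V = np.asarray(V).reshape(shape)
    pi = np.asarray(pi).reshape(shape)
    for r in range(shape[0]):
        print('  '.join('{:5.2f}{}'.format(V[r, c], ACTION_GLYPHS[int(pi[r, c])])
                        for c in range(shape[1])))

# demo with made-up numbers; in the notebook you would pass
# V = np.max(Q, axis=1) and pi = np.argmax(Q, axis=1)
rng = np.random.RandomState(0)
show_gridworld(rng.rand(16), rng.randint(0, 4, 16))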
Python Code: import matplotlib.pyplot as plt import numpy as np import pandas as pd import tempfile import pprint import math import json import sys import gym from gym import wrappers from subprocess import check_output from IPython.display import HTML Explanation: Model-Free Reinforcement Learning Remember how in last Notebook we felt like cheating by using directions calculated from the map of the environment?? Well, model-free reinforcement learning deals with that. Model-free refers to the fact that algorithms unders this category do not need a model of the environment, also known as MDP, to calculate optimal policies. In this notebook, we will look at what is perhaps the most popular model-free reinforcement learning algorithm, q-learning. Q-learning run without needing a map of the environment, it works by balancing the need for exploration with the need for exploiting previously explored knowledge. Let's take a look. End of explanation def action_selection(state, Q, episode, n_episodes): epsilon = max(0, episode/n_episodes*2) if np.random.random() < epsilon: action = np.random.randint(len(Q[0])) else: action = np.argmax(Q[state]) return action, epsilon Q = [[0]] n_episodes = 10000 epsilons = [] for episode in range(n_episodes//2, -n_episodes//2, -1): _, epsilon = action_selection(0, Q, episode, n_episodes) epsilons.append(epsilon) plt.plot(np.arange(len(epsilons)), epsilons, '.') plt.ylabel('Probability') plt.xlabel('Episode') Explanation: Q-Learning The function below, action_selection is an important aspect of reinforcement learning algorithms. The fact is, when you have possibly conflicting needs, explore vs exploit, you enter into a difficult situation, dilemma. The Exploration vs Exploitation Dilemma is at the core of reinforcement learning and it is good for you to think about it for a little while. How much do you need to explore an environment before you exploit it? In the function below we use one of the many alternatives which is we explore a lot at the begining and decay the amount of exploration as we increase the number of episodes. Let's take a look at what the function looks like: End of explanation def q_learning(env, alpha = 0.9, gamma = 0.9): nS = env.env.observation_space.n nA = env.env.action_space.n Q = np.random.random((nS, nA)) * 2.0 n_episodes = 10000 for episode in range(n_episodes//2, -n_episodes//2, -1): state = env.reset() done = False while not done: action, _ = action_selection(state, Q, episode, n_episodes) nstate, reward, done, info = env.step(action) Q[state][action] += alpha * (reward + gamma * Q[nstate].max() * (not done) - Q[state][action]) state = nstate return Q Explanation: See that? So, at episode 0 we have 100% change of acting randomly, all the way down to 0 when we stop exploring and instead always select the action that we think would maximizing the discounted future rewards. Again, this is a way of doing this, there are many and you surely should be thinking about better ways of doing so. Next, let me show you what Q-Learning looks like: End of explanation mdir = tempfile.mkdtemp() env = gym.make('FrozenLake-v0') env = wrappers.Monitor(env, mdir, force=True) Q = q_learning(env) Explanation: Nice, right? You just pass it an environment, nS and nA are the number of states and actions respectively. Q is a table of states as rows and actions as columns that will hold the expected reward the agent expects to get for taking action 'a' on state 's'. You can see how we initialize Q(s,a)'s to a random value, but also we multiply that by 2. 
You may ask, why is this? This is called "Optimism in the face of uncertainty" and it is a common reinforcement learning technique for encouraging agents to explore. Think about it on an intuitive level. If you think positively most of the time, if you receive a low balling job offer, you are going to pass on it and potentially get a better offer later. Worst case, you don't find any better offer and after 'adjusting' your estimates you will think an offer like the "low balling" one you got wasn't that bad after all. The same applies to reinforcement learning agent, cool right? Then, I go on a loop for n_episodes using the action_selection function as described above. Don't pay too much attention to the range start and end, that is just the way I get the exploration strategy the way I showed. You should not like it, I don't like it. You will have a chance to make it better. For now, let's unleash this agent and see how it does!!! End of explanation videos = np.array(env.videos) n_videos = 5 idxs = np.linspace(0, len(videos) - 1, n_videos).astype(int) videos = videos[idxs,:] urls = [] for i in range(n_videos): out = check_output(["asciinema", "upload", videos[i][0]]) out = out.decode("utf-8").replace('\n', '').replace('\r', '') urls.append([out]) videos = np.concatenate((videos, urls), axis=1) strm = '' for video_path, meta_path, url in videos: with open(meta_path) as data_file: meta = json.load(data_file) castid = url.split('/')[-1] html_tag = <h2>{0} <script type="text/javascript" src="https://asciinema.org/a/{1}.js" id="asciicast-{1}" async data-autoplay="true" data-size="big"> </script> strm += html_tag.format('Episode ' + str(meta['episode_id']), castid) HTML(data=strm) Explanation: Let's look at a couple of the episodes in more detail. End of explanation V = np.max(Q, axis=1) V pi = np.argmax(Q, axis=1) pi Explanation: Nice!!! You can see the progress of this agent. From total caos completely sinking into holes, to sliding into the goal fairly consistently. Let's inspect the Values and Policies. End of explanation env.close() gym.upload(mdir, api_key='<YOUR OPENAI API KEY>') Explanation: Fair enough, let's close this environment and you will have a chance to submit to your OpenAI account. After that, you will have a chance to modify the action_selection to try something different. End of explanation def action_selection(state, Q, episode, n_episodes, decay=0.0006, initial=1.00): epsilon = initial * math.exp(-decay*episode) if np.random.random() < epsilon: action = np.random.randint(len(Q[0])) else: action = np.argmax(Q[state]) return action, epsilon Explanation: Your turn Maybe you want to try an exponential decay?? 
(http://www.miniwebtool.com/exponential-decay-calculator/) P(t) = P0e-rt where: * P(t) = the amount of some quantity at time t * P0 = initial amount at time t = 0 * r = the decay rate * t = time (number of periods) End of explanation Q = [[0]] n_episodes = 10000 epsilons = [] for episode in range(n_episodes): _, epsilon = action_selection(0, Q, episode, n_episodes) epsilons.append(epsilon) plt.plot(np.arange(len(epsilons)), epsilons, '.') plt.ylabel('Probability') plt.xlabel('Episode') Explanation: Use the following code to test your new exploration strategy: End of explanation def q_learning(env, alpha = 0.9, gamma = 0.9): nS = env.env.observation_space.n nA = env.env.action_space.n Q = np.random.random((nS, nA)) * 2.0 n_episodes = 10000 for episode in range(n_episodes): state = env.reset() done = False while not done: action, _ = action_selection(state, Q, episode, n_episodes) nstate, reward, done, info = env.step(action) Q[state][action] += alpha * (reward + gamma * Q[nstate].max() * (not done) - Q[state][action]) state = nstate return Q mdir = tempfile.mkdtemp() env = gym.make('FrozenLake-v0') env = wrappers.Monitor(env, mdir, force=True) Q = q_learning(env) Explanation: Let's redefine the q_learning function we had above and run it against the environment again. End of explanation videos = np.array(env.videos) n_videos = 5 idxs = np.linspace(0, len(videos) - 1, n_videos).astype(int) videos = videos[idxs,:] urls = [] for i in range(n_videos): out = check_output(["asciinema", "upload", videos[i][0]]) out = out.decode("utf-8").replace('\n', '').replace('\r', '') urls.append([out]) videos = np.concatenate((videos, urls), axis=1) strm = '' for video_path, meta_path, url in videos: with open(meta_path) as data_file: meta = json.load(data_file) castid = url.split('/')[-1] html_tag = <h2>{0} <script type="text/javascript" src="https://asciinema.org/a/{1}.js" id="asciicast-{1}" async data-autoplay="true" data-size="big"> </script> strm += html_tag.format('Episode ' + str(meta['episode_id']), castid) HTML(data=strm) Explanation: Curious to see how the new agent did?? Let's check it out! End of explanation V = np.max(Q, axis=1) V pi = np.argmax(Q, axis=1) pi Explanation: Did it do good??? This isn't an easy thing, take your time. Be sure to look into the Notebook solution if you want an idea. For now, let's take a look at the value function and policy the agent came up with. End of explanation env.close() gym.upload(mdir, api_key='<YOUR OPENAI API KEY>') Explanation: Good??? Nice! Let's wrap-up! End of explanation
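For reference, the exponential schedule used above is the decay law P(t) = P0 * e^(-r*t), with P0 = initial, r = decay and t = episode. The sketch below evaluates both schedules side by side (the original clipped linear countdown and the exponential one) purely as functions, so you can compare them without touching the environment; the printed half-life uses the standard ln(2)/decay relation, and the sampled episode numbers are arbitrary.

import math

def linear_epsilon(episode, n_episodes):
    # the original schedule: episode counts down from n_episodes//2 to -n_episodes//2
    return max(0.0, 2.0 * float(episode) / n_episodes)

def exponential_epsilon(episode, decay=0.0006, initial=1.0):
    # P(t) = P0 * e^(-r*t) with P0 = initial, r = decay, t = episode
    return initial * math.exp(-decay * episode)

n_episodes = 10000
for frac in (0.0, 0.25, 0.5, 0.75, 1.0):
    t = int(frac * n_episodes)
    countdown = n_episodes // 2 - t   # the index the original loop would be using
    print('episode {:5d}: linear {:.3f}   exponential {:.3f}'.format(
        t, linear_epsilon(countdown, n_episodes), exponential_epsilon(t)))

print('exponential half-life ~ {:.0f} episodes'.format(math.log(2) / 0.0006))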
1,918
Given the following text description, write Python code to implement the functionality described below step by step Description: DirectLiNGAM by Kernel Method Import and settings In this example, we need to import numpy, pandas, and graphviz in addition to lingam. Step1: Test data We create test data consisting of 5 variables. Step2: Causal Discovery To run causal discovery, we create a DirectLiNGAM object by specifying 'kernel' in the measure parameter. Then, we call the fit method. Step3: Using the causal_order_ property, we can see the causal ordering as a result of the causal discovery. Step4: Also, using the adjacency_matrix_ property, we can see the adjacency matrix as a result of the causal discovery. Step5: We can draw a causal graph with a utility function.
Python Code: import numpy as np import pandas as pd import graphviz import lingam from lingam.utils import make_dot print([np.__version__, pd.__version__, graphviz.__version__, lingam.__version__]) np.set_printoptions(precision=3, suppress=True) np.random.seed(0) Explanation: DirectLiNGAM by Kernel Method Import and settings In this example, we need to import numpy, pandas, and graphviz in addition to lingam. End of explanation n = 1000 e = lambda n: np.random.laplace(0, 1, n) x3 = e(n) x2 = 0.3*x3 + e(n) x1 = 0.3*x3 + 0.3*x2 + e(n) x0 = 0.3*x2 + 0.3*x1 + e(n) x4 = 0.3*x1 + 0.3*x0 + e(n) X = pd.DataFrame(np.array([x0, x1, x2, x3, x4]).T, columns=['x0', 'x1', 'x2', 'x3', 'x4']) X.head() m = np.array([[0.0, 0.3, 0.3, 0.0, 0.0], [0.0, 0.0, 0.3, 0.3, 0.0], [0.0, 0.0, 0.0, 0.3, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0], [0.3, 0.3, 0.0, 0.0, 0.0]]) make_dot(m) Explanation: Test data We create test data consisting of 5 variables. End of explanation model = lingam.DirectLiNGAM(measure='kernel') model.fit(X) Explanation: Causal Discovery To run causal discovery, we create a DirectLiNGAM object by specifying 'kernel' in the measure parameter. Then, we call the fit method. End of explanation model.causal_order_ Explanation: Using the causal_order_ property, we can see the causal ordering as a result of the causal discovery. End of explanation model.adjacency_matrix_ Explanation: Also, using the adjacency_matrix_ property, we can see the adjacency matrix as a result of the causal discovery. End of explanation make_dot(model.adjacency_matrix_) Explanation: We can draw a causal graph with a utility function. End of explanation
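A small follow-up that can be run after the fit: comparing model.adjacency_matrix_ against the generating matrix m defined above gives a quick sanity check on the recovery. The helper below uses only numpy and an arbitrary 0.05 threshold for deciding whether an edge is present (both the threshold and the demo noise level are assumptions for illustration); it relies on no lingam API beyond the adjacency_matrix_ attribute already shown.

import numpy as np

def compare_adjacency(estimated, true, threshold=0.05):
    # entries with |coefficient| <= threshold are treated as "no edge"
    est_edges = np.abs(np.asarray(estimated)) > threshold
    true_edges = np.abs(np.asarray(true)) > threshold
    n_match = int(np.sum(est_edges == true_edges))
    max_abs_err = float(np.max(np.abs(np.asarray(estimated) - np.asarray(true))))
    return n_match, est_edges.size, max_abs_err

# usage in this notebook would be:
# print(compare_adjacency(model.adjacency_matrix_, m))

# standalone demo: the generating matrix from above plus small perturbations
m = np.array([[0.0, 0.3, 0.3, 0.0, 0.0],
              [0.0, 0.0, 0.3, 0.3, 0.0],
              [0.0, 0.0, 0.0, 0.3, 0.0],
              [0.0, 0.0, 0.0, 0.0, 0.0],
              [0.3, 0.3, 0.0, 0.0, 0.0]])
estimate = m + np.random.RandomState(0).normal(0.0, 0.02, m.shape)
print(compare_adjacency(estimate, m))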
1,919
Given the following text description, write Python code to implement the functionality described below step by step Description: Estimating Precession Frequencies Introduction This Notebook demonstrates how to use QInfer to estimate a single precession frequency, for example in a Rabi or Ramsey experiment. Setup First, to make sure that this example works in both Python 2 and 3, we tell Python 2 to use 3-style division and printing. Step1: Next, we import QInfer itself, along with NumPy and Matplotlib. Step2: We finish by configuring Matplotlib for plotting in Jupyter Notebook. Step3: Uniform Sampling In particular, we'll assume that for a frequency $\omega$, if one measures at a time $t$, then the probability of a "1" measurement is given by \begin{equation} \Pr(1 | \omega, t) = \sin^2(\omega t / 2). \end{equation} To estimate $\omega$, we will need a set of measurements ${(t_k, n_k)}{k=1}^N$, where $t_k$ is the evolution time used for the $k$th sample and where $n_k$ "1" measurements are seen. Conventionally, one might choose $t_k = k \pi / (2 \omega\max)$, where $\omega_\max$ is the maximum frequency that one expects to observe. For each $k$, many "shots" are then collected to give a good average, such that the Fourier transform of the collected measurements clearly shows a peak at the $\omega$. To see this, let's start by making some fake data from a precession experiment (e.g. Step4: Now let's look at the periodogram of the data (that is, the squared modulus of the Fourier-transformed data), and note that we see the true value of $\omega$ that we used to generate the data appears as a clearly-visible peak. Step5: Though things look quite nice when zoomed out so far, if we look closer to the true value of $\omega$, we see that there's still a lot of uncertianty about the precession frequency. Step6: We can improve on the situation dramatically by using Bayesian analysis to find the precession frequency, since in this case, we already know the phase and amplitude of the precession and that there is exactly one peak. To do so, we first make a new array containing our data, the times we collected that data at and how many shots we used at each measurement. Step7: We can then call qi.simple_est_prec with our data to find the Bayesian mean estimator (BME) for $\omega$. To see how long it takes to run, we'll surround the call with a qi.timing block; this is not necessary in practice, though. Step8: Notice that the estimation procedure also provided an error bar; this is possible because we have the entire posterior distribution describing our state of knowledge after the experiment. Step9: Indeed, the plot above only shows the uncertianty in the Bayes estimate because it is zoomed in near the peak of the posterior. If we zoom out and compare to the width of the periodogram peak, we can clearly see that the Bayes estimator does a much better job. Step10: Exponentially-Sparse Sampling Another important advantage of the Bayes estimation procedure is that we need not assume uniform sampling times. For instance, we can choose our samples to be exponentially sparse. Doing so makes it somewhat harder to visualize the data before processing, but can give us much better estimates. Step11: We can again stack the data in the same way. Step12: One-Bit Sampling Note that we can also get very good results even when only one bit of data is used to estimate each sample. 
Step13: Loading Data Using Pandas DataFrames If your experimental data is stored as a Pandas DataFrame (for instance, if it was loaded from an Excel file), then QInfer can use the DataFrame directly Step14: Comma-Separated Finally, we can also import from comma-separated files (CSV files). We'll test this out by using Python's StringIO object to simulate reading and writing to a file, so that the example is standalone. Step15: First, we'll export the data we collected to a CSV-formatted string. In practice, the contents of this string would normally be written to a file, but we'll show the first few rows to illustrate the point. Step16: We can then pass a file-like object that reads from this string to QInfer, which will automatically call NumPy and load the data appropriately.
Python Code: from __future__ import division, print_function Explanation: Estimating Precession Frequencies Introduction This Notebook demonstrates how to use QInfer to estimate a single precession frequency, for example in a Rabi or Ramsey experiment. Setup First, to make sure that this example works in both Python 2 and 3, we tell Python 2 to use 3-style division and printing. End of explanation import qinfer as qi import numpy as np import matplotlib.pyplot as plt Explanation: Next, we import QInfer itself, along with NumPy and Matplotlib. End of explanation %matplotlib inline try: plt.style.use('ggplot') except: pass Explanation: We finish by configuring Matplotlib for plotting in Jupyter Notebook. End of explanation true_omega = 70.3 omega_min, omega_max = [0, 99.1] n_shots = 400 ts = np.pi * (1 + np.arange(100)) / (2 * omega_max) signal = np.sin(true_omega * ts / 2) ** 2 counts = np.random.binomial(n=n_shots, p=signal) plt.plot(ts, signal, label='Signal') plt.plot(ts, counts / n_shots, 'x', label='Data') plt.xlabel('$t$') plt.ylabel(r'$\Pr(1 | \omega, t)$') plt.ylim(-0.05, 1.05) plt.legend() Explanation: Uniform Sampling In particular, we'll assume that for a frequency $\omega$, if one measures at a time $t$, then the probability of a "1" measurement is given by \begin{equation} \Pr(1 | \omega, t) = \sin^2(\omega t / 2). \end{equation} To estimate $\omega$, we will need a set of measurements ${(t_k, n_k)}{k=1}^N$, where $t_k$ is the evolution time used for the $k$th sample and where $n_k$ "1" measurements are seen. Conventionally, one might choose $t_k = k \pi / (2 \omega\max)$, where $\omega_\max$ is the maximum frequency that one expects to observe. For each $k$, many "shots" are then collected to give a good average, such that the Fourier transform of the collected measurements clearly shows a peak at the $\omega$. To see this, let's start by making some fake data from a precession experiment (e.g.: Rabi, Ramsey or phase estimation), then using the Fourier method. End of explanation spectrum = np.abs(np.fft.fftshift(np.fft.fft(counts - counts.mean())))**2 ft_freq = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(n=len(counts), d=ts[1] - ts[0])) plt.plot(ft_freq, spectrum) ylim = plt.ylim() plt.vlines(true_omega, *ylim) plt.ylim(*ylim) plt.xlabel('$\omega$') Explanation: Now let's look at the periodogram of the data (that is, the squared modulus of the Fourier-transformed data), and note that we see the true value of $\omega$ that we used to generate the data appears as a clearly-visible peak. End of explanation plt.plot(ft_freq, spectrum, '.-', markersize=10) ylim = plt.ylim() plt.vlines(true_omega, *ylim) plt.ylim(*ylim) plt.xlim(true_omega - 5, true_omega + 5) plt.xlabel('$\omega$') plt.ylabel('Amplitude') Explanation: Though things look quite nice when zoomed out so far, if we look closer to the true value of $\omega$, we see that there's still a lot of uncertianty about the precession frequency. End of explanation data = np.column_stack([counts, ts, n_shots * np.ones_like(counts)]) Explanation: We can improve on the situation dramatically by using Bayesian analysis to find the precession frequency, since in this case, we already know the phase and amplitude of the precession and that there is exactly one peak. To do so, we first make a new array containing our data, the times we collected that data at and how many shots we used at each measurement. 
End of explanation with qi.timing() as timing: mean, cov, extra = qi.simple_est_prec(data, freq_min=omega_min, freq_max=omega_max, return_all=True) print("{}. Error: {:0.2e}. Estimated error: {:0.2e}.".format(timing, abs(mean - true_omega) / true_omega, np.sqrt(cov) / true_omega)) Explanation: We can then call qi.simple_est_prec with our data to find the Bayesian mean estimator (BME) for $\omega$. To see how long it takes to run, we'll surround the call with a qi.timing block; this is not necessary in practice, though. End of explanation extra['updater'].plot_posterior_marginal() ylim = plt.ylim() plt.vlines(true_omega, *ylim) plt.ylim(*ylim) plt.ylabel(r'$\Pr(\omega | \mathrm{data})$') Explanation: Notice that the estimation procedure also provided an error bar; this is possible because we have the entire posterior distribution describing our state of knowledge after the experiment. End of explanation plt.plot(ft_freq[ft_freq > 0], spectrum[ft_freq > 0] / np.trapz(spectrum[ft_freq > 0], ft_freq[ft_freq > 0])) xlim = plt.xlim(0, 100) extra['updater'].plot_posterior_marginal(range_min=xlim[0], range_max=xlim[1], res=400) plt.xlim(*xlim) plt.legend(['Spectrum', 'Posterior'], loc='upper left') Explanation: Indeed, the plot above only shows the uncertianty in the Bayes estimate because it is zoomed in near the peak of the posterior. If we zoom out and compare to the width of the periodogram peak, we can clearly see that the Bayes estimator does a much better job. End of explanation n_shots = 400 ts = np.pi * 1.125 ** (1 + np.arange(100)) / (2 * omega_max) signal = np.sin(true_omega * ts / 2) ** 2 counts = np.random.binomial(n=n_shots, p=signal) plt.plot(ts, signal, label='Signal') plt.plot(ts, counts / n_shots, 'x', label='Data') plt.xlabel('$t$') plt.ylabel(r'$\Pr(1 | \omega, t)$') plt.ylim(-0.05, 1.05) plt.legend() Explanation: Exponentially-Sparse Sampling Another important advantage of the Bayes estimation procedure is that we need not assume uniform sampling times. For instance, we can choose our samples to be exponentially sparse. Doing so makes it somewhat harder to visualize the data before processing, but can give us much better estimates. End of explanation data = np.column_stack([counts, ts, n_shots * np.ones_like(counts)]) with qi.timing() as timing: mean, cov, extra = qi.simple_est_prec(data, freq_min=omega_min, freq_max=omega_max, return_all=True) print("{}. Error: {:0.2e}. Estimated error: {:0.2e}.".format(timing, abs(mean - true_omega) / true_omega, np.sqrt(cov) / true_omega)) extra['updater'].plot_posterior_marginal() ylim = plt.ylim() plt.vlines(true_omega, *ylim) plt.ylim(*ylim) Explanation: We can again stack the data in the same way. End of explanation true_omega = 70.3 omega_min, omega_max = [10.3, 99.1] n_shots = 1 t_min = np.pi / 2 t_max = 100000 ts = np.logspace(np.log10(t_min), np.log10(t_max), 1000) / (2 * omega_max) signal = np.sin(true_omega * ts / 2) ** 2 counts = np.random.binomial(n=n_shots, p=signal) plt.plot(ts, signal, label='Signal') plt.plot(ts, counts / n_shots, 'x', label='Data') plt.xlabel('$t$') plt.ylabel(r'$\Pr(1 | \omega, t)$') plt.ylim(-0.05, 1.05) plt.legend() data = np.column_stack([counts, ts, n_shots * np.ones_like(counts)]) with qi.timing() as timing: mean, cov, extra = qi.simple_est_prec(data, freq_min=omega_min, freq_max=omega_max, return_all=True) print("{}. Error: {:0.2e}. 
Estimated error: {:0.2e}.".format(timing, abs(mean - true_omega) / true_omega, np.sqrt(cov) / true_omega)) extra['updater'].plot_posterior_marginal() ylim = plt.ylim() plt.vlines(true_omega, *ylim) plt.ylim(*ylim) Explanation: One-Bit Sampling Note that we can also get very good results even when only one bit of data is used to estimate each sample. End of explanation import pandas as pd dataframe = pd.DataFrame(data, columns=['counts', 't', 'n_shots']) dataframe[:10] with qi.timing() as timing: mean, cov, extra = qi.simple_est_prec(dataframe , freq_min=omega_min, freq_max=omega_max, return_all=True) print("{}. Error: {:0.2e}. Estimated error: {:0.2e}.".format(timing, abs(mean - true_omega) / true_omega, np.sqrt(cov) / true_omega)) Explanation: Loading Data Using Pandas DataFrames If your experimental data is stored as a Pandas DataFrame (for instance, if it was loaded from an Excel file), then QInfer can use the DataFrame directly: End of explanation try: # Python 2 from cStringIO import StringIO as IO conv = lambda x: x except ImportError: # Python 3 from io import BytesIO as IO # On Python 3, we need to decode bytes into a string before we # can print them. conv = lambda x: x.decode('utf-8') Explanation: Comma-Separated Finally, we can also import from comma-separated files (CSV files). We'll test this out by using Python's StringIO object to simulate reading and writing to a file, so that the example is standalone. End of explanation csv_io = IO() np.savetxt(csv_io, data, delimiter=',', fmt=['%i', '%.16e', '%i']) csv = csv_io.getvalue() print("\n".join(conv(csv).split('\n')[:10])) Explanation: First, we'll export the data we collected to a CSV-formatted string. In practice, the contents of this string would normally be written to a file, but we'll show the first few rows to illustrate the point. End of explanation csv_io = IO(csv) with qi.timing() as timing: mean, cov, extra = qi.simple_est_prec(csv_io, freq_min=omega_min, freq_max=omega_max, return_all=True) print("{}. Error: {:0.2e}. Estimated error: {:0.2e}.".format(timing, abs(mean - true_omega) / true_omega, np.sqrt(cov) / true_omega)) Explanation: We can then pass a file-like object that reads from this string to QInfer, which will automatically call NumPy and load the data appropriately. End of explanation
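As a cross-check on what simple_est_prec returns, the same inference can be written as a brute-force grid posterior: with the likelihood Pr(1 | omega, t) = sin^2(omega*t/2) from above, each (counts, t, n_shots) row contributes a binomial log-likelihood, and a uniform prior over the frequency window turns the normalized product into a posterior. The sketch below is plain numpy, builds its own small synthetic data set, and is only an illustration of the idea, not QInfer's sequential Monte Carlo implementation.

import numpy as np

def grid_posterior(counts, ts, n_shots, omega_grid):
    # binomial log-likelihood under Pr(1 | omega, t) = sin^2(omega * t / 2),
    # evaluated on a grid of omega values with a uniform prior
    log_post = np.zeros(len(omega_grid))
    for i, omega in enumerate(omega_grid):
        p = np.clip(np.sin(omega * ts / 2) ** 2, 1e-12, 1 - 1e-12)
        log_post[i] = np.sum(counts * np.log(p) + (n_shots - counts) * np.log(1 - p))
    post = np.exp(log_post - log_post.max())
    return post / np.trapz(post, omega_grid)

# small synthetic data set in the same spirit as the notebook
true_omega, omega_max, n_shots = 70.3, 99.1, 400
ts = np.pi * (1 + np.arange(100)) / (2 * omega_max)
counts = np.random.RandomState(0).binomial(n=n_shots, p=np.sin(true_omega * ts / 2) ** 2)
omega_grid = np.linspace(0.0, omega_max, 2000)
post = grid_posterior(counts, ts, n_shots, omega_grid)
print('grid mean estimate:', np.trapz(omega_grid * post, omega_grid))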
1,920
Given the following text description, write Python code to implement the functionality described below step by step Description: ====================================================================== Time-frequency on simulated data (Multitaper vs. Morlet vs. Stockwell) ====================================================================== This example demonstrates the different time-frequency estimation methods on simulated data. It shows the time-frequency resolution trade-off and the problem of estimation variance. In addition it highlights alternative functions for generating TFRs without averaging across trials, or by operating on numpy arrays. Step1: Simulate data We'll simulate data with a known spectro-temporal structure. Step2: Calculate a time-frequency representation (TFR) Below we'll demonstrate the output of several TFR functions in MNE Step3: (1) Least smoothing (most variance/background fluctuations). Step4: (2) Less frequency smoothing, more time smoothing. Step5: (3) Less time smoothing, more frequency smoothing. Step6: Stockwell (S) transform Stockwell uses a Gaussian window to balance temporal and spectral resolution. Importantly, frequency bands are phase-normalized, hence strictly comparable with regard to timing, and, the input signal can be recoverd from the transform in a lossless way if we disregard numerical errors. In this case, we control the spectral / temporal resolution by specifying different widths of the gaussian window using the width parameter. Step7: Morlet Wavelets Finally, show the TFR using morlet wavelets, which are a sinusoidal wave with a gaussian envelope. We can control the balance between spectral and temporal resolution with the n_cycles parameter, which defines the number of cycles to include in the window. Step8: Calculating a TFR without averaging over epochs It is also possible to calculate a TFR without averaging across trials. We can do this by using average=False. In this case, an instance of Step9: Operating on arrays MNE also has versions of the functions above which operate on numpy arrays instead of MNE objects. They expect inputs of the shape (n_epochs, n_channels, n_times). They will also return a numpy array of shape (n_epochs, n_channels, n_freqs, n_times).
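Before the code, it can help to turn n_cycles and time_bandwidth into concrete numbers. Using the usual DPSS relations (window length T = n_cycles/f, full smoothing bandwidth = time_bandwidth/T, and roughly time_bandwidth - 1 usable tapers), the three multitaper settings trade resolution as advertised. Treat these formulas as standard multitaper arithmetic rather than a statement about MNE internals, although they do reproduce the 1, 3 and 7 taper counts quoted in the code comments that follow.

import numpy as np

def multitaper_resolution(freq, n_cycles, time_bandwidth):
    # window length (s), full spectral smoothing (Hz) and usable taper count
    T = n_cycles / freq
    return T, time_bandwidth / T, int(np.floor(time_bandwidth - 1))

freq = 50.0   # the simulated burst frequency
for n_cycles, tb in [(freq / 2, 2.0), (freq, 4.0), (freq / 2, 8.0)]:
    T, bw, k = multitaper_resolution(freq, n_cycles, tb)
    print('n_cycles={:5.1f}, time_bandwidth={:.1f} -> window {:.2f} s, smoothing {:4.1f} Hz, {} taper(s)'.format(
        n_cycles, tb, T, bw, k))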
Python Code: # Authors: Hari Bharadwaj <[email protected]> # Denis Engemann <[email protected]> # Chris Holdgraf <[email protected]> # # License: BSD (3-clause) import numpy as np from matplotlib import pyplot as plt from mne import create_info, EpochsArray from mne.baseline import rescale from mne.time_frequency import (tfr_multitaper, tfr_stockwell, tfr_morlet, tfr_array_morlet) print(__doc__) Explanation: ====================================================================== Time-frequency on simulated data (Multitaper vs. Morlet vs. Stockwell) ====================================================================== This example demonstrates the different time-frequency estimation methods on simulated data. It shows the time-frequency resolution trade-off and the problem of estimation variance. In addition it highlights alternative functions for generating TFRs without averaging across trials, or by operating on numpy arrays. End of explanation sfreq = 1000.0 ch_names = ['SIM0001', 'SIM0002'] ch_types = ['grad', 'grad'] info = create_info(ch_names=ch_names, sfreq=sfreq, ch_types=ch_types) n_times = int(sfreq) # 1 second long epochs n_epochs = 40 seed = 42 rng = np.random.RandomState(seed) noise = rng.randn(n_epochs, len(ch_names), n_times) # Add a 50 Hz sinusoidal burst to the noise and ramp it. t = np.arange(n_times, dtype=np.float) / sfreq signal = np.sin(np.pi * 2. * 50. * t) # 50 Hz sinusoid signal signal[np.logical_or(t < 0.45, t > 0.55)] = 0. # Hard windowing on_time = np.logical_and(t >= 0.45, t <= 0.55) signal[on_time] *= np.hanning(on_time.sum()) # Ramping data = noise + signal reject = dict(grad=4000) events = np.empty((n_epochs, 3), dtype=int) first_event_sample = 100 event_id = dict(sin50hz=1) for k in range(n_epochs): events[k, :] = first_event_sample + k * n_times, 0, event_id['sin50hz'] epochs = EpochsArray(data=data, info=info, events=events, event_id=event_id, reject=reject) Explanation: Simulate data We'll simulate data with a known spectro-temporal structure. End of explanation freqs = np.arange(5., 100., 3.) vmin, vmax = -3., 3. # Define our color limits. Explanation: Calculate a time-frequency representation (TFR) Below we'll demonstrate the output of several TFR functions in MNE: :func:mne.time_frequency.tfr_multitaper :func:mne.time_frequency.tfr_stockwell :func:mne.time_frequency.tfr_morlet Multitaper transform First we'll use the multitaper method for calculating the TFR. This creates several orthogonal tapering windows in the TFR estimation, which reduces variance. We'll also show some of the parameters that can be tweaked (e.g., time_bandwidth) that will result in different multitaper properties, and thus a different TFR. You can trade time resolution or frequency resolution or both in order to get a reduction in variance. End of explanation n_cycles = freqs / 2. time_bandwidth = 2.0 # Least possible frequency-smoothing (1 taper) power = tfr_multitaper(epochs, freqs=freqs, n_cycles=n_cycles, time_bandwidth=time_bandwidth, return_itc=False) # Plot results. Baseline correct based on first 100 ms. power.plot([0], baseline=(0., 0.1), mode='mean', vmin=vmin, vmax=vmax, title='Sim: Least smoothing, most variance') Explanation: (1) Least smoothing (most variance/background fluctuations). End of explanation n_cycles = freqs # Increase time-window length to 1 second. time_bandwidth = 4.0 # Same frequency-smoothing as (1) 3 tapers. power = tfr_multitaper(epochs, freqs=freqs, n_cycles=n_cycles, time_bandwidth=time_bandwidth, return_itc=False) # Plot results. 
Baseline correct based on first 100 ms. power.plot([0], baseline=(0., 0.1), mode='mean', vmin=vmin, vmax=vmax, title='Sim: Less frequency smoothing, more time smoothing') Explanation: (2) Less frequency smoothing, more time smoothing. End of explanation n_cycles = freqs / 2. time_bandwidth = 8.0 # Same time-smoothing as (1), 7 tapers. power = tfr_multitaper(epochs, freqs=freqs, n_cycles=n_cycles, time_bandwidth=time_bandwidth, return_itc=False) # Plot results. Baseline correct based on first 100 ms. power.plot([0], baseline=(0., 0.1), mode='mean', vmin=vmin, vmax=vmax, title='Sim: Less time smoothing, more frequency smoothing') Explanation: (3) Less time smoothing, more frequency smoothing. End of explanation fig, axs = plt.subplots(1, 3, figsize=(15, 5), sharey=True) fmin, fmax = freqs[[0, -1]] for width, ax in zip((0.2, .7, 3.0), axs): power = tfr_stockwell(epochs, fmin=fmin, fmax=fmax, width=width) power.plot([0], baseline=(0., 0.1), mode='mean', axes=ax, show=False, colorbar=False) ax.set_title('Sim: Using S transform, width = {:0.1f}'.format(width)) plt.tight_layout() Explanation: Stockwell (S) transform Stockwell uses a Gaussian window to balance temporal and spectral resolution. Importantly, frequency bands are phase-normalized, hence strictly comparable with regard to timing, and, the input signal can be recoverd from the transform in a lossless way if we disregard numerical errors. In this case, we control the spectral / temporal resolution by specifying different widths of the gaussian window using the width parameter. End of explanation fig, axs = plt.subplots(1, 3, figsize=(15, 5), sharey=True) all_n_cycles = [1, 3, freqs / 2.] for n_cycles, ax in zip(all_n_cycles, axs): power = tfr_morlet(epochs, freqs=freqs, n_cycles=n_cycles, return_itc=False) power.plot([0], baseline=(0., 0.1), mode='mean', vmin=vmin, vmax=vmax, axes=ax, show=False, colorbar=False) n_cycles = 'scaled by freqs' if not isinstance(n_cycles, int) else n_cycles ax.set_title('Sim: Using Morlet wavelet, n_cycles = %s' % n_cycles) plt.tight_layout() Explanation: Morlet Wavelets Finally, show the TFR using morlet wavelets, which are a sinusoidal wave with a gaussian envelope. We can control the balance between spectral and temporal resolution with the n_cycles parameter, which defines the number of cycles to include in the window. End of explanation n_cycles = freqs / 2. power = tfr_morlet(epochs, freqs=freqs, n_cycles=n_cycles, return_itc=False, average=False) print(type(power)) avgpower = power.average() avgpower.plot([0], baseline=(0., 0.1), mode='mean', vmin=vmin, vmax=vmax, title='Using Morlet wavelets and EpochsTFR', show=False) Explanation: Calculating a TFR without averaging over epochs It is also possible to calculate a TFR without averaging across trials. We can do this by using average=False. In this case, an instance of :class:mne.time_frequency.EpochsTFR is returned. End of explanation power = tfr_array_morlet(epochs.get_data(), sfreq=epochs.info['sfreq'], freqs=freqs, n_cycles=n_cycles, output='avg_power') # Baseline the output rescale(power, epochs.times, (0., 0.1), mode='mean', copy=False) fig, ax = plt.subplots() mesh = ax.pcolormesh(epochs.times * 1000, freqs, power[0], cmap='RdBu_r', vmin=vmin, vmax=vmax) ax.set_title('TFR calculated on a numpy array') ax.set(ylim=freqs[[0, -1]], xlabel='Time (ms)') fig.colorbar(mesh) plt.tight_layout() plt.show() Explanation: Operating on arrays MNE also has versions of the functions above which operate on numpy arrays instead of MNE objects. 
They expect inputs of the shape (n_epochs, n_channels, n_times). They will also return a numpy array of shape (n_epochs, n_channels, n_freqs, n_times). End of explanation
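For readers who want to see what the array-based TFR is doing conceptually, here is a bare-bones single-channel Morlet transform in plain numpy: each frequency gets a complex exponential windowed by a Gaussian whose width is n_cycles/(2*pi*f), and power is the squared magnitude of the convolution. The normalization and the demo signal are illustrative assumptions and will not match MNE's output scaling exactly.

import numpy as np

def morlet_power(signal, sfreq, freqs, n_cycles=7.0):
    # convolve one 1-D signal with complex Morlet wavelets; returns power with
    # shape (n_freqs, n_times); normalization here is illustrative only
    power = np.empty((len(freqs), signal.size))
    for fi, f in enumerate(freqs):
        sigma_t = n_cycles / (2.0 * np.pi * f)                 # Gaussian width in seconds
        t = np.arange(-5 * sigma_t, 5 * sigma_t, 1.0 / sfreq)
        wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t ** 2 / (2 * sigma_t ** 2))
        wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))       # unit energy
        power[fi] = np.abs(np.convolve(signal, wavelet, mode='same')) ** 2
    return power

# demo: 1 s of 1 kHz data with a 50 Hz burst between 0.45 and 0.55 s
sfreq = 1000.0
t = np.arange(int(sfreq)) / sfreq
sig = np.random.RandomState(0).randn(t.size)
sig += np.where((t > 0.45) & (t < 0.55), np.sin(2 * np.pi * 50.0 * t), 0.0)
tfr = morlet_power(sig, sfreq, freqs=np.arange(5.0, 100.0, 5.0), n_cycles=7.0)
print(tfr.shape)   # (19, 1000)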
1,921
Given the following text description, write Python code to implement the functionality described below step by step Description: <a id='top'> </a> Author Step1: Cosmic-ray composition effective area analysis Table of contents Load simulation DataFrame and apply quality cuts Define functions to be fit to effective area Calculate effective areas Plot result Step2: Load simulation DataFrame and apply quality cuts [ back to top ] Step3: Define energy binning for this analysis Step4: Define functions to be fit to effective area Step5: Calculate effective areas Step6: Fit functions to effective area data Step7: Plot result Step8: Effective area as quality cuts are sequentially applied
Python Code: %load_ext watermark %watermark -u -d -v -p numpy,matplotlib,scipy,pandas,sklearn,mlxtend Explanation: <a id='top'> </a> Author: James Bourbeau End of explanation %matplotlib inline from __future__ import division, print_function from collections import defaultdict import os import numpy as np from scipy import optimize from scipy.stats import chisquare import pandas as pd import matplotlib.pyplot as plt import seaborn.apionly as sns import comptools as comp color_dict = comp.analysis.get_color_dict() Explanation: Cosmic-ray composition effective area analysis Table of contents Load simulation DataFrame and apply quality cuts Define functions to be fit to effective area Calculate effective areas Plot result End of explanation # config = 'IC79' config = 'IC86.2012' df_sim = comp.load_sim(config=config, test_size=0) df_sim # df_sim, cut_dict_sim = comp.load_dataframe(datatype='sim', config=config, return_cut_dict=True) # selection_mask = np.array([True] * len(df_sim)) # # standard_cut_keys = ['IceTopQualityCuts', 'lap_InIce_containment', # # # 'num_hits_1_60', 'max_qfrac_1_60', # # 'InIceQualityCuts', 'num_hits_1_60'] # standard_cut_keys = ['passed_IceTopQualityCuts', 'FractionContainment_Laputop_InIce', # 'passed_InIceQualityCuts', 'num_hits_1_60'] # # for cut in ['MilliNCascAbove2', 'MilliQtotRatio', 'MilliRloglBelow2', 'StochRecoSucceeded']: # # standard_cut_keys += ['InIceQualityCuts_{}'.format(cut)] # for key in standard_cut_keys: # selection_mask *= cut_dict_sim[key] # print(key, np.sum(selection_mask)) # df_sim = df_sim[selection_mask] Explanation: Load simulation DataFrame and apply quality cuts [ back to top ] End of explanation log_energy_bins = np.arange(5.0, 9.51, 0.05) # log_energy_bins = np.arange(5.0, 9.51, 0.1) energy_bins = 10**log_energy_bins energy_midpoints = (energy_bins[1:] + energy_bins[:-1]) / 2 energy_min_fit, energy_max_fit = 5.8, 7.0 midpoints_fitmask = (energy_midpoints >= 10**energy_min_fit) & (energy_midpoints <= 10**energy_max_fit) log_energy_bins np.log10(energy_midpoints[midpoints_fitmask]) Explanation: Define energy binning for this analysis End of explanation def constant(energy, c): return c def linefit(energy, m, b): return m*np.log10(energy) + b def sigmoid_flat(energy, p0, p1, p2): return p0 / (1 + np.exp(-p1*np.log10(energy) + p2)) def sigmoid_slant(energy, p0, p1, p2, p3): return (p0 + p3*np.log10(energy)) / (1 + np.exp(-p1*np.log10(energy) + p2)) def red_chisquared(obs, fit, sigma, n_params): zero_mask = sigma != 0 return np.nansum(((obs[zero_mask] - fit[zero_mask])/sigma[zero_mask]) ** 2) / (len(obs[zero_mask]) - n_params) # return np.sum(((obs - fit)/sigma) ** 2) / (len(obs) - 1 - n_params) np.sum(midpoints_fitmask)-3 Explanation: Define functions to be fit to effective area End of explanation eff_area, eff_area_error, _ = comp.calculate_effective_area_vs_energy(df_sim, energy_bins) eff_area_light, eff_area_error_light, _ = comp.calculate_effective_area_vs_energy(df_sim[df_sim.MC_comp_class == 'light'], energy_bins) eff_area_heavy, eff_area_error_heavy, _ = comp.calculate_effective_area_vs_energy(df_sim[df_sim.MC_comp_class == 'heavy'], energy_bins) eff_area, eff_area_error, _ = comp.analysis.get_effective_area(df_sim, energy_bins, energy='MC') eff_area_light, eff_area_error_light, _ = comp.analysis.get_effective_area( df_sim[df_sim.MC_comp_class == 'light'], energy_bins, energy='MC') eff_area_heavy, eff_area_error_heavy, _ = comp.analysis.get_effective_area( df_sim[df_sim.MC_comp_class == 'heavy'], energy_bins, energy='MC') 
eff_area_light Explanation: Calculate effective areas End of explanation p0 = [1.5e5, 8.0, 50.0] popt_light, pcov_light = optimize.curve_fit(sigmoid_flat, energy_midpoints[midpoints_fitmask], eff_area_light[midpoints_fitmask], p0=p0, sigma=eff_area_error_light[midpoints_fitmask]) popt_heavy, pcov_heavy = optimize.curve_fit(sigmoid_flat, energy_midpoints[midpoints_fitmask], eff_area_heavy[midpoints_fitmask], p0=p0, sigma=eff_area_error_heavy[midpoints_fitmask]) print(popt_light) print(popt_heavy) perr_light = np.sqrt(np.diag(pcov_light)) print(perr_light) perr_heavy = np.sqrt(np.diag(pcov_heavy)) print(perr_heavy) avg = (popt_light[0] + popt_heavy[0]) / 2 print('avg eff area = {}'.format(avg)) eff_area_light light_chi2 = red_chisquared(eff_area_light, sigmoid_flat(energy_midpoints, *popt_light), eff_area_error_light, len(popt_light)) print(light_chi2) heavy_chi2 = red_chisquared(eff_area_heavy, sigmoid_flat(energy_midpoints, *popt_heavy), eff_area_error_heavy, len(popt_heavy)) print(heavy_chi2) Explanation: Fit functions to effective area data End of explanation fig, ax = plt.subplots() # plot effective area data points with poisson errors ax.errorbar(np.log10(energy_midpoints), eff_area_light, yerr=eff_area_error_light, ls='None', marker='.') ax.errorbar(np.log10(energy_midpoints), eff_area_heavy, yerr=eff_area_error_heavy, ls='None', marker='.') # plot corresponding sigmoid fits to effective area x = 10**np.arange(5.0, 9.5, 0.01) ax.plot(np.log10(x), sigmoid_flat(x, *popt_light), color=color_dict['light'], label='light', marker='None', ls='-') ax.plot(np.log10(x), sigmoid_flat(x, *popt_heavy), color=color_dict['heavy'], label='heavy', marker='None') avg_eff_area = (sigmoid_flat(x, *popt_light) + sigmoid_flat(x, *popt_heavy)) / 2 ax.plot(np.log10(x), avg_eff_area, color=color_dict['total'], label='avg', marker='None') ax.fill_between(np.log10(x), avg_eff_area-0.01*avg_eff_area, avg_eff_area+0.01*avg_eff_area, color=color_dict['total'], alpha=0.5) ax.axvline(6.4, marker='None', ls='-.', color='k') ax.set_ylabel('Effective area [m$^2$]') ax.set_xlabel('$\mathrm{\log_{10}(E_{true}/GeV)}$') # ax.set_title('$\mathrm{A_{eff} = 143177 \pm 1431.77 \ m^2}$') ax.grid() # ax.set_ylim([0, 180000]) ax.set_xlim([5.4, 8.1]) ax.set_title(config) #set label style ax.ticklabel_format(style='sci',axis='y') ax.yaxis.major.formatter.set_powerlimits((0,0)) leg = plt.legend(title='True composition') for legobj in leg.legendHandles: legobj.set_linewidth(2.0) # eff_area_outfile = os.path.join(comp.paths.figures_dir, 'effective-area-{}.png'.format(config)) # comp.check_output_dir(eff_area_outfile) # plt.savefig(eff_area_outfile) plt.show() Explanation: Plot result End of explanation df_sim, cut_dict_sim = comp.load_dataframe(datatype='sim', config='IC79', return_cut_dict=True) standard_cut_keys = ['num_hits_1_60', 'IceTopQualityCuts', 'lap_InIce_containment', # 'num_hits_1_60', 'max_qfrac_1_60', 'InIceQualityCuts'] # for cut in ['MilliNCascAbove2', 'MilliQtotRatio', 'MilliRloglBelow2', 'StochRecoSucceeded']: # standard_cut_keys += ['InIceQualityCuts_{}'.format(cut)] eff_area_dict = {} eff_area_err_dict = {} selection_mask = np.array([True] * len(df_sim)) for key in standard_cut_keys: selection_mask *= cut_dict_sim[key] print(key, np.sum(selection_mask)) eff_area, eff_area_error, _ = comp.analysis.get_effective_area(df_sim[selection_mask], energy_bins, energy='MC') # eff_area, eff_area_error = comp.analysis.effective_area.effective_area(df_sim[selection_mask], # np.arange(5.0, 9.51, 0.1)) eff_area_dict[key] = 
eff_area eff_area_err_dict[key] = eff_area_error fig, ax = plt.subplots() cut_labels = {'num_hits_1_60': 'NStations/NChannels', 'IceTopQualityCuts': 'IceTopQualityCuts', 'lap_InIce_containment': 'InIce containment', 'InIceQualityCuts': 'InIceQualityCuts'} for key in standard_cut_keys: # plot effective area data points with poisson errors ax.errorbar(np.log10(energy_midpoints), eff_area_dict[key], yerr=eff_area_err_dict[key], ls='None', marker='.', label=cut_labels[key], alpha=0.75) ax.set_ylabel('Effective area [m$^2$]') ax.set_xlabel('$\log_{10}(E_{\mathrm{MC}}/\mathrm{GeV})$') ax.grid() # ax.set_ylim([0, 180000]) ax.set_xlim([5.4, 9.6]) #set label style ax.ticklabel_format(style='sci',axis='y') ax.yaxis.major.formatter.set_powerlimits((0,0)) leg = plt.legend() plt.savefig('/home/jbourbeau/public_html/figures/effective-area-cuts.png') plt.show() Explanation: Effective area as quality cuts are sequentially applied End of explanation
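A self-contained check of the fitting step: the snippet below generates synthetic "effective area" points from sigmoid_flat with made-up true parameters, refits them with scipy's curve_fit, and reports the recovered parameters, their uncertainties and a reduced chi-square. Everything here (truth values, scatter, starting guess) is invented for illustration; it only mirrors the functional form and fit call already defined above.

import numpy as np
from scipy import optimize

def sigmoid_flat(energy, p0, p1, p2):
    # same functional form as defined earlier in this notebook
    return p0 / (1 + np.exp(-p1 * np.log10(energy) + p2))

# synthetic "effective area" points from invented true parameters
true_params = (1.5e5, 8.0, 50.0)
energy = 10 ** np.arange(5.0, 9.5, 0.05)
rng = np.random.RandomState(2)
area = sigmoid_flat(energy, *true_params) + rng.normal(0.0, 2e3, energy.size)
err = np.full_like(area, 2e3)

popt, pcov = optimize.curve_fit(sigmoid_flat, energy, area, p0=[1e5, 5.0, 30.0], sigma=err)
red_chi2 = np.sum(((area - sigmoid_flat(energy, *popt)) / err) ** 2) / (len(area) - len(popt))
print('fit:', popt, 'errors:', np.sqrt(np.diag(pcov)), 'reduced chi2:', red_chi2)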
1,922
Given the following text description, write Python code to implement the functionality described below step by step Description: Tyler Jensen Recursive Backtracking || Brute Force Solutions Why use recursion? You now have a couple tools to solve programming, namely iteration and recursion. Both can be used in many situations, but recursion allows us to solve problems in a way that human beings cannot. For example, let's consider guessing someone's PIN. 8800 is mine. A human being could guess every single possible combination of numbers for a PIN (10,000 possible combinations), but that would take forever. 10,000 guesses is actually a relatively small number of guesses for a computer. While it's possible to solve this with iteration, it's much easier to do with recursion, and specifically recursive backtracking. Visualizing Recursive Backtracking How is recursive backtracking different? Recursive backtracking still follows all the principles of recursion. Those being Step1: Questions? What happens if we change the order? How can we make another choice? Why don't we have to Unchoose? How do we stop from going to far? Step2: Problem 2 Step3: Questions? Why don't we have to unchoose? Problem 3
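All three problems that follow are instances of one skeleton, so it may help to see the Choose, Explore, Unchoose loop written generically before reading the specific solutions. The sketch below is a hypothetical helper (the names backtrack, step_options and reached_top are mine, not part of the notebook), instantiated on the 1-or-2-steps staircase with 4 stairs:

def backtrack(chosen, options, is_solution, record):
    # generic Choose, Explore, Unchoose skeleton
    if is_solution(chosen):
        record(list(chosen))
        return
    for option in options(chosen):
        chosen.append(option)                              # choose
        backtrack(chosen, options, is_solution, record)    # explore
        chosen.pop()                                       # unchoose

# instantiation: every way to climb 4 stairs taking 1 or 2 steps at a time
def step_options(chosen):
    remaining = 4 - sum(chosen)
    return [s for s in (1, 2) if s <= remaining]

def reached_top(chosen):
    return sum(chosen) == 4

solutions = []
backtrack([], step_options, reached_top, solutions.append)
print(solutions)   # 5 sequences: [1,1,1,1], [1,1,2], [1,2,1], [2,1,1], [2,2]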
Python Code: def pathTo(x, y, path): #basecase if x == 0 and y == 0: print path #recursive case #this is an elif because we don't want to recurse forever once we are too far to the right, or too high up elif x >= 0 and y >= 0: pathTo(x - 1, y, path + "Right ") #choose right, explore pathTo(x, y - 1, path + "Up ") #choose up, explore #pathTo(5, 5, "") Explanation: Tyler Jensen Recursive Backtracking || Brute Force Solutions Why use recursion? You now have a couple tools to solve programming, namely iteration and recursion. Both can be used in many situations, but recursion allows us to solve problems in a way that human beings cannot. For example, let's consider guessing someone's PIN. 8800 is mine. A human being could guess every single possible combination of numbers for a PIN (10,000 possible combinations), but that would take forever. 10,000 guesses is actually a relatively small number of guesses for a computer. While it's possible to solve this with iteration, it's much easier to do with recursion, and specifically recursive backtracking. Visualizing Recursive Backtracking How is recursive backtracking different? Recursive backtracking still follows all the principles of recursion. Those being : 1. A recursive algorithm must have a base case. 2. A recursive algorithm must change its state and move toward the base case. 3. A recursive algorithm must call itself, recursively. Recursive backtracking will always have a base case, or it will go forever. In recursive backtracking, we add a concept called "Choose, Explore, Unchoose". When we want to change our state and move towards the base case (the second principles), we will generally have a few choices to make (following the PIN example, 10 choices, one for each number). When we implement recursive backtracking, we do this with Choose, Explore, Unchoose. Problem 1 : Pathfinding Another use for recursive backtracking is finding all the possible different paths to a point. Consider a basic graph; we may want to find all the paths from the origin to the point (5, 5) given that we can only go up or right. So for example, two possible paths might be : Up Up Up Up Up Right Right Right Right Right Up Up Up UP Right Right Right Right Right Up Base Case : Generally the easiest case, in this situation if the coordinates we are given are (0, 0) Recursive Case : At every point, we have two choices to make (How many recursive calls do you think we will make each time through the method?) We have to move towards the base case (subtract 1 from X or Y to eventually get to (0, 0)) End of explanation def pathTo(x, y, path): #basecase if x == 0 and y == 0: print path #recursive case #this is an elif because we don't want to recurse forever once we are too far to the right, or too high up elif x >= 0 and y >= 0: pathTo(x - 1, y, path + "E ") #choose right, explore pathTo(x, y - 1, path + "N ") #choose up, explore pathTo(x - 1, y - 1, path + "NE ") #choose diagnal, explore #pathTo(5, 5, "") Explanation: Questions? What happens if we change the order? How can we make another choice? Why don't we have to Unchoose? How do we stop from going to far? 
End of explanation def hackPassword(correctPassword): hackPass(correctPassword, "") def hackPass(correctPassword, guess): #base case : guess is the correct password if guess == correctPassword: print guess #recursive case : we don't have more than 3 numbers, so make 10 choices elif len(correctPassword) > len(guess): for number in range(10): #choice : add number to guess #explore : make the recursive call hackPass(correctPassword, guess + str(number)) hackPassword("8800") Explanation: Problem 2 : PIN Guesser Given a PIN, we can use recursive backtracking to "brute force" our way into a solution. This means we are essentially just exhuasting all possible guesses. We are going to need a second parameter here to start out our soluation Base Case : The PIN numbers match Recursive Case : At every point, we have 10 choices to make (one for each number). This looks more like a loop with a recursive call rather than 10 recursive calls. End of explanation import sys def possibleSteps(steps): myList = [] #we have to make this list in here so that we have a way to store steps #gonna draw the staircase for fun for number in range(steps)[::-1]: for stepNum in range(number): sys.stdout.write(' ') print "__|" print "" possibleStepsRecurse(myList, steps) def possibleStepsRecurse(myList, steps): #base case : no steps left if steps == 0: print myList #recursive case : don't recurse if we are past the number of steps needed elif steps > 0: myList.append(1) # choose possibleStepsRecurse(myList, steps - 1) # explore myList.pop() #unchoose myList.append(2) # choose possibleStepsRecurse(myList, steps - 2) # explore myList.pop() # unchoose possibleSteps(5) #test comment Explanation: Questions? Why don't we have to unchoose? Problem 3 : Climbing Stairs We've all climbed stairs two stairs at a time. Given a number of steps, how many different combinations of stepping once and stepping twice can we climb the given staircase in? Base Case : The easiest staircase to climb is when we're already at the top, so 0 stairs, or 0 steps left. Recursive Case : At every point, we have 2 choices to make. 1 step or 2 steps. What makes this problem more difficult is how we are going to choose to store these steps. In this case, a list is the easiest. Every time we make a choice we will append either 1 or 2 to the list. We finally get to see Unchoose in action here! We have to undo our choice of 1 step before we explore solutions with 2 steps. End of explanation
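The enumeration above prints every path or step sequence explicitly. As a sanity check on the brute-force idea, here is a small counting sketch (not part of the original notebook, written for Python 3.8+): the same recursions, but returning counts, compared against the closed-form answers — C(x+y, x) monotone lattice paths for the grid problem and a Fibonacci-style recurrence for the staircase.

from math import comb

def count_paths(x, y):
    # Same recursion as pathTo, but counting solutions instead of printing.
    if x == 0 and y == 0:
        return 1
    if x < 0 or y < 0:
        return 0
    return count_paths(x - 1, y) + count_paths(x, y - 1)

def count_steps(n):
    # Same recursion as possibleStepsRecurse, counting instead of listing.
    if n == 0:
        return 1
    if n < 0:
        return 0
    return count_steps(n - 1) + count_steps(n - 2)

assert count_paths(5, 5) == comb(10, 5)  # 252 right/up paths to (5, 5)
assert count_steps(5) == 8               # 1+1+1+1+1, 1+2+2, ... for 5 stairs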
1,923
Given the following text description, write Python code to implement the functionality described below step by step Description: Real Life Example Step1: Loading and Preparing Data Step3: Big Kudos to Waleed Abdulla for providing the initial idea and many of the functions used to prepare and display the images Step4: Let's start with creating a minimal model that overfits on a very small training set http Step5: This is how overfitting looks in the metrics Accuracy Validation Accuracy Step6: Hands-On Step7: How the metrics might look when training 500 epochs with the given full model Training size makes this a little bit hard to interpret. Might look different for a different random split. Accuracy Validation Accuracy Step8: What images does it work well on?
Python Code: import warnings warnings.filterwarnings('ignore') %matplotlib inline %pylab inline import matplotlib.pylab as plt import numpy as np from distutils.version import StrictVersion import sklearn print(sklearn.__version__) assert StrictVersion(sklearn.__version__ ) >= StrictVersion('0.18.1') import tensorflow as tf tf.logging.set_verbosity(tf.logging.ERROR) print(tf.__version__) assert StrictVersion(tf.__version__) >= StrictVersion('1.1.0') import keras print(keras.__version__) assert StrictVersion(keras.__version__) >= StrictVersion('2.0.6') Explanation: Real Life Example: Classifying Speed Limit Signs End of explanation !ls -l speed-limit-signs !cat speed-limit-signs/README.md Explanation: Loading and Preparing Data End of explanation import os import skimage.data import skimage.transform from keras.utils.np_utils import to_categorical import numpy as np def load_data(data_dir, type=".ppm"): num_categories = 6 # Get all subdirectories of data_dir. Each represents a label. directories = [d for d in os.listdir(data_dir) if os.path.isdir(os.path.join(data_dir, d))] # Loop through the label directories and collect the data in # two lists, labels and images. labels = [] images = [] for d in directories: label_dir = os.path.join(data_dir, d) file_names = [os.path.join(label_dir, f) for f in os.listdir(label_dir) if f.endswith(type)] # For each label, load it's images and add them to the images list. # And add the label number (i.e. directory name) to the labels list. for f in file_names: images.append(skimage.data.imread(f)) labels.append(int(d)) images64 = [skimage.transform.resize(image, (64, 64)) for image in images] return images64, labels # Load datasets. ROOT_PATH = "./" original_dir = os.path.join(ROOT_PATH, "speed-limit-signs") images, labels = load_data(original_dir, type=".ppm") import matplotlib import matplotlib.pyplot as plt def display_images_and_labels(images, labels): Display the first image of each label. unique_labels = set(labels) plt.figure(figsize=(15, 15)) i = 1 for label in unique_labels: # Pick the first image for each label. 
image = images[labels.index(label)] plt.subplot(8, 8, i) # A grid of 8 rows x 8 columns plt.axis('off') plt.title("Label {0} ({1})".format(label, labels.count(label))) i += 1 _ = plt.imshow(image) display_images_and_labels(images, labels) # again a little bit of feature engeneering y = np.array(labels) X = np.array(images) from keras.utils.np_utils import to_categorical num_categories = 6 y = to_categorical(y, num_categories) Explanation: Big Kudos to Waleed Abdulla for providing the initial idea and many of the functions used to prepare and display the images: https://medium.com/@waleedka/traffic-sign-recognition-with-tensorflow-629dffc391a6#.i728o84ib End of explanation from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.9, random_state=42, stratify=y) X_train.shape, y_train.shape Explanation: Let's start with creating a minimal model that overfits on a very small training set http://cs231n.github.io/neural-networks-3/#sanitycheck End of explanation # full architecture # %load https://djcordhose.github.io/ai/fragments/vgg_style_no_dropout.py # my sample minimized architecture # %load https://djcordhose.github.io/ai/fragments/vgg_style_no_dropout_overfitting.py model = Model(input=inputs, output=predictions) model.summary() model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy']) # Determines how many samples are using for training in one batch # Depends on harware GPU architecture, set as high as possible (this works well on K80) BATCH_SIZE = 500 %time model.fit(X_train, y_train, epochs=100, validation_split=0.2, batch_size=BATCH_SIZE) Explanation: This is how overfitting looks like in the Metrics Accuracy Validation Accuracy End of explanation X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42, stratify=y) # https://keras.io/callbacks/#tensorboard tb_callback = keras.callbacks.TensorBoard(log_dir='./tf_log') # To start tensorboard # tensorboard --logdir=/mnt/c/Users/olive/Development/ml/tf_log # open http://localhost:6006 early_stopping_callback = keras.callbacks.EarlyStopping(monitor='val_loss', patience=50, verbose=1) checkpoint_callback = keras.callbacks.ModelCheckpoint('./model-checkpoints/weights.epoch-{epoch:02d}-val_loss-{val_loss:.2f}.hdf5'); keras.layers.Dropout? 
# full architecture with dropout # %load https://djcordhose.github.io/ai/fragments/vgg_style_dropout.py # my sample minimized architecture # %load https://djcordhose.github.io/ai/fragments/vgg_style_dropout_minmal.py model = Model(input=inputs, output=predictions) model.summary() model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy']) !rm -r tf_log %time model.fit(X_train, y_train, epochs=500, batch_size=BATCH_SIZE, validation_split=0.2, callbacks=[tb_callback, early_stopping_callback]) # %time model.fit(X_train, y_train, epochs=500, batch_size=BATCH_SIZE, validation_split=0.2, callbacks=[tb_callback]) # %time model.fit(X_train, y_train, epochs=500, batch_size=BATCH_SIZE, validation_split=0.2) train_loss, train_accuracy = model.evaluate(X_train, y_train, batch_size=BATCH_SIZE) train_loss, train_accuracy test_loss, test_accuracy = model.evaluate(X_test, y_test, batch_size=BATCH_SIZE) test_loss, test_accuracy Explanation: Hands-On: Create a minimal model Step #1: Simplify the given architecture until you can no longer overfit on the small training set reduce number of epochs while training to 50 or even less to have quick experinemtation cycles reduce number of layers reduce number of feature channels make sure your modell actualy has less parameters than the original one (was 4,788,358) if you need a special challenge you can write your model from scratch (you can always reload the original one using the prepared %load) Now we see that the model at least has the basic capacity for the task, we have to get rid of the overfitting How to avoid Overfitting using Dropouts A Dropout Layers blacks out a certain percentage of input neurons Which each update of weihts during training other neurons are chosen Hope is to train different parts of the network with each iteration avoiding overfitting Dropout rate typically between 40% and 75% VGG adds Dropout after each convolutional block and after fc layer x = Dropout(0.5)(x) this only applies for training phase, in prediction there is no such layer Step #2: Train on the complete training set and make sure to still avoid overfitting by optimizing for val-acc train on the complete training set add dropout of 50% as described above gradually make your model more complex until you have minimized overfitting 90% and more of validation accuracy are possible again, reduce the number of epochs of your model to make it trainable on your hardware (100 might work well) if it does not show signs of converging early on, it is likely not complex enough you can also start with a pre-defined architecture and make this less compelx (again using the prepared %load) save the trained model for later comparison End of explanation # model.save('conv-vgg.hdf5') model.save('conv-simple.hdf5') !ls -lh # https://transfer.sh/ # Saved for 14 days # !curl --upload-file conv-vgg.hdf5 https://transfer.sh !curl --upload-file conv-simple.hdf5 https://transfer.sh # pre-trained model # acc: 0.98- val_acc: 0.89 # https://transfer.sh/DuZA7/conv-simple.hdf5 Explanation: How Metrics might look like when training 500 epochs with given full model Training size makes this a little bit hard to interpret. Might look different for different random split. 
Accuracy Validation Accuracy End of explanation import random # Pick 10 random images for test data set random.seed(42) # to make this deterministic sample_indexes = random.sample(range(len(X_test)), 10) sample_images = [X_test[i] for i in sample_indexes] sample_labels = [y_test[i] for i in sample_indexes] ground_truth = np.argmax(sample_labels, axis=1) ground_truth X_sample = np.array(sample_images) prediction = model.predict(X_sample) predicted_categories = np.argmax(prediction, axis=1) predicted_categories # Display the predictions and the ground truth visually. def display_prediction (images, true_labels, predicted_labels): fig = plt.figure(figsize=(10, 10)) for i in range(len(true_labels)): truth = true_labels[i] prediction = predicted_labels[i] plt.subplot(5, 2,1+i) plt.axis('off') color='green' if truth == prediction else 'red' plt.text(80, 10, "Truth: {0}\nPrediction: {1}".format(truth, prediction), fontsize=12, color=color) plt.imshow(images[i]) display_prediction(sample_images, ground_truth, predicted_categories) Explanation: What images does it work well on? End of explanation
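The actual architectures in the notebook above are pulled in via %load from external fragment files and are not part of this excerpt. For illustration only, here is a minimal VGG-style Keras model with the Dropout(0.5) placement described above (after each convolutional block and after the fully connected layer), for the 64x64x3 inputs and 6 categories used in the notebook; the layer counts and filter sizes are arbitrary and are not the author's tuned model.

from keras.layers import Input, Conv2D, MaxPooling2D, Dropout, Flatten, Dense
from keras.models import Model

inputs = Input(shape=(64, 64, 3))          # images were resized to 64x64 RGB
x = Conv2D(32, (3, 3), activation='relu', padding='same')(inputs)
x = MaxPooling2D(pool_size=(2, 2))(x)
x = Dropout(0.5)(x)                        # dropout after the first conv block
x = Conv2D(64, (3, 3), activation='relu', padding='same')(x)
x = MaxPooling2D(pool_size=(2, 2))(x)
x = Dropout(0.5)(x)                        # dropout after the second conv block
x = Flatten()(x)
x = Dense(256, activation='relu')(x)
x = Dropout(0.5)(x)                        # dropout after the fc layer
predictions = Dense(6, activation='softmax')(x)

model = Model(inputs=inputs, outputs=predictions)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

Because the dropout layers are only active during training, predictions at test time use the full network, which is why validation accuracy can approach training accuracy once overfitting is controlled.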
1,924
Given the following text description, write Python code to implement the functionality described below step by step Description: UAT for NbAgg backend. The first line simply reloads matplotlib, uses the nbagg backend and then reloads the backend, just to ensure we have the latest modification to the backend code. Note Step1: UAT 1 - Simple figure creation using pyplot Should produce a figure window which is interactive with the pan and zoom buttons. (Do not press the close button, but any others may be used). Step2: UAT 2 - Creation of another figure, without the need to do plt.figure. As above, a new figure should be created. Step3: UAT 3 - Connection info The printout should show that there are two figures which have active CommSockets, and no figures pending show. Step4: UAT 4 - Closing figures Closing a specific figure instance should turn the figure into a plain image - the UI should have been removed. In this case, scroll back to the first figure and assert this is the case. Step5: UAT 5 - No show without plt.show in non-interactive mode Simply doing a plt.plot should not show a new figure, nor indeed update an existing one (easily verified in UAT 6). The output should simply be a list of Line2D instances. Step6: UAT 6 - Connection information We just created a new figure, but didn't show it. Connection info should no longer have "Figure 1" (as we closed it in UAT 4) and should have figure 2 and 3, with Figure 3 without any connections. There should be 1 figure pending. Step7: UAT 7 - Show of previously created figure We should be able to show a figure we've previously created. The following should produce two figure windows. Step8: UAT 8 - Interactive mode In interactive mode, creating a line should result in a figure being shown. Step9: Subsequent lines should be added to the existing figure, rather than creating a new one. Step10: Calling connection_info in interactive mode should not show any pending figures. Step11: Disable interactive mode again. Step12: UAT 9 - Multiple shows Unlike most of the other matplotlib backends, we may want to see a figure multiple times (with or without synchronisation between the views, though the former is not yet implemented). Assert that plt.gcf().canvas.manager.reshow() results in another figure window which is synchronised upon pan & zoom. Step13: UAT 10 - Saving notebook Saving the notebook (with CTRL+S or File->Save) should result in the saved notebook having static versions of the figues embedded within. The image should be the last update from user interaction and interactive plotting. (check by converting with ipython nbconvert &lt;notebook&gt;) UAT 11 - Creation of a new figure on second show Create a figure, show it, then create a new axes and show it. The result should be a new figure. BUG Step14: UAT 12 - OO interface Should produce a new figure and plot it. Step15: UAT 13 - Animation The following should generate an animated line Step16: UAT 14 - Keyboard shortcuts in IPython after close of figure After closing the previous figure (with the close button above the figure) the IPython keyboard shortcuts should still function. UAT 15 - Figure face colours The nbagg honours all colours appart from that of the figure.patch. The two plots below should produce a figure with a transparent background and a red background respectively (check the transparency by closing the figure, and dragging the resulting image over other content). There should be no yellow figure. 
Step17: UAT 16 - Events Pressing any keyboard key or mouse button (or scrolling) should cycle the line colour while the figure has focus. The figure should have focus by default when it is created and re-gain it by clicking on the canvas. Clicking anywhere outside of the figure should release focus, but moving the mouse out of the figure should not release focus. Step18: UAT 17 - Timers Single-shot timers follow a completely different code path in the nbagg backend than regular timers (such as those used in the animation example above). The next set of tests ensures that both "regular" and "single-shot" timers work properly. The following should show a simple clock that updates twice a second Step19: However, the following should only update once and then stop Step20: And the next two examples should never show any visible text at all Step21: UAT17 - stopping figure when removed from DOM When the div that contains the figure is removed from the DOM the figure should shut down its comm, and if the python-side figure has no more active comms, it should destroy the figure. Repeatedly running the cell below should always have the same figure number Step22: Running the cell below will re-show the figure. After this, re-running the cell above should result in a new figure number.
Python Code: import matplotlib reload(matplotlib) matplotlib.use('nbagg') import matplotlib.backends.backend_nbagg reload(matplotlib.backends.backend_nbagg) Explanation: UAT for NbAgg backend. The first line simply reloads matplotlib, uses the nbagg backend and then reloads the backend, just to ensure we have the latest modification to the backend code. Note: The underlying JavaScript will not be updated by this process, so a refresh of the browser after clearing the output and saving is necessary to clear everything fully. End of explanation import matplotlib.backends.backend_webagg_core reload(matplotlib.backends.backend_webagg_core) import matplotlib.pyplot as plt plt.interactive(False) fig1 = plt.figure() plt.plot(range(10)) plt.show() Explanation: UAT 1 - Simple figure creation using pyplot Should produce a figure window which is interactive with the pan and zoom buttons. (Do not press the close button, but any others may be used). End of explanation plt.plot([3, 2, 1]) plt.show() Explanation: UAT 2 - Creation of another figure, without the need to do plt.figure. As above, a new figure should be created. End of explanation print(matplotlib.backends.backend_nbagg.connection_info()) Explanation: UAT 3 - Connection info The printout should show that there are two figures which have active CommSockets, and no figures pending show. End of explanation plt.close(fig1) Explanation: UAT 4 - Closing figures Closing a specific figure instance should turn the figure into a plain image - the UI should have been removed. In this case, scroll back to the first figure and assert this is the case. End of explanation plt.plot(range(10)) Explanation: UAT 5 - No show without plt.show in non-interactive mode Simply doing a plt.plot should not show a new figure, nor indeed update an existing one (easily verified in UAT 6). The output should simply be a list of Line2D instances. End of explanation print(matplotlib.backends.backend_nbagg.connection_info()) Explanation: UAT 6 - Connection information We just created a new figure, but didn't show it. Connection info should no longer have "Figure 1" (as we closed it in UAT 4) and should have figure 2 and 3, with Figure 3 without any connections. There should be 1 figure pending. End of explanation plt.show() plt.figure() plt.plot(range(5)) plt.show() Explanation: UAT 7 - Show of previously created figure We should be able to show a figure we've previously created. The following should produce two figure windows. End of explanation plt.interactive(True) plt.figure() plt.plot([3, 2, 1]) Explanation: UAT 8 - Interactive mode In interactive mode, creating a line should result in a figure being shown. End of explanation plt.plot(range(3)) Explanation: Subsequent lines should be added to the existing figure, rather than creating a new one. End of explanation print(matplotlib.backends.backend_nbagg.connection_info()) Explanation: Calling connection_info in interactive mode should not show any pending figures. End of explanation plt.interactive(False) Explanation: Disable interactive mode again. End of explanation plt.gcf().canvas.manager.reshow() Explanation: UAT 9 - Multiple shows Unlike most of the other matplotlib backends, we may want to see a figure multiple times (with or without synchronisation between the views, though the former is not yet implemented). Assert that plt.gcf().canvas.manager.reshow() results in another figure window which is synchronised upon pan & zoom. 
End of explanation fig = plt.figure() plt.axes() plt.show() plt.plot([1, 2, 3]) plt.show() Explanation: UAT 10 - Saving notebook Saving the notebook (with CTRL+S or File->Save) should result in the saved notebook having static versions of the figues embedded within. The image should be the last update from user interaction and interactive plotting. (check by converting with ipython nbconvert &lt;notebook&gt;) UAT 11 - Creation of a new figure on second show Create a figure, show it, then create a new axes and show it. The result should be a new figure. BUG: Sometimes this doesn't work - not sure why (@pelson). End of explanation from matplotlib.backends.backend_nbagg import new_figure_manager,show manager = new_figure_manager(1000) fig = manager.canvas.figure ax = fig.add_subplot(1,1,1) ax.plot([1,2,3]) fig.show() Explanation: UAT 12 - OO interface Should produce a new figure and plot it. End of explanation import matplotlib.animation as animation import numpy as np fig, ax = plt.subplots() x = np.arange(0, 2*np.pi, 0.01) # x-array line, = ax.plot(x, np.sin(x)) def animate(i): line.set_ydata(np.sin(x+i/10.0)) # update the data return line, #Init only required for blitting to give a clean slate. def init(): line.set_ydata(np.ma.array(x, mask=True)) return line, ani = animation.FuncAnimation(fig, animate, np.arange(1, 200), init_func=init, interval=32., blit=True) plt.show() Explanation: UAT 13 - Animation The following should generate an animated line: End of explanation import matplotlib matplotlib.rcParams.update({'figure.facecolor': 'red', 'savefig.facecolor': 'yellow'}) plt.figure() plt.plot([3, 2, 1]) with matplotlib.rc_context({'nbagg.transparent': False}): plt.figure() plt.plot([3, 2, 1]) plt.show() Explanation: UAT 14 - Keyboard shortcuts in IPython after close of figure After closing the previous figure (with the close button above the figure) the IPython keyboard shortcuts should still function. UAT 15 - Figure face colours The nbagg honours all colours appart from that of the figure.patch. The two plots below should produce a figure with a transparent background and a red background respectively (check the transparency by closing the figure, and dragging the resulting image over other content). There should be no yellow figure. End of explanation import itertools fig, ax = plt.subplots() x = np.linspace(0,10,10000) y = np.sin(x) ln, = ax.plot(x,y) evt = [] colors = iter(itertools.cycle(['r', 'g', 'b', 'k', 'c'])) def on_event(event): if event.name.startswith('key'): fig.suptitle('%s: %s' % (event.name, event.key)) elif event.name == 'scroll_event': fig.suptitle('%s: %s' % (event.name, event.step)) else: fig.suptitle('%s: %s' % (event.name, event.button)) evt.append(event) ln.set_color(next(colors)) fig.canvas.draw() fig.canvas.draw_idle() fig.canvas.mpl_connect('button_press_event', on_event) fig.canvas.mpl_connect('button_release_event', on_event) fig.canvas.mpl_connect('scroll_event', on_event) fig.canvas.mpl_connect('key_press_event', on_event) fig.canvas.mpl_connect('key_release_event', on_event) plt.show() Explanation: UAT 16 - Events Pressing any keyboard key or mouse button (or scrolling) should cycle the line line while the figure has focus. The figure should have focus by default when it is created and re-gain it by clicking on the canvas. Clicking anywhere outside of the figure should release focus, but moving the mouse out of the figure should not release focus. 
End of explanation import time fig, ax = plt.subplots() text = ax.text(0.5, 0.5, '', ha='center') def update(text): text.set(text=time.ctime()) text.axes.figure.canvas.draw() timer = fig.canvas.new_timer(500, [(update, [text], {})]) timer.start() plt.show() Explanation: UAT 17 - Timers Single-shot timers follow a completely different code path in the nbagg backend than regular timers (such as those used in the animation example above.) The next set of tests ensures that both "regular" and "single-shot" timers work properly. The following should show a simple clock that updates twice a second: End of explanation fig, ax = plt.subplots() text = ax.text(0.5, 0.5, '', ha='center') timer = fig.canvas.new_timer(500, [(update, [text], {})]) timer.single_shot = True timer.start() plt.show() Explanation: However, the following should only update once and then stop: End of explanation fig, ax = plt.subplots() text = ax.text(0.5, 0.5, '', ha='center') timer = fig.canvas.new_timer(500, [(update, [text], {})]) timer.start() timer.stop() plt.show() fig, ax = plt.subplots() text = ax.text(0.5, 0.5, '', ha='center') timer = fig.canvas.new_timer(500, [(update, [text], {})]) timer.single_shot = True timer.start() timer.stop() plt.show() Explanation: And the next two examples should never show any visible text at all: End of explanation fig, ax = plt.subplots() ax.plot(range(5)) plt.show() Explanation: UAT17 - stoping figure when removed from DOM When the div that contains from the figure is removed from the DOM the figure should shut down it's comm, and if the python-side figure has no more active comms, it should destroy the figure. Repeatedly running the cell below should always have the same figure number End of explanation fig.canvas.manager.reshow() Explanation: Running the cell below will re-show the figure. After this, re-running the cell above should result in a new figure number. End of explanation
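One possible preliminary check before walking through the UATs above (not part of the original UAT list): confirm which backend is actually active and whether pyplot is in interactive mode, since several of the tests toggle interactivity.

import matplotlib
import matplotlib.pyplot as plt

# Report the active backend and the current interactive state.
print(matplotlib.get_backend())   # expected to name the nbagg backend
print(plt.isinteractive())        # the UATs switch this with plt.interactive()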
1,925
Given the following text description, write Python code to implement the functionality described below step by step Description: Equation for Neuron Paper A dendritic segment can robustly classify a pattern by subsampling a small number of cells from a larger population. Assuming a random distribution of patterns, the exact probability of a false match is given by the following equation Step1: where n refers to the size of the population of cells, a is the number of active cells at any instance in time, s is the number of actual synapses on a dendritic segment, and θ is the threshold for NMDA spikes. Following (Ahmad & Hawkins, 2015), the numerator counts the number of possible ways θ or more cells can match a fixed set of s synapses. The denominator counts the number of ways a cells out of n can be active. Example usage Step2: Table 1B Step3: Table 1C Step4: Table 1D
Python Code: oxp = Symbol("Omega_x'") b = Symbol("b") n = Symbol("n") theta = Symbol("theta") w = Symbol("w") s = Symbol("s") a = Symbol("a") subsampledOmega = (binomial(s, b) * binomial(n - s, a - b)) / binomial(n, a) subsampledFpF = Sum(subsampledOmega, (b, theta, s)) subsampledOmegaSlow = (binomial(s, b) * binomial(n - s, a - b)) subsampledFpFSlow = Sum(subsampledOmegaSlow, (b, theta, s))/ binomial(n, a) display(subsampledFpF) display(subsampledFpFSlow) Explanation: Equation for Neuron Paper A dendritic segment can robustly classify a pattern by subsampling a small number of cells from a larger population. Assuming a random distribution of patterns, the exact probability of a false match is given by the following equation: End of explanation display("n=1024, a=8, s=4, omega=2", subsampledFpF.subs(s, 4).subs(n, 1024).subs(a, 8).subs(theta, 2).evalf()) display("n=100000, a=2000, s=10, theta=10", subsampledFpFSlow.subs(theta, 10).subs(s, 10).subs(n, 100000).subs(a, 2000).evalf()) display("n=2048, a=400, s=40, theta=20", subsampledFpF.subs(theta, 20).subs(s, 40).subs(n, 2048).subs(a, 400).evalf()) Explanation: where n refers to the size of the population of cells, a is the number of active cells at any instance in time, s is the number of actual synapses on a dendritic segment, and θ is the threshold for NMDA spikes. Following (Ahmad & Hawkins, 2015), the numerator counts the number of possible ways θ or more cells can match a fixed set of s synapses. The denominator counts the number of ways a cells out of n can be active. Example usage End of explanation T1B = subsampledFpFSlow.subs(n, 100000).subs(a, 2000).subs(theta,s).evalf() print "n=100000, a=2000, theta=s" display("s=6",T1B.subs(s,6).evalf()) display("s=8",T1B.subs(s,8).evalf()) display("s=10",T1B.subs(s,10).evalf()) Explanation: Table 1B End of explanation T1C = subsampledFpFSlow.subs(n, 100000).subs(a, 2000).subs(s,2*theta).evalf() print "n=100000, a=2000, s=2*theta" display("theta=6",T1C.subs(theta,6).evalf()) display("theta=8",T1C.subs(theta,8).evalf()) display("theta=10",T1C.subs(theta,10).evalf()) display("theta=12",T1C.subs(theta,12).evalf()) Explanation: Table 1C End of explanation m = Symbol("m") T1D = subsampledFpF.subs(n, 100000).subs(a, 2000).subs(s,2*m*theta).evalf() print "n=100000, a=2000, s=2*m*theta" display("theta=10, m=2",T1D.subs(theta,10).subs(m,2).evalf()) display("theta=10, m=4",T1D.subs(theta,10).subs(m,4).evalf()) display("theta=10, m=6",T1D.subs(theta,10).subs(m,6).evalf()) display("theta=20, m=6",T1D.subs(theta,20).subs(m,6).evalf()) Explanation: Table 1D End of explanation
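The equation referred to at "Step1" above was rendered as display math in the original notebook and is not reproduced in this dump. Written out from the sympy expressions built in the code (both subsampledFpF and subsampledFpFSlow evaluate the same quantity), the false-match probability is:

$$P(\text{false match}) \;=\; \frac{\displaystyle\sum_{b=\theta}^{s} \binom{s}{b}\,\binom{n-s}{a-b}}{\displaystyle\binom{n}{a}}$$

Substituting, for example, n = 1024, a = 8, s = 4, θ = 2 reproduces the first value evaluated under "Example usage".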
1,926
Given the following text description, write Python code to implement the functionality described below step by step Description: ES-DOC CMIP6 Model Properties - Atmoschem MIP Era Step1: Document Authors Set document authors Step2: Document Contributors Specify document contributors Step3: Document Publication Specify document publication status Step4: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Software Properties 3. Key Properties --&gt; Timestep Framework 4. Key Properties --&gt; Timestep Framework --&gt; Split Operator Order 5. Key Properties --&gt; Tuning Applied 6. Grid 7. Grid --&gt; Resolution 8. Transport 9. Emissions Concentrations 10. Emissions Concentrations --&gt; Surface Emissions 11. Emissions Concentrations --&gt; Atmospheric Emissions 12. Emissions Concentrations --&gt; Concentrations 13. Gas Phase Chemistry 14. Stratospheric Heterogeneous Chemistry 15. Tropospheric Heterogeneous Chemistry 16. Photo Chemistry 17. Photo Chemistry --&gt; Photolysis 1. Key Properties Key properties of the atmospheric chemistry 1.1. Model Overview Is Required Step5: 1.2. Model Name Is Required Step6: 1.3. Chemistry Scheme Scope Is Required Step7: 1.4. Basic Approximations Is Required Step8: 1.5. Prognostic Variables Form Is Required Step9: 1.6. Number Of Tracers Is Required Step10: 1.7. Family Approach Is Required Step11: 1.8. Coupling With Chemical Reactivity Is Required Step12: 2. Key Properties --&gt; Software Properties Software properties of aerosol code 2.1. Repository Is Required Step13: 2.2. Code Version Is Required Step14: 2.3. Code Languages Is Required Step15: 3. Key Properties --&gt; Timestep Framework Timestepping in the atmospheric chemistry model 3.1. Method Is Required Step16: 3.2. Split Operator Advection Timestep Is Required Step17: 3.3. Split Operator Physical Timestep Is Required Step18: 3.4. Split Operator Chemistry Timestep Is Required Step19: 3.5. Split Operator Alternate Order Is Required Step20: 3.6. Integrated Timestep Is Required Step21: 3.7. Integrated Scheme Type Is Required Step22: 4. Key Properties --&gt; Timestep Framework --&gt; Split Operator Order ** 4.1. Turbulence Is Required Step23: 4.2. Convection Is Required Step24: 4.3. Precipitation Is Required Step25: 4.4. Emissions Is Required Step26: 4.5. Deposition Is Required Step27: 4.6. Gas Phase Chemistry Is Required Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry Is Required Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry Is Required Step30: 4.9. Photo Chemistry Is Required Step31: 4.10. Aerosols Is Required Step32: 5. Key Properties --&gt; Tuning Applied Tuning methodology for atmospheric chemistry component 5.1. Description Is Required Step33: 5.2. Global Mean Metrics Used Is Required Step34: 5.3. Regional Metrics Used Is Required Step35: 5.4. Trend Metrics Used Is Required Step36: 6. Grid Atmospheric chemistry grid 6.1. Overview Is Required Step37: 6.2. Matches Atmosphere Grid Is Required Step38: 7. Grid --&gt; Resolution Resolution in the atmospheric chemistry grid 7.1. Name Is Required Step39: 7.2. Canonical Horizontal Resolution Is Required Step40: 7.3. Number Of Horizontal Gridpoints Is Required Step41: 7.4. Number Of Vertical Levels Is Required Step42: 7.5. Is Adaptive Grid Is Required Step43: 8. Transport Atmospheric chemistry transport 8.1. Overview Is Required Step44: 8.2. Use Atmospheric Transport Is Required Step45: 8.3. Transport Details Is Required Step46: 9. Emissions Concentrations Atmospheric chemistry emissions 9.1. Overview Is Required Step47: 10. 
Emissions Concentrations --&gt; Surface Emissions ** 10.1. Sources Is Required Step48: 10.2. Method Is Required Step49: 10.3. Prescribed Climatology Emitted Species Is Required Step50: 10.4. Prescribed Spatially Uniform Emitted Species Is Required Step51: 10.5. Interactive Emitted Species Is Required Step52: 10.6. Other Emitted Species Is Required Step53: 11. Emissions Concentrations --&gt; Atmospheric Emissions TO DO 11.1. Sources Is Required Step54: 11.2. Method Is Required Step55: 11.3. Prescribed Climatology Emitted Species Is Required Step56: 11.4. Prescribed Spatially Uniform Emitted Species Is Required Step57: 11.5. Interactive Emitted Species Is Required Step58: 11.6. Other Emitted Species Is Required Step59: 12. Emissions Concentrations --&gt; Concentrations TO DO 12.1. Prescribed Lower Boundary Is Required Step60: 12.2. Prescribed Upper Boundary Is Required Step61: 13. Gas Phase Chemistry Atmospheric chemistry transport 13.1. Overview Is Required Step62: 13.2. Species Is Required Step63: 13.3. Number Of Bimolecular Reactions Is Required Step64: 13.4. Number Of Termolecular Reactions Is Required Step65: 13.5. Number Of Tropospheric Heterogenous Reactions Is Required Step66: 13.6. Number Of Stratospheric Heterogenous Reactions Is Required Step67: 13.7. Number Of Advected Species Is Required Step68: 13.8. Number Of Steady State Species Is Required Step69: 13.9. Interactive Dry Deposition Is Required Step70: 13.10. Wet Deposition Is Required Step71: 13.11. Wet Oxidation Is Required Step72: 14. Stratospheric Heterogeneous Chemistry Atmospheric chemistry startospheric heterogeneous chemistry 14.1. Overview Is Required Step73: 14.2. Gas Phase Species Is Required Step74: 14.3. Aerosol Species Is Required Step75: 14.4. Number Of Steady State Species Is Required Step76: 14.5. Sedimentation Is Required Step77: 14.6. Coagulation Is Required Step78: 15. Tropospheric Heterogeneous Chemistry Atmospheric chemistry tropospheric heterogeneous chemistry 15.1. Overview Is Required Step79: 15.2. Gas Phase Species Is Required Step80: 15.3. Aerosol Species Is Required Step81: 15.4. Number Of Steady State Species Is Required Step82: 15.5. Interactive Dry Deposition Is Required Step83: 15.6. Coagulation Is Required Step84: 16. Photo Chemistry Atmospheric chemistry photo chemistry 16.1. Overview Is Required Step85: 16.2. Number Of Reactions Is Required Step86: 17. Photo Chemistry --&gt; Photolysis Photolysis scheme 17.1. Method Is Required Step87: 17.2. Environmental Conditions Is Required
Python Code: # DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'nerc', 'sandbox-1', 'atmoschem') Explanation: ES-DOC CMIP6 Model Properties - Atmoschem MIP Era: CMIP6 Institute: NERC Source ID: SANDBOX-1 Topic: Atmoschem Sub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry. Properties: 84 (39 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:54:27 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) Explanation: Document Authors Set document authors End of explanation # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) Explanation: Document Contributors Specify document contributors End of explanation # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) Explanation: Document Publication Specify document publication status End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Software Properties 3. Key Properties --&gt; Timestep Framework 4. Key Properties --&gt; Timestep Framework --&gt; Split Operator Order 5. Key Properties --&gt; Tuning Applied 6. Grid 7. Grid --&gt; Resolution 8. Transport 9. Emissions Concentrations 10. Emissions Concentrations --&gt; Surface Emissions 11. Emissions Concentrations --&gt; Atmospheric Emissions 12. Emissions Concentrations --&gt; Concentrations 13. Gas Phase Chemistry 14. Stratospheric Heterogeneous Chemistry 15. Tropospheric Heterogeneous Chemistry 16. Photo Chemistry 17. Photo Chemistry --&gt; Photolysis 1. Key Properties Key properties of the atmospheric chemistry 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of atmospheric chemistry model. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of atmospheric chemistry model code. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "troposhere" # "stratosphere" # "mesosphere" # "mesosphere" # "whole atmosphere" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.3. Chemistry Scheme Scope Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Atmospheric domains covered by the atmospheric chemistry model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.4. 
Basic Approximations Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Basic approximations made in the atmospheric chemistry model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "3D mass/mixing ratio for gas" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.5. Prognostic Variables Form Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Form of prognostic variables in the atmospheric chemistry component. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 1.6. Number Of Tracers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of advected tracers in the atmospheric chemistry model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.family_approach') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 1.7. Family Approach Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Atmospheric chemistry calculations (not advection) generalized into families of species? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 1.8. Coupling With Chemical Reactivity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Atmospheric chemistry transport scheme turbulence is couple with chemical reactivity? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2. Key Properties --&gt; Software Properties Software properties of aerosol code 2.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Operator splitting" # "Integrated" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 3. 
Key Properties --&gt; Timestep Framework Timestepping in the atmospheric chemistry model 3.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Mathematical method deployed to solve the evolution of a given variable End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 3.2. Split Operator Advection Timestep Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for chemical species advection (in seconds) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 3.3. Split Operator Physical Timestep Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for physics (in seconds). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 3.4. Split Operator Chemistry Timestep Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for chemistry (in seconds). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 3.5. Split Operator Alternate Order Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 3.6. Integrated Timestep Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Timestep for the atmospheric chemistry model (in seconds) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Explicit" # "Implicit" # "Semi-implicit" # "Semi-analytic" # "Impact solver" # "Back Euler" # "Newton Raphson" # "Rosenbrock" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 3.7. Integrated Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specify the type of timestep scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4. Key Properties --&gt; Timestep Framework --&gt; Split Operator Order ** 4.1. Turbulence Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. 
End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.2. Convection Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for convection scheme This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.3. Precipitation Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.4. Emissions Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.5. Deposition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.6. Gas Phase Chemistry Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.7. Tropospheric Heterogeneous Phase Chemistry Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.8. Stratospheric Heterogeneous Phase Chemistry Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.9. Photo Chemistry Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.10. Aerosols Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5. Key Properties --&gt; Tuning Applied Tuning methodology for atmospheric chemistry component 5.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview description of tuning: explain and motivate the main targets and metrics retained. &amp;Document the relative weight given to climate performance metrics versus process oriented metrics, &amp;and on the possible conflicts with parameterization level tuning. In particular describe any struggle &amp;with a parameter value that required pushing it to its limits to solve a particular model deficiency. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.2. Global Mean Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List set of metrics of the global mean state used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.3. Regional Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of regional metrics of mean state used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.4. Trend Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List observed trend metrics used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6. Grid Atmospheric chemistry grid 6.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general structure of the atmopsheric chemistry grid End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 6.2. Matches Atmosphere Grid Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 * Does the atmospheric chemistry grid match the atmosphere grid?* End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.resolution.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7. Grid --&gt; Resolution Resolution in the atmospheric chemistry grid 7.1. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.2. Canonical Horizontal Resolution Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 7.3. Number Of Horizontal Gridpoints Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Total number of horizontal (XY) points (or degrees of freedom) on computational grid. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 7.4. Number Of Vertical Levels Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Number of vertical levels resolved on computational grid. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 7.5. Is Adaptive Grid Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Default is False. Set true if grid resolution changes during execution. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.transport.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8. Transport Atmospheric chemistry transport 8.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview of transport implementation End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 8.2. Use Atmospheric Transport Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is transport handled by the atmosphere, rather than within atmospheric cehmistry? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.transport.transport_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.3. Transport Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If transport is handled within the atmospheric chemistry scheme, describe it. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9. Emissions Concentrations Atmospheric chemistry emissions 9.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview atmospheric chemistry emissions End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Vegetation" # "Soil" # "Sea surface" # "Anthropogenic" # "Biomass burning" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 10. Emissions Concentrations --&gt; Surface Emissions ** 10.1. Sources Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Sources of the chemical species emitted at the surface that are taken into account in the emissions scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Climatology" # "Spatially uniform mixing ratio" # "Spatially uniform concentration" # "Interactive" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 10.2. Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Methods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 10.3. Prescribed Climatology Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant)) End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 10.4. Prescribed Spatially Uniform Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted at the surface and prescribed as spatially uniform End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 10.5. Interactive Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted at the surface and specified via an interactive method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 10.6. Other Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted at the surface and specified via any other method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Aircraft" # "Biomass burning" # "Lightning" # "Volcanos" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11. Emissions Concentrations --&gt; Atmospheric Emissions TO DO 11.1. Sources Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Sources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Climatology" # "Spatially uniform mixing ratio" # "Spatially uniform concentration" # "Interactive" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11.2. Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Methods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11.3. Prescribed Climatology Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant)) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11.4. 
Prescribed Spatially Uniform Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted in the atmosphere and prescribed as spatially uniform End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11.5. Interactive Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted in the atmosphere and specified via an interactive method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11.6. Other Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted in the atmosphere and specified via an &quot;other method&quot; End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 12. Emissions Concentrations --&gt; Concentrations TO DO 12.1. Prescribed Lower Boundary Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of species prescribed at the lower boundary. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 12.2. Prescribed Upper Boundary Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of species prescribed at the upper boundary. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 13. Gas Phase Chemistry Atmospheric chemistry transport 13.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview gas phase atmospheric chemistry End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "HOx" # "NOy" # "Ox" # "Cly" # "HSOx" # "Bry" # "VOCs" # "isoprene" # "H2O" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.2. Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Species included in the gas phase chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 13.3. Number Of Bimolecular Reactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of bi-molecular reactions in the gas phase chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 13.4. Number Of Termolecular Reactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of ter-molecular reactions in the gas phase chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 13.5. Number Of Tropospheric Heterogenous Reactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of reactions in the tropospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 13.6. Number Of Stratospheric Heterogenous Reactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of reactions in the stratospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 13.7. Number Of Advected Species Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of advected species in the gas phase chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 13.8. Number Of Steady State Species Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 13.9. Interactive Dry Deposition Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 13.10. Wet Deposition Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 13.11. Wet Oxidation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 14. Stratospheric Heterogeneous Chemistry Atmospheric chemistry startospheric heterogeneous chemistry 14.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview stratospheric heterogenous atmospheric chemistry End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Cly" # "Bry" # "NOy" # TODO - please enter value(s) Explanation: 14.2. Gas Phase Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Gas phase species included in the stratospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Sulphate" # "Polar stratospheric ice" # "NAT (Nitric acid trihydrate)" # "NAD (Nitric acid dihydrate)" # "STS (supercooled ternary solution aerosol particule))" # TODO - please enter value(s) Explanation: 14.3. Aerosol Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Aerosol species included in the stratospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 14.4. Number Of Steady State Species Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of steady state species in the stratospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 14.5. Sedimentation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is sedimentation is included in the stratospheric heterogeneous chemistry scheme or not? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 14.6. Coagulation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is coagulation is included in the stratospheric heterogeneous chemistry scheme or not? End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15. Tropospheric Heterogeneous Chemistry Atmospheric chemistry tropospheric heterogeneous chemistry 15.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview tropospheric heterogenous atmospheric chemistry End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15.2. Gas Phase Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of gas phase species included in the tropospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Sulphate" # "Nitrate" # "Sea salt" # "Dust" # "Ice" # "Organic" # "Black carbon/soot" # "Polar stratospheric ice" # "Secondary organic aerosols" # "Particulate organic matter" # TODO - please enter value(s) Explanation: 15.3. Aerosol Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Aerosol species included in the tropospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 15.4. Number Of Steady State Species Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of steady state species in the tropospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 15.5. Interactive Dry Deposition Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 15.6. Coagulation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is coagulation is included in the tropospheric heterogeneous chemistry scheme or not? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.photo_chemistry.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 16. Photo Chemistry Atmospheric chemistry photo chemistry 16.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview atmospheric photo chemistry End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 16.2. Number Of Reactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of reactions in the photo-chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Offline (clear sky)" # "Offline (with clouds)" # "Online" # TODO - please enter value(s) Explanation: 17. Photo Chemistry --&gt; Photolysis Photolysis scheme 17.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Photolysis scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.2. Environmental Conditions Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.) End of explanation
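As a quick illustration of how the TODO cells above are typically completed, a minimal, hypothetical set of answers might look as follows — the property identifiers are taken verbatim from the cells above, but the values are invented placeholders rather than real model metadata:

DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
DOC.set_value(3)      # hypothetical: photo chemistry called third in the split-operator sequence
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
DOC.set_value(True)   # hypothetical: chemistry runs on the same grid as the atmosphere
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
DOC.set_value(60)     # hypothetical level count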
1,927
Given the following text description, write Python code to implement the functionality described below step by step Description: Name Data preparation using PySpark on Cloud Dataproc Label Cloud Dataproc, GCP, Cloud Storage,PySpark, Kubeflow, pipelines, components Summary A Kubeflow Pipeline component to prepare data by submitting a PySpark job to Cloud Dataproc. Details Intended use Use the component to run an Apache PySpark job as one preprocessing step in a Kubeflow Pipeline. Runtime arguments | Argument | Description | Optional | Data type | Accepted values | Default | |----------------------|------------|----------|--------------|-----------------|---------| | project_id | The ID of the Google Cloud Platform (GCP) project that the cluster belongs to. | No | GCPProjectID | | | | region | The Cloud Dataproc region to handle the request. | No | GCPRegion | | | | cluster_name | The name of the cluster to run the job. | No | String | | | | main_python_file_uri | The HCFS URI of the Python file to use as the driver. This must be a .py file. | No | GCSPath | | | | args | The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission. | Yes | List | | None | | pyspark_job | The payload of a PySparkJob. | Yes | Dict | | None | | job | The payload of a Dataproc job. | Yes | Dict | | None | Output Name | Description | Type Step1: Load the component using KFP SDK Step2: Sample Note Step3: Set sample parameters Step4: Example pipeline that uses the component Step5: Compile the pipeline Step6: Submit the pipeline for execution
Python Code: %%capture --no-stderr !pip3 install kfp --upgrade Explanation: Name Data preparation using PySpark on Cloud Dataproc Label Cloud Dataproc, GCP, Cloud Storage,PySpark, Kubeflow, pipelines, components Summary A Kubeflow Pipeline component to prepare data by submitting a PySpark job to Cloud Dataproc. Details Intended use Use the component to run an Apache PySpark job as one preprocessing step in a Kubeflow Pipeline. Runtime arguments | Argument | Description | Optional | Data type | Accepted values | Default | |----------------------|------------|----------|--------------|-----------------|---------| | project_id | The ID of the Google Cloud Platform (GCP) project that the cluster belongs to. | No | GCPProjectID | | | | region | The Cloud Dataproc region to handle the request. | No | GCPRegion | | | | cluster_name | The name of the cluster to run the job. | No | String | | | | main_python_file_uri | The HCFS URI of the Python file to use as the driver. This must be a .py file. | No | GCSPath | | | | args | The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission. | Yes | List | | None | | pyspark_job | The payload of a PySparkJob. | Yes | Dict | | None | | job | The payload of a Dataproc job. | Yes | Dict | | None | Output Name | Description | Type :--- | :---------- | :--- job_id | The ID of the created job. | String Cautions & requirements To use the component, you must: * Set up a GCP project by following this guide. * Create a new cluster. * The component can authenticate to GCP. Refer to Authenticating Pipelines to GCP for details. * Grant the Kubeflow user service account the role roles/dataproc.editor on the project. Detailed description This component creates a PySpark job from the Dataproc submit job REST API. Follow these steps to use the component in a pipeline: Install the Kubeflow Pipeline SDK: End of explanation import kfp.components as comp dataproc_submit_pyspark_job_op = comp.load_component_from_url( 'https://raw.githubusercontent.com/kubeflow/pipelines/1.7.0-rc.3/components/gcp/dataproc/submit_pyspark_job/component.yaml') help(dataproc_submit_pyspark_job_op) Explanation: Load the component using KFP SDK End of explanation !gsutil cat gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py Explanation: Sample Note: The following sample code works in an IPython notebook or directly in Python code. See the sample code below to learn how to execute the template. Setup a Dataproc cluster Create a new Dataproc cluster (or reuse an existing one) before running the sample code. Prepare a PySpark job Upload your PySpark code file to a Cloud Storage bucket. 
For example, this is a publicly accessible hello-world.py in Cloud Storage: End of explanation PROJECT_ID = '<Please put your project ID here>' CLUSTER_NAME = '<Please put your existing cluster name here>' REGION = 'us-central1' PYSPARK_FILE_URI = 'gs://dataproc-examples-2f10d78d114f6aaec76462e3c310f31f/src/pyspark/hello-world/hello-world.py' ARGS = '' EXPERIMENT_NAME = 'Dataproc - Submit PySpark Job' Explanation: Set sample parameters End of explanation import kfp.dsl as dsl import json @dsl.pipeline( name='Dataproc submit PySpark job pipeline', description='Dataproc submit PySpark job pipeline' ) def dataproc_submit_pyspark_job_pipeline( project_id = PROJECT_ID, region = REGION, cluster_name = CLUSTER_NAME, main_python_file_uri = PYSPARK_FILE_URI, args = ARGS, pyspark_job='{}', job='{}', wait_interval='30' ): dataproc_submit_pyspark_job_op( project_id=project_id, region=region, cluster_name=cluster_name, main_python_file_uri=main_python_file_uri, args=args, pyspark_job=pyspark_job, job=job, wait_interval=wait_interval) Explanation: Example pipeline that uses the component End of explanation pipeline_func = dataproc_submit_pyspark_job_pipeline pipeline_filename = pipeline_func.__name__ + '.zip' import kfp.compiler as compiler compiler.Compiler().compile(pipeline_func, pipeline_filename) Explanation: Compile the pipeline End of explanation #Specify pipeline argument values arguments = {} #Get or create an experiment and submit a pipeline run import kfp client = kfp.Client() experiment = client.create_experiment(EXPERIMENT_NAME) #Submit a pipeline run run_name = pipeline_func.__name__ + ' run' run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments) Explanation: Submit the pipeline for execution End of explanation
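A small sketch, not part of the original component notebook, of how the args runtime argument described above could be exercised: the driver arguments are passed as a JSON-encoded list (matching the List type in the arguments table), and the bucket path is a made-up placeholder.

import json
arguments = {'args': json.dumps(['--input', 'gs://your-bucket/some-input.txt'])}  # hypothetical path
run_name = pipeline_func.__name__ + ' run with args'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)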
1,928
Given the following text description, write Python code to implement the functionality described below step by step Description: model 03 linear_model with dictVectorizer Load train, test, questions data from pklz First of all, we need to read those three data sets. Step1: Make training set To train the model, we need to build feature and label pairs. In this case, we will use only uid, qid, and position as features. Step2: It means that user 0 tried to solve question number 1, which has 77 tokens, and he or she answered at the 61st token. Train model and make predictions Let's train the model and make predictions. Step3: Here are 4,749 predictions. Writing the submission. OK, let's write the submission into the guess.csv file. The given submission form requires a header row, so we will insert the header at the front of the predictions and then write them out as a file.
Python Code: import gzip import cPickle as pickle with gzip.open("../data/train.pklz", "rb") as train_file: train_set = pickle.load(train_file) with gzip.open("../data/test.pklz", "rb") as test_file: test_set = pickle.load(test_file) with gzip.open("../data/questions.pklz", "rb") as questions_file: questions = pickle.load(questions_file) Explanation: model 03 linear_model with dictVectorizer Load train, test, questions data from pklz First of all, we need to read those three data set. End of explanation print train_set[1] print questions[1].keys() X = [] Y = [] for key in train_set: # We only care about positive case at this time if train_set[key]['position'] < 0: continue uid = train_set[key]['uid'] qid = train_set[key]['qid'] pos = train_set[key]['position'] q_length = max(questions[qid]['pos_token'].keys()) category = questions[qid]['category'].lower() answer = questions[qid]['answer'].lower() feat = {"uid": str(uid), "qid": str(qid), "q_length": q_length, "category": category, "answer": answer} X.append(feat) Y.append([pos]) print len(X) print len(Y) print X[0], Y[0] Explanation: Make training set For training model, we might need to make feature and lable pair. In this case, we will use only uid, qid, and position for feature. End of explanation from sklearn.feature_extraction import DictVectorizer vec = DictVectorizer() X = vec.fit_transform(X) from sklearn.linear_model import LinearRegression, Ridge, Lasso, ElasticNet from sklearn.cross_validation import train_test_split, cross_val_score X_train, X_test, Y_train, Y_test = train_test_split (X, Y) regressor = LinearRegression() scores = cross_val_score(regressor, X, Y, cv=10) print 'Cross validation r-squared scores:', scores.mean() print scores regressor = Ridge() scores = cross_val_score(regressor, X, Y, cv=10) print 'Cross validation r-squared scores:', scores.mean() print scores regressor = Lasso() scores = cross_val_score(regressor, X, Y, cv=10) print 'Cross validation r-squared scores:', scores.mean() print scores regressor = ElasticNet() scores = cross_val_score(regressor, X, Y, cv=10) print 'Cross validation r-squared scores:', scores.mean() print scores a = [{1: 2}, {2: 3}] b = [{3: 2}, {4: 3}] c = a + b print c[:len(a)] print c[len(a):] X_train = [] Y_train = [] for key in train_set: # We only care about positive case at this time if train_set[key]['position'] < 0: continue uid = train_set[key]['uid'] qid = train_set[key]['qid'] pos = train_set[key]['position'] q_length = max(questions[qid]['pos_token'].keys()) category = questions[qid]['category'].lower() answer = questions[qid]['answer'].lower() feat = {"uid": str(uid), "qid": str(qid), "q_length": q_length, "category": category, "answer": answer} X_train.append(feat) Y_train.append(pos) X_test = [] Y_test = [] for key in test_set: uid = test_set[key]['uid'] qid = test_set[key]['qid'] q_length = max(questions[qid]['pos_token'].keys()) category = questions[qid]['category'].lower() answer = questions[qid]['answer'].lower() feat = {"uid": str(uid), "qid": str(qid), "q_length": q_length, "category": category, "answer": answer} X_test.append(feat) Y_test.append(key) print "Before transform: ", len(X_test) X_train_length = len(X_train) X = vec.fit_transform(X_train + X_test) X_train = X[:X_train_length] X_test = X[X_train_length:] # regressor = LinearRegression() regressor = Ridge() regressor.fit(X_train, Y_train) predictions = regressor.predict(X_test) predictions = sorted([[id, predictions[index]] for index, id in enumerate(Y_test)]) print len(predictions) predictions[:5] 
Explanation: It means that user 0 tried to solve question number 1, which has 77 tokens, and he or she answered at the 61st token. Train model and make predictions Let's train the model and make predictions. End of explanation import csv predictions.insert(0,["id", "position"]) with open('guess.csv', 'wb') as fp: writer = csv.writer(fp, delimiter=',') writer.writerows(predictions) Explanation: Here are 4,749 predictions. Writing the submission. OK, let's write the submission into the guess.csv file. The given submission form requires a header row, so we will insert the header at the front of the predictions and then write them out as a file. End of explanation
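One optional post-processing step, sketched here and not in the original notebook: a linear regressor can predict positions outside a question's token range, so the guesses can be clamped to [1, question length] before being written out. The lookup reuses the test_set and questions structures loaded earlier.

clamped = [predictions[0]]  # keep the ["id", "position"] header row
for key, pos in predictions[1:]:
    q_length = max(questions[test_set[key]['qid']]['pos_token'].keys())
    clamped.append([key, min(max(pos, 1.0), float(q_length))])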
1,929
Given the following text description, write Python code to implement the functionality described below step by step Description: Copyright 2020 The TensorFlow Authors. Step1: Using TensorBoard.dev TensorBoard.dev is a free, publicly available TensorBoard service. It lets you upload machine learning experiments and share them with anyone. In this notebook, we train a simple model and learn how to upload its logs to TensorBoard.dev. Preview Setup and imports This notebook uses TensorBoard features that are only available in version 2.3.0 and later. Step2: Train a simple model and create TensorBoard logs Step3: The TensorBoard logs are created during training by passing the TensorBoard and hyperparameter callbacks to Keras' Model.fit(); once created, they can be uploaded to TensorBoard.dev. Step4: (Jupyter only) Authenticate with TensorBoard.dev This step is not needed in Colab. It requires authenticating in a shell console outside Jupyter; in your console, run the following command. tensorboard dev list As part of this flow, you will be given an authorization code, which is required to accept the Terms of Service. Uploading to TensorBoard.dev Uploading the TensorBoard logs gives you a URL that can be shared with anyone. Uploaded TensorBoards are public, so do not upload sensitive data. The uploader exits once the whole logdir has been uploaded. (This behavior is specified by the --one_shot flag.) Step5: Each upload has a unique experiment ID, and starting a new upload from the same directory gets a new experiment ID. All of your uploaded experiments can be viewed at https://tensorboard.dev/experiments/ Step6: Screenshots of TensorBoard.dev https://tensorboard.dev/experiments/
Python Code: #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. Explanation: Copyright 2020 The TensorFlow Authors. End of explanation import tensorflow as tf import datetime from tensorboard.plugins.hparams import api as hp Explanation: Using TensorBoard.dev TensorBoard.dev is a free, publicly available TensorBoard service. It lets you upload machine learning experiments and share them with anyone. In this notebook, we train a simple model and learn how to upload its logs to TensorBoard.dev. Preview Setup and imports This notebook uses TensorBoard features that are only available in version 2.3.0 and later. End of explanation mnist = tf.keras.datasets.mnist (x_train, y_train),(x_test, y_test) = mnist.load_data() x_train, x_test = x_train / 255.0, x_test / 255.0 def create_model(): return tf.keras.models.Sequential([ tf.keras.layers.Flatten(input_shape=(28, 28)), tf.keras.layers.Dense(512, activation='relu'), tf.keras.layers.Dropout(0.2), tf.keras.layers.Dense(10, activation='softmax') ]) Explanation: Train a simple model and create TensorBoard logs End of explanation model = create_model() model.compile( optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) log_dir="logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S") tensorboard_callback = tf.keras.callbacks.TensorBoard( log_dir=log_dir, histogram_freq=1) hparams_callback = hp.KerasCallback(log_dir, { 'num_relu_units': 512, 'dropout': 0.2 }) model.fit( x=x_train, y=y_train, epochs=5, validation_data=(x_test, y_test), callbacks=[tensorboard_callback, hparams_callback]) Explanation: The TensorBoard logs are created during training by passing the TensorBoard and hyperparameter callbacks to Keras' Model.fit(); once created, they can be uploaded to TensorBoard.dev. End of explanation !tensorboard dev upload --logdir ./logs \ --name "Simple experiment with MNIST" \ --description "Training results from https://colab.sandbox.google.com/github/tensorflow/tensorboard/blob/master/docs/tbdev_getting_started.ipynb" \ --one_shot Explanation: (Jupyter only) Authenticate with TensorBoard.dev This step is not needed in Colab. It requires authenticating in a shell console outside Jupyter; in your console, run the following command. tensorboard dev list As part of this flow, you will be given an authorization code, which is required to accept the Terms of Service. Uploading to TensorBoard.dev Uploading the TensorBoard logs gives you a URL that can be shared with anyone. Uploaded TensorBoards are public, so do not upload sensitive data. The uploader exits once the whole logdir has been uploaded. (This behavior is specified by the --one_shot flag.) End of explanation !tensorboard dev list Explanation: Each upload has a unique experiment ID, and starting a new upload from the same directory gets a new experiment ID. All of your uploaded experiments can be viewed at https://tensorboard.dev/experiments/, or you can list them in the terminal with the following command. tensorboard dev list End of explanation # You must replace YOUR_EXPERIMENT_ID with the value output from the previous # tensorboard `list` command or `upload` command.
For example # `tensorboard dev delete --experiment_id pQpJNh00RG2Lf1zOe9BrQA` ## !tensorboard dev delete --experiment_id YOUR_EXPERIMENT_ID_HERE Explanation: TensorBoard.dev screenshots Navigating to https://tensorboard.dev/experiments/ will show something like the following. Navigating to your new experiment on TensorBoard.dev will show something like the following. Deleting a TensorBoard.dev experiment To delete an experiment you have uploaded, use the delete command and pass the experiment_id you want to remove. In the screenshots above, the experiment_id is shown in the lower left (w1lkBAOrR4eH35Y7Lg1DQQ). End of explanation
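A sketch of one common variation (assumed, not part of the tutorial above): writing several runs into separate subdirectories of ./logs so that a single upload publishes them side by side for comparison. The run names here are arbitrary.

for run_name in ('run_1', 'run_2'):
    run_dir = "logs/fit/" + run_name
    model = create_model()
    model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
    model.fit(x_train, y_train, epochs=1, validation_data=(x_test, y_test),
              callbacks=[tf.keras.callbacks.TensorBoard(log_dir=run_dir, histogram_freq=1)])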
1,930
Given the following text problem statement, write Python code to implement the functionality described below in problem statement Problem: write a small code snippet for edge detection on an image using Python
Python Code:: from PIL import Image import numpy as np import matplotlib.pyplot as plt from scipy.signal import convolve2d image = Image.open('image.jpg') gray = np.mean(image, axis = 2) h_x = [[1,0,-1], [2,0,-2], [1,0,-1]] h_y = [[1,2,1], [0,0,0], [-1,-2,-1]] g_x = convolve2d(gray, h_x) g_y = convolve2d(gray, h_y) a = np.square(g_x) + np.square(g_y) G = np.sqrt(a) plt.imshow(G, cmap='gray')
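A possible follow-up, not part of the snippet above: normalise the gradient magnitude and threshold it to obtain a binary edge map. The 0.2 cut-off is an arbitrary illustrative choice.

G_norm = G / G.max()                      # scale magnitudes to [0, 1]
edges = (G_norm > 0.2).astype(np.uint8)   # 1 where an edge is detected, 0 elsewhere
plt.imshow(edges, cmap='gray')
plt.show()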
1,931
Given the following text description, write Python code to implement the functionality described below step by step Description: markdown test, I can write normal text here and it will not run as code! Step1: strings you can use the %s placeholder to format strings into your print statements
Python Code: # this is a comment and will not run in the code '''this is just a multi line comment''' pwd #addition 2+1 # subtraction 2-1 1-2 2*2 3/2 3.0/2 float(3)/2 3/float(2) from __future__ import division 3/2 1/2 2/3 root(2) sqrt(2) 4^2 4^.5 4**.5 a=5 a=6 a+a a 0.1+0.2-0.3 'hello' 'this entire thing can be a string' "this is using double quotes" print 'hello' print("hello") s='hello' s len(s) print(s) s[3] s[10] s[5] s[2:4] z*10 letter='z' letter*10 letter.upper() letter.center('z') print 'this is a string' Explanation: markdown test, I can write normal text here and it will not run as code! End of explanation s = 'STRING' print 'place another string with a mod and s: %s' %(s) from __future__ import print_function print('hello') print('one: {x}'.format(x='INSERT')) Explanation: strings you can use the %s placeholder to format strings into your print statements End of explanation
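A couple of extra formatting variants, added purely for illustration and not in the original cells, using the same %s placeholder and str.format machinery shown above:

print('two values: %s and %s' % ('first', 'second'))            # old-style %-formatting with a tuple
print('named: {name}, repeated: {name}'.format(name='hello'))   # str.format with a named field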
1,932
Given the following text description, write Python code to implement the functionality described below step by step Description: Create TensorFlow Wide and Deep Model Learning Objective - Create a Wide and Deep model using the high-level Estimator API - Determine which features to use as wide columns and which to use as deep columns Introduction In this notebook, we'll explore modeling our data using a Wide & Deep Neural Network. As before, we can do this uisng the high-level Estimator API in Tensorflow. Have a look at the various other models available through the Estimator API in the documentation here. In particular, have a look at the implementation for Wide & Deep models. Start by setting the environment variables related to your project. Step1: Let's have a look at the csv files we created in the previous notebooks that we will use for training/eval. Step2: Create TensorFlow model using TensorFlow's Estimator API We'll begin by writing an input function to read the data and define the csv column names and label column. We'll also set the default csv column values and set the number of training steps. Step3: Exercise 1 To begin creating out Tensorflow model, we need to set up variables that determine the csv column values, the label column and the key column. Fill in the TODOs below to set these variables. Note, CSV_COLUMNS should be a list and LABEL_COLUMN should be a string. It is important to get the column names in the correct order as they appear in the csv train/eval/test sets. If necessary, look back at the previous notebooks at how these csv files were created to ensure you have the correct ordering. We also need to set DEFAULTS for each of the CSV column values we prescribe. This will also the a list of entities that will vary depending on the data type of the csv column value. Have a look back at the previous examples to ensure you have the proper formatting. Step4: Create the input function Now we are ready to create an input function using the Dataset API. Exercise 2 In the code below you are asked to complete the TODOs to create the input function for our model. Look back at the previous examples we have completed if you need a hint as to how to complete the missing fields below. In the first block of TODOs, your decode_csv file should return a dictionary called features and a value label. In the next TODO, use tf.gfile.Glob to create a list of files that match the given filename_pattern. Have a look at the documentation for tf.gfile.Glob if you get stuck. In the next TODO, use tf.data.TextLineDataset to read text file and apply the decode_csv function you created above to parse each row example. In the next TODO you are asked to set up the dataset depending on whether you are in TRAIN mode or not. (Hint Step5: Create the feature columns Next, define the feature columns. For a wide and deep model, we need to determine which features we will use as wide features and which to pass as deep features. The function get_wide_deep below will return a tuple containing the wide feature columns and deep feature columns. Have a look at this blog post on wide and deep models to remind yourself how best to describe the features. Exercise 3 There are different ways to set up the feature columns for our Wide & Deep model. In the cell below, we create a function called get_wide_deep. It has no arguments but returns a tuple containing two things Step6: Create the Serving Input function To predict with the TensorFlow model, we also need a serving input function. 
This will allow us to serve prediction later using the predetermined inputs. We will want all the inputs from our user. Exercise 4 In the first TODO below, create the feature_placeholders dictionary by setting up the placeholders for each of the features we will use in our model. Look at the documentation for tf.placeholder to make sure you provide all the necessary arguments. You'll need to create placeholders for the features - 'is_male' - 'mother_age' - 'plurality' - 'gestation_weeks' - 'key' You'll also need to create the features dictionary to pass to the tf.estimator.export.ServingInputReceiver function. The features dictionary will reference the fearture_placeholders dict you created above. Remember to expand the dimensions of the tensors you'll incoude in the features dictionary to accomodate for batched data we'll send to the model for predicitons later. Step7: Create the model and run training and evaluation Lastly, we'll create the estimator to train and evaluate. In the cell below, we'll set up a DNNRegressor estimator and the train and evaluation operations. Exercise 5 In the cell below, complete the TODOs to create our model for training. - First you must create your estimator using tf.estimator.DNNLinearCombinedRegressor. - Next, complete the code to set up your tf.estimator.TrainSpec, selecting the appropriate input function and dataset to use to read data to your function during training. - Next, set up your exporter and tf.estimator.EvalSpec. - Finally, pass the variables you created above to call tf.estimator.train_and_evaluate Be sure to check the documentation for these Tensorflow operations to make sure you set things up correctly. Step8: Finally, we train the model!
Python Code: PROJECT = "cloud-training-demos" # Replace with your PROJECT BUCKET = "cloud-training-bucket" # Replace with your BUCKET REGION = "us-central1" # Choose an available region for Cloud MLE TFVERSION = "1.14" # TF version for CMLE to use import os os.environ["BUCKET"] = BUCKET os.environ["PROJECT"] = PROJECT os.environ["REGION"] = REGION os.environ["TFVERSION"] = TFVERSION %%bash if ! gsutil ls | grep -q gs://${BUCKET}/; then gsutil mb -l ${REGION} gs://${BUCKET} fi Explanation: Create TensorFlow Wide and Deep Model Learning Objective - Create a Wide and Deep model using the high-level Estimator API - Determine which features to use as wide columns and which to use as deep columns Introduction In this notebook, we'll explore modeling our data using a Wide & Deep Neural Network. As before, we can do this uisng the high-level Estimator API in Tensorflow. Have a look at the various other models available through the Estimator API in the documentation here. In particular, have a look at the implementation for Wide & Deep models. Start by setting the environment variables related to your project. End of explanation %%bash ls *.csv Explanation: Let's have a look at the csv files we created in the previous notebooks that we will use for training/eval. End of explanation import shutil import numpy as np import tensorflow as tf print(tf.__version__) Explanation: Create TensorFlow model using TensorFlow's Estimator API We'll begin by writing an input function to read the data and define the csv column names and label column. We'll also set the default csv column values and set the number of training steps. End of explanation # Determine CSV, label, and key columns CSV_COLUMNS = # TODO: Your code goes here LABEL_COLUMN = # TODO: Your code goes here # Set default values for each CSV column DEFAULTS = # TODO: Your code goes here TRAIN_STEPS = 1000 Explanation: Exercise 1 To begin creating out Tensorflow model, we need to set up variables that determine the csv column values, the label column and the key column. Fill in the TODOs below to set these variables. Note, CSV_COLUMNS should be a list and LABEL_COLUMN should be a string. It is important to get the column names in the correct order as they appear in the csv train/eval/test sets. If necessary, look back at the previous notebooks at how these csv files were created to ensure you have the correct ordering. We also need to set DEFAULTS for each of the CSV column values we prescribe. This will also the a list of entities that will vary depending on the data type of the csv column value. Have a look back at the previous examples to ensure you have the proper formatting. End of explanation # Create an input function reading a file using the Dataset API # Then provide the results to the Estimator API def read_dataset(filename_pattern, mode, batch_size = 512): def _input_fn(): def decode_csv(line_of_text): columns = # TODO: Your code goes here features = # TODO: Your code goes here label = # TODO: Your code goes here return features, label # Create list of files that match pattern file_list = # TODO: Your code goes here # Create dataset from file list dataset = # TODO: Your code goes here # In training mode, shuffle the dataset and repeat indefinitely # TODO: Your code goes here # This will now return batches of features, label dataset = # TODO: Your code goes here return dataset return _input_fn Explanation: Create the input function Now we are ready to create an input function using the Dataset API. 
Exercise 2 In the code below you are asked to complete the TODOs to create the input function for our model. Look back at the previous examples we have completed if you need a hint as to how to complete the missing fields below. In the first block of TODOs, your decode_csv file should return a dictionary called features and a value label. In the next TODO, use tf.gfile.Glob to create a list of files that match the given filename_pattern. Have a look at the documentation for tf.gfile.Glob if you get stuck. In the next TODO, use tf.data.TextLineDataset to read text file and apply the decode_csv function you created above to parse each row example. In the next TODO you are asked to set up the dataset depending on whether you are in TRAIN mode or not. (Hint: Use tf.estimator.ModeKeys.TRAIN). When in TRAIN mode, set the appropriate number of epochs and shuffle the data accordingly. When not in TRAIN mode, you will use a different number of epochs and there is no need to shuffle the data. Finally, in the last TODO, collect the operations you set up above to produce the final dataset we'll use to feed data into our model. Have a look at the examples we did in the previous notebooks if you need inspiration. End of explanation def get_wide_deep(): # Define column types fc_is_male,fc_plurality,fc_mother_age,fc_gestation_weeks = [# TODO: Your code goes here] # Bucketized columns fc_age_buckets = # TODO: Your code goes here fc_gestation_buckets = # TODO: Your code goes here # Sparse columns are wide, have a linear relationship with the output wide = [# TODO: Your code goes here] # Feature cross all the wide columns and embed into a lower dimension fc_crossed = # TODO: Your code goes here fc_embed = # TODO: Your code goes here # Continuous columns are deep, have a complex relationship with the output deep = [# TODO: Your code goes here] return wide, deep Explanation: Create the feature columns Next, define the feature columns. For a wide and deep model, we need to determine which features we will use as wide features and which to pass as deep features. The function get_wide_deep below will return a tuple containing the wide feature columns and deep feature columns. Have a look at this blog post on wide and deep models to remind yourself how best to describe the features. Exercise 3 There are different ways to set up the feature columns for our Wide & Deep model. In the cell below, we create a function called get_wide_deep. It has no arguments but returns a tuple containing two things: a list of our wide feature columns, and a list of our deep feature columns. In the first block of TODOs below, you are asked to create a list and assing the necessary feature columns for the features is_male, plurality, mother_age and gestation_weeks. Think about the nature of these features and make sure you use the appropriate tf.feature_column. In the next TODO, you will create the bucketized features for mother_age and gestation_weeks. Think about a values to set for the boundaries argument for these feature columns. Hint: use np.arange([start],[stop],[step]).tolist() to easily create boundaries. Next, create a list of the appropriate feature columns you created above to define the wide columns of our model. In the next two TODOs, create a crossed feature column that uses all of the wide columns you created above. You'll want to use a large enough hash_bucket_size to ensure there aren't collisions. Then, use that crossed feature column to create a feature column that embeds fc_crossed into a lower dimension. 
Finally, collect the deep feature columns you created into a single list called deep End of explanation def serving_input_fn(): feature_placeholders = # TODO: Your code goes here features = # TODO: Your code goes here return tf.estimator.export.ServingInputReceiver(features = features, receiver_tensors = feature_placeholders) Explanation: Create the Serving Input function To predict with the TensorFlow model, we also need a serving input function. This will allow us to serve prediction later using the predetermined inputs. We will want all the inputs from our user. Exercise 4 In the first TODO below, create the feature_placeholders dictionary by setting up the placeholders for each of the features we will use in our model. Look at the documentation for tf.placeholder to make sure you provide all the necessary arguments. You'll need to create placeholders for the features - 'is_male' - 'mother_age' - 'plurality' - 'gestation_weeks' - 'key' You'll also need to create the features dictionary to pass to the tf.estimator.export.ServingInputReceiver function. The features dictionary will reference the fearture_placeholders dict you created above. Remember to expand the dimensions of the tensors you'll incoude in the features dictionary to accomodate for batched data we'll send to the model for predicitons later. End of explanation def train_and_evaluate(output_dir): EVAL_INTERVAL = 300 run_config = tf.estimator.RunConfig( save_checkpoints_secs = EVAL_INTERVAL, keep_checkpoint_max = 3) estimator = # TODO: Your code goes here train_spec = # TODO: Your code goes here exporter = # TODO: Your code goes here eval_spec = # TODO: Your code goes here tf.estimator.train_and_evaluate(# TODO: Your code goes here) Explanation: Create the model and run training and evaluation Lastly, we'll create the estimator to train and evaluate. In the cell below, we'll set up a DNNRegressor estimator and the train and evaluation operations. Exercise 5 In the cell below, complete the TODOs to create our model for training. - First you must create your estimator using tf.estimator.DNNLinearCombinedRegressor. - Next, complete the code to set up your tf.estimator.TrainSpec, selecting the appropriate input function and dataset to use to read data to your function during training. - Next, set up your exporter and tf.estimator.EvalSpec. - Finally, pass the variables you created above to call tf.estimator.train_and_evaluate Be sure to check the documentation for these Tensorflow operations to make sure you set things up correctly. End of explanation # Run the model shutil.rmtree(path = "babyweight_trained_wd", ignore_errors = True) # start fresh each time train_and_evaluate("babyweight_trained_wd") Explanation: Finally, we train the model! End of explanation
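For orientation only — explicitly not the solution to the exercises above — the generic bucketize/cross/embed pattern for wide and deep columns tends to look like the sketch below; the vocabulary values, bucket boundaries, and sizes are illustrative guesses rather than the real babyweight categories.

age_fc = tf.feature_column.numeric_column("mother_age")
age_buckets = tf.feature_column.bucketized_column(age_fc, boundaries=np.arange(15, 45, 1).tolist())
sex_fc = tf.feature_column.categorical_column_with_vocabulary_list("is_male", ["True", "False", "Unknown"])
crossed = tf.feature_column.crossed_column([age_buckets, sex_fc], hash_bucket_size=20000)
embedded = tf.feature_column.embedding_column(crossed, dimension=3)
wide_columns = [sex_fc, age_buckets, crossed]   # sparse / linear side
deep_columns = [age_fc, embedded]               # dense / DNN side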
1,933
Given the following text description, write Python code to implement the functionality described below step by step Description: L-BFGS vs GD Curiously, the original L-BFGS convergence proof essentially reduces the L-BFGS iteration to GD. This establishes L-BFGS converges globally for sufficiently regular functions and also that it has local linear convergence, just like GD, for smooth and strongly convex functions. But if you look carefully at the proof, the construction is very strange Step1: OK, so pretty interestingly, L-BFGS is still fundamentally linear in terms of its convergence rate (which translates to $\log \epsilon^{-1}$ speed for quadratic problems like ours). But clearly it gets better bang for its buck in the rate itself. And this is obviously important---even though the $\kappa = 50$ GD is still "exponentially fast", it's clear that the small slope means it'll still take a practically long time to converge. We know from explicit analysis that the GD linear rate will be something like $((\kappa - 1)/\kappa)^2$. If you squint really hard, that's basically $((1-\kappa^{-1})^\kappa)^{2/\kappa}\approx e^{-2/\kappa}$ for large $\kappa$, which is why our "exponential rates" look not-so-exponential, especially for $\kappa$ near the number of iterations (because then the suboptimality gap looks like $r^{T/\kappa}$ for $T$ iterations and $r=e^{-2}$). It's interesting to compare how sensitive L-BFGS and GD are to the condition number increase. Yes, we're fixing the eigenvalue pattern to be a linear spread, but let's save inspecting that to the end. The linear rate is effectively the slope that the log plot has at the end. While we're at it, what's the effect of more memory? Step2: Now that's pretty cool, it looks like the limiting behavior is still ultimately linear (as expected, it takes about as many iterations as the memory size for the limiting behavior to "kick in"), but as memory increases, the rate gets better. What if we make the eigenvalues clumpy? Step3: Wow, done in 12 iterations for the clustered eigenvalues. It looks like the hardest spectrum for L-BFGS (and coincedentally the one with the cleanest descent curves) is evenly log-spaced spectra. Let's try to map out the relationship between memory, kappa, and the linear convergence rate. Step4: OK, so the dependence in $\kappa$ still smells like $1-\kappa^{-1}$, but at least there's a very interesting linear trend between the linear convergence rate and memory (which does seem to bottom out for well-conditioned problems, but those don't matter so much). What's cool is that it's preserved across different $\kappa$. To finish off, just out of curiosity, do any of the BFGS diagnostics tell us much about the convergence rate? Step5: So, as we can see above, it's not quite right to look to either $\cos\theta_k$ nor $\|(B_k-\nabla^2_k)\mathbf{p}_k\|/\|\mathbf{p}_k\|$ to demonstrate L-BFGS convergence (the latter should tend to zero per BFGS theory as memory tends to infinity). But at least for quadratic functions, perhaps it's possible to work out the linear rate acceleration observed earlier via some matrix algebra. A follow-up question by Brian Borchers was what happens in the ill-conditioned regime. Unfortunately, the Wolfe search no longer converges, for GD and L-BFGS. Switching to backtracking-only stabilizes the descent. We end up with noisier curves so I geometrically average over a few samples. Note the rates are all still linear but much worse.
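To pin down the back-of-the-envelope rate argument in the description, the calculation being waved at is simply

$$\left(\tfrac{\kappa-1}{\kappa}\right)^{2}=\left(1-\kappa^{-1}\right)^{2}=\Big[\left(1-\kappa^{-1}\right)^{\kappa}\Big]^{2/\kappa}\approx e^{-2/\kappa},$$

so after $T$ iterations the suboptimality gap scales roughly like $e^{-2T/\kappa}$, and pushing it below $\epsilon$ costs on the order of $\tfrac{\kappa}{2}\log\epsilon^{-1}$ iterations — still a linear (geometric) rate, but with a slope that flattens as $\kappa$ grows, which is exactly why the large-$\kappa$ gradient-descent curves look so shallow in the plots.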
Python Code: from numpy_ringbuffer import RingBuffer import numpy as np from scipy.stats import special_ortho_group from scipy import linalg as sla %matplotlib inline from matplotlib import pyplot as plt from scipy.optimize import line_search class LBFGS: def __init__(self, m, d, x0, g0): self.s = RingBuffer(capacity=m, dtype=(float, d)) self.y = RingBuffer(capacity=m, dtype=(float, d)) self.x = x0.copy() self.g = g0.copy() def mvm(self, q): q = q.copy() m = len(self.s) alphas = np.zeros(m, dtype=float) for i, (s, y) in enumerate(zip(reversed(self.s), reversed(self.y))): inv_rho = s.dot(y) alphas[m - i - 1] = s.dot(q) / inv_rho q -= alphas[m - i - 1] * y if m > 0: s = next(reversed(self.s)) y = next(reversed(self.y)) gamma = s.dot(y) / y.dot(y) else: gamma = 1 z = gamma * q for (alpha, s, y) in zip(alphas, self.s, self.y): inv_rho = s.dot(y) beta = y.dot(z) / inv_rho z += s * (alpha - beta) return -z # mvm(self, self.g) gives current lbfgs direction # - H g def update(self, x, g): s = x - self.x y = g - self.g if self.s.is_full: assert self.y.is_full self.s.popleft() self.y.popleft() self.s.append(s) self.y.append(y) self.x = x.copy() self.g = g.copy() from scipy.optimize.linesearch import line_search_armijo def haar(n, d, rng=np.random): # https://nhigham.com/2020/04/22/what-is-a-random-orthogonal-matrix/ assert n >= d z = rng.normal(size=(n, d)) if n > d: q, r = sla.qr(z, mode='economic') else: q, r = sla.qr(z, mode='full') assert q.shape[1] == d, (q.shape[1], d) return q np.random.seed(1234) d = 100 n = 1000 vt = haar(d, d) u = haar(n, d) # bottom singular value we'll keep at 1 # so top determines the condition number # for a vector s of singular values # A = u diag(s) vt # objective = 1/2 ||Ax - 1||_2^2 x0 = np.zeros(d) b = np.ones(n) def xopt(A): u, s, vt = A return vt.T.dot(u.T.dot(b) / s) def objective(A, x): u, s, vt = A vtx = vt.dot(x) Ax = u.dot(s * vtx) diff = Ax - b f = diff.dot(diff) / 2 g = vt.T.dot(s * (s * vtx - u.T.dot(b))) return f, g def hessian_mvm(A, q): u, s, vt = A return vt.T.dot(s * (s * vt.dot(q))) def gd(A, max_iter=1000, tol=1e-11, c1=0.2, c2=0.8, armijo=False): x = x0.copy() xsol = xopt(A) fsol = objective(A, xsol)[0] gaps = [] for _ in range(max_iter): f, g = objective(A, x) gaps.append(abs(f - fsol)) if gaps[-1] < tol: break if armijo: alpha, *_ = line_search_armijo( lambda x: objective(A, x)[0], x, -g, g, f) else: alpha = line_search( lambda x: objective(A, x)[0], lambda x: objective(A, x)[1], x, -g, maxiter=1000, c1=c1, c2=c2) if alpha[0] is None: raise RuntimeError((alpha, g, x)) alpha = alpha[0] x -= alpha * g return gaps def lbfgs(A, m, max_iter=1000, tol=1e-11, extras=False, c1=0.2, c2=0.8, armijo=False): x = x0.copy() xsol = xopt(A) fsol = objective(A, xsol)[0] gaps = [] if extras: newton = [] cosine = [] f, g = objective(A, x) opt = LBFGS(m, d, x, g) for i in range(max_iter): gaps.append(abs(f - fsol)) p = opt.mvm(opt.g) if extras: newton.append(np.linalg.norm( hessian_mvm(A, p) - opt.mvm(p) ) / np.linalg.norm(p)) cosine.append(1 - p.dot(-g) / np.linalg.norm(p) / np.linalg.norm(g)) if gaps[-1] < tol: break if armijo: alpha, *_ = line_search_armijo( lambda x: objective(A, x)[0], x, p, opt.g, f) else: alpha = line_search( lambda x: objective(A, x)[0], lambda x: objective(A, x)[1], x, p, maxiter=1000, c1=c1, c2=c2) if alpha[0] is None: raise RuntimeError(alpha) alpha = alpha[0] x += alpha * p f, g = objective(A, x) opt.update(x, g) if extras: return gaps, newton, cosine return gaps for kappa, ls in [(10, '-'), (50, '--')]: s = np.linspace(1, kappa, d) A 
= (u, s, vt) gds = gd(A) memory = 10 lbs = lbfgs(A, memory) matrix_name = 'linspace eigenvals' plt.semilogy(gds, c='b', label=r'GD ($\kappa = {kappa}$)'.format(kappa=kappa), ls=ls) plt.semilogy(lbs, c='r', ls=ls, label=r'L-BFGS ($m = {memory}, \kappa = {kappa}$)'.format( kappa=kappa, memory=memory)) plt.legend(bbox_to_anchor=(1.05, 0.5), loc='center left') plt.xlabel('iterations') plt.ylabel('function optimality gap') plt.title(matrix_name) plt.show() Explanation: L-BFGS vs GD Curiously, the original L-BFGS convergence proof essentially reduces the L-BFGS iteration to GD. This establishes L-BFGS converges globally for sufficiently regular functions and also that it has local linear convergence, just like GD, for smooth and strongly convex functions. But if you look carefully at the proof, the construction is very strange: the more memory L-BFGS uses the less it looks like GD, the worse the smoothness constants are for the actual local rate of convergence. I go to into more detail on this in my SO question on the topic, but I was curious about some empirical assessments of how these compare. I found a study which confirms high-level intuition: L-BFGS interpolates between CG and BFGS as you increase memory. This relationship is true in a limiting sense: when $L=0$ L-BFGS is equal to a flavor of CG (with exact line search) and when $L=\infty$ it's BFGS. BFGS, in turn, iteratively constructs approximations $B_k$ to the Hessian which eventually satisfy a directional inequality $\|(B_k-\nabla_k^2)\mathbf{p}_k\|=o(\|\mathbf{p}_k\|)$ where $\mathbf{p}_k=-B_k^{-1}\nabla_k$ is the descent direction, which it turns out is enough to be "close enough" to Newton that you can achieve superlinear convergence rates. So, to what extent does agreement between $\mathbf{p}_k,\nabla_k$ (measured as $\cos^2 \theta_k$, the square of the cosine of the angle between the two) explain fast L-BFGS convergence? How about the magnitude of the Hessian-approximate-BFGS-Hessian agreement along the descent direction $\|(B_k-\nabla_k^2)\mathbf{p}_k\|$? What about the secant equation difference? One interesting hypothesis is that the low-rank view L-BFGS has into the Hessian means that it can't approximate the Hessian well if its eigenvalues are spread far apart (since you need to "spend" rank to explore parts of the eigenspace). Let's take some simple overdetermined least squares systems with varying eigenspectra and see how all the metrics above respond. End of explanation for memory, ls in [(10, '-'), (25, '--'), (50, ':')]: for kappa, color in [(10, 'r'), (25, 'b'), (50, 'g')]: s = np.linspace(1, kappa, d) A = (u, s, vt) lbs = lbfgs(A, memory) plt.semilogy(lbs, c=color, ls=ls, label=r'$\kappa = {kappa}, m = {memory}$'.format( memory=memory, kappa=kappa)) plt.legend(bbox_to_anchor=(1.05, 0.5), loc='center left') plt.xlabel('iterations') plt.ylabel('function optimality gap') plt.title(matrix_name) plt.show('linspace eigenvals (L-BFGS)') Explanation: OK, so pretty interestingly, L-BFGS is still fundamentally linear in terms of its convergence rate (which translates to $\log \epsilon^{-1}$ speed for quadratic problems like ours). But clearly it gets better bang for its buck in the rate itself. And this is obviously important---even though the $\kappa = 50$ GD is still "exponentially fast", it's clear that the small slope means it'll still take a practically long time to converge. We know from explicit analysis that the GD linear rate will be something like $((\kappa - 1)/\kappa)^2$. 
If you squint really hard, that's basically $((1-\kappa^{-1})^\kappa)^{2/\kappa}\approx e^{-2/\kappa}$ for large $\kappa$, which is why our "exponential rates" look not-so-exponential, especially for $\kappa$ near the number of iterations (because then the suboptimality gap looks like $r^{T/\kappa}$ for $T$ iterations and $r=e^{-2}$). It's interesting to compare how sensitive L-BFGS and GD are to the condition number increase. Yes, we're fixing the eigenvalue pattern to be a linear spread, but let's save inspecting that to the end. The linear rate is effectively the slope that the log plot has at the end. While we're at it, what's the effect of more memory? End of explanation for memory, ls in [(10, '-'), (25, '--'), (50, ':')]: for kappa, color in [(10, 'r'), (25, 'b'), (50, 'g')]: bot, mid, top = d // 3, d // 3, d - 2 * d // 3 s = [1] * bot + [kappa / 2] * mid + [kappa] * top s = np.array(s) A = (u, s, vt) lbs = lbfgs(A, memory) plt.semilogy(lbs, c=color, ls=ls, label=r'$\kappa = {kappa}, m = {memory}$'.format( memory=memory, kappa=kappa)) plt.legend(bbox_to_anchor=(1.05, 0.5), loc='center left') plt.xlabel('iterations') plt.ylabel('function optimality gap') plt.title('tri-cluster eigenvals (L-BFGS)') plt.show() for memory, ls in [(10, '-'), (25, '--'), (50, ':')]: for kappa, color in [(10, 'r'), (25, 'b'), (50, 'g')]: s = np.logspace(0, np.log10(kappa), d) A = (u, s, vt) lbs = lbfgs(A, memory) plt.semilogy(lbs, c=color, ls=ls, label=r'$\kappa = {kappa}, m = {memory}$'.format( memory=memory, kappa=kappa)) plt.legend(bbox_to_anchor=(1.05, 0.5), loc='center left') plt.xlabel('iterations') plt.ylabel('function optimality gap') plt.title('logspace eigenvals (L-BFGS)') plt.show() Explanation: Now that's pretty cool, it looks like the limiting behavior is still ultimately linear (as expected, it takes about as many iterations as the memory size for the limiting behavior to "kick in"), but as memory increases, the rate gets better. What if we make the eigenvalues clumpy? End of explanation from scipy.stats import linregress kappa = 30 memory = list(range(5, 100 + 1, 5)) for kappa, color in [(10, 'r'), (30, 'b'), (50, 'g')]: rates = [] for m in memory: s = np.logspace(0, np.log10(kappa), d) A = (u, s, vt) lbs = lbfgs(A, m) y = np.log(lbs) x = np.arange(len(lbs)) + 1 slope, *_ = linregress(x, y) rates.append(np.exp(slope)) plt.plot(memory, rates, c=color, label=r'$\kappa = {kappa}$'.format(kappa=kappa)) plt.legend(bbox_to_anchor=(1.05, 0.5), loc='center left') plt.xlabel('memory (dimension = {d})'.format(d=d)) plt.ylabel('linear convergence rate') plt.title(r'logspace eigenvals, L-BFGS' .format(kappa=kappa)) plt.show() kappas = list(range(5, 50 + 1, 5)) # interestingly, large memory becomes unstable for memory, color in [(10, 'r'), (15, 'b'), (20, 'g')]: rates = [] for kappa in kappas: s = np.logspace(0, np.log10(kappa), d) A = (u, s, vt) lbs = lbfgs(A, memory) y = np.log(lbs) x = np.arange(len(lbs)) + 1 slope, *_ = linregress(x, y) rates.append(np.exp(slope)) plt.plot(kappas, rates, c = color, label=r'$m = {memory}$'.format(memory=memory)) plt.legend(bbox_to_anchor=(1.05, 0.5), loc='center left') plt.xlabel(r'$\kappa$') plt.ylabel('linear convergence rate') plt.title(r'logspace eigenvals, L-BFGS' .format(memory=memory)) plt.show() Explanation: Wow, done in 12 iterations for the clustered eigenvalues. It looks like the hardest spectrum for L-BFGS (and coincedentally the one with the cleanest descent curves) is evenly log-spaced spectra. 
Let's try to map out the relationship between memory, kappa, and the linear convergence rate. End of explanation kappa = 30 s = np.logspace(0, np.log10(kappa), d) A = (u, s, vt) newtons, cosines = [], [] memories = [5, 50] for color, memory in zip(['r', 'g'], memories): lbs, newton, cosine = lbfgs(A, memory, extras=True) matrix_name = r'logspace eigenvals, $\kappa = {kappa}$'.format(kappa=kappa) plt.semilogy(lbs, c=color, label=r'L-BFGS ($m = {memory}$)'.format(memory=memory)) newtons.append(newton) cosines.append(cosine) gds = gd(A, max_iter=len(lbs)) plt.semilogy(gds, c='b', label='GD') plt.legend(bbox_to_anchor=(1.05, 0.5), loc='center left') plt.xlabel('iterations') plt.ylabel('function optimality gap') plt.title(matrix_name) plt.show() for newton, memory in zip(newtons, memories): newton = np.array(newton) index = np.arange(len(newton)) q = np.percentile(newton, 95) plt.semilogy(index[newton < q], newton[newton < q], label=r'$m = {memory}$'.format(memory=memory)) plt.xlabel('iterations') plt.ylabel(r'$\|(B_k -\nabla_k^2)\mathbf{p}_k\|_2/\|\mathbf{p}_k\|_2$') plt.title('Directional Newton Approximation') plt.show() for cosine, memory in zip(cosines, memories): cosine = np.array(cosine) plt.plot(cosine, label=r'$m = {memory}$'.format(memory=memory)) plt.xlabel('iterations') plt.ylabel(r'$\cos \theta_k$') plt.title('L-BFGS and GD Cosine') plt.show() Explanation: OK, so the dependence in $\kappa$ still smells like $1-\kappa^{-1}$, but at least there's a very interesting linear trend between the linear convergence rate and memory (which does seem to bottom out for well-conditioned problems, but those don't matter so much). What's cool is that it's preserved across different $\kappa$. To finish off, just out of curiosity, do any of the BFGS diagnostics tell us much about the convergence rate? 
End of explanation kappa_log10 = 5 s = np.logspace(0, kappa_log10, d) memory = 5 import ray ray.init(ignore_reinit_error=True) # https://vladfeinberg.com/2019/10/20/prngs.html from numpy.random import SeedSequence, default_rng ss = SeedSequence(12345) trials = 16 child_seeds = ss.spawn(trials) maxit = 1000 * 100 @ray.remote(num_cpus=1) def descent(A, algo): if algo == 'lbfgs': return lbfgs(A, memory, armijo=True, max_iter=maxit) else: return gd(A, max_iter=maxit, armijo=True) @ray.remote def trial(seed): rng = default_rng(seed) vt = haar(d, d, rng) u = haar(n, d, rng) A = (u, s, vt) lbs = descent.remote(A, 'lbfgs') gds = descent.remote(A, 'gd') lbs = ray.get(lbs) gds = ray.get(gds) lbsnp = np.full(maxit, min(lbs)) gdsnp = np.full(maxit, min(gds)) lbsnp[:len(lbs)] = lbs gdsnp[:len(gds)] = gds return lbsnp, gdsnp lbs_gm = np.zeros(maxit) gds_gm = np.zeros(maxit) for i, fut in enumerate([trial.remote(seed) for seed in child_seeds]): lbs, gds = ray.get(fut) lbs_gm += np.log(lbs) gds_gm += np.log(gds) lbs_gm /= trials gds_gm /= trials matrix_name = r'logspace eigenvals, $\kappa = 10^{{{kappa_log10}}}$, GM over {trials} trials'.format(kappa_log10=kappa_log10, trials=trials) plt.semilogy(np.exp(gds_gm), c='b', label='GD') plt.semilogy(np.exp(lbs_gm), c=color, label=r'L-BFGS ($m = {memory}$)'.format(memory=memory)) plt.legend(bbox_to_anchor=(1.05, 0.5), loc='center left') plt.xlabel('iterations') plt.ylabel('function optimality gap') plt.title(matrix_name) plt.show() Explanation: So, as we can see above, it's not quite right to look to either $\cos\theta_k$ nor $\|(B_k-\nabla^2_k)\mathbf{p}_k\|/\|\mathbf{p}_k\|$ to demonstrate L-BFGS convergence (the latter should tend to zero per BFGS theory as memory tends to infinity). But at least for quadratic functions, perhaps it's possible to work out the linear rate acceleration observed earlier via some matrix algebra. A follow-up question by Brian Borchers was what happens in the ill-conditioned regime. Unfortunately, the Wolfe search no longer converges, for GD and L-BFGS. Switching to backtracking-only stabilizes the descent. We end up with noisier curves so I geometrically average over a few samples. Note the rates are all still linear but much worse. End of explanation
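As a footnote to the closing remark about switching to backtracking-only line search: the code above relies on scipy's line_search_armijo, but the rule itself is simple. A minimal, hedged standalone sketch (illustrative only, assuming g and p are numpy vectors):

def backtracking_armijo(f, x, p, g, alpha0=1.0, c1=1e-4, shrink=0.5, max_iter=50):
    # Enforce only the sufficient-decrease (Armijo) condition
    #   f(x + a*p) <= f(x) + c1 * a * g.p
    # with no curvature/Wolfe check, shrinking the step until it holds.
    fx = f(x)
    slope = g.dot(p)          # directional derivative; negative for a descent direction
    alpha = alpha0
    for _ in range(max_iter):
        if f(x + alpha * p) <= fx + c1 * alpha * slope:
            return alpha
        alpha *= shrink
    return alpha              # give up and return the last (tiny) step

A Wolfe search additionally enforces a curvature condition; the closing paragraph above swaps it out for this simpler sufficient-decrease rule in the ill-conditioned runs.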
1,934
Given the following text description, write Python code to implement the functionality described below step by step Description: Resources For further information and tutorials see Step1: Variables Step2: Lists Python has two array-like things. The first is called a "list", which can hold any data types. Step3: The second is called a "tuple", which is an immutable list (nothing can be added or subtracted) whose elements also can't be reassigned. Step4: Indexing and Slicing Step5: If Statements Step6: For Loops Step9: Functions Step10: Useful Packages
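As a small added illustration (not part of the original tutorial) of the list/tuple distinction mentioned in the description:

fruits = ['apple', 'banana', 'mango']
fruits[0] = 'lychee'              # lists are mutable: element reassignment is fine
print(fruits)                     # ['lychee', 'banana', 'mango']

point = (1, 2)
try:
    point[0] = 3                  # tuples are immutable, so this raises TypeError
except TypeError as err:
    print('TypeError:', err)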
Python Code: # you can type math directly into Python (IPython, command line) 2+2 4**2 # this is "4 to the power of 2" 1./2 # converts 1 to a float 4*3; # The semicolon suppresses the output in IPython - this is similar to Matlab or Oracle Explanation: Resources For further information and tutorials see: Software Carpentry Python Programming Author: Madeleine Bonsma-Fisher, heavily borrowing from Lina Tran and Charles Zhu The interpreter End of explanation # the = symbol indicates that what is on the right is assigned to the variable name on the left name = 'Madeleine' year = 2017 # we can then see what our variable is holding using the print() function print(name) # we can check the type of our variable using the type(variable_name) function print(type(year)) # you can pull up help information on any object or function: help(type) # you must assign a variable before you call it, otherwise an error will occur print(age) Explanation: Variables End of explanation fruits = ['apple', 'banana', 'mango', 'lychee'] print(fruits) fruits.append('orange') print(fruits) # lists don't need to comprise of all the same type misc = [29, 'dog', fruits] print(misc) print(fruits + fruits) Explanation: Lists Python has two array-like things. The first is called a "list", which can hold any data types. End of explanation tup1 = (1,2) print(tup1) tup1[0] = 2 # this gives an error Explanation: The second is called a "tuple", which is an immutable list (nothing can be added or subtracted) whose elements also can't be reassigned. End of explanation #indexing in Python starts at 0, not 1 (like in Matlab or Oracle) print(fruits[0]) print(fruits[1]) # strings are just a particular kind of list s = 'This is a string.' print(s[0]) # use -1 to get the last element print(fruits[-1]) print(fruits[-2]) # to get a slice of the string use the : symbol print(s[0:4]) print(s[:4]) print(s[4:7]) print(s[7:]) print(s[7:len(s)]) Explanation: Indexing and Slicing End of explanation s2 = [19034, 23] # You will always need to start with an 'if' line # You do not need the elif or else statements # You can have as many elif statements as needed if type(s2) == str: print('s2 is a string') elif type(s2) == int: print('s2 is an integer') elif type(s2) == float: print('s2 is a float') else: print('s2 is not a string or integer') Explanation: If Statements End of explanation nums = [23, 56, 1, 10, 15, 0] # in this case, 'n' is a dummy variable that will be used by the for loop # you do not need to assign it ahead of time for n in nums: if n%2 == 0: print('even') else: print('odd') # for loops can iterate over strings as well vowels = 'aeiou' for vowel in vowels: print(vowel) Explanation: For Loops End of explanation # always use descriptive naming for functions, variables, arguments etc. def sum_of_squares(num1, num2): Input: two numbers Output: the sum of the squares of the two numbers ss = num1**2 + num2**2 return(ss) # The stuff inside is called the "docstring". 
It can be accessed by typing help(sum_of_squares) print(sum_of_squares(4,2)) # the return statement in a function allows us to store the output of a function call in a variable for later use ss1 = sum_of_squares(5,5) print(ss1) Explanation: Functions End of explanation # use a package by importing it, you can also give it a shorter alias, in this case 'np' import numpy as np array = np.arange(15) lst = list(range(15)) print(array) print(lst) print(type(array)) print(type(lst)) # numpy arrays allow for vectorized calculations print(array*2) print(lst*2) array = array.reshape([5,3]) print(array) # we can get the mean over all rows (using axis=1) array.mean(axis=1) # max value in each column array.max(axis=0) import pandas as pd # this will read in a csv file into a pandas DataFrame # this csv has data of country spending on healthcare data = pd.read_csv('health.csv', header=0, index_col=0, encoding="ISO-8859-1") # the .head() function will allow us to look at first few lines of the dataframe data.head() # by default, rows are indicated first, followed by the column: [row, column] data.loc['Canada', '2008'] # you can also slice a dataframe data.loc['Canada':'Denmark', '1999':'2001'] %matplotlib inline import matplotlib.pyplot as plt # the .plot() function will create a simple graph for you to quickly visualize your data data.loc['Denmark'].plot() data.loc['Canada'].plot() data.loc['India'].plot() plt.legend(loc='best') Explanation: Useful Packages End of explanation
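A short, hedged wrap-up example of my own (not from the original tutorial) that combines a user-defined function with numpy, in the spirit of the sections above:

import numpy as np

def normalize(values):
    """Rescale a sequence of numbers to the range [0, 1] (assumes they are not all equal)."""
    values = np.asarray(values, dtype=float)
    return (values - values.min()) / (values.max() - values.min())

print(normalize([2, 4, 6, 10]))   # [0.   0.25 0.5  1.  ]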
1,935
Given the following text description, write Python code to implement the functionality described below step by step Description: Generate large frames for testing Step1: On Flex 5 Step2: On Flex 5
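A minimal, hedged sketch (assumed layout and naming, not the notebook's own code) of the node-grid idea behind the frame described above — every storey level crossed with every column line:

import numpy as np
import pandas as pd

h = [6.5] + [5.5]*20 + [7.0]       # storey heights, m (22 storeys -> 23 levels)
w = [10.5, 10, 10, 10, 10, 10.5]   # bay widths, m (6 bays -> 7 column lines)

rows = []
for i, y in enumerate(np.concatenate(([0.0], np.cumsum(h)))):       # floor levels
    for j, x in enumerate(np.concatenate(([0.0], np.cumsum(w)))):   # column lines
        node_id = chr(ord('A') + j) + str(i)                        # e.g. 'A0', 'B3'
        rows.append((node_id, x*1000.0, y*1000.0))                  # coordinates in mm
nodes = pd.DataFrame(rows, columns=['NODEID', 'X', 'Y'])
print(len(nodes))                  # 23 levels * 7 column lines = 161 nodes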
Python Code: from Frame2D import Frame2D from Tables import Table, DataSource import numpy as np import pandas as pd ## NOTE: all units are kN and m FD = {'storey_heights': [6.5] + [5.5]*20 + [7.0], # m 'bay_widths': [10.5,10,10,10,10,10.5], # m 'frame_spacing':8, # m, used only for load calculation 'specified_loads':{'live':2.4,'dead':4.0,'snow':2.5,'wind':2.5}, # kPa 'load_combinations':{'Case-2a':{'dead':1.25,'live':1.5,'snow':0.5}, 'Case-2b':{'dead':1.25,'live':1.5,'wind':0.4}, 'Case-3a':{'dead':1.25,'snow':1.5,'live':0.5}, 'Case-3b':{'dead':1.25,'snow':1.5,'wind':0.4}, 'Case-4a':{'dead':1.25,'wind':1.4,'live':0.5}, 'Case-4b':{'dead':1.25,'wind':1.4,'snow':0.5}, }, 'load_combo':'Case-2b', 'braced_bays':[0,2,3,5], 'support_fixity': ['fx,fy']*7, 'beam_size': 'W1000x222', 'column_size': 'W360x216', } SaveData = True ShowResults = True FD def genframe(fd): h = fd['storey_heights'] w = fd['bay_widths'] s = fd['frame_spacing'] nnodes = (len(h)+1)*(len(w)+1) # names of column stacks and floor levels bayline = [chr(ord('A')+i) for i in range(len(w)+1)] floorlev = [str(i) for i in range(len(h)+1)] # generate the nodes nodelist = [] nidgrid = np.ndarray((len(h)+1,len(w)+1),dtype=np.object) for i in range(len(h)+1): y = sum(h[:i])*1000. for j in range(len(w)+1): x = sum(w[:j])*1000. nid = bayline[j]+floorlev[i] nodelist.append((nid,x,y)) nidgrid[i,j] = nid nodes = pd.DataFrame(nodelist,columns=['NODEID','X','Y']) # generate the supports assert len(fd['support_fixity'])==nidgrid.shape[1] supplist = [] for j,s in enumerate(fd['support_fixity']): nid = nidgrid[0,j] fix = s.strip().upper().split(',') if len(fix) < 3: fix += [np.nan] * (3-len(fix)) supplist.append([nid,]+fix) supports = pd.DataFrame(supplist,columns=['NODEID','C0','C1','C2']) # generate columns columns = [] for i in range(nidgrid.shape[0]-1): for j in range(nidgrid.shape[1]): nidj = nidgrid[i,j] nidk = nidgrid[i+1,j] mid = 'C' + nidj + nidk columns.append((mid,nidj,nidk)) # generate beams beams = [] roofbeams = [] pinnedbeams = [] for i in range(1,nidgrid.shape[0]): beamlist = beams if i < nidgrid.shape[0]-1 else roofbeams for j in range(nidgrid.shape[1]-1): nidj = nidgrid[i,j] nidk = nidgrid[i,j+1] mid = 'B' + nidj + nidk beamlist.append((mid,nidj,nidk)) if j not in fd['braced_bays']: pinnedbeams.append(mid) members = pd.DataFrame(columns+beams+roofbeams,columns=['MEMBERID','NODEJ','NODEK']) # generate releases rellist = [] for mid in pinnedbeams: rellist.append((mid,'MZJ')) rellist.append((mid,'MZK')) releases = pd.DataFrame(rellist,columns=['MEMBERID','RELEASE']) # generate properties proplist = [] size = fd['column_size'] for mid,j,k in columns: proplist.append((mid,size,np.nan,np.nan)) size = np.nan size = fd['beam_size'] for mid,j,k in beams+roofbeams: proplist.append((mid,size,np.nan,np.nan)) size = np.nan properties = pd.DataFrame(proplist,columns=['MEMBERID','SIZE','IX','A']) # generate node loads (wind from left) nloadlist = [] L = fd['specified_loads'] # area loads for i in range(1,nidgrid.shape[0]+1): H = (sum(h[:i+1])-sum(h[:i-1]))/2. 
FL = H*fd['frame_spacing']*L['wind'] if FL != 0.: nloadlist.append(('wind',nidgrid[i,0],'FX',FL*1000.)) node_loads = pd.DataFrame(nloadlist,columns=['LOAD','NODEID','DIRN','F']) # generate member loads mloadlist = [] UDL = -L['dead']*fd['frame_spacing'] mloadlist += [('dead',mid,'UDL',UDL,np.nan,np.nan,np.nan,np.nan) for mid,nj,nk in beams] mloadlist += [('dead',mid,'UDL',UDL,np.nan,np.nan,np.nan,np.nan) for mid,nj,nk in roofbeams] UDL = -L['live']*fd['frame_spacing'] mloadlist += [('live',mid,'UDL',UDL,np.nan,np.nan,np.nan,np.nan) for mid,nj,nk in beams] UDL = -L['snow']*fd['frame_spacing'] mloadlist += [('snow',mid,'UDL',UDL,np.nan,np.nan,np.nan,np.nan) for mid,nj,nk in roofbeams] member_loads = pd.DataFrame(mloadlist,columns='LOAD,MEMBERID,TYPE,W1,W2,A,B,C'.split(',')) # generate load combinations lclist = [] for case,loads in fd['load_combinations'].items(): for load,factor in loads.items(): lclist.append((case,load,factor)) load_combinations = pd.DataFrame(lclist,columns=['CASE','LOAD','FACTOR']) ds = DataSource ds.set_source(None) ds.set_table('nodes',nodes) ds.set_table('supports',supports) ds.set_table('members',members) ds.set_table('releases',releases) ds.set_table('properties',properties) ds.set_table('node_loads',node_loads) ds.set_table('member_loads',member_loads) ds.set_table('load_combinations',load_combinations) frame = Frame2D() frame.input_all() return frame %time f = genframe(FD) Explanation: Generate large frames for testing End of explanation if SaveData: NS = len(FD['storey_heights']) NB = len(FD['bay_widths']) name = 'l{}x{}'.format(NS,NB) f.write_all(name,makedir=True) %time rs = f.solve(FD['load_combo']) Explanation: On Flex 5: CPU times: user 261 ms, sys: 7.8 ms, total: 269 ms Wall time: 264 ms On yoga260: CPU times: user 289 ms, sys: 1.5 ms, total: 290 ms Wall time: 294 ms On post: CPU times: user 390 ms, sys: 23.9 ms, total: 414 ms Wall time: 394 ms End of explanation if ShowResults: f.print_input() if ShowResults: f.print_results(rs) if SaveData: f.write_results(name,rs) %matplotlib inline f.show() Explanation: On Flex 5: ``` CPU times: user 115 ms, sys: 35.8 ms, total: 151 ms Wall time: 82.9 ms CPU times: user 132 ms, sys: 36.7 ms, total: 168 ms Wall time: 95.2 ms ``` On yoga260: ``` CPU times: user 73 ms, sys: 56 ms, total: 129 ms Wall time: 85.4 ms CPU times: user 263 ms, sys: 113 ms, total: 375 ms Wall time: 163 ms ``` On post: CPU times: user 127 ms, sys: 91 ms, total: 218 ms Wall time: 109 ms End of explanation
1,936
Given the following text description, write Python code to implement the functionality described below step by step Description: Skip-gram word2vec In this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like machine translation. Readings Here are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material. A really good conceptual overview of word2vec from Chris McCormick First word2vec paper from Mikolov et al. NIPS paper with improvements for word2vec also from Mikolov et al. An implementation of word2vec from Thushan Ganegedara TensorFlow word2vec tutorial Word embeddings When you're dealing with words in text, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient, you'll have one element set to 1 and the other 50,000 set to 0. The matrix multiplication going into the first hidden layer will have almost all of the resulting values be zero. This a huge waste of computation. To solve this problem and greatly increase the efficiency of our networks, we use what are called embeddings. Embeddings are just a fully connected layer like you've seen before. We call this layer the embedding layer and the weights are embedding weights. We skip the multiplication into the embedding layer by instead directly grabbing the hidden layer values from the weight matrix. We can do this because the multiplication of a one-hot encoded vector with a matrix returns the row of the matrix corresponding the index of the "on" input unit. Instead of doing the matrix multiplication, we use the weight matrix as a lookup table. We encode the words as integers, for example "heart" is encoded as 958, "mind" as 18094. Then to get hidden layer values for "heart", you just take the 958th row of the embedding matrix. This process is called an embedding lookup and the number of hidden units is the embedding dimension. <img src='assets/tokenize_lookup.png' width=500> There is nothing magical going on here. The embedding lookup table is just a weight matrix. The embedding layer is just a hidden layer. The lookup is just a shortcut for the matrix multiplication. The lookup table is trained just like any weight matrix as well. Embeddings aren't only used for words of course. You can use them for any model where you have a massive number of classes. A particular type of model called Word2Vec uses the embedding layer to find vector representations of words that contain semantic meaning. Word2Vec The word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as "black", "white", and "red" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram. <img src="assets/word2vec_architectures.png" width="500"> In this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts. First up, importing packages. 
Step1: Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space. Step2: Preprocessing Here I'm fixing up the text to make training easier. This comes from the utils module I wrote. The preprocess function coverts any punctuation into tokens, so a period is changed to &lt;PERIOD&gt;. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it. Step3: And here I'm creating dictionaries to covert words to integers and backwards, integers to words. The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words. Step4: Subsampling Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by $$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$ where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset. I'm going to leave this up to you as an exercise. This is more of a programming challenge, than about deep learning specifically. But, being able to prepare your data for your network is an important skill to have. Check out my solution to see how I did it. Exercise Step5: Making batches Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$. From Mikolov et al. Step6: Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function by the way, helps save memory. Step7: Building the graph From Chris McCormick's blog, we can see the general structure of our network. The input words are passed in as integers. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal. The idea here is to train the hidden layer weight matrix to find efficient representations for our words. We can discard the softmax layer becuase we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset. I'm going to have you build the graph in stages now. First off, creating the inputs and labels placeholders like normal. Exercise Step8: Embedding The embedding matrix has a size of the number of words by the number of units in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. 
Remember that we're using tokenized data for our inputs, usually as integers, where the number of tokens is the number of words in our vocabulary. Exercise Step9: Negative sampling For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called "negative sampling". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss. Exercise Step10: Validation This code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings. Step11: Training Below is the code to train the network. Every 100 batches it reports the training loss. Every 1000 batches, it'll print out the validation words. Step12: Restore the trained network if you need to Step13: Visualizing the word vectors Below we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local stucture. Check out this post from Christopher Olah to learn more about T-SNE and other ways to visualize high-dimensional data.
Python Code: import time import numpy as np import tensorflow as tf import utils Explanation: Skip-gram word2vec In this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like machine translation. Readings Here are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material. A really good conceptual overview of word2vec from Chris McCormick First word2vec paper from Mikolov et al. NIPS paper with improvements for word2vec also from Mikolov et al. An implementation of word2vec from Thushan Ganegedara TensorFlow word2vec tutorial Word embeddings When you're dealing with words in text, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient, you'll have one element set to 1 and the other 50,000 set to 0. The matrix multiplication going into the first hidden layer will have almost all of the resulting values be zero. This a huge waste of computation. To solve this problem and greatly increase the efficiency of our networks, we use what are called embeddings. Embeddings are just a fully connected layer like you've seen before. We call this layer the embedding layer and the weights are embedding weights. We skip the multiplication into the embedding layer by instead directly grabbing the hidden layer values from the weight matrix. We can do this because the multiplication of a one-hot encoded vector with a matrix returns the row of the matrix corresponding the index of the "on" input unit. Instead of doing the matrix multiplication, we use the weight matrix as a lookup table. We encode the words as integers, for example "heart" is encoded as 958, "mind" as 18094. Then to get hidden layer values for "heart", you just take the 958th row of the embedding matrix. This process is called an embedding lookup and the number of hidden units is the embedding dimension. <img src='assets/tokenize_lookup.png' width=500> There is nothing magical going on here. The embedding lookup table is just a weight matrix. The embedding layer is just a hidden layer. The lookup is just a shortcut for the matrix multiplication. The lookup table is trained just like any weight matrix as well. Embeddings aren't only used for words of course. You can use them for any model where you have a massive number of classes. A particular type of model called Word2Vec uses the embedding layer to find vector representations of words that contain semantic meaning. Word2Vec The word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as "black", "white", and "red" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram. <img src="assets/word2vec_architectures.png" width="500"> In this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts. First up, importing packages. 
End of explanation from urllib.request import urlretrieve from os.path import isfile, isdir from tqdm import tqdm import zipfile dataset_folder_path = 'data' dataset_filename = 'text8.zip' dataset_name = 'Text8 Dataset' class DLProgress(tqdm): last_block = 0 def hook(self, block_num=1, block_size=1, total_size=None): self.total = total_size self.update((block_num - self.last_block) * block_size) self.last_block = block_num if not isfile(dataset_filename): with DLProgress(unit='B', unit_scale=True, miniters=1, desc=dataset_name) as pbar: urlretrieve( 'http://mattmahoney.net/dc/text8.zip', dataset_filename, pbar.hook) if not isdir(dataset_folder_path): with zipfile.ZipFile(dataset_filename) as zip_ref: zip_ref.extractall(dataset_folder_path) with open('data/text8') as f: text = f.read() Explanation: Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space. End of explanation words = utils.preprocess(text) print(words[:30]) print("Total words: {}".format(len(words))) print("Unique words: {}".format(len(set(words)))) Explanation: Preprocessing Here I'm fixing up the text to make training easier. This comes from the utils module I wrote. The preprocess function coverts any punctuation into tokens, so a period is changed to &lt;PERIOD&gt;. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it. End of explanation vocab_to_int, int_to_vocab = utils.create_lookup_tables(words) int_words = [vocab_to_int[word] for word in words] Explanation: And here I'm creating dictionaries to covert words to integers and backwards, integers to words. The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words. End of explanation ## Your code here from collections import Counter import random threshold=1e-5; word_counts=Counter(int_words) total=len(int_words) word_freq={word:count/total for word,count in word_counts.items()} p_drop={word:1-np.sqrt(threshold/word_freq[word]) for word,count in word_counts.items()} train_words = [word for word in int_words if random.random()<(1-p_drop[word])] print(train_words[:100]) Explanation: Subsampling Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by $$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$ where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset. I'm going to leave this up to you as an exercise. This is more of a programming challenge, than about deep learning specifically. But, being able to prepare your data for your network is an important skill to have. Check out my solution to see how I did it. Exercise: Implement subsampling for the words in int_words. 
That is, go through int_words and discard each word given the probablility $P(w_i)$ shown above. Note that $P(w_i)$ is the probability that a word is discarded. Assign the subsampled data to train_words. End of explanation def get_target(words, idx, window_size=5): ''' Get a list of words in a window around an index. ''' # Your code here R=np.random.randint(1,window_size+1) start=idx-R if idx>=R else 0 stop=idx+R target_words=set(words[start:idx]+words[idx+1:stop+1]) return list(target_words) Explanation: Making batches Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$. From Mikolov et al.: "Since the more distant words are usually less related to the current word than those close to it, we give less weight to the distant words by sampling less from those words in our training examples... If we choose $C = 5$, for each training word we will select randomly a number $R$ in range $< 1; C >$, and then use $R$ words from history and $R$ words from the future of the current word as correct labels." Exercise: Implement a function get_target that receives a list of words, an index, and a window size, then returns a list of words in the window around the index. Make sure to use the algorithm described above, where you choose a random number of words from the window. End of explanation def get_batches(words, batch_size, window_size=5): ''' Create a generator of word batches as a tuple (inputs, targets) ''' n_batches = len(words)//batch_size # only full batches words = words[:n_batches*batch_size] for idx in range(0, len(words), batch_size): x, y = [], [] batch = words[idx:idx+batch_size] for ii in range(len(batch)): batch_x = batch[ii] batch_y = get_target(batch, ii, window_size) y.extend(batch_y) x.extend([batch_x]*len(batch_y)) yield x, y Explanation: Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function by the way, helps save memory. End of explanation train_graph = tf.Graph() with train_graph.as_default(): inputs = tf.placeholder(tf.int32,[None],name="inputs") labels = tf.placeholder(tf.int32,[None,None],name="labels") Explanation: Building the graph From Chris McCormick's blog, we can see the general structure of our network. The input words are passed in as integers. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal. The idea here is to train the hidden layer weight matrix to find efficient representations for our words. We can discard the softmax layer becuase we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset. I'm going to have you build the graph in stages now. First off, creating the inputs and labels placeholders like normal. Exercise: Assign inputs and labels using tf.placeholder. We're going to be passing in integers, so set the data types to tf.int32. The batches we're passing in will have varying sizes, so set the batch sizes to [None]. 
To make things work later, you'll need to set the second dimension of labels to None or 1. End of explanation n_vocab = len(int_to_vocab) n_embedding = 200# Number of embedding features with train_graph.as_default(): embedding = tf.Variable(tf.random_uniform((n_vocab,n_embedding),-1,1)) embed = tf.nn.embedding_lookup(embedding,inputs) Explanation: Embedding The embedding matrix has a size of the number of words by the number of units in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using tokenized data for our inputs, usually as integers, where the number of tokens is the number of words in our vocabulary. Exercise: Tensorflow provides a convenient function tf.nn.embedding_lookup that does this lookup for us. You pass in the embedding matrix and a tensor of integers, then it returns rows in the matrix corresponding to those integers. Below, set the number of embedding features you'll use (200 is a good start), create the embedding matrix variable, and use tf.nn.embedding_lookup to get the embedding tensors. For the embedding matrix, I suggest you initialize it with a uniform random numbers between -1 and 1 using tf.random_uniform. End of explanation # Number of negative labels to sample n_sampled = 100 with train_graph.as_default(): softmax_w = tf.Variable(tf.truncated_normal((n_vocab,n_embedding),stddev=0.1))# create softmax weight matrix here softmax_b = tf.Variable(tf.zeros(n_vocab))# create softmax biases here # Calculate the loss using negative sampling loss = tf.nn.sampled_softmax_loss (softmax_w,softmax_b,labels,embed,n_sampled,n_vocab) cost = tf.reduce_mean(loss) optimizer = tf.train.AdamOptimizer().minimize(cost) Explanation: Negative sampling For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called "negative sampling". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss. Exercise: Below, create weights and biases for the softmax layer. Then, use tf.nn.sampled_softmax_loss to calculate the loss. Be sure to read the documentation to figure out how it works. End of explanation with train_graph.as_default(): ## From Thushan Ganegedara's implementation valid_size = 16 # Random set of words to evaluate similarity on. valid_window = 100 # pick 8 samples from (0,100) and (1000,1100) each ranges. lower id implies more frequent valid_examples = np.array(random.sample(range(valid_window), valid_size//2)) valid_examples = np.append(valid_examples, random.sample(range(1000,1000+valid_window), valid_size//2)) valid_dataset = tf.constant(valid_examples, dtype=tf.int32) # We use the cosine distance: norm = tf.sqrt(tf.reduce_sum(tf.square(embedding), 1, keep_dims=True)) normalized_embedding = embedding / norm valid_embedding = tf.nn.embedding_lookup(normalized_embedding, valid_dataset) similarity = tf.matmul(valid_embedding, tf.transpose(normalized_embedding)) # If the checkpoints directory doesn't exist: !mkdir checkpoints Explanation: Validation This code is from Thushan Ganegedara's implementation. 
Here we're going to choose a few common words and few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings. End of explanation epochs = 10 batch_size = 1000 window_size = 10 with train_graph.as_default(): saver = tf.train.Saver() with tf.Session(graph=train_graph) as sess: iteration = 1 loss = 0 sess.run(tf.global_variables_initializer()) for e in range(1, epochs+1): batches = get_batches(train_words, batch_size, window_size) start = time.time() for x, y in batches: feed = {inputs: x, labels: np.array(y)[:, None]} train_loss, _ = sess.run([cost, optimizer], feed_dict=feed) loss += train_loss if iteration % 100 == 0: end = time.time() print("Epoch {}/{}".format(e, epochs), "Iteration: {}".format(iteration), "Avg. Training loss: {:.4f}".format(loss/100), "{:.4f} sec/batch".format((end-start)/100)) loss = 0 start = time.time() if iteration % 1000 == 0: ## From Thushan Ganegedara's implementation # note that this is expensive (~20% slowdown if computed every 500 steps) sim = similarity.eval() for i in range(valid_size): valid_word = int_to_vocab[valid_examples[i]] top_k = 8 # number of nearest neighbors nearest = (-sim[i, :]).argsort()[1:top_k+1] log = 'Nearest to %s:' % valid_word for k in range(top_k): close_word = int_to_vocab[nearest[k]] log = '%s %s,' % (log, close_word) print(log) iteration += 1 save_path = saver.save(sess, "checkpoints/text8.ckpt") embed_mat = sess.run(normalized_embedding) Explanation: Training Below is the code to train the network. Every 100 batches it reports the training loss. Every 1000 batches, it'll print out the validation words. End of explanation with train_graph.as_default(): saver = tf.train.Saver() with tf.Session(graph=train_graph) as sess: saver.restore(sess, tf.train.latest_checkpoint('checkpoints')) embed_mat = sess.run(embedding) Explanation: Restore the trained network if you need to: End of explanation %matplotlib inline %config InlineBackend.figure_format = 'retina' import matplotlib.pyplot as plt from sklearn.manifold import TSNE viz_words = 500 tsne = TSNE() embed_tsne = tsne.fit_transform(embed_mat[:viz_words, :]) fig, ax = plt.subplots(figsize=(14, 14)) for idx in range(viz_words): plt.scatter(*embed_tsne[idx, :], color='steelblue') plt.annotate(int_to_vocab[idx], (embed_tsne[idx, 0], embed_tsne[idx, 1]), alpha=0.7) Explanation: Visualizing the word vectors Below we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local stucture. Check out this post from Christopher Olah to learn more about T-SNE and other ways to visualize high-dimensional data. End of explanation
1,937
Given the following text description, write Python code to implement the functionality described below step by step Description: Quasi-binomial regression This notebook demonstrates using custom variance functions and non-binary data with the quasi-binomial GLM family to perform a regression analysis using a dependent variable that is a proportion. The notebook uses the barley leaf blotch data that has been discussed in several textbooks. See below for one reference Step2: The raw data, expressed as percentages. We will divide by 100 to obtain proportions. Step3: The regression model is a two-way additive model with site and variety effects. The data are a full unreplicated design with 10 rows (sites) and 9 columns (varieties). Step4: Fit the quasi-binomial regression with the standard variance function. Step5: The plot below shows that the default variance function is not capturing the variance structure very well. Also note that the scale parameter estimate is quite small. Step6: An alternative variance function is mu^2 * (1 - mu)^2. Step7: Fit the quasi-binomial regression with the alternative variance function. Step8: With the alternative variance function, the mean/variance relationship seems to capture the data well, and the estimated scale parameter is close to 1.
Python Code: import statsmodels.api as sm import numpy as np import pandas as pd import matplotlib.pyplot as plt from io import StringIO Explanation: Quasi-binomial regression This notebook demonstrates using custom variance functions and non-binary data with the quasi-binomial GLM family to perform a regression analysis using a dependent variable that is a proportion. The notebook uses the barley leaf blotch data that has been discussed in several textbooks. See below for one reference: https://support.sas.com/documentation/cdl/en/statug/63033/HTML/default/viewer.htm#statug_glimmix_sect016.htm End of explanation raw = StringIO(0.05,0.00,1.25,2.50,5.50,1.00,5.00,5.00,17.50 0.00,0.05,1.25,0.50,1.00,5.00,0.10,10.00,25.00 0.00,0.05,2.50,0.01,6.00,5.00,5.00,5.00,42.50 0.10,0.30,16.60,3.00,1.10,5.00,5.00,5.00,50.00 0.25,0.75,2.50,2.50,2.50,5.00,50.00,25.00,37.50 0.05,0.30,2.50,0.01,8.00,5.00,10.00,75.00,95.00 0.50,3.00,0.00,25.00,16.50,10.00,50.00,50.00,62.50 1.30,7.50,20.00,55.00,29.50,5.00,25.00,75.00,95.00 1.50,1.00,37.50,5.00,20.00,50.00,50.00,75.00,95.00 1.50,12.70,26.25,40.00,43.50,75.00,75.00,75.00,95.00) Explanation: The raw data, expressed as percentages. We will divide by 100 to obtain proportions. End of explanation df = pd.read_csv(raw, header=None) df = df.melt() df["site"] = 1 + np.floor(df.index / 10).astype(np.int) df["variety"] = 1 + (df.index % 10) df = df.rename(columns={"value": "blotch"}) df = df.drop("variable", axis=1) df["blotch"] /= 100 Explanation: The regression model is a two-way additive model with site and variety effects. The data are a full unreplicated design with 10 rows (sites) and 9 columns (varieties). End of explanation model1 = sm.GLM.from_formula("blotch ~ 0 + C(variety) + C(site)", family=sm.families.Binomial(), data=df) result1 = model1.fit(scale="X2") print(result1.summary()) Explanation: Fit the quasi-binomial regression with the standard variance function. End of explanation plt.clf() plt.grid(True) plt.plot(result1.predict(linear=True), result1.resid_pearson, 'o') plt.xlabel("Linear predictor") plt.ylabel("Residual") Explanation: The plot below shows that the default variance function is not capturing the variance structure very well. Also note that the scale parameter estimate is quite small. End of explanation class vf(sm.families.varfuncs.VarianceFunction): def __call__(self, mu): return mu**2 * (1 - mu)**2 def deriv(self, mu): return 2*mu - 6*mu**2 + 4*mu**3 Explanation: An alternative variance function is mu^2 * (1 - mu)^2. End of explanation bin = sm.families.Binomial() bin.variance = vf() model2 = sm.GLM.from_formula("blotch ~ 0 + C(variety) + C(site)", family=bin, data=df) result2 = model2.fit(scale="X2") print(result2.summary()) Explanation: Fit the quasi-binomial regression with the alternative variance function. End of explanation plt.clf() plt.grid(True) plt.plot(result2.predict(linear=True), result2.resid_pearson, 'o') plt.xlabel("Linear predictor") plt.ylabel("Residual") Explanation: With the alternative variance function, the mean/variance relationship seems to capture the data well, and the estimated scale parameter is close to 1. End of explanation
1,938
Given the following text description, write Python code to implement the functionality described below step by step Description: Setup Step1: Tweet activity Let's explore counts by hour, day of the week, and weekday versus weekend hourly trends. Step2: Hmmm, what's this created_at attribute? Step3: Hourly counts Step4: Because there are hours of the day where there are no tweets, one must explicitly add any zero-count hours to the index. Step5: Day of the week counts Step6: Weekday vs weekend hourly counts Step7: Visualize tweet counts By hour Step8: Let's see if we can "fancy-it-up" a bit by making it 538 blog-like. Note Step9: By day of the week Step10: By weekday and weekend
Python Code: %matplotlib inline import matplotlib.pyplot as plt import seaborn as sns plt.style.use('fivethirtyeight') import tweepy import numpy as np import pandas as pd from collections import Counter from datetime import datetime # Turn on retina mode for high-quality inline plot resolution from IPython.display import set_matplotlib_formats set_matplotlib_formats('retina') # Version of Python import platform platform.python_version() # Import Twitter API keys from credentials import * # Helper function to connect to Twitter API def twitter_setup(): auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET) auth.set_access_token(ACCESS_TOKEN, ACCESS_SECRET) api = tweepy.API(auth) return api # Extract Twitter data extractor = twitter_setup() # Twitter user twitter_handle = 'fastforwardlabs' # Get most recent two hundred tweets tweets = extractor.user_timeline(screen_name=twitter_handle, count=200) print('Number of tweets extracted: {}.\n'.format(len(tweets))) Explanation: Setup End of explanation # Inspect attributes of tweepy object print(dir(tweets[0])) # look at the first element/record Explanation: Tweet activity Let's explore counts by hour, day of the week, and weekday versus weekend hourly trends. End of explanation # What format is it in? answer: GMT, according to Twitter API print(tweets[0].created_at) # Create datetime index: convert to GMT then to Eastern daylight time EDT tweet_dates = pd.DatetimeIndex([tweet.created_at for tweet in tweets], tz='GMT').tz_convert('US/Eastern') Explanation: Hmmm, what's this created_at attribute? End of explanation # Count the number of tweets per hour num_per_hour = pd.DataFrame( { 'counts': Counter(tweet_dates.hour) }) # Create hours data frame hours = pd.DataFrame({'hours': np.arange(24)}) Explanation: Hourly counts: End of explanation # Merge data frame objects on common index, peform left outer join and fill NaN with zero-values hour_counts = pd.merge(hours, num_per_hour, left_index=True, right_index=True, how='left').fillna(0) hour_counts Explanation: Because there are hours of the day where there are no tweets, one must explicitly add any zero-count hours to the index. 
End of explanation # Count the number of tweets by day of the week num_per_day = pd.DataFrame( { 'counts': Counter(tweet_dates.weekday) }) # Create days data frame days = pd.DataFrame({'day': np.arange(7)}) # Merge data frame objects on common index, perform left outer join and fill NaN with zero-values daily_counts = pd.merge(days, num_per_day, left_index=True, right_index=True, how='left').fillna(0) Explanation: Day of the week counts: End of explanation # Flag the weekend from weekday tweets weekend = np.where(tweet_dates.weekday < 5, 'weekday', 'weekend') # Construct multiply-indexed DataFrame obj indexed by weekday/weekend and by hour by_time = pd.DataFrame([tweet.created_at for tweet in tweets], columns=['counts'], index=tweet_dates).groupby([weekend, tweet_dates.hour]).count() # Optionally, set the names attribute of the index by_time.index.names=['daytype', 'hour'] # Show two-dimensional view of multiply-indexed DataFrame by_time.unstack() # Merge DataFrame on common index, perform left outer join and fill NaN with zero-values by_time = pd.merge(hours, by_time.unstack(level=0), left_index=True, right_index=True, how='left').fillna(0) # Show last five records by_time.tail() Explanation: Weekday vs weekend hourly counts: End of explanation # Optional: Create xtick labels in Standard am/pm time format xticks = pd.date_range('00:00', '23:00', freq='H', tz='US/Eastern').map(lambda x: pd.datetime.strftime(x, '%I %p')) Explanation: Visualize tweet counts By hour: End of explanation %%javascript IPython.OutputArea.prototype._should_scroll = function(lines) { return false; } # Plot ax = hour_counts.plot(x='hours', y='counts', kind='line', figsize=(12, 8)) ax.set_xticks(np.arange(24)) #ax.set_xticklabels(xticks, rotation=50) #ax.set_title('Number of Tweets per hour') #ax.set_xlabel('Hour') #ax.set_ylabel('No. of Tweets') #ax.set_yticklabels(labels=['0 ', '5 ', '10 ', '15 ', '20 ', '25 ', '30 ', '35 ', '40 ']) ax.tick_params(axis='both', which='major', labelsize=14) ax.axhline(y=0, color='black', linewidth=1.3, alpha=0.7) ax.set_xlim(left=-1, right=24) ax.xaxis.label.set_visible(False) now = datetime.strftime(datetime.now(), '%a, %Y-%b-%d at %I:%M %p EDT') ax.text(x=-2.25, y=-5.5, s = u"\u00A9" + 'THE_KLEI {} Source: Twitter, Inc. '.format(now), fontsize=14, color='#f0f0f0', backgroundcolor='grey') ax.text(x=-2.35, y=44, s="When does @{} tweet? - time of the day".format(twitter_handle), fontsize=26, weight='bold', alpha=0.75) ax.text(x=-2.35, y=42, s='Number of Tweets per hour based-on 200 most-recent tweets as of {}'.format(datetime.strftime(datetime.now(), '%b %d, %Y')), fontsize=19, alpha=0.85) plt.show() Explanation: Let's see if we can "fancy-it-up" a bit by making it 538 blog-like. Note: The following cell disables notebook autoscrolling for long outputs. Otherwise, the notebook will embed the plot inside a scrollable cell, which is more difficult to read the plot. 
End of explanation # Plot daily_counts.index = ['Mon', 'Tues', 'Wed', 'Thurs', 'Fri', 'Sat', 'Sun'] daily_counts['counts'].plot(title='Daily tweet counts', figsize=(12, 8), legend=True) plt.show() Explanation: By day of the week: End of explanation %%javascript IPython.OutputArea.prototype._should_scroll = function(lines) { return false; } # Plot fig, ax = plt.subplots(2, 1, figsize=(14, 12)) # weekdays by_time.loc[:, [('counts', 'weekday')]].plot(ax=ax[0], title='Weekdays', kind='line') # weekends by_time.loc[:, [('counts', 'weekend')]].plot(ax=ax[1], title='Weekend', kind='line') ax[0].set_xticks(np.arange(24)) #ax[0].set_xticklabels(xticks, rotation=50) ax[1].set_xticks(np.arange(24)) #ax[1].set_xticklabels(xticks, rotation=50) plt.show() Explanation: By weekday and weekend: End of explanation
1,939
Given the following text description, write Python code to implement the functionality described below step by step Description: Analysing tabular data we are going to use a LIBRARY called numpy Step1: Variables Step2: Tasks * Produce maximum and minimum plots of this data * What do you think?
Python Code: import numpy numpy.loadtxt(fname='data/weather-01.csv', delimiter = ',') Explanation: Analysing tabular data we are going to use a LIBRARY called numpy End of explanation Weight_kg = 55 print (Weight_kg) print('Weight in pounds:', Weight_kg * 2.2) Weight_kg = 57.5 print ('New weight: ', Weight_kg * 2.2) %whos data = numpy.loadtxt(fname='data/weather-01.csv', delimiter = ',') print (data) print (type(data)) %whos # Finding out the data type print (data.dtype) # Find out the shape print (data.shape) # This is 60 rows * 40 columns # Getting a single number out of the array print ("First value in data: ", data [0, 0]) print ('A middle value: ', data[30, 20]) # Lets get the first 10 columns for the first 4 rows print (data[0:4, 0:10]) # Start at index 0 and go up to BUT NOT INCLUDING index 4 # We don't need to start slicing at 0 print (data [5:10, 7:15]) # We don't even need to include the UPPER and LOWER bounds smallchunk = data [:3, 36:] print (smallchunk) # Arithmetic on arrays doublesmallchunk = smallchunk * 2.0 print (doublesmallchunk) triplesmallchunk = smallchunk + doublesmallchunk print (triplesmallchunk) print (numpy.mean(data)) print (numpy.transpose(data)) print (numpy.max(data)) print (numpy.min(data)) # Get a set of data for the first station station_0 = data [0, :] print (numpy.max(station_0)) # We don't need to create 'temporary' array slices # We can refer to what we call array axes # axis = 0 gets the mean DOWN each column, so the mean temperature for each recording period print (numpy.mean(data, axis = 0)) # axis = 1 gets the mean ACROSS each row, so the mean temperature for each recording period print (numpy.mean(data, axis = 1)) # do some simple visualisations import matplotlib.pyplot %matplotlib inline image = matplotlib.pyplot.imshow(data) # Let's look at the average temprature over time avg_temperature = numpy.mean(data, axis = 0) avg_plot = matplotlib.pyplot.plot(avg_temperature) Explanation: Variables End of explanation max_temprature = numpy.max(data, axis = 0) min_temprature = numpy.min(data, axis = 0) max_plot = matplotlib.pyplot.plot(max_temprature) min_plot = matplotlib.pyplot.plot(min_temprature) min_p = numpy.min(data, axis = 0) min_plot = matplotlib.pyplot.plot(min_p) Explanation: Tasks * Produce maximum and minimum plots of this data * What do you think? End of explanation
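The axis argument is the part of this lesson that most often causes confusion; a tiny illustrative array (made-up values, not from weather-01.csv) makes the two means explicit:

```python
import numpy

tiny = numpy.array([[1.0, 2.0, 3.0],
                    [4.0, 5.0, 6.0]])
print(numpy.mean(tiny, axis=0))  # down each column -> [2.5, 3.5, 4.5]
print(numpy.mean(tiny, axis=1))  # across each row  -> [2.0, 5.0]
```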
1,940
Given the following text description, write Python code to implement the functionality described below step by step Description: Using Jupyter notebook for interactive development url Step1: PYTRAJ Step2: Compute multiple dihedrals Step3: get help? Step4: Protein/DNA/RNA viewer in notebook Written in Python/Javascript super light (~3 MB) super easy to install (pip install nglview)
Python Code: import warnings warnings.filterwarnings("ignore", category=DeprecationWarning) warnings.filterwarnings("ignore", category=UserWarning) import parmed as pmd x = pmd.load_file('tz2.pdb') [res.name for res in x.residues] [atom.name for atom in x.residues[0]] Explanation: Using Jupyter notebook for interactive development url: http://jupyter.org/ How to run this notebook? Click Cell and then Run All How to run it online? mybinder.org/repo/hainm/notebook-pytraj See also protein viewer example Install ```bash conda install parmed -c ambermd # Python package for topology editing and force field development conda install pytraj-dev -c ambermd # Python interface for cpptraj (MD trajectory data analysis) conda install pysander -c ambermd # Python interface for SANDER all above will be also available in AMBER16 release (next few months) conda install nglview -c ambermd # Protein/DNA/RAN viewer in notebook notebook conda install jupyter notebook ``` ParmEd: Cross-program parameter and topology file editor and molecular mechanical simulator engine. url: https://github.com/ParmEd/ParmEd (AMBER16) End of explanation import pytraj as pt traj = pt.load('tz2.nc', 'tz2.parm7') distances = pt.distances(traj, ':1 :12', dtype='dataframe') distances.head() %matplotlib inline distances.hist() Explanation: PYTRAJ: Interactive data analysis for molecular dynamics simulations url: https://github.com/Amber-MD/pytraj (AMBER 16) Compute distances and plot End of explanation dihedrals = pt.multidihedral(traj, resrange='1-3', dtype='dataframe') dihedrals.head(3) # show only first 3 snapshots %matplotlib inline from matplotlib import pyplot as plt plt.plot(dihedrals['phi_2'], dihedrals['psi_2'], '-bo', linewidth=0) plt.xlim([-180, 180]) plt.ylim([-180, 180]) Explanation: Compute multiple dihedrals End of explanation help(pt.multidihedral) Explanation: get help? End of explanation import warnings warnings.filterwarnings('ignore') import nglview as nv view = nv.show_pytraj(traj) view view.representations = [] view.add_representation('cartoon', color='residueindex') view.add_representation('licorice') t0 = pt.fetch_pdb('3pqr') view0 = pt.view.to_nglview(t0) view0 view0.representations = [] view0.add_representation('cartoon', selection='protein', color='residueindex') view0.add_representation('surface', selection='protein', opacity='0.2') Explanation: Protein/DNA/RNA viewer in notebook Written in Python/Javascript super light (~3 MB) super easy to install (pip install nglview) End of explanation
1,941
Given the following text problem statement, write Python code to implement the functionality described below in problem statement Problem: I have my data in a pandas DataFrame, and it looks like the following:
Problem: import pandas as pd df = pd.DataFrame({'cat': ['A', 'B', 'C'], 'val1': [7, 10, 5], 'val2': [10, 2, 15], 'val3': [0, 1, 6], 'val4': [19, 14, 16]}) def g(df): df = df.set_index('cat') res = df.div(df.sum(axis=0), axis=1) return res.reset_index() df = g(df.copy())
1,942
Given the following text description, write Python code to implement the functionality described below step by step Description: Scipy Python's favorite library for scientific computing. Scipy modules are viewed as the equivalent of Matlab's standard toolboxes Scikit modules are viewed as the equivalent of Matlab's external toolboxes We learn scipy by inspecting a few examples Step1: scipy.signal and scipy.fftpack Step2: scipy.optimize Step3: scipy.interpolate Step4: scipy.integrate Step5: scipy.ndimage - Image processing This module is useful for containing functions for multidimensional image manipulation. It mainly contains filters, interpolation and morphology functions. https
Python Code: from sklearn.datasets import load_iris iris = load_iris() print(iris.feature_names, iris.target_names) print(iris.data.shape) #print(iris.DESCR) from scipy import linalg # perform SVD A = iris.data U, s, V = linalg.svd(A) print("U.shape, V.shape, s.shape: ", U.shape, V.shape, s.shape) print("Singular values:", s) #colums really orthogonal, see? print(V[:,0].dot(V[:,1])) print(U[:,1].dot(U[:,99])) # Evaluate the model, wow so close, so close S = linalg.diagsvd(s, 150, 4) print(S[:6, :]) A_transformed = np.dot(U, np.dot(S, V)) print("Closeness test: ", np.allclose(A, A_transformed)) %matplotlib inline import matplotlib.pyplot as plt import numpy as np fig, axes = plt.subplots(1, 2, figsize=(15,5)) # fig = plt.figure() ax1 = axes[0] colors = np.array(['blue','red','black']) labels = np.array(['setosa','versicolour','verginica']) ax1.scatter(U[:,0], U[:,1], color = colors[iris.target]) ax1.set_xlabel("First singular vector") ax1.set_ylabel("Second singular vector") ax1.legend() ax2 = axes[1] colors = np.array(['blue','red','black']) labels = np.array(['setosa','versicolour','verginica']) ax2.scatter(A[:,0], A[:,1], color = colors[iris.target]) ax2.set_xlabel("First feature vector") ax2.set_ylabel("Second feature vector") ax2.legend() Explanation: Scipy Python's favorite library for scientific computing. Scipy modules are viewed as the equivalent of Matlab's standard toolboxes Scikit modules are viewed as the equivalent of Matlab's external toolboxes We learn scipy by inspecting a few examples: singular value decomposition, with scipy.linalg scipy.signal and scipy.fftpack: Signal theory scipy.optimize: Local and global optimization, fitting and root finding scipy.interpolate: Cubic interpolation scipy.integrate: Integration and ODE solvers scipy.ndimage - Image processing Further reading: - https://docs.scipy.org/doc/numpy/user/basics.html - https://scipy-lectures.github.io/index.html - served as my main inspiration source for this chapter. - http://docs.scipy.org/doc/scipy-0.15.1/reference/ - The Scipy reference guide, containing a very good tutorial for each of the libraries. - http://www.scipy.org/topical-software.html This page is containing links to some of the most common Python modules. However it is far from complete, for example PIL, a library commonly used in image processing in Python, is not listed. scipy.linalg - Singuar Value Decomposition http://docs.scipy.org/doc/scipy/reference/linalg.html solving linear equations, solving eigenvalues problems and matrix factorizations. SVD is a matrix factorization problem: $X = U S V$ the columns of U are orthogonal (left singular vectors) the columns of V are orthogonal (right singluar vectors) S is a diagonal matrix (singular values). SVD is the core mechanic behind PCA In the example below, most of the variation in the dataset is explained by the first two singular values, corresponding to the first two features. Obs: - scipy.linalg.orth(A) - uses SVD to find an orthonormal basis for A. End of explanation %matplotlib inline import numpy as np from scipy import fftpack import matplotlib.pyplot as plt time_step = 0.22 period = 5. 
time_vec = np.arange(0, 20, time_step) sig = np.sin(2 * np.pi / period * time_vec) + 0.5 * np.random.randn(time_vec.size) from scipy import fftpack #print(sig.size) sample_freq = fftpack.fftfreq(sig.size, d=time_step) sig_fft = fftpack.fft(sig) pidxs = np.where(sample_freq > 0) freqs, power = sample_freq[pidxs], np.abs(sig_fft)[pidxs] freq = freqs[power.argmax()] print("Determined frequency:",freq) sig_fft[np.abs(sample_freq) > freq] = 0 main_sig = fftpack.ifft(sig_fft)#Discrete inverse Fourier transform fig = plt.figure() ax1 = fig.add_subplot(311) ax1.plot(time_vec,sig) ax1.set_title('Signal') ax2 = fig.add_subplot(312) ax2.plot(freqs, power) ax2.set_xlabel('Frequency [Hz]') ax2.set_ylabel('power') ax2.set_title('Peak frequency') ax3 = fig.add_subplot(313) ax3.plot(time_vec,main_sig) ax1.set_title('Cleaned signal') Explanation: scipy.signal and scipy.fftpack: Signal theory Signal processing is useful in order to interpret the data of many measuring instruments, especially if there is a time delayed response. We are performing a simple example, but for those that want to learn more applications of Python for signal processing I reccomend a number of online IPython courses. A small example would be a noisy signal whose frequency is unknown to the observer, who only knows the sampling time step. The signal is supposed to come from a real function so the Fourier transform will be symmetric. The scipy.fftpack.fftfreq() function will generate the sampling frequencies and scipy.fftpack.fft() will compute the fast Fourier transform: End of explanation import numpy as np import scipy.optimize as optimize def f(x): # The rosenbrock function return .5*(1 - x[0])**2 + (x[1] - x[0]**2)**2 def fprime(x): return np.array((-2*.5*(1 - x[0]) - 4*x[0]*(x[1] - x[0]**2), 2*(x[1] - x[0]**2))) print optimize.fmin_ncg(f, [2, 2], fprime=fprime) def hessian(x): # Computed with sympy return np.array(((1 - 4*x[1] + 12*x[0]**2, -4*x[0]), (-4*x[0], 2))) print optimize.fmin_ncg(f, [2, 2], fprime=fprime, fhess=hessian) %matplotlib inline from matplotlib import cm import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D fig = plt.figure() ax = fig.gca(projection='3d') X = np.arange(-5, 5, 0.25) Y = np.arange(-5, 5, 0.25) X, Y = np.meshgrid(X, Y) Z = .5*(1 - X)**2 + (Y - X**2)**2 surf = ax.plot_surface(X, Y, Z, rstride=1, cstride=1, cmap=cm.coolwarm, linewidth=0, antialiased=False) ax.set_zlim(-1000.01, 1000.01) fig.colorbar(surf, shrink=0.5, aspect=5) plt.show() Explanation: scipy.optimize: Local and global optimization, fitting and root finding In the statistics chapter we use this package for line fitting. We also have a self standing optimization chapter where we will get back to this module. We estimated the parameters of a function by performing an error minimization. An optimization problem complexity is dependent on several factors: - Do you intend a local or a global optimization? - Is the function linear or nonlinear? - Is the function convex or not? - Can a gradient be computed? - Can the Hessian matrix be computed? - Do we perform optimization under constraints? Scipy does not cover all solvers efficiently but there are several Python packages specialized for certain classes of optimization problems. In general though heavy optimization is solved with dedicated programs, many of whom have language bindings for Python. To exemplify, we use Newton's optimization to find the minima of a nonlinear function. 
(also covered in the optimization chapter) End of explanation %matplotlib inline import numpy as np from scipy.interpolate import interp1d import pylab as pl measured_time = np.linspace(0, 1, 10) noise = 0.1 * np.random.randn(10) measures = np.sin(2 * np.pi * measured_time) + noise linear_interp = interp1d(measured_time, measures) computed_time = np.linspace(0, 1, 50) linear_results = linear_interp(computed_time) cubic_interp = interp1d(measured_time, measures, kind='cubic') cubic_results = cubic_interp(computed_time) pl.plot(measured_time, measures, 'o', ms=6, label='measures') pl.plot(computed_time, linear_results, label='linear interp') pl.plot(computed_time, cubic_results, label='cubic interp') pl.legend() Explanation: scipy.interpolate: Cubic interpolation Interpolation is useful when we have sampled a function but want to approximate its values on different points. A well known class of interpolation functions are the splines, most commonly three spline curves are combined in order to interpolate a smooth curved line between two datapoints. End of explanation %matplotlib inline #from scipy import * import scipy.integrate as integrate ''' Slightly modified from a sample code generated by this program, that formulates a solver for different cases of enzime reactions: http://code.google.com/p/kinpy/ ## Reaction ## #Michaelis-Menten enzyme kinetics. E + S <-> ES ES <-> E + P ## Mapping ## E 0 -1*v_0(y[1], y[0], y[2]) +1*v_1(y[3], y[0], y[2]) S 1 -1*v_0(y[1], y[0], y[2]) ES 2 +1*v_0(y[1], y[0], y[2]) -1*v_1(y[3], y[0], y[2]) P 3 +1*v_1(y[3], y[0], y[2]) ''' dy = lambda y, t: array([\ -1*v_0(y[1], y[0], y[2]) +1*v_1(y[3], y[0], y[2]),\ -1*v_0(y[1], y[0], y[2]),\ +1*v_0(y[1], y[0], y[2]) -1*v_1(y[3], y[0], y[2]),\ +1*v_1(y[3], y[0], y[2])\ ]) #Initial concentrations: y0 = array([\ #E 0.6,\ #S 1.2,\ #ES 3.0,\ #P 0.2,\ ]) #E + S <-> ES v_0 = lambda S, E, ES : k0 * E**1 * S**1 - k0r * ES**1 k0 = 1.2 k0r = 1.5 #ES <-> E + P v_1 = lambda P, E, ES : k1 * ES**1 - k1r * E**1 * P**1 k1 = 0.9 k1r = 1.9 t = arange(0, 10, 0.01) Y = integrate.odeint(dy, y0, t) import pylab as pl pl.plot(t, Y, label='y') Explanation: scipy.integrate: Integration and ODE solvers This submodule is useful for summing up function values over intervals (integration) and solving ordinary differential equations. Partial differential equations are not covered and require other Python packages. As a quick example, we solve a case of Michaelis-Menten enzime kinetics. End of explanation %matplotlib inline import numpy as np from scipy import ndimage from scipy import misc import matplotlib.pyplot as plt import pylab as pl koon = misc.face(gray=True) #from scipy import misc #face = misc.face(gray=True) plt.imshow(koon) plt.show() blurred_koon = ndimage.gaussian_filter(koon, sigma=5) plt.imshow(blurred_koon) plt.show() noisy_koon = np.copy(koon).astype(np.float) noisy_koon += koon.std()*np.random.standard_normal(koon.shape) plt.imshow(noisy_koon) plt.show() from scipy import signal wiener_koon = signal.wiener(blurred_koon, (5,5)) plt.imshow(wiener_koon) plt.show() Explanation: scipy.ndimage - Image processing This module is useful for containing functions for multidimensional image manipulation. It mainly contains filters, interpolation and morphology functions. https://scipy-lectures.org/advanced/image_processing/index.html https://scipy-lectures.org/packages/scikit-image/index.html End of explanation
1,943
Given the following text description, write Python code to implement the functionality described below step by step Description: Running Code First and foremost, the Jupyter Notebook is an interactive environment for writing and running code. The notebook is capable of running code in a wide range of languages. However, each notebook is associated with a single kernel. This notebook is associated with the IPython kernel, therefor runs Python code. Code cells allow you to enter and run code Run a code cell using Shift-Enter or pressing the <button class='btn btn-default btn-xs'><i class="icon-step-forward fa fa-step-forward"></i></button> button in the toolbar above Step1: There are two other keyboard shortcuts for running code Step2: If the Kernel dies you will be prompted to restart it. Here we call the low-level system libc.time routine with the wrong argument via ctypes to segfault the Python interpreter Step3: Cell menu The "Cell" menu has a number of menu items for running code in different ways. These includes Step4: Output is asynchronous All output is displayed asynchronously as it is generated in the Kernel. If you execute the next cell, you will see the output one piece at a time, not all at the end. Step5: Large outputs To better handle large outputs, the output area can be collapsed. Run the following cell and then single- or double- click on the active area to the left of the output Step6: Beyond a certain point, output will scroll automatically
Python Code: a = 10 print(a) Explanation: Running Code First and foremost, the Jupyter Notebook is an interactive environment for writing and running code. The notebook is capable of running code in a wide range of languages. However, each notebook is associated with a single kernel. This notebook is associated with the IPython kernel, therefor runs Python code. Code cells allow you to enter and run code Run a code cell using Shift-Enter or pressing the <button class='btn btn-default btn-xs'><i class="icon-step-forward fa fa-step-forward"></i></button> button in the toolbar above: End of explanation import time time.sleep(10) Explanation: There are two other keyboard shortcuts for running code: Alt-Enter runs the current cell and inserts a new one below. Ctrl-Enter run the current cell and enters command mode. Managing the Kernel Code is run in a separate process called the Kernel. The Kernel can be interrupted or restarted. Try running the following cell and then hit the <button class='btn btn-default btn-xs'><i class='icon-stop fa fa-stop'></i></button> button in the toolbar above. End of explanation import sys from ctypes import CDLL # This will crash a Linux or Mac system # equivalent calls can be made on Windows dll = 'dylib' if sys.platform == 'darwin' else 'so.6' libc = CDLL("libc.%s" % dll) libc.time(-1) # BOOM!! Explanation: If the Kernel dies you will be prompted to restart it. Here we call the low-level system libc.time routine with the wrong argument via ctypes to segfault the Python interpreter: End of explanation print("hi, stdout") from __future__ import print_function print('hi, stderr', file=sys.stderr) Explanation: Cell menu The "Cell" menu has a number of menu items for running code in different ways. These includes: Run and Select Below Run and Insert Below Run All Run All Above Run All Below Restarting the kernels The kernel maintains the state of a notebook's computations. You can reset this state by restarting the kernel. This is done by clicking on the <button class='btn btn-default btn-xs'><i class='fa fa-repeat icon-repeat'></i></button> in the toolbar above. sys.stdout and sys.stderr The stdout and stderr streams are displayed as text in the output area. End of explanation import time, sys for i in range(8): print(i) time.sleep(0.5) Explanation: Output is asynchronous All output is displayed asynchronously as it is generated in the Kernel. If you execute the next cell, you will see the output one piece at a time, not all at the end. End of explanation for i in range(50): print(i) Explanation: Large outputs To better handle large outputs, the output area can be collapsed. Run the following cell and then single- or double- click on the active area to the left of the output: End of explanation for i in range(500): print(2**i - 1) Explanation: Beyond a certain point, output will scroll automatically: End of explanation
1,944
Given the following text description, write Python code to implement the functionality described below step by step Description: Session 3 Step1: The functions in np.random return one dimensional arrays. You can check this with .shape and change it with .reshape() Step2: You can perform basic arithmetic on arrays, using scalars or other arrays. For example, given the following two arrays Step3: pandas Pandas DataFrame objects are one of the key data types used in veneer-py. A DataFrame is a tabular, two-dimensional data structure, which can be indexed in a range of ways, including a date and date/time index. DataFrames are aranged in named columns, each with a particular type (eg double, string, integer) and in this sense they are more flexible than numpy arrays. Each column in a DataFrame is a pandas Series, which is useful in its own right. Step4: Pandas DataFrames have a tabular presentation in the Jupyter notebook. It's also possible to slice subsets of rows Step5: You can quickly get stats for each column in a DataFrame Step6: You can get the same stats along rows Step7: Jupyter and visualisation It is worth spending some time exploring the capabilities of the Jupyter notebook. In terms of managing your work Step8: Typically, you'll create a single plot from a single cell Step9: ... But the matplotlib subplots functionality allows you to create matrices of plots.
Python Code: import numpy as np random = np.random.normal(size=100) random Explanation: Session 3: Python Data Analysis World Python has a very strong community in the data analytics and scientific computing world. There are a lot of great Python packages to support different analyses, but there are a few very key packages: Workhorses - numpy, pandas, scipy Spatial tools - shapely, ogr/gdal, geopandas, etc Environments and Visualisation - Jupiter, matplotlib You will have access to all of these after installing Anaconda and installing the additional packages described in Session 0. (The additional packages relate to spatial analysis - you can skip them if you don't need them) Where possible, veneer-py functions will accept and return objects that are directly usable by these packages. In particular, time series and other tabular data structures are returned as pandas DataFrame objects. This session gives very brief introductions to most of these packages. In most cases, the links in Session 0 are relevant for more information. numpy numpy represents multi-dimensional arrays and operations on those arrays. The arrays are typed (eg float, double precision float, integer, etc) and are indexed by integers (one per dimension). In veneer-py, we use pandas Data Frames more than numpy arrays, but the basics of the array operations in numpy are the foundations on which pandas is built. You can create an array of random numbers using functions under the np.random namespace. The following example creates 100 random floats using a normal distribution Note: numpy is typically imported as np. End of explanation random.shape threed = random.reshape(10,5,2) threed Explanation: The functions in np.random return one dimensional arrays. You can check this with .shape and change it with .reshape() End of explanation a1 = np.array([20.0,12.0,77.0,77.0]) a2 = np.array([25.0,6.0,80.0,80.0]) # You can add: a1 + a2 # Multiply (element wise): a1 * a2 # Compute a dot product a1.dot(a2) # You can also perform matrix operations # First tell numpy that your array is a matrix, # Then transpose to compatible shapes # Then multiply np.matrix(a1).transpose() * np.matrix(a2) Explanation: You can perform basic arithmetic on arrays, using scalars or other arrays. For example, given the following two arrays End of explanation import veneer v = veneer.Veneer(port=9876) downstream_flow_vol = v.retrieve_multiple_time_series(criteria={'RecordingVariable':'Downstream Flow Volume'}) Explanation: pandas Pandas DataFrame objects are one of the key data types used in veneer-py. A DataFrame is a tabular, two-dimensional data structure, which can be indexed in a range of ways, including a date and date/time index. DataFrames are aranged in named columns, each with a particular type (eg double, string, integer) and in this sense they are more flexible than numpy arrays. Each column in a DataFrame is a pandas Series, which is useful in its own right. End of explanation downstream_flow_vol[0:10] # <-- Look at first 10 rows (timesteps) downstream_flow_vol[0::3000] # <-- Look at every 3000th timestep Explanation: Pandas DataFrames have a tabular presentation in the Jupyter notebook. 
It's also possible to slice subsets of rows End of explanation downstream_flow_vol.mean() Explanation: You can quickly get stats for each column in a DataFrame End of explanation downstream_flow_vol.mean(axis=1) Explanation: You can get the same stats along rows: End of explanation import matplotlib.pyplot as plt %matplotlib inline Explanation: Jupyter and visualisation It is worth spending some time exploring the capabilities of the Jupyter notebook. In terms of managing your work: The Edit and Insert menus have useful functions for rearranging cells, creating new cells, etc The Cell menu has functions for running all cells in a notebook, all cells above a particular point and all cells below a point. The Kernel menu controls the execution and lifecycle of the Python session. (In this instance, Kernel refers to an instance of an IPython session that is connected to the notebook. The Restart command clears all variables - even though earlier output is still visible in the notebook) At this stage, most visualisation in Python notebooks is handled by matplotlib. Matplotlib is powerful, but the learning curve can be steep. End of explanation plt.hist(np.random.normal(size=500)) Explanation: Typically, you'll create a single plot from a single cell End of explanation methods=[np.random.uniform,np.random.normal,np.random.exponential] n=len(methods) # Create n sets of random numbers, where n is the number of methods specified random_sets = [method(size=1000) for method in methods] for i in range(n): # Arrange subplots 2 rows x 3 columns # Access the i'th column on the first row ax = plt.subplot(2,3,i+1) # Plot the random numbers ax.plot(random_sets[i]) # Access the i'th column on the second row ax = plt.subplot(2,3,n+i+1) # Plot a histogram of the corresponding numbers ax.hist(random_sets[i]) Explanation: ... But the matplotlib subplots functionality allows you to create matrices of plots. End of explanation
1,945
Given the following text description, write Python code to implement the functionality described below step by step Description: Astronomy I & II sessions [Astropy Basics] Constants and units Celestial Coordinates Time and dates FITS files FITS Tables (and other common formats) Spectra Images & WCS [Astroquery] Query to CVS/Vizier catalogues [Photutils] Source detection (DAO vs SExtractor) Modelling Step1: Astropy Basics In this section, we will explain the basics of what can be done with Astropy, such as working with internal units, opening FITS files, tables, spectra and WCS. Constants and Units Astropy provides a large amount of astronomical constants... but warning Step2: By default astropy constants uses S.I. units... Step3: It can be transformed to any units... Step4: You can also define your own constant using astropy Units Step5: Here we can compute earth's orbit speed using astropy constants... Step6: <span style="color Step7: Celestial Coordinates The simplest coordinate we can define is a single point in the sky, by default in the ICRS frame. Step8: Definition It can be defined in almost any format used in astronomy (and there are many, as usual...) all representing the same location. Step9: Astropy also has a significantly large list of sources than can be retrieved by its name Step10: Transformation We can easily convert to other coordinate systems, like the galactic... Step11: Or even get what is the closest constellation to the object, very useful for astronomers as you know... Step12: Distances Coordinates allow also to define distances Step13: If we define one or more coordinates we can compute the distance between the two objects Step14: Catalogue of sources A catalogue of positions can also be created using numpy arrays Step15: Time and Date The astropy.time package provides functionality for manipulating times and dates used in astronomy, such as UTC or MJD. Definition Step16: Default format is ISOT and scale UTC, but it can be set to others. Step17: Format conversion Step18: Timezones When a Time object is constructed from a timezone-aware datetime, no timezone information is saved in the Time object. However, Time objects can be converted to timezone-aware datetime objects Step19: FITS files Tables in FITS We will read a FITS table containing a catalogue, in this case a custom collection of Gaia stars created with CosmoHub. With two instructions I can open the fits file and preview the content of it. For this file we find a list of two units, a primary HDU and the binary table Step20: To access the primary HDU, we can directly call it by its name, and preview the header like this Step21: As the second extension has no name, we can access to it from its index Step22: The data is contained in the Binary table and can be accessed very similarly to a numpy/pandas table Step23: Tables not in FITS Even FITS is widely used in astronomy, there are a several other widely used formats for storing table and catalogue data. The module astropy.io.ascii provides read capability for most of them. Find the list of supported formats in astropy's documentation Step24: The read method tries to identify the file format automatically, but it can be specified in the format input parameter Step25: Any catalogue can then be exported (in this case to screen) to any format Step26: Spectra data Let's read a fits file containing spectra for a QSO observed with SDSS. First we want to open the fits and inspect what it's in there... 
Step27: The coadd seems to have the coadded spectra from several observations. Let's now inspect what columns we get in the spectra Step28: We can now have a look at the spectra data itself, using a scatter plot. Step29: The previous spectra seems to have some bad measurements, but we can make use of the OR mask included to discard those measurements. To better visualize the spectra file we will apply a gaussian filtering... Step30: Another information that is included in this spectra file is the emission lines measured by SDSS. We can inspect the columns of that extension Step31: <span style="color Step32: FITS images and WCS Step33: The image header contains the World Coordinate System (WCS) information, stored in a set of keywords (CD, CRVAL, CRPIX and optionally some distortion parameters). The WCS provides the projection of the image in the sky, allowing to work with pixels and sky coordinates. Step34: We can load the WCS of the image directly like this Step35: Once loaded the WCS, we can retrieve the corners of the image footprint Step36: We can infer its pixel scale from the CD matrix Step37: It is also useful to know the coordinates of a specific pixel in the image Step38: In the same way, sky coordinates can be transformed to pixel positions in the image. Step39: Note that the function we used is called all_XXXXXX. This is the method to use all distortion information (such as SIP, TPV,...). To use only the WCS without the distortion, use the equivalent method wcs_XXXXXXX. <span style="color Step40: Using WCS axes X and Y in the plot correspond to the X and Y of the image. WCS axes allows you to plot sky coordinates without remapping the image pixels
Python Code: %matplotlib inline import numpy as np import math import matplotlib.pyplot as plt import seaborn plt.rcParams['figure.figsize'] = (12, 8) plt.rcParams['font.size'] = 14 plt.rcParams['lines.linewidth'] = 2 plt.rcParams['xtick.labelsize'] = 13 plt.rcParams['ytick.labelsize'] = 13 plt.rcParams['axes.titlesize'] = 14 plt.rcParams['legend.fontsize'] = 13 Explanation: Astronomy I & II sessions [Astropy Basics] Constants and units Celestial Coordinates Time and dates FITS files FITS Tables (and other common formats) Spectra Images & WCS [Astroquery] Query to CVS/Vizier catalogues [Photutils] Source detection (DAO vs SExtractor) Modelling: Measuring the PSF FHWM with a moffat profile Background modelling and aperture Photometry [Pyephem/Astroplan] Earth, Time and Fixed Bodies Sun rising, setting and dark time Night planning: objects visibility and distance to moon Other interesting packages not covered in this workshop Astroscrappy: Cosmic ray detection / rejection / masking Astropy-Cosmology: Cosmology utility functions Astropy-ccdproc: Data reduction of CCD images Sewpy: Sextractor wrapper Naima:Derivation of non-thermal particle distributions through MCMC spectral fitting Reproject: Astronomical image reprojection gammapy: Gamma-ray astronomy sncosmo: Supernova cosmology End of explanation from astropy import constants as const from astropy import units as u Explanation: Astropy Basics In this section, we will explain the basics of what can be done with Astropy, such as working with internal units, opening FITS files, tables, spectra and WCS. Constants and Units Astropy provides a large amount of astronomical constants... but warning: The use of units can slow down the processing of a large data set. End of explanation print(const.c) Explanation: By default astropy constants uses S.I. units... End of explanation const.c.to('km/s') const.c.to('pc/yr') Explanation: It can be transformed to any units... End of explanation my_emission_line_flux = 12.32 * u.erg / u.cm ** 2 / u.s my_emission_line_flux Explanation: You can also define your own constant using astropy Units End of explanation speed_of_earth = const.au * 2 * math.pi / u.yr speed_of_earth.to('km/s') Explanation: Here we can compute earth's orbit speed using astropy constants... End of explanation # %load -r 2-3 solutions/10_Astronomy.py Explanation: <span style="color:blue">Exercise Astropy 1:</span> Working with astronomy constants Compute (approximately) the speed of the earth's orbit using the gravitational force between the two. End of explanation from astropy.coordinates import SkyCoord c = SkyCoord(ra=10.625*u.degree, dec=41.2*u.degree, frame='icrs') Explanation: Celestial Coordinates The simplest coordinate we can define is a single point in the sky, by default in the ICRS frame. End of explanation c = SkyCoord(10.625, 41.2, frame='icrs', unit='deg') c = SkyCoord('00h42m30s', '+41d12m00s', frame='icrs') c = SkyCoord('00h42.5m', '+41d12m') c = SkyCoord('00 42 30 +41 12 00', unit=(u.hourangle, u.deg)) c = SkyCoord('00:42.5 +41:12', unit=(u.hourangle, u.deg)) c Explanation: Definition It can be defined in almost any format used in astronomy (and there are many, as usual...) all representing the same location. 
End of explanation a_big_blue_star = SkyCoord.from_name('rigel') print (a_big_blue_star.ra, a_big_blue_star.dec) Explanation: Astropy also has a significantly large list of sources than can be retrieved by its name: End of explanation c.galactic Explanation: Transformation We can easily convert to other coordinate systems, like the galactic... End of explanation c.get_constellation() Explanation: Or even get what is the closest constellation to the object, very useful for astronomers as you know... End of explanation c = SkyCoord(ra=10.68458*u.degree, dec=41.26917*u.degree, distance=770*u.kpc) print (c.cartesian.x, c.cartesian.y, c.cartesian.z) Explanation: Distances Coordinates allow also to define distances: End of explanation c1 = SkyCoord(ra=10*u.degree, dec=9*u.degree, distance=10*u.pc, frame='icrs') c2 = SkyCoord(ra=11*u.degree, dec=10*u.degree, distance=11.5*u.pc, frame='icrs') print ("Angular Separation: %s" % c1.separation(c2)) print ("Distance between objects: %s" % c1.separation_3d(c2)) Explanation: If we define one or more coordinates we can compute the distance between the two objects: End of explanation ras = np.array([0-.7, 21.5, 120.9]) * u.deg decs = np.array([4.5, -5.2, 6.3]) * u.deg catalogue = SkyCoord(ras, decs, frame='icrs') catalogue.galactic Explanation: Catalogue of sources A catalogue of positions can also be created using numpy arrays: End of explanation from astropy.time import Time times = ['2017-09-13T00:00:00', '2017-09-15T11:20:15.123456789',] t1 = Time(times) t1 Explanation: Time and Date The astropy.time package provides functionality for manipulating times and dates used in astronomy, such as UTC or MJD. Definition End of explanation times = [58009, 58011.47239726] t2 = Time(times, format='mjd', scale='tai') t2 Explanation: Default format is ISOT and scale UTC, but it can be set to others. End of explanation print ("To julian date: %s" % t1[0].jd) print ("To modified julian date: %s" % t1[0].mjd) print ("To FITS: %s" % t1[0].fits) print ("To GPS: %s" % t1[0].gps) print ("To Bessel Epoch Year: %s" % t1[0].byear_str) print ("To Julian Epoch Year: %s" % t1[0].jyear_str) Explanation: Format conversion End of explanation from datetime import datetime from astropy.time import Time, TimezoneInfo import astropy.units as u utc_plus_one_hour = TimezoneInfo(utc_offset=1*u.hour) dt_aware = datetime(2000, 1, 1, 0, 0, 0, tzinfo=utc_plus_one_hour) t = Time(dt_aware) # Loses timezone info, converts to UTC print(t) # will return UTC print(t.to_datetime(timezone=utc_plus_one_hour)) # to timezone-aware datetime Explanation: Timezones When a Time object is constructed from a timezone-aware datetime, no timezone information is saved in the Time object. However, Time objects can be converted to timezone-aware datetime objects: End of explanation from astropy.io import fits gaia_hdulist = fits.open('../resources/cosmohub_catalogue.fits') gaia_hdulist.info() Explanation: FITS files Tables in FITS We will read a FITS table containing a catalogue, in this case a custom collection of Gaia stars created with CosmoHub. With two instructions I can open the fits file and preview the content of it. 
For this file we find a list of two units, a primary HDU and the binary table: End of explanation gaia_hdulist['PRIMARY'].header Explanation: To access the primary HDU, we can directly call it by its name, and preview the header like this: End of explanation gaia_hdulist[1].header Explanation: As the second extension has no name, we can access to it from its index: End of explanation plt.scatter(gaia_hdulist[1].data['ra'], gaia_hdulist[1].data['dec']) plt.xlabel('Right Ascension (deg)') plt.ylabel('Declination (deg)') Explanation: The data is contained in the Binary table and can be accessed very similarly to a numpy/pandas table: End of explanation from astropy.io import ascii data = ascii.read("../resources/sources.dat") print(data) Explanation: Tables not in FITS Even FITS is widely used in astronomy, there are a several other widely used formats for storing table and catalogue data. The module astropy.io.ascii provides read capability for most of them. Find the list of supported formats in astropy's documentation: http://docs.astropy.org/en/stable/io/ascii/index.html#supported-formats End of explanation data = ascii.read("../resources/sources.csv", format='csv') print(data) Explanation: The read method tries to identify the file format automatically, but it can be specified in the format input parameter: End of explanation import sys ascii.write(data, sys.stdout, format='latex') Explanation: Any catalogue can then be exported (in this case to screen) to any format: End of explanation sdss_qso_hdulist = fits.open('../resources/sdss_qso_spec-0501-52235-0313.fits') sdss_qso_hdulist.info() Explanation: Spectra data Let's read a fits file containing spectra for a QSO observed with SDSS. First we want to open the fits and inspect what it's in there... End of explanation sdss_qso_hdulist['COADD'].columns Explanation: The coadd seems to have the coadded spectra from several observations. Let's now inspect what columns we get in the spectra: End of explanation plt.scatter(10**sdss_qso_hdulist['COADD'].data['loglam'], sdss_qso_hdulist['COADD'].data['flux'], s=2) plt.xlabel('Wavelengths (Angstroms)') plt.ylabel(r'f$\lambda$ (erg/s/cm2/A)') Explanation: We can now have a look at the spectra data itself, using a scatter plot. End of explanation from scipy.ndimage.filters import gaussian_filter from scipy.interpolate import interp1d y_values_masked = np.ma.masked_where(sdss_qso_hdulist['COADD'].data['or_mask'], sdss_qso_hdulist['COADD'].data['flux']) x_values_masked = np.ma.masked_where(sdss_qso_hdulist['COADD'].data['or_mask'], sdss_qso_hdulist['COADD'].data['loglam']) plt.scatter(10**x_values_masked, y_values_masked, s=2, label='masked') plt.plot(10**sdss_qso_hdulist['COADD'].data['loglam'], gaussian_filter(y_values_masked, sigma=16), color='orange', linewidth=3, label='masked and filtered') plt.xlabel('Wavelengths (Angstroms)') plt.ylabel(r'f$\lambda$ (erg/s/cm2/A)') plt.legend() Explanation: The previous spectra seems to have some bad measurements, but we can make use of the OR mask included to discard those measurements. To better visualize the spectra file we will apply a gaussian filtering... End of explanation sdss_qso_hdulist['SPZLINE'].data.columns Explanation: Another information that is included in this spectra file is the emission lines measured by SDSS. 
We can inspect the columns of that extension: End of explanation # %load -r 7-18 solutions/10_Astronomy.py Explanation: <span style="color:blue">Exercise Astropy 2:</span> Working with spectra Display the emission lines available in the SPZLINE extension over the QSO spectra End of explanation hst_hdulist = fits.open('../resources/hst_656nmos.fits') hst_hdulist.info() plt.imshow(hst_hdulist['PRIMARY'].data) plt.xlabel('X pixels') plt.ylabel('Y pixels') plt.colorbar() from astropy.visualization import ZScaleInterval norm = ZScaleInterval() vmin, vmax = norm.get_limits(hst_hdulist['PRIMARY'].data) plt.imshow(hst_hdulist[0].data, vmin=vmin, vmax=vmax, interpolation='none', origin='lower') plt.xlabel('X pixels') plt.ylabel('Y pixels') plt.colorbar() Explanation: FITS images and WCS End of explanation print ("WCS projection type:") print (hst_hdulist['PRIMARY'].header['CTYPE1']) print (hst_hdulist['PRIMARY'].header['CTYPE2']) print ("WCS reference values:") print (hst_hdulist['PRIMARY'].header['CRVAL1']) print (hst_hdulist['PRIMARY'].header['CRVAL2']) print ("WCS reference pixel:") print (hst_hdulist['PRIMARY'].header['CRPIX1']) print (hst_hdulist['PRIMARY'].header['CRPIX2']) print ("WCS pixel to sky matrix:") print (hst_hdulist['PRIMARY'].header['CD1_1']) print (hst_hdulist['PRIMARY'].header['CD1_2']) print (hst_hdulist['PRIMARY'].header['CD2_1']) print (hst_hdulist['PRIMARY'].header['CD2_2']) Explanation: The image header contains the World Coordinate System (WCS) information, stored in a set of keywords (CD, CRVAL, CRPIX and optionally some distortion parameters). The WCS provides the projection of the image in the sky, allowing to work with pixels and sky coordinates. End of explanation from astropy import wcs hst_image_wcs = wcs.WCS(hst_hdulist['PRIMARY'].header) hst_image_wcs.printwcs() Explanation: We can load the WCS of the image directly like this: End of explanation hst_image_wcs.calc_footprint() Explanation: Once loaded the WCS, we can retrieve the corners of the image footprint: End of explanation hst_pixelscale = np.mean(wcs.utils.proj_plane_pixel_scales(hst_image_wcs) * u.degree).to('arcsec') hst_pixelscale Explanation: We can infer its pixel scale from the CD matrix: End of explanation # Origin of the pixel coordinates convention: # Set 0 when first pixel is 0 (c/python-like) # Set 1 when first pixel is 1 (fortran-like) origin = 0 # convert the pixels lon, lat = hst_image_wcs.all_pix2world(20, 30, origin) print (lon, lat) Explanation: It is also useful to know the coordinates of a specific pixel in the image: End of explanation x, y = hst_image_wcs.all_world2pix(lon, lat, origin) print (x, y) Explanation: In the same way, sky coordinates can be transformed to pixel positions in the image. End of explanation # %load -r 22-38 solutions/10_Astronomy.py Explanation: Note that the function we used is called all_XXXXXX. This is the method to use all distortion information (such as SIP, TPV,...). To use only the WCS without the distortion, use the equivalent method wcs_XXXXXXX. <span style="color:blue">Exercise Astropy 3:</span> Plot sources on image Use tha gaia catalogue loaded previously and plot the stars over the HST image. TIP: a list of coordinates can be passed directly to the WCS function. 
End of explanation ax = plt.subplot(projection=hst_image_wcs) ax.imshow(hst_hdulist[0].data, vmin=vmin, vmax=vmax, origin='lower') overlay = ax.get_coords_overlay('fk5') overlay.grid(color='white', ls='dotted') overlay[0].set_axislabel('Right Ascension (J2000)') overlay[1].set_axislabel('Declination (J2000)') Explanation: Using WCS axes X and Y in the plot correspond to the X and Y of the image. WCS axes allows you to plot sky coordinates without remapping the image pixels: End of explanation
1,946
Given the following text description, write Python code to implement the functionality described below step by step Description: Object Detection Demo Welcome to the object detection inference walkthrough! This notebook will walk you step by step through the process of using a pre-trained model to detect objects in an image. Imports Step1: Env setup Step2: Object detection imports Here are the imports from the object detection module. Step3: Model preparation Variables Any model exported using the export_inference_graph.py tool can be loaded here simply by changing PATH_TO_CKPT to point to a new .pb file. By default we use an "SSD with Mobilenet" model here. See the detection model zoo for a list of other models that can be run out-of-the-box with varying speeds and accuracies. Step4: Download Model Step5: Load a (frozen) Tensorflow model into memory. Step6: Loading label map Label maps map indices to category names, so that when our convolution network predicts 5, we know that this corresponds to airplane. Here we use internal utility functions, but anything that returns a dictionary mapping integers to appropriate string labels would be fine Step7: Helper code Step8: Detection
Python Code: import numpy as np import os import six.moves.urllib as urllib import sys import tarfile import tensorflow as tf import zipfile from collections import defaultdict from io import StringIO from matplotlib import pyplot as plt from PIL import Image Explanation: Object Detection Demo Welcome to the object detection inference walkthrough! This notebook will walk you step by step through the process of using a pre-trained model to detect objects in an image. Imports End of explanation # This is needed to display the images. %matplotlib inline # This is needed since the notebook is stored in the object_detection folder. sys.path.append("..") sys.path Explanation: Env setup End of explanation from utils import label_map_util from utils import visualization_utils as vis_util Explanation: Object detection imports Here are the imports from the object detection module. End of explanation # What model to download. MODEL_NAME = 'ssd_mobilenet_v1_coco_11_06_2017' MODEL_FILE = MODEL_NAME + '.tar.gz' DOWNLOAD_BASE = 'http://download.tensorflow.org/models/object_detection/' # Path to frozen detection graph. This is the actual model that is used for the object detection. PATH_TO_CKPT = MODEL_NAME + '/frozen_inference_graph.pb' # List of the strings that is used to add correct label for each box. PATH_TO_LABELS = os.path.join('data', 'mscoco_label_map.pbtxt') NUM_CLASSES = 90 Explanation: Model preparation Variables Any model exported using the export_inference_graph.py tool can be loaded here simply by changing PATH_TO_CKPT to point to a new .pb file. By default we use an "SSD with Mobilenet" model here. See the detection model zoo for a list of other models that can be run out-of-the-box with varying speeds and accuracies. End of explanation opener = urllib.request.URLopener() opener.retrieve(DOWNLOAD_BASE + MODEL_FILE, MODEL_FILE) tar_file = tarfile.open(MODEL_FILE) for file in tar_file.getmembers(): file_name = os.path.basename(file.name) if 'frozen_inference_graph.pb' in file_name: tar_file.extract(file, os.getcwd()) Explanation: Download Model End of explanation detection_graph = tf.Graph() with detection_graph.as_default(): od_graph_def = tf.GraphDef() with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid: serialized_graph = fid.read() od_graph_def.ParseFromString(serialized_graph) tf.import_graph_def(od_graph_def, name='') Explanation: Load a (frozen) Tensorflow model into memory. End of explanation label_map = label_map_util.load_labelmap(PATH_TO_LABELS) categories = label_map_util.convert_label_map_to_categories(label_map, max_num_classes=NUM_CLASSES, use_display_name=True) category_index = label_map_util.create_category_index(categories) Explanation: Loading label map Label maps map indices to category names, so that when our convolution network predicts 5, we know that this corresponds to airplane. Here we use internal utility functions, but anything that returns a dictionary mapping integers to appropriate string labels would be fine End of explanation def load_image_into_numpy_array(image): (im_width, im_height) = image.size return np.array(image.getdata()).reshape( (im_height, im_width, 3)).astype(np.uint8) Explanation: Helper code End of explanation # For the sake of simplicity we will use only 2 images: # image1.jpg # image2.jpg # If you want to test the code with your images, just add path to the images to the TEST_IMAGE_PATHS. 
PATH_TO_TEST_IMAGES_DIR = 'test_images' TEST_IMAGE_PATHS = [ os.path.join(PATH_TO_TEST_IMAGES_DIR, 'image{}.jpg'.format(i)) for i in range(1, 3) ] # Size, in inches, of the output images. IMAGE_SIZE = (12, 8) with detection_graph.as_default(): with tf.Session(graph=detection_graph) as sess: for image_path in TEST_IMAGE_PATHS: image = Image.open(image_path) # the array based representation of the image will be used later in order to prepare the # result image with boxes and labels on it. image_np = load_image_into_numpy_array(image) # Expand dimensions since the model expects images to have shape: [1, None, None, 3] image_np_expanded = np.expand_dims(image_np, axis=0) image_tensor = detection_graph.get_tensor_by_name('image_tensor:0') # Each box represents a part of the image where a particular object was detected. boxes = detection_graph.get_tensor_by_name('detection_boxes:0') # Each score represent how level of confidence for each of the objects. # Score is shown on the result image, together with the class label. scores = detection_graph.get_tensor_by_name('detection_scores:0') classes = detection_graph.get_tensor_by_name('detection_classes:0') num_detections = detection_graph.get_tensor_by_name('num_detections:0') # Actual detection. (boxes, scores, classes, num_detections) = sess.run( [boxes, scores, classes, num_detections], feed_dict={image_tensor: image_np_expanded}) # Visualization of the results of a detection. vis_util.visualize_boxes_and_labels_on_image_array( image_np, np.squeeze(boxes), np.squeeze(classes).astype(np.int32), np.squeeze(scores), category_index, use_normalized_coordinates=True, line_thickness=8) plt.figure(figsize=IMAGE_SIZE) plt.imshow(image_np) Explanation: Detection End of explanation
1,947
Given the following text description, write Python code to implement the functionality described below step by step Description: Preprocessing and Pipelines <img src="figures/pipeline.svg" width=60%> Step1: Cross-validated pipelines including scaling, we need to estimate mean and standard deviation separately for each fold. To do that, we build a pipeline. Step2: <img src="figures/pipeline_cross_validation.svg" width=40%> Cross-validation with a pipeline Step3: Grid Search with a pipeline Step4: Exercises Add random features to the iris dataset using np.random.uniform and np.hstack. Build a pipeline using the SelectKBest univariate feature selection from the sklearn.feature_selection module and the LinearSVC on the iris dataset. Use GridSearchCV to adjust C and the number of features selected in SelectKBest.
Python Code: from sklearn.datasets import load_digits from sklearn.cross_validation import train_test_split digits = load_digits() X_train, X_test, y_train, y_test = train_test_split(digits.data, digits.target) Explanation: Preprocessing and Pipelines <img src="figures/pipeline.svg" width=60%> End of explanation from sklearn.pipeline import Pipeline, make_pipeline from sklearn.svm import SVC from sklearn.preprocessing import StandardScaler pipeline = Pipeline([("scaler", StandardScaler()), ("svm", SVC())]) # or for short: make_pipeline(StandardScaler(), SVC()) pipeline.fit(X_train, y_train) pipeline.predict(X_test) Explanation: Cross-validated pipelines including scaling: we need to estimate the mean and standard deviation separately for each fold. To do that, we build a pipeline. End of explanation from sklearn.cross_validation import cross_val_score cross_val_score(pipeline, X_train, y_train) Explanation: <img src="figures/pipeline_cross_validation.svg" width=40%> Cross-validation with a pipeline End of explanation from sklearn.grid_search import GridSearchCV import numpy as np param_grid = {'svm__C': 10. ** np.arange(-3, 3), 'svm__gamma' : 10. ** np.arange(-3, 3)} grid_pipeline = GridSearchCV(pipeline, param_grid=param_grid, n_jobs=-1) grid_pipeline.fit(X_train, y_train) grid_pipeline.score(X_test, y_test) Explanation: Grid Search with a pipeline End of explanation # %load solutions/pipeline_iris.py Explanation: Exercises Add random features to the iris dataset using np.random.uniform and np.hstack. Build a pipeline using the SelectKBest univariate feature selection from the sklearn.feature_selection module and the LinearSVC on the iris dataset. Use GridSearchCV to adjust C and the number of features selected in SelectKBest. End of explanation
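The solution cell above only loads solutions/pipeline_iris.py, which is not shown here. Purely as an illustration, and not the official solution, one way the exercise could be approached is sketched below; it reuses make_pipeline and GridSearchCV from the imports above, and the number of random columns and the parameter ranges are arbitrary choices.

# A possible sketch for the exercise (not the contents of solutions/pipeline_iris.py).
import numpy as np
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest
from sklearn.svm import LinearSVC

iris = load_iris()
# Pad the four iris features with ten uninformative random columns.
X = np.hstack([iris.data, np.random.uniform(size=(iris.data.shape[0], 10))])
y = iris.target

# make_pipeline names the steps after the lowercased class names: 'selectkbest' and 'linearsvc'.
pipe = make_pipeline(SelectKBest(k=2), LinearSVC())
param_grid = {'selectkbest__k': [1, 2, 3, 4, 6, 10, 14],
              'linearsvc__C': 10. ** np.arange(-3, 3)}
grid = GridSearchCV(pipe, param_grid=param_grid, cv=5)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)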
1,948
Given the following text description, write Python code to implement the functionality described below step by step Description: What's the terminal velocity of a skydiver? Names of group members // put your names here! Goals of this assignment The main goal of this assignment is to use numerical integration and differentiation to study the behavior of a skydiver. You're going to use the numerical integration and differentiation techniques that you learned in the pre-class assignment. Some background knowledge that we need for this model Position, velocity, and acceleration In physics, three important properties of a moving object are its position ($\vec{x}$), velocity ($\vec{v}$), and acceleration ($\vec{a}$). These are vector quantities, meaning that they have both a magnitude and a direction, and are related in the following way Step1: The second part of the challenge In addition to your professor, a mouse and an elephant have also chosen to go skydiving. (Choice may have had less to do with it than a tired physics professor trying to make a point; work with me here.) Their speeds were recorded as well, in the files mouse_time_velocities.csv and elephant_time_velocities.csv. Read the data in for these two unfortunate creatures and store them in their own arrays. (Don't worry, they had parachutes too, they're just not very happy about the whole situation!) Then, do the same calculations as before and plot the position, velocity, and acceleration as a function of time for all three individuals on the same set of graphs. Do the mouse and/or elephant reach terminal velocity? If so, at what time, and at what height above the ground? put your answer here! Step3: Assignment wrapup Please fill out the form that appears when you run the code below. You must completely fill this out in order to receive credit for the assignment!
Python Code: ''' The code in this cell opens up the file skydiver_time_velocities.csv and extracts two 1D numpy arrays of equal length. One array is of the velocity data taken by the radar gun, and the second is the times that the data is taken. ''' import numpy as np skydiver_time, skydiver_velocity = np.loadtxt("skydiver_time_velocities.csv", delimiter=',',skiprows=1,unpack=True) ''' This is a piece of example code that shows you how to get the velocity at any time you want using the Numpy interp() method. This requires you to pick a time where you want the velocity as an input parameter to the method, as well as the time and velocity arrays that you will interpolate from. ''' time = 7.2 # time in seconds vel = np.interp(time,skydiver_time,skydiver_velocity) print("velocity at time {:.3f} s is {:.3f} m/s".format(time,vel)) # put your code here! Explanation: What's the terminal velocity of a skydiver? Names of group members // put your names here! Goals of this assignment The main goal of this assignment is to use numerical integration and differentiation to study the behavior of a skydiver. You're going to use the numerical integration and differentiation techniques that you learned in the pre-class assignment. Some background knowledge that we need for this model Position, velocity, and acceleration In physics, three important properties of a moving object are its position ($\vec{x}$), velocity ($\vec{v}$), and acceleration ($\vec{a}$). These are vector quantities, meaning that they have both a magnitude and a direction, and are related in the following way: $\vec{v} = \frac{d\vec{x}}{dt}$ $\vec{a} = \frac{d\vec{v}}{dt}$ - i.e., acceleration is the rate of change of velocity (units of meters per second$^2$) In words, velocity is the rate of change of position with time (and having units of length per time) and acceleration is the rate of change of velocity with time (and having units of length per time$^2$). Given this, the fundamental theorem of calculus tells us that we can relate these quantities by integration as well. Expressed mathematically: $\vec{x} = \vec{x}_0 + \int_0^t \vec{v}(t) dt$ $\vec{v} = \vec{v}_0 + \int_0^t \vec{a}(t) dt$ So, we can get the position at any time by starting at the initial position and integrating the velocity over time, and can get the velocity at any time by starting with the initial velocity and integrating the acceleration over time. Terminal velocity An object moving through a fluid like air or water experiences a force of friction - just think about what happens if you stick your hand out of the window of a moving car! This is why airplanes need to run their engines constantly while in flight; when traveling at a constant speed, the force exerted by the engines just balances the force exerted by the air friction. This force of friction always points in the opposite direction of the object's motion (in other words, in the opposite direction of its velocity). A similar thing happens to a falling object. As an object falls downward faster and faster, the force of gravity pulling downward is eventually perfectly balanced by the upward force from air resistance (upward because the direction of motion is down). When these forces perfectly balance, the object experiences zero acceleration, and thus its velocity becomes constant. We call this the terminal velocity. The challenge Your professor happens to mention that he went skydiving last weekend. 
He jumped from a stationary helicopter that was hovering 2,000 meters above the ground, and opened the parachute at the last possible moment. In the interests of science, he had a friend stand on the ground with a radar gun and measure his velocity as a function of time. This file, skydiver_time_velocities.csv, has been provided to you to examine. You are asked to do the following: Calculate and plot the position, velocity, and acceleration as a function of time. If you start the clock when your professor steps out of the helicopter (i.e., $t=0$), at what time does he land on the ground? At what time, and at what height above ground, does he reach terminal velocity? In the cells below, we have provided two pieces of code: one that reads the data you want from the file into two Numpy arrays, and a second piece of code that can provide you with the velocity at any time. End of explanation # put your code here! Explanation: The second part of the challenge In addition to your professor, a mouse and an elephant have also chosen to go skydiving. (Choice may have had less to do with it than a tired physics professor trying to make a point; work with me here.) Their speeds were recorded as well, in the files mouse_time_velocities.csv and elephant_time_velocities.csv. Read the data in for these two unfortunate creatures and store them in their own arrays. (Don't worry, they had parachutes too, they're just not very happy about the whole situation!) Then, do the same calculations as before and plot the position, velocity, and acceleration as a function of time for all three individuals on the same set of graphs. Do the mouse and/or elephant reach terminal velocity? If so, at what time, and at what height above the ground? put your answer here! End of explanation from IPython.display import HTML HTML( <iframe src="https://goo.gl/forms/XvxmPrGnDOD3UZcI2?embedded=true" width="80%" height="1200px" frameborder="0" marginheight="0" marginwidth="0"> Loading... </iframe> ) Explanation: Assignment wrapup Please fill out the form that appears when you run the code below. You must completely fill this out in order to receive credit for the assignment! End of explanation
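As a rough sketch of the kind of calculation the assignment asks for (one possible approach, not a provided solution): acceleration can be approximated by differencing the radar-gun velocities, and the distance fallen by integrating them with the trapezoid rule, using the skydiver_time and skydiver_velocity arrays loaded earlier. The 2,000 m starting height comes from the problem statement, the velocities are assumed to be positive downward speeds, and the plotting import is added here because it is not shown elsewhere in this notebook.

# Sketch only: numerical differentiation and integration of the recorded velocities.
import matplotlib.pyplot as plt
dt = np.diff(skydiver_time)
accel = np.diff(skydiver_velocity) / dt                    # a ~ dv/dt between consecutive samples
fallen = np.concatenate(([0.0], np.cumsum(0.5 * (skydiver_velocity[1:] + skydiver_velocity[:-1]) * dt)))
height = 2000.0 - fallen                                   # height above the ground in meters
plt.plot(skydiver_time, height, label='height (m)')
plt.plot(skydiver_time[1:], accel, label='acceleration (m/s$^2$)')
plt.xlabel('time (s)')
plt.legend()
plt.show()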
1,949
Given the following text description, write Python code to implement the functionality described below step by step Description: ES-DOC CMIP6 Model Properties - Ocnbgchem MIP Era Step1: Document Authors Set document authors Step2: Document Contributors Specify document contributors Step3: Document Publication Specify document publication status Step4: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Time Stepping Framework --&gt; Passive Tracers Transport 3. Key Properties --&gt; Time Stepping Framework --&gt; Biology Sources Sinks 4. Key Properties --&gt; Transport Scheme 5. Key Properties --&gt; Boundary Forcing 6. Key Properties --&gt; Gas Exchange 7. Key Properties --&gt; Carbon Chemistry 8. Tracers 9. Tracers --&gt; Ecosystem 10. Tracers --&gt; Ecosystem --&gt; Phytoplankton 11. Tracers --&gt; Ecosystem --&gt; Zooplankton 12. Tracers --&gt; Disolved Organic Matter 13. Tracers --&gt; Particules 14. Tracers --&gt; Dic Alkalinity 1. Key Properties Ocean Biogeochemistry key properties 1.1. Model Overview Is Required Step5: 1.2. Model Name Is Required Step6: 1.3. Model Type Is Required Step7: 1.4. Elemental Stoichiometry Is Required Step8: 1.5. Elemental Stoichiometry Details Is Required Step9: 1.6. Prognostic Variables Is Required Step10: 1.7. Diagnostic Variables Is Required Step11: 1.8. Damping Is Required Step12: 2. Key Properties --&gt; Time Stepping Framework --&gt; Passive Tracers Transport Time stepping method for passive tracers transport in ocean biogeochemistry 2.1. Method Is Required Step13: 2.2. Timestep If Not From Ocean Is Required Step14: 3. Key Properties --&gt; Time Stepping Framework --&gt; Biology Sources Sinks Time stepping framework for biology sources and sinks in ocean biogeochemistry 3.1. Method Is Required Step15: 3.2. Timestep If Not From Ocean Is Required Step16: 4. Key Properties --&gt; Transport Scheme Transport scheme in ocean biogeochemistry 4.1. Type Is Required Step17: 4.2. Scheme Is Required Step18: 4.3. Use Different Scheme Is Required Step19: 5. Key Properties --&gt; Boundary Forcing Properties of biogeochemistry boundary forcing 5.1. Atmospheric Deposition Is Required Step20: 5.2. River Input Is Required Step21: 5.3. Sediments From Boundary Conditions Is Required Step22: 5.4. Sediments From Explicit Model Is Required Step23: 6. Key Properties --&gt; Gas Exchange *Properties of gas exchange in ocean biogeochemistry * 6.1. CO2 Exchange Present Is Required Step24: 6.2. CO2 Exchange Type Is Required Step25: 6.3. O2 Exchange Present Is Required Step26: 6.4. O2 Exchange Type Is Required Step27: 6.5. DMS Exchange Present Is Required Step28: 6.6. DMS Exchange Type Is Required Step29: 6.7. N2 Exchange Present Is Required Step30: 6.8. N2 Exchange Type Is Required Step31: 6.9. N2O Exchange Present Is Required Step32: 6.10. N2O Exchange Type Is Required Step33: 6.11. CFC11 Exchange Present Is Required Step34: 6.12. CFC11 Exchange Type Is Required Step35: 6.13. CFC12 Exchange Present Is Required Step36: 6.14. CFC12 Exchange Type Is Required Step37: 6.15. SF6 Exchange Present Is Required Step38: 6.16. SF6 Exchange Type Is Required Step39: 6.17. 13CO2 Exchange Present Is Required Step40: 6.18. 13CO2 Exchange Type Is Required Step41: 6.19. 14CO2 Exchange Present Is Required Step42: 6.20. 14CO2 Exchange Type Is Required Step43: 6.21. Other Gases Is Required Step44: 7. Key Properties --&gt; Carbon Chemistry Properties of carbon chemistry biogeochemistry 7.1. Type Is Required Step45: 7.2. PH Scale Is Required Step46: 7.3. 
Constants If Not OMIP Is Required Step47: 8. Tracers Ocean biogeochemistry tracers 8.1. Overview Is Required Step48: 8.2. Sulfur Cycle Present Is Required Step49: 8.3. Nutrients Present Is Required Step50: 8.4. Nitrous Species If N Is Required Step51: 8.5. Nitrous Processes If N Is Required Step52: 9. Tracers --&gt; Ecosystem Ecosystem properties in ocean biogeochemistry 9.1. Upper Trophic Levels Definition Is Required Step53: 9.2. Upper Trophic Levels Treatment Is Required Step54: 10. Tracers --&gt; Ecosystem --&gt; Phytoplankton Phytoplankton properties in ocean biogeochemistry 10.1. Type Is Required Step55: 10.2. Pft Is Required Step56: 10.3. Size Classes Is Required Step57: 11. Tracers --&gt; Ecosystem --&gt; Zooplankton Zooplankton properties in ocean biogeochemistry 11.1. Type Is Required Step58: 11.2. Size Classes Is Required Step59: 12. Tracers --&gt; Disolved Organic Matter Disolved organic matter properties in ocean biogeochemistry 12.1. Bacteria Present Is Required Step60: 12.2. Lability Is Required Step61: 13. Tracers --&gt; Particules Particulate carbon properties in ocean biogeochemistry 13.1. Method Is Required Step62: 13.2. Types If Prognostic Is Required Step63: 13.3. Size If Prognostic Is Required Step64: 13.4. Size If Discrete Is Required Step65: 13.5. Sinking Speed If Prognostic Is Required Step66: 14. Tracers --&gt; Dic Alkalinity DIC and alkalinity properties in ocean biogeochemistry 14.1. Carbon Isotopes Is Required Step67: 14.2. Abiotic Carbon Is Required Step68: 14.3. Alkalinity Is Required
Python Code: # DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'dwd', 'sandbox-3', 'ocnbgchem') Explanation: ES-DOC CMIP6 Model Properties - Ocnbgchem MIP Era: CMIP6 Institute: DWD Source ID: SANDBOX-3 Topic: Ocnbgchem Sub-Topics: Tracers. Properties: 65 (37 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:53:57 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) Explanation: Document Authors Set document authors End of explanation # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) Explanation: Document Contributors Specify document contributors End of explanation # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) Explanation: Document Publication Specify document publication status End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Time Stepping Framework --&gt; Passive Tracers Transport 3. Key Properties --&gt; Time Stepping Framework --&gt; Biology Sources Sinks 4. Key Properties --&gt; Transport Scheme 5. Key Properties --&gt; Boundary Forcing 6. Key Properties --&gt; Gas Exchange 7. Key Properties --&gt; Carbon Chemistry 8. Tracers 9. Tracers --&gt; Ecosystem 10. Tracers --&gt; Ecosystem --&gt; Phytoplankton 11. Tracers --&gt; Ecosystem --&gt; Zooplankton 12. Tracers --&gt; Disolved Organic Matter 13. Tracers --&gt; Particules 14. Tracers --&gt; Dic Alkalinity 1. Key Properties Ocean Biogeochemistry key properties 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of ocean biogeochemistry model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of ocean biogeochemistry model code (PISCES 2.0,...) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.model_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Geochemical" # "NPZD" # "PFT" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.3. Model Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of ocean biogeochemistry model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Fixed" # "Variable" # "Mix of both" # TODO - please enter value(s) Explanation: 1.4. Elemental Stoichiometry Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe elemental stoichiometry (fixed, variable, mix of the two) End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.5. Elemental Stoichiometry Details Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe which elements have fixed/variable stoichiometry End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.6. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of all prognostic tracer variables in the ocean biogeochemistry component End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.7. Diagnostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of all diagnotic tracer variables in the ocean biogeochemistry component End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.damping') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.8. Damping Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe any tracer damping used (such as artificial correction or relaxation to climatology,...) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "use ocean model transport time step" # "use specific time step" # TODO - please enter value(s) Explanation: 2. Key Properties --&gt; Time Stepping Framework --&gt; Passive Tracers Transport Time stepping method for passive tracers transport in ocean biogeochemistry 2.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time stepping framework for passive tracers End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 2.2. Timestep If Not From Ocean Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Time step for passive tracers (if different from ocean) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "use ocean model transport time step" # "use specific time step" # TODO - please enter value(s) Explanation: 3. Key Properties --&gt; Time Stepping Framework --&gt; Biology Sources Sinks Time stepping framework for biology sources and sinks in ocean biogeochemistry 3.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time stepping framework for biology sources and sinks End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 3.2. Timestep If Not From Ocean Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Time step for biology sources and sinks (if different from ocean) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Offline" # "Online" # TODO - please enter value(s) Explanation: 4. Key Properties --&gt; Transport Scheme Transport scheme in ocean biogeochemistry 4.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of transport scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Use that of ocean model" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 4.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Transport scheme used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4.3. Use Different Scheme Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Decribe transport scheme if different than that of ocean model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "from file (climatology)" # "from file (interannual variations)" # "from Atmospheric Chemistry model" # TODO - please enter value(s) Explanation: 5. Key Properties --&gt; Boundary Forcing Properties of biogeochemistry boundary forcing 5.1. Atmospheric Deposition Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how atmospheric deposition is modeled End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "from file (climatology)" # "from file (interannual variations)" # "from Land Surface model" # TODO - please enter value(s) Explanation: 5.2. River Input Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how river input is modeled End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.3. Sediments From Boundary Conditions Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List which sediments are speficied from boundary condition End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.4. 
Sediments From Explicit Model Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List which sediments are speficied from explicit sediment model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 6. Key Properties --&gt; Gas Exchange *Properties of gas exchange in ocean biogeochemistry * 6.1. CO2 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is CO2 gas exchange modeled ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "OMIP protocol" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 6.2. CO2 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe CO2 gas exchange End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 6.3. O2 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is O2 gas exchange modeled ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "OMIP protocol" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 6.4. O2 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe O2 gas exchange End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 6.5. DMS Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is DMS gas exchange modeled ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.6. DMS Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify DMS gas exchange scheme type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 6.7. N2 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is N2 gas exchange modeled ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.8. 
N2 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify N2 gas exchange scheme type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 6.9. N2O Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is N2O gas exchange modeled ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.10. N2O Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify N2O gas exchange scheme type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 6.11. CFC11 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is CFC11 gas exchange modeled ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.12. CFC11 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify CFC11 gas exchange scheme type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 6.13. CFC12 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is CFC12 gas exchange modeled ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.14. CFC12 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify CFC12 gas exchange scheme type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 6.15. SF6 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is SF6 gas exchange modeled ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.16. SF6 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify SF6 gas exchange scheme type End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 6.17. 13CO2 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is 13CO2 gas exchange modeled ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.18. 13CO2 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify 13CO2 gas exchange scheme type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 6.19. 14CO2 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is 14CO2 gas exchange modeled ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.20. 14CO2 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify 14CO2 gas exchange scheme type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.21. Other Gases Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify any other gas exchange End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "OMIP protocol" # "Other protocol" # TODO - please enter value(s) Explanation: 7. Key Properties --&gt; Carbon Chemistry Properties of carbon chemistry biogeochemistry 7.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how carbon chemistry is modeled End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Sea water" # "Free" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 7.2. PH Scale Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If NOT OMIP protocol, describe pH scale. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.3. Constants If Not OMIP Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If NOT OMIP protocol, list carbon chemistry constants. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8. Tracers Ocean biogeochemistry tracers 8.1. 
Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of tracers in ocean biogeochemistry End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 8.2. Sulfur Cycle Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is sulfur cycle modeled ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Nitrogen (N)" # "Phosphorous (P)" # "Silicium (S)" # "Iron (Fe)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 8.3. Nutrients Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List nutrient species present in ocean biogeochemistry model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Nitrates (NO3)" # "Amonium (NH4)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 8.4. Nitrous Species If N Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If nitrogen present, list nitrous species. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Dentrification" # "N fixation" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 8.5. Nitrous Processes If N Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If nitrogen present, list nitrous processes. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9. Tracers --&gt; Ecosystem Ecosystem properties in ocean biogeochemistry 9.1. Upper Trophic Levels Definition Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Definition of upper trophic level (e.g. based on size) ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.2. Upper Trophic Levels Treatment Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Define how upper trophic level are treated End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Generic" # "PFT including size based (specify both below)" # "Size based only (specify below)" # "PFT only (specify below)" # TODO - please enter value(s) Explanation: 10. Tracers --&gt; Ecosystem --&gt; Phytoplankton Phytoplankton properties in ocean biogeochemistry 10.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of phytoplankton End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Diatoms" # "Nfixers" # "Calcifiers" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 10.2. Pft Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Phytoplankton functional types (PFT) (if applicable) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Microphytoplankton" # "Nanophytoplankton" # "Picophytoplankton" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 10.3. Size Classes Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Phytoplankton size classes (if applicable) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Generic" # "Size based (specify below)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11. Tracers --&gt; Ecosystem --&gt; Zooplankton Zooplankton properties in ocean biogeochemistry 11.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of zooplankton End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Microzooplankton" # "Mesozooplankton" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11.2. Size Classes Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Zooplankton size classes (if applicable) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 12. Tracers --&gt; Disolved Organic Matter Disolved organic matter properties in ocean biogeochemistry 12.1. Bacteria Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there bacteria representation ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Labile" # "Semi-labile" # "Refractory" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 12.2. Lability Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe treatment of lability in dissolved organic matter End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.particules.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Diagnostic" # "Diagnostic (Martin profile)" # "Diagnostic (Balast)" # "Prognostic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13. Tracers --&gt; Particules Particulate carbon properties in ocean biogeochemistry 13.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How is particulate carbon represented in ocean biogeochemistry? End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "POC" # "PIC (calcite)" # "PIC (aragonite" # "BSi" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.2. Types If Prognostic Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If prognostic, type(s) of particulate matter taken into account End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "No size spectrum used" # "Full size spectrum" # "Discrete size classes (specify which below)" # TODO - please enter value(s) Explanation: 13.3. Size If Prognostic Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If prognostic, describe if a particule size spectrum is used to represent distribution of particules in water volume End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 13.4. Size If Discrete Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If prognostic and discrete size, describe which size classes are used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Function of particule size" # "Function of particule type (balast)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.5. Sinking Speed If Prognostic Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If prognostic, method for calculation of sinking speed of particules End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "C13" # "C14)" # TODO - please enter value(s) Explanation: 14. Tracers --&gt; Dic Alkalinity DIC and alkalinity properties in ocean biogeochemistry 14.1. Carbon Isotopes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Which carbon isotopes are modelled (C13, C14)? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 14.2. Abiotic Carbon Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is abiotic carbon modelled ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Prognostic" # "Diagnostic)" # TODO - please enter value(s) Explanation: 14.3. Alkalinity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How is alkalinity modelled ? End of explanation
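To make the intent of the template a little more concrete, here is what one answered property cell might look like. The choice of "NPZD" is only an illustration taken from the listed valid choices, not a statement about this model, and the calls are exactly the setters used in the cells above.

# Illustration only: a completed property cell, reusing the documented setters from above.
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
DOC.set_value("NPZD")  # hypothetical answer chosen from the valid choices listed in the template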
1,950
Given the following text description, write Python code to implement the functionality described below step by step Description: This is the client!! Step1: Above is the output I'm getting; we still need to discuss interpolation and to add a parameter for the number of time series to find
Python Code: import MessageFormatting import importlib importlib.reload(MessageFormatting) from MessageFormatting import * from timeseries.ArrayTimeSeries import ArrayTimeSeries as ts import numpy as np from scipy.stats import norm t = np.arange(0.0, 1.0, 0.01) v = norm.pdf(t, 100, 100) + 1000*np.random.randn(100) ts_test = ts(t, v) d2 = {'op':'storeTS','id':1000,'ts':[[1,2,3], [-1,3,-10]],'courtesy':'please'} #d2 = {'op':'TSfromID','id':1000,'courtesy':'please'} #d2 = {'op':'simsearch_id','id':12,'n_closest':2,'courtesy':'please'} #d2 = {'op':'simsearch_ts','ts':[list(ts_test.times()), list(ts_test.values())],'courtesy':'please'} s2 = serialize(json.dumps(d2)) s2 import sys from socket import socket, AF_INET, SOCK_STREAM s = socket(AF_INET, SOCK_STREAM) s.connect(('localhost', 20000)) s.send(s2) msg = s.recv(8192) print(msg) ds = Deserializer() ds.append(msg) ds.ready() response = ds.deserialize() #print(response) if 'ts' in response: a = response['ts'] elif 'id' in response: a = response['id'] print(response) print(a) print(a) a = b = 0 response['ts'] Explanation: This is the client!! End of explanation def dic_fun(**kwargs): a = {} for k,v in kwargs.items(): a[k]=v print(a) dic_fun(a=12,b=17) import MessageFormatting import importlib importlib.reload(MessageFormatting) from MessageFormatting import * d2 = {'op':'simsearch_ts','ts':[[1,2,3],[4,5,6]],'courtesy':'please'} d2 = {'op':'simsearch_id','id':12,'courtesy':'please','n_closest':12} d2 = {'op':'TSfromID','id':12,'courtesy':'please'} c = TSDBOp.from_json(d2) c ds = Deserializer() ds.append(msg) ds.deserialize() msg json.dumps('success!') #json.loads(TSDBOp.to_json('success!')) from Similarity.find_most_similar import find_most_similiar sys.path os.getcwd() os.path.dirname(os.path.abspath(__file__)) from StorageManager import FileStorageManager sm = File Explanation: Above is the output I'm getting- still need to discuss interpolation and also adding in the parameter for number of timeseries to find End of explanation
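The cells above repeat the same serialize/connect/receive round trip by hand for each request. Purely as a sketch (it assumes serialize and Deserializer from MessageFormatting behave exactly as they are used above, and that the server answers each request with a single reply of at most 8192 bytes), the round trip could be wrapped in a small helper:

# Hypothetical helper, not part of the original notebook.
import json
from socket import socket, AF_INET, SOCK_STREAM  # already imported above; repeated so this cell stands alone

def send_op(op_dict, host='localhost', port=20000):
    '''Send one operation dictionary to the TSDB server and return the decoded reply.'''
    sock = socket(AF_INET, SOCK_STREAM)
    sock.connect((host, port))
    sock.send(serialize(json.dumps(op_dict)))
    ds = Deserializer()
    ds.append(sock.recv(8192))
    ds.ready()
    reply = ds.deserialize()
    sock.close()
    return reply

# Example use, mirroring one of the dictionaries above:
print(send_op({'op': 'TSfromID', 'id': 1000, 'courtesy': 'please'}))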
1,951
Given the following text description, write Python code to implement the functionality described below step by step Description: Introduction to Python 2 Creating Functions <section class="objectives panel panel-warning"> <div class="panel-heading"> <h3><span class="fa fa-certificate"></span> Learning Objectives Step1: The function definition opens with the word def, which is followed by the name of the function and a parenthesized list of parameter names. The body of the function — the statements that are executed when it runs — is indented below the definition line, typically by four spaces. When we call the function, the values we pass to it are assigned to those variables so that we can use them inside the function. Inside the function, we use a return statement to send a result back to whoever asked for it. Let’s try running our function. Calling our own function is no different from calling any other function Step2: We’ve successfully called the function that we defined, and we have access to the value that we returned. Integer division We are using Python 3 division, which always returns a floating point number Step3: Unfortunately, this wasn’t the case in Python 2 Step4: If you are using Python 2 and want to keep the fractional part of division you need to convert one or the other number to floating point Step5: And if you want an integer result from division in Python 3, use a double-slash Step6: Composing Functions Now that we’ve seen how to turn Kelvin into Celsius, let's try converting Celsius to Fahrenheit Step7: What about converting Kelvin to Fahrenheit? We could write out the formula, but we don’t need to. Instead, we can compose the two functions we have already created Step8: This is our first taste of how larger programs are built Step9: and another function called detect_problems that checks for those systematics we noticed Step10: Notice that rather than jumbling this code together in one giant for loop, we can now read and reuse both ideas separately. We can reproduce the previous analysis with a much simpler for loop Step11: By giving our functions human-readable names, we can more easily read and understand what is happening in the for loop. Even better, if at some later date we want to use either of those pieces of code again, we can do so in a single line. Testing and Documenting Once we start putting things in functions so that we can re-use them, we need to start testing that those functions are working correctly. To see how to do this, let’s write a function to center a dataset around a particular value Step12: We could test this on our actual data, but since we don’t know what the values ought to be, it will be hard to tell if the result was correct. Instead, let’s use NumPy to create a matrix of 0’s and then center that around 3 Step13: That looks right, so let’s try center on our real data Step14: It’s hard to tell from the default output whether the result is correct, but there are a few simple tests that will reassure us Step15: That seems almost right Step16: Those values look the same, but we probably wouldn’t notice if they were different in the sixth decimal place. Let’s do this instead Step17: Again, the difference is very small. It’s still possible that our function is wrong, but it seems unlikely enough that we should probably get back to doing our analysis. We have one more task first, though Step18: There’s a better way, though. 
If the first thing in a function is a string that isn’t assigned to a variable, that string is attached to the function as its documentation Step19: This is better because we can now ask Python’s built-in help system to show us the documentation for the function Step20: A string like this is called a docstring. We don’t need to use triple quotes when we write one, but if we do, we can break the string across multiple lines Step21: Defining Defaults We have passed parameters to functions in two ways Step22: but we still need to say delimiter= Step23: To understand what’s going on, and make our own functions easier to use, let’s re-define our center function like this Step24: The key change is that the second parameter is now written desired=0.0 instead of just desired. If we call the function with two arguments, it works as it did before Step25: But we can also now call it with just one parameter, in which case desired is automatically assigned the default value of 0.0 Step26: This is handy Step27: As this example shows, parameters are matched up from left to right, and any that haven’t been given a value explicitly get their default value. We can override this behavior by naming the value as we pass it in Step28: With that in hand, let’s look at the help for numpy.loadtxt Step29: There’s a lot of information here, but the most important part is the first couple of lines Step31: then the filename is assigned to fname (which is what we want), but the delimiter string ',' is assigned to dtype rather than delimiter, because dtype is the second parameter in the list. However ',' isn’t a known dtype so our code produced an error message when we tried to run it. When we call loadtxt we don’t have to provide fname= for the filename because it’s the first item in the list, but if we want the ',' to be assigned to the variable delimiter, we do have to provide delimiter= for the second parameter since delimiter is not the second parameter in the list. <section class="challenge panel panel-success"> <div class="panel-heading"> <h2 id="combining-strings"><span class="fa fa-pencil"></span>Combining strings</h2> </div> <div class="panel-body"> <p>“Adding” two strings produces their concatenation
Python Code: # Let's get our import statements out of the way first from __future__ import division, print_function import numpy as np import glob import matplotlib.pyplot as plt %matplotlib inline def kelvin_to_celsius(temp): return temp - 273.15 Explanation: Introduction to Python 2 Creating Functions <section class="objectives panel panel-warning"> <div class="panel-heading"> <h3><span class="fa fa-certificate"></span> Learning Objectives: </h3> </div> - Define a function that takes parameters. - Return a value from a function. - Test and debug a function. - Set default values for function parameters. - Explain why we should divide programs into small, single-purpose functions. At this point, we’ve written code to draw some interesting features in our inflammation data, loop over all our data files to quickly draw these plots for each of them, and have Python make decisions based on what it sees in our data. But, our code is getting pretty long and complicated; what if we had thousands of datasets, and didn’t want to generate a figure for every single one? Commenting out the figure-drawing code is a nuisance. Also, what if we want to use that code again, on a different dataset or at a different point in our program? Cutting and pasting it is going to make our code get very long and very repetitive, very quickly. We’d like a way to package our code so that it is easier to reuse, and Python provides for this by letting us define things called ‘functions’ - a shorthand way of re-executing longer pieces of code. Let’s start by defining a function `kelvin_to_celsius` that converts temperatures from Kelvin to Celsius: End of explanation print('absolute zero in Celsius:', kelvin_to_celsius(0.0)) Explanation: The function definition opens with the word def, which is followed by the name of the function and a parenthesized list of parameter names. The body of the function — the statements that are executed when it runs — is indented below the definition line, typically by four spaces. When we call the function, the values we pass to it are assigned to those variables so that we can use them inside the function. Inside the function, we use a return statement to send a result back to whoever asked for it. Let’s try running our function. Calling our own function is no different from calling any other function: End of explanation print(5/9) Explanation: We’ve successfully called the function that we defined, and we have access to the value that we returned. 
Integer division We are using Python 3 division, which always returns a floating point number: End of explanation !python2 -c "print 5/9" Explanation: Unfortunately, this wasn’t the case in Python 2: End of explanation float(5) / 9 5 / float(9) 5.0 / 9 5 / 9.0 Explanation: If you are using Python 2 and want to keep the fractional part of division you need to convert one or the other number to floating point: End of explanation 4 // 2 3 // 2 Explanation: And if you want an integer result from division in Python 3, use a double-slash: End of explanation def celsius_to_fahr(temp): return temp * (9/5) + 32 print('freezing point of water:', celsius_to_fahr(0)) print('boiling point of water:', celsius_to_fahr(100)) Explanation: Composing Functions Now that we’ve seen how to turn Kelvin into Celsius, let's try converting Celsius to Fahrenheit: End of explanation def kelvin_to_fahr(temp): temp_c = kelvin_to_celsius(temp) result = celsius_to_fahr(temp_c) return result print('freezing point of water in Fahrenheit:', kelvin_to_fahr(273.15)) print('absolute zero in Fahrenheit:', kelvin_to_fahr(0)) Explanation: What about converting Kelvin to Fahrenheit? We could write out the formula, but we don’t need to. Instead, we can compose the two functions we have already created: End of explanation def analyse(filename): data = np.loadtxt(fname=filename, delimiter=',') fig = plt.figure(figsize=(10.0, 3.0)) axes1 = fig.add_subplot(1, 3, 1) axes2 = fig.add_subplot(1, 3, 2) axes3 = fig.add_subplot(1, 3, 3) axes1.set_ylabel('average') axes1.plot(data.mean(axis=0)) axes2.set_ylabel('max') axes2.plot(data.max(axis=0)) axes3.set_ylabel('min') axes3.plot(data.min(axis=0)) fig.tight_layout() plt.show(fig) Explanation: This is our first taste of how larger programs are built: we define basic operations, then combine them in ever-larger chunks to get the effect we want. Real-life functions will usually be larger than the ones shown here — typically half a dozen to a few dozen lines — but they shouldn’t ever be much longer than that, or the next person who reads it won’t be able to understand what’s going on. Tidying up Now that we know how to wrap bits of code up in functions, we can make our inflammation analyasis easier to read and easier to reuse. First, let’s make an analyse function that generates our plots: End of explanation def detect_problems(filename): data = np.loadtxt(fname=filename, delimiter=',') if data.max(axis=0)[0] == 0 and data.max(axis=0)[20] == 20: print('Suspicious looking maxima!') elif data.min(axis=0).sum() == 0: print('Minima add up to zero!') else: print('Seems OK!') Explanation: and another function called detect_problems that checks for those systematics we noticed: End of explanation # First redefine our list of filenames from the last lesson filenames = sorted(glob.glob('data/inflammation*.csv')) for f in filenames[:3]: print(f) analyse(f) detect_problems(f) Explanation: Notice that rather than jumbling this code together in one giant for loop, we can now read and reuse both ideas separately. We can reproduce the previous analysis with a much simpler for loop: End of explanation def centre(data, desired): return (data - data.mean()) + desired Explanation: By giving our functions human-readable names, we can more easily read and understand what is happening in the for loop. Even better, if at some later date we want to use either of those pieces of code again, we can do so in a single line. 
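For example (a sketch added here for illustration, not part of the original lesson), the two functions can themselves be bundled into one helper, so that processing a file really is a single call:

# Illustration only: wrap the two steps above into one reusable function.
def process_file(filename):
    '''Plot the statistics for one inflammation file and run the sanity checks on it.'''
    analyse(filename)
    detect_problems(filename)

for f in filenames[:3]:
    process_file(f)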
Testing and Documenting Once we start putting things in functions so that we can re-use them, we need to start testing that those functions are working correctly. To see how to do this, let’s write a function to center a dataset around a particular value: End of explanation z = np.zeros((2,2)) print(centre(z, 3)) Explanation: We could test this on our actual data, but since we don’t know what the values ought to be, it will be hard to tell if the result was correct. Instead, let’s use NumPy to create a matrix of 0’s and then center that around 3: End of explanation data = np.loadtxt(fname='data/inflammation-01.csv', delimiter=',') print(centre(data, 0)) Explanation: That looks right, so let’s try center on our real data: End of explanation print('original min, mean, and max are:', data.min(), data.mean(), data.max()) centered = centre(data, 0) print('min, mean, and and max of centered data are:', centered.min(), centered.mean(), centered.max()) Explanation: It’s hard to tell from the default output whether the result is correct, but there are a few simple tests that will reassure us: End of explanation print('std dev before and after:', data.std(), centered.std()) Explanation: That seems almost right: the original mean was about 6.1, so the lower bound from zero is how about -6.1. The mean of the centered data isn’t quite zero — we’ll explore why not in the challenges — but it’s pretty close. We can even go further and check that the standard deviation hasn’t changed: End of explanation print('difference in standard deviations before and after:', data.std() - centered.std()) Explanation: Those values look the same, but we probably wouldn’t notice if they were different in the sixth decimal place. Let’s do this instead: End of explanation # centre(data, desired): return a new array containing the original data centered around the desired value. def centre(data, desired): return (data - data.mean()) + desired Explanation: Again, the difference is very small. It’s still possible that our function is wrong, but it seems unlikely enough that we should probably get back to doing our analysis. We have one more task first, though: we should write some documentation for our function to remind ourselves later what it’s for and how to use it. The usual way to put documentation in software is to add comments like this: End of explanation def centre(data, desired): '''Return a new array containing the original data centered around the desired value.''' return (data - data.mean()) + desired Explanation: There’s a better way, though. If the first thing in a function is a string that isn’t assigned to a variable, that string is attached to the function as its documentation: End of explanation help(centre) Explanation: This is better because we can now ask Python’s built-in help system to show us the documentation for the function: End of explanation def centre(data, desired): '''Return a new array containing the original data centered around the desired value. Example: center([1, 2, 3], 0) => [-1, 0, 1]''' return (data - data.mean()) + desired help(centre) Explanation: A string like this is called a docstring. We don’t need to use triple quotes when we write one, but if we do, we can break the string across multiple lines: End of explanation np.loadtxt('data/inflammation-01.csv', delimiter=',') Explanation: Defining Defaults We have passed parameters to functions in two ways: directly, as in type(data), and by name, as in numpy.loadtxt(fname='something.csv', delimiter=','). 
In fact, we can pass the filename to loadtxt without the fname=: End of explanation np.loadtxt('data/inflammation-01.csv', ',') Explanation: but we still need to say delimiter=: End of explanation def centre(data, desired=0.0): '''Return a new array containing the original data centered around the desired value (0 by default). Example: center([1, 2, 3], 0) => [-1, 0, 1]''' return (data - data.mean()) + desired Explanation: To understand what’s going on, and make our own functions easier to use, let’s re-define our center function like this: End of explanation test_data = np.zeros((2, 2)) print(centre(test_data, 3)) Explanation: The key change is that the second parameter is now written desired=0.0 instead of just desired. If we call the function with two arguments, it works as it did before: End of explanation more_data = 5 + np.zeros((2, 2)) print('data before centering:') print(more_data) print('centered data:') print(centre(more_data)) Explanation: But we can also now call it with just one parameter, in which case desired is automatically assigned the default value of 0.0: End of explanation def display(a=1, b=2, c=3): print('a:', a, 'b:', b, 'c:', c) print('no parameters:') display() print('one parameter:') display(55) print('two parameters:') display(55, 66) Explanation: This is handy: if we usually want a function to work one way, but occasionally need it to do something else, we can allow people to pass a parameter when they need to but provide a default to make the normal case easier. The example below shows how Python matches values to parameters: End of explanation print('only setting the value of c') display(c=77) Explanation: As this example shows, parameters are matched up from left to right, and any that haven’t been given a value explicitly get their default value. We can override this behavior by naming the value as we pass it in: End of explanation help(np.loadtxt) Explanation: With that in hand, let’s look at the help for numpy.loadtxt: End of explanation np.loadtxt('data/inflammation-01.csv', ',') Explanation: There’s a lot of information here, but the most important part is the first couple of lines: <pre>loadtxt(fname, dtype=<type 'float'>, comments='#', delimiter=None, converters=None, skiprows=0, usecols=None, unpack=False, ndmin=0)</pre> This tells us that loadtxt has one parameter called fname that doesn’t have a default value, and eight others that do. If we call the function like this: End of explanation def fence(original, wrapper='#'): Return a new string which consists of the original string with the wrapper character before and after return wrapper + original + wrapper print(fence('name', '*')) Explanation: then the filename is assigned to fname (which is what we want), but the delimiter string ',' is assigned to dtype rather than delimiter, because dtype is the second parameter in the list. However ',' isn’t a known dtype so our code produced an error message when we tried to run it. When we call loadtxt we don’t have to provide fname= for the filename because it’s the first item in the list, but if we want the ',' to be assigned to the variable delimiter, we do have to provide delimiter= for the second parameter since delimiter is not the second parameter in the list. <section class="challenge panel panel-success"> <div class="panel-heading"> <h2 id="combining-strings"><span class="fa fa-pencil"></span>Combining strings</h2> </div> <div class="panel-body"> <p>“Adding” two strings produces their concatenation: <code>'a' + 'b'</code> is <code>'ab'</code>. 
Write a function called <code>fence</code> that takes two parameters called <code>original</code> and <code>wrapper</code> and returns a new string that has the wrapper character at the beginning and end of the original. A call to your function should look like this:</p> <div class="sourceCode"><pre class="sourceCode python"><code class="sourceCode python"><span class="bu">print</span>(fence(<span class="st">'name'</span>, <span class="st">'*'</span>))</code></pre></div> <pre class="output"><code>&#42;name&#42;</code></pre> </div> </section> End of explanation
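One possible solution to the fence challenge, written as a sketch; the body below is my own, since the exercise itself only fixes the function name, its parameters and the expected output.

def fence(original, wrapper='#'):
    '''Return the original string with the wrapper character added at both ends.'''
    return wrapper + original + wrapper

print(fence('name', '*'))   # *name*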
1,952
Given the following text description, write Python code to implement the functionality described below step by step Description: Crib dragging following http Step1: To demonstrate the OTP, we can decrypt the CTs by XOR'ing with the key (k) Step2: The assignment asks us to examine what happens when we a space character ' ' with uppercase or lowercase letters Step3: So, if we XOR a space with a letter we'll get it's opposite case. Thus far we have several candidates, if we didn't know anything about the PTs we'd have to expand our search with each candidate. For brevity's sake I'll expand the search with knowledge of the PT. We know that all the letters in the PTs are lowercase so and that there are no 'q', 't', or 'f' characters so that narrows it down to 20 XOR 64 which produces 'D' Step4: In this demonstration I'll proceed with knowledge of the PT to expand the crib. Note that by XOR'ing with the crib we get the plaintext of the second message
Python Code: m1 = "hello world!!".encode('hex') m2 = "other message".encode('hex') key = "secretkey123!".encode('hex') print 'm1: {}\nm2: {}\nkey: {}'.format(m1, m2, key) print len(m1), len(m2), len(key) ct1 = hex(int(m1, 16) ^ int(key, 16))[2:-1] ct2 = hex(int(m2, 16) ^ int(key, 16))[2:-1] print 'ct1: {}\nct2: {}'.format(ct1, ct2) ctx = hex(int(ct1, 16) ^ int(ct2, 16))[2:-1] print 'ctx: {}'.format(ctx) Explanation: Crib dragging following http://travisdazell.blogspot.com/2012/11/many-time-pad-attack-crib-drag.html End of explanation print hex(int(ct1, 16) ^ int(key, 16))[2:-1].decode('hex') print hex(int(ct2, 16) ^ int(key, 16))[2:-1].decode('hex') import string space = ' '.encode('hex') def attack(crib, ctx): width = len(crib) print 'crib in hex: {}\ncrib width: {}\n------------'.format(crib, width) for i in range(0, len(ctx)): decoded = hex(int(crib, 16) ^ int(ctx[i:i+width], 16))[2:].decode('hex') if decoded.isalpha(): print "{}:{}\t".format(i, i+width), '{} XOR {}'.format(crib, ctx[i:i+width]), decoded attack(space, ctx) Explanation: To demonstrate the OTP, we can decrypt the CTs by XOR'ing with the key (k): $ c_{n} \oplus k = m_{n} $ End of explanation for i in string.uppercase[:10]: print hex(int(' '.encode('hex'), 16) ^ int(i.encode('hex'), 16))[2:].decode('hex') Explanation: The assignment asks us to examine what happens when we a space character ' ' with uppercase or lowercase letters: End of explanation crib = ' '.encode('hex') attack(crib, ctx) Explanation: So, if we XOR a space with a letter we'll get it's opposite case. Thus far we have several candidates, if we didn't know anything about the PTs we'd have to expand our search with each candidate. For brevity's sake I'll expand the search with knowledge of the PT. We know that all the letters in the PTs are lowercase so and that there are no 'q', 't', or 'f' characters so that narrows it down to 20 XOR 64 which produces 'D' End of explanation crib = 'World!!'.encode('hex') attack(crib, ctx) ct1 = 0x315c4eeaa8b5f8aaf9174145bf43e1784b8fa00dc71d885a804e5ee9fa40b16349c146fb778cdf2d3aff021dfff5b403b510d0d0455468aeb98622b137dae857553ccd8883a7bc37520e06e515d22c954eba5025b8cc57ee59418ce7dc6bc41556bdb36bbca3e8774301fbcaa3b83b220809560987815f65286764703de0f3d524400a19b159610b11ef3e ct2 = 0x234c02ecbbfbafa3ed18510abd11fa724fcda2018a1a8342cf064bbde548b12b07df44ba7191d9606ef4081ffde5ad46a5069d9f7f543bedb9c861bf29c7e205132eda9382b0bc2c5c4b45f919cf3a9f1cb74151f6d551f4480c82b2cb24cc5b028aa76eb7b4ab24171ab3cdadb8356f ct3 = 0x32510ba9a7b2bba9b8005d43a304b5714cc0bb0c8a34884dd91304b8ad40b62b07df44ba6e9d8a2368e51d04e0e7b207b70b9b8261112bacb6c866a232dfe257527dc29398f5f3251a0d47e503c66e935de81230b59b7afb5f41afa8d661cb ct4 = 0x32510ba9aab2a8a4fd06414fb517b5605cc0aa0dc91a8908c2064ba8ad5ea06a029056f47a8ad3306ef5021eafe1ac01a81197847a5c68a1b78769a37bc8f4575432c198ccb4ef63590256e305cd3a9544ee4160ead45aef520489e7da7d835402bca670bda8eb775200b8dabbba246b130f040d8ec6447e2c767f3d30ed81ea2e4c1404e1315a1010e7229be6636aaa ct5 = 0x3f561ba9adb4b6ebec54424ba317b564418fac0dd35f8c08d31a1fe9e24fe56808c213f17c81d9607cee021dafe1e001b21ade877a5e68bea88d61b93ac5ee0d562e8e9582f5ef375f0a4ae20ed86e935de81230b59b73fb4302cd95d770c65b40aaa065f2a5e33a5a0bb5dcaba43722130f042f8ec85b7c2070 ct6 = 
0x32510bfbacfbb9befd54415da243e1695ecabd58c519cd4bd2061bbde24eb76a19d84aba34d8de287be84d07e7e9a30ee714979c7e1123a8bd9822a33ecaf512472e8e8f8db3f9635c1949e640c621854eba0d79eccf52ff111284b4cc61d11902aebc66f2b2e436434eacc0aba938220b084800c2ca4e693522643573b2c4ce35050b0cf774201f0fe52ac9f26d71b6cf61a711cc229f77ace7aa88a2f19983122b11be87a59c355d25f8e4 ct7 = 0x32510bfbacfbb9befd54415da243e1695ecabd58c519cd4bd90f1fa6ea5ba47b01c909ba7696cf606ef40c04afe1ac0aa8148dd066592ded9f8774b529c7ea125d298e8883f5e9305f4b44f915cb2bd05af51373fd9b4af511039fa2d96f83414aaaf261bda2e97b170fb5cce2a53e675c154c0d9681596934777e2275b381ce2e40582afe67650b13e72287ff2270abcf73bb028932836fbdecfecee0a3b894473c1bbeb6b4913a536ce4f9b13f1efff71ea313c8661dd9a4ce ct8 = 0x315c4eeaa8b5f8bffd11155ea506b56041c6a00c8a08854dd21a4bbde54ce56801d943ba708b8a3574f40c00fff9e00fa1439fd0654327a3bfc860b92f89ee04132ecb9298f5fd2d5e4b45e40ecc3b9d59e9417df7c95bba410e9aa2ca24c5474da2f276baa3ac325918b2daada43d6712150441c2e04f6565517f317da9d3 ct9 = 0x271946f9bbb2aeadec111841a81abc300ecaa01bd8069d5cc91005e9fe4aad6e04d513e96d99de2569bc5e50eeeca709b50a8a987f4264edb6896fb537d0a716132ddc938fb0f836480e06ed0fcd6e9759f40462f9cf57f4564186a2c1778f1543efa270bda5e933421cbe88a4a52222190f471e9bd15f652b653b7071aec59a2705081ffe72651d08f822c9ed6d76e48b63ab15d0208573a7eef027 ct10 = 0x466d06ece998b7a2fb1d464fed2ced7641ddaa3cc31c9941cf110abbf409ed39598005b3399ccfafb61d0315fca0a314be138a9f32503bedac8067f03adbf3575c3b8edc9ba7f537530541ab0f9f3cd04ff50d66f1d559ba520e89a2cb2a83 target = 0x32510ba9babebbbefd001547a810e67149caee11d945cd7fc81a05e9f85aac650e9052ba6a8cd8257bf14d13e6f0a803b54fde9e77472dbff89d71b57bddef121336cb85ccb8f3315f4b52e301d16e9f52f904 Explanation: In this demonstration I'll proceed with knowledge of the PT to expand the crib. Note that by XOR'ing with the crib we get the plaintext of the second message: End of explanation
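The snippets above rely on Python 2 idioms such as str.encode('hex'), which no longer exist in Python 3. Below is a minimal Python 3 sketch of the two XOR facts that crib dragging exploits; the helper name xor_bytes and the sample strings are my own choices, not part of the original write-up.

def xor_bytes(a, b):
    # XOR two byte strings element-wise (truncates to the shorter of the two).
    return bytes(x ^ y for x, y in zip(a, b))

# XOR of a space with a letter flips its case, which is what makes crib dragging work.
print(xor_bytes(b' ', b'a'))   # b'A'
print(xor_bytes(b' ', b'A'))   # b'a'

# XOR of two ciphertexts under the same key equals XOR of the two plaintexts.
key = b'secretkey123!'
m1, m2 = b'hello world!!', b'other message'
c1, c2 = xor_bytes(m1, key), xor_bytes(m2, key)
assert xor_bytes(c1, c2) == xor_bytes(m1, m2)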
1,953
Given the following text description, write Python code to implement the functionality described below step by step Description: ALMA Cycle 0 https Step1: Creation of Dictionary We create the words necessary to fit a sparse coding model to the observed spectra in the previous created cube. It returns a DataFrame with a vector for each theoretical line for each isotope in molist Step3: Recalibration of Dictionary
Python Code: file_path = '../data/2011.0.00419.S/sg_ouss_id/group_ouss_id/member_ouss_2013-03-06_id/product/IRAS16547-4247_Jet_CH3OH7-6.clean.fits' noise_pixel = (15, 4) train_pixels = [(133, 135),(134, 135),(133, 136),(134, 136)] img = fits.open(file_path) meta = img[0].data hdr = img[0].header # V axis naxisv = hdr['NAXIS3'] onevpix = hdr['CDELT3']*0.000001 v0 = hdr['CRVAL3']*0.000001 v0pix = int(hdr['CRPIX3']) vaxis = onevpix * (np.arange(naxisv)+1-v0pix) + v0 values = meta[0, :, train_pixels[0][0], train_pixels[0][1]] - np.mean(meta[0, :, train_pixels[0][0], train_pixels[0][1]]) values = values/np.max(values) plt.plot(vaxis, values) plt.xlim(np.min(vaxis), np.max(vaxis)) plt.ylim(-1, 1) gca().xaxis.set_major_formatter(FormatStrFormatter('%d')) noise = meta[0, :, noise_pixel[0], noise_pixel[1]] - np.mean(meta[0, :, noise_pixel[0], noise_pixel[1]]) noise = noise/np.max(noise) plt.plot(vaxis, noise) plt.ylim(-1, 1) plt.xlim(np.min(vaxis), np.max(vaxis)) gca().xaxis.set_major_formatter(FormatStrFormatter('%d')) Explanation: ALMA Cycle 0 https://www.iram.fr/IRAMFR/ARC/documents/cycle0/ALMA_EarlyScience_Cycle0_HighestPriority.pdf Project 2011.0.00419.S End of explanation cube_params = { 'freq' : vaxis[naxisv/2], 'alpha' : 0, 'delta' : 0, 'spe_bw' : naxisv*onevpix, 'spe_res' : onevpix*v0pix, 's_f' : 8, 's_a' : 0} dictionary = gen_all_words(cube_params, True) Explanation: Creation of Dictionary We create the words necessary to fit a sparse coding model to the observed spectra in the previous created cube. It returns a DataFrame with a vector for each theoretical line for each isotope in molist End of explanation prediction = pd.DataFrame([]) for train_pixel in train_pixels: dictionary_recal, detected_peaks = recal_words(file_path, dictionary, cube_params, train_pixel, noise_pixel) X = get_values_filtered_normalized(file_path, train_pixel, cube_params) y_train = get_fortran_array(np.asmatrix(X)) dictionary_recal_fa = np.asfortranarray(dictionary_recal, dtype= np.double) lambda_param = 0 for idx in range(0, len(detected_peaks)): if detected_peaks[idx] != 0: lambda_param += 1 param = { 'lambda1' : lambda_param, # 'L': 1, 'pos' : True, 'mode' : 0, 'ols' : True, 'numThreads' : -1} alpha = spams.lasso(y_train, dictionary_recal_fa, **param).toarray() total = np.inner(dictionary_recal_fa, alpha.T) for i in range(0, len(alpha)): iso_col = dictionary_recal.columns[i] if(not prediction.columns.isin([iso_col]).any()): prediction[iso_col] = alpha[i] else: prediction[iso_col] = prediction[iso_col]*alpha[i] for p in prediction.columns: if(prediction[p][0] != 0): print(prediction[p]) latexify(8) # Step 1: Read Cube ax = plt.subplot(6, 1, 1) data = get_data_from_fits(file_path) y = data[0, :, train_pixel[0], train_pixel[1]] plt.xticks([]) plt.plot(vaxis, y) lines = get_lines_from_fits(file_path) for line in lines: # Shows lines really present isotope_frequency = int(line[1]) isotope_name = line[0] + "-f" + str(line[1]) plt.axvline(x=isotope_frequency, ymin=0, ymax= 3, color='g') # 2. Normalize, filter dada ax = plt.subplot(6, 1, 2) plt.ylim(ymin =0,ymax = 1.15) y = get_values_filtered_normalized(file_path, train_pixel, cube_params) plt.xticks([]) plt.plot(vaxis, y) # 3. Possible Words ax = plt.subplot(6, 1, 3) plt.ylim(ymin =0,ymax = 1.15) plt.xticks([]) plt.plot(vaxis, dictionary) # 4. 
Detect Lines ax = plt.subplot(6, 1, 4) plt.ylim(ymin =0,ymax = 1.15) plt.plot(vaxis, y) plt.xticks([]) plt.ylabel("Temperature") for idx in range(0, len(detected_peaks)): if detected_peaks[idx] != 0: plt.axvline(x=vaxis[idx], ymin=0, ymax= 1, color='r') # 6. Recalibrate Dictionary ax = plt.subplot(6, 1, 5) plt.ylim(ymin =0,ymax = 1.15) plt.plot(vaxis, dictionary_recal_fa) plt.xticks([]) # 6. Recover Signal ax = plt.subplot(6, 1, 6) plt.ylim(ymin =0,ymax = 1.15) plt.plot(vaxis, total) plt.xlabel("Frequency [MHz]") gca().xaxis.set_major_formatter(FormatStrFormatter('%d')) def latexify(fig_width=None, fig_height=None, columns=1): Set up matplotlib's RC params for LaTeX plotting. Call this before plotting a figure. Parameters ---------- fig_width : float, optional, inches fig_height : float, optional, inches columns : {1, 2} # code adapted from http://www.scipy.org/Cookbook/Matplotlib/LaTeX_Examples # Width and max height in inches for IEEE journals taken from # computer.org/cms/Computer.org/Journal%20templates/transactions_art_guide.pdf assert(columns in [1,2]) if fig_width is None: fig_width = 4.89 if columns==1 else 6.9 # width in inches if fig_height is None: golden_mean = (sqrt(5)-1.0)/2.0 # Aesthetic ratio fig_height = fig_width*golden_mean # height in inches MAX_HEIGHT_INCHES = 24.0 if fig_height > MAX_HEIGHT_INCHES: print("WARNING: fig_height too large:" + fig_height + "so will reduce to" + MAX_HEIGHT_INCHES + "inches.") fig_height = MAX_HEIGHT_INCHES params = {'backend': 'ps', 'text.latex.preamble': ['\usepackage{gensymb}'], 'axes.labelsize': 8, # fontsize for x and y labels (was 10) 'axes.titlesize': 8, 'text.fontsize': 8, # was 10 'legend.fontsize': 8, # was 10 'xtick.labelsize': 10, 'ytick.labelsize': 8, 'text.usetex': True, 'figure.figsize': [fig_width,fig_height], 'font.family': 'serif' } matplotlib.rcParams.update(params) def format_axes(ax): for spine in ['top', 'right']: ax.spines[spine].set_visible(False) for spine in ['left', 'bottom']: ax.spines[spine].set_color(SPINE_COLOR) ax.spines[spine].set_linewidth(0.5) ax.xaxis.set_ticks_position('bottom') ax.yaxis.set_ticks_position('left') for axis in [ax.xaxis, ax.yaxis]: axis.set_tick_params(direction='out', color=SPINE_COLOR) return ax for i in range(0, len((alpha > 0))): if((alpha > 0)[i]): print(dictionary_recal.columns[i]) print(prediction) for i in range(0, len(dictionary.index)): print(calculate_probability(alpha, dictionary.index[i], dictionary_recal)) print(dictionary.index[i]) Explanation: Recalibration of Dictionary End of explanation
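As a rough alternative to the spams.lasso call above, the same kind of sparse, non-negative fit can be sketched with scikit-learn. Everything below (the toy dictionary D, the synthetic spectrum y, the alpha penalty) is made up for illustration and does not reproduce the 'lambda1' / 'mode': 0 parameterization used in the notebook.

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
D = np.abs(rng.normal(size=(100, 8)))   # stand-in dictionary: one column per candidate line
y = 0.7 * D[:, 2] + 0.3 * D[:, 5]       # synthetic spectrum built from two of the columns

model = Lasso(alpha=1e-3, positive=True, max_iter=10000)  # positive=True mirrors 'pos': True
model.fit(D, y)
coeffs = model.coef_                     # sparse, non-negative activation per dictionary word
print(np.nonzero(coeffs)[0])             # indices of the words the fit considers present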
1,954
Given the following text description, write Python code to implement the functionality described below step by step Description: 得益于 Python 数据模型,自定义类型行为可以像内置类型那样自然。实现如此自然的行为,靠的不是继承,而是鸭子类型,我们只需要按照预定行为实现对象所需方法即可 这一章我们定义自己的类,而且让类的行为跟真正的 Python 对象一样,这一章延续第一章,说明如何实现在很多 Python 类型中常见的特殊方法。 本章包含以下话题: 支持用于生成对象其他表示形式的内置函数(如 repr(), bytes() 等等) 使用一个类方法实现备选构造方法 扩展内置的 format() 函数和 str.format() 方法使用的格式微语言 实现只读属性 把对象变成可散列的,以便在集合中及作为 dict 的键使用 利用 __slots__ 节省内存 我们将开发一个二维欧几里得向量模型,这个过程中覆盖上面所有话题。这个过程中我们会讨论两个概念 如何以及何时利用 @classmethod 和 @staticmethod 装饰器 Python 的私有属性和受保护属性的用法,约定和局限 对象表示形式 每门面向对象的语言至少都有一种获取对象的字符串表示形式的标准方式。Python 提供了两种方式 repr() Step1: 上面的 __eq__ 方法,在两个操作数都是 Vector2d 的时候可用,不过拿 Vector2d 实例和其他具有相同数值的可迭代对象相比,结果也是 True(如 Vector2d(3, 4) == [3, 4])。这个行为可以视为特性,也可以视为缺陷在第 13 章运算符重载时候进一步讨论 我们已经定义了许多基本方法,但是显然少了一个操作:使用 bytes() 函数生成的二进制重建 Vecotr2d 实例 备选构造方法 我们可以把 Vector2d 实例转成字节序列;同理,也应该能从字节序列转换成 Vector2d 实例。在标准库中探索一番之后,我们发现 array.array 有个类方法 .frombytes(2.91 章介绍过,从文件读取数据) 正好符合需求。下面为 Vector2d 定义一个同名的类方法 Step2: 我们上面用的 classmethod 装饰器是 Python 专用的,下面解释一下 classmethod 和 staticmethod 我们来看一下 classmethod 装饰器,上面已经展示了它的用法,定义操作类,而不是操作实例的方法。classmethod 改变了调用方法的方式,因此类方法的第一个参数是类本身,而不是实例。classmethod 最常用的用途是定义备选构造方法。例如上面的 frombytes,注意,frombytes 最后一行使用 cls 参数构建了一个新实例,即 cls(*memv),按照约定,类方法的第一个参数为 cls(但不是强制的) staticmethod 装饰器也会改变方法的调用方式,但是第一个参数不是特殊值。其实,静态方法就是普通的函数,只是碰巧在类的定义体中,而不是在模块层定义。下面对这两种装饰器行为做了对比: Step3: 不管怎么调用 class.klassmeth,它的第一个参数都是 Demo 类。而 Demo.statmeth 的行为和普通函数类似。一般情况下,都是使用 classmethod,staticmethod 不是特别有用 格式化显示 内置的 format() 函数和 str.format() 方法把各个类型的格式化方式委托给相应的 .__format__(format_spec) 方法。format_spec 是格式说明符,它是: format(my_obj, format_spec) 的第二个参数 str.format() 方法的格式字符串,{} 里代换字段中冒号后面的部分 Step4: '{0.mass Step5: 格式规范微语言是可扩展的,因为各个类可以自行决定如何解释 format_spec 参数。例如,datetime 模块中的类,它们的 __format__ 方法使用的格式代码与 strftime() 函数一样。下面是内置的 format() 函数和 str.format() 方法的几个示例: Step6: 如果类没有定义 __format__ 方法,从 object 继承的方法会返回 str(my_object)。我们为 Vector2d 类定义了 __str__ 方法,因此可以这样: Step7: 然而,传入格式说明符,object.__format__ 方法会抛出 TypeError Step8: 我们将实现自己的微语言解决这个问题。首先,假设用户提供的格式说明符是用于格式化向量中各个浮点数分量的。我们想达到的效果如下: Step9: 实现这种输出的 __format__ 方法是: Step10: 下面要在微语言添加一个自定义格式的代码:如果格式说明以 'p' 为结尾,那么在极坐标中显示向量,即 <r, theta>,其中 r 是 模,theta 是弧度,其他部分('p' 之前的部分像往常一样解释) 注意自定义的格式代码不要和已有的重复,整数使用的有 'bcdoxXn', 浮点数使用的有 'eEfFgGn%', 字符串有的是 's'。所以我们选的极坐标代码是 'p',各个类都有自己的方式解释格式代码,自定义格式代码中重复使用代码字母不会出错,但是可能会让用户迷惑。 对于极坐标来说,我们已经定义了计算模的 __abs__ 方法,因此还要定义一个简单的 angle 方法,使用 math.atan2() 函数计算角度。angle 方法的代码如下: Step11: 这样方便增强 __format__ 方法,现在我们可以让其计算极坐标: Step12: 可散列的 Vector2d 按照定义,我们现在的 Vector2d 是不可散列的,因此不能放入集合 (set) 中: Step13: 为了将其设成可散列的,必须使用 __hash__ 方法(还需要 __eq__ 方法,前面实现了)。此外,还要让向量不可变。 目前我们可以为分量赋新值,如 v1.x = 7,我们应该让其无法赋值。 Step14: 注意,我们让这些向量不可变是有原因的,因为这样才能实现 __hash__ 方法。这个方法应该返回一个整数,理想情况下还要考虑对象属性的散列值(__eq__ 方法也要使用),因为相等的对象应该具有相同的散列值。根据官方文档,最好使用 异或(^)来混合各个分量的散列值 Step15: 如果创建可散列的类型,不一定实现特性,也不一定要保护实例属性。只需要正确的实现 __hash__ 和 __eq__ 方法即可。但是,实例的散列值绝不应该变化,因此我们借机提到了只读属性。 如果定义的类型有标量数值,可能还要实现 __int__ 和 __float__ 方法(分别被 int() 和 float() 构造函数调用),以便在某些情况下用于强制类型转换,此外,还有用于支持内置的 complex() 构造函数的 __complex__ 方法。 下面是完整的代码: Step16: 测试: Step17: Python 的私有属性和 “受保护” 的属性 Python 不能像 Java 一样使用 private 修饰符创建私有属性,但是 Python 有个简单的机制,能避免子类意外覆盖 “私有” 属性 举个例子,如果有人编写了一个 Dog 类,有一个 mood 实例属性,但是没有开放,你创建了 Dog 子类,你在毫不知情的情况下又创建了 mood 的实例属性,那么会在继承方法中会把 Dog 类的 mood 属性覆盖掉。这是难以调试的问题 为了避免这种情况,如果以 __mood 的形式(两个前导下划线,尾部没有或最多有一个前导下划线)命名实例属性,Python 会把属性名存到实例的 __dict__ 属性中,而且会在前面加上一个下划线和类名。因此,对于 Dog 类来说,__mood 会变成 _Dog__mood,对于 Beagle 类来说,会变成 _Beagle__mood。这个语言特性叫名称改写 Step18: 名称改写是一种安全措施,不能保证万无一失,它的目的是避免意外访问,不能防止故意做错事。 但不是所有 Python 
程序员都喜欢这个功能,他们不喜欢两个下划线的这种特别不对称的写法,所以约定使用一个下划线编写受保护的属性(例如 self.x)。Python 不会对单下划线进行特殊处理,不过这是很多 Python 程序员严格遵守的约定,他们不会在外部访问这种属性。 Python 文档的某些角落将使用一个下划线前缀标记为受保护的属性。 总之,Vector2d 的分量都是 “私有的”,而 Vector2d 实例都是 “不可变” 的。这里用了两对引号是因为不能真正实现私有和不可变 使用 __slots__ 类属性节省空间 默认情况下,Python 在各个实例名为 __dict__ 字典里存储实例属性,如第三章所说,为了使用底层的散列表提升访问速度,字典会消耗大量内存,通过 __slots__ 类属性,能节省大量内存,方法是让解释器在元祖中存储实例属性,而不用字典 继承自超类的 __slots__ 属性没有效果。Python 只会使用各个类中定义的 __slots__ 属性。 定义 __slots__ 的方式是,创建一个类属性,使用 __slots__ 这个名字,并把它的值设为一个字符串构成的可迭代对象,其中各个元素表示各个实例属性。我喜欢使用元组,因为这样定义的 __slots__ 所含的信息不会变化,如下所示: Step19: 在类中定义 __slots__ 属性的目的是告诉解释器:这个类中的所有实例属性都在这了,这样,Python 会在各个实例中使用类似元祖的结构存储实例变量,从而避免使用消耗内存的 __dict__ 属性。如果有数百万个实例同时活动,这样能大量节省内存 如果要处理数百万个数值对象,应该使用 Numpy 数组,它能高效使用内存,而且提供了高度优化的数值处理函数,其中很多都一次操作整个数组 然而,节省的内存也可能被再次吃掉,如果把 __dict__ 这个名称添加到 __slots__ 中,实例会在元祖中保存各个实例的属性。此外还支持动态创建属性,这些属性存储在常规的 __dict__ 中。当然,把 __dict__ 加到 __slots__ 中可能完全违背了初衷,这取决于各个实例的静态属性和动态属性的数量及其用法。粗心的优化甚至比提早优化还糟糕 此外,还有一个实例属性值得注意,__weakref__ 属性,为了让对象支持弱引用,必须有这个属性,用户定义的类中默认就有 __weakref__ 属性。可是如果类中定义了 __slots__ 属性,而且想把实例作为弱引用的目标,就要把 __weakref__ 添加到 __slots__ 中。 综上,__slots__ 属性有些需要注意的地方,而且不能滥用。不能使用它来限制用户能赋值的属性。处理列表中数据时,__slots__ 属性最有用,例如模式固定的数据库记录,大型数据集。然而,如果你经常处理大量数据,一定要看一下 Numpy,此外 Pandas 也值得了解 __slots__ 的问题 总之,如果使用得当,__slots__ 能显著节省内存,不过有几点需要注意。 每个子类都要定义 __slots__ 属性,因为解释器会忽略继承的 __slots__ 属性 每个实例只能拥有 __slots__ 列出来的属性,除非把 __dict__ 加入 __slots__ 中(这样做失去了节省内存的功效) 如果不把 __weakref__ 加入 __slots__,实例就不能作为弱引用的目标 如果你的程序不用处理百万个实例,或许不值得费劲去创建不寻常的类,那就禁止它创建动态属性或者不支持弱引用。与其它优化措施一样,仅当权衡当下的需求并仔细搜集资料后证明确实有必要使用,才使用 __slots__ 属性 覆盖类属性 Python 有一个很独特的特性,类属性可用于为实例属性提供默认值。Vector2d 中有个 typecode 类属性,__bytes__ 方法两次用到了它,而且都故意使用 self.typecode 读取它的值,因为 Vector2d 实例本身没有 typecode 属性,所以 self.typecode 默认获取的是 Vector2d.typecode 的值 但是,如果为不存在的实例属性赋值,会新建实例属性。假如我们为 typecode 实例属性赋值,那么同名类属性不受影响。然而,自此以后,实例读取的 self.typecode 是实例属性 typecode,也就是把同名类属性遮盖了。借助这一特性,可以为各个实例的 typecode 属性定制不同的值。 Vector2d.typecode 属性默认值是 'd',即转换成字节序列时使用 8 字节双精度浮点数表示各个分量。如果在转换之前把 typecode 属性改成 'f',那么使用 4 字节单精度浮点数表示各个分量。 Step20: 这就是我们为什么要在字节序列之前加上 typecode 的值的原因:为了支持不同格式 如果想修改类属性的值,必须直接在类上修改,不能通过实例修改。如果想修改所有实例(没有 typecode 实例变量)的 typecode 属性的默认值,可以这么做: Step21: 然而,有种修改方法更符合 Python 风格,而且效果更持久,也更有针对性。类属性是公开的,因此会被子类继承,于是经常会创建一个子类,只用于定制类的属性。Django 基于类的视图就大量使用了这种技术。就像下面这样:
Python Code: v1 = Vector2d(3, 4) print(v1.x, v1.y) # 可以直接通过属性访问 x, y = v1 # 可以拆包成元祖 x, y v1 v1_clone = eval(repr(v1)) # repr 函数调用 Vector2d 实例,结果类似于构建实例的源码 v1 == v1_clone # 支持 == 比较 print(v1) # 会调用 str 函数,对 Vector2d 来说,输出的是一个有序对 octets = bytes(v1) # 调用 __bytes__ 方法,生成实例的二进制表示形式 octets abs(v1) # 会调用 __abs__ 方法,返回 Vector2d 实例的模 bool(v1), bool(Vector2d(0, 0)) # 会调用 __bool__ 方法,判断 Vector2d 的实例的向量长度 from array import array import math class Vector2d: typecode = 'd' # 类属性,Vector2d 实例和字节序之间转换使用 def __init__(self, x, y): self.x = float(x) # 转换成浮点数,尽早捕捉错误,防止传入不当参数 self.y = float(y) def __iter__(self): return (i for i in (self.x, self.y)) # 将 Vector2d 变成可迭代对象,这样才可以拆包 def __repr__(self): class_name = type(self).__name__ # {!r} 获取各个分量的表示形式,然后插值,构成一个字符串。因为 Vector2d 是可迭代对象,所以用 *self 会把 x 和 y 分量提供给 format 函数 return '{}({!r},{!r})'.format(class_name, *self) def __str__(self): return str(tuple(self)) def __bytes__(self): # 我们使用 typecode 转换成字节序列然后返回 return (bytes([ord(self.typecode)]) + bytes(array(self.typecode, self))) def __eq__(self, other): #为了快速比较所有分量,在操作数中构建元祖,对 Vector2d 实例来说,这样做还有问题,看下面的警告 return tuple(self) == tuple(other) def __abs__(self): return math.hypot(self.x, self.y) def __bool__(self): return bool(abs(self)) Explanation: 得益于 Python 数据模型,自定义类型行为可以像内置类型那样自然。实现如此自然的行为,靠的不是继承,而是鸭子类型,我们只需要按照预定行为实现对象所需方法即可 这一章我们定义自己的类,而且让类的行为跟真正的 Python 对象一样,这一章延续第一章,说明如何实现在很多 Python 类型中常见的特殊方法。 本章包含以下话题: 支持用于生成对象其他表示形式的内置函数(如 repr(), bytes() 等等) 使用一个类方法实现备选构造方法 扩展内置的 format() 函数和 str.format() 方法使用的格式微语言 实现只读属性 把对象变成可散列的,以便在集合中及作为 dict 的键使用 利用 __slots__ 节省内存 我们将开发一个二维欧几里得向量模型,这个过程中覆盖上面所有话题。这个过程中我们会讨论两个概念 如何以及何时利用 @classmethod 和 @staticmethod 装饰器 Python 的私有属性和受保护属性的用法,约定和局限 对象表示形式 每门面向对象的语言至少都有一种获取对象的字符串表示形式的标准方式。Python 提供了两种方式 repr(): 便于开发者理解的方式返回对象的字符串表示形式 str(): 便于用户理解的方式返回对象的字符串表示形式 为了给对象提供其他的表现形式,还会用到两个特殊的方法, __bytes__ 和 __format__。__bytes__ 方法与 __str__方法类似:bytes() 函数调用它获取对象的字节序列表示形式。而 __format__ 方法会被内置的 format() 和 str.format() 调用。使用特殊的格式代码显示对象的字符串表示形式。 注意:Python3 中 __repr__, __str__, __format__ 方法都必须返回 Unicode 字符串(str)类型。只有 __bytes__ 方法应该返回字节序列(bytes 类型) 再谈向量类 为了说明用于生成对象表示形式的众多方法,我们将使用一个 Vector2d 类,与第一章的类似。这几节会不断完善这个类,我们期望这个类行为如下所示: End of explanation from array import array import math class Vector2d: typecode = 'd' def __init__(self, x, y): self.x = float(x) self.y = float(y) @classmethod # 类方法使用 @classmethod 装饰器 def frombytes(cls, octets): # 不用传入 self 参数,相反,要通过 cls 传入类本身 typecode = chr(octets[0]) #从第一个字节中读取 typecode # 用传入的 octets 字节序列创建一个 memoryview,然后使用 typecode 转换 memv = memoryview(octets[1:]).cast(typecode) # 2.92 章介绍了 cast 方法,将一段内存转换成指定的类型,d 代表 float return cls(*memv) #拆包转换后的 memoryview,得到构造方法所需的一对参数 def __iter__(self): return (i for i in (self.x, self.y)) def __repr__(self): class_name = type(self).__name__ return '{}({!r},{!r})'.format(class_name, *self) def __str__(self): return str(tuple(self)) def __bytes__(self): return (bytes([ord(self.typecode)]) + bytes(array(self.typecode, self))) def __eq__(self, other): return tuple(self) == tuple(other) def __abs__(self): return math.hypot(self.x, self.y) def __bool__(self): return bool(abs(self)) v1 = Vector2d(3, 4) octets = bytes(v1) print(octets) v2 = Vector2d.frombytes(octets) v2 Explanation: 上面的 __eq__ 方法,在两个操作数都是 Vector2d 的时候可用,不过拿 Vector2d 实例和其他具有相同数值的可迭代对象相比,结果也是 True(如 Vector2d(3, 4) == [3, 4])。这个行为可以视为特性,也可以视为缺陷在第 13 章运算符重载时候进一步讨论 我们已经定义了许多基本方法,但是显然少了一个操作:使用 bytes() 函数生成的二进制重建 Vecotr2d 实例 备选构造方法 我们可以把 Vector2d 实例转成字节序列;同理,也应该能从字节序列转换成 Vector2d 实例。在标准库中探索一番之后,我们发现 array.array 有个类方法 .frombytes(2.91 章介绍过,从文件读取数据) 正好符合需求。下面为 Vector2d 定义一个同名的类方法 End 
of explanation class Demo: @classmethod def klassmeth(*args): return args @staticmethod def statmeth(*args): return args Demo.klassmeth() Demo.klassmeth('kaka') Demo.statmeth() Demo.statmeth('kaka') Explanation: 我们上面用的 classmethod 装饰器是 Python 专用的,下面解释一下 classmethod 和 staticmethod 我们来看一下 classmethod 装饰器,上面已经展示了它的用法,定义操作类,而不是操作实例的方法。classmethod 改变了调用方法的方式,因此类方法的第一个参数是类本身,而不是实例。classmethod 最常用的用途是定义备选构造方法。例如上面的 frombytes,注意,frombytes 最后一行使用 cls 参数构建了一个新实例,即 cls(*memv),按照约定,类方法的第一个参数为 cls(但不是强制的) staticmethod 装饰器也会改变方法的调用方式,但是第一个参数不是特殊值。其实,静态方法就是普通的函数,只是碰巧在类的定义体中,而不是在模块层定义。下面对这两种装饰器行为做了对比: End of explanation brl = 1 / 2.43 brl format(brl, '0.4f') '1 BRL = {rate:0.2f} USD'.format(rate = brl) Explanation: 不管怎么调用 class.klassmeth,它的第一个参数都是 Demo 类。而 Demo.statmeth 的行为和普通函数类似。一般情况下,都是使用 classmethod,staticmethod 不是特别有用 格式化显示 内置的 format() 函数和 str.format() 方法把各个类型的格式化方式委托给相应的 .__format__(format_spec) 方法。format_spec 是格式说明符,它是: format(my_obj, format_spec) 的第二个参数 str.format() 方法的格式字符串,{} 里代换字段中冒号后面的部分 End of explanation format(42, 'b') format(2 / 3, '.1%') Explanation: '{0.mass:5.3e}' 这样的格式字符串其实包含两部分,冒号左边的 0.mass 在代换字段语法中是字段名,冒号后面的 5.3e 是格式说明符。如果对这些陌生的话,先学 format() 函数,掌握格式规范微语言,然后再阅读格式字符串语法("Format String Syntax",https://docs.python.org/3/library/string.html#formatspec)。学习 str.format() 方法使用的 {:} 代换字段表示法(包含转换标志 !s, !r, !a) 格式规范微语言为一些内置类型提供了专用的表示代码。例如,b 和 x 分别代表 二进制和十六进制的 int 类型。f 表示 float 类型,而 % 表示百分数形式 End of explanation from datetime import datetime now = datetime.now() format(now, '%H:%M:%s') "It's now {:%I:%M %p}".format(now) Explanation: 格式规范微语言是可扩展的,因为各个类可以自行决定如何解释 format_spec 参数。例如,datetime 模块中的类,它们的 __format__ 方法使用的格式代码与 strftime() 函数一样。下面是内置的 format() 函数和 str.format() 方法的几个示例: End of explanation v1 = Vector2d(3, 4) format(v1) Explanation: 如果类没有定义 __format__ 方法,从 object 继承的方法会返回 str(my_object)。我们为 Vector2d 类定义了 __str__ 方法,因此可以这样: End of explanation format(v1, '.3f') Explanation: 然而,传入格式说明符,object.__format__ 方法会抛出 TypeError End of explanation v1 = Vector2d(3, 4) format(v1) format(v1, '.2f') format(v1, '3e') Explanation: 我们将实现自己的微语言解决这个问题。首先,假设用户提供的格式说明符是用于格式化向量中各个浮点数分量的。我们想达到的效果如下: End of explanation def __format__(self, fmt_spec=''): components = (format(c, fmt_spec) for c in self) return '({}, {})'.format(*components) Explanation: 实现这种输出的 __format__ 方法是: End of explanation def angle(self): return math.atan2(self.y, self.x) Explanation: 下面要在微语言添加一个自定义格式的代码:如果格式说明以 'p' 为结尾,那么在极坐标中显示向量,即 <r, theta>,其中 r 是 模,theta 是弧度,其他部分('p' 之前的部分像往常一样解释) 注意自定义的格式代码不要和已有的重复,整数使用的有 'bcdoxXn', 浮点数使用的有 'eEfFgGn%', 字符串有的是 's'。所以我们选的极坐标代码是 'p',各个类都有自己的方式解释格式代码,自定义格式代码中重复使用代码字母不会出错,但是可能会让用户迷惑。 对于极坐标来说,我们已经定义了计算模的 __abs__ 方法,因此还要定义一个简单的 angle 方法,使用 math.atan2() 函数计算角度。angle 方法的代码如下: End of explanation def __format__(self, fmt_spec=''): if fmt_spec.endswith('p'): fmt_spec = fmt_spec[:-1] # 删除 p 后缀 coords = (abs(self), self.angle()) # 构建元组,表示极坐标 outer_fmt = '<{}, {}>' else: coords = self outer_fmt = '({}, {})' components = (format(c, fmt_spec) for c in coords) return outer_fmt.format(*components) Explanation: 这样方便增强 __format__ 方法,现在我们可以让其计算极坐标: End of explanation v1 = Vector2d(3, 4) hash(v1) Explanation: 可散列的 Vector2d 按照定义,我们现在的 Vector2d 是不可散列的,因此不能放入集合 (set) 中: End of explanation class Vector2d: typecode = 'd' def __init__(self, x, y): self.__x = float(x) # 有两个前导下划线,或一个,把属性标记为私有的 self.__y = float(y) @property # property 把读值方法标记为特性 def x(self): # 读值方法与公开属性名一样,x return self.__x @property def y(self): return self.__y def __iter__(self): return (i for i in (self.x, self.y)) # ...下面省略了。 Explanation: 为了将其设成可散列的,必须使用 __hash__ 
方法(还需要 __eq__ 方法,前面实现了)。此外,还要让向量不可变。 目前我们可以为分量赋新值,如 v1.x = 7,我们应该让其无法赋值。 End of explanation def __hash__(self): return hash(self.x) ^ hash(self.y) Explanation: 注意,我们让这些向量不可变是有原因的,因为这样才能实现 __hash__ 方法。这个方法应该返回一个整数,理想情况下还要考虑对象属性的散列值(__eq__ 方法也要使用),因为相等的对象应该具有相同的散列值。根据官方文档,最好使用 异或(^)来混合各个分量的散列值 End of explanation from array import array import math class Vector2d: typecode = 'd' def __init__(self, x, y): self.__x = float(x) self.__y = float(y) @property # property 把读值方法标记为特性 def x(self): # 读值方法与公开属性名一样,x return self.__x @property def y(self): return self.__y def __iter__(self): return (i for i in (self.x, self.y)) def __repr__(self): class_name = type(self).__name__ return '{}({!r},{!r})'.format(class_name, *self) def __str__(self): return str(tuple(self)) def __bytes__(self): return (bytes([ord(self.typecode)]) + bytes(array(self.typecode, self))) def __eq__(self, other): return tuple(self) == tuple(other) def __hash__(self): return hash(self.x) ^ hash(self.y) def __abs__(self): return math.hypot(self.x, self.y) # 返回欧几里德范数 sqrt(x*x + y*y)。 def __bool__(self): return bool(abs(self)) def angle(self): return math.atan2(self.y, self.x) def __format__(self, fmt_spec=''): if fmt_spec.endswith('p'): fmt_spec = fmt_spec[:-1] # 删除 p 后缀 coords = (abs(self), self.angle()) # 构建元组,表示极坐标 outer_fmt = '<{}, {}>' else: coords = self outer_fmt = '({}, {})' components = (format(c, fmt_spec) for c in coords) return outer_fmt.format(*components) @classmethod # 类方法使用 @classmethod 装饰器 def frombytes(cls, octets): # 不用传入 self 参数,相反,要通过 cls 传入类本身 typecode = chr(octets[0]) #从第一个字节中读取 typecode # 用传入的 octets 字节序列创建一个 memoryview,然后使用 typecode 转换 memv = memoryview(octets[1:]).cast(typecode) # 2.92 章介绍了 cast 方法,将一段内存转换成指定的类型,d 代表 float return cls(*memv) #拆包转换后的 memoryview,得到构造方法所需的一对参数 Explanation: 如果创建可散列的类型,不一定实现特性,也不一定要保护实例属性。只需要正确的实现 __hash__ 和 __eq__ 方法即可。但是,实例的散列值绝不应该变化,因此我们借机提到了只读属性。 如果定义的类型有标量数值,可能还要实现 __int__ 和 __float__ 方法(分别被 int() 和 float() 构造函数调用),以便在某些情况下用于强制类型转换,此外,还有用于支持内置的 complex() 构造函数的 __complex__ 方法。 下面是完整的代码: End of explanation v1 = Vector2d(3, 4) print(v1.x, v1.y) x, y = v1 x, y v1 v1_clone = eval(repr(v1)) v1 == v1_clone print(v1) octets = bytes(v1) octets abs(v1) bool(v1), bool(Vector2d(0, 0)) v1_clone = Vector2d.frombytes(bytes(v1)) v1_clone v1 == v1_clone format(v1) format(v1, '.2f') format(v1, '.3e') Vector2d(0, 0).angle() Vector2d(1, 0).angle() epsilon = 10 ** -8 abs(Vector2d(0, 1).angle() - math.pi / 2) < epsilon abs(Vector2d(1, 1).angle() - math.pi / 4) < epsilon format(Vector2d(1, 1), 'p') format(Vector2d(1, 1), '.3ep') format(Vector2d(1, 1), '.5fp') v1.x, v1.y v1.x = 123 v1 = Vector2d(3, 4) v2 = Vector2d(3.1, 4.2) hash(v1), hash(v2) len(set([v1, v2])) Explanation: 测试: End of explanation v1 = Vector2d(3, 4) v1.__dict__ Explanation: Python 的私有属性和 “受保护” 的属性 Python 不能像 Java 一样使用 private 修饰符创建私有属性,但是 Python 有个简单的机制,能避免子类意外覆盖 “私有” 属性 举个例子,如果有人编写了一个 Dog 类,有一个 mood 实例属性,但是没有开放,你创建了 Dog 子类,你在毫不知情的情况下又创建了 mood 的实例属性,那么会在继承方法中会把 Dog 类的 mood 属性覆盖掉。这是难以调试的问题 为了避免这种情况,如果以 __mood 的形式(两个前导下划线,尾部没有或最多有一个前导下划线)命名实例属性,Python 会把属性名存到实例的 __dict__ 属性中,而且会在前面加上一个下划线和类名。因此,对于 Dog 类来说,__mood 会变成 _Dog__mood,对于 Beagle 类来说,会变成 _Beagle__mood。这个语言特性叫名称改写 End of explanation class Vector2d: __slots__ = ('__x', '__y') # 下面是各个方法,省略 Explanation: 名称改写是一种安全措施,不能保证万无一失,它的目的是避免意外访问,不能防止故意做错事。 但不是所有 Python 程序员都喜欢这个功能,他们不喜欢两个下划线的这种特别不对称的写法,所以约定使用一个下划线编写受保护的属性(例如 self.x)。Python 不会对单下划线进行特殊处理,不过这是很多 Python 程序员严格遵守的约定,他们不会在外部访问这种属性。 Python 文档的某些角落将使用一个下划线前缀标记为受保护的属性。 总之,Vector2d 的分量都是 “私有的”,而 Vector2d 实例都是 “不可变” 
的。这里用了两对引号是因为不能真正实现私有和不可变 使用 __slots__ 类属性节省空间 默认情况下,Python 在各个实例名为 __dict__ 字典里存储实例属性,如第三章所说,为了使用底层的散列表提升访问速度,字典会消耗大量内存,通过 __slots__ 类属性,能节省大量内存,方法是让解释器在元祖中存储实例属性,而不用字典 继承自超类的 __slots__ 属性没有效果。Python 只会使用各个类中定义的 __slots__ 属性。 定义 __slots__ 的方式是,创建一个类属性,使用 __slots__ 这个名字,并把它的值设为一个字符串构成的可迭代对象,其中各个元素表示各个实例属性。我喜欢使用元组,因为这样定义的 __slots__ 所含的信息不会变化,如下所示: End of explanation v1 = Vector2d(1.1, 2.2) dumpd = bytes(v1) dumpd len(dumpd) v1.typecode = 'f' dumpf = bytes(v1) dumpf len(dumpf) Explanation: 在类中定义 __slots__ 属性的目的是告诉解释器:这个类中的所有实例属性都在这了,这样,Python 会在各个实例中使用类似元祖的结构存储实例变量,从而避免使用消耗内存的 __dict__ 属性。如果有数百万个实例同时活动,这样能大量节省内存 如果要处理数百万个数值对象,应该使用 Numpy 数组,它能高效使用内存,而且提供了高度优化的数值处理函数,其中很多都一次操作整个数组 然而,节省的内存也可能被再次吃掉,如果把 __dict__ 这个名称添加到 __slots__ 中,实例会在元祖中保存各个实例的属性。此外还支持动态创建属性,这些属性存储在常规的 __dict__ 中。当然,把 __dict__ 加到 __slots__ 中可能完全违背了初衷,这取决于各个实例的静态属性和动态属性的数量及其用法。粗心的优化甚至比提早优化还糟糕 此外,还有一个实例属性值得注意,__weakref__ 属性,为了让对象支持弱引用,必须有这个属性,用户定义的类中默认就有 __weakref__ 属性。可是如果类中定义了 __slots__ 属性,而且想把实例作为弱引用的目标,就要把 __weakref__ 添加到 __slots__ 中。 综上,__slots__ 属性有些需要注意的地方,而且不能滥用。不能使用它来限制用户能赋值的属性。处理列表中数据时,__slots__ 属性最有用,例如模式固定的数据库记录,大型数据集。然而,如果你经常处理大量数据,一定要看一下 Numpy,此外 Pandas 也值得了解 __slots__ 的问题 总之,如果使用得当,__slots__ 能显著节省内存,不过有几点需要注意。 每个子类都要定义 __slots__ 属性,因为解释器会忽略继承的 __slots__ 属性 每个实例只能拥有 __slots__ 列出来的属性,除非把 __dict__ 加入 __slots__ 中(这样做失去了节省内存的功效) 如果不把 __weakref__ 加入 __slots__,实例就不能作为弱引用的目标 如果你的程序不用处理百万个实例,或许不值得费劲去创建不寻常的类,那就禁止它创建动态属性或者不支持弱引用。与其它优化措施一样,仅当权衡当下的需求并仔细搜集资料后证明确实有必要使用,才使用 __slots__ 属性 覆盖类属性 Python 有一个很独特的特性,类属性可用于为实例属性提供默认值。Vector2d 中有个 typecode 类属性,__bytes__ 方法两次用到了它,而且都故意使用 self.typecode 读取它的值,因为 Vector2d 实例本身没有 typecode 属性,所以 self.typecode 默认获取的是 Vector2d.typecode 的值 但是,如果为不存在的实例属性赋值,会新建实例属性。假如我们为 typecode 实例属性赋值,那么同名类属性不受影响。然而,自此以后,实例读取的 self.typecode 是实例属性 typecode,也就是把同名类属性遮盖了。借助这一特性,可以为各个实例的 typecode 属性定制不同的值。 Vector2d.typecode 属性默认值是 'd',即转换成字节序列时使用 8 字节双精度浮点数表示各个分量。如果在转换之前把 typecode 属性改成 'f',那么使用 4 字节单精度浮点数表示各个分量。 End of explanation Vector2d.typecode = 'f' Explanation: 这就是我们为什么要在字节序列之前加上 typecode 的值的原因:为了支持不同格式 如果想修改类属性的值,必须直接在类上修改,不能通过实例修改。如果想修改所有实例(没有 typecode 实例变量)的 typecode 属性的默认值,可以这么做: End of explanation class ShortVector2d(Vector2d): typecode = 'f' sv = ShortVector2d(1 / 11, 1 / 27) sv # 查看 sv 的 repr 形式 len(bytes(sv)) Explanation: 然而,有种修改方法更符合 Python 风格,而且效果更持久,也更有针对性。类属性是公开的,因此会被子类继承,于是经常会创建一个子类,只用于定制类的属性。Django 基于类的视图就大量使用了这种技术。就像下面这样: End of explanation
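A short English-language sketch of the __slots__ behaviour described above; the Pixel class is a toy example of my own, not from the chapter.

class Pixel:
    __slots__ = ('x', 'y')          # instances get no __dict__, only these two attributes

p = Pixel()
p.x, p.y = 3, 4
try:
    p.color = 'red'                 # not listed in __slots__, so this raises AttributeError
except AttributeError as e:
    print(e)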
1,955
Given the following text description, write Python code to implement the functionality described below step by step Description: <html><head><meta content="text/html; charset=UTF-8" http-equiv="content-type"><style type="text/css">ol</style></head><body class="c5"><p class="c0 c4"><span class="c3"></span></p><p class="c2 title" id="h.rrbabt268i6e"><h1>CaImAn&rsquo;s Demo pipeline</h1></p><p class="c0"><span class="c3">This notebook will help to demonstrate the process of CaImAn and how it uses different functions to denoise, deconvolve and demix neurons from a two-photon Calcium Imaging dataset. The demo shows how to construct the params, MotionCorrect and cnmf objects and call the relevant functions. You can also run a large part of the pipeline with a single method (cnmf.fit_file). See inside for details. Dataset couresy of Sue Ann Koay and David Tank (Princeton University) This demo pertains to two photon data. For a complete analysis pipeline for one photon microendoscopic data see demo_pipeline_cnmfE.ipynb</span></p> <p class="c0"><span class="c3">More information can be found in the companion paper. </span></p> </html> Step1: Set up logger (optional) You can log to a file using the filename parameter, or make the output more or less verbose by setting level to logging.DEBUG, logging.INFO, logging.WARNING, or logging.ERROR. A filename argument can also be passed to store the log file Step2: Select file(s) to be processed The download_demo function will download the specific file for you and return the complete path to the file which will be stored in your caiman_data directory. If you adapt this demo for your data make sure to pass the complete path to your file(s). Remember to pass the fname variable as a list. Step3: Play the movie (optional) Play the movie (optional). This will require loading the movie in memory which in general is not needed by the pipeline. Displaying the movie uses the OpenCV library. Press q to close the video panel. Step4: Setup some parameters We set some parameters that are relevant to the file, and then parameters for motion correction, processing with CNMF and component quality evaluation. Note that the dataset Sue_2x_3000_40_-46.tif has been spatially downsampled by a factor of 2 and has a lower than usual spatial resolution (2um/pixel). As a result several parameters (gSig, strides, max_shifts, rf, stride_cnmf) have lower values (halved compared to a dataset with spatial resolution 1um/pixel). Step5: Create a parameters object You can creating a parameters object by passing all the parameters as a single dictionary. Parameters not defined in the dictionary will assume their default values. The resulting params object is a collection of subdictionaries pertaining to the dataset to be analyzed (params.data), motion correction (params.motion), data pre-processing (params.preprocess), initialization (params.init), patch processing (params.patch), spatial and temporal component (params.spatial), (params.temporal), quality evaluation (params.quality) and online processing (params.online) Step6: Setup a cluster To enable parallel processing a (local) cluster needs to be set up. This is done with a cell below. The variable backend determines the type of cluster used. The default value 'local' uses the multiprocessing package. The ipyparallel option is also available. More information on these choices can be found here. The resulting variable dview expresses the cluster option. 
If you use dview=dview in the downstream analysis then parallel processing will be used. If you use dview=None then no parallel processing will be employed. Step7: Motion Correction First we create a motion correction object with the parameters specified. Note that the file is not loaded in memory Step8: Now perform motion correction. From the movie above we see that the dateset exhibits non-uniform motion. We will perform piecewise rigid motion correction using the NoRMCorre algorithm. This has already been selected by setting pw_rigid=True when defining the parameters object. Step9: Inspect the results by comparing the original movie. A more detailed presentation of the motion correction method can be found in the demo motion correction notebook. Step10: Memory mapping The cell below memory maps the file in order 'C' and then loads the new memory mapped file. The saved files from motion correction are memory mapped files stored in 'F' order. Their paths are stored in mc.mmap_file. Step11: Now restart the cluster to clean up memory Step12: Run CNMF on patches in parallel The FOV is split is different overlapping patches that are subsequently processed in parallel by the CNMF algorithm. The results from all the patches are merged with special attention to idendtified components on the border. The results are then refined by additional CNMF iterations. Step13: Run the entire pipeline up to this point with one command It is possible to run the combined steps of motion correction, memory mapping, and cnmf fitting in one step as shown below. The command is commented out since the analysis has already been performed. It is recommended that you familiriaze yourself with the various steps and the results of the various steps before using it. Step14: Inspecting the results Briefly inspect the results by plotting contours of identified components against correlation image. The results of the algorithm are stored in the object cnm.estimates. More information can be found in the definition of the estimates object and in the wiki. Step15: Re-run (seeded) CNMF on the full Field of View You can re-run the CNMF algorithm seeded on just the selected components from the previous step. Be careful, because components rejected on the previous step will not be recovered here. Step16: Component Evaluation The processing in patches creates several spurious components. These are filtered out by evaluating each component using three different criteria Step17: Plot contours of selected and rejected components Step18: View traces of accepted and rejected components. Note that if you get data rate error you can start Jupyter notebooks using Step19: Extract DF/F values Step20: Select only high quality components Step21: Display final results Step22: Closing, saving, and creating denoised version You can save an hdf5 file with all the fields of the cnmf object Step23: Stop cluster and clean up log files Step24: View movie with the results We can inspect the denoised results by reconstructing the movie and playing alongside the original data and the resulting (amplified) residual movie Step25: The denoised movie can also be explicitly constructed using
Python Code: import bokeh.plotting as bpl import cv2 import glob import logging import matplotlib.pyplot as plt import numpy as np import os try: cv2.setNumThreads(0) except(): pass try: if __IPYTHON__: # this is used for debugging purposes only. allows to reload classes # when changed get_ipython().magic('load_ext autoreload') get_ipython().magic('autoreload 2') except NameError: pass import caiman as cm from caiman.motion_correction import MotionCorrect from caiman.source_extraction.cnmf import cnmf as cnmf from caiman.source_extraction.cnmf import params as params from caiman.utils.utils import download_demo from caiman.utils.visualization import plot_contours, nb_view_patches, nb_plot_contour bpl.output_notebook() Explanation: <html><head><meta content="text/html; charset=UTF-8" http-equiv="content-type"><style type="text/css">ol</style></head><body class="c5"><p class="c0 c4"><span class="c3"></span></p><p class="c2 title" id="h.rrbabt268i6e"><h1>CaImAn&rsquo;s Demo pipeline</h1></p><p class="c0"><span class="c3">This notebook will help to demonstrate the process of CaImAn and how it uses different functions to denoise, deconvolve and demix neurons from a two-photon Calcium Imaging dataset. The demo shows how to construct the params, MotionCorrect and cnmf objects and call the relevant functions. You can also run a large part of the pipeline with a single method (cnmf.fit_file). See inside for details. Dataset couresy of Sue Ann Koay and David Tank (Princeton University) This demo pertains to two photon data. For a complete analysis pipeline for one photon microendoscopic data see demo_pipeline_cnmfE.ipynb</span></p> <p class="c0"><span class="c3">More information can be found in the companion paper. </span></p> </html> End of explanation logging.basicConfig(format= "%(relativeCreated)12d [%(filename)s:%(funcName)20s():%(lineno)s] [%(process)d] %(message)s", # filename="/tmp/caiman.log", level=logging.WARNING) Explanation: Set up logger (optional) You can log to a file using the filename parameter, or make the output more or less verbose by setting level to logging.DEBUG, logging.INFO, logging.WARNING, or logging.ERROR. A filename argument can also be passed to store the log file End of explanation fnames = ['Sue_2x_3000_40_-46.tif'] # filename to be processed if fnames[0] in ['Sue_2x_3000_40_-46.tif', 'demoMovie.tif']: fnames = [download_demo(fnames[0])] Explanation: Select file(s) to be processed The download_demo function will download the specific file for you and return the complete path to the file which will be stored in your caiman_data directory. If you adapt this demo for your data make sure to pass the complete path to your file(s). Remember to pass the fname variable as a list. End of explanation display_movie = False if display_movie: m_orig = cm.load_movie_chain(fnames) ds_ratio = 0.2 m_orig.resize(1, 1, ds_ratio).play( q_max=99.5, fr=30, magnification=2) Explanation: Play the movie (optional) Play the movie (optional). This will require loading the movie in memory which in general is not needed by the pipeline. Displaying the movie uses the OpenCV library. Press q to close the video panel. 
End of explanation # dataset dependent parameters fr = 30 # imaging rate in frames per second decay_time = 0.4 # length of a typical transient in seconds # motion correction parameters strides = (48, 48) # start a new patch for pw-rigid motion correction every x pixels overlaps = (24, 24) # overlap between pathes (size of patch strides+overlaps) max_shifts = (6,6) # maximum allowed rigid shifts (in pixels) max_deviation_rigid = 3 # maximum shifts deviation allowed for patch with respect to rigid shifts pw_rigid = True # flag for performing non-rigid motion correction # parameters for source extraction and deconvolution p = 1 # order of the autoregressive system gnb = 2 # number of global background components merge_thr = 0.85 # merging threshold, max correlation allowed rf = 15 # half-size of the patches in pixels. e.g., if rf=25, patches are 50x50 stride_cnmf = 6 # amount of overlap between the patches in pixels K = 4 # number of components per patch gSig = [4, 4] # expected half size of neurons in pixels method_init = 'greedy_roi' # initialization method (if analyzing dendritic data using 'sparse_nmf') ssub = 1 # spatial subsampling during initialization tsub = 1 # temporal subsampling during intialization # parameters for component evaluation min_SNR = 2.0 # signal to noise ratio for accepting a component rval_thr = 0.85 # space correlation threshold for accepting a component cnn_thr = 0.99 # threshold for CNN based classifier cnn_lowest = 0.1 # neurons with cnn probability lower than this value are rejected Explanation: Setup some parameters We set some parameters that are relevant to the file, and then parameters for motion correction, processing with CNMF and component quality evaluation. Note that the dataset Sue_2x_3000_40_-46.tif has been spatially downsampled by a factor of 2 and has a lower than usual spatial resolution (2um/pixel). As a result several parameters (gSig, strides, max_shifts, rf, stride_cnmf) have lower values (halved compared to a dataset with spatial resolution 1um/pixel). End of explanation opts_dict = {'fnames': fnames, 'fr': fr, 'decay_time': decay_time, 'strides': strides, 'overlaps': overlaps, 'max_shifts': max_shifts, 'max_deviation_rigid': max_deviation_rigid, 'pw_rigid': pw_rigid, 'p': p, 'nb': gnb, 'rf': rf, 'K': K, 'stride': stride_cnmf, 'method_init': method_init, 'rolling_sum': True, 'only_init': True, 'ssub': ssub, 'tsub': tsub, 'merge_thr': merge_thr, 'min_SNR': min_SNR, 'rval_thr': rval_thr, 'use_cnn': True, 'min_cnn_thr': cnn_thr, 'cnn_lowest': cnn_lowest} opts = params.CNMFParams(params_dict=opts_dict) Explanation: Create a parameters object You can creating a parameters object by passing all the parameters as a single dictionary. Parameters not defined in the dictionary will assume their default values. 
The resulting params object is a collection of subdictionaries pertaining to the dataset to be analyzed (params.data), motion correction (params.motion), data pre-processing (params.preprocess), initialization (params.init), patch processing (params.patch), spatial and temporal component (params.spatial), (params.temporal), quality evaluation (params.quality) and online processing (params.online) End of explanation #%% start a cluster for parallel processing (if a cluster already exists it will be closed and a new session will be opened) if 'dview' in locals(): cm.stop_server(dview=dview) c, dview, n_processes = cm.cluster.setup_cluster( backend='local', n_processes=None, single_thread=False) Explanation: Setup a cluster To enable parallel processing a (local) cluster needs to be set up. This is done with a cell below. The variable backend determines the type of cluster used. The default value 'local' uses the multiprocessing package. The ipyparallel option is also available. More information on these choices can be found here. The resulting variable dview expresses the cluster option. If you use dview=dview in the downstream analysis then parallel processing will be used. If you use dview=None then no parallel processing will be employed. End of explanation # first we create a motion correction object with the parameters specified mc = MotionCorrect(fnames, dview=dview, **opts.get_group('motion')) # note that the file is not loaded in memory Explanation: Motion Correction First we create a motion correction object with the parameters specified. Note that the file is not loaded in memory End of explanation %%capture #%% Run piecewise-rigid motion correction using NoRMCorre mc.motion_correct(save_movie=True) m_els = cm.load(mc.fname_tot_els) border_to_0 = 0 if mc.border_nan is 'copy' else mc.border_to_0 # maximum shift to be used for trimming against NaNs Explanation: Now perform motion correction. From the movie above we see that the dateset exhibits non-uniform motion. We will perform piecewise rigid motion correction using the NoRMCorre algorithm. This has already been selected by setting pw_rigid=True when defining the parameters object. End of explanation #%% compare with original movie display_movie = False if display_movie: m_orig = cm.load_movie_chain(fnames) ds_ratio = 0.2 cm.concatenate([m_orig.resize(1, 1, ds_ratio) - mc.min_mov*mc.nonneg_movie, m_els.resize(1, 1, ds_ratio)], axis=2).play(fr=60, gain=15, magnification=2, offset=0) # press q to exit Explanation: Inspect the results by comparing the original movie. A more detailed presentation of the motion correction method can be found in the demo motion correction notebook. End of explanation #%% MEMORY MAPPING # memory map the file in order 'C' fname_new = cm.save_memmap(mc.mmap_file, base_name='memmap_', order='C', border_to_0=border_to_0, dview=dview) # exclude borders # now load the file Yr, dims, T = cm.load_memmap(fname_new) images = np.reshape(Yr.T, [T] + list(dims), order='F') #load frames in python format (T x X x Y) Explanation: Memory mapping The cell below memory maps the file in order 'C' and then loads the new memory mapped file. The saved files from motion correction are memory mapped files stored in 'F' order. Their paths are stored in mc.mmap_file. 
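As a side note on the order flags used here, the difference between 'C' and 'F' layouts can be seen with a generic numpy toy example (unrelated to the actual memmap files):

import numpy as np
a = np.arange(6).reshape(2, 3)
print(a.flatten(order='C'))   # [0 1 2 3 4 5]  row-major: each row is contiguous
print(a.flatten(order='F'))   # [0 3 1 4 2 5]  column-major: each column is contiguous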
End of explanation #%% restart cluster to clean up memory cm.stop_server(dview=dview) c, dview, n_processes = cm.cluster.setup_cluster( backend='local', n_processes=None, single_thread=False) Explanation: Now restart the cluster to clean up memory End of explanation %%capture #%% RUN CNMF ON PATCHES # First extract spatial and temporal components on patches and combine them # for this step deconvolution is turned off (p=0). If you want to have # deconvolution within each patch change params.patch['p_patch'] to a # nonzero value cnm = cnmf.CNMF(n_processes, params=opts, dview=dview) cnm = cnm.fit(images) Explanation: Run CNMF on patches in parallel The FOV is split is different overlapping patches that are subsequently processed in parallel by the CNMF algorithm. The results from all the patches are merged with special attention to idendtified components on the border. The results are then refined by additional CNMF iterations. End of explanation # cnm1 = cnmf.CNMF(n_processes, params=opts, dview=dview) # cnm1.fit_file(motion_correct=True) Explanation: Run the entire pipeline up to this point with one command It is possible to run the combined steps of motion correction, memory mapping, and cnmf fitting in one step as shown below. The command is commented out since the analysis has already been performed. It is recommended that you familiriaze yourself with the various steps and the results of the various steps before using it. End of explanation #%% plot contours of found components Cn = cm.local_correlations(images.transpose(1,2,0)) Cn[np.isnan(Cn)] = 0 cnm.estimates.plot_contours_nb(img=Cn) Explanation: Inspecting the results Briefly inspect the results by plotting contours of identified components against correlation image. The results of the algorithm are stored in the object cnm.estimates. More information can be found in the definition of the estimates object and in the wiki. End of explanation %%capture #%% RE-RUN seeded CNMF on accepted patches to refine and perform deconvolution cnm2 = cnm.refit(images, dview=dview) Explanation: Re-run (seeded) CNMF on the full Field of View You can re-run the CNMF algorithm seeded on just the selected components from the previous step. Be careful, because components rejected on the previous step will not be recovered here. End of explanation #%% COMPONENT EVALUATION # the components are evaluated in three ways: # a) the shape of each component must be correlated with the data # b) a minimum peak SNR is required over the length of a transient # c) each shape passes a CNN based classifier cnm2.estimates.evaluate_components(images, cnm2.params, dview=dview) Explanation: Component Evaluation The processing in patches creates several spurious components. 
These are filtered out by evaluating each component using three different criteria: the shape of each component must be correlated with the data at the corresponding location within the FOV a minimum peak SNR is required over the length of a transient each shape passes a CNN based classifier End of explanation #%% PLOT COMPONENTS cnm2.estimates.plot_contours_nb(img=Cn, idx=cnm2.estimates.idx_components) Explanation: Plot contours of selected and rejected components End of explanation # accepted components cnm2.estimates.nb_view_components(img=Cn, idx=cnm2.estimates.idx_components) # rejected components if len(cnm2.estimates.idx_components_bad) > 0: cnm2.estimates.nb_view_components(img=Cn, idx=cnm2.estimates.idx_components_bad) else: print("No components were rejected.") Explanation: View traces of accepted and rejected components. Note that if you get data rate error you can start Jupyter notebooks using: 'jupyter notebook --NotebookApp.iopub_data_rate_limit=1.0e10' End of explanation #%% Extract DF/F values cnm2.estimates.detrend_df_f(quantileMin=8, frames_window=250) Explanation: Extract DF/F values End of explanation cnm2.estimates.select_components(use_object=True) Explanation: Select only high quality components End of explanation cnm2.estimates.nb_view_components(img=Cn, denoised_color='red') print('you may need to change the data rate to generate this one: use jupyter notebook --NotebookApp.iopub_data_rate_limit=1.0e10 before opening jupyter notebook') Explanation: Display final results End of explanation save_results = False if save_results: cnm2.save('analysis_results.hdf5') Explanation: Closing, saving, and creating denoised version You can save an hdf5 file with all the fields of the cnmf object End of explanation #%% STOP CLUSTER and clean up log files cm.stop_server(dview=dview) log_files = glob.glob('*_LOG_*') for log_file in log_files: os.remove(log_file) Explanation: Stop cluster and clean up log files End of explanation cnm2.estimates.play_movie(images, q_max=99.9, gain_res=2, magnification=2, bpx=border_to_0, include_bck=False) Explanation: View movie with the results We can inspect the denoised results by reconstructing the movie and playing alongside the original data and the resulting (amplified) residual movie End of explanation #%% reconstruct denoised movie denoised = cm.movie(cnm2.estimates.A.dot(cnm2.estimates.C) + \ cnm2.estimates.b.dot(cnm2.estimates.f)).reshape(dims + (-1,), order='F').transpose([2, 0, 1]) Explanation: The denoised movie can also be explicitly constructed using: End of explanation
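In the reconstruction above, cnm2.estimates.A.dot(cnm2.estimates.C) multiplies the spatial footprints by the temporal traces, and cnm2.estimates.b.dot(cnm2.estimates.f) adds back the low-rank background. The resulting movie object can be inspected with the same .play() interface used earlier in this notebook (the parameter values here are only an example):
denoised.play(fr=30, magnification=2)  # press q to exit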
1,956
Given the following text description, write Python code to implement the functionality described. Description: Print all safe primes below N Python 3 implementation of the approach ; Function to print first n safe primes ; Initialize all entries of integer array as 1. A value in prime [ i ] will finally be 0 if i is Not a prime , else 1 ; 0 and 1 are not primes ; If prime [ p ] is not changed , then it is a prime ; Update all multiples of p ; If i is prime ; 2 p + 1 ; If 2 p + 1 is also a prime then set prime [ 2 p + 1 ] = 2 ; i is a safe prime ; Driver code
Python Code: from math import sqrt

def printSafePrimes(n):
    # Sieve of Eratosthenes: prime[i] == 1 means i is prime
    prime = [0 for i in range(n + 1)]
    for i in range(2, n + 1):
        prime[i] = 1
    prime[0] = prime[1] = 0
    for p in range(2, int(sqrt(n)) + 1):
        if prime[p] == 1:
            for i in range(p * 2, n + 1, p):
                prime[i] = 0

    # If i is prime and 2*i + 1 is also prime, then 2*i + 1 is a safe prime:
    # mark it with 2
    for i in range(2, n + 1):
        if prime[i] != 0:
            temp = (2 * i) + 1
            if temp <= n and prime[temp] != 0:
                prime[temp] = 2

    # Print all safe primes (the smallest safe prime is 5)
    for i in range(5, n + 1):
        if prime[i] == 2:
            print(i, end=" ")

# Driver code
if __name__ == '__main__':
    n = 20
    printSafePrimes(n)
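As a quick sanity check (not part of the original snippet), the definition can be applied directly: p is a safe prime when both p and (p - 1) // 2 are prime, so for n = 20 the expected output of printSafePrimes is 5 7 11.
def is_prime(k):
    return k > 1 and all(k % d for d in range(2, int(k ** 0.5) + 1))

print([p for p in range(2, 21) if is_prime(p) and is_prime((p - 1) // 2)])  # [5, 7, 11]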
1,957
Given the following text description, write Python code to implement the functionality described below step by step Description: New function to make a list and to select calibrator I add a function to retrieve all the flux from the ALMA Calibrator list with its frequency and observing date, and to retrieve redshift (z) from NED. Step1: Example, retrieve all the calibrator with a flux > 0.1 Jy Step2: Select all calibrators that heve been observed at least in 3 Bands [ >60s in B3, B6, B7] already queried and convert it to SQL exclude Cycle 0, array 12m Step3: We can write a "report file" or only use the "resume data", some will have redshift data retrieved from NED. Step4: Sometimes there is no redshift information found in NED Combining listcal and resume information. Step5: Select objects which has redshift collect the flux, band, freq, and obsdate plot based on the Band Step6: Plot Flux vs Redshift same object will located in the same z some of them will not have flux in all 3 bands. Step7: Plot log(Luminosity) vs redshift Step9: How to calculate luminosity Step10: Plot $\log_{10}(L)$ vs $z$ Step11: Black-dashed line are for 0.1 Jy flux. Without log10
Python Code: file_listcal = "alma_sourcecat_searchresults_20180419.csv" q = databaseQuery() Explanation: New function to make a list and to select calibrator I add a function to retrieve all the flux from the ALMA Calibrator list with its frequency and observing date, and to retrieve redshift (z) from NED. End of explanation listcal = q.read_calibratorlist(file_listcal, fluxrange=[0.1, 999999]) len(listcal) print("Name: ", listcal[0][0]) print("J2000 RA, dec: ", listcal[0][1], listcal[0][2]) print("Alias: ", listcal[0][3]) print("Flux density: ", listcal[0][4]) print("Band: ", listcal[0][5]) print("Freq: ", listcal[0][6]) print("Obs date: ", listcal[0][4]) Explanation: Example, retrieve all the calibrator with a flux > 0.1 Jy: End of explanation report, resume = q.select_object_from_sqldb("calibrators_brighterthan_0.1Jy_20180419.db", \ maxFreqRes=999999999, array='12m', \ excludeCycle0=True, \ selectPol=False, \ minTimeBand={3:60., 6:60., 7:60.}, \ silent=True) Explanation: Select all calibrators that heve been observed at least in 3 Bands [ >60s in B3, B6, B7] already queried and convert it to SQL exclude Cycle 0, array 12m End of explanation print("Name: ", resume[0][0]) print("From NED: ") print("Name: ", resume[0][3]) print("J2000 RA, dec: ", resume[0][4], resume[0][5]) print("z: ", resume[0][6]) print("Total # of projects: ", resume[0][7]) print("Total # of UIDs: ", resume[0][8]) print("Gal lon: ", resume[0][9]) print("Gal lat: ", resume[0][10]) Explanation: We can write a "report file" or only use the "resume data", some will have redshift data retrieved from NED. End of explanation for i, obj in enumerate(resume): for j, cal in enumerate(listcal): if obj[0] == cal[0]: # same name obj.append(cal[4:]) # add [flux, band, flux obsdate] in the "resume" Explanation: Sometimes there is no redshift information found in NED Combining listcal and resume information. End of explanation def collect_z_and_flux(Band): z = [] flux = [] for idata in resume: if idata[6] is not None: # select object which has redshift information fluxnya = idata[11][0] bandnya = idata[11][1] freqnya = idata[11][2] datenya = idata[11][3] for i, band in enumerate(bandnya): if band == str(Band): # take only first data flux.append(fluxnya[i]) z.append(idata[6]) break return z, flux z3, f3 = collect_z_and_flux(3) print("Number of seleted source in B3: ", len(z3)) z6, f6 = collect_z_and_flux(6) print("Number of seleted source in B6: ", len(z6)) z7, f7 = collect_z_and_flux(7) print("Number of seleted source in B7: ", len(z7)) Explanation: Select objects which has redshift collect the flux, band, freq, and obsdate plot based on the Band End of explanation plt.figure(figsize=(15,10)) plt.subplot(221) plt.plot(z3, f3, 'ro') plt.xlabel("z") plt.ylabel("Flux density (Jy)") plt.title("B3") plt.subplot(222) plt.plot(z6, f6, 'go') plt.xlabel("z") plt.ylabel("Flux density (Jy)") plt.title("B6") plt.subplot(223) plt.plot(z7, f7, 'bo') plt.xlabel("z") plt.ylabel("Flux density (Jy)") plt.title("B7") plt.subplot(224) plt.plot(z3, f3, 'ro', z6, f6, 'go', z7, f7, 'bo', alpha=0.3) plt.xlabel("z") plt.ylabel("Flux density (Jy)") plt.title("B3, B6, B7") Explanation: Plot Flux vs Redshift same object will located in the same z some of them will not have flux in all 3 bands. 
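If only sources detected in all three bands are wanted for the comparison, a small filter over the combined resume entries could be applied first (a sketch; it assumes the flux/band lists appended to each resume entry earlier in this notebook):
def has_all_bands(entry, bands=("3", "6", "7")):
    if len(entry) <= 11:        # no flux information was appended for this source
        return False
    band_list = entry[11][1]
    return all(b in band_list for b in bands)

complete = [obj for obj in resume if obj[6] is not None and has_all_bands(obj)]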
End of explanation from astropy.cosmology import FlatLambdaCDM cosmo = FlatLambdaCDM(H0=70, Om0=0.3, Tcmb0=2.725) Explanation: Plot log(Luminosity) vs redshift End of explanation def calc_power(z, flux): z = redshift flux in Jy z = np.array(z) flux = np.array(flux) dL = cosmo.luminosity_distance(z).to(u.meter).value # Luminosity distance luminosity = 4.0*np.pi*dL*dL/(1.0+z) * flux * 1e-26 return z, luminosity Explanation: How to calculate luminosity: $$L_{\nu} (\nu_{e}) = \frac{4 \pi D_{L}^2}{1+z} \cdot S_{\nu} (\nu_{o})$$ Notes: - Calculate Luminosity or Power in a specific wavelength (without k-correction e.g. using spectral index) - $L_{\nu}$ in watt/Hz, in emited freq - $S_{\nu}$ in watt/m$^2$/Hz, in observed freq - $D_L$ is luminosity distance, calculated using astropy.cosmology function - need to calculate distance in meter - need to convert Jy to watt/m$^2$/Hz ----- $\times 10^{-26}$ End of explanation z3, l3 = calc_power(z3, f3) z6, l6 = calc_power(z6, f6) z7, l7 = calc_power(z7, f7) zdummy = np.linspace(0.001, 2.5, 100) fdummy = 0.1 # Jy, our cut threshold zdummy, Ldummy0 = calc_power(zdummy, fdummy) zdummy, Ldummy3 = calc_power(zdummy, np.max(f3)) zdummy, Ldummy6 = calc_power(zdummy, np.max(f6)) zdummy, Ldummy7 = calc_power(zdummy, np.max(f7)) plt.figure(figsize=(15,10)) plt.subplot(221) plt.plot(z3, np.log10(l3), 'r*', \ zdummy, np.log10(Ldummy0), 'k--', zdummy, np.log10(Ldummy3), 'r--', alpha=0.5) plt.xlabel(r"$z$"); plt.ylabel(r"$\log_{10}(L_{\nu_e})$"); plt.title("B3") plt.subplot(222) plt.plot(z6, np.log10(l6), 'g*', \ zdummy, np.log10(Ldummy0), 'k--', zdummy, np.log10(Ldummy6), 'g--', alpha=0.5) plt.xlabel(r"$z$"); plt.ylabel(r"$\log_{10}(L_{\nu_e})$"); plt.title("B6") plt.subplot(223) plt.plot(z7, np.log10(l7), 'b*', \ zdummy, np.log10(Ldummy0), 'k--', zdummy, np.log10(Ldummy7), 'b--', alpha=0.5) plt.xlabel(r"$z$"); plt.ylabel(r"$\log_{10}(L_{\nu_e})$"); plt.title("B7") plt.subplot(224) plt.plot(z3, np.log10(l3), 'r*', z6, np.log10(l6), 'g*', z7, np.log10(l7), 'b*', \ zdummy, np.log10(Ldummy0), 'k--', \ zdummy, np.log10(Ldummy3), 'r--', \ zdummy, np.log10(Ldummy6), 'g--', \ zdummy, np.log10(Ldummy7), 'b--', alpha=0.5) plt.xlabel(r"$z$"); plt.ylabel(r"$\log_{10}(L_{\nu_e})$"); plt.title("B3, B6, B7") Explanation: Plot $\log_{10}(L)$ vs $z$ End of explanation plt.figure(figsize=(15,10)) plt.subplot(221) plt.plot(z3, l3, 'r*', zdummy, Ldummy0, 'k--', zdummy, Ldummy3, 'r--', alpha=0.5) plt.xlabel(r"$z$"); plt.ylabel(r"$\log_{10}(L_{\nu_e})$"); plt.title("B3") plt.subplot(222) plt.plot(z6, l6, 'g*', zdummy, Ldummy0, 'k--', zdummy, Ldummy6, 'g--', alpha=0.5) plt.xlabel(r"$z$"); plt.ylabel(r"$\log_{10}(L_{\nu_e})$"); plt.title("B6") plt.subplot(223) plt.plot(z7, l7, 'b*', zdummy, Ldummy0, 'k--', zdummy, Ldummy7, 'b--', alpha=0.5) plt.xlabel(r"$z$"); plt.ylabel(r"$\log_{10}(L_{\nu_e})$"); plt.title("B7") plt.subplot(224) plt.plot(z3, l3, 'r*', z6, l6, 'g*', z7, l7, 'b*', \ zdummy, Ldummy0, 'k--', zdummy, Ldummy3, 'r--', \ zdummy, Ldummy6, 'g--', zdummy, Ldummy7, 'b--', alpha=0.5) plt.xlabel(r"$z$"); plt.ylabel(r"$\log_{10}(L_{\nu_e})$"); plt.title("B3, B6, B7") Explanation: Black-dashed line are for 0.1 Jy flux. Without log10 End of explanation
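The luminosities above deliberately omit any k-correction. Purely as an illustration (not part of the original analysis), if one assumed a power-law spectrum S_nu proportional to nu^alpha with some adopted spectral index alpha, the 1/(1+z) factor would become (1+z)^-(1+alpha):
def calc_power_kcorr(z, flux, alpha=-0.7):   # alpha = -0.7 is only a placeholder value
    z = np.array(z)
    flux = np.array(flux)
    dL = cosmo.luminosity_distance(z).to(u.meter).value
    luminosity = 4.0 * np.pi * dL * dL * (1.0 + z) ** (-(1.0 + alpha)) * flux * 1e-26
    return z, luminosity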
1,958
Given the following text description, write Python code to implement the functionality described below step by step Description: H2O Tutorial Step1: If you already have an H2O cluster running that you'd like to connect to (for example, in a multi-node Hadoop environment), then you can specify the IP and port of that cluster as follows Step2: Download Data The following code downloads a copy of the Wisconsin Diagnostic Breast Cancer dataset. We can import the data directly into H2O using the Python API. Step3: Explore Data Once we have loaded the data, let's take a quick look. First the dimension of the frame Step4: Now let's take a look at the top of the frame Step5: The first two columns contain an ID and the resposne. The "diagnosis" column is the response. Let's take a look at the column names. The data contains derived features from the medical images of the tumors. Step6: To select a subset of the columns to look at, typical Pandas indexing applies Step7: Now let's select a single column, for example -- the response column, and look at the data more closely Step8: It looks like a binary response, but let's validate that assumption Step9: We can query the categorical "levels" as well ('B' and 'M' stand for "Benign" and "Malignant" diagnosis) Step10: Since "diagnosis" column is the response we would like to predict, we may want to check if there are any missing values, so let's look for NAs. To figure out which, if any, values are missing, we can use the isna method on the diagnosis column. The columns in an H2O Frame are also H2O Frames themselves, so all the methods that apply to a Frame also apply to a single column. Step11: The isna method doesn't directly answer the question, "Does the diagnosis column contain any NAs?", rather it returns a 0 if that cell is not missing (Is NA? FALSE == 0) and a 1 if it is missing (Is NA? TRUE == 1). So if there are no missing values, then summing over the whole column should produce a summand equal to 0.0. Let's take a look Step12: Great, no missing labels. Out of curiosity, let's see if there is any missing data in this frame Step13: The next thing I may wonder about in a binary classification problem is the distribution of the response in the training data. Is one of the two outcomes under-represented in the training set? Many real datasets have what's called an "imbalanace" problem, where one of the classes has far fewer training examples than the other class. Let's take a look at the distribution, both visually and numerically. Step14: Ok, the data is not exactly evenly distributed between the two classes -- there are almost twice as many Benign samples as there are Malicious samples. However, this level of imbalance shouldn't be much of an issue for the machine learning algos. (We will revisit this later in the modeling section below). Step15: Machine Learning in H2O We will do a quick demo of the H2O software -- trying to predict malignant tumors using various machine learning algorithms. Specify the predictor set and response The response, y, is the 'diagnosis' column, and the predictors, x, are all the columns aside from the first two columns ('id' and 'diagnosis'). Step16: Split H2O Frame into a train and test set Step17: Train and Test a GBM model Step18: We first create a model object of class, "H2OGradientBoostingEstimator". This does not actually do any training, it just sets the model up for training by specifying model parameters. 
Step19: The model object, like all H2O estimator objects, has a train method, which will actually perform model training. At this step we specify the training and (optionally) a validation set, along with the response and predictor variables. Step20: Inspect Model The type of results shown when you print a model, are determined by the following Step21: Model Performance on a Test Set Once a model has been trained, you can also use it to make predictions on a test set. In the case above, we passed the test set as the validation_frame in training, so we have technically already created test set predictions and performance. However, when performing model selection over a variety of model parameters, it is common for users to break their dataset into three pieces Step22: Cross-validated Performance To perform k-fold cross-validation, you use the same code as above, but you specify nfolds as an integer greater than 1, or add a "fold_column" to your H2O Frame which indicates a fold ID for each row. Unless you have a specific reason to manually assign the observations to folds, you will find it easiest to simply use the nfolds argument. When performing cross-validation, you can still pass a validation_frame, but you can also choose to use the original dataset that contains all the rows. We will cross-validate a model below using the original H2O Frame which we call data. Step23: Grid Search One way of evaluting models with different parameters is to perform a grid search over a set of parameter values. For example, in GBM, here are three model parameters that may be useful to search over Step24: Define an "H2OGridSearch" object by specifying the algorithm (GBM) and the hyper parameters Step25: An "H2OGridSearch" object also has a train method, which is used to train all the models in the grid. Step26: Compare Models
Python Code: import h2o # Start an H2O Cluster on your local machine h2o.init() Explanation: H2O Tutorial: Breast Cancer Classification Author: Erin LeDell Contact: [email protected] This tutorial steps through a quick introduction to H2O's Python API. The goal of this tutorial is to introduce through a complete example H2O's capabilities from Python. Also, to help those that are accustomed to Scikit Learn and Pandas, the demo will be specific call outs for differences between H2O and those packages; this is intended to help anyone that needs to do machine learning on really Big Data make the transition. It is not meant to be a tutorial on machine learning or algorithms. Detailed documentation about H2O's and the Python API is available at http://docs.h2o.ai. Install H2O in Python Prerequisites This tutorial assumes you have Python 2.7 installed. The h2o Python package has a few dependencies which can be installed using pip. The packages that are required are (which also have their own dependencies): bash pip install requests pip install tabulate pip install scikit-learn If you have any problems (for example, installing the scikit-learn package), check out this page for tips. Install h2o Once the dependencies are installed, you can install H2O. We will use the latest stable version of the h2o package, which is called "Tibshirani-3." The installation instructions are on the "Install in Python" tab on this page. ```bash The following command removes the H2O module for Python (if it already exists). pip uninstall h2o Next, use pip to install this version of the H2O Python module. pip install http://h2o-release.s3.amazonaws.com/h2o/rel-tibshirani/3/Python/h2o-3.6.0.3-py2.py3-none-any.whl ``` Start up an H2O cluster In a Python terminal, we can import the h2o package and start up an H2O cluster. End of explanation # This will not actually do anything since it's a fake IP address # h2o.init(ip="123.45.67.89", port=54321) Explanation: If you already have an H2O cluster running that you'd like to connect to (for example, in a multi-node Hadoop environment), then you can specify the IP and port of that cluster as follows: End of explanation csv_url = "http://www.stat.berkeley.edu/~ledell/data/wisc-diag-breast-cancer-shuffled.csv" data = h2o.import_file(csv_url) Explanation: Download Data The following code downloads a copy of the Wisconsin Diagnostic Breast Cancer dataset. We can import the data directly into H2O using the Python API. End of explanation data.shape Explanation: Explore Data Once we have loaded the data, let's take a quick look. First the dimension of the frame: End of explanation data.head() Explanation: Now let's take a look at the top of the frame: End of explanation data.columns Explanation: The first two columns contain an ID and the resposne. The "diagnosis" column is the response. Let's take a look at the column names. The data contains derived features from the medical images of the tumors. 
End of explanation columns = ["id", "diagnosis", "area_mean"] data[columns].head() Explanation: To select a subset of the columns to look at, typical Pandas indexing applies: End of explanation data['diagnosis'] Explanation: Now let's select a single column, for example -- the response column, and look at the data more closely: End of explanation data['diagnosis'].unique() data['diagnosis'].nlevels() Explanation: It looks like a binary response, but let's validate that assumption: End of explanation data['diagnosis'].levels() Explanation: We can query the categorical "levels" as well ('B' and 'M' stand for "Benign" and "Malignant" diagnosis): End of explanation data.isna() data['diagnosis'].isna() Explanation: Since "diagnosis" column is the response we would like to predict, we may want to check if there are any missing values, so let's look for NAs. To figure out which, if any, values are missing, we can use the isna method on the diagnosis column. The columns in an H2O Frame are also H2O Frames themselves, so all the methods that apply to a Frame also apply to a single column. End of explanation data['diagnosis'].isna().sum() Explanation: The isna method doesn't directly answer the question, "Does the diagnosis column contain any NAs?", rather it returns a 0 if that cell is not missing (Is NA? FALSE == 0) and a 1 if it is missing (Is NA? TRUE == 1). So if there are no missing values, then summing over the whole column should produce a summand equal to 0.0. Let's take a look: End of explanation data.isna().sum() Explanation: Great, no missing labels. Out of curiosity, let's see if there is any missing data in this frame: End of explanation # TO DO: Insert a bar chart or something showing the proportion of M to B in the response. data['diagnosis'].table() Explanation: The next thing I may wonder about in a binary classification problem is the distribution of the response in the training data. Is one of the two outcomes under-represented in the training set? Many real datasets have what's called an "imbalanace" problem, where one of the classes has far fewer training examples than the other class. Let's take a look at the distribution, both visually and numerically. End of explanation n = data.shape[0] # Total number of training samples data['diagnosis'].table()['Count']/n Explanation: Ok, the data is not exactly evenly distributed between the two classes -- there are almost twice as many Benign samples as there are Malicious samples. However, this level of imbalance shouldn't be much of an issue for the machine learning algos. (We will revisit this later in the modeling section below). End of explanation y = 'diagnosis' x = data.columns del x[0:1] x Explanation: Machine Learning in H2O We will do a quick demo of the H2O software -- trying to predict malignant tumors using various machine learning algorithms. Specify the predictor set and response The response, y, is the 'diagnosis' column, and the predictors, x, are all the columns aside from the first two columns ('id' and 'diagnosis'). End of explanation train, test = data.split_frame(ratios=[0.75], seed=1) train.shape test.shape Explanation: Split H2O Frame into a train and test set End of explanation # Import H2O GBM: from h2o.estimators.gbm import H2OGradientBoostingEstimator Explanation: Train and Test a GBM model End of explanation model = H2OGradientBoostingEstimator(distribution='bernoulli', ntrees=100, max_depth=4, learn_rate=0.1) Explanation: We first create a model object of class, "H2OGradientBoostingEstimator". 
This does not actually do any training, it just sets the model up for training by specifying model parameters. End of explanation model.train(x=x, y=y, training_frame=train, validation_frame=test) Explanation: The model object, like all H2O estimator objects, has a train method, which will actually perform model training. At this step we specify the training and (optionally) a validation set, along with the response and predictor variables. End of explanation print(model) Explanation: Inspect Model The type of results shown when you print a model, are determined by the following: - Model class of the estimator (e.g. GBM, RF, GLM, DL) - The type of machine learning problem (e.g. binary classification, multiclass classification, regression) - The data you specify (e.g. training_frame only, training_frame and validation_frame, or training_frame and nfolds) Below, we see a GBM Model Summary, as well as training and validation metrics since we supplied a validation_frame. Since this a binary classification task, we are shown the relevant performance metrics, which inclues: MSE, R^2, LogLoss, AUC and Gini. Also, we are shown a Confusion Matrix, where the threshold for classification is chosen automatically (by H2O) as the threshold which maximizes the F1 score. The scoring history is also printed, which shows the performance metrics over some increment such as "number of trees" in the case of GBM and RF. Lastly, for tree-based methods (GBM and RF), we also print variable importance. End of explanation perf = model.model_performance(test) perf.r2() perf.auc() Explanation: Model Performance on a Test Set Once a model has been trained, you can also use it to make predictions on a test set. In the case above, we passed the test set as the validation_frame in training, so we have technically already created test set predictions and performance. However, when performing model selection over a variety of model parameters, it is common for users to break their dataset into three pieces: Training, Validation and Test. After training a variety of models using different parameters (and evaluating them on a validation set), the user may choose a single model and then evaluate model performance on a separate test set. This is when the model_performance method, shown below, is most useful. End of explanation cvmodel = H2OGradientBoostingEstimator(distribution='bernoulli', ntrees=100, max_depth=4, learn_rate=0.1, nfolds=5) cvmodel.train(x=x, y=y, training_frame=data) Explanation: Cross-validated Performance To perform k-fold cross-validation, you use the same code as above, but you specify nfolds as an integer greater than 1, or add a "fold_column" to your H2O Frame which indicates a fold ID for each row. Unless you have a specific reason to manually assign the observations to folds, you will find it easiest to simply use the nfolds argument. When performing cross-validation, you can still pass a validation_frame, but you can also choose to use the original dataset that contains all the rows. We will cross-validate a model below using the original H2O Frame which we call data. End of explanation ntrees_opt = [5,50,100] max_depth_opt = [2,3,5] learn_rate_opt = [0.1,0.2] hyper_params = {'ntrees': ntrees_opt, 'max_depth': max_depth_opt, 'learn_rate': learn_rate_opt} Explanation: Grid Search One way of evaluting models with different parameters is to perform a grid search over a set of parameter values. 
For example, in GBM, here are three model parameters that may be useful to search over: - ntrees: Number of trees - max_depth: Maximum depth of a tree - learn_rate: Learning rate in the GBM We will define a grid as follows: End of explanation from h2o.grid.grid_search import H2OGridSearch gs = H2OGridSearch(H2OGradientBoostingEstimator, hyper_params = hyper_params) Explanation: Define an "H2OGridSearch" object by specifying the algorithm (GBM) and the hyper parameters: End of explanation gs.train(x=x, y=y, training_frame=train, validation_frame=test) Explanation: An "H2OGridSearch" object also has a train method, which is used to train all the models in the grid. End of explanation print(gs) # print out the auc for all of the models for g in gs: print(g.model_id + " auc: " + str(g.auc())) #TO DO: Compare grid search models Explanation: Compare Models End of explanation
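One way to finish the "TO DO" in the last cell, using only the g.auc() and g.model_id calls already shown above, is to rank the grid models by AUC (a sketch):
results = sorted(((g.auc(), g.model_id) for g in gs), reverse=True)
for auc_value, model_id in results:
    print("{:.4f}  {}".format(auc_value, model_id))
best_model_id = results[0][1]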
1,959
Given the following text description, write Python code to implement the functionality described below step by step Description: Deep Neural Network for Image Classification Step1: 2 - Dataset You will use the same "Cat vs non-Cat" dataset as in "Logistic Regression as a Neural Network" (Assignment 2). The model you had built had 70% test accuracy on classifying cats vs non-cats images. Hopefully, your new model will perform a better! Problem Statement Step2: The following code will show you an image in the dataset. Feel free to change the index and re-run the cell multiple times to see other images. Step3: As usual, you reshape and standardize the images before feeding them to the network. The code is given in the cell below. <img src="images/imvectorkiank.png" style="width Step5: $12,288$ equals $64 \times 64 \times 3$ which is the size of one reshaped image vector. 3 - Architecture of your model Now that you are familiar with the dataset, it is time to build a deep neural network to distinguish cat images from non-cat images. You will build two different models Step6: Run the cell below to train your parameters. See if your model runs. The cost should be decreasing. It may take up to 5 minutes to run 2500 iterations. Check if the "Cost after iteration 0" matches the expected output below, if not click on the square (⬛) on the upper bar of the notebook to stop the cell and try to find your error. Step7: Expected Output Step8: Expected Output Step10: Expected Output Step11: You will now train the model as a 5-layer neural network. Run the cell below to train your model. The cost should decrease on every iteration. It may take up to 5 minutes to run 2500 iterations. Check if the "Cost after iteration 0" matches the expected output below, if not click on the square (⬛) on the upper bar of the notebook to stop the cell and try to find your error. Step12: Expected Output Step13: <table> <tr> <td> **Train Accuracy** </td> <td> 0.985645933014 </td> </tr> </table> Step14: Expected Output Step15: A few type of images the model tends to do poorly on include
Python Code: import time import numpy as np import h5py import matplotlib.pyplot as plt import scipy from PIL import Image from scipy import ndimage from dnn_app_utils_v2 import * %matplotlib inline plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'gray' %load_ext autoreload %autoreload 2 np.random.seed(1) Explanation: Deep Neural Network for Image Classification: Application When you finish this, you will have finished the last programming assignment of Week 4, and also the last programming assignment of this course! You will use use the functions you'd implemented in the previous assignment to build a deep network, and apply it to cat vs non-cat classification. Hopefully, you will see an improvement in accuracy relative to your previous logistic regression implementation. After this assignment you will be able to: - Build and apply a deep neural network to supervised learning. Let's get started! 1 - Packages Let's first import all the packages that you will need during this assignment. - numpy is the fundamental package for scientific computing with Python. - matplotlib is a library to plot graphs in Python. - h5py is a common package to interact with a dataset that is stored on an H5 file. - PIL and scipy are used here to test your model with your own picture at the end. - dnn_app_utils provides the functions implemented in the "Building your Deep Neural Network: Step by Step" assignment to this notebook. - np.random.seed(1) is used to keep all the random function calls consistent. It will help us grade your work. End of explanation train_x_orig, train_y, test_x_orig, test_y, classes = load_data() Explanation: 2 - Dataset You will use the same "Cat vs non-Cat" dataset as in "Logistic Regression as a Neural Network" (Assignment 2). The model you had built had 70% test accuracy on classifying cats vs non-cats images. Hopefully, your new model will perform a better! Problem Statement: You are given a dataset ("data.h5") containing: - a training set of m_train images labelled as cat (1) or non-cat (0) - a test set of m_test images labelled as cat and non-cat - each image is of shape (num_px, num_px, 3) where 3 is for the 3 channels (RGB). Let's get more familiar with the dataset. Load the data by running the cell below. End of explanation # Example of a picture index = 10 plt.imshow(train_x_orig[index]) print ("y = " + str(train_y[0,index]) + ". It's a " + classes[train_y[0,index]].decode("utf-8") + " picture.") # Explore your dataset m_train = train_x_orig.shape[0] num_px = train_x_orig.shape[1] m_test = test_x_orig.shape[0] print ("Number of training examples: " + str(m_train)) print ("Number of testing examples: " + str(m_test)) print ("Each image is of size: (" + str(num_px) + ", " + str(num_px) + ", 3)") print ("train_x_orig shape: " + str(train_x_orig.shape)) print ("train_y shape: " + str(train_y.shape)) print ("test_x_orig shape: " + str(test_x_orig.shape)) print ("test_y shape: " + str(test_y.shape)) Explanation: The following code will show you an image in the dataset. Feel free to change the index and re-run the cell multiple times to see other images. End of explanation # Reshape the training and test examples train_x_flatten = train_x_orig.reshape(train_x_orig.shape[0], -1).T # The "-1" makes reshape flatten the remaining dimensions test_x_flatten = test_x_orig.reshape(test_x_orig.shape[0], -1).T # Standardize data to have feature values between 0 and 1. 
train_x = train_x_flatten/255. test_x = test_x_flatten/255. print ("train_x's shape: " + str(train_x.shape)) print ("test_x's shape: " + str(test_x.shape)) Explanation: As usual, you reshape and standardize the images before feeding them to the network. The code is given in the cell below. <img src="images/imvectorkiank.png" style="width:450px;height:300px;"> <caption><center> <u>Figure 1</u>: Image to vector conversion. <br> </center></caption> End of explanation ### CONSTANTS DEFINING THE MODEL #### n_x = 12288 # num_px * num_px * 3 n_h = 7 n_y = 1 layers_dims = (n_x, n_h, n_y) # GRADED FUNCTION: two_layer_model def two_layer_model(X, Y, layers_dims, learning_rate = 0.0075, num_iterations = 3000, print_cost=False): Implements a two-layer neural network: LINEAR->RELU->LINEAR->SIGMOID. Arguments: X -- input data, of shape (n_x, number of examples) Y -- true "label" vector (containing 0 if cat, 1 if non-cat), of shape (1, number of examples) layers_dims -- dimensions of the layers (n_x, n_h, n_y) num_iterations -- number of iterations of the optimization loop learning_rate -- learning rate of the gradient descent update rule print_cost -- If set to True, this will print the cost every 100 iterations Returns: parameters -- a dictionary containing W1, W2, b1, and b2 np.random.seed(1) grads = {} costs = [] # to keep track of the cost m = X.shape[1] # number of examples (n_x, n_h, n_y) = layers_dims # Initialize parameters dictionary, by calling one of the functions you'd previously implemented ### START CODE HERE ### (≈ 1 line of code) parameters = initialize_parameters(n_x, n_h, n_y) ### END CODE HERE ### # Get W1, b1, W2 and b2 from the dictionary parameters. W1 = parameters["W1"] b1 = parameters["b1"] W2 = parameters["W2"] b2 = parameters["b2"] # Loop (gradient descent) for i in range(0, num_iterations): # Forward propagation: LINEAR -> RELU -> LINEAR -> SIGMOID. Inputs: "X, W1, b1". Output: "A1, cache1, A2, cache2". ### START CODE HERE ### (≈ 2 lines of code) A1, cache1 = linear_activation_forward(X, W1, b1, 'relu') A2, cache2 = linear_activation_forward(A1, W2, b2, 'sigmoid') ### END CODE HERE ### # Compute cost ### START CODE HERE ### (≈ 1 line of code) cost = compute_cost(A2, Y) ### END CODE HERE ### # Initializing backward propagation dA2 = - (np.divide(Y, A2) - np.divide(1 - Y, 1 - A2)) # Backward propagation. Inputs: "dA2, cache2, cache1". Outputs: "dA1, dW2, db2; also dA0 (not used), dW1, db1". ### START CODE HERE ### (≈ 2 lines of code) dA1, dW2, db2 = linear_activation_backward(dA2, cache2, 'sigmoid') dA0, dW1, db1 = linear_activation_backward(dA1, cache1, 'relu') ### END CODE HERE ### # Set grads['dWl'] to dW1, grads['db1'] to db1, grads['dW2'] to dW2, grads['db2'] to db2 grads['dW1'] = dW1 grads['db1'] = db1 grads['dW2'] = dW2 grads['db2'] = db2 # Update parameters. ### START CODE HERE ### (approx. 
1 line of code) parameters = update_parameters(parameters, grads, learning_rate) ### END CODE HERE ### # Retrieve W1, b1, W2, b2 from parameters W1 = parameters["W1"] b1 = parameters["b1"] W2 = parameters["W2"] b2 = parameters["b2"] # Print the cost every 100 training example if print_cost and i % 100 == 0: print("Cost after iteration {}: {}".format(i, np.squeeze(cost))) if print_cost and i % 100 == 0: costs.append(cost) # plot the cost plt.plot(np.squeeze(costs)) plt.ylabel('cost') plt.xlabel('iterations (per tens)') plt.title("Learning rate =" + str(learning_rate)) plt.show() return parameters Explanation: $12,288$ equals $64 \times 64 \times 3$ which is the size of one reshaped image vector. 3 - Architecture of your model Now that you are familiar with the dataset, it is time to build a deep neural network to distinguish cat images from non-cat images. You will build two different models: - A 2-layer neural network - An L-layer deep neural network You will then compare the performance of these models, and also try out different values for $L$. Let's look at the two architectures. 3.1 - 2-layer neural network <img src="images/2layerNN_kiank.png" style="width:650px;height:400px;"> <caption><center> <u>Figure 2</u>: 2-layer neural network. <br> The model can be summarized as: INPUT -> LINEAR -> RELU -> LINEAR -> SIGMOID -> OUTPUT. </center></caption> <u>Detailed Architecture of figure 2</u>: - The input is a (64,64,3) image which is flattened to a vector of size $(12288,1)$. - The corresponding vector: $[x_0,x_1,...,x_{12287}]^T$ is then multiplied by the weight matrix $W^{[1]}$ of size $(n^{[1]}, 12288)$. - You then add a bias term and take its relu to get the following vector: $[a_0^{[1]}, a_1^{[1]},..., a_{n^{[1]}-1}^{[1]}]^T$. - You then repeat the same process. - You multiply the resulting vector by $W^{[2]}$ and add your intercept (bias). - Finally, you take the sigmoid of the result. If it is greater than 0.5, you classify it to be a cat. 3.2 - L-layer deep neural network It is hard to represent an L-layer deep neural network with the above representation. However, here is a simplified network representation: <img src="images/LlayerNN_kiank.png" style="width:650px;height:400px;"> <caption><center> <u>Figure 3</u>: L-layer neural network. <br> The model can be summarized as: [LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID</center></caption> <u>Detailed Architecture of figure 3</u>: - The input is a (64,64,3) image which is flattened to a vector of size (12288,1). - The corresponding vector: $[x_0,x_1,...,x_{12287}]^T$ is then multiplied by the weight matrix $W^{[1]}$ and then you add the intercept $b^{[1]}$. The result is called the linear unit. - Next, you take the relu of the linear unit. This process could be repeated several times for each $(W^{[l]}, b^{[l]})$ depending on the model architecture. - Finally, you take the sigmoid of the final linear unit. If it is greater than 0.5, you classify it to be a cat. 3.3 - General methodology As usual you will follow the Deep Learning methodology to build the model: 1. Initialize parameters / Define hyperparameters 2. Loop for num_iterations: a. Forward propagation b. Compute cost function c. Backward propagation d. Update parameters (using parameters, and grads from backprop) 4. Use trained parameters to predict labels Let's now implement those two models! 
4 - Two-layer neural network Question: Use the helper functions you have implemented in the previous assignment to build a 2-layer neural network with the following structure: LINEAR -> RELU -> LINEAR -> SIGMOID. The functions you may need and their inputs are: python def initialize_parameters(n_x, n_h, n_y): ... return parameters def linear_activation_forward(A_prev, W, b, activation): ... return A, cache def compute_cost(AL, Y): ... return cost def linear_activation_backward(dA, cache, activation): ... return dA_prev, dW, db def update_parameters(parameters, grads, learning_rate): ... return parameters End of explanation parameters = two_layer_model(train_x, train_y, layers_dims = (n_x, n_h, n_y), num_iterations = 2500, print_cost=True) Explanation: Run the cell below to train your parameters. See if your model runs. The cost should be decreasing. It may take up to 5 minutes to run 2500 iterations. Check if the "Cost after iteration 0" matches the expected output below, if not click on the square (⬛) on the upper bar of the notebook to stop the cell and try to find your error. End of explanation predictions_train = predict(train_x, train_y, parameters) Explanation: Expected Output: <table> <tr> <td> **Cost after iteration 0**</td> <td> 0.6930497356599888 </td> </tr> <tr> <td> **Cost after iteration 100**</td> <td> 0.6464320953428849 </td> </tr> <tr> <td> **...**</td> <td> ... </td> </tr> <tr> <td> **Cost after iteration 2400**</td> <td> 0.048554785628770206 </td> </tr> </table> Good thing you built a vectorized implementation! Otherwise it might have taken 10 times longer to train this. Now, you can use the trained parameters to classify images from the dataset. To see your predictions on the training and test sets, run the cell below. End of explanation predictions_test = predict(test_x, test_y, parameters) Explanation: Expected Output: <table> <tr> <td> **Accuracy**</td> <td> 1.0 </td> </tr> </table> End of explanation ### CONSTANTS ### layers_dims = [12288, 20, 7, 5, 1] # 5-layer model # GRADED FUNCTION: L_layer_model def L_layer_model(X, Y, layers_dims, learning_rate = 0.0075, num_iterations = 3000, print_cost=False):#lr was 0.009 Implements a L-layer neural network: [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID. Arguments: X -- data, numpy array of shape (number of examples, num_px * num_px * 3) Y -- true "label" vector (containing 0 if cat, 1 if non-cat), of shape (1, number of examples) layers_dims -- list containing the input size and each layer size, of length (number of layers + 1). learning_rate -- learning rate of the gradient descent update rule num_iterations -- number of iterations of the optimization loop print_cost -- if True, it prints the cost every 100 steps Returns: parameters -- parameters learnt by the model. They can then be used to predict. np.random.seed(1) costs = [] # keep track of cost # Parameters initialization. ### START CODE HERE ### parameters = initialize_parameters_deep(layers_dims) ### END CODE HERE ### # Loop (gradient descent) for i in range(0, num_iterations): # Forward propagation: [LINEAR -> RELU]*(L-1) -> LINEAR -> SIGMOID. ### START CODE HERE ### (≈ 1 line of code) AL, caches = L_model_forward(X, parameters) ### END CODE HERE ### # Compute cost. ### START CODE HERE ### (≈ 1 line of code) cost = compute_cost(AL, Y) ### END CODE HERE ### # Backward propagation. ### START CODE HERE ### (≈ 1 line of code) grads = L_model_backward(AL, Y, caches) ### END CODE HERE ### # Update parameters. 
### START CODE HERE ### (≈ 1 line of code) parameters = update_parameters(parameters, grads, learning_rate) ### END CODE HERE ### # Print the cost every 100 training example if print_cost and i % 100 == 0: print ("Cost after iteration %i: %f" %(i, cost)) if print_cost and i % 100 == 0: costs.append(cost) # plot the cost plt.plot(np.squeeze(costs)) plt.ylabel('cost') plt.xlabel('iterations (per tens)') plt.title("Learning rate =" + str(learning_rate)) plt.show() return parameters Explanation: Expected Output: <table> <tr> <td> **Accuracy**</td> <td> 0.72 </td> </tr> </table> Note: You may notice that running the model on fewer iterations (say 1500) gives better accuracy on the test set. This is called "early stopping" and we will talk about it in the next course. Early stopping is a way to prevent overfitting. Congratulations! It seems that your 2-layer neural network has better performance (72%) than the logistic regression implementation (70%, assignment week 2). Let's see if you can do even better with an $L$-layer model. 5 - L-layer Neural Network Question: Use the helper functions you have implemented previously to build an $L$-layer neural network with the following structure: [LINEAR -> RELU]$\times$(L-1) -> LINEAR -> SIGMOID. The functions you may need and their inputs are: python def initialize_parameters_deep(layers_dims): ... return parameters def L_model_forward(X, parameters): ... return AL, caches def compute_cost(AL, Y): ... return cost def L_model_backward(AL, Y, caches): ... return grads def update_parameters(parameters, grads, learning_rate): ... return parameters End of explanation parameters = L_layer_model(train_x, train_y, layers_dims, num_iterations = 2500, print_cost = True) Explanation: You will now train the model as a 5-layer neural network. Run the cell below to train your model. The cost should decrease on every iteration. It may take up to 5 minutes to run 2500 iterations. Check if the "Cost after iteration 0" matches the expected output below, if not click on the square (⬛) on the upper bar of the notebook to stop the cell and try to find your error. End of explanation pred_train = predict(train_x, train_y, parameters) Explanation: Expected Output: <table> <tr> <td> **Cost after iteration 0**</td> <td> 0.771749 </td> </tr> <tr> <td> **Cost after iteration 100**</td> <td> 0.672053 </td> </tr> <tr> <td> **...**</td> <td> ... </td> </tr> <tr> <td> **Cost after iteration 2400**</td> <td> 0.092878 </td> </tr> </table> End of explanation pred_test = predict(test_x, test_y, parameters) Explanation: <table> <tr> <td> **Train Accuracy** </td> <td> 0.985645933014 </td> </tr> </table> End of explanation print_mislabeled_images(classes, test_x, test_y, pred_test) Explanation: Expected Output: <table> <tr> <td> **Test Accuracy**</td> <td> 0.8 </td> </tr> </table> Congrats! It seems that your 5-layer neural network has better performance (80%) than your 2-layer neural network (72%) on the same test set. This is good performance for this task. Nice job! Though in the next course on "Improving deep neural networks" you will learn how to obtain even higher accuracy by systematically searching for better hyperparameters (learning_rate, layers_dims, num_iterations, and others you'll also learn in the next course). 6) Results Analysis First, let's take a look at some images the L-layer model labeled incorrectly. This will show a few mislabeled images. 
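Independently of the print_mislabeled_images helper, the mislabeled test examples can also be located directly: for 0/1 labels, prediction + label equals 1 exactly when the two disagree (a sketch using the pred_test and test_y arrays defined above).
mislabeled = np.where((pred_test + test_y) == 1)[1]
print("Number of mislabeled test images:", len(mislabeled))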
End of explanation ## START CODE HERE ## my_image = "my_image.jpg" # change this to the name of your image file my_label_y = [1] # the true class of your image (1 -> cat, 0 -> non-cat) ## END CODE HERE ## fname = "images/" + my_image image = np.array(ndimage.imread(fname, flatten=False)) my_image = scipy.misc.imresize(image, size=(num_px,num_px)).reshape((num_px*num_px*3,1)) my_predicted_image = predict(my_image, my_label_y, parameters) plt.imshow(image) print ("y = " + str(np.squeeze(my_predicted_image)) + ", your L-layer model predicts a \"" + classes[int(np.squeeze(my_predicted_image)),].decode("utf-8") + "\" picture.") Explanation: A few type of images the model tends to do poorly on include: - Cat body in an unusual position - Cat appears against a background of a similar color - Unusual cat color and species - Camera Angle - Brightness of the picture - Scale variation (cat is very large or small in image) 7) Test with your own image (optional/ungraded exercise) Congratulations on finishing this assignment. You can use your own image and see the output of your model. To do that: 1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub. 2. Add your image to this Jupyter Notebook's directory, in the "images" folder 3. Change your image's name in the following code 4. Run the code and check if the algorithm is right (1 = cat, 0 = non-cat)! End of explanation
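If this is run on a newer SciPy where ndimage.imread and scipy.misc.imresize have been removed, the same preprocessing can be done with PIL alone (a sketch using the Image import already at the top of the notebook):
img = Image.open(fname).convert('RGB').resize((num_px, num_px))
my_image = np.array(img).reshape((num_px * num_px * 3, 1))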
1,960
Given the following text description, write Python code to implement the functionality described below step by step Description: Visualization 1 Step1: Scatter plots Learn how to use Matplotlib's plt.scatter function to make a 2d scatter plot. Generate random data using np.random.randn. Style the markers (color, size, shape, alpha) appropriately. Include an x and y label and title. Step2: Histogram Learn how to use Matplotlib's plt.hist function to make a 1d histogram. Generate random data using np.random.randn. Figure out how to set the number of histogram bins and other style options. Include an x and y label and title.
Python Code: %matplotlib inline import matplotlib.pyplot as plt import numpy as np Explanation: Visualization 1: Matplotlib Basics Exercises End of explanation x=np.random.randn(22) y=np.random.randn(22) plt.xlabel('x') plt.ylabel('y') plt.title('Scatter') plt.scatter(x,y,s=22.0,c='g',marker='x',alpha=.7,linewidths=2.2) #plt.xlim(0,1) #plt.ylim() Explanation: Scatter plots Learn how to use Matplotlib's plt.scatter function to make a 2d scatter plot. Generate random data using np.random.randn. Style the markers (color, size, shape, alpha) appropriately. Include an x and y label and title. End of explanation data=np.random.randn(22) plt.hist(data) Explanation: Histogram Learn how to use Matplotlib's plt.hist function to make a 1d histogram. Generate random data using np.random.randn. Figure out how to set the number of histogram bins and other style options. Include an x and y label and title. End of explanation
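A possible completion of the histogram exercise above (the styling choices are arbitrary): set the bin count explicitly and add the requested labels and title.
data = np.random.randn(1000)
plt.hist(data, bins=30, color='g', alpha=0.7)
plt.xlabel('value')
plt.ylabel('count')
plt.title('Histogram of 1000 samples from N(0, 1)')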
1,961
Given the following text description, write Python code to implement the functionality described below step by step Description: Bangladesh Medical Association(BMA) member data extraction Version Step2: We need a function to parse the HTML data after extracting the result. Step3: Now we extract the result pages against each of the id(1 to 66000) and store the strings in a pandas Dataframe. We will tokenize the resultant string later. Step4: Parsing Now upon observation we will see that nugges of information is encapsulated within a specific piece of HTML sting. Using those patterns we can extract the relevant informations. Step5: Photo extraction Now we have the information about the doctors. We can also extract the image files containting the photos.
Python Code: #Load the necessary modules from mechanize import Browser import pandas as pd from IPython.core.display import HTML import requests Explanation: Bangladesh Medical Association(BMA) member data extraction Version : 1.0<br> Date : 2015-05-21 This notebook will illustrate the approach undertaken to extract the BMA doctor's registration. All doctors in Bangladesh recive a registration number at BMA after successfully completing their internship. Using that number they can establish their credivbility as a doctor. Using these numbers one can verify that someone is a legitimate doctor. Using BMA search portal one can search using only the registration number. But since it is not common in this country to routinely publish their BMA number, we need an interface using which we can search the database using doctor's name also. Tools used: Python 2 IPython : python module which provided a python shell for interactive computing within a browser and terminal Mechanize : python module for interacting with web page and submitting form (Python 2 only module) Pandas : python module for handling large dataset Requests: simple HTTP library for python Unfortunately the data is very barebone at BMA website. Doctor's name, father's name, address and an official photo is provided against each id number. But we can create a master table which we can populate from other sources. This interface provides us 66000 medical doctor and 4000 dental doctor's worth of information. Currently we have around 70000 doctors in our country. So up can expect data upto couple year ago. This is a first attempt to collect the data and accumulate them. Several crude hacks were employed to ensure that a working model is up and running as soon as possible. Initially the informations are dumped in a CSV files after we have all the data they will be imported into a PostgreSQL database. First use of the database might be to implement an mobile app interface where a patient can search for a doctor by his name or registration number and see his photo to verify that he is legit doctor. Extraction End of explanation def extract_sub_string(string, start, finish): extract a substring between the 'start' substring and the first occurence of 'finish' substring after that point. :param string: main string, to be parsed :type string: str :param start: starting string :type start: str :param end: ending string :type end: str new_string_index = string.find(start) new_string = string[new_string_index:] end_index =new_string.find(finish) final_string = string[new_string_index:new_string_index+end_index] return final_string Explanation: We need a function to parse the HTML data after extracting the result. 
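A tiny illustration of the helper above (the HTML fragment is made up): the returned slice starts at the first occurrence of start and stops just before the following occurrence of finish.
html = "<td>Registration Number</td>\r\n 1234</td>"
extract_sub_string(html, "Registration", "</td>")   # -> 'Registration Number'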
End of explanation start = 'doctor_info' finish="</div" extracted_strings = [] extracted_df = pd.DataFrame(columns=['extracted']) for reg_no in xrange(1,66001): browser = Browser() browser.open("http://bmdc.org.bd/doctors-info/") for form in browser.forms(): pass # We have 2 forms in this page and we going to select the second form browser.select_form(nr=1) # This form has 2 input fields, first field, search_doc_id takes an number and second field type indicates if the # id is assocated to a medical doctor or dentist form['search_doc_id']=str(reg_no) form['type']=['1'] # Submit the form and read the result response = browser.submit() content = response.read() str_content = str(content) #Extract only the relevant portion extracted_str = extract_sub_string(str_content, start, finish) extracted_strings.append(extracted_str) # Originally these commnted out snipppets were run so that each group of 100 doctors are recorded at a time in # seperate csv files. for testing and stability purpose. Each 100 doctors took around 6-7 minutes to record. #if reg_no%100==0: # file_number = reg_no/100 # extracted_df = pd.DataFrame(columns=['extracted']) # extracted_df.extracted = extracted_strings # extracted_df.to_csv(str(file_number)+'.csv') # extracted_strings = [] extracted_df.extracted = extracted_strings extracted_df.to_csv('all_bma_doctor.csv') Explanation: Now we extract the result pages against each of the id(1 to 66000) and store the strings in a pandas Dataframe. We will tokenize the resultant string later. End of explanation tokenized_df = pd.DataFrame(columns=['Registration','Name','Father','Address', 'Division']) #Since originally we created a number of csv files each containing 100 doctors we parsed them differently. #file_list = [] #for item in xrange(1,66): # file_list.append(str(item)+'.csv') #for file_ in file_list: df = pd.read_csv('all_bma_doctor.csv') for index in df.index: string = df.ix[index, 'extracted'] start="Registration Number</td>\r\n" finish='</td>\r\n </tr>\r\n\r\n <tr class="odd">\r\n' reg_no = extract_sub_string(string , start, finish) reg_no = reg_no.strip() reg_no = reg_no.split(" ")[-1] #reg_no start = '<td>Doctor\'s Name</td>\r\n' finish = '</td>\r\n </tr>\r\n' dr_name = extract_sub_string(string , start, finish) dr_name=dr_name.strip() dr_name = dr_name.split(">")[-1] #dr_name start = "<td>Father's Name</td>" finish = "</td>\r\n </tr>" father = extract_sub_string(string , start, finish) father = father.strip() father = father.split(">")[-1] #father start = '<td> <address> ' finish = "</address>" address = extract_sub_string(string , start, finish) address = address.strip() address = address.split("<address>")[-1] address = address.replace("<br/>",' ').strip() #address division = 'Medical' values = pd.Series() values['Registration'] = reg_no values['Name'] = dr_name values['Father'] = father values['Address'] = address values['Division'] = division tokenized_df.loc[len(tokenized_df)] = values tokenized_df[5000:5010] Explanation: Parsing Now upon observation we will see that nugges of information is encapsulated within a specific piece of HTML sting. Using those patterns we can extract the relevant informations. End of explanation for bma_id in xrange(1,66001): f = open(str(bma_id)+'.jpg','wb') f.write(requests.get('http://bmdc.org.bd/dphotos/medical/'+str(bma_id)+'.JPG').content) f.close() Explanation: Photo extraction Now we have the information about the doctors. We can also extract the image files containting the photos. End of explanation
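A slightly more defensive variant of the download loop (a sketch): checking the status code avoids writing error pages to disk when a photo is missing.
resp = requests.get('http://bmdc.org.bd/dphotos/medical/' + str(1) + '.JPG')
if resp.status_code == 200:
    with open('1.jpg', 'wb') as f:
        f.write(resp.content)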
1,962
Given the following text description, write Python code to implement the functionality described below step by step Description: Step4: Вопросы Какие есть два способа создать поток, используя модуль threading? Что такое кооперативная многозадачность? Что такое Future? В чем основные отличия между асинхронными и синхронными функциями в Python? Какая функция при работе с корутинами является, грубо говоря, аналогом функции concurrent.futures.ThreadPoolExecutor().submit() из мира потоков? Что такое MVC и какие файлы в Django отвечают за каждый компонент? В чем главные отличия Django, Flask и aiohttp? Асинхронный бот Базы данных Реляционные (MySQL, PostgreSQL, Oracle, SQLite) Key-Value + document-oriented (Redis, Tarantool, MongoDB, Elasticsearch) Графовые (Neo4j) и т.д. Распределенные? (DNS) In-Memory? (Memcached) Реляционные базы данных Записи могут иметь ключи, указывающие друг на друга Чаще всего для работы с данными используется SQL (https Step5: ORM - Object-Relational Mapping Установим соответствие между записями в базе и объектами в коде Получим удобство в коде за счет меньшей гибкости построения запросов и большего оверхеда Вернемся к нашему сайту Миграции - это преобразования схемы и/или типов данных, меняющие структуру базы как в процессе разработки, так и на боевых серверах python manage.py migrate Step6: Нужно добавить наше приложение в INSTALLED_APPS в settings.py Step7: Создадим миграцию для наших новых моделей python manage.py makemigrations hello Но что конкретно он нагенерировал? python manage.py sqlmigrate hello 0001 Ну вот, теперь все понятно python manage.py migrate Встроенный Python shell (рекомендую IPython) python manage.py shell Step8: Админка python manage.py createsuperuser Step9: python manage.py runserver А что насчет не-Django? SQLAlchemy, вот что! https
Python Code: import sqlite3 conn = sqlite3.connect('example.db') c = conn.cursor() c.execute( CREATE TABLE employees ( id int unsigned NOT NULL, first_name string NOT NULL, last_name string NOT NULL, department_id int unsigned, PRIMARY KEY (id) )) c.execute( CREATE TABLE departments ( id int unsigned NOT NULL, title string NOT NULL, PRIMARY KEY (id) )) conn.commit() c.execute( INSERT INTO `employees` (`id`, `first_name`, `last_name`, `department_id`) VALUES ('1', 'Darth', 'Vader', 1), ('2', 'Darth', 'Maul', 1), ('3', 'Kylo', 'Ren', 1), ('4', 'Magister', 'Yoda', 2), ('5', 'Leia', 'Organa', 2), ('6', 'Luke', 'Skywalker', 2), ('7', 'Jar Jar', 'Binks', NULL) ) c.execute( INSERT INTO `departments` (`id`, `title`) VALUES ('1', 'Dark Side Inc.'), ('2', 'Light Side Ltd.'), ('3', 'Rebels'), ('4', 'Wookie') ) conn.commit() c.execute("SELECT emp.last_name AS Surname, d.title AS Department FROM departments d LEFT JOIN employees emp ON (d.id = emp.department_id)") print(c.fetchall()) Explanation: Вопросы Какие есть два способа создать поток, используя модуль threading? Что такое кооперативная многозадачность? Что такое Future? В чем основные отличия между асинхронными и синхронными функциями в Python? Какая функция при работе с корутинами является, грубо говоря, аналогом функции concurrent.futures.ThreadPoolExecutor().submit() из мира потоков? Что такое MVC и какие файлы в Django отвечают за каждый компонент? В чем главные отличия Django, Flask и aiohttp? Асинхронный бот Базы данных Реляционные (MySQL, PostgreSQL, Oracle, SQLite) Key-Value + document-oriented (Redis, Tarantool, MongoDB, Elasticsearch) Графовые (Neo4j) и т.д. Распределенные? (DNS) In-Memory? (Memcached) Реляционные базы данных Записи могут иметь ключи, указывающие друг на друга Чаще всего для работы с данными используется SQL (https://ru.wikipedia.org/wiki/SQL) SQL - Structured Query Language SQL SELECT emp.last_name AS Surname, d.title AS Department FROM departments d LEFT JOIN employees emp ON (d.id = emp.department_id) http://sqlfiddle.com ``SQL CREATE TABLE IF NOT EXISTSemployees(idint(6) unsigned NOT NULL,first_namevarchar(30) NOT NULL,last_namevarchar(30) NOT NULL,department_idint(6) unsigned, PRIMARY KEY (id`) ) DEFAULT CHARSET=utf8; CREATE TABLE IF NOT EXISTS departments ( id int(6) unsigned NOT NULL, title varchar(30) NOT NULL, PRIMARY KEY (id) ) DEFAULT CHARSET=utf8; ``` ``SQL INSERT INTOemployees(id,first_name,last_name,department_id`) VALUES ('1', 'Darth', 'Vader', 1), ('2', 'Darth', 'Maul', 1), ('3', 'Kylo', 'Ren', 1), ('4', 'Magister', 'Yoda', 2), ('5', 'Leia', 'Organa', 2), ('6', 'Luke', 'Skywalker', 2), ('7', 'Jar Jar', 'Binks', NULL); INSERT INTO departments (id, title) VALUES ('1', 'Dark Side Inc.'), ('2', 'Light Side Ltd.'), ('3', 'Rebels'), ('4', 'Wookie'); ``` Python DB API 2.0 https://www.python.org/dev/peps/pep-0249/ https://docs.python.org/3/library/sqlite3.html End of explanation # hello/models.py from django.db import models class Question(models.Model): question_text = models.CharField(max_length=200) pub_date = models.DateTimeField('date published') class Choice(models.Model): question = models.ForeignKey(Question, on_delete=models.CASCADE) choice_text = models.CharField(max_length=200) votes = models.IntegerField(default=0) Explanation: ORM - Object-Relational Mapping Установим соответствие между записями в базе и объектами в коде Получим удобство в коде за счет меньшей гибкости построения запросов и большего оверхеда Вернемся к нашему сайту Миграции - это преобразования схемы и/или типов данных, меняющие 
структуру базы как в процессе разработки, так и на боевых серверах python manage.py migrate End of explanation INSTALLED_APPS = [ 'hello', # <---- вот сюда, например 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', ] Explanation: Нужно добавить наше приложение в INSTALLED_APPS в settings.py End of explanation import django django.setup() from django.utils import timezone # поддержка временных зон from hello.models import Question, Choice Question.objects.all() # вернуть все объекты из базы q = Question(question_text="Чёкак?", pub_date=timezone.now()) # создать объект q.save() # сохранить объект в базу q.question_text = "Чёкаво?" q.save() str(q.query) # заглянуть внутрь Question.objects.filter(question_text__startswith='Чё') # фильтруем по строчкам current_year = timezone.now().year Question.objects.get(pub_date__year=current_year) # фильтруем по году Question.objects.get(id=2) q.choice_set.all() # все варианты ответа для данного вопроса c = q.choice_set.create(choice_text='Кто бы знал', votes=0) # создаем связанный объект c.delete() # удаляем объект Explanation: Создадим миграцию для наших новых моделей python manage.py makemigrations hello Но что конкретно он нагенерировал? python manage.py sqlmigrate hello 0001 Ну вот, теперь все понятно python manage.py migrate Встроенный Python shell (рекомендую IPython) python manage.py shell End of explanation # hello/admin.py from django.contrib import admin from .models import Question admin.site.register(Question) Explanation: Админка python manage.py createsuperuser End of explanation import asyncio import aiohttp from aioes import Elasticsearch from datetime import datetime es = Elasticsearch(['localhost:9200']) URL = "https://ghibliapi.herokuapp.com/species/603428ba-8a86-4b0b-a9f1-65df6abef3d3" async def create_db(): async with aiohttp.ClientSession() as session: async with session.get(URL) as resp: films_urls = (await resp.json())["films"] for i, film_url in enumerate(films_urls): async with session.get(film_url) as resp: res = await es.index( index="coding-index", doc_type='film', id=i, body=await resp.json() ) print(res['created']) loop = asyncio.get_event_loop() loop.run_until_complete(create_db()) # https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl-match-query.html async def get_by_id(key): return await es.get(index='coding-index', doc_type='film', id=key) async def search_by_director(director): return await es.search(index='coding-index', body={"query": {"match": {'director': director}}}) async def search_in_description(sentence): return await es.search(index='coding-index', body={"query": {"match": {'description': sentence}}}) # loop.run_until_complete(get_by_id(0)) # loop.run_until_complete(search_by_director("Hayao Miyazaki")) loop.run_until_complete(search_in_description("cat")) Explanation: python manage.py runserver А что насчет не-Django? SQLAlchemy, вот что! https://www.sqlalchemy.org/ http://docs.sqlalchemy.org/en/latest/orm/tutorial.html Есть и другие https://ponyorm.com/ http://docs.peewee-orm.com/en/latest/ Проблемы с реляционными базами Не очень хорошо масштабируются Любое изменение схемы приводит к гиганским миграциям Плохо поддерживают асинхронность Распространенные СУБД плохо интергрируются с вычислительными решениями Но вообще PostgreSQL неплох Как насчет NoSQL? 
Redis (https://redis.io/), https://aioredis.readthedocs.io/en/latest/
Elasticsearch (https://www.elastic.co/products/elasticsearch), https://aioes.readthedocs.io/en/latest/
MongoDB (https://www.mongodb.com/), https://motor.readthedocs.io/en/stable/
Let's try Elasticsearch
https://www.elastic.co/downloads/elasticsearch
Start bin/elasticsearch and check that it is running: http://localhost:9200/
pip install aioes
For those wary of asynchronous code: https://elasticsearch-py.readthedocs.io/en/master/
End of explanation
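The slides point to SQLAlchemy as the usual ORM outside Django but stop at the links. Below is a minimal sketch of the same kind of Question model expressed with SQLAlchemy's declarative ORM. The table and column names simply mirror the Django model shown earlier and are otherwise arbitrary, and the snippet is synchronous: for the asyncio bot you would either run such calls in an executor or pick one of the async drivers listed above.
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class Question(Base):
    # Mirrors the Django Question model above; names are illustrative only.
    __tablename__ = 'questions'
    id = Column(Integer, primary_key=True)
    question_text = Column(String(200))

engine = create_engine('sqlite:///example_orm.db')   # any SQLAlchemy URL works here
Base.metadata.create_all(engine)                     # issues CREATE TABLE if needed

Session = sessionmaker(bind=engine)
session = Session()
session.add(Question(question_text='How are you?'))
session.commit()

# Query back, analogous to Question.objects.filter(question_text__startswith=...)
print(session.query(Question).filter(Question.question_text.startswith('How')).all())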
1,963
Given the following text description, write Python code to implement the functionality described below step by step Description: Introduction This module provides tools to simulate scattering intensities detected by POPS as a function of particle size, refractive index, and some more less obvious parameters. Simulations are based on mie scattering, which gives the module the name. The only function that is worth mentioning is the one below. Its worth exploring all the optional parameters. Imports Step1: standard settings Step2: Wavelength dependence as an example what would it mean to use a 445nm instead of a 405nm laser Step3: refractive index dependence
Python Code:
from atmPy.aerosols.instruments.POPS import mie
%matplotlib inline
import matplotlib.pylab as plt
plt.rcParams['figure.dpi'] = 200
Explanation: Introduction
This module provides tools to simulate scattering intensities detected by POPS as a function of particle size, refractive index, and some less obvious parameters. Simulations are based on Mie scattering, which gives the module its name. The only function worth mentioning is the one below. It's worth exploring all the optional parameters.
Imports
End of explanation
d,amp = mie.makeMie_diameter(noOfdiameters=1000)
f,a = plt.subplots()
a.plot(d,amp)
a.loglog()
a.set_xlim((0.1,3))
a.set_ylabel('Signal intensity (arb. u.)')
a.set_xlabel('Diameter ($\mu$m)')
Explanation: standard settings
End of explanation
noofpoints = 1000
d,amp405 = mie.makeMie_diameter(noOfdiameters=noofpoints, WavelengthInUm=0.405)
d,amp445 = mie.makeMie_diameter(noOfdiameters=noofpoints, WavelengthInUm=0.445)
f,a = plt.subplots()
a.plot(d, amp405, label = '405')
a.plot(d, amp445, label = '445')
a.loglog()
a.legend()
lim = [0.14, 3]
arglim = [abs(d - lim[0]).argmin(), abs(d - lim[1]).argmin()]
# arglim
scs_at_lim_405 = amp405[arglim]
# scs_at_lim_405
# the lower detection limit will go up to
d[abs(amp445 - scs_at_lim_405[0]).argmin()]
Explanation: Wavelength dependence
As an example: what would it mean to use a 445 nm instead of a 405 nm laser?
End of explanation
nop = 1000
dI,ampI = mie.makeMie_diameter(noOfdiameters=nop, IOR=1.4)
dII,ampII = mie.makeMie_diameter(noOfdiameters=nop, IOR=1.5)
dIII,ampIII = mie.makeMie_diameter(noOfdiameters=nop, IOR=1.6)
f,a = plt.subplots()
a.plot(dI,ampI, label='IOR = 1.4')
a.plot(dII,ampII, label='IOR = 1.5')
a.plot(dIII,ampIII, label='IOR = 1.6')
a.loglog()
a.legend()
a.set_xlim((0.1,3))
a.set_ylabel('Signal intensity (arb. u.)')
a.set_xlabel('Diameter ($\mu$m)')
Explanation: refractive index dependence
End of explanation
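To get a feel for why the 445 nm curve shifts the lower detection limit, a rough scaling argument can be made without the library at all: the Mie response is governed by the size parameter x = pi * d / wavelength, so to first order a longer wavelength "sees" a given particle as if it were proportionally smaller. The numbers below are only a back-of-the-envelope estimate under that assumption; the makeMie_diameter comparison above is the proper calculation, since the scattered intensity also depends on wavelength directly.
# Rough estimate of how the lower detection limit shifts when moving from a
# 405 nm to a 445 nm laser, assuming the response depends mainly on the size
# parameter x = pi * d / wavelength (an approximation, not the full Mie result).
wl_405, wl_445 = 0.405, 0.445      # wavelengths in micrometers
d_lim_405 = 0.14                   # lower detection limit (um) quoted above for 405 nm
d_lim_445_estimate = d_lim_405 * wl_445 / wl_405
print('estimated 445 nm lower limit: %.3f um' % d_lim_445_estimate)   # ~0.154 um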
1,964
Given the following text description, write Python code to implement the functionality described below step by step Description: This world is far from Normal(ly distributed) Step1: Create some toy data but also add some outliers. Step2: Plot the data together with the true regression line (the three points in the upper left corner are the outliers we added). Step3: Robust Regression Lets see what happens if we estimate our Bayesian linear regression model using the glm() function as before. This function takes a Patsy string to describe the linear model and adds a Normal likelihood by default. Step4: To evaluate the fit, I am plotting the posterior predictive regression lines by taking regression parameters from the posterior distribution and plotting a regression line for each (this is all done inside of plot_posterior_predictive()). Step5: As you can see, the fit is quite skewed and we have a fair amount of uncertainty in our estimate as indicated by the wide range of different posterior predictive regression lines. Why is this? The reason is that the normal distribution does not have a lot of mass in the tails and consequently, an outlier will affect the fit strongly. A Frequentist would estimate a Robust Regression and use a non-quadratic distance measure to evaluate the fit. But what's a Bayesian to do? Since the problem is the light tails of the Normal distribution we can instead assume that our data is not normally distributed but instead distributed according to the Student T distribution which has heavier tails as shown next (I read about this trick in "The Kruschke", aka the puppy-book; but I think Gelman was the first to formulate this). Lets look at those two distributions to get a feel for them. Step6: As you can see, the probability of values far away from the mean (0 in this case) are much more likely under the T distribution than under the Normal distribution. To define the usage of a T distribution in PyMC3 we can pass a family object -- T -- that specifies that our data is Student T-distributed (see glm.families for more choices). Note that this is the same syntax as R and statsmodels use.
Python Code: %matplotlib inline import pymc3 as pm import matplotlib.pyplot as plt import numpy as np import theano Explanation: This world is far from Normal(ly distributed): Bayesian Robust Regression in PyMC3 Author: Thomas Wiecki This tutorial first appeard as a post in small series on Bayesian GLMs on my blog: The Inference Button: Bayesian GLMs made easy with PyMC3 This world is far from Normal(ly distributed): Robust Regression in PyMC3 The Best Of Both Worlds: Hierarchical Linear Regression in PyMC3 In this blog post I will write about: How a few outliers can largely affect the fit of linear regression models. How replacing the normal likelihood with Student T distribution produces robust regression. How this can easily be done with PyMC3 and its new glm module by passing a family object. This is the second part of a series on Bayesian GLMs (click here for part I about linear regression). In this prior post I described how minimizing the squared distance of the regression line is the same as maximizing the likelihood of a Normal distribution with the mean coming from the regression line. This latter probabilistic expression allows us to easily formulate a Bayesian linear regression model. This worked splendidly on simulated data. The problem with simulated data though is that it's, well, simulated. In the real world things tend to get more messy and assumptions like normality are easily violated by a few outliers. Lets see what happens if we add some outliers to our simulated data from the last post. Again, import our modules. End of explanation size = 100 true_intercept = 1 true_slope = 2 x = np.linspace(0, 1, size) # y = a + b*x true_regression_line = true_intercept + true_slope * x # add noise y = true_regression_line + np.random.normal(scale=.5, size=size) # Add outliers x_out = np.append(x, [.1, .15, .2]) y_out = np.append(y, [8, 6, 9]) data = dict(x=x_out, y=y_out) Explanation: Create some toy data but also add some outliers. End of explanation fig = plt.figure(figsize=(7, 7)) ax = fig.add_subplot(111, xlabel='x', ylabel='y', title='Generated data and underlying model') ax.plot(x_out, y_out, 'x', label='sampled data') ax.plot(x, true_regression_line, label='true regression line', lw=2.) plt.legend(loc=0); Explanation: Plot the data together with the true regression line (the three points in the upper left corner are the outliers we added). End of explanation with pm.Model() as model: pm.glm.glm('y ~ x', data) start = pm.find_MAP() step = pm.NUTS(scaling=start) trace = pm.sample(2000, step, progressbar=False) Explanation: Robust Regression Lets see what happens if we estimate our Bayesian linear regression model using the glm() function as before. This function takes a Patsy string to describe the linear model and adds a Normal likelihood by default. End of explanation plt.subplot(111, xlabel='x', ylabel='y', title='Posterior predictive regression lines') plt.plot(x_out, y_out, 'x', label='data') pm.glm.plot_posterior_predictive(trace, samples=100, label='posterior predictive regression lines') plt.plot(x, true_regression_line, label='true regression line', lw=3., c='y') plt.legend(loc=0); Explanation: To evaluate the fit, I am plotting the posterior predictive regression lines by taking regression parameters from the posterior distribution and plotting a regression line for each (this is all done inside of plot_posterior_predictive()). 
End of explanation normal_dist = pm.Normal.dist(mu=0, sd=1) t_dist = pm.T.dist(mu=0, lam=1, nu=1) x_eval = np.linspace(-8, 8, 300) plt.plot(x_eval, theano.tensor.exp(normal_dist.logp(x_eval)).eval(), label='Normal', lw=2.) plt.plot(x_eval, theano.tensor.exp(t_dist.logp(x_eval)).eval(), label='Student T', lw=2.) plt.xlabel('x') plt.ylabel('Probability density') plt.legend(); Explanation: As you can see, the fit is quite skewed and we have a fair amount of uncertainty in our estimate as indicated by the wide range of different posterior predictive regression lines. Why is this? The reason is that the normal distribution does not have a lot of mass in the tails and consequently, an outlier will affect the fit strongly. A Frequentist would estimate a Robust Regression and use a non-quadratic distance measure to evaluate the fit. But what's a Bayesian to do? Since the problem is the light tails of the Normal distribution we can instead assume that our data is not normally distributed but instead distributed according to the Student T distribution which has heavier tails as shown next (I read about this trick in "The Kruschke", aka the puppy-book; but I think Gelman was the first to formulate this). Lets look at those two distributions to get a feel for them. End of explanation with pm.Model() as model_robust: family = pm.glm.families.StudentT() pm.glm.glm('y ~ x', data, family=family) start = pm.find_MAP() step = pm.NUTS(scaling=start) trace_robust = pm.sample(2000, step, progressbar=False) plt.figure(figsize=(5, 5)) plt.plot(x_out, y_out, 'x') pm.glm.plot_posterior_predictive(trace_robust, label='posterior predictive regression lines') plt.plot(x, true_regression_line, label='true regression line', lw=3., c='y') plt.legend(); Explanation: As you can see, the probability of values far away from the mean (0 in this case) are much more likely under the T distribution than under the Normal distribution. To define the usage of a T distribution in PyMC3 we can pass a family object -- T -- that specifies that our data is Student T-distributed (see glm.families for more choices). Note that this is the same syntax as R and statsmodels use. End of explanation
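Beyond the visual check, it can help to compare the two fits numerically. The sketch below summarizes the slope posterior from both traces. Note that glm() creates the coefficient variables itself, so the names 'Intercept' and 'x' follow the Patsy formula; if your PyMC3 version labels them differently, inspect trace.varnames first. The burn-in cut-off of 1000 draws is an arbitrary choice for illustration.
import numpy as np

# Compare the slope estimate under the Normal and Student T likelihoods.
# The true slope used to simulate the data above is 2.
for name, tr in [('Normal likelihood', trace), ('Student T likelihood', trace_robust)]:
    slope_samples = tr['x'][1000:]                    # drop the first draws as burn-in
    low, high = np.percentile(slope_samples, [2.5, 97.5])
    print('%s: slope mean %.2f, 95%% interval [%.2f, %.2f]'
          % (name, slope_samples.mean(), low, high))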
1,965
Given the following text description, write Python code to implement the functionality described below step by step Description: TF-DNNRegressor - ReLU - Spitzer Calibration Data This script show a simple example of using tf.contrib.learn library to create our model. The code is divided in following steps Step1: Load CSVs data Step2: Filtering Categorical and Continuous features We store Categorical, Continuous and Target features names in different variables. This will be helpful in later steps. Step3: Converting Data into Tensors When building a TF.Learn model, the input data is specified by means of an Input Builder function. This builder function will not be called until it is later passed to TF.Learn methods such as fit and evaluate. The purpose of this function is to construct the input data, which is represented in the form of Tensors or SparseTensors. Note that input_fn will be called while constructing the TensorFlow graph, not while running the graph. What it is returning is a representation of the input data as the fundamental unit of TensorFlow computations, a Tensor (or SparseTensor). More detail on input_fn. Step4: Selecting and Engineering Features for the Model We use tf.learn's concept of FeatureColumn which help in transforming raw data into suitable input features. These engineered features will be used when we construct our model. Step5: Defining The Regression Model Following is the simple DNNRegressor model. More detail about hidden_units, etc can be found here. model_dir is used to save and restore our model. This is because once we have trained the model we don't want to train it again, if we only want to predict on new data-set. Step6: Training and Evaluating Our Model add progress bar through python logging Step7: Track Scalable Growth Shrunk data set to 23559 Training samples and 7853 Val/Test samples | n_iters | time (s) | val acc | multicore | gpu | |------------------------------------------------| | 100 | 5.869 | 6.332 | yes | no | | 200 | 6.380 | 13.178 | yes | no | | 500 | 8.656 | 54.220 | yes | no | | 1000 | 12.170 | 66.596 | yes | no | | 2000 | 19.891 | 62.996 | yes | no | | 5000 | 43.589 | 76.586 | yes | no | | 10000 | 80.581 | 66.872 | yes | no | | 20000 | 162.435 | 78.927 | yes | no | | 50000 | 535.584 | 75.493 | yes | no | | 100000 | 1062.656 | 73.162 | yes | no | Step8: Predicting output for test data Most of the time prediction script would be separate from training script (we need not to train on same data again) but I am providing both in same script here; as I am not sure if we can create multiple notebook and somehow share data between them in Kaggle.
Python Code: import pandas as pd import numpy as np import tensorflow as tf tf.logging.set_verbosity(tf.logging.ERROR) import warnings warnings.filterwarnings("ignore") %matplotlib inline from matplotlib import pyplot as plt from sklearn.cross_validation import train_test_split from sklearn.preprocessing import StandardScaler, MinMaxScaler, minmax_scale from sklearn.metrics import r2_score from time import time start0 = time() plt.rcParams['figure.dpi'] = 300 Explanation: TF-DNNRegressor - ReLU - Spitzer Calibration Data This script show a simple example of using tf.contrib.learn library to create our model. The code is divided in following steps: Load CSVs data Filtering Categorical and Continuous features Converting Data into Tensors Selecting and Engineering Features for the Model Defining The Regression Model Training and Evaluating Our Model Predicting output for test data v0.1: Added code for data loading, modeling and prediction model. v0.2: Removed unnecessary output logs. PS: I was able to get a score of 1295.07972 using this script with 70% (of train.csv) data used for training and rest for evaluation. Script took 2hrs for training and 3000 steps were used. End of explanation nSkip = 20 spitzerDataRaw = pd.read_csv('pmap_ch2_0p1s_x4_rmulti_s3_7.csv')[::nSkip] PLDpixels = pd.DataFrame({key:spitzerDataRaw[key] for key in spitzerDataRaw.columns.values if 'pix' in key}) PLDpixels PLDnorm = np.sum(np.array(PLDpixels),axis=1) PLDpixels = (PLDpixels.T / PLDnorm).T PLDpixels spitzerData = spitzerDataRaw.copy() for key in spitzerDataRaw.columns: if key in PLDpixels.columns: spitzerData[key] = PLDpixels[key] testPLD = np.array(pd.DataFrame({key:spitzerData[key] for key in spitzerData.columns.values if 'pix' in key})) assert(not sum(abs(testPLD - np.array(PLDpixels))).all()) print('Confirmed that PLD Pixels have been Normalized to Spec') notFeatures = ['flux', 'fluxerr', 'xerr', 'yerr', 'xycov'] feature_columns = spitzerData.drop(notFeatures,axis=1).columns.values features = spitzerData.drop(notFeatures,axis=1).values labels = spitzerData['flux'].values stdScaler = StandardScaler() features_scaled = stdScaler.fit_transform(features) labels_scaled = stdScaler.fit_transform(labels[:,None]).ravel() x_valtest, x_train, y_valtest, y_train = train_test_split(features_scaled, labels_scaled, test_size=0.6, random_state=42) x_val, x_test, y_val, y_test = train_test_split(x_valtest, y_valtest, test_size=0.5, random_state=42) # x_val = minmax_scale(x_val.astype('float32')) # x_train = minmax_scale(x_train.astype('float32')) # x_test = minmax_scale(x_test.astype('float32')) # y_val = minmax_scale(y_val.astype('float32')) # y_train = minmax_scale(y_train.astype('float32')) # y_test = minmax_scale(y_test.astype('float32')) print(x_val.shape[0] , 'validation samples') print(x_train.shape[0], 'train samples') print(x_test.shape[0] , 'test samples') train_df = pd.DataFrame(np.c_[x_train, y_train], columns=list(feature_columns) + ['flux']) test_df = pd.DataFrame(np.c_[x_test , y_test ], columns=list(feature_columns) + ['flux']) evaluate_df = pd.DataFrame(np.c_[x_val , y_val ], columns=list(feature_columns) + ['flux']) Explanation: Load CSVs data End of explanation # categorical_features = [feature for feature in features if 'cat' in feature] categorical_features = [] continuous_features = [feature for feature in train_df.columns]# if 'cat' in feature] LABEL_COLUMN = 'flux' Explanation: Filtering Categorical and Continuous features We store Categorical, Continuous and Target features names in different 
variables. This will be helpful in later steps. End of explanation # Converting Data into Tensors def input_fn(df, training = True): # Creates a dictionary mapping from each continuous feature column name (k) to # the values of that column stored in a constant Tensor. continuous_cols = {k: tf.constant(df[k].values) for k in continuous_features} # Creates a dictionary mapping from each categorical feature column name (k) # to the values of that column stored in a tf.SparseTensor. # categorical_cols = {k: tf.SparseTensor( # indices=[[i, 0] for i in range(df[k].size)], # values=df[k].values, # shape=[df[k].size, 1]) # for k in categorical_features} # Merges the two dictionaries into one. feature_cols = continuous_cols # feature_cols = dict(list(continuous_cols.items()) + list(categorical_cols.items())) if training: # Converts the label column into a constant Tensor. label = tf.constant(df[LABEL_COLUMN].values) # Returns the feature columns and the label. return feature_cols, label # Returns the feature columns return feature_cols def train_input_fn(): return input_fn(train_df, training=True) def eval_input_fn(): return input_fn(evaluate_df, training=True) # def test_input_fn(): # return input_fn(test_df.drop(LABEL_COLUMN,axis=1), training=False) def test_input_fn(): return input_fn(test_df, training=False) Explanation: Converting Data into Tensors When building a TF.Learn model, the input data is specified by means of an Input Builder function. This builder function will not be called until it is later passed to TF.Learn methods such as fit and evaluate. The purpose of this function is to construct the input data, which is represented in the form of Tensors or SparseTensors. Note that input_fn will be called while constructing the TensorFlow graph, not while running the graph. What it is returning is a representation of the input data as the fundamental unit of TensorFlow computations, a Tensor (or SparseTensor). More detail on input_fn. End of explanation engineered_features = [] for continuous_feature in continuous_features: engineered_features.append( tf.contrib.layers.real_valued_column(continuous_feature)) # for categorical_feature in categorical_features: # sparse_column = tf.contrib.layers.sparse_column_with_hash_bucket( # categorical_feature, hash_bucket_size=1000) # engineered_features.append(tf.contrib.layers.embedding_column(sparse_id_column=sparse_column, dimension=16, # combiner="sum")) Explanation: Selecting and Engineering Features for the Model We use tf.learn's concept of FeatureColumn which help in transforming raw data into suitable input features. These engineered features will be used when we construct our model. End of explanation # train_df = df_train_ori.head(1000) # evaluate_df = df_train_ori.tail(500) # test_df = df_test_ori.head(1000) # MODEL_DIR = "tf_model_spitzer/withNormalization_drop50/relu" MODEL_DIR = "tf_model_spitzer/adamOptimizer_with_drop50/tanh/" # MODEL_DIR = "tf_model_spitzer/xgf" print("train_df.shape = " , train_df.shape) print("test_df.shape = " , test_df.shape) print("evaluate_df.shape = ", evaluate_df.shape) nHidden1 = 10 nHidden2 = 5 nHidden3 = 10 regressor = tf.contrib.learn.DNNRegressor(activation_fn=tf.nn.relu, dropout=0.5, optimizer=tf.train.AdamOptimizer, feature_columns=engineered_features, hidden_units=[nHidden1, nHidden2, nHidden3], model_dir=MODEL_DIR) Explanation: Defining The Regression Model Following is the simple DNNRegressor model. More detail about hidden_units, etc can be found here. 
model_dir is used to save and restore our model. This is because once we have trained the model we don't want to train it again, if we only want to predict on new data-set. End of explanation import logging logging.getLogger().setLevel(logging.INFO) # Training Our Model nFitSteps = 100000 start = time() wrap = regressor.fit(input_fn=train_input_fn, steps=nFitSteps) print('TF Regressor took {} seconds'.format(time()-start)) # Evaluating Our Model print('Evaluating ...') results = regressor.evaluate(input_fn=eval_input_fn, steps=1) for key in sorted(results): print("{}: {}".format(key, results[key])) print("Val Acc: {:.3f}".format((1-results['loss'])*100)) Explanation: Training and Evaluating Our Model add progress bar through python logging End of explanation nItersList = [100,200,500,1000,2000,5000,10000,20000,50000,100000] rtimesList = [5.869, 6.380, 8.656, 12.170, 19.891, 43.589, 80.581, 162.435, 535.584, 1062.656] valAccList = [6.332, 13.178, 54.220, 66.596, 62.996, 76.586, 66.872, 78.927, 75.493, 73.162] plt.loglog(nItersList, rtimesList,'o-'); plt.twinx() plt.semilogx(nItersList, valAccList,'o-', color='orange'); Explanation: Track Scalable Growth Shrunk data set to 23559 Training samples and 7853 Val/Test samples | n_iters | time (s) | val acc | multicore | gpu | |------------------------------------------------| | 100 | 5.869 | 6.332 | yes | no | | 200 | 6.380 | 13.178 | yes | no | | 500 | 8.656 | 54.220 | yes | no | | 1000 | 12.170 | 66.596 | yes | no | | 2000 | 19.891 | 62.996 | yes | no | | 5000 | 43.589 | 76.586 | yes | no | | 10000 | 80.581 | 66.872 | yes | no | | 20000 | 162.435 | 78.927 | yes | no | | 50000 | 535.584 | 75.493 | yes | no | | 100000 | 1062.656 | 73.162 | yes | no | End of explanation def de_median(x): return x - np.median(x) predicted_output = list(regressor.predict(input_fn=test_input_fn)) # x = list(predicted_output) r2_score(test_df['flux'].values,predicted_output)*100 print('Full notebook took {} seconds'.format(time()-start0)) Explanation: Predicting output for test data Most of the time prediction script would be separate from training script (we need not to train on same data again) but I am providing both in same script here; as I am not sure if we can create multiple notebook and somehow share data between them in Kaggle. End of explanation
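One small robustness tweak to the preprocessing above: the notebook refits the same StandardScaler object on the features and then on the labels. That happens to work here because the labels are scaled last, but it makes inverse-transforming anything other than the labels impossible. Below is a hedged sketch with two separate scalers, plus mapping the predictions back to physical flux units; the variable names mirror those used above, and R² itself is unchanged by the affine rescaling, so this is mainly useful for plotting and inspecting residuals in real units.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import r2_score

# Use one scaler per quantity so each can be inverted independently.
feature_scaler = StandardScaler()
label_scaler = StandardScaler()
features_scaled = feature_scaler.fit_transform(features)
labels_scaled = label_scaler.fit_transform(labels[:, None]).ravel()

# ... split and train exactly as above, then bring predictions back to flux units:
predicted_scaled = np.asarray(predicted_output, dtype=float)
predicted_flux = label_scaler.inverse_transform(predicted_scaled[:, None]).ravel()
true_flux = label_scaler.inverse_transform(np.asarray(test_df['flux'])[:, None]).ravel()
print('R2 in physical flux units: %.3f' % r2_score(true_flux, predicted_flux))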
1,966
Given the following text description, write Python code to implement the functionality described below step by step Description: PyEmma Featurizer Support Step1: Import a PyEmma Coordinates Module Using of pyemma featurizers or general other complex code requires a little trick to be storable. Since storing of code only works if we are not dependend on the context (scope) we need to wrap the construction of our featurizer in a function, that gets all it needs from the global scope as a parameter Step2: Now use this featurizer generating function to build a collective variable out of it. All we need for that is a name as usual, the generating function, the list of parameters - here only the topology and at best a test snapshot, a template. Step3: Let's save it to the storage Step4: and apply the featurizer to a trajectory Step5: Sync to make sure the cache is written to the netCDF file. Step6: Make sure that we get the same result
Python Code: import openpathsampling as paths import numpy as np from __future__ import print_function #! lazy import pyemma.coordinates as coor #! lazy ref_storage = paths.Storage('engine_store_test.nc', mode='r') #! lazy storage = paths.Storage('delete.nc', 'w') storage.trajectories.save(ref_storage.trajectories[0]) Explanation: PyEmma Featurizer Support End of explanation def pyemma_generator(f): f.add_inverse_distances(f.pairs(f.select_Backbone())) cv = paths.collectivevariable.PyEMMAFeaturizerCV( 'pyemma', pyemma_generator, topology=ref_storage.snapshots[0].topology ).with_diskcache() Explanation: Import a PyEmma Coordinates Module Using of pyemma featurizers or general other complex code requires a little trick to be storable. Since storing of code only works if we are not dependend on the context (scope) we need to wrap the construction of our featurizer in a function, that gets all it needs from the global scope as a parameter End of explanation cv(ref_storage.trajectories[0]); Explanation: Now use this featurizer generating function to build a collective variable out of it. All we need for that is a name as usual, the generating function, the list of parameters - here only the topology and at best a test snapshot, a template. End of explanation #! lazy print(storage.save(cv)) Explanation: Let's save it to the storage End of explanation cv(storage.trajectories[0]); Explanation: and apply the featurizer to a trajectory End of explanation cv(storage.snapshots.all()); py_cv = storage.cvs['pyemma'] store = storage.stores['cv%d' % storage.idx(py_cv)] nc_var = store.variables['value'] assert(nc_var.shape[1] == 15) print(nc_var.shape[1]) assert(nc_var.var_type == 'numpy.float32') print(nc_var.var_type) #! ignore print(storage.variables['cvs_json'][:]) py_cv_idx = storage.idx(py_cv) print(py_cv_idx) py_emma_feat = storage.vars['attributes_json'][py_cv_idx] erg = py_emma_feat(storage.snapshots); #! lazy print(erg[:,2:4]) storage.close() ref_storage.close() #! lazy storage = paths.Storage('delete.nc', 'r') cv = storage.cvs[0] Explanation: Sync to make sure the cache is written to the netCDF file. End of explanation assert np.allclose(erg, cv(storage.snapshots)) storage.close() Explanation: Make sure that we get the same result End of explanation
1,967
Given the following text description, write Python code to implement the functionality described below step by step Description: Training a part-of-speech tagger with transformers (BERT) This example shows how to use Thinc and Hugging Face's transformers library to implement and train a part-of-speech tagger on the Universal Dependencies AnCora corpus. This notebook assumes familiarity with machine learning concepts, transformer models and Thinc's config system and Model API (see the "Thinc for beginners" notebook and the documentation for more info). Step1: First, let's use Thinc's prefer_gpu helper to make sure we're performing operations on GPU if available. The function should be called right after importing Thinc, and it returns a boolean indicating whether the GPU has been activated. If we're on GPU, we can also call use_pytorch_for_gpu_memory to route cupy's memory allocation via PyTorch, so both can play together nicely. Step3: Overview Step5: Defining the model The Thinc model we want to define should consist of 3 components Step6: The wrapped tokenizer will take a list-of-lists as input (the texts) and will output a TokensPlus object containing the fully padded batch of tokens. The wrapped transformer will take a list of TokensPlus objects and will output a list of 2-dimensional arrays. TransformersTokenizer Step7: The forward pass takes the model and a list-of-lists of strings and outputs the TokensPlus dataclass. It also outputs a dummy callback function, to meet the API contract for Thinc models. Even though there's no way we can meaningfully "backpropagate" this layer, we need to make sure the function has the right signature, so that it can be used interchangeably with other layers. 2. Wrapping the transformer To load and wrap the transformer, we can use transformers.AutoModel and Thinc's PyTorchWrapper. The forward method of the wrapped model can take arbitrary positional arguments and keyword arguments. Here's what the wrapped model is going to look like Step8: The input and output transformation functions give you full control of how data is passed into and out of the underlying PyTorch model, so you can work with PyTorch layers that expect and return arbitrary objects. Putting it all together, we now have a nice layer that is configured with the name of a transformer model, that acts as a function mapping tokenized input into feature vectors. Step9: We can now combine the TransformersTokenizer and Transformer into a feed-forward network using the chain combinator. The with_array layer transforms a sequence of data into a contiguous 2d array on the way into and out of a model. Step10: Training the model Setting up model and data Since we've registered all layers via @thinc.registry.layers, we can construct the model, its settings and other functions we need from a config (see CONFIG above). The result is a config object with a model, an optimizer, a function to calculate the loss and the training settings. Step11: We’ve prepared a separate package ml-datasets with loaders for some common datasets, including the AnCora data. If we're using a GPU, calling ops.asarray on the outputs ensures that they're converted to cupy arrays (instead of numpy arrays). Calling Model.initialize with a batch of inputs and outputs allows Thinc to infer the missing dimensions. 
Step12: Helper functions for training and evaluation Before we can train the model, we also need to set up the following helper functions for batching and evaluation Step13: The training loop Transformers often learn best with large batch sizes – larger than fits in GPU memory. But you don't have to backprop the whole batch at once. Here we consider the "logical" batch size (number of examples per update) separately from the physical batch size. For the physical batch size, what we care about is the number of words (considering padding too). We also want to sort by length, for efficiency. At the end of the batch, we call the optimizer with the accumulated gradients, and advance the learning rate schedules. You might want to evaluate more often than once per epoch – that's up to you.
Python Code: !pip install "thinc>=8.0.0a0" transformers torch "ml_datasets>=0.2.0a0" "tqdm>=4.41" Explanation: Training a part-of-speech tagger with transformers (BERT) This example shows how to use Thinc and Hugging Face's transformers library to implement and train a part-of-speech tagger on the Universal Dependencies AnCora corpus. This notebook assumes familiarity with machine learning concepts, transformer models and Thinc's config system and Model API (see the "Thinc for beginners" notebook and the documentation for more info). End of explanation from thinc.api import prefer_gpu, use_pytorch_for_gpu_memory is_gpu = prefer_gpu() print("GPU:", is_gpu) if is_gpu: use_pytorch_for_gpu_memory() Explanation: First, let's use Thinc's prefer_gpu helper to make sure we're performing operations on GPU if available. The function should be called right after importing Thinc, and it returns a boolean indicating whether the GPU has been activated. If we're on GPU, we can also call use_pytorch_for_gpu_memory to route cupy's memory allocation via PyTorch, so both can play together nicely. End of explanation CONFIG = [model] @layers = "TransformersTagger.v1" starter = "bert-base-multilingual-cased" [optimizer] @optimizers = "Adam.v1" [optimizer.learn_rate] @schedules = "warmup_linear.v1" initial_rate = 0.01 warmup_steps = 3000 total_steps = 6000 [loss] @losses = "SequenceCategoricalCrossentropy.v1" [training] batch_size = 128 words_per_subbatch = 2000 n_epoch = 10 Explanation: Overview: the final config Here's the final config for the model we're building in this notebook. It references a custom TransformersTagger that takes the name of a starter (the pretrained model to use), an optimizer, a learning rate schedule with warm-up and the general training settings. You can keep the config string within your file or notebook, or save it to a conig.cfg file and load it in via Config.from_disk. End of explanation from typing import Optional, List import numpy from thinc.types import Ints1d, Floats2d from dataclasses import dataclass import torch from transformers import BatchEncoding, TokenSpan @dataclass class TokensPlus: batch_size: int tok2wp: List[Ints1d] input_ids: torch.Tensor token_type_ids: torch.Tensor attention_mask: torch.Tensor def __init__(self, inputs: List[List[str]], wordpieces: BatchEncoding): self.input_ids = wordpieces["input_ids"] self.attention_mask = wordpieces["attention_mask"] self.token_type_ids = wordpieces["token_type_ids"] self.batch_size = self.input_ids.shape[0] self.tok2wp = [] for i in range(self.batch_size): spans = [wordpieces.word_to_tokens(i, j) for j in range(len(inputs[i]))] self.tok2wp.append(self.get_wp_starts(spans)) def get_wp_starts(self, spans: List[Optional[TokenSpan]]) -> Ints1d: Calculate an alignment mapping each token index to its first wordpiece. alignment = numpy.zeros((len(spans)), dtype="i") for i, span in enumerate(spans): if span is None: raise ValueError( "Token did not align to any wordpieces. Was the tokenizer " "run with is_split_into_words=True?" 
) else: alignment[i] = span.start return alignment def test_tokens_plus(name: str="bert-base-multilingual-cased"): from transformers import AutoTokenizer inputs = [ ["Our", "band", "is", "called", "worlthatmustbedivided", "!"], ["We", "rock", "!"] ] tokenizer = AutoTokenizer.from_pretrained(name) wordpieces = tokenizer( inputs, is_split_into_words=True, add_special_tokens=True, return_token_type_ids=True, return_attention_mask=True, return_length=True, return_tensors="pt", padding="longest" ) tplus = TokensPlus(inputs, wordpieces) assert len(tplus.tok2wp) == len(inputs) == len(tplus.input_ids) for i, align in enumerate(tplus.tok2wp): assert len(align) == len(inputs[i]) for j in align: assert j >= 0 and j < tplus.input_ids.shape[1] test_tokens_plus() Explanation: Defining the model The Thinc model we want to define should consist of 3 components: the transformers tokenizer, the actual transformer implemented in PyTorch and a softmax-activated output layer. 1. Wrapping the tokenizer To make it easier to keep track of the data that's passed around (and get type errors if something goes wrong), we first create a TokensPlus dataclass that holds the information we need from the transformers tokenizer. The most important work we'll do in this class is to build an alignment map. The transformer models are trained on input sequences that over-segment the sentence, so that they can work on smaller vocabularies. These over-segmentations are generally called "word pieces". The transformer will return a tensor with one vector per wordpiece. We need to map that to a tensor with one vector per POS-tagged token. We'll pass those token representations into a feed-forward network to predict the tag probabilities. During the backward pass, we'll then need to invert this mapping, so that we can calculate the gradients with respect to the wordpieces given the gradients with respect to the tokens. To keep things relatively simple, we'll store the alignment as a list of arrays, with each array mapping one token to one wordpiece vector (its first one). To make this work, we'll need to run the tokenizer with is_split_into_words=True, which should ensure that we get at least one wordpiece per token. End of explanation import thinc from thinc.api import Model from transformers import AutoTokenizer @thinc.registry.layers("transformers_tokenizer.v1") def TransformersTokenizer(name: str) -> Model[List[List[str]], TokensPlus]: def forward(model, inputs: List[List[str]], is_train: bool): tokenizer = model.attrs["tokenizer"] wordpieces = tokenizer( inputs, is_split_into_words=True, add_special_tokens=True, return_token_type_ids=True, return_attention_mask=True, return_length=True, return_tensors="pt", padding="longest" ) return TokensPlus(inputs, wordpieces), lambda d_tokens: [] return Model("tokenizer", forward, attrs={"tokenizer": AutoTokenizer.from_pretrained(name)}) Explanation: The wrapped tokenizer will take a list-of-lists as input (the texts) and will output a TokensPlus object containing the fully padded batch of tokens. The wrapped transformer will take a list of TokensPlus objects and will output a list of 2-dimensional arrays. TransformersTokenizer: List[List[str]] → TokensPlus Transformer: TokensPlus → List[Array2d] 💡 Since we're adding type hints everywhere (and Thinc is fully typed, too), you can run your code through mypy to find type errors and inconsistencies. If you're using an editor like Visual Studio Code, you can enable mypy linting and type errors will be highlighted in real time as you write code. 
To use the tokenizer as a layer in our network, we register a new function that returns a Thinc Model. The function takes the name of the pretrained weights (e.g. "bert-base-multilingual-cased") as an argument that can later be provided via the config. After loading the AutoTokenizer, we can stash it in the attributes. This lets us access it at any point later on via model.attrs["tokenizer"]. End of explanation from typing import List, Tuple, Callable from thinc.api import ArgsKwargs, torch2xp, xp2torch from thinc.types import Floats2d def convert_transformer_inputs(model, tokens: TokensPlus, is_train): kwargs = { "input_ids": tokens.input_ids, "attention_mask": tokens.attention_mask, "token_type_ids": tokens.token_type_ids, } return ArgsKwargs(args=(), kwargs=kwargs), lambda dX: [] def convert_transformer_outputs( model: Model, inputs_outputs: Tuple[TokensPlus, Tuple[torch.Tensor]], is_train: bool ) -> Tuple[List[Floats2d], Callable]: tplus, trf_outputs = inputs_outputs wp_vectors = torch2xp(trf_outputs[0]) tokvecs = [wp_vectors[i, idx] for i, idx in enumerate(tplus.tok2wp)] def backprop(d_tokvecs: List[Floats2d]) -> ArgsKwargs: # Restore entries for BOS and EOS markers d_wp_vectors = model.ops.alloc3f(*trf_outputs[0].shape, dtype="f") for i, idx in enumerate(tplus.tok2wp): d_wp_vectors[i, idx] += d_tokvecs[i] return ArgsKwargs( args=(trf_outputs[0],), kwargs={"grad_tensors": xp2torch(d_wp_vectors)}, ) return tokvecs, backprop Explanation: The forward pass takes the model and a list-of-lists of strings and outputs the TokensPlus dataclass. It also outputs a dummy callback function, to meet the API contract for Thinc models. Even though there's no way we can meaningfully "backpropagate" this layer, we need to make sure the function has the right signature, so that it can be used interchangeably with other layers. 2. Wrapping the transformer To load and wrap the transformer, we can use transformers.AutoModel and Thinc's PyTorchWrapper. The forward method of the wrapped model can take arbitrary positional arguments and keyword arguments. Here's what the wrapped model is going to look like: python @thinc.registry.layers("transformers_model.v1") def Transformer(name) -&gt; Model[TokensPlus, List[Floats2d]]: return PyTorchWrapper( AutoModel.from_pretrained(name), convert_inputs=convert_transformer_inputs, convert_outputs=convert_transformer_outputs, ) The Transformer layer takes our TokensPlus dataclass as input and outputs a list of 2-dimensional arrays. The convert functions are used to map inputs and outputs to and from the PyTorch model. Each function should return the converted output, and a callback to use during the backward pass. To make the arbitrary positional and keyword arguments easier to manage, Thinc uses an ArgsKwargs dataclass, essentially a named tuple with args and kwargs that can be spread into a function as *ArgsKwargs.args and **ArgsKwargs.kwargs. The ArgsKwargs objects will be passed straight into the model in the forward pass, and straight into torch.autograd.backward during the backward pass. 
End of explanation import thinc from thinc.api import PyTorchWrapper from transformers import AutoModel @thinc.registry.layers("transformers_model.v1") def Transformer(name: str) -> Model[TokensPlus, List[Floats2d]]: return PyTorchWrapper( AutoModel.from_pretrained(name), convert_inputs=convert_transformer_inputs, convert_outputs=convert_transformer_outputs, ) Explanation: The input and output transformation functions give you full control of how data is passed into and out of the underlying PyTorch model, so you can work with PyTorch layers that expect and return arbitrary objects. Putting it all together, we now have a nice layer that is configured with the name of a transformer model, that acts as a function mapping tokenized input into feature vectors. End of explanation from thinc.api import chain, with_array, Softmax @thinc.registry.layers("TransformersTagger.v1") def TransformersTagger(starter: str, n_tags: int = 17) -> Model[List[List[str]], List[Floats2d]]: return chain( TransformersTokenizer(starter), Transformer(starter), with_array(Softmax(n_tags)), ) Explanation: We can now combine the TransformersTokenizer and Transformer into a feed-forward network using the chain combinator. The with_array layer transforms a sequence of data into a contiguous 2d array on the way into and out of a model. End of explanation from thinc.api import Config, registry C = registry.resolve(Config().from_str(CONFIG)) model = C["model"] optimizer = C["optimizer"] calculate_loss = C["loss"] cfg = C["training"] Explanation: Training the model Setting up model and data Since we've registered all layers via @thinc.registry.layers, we can construct the model, its settings and other functions we need from a config (see CONFIG above). The result is a config object with a model, an optimizer, a function to calculate the loss and the training settings. End of explanation import ml_datasets (train_X, train_Y), (dev_X, dev_Y) = ml_datasets.ud_ancora_pos_tags() train_Y = list(map(model.ops.asarray, train_Y)) # convert to cupy if needed dev_Y = list(map(model.ops.asarray, dev_Y)) # convert to cupy if needed model.initialize(X=train_X[:5], Y=train_Y[:5]) Explanation: We’ve prepared a separate package ml-datasets with loaders for some common datasets, including the AnCora data. If we're using a GPU, calling ops.asarray on the outputs ensures that they're converted to cupy arrays (instead of numpy arrays). Calling Model.initialize with a batch of inputs and outputs allows Thinc to infer the missing dimensions. End of explanation def minibatch_by_words(pairs, max_words): pairs = list(zip(*pairs)) pairs.sort(key=lambda xy: len(xy[0]), reverse=True) batch = [] for X, Y in pairs: batch.append((X, Y)) n_words = max(len(xy[0]) for xy in batch) * len(batch) if n_words >= max_words: yield batch[:-1] batch = [(X, Y)] if batch: yield batch def evaluate_sequences(model, Xs: List[Floats2d], Ys: List[Floats2d], batch_size: int) -> float: correct = 0.0 total = 0.0 for X, Y in model.ops.multibatch(batch_size, Xs, Ys): Yh = model.predict(X) for yh, y in zip(Yh, Y): correct += (y.argmax(axis=1) == yh.argmax(axis=1)).sum() total += y.shape[0] return float(correct / total) Explanation: Helper functions for training and evaluation Before we can train the model, we also need to set up the following helper functions for batching and evaluation: minibatch_by_words: Group pairs of sequences into minibatches under max_words in size, considering padding. 
The size of a padded batch is the length of its longest sequence multiplied by the number of elements in the batch. evaluate_sequences: Evaluate the model sequences of two-dimensional arrays and return the score. End of explanation from tqdm.notebook import tqdm from thinc.api import fix_random_seed fix_random_seed(0) for epoch in range(cfg["n_epoch"]): batches = model.ops.multibatch(cfg["batch_size"], train_X, train_Y, shuffle=True) for outer_batch in tqdm(batches, leave=False): for batch in minibatch_by_words(outer_batch, cfg["words_per_subbatch"]): inputs, truths = zip(*batch) inputs = list(inputs) guesses, backprop = model(inputs, is_train=True) backprop(calculate_loss.get_grad(guesses, truths)) model.finish_update(optimizer) optimizer.step_schedules() score = evaluate_sequences(model, dev_X, dev_Y, cfg["batch_size"]) print(epoch, f"{score:.3f}") Explanation: The training loop Transformers often learn best with large batch sizes – larger than fits in GPU memory. But you don't have to backprop the whole batch at once. Here we consider the "logical" batch size (number of examples per update) separately from the physical batch size. For the physical batch size, what we care about is the number of words (considering padding too). We also want to sort by length, for efficiency. At the end of the batch, we call the optimizer with the accumulated gradients, and advance the learning rate schedules. You might want to evaluate more often than once per epoch – that's up to you. End of explanation
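Once the loop finishes, the trained tagger can be persisted and reused. The sketch below assumes Thinc v8's Model serialization API (Model.to_disk / Model.from_disk, where reloading means rebuilding the same model from the config first) and that predict() returns one probability matrix per input sentence, as the with_array(Softmax) head above implies. Mapping the argmax indices back to UD tag names would need the tag vocabulary used when ml_datasets encoded the corpus, so the output is left as integer ids here.
# Persist the trained weights (assumption: Thinc v8 Model.to_disk/from_disk API).
model.to_disk("tagger_weights.bin")

# Tag a new, already tokenized sentence with the in-memory model.
tokens = [["El", "gato", "negro", "duerme", "."]]
probs = model.predict(tokens)              # list with one (n_tokens, n_tags) array
tag_ids = probs[0].argmax(axis=1)
print(list(zip(tokens[0], tag_ids.tolist())))

# To reload later: rebuild the model from the same config, initialize it with a
# sample batch, then load the saved parameters into it.
# new_model = registry.resolve(Config().from_str(CONFIG))["model"]
# new_model.initialize(X=train_X[:5], Y=train_Y[:5])
# new_model.from_disk("tagger_weights.bin")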
1,968
Given the following text description, write Python code to implement the functionality described below step by step Description: <p><font size="6"><b> CASE - air quality data of European monitoring stations (AirBase)</b></font></p> © 2021, Joris Van den Bossche and Stijn Van Hoey (&#106;&#111;&#114;&#105;&#115;&#118;&#97;&#110;&#100;&#101;&#110;&#98;&#111;&#115;&#115;&#99;&#104;&#101;&#64;&#103;&#109;&#97;&#105;&#108;&#46;&#99;&#111;&#109;, &#115;&#116;&#105;&#106;&#110;&#118;&#97;&#110;&#104;&#111;&#101;&#121;&#64;&#103;&#109;&#97;&#105;&#108;&#46;&#99;&#111;&#109;). Licensed under CC BY 4.0 Creative Commons Step1: We processed some raw data files of the AirBase air quality data. The data contains hourly concentrations of nitrogen dioxide (NO2) for 4 different measurement stations Step2: We only use the data from 1999 onwards Step3: Some first exploration with the typical functions Step4: <div class="alert alert-warning"> **ATTENTION!** Step5: Summary figures Use summary statistics... Step6: Also with seaborn plots function, just start with some subsets as first impression... As we already have seen previously, the plotting library seaborn provides some high-level plotting functions on top of matplotlib (check the docs!). One of those functions is pairplot, which we can use here to quickly visualize the concentrations at the different stations and their relation Step7: Is this a tidy dataset ? Step8: In principle this is not a tidy dataset. The variable that was measured is the NO2 concentration, and is divided in 4 columns. Of course those measurements were made at different stations, so one could interpret it as separate variables. But in any case, such format does not always work well with libraries like seaborn which expects a pure tidy format. Reason to not use a tidy dataset here Step9: In the following exercises we will mostly do our analysis on dataand often use pandas plotting, but once we produced some kind of summary dataframe as the result of an analysis, then it becomes more interesting to convert that result to a tidy format to be able to use the more advanced plotting functionality of seaborn. Exercises <div class="alert alert-warning"> <b>REMINDER</b> Step10: <div class="alert alert-success"> <b>EXERCISE 3</b> <ul> <li>Make a violin plot for January 2011 until August 2011 (check out the documentation to improve the plotting settings)</li> <li>Change the y-label to 'NO$_2$ concentration (µg/m³)'</li> </ul><br> _NOTE Step11: <div class="alert alert-success"> <b>EXERCISE 4</b> <ul> <li>Make a bar plot with pandas of the mean of each of the stations in the year 2012 (check the documentation of Pandas plot to adapt the rotation of the labels) and make sure all bars have the same color.</li> <li>Using the matplotlib objects, change the y-label to 'NO$_2$ concentration (µg/m³)</li> <li>Add a 'darkorange' horizontal line on the ax for the y-value 40 µg/m³ (command for horizontal line from matplotlib Step12: <div class="alert alert-success"> <b>EXERCISE 5 Step13: <div class="alert alert-info"> **REMEMBER** Step14: Remove the temporary 'month' column generated in the solution of the previous exercise Step15: Note Step16: <div class="alert alert-success"> <b>EXERCISE 8</b> <ul> <li>Plot the typical diurnal profile (typical hourly averages) for the different stations taking into account the whole time period.</li> </ul> </div> Step17: <div class="alert alert-success"> __EXERCISE 9__ What is the difference in the typical diurnal profile between week and weekend days? 
(and visualise it) Start with only visualizing the different in diurnal profile for the 'BETR801' station. In a next step, make the same plot for each station. <details><summary>Hints</summary> - Add a column `weekend` defining if a value of the index is in the weekend (i.e. days of the week 5 and 6) or not - Add a column `hour` with the hour of the day for each row. - You can `groupby` on multiple items at the same time. </details> </div> Step18: Remove the temporary columns 'hour' and 'weekend' used in the solution of previous exercise Step19: <div class="alert alert-success"> __EXERCISE 10__ Calculate the correlation between the different stations (check in the documentation, google "pandas correlation" or use the magic function <code>%psearch</code>) </div> Step20: <div class="alert alert-success"> __EXERCISE 11__ Count the number of exceedances of hourly values above the European limit 200 µg/m3 for each year and station after 2005. Make a barplot of the counts. Add an horizontal line indicating the maximum number of exceedances (which is 18) allowed per year? **Hints Step21: More advanced exercises... Step22: <div class="alert alert-success"> __EXERCISE 12__ Perform the following actions for the station `'FR04012'` only Step23: <div class="alert alert-success"> <b>EXERCISE 13</b> Step24: <div class="alert alert-success"> <b>EXERCISE 14</b> <ul> <li>Make a selection of the original dataset of the data in January 2009, call the resulting variable <code>subset</code></li> <li>Add a new column, called 'dayofweek', to the variable <code>subset</code> which defines for each data point the day of the week</li> <li>From the <code>subset</code> DataFrame, select only Monday (= day 0) and Sunday (=day 6) and remove the others (so, keep this as variable <code>subset</code>)</li> <li>Change the values of the dayofweek column in <code>subset</code> according to the following mapping Step25: <div class="alert alert-success"> __EXERCISE 15__ The maximum daily, 8 hour mean, should be below 100 µg/m³. What is the number of exceedances of this limit for each year/station? <details><summary>Hints</summary> - Have a look at the `rolling` method to perform moving window operations. </details> <br>_Note Step26: <div class="alert alert-success"> <b>EXERCISE 16</b> Step27: Plotting with seaborn Step28: Reshaping and plotting with pandas
Python Code: import pandas as pd import numpy as np import matplotlib.pyplot as plt Explanation: <p><font size="6"><b> CASE - air quality data of European monitoring stations (AirBase)</b></font></p> © 2021, Joris Van den Bossche and Stijn Van Hoey (&#106;&#111;&#114;&#105;&#115;&#118;&#97;&#110;&#100;&#101;&#110;&#98;&#111;&#115;&#115;&#99;&#104;&#101;&#64;&#103;&#109;&#97;&#105;&#108;&#46;&#99;&#111;&#109;, &#115;&#116;&#105;&#106;&#110;&#118;&#97;&#110;&#104;&#111;&#101;&#121;&#64;&#103;&#109;&#97;&#105;&#108;&#46;&#99;&#111;&#109;). Licensed under CC BY 4.0 Creative Commons End of explanation alldata = pd.read_csv('data/airbase_data.csv', index_col=0, parse_dates=True) Explanation: We processed some raw data files of the AirBase air quality data. The data contains hourly concentrations of nitrogen dioxide (NO2) for 4 different measurement stations: FR04037 (PARIS 13eme): urban background site at Square de Choisy FR04012 (Paris, Place Victor Basch): urban traffic site at Rue d'Alesia BETR802: urban traffic site in Antwerp, Belgium BETN029: rural background site in Houtem, Belgium See http://www.eea.europa.eu/themes/air/interactive/no2 Importing and quick exploration We processed the individual data files in the previous notebook (case4_air_quality_processing.ipynb), and saved it to a csv file airbase_data_processed.csv. Let's import the file here (if you didn't finish the previous notebook, a set of the pre-processed dataset if also available in data/airbase_data.csv): End of explanation data = alldata['1999':].copy() Explanation: We only use the data from 1999 onwards: End of explanation data.head() # tail() data.info() data.describe(percentiles=[0.1, 0.5, 0.9]) data.plot(figsize=(12,6)) Explanation: Some first exploration with the typical functions: End of explanation data.tail(500).plot(figsize=(12,6)) Explanation: <div class="alert alert-warning"> **ATTENTION!**: When just using `.plot()` without further notice (selection, aggregation,...) * Risk of running into troubles by overloading your computer processing (certainly with looooong time series). * Not always the most informative/interpretable visualisation. </div> Plot only a subset Why not just using the head/tail possibilities? End of explanation data.plot(kind='box', ylim=[0,250]) Explanation: Summary figures Use summary statistics... End of explanation import seaborn as sns sns.pairplot(data.tail(5000).dropna()) Explanation: Also with seaborn plots function, just start with some subsets as first impression... As we already have seen previously, the plotting library seaborn provides some high-level plotting functions on top of matplotlib (check the docs!). One of those functions is pairplot, which we can use here to quickly visualize the concentrations at the different stations and their relation: End of explanation data.head() Explanation: Is this a tidy dataset ? End of explanation # %load _solutions/case4_air_quality_analysis1.py # %load _solutions/case4_air_quality_analysis2.py # %load _solutions/case4_air_quality_analysis3.py Explanation: In principle this is not a tidy dataset. The variable that was measured is the NO2 concentration, and is divided in 4 columns. Of course those measurements were made at different stations, so one could interpret it as separate variables. But in any case, such format does not always work well with libraries like seaborn which expects a pure tidy format. 
Reason to not use a tidy dataset here: smaller memory use timeseries functionality like resample works better pandas plotting already does what we want when having different columns for some types of plots (eg line plots of the timeseries) <div class="alert alert-success"> <b>EXERCISE 1</b>: <ul> <li>Create a tidy version of this dataset <code>data_tidy</code>, ensuring the result has new columns 'station' and 'no2'.</li> <li>Check how many missing values are contained in the 'no2' column.</li> <li>Drop the rows with missing values in that column.</li> </ul> </div> End of explanation # %load _solutions/case4_air_quality_analysis4.py # %load _solutions/case4_air_quality_analysis5.py Explanation: In the following exercises we will mostly do our analysis on dataand often use pandas plotting, but once we produced some kind of summary dataframe as the result of an analysis, then it becomes more interesting to convert that result to a tidy format to be able to use the more advanced plotting functionality of seaborn. Exercises <div class="alert alert-warning"> <b>REMINDER</b>: <br><br> Take a look at the [Timeseries notebook](pandas_04_time_series_data.ipynb) when you require more info about: <ul> <li><code>resample</code></li> <li>string indexing of DateTimeIndex</li> </ul><br> Take a look at the [matplotlib](visualization_01_matplotlib.ipynb) and [seaborn](visualization_02_seaborn.ipynb) notebooks when you require more info about the plot requirements. </div> <div class="alert alert-success"> <b>EXERCISE 2</b>: <ul> <li>Plot the monthly mean and median concentration of the 'FR04037' station for the years 2009 - 2013 in a single figure/ax</li> </ul> </div> End of explanation # %load _solutions/case4_air_quality_analysis6.py # %load _solutions/case4_air_quality_analysis7.py # %load _solutions/case4_air_quality_analysis8.py Explanation: <div class="alert alert-success"> <b>EXERCISE 3</b> <ul> <li>Make a violin plot for January 2011 until August 2011 (check out the documentation to improve the plotting settings)</li> <li>Change the y-label to 'NO$_2$ concentration (µg/m³)'</li> </ul><br> _NOTE:_ In this case, we can use seaborn both with the data not in a long format but when having different columns for which you want to make violin plots, as with the tidy data. </div> End of explanation # %load _solutions/case4_air_quality_analysis9.py Explanation: <div class="alert alert-success"> <b>EXERCISE 4</b> <ul> <li>Make a bar plot with pandas of the mean of each of the stations in the year 2012 (check the documentation of Pandas plot to adapt the rotation of the labels) and make sure all bars have the same color.</li> <li>Using the matplotlib objects, change the y-label to 'NO$_2$ concentration (µg/m³)</li> <li>Add a 'darkorange' horizontal line on the ax for the y-value 40 µg/m³ (command for horizontal line from matplotlib: <code>axhline</code>).</li> <li><a href="visualization_01_matplotlib.ipynb">Place the text</a> 'Yearly limit is 40 µg/m³' just above the 'darkorange' line.</li> </ul> </div> End of explanation # %load _solutions/case4_air_quality_analysis10.py Explanation: <div class="alert alert-success"> <b>EXERCISE 5:</b> Did the air quality improve over time? 
<ul> <li>For the data from 1999 till the end, plot the yearly averages</li> <li>For the same period, add the overall mean (all stations together) as an additional line to the graph, use a thicker black line (<code>linewidth=4</code> and <code>linestyle='--'</code>)</li> <li>[OPTIONAL] Add a legend above the ax for all lines</li> </ul> </div> End of explanation # %load _solutions/case4_air_quality_analysis11.py Explanation: <div class="alert alert-info"> **REMEMBER**: `resample` is a special version of a`groupby` operation. For example, taking annual means with `data.resample('A').mean()` is equivalent to `data.groupby(data.index.year).mean()` (but the result of `resample` still has a DatetimeIndex). Checking the index of the resulting DataFrame when using **groupby** instead of resample: You'll notice that the Index lost the DateTime capabilities: ```python >>> data.groupby(data.index.year).mean().index ``` <br> Results in: ``` Int64Index([1990, 1991, 1992, 1993, 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012], dtype='int64')$ ``` <br> When using **resample**, we keep the DateTime capabilities: ```python >>> data.resample('A').mean().index ``` <br> Results in: ``` DatetimeIndex(['1999-12-31', '2000-12-31', '2001-12-31', '2002-12-31', '2003-12-31', '2004-12-31', '2005-12-31', '2006-12-31', '2007-12-31', '2008-12-31', '2009-12-31', '2010-12-31', '2011-12-31', '2012-12-31'], dtype='datetime64[ns]', freq='A-DEC') ``` <br> But, `groupby` is more flexible and can also do resamples that do not result in a new continuous time series, e.g. by grouping by the hour of the day to get the diurnal cycle. </div> <div class="alert alert-success"> <b>EXERCISE 6</b> <ul> <li>How does the <i>typical yearly profile</i> (typical averages for the different months over the years) look like for the different stations? (add a 'month' column as a first step)</li> </ul> </div> End of explanation data = data.drop("month", axis=1, errors="ignore") Explanation: Remove the temporary 'month' column generated in the solution of the previous exercise: End of explanation # %load _solutions/case4_air_quality_analysis12.py # %load _solutions/case4_air_quality_analysis13.py Explanation: Note: Technically, we could reshape the result of the groupby operation to a tidy format (we no longer have a real time series), but since we already have the things we want to plot as lines in different columns, doing .plot already does what we want. <div class="alert alert-success"> <b>EXERCISE 7</b> <ul> <li>Plot the weekly 95% percentiles of the concentration in 'BETR801' and 'BETN029' for 2011</li> </ul> </div> End of explanation # %load _solutions/case4_air_quality_analysis14.py Explanation: <div class="alert alert-success"> <b>EXERCISE 8</b> <ul> <li>Plot the typical diurnal profile (typical hourly averages) for the different stations taking into account the whole time period.</li> </ul> </div> End of explanation # %load _solutions/case4_air_quality_analysis15.py # %load _solutions/case4_air_quality_analysis16.py # %load _solutions/case4_air_quality_analysis17.py # %load _solutions/case4_air_quality_analysis18.py # %load _solutions/case4_air_quality_analysis19.py # %load _solutions/case4_air_quality_analysis20.py Explanation: <div class="alert alert-success"> __EXERCISE 9__ What is the difference in the typical diurnal profile between week and weekend days? (and visualise it) Start with only visualizing the different in diurnal profile for the 'BETR801' station. 
In a next step, make the same plot for each station. <details><summary>Hints</summary> - Add a column `weekend` defining if a value of the index is in the weekend (i.e. days of the week 5 and 6) or not - Add a column `hour` with the hour of the day for each row. - You can `groupby` on multiple items at the same time. </details> </div> End of explanation data = data.drop(['hour', 'weekend'], axis=1, errors="ignore") Explanation: Remove the temporary columns 'hour' and 'weekend' used in the solution of previous exercise: End of explanation # %load _solutions/case4_air_quality_analysis21.py Explanation: <div class="alert alert-success"> __EXERCISE 10__ Calculate the correlation between the different stations (check in the documentation, google "pandas correlation" or use the magic function <code>%psearch</code>) </div> End of explanation # %load _solutions/case4_air_quality_analysis22.py # %load _solutions/case4_air_quality_analysis23.py # %load _solutions/case4_air_quality_analysis24.py Explanation: <div class="alert alert-success"> __EXERCISE 11__ Count the number of exceedances of hourly values above the European limit 200 µg/m3 for each year and station after 2005. Make a barplot of the counts. Add an horizontal line indicating the maximum number of exceedances (which is 18) allowed per year? **Hints:** <details><summary>Hints</summary> - Create a new DataFrame, called <code>exceedances</code>, (with boolean values) indicating if the threshold is exceeded or not - Remember that the sum of True values can be used to count elements - Adding a horizontal line can be done with the matplotlib function <code>ax.axhline</code> </details> </div> End of explanation data = alldata['1999':].copy() Explanation: More advanced exercises... End of explanation # %load _solutions/case4_air_quality_analysis25.py # %load _solutions/case4_air_quality_analysis26.py # %load _solutions/case4_air_quality_analysis27.py Explanation: <div class="alert alert-success"> __EXERCISE 12__ Perform the following actions for the station `'FR04012'` only: <ul> <li>Remove the rows containing <code>NaN</code> or zero values</li> <li>Sort the values of the rows according to the air quality values (low to high values)</li> <li>Rescale the values to the range [0-1] and store result as <code>FR_scaled</code> (Hint: check <a href="https://en.wikipedia.org/wiki/Feature_scaling#Rescaling">wikipedia</a>)</li> <li>Use pandas to plot these values sorted, not taking into account the dates</li> <li>Add the station name 'FR04012' as y-label</li> <li>[OPTIONAL] Add a vertical line to the plot where the line (hence, the values of variable FR_scaled) reach the value <code>0.3</code>. 
You will need the documentation of <code>np.searchsorted</code> and matplotlib's <code>axvline</code></li> </ul> </div> End of explanation # %load _solutions/case4_air_quality_analysis28.py # %load _solutions/case4_air_quality_analysis29.py Explanation: <div class="alert alert-success"> <b>EXERCISE 13</b>: <ul> <li>Create a Figure with two subplots (axes), for which both ax<b>i</b>s are shared</li> <li>In the left subplot, plot the histogram (30 bins) of station 'BETN029', only for the year 2009</li> <li>In the right subplot, plot the histogram (30 bins) of station 'BETR801', only for the year 2009</li> <li>Add the title representing the station name on each of the subplots, you do not want to have a legend</li> </ul> </div> End of explanation # %load _solutions/case4_air_quality_analysis30.py # %load _solutions/case4_air_quality_analysis31.py # %load _solutions/case4_air_quality_analysis32.py # %load _solutions/case4_air_quality_analysis33.py Explanation: <div class="alert alert-success"> <b>EXERCISE 14</b> <ul> <li>Make a selection of the original dataset of the data in January 2009, call the resulting variable <code>subset</code></li> <li>Add a new column, called 'dayofweek', to the variable <code>subset</code> which defines for each data point the day of the week</li> <li>From the <code>subset</code> DataFrame, select only Monday (= day 0) and Sunday (=day 6) and remove the others (so, keep this as variable <code>subset</code>)</li> <li>Change the values of the dayofweek column in <code>subset</code> according to the following mapping: <code>{0:"Monday", 6:"Sunday"}</code></li> <li>With seaborn, make a scatter plot of the measurements at 'BETN029' vs 'FR04037', with the color variation based on the weekday. Add a linear regression to this plot.</li> </ul><br> **Note**: If you run into the **SettingWithCopyWarning** and do not know what to do, recheck [pandas_03b_indexing](pandas_03b_indexing.ipynb) </div> End of explanation # %load _solutions/case4_air_quality_analysis34.py # %load _solutions/case4_air_quality_analysis35.py Explanation: <div class="alert alert-success"> __EXERCISE 15__ The maximum daily, 8 hour mean, should be below 100 µg/m³. What is the number of exceedances of this limit for each year/station? <details><summary>Hints</summary> - Have a look at the `rolling` method to perform moving window operations. </details> <br>_Note:_ This is not an actual limit for NO$_2$, but a nice exercise to introduce the `rolling` method. Other pollutans, such as 0$_3$ have actually such kind of limit values based on 8-hour means. </div> End of explanation # %load _solutions/case4_air_quality_analysis36.py # %load _solutions/case4_air_quality_analysis37.py Explanation: <div class="alert alert-success"> <b>EXERCISE 16</b>: <ul> <li>Visualize the typical week profile for station 'BETR801' as boxplots (where the values in one boxplot are the <i>daily means</i> for the different <i>weeks</i> for a certain day of the week).</li><br> </ul> **Tip:**<br> The boxplot method of a DataFrame expects the data for the different boxes in different columns. For this, you can either use `pivot_table` or a combination of `groupby` and `unstack` </div> Calculating daily means and add day of the week information: End of explanation # %load _solutions/case4_air_quality_analysis38.py Explanation: Plotting with seaborn: End of explanation # %load _solutions/case4_air_quality_analysis39.py # %load _solutions/case4_air_quality_analysis40.py Explanation: Reshaping and plotting with pandas: End of explanation
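The exercise solutions above are loaded from `_solutions/*.py` files that are not reproduced here. As an illustration only, the tidy reshaping asked for in Exercise 1 (one 'station'/'no2' pair per row, count and drop missing values) could be done roughly as below; the index name and variable names are assumptions, not the reference solution.

```python
# Assumes `data` is the wide DataFrame with one NO2 column per station.
data_tidy = (
    data
    .rename_axis('datetime')        # make sure the index has a usable name
    .reset_index()
    .melt(id_vars='datetime', var_name='station', value_name='no2')
)
print(data_tidy['no2'].isna().sum())          # number of missing measurements
data_tidy = data_tidy.dropna(subset=['no2'])
```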
1,969
Given the following text description, write Python code to implement the functionality described below step by step Description: Face recognition The goal of this seminar is to build two simple (anv very similar) face recognition pipelines using scikit-learn package. Overall, we'd like to explore different representations and see which one works better. Prepare dataset Step2: Now we are going to plot some samples from the dataset using the provided helper function. Step3: Nearest Neighbour baseline The simplest way to do face recognition is to treat raw pixels as features and perform Nearest Neighbor Search in the Euclidean space. Let's use KNeighborsClassifier class. Step4: Not very imperssive, is it? Eigenfaces All the dirty work will be done by the scikit-learn package. First we need to learn a dictionary of codewords. For that we preprocess the training set by making each face normalized (zero mean and unit variance).. Step5: Now we are going to apply PCA to obtain a dictionary of codewords. PCA class is what we need (use svd_solver='randomized' for randomized PCA). Step6: We plot a bunch of principal components. Step7: Transform training data, train an SVM and apply it to the encoded test data. Step8: How many components are sufficient to reach the same accuracy level?
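A compact way to see the whole eigenfaces pipeline sketched in the steps above is a scikit-learn Pipeline: standardise the pixels, project with PCA, classify with a linear SVM. This is only an illustration of the idea, not the notebook's reference solution, and the parameter values (64 components, C=0.01) are assumptions.

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC

# X_train, y_train, X_test, y_test are assumed to be the arrays loaded in the code below.
eigenfaces_clf = make_pipeline(
    StandardScaler(),                      # zero mean, unit variance per pixel
    PCA(n_components=64, svd_solver='randomized', random_state=0),
    LinearSVC(C=0.01),
)
# eigenfaces_clf.fit(X_train, y_train)
# print(eigenfaces_clf.score(X_test, y_test))
```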
Python Code: import scipy.io image_h, image_w = 32, 32 data = scipy.io.loadmat('faces_data.mat') X_train = data['train_faces'].reshape((image_w, image_h, -1)).transpose((2, 1, 0)).reshape((-1, image_h * image_w)) y_train = (data['train_labels'] - 1).reshape((-1,)) X_test = data['test_faces'].reshape((image_w, image_h, -1)).transpose((2, 1, 0)).reshape((-1, image_h * image_w)) y_test = (data['test_labels'] - 1).reshape((-1,)) n_features = X_train.shape[1] n_train = len(y_train) n_test = len(y_test) n_classes = len(np.unique(y_train)) print('Dataset loaded.') print(' Image size : {}x{}'.format(image_h, image_w)) print(' Train images : {}'.format(n_train)) print(' Test images : {}'.format(n_test)) print(' Number of classes : {}'.format(n_classes)) Explanation: Face recognition The goal of this seminar is to build two simple (anv very similar) face recognition pipelines using scikit-learn package. Overall, we'd like to explore different representations and see which one works better. Prepare dataset End of explanation def plot_gallery(images, titles, h, w, n_row=3, n_col=6): Helper function to plot a gallery of portraits plt.figure(figsize=(1.5 * n_col, 1.7 * n_row)) plt.subplots_adjust(bottom=0, left=.01, right=.99, top=.90, hspace=.35) for i in range(n_row * n_col): plt.subplot(n_row, n_col, i + 1) plt.imshow(images[i].reshape((h, w)), cmap=plt.cm.gray, interpolation='nearest') plt.title(titles[i], size=12) plt.xticks(()) plt.yticks(()) titles = [str(y) for y in y_train] plot_gallery(X_train, titles, image_h, image_w) Explanation: Now we are going to plot some samples from the dataset using the provided helper function. End of explanation from sklearn.neighbors import KNeighborsClassifier y_train = y_train.ravel() y_test = y_test.ravel() # Use KNeighborsClassifier to calculate test score for the Nearest Neighbour classifier. clf = KNeighborsClassifier(n_neighbors=1, n_jobs=-1) clf.fit(X_train, y_train) test_score = clf.score(X_test, y_test) print('Test score: {}'.format(test_score)) Explanation: Nearest Neighbour baseline The simplest way to do face recognition is to treat raw pixels as features and perform Nearest Neighbor Search in the Euclidean space. Let's use KNeighborsClassifier class. End of explanation X_train.shape X_train.mean(axis=0).shape # Populate variable 'X_train_processed' with samples each of which has zero mean and unit variance. X_train_processed = X_train * 1. mean = X_train_processed.mean(axis=0) X_train_processed -= mean std = X_train_processed.std(axis=0) X_train_processed /= std X_test_processed = X_test * 1. X_test_processed -= mean X_test_processed /= std X_train_processed.shape Explanation: Not very imperssive, is it? Eigenfaces All the dirty work will be done by the scikit-learn package. First we need to learn a dictionary of codewords. For that we preprocess the training set by making each face normalized (zero mean and unit variance).. End of explanation from sklearn.decomposition import RandomizedPCA n_components = 64 # Populate 'pca' with a trained instance of RamdomizedPCA. pca = RandomizedPCA(copy=True, n_components=n_components, random_state=123) X_train_pca = pca.fit_transform(X_train_processed) X_test_pca = pca.transform(X_test_processed) Explanation: Now we are going to apply PCA to obtain a dictionary of codewords. PCA class is what we need (use svd_solver='randomized' for randomized PCA). 
End of explanation plt.figure(figsize=(20,10)) for i in range(5): plt.subplot(1, 5, i + 1) plt.imshow(pca.components_[i].reshape(32,32), cmap=plt.cm.gray, interpolation='nearest') Explanation: We plot a bunch of principal components. End of explanation from sklearn.svm import SVC svc = SVC(kernel='linear', random_state=123) # Populate 'test_score' with test accuracy of an SVM classifier. svc.fit(X_train_pca, y_train) test_score = svc.score(X_test_pca, y_test) print('Test score: {}'.format(test_score)) Explanation: Transform training data, train an SVM and apply it to the encoded test data. End of explanation n_components = [1, 2, 4, 8, 16, 32, 64] accuracy = [] # Try different numbers of components and populate 'accuracy' list. for n_comp in n_components: pca = RandomizedPCA(n_components=n_comp, copy=True, random_state=123) X_train_pca = pca.fit_transform(X_train_processed) X_test_pca = pca.transform(X_test_processed) svc.fit(X_train_pca, y_train) accuracy.append(svc.score(X_test_pca, y_test)) plt.figure(figsize=(10, 6)) plt.plot(n_components, test_score * np.ones(len(n_components)), 'r') plt.plot(n_components, accuracy) print('Max accuracy: {}'.format(max(accuracy))) Explanation: How many components are sufficient to reach the same accuracy level? End of explanation
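One common heuristic for choosing the number of PCA components — not used in this notebook — is the cumulative explained variance of a fitted PCA. A hedged sketch, assuming the `X_train_processed` array defined above; the 90% threshold is an arbitrary example value.

```python
import numpy as np
from sklearn.decomposition import PCA

pca_full = PCA(n_components=64, svd_solver='randomized', random_state=123)
pca_full.fit(X_train_processed)
cumvar = np.cumsum(pca_full.explained_variance_ratio_)
# Smallest number of components covering 90% of the variance, if reached at all
n_for_90 = int(np.searchsorted(cumvar, 0.90)) + 1 if cumvar[-1] >= 0.90 else None
print(cumvar[-1], n_for_90)
```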
1,970
Given the following text description, write Python code to implement the functionality described. Description: Count ways to form minimum product triplets function to calculate number of triples ; Sort the array ; Count occurrences of third element ; If all three elements are same ( minimum element appears at l east 3 times ) . Answer is nC3 . ; If minimum element appears once . Answer is nC2 . ; Minimum two elements are distinct . Answer is nC1 . ; Driver code
Python Code: def noOfTriples(arr , n ) : arr . sort() count = 0 for i in range(n ) : if arr[i ] == arr[2 ] : count += 1   if arr[0 ] == arr[2 ] : return(count - 2 ) *(count - 1 ) *(count ) / 6  elif arr[1 ] == arr[2 ] : return(count - 1 ) *(count ) / 2  return count  arr =[1 , 3 , 3 , 4 ] n = len(arr ) print(noOfTriples(arr , n ) )
1,971
Given the following text description, write Python code to implement the functionality described below step by step Description: How to plot topomaps the way EEGLAB does If you have previous EEGLAB experience you may have noticed that topomaps (topoplots) generated using MNE-Python look a little different from those created in EEGLAB. If you prefer the EEGLAB style this example will show you how to calculate head sphere origin and radius to obtain EEGLAB-like channel layout in MNE. Step1: Create fake data First we will create a simple evoked object with a single timepoint using biosemi 10-20 channel layout. Step2: Calculate sphere origin and radius EEGLAB plots head outline at the level where the head circumference is measured in the 10-20 system (a line going through Fpz, T8/T4, Oz and T7/T3 channels). MNE-Python places the head outline lower on the z dimension, at the level of the anatomical landmarks Step3: Compare MNE and EEGLAB channel layout We already have the required x, y, z sphere center and its radius — we can use these values passing them to the sphere argument of many topo-plotting functions (by passing sphere=(x, y, z, radius)). Step4: Topomaps (topoplots) As the last step we do the same, but plotting the topomaps. These will not be particularly interesting as they will show random data but hopefully you will see the difference.
Python Code: # Authors: Mikołaj Magnuski <[email protected]> # # License: BSD (3-clause) import numpy as np from matplotlib import pyplot as plt import mne print(__doc__) Explanation: How to plot topomaps the way EEGLAB does If you have previous EEGLAB experience you may have noticed that topomaps (topoplots) generated using MNE-Python look a little different from those created in EEGLAB. If you prefer the EEGLAB style this example will show you how to calculate head sphere origin and radius to obtain EEGLAB-like channel layout in MNE. End of explanation biosemi_montage = mne.channels.make_standard_montage('biosemi64') n_channels = len(biosemi_montage.ch_names) fake_info = mne.create_info(ch_names=biosemi_montage.ch_names, sfreq=250., ch_types='eeg') rng = np.random.RandomState(0) data = rng.normal(size=(n_channels, 1)) * 1e-6 fake_evoked = mne.EvokedArray(data, fake_info) fake_evoked.set_montage(biosemi_montage) Explanation: Create fake data First we will create a simple evoked object with a single timepoint using biosemi 10-20 channel layout. End of explanation # first we obtain the 3d positions of selected channels chs = ['Oz', 'Fpz', 'T7', 'T8'] pos = np.stack([biosemi_montage.get_positions()['ch_pos'][ch] for ch in chs]) # now we calculate the radius from T7 and T8 x position # (we could use Oz and Fpz y positions as well) radius = np.abs(pos[[2, 3], 0]).mean() # then we obtain the x, y, z sphere center this way: # x: x position of the Oz channel (should be very close to 0) # y: y position of the T8 channel (should be very close to 0 too) # z: average z position of Oz, Fpz, T7 and T8 (their z position should be the # the same, so we could also use just one of these channels), it should be # positive and somewhere around `0.03` (3 cm) x = pos[0, 0] y = pos[-1, 1] z = pos[:, -1].mean() # lets print the values we got: print([f'{v:0.5f}' for v in [x, y, z, radius]]) Explanation: Calculate sphere origin and radius EEGLAB plots head outline at the level where the head circumference is measured in the 10-20 system (a line going through Fpz, T8/T4, Oz and T7/T3 channels). MNE-Python places the head outline lower on the z dimension, at the level of the anatomical landmarks :term:LPA, RPA, and NAS &lt;fiducial&gt;. Therefore to use the EEGLAB layout we have to move the origin of the reference sphere (a sphere that is used as a reference when projecting channel locations to a 2d plane) a few centimeters up. Instead of approximating this position by eye, as we did in the sensor locations tutorial &lt;tut-sensor-locations&gt;, here we will calculate it using the position of Fpz, T8, Oz and T7 channels available in our montage. End of explanation # create a two-panel figure with some space for the titles at the top fig, ax = plt.subplots(ncols=2, figsize=(8, 4), gridspec_kw=dict(top=0.9), sharex=True, sharey=True) # we plot the channel positions with default sphere - the mne way fake_evoked.plot_sensors(axes=ax[0], show=False) # in the second panel we plot the positions using the EEGLAB reference sphere fake_evoked.plot_sensors(sphere=(x, y, z, radius), axes=ax[1], show=False) # add titles ax[0].set_title('MNE channel projection', fontweight='bold') ax[1].set_title('EEGLAB channel projection', fontweight='bold') Explanation: Compare MNE and EEGLAB channel layout We already have the required x, y, z sphere center and its radius — we can use these values passing them to the sphere argument of many topo-plotting functions (by passing sphere=(x, y, z, radius)). 
End of explanation fig, ax = plt.subplots(ncols=2, figsize=(8, 4), gridspec_kw=dict(top=0.9), sharex=True, sharey=True) mne.viz.plot_topomap(fake_evoked.data[:, 0], fake_evoked.info, axes=ax[0], show=False) mne.viz.plot_topomap(fake_evoked.data[:, 0], fake_evoked.info, axes=ax[1], show=False, sphere=(x, y, z, radius)) # add titles ax[0].set_title('MNE', fontweight='bold') ax[1].set_title('EEGLAB', fontweight='bold') Explanation: Topomaps (topoplots) As the last step we do the same, but plotting the topomaps. These will not be particularly interesting as they will show random data but hopefully you will see the difference. End of explanation
1,972
Given the following text description, write Python code to implement the functionality described below step by step Description: Replication for results in Davidson et al. 2017. "Automated Hate Speech Detection and the Problem of Offensive Language" Step1: Loading the data Step2: Columns key Step3: This histogram shows the imbalanced nature of the task - most tweets containing "hate" words as defined by Hatebase were only considered to be offensive by the CF coders. More tweets were considered to be neither hate speech nor offensive language than were considered hate speech. Step9: Feature generation Step10: Running the model The best model was selected using a GridSearch with 5-fold CV. Step11: Evaluating the results
Python Code: import pandas as pd import numpy as np import pickle import sys from sklearn.feature_extraction.text import TfidfVectorizer import nltk from nltk.stem.porter import * import string import re from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer as VS from textstat.textstat import * from sklearn.linear_model import LogisticRegression from sklearn.feature_selection import SelectFromModel from sklearn.metrics import classification_report from sklearn.svm import LinearSVC import matplotlib.pyplot as plt import seaborn %matplotlib inline Explanation: Replication for results in Davidson et al. 2017. "Automated Hate Speech Detection and the Problem of Offensive Language" End of explanation df = pickle.load(open("../data/labeled_data.p",'rb')) df df.describe() df.columns Explanation: Loading the data End of explanation df['class'].hist() Explanation: Columns key: count = number of CrowdFlower users who coded each tweet (min is 3, sometimes more users coded a tweet when judgments were determined to be unreliable by CF). hate_speech = number of CF users who judged the tweet to be hate speech. offensive_language = number of CF users who judged the tweet to be offensive. neither = number of CF users who judged the tweet to be neither offensive nor non-offensive. class = class label for majority of CF users. 0 - hate speech 1 - offensive language 2 - neither tweet = raw tweet text End of explanation tweets=df.tweet Explanation: This histogram shows the imbalanced nature of the task - most tweets containing "hate" words as defined by Hatebase were only considered to be offensive by the CF coders. More tweets were considered to be neither hate speech nor offensive language than were considered hate speech. End of explanation stopwords=stopwords = nltk.corpus.stopwords.words("english") other_exclusions = ["#ff", "ff", "rt"] stopwords.extend(other_exclusions) stemmer = PorterStemmer() def preprocess(text_string): Accepts a text string and replaces: 1) urls with URLHERE 2) lots of whitespace with one instance 3) mentions with MENTIONHERE This allows us to get standardized counts of urls and mentions Without caring about specific people mentioned space_pattern = '\s+' giant_url_regex = ('http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|' '[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+') mention_regex = '@[\w\-]+' parsed_text = re.sub(space_pattern, ' ', text_string) parsed_text = re.sub(giant_url_regex, '', parsed_text) parsed_text = re.sub(mention_regex, '', parsed_text) return parsed_text def tokenize(tweet): Removes punctuation & excess whitespace, sets to lowercase, and stems tweets. Returns a list of stemmed tokens. 
tweet = " ".join(re.split("[^a-zA-Z]*", tweet.lower())).strip() tokens = [stemmer.stem(t) for t in tweet.split()] return tokens def basic_tokenize(tweet): Same as tokenize but without the stemming tweet = " ".join(re.split("[^a-zA-Z.,!?]*", tweet.lower())).strip() return tweet.split() vectorizer = TfidfVectorizer( tokenizer=tokenize, preprocessor=preprocess, ngram_range=(1, 3), stop_words=stopwords, use_idf=True, smooth_idf=False, norm=None, decode_error='replace', max_features=10000, min_df=5, max_df=0.75 ) #Construct tfidf matrix and get relevant scores tfidf = vectorizer.fit_transform(tweets).toarray() vocab = {v:i for i, v in enumerate(vectorizer.get_feature_names())} idf_vals = vectorizer.idf_ idf_dict = {i:idf_vals[i] for i in vocab.values()} #keys are indices; values are IDF scores #Get POS tags for tweets and save as a string tweet_tags = [] for t in tweets: tokens = basic_tokenize(preprocess(t)) tags = nltk.pos_tag(tokens) tag_list = [x[1] for x in tags] tag_str = " ".join(tag_list) tweet_tags.append(tag_str) #We can use the TFIDF vectorizer to get a token matrix for the POS tags pos_vectorizer = TfidfVectorizer( tokenizer=None, lowercase=False, preprocessor=None, ngram_range=(1, 3), stop_words=None, use_idf=False, smooth_idf=False, norm=None, decode_error='replace', max_features=5000, min_df=5, max_df=0.75, ) #Construct POS TF matrix and get vocab dict pos = pos_vectorizer.fit_transform(pd.Series(tweet_tags)).toarray() pos_vocab = {v:i for i, v in enumerate(pos_vectorizer.get_feature_names())} #Now get other features sentiment_analyzer = VS() def count_twitter_objs(text_string): Accepts a text string and replaces: 1) urls with URLHERE 2) lots of whitespace with one instance 3) mentions with MENTIONHERE 4) hashtags with HASHTAGHERE This allows us to get standardized counts of urls and mentions Without caring about specific people mentioned. Returns counts of urls, mentions, and hashtags. space_pattern = '\s+' giant_url_regex = ('http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|' '[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+') mention_regex = '@[\w\-]+' hashtag_regex = '#[\w\-]+' parsed_text = re.sub(space_pattern, ' ', text_string) parsed_text = re.sub(giant_url_regex, 'URLHERE', parsed_text) parsed_text = re.sub(mention_regex, 'MENTIONHERE', parsed_text) parsed_text = re.sub(hashtag_regex, 'HASHTAGHERE', parsed_text) return(parsed_text.count('URLHERE'),parsed_text.count('MENTIONHERE'),parsed_text.count('HASHTAGHERE')) def other_features(tweet): This function takes a string and returns a list of features. 
These include Sentiment scores, Text and Readability scores, as well as Twitter specific features sentiment = sentiment_analyzer.polarity_scores(tweet) words = preprocess(tweet) #Get text only syllables = textstat.syllable_count(words) num_chars = sum(len(w) for w in words) num_chars_total = len(tweet) num_terms = len(tweet.split()) num_words = len(words.split()) avg_syl = round(float((syllables+0.001))/float(num_words+0.001),4) num_unique_terms = len(set(words.split())) ###Modified FK grade, where avg words per sentence is just num words/1 FKRA = round(float(0.39 * float(num_words)/1.0) + float(11.8 * avg_syl) - 15.59,1) ##Modified FRE score, where sentence fixed to 1 FRE = round(206.835 - 1.015*(float(num_words)/1.0) - (84.6*float(avg_syl)),2) twitter_objs = count_twitter_objs(tweet) retweet = 0 if "rt" in words: retweet = 1 features = [FKRA, FRE,syllables, avg_syl, num_chars, num_chars_total, num_terms, num_words, num_unique_terms, sentiment['neg'], sentiment['pos'], sentiment['neu'], sentiment['compound'], twitter_objs[2], twitter_objs[1], twitter_objs[0], retweet] #features = pandas.DataFrame(features) return features def get_feature_array(tweets): feats=[] for t in tweets: feats.append(other_features(t)) return np.array(feats) other_features_names = ["FKRA", "FRE","num_syllables", "avg_syl_per_word", "num_chars", "num_chars_total", \ "num_terms", "num_words", "num_unique_words", "vader neg","vader pos","vader neu", \ "vader compound", "num_hashtags", "num_mentions", "num_urls", "is_retweet"] feats = get_feature_array(tweets) #Now join them all up M = np.concatenate([tfidf,pos,feats],axis=1) M.shape #Finally get a list of variable names variables = ['']*len(vocab) for k,v in vocab.iteritems(): variables[v] = k pos_variables = ['']*len(pos_vocab) for k,v in pos_vocab.iteritems(): pos_variables[v] = k feature_names = variables+pos_variables+other_features_names Explanation: Feature generation End of explanation X = pd.DataFrame(M) y = df['class'].astype(int) select = SelectFromModel(LogisticRegression(class_weight='balanced',penalty="l1",C=0.01)) X_ = select.fit_transform(X,y) model = LinearSVC(class_weight='balanced',C=0.01, penalty='l2', loss='squared_hinge',multi_class='ovr').fit(X_, y) model = LogisticRegression(class_weight='balanced',penalty='l2',C=0.01).fit(X_,y) y_preds = model.predict(X_) Explanation: Running the model The best model was selected using a GridSearch with 5-fold CV. End of explanation report = classification_report( y, y_preds ) print(report) plt.rc('pdf', fonttype=42) plt.rcParams['ps.useafm'] = True plt.rcParams['pdf.use14corefonts'] = True plt.rcParams['text.usetex'] = True plt.rcParams['font.serif'] = 'Times' plt.rcParams['font.family'] = 'serif' from sklearn.metrics import confusion_matrix confusion_matrix = confusion_matrix(y,y_preds) matrix_proportions = np.zeros((3,3)) for i in range(0,3): matrix_proportions[i,:] = confusion_matrix[i,:]/float(confusion_matrix[i,:].sum()) names=['Hate','Offensive','Neither'] confusion_df = pd.DataFrame(matrix_proportions, index=names,columns=names) plt.figure(figsize=(5,5)) seaborn.heatmap(confusion_df,annot=True,annot_kws={"size": 12},cmap='gist_gray_r',cbar=False, square=True,fmt='.2f') plt.ylabel(r'\textbf{True categories}',fontsize=14) plt.xlabel(r'\textbf{Predicted categories}',fontsize=14) plt.tick_params(labelsize=12) #Uncomment line below if you want to save the output #plt.savefig('confusion.pdf') #True distribution y.hist() pd.Series(y_preds).hist() Explanation: Evaluating the results End of explanation
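Note that the classification report above is computed on the same data the model was fitted on. For an out-of-sample estimate one could use cross-validated predictions instead; a hedged sketch that is not part of the original replication, reusing the `X_` and `y` defined above and the same hyperparameters:

```python
from sklearn.model_selection import cross_val_predict
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

clf = LogisticRegression(class_weight='balanced', penalty='l2', C=0.01)
y_cv_preds = cross_val_predict(clf, X_, y, cv=5)   # out-of-fold predictions
print(classification_report(y, y_cv_preds))
```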
1,973
Given the following text description, write Python code to implement the functionality described below step by step Description: Loading and parsing XML files from the file system Our XML files are in a subdirectory called 'partonopeus'. We load the os library and use its listdir() method to verify the contents of that directory. Step1: We create a dictionary to hold our input files, using the single-letter filename before the '.xml' extension as the key and the file itself as the value. The lxml library that we use to parse XML requires that we open the XML file for reading in bytes mode. Step2: We load the lxml library and use the .XML() method to parse the file and the .tostring() method to stringify the results so that we can examine them. In Real Life, if we need to manipulate the XML (e.g., to search it with XPath), we would keep it as XML. That is, the tostring() process is used here just to create something human-readable for pedagogical purposes.
Python Code: import os os.listdir('partonopeus') Explanation: Loading and parsing XML files from the file system Our XML files are in a subdirectory called 'partonopeus'. We load the os library and use its listdir() method to verify the contents of that directory. End of explanation inputFiles = {} for inputFile in os.listdir('partonopeus'): siglum = inputFile[0] contents = open('partonopeus/' + inputFile,'rb').read() inputFiles[siglum] = contents Explanation: We create a dictionary to hold our input files, using the single-letter filename before the '.xml' extension as the key and the file itself as the value. The lxml library that we use to parse XML requires that we open the XML file for reading in bytes mode. End of explanation from lxml import etree print(etree.tostring(etree.XML(inputFiles['A']))) Explanation: We load the lxml library and use the .XML() method to parse the file and the .tostring() method to stringify the results so that we can examine them. In Real Life, if we need to manipulate the XML (e.g., to search it with XPath), we would keep it as XML. That is, the tostring() process is used here just to create something human-readable for pedagogical purposes. End of explanation
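The explanation above notes that in real use the parsed XML would be kept as an `etree` object so it can be queried with XPath. A minimal hedged sketch of what that could look like; the tag name `l` is a placeholder assumption, since the actual markup of the Partonopeus files is not shown here.

```python
from lxml import etree

tree = etree.XML(inputFiles['A'])      # parsed root element, as in the cell above
# Hypothetical query: count all <l> elements anywhere in the document.
lines = tree.xpath('//l')
print(len(lines))
```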
1,974
Given the following text description, write Python code to implement the functionality described below step by step Description: ES-DOC CMIP6 Model Properties - Ocnbgchem MIP Era Step1: Document Authors Set document authors Step2: Document Contributors Specify document contributors Step3: Document Publication Specify document publication status Step4: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Time Stepping Framework --&gt; Passive Tracers Transport 3. Key Properties --&gt; Time Stepping Framework --&gt; Biology Sources Sinks 4. Key Properties --&gt; Transport Scheme 5. Key Properties --&gt; Boundary Forcing 6. Key Properties --&gt; Gas Exchange 7. Key Properties --&gt; Carbon Chemistry 8. Tracers 9. Tracers --&gt; Ecosystem 10. Tracers --&gt; Ecosystem --&gt; Phytoplankton 11. Tracers --&gt; Ecosystem --&gt; Zooplankton 12. Tracers --&gt; Disolved Organic Matter 13. Tracers --&gt; Particules 14. Tracers --&gt; Dic Alkalinity 1. Key Properties Ocean Biogeochemistry key properties 1.1. Model Overview Is Required Step5: 1.2. Model Name Is Required Step6: 1.3. Model Type Is Required Step7: 1.4. Elemental Stoichiometry Is Required Step8: 1.5. Elemental Stoichiometry Details Is Required Step9: 1.6. Prognostic Variables Is Required Step10: 1.7. Diagnostic Variables Is Required Step11: 1.8. Damping Is Required Step12: 2. Key Properties --&gt; Time Stepping Framework --&gt; Passive Tracers Transport Time stepping method for passive tracers transport in ocean biogeochemistry 2.1. Method Is Required Step13: 2.2. Timestep If Not From Ocean Is Required Step14: 3. Key Properties --&gt; Time Stepping Framework --&gt; Biology Sources Sinks Time stepping framework for biology sources and sinks in ocean biogeochemistry 3.1. Method Is Required Step15: 3.2. Timestep If Not From Ocean Is Required Step16: 4. Key Properties --&gt; Transport Scheme Transport scheme in ocean biogeochemistry 4.1. Type Is Required Step17: 4.2. Scheme Is Required Step18: 4.3. Use Different Scheme Is Required Step19: 5. Key Properties --&gt; Boundary Forcing Properties of biogeochemistry boundary forcing 5.1. Atmospheric Deposition Is Required Step20: 5.2. River Input Is Required Step21: 5.3. Sediments From Boundary Conditions Is Required Step22: 5.4. Sediments From Explicit Model Is Required Step23: 6. Key Properties --&gt; Gas Exchange *Properties of gas exchange in ocean biogeochemistry * 6.1. CO2 Exchange Present Is Required Step24: 6.2. CO2 Exchange Type Is Required Step25: 6.3. O2 Exchange Present Is Required Step26: 6.4. O2 Exchange Type Is Required Step27: 6.5. DMS Exchange Present Is Required Step28: 6.6. DMS Exchange Type Is Required Step29: 6.7. N2 Exchange Present Is Required Step30: 6.8. N2 Exchange Type Is Required Step31: 6.9. N2O Exchange Present Is Required Step32: 6.10. N2O Exchange Type Is Required Step33: 6.11. CFC11 Exchange Present Is Required Step34: 6.12. CFC11 Exchange Type Is Required Step35: 6.13. CFC12 Exchange Present Is Required Step36: 6.14. CFC12 Exchange Type Is Required Step37: 6.15. SF6 Exchange Present Is Required Step38: 6.16. SF6 Exchange Type Is Required Step39: 6.17. 13CO2 Exchange Present Is Required Step40: 6.18. 13CO2 Exchange Type Is Required Step41: 6.19. 14CO2 Exchange Present Is Required Step42: 6.20. 14CO2 Exchange Type Is Required Step43: 6.21. Other Gases Is Required Step44: 7. Key Properties --&gt; Carbon Chemistry Properties of carbon chemistry biogeochemistry 7.1. Type Is Required Step45: 7.2. PH Scale Is Required Step46: 7.3. 
Constants If Not OMIP Is Required Step47: 8. Tracers Ocean biogeochemistry tracers 8.1. Overview Is Required Step48: 8.2. Sulfur Cycle Present Is Required Step49: 8.3. Nutrients Present Is Required Step50: 8.4. Nitrous Species If N Is Required Step51: 8.5. Nitrous Processes If N Is Required Step52: 9. Tracers --&gt; Ecosystem Ecosystem properties in ocean biogeochemistry 9.1. Upper Trophic Levels Definition Is Required Step53: 9.2. Upper Trophic Levels Treatment Is Required Step54: 10. Tracers --&gt; Ecosystem --&gt; Phytoplankton Phytoplankton properties in ocean biogeochemistry 10.1. Type Is Required Step55: 10.2. Pft Is Required Step56: 10.3. Size Classes Is Required Step57: 11. Tracers --&gt; Ecosystem --&gt; Zooplankton Zooplankton properties in ocean biogeochemistry 11.1. Type Is Required Step58: 11.2. Size Classes Is Required Step59: 12. Tracers --&gt; Disolved Organic Matter Disolved organic matter properties in ocean biogeochemistry 12.1. Bacteria Present Is Required Step60: 12.2. Lability Is Required Step61: 13. Tracers --&gt; Particules Particulate carbon properties in ocean biogeochemistry 13.1. Method Is Required Step62: 13.2. Types If Prognostic Is Required Step63: 13.3. Size If Prognostic Is Required Step64: 13.4. Size If Discrete Is Required Step65: 13.5. Sinking Speed If Prognostic Is Required Step66: 14. Tracers --&gt; Dic Alkalinity DIC and alkalinity properties in ocean biogeochemistry 14.1. Carbon Isotopes Is Required Step67: 14.2. Abiotic Carbon Is Required Step68: 14.3. Alkalinity Is Required
Python Code: # DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'test-institute-3', 'sandbox-3', 'ocnbgchem') Explanation: ES-DOC CMIP6 Model Properties - Ocnbgchem MIP Era: CMIP6 Institute: TEST-INSTITUTE-3 Source ID: SANDBOX-3 Topic: Ocnbgchem Sub-Topics: Tracers. Properties: 65 (37 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:54:46 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) Explanation: Document Authors Set document authors End of explanation # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) Explanation: Document Contributors Specify document contributors End of explanation # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) Explanation: Document Publication Specify document publication status End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Time Stepping Framework --&gt; Passive Tracers Transport 3. Key Properties --&gt; Time Stepping Framework --&gt; Biology Sources Sinks 4. Key Properties --&gt; Transport Scheme 5. Key Properties --&gt; Boundary Forcing 6. Key Properties --&gt; Gas Exchange 7. Key Properties --&gt; Carbon Chemistry 8. Tracers 9. Tracers --&gt; Ecosystem 10. Tracers --&gt; Ecosystem --&gt; Phytoplankton 11. Tracers --&gt; Ecosystem --&gt; Zooplankton 12. Tracers --&gt; Disolved Organic Matter 13. Tracers --&gt; Particules 14. Tracers --&gt; Dic Alkalinity 1. Key Properties Ocean Biogeochemistry key properties 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of ocean biogeochemistry model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of ocean biogeochemistry model code (PISCES 2.0,...) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.model_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Geochemical" # "NPZD" # "PFT" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.3. Model Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of ocean biogeochemistry model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Fixed" # "Variable" # "Mix of both" # TODO - please enter value(s) Explanation: 1.4. Elemental Stoichiometry Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe elemental stoichiometry (fixed, variable, mix of the two) End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.5. Elemental Stoichiometry Details Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe which elements have fixed/variable stoichiometry End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.6. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of all prognostic tracer variables in the ocean biogeochemistry component End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.7. Diagnostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of all diagnotic tracer variables in the ocean biogeochemistry component End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.damping') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.8. Damping Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe any tracer damping used (such as artificial correction or relaxation to climatology,...) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "use ocean model transport time step" # "use specific time step" # TODO - please enter value(s) Explanation: 2. Key Properties --&gt; Time Stepping Framework --&gt; Passive Tracers Transport Time stepping method for passive tracers transport in ocean biogeochemistry 2.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time stepping framework for passive tracers End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 2.2. Timestep If Not From Ocean Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Time step for passive tracers (if different from ocean) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "use ocean model transport time step" # "use specific time step" # TODO - please enter value(s) Explanation: 3. Key Properties --&gt; Time Stepping Framework --&gt; Biology Sources Sinks Time stepping framework for biology sources and sinks in ocean biogeochemistry 3.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time stepping framework for biology sources and sinks End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 3.2. Timestep If Not From Ocean Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Time step for biology sources and sinks (if different from ocean) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Offline" # "Online" # TODO - please enter value(s) Explanation: 4. Key Properties --&gt; Transport Scheme Transport scheme in ocean biogeochemistry 4.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of transport scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Use that of ocean model" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 4.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Transport scheme used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4.3. Use Different Scheme Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Decribe transport scheme if different than that of ocean model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "from file (climatology)" # "from file (interannual variations)" # "from Atmospheric Chemistry model" # TODO - please enter value(s) Explanation: 5. Key Properties --&gt; Boundary Forcing Properties of biogeochemistry boundary forcing 5.1. Atmospheric Deposition Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how atmospheric deposition is modeled End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "from file (climatology)" # "from file (interannual variations)" # "from Land Surface model" # TODO - please enter value(s) Explanation: 5.2. River Input Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how river input is modeled End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.3. Sediments From Boundary Conditions Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List which sediments are speficied from boundary condition End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.4. 
Sediments From Explicit Model Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List which sediments are speficied from explicit sediment model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 6. Key Properties --&gt; Gas Exchange *Properties of gas exchange in ocean biogeochemistry * 6.1. CO2 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is CO2 gas exchange modeled ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "OMIP protocol" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 6.2. CO2 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe CO2 gas exchange End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 6.3. O2 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is O2 gas exchange modeled ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "OMIP protocol" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 6.4. O2 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe O2 gas exchange End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 6.5. DMS Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is DMS gas exchange modeled ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.6. DMS Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify DMS gas exchange scheme type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 6.7. N2 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is N2 gas exchange modeled ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.8. 
N2 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify N2 gas exchange scheme type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 6.9. N2O Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is N2O gas exchange modeled ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.10. N2O Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify N2O gas exchange scheme type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 6.11. CFC11 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is CFC11 gas exchange modeled ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.12. CFC11 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify CFC11 gas exchange scheme type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 6.13. CFC12 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is CFC12 gas exchange modeled ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.14. CFC12 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify CFC12 gas exchange scheme type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 6.15. SF6 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is SF6 gas exchange modeled ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.16. SF6 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify SF6 gas exchange scheme type End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 6.17. 13CO2 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is 13CO2 gas exchange modeled ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.18. 13CO2 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify 13CO2 gas exchange scheme type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 6.19. 14CO2 Exchange Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is 14CO2 gas exchange modeled ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.20. 14CO2 Exchange Type Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify 14CO2 gas exchange scheme type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.21. Other Gases Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify any other gas exchange End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "OMIP protocol" # "Other protocol" # TODO - please enter value(s) Explanation: 7. Key Properties --&gt; Carbon Chemistry Properties of carbon chemistry biogeochemistry 7.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how carbon chemistry is modeled End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Sea water" # "Free" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 7.2. PH Scale Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If NOT OMIP protocol, describe pH scale. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.3. Constants If Not OMIP Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If NOT OMIP protocol, list carbon chemistry constants. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8. Tracers Ocean biogeochemistry tracers 8.1. 
Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of tracers in ocean biogeochemistry End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 8.2. Sulfur Cycle Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is sulfur cycle modeled ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Nitrogen (N)" # "Phosphorous (P)" # "Silicium (S)" # "Iron (Fe)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 8.3. Nutrients Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List nutrient species present in ocean biogeochemistry model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Nitrates (NO3)" # "Amonium (NH4)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 8.4. Nitrous Species If N Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If nitrogen present, list nitrous species. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Dentrification" # "N fixation" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 8.5. Nitrous Processes If N Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If nitrogen present, list nitrous processes. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9. Tracers --&gt; Ecosystem Ecosystem properties in ocean biogeochemistry 9.1. Upper Trophic Levels Definition Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Definition of upper trophic level (e.g. based on size) ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.2. Upper Trophic Levels Treatment Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Define how upper trophic level are treated End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Generic" # "PFT including size based (specify both below)" # "Size based only (specify below)" # "PFT only (specify below)" # TODO - please enter value(s) Explanation: 10. Tracers --&gt; Ecosystem --&gt; Phytoplankton Phytoplankton properties in ocean biogeochemistry 10.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of phytoplankton End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Diatoms" # "Nfixers" # "Calcifiers" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 10.2. Pft Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Phytoplankton functional types (PFT) (if applicable) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Microphytoplankton" # "Nanophytoplankton" # "Picophytoplankton" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 10.3. Size Classes Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Phytoplankton size classes (if applicable) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Generic" # "Size based (specify below)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11. Tracers --&gt; Ecosystem --&gt; Zooplankton Zooplankton properties in ocean biogeochemistry 11.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of zooplankton End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Microzooplankton" # "Mesozooplankton" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11.2. Size Classes Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Zooplankton size classes (if applicable) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 12. Tracers --&gt; Disolved Organic Matter Disolved organic matter properties in ocean biogeochemistry 12.1. Bacteria Present Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there bacteria representation ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Labile" # "Semi-labile" # "Refractory" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 12.2. Lability Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe treatment of lability in dissolved organic matter End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.particules.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Diagnostic" # "Diagnostic (Martin profile)" # "Diagnostic (Balast)" # "Prognostic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13. Tracers --&gt; Particules Particulate carbon properties in ocean biogeochemistry 13.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How is particulate carbon represented in ocean biogeochemistry? End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "POC" # "PIC (calcite)" # "PIC (aragonite" # "BSi" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.2. Types If Prognostic Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If prognostic, type(s) of particulate matter taken into account End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "No size spectrum used" # "Full size spectrum" # "Discrete size classes (specify which below)" # TODO - please enter value(s) Explanation: 13.3. Size If Prognostic Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If prognostic, describe if a particule size spectrum is used to represent distribution of particules in water volume End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 13.4. Size If Discrete Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If prognostic and discrete size, describe which size classes are used End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Function of particule size" # "Function of particule type (balast)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.5. Sinking Speed If Prognostic Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If prognostic, method for calculation of sinking speed of particules End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "C13" # "C14)" # TODO - please enter value(s) Explanation: 14. Tracers --&gt; Dic Alkalinity DIC and alkalinity properties in ocean biogeochemistry 14.1. Carbon Isotopes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Which carbon isotopes are modelled (C13, C14)? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 14.2. Abiotic Carbon Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is abiotic carbon modelled ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Prognostic" # "Diagnostic)" # TODO - please enter value(s) Explanation: 14.3. Alkalinity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How is alkalinity modelled ? End of explanation
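A quick, purely illustrative sketch of how the fill-in cells above are completed. It reuses the DOC object that the notebook's earlier setup cells create (assumed available here), and the values chosen are hypothetical placeholders rather than the settings of any real ocean biogeochemistry model; the point is only the DOC.set_id() followed by DOC.set_value() pattern for ENUM, BOOLEAN and free-text STRING properties.
# Hypothetical example values only; substitute the documented model's real settings.
# ENUM property (cardinality 1.1): choose one of the listed valid choices.
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
DOC.set_value("Online")
# BOOLEAN property (cardinality 1.1): pass True or False without quotes.
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
DOC.set_value(True)
# STRING property (cardinality 0.1): free text, filled in only when applicable.
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
DOC.set_value("placeholder description of the DMS gas exchange scheme")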
1,975
Given the following text description, write Python code to implement the functionality described below step by step Description: Sudden Landslide Identification Product (SLIP) What to expect from this notebook Introduction to the SLIP algorithm describing change detection in the context of datacube Detailed band math equations for SLIP filtering Illustrate the step by step evolution of a SLIP product <a id='slip_top'></a> SLIP SLIP is used to automate the detection of Landslides. A SLIP product is the result of filtering based on per-pixel changes in both soil moisture and vegetation in areas with high elevation gradients. All of which (with the exception of elevation gradients) can be computed using simple bandmath equations. Data SLIP makes use of the following Landsat 7 Surface Reflectance Bands Step1: <span id="slip_plat_prod">Choose Platform and Product &#9652;</span> Step2: <span id="slip_define_extents">Define the Extents of the Analysis &#9652;</span> Step3: <span id="slip_load_data">Load Data from the Data Cube &#9652;</span> Step4: Visualization This step is optional, but useful to those seeking a step by step validation of SLIP. The following code shows a true-color representation of our loaded scene. Step5: <span id="slip_change_detect">Change Detection &#9652;</span> In the context of SLIP, Change detection happens through the comparison of 'current' values against 'past' values. <br> Trivialized Example Step6: <br> However, OLD can have varying interpretations. In SLIP, OLD values (referred to in code as BASELINE values) are simply rolling averages of not-nan values leading up to the date in question. <br> The following figure illustrates such a compositing method Step7: It is important to note that compositing will shorten the length of baseline's time domain by the window size since ranges less than the composite size are not computed. For a composite size of 5, new's first 5 time values will not have composite values. Step8: What this composite looks like Step9: The baseline composite is featured in the figure above (left). It represents what was typical for the past five acquisitions 'leading-up-to' time_to_show. Displayed next to it (right) is the true-color visualization of the acquisition 'at' time_to_show. The new object contains unaltered LS7 scenes that are index-able using a date like time_to_show. The baseline object contains a block of composites of those landsat scenes that is index-able the same way. <span id="slip_ndwi">NDWI (Nomalized Difference Water Index) &#9652;</span> SLIP makes the major assumption that landslides will strip a hill/mountain-side of all of its vegetation. SLIP uses NDWI, an index used to monitor water content of leaves, to track the existence of vegetation on a slope. At high enough levels, leaf water content change can no longer be attributed to something like seasonal fluctuations and will most likely indicate a change in the existence of vegetation. NDWI BANDMATH NDWI is computed on a per-pixel level and involves arithmetic between NIR (Near infrared) and SWIR1 (Short Wave Infrared) values. NDWI is computed for both NEW and BASELINE imagery then compared to yield NDWI change. The equations bellow detail a very simple derivation of change in NDWI Step10: Filtering NDWI In the context of code, you can best think of filtering as a peicewise transformation that assigns a nan (or null) value to points that fall below our minimum change threshold. 
(For SLIP that threshold is 20%) <br> $$ ndwi_filter(Dataset) = \left{ \begin{array}{lr} Dataset & Step11: How far NDWI filtering gets you A SLIP product is the result of a process of elimination. NDWI is sufficient in eliminating a majority of non-contending areas early on in the process. Featured below is what is left of the original image after having filtered for changes in NDWI . Step12: Highlighted in the center picture are values that meet our NDWI change expectations. Featured in the right-most image is what remains of our original image after NDWI filtering. <span id="slip_red">RED Reflectance &#9652;</span> SLIP makes another important assumption about Landslides. On top of stripping the Slope of vegetation, a landslide will reveal a large layer of previously vegetated soil. Since soil reflects more light in the RED spectral band than highly vegetated areas do, SLIP looks for increases in the RED bands. This captures both the loss of vegetation, and the unearthing of soil. RED change bandmath Red change is computed on a per-pixel level and involves arithmetic on the RED band values. The derivation of RED change is simple Step13: Filtering for RED reflectance increase Filtering RED reflectance change is just like the piecewise transformation used for filtering NDWI change. <br> $$ red_filter(Dataset) = \left{ \begin{array}{lr} Dataset & Step14: How much further RED reflectance filtering gets you Continuing SLIP's process of elimination, Red increase filtering will further refine the area of interest to areas that, upon visual inspection appear to be light brown in color. Step15: <span id="slip_aster">ASTER Global Elevation Models &#9652;</span> Aster GDEM models provide elevation data for each pixel expressed in meters. For SLIP height is not enough to determine that a landslide can happen on a pixel. SLIP focuses on areas with high elevation Gradients/Slope (Expressed in non-radian degrees).The driving motivation for using slope based filtering is that landslides are less likely to happen in flat regions. Loading the elevation model Step16: Calculating Angle of elevation A gradient is generated for each pixel using the four pixels adjacent to it, as well as a rise/run formuala. <br><br> $$ Gradient = \frac{Rise}{Run} $$ <br><br> Basic trigonometric identities can then be used to derive the angle Step17: Filtering out pixels that don't meet requirements for steepness <br> Filtering based on slope is a peicewise transformation using a derived slopemask Step18: Visualising our final SLIP product The final results of SLIP are small regions of points with a high likelihood of landslides having occurred on them. Furthermore there is no possibility that detections are made in flat areas(areas with less than a $15^{\circ}$ angle of elevation. Step19: <span id="slip_evo">Reviewing the Evolution of the SLIP Product &#9652;</span> The following visualizations will detail the evolution of the SLIP product from the previous steps. Order of operations Step20: <span id="slip_compare_output_baseline">Visual Comparison of SLIP Output and Baseline Composited Scene &#9652;</span> In the name of validating results, it makes sense to compare the SLIP product generated for the selected date (time_to_show) to the composited scene representing what is considered to be "normal" for the last 5 acquisitions.
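As a compact summary of the band math above, and before the notebook cells that implement it step by step, the whole SLIP filter chain can be sketched as three nested masking operations. The function below is a schematic restatement rather than the notebook's own implementation: the function name and argument packaging are assumptions, while the NDWI and RED formulas, the 0.2 and 0.4 thresholds and the slope mask correspond to what is described above and used in the code that follows.
def slip_filter_sketch(new, baseline, slope_mask, ndwi_threshold=0.2, red_threshold=0.4):
    # Sketch only: keep a pixel when it passes all three SLIP tests,
    # otherwise xarray's where() turns it into NaN.
    ndwi_new = (new.nir - new.swir1) / (new.nir + new.swir1)
    ndwi_baseline = (baseline.nir - baseline.swir1) / (baseline.nir + baseline.swir1)
    ndwi_change = ndwi_new - ndwi_baseline                      # vegetation loss signal
    red_change = (new.red - baseline.red) / baseline.red        # soil exposure signal
    candidate = new.where(abs(ndwi_change) > ndwi_threshold)    # > 20% NDWI change
    candidate = candidate.where(red_change > red_threshold)     # > 40% RED increase
    return candidate.where(slope_mask)                          # only sufficiently steep pixels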
Python Code: import sys import os sys.path.append(os.environ.get('NOTEBOOK_ROOT')) import numpy as np import xarray as xr import pandas as pd import matplotlib.pyplot as plt from utils.data_cube_utilities.dc_display_map import display_map from utils.data_cube_utilities.clean_mask import landsat_clean_mask_full # landsat_qa_clean_mask, landsat_clean_mask_invalid from utils.data_cube_utilities.dc_baseline import generate_baseline from utils.data_cube_utilities.dc_displayutil import display_at_time from utils.data_cube_utilities.dc_slip import create_slope_mask from datacube.utils.aws import configure_s3_access configure_s3_access(requester_pays=True) import datacube dc = datacube.Datacube() Explanation: Sudden Landslide Identification Product (SLIP) What to expect from this notebook Introduction to the SLIP algorithm describing change detection in the context of datacube Detailed band math equations for SLIP filtering Illustrate the step by step evolution of a SLIP product <a id='slip_top'></a> SLIP SLIP is used to automate the detection of Landslides. A SLIP product is the result of filtering based on per-pixel changes in both soil moisture and vegetation in areas with high elevation gradients. All of which (with the exception of elevation gradients) can be computed using simple bandmath equations. Data SLIP makes use of the following Landsat 7 Surface Reflectance Bands: - RED, - NIR, - SWIR1 - PIXEL_QA SLIP makes use of the following ASTER GDEM V2 bands: - dem Algorithmic Process Algorithmically speaking, SLIP is a series of per-pixel filter operations acting on relationships between NEW(current) and BASELINE(historical) values of an area. The remaining pixels after filter operations will be what SLIP classifies as landslides. Itemized in the list below are operations taken to create a SLIP product: Import and initialize datacube Load Geographic area Remove clouds and no-data values Label this product NEW Generate a rolling average composite of NEW Label the rolling average composite BASELINE Filter in favor of sufficiently large changes in vegetation (using NDWI values derived from NEW and BASELINE) Filter in favor of sufficiently large increases in RED reflectance(using RED band values from NEW and BASELINE) Generate a slope-mask(using ASTERDEM V2 data) Filter in favor of areas that have a high enough slope(Landslides don't happen on flat surfaces) Index Import Dependencies and Connect to the Data Cube Choose Platform and Product Define the Extents of the Analysis Load Data from the Data Cube Change Detection NDWI (Nomalized Difference Water Index) RED Reflectance ASTER Global Elevation Models Reviewing the Evolution of the SLIP Product Visual Comparison of SLIP Output and Baseline Composited Scene <span id="slip_import">Import Dependencies and Connect to the Data Cube &#9652;</span> End of explanation platform = 'LANDSAT_8' product = 'ls8_usgs_sr_scene' collection = 'c1' level = 'l2' Explanation: <span id="slip_plat_prod">Choose Platform and Product &#9652;</span> End of explanation # Freetown, Sierra Leone # (https://www.reuters.com/article/us-leone-mudslide-africa/cities-across-africa-face-threat-of-landslides-like-sierra-leone-idUSKCN1AY115) # define geographic boundaries in (min, max) format lon = (-13.3196, -12.9366) lat = (8.1121, 8.5194) # define date range boundaries in (min,max) format # There should be a landslide by Freetown during August 2017. 
date_range =("2016-01-01", "2017-12-31") display_map(lat, lon) Explanation: <span id="slip_define_extents">Define the Extents of the Analysis &#9652;</span> End of explanation # Define desired bands. For SLIP, only red, nir, swir and pixel_qa will be necessary. desired_bands = ['red','nir','swir1','pixel_qa'] # Add blue and green bands since they are needed for visualizing results (RGB). desired_bands = desired_bands + ['green', 'blue'] # Load area. landsat_ds = dc.load(product = product,\ platform = platform,\ lat = lat,\ lon = lon,\ time = date_range,\ measurements = desired_bands, group_by='solar_day', dask_chunks={'time':1, 'longitude': 1000, 'latitude': 1000}).persist() # clean_mask = landsat_qa_clean_mask(landsat_ds, platform) & \ # (landsat_ds != -9999).to_array().all('variable') & \ # landsat_clean_mask_invalid(landsat_ds) clean_mask = landsat_clean_mask_full(dc, landsat_ds, product=product, platform=platform, collection=collection, level=level).persist() # Determine the times with data. data_time_mask = (clean_mask.sum(['latitude', 'longitude']) > 0).persist() clean_mask = clean_mask.sel(time=data_time_mask) landsat_ds = landsat_ds.sel(time=data_time_mask) landsat_ds = landsat_ds.where(clean_mask).persist() Explanation: <span id="slip_load_data">Load Data from the Data Cube &#9652;</span> End of explanation time_to_show = '2017-08-04' acq_to_show = landsat_ds.sel(time=time_to_show, method='nearest') rgb_da = acq_to_show[['red', 'green', 'blue']].squeeze().to_array().compute() vmin = rgb_da.quantile(0.05).values vmax = rgb_da.quantile(0.95).values rgb_da.plot.imshow(vmin=vmin, vmax=vmax) plt.show() Explanation: Visualization This step is optional, but useful to those seeking a step by step validation of SLIP. The following code shows a true-color representation of our loaded scene. End of explanation new = acq_to_show Explanation: <span id="slip_change_detect">Change Detection &#9652;</span> In the context of SLIP, Change detection happens through the comparison of 'current' values against 'past' values. <br> Trivialized Example: <br> $$ \Delta Value = (Value_{new} - Value_{old})/ Value_{old} $$ <br> It is easy to define NEW as the current value being analyzed. <br> End of explanation # Generate a moving average of n values leading up to current time. baseline = generate_baseline(landsat_ds, composite_size = 3, mode = 'average') Explanation: <br> However, OLD can have varying interpretations. In SLIP, OLD values (referred to in code as BASELINE values) are simply rolling averages of not-nan values leading up to the date in question. <br> The following figure illustrates such a compositing method: <br><br> <!-- ![img](avg_compositing.png) --> <br> In the figure above, t4 values are the average of t1-t3 (assuming a window size of 3) <br> The code below composites with a window size of 5. End of explanation (len(new.time), len(baseline.time)) Explanation: It is important to note that compositing will shorten the length of baseline's time domain by the window size since ranges less than the composite size are not computed. For a composite size of 5, new's first 5 time values will not have composite values. 
End of explanation display_at_time([baseline, new], time = time_to_show, width = 2, w = 12) Explanation: What this composite looks like End of explanation ndwi_new = (new.nir- new.swir1)/(new.nir + new.swir1) ndwi_baseline = (baseline.nir - baseline.swir1)/ (baseline.nir + baseline.swir1) ndwi_change = ndwi_new - ndwi_baseline Explanation: The baseline composite is featured in the figure above (left). It represents what was typical for the past five acquisitions 'leading-up-to' time_to_show. Displayed next to it (right) is the true-color visualization of the acquisition 'at' time_to_show. The new object contains unaltered LS7 scenes that are index-able using a date like time_to_show. The baseline object contains a block of composites of those landsat scenes that is index-able the same way. <span id="slip_ndwi">NDWI (Nomalized Difference Water Index) &#9652;</span> SLIP makes the major assumption that landslides will strip a hill/mountain-side of all of its vegetation. SLIP uses NDWI, an index used to monitor water content of leaves, to track the existence of vegetation on a slope. At high enough levels, leaf water content change can no longer be attributed to something like seasonal fluctuations and will most likely indicate a change in the existence of vegetation. NDWI BANDMATH NDWI is computed on a per-pixel level and involves arithmetic between NIR (Near infrared) and SWIR1 (Short Wave Infrared) values. NDWI is computed for both NEW and BASELINE imagery then compared to yield NDWI change. The equations bellow detail a very simple derivation of change in NDWI: $$ NDWI_{NEW} = \frac{NIR_{NEW} - SWIR_{NEW}}{NIR_{NEW} + SWIR_{NEW}}$$ <br><br> $$ NDWI_{BASELINE} = \frac{NIR_{BASELINE} - SWIR_{BASELINE}}{NIR_{BASELINE} + SWIR_{BASELINE}}$$ <br><br> $$\Delta NDWI = NDWI_{NEW} - NDWI_{BASELINE}$$ <br> The code is just as simple: End of explanation new_ndwi_filtered = new.where(abs(ndwi_change) > 0.2) Explanation: Filtering NDWI In the context of code, you can best think of filtering as a peicewise transformation that assigns a nan (or null) value to points that fall below our minimum change threshold. (For SLIP that threshold is 20%) <br> $$ ndwi_filter(Dataset) = \left{ \begin{array}{lr} Dataset & : | \Delta NDWI(Dataset) | > 0.2\ np.nan & : | \Delta NDWI(Dataset) | \le 0.2 \end{array} \right.\ $$ <br> In code, it's even simpler: End of explanation display_at_time([new, (new, new_ndwi_filtered),new_ndwi_filtered], time = time_to_show, width = 3, w =14) Explanation: How far NDWI filtering gets you A SLIP product is the result of a process of elimination. NDWI is sufficient in eliminating a majority of non-contending areas early on in the process. Featured below is what is left of the original image after having filtered for changes in NDWI . End of explanation red_change = (new.red - baseline.red)/(baseline.red) Explanation: Highlighted in the center picture are values that meet our NDWI change expectations. Featured in the right-most image is what remains of our original image after NDWI filtering. <span id="slip_red">RED Reflectance &#9652;</span> SLIP makes another important assumption about Landslides. On top of stripping the Slope of vegetation, a landslide will reveal a large layer of previously vegetated soil. Since soil reflects more light in the RED spectral band than highly vegetated areas do, SLIP looks for increases in the RED bands. This captures both the loss of vegetation, and the unearthing of soil. 
RED change bandmath Red change is computed on a per-pixel level and involves arithmetic on the RED band values. The derivation of RED change is simple: <br><br> $$ \Delta Red = \frac{RED_{NEW} - RED_{BASELINE}}{RED_{BASELINE}} $$ The code is just as simple: End of explanation new_red_and_ndwi_filtered = new_ndwi_filtered.where(red_change > 0.4) Explanation: Filtering for RED reflectance increase Filtering RED reflectance change is just like the piecewise transformation used for filtering NDWI change. <br> $$ red_filter(Dataset) = \left{ \begin{array}{lr} Dataset & : \Delta red(Dataset) > 0.4\ np.nan & : \Delta red(Dataset) \le 0.4 \end{array} \right.\ $$ <br> In Code: End of explanation display_at_time([new, (new, new_red_and_ndwi_filtered),new_red_and_ndwi_filtered], time = time_to_show, width = 3, w = 14) Explanation: How much further RED reflectance filtering gets you Continuing SLIP's process of elimination, Red increase filtering will further refine the area of interest to areas that, upon visual inspection appear to be light brown in color. End of explanation aster = dc.load(product="terra_aster_gdm",\ lat=lat,\ lon=lon,\ measurements=['dem'], group_by='solar_day') Explanation: <span id="slip_aster">ASTER Global Elevation Models &#9652;</span> Aster GDEM models provide elevation data for each pixel expressed in meters. For SLIP height is not enough to determine that a landslide can happen on a pixel. SLIP focuses on areas with high elevation Gradients/Slope (Expressed in non-radian degrees).The driving motivation for using slope based filtering is that landslides are less likely to happen in flat regions. Loading the elevation model End of explanation # Create a slope-mask. False: if pixel <15 degees; True: if pixel > 15 degrees; is_above_slope_threshold = create_slope_mask(aster, degree_threshold = 15,resolution = 30) Explanation: Calculating Angle of elevation A gradient is generated for each pixel using the four pixels adjacent to it, as well as a rise/run formuala. <br><br> $$ Gradient = \frac{Rise}{Run} $$ <br><br> Basic trigonometric identities can then be used to derive the angle: <br><br> $$ Angle of Elevation = \arctan(Gradient) $$ <br><br> When deriving the angle of elevation for a pixel, two gradients are available. One formed by the bottom pixel and top pixel, the other formed by the right and left pixel. For the purposes of identifying landslide causing slopes, the greatest of the two slopes will be used. The following image describes the process for angle-of-elevation calculation for a single pixel within a grid of DEM pixels <br><br> <br><br> The vagaries of implementation have been abstracted away by dc_demutils. It's used to derive a slope-mask. A slope-mask in this sense, is an array of true and false values based on whether or not that pixel meets a minimum angle of elevation requirement. Its use is detailed below. 
End of explanation slip_product = new_red_and_ndwi_filtered.where(is_above_slope_threshold) Explanation: Filtering out pixels that don't meet requirements for steepness <br> Filtering based on slope is a peicewise transformation using a derived slopemask: <br> $$ slope_filter(Dataset) = \left{ \begin{array}{lr} Dataset & : is_above_degree_threshold(Dataset, 15^{\circ}) = True\ np.nan & : is_above_degree_threshold(Dataset, 15^{\circ}) = False\ \end{array} \right.\ $$ <br> Its use in code: End of explanation display_at_time([new, (new, slip_product),slip_product], time = time_to_show, width = 3, w = 14) Explanation: Visualising our final SLIP product The final results of SLIP are small regions of points with a high likelihood of landslides having occurred on them. Furthermore there is no possibility that detections are made in flat areas(areas with less than a $15^{\circ}$ angle of elevation. End of explanation display_at_time([new, (new,new_ndwi_filtered),new_ndwi_filtered,new, (new, new_red_and_ndwi_filtered),new_red_and_ndwi_filtered, new, (new, slip_product),slip_product], time = time_to_show, width = 3, w = 14, h = 12) Explanation: <span id="slip_evo">Reviewing the Evolution of the SLIP Product &#9652;</span> The following visualizations will detail the evolution of the SLIP product from the previous steps. Order of operations: - NDWI change Filtered - RED increase Filtered - Slope Filtered Visualization End of explanation display_at_time([baseline, (new,slip_product)], time = time_to_show, width = 2, mode = 'blend', color = [210,7,7] , w = 14) Explanation: <span id="slip_compare_output_baseline">Visual Comparison of SLIP Output and Baseline Composited Scene &#9652;</span> In the name of validating results, it makes sense to compare the SLIP product generated for the selected date (time_to_show) to the composited scene representing what is considered to be "normal" for the last 5 acquisitions. End of explanation
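The angle-of-elevation step above relies on the create_slope_mask helper from dc_demutils. As a rough indication of the underlying rise/run and arctan arithmetic only (not that helper's actual code), the calculation can be written directly with NumPy; the 30 m resolution matches the value passed to create_slope_mask above, and the function name here is made up.
import numpy as np

def slope_degrees_sketch(dem, resolution=30.0):
    # Per-pixel rise along each axis, divided by the pixel size in metres (rise/run).
    dy, dx = np.gradient(np.asarray(dem, dtype="float64"), resolution)
    # Take the steeper of the two gradients, then convert that slope to degrees.
    steepest = np.maximum(np.abs(dy), np.abs(dx))
    return np.degrees(np.arctan(steepest))

# A boolean mask comparable to is_above_slope_threshold could then be obtained with
# slope_degrees_sketch(dem_values) >= 15, keeping only pixels steeper than 15 degrees.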
1,976
Given the following text description, write Python code to implement the functionality described below step by step Description: ArangoDB with Graphistry We explore Game of Thrones data in ArangoDB to show how Arango's graph support interops with Graphistry pretty quickly. This tutorial shares two sample transforms Step1: Connect Step2: Demo 1 Step3: Demo 2
Python Code: !pip install python-arango --user -q from arango import ArangoClient import pandas as pd import graphistry def paths_to_graph(paths, source='_from', destination='_to', node='_id'): nodes_df = pd.DataFrame() edges_df = pd.DataFrame() for graph in paths: nodes_df = pd.concat([ nodes_df, pd.DataFrame(graph['vertices']) ], ignore_index=True) edges_df = pd.concat([ edges_df, pd.DataFrame(graph['edges']) ], ignore_index=True) nodes_df = nodes_df.drop_duplicates([node]) edges_df = edges_df.drop_duplicates([node]) return graphistry.bind(source=source, destination=destination, node=node).nodes(nodes_df).edges(edges_df) def graph_to_graphistry(graph, source='_from', destination='_to', node='_id'): nodes_df = pd.DataFrame() for vc_name in graph.vertex_collections(): nodes_df = pd.concat([nodes_df, pd.DataFrame([x for x in graph.vertex_collection(vc_name)])], ignore_index=True) edges_df = pd.DataFrame() for edge_def in graph.edge_definitions(): edges_df = pd.concat([edges_df, pd.DataFrame([x for x in graph.edge_collection(edge_def['edge_collection'])])], ignore_index=True) return graphistry.bind(source=source, destination=destination, node=node).nodes(nodes_df).edges(edges_df) Explanation: ArangoDB with Graphistry We explore Game of Thrones data in ArangoDB to show how Arango's graph support interops with Graphistry pretty quickly. This tutorial shares two sample transforms: * Visualize the full graph * Visualize the result of a traversal query Each runs an AQL query via python-arango, automatically converts to pandas, and plots with graphistry. Setup End of explanation # To specify Graphistry account & server, use: # graphistry.register(api=3, username='...', password='...', protocol='https', server='hub.graphistry.com') # For more options, see https://github.com/graphistry/pygraphistry#configure client = ArangoClient(protocol='http', host='localhost', port=8529) db = client.db('GoT', username='root', password='1234') Explanation: Connect End of explanation paths = db.graph('theGraph').traverse( start_vertex='Characters/4814', direction='outbound', strategy='breadthfirst' )['paths'] g = paths_to_graph(paths) g.bind(point_title='name').plot() Explanation: Demo 1: Traversal viz Use python-arango's traverse() call to descendants of Ned Stark Convert result paths to pandas and Graphistry Plot, and instead of using raw Arango vertex IDs, use the first name End of explanation g = graph_to_graphistry( db.graph('theGraph') ) g.bind(point_title='name').plot() Explanation: Demo 2: Full graph Use python-arango on a graph to identify and download the involved vertex/edge collections Convert the results to pandas and Graphistry Plot, and instead of using raw Arango vertex IDs, use the first name End of explanation
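The demos above drive the traversal through python-arango's traverse() call and whole-collection iteration. Since the introduction also mentions running AQL queries, here is a hypothetical variant of Demo 1 that issues a raw AQL traversal instead; the traversal depth is an arbitrary choice, the start vertex and graph name simply reuse the Game of Thrones example above, and the result is fed through the same paths_to_graph() helper.
# Hypothetical AQL-based variant of Demo 1 (depth 1..2 chosen for illustration).
aql = """
FOR v, e, p IN 1..2 OUTBOUND 'Characters/4814' GRAPH 'theGraph'
    RETURN { vertices: p.vertices, edges: p.edges }
"""
cursor = db.aql.execute(aql)        # python-arango returns an iterable cursor
g = paths_to_graph(list(cursor))    # same paths -> pandas -> graphistry conversion as above
g.bind(point_title='name').plot()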
1,977
Given the following text description, write Python code to implement the functionality described below step by step Description: <p><font size="6"><b>CASE - Observation data</b></font></p> © 2021, Joris Van den Bossche and Stijn Van Hoey (&#106;&#111;&#114;&#105;&#115;&#118;&#97;&#110;&#100;&#101;&#110;&#98;&#111;&#115;&#115;&#99;&#104;&#101;&#64;&#103;&#109;&#97;&#105;&#108;&#46;&#99;&#111;&#109;, &#115;&#116;&#105;&#106;&#110;&#118;&#97;&#110;&#104;&#111;&#101;&#121;&#64;&#103;&#109;&#97;&#105;&#108;&#46;&#99;&#111;&#109;). Licensed under CC BY 4.0 Creative Commons Step1: Introduction Observation data of species (when and where is a given species observed) is typical in biodiversity studies. Large international initiatives support the collection of this data by volunteers, e.g. iNaturalist. Thanks to initiatives like GBIF, a lot of these data is also openly available. In this example, data originates from a study of a Chihuahuan desert ecosystem near Portal, Arizona. It is a long-term observation study in 24 different plots (each plot identified with a verbatimLocality identifier) and defines, apart from the species, location and date of the observations, also the sex and the weight (if available). The data consists of two data sets Step2: <div class="alert alert-success"> **EXERCISE** Create a new column with the name `eventDate` which contains datetime-aware information of each observation. To do so, combine the columns `day`, `month` and `year` into a datetime-aware data type by using the `pd.to_datetime` function from Pandas (check the help of that function to see how multiple columns with the year, month and day can be converted). <details><summary>Hints</summary> - `pd.to_datetime` can automatically combine the information from multiple columns. To select multiple columns, use a list of column names, e.g. `df[["my_col1", "my_col2"]]` - To create a new column, assign the result to new name, e.g. `df["my_new_col"] = df["my_col"] + 1` </details> Step3: <div class="alert alert-success"> **EXERCISE** For convenience when this dataset will be combined with other datasets, add a new column, `datasetName`, to the survey data set with `"Ecological Archives E090-118-D1."` as value for each of the individual records (static value for the entire data set) <details><summary>Hints</summary> - When a column does not exist, a new `df["a_new_column"]` can be created by assigning a value to it. - Pandas will automatically broadcast a single string value to each of the rows in the DataFrame. </details> Step4: Cleaning the verbatimSex column Step5: For the further analysis (and the species concerned in this specific data set), the sex information should be either male or female. We want to create a new column, named sex and convert the current values to the corresponding sex, taking into account the following mapping Step6: Tackle missing values (NaN) and duplicate values See pandas_07_missing_values.ipynb for an overview of functionality to work with missing values. <div class="alert alert-success"> **EXERCISE** How many records in the data set have no information about the `species`? Use the `isna()` method to find out. <details><summary>Hints</summary> - Do NOT use `survey_data_processed['species'] == np.nan`, but use the available method `isna()` to check if a value is NaN - The result of an (element-wise) condition returns a set of True/False values, corresponding to 1/0 values. The amount of True values is equal to the sum. 
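Before the worked cells below, here is a compact sketch of two of the cleaning steps spelled out in the description: building the datetime-aware eventDate column and mapping the raw verbatimSex codes. It condenses what the exercises develop one step at a time, using the file name and column names given above.
import numpy as np
import pandas as pd

# Read the observations with their occurrence identifier as index.
observations = pd.read_csv("data/observations.csv", index_col="occurrenceID")

# Combine the year/month/day columns into a single datetime-aware column.
observations["eventDate"] = pd.to_datetime(observations[["year", "month", "day"]])

# Map the verbatimSex codes onto male/female; the Z code becomes a missing value.
sex_mapping = {"M": "male", "F": "female", "R": "male", "P": "female", "Z": np.nan}
observations["sex"] = observations["verbatimSex"].replace(sex_mapping)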
</details> Step7: <div class="alert alert-success"> **EXERCISE** How many duplicate records are present in the dataset? Use the method `duplicated()` to check if a row is a duplicate. <details><summary>Hints</summary> - The result of an (element-wise) condition returns a set of True/False values, corresponding to 1/0 values. The amount of True values is equal to the sum. </details> Step8: <div class="alert alert-success"> **EXERCISE** - Select all duplicate data by filtering the `observations` data and assign the result to a new variable `duplicate_observations`. The `duplicated()` method provides a `keep` argument define which duplicates (if any) to mark. - Sort the `duplicate_observations` data on both the columns `eventDate` and `verbatimLocality` and show the first 9 records. <details><summary>Hints</summary> - Check the documentation of the `duplicated` method to find out which value the argument `keep` requires to select all duplicate data. - `sort_values()` can work with a single columns name as well as a list of names. </details> Step9: <div class="alert alert-success"> **EXERCISE** - Exclude the duplicate values (i.e. keep the first occurrence while removing the other ones) from the `observations` data set and save the result as `observations_unique`. Use the `drop duplicates()` method from Pandas. - How many observations are still left in the data set? <details><summary>Hints</summary> - `keep=First` is the default option for `drop_duplicates` - The number of rows in a DataFrame is equal to the `len`gth </details> Step10: <div class="alert alert-success"> **EXERCISE** Use the `dropna()` method to find out Step11: <div class="alert alert-success"> **EXERCISE** Filter the `observations` data and select only those records that do not have a `species_ID` while having information on the `sex`. Store the result as variable `not_identified`. <details><summary>Hints</summary> - To combine logical operators element-wise in Pandas, use the `&` operator. - Pandas provides both a `isna()` and a `notna()` method to check the existence of `NaN` values. </details> Step12: Adding the names of the observed species Step13: In the data set observations, the column specied_ID provides only an identifier instead of the full name. The name information is provided in a separate file species_names.csv Step14: The species names contains for each identifier in the ID column the scientific name of a species. The species_names data set contains in total 38 different scientific names Step15: For further analysis, let's combine both in a single DataFrame in the following exercise. <div class="alert alert-success"> **EXERCISE** Combine the DataFrames `observations_data` and `species_names` by adding the corresponding species name information (name, class, kingdom,..) to the individual observations using the `pd.merge()` function. Assign the output to a new variable `survey_data`. <details><summary>Hints</summary> - This is an example of a database JOIN operation. Pandas provides the `pd.merge` function to join two data sets using a common identifier. - Take into account that our key-column is different for `observations` and `species_names`, respectively `specied_ID` and `ID`. The `pd.merge()` function has `left_on` and `right_on` keywords to specify the name of the column in the left and right `DataFrame` to merge on. 
</details> Step16: Select subsets according to taxa of species Step17: <div class="alert alert-success"> **EXERCISE** - Select the observations for which the `taxa` is equal to 'Rabbit', 'Bird' or 'Reptile'. Assign the result to a variable `non_rodent_species`. Use the `isin` method for the selection. <details><summary>Hints</summary> - You do not have to combine three different conditions, but use the `isin` operator with a list of names. </details> Step18: <div class="alert alert-success"> **EXERCISE** Select the observations for which the `name` starts with the characters 'r' (make sure it does not matter if a capital character is used in the 'taxa' name). Call the resulting variable `r_species`. <details><summary>Hints</summary> - Remember the `.str.` construction to provide all kind of string functionalities? You can combine multiple of these after each other. - If the presence of capital letters should not matter, make everything lowercase first before comparing (`.lower()`) </details> Step19: <div class="alert alert-success"> **EXERCISE** Select the observations that are not Birds. Call the resulting variable <code>non_bird_species</code>. <details><summary>Hints</summary> - Logical operators like `==`, `!=`, `>`,... can still be used. </details> Step20: <div class="alert alert-success"> **EXERCISE** Select the __Bird__ (taxa is Bird) observations from 1985-01 till 1989-12 usint the `eventDate` column. Call the resulting variable `birds_85_89`. <details><summary>Hints</summary> - No hints, you can do this! (with the help of some `<=` and `&`, and don't forget the put brackets around each comparison that you combine) </details> Step21: <div class="alert alert-success"> **EXERCISE** - Drop the observations for which no `weight` information is available. - On the filtered data, compare the median weight for each of the species (use the `name` column) - Sort the output from high to low median weight (i.e. descending) __Note__ You can do this all in a single line statement, but don't have to do it as such! <details><summary>Hints</summary> - You will need `dropna`, `groupby`, `median` and `sort_values`. </details> Step22: Species abundance <div class="alert alert-success"> **EXERCISE** Which 8 species (use the `name` column to identify the different species) have been observed most over the entire data set? <details><summary>Hints</summary> - Pandas provide a function to combine sorting and showing the first n records, see [here](https Step23: <div class="alert alert-success"> **EXERCISE** - What is the number of different species (`name`) in each of the `verbatimLocality` plots? Use the `nunique` method. Assign the output to a new variable `n_species_per_plot`. - Define a Matplotlib `Figure` (`fig`) and `Axes` (`ax`) to prepare a plot. Make an horizontal bar chart using Pandas `plot` function linked to the just created Matplotlib `ax`. Each bar represents the `species per plot/verbatimLocality`. Change the y-label to 'Plot number'. <details><summary>Hints</summary> - _...in each of the..._ should provide a hint to use `groupby` for this exercise. The `nunique` is the aggregation function for each of the groups. - `fig, ax = plt.subplots()` prepares a Matplotlib Figure and Axes. </details> Step24: <div class="alert alert-success"> **EXERCISE** - What is the number of plots (`verbatimLocality`) each of the species (`name`) have been observed in? Assign the output to a new variable `n_plots_per_species`. Sort the counts from low to high. 
- Make an horizontal bar chart using Pandas `plot` function to show the number of plots each of the species was found (using the `n_plots_per_species` variable). <details><summary>Hints</summary> - Use the previous exercise to solve this one. </details> Step25: <div class="alert alert-success"> **EXERCISE** - Starting from the `survey_data`, calculate the amount of males and females present in each of the plots (`verbatimLocality`). The result should return the counts for each of the combinations of `sex` and `verbatimLocality`. Assign to a new variable `n_plot_sex` and ensure the counts are in a column named "count". - Use a `pivot_table` to convert the `n_plot_sex` DataFrame to a new DataFrame with the `verbatimLocality` as index and `male`/`female` as column names. Assign to a new variable `pivoted`. <details><summary>Hints</summary> - _...for each of the combinations..._ `groupby` can also be used with multiple columns at the same time. - If a `groupby` operation gives a Series as result, you can give that Series a name with the `.rename(..)` method. - `reset_index()` is useful function to convert multiple indices into columns again. </details> Step26: As such, we can use the variable pivoted to plot the result Step27: <div class="alert alert-success"> **EXERCISE** Recreate the previous plot with the `catplot` function from the Seaborn library directly starting from <code>survey_data</code>. <details><summary>Hints</summary> - Check the `kind` argument of the `catplot` function to find out how to use counts to define the bars instead of a `y` value. - To link a column to different colors, use the `hue` argument - Using `height` and `aspect`, the figure size can be optimized. </details> Step28: <div class="alert alert-success"> **EXERCISE** - Create a table, called `heatmap_prep`, based on the `survey_data` DataFrame with the row index the individual years, in the column the months of the year (1-> 12) and as values of the table, the counts for each of these year/month combinations. - Using the seaborn <a href="http Step29: Remark that we started from a tidy data format (also called long format) and converted to short format with in the row index the years, in the column the months and the counts for each of these year/month combinations as values. <div class="alert alert-success"> **EXERCISE** - Make a summary table with the number of records of each of the species in each of the plots (called `verbatimLocality`)? Each of the species `name`s is a row index and each of the `verbatimLocality` plots is a column name. - Use the Seaborn <a href="http Step30: <div class="alert alert-success"> **EXERCISE** Make a plot visualizing the evolution of the number of observations for each of the individual __years__ (i.e. annual counts) using the `resample` method. <details><summary>Hints</summary> - You want to `resample` the data using the `eventDate` column to create annual counts. If the index is not a datetime-index, you can use the `on=` keyword to specify which datetime column to use. - `resample` needs an aggregation function on how to combine the values within a single 'group' (in this case data within a year). In this example, we want to know the `size` of each group, i.e. the number of records within each year. 
</details> Step31: (OPTIONAL SECTION) Evolution of species during monitoring period In this section, all plots can be made with the embedded Pandas plot function, unless specificly asked <div class="alert alert-success"> **EXERCISE** Plot using Pandas `plot` function the number of records for `Dipodomys merriami` for each month of the year (January (1) -> December (12)), aggregated over all years. <details><summary>Hints</summary> - _...for each month of..._ requires `groupby`. - `resample` is not useful here, as we do not want to change the time-interval, but look at month of the year (over all years) </details> Step32: <div class="alert alert-success"> **EXERCISE** Plot, for the species 'Dipodomys merriami', 'Dipodomys ordii', 'Reithrodontomys megalotis' and 'Chaetodipus baileyi', the monthly number of records as a function of time for the whole monitoring period. Plot each of the individual species in a separate subplot and provide them all with the same y-axis scale <details><summary>Hints</summary> - `isin` is useful to select from within a list of elements. - `groupby` AND `resample` need to be combined. We do want to change the time-interval to represent data as a function of time (`resample`) and we want to do this _for each name/species_ (`groupby`). The order matters! - `unstack` is a Pandas function a bit similar to `pivot`. Check the [unstack documentation](https Step33: <div class="alert alert-success"> **EXERCISE** Recreate the same plot as in the previous exercise using Seaborn `relplot` functon with the `month_evolution` variable. <details><summary>Hints</summary> - We want to have the `counts` as a function of `eventDate`, so link these columns to y and x respectively. - To create subplots in Seaborn, the usage of _facetting_ (splitting data sets to multiple facets) is used by linking a column name to the `row`/`col` parameter. - Using `height` and `aspect`, the figure size can be optimized. </details> Step34: <div class="alert alert-success"> **EXERCISE** Plot the annual amount of occurrences for each of the 'taxa' as a function of time using Seaborn. Plot each taxa in a separate subplot and do not share the y-axis among the facets. <details><summary>Hints</summary> - Combine `resample` and `groupby`! - Check out the previous exercise for the plot function. - Pass the `sharey=False` to the `facet_kws` argument as a dictionary. </details> Step35: <div class="alert alert-success"> **EXERCISE** The observations where taken by volunteers. You wonder on which day of the week the most observations where done. Calculate for each day of the week (`weekday`) the number of observations and make a barplot. <details><summary>Hints</summary> - Did you know the Python standard Library has a module `calendar` which contains names of week days, month names,...? </details>
Python Code: %matplotlib inline import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns plt.style.use('seaborn-whitegrid') Explanation: <p><font size="6"><b>CASE - Observation data</b></font></p> © 2021, Joris Van den Bossche and Stijn Van Hoey (&#106;&#111;&#114;&#105;&#115;&#118;&#97;&#110;&#100;&#101;&#110;&#98;&#111;&#115;&#115;&#99;&#104;&#101;&#64;&#103;&#109;&#97;&#105;&#108;&#46;&#99;&#111;&#109;, &#115;&#116;&#105;&#106;&#110;&#118;&#97;&#110;&#104;&#111;&#101;&#121;&#64;&#103;&#109;&#97;&#105;&#108;&#46;&#99;&#111;&#109;). Licensed under CC BY 4.0 Creative Commons End of explanation observations = pd.read_csv("data/observations.csv", index_col="occurrenceID") observations.head() observations.info() Explanation: Introduction Observation data of species (when and where is a given species observed) is typical in biodiversity studies. Large international initiatives support the collection of this data by volunteers, e.g. iNaturalist. Thanks to initiatives like GBIF, a lot of these data is also openly available. In this example, data originates from a study of a Chihuahuan desert ecosystem near Portal, Arizona. It is a long-term observation study in 24 different plots (each plot identified with a verbatimLocality identifier) and defines, apart from the species, location and date of the observations, also the sex and the weight (if available). The data consists of two data sets: observations.csv the individual observations. species_names.csv the overview list of the species names. Let's start with the observations data! Reading in the observations data <div class="alert alert-success"> **EXERCISE** - Read in the `data/observations.csv` file with Pandas and assign the resulting DataFrame to a variable with the name `observations`. - Make sure the 'occurrenceID' column is used as the index of the resulting DataFrame while reading in the data set. - Inspect the first five rows of the DataFrame and the data types of each of the data columns. <details><summary>Hints</summary> - All read functions in Pandas start with `pd.read_...`. - Setting a column as index can be done with an argument of the `read_csv` function To check the documentation of a function, use the keystroke combination of SHIFT + TAB when the cursor is on the function. - Remember `.head()` and `.info()`? </details> End of explanation observations["eventDate"] = pd.to_datetime(observations[["year", "month", "day"]]) observations Explanation: <div class="alert alert-success"> **EXERCISE** Create a new column with the name `eventDate` which contains datetime-aware information of each observation. To do so, combine the columns `day`, `month` and `year` into a datetime-aware data type by using the `pd.to_datetime` function from Pandas (check the help of that function to see how multiple columns with the year, month and day can be converted). <details><summary>Hints</summary> - `pd.to_datetime` can automatically combine the information from multiple columns. To select multiple columns, use a list of column names, e.g. `df[["my_col1", "my_col2"]]` - To create a new column, assign the result to new name, e.g. `df["my_new_col"] = df["my_col"] + 1` </details> End of explanation observations["datasetName"] = "Ecological Archives E090-118-D1." 
Explanation: <div class="alert alert-success"> **EXERCISE** For convenience when this dataset will be combined with other datasets, add a new column, `datasetName`, to the survey data set with `"Ecological Archives E090-118-D1."` as value for each of the individual records (static value for the entire data set) <details><summary>Hints</summary> - When a column does not exist, a new `df["a_new_column"]` can be created by assigning a value to it. - Pandas will automatically broadcast a single string value to each of the rows in the DataFrame. </details> End of explanation observations["verbatimSex"].unique() Explanation: Cleaning the verbatimSex column End of explanation sex_dict = {"M": "male", "F": "female", "R": "male", "P": "female", "Z": np.nan} observations['sex'] = observations['verbatimSex'].replace(sex_dict) observations["sex"].unique() Explanation: For the further analysis (and the species concerned in this specific data set), the sex information should be either male or female. We want to create a new column, named sex and convert the current values to the corresponding sex, taking into account the following mapping: * M -> male * F -> female * R -> male * P -> female * Z -> nan <div class="alert alert-success"> **EXERCISE** - Express the mapping of the values (e.g. `M` -> `male`) into a Python dictionary object with the variable name `sex_dict`. `Z` values correspond to _Not a Number_, which can be defined as `np.nan`. - Use the `sex_dict` dictionary to replace the values in the `verbatimSex` column to the new values and save the mapped values in a new column 'sex' of the DataFrame. - Check the conversion by printing the unique values within the new column `sex`. <details><summary>Hints</summary> - A dictionary is a Python standard library data structure, see https://docs.python.org/3/tutorial/datastructures.html#dictionaries - no Pandas magic involved when you need a key/value mapping. - When you need to replace values, look for the Pandas method `replace`. </details> End of explanation observations['species_ID'].isna().sum() Explanation: Tackle missing values (NaN) and duplicate values See pandas_07_missing_values.ipynb for an overview of functionality to work with missing values. <div class="alert alert-success"> **EXERCISE** How many records in the data set have no information about the `species`? Use the `isna()` method to find out. <details><summary>Hints</summary> - Do NOT use `survey_data_processed['species'] == np.nan`, but use the available method `isna()` to check if a value is NaN - The result of an (element-wise) condition returns a set of True/False values, corresponding to 1/0 values. The amount of True values is equal to the sum. </details> End of explanation observations.duplicated().sum() Explanation: <div class="alert alert-success"> **EXERCISE** How many duplicate records are present in the dataset? Use the method `duplicated()` to check if a row is a duplicate. <details><summary>Hints</summary> - The result of an (element-wise) condition returns a set of True/False values, corresponding to 1/0 values. The amount of True values is equal to the sum. </details> End of explanation duplicate_observations = observations[observations.duplicated(keep=False)] duplicate_observations.sort_values(["eventDate", "verbatimLocality"]).head(9) Explanation: <div class="alert alert-success"> **EXERCISE** - Select all duplicate data by filtering the `observations` data and assign the result to a new variable `duplicate_observations`. 
The `duplicated()` method provides a `keep` argument define which duplicates (if any) to mark. - Sort the `duplicate_observations` data on both the columns `eventDate` and `verbatimLocality` and show the first 9 records. <details><summary>Hints</summary> - Check the documentation of the `duplicated` method to find out which value the argument `keep` requires to select all duplicate data. - `sort_values()` can work with a single columns name as well as a list of names. </details> End of explanation observations_unique = observations.drop_duplicates() len(observations_unique) Explanation: <div class="alert alert-success"> **EXERCISE** - Exclude the duplicate values (i.e. keep the first occurrence while removing the other ones) from the `observations` data set and save the result as `observations_unique`. Use the `drop duplicates()` method from Pandas. - How many observations are still left in the data set? <details><summary>Hints</summary> - `keep=First` is the default option for `drop_duplicates` - The number of rows in a DataFrame is equal to the `len`gth </details> End of explanation len(observations_unique.dropna()) len(observations_unique.dropna(subset=['species_ID'])) observations_with_ID = observations_unique.dropna(subset=['species_ID']) observations_with_ID.head() Explanation: <div class="alert alert-success"> **EXERCISE** Use the `dropna()` method to find out: - For how many observations (rows) we have all the information available (i.e. no NaN values in any of the columns)? - For how many observations (rows) we do have the `species_ID` data available ? - Remove the data without `species_ID` data from the observations and assign the result to a new variable `observations_with_ID` <details><summary>Hints</summary> - `dropna` by default removes by default all rows for which _any_ of the columns contains a `NaN` value. - To specify which specific columns to check, use the `subset` argument </details> End of explanation mask = observations['species_ID'].isna() & observations['sex'].notna() not_identified = observations[mask] not_identified.head() Explanation: <div class="alert alert-success"> **EXERCISE** Filter the `observations` data and select only those records that do not have a `species_ID` while having information on the `sex`. Store the result as variable `not_identified`. <details><summary>Hints</summary> - To combine logical operators element-wise in Pandas, use the `&` operator. - Pandas provides both a `isna()` and a `notna()` method to check the existence of `NaN` values. </details> End of explanation # Recap from previous exercises - remove duplicates and observations without species information observations_unique_ = observations.drop_duplicates() observations_data = observations_unique_.dropna(subset=['species_ID']) Explanation: Adding the names of the observed species End of explanation species_names = pd.read_csv("data/species_names.csv") species_names.head() Explanation: In the data set observations, the column specied_ID provides only an identifier instead of the full name. The name information is provided in a separate file species_names.csv: End of explanation species_names.shape Explanation: The species names contains for each identifier in the ID column the scientific name of a species. 
The species_names data set contains in total 38 different scientific names: End of explanation survey_data = pd.merge(observations_data, species_names, how="left", left_on="species_ID", right_on="ID") survey_data Explanation: For further analysis, let's combine both in a single DataFrame in the following exercise. <div class="alert alert-success"> **EXERCISE** Combine the DataFrames `observations_data` and `species_names` by adding the corresponding species name information (name, class, kingdom,..) to the individual observations using the `pd.merge()` function. Assign the output to a new variable `survey_data`. <details><summary>Hints</summary> - This is an example of a database JOIN operation. Pandas provides the `pd.merge` function to join two data sets using a common identifier. - Take into account that our key-column is different for `observations` and `species_names`, respectively `specied_ID` and `ID`. The `pd.merge()` function has `left_on` and `right_on` keywords to specify the name of the column in the left and right `DataFrame` to merge on. </details> End of explanation survey_data['taxa'].value_counts() #survey_data.groupby('taxa').size() Explanation: Select subsets according to taxa of species End of explanation non_rodent_species = survey_data[survey_data['taxa'].isin(['Rabbit', 'Bird', 'Reptile'])] len(non_rodent_species) Explanation: <div class="alert alert-success"> **EXERCISE** - Select the observations for which the `taxa` is equal to 'Rabbit', 'Bird' or 'Reptile'. Assign the result to a variable `non_rodent_species`. Use the `isin` method for the selection. <details><summary>Hints</summary> - You do not have to combine three different conditions, but use the `isin` operator with a list of names. </details> End of explanation r_species = survey_data[survey_data['name'].str.lower().str.startswith('r')] len(r_species) r_species["name"].value_counts() Explanation: <div class="alert alert-success"> **EXERCISE** Select the observations for which the `name` starts with the characters 'r' (make sure it does not matter if a capital character is used in the 'taxa' name). Call the resulting variable `r_species`. <details><summary>Hints</summary> - Remember the `.str.` construction to provide all kind of string functionalities? You can combine multiple of these after each other. - If the presence of capital letters should not matter, make everything lowercase first before comparing (`.lower()`) </details> End of explanation non_bird_species = survey_data[survey_data['taxa'] != 'Bird'] len(non_bird_species) Explanation: <div class="alert alert-success"> **EXERCISE** Select the observations that are not Birds. Call the resulting variable <code>non_bird_species</code>. <details><summary>Hints</summary> - Logical operators like `==`, `!=`, `>`,... can still be used. </details> End of explanation birds_85_89 = survey_data[(survey_data["eventDate"] >= "1985-01-01") & (survey_data["eventDate"] <= "1989-12-31 23:59") & (survey_data['taxa'] == 'Bird')] birds_85_89.head() # alternative solution birds_85_89 = survey_data[(survey_data["eventDate"].dt.year >= 1985) & (survey_data["eventDate"].dt.year <= 1989) & (survey_data['taxa'] == 'Bird')] birds_85_89.head() Explanation: <div class="alert alert-success"> **EXERCISE** Select the __Bird__ (taxa is Bird) observations from 1985-01 till 1989-12 usint the `eventDate` column. Call the resulting variable `birds_85_89`. <details><summary>Hints</summary> - No hints, you can do this! 
(with the help of some `<=` and `&`, and don't forget the put brackets around each comparison that you combine) </details> End of explanation # Multiple lines obs_with_weight = survey_data.dropna(subset=["weight"]) median_weight = obs_with_weight.groupby(['name'])["weight"].median() median_weight.sort_values(ascending=False) # Single line statement survey_data.dropna(subset=["weight"]).groupby(['name'])["weight"].median().sort_values(ascending=False) Explanation: <div class="alert alert-success"> **EXERCISE** - Drop the observations for which no `weight` information is available. - On the filtered data, compare the median weight for each of the species (use the `name` column) - Sort the output from high to low median weight (i.e. descending) __Note__ You can do this all in a single line statement, but don't have to do it as such! <details><summary>Hints</summary> - You will need `dropna`, `groupby`, `median` and `sort_values`. </details> End of explanation survey_data.groupby("name").size().nlargest(8) survey_data['name'].value_counts()[:8] Explanation: Species abundance <div class="alert alert-success"> **EXERCISE** Which 8 species (use the `name` column to identify the different species) have been observed most over the entire data set? <details><summary>Hints</summary> - Pandas provide a function to combine sorting and showing the first n records, see [here](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.nlargest.html)... </details> End of explanation n_species_per_plot = survey_data.groupby(["verbatimLocality"])["name"].nunique() fig, ax = plt.subplots(figsize=(6, 6)) n_species_per_plot.plot(kind="barh", ax=ax) ax.set_ylabel("Plot number"); # Alternative option to calculate the species per plot: # inspired on the pivot table we already had: # species_per_plot = survey_data.reset_index().pivot_table( # index="name", columns="verbatimLocality", values="ID", aggfunc='count') # n_species_per_plot = species_per_plot.count() Explanation: <div class="alert alert-success"> **EXERCISE** - What is the number of different species (`name`) in each of the `verbatimLocality` plots? Use the `nunique` method. Assign the output to a new variable `n_species_per_plot`. - Define a Matplotlib `Figure` (`fig`) and `Axes` (`ax`) to prepare a plot. Make an horizontal bar chart using Pandas `plot` function linked to the just created Matplotlib `ax`. Each bar represents the `species per plot/verbatimLocality`. Change the y-label to 'Plot number'. <details><summary>Hints</summary> - _...in each of the..._ should provide a hint to use `groupby` for this exercise. The `nunique` is the aggregation function for each of the groups. - `fig, ax = plt.subplots()` prepares a Matplotlib Figure and Axes. </details> End of explanation n_plots_per_species = survey_data.groupby(["name"])["verbatimLocality"].nunique().sort_values() fig, ax = plt.subplots(figsize=(10, 8)) n_plots_per_species.plot(kind="barh", ax=ax) ax.set_xlabel("Number of plots"); ax.set_ylabel(""); Explanation: <div class="alert alert-success"> **EXERCISE** - What is the number of plots (`verbatimLocality`) each of the species (`name`) have been observed in? Assign the output to a new variable `n_plots_per_species`. Sort the counts from low to high. - Make an horizontal bar chart using Pandas `plot` function to show the number of plots each of the species was found (using the `n_plots_per_species` variable). <details><summary>Hints</summary> - Use the previous exercise to solve this one. 
</details> End of explanation n_plot_sex = survey_data.groupby(["sex", "verbatimLocality"]).size().rename("count").reset_index() n_plot_sex.head() pivoted = n_plot_sex.pivot_table(columns="sex", index="verbatimLocality", values="count") pivoted.head() Explanation: <div class="alert alert-success"> **EXERCISE** - Starting from the `survey_data`, calculate the amount of males and females present in each of the plots (`verbatimLocality`). The result should return the counts for each of the combinations of `sex` and `verbatimLocality`. Assign to a new variable `n_plot_sex` and ensure the counts are in a column named "count". - Use a `pivot_table` to convert the `n_plot_sex` DataFrame to a new DataFrame with the `verbatimLocality` as index and `male`/`female` as column names. Assign to a new variable `pivoted`. <details><summary>Hints</summary> - _...for each of the combinations..._ `groupby` can also be used with multiple columns at the same time. - If a `groupby` operation gives a Series as result, you can give that Series a name with the `.rename(..)` method. - `reset_index()` is useful function to convert multiple indices into columns again. </details> End of explanation pivoted.plot(kind='bar', figsize=(12, 6), rot=0) Explanation: As such, we can use the variable pivoted to plot the result: End of explanation sns.catplot(data=survey_data, x="verbatimLocality", hue="sex", kind="count", height=3, aspect=3) Explanation: <div class="alert alert-success"> **EXERCISE** Recreate the previous plot with the `catplot` function from the Seaborn library directly starting from <code>survey_data</code>. <details><summary>Hints</summary> - Check the `kind` argument of the `catplot` function to find out how to use counts to define the bars instead of a `y` value. - To link a column to different colors, use the `hue` argument - Using `height` and `aspect`, the figure size can be optimized. </details> End of explanation heatmap_prep = survey_data.pivot_table(index='year', columns='month', values="ID", aggfunc='count') fig, ax = plt.subplots(figsize=(10, 8)) ax = sns.heatmap(heatmap_prep, cmap='Reds') Explanation: <div class="alert alert-success"> **EXERCISE** - Create a table, called `heatmap_prep`, based on the `survey_data` DataFrame with the row index the individual years, in the column the months of the year (1-> 12) and as values of the table, the counts for each of these year/month combinations. - Using the seaborn <a href="http://seaborn.pydata.org/generated/seaborn.heatmap.html">documentation</a>, make a heatmap starting from the `heatmap_prep` variable. <details><summary>Hints</summary> - A `pivot_table` has an `aggfunc` parameter by which the aggregation of the cells combined into the year/month element are combined (e.g. mean, max, count,...). - You can use the `ID` to count the number of observations. - seaborn has an `heatmap` function which requires a short-form DataFrame, comparable to giving each element in a table a color value. </details> End of explanation species_per_plot = survey_data.reset_index().pivot_table(index="name", columns="verbatimLocality", values="ID", aggfunc='count') species_per_plot.head() fig, ax = plt.subplots(figsize=(8,8)) sns.heatmap(species_per_plot, ax=ax, cmap='Greens') Explanation: Remark that we started from a tidy data format (also called long format) and converted to short format with in the row index the years, in the column the months and the counts for each of these year/month combinations as values. 
<div class="alert alert-success"> **EXERCISE** - Make a summary table with the number of records of each of the species in each of the plots (called `verbatimLocality`)? Each of the species `name`s is a row index and each of the `verbatimLocality` plots is a column name. - Use the Seaborn <a href="http://seaborn.pydata.org/generated/seaborn.heatmap.html">documentation</a> to make a heatmap. <details><summary>Hints</summary> - Make sure to pass the correct columns to respectively the `index`, `columns`, `values` and `aggfunc` parameters of the `pivot_table` function. You can use the `ID` to count the number of observations for each name/locality combination (when counting rows, the exact column doesn't matter). </details> End of explanation survey_data.resample('A', on='eventDate').size().plot() Explanation: <div class="alert alert-success"> **EXERCISE** Make a plot visualizing the evolution of the number of observations for each of the individual __years__ (i.e. annual counts) using the `resample` method. <details><summary>Hints</summary> - You want to `resample` the data using the `eventDate` column to create annual counts. If the index is not a datetime-index, you can use the `on=` keyword to specify which datetime column to use. - `resample` needs an aggregation function on how to combine the values within a single 'group' (in this case data within a year). In this example, we want to know the `size` of each group, i.e. the number of records within each year. </details> End of explanation merriami = survey_data[survey_data["name"] == "Dipodomys merriami"] fig, ax = plt.subplots() merriami.groupby(merriami['eventDate'].dt.month).size().plot(kind="barh", ax=ax) ax.set_xlabel("number of occurrences") ax.set_ylabel("Month of the year") Explanation: (OPTIONAL SECTION) Evolution of species during monitoring period In this section, all plots can be made with the embedded Pandas plot function, unless specificly asked <div class="alert alert-success"> **EXERCISE** Plot using Pandas `plot` function the number of records for `Dipodomys merriami` for each month of the year (January (1) -> December (12)), aggregated over all years. <details><summary>Hints</summary> - _...for each month of..._ requires `groupby`. - `resample` is not useful here, as we do not want to change the time-interval, but look at month of the year (over all years) </details> End of explanation subsetspecies = survey_data[survey_data["name"].isin(['Dipodomys merriami', 'Dipodomys ordii', 'Reithrodontomys megalotis', 'Chaetodipus baileyi'])] month_evolution = subsetspecies.groupby("name").resample('M', on='eventDate').size() species_evolution = month_evolution.unstack(level=0) axs = species_evolution.plot(subplots=True, figsize=(14, 8), sharey=True) Explanation: <div class="alert alert-success"> **EXERCISE** Plot, for the species 'Dipodomys merriami', 'Dipodomys ordii', 'Reithrodontomys megalotis' and 'Chaetodipus baileyi', the monthly number of records as a function of time for the whole monitoring period. Plot each of the individual species in a separate subplot and provide them all with the same y-axis scale <details><summary>Hints</summary> - `isin` is useful to select from within a list of elements. - `groupby` AND `resample` need to be combined. We do want to change the time-interval to represent data as a function of time (`resample`) and we want to do this _for each name/species_ (`groupby`). The order matters! - `unstack` is a Pandas function a bit similar to `pivot`. 
Check the [unstack documentation](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.unstack.html) as it might be helpful for this exercise. </details> End of explanation # Given as solution.. subsetspecies = survey_data[survey_data["name"].isin(['Dipodomys merriami', 'Dipodomys ordii', 'Reithrodontomys megalotis', 'Chaetodipus baileyi'])] month_evolution = subsetspecies.groupby("name").resample('M', on='eventDate').size().rename("counts") month_evolution = month_evolution.reset_index() sns.relplot(data=month_evolution, x='eventDate', y="counts", row="name", kind="line", hue="name", height=2, aspect=5) Explanation: <div class="alert alert-success"> **EXERCISE** Recreate the same plot as in the previous exercise using the Seaborn `relplot` function with the `month_evolution` variable. <details><summary>Hints</summary> - We want to have the `counts` as a function of `eventDate`, so link these columns to y and x respectively. - To create subplots in Seaborn, the usage of _facetting_ (splitting data sets to multiple facets) is used by linking a column name to the `row`/`col` parameter. - Using `height` and `aspect`, the figure size can be optimized. </details> End of explanation year_evolution = survey_data.groupby("taxa").resample('A', on='eventDate').size() year_evolution.name = "counts" year_evolution = year_evolution.reset_index() year_evolution.head() sns.relplot(data=year_evolution, x='eventDate', y="counts", col="taxa", col_wrap=2, kind="line", height=2, aspect=5, facet_kws={"sharey": False}) Explanation: <div class="alert alert-success"> **EXERCISE** Plot the annual number of occurrences for each of the 'taxa' as a function of time using Seaborn. Plot each taxa in a separate subplot and do not share the y-axis among the facets. <details><summary>Hints</summary> - Combine `resample` and `groupby`! - Check out the previous exercise for the plot function. - Pass `sharey=False` to the `facet_kws` argument as a dictionary. </details> End of explanation fig, ax = plt.subplots() survey_data.groupby(survey_data["eventDate"].dt.weekday).size().plot(kind='barh', color='#66b266', ax=ax) import calendar xticks = ax.set_yticklabels(calendar.day_name) Explanation: <div class="alert alert-success"> **EXERCISE** The observations were taken by volunteers. You wonder on which day of the week the most observations were done. Calculate for each day of the week (`weekday`) the number of observations and make a barplot. <details><summary>Hints</summary> - Did you know the Python standard library has a module `calendar` which contains the names of week days, month names, ...? </details> End of explanation
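A small follow-up to the weekday exercise above: the same barplot can also be labelled without `set_yticklabels`, by counting on `dt.day_name()` and forcing the Monday-to-Sunday order with an ordered categorical. This is only an illustrative sketch and assumes the `survey_data` DataFrame (with its datetime `eventDate` column) from the exercises above is available.

```python
import calendar
import pandas as pd
import matplotlib.pyplot as plt

# Count observations per weekday name and impose the calendar order explicitly.
day_order = list(calendar.day_name)                       # ['Monday', ..., 'Sunday']
weekday_counts = (survey_data["eventDate"].dt.day_name()
                  .astype(pd.CategoricalDtype(categories=day_order, ordered=True))
                  .value_counts()
                  .sort_index())

fig, ax = plt.subplots()
weekday_counts.plot(kind="barh", color="#66b266", ax=ax)
ax.set_xlabel("number of observations")
```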
1,978
Given the following text description, write Python code to implement the functionality described below step by step Description: <!--BOOK_INFORMATION--> <img align="left" style="padding-right Step1: Or if you try an operation that's not defined Step2: Or you might be trying to compute a mathematically ill-defined result Step3: Or maybe you're trying to access a sequence element that doesn't exist Step4: Note that in each case, Python is kind enough to not simply indicate that an error happened, but to spit out a meaningful exception that includes information about what exactly went wrong, along with the exact line of code where the error happened. Having access to meaningful errors like this is immensely useful when trying to trace the root of problems in your code. Catching Exceptions Step5: Note that the second block here did not get executed Step6: Here we see that when the error was raised in the try statement (in this case, a ZeroDivisionError), the error was caught, and the except statement was executed. One way this is often used is to check user input within a function or another piece of code. For example, we might wish to have a function that catches zero-division and returns some other value, perhaps a suitably large number like $10^{100}$ Step7: There is a subtle problem with this code, though Step8: Dividing an integer and a string raises a TypeError, which our over-zealous code caught and assumed was a ZeroDivisionError! For this reason, it's nearly always a better idea to catch exceptions explicitly Step9: We're now catching zero-division errors only, and letting all other errors pass through un-modified. Raising Exceptions Step10: As an example of where this might be useful, let's return to our fibonacci function that we defined previously Step11: One potential problem here is that the input value could be negative. This will not currently cause any error in our function, but we might want to let the user know that a negative N is not supported. Errors stemming from invalid parameter values, by convention, lead to a ValueError being raised Step12: Now the user knows exactly why the input is invalid, and could even use a try...except block to handle it! Step13: Diving Deeper into Exceptions Briefly, I want to mention here some other concepts you might run into. I'll not go into detail on these concepts and how and why to use them, but instead simply show you the syntax so you can explore more on your own. Accessing the error message Sometimes in a try...except statement, you would like to be able to work with the error message itself. This can be done with the as keyword Step14: With this pattern, you can further customize the exception handling of your function. Defining custom exceptions In addition to built-in exceptions, it is possible to define custom exceptions through class inheritance. For instance, if you want a special kind of ValueError, you can do this Step15: This would allow you to use a try...except block that only catches this type of error Step16: You might find this useful as you develop more customized code. try...except...else...finally In addition to try and except, you can use the else and finally keywords to further tune your code's handling of exceptions. The basic structure is this
Python Code: print(Q) Explanation: <!--BOOK_INFORMATION--> <img align="left" style="padding-right:10px;" src="fig/cover-small.jpg"> This notebook contains an excerpt from the Whirlwind Tour of Python by Jake VanderPlas; the content is available on GitHub. The text and code are released under the CC0 license; see also the companion project, the Python Data Science Handbook. <!--NAVIGATION--> < Defining and Using Functions | Contents | Iterators > Errors and Exceptions No matter your skill as a programmer, you will eventually make a coding mistake. Such mistakes come in three basic flavors: Syntax errors: Errors where the code is not valid Python (generally easy to fix) Runtime errors: Errors where syntactically valid code fails to execute, perhaps due to invalid user input (sometimes easy to fix) Semantic errors: Errors in logic: code executes without a problem, but the result is not what you expect (often very difficult to track-down and fix) Here we're going to focus on how to deal cleanly with runtime errors. As we'll see, Python handles runtime errors via its exception handling framework. Runtime Errors If you've done any coding in Python, you've likely come across runtime errors. They can happen in a lot of ways. For example, if you try to reference an undefined variable: End of explanation 1 + 'abc' Explanation: Or if you try an operation that's not defined: End of explanation 2 / 0 Explanation: Or you might be trying to compute a mathematically ill-defined result: End of explanation L = [1, 2, 3] L[1000] Explanation: Or maybe you're trying to access a sequence element that doesn't exist: End of explanation try: print("this gets executed first") except: print("this gets executed only if there is an error") Explanation: Note that in each case, Python is kind enough to not simply indicate that an error happened, but to spit out a meaningful exception that includes information about what exactly went wrong, along with the exact line of code where the error happened. Having access to meaningful errors like this is immensely useful when trying to trace the root of problems in your code. Catching Exceptions: try and except The main tool Python gives you for handling runtime exceptions is the try...except clause. Its basic structure is this: End of explanation try: print("let's try something:") x = 1 / 0 # ZeroDivisionError except: print("something bad happened!") Explanation: Note that the second block here did not get executed: this is because the first block did not return an error. Let's put a problematic statement in the try block and see what happens: End of explanation def safe_divide(a, b): try: return a / b except: return 1E100 safe_divide(1, 2) safe_divide(2, 0) Explanation: Here we see that when the error was raised in the try statement (in this case, a ZeroDivisionError), the error was caught, and the except statement was executed. One way this is often used is to check user input within a function or another piece of code. For example, we might wish to have a function that catches zero-division and returns some other value, perhaps a suitably large number like $10^{100}$: End of explanation safe_divide (1, '2') Explanation: There is a subtle problem with this code, though: what happens when another type of exception comes up? 
For example, this is probably not what we intended: End of explanation def safe_divide(a, b): try: return a / b except ZeroDivisionError: return 1E100 safe_divide(1, 0) safe_divide(1, '2') Explanation: Dividing an integer and a string raises a TypeError, which our over-zealous code caught and assumed was a ZeroDivisionError! For this reason, it's nearly always a better idea to catch exceptions explicitly: End of explanation raise RuntimeError("my error message") Explanation: We're now catching zero-division errors only, and letting all other errors pass through un-modified. Raising Exceptions: raise We've seen how valuable it is to have informative exceptions when using parts of the Python language. It's equally valuable to make use of informative exceptions within the code you write, so that users of your code (foremost yourself!) can figure out what caused their errors. The way you raise your own exceptions is with the raise statement. For example: End of explanation def fibonacci(N): L = [] a, b = 0, 1 while len(L) < N: a, b = b, a + b L.append(a) return L Explanation: As an example of where this might be useful, let's return to our fibonacci function that we defined previously: End of explanation def fibonacci(N): if N < 0: raise ValueError("N must be non-negative") L = [] a, b = 0, 1 while len(L) < N: a, b = b, a + b L.append(a) return L fibonacci(10) fibonacci(-10) Explanation: One potential problem here is that the input value could be negative. This will not currently cause any error in our function, but we might want to let the user know that a negative N is not supported. Errors stemming from invalid parameter values, by convention, lead to a ValueError being raised: End of explanation N = -10 try: print("trying this...") print(fibonacci(N)) except ValueError: print("Bad value: need to do something else") Explanation: Now the user knows exactly why the input is invalid, and could even use a try...except block to handle it! End of explanation try: x = 1 / 0 except ZeroDivisionError as err: print("Error class is: ", type(err)) print("Error message is:", err) Explanation: Diving Deeper into Exceptions Briefly, I want to mention here some other concepts you might run into. I'll not go into detail on these concepts and how and why to use them, but instead simply show you the syntax so you can explore more on your own. Accessing the error message Sometimes in a try...except statement, you would like to be able to work with the error message itself. This can be done with the as keyword: End of explanation class MySpecialError(ValueError): pass raise MySpecialError("here's the message") Explanation: With this pattern, you can further customize the exception handling of your function. Defining custom exceptions In addition to built-in exceptions, it is possible to define custom exceptions through class inheritance. For instance, if you want a special kind of ValueError, you can do this: End of explanation try: print("do something") raise MySpecialError("[informative error message here]") except MySpecialError: print("do something else") Explanation: This would allow you to use a try...except block that only catches this type of error: End of explanation try: print("try something here") except: print("this happens only if it fails") else: print("this happens only if it succeeds") finally: print("this happens no matter what") Explanation: You might find this useful as you develop more customized code. 
try...except...else...finally In addition to try and except, you can use the else and finally keywords to further tune your code's handling of exceptions. The basic structure is this: End of explanation
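As a usage example of the `try...except...else...finally` structure shown above, here is a short sketch that is not part of the original notebook; it simply illustrates where the success-only (`else`) and always-runs (`finally`) code typically goes.

```python
def parse_numbers(strings):
    # Convert a list of strings to floats, collecting the entries that fail.
    numbers, skipped, attempts = [], [], 0
    for text in strings:
        try:
            value = float(text)        # may raise ValueError
        except ValueError:
            skipped.append(text)       # only runs if the conversion failed
        else:
            numbers.append(value)      # only runs if the try block succeeded
        finally:
            attempts += 1              # runs no matter what
    print("tried", attempts, "entries")
    return numbers, skipped

print(parse_numbers(["1.5", "abc", "42"]))   # ([1.5, 42.0], ['abc'])
```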
1,979
Given the following text description, write Python code to implement the functionality described below step by step Description: Voyager 2 Example data taken on 2018-10-22 during MARS receiver testing, using the Breakthrough Listen backend. Data recorded over full bandwidth of MARS receiver, here we have extracted a small bandwidth around the Voyager2 telemetry signal. Spectral data product has 2.79 Hz resolution, 18.25 s time integrations. Notebook run from blc12 Step1: Dynamic spectra data Data is stored in filterbank format, which can be loaded using blimpy Step2: Plotting and showing dynamic spectra, we can see the signal drifting due to doppler acceleration (LO does not correct for LSRK) Step3: Extract sidebands and zoom Step4: Raw voltage data Raw voltage data for a 2.92 MHz subband was recorded about the Voyager signal. It is stored in Guppi RAW format, as 8-bit integers. This can also be loaded using blimpy. Step5: Data shape is (1, 524288, 2), and dtype is presented as complex64 (but stored as 8-bit in file) Axes are (channel, time sample, polarization). You can iterate over multiple blocks of data to get more time samples if required, until the end of the file. Step6: We can compute the spectrum from these data Step7: We can use multiple blocks to integrate the spectrum down
Python Code: %matplotlib inline import blimpy as bl import pylab as plt import numpy as np plt.rcParams['font.size'] = 12 Explanation: Voyager 2 Example data taken on 2018-10-22 during MARS receiver testing, using the Breakthrough Listen backend. Data recorded over full bandwidth of MARS receiver, here we have extracted a small bandwidth around the Voyager2 telemetry signal. Spectral data product has 2.79 Hz resolution, 18.25 s time integrations. Notebook run from blc12 End of explanation filename = '/datax2/users/dancpr/voyager2_hires_2018.10.22.h5' a = bl.Waterfall(filename) a.info() Explanation: Dynamic spectra data Data is stored in filterbank format, which can be loaded using blimpy: End of explanation plt.figure(figsize=(10, 6)) a.plot_spectrum() plt.figure(figsize=(14, 6)) a.plot_waterfall() Explanation: Plotting and showing dynamic spectra, we can see the signal drifting due to doppler acceleration (LO does not correct for LSRK) End of explanation fc_cw = 8575.34545 # Center freq of transmission sb_sep = 0.0225 # Sideband separation from carrier in MHz sb_bw = 0.001 # Sideband bandwidth plot_bw = sb_bw / 2 # Plotting bandwidth t = a.timestamps # Calculate center freq of sidebands fc_lsb = fc_cw - sb_sep fc_usb = fc_cw + sb_sep # Extract data f_cw, d_cw = a.grab_data(f_start=fc_cw-plot_bw, f_stop=fc_cw+plot_bw) f_lsb, d_lsb = a.grab_data(f_start=fc_lsb-plot_bw, f_stop=fc_lsb+plot_bw) f_usb, d_usb = a.grab_data(f_start=fc_usb-plot_bw, f_stop=fc_usb+plot_bw) # Plot def plot_waterfall(t, f, d): t_elapsed = (t[-1] - t[0]) * 86400 plt.imshow(d[::-1], aspect='auto', extent=(f[-1], f[0], 0, t_elapsed)) plt.xticks(rotation=30) plt.xlabel("Frequency [MHz]") plt.figure(figsize=(14, 5)) plt.subplot(1,3,1) plt.ylabel("Elapsed time [s]") plot_waterfall(t, f_lsb, d_lsb) plt.subplot(1,3,2) plot_waterfall(t, f_cw, d_cw) plt.subplot(1,3,3) plot_waterfall(t, f_usb, d_usb) Explanation: Extract sidebands and zoom End of explanation raw_filename = '/datax2/users/dancpr/voyager_2018.10.22.raw' raw = bl.GuppiRaw(raw_filename) raw_header0, raw_data0 = raw.read_next_data_block() Explanation: Raw voltage data Raw voltage data for a 2.92 MHz subband was recorded about the Voyager signal. It is stored in Guppi RAW format, as 8-bit integers. This can also be loaded using blimpy. End of explanation raw_data0.shape Explanation: Data shape is (1, 524288, 2), and dtype is presented as complex64 (but stored as 8-bit in file) Axes are (channel, time sample, polarization). You can iterate over multiple blocks of data to get more time samples if required, until the end of the file. 
End of explanation dx = raw_data0.squeeze()[:, 0] dy = raw_data0.squeeze()[:, 1] n_chans = 8192 plt.figure(figsize=(12, 6)) plt.subplot(1,2,1) Dx = np.abs(np.fft.fftshift(np.fft.fft(dx))) Dx = Dx.reshape([n_chans, -1]).mean(axis=-1) plt.plot(Dx) plt.subplot(1,2,2) Dy = np.abs(np.fft.fftshift(np.fft.fft(dy))) Dy = Dy.reshape([n_chans, -1]).mean(axis=-1) plt.plot(Dy) Explanation: We can compute the spectrum from these data End of explanation n_chans = 8192 n_ints = 64 raw.reset_index() x, y = np.zeros(n_chans), np.zeros(n_chans) for idx in range(n_ints): h, d = raw.read_next_data_block() dx = d.squeeze()[:, 0] dy = d.squeeze()[:, 1] Dx = np.abs(np.fft.fftshift(np.fft.fft(dx))) Dx = Dx.reshape([n_chans, -1]).mean(axis=-1) Dy = np.abs(np.fft.fftshift(np.fft.fft(dy))) Dy = Dy.reshape([n_chans, -1]).mean(axis=-1) x += Dx y += Dy plt.figure(figsize=(12, 6)) plt.subplot(1,2,1) plt.plot(x) plt.subplot(1,2,2) plt.plot(y) Explanation: We can use multiple blocks to integrate the spectrum down: End of explanation
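To put the integrated raw-voltage spectra above on a sky-frequency axis, one option is to build the axis from the FFT bin frequencies. A minimal sketch, assuming the 2.92 MHz subband quoted in the text and (as an extra assumption) re-using the CW carrier frequency from the filterbank section as the subband centre; the true centre frequency of the raw recording may differ.

```python
import numpy as np
import pylab as plt

bw_mhz = 2.92          # assumed: recorded subband bandwidth quoted in the text
fc_mhz = 8575.34545    # assumed: centre taken from the CW carrier used earlier
n_chans = 8192

# For complex-sampled data, the fftshift'ed spectrum spans -bw/2 ... +bw/2 around the centre.
freqs_mhz = fc_mhz + np.fft.fftshift(np.fft.fftfreq(n_chans, d=1.0 / bw_mhz))

plt.figure(figsize=(12, 6))
plt.plot(freqs_mhz, x)     # x: integrated X-pol spectrum from the block above
plt.xlabel("Frequency [MHz]")
plt.ylabel("Amplitude [arb.]")
```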
1,980
Given the following text description, write Python code to implement the functionality described below step by step Description: Modeling of Qubit Chain Simulation of a few steps of a quantum walk <img src="images/line_qubits_site1.png" alt="Qubit Chain"> <img src="images/line_qubits_site.png" alt="Qubit Chain Shift"> Contributor Alexander Yu. Vlasov The initial part is similar to the previous notebook, but only the circuit for one step, without measurements, is implemented. The parameter n_step is used in the next part of the program, discussed below. Step1: The method used here can be applied only with a simulator and is similar to the one already used in a Qiskit tutorial about visualization of a quantum state. The unitary_simulator backend is used to produce the $2^n \times 2^n$ unitary matrix QWalk representing the quantum circuit, where $n$ is n_nodes. A complex vector with $2^n$ components is initialized as the initial state, e.g., $|\psi_0\rangle = |00001\rangle$. Multiplication of QWalk with such a vector produces the final state, also with $2^n$ components: $|\psi_1\rangle = Q_{\rm Walk} |\psi_0\rangle$. The density matrix $\rho_1 = |\psi_1\rangle\langle\psi_1|$ for such a state is calculated and used for the Quantum Sphere. Unlike on real hardware, the state is not destroyed by measurement and may be used again as the initial state: $|\psi_{k+1}\rangle = Q_{\rm Walk} |\psi_k\rangle$. So, the Quantum Spheres are drawn for $\rho_k = |\psi_k\rangle\langle\psi_k|$ after each step of the quantum walk, without re-initialization.
Python Code: from pprint import pprint import math import numpy as np # importing the Qiskit from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister from qiskit import Aer, execute # import state tomography functions from qiskit.tools.visualization import plot_histogram, plot_state # Definition of matchgate def gate_mu3(qcirc,theta,phi,lam,a,b): qcirc.cx(a,b) qcirc.cu3(theta,phi,lam,b,a) qcirc.cx(a,b) n_nodes = 5 n_step = 3 # Creating Registers qr = QuantumRegister(n_nodes) cr = ClassicalRegister(n_nodes) # Creating Circuits qc = QuantumCircuit(qr,cr) # Creating of two partitions with M1' and M2 for i in range(0,n_nodes-1,2): gate_mu3(qc,math.pi, math.pi, 0, qr[i], qr[i+1]) for i in range(1,n_nodes,2): gate_mu3(qc,math.pi/2, 0, 0, qr[i], qr[i+1]) Explanation: Modeling of Qubit Chain Simulation of few steps of quantum walk <img src="images/line_qubits_site1.png" alt="Qubit Chain"> <img src="images/line_qubits_site.png" alt="Qubit Chain Shift"> Contributor Alexander Yu. Vlasov The initial part is similar with previous notebook, but only circuit for one step without measurements is implemented. Parameter n_step is used in next part of program discussed below. End of explanation # execute the quantum circuit backend = 'unitary_simulator' # the device to run on job = execute(qc, Aer.get_backend(backend)) # Execute quantum walk result = job.result() initial_state = np.zeros(2**n_nodes) initial_state[1]=1.0 # state 0 = ....0000, state 1 = ...000001 QWalk = result.get_data(qc)['unitary'] #Applying QWalk n_step times for i in range(0,n_step): if i > 0: initial_state = np.copy(state_QWalk) # Copy previous state state_QWalk = np.dot(QWalk,initial_state) # Multiply on QWalk matrix rho_QWalk=np.outer(state_QWalk, state_QWalk.conj()) # Calculate density matrix print('step = ',i+1) # print number plot_state(rho_QWalk,'qsphere') # draw Quantum Sphere Explanation: The method used here may be applied only for simulator and similar with already used in a Qiskit tutorial about visualization of quantum state. The unitary_simulator backend is used to produce $2^n \times 2^n$ unitary matrix QWalk representing quantum circuit, where $n$ is n_nodes. The complex vector with $2^n$ is initialized as initial state, e.g., $|\psi_0\rangle = |00001\rangle$. Multiplication of QWalk on such a vector produces final state also with $2^n$ components $|\psi_1\rangle = Q_{\rm Walk} |\psi_0\rangle$. The density matrix $\rho_1 = |\psi_1\rangle!\langle\psi_1|$ for such a state is calculated and used for Quantum Sphere. Unlike of real hardware the state is not destroyed due to measurement and may be again used as initial state $|\psi_{k+1}\rangle = Q_{\rm Walk} |\psi_k\rangle$. So, the Quantum Spheres are drawn for $\rho_k = |\psi_k\rangle!\langle\psi_k|$ after each step of quantum walk without initialization. End of explanation
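Before building the full chain it can be useful to look at the 4x4 unitary that a single matchgate implements. This is an optional sketch written against the same (older) Qiskit API used in this notebook, and it assumes the imports and the `gate_mu3` definition from the cell above.

```python
# Build a throw-away 2-qubit circuit containing one matchgate and print its unitary.
qr2 = QuantumRegister(2)
qc2 = QuantumCircuit(qr2)
gate_mu3(qc2, math.pi/2, 0, 0, qr2[0], qr2[1])

job2 = execute(qc2, Aer.get_backend('unitary_simulator'))
U2 = job2.result().get_data(qc2)['unitary']
print(np.round(U2, 3))    # 4x4 matrix acting on the |00>,|01>,|10>,|11> basis
```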
1,981
Given the following text description, write Python code to implement the functionality described below step by step Description: Independent confirmation of ACA offsets As part of the verification of the dynamic offsets process, SOT/ACA ops has independently confirmed the FOT aimpoint offsets for the JUL0415O test week. For this independent verification, we have used Step1: Check dynamic offset file for consistency The dynamic target offset file includes the aca offsets calculated by the FOT version of the chandra_aca.drift module. The file also includes inputs used to calculate the offsets values Step2: Run the ACA model and get new offsets The dynamic offsets in the dynamic offsets / aimpoint file are calculated using the the xija ACA thermal model called from the FOT Matlab tools. To independently verify both the inputs and the outputs reported in the dynamic offsets/aimpoints file, we run the SOT version ACA model over the JUL0415O schedule interval and recalculate the aca_offsets using the calculated ACA ccd temperatures and the zero offset aimpoint information from the OR list. The ACA load review software, starcheck, already has code to determine inputs to the xija ACA model and to run the model over command products. For this test, the code to get the mean aca ccd temperature for each obsid has been extended to also run the offset calculation via chandra_aca.drift_get_aca_offsets. + if interval['obsid'] in obsreqs and len(ok_temps) &gt; 0 Step3: Compare values to dynamic offset table from Matlab Then, for each entry in the dynamic offset table from the matlab tools, we compare the aca_offset_y and aca_offset_z with the values from the independent run of the model and the offset values calculated from within the starcheck code. For quick review, we print out the offsets and temperatures, with the dynamic aimpoint offset file versions in the first column of each value being checked. Step4: The maximum differences in the offsets between the values via an independent run of the model are within an arcsec.
Python Code: import os import sys from glob import glob import json import numpy as np from astropy.table import Table from Chandra.Time import DateTime from Ska.Matplotlib import plot_cxctime from chandra_aca import drift import parse_cm Explanation: Independent confirmation of ACA offsets As part of the verification of the dynamic offsets process, SOT/ACA ops has independently confirmed the FOT aimpoint offsets for the JUL0415O test week. For this independent verification, we have used: the ZERO OFFSET aimpoint table provided by SOT MP the xija ACA thermal model for a test week (note that in testing, this notebook required PYTHONPATH set to include the 'calc_aca_offsets' branch of starcheck and the 'read_zero_offset' branch of parse_cm) End of explanation TEST_DIR = '/proj/sot/ska/ops/SFE/JUL0415O/oflso' dynam_table = Table.read(glob("{}/*dynamical_offsets.txt".format(TEST_DIR))[0], format='ascii') # first, check table for self-consistent offsets ys = [] zs = [] for row in dynam_table: y, z = drift.get_aca_offsets(row['detector'], row['chip_id'], row['chipx'], row['chipy'], time=row['mean_date'], t_ccd=row['mean_t_ccd']) ys.append(y) zs.append(z) print "Y offsets consistent: {}".format(np.allclose(dynam_table['aca_offset_y'], ys, atol=0.02)) print "Z offsets consistent: {}".format(np.allclose(dynam_table['aca_offset_z'], zs, atol=0.02)) Explanation: Check dynamic offset file for consistency The dynamic target offset file includes the aca offsets calculated by the FOT version of the chandra_aca.drift module. The file also includes inputs used to calculate the offsets values: detector, chip_id, chipx, chipy, a time, and a temperature. As a check of consistency, we recalculate the aca_offset_y and aca_offset_z values using those inputs and the SOT/Ska version of the chandra_aca.drift module. For each row in the table, this tests confirms that the re-calculated values of aca_offset_y and aca_offset_z are within 0.02 arcsecs of the values in the file. End of explanation from starcheck.calc_ccd_temps import get_ccd_temps obsid_info = json.loads(get_ccd_temps(TEST_DIR, json_obsids=open("{}/starcheck/obsids.json".format(TEST_DIR)), model_spec="{}/starcheck/aca_spec.json".format(TEST_DIR), char_file="/proj/sot/ska/data/starcheck/characteristics.yaml", orlist="{}/mps/or/JUL0415_A.or".format(TEST_DIR))); Explanation: Run the ACA model and get new offsets The dynamic offsets in the dynamic offsets / aimpoint file are calculated using the the xija ACA thermal model called from the FOT Matlab tools. To independently verify both the inputs and the outputs reported in the dynamic offsets/aimpoints file, we run the SOT version ACA model over the JUL0415O schedule interval and recalculate the aca_offsets using the calculated ACA ccd temperatures and the zero offset aimpoint information from the OR list. The ACA load review software, starcheck, already has code to determine inputs to the xija ACA model and to run the model over command products. For this test, the code to get the mean aca ccd temperature for each obsid has been extended to also run the offset calculation via chandra_aca.drift_get_aca_offsets. 
+ if interval['obsid'] in obsreqs and len(ok_temps) &gt; 0: + obsreq = obsreqs[interval['obsid']] + if 'chip_id' in obsreq: + ddy, ddz = get_aca_offsets(obsreq['detector'], + obsreq['chip_id'], + obsreq['chipx'], + obsreq['chipy'], + time=itimes, + t_ccd=ok_temps) + obs['aca_offset_y'] = np.mean(ddy) + obs['aca_offset_z'] = np.mean(ddz) (see link to changed starcheck code) Then, the returned values from that code include these independently calculated values of aca_offset_y and aca_offset_z that correspond to aca_offset_y and aca_offset_z in the dynamic aimpoint text product. (apologies for the starcheck log output) End of explanation y_diff = [] z_diff = [] for obsid in dynam_table['obsid']: dyn_rec = dynam_table[dynam_table['obsid'] == obsid][0] if str(obsid) in obsid_info: print "{} offset y {: .2f} vs {: .2f} offset z {: .2f} vs {: .2f} t_ccd {: .2f} vs {: .2f}".format( obsid, dyn_rec['aca_offset_y'], obsid_info[str(obsid)]['aca_offset_y'], dyn_rec['aca_offset_z'], obsid_info[str(obsid)]['aca_offset_z'], dyn_rec['mean_t_ccd'], obsid_info[str(obsid)]['ccd_temp']) y_diff.append(dyn_rec['aca_offset_y'] - obsid_info[str(obsid)]['aca_offset_y']) z_diff.append(dyn_rec['aca_offset_z'] - obsid_info[str(obsid)]['aca_offset_z']) y_diff = np.array(y_diff) z_diff = np.array(z_diff) Explanation: Compare values to dynamic offset table from Matlab Then, for each entry in the dynamic offset table from the matlab tools, we compare the aca_offset_y and aca_offset_z with the values from the independent run of the model and the offset values calculated from within the starcheck code. For quick review, we print out the offsets and temperatures, with the dynamic aimpoint offset file versions in the first column of each value being checked. End of explanation print "Y offset max difference {:.2f} arcsec".format(np.max(np.abs(y_diff))) print "Z offset max difference {:.2f} arcsec".format(np.max(np.abs(z_diff))) Explanation: The maximum differences in the offsets between the values via an independent run of the model are within an arcsec. End of explanation
1,982
Given the following text description, write Python code to implement the functionality described below step by step Description: Initial planning steps Step1: Targeting and master list design Latest cuts Step2: Test round with Ody Step3: Sky Fiber positions For some hosts we already have sky positions from the last run, so copy those over Step4: For the remainder, generate and visually inspect them one at a time. Edit the file to remove any that are not good sky positions Step7: Make all the master catalogs Step8: Nightly configurations/etc Night 1 (Jun 19) Step9: Night pretty much a total wash - got 30 min in on OBrother_1, but probably mostly useless Night 2 (Jun 20) Step10: Night 3 (Jun 21) Step11: Load autoz's from Marla's reduction to decide which ones to remove Step12: Examining targets Step13: Miscellaneous experimentation with ML stuff requires that the master-list prep be in place Step14: Checks that the ML targets match coordinates w/objids Step15: Looks like all is good - they match to ~microarcsec, which is probably rounding Step16: Checks for bright stars in SDSS catalogs Step17: Investigate targets that get priority 8 but turn out to be stars Step18: Inspection of results from above suggests that the Aeneid field has many more correctly-IDed stars, while for AnaK and OBrother, it's about even. Step19: Ah - it's probably because Aeneid is much closer to the galactic plane than the others, so just more stars period Stars in master catalogs
Python Code: #if online ufo = urllib2.urlopen('https://docs.google.com/spreadsheet/ccc?key=1b3k2eyFjHFDtmHce1xi6JKuj3ATOWYduTBFftx5oPp8&output=csv') hosttab = QTable.read(ufo.read(), format='csv') ufo.close() #if offline hosttab = Table.read('SAGADropbox/hosts/host_catalog_flag0.csv') hostscs = SkyCoord(u.Quantity(hosttab['RA'], u.deg), u.Quantity(hosttab['Dec'], u.deg), distance=u.Quantity(hosttab['distance'], u.Mpc)) #UTC time from 8:35-19:35 is AAT 18 deg window nighttimes = Time('2015-6-20 8:35:00') + np.arange(12)*u.hour aao = EarthLocation(lon='149:3:57.9', lat='-31:16:37.3') aao_frame = AltAz(obstime=nighttimes, location=aao) seczs = [] for sc in hostscs: az = sc.transform_to(aao_frame) seczs.append(az.secz) seczs = np.array(seczs) hrsvis = np.sum((1<=seczs)&(seczs<1.75),axis=1) visenough = hrsvis>2 for secz, nsaid in zip(seczs[visenough], hosttab['NSAID'][visenough]): msk = secz>0 plt.plot(np.arange(len(secz))[msk], secz[msk], label=nsaid) plt.legend(loc=0) plt.ylim(0,5) names = [] ras = [] with open('aat_targets/aattargs_iobserve.dat', 'w') as f: for host in hosttab[visenough]: name = 'NSA'+str(host['NSAID']) for nm, val in hsd.items(): if val.nsaid == host['NSAID']: name = nm if nm.startswith('NSA'): name = name+'_obsed' break f.write(name.replace(' ','_') + ' ') f.write(str(host['RA']) + ' ') f.write(str(host['Dec']) + '\n') names.append(name) ras.append(host['RA']) names = np.array(names) ras = np.array(ras) earlymsk = (10<ras/15)&(ras/15<18) print('Early targets') for nm in names[earlymsk]: print(nm) print('\nLater targets') for nm in names[~earlymsk]: print(nm) Explanation: Initial planning steps End of explanation dune = hsd['Dune'] ody = hsd['Odyssey'] gilg = hsd['Gilgamesh'] aen = hsd['Aeneid'] aen.environsarcmin = 30. ob = hsd['OBrother'] anak = hsd['AnaK'] hostsforrun = [dune, ody, gilg, aen, ob, anak] #*possible* target n150307 = hosts.NSAHost(150307) [(h.name, h.environskpc, h.environsarcmin) for h in hostsforrun] #Casjobs queries. 
But these don't have the UnWISE reduction, so we use different catalogs in the end for h in hostsforrun: print(h.name, h.fnsdss) print(h.sdss_environs_query(dl=False, xmatchwise=True, usecas=True)) print('\n\n') # this actually sets up the hosts to use the UnWISE-matched catalogs for h in hostsforrun: h.altfnsdss.insert(0, h.fnsdss) h.fnsdss = 'catalogs/base_sql_nsa{0}.fits.gz'.format(h.nsaid) print('Loading catalog for', h.name, 'from', h.fnsdss) h._cached_sdss = None # make sure not to used an old cached one if its present c = h.get_sdss_catalog() #we have to modify these catalogs because 'phot_sg' in the fits catalogs are 3/6 instead of 'STAR'/'GALAXY', and 'type' is missing c.add_column(MaskedColumn(name='type', data=c['phot_sg'])) c.remove_column('phot_sg') phot_sg = np.select([c['type']==3,c['type']==6], ['GALAXY','STAR']).astype('a6') c['phot_sg'] = MaskedColumn(name='phot_sg', data=phot_sg) for h in hostsforrun: c = h.get_sdss_catalog() csc = SkyCoord(ra=c['ra']*u.deg, dec=c['dec']*u.deg, distance=h.dist) hsc = SkyCoord(h.coords, distance=h.dist) print(h.name,'max',np.max(csc.separation_3d(hsc).to(u.kpc))) gricolorcuts = {'g-r': (None, 0.8, 2), 'r-i': (None, 0.5, 2)} sagacolorcuts = gricolorcuts.copy() sagacolorcuts['r-K'] = (None, 2.0, 2) sagacolorcuts['r-w1'] = (None, 2.6, 2) def uggrline_cut(cat, sl=1.5, inter=-0.2): gt = cat['u']-cat['g']+2*(cat['u_err']+cat['g_err']) lt = sl*(cat['g']-cat['r']-2*(cat['g_err']+cat['r_err'])) + inter return gt > lt gricolorcuts['funcs'] = sagacolorcuts['funcs'] = [uggrline_cut] allgoodspec = Table(fits.getdata('allgoodspec_v2_jun14_15.fits.gz')) allgoodspecnorem = allgoodspec[allgoodspec['REMOVE']==-1] Explanation: Targeting and master list design Latest cuts: ``` rlim=20.5 or 21 qsaga = where(sdss.r le rlim and sdss.rhost_kpc le 300 and sdss.remove eq -1 and $ sdss.g-sdss.r-2sdss.g_err-2sdss.r_err le 0.8 and $ sdss.r-sdss.i-2sdss.r_err-2sdss.i_err le 0.5 and $ sdss.r-sdss.K-2sdss.r_err-2sdss.Kerr le 2.0 and $ sdss.r-sdss.W1-2sdss.r_err-2sdss.W1err le 2.6 and $ sdss.u-sdss.g+2sdss.u_err+2sdss.g_err gt $ 1.5(sdss.g-sdss.r-2sdss.g_err-2sdss.r_err)-0.2 and $ sdss.sb_exp_r ge 0.42sdss.r+13.2 and $ sdss.fibermag_r le 23,nsaga) qgri = where(sdss.r le rlim and sdss.rhost_kpc le 300 and $ sdss.g-sdss.r-2sdss.g_err-2sdss.r_err le 0.8 and $ sdss.r-sdss.i-2sdss.r_err-2sdss.i_err le 0.5 and $ sdss.fibermag_r le 23,ngri) ``` Notes on allgoodspec fields: ``` # SATS = 3 if object is SAGA PRIMARY HOST # SATS = 2 if object is low-z, z < 0.05 # SATS = 1 if object is SAGA SATELLITE (+/- 200 km/s w/in 300kpc) # SATS = 0 if object is high-z, z > 0.05 # SATS = -1 no redshift REMOVE = -1 = GOOD OBJECT REMOVE = 1 = ON REMOVE LIST, DO NOT USE (rm_removelist_obj) REMOVE = 2 = SHREDDED OBJECT BASED ON NSA (nsa_cleanup) REMOVE = 3 = REPEATED SPECTRUM ``` End of explanation allgoododydist = catalog_to_sc(allgoodspec, ody) targs = targeting.select_targets(ody, colorcuts=sagacolorcuts, outercutrad=None, galvsallcutoff=21, faintlimit=21, verbose=True, removespecstars=False, removegalsathighz =False) #do these because the fits catalogs don't have spec_class len(targs) targsc = catalog_to_sc(targs, ody) idx, d2d, d3d = targsc.match_to_catalog_sky(allgoododydist) alreadytargeted = d2d < 1*u.arcsec np.sum(alreadytargeted), np.sum(~alreadytargeted) guides = aat.select_guide_stars_sdss(ody.get_sdss_catalog()) calibs = aat.select_flux_stars(ody.get_sdss_catalog(), onlyoutside=300*u.kpc) skyradec = aat.select_sky_positions(ody) aat.produce_master_fld(ody, datetime.date(2014, 6, 19), 
targs[~alreadytargeted], pris=None, guidestars=guides, fluxstars=calibs,skyradec=skyradec, outfn='aat_targets_jun2015/Odyssey_master_test.fld', randomizeorder=True) s = targs['phot_sg']=='STAR' g = targs['phot_sg']=='GALAXY' plt.scatter(targs['fibermag_r'][s],targs['sb_deV_r'][s],alpha=.5,s=5,edgecolor='none', facecolor='b') plt.scatter(targs['fibermag_r'][g],targs['sb_deV_r'][g],alpha=.5,s=5,edgecolor='none', facecolor='r') #plt.scatter(targs['r'],targs['sb_exp_r'],alpha=.5,s=5,edgecolor='none', facecolor='b') #plt.scatter(targs['r'],targs['sb_deV_r'],alpha=.5,s=5,edgecolor='none', facecolor='r') plt.scatter(targs['r'],targs['sb_petro_r'],alpha=.5,s=5,edgecolor='none', facecolor='g') plt.xlim(17.7, 21) plt.ylim(20,30) Explanation: Test round with Ody End of explanation !cp aat_targets_jul2014/Gilgamesh_sky.dat aat_targets_jun2015/Gilgamesh_sky.dat !cp aat_targets_jul2014/Aeneid_sky.dat aat_targets_jun2015/Aeneid_sky.dat !cp aat_targets_jul2014/Ana_Karenina_sky.dat aat_targets_jun2015/AnaK_sky.dat Explanation: Sky Fiber positions For some hosts we already have sky positions from the last run, so copy those over End of explanation #Identify sky regions for each host and write out to separate files - from os.path import exists for h in hostsforrun: outfn = 'aat_targets_jun2015/' + h.name.replace(' ','_') + '_sky.dat' if exists(outfn): print(outfn, 'exists, not overwriting') else: print('Writing', outfn) aat.select_sky_positions(h, nsky=100, outfn=outfn, rad=1*u.deg) aat.imagelist_fld_targets("aat_targets_jun2015/Dune_sky.dat", ttype='sky', n=np.inf); !subl "aat_targets_jun2015/Dune_sky.dat" aat.imagelist_fld_targets("aat_targets_jun2015/Odyssey_sky.dat", ttype='sky', n=np.inf); !subl "aat_targets_jun2015/Odyssey_sky.dat" aat.imagelist_fld_targets("aat_targets_jun2015/OBrother_sky.dat", ttype='sky', n=np.inf); !subl "aat_targets_jun2015/OBrother_sky.dat" Explanation: For the remainder, generate and visually inspect them one at a time. Edit the file to remove any that are not good sky positions End of explanation #now organize the targets that should be manually forced to priority-9 no matter what #these are Dune targets that look especially promising dune9= 221.04498 0.18304042 221.07614 0.034178416 #and these are OBrother ob9= 334.99197 -3.0978879 336.67442 -3.0774462 335.36295 -4.1283350 335.97998 -3.2705486 #this is the ambiguous one in Odyssey ody9='\n247.82589 20.210879' ras = [] decs = [] for l in (dune9+ob9+ody9).split('\n'): if l.strip() == '': continue ra, dec = l.strip().split() ras.append(float(ra)) decs.append(float(dec)) toforcemanual = SkyCoord(ras*u.deg, decs*u.deg) #use this to manually add particular lines to the target catalog. 
Here we just add the DECALS AnaK object manualtargetlinesdct = {'AnaK':['DECALS_target_1 23 37 07.90 +0 12 40.86 P 8 21.33 0 magcol=decals_fiber2mag_r, decals_r=18.5']} # load the machine learning probabilities - these are used for pri 7 and 8 machine_learning_probs = Table.read('catalogs/machine_learning_june2015.csv.gz', format='csv') for h in hostsforrun: print('Doing', h.name) allgooddist = catalog_to_sc(allgoodspec, h) targs = targeting.select_targets(h, colorcuts=gricolorcuts, outercutrad=None, galvsallcutoff=21, faintlimit=21., removespecstars=False, removegalsathighz=False) #do these because the fits catalogs don't have spec_class targs = targeting.remove_targets_with_remlist(targs, h, 'TargetRemoveJun14_2015.csv') print('removing', np.sum(targs['REMOVE']!=-1),'REMOVE!=-1 objects') targs = targs[targs['REMOVE']==-1] targsc = catalog_to_sc(targs, h) idx, d2d, d3d = targsc.match_to_catalog_sky(allgooddist) alreadytargeted = d2d < 1*u.arcsec print('Already targeted', np.sum(alreadytargeted), 'leaving', np.sum(~alreadytargeted),'to target') #the targeting catalog *without* cuts applied, but de-duplicated rawdups = targeting.find_duplicate_objids(h.get_sdss_catalog()) rawcatnodups = h.get_sdss_catalog()[~rawdups] print('Raw catalog has', np.sum(rawdups), 'duplicates') #for guide and flux, though, we need to use the *other* catalog, because the UnWISE matched one has a bright cut casjobs_cat = h._load_and_reprocess_sdss_catalog(h.altfnsdss[0]) casjobs_cat = casjobs_cat[~targeting.find_duplicate_objids(casjobs_cat)] guides = aat.select_guide_stars_sdss(casjobs_cat) calibs = aat.select_flux_stars(casjobs_cat, onlyoutside=300*u.kpc) #skyradec = aat.select_sky_positions(h) skyradec = 'aat_targets_jun2015/{0}_sky.dat'.format(h.name) # generated above targcat = targs[~alreadytargeted] #now look for duplicates and always take the first - usually means multiple WISE or 2MASS matches targdups = targeting.find_duplicate_objids(targcat) targcat = targcat[~targdups] print('Raw catalog has', np.sum(targdups), 'duplicates') #add column for extra notes #targcat.add_column(table.Column(name='extra_aat_notes', data=np.zeros(len(targcat), dtype='S25'))) pris = aat.prioritize_targets(targcat, scheme='jun2015baseline') # these scheme doesn't include the SAGA color cuts yet - it just makes 1/2 and 3/4 on SB (1/2 is outside rvir) sagacolormsk = targeting.colorcut_mask(targcat, sagacolorcuts) #if you meet the saga cuts *and* are within rvir you get a boost to 5/6 msk56 = sagacolormsk&(pris<5)&(pris>2) pris[msk56] += 2 #now we set everything #if you fail the saga cuts and are outside rvir you get thrown out... #BUT WE DECIDED NOT TO DO THAT: the r-w1 cut seems to kick out things that we might want #outsidemsk = (~sagacolormsk)&(pris<3) #pris[outsidemsk] = 0 # now the machine-learning forced priority 7/8 targets # everything with p_class1 > 0.05 goes into 7, and the 50 highest go into 8. 
Make those masks this_mlcat = machine_learning_probs[machine_learning_probs['HOST_NSAID']==h.nsaid] this_mlcat_scs = SkyCoord(this_mlcat['RA']*u.deg, this_mlcat['DEC']*u.deg) this_mlcat_withinrv = this_mlcat_scs.separation(h.coords) < h.environsarcmin*u.arcmin pri7_ml_msk = (this_mlcat['PROBABILITY_CLASS_1']>0.05)&this_mlcat_withinrv pri2_ml_msk = (this_mlcat['PROBABILITY_CLASS_1']>0.05)&~this_mlcat_withinrv pri8_ml_idx = np.argsort(this_mlcat['PROBABILITY_CLASS_1']) pri8_ml_idx = pri8_ml_idx[this_mlcat_withinrv[pri8_ml_idx]][-50:] #if they are in 8, don't put them in 2 or 7 pri7_ml_msk[pri8_ml_idx] = False pri2_ml_msk[pri8_ml_idx] = False # now actually build the coordinate objects and match them to the base catalog targcat, pris = targeting.add_forced_targets(rawcatnodups, targcat, pris, this_mlcat['OBJID'][pri7_ml_msk], 7) targcat, pris = targeting.add_forced_targets(rawcatnodups, targcat, pris, this_mlcat['OBJID'][pri8_ml_idx], 8) pris2 = pris==2 pris[pris2] = 1 # instead of 1/2 on SB, it's 2 for those with high ML-prob, and 1 for all else targcat, pris = targeting.add_forced_targets(rawcatnodups, targcat, pris, this_mlcat['OBJID'][pri2_ml_msk], 2) #and also set priorities for targets that by-hand should be forced to 8 targcat, pris = targeting.add_forced_targets(rawcatnodups, targcat, pris, toforcemanual, 8) aat.produce_master_fld(h, datetime.date(2014, 6, 19), targcat, pris=pris, fluxpri=9, guidestars=guides, fluxstars=calibs,skyradec=skyradec, outfn='aat_targets_jun2015/{0}_master.fld'.format(h.name), randomizeorder=True, manualtargetlines=manualtargetlinesdct.get(h.name, [])) print('') #newline Explanation: Make all the master catalogs End of explanation scptarget = 'visitor3@aatlxa:configure/' #note that this was done with a *different* master catalog: it does not have priorty 2 from the ML learning (instead they are all in 7) h = dune fnbase = 'aat_targets_jun2015/' + h.name finum = 1 fnmaster = fnbase + '_master.fld' fnconfig = fnbase + '_{0}.fld'.format(finum) listorem = [fnbase + '_' + str(i) + '.lis' for i in range(1, finum)] aat.subsample_from_master_fld(fnmaster, fnconfig, {1:200, 2:200, 3:300, 4:300, 5:np.inf, 6:np.inf, 7:np.inf, 8:np.inf, 9:np.inf}, #8 reserved for flux nflux=5, nguides=30, fieldname=str(finum), listorem=listorem) #now scp to the aat machines to design config !scp $fnconfig $scptarget h = ody fnbase = 'aat_targets_jun2015/' + h.name finum = 1 fnmaster = fnbase + '_master.fld' fnconfig = fnbase + '_{0}.fld'.format(finum) listorem = [fnbase + '_' + str(i) + '.lis' for i in range(1, finum)] aat.subsample_from_master_fld(fnmaster, fnconfig, {1:200, 2:200, 3:300, 4:300, 5:np.inf, 6:np.inf, 7:np.inf, 8:np.inf, 9:np.inf}, #8 reserved for flux nflux=5, nguides=30, fieldname=str(finum), listorem=listorem) #now scp to the aat machines to design config !scp $fnconfig $scptarget h = aen fnbase = 'aat_targets_jun2015/' + h.name finum = 1 fnmaster = fnbase + '_master.fld' fnconfig = fnbase + '_{0}.fld'.format(finum) listorem = [fnbase + '_' + str(i) + '.lis' for i in range(1, finum)] aat.subsample_from_master_fld(fnmaster, fnconfig, {1:200, 2:200, 3:300, 4:300, 5:np.inf, 6:np.inf, 7:np.inf, 8:np.inf, 9:np.inf}, #8 reserved for flux nflux=5, nguides=30, fieldname=str(finum), listorem=listorem) #now scp to the aat machines to design config !scp $fnconfig $scptarget h = ob fnbase = 'aat_targets_jun2015/' + h.name finum = 1 fnmaster = fnbase + '_master.fld' fnconfig = fnbase + '_{0}.fld'.format(finum) listorem = [fnbase + '_' + str(i) + '.lis' for i in range(1, finum)] 
aat.subsample_from_master_fld(fnmaster, fnconfig, {1:200, 2:200, 3:150, 4:150, 5:200, 6:200, 7:np.inf, 8:np.inf, 9:np.inf}, #8 reserved for flux nflux=5, nguides=30, fieldname=str(finum), listorem=listorem) #now scp to the aat machines to design config !scp $fnconfig $scptarget Explanation: Nightly configurations/etc Night 1 (Jun 19) End of explanation scptarget = 'visitor3@aatlxa:configure/' h = dune fnbase = 'aat_targets_jun2015/' + h.name finum = 1 fnmaster = fnbase + '_master.fld' fnconfig = fnbase + '_{0}.fld'.format(finum) listorem = [fnbase + '_' + str(i) + '.lis' for i in range(1, finum)] aat.subsample_from_master_fld(fnmaster, fnconfig, {1:200, 2:200, 3:300, 4:300, 5:np.inf, 6:np.inf, 7:np.inf, 8:np.inf, 9:np.inf}, #8 reserved for flux nflux=5, nguides=30, utcobsdate=datetime.date(2015, 6, 20), fieldname=str(finum), listorem=listorem) #now scp to the aat machines to design config !scp $fnconfig $scptarget h = dune fnbase = 'aat_targets_jun2015/' + h.name finum = 2 fnmaster = fnbase + '_master.fld' fnconfig = fnbase + '_{0}.fld'.format(finum) listorem = [fnbase + '_' + str(i) + '.lis' for i in range(1, finum)] !scp {scptarget}{h.name}*.lis aat_targets_jun2015/ aat.subsample_from_master_fld(fnmaster, fnconfig, {1:200, 2:200, 3:300, 4:300, 5:np.inf, 6:np.inf, 7:np.inf, 8:np.inf, 9:np.inf}, #8 reserved for flux nflux=5, nguides=30, utcobsdate=datetime.date(2015, 6, 20), fieldname=str(finum), listorem=listorem) !scp $fnconfig $scptarget h = ody fnbase = 'aat_targets_jun2015/' + h.name finum = 1 fnmaster = fnbase + '_master.fld' fnconfig = fnbase + '_{0}.fld'.format(finum) listorem = [fnbase + '_' + str(i) + '.lis' for i in range(1, finum)] aat.subsample_from_master_fld(fnmaster, fnconfig, {1:300, 2:300, 3:300, 4:300, 5:np.inf, 6:np.inf, 7:np.inf, 8:np.inf, 9:np.inf}, #8 reserved for flux nflux=5, nguides=30, utcobsdate=datetime.date(2015, 6, 20), fieldname=str(finum), listorem=listorem) #now scp to the aat machines to design config !scp $fnconfig $scptarget h = aen fnbase = 'aat_targets_jun2015/' + h.name finum = 1 fnmaster = fnbase + '_master.fld' fnconfig = fnbase + '_{0}.fld'.format(finum) listorem = [fnbase + '_' + str(i) + '.lis' for i in range(1, finum)] aat.subsample_from_master_fld(fnmaster, fnconfig, {1:300, 2:300, 3:300, 4:300, 5:np.inf, 6:np.inf, 7:np.inf, 8:np.inf, 9:np.inf}, #8 reserved for flux nflux=5, nguides=30, utcobsdate=datetime.date(2015, 6, 20), fieldname=str(finum), listorem=listorem) #now scp to the aat machines to design config !scp $fnconfig $scptarget h = ob fnbase = 'aat_targets_jun2015/' + h.name finum = 1 fnmaster = fnbase + '_master.fld' fnconfig = fnbase + '_{0}.fld'.format(finum) listorem = [fnbase + '_' + str(i) + '.lis' for i in range(1, finum)] aat.subsample_from_master_fld(fnmaster, fnconfig, {1:300, 2:300, 3:300, 4:300, 5:np.inf, 6:np.inf, 7:np.inf, 8:np.inf, 9:np.inf}, #8 reserved for flux nflux=5, nguides=30, utcobsdate=datetime.date(2015, 6, 20), fieldname=str(finum), listorem=listorem) #now scp to the aat machines to design config !scp $fnconfig $scptarget h = anak fnbase = 'aat_targets_jun2015/' + h.name finum = 1 fnmaster = fnbase + '_master.fld' fnconfig = fnbase + '_{0}.fld'.format(finum) listorem = [fnbase + '_' + str(i) + '.lis' for i in range(1, finum)] aat.subsample_from_master_fld(fnmaster, fnconfig, {1:300, 2:300, 3:300, 4:300, 5:np.inf, 6:np.inf, 7:np.inf, 8:np.inf, 9:np.inf}, #8 reserved for flux nflux=5, nguides=30, utcobsdate=datetime.date(2015, 6, 20), fieldname=str(finum), listorem=listorem) #now scp to the aat machines to 
design config !scp $fnconfig $scptarget Explanation: Night pretty much a total wash - got 30 min in on OBrother_1, but probably mostly useless Night 2 (Jun 20) End of explanation scptarget = 'visitor3@aatlxa:configure/' h = dune fnbase = 'aat_targets_jun2015/' + h.name finum = 3 fnmaster = fnbase + '_master.fld' fnconfig = fnbase + '_{0}.fld'.format(finum) listorem = [fnbase + '_' + str(i) + '.lis' for i in range(1, finum)] !scp {scptarget}{h.name}*.lis aat_targets_jun2015/ aat.subsample_from_master_fld(fnmaster, fnconfig, {1:200, 2:200, 3:300, 4:300, 5:np.inf, 6:np.inf, 7:np.inf, 8:np.inf, 9:np.inf}, #8 reserved for flux nflux=5, nguides=30, utcobsdate=datetime.date(2015, 6, 21), fieldname=str(finum), listorem=listorem) !scp $fnconfig $scptarget h = gilg fnbase = 'aat_targets_jun2015/' + h.name finum = 1 fnmaster = fnbase + '_master.fld' fnconfig = fnbase + '_{0}.fld'.format(finum) listorem = [fnbase + '_' + str(i) + '.lis' for i in range(1, finum)] aat.subsample_from_master_fld(fnmaster, fnconfig, {1:200, 2:200, 3:300, 4:300, 5:np.inf, 6:np.inf, 7:np.inf, 8:np.inf, 9:np.inf}, #8 reserved for flux nflux=5, nguides=30, utcobsdate=datetime.date(2015, 6, 21), fieldname=str(finum), listorem=listorem) !scp $fnconfig $scptarget Explanation: Night 3 (Jun 21) End of explanation h = aen fnbase = 'aat_targets_jun2015/' + h.name finum = 2 fnmaster = fnbase + '_master.fld' fnconfig = fnbase + '_{0}.fld'.format(finum) listorem = ['aat_targets_jun2015/Aeneid_1.lis'] zlogfns = ['aat_targets_jun2015/Aeneid_1.zlog'] aat.subsample_from_master_fld(fnmaster, fnconfig, {1:200, 2:200, 3:300, 4:300, 5:np.inf, 6:np.inf, 7:np.inf, 8:np.inf, 9:np.inf}, #8 reserved for flux nflux=5, nguides=30, utcobsdate=datetime.date(2015, 6, 21), fieldname=str(finum), listorem=listorem, zlogfns=zlogfns) !scp $fnconfig $scptarget h = ob fnbase = 'aat_targets_jun2015/' + h.name finum = 2 fnmaster = fnbase + '_master.fld' fnconfig = fnbase + '_{0}.fld'.format(finum) listorem = ['aat_targets_jun2015/OBrother_1.lis'] zlogfns = ['aat_targets_jun2015/OBrother_1slow.zlog'] def zltabkeepfunc(entry): if entry['zqual']<3: return True elif entry['zqual']==3 and entry['z']<.02: return True return False aat.subsample_from_master_fld(fnmaster, fnconfig, {1:200, 2:200, 3:300, 4:300, 5:np.inf, 6:np.inf, 7:np.inf, 8:np.inf, 9:np.inf}, #8 reserved for flux nflux=5, nguides=30, utcobsdate=datetime.date(2015, 6, 21), fieldname=str(finum), listorem=listorem, zlogfns=zlogfns, zltabkeepfunc=zltabkeepfunc) !scp $fnconfig $scptarget h = anak fnbase = 'aat_targets_jun2015/' + h.name finum = 2 fnmaster = fnbase + '_master.fld' fnconfig = fnbase + '_{0}.fld'.format(finum) listorem = ['aat_targets_jun2015/AnaK_1.lis'] zlogfns = ['aat_targets_jun2015/AnaK_1.zlog'] def zltabkeepfunc(entry): if entry['zqual']<3: return True elif entry['zqual']==3 and entry['z']<.02: return True return False aat.subsample_from_master_fld(fnmaster, fnconfig, {1:200, 2:200, 3:300, 4:300, 5:np.inf, 6:np.inf, 7:np.inf, 8:np.inf, 9:np.inf}, #8 reserved for flux nflux=5, nguides=30, utcobsdate=datetime.date(2015, 6, 21), fieldname=str(finum), listorem=listorem, zlogfns=zlogfns, zltabkeepfunc=zltabkeepfunc) !scp $fnconfig $scptarget Explanation: Load autoz's from Marla's reduction to decide which ones to remove End of explanation tab, sc, info = aat.load_lis_file('aat_targets_jun2015/Dune_2.lis') print(info) targeting.sampled_imagelist(sc[tab['codes'] == 'F'], None, names=tab[tab['codes'] == 'F']['fibnums']) Explanation: Examining targets End of explanation h, h.ra, 
np.mean(targs['ra']), len(targcat), len(pris) ml_probs = Table.read('catalogs/machine_learning_june2015.csv.gz', format='csv') ml_probs plt.hist(ml_probs['PROBABILITY_CLASS_1'], bins=1000,log=True,histtype='step') plt.axvline(0.05, c='k') plt.tight_layout() this_mlcat = ml_probs[ml_probs['HOST_NSAID']==h.nsaid] len(this_mlcat) np.sum(np.in1d(targcat['objID'], this_mlcat['OBJID'])), len(targcat) np.sum(np.in1d(targcat['objID'][pris>2], this_mlcat['OBJID'])), np.sum(pris>2) np.sum(this_mlcat['PROBABILITY_CLASS_1']>0.05), len(this_mlcat) #now check that they're all at least in the base catalog rawcat = h.get_sdss_catalog() np.sum(np.in1d(this_mlcat['OBJID'], rawcat['objID'])), len(this_mlcat) this_mlcat['PROBABILITY_CLASS_1'][np.argsort(this_mlcat['PROBABILITY_CLASS_1'])[-50:]] ml_scs = SkyCoord(u.Quantity(ml_probs['RA'],u.deg),u.Quantity(ml_probs['DEC'],u.deg)) targeting.sampled_imagelist(ml_scs[ml_probs['PROBABILITY_CLASS_1']>.5], None, None); ml_scs = SkyCoord(u.Quantity(ml_probs['RA'],u.deg),u.Quantity(ml_probs['DEC'],u.deg)) targeting.sampled_imagelist(ml_scs[ml_probs['PROBABILITY_CLASS_1']>.1], None, None); Explanation: Miscellaneous experimentation with ML stuff requires that the master-list prep be in place End of explanation omlps = ml_probs[ml_probs['HOST_SAGA_NAME']=='Odyssey'] omlps['OBJID'].name = 'objID' ocat = ody.get_sdss_catalog() joined = table.join(ody.get_sdss_catalog(), omlps, keys='objID') dra = joined['ra']-joined['RA'] ddec = joined['dec']-joined['DEC'] plt.hist(np.hypot(dra*np.cos(ody.coords.dec), ddec)*3600) plt.tight_layout() Explanation: Checks that the ML targets match coordinates w/objids End of explanation # random sampling from the master list: # 1237662697578561899 16 31 48.49 +19 28 27.25 P 8 20.14 0 magcol=fiber2mag_r, model_r=16.62r match = omlps[omlps['objID']== 1237662697578561899 ] SkyCoord(match['RA'], match['DEC'], unit=u.deg).to_string('hmsdms') Explanation: Looks like all is good - they match to ~microarcsec, which is probably rounding End of explanation for h in hostsforrun: rawcat = fits.getdata(h.fnsdss) rawcat2 = Table.read(h.altfnsdss[0], format='csv', guess=False) stars = rawcat['phot_sg']==6 stars2 = rawcat2['type']==6 plt.figure() plt.hist(rawcat[stars]['r'],bins=100, range=(10,21.2), histtype='step', label='r_fitscat') plt.hist(rawcat2[stars2]['r'],bins=100, range=(10,21.2), histtype='step', label='modelMag_r') plt.hist(rawcat2[stars2]['psf_r'],bins=100, range=(10,21.2), histtype='step', label='psf_r') plt.title(h.name) plt.legend(loc='upper left') plt.savefig('/Users/erik/tmp/stars'+h.name+'.png') Explanation: Checks for bright stars in SDSS catalogs End of explanation h = aen zlfn = 'aat_targets_jun2015/Aeneid_1.zlog' lisfn = 'aat_targets_jun2015/Aeneid_1.lis' h = ob zlfn = 'aat_targets_jun2015/OBrother_1slow.zlog' lisfn = 'aat_targets_jun2015/OBrother_1.lis' h = anak zlfn = 'aat_targets_jun2015/AnaK_1.zlog' lisfn = 'aat_targets_jun2015/AnaK_1.lis' zltab = Table.read(zlfn, format='ascii', names=aat.zlogcolnames) zltab plt.hist(zltab[zltab['zqual']>2]['z'], bins=100, range=(-.01, h.zspec*2), histtype='step') plt.tight_layout() listab, lissc, lisinfo = aat.load_lis_file(lisfn) listab fibnums = [int(nm.split('_')[-1]) if nm != 'noid' else -1 for nm in zltab['name']] objids = [] matchpris = [] for fibnum in fibnums: if fibnum == -1: objids.append(-1) matchpris.append(-1) else: matchid = listab['ids'][fibnum==listab['fibnums']][0] if matchid.startswith('Flux') or matchid.startswith('DECALS'): objids.append(-2) matchpris.append(-2) elif matchid 
== h.name: objids.append(-3) matchpris.append(-3) else: objids.append(int(matchid)) matchpris.append(listab['pris'][fibnum==listab['fibnums']][0]) cat = h.get_sdss_catalog() sg = np.array([objid if objid < 0 else cat['phot_sg'][objid==cat['objID']][0] for objid in objids]) matchpris = np.array(matchpris) galwithlowz = (sg=='GALAXY')&(zltab['zqual']>2)&(np.abs(zltab['z'])<0.005) starwithhighz = (sg=='STAR')&(zltab['zqual']>2)&(np.abs(zltab['z'])>0.005) galwithhighz = (sg=='GALAXY')&(zltab['zqual']>2)&(np.abs(zltab['z'])>0.005) starwithlowz = (sg=='STAR')&(zltab['zqual']>2)&(np.abs(zltab['z'])<0.005) np.sum(galwithlowz), np.sum(starwithhighz), np.sum(galwithhighz), np.sum(starwithlowz) names = np.array(['{3}:{0}wpri{1}z={2:.2g}'.format(sgi, pi, zi, nmi.split('_')[-1]) for sgi, pi, zi, nmi in zip(sg, matchpris, zltab['z'], zltab['name'])]) #all things w/ stars targeting.sampled_imagelist(zltab[sg=='STAR'], None, None, names=names[sg=='STAR'], posttoimglist=0.1) #weird things targeting.sampled_imagelist(zltab[galwithlowz], None, None, names=names[galwithlowz], posttoimglist=0.1) targeting.sampled_imagelist(zltab[starwithhighz], None, None, names=names[starwithhighz], posttoimglist=0.1) #expected things targeting.sampled_imagelist(zltab[galwithhighz], None, None, names=names[galwithhighz], posttoimglist=0.1) targeting.sampled_imagelist(zltab[starwithlowz], None, None, names=names[starwithlowz], posttoimglist=0.1) h.zspec Explanation: Investigate targets that get priority 8 but turn out to be stars End of explanation for h in hostsforrun: print(h.name, h.coords.galactic.b.deg) Explanation: Inspection of results from above suggests that the Aeneid field has many more correctly-IDed stars, while for AnaK and OBrother, it's about even. End of explanation for h in hostsforrun: tab, sc, hdr = aat.load_fld('aat_targets_jun2015/{0}_master.fld'.format(h.name)) objidtopri = {} for entry in tab: try: objid = int(entry['name']) objidtopri[objid] = entry['pri'] except ValueError: pass cat = h.get_sdss_catalog() pris = np.array([objidtopri.get(i, -1) for i in cat['objID']]) probmsk = (pris==2)|(pris==7)|(pris==8) starmsk = cat['phot_sg']=='STAR' print(h.name, np.sum(probmsk&starmsk), 'stars of', np.sum(probmsk)) targeting.sampled_imagelist(cat[probmsk&starmsk], None, None, names=pris[probmsk&starmsk], posttoimglist=.1) Explanation: Ah - it's probably because Aeneid is much closer to the galactic plane than the others, so just more stars period Stars in master catalogs End of explanation
1,983
Given the following text description, write Python code to implement the functionality described below step by step Description: Nearest Neighbors author Step1: Load Wikipedia dataset We will be using the same dataset of Wikipedia pages that we used in the Machine Learning Foundations course (Course 1). Each element of the dataset consists of a link to the wikipedia article, the name of the person, and the text of the article (in lowercase). Step2: Extract word count vectors As we have seen in Course 1, we can extract word count vectors using a GraphLab utility function. We add this as a column in wiki. Step3: Find nearest neighbors Let's start by finding the nearest neighbors of the Barack Obama page using the word count vectors to represent the articles and Euclidean distance to measure distance. For this, again will we use a GraphLab Create implementation of nearest neighbor search. Step4: Let's look at the top 10 nearest neighbors by performing the following query Step6: All of the 10 people are politicians, but about half of them have rather tenuous connections with Obama, other than the fact that they are politicians. Francisco Barrio is a Mexican politician, and a former governor of Chihuahua. Walter Mondale and Don Bonker are Democrats who made their career in late 1970s. Wynn Normington Hugh-Jones is a former British diplomat and Liberal Party official. Andy Anstett is a former politician in Manitoba, Canada. Nearest neighbors with raw word counts got some things right, showing all politicians in the query result, but missed finer and important details. For instance, let's find out why Francisco Barrio was considered a close neighbor of Obama. To do this, let's look at the most frequently used words in each of Barack Obama and Francisco Barrio's pages Step7: Let's extract the list of most frequent words that appear in both Obama's and Barrio's documents. We've so far sorted all words from Obama and Barrio's articles by their word frequencies. We will now use a dataframe operation known as join. The join operation is very useful when it comes to playing around with data Step8: Since both tables contained the column named count, SFrame automatically renamed one of them to prevent confusion. Let's rename the columns to tell which one is for which. By inspection, we see that the first column (count) is for Obama and the second (count.1) for Barrio. Step9: Note. The join operation does not enforce any particular ordering on the shared column. So to obtain, say, the five common words that appear most often in Obama's article, sort the combined table by the Obama column. Don't forget ascending=False to display largest counts first. Step10: Quiz Question. Among the words that appear in both Barack Obama and Francisco Barrio, take the 5 that appear most frequently in Obama. How many of the articles in the Wikipedia dataset contain all of those 5 words? Answer.answer is 56066 Hint Step11: Checkpoint. Check your has_top_words function on two random articles Step12: Quiz Question. Measure the pairwise distance between the Wikipedia pages of Barack Obama, George W. Bush, and Joe Biden. Which of the three pairs has the smallest distance? Answer. Biden and Bush Hint Step13: Quiz Question. Collect all words that appear both in Barack Obama and George W. Bush pages. Out of those words, find the 10 words that show up most often in Obama's page. Answer. 3th the', 'in', 'and', 'of', 'to', 'his', 'act', 'he', 'a', 'as' obama_words.join() Step14: Note. 
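Before the GraphLab Create solution below, a small library-agnostic sketch may help fix the two ideas the assignment keeps contrasting: TF-IDF reweighting of raw word counts, and Euclidean versus cosine distance. The vocabulary, counts, and document frequencies here are invented for illustration, and log(N/df) is only one common IDF variant (GraphLab's implementation may differ in detail):

import numpy as np

# Hypothetical word-count vectors over a tiny shared vocabulary.
counts_a = np.array([10., 3., 0., 1.])
counts_b = np.array([ 8., 0., 2., 1.])
# Hypothetical document frequencies of those words in a 1000-document corpus.
doc_freq = np.array([950., 40., 60., 5.])
n_docs = 1000.

idf = np.log(n_docs / doc_freq)   # common words get small weights, rare words large ones
tfidf_a = counts_a * idf
tfidf_b = counts_b * idf

def euclidean(x, y):
    return np.sqrt(np.sum((x - y) ** 2))

def cosine_distance(x, y):
    return 1.0 - np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))

print("Euclidean distance: {:.3f}".format(euclidean(tfidf_a, tfidf_b)))
print("Cosine distance: {:.3f}".format(cosine_distance(tfidf_a, tfidf_b)))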
Even though common words are swamping out important subtle differences, commonalities in rarer political words still matter on the margin. This is why politicians are being listed in the query result instead of musicians, for example. In the next subsection, we will introduce a different metric that will place greater emphasis on those rarer words. TF-IDF to the rescue Much of the perceived commonalities between Obama and Barrio were due to occurrences of extremely frequent words, such as "the", "and", and "his". So nearest neighbors is recommending plausible results sometimes for the wrong reasons. To retrieve articles that are more relevant, we should focus more on rare words that don't happen in every article. TF-IDF (term frequency–inverse document frequency) is a feature representation that penalizes words that are too common. Let's use GraphLab Create's implementation of TF-IDF and repeat the search for the 10 nearest neighbors of Barack Obama Step15: Let's determine whether this list makes sense. * With a notable exception of Roland Grossenbacher, the other 8 are all American politicians who are contemporaries of Barack Obama. * Phil Schiliro, Jesse Lee, Samantha Power, and Eric Stern worked for Obama. Clearly, the results are more plausible with the use of TF-IDF. Let's take a look at the word vector for Obama and Schilirio's pages. Notice that TF-IDF representation assigns a weight to each word. This weight captures relative importance of that word in the document. Let us sort the words in Obama's article by their TF-IDF weights; we do the same for Schiliro's article as well. Step16: Using the join operation we learned earlier, try your hands at computing the common words shared by Obama's and Schiliro's articles. Sort the common words by their TF-IDF weights in Obama's document. Step17: The first 10 words should say Step18: Notice the huge difference in this calculation using TF-IDF scores instead of raw word counts. We've eliminated noise arising from extremely common words. Choosing metrics You may wonder why Joe Biden, Obama's running mate in two presidential elections, is missing from the query results of model_tf_idf. Let's find out why. First, compute the distance between TF-IDF features of Obama and Biden. Quiz Question. Compute the Euclidean distance between TF-IDF features of Obama and Biden. Hint Step19: The distance is larger than the distances we found for the 10 nearest neighbors, which we repeat here for readability Step20: But one may wonder, is Biden's article that different from Obama's, more so than, say, Schiliro's? It turns out that, when we compute nearest neighbors using the Euclidean distances, we unwittingly favor short articles over long ones. Let us compute the length of each Wikipedia document, and examine the document lengths for the 100 nearest neighbors to Obama's page. Step21: To see how these document lengths compare to the lengths of other documents in the corpus, let's make a histogram of the document lengths of Obama's 100 nearest neighbors and compare to a histogram of document lengths for all documents. Step22: Relative to the rest of Wikipedia, nearest neighbors of Obama are overwhemingly short, most of them being shorter than 2000 words. The bias towards short articles is not appropriate in this application as there is really no reason to favor short articles over long articles (they are all Wikipedia articles, after all). Many Wikipedia articles are 2500 words or more, and both Obama and Biden are over 2500 words long. 
Note Step23: From a glance at the above table, things look better. For example, we now see Joe Biden as Barack Obama's nearest neighbor! We also see Hillary Clinton on the list. This list looks even more plausible as nearest neighbors of Barack Obama. Let's make a plot to better visualize the effect of having used cosine distance in place of Euclidean on our TF-IDF vectors. Step24: Indeed, the 100 nearest neighbors using cosine distance provide a sampling across the range of document lengths, rather than just short articles like Euclidean distance provided. Moral of the story Step25: Let's look at the TF-IDF vectors for this tweet and for Barack Obama's Wikipedia entry, just to visually see their differences. Step26: Now, compute the cosine distance between the Barack Obama article and this tweet Step27: Let's compare this distance to the distance between the Barack Obama article and all of its Wikipedia 10 nearest neighbors
Python Code: import graphlab import matplotlib.pyplot as plt import numpy as np %matplotlib inline Explanation: Nearest Neighbors author: 申恒恒 When exploring a large set of documents -- such as Wikipedia, news articles, StackOverflow, etc. -- it can be useful to get a list of related material. To find relevant documents you typically * Decide on a notion of similarity * Find the documents that are most similar In the assignment you will * Gain intuition for different notions of similarity and practice finding similar documents. * Explore the tradeoffs with representing documents using raw word counts and TF-IDF * Explore the behavior of different distance metrics by looking at the Wikipedia pages most similar to President Obama’s page. Note to Amazon EC2 users: To conserve memory, make sure to stop all the other notebooks before running this notebook. Import necessary packages As usual we need to first import the Python packages that we will need. End of explanation wiki = graphlab.SFrame('people_wiki.gl') wiki wiki['URI'][1] Explanation: Load Wikipedia dataset We will be using the same dataset of Wikipedia pages that we used in the Machine Learning Foundations course (Course 1). Each element of the dataset consists of a link to the wikipedia article, the name of the person, and the text of the article (in lowercase). End of explanation wiki['word_count'] = graphlab.text_analytics.count_words(wiki['text']) wiki Explanation: Extract word count vectors As we have seen in Course 1, we can extract word count vectors using a GraphLab utility function. We add this as a column in wiki. End of explanation model = graphlab.nearest_neighbors.create(wiki, label='name', features=['word_count'], method='brute_force', distance='euclidean') Explanation: Find nearest neighbors Let's start by finding the nearest neighbors of the Barack Obama page using the word count vectors to represent the articles and Euclidean distance to measure distance. For this, again will we use a GraphLab Create implementation of nearest neighbor search. End of explanation model.query(wiki[wiki['name']=='Barack Obama'], label='name', k=10) Explanation: Let's look at the top 10 nearest neighbors by performing the following query: End of explanation wiki[wiki['name'] == 'Barack Obama'][['word_count']].stack('word_count', new_column_name=['word','count']).sort('count',ascending=False) def top_words(name): Get a table of the most frequent words in the given person's wikipedia page. row = wiki[wiki['name'] == name] word_count_table = row[['word_count']].stack('word_count', new_column_name=['word','count']) return word_count_table.sort('count', ascending=False) obama_words = top_words('Barack Obama') obama_words barrio_words = top_words('Francisco Barrio') barrio_words Explanation: All of the 10 people are politicians, but about half of them have rather tenuous connections with Obama, other than the fact that they are politicians. Francisco Barrio is a Mexican politician, and a former governor of Chihuahua. Walter Mondale and Don Bonker are Democrats who made their career in late 1970s. Wynn Normington Hugh-Jones is a former British diplomat and Liberal Party official. Andy Anstett is a former politician in Manitoba, Canada. Nearest neighbors with raw word counts got some things right, showing all politicians in the query result, but missed finer and important details. For instance, let's find out why Francisco Barrio was considered a close neighbor of Obama. 
To do this, let's look at the most frequently used words in each of Barack Obama and Francisco Barrio's pages: End of explanation combined_words = obama_words.join(barrio_words, on='word') combined_words Explanation: Let's extract the list of most frequent words that appear in both Obama's and Barrio's documents. We've so far sorted all words from Obama and Barrio's articles by their word frequencies. We will now use a dataframe operation known as join. The join operation is very useful when it comes to playing around with data: it lets you combine the content of two tables using a shared column (in this case, the word column). See the documentation for more details. For instance, running obama_words.join(barrio_words, on='word') will extract the rows from both tables that correspond to the common words. End of explanation combined_words = combined_words.rename({'count':'Obama', 'count.1':'Barrio'}) combined_words Explanation: Since both tables contained the column named count, SFrame automatically renamed one of them to prevent confusion. Let's rename the columns to tell which one is for which. By inspection, we see that the first column (count) is for Obama and the second (count.1) for Barrio. End of explanation combined_words.sort('Obama', ascending=False) Explanation: Note. The join operation does not enforce any particular ordering on the shared column. So to obtain, say, the five common words that appear most often in Obama's article, sort the combined table by the Obama column. Don't forget ascending=False to display largest counts first. End of explanation obama_words = top_words('Barack Obama') common_words = list(obama_words[:5]['word']) type(common_words) #mmon_words set(common_words) common_words = list(top_words('Barack Obama')[:5]['word']) # Barack Obama 5 largest words print common_words def has_top_words(word_count_vector): # extract the keys of word_count_vector and convert it to a set unique_words = set(word_count_vector.keys()) #using keys() method and using set() method convert list to set # return True if common_words is a subset of unique_words # return False otherwise return set(common_words).issubset(unique_words) # YOUR CODE HERE wiki['has_top_words'] = wiki['word_count'].apply(has_top_words) # use has_top_words column to answer the quiz question print wiki['has_top_words'] sum(wiki['has_top_words']) Explanation: Quiz Question. Among the words that appear in both Barack Obama and Francisco Barrio, take the 5 that appear most frequently in Obama. How many of the articles in the Wikipedia dataset contain all of those 5 words? Answer.answer is 56066 Hint: * Refer to the previous paragraph for finding the words that appear in both articles. Sort the common words by their frequencies in Obama's article and take the largest five. * Each word count vector is a Python dictionary. For each word count vector in SFrame, you'd have to check if the set of the 5 common words is a subset of the keys of the word count vector. Complete the function has_top_words to accomplish the task. - Convert the list of top 5 words into set using the syntax set(common_words) where common_words is a Python list. See this link if you're curious about Python sets. - Extract the list of keys of the word count dictionary by calling the keys() method. - Convert the list of keys into a set as well. - Use issubset() method to check if all 5 words are among the keys. * Now apply the has_top_words function on every row of the SFrame. 
* Compute the sum of the result column to obtain the number of articles containing all the 5 top words. End of explanation print 'Output from your function:', has_top_words(wiki[32]['word_count']) print 'Correct output: True' print 'Also check the length of unique_words. It should be 167' print 'Output from your function:', has_top_words(wiki[33]['word_count']) print 'Correct output: False' print 'Also check the length of unique_words. It should be 188' type(wiki[33]) Explanation: Checkpoint. Check your has_top_words function on two random articles: End of explanation a = graphlab.SFrame(wiki[wiki['name']=='Barack Obama']['word_count'])[0]['X1'] b = graphlab.SFrame(wiki[wiki['name']=='George W. Bush']['word_count'])[0]['X1'] c = graphlab.SFrame(wiki[wiki['name']=='Joe Biden']['word_count'])[0]['X1'] graphlab.toolkits.distances.euclidean(a,b) # Obama and Bush graphlab.toolkits.distances.euclidean(a,c) # Obama and Joe graphlab.toolkits.distances.euclidean(b,c) # Bush and Joe+++++++++++++ Explanation: Quiz Question. Measure the pairwise distance between the Wikipedia pages of Barack Obama, George W. Bush, and Joe Biden. Which of the three pairs has the smallest distance? Answer. Biden and Bush Hint: To compute the Euclidean distance between two dictionaries, use graphlab.toolkits.distances.euclidean. Refer to this link for usage. End of explanation bush_words = top_words('George W. Bush') obama_words.join(bush_words, on='word') \ .rename({'count' : 'Obama', 'count.1' : 'Bush'}) \ .sort('Obama', ascending = False) obama_words.join(bush_words, on='word') \ .rename({'count' : 'Obama', 'count.1' : 'Bush'}) \ .sort('Obama', ascending = False)['word'][:10] Explanation: Quiz Question. Collect all words that appear both in Barack Obama and George W. Bush pages. Out of those words, find the 10 words that show up most often in Obama's page. Answer. 3th the', 'in', 'and', 'of', 'to', 'his', 'act', 'he', 'a', 'as' obama_words.join() End of explanation wiki['tf_idf'] = graphlab.text_analytics.tf_idf(wiki['word_count']) model_tf_idf = graphlab.nearest_neighbors.create(wiki, label='name', features=['tf_idf'], method='brute_force', distance='euclidean') model_tf_idf.query(wiki[wiki['name'] == 'Barack Obama'], label='name', k=10) Explanation: Note. Even though common words are swamping out important subtle differences, commonalities in rarer political words still matter on the margin. This is why politicians are being listed in the query result instead of musicians, for example. In the next subsection, we will introduce a different metric that will place greater emphasis on those rarer words. TF-IDF to the rescue Much of the perceived commonalities between Obama and Barrio were due to occurrences of extremely frequent words, such as "the", "and", and "his". So nearest neighbors is recommending plausible results sometimes for the wrong reasons. To retrieve articles that are more relevant, we should focus more on rare words that don't happen in every article. TF-IDF (term frequency–inverse document frequency) is a feature representation that penalizes words that are too common. 
Let's use GraphLab Create's implementation of TF-IDF and repeat the search for the 10 nearest neighbors of Barack Obama: End of explanation def top_words_tf_idf(name): row = wiki[wiki['name'] == name] word_count_table = row[['tf_idf']].stack('tf_idf', new_column_name=['word','weight']) return word_count_table.sort('weight', ascending=False) obama_tf_idf = top_words_tf_idf('Barack Obama') obama_tf_idf schiliro_tf_idf = top_words_tf_idf('Phil Schiliro') schiliro_tf_idf Explanation: Let's determine whether this list makes sense. * With a notable exception of Roland Grossenbacher, the other 8 are all American politicians who are contemporaries of Barack Obama. * Phil Schiliro, Jesse Lee, Samantha Power, and Eric Stern worked for Obama. Clearly, the results are more plausible with the use of TF-IDF. Let's take a look at the word vector for Obama and Schilirio's pages. Notice that TF-IDF representation assigns a weight to each word. This weight captures relative importance of that word in the document. Let us sort the words in Obama's article by their TF-IDF weights; we do the same for Schiliro's article as well. End of explanation combination2_words = obama_tf_idf.join(schiliro_tf_idf,on='word').sort('weight',ascending=False) combination2_words combination2_words = combination2_words.rename({'weight':'Obama', 'weight.1':'Schiliro'}) combination2_words combination2_words = combination2_words.sort('Obama', ascending=False) combination2_words Explanation: Using the join operation we learned earlier, try your hands at computing the common words shared by Obama's and Schiliro's articles. Sort the common words by their TF-IDF weights in Obama's document. End of explanation common_words = set(list(combination2_words[:5]['word'])) common_words # common_words = common_words def has_top_words(word_count_vector): # extract the keys of word_count_vector and convert it to a set unique_words = set(word_count_vector.keys()) # return True if common_words is a subset of unique_words # return False otherwise return common_words.issubset(unique_words) # YOUR CODE HERE wiki['has_top_words'] = wiki['word_count'].apply(has_top_words) # use has_top_words column to answer the quiz question print wiki['has_top_words'] # YOUR CODE HERE sum(wiki['has_top_words']) Explanation: The first 10 words should say: Obama, law, democratic, Senate, presidential, president, policy, states, office, 2011. Quiz Question. Among the words that appear in both Barack Obama and Phil Schiliro, take the 5 that have largest weights in Obama. How many of the articles in the Wikipedia dataset contain all of those 5 words? Answer.14 End of explanation obama = wiki[wiki['name'] == 'Barack Obama']['tf_idf'][0] biden = wiki[wiki['name'] == 'Joe Biden']['tf_idf'][0] graphlab.toolkits.distances.euclidean(obama, biden) Explanation: Notice the huge difference in this calculation using TF-IDF scores instead of raw word counts. We've eliminated noise arising from extremely common words. Choosing metrics You may wonder why Joe Biden, Obama's running mate in two presidential elections, is missing from the query results of model_tf_idf. Let's find out why. First, compute the distance between TF-IDF features of Obama and Biden. Quiz Question. Compute the Euclidean distance between TF-IDF features of Obama and Biden. Hint: When using Boolean filter in SFrame/SArray, take the index 0 to access the first match. Answer. 
123.297 End of explanation model_tf_idf.query(wiki[wiki['name'] == 'Barack Obama'], label='name', k=10) Explanation: The distance is larger than the distances we found for the 10 nearest neighbors, which we repeat here for readability: End of explanation def compute_length(row): return len(row['text']) wiki['length'] = wiki.apply(compute_length) nearest_neighbors_euclidean = model_tf_idf.query(wiki[wiki['name'] == 'Barack Obama'], label='name', k=100) nearest_neighbors_euclidean = nearest_neighbors_euclidean.join(wiki[['name', 'length']], on={'reference_label':'name'}) nearest_neighbors_euclidean.sort('rank') Explanation: But one may wonder, is Biden's article that different from Obama's, more so than, say, Schiliro's? It turns out that, when we compute nearest neighbors using the Euclidean distances, we unwittingly favor short articles over long ones. Let us compute the length of each Wikipedia document, and examine the document lengths for the 100 nearest neighbors to Obama's page. End of explanation plt.figure(figsize=(10.5,4.5)) plt.hist(wiki['length'], 50, color='k', edgecolor='None', histtype='stepfilled', normed=True, label='Entire Wikipedia', zorder=3, alpha=0.8) plt.hist(nearest_neighbors_euclidean['length'], 50, color='r', edgecolor='None', histtype='stepfilled', normed=True, label='100 NNs of Obama (Euclidean)', zorder=10, alpha=0.8) plt.axvline(x=wiki['length'][wiki['name'] == 'Barack Obama'][0], color='k', linestyle='--', linewidth=4, label='Length of Barack Obama', zorder=2) plt.axvline(x=wiki['length'][wiki['name'] == 'Joe Biden'][0], color='g', linestyle='--', linewidth=4, label='Length of Joe Biden', zorder=1) plt.axis([1000, 5500, 0, 0.004]) plt.legend(loc='best', prop={'size':15}) plt.title('Distribution of document length') plt.xlabel('# of words') plt.ylabel('Percentage') plt.rcParams.update({'font.size':16}) plt.tight_layout() Explanation: To see how these document lengths compare to the lengths of other documents in the corpus, let's make a histogram of the document lengths of Obama's 100 nearest neighbors and compare to a histogram of document lengths for all documents. End of explanation model2_tf_idf = graphlab.nearest_neighbors.create(wiki, label='name', features=['tf_idf'], method='brute_force', distance='cosine') nearest_neighbors_cosine = model2_tf_idf.query(wiki[wiki['name'] == 'Barack Obama'], label='name', k=100) nearest_neighbors_cosine = nearest_neighbors_cosine.join(wiki[['name', 'length']], on={'reference_label':'name'}) nearest_neighbors_cosine.sort('rank') Explanation: Relative to the rest of Wikipedia, nearest neighbors of Obama are overwhemingly short, most of them being shorter than 2000 words. The bias towards short articles is not appropriate in this application as there is really no reason to favor short articles over long articles (they are all Wikipedia articles, after all). Many Wikipedia articles are 2500 words or more, and both Obama and Biden are over 2500 words long. Note: Both word-count features and TF-IDF are proportional to word frequencies. While TF-IDF penalizes very common words, longer articles tend to have longer TF-IDF vectors simply because they have more words in them. To remove this bias, we turn to cosine distances: $$ d(\mathbf{x},\mathbf{y}) = 1 - \frac{\mathbf{x}^T\mathbf{y}}{\|\mathbf{x}\| \|\mathbf{y}\|} $$ Cosine distances let us compare word distributions of two articles of varying lengths. Let us train a new nearest neighbor model, this time with cosine distances. 
We then repeat the search for Obama's 100 nearest neighbors. End of explanation plt.figure(figsize=(10.5,4.5)) plt.figure(figsize=(10.5,4.5)) plt.hist(wiki['length'], 50, color='k', edgecolor='None', histtype='stepfilled', normed=True, label='Entire Wikipedia', zorder=3, alpha=0.8) plt.hist(nearest_neighbors_euclidean['length'], 50, color='r', edgecolor='None', histtype='stepfilled', normed=True, label='100 NNs of Obama (Euclidean)', zorder=10, alpha=0.8) plt.hist(nearest_neighbors_cosine['length'], 50, color='b', edgecolor='None', histtype='stepfilled', normed=True, label='100 NNs of Obama (cosine)', zorder=11, alpha=0.8) plt.axvline(x=wiki['length'][wiki['name'] == 'Barack Obama'][0], color='k', linestyle='--', linewidth=4, label='Length of Barack Obama', zorder=2) plt.axvline(x=wiki['length'][wiki['name'] == 'Joe Biden'][0], color='g', linestyle='--', linewidth=4, label='Length of Joe Biden', zorder=1) plt.axis([1000, 5500, 0, 0.004]) plt.legend(loc='best', prop={'size':15}) plt.title('Distribution of document length') plt.xlabel('# of words') plt.ylabel('Percentage') plt.rcParams.update({'font.size': 16}) plt.tight_layout() Explanation: From a glance at the above table, things look better. For example, we now see Joe Biden as Barack Obama's nearest neighbor! We also see Hillary Clinton on the list. This list looks even more plausible as nearest neighbors of Barack Obama. Let's make a plot to better visualize the effect of having used cosine distance in place of Euclidean on our TF-IDF vectors. End of explanation sf = graphlab.SFrame({'text': ['democratic governments control law in response to popular act']}) sf['word_count'] = graphlab.text_analytics.count_words(sf['text']) encoder = graphlab.feature_engineering.TFIDF(features=['word_count'], output_column_prefix='tf_idf') encoder.fit(wiki) sf = encoder.transform(sf) sf Explanation: Indeed, the 100 nearest neighbors using cosine distance provide a sampling across the range of document lengths, rather than just short articles like Euclidean distance provided. Moral of the story: In deciding the features and distance measures, check if they produce results that make sense for your particular application. Problem with cosine distances: tweets vs. long articles Happily ever after? Not so fast. Cosine distances ignore all document lengths, which may be great in certain situations but not in others. For instance, consider the following (admittedly contrived) example. +--------------------------------------------------------+ | +--------+ | | One that shall not be named | Follow | | | @username +--------+ | | | | Democratic governments control law in response to | | popular act. | | | | 8:05 AM - 16 May 2016 | | | | Reply Retweet (1,332) Like (300) | | | +--------------------------------------------------------+ How similar is this tweet to Barack Obama's Wikipedia article? Let's transform the tweet into TF-IDF features, using an encoder fit to the Wikipedia dataset. (That is, let's treat this tweet as an article in our Wikipedia dataset and see what happens.) End of explanation tweet_tf_idf = sf[0]['tf_idf.word_count'] tweet_tf_idf obama = wiki[wiki['name'] == 'Barack Obama'] obama Explanation: Let's look at the TF-IDF vectors for this tweet and for Barack Obama's Wikipedia entry, just to visually see their differences. 
End of explanation obama_tf_idf = obama[0]['tf_idf'] graphlab.toolkits.distances.cosine(obama_tf_idf, tweet_tf_idf) Explanation: Now, compute the cosine distance between the Barack Obama article and this tweet: End of explanation model2_tf_idf.query(obama, label='name', k=10) Explanation: Let's compare this distance to the distance between the Barack Obama article and all of its Wikipedia 10 nearest neighbors: End of explanation
1,984
Given the following text description, write Python code to implement the functionality described below step by step Description: CMS Autoencoder Development Joeri R. Hermans Departement of Data Science & Knowledge Engineering Maastricht University, The Netherlands In this notebook we mainly deal with the development of an autoencoder of the CMS detector using the feature matrices that have been constructed in the previous notebooks. Cluster Configuration In the following sections, we set up the cluster properties. Step1: Utility Functions Some utility functions that will be processed used throughout this notebook. Step2: Data Loading and Preprocessing Choose one of the following 3 datasets. Step3: Dataset Statistics Before we start the training process, let us first gather some prior statistics on our dataset, i.e., what is the distribution of track types within our dataset. Step4: From the histogram, we can see that some particular event types are definitly under-represented in the training set. To have a uniform distribution of track types, we need to sample the dataset in a particular way. However, since we are doing this in a multi-class classification context (predicting multiple classes at the same time), this is not very trival. A first, but naive approach would be to sample all the RSGravitonToGaGa events, and plot the distribution, to check what exactly is happening. However, in this case, all you collisions would contain RSGravitionToGaGa tracks. Step5: From this we observe that the follwing events do not occur in the presence of RSGravitonToGaGa events Step6: An alternative idea might be to select the samples which minimize the inbalance in the training set. A first approach would be to obtain an equal number of "different" vector types. Since we have 11 types, there are $2^{11} = 2048$ possible combinations of vectors. Let's first see how many we actually have. Step7: Which vectors? Step8: It might be interesting to know what is the distribution of these vectors are. Step9: From this we know that collisions represented by [0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0] and [1.0, 1.0, 1.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 1.0] do not happen very often in the datasets that have been handed to us. However, what we can see from this is that the training set it is actually rather balanced w.r.t. the vectors which occur in the training sets (with some exceptions). Model development TODO Distributing training on full dataset. Before constructing the autoencoder, we first need to obtain the dimensionality of our problem, i.e., the dimensionality of the feature matrices.
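As a stand-alone preview of the modelling step sketched in the description, here is a minimal dense Keras autoencoder for flattened feature matrices. The input dimensionality and layer sizes are placeholders chosen for illustration, not values derived from the CMS dataset; as noted above, the real dimensionality has to be read from the feature matrices themselves.

from keras.models import Sequential
from keras.layers.core import Dense

# Placeholder dimensionality of a flattened feature matrix (assumption only).
input_dim = 741
encoding_dim = 64

autoencoder = Sequential()
autoencoder.add(Dense(256, activation='relu', input_shape=(input_dim,)))
autoencoder.add(Dense(encoding_dim, activation='relu'))   # bottleneck / compressed representation
autoencoder.add(Dense(256, activation='relu'))
autoencoder.add(Dense(input_dim, activation='sigmoid'))   # reconstruct the (normalized) input
autoencoder.compile(optimizer='adam', loss='mse')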
Python Code: %matplotlib inline import numpy as np import os from pyspark import SparkContext from pyspark import SparkConf from pyspark.sql.types import * from pyspark.storagelevel import StorageLevel import matplotlib.mlab as mlab import matplotlib.pyplot as plt from distkeras.trainers import * from distkeras.utils import * from keras.optimizers import * from keras.models import Sequential from keras.layers.core import * from keras.layers.convolutional import * from keras.layers import * # Use the DataBricks AVRO reader. os.environ['PYSPARK_SUBMIT_ARGS'] = '--packages com.databricks:spark-avro_2.11:3.2.0 pyspark-shell' # Modify these variables according to your needs. application_name = "CMS Autoencoder Development" using_spark_2 = False local = False if local: # Tell master to use local resources. master = "local[*]" num_processes = 3 num_executors = 1 else: # Tell master to use YARN. master = "yarn-client" num_executors = 20 num_processes = 1 # This variable is derived from the number of cores and executors, # and will be used to assign the number of model trainers. num_workers = num_executors * num_processes print("Number of desired executors: " + `num_executors`) print("Number of desired processes / executor: " + `num_processes`) print("Total number of workers: " + `num_workers`) # Do not change anything here. conf = SparkConf() conf.set("spark.app.name", application_name) conf.set("spark.master", master) conf.set("spark.executor.cores", `num_processes`) conf.set("spark.executor.instances", `num_executors`) conf.set("spark.executor.memory", "5g") conf.set("spark.locality.wait", "0") conf.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer") conf.set("spark.kryoserializer.buffer.max", "2000") conf.set("spark.executor.heartbeatInterval", "6000s") conf.set("spark.network.timeout", "10000000s") conf.set("spark.shuffle.spill", "true") conf.set("spark.driver.memory", "10g") conf.set("spark.driver.maxResultSize", "10g") # Check if the user is running Spark 2.0 + if using_spark_2: sc = SparkSession.builder.config(conf=conf) \ .appName(application_name) \ .getOrCreate() else: # Create the Spark context. sc = SparkContext(conf=conf) # Add the missing imports from pyspark import SQLContext sqlContext = SQLContext(sc) # Check if we are using Spark 2.0 if using_spark_2: reader = sc else: reader = sqlContext Explanation: CMS Autoencoder Development Joeri R. Hermans Departement of Data Science & Knowledge Engineering Maastricht University, The Netherlands In this notebook we mainly deal with the development of an autoencoder of the CMS detector using the feature matrices that have been constructed in the previous notebooks. Cluster Configuration In the following sections, we set up the cluster properties. 
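For readers on Spark 2.x, the same cluster configuration can also be expressed directly through the SparkSession builder instead of a raw SparkConf. The sketch below is only an alternative spelling of the setup above, reusing the same resource numbers; the master URL is an assumption and should be adjusted to your own cluster.

from pyspark.sql import SparkSession

spark = SparkSession.builder \
    .appName("CMS Autoencoder Development") \
    .master("yarn") \
    .config("spark.executor.instances", "20") \
    .config("spark.executor.cores", "1") \
    .config("spark.executor.memory", "5g") \
    .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer") \
    .getOrCreate()
sc = spark.sparkContext  # underlying SparkContext, if RDD-level access is still needed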
End of explanation def plot_matrix(m): plt.figure(figsize=(10,10), dpi=250) plt.imshow(m, cmap='plasma', interpolation='nearest') plt.show() def conv_block(feat_maps_out, prev): prev = BatchNormalization(axis=1, mode=2)(prev) # Specifying the axis and mode allows for later merging prev = Activation('relu')(prev) prev = Convolution2D(feat_maps_out, 3, 3, border_mode='same')(prev) prev = BatchNormalization(axis=1, mode=2)(prev) # Specifying the axis and mode allows for later merging prev = Activation('relu')(prev) prev = Convolution2D(feat_maps_out, 3, 3, border_mode='same')(prev) return prev def skip_block(feat_maps_in, feat_maps_out, prev): if feat_maps_in != feat_maps_out: # This adds in a 1x1 convolution on shortcuts that map between an uneven amount of channels prev = Convolution2D(feat_maps_out, 1, 1, border_mode='same')(prev) return prev def residual(feat_maps_in, feat_maps_out, prev_layer): skip = skip_block(feat_maps_in, feat_maps_out, prev_layer) conv = conv_block(feat_maps_out, prev_layer) return merge([skip, conv], mode='sum') # the residual connection def plot_types_distribution(vector): fig, ax = plt.subplots() width = 5 indexes = np.arange(11) ax.bar(indexes, vector, 1, color='b') ax.set_xlabel("Track types") ax.set_ylabel("Probability of occurance") ax.set_title("Distribution of track types in original dataset") ax.set_xticks(indexes) ax.set_xticklabels(indexes) fig.show() def normalize_distribution(distribution): return np.divide(distribution, distribution.sum()).tolist() Explanation: Utility Functions Some utility functions that will be processed used throughout this notebook. End of explanation # RAW FEATURE MATRICES dataset = reader.read.format("com.databricks.spark.avro").load("data/collisions_feature_matrices.avro") # NORMALIZED FEATURE MATRICES dataset = reader.read.format("com.databricks.spark.avro").load("data/collisions_feature_matrices_normalized.avro") # BATCH-NORMALIZED FEATURE MATRICES dataset = reader.read.format("com.databricks.spark.avro").load("data/collisions_feature_matrices_batch_normalized.avro") # Read the collisions dataset to obtain meta-information about the tracks. collisions = reader.read.format("com.databricks.spark.avro").load("data/collisions.avro") def extract_track_types(iterator): for row in iterator: tracks = row['tracks'] for t in tracks: yield t['track_type'] # Obtain the files from which we extracted the collisions. files = collisions.mapPartitions(extract_track_types).distinct().collect() files # Construct the types from the files by removing the RelVal prefix and removing the suffix after the first _. mapping = {} types = [] index = 0 for f in files: if f not in mapping: type = f[6:f.find('_')] if type not in types: types.append(type) index += 1 mapping[f] = types.index(type) num_types = len(types) mapping def construct_output_vector(row): collision_id = row['id'] tracks = row['tracks'] files = [] for t in tracks: file = t['track_type'] if file not in files: files.append(file) # Construct the output vector. y = np.zeros(num_types) for f in files: y[mapping[f]] = 1.0 return Row(**{'id': collision_id, 'y': y.tolist()}) # From this, construct a feature vector which represents the track types for every collision-id. output_vectors = collisions.map(construct_output_vector).toDF() def flatten(row): # Obtain the collision-id. collision_id = row['collision_id'] # Obtain the feature matrices, and flatten them. 
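# (Flattening turns each 2-D front/side matrix into one fixed-length list per collision, which is the shape the dense network trained later on expects.)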
m_f = np.asarray(row['front']).flatten() m_s = np.asarray(row['side']).flatten() return Row(**{'collision_id': collision_id, 'front': m_f.tolist(), 'side': m_s.tolist()}) training_set = dataset.map(flatten).toDF() training_set = training_set.join(output_vectors, training_set.collision_id == output_vectors.id) training_set = training_set.select("collision_id", "front", "side", "y") training_set.persist(StorageLevel.MEMORY_AND_DISK) training_set.printSchema() print("Number of training samples: " + str(training_set.count())) Explanation: Data Loading and Preprocessing Choose one of the following 3 datasets. End of explanation collision_distribution = np.asarray(training_set.select("y").rdd.reduce(lambda a, b: np.add(a, b).tolist())[0]) normalized_prior_distribution = np.divide(collision_distribution, collision_distribution.sum()).tolist() plot_types_distribution(normalized_prior_distribution) print("Number of occurences in " + str(types[3]) + ": " + str(collision_distribution[3])) Explanation: Dataset Statistics Before we start the training process, let us first gather some prior statistics on our dataset, i.e., what is the distribution of track types within our dataset. End of explanation def fetch_data(iterator): for row in iterator: if row['y'][3] == 1: yield row collision_distribution = training_set.select("y").mapPartitions(fetch_data) collision_distribution = np.asarray(collision_distribution.reduce(lambda a, b: np.add(a, b).tolist())[0]) normalized_distribution = normalize_distribution(collision_distribution) plot_types_distribution(normalized_distribution) Explanation: From the histogram, we can see that some particular event types are definitly under-represented in the training set. To have a uniform distribution of track types, we need to sample the dataset in a particular way. However, since we are doing this in a multi-class classification context (predicting multiple classes at the same time), this is not very trival. A first, but naive approach would be to sample all the RSGravitonToGaGa events, and plot the distribution, to check what exactly is happening. However, in this case, all you collisions would contain RSGravitionToGaGa tracks. End of explanation print(types[0]) print(types[5]) print(types[9]) Explanation: From this we observe that the follwing events do not occur in the presence of RSGravitonToGaGa events: End of explanation print("Number of distinct collision vectors in training set: " + str(training_set.select("y").distinct().count())) Explanation: An alternative idea might be to select the samples which minimize the inbalance in the training set. A first approach would be to obtain an equal number of "different" vector types. Since we have 11 types, there are $2^{11} = 2048$ possible combinations of vectors. Let's first see how many we actually have. End of explanation training_set.select("y").distinct().collect() Explanation: Which vectors? End of explanation results = training_set.select("y").groupBy("y").count().collect() results Explanation: It might be interesting to know what is the distribution of these vectors are. 
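To make those raw 0/1 vectors easier to interpret, a small helper can translate a label vector back into the track-type names it encodes. This is a minimal sketch assuming the types list built earlier in the notebook; the names in the usage line are placeholders so the snippet runs on its own.

def decode_vector(y, types):
    # Keep the type name for every position whose flag is set.
    return [types[i] for i, flag in enumerate(y) if flag == 1.0]

# Placeholder type names, for illustration only; in the notebook you would pass the real types list.
print(decode_vector([0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0],
                    ['type%d' % i for i in range(11)]))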
End of explanation validation = training_set.sample(False, 0.001) validation = validation.collect() validation_x = [np.asarray(x['front']) for x in validation] validation_y = [np.asarray(x['y']) for x in validation] validation_x = np.asarray(validation_x) validation_y = np.asarray(validation_y) training = training_set.sample(False, 0.008) training = training.collect() training_x = [np.asarray(x['front']) for x in training] training_y = [np.asarray(x['y']) for x in training] training_x = np.asarray(training_x) training_y = np.asarray(training_y) input_size = training_x[0].shape[0] output_size = input_size # Simple MLP for development purposes. mlp = Sequential() mlp.add(Dense(5000, input_shape=(input_size,))) mlp.add(Activation('relu')) mlp.add(Dense(400)) mlp.add(Activation('relu')) mlp.add(Dense(num_types)) mlp.add(Activation('sigmoid')) mlp.summary() loss = 'mse' optimizer = 'adam' mlp.compile(loss=loss, optimizer=optimizer, metrics=['mae', 'acc']) # Note, this model has been trained ~50 epochs before reaching this state. # Of course, this is only trained on a small subset for development purposes. mlp.fit(training_x, training_y, nb_epoch=10, verbose=1, batch_size=32) Y = mlp.predict(validation_x) index = 3 print("Expected - Predicted:") for i in range(0, len(Y[index])): print(format(validation_y[index][i]) + " - " + str(Y[index][i])) Explanation: From this we know that collisions represented by [0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0] and [1.0, 1.0, 1.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0, 1.0] do not happen very often in the datasets that have been handed to us. However, what we can see from this is that the training set it is actually rather balanced w.r.t. the vectors which occur in the training sets (with some exceptions). Model development TODO Distributing training on full dataset. Before constructing the autoencoder, we first need to obtain the dimensionality of our problem, i.e., the dimensionality of the feature matrices. End of explanation
1,985
Given the following text description, write Python code to implement the functionality described below step by step Description: Webscraping with Beautiful Soup Intro In this tutorial, we'll be scraping information on the state senators of Illinois, available here, as well as the list of bills each senator has sponsored (e.g., here. The Tools Requests Beautiful Soup Step1: Part 1 Step2: 1.2 Soup it Now we use the BeautifulSoup function to parse the reponse into an HTML tree. This returns an object (called a soup object) which contains all of the HTML in the original document. Step3: 1.3 Find Elements BeautifulSoup has a number of functions to find things on a page. Like other webscraping tools, Beautiful Soup lets you find elements by their Step4: NB Step5: That's a lot! Many elements on a page will have the same html tag. For instance, if you search for everything with the a tag, you're likely to get a lot of stuff, much of which you don't want. What if we wanted to search for HTML tags ONLY with certain attributes, like particular CSS classes? We can do this by adding an additional argument to the find_all In the example below, we are finding all the a tags, and then filtering those with class = "sidemenu". Step6: Oftentimes a more efficient way to search and find things on a website is by CSS selector. For this we have to use a different method, select(). Just pass a string into the .select() to get all elements with that string as a valid CSS selector. In the example above, we can use "a.sidemenu" as a CSS selector, which returns all a tags with class sidemenu. Step7: Challenge 1 Find all the &lt;a&gt; elements in class mainmenu Step8: 1.4 Get Attributes and Text of Elements Once we identify elements, we want the access information in that element. Oftentimes this means two things Step9: It's a tag! Which means it has a text member Step10: Sometimes we want the value of certain attributes. This is particularly relevant for a tags, or links, where the href attribute tells us where the link goes. You can access a tag’s attributes by treating the tag like a dictionary Step11: Challenge 2 Find all the href attributes (url) from the mainmenu. Step12: Part 2 Believe it or not, that's all you need to scrape a website. Let's apply these skills to scrape http Step13: 2.2 Find the right elements and text. Now let's try to get a list of rows in that table. Remember that rows are identified by the tr tag. Step14: But remember, find_all gets all the elements with the tr tag. We can use smart CSS selectors to get only the rows we want. Step15: We can use the select method on anything. Let's say we want to find everything with the CSS selector td.detail in an item of the list we created above. Step16: Most of the time, we're interested in the actual text of a website, not its tags. Remember, to get the text of an HTML element, use the text member. Step17: Now we can combine the beautifulsoup tools with our basic python skills to scrape an entire web page. Step18: 2.3 Loop it all together Let's use a for loop to get 'em all! Step19: Challege 3 Step20: Challenge 4 Step21: Part 3 Step22: 3.2 Get all the bills Finally, create a dictionary bills_dict which maps a district number (the key) onto a list_of_bills (the value) eminating from that district. You can do this by looping over all of the senate members in members_dict and calling get_bills() for each of their associated bill URLs. NOTE
Python Code: # import required modules import requests from bs4 import BeautifulSoup from datetime import datetime import time import re import sys Explanation: Webscraping with Beautiful Soup Intro In this tutorial, we'll be scraping information on the state senators of Illinois, available here, as well as the list of bills each senator has sponsored (e.g., here. The Tools Requests Beautiful Soup End of explanation # make a GET request req = requests.get('http://www.ilga.gov/senate/default.asp') # read the content of the server’s response src = req.text Explanation: Part 1: Using Beautiful Soup 1.1 Make a Get Request and Read in HTML We use requests library to: 1. make a GET request to the page 2. read in the html of the page End of explanation # parse the response into an HTML tree soup = BeautifulSoup(src, 'lxml') # take a look print(soup.prettify()[:1000]) Explanation: 1.2 Soup it Now we use the BeautifulSoup function to parse the reponse into an HTML tree. This returns an object (called a soup object) which contains all of the HTML in the original document. End of explanation # find all elements in a certain tag # these two lines of code are equivilant # soup.find_all("a") Explanation: 1.3 Find Elements BeautifulSoup has a number of functions to find things on a page. Like other webscraping tools, Beautiful Soup lets you find elements by their: HTML tags HTML Attributes CSS Selectors Let's search first for HTML tags. The function find_all searches the soup tree to find all the elements with an a particular HTML tag, and returns all of those elements. What does the example below do? End of explanation # soup.find_all("a") # soup("a") Explanation: NB: Because find_all() is the most popular method in the Beautiful Soup search API, you can use a shortcut for it. If you treat the BeautifulSoup object as though it were a function, then it’s the same as calling find_all() on that object. These two lines of code are equivalent: End of explanation # Get only the 'a' tags in 'sidemenu' class soup("a", class_="sidemenu") Explanation: That's a lot! Many elements on a page will have the same html tag. For instance, if you search for everything with the a tag, you're likely to get a lot of stuff, much of which you don't want. What if we wanted to search for HTML tags ONLY with certain attributes, like particular CSS classes? We can do this by adding an additional argument to the find_all In the example below, we are finding all the a tags, and then filtering those with class = "sidemenu". End of explanation # get elements with "a.sidemenu" CSS Selector. soup.select("a.sidemenu") Explanation: Oftentimes a more efficient way to search and find things on a website is by CSS selector. For this we have to use a different method, select(). Just pass a string into the .select() to get all elements with that string as a valid CSS selector. In the example above, we can use "a.sidemenu" as a CSS selector, which returns all a tags with class sidemenu. End of explanation # SOLUTION soup.select("a.mainmenu") Explanation: Challenge 1 Find all the &lt;a&gt; elements in class mainmenu End of explanation # this is a list soup.select("a.sidemenu") # we first want to get an individual tag object first_link = soup.select("a.sidemenu")[0] # check out its class type(first_link) Explanation: 1.4 Get Attributes and Text of Elements Once we identify elements, we want the access information in that element. Oftentimes this means two things: Text Attributes Getting the text inside an element is easy. 
All we have to do is use the text member of a tag object: End of explanation print(first_link.text) Explanation: It's a tag! Which means it has a text member: End of explanation print(first_link['href']) Explanation: Sometimes we want the value of certain attributes. This is particularly relevant for a tags, or links, where the href attribute tells us where the link goes. You can access a tag’s attributes by treating the tag like a dictionary: End of explanation # SOLUTION [link['href'] for link in soup.select("a.mainmenu")] Explanation: Challenge 2 Find all the href attributes (url) from the mainmenu. End of explanation # make a GET request req = requests.get('http://www.ilga.gov/senate/default.asp?GA=98') # read the content of the server’s response src = req.text # soup it soup = BeautifulSoup(src, "lxml") Explanation: Part 2 Believe it or not, that's all you need to scrape a website. Let's apply these skills to scrape http://www.ilga.gov/senate/default.asp?GA=98 NB: we're just going to scrape the 98th general assembly. Our goal is to scrape information on each senator, including their: - name - district - party 2.1 First, make the get request and soup it. End of explanation # get all tr elements rows = soup.find_all("tr") len(rows) Explanation: 2.2 Find the right elements and text. Now let's try to get a list of rows in that table. Remember that rows are identified by the tr tag. End of explanation # returns every ‘tr tr tr’ css selector in the page rows = soup.select('tr tr tr') print(rows[2].prettify()) Explanation: But remember, find_all gets all the elements with the tr tag. We can use smart CSS selectors to get only the rows we want. End of explanation # select only those 'td' tags with class 'detail' row = rows[2] detailCells = row.select('td.detail') detailCells Explanation: We can use the select method on anything. Let's say we want to find everything with the CSS selector td.detail in an item of the list we created above. End of explanation # Keep only the text in each of those cells rowData = [cell.text for cell in detailCells] Explanation: Most of the time, we're interested in the actual text of a website, not its tags. Remember, to get the text of an HTML element, use the text member. End of explanation # check em out print(rowData[0]) # Name print(rowData[3]) # district print(rowData[4]) # party Explanation: Now we can combine the beautifulsoup tools with our basic python skills to scrape an entire web page. End of explanation # make a GET request req = requests.get('http://www.ilga.gov/senate/default.asp?GA=98') # read the content of the server’s response src = req.text # soup it soup = BeautifulSoup(src, "lxml") # Create empty list to store our data members = [] # returns every ‘tr tr tr’ css selector in the page rows = soup.select('tr tr tr') # loop through all rows for row in rows: # select only those 'td' tags with class 'detail' detailCells = row.select('td.detail') # get rid of junk rows if len(detailCells) is not 5: continue # Keep only the text in each of those cells rowData = [cell.text for cell in detailCells] # Collect information name = rowData[0] district = int(rowData[3]) party = rowData[4] # Store in a tuple tup = (name,district,party) # Append to list members.append(tup) len(members) Explanation: 2.3 Loop it all together Let's use a for loop to get 'em all! 
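Once the members list of (name, district, party) tuples has been collected by a loop like the one above, a pandas DataFrame is a convenient way to inspect and sort it. The rows below are stand-ins so the snippet runs on its own; in the notebook you would pass the real members list.

import pandas as pd

members = [("Jane Doe", 1, "D"), ("John Roe", 2, "R")]  # stand-in rows, for illustration only
members_df = pd.DataFrame(members, columns=["name", "district", "party"])
print(members_df.sort_values("district").head())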
End of explanation # SOLUTION # make a GET request req = requests.get('http://www.ilga.gov/senate/default.asp?GA=98') # read the content of the server’s response src = req.text # soup it soup = BeautifulSoup(src, "lxml") # Create empty list to store our data members = [] # returns every ‘tr tr tr’ css selector in the page rows = soup.select('tr tr tr') # loop through all rows for row in rows: # select only those 'td' tags with class 'detail' detailCells = row.select('td.detail') # get rid of junk rows if len(detailCells) is not 5: continue # Keep only the text in each of those cells rowData = [cell.text for cell in detailCells] # Collect information name = rowData[0] district = int(rowData[3]) party = rowData[4] # add href href = row.select('a')[1]['href'] # add full path full_path = "http://www.ilga.gov/senate/" + href + "&Primary=True" # Store in a tuple tup = (name,district,party, full_path) # Append to list members.append(tup) members[:5] Explanation: Challege 3: Get HREF element pointing to members' bills. The code above retrieves information on: - the senator's name - their district number - and their party We now want to retrieve the URL for each senator's list of bills. The format for the list of bills for a given senator is: http://www.ilga.gov/senate/SenatorBills.asp + ? + GA=98 + &MemberID=memberID + &Primary=True to get something like: http://www.ilga.gov/senate/SenatorBills.asp?MemberID=1911&GA=98&Primary=True You should be able to see that, unfortunately, memberID is not currently something pulled out in our scraping code. Your initial task is to modify the code above so that we also retrieve the full URL which points to the corresponding page of primary-sponsored bills, for each member, and return it along with their name, district, and party. Tips: To do this, you will want to get the appropriate anchor element (&lt;a&gt;) in each legislator's row of the table. You can again use the .select() method on the row object in the loop to do this — similar to the command that finds all of the td.detail cells in the row. Remember that we only want the link to the legislator's bills, not the committees or the legislator's profile page. The anchor elements' HTML will look like &lt;a href="/senate/Senator.asp/..."&gt;Bills&lt;/a&gt;. The string in the href attribute contains the relative link we are after. You can access an attribute of a BeatifulSoup Tag object the same way you access a Python dictionary: anchor['attributeName']. (See the <a href="http://www.crummy.com/software/BeautifulSoup/bs4/doc/#tag">documentation</a> for more details). NOTE: There are a lot of different ways to use BeautifulSoup to get things done; whatever you need to do to pull that HREF out is fine. Posting on the etherpad is recommended for discussing different strategies. End of explanation # SOLUTION def get_members(url): src = requests.get(url).text soup = BeautifulSoup(src, "lxml") rows = soup.select('tr') members = [] for row in rows: detailCells = row.select('td.detail') if len(detailCells) is not 5: continue rowData = [cell.text for cell in detailCells] name = rowData[0] district = int(rowData[3]) party = rowData[4] href = row.select('a')[1]['href'] full_path = "http://www.ilga.gov/senate/" + href + "&Primary=True" tup = (name,district,party,full_path) members.append(tup) return(members) # Test you code! 
senateMembers = get_members('http://www.ilga.gov/senate/default.asp?GA=98') len(senateMembers) Explanation: Challenge 4: Make a function Turn the code above into a function that accepts a URL, scrapes the URL for its senators, and returns a list of tuples containing information about each senator. End of explanation # SOLUTION def get_bills(url): src = requests.get(url).text soup = BeautifulSoup(src, "lxml") rows = soup.select('tr tr tr') bills = [] rowData = [] for row in rows: detailCells = row.select('td.billlist') if len(detailCells) is not 5: continue rowData = [cell.text for cell in row] bill_id = rowData[0] description = rowData[2] champber = rowData[3] last_action = rowData[4] last_action_date = rowData[5] tup = (bill_id,description,champber,last_action,last_action_date) bills.append(tup) return(bills) # uncomment to test your code: test_url = senateMembers[0][3] get_bills(test_url)[0:5] Explanation: Part 3: Scrape Bills 3.1 Writing a Scraper Function Now we want to scrape the webpages corresponding to bills sponsored by each bills. Write a function called get_bills(url) to parse a given Bills URL. This will involve: requesting the URL using the <a href="http://docs.python-requests.org/en/latest/">requests</a> library using the features of the BeautifulSoup library to find all of the &lt;td&gt; elements with the class billlist return a list of tuples, each with: description (2nd column) chamber (S or H) (3rd column) the last action (4th column) the last action date (5th column) I've started the function for you. Fill in the rest. End of explanation # SOLUTION bills_dict = {} for member in senateMembers[:5]: bills_dict[member[1]] = get_bills(member[3]) time.sleep(0.5) bills_dict[52] Explanation: 3.2 Get all the bills Finally, create a dictionary bills_dict which maps a district number (the key) onto a list_of_bills (the value) eminating from that district. You can do this by looping over all of the senate members in members_dict and calling get_bills() for each of their associated bill URLs. NOTE: please call the function time.sleep(0.5) for each iteration of the loop, so that we don't destroy the state's web site. End of explanation
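Because the polite crawl with time.sleep is slow, it can be worth writing the scraped bills to disk once they are collected. The sketch below uses a toy bills_dict so it runs on its own; in the notebook you would pass the dictionary built above.

import csv

bills_dict = {52: [("SB0001", "An example bill", "S", "Referred to Assignments", "2015-01-15")]}  # toy data
with open("bills.csv", "w") as f:
    writer = csv.writer(f)
    writer.writerow(["district", "bill_id", "description", "chamber", "last_action", "last_action_date"])
    for district, bills in bills_dict.items():
        for bill in bills:
            writer.writerow([district] + list(bill))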
1,986
Given the following text description, write Python code to implement the functionality described below step by step Description: Introduction to Regression. Author Step1: 1. The regression problem The goal of regression methods is to predict the value of some target variable $S$ from the observation of one or more input variables $X_0, X_1, \ldots, X_{m-1}$ (that we will collect in a single vector $\bf X$). Regression problems arise in situations where the value of the target variable is not easily accessible, but we can measure other dependent variables, from which we can try to predict $S$. <img src="figs/block_diagram.png" width=400> The only information available to estimate the relation between the inputs and the target is a dataset $\mathcal D$ containing several observations of all variables. $$\mathcal{D} = {{\bf x}{k}, s{k}}_{k=0}^{K-1}$$ The dataset $\mathcal{D}$ must be used to find a function $f$ that, for any observation vector ${\bf x}$, computes an output $\hat{s} = f({\bf x})$ that is a good predition of the true value of the target, $s$. <img src="figs/predictor.png" width=300> 2. Examples of regression problems. The <a href=http Step2: This dataset contains Step3: observations of the target variable and Step4: input variables. 3. Scatter plots 3.1. 2D scatter plots When the instances of the dataset are multidimensional, they cannot be visualized directly, but we can get a first rough idea about the regression task if we plot the target variable versus one of the input variables. These representations are known as <i>scatter plots</i> Python methods plot and scatter from the matplotlib package can be used for these graphical representations. Step5: 3.2. 3D Plots With the addition of a third coordinate, plot and scatter can be used for 3D plotting. Exercise 1 Step6: 4. Evaluating a regression task In order to evaluate the performance of a given predictor, we need to quantify the quality of predictions. This is usually done by means of a loss function $l(s,\hat{s})$. Two common losses are Square error Step7: The overal prediction performance is computed as the average of the loss computed over a set of samples
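As a worked illustration of the average square error just defined, the sketch below computes it for the simple predictor s_hat = 1.2 * x on a few invented points with plain NumPy; the numbers are placeholders, not the mammal data used in the exercise.

import numpy as np

X = np.array([10.0, 50.0, 120.0, 300.0])   # invented input values
S = np.array([12.5, 55.0, 150.0, 310.0])   # invented targets
S_hat = 1.2 * X                            # the fixed predictor from the exercise statement
R = np.mean((S - S_hat) ** 2)              # average square error over the samples
print(R)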
Python Code: # Import some libraries that will be necessary for working with data and displaying plots # To visualize plots in the notebook %matplotlib inline import numpy as np import scipy.io # To read matlab files import pandas as pd # To read data tables from csv files # For plots and graphical results import matplotlib import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D import pylab # For the student tests (only for python 2) import sys if sys.version_info.major==2: from test_helper import Test # That's default image size for this interactive session pylab.rcParams['figure.figsize'] = 9, 6 Explanation: Introduction to Regression. Author: Jerónimo Arenas García ([email protected]) Jesús Cid Sueiro ([email protected]) Notebook version: 1.1 (Sep 12, 2017) Changes: v.1.0 - First version. Extracted from regression_intro_knn v.1.0. v.1.1 - Compatibility with python 2 and python 3 End of explanation from sklearn import datasets # Load the dataset. Select it by uncommenting the appropriate line D_all = datasets.load_boston() #D_all = datasets.load_diabetes() # Extract data and data parameters. X = D_all.data # Complete data matrix (including input and target variables) S = D_all.target # Target variables n_samples = X.shape[0] # Number of observations n_vars = X.shape[1] # Number of variables (including input and target) Explanation: 1. The regression problem The goal of regression methods is to predict the value of some target variable $S$ from the observation of one or more input variables $X_0, X_1, \ldots, X_{m-1}$ (that we will collect in a single vector $\bf X$). Regression problems arise in situations where the value of the target variable is not easily accessible, but we can measure other dependent variables, from which we can try to predict $S$. <img src="figs/block_diagram.png" width=400> The only information available to estimate the relation between the inputs and the target is a dataset $\mathcal D$ containing several observations of all variables. $$\mathcal{D} = {{\bf x}{k}, s{k}}_{k=0}^{K-1}$$ The dataset $\mathcal{D}$ must be used to find a function $f$ that, for any observation vector ${\bf x}$, computes an output $\hat{s} = f({\bf x})$ that is a good predition of the true value of the target, $s$. <img src="figs/predictor.png" width=300> 2. Examples of regression problems. The <a href=http://scikit-learn.org/>scikit-learn</a> package contains several <a href=http://scikit-learn.org/stable/datasets/> datasets</a> related to regression problems. <a href=http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_boston.html#sklearn.datasets.load_boston > Boston dataset</a>: the target variable contains housing values in different suburbs of Boston. The goal is to predict these values based on several social, economic and demographic variables taken frome theses suburbs (you can get more details in the <a href = https://archive.ics.uci.edu/ml/datasets/Housing > UCI repository </a>). <a href=http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_diabetes.html#sklearn.datasets.load_diabetes /> Diabetes dataset</a>. We can load these datasets as follows: End of explanation print(n_samples) Explanation: This dataset contains End of explanation print(n_vars) Explanation: observations of the target variable and End of explanation # Select a dataset nrows = 4 ncols = 1 + (X.shape[1]-1)/nrows # Some adjustment for the subplot. 
pylab.subplots_adjust(hspace=0.2) # Plot all variables for idx in range(X.shape[1]): ax = plt.subplot(nrows,ncols,idx+1) ax.scatter(X[:,idx], S) # <-- This is the key command ax.get_xaxis().set_ticks([]) ax.get_yaxis().set_ticks([]) plt.ylabel('Target') Explanation: input variables. 3. Scatter plots 3.1. 2D scatter plots When the instances of the dataset are multidimensional, they cannot be visualized directly, but we can get a first rough idea about the regression task if we plot the target variable versus one of the input variables. These representations are known as <i>scatter plots</i> Python methods plot and scatter from the matplotlib package can be used for these graphical representations. End of explanation # <SOL> # </SOL> Explanation: 3.2. 3D Plots With the addition of a third coordinate, plot and scatter can be used for 3D plotting. Exercise 1: Select the diabetes dataset. Visualize the target versus components 2 and 4. (You can get more info about the <a href=http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.scatter>scatter</a> command and an <a href=http://matplotlib.org/examples/mplot3d/scatter3d_demo.html>example of use</a> in the <a href=http://matplotlib.org/index.html> matplotlib</a> documentation) End of explanation # In this section we will plot together the square and absolute errors grid = np.linspace(-3,3,num=100) plt.plot(grid, grid**2, 'b-', label='Square error') plt.plot(grid, np.absolute(grid), 'r--', label='Absolute error') plt.xlabel('Error') plt.ylabel('Cost') plt.legend(loc='best') plt.show() Explanation: 4. Evaluating a regression task In order to evaluate the performance of a given predictor, we need to quantify the quality of predictions. This is usually done by means of a loss function $l(s,\hat{s})$. Two common losses are Square error: $l(s, \hat{s}) = (s - \hat{s})^2$ Absolute error: $l(s, \hat{s}) = |s - \hat{s}|$ Note that both the square and absolute errors are functions of the estimation error $e = s-{\hat s}$. However, this is not necessarily the case. As an example, imagine a situation in which we would like to introduce a penalty which increases with the magnitude of the estimated variable. For such case, the following cost would better fit our needs: $l(s,{\hat s}) = s^2 \left(s-{\hat s}\right)^2$. End of explanation # Load dataset in arrays X and S df = pd.read_csv('datasets/x01.csv', sep=',', header=None) X = df.values[:,0] S = df.values[:,1] # <SOL> # </SOL> if sys.version_info.major==2: Test.assertTrue(np.isclose(R, 153781.943889), 'Incorrect value for the average square error') else: np.testing.assert_almost_equal(R, 153781.943889, decimal=4) print("Test passed") Explanation: The overal prediction performance is computed as the average of the loss computed over a set of samples: $${\bar R} = \frac{1}{K}\sum_{k=0}^{K-1} l\left(s_k, \hat{s}_k\right)$$ Exercise 2: The dataset in file 'datasets/x01.csv', taken from <a href="http://people.sc.fsu.edu/~jburkardt/datasets/regression/x01.txt">here</a> records the average weight of the brain and body for a number of mammal species. * Represent a scatter plot of the targe variable versus the one-dimensional input. * Plot, over the same plot, the prediction function given by $S = 1.2 X$ * Compute the square error rate for the given dataset. End of explanation
1,987
Given the following text description, write Python code to implement the functionality described below step by step Description: Introduction to PySpark We'll talk a little bit about the Spark's precursor, Hadoop, and then we'll discuss the advantages and utility of Spark on top of Hadoop. Next, we'll discuss what Dash offers in conjunction with Spark. Finally, we will implement a simple Spark application using the PySpark API. In the Beginning, There was Hadoop... Apache Hadoop - an open source software framework for reliable,scalable, distributed computing on commodity hardware Main modules Step1: NOTE Step2: Let's create a new SparkSession through the builder attribute and the getOrCreate() method. Step3: Initializing the master and appName attributes isn't actually important or critical in this introduction, nor is configuring the memory options for the executor. I've included here for the sake of thoroughness. NOTE Step4: Part of what allows for some of the speed-up in Spark applications is that Spark evaluations are mostly lazy evals. So executing the following line of code isn't very useful Step5: Instead we have to take an action on the rdd, such as collect() to materialize the data represented by the rdd abstraction Step6: NOTE Step7: Since we read the data in with textFile() we just have a set of strings separated by commas as our data. Let's split the data into separate entriees using the map() function. Step8: Now we have something more closely resembling a collection of records. But, notice that the data does not have a header and is mostly unstructured. We can fix that by converting the data to a DataFrame. I have been told that n general DataFrames perform better than the RDDs, especially when using Python... Step9: To examine the data types associated with the dataframe printSchemea() method. Step10: Since we used the textFile() method to read our data in, the data types are all string The following would cast all of the columns to floats instead Step11: But that seems pretty inefficient, and it is. We can write a function to handle all of this for us Step12: The pyspark.sql package has lots of convenient data exploration methods built in that support SQL query language execution. For example, we can select by columns Step13: We can use the filter() method to perform a classic SELECT FROM WHERE query as below Step14: And we can get summary statistics pretty easilly too... Step15: Let's do a quick bit of feature engineering and transformation to optimize a linear regression on our feature set... Step16: We can examine the column of medianHouseValue in the above outputs to make sure that we transformed the data correctly. Let's do some more feature engineering and standardization. Step17: Notice that we're using the col() function to specify that we're using columnar data in our calculations. The col("totalRooms")/col("households") is acting like a numpy array, element wise dividing the results. Next we'll use the select() method to reorder the data so that our response variable is Step18: Now we're going to actually isolate the response variable of labels from the predictor variables using a DenseVector, which is essentially a numpy ndarray. Step19: There are all kinds of great machine learning algorithms and functions already built into PySpark in the Spark ML library. If you're interested in more data pipelining, try visiting this page Step20: We can divide the data into training and testing sets using the PySpark SQL randomSplit() method. 
Step21: Now we can create the regression model. The original tutorial directs you to the following URL for information on the linear regression model class Step22: To evaluate the model, we can inspect the model parameters. Step23: And summary data for the model is available as well. Step24: Stop the Spark session...
Python Code: import findspark findspark.init() Explanation: Introduction to PySpark We'll talk a little bit about the Spark's precursor, Hadoop, and then we'll discuss the advantages and utility of Spark on top of Hadoop. Next, we'll discuss what Dash offers in conjunction with Spark. Finally, we will implement a simple Spark application using the PySpark API. In the Beginning, There was Hadoop... Apache Hadoop - an open source software framework for reliable,scalable, distributed computing on commodity hardware Main modules: Hadoop Common - main Hadoop Utilities HDFS - Hadoop Distributed File System Hadoop YARN - a task scheduling / cluster resource management module Hadoop MAP/REDUCE - paralell computing paradigm for big data anaylytics Emphasis on reliability and scalability Pervasive Assumption: node failure is the rule, not the exception Task / Cluster management is performed automatically, under the hood YARN is rack aware Tasks are scheduled either on a node where the data is already housed, or preference is given to a node on the same rack Reduces overall traffic between racks, increasing throughput Big Data Analytics with Hadoop Store the data using the HDFS Define a Mapper class (user defined class extends Mapper) which implements a map function The map function takes in a pair, &lt;k1,v1&gt; , and maps it to a new value, &lt;k2,v2&gt; The prototypical example is a word count program : map a document to a set of &lt;k,v&gt;=&lt;w,c&gt; pairs The document is distributed as chunks throughout the file system Each chunk of the document becomes a value, v, associated with a key, k, in a &lt;k,v&gt; pair Split each value into tokens (i.e. - words) associated with an iterator, iter For every word in iter, w, create a &lt;k,v&gt;=&lt;w,1&gt; pair Output a multi-set of &lt;w,1&gt; pairs for the chunk We now have one multiset for each chunk of the document Collect the multisets into a single multiset Define a Reducer class which impelements a reduce function The reduce function reduces the output from the mapping to a more meaningful state Reduce the multiset to a set of &lt;w,c&gt; pairs by summing all v with the same k A Map/Reduce Example Implementation: public static class TokenizerMapper extends Mapper&lt;Object, Text, Text, IntWritable&gt;{ private final static IntWritable one = new IntWritable(1); private Text word = new Text(); public void map(Object key, Text value, Context context ) throws IOException, InterruptedException { StringTokenizer itr = new StringTokenizer(value.toString()); while (itr.hasMoreTokens()) { word.set(itr.nextToken()); context.write(word, one); } } } public static class IntSumReducer extends Reducer&lt;Text,IntWritable,Text,IntWritable&gt; { private IntWritable result = new IntWritable(); public void reduce(Text key, Iterable&lt;IntWritable&gt; values, Context context ) throws IOException, InterruptedException { int sum = 0; for (IntWritable val : values) { sum += val.get(); } result.set(sum); context.write(key, result); } } And Lo! the villagers did rejoice! ...for a time... Some Problems with Hadoop MAP/REDUCE 1. Every application has to be pidgeon-holed into the MAP/REDUCE paradigm 2. Studies showed that up to 90% of computation wall time was spent in file I/O 3. 
Iterative algorithms were especially slow when implemented with MAP/REDUCE * The application may bottle neck due to a low number of particularly slow nodes Enter Spark Originally developed by Matei Zaharia and researchers at UC Berkley's AMP Lab Donated to the Apache Software Foundation in 2013 ## RDD - Resilient Distributed Dataset The main data structure for Spark applications Fault tolerant multi-set of data partitions available to a computing cluster in shared memory Requires cluster management and distributed file system Supported cluster management includes Spark native, Hadoop YARN, and Apache Mesos Several supported DFS: HDFS MAP-R FDS Cassandra ... RDDs can be created by any storage source supported by Hadoop In memory processing and a more diverse API are its main benefits over MAP/REDUCE Iterative Operations in Hadoop MAP/REDUCE: versus Iterative Operations in Spark : Interactive Operations in Hadoop MAP/REDUCE: versus Interactive Operations in Spark: PySpark : a Python API for programming spark applications * Originally Spark was written in Scala and most Spark applications were written in either Scala or Java * Eventually support was extended with APIs for Python and R * We are going to work with the Python API : PySpark The SparkShell Spark has an REPL interactive shell called the SparkShell Let's create a simple Spark application... This piece of the presentation borrows heavily from a July 2017 DataCamp tutorial on machine learning and PySpark. Visit the following URL to see the original tutorial: https://www.datacamp.com/community/tutorials/apache-spark-tutorial-machine-learning#gs.Y3MIPIY Data: California Realestate We're going to explore some data and run some iterative algorithms on it. You can find the data here: http://www.dcc.fc.up.pt/~ltorgo/Regression/cal_housing.html Download the cal_housing.tgz tar ball at the bottom of the page and extract it somewhere obvious. First we need to use the findspark package inorder to be able to locate the various Spark packages that we need (i.e. - pyspark, etc.) You can download and install findspark using pip as I did, or I'm sure anaconda will also work. There are probably other ways of making sure you can locate the packages you need as well, but this was the simplest and most straight forward I found. End of explanation from pyspark.sql import SparkSession Explanation: NOTE: This is only necessary in the Jupyter Notebook. You should be able to import the necessary packages in a regular Python script without using findspark The SparkSession is the entry point for any Spark application. End of explanation spark = SparkSession.builder\ .master("local")\ .appName("LinearRegressionModel")\ .config("spark.executor.memory","1gb")\ .getOrCreate() sc = spark.sparkContext Explanation: Let's create a new SparkSession through the builder attribute and the getOrCreate() method. End of explanation rdd = sc.textFile('data/CaliforniaHousing/cal_housing.data') header = sc.textFile('data/CaliforniaHousing/cal_housing.domain') Explanation: Initializing the master and appName attributes isn't actually important or critical in this introduction, nor is configuring the memory options for the executor. I've included here for the sake of thoroughness. NOTE: If you find the Spark tutorial on the Spark documentation web page it includes the following line of code: spark = SparkSession.builder().appName(appName).master(master).getOrCreate() That does not work. You must pass a string as an argument to the appName() and master() methods. Moving on... 
From here we can create a couple of RDDs: one with the data and another with the domain information, the header End of explanation header Explanation: Part of what allows for some of the speed-up in Spark applications is that Spark evaluations are mostly lazy evals. So executing the following line of code isn't very useful: End of explanation header.collect() Explanation: Instead we have to take an action on the rdd, such as collect() to materialize the data represented by the rdd abstraction End of explanation rdd.take(2) Explanation: NOTE: collect() is a pretty dangerous action: if the RDD is especially large then your executor you may run out of RAM and your application will crash. If you're using especially large data and you just want a peak at it to try to suss out its structure, then try take() or first() End of explanation rdd = rdd.map(lambda line: line.split(",")) rdd.take(2) Explanation: Since we read the data in with textFile() we just have a set of strings separated by commas as our data. Let's split the data into separate entriees using the map() function. End of explanation from pyspark.sql import Row df = rdd.map(lambda line: Row(longitude=line[0], latitude=line[1], housingMedianAge=line[2], totalRooms=line[3], totalBedRooms=line[4], population=line[5], households=line[6], medianIncome=line[7], medianHouseValue=line[8])).toDF() df.show() Explanation: Now we have something more closely resembling a collection of records. But, notice that the data does not have a header and is mostly unstructured. We can fix that by converting the data to a DataFrame. I have been told that n general DataFrames perform better than the RDDs, especially when using Python... End of explanation df.printSchema() Explanation: To examine the data types associated with the dataframe printSchemea() method. End of explanation from pyspark.sql.types import * df = df.withColumn("longitude", df["longitude"].cast(FloatType())) \ .withColumn("latitude", df["latitude"].cast(FloatType())) \ .withColumn("housingMedianAge",df["housingMedianAge"].cast(FloatType())) \ .withColumn("totalRooms", df["totalRooms"].cast(FloatType())) \ .withColumn("totalBedRooms", df["totalBedRooms"].cast(FloatType())) \ .withColumn("population", df["population"].cast(FloatType())) \ .withColumn("households", df["households"].cast(FloatType())) \ .withColumn("medianIncome", df["medianIncome"].cast(FloatType())) \ .withColumn("medianHouseValue", df["medianHouseValue"].cast(FloatType())) Explanation: Since we used the textFile() method to read our data in, the data types are all string The following would cast all of the columns to floats instead: End of explanation from pyspark.sql.types import * def convertCols(df,names,dataType): #df - a dataframe, names - a list of col names, dataType - the cast conversion type for name in names: df = df.withColumn(name,df[name].cast(dataType)) return df names = ['households', 'housingMedianAge', 'latitude', 'longitude', 'medianHouseValue', 'medianIncome',\ 'population', 'totalBedRooms', 'totalRooms'] df = convertCols(df,names,FloatType()) df.printSchema() df.show(10) Explanation: But that seems pretty inefficient, and it is. We can write a function to handle all of this for us: End of explanation df.select('population','totalBedrooms').show(10) Explanation: The pyspark.sql package has lots of convenient data exploration methods built in that support SQL query language execution. 
For example, we can select by columns: End of explanation ndf = df.select('population','totalBedrooms').filter(df['totalBedrooms'] > 500) ndf.show(10) Explanation: We can use the filter() method to perform a classic SELECT FROM WHERE query as below: End of explanation df.describe().show() Explanation: And we can get summary statistics pretty easilly too... End of explanation # Import all from `sql.functions` from pyspark.sql.functions import * df.show() # Adjust the values of `medianHouseValue` df = df.withColumn("medianHouseValue", col("medianHouseValue")/100000) df.show() Explanation: Let's do a quick bit of feature engineering and transformation to optimize a linear regression on our feature set... End of explanation # Import all from `sql.functions` if you haven't yet from pyspark.sql.functions import * # Divide `totalRooms` by `households` roomsPerHousehold = df.select(col("totalRooms")/col("households")) # Divide `population` by `households` populationPerHousehold = df.select(col("population")/col("households")) # Divide `totalBedRooms` by `totalRooms` bedroomsPerRoom = df.select(col("totalBedRooms")/col("totalRooms")) # Add the new columns to `df` df = df.withColumn("roomsPerHousehold", col("totalRooms")/col("households")) \ .withColumn("populationPerHousehold", col("population")/col("households")) \ .withColumn("bedroomsPerRoom", col("totalBedRooms")/col("totalRooms")) # Inspect the result df.first() Explanation: We can examine the column of medianHouseValue in the above outputs to make sure that we transformed the data correctly. Let's do some more feature engineering and standardization. End of explanation # Re-order and select columns df = df.select("medianHouseValue", "totalBedRooms", "population", "households", "medianIncome", "roomsPerHousehold", "populationPerHousehold", "bedroomsPerRoom") Explanation: Notice that we're using the col() function to specify that we're using columnar data in our calculations. The col("totalRooms")/col("households") is acting like a numpy array, element wise dividing the results. Next we'll use the select() method to reorder the data so that our response variable is End of explanation # Import `DenseVector` from pyspark.ml.linalg import DenseVector # Define the `input_data` input_data = df.rdd.map(lambda x: (x[0], DenseVector(x[1:]))) # Replace `df` with the new DataFrame df = spark.createDataFrame(input_data, ["label", "features"]) Explanation: Now we're going to actually isolate the response variable of labels from the predictor variables using a DenseVector, which is essentially a numpy ndarray. End of explanation # Import `StandardScaler` from pyspark.ml.feature import StandardScaler # Initialize the `standardScaler` standardScaler = StandardScaler(inputCol="features", outputCol="features_scaled") # Fit the DataFrame to the scaler scaler = standardScaler.fit(df) # Transform the data in `df` with the scaler scaled_df = scaler.transform(df) # Inspect the result scaled_df.take(2) Explanation: There are all kinds of great machine learning algorithms and functions already built into PySpark in the Spark ML library. If you're interested in more data pipelining, try visiting this page: https://spark.apache.org/docs/latest/ml-pipeline.html End of explanation train_data, test_data = scaled_df.randomSplit([.8,.2],seed=1234) Explanation: We can divide the data into training and testing sets using the PySpark SQL randomSplit() method. 
End of explanation # Import `LinearRegression` from pyspark.ml.regression import LinearRegression # Initialize `lr` lr = LinearRegression(labelCol="label", maxIter=10, regParam=0.3, elasticNetParam=0.8) # Fit the data to the model linearModel = lr.fit(train_data) # Generate predictions predicted = linearModel.transform(test_data) # Extract the predictions and the "known" correct labels predictions = predicted.select("prediction").rdd.map(lambda x: x[0]) labels = predicted.select("label").rdd.map(lambda x: x[0]) # Zip `predictions` and `labels` into a list predictionAndLabel = predictions.zip(labels).collect() # Print out first 5 instances of `predictionAndLabel` predictionAndLabel[:5] Explanation: Now we can create the regression model. The original tutorial directs you to the following URL for information on the linear regression model class: https://spark.apache.org/docs/latest/api/python/pyspark.ml.html#pyspark.ml.regression.LinearRegression End of explanation # Coefficients for the model linearModel.coefficients # Intercept for the model linearModel.intercept Explanation: To evaluate the model, we can inspect the model parameters. End of explanation # Get the RMSE linearModel.summary.rootMeanSquaredError # Get the R2 linearModel.summary.r2 Explanation: And summary data for the model is available as well. End of explanation spark.stop() Explanation: Stop the Spark session... End of explanation
1,988
Given the following text description, write Python code to implement the functionality described below step by step Description: Using FISSA with CNMF from MATLAB CNMF is blind source separation toolbox for cell detection and signal extraction. Here we illustrate how one can use the ROIs detected by CNMF, and use FISSA to extract and decontaminate the traces. In this tutorial, we assume the user is using the MATLAB implementation of CNMF. As such, this also serves as a tutorial on how to import data from MATLAB into Python to use with FISSA. However, note that there is also a Python implementation of CNMF, which you can use instead to keep your whole workflow in Python. Reference Step1: Running CNMF in MATLAB, and importing into Python We ran CNMF in MATLAB using the run_pipeline.m script available from the CNMF repository on our example data (found at ../exampleData/20150529/). We saved the Coor and F_df variables generated by that script into a .mat file (cNMFdata.mat) which we now load here. Step2: Show detected cells Let's render the ROIs using matplotlib. Step3: Running FISSA on cells detected by CNMF FISSA needs ROIs to be provided either as an ImageJ zip file, or a set of numpy arrays. CNMF can output ROIs in coordinates (as we imported above), which can be directly read into FISSA. A given ROI after importing from MATLAB is given as python Coor[i, 0] FISSA expects a set of rois to be given as a list of lists, python [[roiA1, roiA2, roiA3, ...]] so we will need to change the format of the ROIs first. Step4: Which can then be put into FISSA and run as follows. Step5: Plotting the results Let's plot the traces for ROIs as they were detected by CNMF, and after removing neuropile with FISSA.
Python Code: # FISSA package import fissa # For plotting our results, import numpy and matplotlib import matplotlib.pyplot as plt import numpy as np # Need this utility from scipy to load data from matfiles from scipy.io import loadmat Explanation: Using FISSA with CNMF from MATLAB CNMF is blind source separation toolbox for cell detection and signal extraction. Here we illustrate how one can use the ROIs detected by CNMF, and use FISSA to extract and decontaminate the traces. In this tutorial, we assume the user is using the MATLAB implementation of CNMF. As such, this also serves as a tutorial on how to import data from MATLAB into Python to use with FISSA. However, note that there is also a Python implementation of CNMF, which you can use instead to keep your whole workflow in Python. Reference: Pnevmatikakis, E.A., Soudry, D., Gao, Y., Machado, T., Merel, J., ... and Paninski, L. Simultaneous denoising, deconvolution, and demixing of calcium imaging data. Neuron, 89(2):285-299, 2016. doi:&nbsp;10.1016/j.neuron.2015.11.037. Import packages End of explanation # Load data from cNMFdata.mat file cNMFdata = loadmat("cNMFdata")["dat"] # Get the F_df recording traces out of the loaded object F_df = cNMFdata["F_df"][0, 0] # Get the ROI outlines out of the loaded object Coor = cNMFdata["Coor"][0, 0] Explanation: Running CNMF in MATLAB, and importing into Python We ran CNMF in MATLAB using the run_pipeline.m script available from the CNMF repository on our example data (found at ../exampleData/20150529/). We saved the Coor and F_df variables generated by that script into a .mat file (cNMFdata.mat) which we now load here. End of explanation # Plotting lines surrounding each of the ROIs plt.figure(figsize=(7, 7)) for i_cell in range(len(Coor)): x = Coor[i_cell, 0][0] y = Coor[i_cell, 0][1] # Plot border around cells plt.plot(x, y) # Invert the y-axis because image co-ordinates are labelled from top-left plt.gca().invert_yaxis() plt.show() Explanation: Show detected cells Let's render the ROIs using matplotlib. End of explanation numROI = len(Coor) rois_FISSA = [[Coor[i, 0][0], Coor[i, 0][1]] for i in range(numROI)] Explanation: Running FISSA on cells detected by CNMF FISSA needs ROIs to be provided either as an ImageJ zip file, or a set of numpy arrays. CNMF can output ROIs in coordinates (as we imported above), which can be directly read into FISSA. A given ROI after importing from MATLAB is given as python Coor[i, 0] FISSA expects a set of rois to be given as a list of lists, python [[roiA1, roiA2, roiA3, ...]] so we will need to change the format of the ROIs first. End of explanation output_folder = "fissa_cnmf_example" tiff_folder = "exampleData/20150529/" experiment = fissa.Experiment(tiff_folder, [rois_FISSA], output_folder) experiment.separate(redo_prep=True) Explanation: Which can then be put into FISSA and run as follows. 
End of explanation # Fetch the colormap object for Cynthia Brewer's Paired color scheme cmap = plt.get_cmap("Paired") # Select which trial (TIFF index) to plot trial = 0 # Plot the mean image and ROIs from the FISSA experiment plt.figure(figsize=(7, 7)) plt.imshow(experiment.means[trial], cmap="gray") XLIM = plt.xlim() YLIM = plt.ylim() for i_roi in range(len(experiment.roi_polys)): # Plot border around ROI for contour in experiment.roi_polys[i_roi, trial][0]: plt.plot( contour[:, 1], contour[:, 0], color=cmap((i_roi * 2 + 1) % cmap.N), ) # ROI co-ordinates are half a pixel outside the image, # so we reset the axis limits plt.xlim(XLIM) plt.ylim(YLIM) plt.show() # Plot all ROIs and trials # Get the number of ROIs and trials n_roi = experiment.result.shape[0] n_trial = experiment.result.shape[1] # Find the maximum signal intensities for each ROI roi_max_raw = [ np.max([np.max(experiment.raw[i_roi, i_trial][0]) for i_trial in range(n_trial)]) for i_roi in range(n_roi) ] roi_max_result = [ np.max([np.max(experiment.result[i_roi, i_trial][0]) for i_trial in range(n_trial)]) for i_roi in range(n_roi) ] roi_max = np.maximum(roi_max_raw, roi_max_result) # Plot our figure using subplot panels plt.figure(figsize=(16, 10)) for i_roi in range(n_roi): for i_trial in range(n_trial): # Make subplot axes i_subplot = 1 + i_trial * n_roi + i_roi plt.subplot(n_trial, n_roi, i_subplot) # Plot the data plt.plot( experiment.raw[i_roi][i_trial][0, :], label="Raw (CNMF)", color=cmap((i_roi * 2) % cmap.N), ) plt.plot( experiment.result[i_roi][i_trial][0, :], label="FISSA", color=cmap((i_roi * 2 + 1) % cmap.N), ) # Labels and boiler plate plt.ylim([-0.05 * roi_max[i_roi], roi_max[i_roi] * 1.05]) if i_roi == 0: plt.ylabel( "Trial {}\n\nSignal intensity\n(candela per unit area)".format( i_trial + 1 ) ) if i_trial == 0: plt.title("ROI {}".format(i_roi)) if i_trial == n_trial - 1: plt.xlabel("Time (frame number)") plt.legend() plt.show() Explanation: Plotting the results Let's plot the traces for ROIs as they were detected by CNMF, and after removing neuropile with FISSA. End of explanation
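If the decontaminated traces need to go back into a MATLAB pipeline afterwards, scipy can write them out again. This is an optional extra rather than part of the original tutorial; it assumes the experiment object created above and reuses the same [roi][trial][0, :] indexing as the plotting code, and the output filename is just an example.

from scipy.io import savemat
import numpy as np

n_roi = experiment.result.shape[0]
n_trial = experiment.result.shape[1]
# Collect the decontaminated signal (first row of each result entry) per ROI and trial
fissa_traces = {
    "roi{}_trial{}".format(i_roi, i_trial): np.asarray(experiment.result[i_roi][i_trial][0, :])
    for i_roi in range(n_roi)
    for i_trial in range(n_trial)
}
savemat("fissa_traces.mat", fissa_traces)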
1,989
Given the following text description, write Python code to implement the functionality described below step by step Description: Flavours of Gradient Descent A quick recap of the Gradient Descent method Step1: Batch Gradient Descent In most supervised ML applications, we will try to learn a pattern from a number of labeled examples. In Batch Gradient Descent, each iteration loops over entire set of examples. So, let's build 1-layer network of Linear Perceptrons to classify Fisher's IRIS dataset (again!). Remember that a Linear Perceptron can only distinguish between two classes. <table> <tr> <td><img src="http Step2: The Softmax Function The softmax function is a technique to apply a probabilistic classifier by making a probability distribution out of a set of values $(v_1, v_2, ..., v_n)$ which may or may not satisfy all the features of probability distribution Step3: Non-linear Perceptron With SoftMax With softmax, we typically use the cross-entropy error as the function to minimize. The Cross Entropy Error for a given input $X = (x_1, x_2, ..., x_n)$, where each $x_i$ is a vector, is given by Step4: Cross Entropy Error Step11: Gradient of the Cross Entropy Error The Gradient update step in Gradient Descent when the Loss Function uses Cross Entropy Error is Step12: Gradient of the Cross Entropy Error Recap We know the the cross entropy error is the average of the vector products between the 1-hot enconding of label and the softmax output. $L = - \frac {1}{n} \sum_{i=1}^n Y_i^T ln(\hat Y_i)$ Where the sum runs over all of the $n$ input samples. This is a complex derivation, and we need to approach it step-by step. First, let's work out what the $i$-th sample contributes to the gradient of L, i.e. the derivative of - $Y_i^Tln(\hat Y_i)$. Let's draw the structure of the Network using networkx for a 2-class problem, so we have 2 input nodes.
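For concreteness, with $\eta = 0.2$ and $x_0 = 2$ the first two updates of the rule above are $x_1 = 2 - 0.2\cdot(2\cdot 2 - 2) = 1.6$ and $x_2 = 1.6 - 0.2\cdot(2\cdot 1.6 - 2) = 1.36$, already closing in on the minimum at $x = 1$ that the code below converges to.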
Python Code: %matplotlib inline import numpy as np def L(x): return x**2 - 2*x + 1 def L_prime(x): return 2*x - 2 def converged(x_prev, x, epsilon): "Return True if the abs value of all elements in x-x_prev are <= epsilon." absdiff = np.abs(x-x_prev) return np.all(absdiff <= epsilon) def gradient_descent(f_prime, x_0, learning_rate=0.2, n_iters=100, epsilon=1E-8): x = x_0 for _ in range(n_iters): x_prev = x x -= learning_rate*f_prime(x) if converged(x_prev, x, epsilon): break return x x_min = gradient_descent(L_prime, 2) print('Minimum value of L(x) = x**2 - 2*x + 1.0 is [%.2f] at x = [%.2f]' % (L(x_min), x_min)) Explanation: Flavours of Gradient Descent A quick recap of the Gradient Descent method: This is an iterative algorithm to minize a loss function $L(x)$, where we start with a guess of what the answer should be - and then take steps proportional to the gradient at the current point. $x = x_0$ (initial guess) Until Convergence is achieved: $x_{i+1} = x_{i} - \eta\nabla_L(x_i)$ For example, Let's say $L(x) = x^2 - 2x + 1$ and we start at $x0 = 2$. Coding the Gradient Descent method in Python: End of explanation import seaborn as sns import pandas as pd iris_df = sns.load_dataset('iris') print('Columns: %s' % (iris_df.columns.values, )) print('Labels: %s' % (pd.unique(iris_df['species']), )) iris_df.head(5) Explanation: Batch Gradient Descent In most supervised ML applications, we will try to learn a pattern from a number of labeled examples. In Batch Gradient Descent, each iteration loops over entire set of examples. So, let's build 1-layer network of Linear Perceptrons to classify Fisher's IRIS dataset (again!). Remember that a Linear Perceptron can only distinguish between two classes. <table> <tr> <td><img src="http://blog.zabarauskas.com/img/perceptron.gif"></td> <td><img src="http://cmp.felk.cvut.cz/cmp/courses/recognition/Labs/perceptron/images/linear.png" /> </tr> </table> Since there are 3 classes, our mini-network will have 3 Perceptrons. We'll channel the output of each Perceptron $w_i^T + b$ into a softmax function to pick the final label. We'll train this network using Batch Gradient Descent. Getting Data End of explanation def softmax(x): # Uncomment to find out why we shouldn't do it this way... # return np.exp(x) / np.sum(np.exp(x)) scaled_x = x - np.max(x) result = np.exp(scaled_x) / np.sum(np.exp(scaled_x)) return result a = np.array([-500.9, 2000, 7, 11, 12, -15, 100]) sm_a = softmax(a) print('Softmax(%s) = %s' % (a, sm_a)) Explanation: The Softmax Function The softmax function is a technique to apply a probabilistic classifier by making a probability distribution out of a set of values $(v_1, v_2, ..., v_n)$ which may or may not satisfy all the features of probability distribution: $v_i >= 0$ $\sum_{i=1}^n v_i = 1$ The probability distribution is the Gibbs Distribution: $v'i = \frac {\exp {v_i}} {\sum{j=1}^n\exp {v_j})}$ for $i = 1, 2, ... n$. End of explanation def encode_1_of_n(ordered_labels, y): label2idx = dict((label, idx) for idx, label in enumerate(ordered_labels)) def encode_one(y_i): enc = np.zeros(len(ordered_labels)) enc[label2idx[y_i]] = 1.0 return enc return np.array([x for x in map(encode_one, y)]) encode_1_of_n(['apple', 'banana', 'orange'], ['apple', 'banana', 'orange', 'apple', 'apple']) Explanation: Non-linear Perceptron With SoftMax With softmax, we typically use the cross-entropy error as the function to minimize. 
The Cross Entropy Error for a given input $X = (x_1, x_2, ..., x_n)$, where each $x_i$ is a vector, is given by: $L(X) = - \frac {1}{n} \sum_{i=1}^n y_i^T log(\hat{y_i})$, where the sum runs over the $n$ examples in $X$. Each $y_i$ is the 1-of-n encoded label of the $i$-th example, so it's also a vector. For example, if the labels in order are ('apple', 'banana', 'orange') and the label of $x_i$ is 'banana', then $y_i = [0, 1, 0]$. $\hat{y_i}$ is the softmax output for $x_i$ from the network. The term $y_i^T log(\hat{y_i})$ is the vector dot product between $y_i$ and $log(\hat{y_i})$. One of n Encoding End of explanation
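As a cross-check on the definitions above, both the 1-of-n encoding and the cross entropy average can be written as single NumPy expressions. This is only an equivalent vectorised sketch of the same definitions, not a replacement for the helper functions used in the rest of the notebook.

import numpy as np

def one_hot(labels, ordered_labels):
    # Rows of the identity matrix, picked out by each label's position
    index = {label: i for i, label in enumerate(ordered_labels)}
    return np.eye(len(ordered_labels))[[index[y] for y in labels]]

def cross_entropy_vec(Y, Y_hat):
    # Mean of the row-wise dot products between Y and log(Y_hat)
    return -np.mean(np.sum(Y * np.log(Y_hat), axis=1))

Y = one_hot(['apple', 'banana'], ['apple', 'banana', 'orange'])
Y_hat = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1]])
print(cross_entropy_vec(Y, Y_hat))  # about 0.29 = -(ln 0.7 + ln 0.8) / 2

When every row of Y_hat is a proper probability distribution, as the softmax guarantees, this quantity is always non-negative.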
n_samples = X.shape[0] indices = np.arange(n_samples) np.random.shuffle(indices) for start in range(0, n_samples, mb_size): yield X[start:start+mb_size, :], Y[start:start+mb_size, :] def _update_batch(self, i, X_batch, Y_batch, learning_rate, print_every=100): w_old = self.w.copy() dw = [] for x, y in zip(X_batch, Y_batch): self.forward(x) dw_item = self.backward(x, y, learning_rate) dw.append(dw_item) dw_batch = np.mean(dw, axis=0) self.w -= dw_batch if (i == 0) or ((i+1) % print_every == 0): self.print_weight_diff(i, w_old) def train(self, X, Y, n_iters=1000, learning_rate=0.2, minibatch_size=30, epsilon=1E-8): Entry point for the Minibatch SGD training method. Calls forward+backward for each (x_i, y_i) pair and adjusts the weight w accordingly. self.init_weights(X, Y) Y = encode_1_of_n(self.labels, Y) n_samples = X.shape[0] # MiniBatch SGD for i in range(n_iters): for X_batch, Y_batch in self._gen_minibatch(X, Y, minibatch_size): self._update_batch(i, X_batch, Y_batch, learning_rate) # Set aside test data label_grouper = iris_df.groupby('species') test = label_grouper.head(10).set_index('species') train = label_grouper.tail(100).set_index('species') # Train the Network X_train, Y_train = train.as_matrix(), train.index.values nn = OneLayerNetworkWithSoftMax() nn.train(X_train, Y_train) # Test results = test.apply(lambda row : nn.predict(row.as_matrix()), axis=1) results.name = 'predicted_label' results.index.name = 'expected_label' results.reset_index() Explanation: Gradient of the Cross Entropy Error The Gradient update step in Gradient Descent when the Loss Function uses Cross Entropy Error is: $w_i^{j+1} = w_i^{j} - \eta [\frac {\partial L} {\partial w_i}]^{j}$ End of explanation import networkx as nx from matplotlib import pylab G = nx.DiGraph() G.add_edges_from( [('i', 'n1'), ('i', 'n2'), ('n1', 's1'), ('n2', 's1'), ('n1', 's2'), ('n2', 's2'), ('s1', 'y1'), ('s2', 'y2'), ]) pos = {'i': (1, 1), 'n1': (2, 0), 'n2': (2, 2), 's1': (3, 0), 's2': (3, 2), 'y1': (4, 0), 'y2': (4, 2), } labels = {'i': r'$x_i$', 'n1': r'$w_1$', 'n2': r'$w_2$', 's1': r'$s_1$', # r'$\frac {\exp(z_{i1})} {S_i}$', 's2': r'$s_2$', # r'$\frac {\exp(z_{i2})} {S_i}$' } edge_labels = {('i', 'n1'): r'$x_i$', ('i', 'n2'): r'$x_i$', ('n1', 's1'): r'$w_1^Tx_i$', ('n1', 's2'): r'$w_1^Tx_i$', ('n2', 's1'): r'$w_2^Tx_i$', ('n2', 's2'): r'$w_2^Tx_i$', ('n2', 's1'): r'$w_2^Tx_i$', ('s1', 'y1'): r'$\frac {\exp(z_{i1})} {S_i}$', ('s2', 'y2'): r'$\frac {\exp(z_{i2})} {S_i}$', } nx.draw(G, pos=pos, node_size=1000) nx.draw_networkx_labels(G,pos,labels, font_size=15, color='white') nx.draw_networkx_edge_labels(G, pos=pos, edge_labels=edge_labels, font_size=15) Explanation: Gradient of the Cross Entropy Error Recap We know the the cross entropy error is the average of the vector products between the 1-hot enconding of label and the softmax output. $L = - \frac {1}{n} \sum_{i=1}^n Y_i^T ln(\hat Y_i)$ Where the sum runs over all of the $n$ input samples. This is a complex derivation, and we need to approach it step-by step. First, let's work out what the $i$-th sample contributes to the gradient of L, i.e. the derivative of - $Y_i^Tln(\hat Y_i)$. Let's draw the structure of the Network using networkx for a 2-class problem, so we have 2 input nodes. End of explanation
1,990
Given the following text description, write Python code to implement the functionality described below step by step Description: .. _tut_compute_covariance Step1: Source estimation method such as MNE require a noise estimations from the recordings. In this tutorial we cover the basics of noise covariance and construct a noise covariance matrix that can be used when computing the inverse solution. For more information, see Step2: The definition of noise depends on the paradigm. In MEG it is quite common to use empty room measurements for the estimation of sensor noise. However if you are dealing with evoked responses, you might want to also consider resting state brain activity as noise. First we compute the noise using empty room recording. Note that you can also use only a part of the recording with tmin and tmax arguments. That can be useful if you use resting state as a noise baseline. Here we use the whole empty room recording to compute the noise covariance (tmax=None is the same as the end of the recording, see Step3: Now that you the covariance matrix in a python object you can save it to a file with Step4: Note that this method also attenuates the resting state activity in your source estimates. Step5: Plot the covariance matrices Try setting proj to False to see the effect. Notice that the projectors in epochs are already applied, so proj parameter has no effect. Step6: How should I regularize the covariance matrix? The estimated covariance can be numerically unstable and tends to induce correlations between estimated source amplitudes and the number of samples available. The MNE manual therefore suggests to regularize the noise covariance matrix (see Step7: This procedure evaluates the noise covariance quantitatively by how well it whitens the data using the negative log-likelihood of unseen data. The final result can also be visually inspected. Under the assumption that the baseline does not contain a systematic signal (time-locked to the event of interest), the whitened baseline signal should be follow a multivariate Gaussian distribution, i.e., whitened baseline signals should be between -1.96 and 1.96 at a given time sample. Based on the same reasoning, the expected value for the global field power (GFP) is 1 (calculation of the GFP should take into account the true degrees of freedom, e.g. ddof=3 with 2 active SSP vectors) Step8: This plot displays both, the whitened evoked signals for each channels and the whitened GFP. The numbers in the GFP panel represent the estimated rank of the data, which amounts to the effective degrees of freedom by which the squared sum across sensors is divided when computing the whitened GFP. The whitened GFP also helps detecting spurious late evoked components which can be the consequence of over- or under-regularization. Note that if data have been processed using signal space separation (SSS) [2], gradiometers and magnetometers will be displayed jointly because both are reconstructed from the same SSS basis vectors with the same numerical rank. This also implies that both sensor types are not any longer statistically independent. These methods for evaluation can be used to assess model violations. Additional introductory materials can be found here &lt;https
Python Code: import os.path as op import mne from mne.datasets import sample Explanation: .. _tut_compute_covariance: Computing covariance matrix End of explanation data_path = sample.data_path() raw_empty_room_fname = op.join( data_path, 'MEG', 'sample', 'ernoise_raw.fif') raw_empty_room = mne.io.read_raw_fif(raw_empty_room_fname) raw_fname = op.join(data_path, 'MEG', 'sample', 'sample_audvis_raw.fif') raw = mne.io.read_raw_fif(raw_fname) raw.info['bads'] += ['EEG 053'] # bads + 1 more Explanation: Source estimation method such as MNE require a noise estimations from the recordings. In this tutorial we cover the basics of noise covariance and construct a noise covariance matrix that can be used when computing the inverse solution. For more information, see :ref:BABDEEEB. End of explanation noise_cov = mne.compute_raw_covariance(raw_empty_room, tmin=0, tmax=None) Explanation: The definition of noise depends on the paradigm. In MEG it is quite common to use empty room measurements for the estimation of sensor noise. However if you are dealing with evoked responses, you might want to also consider resting state brain activity as noise. First we compute the noise using empty room recording. Note that you can also use only a part of the recording with tmin and tmax arguments. That can be useful if you use resting state as a noise baseline. Here we use the whole empty room recording to compute the noise covariance (tmax=None is the same as the end of the recording, see :func:mne.compute_raw_covariance). End of explanation events = mne.find_events(raw) epochs = mne.Epochs(raw, events, event_id=1, tmin=-0.2, tmax=0.0, baseline=(-0.2, 0.0)) Explanation: Now that you the covariance matrix in a python object you can save it to a file with :func:mne.write_cov. Later you can read it back to a python object using :func:mne.read_cov. You can also use the pre-stimulus baseline to estimate the noise covariance. First we have to construct the epochs. When computing the covariance, you should use baseline correction when constructing the epochs. Otherwise the covariance matrix will be inaccurate. In MNE this is done by default, but just to be sure, we define it here manually. End of explanation noise_cov_baseline = mne.compute_covariance(epochs) Explanation: Note that this method also attenuates the resting state activity in your source estimates. End of explanation noise_cov.plot(raw_empty_room.info, proj=True) noise_cov_baseline.plot(epochs.info) Explanation: Plot the covariance matrices Try setting proj to False to see the effect. Notice that the projectors in epochs are already applied, so proj parameter has no effect. End of explanation cov = mne.compute_covariance(epochs, tmax=0., method='auto') Explanation: How should I regularize the covariance matrix? The estimated covariance can be numerically unstable and tends to induce correlations between estimated source amplitudes and the number of samples available. The MNE manual therefore suggests to regularize the noise covariance matrix (see :ref:cov_regularization), especially if only few samples are available. Unfortunately it is not easy to tell the effective number of samples, hence, to choose the appropriate regularization. In MNE-Python, regularization is done using advanced regularization methods described in [1]_. For this the 'auto' option can be used. 
With this option cross-validation will be used to learn the optimal regularization: End of explanation evoked = epochs.average() evoked.plot_white(cov) Explanation: This procedure evaluates the noise covariance quantitatively by how well it whitens the data using the negative log-likelihood of unseen data. The final result can also be visually inspected. Under the assumption that the baseline does not contain a systematic signal (time-locked to the event of interest), the whitened baseline signal should be follow a multivariate Gaussian distribution, i.e., whitened baseline signals should be between -1.96 and 1.96 at a given time sample. Based on the same reasoning, the expected value for the global field power (GFP) is 1 (calculation of the GFP should take into account the true degrees of freedom, e.g. ddof=3 with 2 active SSP vectors): End of explanation covs = mne.compute_covariance(epochs, tmax=0., method=('empirical', 'shrunk'), return_estimators=True) evoked = epochs.average() evoked.plot_white(covs) Explanation: This plot displays both, the whitened evoked signals for each channels and the whitened GFP. The numbers in the GFP panel represent the estimated rank of the data, which amounts to the effective degrees of freedom by which the squared sum across sensors is divided when computing the whitened GFP. The whitened GFP also helps detecting spurious late evoked components which can be the consequence of over- or under-regularization. Note that if data have been processed using signal space separation (SSS) [2], gradiometers and magnetometers will be displayed jointly because both are reconstructed from the same SSS basis vectors with the same numerical rank. This also implies that both sensor types are not any longer statistically independent. These methods for evaluation can be used to assess model violations. Additional introductory materials can be found here &lt;https://goo.gl/ElWrxe&gt;. For expert use cases or debugging the alternative estimators can also be compared: End of explanation
1,991
Given the following text description, write Python code to implement the functionality described below step by step Description: Library Exploration Step1: Corresponded Tag-POStag Table <table class="c-table o-block"><tr class="c-table__row"><th class="c-table__head-cell u-text-label">Tag</th><th class="c-table__head-cell u-text-label">POS</th><th class="c-table__head-cell u-text-label">Morphology</th></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>-LRB-</code></td><td class="c-table__cell u-text"> <code>PUNCT</code></td><td class="c-table__cell u-text"> <code>PunctType=brck</code> <code>PunctSide=ini</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>-PRB-</code></td><td class="c-table__cell u-text"> <code>PUNCT</code></td><td class="c-table__cell u-text"> <code>PunctType=brck</code> <code>PunctSide=fin</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>,</code></td><td class="c-table__cell u-text"> <code>PUNCT</code></td><td class="c-table__cell u-text"> <code>PunctType=comm</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code> Step2: Dependency Analysis Step3: Visualization using displaCy (https Step4: Head and Child in dependency tree spaCy uses the terms head and child to describe the words connected by a single arc in the dependency tree. The term dep is used for the arc label, which describes the type of syntactic relation that connects the child to the head. https Step5: Verb extraction Step6: Extract similar words Step7: Vector representation Step8: Entity Recognition Step9: Visualization using displaCy Named Entity Visualizer (https
Python Code: import spacy nlp = spacy.load('en') text = u"We are living in Singapore.\nIt's blazing outside today!\n" doc = nlp(text) for token in doc: print((token.text, token.lemma, token.tag, token.pos)) for token in doc: print((token.text, token.lemma_, token.tag_, token.pos_)) # lemma means *root form* Explanation: Library Exploration: spaCy Parsing End of explanation #https://spacy.io/docs/api/token doc_ps = nlp("Mr.Sakamoto told us the Dragon Fruits was very yummy!") #for t in doc: t = doc_ps[2] print("token:",t) print("vocab (The vocab object of the parent Doc):", t.vocab) print("doc (The parent document.):", t.doc) print("i (The index of the token within the parent document.):", t.i) print("ent_type_ (Named entity type.):", t.ent_type_) print("ent_iob_ (IOB code of named entity tag):", t.ent_iob_) print("ent_id_ (ID of the entity the token is an instance of):", t.ent_id_) print("lemma_ (Base form of the word, with no inflectional suffixes.):", t.lemma_) print("lower_ (Lower-case form of the word.):", t.lower_) print("shape_ (A transform of the word's string, to show orthographic features.):", t.shape_) print("prefix_ (Integer ID of a length-N substring from the start of the word):", t.prefix_) print("suffix_ (Length-N substring from the end of the word):", t.suffix_) print("like_url (Does the word resemble a URL?):", t.like_url) print("like_num (Does the word represent a number? ):", t.like_num) print("like_email (Does the word resemble an email address?):", t.like_email) print("is_oov (Is the word out-of-vocabulary?):", t.is_oov) print("is_stop (Is the word part of a stop list?):", t.is_stop) print("pos_ (Coarse-grained part-of-speech.):", t.pos_) print("tag_ (Fine-grained part-of-speech.):", t.tag_) print("dep_ (Syntactic dependency relation.):", t.dep_) print("lang_ (Language of the parent document's vocabulary.):", t.lang_) print("prob: (Smoothed log probability estimate of token's type.)", t.prob) print("idx (The character offset of the token within the parent document.):", t.idx) print("sentiment (A scalar value indicating the positivity or negativity of the token):", t.sentiment) print("lex_id (ID of the token's lexical type.):", t.lex_id) print("text (Verbatim text content.):", t.text) print("text_with_ws (Text content, with trailing space character if present.):", t.text_with_ws) print("whitespace_ (Trailing space character if present.):", t.whitespace_) Explanation: Corresponded Tag-POStag Table <table class="c-table o-block"><tr class="c-table__row"><th class="c-table__head-cell u-text-label">Tag</th><th class="c-table__head-cell u-text-label">POS</th><th class="c-table__head-cell u-text-label">Morphology</th></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>-LRB-</code></td><td class="c-table__cell u-text"> <code>PUNCT</code></td><td class="c-table__cell u-text"> <code>PunctType=brck</code> <code>PunctSide=ini</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>-PRB-</code></td><td class="c-table__cell u-text"> <code>PUNCT</code></td><td class="c-table__cell u-text"> <code>PunctType=brck</code> <code>PunctSide=fin</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>,</code></td><td class="c-table__cell u-text"> <code>PUNCT</code></td><td class="c-table__cell u-text"> <code>PunctType=comm</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>:</code></td><td class="c-table__cell u-text"> <code>PUNCT</code></td><td class="c-table__cell u-text"></td></tr><tr 
class="c-table__row"><td class="c-table__cell u-text"> <code>.</code></td><td class="c-table__cell u-text"> <code>PUNCT</code></td><td class="c-table__cell u-text"> <code>PunctType=peri</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>''</code></td><td class="c-table__cell u-text"> <code>PUNCT</code></td><td class="c-table__cell u-text"> <code>PunctType=quot</code> <code>PunctSide=fin</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>&quot;&quot;</code></td><td class="c-table__cell u-text"> <code>PUNCT</code></td><td class="c-table__cell u-text"> <code>PunctType=quot</code> <code>PunctSide=fin</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>#</code></td><td class="c-table__cell u-text"> <code>SYM</code></td><td class="c-table__cell u-text"> <code>SymType=numbersign</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>``</code></td><td class="c-table__cell u-text"> <code>PUNCT</code></td><td class="c-table__cell u-text"> <code>PunctType=quot</code> <code>PunctSide=ini</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code></code></td><td class="c-table__cell u-text"> <code>SYM</code></td><td class="c-table__cell u-text"> <code>SymType=currency</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>ADD</code></td><td class="c-table__cell u-text"> <code>X</code></td><td class="c-table__cell u-text"></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>AFX</code></td><td class="c-table__cell u-text"> <code>ADJ</code></td><td class="c-table__cell u-text"> <code>Hyph=yes</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>BES</code></td><td class="c-table__cell u-text"> <code>VERB</code></td><td class="c-table__cell u-text"></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>CC</code></td><td class="c-table__cell u-text"> <code>CONJ</code></td><td class="c-table__cell u-text"> <code>ConjType=coor</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>CD</code></td><td class="c-table__cell u-text"> <code>NUM</code></td><td class="c-table__cell u-text"> <code>NumType=card</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>DT</code></td><td class="c-table__cell u-text"> <code>DET</code></td><td class="c-table__cell u-text"></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>EX</code></td><td class="c-table__cell u-text"> <code>ADV</code></td><td class="c-table__cell u-text"> <code>AdvType=ex</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>FW</code></td><td class="c-table__cell u-text"> <code>X</code></td><td class="c-table__cell u-text"> <code>Foreign=yes</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>GW</code></td><td class="c-table__cell u-text"> <code>X</code></td><td class="c-table__cell u-text"></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>HVS</code></td><td class="c-table__cell u-text"> <code>VERB</code></td><td class="c-table__cell u-text"></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>HYPH</code></td><td class="c-table__cell u-text"> <code>PUNCT</code></td><td class="c-table__cell u-text"> <code>PunctType=dash</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>IN</code></td><td class="c-table__cell u-text"> 
<code>ADP</code></td><td class="c-table__cell u-text"></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>JJ</code></td><td class="c-table__cell u-text"> <code>ADJ</code></td><td class="c-table__cell u-text"> <code>Degree=pos</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>JJR</code></td><td class="c-table__cell u-text"> <code>ADJ</code></td><td class="c-table__cell u-text"> <code>Degree=comp</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>JJS</code></td><td class="c-table__cell u-text"> <code>ADJ</code></td><td class="c-table__cell u-text"> <code>Degree=sup</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>LS</code></td><td class="c-table__cell u-text"> <code>PUNCT</code></td><td class="c-table__cell u-text"> <code>NumType=ord</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>MD</code></td><td class="c-table__cell u-text"> <code>VERB</code></td><td class="c-table__cell u-text"> <code>VerbType=mod</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>NFP</code></td><td class="c-table__cell u-text"> <code>PUNCT</code></td><td class="c-table__cell u-text"></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>NIL</code></td><td class="c-table__cell u-text"></td><td class="c-table__cell u-text"></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>NN</code></td><td class="c-table__cell u-text"> <code>NOUN</code></td><td class="c-table__cell u-text"> <code>Number=sing</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>NNP</code></td><td class="c-table__cell u-text"> <code>PROPN</code></td><td class="c-table__cell u-text"> <code>NounType=prop</code> <code>Number=sign</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>NNPS</code></td><td class="c-table__cell u-text"> <code>PROPN</code></td><td class="c-table__cell u-text"> <code>NounType=prop</code> <code>Number=plur</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>NNS</code></td><td class="c-table__cell u-text"> <code>NOUN</code></td><td class="c-table__cell u-text"> <code>Number=plur</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>PDT</code></td><td class="c-table__cell u-text"> <code>ADJ</code></td><td class="c-table__cell u-text"> <code>AdjType=pdt</code> <code>PronType=prn</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>POS</code></td><td class="c-table__cell u-text"> <code>PART</code></td><td class="c-table__cell u-text"> <code>Poss=yes</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>PRP</code></td><td class="c-table__cell u-text"> <code>PRON</code></td><td class="c-table__cell u-text"> <code>PronType=prs</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>PRP</code></td><td class="c-table__cell u-text"> <code>ADJ</code></td><td class="c-table__cell u-text"> <code>PronType=prs</code> <code>Poss=yes</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>RB</code></td><td class="c-table__cell u-text"> <code>ADV</code></td><td class="c-table__cell u-text"> <code>Degree=pos</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>RBR</code></td><td class="c-table__cell u-text"> <code>ADV</code></td><td class="c-table__cell u-text"> 
<code>Degree=comp</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>RBS</code></td><td class="c-table__cell u-text"> <code>ADV</code></td><td class="c-table__cell u-text"> <code>Degree=sup</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>RP</code></td><td class="c-table__cell u-text"> <code>PART</code></td><td class="c-table__cell u-text"></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>SP</code></td><td class="c-table__cell u-text"> <code>SPACE</code></td><td class="c-table__cell u-text"></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>SYM</code></td><td class="c-table__cell u-text"> <code>SYM</code></td><td class="c-table__cell u-text"></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>TO</code></td><td class="c-table__cell u-text"> <code>PART</code></td><td class="c-table__cell u-text"> <code>PartType=inf</code> <code>VerbForm=inf</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>UH</code></td><td class="c-table__cell u-text"> <code>INTJ</code></td><td class="c-table__cell u-text"></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>VB</code></td><td class="c-table__cell u-text"> <code>VERB</code></td><td class="c-table__cell u-text"> <code>VerbForm=inf</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>VBD</code></td><td class="c-table__cell u-text"> <code>VERB</code></td><td class="c-table__cell u-text"> <code>VerbForm=fin</code> <code>Tense=past</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>VBG</code></td><td class="c-table__cell u-text"> <code>VERB</code></td><td class="c-table__cell u-text"> <code>VerbForm=part</code> <code>Tense=pres</code> <code>Aspect=prog</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>VBN</code></td><td class="c-table__cell u-text"> <code>VERB</code></td><td class="c-table__cell u-text"> <code>VerbForm=part</code> <code>Tense=past</code> <code>Aspect=perf</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>VBP</code></td><td class="c-table__cell u-text"> <code>VERB</code></td><td class="c-table__cell u-text"> <code>VerbForm=fin</code> <code>Tense=pres</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>VBZ</code></td><td class="c-table__cell u-text"> <code>VERB</code></td><td class="c-table__cell u-text"> <code>VerbForm=fin</code> <code>Tense=pres</code> <code>Number=sing</code> <code>Person=3</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>WDT</code></td><td class="c-table__cell u-text"> <code>ADJ</code></td><td class="c-table__cell u-text"> <code>PronType=int|rel</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>WP</code></td><td class="c-table__cell u-text"> <code>NOUN</code></td><td class="c-table__cell u-text"> <code>PronType=int|rel</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>WP</code></td><td class="c-table__cell u-text"> <code>ADJ</code></td><td class="c-table__cell u-text"> <code>Poss=yes</code> <code>PronType=int|rel</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> <code>WRB</code></td><td class="c-table__cell u-text"> <code>ADV</code></td><td class="c-table__cell u-text"> <code>PronType=int|rel</code></td></tr><tr class="c-table__row"><td class="c-table__cell u-text"> 
<code>XX</code></td><td class="c-table__cell u-text"> <code>X</code></td><td class="c-table__cell u-text"></td></tr></table> Definition of Tags <table cellpadding="2" cellspacing="2" border="0"> <tr bgcolor="#DFDFFF" align="none"> <td align="none"> <div align="left">Number</div> </td> <td> <div align="left">Tag</div> </td> <td> <div align="left">Description</div> </td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 1. </td> <td>CC </td> <td>Coordinating conjunction </td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 2. </td> <td>CD </td> <td>Cardinal number </td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 3. </td> <td>DT </td> <td>Determiner </td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 4. </td> <td>EX </td> <td>Existential <i>there<i> </i></i></td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 5. </td> <td>FW </td> <td>Foreign word </td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 6. </td> <td>IN </td> <td>Preposition or subordinating conjunction </td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 7. </td> <td>JJ </td> <td>Adjective </td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 8. </td> <td>JJR </td> <td>Adjective, comparative </td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 9. </td> <td>JJS </td> <td>Adjective, superlative </td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 10. </td> <td>LS </td> <td>List item marker </td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 11. </td> <td>MD </td> <td>Modal </td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 12. </td> <td>NN </td> <td>Noun, singular or mass </td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 13. </td> <td>NNS </td> <td>Noun, plural </td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 14. </td> <td>NNP </td> <td>Proper noun, singular </td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 15. </td> <td>NNPS </td> <td>Proper noun, plural </td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 16. </td> <td>PDT </td> <td>Predeterminer </td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 17. </td> <td>POS </td> <td>Possessive ending </td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 18. </td> <td>PRP </td> <td>Personal pronoun </td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 19. </td> <td>PRP </td> <td>Possessive pronoun </td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 20. </td> <td>RB </td> <td>Adverb </td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 21. </td> <td>RBR </td> <td>Adverb, comparative </td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 22. </td> <td>RBS </td> <td>Adverb, superlative </td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 23. </td> <td>RP </td> <td>Particle </td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 24. </td> <td>SYM </td> <td>Symbol </td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 25. </td> <td>TO </td> <td><i>to</i> </td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 26. </td> <td>UH </td> <td>Interjection </td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 27. </td> <td>VB </td> <td>Verb, base form </td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 28. </td> <td>VBD </td> <td>Verb, past tense </td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 29. </td> <td>VBG </td> <td>Verb, gerund or present participle </td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 30. </td> <td>VBN </td> <td>Verb, past participle </td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 31. </td> <td>VBP </td> <td>Verb, non-3rd person singular present </td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 32. 
</td> <td>VBZ </td> <td>Verb, 3rd person singular present </td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 33. </td> <td>WDT </td> <td>Wh-determiner </td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 34. </td> <td>WP </td> <td>Wh-pronoun </td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 35. </td> <td>WP </td> <td>Possessive wh-pronoun </td> </tr> <tr bgcolor="#FFFFCA"> <td align="none"> 36. </td> <td>WRB </td> <td>Wh-adverb </table> End of explanation doc_dep = nlp(u'I like chicken rice and Laksa.') for np in doc_dep.noun_chunks: print((np.text, np.root.text, np.root.dep_, np.root.head.text)) for t in doc_dep: print((t.text, t.dep_, t.tag_)) Explanation: Dependency Analysis End of explanation for token in doc_dep: # Orth: Original, Head: head of subtree print((token.text, token.dep_, token.n_lefts, token.n_rights, token.head.orth_, [t.orth_ for t in token.lefts], [t.orth_ for t in token.rights])) dependency_pattern = '{left}<---{word}[{w_type}]--->{right}\n--------' for token in doc_dep: print (dependency_pattern.format(word=token.orth_, w_type=token.dep_, left=[t.orth_ for t in token.lefts], right=[t.orth_ for t in token.rights])) Explanation: Visualization using displaCy (https://demos.explosion.ai/displacy/) <img src="spacy_dependency01.png"> End of explanation for t in doc_dep: print((t.text, t.dep_,t.tag_,t.pos_),(t.head.text, t.head.dep_,t.head.tag_,t.head.pos_)) Explanation: Head and Child in dependency tree spaCy uses the terms head and child to describe the words connected by a single arc in the dependency tree. The term dep is used for the arc label, which describes the type of syntactic relation that connects the child to the head. https://spacy.io/docs/usage/dependency-parse End of explanation # Load symbols from spacy.symbols import nsubj, VERB verbs = set() for token in doc: print ((token, token.dep, token.head, token.head.pos)) if token.dep == nsubj and token.head.pos == VERB: verbs.add(token.head) verbs Explanation: Verb extraction End of explanation from numpy import dot from numpy.linalg import norm # cosine similarity cosine = lambda v1, v2: dot(v1, v2) / (norm(v1) * norm(v2)) target_word = 'Singapore' sing = nlp.vocab[target_word] sing # gather all known words except for taget word all_words = list({w for w in nlp.vocab if w.has_vector and w.orth_.islower() and w.lower_ != target_word.lower()}) len(all_words) # sort by similarity #all_words.sort(key=lambda w: cosine(w.vector, sing.vector)) #all_words.reverse() #print("Top 10 most similar words to",target_word) #for word in all_words[:10]: # print(word.orth_) Explanation: Extract similar words End of explanation country1 = nlp.vocab['china'] race1 = nlp.vocab['chinese'] country2 = nlp.vocab['japan'] result = country1.vector - race1.vector + country2.vector all_words = list({w for w in nlp.vocab if w.has_vector and w.orth_.islower() and w.lower_ != "china" and w.lower_ != "chinese" and w.lower_ != "japan"}) all_words.sort(key=lambda w: cosine(w.vector, result)) all_words[0].orth_ # Top 3 results for word in all_words[:3]: print(word.orth_) Explanation: Vector representation End of explanation example_sent = "NTUC has raised S$25 million to help workers re-skill and upgrade their skills, secretary-general Chan Chun Sing said at the May Day Rally on Monday " parsed = nlp(example_sent) for token in parsed: print((token.orth_, token.ent_type_ if token.ent_type_ != "" else "(not an entity)")) Explanation: Entity Recognition End of explanation import random from spacy.gold import GoldParse from spacy.language import 
EntityRecognizer train_data = [ ('Who is Chaka Khan?', [(7, 17, 'PERSON')]), ('I like Bangkok and Buangkok.', [(7, 14, 'LOC'), (19, 27, 'LOC')]) ] nlp2 = spacy.load('en', entity=False, parser=False) ner = EntityRecognizer(nlp2.vocab, entity_types=['PERSON', 'LOC']) for itn in range(5): random.shuffle(train_data) for raw_text, entity_offsets in train_data: doc2 = nlp2.make_doc(raw_text) gold = GoldParse(doc2, entities=entity_offsets) nlp.tagger(doc2) ner.update(doc2, gold) ner.model.end_training() nlp.save_to_directory('./sample_ner/') nlp3 = spacy.load('en', path='./sample_ner/') example_sent = "Who is Tai Seng Tan?" doc3 = nlp3(example_sent) for ent in doc3.ents: print(ent.label_, ent.text) Explanation: Visualization using displaCy Named Entity Visualizer (https://demos.explosion.ai/displacy-ent/) <img src="spacy_ner01.png"> List of entity types https://spacy.io/docs/usage/entity-recognition <table class="c-table o-block"><tr class="c-table__row"><th class="c-table__head-cell u-text-label">Type</th><th class="c-table__head-cell u-text-label">Description</th></tr><tr class="c-table__row"><td class="c-table__cell u-text"><code>PERSON</code></td><td class="c-table__cell u-text">People, including fictional.</td></tr><tr class="c-table__row"><td class="c-table__cell u-text"><code>NORP</code></td><td class="c-table__cell u-text">Nationalities or religious or political groups.</td></tr><tr class="c-table__row"><td class="c-table__cell u-text"><code>FACILITY</code></td><td class="c-table__cell u-text">Buildings, airports, highways, bridges, etc.</td></tr><tr class="c-table__row"><td class="c-table__cell u-text"><code>ORG</code></td><td class="c-table__cell u-text">Companies, agencies, institutions, etc.</td></tr><tr class="c-table__row"><td class="c-table__cell u-text"><code>GPE</code></td><td class="c-table__cell u-text">Countries, cities, states.</td></tr><tr class="c-table__row"><td class="c-table__cell u-text"><code>LOC</code></td><td class="c-table__cell u-text">Non-GPE locations, mountain ranges, bodies of water.</td></tr><tr class="c-table__row"><td class="c-table__cell u-text"><code>PRODUCT</code></td><td class="c-table__cell u-text">Objects, vehicles, foods, etc. (Not services.)</td></tr><tr class="c-table__row"><td class="c-table__cell u-text"><code>EVENT</code></td><td class="c-table__cell u-text">Named hurricanes, battles, wars, sports events, etc.</td></tr><tr class="c-table__row"><td class="c-table__cell u-text"><code>WORK_OF_ART</code></td><td class="c-table__cell u-text">Titles of books, songs, etc.</td></tr><tr class="c-table__row"><td class="c-table__cell u-text"><code>LANGUAGE</code></td><td class="c-table__cell u-text">Any named language.</td></tr></table> Build own entity recognizer End of explanation
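A small follow-up that only reuses objects already created in this notebook: once a document has been parsed, collections.Counter gives a quick per-label tally of the recognised entities. The snippet assumes the parsed Doc named parsed from the entity recognition section above, but any parsed Doc works.

from collections import Counter

entity_counts = Counter(ent.label_ for ent in parsed.ents)
print(entity_counts)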
1,992
Given the following text description, write Python code to implement the functionality described below step by step Description: Experiments for FOSSACS'19 Paper Step1: Hack that alows to parse ltl3ba automata without universal branching. Step2: $\newcommand{\F}{\mathsf{F}}$ $\newcommand{\G}{\mathsf{G}}$ $\newcommand{\FG}{\mathsf{F,G}}$ Formulae Detect mergable formulae Step3: Literature Step4: Mergeable formulae We first count the numbers of formulae with $\F$- and $\FG$-merging. After that we save the $\FG$-mergeable formulae into a separate file. Step5: Random Step6: Generate 1000 mergeable formulae with priorities 1,2,4 Step7: Evaluating the impact of $\F$- and $\FG$-merging We compare the $\F$- and $\FG$-merging translation to the basic one. We compare the sizes of SLAA (alternating). We use a wrapper script ltlcross_runner for ltlcross that uses the pandas library to manipulate data. It requires some settings. Step8: Scatter plots
Python Code: from ltlcross_runner import LtlcrossRunner from IPython.display import display import pandas as pd import spot import sys spot.setup(show_default='.a') pd.options.display.float_format = '{: .0f}'.format pd.options.display.latex.multicolumn_format = 'c' Explanation: Experiments for FOSSACS'19 Paper: LTL to Smaller Self-Loop Alternating Automata Authors: František Blahoudek, Juraj Major, Jan Strejček End of explanation import os os.environ['SPOT_HOA_TOLERANT']='TRUE' %%bash ltl3ba -v ltl3tela -v ltl2tgba --version # If there are already files with results, and rerun is False, ltlcross is not run again. rerun = False Explanation: Hack that alows to parse ltl3ba automata without universal branching. End of explanation def is_mergable(f, level=3): '''Runs ltl3tela with the -m argument to detect whether the given formula `f` is mergable. level 1: F-mergeable level 2: G-mergeable level 3: F,G-mergeable ''' if level == 3: return is_mergable(f,2) or is_mergable(f,1) res = !ltl3tela -m{level} -f "{f}" return res[0] == '1' is_mergable('FGa',2) Explanation: $\newcommand{\F}{\mathsf{F}}$ $\newcommand{\G}{\mathsf{G}}$ $\newcommand{\FG}{\mathsf{F,G}}$ Formulae Detect mergable formulae End of explanation tmp_file = 'formulae/tmp.ltl' lit_pref = 'formulae/literature' lit_file = lit_pref + '.ltl' lit_merg_file = 'formulae/lit.ltl' # The well-known set of formulae from literature !genltl --dac-patterns --eh-patterns --sb-patterns --beem-patterns --hkrss-patterns > $tmp_file # We add also negation of all the formulae. # We remove all M and W operators as LTL3BA does not understand them. # The `relabel-bool` option renames `G(a | b)` into `G a`. !ltlfilt --negate $tmp_file | \ ltlfilt $tmp_file -F - --unique -r3 --remove-wm --relabel-bool=abc | \ ltlfilt -v --equivalent-to=0 | ltlfilt -v --equivalent-to=1> $lit_file Explanation: Literature End of explanation lit_f_mergable = [is_mergable(l,1) for l in open(lit_file)] lit_mergable = [is_mergable(l,3) for l in open(lit_file)] counts = '''Out of {} formulae known from literature, there are: {} with F-merging, {} with F,G-merging, and {} with no merging possibility ''' print(counts.format( len(lit_mergable), lit_f_mergable.count(True), lit_mergable.count(True), lit_mergable.count(False))) with open(lit_merg_file,'w') as out: for l in open(lit_file): if is_mergable(l): out.write(l) Explanation: Mergeable formulae We first count the numbers of formulae with $\F$- and $\FG$-merging. After that we save the $\FG$-mergeable formulae into a separate file. 
End of explanation def generate(n=100,func=(lambda x: True),filename=None,priorities='M=0,W=0,xor=0',ap=['a','b','c','d','e']): if filename is not None: if filename is sys.stdout: file_h = filename else: file_h = open(filename,'w') f = spot.randltl(ap, ltl_priorities=priorities, simplify=3,tree_size=15).relabel_bse(spot.Abc)\ .unabbreviate('WM') i = 0 printed = set() while(i < n): form = next(f) if form in printed: continue if func(form) and not form.is_tt() and not form.is_ff(): if filename is not None: print(form,file=file_h) printed.add(form) i += 1 return list(printed) def measure_rand(n=1000,priorities='M=0,W=0,xor=0',ap=['a','b','c','d','e']): rand = generate(n,priorities=priorities,ap=ap) rand_mergable = [is_mergable(l,3) for l in rand] rand_f_mergable = [is_mergable(l,1) for l in rand] counts = '''Out of {} random formulae, there are: {} with F-merging, {} with F,G-merging, and {} with no merging possibility ''' print(counts.format( len(rand_mergable), rand_f_mergable.count(True), rand_mergable.count(True), rand_mergable.count(False))) return rand, rand_f_mergable, rand_mergable def get_priorities(n): '''Returns the `priority string` for ltlcross where `n` is the priority of both F and G. The operators W,M,xor have priority 0 and the rest has the priority 1. ''' return 'M=0,W=0,xor=0,G={0},F={0}'.format(n) measure_rand(); measure_rand(priorities=get_priorities(2)); rand4 = measure_rand(priorities=get_priorities(4)) randfg = measure_rand(priorities='xor=0,implies=0,equiv=0,X=0,W=0,M=0,R=0,U=0,F=2,G=2') Explanation: Random End of explanation fg_priorities = [1,2,4] !mkdir -p formulae #generate(total_r,filename=fg_f,priorities='xor=0,implies=0,equiv=0,X=0,W=0,M=0,R=0,U=0,F=3,G=3'); for i in fg_priorities: generate(1000,func=lambda x:is_mergable(x,3), filename='formulae/rand{}.ltl'.format(i), priorities=get_priorities(i)) generate(1000,func=lambda x:is_mergable(x,3), filename='formulae/randfg.ltl'.format(i), priorities='xor=0,implies=0,equiv=0,X=0,W=0,M=0,R=0,U=0,F=2,G=2'); Explanation: Generate 1000 mergeable formulae with priorities 1,2,4 End of explanation resfiles = {} runners = {} ### Tools' setting ### # a dict of a form (name : ltlcross cmd) ltl3tela_shared = "ltl3tela -p1 -t0 -n0 -a3 -f %f " #end = " | awk '!p;/^--END--/{p=1}' > %O" end = " > %O" tools = {"FG-merging" : ltl3tela_shared + end, #"FG-merging+compl" : ltl3tela_shared + "-n1" + end, "F-merging" : ltl3tela_shared + "-G0" + end, #"G-merging" : ltl3tela_shared + "-F0" + end, "basic" : ltl3tela_shared + "-F0 -G0" + end, "LTL3BA" : "ltl3ba -H1 -f %s" + end, } ### Order in which we want to sort the translations MI_order = ["LTL3BA", "basic","F-merging","FG-merging"] ### Files with measured statistics ### resfiles['lit'] = 'MI_alt-lit.csv' resfiles['randfg'] = 'MI_alt-randfg.csv' for i in fg_priorities: resfiles['rand{}'.format(i)] = 'MI_alt-rand{}.csv'.format(i) ### Measures to be measured cols = ["states","transitions","nondet_states","nondet_aut","acc"] for name,rfile in resfiles.items(): runners[name] = LtlcrossRunner(tools,res_filename=rfile, formula_files=['formulae/{}.ltl'.format(name)], cols=cols) for r in runners.values(): if rerun: r.run_ltlcross() r.parse_results() t1 = {} for name,r in runners.items(): tmp = r.cummulative(col=cols).unstack(level=0).loc[MI_order,cols] t1_part = tmp.loc[:,['states','acc']] t1_part["det. 
automata"] = len(r.values)-tmp.nondet_aut t1[name] = t1_part t1_merged = pd.concat(t1.values(),axis=1,keys=t1.keys()).loc[MI_order,:] t1_merged row_map={"basic" : 'basic', "F-merging" : '$\F$-merging', "G-merging" : '$\G$-merging', "FG-merging" : '$\FG$-merging', "FG-merging+compl" : "$\FG$-merging + complement"} t1_merged.rename(row_map,inplace=True); t1 = t1_merged.rename_axis(['',"translation"],axis=1) t1.index.name = None t1 rand = t1.copy() rand.columns = rand.columns.swaplevel() rand.sort_index(axis=1,level=1,inplace=True,sort_remaining=False,ascending=True) idx = pd.IndexSlice corder = ['states','acc'] parts = [rand.loc[:,idx[[c]]] for c in corder] rand = pd.concat(parts,names=corder,axis=1) rand print(rand.to_latex(escape=False,bold_rows=False),file=open('fossacs_t1.tex','w')) cp fossacs_t1.tex /home/xblahoud/research/ltl3tela_papers/ Explanation: Evaluating the impact of $\F$- and $\FG$-merging We compare the $\F$- and $\FG$-merging translation to the basic one. We compare the sizes of SLAA (alternating). We use a wrapper script ltlcross_runner for ltlcross that uses the pandas library to manipulate data. It requires some settings. End of explanation def fix_tools(tool): return tool.replace('FG-','$\\FG$-').replace('F-','$\\F$-') def sc_plot(r,t1,t2,filename=None,include_equal = True,col='states',log=None,size=(5.5,5),kw=None,clip=None, add_count=True): merged = isinstance(r,list) if merged: vals = pd.concat([run.values[col] for run in r]) vals.index = vals.index.droplevel(0) vals = vals.groupby(vals.index).first() else: vals = r.values[col] to_plot = vals.loc(axis=1)[[t1,t2]] if include_equal else\ vals[vals[t1] != vals[t2]].loc(axis=1)[[t1,t2]] to_plot['count'] = 1 to_plot.dropna(inplace=True) to_plot = to_plot.groupby([t1,t2]).count().reset_index() if filename is not None: print(scatter_plot(to_plot, log=log, size=size,kw=kw,clip=clip, add_count=add_count),file=open(filename,'w')) else: return scatter_plot(to_plot, log=log, size=size,kw=kw,clip=clip, add_count=add_count) def scatter_plot(df, short_toolnames=True, log=None, size=(5.5,5),kw=None,clip=None,add_count = True): t1, t2, _ = df.columns.values if short_toolnames: t1 = fix_tools(t1.split('/')[0]) t2 = fix_tools(t2.split('/')[0]) vals = ['({},{}) [{}]\n'.format(v1,v2,c) for v1,v2,c in df.values] plots = '''\\addplot[ scatter, scatter src=explicit, only marks, fill opacity=0.5, draw opacity=0] coordinates {{{}}};'''.format(' '.join(vals)) start_line = 0 if log is None else 1 line = '\\addplot[darkgreen,domain={}:{}]{{x}};'.format(start_line, min(df.max(axis=0)[:2])+1) axis = 'axis' mins = 'xmin=0,ymin=0,' clip_str = '' if clip is not None: clip_str = '\\draw[red,thick] ({},{}) rectangle ({},{});'.format(*clip) if log: if log == 'both': axis = 'loglogaxis' mins = 'xmin=1,ymin=1,' else: axis = 'semilog{}axis'.format(log) mins = mins + '{}min=1,'.format(log) args = '' if kw is not None: if 'title' in kw and add_count: kw['title'] = '{{{} ({})}}'.format(kw['title'],df['count'].sum()) args = ['{}={},\n'.format(k,v) for k,v in kw.items()] args = ''.join(args) res = '''%\\begin{{tikzpicture}} \\pgfplotsset{{every axis legend/.append style={{ cells={{anchor=west}}, draw=none, }}}} \\pgfplotsset{{colorbar/width=.3cm}} \\pgfplotsset{{title style={{align=center, font=\\small}}}} \\pgfplotsset{{compat=1.14}} \\begin{{{0}}}[ {1} colorbar, colormap={{example}}{{ color(0)=(blue) color(500)=(green) color(1000)=(red) }}, %thick, axis x line* = bottom, axis y line* = left, width={2}cm, height={3}cm, xlabel={{{4}}}, ylabel={{{5}}}, cycle 
list={{% {{darkgreen, solid}}, {{blue, densely dashed}}, {{red, dashdotdotted}}, {{brown, densely dotted}}, {{black, loosely dashdotted}} }}, {6}% ] {7}% {8}% {9}% \\end{{{0}}} %\\end{{tikzpicture}} '''.format(axis,mins, size[0],size[1],t1,t2, args,plots,line, clip_str) return res ltl3ba = 'LTL3BA' fgm = 'FG-merging' fm = 'F-merging' basic = 'basic' size = (4,4) clip_names = ('xmin','ymin','xmax','ymax') kw = {} sc_plot(runners['lit'],basic,fgm,'sc_lit.tex',size=size,kw=kw.copy()) size = (4.3,4.5) kw['title'] = 'literature' sc_plot(runners['lit'],basic,fgm,'sc_lit.tex',size=size,kw=kw.copy()) for suff in ['1','2','4','fg']: kw['title'] = 'rand'+suff sc_plot(runners['rand'+suff],basic,fgm,'sc_rand{}.tex'.format(suff),size=size,kw=kw.copy()) cp sc_lit.tex sc_rand*.tex ~/research/ltl3tela_papers r = runners['rand4'] r.smaller_than('basic','F-merging') Explanation: Scatter plots End of explanation
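The scatter-plot machinery above reduces to one pandas idiom: take two translations' per-formula automaton sizes, collapse identical (x, y) pairs, and emit each distinct pair with its multiplicity in the "(x,y) [count]" form the TikZ template expects. A minimal, self-contained sketch of that aggregation step, using made-up state counts in place of the ltlcross results:

import pandas as pd

# Hypothetical per-formula automaton sizes for two translations.
sizes = pd.DataFrame({
    "basic":      [3, 4, 4, 5, 7, 4],
    "FG-merging": [2, 4, 3, 3, 6, 3],
})

# One row per distinct (basic, FG-merging) pair, plus how often it occurs.
to_plot = sizes.copy()
to_plot["count"] = 1
to_plot = to_plot.groupby(["basic", "FG-merging"]).count().reset_index()

for x, y, c in to_plot.values:
    print("({},{}) [{}]".format(x, y, c))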
1,993
Given the following text description, write Python code to implement the functionality described below step by step Description: Solving Crimes with Data Science Markus Harrer INNOQ Deutschland Facts of the case: a white bus with a red sign on the side window was stolen; the police did an innovative mobile phone investigation; there was only a phone number of unknown identity
Python Code: import pandas as pd cdr = pd.read_excel("cdr_data_export.xlsx") cdr.head() Explanation: Solcing Crimes with Data Science Markus Harrer INNOQ Deutschland Facts of the case a white bus with a red sign on the side window was stolen police did an innovative mpbile phone investigation there was only phone number of unknown identity: 04638472273 Our approach: Where are the whereabouts / place of residence of the mobile phone owner? What do we have? CDRs (Call Data Records) in an Excel file! That means: Information about the cell towers used for the phone calls! Import and Load Using pandas to read an Excel file into a Dataframe. End of explanation towers = pd.read_csv("darknet.io/hacks/infrastructure/mobile_net/texas_towers.csv", index_col=0) towers.head() Explanation: Unfortunately: Information about the tower's locations are missing! We need a second data source from the DARKNET! Load another dataset This time: a CSV file End of explanation call_data = cdr.join(towers, on='TowerID') call_data.head() call_data[['Caller', 'Symbol', 'Callee']] = call_data['Call'].str.split("(->|-X)", expand=True) call_data.head() call_data['Event'] = call_data['Symbol'].map( { "->" : "Incoming", "-X" : "Missed" }) call_data.head() Explanation: Join Bringing datasets together End of explanation suspect_data = call_data[(call_data['Callee'] == '04638472273') | (call_data['Caller'] == '04638472273')].copy() suspect_data.head() suspect_data['Start'] = pd.to_datetime(suspect_data['Start']) suspect_data.head() suspect_data['DoW'] = suspect_data['Start'].dt.weekday_name suspect_data.head() suspect_data.plot.scatter('TowerLon', "TowerLat"); suspect_on_weekend = suspect_data[suspect_data['DoW'].isin(['Saturday', 'Sunday'])].copy() suspect_on_weekend.head() suspect_on_weekend.plot.scatter('TowerLon', "TowerLat"); suspect_on_weekend['Start'] suspect_on_weekend['hour'] = suspect_on_weekend['Start'].dt.hour suspect_on_weekend.head() suspect_on_weekend_night = suspect_on_weekend[ (suspect_on_weekend['hour'] < 6) | (suspect_on_weekend['hour'] > 22)] suspect_on_weekend_night.head() ax = suspect_on_weekend_night.plot.scatter('TowerLat', 'TowerLon') from sklearn.cluster import KMeans kmeans = KMeans(n_clusters = 1) data = suspect_on_weekend_night[['TowerLat', 'TowerLon']] kmeans.fit_predict(data) centroids = kmeans.cluster_centers_ ax.scatter(x = centroids[:, 0], y = centroids[:, 1], c = 'r', marker = 'x') ax.figure centroids Explanation: Filtering End of explanation
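A side note on the final clustering step above: with n_clusters = 1, the K-means centroid is nothing more than the coordinate-wise mean, so the same "home location" estimate can be read off without scikit-learn. A minimal sketch, assuming the suspect_on_weekend_night frame with TowerLat/TowerLon columns built above:

# With a single cluster, K-means converges to the plain mean of the points,
# so the estimated home location is just the average night-time tower position.
home_lat = suspect_on_weekend_night['TowerLat'].mean()
home_lon = suspect_on_weekend_night['TowerLon'].mean()
print('Estimated home location: ({:.5f}, {:.5f})'.format(home_lat, home_lon))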
1,994
Given the following text description, write Python code to implement the functionality described below step by step Description: DO NOT FORGET TO DROP ISSUE_D AFTER PREPPING Step1: Until I figure out a good imputation method (e.g. bayes PCA), just drop columns with null still Step2: straight up out of box elastic net with slightly tweaked alpha Step3: Examine performance on test set
Python Code: platform = 'lendingclub' store = pd.HDFStore( '/Users/justinhsi/justin_tinkering/data_science/lendingclub/{0}_store.h5'. format(platform), append=True) loan_info = store['train_filtered_columns'] columns = loan_info.columns.values # checking dtypes to see which columns need one hotting, and which need null or not to_one_hot = [] to_null_or_not = [] do_nothing = [] for col in columns: if loan_info[col].dtypes == np.dtype('O'): print(col, loan_info[col].isnull().value_counts(dropna=False).to_dict()) to_one_hot.append(col) elif len(loan_info[col].isnull().value_counts(dropna=False)) > 1: print(col, loan_info[col].isnull().value_counts(dropna=False).to_dict()) to_null_or_not.append(col) else: print(col, loan_info[col].isnull().value_counts(dropna=False).to_dict()) do_nothing.append(col) Explanation: DO NOT FORGET TO DROP ISSUE_D AFTER PREPPING End of explanation standardized, eval_cols, mean_series, std_dev_series = data_prep.process_data_train( loan_info) Explanation: Until I figure out a good imputation method (e.g. bayes PCA), just drop columns with null still End of explanation regr = RandomForestRegressor( n_estimators=20, random_state=0, max_features=10, min_samples_split=20, min_samples_leaf=10, n_jobs=-1, ) regr.fit(standardized, eval_cols) # dump the model joblib.dump(regr, 'model_dump/model_0.2.0.pkl') # joblib.dump((mean_series, std_dev_series), 'model_dump/mean_stddev.pkl') regr.score(standardized, eval_cols) now = time.strftime("%Y_%m_%d_%Hh_%Mm_%Ss") # info to stick in detailed dataframe describing each model model_info = {'model_version': '0.2.0', 'target': 'npv_roi_10', 'weights': 'None', 'algo_model': 'RF_regr', 'hyperparams': "n_estimators=20,random_state=0,max_features=10,min_samples_split=20,min_samples_leaf=10,n_jobs=-1", 'cost_func': 'sklearn default, which I think is mse', 'useful_notes': 'R2 score of .199350 (regr.score())', 'date': now} model_info_df = pd.DataFrame(model_info, index = ['0.2.0']) store.open() store.append( 'model_info', model_info_df, data_columns=True, index=True, append=True, ) store.close() Explanation: straight up out of box elastic net with slightly tweaked alpha End of explanation store.open() test = store['test_filtered_columns'] train = store['train_filtered_columns'] loan_npv_rois = store['loan_npv_rois'] default_series = test['target_strict'] results = store['results'] store.close() train_X, train_y = data_prep.process_data_test(train) train_y = train_y['npv_roi_10'].values test_X, test_y = data_prep.process_data_test(test) test_y = test_y['npv_roi_10'].values regr = joblib.load('model_dump/model_0.2.0.pkl') regr_version = '0.2.0' test_yhat = regr.predict(test_X) train_yhat = regr.predict(train_X) test_mse = np.sum((test_yhat - test_y)**2)/len(test_y) train_mse = np.sum((train_yhat - train_y)**2)/len(train_y) def eval_models(trials, port_size, available_loans, regr, regr_version, test, loan_npv_rois, default_series): results = {} pct_default = {} test_copy = test.copy() for trial in tqdm_notebook(np.arange(trials)): loan_ids = np.random.choice( test_copy.index.values, available_loans, replace=False) loans_to_pick_from = test_copy.loc[loan_ids, :] scores = regr.predict(loans_to_pick_from) scores_series = pd.Series(dict(zip(loan_ids, scores))) scores_series.sort_values(ascending=False, inplace=True) picks = scores_series[:900].index.values results[trial] = loan_npv_rois.loc[picks, :].mean().to_dict() pct_default[trial] = (default_series.loc[picks].sum()) / port_size pct_default_series = pd.Series(pct_default) results_df = 
pd.DataFrame(results).T results_df['pct_def'] = pct_default_series return results_df # as per done with baseline models, say 3000 loans available # , pick 900 of them trials = 20000 port_size = 900 available_loans = 3000 model_results = eval_models(trials, port_size, available_loans, regr, regr_version, test_X, loan_npv_rois, default_series) multi_index = [] for col in model_results.columns.values: multi_index.append((col,regr_version)) append_results = model_results append_results.columns = pd.MultiIndex.from_tuples(multi_index, names = ['discount_rate', 'model']) try: results = results.join(append_results) except ValueError: results.loc[:, (slice(None), slice('0.2.0','0.2.0'))] = append_results results.sort_index(axis=1, inplace = True) store.open() store['results'] = results model_info = store['model_info'] store.close() results.describe() model_info Explanation: Examine performance on test set End of explanation
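The evaluation loop above follows a simple portfolio-selection pattern: sample a pool of available loans, rank them by predicted score, fund the top of the ranking, and record the realized return of that portfolio. A stripped-down sketch of a single trial; the scores and realized returns here are synthetic placeholders, whereas the notebook uses the trained regressor and the loan_npv_rois table:

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical universe of 10,000 test loans with predicted and realized returns.
loan_ids = np.arange(10_000)
predicted = pd.Series(rng.normal(0.05, 0.02, size=loan_ids.size), index=loan_ids)
realized = pd.Series(rng.normal(0.04, 0.10, size=loan_ids.size), index=loan_ids)

# One trial: 3,000 loans are available and the 900 with the highest predicted score are funded.
available = rng.choice(loan_ids, size=3_000, replace=False)
picks = predicted.loc[available].nlargest(900).index
print('Mean realized ROI of the selected portfolio: {:.4f}'.format(realized.loc[picks].mean()))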
1,995
Given the following text description, write Python code to implement the functionality described below step by step Description: Step1 Step1: Step2
Python Code: batchlogfile = 'sample_dataset/batch_log.json' df_batch = pd.read_json(batchlogfile, lines=True) index_purchase = ['event_type','id','timestamp','amount'] index_friend = ['event_type','id1','id2','timestamp'] #df_batch.head() #df_batch.describe() # Read D and T df_DT=df_batch[df_batch['D'].notnull()] df_DT=df_DT[['D','T']] D = df_DT.values[0][0] T = df_DT.values[0][1] #print(D) #print(T) #df_DT.head() # check D and T values if D < 1: print('Program terminated because of D < 1') sys.exit() if T < 2: print('Program terminated because of T < 2') sys.exit() #for possible_value in set(df['event_type'].tolist()): # print(possible_value) df_purchase = df_batch[df_batch['event_type']=='purchase'] df_purchase = df_purchase[index_purchase] df_purchase = df_purchase.dropna(how='any') # If sort on the timestamp is needed, commentout the following line # df_purchase = df_purchase.sort_values('timestamp') #df_purchase.shape df_friend=df_batch[(df_batch['event_type']=='befriend') | (df_batch['event_type']=='unfriend')] df_friend=df_friend[index_friend] df_friend=df_friend.dropna(how='any') # If sort on the timestamp is needed, commentout the following line #df_friend=df_friend.sort_values('timestamp') #df_friend.shape G = nx.Graph() idlist = set(df_purchase.id.tolist()) G.add_nodes_from(idlist) #len(list(G.nodes())) def Add_edges(data): for row in data.itertuples(): id10 = row.id1 id20 = row.id2 event_type0 = row.event_type if event_type0 == 'befriend': G.add_edge(id10,id20) if event_type0 == 'unfriend': if G.has_edge(id10,id20): G.remove_edge(id10,id20) Add_edges(df_friend) #len(list(G.edges())) #G[10.0] #G.number_of_nodes() #G.number_of_edges() # define a function to calcualte the mean and sd for userid's network def Get_Mean_SD(userid): Nodes = list(nx.ego_graph(G, userid, D, center=False)) df_Nodes = df_purchase.loc[df_purchase['id'].isin(Nodes)] if len(df_Nodes) >= 2: if len(df_Nodes) > T: df_Nodes = df_Nodes.sort_values('timestamp').iloc[-int(T):] #df_Nodes.shape #the std from pd is different from np; np is correct #mean = df_Nodes.amount.mean() #sd = df_Nodes.amount.std() mean = np.mean(df_Nodes['amount']) sd = np.std(df_Nodes['amount']) mean = float("{0:.2f}".format(mean)) sd = float("{0:.2f}".format(sd)) else: mean=np.nan sd=np.nan return mean, sd #Get_Mean_SD(0.0) #df_purchase.head() #df_purchase.tail() #df_purchase.shape Explanation: Step1: build the initial state of the entire user network, as well as the purchae history of the users Input: sample_dataset/batch_log.json End of explanation # read in the stream_log.json streamlogfile = 'sample_dataset/stream_log.json' df_stream = pd.read_json(streamlogfile, lines=True) # If sort on the timestamp is needed, commentout the following line #df_stream = df_stream.sort_values('timestamp') # open output file flagged_purchases.json flaggedfile = 'log_output/flagged_purchases.json' f = open(flaggedfile, 'w') # Determine whether a purchase is anomalous; update purchase history; update social network for i in range(0, len(df_stream)): datai = df_stream.iloc[i] event_type = datai['event_type'] if (event_type == 'purchase') & (not datai[index_purchase].isnull().any()): # update purchase history df_purchase = df_purchase.append(datai[index_purchase]) timestamp = datai['timestamp'] timestamp = str(timestamp) userid = datai['id'] if (not G.has_node(userid)): G.add_node(userid) amount = datai['amount'] mean, sd = Get_Mean_SD(userid) if mean != np.nan: mean_3sd = mean + (3*sd) if amount > mean_3sd: f.write('{{"event_type":"{0:s}", 
"timestamp":"{1:s}", "id": "{2:.0f}", "amount": "{3:.2f}", "mean": "{4:.2f}", "sd": "{5:.2f}"}}\n'.format(event_type, timestamp, userid, amount, mean, sd)) # update social network if (event_type == 'befriend') & (not datai[index_friend].isnull().any()): df_friend=df_friend.append(datai[index_friend]) id1 = datai['id1'] id2 = datai['id2'] G.add_edge(id1,id2) if (event_type == 'unfriend') & (not datai[index_friend].isnull().any()): df_friend=df_friend.append(datai[index_friend]) id1 = datai['id1'] id2 = datai['id2'] if G.has_edge(id1,id2): G.remove_edge(id1,id2) f.close() Explanation: Step2: Determine whether a purchase is anomalous input file: sample_dataset/stream_log.json End of explanation
1,996
Given the following text description, write Python code to implement the functionality described below step by step Description: Testing the nscore transformation table Step1: Getting the data ready for work If the data is in GSLIB format you can use the function pygslib.gslib.read_gslib_file(filename) to import the data into a Pandas DataFrame. Step2: The nscore transformation table function Step3: Note that the input can be data or a reference distribution function Normal score transformation table using delustering wight Step4: Normal score transformation table without delustering wight Step5: Comparing results
Python Code: #general imports import matplotlib.pyplot as plt import pygslib from matplotlib.patches import Ellipse import numpy as np import pandas as pd #make the plots inline %matplotlib inline Explanation: Testing the nscore transformation table End of explanation #get the data in gslib format into a pandas Dataframe mydata= pygslib.gslib.read_gslib_file('../data/cluster.dat') # This is a 2D file, in this GSLIB version we require 3D data and drillhole name or domain code # so, we are adding constant elevation = 0 and a dummy BHID = 1 mydata['Zlocation']=0 mydata['bhid']=1 # printing to verify results print (' \n **** 5 first rows in my datafile \n\n ', mydata.head(n=5)) #view data in a 2D projection plt.scatter(mydata['Xlocation'],mydata['Ylocation'], c=mydata['Primary']) plt.colorbar() plt.grid(True) plt.show() Explanation: Getting the data ready for work If the data is in GSLIB format you can use the function pygslib.gslib.read_gslib_file(filename) to import the data into a Pandas DataFrame. End of explanation print (pygslib.gslib.__dist_transf.ns_ttable.__doc__) Explanation: The nscore transformation table function End of explanation dtransin,dtransout, error = pygslib.gslib.__dist_transf.ns_ttable(mydata['Primary'],mydata['Declustering Weight']) dttable= pd.DataFrame({'z': dtransin,'y': dtransout}) print (dttable.head(3)) print (dttable.tail(3) ) print ('there was any error?: ', error!=0) dttable.hist(bins=30) Explanation: Note that the input can be data or a reference distribution function Normal score transformation table using delustering wight End of explanation transin,transout, error = pygslib.gslib.__dist_transf.ns_ttable(mydata['Primary'],np.ones(len(mydata['Primary']))) ttable= pd.DataFrame({'z': transin,'y': transout}) print (ttable.head(3)) print (ttable.tail(3)) ttable.hist(bins=30) Explanation: Normal score transformation table without delustering wight End of explanation parameters_probplt = { 'iwt' : 0, #int, 1 use declustering weight 'va' : ttable.y, # array('d') with bounds (nd) 'wt' : np.ones(len(ttable.y))} # array('d') with bounds (nd), wight variable (obtained with declust?) parameters_probpltl = { 'iwt' : 0, #int, 1 use declustering weight 'va' : dttable.y, # array('d') with bounds (nd) 'wt' : np.ones(len(dttable.y))} # array('d') with bounds (nd), wight variable (obtained with declust?) binval,cl,xpt025,xlqt,xmed,xuqt,xpt975,xmin,xmax, \ xcvr,xmen,xvar,error = pygslib.gslib.__plot.probplt(**parameters_probplt) binvall,cll,xpt025l,xlqtl,xmedl,xuqtl,xpt975l,xminl, \ xmaxl,xcvrl,xmenl,xvarl,errorl = pygslib.gslib.__plot.probplt(**parameters_probpltl) fig = plt.figure() ax = fig.add_subplot(1,1,1) plt.plot (cl, binval, label = 'gaussian non-declustered') plt.plot (cll, binvall, label = 'gaussian declustered') plt.legend(loc=4) plt.grid(True) fig.show Explanation: Comparing results End of explanation
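For readers without pygslib at hand, the core of a normal score transformation table is: sort the data, assign each value a cumulative probability (weighted by declustering weights when available), and push that probability through the inverse standard normal CDF. A rough, dependency-light sketch of the unweighted case; this is a conceptual illustration only, not a substitute for pygslib's ns_ttable, which handles weights and ties more carefully:

import numpy as np
from scipy.stats import norm

def nscore_table(values):
    # Map data values to standard normal scores via their empirical CDF.
    values = np.asarray(values, dtype=float)
    n = values.size
    z = np.sort(values)                      # original values, sorted
    probs = (np.arange(1, n + 1) - 0.5) / n  # plotting-position probabilities in (0, 1)
    y = norm.ppf(probs)                      # corresponding normal scores
    return z, y

z, y = nscore_table([0.3, 1.2, 0.7, 5.4, 2.2])
print(np.round(z, 2), np.round(y, 2))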
1,997
Given the following text description, write Python code to implement the functionality described below step by step Description: What is classification ? Import the data you'll be using Visualize/Analyze your dataset Perform classification on it 1.a - We use the mnist dataset Step1: 1.b - How does Mnist look like ? Step2: 1.c - Distribution of the Mnist dataset Step3: 1.c - Normalize and change the encoding of the data Step4: 2 - Classify our data We are going to choose in between 3 classifier to classify our data Step5: 2.a - SVM https Step6: 2.b - Nearest neighboor Browse throught the entire dataset which is the closest "neighboor" to our current example. Step7: 2.c - Softmax regression $ y = \sigma(W^T \cdot X + b) $
Python Code: import keras from keras.datasets import mnist (x_train, y_train), (x_test, y_test) = mnist.load_data() print "input of training set has shape {} and output has shape {}".format(x_train.shape, y_train.shape) print "input of testing set has shape {} and output has shape {}".format(x_test.shape, y_test.shape) Explanation: What is classification ? Import the data you'll be using Visualize/Analyze your dataset Perform classification on it 1.a - We use the mnist dataset End of explanation import matplotlib.pyplot as plt import numpy as np %matplotlib inline fig, axs = plt.subplots(2,5) axs = [b for a in axs for b in a] for i in range(2*5): axs[i].imshow(x_train[i], cmap='gray') axs[i].axis('off') plt.show() Explanation: 1.b - How does Mnist look like ? End of explanation fig, axs = plt.subplots(2,2) axs[0][0].hist(x_train.reshape([-1]), bins = 25) axs[0][1].hist(y_train.reshape([-1]), bins = 10) axs[1][0].hist(x_test.reshape([-1]), bins = 25) axs[1][1].hist(y_test.reshape([-1]), bins = 10) plt.show() Explanation: 1.c - Distribution of the Mnist dataset End of explanation # Normalize the MNIST data x_train = x_train/255. x_test = x_test/255. # Change the one-hot-encoding y_train = keras.utils.to_categorical(y_train, 10) y_test = keras.utils.to_categorical(y_test, 10) Explanation: 1.c - Normalize and change the encoding of the data End of explanation sample = x_test[0] plt.imshow(sample) Explanation: 2 - Classify our data We are going to choose in between 3 classifier to classify our data: SVM Nearest Neighboor Logistic Regression End of explanation from sklearn import svm from skimage.transform import resize # 24*24 images'll be too big, we downsample them to 8*8 def to_svm_image(img): img = resize(img, [8,8]) return img.reshape([-1]) x_train_svm = map(to_svm_image, x_train) x_train_svm = np.array(x_train_svm) # Train the classifier here clf = svm.SVC(gamma=0.001, C=100.) clf.fit(x_train_svm, y_train.argmax(axis=1)) # Test the classifier sample = to_svm_image(x_test[0]) sample = sample.reshape([1,-1]) prediction = clf.predict(sample) print "With SVM, our sample is closest to class {}".format(prediction[0]) Explanation: 2.a - SVM https://www.youtube.com/watch?v=_PwhiWxHK8o End of explanation sample = x_test[0] def distance(tensor1, tensor2, norm='l1'): if norm == "l1": dist = np.abs(tensor1 - tensor2) if norm == "l2": dist = tensor1 ** 2 - tensor2 ** 2 dist = np.sum(dist) return dist def predict(sample, norm='l1'): min_dist = 100000000000 min_idx = -1 for idx, im in enumerate(x_train): if distance(sample, im) < min_dist: min_dist = distance(sample, im, norm) min_idx = idx y_pred = y_train[min_idx] return y_pred y = predict(sample, 'l1') print "With NN, our sample is closest to class {}".format(y.argmax()) Explanation: 2.b - Nearest neighboor Browse throught the entire dataset which is the closest "neighboor" to our current example. End of explanation from sklearn import linear_model, datasets from sklearn.linear_model import LogisticRegression from sklearn.linear_model import SGDClassifier # Train the classifier here clf_sgd = SGDClassifier() clf_sgd.fit(x_train_svm, y_train.argmax(axis=1)) # Test the classifier sample = to_svm_image(x_test[0]) sample = sample.reshape([1,-1]) prediction = clf.predict(sample) print "With Softmax regression, our sample is closest to class {}".format(prediction[0]) Explanation: 2.c - Softmax regression $ y = \sigma(W^T \cdot X + b) $ End of explanation
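A caveat on the nearest-neighbour helper above: in the 'l2' branch, tensor1 ** 2 - tensor2 ** 2 is not a distance (it can be negative and is not symmetric); the squared Euclidean distance should square the difference instead. A corrected sketch of the distance helper; note that for nearest-neighbour ranking the square root can be skipped, since it does not change the ordering:

import numpy as np

def distance(tensor1, tensor2, norm='l1'):
    if norm == 'l1':
        dist = np.abs(tensor1 - tensor2)     # sum of absolute differences
    elif norm == 'l2':
        dist = (tensor1 - tensor2) ** 2      # squared Euclidean distance
    else:
        raise ValueError("norm must be 'l1' or 'l2'")
    return np.sum(dist)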
1,998
Given the following text description, write Python code to implement the functionality described below step by step Description: Selecting variants by number of unique barcodes This notebook gets scores for the variants in an Experiment that are linked to multiple barcodes, and plots the relationship between each variant's score and number of unique barcodes. Step1: Modify the results_path variable in the next cell to match the output directory of your Enrich2-Example dataset. Step2: Open the Experiment HDF5 file. Step3: The pd.HDFStore.keys() method returns a list of all the tables in this HDF5 file. Step4: First we will work with the barcode-variant map for this analysis, stored in the "/main/barcodemap" table. The index is the barcode and it has a single column for the variant HGVS string. Step5: To find out how many unique barcodes are linked to each variant, we'll count the number of times each variant appears in the barcode-variant map using a Counter data structure. We'll then output the top ten variants by number of unique barcodes. Step6: Next we'll turn the Counter into a data frame. Step7: The data frame has the information we want, but it will be easier to use later if it's indexed by variant rather than row number. Step8: We'll use a cutoff to choose variants with a minimum number of unique barcodes, and store this subset in a new index. We'll also exclude the wild type by dropping the first entry of the index. Step9: We can use this index to get condition-level scores for these variants by querying the "/main/variants/scores" table. Since we are working with an Experiment HDF5 file, the data frame column names are a MultiIndex with two levels, one for experimental conditions and one for data values (see the pandas documentation for more information). Step10: There are fewer rows in multi_bc_scores than in multi_bc_variants because some of the variants were not scored in all replicate selections, and therefore do not have a condition-level score. Now that we're finished getting data out of the HDF5 file, we'll close it. Step11: We'll add a column to the bc_counts data frame that contains scores from the multi_bc_scores data frame. To reference a column in a data frame with a MultiIndex, we need to specify all column levels. Step12: Many rows in bc_counts are missing scores (displayed as NaN) because those variants were not in multi_bc_scores. We'll drop them before continuing. Step13: Now that we have a data frame containing the subset of variants we're interested in, we can make a plot of score vs. number of unique barcodes. This example uses functions and colors from the Enrich2 plotting library.
Python Code: % matplotlib inline import os.path from collections import Counter import numpy as np import pandas as pd import matplotlib.pyplot as plt from enrich2.variant import WILD_TYPE_VARIANT import enrich2.plots as enrich_plot pd.set_option("display.max_rows", 10) # rows shown when pretty-printing Explanation: Selecting variants by number of unique barcodes This notebook gets scores for the variants in an Experiment that are linked to multiple barcodes, and plots the relationship between each variant's score and number of unique barcodes. End of explanation results_path = "/path/to/Enrich2-Example/Results/" Explanation: Modify the results_path variable in the next cell to match the output directory of your Enrich2-Example dataset. End of explanation my_store = pd.HDFStore(os.path.join(results_path, "BRCA1_Example_exp.h5")) Explanation: Open the Experiment HDF5 file. End of explanation my_store.keys() Explanation: The pd.HDFStore.keys() method returns a list of all the tables in this HDF5 file. End of explanation bcm = my_store['/main/barcodemap'] bcm Explanation: First we will work with the barcode-variant map for this analysis, stored in the "/main/barcodemap" table. The index is the barcode and it has a single column for the variant HGVS string. End of explanation variant_bcs = Counter(bcm['value']) variant_bcs.most_common(10) Explanation: To find out how many unique barcodes are linked to each variant, we'll count the number of times each variant appears in the barcode-variant map using a Counter data structure. We'll then output the top ten variants by number of unique barcodes. End of explanation bc_counts = pd.DataFrame(variant_bcs.most_common(), columns=['variant', 'barcodes']) bc_counts Explanation: Next we'll turn the Counter into a data frame. End of explanation bc_counts.index = bc_counts['variant'] bc_counts.index.name = None del bc_counts['variant'] bc_counts Explanation: The data frame has the information we want, but it will be easier to use later if it's indexed by variant rather than row number. End of explanation bc_cutoff = 10 multi_bc_variants = bc_counts.loc[bc_counts['barcodes'] >= bc_cutoff].index[1:] multi_bc_variants Explanation: We'll use a cutoff to choose variants with a minimum number of unique barcodes, and store this subset in a new index. We'll also exclude the wild type by dropping the first entry of the index. End of explanation multi_bc_scores = my_store.select('/main/variants/scores', where='index in multi_bc_variants') multi_bc_scores Explanation: We can use this index to get condition-level scores for these variants by querying the "/main/variants/scores" table. Since we are working with an Experiment HDF5 file, the data frame column names are a MultiIndex with two levels, one for experimental conditions and one for data values (see the pandas documentation for more information). End of explanation my_store.close() Explanation: There are fewer rows in multi_bc_scores than in multi_bc_variants because some of the variants were not scored in all replicate selections, and therefore do not have a condition-level score. Now that we're finished getting data out of the HDF5 file, we'll close it. End of explanation bc_counts['score'] = multi_bc_scores['E3', 'score'] bc_counts Explanation: We'll add a column to the bc_counts data frame that contains scores from the multi_bc_scores data frame. To reference a column in a data frame with a MultiIndex, we need to specify all column levels. 
End of explanation bc_counts.dropna(inplace=True) bc_counts Explanation: Many rows in bc_counts are missing scores (displayed as NaN) because those variants were not in multi_bc_scores. We'll drop them before continuing. End of explanation fig, ax = plt.subplots() enrich_plot.configure_axes(ax, xgrid=True) ax.plot(bc_counts['barcodes'], bc_counts['score'], linestyle='none', marker='.', alpha=0.6, color=enrich_plot.plot_colors['bright5']) ax.set_xlabel("Unique Barcodes") ax.set_ylabel("Variant Score") Explanation: Now that we have a data frame containing the subset of variants we're interested in, we can make a plot of score vs. number of unique barcodes. This example uses functions and colors from the Enrich2 plotting library. End of explanation
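As a small aside, the Counter-to-DataFrame-to-reindex sequence above can be collapsed into a single pandas call, since value_counts already returns the number of unique barcodes per variant, indexed by variant and sorted in descending order. A sketch, assuming the same bcm barcode-variant map and the same convention of dropping the first (wild-type) entry:

# Number of unique barcodes per variant, sorted descending.
bc_counts = bcm['value'].value_counts().to_frame(name='barcodes')

# Variants with at least 10 unique barcodes, excluding the wild type as in the original analysis.
bc_cutoff = 10
multi_bc_variants = bc_counts.loc[bc_counts['barcodes'] >= bc_cutoff].index[1:]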
1,999
Given the following text description, write Python code to implement the functionality described below step by step Description: Parameter identification example Here is a simple toy model that we use to demonstrate the working of the inference package $\emptyset \xrightarrow[]{k_1} X \; \; \; \; X \xrightarrow[]{d_1} \emptyset$ Run the MCMC algorithm to identify parameters from the experimental data In this demonstration, we will try to use multiple trajectories of data taken under multiple initial conditions and different length of time points? Step1: Using Gaussian prior for k1 Step2: Using mixed priors and estimate both k1 and d1 Step3: Check mcmc_results.csv for the results of the MCMC procedure and perform your own analysis. You can also plot the results as follows
Python Code: %matplotlib inline %config InlineBackend.figure_format = "retina" from matplotlib import rcParams rcParams["savefig.dpi"] = 100 rcParams["figure.dpi"] = 100 rcParams["font.size"] = 20 Explanation: Parameter identification example Here is a simple toy model that we use to demonstrate the working of the inference package $\emptyset \xrightarrow[]{k_1} X \; \; \; \; X \xrightarrow[]{d_1} \emptyset$ Run the MCMC algorithm to identify parameters from the experimental data In this demonstration, we will try to use multiple trajectories of data taken under multiple initial conditions and different length of time points? End of explanation %matplotlib inline import bioscrape as bs from bioscrape.types import Model from bioscrape.inference import py_inference import numpy as np import pylab as plt import pandas as pd # Import a bioscrape/SBML model M = Model(sbml_filename = 'toy_sbml_model.xml') # Import data from CSV # Import a CSV file for each experiment run df = pd.read_csv('test_data.csv', delimiter = '\t', names = ['X','time'], skiprows = 1) M.set_species({'X':df['X'][0]}) # Create prior for parameters prior = {'d1' : ['gaussian', 0.2, 200]} sampler, pid = py_inference(Model = M, exp_data = df, measurements = ['X'], time_column = ['time'], nwalkers = 5, init_seed = 0.15, nsteps = 1500, sim_type = 'deterministic', params_to_estimate = ['d1'], prior = prior) Explanation: Using Gaussian prior for k1 End of explanation %matplotlib inline import bioscrape as bs from bioscrape.types import Model from bioscrape.inference import py_inference import numpy as np import pylab as plt import pandas as pd # Import a bioscrape/SBML model M = Model(sbml_filename = 'toy_sbml_model.xml') # Import data from CSV # Import a CSV file for each experiment run df = pd.read_csv('test_data.csv', delimiter = '\t', names = ['X','time'], skiprows = 1) M.set_species({'X':df['X'][0]}) prior = {'d1' : ['gaussian', 0.2, 20], 'k1' : ['uniform', 0, 100]} sampler, pid = py_inference(Model = M, exp_data = df, measurements = ['X'], time_column = ['time'], nwalkers = 20, init_seed = 0.15, nsteps = 5500, sim_type = 'deterministic', params_to_estimate = ['d1', 'k1'], prior = prior) Explanation: Using mixed priors and estimate both k1 and d1 End of explanation from bioscrape.simulator import py_simulate_model M_fit = Model(sbml_filename = 'toy_sbml_model.xml') M_fit.set_species({'X':df['X'][0]}) timepoints = pid.timepoints flat_samples = sampler.get_chain(discard=200, thin=15, flat=True) inds = np.random.randint(len(flat_samples), size=200) for ind in inds: sample = flat_samples[ind] for pi, pi_val in zip(pid.params_to_estimate, sample): M_fit.set_parameter(pi, pi_val) plt.plot(timepoints, py_simulate_model(timepoints, Model= M_fit)['X'], "C1", alpha=0.1) # plt.errorbar(, y, yerr=yerr, fmt=".k", capsize=0) # plt.plot(timepoints, list(pid.exp_data['X']), label = 'data') plt.plot(timepoints, py_simulate_model(timepoints, Model = M)['X'], "k", label="original model") plt.legend(fontsize=14) plt.xlabel("Time") plt.ylabel("[X]"); flat_samples = sampler.get_chain(discard = 200, thin = 15,flat = True) flat_samples Explanation: Check mcmc_results.csv for the results of the MCMC procedure and perform your own analysis. You can also plot the results as follows End of explanation
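As a quick sanity check on any inferred (k1, d1) pair for the birth-death model above: the deterministic rate equation is dX/dt = k1 - d1*X, which relaxes exponentially to the steady state X* = k1/d1. A small sketch that integrates this ODE directly, independent of bioscrape, so a fitted trajectory can be compared against the data by eye; the parameter values below are placeholders, not the posterior estimates:

import numpy as np
from scipy.integrate import odeint

def birth_death(x, t, k1, d1):
    return k1 - d1 * x        # dX/dt for the reactions 0 --k1--> X and X --d1--> 0

k1, d1 = 10.0, 0.2            # placeholder values; substitute the posterior medians in practice
x0, timepoints = 0.0, np.linspace(0, 40, 200)
x = odeint(birth_death, x0, timepoints, args=(k1, d1))[:, 0]

print('steady state ~', k1 / d1, '; simulated endpoint:', round(float(x[-1]), 3))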