diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzkpoj" "b/data_all_eng_slimpj/shuffled/split2/finalzzkpoj" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzkpoj" @@ -0,0 +1,5 @@ +{"text":"\\chapter{Natural Language Processing for Policymaking}\n\n\\chapterauthor{Zhijing Jin}{jinzhi@ethz.ch}{Max Planck Institute \\& ETH Z\u00fcrich}\n\n\\chapterauthor{Rada Mihalcea}{mihalcea@umich.edu}{University of Michigan}\n\n\\begin{abstract}\nLanguage is the medium for many political activities, from campaigns to news reports. Natural language processing (NLP) uses computational tools to parse text into key information that is needed for policymaking. In this chapter, we introduce common methods of NLP, including text classification, topic modeling, event extraction, and text scaling. We then overview how these methods can be used for policymaking through four major applications including data collection for evidence-based policymaking, interpretation of political decisions, policy communication, and investigation of policy effects.\nFinally, we highlight some potential limitations and ethical concerns when using NLP for policymaking.\n\n\\end{abstract}\n\n\\keywords{Natural Language Processing, Text Analysis, Policymaking, Artificial Intelligence, Machine Learning}\n\n\n\\thispagestyle{alim}\n\n\n\\input{main_content}\n\n\\bibliographystyle{acl_natbib}\n\n\\section{Introduction}\\label{sim:sec:intro}\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.8\\textwidth]{fig\/img_intro.pdf}\n \\caption{Overview of NLP for policymaking.}\n \\label{fig:intro}\n\\end{figure}\n\nLanguage is an important form of data in politics. Constituents express their stances and needs in text such as social media and survey responses. Politicians conduct campaigns through debates, statements of policy positions, and social media. Government staff needs to compile information from various documents to assist in decision-making. Textual data is also prevalent through the documents and debates in the legislation process, negotiations and treaties to resolve international conflicts, and media such as news reports, social media, party platforms, and manifestos.\n\nNatural language processing (NLP) is the study of computational methods to automatically analyze text and extract meaningful information for subsequent analysis.\nThe importance of NLP for policymaking has been highlighted since the last century \\citep{gigley-1993-projected}. With the recent success of NLP and its versatility over tasks such as classification, information extraction, summarization, and translation \\citep{devlin2019bert,brown2020language}, there is a rising trend to integrate NLP into the policy decisions and public administrations \\citep{misuraca2020use,engstrom2020government,vanroy2021ai}. 
Main applications include extracting useful, condensed information from free-form text \\citep{engstrom2020government}, and \nanalyzing sentiment and citizen feedback \\citep{biran2022policycloud}, as in many projects funded by the EU Horizon programme \\citep{european2011proposal}.\nDriven by the broad applications of NLP \\citep{jin-etal-2021-good,gonzalez2022good}, the research community has also started to connect NLP with various social applications in\nthe fields of computational social science \\citep{lazer2009computational,shah2015big,engel2021handbook,luz2022computational} and political science in particular \\citep{grimmer2013text,glavas-etal-2019-computational}.\n\n\nWe show an overview of NLP for policymaking in Figure~\\ref{fig:intro}. According to this overview, the chapter will consist of three parts. First, we introduce in Section~\\ref{sec:nlp_tools} NLP methods that are applicable to political science, including text classification, topic modeling, event extraction, and score prediction. Next, we cover a variety of cases where NLP can be applied to policymaking in Section~\\ref{sec:nlp4policy}. Specifically, we cover four use cases: analyzing data for evidence-based policymaking, interpreting political decisions, improving policy communication with the public, and investigating policy effects. Finally, we will discuss limitations and ethical considerations when using NLP for policymaking in Section~\\ref{sec:ethics}.\n\n\n\\section{NLP for Text Analysis}\\label{sec:nlp_tools}\n\nNLP brings powerful computational tools to analyze textual data \\citep{jurafsky2000speech}.\nDepending on the type of information that we want to extract from the text, we introduce four different NLP tools to analyze text data: text classification (by which the extracted information is the \\textit{category} of the text), topic modeling (by which the extracted information is the \\textit{key topics} in the text), event extraction (by which the extracted information is the list of \\textit{events} mentioned in the text), and score prediction (by which the extracted information is a \\textit{score} for the text). 
Table~\\ref{tab:nlp_tasks} lists each method with the type of information it can extract and some example application scenarios, which we will detail in the following subsections.\n\n\\begin{table}[t]\n \\centering\n \\small\n\\resizebox{\\textwidth}{!}{\n \\begin{tabular}{lll}\n\\toprule\n\\textbf{NLP Method} & \\textbf{Information to Extract} & \\textbf{Example Applications} \\\\ \\midrule\nText classification & Category of text & Identify the sentiment, stance, etc.\n\\\\\nTopic modeling & Key topics in text & Summarize topics in political agenda\n\\\\\nEvent extraction & List of events & Extract news events, international conflicts\n\\\\\nScore prediction & Score & Text scaling\n\\\\\n\\bottomrule\n \\end{tabular}\n}\n \\caption{Four common NLP methods, the type of information extracted by each of them, and example applications.}\n \\label{tab:nlp_tasks}\n\\end{table}\n\n\\subsection{Text Classification}\nAs one of the most common types of text analysis methods, text classification reads in a piece of text and predicts its category using an NLP text classification model, as in Figure~\\ref{fig:classification}.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.9 \\textwidth]{fig\/img_classification.pdf}\n \\caption{The usage and example applications of text classification on political text.}\n \\label{fig:classification}\n\\end{figure} \nThere are many existing off-the-shelf tools for text classification \\citep{yin-etal-2019-benchmarking,brown2020language,loria2018textblob} such as {the implementation}\\footnote{\\url{https:\/\/discuss.huggingface.co\/t\/new-pipeline-for-zero-shot-text-classification\/681}} using the Python package \\texttt{transformers} \\citep{wolf-etal-2020-transformers}. \nA well-known subtask of text classification is sentiment classification (also known as sentiment analysis, or opinion mining), which aims to identify the subjective information in the text, such as positive or negative sentiment \\citep{pang2007opinion}.\nHowever, existing tools only do well on categories that are easy to predict. If the categorization is customized and very specific to a study context, then there are two common solutions. One is to use dictionary-based methods, relying on a list of frequent keywords that correspond to a certain category \\citep{albaugh2013automated} or on general linguistic dictionaries such as the Linguistic Inquiry and Word Count (LIWC) dictionary \\citep{pennebaker2001linguistic}.\nThe second way is to adopt a data-driven pipeline, which requires human hand coding of documents into a predetermined set of categories, then training an NLP model to learn the text classification task \\citep{sun2019finetune}, and verifying the performance of the NLP model on a held-out subset of the data, as introduced in \\citet{grimmer2013text}. An example of adapting state-of-the-art NLP models to a customized dataset is demonstrated in {this guide}.\\footnote{\\url{https:\/\/skimai.com\/fine-tuning-bert-for-sentiment-analysis\/}}\n\nUsing the text classification method, we can automate many types of analyses in political science. 
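\n\nFor illustration, the following is a minimal Python sketch of the zero-shot classification pipeline mentioned above; the model name, example sentence, and candidate labels here are illustrative assumptions rather than a fixed recipe:\n\\begin{verbatim}\n# A minimal zero-shot text classification sketch with the\n# transformers pipeline; the text and labels are illustrative.\nfrom transformers import pipeline\n\nclassifier = pipeline('zero-shot-classification',\n                      model='facebook\/bart-large-mnli')\n\ntext = 'We must cut taxes to stimulate economic growth.'\nlabels = ['economy', 'healthcare', 'immigration']\n\nresult = classifier(text, candidate_labels=labels)\nprint(result['labels'][0])  # label with the highest score\n\\end{verbatim}\nWhen a hand-coded training set is available, such an off-the-shelf classifier can be swapped for a fine-tuned model as described above.\n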
As listed in the examples in Figure~\\ref{fig:classification}, researchers can detect the political perspective of news articles \\citep{huguet-cabot-etal-2020-pragmatics}, the stance in media on a certain topic \\citep{luo-etal-2020-detecting}, whether campaigns use positive or negative sentiment \\citep{ansolabehere1995going}, which issue area a piece of legislation is about \\citep{adler2011congressional},\ntopics in parliament speech \\citep{albaugh2013automated,osnabrugge2021cross}, congressional bills \\citep{hillard2008computer,collingwood2012tradeoffs} and political agenda \\citep{karan-etal-2016-analysis},\nwhether an international statement is peaceful or belligerent \\citep{schrodt2000pattern}, whether a speech contains positive or negative sentiment \\citep{schumacher2016euspeech}, and whether a U.S. Circuit Courts case decision is conservative or liberal \\citep{hausladen2020text}.\nMoreover, text classification can also be used to categorize the types of language devices that politicians use, such as what type of framing the text uses \\citep{huguet-cabot-etal-2020-pragmatics}, and \nwhether a tweet uses political parody \\citep{maronikolakis-etal-2020-analyzing}.\n\n\n\\subsection{Topic Modeling}\nTopic modeling is a method to uncover a list of frequent topics in a corpus of text. For example, news articles that are against vaccination might frequently mention the topic ``autism,'' whereas news articles supporting vaccination will be more likely to mention ``immune'' and ``protective.''\nOne of the most widely used models is Latent Dirichlet Allocation (LDA) \\citep{blei2001latent}, which is available in the Python packages \\texttt{NLTK} and \\texttt{Gensim}, as in {this guide}.\\footnote{\\url{https:\/\/skimai.com\/fine-tuning-bert-for-sentiment-analysis\/}} \n\\begin{figure}\n \\centering\n \\includegraphics[width=0.8\\textwidth]{fig\/img_topic_model.pdf}\n \\caption{Given a collection of text documents, topic modeling generates a list of topic clusters.}\n \\label{fig:topic_model}\n\\end{figure}\n\nSpecifically, LDA is a probabilistic model that models each topic as a mixture of words, and each textual document can be represented as a mixture of topics. As in Figure~\\ref{fig:topic_model}, given a collection of textual documents, LDA topic modeling generates a list of topic clusters, for which the number $N$ of topics can be customized by the analyst. In addition, if needed, LDA can also produce a representation of each document as a weighted list of topics. While often the number of topics is predetermined by the analyst, this number can also be dynamically determined by measuring the perplexity of the resulting topics. In addition to LDA, other topic modeling algorithms have been used extensively, such as those based on principal component analysis (PCA) \\citep{Chung2008-fo}.\n\nTopic modeling, as described in this section, can facilitate various studies on political text.\nPrevious studies analyzed the topics of legislative speech \\citep{quinn2010analyze,quinn2006automated}, Senate press releases \\citep{grimmer2010bayesian}, and electoral manifestos \\citep{menini-etal-2017-topic}.\n\n\n\\subsection{Event Extraction}\nEvent extraction is the task of extracting a list of events from a given text. 
It\nis a subtask of a larger domain of NLP called information extraction \\citep{manning2008introduction}.\nFor example, the sentence ``Israel bombs Hamas sites in Gaza'' expresses an event ``\\textit{Israel $\\xrightarrow[]{\\text{bombs}}$ Hamas sites}'' with the location ``\\textit{Gaza}.'' Event extraction usually incorporates both entity extraction (e.g., Israel, Hamas sites, and Gaza in the previous example) and relation extraction (e.g., ``bombs'' in the previous example).\n\n\nEvent extraction is a handy tool to monitor events automatically, such as detecting news events \\citep{walker2006ace,mitamura2017events}, \nand detecting international conflicts \\citep{azar1980conflict,trappl2006programming}. To foster research on event extraction, there have been tremendous efforts in textual data collection \\citep{mcclelland1976world,schrodt2006twenty,merritt1993international,raleigh_introducing_2010,sundberg2013introducing}, event coding schemes to accommodate different political events \\citep{goldstein1992conflict,bond1997mapping,gerner2002conflict}, and dataset validity assessment \\citep{schrodt1994validity}.\n\n\nAs for event extraction models, similar to text classification models, there are off-the-shelf tools such as the Python packages \\texttt{stanza} \\citep{qi2020stanza} and \\texttt{spaCy} \\citep{honnibal2020spacy}. In the case of customized sets of event types, researchers can also train NLP models on a collection of textual documents with event annotations \\citep[\\textit{inter alia}]{hogenboom2011overview,liu2020extracting}.\n\n\\subsection{Score Prediction}\nNLP can also be used to predict a score given input text. A useful application is political text scaling, which aims to predict a score (e.g., left-to-right ideology, emotionality, and different attitudes towards the European integration process) for a given piece of text (e.g., political speeches, party manifestos, and social media posts) \\citep[\\textit{inter alia}]{laver2003extracting,lowe2011scaling,slapin2008scaling,gennaro2021emotion}.\n\nTraditional models for text scaling include Wordscores \\citep{laver2003extracting} and WordFish \\citep{slapin2008scaling,lowe2011scaling}. Recent NLP models represent the text by high-dimensional vectors learned by neural networks to predict the scores \\citep{glavas-etal-2017-unsupervised,nanni2019political}. One way to use the NLP models is to apply off-the-shelf general-purpose models such as InstructGPT \\citep{ouyang2022instructGPT} and design a prompt that specifies the type of scaling to the API,\\footnote{\\url{https:\/\/beta.openai.com\/docs\/introduction}} or to borrow existing, trained NLP models if the same type of scaling has been studied by previous researchers. Another way is to collect a dataset of text with hand-coded scales, and train NLP models to learn to predict the scale, similar to the practice in \\citet{slapin2008scaling,gennaro2021emotion}, \\textit{inter alia}.\n\n\n\\section{Using NLP for Policymaking}\\label{sec:nlp4policy}\n\nIn the political domain, there are large amounts of textual data to analyze \\citep{neuendorf2015content}, such as parliament debates \\citep{van2017debates}, speeches \\citep{schumacher2016euspeech}, legislative text \\citep{baumgartner2006comparative,bevan2017gone}, databases of political parties worldwide \\citep{doring2019party}, and expert survey data \\citep{bakker2015measuring}. Since it is tedious to hand-code all textual data, NLP provides a low-cost tool to automatically analyze such massive amounts of text. 
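\n\nAs a small illustration of such automatic analysis with off-the-shelf tools, the following Python sketch extracts the entities mentioned in a sentence using \\texttt{spaCy}; the example sentence is illustrative, and a full event extraction system would additionally require extracting the relations between the entities:\n\\begin{verbatim}\n# A minimal entity extraction sketch with spaCy; full event\n# extraction would also require relation extraction.\nimport spacy\n\nnlp = spacy.load('en_core_web_sm')  # small English pipeline\ndoc = nlp('Israel bombs Hamas sites in Gaza.')\n\nfor ent in doc.ents:\n    print(ent.text, ent.label_)\n\\end{verbatim}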
\n\nIn this section, we will introduce how NLP can facilitate policymaking in four major areas: before policies are made, researchers can use NLP to analyze data and extract key information for evidence-based policymaking (Section~\\ref{sec:use1}); after policies are made, researchers can interpret the priorities and reasons behind political decisions (Section~\\ref{sec:use2}); researchers can also analyze features in the language of politicians when communicating the policies to the public (Section~\\ref{sec:use3}); finally, after the policies have taken effect, researchers can investigate the effectiveness of the policies (Section~\\ref{sec:use4}).\n\n\\subsection{Analyzing Data for Evidence-Based Policymaking}\\label{sec:use1}\n\nA major use of NLP is to extract information from large collections of text. This function can be very useful for analyzing the views and needs of constituents, so that policymakers can make decisions accordingly.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.9\\textwidth]{fig\/img_use_survey.pdf}\n \\caption{NLP to analyze data for evidence-based policymaking.}\n \\label{fig:use_survey}\n\\end{figure}\n\nAs in Figure~\\ref{fig:use_survey}, we will explain how NLP can be used to analyze data for evidence-based policymaking from three aspects: data, information to extract, and political usage.\n\n\\myparagraph{Data.}\nData is the basis of such analyses. Large amounts of textual data can reveal information about constituents, media outlets, and influential figures. The data can come from a variety of sources, including social media such as Twitter and Facebook, survey responses, and news articles.\n\n\\myparagraph{Information to Extract.}\nBased on the large textual corpora, NLP models can be used to extract information that is useful for political decision-making, ranging from information about people, such as sentiment \\citep{thelwall2011sentiment,rosenthal2015semeval}, stance \\citep{thomas-etal-2006-get,gottipati-etal-2013-learning,stefanov-etal-2020-predicting,luo-etal-2020-detecting}, ideology \\citep{hirst2010party,iyyer-etal-2014-political,preotiuc-pietro-etal-2017-beyond}, and reasoning on certain topics \\citep{egami2018make,demszky-etal-2019-analyzing,camp2021thin}, to factual information, such as main topics \\citep{gottipati-etal-2013-learning}, events \\citep{trappl2006programming,mitamura2017events,ding-riloff-2018-human,ding-etal-2019-improving}, and needs \\citep{sarol-etal-2020-empirical,crayton2020narratives,paul-frank-2019-ranking} expressed in the data. \nThe extracted information can be not only about people, but also about political entities, such as the left-right political scales of parties and political actors \\citep{slapin2008scaling,glavas-etal-2017-unsupervised}, which claims are raised by which politicians \\citep{blessing-etal-2019-environment,pado-etal-2019-sides}, and the legislative body's vote breakdown for state bills by backgrounds such as gender, rural-urban and ideological splits \\citep{davoodi-etal-2020-understanding}.\n\nTo extract such information from text, we can often utilize the main NLP tools introduced in Section~\\ref{sec:nlp_tools}, including text classification, topic modeling, event extraction and score prediction (especially text scaling to predict left-to-right ideology). 
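\n\nTo make the score prediction step concrete, the following purely illustrative Python sketch computes a dictionary-based ``emotionality'' score as the share of affect words in a text; the word list is a toy stand-in for a real lexicon such as LIWC:\n\\begin{verbatim}\n# A toy dictionary-based text scaling sketch; the affect\n# word list is illustrative, not a validated lexicon.\nAFFECT_WORDS = {'fear', 'hope', 'anger', 'proud', 'threat'}\n\ndef emotionality(text):\n    tokens = [t.strip('.,!?') for t in text.lower().split()]\n    if not tokens:\n        return 0.0\n    return sum(t in AFFECT_WORDS for t in tokens) \/ len(tokens)\n\nprint(emotionality('We have hope, but we also fear the threat ahead.'))\n\\end{verbatim}\n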
In NLP literature, social media, such as Twitter, is a popular source of textual data to collect public opinions \\citep{thelwall2011sentiment,paltoglou2012twitter,pak-paroubek-2010-twitter,arunachalam-sarkar-2013-new,rosenthal2015semeval}.\n\n\\myparagraph{Political Usage.}\nSuch information extracted from data is highly valuable for political usage. For example, voters' sentiment, stance, and ideology are an important supplement to traditional polls and surveys for gathering information about constituents' political leaning. Identifying the needs expressed by people is another important survey target, which helps politicians understand which needs they should address, and match needs with the available resources \\citep{hiware-etal-2020-narmada}.\n\nA more specific political use is to understand public opinion on parties and presidents, as well as on certain topics. The public sentiment towards parties \\citep{pla-hurtado-2014-political} and the President \\citep{marchetti-bowick-chambers-2012-learning} can serve as a supplement to traditional approval rating surveys, and stances towards certain topics \\citep{gottipati-etal-2013-learning,stefanov-etal-2020-predicting,luo-etal-2020-detecting} can be important information for legislators to make decisions on debatable issues such as abortion, taxes, and legalization of same-sex marriage.\nMany existing studies use NLP on social media text to predict election results \\citep{oconnor2010tweets,beverungen2011evaluating,unankard2014predicting,mohammad2015sentiment,tjong-kim-sang-bos-2012-predicting}.\nIn general, big-data-driven analyses can help decision-makers collect more feedback from people and society, enabling policymakers to be closer to citizens, and increasing transparency and engagement in political issues \\citep{arunachalam-sarkar-2013-new}.\n\n\n\\subsection{Interpreting Political Decisions}\\label{sec:use2}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\textwidth]{fig\/img_use_interpret.pdf}\n \\caption{NLP to interpret political decisions.}\n \\label{fig:use_interpret}\n\\end{figure}\n\nAfter policies are made, political scientists and social scientists can use textual data to interpret political decisions. \nAs in Figure~\\ref{fig:use_interpret}, there are two major use cases: mining political agendas, and discovering policy responsiveness.\n\n\\myparagraph{Mining Political Agendas.}\nResearchers can use textual data to infer a political agenda, including the topics that politicians prioritize, political events, and different political actors' stances on certain topics.\nSuch data can come from press releases, legislation, and electoral campaigns. 
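\n\nA minimal Python sketch of topic modeling over such documents with \\texttt{Gensim}, run here on a purely illustrative toy corpus, could look as follows:\n\\begin{verbatim}\n# A minimal LDA topic modeling sketch with Gensim; the corpus\n# is a toy example of pre-tokenized political texts.\nfrom gensim import corpora\nfrom gensim.models import LdaModel\n\ndocs = [['tax', 'budget', 'economy'],\n        ['school', 'education', 'teacher'],\n        ['tax', 'economy', 'growth']]\n\ndictionary = corpora.Dictionary(docs)\ncorpus = [dictionary.doc2bow(doc) for doc in docs]\n\nlda = LdaModel(corpus, num_topics=2, id2word=dictionary,\n               random_state=0)\nfor topic in lda.print_topics():\n    print(topic)\n\\end{verbatim}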
\nExamples of previous studies analyzing the topics and priorities of political bodies include research on the prioritization each Senator assigns to topics using press releases \\citep{grimmer2010representational}, topics in different parties' electoral manifestos \\citep{glavas-etal-2017-cross}, topics in EU parliament speeches \\citep{lauscher2016entities} and other various types of text \\citep{king2003automated,hopkins2010method,grimmer2010bayesian,roberts2014structural}, as well as political event detection from congressional text and news \\citep{nanni2017building}.\n\nResearch on politicians' stances includes identifying the policy positions of politicians \\citep[\\textit{inter alia}]{winter1977content,laver2003extracting,slapin2008scaling,lowe2011scaling},\nhow different politicians agree or disagree on certain topics in electoral campaigns \\citep{menini-tonelli-2016-agreement}, \nand assessments of political personalities \\citep{immelman1993assessment}.\n\n\nFurther studies look into how political interests affect legislative behavior. Legislators tend to show strong personal interest in the issues that come before their committees \\citep{fenno1973congressmen}, and \\citet{mayhew2004congress} identifies that Senators relying on appropriations secured for their state have a strong incentive to support legislation that allows them to secure particularistic goods.\n\n\n\\myparagraph{Discovering Policy Responsiveness.}\nPolicy responsiveness is the study of how policies respond to different factors, such as how changes in public opinion lead to responses in public policy \\citep{stimson1995dynamic}. One major finding is that politicians tend to make policies that align with the expectations of their constituents, in order to win re-election in the next term \\citep{canes2002out}.\nStudies show that policy preferences of the state public can be a predictor of future state policies \\citep{caughey2018policy}.\nFor example, \\citet{lax2009gay} show that more LGBT tolerance leads to more pro-gay legislation in response.\n\n\nA recent study by \\citet{jin-etal-2021-mining-cause} uses NLP to analyze over 10 million COVID-19-related tweets targeted at US governors; using classification models to obtain the public sentiment, they study how public sentiment leads to the COVID-19 policy decisions made by US governors. Such use of NLP on massive textual data contrasts with traditional studies of policy responsiveness, which span several decades and use manually collected survey results \\citep{caughey2018policy,lax2009gay,lax2012democratic}. \n\n\n\\subsection{Improving Policy Communication with the Public}\\label{sec:use3}\n\nPolicy communication is the study of how politicians present policies to their constituents. 
As in Figure~\\ref{fig:use_comm}, common research questions in policy communication include how politicians establish their images \\citep{fenno1978home}, for example through campaign strategies \\citep{petrocik1996issue,simon2002winning,sigelman2004avoidance}, how constituents allocate credit, what receives attention in Congress \\citep{sulkin2005issue}, and what receives attention in news articles \\citep{semetko2000framing,mccombs2004setting,armstrong2006whose}.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=\\textwidth]{fig\/img_use_comm.pdf}\n \\caption{NLP to analyze policy communication.}\n \\label{fig:use_comm}\n\\end{figure}\n\nBased on data from press releases, political statements, electoral campaigns, and news articles,\\footnote{Other data sources used in policy communication research include surveys of Senate staffers \\citep{cook1988press}, newsletters that legislators send to constituents \\citep{lipinski2009congressional}, and so on.} researchers usually analyze two types of information: the language techniques politicians use, and the contents of these textual documents, such as their topics and underlying moral foundations. \n\n\\myparagraph{Language Techniques.}\nPolicy communication largely focuses on the types of language that politicians use. Researchers are interested in first analyzing the language techniques in political texts; based on these techniques, researchers can then dive into the questions of why politicians use them and what the effects of such usage are. \n\nFor example, previous studies analyze what portions of political texts are position-taking versus credit-claiming \\citep{grimmer2012words,grimmer2013appropriators}, whether the claims are vague or concrete \\citep{baerg2018central,eichorst2019resist}, the frequency of credit-claiming messages versus the actual amount of contributions\n\\citep{grimmer2012words}, and whether politicians tend to make credible or dishonorable promises \\citep{grimmer2010representational}. Within the political statements, it is also interesting to check the ideological proportions \\citep{sim-etal-2013-measuring}, and how politicians make use of dialectal variations and code-mixing \\citep{sravani-etal-2021-political}.\n\nSuch presentation styles usually affect the effectiveness of policy communication; examples include \nthe role of language\nambiguity in framing the political agenda \\citep{page1976theory,campbell1983ambiguity}, and the effect of credit-claiming messages on constituents' allocation of credit \\citep{grimmer2012words}.\n\n\\myparagraph{Contents.}\nThe contents of policy communication include the topics in political statements, such as what Senators discuss in floor statements \\citep{hill2002symbolic} and what Presidents address in daily speeches \\citep{lee2008dividers}, as well as the moral foundations underlying politicians' tweets \\citep{johnson-goldwasser-2018-classification}.\n\nUsing the extracted content information, researchers can explore further questions such as whether competing politicians or political elites emphasize the same issues \\citep{petrocik1996issue,gabel2007estimating}, and how the priorities politicians articulate co-vary with the issues discussed in\nthe media \\citep{bartels1996politicians}. 
Another open research direction is to analyze the interaction between newspapers and politicians' messages, such as how often newspapers cover a certain politician's message and in what way, and how such coverage affects incumbency advantage.\n\n\\myparagraph{Meaningful Future Work.}\nApart from analyzing the language of existing political texts, which aims to maximize political interests, a question that is more meaningful to society is how to improve policy communication so that it steers society as a whole towards a more beneficial future. There is relatively little research on this question, and we welcome future work on this topic.\n\n\n\\subsection{Investigating Policy Effects}\\label{sec:use4}\n\nAfter policies take effect, it is important to collect feedback and evaluate the effectiveness of the policies. Existing studies evaluate the effects of policies along different dimensions: one dimension is the change in public sentiment, which can be analyzed by comparing sentiment classification results before and after a policy is enacted, following a paradigm similar to that in Section~\\ref{sec:use1}.\nThere are also studies on how policies affect the crowd's perception of the democratic process \\citep{miller1990voters}.\n\nAnother dimension is how policies result in economic changes. \\citet{calvo2018winners} investigate the negative consequences of policy volatility, which harms long-term economic growth. Specifically, to measure policy volatility, they first obtain main topics by topic modeling on presidential speeches, and then analyze how the significance of topics changes over time.\n\n\n\\section{Limitations and Ethical Considerations} \\label{sec:ethics}\n\nThere are several limitations that researchers and policymakers need to take into consideration when using NLP for policymaking, due to the data-driven and black-box nature of modern NLP. First, the effectiveness of the computational models relies on the quality and comprehensiveness of the data. Although many political discourses are public, including data sources such as news, press releases, legislation, and campaigns, when it comes to surveying public opinion, social media might give a biased representation of the whole population. Therefore, when making important policy decisions, traditional polls and surveys can provide more comprehensive coverage. Note that in the case of traditional polls, NLP can still be helpful in expediting the processing of survey answers. \n\nThe second concern is the black-box nature of modern NLP models. We do not encourage decision-making systems to depend fully on NLP, but suggest that NLP can assist human decision-makers. Hence, all the applications introduced in this chapter use NLP to compile information that is necessary for policymaking instead of directly suggesting a policy. Nonetheless, some of the models are hard to interpret or explain, such as text classification using deep learning models \\citep{yin-etal-2019-benchmarking,brown2020language}, which could be vulnerable to adversarial attacks through small paraphrases of the text input \\citep{jin2020bert}. In practical applications, it is important to ensure the trustworthy use of AI. 
Where transparent machine learning models can do the work well (e.g., LDA topic models, or traditional classification methods using dictionaries or linguistic rules), they could be preferred, as could tasks with well-controlled outputs, such as event extraction that selects spans of the given text mentioning events. In cases where only deep learning models can provide good performance, there should be more detailed performance analysis (e.g., a study checking the correlation between model decisions and human judgments), error analysis (e.g., different types of errors, failure modes, and potential bias towards certain groups), and studies about the interpretability of the model (e.g., feature attribution of the model, visualization of the internal states of the model).\n\nApart from the limitations of the technical methodology, there are also ethical considerations arising from the use of NLP. Among the use cases introduced in this chapter, some applications of NLP are relatively safe as they mainly involve analyzing public political documents and fact-based evidence or effects of policies. However, others could be concerning and vulnerable to misuse. For example, although effective, truthful policy communication is beneficial for society, it might be tempting to overdo policy communication and optimize for votes by all means. As it is highly important for government and politicians to gain positive public perception, overly optimizing policy communication might lead to propaganda, intrusions into data privacy to collect more user preferences, and, in more severe cases, surveillance and violations of human rights.\nHence, there is a strong need for policies to regulate the use of technologies that influence public opinion and pose a challenge to democracy.\n\n\\section{Conclusions}\n\nThis chapter provided a brief overview of current research directions in NLP that can support policymaking. We first introduced four main NLP tasks that are commonly used in text analysis: text classification, topic modeling, event extraction, and text scaling. We then showed how these methods can be used in policymaking for applications such as data collection for evidence-based policymaking, interpretation of political decisions, policy communication, and investigation of policy effects. We also discussed potential limitations and ethical considerations of which researchers and policymakers should be aware.\n\nNLP holds significant promise for enabling data-driven policymaking. In addition to the tasks overviewed in this chapter, we foresee that other NLP applications, such as text summarization (e.g., to condense information from large documents), question answering (e.g., for reasoning about policies), and culturally-adjusted machine translation (e.g., to facilitate international communication), will soon find use in policymaking. The field of NLP is quickly advancing, and close collaborations between NLP experts and public policy experts will be key to the successful use and deployment of NLP tools in public policy. \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\n\nSpectropolarimetric data in the optical\/UV range put important\nconstraints on the emission and scattering geometry of AGNs (see\ne.g. \\cite{antonucci2002}). For the interpretation of\nspectropolarimetric data, detailed modeling tools are important. 
Here,\nwe use a new, publicly available radiative transfer code, {\\sc Stokes}\n(\\cite[Goosmann \\& Gaskell\\ 2007]{goosmann2007}), to model the\nobscuring torus of AGNs. The code is based on the Monte-Carlo method\nand allows the simulation of various emission and scattering\ngeometries. Polarization due to Thomson and dust (Mie-)scattering is\nincluded. Moreover, the code computes wavelength-dependent time delays\nand can thus be used to study polarization reverberation (\\cite[Shoji,\nGaskell, \\& Goosmann\\ 2005]{shoji2005}).\n\n\n\\section{Modeling an obscuring torus}\n\nWe consider a torus with an elliptical cross-section and centered on a\npoint source. The source isotropically emits a flat continuum spectrum\nbetween 1600~\\AA \\ and 8000~\\AA. The torus half-opening angle is set to\n$\\theta_0 = 30^\\circ$. The inner and outer radii of the torus are fixed at\n0.25 pc and 100 pc, respectively. The radial optical depth in the equatorial\nplane is 750 for the V-band. The dust models (table~\\ref{tab}) assume\na mixture of graphite and ``astronomical silicate'' and a distribution of grain\nradii $n(a) \\propto a^{\\alpha_s}$ between $a_{\\rm min}$ and\n$a_{\\rm max}$.\n\n\\begin{table}\\def~{\\hphantom{0}}\n \\begin{center}\n \\caption{Parameterization of the dust models}\n {\\tiny\n \\label{tab}\n \\begin{tabular}{lccccc}\\hline\n Type & Graphite & Silicate & $a_{\\rm min}$ & $a_{\\rm max}$ &\n $\\alpha_s$\\\\\\hline\n Galactic & 62.5\\% & 37.5\\% & 0.005$\\, \\mu{\\rm m}$ & 0.250$\\, \\mu{\\rm m}$\n & $-3.5$\\\\\n AGN & 85\\% & 15\\% & 0.005$\\, \\mu{\\rm m}$ & 0.200$\\, \\mu{\\rm m}$\n & $-2.05$\\\\\\hline\n \\end{tabular}\n }\n \\end{center}\n\\end{table}\n\nThe ``Galactic dust'' model reproduces the interstellar extinction\nfor $R_{\\rm V} = 3.1$ whilst the ``AGN dust'' parameterization is\nobtained from quasar extinction curves derived by \\cite[Gaskell\n\\etal\\ 2004]{gaskell2004}. This latter dust type favors larger grain\nsizes.\n\n\n\\section{Results and discussion}\n\nThe angular distributions of the polarization and flux spectra are\nshown in figure~\\ref{fig}. In a face-on view, the central source is\nseen directly and the obtained polarization is low. At obscured\ninclinations $i>\\theta_0$, the scattering properties of both dust\ntypes lead to different results. For ``AGN dust'', the polarization is\nlower in the UV but rises quickly toward longer wavelengths. Detailed\nspectropolarimetric observations of dust reflection thus effectively\nconstrain the dust composition. For both dust types the polarization\nposition angle is oriented perpendicularly to the projected symmetry\naxis of the object.\n\n\\begin{figure}\n \\vskip +0.4cm\n \\includegraphics[width=0.48\\textwidth]{S238-goosmann-poster2-fig1.eps}\n \\hfill\n \\includegraphics[width=0.48\\textwidth]{S238-goosmann-poster2-fig2.eps}\n \\caption{Polarization (top) and flux (bottom) spectra for a\n centrally-illuminated torus filled with ``Galactic dust'' (left) or\n ``AGN dust'' (right). The flux spectra are normalized to the\n illuminating flux and the inclination angle $i$ is measured from the\n symmetry axis.}\n \\label{fig}\n\\end{figure}\n\nIf the torus is filled with ``AGN dust'' the total flux spectra are\nwavelength-independent above 2500~\\AA \\ at all possible\ninclinations. For the ``Galactic dust'' model the scattered flux rises\ngradually toward longer wavelengths. 
The albedo of both dust\ncompositions changes significantly below 2500~\\AA \\ so that less\nradiation is scattered in the UV.\n\nSpectropolarimetric data from AGNs contain not only information\nabout the obscuring torus, but also about other scattering\nregions. To obtain a more detailed picture of AGN polarization it is\ntherefore necessary to model several radiatively coupled scattering\nregions self-consistently. While this is beyond the scope of this\nproceedings note, {\\sc Stokes} is capable of solving such problems.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\n\nWe are interested in using climate models to produce objective probabilistic forecasts of future climate that\ninclude parameter uncertainty. The word `objective' is used here in a technical\nstatistical sense that means that the prior distribution for the parameters is determined by a rule,\nrather than from intuition.\nIn statistics the most widely discussed rule is the Jeffreys' rule~\\citep{jeffreys}.\nJeffreys' rule could, in principle, be applied to a climate model directly from the definition.\nThis would translate into using numerical methods to differentiate the predicted probabilities from\ninitial condition ensembles and to take expectations over all simulated climate states.\nThat could, however, be computationally demanding.\nAs an alternative one could fit distributions to the output from the model, and differentiate the estimated parameters of\nthe distributions instead, which is likely to be considerably easier.\nThe most obvious distribution to fit is then the multivariate normal distribution.\nIn previous work we have considered two special cases.\nIn~\\citet{jp1} we considered the case where the predicted variables are independent, but both the mean and the variance of initial condition\nensembles from the model are allowed to vary as a function of the model parameters.\nIn~\\citet{jp2} we considered the complementary case where the predicted variables can be correlated, but with a covariance matrix that is constant\nas a function of the model parameters.\nWe now consider the general case, with correlated predicted variables \\emph{and} a covariance matrix that can vary as a function of the\nmodel parameters.\nThis general case contains the two special cases.\nThe algebra in this case is not quite as elementary as in the two special cases, in that we have\nto take expectations of a vector quadratic form, and differentiate the determinants, traces and inverses of matrices.\nPedagogically, therefore, we consider the derivations used in the two simpler cases to still be useful, especially\nas climate modelling practice is perhaps unlikely, for quite some time, to reach the stage where it would be necessary to consider the covariance\nmatrix varying as a function of the parameters (there are many other more important challenges to be dealt with first).\n\nDetailed explanations of Jeffreys' Prior, and the motivation for its use, are given in~\\citet{jp1} and will not be repeated here.\nIn section~\\ref{s2} below we give the expressions for Jeffreys' Prior and the multivariate normal density.\nIn section~\\ref{s3} we then derive the expression for Jeffreys' Prior for the multivariate normal distribution where the climate model\nhas just a single parameter.\nIn section~\\ref{s4} we derive the same expression, but considering multiple parameters.\nIn both section~\\ref{s3} and section~\\ref{s4} we also consider four special cases:\nindependence (taking us back to the 
results in~\\citet{jp1}),\nconstant covariance (taking us back to the results in~\\citet{jp2}),\nconstant correlation, and\nconstant variance.\n\n\\section{Jeffreys' Prior and the Multivariate Normal Density}\\label{s2}\n\nWe start with definitions of the Jeffreys' Prior and the multivariate normal density.\n\n\\subsection{Jeffreys' Prior}\n\nJeffreys' Prior is given by:\n\n\\begin{equation}\\label{jp_def}\n p(\\theta)=\\sqrt{\\mbox{det}\\left[-\\mbox{E}\\left(\\frac{\\partial^2 \\log p(x|\\theta)}{\\partial \\theta_j \\partial \\theta_k}\\right)\\right]}\n\\end{equation}\n\nDetailed explanations of this equation, which can at first be rather difficult to understand, are given in~\\citet{jp1} and~\\citet{jp2},\nas well as in many statistics textbooks, such as~\\citet{peterlee}.\n\n\\subsection{The Multivariate Normal Density}\n\nProbability densities from the multivariate normal distribution are given by:\n\\begin{equation}\np(x|\\theta)=\\frac{1}{(2\\pi)^\\frac{n}{2}} \\frac{1}{D^{\\frac{1}{2}}} \\mbox{exp}\\left(-\\frac{1}{2}(x-\\mu)^T S(x-\\mu)\\right)\n\\end{equation}\n\nwhere, in our application:\n\\begin{itemize}\n \\item $x$ is a vector of those variables predicted by the climate model that are to be compared\n with observations\n \\item $\\theta$ is a vector of the underlying parameters in the climate model\n \\item $\\mu=\\mu(\\theta)$ is a vector of the mean response of the model (in other words, the ensemble mean\n of $x$ for an infinite-sized initial condition ensemble for fixed parameters $\\theta$)\n \\item $\\Sigma=\\Sigma(\\theta)$ is the covariance matrix of the response of the model\n (in other words, the ensemble covariance matrix of $x$ for an infinite-sized initial condition ensemble for fixed parameters $\\theta$)\n \\item $S=\\Sigma^{-1}=S(\\theta)$ is the inverse of the covariance matrix\n \\item $D=\\mbox{det}(\\Sigma)=D(\\theta)$ is the determinant of the covariance matrix\n\\end{itemize}\n\nThis gives:\n\\begin{eqnarray}\n\\ln p(x|\\theta)\n&=&-\\frac{n}{2}\\ln 2\\pi -\\frac{1}{2} \\ln D -\\frac{1}{2}(x-\\mu)^T S(x-\\mu)\\\\\n&=&-\\frac{n}{2}\\ln 2\\pi -\\frac{1}{2} \\ln D -\\frac{1}{2}(x^T S x-x^TS \\mu-\\mu^T S x+\\mu^T S \\mu)\n\\end{eqnarray}\n\nSince $S$ is symmetric we have $x^TS\\mu=\\mu^T S x$\nwhich means that the above expression for $\\ln p(x|\\theta)$ simplifies a little to:\n\\begin{eqnarray}\n\\ln p(x|\\theta)\\label{eq1}\n&=&-\\frac{n}{2}\\ln 2\\pi -\\frac{1}{2} \\ln D -\\frac{1}{2}(x^T S x-2x^TS \\mu+\\mu^T S \\mu)\n\\end{eqnarray}\n\nWe now consider two cases: one parameter and multiple parameters.\n\n\\section{One Parameter}\\label{s3}\n\nThe first case we consider is where there is just a single parameter in the climate model.\nWe mainly consider this case as a warm-up for the multiple parameter case, although it would also be\nrelevant if one only wanted to model the uncertainty due to a single parameter, which might be\na good approximation to the overall parameter uncertainty if that single parameter dominates the uncertainty.\n\nIf we consider $\\theta$ to be this single (scalar) parameter, then differentiating equation~\\ref{eq1} with respect to $\\theta$ gives:\n\\begin{eqnarray}\n\\frac{\\partial \\ln p(x|\\theta)}{\\partial \\theta}\n&=&-\\frac{1}{2} \\frac{\\partial \\ln D}{\\partial \\theta}\n -\\frac{1}{2}x^T \\frac{\\partial S}{\\partial \\theta} x\n +x^T\\frac{\\partial (S \\mu)}{\\partial \\theta}\n -\\frac{1}{2}\\frac{\\partial (\\mu^T S \\mu)}{\\partial \\theta}\n\\end{eqnarray}\n\nExpanding the third and fourth 
terms:\n\\begin{eqnarray}\n\\frac{\\partial \\ln p(x|\\theta)}{\\partial \\theta}\n&=&-\\frac{1}{2} \\frac{\\partial \\ln D}{\\partial \\theta}\n -\\frac{1}{2}x^T \\frac{\\partial S}{\\partial \\theta} x\n +x^T \\left(\\frac{\\partial S}{\\partial \\theta}\\mu+S\\frac{\\partial \\mu}{\\partial \\theta}\\right)\n -\\frac{1}{2}\\left(\\frac{\\partial \\mu^T }{\\partial \\theta}S \\mu+\\mu^T\\frac{\\partial S }{\\partial \\theta}\\mu+\\mu^T S \\frac{\\partial \\mu}{\\partial \\theta}\\right)\\\\\n&=&-\\frac{1}{2} \\frac{\\partial \\ln D}{\\partial \\theta}\n -\\frac{1}{2}x^T \\frac{\\partial S}{\\partial \\theta} x\n +x^T \\frac{\\partial S}{\\partial \\theta}\\mu+x^T S\\frac{\\partial \\mu}{\\partial \\theta}\n -\\frac{1}{2}\\frac{\\partial \\mu^T }{\\partial \\theta}S \\mu-\\frac{1}{2}\\mu^T\\frac{\\partial S }{\\partial \\theta}\\mu-\\frac{1}{2}\\mu^T S \\frac{\\partial \\mu}{\\partial \\theta}\n\\end{eqnarray}\n\nSince $S$ is symmetric $\\mu^T S \\frac{\\partial \\mu}{\\partial \\theta}=\\frac{\\partial \\mu^T }{\\partial \\theta}S \\mu$ and so\nthe 5th and 7th terms combine, giving:\n\n\\begin{eqnarray}\\label{eq10}\n\\frac{\\partial \\ln p(x|\\theta)}{\\partial \\theta}\n&=&-\\frac{1}{2} \\frac{\\partial \\ln D}{\\partial \\theta}\n -\\frac{1}{2}x^T \\frac{\\partial S}{\\partial \\theta} x\n +x^T \\frac{\\partial S}{\\partial \\theta}\\mu\n +x^T S\\frac{\\partial \\mu}{\\partial \\theta}\n -\\frac{\\partial \\mu^T }{\\partial \\theta}S \\mu\n -\\frac{1}{2}\\mu^T\\frac{\\partial S }{\\partial \\theta}\\mu\n\\end{eqnarray}\n\nDifferentiating again wrt $\\theta$ gives:\n\\begin{eqnarray}\n\\frac{\\partial^2 \\ln p(x|\\theta)}{\\partial \\theta^2}\n&=&-\\frac{1}{2} \\frac{\\partial^2 \\ln D}{\\partial \\theta^2}\n -\\frac{1}{2}x^T \\frac{\\partial^2 S}{\\partial \\theta^2} x\n +x^T \\frac{\\partial}{\\partial \\theta}\\left(\\frac{\\partial S}{\\partial \\theta}\\mu\\right)\n +x^T \\frac{\\partial}{\\partial \\theta}\\left(S\\frac{\\partial \\mu}{\\partial \\theta}\\right)\\\\\n&& -\\frac{\\partial}{\\partial \\theta}\\left(\\frac{\\partial \\mu^T }{\\partial \\theta}S \\mu\\right)\n -\\frac{1}{2}\\frac{\\partial}{\\partial \\theta}\\left(\\mu^T\\frac{\\partial S }{\\partial \\theta}\\mu\\right)\\\\\n&=&-\\frac{1}{2} \\frac{\\partial^2 \\ln D}{\\partial \\theta^2}\n -\\frac{1}{2}x^T \\frac{\\partial^2 S}{\\partial \\theta^2} x\n +x^T \\left(\\frac{\\partial^2 S}{\\partial \\theta^2}\\mu+\\frac{\\partial S}{\\partial \\theta}\\frac{\\partial \\mu}{\\partial \\theta}\\right)\n +x^T \\left(\\frac{\\partial S}{\\partial \\theta}\\frac{\\partial \\mu}{\\partial \\theta}+S\\frac{\\partial^2 \\mu}{\\partial \\theta^2}\\right)\\\\\n&& -\\left(\\frac{\\partial^2 \\mu^T }{\\partial \\theta^2}S \\mu\n +\\frac{\\partial \\mu^T }{\\partial \\theta}\\frac{\\partial S}{\\partial \\theta} \\mu\n +\\frac{\\partial \\mu^T }{\\partial \\theta}S \\frac{\\partial \\mu}{\\partial \\theta}\\right)\n -\\frac{1}{2}\\left(\\frac{\\partial \\mu^T}{\\partial \\theta}\\frac{\\partial S }{\\partial \\theta}\\mu\n +\\mu^T\\frac{\\partial^2 S }{\\partial \\theta^2}\\mu\n +\\mu^T\\frac{\\partial S }{\\partial \\theta}\\frac{\\partial \\mu}{\\partial \\theta}\\right)\\\\\n&&\\mbox{(where we have expanded the derivatives of products)}\\nonumber\\\\\n&=&-\\frac{1}{2} \\frac{\\partial^2 \\ln D}{\\partial \\theta^2}\n -\\frac{1}{2}x^T \\frac{\\partial^2 S}{\\partial \\theta^2} x\n +x^T \\frac{\\partial^2 S}{\\partial \\theta^2}\\mu\n +x^T \\frac{\\partial S}{\\partial \\theta}\\frac{\\partial \\mu}{\\partial \\theta}\n +x^T 
\\frac{\\partial S}{\\partial \\theta}\\frac{\\partial \\mu}{\\partial \\theta}\n +x^T S\\frac{\\partial^2 \\mu}{\\partial \\theta^2}\\\\\n&& -\\frac{\\partial^2 \\mu^T }{\\partial \\theta^2}S \\mu\n -\\frac{\\partial \\mu^T }{\\partial \\theta}\\frac{\\partial S}{\\partial \\theta} \\mu\n -\\frac{\\partial \\mu^T }{\\partial \\theta}S \\frac{\\partial \\mu}{\\partial \\theta}\n -\\frac{1}{2}\\frac{\\partial \\mu^T}{\\partial \\theta}\\frac{\\partial S }{\\partial \\theta}\\mu\n -\\frac{1}{2}\\mu^T\\frac{\\partial^2 S }{\\partial \\theta^2}\\mu\n -\\frac{1}{2}\\mu^T\\frac{\\partial S }{\\partial \\theta}\\frac{\\partial \\mu}{\\partial \\theta}\\\\\n&=&-\\frac{1}{2} \\frac{\\partial^2 \\ln D}{\\partial \\theta^2}\n -\\frac{1}{2}x^T \\frac{\\partial^2 S}{\\partial \\theta^2} x\n +x^T \\frac{\\partial^2 S}{\\partial \\theta^2}\\mu\n +2x^T \\frac{\\partial S}{\\partial \\theta}\\frac{\\partial \\mu}{\\partial \\theta}\n +x^T S\\frac{\\partial^2 \\mu}{\\partial \\theta^2}\\\\\n&& -\\frac{\\partial^2 \\mu^T }{\\partial \\theta^2}S \\mu\n -2\\frac{\\partial \\mu^T }{\\partial \\theta}\\frac{\\partial S}{\\partial \\theta} \\mu\n -\\frac{\\partial \\mu^T }{\\partial \\theta}S \\frac{\\partial \\mu}{\\partial \\theta}\n -\\frac{1}{2}\\mu^T\\frac{\\partial^2 S }{\\partial \\theta^2}\\mu\\\\\n&&\\mbox{(where we have combined the 4th and 5th, and 8th, 10th and 12th terms)}\\nonumber\n\\end{eqnarray}\n\nTaking expectations:\n\\begin{eqnarray}\nE\\left(\\frac{\\partial^2 \\ln p(x|\\theta)}{\\partial \\theta^2}\\right)\n&=&-\\frac{1}{2} \\frac{\\partial^2 \\ln D}{\\partial \\theta^2}\n -\\frac{1}{2}E\\left(x^T \\frac{\\partial^2 S}{\\partial \\theta^2} x\\right)\n +E(x^T) \\frac{\\partial^2 S}{\\partial \\theta^2}\\mu\n +2E(x^T) \\frac{\\partial S}{\\partial \\theta}\\frac{\\partial \\mu}{\\partial \\theta}\\\\\n&& +E(x^T) S\\frac{\\partial^2 \\mu}{\\partial \\theta^2}\n -\\frac{\\partial^2 \\mu^T }{\\partial \\theta^2}S \\mu\n -2\\frac{\\partial \\mu^T }{\\partial \\theta}\\frac{\\partial S}{\\partial \\theta} \\mu\n -\\frac{\\partial \\mu^T }{\\partial \\theta}S \\frac{\\partial \\mu}{\\partial \\theta}\n -\\frac{1}{2}\\mu^T\\frac{\\partial^2 S }{\\partial \\theta^2}\\mu\n\\end{eqnarray}\n\nBut $E(x)=\\mu$, and, from appendix 2,\n\\begin{equation}\nE\\left(x^T \\frac{\\partial^2 S}{\\partial \\theta^2} x\\right)=\\mbox{tr}\\left( \\frac{\\partial^2 S}{\\partial \\theta^2}\\Sigma\\right)+\\mu^T \\frac{\\partial^2 S}{\\partial \\theta^2} \\mu\n\\end{equation}\n\nand so:\n\n\\begin{eqnarray}\nE\\left(\\frac{\\partial^2 \\ln p(x|\\theta)}{\\partial \\theta^2}\\right)\n&=&-\\frac{1}{2} \\frac{\\partial^2 \\ln D}{\\partial \\theta^2}\n -\\frac{1}{2} \\left(\\mbox{tr}\\left(\\frac{\\partial^2 S}{\\partial \\theta^2}\\Sigma \\right)+\\mu^T \\frac{\\partial^2 S}{\\partial \\theta^2} \\mu\\right)\n +\\mu^T \\frac{\\partial^2 S}{\\partial \\theta^2}\\mu\n +2\\mu^T \\frac{\\partial S}{\\partial \\theta}\\frac{\\partial \\mu}{\\partial \\theta}\\\\\n&& +\\mu^T S\\frac{\\partial^2 \\mu}{\\partial \\theta^2}\n -\\frac{\\partial^2 \\mu^T }{\\partial \\theta^2}S \\mu\n -2\\frac{\\partial \\mu^T }{\\partial \\theta}\\frac{\\partial S}{\\partial \\theta} \\mu\n -\\frac{\\partial \\mu^T }{\\partial \\theta}S \\frac{\\partial \\mu}{\\partial \\theta}\n -\\frac{1}{2}\\mu^T\\frac{\\partial^2 S }{\\partial \\theta^2}\\mu\\\\\n&=&-\\frac{1}{2} \\frac{\\partial^2 \\ln D}{\\partial \\theta^2}\n -\\frac{1}{2} \\mbox{tr}\\left( \\frac{\\partial^2 S}{\\partial \\theta^2}\\Sigma\\right)\n -\\frac{1}{2} \\mu^T \\frac{\\partial^2 
S}{\\partial \\theta^2} \\mu\n +\\mu^T \\frac{\\partial^2 S}{\\partial \\theta^2}\\mu\n +2\\mu^T \\frac{\\partial S}{\\partial \\theta}\\frac{\\partial \\mu}{\\partial \\theta}\\\\\n&& +\\mu^T S\\frac{\\partial^2 \\mu}{\\partial \\theta^2}\n -\\frac{\\partial^2 \\mu^T }{\\partial \\theta^2}S \\mu\n -2\\frac{\\partial \\mu^T }{\\partial \\theta}\\frac{\\partial S}{\\partial \\theta} \\mu\n -\\frac{\\partial \\mu^T }{\\partial \\theta}S \\frac{\\partial \\mu}{\\partial \\theta}\n -\\frac{1}{2}\\mu^T\\frac{\\partial^2 S }{\\partial \\theta^2}\\mu\\\\\\label{eq2}\n&=&-\\frac{1}{2} \\left[\\frac{\\partial^2 \\ln D}{\\partial \\theta^2}+\\mbox{tr}\\left( \\frac{\\partial^2 S}{\\partial \\theta^2}\\Sigma\\right)\\right]\n -\\frac{\\partial \\mu^T }{\\partial \\theta}S \\frac{\\partial \\mu}{\\partial \\theta}\\\\\n&&\\mbox{(where we have cancelled various terms)}\\nonumber\n\\end{eqnarray}\n\nBut\n\\begin{eqnarray}\n\\frac{\\partial^2 \\ln D}{\\partial \\theta^2}\n&=&\\frac{\\partial}{\\partial \\theta}\\left(\\frac{\\partial \\ln D}{\\partial \\theta}\\right)\\\\\n&=&\\frac{\\partial}{\\partial \\theta}\\left(\\frac{1}{D}\\frac{\\partial D}{\\partial \\theta}\\right)\\\\\n&=&\\frac{\\partial}{\\partial \\theta}\\left(\\frac{1}{D}\\left(D\\mbox{tr}\\left(S\\frac{\\partial \\Sigma}{\\partial \\theta}\\right)\\right)\\right)\\\\\n&&\\mbox{(using a standard result for the derivative of a determinant known as Jacobi's formula)}\\nonumber\\\\\n&=&\\frac{\\partial}{\\partial \\theta}\\left(\\mbox{tr}\\left(S\\frac{\\partial \\Sigma}{\\partial \\theta}\\right)\\right)\\\\\n&=&\\mbox{tr}\\left(\\frac{\\partial}{\\partial \\theta}\\left(S\\frac{\\partial \\Sigma}{\\partial \\theta}\\right)\\right)\\\\\n&&\\mbox{(using a standard result for the derivative of a trace)}\\nonumber\\\\\n&=&\\mbox{tr}\\left(\\frac{\\partial S}{\\partial \\theta}\\frac{\\partial \\Sigma}{\\partial \\theta}+S\\frac{\\partial^2 \\Sigma}{\\partial \\theta^2}\\right)\\\\\n&=&\\mbox{tr}\\left(\\frac{\\partial S}{\\partial \\theta}\\frac{\\partial \\Sigma}{\\partial \\theta}\\right)+\\mbox{tr}\\left( S \\frac{\\partial^2 \\Sigma}{\\partial \\theta^2}\\right)\\\\\n&&\\mbox{(using the linearity of the trace operator)}\\nonumber\n\\end{eqnarray}\n\nNow note that\n\\begin{eqnarray}\n\\frac{\\partial}{\\partial \\theta}(S \\Sigma)&=&S \\frac{\\partial \\Sigma}{\\partial \\theta}+\\frac{\\partial S}{\\partial \\theta}\\Sigma\n\\end{eqnarray}\nand\n\\begin{eqnarray}\n\\frac{\\partial^2}{\\partial \\theta^2}( S \\Sigma)\n&=&S \\frac{\\partial^2 \\Sigma}{\\partial \\theta^2}+\\frac{\\partial^2 S}{\\partial \\theta^2}\\Sigma+2\\frac{\\partial S}{\\partial \\theta}\\frac{\\partial \\Sigma}{\\partial \\theta}\n\\end{eqnarray}\n\nBut $S\\Sigma=I$ and so $\\frac{\\partial^2}{\\partial \\theta^2}( S \\Sigma)=0$, implying that\n\\begin{equation}\nS\\frac{\\partial^2 \\Sigma}{\\partial \\theta^2}=-\\frac{\\partial^2 S}{\\partial \\theta^2}\\Sigma -2\\frac{\\partial S}{\\partial \\theta}\\frac{\\partial \\Sigma}{\\partial \\theta}\n\\end{equation}\n\nand so\n\\begin{eqnarray}\n\\frac{\\partial^2 \\ln D}{\\partial \\theta^2}\n&=&\\mbox{tr}\\left(\\frac{\\partial S}{\\partial \\theta}\\frac{\\partial \\Sigma}{\\partial \\theta}\\right)+\\mbox{tr}\\left( S \\frac{\\partial^2 \\Sigma}{\\partial \\theta^2}\\right)\\\\\n&=&\\mbox{tr}\\left(\\frac{\\partial S}{\\partial \\theta}\\frac{\\partial \\Sigma}{\\partial \\theta}\\right)+\\mbox{tr}\\left(- \\frac{\\partial^2 S}{\\partial \\theta^2}\\Sigma-2\\frac{\\partial S}{\\partial \\theta}\\frac{\\partial \\Sigma}{\\partial \\theta}\\right)\\\\\n&=&\\mbox{tr}\\left(\\frac{\\partial S}{\\partial \\theta}\\frac{\\partial \\Sigma}{\\partial \\theta}\\right)-\\mbox{tr}\\left( \\frac{\\partial^2 S}{\\partial \\theta^2}\\Sigma\\right)-\\mbox{tr}\\left(2\\frac{\\partial S}{\\partial \\theta}\\frac{\\partial \\Sigma}{\\partial \\theta}\\right)\\\\\n&=&-\\mbox{tr}\\left(\\frac{\\partial^2 S}{\\partial \\theta^2}\\Sigma \\right)-\\mbox{tr}\\left(\\frac{\\partial S}{\\partial \\theta}\\frac{\\partial \\Sigma}{\\partial \\theta}\\right)\n\\end{eqnarray}\n\nGiving:\n\\begin{eqnarray}\n\\frac{\\partial^2 \\ln D}{\\partial \\theta^2}+\\mbox{tr}\\left( \\frac{\\partial^2 S}{\\partial \\theta^2}\\Sigma\\right)\n&=&\\mbox{tr}\\left(- \\frac{\\partial^2 S}{\\partial \\theta^2}\\Sigma\\right)-\\mbox{tr}\\left(\\frac{\\partial S}{\\partial \\theta}\\frac{\\partial \\Sigma}{\\partial \\theta}\\right)+\\mbox{tr}\\left(\\frac{\\partial^2 S}{\\partial \\theta^2}\\Sigma\\right)\\\\\\label{eq3}\n&=&-\\mbox{tr}\\left(\\frac{\\partial S}{\\partial \\theta}\\frac{\\partial \\Sigma}{\\partial \\theta}\\right)\n\\end{eqnarray}\n\nReturning to equation~\\ref{eq2} and substituting in the expression given in equation~\\ref{eq3} gives:\n\n\\begin{eqnarray}\nE\\left(\\frac{\\partial^2 \\ln p(x|\\theta)}{\\partial \\theta^2}\\right)\n&=&-\\frac{1}{2} \\left[\\frac{\\partial^2 \\ln D}{\\partial \\theta^2}+\\mbox{tr}\\left( \\frac{\\partial^2 S}{\\partial \\theta^2}\\Sigma\\right)\\right]\n -\\frac{\\partial \\mu^T }{\\partial \\theta}S \\frac{\\partial \\mu}{\\partial \\theta}\\\\\n&=&-\\frac{1}{2} \\left[-\\mbox{tr}\\left(\\frac{\\partial S}{\\partial \\theta}\\frac{\\partial \\Sigma}{\\partial \\theta}\\right)\\right]\n -\\frac{\\partial \\mu^T }{\\partial \\theta}S \\frac{\\partial \\mu}{\\partial \\theta}\\\\\n&=&\\frac{1}{2}\\mbox{tr}\\left(\\frac{\\partial S}{\\partial \\theta}\\frac{\\partial \\Sigma}{\\partial \\theta}\\right)\n -\\frac{\\partial \\mu^T }{\\partial \\theta}S \\frac{\\partial \\mu}{\\partial \\theta}\\\\\n&=&-\\frac{1}{2}\\mbox{tr}\\left(S\\frac{\\partial \\Sigma}{\\partial \\theta}S\\frac{\\partial \\Sigma}{\\partial \\theta}\\right)\n -\\frac{\\partial \\mu^T }{\\partial \\theta}S \\frac{\\partial \\mu}{\\partial \\theta}\\\\\n &&\\mbox{(using the standard result that $\\frac{\\partial A^{-1}}{\\partial \\theta}=-A^{-1}\\frac{\\partial A}{\\partial \\theta}A^{-1}$)}\n\\end{eqnarray}\n\nThis gives the prior:\n\\begin{eqnarray}\np(\\theta)\n&=&\\sqrt{-E\\left(\\frac{\\partial^2 \\ln p(x|\\theta)}{\\partial \\theta^2}\\right)}\\\\\n&=&\\sqrt{\\frac{\\partial \\mu^T }{\\partial \\theta}S \\frac{\\partial \\mu}{\\partial \\theta}\n-\\frac{1}{2}\\mbox{tr}\\left(\\frac{\\partial S}{\\partial \\theta}\\frac{\\partial \\Sigma}{\\partial \\theta}\\right)}\\label{eq100}\\\\\n&=&\\sqrt{\\frac{\\partial \\mu^T }{\\partial \\theta}S \\frac{\\partial \\mu}{\\partial \\theta}\n+\\frac{1}{2}\\mbox{tr}\\left(S\\frac{\\partial \\Sigma}{\\partial \\theta}S\\frac{\\partial \\Sigma}{\\partial \\theta}\\right)}\\label{eq100b}\n\\end{eqnarray}\n\nIf there are $n$ observations then:\n\\begin{itemize}\n \\item $\\mu$ is an $n \\times 1$ vector\n \\item $\\frac{\\partial \\mu}{\\partial \\theta}$ is an $n \\times 1$ vector\n \\item $\\mu^T$ is a $1 \\times n$ vector\n \\item $\\frac{\\partial \\mu^T}{\\partial \\theta}$ is a $1 \\times n$ vector\n \\item $S$ is an $n \\times n$ matrix\n \\item $\\frac{\\partial \\mu^T }{\\partial \\theta}S \\frac{\\partial \\mu}{\\partial \\theta}$ is a scalar\n \\item $\\frac{\\partial S}{\\partial 
\nWe now consider various special cases of this general formula, starting with the two cases described in~\\citet{jp1} and~\\citet{jp2}.\n\n\\subsection{Independence}\n\nIf the observations are modelled as independent then:\n\\begin{eqnarray}\n\\Sigma\n&=&\\mbox{diag}\\left(\\sigma_1^2,\\sigma_2^2,...,\\sigma_n^2\\right)\\\\\nS\n&=&\\mbox{diag}\\left(\\sigma_1^{-2},\\sigma_2^{-2},...,\\sigma_n^{-2}\\right)\\\\\n\\frac{\\partial \\mu^T }{\\partial \\theta}S \\frac{\\partial \\mu}{\\partial \\theta}\n&=&\\sum_{i=1}^{n}\\frac{1}{\\sigma_i^2}\\left(\\frac{\\partial \\mu_i}{\\partial \\theta}\\right)^2\\\\\n\\frac{\\partial \\Sigma}{\\partial \\theta}\n&=&\\mbox{diag}\\left(2\\sigma_1 \\frac{\\partial \\sigma_1}{\\partial \\theta},2\\sigma_2\\frac{\\partial \\sigma_2}{\\partial \\theta},...,2\\sigma_n\\frac{\\partial \\sigma_n}{\\partial \\theta}\\right)\\\\\n\\frac{\\partial S}{\\partial \\theta}\n&=&\\mbox{diag}\\left(-2\\sigma_1^{-3} \\frac{\\partial \\sigma_1}{\\partial \\theta},-2\\sigma_2^{-3}\\frac{\\partial \\sigma_2}{\\partial \\theta},...,-2\\sigma_n^{-3}\\frac{\\partial \\sigma_n}{\\partial \\theta}\\right)\\\\\n\\frac{\\partial S}{\\partial \\theta}\\frac{\\partial \\Sigma}{\\partial \\theta}\n&=&\\mbox{diag}\\left(-4\\sigma_1^{-2} \\left(\\frac{\\partial \\sigma_1}{\\partial \\theta}\\right)^2,-4\\sigma_2^{-2}\\left(\\frac{\\partial \\sigma_2}{\\partial \\theta}\\right)^2,...,-4\\sigma_n^{-2}\\left(\\frac{\\partial \\sigma_n}{\\partial \\theta}\\right)^2\\right)\\\\\n&=&-4\\mbox{diag}\\left(\\frac{1}{\\sigma_1^2} \\left(\\frac{\\partial \\sigma_1}{\\partial \\theta}\\right)^2,\\frac{1}{\\sigma_2^2}\\left(\\frac{\\partial \\sigma_2}{\\partial \\theta}\\right)^2,...,\\frac{1}{\\sigma_n^2}\\left(\\frac{\\partial \\sigma_n}{\\partial \\theta}\\right)^2\\right)\\\\\n-\\frac{1}{2}\\mbox{tr}\\left(\\frac{\\partial S}{\\partial \\theta}\\frac{\\partial \\Sigma}{\\partial \\theta}\\right)\n&=&2\\sum_{i=1}^{n}\\frac{1}{\\sigma_i^2}\\left(\\frac{\\partial \\sigma_i}{\\partial \\theta}\\right)^2\n\\end{eqnarray}\n\nand so the prior is\n\\begin{eqnarray}\np(\\theta)&=&\\sqrt{\\sum_{i=1}^{n}\\frac{1}{\\sigma_i^2}\\left(\\frac{\\partial \\mu_i}{\\partial \\theta}\\right)^2\n +2\\sum_{i=1}^{n}\\frac{1}{\\sigma_i^2}\\left(\\frac{\\partial \\sigma_i}{\\partial \\theta}\\right)^2}\n\\end{eqnarray}\n\nwhich agrees with equation 19 in~\\citet{jp1}.\n\n\\subsection{Constant Covariance}\n\nIf the covariance $\\Sigma$ is constant then equation~\\ref{eq100} reduces immediately to\n\\begin{eqnarray}\np(\\theta)&=&\\sqrt{\\frac{\\partial \\mu^T }{\\partial \\theta}S \\frac{\\partial \\mu}{\\partial \\theta}}\n\\end{eqnarray}\n\nwhich agrees with equation 19 in~\\citet{jp2}.\n
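\nFor instance, if only the mean depends on the parameter, say $\\mu(\\theta)=\\theta v$ for a fixed vector $v$, then $p(\\theta)=\\sqrt{v^T S v}$ is constant in $\\theta$: we recover the usual flat Jeffreys' Prior for a location parameter.\n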
\n\\subsection{Constant Correlation}\n\nIf the correlations are modelled as constant (but the variances are allowed to vary) then:\n\n\\begin{eqnarray}\n\\Sigma&=&VCV\\\\\n\\mbox{where }V&=&\\mbox{diag}(\\sigma_1^2,\\sigma_2^2,...,\\sigma_n^2)\\\\\n\\mbox{and }C&=&\\mbox{the correlation matrix}\\\\\n\\frac{\\partial \\Sigma}{\\partial \\theta}\n&=& \\frac{\\partial V}{\\partial \\theta}CV+VC\\frac{\\partial V}{\\partial \\theta}\\\\\n&=& 2\\frac{\\partial V}{\\partial \\theta}CV\\\\\n\\mbox{where } \\frac{\\partial V}{\\partial \\theta}\n&=&2\\mbox{diag}(\\sigma_1\\frac{\\partial \\sigma_1}{\\partial \\theta},\\sigma_2\\frac{\\partial \\sigma_2}{\\partial \\theta},...,\\sigma_n\\frac{\\partial \\sigma_n}{\\partial \\theta})\\\\\nS&=&V^{-1} C^{-1} V^{-1}\\\\\n\\mbox{where }V^{-1}&=&\\mbox{diag}(\\sigma_1^{-2},\\sigma_2^{-2},...,\\sigma_n^{-2})\\\\\n\\mbox{and }C^{-1}&=&\\mbox{the inverse correlation matrix}\\\\\n\\frac{\\partial S}{\\partial \\theta}\n&=& \\frac{\\partial V^{-1}}{\\partial \\theta}C^{-1}V^{-1}+V^{-1}C^{-1}\\frac{\\partial V^{-1}}{\\partial \\theta}\\\\\n&=& 2V^{-1}C^{-1}\\frac{\\partial V^{-1}}{\\partial \\theta}\\\\\n\\mbox{where } \\frac{\\partial V^{-1}}{\\partial \\theta}\n&=&-2\\mbox{diag}\\left(\\frac{1}{\\sigma_1^3}\\frac{\\partial \\sigma_1}{\\partial \\theta},\\frac{1}{\\sigma_2^3}\\frac{\\partial \\sigma_2}{\\partial \\theta},...,\\frac{1}{\\sigma_n^3}\\frac{\\partial \\sigma_n}{\\partial \\theta}\\right)\\\\\n\\frac{\\partial S}{\\partial \\theta}\\frac{\\partial \\Sigma}{\\partial \\theta}\n&=&4 V^{-1}C^{-1}\\frac{\\partial V^{-1}}{\\partial \\theta}\\frac{\\partial V}{\\partial \\theta}CV\\\\\n\\frac{\\partial V^{-1}}{\\partial \\theta}\\frac{\\partial V}{\\partial \\theta}\n&=&-4\\mbox{diag}\\left(\\frac{1}{\\sigma_1^2}\\left(\\frac{\\partial \\sigma_1}{\\partial \\theta}\\right)^2,\\frac{1}{\\sigma_2^2}\\left(\\frac{\\partial \\sigma_2}{\\partial \\theta}\\right)^2,...,\\frac{1}{\\sigma_n^2}\\left(\\frac{\\partial \\sigma_n}{\\partial \\theta}\\right)^2\\right)\n\\end{eqnarray}\n\nand the prior is:\n\\begin{eqnarray}\np(\\theta)\n&=&\\sqrt{\\frac{\\partial \\mu^T }{\\partial \\theta}S \\frac{\\partial \\mu}{\\partial \\theta}\n-\\frac{1}{2}\\mbox{tr}\\left(4 V^{-1}C^{-1}\\frac{\\partial V^{-1}}{\\partial \\theta}\\frac{\\partial V}{\\partial \\theta}CV\\right)}\n\\end{eqnarray}\n\n\\subsection{Constant Variance}\n\nIf the variances are modelled as constant (but the correlations are allowed to vary) then:\n\\begin{eqnarray}\n\\Sigma&=&VCV\\\\\n\\mbox{where }V&=&\\mbox{diag}(\\sigma_1^2,\\sigma_2^2,...,\\sigma_n^2)\\\\\n\\mbox{and }C&=&\\mbox{the correlation matrix}\\\\\n\\frac{\\partial \\Sigma}{\\partial \\theta}&=& V \\frac{\\partial C}{\\partial \\theta}V\\\\\nS&=&V^{-1} C^{-1} V^{-1}\\\\\n\\mbox{where }V^{-1}&=&\\mbox{diag}(\\sigma_1^{-2},\\sigma_2^{-2},...,\\sigma_n^{-2})\\\\\n\\mbox{and }C^{-1}&=&\\mbox{the inverse correlation matrix}\\\\\n\\frac{\\partial S}{\\partial \\theta}&=&V^{-1} \\frac{\\partial C^{-1}}{\\partial \\theta}V^{-1}\\\\\n\\frac{\\partial S}{\\partial \\theta}\\frac{\\partial \\Sigma}{\\partial \\theta}\n&=&V^{-1}\\frac{\\partial C^{-1}}{\\partial \\theta}V^{-1}V\\frac{\\partial C}{\\partial \\theta}V\\\\\n&=&V^{-1}\\frac{\\partial C^{-1}}{\\partial \\theta}\\frac{\\partial C}{\\partial \\theta}V\\\\\n&=&-V^{-1}C^{-1}\\frac{\\partial C}{\\partial \\theta}C^{-1}\\frac{\\partial C}{\\partial \\theta}V\n\\end{eqnarray}\n\nand so the prior is:\n\\begin{eqnarray}\np(\\theta)\n&=&\\sqrt{\\frac{\\partial \\mu^T }{\\partial \\theta}S \\frac{\\partial \\mu}{\\partial \\theta}\n-\\frac{1}{2}\\mbox{tr}\\left(V^{-1}\\frac{\\partial C^{-1}}{\\partial \\theta}\\frac{\\partial C}{\\partial \\theta}V\\right)}\\\\\n&=&\\sqrt{\\frac{\\partial \\mu^T }{\\partial \\theta}S \\frac{\\partial \\mu}{\\partial \\theta}\n+\\frac{1}{2}\\mbox{tr}\\left(V^{-1}C^{-1}\\frac{\\partial C}{\\partial \\theta}C^{-1}\\frac{\\partial C}{\\partial \\theta}V\\right)}\n\\end{eqnarray}\n\n\\section{Multiple Parameters}\\label{s4}\n\nWe now consider the multiparameter 
case.\nWe start with two parameters and generalise to multiple parameters later.\nThe derivations are only slightly more complex than those for the single parameter case.\nStarting from equation~\\ref{eq10}, which was:\n\n\\begin{eqnarray}\n\\frac{\\partial \\ln p(x|\\theta)}{\\partial \\theta}\n&=&-\\frac{1}{2} \\frac{\\partial \\ln D}{\\partial \\theta}\n -\\frac{1}{2}x^T \\frac{\\partial S}{\\partial \\theta} x\n +x^T \\frac{\\partial S}{\\partial \\theta}\\mu\n +x^T S\\frac{\\partial \\mu}{\\partial \\theta}\n -\\frac{\\partial \\mu^T }{\\partial \\theta}S \\mu\n -\\frac{1}{2}\\mu^T\\frac{\\partial S }{\\partial \\theta}\\mu\n\\end{eqnarray}\nwe now take the derivative wrt a second parameter $\\phi$:\n\n\\begin{eqnarray}\n\\frac{\\partial^2 \\ln p(x|\\theta)}{\\partial \\theta \\partial \\phi}\n&=&-\\frac{1}{2} \\frac{\\partial^2 \\ln D}{\\partial \\theta \\partial \\phi}\n -\\frac{1}{2}x^T \\frac{\\partial^2 S}{\\partial \\theta \\partial \\phi} x\n +x^T \\frac{\\partial}{\\partial \\phi}\\left(\\frac{\\partial S}{\\partial \\theta}\\mu\\right)\n +x^T \\frac{\\partial}{\\partial \\phi}\\left(S\\frac{\\partial \\mu}{\\partial \\theta}\\right)\\\\\n&& -\\frac{\\partial}{\\partial \\phi}\\left(\\frac{\\partial \\mu^T }{\\partial \\theta}S \\mu\\right)\n -\\frac{1}{2}\\frac{\\partial}{\\partial \\phi}\\left(\\mu^T\\frac{\\partial S }{\\partial \\theta}\\mu\\right)\\\\\n&=&-\\frac{1}{2} \\frac{\\partial^2 \\ln D}{\\partial \\theta \\partial \\phi}\n -\\frac{1}{2}x^T \\frac{\\partial^2 S}{\\partial \\theta \\partial \\phi} x\n +x^T \\left(\\frac{\\partial^2 S}{\\partial \\theta \\partial \\phi}\\mu+\\frac{\\partial S}{\\partial \\theta}\\frac{\\partial \\mu}{\\partial \\phi}\\right)\n +x^T \\left(\\frac{\\partial S}{\\partial \\phi}\\frac{\\partial \\mu}{\\partial \\theta}+S\\frac{\\partial^2 \\mu}{\\partial \\theta \\partial \\phi}\\right)\\\\\n&& -\\left(\\frac{\\partial^2 \\mu^T }{\\partial \\theta \\partial \\phi}S \\mu\n +\\frac{\\partial \\mu^T }{\\partial \\theta}\\frac{\\partial S}{\\partial \\phi} \\mu\n +\\frac{\\partial \\mu^T }{\\partial \\theta}S \\frac{\\partial \\mu}{\\partial \\phi}\\right)\n -\\frac{1}{2}\\left(\\frac{\\partial \\mu^T}{\\partial \\phi}\\frac{\\partial S }{\\partial \\theta}\\mu\n +\\mu^T\\frac{\\partial^2 S }{\\partial \\theta \\partial \\phi}\\mu\n +\\mu^T\\frac{\\partial S }{\\partial \\theta}\\frac{\\partial \\mu}{\\partial \\phi}\\right)\\nonumber\\\\\n&=&-\\frac{1}{2} \\frac{\\partial^2 \\ln D}{\\partial \\theta \\partial \\phi}\n -\\frac{1}{2}x^T \\frac{\\partial^2 S}{\\partial \\theta \\partial \\phi} x\n +x^T \\frac{\\partial^2 S}{\\partial \\theta \\partial \\phi}\\mu\n +x^T \\frac{\\partial S}{\\partial \\theta}\\frac{\\partial \\mu}{\\partial \\phi}\n +x^T \\frac{\\partial S}{\\partial \\phi}\\frac{\\partial \\mu}{\\partial \\theta}\n +x^T S\\frac{\\partial^2 \\mu}{\\partial \\theta \\partial \\phi}\\\\\n&& -\\frac{\\partial^2 \\mu^T }{\\partial \\theta \\partial \\phi}S \\mu\n -\\frac{\\partial \\mu^T }{\\partial \\theta}\\frac{\\partial S}{\\partial \\phi} \\mu\n -\\frac{\\partial \\mu^T }{\\partial \\theta}S \\frac{\\partial \\mu}{\\partial \\phi}\n -\\frac{1}{2}\\frac{\\partial \\mu^T}{\\partial \\phi}\\frac{\\partial S }{\\partial \\theta}\\mu\n -\\frac{1}{2}\\mu^T\\frac{\\partial^2 S }{\\partial \\theta \\partial \\phi}\\mu\n -\\frac{1}{2}\\mu^T\\frac{\\partial S }{\\partial \\theta}\\frac{\\partial \\mu}{\\partial \\phi}\n\\end{eqnarray}\n\nTaking expectations:\n\\begin{eqnarray}\nE\\left(\\frac{\\partial^2 \\ln p(x|\\theta)}{\\partial 
\\theta \\partial \\phi}\\right)\n&=&-\\frac{1}{2} \\frac{\\partial^2 \\ln D}{\\partial \\theta \\partial \\phi}\n -\\frac{1}{2}E\\left(x^T \\frac{\\partial^2 S}{\\partial \\theta \\partial \\phi} x\\right)\n +E(x^T) \\frac{\\partial^2 S}{\\partial \\theta \\partial \\phi}\\mu\n +E(x^T) \\frac{\\partial S}{\\partial \\theta}\\frac{\\partial \\mu}{\\partial \\phi}\\\\\n&& +E(x^T) \\frac{\\partial S}{\\partial \\phi}\\frac{\\partial \\mu}{\\partial \\theta}\n +E(x^T) S\\frac{\\partial^2 \\mu}{\\partial \\theta \\partial \\phi}\n -\\frac{\\partial^2 \\mu^T }{\\partial \\theta \\partial \\phi}S \\mu\n -\\frac{\\partial \\mu^T }{\\partial \\theta}\\frac{\\partial S}{\\partial \\phi} \\mu\\\\\n&& -\\frac{\\partial \\mu^T }{\\partial \\theta}S \\frac{\\partial \\mu}{\\partial \\phi}\n -\\frac{1}{2}\\frac{\\partial \\mu^T}{\\partial \\phi}\\frac{\\partial S }{\\partial \\theta}\\mu\n -\\frac{1}{2}\\mu^T\\frac{\\partial^2 S }{\\partial \\theta \\partial \\phi}\\mu\n -\\frac{1}{2}\\mu^T\\frac{\\partial S }{\\partial \\theta}\\frac{\\partial \\mu}{\\partial \\phi}\n\\end{eqnarray}\n\nBut $E(x)=\\mu$, and, from appendix 2,\n\\begin{equation}\nE\\left(x^T \\frac{\\partial^2 S}{\\partial \\theta \\partial \\phi} x\\right)\n=\\mbox{tr}\\left(\\frac{\\partial^2 S}{\\partial \\theta \\partial \\phi}\\Sigma \\right)+\\mu^T \\frac{\\partial^2 S}{\\partial \\theta \\partial \\phi} \\mu\n\\end{equation}\n\nand so:\n\n\\begin{eqnarray}\nE\\left(\\frac{\\partial^2 \\ln p(x|\\theta)}{\\partial \\theta \\partial \\phi}\\right)\n&=&-\\frac{1}{2} \\frac{\\partial^2 \\ln D}{\\partial \\theta \\partial \\phi}\n -\\frac{1}{2}\\left(\\mbox{tr}\\left( \\frac{\\partial^2 S}{\\partial \\theta \\partial \\phi}\\Sigma\\right)+\\mu^T \\frac{\\partial^2 S}{\\partial \\theta \\partial \\phi} \\mu\\right)\n +\\mu^T \\frac{\\partial^2 S}{\\partial \\theta \\partial \\phi}\\mu\n +\\mu^T \\frac{\\partial S}{\\partial \\theta}\\frac{\\partial \\mu}{\\partial \\phi}\\\\\n&& +\\mu^T \\frac{\\partial S}{\\partial \\phi}\\frac{\\partial \\mu}{\\partial \\theta}\n +\\mu^T S\\frac{\\partial^2 \\mu}{\\partial \\theta \\partial \\phi}\n -\\frac{\\partial^2 \\mu^T }{\\partial \\theta \\partial \\phi}S \\mu\n -\\frac{\\partial \\mu^T }{\\partial \\theta}\\frac{\\partial S}{\\partial \\phi} \\mu\\\\\n&& -\\frac{\\partial \\mu^T }{\\partial \\theta}S \\frac{\\partial \\mu}{\\partial \\phi}\n -\\frac{1}{2}\\frac{\\partial \\mu^T}{\\partial \\phi}\\frac{\\partial S }{\\partial \\theta}\\mu\n -\\frac{1}{2}\\mu^T\\frac{\\partial^2 S }{\\partial \\theta \\partial \\phi}\\mu\n -\\frac{1}{2}\\mu^T\\frac{\\partial S }{\\partial \\theta}\\frac{\\partial \\mu}{\\partial \\phi}\\\\\n&=&-\\frac{1}{2} \\frac{\\partial^2 \\ln D}{\\partial \\theta \\partial \\phi}\n -\\frac{1}{2}\\mbox{tr}\\left(\\frac{\\partial^2 S}{\\partial \\theta \\partial \\phi}\\Sigma \\right)\n -\\frac{1}{2}\\mu^T \\frac{\\partial^2 S}{\\partial \\theta \\partial \\phi} \\mu\n +\\mu^T \\frac{\\partial^2 S}{\\partial \\theta \\partial \\phi}\\mu\n +\\mu^T \\frac{\\partial S}{\\partial \\theta}\\frac{\\partial \\mu}{\\partial \\phi}\\\\\n&& +\\mu^T \\frac{\\partial S}{\\partial \\phi}\\frac{\\partial \\mu}{\\partial \\theta}\n +\\mu^T S\\frac{\\partial^2 \\mu}{\\partial \\theta \\partial \\phi}\n -\\frac{\\partial^2 \\mu^T }{\\partial \\theta \\partial \\phi}S \\mu\n -\\frac{\\partial \\mu^T }{\\partial \\theta}\\frac{\\partial S}{\\partial \\phi} \\mu\\\\\n&& -\\frac{\\partial \\mu^T }{\\partial \\theta}S \\frac{\\partial \\mu}{\\partial \\phi}\n -\\frac{1}{2}\\frac{\\partial \\mu^T}{\\partial \\phi}\\frac{\\partial S }{\\partial \\theta}\\mu\n -\\frac{1}{2}\\mu^T\\frac{\\partial^2 S }{\\partial \\theta \\partial \\phi}\\mu\n -\\frac{1}{2}\\mu^T\\frac{\\partial S }{\\partial \\theta}\\frac{\\partial \\mu}{\\partial \\phi}\\\\\n&=&-\\frac{1}{2} \\left[\\frac{\\partial^2 \\ln D}{\\partial \\theta \\partial \\phi}\n +\\mbox{tr}\\left( \\frac{\\partial^2 S}{\\partial \\theta \\partial \\phi}\\Sigma\\right)\\right]\n -\\frac{\\partial \\mu^T }{\\partial \\theta}S \\frac{\\partial \\mu}{\\partial \\phi}\\label{eq4}\n\\end{eqnarray}\n
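\nNote that on setting $\\phi=\\theta$, equation~\\ref{eq4} reduces to equation~\\ref{eq2}, as it should.\n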
\nBut\n\\begin{eqnarray}\n\\frac{\\partial^2 \\ln D}{\\partial \\theta \\partial \\phi}\n&=&\\frac{\\partial}{\\partial \\theta}\\left(\\frac{\\partial \\ln D}{\\partial \\phi}\\right)\\\\\n&=&\\frac{\\partial}{\\partial \\theta}\\left(\\frac{1}{D}\\frac{\\partial D}{\\partial \\phi}\\right)\\\\\n&=&\\frac{\\partial}{\\partial \\theta}\\left(\\frac{1}{D}\\left(D\\mbox{tr}\\left(S\\frac{\\partial \\Sigma}{\\partial \\phi}\\right)\\right)\\right)\\\\\n&=&\\frac{\\partial}{\\partial \\theta}\\left(\\mbox{tr}\\left(S\\frac{\\partial \\Sigma}{\\partial \\phi}\\right)\\right)\\\\\n&=&\\mbox{tr}\\left(\\frac{\\partial}{\\partial \\theta}\\left(S\\frac{\\partial \\Sigma}{\\partial \\phi}\\right)\\right)\\\\\n&=&\\mbox{tr}\\left(\\frac{\\partial S}{\\partial \\theta}\\frac{\\partial \\Sigma}{\\partial \\phi}+S\\frac{\\partial^2 \\Sigma}{\\partial \\theta \\partial \\phi}\\right)\\\\\n&=&\\mbox{tr}\\left(\\frac{\\partial S}{\\partial \\theta}\\frac{\\partial \\Sigma}{\\partial \\phi}\\right)+\\mbox{tr}\\left( S \\frac{\\partial^2 \\Sigma}{\\partial \\theta \\partial \\phi}\\right)\n\\end{eqnarray}\n\nHowever\n\\begin{eqnarray}\n\\frac{\\partial}{\\partial \\theta}\\left(S \\Sigma\\right)&=&S \\frac{\\partial \\Sigma}{\\partial \\theta}+\\frac{\\partial S}{\\partial \\theta}\\Sigma\\\\\n\\frac{\\partial^2}{\\partial \\theta \\partial \\phi}\\left( S \\Sigma\\right)\n&=&S \\frac{\\partial^2 \\Sigma}{\\partial \\theta \\partial \\phi}\n +\\frac{\\partial S}{\\partial \\phi}\\frac{\\partial \\Sigma}{\\partial \\theta}\n +\\frac{\\partial^2 S}{\\partial \\theta \\partial \\phi}\\Sigma\n +\\frac{\\partial S}{\\partial \\theta}\\frac{\\partial \\Sigma}{\\partial \\phi}\n\\end{eqnarray}\n\nBut $S\\Sigma=I$ and so $\\frac{\\partial^2}{\\partial \\theta \\partial \\phi}( S \\Sigma)=0$, implying that\n\\begin{equation}\nS\\frac{\\partial^2 \\Sigma}{\\partial \\theta \\partial \\phi}=- \\frac{\\partial^2 S}{\\partial \\theta \\partial \\phi}\\Sigma\n-\\frac{\\partial S}{\\partial \\theta}\\frac{\\partial \\Sigma}{\\partial \\phi}\n-\\frac{\\partial S}{\\partial \\phi}\\frac{\\partial \\Sigma}{\\partial \\theta}\n\\end{equation}\n\nand so\n\\begin{eqnarray}\n\\frac{\\partial^2 \\ln D}{\\partial \\theta \\partial \\phi}\n&=&\\mbox{tr}\\left(\\frac{\\partial S}{\\partial \\theta}\\frac{\\partial \\Sigma}{\\partial \\phi}\\right)+\\mbox{tr}\\left( S \\frac{\\partial^2 \\Sigma}{\\partial \\theta \\partial \\phi}\\right)\\\\\n&=&\\mbox{tr}\\left(\\frac{\\partial S}{\\partial \\theta}\\frac{\\partial \\Sigma}{\\partial \\phi}\\right)\n+\\mbox{tr}\\left(- \\frac{\\partial^2 S}{\\partial \\theta \\partial \\phi}\\Sigma\n -\\frac{\\partial S}{\\partial \\theta}\\frac{\\partial \\Sigma}{\\partial \\phi}\n -\\frac{\\partial S}{\\partial \\phi}\\frac{\\partial \\Sigma}{\\partial \\theta}\\right)\\\\\n&=&\\mbox{tr}\\left(\\frac{\\partial S}{\\partial \\theta}\\frac{\\partial \\Sigma}{\\partial \\phi}\\right)\n+\\mbox{tr}\\left(- \\frac{\\partial^2 S}{\\partial \\theta \\partial 
\\phi}\\Sigma\\right)\n-\\mbox{tr}\\left(\\frac{\\partial S}{\\partial \\theta}\\frac{\\partial \\Sigma}{\\partial \\phi}\\right)\n-\\mbox{tr}\\left(\\frac{\\partial S}{\\partial \\phi}\\frac{\\partial \\Sigma}{\\partial \\theta}\\right)\\\\\n&=&\\mbox{tr}\\left(- \\frac{\\partial^2 S}{\\partial \\theta \\partial \\phi}\\Sigma\\right)\n-\\mbox{tr}\\left(\\frac{\\partial S}{\\partial \\phi}\\frac{\\partial \\Sigma}{\\partial \\theta}\\right)\n\\end{eqnarray}\n\nGiving:\n\\begin{eqnarray}\n\\frac{\\partial^2 \\ln D}{\\partial \\theta \\partial \\phi}+\\mbox{tr}\\left(\\frac{\\partial^2 S}{\\partial \\theta \\partial \\phi}\\Sigma \\right)\n&=&\\mbox{tr}\\left(- \\frac{\\partial^2 S}{\\partial \\theta \\partial \\phi}\\Sigma\\right)\n-\\mbox{tr}\\left(\\frac{\\partial S}{\\partial \\phi}\\frac{\\partial \\Sigma}{\\partial \\theta}\\right)\n+\\mbox{tr}\\left(\\frac{\\partial^2 S}{\\partial \\theta \\partial \\phi}\\Sigma \\right)\\\\\n&=&-\\mbox{tr}\\left(\\frac{\\partial S}{\\partial \\phi}\\frac{\\partial \\Sigma}{\\partial \\theta}\\right)\\label{eq5}\n\\end{eqnarray}\n\nSubstituting expression~\\ref{eq5} into equation~\\ref{eq4} gives:\n\n\\begin{eqnarray}\nE\\left(\\frac{\\partial^2 \\ln p(x|\\theta)}{\\partial \\theta \\partial \\phi}\\right)\n&=&-\\frac{1}{2} \\left[\\frac{\\partial^2 \\ln D}{\\partial \\theta \\partial \\phi}\n +\\mbox{tr}\\left( \\frac{\\partial^2 S}{\\partial \\theta \\partial \\phi}\\Sigma\\right)\\right]\n -\\frac{\\partial \\mu^T }{\\partial \\theta}S \\frac{\\partial \\mu}{\\partial \\phi}\\\\\n&=&\\frac{1}{2}\\mbox{tr}\\left(\\frac{\\partial S}{\\partial \\phi}\\frac{\\partial \\Sigma}{\\partial \\theta}\\right)\n -\\frac{\\partial \\mu^T }{\\partial \\theta}S \\frac{\\partial \\mu}{\\partial \\phi}\\\\\n&=&-\\frac{1}{2}\\mbox{tr}\\left(S\\frac{\\partial \\Sigma}{\\partial \\phi}S\\frac{\\partial \\Sigma}{\\partial \\theta}\\right)\n -\\frac{\\partial \\mu^T }{\\partial \\theta}S \\frac{\\partial \\mu}{\\partial \\phi}\n\\end{eqnarray}\n\nWe now generalize from two to multiple parameters. We change the notation so that $\\theta$ is the vector\nof all parameters. 
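Writing $\\theta=(\\theta_1,...,\\theta_m)$ for the $m$ parameters, the matrix with entries $-E\\left(\\frac{\\partial^2 \\ln p(x|\\theta)}{\\partial \\theta_j \\partial \\theta_k}\\right)$, computed above, is the Fisher information matrix. 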
The prior is then:\n\\begin{eqnarray}\np(\\theta)\n&=&\\sqrt{\\mbox{det}\\left(-E\\left(\\frac{\\partial^2 \\ln p(x|\\theta)}{\\partial \\theta_j \\partial \\theta_k}\\right)\\right)}\\\\\n&=&\\sqrt{\\mbox{det}\\left(\\frac{\\partial \\mu^T }{\\partial \\theta_j}S \\frac{\\partial \\mu}{\\partial \\theta_k}\n-\\frac{1}{2}\\mbox{tr}\\left(\\frac{\\partial S}{\\partial \\theta_j}\\frac{\\partial \\Sigma}{\\partial \\theta_k}\\right)\\right)}\\label{eq101}\\\\\n&=&\\sqrt{\\mbox{det}\\left(\\frac{\\partial \\mu^T }{\\partial \\theta_j}S \\frac{\\partial \\mu}{\\partial \\theta_k}\n+\\frac{1}{2}\\mbox{tr}\\left(S\\frac{\\partial \\Sigma}{\\partial \\theta_j}S\\frac{\\partial \\Sigma}{\\partial \\theta_k}\\right)\\right)}\\label{eq101b}\n\\end{eqnarray}\n\nIf there are $n$ observations and $m$ parameters then:\n\\begin{itemize}\n \\item $\\mu$ is an $n$ by $1$ vector\n \\item $\\frac{\\partial \\mu}{\\partial \\theta_j}$ is an $n$ by $1$ vector\n \\item $\\mu^T$ is a $1$ by $n$ vector\n \\item $\\frac{\\partial \\mu^T}{\\partial \\theta_k}$ is a $1$ by $n$ vector\n \\item $S$ is an $n$ by $n$ matrix\n \\item $\\frac{\\partial \\mu^T }{\\partial \\theta_j}S \\frac{\\partial \\mu}{\\partial \\theta_k}$ is a scalar\n (which is the $(j,k)$th element of a matrix)\n \\item $\\frac{\\partial S}{\\partial \\theta_j}$ is an $n$ by $n$ matrix\n \\item $\\frac{\\partial \\Sigma}{\\partial \\theta_k}$ is an $n$ by $n$ matrix\n \\item $\\frac{\\partial S}{\\partial \\theta_j}\\frac{\\partial \\Sigma}{\\partial \\theta_k}$ is an $n$ by $n$ matrix\n \\item tr$\\left(\\frac{\\partial S}{\\partial \\theta_j}\\frac{\\partial \\Sigma}{\\partial \\theta_k}\\right)$ is a scalar\n (which is the $(j,k)$th element of a matrix)\n\\end{itemize}\n\nWe now again consider various special cases of this general formula, starting with the two cases described in~\\citet{jp1} and~\\citet{jp2}.\n\n\\subsection{Independence}\n\nIf the observations are modelled as independent then:\n\\begin{eqnarray}\n\\Sigma\n&=&\\mbox{diag}\\left(\\sigma_1^2,\\sigma_2^2,...,\\sigma_n^2\\right)\\\\\nS\n&=&\\mbox{diag}\\left(\\sigma_1^{-2},\\sigma_2^{-2},...,\\sigma_n^{-2}\\right)\\\\\n\\frac{\\partial \\mu^T }{\\partial \\theta_j}S \\frac{\\partial \\mu}{\\partial \\theta_k}\n&=&\\sum_{i=1}^{n}\\frac{1}{\\sigma_i^2}\\left(\\frac{\\partial \\mu_i}{\\partial \\theta_j}\\right)\\left(\\frac{\\partial \\mu_i}{\\partial \\theta_k}\\right)\\\\\n\\frac{\\partial \\Sigma}{\\partial \\theta_k}\n&=&\\mbox{diag}\\left(2\\sigma_1 \\frac{\\partial \\sigma_1}{\\partial \\theta_k},2\\sigma_2\\frac{\\partial \\sigma_2}{\\partial \\theta_k},...,2\\sigma_n\\frac{\\partial \\sigma_n}{\\partial \\theta_k}\\right)\\\\\n\\frac{\\partial S}{\\partial \\theta_j}\n&=&\\mbox{diag}\\left(-2\\sigma_1^{-3} \\frac{\\partial \\sigma_1}{\\partial \\theta_j},-2\\sigma_2^{-3}\\frac{\\partial \\sigma_2}{\\partial \\theta_j},...,-2\\sigma_n^{-3}\\frac{\\partial \\sigma_n}{\\partial \\theta_j}\\right)\\\\\n\\frac{\\partial S}{\\partial \\theta_j}\\frac{\\partial \\Sigma}{\\partial \\theta_k}\n&=&\\mbox{diag}\\left(-4\\sigma_1^{-2} \\frac{\\partial \\sigma_1}{\\partial \\theta_j}\\frac{\\partial \\sigma_1}{\\partial \\theta_k},-4\\sigma_2^{-2}\\frac{\\partial \\sigma_2}{\\partial \\theta_j}\\frac{\\partial \\sigma_2}{\\partial \\theta_k},...,-4\\sigma_n^{-2}\\frac{\\partial \\sigma_n}{\\partial \\theta_j}\\frac{\\partial \\sigma_n}{\\partial \\theta_k}\\right)\\\\\n&=&-4\\mbox{diag}\\left(\\frac{1}{\\sigma_1^2} \\frac{\\partial \\sigma_1}{\\partial \\theta_j}\\frac{\\partial \\sigma_1}{\\partial \\theta_k},\\frac{1}{\\sigma_2^2}\\frac{\\partial \\sigma_2}{\\partial \\theta_j}\\frac{\\partial \\sigma_2}{\\partial \\theta_k},...,\\frac{1}{\\sigma_n^2}\\frac{\\partial \\sigma_n}{\\partial \\theta_j}\\frac{\\partial \\sigma_n}{\\partial \\theta_k}\\right)\\\\\n-\\frac{1}{2}\\mbox{tr}\\left(\\frac{\\partial S}{\\partial \\theta_j}\\frac{\\partial \\Sigma}{\\partial \\theta_k}\\right)\n&=&2\\sum_{i=1}^{n}\\frac{1}{\\sigma_i^2}\\frac{\\partial \\sigma_i}{\\partial \\theta_j}\\frac{\\partial \\sigma_i}{\\partial \\theta_k}\n\\end{eqnarray}\n\nand so\n\\begin{eqnarray}\np(\\theta)&=&\\sqrt{\\mbox{det}\\left(\\sum_{i=1}^{n}\\frac{1}{\\sigma_i^2}\\left(\\frac{\\partial \\mu_i}{\\partial \\theta_j}\\right)\\left(\\frac{\\partial \\mu_i}{\\partial \\theta_k}\\right)\n +2\\sum_{i=1}^{n}\\frac{1}{\\sigma_i^2}\\left(\\frac{\\partial \\sigma_i}{\\partial \\theta_j}\\right)\\left(\\frac{\\partial \\sigma_i}{\\partial \\theta_k}\\right)\\right)}\n\\end{eqnarray}\n\nwhich agrees with equation 36 in~\\citet{jp1}.\n
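\nAs an illustration, take a single observation from a normal distribution with mean $\\mu=\\theta_1$ and standard deviation $\\sigma=\\theta_2$ (so $n=1$ and $m=2$). The matrix inside the determinant is then $\\mbox{diag}\\left(\\frac{1}{\\sigma^2},\\frac{2}{\\sigma^2}\\right)$, giving $p(\\theta)\\propto\\sigma^{-2}$, which is the standard Jeffreys' Prior for the normal location-scale family.\n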
\n\\subsection{Constant Covariance}\n\nIf the covariance $\\Sigma$ is constant then equation~\\ref{eq101} reduces immediately to\n\\begin{eqnarray}\np(\\theta)&=&\\sqrt{\\mbox{det}\\left(\\frac{\\partial \\mu^T }{\\partial \\theta_j}S \\frac{\\partial \\mu}{\\partial \\theta_k}\\right)}\n\\end{eqnarray}\n\nwhich agrees with equation 28 in~\\citet{jp2}.\n\n\\subsection{Constant Correlation}\n\nIf the correlations are modelled as constant (but the variances are allowed to vary) then:\n\n\\begin{eqnarray}\n\\Sigma&=&VCV\\\\\n\\mbox{where }V&=&\\mbox{diag}(\\sigma_1^2,\\sigma_2^2,...,\\sigma_n^2)\\\\\n\\mbox{and }C&=&\\mbox{the correlation matrix}\\\\\n\\frac{\\partial \\Sigma}{\\partial \\theta_k}\n&=& \\frac{\\partial V}{\\partial \\theta_k}CV+VC\\frac{\\partial V}{\\partial \\theta_k}\\\\\n&=& 2\\frac{\\partial V}{\\partial \\theta_k}CV\\\\\n\\mbox{where } \\frac{\\partial V}{\\partial \\theta_k}\n&=&2\\mbox{diag}(\\sigma_1\\frac{\\partial \\sigma_1}{\\partial \\theta_k},\\sigma_2\\frac{\\partial \\sigma_2}{\\partial \\theta_k},...,\\sigma_n\\frac{\\partial \\sigma_n}{\\partial \\theta_k})\\\\\nS&=&V^{-1} C^{-1} V^{-1}\\\\\n\\mbox{where }V^{-1}&=&\\mbox{diag}(\\sigma_1^{-2},\\sigma_2^{-2},...,\\sigma_n^{-2})\\\\\n\\mbox{and }C^{-1}&=&\\mbox{the inverse correlation matrix}\\\\\n\\frac{\\partial S}{\\partial \\theta_j}\n&=& \\frac{\\partial V^{-1}}{\\partial \\theta_j}C^{-1}V^{-1}+V^{-1}C^{-1}\\frac{\\partial V^{-1}}{\\partial \\theta_j}\\\\\n&=& 2V^{-1}C^{-1}\\frac{\\partial V^{-1}}{\\partial \\theta_j}\\\\\n\\mbox{where } \\frac{\\partial V^{-1}}{\\partial \\theta_j}\n&=&-2\\mbox{diag}\\left(\\frac{1}{\\sigma_1^3}\\frac{\\partial \\sigma_1}{\\partial \\theta_j},\\frac{1}{\\sigma_2^3}\\frac{\\partial \\sigma_2}{\\partial \\theta_j},...,\\frac{1}{\\sigma_n^3}\\frac{\\partial \\sigma_n}{\\partial \\theta_j}\\right)\\\\\n\\frac{\\partial V^{-1}}{\\partial \\theta_j}\\frac{\\partial V}{\\partial \\theta_k}\n&=&-4\\mbox{diag}\\left(\\frac{1}{\\sigma_1^2}\\frac{\\partial \\sigma_1}{\\partial \\theta_j}\\frac{\\partial \\sigma_1}{\\partial \\theta_k},\\frac{1}{\\sigma_2^2}\\frac{\\partial \\sigma_2}{\\partial \\theta_j}\\frac{\\partial \\sigma_2}{\\partial \\theta_k},...,\\frac{1}{\\sigma_n^2}\\frac{\\partial \\sigma_n}{\\partial \\theta_j}\\frac{\\partial \\sigma_n}{\\partial \\theta_k}\\right)\\\\\n\\frac{\\partial S}{\\partial \\theta_j}\\frac{\\partial \\Sigma}{\\partial \\theta_k}\n&=&4 V^{-1}C^{-1}\\frac{\\partial V^{-1}}{\\partial \\theta_j}\\frac{\\partial V}{\\partial 
\\theta_k}CV\n\\end{eqnarray}\n\nand the prior is:\n\\begin{eqnarray}\np(\\theta)\n&=&\\sqrt{\\mbox{det}\\left(\\frac{\\partial \\mu^T }{\\partial \\theta_j}S \\frac{\\partial \\mu}{\\partial \\theta_k}\n-\\frac{1}{2}\\mbox{tr}\\left(4 V^{-1}C^{-1}\\frac{\\partial V^{-1}}{\\partial \\theta_j}\\frac{\\partial V}{\\partial \\theta_k}CV\\right)\\right)}\n\\end{eqnarray}\n\n\\subsection{Constant Variance}\n\nIf the variances are modelled as constant (but the correlations are allowed to vary) then:\n\\begin{eqnarray}\n\\Sigma&=&VCV\\\\\n\\mbox{where }V&=&\\mbox{diag}(\\sigma_1^2,\\sigma_2^2,...,\\sigma_n^2)\\\\\n\\mbox{and }C&=&\\mbox{the correlation matrix}\\\\\n\\frac{\\partial \\Sigma}{\\partial \\theta_k}&=& V \\frac{\\partial C}{\\partial \\theta_k}V\\\\\nS&=&V^{-1} C^{-1} V^{-1}\\\\\n\\mbox{where }V^{-1}&=&\\mbox{diag}(\\sigma_1^{-2},\\sigma_2^{-2},...,\\sigma_n^{-2})\\\\\n\\mbox{and }C^{-1}&=&\\mbox{the inverse correlation matrix}\\\\\n\\frac{\\partial S}{\\partial \\theta_j}&=&V^{-1} \\frac{\\partial C^{-1}}{\\partial \\theta_j}V^{-1}\\\\\n\\frac{\\partial S}{\\partial \\theta_j}\\frac{\\partial \\Sigma}{\\partial \\theta_k}\n&=&V^{-1}\\frac{\\partial C^{-1}}{\\partial \\theta_j}V^{-1}V\\frac{\\partial C}{\\partial \\theta_k}V\\\\\n&=&V^{-1}\\frac{\\partial C^{-1}}{\\partial \\theta_j}\\frac{\\partial C}{\\partial \\theta_k}V\\\\\n&=&-V^{-1}C^{-1}\\frac{\\partial C}{\\partial \\theta_j}C^{-1}\\frac{\\partial C}{\\partial \\theta_k}V\n\\end{eqnarray}\n\nand the prior is:\n\\begin{eqnarray}\np(\\theta)\n&=&\\sqrt{\\mbox{det}\\left(\\frac{\\partial \\mu^T }{\\partial \\theta_j}S \\frac{\\partial \\mu}{\\partial \\theta_k}\n-\\frac{1}{2}\\mbox{tr}\\left(V^{-1}\\frac{\\partial C^{-1}}{\\partial \\theta_j}\\frac{\\partial C}{\\partial \\theta_k}V\\right)\\right)}\\\\\n&=&\\sqrt{\\mbox{det}\\left(\\frac{\\partial \\mu^T }{\\partial \\theta_j}S \\frac{\\partial \\mu}{\\partial \\theta_k}\n+\\frac{1}{2}\\mbox{tr}\\left(V^{-1}C^{-1}\\frac{\\partial C}{\\partial \\theta_j}C^{-1}\\frac{\\partial C}{\\partial \\theta_k}V\\right)\\right)}\n\\end{eqnarray}\n\n\\section{Summary}\\label{s5}\n\nClimate models are statistical models, in that they produce probabilistic predictions\n(when run as initial condition ensembles) and have certain parameters that can only\nbe determined by comparison of model results with observations.\nAs a result, the Bayesian framework, in which probabilistic predictions made from models with different parameter values are\ncombined together to make a single best probabilistic prediction, can be applied.\nWithin that framework one has to specify a prior, and one must choose between a prior based on intuition and a prior\nbased on a rule. The former is known as subjective Bayesian statistics, and the latter, objective Bayesian statistics.\nThe authors are pursuing a research programme that is exploring methods by which objective Bayesian statistics can be applied in\nclimate modelling. In this article we have discussed the application of the most standard objective prior, known as\nJeffreys' Prior.\n\nClimate models are complex and the relationship between the parameters\nand the predicted distributions is also complex. 
However, the form of the predicted distributions themselves can often\nbe rather simple, and for many groups of variables a multivariate normal may be a good approximation.\nWe have shown that, by making this approximation, the calculation of Jeffreys' Prior can be reduced to\ndifferentiating the parameters of the multivariate normal by the parameters of the underlying climate model.\nWe have derived expressions for Jeffreys' Prior in this situation.\n\nThe results from our two previous articles on this topic (\\citet{jp1} and~\\citet{jp2}) are special cases of the general results shown here.\n\nIn all this work we have expressed Jeffreys' Prior in terms of the true, rather than the estimated, parameters of the distributions from climate model predictions. In other words, we have assumed infinite rather than finite size initial condition ensembles.\nA further challenge is to rederive the expressions given above but incorporating estimation uncertainty.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe physical motivation for the development of supermanifolds stems from quantum field theory in its functional integral formulation, which describes fermionic particles by anticommuting fields. In the 1970s, pioneering work by Berezin strongly suggested that commuting and anticommuting variables should be treated on an equal footing. Several theories of supermanifolds have been advocated, among which the definition of Berezin, Kostant, and Leites is one of the most commonly used in mathematics. \n\nOur motivation for the study of supermanifolds comes from certain nonlinear $\\sigma$-models with supersymmetry. Indeed, it is known from the work of the third named author \\cite{zirnbauer-rsss} that Riemannian symmetric superspaces occur naturally in the large $N$ limit of certain random matrix ensembles, which correspond to Cartan's ten infinite series of symmetric spaces. In spite of their importance in physics, the mathematical theory of these superspaces is virtually non-existent. (But compare \\cite{duflo-petracci-formalss,zirnbauer-superbos,goertsches-diss}.) We intend to initiate the systematic study of Riemannian symmetric superspaces, in order to obtain a good understanding of, in particular, the invariant differential operators, the spherical functions, and the related harmonic analysis. \nThe present work lays an important foundation for this endeavour: the generalisation of Chevalley's restriction theorem to the super setting. \n\nTo describe our results in detail, let us make our assumptions more precise. Let $\\mathfrak g$ be a complex Lie superalgebra with even centre such that $\\mathfrak g_0$ is reductive in $\\mathfrak g$ and $\\mathfrak g$ carries an even invariant supersymmetric form. Let $\\theta$ be an involutive automorphism of $\\mathfrak g$, and denote by $\\mathfrak g=\\mathfrak k\\oplus\\mathfrak p$ the decomposition into $\\theta$-eigenspaces. We say that $(\\mathfrak g,\\mathfrak k)$ is a \\emph{reductive symmetric superpair}, and it is of \\emph{even type} if there exists an even Cartan subspace $\\mathfrak a\\subset\\mathfrak p_0$. \n\nAssume that $(\\mathfrak g,\\mathfrak k)$ is a reductive symmetric superpair of even type. Let $\\bar\\Sigma_1^+$ denote the set of positive roots $\\lambda$ of $\\mathfrak g_1:\\mathfrak a$ such that neither $\\lambda$ nor $2\\lambda$ is a root of $\\mathfrak g_0:\\mathfrak a$. 
To each $\\lambda\\in\\bar\\Sigma_1^+$, one associates a set $\\mathcal R_\\lambda$ of differential operators with rational coefficients on $\\mathfrak a$. \n\nOur main results are as follows. \n\n\\begin{Th*}[A]\n\tLet $I(\\mathfrak a^*)$ be the image of the restriction map $S(\\mathfrak p^*)^{\\mathfrak k}\\to S(\\mathfrak a^*)$ (which is injective). Then $I(\\mathfrak a^*)$ is the set of $W$-invariant polynomials on $\\mathfrak a$ which lie in the common domain of all operators in $\\mathcal R_\\lambda$, $\\lambda\\in\\bar\\Sigma_1^+$. Here, $W$ is the Weyl group of $\\mathfrak g_0:\\mathfrak a$.\n\\end{Th*}\n\nFor $\\lambda\\in\\bar\\Sigma_1^+$, let $A_\\lambda\\in\\mathfrak a$ be the corresponding coroot, and denote by $\\partial(A_\\lambda)$ the directional derivative operator in the direction of $A_\\lambda$. Then the image $I(\\mathfrak a^*)$ can be characterised in more explicit terms, as follows.\n\n\\begin{Th*}[B]\n\tWe have $I(\\mathfrak a^*)=\\bigcap_{\\lambda\\in\\bar\\Sigma_1^+}S(\\mathfrak a^*)^W\\cap I_\\lambda$ where\n\t\\[\n\t\tI_\\lambda=\\textstyle\\bigcap_{j=1}^{\\frac12m_{1,\\lambda}}\\dom\\lambda^{-j}\\partial(A_\\lambda)^j\\mathtxt{if}\\lambda(A_\\lambda)=0\\ ,\n\t\\]\n\tand if $\\lambda(A_\\lambda)\\neq0$, then $I_\\lambda$ consists of those $p\\in\\cplxs[\\mathfrak a]$ such that \n\t\\[\n\t\t\\partial(A_\\lambda)^kp|_{\\ker\\lambda}=0\\mathfa[odd integers] k\\ ,\\ 1\\sle k\\sle m_{1,\\lambda}-1\\ .\n\t\\]\n\tHere, $m_{1,\\lambda}$ denotes the multiplicity of $\\lambda$ in $\\mathfrak g_1$ (and is an even integer).\n\\end{Th*}\n\nIf the symmetric pair $(\\mathfrak g,\\mathfrak k)$ is of \\emph{group type}, \\emph{i.e.}~$\\mathfrak g=\\mathfrak k\\oplus\\mathfrak k$ with the flip involution, then for all $\\lambda\\in\\bar\\Sigma_1^+$, $\\lambda(A_\\lambda)=0$, and the multiplicity $m_{1,\\lambda}=2$. In this case, Theorem (B) reduces to $I(\\mathfrak a^*)=\\bigcap_{\\lambda\\in\\bar\\Sigma_1^+}S(\\mathfrak a^*)^W\\cap\\dom\\lambda^{-1}\\partial(A_\\lambda)$. The situation where $\\lambda(A_\\lambda)\\neq0$ for some $\\lambda\\in\\bar\\Sigma_1^+$ occurs if and only if $\\mathfrak g$ contains symmetric subalgebras $\\mathfrak s\\cong C(2)=\\mathfrak{osp}(2|2)$ where $\\mathfrak s_0\\cap\\mathfrak k=\\sll(2,\\cplxs)$. This is the case for $\\mathfrak g=C(q+1)$ with a special involution, and in this case, the invariant algebra $I(\\mathfrak a^*)$ defines the singular curve $z^{2q+1}=w^2$ (\\thmref{Cor}{singular}). \n\nLet us place our result in the context of the literature. The Theorems (A) and (B) apply to the case of classical Lie superalgebras with non-degenerate invariant even form (equivalently, finite-dimensional contragredient Lie superalgebras), considered as symmetric superspaces of group type. In this case, the result is due to Sergeev \\cite{sergeev-invpol}, Kac \\cite{kac-laplace}, and Gorelik \\cite{gorelik-kacconstruction}, and we simply furnish a new (and elementary) proof. (The results of Sergeev are also valid for basic Lie superalgebras which are not contragredient.) For some particular cases, there are earlier results by Berezin \\cite{berezin}.\n\nSergeev's original proof involves case-by-case calculations. The proof by Gorelik---which carries out in detail ideas due to Kac in the context of Kac--Moody algebras---is classification-free, and uses so-called Shapovalov determinants. 
Moreover, the result of Kac and Gorelik actually characterises the image of the Harish-Chandra homomorphism rather than the image of the restriction map on the symmetric algebra, and is therefore more fundamental than our result.\n\nStill in the case of symmetric superpairs of group type, Kac \\cite{kac-typical} and Santos \\cite{santos-zuckerman} describe the image of the restriction morphism in terms of supercharacters of certain (cohomologically) induced modules (instead of a characterisation in terms of a system of differential equations). This approach cannot carry over to the case of symmetric pairs, as is known in the even case from the work of Helgason \\cite{helgason-fundamental}. \n\nOur result also applies in the context of Riemannian symmetric superspaces, where one has an even non-degenerate $\\mathcal G$-invariant supersymmetric form on $\\mathcal G\/\\mathcal K$ whose restriction to the base $G\/K$ is Riemannian. In this setting, it is to our knowledge completely new and not covered by earlier results. We point out that a particular case was proved in the PhD thesis of Fuchs \\cite{fuchs-diss}, in the framework of the `supermatrix model', using a technique due to Berezin. \n\nIn the context of harmonic analysis of even Riemannian symmetric spaces $G\/K$, Chevalley's restriction theorem enters crucially, since it determines the image of the Harish-Chandra homomorphism, and thereby, the spectrum of the algebra $\\mathbb D(G\/K)$ of $G$-invariant differential operators on $G\/K$. It is an important ingredient in the proof of Harish-Chandra's integral formula for the spherical functions. In a series of forthcoming papers, we will apply our generalisation of Chevalley's restriction theorem to obtain analogous results in the context of Riemannian symmetric superspaces. \n\n\\medskip\\noindent\nLet us give a brief overview of the contents of our paper. We review some basic facts on root decompositions in sections 2.1-2.2. In section 2.3, we introduce our main tool in the proof of Theorem (A), a certain twisted action $u_z$ on the supersymmetric algebra $S(\\mathfrak p)$. In section 3.1, we define the `radial component' map $\\gamma_z$ via the twisted action $u_z$. The proofs of Theorems (A) and (B) are contained in sections 3.2 and 3.3, respectively. The former comes down to a study of the singularities of $\\gamma_z$ as a function of the semi-simple $z\\in\\mathfrak p_0$, whereas the latter consists in an elementary and explicit discussion of the radial components of certain differential operators. In sections 4.1 and 4.2, we discuss the generality of the `even type' condition, and study an extreme example in some detail. \n\nThe first named author wishes to thank C.~Torossian (Paris VII) for his enlightening remarks on a talk given on an earlier version of this paper. The first and second named author wish to thank M.~Duflo (Paris VII) for helpful discussions, comments, and references. The second named author wishes to thank K.~Nishiyama (Kyoto) for several discussions on the topic. Last, not least, we wish to thank an anonymous but diligent referee whose suggestions greatly improved the presentation of our main technical device. 
\n\nThis research was partly funded by the IRTG ``Geometry and Analysis of Symmetries'', supported by Deutsche Forschungsgemeinschaft (DFG), Minist\\`ere de l'\\'Education Nationale (MENESR), and Deutsch-Franz\\\"osische Hochschule (DFH-UFA), and by the SFB\/Transregio 12 ``Symmetry and Universality in Mesoscopic Systems'', supported by Deutsche Forschungsgemeinschaft (DFG).\n\n\\section{Some basic facts and definitions}\n\n\\begin{Par*}\n\tIn this section, we mostly collect some basic facts concerning (restricted) root decompositions of Lie superalgebras, and the (super-) symmetric algebra, along with some definitions which we find useful to formulate our main results. As general references for matters super, we refer the reader to \\cite{kostant-supergeom,deligne-morgan-susy,kac-liesuperalgs,scheunert-liesuperalgs}.\n\\end{Par*}\n\n\\subsection{Roots of a basic quadratic Lie superalgebra}\n\n\\begin{Def}\n\tLet $\\mathfrak g=\\mathfrak g_0\\oplus\\mathfrak g_1$ be a Lie superalgebra over $\\cplxs$, and let $b$ be a bilinear form on $\\mathfrak g$. Recall that $b$ is \\emph{supersymmetric} if $b(u,v)=(-1)^{\\Abs0u\\Abs0v}b(v,u)$ for all homogeneous $u,v$. We shall call $(\\mathfrak g,b)$ \\emph{quadratic} if $b$ is a non-degenerate, $\\mathfrak g$-invariant, even and supersymmetric form on $\\mathfrak g$. We shall say that $\\mathfrak g$ is \\emph{basic} if $\\mathfrak g_0$ is reductive in $\\mathfrak g$ (\\emph{i.e.}~$\\mathfrak g$ is a semi-simple $\\mathfrak g_0$-module) and $\\mathfrak z(\\mathfrak g)\\subset\\mathfrak g_0$ where $\\mathfrak z(\\mathfrak g)$ denotes the centre of $\\mathfrak g$.\n\\end{Def}\n\n\\begin{Par}\n\tLet $(\\mathfrak g,b)$ be a basic quadratic Lie superalgebra, and let $\\mathfrak b$ be a Cartan subalgebra of $\\mathfrak g_0$. \n\t\n\tAs usual \\cite[Chapter II, \\S~4.6]{scheunert-liesuperalgs}, we define\n \\[\n V^\\alpha=\\Set1{x\\in V}{\\exists\\,n\\in\\nats\\,:\\,(h-\\alpha(h))^n(x)=0\\ \\textfor all h\\in\\mathfrak b}\\quad,\\quad\\alpha\\in\\mathfrak b^*\n \\]\n for any $\\mathfrak b$-module $V$. Further, the sets of even resp.~odd roots for $\\mathfrak b$ are \n \\[\n \\Delta_0(\\mathfrak g:\\mathfrak b)\n =\\Set1{\\alpha\\in\\mathfrak b^*\\setminus0}{\\mathfrak g_0^\\alpha\\neq0}\n \\mathtxt\\AND\\Delta_1(\\mathfrak g:\\mathfrak b)\n =\\Set1{\\alpha\\in\\mathfrak b^*}{\\mathfrak g_1^\\alpha\\neq0}\\ .\n \\]\n We also write $\\Delta_j=\\Delta_j(\\mathfrak g:\\mathfrak b)$. Let $\\Delta=\\Delta(\\mathfrak g:\\mathfrak b)=\\Delta_0\\cup\\Delta_1$. The elements of $\\Delta$ are called \\emph{roots}. We have \n \\[\n \\mathfrak g=\\mathfrak b\\oplus\\textstyle\\bigoplus_{\\alpha\\in\\Delta}\\mathfrak g^\\alpha\n =\\mathfrak b\\oplus\\bigoplus_{\\alpha\\in\\Delta_0}\\mathfrak g_0^\\alpha\n \t\\oplus\\bigoplus_{\\alpha\\in\\Delta_1}\\mathfrak g_1^\\alpha\\ .\n \\]\n It is obvious that $\\Delta_0=\\Delta(\\mathfrak g_0:\\mathfrak b)$, so in particular, it is a reduced abstract root system in its real linear span. Also, since $\\mathfrak g_0$ is reductive in $\\mathfrak g$, the root spaces $\\mathfrak g_i^\\alpha$ are the joint eigenspaces of $\\ad h$, $h\\in\\mathfrak b$ (and not only generalised ones).\n \n We collect the basic statements about $\\mathfrak b$-roots. 
The results are known (\\emph{e.g.}~\\cite{scheunert-liesuperalgs,benayadi-root}), so we omit their proofs.\n\\end{Par}\n\n\\begin{Prop}[superroot]\nLet $\\mathfrak g$ be a basic quadratic Lie superalgebra with invariant form $b$, and $\\mathfrak b$ a Cartan subalgebra of $\\mathfrak g_0$.\n\\begin{enumerate}\n\\item For $\\alpha,\\beta\\in\\Delta\\cup0$, we have $b(\\mathfrak g_j^\\alpha,\\mathfrak g_k^\\beta)=0$ unless $j=k$ and $\\alpha=-\\beta$.\n\\item The form $b$ induces a non-degenerate pairing $\\mathfrak g_j^\\alpha\\times\\mathfrak g_j^{-\\alpha}\\to\\cplxs$. In particular, we have $\\dim\\mathfrak g_j^\\alpha=\\dim\\mathfrak g_j^{-\\alpha}$ and $\\Delta_j=-\\Delta_j$ for $j\\in\\ints\/2\\ints$. \n\\item The form $b$ is non-degenerate on $\\mathfrak b$, so for any $\\lambda\\in\\mathfrak b^*$, there exists a unique $h_\\lambda\\in\\mathfrak b$ such that $b(h_\\lambda,h)=\\lambda(h)$ for all $h\\in\\mathfrak b$. \n\\item If $\\alpha(h_\\alpha)\\neq0$, $\\alpha\\in\\Delta_1$, then $2\\alpha\\in\\Delta_0$. In particular, $\\Delta_0\\cap\\Delta_1=\\vvoid$. \n\\item We have $\\mathfrak g_1^0=\\mathfrak z_1(\\mathfrak g)=\\{x\\in\\mathfrak g_1\\mid[x,\\mathfrak g]=0\\}=0$, so $0\\not\\in\\Delta_1$.\n\\item All root spaces $\\mathfrak g^\\alpha$, $\\alpha\\in\\Delta$, $\\alpha(h_\\alpha)\\neq0$, are one-dimensional. \n\\end{enumerate}\n\\end{Prop}\n\n\\subsection{Restricted roots of a reductive symmetric superpair}\n\n\\begin{Def}\nLet $(\\mathfrak g,b)$ be a complex quadratic Lie superalgebra, and $\\theta:\\mathfrak g\\to\\mathfrak g$ an involutive automorphism leaving the form $b$ invariant. If $\\mathfrak g=\\mathfrak k\\oplus\\mathfrak p$ is the $\\theta$-eigenspace decomposition, then we shall call $(\\mathfrak g,\\mathfrak k)$ a \\emph{symmetric superpair}. We shall say that $(\\mathfrak g,\\mathfrak k)$ is \\emph{reductive} if, moreover, $\\mathfrak g$ is basic.\n\nNote that for any symmetric superpair $(\\mathfrak g,\\mathfrak k)$, $\\mathfrak k$ and $\\mathfrak p$ are $b$-orthogonal and non-degenerate. It is also useful to consider the form $b^\\theta(x,y)=b(x,\\theta y)$ which is even, supersymmetric, non-degenerate and $\\mathfrak k$-invariant.\n\nLet $(\\mathfrak g,\\mathfrak k)$ be a reductive symmetric superpair. For arbitrary subspaces $\\mathfrak c,\\mathfrak d\\subset\\mathfrak g$, let $\\mathfrak z_{\\mathfrak d}(\\mathfrak c)=\\Set0{d\\in\\mathfrak d}{[d,\\mathfrak c]=0}$ denote the centraliser of $\\mathfrak c$ in $\\mathfrak d$. Any linear subspace $\\mathfrak a=\\mathfrak z_{\\mathfrak p}(\\mathfrak a)\\subset\\mathfrak p_0$ consisting of semi-simple elements of $\\mathfrak g_0$ is called an \\emph{even Cartan subspace}. If an even Cartan subspace exists, then we say that $(\\mathfrak g,\\mathfrak k)$ is of \\emph{even type}. \n\\end{Def}\n\n\\begin{Par*}\n\tWe state some generalities on even Cartan subspaces. 
These are known and straightforward to deduce from standard texts such as \\cite{dixmier-envalg,borel-rss}.\n\\end{Par*}\n\n\\begin{Lem}\nLet $\\mathfrak a\\subset\\mathfrak g$ be an even Cartan subspace.\n\\begin{enumerate}\n\\item $\\mathfrak a$ is reductive in $\\mathfrak g$, \\emph{i.e.}~$\\mathfrak g$ is a semi-simple $\\mathfrak a$-module.\n\\item $\\mathfrak z_{\\mathfrak g_0}(\\mathfrak a)$ and $\\mathfrak z_{\\mathfrak g_1}(\\mathfrak a)$ are $b$-non-degenerate.\n\\item $\\mathfrak z_{\\mathfrak g_0}(\\mathfrak a)=\\mathfrak m_0\\oplus\\mathfrak a$ and $\\mathfrak z_{\\mathfrak g_1}(\\mathfrak a)=\\mathfrak m_1$ where $\\mathfrak m_i=\\mathfrak z_{\\mathfrak k_i}(\\mathfrak a)$, and the sum is $b$-orthogonal.\n\\item $\\mathfrak m_0$, $\\mathfrak m_1$, and $\\mathfrak a$ are $b$-non-degenerate. \n\\item There exists a $\\theta$-stable Cartan subalgebra $\\mathfrak b$ of $\\mathfrak g_0$ containing $\\mathfrak a$. \n\\end{enumerate}\n\\end{Lem}\n\n\\begin{Par}[grouptype]\n\tLet $\\mathfrak k$ be a classical Lie superalgebra with a non-degenerate invariant even form $B$ \\cite{kac-repnclassical}. Then $\\mathfrak k_0$ is reductive in $\\mathfrak k$, and $\\mathfrak z(\\mathfrak k)$ is even. We may define $\\mathfrak g=\\mathfrak k\\oplus\\mathfrak k$, and $b(x,y,x',y')=B(x,x')+B(y,y')$. Then $(\\mathfrak g,b)$ is basic quadratic. The flip involution $\\theta(x,y)=(y,x)$ turns $(\\mathfrak g,\\mathfrak k)$ into a reductive symmetric superpair (where $\\mathfrak k$ is, as is customary, identified with the diagonal in $\\mathfrak g$). We call such a pair of \\emph{group type}. \n\t\n\tMoreover, any Cartan subalgebra $\\mathfrak a$ of $\\mathfrak k_0$ yields an even Cartan subspace for the superpair $(\\mathfrak g,\\mathfrak k)$. Indeed, $\\mathfrak p=\\Set1{(x,-x)}{x\\in\\mathfrak k}$, and the assertion follows from \\thmref{Prop}{superroot}~(v).\n\\end{Par}\n\n\\begin{Par}\n\tIn what follows, let $(\\mathfrak g,\\mathfrak k)$ be a reductive symmetric superpair of even type, $\\mathfrak a\\subset\\mathfrak p$ an even Cartan subspace, and $\\mathfrak b\\subset\\mathfrak g_0$ a $\\theta$-stable Cartan subalgebra containing $\\mathfrak a$. The involution $\\theta$ acts on $\\mathfrak b^*$ by $\\theta\\alpha=\\alpha\\circ\\theta$ for all $\\alpha\\in\\mathfrak b^*$. Let $\\alpha_\\pm=\\frac12(1\\pm\\theta)\\alpha$ for all $\\alpha\\in\\mathfrak b^*$, and set\n\t\\[\n\t\t\\Sigma_j=\\Sigma_j(\\mathfrak g:\\mathfrak a)=\\Set1{\\alpha_-}{\\alpha\\in\\Delta_j\\ ,\\ \\alpha\\neq\\theta\\alpha}\\ ,\\ \\Sigma=\\Sigma(\\mathfrak g:\\mathfrak a)=\\Sigma_0\\cup\\Sigma_1\\ .\n\t\\]\n\t(The union might not be disjoint.) Identifying $\\mathfrak a^*$ with the annihilator of $\\mathfrak b\\cap\\mathfrak k$ in $\\mathfrak b^*$, these may be considered as subsets of $\\mathfrak a^*$. The elements of $\\Sigma_0$, $\\Sigma_1$, and $\\Sigma$ are called \\emph{even restricted roots}, \\emph{odd restricted roots}, and \\emph{restricted roots}, respectively. For $\\lambda\\in\\Sigma$, let \n\t\\[\n\t\t\\Sigma_j(\\lambda)=\\Set1{\\alpha\\in\\Delta_j}{\\lambda=\\alpha_-}\\ ,\\ \\Sigma(\\lambda)=\\Sigma_0(\\lambda)\\cup\\Sigma_1(\\lambda)\\ .\n\t\\]\n\tIn the following lemma, observe that $\\lambda\\in\\Sigma_j(\\lambda)$ means that $\\lambda\\in\\Delta_j$. We omit the simple proof, which is exactly the same as in the even case \\cite[Chapter 1.1, Appendix 2, Lemma 1]{warner-vol1}.\n\\end{Par}\n\n\\begin{Lem}[rootrestr-fibre-even]\n\tLet $\\lambda\\in\\Sigma_j$, $j=0,1$. 
The map $\\alpha\\mapsto-\\theta\\alpha$ is a fixed-point-free involution of $\\Sigma_j(\\lambda)\\setminus\\lambda$. In particular, the cardinality of this set is even. \n\\end{Lem}\n\n\\begin{Par}\n\tFor $\\lambda\\in\\Sigma$, let \n\t\\[\n\t\t\\mathfrak g_{j,\\mathfrak a}^\\lambda=\\Set1{x\\in\\mathfrak g_j}{\\forall h\\in\\mathfrak a\\,:\\,[h,x]=\\lambda(h)\\cdot x}\\ ,\\ \\mathfrak g_{\\mathfrak a}^\\lambda=\\mathfrak g_{0,\\mathfrak a}^\\lambda\\oplus\\mathfrak g_{1,\\mathfrak a}^\\lambda\\ ,\n\t\\]\n\tand $m_{j,\\lambda}=\\dim_\\cplxs\\mathfrak g^\\lambda_{j,\\mathfrak a}$, the \\emph{even} or \\emph{odd multiplicity} of $\\lambda$, according to whether $j=0$ or $j=1$. It is clear that\n\t\\[\n\t\t\\textstyle\\mathfrak g_{j,\\mathfrak a}^\\lambda=\\bigoplus_{\\alpha\\in\\Sigma_j(\\lambda)}\\mathfrak g^\\alpha_j\\ ,\\ m_{j,\\lambda}=\\sum_{\\alpha\\in\\Sigma_j(\\lambda)}\\dim_\\cplxs\\mathfrak g_j^\\alpha\\ ,\\ \\text{\\AND}\\ \\mathfrak g=\\mathfrak z_{\\mathfrak g}(\\mathfrak a)\\oplus\\bigoplus_{\\lambda\\in\\Sigma}\\mathfrak g^\\lambda_{\\mathfrak a}\\ .\n\t\\]\n\\end{Par}\n\n\\begin{Par*}\n\tThe following facts are certainly well-known. Lacking a reference, we give the short proof. \n\\end{Par*}\n\n\\begin{Prop}[res-superroot]\nLet $\\alpha,\\beta\\in\\Delta$, $\\lambda\\in\\Sigma$, and $j,k\\in\\{0,1\\}$.\n\\begin{enumerate}\n\t\\item The form $b^\\theta$ is zero on $\\mathfrak g^\\alpha_j\\times\\mathfrak g_k^\\beta$, unless $j=k$ and $\\alpha=-\\theta\\beta$, in which case it gives a non-degenerate pairing.\n\t\\item There exists a unique $A_\\lambda\\in\\mathfrak a$ such that $b(A_\\lambda,h)=\\lambda(h)$ for all $h\\in\\mathfrak a$. \n\t\\item We have $\\dim_\\cplxs\\mathfrak g_j^\\alpha=\\dim_\\cplxs\\mathfrak g_j^{-\\theta\\alpha}$. \n\t\\item The subspace $\\mathfrak g_j(\\lambda)=\\mathfrak g_{j,\\mathfrak a}^\\lambda\\oplus\\mathfrak g_{j,\\mathfrak a}^{-\\lambda}$ is $\\theta$-invariant and decomposes into $\\theta$-eigenspaces as $\\mathfrak g_j(\\lambda)=\\mathfrak k_j^\\lambda\\oplus\\mathfrak p_j^\\lambda$.\n\t\\item The odd multiplicity $m_{1,\\lambda}$ is even, and $b^\\theta$ defines a symplectic form on both $\\mathfrak k_1^\\lambda$ and $\\mathfrak p_1^\\lambda$. \n\\end{enumerate}\n\\end{Prop}\n\n\\begin{proof}\n\tThe form $b^\\theta$ is even, so $b^\\theta(\\mathfrak g_0,\\mathfrak g_1)=0$. For $x\\in\\mathfrak g^\\alpha_j$, $y\\in\\mathfrak g^\\beta_j$, we compute, for all $h\\in\\mathfrak b$, \n\t\\begin{align*}\n\t\t(\\alpha+\\theta\\beta)(h)b^\\theta(x,y)&=b^\\theta([h,x],y)+b^\\theta(x,[\\theta h,y])\\\\\n\t\t&=b^\\theta([h,x]+[x,h],y)=0\\ .\n\t\\end{align*}\n\tHence, $b^\\theta(x,y)=0$ if $\\alpha\\neq-\\theta\\beta$. Since $b^\\theta$ is non-degenerate and $\\mathfrak g\/\\mathfrak b$ is the sum of root spaces, $b^\\theta$ induces a non-degenerate pairing of $\\mathfrak g_j^\\alpha$ and $\\mathfrak g_j^{-\\theta\\alpha}$. We also know already that $\\mathfrak a$ is non-degenerate for $b^\\theta$, and (i)-(iii) follow. Statement (iv) is immediate. \n\t\n\tWe have \n\t\\[\n\t\t\\mathfrak g_{1,\\mathfrak a}^\\lambda\/\\mathfrak g_1^\\lambda\\cong\\textstyle\\bigoplus_{\\alpha\\in\\Sigma_1(\\lambda)\\setminus\\lambda}\\mathfrak g_1^\\alpha\\ .\n\t\\]\n\tBy (iii) and \\thmref{Lem}{rootrestr-fibre-even}, this space is even-dimensional. But $\\lambda$ is a root if and only if $\\lambda=-\\theta\\lambda$. Then $b^\\theta$ defines a symplectic form on $\\mathfrak g_1^\\lambda$ by (i), and this space is even-dimensional. 
Thus, $m_{1,\\lambda}$ is even, and again by (i), $\\mathfrak g_{1,\\mathfrak a}^\\lambda$ is $b^\\theta$-non-degenerate. It is clear that $\\mathfrak k_1^\\lambda$ and $\\mathfrak p_1^{\\lambda}$ are $b^\\theta$-non-degenerate because $\\mathfrak g_{1,\\mathfrak a}^\\lambda$ and $\\mathfrak g_{1,\\mathfrak a}^{-\\lambda}$ are. Hence, we obtain assertion (v).\n\\end{proof}\n\n\\begin{Rem}\n\tUnlike the case of unrestricted roots, there may exist $\\lambda\\in\\Sigma_1$ such that $2\\lambda\\not\\in\\Sigma$ but $\\lambda$ is still anisotropic, \\emph{i.e.}~$\\lambda(A_\\lambda)\\neq0$. Indeed, consider $\\mathfrak g=\\mathfrak{osp}(2|2,\\cplxs)$ ($\\cong\\sll(2|1,\\cplxs)$). Then $\\mathfrak g_0=\\mathfrak o(2,\\cplxs)\\oplus\\mathfrak{sp}(2,\\cplxs)=\\mathfrak{gl}(2,\\cplxs)$ and $\\mathfrak g_1$ is the sum of the fundamental representation of $\\mathfrak g_0$ and its dual. \n\t\n\tDefine the involution $\\theta$ to be conjugation by the element $\\begin{Matrix}0\\sigma&0\\\\0&1_2\\end{Matrix}$ where $\\sigma=\\begin{Matrix}00&1\\\\1&0\\end{Matrix}$. One finds that $\\mathfrak k_0=\\sll(2,\\cplxs)$ and $\\mathfrak p_0=\\mathfrak a=\\mathfrak z(\\mathfrak g_0)$ which is one-dimensional and non-degenerate for the supertrace form $b$. On the other hand, $\\mathfrak g_1=\\mathfrak g_1(\\lambda)$ is the sum of the root spaces for certain odd roots $\\pm\\alpha$, $\\pm\\theta\\alpha$ which restrict to $\\pm\\lambda$. Clearly, there are no even roots, so $2\\lambda$ is not a restricted root. Since $A_\\lambda$ generates $\\mathfrak a$, it is a $b$-anisotropic vector. We discuss this issue at some length in section 4.2. \n\t\n\tWe point out that it is also not hard to prove that any such root $\\lambda$ occurs in this setup. \\emph{I.e.}, given a reductive symmetric superpair $(\\mathfrak g,\\mathfrak k)$, for any $\\lambda\\in\\Sigma_1$, $2\\lambda\\not\\in\\Sigma$, $\\lambda(A_\\lambda)\\neq0$, there exists a $b$-non-degenerate $\\theta$-invariant subalgebra $\\mathfrak s\\cong\\mathfrak{osp}(2|2,\\cplxs)$ such that $\\mathfrak p\\cap\\mathfrak s_0=\\cplxs A_\\lambda=\\mathfrak z(\\mathfrak s_0)$ (the centre of $\\mathfrak s_0$), and $\\dim\\mathfrak s\\cap\\mathfrak g_1(\\lambda)=4$.\n\t\n\tThis phenomenon, of course, cannot occur if the symmetric superpair $(\\mathfrak g,\\mathfrak k)$ is of group type. This reflects the fact that the conditions characterising the invariant algebra may be different in the general case from what one might expect from the knowledge of the group case (\\emph{i.e.}~the theorems of Sergeev and Kac, Gorelik).\n\\end{Rem}\n\n\\subsection{The twisted action on the supersymmetric algebra}\n\n\\begin{Par}\n\tLet $V=V_0\\oplus V_1$ be a finite-dimensional super-vector space over $\\cplxs$. We define the supersymmetric algebra $S(V)=S(V_0)\\otimes\\bigwedge(V_1)$. It is $\\ints$-graded by total degree, as follows: $S^{k,\\mathrm{tot}}(V)=\\bigoplus_{p+q=k}S^p(V_0)\\otimes\\bigwedge^q(V_1)$. This grading is not compatible with the $\\ints_2$-grading, but will be of use to us nonetheless. \n\t\n\tLet $U$ be another finite-dimensional super-vector space, and moreover, let $b:U\\times V\\to\\cplxs$ be a bilinear form. Then $b$ extends to a bilinear form $S(U)\\times S(V)\\to\\cplxs$: It is defined on linear generators by \n\t\\[\n\t\tb(x_1\\dotsm x_m,y_1\\dotsm y_n)=\\delta_{mn}\\cdot\\sum\\nolimits_{\\sigma\\in\\mathfrak S_n}\\alpha^\\sigma_{x_1,\\dotsc,x_n}\\cdot b(x_{\\sigma(1)},y_1)\\dotsm b(x_{\\sigma(n)},y_n)\t\n\t\\]\n\tfor all $x_1,\\dotsc,x_m\\in U$, $y_1,\\dotsc,y_n\\in V$ where $\\alpha=\\alpha^\\sigma_{x_1,\\dotsc,x_n}=\\pm1$ is determined by the requirement that $\\alpha\\cdot x_{\\sigma(1)}\\dotsm x_{\\sigma(n)}=x_1\\dotsm x_n$ in $S(V)$. 
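For instance, for homogeneous $x_1,x_2\\in U$ and $y_1,y_2\\in V$, this definition reads\n\t\\[\n\t\tb(x_1x_2,y_1y_2)=b(x_1,y_1)b(x_2,y_2)+(-1)^{\\Abs0{x_1}\\Abs0{x_2}}b(x_2,y_1)b(x_1,y_2)\\ .\n\t\\]\n\t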
If $b$ is even (resp.~odd, resp.~non-degenerate), then so is its extension. Here, recall that a bilinear form has degree $i$ if $b(U_j,V_k)=0$ whenever $i+j+k\\equiv1\\ (2)$.\n\t\n\tIn particular, the natural pairing of $V$ and $V^*$ extends to a non-de\\-ge\\-ne\\-rate even pairing $\\Cdual0\\cdot\\cdot$ of $S(V)$ and $S(V^*)$. By this token, $S(V)$ embeds injectively as a subsuperspace in $\\widehat S(V)=S(V^*)^*$. Its image coincides with the graded dual $S(V^*)^{*\\mathrm{gr}}$ whose elements are the linear forms vanishing on $S^{k,\\mathrm{tot}}(V^*)$ for $k\\gg1$. \n\t\n\tWe define a superalgebra homomorphism $\\partial:S(V)\\to\\End0{\\widehat S(V^*)}$ by \n\t\\[\n\t\t\\Cdual0p{\\partial(q)\\pi}=\\Cdual0{pq}\\pi\\mathfa p,q\\in S(V)\\,,\\,\\pi\\in S(V)^*\n\t\\]\n\twhere $\\widehat S(V^*)=S(V)^*$. Clearly, $\\partial(q)$ leaves $S(V^*)$ invariant. \n\\end{Par}\n\n\\begin{Par}\n\tIf $U$ is an even finite-dimensional vector space over $\\cplxs$, then we have the well-known isomorphism $S(U^*)\\cong\\cplxs[U]$ as algebras, where $\\cplxs[U]$ is the set of polynomial mappings $U\\to\\cplxs$. We recall that the isomorphism can be written down as follows. \n\t\n\tThe pairing $\\Cdual0\\cdot\\cdot$ of $S(U)$ and $S(U^*)$ extends to $\\widehat S(U)\\times S(U^*)$. For any $d\\in S(U)$, the exponential $e^d=\\sum_{n=0}^\\infty\\frac{d^n}{n!}$ makes sense as an element of the algebra $\\widehat S(U)=\\prod_{n=0}^\\infty S^n(U)$. Now, define a map $S(U^*)\\to\\cplxs[U]:p\\mapsto P$ by \n\t\\[\n\t\tP(z)=\\Cdual0{e^z}p=\\textstyle\\sum_{n=0}^\\infty\\frac1{n!}\\Cdual0{z^n}{p}=\\sum_{n=0}^\\infty\\frac1{n!}\\Cdual01{\\partial(z)^np}\\ .\n\t\\]\n\tObserve \n\t\\[\n\t\t\\tfrac d{dt}P(z_0+tz)\\big|_{t=0}=\\tfrac d{dt}\\Cdual0{e^{tz}e^{z_0}}p\\big|_{t=0}=\\Cdual0{ze^{z_0}}p\\ .\n\t\\]\n\tIterating this formula, we obtain $\\Cdual0{z_1\\dotsm z_n}p$ for any $z_j\\in U$ as a repeated directional derivative of $P$, and the map is injective. Since it preserves the grading by total degree, it is bijective because the homogeneous components of both sides have equal dimension in every degree. 
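For instance, if $p=\\eta_1\\dotsm\\eta_n$ is a product of linear forms $\\eta_i\\in U^*$, then this prescription gives $P(z)=\\eta_1(z)\\dotsm\\eta_n(z)$. 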
Hence, $e^z$ may be considered as an element of $\\widehat S(V)$. \t\n\t\n\tThe map $\\phi$ is an isomorphism as the composition of the isomorphisms\n\t\\begin{align*}\n\t\t\\Hom[_{S(V_0)}]0{S(V),\\cplxs[V_0]}&\\cong\\Hom[_{S(V_0)}]0{S(V_0)\\otimes\\textstyle\\bigwedge V_1,S(V_0^*)}\\\\\n\t\t&\\cong\\textstyle S(V_0^*)\\otimes\\bigwedge V_1^*\\cong S(V^*)\\ .\n\t\\end{align*}\n\\end{Par}\n\n\\begin{Def}[restrhom]\n\tLet $(\\mathfrak g,\\mathfrak k)$ be a reductive symmetric superpair of even type, and $\\mathfrak a\\subset\\mathfrak p$ an even Cartan subspace. We apply the isomorphism $\\phi$ for $V=\\mathfrak p$ to define natural \\emph{restriction homomorphisms}\n\t\\[\n\tS(\\mathfrak p^*)\\to S(\\mathfrak p_0^*):p\\mapsto\\bar p\\mathtxt\\AND S(\\mathfrak p^*)\\to S(\\mathfrak a^*):p\\mapsto\\bar p\\ .\n\t\\]\n\tHere, $\\bar p\\in S(\\mathfrak p_0^*)$ (resp.~$\\bar p\\in S(\\mathfrak a^*)$) is defined via its associated polynomial $\\bar P\\in\\cplxs[\\mathfrak p_0]$ (resp.~$\\bar P\\in\\cplxs[\\mathfrak a]$) where \n\t\\[\n\t\t\\bar P(z)=P(1;z)\\mathtxt\\AND P=\\phi(p)\\ .\n\t\\]\n\tThis is a convention we will adhere to in all that follows. \n\t\n\tSince $\\mathfrak p_0$ is complemented by $\\mathfrak p_1$ in $\\mathfrak p$, and $\\mathfrak a$ is complemented in $\\mathfrak p_0$ by $\\bigoplus_{\\lambda\\in\\Sigma_0}\\mathfrak p_0^\\lambda$, we will in the sequel consider $\\mathfrak p_0^*\\subset\\mathfrak p^*$ and $\\mathfrak a^*\\subset\\mathfrak p_0^*$.\n\\end{Def}\n\n\\begin{Par}\n\tLet $K$ be a connected Lie group with Lie algebra $\\mathfrak k_0$ such that the restricted adjoint representation $\\ad:\\mathfrak k_0\\to\\End0{\\mathfrak g}$ lifts to a homomorphism $\\Ad:K\\to\\GL(\\mathfrak g)$. (For instance, one might take $K$ simply connected.) Then $\\mathfrak k$ (resp.~$K$) acts on $S(\\mathfrak p)$, $S(\\mathfrak p^*)$, $\\widehat S(\\mathfrak p)$, $\\widehat S(\\mathfrak p^*)$ by suitable extensions of $\\ad$ and $\\ad^*$ (resp.~$\\Ad$ and $\\Ad^*$) which we denote by the same symbols. Here, the sign convention for $\\ad^*$ is\n\t\\[\n\t\t\\Cdual0y{\\ad^*(x)\\eta}=\\Cdual0{[y,x]}\\eta=-(-1)^{\\Abs0x\\Abs0y}\\Cdual0{\\ad(x)(y)}\\eta\n\t\\]\n\tfor all $x,y\\in\\mathfrak g$, $\\eta\\in\\mathfrak g^*$. \n\t\n\tLet $z\\in\\mathfrak p_0$. We have $e^z=\\sum_{k=0}^\\infty\\frac{z^k}{k!}\\in\\widehat S(\\mathfrak p)$, and this element is invertible with inverse $e^{-z}$. Define \n\t\\[\n\tu_z(x)d=\\ad(x)(de^z)e^{-z}\\mathfa x\\in\\mathfrak k\\,,\\,d\\in\\widehat S(\\mathfrak p)\\ .\n\t\\]\n\tObserve that \n\t\\[\n\t\\ad(x)(e^z)=\\textstyle\\sum_{n=0}^\\infty\\frac1{n!}\\ad(x)(z^n)=\\sum_{n=1}^\\infty\\frac n{n!}[x,z]z^{n-1}=[x,z]e^z\\ ,\n\t\\]\t\n\tbecause $z$ is even. Hence, \n\t\\[\n\t\tu_z(x)d=\\ad(x)(de^z)e^{-z}=[x,z]d+\\ad(x)(d)\\ .\n\t\\]\n\tIn particular, $u_z(x)$ leaves $S(\\mathfrak p)\\subset\\widehat S(\\mathfrak p)$ invariant. \n\\end{Par}\n\n\\begin{Lem}[twistedaction-cov]\n\tLet $z\\in\\mathfrak p_0$. Then $u_z$ defines a $\\mathfrak k$-module structure on $S(\\mathfrak p)$, and for all $x\\in\\mathfrak k$, $k\\in K$, we have \n\t\\[\n\t\t\\Ad(k)\\circ u_z(x)=u_{\\Ad(k)(z)}(\\Ad(k)(x))\\circ\\Ad(k)\\ .\n\t\\]\n\\end{Lem}\n\n\\begin{proof}\n\tWe clearly have \n\t\\[\n\t\tu_z(x)u_z(y)d=(\\ad(x)\\ad(y)(de^z))e^{-z}\\ .\n\t\\]\n\tNow $u_z$ is a $\\mathfrak k$-action because $\\ad$ is a homomorphism. 
Similarly,\n\t\\begin{align*}\n\t\t\\Ad(k)(u_z(x)d)&=\\ad(\\Ad(k)(x))(\\Ad(k)(d)e^{\\Ad(k)(z)})e^{-\\Ad(k)(z)}\\\\\n\t\t&=u_{\\Ad(k)(z)}(\\Ad(k)(x))\\Ad(k)(d)\\ ,\n\t\\end{align*}\n\twhich manifestly gives the second assertion. \n\\end{proof}\n\n\\begin{Par}[twisted-action-def]\n\tLet $u_z$ also denote the natural extension of $u_z$ to $\\Uenv0{\\mathfrak k}$. Then we may define an action $\\ell$ of $\\Uenv0{\\mathfrak k}$ on $\\Hom[_{S(\\mathfrak p_0)}]0{S(\\mathfrak p),\\cplxs[\\mathfrak p_0]}$ via \n\t\\[\n\t\t(\\ell_vP)(d;z)=(-1)^{\\Abs0v\\Abs0P}P(u_z(S(v))d;z)\n\t\\]\n\tfor all $P\\in\\Hom[_{S(\\mathfrak p_0)}]0{S(\\mathfrak p),\\cplxs[\\mathfrak p_0]}$, $v\\in\\Uenv0{\\mathfrak k}$, $d\\in S(\\mathfrak p)$, $z\\in\\mathfrak p_0$. Here, we denote by $S:\\Uenv0{\\mathfrak g}\\to\\Uenv0{\\mathfrak g}$ the unique linear map such that $S(1)=1$, $S(x)=-x$ for all $x\\in\\mathfrak g$, and $S(uv)=(-1)^{\\Abs0u\\Abs0v}S(v)S(u)$ for all homogeneous $u,v\\in\\Uenv0{\\mathfrak g}$ (\\emph{i.e.}~the principal anti-automorphism). Compare \\cite{koszul-superaction} for a similar definition in the context of the action of a supergroup on its algebra of superfunctions. \n\t\n\tWe also define \n\t\\[\n\t\t(L_kP)(d;z)=P(\\Ad(k^{-1})(d);\\Ad(k^{-1})(z))\n\t\\]\n\tfor all $P\\in\\Hom[_{S(\\mathfrak p_0)}]0{S(\\mathfrak p),\\cplxs[\\mathfrak p_0]}$, $k\\in K$, $d\\in S(\\mathfrak p)$, $z\\in\\mathfrak p_0$.\n\\end{Par}\n\n\\begin{Lem}[flat-radial-fmla]\n\tThe map $\\ell$ (resp.~$L$) defines on $\\Hom[_{S(\\mathfrak p_0)}]0{S(\\mathfrak p),\\cplxs[\\mathfrak p_0]}$ the structure of a module over $\\mathfrak k$ (resp.~$K$) making the isomorphism $\\phi$ equivariant for $\\mathfrak k$ (resp.~$K$). \n\\end{Lem}\n\n\\begin{proof}\n\tLet $P=\\phi(p)$. Then\n\t\\begin{align*}\n\t\t(\\ell_xP)(d;z)&=-(-1)^{\\Abs0x\\Abs0p}P(u_z(x)d;z)=-(-1)^{\\Abs0d\\Abs0p}\\Cdual1{\\ad(x)(e^zd)}p\\\\\n\t\t&=(-1)^{\\Abs0d(\\Abs0x+\\Abs0p)}\\Cdual1{e^zd}{\\ad^*(x)(p)}=\\phi\\Parens1{\\ad^*(x)(p)}(d;z)\\ .\n\t\\end{align*}\n\t\n\tSimilarly, we check that\n\t\\begin{align*}\n\t\t(L_kP)(d;z)&=P(\\Ad(k^{-1})(d);\\Ad(k^{-1})(z))\\\\\n\t\t&=(-1)^{\\Abs0d\\Abs0p}\\Cdual1{e^{\\Ad(k^{-1})(z)}\\Ad(k^{-1})(d)}p\\\\\n\t\t&=(-1)^{\\Abs0d\\Abs0p}\\Cdual1{\\Ad(k^{-1})(e^zd)}p=\\phi\\Parens1{\\Ad^*(k)(p)}(d;z)\\ .\n\t\\end{align*}\n\tThis proves our assertion.\n\\end{proof}\n\n\\section{Chevalley's restriction theorem}\n\n\\subsection{The map $\\gamma_z$}\n\n\\begin{Par*}\n\tFrom now on, let $(\\mathfrak g,\\mathfrak k)$ be a reductive symmetric superpair of even type, and let $\\mathfrak a\\subset\\mathfrak p_0$ be an even Cartan subspace. \n\\end{Par*}\n\n\\begin{Def}\n\tAn element $z\\in\\mathfrak p_0$ is called \\emph{oddly regular} whenever the map $\\ad(z):\\mathfrak k_1\\to\\mathfrak p_1$ is surjective. \tRecall that $z\\in\\mathfrak p_0$ is called \\emph{regular} if $\\dim\\mathfrak z_{\\mathfrak k_0}(z)=\\dim\\mathfrak z_{\\mathfrak k_0}(\\mathfrak a)$. We shall call $z$ \\emph{super-regular} if it is both regular and oddly regular. \n\t\t\n\tFix an even Cartan subspace $\\mathfrak a$, and let $\\Sigma$ be the set of (both odd and even) restricted roots. Let $\\Sigma^+\\subset\\Sigma$ be any subset such that $\\Sigma$ is the disjoint union of $\\pm\\Sigma^+$. Define $\\Sigma_j^\\pm=\\Sigma_j\\cap\\Sigma^\\pm$ for $j\\in\\ints\/2\\ints$. Let $\\bar\\Sigma_1$ be the set of $\\lambda\\in\\Sigma_1$ such that $m\\lambda\\not\\in\\Sigma_0$ for $m=1,2$. Denote $\\bar\\Sigma_1^+=\\bar\\Sigma_1\\cap\\Sigma^+$. 
Note that $\\Pi_1\\in S(\\mathfrak a^*)^W$ where $\\Pi_1(h)=\\prod_{\\lambda\\in\\Sigma_1}\\lambda(h)$, and $W$ is the Weyl group of $\\Sigma_0$. \n\t\n\tBy Chevalley's restriction theorem, restriction $S(\\mathfrak p_0^*)^{\\mathfrak k_0}\\to S(\\mathfrak a^*)^W$ is bijective. Let $\\Pi_1$ also denote the unique extension to $S(\\mathfrak p^*_0)^{\\mathfrak k_0}$ of $\\Pi_1$. \n\\end{Def}\n\n\\begin{Rem}\n\tThe space $\\mathfrak p_0$ contains non-semi-simple elements, and the definitions we have given above work in this generality. However, it will suffice for our purposes to consider the set of \\emph{semi-simple} super-regular elements in $\\mathfrak p_0$, by the following reasoning. \n\t\n\tFirst, the set of semi-simple elements in $\\mathfrak p_0$ is Zariski dense (a linear endomorphism is semi-simple if and only if its minimal polynomial has only simple zeros). Second, the set of semi-simple elements in $\\mathfrak p_0$ equals $\\Ad(K)(\\mathfrak a)$ \\cite[Chapter III, Proposition~4.16]{helgason-geoman}. Thus, given any \\emph{semi-simple} $z\\in\\mathfrak p_0$, $z$ is oddly regular (super-regular) if and only if $\\lambda(\\Ad(k)(z))\\neq0$ for all $\\lambda\\in\\Sigma_1$ ($\\lambda\\in\\Sigma$), and for some (any) $k\\in K$ such that $\\Ad(k)(z)\\in\\mathfrak a$. In particular, the set of super-regular elements of $\\mathfrak a$ is the complement of a finite union of hyperplanes. Hence, the set of semi-simple super-regular elements of $\\mathfrak p_0$ is non-void and therefore Zariski dense; in particular, this holds for the set of semi-simple oddly regular elements. \n\\end{Rem}\n\n\\begin{Lem}[centnondegen]\n\tIf $z\\in\\mathfrak p_0$ is semi-simple, then $\\mathfrak k_i=\\mathfrak z_{\\mathfrak k_i}(z)\\oplus[z,\\mathfrak p_i]$, and the subspaces $\\mathfrak z_{\\mathfrak k_i}(z)$ and $[z,\\mathfrak p_i]$ are $b$-non-degenerate. \n\\end{Lem}\n\n\\begin{proof}\n\tSince $\\ad z$ is a semi-simple endomorphism of $\\mathfrak g$ ($\\mathfrak g$ is a semi-simple $\\mathfrak g_0$-module and $z$ is semi-simple), we have $\\mathfrak g_i=\\mathfrak z_{\\mathfrak g_i}(z)\\oplus[z,\\mathfrak g_i]$. Taking $\\theta$-fixed parts, we deduce $\\mathfrak k_i=\\mathfrak z_{\\mathfrak k_i}(z)\\oplus[z,\\mathfrak p_i]$. The summands, being $b$-orthogonal, are non-degenerate. \n\\end{proof}\n\n\\begin{Par}\n\tLet $z\\in\\mathfrak p_0$ be semi-simple and oddly regular. Let $\\beta:S(\\mathfrak g)\\to\\Uenv0{\\mathfrak g}$ be the supersymmetrisation map. Let\n\t\\[\n\t\t\\Gamma_z:\\textstyle\\bigwedge(\\mathfrak p_1)\\otimes S(\\mathfrak p_0)\\to S(\\mathfrak p):q\\otimes p\\mapsto u_z\\Parens1{\\beta([z,q])}p\n\t\\]\n\ton elementary tensors and extend linearly. \n\\end{Par}\n\n\\begin{Prop}[radial-part]\nLet $z$ be oddly regular and semi-simple. Then $\\Gamma_z$ is bijective, and $\\gamma_z=(\\eps\\otimes 1)\\circ\\Gamma_z^{-1}:S(\\mathfrak p)\\to S(\\mathfrak p_0)$ satisfies \n\\[\n\\gamma_{\\Ad(k)(z)}\\circ\\Ad(k)=\\Ad(k)\\circ\\gamma_z\\mathfa k\\in K\\ .\n\\] \nHere $\\eps:\\bigwedge(\\mathfrak p_1)\\to\\cplxs$ is the unique unital algebra homomorphism.\n\nMoreover, on $S^{m,\\mathrm{tot}}(\\mathfrak p)$, $\\Pi_1(z)^m\\gamma_z$ is polynomial in $z$, \\emph{i.e.}~it extends to an element $\\Pi_1(\\cdot)^m\\gamma_\\cdot$ of the space $\\cplxs[\\mathfrak p_0]\\otimes\\Hom0{S^{m,\\mathrm{tot}}(\\mathfrak p),S(\\mathfrak p_0)}$. \n\\end{Prop}\n\n\\begin{proof}\n\tBy the assumption on $z$, $\\ad z:\\mathfrak p_1\\to[z,\\mathfrak p_1]$ is bijective. 
Moreover, $\\Gamma_z$ respects the filtrations by total degree, and the degrees of these filtrations are equidimensional by the assumption. Hence, $\\Gamma_z$ will be bijective once it is surjective. In degree zero, $\\Gamma_z$ is the identity. We proceed to prove the surjectivity in higher degrees by induction.\n\n\tBy assumption, $\\ad z:[z,\\mathfrak p_1]\\to\\mathfrak p_1$ is also bijective (since its kernel is $\\mathfrak z_{\\mathfrak k_1}(z)\\cap[z,\\mathfrak p_1]$, which is $0$ by \\thmref{Lem}{centnondegen}). Let $y_1,\\dotsc,y_m\\in\\mathfrak p_1$, $y_1',\\dotsc,y_n'\\in\\mathfrak p_0$. Let $x_j\\in\\mathfrak p_1$ such that $[[z,x_j],z]=y_j$. We find\n\t\\[\n\t\t\\Gamma_z(x_1\\dotsm x_m\\otimes y_1'\\dotsm y_n')\\equiv y_1\\dotsm y_my_1'\\dotsm y_n'\\quad\\Parens1{\\mathrm{mod}\\;\\textstyle\\bigoplus\\nolimits_{k<m+n}S^{k,\\mathrm{tot}}(\\mathfrak p)}\\ .\n\t\\]\n\tThe lower order terms lie in the image of $\\Gamma_z$ by the inductive hypothesis, and the products $y_1\\dotsm y_my_1'\\dotsm y_n'$ span $S^{m+n,\\mathrm{tot}}(\\mathfrak p)$ modulo lower total degree. This proves the surjectivity, and hence the bijectivity, of $\\Gamma_z$. The asserted equivariance of $\\gamma_z$ is immediate from \\thmref{Lem}{twistedaction-cov}. Finally, the inverse of $\\ad z:[z,\\mathfrak p_1]\\to\\mathfrak p_1$ depends rationally on $z$, with denominators dividing $\\Pi_1(z)$; hence, on $S^{m,\\mathrm{tot}}(\\mathfrak p)$, $\\Pi_1(z)^m\\gamma_z$ is polynomial in $z$. \n\\end{proof}\n\n\\begin{Prop}[flat-radial]\n\tLet $p\\in I(\\mathfrak p^*)=S(\\mathfrak p^*)^{\\mathfrak k}$, $P=\\phi(p)$, and let $z\\in\\mathfrak p_0$ be oddly regular and semi-simple. Then \n\t\\[\n\t\tP(d;z)=P(\\gamma_z(d);z)=[\\partial_{\\gamma_z(d)}\\bar P](z)\\mathfa d\\in S(\\mathfrak p)\\ .\n\t\\]\n\\end{Prop}\n\n\\begin{proof}\n\tSince $p$ is $\\mathfrak k$-invariant, \\thmref{Lem}{flat-radial-fmla} gives $\\ell_xP=0$ for all $x\\in\\mathfrak k$, and hence $\\ell_vP=0$ for any $v\\in\\Uenv0{\\mathfrak k}$ without constant term. Thus, for all $x_1,\\dotsc,x_n\\in\\mathfrak p_1$, $q\\in S(\\mathfrak p_0)$, and $n>0$ \n\t\\[\n\t\tP\\Parens1{\\Gamma_z(x_1\\dotsm x_n\\otimes q);z}=(\\ell_{\\beta([z,x_1\\dotsm x_n])}P)(q;z)=0\\ .\n\t\\]\n\tSince $d-\\gamma_z(d)\\in\\Gamma_z(\\bigwedge^+(\\mathfrak p_1)\\otimes S(\\mathfrak p_0))$, where $\\bigwedge^+(\\mathfrak p_1)$ denotes the kernel of $\\eps:\\bigwedge(\\mathfrak p_1)\\to\\cplxs$ (\\emph{i.e.}, the set of elements without constant term), the assertion follows immediately. \n\\end{proof}\n\n\\begin{Cor}[res-inj]\n\tLet $(\\mathfrak g,\\mathfrak k)$ be a reductive symmetric superpair of even type. The algebra homomorphism $p\\mapsto\\bar p:I(\\mathfrak p^*)=S(\\mathfrak p^*)^{\\mathfrak k}\\to S(\\mathfrak p_0^*)$ is injective. In particular, $I(\\mathfrak p^*)$ is commutative and purely even. \n\\end{Cor}\n\n\\begin{proof}\n\tLet $p\\in I(\\mathfrak p^*)$. Assume that $\\bar p=0$. Let $d\\in S(\\mathfrak p)$. For all $z\\in\\mathfrak p_0$ which are oddly regular and semi-simple, \n\t\\[\n\t\tP(d;z)=P(\\gamma_z(d);z)=[\\partial_{\\gamma_z(d)}\\bar P](z)=0\\ ,\n\t\\]\n\tby \\thmref{Prop}{flat-radial}. It follows that $P(d;-)=0$ on $\\mathfrak p_0$, since it is a polynomial. Since $d$ was arbitrary, we have established our contention. \n\\end{proof}\n\n\\begin{Rem}\n\tThe statement of the Corollary can, of course, be deduced by applying the inverse function theorem for supermanifolds, as in \\cite[Proposition 1.1]{sergeev-invpol}. Nonetheless, we find it instructive to give the above proof based on the map $\\gamma_z$, as it illustrates the approach we will take to determine the image of the restriction map. \n\\end{Rem}\n\n\\subsection{Proof of Theorem (A)} \n\n\\begin{Par}\n\tLet $(\\mathfrak g,\\mathfrak k)$ be a reductive symmetric superpair of even type, and let $\\mathfrak a$ be an even Cartan subspace. We denote by $\\mathfrak a'$ the set of super-regular elements of $\\mathfrak a$. Let $\\mathcal R$ be the algebra of differential operators on $\\mathfrak a$ with rational coefficients which are non-singular on $\\mathfrak a'$. For any $z\\in\\mathfrak a'$ and any $D\\in\\mathcal R$, let $D(z)$ be the local expression of $D$ at $z$. This is defined by the requirement that $D(z)$ be a differential operator with constant coefficients, and \n\t\\[\n\t\t(Df)(z)=(D(z)f)(z)\\mathfa z\\in\\mathfrak a'\\ ,\n\t\\]\n\tand all regular functions $f$. 
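\n\t\n\tFor instance (a purely illustrative example, in which $t$ denotes a linear coordinate on a hypothetical one-dimensional $\\mathfrak a$): for $D=t^{-1}\\tfrac d{dt}\\in\\mathcal R$, which is non-singular wherever $t\\neq0$, the local expression at $z$ is the constant-coefficient operator \n\t\\[\n\t\tD(z)=t(z)^{-1}\\tfrac d{dt}\\ ,\n\t\\]\n\twhich depends rationally on $z$.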
\n\t\n\tWe associate to $\\Sigma\\subset\\mathfrak a^*$, the restricted root system of $\\mathfrak g:\\mathfrak a$, the subset $\\mathcal R_\\Sigma=\\bigcup_{\\lambda\\in\\bar\\Sigma_1^+}\\mathcal R_\\lambda\\subset\\mathcal R$ where\n\t\\[\n\t\t\\mathcal R_\\lambda=\\Set1{D\\in\\mathcal R}{\\exists\\,d\\in S(\\mathfrak p_1^\\lambda)\\colon D(z)=\\gamma_z(d)\\text{ for all }z\\in\\mathfrak a'}\\ .\n\t\\]\n\t\\emph{I.e.}, $\\mathcal R_\\Sigma$ consists of those differential operators which are given as radial parts of operators with constant coefficients on the $\\mathfrak p$-projections $\\mathfrak p_1^\\lambda$ of the restricted root spaces for the $\\lambda\\in\\bar\\Sigma_1^+$. For any $D\\in\\mathcal R$, let the \\emph{domain} $\\dom D$ be the set of all $p\\in\\cplxs[\\mathfrak a]$ such that $Dp\\in\\cplxs[\\mathfrak a]$. \n\t\n\tAs we shall see, the image of the restriction map is the set of $W$-invariant polynomials in the common domain of $\\mathcal R_\\Sigma$. We will subsequently determine $\\mathcal R_\\Sigma$ in order to describe this common domain in more explicit terms. \n\\end{Par}\n\n\\begin{Th}[super-chevalley]\n\tThe restriction homomorphism $I(\\mathfrak p^*)\\to S(\\mathfrak a^*)$ from \\thmref{Def}{restrhom} is a bijection onto the subspace $I(\\mathfrak a^*)=S(\\mathfrak a^*)^W\\cap\\bigcap_{D\\in\\mathcal R_\\Sigma}\\dom D$. \n\\end{Th}\n\n\\begin{Par*}\n\tThe \\emph{proof} of the Theorem requires a little preparation. \n\\end{Par*}\n\n\\begin{Lem}[doubleprojection-trick]\n\tLet $q\\in S(\\mathfrak p_0^*)^K$, $Q=\\phi(q)$, and $z\\in\\mathfrak p_0$ be super-regular and semi-simple. For all $x\\in\\mathfrak k$, and $w\\in S(\\mathfrak p)$, we have\n\t\\[\n\t\tQ\\Parens1{\\gamma_z(u_z(x)w);z}=0\\ .\n\t\\]\n\\end{Lem}\n\n\\begin{proof}\n\tThere is no restriction to generality in supposing $z\\in\\mathfrak a'$, so that $\\mathfrak z_{\\mathfrak k}(z)=\\mathfrak z_{\\mathfrak k}(\\mathfrak a)=\\mathfrak m$ and $\\mathfrak z_{\\mathfrak k_0}(z)=\\mathfrak z_{\\mathfrak k_0}(\\mathfrak a)=\\mathfrak m_0$. We define linear maps\n\t\\[\n\t\t\\gamma_z':S(\\mathfrak p_0)\\to S(\\mathfrak a)\\mathtxt\\AND\\gamma_z'':S(\\mathfrak p)\\to S(\\mathfrak a)\n\t\\]\n\tby the requirements that $v-\\gamma_z'(v)\\in u_z(\\mathfrak m_0^\\perp\\cap\\mathfrak k_0)(S(\\mathfrak p_0))$ for all $v\\in S(\\mathfrak p_0)$ and $w-\\gamma_z''(w)\\in u_z(\\mathfrak m^\\perp\\cap\\mathfrak k)(S(\\mathfrak p))$ for all $w\\in S(\\mathfrak p)$. (That such maps exist and are uniquely defined by these properties follows in exactly the same way as for \\thmref{Prop}{radial-part}. We remark that $[z,\\mathfrak p_i]=\\mathfrak k_i\\cap\\mathfrak m_i^\\perp$ by \\thmref{Lem}{centnondegen}.) Then \n\t\\begin{align*}\n\t\tw&-\\gamma_z'(\\gamma_z(w))=w-\\gamma_z(w)+\\gamma_z(w)-\\gamma_z'(\\gamma_z(w))\\\\\n\t\t&\\in u_z(\\mathfrak m_1^\\perp\\cap\\mathfrak k_1)(S(\\mathfrak p))+u_z(\\mathfrak m_0^\\perp\\cap\\mathfrak k_0)(S(\\mathfrak p_0))\\subset u_z(\\mathfrak m^\\perp\\cap\\mathfrak k)(S(\\mathfrak p))\n\t\\end{align*}\n\tfor all $w\\in S(\\mathfrak p)$, where $\\mathfrak m_1=\\mathfrak z_{\\mathfrak k_1}(\\mathfrak a)$. This shows that $\\gamma_z''=\\gamma_z'\\circ\\gamma_z$. \n\t\n\tMoreover, by the $K$-invariance of $q$, we have $Q(v;z)=Q(\\gamma_z'(v);z)$ for all $v\\in S(\\mathfrak p_0)$. 
We infer \n\t\\[\n\t\tQ\\Parens1{\\gamma_z(u_z(x)w);z}=Q\\Parens1{\\gamma_z''(u_z(x)w);z}=0\\mathfa x\\in\\mathfrak m^\\perp\\cap\\mathfrak k\\ ,\\ w\\in S(\\mathfrak p)\n\t\\]\n\tsince $u_z(x)w\\in u_z(\\mathfrak m^\\perp\\cap\\mathfrak k)(S(\\mathfrak p))$ belongs to $\\ker\\gamma_z''$. \n\t\n\tNext, we need to consider the case of $x\\in\\mathfrak m$. Then $\\ad(x):S(\\mathfrak p)\\to S(\\mathfrak p)$ annihilates the subspace $S(\\mathfrak a)$, and moreover, $\\ad(x)(e^z)=0$. From this we find for all $y\\in\\mathfrak m^\\perp\\cap\\mathfrak k$, $d\\in S(\\mathfrak p)$\n\t\\begin{align*}\n\t\t\\ad(x)\\Parens1{u_z(y)(d)}&=(\\ad(x)\\ad(y)(de^z))e^{-z}\\\\\n\t\t&=(\\ad([x,y])(de^z))e^{-z}+(-1)^{\\Abs0x\\Abs0y}\\ad(y)(\\ad(x)(d)e^z)e^{-z}\\\\\n\t\t&=u_z([x,y])d+(-1)^{\\Abs0x\\Abs0y}u_z(y)\\ad(x)(d)\\ .\n\t\\end{align*}\n\tSince $\\mathfrak m$ is a subalgebra and $b$ is $\\mathfrak k$-invariant, $\\mathfrak m^\\perp\\cap\\mathfrak k$ is $\\mathfrak m$-invariant. Hence, the above formula shows that $\\ker\\gamma_z''=u_z(\\mathfrak m^\\perp\\cap\\mathfrak k)(S(\\mathfrak p))$ is $\\ad(x)$-invariant.\n\t\n\tBy the definition of $\\gamma_z''$, we find that \n\t\\[\n\t\t\\gamma_z''(\\ad(x)d)=\\ad(x)\\gamma_z''(d)=0\\mathfa x\\in\\mathfrak m\\,,\\,d\\in S(\\mathfrak p)\\ .\n\t\\]\n\tReasoning as above, we see that \n\t\\[\n\tQ(\\gamma_z(u_z(x)d);z)=Q(\\gamma_z(\\ad(x)d);z)=0\\mathfa x\\in\\mathfrak m\\,,\\,d\\in S(\\mathfrak p)\\ .\n\t\\]\n\tSince $\\mathfrak k=\\mathfrak m\\oplus(\\mathfrak m^\\perp\\cap\\mathfrak k)$, this proves the lemma. \n\\end{proof}\n\n\\begin{Par*}\n\tLet $\\mathfrak p_0'$ be the set of semi-simple super-regular elements in $\\mathfrak p_0$. Recall the polynomial $\\Pi_1$, and consider the localisation $\\cplxs[\\mathfrak p_0]_{\\Pi_1}$. Let $q\\in S(\\mathfrak p_0^*)^K$, $Q=\\phi(q)$, and define \n\t\\[\n\t\tP(v;z)=Q(\\gamma_z(v);z)\\mathfa v\\in S(\\mathfrak p)\\ ,\\ z\\in\\mathfrak p_0'\\ .\n\t\\]\n\tBy \\thmref{Prop}{radial-part}, $P\\in\\Hom0{S(\\mathfrak p),\\cplxs[\\mathfrak p_0]_{\\Pi_1}}$. We remark that the $\\mathfrak k$-action $\\ell$ defined in \\thmref{Par}{twisted-action-def} extends to $\\Hom0{S(\\mathfrak p),\\cplxs[\\mathfrak p_0]_{\\Pi_1}}$, by the same formula. \n\\end{Par*}\n\n\\begin{Lem}[sp0-linear]\n\tRetain the above assumptions. Then $P$ is $S(\\mathfrak p_0)$-linear and $\\mathfrak k$-invariant, \\emph{i.e.}~$P\\in\\Hom[_{S(\\mathfrak p_0)}]0{S(\\mathfrak p),\\cplxs[\\mathfrak p_0]_{\\Pi_1}}^{\\mathfrak k}$.\n\\end{Lem}\n\n\\begin{proof}\n\tBy \\thmref{Lem}{doubleprojection-trick}, $P$ is $\\mathfrak k$-invariant. It remains to prove that $P$ is $S(\\mathfrak p_0)$-linear. To that end, we first establish that $P$ is $K$-equivariant as a linear map $S(\\mathfrak p)\\to\\cplxs[\\mathfrak p_0]_{\\Pi_1}$. Since $q$ is $K$-invariant, \n\t\\begin{align*}\n\t\tP\\Parens1{\\Ad(k)(v);\\Ad(k)(z)}&=Q\\Parens1{\\gamma_{\\Ad(k)(z)}(\\Ad(k)(v));\\Ad(k)(z)}\\\\\n\t\t&=Q\\Parens1{\\Ad(k)(\\gamma_z(v));\\Ad(k)(z)}\\\\\n\t\t&=Q(\\gamma_z(v);z)=P(v;z)\\ .\n\t\\end{align*}\n\n\tNext, fix $z\\in\\mathfrak p_0'$. Then $S(\\mathfrak p)=S(\\mathfrak p_0)\\oplus u_z(\\mathfrak z_{\\mathfrak k_1}(z)^\\perp\\cap\\mathfrak k_1)(S(\\mathfrak p))$ where the second summand equals $\\ker\\gamma_z$. We may check the $S(\\mathfrak p_0)$-linearity on each summand separately. 
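\n\n\tHere, in terms of the module structures introduced above, the $S(\\mathfrak p_0)$-linearity of $P$ amounts to the identity $[\\partial_yP(v;-)](z)=P(yv;z)$ for all $y\\in\\mathfrak p_0$ and $v\\in S(\\mathfrak p)$; this is the form in which we shall verify it.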
\n\n\tFor $v\\in S(\\mathfrak p_0)$, we have $P(v;z)=Q(v;z)$, so for any $y\\in\\mathfrak p_0$ \n\t\\[\n\t\t[\\partial_yP(v;-)](z)=[\\partial_yQ(v;-)](z)=Q(yv;z)=P(yv;z)\\ .\n\t\\]\n\t\n\tWe are reduced to considering $v=u_z(x)v'$ where $x\\in\\mathfrak z_{\\mathfrak k_1}(z)^\\perp\\cap\\mathfrak k_1$ and $v'\\in S(\\mathfrak p)$. We may assume w.l.o.g.~$z\\in\\mathfrak a$ (since $z$ is semi-simple), so that $\\mathfrak z_{\\mathfrak k_1}(z)=\\mathfrak z_{\\mathfrak k_1}(\\mathfrak a)=\\mathfrak m_1$. By our assumption on $z$, $\\mathfrak p_0=\\mathfrak a\\oplus[\\mathfrak k_0,z]$, and we may consider $y$ in each of the two summands separately. \n\t\n\tLet $y\\in\\mathfrak a$. For sufficiently small $t$, we have $z+ty\\in\\mathfrak a'=\\mathfrak a\\cap\\mathfrak p_0'$, so that $\\mathfrak z_{\\mathfrak k_1}(z+ty)=\\mathfrak m_1=\\mathfrak z_{\\mathfrak k_1}(z)$. Hence, $\\gamma_{z+ty}(u_{z+ty}(x)v')=0$. By the chain rule, \n\t\\[\n\t\t0=\\tfrac d{dt}\\gamma_{z+ty}(u_{z+ty}(x)v')\\big|_{t=0}=d\\gamma_\\cdot(v)_z(y)+\\gamma_z\\Parens1{\\tfrac d{dt}u_{z+ty}(x)v'\\big|_{t=0}}\\ ,\n\t\\]\n\tSince $\\tfrac d{dt}u_{z+ty}(x)v'\\big|_{t=0}=[x,y]v'$, we have \n\t\\[\n\t\td\\gamma_\\cdot(v)_z(y)=-\\gamma_z(\\tfrac d{dt}u_{z+ty}(x)v'\\big|_{t=0})=\\gamma_z([y,x]v')\\ .\n\t\\]\n\t\n\tMoreover, as operators on $S(\\mathfrak p)$, \n\t\\[\n\t\t[y,u_z(x)]=y[x,z]+y\\ad(x)-[x,z]y-\\ad(x)y=[y,x]\\ ,\n\t\\]\n\tand thus $yv=yu_z(x)v'\\equiv[y,x]v'$ modulo $\\ker\\gamma_z$. We conclude \n\t\\[\n\t\td\\gamma_\\cdot(v)_z(y)=\\gamma_z([y,x]v')=\\gamma_z(yv)=\\gamma_z(yv)-y\\gamma_z(v)\n\t\\]\n\tsince $\\gamma_z(v)=0$. \tHence, \n\t\\[\n\t\t[\\partial_yP(v;-)](z)=Q\\Parens1{d\\gamma_\\cdot(v)_z(y)+y\\gamma_z(v);z}=Q\\Parens1{\\gamma_z(yv);z}=P(yv;z)\\ .\n\t\\]\n\n\tNow let $y=[u,z]$ where $u\\in\\mathfrak k_0$. We may assume that $u\\perp\\mathfrak z_{\\mathfrak k_0}(z)$. Define $k_t=\\exp tu$. Then by the $K$-invariance of $P$, \n\t\\begin{align*}\n\t\t[\\partial_yP(v;-)](z)&=\\tfrac d{dt}P\\Parens1{v;\\Ad(k_t)(z)}\\big|_{t=0}\n\t\t=\\tfrac d{dt}P\\Parens1{\\Ad(k_t^{-1})(v);z}\\big|_{t=0}\\\\\n\t\t&=-P\\Parens1{\\ad(u)(v);z}=P(yv;z)-P\\Parens1{u_z(u)v;z}=P(yv;z)\n\t\\end{align*}\n\twhere in the last step, we have used \\thmref{Lem}{doubleprojection-trick}. \n\\end{proof}\n\n\\begin{proof}[\\protect{Proof of \\thmref{Th}{super-chevalley}}]\n\tThe restriction map is injective by \\thmref{Cor}{res-inj} and Chevalley's restriction theorem for $\\mathfrak g_0$. By the latter, the image lies in the set of $W$-invariants. Let $\\bar p\\in S(\\mathfrak a^*)$ be the restriction of $p\\in I(\\mathfrak p^*)$, and $P=\\phi(p)$. For any $d\\in S(\\mathfrak p)$, and $D\\in\\mathcal R_\\Sigma$ given by $D(z)=\\gamma_z(d)$, we have by \\thmref{Prop}{flat-radial}\n\t\\[\n\t\t(D\\bar p)(z)=(\\partial_{\\gamma_z(d)}\\bar P)(z)=P(\\gamma_z(d);z)=P(d;z)\\mathfa z\\in\\mathfrak a'\\ .\n\t\\]\n\tThe result is clearly polynomial in $z$, so $\\bar p\\in\\dom D$. This shows that the image of the restriction map lies in $I(\\mathfrak a^*)$. \n\t\n\tLet $r\\in I(\\mathfrak a^*)$. By Chevalley's restriction theorem, there exists a unique $q\\in I(\\mathfrak p_0^*)=S(\\mathfrak p_0^*)^K$ such that $Q(h)=R(h)$ for all $h\\in\\mathfrak a$. \n\t\n\tNext, recall that for $d\\in S(\\mathfrak p)$ and $z\\in\\mathfrak p_0'$: \n\t\\[\n\t\tP(d;z)=Q(\\gamma_z(d);z)\\ .\n\t\\]\n\tBy \\thmref{Lem}{sp0-linear}, $P\\in\\Hom[_{S(\\mathfrak p_0)}]0{S(\\mathfrak p),\\cplxs[\\mathfrak p_0]_{\\Pi_1}}^{\\mathfrak k}$. 
Hence, $P$ will define an element $p\\in I(\\mathfrak p^*)$ by virtue of the isomorphism $\\phi$, as soon as it is clear that, as a linear map $S(\\mathfrak p)\\to\\cplxs[\\mathfrak p_0]_{\\Pi_1}$, it takes its values in $\\cplxs[\\mathfrak p_0]$. \n\t\n\tWe only have to consider $z$ in the Zariski dense set $\\mathfrak p_0'$. The function $\\Pi_1(z)^k\\cdot P(d;z)$ depends polynomially on $z$, where we assume $d\\in S^{\\sle k,\\mathrm{tot}}(\\mathfrak p)$. To prove that $P$ has polynomial values, it will suffice (by the removable singularity theorem and the conjugacy of Cartan subspaces) to prove that $P(d;h)$ is bounded as $h\\in\\mathfrak a'=\\mathfrak a\\cap\\mathfrak p_0'$ approaches one of the hyperplanes $\\lambda^{-1}(0)$ where $\\lambda\\in\\Sigma_1^+$ is arbitrary. Since $r$ is $W$-invariant, $r-r_0$ (where $r_0$ is the constant term of $r$) vanishes on $\\lambda^{-1}(0)$ if a multiple of $\\lambda$ belongs to $\\Sigma_0^+$. Such a multiple could only be $\\pm\\lambda,\\pm2\\lambda$. Hence, it will suffice to consider $\\lambda\\in\\bar\\Sigma_1^+$. By definition, $2\\lambda\\not\\in\\Sigma$. \n\t\n\tConsider $P(d;h)$ as a map linear in $d$, and let $N_h=\\ker P(-;h)$. Let $d\\in S^{\\sle k,\\mathrm{tot}}(\\mathfrak p)$. Assume that $d=zd'$ where $z$ is defined by $x=y+z$, $y\\in\\mathfrak k$, $z\\in\\mathfrak p$, for some $x\\in\\mathfrak g^\\mu_{\\mathfrak a}$ and $\\mu\\in\\Sigma^+$, $\\mu\\neq\\lambda$. Then, modulo $N_h$, \n\t\\[\n\t\td=zd'\\equiv zd'+\\frac{u_h(y)d'}{\\mu(h)}=zd'+\\frac{[y,h]d'}{\\mu(h)}+\\frac{\\ad(y)(d')}{\\mu(h)}=\\frac{\\ad(y)(d')}{\\mu(h)}\\ .\n\t\\]\n\tThe root $\\mu$ is not proportional to $\\lambda$ and the total degree of $\\ad(y)(d')$ is strictly less than that of $d$. By induction, modulo $N_h$, \n\t\\[\n\t\td\\equiv\\frac{\\tilde d}{\\Pi_{\\mu\\in\\Sigma^+\\setminus\\lambda}\\,\\mu(h)^k}\n\t\\]\n\tfor some $\\tilde d$ which lies in the subalgebra of $S(\\mathfrak p)$ generated by $\\mathfrak a\\oplus\\mathfrak p_1^\\lambda$, and depends polynomially on $h$ and linearly on $d\\in S^{\\sle k,\\mathrm{tot}}(\\mathfrak p)$. \n\t\n\tHence, the problem of showing that $P(d;h)$ remains bounded as $h$ approaches $\\lambda^{-1}(0)$ is reduced to the case of $d\\in S(\\mathfrak a\\oplus\\mathfrak p_1^\\lambda)$. For $d\\in S(\\mathfrak p_1^\\lambda)$, the polynomiality of $P(d;-)$ immediately follows from the assumption on $r$. \nIf $d=d'd''$ where $d'\\in S(\\mathfrak a)$ and $d''\\in S(\\mathfrak p_1^\\lambda)$, then $P(d;z)=[\\partial(d')P(d'';-)](z)$ since $P$ is $S(\\mathfrak p_0)$-linear. But $P(d'';-)\\in\\cplxs[\\mathfrak p_0]$ and this space is $S(\\mathfrak p_0)$-invariant, so $P(d;-)\\in\\cplxs[\\mathfrak p_0]$. \n\n\tTherefore, there exists $p\\in I(\\mathfrak p^*)$ such that $P=\\phi(p)$. By its definition, it is clear that $p$ restricts to $r$, so we have proved the theorem.\n\\end{proof}\n\n\\subsection{Proof of Theorem (B)}\n\n\\begin{Par}[symplbasis]\n\tIn order to give a complete description of the image of the restriction map, we need to compute the radial parts $\\gamma_h(d)$ for $d\\in S(\\mathfrak p_1^\\lambda)$ and $h\\in\\mathfrak a'$ explicitly. First, let us choose bases of the spaces $S(\\mathfrak p_1^\\lambda)$. \n\n\tLet $\\lambda\\in\\Sigma^+_1$. By \\thmref{Prop}{res-superroot}~(v) we may choose $b^\\theta$-symplectic bases $y_i,\\tilde y_i\\in\\mathfrak k_1^\\lambda$, $z_i,\\tilde z_i\\in\\mathfrak p_1^\\lambda$, $i=1,\\dotsc,\\frac12m_{1,\\lambda}$, $m_{1,\\lambda}=\\dim\\mathfrak g_{1,\\mathfrak a}^\\lambda$. 
\\emph{I.e.},\n\t\\[\n\t\tb(y_i,\\tilde y_j)=b(\\tilde z_j,z_i)=\\delta_{ij}\\,,\\,b(y_i,y_j)=b(\\tilde y_i,\\tilde y_j)=b(z_i,z_j)=b(\\tilde z_i,\\tilde z_j)=0\\ .\n\t\\]\n\tWe may impose the conditions $x_i=y_i+z_i,\\tilde x_i=\\tilde y_i+\\tilde z_i\\in\\mathfrak g_{1,\\mathfrak a}^\\lambda$, so that \n\t\\[\t\n\t\t[h,y_i]=\\lambda(h)z_i\\,,\\,[h,\\tilde y_i]=\\lambda(h)\\tilde z_i\\,,\\,[h,z_i]=\\lambda(h)y_i\\,,\\,[h,\\tilde z_i]=\\lambda(h)\\tilde y_i\n\t\\]\n\tfor all $h\\in\\mathfrak a$. (Compare \\thmref{Prop}{res-superroot}~(iv).) \n\t\n\tGiven partitions $I=(i_1<\\dotsm<i_k)$ and $J=(j_1<\\dotsm<j_\\ell)$, we write $z_I=z_{i_1}\\dotsm z_{i_k}$ and $\\tilde z_J=\\tilde z_{j_1}\\dotsm\\tilde z_{j_\\ell}$. Assume that $k>0$ or $\\ell>0$, and write $I=(i<I')$ if $k>0$, $J=(j<J')$ if $\\ell>0$. We claim that modulo $\\ker\\gamma_h$,\n\t\\[\n\t\tz_I\\tilde z_JA_\\lambda^m\\equiv\\begin{cases}0&k\\neq\\ell\\text{ or }i\\neq j\\ ,\\\\\n\t\t(-1)^kz_{I'}\\tilde z_{J'}\\textstyle\\sum_{n=0}^m(-1)^n\\tfrac{\\lambda(A_\\lambda)^n}{\\lambda(h)^{n+1}}(m)_nA_\\lambda^{m+1-n}&i=j\\ .\\end{cases}\n\t\\]\n\tWe argue by induction on $\\max(k,\\ell)$. There will also be a sub-induction on the integer $m$. First, we assume that $k>0$, and compute \n\t\\[\n\t\tz_I\\tilde z_JA_\\lambda^m\\equiv z_iz_{I'}\\tilde z_JA_\\lambda^m+\\tfrac1{\\lambda(h)}u_h(y_i)(z_{I'}\\tilde z_JA_\\lambda^m)=\\tfrac1{\\lambda(h)}\\ad(y_i)(z_{I'}\\tilde z_JA_\\lambda^m)\\ .\n\t\\]\n\t\n\tFor any $q$, we have \n\t\\[\n\t\tb\\Parens1{[y_i,z_q],h'}=-\\lambda(h')b(y_i,y_q)=0\\mathfa h'\\in\\mathfrak a\\ ,\n\t\\]\n\tso $b([y_i,z_q],\\mathfrak a)=0$, and $[y_i,z_q]\\in\\mathfrak p_0$. Hence $[y_i,z_q]\\in\\mathfrak g_{0,\\mathfrak a}^{2\\lambda}\\oplus\\mathfrak g_{0,\\mathfrak a}^{-2\\lambda}=0$. Similarly, for $i\\neq q$, we have $[y_i,\\tilde z_q]=0$. Now, assume that $i\\sle J$. Then \n\t\\begin{align*}\n\tz_I\\tilde z_JA_\\lambda^m&\\equiv(-1)^{k-1}\\tfrac1{\\lambda(h)}z_{I'}\\ad(y_i)(\\tilde z_JA_\\lambda^m)\\\\\n\t&=(-1)^{k-1}\\tfrac1{\\lambda(h)}[y_i,\\tilde z_j]z_{I'}\\tilde z_{J'}A_\\lambda^m-m\\tfrac{\\lambda(A_\\lambda)}{\\lambda(h)}z_I\\tilde z_JA_\\lambda^{m-1}\\tag{$*$}\n\t\\end{align*}\n\tsince $[y_i,A_\\lambda^m]=-m\\lambda(A_\\lambda)z_iA_\\lambda^{m-1}$. As it stands, equation ($*$) only holds for $\\ell>0$, but if we take the first summand to be $0$ if $\\ell=0$, then it is also true in the latter case.\n\t\n\tIf $\\ell>0$ and $i<j$, then $[y_i,\\tilde z_j]=0$, and by the sub-induction on $m$, the right hand side of ($*$) is equivalent to $0$ modulo $\\ker\\gamma_h$; the same holds if $\\ell=0$. Hence, if $k>0$, then $i\\sle J$ implies $\\ell>0$ and $i=j$ for a non-zero contribution.\n\t\t\n\tIf $\\ell>0$ and $j\\sle I$, then we observe that $z_I\\tilde z_J=(-1)^{k\\ell}\\tilde z_Jz_I$. Formally exchanging the letters $z_s$ and $\\tilde z_s$ in the above equations, and reordering all terms in the appropriate fashion, we obtain\n\t\\[\\tag{$**$}\n\t\tz_I\\tilde z_JA_\\lambda^m\\equiv(-1)^k\\tfrac1{\\lambda(h)}[\\tilde y_j,z_i]z_{I'}\\tilde z_{J'}A_\\lambda^m-m\\tfrac{\\lambda(A_\\lambda)}{\\lambda(h)}z_I\\tilde z_JA_\\lambda^{m-1}\\ ,\n\t \\]\n\t because $k\\ell+\\ell-1+(k-1)(\\ell-1)=k(2\\ell-1)\\equiv k\\ (2)$. Arguing as above, the right hand side of equation ($**$) is equivalent to $0$ modulo $\\ker\\gamma_h$ if $k=0$ or $j<i$. Thus, $z_I\\tilde z_JA_\\lambda^m\\equiv0$ modulo $\\ker\\gamma_h$ unless $k,\\ell>0$ and $i=j$. \n\t \n\t We consider the case of $k,\\ell>0$ and $i=j$. 
Since $[y_i,\\tilde z_i]-[\\tilde y_i,z_i]=-2A_\\lambda$ by standard arguments, we find, by adding equations ($*$) and ($**$), \n\t\\[\n\t\tz_I\\tilde z_JA_\\lambda^m\\equiv(-1)^k\\tfrac1{\\lambda(h)}z_{I'}\\tilde z_{J'}A_\\lambda^{m+1}-m\\tfrac{\\lambda(A_\\lambda)}{\\lambda(h)}z_I\\tilde z_JA_\\lambda^{m-1}\\ .\n\t\\]\n\tWe may now apply this formula recursively to the second summand, to conclude\n\t\\[\n\tz_I\\tilde z_JA_\\lambda^m\\equiv(-1)^kz_{I'}\\tilde z_{J'}\\textstyle\\sum_{n=0}^m(-1)^n\\tfrac{\\lambda(A_\\lambda)^n}{\\lambda(h)^{n+1}}(m)_nA_\\lambda^{m+1-n}\\ .\n\t\\]\n\n\tBy induction on $\\max(k,\\ell)$, the right hand side belongs to $\\ker\\gamma_h$ unless $k=\\ell$. We have proved our claim, and thus, we arrive at the assertion of the lemma. \n\\end{proof}\n\n\\begin{Par}\t\n\tFix $\\lambda\\in\\bar\\Sigma_1^+$ and $h\\in\\mathfrak a'$. Let $I=(i_1<\\dotsm<i_k)$ and $J=(j_1<\\dotsm<j_\\ell)$ be partitions, let $\\zeta_i,\\tilde\\zeta_i$ denote the basis of $(\\mathfrak p_1^\\lambda)^*$ dual to the basis $z_i,\\tilde z_i$, and set $\\zeta_I=\\zeta_{i_1}\\dotsm\\zeta_{i_k}$, $\\tilde\\zeta_J=\\tilde\\zeta_{j_1}\\dotsm\\tilde\\zeta_{j_\\ell}$, and $q=\\frac12m_{1,\\lambda}$. Let $p_N\\in I(\\mathfrak p^*)$ denote an invariant element whose restriction is $\\bar p_N=\\lambda^N$. For $N>0$ and $h\\in\\mathfrak a'$, \n\t\\begin{align*}\n\t\t\\textstyle\\sum_{\\nu=0}^\\infty\\tfrac1{\\nu!}\\Cdual0{z_I\\tilde z_Jh^\\nu}{p_N}&=P(z_I\\tilde z_J;h)=(\\partial_{\\gamma_h(z_I\\tilde z_J)}\\lambda^N)(h)\\\\\n\t\t&=\\delta_{IJ}(-1)^{\\tfrac12k(k+1)}2^{-k}a_{Nk}\\lambda(h)^{N-2k}\n\t\\end{align*}\n\twhere\n\t\\[\n\t\ta_{Nk}=\\textstyle\\sum_{i=(k-N)_+}^{k-1}\\Parens1{-\\tfrac12}^i(N)_{k-i}\\frac{(k-1+i)!}{(k-1-i)!i!}\\ .\n\t\\]\n\tThus, \n\t\\[\n\t\tp_N=\\lambda^N+\\textstyle\\sum_{k=1}^{\\min(N,q)}(-1)^{\\tfrac12k(k+3)}2^{-k}a_{Nk}\\lambda^{N-2k}\\sum_{\\Abs0I=k}\\zeta_I\\tilde\\zeta_I\\ .\n\t\\]\n\t\n\tWhen $N=2$ and $k\\sge2$, then $a_{Nk}=0$ by \\thmref{Par}{evencoeffcase1} and \\thmref{Par}{evencoeffcase2}. On the other hand, $a_{21}=2$. Hence,\n\t\\[\n\t\tp_2=\\lambda^2+\\textstyle\\sum_{i=1}^q\\zeta_i\\tilde\\zeta_i\n\t\\]\n\tis the super-Laplacian, and \n\t\\[\n\t\tp_{2q+1}=\\lambda^{2q+1}+\\textstyle\\sum_{k=1}^q(-1)^{\\tfrac12k(k+3)}2^{-k}a_{2q+1,k}\\lambda^{2(q-k)+1}\\sum_{\\Abs0I=k}\\zeta_I\\tilde\\zeta_I\\ .\n\t\\]\n\tThese elements are clearly subject to the relation $p_2^{2q+1}=p_{2q+1}^2$. \n\t\n\tOne readily checks\n\t\\[\n\t\t\\ad^*(y_i)p_2=-\\lambda\\zeta_i+\\zeta_i\\lambda=0\\mathtxt\\AND\\ad^*(\\tilde y_i)p_2=-\\lambda\\tilde\\zeta_i+\\lambda\\tilde\\zeta_i=0\\ .\n\t\\]\n\tIn case $q=1$, one has $p_3=\\lambda^3+\\frac32\\lambda\\zeta_1\\tilde\\zeta_1$, and\n\t\\begin{align*}\n\t\t\\ad^*(y_1)p_3&=-\\tfrac32\\lambda^2\\zeta_1-\\tfrac32\\lambda\\zeta_1\\ad^*(y_1)\\tilde\\zeta_1=-\\tfrac32\\lambda^2\\zeta_1+\\tfrac32\\lambda\\zeta_1\\lambda=0\\ ,\\\\\n\t\t\\ad^*(\\tilde y_1)p_3&=-\\tfrac32\\lambda^2\\tilde\\zeta_1+\\tfrac32\\lambda\\ad^*(\\tilde y_1)(\\zeta_1)\\tilde\\zeta_1=-\\tfrac32\\lambda^2\\tilde\\zeta_1+\\tfrac32\\lambda^2\\tilde\\zeta_1=0\\ .\n\t\\end{align*}\n\t\n\tTo verify the $\\mathfrak k_0$-invariance, let \n\t\\[\n\t\tx=\\left(\\begin{Matrix}[0]00&0&0&0\\\\0&0&0&0\\\\0&0&A&B\\\\0&0&C&-A^t\\end{Matrix}\\right)\\in\\mathfrak k_0=\\mathfrak{sp}(2q,\\cplxs)\\ .\n\t\\]\n\tThen \n\t\\[\n\t\t\\ad^*(x)\\zeta_i=\\textstyle\\sum_{j=1}^q\\Parens1{A_{ji}\\zeta_j+C_{ji}\\tilde\\zeta_j}\\mathtxt\\AND\\ad^*(x)\\tilde\\zeta_i=\\sum_{j=1}^q\\Parens1{B_{ji}\\zeta_j+A_{ji}\\tilde\\zeta_j}\\ .\n\t\\]\n\tThis implies\n\t\\[\n\t\t\\ad^*(x)(\\zeta_i\\tilde\\zeta_i)=\\textstyle\\sum_{j\\neq i}\\Parens1{C_{ji}\\tilde\\zeta_j\\tilde\\zeta_i-B_{ji}\\zeta_i\\zeta_j}\\ .\n\t\\]\n\tSince $B=B^t$, $C=C^t$, we deduce $\\sum_{i=1}^q\\ad^*(x)(\\zeta_i\\tilde\\zeta_i)=0$. 
Since $\\mathfrak a=\\mathfrak z(\\mathfrak g_0)$ and thus $\\ad^*(\\mathfrak k_0)\\lambda=0$, this implies that $p_2$ (for general $q$) and $p_3$ (for $q=1$) are $\\mathfrak k$-invariant. \n\\end{Par}\n\n\\bibliographystyle{alpha}%\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nGravitational wave observations from the merger of compact objects like black holes and neutron stars\nprovide a new powerful tool to learn about the regime of strong gravity and thus about\nGeneral Relativity (GR) and alternative theories of gravity\n\\cite{TheLIGOScientific:2016src,Abbott:2018lct,LIGOScientific:2019fpa,Will:2014kxa,Berti:2015itd,Barack:2018yly}.\nAmong the plethora of gravity theories, in particular theories with higher curvature corrections, as motivated\nby quantum gravity considerations, have received much attention in recent years.\nAn attractive class of such theories is Einstein-scalar-Gauss-Bonnet (EsGB) theory,\nsince they are ghost-free and lead to second order equations of motion\n\\cite{Horndeski:1974wa,Charmousis:2011bf,Kobayashi:2011nu}.\n\nThe coupling function $f(\\varphi)$ of the scalar field to the Gauss-Bonnet (GB) term\nhas a decisive influence on the properties of the resulting EsGB black holes and neutron stars.\nEffective low energy string theories feature exponential coupling functions,\nwith the scalar field representing the dilaton.\nThese theories possess black holes with scalar hair,\nwhose properties have been investigated in great detail\n\\cite{Kanti:1995vq,Torii:1996yi,Guo:2008hf,Pani:2009wy,Pani:2011gy,Kleihaus:2011tg,Ayzenberg:2013wua,Ayzenberg:2014aka,Maselli:2015tta,Kleihaus:2014lba,Kleihaus:2015aje,Blazquez-Salcedo:2016enn,Cunha:2016wzk,Zhang:2017unx,Blazquez-Salcedo:2017txk,Konoplya:2019hml,Zinhailo:2019rwd}.\nThe GB term allows to circumvent the no-hair theorems of GR\n(see e.g. 
\\cite{Chrusciel:2012jk,Herdeiro:2015waa}),\nbut these dilatonic theories do not admit the black hole solutions of GR,\ni.e., the Schwarzschild and Kerr solutions.\n\nAn interesting, more recent development has led to the insight\nthat for a whole class of coupling functions $f(\\varphi)$ the black hole solutions\nof GR remain solutions of the respective EsGB theory,\nbut become unstable at certain\nvalues of the GB coupling, where branches of scalarized black holes emerge\n\\cite{Doneva:2017bvd,Antoniou:2017acq,Silva:2017uqg}.\nThis phenomenon is referred to as curvature-induced spontaneous scalarization,\nand is akin to the well-known matter-induced spontaneous scalarization\nof neutron stars in scalar-tensor theories \\cite{Damour:1993hw}.\nBy now, a variety of coupling functions has been employed to\nstudy such spontaneously scalarized black holes and their properties\n\\cite{Doneva:2017bvd,Antoniou:2017acq,Silva:2017uqg,Antoniou:2017hxj,Blazquez-Salcedo:2018jnn,Doneva:2018rou,Minamitsuji:2018xde,Silva:2018qhn,Brihaye:2018grv,Doneva:2019vuh,Myung:2019wvb,Cunha:2019dwb,Macedo:2019sem,Hod:2019pmb,Collodel:2019kkx,Bakopoulos:2020dfg}.\n\nA central question concerning these scalarized black holes is of course their stability.\nIf physically relevant, these black holes should be stable, at least on astrophysical timescales.\nA coupling function leading to potentially stable spontaneously scalarized black holes\nfor a wide range of parameters has been proposed in \\cite{Doneva:2017bvd}.\nFor a fixed value of the GB coupling constant,\nthere is a critical value $r_{\\rm B}$ of the horizon size of the Schwarzschild black hole,\nwhere the fundamental branch of spontaneously scalarized black holes\nemerges and continues to exist for all $r_{\\rm H} < r_{\\rm B}$.\nThe radial perturbations of these black holes have been studied before,\nshowing that the fundamental branch is radially stable,\nbut also that the radial perturbation equations lose hyperbolicity\nfor small black holes, remaining hyperbolic only for $r_{\\rm H} > r_{\\rm S1}$,\nfor fixed coupling.\nLikewise, the axial perturbation equations lose hyperbolicity\nat a second critical value $r_{\\rm S2}$ of the horizon radius.\nWithin the interval $r_{\\rm S2} < r_{\\rm H} < r_{\\rm B}$,\nhowever, the black holes are mode stable with respect to axial perturbations.\n\nIn this paper we complete the analysis of linear mode stability\nby considering also the non-radial polar modes.\nThe quadrupole modes now come in two kinds, the scalar-led modes\nand the gravitational-led modes. The names indicate that in the\nlimit of vanishing background scalar field, the modes correspond\nto the Schwarzschild modes, which are purely scalar\nand purely gravitational.\n
For a finite background scalar field, of course\nmixing between these channels arises.\nThe presence of the scalar background field then destroys the\nisospectrality present for Schwarzschild black holes.\nIn addition to the quadrupole modes, the scalarized black holes\nalso possess dipole modes and scalar modes, which involve both\nthe scalar field and the gravitational field.\nWe will see that all of these modes are stable\nin the relevant range $r_{\\rm S2} < r_{\\rm H} < r_{\\rm B}$.\n\nThis paper is organized as follows.\nIn section II we recall the action and the background solutions.\nIn section III we discuss the equations for the polar perturbations,\nincluding the asymptotic behavior and the numerical method.\nWe present our numerical results for the polar modes of the\nspontaneously scalarized black holes \nin section IV, and we conclude in section V.\n\n\n\n\\section{Action and background}\n\nThe action in Einstein-scalar-Gauss-Bonnet theory is given by\n\\begin{eqnarray}\nS=&&\\frac{1}{16\\pi}\\int d^4x \\sqrt{-g}\n\\Big[R - 2\\nabla_\\mu \\varphi \\nabla^\\mu \\varphi\n+ \\lambda^2 f(\\varphi){\\cal R}^2_{GB} \\Big] \\ ,\\label{eq:quadratic}\n\\end{eqnarray}\nwhere the spacetime metric is $g_{\\mu\\nu}$ with Ricci scalar $R$,\n$\\varphi$ is the scalar field with coupling function $f(\\varphi)$, and $\\lambda$ is the GB coupling constant\nwith the dimension of length.\nThe GB invariant ${\\cal R}^2_{GB}$ is defined as\n${\\cal R}^2_{GB}=R^2 - 4 R_{\\mu\\nu} R^{\\mu\\nu}\n+ R_{\\mu\\nu\\alpha\\beta}R^{\\mu\\nu\\alpha\\beta}$\nwith Ricci tensor $R_{\\mu\\nu}$\nand Riemann tensor $R_{\\mu\\nu\\alpha\\beta}$.\n\nThe field equations that result from this action are\n\\begin{eqnarray}\\label{FE1}\n&&R_{\\mu\\nu}- \\frac{1}{2}R g_{\\mu\\nu} + \\Gamma_{\\mu\\nu}= 2\\nabla_\\mu\\varphi\\nabla_\\nu\\varphi - g_{\\mu\\nu} \\nabla_\\alpha\\varphi \\nabla^\\alpha\\varphi \\ ,\\\\\n&&\\nabla_\\alpha\\nabla^\\alpha\\varphi= - \\frac{\\lambda^2}{4} \\frac{df(\\varphi)}{d\\varphi} {\\cal R}^2_{GB} \\ , \\label{FE2}\n\\end{eqnarray}\nwhere\n\\begin{eqnarray}\n\\Gamma_{\\mu\\nu}&=& - R(\\nabla_\\mu\\Psi_{\\nu} + \\nabla_\\nu\\Psi_{\\mu} ) - 4\\nabla^\\alpha\\Psi_{\\alpha}\\left(R_{\\mu\\nu} - \\frac{1}{2}R g_{\\mu\\nu}\\right) +\n4R_{\\mu\\alpha}\\nabla^\\alpha\\Psi_{\\nu} + 4R_{\\nu\\alpha}\\nabla^\\alpha\\Psi_{\\mu} \\nonumber \\\\\n&& - 4 g_{\\mu\\nu} R^{\\alpha\\beta}\\nabla_\\alpha\\Psi_{\\beta}\n+ \\, 4 R^{\\beta}_{\\;\\mu\\alpha\\nu}\\nabla^\\alpha\\Psi_{\\beta} \\ , \\\\\n\\Psi_{\\mu}&=& \\lambda^2 \\frac{df(\\varphi)}{d\\varphi}\\nabla_\\mu\\varphi \\ .\n\\end{eqnarray}\n\nIn the present paper we will use the coupling function\nintroduced in \\cite{Doneva:2017bvd}\n\\begin{equation} \\label{eq:coupling_function}\nf(\\varphi)= \\frac{1}{12} \\left(1- e^{-6\\varphi^2}\\right) \\ .\n\\end{equation}\nNote that the coupling function $f(\\varphi)$ satisfies\nthe conditions\n$\\frac{df}{d\\varphi}(0)=0$ and $b^2=\\frac{d^2f}{d\\varphi^2}(0)>0$,\nand hence we have curvature-induced spontaneous scalarization\nof black holes in the theory,\nwhen we fix the cosmological value of the scalar field to zero,\n$ \\varphi_{\\infty}=0$\n\\cite{Doneva:2017bvd,Antoniou:2017acq,Silva:2017uqg}.\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=0.34\\linewidth,angle=-90]{plot_M_vs_Ah_for_polar}\n\t\\includegraphics[width=0.34\\linewidth,angle=-90]{plot_M_vs_Qd_for_polar}\\\\[2ex]\n\t\\caption{(\\textit{left}) Scaled horizon area $A_{\\rm H}\/\\lambda^2$\nvs. 
scaled total mass $M\/\\lambda$, and\n(\\textit{right}) Scaled scalar charge $Q_{\\rm D}\/\\lambda$ vs. scaled total mass\nfor {the fundamental branch of EsGB black holes }(blue and red) and the Schwarzschild black holes (grey).\nThe bifurcation point $r_{\\rm B}$ is marked in blue,\nthe points $r_{\\rm S1}$ and $r_{\\rm S2}$, where hyperbolicity is lost\nfor the radial and axial perturbation equations, are marked in green and red, respectively.}\n\t\\label{fig:static}\n\\end{figure}\n\n\nSpherically symmetric solutions\nof the field equations can be obtained with the Ansatz for the metric\n\\begin{eqnarray}\\label{eq:metric_BG}\nds^2= - f(r)dt^2 + \\frac{1}{1-\\frac{2m(r)}{r}} dr^2\n+ r^2 (d\\theta^2 + \\sin^2\\theta d\\phi^2 ) \\ ,\n\\end{eqnarray}\nwhere the metric functions $f(r)$ and $m(r)$ depend only on the radial coordinate,\nand the scalar field $\\varphi(r)$ is likewise only a function of $r$.\nBy solving the set of field equations for these functions\nsubject to the conditions of asymptotic flatness and regularity at and outside the horizon,\nthe domain of existence of spontaneously scalarized black holes solutions has been\nmapped out in \\cite{Doneva:2017bvd}.\nIn fact, one can work with $\\lambda=1$ without loss of generality,\nsince all dimensionful quantities can be rescaled with respect to $\\lambda$.\nThe black holes can then be parametrized by the value\nof the horizon radius $r_{\\rm H}$.\n\nHere we focus only on the fundamental branch of the scalarized black holes,\nsince this should be the physically most relevant branch of solutions.\nThis branch is shown together with the Schwarzschild branch\nin Fig.~\\ref{fig:static},\nwhere the scaled horizon area $A_{\\rm H}\/\\lambda^2$ (left)\nand the scaled scalar charge $Q_{\\rm D}\/\\lambda$ (right) are exhibited\nas functions of the scaled mass $M\/\\lambda$.\nNote, that the scalar charge $Q_{\\rm D}$ is defined as the coefficient of the dominant\nterm in the asymptotic expansion of the scalar field.\nThe fundamental branch bifurcates from the Schwarzschild branch\nat the horizon radius $r_{\\rm B}=1.173944$ and continues to exist for all\nradii $r_{\\rm H}