diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzmaef" "b/data_all_eng_slimpj/shuffled/split2/finalzzmaef" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzmaef" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nEmpirical software engineering aims at studying and evaluating how people engineer software with the help of empirical research methods. This paper focuses on the application of one type of empirical methods, namely statistical causal inference (see section \\ref{sec:def}). Such methods have their roots in a number of applied fields (from AI to economics) and aim to provide a framework for making valid inferences about causal effects based on interventional or observational data. More specifically, we focus on statistical causal inference methods that use graphical models as developed by Pearl and colleagues \\cite{pearl2016causal,pearl2019book} and the potential outcomes framework from Neyman \\cite{neyman1990application}, Rubin \\cite{rubin1974estimating}, and Imbens \\cite{imbens2004nonparametric} (see also the review \\cite{yao2021survey}). \nOur goal is to understand whether, how and by whom these methods are applied in software engineering (see section \\ref{sec:methodogy} for the detailed research questions). We hope to provide an overview of the field that helps evaluating the potential and limitations of causal inference methods for software engineering. For that, we performed a literature survey following the guidelines from Kitchenham and Charters \\cite{kitchenham_guidelines_2007}, and Wohlin \\cite{wohlin2014guidelines}. We analyzed a total of 32 papers. To the best of our knowledge, this is the first time such a review is done.\n\nThis paper is structured as follows. Section \\ref{sec:def} provides a short introduction to statistical causal inference methodology. Section \\ref{sec:related_work} provides a short analysis of the related work. Section \\ref{sec:methodogy} presents our search methodology, section \\ref{sec:analysis} our analysis and section \\ref{sec:answers} our results. We discuss and conclude our work in both sections \\ref{sec:discussion} and \\ref{sec:conclusion}.\n\n\n\\section{Definitions}\\label{sec:def}\n\\subsection{Statistical Causal Inference}\nStatistical causal inference focuses on estimating actual causal effects of an action (a treatment $T$) on a given observed system (an outcome $Y$) from data \\cite{pearl2016causal,hernan2020whatif,yao2021survey}. In general, the gold standard for estimating a causal effect is to perform randomized controlled trials in which the potential confounders\\footnote{A confounder is a variable that both have an impact on the treatment and the outcome of interest.} are controlled for, through randomization, so that their effects are balanced between the treated and the untreated groups and that only the treatment differs between groups. However, when this is not feasible (e.g., for practicable or ethical reasons), causal effects have to be estimated from observational data (i.e., data that was not generated during randomized controlled trials). A major challenge with observational data, assuming that all relevant variables have been captured, is that it can contain correlations induced by confounding factors that can bias the estimation of causal effects. 
For this reason, researchers in the field of statistical causal inference have developed methods to reduce the impact of confounding factors and separate spurious correlations from causal effects \\cite{cinelli2022crashcourse}. One such approach is the use of graphical causal models together with the do-calculus from Pearl et al. \\cite{pearl2016causal,pearl2019book}.\n\n\\subsection{Statistical causal inference methodology} \nThe methodological process behind statistical causal inference can be described in three major steps: modeling, identification, and estimation\\footnote{Pearl and Mackenzie introduced a nine-step process (see Figure 1 in \\cite{pearl2019book}) and Sharma and Kiciman proposed a four-step process in \\cite{sharma2020dowhy}. Both revolve around the three steps mentioned above but introduce more details and testing methods that are out of this paper's scope. For the sake of simplicity, we only introduce the three steps of modeling, identification, and estimation.}. The first step (modeling) consists in making causal assumptions explicit through the use of a graphical causal model (also called a directed acyclic graph (DAG) or causal graph, see section \\ref{ref:causal_graph}). This causal model can either be obtained through domain expertise or extracted from data. In the latter case, the term \"causal structure discovery\" or \"causal structure learning\" is used (see reviews \\cite{glymour2019review,vowels2022survey}). The second step (identification) consists in finding whether a given causal effect can be estimated using the available data. It requires identifying structures such as confounders, mediators, or colliders in order to understand which variable(s) should be controlled for \\cite{cinelli2022crashcourse} and outputs an estimand of the causal effect under study. The main tool for causal identification is the do-calculus developed by Pearl and colleagues \\cite{pearl2012docalculus}. The third step (estimation) consists in effectively estimating the identified causal effect with the available data and the previously gained knowledge of which variables should be adjusted for. Note that along the way, there are methods for testing the validity of each artifact obtained, whether it is a model, an estimand or a final estimate \\cite{pearl2019book,sharma2020dowhy}.
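\n\nTo make these three steps concrete, the following minimal sketch shows how they can map onto code. It uses the DoWhy library \\cite{sharma2020dowhy} on simulated data; the variable names, the graph, and the true effect size are purely illustrative and do not come from any of the reviewed papers (we also assume a DoWhy version that accepts a DOT-format graph string).\n\n\\begin{verbatim}
import numpy as np
import pandas as pd
from dowhy import CausalModel

# Simulated observational data: X confounds both T and Y.
rng = np.random.default_rng(0)
x = rng.normal(size=1000)
t = (x + rng.normal(size=1000) > 0).astype(int)
y = 2.0 * t + 1.5 * x + rng.normal(size=1000)
data = pd.DataFrame({"X": x, "T": t, "Y": y})

# Step 1 (modeling): state the causal assumptions as a graph.
model = CausalModel(data=data, treatment="T", outcome="Y",
                    graph="digraph {X -> T; X -> Y; T -> Y;}")

# Step 2 (identification): derive an estimand; here the backdoor
# criterion indicates that X must be adjusted for.
estimand = model.identify_effect()

# Step 3 (estimation): estimate the effect while adjusting for X.
estimate = model.estimate_effect(
    estimand, method_name="backdoor.linear_regression")
print(estimate.value)  # should be close to the true effect of 2.0
\\end{verbatim}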
\n\n\n\\subsection{Graphical causal models}\\label{ref:causal_graph}\nGraphical causal models are a way to represent hypotheses about causality in a given system. They take the form of a graph consisting of nodes representing the relevant phenomena (potentially measurable) and links representing a direct causal effect between two phenomena. For example, if a treatment $T$ is supposed to have a causal effect on an outcome $Y$, then both $T$ and $Y$ are represented as nodes and a link goes from $T$ to $Y$. Phenomena (also called variables) can be either endogenous (they belong to the system under study) or exogenous (they are outside the system). Observed effects are usually represented by solid lines, while unobserved effects are represented by dashed lines. Figure \\ref{fig:graphical_causal_model} illustrates a simple example of a graphical causal model. Graphical models are interesting for two main reasons. First, they make hypotheses about the variables under study and their causal effects explicit. Second, they are used for identifying structures that lead to spurious correlations and need to be handled before estimation.\n\n\\begin{figure}[!htbp]\n \\centering\n \\begin{tikzpicture}\n \\node (ut) at (0,1) [circle,fill,inner sep=1.25pt,label=above:$U_T$] {};\n \\node (uy) at (2,1) [circle,fill,inner sep=1.25pt,label=above:$U_Y$] {};\n \\node (t) at (0,0) [circle,fill,inner sep=1.25pt,label=left:$T$] {};\n \\node (y) at (2,0) [circle,fill,inner sep=1.25pt,label=right:$Y$] {};\n \\path[->] (t) edge (y);\n \\path[->,dashed] (ut) edge (t);\n \\path[->,dashed] (uy) edge (y);\n \\end{tikzpicture}\n \\caption{Example of a graphical causal model representation where a treatment $T$ causes an outcome $Y$. $T$ and $Y$ are endogenous variables. $U_T$ and $U_Y$ are exogenous variables (e.g., external noise).}\n \\label{fig:graphical_causal_model}\n\\end{figure}\n\n\\subsection{Relation to Bayesian networks and structural equation models}\nBecause the focus of this paper is on causal inference methods using graphical causal models, it is necessary to sketch the distinction between these models and others like structural equation models and Bayesian networks.\n\nThe graphical models in statistical causal inference are indeed very similar to Bayesian networks, with the constraints that an arrow represents a direct causal effect and that the absence of an arrow implies that no direct causal effect exists between the variables. The main difference lies in the inference process. While Bayesian networks allow for probabilistic inference and prediction based on correlations, causal inference requires an additional step, namely identification. As stated before, the identification step consists in finding whether a given causal effect can be estimated using the available data and tells us which variables should be controlled for (see examples of good and bad controls in \\cite{cinelli2022crashcourse}).\n\nAnother type of graphical model is the structural equation model (SEM), which represents causes and effects as equations. SEMs sometimes use a specific symbol $:=$ to explicitly state that a variable is the cause of another one (for example, $Y:=f(X)$ means that $X$ is a cause of $Y$) and that one cannot simply reverse the equation (e.g., $Y:=f(X)$ does not mean $X:=f^{-1}(Y)$). These models provide a functional form of the effect (e.g., linear, non-linear). In comparison, graphical causal models do not provide any functional form of the effect (they are also called non-parametric).\n\n\n\\section{Related Work}\\label{sec:related_work}\nThis is not the first time that the application of a given analysis method in software engineering has been reviewed. The application of Bayesian networks in software engineering is relatively well established. For example, de Sousa et al. \\cite{de_sousa_20year_2022} performed a mapping study in which they analyzed 109 publications (from 1999 to 2018) focused on the application of Bayesian networks for software project management. Another mapping study was done by Misirli and Bener \\cite{misirli_mapping_2014} with a focus on software quality prediction. Other systematic literature reviews and comparative analyses have also been performed \\cite{del_aguila_bayesian_2016,tosun_systematic_2017,mendes2019using}. However, none of these papers mention statistical causal inference as defined in section \\ref{sec:def}. 
\n\nThe work from Wong \\cite{wong2020computational} focuses on challenges related to developing software for statistical causal inference (which the author calls \"computational causal inference\"). The focus is symmetrical to that of the current paper, in that Wong focuses on the application of software engineering for statistical causal inference, whereas we focus on the application of statistical causal inference for software engineering.\n\nTo the best of our knowledge, there is no review similar to ours. The closest we could find is a relatively extensive related work section in \\cite{clark_testing_2022} that references several papers using statistical causal inference in software testing but does not attempt to organize or analyze them.\n\n\\section{Methodology}\\label{sec:methodogy}\nOur primary goal in this paper was to identify applications of statistical causal inference methods (as defined in section \\ref{sec:def}) in software engineering and to understand how these methods are used. We refined this goal into the specific questions listed in table \\ref{tab:research-questions}. We mainly used a literature review to identify relevant papers and corresponding authors. Table \\ref{tab:information_extracted} lists the information we extracted from the papers.\n\n\\begin{table}[!hbtp]\n \\centering\n \\begin{tabular}{|c|p{0.8\\textwidth}|}\n \\hline\n \\textbf{Id} & \\textbf{Research Question} \\\\\n\t\t\\hline\n\t\t\\textbf{RQ1} & What are the current use cases of statistical causal inference in software engineering? \\\\\n \\hline\n \\textbf{RQ2} & How is the research community using statistical causal inference in software engineering structured?\\\\\n\t\t\\hline\n \\textbf{RQ3} & Which causal inference methods were used for each use case? \\\\\n \\hline\n \\textbf{RQ4} & Which data sets and libraries were used? \\\\\n \\hline\n \\textbf{RQ5} & How do authors evaluate their results? \\\\\n \\hline\n \\end{tabular}\n \\caption{Research Questions}\n \\label{tab:research-questions}\n\\end{table}\n\nThere are two main difficulties here. First, software engineering is a large field, and filtering relevant papers (i.e., deciding whether a use case belongs to software engineering or not) can be challenging. For this, we relied upon the categories of activities defined in the SWEBOK\\footnote{\\url{http:\/\/swebokwiki.org\/Main_Page}}. Second, statistical causal inference methods are relatively new and, in general, may be less well known than statistical data analysis methods. In addition, graphical causal models are close relatives of Bayesian networks, and methods such as the do-calculus could have been applied without direct reference to the generic term \"statistical causal inference\". \n\n\n\\begin{table}[!htbp]\n \\centering\n \\begin{tabular}{|c|p{0.8\\textwidth}|}\n \\hline\n \\textbf{Id} & \\textbf{Exclusion Criteria} \\\\\n \\hline\n \\textbf{EC1} & The document contains no reference to statistical causal inference (as defined in section \\ref{sec:def}). \\\\\n \\hline\n \\textbf{EC2} & The application is outside the scope of software engineering (as defined by the SWEBOK\\footnote{\\url{http:\/\/swebokwiki.org\/Main_Page}}). \\\\\n \\hline\n \\textbf{EC3} & The text presents a software library dedicated to causal inference. \\\\\n \\hline\n \\textbf{EC4} & There is no access to the article. \\\\\n \\hline\n \\textbf{EC5} & The article is not written in English. \\\\\n \\hline\n \\textbf{EC6} & The paper is a thesis. 
\\\\\n \\hline\n \\textbf{EC7} & The paper is not a primary study. \\\\\n \\hline\n \\end{tabular}\n \\caption{Exclusion criteria}\n \\label{tab:exclusion-criteria}\n\\end{table}\n\nTable \\ref{tab:exclusion-criteria} provides the details of the exclusion criteria we used. Note that we did not limit our literature search by date. However, as the do-calculus was only introduced around 1995 \\cite{pearl2012docalculus}, we did not expect to find any relevant paper before 2005 and would not have included any paper prior to 1995.\n\n\n\n\n\\subsection{Search Process}\nThe search process followed three main steps: 1) establishing a set of seed documents, 2) consolidating a search query, and 3) a snowballing phase.\n\n\\subsubsection{Seed documents}\nWe started our search with a relatively simple query in Scopus\\footnote{\\url{https:\/\/www.scopus.com}} completed with a snowballing phase (both backward and forward) in order to find a first set of seed papers. The following query: \\texttt{TITLE-ABS-KEY((\\{causal inference\\} OR \\{causal discovery\\} OR \\{structure learning\\}) AND (\\{software engineering\\}))} was performed on 26.04.2022 and resulted in 42 documents, among which 11 were judged relevant (after title and abstract screening). \n\nA forward and backward snowballing pass was done on these first 11 documents. This led to 19 new relevant papers, for a total of 30 seed papers (see the appendix for the detailed list).\n\nThese 30 articles were used, on the one hand, as seed articles (i.e., to test whether subsequent search queries would include them) and, on the other hand, to gather information about how researchers in the field of software engineering describe statistical causal inference and to find search terms that helped us identify more relevant articles.\n\n\\subsubsection{Search query}\nThe next phase consisted in creating a search query that would both include our seed papers and return new interesting papers. We created two search queries (see figure \\ref{fig:search_query}). The first query looks for papers mentioning both software engineering and statistical causal inference. It was structured in two parts: a first part focusing on the software engineering domain and a second part focusing on the topic of statistical causal inference. The second query looks for papers citing the work of Judea Pearl.\n\n\\paragraph{First query}\\label{par:first_query}\nAs stated above, the first query is structured in two parts. Each part contains relevant keywords for each topic of interest, and all keywords are separated by \\texttt{OR}. Both parts are separated by \\texttt{AND}.\n\nThe search for relevant query terms for both parts was done as follows. First, single software engineering terms were tested in conjunction with \"causal inference\" (i.e., adding \\texttt{AND \"causal inference\"} to the query). The terms were chosen both from the SWEBOK and from the seed papers. Then, we checked the number of papers returned by Scopus as well as their relevance to our research questions. The hypothesis is that relevant keywords might return a higher ratio of relevant papers and that the constraint \\texttt{AND \"causal inference\"} is generic enough to capture most of the relevant papers but focused enough to exclude many irrelevant ones.\n\nA similar procedure was done for determining relevant keywords related to statistical causal inference. 
The chosen terms were taken from the following reviews and textbooks \\cite{elwert2013graphical,pearl2016causal,brady2020introduction,hernan2020whatif,yao2021survey} and consolidated with a short overview of the vocabulary used in the field of statistical causal inference\\footnote{Here the authors performed a search in Scopus using the query \n\\texttt{(TITLE(\"causal inference\")) AND (\"graph*\")}, extracted the main authors (i.e., authors having more than 5 documents within this query), extracted related and relevant topics using SciVal \\url{https:\/\/scival.com\/} (Topic T.12463: \"Chain Graph; Artificial Intelligence; Conditional Independence\", Topic T.1910: \"Observational Studies; Causal Inference; Propensity Score\", and Topic T.22578: \"Principal Stratification; Causal Effect; Outcome\"), and finally extracted key phrases from these topics.}. Single causal inference terms were tested in conjunction with \"software engineering\" (i.e., adding \\texttt{AND \"software engineering\"} to the query).\n\nTable \\ref{tab:prep-query} gives the details of the queries tested, together with the number of hits and of relevant papers. The final query was built using the most encompassing terms (i.e., terms that provided more hits and more relevant papers but also included the relevant papers from other queries). The final query is shown in figure \\ref{fig:search_query}.\n\n\\begin{table}[!htbp]\n \\centering\n \\begin{tabular}{|p{0.35\\textwidth}|p{0.35\\textwidth}|p{0.1\\textwidth}|p{0.1\\textwidth}|}\n \\hline\n Software Engineering Term (\\texttt{SE TERM}) & Causal Inference Term (\\texttt{CI TERM}) & Number of results & Number of relevant papers \\\\\n \\hline\n \"software engineering\" & \"causal inference\" & 20 & 11 \\\\\n \\hline\n \"software development\" & \"causal inference\" & 4 & 2 \\\\\n \\hline\n \"software test*\" & \"causal inference\" & 13 & 8 \\\\\n \\hline\n \"software design\" & \"causal inference\" & 5 & 3 \\\\\n \\hline\n \"software architecture\" & \"causal inference\" & 1 & 1 \\\\\n \\hline\n \"software *bug*\" & \"causal inference\" & 3 & 2 \\\\\n \\hline\n \"software requirement\" & \"causal inference\" & 0 & 0 \\\\\n \\hline\n \"software quality\" & \"causal inference\" & 1 & 1 \\\\\n \\hline\n \"software construct*\" & \"causal inference\" & 0 & 0 \\\\\n \\hline\n \"technical debt\" & \"causal inference\" & 0 & 0 \\\\\n \\hline\n \"fault local*\" & \"causal inference\" & 19 & 17 \\\\\n \\hline\n \"config* manag*\" & \"causal inference\" & 0 & 0 \\\\\n \\hline\n \"software process\" & \"causal inference\" & 1 & 0 \\\\\n \\hline\n \"dev-ops\" OR \"devops\" OR \"mlops\" OR \"ml-ops\" OR \"aiops\" & \"causal inference\" & 1 & 1 \\\\\n \\hline\n \\hline\n \"software engineering\" & \"do calculus\" & 1 & 0 \\\\\n \\hline\n \"software engineering\" & \"structural causal model\" & 2 & 0 \\\\\n \\hline\n \"software engineering\" & \"causal graph\" & 8 & 0 \\\\\n \\hline\n \"software engineering\" & \"backdoor criteri*\" OR \"frontdoor criteri*\" & 0 & 0 \\\\\n \\hline\n \"software engineering\" & \"potential outcome framework\" & 0 & 0 \\\\\n \\hline\n \"software engineering\" & \"propensity score\" & 10 & 5 \\\\\n \\hline\n \"software engineering\" & \"causal identification\" & 0 & 0 \\\\\n \\hline\n \"software engineering\" & \"causal estimation\" & 1 & 0 \\\\\n \\hline\n \"software engineering\" & \"counterfact*\" & 18 & 9 \\\\\n \\hline\n \"software engineering\" & \"mediation analysis\" & 2 & 0 \\\\\n \\hline\n \"software engineering\" & \"difference in difference\" & 2 & 0 \\\\\n 
\\hline\n \"software engineering\" & \"instrument* variable\" & 4 & 0 \\\\\n \\hline\n \"software engineering\" & \"inverse probability\" OR \"probability weigh*\" & 5 & 2 \\\\\n \\hline\n \"software engineering\" & \"treatment effect\" & 5 & 1 \\\\\n \\hline\n \\end{tabular}\n \\caption{Queries used to determine the most relevant keywords. Each query was performed in Scopus on the 2nd of August 2022, targeting only Title, Abstract and Keywords (i.e., using \\texttt{TITLE-ABS-KEY ()}). Each query takes the form \\texttt{TITLE-ABS-KEY((SE TERM) AND (CI TERM))}.}\n \\label{tab:prep-query}\n\\end{table}\n\n\\paragraph{Second query}\\label{par:second_query}\nThe second query targeted papers citing the work from Judea Pearl, having been published in computer science, and mentioning \"software engineering\", \"causal inference\" and \"graph\" in their text. The query is also shown in Figure \\ref{fig:search_query}.\n\n\\begin{figure}[!htbp]\n \\textbf{First query:}\\\\\n \\begin{center}\n \\texttt{\n TITLE-ABS-KEY(\n (\"software engineering\" OR \"software test*\" OR \"software quality\" OR \"fault local*\" OR \"AIOps\") \n AND\n (\"causal inference\" OR \"propensity score\" OR \"counterfact*\" OR \"inverse probability\" OR \"probability weigh*\" )) \n }\n \\end{center}\n ~\\\\\n \\textbf{Second query:}\\\\\n \\begin{center}\n \\texttt{\n REFAUTH ( \"Pearl\" ) AND \"software engineering\" AND \"causal inference\" AND \"graph\" AND ( LIMIT-TO ( SUBJAREA , \"COMP\" ) ) \n }\n \\end{center}\n \\caption{Search queries}\n \\label{fig:search_query}\n\\end{figure}\n\n\\subsubsection{Snowballing phase}\nThese two queries returned both 86 and 100 papers (as of 2022-11-15), out of it a total of distinct 47 papers were judged relevant based on title and abstract screening. These papers were used as seed for snowballing. Backward snowballing was done using Scopus directly. Forward snowballing was done using Google Scholar. Relevance of the papers was judged based first upon title and abstract screening. We performed two snowballing phases. First, backward and forward snowballing was done on the first set of 47 papers. This led to 29 new papers (backward: 8, forward: 21). Second, backward and forward snowballing was done on this novel set of 29 papers. The snowballing phases allowed us to collect 93 potentially relevant papers.\n\nAfter full read, we decided to exclude 22 papers. 17 papers were judged out of scope and as they were not using statistical causal inference as defined in section \\ref{sec:def} (EC1). One paper was not written in English (EC5), one paper was describing a software library (EC3), and there were two papers that we didn't have access to (EC4). One paper \\cite{masri2015automated} was a secondary study (a review) (EC7). This led to a total of 71 papers that we further analyzed.\n\n\\subsection{Extracted information}\nIn order to answer our research questions (see the full list in table \\ref{tab:research-questions}), we first extracted demographics information like authors name, affiliation and type of publication. This helps learning about the structure of the research community using statistical causal inference in software engineering (RQ2). We extracted information about the software engineering use cases (RQ1). To answer RQ3 and RQ4, we extracted information about the statistical causal inference methods, the data sets and the libraries used. Finally we extracted information about the evaluation of the results (RQ5). 
Table \\ref{tab:information_extracted} provides an overview of the information collected. \n\n\\begin{table}[!htbp]\n \\centering\n \\begin{tabular}{|p{0.3\\textwidth}|p{0.6\\textwidth}|}\n \\hline\n \\textbf{Information extracted} & \\textbf{Description} \\\\\n \\hline\n Author demographics & Author names and affiliations. \\\\\n \\hline\n Type of paper & Journal, conference paper, report, preprint, etc. \\\\\n \\hline\n Software engineering use case & E.g., fault detection, effort estimation, etc. \\\\\n\t \\hline\n Causal analysis task done and corresponding method name & Causal discovery, identification, estimation.\\\\\n \\hline \n Data sets & Name and URL.\\\\\n \\hline \n Libraries & Name and URL.\\\\\n \\hline\n Evaluation methods and metrics & Corresponding name and, when available, formal description. \\\\\n \\hline\n \\end{tabular}\n \\caption{Information extracted}\n \\label{tab:information_extracted}\n\\end{table}\n\n\\subsection{Papers categorization and final selection}\nDuring the information extraction phase, we noticed that the articles naturally fell into four distinct categories. As these categories are relatively different, we felt it necessary to present them separately and to explain our selection choices.\n\nThe first category (C1) contains papers that focus only on the first step of causal inference (modeling) and do not mention the next two steps (identification and estimation). This category contains 22 papers. Table \\ref{tab:category_c1} gives the details of the publications categorized in C1. Note that in this category, many papers rely upon the terms \"causal analysis\" or \"causal inference\" when talking about causal structure discovery.\n\n\\begin{table}[!htbp]\n \\footnotesize\n \\centering\n \\begin{tabular}{|p{0.95\\textwidth}|c|}\n \\hline\n \\textbf{Title (alphabetical order)} & \\textbf{Ref.} \\\\\n \\hline\n A quantitative causal analysis for network log data & \\cite{jarry_quantitative_2021} \\\\\n \\hline\n An influence-based approach for root cause alarm discovery in telecom networks & \\cite{zhang_influence-based_2021} \\\\\n \\hline\n Causal analysis for performance modeling of computer programs & \\cite{lemeire_causal_2007} \\\\\n \\hline\n Causal analysis of network logs with layered protocols and topology knowledge & \\cite{kobayashi_causal_2019} \\\\\n \\hline\n Causal Inference Techniques for Microservice Performance Diagnosis: Evaluation and Guiding Recommendations & \\cite{wu_causal_2021} \\\\\n \\hline\n Causal modeling, discovery, and inference for software engineering & \\cite{kazman_causal_2017} \\\\\n \\hline\n Causal program slicing & \\cite{gore_causal_2009} \\\\\n \\hline\n Comparative causal analysis of network log data in two Large ISPs & \\cite{kobayashi_comparative_2022} \\\\\n \\hline\n Detecting causal structure on cloud application microservices using Granger causality models & \\cite{wang_detecting_2021} \\\\\n \\hline\n Discovering and utilising expert knowledge from security event logs & \\cite{khan_discovering_2019}\\\\\n \\hline\n Discovering many-to-one causality in software project risk analysis & \\cite{chen_discovering_2014} \\\\\n \\hline\n Evaluation of causal inference techniques for AIOps & \\cite{arya_evaluation_2021} \\\\\n \\hline\n FALCON: differential fault localization for SDN control plane & \\cite{yu_falcondifferential_2019} \\\\\n \\hline\n Further causal search analyses with UCC's effort estimation data & \\cite{hira_further_2018} \\\\\n \\hline\n Localization of operational faults in cloud applications 
by mining causal dependencies in logs using golden signals & \\cite{aggarwal_localization_2021} \\\\\n \\hline\n MicroDiag: Fine-grained Performance Diagnosis for Microservice Systems & \\cite{wu_microdiag_2021} \\\\\n \\hline\n Mining causality of network events in log data & \\cite{kobayashi_mining_2018} \\\\\n \\hline\n Mining causes of network events in log data with causal inference & \\cite{kobayashi_mining_2017} \\\\\n \\hline\n Mutation-based graph inference for fault localization & \\cite{musco_mutation-based_2016} \\\\\n \\hline\n Preliminary causal discovery results with software effort estimation data & \\cite{hira_preliminary_2018} \\\\\n \\hline\n Software project risk analysis using Bayesian networks with causality constraints & \\cite{hu_software_2013} \\\\\n \\hline\n Thinking inside the box: differential fault localization for SDN control plane & \\cite{li_thinking_2019} \\\\\n \\hline\n \\end{tabular}\n \\caption{Papers classified as C1, focusing on modeling potential causal structures.}\n \\label{tab:category_c1}\n\\end{table}\n\nThe second category (C2) contains papers focusing on counterfactual analysis (also called counterexample analysis or causality checking) but not performing causal inference as defined in section \\ref{sec:def}. Although they make use of structural equation models (SEM) and the notion of counterfactual as defined by Halpern and Pearl \\cite{halpern2005causes}, the papers in this category did not use statistical causal inference methods (at least none of the papers we reviewed mentioned either the different phases of causal inference or specific techniques like the do-calculus for identification or specific estimation methods). 16 papers were classified as C2.\n\n\\begin{table}[!htbp]\n \\footnotesize\n \\centering\n \\begin{tabular}{|p{0.95\\textwidth}|c|}\n \\hline\n \\textbf{Title (alphabetical order)} & \\textbf{Ref.} \\\\\n \\hline\n Causality analysis and fault ascription in component-based systems & \\cite{gossler_causality_2020} \\\\\n \\hline\n Fault ascription in concurrent systems & \\cite{gossler_fault_2016} \\\\\n \\hline\n A general framework for blaming in component-based systems & \\cite{gossler_general_2015} \\\\\n \\hline\n A causality analysis framework for component-based real-time systems & \\cite{wang_causality_2013} \\\\\n \\hline\n Code-change impact analysis using counterfactuals: Theory and implementation & \\cite{peralta_code-change_2013} \\\\\n \\hline\n Counterfactually reasoning about security & \\cite{peralta_counterfactually_2011} \\\\\n \\hline\n Code-change impact analysis using counterfactuals & \\cite{peralta_code-change_2011} \\\\\n \\hline\n Causality-Guided Adaptive Interventional Debugging & \\cite{fariha_causality-guided_2020} \\\\\n \\hline\n Explaining counterexamples using causality & \\cite{beer_explaining_2012} \\\\\n \\hline\n From Probabilistic Counterexamples via Causality to Fault Trees & \\cite{kuntz_probabilistic_2011} \\\\\n \\hline\n A General Trace-Based Framework of Logical Causality & \\cite{gossler_general_2014} \\\\\n \\hline\n Causal Reasoning for Safety in Hennessy Milner Logic & \\cite{caltais_causal_2020} \\\\\n \\hline\n Causality Analysis for Concurrent Reactive Systems (Extended Abstract) & \\cite{dimitrova_causality_2019} \\\\\n \\hline\n From Verification to Causality-based Explications & \\cite{baier_verification_2021} \\\\\n \\hline\n A Hybrid Approach to Causality Analysis & \\cite{wang_hybrid_2015} \\\\\n \\hline\n Symbolic Causality Checking Using Bounded Model Checking & 
\\cite{beer_symbolic_2015} \\\\\n \\hline\n \\end{tabular}\n \\caption{Papers classified as C2, focusing on counterfactual analysis but not relying upon any methods defined in section \\ref{sec:def}.}\n \\label{tab:category_c2}\n\\end{table}\n\nThe third category (C3) contains papers performing all three steps of causal inference: modeling, identification, and estimation. 32 papers were classified as C3. Table \\ref{tab:category_c3} provides the full list of these 32 papers.\n\n\\begin{table}[!htbp]\n \\footnotesize\n \\centering\n \\begin{tabular}{|p{0.95\\textwidth}|c|}\n \\hline\n \\textbf{Title (alphabetical order)} & \\textbf{Ref.} \\\\\n \\hline\n ACDC: Altering Control Dependence Chains for Automated Patch Generation & \\cite{assi_acdc_2017} \\\\\n \\hline\n An empirical study of Linespots: A novel past-fault algorithm & \\cite{scholz_empirical_2021} \\\\\n \\hline\n Artifact for Improving Fault Localization by Integrating Value and Predicate Based Causal Inference Techniques & \\cite{kucuk_artifact_2021} \\\\\n \\hline\n Bayesian causal inference in automotive software engineering and online evaluation & \\cite{liu_bayesian_2022} \\\\\n \\hline\n Bayesian Data Analysis in Empirical Software Engineering: The Case of Missing Data & \\cite{torkar_bayesian_2020} \\\\\n \\hline\n Bayesian propensity score matching in automotive embedded software engineering & \\cite{liu_bayesian_2021} \\\\\n \\hline\n Causal inference based fault localization for numerical software with NUMFL & \\cite{bai_causal_2017} \\\\\n \\hline\n Causal inference based service dependency graph for statistical service fault localization & \\cite{li_causal_2014} \\\\\n \\hline\n Causal inference for data-driven debugging and decision making in cloud computing & \\cite{geiger_causal_2020} \\\\\n \\hline\n Causal inference for statistical fault localization & \\cite{baah_causal_2010} \\\\\n \\hline\n Causal Program Dependence Analysis & \\cite{lee_causal_2021} \\\\\n \\hline\n CounterFault: Value-Based Fault Localization by Modeling and Predicting Counterfactual Outcomes & \\cite{podgurski_counterfault_2020} \\\\\n \\hline\n Do Developers Discover New Tools on the Toilet? 
& \\cite{murphy-hill_developers_2019} \\\\\n \\hline\n Effectively sampling higher order mutants using causal effect & \\cite{oh_effectively_2021} \\\\\n \\hline\n Gender differences and bias in open source: Pull request acceptance of women versus men & \\cite{terrell_gender_2017} \\\\\n \\hline\n Improving fault localization by integrating value and predicate based causal inference techniques & \\cite{kucuk_improving_2021} \\\\\n \\hline\n Improving Fault Localization by Integrating Value and Predicate Based Causal Inference Techniques & \\cite{podgurski_improving_2021} \\\\\n \\hline\n Inforence: effective fault localization based on information-theoretic analysis and statistical causal inference & \\cite{feyzi_inforence_2019} \\\\\n \\hline\n License choice and the changing structures of work in organization owned open source projects & \\cite{medappa_license_2017} \\\\\n \\hline\n Matching Test Cases for Effective Fault Localization & \\cite{baah_matching_2011} \\\\\n \\hline\n MFL: Method-level fault localization with causal inference & \\cite{shu_mfl_2013} \\\\\n \\hline\n Mitigating the confounding effects of program dependences for effective fault localization & \\cite{baah_mitigating_2011} \\\\\n \\hline\n NUMFL: Localizing faults in numerical software using a value-based causal model & \\cite{bai_numfl_2015} \\\\\n \\hline\n On Software Productivity Analysis with Propensity Score Matching & \\cite{tsunoda_software_2017} \\\\\n \\hline\n On the Use of Causal Graphical Models for Designing Experiments in the Automotive Domain & \\cite{issa_mattos_use_2022} \\\\\n \\hline \n PerfCE: Performance Debugging on Databases with Chaos Engineering-Enhanced Causality Analysis & \\cite{ji_perfce_2022} \\\\\n \\hline\n Pitfalls of data-driven networking: A case study of latent causal confounders in video streaming & \\cite{sruthi_pitfalls_2020} \\\\\n \\hline\n Properties of Effective Metrics for Coverage-Based Statistical Fault Localization & \\cite{sun_properties_2016} \\\\\n \\hline\n Reducing confounding bias in predicate-level statistical debugging metrics & \\cite{gore_reducing_2012} \\\\\n \\hline\n Testing Causality in Scientific Modelling Software & \\cite{clark_testing_2022} \\\\\n \\hline\n The Importance of Being Positive in Causal Statistical Fault Localization: Important Properties of Baah et al.'s CSFL Regression Model & \\cite{bai_importance_2015} \\\\\n \\hline\n Unicorn: reasoning about configurable system performance through the lens of causality & \\cite{iqbal_unicorn_2022} \\\\\n \\hline\n \\end{tabular}\n \\caption{Papers classified as C3, applying the three steps of causal inference: modeling, identification and estimation.}\n \\label{tab:category_c3}\n\\end{table}\n\nThe last category (C4) was designed to contain papers that would not fit into the other categories and whose focus was drastically different (outliers of a sort). Only 1 paper was classified as C4: \\cite{leidekker_causal_2020}. This paper reports on early work on how to use statistical causal inference methods for building and evaluating software evolution theories. It reproduces the modeling assumptions of two existing studies with the help of a causal diagram. Unfortunately, the document does not specify whether and how the identification and estimation were done. To our knowledge, there is no available follow-up document at this time. 
For these reasons, we did not include this paper in our further analyses.\n\nWith regard to our research questions, we also decided to exclude categories C1 and C2 from further analysis. The reason is twofold. First, our paper focuses on the application of statistical causal inference methods as defined in section \\ref{sec:def}. Papers in categories C1 and C2 do not use the three steps of statistical causal inference that are of interest in this paper. The second reason is that our search was primarily designed to find papers from the C3 category, and therefore we suspect that the papers from categories C1 and C2 are only a subset of what we could have found if we had directly targeted either causal structure discovery or counterfactual analysis.\n\n\n\\subsection{Demographics}\nAll four categories account for 71 papers spanning the years 2007 to 2022. Figure \\ref{fig:nb_papers_per_year} provides the number of publications per year for all categories. It shows a slight increase over the years. The types of publications are given in figure \\ref{fig:publications_types}. Note that since RQ2 focuses on the structure of the research community, the demographics for category C3 are analyzed in more detail in section \\ref{sec:analysis}.\n\n\n\\begin{figure}[!htbp]\n \\centering\n \\includegraphics[width=0.8\\textwidth]{img\/papers_per_year}\n \\caption{Number of publications over the years.}\n \\label{fig:nb_papers_per_year}\n\\end{figure}\n\n\\begin{figure}[!htbp]\n \\centering\n \\includegraphics[width=0.8\\textwidth]{img\/publications_types}\n \\caption{Publication types.}\n \\label{fig:publications_types}\n\\end{figure}\n\n\n\n\\section{Analysis}\\label{sec:analysis}\n\\subsection{Demographics and bibliometrics}\\label{sec:demographics}\nThe 32 papers classified in C3 have been written by a total of 77 distinct authors. Out of these 77 persons, only 18 are listed as authors of more than 2 papers (see figure \\ref{fig:nb_publications_authors}). From the coauthorship network, we obtained 15 distinct coauthor groups (i.e., groups of authors having published together over the whole set of 32 papers). The largest coauthor group (in terms of papers collected) accounts for 12 papers \\cite{baah_causal_2010,baah_matching_2011,baah_mitigating_2011,bai_importance_2015,bai_numfl_2015,bai_causal_2017,kucuk_improving_2021,kucuk_artifact_2021,podgurski_counterfault_2020,podgurski_improving_2021,shu_mfl_2013,sun_properties_2016} and is composed of the following persons:\nBaah, G.K. and Harrold, M.J., both from the Georgia Institute of Technology, Atlanta, GA, USA; Bai, Z., Cao, F., Kucuk, Y., Shu, G., Sun, B., Sun, S-F., and Podgurski, A. from Case Western Reserve University, Cleveland, OH, USA; and Henderson, T.A.D. from Google Inc., Mountain View, CA, USA.\n\nThe next three largest groups account for 4 papers \\cite{lee_causal_2021,oh_effectively_2021,torkar_bayesian_2020,scholz_empirical_2021}, 3 papers \\cite{liu_bayesian_2021,liu_bayesian_2022,issa_mattos_use_2022}, and 2 papers \\cite{terrell_gender_2017,murphy-hill_developers_2019}, respectively. The remaining 11 coauthor groups each account for a single paper.\n\n\\begin{figure}[!htbp]\n \\centering\n \\includegraphics[width=0.8\\textwidth]{img\/nb_publications_per_authors}\n \\caption{Number of publications per author. 
Only authors listed in more than one paper are displayed.}\n \\label{fig:nb_publications_authors}\n\\end{figure}\n\n\\subsection{Use cases}\\label{sec:use_cases}\nFrom the 32 papers in C3, we extracted the use cases and organized them into four groups: papers focusing on fault localization (G1, see section \\ref{sec:fault_loc_g1}), on testing (G2, see section \\ref{sec:testing_g2}), on performance analysis (G3, see section \\ref{sec:performance_g3}), and papers reporting other use cases (G4, see section \\ref{sec:other_g4}). Table \\ref{tab:use_cases} provides an overview of all four groups, and figure \\ref{fig:nb_papers_per_use_case_per_year} provides an overview of the evolution of the number of publications for each group. Note that one article \\cite{issa_mattos_use_2022} was an experience report and a reflection on previously published work \\cite{liu_bayesian_2021,liu_bayesian_2022}; we therefore decided not to include it in our use case analysis.\n\n\n\\begin{table}[!htbp]\n \\centering\n \\begin{tabular}{|p{6.5cm}|c|p{3.5cm}|}\n \\hline\n \\textbf{Focus} & \\textbf{Papers} & \\textbf{References} \\\\\n \\hline\n \\textbf{Fault localization} & 17 & \\cite{assi_acdc_2017,baah_causal_2010,baah_matching_2011,baah_mitigating_2011,bai_importance_2015,bai_numfl_2015,bai_causal_2017,feyzi_inforence_2019,gore_reducing_2012,kucuk_improving_2021,kucuk_artifact_2021,lee_causal_2021,li_causal_2014,podgurski_counterfault_2020,podgurski_improving_2021,shu_mfl_2013,sun_properties_2016}\\\\\n \\hline\n \\textbf{Testing} & 4 & \\cite{clark_testing_2022,liu_bayesian_2021,liu_bayesian_2022,oh_effectively_2021}\\\\\n \\hline\n \\textbf{Performance analysis} & 5 & \\cite{geiger_causal_2020,iqbal_unicorn_2022,ji_perfce_2022,scholz_empirical_2021,sruthi_pitfalls_2020}\\\\\n \\hline\n \\textbf{Other:} & 5 & \\\\\n Impact of open source license choice & & \\cite{medappa_license_2017} \\\\\n Impact of a newsletter strategy (TotT) & & \\cite{murphy-hill_developers_2019} \\\\\n Effect of gender on pull request acceptance & & \\cite{terrell_gender_2017} \\\\\n Effort estimation & & \\cite{torkar_bayesian_2020} \\\\\n Productivity analysis & & \\cite{tsunoda_software_2017} \\\\\n \\hline\n \\end{tabular}\n \\caption{Overview of the software engineering use cases for the 32 papers analyzed (C3).}\n \\label{tab:use_cases}\n\\end{table}\n\n\\begin{figure}[!htbp]\n \\centering\n \\includegraphics[width=0.8\\textwidth]{img\/papers_per_use_cases_per_year}\n \\caption{Number of publications over the years for each use case group.}\n \\label{fig:nb_papers_per_use_case_per_year}\n\\end{figure}\n\n\\subsubsection{Fault localization (G1)}\\label{sec:fault_loc_g1}\nThe first group includes 17 papers focused on fault localization. \nFault localization is the task of automatically finding statements or other program elements that might potentially be the cause of a program failure.\n\n\\paragraph{Demographics}\nOut of the 17 papers, 12 have been published by a single group of coauthors (the group of Prof. Podgurski at Case Western Reserve University, Cleveland, OH, USA, and coauthors), from 2010 until 2021 \\cite{baah_causal_2010,baah_matching_2011,baah_mitigating_2011,bai_importance_2015,bai_numfl_2015,bai_causal_2017,kucuk_improving_2021,kucuk_artifact_2021,podgurski_counterfault_2020,podgurski_improving_2021,shu_mfl_2013,sun_properties_2016}. 
The other 5 papers \\cite{assi_acdc_2017,feyzi_inforence_2019,gore_reducing_2012,lee_causal_2021,li_causal_2014} were published between 2012 and 2021 by distinct coauthor groups.\n\nOf these 5 papers, 3 directly extend the work of Baah, Podgurski, and Harrold \\cite{baah_causal_2010,baah_matching_2011,baah_mitigating_2011}: to automated patch generation \\cite{assi_acdc_2017}, to more complex bugs \\cite{feyzi_inforence_2019}, and from the statement level to the predicate level \\cite{gore_reducing_2012}.\n\n\\cite{lee_causal_2021} proposes causal program dependence analysis (where causal inference is used to model the strength of program dependence relations) and applies it to fault localization. The works of Baah et al. \\cite{baah_causal_2010,baah_matching_2011,baah_mitigating_2011} and of Gore and Reynolds \\cite{gore_reducing_2012} are cited as related work.\n\nOnly one paper \\cite{li_causal_2014} in this category does not cite any of the other related work from this group. It focuses on fault localization in composite services.\n\n\\paragraph{Application of statistical causal inference}\nAll the papers except for \\cite{li_causal_2014} focus on the causal effect a given source code element might have on a failure. In \\cite{li_causal_2014}, the focus lies on input\/output relationships in composite services.\n\nAll papers except \\cite{li_causal_2014,lee_causal_2021} are based upon or extend the work from Baah et al. \\cite{baah_causal_2010,baah_matching_2011,baah_mitigating_2011} and obtain a graphical causal model from an analysis of the program structure (from the control dependency or data dependency graph). Identification is done by applying the backdoor criterion in order to reduce confounding bias. Estimation is done through linear regression, random forests, or matching techniques. Figure \\ref{fig:model_baah} gives more details about the type of graphical causal model used in this work.\n\n\n\n\n\\begin{figure}[!htbp]\n \\centering\n \\begin{tikzpicture}\n \\node (x1) at (-2,2) [circle,fill,inner sep=1.25pt,label=above:$X_1$] {};\n \\node (x2) at (-0.5,2) [circle,fill,inner sep=1.25pt,label=above:$X_2$] {};\n \\node (dots) at (0.6,2) [label=above:$...$] {};\n \\node (xn1) at (2.5,2) [circle,fill,inner sep=1.25pt,label=above:$X_{n-1}$] {};\n \\node (xn) at (4,2) [circle,fill,inner sep=1.25pt,label=above:$X_n$] {};\n \\node (t) at (-1,0) [circle,fill,inner sep=1.25pt,label=left:$T_s$] {};\n \\node (y) at (3,0) [circle,fill,inner sep=1.25pt,label=right:$Y$] {};\n \\path[->] (t) edge (y);\n \\path[->] (x1) edge (t);\n \\path[->] (x1) edge (y);\n \\path[->] (x2) edge (t);\n \\path[->] (x2) edge (y);\n \\path[->] (xn1) edge (t);\n \\path[->] (xn1) edge (y);\n \\path[->] (xn) edge (t);\n \\path[->] (xn) edge (y);\n \\end{tikzpicture}\n \\caption{Illustration of the types of graphical models used by Baah et al. \\cite{baah_causal_2010,baah_matching_2011,baah_mitigating_2011}. The graphical causal model contains three types of nodes: the treatment $T_s$, the outcome $Y$, and confounders $X_i$ (nodes having an effect on both $Y$ and $T_s$). All variables are binary. The outcome variable $Y$ corresponds to the outcome of a given test (1 if it fails, 0 otherwise); the treatment variable $T_s$ indicates whether a given statement $s$ was covered by the test; and the confounders $X_i$ correspond to statements that influence both the coverage of $s$ (through control dependence) and the test outcome, the corresponding variables indicating whether these statements were covered by the test. Identification is done using the backdoor criterion, and the causal effect of the treatment $T_s$ on whether a test fails or passes is computed after adjusting for the confounders.}\n \\label{fig:model_baah}\n\\end{figure}
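\n\nTo illustrate the general scheme, the following sketch computes such a regression-based causal-effect estimate for a single statement from a coverage matrix. It is a simplified illustration of the idea, not the implementation of Baah et al.; the coverage data and variable names are made up.\n\n\\begin{verbatim}
import pandas as pd
import statsmodels.api as sm

# Hypothetical coverage data: one row per test execution.
# T_s = 1 if statement s was covered, X_p = 1 if the control-dependence
# parent of s was covered (the confounder), Y = 1 if the test failed.
df = pd.DataFrame({
    "T_s": [1, 1, 0, 0, 1, 0, 1, 0],
    "X_p": [1, 1, 1, 0, 1, 0, 1, 1],
    "Y":   [1, 0, 0, 0, 1, 0, 1, 0],
})

# Backdoor adjustment via regression: the coefficient of T_s, with the
# confounder X_p controlled for, serves as the causal-effect estimate
# used to rank statements by suspiciousness.
fit = sm.OLS(df["Y"], sm.add_constant(df[["T_s", "X_p"]])).fit()
print(fit.params["T_s"])
\\end{verbatim}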
\n\nThe work from Lee et al. \\cite{lee_causal_2021} focuses first on causal program dependency analysis (CPDA). The graphical causal model represents how likely a change to the value of a program element $s_i$ is to cause a change to the value of another element $s_j$. This model is obtained from both the analysis of the source code and interventions (mutations). The authors apply this modeling technique to the problem of fault localization. The working hypothesis is that a faulty program element will have more impact on the output element in failing executions than in passing executions. Identification is done using the backdoor criterion and adjusting for confounders. Estimation is performed, but no explicit causal estimation method is mentioned.\n\nAs stated above, the work from Li et al. \\cite{li_causal_2014} differs from the others in this group. It focuses on fault localization not in a given program but in service composition (i.e., in a set of interacting programs). The graphical causal model is obtained from the analysis and transformation of a service dependency graph: a graph representing services as nodes and service dependencies as links. Identification is done using the backdoor criterion. Estimation is also performed, but no explicit causal estimation method is mentioned.\n\n\\paragraph{Evaluation methods and data sets}\nApart from \\cite{li_causal_2014}, which performs a proof of concept relying upon simulated data, all other papers rely upon data sets containing programs, bugs (the ground truth), and their corresponding test suites that have either been collected for the purpose of the study or are publicly available (e.g., Defects4J, Siemens Suite, SIR). As these data sets contain a ground truth (previously found bugs and the corresponding faulty program elements), all papers perform benchmarks and compare different solutions or different versions of a solution with one another.\n\n\n\\subsubsection{Testing (G2)}\\label{sec:testing_g2}\nThe second group includes 4 papers \\cite{clark_testing_2022,liu_bayesian_2021,liu_bayesian_2022,oh_effectively_2021} focusing on testing. As defined in the SWEBOK, testing is an activity performed to evaluate software quality and to improve it by identifying defects. Software testing involves dynamic verification of the behavior of a program against expected behavior on a finite set of test cases. As such, fault localization (see section \\ref{sec:fault_loc_g1}) can be considered as a subdomain of testing. 
The 4 papers reviewed here focus less on fault localization and apply causal inference to different testing methods: metamorphic testing and sensitivity analysis for \\cite{clark_testing_2022}, mutation testing for \\cite{oh_effectively_2021}, and \"observational testing\" for \\cite{liu_bayesian_2021,liu_bayesian_2022}.\n\n\\paragraph{Demographics}\nThe two papers \\cite{liu_bayesian_2021,liu_bayesian_2022} are related and come from the same set of coauthors.\n\nThe work from Oh et al. \\cite{oh_effectively_2021} relies on causal program dependency analysis (CPDA) introduced in \\cite{lee_causal_2021}, which has been reviewed in the previous section. Two of the authors, Seongmin Lee and Shin Yoo, are also coauthors of \\cite{lee_causal_2021}.\n\nThe work from Clark et al. \\cite{clark_testing_2022} refers to many papers reviewed in the previous section as related work (see section 7.2 Causality in Software Testing in \\cite{clark_testing_2022}).\nTo the best of our knowledge, this is their first paper reporting on the use of statistical causal inference for testing. \n\n\\paragraph{Application of statistical causal inference}\nIn the paper \\cite{oh_effectively_2021}, the focus lies on mutation testing. In mutation testing, small deleterious modifications (called mutations) are applied to program elements, the goal being to ensure that a software test suite is capable of detecting as many mutations as possible. Strategies for choosing program elements to mutate are based upon their causal effects as computed by the CPDA approach from \\cite{lee_causal_2021}. Causal identification and estimation are the same as in \\cite{lee_causal_2021}.\n\nIn \\cite{clark_testing_2022}, the focus lies on testing scientific simulation software, or more precisely on testing the causal effects represented in the model being simulated. A graphical causal model is created in a semi-automated way (manual pruning by experts of the simulated system is required). Nodes are inputs and outputs of the simulation program. Identification is done using the backdoor criterion. Estimation methods mentioned in the paper are regression and causal forest estimators.\n\nThe work from Liu et al. \\cite{liu_bayesian_2021,liu_bayesian_2022} uses statistical causal inference for estimating the effect of software updates. These papers do not rely upon a graphical model but use other statistical causal inference techniques such as propensity score matching, difference in differences, and regression discontinuity design for estimating the causal effect.\n\n\\paragraph{Evaluation methods and data sets}\nIn comparison to the previous use case, the papers here deal with situations in which the ground truth is at best uncertain and at worst not known. In \\cite{clark_testing_2022}, the authors provide a proof of concept using data generated from 3 scientific simulations. In \\cite{liu_bayesian_2021,liu_bayesian_2022}, the authors report the results of their approach on proprietary data collected from cars. Finally, in \\cite{oh_effectively_2021}, the authors compare three heuristics for generating mutants using data from the SIR data set mentioned in section \\ref{sec:fault_loc_g1}.\n\n\\subsubsection{Performance modeling (G3)}\\label{sec:performance_g3}\nThe third group includes 5 papers \\cite{geiger_causal_2020,iqbal_unicorn_2022,ji_perfce_2022,scholz_empirical_2021,sruthi_pitfalls_2020} focusing on performance analysis. In \\cite{geiger_causal_2020,iqbal_unicorn_2022,ji_perfce_2022,sruthi_pitfalls_2020}, the focus lies on performance debugging, where the general goal is to understand which components of a system contribute to the measured performance and potentially to control automated decisions that impact performance. \\cite{geiger_causal_2020} targets cloud computing, \\cite{iqbal_unicorn_2022} highly configurable systems, \\cite{ji_perfce_2022} databases, and \\cite{sruthi_pitfalls_2020} video streaming. The work from Scholz et al. \\cite{scholz_empirical_2021} differs slightly, as it aims at analyzing the predictive performance of different fault prediction algorithms.\n \n\\paragraph{Demographics}\nAll papers in this category are from different research groups. Only two authors, Richard Torkar and Robert Feldt, are coauthors of other papers we included and reviewed: \\cite{torkar_bayesian_2020} (see section \\ref{sec:other_g4}) and \\cite{lee_causal_2021} (see section \\ref{sec:fault_loc_g1}).\n\nAll four papers \\cite{geiger_causal_2020,iqbal_unicorn_2022,ji_perfce_2022,sruthi_pitfalls_2020} cite other works that we collected, either from C1 (causal structure discovery), from C2 (counterfactual analysis), or from C3 (causal inference) \\cite{baah_causal_2010,feyzi_inforence_2019}.\n\nThe work from \\cite{scholz_empirical_2021} seems to belong to a somewhat adjacent research community and does not cite any of the 71 papers we collected.\n\n\\paragraph{Application of statistical causal inference}\nThe work from Geiger et al. \\cite{geiger_causal_2020} aims at addressing two problems: 1) optimizing automated decisions (made during the operation of a cloud server) that can impact performance (the authors call this the control problem); 2) understanding which component of a system contributes to what extent to the measured performance (the authors call this the performance debugging problem). Both problems are formally defined within the language of counterfactuals, and the authors propose a 4-step method to solve them, from obtaining a causal graph (here the authors stay relatively generic, propose to use a mix of randomized interventional experiments, observational data, and expert knowledge, and refer to other related work) to estimating the causal effects. In their application examples, nodes in the causal graph represent system and network metrics.\n\nThe work from \\cite{iqbal_unicorn_2022} focuses on the problem of understanding and estimating the impact of configuration options on performance. The graphical causal model is obtained via the use of a causal structure discovery algorithm (FCI). Nodes in the graph represent configuration options and system and network performance metrics. Identification is done using the do-calculus (i.e., not limited to the backdoor criterion). Estimation methods are not directly mentioned, but the paper refers to the libraries ananke\\footnote{\\url{https:\/\/ananke.readthedocs.io\/en\/latest\/}} and causality\\footnote{\\url{https:\/\/github.com\/akelleh\/causality}} for estimating the causal effects.
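\n\nTo give an idea of what such an automated structure discovery step looks like in practice, here is a minimal sketch running FCI with the causal-learn library (listed in table \\ref{tab:libraries}); the data and parameters are hypothetical, and this is not the setup of \\cite{iqbal_unicorn_2022}.\n\n\\begin{verbatim}
import numpy as np
from causallearn.search.ConstraintBased.FCI import fci

# Hypothetical observation matrix: one row per system run, columns
# for configuration options and performance metrics (e.g., cache
# size, thread count, latency, throughput).
data = np.random.default_rng(0).normal(size=(500, 4))

# Run FCI with Fisher's z conditional independence test (suitable
# for continuous data); it returns a partial ancestral graph.
graph, edges = fci(data, independence_test_method="fisherz",
                   alpha=0.05)

for edge in edges:
    print(edge)  # discovered (possibly causal) relations
\\end{verbatim}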
\n\nThe work from Ji et al. \\cite{ji_perfce_2022} focuses on debugging performance anomalies in databases. The graphical model is obtained via a causal structure discovery algorithm (BLIP). Nodes in the graph represent system and network metrics (called KPIs in the paper). Estimation is done using double machine learning and by leveraging instrumental variables. No identification method (as defined in section \\ref{sec:def}) is mentioned.
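\n\nFor illustration, a minimal double machine learning sketch using the econml library (listed in table \\ref{tab:libraries}) could look as follows; the data and variable names are made up, and this is not the implementation of \\cite{ji_perfce_2022}.\n\n\\begin{verbatim}
import numpy as np
from econml.dml import LinearDML

# Hypothetical monitoring data: T is a suspected cause (e.g., one
# KPI), Y the performance metric, W other KPIs acting as controls.
rng = np.random.default_rng(0)
W = rng.normal(size=(1000, 5))
T = W[:, 0] + rng.normal(size=1000)
Y = 1.5 * T + W[:, 0] + rng.normal(size=1000)

# Double machine learning: nuisance models for Y and T are fit
# first, and the effect of T on Y is estimated on the residuals.
est = LinearDML(random_state=0)
est.fit(Y, T, X=None, W=W)
print(est.const_marginal_effect())  # close to the true effect 1.5
\\end{verbatim}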
\n\nThe work from \\cite{sruthi_pitfalls_2020} focuses on analyzing the performance of video streaming. The causal graph is obtained manually, and nodes represent network metrics. Identification is done using the backdoor criterion. The paper does not directly mention any estimation method.\n\nFinally, the work from Scholz and Torkar \\cite{scholz_empirical_2021} aims at comparing the performance of different fault prediction algorithms. Causal inference is used for the comparison but not directly for the fault localization and prediction (as in section \\ref{sec:fault_loc_g1}). The graphical causal model was obtained manually, and the nodes represent the fault prediction algorithms to be compared (as the treatment), evaluation metrics (as the outcome), and project, language, LOC, and fix count. Identification is done by applying the rules of the do-calculus. Estimation is done using linear regression.\n\n\n\\paragraph{Evaluation methods and data sets}\nIn this group, all papers rely upon a ground truth that is either simulated (as in \\cite{geiger_causal_2020,ji_perfce_2022,sruthi_pitfalls_2020}) or collected for the purpose of the study (as in \\cite{iqbal_unicorn_2022,scholz_empirical_2021}).\n\n\\subsubsection{Other use cases}\\label{sec:other_g4}\nThe last group includes 5 papers whose use cases were distinct enough from the other three groups. \\cite{medappa_license_2017} studies the impact of open source license choice, \\cite{murphy-hill_developers_2019} the impact of a newsletter strategy (TotT), \\cite{terrell_gender_2017} the effect of gender on pull request acceptance, \\cite{torkar_bayesian_2020} applies causal inference methods to the problem of effort estimation, and \\cite{tsunoda_software_2017} targets productivity analysis.\n\n\n\\paragraph{Demographics}\nApart from \\cite{torkar_bayesian_2020}, where one of the coauthors also appears in another paper we reviewed \\cite{scholz_empirical_2021} (see section \\ref{sec:performance_g3}), all the other papers are from distinct coauthor groups and do not cross-reference each other nor cite any of the 71 papers we collected.\n\n\n\\paragraph{Application of statistical causal inference}\nIn this category, only \\cite{torkar_bayesian_2020} mentions using a graphical causal model (obtained manually) and applies the do-calculus for identification and linear regression for estimation. The nodes in the graph represent code and project metrics taken from the International Software Benchmarking Standards Group (ISBSG) data set\\footnote{\\label{isbsg}\\url{isbsg.org}}.\n\nThe works in \\cite{medappa_license_2017,terrell_gender_2017,tsunoda_software_2017} do not explicitly rely on a graphical causal model and use propensity score matching for estimation.\n\nFinally, the work from Murphy-Hill et al. \\cite{murphy-hill_developers_2019} on the impact of the TotT newsletter strategy at Google does not explicitly rely upon a graphical causal model, but uses a method called CausalImpact that performs counterfactual analysis on time series. CausalImpact assumes that there exists a set of control time series that are themselves not affected by the intervention and bases its estimation on them.\n\n\\paragraph{Evaluation and data sets}\nIn this group, all papers deal with problems where the ground truth is difficult to obtain or yet to be discovered. All five papers report their analysis results and provide comparisons with the related literature. 
\\cite{murphy-hill_developers_2019} collected data on newsletter strategy and tool usage at Google. Both \\cite{medappa_license_2017} and \\cite{terrell_gender_2017} use data from GitHub. Finally, \\cite{torkar_bayesian_2020} and \\cite{tsunoda_software_2017} use data from the International Software Benchmarking Standards Group (ISBSG).\n\n\n\\subsection{Libraries}\\label{sec:libraries}\nWe extracted the names and URLs of the libraries used in the selected papers. When the libraries were not mentioned but the paper provided an implementation repository, we analyzed the repository dependencies. Our focus lies on libraries specific to statistical causal inference. We excluded libraries for regression tasks (like Ranger\\footnote{\\url{https:\/\/cran.r-project.org\/web\/packages\/ranger\/index.html}} or Glmnet\\footnote{\\url{https:\/\/cran.r-project.org\/web\/packages\/glmnet\/index.html}}), \nBayesian probabilistic inference (like Pyro\\footnote{\\url{https:\/\/github.com\/pyro-ppl\/pyro}} or Brms\\footnote{\\url{https:\/\/cran.r-project.org\/web\/packages\/brms\/index.html}}),\nSEM (like Semopy\\footnote{\\url{https:\/\/semopy.com\/}}),\nand libraries for visualizing or manipulating graphs (like causalgraphicalmodels\\footnote{\\url{https:\/\/github.com\/ijmbarr\/causalgraphicalmodels}}). Table \\ref{tab:libraries} provides an overview of the libraries mentioned in the selected papers.\n\nThe first thing we noticed is that almost two thirds of the papers we reviewed either did not mention relying upon any library (12 papers out of 32) or only mentioned using the R language (11 papers out of 32). The remaining 9 papers mentioned 11 distinct libraries related to statistical causal inference. \n\n\\begin{table}[!htbp]\n \\centering\n \\footnotesize\n \\begin{tabular}{|ll|c|}\n \\hline\n \\textbf{Library name} & \\textbf{URL} & \\textbf{Nb.}\\\\\n \\hline\n \\multicolumn{2}{|l|}{not mentioned} & 12 \\\\\n \\hline\n \\multicolumn{2}{|l|}{only mention using R} & 11 \\\\\n \\hline\n econml (Python) & \\url{https:\/\/github.com\/microsoft\/econml} & 2 \\\\\n \\hline\n matching (R) & \\url{https:\/\/cran.r-project.org\/web\/packages\/Matching\/index.html} & 1 \\\\\n \\hline\n matchit (R) & \\url{https:\/\/cran.r-project.org\/web\/packages\/MatchIt\/index.html} & 1 \\\\\n \\hline\n ananke (Python) & \\url{https:\/\/gitlab.com\/causal\/ananke} & 1 \\\\\n \\hline\n causality (Python) & \\url{https:\/\/github.com\/akelleh\/causality} & 1 \\\\\n \\hline\n deepiv (Python) & \\url{https:\/\/github.com\/jhartford\/deepiv} & 1 \\\\\n \\hline\n optmatch (R) & \\url{https:\/\/github.com\/markmfredrickson\/optmatch} & 1 \\\\\n \\hline\n causalimpact (R) & \\url{https:\/\/google.github.io\/CausalImpact\/CausalImpact.html} & 1 \\\\\n \\hline\n dagitty (R) & \\url{https:\/\/cran.r-project.org\/web\/packages\/dagitty\/index.html} & 1 \\\\\n \\hline\n causal-learn (Python) & \\url{https:\/\/github.com\/cmu-phil\/causal-learn} & 1 \\\\\n \\hline\n \\end{tabular}\n \\caption{Causal inference libraries used. The last column corresponds to the number of papers mentioning a given library.}\n \\label{tab:libraries}\n\\end{table}\n\n\\section{Answers to our research questions}\\label{sec:answers}\n\n\\subsection{RQ1: use cases}\nIn section \\ref{sec:use_cases} we analyzed the different software engineering use cases in which statistical causal inference has found an application. 
\nWe clustered the use cases into four groups: fault localization (17 papers), testing (4 papers), performance analysis (5 papers), and other use cases (5 papers). We left one paper out of the analysis. Most of the papers we collected were oriented towards the broader task of improving software quality. \n\n\n\\subsection{RQ2: structure of the research community}\nBased upon our results (see section \\ref{sec:demographics} and figure \\ref{fig:nb_papers_per_use_case_per_year}), we can say that the application of causal inference in software engineering is relatively young. The community of researchers applying statistical causal inference in software engineering also seems to be fragmented. We could only find one group of researchers (the group of Prof. Podgurski and coauthors at Case Western Reserve University, Cleveland, OH, USA) that has applied statistical causal inference over more than 10 years. As already noted in section \\ref{sec:analysis}, there are few cross-references. Only one paper \\cite{clark_testing_2022} provided an extensive state of the art (although focused on testing). The rest of the papers either did not mention related statistical causal inference work or clearly mentioned a lack of known related work. For example, \\cite{liu_bayesian_2021} states \\textit{\"To the best of our knowledge, this is the first time such a [propensity score matching] model is used in software development publications.\"} although different matching methods had already been applied before (see tables \\ref{tab:est_dag} and \\ref{tab:est_no_dag}). This may be due both to the fact that statistical causal inference methods are relatively new and not well known in software engineering, and to the fact that no surveys were available prior to this article.\n\n\\subsection{RQ3: causal methods}\nIn section \\ref{sec:def}, we simplified the statistical causal inference process into three main steps: modeling, identification and estimation.\n\n\\subsubsection{Modeling}\nConcerning the modeling phase (building a graphical causal model), three papers relied on a manual approach \\cite{sruthi_pitfalls_2020,scholz_empirical_2021,torkar_bayesian_2020}. Two papers relied on an existing causal structure discovery algorithm (FCI and BLIP, respectively) \\cite{iqbal_unicorn_2022,ji_perfce_2022}. Nineteen papers developed specific extraction methods for generating the graphical causal model in an automated way. One paper \\cite{clark_testing_2022} relied on both automated and expert-based (manual) approaches for generating the graphical causal model. Six papers did not rely explicitly on a graphical causal model \\cite{liu_bayesian_2021,liu_bayesian_2022,medappa_license_2017,terrell_gender_2017,tsunoda_software_2017,murphy-hill_developers_2019}.\n\n\\subsubsection{Identification and Estimation}\nWe can classify all the papers into two groups depending upon the identification and estimation approaches they rely on.\n\nThe first group contains 24 papers that rely on explicitly adjusting for confounders. These papers first define an explicit graphical causal model and apply the rules of the do-calculus (like the backdoor criterion) for identifying causal effects. Estimation is then done either through classical statistical estimation methods (e.g., linear regression or random forest) or through matching techniques. 
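\nTo make these three steps concrete, the whole pipeline can be expressed with the DoWhy library \\cite{sharma2020dowhy}; the following sketch is purely illustrative (toy graph, made-up variable and file names):\n\\begin{verbatim}\nimport pandas as pd\nfrom dowhy import CausalModel\n\ndf = pd.read_csv('observations.csv')  # columns: T, Y, Z (illustrative)\n\nmodel = CausalModel(                  # step 1: modeling\n    data=df, treatment='T', outcome='Y',\n    graph='digraph {Z -> T; Z -> Y; T -> Y;}')\nestimand = model.identify_effect()    # step 2: identification\nestimate = model.estimate_effect(     # step 3: estimation\n    estimand, method_name='backdoor.linear_regression')\nprint(estimate.value)\n\\end{verbatim}\n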
Table \\ref{tab:est_dag} provides the details of the corresponding estimation methods and papers.\n\n\\begin{table}[!htbp]\n \\centering\n \\begin{tabular}{|l|c|l|}\n \\hline\n \\textbf{Estimation method} & \\textbf{Nb.} & \\textbf{Ref.} \\\\\n \\hline\n Linear regression & 8 & \\cite{assi_acdc_2017,baah_causal_2010,bai_importance_2015,gore_reducing_2012,scholz_empirical_2021,shu_mfl_2013,sun_properties_2016,torkar_bayesian_2020} \\\\\n \\hline\n Random forest & 4 & \\cite{kucuk_artifact_2021,kucuk_improving_2021,podgurski_counterfault_2020,podgurski_improving_2021} \\\\\n \\hline\n Propensity score matching & 3 & \\cite{bai_numfl_2015,bai_causal_2017,feyzi_inforence_2019}\\\\\n \\hline\n Exact matching & 2 & \\cite{baah_matching_2011,baah_mitigating_2011} \\\\\n \\hline\n Not explicit & 7 & \\cite{lee_causal_2021,li_causal_2014,sruthi_pitfalls_2020,iqbal_unicorn_2022,geiger_causal_2020,oh_effectively_2021,clark_testing_2022} \\\\\n \\hline\n \\end{tabular}\n \\caption{Estimation methods for papers relying upon an explicit graphical causal model.}\n \\label{tab:est_dag}\n\\end{table}\n\nThe second group contains 7 papers \\cite{ji_perfce_2022,liu_bayesian_2021,liu_bayesian_2022,murphy-hill_developers_2019,terrell_gender_2017,tsunoda_software_2017,medappa_license_2017} that rely upon methods that implicitly deal with confounders and do not require an explicit graphical causal model. Table \\ref{tab:est_no_dag} provides the details of the corresponding estimation methods and papers.\n\n\\begin{table}[!htbp]\n \\centering\n \\begin{tabular}{|l|c|l|}\n \\hline\n \\textbf{Estimation method} & \\textbf{Nb.} & \\textbf{Ref.} \\\\\n \\hline\n Propensity score matching & 5 & \\cite{liu_bayesian_2021,liu_bayesian_2022,terrell_gender_2017,tsunoda_software_2017,medappa_license_2017}\\\\\n \\hline\n Difference in differences & 1 &\\cite{liu_bayesian_2022} \\\\\n \\hline\n Double machine learning + Instrumental variables & 1 &\\cite{ji_perfce_2022} \\\\ \n \\hline\n CausalImpact & 1 & \\cite{murphy-hill_developers_2019} \\\\\n \\hline\n \\end{tabular}\n \\caption{Estimation methods for papers not relying upon a graphical causal model.}\n \\label{tab:est_no_dag}\n\\end{table}\n\n\\subsection{RQ4 and RQ5: Data sets, libraries and evaluation}\nWe can group the analyzed papers into two categories: papers evaluating a novel method that relies on statistical causal inference, and papers using statistical causal inference to answer a research question using observational data. In the first group, the results of the selected papers show that causal inference provides improvement, usually through mitigating confounding bias (as illustrated by the benchmarks in \\cite{podgurski_improving_2021,oh_effectively_2021,ji_perfce_2022}). This group of papers usually uses a ground truth data set. In the other group, statistical causal inference methods are used to estimate causal effects from observational data. In this case, since there is no ground truth (it is the purpose of the study to discover it), assessing the impact of causal inference methods would require a more sophisticated (and longer) empirical set-up, the breadth of which is beyond the scope of the papers currently reviewed.\n\nIn section \\ref{sec:libraries} we provided an overview of the libraries used. The distribution among the selected papers shows no particular preferred library and an even mix of Python and R packages. 
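\nMost of these estimation steps require only a few lines of standard tooling. For example, propensity score matching, the most frequently used method across Tables \\ref{tab:est_dag} and \\ref{tab:est_no_dag}, can be sketched as follows; this is our own bare-bones illustration (scikit-learn based, without caliper or balance diagnostics):\n\\begin{verbatim}\nimport numpy as np\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.neighbors import NearestNeighbors\n\ndef psm_att(X, t, y):\n    # Average treatment effect on the treated, via 1-nearest-\n    # neighbor matching on an estimated propensity score.\n    ps = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]\n    treated = np.flatnonzero(t == 1)\n    control = np.flatnonzero(t == 0)\n    nn = NearestNeighbors(n_neighbors=1).fit(ps[control].reshape(-1, 1))\n    _, idx = nn.kneighbors(ps[treated].reshape(-1, 1))\n    return float(np.mean(y[treated] - y[control[idx[:, 0]]]))\n\\end{verbatim}\n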
We have not been able to find any paper mentioning a specific benchmark data set dedicated to statistical causal inference in software engineering (as one can find for other domains in \\cite{yao2021survey}). For assessing statistical causal inference methods in software engineering, it would be interesting to have dedicated benchmark data sets where one could find ground truth data and the corresponding causal graphical model(s), for example obtained from the literature on Bayesian networks in software engineering (see section \\ref{sec:related_work}).\n\n\\section{Discussion}\\label{sec:discussion}\n\n\\subsection{Potential and limits of statistical causal inference in software engineering}\nNot all questions can be answered with a single tool. In his lecture and subsequent paper \"Why Model?\" \\cite{epstein2008whymodel}, Epstein lists 17 different modeling purposes. For our discussion, we only focus on the difference between \"predict\" and \"explain\". To keep it simple, we can say that in order to predict, it is generally not necessary to understand the underlying causal effects at play in a system. For example, one can predict tomorrow's weather with relatively high accuracy by simply following the heuristic that tomorrow's weather is the same as today's weather. Explaining the weather, however, requires understanding its intricate network of causes and effects. That is, for prediction, one only needs a mapping between some inputs and some outputs \\cite{hernan2019secondchance}, whereas for explanation one requires a model of the underlying cause-and-effect mechanisms. This is where statistical causal inference methods (among others, e.g., simulations) have a role to play. Pearl's ladder of causation \\cite{pearl2019book} offers here an interesting framework for classifying research questions and for deciding whether statistical causal inference methods make sense for answering a given research question. The research questions of the papers we selected all have one thing in common: they involve an understanding of the causal effect of a given aspect (the execution of a statement, the choice of a configuration option, or the gender of the person submitting a pull request) on some outcome (the failure of a program, the performance of a composite system, or the acceptance of a pull request).\n\nA second important aspect to mention is that when using a graphical causal model, one is required to make assumptions explicit, both about the aspects that are part of the system under study and about the assumed structure of the causal effects. This point was also mentioned in \\cite{torkar_bayesian_2020} and \\cite{issa_mattos_use_2022}.\n\nAnother argument brought by many of the selected papers for using statistical causal inference methods is the need to reduce confounder bias. A confounder is a variable that has an impact on both the treatment and the outcome of interest (see figure \\ref{fig:model_baah} for an example). Note that confounder bias is not the only bias that statistical causal methods can mitigate (another one is collider bias), but in order to mitigate them, one needs a graphical causal model \\cite{cinelli2022crashcourse}.\n\nFinally, it is important to understand the underlying assumptions made by statistical causal inference methods (see the full list in \\cite{yao2021survey}, or in many of the related textbooks and lectures, for instance \\cite{hernan2020whatif,brady2020introduction}). 
This has been illustrated by \\cite{bai_importance_2015} for the positivity assumption and by \\cite{podgurski_counterfault_2020} for the Markov assumption. \n\n\\subsection{Threats to validity}\nThe most important threat to validity we see is selection bias. Indeed, it was relatively difficult to find relevant papers (as shown in section \\ref{sec:methodogy}).\nOn the one hand, software engineering is a broad field, and it is almost certain that we missed some software engineering related aspects in our search process. On the other hand, there is no standard vocabulary to talk about statistical causal inference, and many papers from C1 (see table \\ref{tab:category_c1}) referred to causal structure discovery as \"causal inference\". Furthermore, as our results show, the research community using statistical causal inference in software engineering seems to be fragmented; therefore, the snowballing approach we relied on to find new relevant papers also has its limits. In order to mitigate these risks, we used two different queries and applied two snowballing phases (see section \\ref{sec:methodogy}).\n\n\\section{Conclusion}\\label{sec:conclusion}\nIn this paper, we proposed a review of 32 papers applying statistical causal inference methods in the software engineering domain. We have shown that these methods are interesting when targeting problems that require estimating causal effects from observational data. Our analysis also reveals that these techniques are not yet mainstream in software engineering research. By providing this overview, we hope to promote awareness of these methods, lay the groundwork for discussions on where and when to apply them, and eventually increase the number of applications of these methods in software engineering.\n\n\\paragraph{Acknowledgement}\nThe authors would like to thank Nico Cappel, Niklas Gutting, and Ilir Hulaj for their preliminary work \"Anwendungen von kausaler Inferenz in Software-Engineering\" (``Applications of causal inference in software engineering'') done during their Software Engineering Seminar at the Technical University of Kaiserslautern. \n\n\\bibliographystyle{plain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nCoulombic interactions between salt and poly-anions \nplay a key role in the equilibrium and kinetics of nucleic\nacid processes \\cite{Anderson}. A convenient quantity\nquantifying such interactions, and allowing for the analysis\nand interpretation of their thermodynamic consequences, is\nthe so-called preferential interaction coefficient. Several\ndefinitions have been proposed and their interrelations studied,\nsee e.g. \\cite{Eisenberg,Sch,Tim}. In the present work, \nthey are defined as the integrated deficit (with respect to bulk\nconditions) of co-ion concentration around\na rod-like poly-ion. Our goal is to provide analytical \nexpressions describing the effect of salt concentration and poly-ion \nstructural parameters on the preferential interaction coefficient,\nfor a broad class of asymmetric electrolytes.\nFor symmetric electrolytes, it will be shown that our formulas improve \nupon existing analytical results. For other asymmetries, they seem to have\nno counterpart in the literature.\nOur analysis holds for highly (i.e. beyond counter-ion condensation\n\\cite{Manning,Oosawa}) \nand uniformly charged cylindrical poly-ions, and is explicitly\nlimited to the low\nsalt regime (i.e. when the poly-ion radius $a$ is smaller than \nthe Debye length $1\/\\kappa$). 
These conditions are most relevant \nfor RNA or DNA in their single, double, or triple strand forms.\n\nAs in several previous approaches \\cite{Sharp,Ni,Shkel,Taubes}, \nwe adopt the mean-field framework\nof the Poisson-Boltzmann equation, in a homogeneous dielectric \nbackground of permittivity $\\varepsilon$. \nThe same starting point has proven relevant for related \nstructural physical chemistry studies of nucleic acids \n\\cite{Gueronbis}.\nIn a $z_-$:$z_+$ electrolyte,\nthe dimensionless electrostatic potential $\\phi = e \\varphi\/kT$\n(with $e>0$ the elementary charge and $kT$ the thermal energy) \nthen obeys the equation \\cite{Levin}\n\\begin{equation}\n\\frac{1}{r}\\frac{d}{dr}\\left(r \\frac{d \\phi}{d r}\\right) \\,=\\, \n\\frac{\\kappa^2}{z_+ + z_-} \\,\\left[ e^{z_- \\phi}-\ne^{-z_+ \\phi} \n\\right],\n\\Label{eq:PB}\n\\end{equation}\nwhere $r$ is the radial distance to the rod axis. \nThe valencies $z_+$ and $z_-$ of the salt ions are both taken positive.\nDenoting derivatives with a prime, \nthe boundary conditions read $r \\phi'(r) = 2\\xi >0$ at the poly-ion\nradius ($r=a$) \nand $\\phi \\to 0$ for $r\\to \\infty$. The latter condition\nexpresses the infinite poly-ion dilution limit and ensures that the\nwhole system is electrically neutral, since it (indirectly) implies that \n$r \\phi' \\to 0$ for $r \\to \\infty$. \nWe consider a negatively charged poly-ion for which $\\phi<0$ and the \nline charge density reads $\\lambda = -e \\xi \/\\ell_B<0$,\nwhere $\\ell_B = e^2\/(\\varepsilon kT)$ denotes the Bjerrum length\n(0.71 nm in water at room temperature). Finally, the Debye length\nis defined from the bulk ionic densities $n_+^\\infty$ and $n_-^\\infty$\nthrough $\\kappa^2 = 4 \\pi \\ell_B (z_+^2 n_+^\\infty + z_-^2 n_-^\\infty)$.\n\nThe Coulombic contribution to the anionic preferential interaction coefficient is defined as\n\\cite{Sharp,Ni,Shkel,Taubes,rque10}\n\\begin{equation}\n\\Gamma \\,=\\, \\kappa^2 \\,\\int_a^\\infty (e^{z_- \\phi}-1) \\, r dr,\n\\Label{eq:Gamma}\n\\end{equation}\nwhile its cationic counterpart follows from electro-neutrality.\nThis quantity --which provides a measure of the Donnan effect \\cite{Donnan}--\ncan be expressed in closed form as a function \nof the electrostatic potential, see Appendix \\ref{app:A}.\nAs can be seen in (\\ref{eq:a3}) and (\\ref{eq:a4}), $\\Gamma$\ndepends exponentially on the surface potential $\\phi_0$, \nso that deriving a precise analytical expression is a challenging task.\nFurthermore, we are interested here in the limit $\\kappa a <1$\n(including the regime $\\kappa a \\ll 1$), which is analytically more difficult\nthan the opposite high-salt situation where, to leading order,\nthe charged rod behaves as an infinite plane and curvature\ncorrections can be included perturbatively \\cite{Shkelhighsalt,JPA2003,PRE2004}.\n\n\nWe will proceed in two steps. Focusing first on the \nsurface potential $\\phi_0= \\phi(a)$, we make use of recent results \n\\cite{TT} that have been obtained from a mapping of Eq. (\\ref{eq:PB})\nonto a Painlev\\'e type III problem \\cite{McCoy,McCaskill,Tracy}. The exact expressions\nthereby derived only hold for 1:1, 1:2 and 2:1 electrolytes, but may be written \nin a way that is electrolyte independent. This remarkable feature\nis specific to the short distance behaviour of $\\phi$ and has been \noverlooked so far, since previous works focused not only on short distance but also on large distance\nproperties \\cite{TT}. 
We are then led to conjecture that\nthe corresponding expression holds for {\\em any} binary electrolyte\n$z_-$:$z_+$, and we explicitly check the relevance of our assumption\non several specific examples. \n\nTechnical details are deferred to the appendices. It is in particular concluded\nin Appendix \\ref{app:B} that the surface potential may be written \n\\begin{equation}\ne^{-z_+ \\phi_0} \\,\\simeq\\, \\frac{2 (z_++z_-)}{z_+ (\\kappa a)^2} \\,\n\\left[(z_+ \\xi-1)^2 +\\widetilde\\mu^2\n\\right]\n\\Label{eq:potsurf}\n\\end{equation}\nwhere\n\\begin{equation}\n\\widetilde\\mu \\, \\simeq\\, \\frac{-\\pi}{\\log(\\kappa a) +{\\cal C} -(z_+\\xi-1)^{-1}} .\n\\Label{eq:mutilde}\n\\end{equation}\nExpression (\\ref{eq:mutilde}) is valid for $\\kappa a<1$ and $z_+ \\xi >1$ [in fact \n$z_+\\xi>1+{\\cal O}(1\/|\\log \\kappa a|)$]. These conditions are easily fulfilled\nfor nucleic acids. \nThe ``constant'' $\\cal C$ appearing in (\\ref{eq:mutilde})\ndepends smoothly on the ratio $z_+\/z_-$ but is otherwise salt and\ncharge independent. We report in Table \\ref{table:1} \nits values for several electrolyte asymmetries. \nThe decrease (in absolute value) of $\\cal C$ when $z_+\/z_-$\nincreases is a signature of more efficient (non-linear) screening with\ncounter-ions of higher valencies.\n\n\\begin{center}\n\\begin{table}\n\\begin{tabular}{c||ccccccc}\n$z_+\/z_-$& 1\/10 & 1\/3 & 1\/2 & 1 & 2 & 3 & 10 \\\\\n\\hline\\hline\n${\\cal C}$& -2.51& -1.94 & -1.763 & -1.502&-1.301& -1.21 & -1.06\n\\end{tabular}\n\\caption{Values of ${\\cal C}$ appearing in Eq. (\\ref{eq:mutilde}) as a function \nof electrolyte asymmetries. For $z_+\/z_- = 1$, $1\/2$ and 2,\n${\\cal C}$ is known analytically from the results of \\cite{TT}. The \ncorresponding values are\nrecalled in Appendix \\ref{app:B}.\nFor other values of $z_+\/z_-$, $\\cal C$ has been determined\nnumerically, see in particular Fig. \\ref{fig:Q1} of Appendix \\ref{app:B}.}\n\\label{table:1}\n\\end{table}\n\\end{center}\n\n\n\\begin{figure}[hb]\n\\vskip 4.5mm\n\\includegraphics[height=6cm,angle=0]{Gamma11.eps}\n\\caption{Preferential interaction coefficient for a 1:1 salt. The main graph\ncorresponds to ss-RNA with reduced line charge $\\xi=2.2$, while the\ninset is for ds-RNA ($\\xi=5$). The circles correspond to the\nvalue of (\\ref{eq:Gamma}) following from the numerical solution \nof Eq. (\\ref{eq:PB}). The prediction of Eq. (\\ref{eq:Gammaapprox})\nwith $\\widetilde\\mu$ given by (\\ref{eq:mutilde}) and ${\\cal C}\\simeq -1.502$,\nshown with the continuous curve, is compared to that of Ref. \n\\cite{Shkel}, shown with the dashed line. As in all other figures,\nthe opposite of $\\Gamma$ is displayed, to consider a positive quantity.\n\\Label{fig:Gamma11}\n}\n\\end{figure}\n\n\nFrom Eq. (\\ref{eq:potsurf}) and the results of Appendix \\ref{app:B}, our approximation for $\\Gamma$ takes a simple form [inserting (\\ref{eq:potsurf}) into (\\ref{eq:a4}) and multiplying by $\\kappa^2$, the $\\xi$-dependent terms cancel]:\n\\begin{equation}\n\\Gamma \\,\\simeq \\, -\\frac{ z_-}{z_+} \\,(1+\\widetilde\\mu^2).\n\\label{eq:Gammaapprox}\n\\end{equation}\nThis expression is tested in Figures \\ref{fig:Gamma11} and \\ref{fig:Gamma}\nagainst the ``true'' numerical results that serve\nas a benchmark.\nIn Fig. \\ref{fig:Gamma11}, which corresponds to a monovalent\nsalt (or more generally a $z$:$z$ electrolyte), we also\nshow the prediction of Ref. \n\\cite{Shkel}, which is, to our knowledge, the most accurate existing formula\nfor a 1:1 salt. 
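\nAs a concrete illustration of how these formulas are used, take the ss-RNA parameters of Fig. \\ref{fig:Gamma11} ($\\xi=2.2$, $z_+=z_-=1$) at $\\kappa a=10^{-2}$: Eq. (\\ref{eq:mutilde}) gives $\\widetilde\\mu \\simeq -\\pi\/(\\ln 10^{-2}-1.502-1\/1.2)\\simeq 0.45$, so that Eq. (\\ref{eq:Gammaapprox}) yields $\\Gamma \\simeq -(1+0.45^2)\\simeq -1.2$.\n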
For the technical reasons discussed in Appendix \\ref{app:B},\nwhich are evidenced in Figure \\ref{fig:Q1}, our expression \nimproves upon that of Shkel, Tsodikov and Record \\cite{Shkel},\nparticularly at lower salt content.\nFor 1:2 and 2:1 salts, we expect Eq. (\\ref{eq:Gammaapprox})\nto be accurate as well, since it is based on exact expansions.\nThe situation of other salt asymmetries is more conjectural\n(see Appendix \\ref{app:B}), but Eq. (\\ref{eq:Gammaapprox}) is \nnevertheless in remarkable \nagreement with the full solution of Eq. (\\ref{eq:PB}),\nsee Fig. \\ref{fig:Gamma}. To be specific, in both Figures\n\\ref{fig:Gamma11} and \\ref{fig:Gamma}, the relative accuracy \nof our approximation is better than 0.2\\% for $\\kappa a=10^{-2}$\n(for both ss and ds RNA parameters). At $\\kappa a=0.1$,\nthe accuracy is on the order of 1\\%.\n\n\n\n\n\\begin{figure}[htb]\n\\vskip 4.5mm\n\\includegraphics[height=6cm,angle=0]{Gamma.eps}\n\\caption{Same as Figure \\ref{fig:Gamma11} for a 1:3 and a 3:1 \nelectrolyte. From Table \\ref{table:1}, we have ${\\cal C} \\simeq -1.21$\nin the 1:3 case and conversely ${\\cal C}\\simeq -1.94$ in the 3:1 case.\nThe symbols correspond to the numerical solution of Eq. \n(\\ref{eq:PB}) and the continuous curves show the results of\nEq. (\\ref{eq:Gammaapprox}) with again $\\widetilde\\mu$\ngiven by (\\ref{eq:mutilde}).\n\\Label{fig:Gamma}\n}\n\\end{figure}\n\n\\begin{figure}[htb]\n\\vskip 4.5mm\n\\includegraphics[height=6cm,angle=0]{Gamma_kap0.01.eps}\n\\caption{Preferential interaction coefficient\nfor a 1:1 salt (hence ${\\cal C}\\simeq -1.502$)\nand $\\kappa a=10^{-2}$. The circles show the numerical \nsolution of PB theory (\\ref{eq:PB}), the continuous curve\nis for (\\ref{eq:Gammaapprox}) with (\\ref{eq:mutilde}), and \nthe dashed line is the prediction of Ref. \\cite{Shkel}. \nAlthough approximation (\\ref{eq:mutilde}) breaks down \nat low $\\xi$, the inset shows that $\\widetilde\\mu$ following from the\nsolution of Eq. (\\ref{eq:mutantilde}) gives, through (\\ref{eq:Gammaapprox}),\na $\\Gamma$ (continuous curve) that is in excellent agreement with\nthe ``exact'' one, shown with circles as in the main graph. \n\\Label{fig:Gamkap0.01}\n}\n\\end{figure}\n\n\nAs illustrated in Fig. \\ref{fig:Gamkap0.01}, \napproximation (\\ref{eq:mutilde}) assumes that $z_+ \\xi>1$.\nThe corresponding expression for $\\Gamma$ therefore\nbreaks down when $\\xi$ is too low.\nMore general expressions, still for $\\kappa a<1$,\nmay be found in Appendix \\ref{app:C}. \nThe inset of Fig. \\ref{fig:Gamkap0.01} offers an\nillustration and shows that the limitations of approximation\n(\\ref{eq:mutilde}) may be circumvented at little cost,\nproviding a quasi-exact value for $\\Gamma$. \nMoreover, it is shown in this appendix that for $z_+ \\xi=1$,\n$\\widetilde\\mu$ reads\n\\begin{equation}\n\\widetilde\\mu \\, \\simeq\\, \\frac{-\\pi\/2}{\\log(\\kappa a) +{\\cal C}} .\n\\label{eq:muxi1}\n\\end{equation}\nOn the other hand, Eq. (\\ref{eq:potsurf}) still holds.\nThe corresponding $\\Gamma$ is shown in Fig. \n\\ref{fig:Gamxi1}.\n\n\n\\begin{figure}[htb]\n\\vskip 4.5mm\n\\includegraphics[height=6cm,angle=0]{Gamma_xi1.eps}\n\\caption{Same as Fig. \\ref{fig:Gamma11} for $\\xi=1$ and\n$z_+\/z_-=1$. The same quantities are shown: \nour prediction for $\\Gamma$ [Eqs. (\\ref{eq:Gammaapprox})\nand (\\ref{eq:muxi1}) with ${\\cal C}\\simeq -1.502$] \nis compared to that of Ref. \\cite{Shkel}. 
\nThe inset shows $-z_+\\Gamma\/z_-$ for a 1:2 salt such as MgCl$_2$,\nfor which $\\cal C$ takes the value -1.301.\nCircles: numerical data; curve: our prediction. \n\\Label{fig:Gamxi1}\n}\n\\end{figure}\n\nWe provide in Appendix \\ref{app:C} a general expression for the\nshort scale (i.e. valid up to $\\kappa r \\sim 1$)\nradial dependence of the electric potential,\nsee Eq. (\\ref{eq:phigen}).\nThe bare charge should not be too low [more precisely, one must have\n$\\xi>\\xi_c$ with $\\xi_c$ given by Eq. (\\ref{eq:xicapp})],\nand $\\widetilde\\mu$ --which encodes the dependence on $\\xi$-- \nfollows from solving Eq.~(\\ref{eq:mutantilde}).\nIn general, the corresponding solution should be found \nnumerically. However, one can show {\\em a)} that $\\widetilde\\mu$\nvanishes for $\\xi=\\xi_c$, {\\em b)} that $\\widetilde\\mu$ takes the value\n(\\ref{eq:muxi1}) when $z_+\\xi=1$, and {\\em c)} that \n$\\widetilde\\mu$ is given by (\\ref{eq:mutilde}) when $z_+\\xi$\nexceeds unity by a small and salt dependent amount.\nIn practice, for DNA and RNA, we have $\\xi>2$ and\nEq. (\\ref{eq:mutilde}) provides excellent results\nwhenever $\\kappa a<0.1$. To illustrate this, we compare in Figure\n\\ref{fig:pot13} the potential following from the analytical expression\n(\\ref{eq:phigen}) to its numerical counterpart. We do not \ndisplay 1:1, 1:2 and 2:1 results since in these cases, Eq. (\\ref{eq:phigen})\nis obtained from an exact expansion and fully captures the\n$r$-dependence of the potential. For the asymmetry 1:3,\nFig. \\ref{fig:pot13} shows that the relatively simple form\n(\\ref{eq:phigen}) is very reliable. A similar agreement has been\nfound for all couples $z_-$:$z_+$ sampled, with the trend that \nthe validity of (\\ref{eq:phigen}) extends to larger distances\nas $z_+\/z_-$ is decreased. In this respect, the agreement\nshown in Fig. \\ref{fig:pot13}, for which $z_+\/z_-$ is quite high\n(3), is one of the ``worst'' observed.\n\n\\begin{figure}[htb]\n\\vskip 4.5mm\n\\includegraphics[height=6cm,angle=0]{pot13_xi2.eps}\n\\caption{Opposite of the \nelectric potential versus radial distance in a 1:3 electrolyte\nwith $\\kappa a =10^{-2}$.\nThe continuous curve shows the prediction of Eq. (\\ref{eq:phigen})\nwith $\\widetilde\\mu$ given by (\\ref{eq:mutilde});\nthe circles show the numerical solution of Eq. (\\ref{eq:PB}).\nThe potential for $\\xi=2.2$ is shown in the main graph\non a log-linear scale, and on a linear scale in the lower\ninset. The upper inset is for $\\xi=5$. \n\\Label{fig:pot13}\n}\n\\end{figure}\n\n\n\n\n{\\em Conclusion}. \nThe poly-ion--ion preferential interaction \ncoefficient $\\Gamma$ describes the exclusion of co-ions in the vicinity of\na polyelectrolyte in an aqueous solution. We have obtained\nan accurate expression for $\\Gamma$ in the regime of low\nsalt ($\\kappa a <1$). The present results are particularly relevant for\nhighly charged poly-ions\n($z_+ \\xi >1$, that is beyond the classical Manning threshold\n\\cite{rque20}),\nbut are somewhat more general and also hold in the range \n$\\xi_c<\\xi<1$, where $\\xi$ stands for the line charge\nper Bjerrum length and $\\xi_c$ is a salt dependent threshold,\ngiven by Eq. (\\ref{eq:xicapp}). \nOur formulae have been shown to hold for arbitrary mixed salts\nof the form $z_-$:$z_+$ (magnesium chloride, cobalt hexamine, etc.). They have been derived from exact\nexpansions valid in the 1:1, 1:2 and 2:1 cases, from which a more\ngeneral conjecture has been inferred. 
The validity of this\nconjecture, backed up by analytical arguments,\nhas been extensively tested for various values\nof $z_+\/z_-$, poly-ion charge and salt content.\nThese tests have provided the numerical values of the \nconstant $\\cal C$ reported in Table \\ref{table:1}, which only\ndepends on the ratio $z_+\/z_-$.\nAs a byproduct of our\nanalysis, we have obtained a very accurate expression\nfor the electric potential in the vicinity of the charged rod\n($r < \\kappa^{-1}$).\n\nIt should be emphasized that the validity of our mean-field\ndescription relying on the non-linear Poisson-Boltzmann equation\ndepends on the valency of the counter-ions ($z_+$),\nand to a lesser extent on the value of $z_-$ \\cite{Levin,Grosberg}.\nFor the 1:1 case in a solvent like water at room temperature, micro-ionic correlations can be neglected up to a salt concentration of 0.1M \\cite{Ni}. \nFor $z_+\\geq 2$ or in solvents\nof lower dielectric permittivity, they play a more\nimportant role. Our results however provide mean-field benchmarks\nfrom analytical expressions,\nfrom which the effects of correlations may be assessed in cases\nwhere they cannot be ignored (see e.g. \\cite{Ni} for a detailed\ndiscussion).\n\n\n\\begin{acknowledgments}\n This work was supported by a ECOS Nord\/COLCIENCIAS action of French and\n Colombian cooperation. G.~T.~acknowledges partial financial support from\n Comit\\'e de Investigaciones, Facultad de Ciencias, Universidad de los Andes.\n This work has been supported in part by the NSF PFC-sponsored Center for\n Theoretical Biological Physics (Grants No. PHY-0216576 and PHY-0225630).\n\\end{acknowledgments}\n\n\n\\begin{appendix}\n\\section{}\n\\Label{app:A}\n\nIn order to explicitly relate the preferential coefficient $\\Gamma$ in (\\ref{eq:Gamma})\nto the electric potential, we follow a procedure similar to the one that leads to an analytical \nsolution in the cell model, without added salt \\cite{Fuoss}. Implicit use will be made of the\nboundary conditions associated with (\\ref{eq:PB}). First, integrating \nEq. (\\ref{eq:PB}), one gets\n\\begin{equation}\n[r' \\phi'(r')]_a^r \\,=\\, \\frac{\\kappa^2}{z_++z_-} \\int_a^r \\left( e^{z_-\\phi} - e^{-z_+ \\phi} \n\\right)\\, r' dr' ,\n\\end{equation}\nwhere the notation $[F(r')]_a^r=F(r)-F(a)$ has been introduced.\nThen, multiplying Eq. (\\ref{eq:PB}) by $r^2 \\phi'$ and integrating, we obtain\n\\begin{eqnarray}\n\\frac{z_++z_-}{2 \\kappa^2} \\left[(r' \\phi')^2 \\right]_a^r &=& \\left[ r'^2 \\frac{e^{-z_+ \\phi}}{z_+}\n+ r'^2 \\frac{e^{z_-\\phi}}{z_-}\n\\right]_a^r \n\\nonumber\\\\\n-&&\\!\\!\\!\\!\\!\\!\\!\\!\\!\n \\int_a^r 2r' \\left( \\frac{e^{-z_+ \\phi}}{z_+}\n+ \\frac{e^{z_-\\phi}}{z_-}\n\\right) dr' .\n\\end{eqnarray}\nCombining both relations with adequate weights, in order to suppress \nthe integral over the counter-ion (+) density, we have\n\\begin{eqnarray}\n\\int_a^\\infty r' (e^{z_- \\phi}-1) dr'= \\frac{ z_+ z_-}{\\kappa^2} \\left( \\xi ^2 - \\frac{2 \\xi}{z_+}\\right) \n\\nonumber \\\\\n- \\frac{a^2}{2(z_++z_-)} \\left\\{z_+\\left(e^{z_-\\phi_0} -1 \\right) +z_- \\left(e^{-z_+ \\phi_0} -1\\right)\n\\right\\}\n\\label{eq:a3}\n\\end{eqnarray}\nwhere $\\phi_0=\\phi(a)$ is the surface potential.\nEquation (\\ref{eq:a3}) will turn out to be useful in the formulation \nof a general conjecture concerning the surface potential $\\phi_0$, see \nAppendix \\ref{app:B}. 
\nWe also note that for the systems\nunder investigation here, the surface potential is quite high, and a very good approximation\nto (\\ref{eq:a3}) is\n\\begin{equation}\n\\int_a^\\infty r' (e^{z_- \\phi}-1) dr' \\simeq \\frac{ z_+ z_-}{\\kappa^2} \\left( \\xi ^2 - \\frac{2 \\xi}{z_+}\\right) \n- \\frac{a^2 z_- e^{-z_+ \\phi_0} }{2(z_++z_-)} .\n\\label{eq:a4}\n\\end{equation}\n\n\\section{}\n\\Label{app:B}\n\nWe start by analyzing a 1:1 electrolyte, for which it has been shown\n\\cite{McCoy,McCaskill} that the short distance behaviour reads\n\\begin{equation}\ne^{\\phi\/2} \\,=\\, \\frac{\\kappa r}{4 \\mu}\\, \\sin\\left[\n2 \\mu \\log\\left(\\frac{\\kappa r}{8}\\right) - 2 \\Psi(\\mu)\n\\right] \\,+\\,\n{\\cal O}\\left(\\kappa r\n\\right)^4\n\\label{eq:grandpot}\n\\end{equation}\nwhere $\\Psi$ denotes the argument of the Euler Gamma function \n$\\Psi(x)=\\hbox{arg}[\\Gamma(i x)]$ \\cite{McCoy,McCaskill}. \nIn (\\ref{eq:grandpot}), $\\mu$ denotes the smallest positive root\nof \n\\begin{equation}\n\\tan\\left[\n2 \\mu \\log(\\kappa a\/8) -2 \\Psi(\\mu) \n\\right] \\,=\\, \\frac{2 \\mu}{\\xi-1}.\n\\label{eq:mutan}\n\\end{equation}\nExpressions (\\ref{eq:grandpot}) and (\\ref{eq:mutan})\nrequire that $\\xi$ exceeds a salt dependent \nthreshold [denoted $\\xi_c$ below and given by Eq. (\\ref{eq:xicapp})]\nthat is always smaller than 1 \\cite{TT}.\nThey thus always hold for $\\xi\\geq1$ and in particular encompass the\ninteresting limiting case $\\xi=1$,\nwhich is sufficient for our purposes. \nFor large $\\xi$, we have proposed in \n\\cite{TT} an approximation which amounts to linearizing the \nargument of the tangent in (\\ref{eq:mutan}) in the vicinity of $-\\pi$,\nand similarly linearizing $\\Psi$ to first order: \n$\\Psi(x) \\simeq -\\pi\/2 -\\gamma x + {\\cal O}(x^3)$ where $\\gamma$ is \nthe Euler constant, close to 0.577.\nIt turns out however that finding accurate expressions for\n$\\exp(-z_+\\phi_0)$, which is useful for the computation of the\npreferential interaction coefficient, requires including the first non-linear\ncorrection in the expansion of the tangent. \nAfter some algebra, we find:\n\\begin{eqnarray}\n\\mu & \\simeq &\\frac{-\\pi\/2}{\\log(\\kappa a) +{\\cal C} - (\\xi-1)^{-1}} +\n\\nonumber\\\\\n&&\\frac{\\pi^3}{6(\\log(\\kappa a) +{\\cal C} - (\\xi-1)^{-1})^4}\n\\nonumber\\\\\n&&\\times\n\\left[\\frac{1}{(\\xi-1)^3}+\\frac{\\psi^{(2)}(1)}{8}\n\\right]\n\\label{eq:approxmu}\n\\end{eqnarray}\nwhere the constant ${\\cal C}={\\cal C}^{1:1}$ reads ${\\cal C}^{1:1}=\\gamma\n-\\log 8 \\simeq -1.502$ and $\\psi^{(2)}(1)=d^3\\ln\\Gamma(x)\/dx^3|_{x=1}$. From\n(\\ref{eq:approxmu}) and (\\ref{eq:grandpot}), where the sine is expanded to\nthird order, we obtain \\begin{equation} (\\kappa a)^2 e^{-\\phi_0} \\simeq 4 [(\\xi-1)^2 +\n\\widetilde\\mu^2]\n\\label{eq:phi0a}\n\\end{equation}\nwhere $\\widetilde\\mu$ is given by\n\\begin{equation}\n\\widetilde\\mu \\simeq \\frac{-\\pi}{\\log(\\kappa a) +{\\cal C} - (z_+\\xi-1)^{-1}}.\n\\label{eq:mutildeapp}\n\\end{equation}\nIn writing (\\ref{eq:mutildeapp}), we have introduced the change of \nvariable $\\widetilde\\mu = 2 \\mu$ \\cite{rque1}. The reason is that similar changes \nfor other electrolyte asymmetries allow one to put the final result in a\n``universal'' (electrolyte independent) form, see below. 
\nA similar reason holds for introducing $z_+$, here equal to 1,\nin the denominator of (\\ref{eq:mutildeapp}).\n\nThe functional proximity\nbetween our expressions and those reported in \\cite{Shkel} in the very same \ncontext is striking. We note however that our $\\widetilde\\mu$ (denoted\n$\\beta$ in \\cite{Shkel}) involves a different constant ${\\cal C}$.\nMore importantly, the functional form of (\\ref{eq:grandpot}) differs from that\ngiven in \\cite{Shkel}. \nThe comparison of the performance of our results with that\nof \\cite{Shkel} is addressed below, and is also discussed in the main \ntext.\n\nPerforming a similar analysis as above in the 1:2 case where\n$z_+=2$ and $z_-=1$, we obtain from the expressions derived\nin \\cite{TT}:\n\\begin{equation}\n(\\kappa a)^2 e^{-z_+\\phi_0} \\simeq 3 [(z_+\\xi-1)^2 + \\widetilde\\mu^2]\n\\label{eq:phi0b}\n\\end{equation}\nand similarly, in the 2:1 case ($z_+=1$, $z_-=2$):\n\\begin{equation}\n(\\kappa a)^2 e^{-z_+\\phi_0} \\simeq 6 [(z_+\\xi-1)^2 + \\widetilde\\mu^2].\n\\label{eq:phi0c}\n\\end{equation}\nIn both cases, provided again that $\\xi$ is not too low (see below),\n$\\widetilde\\mu$ is given by (\\ref{eq:mutildeapp}) \\cite{rque2},\nwith however a different numerical value for $\\cal C$\n[${\\cal C}^{1:2} =\\gamma-(3\\log3)\/2-(\\log2)\/3\\simeq -1.301 $\nand ${\\cal C}^{2:1} =\\gamma-(3\\log3)\/2-\\log2\\simeq -1.763 $].\n\nThe similarity of expressions (\\ref{eq:phi0a}), (\\ref{eq:phi0b})\nand (\\ref{eq:phi0c}) leads us to conjecture that this form holds\nfor any $z_-$:$z_+$ electrolyte:\n\\begin{equation}\n(\\kappa a)^2 e^{-z_+\\phi_0} \\simeq {\\cal A} [(z_+\\xi-1)^2 + \\widetilde\\mu^2].\n\\label{eq:phi0d}\n\\end{equation}\nWe then have to determine the prefactor $\\cal A$ as a function \nof $z_+$ and $z_-$. To this end, we make use of the exact relation\n(\\ref{eq:a3}) [or equivalently (\\ref{eq:a4})], where in the limit of\nlarge $\\xi$, the lhs is finite while the two terms on the rhs diverge.\nThis yields the leading order behaviour:\n\\begin{equation}\n(\\kappa a)^2 \\exp(-z_+ \\phi_0) \\stackrel{\\xi\\to\\infty}{\\sim} 2\\,\\frac{z_++z_-}{z_+} \n(z_+\\xi-1)^2 .\n\\end{equation}\nIt then follows that ${\\cal A} = 2 (z_++z_-)\/z_+$, so that our general\nexpression (\\ref{eq:phi0d}) takes the form:\n\\begin{equation}\n(\\kappa a)^2 e^{-z_+ \\phi_0} \\, \\simeq \\, 2\\,\\frac{z_++z_-}{z_+} \n\\left[(z_+\\xi-1)^2 + \\widetilde\\mu^2\\right].\n\\label{eq:phi0gen}\n\\end{equation}\nThis expression holds regardless of the approximation used\nfor $\\widetilde \\mu$. If Eq. (\\ref{eq:mutildeapp}) is used, then $z_+\\xi$ should\nnot be too close to unity (see Appendix \\ref{app:C} for more general results\nincluding the case $z_+\\xi=1$).\n\nIn order to test the accuracy \nof (\\ref{eq:phi0gen}) in conjunction with (\\ref{eq:mutildeapp}), we have solved numerically Eq. (\\ref{eq:PB})\nfor several values of $\\kappa a<1$ and electrolyte asymmetry \nand checked that for several different values of $z_+ \\xi>1$, \nthe quantity \n\\begin{eqnarray}\n{\\cal Q} &=& -\\pi\n\\left[ (\\kappa a)^2 e^{-z_+ \\phi_0} \\,\\frac{z_+}{2(z_++z_-)} - \n(z_+\\xi-1)^2\n\\right]^{-1\/2} \\nonumber\\\\&&\n-\\log(\\kappa a) + (z_+\\xi-1)^{-1}\n\\label{eq:teststri}\n\\end{eqnarray}\nis a constant ${\\cal C}$, which only depends on $z_+\/z_-$ but not on salt and\n$\\xi$ [it should be borne in mind that Eq. 
(\\ref{eq:mutildeapp}) \nis a small $\\kappa a$\nand large $\\xi$ expansion, which becomes increasingly incorrect\nas $\\kappa a$ is increased and\/or $\\xi$ lowered].\nThis is quite a stringent test (since the two terms on the rhs of\n(\\ref{eq:teststri}) are large and close), which requires high numerical \naccuracy. This is achieved following the procedure outlined in \n\\cite{Lang}. In doing so, we confirm the validity of\n(\\ref{eq:phi0gen}) and collect the values of $\\cal C$ given in Table\n\\ref{table:1}. In the 1:1 case, we predict that ${\\cal C} = \\gamma-\\log8\n\\simeq -1.502$, in excellent agreement with the numerical\ndata of Figure \\ref{fig:Q1}.\nOn the other hand, the prediction of Ref. \\cite{Shkel} \nthat $\\cal Q$ reaches a constant close to -1.90 (shown by the horizontal\ndashed-dotted line in Fig. \\ref{fig:Q1}) is incorrect. Figure \\ref{fig:Q1} shows\nthat the quality of expression (\\ref{eq:phi0gen}) deteriorates when \n$\\kappa a$ increases, as expected. \nIt is noteworthy however that for $\\kappa a =10^{-1}$,\nits accuracy is excellent whenever $\\xi>2$. The inset \nof Fig. \\ref{fig:Q1} shows the validity of (\\ref{eq:phi0gen}) \nfor a 3:1 electrolyte. When $z_+\\xi$ is close to 1, \nEq. (\\ref{eq:mutildeapp}) becomes a poor approximation \nto the solution of (\\ref{eq:mutan}), and can therefore not be\ninserted into the general formula (\\ref{eq:phi0gen}).\nThis explains the large deviations between $\\cal Q$\nand the asymptotic value ${\\cal C}$ observed in Fig. \\ref{fig:Q1}\nfor the lower values of $\\xi$ reported.\nWe come back to this point in Appendix \\ref{app:C}.\n\n\\begin{figure}[htb]\n\\vskip 4.5mm\n\\includegraphics[height=6cm,angle=0]{Q1.eps}\n\\caption{Plot of the quantity $\\cal Q$ defined in (\\ref{eq:teststri}) \nversus line charge $\\xi$ for a 1:1 electrolyte at $\\kappa a =10^{-3}$ \n(continuous curve) and $\\kappa a=10^{-1}$ (dashed curve). \nThe value reached at large $\\xi$ is compared to \nthe prediction of \\cite{Shkel}, ${\\cal Q} \\to -(e^\\gamma + \\log 2-\\gamma)\\simeq -1.90$ (horizontal dashed-dotted line), \nwhereas Eqs. (\\ref{eq:phi0gen})\nand (\\ref{eq:mutildeapp}) imply ${\\cal Q} \\to \\gamma-\\log8 \\simeq\n-1.50$, shown by the horizontal\ndotted line. The inset shows the same quantity for a 3:1 electrolyte at \n$\\kappa a=10^{-5}$ [such a very low value is required to determine\nprecisely the value of the asymptotic constant $\\cal C$,\nwhich can subsequently be used at experimentally relevant (higher)\nsalt concentrations]. Here, we obtain ${\\cal Q} \\to -1.94$ (dotted line),\nwhich is the value reported for $\\cal C$ in Table \\ref{table:1}.\n\\Label{fig:Q1}\n}\n\\end{figure}\n\n\nThe present results hold for $z_+\\xi>1+ {\\cal O}(1\/|\\log \\kappa a|)$. \nIn this regime, our analysis shows that Eq. (\\ref{eq:phi0gen})\n[with $\\widetilde\\mu$ given by (\\ref{eq:mutildeapp})] is correct up to\norder $1\/\\log^4(\\kappa a)$ for any ($z_-,z_+$). On the other hand, \nthe results of \\cite{Shkel}, valid in the 1:1 case, appear to be correct\nto order $1\/\\log^2(\\kappa a)$.\nIn addition, our expression for the surface potential \nmay be generalized to a broader range of $\\xi$ values, and \nan expression for the short distance dependence of the electric potential\nmay also be provided. This is the purpose of Appendix \n\\ref{app:C}. 
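\nFor completeness, the numerical solution of Eq. (\\ref{eq:PB}) used in these checks can be reproduced at moderate accuracy with an off-the-shelf boundary value solver. The sketch below (Python with SciPy, illustrative parameters, not the high-precision scheme of \\cite{Lang}) works in the rescaled variable $s=\\kappa r$ and also evaluates $\\Gamma$ from its defining integral:\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.integrate import solve_bvp\n\nka, xi, zp, zm = 1e-2, 2.2, 1.0, 1.0   # kappa*a, line charge, valencies\n\ndef rhs(s, y):\n    # y[0] = phi(s), y[1] = s*phi'(s), with s = kappa*r\n    return np.vstack([y[1] \/ s,\n                      s * (np.exp(zm*y[0]) - np.exp(-zp*y[0])) \/ (zp + zm)])\n\ndef bc(ya, yb):\n    # s*phi' = 2*xi at s = kappa*a, and phi -> 0 far away\n    return np.array([ya[1] - 2.0*xi, yb[0]])\n\ns = np.logspace(np.log10(ka), 1.5, 2000)   # from kappa*a out to kappa*r ~ 30\ny0 = np.vstack([2.0*xi*np.log(s\/s[-1]),    # bare-rod logarithmic guess\n                2.0*xi*np.ones_like(s)])\nsol = solve_bvp(rhs, bc, s, y0, max_nodes=200000, tol=1e-8)\n\nphi0 = sol.y[0, 0]                         # surface potential phi(a)\ngamma = np.trapz((np.exp(zm*sol.y[0]) - 1.0)*sol.x, sol.x)   # Gamma\n\\end{verbatim}\nConvergence from this crude initial guess may require a continuation in $\\xi$ (gradually increasing the charge), especially for large $\\xi$ or very small $\\kappa a$.\n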
\n\n\n\\section{}\n\\Label{app:C}\n\nIn Appendix \\ref{app:B}, the ``universal'' results valid for all \n($z_+,z_-$) have been unveiled partly by a change of variable \n$\\mu \\to \\widetilde\\mu$ from existing expressions \\cite{TT}. In light of these\nresults, and of their accuracy (assessed in particular by the precision \nreached for the preferential interaction coefficient), it is tempting to \ngo further without invoking approximations of (\\ref{eq:mutan}), or of the related \nexpressions for asymmetries other than 1:1. Inspection of the \nresults given in \\cite{TT} for the 1:1, 1:2 and 2:1 cases leads, again with the help of (\\ref{eq:a4}), to the conjecture that \n\\begin{equation}\ne^{z_+\\phi\/2} \\,\\simeq\\, \n\\frac{-\\kappa r}{ \\widetilde\\mu}\\, \\sqrt{\\frac{z_+}{2(z_++z_-)}} \\,\\sin\\left[\n\\widetilde \\mu \\log(\\kappa r) +\\widetilde\\mu\\,{\\cal C}\n\\right]\n\\label{eq:phigen}\n\\end{equation}\nwith \n\\begin{equation}\n\\tan\\left[\\,\n\\widetilde\\mu \\log(\\kappa a) \\,+\\, \\widetilde\\mu \\, {\\cal C}\n\\,\\right] \\,=\\, \\frac{\\widetilde\\mu}{z_+\\xi-1}.\n\\label{eq:mutantilde}\n\\end{equation} We emphasize that (\\ref{eq:phigen}), much as (\\ref{eq:grandpot}), is a\nshort distance expansion and typically holds for $\\kappa r<1$ (hence the\nrequirement that $\\kappa a<1$). In Appendix~\\ref{app:D} we give further\nanalytical support for conjecture~(\\ref{eq:phigen}). A typical plot showing\nthe accuracy of (\\ref{eq:phigen}) is provided in the main text (Fig.\n\\ref{fig:pot13}). For $\\kappa r <0.1$, the agreement with the exact result is better than 0.1\\%, and becomes progressively worse at larger distances (20\\% disagreement at $\\kappa r=1$).\n\n\n\nFrom (\\ref{eq:phigen}), it follows that the integrated charge $q(r)$\nin a cylinder of radius $r$ [that is $q(r)=-r \\phi'(r)\/2$] reads \n\\begin{equation}\nz_+ q(r) \\,=\\, -1 + \\widetilde\\mu \\,\\tan \\left[\\widetilde\\mu \\, \\log \n\\left(\\frac{r}{R_M} \\right) \\right]\n\\end{equation}\nwhere the so-called Manning radius \\cite{Gueron,OS,TT} is given by \n\\begin{equation}\n\\kappa R_M = \\exp\\left(-{\\cal C} -\\frac{\\pi}{2\\widetilde\\mu}\n\\right).\n\\end{equation}\nThe Manning radius is a convenient measure of the counterion \ncondensate thickness. It is the point $r$ where not only $z_+ q(r) =-1$\nbut also where $q(r)$ versus $\\log r$ exhibits an inflection point\n\\cite{Deserno}. For high enough $\\xi$, the logarithmic dependence \nof $1\/\\widetilde\\mu$ with salt [see (\\ref{eq:mutildeapp})] is such that\n$R_M \\propto \\kappa^{-1\/2}$.\n\n\nThe two relations (\\ref{eq:phigen}) and (\\ref{eq:mutantilde})\nencompass those given in Appendix \\ref{app:B} and allow us to \ninvestigate the regime $z_+\\xi<1$, down to $\\xi=\\xi_c$.\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\[\\begin{tikzcd}\n& & \\bullet\\ar[uuu] & \\circled{$\\bullet$}\\ar[uuul,->] & & \\\\\n& & & & & & \\\\\n& & \\bullet\\ar[uu]\\ar[uur,->] & & \\bullet\\ar[uu]\\ar[uul,->] & & \\\\\n & \\circled{$\\bullet$}\\ar[uuur,->] & & \\circled{$\\bullet$}\\ar[uuul,->]\\ar[uuur,->] & & \\circled{$\\bullet$}\\ar[uuul,->] & \\\\\n & & & & & & \\\\\n & \\bullet\\ar[uu]\\ar[uuur,->] & & \\bullet\\ar[uu]\\ar[uuur,->]\\ar[uuul,->] & & \\bullet\\ar[uu]\\ar[uuul,->] & \\\\\n & & & & & & \\\\\n & & & & & & \\\\\n & & \\circled{$1$}\\ar[uuuuur,->]\\ar[uuuuul,->] & & \\circled{$1$}\\ar[uuuuur,->]\\ar[uuuuul,->] & & \\\\\n\\end{tikzcd}\\]\n\n\n\n\nClearly each matrix in $X$, which contains two trivial matrices, is a Suszko reduced model of $\\vdash$. 
Moreover, by applying Theorem \\ref{th: caratterizzazione Plonka suszko ridotta}, one immediately checks that $\\PL(X)\\in\\mathsf{Mod}^{\\textup{Su}}(\\vdash^{l})$.\n\\qed\n\\end{example} \n\n \n\\section*{Acknowledgments}\n\nThe first and the second author were both supported by the grant GBP202\/12\/G061 of the Czech Science Foundation. The first author also acknowledges the ERC grant ``Philosophy of Pharmacology: Safety, Statistical Standards, and Evidence Amalgamation'', GA:639276. The second author was also supported by a Beatriz Galindo fellowship of the Ministry of Education and Vocational Training of the Government of Spain. We are grateful to an anonymous referee for his\/her valuable comments and suggestions.\n \n\n \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\n\nGenerative Adversarial Networks (GANs)~\\citep{gan} have become a popular research topic. \nArguably the most impressive results have been in image synthesis \\citep{biggan, otgan, sngan,sagan,stackgan,CRGAN}, but they have also been applied fruitfully to text generation \\citep{maskgan, leakgan}, domain transfer learning \\citep{cyclegan, stackgan, pix2pix}, and various other tasks \\citep{xian2018feature, ledig2017photo, zhu2017generative,GANFIGHT}.\n\nRecently, \\citet{biggan} substantially improved the results of \\citet{sagan} by using\nvery large mini-batches during training.\nThe effect of large mini-batches in the context of deep learning is well-studied \\citep{smith2017don, goyal2017accurate, keskar2016large, BIGBATCH} and the general consensus\nis that they can be helpful in many circumstances, but the results of \\citet{biggan} suggest that GANs benefit disproportionately from large batches \\citep{OPENQUESTIONS}.\nIn fact, Table 1 of \\citet{biggan} shows that for the Frechet Inception Distance (FID) metric \\citep{FID}\non the ImageNet dataset, scores can be improved from 18.65 to 12.39 simply by making the batch eight times larger.\n\nUnfortunately, increasing the batch size in this manner is not always possible since it\nincreases the computational resources required to train these models -- \noften beyond the reach of conventional hardware. The experiments from the BigGAN \npaper require a full `TPU Pod'. 
The `unofficial' open source release of BigGAN \nworks around this by accumulating gradients across 8 different V100 GPUs and only \ntaking an optimizer step every 8 gradient accumulation steps.\nFuture research on GANs would be much easier if we could have the gains from large\nbatches without these pain points.\nIn this paper, we take steps toward accomplishing that goal by proposing a technique\nthat allows for \\textit{mimicking} large batches without the computational costs\nof actually using large batches.\n\nIn this work, we use Core-set selection \\citep{agarwal2005geometric} to sub-sample\na large batch to produce a smaller batch.\nThe large batches are then discarded, and the sub-sampled, smaller batches are used to train the GAN.\nInformally, this procedure yields small batches with `coverage' similar to \nthat of the large batch -- in particular, the small batch tries to `cover' all the same\nmodes as are covered in the large batch.\nThis technique yields many of the benefits of having large batches with much less computational overhead.\nMoreover, it is generic, and so can be applied to nearly all GAN variants.\n\n\nOur contributions can be summarized as follows:\n\n\\begin{itemize}\n \\item We introduce a simple, computationally cheap method to increase the \n `effective batch size' of GANs, which can be applied to any GAN variant.\n \\item We conduct experiments on the CIFAR and LSUN datasets showing \n that our method can substantially improve FID across different GAN \n architectures given a fixed batch size.\n \\item We use our method to improve the performance of the technique from \n \\citet{kumar2019maximum}, resulting in state-of-the-art performance at \n GAN-based anomaly detection.\n\\end{itemize}\n\n\n\\section{Background and Notation}\n\\paragraph{Generative Adversarial Networks}\nA Generative Adversarial Network (or GAN) is a system of two neural networks trained `adversarially'.\nThe generator, $G$, takes as input samples from a prior $z \\sim p(z)$ and outputs the learned distribution, $G(z)$.\nThe discriminator, $D$, receives as input both the training examples, $X$, and the\nsynthesized samples, $G(z)$, and outputs a distribution\n$D(.)$ over the possible sample sources.\nThe discriminator is then trained to maximize the following objective:\n\\begin{equation}\n\\label{gen_eqn}\n\\mathcal{L}_D = -\\mathbb{E}_{x \\sim p_\\text{data}} [\\log D(x)] -\n\\mathbb{E}_{z \\sim p(z)} [\\log (1 - D(G(z)))]\n\\end{equation}\nwhile the generator is trained to minimize\\footnote{\nThis is the commonly used ``Non-Saturating Cost''.\nThere are many others, but for brevity, and since the technique \nwe describe is agnostic to the loss function, we will omit them.\n}:\n\\begin{equation}\n\\label{dsc_eqn}\n\\mathcal{L}_G = -\\mathbb{E}_{z \\sim p(z)} [\\log D(G(z))]\n\\end{equation}\nInformally, the generator is trained to \\textit{trick} the discriminator into believing that \nthe generated samples $G(z)$ actually come from the target distribution, $p(x)$, while the\ndiscriminator is trained to be able to distinguish the samples from each other.\n\n\\paragraph{Inception Score and Frechet Inception Distance:}\nWe will refer frequently to the Frechet Inception Distance (FID) \\citep{FID} \nto measure the effectiveness of an image synthesis model.\nTo compute this distance, one assumes that we have a pre-trained Inception \nclassifier. 
\nOne further assumes that the activations in the penultimate layer\nof this classifier come from a multivariate Gaussian.\nIf the activations on the real data are $N(m, C)$ and the activations on the\nfake data are $N(m_w, C_w)$, the FID is defined as:\n\n\\begin{equation}\n\\|m-m_w\\|_2^2+ \\Tr \\bigl(C+C_w-2\\bigl(CC_w\\bigr)^{1\/2}\\big)\n\\end{equation}\n\n\\paragraph{Core-set selection:}\n\nIn computational geometry, a Core-set, $Q$, of a set $P$ is a subset $Q \\subset P$ that \napproximates the `shape' of $P$ \\citep{agarwal2005geometric}.\nCore-sets are used to quickly generate approximate solutions to problems whose full solution on the original set would be burdensome to compute.\nGiven such a problem\\footnote{\nAs an example, consider computing the diameter of a point-set \\citep{agarwal2005geometric}.}, one computes $Q$,\nthen quickly computes the solution to the problem for $Q$ and converts that into an approximate solution for the original set $P$.\nThe general Core-set selection problem can be formulated in several ways; here we consider the \\textit{minimax facility location} formulation \\citep{farahani2009facility}:\n\n\\begin{equation}\n \\min_{Q : |Q| = k} \\max_{x_i \\in P} \\min_{x_j \\in Q} d(x_{i}, x_{j})\n\\end{equation}\n\nwhere $k$ is the desired size of $Q$, and $d(., .)$ is a metric on $P$. \nInformally, the formula above encodes the following objective: Find some set, $Q$, of points of size $k$ such that the maximum distance between a point in $P$ and its nearest point in $Q$ is minimized.\nSince finding the exact solution to the minimax facility location problem is NP-Hard \\citep{wolsey2014integer}, we will have to make do with a greedy approximation, detailed\nin Section \\ref{section:greedy}.\n\n\\begin{algorithm}\n \\caption{\\texttt{GreedyCoreset}} \n \\label{alg:coreset}\n\\begin{algorithmic}\n \\renewcommand{\\algorithmicrequire}{\\textbf{Input:}}\n \\renewcommand{\\algorithmicensure}{\\textbf{Output:}}\n \\Require batch size ($k$), data points ($x$ where $|x| > k$)\n \\Ensure subset of $x$ of size $k$\n \\State $s \\gets \\{x_j\\}$ for a uniformly sampled $x_j \\in x$ \\Comment{Seed the sampled set with a random point}\n \\While{$|s| < k$} \\Comment{Iteratively add points to sampled set}\n \\State $p \\gets \\argmax_{x_i \\notin s} \\min_{x_j \\in s} d(x_{i}, x_{j})$\n \\State $s \\gets s \\cup \\{p\\}$\n \\EndWhile\n \\Return $s$\n\\end{algorithmic}\n\\end{algorithm}\n\n\\section{Using Core-set Sampling for GANs (or Small-GAN)}\nWe aim to use Core-set sampling to increase the effective batch size during GAN training.\nThis involves replacing the basic sampling operation that is done implicitly when \nminibatches are created.\nThis implicit sampling operation happens in two places:\nFirst, when we create a minibatch of samples drawn from the prior distribution $p(z)$.\nSecond, when we create a minibatch of samples from the target distribution $p_{\\texttt{data}}(x)$ to update the parameters of the discriminator.\nThe first of these replacements is relatively simple, while the second presents challenges.\nIn both cases, we have to work around the fact that actually doing Core-set sampling is \ncomputationally hard.\n\n\\subsection{Sampling from the Prior}\nWe need to sample from the prior when we update the discriminator and generator parameters.\nOur Core-set sampling algorithm doesn't take into account the geometry of the space we \nsample from, so sampling from a complicated density might cause trouble. 
\nThis problem is not intractable, but it's nicer not to have to deal with it, so in \nthe absence of any evidence that the form of the prior matters very much, \nwe define the prior in our experiments to be the uniform distribution over a hypercube. \nTo add Core-set sampling to the prior distribution, we sample $n$ points from the prior, where $n$ is greater than the desired batch size, $k$. \nWe then perform Core-set selection on the large batch of size $n$ to create a batch of \nsize $k$. \nThe smaller batch is what's actually used to perform an SGD step.\n\n\subsection{Sampling from the Target Distribution}\nSampling from the target distribution is more challenging.\nThe elements drawn from the distribution are high-dimensional images, so taking\npairwise distances between them will tend to work poorly due to concentration of distances \citep{donoho2000high, vaal}, and the fact that Euclidean distances are semantically meaningless in image space \citep{girod1993s, eskicioglu1995image}.\n\nTo avoid these issues, we instead pre-process our data set by computing the `Inception\nEmbeddings' of each image using a pre-trained classifier \citep{szegedy2017inception}.\nThis is commonly done in the transfer-learning literature, where it is generally accepted\nthat these embeddings have nontrivial semantics \citep{yosinski2014transferable}.\nSince this pre-processing happens only once at the beginning of training, it doesn't \naffect the per-training-step performance.\n\nIn order to further reduce the time taken by the Core-set selection procedure, and inspired\nby the Johnson-Lindenstrauss Lemma \citep{dasgupta2003elementary}, we take random low-dimensional projections of the Inception Embeddings.\nThis gives us low-dimensional representations of the training set images in which pairwise Euclidean distances still have meaningful semantics.\nWe can then use Core-set sampling on those representations to select images at training time, analogous to how we select images from the prior.\n\n\subsection{Greedy Core-set Selection}\n\label{section:greedy}\nIn the above sections, we have invoked Core-set selection while glossing over the detail\nthat exactly solving the $k$-center problem is NP-hard.\nThis is important, because we propose to use Core-set selection at \textit{every} training\nstep\footnote{\nThough the Core-set sampling does happen on the CPU, and so could be done in parallel with the GPU operations used to train the model, as long as the Core-set sampling time doesn't exceed the time of a forward and backward pass -- which it doesn't.}.\nFortunately, we can make do with an approximate solution, which is faster to compute:\nwe use the greedy $k$-center algorithm (similar to \citet{sener2017active}) \nsummarized in Alg. \ref{alg:coreset}.\n\n\subsection{Small-GAN}\nOur full proposed algorithm for GAN training is presented in Alg. \ref{alg:coreset_gan}. \nOur technique is agnostic to the underlying GAN framework and therefore can replace random sampling of mini-batches for all GAN variants. 
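For concreteness, the greedy selection of Alg.~\ref{alg:coreset} and the over-sample-then-select step for the prior \ncan be sketched in a few lines of NumPy (a minimal sketch under our reading of the text, assuming a uniform \nhypercube prior and an over-sampling factor of 4; the function names are ours, not from any released code):\n\n\begin{verbatim}\nimport numpy as np\n\ndef greedy_coreset(x, k, seed=0):\n    # Greedy k-center (farthest-first traversal), approximating Alg. 1.\n    rng = np.random.default_rng(seed)\n    chosen = [int(rng.integers(len(x)))]   # seed with an arbitrary point\n    d = np.linalg.norm(x - x[chosen[0]], axis=1)\n    while len(chosen) < k:\n        p = int(np.argmax(d))              # farthest point from the chosen set\n        chosen.append(p)\n        d = np.minimum(d, np.linalg.norm(x - x[p], axis=1))\n    return np.asarray(chosen)\n\ndef sample_prior_coreset(k, dim, factor=4, seed=0):\n    # Over-sample the uniform hypercube prior, then keep a Core-set of size k.\n    rng = np.random.default_rng(seed)\n    z = rng.uniform(-1.0, 1.0, size=(factor * k, dim))\n    return z[greedy_coreset(z, k, seed)]\n\end{verbatim}\n\nThe same \texttt{greedy\_coreset} call is reused on the (projected) Inception embeddings to sub-sample real images.\n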
\nMore implementation details and design choices are presented in Section \ref{exp}.\n\n\begin{algorithm}\n \caption{\texttt{Small-GAN}} \n \label{alg:coreset_gan}\n\begin{algorithmic}\n \renewcommand{\algorithmicrequire}{\textbf{Input:}}\n \renewcommand{\algorithmicensure}{\textbf{Output:}}\n \Require target batch size ($k$), starting batch size ($n > k$), Inception embeddings ($\phi_{I}$)\n \Ensure a trained GAN\n \State Initialize networks $G$ and $D$\n \For{$step = 1 \text{ to } ...$}\n \State $z \sim p(z)$ \Comment{Sample $n$ points from the prior}\n \State $x \sim p(x)$ \Comment{Sample $n$ points from the data distribution}\n \State $\phi(x) \gets \phi_{I}(x)$ \Comment{Get cached embeddings for $x$}\n \State $\hat{z} \gets \texttt{GreedyCoreset}(z)$ \Comment{Get Core-set of $z$}\n \State $\widehat{\phi(x)} \gets \texttt{GreedyCoreset}(\phi(x))$ \Comment{Get Core-set of embeddings}\n \State $\hat{x} \gets \phi_{I}^{-1}(\widehat{\phi(x)})$ \Comment{Get $x$ corresponding to sampled embeddings}\n \State Update GAN parameters as usual\n \EndFor\n\end{algorithmic}\n\end{algorithm}\n\n\section{Experiments}\n\label{exp}\n\nIn this section we look at the performance of our proposed sampling method on various tasks:\nIn the first experiment, we train a GAN on a Gaussian mixture dataset with a large number of modes and confirm our method substantially mitigates `mode-dropping'.\nIn the second, we apply our technique to GAN-based anomaly detection \citep{kumar2019maximum} and significantly improve on prior results.\nFinally, we test our method on standard image synthesis benchmarks and confirm that \nour technique substantially reduces the need for large mini-batches in GAN training.\nThe variety of settings in these experiments testifies to the generality of our proposed\ntechnique.\n\n\subsection{Implementation Details}\n\nFor our Core-set algorithm, the distance function, $d(\cdot,\cdot)$, is the $\ell_2$-norm for both the prior and target distributions. \nThe hyper-parameters used in each experiment are the same as originally proposed in the paper introducing that experiment, unless stated otherwise.\nFor over-sampling, we use a factor of 4 for the prior $p(z)$ and a factor of 8 for the target, $p(x)$, \nunless otherwise stated.\nWe investigate the effects of different over-sampling factors in the ablation study in Section \ref{ablation}.\n\n\subsection{Mixture of Gaussians}\n\label{mog_section}\nWe first investigate the problem of mode dropping \citep{arora2018gans} in GANs, where the GAN generator is unable to recover some modes from the target data set.\nWe investigate the performance of training a GAN to recover a varying number \nof modes of 2D isotropic Gaussian distributions, with a standard deviation of 0.05. We use \na similar experimental setup to \citet{DRS}, where our generator and discriminator are parameterized\nusing four fully-connected ReLU networks, and use the standard GAN loss in Eqs.\ \ref{dsc_eqn} and \ref{gen_eqn}. \nTo evaluate the performance of the models, we generate $10,000$ samples and assign them to their closest\nmode. \nAs in \citet{DRS}, the metrics we use to evaluate performance are: $i)$ `high quality samples', which are samples within 4 standard deviations of the assigned mode, and $ii)$ `recovered modes', which are mixture components with at least one assigned sample. 
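As a concrete reading of these two metrics, a minimal NumPy sketch (our own illustration; it assumes\n\texttt{samples} and \texttt{modes} are arrays of 2D points) is:\n\n\begin{verbatim}\nimport numpy as np\n\ndef mog_metrics(samples, modes, std=0.05):\n    # Assign each generated sample to its closest mixture mode.\n    d = np.linalg.norm(samples[:, None, :] - modes[None, :, :], axis=-1)\n    nearest = np.argmin(d, axis=1)\n    dist = d[np.arange(len(samples)), nearest]\n    # i) `high quality': within 4 standard deviations of the assigned mode.\n    high_quality = 100.0 * np.mean(dist < 4.0 * std)\n    # ii) `recovered': modes with at least one assigned sample.\n    recovered = 100.0 * np.unique(nearest).size \/ len(modes)\n    return high_quality, recovered\n\end{verbatim}\n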
\n\nOur results are presented in Table \ref{tab:gaussians_exp}, where we experiment with an increasing number\nof modes. \nWe see that as the number of modes increases, a normal GAN suffers from increased\nmode dropping and lower sample quality compared to Core-set selection. \nWith 100 modes, Core-set selection recovers 97.33\% of the modes compared to 90.67\%\nfor the vanilla GAN. Core-set selection also generates 49.87\% `high quality' samples\ncompared to 23.31\% for the vanilla GAN.\n\n\n\begin{table}[t]\n \centering\n \begin{tabular}{|c|@{\hskip 0.05in}|c|c||c|c|}\n \hline\n Number of & \% of Recovered & \% of Recovered & \% of High-Quality & \% of High-Quality\\\n Modes & Modes (GAN) & Modes (Ours) & Samples (GAN) & Samples (Ours) \\ \n \hline \n \hline\n 25 & \textbf{100} & \textbf{100} & 95.76 & \textbf{98.9} \\ \n \hline\n 36 & \textbf{100} & \textbf{100} & 92.73 & \textbf{95.34} \\ \n \hline\n 49 & 98.12 & \textbf{99.85} & 84.28 & \textbf{88.1} \\ \n \hline\n 64 & 96.13 & \textbf{99.01} & 68.81 & \textbf{82.11} \\ \n \hline\n 81 & 92.59 & \textbf{98.84} & 49.74 & \textbf{71.75} \\ \n \hline\n 100 & 90.67 & \textbf{97.33} & 23.31 & \textbf{49.87} \\ \n \hline\n \end{tabular}\n \vskip 0.05in\n \caption{Experiments with a large number of modes}\n \label{tab:gaussians_exp}\n\end{table}\n\n\n\n\subsection{Anomaly Detection}\n \n\begin{table}[t]\n \centering\n \begin{tabular}{|c|@{\hskip 0.05in}|c|c|c|}\n \hline\n Held-out Digit & Bi-GAN & MEG & Core-set+MEG \\\n \hline\n \hline\n 1 & 0.287 & 0.281 & \textbf{0.351} \\\n \hline\n 4 & 0.443 & 0.401 & \textbf{0.501} \\\n \hline\n 5 & 0.514 & 0.402 & \textbf{0.518} \\\n \hline\n 7 & 0.347 & 0.29 & \textbf{0.387} \\\n \hline\n 9 & 0.307 & 0.342 & \textbf{0.39} \\\n \hline\n \end{tabular}\n \caption{Experiments with anomaly detection on the MNIST dataset. The held-out digit represents the digit\n that was held out of the training set during training and treated as the anomaly class. The numbers\n reported are the area under the precision-recall curve.}\n \label{tab:anomaly_exp}\n\end{table}\n \nTo see whether our method can be useful for more than just GANs, we also apply it to the \nMaximum Entropy Generator (MEG) from \citet{kumar2019maximum}.\nMEG is an energy-based model whose training procedure requires maximizing the entropy of the \nsamples generated from the model.\nSince MEG gives density estimates for arbitrary data points, it can be used for \nanomaly detection -- a fundamental goal of machine learning research \citep{chandola2009anomaly,kwon2017survey} -- \nin which one aims to find samples that are `atypical' given a source data set.\n\citet{kumar2019maximum} do use MEG successfully for this purpose, achieving results close to those of the state-of-the-art\ntechnique for GAN-based anomaly detection \citep{bigan}.\nWe hypothesized that -- since energy estimates can in theory be improved by larger batch sizes -- these results \ncould be further improved by using Core-set selection, and we ran an experiment to test this hypothesis.\n\nWe follow the experimental set-up from \citet{kumar2019maximum} by training the MEG \nwith all samples from a chosen MNIST digit left out during training. 
\nThose samples then serve as the `anomaly class' during evaluation.\nWe report the area under the precision-recall curve and average the score over the last 10 epochs.\nThe results are reported in Table \\ref{tab:anomaly_exp}, which provides clear evidence in favor\nof our above hypothesis: for all digits tested, adding Core-set selection to MEG substantially \nimproves the results.\nBy performing these experiments, we aim to show the general applicability of Core-set selection,\nnot to suggest that MEG is superior to BiGANs \\citep{bigan} on the task.\nWe think it's likely that similar improvements could be achieved by using Core-set selection with BiGANs.\n\n\n\\begin{table}[t]\n \\centering\n \\begin{tabular}{|c|c||c|c||c|c|}\n \\hline\n & Small-GAN & & Small-GAN & & Small-GAN \\\\\n GAN (batch- & (batch-size & GAN (batch- & (batch-size & GAN (batch- & (batch-size \\\\\n size = 128) & = 128) & size = 256) & = 256) & size = 512) & = 512)\\\\\n \\hline\n \\hline\n 18.75 $\\pm$ 0.2 & \\textbf{16.73 $\\pm$ 0.1} & 17.9 $\\pm$ 0.1 & \\textbf{16.22 $\\pm$ 0.3} & 15.68 $\\pm$ 0.2 & \\textbf{15.08 $\\pm$ 0.1} \\\\\n \\hline\n \\end{tabular}\n \\caption{FID scores for CIFAR using SN-GAN as the batch-size is progressively doubled.\n The FID score is calculated using $50,000$ generated samples from the generator.}\n \\label{tab:cifar_exp}\n\\end{table}\n\n\\begin{table}[t]\n \\centering\n \\begin{tabular}{|c||c|c|c|}\n \\hline\n Small-GAN (batch- & GAN (batch- & GAN (batch- & GAN (batch-\\\\\n size = 64) & size = 64) & size = 128) & size = 256) \\\\\n \\hline\n \\hline\n 13.08 & 14.82 & 13.02 & 12.63 \\\\\n \\hline\n \\end{tabular}\n \\caption{FID scores for LSUN using SAGAN as the batch-size is progressively doubled.\n The FID score is calculated using $50,000$ generated samples from the generator.\n All experiments were run on the `outdoor church' subset of the dataset.}\n \\label{tab:lsun_exp}\n\\end{table}\n\n\\subsection{Image Synthesis}\n\n\\paragraph{CIFAR and LSUN:}\nWe also conduct experiments on standard image synthesis benchmarks.\nTo further show the generality of our method, \nwe experiment with two different GAN \narchitectures and two image datasets. \nWe use Spectral Normalization-GAN \\citep{sngan} and Self Attention-GAN \n\\citep{sagan} on the CIFAR \\citep{cifar} and LSUN \\citep{lsun} datasets, respectively.\nFor the LSUN dataset, which consists of 10 different categories, we train the model using the `outdoor church' subset of the data.\n\nFor evaluation, we measured the FID scores \\citep{FID} of $50,000$ generated samples from\nthe trained models\\footnote{Note that we measure the performance of all the models using the PyTorch\nversion of FID scores, and not the official Tensorflow one. We ran all our experiments \nwith the same code for accurate comparison.}.\nWe compare the performance using SN-GANs with and without Core-set selection across progressively\ndoubling batch sizes. 
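For reference, the FID numbers here and throughout are instances of the closed form given in Section 2,\napplied to Inception activations of real and generated samples. A minimal sketch of that computation\n(our own illustration using NumPy\/SciPy; the reported scores come from the PyTorch FID code mentioned above) is:\n\n\begin{verbatim}\nimport numpy as np\nfrom scipy import linalg\n\ndef fid(act_real, act_fake):\n    # Fit Gaussians N(m, C) and N(m_w, C_w) to the two activation sets.\n    m, mw = act_real.mean(axis=0), act_fake.mean(axis=0)\n    C, Cw = np.cov(act_real, rowvar=False), np.cov(act_fake, rowvar=False)\n    covmean = linalg.sqrtm(C.dot(Cw))      # matrix square root of C*C_w\n    if np.iscomplexobj(covmean):           # drop tiny imaginary round-off\n        covmean = covmean.real\n    return float(np.sum((m - mw) ** 2) + np.trace(C + Cw - 2.0 * covmean))\n\end{verbatim}\n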
\nWe observe a similar effect to \citet{biggan}: just by increasing the mini-batch size by a factor of 4, \nfrom 128 to 512, we are able to improve the FID scores from 18.75 to 15.68 for SN-GANs.\nThis further demonstrates the importance of large mini-batches for GAN training.\nAdding Core-set selection significantly improves the performance of the underlying GAN \nfor all batch-sizes.\nFor a batch size of 128, our model using Core-set sampling significantly outperforms the \nnormal SN-GAN trained with a batch size of 256, and is comparable to an SN-GAN trained \nwith a batch size of 512.\nThe results suggest that the models perform significantly better for any given batch size \nwhen Core-set sampling is used.\n\nHowever, Core-set sampling does become less helpful as the underlying batch size increases:\nfor SN-GAN, the performance improvement at a batch size of 128 is much larger than the\nimprovement at a batch size of 512.\nThis supports the hypothesis that Core-set selection works by approximating the coverage\nof a larger batch; a larger batch can already recover more modes of the data -- so under\nthis hypothesis, we would expect Core-set selection to help less.\n\nWe see similar results when experimenting with \nSelf Attention GANs (SAGAN) \citep{sagan} on the LSUN dataset \citep{lsun}. \nCompared to our results with SN-GAN, increasing the batch size results in a smaller difference in the performance for the SAGAN model, but we still see the FID improve from 14.82 to 12.63 as the batch-size is increased by a factor of 4.\nUsing Core-set sampling with a batch size of 64, we are able to achieve a comparable score \nto when the model is trained with a batch size of 128.\nWe believe that one reason for a comparatively smaller advantage of using Core-set sampling on\nLSUN is the nature of the data itself:\nusing the `outdoor church' subset of LSUN reduces the total number of `modes'\n\textit{possible} in the target distribution, since images of churches have fewer differences than the images in the CIFAR-10 data set.\nWe see similar effects in the mixture of Gaussians experiment (see Section \ref{mog_section}),\nwhere the relative difference between a GAN trained with and without Core-set sampling\nincreases as the number of modes is increased.\n\n\paragraph{ImageNet:}\nFinally, in order to test that our method would work `at-scale', we ran an experiment on the ImageNet data set.\nUsing the code at \url{https:\/\/github.com\/heykeetae\/Self-Attention-GAN}, we trained two GANs:\nthe first is trained exactly as described in the open-source code;\nthe second is trained using Core-set selection, with all other hyper-parameters unchanged.\nSimply adding Core-set selection to the existing SAGAN code materially improved the FID\n(which we compute using $50,000$ samples): the baseline model had an FID of 19.40 and the\nCore-set model had an FID of 17.33.\n\n\subsection{Timing Analysis}\n\begin{table}[t]\n \centering\n \begin{tabular}{|c|c|c|c|}\n \hline\n Small-GAN (batch & SN-GAN (batch & SN-GAN (batch & SN-GAN (batch \\\n size = 128) & size = 128) & size = 256) & size = 512) \\\n \hline\n 14.51 & 13.31 & 26.46 & 51.64 \\\n \hline\n \end{tabular}\n \caption{Time taken to perform 50 gradient updates for SN-GAN with and without Core-sets.\n The time is measured in seconds.\n All the experiments were performed on a single NVIDIA Titan-XP GPU.\n The sampling factor was 4 for the prior and 8 for the target distribution.}\n \label{tab:time_table}\n\end{table}\n\n\nSince random sampling can be done 
very quickly, it is important to investigate the amount\nof time it takes to train GANs with and without Core-set sampling.\nWe measured the time for SN-GAN to do 50 gradient steps on the CIFAR dataset with various mini-batch sizes: the results are in Table \ref{tab:time_table}.\nOn average, for each gradient step, the time added by performing Core-set sampling is only 0.024 seconds. \n\n\subsection{Ablation Study}\n\label{ablation}\n\nWe conduct an ablation study to investigate the reasons for the effectiveness of Core-set selection.\nWe also investigate the effect of different sampling factors and other hyper-parameters.\nWe run all ablation experiments on the task of image synthesis using SN-GAN \citep{sngan} with the CIFAR-10 dataset \citep{cifar}.\nWe use the same hyper-parameters as in our main image synthesis experiments and a batch size of 128, unless otherwise stated.\n\n\subsection{Examination of Main Hyper-Parameters}\n\n\begin{table}[t]\n \centering\n \begin{tabular}{|c|@{\hskip 0.05in}|c|c|c|c|c|}\n \hline\n Small-GAN & A & B & C & D & E \\\n \hline\n \textbf{16.73} & 18.75 & 18.09 & 17.03 & 17.88 & 17.45 \\\n \hline\n \end{tabular}\n \caption{FID scores for CIFAR using SN-GAN.\n The experiment list is: A = training a standard SN-GAN without Core-set selection, \n B = Core-set selection directly on the images,\n C = Core-set applied directly on Inception embeddings without a random projection, \n D = Core-set applied only on the prior distribution, \n E = Core-set applied only on the target distribution.}\n \label{tab:proposed_ablation_table}\n\end{table}\n\nWe examine $i)$ the importance of the chosen target distribution \nfor Core-set selection and $ii)$ the importance of performing Core-set selection on that target distribution.\nThe FID scores are reported in Table \ref{tab:proposed_ablation_table}. \n\nThe importance of the target distribution is clear, since \nperforming Core-set selection directly on the images (experiment B) performs similarly to random sampling. \nExperiment C supports our hypothesis that performing a random projection on the Inception embeddings can preserve semantic information while reducing the dimensionality \nof the features.\nThis increases the effectiveness of Core-set sampling and reduces sampling time.\n\nOur ablation study also shows the importance of performing Core-set selection on both the \nprior and target distribution. 
\nThe FID scores of the models are considerably worse when Core-set sampling is used on either distribution alone.\n\n\subsection{Examination of Sampling Factors}\n\n\begin{table}[t]\n \centering\n \begin{tabular}{|c|c|c|c|c|c|c|c|c|}\n \hline\n A & B & C & D & E & F & G & H & I\\\n \hline\n 18.01 & 17.8 & 17.59 & 17.12 & 16.83 & \textbf{16.73} & 16.9 & 17.95 & 20.79\\\n \hline\n \end{tabular}\n \caption{FID scores for CIFAR using SN-GAN.\n Each experiment corresponds to a different pair of over-sampling factors for the prior and target\n distributions.\n The factors are listed as: sampling factor for prior distribution $\times$ \n sampling factor for target distribution.\n A = $2\times2$; B = $2\times4$;\n C = $4\times2$; D = $4\times4$; \n E = $8\times4$; F = $4\times8$; \n G = $8\times8$; H = $16 \times 16$; \n I = $32 \times 32$}\n \label{tab:sampling_factor_table}\n\end{table}\n\nAnother important hyper-parameter for training GANs using Core-set selection is the sampling factor.\nIn Table \ref{tab:sampling_factor_table} we varied the factors by which both the prior and the target distributions were over-sampled.\nWe see that using 4 for the sampling factor for the prior and 8 for the sampling factor\nfor the target distribution results in the best performance. \n\n\section{Related Work}\n\n\subsection{Variance Reduction in GANs}\nResearchers have proposed reducing variance in GAN training from an optimization perspective, by directly changing the way each of the networks is optimized. \nSome have proposed applying the extragradient method \citep{chavdarova2019reducing}, and \nothers have proposed casting the \textit{minimax} two-player game as a variational-inequality problem \citep{gidel2018variational}.\n\citet{biggan} recently proposed to reduce variance directly by using large mini-batch sizes.\n\n\subsection{Stability in GAN Training}\nStabilizing GANs has been extensively studied theoretically.\nResearchers have worked on improving the dynamics of the two-player\nminimax game in a variety of ways \citep{nagarajan2017gradient, mescheder2018training, mescheder2018convergence, li2017towards, arora2017generalization}. \nTraining instability has been linked to the architectural properties of GANs, especially those of the discriminator \citep{sngan}. \nProposed architectural stabilization techniques include using Convolutional \nNeural Networks (CNNs) \citep{dcgan}, using very large batch sizes \citep{biggan}, \nusing an ensemble of discriminators \citep{durugkar2016generative}, using spectral normalization for the discriminator \citep{sngan}, adding self-attention layers for the generator and discriminator networks \citep{attention, sagan}, and using iterative updates to a \textit{global} generator and discriminator using an ensemble of paired generators \nand discriminators \citep{chavdarova2018sgan}. \nDifferent objectives have also been proposed to stabilize GAN training \n\citep{wgan, improved_wgan, mmdgan, lsgan, fishergan, cramergan}.\n\n\subsection{Core-set Selection}\nCore-set sampling has been widely studied from an algorithmic perspective in attempts to find better approximate solutions to the original NP-Hard problem \n\citep{agarwal2005geometric, clarkson2010coresets, pratap2018faster}. \nThe optimality of the sub-sampled solutions has also been studied theoretically\n\citep{barahona2005near, goldman1971optimal}. \nSee \citet{phillips2016coresets} for a recent survey on Core-set selection algorithms. 
\nCore-sets have been applied to many machine learning problems such as $k$-means and approximate \nclustering \citep{har2004coresets, har2007smaller, badoiu2002approximate}, active learning for SVMs \n\citep{tsang2005core, tsang2007simpler}, unsupervised subset selection for hidden Markov models \n\citep{wei2013using}, scalable Bayesian inference \citep{huggins2016coresets}, and mixture models \citep{feldman2011scalable}. \nWe are not aware of Core-set selection being applied to GANs.\n\n\subsection{Core-set Selection in Deep Learning}\nCore-set selection is largely underexplored in the Deep Learning literature, but interest has recently increased. \n\citet{sener2017active} proposed to use Core-set sampling as a batch-mode active learning sampler for CNNs. \nTheir method used the embeddings of a trained network to sample from. \citet{mussay2019activation} \nproposed using Core-set selection on the activations of a neural network for network compression. \nCore-set selection has also been used in continual learning to sample points for episodic memory \citep{vcl}.\n\n\section{Conclusion}\n\nIn this work we present a general way to mimic the benefits of a large batch size in GANs while minimizing computational overhead. \nThis technique uses Core-set selection and improves performance in a wide variety of contexts.\nThis work also suggests further research: a similar method could be applied to other learning tasks where large mini-batches may be useful.\n\n\section{Acknowledgements}\nWe thank Colin Raffel for feedback on a draft of this paper.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nA research program\non low energy neutrino physics~\cite{pdgnuphys}\nis being pursued by the TEXONO\nCollaboration at the Kuo-Sheng (KS) Reactor Laboratory\nin Taiwan~\cite{ksprogram}.\nA search for the neutrino magnetic moment~\cite{magmommpla}\nwas performed with an ultra-low-background\nhigh-purity germanium detector (HPGe),\nfrom which a limit of\n$\mu_{\nu} ( \rm{\bar{\nu}_e} ) < 7.4 \times 10^{-11} ~ \mu_{\rm B}$\nat 90\% confidence level (CL) was\nderived~\cite{texonomagmom}.\nDuring the course of this study, the unique signature\nof the decay of the $\rm{^{73}Ge^{*} ( 1\/2 ^{-} ) }$ metastable state\nwas noted.\n\nIn this report, we discuss \nthese decay signatures \nin Section~\ref{sect::expt}\nand present studies\nof their production channels\nin Section~\ref{sect::prod}.\nThese investigations are of relevance to\nthe understanding of backgrounds in this\nand other germanium-based low-background experiments,\nsuch as those on \ndouble-beta decays~\cite{pdgdbd,dbdge} \nand cold dark-matter searches~\cite{pdgcdm,cdmge}.\nMoreover, the data also allow the evaluation\nof the experimental \nsensitivities of various\nneutrino-induced nuclear transitions\nin reactor neutrino experiments\nunder realistic conditions,\nthe studies of which are discussed in\nSection~\ref{sect::nint}.\n\n\n\section{Experimental Signatures of $\rm{^{73}Ge^{*} ( 1\/2 ^{-} ) }$ Decays}\n\label{sect::expt}\n\n\begin{figure}[hbt]\n\begin{center}\n\includegraphics[width=12cm]{hpge.eps}\n\end{center}\n\caption{\nSchematic layout of the HPGe\nwith its anti-Compton detectors\nas well as the inner shielding and\nradon purge system.\n}\n\label{kshpge} \n\end{figure} \n\nThe experimental details, including detector hardware, shielding,\nand electronics systems, as well as studies of systematic \nuncertainties, are discussed \nin Ref.~\cite{texonomagmom}.\nThe schematic
of the experimental set-up is shown\nin Figure~\ref{kshpge}. \nThe laboratory is located at a distance of 28~m from a nuclear\nreactor core with a thermal power output of 2.9~GW.\nThe total reactor-$\rm{\bar{\nu}_e}$ flux \nis about $\rm{6.4 \times 10^{12} ~ cm^{-2} s^{-1}}$.\nThe $\rm{\bar{\nu}_e}$ spectrum\nis shown in Figure~\ref{rnuspect}.\nThe $\rm{\bar{\nu}_e}$'s are emitted via\n$\beta ^-$ decays of \n(a) fission fragments and \n(b) $^{239}$U following neutron capture on\n$^{238}$U.\n\n\begin{figure}[hbt]\n\begin{center}\n\includegraphics[width=12cm]{rnusptot.eps}\n\end{center}\n\caption{\nTotal $\rm{\bar{\nu}_e}$-spectra at the power reactor,\nshowing the two components due to\n$\beta ^-$ decays of fission fragments and\n$^{239}$U.\n}\n\label{rnuspect}\n\end{figure}\n\n\nThe natural isotopic abundance of $^{73}$Ge \nin germanium is 7.73\%.\nThe level schemes for the isobaric states \nof $^{73}$Ga, $^{73}$Ge and $^{73}$As \nrelevant to this report are \ndepicted in Figure~\ref{lscheme}~\cite{lev_scheme}.\nThe $\rm{^{73}Ge^{*} ( 1\/2 ^{-} ) }$ metastable nuclei decay\nvia the emissions of \n53.4 keV and 13.3 keV photons \nseparated by a half-life of 2.9~$\mu $s. \nThis characteristic delayed coincidence\ngives an experimental\nsignature which can be uniquely identified\nin an HPGe detector.\n\n\begin{figure}\n\begin{center}\n\includegraphics[width=14cm]{ge73.eps}\n\end{center}\n\caption{\nThe $^{73}{\rm{Ge}}$ production \nand decay scheme~\cite{lev_scheme} \nrelevant to\nthe discussions in this report.\n}\n\label{lscheme}\n\end{figure}\n\nThe HPGe target was 1.06~kg in\nmass and was surrounded by active anti-Compton\nveto (ACV) detectors\nmade of NaI(Tl) and CsI(Tl) scintillating crystals.\nThe detector system was placed inside a 50-ton shielding\nstructure with an outermost layer of plastic scintillator\npanels acting as cosmic-ray veto (CRV).\nA physics threshold of 12~keV\nand a background level of $\sim 1 ~ \rm{kg^{-1}keV^{-1}day^{-1}}$\ncomparable to those of\nunderground dark matter experiments\nwere achieved.\nBesides magnetic moment searches,\nthis unique data set also allows the\nstudy of $\nu _e$'s~\cite{rnue} and\nthe search for axions~\cite{raxion}\nfrom the power reactor.\n\nEvents uncorrelated with the CRV and ACV \nare candidates for neutrino-induced signals.\nPerformance and efficiencies of these selections\nwere thoroughly studied and documented\nin Ref.~\cite{texonomagmom}.\nThe amplitude-versus-energy plot for the\n``after-cut'' events is displayed in\nFigure~\ref{ge73signature}.\nThe conspicuous structure at 66.7~keV\nis due to the summed energies \nof the two time-correlated photons\nrecorded within a single event,\nidentified unambiguously as\n$\rm{^{73}Ge^{*} ( 1\/2 ^{-} ) }$ decays.\nA typical event pair is shown\nin the inset of Figure~\ref{ge73signature}.\nThis pulse-shape signature\nis very distinct and can be tagged \non an event-by-event basis without\ncontamination of background events.\n\n\begin{figure}[hbt]\n\begin{center}\n\includegraphics[width=12cm]{ge73-2d.eps}\n\end{center}\n\caption{\nScatter plot of \nmaximum amplitude versus energy\nfor selected events,\nshowing that\ndecays from $\rm{^{73}Ge^{*} ( 1\/2 ^{-} ) }$ nuclei can be\nidentified perfectly.\nThe energy spectra before (black)\nand after (red) event selection \nare shown in the left inset.\nTypical pulse shapes for\nnormal (black) and $\rm{^{73}Ge^{*} ( 1\/2 ^{-} ) }$ (red)\nevents are\ndisplayed in the right 
inset.\n}\n\\label{ge73signature}\n\\end{figure}\n\nOnce the decays of \n$\\rm{^{73}Ge^{*} ( 1\/2 ^{-} ) }$ were tagged,\ntheir production channels can be studied.\nThe data-acquisition (DAQ) system~\\cite{texonodaq} records\ntiming information for all events. \nThis capability allows the decay sequences with half-lives\nas long as 55~s to be distinctly identified~\\cite{csibkg}.\nThe KS-P1 (180.1\/52.7~days Reactor ON\/OFF live time)\nand \n-P3 (278.9\/43.6~days Reactor ON\/OFF live time)\nperiods, \nas defined in Ref.~\\cite{texonomagmom},\nwere used in the analysis.\nThe total live times are \n459.0\/96.3 days for Reactor ON\/OFF,\nrespectively.\nThe selection efficiency of the $\\rm{^{73}Ge^{*} ( 1\/2 ^{-} ) }$\nevents is 85\\%, the\nmissing fraction being those with the\ntwo $\\gamma$'s emitted closer than 0.16~s \nin time to each other.\nData taken 5~s before and after these \nevents were retrieved for subsequent \ndetailed studies.\nThe production of the $\\rm{^{73}Ge^{*} ( 1\/2 ^{-} ) }$ states\ntakes place mostly in the interval\nof about 2~s (or 4 half-lives)\n{\\it before} the $\\rm{^{73}Ge^{*} ( 1\/2 ^{-} ) }$ decays,\nwhile events from the other intervals\nrepresent the background control samples.\nAfter efficiency corrections,\nthe production rates of \nthe $\\rm{^{73}Ge^{*} ( 1\/2 ^{-} ) }$ nuclei were measured.\nThe rates decreased within a DAQ period\nand are characterized by two quantities.\nThe {\\it average rates}\nrepresent the sum of all \n$\\rm{^{73}Ge^{*} ( 1\/2 ^{-} ) }$ decays divided by\nthe total DAQ live time, while \nthe {\\it equilibrium rates} are the\nsteady-state rates reached \nat the end of the DAQ periods.\nThe average and equilibrium rates were \n$( 8.7 \\pm 0.4 ) ~ {\\rm kg^{-1} day^{-1}}$\nand \n$( 6.7 \\pm 0.3 ) ~ {\\rm kg^{-1} day^{-1}}$,\nrespectively.\n\n\\section{Production Channels of $\\rm{^{73}Ge^{*} ( 1\/2 ^{-} ) }$}\n\\label{sect::prod}\n\n\nThe identified production channels for\n$\\rm{^{73}Ge^{*} ( 1\/2 ^{-} ) }$ nuclei and their relative fractions\nare summarized in Table~\\ref{prodsummary}.\nSpecific signatures of individual channels\nare described in the following paragraphs.\nFor simplicity, \nthe ``BEFORE'' and ``AFTER'' spectra\ntaken within the time interval (-2,0)s\nand (0,2)s by detector X \n(X can be NaI or Ge)\nare denoted \nby $\\Phi_{X}^{-}$ and $\\Phi_{X}^{+}$,\nrespectively.\n\n\n\n\\begin{table}\n\\begin{center}\n\\small{\n\\begin{tabular}{lcccccc}\n\\hline\\hline\n& \\multicolumn{3}{c}{Measured Event Rate} & \n\\multicolumn{3}{c}{Distribution} \\\\ \n& \\multicolumn{3}{c}{(day$^{-1}\\cdot $kg$^{-1}$)} & \n\\multicolumn{3}{c}{(\\%)} \\\\ \n\\multicolumn{1}{l}{DAQ Periods} & \nP1 & P3 & Combined &\nP1 & P3 & Combined \n\\\\ \\hline \\hline\n\\multicolumn{7}{l}{\\underline{$\\rm{^{73}Ge^{*} ( 1\/2 ^{-} ) }$ Production Rate} :} \\\\\n~~~ DAQ End(Equilibrium) & 8.8$\\pm $0.5 & 5.2$\\pm $0.4 & 6.7$\\pm$0.3 & \n$-$ & $-$ & $-$ \\\\\n~~~ Average & 9.9$\\pm $0.6\n& 8.1$\\pm $0.5 & 8.7$\\pm $0.4 &\n100.0 & 100.0 & 100.0 \\\\ \\hline \\hline\n\\multicolumn{7}{l}{\\underline{Production Channels}:} \\\\\n\\multicolumn{7}{l}{1. \\underline{$^{73}$As Decays} $-$ } \\\\ \n~~~ DAQ Start & \n$1.8 \\pm 0.2$ & 1.8$\\pm $0.2 & 1.8$\\pm $0.2 &\n$-$ & $-$ & $-$ \\\\ \n~~~ DAQ End(Equilibrium) & \n$0.3 \\pm 0.1$ & 0.2$\\pm $0.1 & 0.2$\\pm $0.1 &\n$-$ & $-$ & $-$ \\\\ \n~~~ Average & 1.0$\\pm $0.1 & 0.7$\\pm $0.1 & 0.8$\\pm $0.1\n& 9.8$\\pm $0.9 & 9.2$\\pm $0.8 & 9.5$\\pm $0.6 \\\\\n\\hline\n\\multicolumn{7}{l}{2. 
\\underline{$^{73}$Ga Decays} $-$ } \\\\\n~~~ $\\beta ^{-}$ & \n2.2$\\pm $0.2 & 1.9$\\pm $0.2 & 2.0$\\pm $0.1 &\n21.7$\\pm $1.6 & 23.2$\\pm $1.6 & 22.5$\\pm $1.1 \\\\ \n~~~ $\\beta ^{-} \\gamma$ (Full $\\rm{E_{\\gamma}}$) & \n0.9$\\pm $0.1 & 0.7$\\pm $0.1 & 0.8$\\pm $0.1 &\n9.1$\\pm $1.0 & 9.1$\\pm $0.9 & 9.1$\\pm $0.6 \\\\ \n~~~ $\\beta ^{-} \\gamma$ (Partial $\\rm{E_{\\gamma}}$) & \n1.0$\\pm $0.2 & 0.5$\\pm $0.1 & 0.6$\\pm $0.1 & \n9.7$\\pm $1.4 & 5.6$\\pm $1.0 & 7.0$\\pm $0.8 \\\\ \\hline\n\\multicolumn{7}{l}{3. \\underline{Prompt Cosmic-Induced $\\rm{^{73}Ge^{*} ( 1\/2 ^{-} ) }$} $-$ } \\\\ \n~~~ With CRV Tag & \n2.2$\\pm $ 0.3 & 1.9$\\pm $0.3 & 2.0$\\pm $0.2 &\n22.0$\\pm $2.8 & 23.9$\\pm $ 2.7 & 23.0$\\pm $2.0 \\\\\n~~~ HPGe only (No CRV) & \n0.0$ \\pm $0.0 & 0.0$\\pm $0.0 & 0.0$\\pm $0.0 &\n0.0$\\pm $0.1 & 0.3$ \\pm $0.1 & 0.1$ \\pm $0.1 \\\\ \n~~~ HPGe+NaI (No CRV) & \n0.1$\\pm $0.1 & 0.1$\\pm $0.0 & 0.1$\\pm $0.0 &\n1.1$\\pm $ 0.7 & 1.4$\\pm $0.5 & 1.3$\\pm $0.4 \\\\ \\hline\n\\multicolumn{7}{l}\n{\\underline{Total Identified Production Channels for $\\rm{^{73}Ge^{*} ( 1\/2 ^{-} ) }$} :} \\\\ \n& 7.3$\\pm $0.4 & 5.9$\\pm $0.3 & 6.4$\\pm $0.3 \n& 73.4$\\pm $3.8 & 72.7$\\pm $3.5 & 73.0$\\pm $2.6 \\\\ \\hline \\hline\n\\multicolumn{7}{l}\n{\\underline{Identified Inefficiency Factors} :} \\\\\n~~ DAQ Dead Time\n& $-$ & $-$ & $-$\n& 4.3 & 8.5 & 6.6 \\\\\n~~ Selection Inefficiencies \n& & & & & & \\\\\n~~~~ CRV+ACV Cuts \n& $-$ & $-$ & $-$ \n& 5.0 & 8.0 & 6.6 \\\\\n~~~~ 5~ms$<$$\\Delta$t$<$=2~s\n& $-$ & $-$ & $-$\n& 7.1 & 7.1 & 7.1 \\\\\n\\hline \\hline\n\\multicolumn{7}{l}{Total Identified Percentage:} \\\\\n& $-$ & $-$ & $-$\n& 89.8$\\pm $3.8 & 96.3$\\pm $3.5 & 93.3$\\pm $2.6 \\\\\n\\hline \\hline\n\\end{tabular}\n}\n\\end{center}\n\\caption{\nSummary of the \n$\\rm{^{73}Ge^{*} ( 1\/2 ^{-} ) }$ production rates, and those\nof individual channels in the\ntwo data-taking periods.\n}\n\\label{prodsummary}\n\\end{table}\n\n\n\n\\begin{enumerate}\n\\item {\\bf $^{73}$As :}\\\\\nThe most conspicuous difference\nbetween the $\\rm{\\Phi_{Ge}^{-}}$ and $\\rm{\\Phi_{Ge}^{+}}$ spectra,\nafter the CRV and ACV cuts were applied,\nis the Ge X-ray\npeak at 11.1~keV depicted \nin Figure~\\ref{10kev_sign}.\nIn contrast, the structure for $\\rm{\\Phi_{Ge}^{+}}$\npeaks at 10.4~keV, corresponding to\nthe Ga X-rays following the\ndecays of $^{68}$Ge which were\nuncorrelated with the $\\rm{^{73}Ge^{*} ( 1\/2 ^{-} ) }$ production.\nThe timing distribution of the \n$\\rm{\\Phi_{Ge}^{-}}$ events associated with\nthe peak is shown in the inset.\nThe best-fit half-life is \n$(0.81 \\pm 0.22) ~ {\\rm s}$,\nin good agreement with expectation\nand demonstrating that the $\\rm{^{73}Ge^{*} ( 1\/2 ^{-} ) }$ states\nwere really produced.\nThe time-variation of the event rates \nis depicted in \nFigure~\\ref{timeplot}a.\nThe best-fit half-life for the \nP3 data is \n$( 62.0 \\pm 23.8 ) ~ {\\rm days}$,\nconsistent with the interpretations\nof electron capture of $^{73}$As:\n\\begin{equation}\n\\rm{ ^{73}As + e^- }\n \\rightarrow \\rm{ \\rm{^{73}Ge^{*} ( 1\/2 ^{-} ) } + \\nu_e + Ge~X\\mbox{-}Rays } \n ~~~ \\rm{ ( ~ \\tau _{\\frac{1}{2}} = 80.3~days ~ ) } ~~ .\n\\end{equation}\nThis decay profile \nindicates that the production of $^{73}$As\nwas cosmic-ray induced.\nThe production rate was reduced by a factor of $\\sim$9 \ninside the 50-ton shielding structure\nat the KS laboratory where the\noverburden is about 30~meter-water-equivalence (mwe).\nThe steady-state equilibrium\nrate of ${\\rm ( 0.2 \\pm 0.1 ) ~ kg^{-1} 
day^{-1}}$\nwas reached after 400~days of data taking.\n\n\\begin{figure}[hbt]\n\\begin{center}\n\\includegraphics[width=10cm]{asxray}\n\\end{center}\n\\caption{\n(a)\nMeasured HPGe $\\rm{\\Phi_{Ge}^{-}}$ (blue data points) and\n$\\rm{\\Phi_{Ge}^{+}}$ (red histogram) spectra \nafter ACV+CRV cuts from P3 data. \nThe time distribution of the Ge X-ray\nevents in $\\rm{\\Phi_{Ge}^{-}}$ is shown in the inset,\nverifying that $\\rm{^{73}Ge^{*} ( 1\/2 ^{-} ) }$ nuclei were produced.\n}\n\\label{10kev_sign}\n\\end{figure}\n\n\n\\begin{figure}[hbt]\n\\begin{center}\n{\\bf (a)}\\\\\n\\includegraphics[width=10cm]{hf-asxray.eps}\\\\\n{\\bf (b)}\\\\\n\\includegraphics[width=10cm]{stability.eps}\n\\end{center}\n\\caption{\nTime plot for P3 data taking for event rates from\n(a) Ge X-rays, showing that \nthey are due to electron capture\nof $^{73}$As to $\\rm{^{73}Ge^{*} ( 1\/2 ^{-} ) }$; and\n(b) other channels whose rates are stable.\nOffsets are introduced in (b) for visualization\npurposes. The data under ``$\\rm{^{73}Ge^{*} ( 1\/2 ^{-} ) }$'' denotes \nthe steady-state production rate of $\\rm{^{73}Ge^{*} ( 1\/2 ^{-} ) }$\nafter the $^{73}$As decays are taken into account.\n}\n\\label{timeplot}\n\\end{figure}\n\n\\begin{figure}\n\\begin{center}\n{\\bf (a)}\\\\\n\\includegraphics[width=10cm]{beta.eps}\\\\\n{\\bf (b)}\\\\\n\\includegraphics[width=10cm]{beta-gamma-0.eps}\\\\\n{\\bf (c)}\\\\\n\\includegraphics[width=10cm]{beta-gamma-1.eps}\n\\end{center}\n\\caption{\nMeasured $\\rm{\\Phi_{Ge}^{+}}$ (red histogram)\nand $\\rm{\\Phi_{Ge}^{-}}$ (blue data points) \nspectra in the 1~MeV region:\n(a) after CRV+ACV cuts;\n(b) after CRV cut, with signals at ACV\ncorresponding to $\\rm{E_{\\gamma} \\sim 300 ~ keV}$;\n(c) after CRV cut, with signals at ACV\ncorresponding to $\\rm{E_{\\gamma} \\neq 300 ~ keV}$.\nInset shows the timing distribution\nof the $\\rm{\\Phi_{Ge}^{-}}$ events.\n}\n\\label{beta}\n\\end{figure}\n\n\\item {\\bf $^{73}$Ga :}\\\\\nThe $\\rm{\\Phi_{Ge}^{-}}$ and $\\rm{\\Phi_{Ge}^{+}}$ spectra \nat the 1~MeV region for P3\nare depicted \nin Figures~\\ref{beta}a-c: \n(a) after CRV+ACV cuts;\n(b) after CRV cut, with signals at ACV\ncorresponding to $\\rm{E_{\\gamma} \\sim 300 ~ keV}$;\nand\n(c) after CRV cut, with signals at ACV\ncorresponding to $\\rm{E_{\\gamma} \\neq 300 ~ keV}$.\nThe timing distribution of the events \nis displayed in the inset of Figures~\\ref{beta}a-c,\nverifying that $\\rm{^{73}Ge^{*} ( 1\/2 ^{-} ) }$ nuclei were produced.\nThese events are consistent with \n$\\beta^{-}$-emissions from \n$^{73}$Ga in the HPGe:\n\\begin{equation}\n{ \\rm ^{73}Ga } \\rightarrow \n{ \\rm \\rm{^{73}Ge^{*} ( 1\/2 ^{-} ) } + \\gamma 's + e^- + \\bar{\\nu}_e } \n ~~ { \\rm ( ~ Q = 1.59~MeV ~ ; ~\n\\tau _{\\frac{1}{2}} = 4.86~h ~ ) }~~ .\n\\end{equation}\n\nSome of the events in Figure~\\ref{beta}a are due to\n$\\beta ^-$ decays which directly fed the $\\rm{^{73}Ge^{*} ( 1\/2 ^{-} ) }$ state\nat a branching ratio (BR) of 7\\%.\nIn addition, the $\\beta ^-$ decays can also populate\nthe various excited states, the dominant level\nof which is the $\\frac{3}{2}^-$ state at BR=78\\%,\nfollowed by $\\gamma$-ray emissions\nat $\\rm{E_{\\gamma} = 297 ~ keV}$. 
\nThe subsequent $\\gamma$'s were\nfully absorbed by the HPGe in about 57\\% \nof the decays, \nas indicated in Table~\\ref{prodsummary}.\nThese events also contribute to\nFigure~\\ref{beta}a.\nThe HPGe spectra where the\n$\\rm{E_{\\gamma} = 297 ~ keV}$ \nphotons\nwere fully and partially absorbed\nby the NaI(Tl) AC detector\nare depicted in \nFigures~\\ref{beta}b\\&c, respectively. \nThe $\\rm{\\Phi_{NaI}^{-}}$ and $\\rm{\\Phi_{NaI}^{+}}$ spectra\ncorresponding to the samples of Figure~\\ref{beta}b\nare displayed in\nFigure~\\ref{naispect}, showing\nthe detection of the line at \n$\\rm{E_{\\gamma} = 297 ~ keV}$.\n\n\\begin{figure}[hbt]\n\\begin{center}\n\\includegraphics[width=10cm]{beta-gamma-nai.eps}\n\\end{center}\n\\caption{\nThe $\\rm{\\Phi_{NaI}^{-}}$ (blue data points)\nand $\\rm{\\Phi_{NaI}^{+}}$ (red histogram) spectra taken with \nthe NaI(Tl) ACV detector.\nThe photopeak at 300~keV can be tagged by the\nspecified cuts, giving rise to the\nspectra of Figure~\\ref{beta}b.\n}\n\\label{naispect}\n\\end{figure}\n\n\n\n\\item {\\bf Prompt Cosmic-Induced $\\rm{^{73}Ge^{*} ( 1\/2 ^{-} ) }$ :}\\\\\nAbout 23\\% of the production of $\\rm{^{73}Ge^{*} ( 1\/2 ^{-} ) }$ nuclei\nwere in coincidence with a CRV tag.\nThese events are usually\ncharacterized by large energy depositions and\nsaturate the electronics in\nthe HPGe or ACV detectors,\nas depicted in Figure~\\ref{cosmic}a.\nFor instance, one of the channels is\nthe neutron capture on $^{72}$Ge:\n\\begin{equation}\n\\rm{\nn + ^{72}Ge \\rightarrow\n\\rm{^{73}Ge^{*} ( 1\/2 ^{-} ) } + \\gamma 's \n}\n\\end{equation}\nwhere photons with total energy of\nas much as 8~MeV \nare generated.\nOnly (15.8$\\pm$4.0)\\% of the events \nwere of low energy where the\nelectronics remained unsaturated\n(see also Figures~\\ref{cosmic}a\\&b).\nThere was an excess of events at about 300~keV\nfor $\\rm{\\Phi_{NaI}^{-}}$ over $\\rm{\\Phi_{NaI}^{+}}$ due\nto $\\gamma$-ray emissions from the \n$\\frac{3}{2} ^-$(364~keV)\nexcited state of $^{73}$Ge. \nThe corresponding energy depositions\nin the HPGe were less than 1~MeV.\nThese events represent evidence of\nexcitation of $^{73}$Ge$^*$ through the\ninteractions of high-energy neutrons produced\nby cosmic-ray induced\nspallations in the ambient materials. \\\\\nAmong the tagged $\\rm{^{73}Ge^{*} ( 1\/2 ^{-} ) }$ events,\n1.4\\% and (23\\%X0.84)$\\sim$19.3\\% \nare, respectively, those without and with CRV tags\nwhere the total energy depositions\nin the HPGe and AC detectors exceed the\nend point of $^{73}$Ga $\\beta ^-$ decays.\nThis is consistent with an independent measurement \nof the CRV inefficiency of about 7\\%~\\cite{texonomagmom},\ndue to geometrical coverage and hardware inefficiency.\nAccordingly, it can be concluded that all \n$\\rm{^{73}Ge^{*} ( 1\/2 ^{-} ) }$ nuclei produced with energy depositions above\n1.6~MeV are associated with prompt cosmic rays.\n\n\\begin{figure}[hbt]\n\\begin{center}\n{\\bf (a)}\\\\\n\\includegraphics[width=10cm]{muon-2d.eps}\\\\\n{\\bf (b)}\\\\\n\\includegraphics[width=10cm]{muon-nai.eps}\n\\end{center}\n\\caption{\n(a)\nScatter plots \n$\\rm{\\Phi_{NaI}^{-}}$ versus $\\rm{\\Phi_{Ge}^{-}}$ (blue)\nand \n$\\rm{\\Phi_{NaI}^{+}}$ versus $\\rm{\\Phi_{Ge}^{+}}$ (red in the inset)\nfor events with cosmic-ray tag. 
\nOnly events that do not\nsaturate the electronics are shown,\nwhile the saturated ones are represented by\nthe two bands.\nThe timing distribution of the \n$\rm{\Phi_{Ge}^{-}}$ events \nis displayed in the inset.\n(b) \nThe corresponding $\rm{\Phi_{NaI}^{-}}$ (blue data points)\nand $\rm{\Phi_{NaI}^{+}}$ (red histogram) spectra for\nthe non-saturated events.\nThe peak at 300~keV \nis evidence of cosmic-ray induced \nproduction of $\rm{^{73} Ge ^*}$ excited states.\n}\n\label{cosmic}\n\end{figure}\n\n\end{enumerate}\n\n\n\nAfter corrections for DAQ dead time\nand selection efficiencies,\nthe identified production channels account for about 93\% \nof the tagged $\rm{^{73}Ge^{*} ( 1\/2 ^{-} ) }$ decays. \nThe missing events can be attributed to those\nwhich do not satisfy the DAQ trigger condition of\nhaving more than 5~keV energy deposition in the HPGe.\nAn example of such channels is \nthe production of $\rm{^{73}Ge^{*} ( 1\/2 ^{-} ) }$ or other \n$^{73}$Ge$^*$ excited states via\nexcitation by high-energy neutrons,\nfollowed by complete escape of the final-state $\gamma$'s\nfrom the HPGe.\n\nAll the identified background is cosmic-ray \ninduced, even though the decay time scales are\nvastly different: order of 100~days for $^{73}$As,\norder of 10~hours for $^{73}$Ga, and prompt signatures\nfor those with a CRV tag.\nAs illustrated in Figures~\ref{timeplot}a\&b,\nthe background rates of $^{73}$As show the \ncharacteristic 80.3~day decay half-life, \nwhile the rates for the other background\nchannels are constant with time.\nAccordingly, this background is expected to\nbe greatly suppressed in an underground laboratory\nwhere the cosmic-ray fluxes are attenuated.\nThe residual background will be due to \ninteractions of the surviving cosmic rays\nas well as to the fast neutrons produced by\n($\alpha$,n) reactions\nthrough natural ambient radioactivity.\n\n\section{Studies of Possible Neutrino-Induced Interactions}\n\label{sect::nint}\n \nNeutrino interactions on nuclei were studied\nin counter experiments\nonly for a few light\nisotopes ($^1$H~\cite{reines}, $^2$H~\cite{deuteron,sno}\nand $^{12}$C~\cite{carbon}). 
\nFor heavy nuclei, these were observed\nin radiochemical\nexperiments ($^{37}$Cl~\\cite{snucl} \nand $^{71}$Ga~\\cite{snuga}) but without timing and\nspectral information.\nDetailed theoretical work was \nconfined mostly to these isotopes.\nEstablishing more experimentally accessible\ndetection channels will be of importance in the\nstudies of nuclear structure and neutrino physics.\nFor instance, interactions with\nlower threshold or better resolution\nthan the $\\rm{\\bar{\\nu}_e}$-p inverse\n$\\beta ^-$ decay and\n$\\nu$-d disintegration processes will open\nup new windows of investigations.\n\nWith reactor $\\rm{\\bar{\\nu}_e}$ as probes,\nneutrino-induced $\\rm{^{73}Ge^{*} ( 1\/2 ^{-} ) }$ productions\nwould manifest\nthemselves as excess of events for the\nReactor ON spectra [$\\rm{\\Phi_{Ge}^{-}} (ON)$]\nover that of Reactor OFF [$\\rm{\\Phi_{Ge}^{-}} (OFF)$].\nStudies were performed on the\ntagged $\\rm{^{73}Ge^{*} ( 1\/2 ^{-} ) }$ events discussed\nin previous sections.\n\n\n\n\\begin{table}[hbt]\n\\small{\n\\begin{tabular}{lccccccc}\n\\hline\nNCEX & $\\Delta$ & E$_{\\gamma }$ & Transitions & \n$\\rm{f _{\\nu}^{\\Delta }}$ \n& $\\rm{\\epsilon _{Ge}}$ & $\\rm{\\mathcal{R}_{\\nu}^{NC}}$ & \n$\\rm{ \\left\\langle \\sigma _{\\nu } ^{NC} \\right\\rangle }$ \\\\\nChannel & (MeV) & (MeV) & & & \n& $\\rm{( kg^{-1} day^{-1} ) }$ & ($\\rm{cm^{2}}$) \\\\ \\hline\n$\\rm{^{73}Ge^* ( 3\/2 ^- )}$ & 0.364 & 0.297 \n& $\\rm{ 9\/2^{+} \\rightarrow 3\/2^{-}}$ \n& 0.82\n& 0.49 & $< 2.00 \\times 10^{-2}$ & $< 1.13 \\times 10^{-43}$ \\\\ \n$\\rm{^{73}Ge^* ( 3\/2 ^- )}$ & 0.392 & 0.325 \n& $\\rm{ 9\/2^{+} \\rightarrow 3\/2^{-}}$ \n& 0.80\n& 0.45 & $< 4.36 \\times 10^{-2}$ & $< 2.72 \\times 10^{-43}$ \\\\ \n$\\rm{^{73}Ge^* (1\/2 ^+ )}$ & 0.555 & 0.488 & \n$\\rm{ 9\/2^{+} \\rightarrow 1\/2^{+}}$ \n& 0.72\n& 0.19 & $< 2.21 \\times 10^{-2}$ & $< 3.24 \\times 10^{-43}$ \\\\ \n$\\rm{^{73}Ge^* ( 1\/2 ^- )}$ & 1.132 & 1.065 & \n$\\rm{ 9\/2^{+} \\rightarrow 1\/2^{-}}$ \n& 0.45\n& 0.15 & $< 3.12 \\times 10^{-2}$ & $< 5.96 \\times 10^{-43}$ \\\\ \\hline\\hline \n& & & & & & \\\\\n$\\nu$CC & Q & & Transition & $\\rm{f _{\\nu}^{Thr}}$ \n& $\\epsilon _{Ge}$ & $\\rm{\\mathcal{R}_{\\nu}^{CC}}$ & \n$\\rm{\\left\\langle \\sigma _{\\nu } ^{CC} \\right\\rangle} $ \\\\\nChannel & (MeV) & & & & \n& $\\rm{ ( kg^{-1} day^{-1} ) }$ & $\\rm{( cm^{2} )}$ \\\\ \\hline\n$\\rm{^{73}Ga ( 3\/2 ^- )}$ & 1.665 & & \n$\\rm{ 9\/2^{+} \\rightarrow 3\/2^{-}}$ \n& 0.13\n& 0.68 & $<$~0.43 & $< 1.78 \\times 10^{-42}$ \\\\ \\hline\\hline\n\\end{tabular}\n}\n\\caption{\nSummary of the neutrino-induced\nNCEX and $\\nu$CC studies, showing 90\\% CL limits\non event rates and average cross sections. 
\n}\n\\label{tab01}\n\\end{table}\n\n\n\\subsection{Neutral-Current Nuclear Excitation}\n\nNeutrino-induced neutral current processes have been observed\nin the disintegration of the deuteron\nwith reactor $\\rm{\\bar{\\nu}_e}$~\\cite{deuteron}\nand in the SNO experiment for solar $\\nu _e$~\\cite{sno}.\nFor heavier isotopes, the interaction\nproceeds through neutral current \nexcitation (NCEX) on nuclei via \ninelastic scattering\n\\begin{equation}\n\\rm{\n\\rm{\\bar{\\nu}_e} ~+ ~(A,Z)~ \n\\rightarrow ~\\rm{\\bar{\\nu}_e} ~+~ (A,Z)^* ~ .\n}\n\\end{equation}\nThis process \nwas observed only in the \ncase of $^{12}$C~\\cite{carbon}\nusing accelerator neutrinos at\nenergy at the order of O(10~MeV).\nExcitations at lower energies using \nreactor neutrinos have \nbeen studied theoretically~\\cite{hclee,donnelly} \nbut were not experimentally observed.\nThis has been proposed as a detection\nchannel for solar neutrinos for the\nisotope $^{11}$B~\\cite{borex}\nand Dark Matter$-$WIMPs~\\cite{wimpncex}.\nThe NCEX processes are sensitive\nto the axial isoscalar component of the \nweak neutral currents and the strange \nquark content of the nucleon~\\cite{strangequark}.\n\nThe average cross section of the NCEX interactions \nin a neutrino beam with spectrum $\\phi_{\\nu} (E_{\\nu})$ \nis given by, using the conventions of Eq.~6 of\nRef.~\\cite{hclee}: \n\\begin{equation}\n\\label{eq::csmean}\n\\rm{\n\\left\\langle \\sigma _{\\nu}^{NC}\\right\\rangle=\n\\frac{\\mathcal{\\int}_{\\Delta}^{\\infty }\n\\sigma _{\\nu}^{NC}(E_{\\nu })\n\\phi_{\\nu}(E_{\\nu })\ndE_{\\nu }}\n{ \\Phi_{\\nu}^{Total} }\n}\n\\end{equation}\nwhere $\\Delta $ is the nuclear excitation energy,\nand \n\\begin{equation}\n\\rm{\n\\Phi_{\\nu}^{Total} = \n\\mathcal{\\int}_{0}^{\\infty }\n\\phi_{\\nu}(E_{\\nu })\ndE_{\\nu } \n}\n\\end{equation}\nis the total neutrino flux.\nThe energy dependence of \nthe interaction cross section varies as~\\cite{hclee}\n\\begin{equation}\n\\label{reactor_cross}\n\\rm{\n\\sigma_{\\nu}^{NC} (E _{\\nu }) \\propto\n(E _{\\nu }-\\Delta )^{2} ~~ ,\n}\n\\end{equation}\nwhere\nthe proportional constant depends\non the weak couplings and\nnuclear matrix elements.\nThe observed event rate is accordingly\n\\begin{equation}\n\\label{eq::rate}\n\\rm{\n\\mathcal{R}_{\\nu}^{NC}=\n\\left\\langle \\sigma _{\\nu}^{NC}\\right\\rangle \n\\cdot \\Phi_{\\nu}^{Total} \\cdot N_{Ge} \\cdot \\epsilon_{Ge} \n}\n\\end{equation}\nwhere $N_{Ge}$ is the number \nof $^{73}$Ge nuclei\nand\n$\\epsilon_{Ge} $ is \nthe detection efficiency,\nwhich can be evaluated by simulations.\n\n\\begin{figure}[hbt]\n\\begin{center}\n\\includegraphics[width=12cm]{residual.eps}\n\\end{center}\n\\caption{\nThe [$\\rm{\\Phi_{Ge}^{-}}(ON) - \\rm{\\Phi_{Ge}^{-}}(OFF)$] residual\nspectra for the four \ncandidate channels for neutral-current excitation \nwith P3 data.\nThe Gaussian best-fits are\nsuperimposed.\n}\n\\label{ncexresid}\n\\end{figure}\n\nThe experimental signatures of NCEX \nfor (A,Z)=$^{73}$Ge\nare the presence of mono-energetic \nlines at specified energies in\nthe Reactor [$\\rm{\\Phi_{Ge}^{-}}(ON) - \\rm{\\Phi_{Ge}^{-}}(OFF)$] residual spectra.\nThe candidate $^{73}$Ge-NCEX channels and their respective \n$\\epsilon _{Ge}$ are listed in Table~\\ref{tab01}.\nAlso shown is the fraction of the \nreactor neutrino flux\nabove the kinematics threshold, given by\n\\begin{equation}\n\\label{eq::fract}\n\\rm{\nf_{\\nu}^{\\Delta} = \n\\frac{\n\\mathcal{\\int}_{\\Delta}^{\\infty }\n\\phi_{\\nu}(E_{\\nu })\ndE_{\\nu } }\n{ \\Phi_{\\nu}^{Total} } ~~~ 
.\n}\n\end{equation}\nThe residual spectra of the candidate transitions are\ndepicted in Figures~\ref{ncexresid}a-d.\nNo excess of events was observed.\nLimits on $\rm{\mathcal{R} _{\nu} ^{NC}}$, and \nconsequently on\n$\rm{\left\langle \sigma _{\nu } ^{NC} \right\rangle }$ \nat 90\% CL,\nwere derived and are tabulated in Table~\ref{tab01}.\nThe limiting sensitivities\non $\rm{ \left\langle \sigma _{\nu } ^{NC} \right\rangle }$\nare typically factors of $\sim$$10^{2}$ worse than\nthe range \nof $\rm{ \sim 10^{-45} ~ cm^{2} }$\npredicted for the various isotopes~\cite{hclee,donnelly}.\nThe dominant background consists of the events of Figure~\ref{beta}a,\nwhere all the energy of the $\beta$-$\gamma$ emissions \nfollowing $^{73}$Ga decays was deposited in the HPGe. \n\n\n\subsection{Charged-Current Inverse Beta Decays}\n\nThe neutrino-induced inverse $\beta$-decay reaction \non the proton was the process on which\nthe first observation of neutrinos was \nbased~\cite{reines}.\nSubsequently, there were several generations\nof oscillation experiments with reactor\nneutrinos that relied on this interaction.\nIn addition, neutrino-induced charged-current processes \nwere observed through the disintegration of\nthe deuteron with reactor $\rm{\bar{\nu}_e}$~\cite{deuteron}\nand in the SNO experiment\nfor solar $\nu _e$~\cite{sno}.\nFor heavy nuclei,\ncharged-current interactions \nwere observed only in solar $\nu_e$ \nthrough radiochemical experiments on \n$^{37}$Cl~\cite{snucl} and $^{71}$Ga~\cite{snuga}.\nThere are as yet no successful real-time\ncounter experiments, though there\nare intensive R\&D efforts \ntowards this goal~\cite{lens}.\nThe charged-current ($\nu$CC) inverse $\beta$-decay\ninteraction for $\rm{\bar{\nu}_e}$ on heavy nuclei such as $^{73}$Ge\nis given by\n\begin{equation}\n\label{eq::nucc}\n\rm{\n\rm{\bar{\nu}_e}~+ ^{73}Ge \rightarrow ~e^{+}~+ ^{73}Ga^{*} ~~.\n}\n\end{equation}\nCases for other heavy isotopes were discussed in \nconnection with the detection \nof low energy~$\rm{\bar{\nu}_e}$~from the Earth~\cite{nugeo}.\nHowever, there are no experimental \nstudies so far for these processes.\n\nSignatures of $\nu$CC in $^{73}$Ge \nmanifest themselves as an excess of $^{73}$Ga decay events\nfor the Reactor ON over OFF periods.\nThe $\nu$CC rate ($\rm{\mathcal{R}_{\nu}^{CC}}$) \nis related to the average cross section via\n\begin{equation} \n\rm{\n\mathcal{R}_{\nu}^{CC}=\n\left\langle \sigma _{\nu}^{CC}\right \rangle \n\cdot \Phi_{\nu}^{Total} \n\cdot N_{Ge} \cdot \epsilon_{Ge} ~~~.\n}\n\end{equation}\nThe interaction cross section varies with neutrino energy as~\cite{nugeo}\n\begin{equation}\n\rm{\n\sigma _{\nu}^{CC} ( E_{\nu} ) \propto \n F ( Z , E_{+} ) \cdot \nE_{+} \cdot\n\sqrt { E_{+} ^2 - m_e^2 } \n}\n\end{equation}\nwhere $\rm{E_{+} = E_{\nu} - Q - m_e}$ is the positron energy,\n$\rm{F ( Z, E_{+} )}$ is the known nuclear \nCoulomb correction factor, and \n$\rm{Q = 1.67 ~MeV}$ is the Q-value\nfor the $\nu$CC interaction of Eq.~\ref{eq::nucc}. \n\nAs listed in Table~\ref{tab01},\nthe residual $^{73}$Ga decay rate for \nthe combined P1 and P3 periods is \n$R_{\nu}^{CC}$=${\rm ( - 0.94 \pm 0.67 ) ~ day^{-1} kg^{-1} }$,\nfrom which the 90\% CL limits on $R_{\nu}^{CC}$ and \n$\left\langle \sigma _{\nu}^{CC} \right\rangle$\nwere derived. 
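As an illustration of how the flux-averaged cross section of Eq.~\ref{eq::csmean} and the rates of\nEq.~\ref{eq::rate} are evaluated in practice, a short numerical sketch (our own, with a toy exponential\nspectrum standing in for the actual reactor $\rm{\bar{\nu}_e}$-spectrum, and an arbitrary normalization\nfor $\sigma$) is:\n\n\begin{verbatim}\nimport numpy as np\n\ndef average_xsec(sigma, phi, e_grid, delta):\n    # <sigma> = Int_Delta^inf sigma(E) phi(E) dE \/ Int_0^inf phi(E) dE\n    w = e_grid >= delta\n    num = np.trapz(sigma(e_grid[w]) * phi(e_grid[w]), e_grid[w])\n    return num \/ np.trapz(phi(e_grid), e_grid)\n\ne = np.linspace(0.0, 10.0, 2001)                    # MeV\nphi = lambda E: np.exp(-E \/ 1.5)                    # toy spectral shape\nsigma = lambda E, d=0.364: 1e-45 * (E - d) ** 2     # (E - Delta)^2 scaling, cm^2\nprint(average_xsec(sigma, phi, e, 0.364))           # flux-averaged cross section\n\end{verbatim}\n\nThe corresponding event rate then follows by multiplying by\n$\rm{ \Phi_{\nu}^{Total} \cdot N_{Ge} \cdot \epsilon_{Ge} }$, as in Eq.~\ref{eq::rate}.\n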
\nThe fractional $\rm{\bar{\nu}_e}$-flux \n($\rm{f ^{Thr} _{\nu}}$) follows the\nsame definition as Eq.~\ref{eq::fract},\nwith the threshold given by $\rm{ Thr = Q + m_e }$.\nSimilar to the case for NCEX, the sensitivities\nare limited by cosmic-ray induced $^{73}$Ga \nand can therefore be enhanced in an underground location.\nThere are no calculations on $\nu$CC rates\nwith reactor neutrinos on heavy nuclei.\nExtrapolations from theoretical \nestimates on geo-neutrinos~\cite{nugeo}\nsuggest a general cross-section range of \n$\rm{ \sim 10^{-44} ~ cm ^2}$,\nand hence factors of $\sim$$10^{2}$ \nbelow the experimental \nbounds.\n\n\n\n\n\section{Summary and Prospects}\n\nWe made a thorough study of the decay signatures \nof the $\rm{^{73}Ge^{*} ( 1\/2 ^{-} ) }$ metastable state in a well-shielded reactor\nlaboratory at a shallow depth of about 30~mwe. \nAn unambiguous event-by-event tag of such decays was \ndemonstrated, and studies of the signals within\ntwo seconds before\nthe tag provide information on their production channels.\n\nSearches for possible neutrino-induced nuclear transitions\ngive rise to sensitivity limits\nwhich are typically\nfactors of $\sim 10^2$ worse than\nthe general predicted range. The background channels are\nall cosmic-ray induced, the most relevant one being\nthe decays of $^{73}$Ga. Consequently, the sensitivities\ncan be greatly enhanced in an underground location. \nPhysics experiments have been conducted in \nthe past at the\nunderground laboratory at the \nKrasnoyarsk reactor~\cite{krasnoyarsk}, \nthough this facility is no longer available. \nAn additional boost in the capabilities\ncan be provided by position-sensitive segmented HPGe detectors,\nwhich can distinguish single-site from multi-site events.\n\nTon-scale isotopically-pure Ge-based detectors \nhave been proposed and considered in \nforthcoming double-beta decay experiments~\cite{dbdge}\nin the case of $^{76}$Ge,\nas well as in cold dark-matter searches~\cite{cdmge},\nwhere $^{73}$Ge is ideal\nfor studying the spin-dependent interactions.\nBackground control and suppression\nare crucial to the success of these projects.\nSophisticated procedures have been developed\nin the course of the R\&D work.\nThe unique three-fold timing correlations of the\n$\rm{^{73}Ge^{*} ( 1\/2 ^{-} ) }$ system demonstrated in this article\nwill further enhance the background rejection\ncapabilities.\n\nA one-ton isotopically-pure\n$^{73}$Ge detector located\n15~m from a 3~GW reactor core\nwould record $\sim$16 \nneutrino-induced NCEX events per day \nat the typical predicted cross-section range of \n${\rm 10^{-45} ~ cm^2 }$~\cite{hclee,donnelly}.\nThe neutron flux in an underground site \ncan be attenuated by a typical\nfactor of $10^{3}$ or more~\cite{ugdneutron}.\nTherefore, a similar\nsuppression in the $^{73}$Ga background\ncan be expected.\nThe background levels of $\rm{ < 0.01 ~ kg^{-1} day^{-1}}$\nfrom Table~\ref{tab01} for natural HPGe at the surface\nwould therefore imply a range of \n$\rm{ < 0.1 ~ ton^{-1} day^{-1} }$ \nfor a pure $^{73}$Ge detector underground.\nThe detection of NCEX would be realistic\nand possible $-$ if an underground power reactor\nwere available.\n\nMeasuring the\nlow-energy solar neutrinos\nwith the NCEX processes,\nat a threshold low enough ($\rm{< 423 ~ keV}$)\nto include the pp branch, \nhas important complementarity with the on-going efforts\ntowards detection of the charged-current interactions~\cite{lens}.\nUsing simple scaling between reactor and 
solar\nneutrino fluxes and spectra,\nthe typical predicted cross-section \nranges of Refs.~\\cite{hclee,donnelly}\ncorrespond to a solar-$\\nu$ induced NCEX rate of \n$\\sim 16 ~ \\rm{ton^{-1} year^{-1} }$\nin a $^{73}$Ge detector. \nThis is comparable to the expected\nbackground range in an underground location.\nFurther optimizations on background control\nand suppression \nwould make such a process observable.\n\n\\section{Acknowledgments}\n\nThis work is supported by\ncontracts 93-2112-M-001-030,\n94-2112-M-001-028 and\n95-2119-M-001-028\nfrom the National Science Council, Taiwan.\n\n\\section*{References}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}