diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzbsuh" "b/data_all_eng_slimpj/shuffled/split2/finalzzbsuh" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzbsuh" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nLarge Language Models (LMs) \nachieve remarkable accuracy and generalization ability when fine tuned for NLP tasks~\\citep{peters2018deep,devlin2018bert,liu2019roberta,lan2020albert,raffel2020exploring}. They are also capable zero- and few-shot learners ~\\citep{brown2020gpt3}, with the ability to generalize to tasks not seen during training.\nA reliable way to improve LM accuracy in all of these settings is by scaling up: increasing the number of parameters and the amount of computation used during training and inference~\\citep{raffel2020exploring,brown2020gpt3,fedus2021switch}. In fact, some generalization properties only emerge in very large models, including much improved zero- and few-shot learning~\\citep{brown2020gpt3}.\n\nUnfortunately, the corresponding growth in computational resources required to train state-of-the-art language models is a barrier for many in the research community~\\cite{schwartz2019green}.\nThere is also a concern about the environmental costs associated with training and deploying such models~\\citep{strubell2019energy,gupta2021chasing,bender2021dangers,patterson2021carbon} motivating research into more efficient model designs~\\citep{lepikhin2021gshard,fedus2021switch,Lewis2021BASELS}.\n\n\\begin{figure}[t!]\n\\centering\n\\includegraphics[width=\\linewidth]{figs\/efficiency-gain.pdf}\n\\caption{\n\\textbf{Estimate of how much more efficient MoEs are relative to dense models.} A speedup factor of $y$ indicates that an MoE model can match the performance of the corresponding dense model---trained with $x$ ZFLOPs---using $y$ times less compute (i.e., $x\/y$ ZFLOPs). We estimate this factor according to validation perplexity for in-domain language modeling, the Pile perplexity for out-of-domain language modeling, and average accuracy across 6 tasks for zero-shot priming. See \\S\\ref{subsubsec:efficiency-gain} for more details.\n}\n\\label{fig:efficiency-gain}\n\\end{figure}\n\n\\emph{Sparse} models allow for increased number of learnable parameters without the associated computational costs. \nFor example, sparsely gated mixture of experts (\\emph{MoE})~\\cite{lepikhin2021gshard} have been successfully used for language modeling and machine translation~\\citep{lepikhin2021gshard,Lewis2021BASELS,roller2021hash}, but are yet to be shown effective for fine-tuning~\\cite{fedus2021switch} as well as zero- and few-shot learning. \nWe hypothesize that sparse models are comparatively accurate to dense models but at a much lower computational footprint. \nTo measure this claim, we train traditional dense and MoE language models ranging in size from several hundred million parameters to more than one trillion parameters and present a careful empirical comparison of these models on downstream tasks in zero-shot, few-shot and fully supervised settings.\n\nAs shown in Figure \\ref{fig:efficiency-gain}, we find that MoE models can indeed achieve similar downstream task performance as dense models at a fraction of the compute. 
For models with relatively modest compute budgets, a MoE model can perform on par with a dense model that requires almost four times as much compute.\nDownstream task performance improves with scale for both MoE models and dense models.\nWhile we observe that the performance gap narrows as we increase model size, even at larger compute budgets ($\\sim$ 5000 GPU days), our largest MoE model (1.1T parameters) outperforms a dense model with similar computational cost (6.7B parameters).\nWe further compare and contrast the performance of dense and sparse models with similar computational signatures and observe some performance variations across tasks and domains, suggesting this an interesting area for future research. In summary, our contributions are:\n\\begin{itemize}[leftmargin=*]\n \\item We present a comprehensive study of sparse models for zero and few-shot learning at scale; \n \\item We demonstrate that even at scale sparse MoE models can yield competitive zero and few-shot performance at a fraction of the computation for model training and inference;\n \\item We observe some differences in how dense and sparse models generalize at scale suggesting complementary behaviour that could be an interesting future research direction.\n\\end{itemize}\n\n\n\n\\section{Background and Related Work}\n\n\\subsection{Large Language Models \/ GPT-3}\n\nProgress in the field of NLP has been driven by increasingly large Language Models (LMs) pretrained on large text datasets. While numerous variations have been proposed, such LMs are predominantly based on the transformer architecture~\\citep{vaswani2017attention}. Models are pretrained by hiding parts of the input: predicting the next word sequentially left-to-right, masking words in the text~\\citep{devlin2018bert,liu2019roberta}, or perturbing and\/or masking spans~\\citep{lewis-etal-2020-bart,raffel2020exploring}. The resulting models can be quickly adapted to perform new tasks at high accuracy by fine-tuning on supervised data~\\citep{devlin2018bert,liu2019roberta}.\n\nRecently, GPT-3~\\citep{brown2020gpt3} demonstrated that large LMs can perform zero- and few-shot learning without fine-tuning through in-context learning. Notably, many of these in-context zero- and few-shot learning behaviors emerge or amplify at scale. Concurrent to our work, \\citet{rae2022scaling} and \\citet{smith2022using} further explore scaling dense language models. %\n\n\n\\subsection{Sparse models}\nOne drawback of dense model scaling is that it grows increasingly computationally expensive. To more efficiently increase model capacity, conditional compute strategies have been developed \\citep{bengio2013estimating,davis2013low,cho2014exponentially,bengio2015conditional}, where each input activates a subset of the model. Recent work \\citep{Lewis2021BASELS,lepikhin2021gshard,fedus2021switch,fan2021beyond} has studied different conditional compute strategies that work well with Transformer models for natural language tasks. In this work, we focus on Sparsely Gated Mixture of Expert (MoE) models \\citep{shazeer2017outrageously,lepikhin2021gshard}. Sparse MoE models replace the dense feed forward network block in every alternate Transformer layer with an MoE layer. The MoE layer has a routing gate that learns which tokens are to be mapped to which set of experts (we use top-2 experts). 
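To make the routing concrete, a minimal PyTorch sketch of a top-2 gated MoE layer is shown below (our own illustration, not the implementation used in this work; expert capacity limits and the balancing loss discussed next are omitted):\n\\begin{verbatim}\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nclass Top2MoELayer(nn.Module):\n    # Sketch of a sparsely gated MoE layer with top-2 routing.\n    def __init__(self, d_model, d_ff, num_experts):\n        super().__init__()\n        self.gate = nn.Linear(d_model, num_experts, bias=False)\n        self.experts = nn.ModuleList([\n            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(),\n                          nn.Linear(d_ff, d_model))\n            for _ in range(num_experts)])\n\n    def forward(self, x):  # x: (num_tokens, d_model)\n        scores = F.softmax(self.gate(x), dim=-1)\n        w, idx = scores.topk(2, dim=-1)         # top-2 gate weights and expert ids\n        w = w \/ w.sum(dim=-1, keepdim=True)     # renormalize the two gates\n        out = torch.zeros_like(x)\n        for e, expert in enumerate(self.experts):\n            for k in range(2):\n                sel = idx[:, k] == e            # tokens routed to expert e as choice k\n                if sel.any():\n                    out[sel] += w[sel, k].unsqueeze(-1) * expert(x[sel])\n        return out\n\\end{verbatim}\n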
To ensure scalability and training efficiency, it is also common to include a weighted gate loss term as in \\citet{lepikhin2021gshard} to the cross entropy loss to encourage the tokens to be uniformly distributed to the experts. \nConcurrent to our work, \\citet{du2021glam}, \\citet{rajbhandari2022deepspeedmoe} and \\citet{clark2022unified} also study MoE scaling.\n\n\\begin{table*}[ht]\n\\begin{center}\n\\begin{small}\n\\addtolength{\\tabcolsep}{-2.5pt}\n\\begin{tabular}{crrrrrrrrrrrrrrrrrr}\n\\toprule\n& \\multicolumn{5}{c}{GPT-3 (dense)} && \\multicolumn{5}{c}{Ours (dense)} && \\multicolumn{5}{c}{Ours (MoE)} & \\\\\n\\cmidrule{2-6} \\cmidrule{8-12} \\cmidrule{14-18}\n& \\multicolumn{1}{c}{\\emph{size}} & \\multicolumn{1}{c}{\\emph{cost}} & \\multicolumn{1}{c}{$l$} & \\multicolumn{1}{c}{$h$} & \\multicolumn{1}{c}{$e$} &\n& \\multicolumn{1}{c}{\\emph{size}} & \\multicolumn{1}{c}{\\emph{cost}} & \\multicolumn{1}{c}{$l$} & \\multicolumn{1}{c}{$h$} & \\multicolumn{1}{c}{$e$} &\n& \\multicolumn{1}{c}{\\emph{size}} & \\multicolumn{1}{c}{\\emph{cost}} & \\multicolumn{1}{c}{$l$} & \\multicolumn{1}{c}{$h$} & \\multicolumn{1}{c}{$e$} &\n\\\\\n\\midrule\n& 125M & 0.36 & 12 & 768 & -- &\n& 125M & 0.36 & 12 & 768 & -- &\n& 15B & 0.43 & 12 & 768 & 512 &\n\\\\\n& 355M & 1.06 & 24 & 1024 & -- &\n& 355M & 1.06 & 24 & 1024 & -- &\n& 52B & 1.30 & 24 & 1024 & 512 &\n\\\\\n& 760M & 2.13 & 24 & 1536 & -- &\n& \\multicolumn{5}{c}{---} &\n& \\multicolumn{5}{c}{---} &\n\\\\\n& 1.3B & 3.57 & 24 & 2048 & -- &\n& 1.3B & 3.57 & 24 & 2048 & -- &\n& 207B & 4.53 & 24 & 2048 & 512 &\n\\\\\n& 2.7B & 7.08 & 32 & 2560 & -- &\n& 2.7B & 7.08 & 32 & 2560 & -- &\n& \\multicolumn{5}{c}{---} &\n\\\\\n& 6.7B & 17.12 & 32 & 4096 & -- &\n& 6.7B & 17.12 & 32 & 4096 & -- &\n& 1.1T & 22.27 & 32 & 4096 & 512 &\n\\\\\n& 13B & 32.67 & 40 & 5120 & -- &\n& 13B & 32.67 & 40 & 5120 & -- &\n& \\multicolumn{5}{c}{---} &\n\\\\\n& 175B & 430.17 & 96 & 12288 & -- &\n& \\multicolumn{5}{c}{---} &\n& \\multicolumn{5}{c}{---} &\n\\\\\n\\bottomrule\n\\end{tabular}%\n\\end{small}\n\\end{center}\n\\caption{\\textbf{Dense and mixture of expert (MoE) model details}. \\emph{size}: number of parameters, \\emph{cost}: training ZFLOPs, $l$: layers, $h$: hidden dimension, $e$: number of experts. All models are trained for 300B tokens with a sequence length of 2048 tokens. Models within the same row are roughly comparable. We estimate the training cost in ZFLOPs analytically (see Appendix~\\ref{app:flops}).}\n\\label{tab:models}\n\\end{table*}\n\n\\subsection{Zero-shot and Few-shot Learning} \\label{subsec:background_fewshot}\n\nRecent works \\citep{schick-schutze-2021-exploiting,radford2019language} have directly evaluated LMs on unseen tasks successfully (zero-shot learning), by recasting their task inputs as cloze-style prompt completion tasks. This is in contrast to the traditional approach of augmenting LMs with task-specific heads, followed by supervised fine-tuning \\citep{devlin2018bert,raffel2020exploring}. Subsequently, \\citet{brown2020gpt3} demonstrated that priming LMs with a few input-output examples (few-shot learning) before careful prompting can improve task performance, that grows with model scale without any fine-tuning, and this gave rise to new resources for prompt engineering \\citep{bach2022promptsource}. In this paper, we contrast the zero-shot, few-shot, and fully supervised fine-tuning performance of dense and MoE models. 
Finally, \\citet{schick2021smalllanguagemodels} perform few-shot learning by few-shot fine-tuning using pattern-exploiting training, whose efficiency can be improved by performing partial fine-tuning of a small number of additional task-specific parameters instead \\citep{lester2021power,li2021prefix,houlsby2019parameter}. %\n\n\n\n\\subsection{Large-scale training}\n\nMany of the models we consider in this work are too big to be trained using standard data parallel techniques, since parameter storage would exceed the usable memory of a single GPU.\nWe adopt several techniques to make these models feasible to train, including pure FP16 training, activation checkpointing and fully sharded data parallel training. These techniques are described in more depth in Appendix~\\ref{app:scaling}.\n\n\\section{Experimental Setup}\n\n\\subsection{Models}\n\nWe train autoregressive (decoder-only) transformer models that roughly match the sizes and architecture explored in~\\citet{brown2020gpt3}. Model sizes are summarized in Table~\\ref{tab:models}.\nWe use pre-normalization transformer blocks~\\citep{baevski2018adaptive,child2019generating} and GELU activations~\\citep{hendrycks2016gelu}.\nWe differ from \\citet{brown2020gpt3} in two ways: (1) we use only dense attention, while they alternate between dense and locally banded sparse attention; and (2) we train our models with sinusoidal positional embeddings, following Shortformer~\\cite{press2020shortformer}.\\footnote{Early experiments found this to produce comparable results with fewer learned parameters.}\n\nWe also train MoE models that mirror our dense model configurations (see the third set of columns in Table~\\ref{tab:models}), so that comparisons are approximately matched in terms of the number of floating point operations (\\emph{FLOP}s).\nOur MoE models follow the design proposed in \\citet{lepikhin2021gshard} with alternating dense and expert layers and top-2 expert selection.\nWe use 512 experts in each expert layer ($E=512$).\nEach expert has a \\emph{capacity} of $\\frac{C \\cdot B}{E}$ tokens, where $C$ is a \\emph{capacity factor} that we set to $2$ and $B$ is the total batch size in tokens. Capacity refers to the maximum number of tokens that are routed to each expert.\nOnce an expert is at capacity for a given batch, additional tokens are considered to be ``overflowed\" with their representations passed-through via the residual connection.\n\n\\citet{fedus2021switch} report instability training large MoE models and suggest rescaling the initial model weights, which we do not find necessary.\nWe instead observe that expert parameters have an $E$-times smaller batch size relative to dense (data parallel) parameters and accordingly rescale expert gradients by a factor $\\frac{1}{\\sqrt{E}}$.\nThis rescaling aligns with theory suggesting that an $E$-times increase in batch size should be accompanied by a $\\sqrt{E}$ increase in learning rate~\\citep{krizhevsky2014one}.\n\n\nFollowing \\citet{brown2020gpt3}, we train our models for 300B tokens\\footnote{While we control the total number of tokens to be the same as \\citet{brown2020gpt3}, our pretraining data is not the same. 
See \\S\\ref{subsec:pretraining_data} for further details.}\nwith a context size (sequence length) of 2048 tokens.\nThe batch size and learning rate are set according to the model size following \\citet{brown2020gpt3}.\nWe linearly warm-up the learning rate from $0$ over the first 375M tokens and linearly decay back to $0$ over the remaining tokens.\nWe use the Adam optimizer~\\cite{kingma2014adam} with $\\beta_1=0.9$, $\\beta_2=0.98$, $\\epsilon=10^{-8}$, weight decay of 0.01 and dropout of 0.1.\\footnote{We note that our 355M dense and 52B MoE models (FLOPs-matched) were trained without dropout, which we find slightly improves performance at smaller scale.}\n\nWe train our models in PyTorch~\\cite{paszke2017automatic} using \\textsc{fairseq}~\\citep{ott2019fairseq}.\n\n\\subsection{Pretraining data}\n\\label{subsec:pretraining_data}\n\nWe pretrain our models on a union of six English-language datasets, including the five datasets used to pretrain RoBERTa~\\citep{liu2019roberta} and the English subset of CC100, totalling 112B tokens corresponding to 453GB:\n\\begin{itemize}[leftmargin=*]\n\\item \\textbf{BookCorpus}~\\citep{zhu2015bookcorpus} consists of more than 10K unpublished books (4GB);\n\\item \\textbf{English Wikipedia}, excluding lists, tables and headers (12GB);\n\\item \\textbf{CC-News}~\\citep{nagel2016ccnews} contains 63 millions English news articles crawled between September 2016 and February 2019 (76GB);\n\\item \\textbf{OpenWebText}~\\citep{gokaslan2019openwebtext}, an open source recreation of the WebText dataset used to train GPT-2 (38GB);\n\\item \\textbf{CC-Stories}~\\citep{trinh2018simple} contains a subset of CommonCrawl data filtered to match the story-like style of Winograd schemas (31GB);\n\\item \\textbf{English CC100}~\\citep{wenzek-etal-2020-ccnet}, a dataset extracted from CommonCrawl snapshots between January 2018 and December 2018, filtered to match the style of Wikipedia (292GB).\n\\end{itemize}\nWe encode our data using the same Byte-Pair Encoding (BPE) as GPT-2~\\citep{radford2019language} and RoBERTa~\\citep{liu2019roberta} with a vocabulary of 50K subword units.\n\n\\subsection{Evaluation}\n\nWe evaluate models in terms of their in-domain and out-of-domain perplexity, as well as downstream task performance. \n\n\\subsubsection{Perplexity Evaluation}\n\nWe first evaluate our models on their ability to predict the next token in a sequence as measured by perplexity. Similar to training, we concatenate all documents in a given dataset using empty lines as separators, split the resulting sequence into non-overlapping blocks of 2048 tokens, and score each block independently.\\footnote{One limitation of this approach is that the first tokens in each block have limited context, as they do not condition on tokens from preceding blocks. Although more expensive, better results could be obtained using a sliding window approach. 
Nevertheless, this form of chunking the input is standard in language model evaluation.}\n\nWe evaluate and report perplexity in both \\textbf{in-domain} and \\textbf{out-of-domain} settings.\nIn-domain, we sample a held-out subset of the combined pretraining data (\\S\\ref{subsec:pretraining_data}).\nFor out-of-domain we use data from The Pile \\citep{gao2021thepile}, a public dataset that combines data from 22 diverse sources (e.g., ArXiv, Github, OpenSubtitles, etc.).\nWe report perplexities on the official test set of each individual subset, as well as the average across all subsets.\n\n\n\\subsubsection{Downstream Evaluation} \\label{subsec:downstream}\n\nWe target models that can perform downstream tasks well. Recent work shows that good perplexity performance does not always align with good performance on downstream tasks~\\cite{tay2021scale}. Hence, we evaluate our models accordingly.\n\n\\paragraph{Benchmarks.}\nWe evaluate our models on a subset of the tasks considered in \\citet{brown2020gpt3}.\nAs GPT-3 performance varies greatly across tasks and model sizes, we focus on tasks for which GPT-3 either demonstrated consistent gains from scaling, or consistent gains going from zero-shot to few-shot settings.\n\n\\textbf{Few-shot:} we use WinoGrande~\\citep{sakaguchi2020winogrande}, StoryCloze~\\citep{mostafazadeh-etal-2016-corpus} and OpenBookQA~\\citep{mihaylov-etal-2018-suit}, the only non-generation tasks for which \\citet{brown2020gpt3} reported meaningful gains over zero-shot at our scale.\\footnote{Defined as an improvement of at least 2 accuracy points over zero-shot learning and the majority class baseline for at least one GPT-3 model no bigger than 6.7B.} We exclude SuperGLUE, since we were not able to reproduce results reported in \\citet{brown2020gpt3} using the public GPT-3 API.\\footnote{Different from other tasks, we were not able to reproduce GPT-3 results on SuperGLUE using the OpenAI API and our evaluation protocol. The authors confirmed that they used a different evaluation protocol for SuperGLUE through personal correspondence.}\n\n\\textbf{Zero-shot:} in addition to the 3 few-shot tasks, we evaluate on ReCoRD~\\citep{zhang2018record}, HellaSwag~\\citep{zellers-etal-2019-hellaswag} and PIQA~\\citep{bisk2020piqa}. \\citet{brown2020gpt3} reported strong results and monotonic improvements from scaling on these tasks.\n\n\\paragraph{Evaluation protocol.} Following \\citet{brown2020gpt3}, we report results on the development set for all tasks except OpenBookQA and StoryCloze, for which we use the test set. For few-shot learning, we report the average results across 25 runs, randomly sampling a different set of few-shot examples from the training set each time.\\footnote{StoryCloze does not have a training set, so we follow \\citet{brown2020gpt3} and sample few-shot examples from the development set instead.} For priming, we further shuffle the few-shot examples for each test instance. Following \\citet{brown2020gpt3}, we use k=50 few-shot examples for WinoGrande, k=70 for StoryCloze and k=100 for OpenBookQA.\nIn cases where this exceeds the maximum context length for the model, we truncate the prompt keeping the maximum number of full examples that fit.\n\n\\paragraph{Baselines.} We compare to the published GPT-3 numbers \\citep{brown2020gpt3} as our primary baseline. To validate our experimental framework, we also evaluate GPT-3 leveraging the OpenAI API using our own evaluation code and settings. 
Unfortunately, the correspondence between model sizes and model names in the OpenAI API is not published. We follow other published work \\citep{gao2021thepile} and guess the correspondence based on our results from the public API as compared to results in \\citet{brown2020gpt3} (see \\S\\ref{subsec:zero_shot_results}).\n\n\\paragraph{Methods.} We compare both priming and fine-tuning-based approaches.\n\\begin{itemize}[leftmargin=*]\n\\item \\textbf{Priming:} We use a language model to separately score each label choice using the same templates as \\citet{brown2020gpt3}, and pick the one with the highest score. For few-shot learning, we use a single newline to separate examples. Our scoring function follows the description in \\citet{brown2020gpt3}:\n\\begin{itemize}\n\\item{\\bf For WinoGrande}, we take the log-likelihood of the common suffix of the different candidates.\n\\item{\\bf For OpenBookQA}, we normalize by the unconditional probability of each candidate by taking $\\frac{p(\\mathtt{completion}|\\mathtt{context})}{p(\\mathtt{completion}|\\mathtt{answer\\_context})}$, where we use the string \\textit{``Answer: ''} as answer\\_context.\n\\item{\\bf For ReCoRD}, we take the sum of per-token log-probabilities.\\footnote{This is different from \\citet{brown2020gpt3}, who take the average per-token log-probability for this task. This worked worse in our preliminary experiments.}\n\\item{\\bf For all the other tasks}, we take the average of per-token log-probabilities, ignoring the common prefix of the different candidates.\n\\end{itemize}\n\\item \\textbf{Fine-tuning:} Although supervised fine-tuning of pre-trained LMs on task specific training data, $\\mathcal{D}$, requires updating and storage of all model parameters per task, the process typically produces significant task specific performance improvements. We contrast the fine-tuning performance of sparse models and their dense counterparts following \\cite{radford2018gpt}, which applies an additional task-specific linear layer $W_y$ on the representation from the final transformer block for each input candidate separately, followed by a softmax layer.\nWe fine-tune all model parameters using the entire training set (fully supervised learning). In addition to our zero-shot tasks, we also evaluate on 3 widely-used classification tasks: BoolQ~\\citep{clark-etal-2019-boolq}, MNLI~\\citep{williams-etal-2018-broad} and SST-2~\\citep{socher-etal-2013-recursive}. 
More details are in Appendix \\ref{sec:fine_tuning_settings}.\n\n\\end{itemize}\n\n\n\n\\begin{figure*}[t]\n \\centering\n \\begin{subfigure}[b]{0.485\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figs\/ppl-valid.pdf}\n \\caption{In-domain (validation)}\n \\label{fig:ppl-valid}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.485\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figs\/ppl-thepile.pdf}\n \\caption{Out-of-domain (the Pile)}\n \\label{fig:ppl-thepile}\n \\end{subfigure}\n \\hfill\n \\caption{\\textbf{Language modeling perplexity.} For the Pile, we report the average perplexity across the 22 subsets.}\n \\label{fig:ppl}\n\\end{figure*}\n\n\\begin{figure*}[t]\n \\centering\n \\begin{subfigure}[b]{0.485\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figs\/efficiency-gain-ppl.pdf}\n \\caption{Language modeling (the Pile)}\n \\label{fig:efficiency-gain-ppl}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.485\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figs\/efficiency-gain-zeroshot.pdf}\n \\caption{Zero-shot priming}\n \\label{fig:efficiency-gain-zeroshot}\n \\end{subfigure}\n \\hfill\n \\caption{\n \\textbf{Estimate of how much more efficient MoEs are relative to dense models in representative datasets.} A speedup factor of $y$ indicates that an MoE model can match the performance of the corresponding dense model using $y$ times less compute. Refer to \\S\\ref{subsubsec:efficiency-gain} for more details.\n }\n \\label{fig:efficiency-gain-datasets}\n\\end{figure*}\n\n\\subsubsection{MoE speedup factor}\n\\label{subsubsec:efficiency-gain}\n\nWe hypothesize that sparse models can achieve comparable performance at a smaller compute budget. As such, it is informative to measure how much more efficient MoEs are at achieving a specific performance level relative to dense models. We estimate how many FLOPs $\\flop (t)$ the model needs to achieve performance $t$ in a particular task (as measured by perplexity for language modeling and accuracy for downstream tasks) using either an MoE or a dense model. Given that we only have discrete observations, we estimate exact missing values by interpolating on a logarithmic scale as follows:\n$$ \\flop(t) = \\exp \\left( \\log \\flop_{lo}(t) + r \\left(\\log \\flop_{hi}(t) - \\log \\flop_{lo}(t) \\right) \\right)$$\nwhere $r = \\frac{t - t_{lo}}{t_{hi} - t_{lo}}$, $t_{lo}$ and $t_{hi}$ are the closest performance to $t$ from the available models while being lower and higher than $t$, respectively, and $\\flop_{lo}(t)$ and $\\flop_{hi}$ are their corresponding training cost in ZFLOPs.\n\nThe interpolation gives us matching performance levels for dense and MoE models. We use them to compute the MoE speedup factor $\\flop_{dense}(t) \/ \\flop_{moe}(t)$. For example, if a dense model requiring 20 ZFLOPs achieves a performance of $90\\%$ on a given task and a MoE model requiring 5 ZFLOPs achieves the same performance, then the formula produces saving factor of 4. 
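In code, this interpolation can be sketched as follows (our own illustration; it assumes each model family is given as a list of (ZFLOPs, performance) pairs and that the target performance lies within the observed range):\n\\begin{verbatim}\nimport math\n\ndef flops_at(target, observations):\n    # Compute (ZFLOPs) needed to reach `target`, interpolating between the\n    # two closest observed (flops, score) points on a log-compute scale.\n    lo = max((o for o in observations if o[1] <= target), key=lambda o: o[1])\n    hi = min((o for o in observations if o[1] >= target), key=lambda o: o[1])\n    if hi[1] == lo[1]:\n        return lo[0]\n    r = (target - lo[1]) \/ (hi[1] - lo[1])\n    return math.exp(math.log(lo[0]) + r * (math.log(hi[0]) - math.log(lo[0])))\n\n# Toy numbers reproducing the example above: 20 vs. 5 ZFLOPs at 90% accuracy.\ndense = [(10.0, 85.0), (20.0, 90.0)]\nmoe = [(2.0, 85.0), (5.0, 90.0)]\nspeedup = flops_at(90.0, dense) \/ flops_at(90.0, moe)   # -> 4.0\n\\end{verbatim}\n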
We visualize the savings curve using $\\flop_{dense}(t)$ in the $x$ axis, which allows us to contrast speedup in different tasks in a comparable scale.\n\n\n\n\n\n\n\\begin{table}[t]\n\\begin{center}\n\\begin{small}\n\\addtolength{\\tabcolsep}{-2.5pt}\n\\resizebox{0.48\\textwidth}{!}{\n\\begin{tabular}{cr|cccccc|c}\n\\toprule\n&& RE & HS & PI & WG & SC & OB & avg \\\\\n\\midrule\n\\multirow{8}{*}{\\shortstack{GPT-3 \\\\ (paper)}}\n& 125M & 70.8 & 33.7 & 64.6 & 52.0 & 63.3 & 35.6 & 53.3 \\\\\n& 355M & 78.5 & 43.6 & 70.2 & 52.1 & 68.5 & 43.2 & 59.4 \\\\\n& 760M & 82.1 & 51.0 & 72.9 & 57.4 & 72.4 & 45.2 & 63.5 \\\\\n& 1.3B & 84.1 & 54.7 & 75.1 & 58.7 & 73.4 & 46.8 & 65.5 \\\\\n& 2.7B & 86.2 & 62.8 & 75.6 & 62.3 & 77.2 & 53.0 & 69.5 \\\\\n& 6.7B & 88.6 & 67.4 & 78.0 & 64.5 & 77.7 & 50.4 & 71.1 \\\\\n& 13B & 89.0 & 70.9 & 78.5 & 67.9 & 79.5 & 55.6 & 73.6 \\\\\n& 175B & 90.2 & 78.9 & 81.0 & 70.2 & 83.2 & 57.6 & 76.9 \\\\\n\\midrule\n\\multirow{4}{*}{\\shortstack{GPT-3 \\\\ (API)}}\n& ada & 77.4 & 42.9 & 70.3 & 52.9 & 68.6 & 41.0 & 58.9 \\\\\n& babb. & 83.1 & 55.1 & 74.5 & 59.4 & 73.3 & 45.6 & 65.2 \\\\\n& curie & 87.1 & 67.8 & 77.1 & 64.3 & 77.7 & 50.8 & 70.8 \\\\\n& davi. & -- & 78.8 & 80.0 & 70.0 & 83.1 & 58.8 & -- \\\\\n\\midrule\n\\multirow{6}{*}{\\shortstack{Ours \\\\ (dense)}}\n& 125M & 69.3 & 33.7 & 65.3 & 52.1 & 66.0 & 35.4 & 53.6 \\\\\n& 355M & 78.1 & 46.2 & 70.6 & 54.2 & 71.0 & 42.0 & 60.4 \\\\\n& 1.3B & 83.5 & 58.4 & 74.6 & 58.1 & 76.8 & 49.4 & 66.8 \\\\\n& 2.7B & 85.8 & 65.9 & 76.6 & 61.4 & 78.2 & 49.6 & 69.6 \\\\\n& 6.7B & 87.5 & 70.2 & 78.2 & 64.7 & 80.5 & 51.8 & 72.2 \\\\\n& 13B & 88.5 & 73.7 & 79.0 & 67.6 & 80.9 & 55.4 & 74.2 \\\\\n\\midrule\n\\multirow{4}{*}{\\shortstack{Ours \\\\ (MoE)}}\n& 15B & 77.8 & 53.2 & 74.3 & 53.4 & 73.6 & 42.0 & 62.4 \\\\\n& 52B & 83.4 & 64.9 & 76.8 & 57.4 & 75.9 & 51.0 & 68.2 \\\\\n& 207B & 86.0 & 70.5 & 78.2 & 60.9 & 78.1 & 50.8 & 70.7 \\\\\n& 1.1T & 88.0 & 78.6 & 80.3 & 66.4 & 81.8 & 55.2 & 75.0 \\\\\n\\bottomrule\n\\end{tabular}\n}\n\\end{small}\n\\end{center}\n\\caption{\n\\textbf{Zero-shot priming accuracy.}\n\\textit{GPT-3 (paper)} results taken from \\citet{brown2020gpt3}, all the other results were obtained by us as described in \\S\\ref{subsec:downstream}.\n\\texttt{RE}: ReCoRD, \\texttt{HS}: HellaSwag, \\texttt{PI}: PIQA, \\texttt{WG}: WinoGrande, \\texttt{SC}: StoryCloze, \\texttt{OB}: OpenBookQA.\nWe do not evaluate the largest GPT-3 model (davinci) on \\texttt{RE} given the high price.\n}\n\\label{tab:zeroshot}\n\\end{table}\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=\\linewidth]{figs\/zeroshot-avg.pdf}\n\\caption{\n\\textbf{Zero-shot priming accuracy averaged across 6 tasks as a function of compute cost.} Each point corresponds to a different, fully-trained model (see Table \\ref{tab:models}).\n\\textit{GPT-3 (paper)} results taken from \\citet{brown2020gpt3}.\n}\n\\label{fig:zeroshot}\n\\end{figure}\n\n\n\n\\section{Results and Analysis}\\label{sec:results}\n\n\n\\subsection{Language modeling perplexity}\n\nWe report our perplexity results in Figure \\ref{fig:ppl}, and visualize the speedup curves in representative subsets of the Pile \\citep{gao2021thepile} in Figure \\ref{fig:efficiency-gain-ppl}. Refer to Appendix \\ref{app:full-results} for full results for all the 22 subsets of the Pile.\n\nWe observe that all MoE models outperform their dense counterparts in all datasets, but their advantage greatly varies across domains and models. 
MoEs are most efficient when evaluated in-domain, where they are able to match the performance of dense models trained with 8-16x more compute (see Figure \\ref{fig:efficiency-gain}). The improvement is more modest in out-of-domain settings, bringing a speedup of 2-4 on the Pile. This is reflected in Figure \\ref{fig:ppl}, where the gap between the MoE and dense curves is substantially smaller in out-of-domain settings. Moreover, the advantage of MoEs over dense models decreases at scale: MoEs need $\\sim$4 times less compute to match the performance of dense models trained with 2-6 ZFLOPs, but the speedup is $\\sim$2 for dense models trained with $\\sim$30 ZFLOPs. %\n\nWe also observe large difference across the subsets of the Pile, which correspond to different domains. As shown in Figure \\ref{fig:efficiency-gain-ppl}, MoEs obtain the largest speedups in subsets that are closest to the training corpus (e.g., CommonCrawl). The efficiency gains are more moderate but still remarkable for other domains like ArXiv and OpenSubtitles. Our largest MoE model barely outperforms its dense counterpart on DM Mathematics (7.63 vs. 7.66 perplexity), which is arguably very different from the training domain.\n\n\n\\subsection{Downstream task evaluation}\n\n\\subsubsection{Zero-shot learning}\\label{subsec:zero_shot_results}\n\nWe report zero-shot results in Table \\ref{tab:zeroshot}, and visualize how the different model families scale in Figure \\ref{fig:zeroshot}.\n\nOur dense models perform at par with their GPT-3 counterparts. This is consistent across different tasks, with our models doing marginally better on average. We are thus able to match \\citet{brown2020gpt3} despite some notable differences in our setup (e.g., different training corpus), establishing a solid baseline to evaluate MoE models on downstream tasks. Similarly, when using our own code to evaluate the strongest GPT-3 API backend (\\textit{davinci}), we obtain numbers that replicate those reported in the original paper for their largest model, which reinforces that our evaluation settings are comparable to \\citet{brown2020gpt3}.\\footnote{We assume that \\textit{ada} corresponds to the 355M model, \\textit{babbage} corresponds to the 1.3B model, and \\textit{curie} corresponds to the 6.7B model based on the API evaluation results.}\n\n\nAs with language modeling, MoEs outperform their dense counterparts for all datasets and model sizes. But, once again, we find the advantage narrows at scale as illustrated in Figure \\ref{fig:zeroshot}. Similar to the domain differences in language modeling, we observe differences across downstream tasks. As shown in Figure \\ref{fig:efficiency-gain-zeroshot}, MoEs obtain significant speedups in certain tasks like HellaSwag and PIQA, but this improvement is more modest in other tasks such as ReCoRD and Winogrande.\n\n\n\n\n\n\n\n\\subsubsection{Few-shot learning}\n\nWe report our few-shot results in Table \\ref{tab:fewshot} and plot the corresponding improvement over zero-shot in Figure \\ref{fig:fewshot}.\n\nOur dense baselines perform at par or slightly better than GPT-3. We observe that the improvement over zero-shot is bigger for larger models, further supporting that certain capabilities in language models emerge at scale \\citep{brown2020gpt3}. Finally, we find that our larger MoE models also benefit from few-shot learning, outperforming their dense counterparts in all conditions. 
However, the improvements going from zero-shot to few-shot are smaller for MoE models compared to their dense counterparts. For example, the average for the 6.7B dense model improves by 3.6 points to 69.3 going from zero-shot to few-shot, whereas the corresponding 1.1T model improves by 2.3 points yielding 70.1.\n\n\\begin{table}[t!]\n\\begin{center}\n\\begin{small}\n\\addtolength{\\tabcolsep}{-2pt}\n\\resizebox{0.48\\textwidth}{!}{\n\\begin{tabular}{cr|ccc|c}\n\\toprule\n&& WG & SC & OB & avg \\\\\n\\midrule\n\\multirow{8}{*}{\\shortstack{GPT-3 \\\\ (paper)}}\n& {125M} & 51.3 \\scriptsize{--0.7} & 62.3 \\scriptsize{--1.0} & 37.0 \\scriptsize{+1.4} & 50.2 \\scriptsize{--0.1} \\\\\n& {355M} & 52.6 \\scriptsize{+0.5} & 70.2 \\scriptsize{+1.7} & 43.6 \\scriptsize{+0.4} & 55.5 \\scriptsize{+0.9} \\\\\n& {760M} & 57.5 \\scriptsize{+0.1} & 73.9 \\scriptsize{+1.5} & 48.0 \\scriptsize{+2.8} & 59.8 \\scriptsize{+1.5} \\\\\n& {1.3B} & 59.1 \\scriptsize{+0.4} & 76.1 \\scriptsize{+2.7} & 50.6 \\scriptsize{+3.8} & 61.9 \\scriptsize{+2.3} \\\\\n& {2.7B} & 62.6 \\scriptsize{+0.3} & 80.2 \\scriptsize{+3.0} & 55.6 \\scriptsize{+2.6} & 66.1 \\scriptsize{+2.0} \\\\\n& {6.7B} & 67.4 \\scriptsize{+2.9} & 81.2 \\scriptsize{+3.5} & 55.2 \\scriptsize{+4.8} & 67.9 \\scriptsize{+3.7} \\\\\n& {13B} & 70.0 \\scriptsize{+2.1} & 83.0 \\scriptsize{+3.5} & 60.8 \\scriptsize{+5.2} & 71.3 \\scriptsize{+3.6} \\\\\n& {175B} & 77.7 \\scriptsize{+7.5} & 87.7 \\scriptsize{+4.5} & 65.4 \\scriptsize{+7.8} & 76.9 \\scriptsize{+6.6} \\\\\n\\midrule\n\\multirow{6}{*}{\\shortstack{Ours \\\\ (dense)}}\n& {125M} & 52.2 \\scriptsize{+0.1} & 64.7 \\scriptsize{--1.3} & 35.0 \\scriptsize{--0.4} & 50.7 \\scriptsize{--0.5} \\\\\n& {355M} & 53.7 \\scriptsize{--0.5} & 72.2 \\scriptsize{+1.1} & 42.0 \\scriptsize{+0.0} & 56.0 \\scriptsize{+0.2} \\\\\n& {1.3B} & 60.1 \\scriptsize{+2.0} & 78.6 \\scriptsize{+1.9} & 49.4 \\scriptsize{+0.0} & 62.7 \\scriptsize{+1.3} \\\\\n& {2.7B} & 63.9 \\scriptsize{+2.5} & 82.1 \\scriptsize{+3.8} & 53.2 \\scriptsize{+3.6} & 66.4 \\scriptsize{+3.3} \\\\\n& {6.7B} & 67.6 \\scriptsize{+2.9} & 83.2 \\scriptsize{+2.7} & 57.0 \\scriptsize{+5.2} & 69.3 \\scriptsize{+3.6} \\\\\n& {13B} & 71.0 \\scriptsize{+3.5} & 85.0 \\scriptsize{+4.1} & 59.5 \\scriptsize{+4.1} & 71.8 \\scriptsize{+3.9} \\\\\n\\midrule\n\\multirow{4}{*}{\\shortstack{Ours \\\\ (MoE)}}\n& {15B} & 52.5 \\scriptsize{--0.9} & 71.4 \\scriptsize{--2.1} & 42.2 \\scriptsize{+0.2} & 55.4 \\scriptsize{--0.9} \\\\\n& {52B} & 58.1 \\scriptsize{+0.7} & 77.5 \\scriptsize{+1.6} & 48.9 \\scriptsize{--2.1} & 61.5 \\scriptsize{+0.1} \\\\\n& {207B} & 62.8 \\scriptsize{+1.9} & 81.1 \\scriptsize{+3.0} & 52.4 \\scriptsize{+1.6} & 65.4 \\scriptsize{+2.2} \\\\\n& {1.1T} & 68.6 \\scriptsize{+2.3} & 83.9 \\scriptsize{+2.1} & 57.7 \\scriptsize{+2.5} & 70.1 \\scriptsize{+2.3} \\\\\n\\bottomrule\n\\end{tabular}\n}\n\\end{small}\n\\end{center}\n\\caption{\n\\textbf{Few-shot priming accuracy and absolute improvement over zero-shot.}\n\\textit{GPT-3 (paper)} results taken from \\citet{brown2020gpt3}, all the other results were obtained by us as described in \\S\\ref{subsec:downstream}.\n\\texttt{WG}: WinoGrande, \\texttt{SC}: StoryCloze, \\texttt{OB}: OpenBookQA.}\n\\label{tab:fewshot}\n\\end{table}\n\n\n\\begin{figure}[t!]\n\\centering\n\\includegraphics[width=\\linewidth]{figs\/fewshot-improvement.pdf}\n\\caption{\n\\textbf{Absolute accuracy improvement going from zero-shot to few-shot}, averaged across 3 tasks. 
Each point corresponds to a different, fully-trained model (see Table \\ref{tab:models}).\n\\textit{GPT-3 (paper)} results taken from \\citet{brown2020gpt3}.\n}\n\\label{fig:fewshot}\n\\end{figure}\n\n\n\n\n\n\\subsubsection{Supervised Fine-Tuning}\nTable \\ref{tab:finetuning} contrasts full fine-tuning performance of MoE models with their dense counterparts on 8 datasets, using zero-shot accuracy as a baseline for reference. We did not fine-tune the 6.7B and 13B dense models and the 1.1T MoE models, owing to their high resource needs. As expected, supervised fine-tuning yields substantial performance benefits for all dense models across all datasets, over zero-shot performance. In contrast, although fine-tuning of MoE models produces substantial benefits for Storycloze, BoolQ, SST-2, MNLI and some improvements on OpenBookQA, it results in worse performance for HellaSwag, PIQA, and Winogrande. For the cases where we see improvements, the accuracy of fine-tuned MoE models approach that of their corresponding dense models. For this comparison, we fine-tune MoE models exactly as we do the dense models. While MoE models may benefit from alternative fine-tuning approaches, for example, selective fine-tuning of the expert or non-expert parameters, we leave such exploration to future work. \n\n\\begin{table}[t!]\n\\begin{center}\n\\begin{small}\n\\addtolength{\\tabcolsep}{-2.5pt}\n\\resizebox{0.48\\textwidth}{!}{\n\\begin{tabular}{cr|cccc|ccc}\n\\toprule\n&& \\multicolumn{4}{c|}{Ours (Dense)} & \\multicolumn{3}{c}{Ours (MoE)} \\\\\n&& 125M & 355M & 1.3B & 2.7B & 15B & 52B & 207B \\\\ %\n\\midrule\n\\multirow{2}{*}{\\texttt{SC}}\n& zero-shot & 66.0 & 71.0 & 76.8 & 78.2 & 73.6 & 75.9 & 78.1 \\\\ %\n& fine-tune & 87.8 & 89.5 & 93.8 & 97.0 & 80.3 & 84.9 & 80.9 \\\\ %\n\\midrule\n\\multirow{2}{*}{\\texttt{OB}}\n& zero-shot & 35.4 & 42.0 & 49.4 & 49.6 & 42.0 & 51.0 & 50.8 \\\\ %\n& fine-tune & 50.6 & 59.0 & 67.4 & 70.8 & 51.2 & 51.4 & 51.0 \\\\ %\n\\midrule\n\\multirow{2}{*}{\\texttt{BQ}}\n& zero-shot & 56.1 & 58.6 & 58.7 & 60.3 & 60.9 & 56.0 & 54.2 \\\\ %\n& fine-tune & 73.2 & 75.2 & 79.6 & 84.6 & 71.6 & 75.3 & 77.5 \\\\ %\n\\midrule\n\\multirow{2}{*}{\\texttt{MN}}\n& zero-shot & 46.2 & 52.1 & 55.3 & 56.0 & 49.3 & 52.1 & 52.6 \\\\ %\n& fine-tune & 80.9 & 84.3 & 84.1 & 88.9 & 77.7 & 81.2 & 78.7 \\\\ %\n\\midrule\n\\multirow{2}{*}{\\texttt{SST-2}}\n& zero-shot & 50.9 & 50.9 & 51.6 & 51.9 & 51.6 & 50.9 & 50.9 \\\\ %\n& fine-tune & 92.9 & 92.9 & 94.8 & 93.4 & 89.3 & 90.1 & 90.3 \\\\ %\n\\midrule\n\\multirow{2}{*}{\\texttt{HS}}\n& zero-shot & 33.7 & 46.2 & 58.4 & 65.9 & 53.2 & 64.9 & 70.5 \\\\ %\n& fine-tune & 50.7 & 64.8 & 74.1 & 90.0 & 37.3 & 45.4 & 42.2 \\\\ %\n\\midrule\n\\multirow{2}{*}{\\texttt{PI}}\n& zero-shot & 65.3 & 70.6 & 74.6 & 76.6 & 74.3 & 76.8 & 78.2 \\\\ %\n& fine-tune & 68.2 & 71.7 & 71.2 & 80.3 & 66.3 & 66.1 & 68.3 \\\\ %\n\\midrule\n\\multirow{2}{*}{\\texttt{WG}}\n& zero-shot & 52.1 & 54.2 & 58.1 & 61.4 & 53.4 & 57.4 & 60.9 \\\\ %\n& fine-tune & 65.7 & 63.3 & 67.4 & 69.5 & 50.2 & 56.0 & 50.4 \\\\ %\n\\bottomrule\n\\end{tabular}}\n\\end{small}\n\\end{center}\n\\caption{\n\\textbf{Fully supervised fine-tuning accuracy compared with zero-shot accuracy.} \\texttt{SC}: StoryCloze, \\texttt{OB}: OpenBookQA, \\texttt{BQ}: BoolQ, \\texttt{MN}: MNLI, \\texttt{HS}: HellaSwag, \\texttt{PI}: PIQA, \\texttt{WG}: WinoGrande. 
Largest models omitted owing to their high resource utilization.}\n\\label{tab:finetuning}\n\\end{table}\n\n\n\n\\section{Conclusion}\n\nWe present results for scaling sparse Language Models up to 1.1T parameters. We observe that up to this scale sparse models offer better performance vs. computation trade-off when compared to their dense counterparts for language modeling, zero- and few-shot learning. While the gap begins to close at scale our biggest sparse model outperforms its dense counterpart where the latter requires twice as much computation. These results confirm that sparse MoE models can provide an alternative to widely used dense architectures that saves computation and reduces model energy consumption.\n\n\n\n\\section*{Ethical considerations}\nPrevious work~\\cite{sheng2019woman,bordia2019identifying,nadeem2020stereoset,de2021stereotype} has observed that language models absorb bias and toxicity represented in the training data. So as to better understand the potential harms of our models in this front, we evaluated them on StereoSet~\\cite{nadeem2020stereoset} and CrowS-Pairs~\\cite{nangia-etal-2020-crows}, and report our results in Appendix \\ref{app:potential_harms}. Our results show that the percentage of bias and stereotype in dense and MoE models is comparable, especially at scale.\nMoreover, in general, we note worse performance (more bias\/stereotyping) at larger scales. This observation points to more research needed in order to mitigate such behavior. Intuitively however, we believe that sparse models may be inherently more controllable -- e.g.~designing specific experts -- than dense models. We leave this line of investigation for future research. \n\nAnother concern of scaling language models is the energy usage and the associated environmental impact required for training, which we discuss in detail in Appendix \\ref{app:co2}. Nevertheless, our work shows that MoEs can be a more compute-efficient alternative to traditional dense models, which could alleviate the environmental impact of future scaling efforts.\nMoreover, by releasing all of our pre-trained language models, we believe we have alleviated some exploration burden for the community and the environment, allowing for more efficient offsets for other researchers.\n\nIn the spirit of transparency and allowing for maximal replicability and accountability, we include data and model cards together with our code. \n\n\n\\section*{Limitations}\n\nOur study is limited to one specific MoE configuration. In particular, our sparse and dense models use the same hyperparameters and model structure, closely following GPT-3. However, it is possible that this configuration is suboptimal for scaling MoE models. Similarly, we did not explore different MoE-specific hyperparameters, such as the number of experts. Finally, while our work reveals that the performance gap between MoE and dense models varies greatly across tasks and domains, the specific factors that make certain tasks more favorable for MoEs remain unclear.\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nEnergy levels, eigenvalues of a Hamiltonian, play the most primary role in\ndetermining properties of a quantum system. When energy levels cross\nor avoided cross as parameters of a Hamiltonian vary, various interesting\nphenomena happen. 
For example, if two instantaneous energy levels of a \ntime-dependent Hamiltonian undergo an avoided crossing, non-adiabatic tunneling \nbetween them, known as Landau-Zener tunneling, takes place~\\cite{Landau}. \nClosely related to this, the runtime of adiabatic quantum computation is \ninversely proportional to the square of the energy gap between the ground \nand first excited levels~\\cite{Farhi01}. An eigenstate adiabatically encircling \ndegeneracy points accumulates a Berry phase in addition to \na dynamical phase~\\cite{Berry84,Shapere}. In quantum chemistry, a conical \nintersection of electronic energy surfaces of molecules plays a key role \nin understanding ultrafast radiationless reactions~\\cite{Yarkony96,Kuppermann}.\nA quantum phase transition, a dramatic change in a ground state as parameters \nof a system vary, is related to crossings or avoided crossings of\nthe two lowest energy levels~\\cite{Vojta03}. Kais {\\it et al.} have shown that \nthe finite-size scaling method can be used for studying the critical behavior, \ni.e., the level degeneracy or absorption, of a few-body quantum Hamiltonian \n$H(\\lambda_1,\\cdots,\\lambda_k)$ as a function of a set of parameters \n$\\{\\lambda_i\\}$~\\cite{kais0,kais12}. These parameters could be the external \nfields, inter-atomic distances, nuclear charges for stability of negative \nions, cluster size, and optical lattice parameters such as the potential \ndepth~\\cite{kais34}. Thus, it is important to develop ways of finding level \ncrossings and to understand how eigenstates or relevant physical quantities \nchange at crossings or avoided crossings.\n\nRecently Bhattacharya and Raman presented a powerful algebraic method for \nfinding level crossings without solving an eigenvalue problem \ndirectly~\\cite{Bhattacharya06}. Alongside such mathematical \ntools, it is necessary to understand which physical quantities can be used to \ndetect or characterize crossings or avoided crossings. First of all, the \nmeasurement of a Berry phase could be a good way to detect level crossings, \nbecause the Berry phase originates from level crossings. It is well known that avoided crossings \nor glancing intersections are not the source of Berry phases~\\cite{Yarkony96}.\nIs there any way that Berry phases can detect {\\it avoided level \ncrossings}? Here we show that the marginal Berry phase of an entangled\nstate can serve as an indicator of avoided level crossings.\n\nEntropy is another indicator of level crossings. Since level crossings \nor avoided crossings are accompanied by a drastic change in the eigenstates, \nthe information content of the relevant eigenstates may also vary. The Shannon \nentropy of the electron density measures the delocalization, or lack of \nstructure, of the respective distribution. Thus the Shannon entropy is maximal \nfor a uniform distribution, that is, for an unbound system, and is minimal when\nthe uncertainty about the structure of the distribution is minimal~\\cite{kais5}.\nGonz\\'alez-F\\'erez and Dehesa showed that the Shannon entropy could be \nused as an indicator of avoided crossings~\\cite{Gonzalez03}. The von Neumann \nentropy, the quantum version of the Shannon entropy, is a good entanglement \nmeasure for a bipartite pure state. In quantum information, much attention \nhas been paid to the relation between entanglement and quantum phase \ntransitions~\\cite{Amico08,kais6}. 
Recently one of the authors investigated \nthe relation between entanglement, Berry phases, and level crossings for two \nqubits with the XY-type interaction and found that a level crossing is not \nalways accompanied by an abrupt change in entanglement~\\cite{scoh08}. \n\nIn this paper, in order to study how entanglement and Berry phases vary at \nlevel crossings, we consider the Breit-Rabi Hamiltonian describing the hyperfine \ninteraction of electron and nuclear spins in a uniform magnetic \nfield~\\cite{Breit-Rabi}. It is shown that the von Neumann entropy of the\nelectron (or nuclear) spin is maximal at avoided crossings. \nIt is demonstrated that significant changes in Berry phases and \nentanglement are closely related to level crossings. \nWe show that the marginal Berry phase of the electron (or nuclear) spin \ncan serve as a good indicator of avoided level crossings: \nit has nodal points at the avoided crossing points. \n\nThe paper is organized as follows. In Sec.~\\ref{section_BR}, the Breit-Rabi\nHamiltonian is introduced. In Sec.~\\ref{section_hydrogen}, as a specific\napplication of the Breit-Rabi Hamiltonian, we consider the hyperfine interaction\nbetween a nuclear spin $1\/2$ and an electron spin $1\/2$ of a hydrogen atom \nin a magnetic field. We analyze the close relation between entanglement,\nBerry phases, and level crossings. In Sec.~\\ref{section_sodium}, we carry out \na similar analysis for a sodium atom with a nuclear spin $3\/2$ and an \nelectron spin $1\/2$. In Sec.~\\ref{section_conclusion}, we summarize the main \nresults.\n\n\\section{Breit-Rabi Hamiltonian}\n\\label{section_BR}\n\nLet us consider an atom with a single valence electron in the ground state \nwith orbital angular momentum $L=0$. In the presence of a uniform magnetic \nfield $B$ in the $z$ direction, its atomic spectrum is described by \nthe Breit-Rabi Hamiltonian~\\cite{Breit-Rabi}, which is given by the sum of \nthe hyperfine interaction between a nuclear spin $\\bf I$ and an electron \nspin $\\bf S$ and their Zeeman couplings to the magnetic field \n\\begin{equation}\nH = A\\, {\\bf I}{\\bm \\cdot}{\\bf S} + (a S_z + b I_z)B \\,,\n\\label{Hamil_BR}\n\\end{equation}\nwhere $A$ is the hyperfine coupling constant, $a=\\gamma_e\\hbar$, and \n$b=\\gamma_n\\hbar$. Here $\\gamma_e$ and $\\gamma_n$ are the electron and \nnuclear gyromagnetic ratios, respectively. The electron spin operator \n${\\bf S}$ and the nuclear spin operator ${\\bf I}$ are measured in units \nof $\\hbar$. \n\nThe Breit-Rabi Hamiltonian~(\\ref{Hamil_BR}) is widely used to describe double\nresonance in nuclear magnetic resonance~\\cite{Slichter} and the muon spin \nrotation in semiconductors~\\cite{Patteson88}. Although simple and well\nunderstood, it still continues to provide new insights. Recently Bhattacharya \nand Raman found a new class of invariants of the Breit-Rabi\nHamiltonian~\\cite{Bhattacharya06}. As will be shown here, it is a prime example\nof the close relation between level crossings, entanglement, and \ngeometric phases. It is also related to the Hamiltonian of \nelectron spin qubits in quantum dots~\\cite{Loss98}, where the Heisenberg \ninteraction between two electron spins can be turned on and off to \nimplement the controlled-NOT gate.\n\nBefore applying the Hamiltonian~(\\ref{Hamil_BR}) to specific systems, let us\nlook at its general properties. 
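As a concrete numerical illustration (our own sketch, not part of the original analysis; the hydrogen values of $a\/A$ and $b\/A$ quoted in Sec.~\\ref{section_hydrogen} are used), the Hamiltonian~(\\ref{Hamil_BR}) can be built for arbitrary spins from ladder operators and diagonalized directly:\n\\begin{verbatim}\nimport numpy as np\n\ndef spin_ops(s):\n    # Spin operators (Sz, S+, S-) for spin s, in units of hbar,\n    # in the basis m = s, s-1, ..., -s.\n    m = np.arange(s, -s - 1, -1)\n    sz = np.diag(m)\n    sp = np.zeros((len(m), len(m)))\n    for i in range(1, len(m)):   # <m+1| S+ |m> = sqrt(s(s+1) - m(m+1))\n        sp[i - 1, i] = np.sqrt(s * (s + 1) - m[i] * (m[i] + 1))\n    return sz, sp, sp.T\n\ndef breit_rabi(A, a, b, B, S=0.5, I=0.5):\n    # H = A I.S + (a Sz + b Iz) B on the product basis |m_S> x |m_I>.\n    Sz, Sp, Sm = spin_ops(S)\n    Iz, Ip, Im = spin_ops(I)\n    idS, idI = np.eye(Sz.shape[0]), np.eye(Iz.shape[0])\n    IdotS = np.kron(Sz, Iz) + 0.5 * (np.kron(Sp, Im) + np.kron(Sm, Ip))\n    return A * IdotS + B * (a * np.kron(Sz, idI) + b * np.kron(idS, Iz))\n\n# Hydrogen-like parameters in units of the hyperfine constant A; B in tesla.\nH = breit_rabi(A=1.0, a=19.767, b=-0.03, B=0.1)\nJz = np.kron(np.diag([0.5, -0.5]), np.eye(2)) + np.kron(np.eye(2), np.diag([0.5, -0.5]))\nassert np.allclose(H @ Jz, Jz @ H)   # m = m_S + m_I is a good quantum number\nprint(np.linalg.eigvalsh(H))\n\\end{verbatim}\n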
If $B =0$, then $H$ commutes with \nboth the square of the total spin operator ${\\bf J}^2$ and $J_z$, where the \ntotal spin operator is defined by ${\\bf J} \\equiv {\\bf I} + {\\bf S}$. However,\nfor $B\\ne 0$, due to the fact that $a \\ne b$, the Hamiltonian~(\\ref{Hamil_BR})\nno longer commutes with ${\\bf J}^2$, but still commutes with $J_z$. So the \neigenvalue $m$ of $J_z$ is a good quantum number for the Breit-Rabi \nHamiltonian. With ladder operators, $S_\\pm = S_x \\pm i S_y$ and \n$I_\\pm = I_x \\pm i I_y$, the Hamiltonian~(\\ref{Hamil_BR}) can be rewritten as\n\\begin{equation}\nH = A\\,I_zS_z + \\frac{A}{2}(S_{+}I_{-} + S_{-}I_{+}) + B (a S_z + b I_z)\\,.\n\\label{Hamil_BR2}\n\\end{equation}\nLet us use a simple notation $\\ket{m_S,m_I}$ to represent the product state \n$\\ket{S,m_S}\\otimes\\ket{I,m_I}$, where $\\ket{S,m_S}$ is an eigenstate of\n${\\bf S}^2$ and $S_z$, and $\\ket{I,m_I}$ is an eigenstate of ${\\bf I}^2$ and \n$I_z$. The first and third terms in Eq.~(\\ref{Hamil_BR2}) give the diagonal \nmatrix elements \n\\begin{subequations}\n\\begin{equation}\n\\bra{m_S\\,m_I}H\\ket{m_S\\,m_I} \n= f(m_S, m_I) = A\\,m_S\\,m_I + m_S\\,aB + m_I\\,bB \\,.\n\\end{equation}\nThe second term in Eq.~(\\ref{Hamil_BR2}) corresponds to the off-diagonal \nmatrix elements\n\\begin{align}\n&\\bra{m_S'\\,m_I'}S_{+}I_{_-}\\ket{m_S\\,m_I} \\nonumber\\\\ \n&=\\sqrt{(S-m_S)(S+m_s+1)}\\,\\sqrt{(I+m_I)(I-m_I+1)}\\,\n \\delta_{m_S',m_S+1}\\delta_{m_I',m_I-1}\\,.\n\\end{align}\n\\label{matrix_element}\n\\end{subequations}\nSince $m_S'-m_S =1$ and $m_I' - m_I= -1$ (or vice versa), one has the \nselection rule, $\\Delta m = (m_S'+m_I') -(m_S+m_I) =0$, that is, the magnetic\nquantum number $m=m_S + m_I$ is conserved. This implies that the \nHamiltonian~(\\ref{Hamil_BR}) is block diagonal in the basis set \n$\\{\\ket{m_S,m_I}\\}$ ordered by $m$. \n\n\\section{The Hydrogen Atom in a Uniform Magnetic Field}\n\\label{section_hydrogen}\n\n\\subsection{Eigenvalues and Eigenstates}\n\nAs a simple but real system described by the Hamiltonian~(\\ref{Hamil_BR}), \nlet us consider the interaction between the nuclear spin $I=1\/2$ and the \nelectron spin $S=1\/2$ of a hydrogen atom in a uniform magnetic field.\nSince $H$ commutes with the $z$-component of the total spin operator, \n$J_z = S_z + I_z$, it is convenient to arrange the product basis \n$\\{\\ket{m_S,m_I}\\}$ in the decreasing order of the magnetic quantum number \n$m$ of $J_z$ as \n$\\left\\{\\ket{\\tfrac{1}{2},\\tfrac{1}{2}}, \\ket{\\tfrac{1}{2},-\\tfrac{1}{2}},\n\\ket{-\\tfrac{1}{2},\\tfrac{1}{2}}, \\ket{-\\tfrac{1}{2},-\\tfrac{1}{2}}\\right\\}$. \nBy means of Eqs.~(\\ref{matrix_element}), the Breit-Rabi Hamiltonian for a\nhydrogen atom can be written in the ordered basis\n\\begin{align}\nH\n= \\frac{1}{4}\n\\begin{pmatrix}\nA + 2(a+b)B & 0 & 0 & 0 \\\\\n0 & -A+2(a-b)B & 2A & 0 \\\\\n0 & 2A & -A -2(a-b)B & 0 \\\\\n0 & 0 & 0 & A -2(a+b)B\n\\end{pmatrix}\\,.\n\\label{Hamil_Hydrogen}\n\\end{align}\nThe Hamiltonian~(\\ref{Hamil_Hydrogen}) is block diagonal, so it is \nstraightforward to obtain its eigenvalues and eigenvectors.\nThe subspace of $m=\\pm 1$ is spanned by \n$\\left\\{ \\ket{\\tfrac{1}{2},\\tfrac{1}{2}}, \n \\ket{-\\tfrac{1}{2},-\\tfrac{1}{2}}\\right\\}$. 
\nThe block Hamiltonian on this subspace is already diagonal and has its \neigenvalues and eigenvectors\n\\begin{subequations}\n\\label{H_eigen1}\n\\begin{align}\nE_{\\pm 1} &= \\frac{A}{4} \\pm \\frac{1}{2}(a + b)B \\,,\\\\\n\\ket{E_{\\pm 1}} &= \\ket{\\pm\\tfrac{1}{2},\\pm\\tfrac{1}{2}},\n\\label{H_eva}\n\\end{align}\n\\end{subequations}\nwhere the subscripts `$\\pm 1$' in $E_{\\pm 1}$ denote the magnetic quantum \nnumber $m=\\pm 1$. The block Hamiltonian with $m=0$ is defined on the subspace \nof $\\left\\{ \\ket{\\tfrac{1}{2}, -\\tfrac{1}{2}}, \n \\ket{\\tfrac{1}{2}, -\\tfrac{1}{2}} \\right\\}$ and is written as \n\\begin{align}\nH_{m=0} \n&= \\frac{1}{4}\n \\begin{pmatrix}\n -A +2(a-b)B & 2A \\\\\n 2A & -A - 2(a-b)B\n \\end{pmatrix}\\,.\n\\end{align}\nOne can interpret the Hamiltonian $H_{m =0}$ as that of a spin in an \neffective magnetic field in the $x$-$z$ plane, ${\\bf B}_{\\rm eff} \n\\equiv (A\/2, 0, (a-b)B\/2)$. \nThe eigenvalues and eigenvectors of $H_{m=0}$ can be written easily as\n\\begin{subequations}\n\\label{H_eigen0}\n\\begin{align}\nE_{0}^{\\pm} \n&= -\\frac{A}{4} \\pm \\frac{1}{2}\\sqrt{(a-b)^2 B^2 + A^2}\\,, \\\\\n\\ket{E_{0}^{+}} \n&= \\phantom{+} \\cos\\frac{\\alpha}{2}\\, \\ket{ \\tfrac{1}{2},-\\tfrac{1}{2}} \n + \\sin\\frac{\\alpha}{2}\\, \\ket{-\\tfrac{1}{2}, \\tfrac{1}{2}}\\,,\n\\label{H_evb} \\\\\n\\ket{E_{0}^{-}} \n&= - \\sin\\frac{\\alpha}{2}\\, \\ket{ \\tfrac{1}{2},-\\tfrac{1}{2}} \n + \\cos\\frac{\\alpha}{2}\\, \\ket{-\\tfrac{1}{2}, \\tfrac{1}{2}}\\,,\n\\label{H_evc}\n\\end{align}\n\\end{subequations}\nwhere $\\tan\\alpha\\equiv \\frac{A}{(a-b)B}$. In a weak magnetic field limit \n(so called the Zeeman region), the Zeeman energy is smaller\nthan the hyperfine coupling. At $B= 0$, i.e., $\\alpha = \\pi\/2$, \nthe ground eigenstate $\\ket{E_{0}^{-}}$ becomes the singlet state,\n$\\ket{E_{0}^{-}} = \\frac{1}{\\sqrt{2}}\\left(\\ket{-\\tfrac{1}{2},\\tfrac{1}{2}} -\n\\ket{\\tfrac{1}{2}, -\\tfrac{1}{2}} \\right)$. In a strong magnetic field called\nthe Paschen-Back region, the Zeeman couplings are dominant. That is, in \nlimit of $B\\to \\infty$, one has $\\alpha \\to 0$ and $\\ket{E_{0}^{-}} \\to\n\\ket{-\\tfrac{1}{2},\\tfrac{1}{2}}$.\n\nThe eigenvalues and eigenstates of the Breit-Rabi Hamiltonian for a hydrogen \natom, Eqs.~(\\ref{H_eigen1}) and (\\ref{H_eigen0}) depend on two parameters: \nthe hyperfine constant $A$ and the magnetic field $B$. The hyperfine constant \n$A$ of the hydrogen atom in vacuum is positive. However, if a hydrogen atom\nis in an inert gas, the hyperfine constant $A$ could be \nnegative~\\cite{Foner60}, resembling the spin-spin coupling constant in \na Heisenberg model. We assume that $A$ as well as $B$ varies and can be \nnegative. To this end, $A$ in Eqs.~(\\ref{H_eigen1}) and (\\ref{H_eigen0}) is \nreplaced by $fA$ with $-1\\le f\\le 1$, so $A$ is still kept the positive \nconstant in vacuum. If $f$ is negative, so the hyperfine constant.\n\nDepending on $f$ and $B$, the ground state of the \nHamiltonian~(\\ref{Hamil_Hydrogen}) is given either by $\\ket{E_{\\pm 1}}$ or by \n$\\ket{E_{0}^{-}}$. It is convenient to plot the energy levels normalized by \n$A$. 
Then, Eqs.~(\\ref{H_eigen1}) and (\\ref{H_eigen0}) become\n$E_{\\pm1}\/A = \\frac{f}{4} \\pm \\frac{1}{2}(a' + b') B$ and \n$E_{0}^{\\pm}\/A= -\\frac{f}{4} \\pm \\frac{1}{2}\\sqrt{(a'-b')^2 B^2 + f^2}$,\nwhere $a' \\equiv a\/A \\approx 19.767\\,\\,\\text{T}^{-1}$ and \n $b'\\equiv b\/A \\approx -0.03\\,\\,\\text{T}^{-1}$ are taken \nform Ref.~\\cite{Arimondo77}, and $B$ is measured in the unit of tesla. \nThe energy levels $E_m\/A$ are plotted as functions of $B$ for $f=1$ in \nFig.~\\ref{Fig1} (a) and for $f=-0.5$ in Fig.~\\ref{Fig1} (b). For $f\\ge 0$, \nthe ground level is $E_{0}^{-}$. For $f<0$, two levels, $E_{0}^{\\pm}$ and\n$E_{\\pm1}$, with different magnetic quantum numbers cross at \n$f =\\frac{2a'b'}{a'-b'} |B|$. Fig.~\\ref{Fig2} (a) shows the energy gap \n$\\Delta\/A$ between the ground and first exited states as a function of $f$ \nand $B$, where we take $a' =0.1\\,\\,\\text{T}^{-1}$ and \n$b' = -0.01\\,\\,\\text{ T}^{-1}$ to see clearly the phase diagram of \nthe ground state of the Hamiltonian~(\\ref{Hamil_Hydrogen}) determined by \nthe magnetic quantum number $m$. As shown in Fig.~\\ref{Fig2} (a), the energy\ngap $\\Delta\/A$ vanishes along the lines defined by \n$f = \\frac{2a'b'}{a'-b'} |B|$ \nand the negative $f$ axis. In the region of $f <\\frac{2a'b'}{a'-b'} |B|$, \nthe ground state becomes either $\\ket{E_{+1}}$ or $\\ket{E_{-1}}$ with the \nmagnetic quantum number $m=1$ or $m=-1$, respectively. On the other hand, \nthe ground state in the region defined by $f >\\frac{2a'b'}{a'-b'} |B|$ is \ngiven by $\\ket{E_{0}^{-}}$ with the magnetic quantum number $m=0$.\nOne can see that the magnetic quantum number $m$ of the ground state \nchanges abruptly at the level crossing points.\n\n\\subsection{Entanglement}\n\nLet us discuss the relation between level crossings and entanglement. \nEntanglement refers to the quantum correlation between subsystems and has no \nclassical analog~\\cite{book-Gruska,kais7}. When level crossing happens as the \nparameter of the Hamiltonian varies, the ground state changes drastically. \nEntanglement as a physical quantity may also undergo a significant change. \nHowever, entanglement is not always a good indicator to level crossing as \nshown in Ref.~\\cite{scoh08}.\n\nFirst, let us examine the relation between entanglement and level crossings \nfor each eigenstate. The von Neumann entropy $S$ of a subsystem is a good \nentanglement measure for a pure bipartite system. If $\\ket{\\psi_{AB}}$ is \na quantum state of a system composed of two subsystems $A$ and $B$, the \nentanglement between $A$ and $B$ is measured by the von Neumann entropy of \nthe subsystem,\n$S(\\rho_A) = -{\\rm tr}(\\rho_A\\log\\rho_A)= S(\\rho_B) \n = -{\\rm tr}(\\rho_B\\log\\rho_B)$, where the reduced density matrix \n$\\rho_A$ of the subsystem $A$ is obtained by tracing out the degrees of \nfreedom of $B$ as $\\rho_A ={\\rm tr}_B( \\ket{\\psi_{AB}} \\bra{\\psi_{AB}} )$. \nIf the ground state is given by $\\ket{E_{\\pm 1}}$, i.e., a product state, \nthen the von Neumann entropy $S$ of the electron (or nuclear) spin \nis zero. 
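These definitions are straightforward to check numerically; a minimal sketch (our own code, using the same ordered product basis as above) is:\n\\begin{verbatim}\nimport numpy as np\n\ndef entanglement_entropy(psi, dA=2, dB=2):\n    # von Neumann entropy S(rho_A) of subsystem A for a pure state psi of A x B.\n    psi = psi.reshape(dA, dB)\n    rho_A = psi @ psi.conj().T            # partial trace over B\n    evals = np.linalg.eigvalsh(rho_A)\n    evals = evals[evals > 1e-12]          # drop numerical zeros\n    return float(-np.sum(evals * np.log2(evals)))\n\nproduct = np.array([1.0, 0.0, 0.0, 0.0])                  # |E_{+1}> = |1\/2, 1\/2>\nsinglet = np.array([0.0, 1.0, -1.0, 0.0]) \/ np.sqrt(2.0)  # |E_0^-> at B = 0\nprint(entanglement_entropy(product))   # 0.0\nprint(entanglement_entropy(singlet))   # 1.0\n\\end{verbatim}\n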
On the other hand, for the quantum state $\\ket{E_0^{\\pm}}$ of \nthe electron and nuclear spins, the von Neumann entropy of the electron \n(or nuclear) spin can be written as\n\\begin{align}\nS(\\rho_A) = -\\frac{1+\\cos\\alpha}{2}\\,\\log_2\\frac{1+\\cos\\alpha}{2}\n -\\frac{1-\\cos\\alpha}{2}\\,\\log_2\\frac{1-\\cos\\alpha}{2}\\,.\n\\end{align}\nFig.~\\ref{Fig1} (c) shows the von Neumann entropy of the electron (or nuclear) \nspin for each eigenstate as a function of $B$. For the eigenstates\n$\\ket{E_{0}^{\\pm}}$, it is maximum at $B=0$, i.e., at the avoided crossing\npoint. This is analogous to the sharp change in Shannon entropy at avoided \ncrossing in Ref.~\\cite{Gonzalez03}.\n\nNow, let us look at how entanglement changes at level crossings as the\nparameters of the Hamiltonian vary. Fig.~\\ref{Fig2} (b) plots the von Neumann \nentropy $S$ of the electron (or nuclear ) spin for the ground state as \na function of $f$ and $B$. Across the level crossing line, \n$f=\\frac{2a'b'}{a'-b'}|B|$, the von Neumann entropy changes abruptly. \nFor $f>0$, $S$ becomes 1 as $B$ goes to 0. Along the line of $f=0$, \nthe von Neumann entropy $S$ vanishes even though there is no level crossing. \n\n\\subsection{Berry Phase}\n\\label{hydrogen_berry}\n\nAn instantaneous eigenstate encircling the energy level crossing points \nacquires the Berry phase in addition to the dynamical phase. The information\non the level crossings is encoded in the Berry phase. At $B=0$ and $f=1$, the \ntwo levels $E_{\\pm1}$ cross and the other two levels $E_{0}^{\\pm}$ are avoided \ncrossing. Also $E_{\\pm1}$ and $E_{0}^{-}$ cross at $f=\\frac{2a'b'}{a'-b'}|B|$. \nHere we focus on the Berry phase due to the level crossing or avoided crossing \nat $B=0$.\n\nDue to the fact that $a\\gg |b|$, an electron spin rotates much faster than a \nnuclear spin. We assume the magnetic field ${\\bf B}$ is rotated slowly enough\nfor both the electron and nuclear spins to evolve adiabatically. The magnetic\nfield ${\\bf B} = B\\,{\\bf\\hat{n}}$ in the direction of ${\\bf\\hat{n}} = \n(\\sin\\theta\\cos\\phi, \\sin\\theta\\sin\\phi,\\cos\\theta)$ is constructed starting \nfrom ${\\bf B} = B\\,{\\bf \\hat{z}}$. First, it is rotated about the $y$ axis by \nangle $\\theta$. And it is subsequently rotated about the $z$ axis by angle \n$\\phi$. By applying SU(2) rotations corresponding to the above SO(3) rotations \non the Hamiltonian~(\\ref{Hamil_BR}), one obtains the Breit-Rabi Hamiltonian in \nthe magnetic field ${\\bf B} = B\\, {\\bf\\hat{n}}$\n\\begin{align}\nH(\\theta,\\phi) \n= A\\,{\\bf I}{\\bm \\cdot}{\\bf S} \n+ a\\,{\\bf B}{\\bm \\cdot}{\\bf S} + b\\,{\\bf B}{\\bm \\cdot}{\\bf I}\\,.\n\\label{Hamil_Berry}\n\\end{align}\nThe hyperfine interaction $A\\,{\\bf I}{\\bm \\cdot}{\\bf S}$ is spherical\nsymmetric, so the eigenvalues and eigenvectors of the \nHamiltonian~(\\ref{Hamil_Berry}) are identical to those of the \nHamiltonian~(\\ref{Hamil_Hydrogen}) except replacing \n$\\ket{\\pm\\frac{1}{2}}$ by $\\ket{{\\bf \\hat{n}};\\pm\\frac{1}{2}}$.\nHere $\\ket{{\\bf \\hat{n}};\\pm\\frac{1}{2}}$ are eigenstates of \n${\\bf\\hat{n}}{\\bm \\cdot}{\\bf S}$ or ${\\bf\\hat{n}}{\\bm \\cdot}{\\bf I}$. \nIf the magnetic field ${\\bf B}$ is rotated slowly about the $z$ axis by $2\\pi$ \nto make a cone with a solid angle $\\Omega = 2\\pi(1-\\cos\\theta)$, then the\ninstantaneous eigenstate $\\ket{{\\bf \\hat{n}};\\pm\\frac{1}{2}}$ follows it and\naccumulates the Berry phase $\\beta_\\pm =\\mp\\frac{1}{2}\\Omega$. 
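For completeness, let us record the one-line computation behind this factor; the explicit spin-coherent state and its gauge below are a choice of convention on our part. Writing $\\ket{{\\bf \\hat{n}};+\\tfrac{1}{2}} = \\cos\\tfrac{\\theta}{2}\\,\\ket{+\\tfrac{1}{2}} + e^{i\\phi}\\sin\\tfrac{\\theta}{2}\\,\\ket{-\\tfrac{1}{2}}$, a loop at fixed $\\theta$ gives\n\\begin{align}\n\\beta_+ = i\\oint d\\phi\\, \\bra{{\\bf \\hat{n}};+\\tfrac{1}{2}}\\, \\partial_\\phi\\, \\ket{{\\bf \\hat{n}};+\\tfrac{1}{2}}\n = i\\int_0^{2\\pi}\\! d\\phi\\; i\\sin^2\\tfrac{\\theta}{2}\n = -\\pi(1-\\cos\\theta) = -\\tfrac{1}{2}\\Omega\\,,\n\\end{align}\nwhile the analogous computation for $\\ket{{\\bf \\hat{n}};-\\tfrac{1}{2}}$ gives $+\\tfrac{1}{2}\\Omega$, reproducing $\\beta_\\pm = \\mp\\tfrac{1}{2}\\Omega$. The same result holds for the nuclear spin, which also carries spin $\\tfrac{1}{2}$ in hydrogen. 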
The total Berry \nphase $\\beta$ of electron and nuclear spins is the sum of two phases acquired \nby each one. It depends on the magnetic quantum number $m$ \n\\begin{align}\n\\beta = \\left\\{ \\begin{array}{cl}\n \\mp\\Omega \\quad &\\text{for $m=\\pm 1$} \\,,\\\\\n 0 \\quad &\\text{for $m=0$} \\,.\n \\end{array} \\right.\n\\end{align}\nAs expected, the Berry phase is nonzero only for real crossings, i.e., \n$m=\\pm1$. Fig.~\\ref{Fig2} (c) plots the total Berry phase as a function of \n$B$ and $f$ and shows that the total Berry phase jumps at the level crossings.\nThe zero Berry phase of the eigenstates $\\ket{E_{0}^{\\pm}(\\theta,\\phi)}$ can \nbe understood in two ways. First, two levels $E_{0}^{\\pm}$ are avoided \ncrossing at $B =0$, so it is zero. Another view is as follows.\nSince $\\ket{E_{0}^{\\pm}}$ is a superposition of \n$\\ket{\\tfrac{1}{2},-\\tfrac{1}{2}}$ and $\\ket{-\\tfrac{1}{2},\\tfrac{1}{2}}$, \nthe Berry phase of the electron spin is opposite to that of the nuclear spin \nand they cancel each other. \n\t \nAlthough the entangled states $\\ket{E_{0}^{\\pm}}$ of electron and nuclear spins \naccumulates no Berry phase, each subsystem (electron spin or nuclear spin) can \nget nonzero marginal Berry phases of mixed states. Following the studies on\ngeometric phase of mixed states~\\cite{Sjoqvist00,Sjoqvist05} and the relation \nbetween entanglement and marginal Berry phases~\\cite{Yi04a,Yi04b,Sjoqvist05}, \nwe investigate the relation between avoided level crossings, marginal Berry \nphases, and entanglement. For an adiabatic cyclic evolution parameterized by\n$\\bf x$, an instantaneous eigenstate of a bipartite system $AB$ can be \nexpressed in a Schmidt decomposition $\\ket{\\psi({\\bf x})} = \\sum_{i=1}^{M}\n\\sqrt{p_i} \\ket{e_i({\\bf x})} \\otimes\\ket{f_i({\\bf x})}$, where \n$\\{\\ket{e_i({\\bf x})}\\}_{i=1}^{N_A}$ is an orthonormal basis for a subsystem\n$A$, $\\{\\ket{f_i({\\bf x})}\\}_{i=1}^{N_B}$ for a subsystem $B$, \n$M\\le\\min\\{N_A,N_B\\}$, and $\\sum_{i=1}^M p_i =1$. Here our attention is\nrestricted to the case that the Schmidt coefficients $\\sqrt{p_i}$ are\nindependent of ${\\bf x}$. After an adiabatic cyclic evolution implemented by\n${\\bf x}(0) = {\\bf x}(T)$, the total Berry phase of the bipartite system $AB$ \nis given by \n\\begin{align}\n\\beta=\\sum_{i=1}^{M} p_i \\left(\\beta^{A}_{i} + \\beta^{B}_{i}\\right)\\,,\n\\label{berry_sum}\n\\end{align}\nwhere $\\beta^{A}_{i} = i\\oint_C d{\\bf x}{\\cdot} \\bra{e_i({\\bf x})} \n \\nabla_{\\bf x} \\ket{e_i({\\bf x})}$. Then the marginal \nmixed state Berry phase $\\Gamma_A$ of a subsystem $A$ is defined by \n\\begin{align}\n\\Gamma_A =\\arg\\sum_{i}p_i \\exp\\left(i\\beta^A_i\\right)\\,.\n\\label{berry_mixed}\n\\end{align}\n\nWith Eqs.~(\\ref{berry_sum}) and~(\\ref{berry_mixed}), let us analyze how the \ntotal Berry phase and the marginal Berry phase of $\\ket{E_0^{-}}$ depend on\n$B$. The two Schmidt coefficients are given by $p_1 = \\sin^2\\tfrac{\\alpha}{2}$ \nand $p_2=\\cos^2\\tfrac{\\alpha}{2}$. It is easy to obtain the marginal \nBerry phase of the electron spin \n$\\Gamma_e = \\arctan\\left(\\cos\\alpha\\tan\\frac{\\Omega}{2}\\right)$\nand the average Berry phase of the electron spin \n$\\beta_e \\equiv p_1\\beta_1^e + p_2 \\beta_2^e =\\frac{\\Omega}{2}\\cos\\alpha$. \nIn the limit of $B\\gg1$, i.e., $\\alpha\\to 0$, one has $\\ket{E_0^{-}} \\to\n\\ket{-\\tfrac{1}{2},\\tfrac{1}{2}}$ and $\\beta_e={\\Omega}\/{2}$. 
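The closed forms just quoted follow directly from Eqs.~(\\ref{berry_sum}) and (\\ref{berry_mixed}); we spell out the short computation for clarity. In the Schmidt decomposition of $\\ket{E_0^{-}}$, the weight $p_1 = \\sin^2\\tfrac{\\alpha}{2}$ accompanies the electron state $\\ket{{\\bf \\hat{n}};+\\tfrac{1}{2}}$ and $p_2 = \\cos^2\\tfrac{\\alpha}{2}$ accompanies $\\ket{{\\bf \\hat{n}};-\\tfrac{1}{2}}$, so $\\beta^{e}_{1} = -\\Omega\/2$ and $\\beta^{e}_{2} = +\\Omega\/2$. Hence\n\\begin{align}\n\\sum_{i} p_i\\, e^{i\\beta^{e}_{i}}\n = \\sin^2\\tfrac{\\alpha}{2}\\, e^{-i\\Omega\/2}\n + \\cos^2\\tfrac{\\alpha}{2}\\, e^{+i\\Omega\/2}\n = \\cos\\tfrac{\\Omega}{2} + i\\,\\cos\\alpha\\,\\sin\\tfrac{\\Omega}{2}\\,,\n\\end{align}\nwhose argument is $\\arctan\\left(\\cos\\alpha\\tan\\tfrac{\\Omega}{2}\\right)$, with the branch of the arctangent fixed by the sign of $\\cos\\tfrac{\\Omega}{2}$; likewise $p_1\\beta^{e}_{1} + p_2\\beta^{e}_{2} = \\tfrac{\\Omega}{2}\\left(\\cos^2\\tfrac{\\alpha}{2} - \\sin^2\\tfrac{\\alpha}{2}\\right) = \\tfrac{\\Omega}{2}\\cos\\alpha$. The corresponding nuclear-spin phases are opposite in sign, which is another way of seeing that the total Berry phase~(\\ref{berry_sum}) vanishes for the $m=0$ eigenstates. 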
In the same limit $B \\gg 1$, the marginal\nBerry phase of the electron spin likewise reduces to $\\Gamma_e = {\\Omega}\/{2}$ for \n$0\\le \\theta <\\frac{\\pi}{2}$. Fig.~\\ref{Fig3} plots $\\Gamma_e$\nas a function of $B$ and the azimuthal angle $\\theta$. The marginal Berry \nphase of the electron spin jumps at $\\theta=\\pi\/2$ and $B=0$.\nThe node at $B=0$ corresponds to the avoided crossing. \n\n\\section{The Sodium Atom in a Uniform Magnetic Field}\n\\label{section_sodium}\n\n\\subsection{Energy Spectrum}\nNow we consider a $^{23}$Na atom in its $3S_{1\/2}$ ground state in the \npresence of a uniform magnetic field $B$ along the $z$ axis. The nuclear \nand electron spins of a $^{23}$Na atom are $I=3\/2$ and $S=1\/2$, respectively.\nAs in Sec.~\\ref{section_hydrogen}, it is convenient to arrange \nthe product basis $\\{\\ket{m_S,m_I}\\}$ in decreasing order of \nthe magnetic quantum number $m$ of $J_z$ as follows:\n$\\left\\{\\ket{\\frac{1}{2},\\frac{3}{2}} \\right\\}$,\n$\\left\\{\\ket{\\frac{1}{2},\\frac{1}{2}}, \\ket{-\\frac{1}{2},\\frac{3}{2}}\\right\\}$,\n$\\left\\{\\ket{\\frac{1}{2},-\\frac{1}{2}},\\ket{-\\frac{1}{2},\\frac{1}{2}}\\right\\}$, \n$\\left\\{\\ket{\\frac{1}{2},-\\frac{3}{2}},\\ket{-\\frac{1}{2},-\\frac{1}{2}}\\right\\}$,\nand $\\left\\{\\ket{-\\frac{1}{2},-\\frac{3}{2}} \\right\\}$. For example, \n$\\left\\{ \\ket{\\frac{1}{2},\\frac{1}{2}}, \\ket{-\\frac{1}{2},\\frac{3}{2}}\\right\\}$ \nspans the subspace of $m=1$. In this ordered basis set, the \nHamiltonian~(\\ref{Hamil_BR}) for the sodium atom can be represented by \na block-diagonal matrix\n\\begin{align}\nH = \n\\begin{pmatrix}\nf(\\frac{1}{2},\\frac{3}{2}) & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\\\n0 & f(\\frac{1}{2},\\frac{1}{2}) & \\frac{\\sqrt{3}}{2}A & 0 & 0 & 0 & 0 & 0 \\\\\n0 &\\frac{\\sqrt{3}}{2}A & f(\\frac{-1}{2},\\frac{3}{2}) & 0 & 0 & 0 & 0 & 0 \\\\\n0 & 0 & 0 & f(\\frac{1}{2},\\frac{-1}{2}) & \\frac{A}{2} & 0 & 0 & 0 \\\\\n0 & 0 & 0 & \\frac{A}{2} & f(\\frac{-1}{2},\\frac{1}{2}) & 0 & 0 & 0 \\\\\n0 & 0 & 0 & 0 & 0 & f(\\frac{1}{2},\\frac{-3}{2}) & \\frac{\\sqrt{3}}{2}A & 0 \\\\\n0 & 0 & 0 & 0 & 0 & \\frac{\\sqrt{3}}{2}A & f(\\frac{-1}{2},\\frac{-1}{2}) & 0 \\\\\n0 & 0 & 0 & 0 & 0 & 0 & 0 & f(\\frac{-1}{2},\\frac{-3}{2}) \n\\end{pmatrix},\n\\label{Hamil_Sodium}\n\\end{align}\nwhere $f(m_S,m_I) \\equiv A\\,m_S\\,m_I + m_S\\,aB + m_I\\,bB$.\nEach block is at most a $2\\times 2$ matrix and can be easily diagonalized.\nFirst, consider the subspace of $m = \\pm 2$. The corresponding eigenvalues \nand eigenvectors can be written as\n\\begin{subequations}\n\\label{Na_m2}\n\\begin{align}\nE_{\\pm 2} &= \\frac{3}{4}A \\pm \\frac{1}{2}(a+3b)B\\,,\\\\\n\\ket{E_{\\pm 2}} &= \\ket{\\pm\\tfrac{1}{2},\\pm\\tfrac{3}{2}} \\,. 
\n\\label{Na_ev2}\n\\end{align}\n\\end{subequations}\nNotice that Eqs.~(\\ref{Na_m2}) are comparable to Eqs.~(\\ref{H_eigen1}).\nSecond, in the subspace with $m= 1$ spanned by \n$\\left\\{\\ket{\\frac{1}{2},\\frac{1}{2}}, \\ket{-\\frac{1}{2},\\frac{3}{2}}\\right\\}$, \none obtains the eigenvalues and eigenvectors,\n\\begin{subequations}\n\\label{Eq_plus}\n\\begin{align}\nE_{+1}^{\\pm} \n&= -\\frac{A}{4} + bB \\pm \\frac{1}{2}\\sqrt{\\bigl(A + (a-b)B\\bigr)^2 + 3A^2}\\,,\\\\\n\\ket{E_{+1}^+} \n&= \\phantom{+} \\cos\\frac{\\alpha_1}{2}\\, \\ket{ \\tfrac{1}{2},\\tfrac{1}{2}} \n + \\sin\\frac{\\alpha_1}{2}\\, \\ket{-\\tfrac{1}{2},\\tfrac{3}{2}}\\,,\n\\label{Na_p1p}\\\\\n\\ket{E_{+1}^-}&= -\\sin\\frac{\\alpha_1}{2}\\, \\ket{ \\tfrac{1}{2},\\tfrac{1}{2}} \n + \\cos\\frac{\\alpha_1}{2}\\, \\ket{-\\tfrac{1}{2},\\tfrac{3}{2}}\\,,\n\\label{Na_p1m}\n\\end{align}\n\\end{subequations}\nwhere $\\tan\\alpha_1\\equiv \\frac{\\sqrt{3} A}{A + (a-b)B}$. \nThird, the Hamiltonian of $m=-1$ is defined on the subspace \n spanned by $\\left\\{ \\ket{\\frac{1}{2},-\\frac{3}{2}}, \n\\ket{-\\frac{1}{2},-\\frac{1}{2}} \\right\\}$. \nIts eigenvalues and eigenstates are given by\n\\begin{subequations}\n\\label{Eq_minus}\n\\begin{align}\nE_{-1}^{\\pm} &= -\\frac{A}{4} -bB \n \\pm \\frac{1}{2}\\sqrt{\\bigl(A - (a-b)B\\bigr)^2 + 3A^2}\\,,\\\\\n\\ket{E_{-1}^+} \n&= \\phantom{+} \\cos\\frac{\\alpha_2}{2}\\, \\ket{-\\tfrac{1}{2},-\\tfrac{1}{2}} \n + \\sin\\frac{\\alpha_2}{2}\\, \\ket{ \\tfrac{1}{2},-\\tfrac{3}{2}}\\,,\n\\label{Na_m1p} \\\\\n\\ket{E_{-1}^-} \n&= - \\sin\\frac{\\alpha_2}{2}\\, \\ket{-\\tfrac{1}{2},-\\tfrac{1}{2}} \n + \\cos\\frac{\\alpha_2}{2}\\, \\ket{ \\tfrac{1}{2},-\\tfrac{3}{2}} \\,,\n\\label{Na_m1m}\n\\end{align}\n\\end{subequations}\nwhere $\\tan\\alpha_2\\equiv \\frac{\\sqrt{3}A}{A -(a-b)B}$. Note that\nEqs.~(\\ref{Eq_minus}) can be obtained from Eqs.~(\\ref{Eq_plus}) by replacing \n$B$ with $-B$. Finally, the subspace of $m=0$ is spanned by \n$\\left\\{\\ket{\\frac{1}{2},-\\frac{1}{2}},\\ket{-\\frac{1}{2},\\frac{1}{2}}\\right\\}$. \nThe corresponding eigenvalues and eigenvectors are given by\n\\begin{subequations}\n\\label{Na_ev0}\n\\begin{align}\nE_{0}^{\\pm} &= -\\frac{A}{4} \\pm \\frac{1}{2}\\sqrt{(a-b)^2B^2 + 4A^2} \\,,\\\\\n\\ket{E_0^+} &= \n\\phantom{+} \\cos\\frac{\\alpha_0}{2}\\, \\ket{ \\tfrac{1}{2},-\\tfrac{1}{2}} \n + \\sin\\frac{\\alpha_0}{2}\\, \\ket{-\\tfrac{1}{2}, \\tfrac{1}{2}} \\,, \n\\label{Na_0p} \\\\\n\\ket{E_0^-} &= \n - \\sin\\frac{\\alpha_0}{2}\\, \\ket{ \\tfrac{1}{2},-\\tfrac{1}{2}} \n\t+ \\cos\\frac{\\alpha_0}{2}\\, \\ket{-\\tfrac{1}{2}, \\tfrac{1}{2}}\\,,\n\\label{Na_0m}\n\\end{align}\n\\end{subequations}\nwhere $\\tan\\alpha_0\\equiv \\frac{A}{(a-b)B}$. As expected, Eqs.~(\\ref{Na_ev0})\nare very similar to Eqs.~(\\ref{H_eigen0}) in the case of a hydrogen atom.\n\n\n\\subsection{Entanglement}\n\nLet us examine the relation between entanglement and level crossings \nor avoided crossings for a sodium atom. \nWith the values of the parameters $A$, $a$, and $b$ of the $^{23}$Na \natom in Ref.~\\cite{Arimondo77}, the energy levels $E_{m}^{\\pm}\/A$ are plotted\nin Fig.~\\ref{Fig4} (a). The von Neumann entropies of the electron \n(or nuclear) spin for each eigenstate are shown in Fig.~\\ref{Fig4} (b).\nThe ground state is given by $\\ket{E_{+1}^{-}}$ for $B>0$ and \n$\\ket{E_{-1}^{-}}$ for $B<0$. Two levels, $E_{+1}^{+}$ and \n$E_{+1}^{-}$, are avoided crossing and maximally entangled at \n$A -(a-b)B= \\sqrt{3}A$. 
Another two levels, $E_{-1}^{+}$ and $E_{-1}^{-}$, \nare avoided crossing and maximally entangled at $A + (a-b)B= \\sqrt{3}A$.\nTwo levels with $m=0$, $E_{0}^{\\pm}$, are avoided crossing and maximally\nentangled at $B=0$. Two levels $E_{\\pm2}$ show a real crossing at $B=0$ and have\nzero von Neumann entropies. Again one can see that the eigenstate is maximally\nentangled at the avoided crossing point. This is analogous to the results \nin Ref.~\\cite{Gonzalez03}, where Shannon entropy is used as an indicator of\navoided crossings.\n\n\n\\subsection{Berry phase}\n\nAs in Sec.~\\ref{hydrogen_berry}, let us consider an adiabatic cyclic evolution\nof nuclear and electron spins of a sodium atom by rotating the magnetic \nfield $B\\,{\\bf\\hat{n}}$ slowly. For an adiabatic rotation keeping the azimuthal \nangle $\\theta$ constant and varying the polar angle $\\phi$ from $0$ to $2\\pi$,\nthe instantaneous eigenstates accumulate total Berry phases proportional \nto the magnetic quantum number $m$, $\\beta = - m \\Omega$. In contrast to a\nhydrogen atom, the ground state is given either by $\\ket{E_{+1}^{-}}$ or by\n$\\ket{E_{-1}^{-}}$ with $m=\\pm 1$, so it acquires the total Berry phase $\\beta\n=\\mp\\Omega$.\n\nLet us analyze how the marginal Berry phase of the entangled state is related \nto the avoided crossings. We focus on the eigenstate $\\ket{E_{+1}^{-}}$. It has \ntwo Schmidt coefficients, \n$p_1 = \\sin^2\\tfrac{\\alpha_1}{2}$ and $p_2=\\cos^2\\tfrac{\\alpha_1}{2}$.\nWith Eq.~(\\ref{berry_sum}), one obtains the total phase as a sum of the Berry \nphases acquired by nuclear and electron spins with weights of the Schmidt \ncoefficients,\n\\begin{align}\n\\beta = \\sin^2\\tfrac{\\alpha_1}{2}\n \\left( -\\tfrac{\\Omega}{2} - \\tfrac{\\Omega}{2}\\right)\n + \\cos^2\\tfrac{\\alpha_1}{2}\n \\left(+\\tfrac{\\Omega}{2} - \\tfrac{3\\Omega}{2}\\right)\n = -\\Omega\\,.\n\\end{align}\nFrom Eq.~(\\ref{berry_mixed}), one obtains the marginal Berry phases of an\nelectron spin $\\Gamma_e$ and of a nuclear spin $\\Gamma_n$,\n\\begin{subequations}\n\\begin{align}\n\\Gamma_n &= \\arg\\bigl[\\,\n \\sin^2\\tfrac{\\alpha_1}{2}\\,e^{-i\\Omega\/2} \n\t + \\cos^2\\tfrac{\\alpha_1}{2}\\,e^{-i3\\Omega\/2}\\, \\bigr] \\,,\\\\\n\\Gamma_e &=\\arctan\\left[\\,\\cos\\alpha_1\\tan\\tfrac{\\Omega}{2}\\,\\right]\\,.\n\\end{align}\n\\end{subequations}\nFig.~\\ref{Fig5} plots the marginal Berry phase of a nuclear spin $\\Gamma_n$ \nas a function of $B$ and the azimuthal angle $\\theta$. In the limit of \n$B\\gg 1$, i.e., $\\alpha_1\\to 0$, one has $\\ket{E_{+1}^{-}} \\to\n\\ket{-\\tfrac{1}{2}, \\tfrac{3}{2}}$, $\\Gamma_e=\\Omega\/2$, and\n$\\Gamma_n=-3\\Omega\/2$. It is clearly seen that \nthe node of the marginal Berry phase of a nuclear (or electron) spin \ncorresponds to the avoided crossing at $A + (a-b)B = \\sqrt{3}A$. Thus it could \nbe expected that the marginal Berry phase of a subsystem for an entangled \nstate has a node at avoided crossings.\n\n\n\\section{Conclusions}\n\\label{section_conclusion}\n\nWe have considered the Breit-Rabi Hamiltonians for hydrogen and sodium atoms, \ndescribing the hyperfine interaction between a nuclear spin and an electron \nspin in the presence of a magnetic field. We have examined the relation between\nlevel crossings, entanglement, and Berry phases. It is shown that entanglement \nbetween nuclear and electron spins is maximal at avoided crossing points. 
\nThe Berry phase and the von Neumann entropy change abruptly at level\ncrossings as the parameters of the Breit-Rabi Hamiltonian for a hydrogen atom \nvary. An entangled state encircling the avoided crossing acquires \nthe marginal Berry phase of an electron (or nuclear) spin like an eigenstate \nmoving around the real crossing accumulates a Berry phase.\nWe have shown that the nodal points of the marginal Berry phase of \nan entangled state corresponds to the avoided crossing points.\n\n\n\\begin{acknowledgments}\nThis work is in part supported by the visitor program of Max Planck Institute \nfor the Physics of Complex Systems. We would like also to thank the Binational \nIsrael-US Foundation (BSF) for financial support. \n\\end{acknowledgments}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:introduction}\n\n\\emph{Science} is commonly described as the ``discovery of natural laws through experimentation and observation''.\nResearchers in the natural sciences increasingly turn to machine learning (ML) to aid the discovery of natural laws from observational data alone, which is often abundantly available, hoping to bypass expensive and cumbersome targeted experimentation.\nWhile there may be fundamental limitations to what can be extracted from observations alone,\nrecent successes of ML in the entire range of natural sciences provide ample reason for excitement.\nIn this work, we focus on ordinary differential equations, a ubiquitous description of dynamical natural laws in physics, chemistry, and systems biology.\nFor a first order ODE $\\dot{y} := \\nicefrac{\\partial y}{\\partial t} = f(y, t)$, we call~$f$ (which uniquely defines the ODE) the underlying dynamical law.\nInformally, our goal is then to infer~$f$ in symbolic form given discrete time-series observations of a single solution $\\{y_i := y(t_i)\\}_{i=1}^n$ of the underlying ODE.\n\nContrary to ``black-box-techniques'' such as Neural Ordinary Differential Equations (NODE)~\\citep{chen2018neural} that aim at inferring a possible~$f$ as an arguably opaque neural network, we focus specifically on symbolic regression.\nFrom the perspective of the sciences, a law of nature is useful insofar as it is more broadly applicable than to merely describe a single observation.\nIn particular, the reason to learn a dynamical law in the first place is to dissect and understand it as well as to make predictions about situations that differ from the observed one.\nFrom this perspective, a symbolic representation of the law (in our case the function~$f$) has several advantages over block-box representations: they are compact and directly interpretable, they are amenable to analytic analysis, they allow for meaningful changes and thus enable assessment of interventions and counterfactuals.\n\nIn this work we present NSODE, a sequence-to-sequence transformer that maps observed trajectories, i.e., numeric sequences of the form $\\{(t_i, y_i)\\}_{i=1}^n$, directly to symbolic equations as strings, e.g., \\texttt{\"y**2+1.64*cos(y)\"}, which is the prediction for $f$.\nThis example directly highlights the benefit of symbolic representations in that the $y^2$ and $\\cos(y)$ terms tell us something about the fundamental dynamics of the observed system; the constant \\texttt{1.64} will have semantic meaning in a given context and we can, for example, make predictions about settings in which this constant takes a different value.\n\n\\begin{figure*}\n \\centering\n \\vspace{-7mm}\n 
\\includegraphics[width=1.0\\textwidth]{figs\/pipeline_new.pdf}\n \\caption{An overview illustration of the data generation (top) and training pipeline (bottom). Our dataset stores solutions in numerical (non-binarized) form on the entire regular solution time grid.}\n \\label{fig:overview}\n \\vspace{-2mm}\n\\end{figure*}\n\n\\section{Background and Related Work}\n\\label{sec:related_work}\nWhile NODE \\citep{chen2018neural} (with a large body of follow up work) is perhaps the most prominent method to learn ODEs from data in black-box form, we focus on various works that infer governing laws in symbolic form.\nClassically, symbolic regression aims at regular functional relationships (mapping $(x, f(x))$ pairs to~$f$ instead of mapping trajectories $(t, y(t))$ to the governing ODE $\\dot{y}=f(y, t)$) and has been approached by heuristics-based search, most prominently via genetic programming \\citep{koza}. Genetic programming randomly evolves a population of prospective mathematical expressions over many iterations and mimics natural selection by keeping only the best contenders across iterations, where superiority is measured by user-defined and problem-specific fitness functions.\nMore recently, symbolic regression has been approached with machine learning methods which exploit gradient information to optimize within the space of (finite) compositions of pre-defined basis functions.\n\\citet{sindy} use linear regression to identify a (sparse) linear combination of basis functions that yields the best fit for the observed data, while other approaches use neural networks with a diverse set of activation functions \\citep{eql2, pdenet2, rodenet}.\nAll these techniques deploy strong sparsity-promoting regularizers and fit a separate model for each observed trajectory.\n\nAlternatively, one can train a model to directly output the symbolic expressions.\nSupervised learning with gradient-based optimization for this approach is challenged by the formulation of a differentiable loss that measures the fit between the predicted symbolic expression and the observed data.\nThus, prior work resorted to reinforcement learning \\citep{deepsymres} or evolutionary algorithms \\citep{atkinson2019data, costa2021fast} for gradient-free optimization.\nFurthermore, inspired by common properties of known natural laws, \\citet{aifeynman2} devise a hybrid approach that combines a gradient-free heuristic search with neural network-based optimization. This approach has been extended to work with dynamical systems by \\citet{weilbach2021inferring}.\n\nThe closest works to ours use pre-trained, attention-based sequence-to-sequence models for symbolic regression \\emph{of functional relationships} \\citep{nesymres,SymbolicGPT2021,kamienny2022end, vastl2022symformer}.\nThey exploit the fact that symbolic expressions for (multi-variate) scalar functions can be both generated and evaluated on random inputs cheaply, resulting in essentially unlimited training data.\nLarge data including ground-truth expressions in symbolic form allow for a differentiable cross-entropy loss based directly on the symbols of the expression, instead of the numerical proximity between evaluations of the predicted and true expression.\nWhile the cross-entropy loss works well for operators and symbols (e.g. \\texttt{+,exp,sin,x,y}), a naive implementation is inefficient for numerical constants, e.g., \\texttt{1.452}. 
Previous works therefore resort to one of two strategies: \n1) represent all constants with a special \\texttt{} token when training the sequence-to-sequence model and predict only the presence of a constant. Actual values are then inferred in a second, subsequent parameter estimation step where the structure of an expression is held fixed and only constants are optimized.\nThis second optimization procedure comes with substantial computational cost as constants have to be fit per inferred expression. In particular, we highlight that it does not transfer to inferring ODEs as it would require to first solve the predicted ODE $\\dot{y} = \\hat{f}(y)$ to obtain predicted $\\{\\hat{y}_i\\}_{i=1}^n$ values that can be compared to the set of observations $\\{y_i\\}_{i=1}^n$. While differentiable ODE solvers exist, optimizing constants this way is prohibitively expensive and typically highly unstable. \n2) A second popular strategy consists in rounding constants within the range of interest so that they can be represented with a finite number of tokens. This second strategy avoids a subsequent optimization step and enjoys clever encoding schemes with improved token efficiency yet represents values with an inherent loss in precision.\nAs an alternative, we develop a representation based on a ``two-hot'' encoding which avoids subsequent optimization steps as well as rounding.\n\n\\section{Method}\n\\label{sec:method}\n\n\\xhdr{Problem setting}\nGiven observations $\\{(t_i, y_i)\\}_{i=1}^n$ of a trajectory $y: [t_1, t_n] \\to \\ensuremath \\mathbb{R}$ that is a solution of the ODE $\\dot{y} = f(y)$, we aim to recover the function~$f$ in symbolic form---in our case as a string.\nIn this work, we focus on time-invariant (or autonomous) ODEs (i.e.,~$f(y, t) = f(y)$).\nSuch settings are a good starting point for investigation as they are commonly studied and can be thought of as ``evolving on their own'' without external driving forces or controls, i.e., once an initial condition is fixed the absolute time does not directly influence the evolution.\nWe explicitly assume that the observed system actually evolves according to an ODE in canonical form~$\\dot{y} = f(y)$ such that~$f$ can be expressed in closed form using the mathematical operators seen during training (see \\cref{sec:data}).\nIn this paper we restrict ourselves to the rich class of non-linear, scalar, first-order, autonomous ODEs but we discuss extensions of NSODE to higher-order systems of coupled non-autonomous ODEs in \\cref{app:extensions}.\n\n\\subsection{Data Generation}\n\\label{sec:data}\n\n\\xhdr{Sampling symbolic expressions}\n\\label{xhdr:sampling}\nTo exploit large-scale supervised pretraining we generate a dataset of $\\sim$63M ODEs in symbolic form along with numerical solutions for randomly sampled initial values.\nSince we assume ODEs to be in canonical form $\\dot{y} = f(y)$, generating an ODE is equivalent to generating a symbolic expression $f(y)$.\nWe follow \\citet{lample2019deep}, who sample such an expression $f(y)$ as a unary-binary tree, where each internal node corresponds to an operator and each leaf node corresponds to a constant or variable.\nThe algorithm consists of two phases: (1) A unary-binary tree is sampled uniformly from the distribution of unary-binary trees with up to $k\\in \\ensuremath \\mathbb{N}$ internal nodes, which crucially does not overrepresent small trees corresponding to short expressions. 
Here the maximum number of internal nodes $\\ensuremath K$ is a hyperparameter of the algorithm.\n(2) The sampled tree is ``decorated'', that is, each binary internal node is assigned a binary operator, each unary internal node is assigned a unary operator, and each leaf is assigned a variable or constant. \nHence, we have to specify a distribution over the $\\ensuremath N_{\\mathrm{bin}}$ binary operators, one over the $\\ensuremath N_{\\mathrm{una}}$ unary operators, a probability $\\ensuremath p_{\\mathrm{sym}} \\in (0,1)$ to decide between symbols and constants, as well as a distribution $\\ensuremath p_{\\mathrm{c}}$ over constants.\nFor constants we distinguish explicitly between sampling an integer or a real value.\nTogether with $\\ensuremath K$, these choices uniquely determine a distribution over equations $f$ and are described in detail in \\cref{app:modeltraining}.\n\\Cref{fig:overview} depicts an overview of the data generation procedure.\n\nThe pre-order traversal of a sampled tree results in the symbolic expression for~$f$ in prefix notation.\nAfter conversion to infix notation, we simplify each expression using the computer algebra system SymPy \\citep{sympy}, and filter out constant equations~$f(y) = c$ as well as expressions that contain operators or symbols that were not part of the original distribution. \nWe call the structure modulo the value of the constants of such an expression a \\textbf{skeleton}.\nAny skeleton containing at least one binary operator or constant can be represented by different unary-binary trees.\nVice versa many of the generated trees will be simplified to the same skeleton. To ensure diversity and to mitigate potential dataset bias towards particular expressions, we discard duplicates on the skeleton level. To further cheaply increase the variability of ODEs we sample $\\ensuremath N_{\\mathrm{const}}$ unique sets of constants per skeleton.\nWhen sampling constants we take care not to modify the canonical expression by adhering to the rules listed in \\cref{app:constant_rules}.\nWe provide summary statistics on operator frequencies and expression complexities for the generated dataset in \\cref{app:datastats}. 
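To make the simplification and skeleton-deduplication step concrete, the following is a minimal sketch of how it can be realised with SymPy; the example inputs, the helper name, and the uniform treatment of integer and real constants are illustrative simplifications rather than the exact pipeline code.\n\\begin{verbatim}\nimport sympy as sp\n\ny, C = sp.symbols('y C')\n\ndef skeleton(expr):\n    # Replace every numeric literal by the placeholder C so that\n    # expressions differing only in their constants share one key.\n    # (Illustrative choice: the actual pipeline may treat integer\n    #  constants such as exponents differently.)\n    return expr.replace(lambda e: e.is_Number, lambda e: C)\n\ncandidates = ['y**2 + 1.64*cos(y)', 'y**2 + 0.30*cos(y)', 'sin(y) + 2.0']\nseen, unique = set(), []\nfor raw in candidates:\n    f = sp.simplify(sp.sympify(raw))\n    if f.is_constant():              # discard constant right-hand sides\n        continue\n    key = sp.srepr(skeleton(f))      # canonical string form of the skeleton\n    if key not in seen:\n        seen.add(key)\n        unique.append(f)\n\\end{verbatim}\nIn this toy example the first two candidates collapse to a single skeleton, while the third remains a distinct one.\n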
Here, \\textbf{complexity} refers to overall count of symbols (e.g., $y$, or constants) as well as operators in an expression, a simple yet common measure in the symbolic regression literature.\n\n\\xhdr{Computing numerical solutions}\nWe obtain numerical solutions for all ODEs via SciPy's interface \\citep{scipy} to the LSODA software package \\citep{lsoda} with both relative and absolute tolerances set to $10^{-9}$.\nWe solve each equation on a fixed time interval $t \\in [0, \\ensuremath T]$ and store solutions on a regular grid of $\\ensuremath N_{\\mathrm{grid}}$ points.\nFor each ODE, we sample up to $\\ensuremath N_{\\mathrm{iv}}$ initial values $y(0) = y_0$ uniformly from $\\ensuremath (y_0^{\\min}, y_0^{\\max})$.\\footnote{Due to a timeout per ODE, fewer solutions may remain in cases when the numerical solver fails repeatedly.}\nWhile LSODA attempts to select an appropriate solver, numerical solutions still cannot be trusted in all cases.\nTherefore, we check the validity of solutions via the following quality control check: we use 9th order central finite differences to approximate the temporal derivative of the solution trajectory (on the same temporal grid as the proposed solution), denoted by $\\dot{y}_{\\mathrm{fd}}$, and filter out any solution for which $\\|\\dot{y}_{\\mathrm{fd}} - \\dot{y} \\|_{\\infty} > \\epsilon$, where we use $\\epsilon = 1$.\n\n\\subsection{Model Design Choices}\n\\label{sec:model}\n\nNSODE consists of an encoder-decoder transformer with architecture choices listed in \\cref{app:modeldesign}.\nWe provide a visual overview in \\cref{fig:overview}.\n\n\\xhdr{Representing input trajectories}\nA key difficulty in feeding numerical solutions $\\{y_i\\}_{i=1}^n$ as input is that their range may differ greatly both within a single solution as well as across ODEs.\nFor example, the linear ODE $\\dot{y} = c \\cdot y$ for a constant~$c$ is solved by an exponential $y(t) = y_0 \\exp(c t)$ for initial value $y(0) = y_0$, which may span many orders of magnitude on a fixed time interval.\nTo prevent numerical errors and vanishing or exploding gradients caused by the large range of values, we assume each representable 64-bit float value is a token and use its IEEE-754 encoding as the token representation \\citep{nesymres}.\nWe thus convert all pairs $(t_i, y_i)$ to their IEEE-754 64 bit representations, channel them through a linear embedding layer before feeding them to the encoder.\n\n\\xhdr{Representing symbolic expressions}\nThe target sequence (i.e., the string for the symbolic expression of~$f$) is tokenized on the symbol-level.\nWe distinguish two cases: (1) \\emph{Operators and variables:} for each operator and variable we include a unique token in the vocabulary. \n(2) \\emph{Numerical constants:} constants may come from both discrete (integers) as well as continuous distributions, as for example in \\texttt{y**2+1.64*cos(y)}. 
Hence, it is unfeasible to include individual tokens ``for each constant''.\nNaively tokenizing on the digit level, i.e., representing real values literally as the sequence of characters (e.g., \\texttt{\"1.64\"}), not only significantly expands the length of target sequences and thus the computational cost, but also requires a variable number of prediction steps for every single constant.\n\nInstead, we take inspiration from \\citet{schrittwieser2020mastering} and encode constants in a \\emph{two-hot} fashion.\nWe fix a finite homogeneous grid on the real numbers $x_1 < x_2 < \\ldots < x_m$ for some~$m \\in \\ensuremath \\mathbb{N}$, which we add as tokens to the vocabulary.\nThe grid range $(x_1, x_m)$ and number of grid points $m$ are hyperparameters that can be set in accordance to the problems of interest. Our choices are described in \\cref{app:modeldesign}.\n\nFor any constant $c$ in the target sequence we then find $i \\in \\{1, \\ldots, m-1\\}$ such that $x_i \\le c \\le x_{i+1}$ and encode~$c$ as a distribution supported on $x_i, x_{i+1}$ with weights $\\alpha, \\beta$ such that $\\alpha x_i + \\beta x_{i+1} = c$.\nThat is, the target in the cross-entropy loss for a constant token is not a strict one-hot encoding, but a distribution supported on two (neighboring) vocabulary tokens resulting in a lossless encoding of continuous values in $[x_1, x_m]$.\nThis two-hot representation can be used directly in the cross-entropy loss function.\n\n\\xhdr{Decoding constants}\nWhen decoding a predicted sequence, we check at each prediction step whether the argmax of the logits corresponds to one of the~$m$ constant tokens $\\{x_1, \\ldots, x_m\\}$.\nIf not, we proceed by conventional one-hot decoding to obtain predicted operators and variables.\nIf instead the argmax corresponds to, for example, $x_i$, we also pick its largest-logit neighbor ($x_{i-1}$ or $x_{i+1}$; suppose $x_{i+1}$), renormalize their probabilities by applying a softmax to all logits and use the resulting two probability estimates as weights $\\alpha, \\beta$.\nConstants are then ultimately decoded as $\\alpha x_i + \\beta x_{i+1}$.\n\n\\subsection{Evaluation and Metrics}\n\\label{sec:metrics}\n\n\\xhdr{Sampling solutions}\nTo infer a symbolic expression for the governing ODE of a new observed solution trajectory $\\{(t_i, y_i)\\}_{i=1}^n$, all the typical policies such as greedy, sampling, or beam search are available.\nIn our evaluation, we use beam search with 1536 beams and report top-$k$ results with $k$ ranging from 1 to 1536.\n\n\\xhdr{Metrics}\nWe evaluate model performance both numerically and symbolically.\nFor numerical evaluation we follow \\citet{nesymres}: suppose the ground truth ODE is given by $\\dot{y} = f(y)$ with (numerical) solution $y(t)$ and the predicted ODE is given by $\\hat{\\dot{y}} = \\hat{f}(y)$.\nTo compute numerical accuracy we first evaluate $f$ and $\\hat{f}$ on $\\ensuremath N_{\\mathrm{eval}}$ points in the interval $[\\min(y(t)), \\max(y(t))]$ (i.e., the interval traced out by the observed solution), which yields function evaluations $\\texttt{gt}=\\{\\dot{y}_i\\}_{i=1}^{\\ensuremath N_{\\mathrm{eval}}}$ and $\\texttt{pred}=\\{\\hat{\\dot{y}}_i\\}_{i=1}^{\\ensuremath N_{\\mathrm{eval}}}$.\nWe then assess whether \\texttt{numpy.allclose}\\footnote{\\texttt{numpy.allclose} returns True if \\texttt{abs(a - b) <= (atol + rtol * abs(b))} holds element-wise for elements $a$ and $b$ from the two input arrays.\nWe use \\texttt{atol=1e-10} and \\texttt{rtol=0.05}; $a$ corresponds to predictions, 
$b$ corresponds to ground truth.} returns \\texttt{True} as well as whether the coefficient of determination $\\mathrm{R}^2 \\geq 0.999$.\\footnote{For observations $y_i$ and predictions $\\hat{y}_i$ we have $\\mathrm{R}^2 = 1 - (\\sum_i (y_i - \\hat{y}_i)^2) \/ (\\sum_i (y_i - \\overline{y})^2 )$.}\nNumerical evaluations capture how closely the predicted function approximates the ground truth function within the interval $[\\min(y(t)), \\max(y(t))]$.\n\nHowever, a key motivation for symbolic regression is to uncover a \\emph{symbolic} mathematical expression that governs the observations.\nTesting for symbolic equivalence between ground truth expression $f(y)$ and a predicted expression $\\hat{f}(y)$ is unsuitable in the presence of real-valued constants as even minor deviations between true and predicted constants render the equivalence false.\nInstead, we regard the predicted expression $\\hat{f}(y)$ to be symbolically correct if $f(y)$ and $\\hat{f}(y)$ can be made equivalent by modifying only the values of constants in the predicted expression $\\hat{f}(y)$.\nThis is implemented using SymPy's \\texttt{match} function.\nIn order not to alter the structure of the predicted expression, we constrain modifications of constants such that all constants remain non-zero and retain their original sign.\nThis definition is thus primarily concerned with the structure of an expression, rather than precise numerical agreement.\nOnce the structure is known, the inference problem becomes conventional parameter estimation.\nWe report percentages of samples in a given test set that satisfies any individual metric (numerical and symbolic), as well as percentages satisfying symbolic and numerical metrics simultaneously.\n\n\\section{Experiments}\n\\label{sec:experiments}\n\n\\subsection{Benchmark Datasets}\\label{sec:datasets}\nWe construct several test sets to evaluate model performance and generalization in different settings.\n\\begin{itemize}[leftmargin=*,topsep=0pt,itemsep=0pt]\n \\item \\textbf{testset-iv}: Our first test set assesses generalization within initial values not seen during training.\n It consists of 5793 ODEs picked uniformly at random from our generated dataset but re-sampled initial values. We also employ the following constraints via rejection sampling: (a) All skeletons in testset-iv are unique. (b) As the number of unique skeletons increases with the number of operators, we allow at most 2000 examples per number of operators (with substantially fewer unique skeletons existing for few operators).\n \n \\item \\textbf{testset-constants}: Our second test set assesses generalization within unseen initial values and constants.\n It consists of 2720 ODEs picked uniformly at random from our dataset (ensuring unique skeletons and at most 1000 examples per number of operators as above), but re-sampled intial values and constants.\n \n \\item \\textbf{testset-skeletons}: In principle, we can train NSODE on all possible expressions (using only the specified operators and number ranges) up to a specified number of operators.\n However, even with the millions examples in our dataset, we have by far not exhausted the huge space of possible skeletons (especially for larger numbers of operators).\n Hence, our third test set contains 100 novel random ODEs with skeletons that were never seen during training. 
\n \n \\item \\textbf{testset-iv-163}: This is a subset of testset-iv motivated by the fact that most symbolic regression models we want to compare to require a separate optimization for every individual example, which was computationally infeasible for our testset-iv.\n For a fair comparison, we therefore subsampled up to 10 ODEs per complexity uniformly at random, yielding 163 examples.\n \n \\item \\textbf{testset-textbook}: To assess how NSODE performs on ``real problems'', we manually curated 12 scalar, non-linear ODEs from Wikipedia pages, physics textbooks, and lecture note from university courses on ODEs.\n These equations are listed in \\cref{tab:textbook_equations} in \\cref{app:textbook_equations}.\n We note that they are all extremely simple compared to the expressions in our generated dataset in that they are ultimately mostly low order polynomials, some of which with one fractional exponent.\n \n \\item \\textbf{testset-classic}:\n To validate our approach on existing datasets we turn to benchmarks in the classic symbolic regression literature (inferring just the functional relationship between input-ouput pairs) and simply interpret functions as ODEs. In particular we include all scalar function listed in the overview in \\cite{mcdermott2012genetic} which includes equations from many different benchmarks \\cite{keijzer2003improving, koza, koza1994genetic, uy2011semantically, vladislavleva2008order}.\n For example, we interpret the function $f(y) = y^3 + y^2 + y$ from \\citet{uy2011semantically} as an autonomous ODE $\\dot{y}(t) = f(y(t)) = y(t)^3 + y(t)^2 + y$, which we solve numerically for a randomly sampled initial value as described before.\n\\end{itemize}\n\n\\subsection{Baselines}\\label{sec:baselines}\nWe compare our method to recent popular baselines from the literature (see \\cref{sec:related_work}).\nWe briefly describe them including some limitations here and defer all details to \\cref{app:baselines}.\nFirst, no baseline is suited directly to infer dynamical laws, but only to infer functional relationships.\nTherefore, all baselines fit a separate regression function mapping $y(t) \\mapsto \\hat{\\dot{y}}(t)$ per individual ODE, using the coefficient of determination $\\mathrm{R}^2$ as optimization objective.\nSince derivatives $\\hat{\\dot{y}}(t)$ are typically not observed, we approximate them via finite differences using PySindy \\citep{pysindy}.\nHence, all these methods crucially rely on regularly sampled and noise-free observations, whereas our approach can easily be extended to take those into account (see \\cref{app:extensions}).\n\n\\begin{itemize}[leftmargin=*,topsep=0pt,itemsep=0pt]\n \\item \\textbf{Sindy} \\citep{sindy}: Sindy builds a (sparse) linear combination of a fixed set of (non-linear) basis functions.\n The resulting Lasso regression is efficient, but suffers from limited expressiveness.\n In particular, Sindy cannot easly represent nested functions or non-integer powers as all non-linear expressions have to be added explicitly to the set of basis functions.\n We cross-validate Sindy over a fairly extensive hyperparameter grid of 800 different combinations for each individual trajectory.\n \n \\item \\textbf{GPL}\\footnote{\\ttfamily \\url{gplearn.readthedocs.io\/}} (genetic programming):\n GPL(earn) maintains a population of programs each representing a mathematical expression.\n The programs are mutated for several generations to heuristically optimize a user defined fitness function.\n While not originally developed for ODEs, we can apply 
GPLearn on our datasets by leveraging the finite difference approximation.\n We use a population size of 1000 and report the best performance across all final programs.\n Compared to sindy, GPLearn is more expressive yet substantially slower to fit.\n \n \\item \\textbf{AIFeynman} \\citep{aifeynman, aifeynman2}: AIFeynman is a physics-inspired approach to symbolic regression that exploits the insight that many famous equations in natural sciences exhibit well-understood functional properties such as symmetries, compositionality, or smoothness.\n AIFeynman implements a neural network based heuristic search that tests for such properties in order to identify a symbolic expression that fits the data.\n For every test sample AIFeynman computes a pareto front of solutions that trade off complexity versus accuracy.\n We report the best performance across all functions on the pareto front. \n Notably, AIFeynman performs quite an exhaustive search procedure such that running it even on a single equation took on the order of tens of minutes.\n\\end{itemize}\n\n\\subsection{Results}\\label{sec:results}\n\n\\begin{figure}\n\\centering\n\\begin{subfigure}{1\\textwidth}\n \\centering\n \\includegraphics[width=0.65\\linewidth]{figs\/iv_const_skel\/legend_iv_const_skel.pdf}\n\\end{subfigure}%\n\\\\ \\vspace{0.25cm}\n\\begin{subfigure}{.25\\textwidth}\n \\centering\n \\includegraphics[width=1\\linewidth]{figs\/iv_const_skel\/iv_const_skel_isclose.pdf}\n \\caption{allclose}\n\\end{subfigure}%\n\\begin{subfigure}{.25\\textwidth}\n \\centering\n \\includegraphics[width=1\\linewidth]{figs\/iv_const_skel\/iv_const_skel_r2.pdf}\n \\caption{R$^2 \\geq 0.999$}\n\\end{subfigure}%\n\\begin{subfigure}{.25\\textwidth}\n \\centering\n \\includegraphics[width=1\\linewidth]{figs\/iv_const_skel\/iv_const_skel_skelrec.pdf}\n \\caption{skeleton recovery}\n\\end{subfigure}%\n\\begin{subfigure}{.25\\textwidth}\n \\centering\n \\includegraphics[width=1\\linewidth]{figs\/iv_const_skel\/iv_const_skel_skelrec_isclose.pdf}\n \\caption{skel. recov. 
\\& allclose}\n\\end{subfigure}%\n\\caption{Numerical and symbolic performance evaluation on testset-iv.}\n\\label{fig:results}\n\\end{figure}\n\n\n\\xhdr{Model Performance}\n\\Cref{fig:results} shows NSODE's performance on our testset-iv, testset-constants, and testset-skeletons according to our numerical and symbolic metrics as well as combined skeleton recovery and allclose as we vary $k$ in the top-k considered candidates of the beam search.\nInvesting sufficient test-time-compute (i.e., considering many candidates) continuously improves performance.\nWhile we capped $k$ at 1536 due to memory limitations, we did not observe a stagnation of the roughly logarithmic scaling of all performance metrics with $k$.\nThis cannot be attributed to ``exhaustion effects'', where one may assume that all possible ODEs will eventually be among the candidates, because (a) the space of possible skeletons grows much faster than exponentially, and (b) the numerical metrics are extremely sensitive also to the predicted constant values in continuous domains.\n\nAs one may expect, performance decreases as we move from only new initial conditions, to also sampling new constants, to finally sampling entirely unseen skeletons.\nOn testset-iv with $k=1536$ we achieve about 50\\% skeleton recovery and still successfully recover more than a third skeletons of testset-skeletons with similar numbers for allclose.\nThe fact that the combined metric (symbolic + numerical) is only about half of that indicates that numerical and symbolic fit are indeed two separate measures, none of which need to imply the other.\nHence, a thorough evaluation of both is crucial to understand model performance in symbolic regression tasks.\n\n\\begin{table}[h!]\n\\centering\n\\caption{Comparing NSODE to the baselines. Results are average percentages across dataset. GPLearn often generates extremely long expressions which take SymPy up to half a minute to parse during evaluation. 
We denote this extra time in gray.}\\label{tab:resultsummary}\n\\begin{tabularx}{\\columnwidth}{l@{\\hspace{8pt}}l@{\\hspace{8pt}}Y@{\\hspace{6pt}}Y@{\\hspace{8pt}}Y@{\\hspace{0pt}}Y}\n\\toprule \\rowcolor{white}\nDataset & Metric & NSODE & Sindy & GPLearn & AIFeynman \\\\ \n\\midrule\n & skel-recov & \\bf 37.4 & 3.7 & 2.5 & 14.1 \\\\ \n & R$^2 \\geq 0.999$ & 24.5 & 31.9 & 3.7& \\bf 49.7 \\\\ \niv-163 & allclose & 42.3 & 25.8 & 14.7 & \\bf 55.8 \\\\ \n & skel-recov \\& R$^2 \\geq 0.999$ & \\bf 15.3 & 3.1 & 1.8 & 13.5 \\\\ \n & skel-recov \\& allclose & \\bf 15.3 & 3.1 & 1.8 & 13.5 \\\\ \n & runtime [s] & 5.4 & \\bf0.4 & 29 {\\color{gray}+22} & 1203.6 \\\\\n\\midrule\n & skel-recov & 41.7 & 33.3 & 8.3 & \\bf 91.7 \\\\ \n & R$^2 \\geq 0.999$ & 16.7 & 50 & 0.0 & \\bf 75 \\\\ \ntextbook & allclose & 25 & 58.3 & 8.3 & \\bf 75 \\\\ \n & skel-recov \\& R$^2 \\geq 0.999$ & 33.3 & 41.7 & 0 & \\bf 66.7 \\\\ \n & skel-recov \\& allclose & 8.3 & 33.3 & 1.8 & \\bf 66.7 \\\\\n & runtime [s] & 6 & \\bf1 & 23 {\\color{gray}+22} & 1267.1 \\\\\n\\bottomrule\n\\end{tabularx}%\n\\end{table}\n\n\\xhdr{Comparison to baselines}\nIn \\cref{tab:resultsummary} we compare NSODE to all baselines using $k=1536$ in our beam search; full results on all datasets can be found in \\cref{app:detailedresults}.\nWe also include the average wallclock runtime per expression for each of the datasets.\n\nFirst, we note that on our subsampled testset-iv-163, NSODE outperforms competing approaches in terms of skeleton recovery by a wide margin and also performs best in terms of joint skeleton recovery and numerical measures, which is a strong indication of actually having recovered the governing ODE accurately.\nBy spending over 200x more time on its exhaustive heuristic search, AIFeynman manages to outperform NSODE in terms of numerical accuracy ($\\mathrm{R}^2$ and allclose).\n\\Cref{fig:distribution} shows the number of skeletons recovered by each method given the complexity of equations, results for other datasets can be found in \\cref{app:detailedresults}. \\footnote{Due to simplification, complexity is not upper bounded by the number of nodes in a unary-binary tree.}\nWhile AIFeynman and Sindy recover some of the low complexity expressions, NSODE is the only method to also recover some of the more complex skeletons. \n\nOn testset-textbook, AIFeynman outperforms all other methods on numerical and symbolic metrics. This can be understood with regard to the dataset where $8\/12$ expressions are polynomials with the remaining 4\/12 expressions having a polynomial skeleton with fractional or negative exponents. 
These expressions are particularly favorable for the heuristics implemented by AIFeynman which explicitly attempt to fit a polynomial to the data.\nHowever, even on these simple examples AIFeynman takes over 200x longer than our method, which in turn clearly outperforms Sindy and GPLearn in terms of skeleton recovery.\n\n\\begin{figure}\n\\centering\n\\begin{subfigure}{1\\textwidth}\n \\centering\n \\includegraphics{figs\/complexity_vs_recovery\/legend2.pdf}\n\\end{subfigure}%\n\\\\\n\\begin{subfigure}{.25\\textwidth}\n \\centering\n \\includegraphics[width=.99\\linewidth]{figs\/complexity_vs_recovery\/complexity_vs_recovery_nsode_163.pdf}\n \\caption{NSODE}\n\\end{subfigure}%\n\\begin{subfigure}{.25\\textwidth}\n \\centering\n \\includegraphics[width=.99\\linewidth]{figs\/complexity_vs_recovery\/complexity_vs_recovery_aifeynman_163.pdf}\n \\caption{AIFeynman}\n\\end{subfigure}%\n\\begin{subfigure}{.25\\textwidth}\n \\centering\n \\includegraphics[width=.99\\linewidth]{figs\/complexity_vs_recovery\/complexity_vs_recovery_sindy_163.pdf}\n \\caption{Sindy}\n\\end{subfigure}%\n\\begin{subfigure}{.25\\textwidth}\n \\centering\n \\includegraphics[width=.99\\linewidth]{figs\/complexity_vs_recovery\/complexity_vs_recovery_gplearn_163.pdf}\n \\caption{GPLearn}\n\\end{subfigure}%\n\\caption{Correctly recovered skeletons by each method on testset-iv-163 per complexity. AIFeynman and Sindy are mostly able to recover some of the low complexity skeletons, while NSODE performs much better also on higher complexities. GPLearn fails to recover most skeletons.}\\label{fig:distribution}\n\\vspace{-2mm}\n\\end{figure}\n\n\\section{Conclusion}\n\\label{sec:conclusion}\n\nWe have developed a flexible and scalable method to infer ordinary differential equations $\\dot{y} = f(y)$ from a single observed solution trajectory.\nOur method follows the successful paradigm of large-scale pretraining of attention-based sequence-to-sequence models on essentially unlimited amounts of simulated data, where the inputs are the observed solution $\\{(t_i,y_i)\\}_{i=1}^n$ and the output is a symbolic expression for~$f$ as a string.\nOnce trained, our method is orders of magnitude more efficient than similarly expressive existing symbolic regression techniques that require a separate optimization for each instance and achieves strong performance in terms of skeleton recovery especially for complex expressions on various benchmarks.\n\n\\bibliographystyle{abbrvnat}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \\label{secIntro}\n\nThe affine Kac-Moody superalgebra $\\AKMSA{gl}{1}{1}$ is an attractive candidate for study. On the one hand, its highest weight theory is particularly easy to analyse. On the other, one is naturally led to study indecomposable modules of the type that arise in logarithmic conformal field theory{}. In \\cite{CR:GL11}, we reviewed and consolidated what was known about this superalgebra, drawing in particular upon the previous works \\cite{RozQua92,Rozansky:1992td,SalGL106,Creutzig:2007jy,CS09,Creutzig:2009zz,Creutzig:2010ne}.\n\nOne motivation for undertaking this work was to understand how one could reconcile the observation that conformal field theories{} with $\\AKMSA{gl}{1}{1}$ symmetry appeared to admit only continuous spectra, whereas one might expect that the Wess-Zumino-Witten{} model on the real form $\\SLSG{U}{1}{1}$ would have the same symmetry, but a discrete spectrum. 
Another was to understand whether $\\AKMSA{gl}{1}{1}$ could be related to other infinite-dimensional algebras, thus providing relationships between certain (logarithmic) conformal field theories{}. For the first question, we were able to show that certain discrete spectra seem to be consistent provided one \\emph{extends} the chiral algebra appropriately. For the second, we identified a certain $\\AKMA{u}{1}$-coset of $\\AKMSA{gl}{1}{1}$ as the chiral algebra of the well-known $\\beta \\gamma$ ghost system. Previous work \\cite{RidSL210} then links $\\AKMSA{gl}{1}{1}$ to the affine Kac-Moody algebra $\\AKMA{sl}{2}_{-1\/2}$ \\cite{RidSL208,RidFus10}, the triplet algebra $\\func{\\alg{W}}{1,2}$ of Gaberdiel and Kausch \\cite{GabRat96} and the symplectic fermions algebra \\cite{KauSym00} ($\\AKMSA{psl}{1}{1}$).\n\nThis article describes a certain family of \\emph{extended algebras} of $\\AKMSA{gl}{1}{1}$. In \\cite{CR:GL11}, we noted that the fusion rules give rise to an infinite family of simple currents labelled by $n \\in \\mathbb{R}$ and $\\ell \\in \\mathbb{Z}$. It follows that these algebra extensions may be computed algorithmically \\cite{RidSU206,RidMin07}. Here, we perform the computations up to a certain order, using a well-known free field realisation \\cite{Guruswamy:1999hi}. More precisely, we study the resulting W-algebras and show that, for certain infinite families of $n$ and $\\ell$, there is a bosonic subalgebra which we conjecture to be the $W^{\\brac{2}}_N$ algebra of Feigin and Semikhatov \\cite{Feigin:2004wb}.\n\n\\section{$\\SLSA{gl}{1}{1}$ and its representations} \\label{secFinAlg}\n\n\\subsection{Algebraic Structure}\n\nThe Lie superalgebra $\\SLSA{gl}{1}{1}$ consists of the endomorphisms of the super vector space $\\mathbb{C}^{1 \\mid 1}$ equipped with the standard graded commutator. It is convenient to choose the following basis,\n\\begin{equation} \\label{eqngl11DefRep}\nN = \\frac{1}{2} \n\\begin{pmatrix}\n1 & 0 \\\\\n0 & -1\n\\end{pmatrix}\n, \\qquad E = \n\\begin{pmatrix}\n1 & 0 \\\\\n0 & 1\n\\end{pmatrix}\n, \\qquad \\psi^+ = \n\\begin{pmatrix}\n0 & 1 \\\\\n0 & 0\n\\end{pmatrix}\n, \\qquad \\psi^- = \n\\begin{pmatrix}\n0 & 0 \\\\\n1 & 0\n\\end{pmatrix}\n,\n\\end{equation}\nin which $N$ and $E$ are parity-preserving (bosonic) whereas $\\psi^+$ and $\\psi^-$ are parity-reversing (fermionic). The non-vanishing brackets are then\n\\begin{equation} \\label{eqngl11Rels}\n\\comm{N}{\\psi^{\\pm}} = \\pm \\psi^{\\pm}, \\qquad \\acomm{\\psi^+}{\\psi^-} = E.\n\\end{equation}\nWe note that $E$ is central, so this superalgebra is not simple. In fact, $\\SLSA{gl}{1}{1}$ does not decompose as a direct sum of ideals. Equivalently, the adjoint representation of $\\SLSA{gl}{1}{1}$ is reducible, but indecomposable.\n\nThe standard non-degenerate bilinear form $\\killing{\\cdot}{\\cdot}$ on $\\SLSA{gl}{1}{1}$ is given by the supertrace of the product in the defining representation \\eqref{eqngl11DefRep}. With respect to the basis elements \\eqref{eqngl11DefRep}, this form is\n\\begin{equation}\n\\killing{N}{E} = \\killing{E}{N} = 1, \\qquad \\killing{\\psi^+}{\\psi^-} = -\\killing{\\psi^-}{\\psi^+} = 1,\n\\end{equation}\nwith all other combinations vanishing. From this, we compute the quadratic Casimir $Q \\in \\uealg{\\SLSA{gl}{1}{1}}$ (up to an arbitrary polynomial in the central element $E$). 
We find it convenient to take\n\\begin{equation} \\label{eqnDefCasimir}\nQ = NE + \\psi^- \\psi^+.\n\\end{equation}\n\n\\subsection{Representation Theory} \\label{secFinRep}\n\nThe obvious triangular decomposition of $\\SLSA{gl}{1}{1}$ regards $\\psi^+$ as a raising (annihilation) operator, $\\psi^-$ as a lowering (creation) operator, and $N$ and $E$ as Cartan elements. A highest weight state{} of a $\\SLSA{gl}{1}{1}$-representation is then defined to be an eigenstate of $N$ and $E$ which is annihilated by $\\psi^+$. Such states generate Verma modules in the usual way and as $\\psi^-$ squares to zero in any representation, every Verma module has dimension $2$. If $\\brac{n,e}$ denotes the weight (the $N$- and $E$-eigenvalues) of a highest weight state{} generating a Verma module, then its unique descendant will have weight $\\brac{n-1,e}$. We will denote this Verma module by $\\VerMod{n-1\/2,e}$, remarking that the convention of characterising a highest weight module{} by the \\emph{average} $N$-eigenvalue of its states, rather than that of the highest weight state{} itself, turns out to symmetrise many of the formulae to follow.\n\nSuppose now that $\\ket{v}$ is a (generating) highest weight state{} of $\\VerMod{n,e}$. It satisfies\n\\begin{equation}\n\\psi^+ \\psi^- \\ket{v} = \\acomm{\\psi^+}{\\psi^-} \\ket{v} = E \\ket{v} = e \\ket{v},\n\\end{equation}\nso the descendant $\\psi^- \\ket{v} \\neq 0$ is a singular vector if and only if $e=0$. Verma modules are therefore irreducible for $e \\neq 0$, and have irreducible quotients of dimension $1$ when $e = 0$. Modules with $e \\neq 0$ are called \\emph{typical} while those with $e = 0$ are \\emph{atypical}. We will denote a typical irreducible by $\\TypMod{n,e} \\cong \\VerMod{n,e}$ and an atypical irreducible by $\\AtypMod{n}$. Our convention of labelling modules by their average $N$-eigenvalue leads us to define the latter to be the irreducible quotient of $\\VerMod{n-1\/2,0}$. This is summarised in the short exact sequence\n\\begin{equation} \\label{ESFinV}\n\\dses{\\AtypMod{n-1\/2}}{\\VerMod{n,0}}{\\AtypMod{n+1\/2}}\n\\end{equation}\nand structure diagram\n\\begin{equation}\n\\parbox[c]{0.4\\textwidth}{\n\\begin{tikzpicture}[auto,thick,\n\tnom\/.style={circle,draw=black!20,fill=black!20,inner sep=2pt}\n\t]\n\\node (q1) at (0,0) {$\\AtypMod{n+1\/2}$};\n\\node (s1) at (3,0) {$\\AtypMod{n-1\/2}$};\n\\node at (-2,0) [nom] {$\\VerMod{n,0}$};\n\\node at (-1.25,0) {$:$};\n\\draw [->] (q1) to node {$\\psi^-$} (s1);\n\\end{tikzpicture}\n} \\ .\n\\end{equation}\nSuch diagrams illustrate how the irreducible composition factors of a module are combined, with arrows indicating (schematically) the action of the algebra.\n\nAtypical modules also appear as submodules of larger indecomposable modules. 
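Before turning to these, it is useful to record how the Casimir~\\eqref{eqnDefCasimir} acts on the modules introduced so far; we add this short computation for orientation. On the generating highest weight state $\\ket{v}$ of $\\VerMod{n,e}$, whose weight is $\\brac{n+\\tfrac{1}{2},e}$, the condition $\\psi^+ \\ket{v} = 0$ gives $Q \\ket{v} = \\brac{n+\\tfrac{1}{2}} e \\ket{v}$, while on the descendant,\n\\begin{equation}\nQ\\, \\psi^- \\ket{v} = NE\\, \\psi^- \\ket{v} + \\psi^- \\psi^+ \\psi^- \\ket{v} = \\brac{n-\\tfrac{1}{2}} e\\, \\psi^- \\ket{v} + \\psi^- E \\ket{v} = \\brac{n+\\tfrac{1}{2}} e\\, \\psi^- \\ket{v},\n\\end{equation}\nwhere the middle step uses $\\psi^+ \\psi^- \\ket{v} = \\acomm{\\psi^+}{\\psi^-} \\ket{v} = E \\ket{v}$. The Casimir therefore acts on $\\VerMod{n,e}$ as the scalar $\\brac{n+\\tfrac{1}{2}} e$. In particular, it vanishes identically on every $\\VerMod{n,0}$, hence on every atypical $\\AtypMod{n}$, so nothing forces $Q$ to act semisimply on indecomposables built out of atypical modules. 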
Of particular importance are the four-dimensional projectives\\footnote{We mention that the typical irreducibles are also projective in the category of finite-dimensional $\\SLSA{gl}{1}{1}$-modules.} $\\ProjMod{n}$ whose structure diagrams take the form\n\\begin{equation} \\label{picStaggered}\n\\parbox[c]{0.28\\textwidth}{\n\\begin{center}\n\\begin{tikzpicture}[auto,thick,\n\tnom\/.style={circle,draw=black!20,fill=black!20,inner sep=2pt}\n\t]\n\\node (top) at (0,1.5) [] {$\\AtypMod{n}$};\n\\node (left) at (-1.5,0) [] {$\\AtypMod{n+1}$};\n\\node (right) at (1.5,0) [] {$\\AtypMod{n-1}$};\n\\node (bot) at (0,-1.5) [] {$\\AtypMod{n}$};\n\\node at (0,0) [nom] {$\\ProjMod{n}$};\n\\draw [->] (top) to node [swap] {$\\psi^+$} (left);\n\\draw [->] (top) to node {$\\psi^-$} (right);\n\\draw [->] (left) to node [swap] {$\\psi^-$} (bot);\n\\draw [->] (right) to node {$-\\psi^+$} (bot);\n\\end{tikzpicture}\n\\end{center}\n}\n.\n\\end{equation}\nWe remark that these modules may be viewed as particularly simple examples of staggered modules \\cite{RidSta09}. Indeed, they may be regarded as extensions of highest weight modules{} via the exact sequence\n\\begin{equation} \\label{ESFinP}\n\\dses{\\VerMod{n+1\/2,0}}{\\ProjMod{n}}{\\VerMod{n-1\/2,0}},\n\\end{equation}\nand one can verify that the Casimir $Q$ acts non-diagonalisably on $\\ProjMod{n}$, taking the generator associated with the top $\\AtypMod{n}$ factor to the generator of the bottom $\\AtypMod{n}$ factor, while annihilating the other states.\n\n\\subsection{The Representation Ring}\n\nThe relevance of the projectives $\\ProjMod{n}$ is that they appear in the representation ring generated by the irreducibles.\\footnote{It is perhaps also worth pointing out that the adjoint representation of $\\SLSA{gl}{1}{1}$ is isomorphic to $\\ProjMod{0}$.} The tensor product rules governing this ring are \\cite{RozQua92}\n\\begin{equation} \\label{RepRing}\n\\begin{gathered}\n\\AtypMod{n} \\otimes \\AtypMod{n'} = \\AtypMod{n+n'}, \\qquad\n\\AtypMod{n} \\otimes \\TypMod{n',e'} = \\TypMod{n+n',e'}, \\qquad\n\\AtypMod{n} \\otimes \\ProjMod{n'} = \\ProjMod{n+n'}, \\\\\n\\TypMod{n,e} \\otimes \\TypMod{n',e'} = \n\\begin{cases}\n\\ProjMod{n+n'} & \\text{if $e+e'=0$,} \\\\\n\\TypMod{n+n'+1\/2,e+e'} \\oplus \\TypMod{n+n'-1\/2,e+e'} & \\text{otherwise,}\n\\end{cases}\n\\\\\n\\TypMod{n,e} \\otimes \\ProjMod{n'} = \\TypMod{n+n'+1,e} \\oplus 2 \\: \\TypMod{n+n',e} \\oplus \\TypMod{n+n'-1,e}, \\qquad\n\\ProjMod{n} \\otimes \\ProjMod{n'} = \\ProjMod{n+n'+1} \\oplus 2 \\: \\ProjMod{n+n'} \\oplus \\ProjMod{n+n'-1}.\n\\end{gathered}\n\\end{equation}\nThere are other indecomposables which may be constructed from submodules and quotients of the $\\ProjMod{n}$ by taking tensor products. We will not need them and refer to \\cite{GotRep07} for further discussion.\n\n\\section{$\\AKMSA{gl}{1}{1}$ and its Representations} \\label{secAffine}\n\n\\subsection{Algebraic Structure} \\label{secAffAlg}\n\nOur conventions for $\\SLSA{gl}{1}{1}$ carry over to its affinisation $\\AKMSA{gl}{1}{1}$ in the usual way. Explicitly, the non-vanishing brackets are\n\\begin{equation}\n\\comm{N_r}{E_s} = r k \\delta_{r+s,0}, \\qquad \\comm{N_r}{\\psi^{\\pm}_s} = \\pm \\psi^{\\pm}_{r+s}, \\qquad \\acomm{\\psi^+_r}{\\psi^-_s} = E_{r+s} + r k \\delta_{r+s,0},\n\\end{equation}\nwhere $k \\in \\mathbb{R}$ is called the level and $r,s \\in \\mathbb{Z}$. 
We emphasise that when $k \\neq 0$, the generators can be rescaled so as to normalise $k$ to $1$:\n\\begin{equation} \\label{eqnLevelScaling}\nN_r \\longrightarrow N_r, \\qquad E_r \\longrightarrow \\frac{E_r}{k}, \\qquad \\psi^{\\pm}_r \\longrightarrow \\frac{\\psi^{\\pm}_r}{\\sqrt{k}}.\n\\end{equation}\nAs in the more familiar case of $\\AKMA{u}{1}$, we see that the actual value of $k \\neq 0$ is not physical.\n\nThe Virasoro generators are constructed using (a modification of) the Sugawara construction. Because the quadratic Casimir of $\\SLSA{gl}{1}{1}$ is only defined modulo polynomials in $E$, one tries the ansatz \\cite{RozQua92}\n\\begin{equation} \\label{eqnDefT}\n\\func{T}{z} = \\mu \\func{\\normord{NE + EN - \\psi^+ \\psi^- + \\psi^- \\psi^+}}{z} + \\nu \\func{\\normord{EE}}{z},\n\\end{equation}\nfinding that this defines an energy-momentum tensor if and only if $\\mu = 1\/2k$ and $\\nu = 1\/2k^2$. Moreover, the $\\AKMSA{gl}{1}{1}$ currents $\\func{N}{z}$, $\\func{E}{z}$ and $\\func{\\psi^{\\pm}}{z}$ are found to be Virasoro primaries of conformal dimension $1$ and the central charge is zero.\n\nThe structure theory of highest weight modules{} for $\\AKMSA{gl}{1}{1}$ turns out to be particularly accessible because of certain automorphisms. These consist of the automorphism $\\mathsf{w}$ which defines the notion of conjugation and the family \\cite{SalGL106} of spectral flow automorphisms $\\sigma^{\\ell}$, $\\ell \\in \\mathbb{Z}$. Explicitly,\n\\begin{equation}\n\\begin{aligned}\n\\func{\\mathsf{w}}{N_r} &= -N_{r}, \\\\\n\\func{\\sigma^{\\ell}}{N_r} &= N_r,\n\\end{aligned}\n\\qquad\n\\begin{aligned}\n\\func{\\mathsf{w}}{E_r} &= -E_{r}, \\\\\n\\func{\\sigma^{\\ell}}{E_r} &= E_r - \\ell k \\delta_{r,0},\n\\end{aligned}\n\\qquad\n\\begin{aligned}\n\\func{\\mathsf{w}}{\\psi^{\\pm}_r} &= \\pm \\psi^{\\mp}_{r}, \\\\\n\\func{\\sigma^{\\ell}}{\\psi^{\\pm}_r} &= \\psi^{\\pm}_{r \\mp \\ell},\n\\end{aligned}\n\\qquad\n\\begin{aligned}\n\\func{\\mathsf{w}}{L_0} &= L_0. \\\\\n\\func{\\sigma^{\\ell}}{L_0} &= L_0 - \\ell N_0.\n\\end{aligned}\n\\end{equation}\nThese automorphisms may be used to construct new modules $\\func{\\mathsf{w}^*}{\\mathcal{M}}$ and $\\func{\\sigma^*}{\\mathcal{M}}$ by twisting the action of the algebra on a module $\\mathcal{M}$:\n\\begin{equation} \\label{eqnInducedAction}\nJ \\cdot \\tfunc{\\mathsf{w}^*}{\\ket{v}} = \\func{\\mathsf{w}^*}{\\tfunc{\\mathsf{w}^{-1}}{J} \\ket{v}}, \\qquad J \\cdot \\tfunc{\\sigma^*}{\\ket{v}} = \\func{\\sigma^*}{\\tfunc{\\sigma^{-1}}{J} \\ket{v}} \\qquad \\text{($J \\in \\AKMSA{gl}{1}{1}$).}\n\\end{equation}\nNote that $\\func{\\mathsf{w}^*}{\\mathcal{M}}$ is precisely the module conjugate to $\\mathcal{M}$.\n\n\\subsection{Representation Theory} \\label{secAffRep}\n\nWe can now define affine highest weight states{}, affine Verma modules $\\AffVerMod{n,\\ell}$, and their irreducible quotients as before. We remark only that \\eqref{eqnLevelScaling} suggests that we characterise modules by the invariant ratio $\\ell = e\/k$ rather than by the $E_0$-eigenvalue $e$. The affine highest weight state{} $\\ket{v_{n,\\ell}}$ of $\\AffVerMod{n,\\ell}$, whose weight (its $N_0$- and $E_0\/k$-eigenvalues) is $\\brac{n + \\tfrac{1}{2}, \\ell}$, has conformal dimension\n\\begin{equation} \\label{eqnConfDim}\n\\Delta_{n,\\ell} = n \\ell + \\frac{1}{2} \\ell^2.\n\\end{equation}\nOf course, this formula also applies to singular vectors. 
Again, the label $n$ refers to the average $N_0$-eigenvalue of the zero-grade subspace of $\\AffVerMod{n,\\ell}$, generalising the labelling convention of \\secref{secFinRep}.\n\nVerma modules for $\\AKMSA{gl}{1}{1}$ are infinite-dimensional and their characters have the form\n\\begin{equation} \\label{eqnCharVerma}\n\\ch{\\AffVerMod{n,\\ell}}{z;q} = \\traceover{\\AffVerMod{n,\\ell}} z^{N_0} q^{L_0} = z^{n+1\/2} q^{\\Delta_{n,\\ell}} \\prod_{i=1}^{\\infty} \\frac{\\brac{1 + z q^i} \\brac{1 + z^{-1} q^{i-1}}}{\\brac{1 - q^i}^2}.\n\\end{equation}\nFor the irreducible quotients, the case with $\\ell = 0$ is particularly easy. As in \\secref{secFinRep}, we regard $\\brac{n,\\ell}$ (and modules so-labelled) as being \\emph{typical} if $\\AffVerMod{n,\\ell}$ is irreducible and \\emph{atypical} otherwise.\n\\begin{proposition} \\label{prop:ell=0}\nThe affine Verma module $\\AffVerMod{n,0}$ has an exact sequence\n\\begin{equation}\n\\dses{\\AffAtypMod{n-1\/2,0}}{\\AffVerMod{n,0}}{\\AffAtypMod{n+1\/2,0}}\n\\end{equation}\nin which the $\\AffAtypMod{n,0}$ are (atypical) irreducibles whose characters are given by\n\\begin{equation} \\label{eqnCharVac}\n\\ch{\\AffAtypMod{n,0}}{z;q} = z^n \\prod_{i=1}^{\\infty} \\frac{\\brac{1 + z q^i} \\brac{1 + z^{-1} q^i}}{\\brac{1 - q^i}^2}.\n\\end{equation}\n\\end{proposition}\n\\begin{proof}\nSince $\\ell = 0$, every singular vector of $\\AffVerMod{n,0}$ has dimension $0$ by \\eqnref{eqnConfDim}. The space of singular vectors is thus spanned by $\\ket{v_{n,0}}$ and $\\psi^-_0 \\ket{v_{n,0}}$. Taking the quotient by the module generated by $\\psi^-_0 \\ket{v_{n,0}}$ gives a module with a one-dimensional zero-grade subspace. The only singular vector is then the highest weight state{}, so this quotient is irreducible. We denote it by $\\AffAtypMod{n+1\/2,0}$ as its zero-grade subspace has $N_0$-eigenvalue $n+\\tfrac{1}{2}$. Its character follows trivially. The submodule of $\\AffVerMod{n,0}$ generated by $\\psi^-_0 \\ket{v_{n,0}}$ is not a Verma module because $\\bigl( \\psi^-_0 \\bigr)^2 \\ket{v_{n,0}} = 0$. It must therefore be a proper quotient of $\\AffVerMod{n-1,0}$ and, by the above argument, the only such quotient is the irreducible $\\AffAtypMod{n-1\/2,0}$. The exact sequence follows.\n\\end{proof}\nFor $\\ell \\neq 0$, one proves by direct calculation \\cite{CR:GL11} that for $0 < \\abs{\\ell} < 1$, $\\AffVerMod{n,\\ell}$ is irreducible. In other words, the corresponding irreducibles are typical, hence we denote them by $\\AffTypMod{n,\\ell}$. For $\\abs{\\ell} \\geqslant 1$, the structure of the Verma modules now follows from considering the induced action of the spectral flow automorphisms. More precisely, one proves \\cite{CR:GL11} that any Verma module is isomorphic to a twisted version of a Verma module with $-1 < \\abs{\\ell} < 1$ (or the conjugate of such a Verma module). We summarise the result as follows.\n\\begin{proposition}\nWhen $\\ell \\notin \\mathbb{Z}$, the affine Verma module $\\AffVerMod{n,\\ell}$ is irreducible, $\\AffVerMod{n,\\ell} \\cong \\AffTypMod{n,\\ell}$, so its character is given by \\eqnref{eqnCharVerma}. 
When $\\ell \\in \\mathbb{Z}$, the affine Verma module $\\AffVerMod{n,\\ell}$ has an exact sequence\n\\begin{equation}\n\\begin{gathered}\n\\dses{\\AffAtypMod{n+1,\\ell}}{\\AffVerMod{n,\\ell}}{\\AffAtypMod{n,\\ell}} \\qquad \\text{($\\ell = +1,+2,+3,\\ldots$),} \\\\\n\\dses{\\AffAtypMod{n-1,\\ell}}{\\AffVerMod{n,\\ell}}{\\AffAtypMod{n,\\ell}} \\qquad \\text{($\\ell = -1,-2,-3,\\ldots$),}\n\\end{gathered}\n\\end{equation}\nin which the $\\AffAtypMod{n,\\ell}$ are (atypical) irreducibles whose characters are given by\n\\begin{equation} \\label{eqnCharAtyp}\n\\ch{\\AffAtypMod{n,\\ell}}{z;q} = \n\\begin{cases}\n\\displaystyle \\frac{z^{n+1\/2} q^{\\Delta_{n,\\ell}}}{1 + zq^{\\ell}} \\prod_{i=1}^{\\infty} \\frac{\\brac{1 + z q^i} \\brac{1 + z^{-1} q^{i-1}}}{\\brac{1 - q^i}^2} & \\text{($\\ell = +1,+2,+3,\\ldots$),} \\\\\n\\displaystyle \\frac{z^{n+1\/2} q^{\\Delta_{n,\\ell}}}{1 + z^{-1} q^{-\\ell}} \\prod_{i=1}^{\\infty} \\frac{\\brac{1 + z q^i} \\brac{1 + z^{-1} q^{i-1}}}{\\brac{1 - q^i}^2} & \\text{($\\ell = -1,-2,-3,\\ldots$).}\n\\end{cases}\n\\end{equation}\n(The exact sequence and character for $\\ell = 0$ was given in \\propref{prop:ell=0}.)\n\\end{proposition}\n\\noindent Note that the $\\AffVerMod{n,\\ell}$ with $\\ell \\in \\mathbb{Z}$ have a non-trivial singular vector at grade $\\abs{\\ell}$. We emphasise that the $\\AffAtypMod{n,\\ell}$ with $\\ell \\neq 0$ therefore possess a two-dimensional zero-grade subspace.\n\nThis description of the Verma modules, their irreducible quotients and characters relies upon being able to identify the result of applying the spectral flow automorphisms to modules. For irreducibles, we have\n\\begin{equation}\n\\tfunc{\\bigl( \\sigma^{\\ell'} \\bigr)^*}{\\AffTypMod{n,\\ell}} = \\AffTypMod{n-\\ell',\\ell+\\ell'}, \\qquad \\tfunc{\\bigl( \\sigma^{\\ell'} \\bigr)^*}{\\AffAtypMod{n,\\ell}} = \\AffAtypMod{n-\\ell'+\\func{\\varepsilon}{\\ell+\\ell'}-\\func{\\varepsilon}{\\ell},\\ell+\\ell'},\n\\end{equation}\nwhere we introduce a convenient variant $\\varepsilon$ of the sign function on $\\mathbb{Z}$, defined by taking $\\func{\\varepsilon}{\\ell}$ to be $\\tfrac{1}{2}$, $0$ or $-\\tfrac{1}{2}$ according as to whether $\\ell \\in \\mathbb{Z}$ is positive, zero or negative, respectively.\n\n\\subsection{Fusion} \\label{secAffFus}\n\nThe fusion rules of the irreducible $\\AKMSA{gl}{1}{1}$-modules (among others) were first deduced in \\cite{Creutzig:2007jy} using three-point functions computed in a free field realisation and a conjectured completeness of the spectrum. These rules and the spectrum conjecture were confirmed in \\cite{CR:GL11} through a direct argument involving the Nahm-Gaberdiel-Kausch fusion algorithm \\cite{NahQua94,GabInd96} and spectral flow. The fusion ring generated by the irreducibles may be understood \\cite{Quella:2007hr} as a ``constrained lift'' of the representation ring \\eqref{RepRing} of $\\SLSA{gl}{1}{1}$ where the constraints are effectively implemented by spectral flow. 
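Before writing the fusion rules out explicitly, we note a simple consistency check of the characters above: for $\\ell \\in \\mathbb{Z}_{>0}$, the two atypical characters \\eqref{eqnCharAtyp} appearing in the exact sequence must sum to the Verma character \\eqref{eqnCharVerma}. The following SymPy sketch (illustrative only; the sample values of $n$ and $\\ell$ are arbitrary and the infinite product is truncated, which is harmless here because it enters both sides as a common factor) confirms this:
\\begin{verbatim}
import sympy as sp

z, q = sp.symbols('z q', positive=True)
n, ell = 1, 2                                    # sample point, ell a positive integer
Delta = lambda m, l: m*l + sp.Rational(1, 2)*l**2
prod_factor = sp.prod([(1 + z*q**i)*(1 + q**(i - 1)/z)/(1 - q**i)**2
                       for i in range(1, 7)])    # truncated common factor

ch_verma = z**(n + sp.Rational(1, 2)) * q**Delta(n, ell) * prod_factor
ch_atyp  = lambda m: (z**(m + sp.Rational(1, 2)) * q**Delta(m, ell)
                      * prod_factor / (1 + z*q**ell))

# Exact sequence  A_{n+1,ell} --> V_{n,ell} --> A_{n,ell}  for ell > 0:
print(sp.simplify(ch_atyp(n) + ch_atyp(n + 1) - ch_verma))   # expected output: 0
\\end{verbatim}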
Explicitly, the rules are\n\\begin{equation} \\label{Fusion}\n\\begin{gathered}\n\\AffAtypMod{n,\\ell} \\mathbin{\\times} \\AffAtypMod{n',\\ell'} = \\AffAtypMod{n+n'-\\func{\\varepsilon}{\\ell,\\ell'},\\ell+\\ell'}, \\quad\n\\AffAtypMod{n,\\ell} \\mathbin{\\times} \\AffTypMod{n',\\ell'} = \\AffTypMod{n+n'-\\func{\\varepsilon}{\\ell},\\ell+\\ell'}, \\quad\n\\AffAtypMod{n,\\ell} \\mathbin{\\times} \\AffProjMod{n',\\ell'} = \\AffProjMod{n+n'-\\func{\\varepsilon}{\\ell,\\ell'},\\ell+\\ell'}, \\\\\n\\AffTypMod{n,\\ell} \\mathbin{\\times} \\AffTypMod{n',\\ell'} = \n\\begin{cases}\n\\AffProjMod{n+n'+\\func{\\varepsilon}{\\ell+\\ell'},\\ell+\\ell'} & \\text{if $\\ell+\\ell'=0$,} \\\\\n\\AffTypMod{n+n'+1\/2,\\ell+\\ell'} \\oplus \\AffTypMod{n+n'-1\/2,\\ell+\\ell'} & \\text{otherwise,}\n\\end{cases}\n\\\\\n\\AffTypMod{n,\\ell} \\mathbin{\\times} \\AffProjMod{n',\\ell'} = \\AffTypMod{n+n'+1-\\func{\\varepsilon}{\\ell'},\\ell+\\ell'} \\oplus 2 \\: \\AffTypMod{n+n'-\\func{\\varepsilon}{\\ell'},\\ell+\\ell'} \\oplus \\AffTypMod{n+n'-1-\\func{\\varepsilon}{\\ell'},\\ell+\\ell'}, \\\\\n\\AffProjMod{n,\\ell} \\mathbin{\\times} \\AffProjMod{n',\\ell'} = \\AffProjMod{n+n'+1-\\func{\\varepsilon}{\\ell,\\ell'},\\ell+\\ell'} \\oplus 2 \\: \\AffProjMod{n+n'-\\func{\\varepsilon}{\\ell,\\ell'},\\ell+\\ell'} \\oplus \\AffProjMod{n+n'-1-\\func{\\varepsilon}{\\ell,\\ell'},\\ell+\\ell'}.\n\\end{gathered}\n\\end{equation}\nHere, we have defined $\\func{\\varepsilon}{\\ell , \\ell'} = \\func{\\varepsilon}{\\ell} + \\func{\\varepsilon}{\\ell'} - \\func{\\varepsilon}{\\ell + \\ell'}$ for convenience.\n\nThese fusion rules also introduce the indecomposable modules $\\AffProjMod{n,\\ell}$ which are the counterparts of the projective $\\SLSA{gl}{1}{1}$-modules $\\ProjMod{n}$ discussed in \\secref{secFinRep}.\\footnote{More precisely, $\\AffProjMod{n,0}$ is the affine counterpart to $\\ProjMod{n}$ and the remaining $\\AffProjMod{n,\\ell}$ are obtained by spectral flow.} The $\\AffProjMod{n,\\ell}$ are staggered with structure diagram\n\\begin{equation} \\label{picAffineStaggered}\n\\parbox[c]{0.28\\textwidth}{\n\\begin{center}\n\\begin{tikzpicture}[auto,thick,\n\tnom\/.style={circle,draw=black!20,fill=black!20,inner sep=2pt}\n\t]\n\\node (top) at (0,1.5) [] {$\\AffAtypMod{n,\\ell}$};\n\\node (left) at (-1.5,0) [] {$\\AffAtypMod{n+1,\\ell}$};\n\\node (right) at (1.5,0) [] {$\\AffAtypMod{n-1,\\ell}$};\n\\node (bot) at (0,-1.5) [] {$\\AffAtypMod{n,\\ell}$};\n\\node at (0,0) [nom] {$\\AffProjMod{n,\\ell}$};\n\\draw [->] (top) to (left);\n\\draw [->] (top) to (right);\n\\draw [->] (left) to (bot);\n\\draw [->] (right) to (bot);\n\\end{tikzpicture}\n\\end{center}\n}\n\\end{equation}\nand a non-diagonalisable action of the Virasoro mode $L_0$. It follows that conformal field theories{} whose spectra contain typical modules will also contain such $\\AffProjMod{n,\\ell}$ (by fusion), and so will be \\emph{logarithmic}.\n\n\\section{W-Algebras extending $\\AKMSA{gl}{1}{1}$} \\label{secExtAlg}\n\n\\subsection{Chiral Algebra Extensions}\n\nOur search for extended algebras is guided by the following considerations: First, note that if we choose to extend by a zero-grade field associated to any irreducible $\\AKMSA{gl}{1}{1}$-module, then we must include the rest of its zero-grade fields in the extension. Second, the fields we extend by should be closed under conjugation. 
Third, extending by fields from typical irreducibles will lead to logarithmic behaviour in the extended chiral algebra because fusing typicals with their conjugates yields the staggered indecomposable $\\AffProjMod{0,0}$.\n\nIt seems then that the most tractable extensions will involve zero-grade fields from atypical modules $\\AffAtypMod{n,\\ell}$ and their conjugates $\\AffAtypMod{-n,-\\ell}$. The simplest extension we could hope for would involve a single atypical and its conjugate and have the further property that these extension fields generate no new fields at the level of the commutation relations. This may be achieved for extension fields of integer or half-integer conformal dimension by requiring that the operator product expansions{} of the zero-grade fields of $\\AffAtypMod{n,\\ell}$ are regular. From the fusion rules \\eqref{Fusion}, we obtain\n\\begin{equation}\n\\AffAtypMod{n,\\ell} \\mathbin{\\times} \\AffAtypMod{n,\\ell} = \\AffAtypMod{2n - \\func{\\varepsilon}{\\ell},2\\ell},\n\\end{equation}\nfrom which it follows that the zero-grade fields of $\\AffAtypMod{n,\\ell}$ will have regular operator product expansions{} with one another if $2 \\: \\Delta_{n,\\ell} \\leqslant \\Delta_{2n - \\func{\\varepsilon}{\\ell},2\\ell}$, that is, if\n\\begin{equation}\\label{eqdim}\n\\abs{\\ell} \\leqslant 2 \\: \\Delta_{n,\\ell}.\n\\end{equation}\nWe may take $\\ell$ positive without loss of generality. Further, we require that the conformal dimension of the extension fields be a positive half-integer (so $2n\\ell \\in \\mathbb{Z}$). \\eqnref{eqdim} then implies that there are $m$ distinct possibilities to extend by fields of dimension $m\/2$.\nWe denote by $\\alg{W}_{n,\\ell}$ the algebra obtained upon extending $\\AKMSA{gl}{1}{1}$ by the atypical module $\\AffAtypMod{n,\\ell}$ and its conjugate $\\AffAtypMod{-n,-\\ell}$.\n\n\\subsection{Characters of Extended Algebras}\n\nThe complete extended algebra also contains normally-ordered products of the extension fields and their descendants. Indeed, the extended algebra $\\alg{W}_{n,\\ell}$ may be identified, at least at the level of graded vector spaces, with the orbit of the $\\AKMSA{gl}{1}{1}$ vacuum module under fusion by the simple current modules $\\AffAtypMod{n,\\ell}$ and $\\AffAtypMod{-n,-\\ell}$. In other words,\n\\begin{equation}\n\\alg{W}_{n+1\/2,\\ell} = \\AffAtypMod{0,0} \\oplus \\bigoplus_{m=1}^{\\infty} \\bigl( \\AffAtypMod{mn+1\/2,m\\ell} \\oplus \\AffAtypMod{-mn-1\/2,-m\\ell} \\bigr).\n\\end{equation}\nThe character of the extended vacuum module is therefore\n\\begin{equation} \\label{eqnCharW}\n\\begin{split}\n\\ch{\\alg{W}_{n+1\/2,\\ell}}{y;z;q} &= \\ch{\\AffAtypMod{0,0}}{y,z;q} + \\sum_{m=1}^\\infty \\Bigl[ \\ch{\\AffAtypMod{mn+1\/2, m \\ell}}{y;z;q} + \\ch{\\AffAtypMod{-mn-1\/2, -m \\ell}}{y;z;q} \\Bigr] \\\\\n&= z \\sum_{m \\in \\mathbb{Z}} \\frac{y^{m \\ell} z^{mn} q^{\\brac{mn+1\/2} m \\ell + m^2 \\ell^2 \/ 2}}{1 + z q^{m \\ell}} \\cdot \\prod_{i=1}^{\\infty} \\frac{\\brac{1 + z q^i} \\brac{1 + z^{-1} q^{i-1}}}{\\brac{1 - q^i}^2}.\n\\end{split}\n\\end{equation}\nHere, we have introduced an additional formal variable $y$ in order to keep track of the eigenvalues of $E_0 \/ k$. One can likewise identify the irreducible modules of the extended algebra with the other orbits of the extension modules. 
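The counting of admissible extensions stated above is easily made explicit: requiring $\\Delta_{n,\\ell} = m\/2$ with $\\ell$ a positive integer forces $n = (m - \\ell^2)\/(2\\ell)$, and \\eqref{eqdim} restricts $\\ell$ to $1, \\ldots, m$. A short sketch in plain Python (illustrative only; it is not needed for any of the constructions that follow) lists these pairs:
\\begin{verbatim}
from fractions import Fraction

def admissible_extensions(m):
    # Pairs (n, ell) with Delta_{n,ell} = n*ell + ell^2/2 = m/2 and ell = 1, ..., m.
    pairs = []
    for ell in range(1, m + 1):
        n = Fraction(m - ell*ell, 2*ell)
        assert n*ell + Fraction(ell*ell, 2) == Fraction(m, 2)   # dimension m/2
        assert ell <= m                                         # the bound (eqdim)
        pairs.append((n, ell))
    return pairs

for m in (1, 2, 3):
    print(m, [(str(n), ell) for n, ell in admissible_extensions(m)])
# m = 1 gives (n, ell) = (0, 1); m = 2 gives (1/2, 1) and (-1/2, 2);
# m = 3 gives (1, 1), (-1/4, 2) and (-1, 3), the cases examined below.
\\end{verbatim}
Note that the pair $(-\\tfrac{1}{2}, 2)$ labels the same extension as $(\\tfrac{1}{2}, -2)$, since $\\alg{W}_{n,\\ell}$ involves both $\\AffAtypMod{n,\\ell}$ and its conjugate. Each such pair gives rise, as above, to an extended algebra $\\alg{W}_{n,\\ell}$ together with its irreducible modules.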
We will not consider these modules, their characters, nor their interesting modular properties here, but will return to this in a future publication.\n\n\\subsection{Free Field Realisations} \\label{appFreeFields}\n\nThe affine Kac-Moody superalgebra $\\AKMSA{gl}{1}{1}$ has two well-known free field realizations, the standard Wakimoto realization \\cite{SalGL106} and one constructed from a pair of symplectic fermions, a euclidean boson, and a lorentzian boson \\cite{Guruswamy:1999hi}. An explicit equivalence between the two realisations was established in \\cite{CR09}. Here, we review the latter one.\n\nWe take the symplectic fermions $\\chi^{\\pm}$ and bosons $Y$, $Z$ to have the following operator product expansions{}:\n\\begin{equation}\n\\func{\\chi^+}{z} \\func{\\chi^-}{w} = \\frac{1}{\\brac{z-w}^2} + \\text{ regular terms}, \\qquad \n\\func{\\partial Y}{z} \\func{\\partial Z}{w} = \\frac{1}{\\brac{z-w}^2} + \\text{ regular terms}\n\\end{equation}\n(the others are regular). The $\\AKMSA{gl}{1}{1}$ current fields are then given by\n\\begin{equation} \\label{eqnGL11FFR}\n\\func{E}{z} = k \\func{\\partial Y}{z}, \\qquad \\func{N}{z} = \\func{\\partial Z}{z}, \\qquad \\func{\\psi^{\\pm}}{z} = \\sqrt{k} \\vertop{\\pm \\func{Y}{z}} \\func{\\chi^{\\pm}}{z},\n\\end{equation}\nand a moderately tedious computation shows that the $\\AKMSA{gl}{1}{1}$ energy momentum tensor \\eqref{eqnDefT} indeed corresponds to the sum of those of the bosonic and symplectic fermion systems.\n\nIt remains to construct the $\\AKMSA{gl}{1}{1}$ primaries that generate our extended algebras. As these correspond to atypical modules, this is relatively straight-forward. First, we introduce some convenient notation:\n Let $X_{n,\\ell}$ be the bosonic linear combination $n Y + \\ell Z$ and define composite fields $F^{\\pm}_r$, with $r \\in \\mathbb{N}$, by $F^{\\pm}_0 = 1$ and $F^{\\pm}_r = \\normord{F^{\\pm}_{r-1} \\partial^{r-1} \\chi^{\\pm}}$ for $r \\geqslant 1$. The conformal dimension of $F^{\\pm}_r$ is then $\\tfrac{1}{2} r \\brac{r+1}$. The zero-grade fields of the atypicals $\\AffAtypMod{n,\\ell}$ for $\\ell > 0$ have conformal dimension $\\Delta_{n,\\ell} = \\ell \\brac{n + \\ell \/ 2}$ and are realised by\n\\begin{equation}\nV_{n,\\ell}^+ = \\vertop{X_{n+1\/2,\\ell}} F^-_{\\ell-1}, \\qquad V_{n,\\ell}^- = \\vertop{X_{n-1\/2,\\ell}} F^-_{\\ell}.\n\\end{equation}\nThis follows from their operator product expansions{} with the $\\AKMSA{gl}{1}{1}$ currents:\n\\begin{equation}\n\\begin{aligned}\n\\func{N}{z} \\func{V_{n,\\ell}^{\\pm}}{w} &= \\frac{\\brac{n \\pm 1\/2} \\: \\func{V_{n,\\ell}^{\\pm}}{w}}{z-w} + \\ldots , \\\\\n\\func{E}{z} \\func{V_{n,\\ell}^{\\pm}}{w} &= \\frac{\\ell k \\: \\func{V_{n,\\ell}^{\\pm}}{w}}{z-w} + \\ldots ,\n\\end{aligned}\n\\qquad\n\\begin{aligned}\n\\func{\\psi^+}{z} \\func{V_{n,\\ell}^-}{w} &= \\brac{-1}^{\\ell - 1} \\ell ! \\frac{\\sqrt{k} \\: \\func{V_{n,\\ell}^+}{w}}{z-w} + \\ldots , \\\\\n\\func{\\psi^-}{z} \\func{V_{n,\\ell}^+}{w} &= \\frac{(-1)^{\\ell-1}}{\\brac{\\ell-1}!} \\frac{\\sqrt{k} \\: \\func{V_{n,\\ell}^-}{w}}{z-w} + \\ldots ,\n\\end{aligned}\n\\end{equation}\nthe others being regular. 
The zero-grade fields of the conjugate module $\\AffAtypMod{-n,-\\ell}$ are realised as\n\\begin{equation}\nV_{-n,-\\ell}^+ = \\vertop{X_{-n+1\/2,-\\ell}} F^+_{\\ell}, \\qquad V_{-n,-\\ell}^- = \\vertop{X_{-n-1\/2,-\\ell}} F^+_{\\ell-1}.\n\\end{equation}\nTheir operator product expansions{} with the current fields are similar.\n\n\\subsection{The Extended Operator Product Algebra}\n\nIn order to compute the leading contributions to the extended algebra operator product expansions{}, we need the expansion of the bosonic vertex operators. To second order, this is\n\\begin{multline} \\label{eqnVertexOPE}\n\\vertop{\\func{X_{n,\\ell}}{z}} \\vertop{\\func{X_{n',\\ell'}}{w}} = \\brac{z-w}^{n\\ell'+n'\\ell} \\biggl[ \\vertop{\\func{X_{n+n',\\ell+\\ell'}}{w}} + \\normord{\\func{\\partial X_{n,\\ell}}{w} \\Vertop{\\func{X_{n+n',\\ell+\\ell'}}{w}}} \\brac{z-w} \\Biggr. \\\\\n\\Biggl. + \\frac{1}{2} \\normord{\\Bigl( \\func{\\partial X_{n,\\ell}}{w} \\func{\\partial X_{n,\\ell}}{w} + \\func{\\partial^2 X_{n,\\ell}}{w} \\Bigr) \\Vertop{\\func{X_{n+n',\\ell+\\ell'}}{w}}} \\brac{z-w}^2 + \\ldots \\biggr].\n\\end{multline}\nNote that it follows that $\\vertop{\\func{X_{n,\\ell}}{w}}$ and $\\vertop{\\func{X_{n',\\ell'}}{w}}$ will be mutually bosonic when $n\\ell' + n'\\ell$ is an even integer and mutually fermionic when $n\\ell' + n'\\ell$ is odd. The implication of this for the statistics of the extended algebra generators $V_{n,\\ell}^{\\pm}$ and $V_{-n,-\\ell}^{\\pm}$ is a little subtle. It turns out that when $2n \\ell$ is even, these generators may be consistently assigned a bosonic or fermionic parity --- $\\alg{W}_{n,\\ell}$ is a superalgebra. In fact, $V_{n,\\ell}^+$ and $V_{-n,-\\ell}^-$ will be fermions and $V_{n,\\ell}^-$ and $V_{-n,-\\ell}^+$ will be bosons in this case. However, when $2n \\ell$ is odd, such an assignment is impossible --- $\\alg{W}_{n,\\ell}$ is \\emph{not} a superalgebra. In this case, separately taking $V_{n,\\ell}^+$ and $V_{-n,-\\ell}^-$ to be bosons and $V_{n,\\ell}^-$ and $V_{-n,-\\ell}^+$ to be fermions is consistent, but the mutual locality of a boson and a fermion will now be $-1$ instead of $+1$. We will remark further on this subtlety in \\secref{secExamples}.\n\nWe moreover need the leading terms of certain operator product expansions{} of the $F^{\\pm}_r$. In particular,\n\\begin{equation} \\label{eqnCompositeSFOPEs}\n\\begin{split}\n\\func{F^+_r}{z} \\func{F^-_r}{w} &= \\brac{z-w}^{-r \\brac{r+1}} \\biggl[ \\mu_r^{\\brac{0}} + \\mu_{r-1}^{\\brac{2}} \\normord{\\func{\\chi^+}{w} \\func{\\chi^-}{w}} \\brac{z-w}^2 + \\ldots \\biggr] , \\\\\n\\func{F^-_{r-1}}{z} \\func{F^+_r}{w} &= \\brac{z-w}^{-\\brac{r-1} \\brac{r+1}} \\biggl[ \\mu_{r-1}^{\\brac{1}} \\: \\func{\\chi^+}{w} + \\ldots \\biggr] , \\\\\n\\func{F^-_r}{z} \\func{F^+_{r-1}}{w} &= \\brac{z-w}^{-\\brac{r-1} \\brac{r+1}} \\biggl[ \\mu_{r-1}^{\\brac{1}} \\: \\func{\\chi^-}{w} + \\ldots \\biggr] ,\n\\end{split}\n\\end{equation}\nwhere the coefficients $\\mu_r^{\\brac{a}}$, for $a = 0$, $1$, $2$, are given by\n\\begin{equation} \\label{eq:coeff}\n\\mu_r^{\\brac{a}} = \\sum_{\\sigma \\in \\group{S}_r} \\brac{-1}^{\\abs{\\sigma}} \\prod_{i=1}^r \\brac{i + \\func{\\sigma}{i} + a - 1}! = \\prod_{i=1}^r \\brac{i-1}! \\brac{i+a}!\n\\end{equation}\nThis last equality follows from recognising the $\\mu_r^{\\brac{a}}$ as determinants of Hankel matrices for which LU-decompositions are easily found. 
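Before giving the LU-decomposition in detail, we remark that \\eqref{eq:coeff} is easily confirmed numerically. The sketch below (illustrative only, assuming SymPy) checks the determinant evaluation for small $r$ and $a$, together with the ratios $\\mu_{r-1}^{\\brac{2}} \/ \\mu_r^{\\brac{0}} = \\tfrac{1}{2} r \\brac{r+1}$ used below:
\\begin{verbatim}
import sympy as sp

def mu_det(r, a):
    # mu_r^(a) = det A_r(a) with (A_r(a))_{ij} = (i+j+a-1)! for 1-based i, j
    # (the lambda below uses zero-based indices, hence the shift by +1).
    return sp.Matrix(r, r, lambda i, j: sp.factorial(i + j + a + 1)).det()

def mu_closed(r, a):
    # Closed form prod_{i=1}^{r} (i-1)! (i+a)!.
    return sp.prod([sp.factorial(i - 1) * sp.factorial(i + a) for i in range(1, r + 1)])

for r in range(1, 6):
    for a in range(0, 3):
        assert mu_det(r, a) == mu_closed(r, a)
    assert mu_closed(r - 1, 2) / mu_closed(r, 0) == sp.Rational(r*(r + 1), 2)
print("Hankel determinant identity checked for r <= 5, a <= 2")
\\end{verbatim}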
In detail, consider the $r \\times r$ matrix $\\func{A_r}{a}$, for a non-negative integer $a$, with entries $\\brac{\\func{A_r}{a}}_{ij} = \\brac{i+j+a-1}!$ Defining $r \\times r$ matrices $\\func{L_r}{a}$ and $\\func{U_r}{a}$ by\n\\begin{equation}\n\\brac{\\func{L_r}{a}}_{ij} = \\frac{\\brac{i+a}!}{\\brac{j+a}!} \\binom{i-1}{j-1}, \\qquad \n\\brac{\\func{U_r}{a}}_{ij} = \\brac{i-1}! \\brac{j+a}! \\binom{j-1}{i-1},\n\\end{equation}\nand noting that $\\func{L_r}{a}$ is lower-triangular with diagonal entries equal to $1$ and $\\func{U_r}{a}$ is upper-triangular, we see that $\\func{L_r}{a} \\func{U_r}{a}$ is an LU-decomposition of $\\func{A_r}{a}$:\n\\begin{equation}\n\\begin{split}\n\\brac{\\func{L_r}{a} \\func{U_r}{a}}_{ij} &= \\sum_{k=1}^r \\frac{\\brac{i+a}! \\brac{i-1}! \\brac{j+a}! \\brac{j-1}!}{\\brac{k+a}! \\brac{k-1}! \\brac{i-k}! \\brac{j-k}!} = \\brac{j+a}! \\brac{i-1}! \\sum_{k=1}^r \\binom{i+a}{k+a} \\binom{j-1}{k-1} \\\\\n&= \\brac{j+a}! \\brac{i-1}! \\binom{i+j+a-1}{i-1} = \\brac{\\func{A_r}{a}}_{ij}.\n\\end{split}\n\\end{equation}\nSince $\\det \\: \\func{L_r}{a} = 1$, we obtain $\\det \\: \\func{A_r}{a} = \\det \\: \\func{U_r}{a} = \\prod_{i=1}^r \\brac{i-1}! \\brac{i+a}!$ and hence \\eqnref{eq:coeff}.\n\nWe are now in a position to obtain the leading contributions to the\noperator product expansions{} of the extension fields $V_{n,\\ell}^{\\pm}$ and their conjugates $V_{-n,-\\ell}^{\\mp}$. Since we assume \\eqref{eqdim}, there are only four non-regular expansions and these take the form\n\\begin{equation} \\label{eqnGenExtAlgOPEs}\n\\begin{split}\n\\func{V_{n,\\ell}^+}{z} \\func{V_{-n,-\\ell}^+}{w} &= \\frac{\\mu_{\\ell-1}^{\\brac{1}} \\: \\func{\\psi^+}{w} \/ \\sqrt{k}}{\\brac{z-w}^{2 \\Delta_{n,\\ell} - 1}} + \\ldots , \\\\\n\\func{V_{-n,-\\ell}^-}{z} \\func{V_{n,\\ell}^+}{w} &= \\mu_{\\ell-1}^{\\brac{0}} \\Biggl[ \\frac{1}{\\brac{z-w}^{2 \\Delta_{n,\\ell}}} - \\frac{\\func{\\partial X_{n+1\/2,\\ell}}{w}}{\\brac{z-w}^{2 \\Delta_{n,\\ell} - 1}} + \\frac{\\ell \\brac{\\ell-1}}{2} \\frac{\\normord{\\func{\\chi^+}{w} \\func{\\chi^-}{w}}}{\\brac{z-w}^{2 \\Delta_{n,\\ell} - 2}} \\Biggr. \\\\\n& \\mspace{90mu} \\Biggl. + \\frac{1}{2} \\frac{\\normord{\\func{\\partial X_{n+1\/2,\\ell}}{w} \\func{\\partial X_{n+1\/2,\\ell}}{w}} - \\func{\\partial^2 X_{n+1\/2,\\ell}}{w}}{\\brac{z-w}^{2 \\Delta_{n,\\ell} - 2}} + \\ldots \\Biggr] , \\\\\n\\func{V_{-n,-\\ell}^+}{z} \\func{V_{n,\\ell}^-}{w} &= \\mu_{\\ell}^{\\brac{0}} \\Biggl[ \\frac{1}{\\brac{z-w}^{2 \\Delta_{n,\\ell}}} - \\frac{\\func{\\partial X_{n-1\/2,\\ell}}{w}}{\\brac{z-w}^{2 \\Delta_{n,\\ell} - 1}} + \\frac{\\ell \\brac{\\ell+1}}{2} \\frac{\\normord{\\func{\\chi^+}{w} \\func{\\chi^-}{w}}}{\\brac{z-w}^{2 \\Delta_{n,\\ell} - 2}} \\Biggr. \\\\\n& \\mspace{90mu} \\Biggl. + \\frac{1}{2} \\frac{\\normord{\\func{\\partial X_{n-1\/2,\\ell}}{w} \\func{\\partial X_{n-1\/2,\\ell}}{w}} - \\func{\\partial^2 X_{n-1\/2,\\ell}}{w}}{\\brac{z-w}^{2 \\Delta_{n,\\ell} - 2}} + \\ldots \\Biggr] , \\\\\n\\func{V_{n,\\ell}^-}{z} \\func{V_{-n,-\\ell}^-}{w} &= \\frac{\\mu_{\\ell-1}^{\\brac{1}} \\: \\func{\\psi^-}{w} \/ \\sqrt{k}}{\\brac{z-w}^{2 \\Delta_{n,\\ell} - 1}} + \\ldots\n\\end{split}\n\\end{equation}\nHere, we have used \\eqref{eq:coeff} to evaluate the ratios $\\mu_{r-1}^{\\brac{2}} \/ \\mu_r^{\\brac{0}} = \\tfrac{1}{2} r \\brac{r+1}$ appearing in these expansions.\n\n\\subsection{Examples} \\label{secExamples}\n\nLet us now illustrate the results of the above calculations with a few simple examples. 
First, \\eqref{eqdim} tells us that the extended algebra $\\alg{W}_{n,\\ell}$ will be unique if we insist that the extension fields have conformal dimension $\\tfrac{1}{2}$. Indeed, this requires $\\ell = 1$ and $n=0$. We are therefore extending $\\AKMSA{gl}{1}{1}$ by the fields associated with the atypical modules $\\AffAtypMod{0,1}$ and $\\AffAtypMod{0,-1}$. Since $2n \\ell = 0$ is even, the generators of the resulting extended algebra, $\\alg{W}_{0,1}$, may be assigned a definite parity: $\\varkappa = V_{0,1}^+$ and $\\bar{\\varkappa} = V_{0,-1}^-$ are odd, $\\beta = V_{0,1}^-$ and $\\gamma = -V_{0,-1}^+$ are even. The expansions \\eqref{eqnGenExtAlgOPEs} become\n\\begin{equation}\n\\begin{aligned}\n\\func{\\varkappa}{z} \\func{\\bar{\\varkappa}}{w} &= \\frac{1}{z-w} + \\func{N}{w} + \\frac{1}{2k} \\func{E}{w} + \\ldots , \\\\\n\\func{\\beta}{z} \\func{\\gamma}{w} &= \\frac{1}{z-w} + \\func{N}{w} - \\frac{1}{2k} \\func{E}{w} + \\ldots ,\n\\end{aligned}\n\\qquad\n\\begin{aligned}\n\\func{\\beta}{z} \\func{\\varkappa}{w} = +\\frac{\\func{\\psi^+}{w}}{\\sqrt{k}} + \\ldots , \\\\\n\\func{\\gamma}{z} \\func{\\bar{\\varkappa}}{w} = -\\frac{\\func{\\psi^-}{w}}{\\sqrt{k}} + \\ldots ,\n\\end{aligned}\n\\end{equation}\nwhich we recognise as a free complex fermion $\\brac{\\varkappa,\\bar{\\varkappa}}$ and a $\\beta \\gamma$ ghost system. Because the mixed operator product expansions{} are regular, $\\alg{W}_{0,1}$ decomposes into the direct sum of the chiral algebras of these theories.\n\nIf we choose to extend by dimension $1$ fields, then there are two distinct choices: $n = \\tfrac{1}{2}$ and $\\ell = 1$ or $n = \\tfrac{1}{2}$ and $\\ell = -2$. We expect a current algebra symmetry in both cases. Indeed, if we set $\\mathbf{H} = N + E \/ \\ell k$ and $\\mathbf{Z} = N - E \/ \\ell k$, then we discover that the $\\brac{\\mathbf{H} , \\mathbf{Z}}$-weights of the $\\AKMSA{gl}{1}{1}$ currents and the extension fields $V_{n,\\ell}^{\\pm}$, $V_{-n,-\\ell}^{\\pm}$ precisely match the $\\brac{\\mathbf{H} , \\mathbf{Z}}$-weights of the adjoint representation of $\\SLSA{sl}{2}{1}$.\\footnote{Here, $\\mathbf{H}$ and $\\mathbf{Z}$ should be associated with the matrices $\\diag \\set{1,-1,0}$ and $\\diag \\set{1,1,2}$ in the defining representation of $\\SLSA{sl}{2}{1}$.} Moreover, we have\n\\begin{equation}\n\\func{\\mathbf{H}}{z} \\func{\\mathbf{H}}{w} = \\frac{2 \/ \\ell}{\\brac{z-w}^2} + \\ldots , \\qquad \\func{\\mathbf{Z}}{z} \\func{\\mathbf{Z}}{w} = \\frac{-2 \/ \\ell}{\\brac{z-w}^2} + \\ldots ,\n\\end{equation}\nand $\\func{\\mathbf{H}}{z} \\func{\\mathbf{Z}}{w}$ regular, which suggests that the extended algebra will be $\\AKMSA{sl}{2}{1}$ at level $1 \/ \\ell$.\n\nChecking this for the choice $\\ell = -2$ is easy. As $2n \\ell = -2$ is even, $\\alg{W}_{1\/2,-2}$ admits a superalgebra structure. Moreover, the fusion rules\n\\begin{equation}\n\\AffAtypMod{0,1} \\mathbin{\\times} \\AffAtypMod{0,1} = \\AffAtypMod{-1\/2,2}, \\qquad \\AffAtypMod{0,-1} \\mathbin{\\times} \\AffAtypMod{0,-1} = \\AffAtypMod{1\/2,-2}\n\\end{equation}\nimply that $\\alg{W}_{1\/2,-2}$ is a subalgebra of the extended algebra $\\alg{W}_{0,1}$ considered above. One readily checks that by taking normally-ordered products, the $\\beta \\gamma$ ghost fields of $\\alg{W}_{0,1}$ generate the bosonic subalgebra $\\AKMA{sl}{2}_{-1\/2} \\subset \\AKMSA{sl}{2}{1}_{-1\/2}$, the complex fermion gives the $\\AKMA{u}{1}$-subalgebra, and the mixed products yield the remaining fermionic currents. 
This establishes the superalgebra isomorphism $\\alg{W}_{1\/2,-2} \\cong \\AKMSA{sl}{2}{1}_{-1\/2}$.\n\nThe computation when $\\ell = 1$ is, however, more subtle because $2n \\ell = 1$ is odd, so $\\alg{W}_{1\/2,1}$ does not admit the structure of a superalgebra. To impose the correct parities on the extended algebra currents, we must adjoin an operator-valued function $\\mu$ which is required to satisfy\n\\begin{equation} \\label{eqCocycle}\n\\mu_{a,b} \\mu_{c,d} = (-1)^{ad} \\mu_{a+b,c+d}, \\qquad \\text{($a,b,c,d \\in \\mathbb{Z}$).}\n\\end{equation}\nNote that the algebra generated by these operators has unit $\\mu_{0,0}$. The currents are then given by\n\\begin{equation}\n\\begin{aligned}\n\\mathbf{E} &= +\\mu_{1,1} V_{1\/2,1}^+, \\\\\n\\mathbf{F} &= -\\mu_{-1,-1} V_{-1\/2,-1}^-,\n\\end{aligned}\n\\qquad\n\\begin{aligned}\n\\mathbf{H} &= N + E\/k, \\\\\n\\mathbf{Z} &= N - E\/k,\n\\end{aligned}\n\\qquad\n\\begin{aligned}\n\\mathbf{e}^+ &= -\\mu_{1,0} \\psi^+ \/ \\sqrt{k}, \\\\\n\\mathbf{f}^- &= +\\mu_{-1,0} \\psi^- \/ \\sqrt{k},\n\\end{aligned}\n\\qquad\n\\begin{aligned}\n\\mathbf{f}^+ &= \\mu_{0,-1} V_{-1\/2,-1}^+, \\\\\n\\mathbf{e}^- &= \\mu_{0,1} V_{1\/2,1}^-,\n\\end{aligned}\n\\end{equation}\nand routine computation now verifies that these currents indeed generate $\\AKMSA{sl}{2}{1}_1$.\n\nAs our final example, we briefly consider the case of extensions of conformal dimension $\\tfrac{3}{2}$. There are now three distinct choices, corresponding to $n=1$, $\\ell=1$, or $n=-\\tfrac{1}{4}$, $\\ell=2$, or $n=-1$, $\\ell=3$. The latter choice again results in an extended algebra which is a subalgebra of $\\alg{W}_{0,1}$ because\n\\begin{equation}\n\\AffAtypMod{0,1} \\mathbin{\\times} \\AffAtypMod{0,1} \\mathbin{\\times} \\AffAtypMod{0,1} = \\AffAtypMod{-1,3}.\n\\end{equation}\nBoth $\\alg{W}_{1,1}$ and $\\alg{W}_{-1,3}$ are superalgebras, while $\\alg{W}_{-1\/4,2}$ is not. We expect, however, that a modification similar to \\eqref{eqCocycle} will restore the superalgebra parity requirements. We will not analyse this in any detail as our interest in $\\Delta_{n,\\ell} = \\tfrac{3}{2}$ lies not with the full extended algebra, but rather with one of its subalgebras.\n\nWe start with the superalgebras $\\alg{W}_{1,1}$ and $\\alg{W}_{-1,3}$. 
Both $V_{-n,-\\ell}^+$ and $V_{n,\\ell}^-$ are bosonic and upon defining\n\\begin{equation}\n\\begin{gathered}\n\\mathsf{g}^+ = \\sqrt{\\frac{3 \\alpha \\brac{3 \\alpha - 1}}{2 \\mu_{\\ell}^{\\brac{0}}}} \\: V_{-n,-\\ell}^+, \\qquad \n\\mathsf{g}^- = \\sqrt{\\frac{3 \\alpha \\brac{3 \\alpha - 1}}{2 \\mu_{\\ell}^{\\brac{0}}}} \\: V_{n,\\ell}^-, \\\\\n\\mathsf{j} = -\\alpha \\partial X_{n-1\/2,\\ell}, \\qquad \n\\mathsf{t} = \\frac{\\alpha}{2} \\normord{\\partial X_{n-1\/2,\\ell} \\partial X_{n-1\/2,\\ell}} - \\frac{\\ell \\brac{\\ell + 1}}{2} \\frac{\\alpha \\brac{3 \\alpha - 1}}{\\alpha + 1} \\frac{\\normord{\\psi^+ \\psi^-}}{k},\n\\end{gathered}\n\\end{equation}\nwhere\n\\begin{equation}\n\\alpha = \\frac{1}{\\brac{2n-1} \\ell},\n\\end{equation}\nwe obtain the defining relations of the \\emph{Bershadsky-Polyakov algebra} $W_3^{\\brac{2}}$ \\cite{PolGau90,BerCon91}:\n\\begin{equation}\n\\begin{gathered}\n\\func{\\mathsf{g}^+}{z} \\func{\\mathsf{g}^-}{w} = \\frac{\\brac{K+1} \\brac{2K+3}}{\\brac{z-w}^3} + \\frac{3 \\brac{K+1} \\func{\\mathsf{j}}{w}}{\\brac{z-w}^2} + \\frac{3 \\func{\\normord{\\mathsf{j} \\mathsf{j}}}{w} + \\tfrac{3}{2} \\brac{K+1} \\func{\\partial \\mathsf{j}}{w} - \\brac{K+3} \\func{\\mathsf{t}}{w}}{z-w} + \\ldots , \\\\\n\\func{\\mathsf{j}}{z} \\func{\\mathsf{g}^{\\pm}}{w} = \\frac{\\pm \\func{\\mathsf{g}^{\\pm}}{w}}{z-w} + \\ldots , \\qquad \n\\func{\\mathsf{j}}{z} \\func{\\mathsf{j}}{w} = \\frac{\\brac{2K+3}\/3}{\\brac{z-w}^2} + \\ldots , \\\\\n\\func{\\mathsf{t}}{z} \\func{\\mathsf{g}^{\\pm}}{w} = \\frac{3}{2} \\frac{\\func{\\mathsf{g}^{\\pm}}{w}}{\\brac{z-w}^2} + \\frac{\\func{\\partial \\mathsf{g}^{\\pm}}{w}}{z-w} + \\ldots , \\qquad \n\\func{\\mathsf{t}}{z} \\func{\\mathsf{j}}{w} = \\frac{\\func{\\mathsf{j}}{w}}{\\brac{z-w}^2} + \\frac{\\func{\\partial \\mathsf{j}}{w}}{z-w} + \\ldots , \\\\\n\\func{\\mathsf{t}}{z} \\func{\\mathsf{t}}{w} = \\frac{-\\brac{2K+3} \\brac{3K+1} \/ 2 \\brac{K+3}}{\\brac{z-w}^4} + \\frac{2 \\func{\\mathsf{t}}{w}}{\\brac{z-w}^2} + \\frac{\\func{\\partial \\mathsf{t}}{w}}{z-w} + \\ldots\n\\end{gathered}\n\\end{equation}\nHere, the $\\AKMA{sl}{3}$-level $K = \\tfrac{3}{2} \\brac{\\alpha - 1}$ is $0$ for $\\alg{W}_{1,1}$ and $-\\tfrac{5}{3}$ for $\\alg{W}_{-1,3}$. The central charge of the $W_3^{\\brac{2}}$-subalgebra is in both cases $-1$.\n\nFor $\\alg{W}_{-1\/4,2}$, this procedure does not yield a Bershadsky-Polyakov algebra because $V_{-n,-\\ell}^+$ and $V_{n,\\ell}^-$ are, in this case, mutually fermionic. Rather, these fields generate a copy of the $\\mathcal{N} = 2$ superconformal algebra of central charge $-1$. Instead, we must consider the mutually bosonic fields $V_{n,\\ell}^+$ and $V_{-n,-\\ell}^-$. Taking\n\\begin{equation}\n\\mathsf{g}^{+} = \\sqrt{3} \\: V_{1\/4,-2}^-, \\quad \\mathsf{g}^{-} = \\sqrt{3} \\: V_{-1\/4,2}^+, \\quad \\mathsf{j} = -\\partial X_{1\/4,2}, \\quad \\mathsf{t} = \\frac{1}{2} \\normord{\\partial X_{1\/4,2} \\partial X_{1\/4,2}} - \\frac{1}{k} \\normord{\\psi^+ \\psi^-}\n\\end{equation}\nin particular, now leads to the Bershadsky-Polyakov algebra of level $0$ and central charge $-1$. 
(In contrast, $V_{n,\\ell}^+$ and $V_{-n,-\\ell}^-$ are fermionic in both $\\alg{W}_{1,1}$ and $\\alg{W}_{-1,3}$, generating copies of the $\\mathcal{N} = 2$ superconformal algebra with central charges $1$ and $-1$, respectively.)\n\n\\subsection{$W^{\\brac{2}}_N$-subalgebras}\n\nIn the previous section, we found the Bershadsky-Polyakov algebra $W^{\\brac{2}}_3$, at certain levels, appearing as a subalgebra of the extended algebras $\\alg{W}_{1,1}$, $\\alg{W}_{-1\/4,2}$ and $\\alg{W}_{-1,3}$. We now generalise this observation. The algebra $W^{\\brac{2}}_3$ is defined \\cite{PolGau90,BerCon91} as the Drinfel'd-Sokolov reduction of $\\AKMA{sl}{3}$ corresponding to the non-principal embedding of $\\SLA{sl}{2}$ in $\\SLA{sl}{3}$. Feigin and Semikhatov \\cite{Feigin:2004wb} found that it could also be realised as a subalgebra of $\\AKMSA{sl}{3}{1} \\oplus \\AKMA{u}{1}$ commuting with an $\\AKMA{sl}{3}$-subalgebra. They then studied a generalisation $W^{\\brac{2}}_N \\subset \\AKMSA{sl}{N}{1} \\oplus \\AKMA{u}{1}$ which commutes with the obvious $\\AKMA{sl}{N}$-subalgebra.\n\nWhen $N=1$, these generalisations reduce to the chiral algebra of the $\\beta \\gamma$ ghost system. For $N=2$, one gets $\\AKMA{sl}{2}$, and as mentioned above, $N=3$ recovers the Bershadsky-Polyakov algebra. The examples studied in \\secref{secExamples} therefore lead us to the plausible conjecture that the $W^{\\brac{2}}_N$ algebras of Feigin and Semikhatov may be realised, at least for certain levels, as subalgebras of certain of our extended algebras $\\alg{W}_{n,\\ell}$. We mention that there is a second construction of these $W^{\\brac{2}}_N$ algebras, but restricted to the critical level $K=-N$ (see \\eqref{eq:W_NC}), starting from the affine superalgebra $\\AKMSA{psl}{N}{N}$ at (critical) level $0$ \\cite{CGL}.\n\nFeigin and Semikhatov only computed the first few terms of the defining operator product expansions{} of $W^{\\brac{2}}_N$. We will compare these terms with those obtained from our extended algebras, finding decidedly non-trivial agreement. Our findings will, however, be stated as conjectures because the full operator product expansion{} of $W^{\\brac{2}}_N$ is not currently known. $W^{\\brac{2}}_N$ is generated by two fields $\\mathcal{E}^{\\pm}_N$ of dimension $\\tfrac{1}{2} N$, a $\\AKMA{u}{1}$-current $\\mathcal{H}_N$ and an energy-momentum tensor $\\mathcal{T}_N$. 
The defining expansions are:\n\\begin{equation}\\label{eq:W2nope}\n\\begin{split}\n\\func{\\mathcal{H}_N}{z} \\func{\\mathcal{H}_N}{w} &= \\frac{\\brac{N-1} K\/N + N-2}{\\brac{z-w}^2} + \\ldots , \\qquad\n\\func{\\mathcal{H}_N}{z} \\func{\\mathcal{E}^{\\pm}_N}{w} = \\pm \\frac{\\func{\\mathcal{E}^{\\pm}_N}{w}}{z-w} + \\ldots , \\\\\n\\func{\\mathcal{E}^+_N}{z} \\func{\\mathcal{E}^-_N}{w} &= \\frac{\\lambda_{N-1}}{\\brac{z-w}^N} + \\frac{N \\lambda_{N-2} \\func{\\mathcal{H}_N}{w}}{\\brac{z-w}^{N-1}} - \\frac{\\brac{K+N} \\lambda_{N-3} \\func{\\mathcal{T}_N}{w}}{\\brac{z-w}^{N-2}} \\\\\n&\\mspace{-20mu} + \\frac{\\lambda_{N-3}}{\\brac{z-w}^{N-2}} \\sqbrac{\\frac{N \\brac{N-1}}{2} \\func{\\normord{\\mathcal{H}_N \\mathcal{H}_N}}{w} + \\frac{N \\bigl( \\brac{N-2} \\brac{K+N-1} - 1 \\bigr)}{2} \\func{\\partial \\mathcal{H}_N}{w}} + \\ldots\n\\end{split}\n\\end{equation}\nHere, $\\lambda_m = \\prod_{i=1}^m \\bigl( i \\brac{K+N-1} - 1 \\bigr)$, $K$ is the level of the $W^{\\brac{2}}_N$ algebra, and the central charge is given by\n\\begin{equation} \\label{eq:W_NC}\nC = -\\frac{\\bigl( \\brac{K+N} \\brac{N-1} - N \\bigr) \\bigl( \\brac{K+N} \\brac{N-2} N - N^2 + 1 \\bigr)}{K+N}.\n\\end{equation}\n\nSuppose first that $2n\\ell$ is even, so we can consider the bosonic subalgebra generated by the fields\n\\begin{equation}\n\\mathcal{E}^+_N = \\sqrt{\\frac{\\lambda_{N-1}}{\\mu_{\\ell}^{\\brac{0}}}} \\: V_{-n,-\\ell}^+, \\qquad \\mathcal{E}^-_N = \\sqrt{\\frac{\\lambda_{N-1}}{\\mu_{\\ell}^{\\brac{0}}}} \\: V_{n,\\ell}^-.\n\\end{equation}\nEvaluating the operator product expansion{} of these fields using \\eqref{eqnGenExtAlgOPEs} and comparing with \\eqref{eq:W2nope}, we find that the first two singular terms agree provided that $N = 2 \\Delta_{n,\\ell}$ and $\\mathcal{H}_N = -\\partial X_{n-1\/2,\\ell} \/ \\brac{2n-1} \\ell$. This also fixes the $W^{\\brac{2}}_N$ level $K$. Comparing the third terms fixes the form of the $W^{\\brac{2}}_N$ energy-momentum tensor $\\mathcal{T}_N$ and $\\mathcal{H}_N$ is then verified to have dimension $1$. However, the $\\mathcal{E}^{\\pm}_N$ only have the required dimension $\\tfrac{1}{2} N = \\Delta_{n,\\ell}$ if $n=1$ or $2n+\\ell=1$.\\footnote{There is a third solution, $\\Delta_{n,\\ell} + \\ell + 1 = 0$, but this is invalid as we require $\\ell, \\Delta_{n,\\ell} > 0$.} These constraints also let us check that $\\mathcal{T}_N$ is an energy-momentum tensor and the central charge turns out to be $C=-1$. When $2n\\ell$ is odd, we instead consider the bosonic subalgebra generated by\n\\begin{equation}\n\\mathcal{E}^+_N = \\sqrt{\\frac{\\lambda_{N-1}}{\\mu_{\\ell-1}^{\\brac{0}}}} V_{-n,-\\ell}^-, \\qquad \\mathcal{E}^-_N = \\sqrt{\\frac{\\lambda_{N-1}}{\\mu_{\\ell-1}^{\\brac{0}}}} \\: V_{n,\\ell}^+.\n\\end{equation}\nA similar analysis reveals that this subalgebra agrees with $W^{\\brac{2}}_N$ up to the first three terms in the operator product expansions{} provided that $N = 2 \\Delta_{n,\\ell}$ and either $\\ell = 1$ or $\\ell = 2$.\\footnote{Taking $n = -\\tfrac{1}{2} \\brac{\\ell + 1}$ also satisfies these requirements, but then $2n\\ell$ is necessarily even. 
Moreover, there is again a solution of the form $\\Delta_{n,\\ell} - \\ell + 1 = 0$, but it is easy to check that it leads to the wrong operator product expansion{} of $\\mathcal{T}_N$ with itself.} In the first case, $C=1$; in the second, $C=-1$.\n\nWe summarise our findings as follows:\n\\begin{conjecture}\nThe extended algebra $\\alg{W}_{n,\\ell}$ has a subalgebra isomorphic to $W^{\\brac{2}}_N$ of level $K$ when:\n\\begin{itemize}\n\\item $\\ell = 1$ and $n = 0,1,2,\\ldots$ Then, $N = 2n + 1$ and $K = -2 \\brac{n-1} \\brac{2n+1} \/ \\brac{2n-1}$.\n\\item $\\ell = 1$ and $n = \\tfrac{1}{2}, \\tfrac{3}{2}, \\tfrac{5}{2}, \\ldots$ Then, $N = 2n + 1$ and $K = - \\brac{2n^2 - 1} \/ n$.\n\\item $\\ell = 2$ and $n = -\\tfrac{3}{4}, -\\tfrac{1}{4}, \\tfrac{1}{4}, \\ldots$ Then, $N = 4 \\brac{n+1}$ and $K = -2 \\brac{n+1} \\brac{4n+1} \/ \\brac{2n+1}$.\n\\item $n = -\\tfrac{1}{2} \\brac{\\ell - 1}$ and $\\ell = 1,2,3,\\ldots$ Then, $N = \\ell$ and $K = -\\brac{\\ell^2 - \\ell - 1} \/ \\ell$.\n\\end{itemize}\n\\end{conjecture}\n\\noindent Note that the examples considered in \\secref{secExamples} exhaust the $W^{\\brac{2}}_N$-subalgebras with $N \\leqslant 3$ except for $\\ell = 2$ and $n = -\\tfrac{3}{4}$. This latter case is excluded if one insists, as we did with \\eqref{eqdim}, that the operator product expansion{} of $\\mathcal{E}^{\\pm}$ with itself is regular. We mention that Feigin and Semikhatov actually computed the first \\emph{four} terms of the $W^{\\brac{2}}_N$ operator product expansions{}, finding in the fourth term a Virasoro primary field $\\mathcal{W}_N$ of dimension $3$ and $\\mathcal{H}_N$-weight $0$. We have extended \\eqnTref{eqnVertexOPE}{eqnCompositeSFOPEs}{eqnGenExtAlgOPEs} to compute $\\mathcal{W}_N$ in our extended algebras and have checked that for each $\\ell$ and $n$ appearing in our conjecture, this field indeed has the required properties. It follows that our conjecture has been verified for all $N \\leqslant 4$.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction.}\nA recent analysis of the Low Energy Neutrino Anomaly (LNA) \\cite{RNA11},\\cite{GiuLev10} led to a \nchallenging claim that this anomaly can be explained in terms of a new \nfourth neutrino with a much larger mass\nsquared difference. Assuming that the neutrino mass eigenstates are non \ndegenerate one finds\\cite{RNA11}\\cite{GiuLev10}:\n \\barr\n \\Delta m^2_{31}&\\approx& \\Delta m^2_{32}=|m_3^2-m_2^2|,\\nonumber\\\\ \\Delta m^2_{41}&\\approx& \\Delta m^2_{42}=|m_4^2-m_2^2|>1.5\\mbox{(eV)}^2\n \\label{deltam}\n \\earr\nwith a mixing angle:\n\\beq \n\\sin^2{2 \\theta_{14}}=0.14\\pm 0.08 (95\\%).\n\\eeq\n\nIt is obvious that this new neutrino should contribute to the oscillation \nphenomenon. In the present paper we will assume that the new neutrino is sterile, that is it does not participate in weak interaction. Even then, however, it has an effect on neutrino oscillations since it will tend to decrease the electron neutrino flux. This makes the analysis of oscillation experiments more \nsophisticated. In all the previous experiments the oscillation length is \nmuch larger than the size of the detector. So one is able to see the effect only \nif the detector is placed in the right distance from the source. It is, \nhowever, possible to design an experiment with an oscillation length of the \norder of the size of the detector, as it was proposed in \\cite{VERGIOM06},\\cite{VERNOV10}. This is \nequivalent to many standard experiments done simultaneously. 
\nThe main \nrequirements are as follows \\cite{VERNOV10}:\n\\begin{itemize}\n\\item The neutrinos should have as low as possible energy so that the oscillation \nlength can be minimized. At the same time it should not be too low, so that \nthe neutrino-electron cross section is sizable.\n\\item A monoenergetic neutrino source has the advantage that some of the features \nof the oscillation pattern are not washed out by the averaging over a \ncontinuous neutrino spectrum.\n\\item The life time of the source should be suitable for the experiment to be \nperformed. If it is too short, the time available will not be adequate for \nthe execution of the experiment. If it is too long, the number of counts \nduring the data taking will be too small. Then one will face formidable \nbackgrounds and\/or large experimental uncertainties.\n\\item The source should be cheaply available in large quantities. Clearly a \ncompromise has to be made in the selection of the source.\n\\end{itemize}\nAt low energies the only neutrino detector, which is sensitive to neutrino \noscillations, is one, which is capable of detecting recoiling electrons\\cite{VERGIOM06} or nuclei \\cite{VGN-NC11}:\n\nThe aim of this article is to show that the existence of a new fourth \nneutrino can be verified experimentally by the direct measurements\nof the oscillation curves for the monoenergetic neutrino-electron \nscattering. It can be done point-by-point within the dimensions of the detector, \nthus providing what we call neutrino oscillometry \\cite{VERNOV10},\\cite{VERGIOMNOV}. \n\nThe electron neutrino, produced in weak interactions, can be expressed in \nterms of the standard mass eigenstates as follows:\n\\barr\n\\nu_e&=&\\cos_{\\theta_{14}}\\left[\\cos{\\theta_{12}} \\cos{\\theta_{13}}~\\nu_1+\\sin{\\theta_{12}} \\cos{\\theta_{13}}\\,\\nu_2+\\right .\\nonumber\\\\\n&&\\left . \\sin{\\theta_{13}}~ e^{i\\delta} \\nu_3 \\right]+\\sin{\\theta_{14}}e^{i\\delta_4} \\nu_4\n\\label{nue},\n\\earr\n where $\\sin{ \\theta_{13}}$ is a small quantity constrained by the\nCHOOZ experiment and $\\sin{ \\theta_{14}}$ is the small mixing angle proposed for the resolution of LNA\\cite{RNA11},\\cite{GiuLev10}.\n We can apply a four neutrino oscillation analysis to write, under the approximations of Eq. \\ref{deltam},\n the $\\nu_e$ disappearance oscillation probability as follows:\n\\barr\n P(\\nu_e \\rightarrow \\nu_e )&\\approx&1- \n \\left [\n \\sin ^2 {2\\theta_{12}}\n\\sin^2 {(\\pi \\frac{L}{L_{21}})}\\right .\\nonumber\\\\\n&+& \\left . \\sum_{n=3}^4 \\sin ^2{2\\theta _{1n}}\\sin^2{ (\\pi \\frac{L}{L_{n2}})} \\right]\n \\label{disap}\n\\earr\nwith\n\\beq \nL_{ij}=\\frac{4 \\pi E_{\\nu}}{m_i^2-m_j^2}.\n\\label{OscLength}\n\\eeq\nSince the oscillation lengths are very different, $L_{42}< 1.5\\mbox{(eV)}^{2} $\\cite{RNA11}, i.e. very \nlarge by neutrino mass standards, the oscillation length can be quite small \neven for quite energetic neutrinos. \n\n\\begin{table}[htbp]\n\\caption{\nProposed candidates for a new neutrino oscillometry at the \nspherical gaseous TPC. \nTabulated nuclear data have been taken from \\cite{AUDI03}, other data have been \ncalculated in this work (see the text for details. 
The mass of the source was assumed to be 0.1Kg).\n\\label{table1}}\n\\begin{center}\n\\begin{tabular}{|c|c|c|c|c|c|c|c|c|}\n\\hline\n\\hline\n& & & & & & &\\\\\nNucli-& \n$T_{1\/2}$ \\par & \n$Q_{EC}$ & \n$E_{\\nu }$ & \n$L_{32}$& \n$L_{42}$ & \n$\\sigma(0,x) $& \n$N_{\\nu }$ \\\\\n& & & & && $10^{-45}$&\\\\\nde& \n(d)& \n(keV)& \n(keV)& \n(m)& \n(m)& \ncm$^2$& \n(s$^{-1})$ \\\\\n\\hline\n$^{37}$Ar& \n35 & \n814& \n811& \n842& \n1.35& \n5.69& \n$3.7\\times 10^{17}$ \\\\\n\\hline\n\n$^{51}$Cr& \n27.7 & \n753& \n747& \n742& \n1.23& \n5.12& \n$4.1\\times 10^{17}$ \\\\\n\\hline\n$^{65}$Zn& \n244 & \n1352& \n1343& \n1330& \n2.22& \n10.5& \n$3.0\\times 10^{16}$ \\\\\n\\hline\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\nIn other words, unlike the case involving $\\theta_{13}$ previously discussed \n\\cite{VERGIOM06},\\cite{VERNOV10},\\cite{VERGIOMNOV}, one can now choose much higher neutrino energy sources and thus \nachieve much higher cross sections. Thus our best candidates, see in Table \\ref{table1}, are \nnuclides, which emit monoenergetic neutrinos with energies higher than many \nhundreds of keV. Columns 2 and 3 show the decay characteristics of the \ncorresponding nuclides \\cite{NDSH}. The neutrino energies in column 4 have been \ncalculated by using equation (\\ref{Eq12}) taking $Q_{EC}$ from \\cite{AUDI03} and $B_{i}$ \nfrom \\cite{LARKINS}. For these nuclides the capture is strongly predominant between the \nground states, thus $E^{\\ast }$ =0. Columns 5 and 6 give the oscillation \nlengths for the third and the fourth neutrino states. One can see that \n$L_{32}$ and $L_{42}$ are very different and that the two oscillation curves \ncan be disentangled.\nThe maximum energy of the recoiling electron can be \ncalculated by use of Eq. (2.4) in \\cite{VERNOV10}. Column 7 shows the neutrino-electron \ncross-sections calculated by the use of formula (\\ref{sigmatot2}). The last column presents \nthe neutrino source intensities which can be reasonably produced by \nirradiation of the corresponding targets of stable nuclides in the high flux \nnuclear reactors. \n\nThe goal of the experiment is to scan the monoenergetic neutrino electron \nscattering events by measuring the electron recoil counts in a function of \ndistance from the neutrino source prepared in advance at the reactor\/s. This \nscan means point-by-point determination of scattering events along the \ndetector dimensions within its position resolution.\n\nIn the best cases these events can be observed as a smooth curve, which \nreproduces the neutrino disappearance probability.\nIt is worthwhile to note again that the \noscillometry is suitable for monoenergetic neutrino, since it deals with a \nsingle oscillation length $L_{32}$ or $L_{42}$. This is obviously not a case \nfor antineutrino, since, in this instance, one extracts only an effective \noscillation length. Thus some information may be lost due to the folding \nwith the continuous neutrino energy spectrum.\n\nTable \\ref{table1} clearly shows that the oscillation lengths for a new neutrino \nproposed in \\cite{RNA11}, \\cite{GiuLev10} are much smaller compared to those previously considered \\cite{VERGIOMNOV} in connection with $\\theta_{13}$. They can thus be directly measured within \nthe dimensions of detector of reasonable sizes. One of the very promising \noptions could be the STPC proposed \nin \\cite{VERGIOM06}. 
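The oscillation lengths in columns 5 and 6 of Table \\ref{table1} follow directly from Eq. (\\ref{OscLength}), and the shape of the expected oscillometry curve from Eq. (\\ref{disap}). The short sketch below is purely illustrative: it assumes $\\Delta m^2_{32} \\approx 2.5\\times 10^{-3}$ eV$^2$ and $\\Delta m^2_{21} \\approx 7.5\\times 10^{-5}$ eV$^2$ together with representative values of $\\sin^2{2\\theta_{12}}$ and $\\sin^2{2\\theta_{13}}$ (none of which are quoted in the text), and takes $\\sin^2{2\\theta_{14}} = 0.14$ and $\\Delta m^2_{42} = 1.5$ (eV)$^2$ as above.
\\begin{verbatim}
import numpy as np

hbar_c = 197.327e-15                 # MeV m
dm2 = {'21': 7.5e-5 * 1e-12,         # assumed solar splitting, MeV^2
       '32': 2.5e-3 * 1e-12,         # assumed atmospheric splitting, MeV^2
       '42': 1.5e-12}                # |m_4^2 - m_2^2| = 1.5 eV^2, MeV^2

def L_osc(E_MeV, key):
    # L_ij = 4 pi E_nu / (m_i^2 - m_j^2), converted to metres.
    return 4.0 * np.pi * hbar_c * E_MeV / dm2[key]

for name, E in (('37Ar', 0.811), ('51Cr', 0.747), ('65Zn', 1.343)):
    print(name, round(L_osc(E, '32')), 'm ', round(L_osc(E, '42'), 2), 'm')
# e.g. 51Cr: L_32 ~ 7.4e2 m and L_42 ~ 1.2 m, close to columns 5 and 6 of Table 1.

def survival(L_m, E_MeV, s12=0.86, s13=0.09, s14=0.14):
    # nu_e survival probability (disappearance formula above), representative mixings.
    p = 1.0
    for s2, key in ((s12, '21'), (s13, '32'), (s14, '42')):
        p -= s2 * np.sin(np.pi * L_m / L_osc(E_MeV, key))**2
    return p

for L in (0.3, 0.6, 1.2, 2.4, 4.0):  # distances inside a 4 m sphere, metres
    print(L, 'm :', round(survival(L, 0.747), 3))
# dips of depth ~ sin^2(2 theta_14) appear on the metre scale set by L_42.
\\end{verbatim}
Curves of this kind are what the spherical detector described below is intended to trace point by point.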
If necessary, a spherical Micromegas based on the micro-Bulk technology \\cite{ADRIAM10}, to be developed in the near future, can be employed in the STPC. In fact, a large detector 1.3 m in diameter has already been built and is in operation at the LSM (Laboratoire Souterrain de Modane) underground laboratory. The device provides a sub-keV energy threshold and good energy resolution. A thin 50 micron polyamide foil will be used as the bulk material to fabricate the detector structure. This detector provides an excellent energy resolution, can reach high gains at high gas pressure (up to 10 bar) and has the advantage that its radioactivity level \\cite{CEBRIAN10} should fulfill the requirements of the proposed experiment.

In this spherical chamber, of modest radius (a few meters), the shielded neutrino source can be placed at the center of the sphere. The details of the shielding, such as the amount and type of material surrounding the neutrino source required to reach an appropriate background level, are under study. The electron detector is placed around the center of the smaller sphere, of radius $r \\approx 1$ m. The sphere volume outside the detector region is filled with a gas (a noble gas such as Ar or, preferably, Xe, which has a higher number of electrons). The recoil electrons are guided by the strong electrostatic field towards the Micromegas detector \\cite{Giomataris},\\cite{GIOMVER08}. This type of device has the advantage of precise position determination (better than 0.1 m) and of detecting very low electron recoils in 4$\\pi$ geometry (down to a few hundred eV), which is well suited to the nuclides of Table \\ref{table1}.

Assuming a gas target under pressure $P$ and temperature $T_0$, the number of electrons in the STPC is given by
\\beq
n_e= Z\\frac{P}{kT_0}=4.4\\times 10^{27}\\,m^{-3}\\, \\frac{P}{10~{\\mbox{Atm}}}\\,\\frac{Z}{18}\\,\\frac{300}{T_0},
\\eeq
where $Z$ is the atomic number, while $P$ and $T_{0}$ stand for the gas pressure and temperature.

Since in addressing the neutrino anomaly one can employ sources of rather energetic neutrinos, of hundreds of keV, one expects large cross sections. Therefore a modest-size source, so that it can easily fit inside the inner sphere of the detector, and a modest-size detector, say of radius 4 m and pressure 10 bar, can be adequate. We will thus employ these parameters in this calculation and assume a running time equal to the lifetime of the source. The result obtained for one of the candidates, the nuclide $^{51}$Cr, is shown in Fig. \\ref{fig1}. This nuclide has previously been considered for oscillation measurements \\cite{VERNOV10}, \\cite{RNA11}, \\cite{GiuLev10}.

As can be seen from this figure, the oscillometry curves are well disentangled for different values of the mixing angle $\\theta_{14}$, which demonstrates the feasibility of this method for establishing the existence of the new neutrino.

The sensitivity to $\\theta_{14}$ can also be deduced from the total number of events in the fiducial volume of the detector.
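(As an aside, the numerical prefactor in the electron-density formula above is easy to check; the short sketch below assumes ideal-gas behaviour and standard values of the constants.)
\\begin{verbatim}
k_B = 1.380649e-23             # J/K
atm = 1.01325e5                # Pa
Z, P, T0 = 18, 10*atm, 300.0   # argon at 10 atm and 300 K

n_e = Z * P / (k_B * T0)       # electrons per m^3 for an ideal gas
print('%.2e' % n_e)            # 4.40e+27, the prefactor quoted above
\\end{verbatim}
We now estimate this total number of events.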
Since in resolving the neutrino anomaly one can employ sources with quite
high neutrino energies of hundreds of keV, one expects large cross sections.
Therefore a source of modest size, small enough to fit easily inside the
inner sphere of the detector, and a detector of modest size, say of radius 4~m
and pressure 10~bar, can be adequate. We will thus employ these
parameters in this calculation and assume a running time equal to the
lifetime of the source. The result obtained for one of the candidates, the nuclide
$^{51}$Cr, is shown in Fig. \ref{fig1}. This nuclide has previously been considered for oscillation measurements \cite{VERNOV10}, \cite{RNA11}, \cite{GiuLev10}.

As can be seen from this figure, the oscillometry curves are well
disentangled for different values of the mixing angle $\theta_{14}$, which demonstrates the
feasibility of this method for establishing the existence of the new neutrino.

The sensitivity for the determination of $\theta_{14}$ can also be deduced from the
total number of events in the fiducial volume of the detector. After integration
of equation (\ref{eventsph}) over $L$ from 0 to 4~m it can be written in the form:
\barr
N_{0} &=& A + B \sin^{2}(2\theta_{14}),\quad A=N_{\nu}\, n_e\, R_0\, \sigma(0,x),\nonumber\\
\frac{B}{A} &=& -\left[\frac{1}{2}-\frac{0.067}{R_0}\, x \sin\left(\frac{7.45\, R_0}{x}\right) \right].
\label{Eq14}
\earr
Thus for 55 days of measurements with $^{51}$Cr we find
$A=1.59\times 10^{4}$ and $B = -7.56 \times 10^{3}$.

Taking these values we obtain a sensitivity of $\sin^{2}(2\theta_{14}) = 0.05$
at the 99\% confidence level, reachable after two months of data taking in the STPC. This
value is quite sufficient to assess the validity of the new-neutrino hypothesis;
the simple counting estimate behind this number is sketched below.
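The following Python sketch illustrates this counting estimate, using the values of $A$ and $B$ quoted above; the two-sided 99\% confidence factor of 2.58 standard deviations and the neglect of backgrounds and systematic uncertainties are simplifying assumptions of this sketch.
\begin{verbatim}
from math import sqrt

# Counting-statistics sketch of the sensitivity estimate based on Eq. (14).
# A and B are the values quoted in the text for 55 days with a 51Cr source.
A = 1.59e4          # expected events for sin^2(2*theta_14) = 0
B = -7.56e3         # oscillation-induced deficit coefficient

def expected_events(sin2_2theta14: float) -> float:
    """Total events in the fiducial volume according to Eq. (14)."""
    return A + B * sin2_2theta14

# Smallest mixing for which the deficit exceeds 2.58 standard deviations
# of the no-oscillation expectation (purely statistical, no backgrounds).
sensitivity = 2.58 * sqrt(A) / abs(B)
print(f"events at sin^2(2theta_14)=0.05 : {expected_events(0.05):.0f}")
print(f"statistical sensitivity         : sin^2(2theta_14) ~ {sensitivity:.3f}")
# prints ~15522 events and a sensitivity of ~0.043, consistent with the
# value of 0.05 quoted in the text.
\end{verbatim}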
\begin{figure}[!ht]
 \begin{center}
 \includegraphics[width=3.3in,height=2.2in]{fig1.eps}
\hspace*{-0.0cm} {$L \rightarrow$ meters}\\
 \caption{
Oscillation spectra for $\sin^{2}(2\theta_{14})= 0.07$, 0.17 and 0.27 (correspondingly colored curves), shown with their 1$\sigma$ statistical corridors. The values on the y-axis are obtained for 55 days of measurement with a $^{51}$Cr source and an Ar target under a pressure of 10 bar. In all cases distances up to $1.5\times L_{42}$ are included. The pattern is repeated twice up to the radius of the sphere, $R_{0}= 4$~m.
 }
 \label{fig1}
 \end{center}
 \end{figure}

The results presented in Fig. \ref{fig1}
do not take into account the electron energy threshold of 0.1~keV,
which is negligible in comparison with the neutrino energy and the average
electron recoil energy. We also neglected the solar-neutrino background of 2 counts
per day derived from the measured Borexino results \cite{BOREXINO8}, \cite{BOREXINO9}. It is obvious that the STPC should be installed in an underground laboratory, surrounded by an appropriate shield against the rock radioactivity.

In conclusion, we propose to use the oscillometry method for the direct
observation of a fourth neutrino. The calculations and analysis
show that neutrino oscillometry with the gaseous STPC is a powerful tool
for the identification of a new neutrino in neutrino-electron scattering.
Since the expected mass difference for this neutrino is rather large, the
corresponding oscillation length at neutrino energies of about 1~MeV is small enough to fit
within the dimensions of a spherical detector with a radius of a
few meters. Neutrino oscillometry can be implemented in this detector
with the use of intense monochromatic neutrino sources, which can be
placed at the origin of the sphere and suitably shielded. The gaseous STPC with Micromegas
readout has the great advantages of 4$\pi$ geometry, very good position
resolution (better than 0.1~m) and a very low energy threshold ($\approx 100$~eV).
The most promising candidates for oscillometry have been
considered. The sensitivity for one of them, $^{51}$Cr, to the mixing angle
$\theta_{14}$ is estimated as $\sin^{2}(2\theta_{14}) = 0.05$ at the 99\%
confidence level, which can be reached after two months of data taking. This value can be pushed further down by using renewable sources. The observation of the oscillometry curve suggested in this work would be a
definite manifestation of the existence of a new type of neutrino, like the one
recently proposed by the analysis of the low energy neutrino anomaly.

The help of D. Nesterenko in the preparation of this manuscript is gratefully
acknowledged.