diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzejns" "b/data_all_eng_slimpj/shuffled/split2/finalzzejns" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzejns" @@ -0,0 +1,5 @@ +{"text":"\n\\section{Introduction}\n\n\nLarge pre-trained Transformers have yielded remarkable performance on abstractive summarization~\\cite{liu-lapata-2019-text,lewis-etal-2020-bart,zhang2020pegasus} with impeccable fluency, yet their summaries often contain factually inconsistent content~\\cite{maynez-etal-2020-faithfulness,zhang-etal-2020-optimizing,goyal-durrett-2020-evaluating}, even for state-of-the-art models.\nThree types of remedies have been proposed: running a separately learned error correction component~\\cite{dong-etal-2020-multi}, removing noisy training samples~\\cite{nan-etal-2021-entity,goyal-durrett-2021-annotating}, and modifying the Transformer architecture~\\cite{huang-etal-2020-knowledge,zhu-etal-2021-enhancing}. \nYet they either rely on heuristically created data for error handling, falling short of generalization, or require learning a large number of new parameters, and summary informativeness is often sacrificed. \n\n\n\\begin{figure}[t]\n \\centering\n \\small\n \\setlength{\\tabcolsep}{3pt}\n \\begin{tabular}{p{0.47\\textwidth}}\n \\toprule\n \\textbf{XSum Article}: The Fermanagh MLA Phil Flanagan tweeted after Tom Elliott appeared on a BBC radio programme in May 2014. $\\ldots$\n ``I wonder if he will reveal how many people he harassed and shot as a member of the UDR.''$\\ldots$ \\\\\n \\midrule\n \\textbf{Contrastive learning} (our method): A Sinn F\u00e9in MLA has been ordered to apologise and pay compensation to a former member of the \\hlc[yellow!70]{ Ulster Defence Regiment} (UDR). \\\\\n \\midrule\n \\textbf{Cross-entropy}: A Sinn F\u00e9in MLA has agreed to pay compensation to a former \\hlc[cyan!40]{Ulster Unionist Party} (UDR) MP after he tweeted that he had harassed and shot people as a member of the party. \\\\\n \\midrule\n \\textbf{Entailment reranking}: A Sinn F\u00e9in MLA has agreed to pay compensation to a former \\hlc[cyan!40]{Ulster Unionist Party} (UDR) councillor for a tweet he sent about him. \\\\\n \\midrule\n \\textbf{Unlikelihood}: An MLA has been ordered to apologise and pay compensation to a former \\hlc[cyan!40]{loyalist} MP for a remark he made about him while serving in \\hlc[cyan!40]{the Ministry of Defence}. \\\\\n \n \\bottomrule\n \\end{tabular}\n \n \\caption{\n Sample article and system summaries by different methods. \n Our contrastive learning model trained on low confidence system outputs correctly generates the \\hlc[yellow!70]{full name}.\n Comparisons using cross-entropy loss,\n beam reranking by entailment scores~\\cite{kryscinski-etal-2020-evaluating}, and unlikelihood objective~\\cite{Welleck2020Neural} over negative samples all produce \\hlc[cyan!40]{unfaithful content}.\n }\n \\label{fig:intro_sample}\n\\end{figure}\n\nOur goal is to train abstractive summarization systems that generate both faithful and informative summaries in an end-to-end fashion.\nWe observe that, while the commonly used maximum likelihood training optimizes over references, there is no guarantee for the model to distinguish references from incorrect generations~\\cite{Holtzman2020The,Welleck2020Neural}. \nTherefore, potential solutions reside in designing new learning objectives that can effectively inform preferences of factual summaries over incorrect ones. 
\n\nConcretely, we hypothesize that including factually inconsistent summaries (i.e., \\textit{negative samples}) for training, in addition to references (i.e., \\textit{positive samples}), let models become better at differentiating these two types of summaries. \nAlthough using negative samples has been effective at text representation learning, e.g., word2vec~\\cite{mikolov2013distributed} and BERT~\\cite{devlin-etal-2019-bert}, there exist two major challenges for it to succeed in concrete language tasks. \nFirst, \\textit{a suitable training objective} is critical to avoid performance degradation~\\cite{saunshi2019theoretical}. \nSecond, it is nontrivial to \\textit{construct ``natural'' samples} that mimic the diverse errors made by state-of-the-art systems that vary in words and syntax~\\cite{goyal-durrett-2021-annotating}. \n\nTo address both challenges, we first propose a new framework, \\textbf{\\textsc{CLIFF}}, that uses \\underline{c}ontrastive \\underline{l}earning for \\underline{i}mproving \\underline{f}aithfulness and \\underline{f}actuality of the generated summaries.\\footnote{Our code and annotated data are available at \\url{https:\/\/shuyangcao.github.io\/projects\/cliff_summ}.} \nContrastive learning (CL) has obtained impressive results on many visual processing tasks, such as image classification~\\cite{khosla2020supervised,pmlr-v119-chen20j} and synthesis~\\cite{park2020contrastive,zhang2021cross}. \nIntuitively, CL improves representation learning by compacting positive samples while contrasting them with negative samples.\nHere, we design a task-specific CL formulation that teaches a summarizer to expand the margin between factually consistent summaries and their incorrect peers.\n\nMoreover, we design four types of strategies with different variants to construct negative samples by \\textit{editing reference summaries} via rewriting entity-\/relation-anchored text, and using \\textit{system generated summaries} that may contain unfaithful errors. \nImportantly, these strategies are inspired by our new annotation study on errors made by state-of-the-art summarizers---models fine-tuned from BART~\\cite{lewis-etal-2020-bart} and PEGASUS~\\cite{zhang2020pegasus}---on two benchmarks: XSum~\\cite{narayan-etal-2018-dont} and CNN\/DailyMail (CNN\/DM)~\\cite{NIPS2015_5945}. \n\nWe fine-tune pre-trained large models with our contrastive learning objective on XSum and CNN\/DM. \nResults based on QuestEval~\\cite{scialom2020QuestEval}, a QA-based factuality metric of high correlation with human judgments,\nshow that our models trained with different types of negative samples uniformly outperform strong comparisons, including using a summarizer with post error correction and reranking beams based on entailment scores to the source.\nMoreover, compared with unlikelihood training method that penalizes the same negative samples~\\cite{Welleck2020Neural}, our summaries also obtain consistently better QuestEval scores. \nHuman evaluation further confirms that our models consistently reduce both extrinsic and intrinsic errors over baseline across datasets. \n\n\\section{Related Work}\n\\label{sec:related_work}\n\n\n\\paragraph{Factuality Improvement and Evaluation.} \nNeural abstractive summaries often contain unfaithful content with regard to the source~\\cite{falke-etal-2019-ranking}. To improve summary factuality, three major types of approaches are proposed. 
First, a separate correction model is learned to fix errors made by the summarizers~\\cite{zhao-etal-2020-reducing,chen-etal-2021-improving}, including replacing entities absent from the source~\\cite{dong-etal-2020-multi} or revising all possible errors~\\cite{cao-etal-2020-factual}. \nThe second type targets at modifying the sequence-to-sequence architecture to incorporate relation triplets~\\cite{cao2018faithful}, knowledge graphs~\\cite{zhu-etal-2021-enhancing}, and topics~\\cite{aralikatte-etal-2021-focus} to inform the summarizers of article facts. Yet additional engineering efforts and model retraining are often needed. \nFinally, discarding noisy samples from model training has also been investigated~\\cite{nan-etal-2021-entity,goyal-durrett-2021-annotating}, however, it often leads to degraded summary informativeness. \nIn comparison, our contrastive learning framework allows the model to be end-to-end trained and does not require model modification, thus providing a general solution for learning summarization systems. \n\n\n\n\nAlongside improving factuality, we have also witnessed growing interests in automated factuality evaluation, since popular word-matching-based metrics, e.g., ROUGE, correlate poorly with human-rated factual consistency levels~\\cite{gabriel-etal-2021-go,fabbri2021summeval}. \nEntailment-based scorers are designed at summary level~\\cite{kryscinski-etal-2020-evaluating} and finer-grained dependency relation level~\\cite{goyal-durrett-2020-evaluating}. \nQA models are employed to measure content consistency by reading the articles to answer questions generated from the summaries~\\cite{wang-etal-2020-asking,durmus-etal-2020-feqa}, or considering the summaries for addressing questions derived from the source~\\cite{scialom-etal-2019-answers}.\nThough not focusing on evaluation, our work highlights that models can produce a significant amount of world knowledge which should be evaluated differently instead of as extrinsic hallucination~\\cite{maynez-etal-2020-faithfulness}. We also show that world knowledge can possibly be distinguished from errors via model behavior understanding.\n\n\n\\medskip\n\\noindent \\textbf{Training with negative samples} has been investigated in several classic NLP tasks, such as grammatical error detection~\\cite{foster-andersen-2009-generrate} and dialogue systems~\\cite{li-etal-2019-sampling}. \nNotably, negative sampling plays a key role in word representation learning~\\cite{mikolov2013distributed} and training large masked language models, such as BERT and ALBERT, to induce better contextual representations~\\cite{devlin-etal-2019-bert,Lan2020ALBERT:}. \nFor text generation tasks, unlikelihood training is proposed to penalize the generation of negative tokens (e.g., repeated words) and sentences (e.g., contradictory responses in a dialogue system)~\\cite{Welleck2020Neural,li-etal-2020-dont,he-glass-2020-negative}. \nWe use contrastive learning that drives enhanced representation learning to better distinguish between factual and incorrect summaries, which encourages more faithful summary generation. \n\n\n\\paragraph{Contrastive Learning (CL) for NLP.}\nCL has been a popular method for representation learning, especially for vision understanding~\\cite{hjelm2018learning,pmlr-v119-chen20j}. \nOnly recently has CL been used for training language models with self-supervision~\\cite{fang2020cert}, learning sentence representations~\\cite{gao2021simcse}, and improving document clustering~\\cite{zhang2021supporting}. 
\nWith a supervised setup, \\citet{gunel2021supervised} adopt the contrastive objective to fine-tune pre-trained models on benchmark language understanding datasets. Using a similar idea, \\citet{liu-liu-2021-simcls} enlarge the distances among summaries of different quality as measured by ROUGE scores. \n\n\\section{\\textsc{CLIFF}: Contrastive Learning Framework for Summarization}\n\\label{sec:method}\n\n\nWe design a contrastive learning (CL)-based training objective that drives the summarization model to learn a preference of faithful summaries over summaries with factual errors.\nIt is then used for fine-tuning BART~\\cite{lewis-etal-2020-bart} and PEGASUS~\\cite{zhang2020pegasus} for training summarization models.\nFormally, let an article $x$ have a set of reference summaries $P$ (henceforth \\textit{positive samples}) and another set of erroneous summaries $N$ (\\textit{negative samples}). \nThe contrastive learning objective is~\\cite{khosla2020supervised,gunel2021supervised}:\n\n\\begin{equation}\n \\fontsize{10}{11}\\selectfont\n l_{cl}^{x} = - \\frac{1}{\\binom{|P|}{2}} \\sum_{\\substack{y_i, y_j \\in P\\\\y_j \\ne y_i}}\n \\log \\frac{\\exp(\\mathrm{sim} (\\bm{h}_i, \\bm{h}_j) \/ \\tau)}{\\sum\\limits_{\\substack{y_k \\in P \\cup N\\\\y_k \\ne y_i}} \\exp(\\mathrm{sim} (\\bm{h}_i, \\bm{h}_k) \/ \\tau)}\n \\label{eq:contrast}\n\\end{equation}\nwhere $\\bm{h}_i$, $\\bm{h}_j$, and $\\bm{h}_k$ are representations for summaries $y_i$, $y_j$, and $y_k$.\n$\\mathrm{sim}(\\cdot, \\cdot)$ calculates the cosine similarity between summary representations. $\\tau$ is a temperature and is set to $1.0$.\n\n\nImportantly, summaries in $P$ and $N$ are included in the \\textit{same batch} during training, so that the model acquires better representations to differentiate correct summaries from those with errors by comparing the two types of samples, thus maximizing the probabilities of the positive samples and minimizing the likelihoods of the corresponding negative samples.\nThe \\textbf{CL objective} on the full training set, denoted as $\\mathcal{L}_{CL}$, is the sum of losses $l_{cl}^{x}$ over all samples. \n\nTo effectively employ CL in summarization, we need to address two challenges: (1) how to automatically construct both positive and negative samples, which are critical for CL efficacy~\\cite{pmlr-v119-chen20j}, and (2) how to represent the summaries (i.e., $\\bm{h}_{\\ast}$).\nBelow we describe positive sample generation and options for $\\bm{h}_{\\ast}$, leaving the strategies for negative samples to \\S~\\ref{sec:sample_construct}. \n\n\n\\paragraph{Positive Sample Construction ($P$).}\nSummarization datasets often contain a single reference for each article. To create multiple positive samples, in our pilot study,\nwe experiment with paraphrasing with synonym substitution~\\cite{ren-etal-2019-generating}, randomly replacing words based on the prediction of masked language models~\\cite{kobayashi-2018-contextual}, and back-translation~\\cite{mallinson-etal-2017-paraphrasing}. We find back-translation to be best at preserving meaning and offering language variation, and thus use NLPAug\\footnote{\\url{https:\/\/github.com\/makcedward\/nlpaug}} to translate each reference to German and back to English. 
Together with the reference, the best translation is kept and added to $P$, if no new named entity is introduced.\n\n\n\\paragraph{Summary Representation ($\\bm{h}_{\\ast}$).}\nWe use the outputs of the decoder's last layer, and investigate three options that average over \\textit{all tokens}, \\textit{named entity tokens}, and the \\textit{last token} of the decoded summary. Entities and other parsing results are obtained by spaCy~\\cite{spacy}. \nWe further consider adding a multi-layer perceptron (MLP) with one hidden layer to calculate the final $\\bm{h}_{\\ast}$.\n\n\\medskip\n\\noindent \\textbf{The final training objective} combines the typical cross-entropy loss $\\mathcal{L}_{CE}$ and our contrastive learning objective: $\\mathcal{L} = \\mathcal{L}_{CE} + \\lambda \\mathcal{L}_{CL}$, where $\\lambda$ is a scalar and set to $1.0$ for all experiments. \n\n\\section{Summary Error Annotation and Model Behavior Analysis}\n\\label{sec:error_annotation}\n\nWe first describe annotating unfaithfulness errors made by state-of-the-art models, i.e., models fine-tuned from BART and PEGASUS\non XSum and CNN\/DM.\nWe then probe into the model generation behavior that is indicative of errors, which guides the design of negative sample construction strategies.\n\n600 ($150\\times2\\times2$) summaries are annotated to measure how often the models ``hallucinate'', i.e., generate content not grounded in the source.\nTo characterize errors, we annotate \\textit{text spans} in summaries with \n(i) \\textbf{intrinsic errors} caused by misconstructing phrases or clauses from the source; \nand (ii) \\textbf{extrinsic errors} which include words not in the source that are either unverifiable or cannot be verified by Wikipedia. \nContent that is not covered by the article but can be validated by Wikipedia is annotated as \\textbf{world knowledge}, and the models' behavior pattern when generating such content differs from when they generate errors.\n\nTwo fluent English speakers with extensive experience in summary evaluation and error labeling are hired. For each sample, they are shown the article and two system summaries, and instructed to annotate text spans with the aforementioned errors and world knowledge. After labeling every 50 samples, the annotators discuss and resolve any disagreement.\nThe Fleiss's Kappa scores on XSum and CNN\/DM are $0.35$ and $0.45$.\n\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.45\\textwidth]{figures\/error_type_dist.pdf}\n \n \\caption{Percentage of samples with intrinsic and extrinsic error spans for models fine-tuned from BART and PEGASUS on XSum and CNN\/DM.}\n \\label{fig:error_type_dist}\n \n\\end{figure}\n\n\\medskip\n\\noindent \\textbf{Error statistics} are displayed in Fig.~\\ref{fig:error_type_dist}. \nExtrinsic errors dominate both datasets, especially on XSum. $58.7\\%$ of summaries by BART (and $44.0\\%$ by PEGASUS) contain at least one extrinsic error.\nNoticeably, PEGASUS is a newer model pre-trained with a larger amount of data, and thus contains fewer errors than BART and other older models studied for error annotation by~\\newcite{maynez-etal-2020-faithfulness}. This observation also highlights the value of our annotations for future development and evaluation of summarization models.
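\n\nFor reference, inter-annotator agreement of the kind reported in this section can be computed with statsmodels; the sketch below is illustrative and assumes each annotated unit receives one categorical label per annotator:\n\\begin{verbatim}\nimport numpy as np\nfrom statsmodels.stats import inter_rater as irr\n\n# One row per annotated unit, one column per annotator; label ids are\n# illustrative (0: correct, 1: intrinsic, 2: extrinsic, 3: world knowledge).\nratings = np.array([[0, 0], [2, 2], [1, 2], [3, 3], [0, 0]])\n\ntable, _ = irr.aggregate_raters(ratings)\nprint(irr.fleiss_kappa(table, method='fleiss'))\n\\end{verbatim}\n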
\n\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.46\\textwidth]{figures\/prob_both.pdf}\n \\caption{\n Probability distributions of generating the first tokens of proper nouns and numbers, grouped by extrinsic errors, intrinsic errors, world knowledge, and other correct tokens.}\n \\label{fig:output_prob_propnum}\n\\end{figure}\n\n\\paragraph{Low confidence generation is indicative of extrinsic errors.}\nInspired by recent work that studies model prediction confidence~\\cite{liu2021tokenlevel}, we examine generation probabilities for tokens of \\textit{different part-of-speech (POS) tags}.\nFig.~\\ref{fig:output_prob_propnum} shows salient results on the generation probabilities of the first token of a proper noun or a number (with additional analysis provided in Appendix~\\ref{appendix:error_annotation}). \nAs observed, model confidence tends to be lower for the first tokens of proper nouns and numbers if they are part of spans with \\textit{extrinsic errors}.\nAlso note that world knowledge, which cannot be inferred from the source either, often has higher generation probability than extrinsic errors. \nTake this snippet generated by a fine-tuned BART as an example: \\textit{``Manchester United captain Wayne Rooney's testimonial game against Manchester City$\\ldots$''}. \\textit{``Manchester City\"} is an extrinsic error and \\textit{``Wayne\"} is produced as world knowledge. The model assigns a low probability of $0.10$ to the first token of \\textit{``Manchester City''} and a high probability of $0.92$ to token \\textit{``Wayne''}. \nThis implies that model confidence can be a useful indicator for negative sample collection. \n\n\n\\section{Negative Sample Construction}\n\\label{sec:sample_construct}\n\nHere we describe four strategies for constructing negative samples that \n\\textit{modify the references} (\\S~\\ref{subsec:swapping}-\\ref{subsec:conditional}) or use \\textit{system generated summaries} (\\ref{subsec:modelgeneration}). \n\n\\begin{table}[t]\n \n \\fontsize{8.5}{9}\\selectfont\n \\setlength{\\tabcolsep}{2pt}\n \n \\hspace{-1mm}\n \\begin{tabular}{p{76mm}}\n \\toprule\n \\textbf{\\textsc{Reference}}: A ``rare'' short-eared owl found emaciated in Flintshire is now recuperating well, the RSPCA have said. \\\\\n \\midrule \\midrule\n \\textbf{\\textsc{SwapEnt}}: Flintshire $\\rightarrow$ Bettisfield \\\\\n \n $\\Rightarrow$ A ``rare'' short-eared owl found emaciated in \\textcolor{red!80!black}{\\textbf{Bettisfield}} is now recuperating well, the RSPCA have said. \\\\\n \\midrule\n \\textbf{\\textsc{MaskEnt}}: A ``rare'' short-eared owl found emaciated in \\texttt{[MASK]} is now recuperating well, the RSPCA have said. \\\\\n \n $\\Rightarrow$ A ``rare'' short-eared owl found emaciated in a field in \\textcolor{red!80!black}{\\textbf{South Yorkshire}} is now recuperating well, the RSPCA have said. \\\\\n \\midrule\n \\textbf{\\textsc{MaskRel}}: A ``rare'' short-eared owl found \\texttt{[MASK]} in \\texttt{[MASK]} is now recuperating well, the RSPCA have said. \\\\\n \n $\\Rightarrow$ A ``rare'' short-eared owl found \\textcolor{red!80!black}{\\textbf{dead} in \\textbf{London}} is now recuperating well, the RSPCA have said. 
\\\\\n \\midrule\n \\textbf{\\textsc{RegenEnt}}: A ``rare'' short-eared owl found emaciated in \\rule{6mm}{0.15mm} \\\\\n \n $\\Rightarrow$ A ``rare'' short-eared owl found emaciated in \\textcolor{red!80!black}{\\textbf{Nottinghamshire}} \\textbf{is now at a wildlife centre to recover.} \\\\\n \\midrule\n \\textbf{\\textsc{RegenRel}}: A ``rare'' short-eared owl found \\rule{6mm}{0.15mm} \\\\\n \n $\\Rightarrow$ A ``rare'' short-eared owl found \\textbf{in the grounds of a former coal mine is being cared for \\textcolor{red!80!black}{by the RSPCA in Somerset.}} \\\\\n \\midrule\n \\textbf{\\textsc{SysLowCon}}: An injured \\textcolor{red!80!black}{golden} owl found in a former coal mine in \\textcolor{red!80!black}{Lancashire} is being cared for \\textcolor{red!80!black}{by the RSPCA}. \\\\\n \\bottomrule\n \\end{tabular}\n \n \\caption{\n Negative sample construction strategies (\\S~\\ref{sec:sample_construct}). For summaries edited from the reference, their differences are \\textbf{bolded}. Introduced errors are in \\textcolor{red!80!black}{\\textbf{red}}. Text before \\rule{6mm}{0.15mm} is the prefix for regeneration.\n }\n \\label{tab:construction_example}\n \n\\end{table}\n\n\n\\subsection{Entity Swap}\n\\label{subsec:swapping}\n\nEntity swap imitates intrinsic errors, as over $55\\%$ of intrinsic errors in our annotations are found to contain named entities.\nWe construct negative samples by swapping named entities in the references with other randomly selected entities of the same entity type in the source (\\textbf{\\textsc{SwapEnt}}). One sample is constructed for each entity in the reference. \nThough this idea has been studied by \\citet{kryscinski-etal-2020-evaluating}, they allow entities of different types to be used, e.g., a PERSON can be replaced by a LOCATION. \nExamples are displayed in Table~\\ref{tab:construction_example}. \n\n\\textsc{SwapEnt} has the advantage of not depending on any trained model. Yet it only introduces intrinsic errors and lacks the coverage for extrinsic errors, which is addressed by the following generation-based methods. \n\n\n\\subsection{Mask-and-fill with BART}\n\\label{subsec:unconditional}\nTo simulate extrinsic errors, we leverage large unconditional language models' capability of converting a sequence with masked tokens into a fluent and appropriate sequence. Specifically, we replace each named entity in a reference with a \\texttt{[MASK]} token and encode it with BART (without any fine-tuning). BART then fills this partially masked summary with newly generated entities (\\textbf{\\textsc{MaskEnt}}). BART is chosen since it can fill \\texttt{[MASK]} with varying number of tokens.\nFor each entity in the reference, we sample three summaries and only retain the ones containing at least one entity that is absent from both the source and the reference.\n\nUp to now, the two introduced strategies both focus on incorrect named entities. To cover more diverse extrinsic and intrinsic errors~\\cite{goyal-durrett-2020-evaluating}, we extend \\textsc{MaskEnt} to contain relations (\\textbf{\\textsc{MaskRel}}). \nWe first obtain dependency relations using Stanza~\\cite{qi-etal-2020-stanza}, with each relation denoted as $<$\\texttt{gov}, \\texttt{rel}, \\texttt{dep}$>$. \nTo incorporate more context, we consider noun phrase spans enclosing the token of \\texttt{gov} or \\texttt{dep} if it is a content word and the noun phrase contains a named entity. 
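\nThe relation extraction step can be illustrated with a short Stanza sketch; the processor list and helper name below are illustrative rather than our exact implementation:\n\\begin{verbatim}\nimport stanza\n\nstanza.download('en')  # first use only\nnlp = stanza.Pipeline('en', processors='tokenize,pos,lemma,depparse')\n\ndef dependency_triples(text):\n    # Return <gov, rel, dep> triples for every word in the text.\n    triples = []\n    for sent in nlp(text).sentences:\n        for word in sent.words:\n            gov = sent.words[word.head - 1].text if word.head > 0 else 'ROOT'\n            triples.append((gov, word.deprel, word.text))\n    return triples\n\\end{verbatim}\n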
\nSimilar to \\textsc{MaskEnt}, three negative samples are generated by BART based on the input with both \\texttt{gov} and \\texttt{dep} spans masked in the reference. Only the samples that introduce a new dependency relation contained in neither the source nor the reference are kept. Specifically, a dependency relation is considered matched if the same form or a synonym of both its \\texttt{gov} and \\texttt{dep} is found in the source or the reference with the same relation. \n\n\nBoth \\textsc{MaskEnt} and \\textsc{MaskRel} can create more extrinsic errors than the other strategies introduced in this section, since their negative samples are generated without being grounded in the source articles. However, the constructed negative samples may contain drifted topics that can be easily detected by a summarization model, resulting in less efficient training signals. \n\n\n\\subsection{Source-conditioned Regeneration}\n\\label{subsec:conditional}\n\nTo ground negative sample generation in the article, we further design a regeneration strategy based on conditional generation. For each named entity in the reference, we treat the text before it as a \\textit{prompt}.\nA summarizer, e.g., fine-tuned from BART or PEGASUS, first reads in the source using the encoder, then receives the prompt as the first part of the decoder output, and finally decodes the rest of the content based on nucleus sampling~\\cite{Holtzman2020The} with a cumulative probability threshold of $0.7$. \nThe prompt and the regenerated text comprise the final negative sample. This method is denoted as \\textbf{\\textsc{RegenEnt}}. \n\nWe also extend entities to relations with expanded governor and dependent spans (\\textbf{\\textsc{RegenRel}}). Here, we consider a prompt as the text before the \\texttt{gov} or \\texttt{dep} span, whichever occurs first. \nFor both \\textsc{RegenEnt} and \\textsc{RegenRel}, three negative samples are generated for each prompt, and a sample is kept if it introduces any new entity (for \\textsc{RegenEnt}) or dependency relation (for \\textsc{RegenRel}) with regard to the source and the reference. \n\nNegative samples generated by both methods are more relevant to the article than those from the mask-and-fill strategy, yet they may still miss certain types of errors and differ from real model outputs, since they are modified from the reference summaries. \n\n\n\\subsection{System Generation}\n\\label{subsec:modelgeneration}\n\nMotivated by the model confidence analysis in \\S~\\ref{sec:error_annotation}, we explore using system generated summaries as negative samples. We first run fine-tuned BART or PEGASUS on the same training set to decode summaries. For each summary, we check the model confidence on the first token of each proper noun and number span. If the probability is below a threshold, we keep it as a negative sample (\\textbf{\\textsc{SysLowCon}}). The threshold is tuned by maximizing F1 based on our error annotations. \n\nWe consider all beams at the last decoding step as candidates. We use beam sizes of $6$ and $4$ for XSum and CNN\/DM.\nStatistics of negative samples constructed by different strategies are in Appendix~\\ref{appendix:dataset_stat}.\n\n\n\\section{Experiment Setup}\n\\label{sec:exp_setup}\n\n\\paragraph{Evaluation Metrics.} \nQuestEval~\\cite{scialom2020QuestEval} is used as the main metric to evaluate summaries' factual consistency. \nGiven an article and a summary, QuestEval first generates natural language questions for entities and nouns from both.
\nA QA model then consumes the article to answer questions derived from the summary, producing a score. Another score is obtained from a QA model addressing article-based questions after reading the summary.\nThe final QuestEval score is the harmonic mean of the two. We use the version with learned weights for questions, which has shown high correlation with human judged consistency and relevance.\n\n\nWe further use FactCC~\\cite{kryscinski-etal-2020-evaluating}, trained based on their negative sample construction method, to measure if the summary can be entailed by the source.\nWe also report ROUGE-L~\\cite{lin-2004-rouge}. \nBoth FactCC and ROUGE-L reasonably correlate with summary factuality as judged by human~\\cite{pagnoni2021understanding}. \n\nBased on our error annotations, we report the correlations between each metric and the error rate---percentage of tokens being part of an error span, and the raw number of errors (Table~\\ref{tab:metric_correlation}).\nQuestEval correlates better on both aspects than other metrics. \n\n\n\\begin{table}[t]\n \\centering\n \\small\n \\setlength{\\tabcolsep}{4.5pt}\n \\begin{tabular}{lcccc}\n \\toprule\n \\textbf{Metric} & \\multicolumn{2}{c}{\\textbf{XSum}} & \\multicolumn{2}{c}{\\textbf{CNN\/DM}} \\\\\n & \\textbf{\\% of Err} & \\textbf{\\# of Err} & \\textbf{\\% of Err} & \\textbf{\\# of Err} \\\\\n \\midrule\n QuestEval & \\textbf{-0.43}\\rlap{$^\\ast$} & \\textbf{-0.25}\\rlap{$^\\ast$} & \\textbf{-0.33}\\rlap{$^\\ast$} & \\textbf{-0.29}\\rlap{$^\\ast$} \\\\\n FactCC & -0.02 & -0.15\\rlap{$^\\ast$} & -0.13\\rlap{$^\\ast$} & -0.12\\rlap{$^\\ast$} \\\\\n ROUGE-1 & -0.16\\rlap{$^\\ast$} & -0.02 & -0.03 & -0.06 \\\\\n ROUGE-2 & -0.11\\rlap{$^\\ast$} & -0.05 & -0.02 & -0.04 \\\\\n ROUGE-L & -0.13\\rlap{$^\\ast$} & -0.03 & -0.06 & -0.08 \\\\\n \\bottomrule\n \\end{tabular}\n \n \\caption{\n Pearson correlation between metrics and error rates and numbers of errors. \n $\\ast$: p-value $< 0.05$.\n %\n }\n \\label{tab:metric_correlation}\n \n\\end{table}\n\n\\paragraph{Comparisons.}\nIn addition to the models fine-tuned with cross-entropy loss (\\textbf{\\textsc{CrsEntropy}}), we consider reranking beams based on FactCC score (also our metric) at the last decoding step (\\textbf{\\textsc{EntailRank}}). \nWe also include three common methods of improving factuality: \n(1) (\\textbf{\\textsc{Correction}}) fine-tunes BART to fix summary errors as a separate step~\\cite{cao-etal-2020-factual}. \n(2) \\textbf{\\textsc{SubsetFT}} fine-tunes large models based on training samples without any dependency relation error~\\cite{goyal-durrett-2021-annotating}, with released checkpoint only available for XSum. \n(3) \\textbf{\\textsc{FASum}} modifies Transformer by incorporating knowledge graphs for factual consistency~\\cite{zhu-etal-2021-enhancing}, with model outputs only on CNN\/DM. \n\n\nMoreover, we compare with \\textbf{unlikelihood training} that penalizes the probabilities of all tokens in a negative sample~\\cite{li-etal-2020-dont}. Given a negative sample $y'$, the loss is defined as $-\\sum_{t=1}^{| y' |} \\log (1 - p(y'_t | y'_{1:t-1}, x))$, where $p(y'_t | y'_{1:t-1}, x)$ is the output probability at the $t$-th step. We combine the unlikelihood training objective with cross-entropy loss with equal weights for fine-tuning. \n\n\nLastly, we compare our negative sample strategies with negative samples constructed for training the FactCC scorer, denoted as \\textbf{\\textsc{FCSample}}. 
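\nFor concreteness, the unlikelihood term described above can be computed from the decoder's token probabilities as in the following PyTorch sketch; the tensor shapes and padding handling are assumptions, not our exact implementation:\n\\begin{verbatim}\nimport torch\nimport torch.nn.functional as F\n\ndef unlikelihood_loss(logits, neg_tokens, pad_id):\n    # logits: (batch, len, vocab) decoder outputs for negative summaries y'.\n    # neg_tokens: (batch, len) token ids of y'; pad_id marks padding.\n    p = F.softmax(logits, dim=-1)\n    p_neg = p.gather(-1, neg_tokens.unsqueeze(-1)).squeeze(-1)\n    token_loss = -torch.log((1.0 - p_neg).clamp(min=1e-20))\n    mask = neg_tokens.ne(pad_id).float()\n    # Sum over the tokens of each negative sample, then average the batch.\n    return (token_loss * mask).sum(dim=-1).mean()\n\\end{verbatim}\n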
\nFor CL only, we compare with using other samples in the same batch as negative samples (\\textbf{\\textsc{Batch}}), a common practice for CL-based representation learning~\\cite{gao2021simcse, zhang2021supporting}.\n\n\\section{Results}\n\\label{sec:results}\n\n\n\\subsection{Automatic Evaluation}\n\\label{subsec:autoeval}\nWe report results by models fine-tuned from BART and PEGASUS with different objectives and negative samples on XSum and CNN\/DM in Tables~\\ref{tab:main_result} and~\\ref{tab:pegasus_result}. \n\\textsc{CLIFF} models use a summary representation of averaging over all tokens with MLP projection, with other variants discussed in \\S~\\ref{subsec:ablaton}. Unless explicitly stated, comparison models are fine-tuned from the same large model used by \\textsc{CLIFF}. \n\n\nFirst, comparing with other factuality improvement models (top of the tables), \\textit{almost all \\textsc{CLIFF} models trained with different negative samples uniformly produce higher QuestEval scores across datasets with both large models}, with the improvements more pronounced on XSum.\nImportantly, ROUGE scores for \\textsc{CLIFF} models are comparable or better than baselines trained with cross-entropy, e.g., on CNN\/DM as in Table~\\ref{tab:main_result}. \nA similar trend is observed with the FactCC metric, especially when using PEGASUS as the base model (Table~\\ref{tab:pegasus_result}). \nNote that \\textsc{EntailRank} tends to yield significantly higher FactCC scores, though it obtains lower QuestEval scores than the cross-entropy baseline. Human inspection finds that \\textsc{EntailRank} can pick up beams with peculiar words of high FactCC scores, without improving factuality. \nMoreover, other comparisons based on post \\textsc{Correction} and model engineering (\\textsc{FASum}) only offer incremental gains. \nThe sample selection-based method, \\textsc{SubsetFT}, sacrifices ROUGE scores significantly. 
\nOverall, \\textsc{CLIFF} demonstrates stronger generalizability.\n\n\n\\begin{table}[!t]\n \\centering\n \\small\n \\setlength{\\tabcolsep}{2.5pt}\n \\begin{tabular}{lcccccc}\n \\toprule\n \\textbf{Model} & \\multicolumn{3}{c}{\\textbf{XSum}} & \\multicolumn{3}{c}{\\textbf{CNN\/DM}} \\\\\n \\cmidrule(lr){2-4} \\cmidrule(lr){5-7}\n & \\textbf{QEval} & \\textbf{FC} & \\textbf{R-L} & \\textbf{QEval} & \\textbf{FC} & \\textbf{R-L} \\\\\n \\midrule\n \\multicolumn{7}{l}{\\textit{Comparisons without Negative Samples}}\\\\\n \\textsc{CrsEntropy} & 32.50 & 25.48 & \\textbf{39.07} & 50.21 & 44.44 & 40.39 \\\\\n \\textsc{EntailRank} & 32.42 & \\textbf{41.90} & 38.47 & 50.15 & \\textbf{61.04} & \\textbf{40.67} \\\\\n \\textsc{Correction} & 32.55 & 25.15 & 39.02 & 49.48 & 32.96 & 39.79 \\\\\n \\midrule\n \\midrule\n \\multicolumn{7}{l}{\\textit{Comparisons with Unlikelihood Training}} \\\\\n \\textsc{FCSample} & 32.79 & \\hlc[xgreen]{25.37} & 38.46 & 50.63 & 45.45 & 39.28 \\\\\n \n \\textsc{SwapEnt} & 32.88 & 24.76 & 37.91 & 50.43 & 43.02 & \\hlc[xgreen]{38.96} \\\\\n \\textsc{MaskEnt} & 33.04 & \\hlc[xgreen]{26.30} & 37.51 & 51.11 & 52.19 & \\hlc[xgreen]{39.34} \\\\\n \\textsc{MaskRel} & \\hlc[xgreen]{33.08} & 24.38 & 38.05 & 51.14 & 52.93 & 39.31 \\\\\n \\textsc{RegenEnt} & 32.89 & 24.46 & \\hlc[xgreen]{38.47} & \\hlc[xgreen]{51.11} & \\hlc[xgreen]{52.90} & 39.23 \\\\\n \\textsc{RegenRel} & 32.91 & 24.80 & \\hlc[xgreen]{38.46} & 51.07 & \\hlc[xgreen]{53.68} & \\hlc[xgreen]{39.45} \\\\\n \\textsc{SysLowCon} & 31.66 & \\hlc[xgreen]{26.06} & 34.03 & \\hlc[xgreen]{50.92} & 51.08 & 39.19 \\\\\n \\midrule\n \\multicolumn{7}{l}{\\textit{Our Method:} \\textsc{CLIFF}} \\\\\n \\textsc{Batch} & 32.64 & 24.96 & 38.42 & 51.03\\rlap{$^\\ast$} & 51.81\\rlap{$^\\ast$} & 39.38 \\\\\n \\hdashline\n \\textsc{FCSample} & \\hlc[xgreen]{32.96}\\rlap{$^\\ast$} & 25.28 & \\hlc[xgreen]{38.58} & \\hlc[xgreen]{51.00}\\rlap{$^\\ast$} & \\hlc[xgreen]{51.80}\\rlap{$^\\ast$} & \\hlc[xgreen]{39.37} \\\\\n \\textsc{SwapEnt} & \\hlc[xgreen]{33.09}\\rlap{$^\\ast$} & \\hlc[xgreen]{25.09} & \\hlc[xgreen]{38.58} & \\hlc[xgreen]{51.16}\\rlap{$^\\ast$} & \\hlc[xgreen]{52.97}\\rlap{$^\\ast$} & 38.95 \\\\\n \\textsc{MaskEnt} & \\hlc[xgreen]{33.09}\\rlap{$^\\ast$} & 25.75 & \\hlc[xgreen]{38.12} & \\hlc[xgreen]{51.13}\\rlap{$^\\ast$} & \\hlc[xgreen]{53.60}\\rlap{$^\\ast$} & 39.24 \\\\\n \\textsc{MaskRel} & 33.06\\rlap{$^\\ast$} & \\hlc[xgreen]{25.28} & \\hlc[xgreen]{38.37} & \\hlc[xgreen]{\\textbf{51.17}}\\rlap{$^\\ast$} & \\hlc[xgreen]{53.34}\\rlap{$^\\ast$} & \\hlc[xgreen]{39.36} \\\\\n \\textsc{RegenEnt} & \\hlc[xgreen]{33.09}\\rlap{$^\\ast$} & \\hlc[xgreen]{24.48} & 38.33 & 50.99\\rlap{$^\\ast$} & 52.18\\rlap{$^\\ast$} & \\hlc[xgreen]{39.28} \\\\\n \\textsc{RegenRel} & \\hlc[xgreen]{33.16}\\rlap{$^\\ast$} & \\hlc[xgreen]{24.82} & 38.30 & \\hlc[xgreen]{51.16}\\rlap{$^\\ast$} & 53.21\\rlap{$^\\ast$} & 39.25 \\\\\n \\textsc{SysLowCon} & \\hlc[xgreen]{\\textbf{33.21}}\\rlap{$^\\ast$} & 25.18 & \\hlc[xgreen]{38.18} & 50.85\\rlap{$^\\ast$} & \\hlc[xgreen]{53.73}\\rlap{$^\\ast$} & \\hlc[xgreen]{39.30} \\\\\n \\bottomrule\n \\end{tabular}\n \n \\caption{\n Results of models fine-tuned from PEGASUS on XSum and CNN\/DM. We report results on $5,000$ randomly selected samples on CNN\/DM, due to long running time of QuestEval. \n \n \n For models of unlikelihood training and CLIFF that use the same negative samples, the better of the two is highlighted with \\hlc[xgreen]{green}. 
\n $\\ast$: our model is significantly better than \\textsc{CrsEntropy} (approximation randomization test, $p < 0.005$).}\n \\label{tab:pegasus_result}\n\\end{table}\n\n\nSecond, \\textit{\\textsc{CLIFF} is more effective and robust than unlikelihood training} with the same negative samples. \nAccording to Table~\\ref{tab:main_result}, using 7 negative sample construction strategies on two datasets, \\textsc{CLIFF} obtains higher QuestEval scores than unlikelihood training in 12 out of the 14 comparisons. Using PEGASUS, \\textsc{CLIFF} also outperforms in 11 setups as listed in Table~\\ref{tab:pegasus_result}. \nSimilar trends are found on FactCC and ROUGE-L.\nAnother noteworthy piece is that \\textsc{CLIFF}'s improvements over the cross-entropy baseline are more consistent, whereas unlikelihood training occasionally hurts factuality or ROUGE scores significantly. \nWe believe the key advantage of \\textsc{CLIFF} resides in its measure of representation similarities between positive and negative samples in the same batch, allowing models to better differentiate between correct and erroneous summaries. \n\n\nFinally, among all variants, \\textit{\\textsc{CLIFF} trained with low confidence summaries as negative samples obtains the best QuestEval scores on the more abstractive dataset}. As seen in Table~\\ref{tab:main_result}, using low confidence summaries also improves FactCC scores on both datasets, and enhances ROUGE-L on the more extractive dataset CNN\/DM. \nThis indicates that \\textit{system generated summaries contribute more diverse errors made by existing models organically}, which are particularly suitable for our CL framework. As we use summaries generated by the same model for \\textsc{CLIFF} training, one future direction is to use outputs by different models. \nFor our mask-and-fill and source-conditioned regeneration strategies, we find that relation-anchored construction often beats their entity-anchored counterparts. This calls for efforts that steer the entity-driven methods to a more relation-focused direction. \n\n\n\\paragraph{Combining Strategies.}\nWe further show results by fine-tuning BARTs using samples based on combined negative sample construction strategies in Table~\\ref{tab:combine_strategy}. As can be seen, \\textit{combining \\textsc{SysLowCon} and other strategies yields better QuestEval scores than models trained with negative samples by any single strategy}, except for \\textsc{MaskEnt} and \\textsc{RegenEnt} on XSum. This signifies the importance of covering diverse types of errors in negative samples. 
\n\n\n\\begin{table}[t]\n \\centering\n \\small\n \\setlength{\\tabcolsep}{2.5pt}\n \\begin{tabular}{lcccccc}\n \\toprule\n \\textbf{Strategy} & \\multicolumn{3}{c}{\\textbf{XSum}} & \\multicolumn{3}{c}{\\textbf{CNN\/DM}} \\\\\n \\cmidrule(lr){2-4} \\cmidrule(lr){5-7} \n & \\textbf{QEval} & \\textbf{FC} & \\textbf{R-L} & \\textbf{QEval} & \\textbf{FC} & \\textbf{R-L} \\\\\n \\midrule\n \\textsc{SysLowCon} & 33.35 & 25.47 & \\textbf{36.19} & 51.05 & 50.05 & \\textbf{41.01} \\\\\n + \\textsc{SwapEnt} & \\textbf{33.40} & \\textbf{25.50} & 35.50 & \\textbf{51.32} & \\textbf{53.95} & 40.57 \\\\\n + \\textsc{MaskEnt} & 33.21 & 25.47 & 35.91 & 51.16 & 51.90 & 40.66 \\\\\n + \\textsc{MaskRel} & 33.39 & 25.20 & 35.70 & 51.24 & 52.48 & 40.80 \\\\\n + \\textsc{RegenEnt} & 33.31 & 25.07 & 35.94 & 51.21 & 51.86 & 40.91 \\\\\n + \\textsc{RegenRel} & 33.38 & 24.97 & 36.03 & 51.13 & 50.85 & 40.97 \\\\\n \\bottomrule\n \\end{tabular}\n \\caption{Results of fine-tuned BART with combinations of negative sample construction strategies.}\n \\label{tab:combine_strategy}\n\\end{table}\n\n\n\\subsection{Human Evaluation}\n\\label{subsec:humaneval}\n\n\\begin{table}[t]\n \\begin{subtable}[h]{0.48\\textwidth}\n \\small\n \\setlength{\\tabcolsep}{2pt}\n \\begin{tabular}{lcccccc}\n \\toprule\n & \\multicolumn{3}{c}{\\textbf{Inform.}} & \\multicolumn{3}{c}{\\textbf{Factual.}} \\\\\n \\textbf{Model} & \\textbf{Win}$\\uparrow$ & \\textbf{Tie} & \\textbf{Lose}$\\downarrow$ & \\textbf{Win}$\\uparrow$ & \\textbf{Tie} & \\textbf{Lose}$\\downarrow$ \\\\\n \\midrule\n \\textsc{EntailRank} & 3.3 & 84.3 & \\textbf{12.3} & 23.7 & 71.7 & \\textbf{4.7} \\\\\n \\textsc{ULL. MaskEnt} & 6.3 & 80.7 & 13.0 & 26.3 & 62.0 & 11.7 \\\\\n \\textsc{CL. Batch} & 5.3 & 80.0 & 14.7 & 21.7 & 68.0 & 10.3 \\\\\n \\textsc{CL. SysLowCon} & \\textbf{8.7} & 78.3 & 13.0 & \\textbf{31.3} & 61.7 & 7.0 \\\\\n \\bottomrule\n \\end{tabular}\n \\vspace{-1mm}\n \\caption{XSum}\n \n \\end{subtable}\n \n \\begin{subtable}[h]{0.48\\textwidth}\n \\small\n \\setlength{\\tabcolsep}{2pt}\n \\begin{tabular}{lcccccc}\n \\toprule\n & \\multicolumn{3}{c}{\\textbf{Inform.}} & \\multicolumn{3}{c}{\\textbf{Factual.}} \\\\\n \\textbf{Model} & \\textbf{Win}$\\uparrow$ & \\textbf{Tie} & \\textbf{Lose}$\\downarrow$ & \\textbf{Win}$\\uparrow$ & \\textbf{Tie} & \\textbf{Lose}$\\downarrow$ \\\\\n \\midrule\n \\textsc{EntailRank} & 2.3 & 86.3 & 11.3 & 4.7 & 94.7 & \\textbf{0.7} \\\\\n \\textsc{ULL. MaskEnt} & \\textbf{18.0} & 71.0 & 11.0 & 17.3 & 79.7 & 3.0 \\\\\n \\textsc{CL. Batch} & 17.3 & 74.7 & \\textbf{8.0} & \\textbf{20.7} & 77.0 & 2.3 \\\\\n \\textsc{CL. SysLowCon} & 15.7 & 75.7 & 8.7 & 20.0 & 77.7 & 2.3 \\\\\n \\bottomrule\n \\end{tabular}\n \\vspace{-1mm}\n \\caption{CNN\/DM}\n \\end{subtable}\n \n \\caption{\n Percentages of summaries that are better than, tied with, or worse than \\textsc{CrsEntropy}, in informativeness (Inform.) and factual consistency (Factual.) \n \n The Krippendorff's $\\alpha$s are $0.33$ and $0.62$ for the two aspects on XSum, and $0.34$ and $0.89$ on CNN\/DM. Our CL method using low confidence summaries is more frequently rated as better for informativeness and factuality on the more abstractive dataset XSum.\n }\n \\label{tab:human_eval}\n \n\\end{table}\n\n\n\\paragraph{Pairwise Comparison with Cross-entropy.} \nWe recruit the two human annotators for our summary error study, as well as another experienced annotator, to evaluate summary \\textbf{informativeness} and \\textbf{factual consistency}. 
For each article, the judges are shown summaries generated by the \\textsc{CrsEntropy} model and four other systems. They then rate each system summary against the \\textsc{CrsEntropy} summary.\nAll four summaries generated by different factuality-improved models are shown in random order without system names shown, ensuring the fair comparison among them.\n\nWe randomly pick $100$ articles from each dataset used in our error analysis study in \\S~\\ref{sec:error_annotation}, and evaluate summaries generated by \\textsc{EntailRank}, unlikelihood training (\\textsc{ULL}) with negative samples constructed by \\textsc{MaskEnt}, and \\textsc{CLIFF} models trained with \\textsc{Batch} and \\textsc{SysLowCon} negative samples. All are fine-tuned from BART. Detailed evaluation guidelines are in Appendix~\\ref{appendix:human_eval}. \n\n\nTable~\\ref{tab:human_eval} shows that on the more abstractive XSum data \\textit{CL trained with low confidence samples are more frequently rated as being more informative and more factual} than \\textsc{CrsEntropy} summaries. This echos our automatic evaluations with QuestEval in \\S~\\ref{subsec:autoeval}.\nOn CNN\/DM, all models trained with negative samples produce summaries with better informativeness and faithfulness.\nIn contrast, \\textsc{EntailRank} summaries are less distinguishable from outputs by \\textsc{CrsEntropy} on both datasets, as more ties are found. \nWe show sample outputs in Fig.~\\ref{fig:intro_sample}, with additional examples in Appendix~\\ref{appendix:outputs}. \n\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.47\\textwidth]{figures\/human_eval_error_type_dist.pdf}\n \\vspace{-2mm}\n \\caption{Portions of summaries with errors. CL models consistently reduce both types of errors.\n }\n \\label{fig:human_eval_error_dist}\n \\vspace{-3mm}\n\\end{figure}\n\n\\paragraph{Intrinsic vs. Extrinsic Errors.}\nNext, the annotators are asked to label text spans with intrinsic and extrinsic errors as done in \\S~\\ref{sec:error_annotation}. \nFig.~\\ref{fig:human_eval_error_dist} shows that \\textit{CL is more effective at reducing extrinsic errors than unlikelihood training can} on both datasets.\nWe also observe slight decreases of world knowledge in the summaries (figure attached in Appendix~\\ref{appendix:human_eval}). \n\n\\paragraph{Error Correction Operations.}\nFinally, with reference to \\textsc{CrsEntropy} summaries, human judges are instructed to label each system summary as whether it corrects any error by \\textsc{CrsEntropy} using \\textbf{deletion} of the incorrect content, \\textbf{substitution} with factual information, or \\textbf{both}. \nAs seen in Fig.~\\ref{fig:error_correct_technique}, CL-based models restore factually consistent information, e.g., by replacing erroneous names and numbers with correct ones, more frequently than unlikelihood training or entailment reranking. \n\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.47\\textwidth]{figures\/error_correct_dist.pdf}\n \\vspace{-2mm}\n \\caption{Summaries use different portions of error correction operations. \n Contrastive learning with \\textsc{SysLowCon} (CL.SLC) and \\textsc{Batch} (CL.B) substitute errors with correct content more often than unlikelihood training with \\textsc{MaskEnt} and \\textsc{EntailRank}. \n \n \n }\n \\label{fig:error_correct_technique}\n \\vspace{-3mm}\n\\end{figure}\n\n\n\n\\subsection{Variants of Summary Representation}\n\\label{subsec:ablaton}\n\nSample representation is critical for CL to be effective. 
Here we investigate summary representation variants as discussed in \\S~\\ref{sec:method}. \nThere are two major considerations: (1) Should we consider all tokens in a summary or only representative ones (e.g., entities or last token)? (2) Should additional transformation, i.e., an MLP, be used?\n\nExperiments on XSum using three negative sample construction strategies demonstrate that \\textit{averaging the decoder outputs of all tokens and adding an MLP projection yield the best overall performance}, as shown in Table~\\ref{tab:ablation_representation}. The implications are at least two-fold. \nFirst, even for entity- or relation-triggered sample modifications, using more global context helps with CL training. \nSecond, additional transformation can help avoid model degeneration. For instance, more nonsensical and repetitive content is produced by variants without MLP. \n\n\n\\begin{table}[t]\n \\centering\n \\setlength{\\tabcolsep}{2.5pt}\n \\fontsize{8.5}{10}\\selectfont\n \\begin{tabular}{lccccccc}\n \\toprule\n \\multicolumn{2}{l}{} & \\multicolumn{2}{c}{\\textbf{\\textsc{SwapEnt}}} & \\multicolumn{2}{c}{\\textbf{\\textsc{MaskRel}}} & \\multicolumn{2}{c}{\\textbf{\\textsc{SysLowCon}}} \\\\\n \\textbf{Rep.} & \\textbf{MLP} & \\textbf{QEval} & \\textbf{FC} & \\textbf{QEval} & \\textbf{FC} & \\textbf{QEval} & \\textbf{FC} \\\\\n \\midrule\n \\multicolumn{8}{l}{\\textit{BART}} \\\\\n Last & \\checkmark & 33.15 & 25.10 & 33.20 & 25.29 & 33.10 & 24.85 \\\\\n Last & & \\textcolor{red!53}{+0.13} & \\textcolor{red!42}{+0.02} & \\textcolor{blue!21}{--0.01} & \\textcolor{blue!72}{--0.32} & \\textcolor{blue!47}{--0.07} & \\textcolor{blue!50}{--0.10} \\\\\n Entity & \\checkmark & \\hlc[xgreen]{\\textbf{33.35}} & 25.41 & 33.34 & 25.44 & 33.32 & 24.46 \\\\\n Entity & & \\textcolor{blue!50}{--0.13} & \\textcolor{blue!47}{--0.07} & \\textcolor{blue!41}{--0.14} & \\textcolor{blue!45}{--0.05} & \\textcolor{blue!69}{--0.29} & \\textcolor{red}{+0.72} \\\\\n All & \\checkmark & 33.30 & \\hlc[xgreen]{\\textbf{25.67}} & \\hlc[xgreen]{\\textbf{33.35}} & \\hlc[xgreen]{\\textbf{25.69}} & \\hlc[xgreen]{\\textbf{33.35}} & \\hlc[xgreen]{\\textbf{25.47}} \\\\\n All & & \\textcolor{blue!43}{--0.23} & \\textcolor{blue!90}{--0.80} & \\textcolor{blue!44}{--0.04} & \\textcolor{blue!88}{--0.48} & \\textcolor{blue!46}{--0.26} & \\textcolor{blue!80}{--0.40} \\\\\n \\midrule\n \\multicolumn{8}{l}{\\textit{PEGASUS}} \\\\\n Last & \\checkmark & 33.07 & \\hlc[xgreen]{\\textbf{25.45}} & 32.99 & 25.09 & 33.18 & 24.94 \\\\\n Last & & \\textcolor{blue!47}{--0.07} & \\textcolor{blue!96}{--0.56} & \\textcolor{red!41}{+0.01} & \\textcolor{blue!41}{-0.01} & \\textcolor{blue!47}{--0.02} & \\textcolor{blue!44}{--0.04} \\\\\n Entity & \\checkmark & 33.03 & 25.43 & 33.05 & 24.77 & 33.20 & 24.59 \\\\\n Entity & & \\textcolor{red!41}{+0.01} & \\textcolor{blue!74}{--0.34} & \\textcolor{blue!42}{--0.05} & \\textcolor{red}{\\textbf{\\hlc[xgreen]{+0.64}}} & \\textcolor{blue!70}{--0.30} & \\textcolor{red!45}{+0.05} \\\\\n All & \\checkmark & \\hlc[xgreen]{\\textbf{33.09}} & 25.09 & 33.06 & 25.28 & \\hlc[xgreen]{\\textbf{33.21}} & \\hlc[xgreen]{\\textbf{25.18}} \\\\\n All & & \\textcolor{blue!51}{--0.11} & \\textcolor{red!65}{+0.25} & \\textcolor{red!43}{\\hlc[xgreen]{\\textbf{+0.03}}} & \\textcolor{blue!59}{-0.19} & \\textcolor{blue!42}{--0.02} & \\textcolor{blue}{--0.80} \\\\\n \\bottomrule\n \\end{tabular}\n \\vspace{-2mm}\n \\caption{\n Comparing different formulations of summary representation in CL. 
\n For models without MLP, we display score changes from their counterparts. \n Overall, using all tokens with MLP produces better summaries. \n }\n \\label{tab:ablation_representation}\n \\vspace{-2mm }\n\\end{table}\n\n\\section{Conclusion}\n\\label{sec:conclusion}\n\nWe present \\textsc{CLIFF}, a contrastive learning-based framework to promote faithfulness and factuality of abstractive summaries. \\textsc{CLIFF} uses both references and summaries that are factually inconsistent with the articles to train systems to be better at discriminating errors from factual and salient content. \nWe further study strategies that automatically create erroneous summaries by editing from references or leveraging systems outputs, inspired by our new summary error analysis on state-of-the-art models. \nBoth automatic evaluation and human ratings show that \\textsc{CLIFF} achieves consistent improvements over competitive comparison methods, and is generalizable across datasets with systems fine-tuned from different large models. \n\n\n\\section*{Acknowledgements}\nThis research is supported in part by Oracle for Research Cloud Credits, National Science Foundation through a CAREER award IIS-2046016, and the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via contract \\# FA8650-17-C-9116. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein. \nWe thank three anonymous reviewers for their valuable suggestions. \n\n\n\\section{Additional Analysis for Summary Error Annotation}\n\\label{appendix:error_annotation}\n\nWe hire two fluent English speakers to annotate summary errors on XSum and CNN\/DailyMail (CNN\/DM). They annotate a common batch of 100 summaries generated by summarizers fine-tuned from BART and PEGASUS, with 50 articles in each batch. The two annotators are shown 50 HTML pages in a batch, each of which contains an article and two summaries generated by the two models. \nThe detailed annotation guideline is given in Fig.~\\ref{fig:annotation_guideline}.\n\nFor our analysis on token generation probabilities, we additionally show the distributions of the first token's probability for nouns and verbs in Fig.~\\ref{fig:first_token}. We also report the distributions of the non-first token's probability for proper nouns, numbers, nouns, and verbs in Fig.~\\ref{fig:nonfirst_token}. As can be seen, tokens within extrinsic and intrinsic errors have high generation probabilities when they are non-first tokens.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.45\\textwidth]{figures\/prob_other_first.pdf}\n \\caption{Probability distributions of generating the \\textit{first} tokens of nouns and verbs, grouped by extrinsic errors, intrinsic errors, world knowledge, and other correct tokens. No verb is annotated as world knowledge.}\n \\label{fig:first_token}\n\\end{figure}\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.45\\textwidth]{figures\/prob_all_nonfirst.pdf}\n \\caption{Probability distributions of generating the \\textit{non-first} tokens of proper nouns, numbers, nouns, and verbs, grouped by extrinsic errors, intrinsic errors, world knowledge, and other correct tokens. 
Non-first tokens do not exist for numbers and verbs, as they only contain single tokens.}\n \\label{fig:nonfirst_token}\n\\end{figure}\n\n\n\n\\section{Statistics for Datasets and Training Samples}\n\\label{appendix:dataset_stat}\n\n\\paragraph{Summarization Datasets.}\n\nWe follow the official data splits for the two datasets, with the number of samples in each split listed in Table~\\ref{tab:data_split}.\n\n\\begin{table}[t]\n \\centering\n \\small\n \\begin{tabular}{lccc}\n \\toprule\n \\textbf{Dataset} & \\textbf{Train} & \\textbf{Validation} & \\textbf{Test} \\\\\n \\midrule\n XSum & 204{,}045 & 11{,}332 & 11{,}334 \\\\\n CNN\/DM & 287{,}227 & 13{,}368 & 11{,}490 \\\\\n \\bottomrule\n \\end{tabular}\n \\caption{Numbers of samples in train\/validation\/test splits of XSum and CNN\/DM.}\n \\label{tab:data_split}\n\\end{table}\n\n\\paragraph{Positive Samples.}\n\nWe observe unfaithful paraphrases by back-translation for some reference summaries, which are mainly due to the introduction of new entities and the rewriting of quoted text. Thus, we discard samples generated by back-translation that contain new entities and inconsistent quoted text. Finally, we obtain $182{,}114$ and $91{,}468$ positive samples by back-translation on XSum and CNN\/DM.\n\n\\paragraph{Negative Samples.}\n\nFor consistency, we use the summarizer fine-tuned from BART in \\textsc{RegenEnt}, \\textsc{RegenRel} (\\S~\\ref{subsec:conditional}), and \\textsc{SysLowCon} (\\S~\\ref{subsec:modelgeneration}) strategies. We tune a threshold to select negative samples from model generations in our \\textsc{SysLowCon} strategy. The threshold is set to $0.21$, with F1 scores of $73.99$ and $40.49$ on XSum and CNN\/DM annotated samples generated by BART.\n\nThe numbers of negative samples constructed by each strategy for training on XSum and CNN\/DM are shown in Table~\\ref{tab:num_neg_samples}.\n\\textsc{SysLowCon} constructs the least negative samples in total, while it achieves the best results as reported in our main paper (\\S~\\ref{subsec:autoeval}), indicating that its negative samples are more effective for training.\n\n\\begin{table}[ht]\n \\centering\n \\small\n \\begin{tabular}{lcc}\n \\toprule\n \\textbf{Strategy} & \\textbf{XSum} & \\textbf{CNN\/DM} \\\\\n \\midrule\n \\textsc{FCSample} & 936{,}164 & 1{,}291{,}710 \\\\\n \\textsc{SwapEnt} & 438{,}003 & 1{,}617{,}764 \\\\\n \\textsc{MaskEnt} & 360{,}795 & 1{,}050{,}200 \\\\\n \\textsc{MaskRel} & 391{,}224 & 1{,}345{,}317 \\\\\n \\textsc{RegenEnt} & 732{,}986 & 1{,}941{,}886 \\\\\n \\textsc{RegenRel} & 993{,}694 & 1{,}453{,}044 \\\\\n \\textsc{SysLowCon} & 401{,}112 & 502{,}768 \\\\\n \n \\bottomrule\n \\end{tabular}\n \\caption{Numbers of negative samples constructed by different strategies on XSum and CNN\/DM.}\n \\label{tab:num_neg_samples}\n\\end{table}\n\n\n\n\\section{Implementation Details}\n\\label{appendix:implementation}\n\nWe use Fairseq~\\cite{ott2019fairseq} and Huggingface Transformers~\\cite{wolf-etal-2020-transformers} for our experiments with BART and PEGASUS. Our experiments are conducted on the RTX 8000 GPU with 48GB memory and the A100 GPU with 40GB memory.\n\n\\paragraph{Training Settings.}\n\nFor hyperparameters, we follow \\citet{lewis-etal-2020-bart} for BART and \\citet{zhang2020pegasus} for PEGASUS.\nDuring training, we randomly select 5 and 4 negative samples for each input article in XSum and CNN\/DM. 
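\nA minimal sketch of this per-article subsampling when assembling a training batch is given below; the function and variable names are illustrative:\n\\begin{verbatim}\nimport random\n\ndef build_article_batch(positives, negative_pool, k):\n    # k negatives per article: 5 for XSum and 4 for CNN\/DM in our settings.\n    negatives = random.sample(negative_pool, min(k, len(negative_pool)))\n    summaries = positives + negatives\n    is_positive = [True] * len(positives) + [False] * len(negatives)\n    return summaries, is_positive\n\\end{verbatim}\n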
Mixed-precision training is not supported by the PEGASUS implementation and is utilized on BART only.\n\n\\paragraph{Decoding Settings.}\n\nWe use the beam search algorithm to decode summaries.\nFor BART, we set the beam sizes as 6 and 4 on XSum and CNN\/DM. A beam size of 8 is used for PEGASUS on both datasets.\n\n\\paragraph{Running Time and Model Sizes.}\n\nThe BART-based models take 6 and 13 hours for training on XSum and CNN\/DM, and it takes 1.5 hour to decode on the two datasets.\nMeanwhile, training the PEGASUS-based models takes 8 and 25 hours for XSum and CNN\/DM, and the decoding takes 1 hour.\n\nAs for model sizes, our BART-based models and PEGASUS-based models have 400M and 568M parameters.\n\n\n\\section{Human Evaluation}\n\\label{appendix:human_eval}\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.45\\textwidth]{figures\/human_eval_world_dist.pdf}\n \\caption{Percentages of samples containing world knowledge as labeled by human on the outputs of XSum and CNN\/DM.}\n \\label{fig:percent_world}\n\\end{figure}\n\nIn \\S~\\ref{subsec:humaneval}, we demonstrate the percentages of samples containing intrinsic errors and extrinsic errors for each model evaluated by human judges. \nHere, we report the percentages of samples containing world knowledge in Fig.~\\ref{fig:percent_world}. On XSum, all models produce less world knowledge compared to the model trained with cross-entropy loss, while generating similar or greater amounts of samples with world knowledge on CNN\/DM.\n\nOur human evaluation guideline is shown in Fig.~\\ref{fig:human_eval_guideline}.\n\n\n\n\\section{Sample Outputs}\n\\label{appendix:outputs}\n\nWe include more sample outputs in Fig.~\\ref{fig:generation_examples}.\n\n\n\n\n\n\\begin{figure*}[t]\n \\centering\n \\fontsize{10}{12}\\selectfont\n \\begin{tabular}{p{0.88\\textwidth}}\n \\toprule\n In this study, you will first read article-summary pairs and then identify three types of text spans in the summaries. These spans include content that is contradicted by or cannot be implied from the article. The description for each type is described below: \\\\\n \\begin{itemize}\n \\item \\textbf{Intrinsic errors:} Text spans that misconstruct phrases or clauses from the article. \n \\item \\textbf{Extrinsic errors:} Text spans that include words that are not in the article and are not verifiable or cannot be verified by Wikipedia. \n \\item \\textbf{world knowledge:} Text spans that contain information that is not covered by the article but can be validated by Wikipedia. \n \\end{itemize} \\\\\n When selecting spans, you should always make sure the spans are complete words. \\\\\n \\\\\n In practice, you should follow the these steps carefully: (1) read the article and summaries carefully; (2) figure out if there is content contradicted by or not presented in the article; (3) label the span as an intrinsic error if it misconstructs phrases or clauses from the article; (4) if the span does not belong to intrinsic errors, search within Wikipedia and determine whether the content in the span can be verified; (5) label it as world knowledge if the it can be verified by Wikipedia, otherwise label it as an extrinsic error. \\\\\n \\midrule\n \\textbf{Example annotations 1} \\\\\n \\textbf{Article:} Isis Academy in Oxford said it had rebranded as ``Iffley Academy'' to protect its ``reputation, integrity and image''. \\textbf{The name `Isis' was originally chosen as the school is near to the section of the River Thames of the same name}. 
Formerly Iffley Mead School, it became Isis Academy in 2013. A statement issued by the school said it had changed name following ``the unforeseen rise of ISIS (also known as ISIL and the Islamic State) and related global media coverage of the activities of the group''. ``Our priority is to remove the detrimental impact which the name `Isis' had on pupils, their families and our staff.'' Last year a language school in the city removed Isis from its name for the same reason. The Isis is the name given to the part of the River Thames above Iffley Lock in Oxford. It is also the name of the goddess wife of the god Osiris in Egyptian beliefs. \\\\\n \n \\textbf{Summary:} A school that \\hlc[crimsonglory!40]{was named after} the Islamic State (IS) militant group has changed its name. \\\\\n \\textbf{Explanation:} \\textit{``was name after''} is an intrinsic error contradicted by the article sentence in \\textbf{bold}. \\\\%The name is not chosen because of the Islamic State. \\\\\n \\midrule\n \\textbf{Example annotations 2} \\\\\n \\textbf{Article:} Khalil Dale, 60, was abducted in \\textbf{Quetta} in January 2012 and was found dead on a roadside a few months later. He had been beheaded. A note next to his body said he was killed because a ransom had not been paid. Mr Dale was born in York but lived in Dumfries. He spent 30 years working in countries including Somalia, Afghanistan and Iraq. An inquest into his death was held at Chesterfield Coroners Court because he is buried in Derbyshire. The court heard that the Muslim convert, who was formerly known as Kenneth, worked as a humanitarian assistance relief worker. Following his abduction, negotiations were undertaken by the International Committee of the Red Cross with the help of the UK government. His body was found on 29 April 2012. The inquest was told that he died as a result of decapitation. Senior coroner Dr Robert Hunter concluded that Mr Dale was unlawfully killed while providing international humanitarian assistance. \\\\\n \\textbf{Summary:} A British aid worker was unlawfully killed by \\hlc[bleudefrance!40]{Islamist militants} in \\hlc[yellow!60]{Pakistan}, an inquest has heard. \\\\\n \\textbf{Explanation:} \\textit{``Islamist militant''} is an extrinsic error as it can not be found in or inferred from the article. The information is also not verifiable by Wikipedia. \\textit{``Pakistan''} is world knowledge as \\textbf{Quetta} in the article is a city in Pakistan according to Wikipedia. \\\\\n \\bottomrule\n \\end{tabular}\n \\caption{Guideline for our summary error annotation (\\S~\\ref{sec:error_annotation}).}\n \\label{fig:annotation_guideline}\n\\end{figure*}\n\n\n\n\\begin{figure*}[t]\n \\centering\n \\fontsize{10}{12}\\selectfont\n \\begin{tabular}{p{0.92\\textwidth}}\n \\toprule\n In this study, you will evaluate 100 sets of summaries produced by four systems. For each set, its corresponding article and a baseline summary are shown before the four system summaries. The errors in the baseline summary are highlighted. \\\\\n Please \\textit{first read the article and the baseline summary} and then \\textit{compare each system summary against the baseline summary} based on \\textbf{informativeness} and \\textbf{factual consistency}. In addition, please decide the \\textbf{operations} made by the system to achieve better factual consistency. \\\\\n For informativeness and factual consistency, you need to label whether the system summary is better or worse than the baseline summary. 
You can also label the system summary as tying with the baseline summary. \\\\\n You need to consider two types of operations: \\textbf{deletions} and \\textbf{substitutions}. Please label the system summary as making deletions, substitutions, or both operations. Examples for the aspects and the operations are given below. \\\\\n \\midrule\n \n \\textbf{Article:} Alexys Brown, also known as Lexi, died at her home in Emmadale Close, Weymouth, on Thursday. An investigation is under way to discover how she became trapped. A post-mortem examination is due to be carried out this week. It was originally hoped the appeal would raise \u00a32,000. Alison Record, who started the Just Giving appeal, said she was \"heart broken\" over the death. ``Everybody by now has heard of the terrible tragedy the Brown family have suffered with the loss of their beautiful and beloved little girl Lexi,'' the appeal page reads. Many other comments have been posted on the appeal page. Steph Harris said: ``Thinking of you all at this devastating time, fly high beautiful princess. Love Steph and family xxx'' Lesley Andrews added: ``No amount of money will take away the pain, but so much love comes with every penny. Take care. xx'' Aster Group, the housing association responsible for managing the home, is assisting with the police investigation. The Health and Safety Executive (HSE) is also investigating. Dorset County Council said it had not installed the disabled lift at the property. \\\\\n \n \\textbf{Baseline Summary:} An appeal to raise \\hlc[crimsonglory!40]{10,000 pounds} for the family of a \\hlc[bleudefrance!40]{three-year-old} girl who died after becoming trapped in a lift has raised \\hlc[bleudefrance!40]{more than 20,000 pounds}. \\\\\n \n \\smallskip\n \\textbf{Informativeness:} Whether the summary captures salient content from the input article. Note that incorrect content should be considered invalid. \\\\\n \\textbf{Win.} \\textit{An appeal to raise money for the family of a three-year-old girl who died after getting stuck in a lift was originally hoped for raising \u00a32,000.} The target amount of the appeal is salient information. \\\\\n \\textbf{Tie.} \\textit{An appeal to raise money for the family of a girl who died after getting stuck in a lift has raised more than \u00a320,000.} Compared to the baseline, omitting incorrect information does not affect informativeness. \\\\\n \\textbf{Lose.} \\textit{An appeal to raise money for the family of a three-year-old girl has raised more than \u00a320,000.} This system summary does not mention the death of the girl, which is salient content of the article. \\\\\n \n \\smallskip\n \\textbf{Factual Consistency:} Whether the summary is factually correct based on the article and knowledge from Wikipedia. \\\\\n \\textbf{Win.} \\textit{An appeal has been set up for the family of an \\hlc[bleudefrance!40]{eight-year-old} girl who died after becoming trapped in a lift at her Dorset home.} This system summary does not contain the incorrect monetary amounts. \\\\\n \\textbf{Tie.} \\textit{An appeal to raise \\hlc[crimsonglory!40]{5,000 pounds} for the family of a \\hlc[bleudefrance!40]{seven-year-old} girl who died after becoming trapped in a lift has raised \\hlc[bleudefrance!40]{more than 20,000 pounds}.} This system summary makes similar errors to the baseline.
\\\\\n \\textbf{Lose.} \\textit{\\hlc[crimsonglory!40]{The family} of an \\hlc[bleudefrance!40]{eight-year-old} girl who died after becoming trapped in a lift at her Dorset home \\hlc[crimsonglory!40]{have set a fundraising target} of \\hlc[bleudefrance!40]{10,000 pounds}.} This system summary fabricates an event \\textit{The family have set a fundraising target}, which is more severe than errors of modifiers. \\\\\n \n \\smallskip\n \\textbf{Deletion:} The incorrect content in the baseline summary is deleted. \\\\\n - \\textit{An appeal for the family of a \\hlc[bleudefrance!40]{three-year-old} girl who died after becoming trapped in a lift has raised \\hlc[bleudefrance!40]{more than 20,000 pounds}.} The error \\textit{``10{,}000 pounds''} is deleted. \\\\\n \n \\textbf{Substitution:} The incorrect content in the baseline summary is replaced with correct one. \\\\\n - \\textit{An appeal to raise 2,000 pounds for the family of a \\hlc[bleudefrance!40]{three-year-old} girl who died after becoming trapped in a lift has raised \\hlc[bleudefrance!40]{more than 20,000 pounds}.} The error \\textit{``10{,}000 pounds''} is substituted with \\textit{``2,000 pounds''}, which is the correct information. \\\\\n \n \\bottomrule\n \\end{tabular}\n \\caption{Guideline for our human evaluation (\\S~\\ref{subsec:humaneval}).}\n \\label{fig:human_eval_guideline}\n\\end{figure*}\n\n\n\\begin{figure*}[th]\n \\centering\n \\small\n \\begin{tabular}{p{0.95\\textwidth}}\n \\toprule\n \\textbf{Example 1} \\\\\n \\midrule\n \\textbf{CNN\/DM Article:} At the grand old age of 75, Jack Nicklaus is still capable of hitting aces. The Golden Bear added another magic moment to his storied career at Augusta National in the Par-3 Contest. Stepping up to the tee on the 130-yard fourth, the greatest golfer of all time saw his shot sail beyond the flag before spinning back into the hole. Jack Nicklaus gave the crowd something to cheer with a hole in one on the fourth during the Par-3 Contest. Nicklaus holds up his ball to an adoring crowd as Gary Player (left) and Ben Crenshaw salute the great. Crenshaw and Nicklaus fist pump following his ace on the 130-yard hole at Augusta National. Nicklaus was playing alongside Gary Player and Ben Crenshaw as the first roar of Masters week filled the skies in a sunswept Georgia. 'I had an interview this morning, and I said \"well, all I've got to do is go out and win the Par-3 and make a hole and one\", and I make a hole in one,' Nicklaus said afterwards. . The ace was Nicklaus' first on the hallowed turf of Augusta, but the veteran saw his achievement eclipsed by that of Camilo Villegas on a day which saw five hole-in-ones, levelling the record from 2002. The Colombian followed Nicklaus with an ace on the fourth and also managed one on the eighth, but he couldn't quite land the title, losing out to world No 74 Kevin Streelman in a play-off. Nicklaus became the youngest person to wear a green jacket in 1963, and collected his sixth in 1986. He is one of five men to complete the career grand slam, an accolade which favourite Rory McIlroy can achieve if he wins his third major in succession. \\\\\n \\midrule\n \\textbf{Cross-entropy:} Jack Nicklaus hit a hole in one on the fourth during the Par-3 Contest. The 75-year-old was playing alongside Gary Player and Ben Crenshaw. Camilo Villegas and \\hlc[crimsonglory!40]{Kevin Streelman} also hit hole-in-ones on the day. \\\\\n \\textbf{Entailment reranking:} Jack Nicklaus hit a hole in one on the fourth during the Par-3 Contest. 
The 75-year-old was playing alongside Gary Player and Ben Crenshaw. Camilo Villegas and \\hlc[crimsonglory!40]{Kevin Streelman} also hit hole-in-ones on day one. \\\\\n \\textbf{Unlikelihood training:} Jack Nicklaus hit a hole in one on the fourth during the Par-3 Contest. The 75-year-old was playing alongside Gary Player and Ben Crenshaw. Camilo Villegas followed Nicklaus with a hole-in-one on the same hole. Nicklaus is one of five men to complete the career grand slam. \\\\\n \\textbf{Contrastive learning:} Jack Nicklaus hit a hole in one on the fourth during the Par-3 Contest. The 75-year-old was playing alongside Gary Player and Ben Crenshaw. Camilo Villegas followed Nicklaus with a hole-in-one on the eighth. \\\\\n \\midrule\n \\textbf{Example 2} \\\\\n \\midrule\n \\textbf{XSum Article:} The British supermarket giant, the world's third largest retailer, has been looking to dispose of its loss-making US food chain for some time. Around 4,000 of Fresh \\& Easy's 5,000 employees are to transfer to the new business as part of the deal. Tesco said in a statement that this is an \"orderly and efficient exit from the US market\". Under the terms of the deal, Tesco will loan the new business around \u00a380m. Fresh \\& Easy, which operates 200 stores across California, Nevada and Arizona, has lost money since opening in 2007. Tesco did not specify which stores are being transferred to Yucaipa, but it is understood the remaining 50 shops will be closed. It will cost Tesco \u00a3150m to dispose of the Fresh \\& Easy brand, but part of that is a loan which may be paid back. Also, if Yucaipa is sucessful and the business becomes profitable, Tesco has the option to buy a stake. Yucaipa is a Los Angeles based private equity company focused on supermaket and restaurant businesses and is run by billionaire investor Ron Burkle. In April of this year, Tesco reported its first fall in annual profits for 20 years. \\\\\n \\midrule\n \\textbf{Cross-entropy:} Tesco has agreed to sell its US food chain Fresh \\& Easy to Yucaipa in a deal \\hlc[bleudefrance!40]{worth about 150m}, the BBC understands. \\\\\n \\textbf{Entailment reranking:} Tesco is to sell its US food chain Fresh \\& Easy to Yucaipa \\hlc[bleudefrance!40]{for an undisclosed sum}. \\\\\n \\textbf{Unlikelihood training:} Tesco has agreed to sell its US food business, Fresh \\& Easy, to a private equity firm called Yucaipa \\hlc[bleudefrance!40]{for an undisclosed sum}. \\\\\n \\textbf{Contrastive learning:} Tesco has agreed to sell its US food chain, Fresh \\& Easy, to Yucaipa. \\\\\n \\midrule\n \\textbf{Example 3} \\\\\n \\midrule\n \\textbf{XSum Article:} Neil Aspin's promotion-chasing hosts have not lost in nine National League matches while Adam Lockwood's side are unbeaten in five. Guiseley went ahead on 15 minutes against the run of play when a throw-in found James Hurst who squared to Jake Lawlor to stroke into an empty net. Gateshead defender Liam Hogan superbly blocked Jordan Preston's effort and Guiseley keeper Jonny Maxted then saved well from Wesley York's shot just before the break. The hosts, who started the second half well, levelled on 62 minutes when a slip by half-time substitute Derek Asamoah let York curl sweetly into the top-right corner from the edge of the box. Match report supplied by the Press Association. Match ends, Gateshead 1, Guiseley 1. Second Half ends, Gateshead 1, Guiseley 1. Substitution, Guiseley. Michael Rankine replaces Jordan Preston. Substitution, Gateshead. Luke Hannant replaces Gus Mafuta. 
Gus Mafuta (Gateshead) is shown the yellow card. Substitution, Guiseley. Adam Boyes replaces Jake Cassidy. Goal! Gateshead 1, Guiseley 1. Wes York (Gateshead). Substitution, Guiseley. Derek Asamoah replaces Kevan Hurst. Second Half begins Gateshead 0, Guiseley 1. First Half ends, Gateshead 0, Guiseley 1. Simon Walton (Guiseley) is shown the yellow card. Goal! Gateshead 0, Guiseley 1. Jake Lawlor (Guiseley). First Half begins. Lineups are announced and players are warming up. \\\\\n \\midrule\n \\textbf{Cross-entropy:} Gateshead and Guiseley maintained their \\hlc[crimsonglory!40]{unbeaten starts to the season} with a 1-1 draw at the International Stadium. \\\\\n \\textbf{Entailment reranking:} Gateshead and Guiseley shared the spoils after a 1-1 draw at the International Stadium. \\\\\n \\textbf{Unlikelihood training:} Gateshead and Guiseley shared the spoils after a \\hlc[crimsonglory!40]{goalless} draw in the National League. \\\\\n \\textbf{Contrastive learning:} Gateshead and Guiseley shared the spoils after a 1-1 draw at Gateshead. \\\\\n \\bottomrule\n \n \\end{tabular}\n \\caption{Sample generated summaries by fine-tuned BART models. Intrinsic errors are highlighted in \\hlc[crimsonglory!40]{red} and extrinsic errors are in \\hlc[bleudefrance!40]{blue}. \n \\textsc{MaskEnt} and \\textsc{SysLowCon} are used for negative sample construction with unlikelihood training and contrastive learning.}\n \\label{fig:generation_examples}\n\\end{figure*}\n\n\\section{Summary Error Annotation and Model Behavior Analysis}\n\\label{sec:error_annotation}\n\nWe first describe our annotation of unfaithfulness errors made by state-of-the-art models, i.e., models fine-tuned from BART and PEGASUS\non XSum and CNN\/DM.\nWe then probe the model generation behavior that is indicative of errors, which guides the design of our negative sample construction strategies.\n\nWe annotate 600 ($150\\times2\\times2$) summaries to examine how often the models ``hallucinate'', i.e., generate content that is not grounded in the source.\nTo characterize errors, we annotate \\textit{text spans} in summaries with \n(i) \\textbf{intrinsic errors} caused by misconstructing phrases or clauses from the source; \nand (ii) \\textbf{extrinsic errors} which include words not in the source that cannot be verified by Wikipedia. \nContent that is not covered by the article but can be validated by Wikipedia is annotated as \\textbf{world knowledge}; the models' behavior when generating such content differs from their behavior when generating errors.\n\nTwo fluent English speakers with extensive experience in summary evaluation and error labeling are hired. For each sample, they are shown the article and two system summaries, and instructed to annotate text spans with the aforementioned errors and world knowledge. After labeling every 50 samples, the annotators discuss and resolve any disagreement.\nThe Fleiss's Kappa values on XSum and CNN\/DM are $0.35$ and $0.45$, respectively.\n\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.45\\textwidth]{figures\/error_type_dist.pdf}\n \n \\caption{Percentage of samples with intrinsic and extrinsic error spans for models fine-tuned from BART and PEGASUS on XSum and CNN\/DM.}\n \\label{fig:error_type_dist}\n \n\\end{figure}\n\n\\medskip\n\\noindent \\textbf{Error statistics} are displayed in Fig.~\\ref{fig:error_type_dist}. \nExtrinsic errors dominate both datasets, especially on XSum.
$58.7\\%$ of summaries by BART (and $44.0\\%$ by PEGASUS) contain at least one extrinsic error.\nNotably, PEGASUS is a newer model pre-trained on a larger amount of data, and it thus produces fewer errors than BART and the older models studied for error annotation by~\\newcite{maynez-etal-2020-faithfulness}. This observation also highlights the value of our annotations for the future development and evaluation of summarization models. \n\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.46\\textwidth]{figures\/prob_both.pdf}\n \\caption{\n Probability distributions of generating the first tokens of proper nouns and numbers, grouped by extrinsic errors, intrinsic errors, world knowledge, and other correct tokens.}\n \\label{fig:output_prob_propnum}\n\\end{figure}\n\n\\paragraph{Low confidence generation is indicative of extrinsic errors.}\nInspired by recent work that studies model prediction confidence~\\cite{liu2021tokenlevel}, we examine generation probabilities for tokens of \\textit{different part-of-speech (POS) tags}.\nFig.~\\ref{fig:output_prob_propnum} shows salient results on the generation probabilities of the first token of a proper noun or a number (with additional analysis provided in Appendix~\\ref{appendix:error_annotation}). \nAs observed, model confidence tends to be lower for the first tokens of proper nouns and numbers if they are part of spans with \\textit{extrinsic errors}.\nAlso note that world knowledge, which cannot be inferred from the source either, often has a higher generation probability than extrinsic errors. \nTake this snippet generated by a fine-tuned BART as an example: \\textit{``Manchester United captain Wayne Rooney's testimonial game against Manchester City$\\ldots$''}. \\textit{``Manchester City''} is an extrinsic error and \\textit{``Wayne''} is produced as world knowledge. The model assigns a low probability of $0.10$ to the first token of \\textit{``Manchester City''} and a high probability of $0.92$ to the token \\textit{``Wayne''}. \nThis implies that model confidence can be a useful indicator for negative sample collection. \n\n\n\\section{Conclusion}\n\\label{sec:conclusion}\n\nWe present \\textsc{CLIFF}, a contrastive learning-based framework to promote faithfulness and factuality of abstractive summaries. \\textsc{CLIFF} uses both references and summaries that are factually inconsistent with the articles to train systems to be better at discriminating errors from factual and salient content. \nWe further study strategies that automatically create erroneous summaries by editing references or leveraging system outputs, inspired by our new summary error analysis on state-of-the-art models. \nBoth automatic evaluation and human ratings show that \\textsc{CLIFF} achieves consistent improvements over competitive comparison methods, and is generalizable across datasets with systems fine-tuned from different large models.
\n\n\\section*{Acknowledgements}\nThis research is supported in part by Oracle for Research Cloud Credits, National Science Foundation through a CAREER award IIS-2046016, and the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via contract \\# FA8650-17-C-9116. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government. The U.S.
Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein. \nWe thank three anonymous reviewers for their valuable suggestions. \n\n\n\\section{Results}\n\\label{sec:results}\n\n\n\\subsection{Automatic Evaluation}\n\\label{subsec:autoeval}\nWe report results by models fine-tuned from BART and PEGASUS with different objectives and negative samples on XSum and CNN\/DM in Tables~\\ref{tab:main_result} and~\\ref{tab:pegasus_result}. \n\\textsc{CLIFF} models use a summary representation that averages over all tokens and applies an MLP projection, with other variants discussed in \\S~\\ref{subsec:ablaton}. Unless explicitly stated, comparison models are fine-tuned from the same large model used by \\textsc{CLIFF}. \n\n\nFirst, compared with other factuality improvement models (top of the tables), \\textit{almost all \\textsc{CLIFF} models trained with different negative samples uniformly produce higher QuestEval scores across datasets with both large models}, with the improvements more pronounced on XSum.\nImportantly, ROUGE scores for \\textsc{CLIFF} models are comparable to or better than those of baselines trained with cross-entropy, e.g., on CNN\/DM as in Table~\\ref{tab:main_result}. \nA similar trend is observed with the FactCC metric, especially when using PEGASUS as the base model (Table~\\ref{tab:pegasus_result}). \nNote that \\textsc{EntailRank} tends to yield significantly higher FactCC scores, though it obtains lower QuestEval scores than the cross-entropy baseline. Human inspection finds that \\textsc{EntailRank} can select beams with peculiar words that receive high FactCC scores without actually improving factuality. \nMoreover, other comparisons based on post-hoc \\textsc{Correction} and model engineering (\\textsc{FASum}) offer only incremental gains. \nThe sample selection-based method, \\textsc{SubsetFT}, sacrifices ROUGE scores significantly.
\nOverall, \\textsc{CLIFF} demonstrates stronger generalizability.\n\n\n\\begin{table}[!t]\n \\centering\n \\small\n \\setlength{\\tabcolsep}{2.5pt}\n \\begin{tabular}{lcccccc}\n \\toprule\n \\textbf{Model} & \\multicolumn{3}{c}{\\textbf{XSum}} & \\multicolumn{3}{c}{\\textbf{CNN\/DM}} \\\\\n \\cmidrule(lr){2-4} \\cmidrule(lr){5-7}\n & \\textbf{QEval} & \\textbf{FC} & \\textbf{R-L} & \\textbf{QEval} & \\textbf{FC} & \\textbf{R-L} \\\\\n \\midrule\n \\multicolumn{7}{l}{\\textit{Comparisons without Negative Samples}}\\\\\n \\textsc{CrsEntropy} & 32.50 & 25.48 & \\textbf{39.07} & 50.21 & 44.44 & 40.39 \\\\\n \\textsc{EntailRank} & 32.42 & \\textbf{41.90} & 38.47 & 50.15 & \\textbf{61.04} & \\textbf{40.67} \\\\\n \\textsc{Correction} & 32.55 & 25.15 & 39.02 & 49.48 & 32.96 & 39.79 \\\\\n \\midrule\n \\midrule\n \\multicolumn{7}{l}{\\textit{Comparisons with Unlikelihood Training}} \\\\\n \\textsc{FCSample} & 32.79 & \\hlc[xgreen]{25.37} & 38.46 & 50.63 & 45.45 & 39.28 \\\\\n \n \\textsc{SwapEnt} & 32.88 & 24.76 & 37.91 & 50.43 & 43.02 & \\hlc[xgreen]{38.96} \\\\\n \\textsc{MaskEnt} & 33.04 & \\hlc[xgreen]{26.30} & 37.51 & 51.11 & 52.19 & \\hlc[xgreen]{39.34} \\\\\n \\textsc{MaskRel} & \\hlc[xgreen]{33.08} & 24.38 & 38.05 & 51.14 & 52.93 & 39.31 \\\\\n \\textsc{RegenEnt} & 32.89 & 24.46 & \\hlc[xgreen]{38.47} & \\hlc[xgreen]{51.11} & \\hlc[xgreen]{52.90} & 39.23 \\\\\n \\textsc{RegenRel} & 32.91 & 24.80 & \\hlc[xgreen]{38.46} & 51.07 & \\hlc[xgreen]{53.68} & \\hlc[xgreen]{39.45} \\\\\n \\textsc{SysLowCon} & 31.66 & \\hlc[xgreen]{26.06} & 34.03 & \\hlc[xgreen]{50.92} & 51.08 & 39.19 \\\\\n \\midrule\n \\multicolumn{7}{l}{\\textit{Our Method:} \\textsc{CLIFF}} \\\\\n \\textsc{Batch} & 32.64 & 24.96 & 38.42 & 51.03\\rlap{$^\\ast$} & 51.81\\rlap{$^\\ast$} & 39.38 \\\\\n \\hdashline\n \\textsc{FCSample} & \\hlc[xgreen]{32.96}\\rlap{$^\\ast$} & 25.28 & \\hlc[xgreen]{38.58} & \\hlc[xgreen]{51.00}\\rlap{$^\\ast$} & \\hlc[xgreen]{51.80}\\rlap{$^\\ast$} & \\hlc[xgreen]{39.37} \\\\\n \\textsc{SwapEnt} & \\hlc[xgreen]{33.09}\\rlap{$^\\ast$} & \\hlc[xgreen]{25.09} & \\hlc[xgreen]{38.58} & \\hlc[xgreen]{51.16}\\rlap{$^\\ast$} & \\hlc[xgreen]{52.97}\\rlap{$^\\ast$} & 38.95 \\\\\n \\textsc{MaskEnt} & \\hlc[xgreen]{33.09}\\rlap{$^\\ast$} & 25.75 & \\hlc[xgreen]{38.12} & \\hlc[xgreen]{51.13}\\rlap{$^\\ast$} & \\hlc[xgreen]{53.60}\\rlap{$^\\ast$} & 39.24 \\\\\n \\textsc{MaskRel} & 33.06\\rlap{$^\\ast$} & \\hlc[xgreen]{25.28} & \\hlc[xgreen]{38.37} & \\hlc[xgreen]{\\textbf{51.17}}\\rlap{$^\\ast$} & \\hlc[xgreen]{53.34}\\rlap{$^\\ast$} & \\hlc[xgreen]{39.36} \\\\\n \\textsc{RegenEnt} & \\hlc[xgreen]{33.09}\\rlap{$^\\ast$} & \\hlc[xgreen]{24.48} & 38.33 & 50.99\\rlap{$^\\ast$} & 52.18\\rlap{$^\\ast$} & \\hlc[xgreen]{39.28} \\\\\n \\textsc{RegenRel} & \\hlc[xgreen]{33.16}\\rlap{$^\\ast$} & \\hlc[xgreen]{24.82} & 38.30 & \\hlc[xgreen]{51.16}\\rlap{$^\\ast$} & 53.21\\rlap{$^\\ast$} & 39.25 \\\\\n \\textsc{SysLowCon} & \\hlc[xgreen]{\\textbf{33.21}}\\rlap{$^\\ast$} & 25.18 & \\hlc[xgreen]{38.18} & 50.85\\rlap{$^\\ast$} & \\hlc[xgreen]{53.73}\\rlap{$^\\ast$} & \\hlc[xgreen]{39.30} \\\\\n \\bottomrule\n \\end{tabular}\n \n \\caption{\n Results of models fine-tuned from PEGASUS on XSum and CNN\/DM. We report results on $5,000$ randomly selected samples on CNN\/DM, due to long running time of QuestEval. \n \n \n For models of unlikelihood training and CLIFF that use the same negative samples, the better of the two is highlighted with \\hlc[xgreen]{green}. 
\n $\\ast$: our model is significantly better than \\textsc{CrsEntropy} (approximate randomization test, $p < 0.005$).}\n \\label{tab:pegasus_result}\n\\end{table}\n\n\nSecond, \\textit{\\textsc{CLIFF} is more effective and robust than unlikelihood training} with the same negative samples. \nAccording to Table~\\ref{tab:main_result}, using 7 negative sample construction strategies on two datasets, \\textsc{CLIFF} obtains higher QuestEval scores than unlikelihood training in 12 out of the 14 comparisons. Using PEGASUS, \\textsc{CLIFF} also outperforms in 11 setups, as listed in Table~\\ref{tab:pegasus_result}. \nSimilar trends are found on FactCC and ROUGE-L.\nAnother noteworthy point is that \\textsc{CLIFF}'s improvements over the cross-entropy baseline are more consistent, whereas unlikelihood training occasionally hurts factuality or ROUGE scores significantly. \nWe believe the key advantage of \\textsc{CLIFF} resides in measuring representation similarities between positive and negative samples in the same batch, allowing models to better differentiate between correct and erroneous summaries. \n\n\nFinally, among all variants, \\textit{\\textsc{CLIFF} trained with low confidence summaries as negative samples obtains the best QuestEval scores on the more abstractive dataset}. As seen in Table~\\ref{tab:main_result}, using low confidence summaries also improves FactCC scores on both datasets, and enhances ROUGE-L on the more extractive dataset CNN\/DM. \nThis indicates that \\textit{system generated summaries organically contribute more diverse errors made by existing models}, which are particularly suitable for our CL framework. As we use summaries generated by the same model for \\textsc{CLIFF} training, one future direction is to use outputs by different models. \nFor our mask-and-fill and source-conditioned regeneration strategies, we find that relation-anchored construction often beats its entity-anchored counterpart. This calls for efforts that steer the entity-driven methods in a more relation-focused direction. \n\n\n\\paragraph{Combining Strategies.}\nWe further show results of fine-tuning BART using samples from combined negative sample construction strategies in Table~\\ref{tab:combine_strategy}. As can be seen, \\textit{combining \\textsc{SysLowCon} with other strategies yields better QuestEval scores than models trained with negative samples from any single strategy}, except for \\textsc{MaskEnt} and \\textsc{RegenEnt} on XSum. This signifies the importance of covering diverse types of errors in negative samples.
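\n\nFor reference, the approximate randomization test behind the significance markers above can be sketched as below, paired over per-sample metric scores; the function and variable names are hypothetical and do not correspond to our released code.\n\n\\begin{verbatim}\nimport random\n\ndef approx_randomization(scores_a, scores_b, trials=10000, seed=0):\n    # scores_a and scores_b: per-sample metric scores of two systems on\n    # the same test articles (paired by index).\n    rng = random.Random(seed)\n    n = len(scores_a)\n    observed = abs(sum(scores_a) - sum(scores_b)) / n\n    count = 0\n    for _ in range(trials):\n        swapped_a, swapped_b = [], []\n        for a, b in zip(scores_a, scores_b):\n            if rng.random() < 0.5:   # randomly swap each paired score\n                a, b = b, a\n            swapped_a.append(a)\n            swapped_b.append(b)\n        if abs(sum(swapped_a) - sum(swapped_b)) / n >= observed:\n            count += 1\n    return (count + 1) / (trials + 1)   # estimated p-value\n\\end{verbatim}\n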
\n\n\n\\begin{table}[t]\n \\centering\n \\small\n \\setlength{\\tabcolsep}{2.5pt}\n \\begin{tabular}{lcccccc}\n \\toprule\n \\textbf{Strategy} & \\multicolumn{3}{c}{\\textbf{XSum}} & \\multicolumn{3}{c}{\\textbf{CNN\/DM}} \\\\\n \\cmidrule(lr){2-4} \\cmidrule(lr){5-7} \n & \\textbf{QEval} & \\textbf{FC} & \\textbf{R-L} & \\textbf{QEval} & \\textbf{FC} & \\textbf{R-L} \\\\\n \\midrule\n \\textsc{SysLowCon} & 33.35 & 25.47 & \\textbf{36.19} & 51.05 & 50.05 & \\textbf{41.01} \\\\\n + \\textsc{SwapEnt} & \\textbf{33.40} & \\textbf{25.50} & 35.50 & \\textbf{51.32} & \\textbf{53.95} & 40.57 \\\\\n + \\textsc{MaskEnt} & 33.21 & 25.47 & 35.91 & 51.16 & 51.90 & 40.66 \\\\\n + \\textsc{MaskRel} & 33.39 & 25.20 & 35.70 & 51.24 & 52.48 & 40.80 \\\\\n + \\textsc{RegenEnt} & 33.31 & 25.07 & 35.94 & 51.21 & 51.86 & 40.91 \\\\\n + \\textsc{RegenRel} & 33.38 & 24.97 & 36.03 & 51.13 & 50.85 & 40.97 \\\\\n \\bottomrule\n \\end{tabular}\n \\caption{Results of fine-tuned BART with combinations of negative sample construction strategies.}\n \\label{tab:combine_strategy}\n\\end{table}\n\n\n\\subsection{Human Evaluation}\n\\label{subsec:humaneval}\n\n\\begin{table}[t]\n \\begin{subtable}[h]{0.48\\textwidth}\n \\small\n \\setlength{\\tabcolsep}{2pt}\n \\begin{tabular}{lcccccc}\n \\toprule\n & \\multicolumn{3}{c}{\\textbf{Inform.}} & \\multicolumn{3}{c}{\\textbf{Factual.}} \\\\\n \\textbf{Model} & \\textbf{Win}$\\uparrow$ & \\textbf{Tie} & \\textbf{Lose}$\\downarrow$ & \\textbf{Win}$\\uparrow$ & \\textbf{Tie} & \\textbf{Lose}$\\downarrow$ \\\\\n \\midrule\n \\textsc{EntailRank} & 3.3 & 84.3 & \\textbf{12.3} & 23.7 & 71.7 & \\textbf{4.7} \\\\\n \\textsc{ULL. MaskEnt} & 6.3 & 80.7 & 13.0 & 26.3 & 62.0 & 11.7 \\\\\n \\textsc{CL. Batch} & 5.3 & 80.0 & 14.7 & 21.7 & 68.0 & 10.3 \\\\\n \\textsc{CL. SysLowCon} & \\textbf{8.7} & 78.3 & 13.0 & \\textbf{31.3} & 61.7 & 7.0 \\\\\n \\bottomrule\n \\end{tabular}\n \\vspace{-1mm}\n \\caption{XSum}\n \n \\end{subtable}\n \n \\begin{subtable}[h]{0.48\\textwidth}\n \\small\n \\setlength{\\tabcolsep}{2pt}\n \\begin{tabular}{lcccccc}\n \\toprule\n & \\multicolumn{3}{c}{\\textbf{Inform.}} & \\multicolumn{3}{c}{\\textbf{Factual.}} \\\\\n \\textbf{Model} & \\textbf{Win}$\\uparrow$ & \\textbf{Tie} & \\textbf{Lose}$\\downarrow$ & \\textbf{Win}$\\uparrow$ & \\textbf{Tie} & \\textbf{Lose}$\\downarrow$ \\\\\n \\midrule\n \\textsc{EntailRank} & 2.3 & 86.3 & 11.3 & 4.7 & 94.7 & \\textbf{0.7} \\\\\n \\textsc{ULL. MaskEnt} & \\textbf{18.0} & 71.0 & 11.0 & 17.3 & 79.7 & 3.0 \\\\\n \\textsc{CL. Batch} & 17.3 & 74.7 & \\textbf{8.0} & \\textbf{20.7} & 77.0 & 2.3 \\\\\n \\textsc{CL. SysLowCon} & 15.7 & 75.7 & 8.7 & 20.0 & 77.7 & 2.3 \\\\\n \\bottomrule\n \\end{tabular}\n \\vspace{-1mm}\n \\caption{CNN\/DM}\n \\end{subtable}\n \n \\caption{\n Percentages of summaries that are better than, tied with, or worse than \\textsc{CrsEntropy}, in informativeness (Inform.) and factual consistency (Factual.) \n \n The Krippendorff's $\\alpha$s are $0.33$ and $0.62$ for the two aspects on XSum, and $0.34$ and $0.89$ on CNN\/DM. Our CL method using low confidence summaries is more frequently rated as better for informativeness and factuality on the more abstractive dataset XSum.\n }\n \\label{tab:human_eval}\n \n\\end{table}\n\n\n\\paragraph{Pairwise Comparison with Cross-entropy.} \nWe recruit the two human annotators for our summary error study, as well as another experienced annotator, to evaluate summary \\textbf{informativeness} and \\textbf{factual consistency}. 
For each article, the judges are shown summaries generated by the \\textsc{CrsEntropy} model and four other systems. They then rate each system summary against the \\textsc{CrsEntropy} summary.\nAll four summaries generated by the different factuality-improving models are shown in random order and without system names, ensuring a fair comparison among them.\n\nWe randomly pick $100$ articles from each dataset used in our error analysis study in \\S~\\ref{sec:error_annotation}, and evaluate summaries generated by \\textsc{EntailRank}, unlikelihood training (\\textsc{ULL}) with negative samples constructed by \\textsc{MaskEnt}, and \\textsc{CLIFF} models trained with \\textsc{Batch} and \\textsc{SysLowCon} negative samples. All are fine-tuned from BART. Detailed evaluation guidelines are in Appendix~\\ref{appendix:human_eval}. \n\n\nTable~\\ref{tab:human_eval} shows that on the more abstractive XSum data \\textit{the CL model trained with low confidence samples is more frequently rated as more informative and more factual} than \\textsc{CrsEntropy} summaries. This echoes our automatic evaluations with QuestEval in \\S~\\ref{subsec:autoeval}.\nOn CNN\/DM, all models trained with negative samples produce summaries with better informativeness and faithfulness.\nIn contrast, \\textsc{EntailRank} summaries are less distinguishable from outputs by \\textsc{CrsEntropy} on both datasets, as more ties are found. \nWe show sample outputs in Fig.~\\ref{fig:intro_sample}, with additional examples in Appendix~\\ref{appendix:outputs}. \n\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.47\\textwidth]{figures\/human_eval_error_type_dist.pdf}\n \\vspace{-2mm}\n \\caption{Portions of summaries with errors. CL models consistently reduce both types of errors.\n }\n \\label{fig:human_eval_error_dist}\n \\vspace{-3mm}\n\\end{figure}\n\n\\paragraph{Intrinsic vs. Extrinsic Errors.}\nNext, the annotators are asked to label text spans with intrinsic and extrinsic errors as done in \\S~\\ref{sec:error_annotation}. \nFig.~\\ref{fig:human_eval_error_dist} shows that \\textit{CL is more effective at reducing extrinsic errors than unlikelihood training} on both datasets.\nWe also observe slight decreases of world knowledge in the summaries (see the figure in Appendix~\\ref{appendix:human_eval}). \n\n\\paragraph{Error Correction Operations.}\nFinally, with reference to the \\textsc{CrsEntropy} summaries, human judges are instructed to label whether each system summary corrects any error made by \\textsc{CrsEntropy} via \\textbf{deletion} of the incorrect content, \\textbf{substitution} with factual information, or \\textbf{both}. \nAs seen in Fig.~\\ref{fig:error_correct_technique}, CL-based models restore factually consistent information, e.g., by replacing erroneous names and numbers with correct ones, more frequently than unlikelihood training or entailment reranking. \n\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.47\\textwidth]{figures\/error_correct_dist.pdf}\n \\vspace{-2mm}\n \\caption{Portions of error correction operations used by different models. \n Contrastive learning with \\textsc{SysLowCon} (CL.SLC) and \\textsc{Batch} (CL.B) substitute errors with correct content more often than unlikelihood training with \\textsc{MaskEnt} and \\textsc{EntailRank}. \n \n \n }\n \\label{fig:error_correct_technique}\n \\vspace{-3mm}\n\\end{figure}\n\n\n\n\\subsection{Variants of Summary Representation}\n\\label{subsec:ablaton}\n\nSample representation is critical for CL to be effective.
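\n\nFor reference, Eq.~(\\ref{eq:contrast}) consumes one vector per summary; the sketch below illustrates, under assumed tensor shapes and hypothetical helper names, how such a vector could be pooled from decoder states (all tokens vs.\\ the last token, with an optional MLP) and how the contrastive loss could then be computed. It is a minimal illustration rather than our released implementation, and it averages over ordered positive pairs instead of using the $\\binom{|P|}{2}$ normalizer.\n\n\\begin{verbatim}\nimport torch\nimport torch.nn.functional as F\n\ndef summary_representation(decoder_states, token_mask, mlp=None, pool="all"):\n    # decoder_states: (seq_len, hidden); token_mask: (seq_len,) floats,\n    # 1.0 for tokens to pool over (all tokens, or only entity tokens).\n    if pool == "all":\n        rep = (decoder_states * token_mask.unsqueeze(-1)).sum(0) / token_mask.sum()\n    else:  # "last": representation of the final decoded token\n        rep = decoder_states[token_mask.nonzero().max()]\n    return mlp(rep) if mlp is not None else rep\n\ndef contrastive_loss(pos_reps, neg_reps, tau=1.0):\n    # pos_reps: (|P|, hidden), neg_reps: (|N|, hidden) for one article.\n    reps = torch.cat([pos_reps, neg_reps], dim=0)\n    sims = F.cosine_similarity(reps.unsqueeze(1), reps.unsqueeze(0), dim=-1) / tau\n    n_pos, loss, terms = pos_reps.size(0), 0.0, 0\n    for i in range(n_pos):\n        keep = torch.ones(reps.size(0), dtype=torch.bool)\n        keep[i] = False  # exclude y_i itself from the denominator\n        log_denom = torch.logsumexp(sims[i][keep], dim=0)\n        for j in range(n_pos):\n            if j != i:\n                loss = loss - (sims[i, j] - log_denom)\n                terms += 1\n    return loss / max(terms, 1)\n\\end{verbatim}\n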
Here we investigate summary representation variants as discussed in \\S~\\ref{sec:method}. \nThere are two major considerations: (1) Should we consider all tokens in a summary or only representative ones (e.g., entities or last token)? (2) Should additional transformation, i.e., an MLP, be used?\n\nExperiments on XSum using three negative sample construction strategies demonstrate that \\textit{averaging the decoder outputs of all tokens and adding an MLP projection yield the best overall performance}, as shown in Table~\\ref{tab:ablation_representation}. The implications are at least two-fold. \nFirst, even for entity- or relation-triggered sample modifications, using more global context helps with CL training. \nSecond, additional transformation can help avoid model degeneration. For instance, more nonsensical and repetitive content is produced by variants without MLP. \n\n\n\\begin{table}[t]\n \\centering\n \\setlength{\\tabcolsep}{2.5pt}\n \\fontsize{8.5}{10}\\selectfont\n \\begin{tabular}{lccccccc}\n \\toprule\n \\multicolumn{2}{l}{} & \\multicolumn{2}{c}{\\textbf{\\textsc{SwapEnt}}} & \\multicolumn{2}{c}{\\textbf{\\textsc{MaskRel}}} & \\multicolumn{2}{c}{\\textbf{\\textsc{SysLowCon}}} \\\\\n \\textbf{Rep.} & \\textbf{MLP} & \\textbf{QEval} & \\textbf{FC} & \\textbf{QEval} & \\textbf{FC} & \\textbf{QEval} & \\textbf{FC} \\\\\n \\midrule\n \\multicolumn{8}{l}{\\textit{BART}} \\\\\n Last & \\checkmark & 33.15 & 25.10 & 33.20 & 25.29 & 33.10 & 24.85 \\\\\n Last & & \\textcolor{red!53}{+0.13} & \\textcolor{red!42}{+0.02} & \\textcolor{blue!21}{--0.01} & \\textcolor{blue!72}{--0.32} & \\textcolor{blue!47}{--0.07} & \\textcolor{blue!50}{--0.10} \\\\\n Entity & \\checkmark & \\hlc[xgreen]{\\textbf{33.35}} & 25.41 & 33.34 & 25.44 & 33.32 & 24.46 \\\\\n Entity & & \\textcolor{blue!50}{--0.13} & \\textcolor{blue!47}{--0.07} & \\textcolor{blue!41}{--0.14} & \\textcolor{blue!45}{--0.05} & \\textcolor{blue!69}{--0.29} & \\textcolor{red}{+0.72} \\\\\n All & \\checkmark & 33.30 & \\hlc[xgreen]{\\textbf{25.67}} & \\hlc[xgreen]{\\textbf{33.35}} & \\hlc[xgreen]{\\textbf{25.69}} & \\hlc[xgreen]{\\textbf{33.35}} & \\hlc[xgreen]{\\textbf{25.47}} \\\\\n All & & \\textcolor{blue!43}{--0.23} & \\textcolor{blue!90}{--0.80} & \\textcolor{blue!44}{--0.04} & \\textcolor{blue!88}{--0.48} & \\textcolor{blue!46}{--0.26} & \\textcolor{blue!80}{--0.40} \\\\\n \\midrule\n \\multicolumn{8}{l}{\\textit{PEGASUS}} \\\\\n Last & \\checkmark & 33.07 & \\hlc[xgreen]{\\textbf{25.45}} & 32.99 & 25.09 & 33.18 & 24.94 \\\\\n Last & & \\textcolor{blue!47}{--0.07} & \\textcolor{blue!96}{--0.56} & \\textcolor{red!41}{+0.01} & \\textcolor{blue!41}{-0.01} & \\textcolor{blue!47}{--0.02} & \\textcolor{blue!44}{--0.04} \\\\\n Entity & \\checkmark & 33.03 & 25.43 & 33.05 & 24.77 & 33.20 & 24.59 \\\\\n Entity & & \\textcolor{red!41}{+0.01} & \\textcolor{blue!74}{--0.34} & \\textcolor{blue!42}{--0.05} & \\textcolor{red}{\\textbf{\\hlc[xgreen]{+0.64}}} & \\textcolor{blue!70}{--0.30} & \\textcolor{red!45}{+0.05} \\\\\n All & \\checkmark & \\hlc[xgreen]{\\textbf{33.09}} & 25.09 & 33.06 & 25.28 & \\hlc[xgreen]{\\textbf{33.21}} & \\hlc[xgreen]{\\textbf{25.18}} \\\\\n All & & \\textcolor{blue!51}{--0.11} & \\textcolor{red!65}{+0.25} & \\textcolor{red!43}{\\hlc[xgreen]{\\textbf{+0.03}}} & \\textcolor{blue!59}{-0.19} & \\textcolor{blue!42}{--0.02} & \\textcolor{blue}{--0.80} \\\\\n \\bottomrule\n \\end{tabular}\n \\vspace{-2mm}\n \\caption{\n Comparing different formulations of summary representation in CL. 
\n For models without MLP, we display score changes from their counterparts. \n Overall, using all tokens with MLP produces better summaries. \n }\n \\label{tab:ablation_representation}\n \\vspace{-2mm}\n\\end{table}\n\n\\section{Related Work}\n\\label{sec:related_work}\n\n\n\\paragraph{Factuality Improvement and Evaluation.} \nNeural abstractive summaries often contain unfaithful content with regard to the source~\\cite{falke-etal-2019-ranking}. To improve summary factuality, three major types of approaches have been proposed. First, a separate correction model is learned to fix errors made by the summarizers~\\cite{zhao-etal-2020-reducing,chen-etal-2021-improving}, including replacing entities absent from the source~\\cite{dong-etal-2020-multi} or revising all possible errors~\\cite{cao-etal-2020-factual}. \nThe second type modifies the sequence-to-sequence architecture to incorporate relation triplets~\\cite{cao2018faithful}, knowledge graphs~\\cite{zhu-etal-2021-enhancing}, and topics~\\cite{aralikatte-etal-2021-focus} to inform the summarizers of article facts. Yet additional engineering efforts and model retraining are often needed. \nFinally, discarding noisy samples from model training has also been investigated~\\cite{nan-etal-2021-entity,goyal-durrett-2021-annotating}; however, it often leads to degraded summary informativeness. \nIn comparison, our contrastive learning framework allows the model to be trained end-to-end and does not require model modification, thus providing a general solution for learning summarization systems. \n\n\n\n\nAlongside improving factuality, we have also witnessed growing interest in automated factuality evaluation, since popular word-matching-based metrics, e.g., ROUGE, correlate poorly with human-rated factual consistency levels~\\cite{gabriel-etal-2021-go,fabbri2021summeval}. \nEntailment-based scorers are designed at the summary level~\\cite{kryscinski-etal-2020-evaluating} and at the finer-grained dependency relation level~\\cite{goyal-durrett-2020-evaluating}. \nQA models are employed to measure content consistency by reading the articles to answer questions generated from the summaries~\\cite{wang-etal-2020-asking,durmus-etal-2020-feqa}, or by considering the summaries when addressing questions derived from the source~\\cite{scialom-etal-2019-answers}.\nThough not focusing on evaluation, our work highlights that models can produce a significant amount of world knowledge, which should be evaluated differently rather than as extrinsic hallucination~\\cite{maynez-etal-2020-faithfulness}. We also show that world knowledge can possibly be distinguished from errors by analyzing model behavior.\n\n\n\\medskip\n\\noindent \\textbf{Training with negative samples} has been investigated in several classic NLP tasks, such as grammatical error detection~\\cite{foster-andersen-2009-generrate} and dialogue systems~\\cite{li-etal-2019-sampling}. \nNotably, negative sampling plays a key role in word representation learning~\\cite{mikolov2013distributed} and in training large masked language models, such as BERT and ALBERT, to induce better contextual representations~\\cite{devlin-etal-2019-bert,Lan2020ALBERT:}. \nFor text generation tasks, unlikelihood training has been proposed to penalize the generation of negative tokens (e.g., repeated words) and sentences (e.g., contradictory responses in a dialogue system)~\\cite{Welleck2020Neural,li-etal-2020-dont,he-glass-2020-negative}.
\nWe use contrastive learning that drives enhanced representation learning to better distinguish between factual and incorrect summaries, which encourages more faithful summary generation. \n\n\n\\paragraph{Contrastive Learning (CL) for NLP.}\nCL has been a popular method for representation learning, especially for vision understanding~\\cite{hjelm2018learning,pmlr-v119-chen20j}. \nOnly recently has CL been used for training language models with self-supervision~\\cite{fang2020cert}, learning sentence representations~\\cite{gao2021simcse}, and improving document clustering~\\cite{zhang2021supporting}. \nWith a supervised setup, \\citet{gunel2021supervised} adopt the contrastive objective to fine-tune pre-trained models on benchmark language understanding datasets. Using a similar idea, \\citet{liu-liu-2021-simcls} enlarge the distances among summaries of different quality as measured by ROUGE scores. \n\n\\section{\\textsc{CLIFF}: Contrastive Learning Framework for Summarization}\n\\label{sec:method}\n\n\nWe design a contrastive learning (CL)-based training objective that drives the summarization model to learn a preference of faithful summaries over summaries with factual errors.\nIt is then used for fine-tuning BART~\\cite{lewis-etal-2020-bart} and PEGASUS~\\cite{zhang2020pegasus} for training summarization models.\nFormally, let an article $x$ have a set of reference summaries $P$ (henceforth \\textit{positive samples}) and another set of erroneous summaries $N$ (\\textit{negative samples}). \nThe contrastive learning objective is~\\cite{khosla2020supervised,gunel2021supervised}:\n\n\\begin{equation}\n \\fontsize{10}{11}\\selectfont\n l_{cl}^{x} = - \\frac{1}{\\binom{|P|}{2}} \\sum_{\\substack{y_i, y_j \\in P\\\\y_j \\ne y_i}}\n \\log \\frac{\\exp(\\mathrm{sim} (\\bm{h}_i, \\bm{h}_j) \/ \\tau)}{\\sum\\limits_{\\substack{y_k \\in P \\cup N\\\\y_k \\ne y_i}} \\exp(\\mathrm{sim} (\\bm{h}_i, \\bm{h}_k) \/ \\tau)}\n \\label{eq:contrast}\n\\end{equation}\nwhere $\\bm{h}_i$, $\\bm{h}_j$, and $\\bm{h}_k$ are representations for summaries $y_i$, $y_j$, and $y_k$.\n$\\mathrm{sim}(\\cdot, \\cdot)$ calculates the cosine similarity between summary representations. $\\tau$ is a temperature and is set to $1.0$.\n\n\nImportantly, summaries in $P$ and $N$ are included in the \\textit{same batch} during training, so that the model acquires better representations to differentiate correct summaries from those with errors by comparing the two types of samples, thus maximizing the probabilities of the positive samples and minimizing the likelihoods of the corresponding negative samples.\nThe \\textbf{CL objective} on the full training set, denoted as $\\mathcal{L}_{CL}$, is the sum of losses $l_{cl}^{x}$ over all samples. \n\nTo effectively employ CL in summarization, we need to address two challenges: (1) how to automatically construct both positive and negative samples, which are critical for CL efficacy~\\cite{pmlr-v119-chen20j}, and (2) how to represent the summaries (i.e., $\\bm{h}_{\\ast}$).\nBelow we describe positive sample generation and options for $\\bm{h}_{\\ast}$, leaving the strategies for negative samples to \\S~\\ref{sec:sample_construct}. \n\n\n\\paragraph{Positive Sample Construction ($P$).}\nSummarization datasets often contain a single reference for each article. 
To create multiple positive samples, in our pilot study,\nwe experiment with paraphrasing with synonym substitution~\\cite{ren-etal-2019-generating}, randomly replacing words based on the prediction of masked language models~\\cite{kobayashi-2018-contextual}, and back-translation~\\cite{mallinson-etal-2017-paraphrasing}. We find back-translation to be best at preserving meaning and offering language variation, and thus use NLPAug\\footnote{\\url{https:\/\/github.com\/makcedward\/nlpaug}} to translate each reference to German and back to English. Together with the reference, the best translation is kept and added to $P$, if no new named entity is introduced.\n\n\n\\paragraph{Summary Representation ($\\bm{h}_{\\ast}$).}\nWe use the outputs of the decoder's last layer, and investigate three options that average over \\textit{all tokens}, \\textit{named entity tokens}, and the \\textit{last token} of the decoded summary. Entities and other parsing results are obtained by spaCy~\\cite{spacy}. \nWe further consider adding a multi-layer perceptron (MLP) with one hidden layer to calculate the final $\\bm{h}_{\\ast}$.\n\n\\medskip\n\\noindent \\textbf{The final training objective} combines the typical cross-entropy loss $\\mathcal{L}_{CE}$ and our contrastive learning objective: $\\mathcal{L} = \\mathcal{L}_{CE} + \\lambda \\mathcal{L}_{CL}$, where $\\lambda$ is a scalar and set to $1.0$ for all experiments. \n\n\n\\section{Negative Sample Construction}\n\\label{sec:sample_construct}\n\nHere we describe four strategies for constructing negative samples that \n\\textit{modify the references} (\\S~\\ref{subsec:swapping}-\\ref{subsec:conditional}) or use \\textit{system generated summaries} (\\ref{subsec:modelgeneration}). \n\n\\begin{table}[t]\n \n \\fontsize{8.5}{9}\\selectfont\n \\setlength{\\tabcolsep}{2pt}\n \n \\hspace{-1mm}\n \\begin{tabular}{p{76mm}}\n \\toprule\n \\textbf{\\textsc{Reference}}: A ``rare'' short-eared owl found emaciated in Flintshire is now recuperating well, the RSPCA have said. \\\\\n \\midrule \\midrule\n \\textbf{\\textsc{SwapEnt}}: Flintshire $\\rightarrow$ Bettisfield \\\\\n \n $\\Rightarrow$ A ``rare'' short-eared owl found emaciated in \\textcolor{red!80!black}{\\textbf{Bettisfield}} is now recuperating well, the RSPCA have said. \\\\\n \\midrule\n \\textbf{\\textsc{MaskEnt}}: A ``rare'' short-eared owl found emaciated in \\texttt{[MASK]} is now recuperating well, the RSPCA have said. \\\\\n \n $\\Rightarrow$ A ``rare'' short-eared owl found emaciated in a field in \\textcolor{red!80!black}{\\textbf{South Yorkshire}} is now recuperating well, the RSPCA have said. \\\\\n \\midrule\n \\textbf{\\textsc{MaskRel}}: A ``rare'' short-eared owl found \\texttt{[MASK]} in \\texttt{[MASK]} is now recuperating well, the RSPCA have said. \\\\\n \n $\\Rightarrow$ A ``rare'' short-eared owl found \\textcolor{red!80!black}{\\textbf{dead} in \\textbf{London}} is now recuperating well, the RSPCA have said. 
\\\\\n \\midrule\n \\textbf{\\textsc{RegenEnt}}: A ``rare'' short-eared owl found emaciated in \\rule{6mm}{0.15mm} \\\\\n \n $\\Rightarrow$ A ``rare'' short-eared owl found emaciated in \\textcolor{red!80!black}{\\textbf{Nottinghamshire}} \\textbf{is now at a wildlife centre to recover.} \\\\\n \\midrule\n \\textbf{\\textsc{RegenRel}}: A ``rare'' short-eared owl found \\rule{6mm}{0.15mm} \\\\\n \n $\\Rightarrow$ A ``rare'' short-eared owl found \\textbf{in the grounds of a former coal mine is being cared for \\textcolor{red!80!black}{by the RSPCA in Somerset.}} \\\\\n \\midrule\n \\textbf{\\textsc{SysLowCon}}: An injured \\textcolor{red!80!black}{golden} owl found in a former coal mine in \\textcolor{red!80!black}{Lancashire} is being cared for \\textcolor{red!80!black}{by the RSPCA}. \\\\\n \\bottomrule\n \\end{tabular}\n \n \\caption{\n Negative sample construction strategies (\\S~\\ref{sec:sample_construct}). For summaries edited from the reference, their differences are \\textbf{bolded}. Introduced errors are in \\textcolor{red!80!black}{\\textbf{red}}. Text before \\rule{6mm}{0.15mm} is the prefix for regeneration.\n }\n \\label{tab:construction_example}\n \n\\end{table}\n\n\n\\subsection{Entity Swap}\n\\label{subsec:swapping}\n\nEntity swap imitates intrinsic errors, as over $55\\%$ of intrinsic errors in our annotations are found to contain named entities.\nWe construct negative samples by swapping named entities in the references with other randomly selected entities of the same entity type in the source (\\textbf{\\textsc{SwapEnt}}). One sample is constructed for each entity in the reference. \nThough this idea has been studied by \\citet{kryscinski-etal-2020-evaluating}, they allow entities of different types to be used, e.g., a PERSON can be replaced by a LOCATION. \nExamples are displayed in Table~\\ref{tab:construction_example}. \n\n\\textsc{SwapEnt} has the advantage of not depending on any trained model. Yet it only introduces intrinsic errors and lacks the coverage for extrinsic errors, which is addressed by the following generation-based methods. \n\n\n\\subsection{Mask-and-fill with BART}\n\\label{subsec:unconditional}\nTo simulate extrinsic errors, we leverage large unconditional language models' capability of converting a sequence with masked tokens into a fluent and appropriate sequence. Specifically, we replace each named entity in a reference with a \\texttt{[MASK]} token and encode it with BART (without any fine-tuning). BART then fills this partially masked summary with newly generated entities (\\textbf{\\textsc{MaskEnt}}). BART is chosen since it can fill \\texttt{[MASK]} with varying number of tokens.\nFor each entity in the reference, we sample three summaries and only retain the ones containing at least one entity that is absent from both the source and the reference.\n\nUp to now, the two introduced strategies both focus on incorrect named entities. To cover more diverse extrinsic and intrinsic errors~\\cite{goyal-durrett-2020-evaluating}, we extend \\textsc{MaskEnt} to contain relations (\\textbf{\\textsc{MaskRel}}). \nWe first obtain dependency relations using Stanza~\\cite{qi-etal-2020-stanza}, with each relation denoted as $<$\\texttt{gov}, \\texttt{rel}, \\texttt{dep}$>$. \nTo incorporate more context, we consider noun phrase spans enclosing the token of \\texttt{gov} or \\texttt{dep} if it is a content word and the noun phrase contains a named entity. 
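\n\nAs a rough illustration, locating and masking such \\texttt{gov}\/\\texttt{dep} spans could be sketched as follows; the parser setup and helper names are hypothetical (the sketch uses spaCy in place of the Stanza parses described above, and omits the content-word and named-entity filters for brevity).\n\n\\begin{verbatim}\nimport spacy\n\nnlp = spacy.load("en_core_web_sm")\n\ndef mask_relation_spans(reference):\n    # Yield copies of the reference with a governor and dependent span pair\n    # replaced by [MASK], one copy per dependency relation.\n    doc = nlp(reference)\n    for token in doc:\n        if token.dep_ in ("ROOT", "punct"):\n            continue\n        spans = set()\n        for t in (token.head, token):  # governor and dependent\n            chunk = next((c for c in doc.noun_chunks\n                          if c.start <= t.i < c.end), None)\n            spans.add((chunk.start_char, chunk.end_char) if chunk is not None\n                      else (t.idx, t.idx + len(t.text)))\n        text = reference\n        for start, end in sorted(spans, reverse=True):\n            text = text[:start] + "[MASK]" + text[end:]\n        yield text\n\\end{verbatim}\n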
\nSimilar to \\textsc{MaskEnt}, three negative samples are generated by BART based on the input with both \\texttt{gov} and \\texttt{dep} spans masked in the reference. Only the samples that introduce a new dependency relation contained in neither the source nor the reference are kept. Specifically, a dependency relation is considered matched if the same forms or synonyms of its \\texttt{gov} and \\texttt{dep} are found in the source or the reference with the same relation. \n\n\nBoth \\textsc{MaskEnt} and \\textsc{MaskRel} can create more extrinsic errors than the other strategies introduced in this section, since negative samples are generated without being grounded in the source articles. However, their constructed negative samples may contain drifted topics that can be easily detected by a summarization model, resulting in less effective training signals. \n\n\n\\subsection{Source-conditioned Regeneration}\n\\label{subsec:conditional}\n\nTo ground negative sample generation in the article, we further design a regeneration strategy based on conditional generation. For each named entity in the reference, we treat the text before it as a \\textit{prompt}.\nA summarizer, e.g., fine-tuned from BART or PEGASUS, first reads in the source using the encoder, then receives the prompt as the first part of the decoder output, and finally decodes the rest of the content based on nucleus sampling~\\cite{Holtzman2020The} with a cumulative probability threshold of $0.7$. \nThe prompt and the regenerated text comprise the final negative sample. This method is denoted as \\textbf{\\textsc{RegenEnt}}. \n\nWe also extend entities to relations with expanded governor and dependent spans (\\textbf{\\textsc{RegenRel}}). Here, we consider a prompt as the text before the \\texttt{gov} or \\texttt{dep} span, whichever occurs first. \nFor both \\textsc{RegenEnt} and \\textsc{RegenRel}, three negative samples are generated for each prompt, and a sample is kept if it introduces any new entity (for \\textsc{RegenEnt}) or dependency relation (for \\textsc{RegenRel}) with regard to the source and the reference. \n\nNegative samples generated by both methods are more relevant to the article than those from the mask-and-fill strategy, yet they may still miss certain types of errors and differ from real model outputs, since they are modified from the reference summaries. \n\n\n\\subsection{System Generation}\n\\label{subsec:modelgeneration}\n\nMotivated by the model confidence analysis in \\S~\\ref{sec:error_annotation}, we explore using system generated summaries as negative samples. We first run fine-tuned BART or PEGASUS on the same training set to decode summaries. For each summary, we check the model confidence on the first token of each proper noun and number span. If the probability is below a threshold, we keep the summary as a negative sample (\\textbf{\\textsc{SysLowCon}}). The threshold is tuned by maximizing F1 based on our error annotations. \n\nWe consider all beams at the last decoding step as candidates. We use beam sizes of $6$ and $4$ for XSum and CNN\/DM.\nStatistics of negative samples constructed by different strategies are in Appendix~\\ref{appendix:dataset_stat}.\n\n\n\\section{Experiment Setup}\n\\label{sec:exp_setup}\n\n\\paragraph{Evaluation Metrics.} \nQuestEval~\\cite{scialom2020QuestEval} is used as the main metric to evaluate summaries' factual consistency. \nGiven an article and a summary, QuestEval first generates natural language questions for entities and nouns from both. 
\nA QA model then consumes the article to answer questions derived from the summary, producing a score. Another score is obtained from a QA model addressing article-based questions after reading the summary.\nThe final QuestEval score is the harmonic mean of the two. We use the version with learned weights for questions, which has shown high correlation with human judged consistency and relevance.\n\n\nWe further use FactCC~\\cite{kryscinski-etal-2020-evaluating}, trained based on their negative sample construction method, to measure if the summary can be entailed by the source.\nWe also report ROUGE-L~\\cite{lin-2004-rouge}. \nBoth FactCC and ROUGE-L reasonably correlate with summary factuality as judged by human~\\cite{pagnoni2021understanding}. \n\nBased on our error annotations, we report the correlations between each metric and the error rate---percentage of tokens being part of an error span, and the raw number of errors (Table~\\ref{tab:metric_correlation}).\nQuestEval correlates better on both aspects than other metrics. \n\n\n\\begin{table}[t]\n \\centering\n \\small\n \\setlength{\\tabcolsep}{4.5pt}\n \\begin{tabular}{lcccc}\n \\toprule\n \\textbf{Metric} & \\multicolumn{2}{c}{\\textbf{XSum}} & \\multicolumn{2}{c}{\\textbf{CNN\/DM}} \\\\\n & \\textbf{\\% of Err} & \\textbf{\\# of Err} & \\textbf{\\% of Err} & \\textbf{\\# of Err} \\\\\n \\midrule\n QuestEval & \\textbf{-0.43}\\rlap{$^\\ast$} & \\textbf{-0.25}\\rlap{$^\\ast$} & \\textbf{-0.33}\\rlap{$^\\ast$} & \\textbf{-0.29}\\rlap{$^\\ast$} \\\\\n FactCC & -0.02 & -0.15\\rlap{$^\\ast$} & -0.13\\rlap{$^\\ast$} & -0.12\\rlap{$^\\ast$} \\\\\n ROUGE-1 & -0.16\\rlap{$^\\ast$} & -0.02 & -0.03 & -0.06 \\\\\n ROUGE-2 & -0.11\\rlap{$^\\ast$} & -0.05 & -0.02 & -0.04 \\\\\n ROUGE-L & -0.13\\rlap{$^\\ast$} & -0.03 & -0.06 & -0.08 \\\\\n \\bottomrule\n \\end{tabular}\n \n \\caption{\n Pearson correlation between metrics and error rates and numbers of errors. \n $\\ast$: p-value $< 0.05$.\n %\n }\n \\label{tab:metric_correlation}\n \n\\end{table}\n\n\\paragraph{Comparisons.}\nIn addition to the models fine-tuned with cross-entropy loss (\\textbf{\\textsc{CrsEntropy}}), we consider reranking beams based on FactCC score (also our metric) at the last decoding step (\\textbf{\\textsc{EntailRank}}). \nWe also include three common methods of improving factuality: \n(1) (\\textbf{\\textsc{Correction}}) fine-tunes BART to fix summary errors as a separate step~\\cite{cao-etal-2020-factual}. \n(2) \\textbf{\\textsc{SubsetFT}} fine-tunes large models based on training samples without any dependency relation error~\\cite{goyal-durrett-2021-annotating}, with released checkpoint only available for XSum. \n(3) \\textbf{\\textsc{FASum}} modifies Transformer by incorporating knowledge graphs for factual consistency~\\cite{zhu-etal-2021-enhancing}, with model outputs only on CNN\/DM. \n\n\nMoreover, we compare with \\textbf{unlikelihood training} that penalizes the probabilities of all tokens in a negative sample~\\cite{li-etal-2020-dont}. Given a negative sample $y'$, the loss is defined as $-\\sum_{t=1}^{| y' |} \\log (1 - p(y'_t | y'_{1:t-1}, x))$, where $p(y'_t | y'_{1:t-1}, x)$ is the output probability at the $t$-th step. We combine the unlikelihood training objective with cross-entropy loss with equal weights for fine-tuning. \n\n\nLastly, we compare our negative sample strategies with negative samples constructed for training the FactCC scorer, denoted as \\textbf{\\textsc{FCSample}}. 
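\n\nFor concreteness, the unlikelihood term above can be sketched directly in PyTorch (our shorthand, not the original implementation); it computes $-\\sum_{t} \\log (1 - p(y'_t | y'_{1:t-1}, x))$ from the decoder logits of a negative sample and is added to the cross-entropy loss with equal weight.\n\\begin{verbatim}\nimport torch\nimport torch.nn.functional as F\n\ndef unlikelihood_loss(logits, neg_tokens, pad_id):\n    # logits:     (T, vocab) decoder outputs for the negative sample y'\n    # neg_tokens: (T,) token ids of y'\n    # Returns -sum_t log(1 - p(y'_t)), ignoring padding positions.\n    probs = F.softmax(logits, dim=-1)\n    p_tok = probs.gather(-1, neg_tokens.unsqueeze(-1)).squeeze(-1)\n    keep = (neg_tokens != pad_id).float()\n    return -(torch.log((1.0 - p_tok).clamp(min=1e-6)) * keep).sum()\n\n# Combined with cross-entropy on the reference with equal weights:\n# loss = ce_loss + unlikelihood_loss(logits, neg_tokens, pad_id)\n\\end{verbatim}\n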
\nFor CL only, we compare with using other samples in the same batch as negative samples (\\textbf{\\textsc{Batch}}), a common practice for CL-based representation learning~\\cite{gao2021simcse, zhang2021supporting}.\n\n\\section{Additional Analysis for Summary Error Annotation}\n\\label{appendix:error_annotation}\n\nWe hire two fluent English speakers to annotate summary errors on XSum and CNN\/DailyMail (CNN\/DM). They annotate a common batch of 100 summaries generated by summarizers fine-tuned from BART and PEGASUS, with 50 articles in each batch. The two annotators are shown 50 HTML pages in a batch, each of which contains an article and two summaries generated by the two models. \nThe detailed annotation guideline is given in Fig.~\\ref{fig:annotation_guideline}.\n\nFor our analysis on token generation probabilities, we additionally show the distributions of the first token's probability for nouns and verbs in Fig.~\\ref{fig:first_token}. We also report the distributions of the non-first token's probability for proper nouns, numbers, nouns, and verbs in Fig.~\\ref{fig:nonfirst_token}. As can be seen, tokens within extrinsic and intrinsic errors have high generation probabilities when they are non-first tokens.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.45\\textwidth]{figures\/prob_other_first.pdf}\n \\caption{Probability distributions of generating the \\textit{first} tokens of nouns and verbs, grouped by extrinsic errors, intrinsic errors, world knowledge, and other correct tokens. No verb is annotated as world knowledge.}\n \\label{fig:first_token}\n\\end{figure}\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.45\\textwidth]{figures\/prob_all_nonfirst.pdf}\n \\caption{Probability distributions of generating the \\textit{non-first} tokens of proper nouns, numbers, nouns, and verbs, grouped by extrinsic errors, intrinsic errors, world knowledge, and other correct tokens. Non-first tokens do not exist for numbers and verbs, as they only contain single tokens.}\n \\label{fig:nonfirst_token}\n\\end{figure}\n\n\n\n\\section{Statistics for Datasets and Training Samples}\n\\label{appendix:dataset_stat}\n\n\\paragraph{Summarization Datasets.}\n\nWe follow the official data splits for the two datasets, with the number of samples in each split listed in Table~\\ref{tab:data_split}.\n\n\\begin{table}[t]\n \\centering\n \\small\n \\begin{tabular}{lccc}\n \\toprule\n \\textbf{Dataset} & \\textbf{Train} & \\textbf{Validation} & \\textbf{Test} \\\\\n \\midrule\n XSum & 204{,}045 & 11{,}332 & 11{,}334 \\\\\n CNN\/DM & 287{,}227 & 13{,}368 & 11{,}490 \\\\\n \\bottomrule\n \\end{tabular}\n \\caption{Numbers of samples in train\/validation\/test splits of XSum and CNN\/DM.}\n \\label{tab:data_split}\n\\end{table}\n\n\\paragraph{Positive Samples.}\n\nWe observe unfaithful paraphrases by back-translation for some reference summaries, which are mainly due to the introduction of new entities and the rewriting of quoted text. Thus, we discard samples generated by back-translation that contain new entities and inconsistent quoted text. Finally, we obtain $182{,}114$ and $91{,}468$ positive samples by back-translation on XSum and CNN\/DM.\n\n\\paragraph{Negative Samples.}\n\nFor consistency, we use the summarizer fine-tuned from BART in \\textsc{RegenEnt}, \\textsc{RegenRel} (\\S~\\ref{subsec:conditional}), and \\textsc{SysLowCon} (\\S~\\ref{subsec:modelgeneration}) strategies. 
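\n\nA minimal sketch of the \\textsc{SysLowCon} selection rule (\\S~\\ref{subsec:modelgeneration}) is given below; it assumes the per-token generation probabilities have already been aligned with spaCy's tokenization (subword-to-token alignment is omitted) and uses the threshold tuned in the next paragraph.\n\\begin{verbatim}\nimport spacy\n\nnlp = spacy.load('en_core_web_sm')\n\ndef keep_as_negative(summary, token_probs, threshold=0.21):\n    # summary:     decoded system summary (string)\n    # token_probs: generation probability of each summary token, assumed to\n    #              be aligned with spaCy's tokens (alignment code omitted)\n    # Keep the summary if the first token of any proper-noun or number span\n    # was generated with probability below the threshold.\n    doc = nlp(summary)\n    for tok, prob in zip(doc, token_probs):\n        first_of_span = (tok.pos_ in ('PROPN', 'NUM')\n                         and (tok.i == 0 or doc[tok.i - 1].pos_ != tok.pos_))\n        if first_of_span and prob < threshold:\n            return True\n    return False\n\\end{verbatim}\n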
We tune a threshold to select negative samples from model generations in our \\textsc{SysLowCon} strategy. The threshold is set to $0.21$, with F1 scores of $73.99$ and $40.49$ on XSum and CNN\/DM annotated samples generated by BART.\n\nThe numbers of negative samples constructed by each strategy for training on XSum and CNN\/DM are shown in Table~\\ref{tab:num_neg_samples}.\n\\textsc{SysLowCon} constructs the least negative samples in total, while it achieves the best results as reported in our main paper (\\S~\\ref{subsec:autoeval}), indicating that its negative samples are more effective for training.\n\n\\begin{table}[ht]\n \\centering\n \\small\n \\begin{tabular}{lcc}\n \\toprule\n \\textbf{Strategy} & \\textbf{XSum} & \\textbf{CNN\/DM} \\\\\n \\midrule\n \\textsc{FCSample} & 936{,}164 & 1{,}291{,}710 \\\\\n \\textsc{SwapEnt} & 438{,}003 & 1{,}617{,}764 \\\\\n \\textsc{MaskEnt} & 360{,}795 & 1{,}050{,}200 \\\\\n \\textsc{MaskRel} & 391{,}224 & 1{,}345{,}317 \\\\\n \\textsc{RegenEnt} & 732{,}986 & 1{,}941{,}886 \\\\\n \\textsc{RegenRel} & 993{,}694 & 1{,}453{,}044 \\\\\n \\textsc{SysLowCon} & 401{,}112 & 502{,}768 \\\\\n \n \\bottomrule\n \\end{tabular}\n \\caption{Numbers of negative samples constructed by different strategies on XSum and CNN\/DM.}\n \\label{tab:num_neg_samples}\n\\end{table}\n\n\n\n\\section{Implementation Details}\n\\label{appendix:implementation}\n\nWe use Fairseq~\\cite{ott2019fairseq} and Huggingface Transformers~\\cite{wolf-etal-2020-transformers} for our experiments with BART and PEGASUS. Our experiments are conducted on the RTX 8000 GPU with 48GB memory and the A100 GPU with 40GB memory.\n\n\\paragraph{Training Settings.}\n\nFor hyperparameters, we follow \\citet{lewis-etal-2020-bart} for BART and \\citet{zhang2020pegasus} for PEGASUS.\nDuring training, we randomly select 5 and 4 negative samples for each input article in XSum and CNN\/DM. Mixed-precision training is not supported by the PEGASUS implementation and is utilized on BART only.\n\n\\paragraph{Decoding Settings.}\n\nWe use the beam search algorithm to decode summaries.\nFor BART, we set the beam sizes as 6 and 4 on XSum and CNN\/DM. A beam size of 8 is used for PEGASUS on both datasets.\n\n\\paragraph{Running Time and Model Sizes.}\n\nThe BART-based models take 6 and 13 hours for training on XSum and CNN\/DM, and it takes 1.5 hour to decode on the two datasets.\nMeanwhile, training the PEGASUS-based models takes 8 and 25 hours for XSum and CNN\/DM, and the decoding takes 1 hour.\n\nAs for model sizes, our BART-based models and PEGASUS-based models have 400M and 568M parameters.\n\n\n\\section{Human Evaluation}\n\\label{appendix:human_eval}\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.45\\textwidth]{figures\/human_eval_world_dist.pdf}\n \\caption{Percentages of samples containing world knowledge as labeled by human on the outputs of XSum and CNN\/DM.}\n \\label{fig:percent_world}\n\\end{figure}\n\nIn \\S~\\ref{subsec:humaneval}, we demonstrate the percentages of samples containing intrinsic errors and extrinsic errors for each model evaluated by human judges. \nHere, we report the percentages of samples containing world knowledge in Fig.~\\ref{fig:percent_world}. 
On XSum, all models produce less world knowledge compared to the model trained with cross-entropy loss, while generating similar or greater amounts of samples with world knowledge on CNN\/DM.\n\nOur human evaluation guideline is shown in Fig.~\\ref{fig:human_eval_guideline}.\n\n\n\n\\section{Sample Outputs}\n\\label{appendix:outputs}\n\nWe include more sample outputs in Fig.~\\ref{fig:generation_examples}.\n\n\n\n\n\n\\begin{figure*}[t]\n \\centering\n \\fontsize{10}{12}\\selectfont\n \\begin{tabular}{p{0.88\\textwidth}}\n \\toprule\n In this study, you will first read article-summary pairs and then identify three types of text spans in the summaries. These spans include content that is contradicted by or cannot be implied from the article. The description for each type is described below: \\\\\n \\begin{itemize}\n \\item \\textbf{Intrinsic errors:} Text spans that misconstruct phrases or clauses from the article. \n \\item \\textbf{Extrinsic errors:} Text spans that include words that are not in the article and are not verifiable or cannot be verified by Wikipedia. \n \\item \\textbf{world knowledge:} Text spans that contain information that is not covered by the article but can be validated by Wikipedia. \n \\end{itemize} \\\\\n When selecting spans, you should always make sure the spans are complete words. \\\\\n \\\\\n In practice, you should follow the these steps carefully: (1) read the article and summaries carefully; (2) figure out if there is content contradicted by or not presented in the article; (3) label the span as an intrinsic error if it misconstructs phrases or clauses from the article; (4) if the span does not belong to intrinsic errors, search within Wikipedia and determine whether the content in the span can be verified; (5) label it as world knowledge if the it can be verified by Wikipedia, otherwise label it as an extrinsic error. \\\\\n \\midrule\n \\textbf{Example annotations 1} \\\\\n \\textbf{Article:} Isis Academy in Oxford said it had rebranded as ``Iffley Academy'' to protect its ``reputation, integrity and image''. \\textbf{The name `Isis' was originally chosen as the school is near to the section of the River Thames of the same name}. Formerly Iffley Mead School, it became Isis Academy in 2013. A statement issued by the school said it had changed name following ``the unforeseen rise of ISIS (also known as ISIL and the Islamic State) and related global media coverage of the activities of the group''. ``Our priority is to remove the detrimental impact which the name `Isis' had on pupils, their families and our staff.'' Last year a language school in the city removed Isis from its name for the same reason. The Isis is the name given to the part of the River Thames above Iffley Lock in Oxford. It is also the name of the goddess wife of the god Osiris in Egyptian beliefs. \\\\\n \n \\textbf{Summary:} A school that \\hlc[crimsonglory!40]{was named after} the Islamic State (IS) militant group has changed its name. \\\\\n \\textbf{Explanation:} \\textit{``was name after''} is an intrinsic error contradicted by the article sentence in \\textbf{bold}. \\\\%The name is not chosen because of the Islamic State. \\\\\n \\midrule\n \\textbf{Example annotations 2} \\\\\n \\textbf{Article:} Khalil Dale, 60, was abducted in \\textbf{Quetta} in January 2012 and was found dead on a roadside a few months later. He had been beheaded. A note next to his body said he was killed because a ransom had not been paid. Mr Dale was born in York but lived in Dumfries. 
He spent 30 years working in countries including Somalia, Afghanistan and Iraq. An inquest into his death was held at Chesterfield Coroners Court because he is buried in Derbyshire. The court heard that the Muslim convert, who was formerly known as Kenneth, worked as a humanitarian assistance relief worker. Following his abduction, negotiations were undertaken by the International Committee of the Red Cross with the help of the UK government. His body was found on 29 April 2012. The inquest was told that he died as a result of decapitation. Senior coroner Dr Robert Hunter concluded that Mr Dale was unlawfully killed while providing international humanitarian assistance. \\\\\n \\textbf{Summary:} A British aid worker was unlawfully killed by \\hlc[bleudefrance!40]{Islamist militants} in \\hlc[yellow!60]{Pakistan}, an inquest has heard. \\\\\n \\textbf{Explanation:} \\textit{``Islamist militant''} is an extrinsic error as it can not be found in or inferred from the article. The information is also not verifiable by Wikipedia. \\textit{``Pakistan''} is world knowledge as \\textbf{Quetta} in the article is a city in Pakistan according to Wikipedia. \\\\\n \\bottomrule\n \\end{tabular}\n \\caption{Guideline for our summary error annotation (\\S~\\ref{sec:error_annotation}).}\n \\label{fig:annotation_guideline}\n\\end{figure*}\n\n\n\n\\begin{figure*}[t]\n \\centering\n \\fontsize{10}{12}\\selectfont\n \\begin{tabular}{p{0.92\\textwidth}}\n \\toprule\n In this study, you will evaluate 100 sets of summaries produced by four systems. For each set, its corresponding article and a baseline summary are shown before the four system summaries. The errors in the baseline summary are highlighted. \\\\\n Please \\textit{first read the article and the baseline summary} and then \\textit{compare each system summary against the baseline summary} based on \\textbf{informativeness} and \\textbf{factual consistency}. In addition, please decide the \\textbf{operations} made by the system to achieve better factual consistency. \\\\\n For informativeness and factual consistency, you need to label whether the system summary is better or worse than the baseline summary. You can also label the system summary as tying with the baseline summary. \\\\\n You need to consider two types of operations: \\textbf{deletions} and \\textbf{substitutions}. Please label the system summary as making deletions, substitutions, or both operations. Examples for the aspects and the operations are as follows. \\\\\n \\midrule\n \n \\textbf{Article:} Alexys Brown, also known as Lexi, died at her home in Emmadale Close, Weymouth, on Thursday. An investigation is under way to discover how she became trapped. A post-mortem examination is due to be carried out this week. It was originally hoped the appeal would raise \u00a32,000. Alison Record, who started the Just Giving appeal, said she was \"heart broken\" over the death. ``Everybody by now has heard of the terrible tragedy the Brown family have suffered with the loss of their beautiful and beloved little girl Lexi,'' the appeal page reads. Many other comments have been posted on the appeal page. Steph Harris said: ``Thinking of you all at this devastating time, fly high beautiful princess. Love Steph and family xxx'' Lesley Andrews added: ``No amount of money will take away the pain, but so much love comes with every penny. Take care. xx'' Aster Group, the housing association responsible for managing the home, is assisting with the police investigation. 
The Health and Safety Executive (HSE) is also investigating. Dorset County Council said it had not installed the disabled lift at the property. \\\\\n \n \\textbf{Baseline Summary:} An appeal to raise \\hlc[crimsonglory!40]{10,000 pounds} for the family of a \\hlc[bleudefrance!40]{three-year-old} girl who died after becoming trapped in a lift has raised \\hlc[bleudefrance!40]{more than 20,000 pounds}. \\\\\n \n \\smallskip\n \\textbf{Informativeness:} Whether the summary captures salient content from the input article. Note that incorrect content should be considered as invalid. \\\\\n \\textbf{Win.} \\textit{An appeal to raise money for the family of a three-year-old girl who died after getting stuck in a lift was originally hoped for raising \u00a32,000.} The target money of the appeal is a salient information. \\\\\n \\textbf{Tie.} \\textit{An appeal to raise money for the family of a girl who died after getting stuck in a lift has raised more than \u00a320,000.} Compared to the baseline, missing incorrect information does not affect the informativeness. \\\\\n \\textbf{Lose.} \\textit{An appeal to raise money for the family of a three-year-old girl has raised more than \u00a320,000.} This system summary does not mention the death of the girl, which is a salient content of the article. \\\\\n \n \\smallskip\n \\textbf{Factual Consistency:} Whether the summary is factually correctly based on the article and knowledge from Wikipedia. \\\\\n \\textbf{Win.} \\textit{An appeal has been set up for the family of an \\hlc[bleudefrance!40]{eight-year-old} girl who died after becoming trapped in a lift at her Dorset home.} This system summary does not generate the incorrect numbers of money. \\\\\n \\textbf{Tie.} \\textit{An appeal to raise \\hlc[crimsonglory!40]{5,000 pounds} for the family of a \\hlc[bleudefrance!40]{seven-year-old} girl who died after becoming trapped in a lift has raised \\hlc[bleudefrance!40]{more than 20,000 pounds}.} This system summary makes similar errors to the baseline. \\\\\n \\textbf{Lose.} \\textit{\\hlc[crimsonglory!40]{The family} of an \\hlc[bleudefrance!40]{eight-year-old} girl who died after becoming trapped in a lift at her Dorset home \\hlc[crimsonglory!40]{have set a fundraising target} of \\hlc[bleudefrance!40]{10,000 pounds}.} This system summary fabricates an event \\textit{The family have set a fundraising target}, which is more severe than errors of modifiers. \\\\\n \n \\smallskip\n \\textbf{Deletion:} The incorrect content in the baseline summary is deleted. \\\\\n - \\textit{An appeal for the family of a \\hlc[bleudefrance!40]{three-year-old} girl who died after becoming trapped in a lift has raised \\hlc[bleudefrance!40]{more than 20,000 pounds}.} The error \\textit{``10{,}000 pounds''} is deleted. \\\\\n \n \\textbf{Substitution:} The incorrect content in the baseline summary is replaced with correct one. \\\\\n - \\textit{An appeal to raise 2,000 pounds for the family of a \\hlc[bleudefrance!40]{three-year-old} girl who died after becoming trapped in a lift has raised \\hlc[bleudefrance!40]{more than 20,000 pounds}.} The error \\textit{``10{,}000 pounds''} is substituted with \\textit{``2,000 pounds''}, which is the correct information. 
\\\\\n \n \\bottomrule\n \\end{tabular}\n \\caption{Guideline for our human evaluation (\\S~\\ref{subsec:humaneval}).}\n \\label{fig:human_eval_guideline}\n\\end{figure*}\n\n\n\\begin{figure*}[th]\n \\centering\n \\small\n \\begin{tabular}{p{0.95\\textwidth}}\n \\toprule\n \\textbf{Example 1} \\\\\n \\midrule\n \\textbf{CNN\/DM Article:} At the grand old age of 75, Jack Nicklaus is still capable of hitting aces. The Golden Bear added another magic moment to his storied career at Augusta National in the Par-3 Contest. Stepping up to the tee on the 130-yard fourth, the greatest golfer of all time saw his shot sail beyond the flag before spinning back into the hole. Jack Nicklaus gave the crowd something to cheer with a hole in one on the fourth during the Par-3 Contest. Nicklaus holds up his ball to an adoring crowd as Gary Player (left) and Ben Crenshaw salute the great. Crenshaw and Nicklaus fist pump following his ace on the 130-yard hole at Augusta National. Nicklaus was playing alongside Gary Player and Ben Crenshaw as the first roar of Masters week filled the skies in a sunswept Georgia. 'I had an interview this morning, and I said \"well, all I've got to do is go out and win the Par-3 and make a hole and one\", and I make a hole in one,' Nicklaus said afterwards. . The ace was Nicklaus' first on the hallowed turf of Augusta, but the veteran saw his achievement eclipsed by that of Camilo Villegas on a day which saw five hole-in-ones, levelling the record from 2002. The Colombian followed Nicklaus with an ace on the fourth and also managed one on the eighth, but he couldn't quite land the title, losing out to world No 74 Kevin Streelman in a play-off. Nicklaus became the youngest person to wear a green jacket in 1963, and collected his sixth in 1986. He is one of five men to complete the career grand slam, an accolade which favourite Rory McIlroy can achieve if he wins his third major in succession. \\\\\n \\midrule\n \\textbf{Cross-entropy:} Jack Nicklaus hit a hole in one on the fourth during the Par-3 Contest. The 75-year-old was playing alongside Gary Player and Ben Crenshaw. Camilo Villegas and \\hlc[crimsonglory!40]{Kevin Streelman} also hit hole-in-ones on the day. \\\\\n \\textbf{Entailment reranking:} Jack Nicklaus hit a hole in one on the fourth during the Par-3 Contest. The 75-year-old was playing alongside Gary Player and Ben Crenshaw. Camilo Villegas and \\hlc[crimsonglory!40]{Kevin Streelman} also hit hole-in-ones on day one. \\\\\n \\textbf{Unlikelihood training:} Jack Nicklaus hit a hole in one on the fourth during the Par-3 Contest. The 75-year-old was playing alongside Gary Player and Ben Crenshaw. Camilo Villegas followed Nicklaus with a hole-in-one on the same hole. Nicklaus is one of five men to complete the career grand slam. \\\\\n \\textbf{Contrastive learning:} Jack Nicklaus hit a hole in one on the fourth during the Par-3 Contest. The 75-year-old was playing alongside Gary Player and Ben Crenshaw. Camilo Villegas followed Nicklaus with a hole-in-one on the eighth. \\\\\n \\midrule\n \\textbf{Example 2} \\\\\n \\midrule\n \\textbf{XSum Article:} The British supermarket giant, the world's third largest retailer, has been looking to dispose of its loss-making US food chain for some time. Around 4,000 of Fresh \\& Easy's 5,000 employees are to transfer to the new business as part of the deal. Tesco said in a statement that this is an \"orderly and efficient exit from the US market\". Under the terms of the deal, Tesco will loan the new business around \u00a380m. 
Fresh \\& Easy, which operates 200 stores across California, Nevada and Arizona, has lost money since opening in 2007. Tesco did not specify which stores are being transferred to Yucaipa, but it is understood the remaining 50 shops will be closed. It will cost Tesco \u00a3150m to dispose of the Fresh \\& Easy brand, but part of that is a loan which may be paid back. Also, if Yucaipa is sucessful and the business becomes profitable, Tesco has the option to buy a stake. Yucaipa is a Los Angeles based private equity company focused on supermaket and restaurant businesses and is run by billionaire investor Ron Burkle. In April of this year, Tesco reported its first fall in annual profits for 20 years. \\\\\n \\midrule\n \\textbf{Cross-entropy:} Tesco has agreed to sell its US food chain Fresh \\& Easy to Yucaipa in a deal \\hlc[bleudefrance!40]{worth about 150m}, the BBC understands. \\\\\n \\textbf{Entailment reranking:} Tesco is to sell its US food chain Fresh \\& Easy to Yucaipa \\hlc[bleudefrance!40]{for an undisclosed sum}. \\\\\n \\textbf{Unlikelihood training:} Tesco has agreed to sell its US food business, Fresh \\& Easy, to a private equity firm called Yucaipa \\hlc[bleudefrance!40]{for an undisclosed sum}. \\\\\n \\textbf{Contrastive learning:} Tesco has agreed to sell its US food chain, Fresh \\& Easy, to Yucaipa. \\\\\n \\midrule\n \\textbf{Example 3} \\\\\n \\midrule\n \\textbf{XSum Article:} Neil Aspin's promotion-chasing hosts have not lost in nine National League matches while Adam Lockwood's side are unbeaten in five. Guiseley went ahead on 15 minutes against the run of play when a throw-in found James Hurst who squared to Jake Lawlor to stroke into an empty net. Gateshead defender Liam Hogan superbly blocked Jordan Preston's effort and Guiseley keeper Jonny Maxted then saved well from Wesley York's shot just before the break. The hosts, who started the second half well, levelled on 62 minutes when a slip by half-time substitute Derek Asamoah let York curl sweetly into the top-right corner from the edge of the box. Match report supplied by the Press Association. Match ends, Gateshead 1, Guiseley 1. Second Half ends, Gateshead 1, Guiseley 1. Substitution, Guiseley. Michael Rankine replaces Jordan Preston. Substitution, Gateshead. Luke Hannant replaces Gus Mafuta. Gus Mafuta (Gateshead) is shown the yellow card. Substitution, Guiseley. Adam Boyes replaces Jake Cassidy. Goal! Gateshead 1, Guiseley 1. Wes York (Gateshead). Substitution, Guiseley. Derek Asamoah replaces Kevan Hurst. Second Half begins Gateshead 0, Guiseley 1. First Half ends, Gateshead 0, Guiseley 1. Simon Walton (Guiseley) is shown the yellow card. Goal! Gateshead 0, Guiseley 1. Jake Lawlor (Guiseley). First Half begins. Lineups are announced and players are warming up. \\\\\n \\midrule\n \\textbf{Cross-entropy:} Gateshead and Guiseley maintained their \\hlc[crimsonglory!40]{unbeaten starts to the season} with a 1-1 draw at the International Stadium. \\\\\n \\textbf{Entailment reranking:} Gateshead and Guiseley shared the spoils after a 1-1 draw at the International Stadium. \\\\\n \\textbf{Unlikelihood training:} Gateshead and Guiseley shared the spoils after a \\hlc[crimsonglory!40]{goalless} draw in the National League. \\\\\n \\textbf{Contrastive learning:} Gateshead and Guiseley shared the spoils after a 1-1 draw at Gateshead. \\\\\n \\bottomrule\n \n \\end{tabular}\n \\caption{Sample generated summaries by fine-tuned BART models. 
Intrinsic errors are highlighted in \\hlc[crimsonglory!40]{red} and extrinsic errors are in \\hlc[bleudefrance!40]{blue}. \n \\textsc{MaskEnt} and \\textsc{SysLowCon} are used for negative sample construction with unlikelihood training and contrastive learning.}\n \\label{fig:generation_examples}\n\\end{figure*}\n\n\\section{Example Appendix}\n\n\n\\end{document}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Introduction}\n\\input{section1}\n\n\\section{Problem Statement and the Main Result}\\label{sec:statement}\n\\input{section2}\n\n\\section{Proof of the Reconstruction: Non-Symmetric Paths}\n\\label{sec:proof_nonsymmetric}\n\\input{section3}\n\n\\section{Symmetric Paths}\n\\label{sec:proof_symmetric}\n\\input{section4}\n\n\\section{Conclusion and Future Work}\n\\label{sec:conclusion_futurework}\n\\input{section5}\n\n\n\\bibliographystyle{plain}\n\n\\input{exact_reconstruction.bbl}\n\n\\end{document}\n\\subsection{Background and Motivation}\n\n\\newcommand{\\vspace{0.02in}}{\\vspace{0.02in}}\n\\newcommand{\\parab}[1]{\\vspace{0.02in}\\noindent{\\textsl{#1}}}\n\\def\\mathcal G{\\mathcal G}\n\\def\\mathcal W{\\mathcal W}\n\\def\\mathcal D{\\mathcal D}\n\\def\\mathcal P{\\mathcal P}\n\n\\parab{Graphs and Communication Networks.}\nThe problem that we study originates in performance measurement of\npacket communications networks. These are modeled as a a weighted\ndirected graph $\\mathcal G=(V,E,\\mathcal W)$ in which the vertex set $V$ represents\nrouters, the edges $E$ represent directed links between routers, and\nthe edge weights $\\mathcal W$ characterize performance metrics of the\nassociated links. A subset $V^B\\subset V$ of vertices represents the\n``boundary'' of the network where the measurements are performed. We\nassume that there is a fixed directed path $\\Path(b,b')$ connecting each\nboundary node pair $b,b' \\in V^B$ and that one can measure the common\nportion of any two paths.\n\nMore specifically, we will assume one can measure the length of the\nfollowing: the path between any two boundary nodes, the common part of\nthe two paths from a boundary node to any two other boundary nodes, as\nwell as the common part of the paths to a boundary node from any\ntwo other boundary nodes. This set of measurements we will call the\nPath Correlation Data (PCD). We establish necessary and sufficient\nconditions --- which turn out to be rather natural --- for the\nreconstruction of a graph from its PCD and present an algorithm to\nachieve it. In the case when the underlying graph violates the\nreconstructibility conditions, we describe the result of the algorithm\n--- it turns out to be the ``simplest'' routing network that produces\nthe observed PCD.\n\n\\parab{Network Tomography and the Inversion Problem.}\nThe form of our model and the assumptions we make originate from a\nbody of work developed under the term \\textsl{Network Tomography}\n\\cite{10.2307\/2291416} that seeks to infer link metrics and even the\nunderlying network topology from the measured metric values on paths\ntraversing the network between a set of routers at the network\nboundary, represented by $V^B$. This setting is similar to other\ngraph reconstruction problems, such as tomography of electrical\nresistance networks (see, e.g., \\cite{Cheney99,MR2719770}), optical\nnetworks \\cite{MR3634446}, and graph reconstruction from\nSchr\\\"odinger-type spectral data (see, e.g.,\n\\cite{MR2353313,MR2545980}). 
However, in a communication network model there is a single path between a given origin and destination, in contrast to the electrical current flowing between two points of a resistive medium via all possible paths. In this sense, our model is more similar to combinatorial reconstruction problems \\cite{MR1047783,EvaLan_aam18}.\n\nIn many practical cases the communication network metrics are additive in the sense that the sum of metric values over links in a path corresponds to the same performance metric for the path. Examples of additive metrics include mean packet delay, log packet transmission probability, and variances in an independent link metric model. For additive metrics, a putative solution to the network tomography problem attempts to invert a linear relation between the set of path metrics $\\mathcal{D}$ and the link metrics $\\mathcal{W}$ in the form\n\\begin{equation}\n \\label{eq:lin} \n \\mathcal{D} = \\mathcal{A} \\mathcal{W}.\n\\end{equation}\nHere $\\mathcal{A}$ is the incidence matrix of links over paths:\n$\\mathcal{A}_{\\mathcal{P},\\ell}$ is 1 if path $\\mathcal{P}$ traverses link $\\ell$, and zero otherwise. The linear system (\\ref{eq:lin}) is generally underconstrained in real-life networks, and hence does not admit a unique solution \\cite{4016134}. To overcome this deficiency, one approach has been to impose conditions on the possible solutions, typically through sparseness, effectively to find the ``simplest'' explanation of the observed path metrics; see \\cite{DBLP:journals\/tit\/Duffield06,10.1007\/978-3-642-30045-5_22}. A different approach in the similar problem of traffic matrix tomography has been to reinterpret (\\ref{eq:lin}) as applying to bi-modal measurements of packet and byte counts \\cite{SinMic_ip07}, or of empirical means and variances, then imposing constraints between these based on empirical models \\cite{10.2307\/2291416,866369}. However, the high computational complexity of this approach makes it infeasible for real-world communications networks \\cite{Zhang:2003:FAC:781027.781053}, although quasi-likelihood methods offer some reduction in complexity \\cite{1212664}. A related approach known as Network Kriging seeks to reduce dimensionality in the path set by assumptions on prior covariances \\cite{4016134}.\n\n\\parab{Correlations and Trees.}\nMore relevant for the work of this paper has been the idea of exploiting correlations between metrics on different paths that occur due to the common experience of packets on their intersection. One variant uses multicast probing \\cite{796384} or emulations thereof \\cite{DBLP:journals\/ton\/DuffieldPPT06}. Another variant exploits the fact that variances of some measurable packet statistics are both additive and independent over links. If $\\mathcal D$ is such a variance-based metric, then\n$ \\text{Cov}(\\mathcal D_{P_1}, \\mathcal{D}_{P_2}) =\n\\text{Var}(\\mathcal D_{P_1\\cap P_2})$,\nwhere $\\mathcal D_P$ denotes the metric value across path $P$; see \\cite{DBLP:journals\/ton\/DuffieldP04}.\n\nWe abstract both these cases into a unified data model in which, for every triple $b$, $b_1$ and $b_2$ of boundary vertices, we can measure, via covariances of the packet statistics, the metric of the intersection $P=\\Path(b,b_1)\\cap \\Path(b,b_2)$ as well as the metric of the intersection $P=\\Path(b_1,b)\\cap \\Path(b_2,b)$. The results of such measurements we will denote by $\\PCD(b \\prec b_1,b_2)$ and $\\PCD(b_1,b_2 \\succ b)$ correspondingly. 
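\n\nTo make this data model concrete, the toy sketch below (an illustration only, not part of the measurement procedure) computes a source-side entry of the path correlation data for a small routing table; two paths leaving the same source overlap in an initial segment (this is formalized as the tree consistency property below), so the intersection metric is the weighted length of their common prefix.\n\\begin{verbatim}\n# Toy routing: paths are vertex lists, weights are per directed edge.\nweights = {('b', 'x'): 1.0, ('x', 'y'): 2.0, ('y', 'b1'): 1.5, ('y', 'b2'): 0.5}\npaths = {('b', 'b1'): ['b', 'x', 'y', 'b1'],\n         ('b', 'b2'): ['b', 'x', 'y', 'b2']}\n\ndef length(p):\n    return sum(weights[(u, v)] for u, v in zip(p, p[1:]))\n\ndef pcd_source(b, b1, b2):\n    # Source-side entry: weighted length of the common initial segment of\n    # P(b, b1) and P(b, b2).\n    p, q = paths[(b, b1)], paths[(b, b2)]\n    k = 0\n    while k < min(len(p), len(q)) and p[k] == q[k]:\n        k += 1\n    return length(p[:k])\n\nprint(pcd_source('b', 'b1', 'b2'))   # 3.0 = w(b,x) + w(x,y)\n\\end{verbatim}\nThe receiver-side entries are obtained in the same way from common final segments.\n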
We remark that we use the symbols $\\prec$ and $\\succ$ here not as binary comparison operators but as pictograms meant to evoke the topological structure of the corresponding pair of paths. The totality of such measurements we call the \\textbf{path correlation data}. Note that we will not, in general, assume that the paths are symmetric; the path $\\Path(b,b_1)$ may be different topologically from the path $\\Path(b_1,b)$. As a consequence, the values $\\PCD(b \\prec b_1,b_2)$ and $\\PCD(b_1,b_2 \\succ b)$ are in general different. We will assume that the function $\\PCD$ is measured exactly; see below for a brief discussion of possible sources of error and the techniques for error-correction. \n\nUnder fairly general conditions that will be in force in this paper, the problem has a natural formulation in terms of trees. First assume that for each $b,b_1,b_2\\in V^B$, the intersection $\\Path(b,b_1)\\cap \\Path(b,b_2)$ is connected. Second, assume that the metric $\\mathcal D$ is path increasing, i.e., $\\mathcal D(\\Path)<\\mathcal D(\\Path')$ for $\\Path\\subsetneq \\Path'$. As shown in \\cite{971737}, the quantities $\\{\\PCD(b \\prec b_1,b_2): b_1, b_2\\in V^B\\}$ give rise to an embedded logical weighted tree rooted at $b$ (called a ``source tree''). The tree is computed iteratively by finding node pairs $(b_1,b_2)$ of maximal $\\PCD(b \\prec b_1,b_2)$ and identifying each such pair with a branch point in the logical tree. Each logical link is assigned a weight equal to the difference between the values of $\\text{PCD}(b \\prec \\cdot,\\cdot)$ associated with its end points. Similarly, the quantities $\\PCD(b_1,b_2 \\succ b)$ give rise to an embedded logical tree with a single receiver $b$ and the sources $V^B \\setminus \\{b\\}$. Such trees are called ``receiver trees''. Under the assumed conditions, our problem can be restated as how to recover the underlying weighted graph from the set of logical source and receiver trees rooted at every $b\\in V^B$.\n\n\n\\parab{Summary of the Results.} The main contribution of this paper, Theorem~\\ref{thm:main}, is to show that under natural conditions, a weighted directed graph $\\mathcal G$ can be recovered knowing only the graph's Path Correlation Data (PCD). The conditions under which Theorem~\\ref{thm:main} holds are (i) each edge is traversed by at least one path in $\\Path$; (ii) each non-boundary node is \\textsl{nontrivial} in the sense that its in-degree and out-degree are not both equal to $1$; and (iii) each non-boundary node $x$ is \\textsl{nonseparable} in the sense that the set of paths $\\Path(x)\\subset \\Path$ that pass through $x$ cannot be partitioned into two or more subsets with non-intersecting endpoint sets. Our result holds without the assumption of weight symmetry (defined as requiring the reverse of any edge of $G$ to exist and carry the same weight) or path symmetry (defined as the paths in either direction between two boundary nodes traversing the same set of edges). We prove the correctness of our reconstruction algorithm (Algorithm~\\ref{algorithm:non_symmetric}) under the stated assumptions.\n\nOur solution does not assume that link weights are symmetric. Neither do we assume that the paths in either direction between two endpoints are symmetric: they are not required to traverse the same set of internal (i.e. non-boundary) vertices. 
This level of generality\nreflects networking practice, in which non-symmetric routing is\nemployed for policy reasons including performance and revenue\noptimization \\cite{Teixeira:2004:DHR:1012888.1005723}. However, in\nthe cases where symmetric paths can be assumed {\\it a priori}, this\nknowledge enlarges the set of reconstructible networks. We therefore\npay special attention to this case, providing alternative definitions\nof the \\emph{nontrivial} and \\emph{nonseparable} vertices and a\nseparate proof of the correctness of\nAlgorithm~\\ref{algorithm:non_symmetric} (which requires a one-line\nchange). We also establish the correctness of a second, more\ncustomized Algorithm~\\ref{algorithm:symmetric} which applies only in\nthe case of symmetric weights and paths, and which is computationally\nless expensive than Algorithm~\\ref{algorithm:non_symmetric} applied to\nthis case.\n\n\\parab{Inference from Inconsistent Data.}\nBefore describing our approach in more detail, we note that practical\nnetwork measurement may provision data with imperfect consistency. For\nexample, input data may be provided in the form of weighted trees\ncomputed from packet measurement over time intervals that are not\nperfectly aligned, so that the metric of a path $\\Path(b,b')$ may be\nreported differently in the source tree from $b$ and the receiver tree\nto $b'$. Even with aligned intervals, deviations from the model and\nstatistical fluctuations due to finitely many probe packets may result\nin inconsistency. To enable our proposed algorithm to operate with\nsuch data, we propose to compute a $\\text{PCD}$ that is a\nleast-squares fit to inconsistent tree data. This extension and\nassociated error sensitivity analysis is described in the forthcoming\ncompanion paper \\cite{PrepUs2018}.\n\n\n\\subsection{An example of a communication network}\n\nTo illustrate the information available to an observer in our model,\nconsider the network graph schematically shown in\nFig.~\\ref{fig:AltSource}. It is assumed that the end-to-end\nmeasurements are possible among the boundary vertices\n$V^B = \\{b_1, ... , b_6\\}$. The three versions of the same graph\nshown contain information about the paths between the given source\n($b_1$, $b_2$ and $b_3$, correspondingly) and the\ncorresponding set of receivers $V^B \\setminus \\{b_i\\}$; the links belonging to\nthese paths are highlighted in thicker lines.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[scale=0.7]{f_AltRouting0}\n \\caption{Alternative selections of the source and the corresponding\n routing paths (bold edges).} \n \\label{fig:AltSource}\n\\end{figure}\n\nFrom the point of view of an external observer, the routing paths on\nthe graph are hidden but can be reconstructed, to a certain extent, by\nthe measurements with a fixed source and alternating receivers,\nrepresented in our setting by queries to the $\\PCD$ function. As\nFig.~\\ref{fig:AltSourceLogic} shows, the trees reconstructed from PCD\nare \\emph{logical} trees where the edges represent the amalgamated\nversions of the actual physical edges. For example, the logical tree\nlabeled $(SLb_3)$ has a direct edge from $b_3$ to $b_6$, whereas the\nactual route, shown in $(Sb_3)$ passes through an internal node.\nSince this node does not feature as a junction in the tree $(Sb_3)$,\nit will not be detected from the PCD. Moreover each internal vertex\nin the original graph has multiple appearances in the logical trees\nwith no identifying information attached to them. 
Correctly identifying multiple representations of the same internal node will be the central challenge of this work.\n\n\\begin{figure}[t]\n    \\centering\n    \\includegraphics[scale=0.8]{f_AltRouting_LogicT}\n    \\caption{Representation of PCD on the network graph through the set of\n      observed logical source trees for sources $b_1$ to $b_6$.}\n    \\label{fig:AltSourceLogic}\n\\end{figure}\n\nThe information from the logical source trees (Fig.~\\ref{fig:AltSourceLogic}) is only enough for the reconstruction of a special class of network graphs, namely the symmetric ones, where both the routing and the edge weights are symmetric. When this is not the case, the measurements of the form $\\text{PCD}(b_1,b_2 \\succ b)$ will be essential to reconstruct the \\emph{logical receiver trees} shown in Fig.~\\ref{fig:AltRoutingLogicalT2}. Those contain information about the paths with the selected receiver $b$ and with the source set $V^B \\setminus \\{b\\}$.\n\n\\begin{figure}[t]\n    \\centering\n    \\includegraphics[scale=0.8]{f_AltRouting_LogicT2}\n    \\caption{Representation of PCD on the network graph through the set of\n      observed logical receiver trees for receivers $b_1$ to $b_6$.} \n    \\label{fig:AltRoutingLogicalT2}\n\\end{figure}\n\nTo summarize, the main goal in this paper is to develop a practical algorithm to reconstruct the original graph from the set of measurements of the form $\\text{PCD}(b_1 \\prec b_2, b_3)$ and $\\text{PCD}(b_2, b_3 \\succ b_1)$ for $b_1,b_2,b_3 \\in V^B$, and to establish necessary and sufficient conditions for the successful reconstruction.\n\n\n\\subsection{Outline of paper}\n\nThe outline of the paper is as follows. In Section~\\ref{sec:statement} we state the definitions and assumptions concerning the network topology and path measurements. We then state our main Theorem~\\ref{thm:main} concerning the reconstruction of the general non-symmetric weighted graph under these assumptions. The proof of Theorem~\\ref{thm:main} will proceed by establishing that a computation codified as Algorithm~\\ref{algorithm:non_symmetric} reconstructs the topology under the stated assumptions, as established in Section~\\ref{sec:proof_nonsymmetric} for non-symmetric routing. We also discuss the operation of Algorithm~\\ref{algorithm:non_symmetric} on graphs that do not satisfy some assumptions of Theorem~\\ref{thm:main}, and the relation of the output to the true graph. The flow of Algorithm~\\ref{algorithm:non_symmetric} is slightly modified if the routing is symmetric. This case is considered in Section~\\ref{sec:proof_symmetric}. We also present a modified Algorithm~\\ref{algorithm:symmetric}, suitable \\emph{only} for the case of symmetric paths. We conclude in Section~\\ref{sec:conclusion_futurework} with a discussion of possible future work.\n\n\n\\subsection{Problem Setup}\n\\label{sec:setup}\n\n\n\n\\begin{defn}\n\tA network graph $\\mathcal{N} = (\\Graph, V^B, \\Paths)$ is an edge-weighted graph $\\mathcal{G} = (V,E,\\mathcal{W})$ together with a set $V^B\\subset V$ of \\emph{boundary vertices} and the paths $\\Paths$ between them. 
In detail, \n\t\\begin{itemize}\n\t\t\\item $V$ is a finite set of vertices,\n\t\t\\item $E \\subset V\\times V$ is the set of directed edges (no loops\n\t\tor multiple edges are allowed),\n\t\t\\item $\\mathcal{W} : E \\to \\mathbb{R}_+$ are the edge weights,\n\t\t\\item $V^B$ is an arbitrary subset of $V$\n\t\t\\item there is a path $\\Path(b,b')$ between any pair of boundary\n\t\tvertices $b \\neq b'$; each path $\\Path(b,b')$ is simple and\n\t\tassumed to be fixed for the duration of observation of the\n\t\tnetwork. The paths are assumed to have the \\emph{tree consistency\n\t\t\tproperty}: for any $b_1$, $b_1'$, $b_2$ and $b_2'$ in $V^B$ the\n\t\tintersection $\\Path(b_1,b_1') \\cap \\Path(b_2,b_2')$ is\n\t\tconnected.\n\t\\end{itemize}\n\\end{defn}\n\n\\begin{figure}[ht]\n\t\\centering\n\t\\includegraphics[width=0.375\\textwidth]{f_TreeConst}\n\t\\caption{An example of the violation of the tree consistency\n\t\tproperty. The paths $\\Path(b,b_1)$ and $\\Path(b,b_2)$ first\n\t\tdiverge and then meet again.} \n\t\\label{fig:TreeConst}\n\\end{figure}\n\n\nWe will assume that a path $\\Path(b,b')$ does not pass through any\nother boundary vertices. This is done for convenience only; a graph\ncan be easily made to satisfy this condition by ``drawing out'' the\nboundary vertices from the bulk of the graph as shown in\nFig.~\\ref{fig:BvRemove}. The vertices that do not belong to the\nboundary we will call \\emph{internal vertices} and use the notation\n$V^I = V \\setminus V^B$.\n\n\\begin{figure}[ht]\n\t\\centering\n\t\\includegraphics[width=0.6\\textwidth]{f_BvRemove}\n\t\\caption{A graph (square with no diagonals) with the path\n\t\t$\\Path(b_1,b_3)$ going through $b_2$. The same graph with the\n\t\tboundary vertices drawn out.}\n\t\\label{fig:BvRemove}\n\\end{figure}\n\nWe remark that we do not assume that the weights are symmetric:\n$\\mathcal{W}_{x,y}$ is generally different from $\\mathcal{W}_{y,x}$. We also\ndo not need to assume that the paths are symmetric. However, since\nthe latter case is important in applications and allows for a\nsimplified reconstruction algorithm, we will devote some time to its\nseparate treatment, in particular in Definition~\\ref{def:symm_routing}\nand Section~\\ref{sec:proof_symmetric}.\n\nLet us consider some implications of the tree consistency property.\nConsider two paths, $\\Path(b,b_1)$ and $\\Path(b,b_2)$ with some\ndistinct $b,b_1,b_2 \\in V^B$. The tree property implies that the paths\ncan be written as\n\\begin{align}\n\\label{eq:path1}\n\\Path(b,b_1) &= [b, x_1, x_2, ... , x_j, y_1, y_2, ..., b_1] \\\\\n\\label{eq:path2}\n\\Path(b,b_2) &= [b, x_1, x_2, ... , x_j, z_1, z_2, ..., b_2],\n\\end{align} \nwhere the vertex sets $\\{y_1, y_2, ...\\}$ and $\\{z_1, z_2, ... \\}$ are\ndisjoint, see Fig.~\\ref{fig:Junction}(a).\n\nSimilarly tree consistency property applied to paths \n$\\Path(b_1,b)$ and $\\Path(b_2,b)$ implies that\n\\begin{align}\n\\label{eq:path_b1b}\n\\Path(b_1,b) &= [b_1, y_1, y_2, ... ,y_{i}, x_1, x_2, ..., b] \\\\\n\\label{eq:path_b2b}\n\\Path(b_2,b) &= [b_2, z_1, z_2, ... ,z_{j}, x_1, x_2, ..., b],\n\\end{align} \nwith disjoint $\\{y_1, y_2, ... , y_{m}\\}$ and $\\{z_1, z_2, ... ,\nz_n\\}$, see Fig.~\\ref{fig:Junction}(b).\n\n\\begin{defn}\n\tThe vertex $x_j$ in equations (\\ref{eq:path1})--(\\ref{eq:path2}) is\n\tcalled the $(b \\prec b_1,b_2)$-junction. Note that the set\n\t$\\{x_1, ... , x_j\\}$ is allowed to be empty in which case $b$ acts\n\tas the junction. 
The vertex $x_1$ in equations\n\t(\\ref{eq:path_b1b})--(\\ref{eq:path_b2b}) is called the\n\t$(b_1,b_2 \\succ b)$-junction.\n\\end{defn}\n\n\\begin{figure}[ht]\n\t\\centering\n\t\\includegraphics[width=1\\textwidth]{f_Junction}\n\t\\caption{(a) $x_j = (b \\prec b_1,b_2)$-junction, \n\t\t(b) $x_1 = (b_1,b_2 \\succ b)$-junction.} \n\t\\label{fig:Junction}\n\\end{figure}\n\nWe remark that we use the symbols $\\prec$ and $\\succ$ not as relational operators but as separators in the list of three vertices, chosen because they are pictorially similar to the path configurations in Figure~\\ref{fig:Junction}.\n\nTo specify the graph reconstruction problem we will be solving, we need to define the set of \\emph{measurements} available to us. The length of a path is defined as the sum of the weights $\\mathcal{W}$ of its edges; we will denote the length by $|\\cdot|$. We consider a single vertex as a zero-length path; the length of an empty set is also set to be zero. This allows us to assign a length to the intersection of two paths between boundary vertices, which is either empty, a single vertex, or a connected subpath.\n\nThe totality of the measurements we can make will be called the Path Correlation Data (PCD). It includes, for all $b, b_1, b_2 \\in V^B$,\n\\begin{itemize}\n\t\\item the length $|\\Path(b,b_1)|$,\n\t\n\t\\item the length $\\left| \\Path(b,b_1) \\cap \\Path(b,b_2) \\right|$, which we will denote by $\\PCD(b \\prec b_1,b_2)$,\n\t\n\t\\item the length $\\left| \\Path(b_1,b) \\cap \\Path(b_2,b) \\right|$, which we will denote by $\\PCD(b_1,b_2 \\succ b)$.\n\\end{itemize}\n\nThus we can directly measure the distance from $b \\in V^B$ to any $(b \\prec b_1,b_2)$-junction, or from the $(b_1,b_2 \\succ b)$-junction to $b$. We can also infer the distance from the $(b \\prec b_1,b_2)$-junction to $b_1$, or from $b_1$ to the $(b_1,b_2\\succ b)$-junction, see Fig.~\\ref{fig:JunctionMeas}. \n\n\\begin{figure}[ht]\n\t\\centering\n\t\\includegraphics[width=1\\textwidth]{f_JunctionMeas}\n\t\\caption{Various distances we can measure from the Path Correlation Data (PCD). Here $\\delta = \\PCD(b \\prec b_1,b_2)$ and $\\gamma = \\PCD(b_1,b_2 \\succ b)$.} \n\t\\label{fig:JunctionMeas}\n\\end{figure}\n\nOur principal question is thus: \\emph{ Which network graphs $(\\mathcal{G},V^B, \\Paths)$ can be reconstructed from their path correlation data and how does one accomplish this?}\n\n\n\\subsection{Some obvious necessary conditions}\n\\label{sec:necessary_examples}\n\nBefore we state our result and the associated reconstruction algorithm, let us consider examples that show some obvious necessary conditions we need to impose on the network graph $\\mathcal{N}=(\\mathcal{G},V^B,\\Paths)$ in order to be able to reconstruct it.\n\n\\begin{exmp}\n\tConsider the network graphs in Fig.~\\ref{fig:Example_1}, with $V^B = \\{u,v,w\\}$ and with the routing paths indicated by dashed lines. \n\tNone of the routing paths pass through the edge $e=(u,w)$; therefore the length of this edge cannot influence the Path Correlation Data in any way. 
Conversely, the length\n of the edge $e$ (and even its existence) cannot be inferred\n from the PCD.\n\t\\begin{figure}[ht]\n\t\t\\centering\n\t\t\\includegraphics[width=0.7\\textwidth]{f_Example_1}\n\t\t\\caption{Failure to recover the edge $e=(u,w)$: the graphs (a) and\n\t\t\t(b) will produce the same PCD since none of the paths pass\n\t\t\tthrough the extra edge.}.\n\t\t\\label{fig:Example_1}\n\t\\end{figure}\n\\end{exmp}\n\n\\begin{exmp}\n\t\\label{ex:bad_degree2}\n\tConsider the network graphs in Fig.~\\ref{fig:Example_2} with\n boundary vertex set $V^B = \\{u,w\\}$. In the left graph the\n length of the edge $(x,u)$ will never appear in the PCD on its\n own, without being added to the length of the edge $(x,w)$.\n Therefore, it will be impossible to reconstruct the location\n of the vertex $x$, and even detect it at all. This will be\n the case for any internal vertex of degree 2.\n\t\n\t\\begin{figure}[ht]\n\t\t\\centering\n\t\t\\includegraphics[width=0.45\\textwidth]{f_Example_2}\n\t\t\\caption{Failure to recover the internal vertex $x$: the graphs\n\t\t\t(a) and (b) will produce the same PCD as long as the sum of the\n\t\t\tlengths of $(u,x)$ and $(x,w)$ in the graph (a) is equal to the\n\t\t\tlength of the edge $(u,w)$ in the graph (b).}\n\t\t\\label{fig:Example_2}\n\t\\end{figure}\n\\end{exmp}\n\n\\begin{exmp}\n\tConsider the network graphs in Fig.~\\ref{fig:Example_3} with the\n\tboundary vertex set\n\t\\begin{equation}\n\tV^B = \\{u_1,v_1,u_2,v_2\\}. \n\t\\end{equation}\n\tIn Fig.~\\ref{fig:Example_3}(a) the paths $\\Path(u_1,v_1)$ and\n $\\Path(u_2,v_2)$ intersect at an internal vertex $x$, while in\n Fig.~\\ref{fig:Example_3}(b) they do not have any vertices in\n common. However, the two graphs will produce the same PCD and\n will be indistinguishable.\n\t\n\t\\begin{figure}[ht]\n\t\t\\centering\n\t\t\\includegraphics[width=0.9\\textwidth]{f_Example_3}\n\t\t\\caption{Failure to recover the integrity of the\n vertex $x$: the graphs (a), (b) and (c) will produce\n the same PCD since the path between $u_1$ and $v_1$\n is in no way correlated to the path between $u_2$\n and $v_2$.}\n\t\t\\label{fig:Example_3}\n\t\\end{figure}\n\t\n\tThe reader will undoubtedly observe that the vertices $x_1$ and $x_2$\n\tin Fig.~\\ref{fig:Example_3}(b) will not be detected, and the\n graph in Fig.~\\ref{fig:Example_3}(c) is the ``minimal'' graph\n which will have the same PCD. By making the graph\n\tstructure more complicated one can easily construct an example\n\twhere $x$, $x_1$ and $x_2$ will act as junctions for some pairs of\n\tpaths and thus will be detectable, see\n Fig.~\\ref{fig:Example_3_Sep}, but the two graphs are still not\n distinguishable from their PCD.\n\t\n\t\\begin{figure}[ht]\n\t\t\\centering\n\t\t\\includegraphics[width=0.7\\textwidth]{f_Example_3_Sep}\n\t\t\\caption{A more complicated example of failure to recover the\n\t\t\tintegrity of the vertex $x$.}\n\t\t\\label{fig:Example_3_Sep}\n\t\\end{figure}\n\t\n\tThus the real problem is that in the left graph in\n\tFig.~\\ref{fig:Example_3_Sep} there are two families of paths going\n\tthrough the internal vertex $x$ that do not interact in any way.\n\\end{exmp}\n\n\n\\subsection{Statement of the main result}\n\\label{sec:main_result}\n\nThe main result of this paper is that the necessary conditions\nillustrated by examples in Section~\\ref{sec:necessary_examples} are in\nfact sufficient for the reconstruction! 
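\n\nBefore these conditions are formalized, we note in passing that the separability phenomenon of Fig.~\\ref{fig:Example_3_Sep} can be tested mechanically: the vertex $x$ is separable precisely when the source--receiver pairs routed through it form a disconnected bipartite graph. A toy sketch of this check (not part of the reconstruction algorithm itself) is:\n\\begin{verbatim}\nfrom collections import defaultdict, deque\n\ndef is_separable(pairs):\n    # pairs: set of (source, receiver) boundary pairs whose path passes\n    # through the internal vertex x. The vertex is separable exactly when\n    # this bipartite graph on sources and receivers is disconnected.\n    adj = defaultdict(set)\n    for s, r in pairs:\n        adj[('S', s)].add(('R', r))\n        adj[('R', r)].add(('S', s))\n    nodes = list(adj)\n    seen, queue = {nodes[0]}, deque([nodes[0]])\n    while queue:\n        for v in adj[queue.popleft()]:\n            if v not in seen:\n                seen.add(v)\n                queue.append(v)\n    return len(seen) < len(nodes)\n\n# Two non-interacting families b1->{b3,b4} and b2->{b5,b6} through x:\nprint(is_separable({('b1', 'b3'), ('b1', 'b4'),\n                    ('b2', 'b5'), ('b2', 'b6')}))   # True\n\\end{verbatim}\n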
We start by formalizing\n(and naming) the conditions we observed.\n\n\begin{defn}\n\tAn edge is called \emph{unused} if there is no path in $\Paths$\n\tcontaining it.\n\end{defn}\n\nWe remark that if there are no unused edges in a network graph, each\ninternal vertex has at least one incoming and at least one outgoing edge.\n\n\begin{defn}\n\tAn internal vertex $x$ is called \emph{trivial} if it has only one\n\tincoming and only one outgoing edge (i.e.\ edges of the form\n\t$(y_1,x)$ and $(x,y_2)$ correspondingly).\n\end{defn}\n\nWe remark that if there are no unused edges, then there are at least\ntwo paths through every non-trivial internal vertex.\n\n\begin{defn}\n\tFor an internal vertex $x \in V^I$, let $S_x \subset V^B$ be the set\n\tof the sources and $R_x \subset V^B$ be the set of the receivers\n\twhose paths pass through $x$. More precisely, \n\t\begin{equation*}\n\t\begin{split}\n\tS_x &= \big \{ b \in V^B : \exists \hat{b} \in V^B,\ \n\tx \in \Path(b,\hat{b}) \big \} \\\n\tR_x &= \big \{ \hat{b} \in V^B : \exists b \in V^B,\ \n\tx \in \Path(b,\hat{b}) \big \}.\n\t\end{split}\n\t\end{equation*}\n\end{defn}\n\n\begin{defn}\n \label{def:separable}\n A vertex $x \in V^I$ is called \emph{separable} if there are\n disjoint non-empty partitions $S_x = S_x^1 \cup S_x^2$ and\n $R_x = R_x^1 \cup R_x^2$ with the property that\n\t\begin{equation}\n\t\label{eq:separable_def}\n\tb\in S_x^j,\ \hat{b}\in R_x^{j'} \mbox{ with }j\neq j'\n \quad\Rightarrow\quad x \notin \Path(b,\hat{b}).\n\t\end{equation}\n\end{defn}\n\nAn example of a separable vertex is shown in Fig.~\ref{fig:SeparateExam}. The\npartition sets here are $S^1 = \{b_1\}$, $S^2 = \{b_2\}$, $R^1 =\n\{b_3,b_4\}$ and $R^2 = \{b_5,b_6\}$.\n\n\begin{figure}[ht]\n\t\centering\n\t\includegraphics[width=1\textwidth]{f_SeparateExam}\n\t\caption{Paths through a separable vertex $x$ and its separation\n\t\tinto $x_1$ and $x_2$.}\n\t\label{fig:SeparateExam}\n\end{figure}\n\nFinally, if the graph has symmetric routing we need to modify the\nconditions slightly, but the resulting reconstructibility theorem will\nstay the same. Naturally, symmetric routing is an extra piece of\ninformation and more graphs are reconstructable in this case. The\nnatural setting for the problem with symmetric routing is a\nnon-directed graph, but since we still allow non-symmetric edge\nweights, we will keep the edges directed. As a result, edges come in\npairs which correspond to undirected edges splitting into two directed\nones. This is formalized in the definition of a ``network graph with\nsymmetric routing'' below.\n\n\begin{defn}\n\t\label{def:symm_routing}\n\tWe will say that the network graph $\mathcal{N}=(\Graph, V^B, \Paths)$ has\n\t\emph{symmetric routing} if\n\t\begin{itemize}\n\t\t\item for all $x,y\in V$, $(x,y) \in E$ implies $(y,x) \in E$ and\n\t\t\item for all $b,b'\in V^B$, the path $\Path(b, b')$ is the\n\t\treversal of the path $\Path(b',b)$, namely\n\t\t\begin{equation}\n\t\t\label{eq:reversal}\n\t\t\Path(b, b') = [b, x_1, x_2, \ldots, x_j, b']\n\t\t\quad \Rightarrow \quad\n\t\t\Path(b', b) = [b', x_j, x_{j-1}, \ldots, x_1, b].\n\t\t\end{equation}\n\t\end{itemize}\n\end{defn}\n\t\n\begin{defn}\n \label{def:symm_vertex}\n\tA vertex $x \in V^I$ in a network graph with symmetric routing is\n\t\emph{trivial} if it has two (or fewer) adjacent vertices. 
\n\n\n\tA vertex $x \in V^I$ in a network graph with symmetric routing is\n\t\emph{separable} if there is a disjoint partition $S_x = S_x^1 \cup S_x^2$\n\tinto non-empty sets so that\n\t\begin{equation}\n\t\label{eq:separable_def_symm}\n\tb_1 \in S_x^1,\ b_2 \in S_x^2 \n\t\quad \Rightarrow \quad\n\tx \notin \Path(b_1,b_2).\n\t\end{equation}\n\t\n\end{defn}\n\nWe can now state our Main Theorem. We stress that the statement of\nthe theorem applies uniformly to network graphs with or without\nsymmetric routing, the differences being absorbed by the definitions\nabove. We will still need to provide two separate (but similar!)\nproofs.\n\n\begin{thm}[Main Theorem]\n\t\label{thm:main}\n\tLet $(\mathcal{G},V^B,\Paths)$ be a network graph. If\n\t\begin{enumerate}\n\t\t\item \label{cond:edges}\n\t\tno edge $e \in E$ is unused,\n\t\t\item \label{cond:degrees}\n\t\tno $x \in V^I$ is trivial,\n\t\t\item \label{cond:nonsep} \n\t\tno $x \in V^I$ is separable,\n\t\end{enumerate}\n\tthen $(\mathcal{G},V^B,\Paths)$ is uniquely reconstructable from\n\tits Path Correlation Data (PCD).\n\t\n\tTo put it more generally, in every class of network graphs with the\n\tsame PCD, there is a unique network graph which satisfies the above\n\tconditions.\n\end{thm}\n\nThe theorem will be proved constructively, by presenting a\nreconstruction algorithm and verifying its result. The second part,\nwhich posits not only uniqueness but also the existence of the\nreconstructed graph, means, in practical terms, that even when given\nPCD from a graph that does not satisfy the conditions, the algorithm\nwill terminate and produce a ``nearby'' result which does.\n\n\n\subsection{Comments on the algorithm for non-symmetric routing}\n\nThe algorithm for the case of non-symmetric routing\n(Algorithm~\ref{algorithm:non_symmetric}) works by discovering the\ninternal vertices and reconstructing the routing paths in the format\n\begin{equation}\n\label{eq:reconstructed_path_format}\n\mathcal{R}(b, \hat{b}) = [(b,0),\,(x_1,\delta_1),\,(x_2,\delta_2),\n\ldots,\, (\hat{b},\delta)],\n\end{equation}\nwhere $\delta_i$ is the cumulative distance from $b$ to $x_i$ along the\npath (naturally, $\delta = |\Path(b, \hat{b})|$). Once every path is\ncomplete, the edges can be read off as pairs of consecutive vertices\nappearing in a path. The internal vertices\nare discovered as junctions. The main difficulty lies in identifying\ndifferent junctions that correspond to the same vertex. 
This is done\nby a depth-first search in the function {\sc UpdatePath}.\n\n\algnewcommand{\IfThen}[2]{%\n \State \algorithmicif\ #1\ \algorithmicthen\ #2}\n\algnewcommand{\IfThenElse}[3]{%\n \State \algorithmicif\ #1\ \algorithmicthen\ #2\ \algorithmicelse\ #3}\n\n\n\begin{algorithm}[t]\n\t\caption{Reconstruction of the network graph\n\t\t$(\Graph, V^B, \Paths)$}\n\t\label{algorithm:non_symmetric}\n\t\begin{algorithmic}[1]\n\t\t\For{$b_1,b_2 \in V^B$} \Comment{{\bf Initialization}}\n\t\t\State $\mathcal{R}(b_1,b_2) = [ (b_1,0),(b_2, |\Path(b_1,b_2)|) ]$\n\t\t\EndFor\n\t\t\n\t\n\t\t\For{$b_1,b_2,b_3 \in V^B$} \Comment{{\bf Main Loop}}\n\t\t\State create label $a$ \label{algo:create_x}\n\t\t\State $\delta = \PCD(b_1 \prec b_2,b_3)$ \label{algo:distance_x}\n\t\t\State {\sc UpdatePath}($\mathcal{R}(b_1,b_2), a,\delta$) \label{algo:first_call}\n \IfThenElse {``symmetric routing''} {$a'=a$} {create label $a'$} \label{algo_create_y}\n\t\t\State $\delta' = |\Path(b_2,b_1)| - \PCD(b_2,b_3 \succ b_1)$ \label{algo:dist_y} \n\t\t\State {\sc UpdatePath}($\mathcal{R}(b_2,b_1), a',\delta'$) \label{algo:second_call}\n\t\t\EndFor\n\t\t\n\t\n\t\t\State \textbf{Read off} the graph from reconstructed paths\n\t\t$\mathcal{R}$. \Comment{{\bf Return the result}}\n\t\t\n\t\t\Statex\n\t\t\Function{UpdatePath}{$\mathcal{R}(u,v),a,\delta$}\n\t\t\Comment{{\bf Recursive Function}} \label{algo:function_start}\n\t\t\IfThen {$\exists\,(\cdot,\delta) \in \mathcal{R}(u,v)$} {return}\n\t\t\label{algo:check_vertex}\n\t\t\State insert $(a,\delta)$ into $\mathcal{R}(u,v)$\n \State $\delta' = |\Path(u,v)| - \delta$\n\t\t\For {$z \in V^B$} \label{algo:discovery_loop}\n\t\t\IfThen {$\PCD(u \prec v,z) \geq \delta$} \n\t\t\label{algo:transfer_x_tail}\n\t\t{\sc UpdatePath}($\mathcal{R}(u,z),a,\delta$)\n\t\n\t\t\IfThen {$\PCD(u,z \succ v) \geq \delta'$} \n\t\t\label{algo:transfer_x_head} \n {\sc UpdatePath}($\mathcal{R}(z,v),a,|\Path(z,v)| - \delta'$)\n\t\n\t\t\EndFor\n\t\n\t\t\EndFunction\n\t\end{algorithmic}\n\end{algorithm}\n\nThe following comments might be in order. In\nlines~\ref{algo:create_x}--\ref{algo:distance_x} $a$ is the label for\nthe vertex that is the $(b_1 \prec b_2,b_3)$-junction and $\delta$ is the distance from $b_1$ to $a$ along the path $\Path(b_1,b_2)$. In\nlines~\ref{algo_create_y}--\ref{algo:dist_y} $a'$ is the label for the\n$(b_2,b_3 \succ b_1)$-junction and $\delta'$ is the distance from $b_2$ to $a'$ along the path $\Path(b_2,b_1)$. If we know {\it a priori\/} that the routing is symmetric, the $(b_1 \prec b_2,b_3)$-junction and the $(b_2,b_3 \succ b_1)$-junction are the same vertex and can receive the same label. \n\nLine~\ref{algo:check_vertex} checks if there is already a vertex at\ndistance $\delta$ from $b_1$ (if there is, label $a$ is unused).\nFinally, the loop starting on line~\ref{algo:discovery_loop} looks for\nany other paths that the vertex with label $a$ must belong to. 
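\n\nFor readers who prefer working code to pseudocode, the following Python\ntranscription of Algorithm~\ref{algorithm:non_symmetric} may be useful.\nIt is a sketch of our own rather than a reference implementation: the\nPCD is assumed to be supplied as three lookup tables of path lengths and\njunction distances (see the comment lines), and the label bookkeeping is\nsimplified.\n\begin{verbatim}\n# Sketch of Algorithm 1.  Assumed inputs:\n#   plen[(b1, b2)]    = |Path(b1, b2)|\n#   head[(b, b1, b2)] = PCD(b < b1, b2)\n#   tail[(b1, b2, b)] = PCD(b1, b2 > b)\n# defined for all ordered pairs/triples of distinct boundary vertices.\nfrom itertools import count, permutations\n\ndef reconstruct(boundary, plen, head, tail, symmetric_routing=False):\n    # R(b1, b2) is stored as {distance from b1: vertex label}.\n    R = {(b1, b2): {0: b1, plen[(b1, b2)]: b2}\n         for b1, b2 in permutations(boundary, 2)}\n    labels = (f"x{k}" for k in count(1))     # fresh internal labels\n\n    def update_path(u, v, a, delta):\n        if delta in R[(u, v)]:               # vertex at this distance exists\n            return\n        R[(u, v)][delta] = a\n        delta_p = plen[(u, v)] - delta       # distance from label a to v\n        for z in boundary:\n            if z in (u, v):\n                continue\n            if head[(u, v, z)] >= delta:     # a also lies on Path(u, z)\n                update_path(u, z, a, delta)\n            if tail[(u, z, v)] >= delta_p:   # a also lies on Path(z, v)\n                update_path(z, v, a, plen[(z, v)] - delta_p)\n\n    for b1, b2, b3 in permutations(boundary, 3):    # main loop\n        a = next(labels)\n        update_path(b1, b2, a, head[(b1, b2, b3)])\n        a2 = a if symmetric_routing else next(labels)\n        update_path(b2, b1, a2, plen[(b2, b1)] - tail[(b2, b3, b1)])\n\n    # Read off edges as consecutive entries of each reconstructed path.\n    edges = {}\n    for (b1, b2), entries in R.items():\n        pts = sorted(entries.items())\n        for (d1, x), (d2, y) in zip(pts, pts[1:]):\n            edges[(x, y)] = d2 - d1\n    return R, edges\n\end{verbatim}\n\n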
In this last step we rely heavily on the tree consistency property.\n\n\begin{figure}[h]\n\t\centering\n\t\includegraphics[width=0.8\textwidth]{f_Algorithm1}\n\t\caption{Insertion of the internal vertex $x$ in the reconstructed paths (a) $\mathcal{R}(u,z)$ and (b) $\mathcal{R}(z,v)$ upon fulfillment of conditions~\ref{algo:transfer_x_tail} and~\ref{algo:transfer_x_head} respectively, in the call made at line~\ref{algo:first_call} of Algorithm~\ref{algorithm:non_symmetric}.}\n\t\label{fig:VertexInsertion}\n\end{figure}\n\nSome further code improvements are possible. Creating and then\ndiscarding unused labels can be avoided either by performing a check\nsimilar to line~\ref{algo:check_vertex} in the main loop or, more\nelegantly, by making $a$ an optional argument to the function {\sc\nUpdatePath} and creating a label after line~\ref{algo:check_vertex}\nif no $a$ was supplied. Additionally, if the edge weights are\nsymmetric, the call to $\PCD$ in line \ref{algo:dist_y} can be avoided\nby using $\PCD(b_2,b_3 \succ b_1) = \PCD(b_1 \prec b_2,b_3) = \delta$.\n\nFinally, a crude upper bound on the complexity of the algorithm (in terms\nof label insertions into various $\mathcal{R}$) is $|V^I| \times |V^B|^2$,\ni.e.\ the product of the number of internal vertices and the square of the\nnumber of boundary vertices of the graph.\n\n\n\n\n\subsection{Specialized algorithm for symmetric routing}\t\n\nWe have shown that the reconstruction\nAlgorithm~\ref{algorithm:non_symmetric} is universal: it covers both\nthe general non-symmetric network and also graphs with symmetric\nrouting. However, having the prior information that the routing on the\nnetwork is symmetric makes it possible to call the recursive function\n{\sc UpdatePath} less often. This is due to the symmetric routing property\nthat if an internal vertex $x$ is inserted in the reconstructed path\n$\Path(u,v)$ for $u,v \in V^B$, then this vertex should also be inserted\nin the reverse path $\Path(v,u)$ with the appropriate distance from the root\n$v$ (this is not generally true in the non-symmetric routing case). To\ntake advantage of this feature we propose\nAlgorithm~\ref{algorithm:symmetric} as a specialized version of the\nreconstruction algorithm for the graphs with symmetric routing.\n\nCompared to Algorithm~\ref{algorithm:non_symmetric}, the new\nalgorithm calls the recursive function half as many times (as can be seen by\ncomparing the main loops of the two algorithms). 
The reduction in the number\nof calls of the reconstruction function is compensated by adding line 14\nin the new algorithm, where the label of an internal vertex\ninserted in a given path is inserted in the reverse path as well.\n\nIt should be pointed out that although fewer calls of the main\nreconstruction function make the algorithm computationally more\nefficient, from the point of view of mathematical complexity (in terms\nof insertions of a vertex into a reconstructed path), the two\nalgorithms can be considered the same.\n\n\n\begin{algorithm}[h]\n\t\caption{Reconstruction of a network graph with symmetric routing}\n\t\label{algorithm:symmetric}\n\t\begin{algorithmic}[1]\n\t\t\For{$b_1,b_2 \in V^B$} \Comment{{\bf Initialization}}\n\t\t\State $\mathcal{R}(b_1,b_2) = [ (b_1,0),(b_2, |\Path(b_1,b_2)|) ]$\n\t\t\EndFor\n\t\t\n\t\t\For{$b_1,b_2,b_3 \in V^B$} \Comment{{\bf Main Loop}}\n\t\t\State create label $a$ \label{algo:create_x_sym}\n\t\t\State $\delta = \PCD(b_1 \prec b_2,b_3)$ \label{algo:distance_x_sym_f}\n\t\t\State $\delta' = \PCD(b_2,b_3 \succ b_1)$ \label{algo:distance_x_sym_r}\n\t\t\State {\sc UpdatePath}($\mathcal{R}(b_1,b_2),a,\delta, \delta'$)\n\t\t\EndFor\n\t\t\n\t\n\State \textbf{Read off} the graph from reconstructed paths\n$\mathcal{R}$. \Comment{{\bf Return the result}}\n\n\Statex\n\Function{UpdatePath}{$\mathcal{R}(u,v),a,\delta, \delta'$}\n\Comment{{\bf Recursive Function}} \label{algo:function_start_sym}\n\IfThen {$\exists\,(\cdot,\delta) \in \mathcal{R}(u,v)$} {return}\n\label{algo:check_vertex_sym}\n\State insert $(a,\delta)$ into $\mathcal{R}(u,v)$\n\State insert $(a,|\Path(v,u)| - \delta')$ into $\mathcal{R}(v,u)$\n\n\For {$z \in V^B$} \label{algo:discovery_loop_sym}\n\IfThen {$\PCD(u \prec v,z) \geq \delta$} {{\sc UpdatePath}($\mathcal{R}(u,z),a,\delta,\delta'$)}\n\label{algo:transfer_x_tail_sym}\n\IfThen {$\PCD(v \prec u,z) \geq |\Path(v,u)| - \delta'$} {{\sc UpdatePath}($\mathcal{R}(v,z),a,|\Path(v,u)| - \delta',|\Path(u,v)| - \delta $)}\n\label{algo:transfer_x_head_sym} \n\EndFor\n\n\n\EndFunction\n\t\end{algorithmic}\n\end{algorithm}\n\n\n\subsection{Reconstruction example}\n\nIn the following example, we will show how having the prior\ninformation of symmetric routing will help to uniquely reconstruct the\nnetwork graph shown in Fig.~\ref{fig:Delta}(a). The selected routing\namong the boundary vertices $V^B = \{b_1,b_2,b_3\}$ follows the\nsequence $\Path(b_i,b_j) = [b_i,x_i,x_j,b_j]$ for $i,j = 1,2,3$, $i \neq j$. From\nthe measurement point of view, the PCD on the graph is represented as\nthe set of observed logical trees in Fig.~\ref{fig:DeltaReconst}.\n\nIf there is no information on the symmetry of routing, then we cannot\nconclude that $a_1 = a_4$, $a_2=a_5$ and $a_3=a_6$. The resulting\nreconstruction will be the graph appearing in Fig.~\ref{fig:Delta}(b).\n\n\begin{figure}[ht]\n\t\centering\n\t\includegraphics[scale=0.9]{f_Delta_Underlying}\n\t\caption{(a) An example of a network graph. To avoid clutter, the edges\n\t\twith no specified direction are assumed to go in both directions\n\t\twith the same weight; (b) the reconstructed network.}\n\t\label{fig:Delta}\n\end{figure}\n\nWe remark that such geometry is fairly realistic if we, for example,\nconsider the vertices $b_j$ to be internet service providers (ISPs)\nwho are eager to push the traffic addressed outside their network to\nother ISPs as soon as possible. 
Still, the reconstruction does not\nmatch the graph we had originally.\n\n\begin{figure}[h]\n\t\centering\n\t\includegraphics[scale=0.9]{f_DeltaReconst}\n\t\caption{Representation of the PCD on the network example through a set of logical source and receiver trees.}\n\t\label{fig:DeltaReconst}\n\end{figure}\n\nHowever, the reconstruction \emph{is possible} if we know a priori\nthat the routes are symmetric, as they are in this example. Then the\npaths going out of a given source $b$ and the paths going to $b$\nacting as the receiver have exactly the same topology. This\nadditional information allows us to identify $a_1 = a_4$, $a_2=a_5$\nand $a_3=a_6$ in Fig.~\ref{fig:Delta}(b) and thus recover the\noriginal graph. Since symmetric routing networks appear in\napplications, we expect the optimized\nAlgorithm~\ref{algorithm:symmetric} to be of practical value.\n\n\n\subsection{The effect of symmetric edge weights}\n\nA symmetric network graph, for which both the routing on the graph and\nthe edge weights are symmetric, can be considered as a special class of\ngeneral networks. For this class of networks, similarly to the graphs\nwith symmetric routing (see eq.~\ref{eq:reversal}), the same set of edges\nis traversed in the two mutually reverse paths $\Path_1(u,v)$ and\n$\Path_2(v,u)$. Additionally, the symmetric edge weights property\nimplies that for any vertex $x \in \Path_1$ we have\n$|\Path(v,x)| = |\Path(u,v)| - |\Path(u,x)|$. In other words, the path\n$\Path_2$ is fully identified by the available information from the\npath $\Path_1$. This property of symmetric networks implies that\n$\PCD(v,w \succ u) = \PCD(u \prec v,w)$, and as a result\nthe information from the \textit{path correlation data} encoded in the form\nof $(u \prec v,w)$-junctions can be applied to identify the\ninformation for the $(v,w \succ u)$-junctions. From the\n\textbf{reconstruction} point of view, the same conditions as\ndiscussed in the previous sections are required\nfor exact reconstruction of graphs with symmetric edge weights, and thereby\nAlgorithm~\ref{algorithm:symmetric} can be applied for reconstruction\npurposes.\n\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\n \n Methyl chloride has been proposed as an observable biosignature gas in the search for life outside the Solar System.~\\cite{05SeKaMe.CH3Cl} A model of different hypothetical Earth-like planets orbiting a range of M stars predicted that a higher concentration of CH$_{3}$Cl would exist than on Earth, and with stronger spectral features. Seager et al. have since gone on to classify CH$_3$Cl as a Type III biomarker - a molecule produced from a secondary metabolic process - and estimated the concentration required for a realistic detection in a generalized oxidized atmosphere,~\\cite{13SeBaHu.CH3Cl} and for an exoplanet with a thin H$_2$ rich atmosphere and a habitable surface temperature.~\\cite{13aSeBaHu.CH3Cl} The rotation-vibration spectrum of CH$_3$Cl has received increased interest as a result.\n \n A highly accurate and comprehensive line list is lacking for methyl chloride, with varying coverage in the spectroscopic databases.~\\cite{HITRAN,PNNL,GEISA,JPL} The HITRAN database~\\cite{HITRAN} is the most extensive, containing over $100{\\,}000$ transitions for each of the two main isotopologues, $^{12}$CH$_{3}{}^{35}$Cl and $^{12}$CH$_{3}{}^{37}$Cl (henceforth labelled as CH$_{3}{}^{35}$Cl and CH$_{3}{}^{37}$Cl), in the range $0$ to $3200{\\,}$cm$^{-1}$. 
Although the latest update HITRAN2012 has seen improvements, notably around $3000{\\,}$cm$^{-1}$,~\\cite{11BrPeJa.CH3Cl} there are still deficiencies with certain line positions and intensities taken from an empirically refined theoretical anharmonic force field.~\\cite{85KoKoNa.CH3Cl}\n \n Due to its prominent role in depletion of the ozone layer, levels of methyl chloride are being closely monitored by satellite missions such as the Atmospheric Chemistry Experiment~\\cite{ACE_Bern06,06NaBeBo.CH3Cl,11BrChMa.CH3Cl,13BrVoSc.CH3Cl} and the Microwave Limb Sounder.~\\cite{13SaLiMa.CH3Cl} A number of recent publications focusing on line shapes and broadening coefficients,~\\cite{12BrJaBu.CH3Cl,13BrJaLa.CH3Cl,13aBrJaLa.CH3Cl,12GuRoBu.CH3Cl,\n 11BuGuEl.CH3Cl,12BuRoxx.CH3Cl,13Buxxxx.CH3Cl,13BuMaMo.CH3Cl,\n 13RaJaDh.CH3Cl,14RaJaDh.CH3Cl,14aRaJaDh.CH3Cl,13DuLaBu.CH3Cl,14DuLaBu.CH3Cl} needed for a realistic modelling of atmospheric spectra, confirm its terrestrial importance. The $3.4{\\,}\\mu$m region is particularly relevant for atmospheric remote sensing due to a relatively transparent window and strong spectral features of the $\\nu_1$ band of CH$_{3}$Cl. A high-resolution study of the $\\nu_1$, $\\nu_4$ and $3\\nu_6$ bands in this region produced a line list for the range $2920$ to $3100{\\,}$cm$^{-1}$.~\\cite{11BrPeJa.CH3Cl} The $6.9{\\,}\\mu$m region has seen line positions, intensities, and self-broadening coefficients determined for more than 900 rovibrational transitions of the $\\nu_{5}$ band.~\\cite{13RaJaDh.CH3Cl} Nikitin et al. have also measured, modelled and assigned over $20{\\,}000$ transitions for each isotopologue in the region $0$ to $2600{\\,}$cm$^{-1}$.~\\cite{03NiFeCh.CH3Cl,04NiChBu.CH3Cl,05NiChxx.CH3Cl,05NiChBu.CH3Cl} An effective Hamiltonian model adapted to the polyad structure of methyl chloride reproduced observed transitions involving the ground state and $13$ vibrational states with an overall standard deviation of $0.0003{\\,}$cm$^{-1}$.\n \n There is a large body of experimental work on the rovibrational spectrum of methyl chloride. We refer the reader to the most recent publications~\\cite{11BrPeJa.CH3Cl,12BrTrJa.CH3Cl,12BrJaBu.CH3Cl,13BrJaLa.CH3Cl,13aBrJaLa.CH3Cl,12GuRoBu.CH3Cl,11BuGuEl.CH3Cl,12BuRoxx.CH3Cl,13Buxxxx.CH3Cl,13BuMaMo.CH3Cl,13RaJaDh.CH3Cl,14RaJaDh.CH3Cl,14aRaJaDh.CH3Cl,13DuLaBu.CH3Cl,14DuLaBu.CH3Cl,03NiFeCh.CH3Cl,04NiChBu.CH3Cl,05NiChxx.CH3Cl,05NiChBu.CH3Cl} (and references therein) for a more complete overview.\n \n Theoretically there has been a consistent effort over the years to characterize the spectrum of CH$_3$Cl. Much attention has been given to a description of harmonic~\\cite{70DuAlMc.CH3Cl,72DuMcSp.CH3Cl,76Duxxxx.CH3Cl,79Wixxxx.CH3Cl,80LaTaBe.CH3Cl,87ScThxx.CH3Cl,95Haxxxx.CH3Cl,01BlLaxx.CH3Cl} and anharmonic~\\cite{77ScWoBe.CH3Cl,83BeAlxx.CH3Cl,83KoKoNa.CH3Cl,84KoKoNa.CH3Cl,85KoKoNa.CH3Cl,90DuLaxx.CH3Cl,92ScThxx.CH3Cl} force fields, both empirically and using \\textit{ab initio} methods. The latest work by Black and Law~\\cite{01BlLaxx.CH3Cl} employed spectroscopic data from ten isotopomers of methyl chloride to produce an empirical harmonic force field incorporating the most up to date treatment of anharmonic corrections. 
These were largely based on a complete set of empirical anharmonicity constants derived from a joint local mode and normal mode analysis of 66 vibrational energy levels in the region $700$ to $16{\\,}500{\\,}$cm$^{-1}$,~\\cite{90DuLaxx.CH3Cl} and follow-up work in a similar vein by Law.~\\cite{99Laxxxx.CH3Cl}\n \n From a purely \\textit{ab initio} standpoint, Nikitin has computed global nine-dimensional potential energy surfaces (PESs) for vibrational energy level calculations, considering both CH$_{3}{}^{35}$Cl and CH$_{3}{}^{37}$Cl in the region $0$ to $3500{\\,}$cm$^{-1}$.~\\cite{08Nixxxx.CH3Cl} Using fourth-order M{\\o}ller-Plesset perturbation theory MP4 and a correlation-consistent quadruple-zeta basis set, as well as coupled cluster theory CCSD(T) with a triple-zeta basis set, a combined total of $7241$ points with energies up to $h c \\cdot 40{\\,}000{\\,}$cm$^{-1}$ were employed to generate and fit the PESs ($h$ is the Planck constant and $c$ is the speed of light). Vibrational energies were calculated variationally using a finite basis representation and an exact kinetic energy operator, reproducing the fundamental term values with a root-mean-square error of $1.97$ and $1.71{\\,}$cm$^{-1}$ for CH$_{3}{}^{35}$Cl and CH$_{3}{}^{37}$Cl respectively.\n\n The potential energy surface is the foundation of rovibrational energy level calculations. Its quality not only dictates the accuracy of line positions, but is also crucial for achieving significant improvements in calculated band intensities.~\\cite{OvThYu08a.PH3} Achieving ``spectroscopic accuracy'' (better than $\\pm 1{\\,}$cm$^{-1}$) in a purely \\textit{ab initio} fashion is extremely challenging due to the limitations of electronic structure methods. To do so one must account for higher-level (HL) electron correlation beyond the initial coupled cluster method when generating the PES, and use a one-particle basis set near the complete basis set (CBS) limit. Core-valence (CV) electron correlation, scalar relativistic (SR) effects, higher-order (HO) electron correlation, and the diagonal Born-Oppenheimer correction (DBOC) are considered to be the leading HL contributions.~\\cite{Peterson12}\n \n The goal of this study is to use state-of-the-art electronic structure calculations to construct a global PES for each of the two main isotopologues of methyl chloride, CH$_{3}{}^{35}$Cl and CH$_{3}{}^{37}$Cl. This requires inclusion of the leading HL corrections and extrapolation to the CBS limit. The quality of the respective PESs will be assessed by variational calculations of the vibrational energy levels using the computer program TROVE.~\\cite{TROVE2007} To obtain fully converged term values we will exploit the smooth convergence of computed energies with respect to vibrational basis set size, and perform a complete vibrational basis set (CVBS) extrapolation.~\\cite{OvThYu08.PH3} By using a range of theoretical techniques we aim to find out exactly what accuracy is possible for a molecule such as methyl chloride.\n\n The paper is structured as follows: In Sec.~\\ref{sec:PES} the \\textit{ab initio} calculations and analytic representation of the PES are detailed. The variational calculations will be discussed in Sec.~\\ref{sec:variational}, where we assess the combined effect of the HL corrections and CBS extrapolation on the vibrational term values and equilibrium geometry. 
We offer concluding remarks in Sec.~\\ref{sec:conc}.\n \n\\section{Potential Energy Surface}\n\\label{sec:PES} \n\n\\subsection{Electronic structure calculations}\n \n We take a focal-point approach~\\cite{Csaszar98} to represent the total electronic energy,\n\\begin{equation}\\label{eq:tot_en}\nE_{\\mathrm{tot}} = E_{\\mathrm{CBS}}+\\Delta E_{\\mathrm{CV}}+\\Delta E_{\\mathrm{HO}}+\\Delta E_{\\mathrm{SR}}+\\Delta E_{\\mathrm{DBOC}}\n\\end{equation}\n\n\\noindent which allows for greater control over the PES. To compute $E_{\\mathrm{CBS}}$ we employed the explicitly correlated F12 coupled cluster method CCSD(T)-F12b~(Ref.~\\onlinecite{Adler07} - for a detailed review of this method see Refs.~\\onlinecite{F12_Tew2010,F12_Werner2010}) in conjunction with the F12-optimized correlation consistent polarized valence basis sets, cc-pVTZ-F12 and cc-pVQZ-F12,~\\cite{Peterson08} in the frozen core approximation. The diagonal fixed amplitude ansatz 3C(FIX),~\\cite{TenNo04} and a Slater geminal exponent value of $\\beta=1.0$~$a_0^{-1}$ as recommended by Hill et al.~\\cite{Hill09} were used. To evaluate the many electron integrals in F12 theory additional auxiliary basis sets (ABS) are required. For the resolution of the identity (RI) basis, and the two density fitting (DF) basis sets, we utilized the corresponding OptRI,~\\cite{Yousaf08} cc-pV5Z\/JKFIT,~\\cite{Weigend02} and aug-cc-pwV5Z\/MP2FIT~\\cite{Hattig05} ABS, respectively. All calculations were carried out using MOLPRO2012~\\cite{Werner2012} unless stated otherwise.\n \n To extrapolate to the CBS limit we used a parameterized two-point, Schwenke-style~\\cite{Schwenke05} formula,\n\\begin{equation}\\label{eq:cbs_extrap}\nE^{C}_{\\mathrm{CBS}} = (E_{n+1} - E_{n})F^{C}_{n+1} + E_{n}\n\\end{equation}\n\n\\noindent originally proposed by Hill et al.~\\cite{Hill09} The coefficients $F^{C}_{n+1}$ are specific to the $\\mathrm{CCSD-F12b}$ and $\\mathrm{(T)}$ components of the total CCSD(T)-F12b energy and we use values of $F^{\\mathrm{CCSD-F12b}}=1.363388$ and $F^{\\mathrm{(T)}}=1.769474$ as recommended in Ref.~\\onlinecite{Hill09}. No extrapolation was applied to the Hartree-Fock (HF) energy, rather the HF+CABS singles correction~\\cite{Adler07} calculated in the larger basis set was used.\n\n The energy correction from core-valence electron correlation $\\Delta E_{\\mathrm{CV}}$ was calculated at the CCSD(T)-F12b level of theory with the F12-optimized correlation consistent core-valence basis set cc-pCVQZ-F12.~\\cite{Hill10} The same ansatz and ABS as in the frozen core approximation computations were used, however we set $\\beta=1.5$~$a_0^{-1}$. All-electron calculations kept the (1\\textit{s}) orbital of Cl frozen with all other electrons correlated due to the inability of the basis set to adequately describe this orbital.\n \n Core-valence and higher-order electron correlation often contribute to the electronic energy with opposing signs and should thus be considered jointly. We use the hierarchy of coupled cluster methods to estimate the HO correction as $\\Delta E_{\\mathrm{HO}} = \\Delta E_{\\mathrm{T}} + \\Delta E_{\\mathrm{(Q)}}$, including the full triples contribution $\\Delta E_{\\mathrm{T}} = \\left[E_{\\mathrm{CCSDT}}-E_{\\mathrm{CCSD(T)}}\\right]$, and the perturbative quadruples contribution $\\Delta E_{\\mathrm{(Q)}} = \\left[E_{\\mathrm{CCSDT(Q)}}-E_{\\mathrm{CCSDT}}\\right]$. 
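\n \n As a side note, the focal-point composition of Eq.~\\eqref{eq:tot_en}, with $E_{\\mathrm{CBS}}$ obtained from the two-point formula of Eq.~\\eqref{eq:cbs_extrap}, amounts to only a few lines of code. The following Python sketch is purely illustrative (our own, with placeholder energies rather than values from this work); it assumes the total CCSD(T)-F12b energy has been partitioned into its HF+CABS, CCSD-F12b and (T) components as described above.\n\\begin{verbatim}\n# Illustrative focal-point sum with two-point CBS extrapolation.\n# All numerical inputs are placeholders (hartree), not computed data.\n\nF_CCSD_F12B = 1.363388   # Schwenke-style coefficients of Hill et al.\nF_T         = 1.769474\n\ndef cbs_pair(e_tz, e_qz, coeff):\n    # E_CBS = (E_{n+1} - E_n) * F + E_n for one energy component\n    return (e_qz - e_tz) * coeff + e_tz\n\ndef e_total(hf_cabs_qz, ccsd_tz, ccsd_qz, t_tz, t_qz,\n            de_cv, de_ho, de_sr, de_dboc):\n    # E_tot = E_CBS + dE_CV + dE_HO + dE_SR + dE_DBOC\n    e_cbs = (hf_cabs_qz\n             + cbs_pair(ccsd_tz, ccsd_qz, F_CCSD_F12B)\n             + cbs_pair(t_tz, t_qz, F_T))\n    return e_cbs + de_cv + de_ho + de_sr + de_dboc\n\\end{verbatim}\n \n 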
Calculations were carried out in the frozen core approximation at the CCSD(T), CCSDT and CCSDT(Q) levels of theory using the general coupled cluster approach~\\cite{Kallay05,Kallay08} as implemented in the MRCC code~\\cite{mrcc} interfaced to CFOUR.~\\cite{cfour} For the full triples and the perturbative quadruples calculations, we employed the augmented correlation consistent triple zeta basis set, aug-cc-pVTZ(+d for Cl),~\\cite{Dunning89,Kendall92,Woon93,Dunning01} and the double zeta basis set, aug-cc-pVDZ(+d for Cl), respectively. Note that for HO coupled cluster corrections, it is possible to use successively smaller basis sets at each step up in excitation level due to faster convergence.~\\cite{Feller06}\n \n In exploratory calculations, the contributions from the full quadruples $\\left[E_{\\mathrm{CCSDTQ}}-E_{\\mathrm{CCSDT(Q)}}\\right]$ and from the perturbative pentuples $\\left[E_{\\mathrm{CCSDTQ(P)}}-E_{\\mathrm{CCSDTQ}}\\right]$ were found to largely cancel each other out. Thus to reduce the computational expense, only $\\Delta E_{\\mathrm{T}}$ and $\\Delta E_{\\mathrm{(Q)}}$ were deemed necessary for an adequate representation of HO electron correlation.\n \n Scalar relativistic effects $\\Delta E_{\\mathrm{SR}}$ were included through the one-electron mass velocity and Darwin terms (MVD1) from the Breit-Pauli Hamiltonian in first-order perturbation theory.~\\cite{Cowan76} Calculations were performed with all electrons correlated (except for the (1\\textit{s}) of Cl) at the CCSD(T)\/aug-cc-pCVTZ(+d for Cl)~\\cite{Woon95,Peterson02} level of theory using the MVD1 approach~\\cite{Klopper97} implemented in CFOUR. The contribution from the two-electron Darwin term is expected to be small enough to be neglected.~\\cite{Gauss07}\n \n The diagonal Born-Oppenheimer correction $\\Delta E_{\\mathrm{DBOC}}$ was computed again with the (1\\textit{s}) orbital of Cl frozen and all other electrons correlated. Calculations employed the CCSD method~\\cite{Gauss06} as implemented in CFOUR with the aug-cc-pCVTZ(+d for Cl) basis set. The DBOC is the contribution from the nuclear kinetic energy operator acting on the ground state electronic wavefunction. It is mass dependent, so separate contributions were generated for CH$_{3}{}^{35}$Cl and CH$_{3}{}^{37}$Cl.\n\n The spin-orbit interaction was not considered as it can be safely neglected in spectroscopic calculations on light closed-shell molecules.~\\cite{Tarczay01} A simple estimate of the Lamb shift was also calculated from the MVD1 contribution,~\\cite{Pyykko01} but its effect on the vibrational energy levels was negligible. The differing levels of theory and basis set size reflect the fact that different HL energy corrections converge at different rates.\n\n Grid points were generated using a random energy-weighted sampling algorithm of Monte Carlo type, in terms of nine internal coordinates: the C-Cl bond length $r_0$; three C-H bond lengths $r_1$, $r_2$ and $r_3$; three $\\angle(\\mathrm{H}_i\\mathrm{CCl})$ interbond angles $\\beta_1$, $\\beta_2$ and $\\beta_3$; and two dihedral angles $\\tau_{12}$ and $\\tau_{13}$ between adjacent planes containing H$_i$CCl and H$_j$CCl (see Figure~\\eqref{fig:geometry}). 
This led to a global grid of $44{\\,}820$ geometries with energies up to $h c \\cdot 50{\\,}000{\\,}$cm$^{-1}$, which included geometries in the range $1.3\\leq r_0 \\leq 2.95{\\,}\\mathrm{\\AA}$, $0.7\\leq r_i \\leq 2.45{\\,}\\mathrm{\\AA}$, $65\\leq \\beta_i \\leq 165^{\\circ}$ for $i=1,2,3$ and $55\\leq \\tau_{jk} \\leq 185^{\\circ}$ with $jk=12,13$. To ensure an adequate description of the equilibrium region, around $1000$ carefully chosen low-energy points were also incorporated into the data set. At each grid point, the computed coupled cluster energies were extrapolated to the CBS limit using Eq.\\eqref{eq:cbs_extrap}.\n\n \\begin{figure}\n \\includegraphics{new_ch3cl_draw_int_coord}\n \\caption{\\label{fig:geometry}Definition of internal coordinates used for CH$_3$Cl.}\n \\end{figure}\n\n The HL energy corrections are generally small in magnitude and vary in a smooth manner,~\\cite{YaYuRi11.H2CS} displaying a straightforward polynomial-type dependence as can be seen in Figures \\eqref{fig:1d_cv_ho} and \\eqref{fig:1d_mvd1_dboc}. For each of the HL terms, a reduced grid was carefully designed to obtain a satisfactory description of the correction with minimum computational effort. Reduced grids of $9377$, $3526$, $12{\\,}296$ and $3679$ points with energies up to $h c \\cdot 50{\\,}000{\\,}$cm$^{-1}$ were used for the CV, HO, SR and DBOC corrections, respectively.\n \n \\begin{figure*}\n \\includegraphics{cv_ho_1d_ch3cl}\n \\caption{\\label{fig:1d_cv_ho}One-dimensional cuts of the core-valence (CV) and higher-order (HO) corrections with all other coordinates held at their equilibrium values.}\n \\end{figure*}\n \n \\begin{figure*}\n \\includegraphics{mvd1_dboc_1d_ch3cl}\n \\caption{\\label{fig:1d_mvd1_dboc} One-dimensional cuts of the scalar relativistic (MVD1) and diagonal Born-Oppenheimer (DBOC) corrections with all other coordinates held at their equilibrium values.}\n \\end{figure*}\n\n\\subsection{Analytic representation}\n\n Methyl chloride is a prolate symmetric top molecule of the $\\bm{C}_{3\\mathrm{v}}\\mathrm{(M)}$ symmetry group.~\\cite{MolSym_BuJe98} Of the six symmetry operations $\\left\\lbrace E,(123),(132),(12)^{*},(23)^{*},(13)^{*}\\right\\rbrace$ which make up $\\bm{C}_{3\\mathrm{v}}$(M), the cyclic permutation $(123)$ replaces nucleus 1 with nucleus 2, nucleus 2 with nucleus 3, and nucleus 3 with nucleus 1. The permutation-inversion operation $(12)^{*}$ interchanges nuclei 1 and 2 and inverts all particles (including electrons) in the molecular centre of mass. The identity operation $E$ leaves the molecule unchanged.\n \n To represent the PES analytically, an on-the-fly symmetrization procedure has been implemented. We first introduce the coordinates\n\\begin{equation}\\label{eq:stretch1}\n\\xi_1=1-\\exp\\left(-a(r_0 - r_0^{\\mathrm{eq}})\\right)\n\\end{equation}\n\\begin{equation}\\label{eq:stretch2}\n\\xi_j=1-\\exp\\left(-b(r_i - r_1^{\\mathrm{eq}})\\right){\\,};\\hspace{2mm}j=2,3,4{\\,}, \\hspace{2mm} i=j-1\n\\end{equation}\n\n\\noindent where $a=1.65{\\,}\\mathrm{\\AA}^{-1}$ for the C-Cl internal coordinate $r_0$, and $b=1.75{\\,}\\mathrm{\\AA}^{-1}$ for the three C-H internal coordinates $r_1,r_2$ and $r_3$. 
For the angular terms\n\\begin{equation}\\label{eq:angular1}\n\\xi_k = (\\beta_i - \\beta^{\\mathrm{eq}}){\\,};\\hspace{2mm}k=5,6,7{\\,}, \\hspace{2mm} i=k-4\n\\end{equation}\n\\begin{equation}\\label{eq:angular2}\n\\xi_8 = \\frac{1}{\\sqrt{6}}\\left(2\\tau_{23}-\\tau_{13}-\\tau_{12}\\right)\n\\end{equation}\n\\begin{equation}\\label{eq:angular3}\n\\xi_9 = \\frac{1}{\\sqrt{2}}\\left(\\tau_{13}-\\tau_{12}\\right)\n\\end{equation}\n\n\\noindent Here $\\tau_{23}=2\\pi-\\tau_{12}-\\tau_{13}$, and $r_0^{\\mathrm{eq}}$, $r_1^{\\mathrm{eq}}$ and $\\beta^{\\mathrm{eq}}$ are the reference equilibrium structural parameters of CH$_3$Cl. \n\n Taking an initial potential term of the form\n\\begin{equation}\\label{eq:V_i}\nV_{ijk\\ldots}^{\\mathrm{initial}}=\\xi_{1}^{\\,i}\\xi_{2}^{\\,j}\\xi_{3}^{\\,k}\\xi_{4}^{\\,l}\\xi_{5}^{\\,m}\\xi_{6}^{\\,n}\\xi_{7}^{\\,p}\\xi_{8}^{\\,q}\\xi_{9}^{\\,r}\n\\end{equation}\n\n\\noindent with maximum expansion order $i+j+k+l+m+n+p+q+r=6$, each symmetry operation of $\\bm{C}_{3\\mathrm{v}}$(M) is independently applied to $V_{ijk\\ldots}^{\\mathrm{initial}}$, i.e.\n\\begin{equation}\\label{eq:V_op}\nV_{ijk\\ldots}^{\\mathbf{X}}=\\mathbf{X}{\\,}V_{ijk\\ldots}^{\\mathrm{initial}}=\\mathbf{X}\\left(\\xi_{1}^{\\,i}\\xi_{2}^{\\,j}\\xi_{3}^{\\,k}\\xi_{4}^{\\,l}\\xi_{5}^{\\,m}\\xi_{6}^{\\,n}\\xi_{7}^{\\,p}\\xi_{8}^{\\,q}\\xi_{9}^{\\,r}\\right)\n\\end{equation}\n\n\\noindent where $\\mathbf{X}=\\lbrace E,(123),(132),(12)^{*},(23)^{*},(13)^{*}\\rbrace$, to create six new terms. The results are summed up to produce a final term,\n\\begin{equation}\\label{eq:V_f}\nV_{ijk\\ldots}^{\\mathrm{final}}=V_{ijk\\ldots}^{E}+V_{ijk\\ldots}^{(123)}+V_{ijk\\ldots}^{(132)}+V_{ijk\\ldots}^{(12)^*}+V_{ijk\\ldots}^{(23)^*}+V_{ijk\\ldots}^{(13)^*}\n\\end{equation}\n\n\\noindent which is itself subjected to the six $\\bm{C}_{3\\mathrm{v}}$(M) symmetry operations to check its invariance. The total potential function is then given by the expression\n\\begin{equation}\\label{eq:pot_f}\nV_{\\mathrm{total}}(\\xi_{1},\\xi_{2},\\xi_{3},\\xi_{4},\\xi_{5},\\xi_{6},\\xi_{7},\\xi_{8},\\xi_{9})={\\sum_{ijk\\ldots}}{\\,}\\mathrm{f}_{ijk\\ldots}V_{ijk\\ldots}^{\\mathrm{final}}\n\\end{equation}\n\n\\noindent where $\\mathrm{f}_{ijk\\ldots}$ are the corresponding expansion coefficients, determined through a least-squares fitting to the \\textit{ab initio} data. Weight factors of the form suggested by Partridge and Schwenke,~\\cite{Schwenke97}\n\\begin{equation}\\label{eq:weights}\nw_i=\\left(\\frac{\\tanh\\left[-0.0006\\times(\\tilde{E}_i - 15{\\,}000)\\right]+1.002002002}{2.002002002}\\right)\\times\\frac{1}{N\\tilde{E}_i^{(w)}}\n\\end{equation}\n\n\\noindent were used in the fitting, with normalization constant $N=0.0001$ and $\\tilde{E}_i^{(w)}=\\max(\\tilde{E}_i, 10{\\,}000)$, where $\\tilde{E}_i$ is the potential energy at the $i$th geometry above equilibrium (all values in cm$^{-1}$). In our fitting, energies below $15{\\,}000{\\,}$cm$^{-1}$ are favoured by the weight factors. For geometries where $r_0\\geq 2.35{\\,}\\mathrm{\\AA}$ and $r_i\\geq 2.00{\\,}\\mathrm{\\AA}$ for $i=1,2,3$, the weights were reduced by several orders of magnitude. 
At such large stretch coordinates, the coupled cluster method is known to become unreliable, as indicated by a T1 diagnostic value $>0.02$.~\\cite{T1_Lee89} Although energies at these points may not be wholly accurate, they are still useful and ensure that the PES maintains a reasonable shape towards dissociation.\n\n The same form of potential function, Eq.\\eqref{eq:pot_f}, and the same procedure, Eqs.\\eqref{eq:V_i} to \\eqref{eq:V_f}, were used to fit the higher-level correction surfaces. The stretching coordinates however were replaced with linear expansion variables; $\\xi_1=(r_0-r_0^{\\mathrm{eq}})$ and $\\xi_j=(r_i-r_1^{\\mathrm{eq}})$ where $j=2,3,4$ and $i=j-1$. The angular terms, Eqs.\\eqref{eq:angular1} to \\eqref{eq:angular3}, remained the same as before. Each HL correction was fitted independently and the parameters $r_0^{\\mathrm{eq}}$, $r_1^{\\mathrm{eq}}$ and $\\beta^{\\mathrm{eq}}$ were optimized for each surface. The four HL corrections were applied at each of the $44{\\,}820$ grid points, either from a directly calculated value at that geometry, or by interpolation using the corresponding analytic representation. Two final data sets were produced, one for each isotopologue of CH$_3$Cl, the only difference being the contribution from the DBOC.\n \n Two separate fits were carried out and in each instance we could usefully vary 414 expansion parameters to give a weighted root-mean-square (rms) error of $0.82{\\,}$cm$^{-1}$ for energies up to $50{\\,}000{\\,}$cm$^{-1}$. The fit employed Watson's robust fitting scheme,~\\cite{Watson03} the idea of which is to reduce the weight of outliers and lessen their influence in determining the final set of parameters. The Watson scheme improves the fit at energies below $10{\\,}000{\\,}$cm$^{-1}$ which is preferable for our purposes. When comparing the expansion parameters for CH$_{3}{}^{35}$Cl and CH$_{3}{}^{37}$Cl, only very slight differences were observed in the determined values. We refer to these two PESs as CBS-35$^{\\,\\mathrm{HL}}$ and CBS-37$^{\\,\\mathrm{HL}}$ in subsequent calculations.\n \n To assess the combined effect of the HL corrections and CBS extrapolation on the vibrational energy levels and equilibrium geometry of CH$_3$Cl, we fit a reference PES to the raw CCSD(T)-F12b\/cc-pVQZ-F12 energies. Again we used Watson's robust fitting scheme and 414 parameters to give a weighted rms error of $0.82{\\,}$cm$^{-1}$ for energies up to $50{\\,}000{\\,}$cm$^{-1}$. We refer to this PES as VQZ-F12 in subsequent calculations. Note that the CBS-(35\/37)$^{\\,\\mathrm{HL}}$ and VQZ-F12 PESs contain only slightly different parameter sets.\n \n The choice of reference equilibrium structural parameters in our PES expansion is to some extent arbitrary due to the inclusion of linear expansion terms in the parameter set. For this reason, values of $r_0^{\\mathrm{eq}}=1.7775{\\,}\\mathrm{\\AA}$, $r_1^{\\mathrm{eq}}=1.0837{\\,}\\mathrm{\\AA}$, and $\\beta^{\\mathrm{eq}}=108.445^{\\circ}$, used for the CBS-35$^{\\,\\mathrm{HL}}$ PES, were also employed for the CBS-37$^{\\,\\mathrm{HL}}$ and VQZ-F12 PESs. Note that these are not the actual equilibrium parameters which define the minimum of the PES, they are simply parameters of a function. The true equilibrium values will be determined and discussed in Sec.~\\ref{sec:variational}.\n \n Generating a PES on-the-fly is advantageous when it comes to variational calculations as its implementation requires only a short amount of code. 
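\n \n To give an idea of what such a short piece of code can look like, the following Python sketch (our own illustration, not the fitting program used in this work) symmetrizes a single expansion term according to Eqs.\\eqref{eq:V_i} to \\eqref{eq:V_f}. The dihedral angles are treated here as unsigned angles between adjacent planes, so that every $\\bm{C}_{3\\mathrm{v}}$(M) operation acts simply as a relabelling of the three hydrogen atoms; this simplification is adequate for illustration only.\n\\begin{verbatim}\n# Sketch: on-the-fly symmetrization of one potential term.\n# Angles in radians; reference parameters and exponents from the text.\nimport math\nfrom itertools import permutations\n\nR0_EQ, R1_EQ = 1.7775, 1.0837            # Angstrom\nBETA_EQ = math.radians(108.445)\nA_CCL, B_CH = 1.65, 1.75                 # 1/Angstrom\n\ndef xi(r0, r, beta, tau12, tau13):\n    # Expansion variables xi_1 ... xi_9 defined in the text.\n    tau23 = 2.0 * math.pi - tau12 - tau13\n    xis = [1.0 - math.exp(-A_CCL * (r0 - R0_EQ))]\n    xis += [1.0 - math.exp(-B_CH * (ri - R1_EQ)) for ri in r]\n    xis += [b - BETA_EQ for b in beta]\n    xis.append((2.0 * tau23 - tau13 - tau12) / math.sqrt(6.0))\n    xis.append((tau13 - tau12) / math.sqrt(2.0))\n    return xis\n\ndef symmetrized_term(powers, r0, r, beta, tau12, tau13):\n    # Sum of xi_1^i ... xi_9^r over the six C3v(M) operations,\n    # each realized here as a permutation of the hydrogen labels.\n    tau = {frozenset({1, 2}): tau12, frozenset({1, 3}): tau13,\n           frozenset({2, 3}): 2.0 * math.pi - tau12 - tau13}\n    total = 0.0\n    for p in permutations((1, 2, 3)):\n        rp    = [r[p[i] - 1] for i in range(3)]\n        betap = [beta[p[i] - 1] for i in range(3)]\n        t12p  = tau[frozenset({p[0], p[1]})]\n        t13p  = tau[frozenset({p[0], p[2]})]\n        term = 1.0\n        for x, k in zip(xi(r0, rp, betap, t12p, t13p), powers):\n            term *= x ** k\n        total += term\n    return total\n\\end{verbatim}\n \n 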
Alternatively, one can derive the full analytic expression for the potential and incorporate this into the nuclear motion computations, but this method is cumbersome. The CBS-35$^{\\,\\mathrm{HL}}$ and CBS-37$^{\\,\\mathrm{HL}}$ expansion parameter sets are provided in the supplementary material along with a FORTRAN routine to construct the PESs.~\\cite{EPAPSCH3CL} \n\n\\section{Results}\n\\label{sec:variational}\n\n\\subsection{Extrapolation to the Complete Vibrational Basis Set Limit}\n\\label{sec:cvbs}\n\n The nuclear motion code TROVE (Theoretical ROVibrational Energies)~\\cite{TROVE2007} is designed to calculate the rotation-vibration energy levels and corresponding transition intensities for a polyatomic molecule of arbitrary structure in an isolated electronic state. The flexibility of TROVE has allowed a range of molecular systems to be treated,~\\cite{OvThYu08a.PH3,OvThYu08.PH3,YaYuRi11.H2CS,YuBaYa09.NH3,YuYaTh09.HSOH,YaYuJe10.HSOH,YuCaYa10.SBH3,\n YaYuJe11.H2CO,UnTeYu13.SO3,PoKoOv13.H2O2,YuTeBa13.CH4} and for the present study the functionality has been extended to handle molecules of the form XY$_3$Z.\n\n In TROVE, solution of the rotation-vibration Schr\\\"{o}dinger equation is achieved by numerical diagonalization of the corresponding Hamiltonian constructed in terms of a symmetry adapted basis set. The rovibrational Hamiltonian is represented as a power series expansion around a reference geometry, taken presently at the equilibrium configuration. For the present work we take advantage of recent developments in TROVE, in particular the implementation of a novel method of constructing the rovibrational Hamiltonian in terms of curvilinear internal coordinates.~\\cite{YaYu15.ADF} By employing this new approach, our variational results show much faster convergence with respect to vibrational basis set size. We will see the importance of this later on. For CH$_3$Cl, we truncate the kinetic and potential energy operators at 6th and 8th order respectively in all calculations. This level of truncation is sufficient for our purposes, however we refer the reader to Ref.~\\onlinecite{TROVE2007} for a detailed discussion of the associated errors of such a scheme. Note that atomic mass values have been used in the subsequent TROVE computations.\n \n A multi-step contraction scheme is used to generate the vibrational basis set, the size of which is controlled by the polyad number \n\\begin{equation}\n P = \\sum_{k=1}^{9} a_k n_k\n\\end{equation}\n\n\\noindent The quantum numbers $n_k$ correspond to primitive basis functions $\\phi_{n_k}$, which are obtained from solving one-dimensional Schr\\\"{o}dinger equations for each of the nine vibrational modes by means of the Numerov-Cooley method.~\\cite{Numerov1924,Cooley1961} Using the definition of the polyad coefficient $a_k=\\omega_k\/\\min(\\omega_1\\ldots\\omega_9)$, where $\\omega_k$ denotes the harmonic frequency of the $k$th mode, the polyad number for CH$_{3}$Cl is\n\\begin{equation}\\label{eq:polyad}\nP = n_1+2(n_2+n_3+n_4)+n_5+n_6+n_7+n_8+n_9 \\leq P_{\\mathrm{max}}\n\\end{equation}\n\n\\noindent which does not exceed a predefined maximum value $P_{\\mathrm{max}}$.\n\n Fully converged energies in variational calculations are usually obtained with the use of an extended basis set. We have only been able to compute $J=0$ vibrational energies up to a polyad truncation number of $P_{\\mathrm{max}}=14$ for CH$_3$Cl. 
As shown in Figure~\\eqref{fig:dimension}, this requires the diagonalization of a Hamiltonian matrix of dimension close to $128{\\,}000$, which in turn equals the number of primitive basis functions generated. The extension to $P_{\\mathrm{max}}=16$ using TROVE would be an arduous computational task. \n \n \\begin{figure}\n \\includegraphics{dim_matrix_poly}\n \\caption{\\label{fig:dimension}Size of the $J=0$ Hamiltonian matrix with respect to the polyad truncation number $P_{\\mathrm{max}}$. Computations were feasible up to $P_{\\mathrm{max}}=14$.}\n \\end{figure}\n \n One means of achieving converged vibrational energy levels without having to diagonalize increasingly large matrices, is the use of a complete vibrational basis set (CVBS) extrapolation.~\\cite{OvThYu08.PH3} In analogy to the common basis set extrapolation techniques of electronic structure theory,~\\cite{Petersson88,Petersson91} the same principles can be applied to TROVE calculations with respect to $P_{\\mathrm{max}}$. We adopt the exponential decay expression\n\\begin{equation}\n E_i(P_{\\mathrm{max}}) = E_i^{\\mathrm{CVBS}}+A_i\\exp(-\\lambda_i P_{\\mathrm{max}})\n\\end{equation}\n\n\\noindent where $E_i$ is the energy of the $i$th level, $E_i^{\\mathrm{CVBS}}$ is the respective energy at the CVBS limit, $A_i$ is a fitting parameter, and $\\lambda_i$ can be found from\n\\begin{equation}\n\\lambda_i=-\\frac{1}{2}\\ln\\left(\\frac{E_i(P_{\\mathrm{max}}+2)-E_i(P_{\\mathrm{max}})}{E_i(P_{\\mathrm{max}})-E_i(P_{\\mathrm{max}}-2)}\\right)\n\\end{equation}\n\n Values of $P_{\\mathrm{max}}=\\lbrace 10,12,14 \\rbrace$ were employed for a CVBS extrapolation of all vibrational term values up to $5000{\\,}$cm$^{-1}$, and for selected higher energies to compare with experiment. This was done for the CBS-35$^{{\\,}\\mathrm{HL}}$, CBS-37$^{{\\,}\\mathrm{HL}}$, and VQZ-F12 PESs. In Figure~\\eqref{fig:converge}, the convergence of the vibrational energy levels up to $5000{\\,}$cm$^{-1}$ for the CBS-35$^{{\\,}\\mathrm{HL}}$ PES can be seen with respect to the final $E_i^{\\mathrm{CVBS}}$ extrapolated values. Below $4000{\\,}$cm$^{-1}$ the computed $P_{\\mathrm{max}}=14$ term values are already reasonably well converged. Only five levels in this range possess a residual $\\Delta E(P_{\\mathrm{max}}-P_{\\mathrm{CVBS}})$ larger than $0.1{\\,}$cm$^{-1}$, none of which is greater than $0.3{\\,}$cm$^{-1}$. As expected, levels involving highly excited modes benefit the most from extrapolation as these converge at a much slower rate. \n\n \\begin{figure}\n \\includegraphics{pcvbs_term_converge}\n \\caption{\\label{fig:converge}Convergence of vibrational term values of CH$_{3}{}^{35}$Cl up to $5000{\\,}$cm$^{-1}$ with respect to $P_{\\mathrm{max}}=P_{\\mathrm{CVBS}}$. For illustrative purposes we restrict the range of $\\Delta E(P_{\\mathrm{max}}-P_{\\mathrm{CVBS}})$ to $10{\\,}$cm$^{-1}$.}\n \\end{figure}\n \n The limiting factor of a CVBS extrapolation is the correct identification of the energy levels at each step up in basis set size. TROVE automatically assigns quantum numbers to the eigenvalues and corresponding eigenvectors by analysing the contribution of the basis functions. Due to the increased density of states above $5000{\\,}$cm$^{-1}$ for higher values of $P_{\\mathrm{max}}$, it quickly becomes difficult to consistently identify and match levels, except for highly excited individual modes. 
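\n \n For completeness we note that the extrapolation itself is a simple per-level calculation. The short Python sketch below (our illustration; the input term values are made up and merely indicate the expected orders of magnitude) implements the two expressions above for one energy level computed at $P_{\\mathrm{max}}=10$, $12$ and $14$.\n\\begin{verbatim}\n# Sketch: CVBS extrapolation of one vibrational term value.\nimport math\n\ndef cvbs_extrapolate(e10, e12, e14):\n    # E(P) = E_CVBS + A*exp(-lam*P); lam from the ratio of differences.\n    lam = -0.5 * math.log((e14 - e12) / (e12 - e10))\n    a = (e14 - e12) / (math.exp(-14.0 * lam) - math.exp(-12.0 * lam))\n    e_cvbs = e14 - a * math.exp(-14.0 * lam)\n    return e_cvbs, lam\n\n# Hypothetical slowly converging level (term values in 1/cm):\n# cvbs_extrapolate(3046.9, 3046.1, 3045.8) -> (about 3045.6, 0.49)\n\\end{verbatim}\n 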
\n\n\\subsection{Vibrational $J=0$ Energies}\n\\label{sec:vib_enr}\n\n The normal modes of methyl chloride are classified by two symmetry species, $A_{1}$ and $E$. Of $A_{1}$ symmetry are the three non-degenerate modes; the symmetric CH$_{3}$ stretching mode $\\nu_{1}$ ($2967.77\/2967.75{\\,}$cm$^{-1}$), the symmetric CH$_{3}$ deformation mode $\\nu_{2}$ ($1354.88\/1354.69{\\,}$cm$^{-1}$), and the C{--}Cl stretching mode $\\nu_{3}$ ($732.84\/727.03{\\,}$cm$^{-1}$). Of $E$ symmetry are the three degenerate modes; the CH$_{3}$ stretching mode $\\nu_{4}^{l_{4}}$ ($3039.26\/3039.63{\\,}$cm$^{-1}$), the CH$_{3}$ deformation mode $\\nu_{5}^{l_{5}}$ ($1452.18\/1452.16{\\,}$cm$^{-1}$), and the CH$_{3}$ rocking mode $\\nu_{6}^{l_{6}}$ ($1018.07\/1017.68{\\,}$cm$^{-1}$). The values in parentheses are the experimentally determined values for CH$_{3}{}^{35}$Cl\/CH$_{3}{}^{37}$Cl from Refs.~\\onlinecite{11BrPeJa.CH3Cl} and \\onlinecite{05NiChBu.CH3Cl}. The additional vibrational angular momentum quantum numbers $l_{4}$, $l_{5}$, and $l_{6}$ are needed to resolve the degeneracy of their respective modes. To be of spectroscopic use we map the vibrational quantum numbers $n_k$ of TROVE to the normal mode quantum numbers $\\mathrm{v}_k$ commonly used. For CH$_3$Cl, the vibrational states are labelled as $\\mathrm{v_1}\\nu_1+\\mathrm{v_2}\\nu_2+\\mathrm{v_3}\\nu_3+\\mathrm{v_4}\\nu_4+\\mathrm{v_5}\\nu_5+\\mathrm{v_6}\\nu_6$ where $\\mathrm{v_i}$ counts the level of excitation.\n\n The calculated $J=0$ energy levels for CH$_{3}{}^{35}$Cl using the CBS-35$^{{\\,}\\mathrm{HL}}$ and VQZ-F12 PESs are listed in Table~\\ref{tab:j0_cbs_35cl}. We compare against all available experimental data taken from Refs.~\\onlinecite{11BrPeJa.CH3Cl}, \\onlinecite{05NiChBu.CH3Cl}, \\onlinecite{90DuLaxx.CH3Cl}, and \\onlinecite{99Laxxxx.CH3Cl}. A small number of levels from Refs.~\\onlinecite{90DuLaxx.CH3Cl} and \\onlinecite{99Laxxxx.CH3Cl} have not been included as we were unable to confidently identify the corresponding values in TROVE.\n \n\\setlength\\LTleft{0pt}\n\\setlength\\LTright{0pt}\n\\LTcapwidth=\\textwidth\n\\begin{longtable*}[ht]{@{\\extracolsep{\\fill}} l c c c c c c c}\n\\caption{\\label{tab:j0_cbs_35cl} Comparison of calculated and experimental $J=0$ vibrational term values (in cm$^{-1}$) for CH$_{3}{}^{35}$Cl. The zero-point energy was computed to be $8219.661\\,$cm$^{-1}$ at the CVBS limit.}\\\\ \\hline\\hline\nMode & Sym. & VQZ-F12{\\,}(A) & CBS-35$^{{\\,}\\mathrm{HL}}${\\,}(B) & Exp. & Obs$-$calc{\\,}(A) & Obs$-$calc{\\,}(B) & Ref.\\\\ \\hline\n\\endfirsthead\n\\caption{(\\textit{Continued})}\\\\ \\hline \nMode & Sym. & VQZ-F12{\\,}(A) & CBS-35$^{{\\,}\\mathrm{HL}}${\\,}(B) & Exp. 
& Obs$-$calc{\\,}(A) & Obs$-$calc{\\,}(B) & Ref.\\\\ \\hline\n\\endhead\n$\\nu_3$ & $A_1$ & 734.37 & 733.22 & 732.8422 & -1.53 & -0.38 & \\onlinecite{05NiChBu.CH3Cl}\\\\\n$\\nu_6$ & $E$ & 1018.16 & 1018.05 & 1018.0709 & -0.09 & 0.02 & \\onlinecite{05NiChBu.CH3Cl}\\\\\n$\\nu_2$ & $A_1$ & 1355.11 & 1355.01 & 1354.8811 & -0.23 & -0.13 & \\onlinecite{05NiChBu.CH3Cl}\\\\\n$\\nu_5$ & $E$ & 1451.57 & 1452.56 & 1452.1784 & 0.61 & -0.38 & \\onlinecite{05NiChBu.CH3Cl}\\\\\n$2\\nu_3$ & $A_1$ & 1459.94 & 1457.54 & 1456.7626 & -3.18 & -0.78 & \\onlinecite{05NiChBu.CH3Cl}\\\\\n$\\nu_3+\\nu_6$ & $E$ & 1747.10 & 1745.78 & 1745.3711 & -1.73 & -0.41 & \\onlinecite{05NiChBu.CH3Cl}\\\\\n$2\\nu_6$ & $A_1$ & 2029.67 & 2029.46 & 2029.3753 & -0.29 & -0.09 & \\onlinecite{05NiChBu.CH3Cl}\\\\\n$2\\nu_6$ & $E$ & 2038.58 & 2038.37 & 2038.3262 & -0.25 & -0.04 & \\onlinecite{05NiChBu.CH3Cl}\\\\\n$\\nu_2+\\nu_3$ & $A_1$ & 2082.35 & 2080.98 & 2080.5357 & -1.82 & -0.45 & \\onlinecite{05NiChBu.CH3Cl}\\\\\n$3\\nu_3$ & $A_1$ & 2176.84 & 2173.09 & 2171.8875 & -4.95 & -1.20 & \\onlinecite{05NiChBu.CH3Cl}\\\\\n$\\nu_3+\\nu_5$ & $E$ & 2183.51 & 2183.30 & 2182.5717 & -0.94 & -0.73 & \\onlinecite{05NiChBu.CH3Cl}\\\\\n$\\nu_2+\\nu_6$ & $E$ & 2368.08 & 2367.90 & 2367.7222 & -0.35 & -0.18 & \\onlinecite{05NiChBu.CH3Cl}\\\\\n$\\nu_5+\\nu_6$ & $E$ & 2461.19 & 2461.98 & 2461.6482 & 0.46 & -0.33 & \\onlinecite{05NiChBu.CH3Cl}\\\\\n$2\\nu_3+\\nu_6$& $E$ & 2467.19 & 2464.65 & 2463.8182 & -3.37 & -0.83 & \\onlinecite{05NiChBu.CH3Cl}\\\\\n$\\nu_5+\\nu_6$ & $A_1$ & 2464.50 & 2465.28 & 2464.9025 & 0.40 & -0.38 & \\onlinecite{05NiChBu.CH3Cl}\\\\\n$\\nu_5+\\nu_6$ & $A_2$ & 2466.85 & 2467.85 & 2467.6694 & 0.82 & -0.18 & \\onlinecite{05NiChBu.CH3Cl}\\\\\n$2\\nu_2$ & $A_1$ & 2694.69 & 2694.61 & 2693.00 & -1.69 & -1.61 & \\onlinecite{90DuLaxx.CH3Cl}\\\\\n$\\nu_3+2\\nu_6$& $A_1$ & 2753.23 & 2751.74 & 2751.18 & -2.05 & -0.56 & \\onlinecite{90DuLaxx.CH3Cl}\\\\\n$\\nu_2+2\\nu_3$& $A_1$ & 2800.39 & 2797.64 & 2796.81 & -3.58 & -0.83 & \\onlinecite{90DuLaxx.CH3Cl}\\\\\n$\\nu_2+\\nu_5$ & $E$ & 2803.10 & 2803.96 & 2803.26 & 0.16 & -0.70 & \\onlinecite{90DuLaxx.CH3Cl}\\\\\n$4\\nu_3$ & $A_1$ & 2885.23 & 2880.47 & 2878.00 & -7.23 & -2.47 & \\onlinecite{90DuLaxx.CH3Cl}\\\\\n$2\\nu_5$ & $A_1$ & 2877.75 & 2879.31 & 2879.25 & 1.50 & -0.06 & \\onlinecite{90DuLaxx.CH3Cl}\\\\\n$2\\nu_5$ & $E$ & 2896.27 & 2898.22 & 2895.566 & -0.71 & -2.65 & \\onlinecite{11BrPeJa.CH3Cl}\\\\\n$2\\nu_3+\\nu_5$& $E$ & 2906.64 & 2905.14 & 2907.903 & 1.26 & 2.77 & \\onlinecite{11BrPeJa.CH3Cl}\\\\\n$\\nu_1$ & $A_1$ & 2965.78 & 2969.16 & 2967.7691 & 1.99 & -1.39 & \\onlinecite{11BrPeJa.CH3Cl}\\\\\n$\\nu_4$ & $E$ & 3035.50 & 3038.19 & 3039.2635 & 3.76 & 1.07 & \\onlinecite{11BrPeJa.CH3Cl}\\\\\n$3\\nu_6$ & $E$ & 3045.08 & 3045.76 & 3042.8944 & -2.18 & -2.87 & \\onlinecite{11BrPeJa.CH3Cl}\\\\\n$3\\nu_6$ & $A_1$ & 3060.95 & 3060.62 & 3060.0064 & -0.95 & -0.62 & \\onlinecite{11BrPeJa.CH3Cl}\\\\\n$\\nu_2+2\\nu_6$& $A_1$ & 3373.81 & 3373.57 & 3373.5 & -0.31 & -0.07 & \\onlinecite{90DuLaxx.CH3Cl}\\\\\n$2\\nu_2+\\nu_3$& $A_1$ & 3415.53 & 3414.02 & 3413.0 & -2.53 & -1.02 & \\onlinecite{90DuLaxx.CH3Cl}\\\\\n$\\nu_3+2\\nu_5$& $A_1$ & 3607.98 & 3608.77 & 3607.70 & -0.28 & -1.07 & \\onlinecite{90DuLaxx.CH3Cl}\\\\\n$\\nu_1+\\nu_3$ & $A_1$ & 3700.18 & 3702.43 & 3700.67 & 0.49 & -1.76 & \\onlinecite{90DuLaxx.CH3Cl}\\\\\n$2\\nu_2+\\nu_6$& $E$ & 3702.92 & 3702.80 & 3702.69 & -0.23 & -0.11 & \\onlinecite{90DuLaxx.CH3Cl}\\\\\n$\\nu_3+3\\nu_6$& $E$ & 3760.53 & 3759.07 & 3756.6 & -3.93 & -2.47 & 
\\onlinecite{90DuLaxx.CH3Cl}\\\\\n$\\nu_3+\\nu_4$ & $E$ & 3773.98 & 3776.04 & 3773.52 & -0.46 & -2.52 & \\onlinecite{90DuLaxx.CH3Cl}\\\\\n$2\\nu_5+\\nu_6$& $E$ & 3884.88 & 3886.75 & 3886.05 & 1.17 & -0.70 & \\onlinecite{90DuLaxx.CH3Cl}\\\\\n$\\nu_1+\\nu_6$ & $E$ & 3977.68 & 3980.97 & 3979.66 & 1.98 & -1.31 & \\onlinecite{90DuLaxx.CH3Cl}\\\\\n$\\nu_4+\\nu_6$ & $E$ & 4047.37 & 4049.83 & 4051.22 & 3.85 & 1.39 & \\onlinecite{90DuLaxx.CH3Cl}\\\\\n$2\\nu_2+\\nu_5$& $E$ & 4137.96 & 4138.86 & 4138.29 & 0.33 & -0.57 & \\onlinecite{90DuLaxx.CH3Cl}\\\\\n$\\nu_2+2\\nu_5$& $A_1$ & 4229.37 & 4231.18 & 4230.34 & 0.97 & -0.84 & \\onlinecite{90DuLaxx.CH3Cl}\\\\\n$\\nu_2+3\\nu_6$& $E$ & 4378.39 & 4379.54 & 4380.52 & 2.13 & 0.98 & \\onlinecite{90DuLaxx.CH3Cl}\\\\\n$\\nu_2+\\nu_4$ & $E$ & 4384.03 & 4385.90 & 4382.64 & -1.39 & -3.26 & \\onlinecite{90DuLaxx.CH3Cl}\\\\\n$\\nu_1+\\nu_5$ & $E$ & 4412.81 & 4416.81 & 4415.4 & 2.59 & -1.41 & \\onlinecite{99Laxxxx.CH3Cl}\\\\\n$\\nu_1+2\\nu_6$& $A_1$ & 4982.68 & 4985.90 & 4984.0 & 1.32 & -1.90 & \\onlinecite{90DuLaxx.CH3Cl}\\\\\n$\\nu_1+2\\nu_2$& $A_1$ & 5655.67 & 5658.93 & 5657.0 & 1.33 & -1.93 & \\onlinecite{90DuLaxx.CH3Cl}\\\\\n$2\\nu_2+\\nu_4$& $E$ & 5708.85$^a$& 5711.12$^a$&5713 & 4.15 & 1.88 & \\onlinecite{90DuLaxx.CH3Cl}\\\\\n$\\nu_1+\\nu_4$ & $E$ & 5870.30 & 5875.98 & 5873.8 & 3.50 & -2.18 & \\onlinecite{99Laxxxx.CH3Cl}\\\\\n$2\\nu_1$ & $A_1$ & 5875.28 & 5881.04 & 5878 & 2.72 & -3.04 & \\onlinecite{99Laxxxx.CH3Cl}\\\\\n$\\nu_4+2\\nu_5$& $E$ & 5918.20$^a$&5923.37$^a$&5923.4& 5.20 & 0.03 & \\onlinecite{99Laxxxx.CH3Cl}\\\\\n$2\\nu_4$ & $A_1$ & 6011.38 & 6018.47 & 6015.3 & 3.92 & -3.17 & \\onlinecite{90DuLaxx.CH3Cl}\\\\\n$2\\nu_1+\\nu_5$& $E$ & 7303.96 & 7311.08 & 7313.2 & 9.24 & 2.12 & \\onlinecite{90DuLaxx.CH3Cl}\\\\\n$2\\nu_4+\\nu_5$& $E$ & 7437.80 & 7445.86 & 7443.2 & 5.40 & -2.66 & \\onlinecite{90DuLaxx.CH3Cl}\\\\\n$2\\nu_4+2\\nu_5$& $A_1$& 8870.15 & 8877.25 & 8874.3 & 4.15 & -2.95 & \\onlinecite{90DuLaxx.CH3Cl}\\\\\n$3\\nu_4$ & $A_1$ & 9069.53 & 9079.28 & 9076.9 & 7.37 & -2.38 & \\onlinecite{90DuLaxx.CH3Cl}\\\\\n\\hline\\hline\n$^a$ $P_{\\mathrm{max}}=14$ value. \\\\[-2mm]\n\\end{longtable*}\n \n The CBS-35$^{{\\,}\\mathrm{HL}}$ PES reproduces the six fundamental term values with a root-mean-square (rms) error of $0.75{\\,}$cm$^{-1}$ and a mean-absolute-deviation (mad) of $0.56{\\,}$cm$^{-1}$. This is a considerable improvement over the results of the VQZ-F12 PES, which reproduces the fundamentals with a rms error of $1.86{\\,}$cm$^{-1}$ and a mad of $1.37{\\,}$cm$^{-1}$. Inspection of all computed CH$_{3}{}^{35}$Cl energy levels shows that on the whole, the CBS-35$^{{\\,}\\mathrm{HL}}$ PES is more reliable. This is gratifying as the effort required to generate the CBS-35$^{{\\,}\\mathrm{HL}}$ PES is far greater than that of the VQZ-F12 PES. Unlike other instances,~\\cite{YaYuRi11.H2CS} the VQZ-F12 results do not benefit from an extensive cancellation of errors. Note that the PES reported in Ref.~\\onlinecite{08Nixxxx.CH3Cl}, which did not treat any additional HL energy corrections, produces results with errors similar to those of the VQZ-F12 PES.\n \n The accuracy achieved at lower energies with the CBS-35$^{{\\,}\\mathrm{HL}}$ PES is quite remarkable, with residuals larger than $2{\\,}$cm$^{-1}$ starting to appear around $3000{\\,}$cm$^{-1}$. 
This is a notoriously difficult region of CH$_{3}$Cl with strong resonances, but the experimental values we compare against are from a recent high-resolution study and should thus be trustworthy.~\\cite{11BrPeJa.CH3Cl} In the comparison against values reported in Ref.~\\onlinecite{90DuLaxx.CH3Cl}, and subsequently used in Ref.~\\onlinecite{99Laxxxx.CH3Cl}, we exercise some caution. The $2\\nu_5(E)$ and $2\\nu_3+\\nu_5(E)$ levels presented in Ref.~\\onlinecite{90DuLaxx.CH3Cl} are lower by around $3$ and $5{\\,}$cm$^{-1}$ respectively when compared with new values measured in Ref.~\\onlinecite{11BrPeJa.CH3Cl}. However the agreement for the $\\nu_1$, $\\nu_4$ and $3\\nu_6(E)$ levels is excellent. The residual for the $2\\nu_2$ level seems large given the residual for the $\\nu_2$ term value, and we suspect that the experimental value is incorrect. At higher energies the quality of the CBS-35$^{{\\,}\\mathrm{HL}}$ PES does not appear to deteriorate significantly.\n \n For the $^{37}$Cl isotopologue of methyl chloride, the $J=0$ term values calculated from the CBS-37$^{{\\,}\\mathrm{HL}}$ PES are compared with all available experimental data in Table~\\ref{tab:j0_cbs_37cl}. The CBS-37$^{{\\,}\\mathrm{HL}}$ PES reproduces the six fundamental term values with a rms error of $1.00{\\,}$cm$^{-1}$ and a mad of $0.70{\\,}$cm$^{-1}$. The reduction in accuracy when compared to the CBS-35$^{{\\,}\\mathrm{HL}}$ PES is primarily due to the $\\nu_4$ mode, whose residual has gone from $1.07{\\,}$cm$^{-1}$ for CH$_{3}{}^{35}$Cl to $1.92{\\,}$cm$^{-1}$ for CH$_{3}{}^{37}$Cl. The accuracy of the $3\\nu_6(E)$ level has also declined, but for energies leading up to $3000{\\,}$cm$^{-1}$ the agreement with experiment is excellent. Despite being unable to compare against higher energies we expect the CBS-37$^{{\\,}\\mathrm{HL}}$ PES to perform as well as its $^{35}$Cl counterpart.\n \n\\begin{table}[!ht]\n\\tabcolsep=5pt\n\\caption{\\label{tab:j0_cbs_37cl} Comparison of calculated and experimental $J=0$ vibrational term values (in cm$^{-1}$) for CH$_{3}{}^{37}$Cl. The zero-point energy was computed to be $8216.197\\,$cm$^{-1}$ at the CVBS limit.}\n\\begin{center}\n\t\\begin{tabular}{lcccc}\n\t\\hline\\hline\n \tMode & Sym. 
& CBS-37$^{{\\,}\\mathrm{HL}}$ & Exp.$^a$ & Obs$-$calc \\\\\n\t\\hline\n\t\n\t$\\nu_3$ & $A_1$ & 727.40 & 727.0295 & -0.37 \\\\\n\t$\\nu_6$ & $E$ & 1017.66 & 1017.6824 & 0.02 \\\\\n\t$\\nu_2$ & $A_1$ & 1354.82 & 1354.6908 & -0.13 \\\\\n\t$2\\nu_3$ & $A_1$ & 1446.12 & 1445.3509 & -0.77 \\\\\n\t$\\nu_5$ & $E$ & 1452.53 & 1452.1552 & -0.38 \\\\\n\t$\\nu_3+\\nu_6$ & $E$ & 1739.64 & 1739.2357 & -0.41 \\\\\n\t$2\\nu_6$ & $A_1$ & 2028.68 & 2028.5929 & -0.09 \\\\\n\t$2\\nu_6$ & $E$ & 2037.59 & 2037.5552 & -0.04 \\\\\n\t$\\nu_2+\\nu_3$ & $A_1$ & 2074.90 & 2074.4526 & -0.45 \\\\\n\t$3\\nu_3$ & $A_1$ & 2156.31 & 2155.1179 & -1.19 \\\\\n\t$\\nu_3+\\nu_5$ & $E$ & 2177.47 & 2176.7504 & -0.72 \\\\\n\t$\\nu_2+\\nu_6$ & $E$ & 2367.32 & 2367.1394 & -0.18 \\\\\n\t$2\\nu_3+\\nu_6$& $E$ & 2452.76 & 2451.9048 & -0.85 \\\\\n\t$\\nu_5+\\nu_6$ & $E$ & 2461.78 & 2461.4849 & -0.29 \\\\\n\t$\\nu_5+\\nu_6$ & $A_1$ & 2464.85 & 2464.4690 & -0.38 \\\\\n\t$\\nu_5+\\nu_6$ & $A_2$ & 2467.43 & 2467.2469 & -0.18 \\\\\n\t$\\nu_2+\\nu_5$ & $E$ & 2803.73 & 2803.2$\\,^b$ & -0.53 \\\\\n $2\\nu_5$ & $A_1$ & 2879.81 & 2879.0$\\,^b$ & -0.81 \\\\\n\t$2\\nu_3+\\nu_5$& $E$ & 2893.71 & 2893.7394$\\,^c$ & 0.03 \\\\\n\t$2\\nu_5$ & $E$ & 2898.19 & 2895.449$\\,^c$ & -2.74 \\\\\n\t$\\nu_1$ & $A_1$ & 2969.14 & 2967.7469$\\,^c$ & -1.39 \\\\\n\t$\\nu_4$ & $E$ & 3037.71 & 3039.6311$\\,^c$ & 1.92 \\\\\n\t$3\\nu_6$ & $E$ & 3044.97 & 3041.2568$\\,^c$ & -3.72 \\\\\n\t$3\\nu_6$ & $A_1$ & 3059.47 & 3058.6913$\\,^c$ & -0.78 \\\\\n\t\\hline\\hline\n \\end{tabular}\n\\end{center}\n $^a$ Values from Ref.~\\onlinecite{05NiChBu.CH3Cl} unless stated otherwise.\\\\[-2mm]\n $^b$ Ref.~\\onlinecite{82BeAlGu.CH3Cl}.\n $^c$ Ref.~\\onlinecite{11BrPeJa.CH3Cl}.\\\\[-2mm]\n\\end{table}\n \n We have not computed term values for CH$_{3}{}^{37}$Cl using the VQZ-F12 PES but we expect errors similar to those reported for CH$_{3}{}^{35}$Cl. It is evident that for methyl chloride, the inclusion of additional HL corrections and a CBS extrapolation in the PES lead to considerable improvements in computed $J=0$ energies. We have been able to identify and assign over $100$ new energy levels for both CH$_{3}{}^{35}$Cl and CH$_{3}{}^{37}$Cl which we provide as supplementary material.~\\cite{EPAPSCH3CL} We recommend the CBS-35$^{{\\,}\\mathrm{HL}}$ and CBS-37$^{{\\,}\\mathrm{HL}}$ PESs for future use.\n \n\\subsection{Equilibrium Geometry of CH$_3$Cl}\n\n The equilibrium geometry of methyl chloride determined empirically by Jensen et al.~\\cite{81JeBrGu.CH3Cl} has often served as reference. However the reliability of the axial rotational constants used in their analysis has been questioned.~\\cite{97DeWlRu.CH3Cl} The C-H bond length reported in Ref.~\\onlinecite{81JeBrGu.CH3Cl} also appears too large to be consistent with \\textit{ab initio} calculations, and also with the isolated C-H bond stretching frequency.~\\cite{94DeWlxx.CH3Cl} A combined empirical and \\textit{ab initio} structure has later been determined based on $^{12}$CH$_{3}{}^{35}$Cl, $^{12}$CH$_{3}{}^{37}$Cl, $^{12}$CD$_{3}{}^{35}$Cl and $^{12}$CD$_{3}{}^{37}$Cl experimental data.~\\cite{97DeWlRu.CH3Cl} We compare against this as well as another high-level \\textit{ab initio} study.~\\cite{03DeMaBo.CH3Cl}\n\n The equilibrium structural parameters calculated from the CBS-(35\/37)$^{\\,\\mathrm{HL}}$ PES and the VQZ-F12 PES are listed in Table~\\ref{tab:eq_ref}. 
The CBS-(35\/37)$^{\\,\\mathrm{HL}}$ bond lengths are shorter than the VQZ-F12 values, which is to be expected due to the inclusion of core-valence electron correlation.~\\cite{Coriani:2005} There is good agreement with the values from Refs.~\\onlinecite{97DeWlRu.CH3Cl} and \\onlinecite{03DeMaBo.CH3Cl}. The largest discrepancy concerns the bond angle determined in Ref.~\\onlinecite{97DeWlRu.CH3Cl} which is around $0.3$ degrees larger than all \\textit{ab initio} computed values.\n \n\\begin{table}\n\\tabcolsep=5pt\n\\caption{\\label{tab:eq_ref}Equilibrium structural parameters of CH$_3$Cl}\n\\begin{center}\n\\begin{tabular}{l c c c}\n\\hline\\hline\n& $r$(C-Cl)\/$\\mathrm{\\AA}$ & $r$(C-H)\/$\\mathrm{\\AA}$ & $\\beta$(HCCl)\/deg \\\\[0.5mm]\n\\hline \\\\[-2.5mm]\nCBS-(35\/37)$^{\\,\\mathrm{HL}}$ & 1.7777 & 1.0834 & 108.38 \\\\\nVQZ-F12 & 1.7805 & 1.0849 & 108.39 \\\\ \nRef.~\\onlinecite{97DeWlRu.CH3Cl}$\\,^a$ & 1.7768 & 1.0842 & 108.72 \\\\\nRef.~\\onlinecite{03DeMaBo.CH3Cl}$\\,^b$ & 1.7772 & 1.0838 & 108.45 \\\\\n\\hline\\hline\n\\end{tabular}\n\\end{center}\n$^a$ Value determined from empirical data and CCSD(T) calculations.\\\\[-2mm]\n$^b$ CCSD(T)(\\textit{fc})\/cc-pV(Q,5)Z + MP2(\\textit{ae})\/cc-pwCVQZ - MP2(\\textit{fc})\/cc-pwCVQZ.\n\\end{table}\n \n For further validation we studied the pure rotational spectrum as rotational energies are highly dependent on the molecular geometry through the moments of inertia. In Table~\\ref{tab:rotational} we present the calculated $J\\leq5$ rotational energies in the ground vibrational state for CH$_{3}{}^{35}$Cl using the CBS-35$^{\\,\\mathrm{HL}}$ PES. The computed values reproduce the experimental levels with a rms error of $0.0018{\\,}$cm$^{-1}$. The CBS-(35\/37)$^{\\,\\mathrm{HL}}$ \\textit{ab initio} structural parameters reported in Table~\\ref{tab:eq_ref} can thus be regarded as reliable, and we expect the true equilibrium geometry of methyl chloride to be close to these values.\n \n\\begin{table}[!h]\n\\tabcolsep=5pt\n\\caption{\\label{tab:rotational} Comparison of calculated and experimental $J\\leq5$ pure rotational term values (in cm$^{-1}$) for CH$_{3}{}^{35}$Cl. The observed ground state energy levels are from Ref.~\\onlinecite{05NiChxx.CH3Cl}.}\n\\begin{center}\n\t\\begin{tabular}{cccrrr}\n\t\\hline\\hline\n \t $J$ & $K$ & Sym. & Exp. 
& CBS-35$^{{\\,}\\mathrm{HL}}$ & Obs$-$calc \\\\ \n\t\\hline\t\n\t0 & 0 & $A_1$ & 0.0000 & 0.0000 & 0.0000 \\\\\n\t1 & 0 & $A_2$ & 0.8868 & 0.8868 & 0.0000 \\\\\n\t1 & 1 & $E$ & 5.6486 & 5.6489 & -0.0003 \\\\\n\t2 & 0 & $A_1$ & 2.6604 & 2.6603 & 0.0001 \\\\\n\t2 & 1 & $E$ & 7.4222 & 7.4223 & -0.0001 \\\\\n\t2 & 2 & $E$ & 21.7067 & 21.7075 & -0.0008 \\\\\n\t3 & 0 & $A_2$ & 5.3208 & 5.3205 & 0.0003 \\\\\n\t3 & 1 & $E$ & 10.0825 & 10.0826 & -0.0001 \\\\\n\t3 & 2 & $E$ & 24.3668 & 24.3676 & -0.0008 \\\\\n\t3 & 3 & $A_1$ & 48.1707 & 48.1727 & -0.0020 \\\\\n\t3 & 3 & $A_2$ & 48.1707 & 48.1727 & -0.0020 \\\\\n\t4 & 0 & $A_1$ & 8.8678 & 8.8675 & 0.0003 \\\\\n\t4 & 1 & $E$ & 13.6295 & 13.6294 & 0.0001 \\\\\n\t4 & 2 & $E$ & 27.9137 & 27.9143 & -0.0006 \\\\\n\t4 & 3 & $A_1$ & 51.7173 & 51.7191 & -0.0018 \\\\\n\t4 & 3 & $A_2$ & 51.7173 & 51.7191 & -0.0018 \\\\\n\t4 & 4 & $E$ & 85.0354 & 85.0389 & -0.0035 \\\\\n\t5 & 0 & $A_2$ & 13.3015 & 13.3010 & 0.0005 \\\\\n 5 & 1 & $E$ & 18.0632 & 18.0629 & 0.0003 \\\\\n 5 & 2 & $E$ & 32.3472 & 32.3476 & -0.0004 \\\\\n 5 & 3 & $A_1$ & 56.1505 & 56.1521 & -0.0016 \\\\\n 5 & 3 & $A_2$ & 56.1505 & 56.1521 & -0.0016 \\\\\n 5 & 4 & $E$ & 89.4681 & 89.4714 & -0.0033 \\\\\n 5 & 5 & $E$ & 132.2931 & 132.2985 & -0.0054 \\\\\n \\hline\\hline\n \\end{tabular}\n\\end{center}\n\\end{table}\n \n\\section{Conclusions}\n\\label{sec:conc}\n\n Using state-of-the-art electronic structure calculations, we have generated two global \\textit{ab initio} PESs for the two main isotopologues of methyl chloride. We believe that the accuracy achieved by these PESs is at the limit of what is currently possible using solely \\textit{ab initio} methods, and that it would be extremely challenging to go beyond this without empirical refinement of the respective PESs. Considering that the PESs are purely \\textit{ab initio} constructs, the computed vibrational wavenumbers show remarkable accuracy when compared with experiment for both isotopologues. It is evident that higher-level energy corrections and an extrapolation to the complete basis set limit should be included to obtain accurate theoretical vibrational energies for CH$_3$Cl. The same applies to the determination of equilibrium structural parameters.\n \n For the requirements of high-resolution spectroscopy, the \\textit{ab initio} surfaces presented here will no doubt need to be refined to empirical data.~\\cite{YuBaTe11.NH3} The resulting ``spectroscopic PESs'' can then be used to achieve unprecedented accuracy in the simulation of rotation-vibration spectra, and it is at this stage that the predictive power of the variational approach is fully realised. The PESs presented in this work provide an excellent starting point for this procedure. Once the refinement is complete, a comprehensive rovibration line list applicable for elevated temperatures will be generated for methyl chloride as part of the ExoMol project.~\\cite{ExoMol2012}\n\n\\begin{acknowledgments}\nThis work was supported by ERC Advanced Investigator Project 267219, and FP7-MC-IEF project 629237. A.O. is grateful to UCL for a studentship under their Impact Student Scheme and thanks J\\\"{u}rgen Breidung for valuable discussions.\n\\end{acknowledgments}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\t\n\t\\label{sec:introduction}\n\tLicense Plate (LP) detection and recognition are the key parts of intelligent transportation systems because it is the unique identification of vehicles. 
The relevant methods are widely used on electronic toll payment, parking managing, and traffic monitoring systems. In order to achieve effective detection and recognition, researchers proposed a variety of techniques that are capable of handling the task in most conditions. Before the deep learning era, most of the methods were on the basis of artificial designs~\\cite{tradm1,tradm2,alpr}. They utilized hand-crafted features such as colors, shadows and textures, and integrated them by a cascaded strategy with license plate detection, segmentation, and recognition. Although they reached promising performance, the robustness may not be enough for some uncontrolled circumstances like weather, illumination, and rotation since the scheme relies on manually designed features. Then with the explosion of deep learning methods in recent years, most researchers turned their attention to this framework that is able to learn features automatically. The convenient and efficient technique quickly became popular, and networks for detection and recognition also sprung up~\\cite{pvw, vlpd, lprec1}. In these works, great successes have been achieved. However, for Chinese LPs, the problem of data scarcity gradually emerges. Existing annotated Chinese LP data that is representative of most scenarios cannot meet the huge demands. Thus, to alleviate the issue, we present our Chinese Road Plate Dataset (CRPD).\n\t\n\tAdmittedly, there are already some excellent public datasets with LPs~\\cite{pvw,reid, caltech,zemris,ccpd,aolp,ssig,ufpr,pku,clpd}. These public benchmarks lay the foundation of various LP processing methods, and our CRPD is an effective supplement to existing Chinese LP datasets, which is more challenging. Images of CRPD are collected from electronic monitoring systems in most provinces of mainland China in different periods and weather conditions. The images contain cars with different statuses and types, and quite a part of the data contains more than one LP in one image. Each image has annotations of (i) LP content. (ii) Locations of four vertices. (iii) LP type. More details will be introduced in Section~\\ref{sec:1}.\n\t\n\n\t\n\tAs for detection and recognition tasks, most prestigious methods designed the two branches separately. Zhou \\textsl{et al.}~\\cite{pvw} proposed a scheme for LP detection with Principal Visual Word (PVW) generation and applied bag-of-words in partial-duplicate image search. Chen \\textsl{et al.}~\\cite{vlpd} put forward a method to detect the vehicles and the LPs simultaneously, where the results can be used for further recognition. The two-stage framework is effective, but the error accumulation problems hinder further progresses. Therefore, end-to-end frameworks are increasingly prevalent. In this paper, we propose an end-to-end trainable Chinese LP detection and recognition network with both high efficiency and satisfactory performance as the baseline of our CRPD. Our method is a unified network that consists of two branches. The branch for detection is based on STELA~\\cite{stela}, which is a learned anchor-based detector. It only associates one reference box at each spatial position, which highly reduces the computation to reach a fast running speed. In the recognition branch, we abandon the recognition by segmentation pipeline and utilize a sequence-to-sequence method to accomplish the recognition task. The region-wise features are extracted by the RRoIAlign~\\cite{fots} operator and then will be fed into components for recognition. 
This scheme blurs the line between detection and recognition, which strongly alleviates the error accumulation issues. \n\t\n\n\t\n\tIn summary, there are three main contributions in this work:\n\t\n\t\\begin{itemize}\n\t\t\\item We publish a new Chinese LP dataset with more than 30k images, which covers more scenes and administrative regions of mainland China. We argue that this new dataset is more difficult than existing datasets and is also a supplement to the Chinese LP research field.\n\t\t\\item We propose an end-to-end trainable network for Chinese LP detection and recognition, which almost reaches a trade-off between accuracy and efficiency as a baseline. Through utilizing the common feature extraction branch and the RRoIAlign~\\cite{fots} operator, end-to-end training is achieved, and the error accumulation problems are alleviated with a real-time efficiency kept.\n\t\t\\item Our code and dataset will be publicly available soon. To facilitate the reference of researchers and get more progress, we will upload related materials.\n\t\\end{itemize}\n\t\n\t\\section{Related Work}\n\t\\label{sec:related}\n\t\\subsection{LP Datasets}\n\tDue to the importance of LP detection and recognition, researchers built and published a number of LP datasets. ReId~\\cite{reid} is a dataset for license plate recognition with 76k images gathered from surveillance cameras on highway toll gates. Caltech~\\cite{caltech} and Zemris~\\cite{zemris} collected over 600 images from the road and freeways with high-resolution cameras. Hsu~\\textsl{et al.}~\\cite{aolp} presented a dataset for applications of access control, traffic law enforcement and road patrol. Gon{\\c{c}}alves~\\textsl{et al.}~\\cite{ssig} proposed Sense SegPlate Database to evaluate license plate character segmentation problem. Laroca~\\textsl{et al.}~\\cite{ufpr} provided a dataset that includes 4,500 fully annotated images from 150 vehicles in real-world scenarios. These datasets strongly support the researches of LP detection and recognition methods.\n\t\n\tHowever, LPs in different countries and regions are usually not the same. The mentioned datasets mostly contain LPs that only include alphanumeric characters. In some countries or regions, such as Chinese mainland regions, Japan and South Korea, the LPs contain some special characters. Therefore, it is still significant to build new datasets for these LPs. Among them, mainland Chinese LP detection and recognition are one of the important tasks which require a large amount of data. To meet the demand, Chinese LP datasets were proposed. Zhou~\\textsl{et al.}~\\cite{pvw} collected 220 LP images where the LPs were of little affine distortion. Yuan~\\textsl{et al.}~\\cite{pku} presented a dataset that contains vehicle images captured from various scenes under diverse conditions. Zhang~\\textsl{et al.}~\\cite{clpd} proposed a dataset that contains 1,200 LP images from all 31 provinces in mainland China. CCPD~\\cite{ccpd} is a large and comprehensive dataset with about 290k images that contain plates with various angles, distances, and illuminations. Though they made important contributions to the progress of detection and recognition methods, there are still not enough large and representative datasets.\n\t\n\t\\subsection{LP Detection and Recognition}\n\tOwing to the successes of text detection and recognition, LP processing is also well developed.\n\tThere are methods with end-to-end frameworks, which achieved excellent performance. 
Zhang~\\textsl{et al.}~\\cite{vlpdr} integrated LP detection, tracking, and recognition into a unified framework via deep learning. Silva~\\textsl{et al.}~\\cite{cnnlpdr} proposed to identify the vehicle and the LP region using two passes on the same CNN and then to recognize the characters using a second CNN. Kessentini~\\textsl{et al.}~\\cite{twostage} presented a two-stage network to achieve the detection and recognition of LPs. In~\\cite{mtlpr}, a light CNN was proposed for detection and recognition, which achieved real-time efficiency.\n\t\n\tAs for Chinese LPs, there are also excellent end-to-end frameworks. Laroca~\\textsl{et al.}~\\cite{yoloalpr} proposed a unified approach for LP detection and layout classification to improve the recognition results. Qin~\\textsl{et al.}~\\cite{eulpr} proposed a unified method that can recognize both single-line and double-line LPs in an end-to-end way without line segmentation and character segmentation. However, Chinese LP processing is still challenging due to the large number of categories of Chinese characters. Meanwhile, LP processing under unconstrained scenarios also faces many problems. Therefore, it is still valuable to propose a new end-to-end framework for Chinese LP detection and recognition.\n\t\n\t\\section{CRPD Overview}\n\t\\label{sec:1}\n\t\\subsection{Constitution of Data}\n\t\\label{sec:2}\n\t\n\t\\begin{figure}[h]\n\t\t\\centering\n\t\t\\includegraphics[width=2.7in]{Fig1.eps}\n\t\t\\caption{The constitution of CRPD dataset.}\n\t\t\\label{cons}\n\t\\end{figure}\n\t\n\tBecause CRPD is presented as a supplement for existing datasets, special attention is paid to the diversity of data. The images are mainly captured on electronic monitoring systems, including vehicles that are running, turning, parked, or far away which may cause blur and rotation. The scene includes day and night and different weathers. Quite a part of the images contains more than one LP from different provinces and cities, and there are LPs of special vehicles involved, such as coach cars, police cars, and trailers, whose LPs will contain some special characters. The dataset includes three sub-datasets according to the numbers of LPs: CRPD-single, CRPD-double, and CRPD-multi, as shown in Figure~\\ref{cons}. CRPD-single contains images with only one LP, CRPD-double contains images with two LPs, and CRPD-multi contains images with three or more LPs.\n\t\n\tThere are totally about 25k images for training, 6.25k for validating, and 2.3k for testing. In CRPD-single, there are 20k images for training, 5k for validating, and 1k for testing. In CRPD-double, there are 4k images for training, 1k for validating, and 1k for testing. In CRPD-multi, there are 1k images for training, 0.25k for validating, and 0.3k for testing.\n\t\n\tThe annotations consist of three parts. The first is LP content which includes numbers, Chinese and English characters. There are some LPs that are too small or seriously blurred whose content is unidentifiable, and they are also annotated while the unrecognizable characters are replaced with a special one. The second is the coordinate of four vertices of the LPs. The last is the LP type, including blue (small cars), yellow and single line (front of large cars), yellow and double lines (back of large cars), and white (police cars).\n\t\n\t\\subsection{Data Analysis}\n\tCRPD provides more than 30k LP images with annotations. 
Though it is not the largest dataset in current frequently used Chinese datasets, some characteristics of CRPD will be helpful for training a robust model. Existing datasets mostly contain single and focused LP in one image, as shown in Figure~\\ref{public_datasets}. In some cases, such as electronic toll payment or parking managing systems, the data is highly effective. But when dealing with data for traffic monitoring systems, suspect car tracking, or vehicle flow measuring, images with more LPs will be required. CRPD aims to fill up this deficiency, so we paid special attention to the number of LPs, as shown in Figure~\\ref{crpd}.\n\t\n\t\\begin{figure}[h]\n\t\t\\centering\n\t\t\\includegraphics[width=3.5in]{Fig2.eps}%\n\t\t\\caption{Examples of LP images in existing public datasets.}\n\t\t\\label{public_datasets}\n\t\\end{figure}\n\t\n\tAlso, CRPD has some other advantages for building a robust model. To illustrate them, we compare with CCPD~\\cite{ccpd}, EasyPR~\\cite{easypr}, ChineseLP~\\cite{pvw} and CLPD~\\cite{clpd} in some aspects, which are shown in Tables~\\ref{LP_number},~\\ref{status} and~\\ref{type}.\n\t\n\tThe first is the number of LPs. Images in CCPD~\\cite{ccpd} and CLPD~\\cite{clpd} contain one LP, which can better indicate the detection Precision of a network. But as noted above, this may restrict the scenarios where it can be used. In comparison, EasyPR~\\cite{easypr}, ChineseLP~\\cite{pvw} and our CRPD have better compatibility in this aspect.\n\t\n\tThe second is the status of vehicles. CCPD~\\cite{ccpd} contains images captured from parking lots, and they are more interested in LPs in various circumstances with different illumination, rotation, and blur. Therefore, the vehicle status is not focused. Our CRPD concentrates on the capability to deal with a variety of vehicles, so there is better coverage on vehicle status.\n\t\n\t\\begin{figure}[t]\n\t\t\\centering\n\t\t\\includegraphics[width=3.5in]{Fig3.eps}%\n\t\t\\caption{Examples of LP images in CRPD-single (the first row), CRPD-double (the second row), and CRPD-multi (the third row).}\n\t\t\\label{crpd}\n\t\\end{figure}\n\t\n\t\\begin{table}[!t]\n\t\n\t\t\\caption{A comparison on the number of images with different numbers of LPs between current public Chinese LP datasets.}\n\t\t\\label{LP_number}\n\t\t\\centering\n\t\t\\begin{tabular}{cccccc}\n\t\t\t\\toprule[0.75pt]\n\t\t\tLP Number & CCPD & EasyPR & ChineseLP & CLPD & CRPD \\\\\n\t\t\t\\midrule[0.5pt]\t\n\t\t\t$=1$ & 224001 & 225 & 392& 1200& 26659 \\\\\n\t\t\t$=2$ & 0 & 21 & 16& 0& 6242 \\\\\n\t\t\t$=3$ & 0 & 10 & 1& 0& 1232 \\\\\n\t\t\t$\\geq4$ & 0 & 0 & 2& 0& 371 \\\\\n\t\t\t\\bottomrule[0.75pt]\n\t\t\\end{tabular}\n\t\\end{table}\n\t\n\t\\begin{table}[t]\n\t\n\t\t\\caption{A comparison of the coverage of different vehicle statuses between current public Chinese LP datasets.}\n\t\t\\label{status}\n\t\t\\centering\n\t\t\\begin{tabular}{cccccc}\n\t\t\t\\toprule[0.75pt]\n\t\t\tStatus of Vehicles & CCPD & EasyPR & ChineseLP& CLPD& CRPD \\\\\n\t\t\t\\midrule[0.5pt]\n\t\t\tParked & \\ding{52} &\\ding{52} &\\ding{52} &\\ding{52} &\\ding{52} \\\\\n\t\t\tRunning & \\ding{55} &\\ding{52} &\\ding{52} &\\ding{52} &\\ding{52} \\\\\n\t\t\tTurning & \\ding{55} &\\ding{52} &\\ding{52} &\\ding{52} &\\ding{52} \\\\\n\t\t\tFar away & \\ding{55} &\\ding{55} &\\ding{55} &\\ding{52} &\\ding{52} \\\\\n\t\t\t\\bottomrule[0.75pt]\n\t\t\\end{tabular}\n\t\\end{table}\n\t\n\tThe last is the vehicle types. 
Though there are not a number of special vehicles on the road, the detection and recognition of them are still of great importance. Thus, we paid attention to the LPs of special vehicles to ensure the capability of the network trained by CRPD to deal with these LPs. Some images of them are shown in Figure~\\ref{special}. Altogether, our CRPD covers most a variety of common scenes. It is a strong supplement to Chinese LP datasets either used as training or evaluating data.\n\t\n\t\\begin{table}[t]\n\t\n\t\t\\caption{A comparison of the coverage of different vehicle types between current public Chinese LP datasets.}\n\t\t\\label{type}\n\t\t\\centering\n\t\t\\begin{tabular}{cccccc}\n\t\t\t\\toprule[0.75pt]\n\t\t\tType of Vehicles & CCPD & EasyPR & ChineseLP& CLPD& CRPD \\\\\n\t\t\t\\midrule[0.5pt]\n\t\t\tCoach Vehicles & \\ding{55} & \\ding{52} & \\ding{55}& \\ding{52}& \\ding{52} \\\\\n\t\t\tPolice Vehicles & \\ding{55} & \\ding{55} & \\ding{55}& \\ding{52}& \\ding{52} \\\\\n\t\t\n\t\t\tTrailers & \\ding{55} & \\ding{55} & \\ding{55}& \\ding{55}& \\ding{52} \\\\\n\t\t\t\\bottomrule[0.75pt]\n\t\t\\end{tabular}\n\t\\end{table}\n\t\n\t\\begin{figure}[!t]\n\t\t\\centering\n\t\t\\includegraphics[width=3.5in]{Fig4.eps}%\n\t\t\\caption{LPs of special vehicles with their annotations in CRPD. The green rectangles are the annotated boxes and the text on the top left corner of the zooming rectangle is the annotated LP content.}\n\t\t\\label{special}\n\t\\end{figure}\n\t\n\tHowever, there are also some limitations of CRPD. First, the location of each LP character is not annotated, which restricts the applications of object detection and data augmentation. Meanwhile, it is also difficult to detect each character because it is very small in the perspective of electronic monitoring systems. Second, the LPs are all on the vehicles. The actual LPs can be held in hand or placed on the ground in some circumstances, and the detection and recognition of these LPs are also useful.\n\t\n\t\\begin{figure*}[t]\n\t\t\\centering\n\t\t\\includegraphics[width=5in]{Fig5.eps}%\n\t\t\\caption{The framework of our network. Conv, DeConv, Bn, MaxPooling, and ResBlock stand for convolution layer, deformable convolution layer, batch normalization layer, max pooling layer, and residual block, respectively. k, s, p, and c stand for kernel size, stride, padding size, and output channel number, respectively, with the size behind each of them.}\n\t\t\\label{pipeline}\n\t\\end{figure*}\n\t\n\t\\section{Methodology}\n\t\n\t\\label{methodology}\n\t\n\tIn this section, we will describe the proposed method in detail, and the pipeline is shown in Figure~\\ref{pipeline}.\n\t\n\t\\subsection{Detection Branch}\n\t\\label{methodology_det}\n\t\n\tThe first stage is the LP detection step. As noted above, there are some obstacles for LP detection networks to reach a balance between accuracy and efficiency. Considering the trade-off, we utilize our previous STELA~\\cite{stela}, a totally real-time detector, as the basis of our detection branch. The detection network is implemented on RetinaNet~\\cite{retinanet} and utilizes Feature Pyramid Network (FPN)~\\cite{fpn} to construct a rich, multi-scale feature pyramid from a single resolution input image. It consists of three portions: anchor classification, rotated bounding box regression, and anchor refining.\n\t\n\t\\subsubsection{Anchor Classification}\n\t\\label{methodology_det_cls}\n\t\n\tAs we do not generate region proposals, class imbalance problems still exist in our scheme. 
That means there are only a few anchors that are annotated as positive (the object), while the others are negative. Therefore, Focal Loss~\\cite{retinanet} are utilized to calculate the loss of classification, as it is designed to deal with the problem. Firstly, we define $p_t$ with\n\t\\begin{equation}\n\t\tp_t=\n\t\t\\begin{cases}\n\t\t\tp& if~y=1\\\\\n\t\t\t1-p& otherwise\n\t\t\\end{cases}\n\t\\end{equation}\n\twhere $p\\in [0,1]$ is the predicted probability and $y=1$ specifies the ground-truth class. Then the loss is defined as \n\t\\begin{equation}\n\t\tL_{cls} = FL(p_t)=-\\alpha_{t}(1-p_{t})^{\\gamma}log(p_t)\n\t\\end{equation}\n\t$\\alpha_{t}$ is a balanced weighting factor and $\\gamma$ is a focusing parameter. They are set to 0.25 and 2.0 respectively, which is the same as the original Focal Loss~\\cite{retinanet}.\n\t\n\t\\begin{figure}[b]\n\t\t\\centering\n\t\t\\includegraphics[width=2.3in]{Fig6.eps}%\n\t\t\\caption{Illustration of the parameters. The red dotted box represents $b$ which is a box and the green solid box represents $g$ which is the groundtruth.}\n\t\t\\label{regression}\n\t\\end{figure}\n\t\n\t\\subsubsection{Rotated Bounding Box Regression}\n\t\\label{methodology_det_reg}\n\t\n\tFor the detection of tilted LP, we utilize rotated bounding boxes to match the instances. The box can be represented by a five tuple $(x,y,w,h,\\theta)$, in which $x$ and $y$ are the coordinate of the center point, $w$ and $h$ are the width and height of the box, and $\\theta$ is the angle to horizontal, as shown in Figure~\\ref{regression}. For the regression operation, the distance vector $\\Delta=(\\delta_x,\\delta_y,\\delta_w,\\delta_h,\\delta_{\\theta})$ is defined as\n\t\\begin{equation}\n\t\t\\delta_x = (g_x -b_x )\/b_w , ~\\delta_y = (g_y -b_y )\/b_h\n\t\\end{equation}\n\t\\begin{equation}\n\t\t\\delta_w = log(g_w \/b_w ), ~\\delta_h = log(g_h \/ b_h)\n\t\\end{equation}\n\t\\begin{equation}\n\t\t\\delta_{\\theta}=tan(g_{\\theta})-tan(b_{\\theta})\n\t\\end{equation}\n\twhere $b$ and $g$ represent a bounding box and the corresponding target groundtruth respectively. The loss of the regression can be calculated by\n\t\\begin{equation}\n\t\tL_{loc} = smooth_{L_1}(\\Delta_{t}-\\Delta_{p})\n\t\\end{equation}\n\twhere $smooth_{L_1}$ is the smooth L1 loss~\\cite{fastrcnn}, $\\Delta_t$ is the target and $\\Delta_p$ is the predicted tuple.\n\t\n\t\\subsubsection{Learned Anchor}\n\t\\label{methodology_det_ref}\n\t\n\tAs depicted in~\\cite{stela}, the most important part of the proposed scheme in two-stage is that the selected proposals are chosen by learning. The manually-defined original anchors with fixed scales and aspect ratios may not be the optimal designs, so an extra regression branch for anchor refining is added. The final classification and regression task will be reached on the learned anchors, which brings an improvement in accuracy with a little increment of computation. The original anchor, learned anchor, and output boxes are illustrated in Figure~\\ref{anchors}. It is obvious that the center should be well aligned with the pixel in feature maps. Thus, the offsets are only regressed within $\\Delta'=(\\delta'_{w},\\delta'_{h},\\delta'_{\\theta})$, which means that only the shapes are adjusted. The loss can be calculated with\n\t\\begin{equation}\n\t\tL_{ref} = smooth_{L_1}(\\Delta'_{t}-\\Delta'_{p})\n\t\\end{equation}\n\tin which $\\Delta'_t$ is the target and $\\Delta'_p$ is the predicted tuple. 
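To make these terms concrete, the following minimal Python/NumPy sketch implements the focal classification loss, the rotated-box offset encoding, and the usual smooth-$L_1$ penalty. It is illustrative code written for this description rather than the released implementation, and it assumes the standard focal-loss convention that $\alpha_t$ equals $\alpha$ for positive anchors and $1-\alpha$ otherwise.
\begin{verbatim}
import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0, eps=1e-7):
    """Per-anchor focal loss; p = predicted probability, y = 1 (positive) or 0."""
    p = np.clip(p, eps, 1.0 - eps)
    p_t = np.where(y == 1, p, 1.0 - p)
    alpha_t = np.where(y == 1, alpha, 1.0 - alpha)
    return -alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)

def encode_rotated_box(b, g):
    """Offsets (dx, dy, dw, dh, dtheta) of ground truth g relative to box b,
    both given as (x, y, w, h, theta)."""
    bx, by, bw, bh, bt = b
    gx, gy, gw, gh, gt = g
    return np.array([(gx - bx) / bw,
                     (gy - by) / bh,
                     np.log(gw / bw),
                     np.log(gh / bh),
                     np.tan(gt) - np.tan(bt)])

def smooth_l1(x):
    """Element-wise smooth L1 penalty, applied to target minus predicted offsets."""
    ax = np.abs(x)
    return np.where(ax < 1.0, 0.5 * x ** 2, ax - 0.5)
\end{verbatim}
$L_{loc}$ and $L_{ref}$ are then obtained by applying the smooth-$L_1$ penalty to the difference between the target and predicted offset tuples.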
And finally, the total loss is\n\t\\begin{equation}\n\t\tL_{det} = \\lambda_{ref}L_{ref}+\\lambda_{loc}L_{loc}+\\lambda_{cls}L_{cls}\n\t\\end{equation}\n\tin which $\\lambda_{ref}, \\lambda_{loc}$ and $\\lambda_{cls}$ are the weights which are set to 0.5, 0.5, and 1 which have been proven to be effective.\n\t\n\t\\begin{figure}[t]\n\t\t\\centering\n\t\t\\includegraphics[width=3.5in]{Fig7.eps}%\n\t\t\\caption{The illustration of anchors and boxes. The red, yellow, and green boxes represent the original anchor, the learned anchor, and the final output boxes respectively. The original anchor (red) and learned anchor (yellow) have the same center point.}\n\t\t\\label{anchors}\n\t\\end{figure}\n\t\n\t\\subsection{Recognition Branch}\n\t\\label{methodology_rec}\n\t\n\tThe recognition step is the second stage of the LP processing tasks. Because most of the networks for recognition achieved high efficiency, we simply utilize modules based on CRNN~\\cite{crnn} to achieve the recognition. There are three parts in the modules: the convolutional layers, the recurrent layers, and the transcription layer.\n\t\n\tIn consideration that the backbone of our network has already extracted critical features of the input images, to avoid redundant computation, we utilize the processed feature maps as the input of the recognition branch. In order to effectively deal with rotated plates, RRoIAlign~\\cite{fots} that can crop the feature map with a rotated box, is applied to crop the maps according to the groundtruth boxes of LPs, and the cropped size is $8\\times 25$. While training, we consider that the feature maps may not be processed well when the predicted boxes are not accurate, so the feature maps will also be cropped with both groundtruth boxes and predicted boxes with a score higher than 0.9.\n\t\n\tAnd because the input is feature maps, we remove the first three convolutions of the original convolutional layers of CRNN~\\cite{crnn} to avoid the over-fitting problems. And in order to reach better accuracy, we refer to our another previous work~\\cite{drnn}, and replace two convolution layers with deformable convolution layers and add four residual blocks in this branch. The deformable convolution layers have an adaptive receptive field that can better cover the text area, as shown in Figure~\\ref{deformable}.\n\t\n\t\\begin{figure}[h]\n\t\t\\centering\n\t\t\\includegraphics[width=2.7in]{Fig8.eps}\n\t\t\\caption{Indication of fixed receptive fields in standard convolution (the first row) and adaptive receptive fields in deformable convolution (the second row). In each image triplet, the left shows the sampling locations of two levels of $3\\times3$ filters on the preceding feature map, the middle shows the sampling locations of a $3\\times3$ filter, and the right shows two activation units. Two sets of locations are highlighted according to the activation units.}\n\t\t\\label{deformable}\n\t\\end{figure}\n\t\n\tThe architecture of the convolutional layers is shown in Figure~\\ref{pipeline}. The recurrent layers are based on bidirectional LSTM~\\cite{lstm}, and the transcription component is based on CTCLoss~\\cite{ctc}. The total loss function is a weighted loss\n\t\n\t\\begin{equation}\n\t\tL=\\lambda_{det} L_{det} +\\lambda_{rec} L_{rec}\n\t\\end{equation}\n\t\n\twhere $\\lambda_{det}$ and $\\lambda_{rec}$ are constants that indicate the strength of the detection and recognition modules. 
And in our training, the value of $L_{rec}$ is one-magnitude-order larger than $L_{det}$, so to keep a balance, we set them to 1 and 0.1, respectively.\n\t\n\t\n\t\n\t\\section{Experiments}\n\t\\label{experiments}\n\t\\subsection{Training Details}\n\t\\label{experiments_details}\n\t\n\tThe training and testing datasets are from CCPD~\\cite{ccpd}, EasyPR~\\cite{easypr}, and our CRPD. The input images are resized to $640\\times 640$ with three channels. In the training stage, the optimizer of the network is Adam [18], the batch size is set to 32, and the learning rate is 1e-4. The network is trained for 35000 iterations which consume about 10 hours. The proposed method is implemented by PyTorch~\\cite{pytorch}. The experiments are carried on a platform with Intel Xeon(R) E5-2630 v3 CPU and a single NVIDIA TITAN RTX GPU.\n\t\n\t\\subsection{Evaluation Metrics}\n\t\\label{experiments_metrics}\n\t\n\tTo demonstrate the effectiveness of the methods, we utilize the protocols described in~\\cite{ccpd} to evaluate the models. The detection will be considered as a match if it overlaps a ground truth bounding box by more than 60\\% and the words match exactly. Then Recall, Precision, and F-score are calculated with\n\t\\begin{equation}\n\t\tRecall=\\frac{TP}{TP+FN}\n\t\\end{equation}\n\t\\begin{equation}\n\t\tPrecision=\\frac{TP}{TP+FP}\n\t\\end{equation}\n\t\\begin{equation}\n\t\tF-score=\\frac{2\\times Precision \\times Recall}{Precision+Recall}\n\t\\end{equation}\n\twhere $TP$ represents the number of positive objects that are predicted as positive and the words match, $FP$ represents the number of negative objects that are predicted as positive, and $FN$ represents the number of negative objects that are predicted as negative. \n\t\n\t\\subsection{Ablations}\n\t\\label{experiments_ablations}\n\t\n\tTo demonstrate that the unified architecture brings some improvements and our scheme reaches the best performance, we evaluate the proposed components. Firstly, we test the effectiveness of our unified network. As comparisons, cascaded STELA~\\cite{stela} and CRNN~\\cite{crnn} models are utilized. The two networks are trained respectively, and the CRNN~\\cite{crnn} model will process the original images cropped in the light of the boxes predicted by STELA~\\cite{stela}. All the models are trained on CRPD and evaluated on CCPD~\\cite{ccpd}, EasyPR~\\cite{easypr}, and CRPD. From Table~\\ref{unified_expr}, we see that our network achieves a few improvements. A better result is reached on CRPD because the training and testing data have the same distribution. 
The LPs in CCPD~\\cite{ccpd} have a larger size, so the model trained on our CRPD, which contains LPs with small sizes, is not able to reach the best performance.\n\t\n\t\\begin{table}[h]\n\t\t\\centering\n\t\t\\caption{Ablations of the unified scheme on different datasets.}\n\t\t\\label{unified_expr}\n\t\t\\begin{threeparttable}\n\t\t\\begin{tabular}{p{1.3cm}<\\centering p{0.55cm}<\\centering p{0.55cm}<\\centering p{0.55cm}<\\centering c p{0.55cm}<\\centering p{0.55cm}<\\centering p{0.55cm}<\\centering}\n\t\t\t\\toprule[0.75pt]\n\t\t\t\\multirow{2}{*}{Dataset} & \\multicolumn{3}{c}{STELA+CRNN} & & \\multicolumn{3}{c}{Our Method} \\\\\n\t\t\t\\cmidrule[0.5pt]{2-4}\\cmidrule[0.5pt]{6-8}\n\t\t\t& R & P & F & & R & P & F \\\\\n\t\t\t\\midrule[0.5pt]\n\t\t\tCCPD & 79.1 & 67.8 & 73.0 & & 75.6 & 72.1 & 73.8 \\\\\n\t\t\tEasyPR & \\textbf{90.2} & 72.8 & 80.6 & & 89.9 & 73.0 & 80.6 \\\\\n\t\t\tCRPD & 88.3 & \\textbf{82.9} & \\textbf{85.5} & & \\textbf{95.4} & \\textbf{84.1} & \\textbf{89.4} \\\\\n\t\t\t\\bottomrule[0.75pt]\n\t\t\\end{tabular}\n\t\t\\begin{tablenotes}\n\t\t\t\\item R: Recall; P: Precision; F: F-score\n\t\t\\end{tablenotes}\n\t\t\\end{threeparttable}\n\t\\end{table}\n\t\n\tThen as we utilize the predicted boxes of the detection modules to crop the feature maps in the training stage, we evaluate the effectiveness of them with different scores. In Table~\\ref{ablations_predbox}, it is obvious that utilizing predicted boxes will bring an improvement on the Recall because more incomplete LPs will be detected, with a little descend on Precision. Considering the recognition branch will also become more robust with this mechanism, it will make the network able to better deal with inaccurate boxes. But because Chinese LPs usually have seven characters, a box with a score lower than 0.86 may cut a whole character off. Thus, there must be a deterioration of the performance when using boxes with scores only higher than 0.85. And the number of boxes with scores higher than 0.95 is less, which causes a smaller batch size for recognition modules, so we choose 0.9 as the threshold finally.\n\t\n\t\\begin{table}[h]\n\t\n\t\t\\caption{Ablations on CRPD-all between different thresholds of scores of the predicted boxes utilized to train the model.}\n\t\t\\label{ablations_predbox}\n\t\t\\centering\n\t\t\\begin{tabular}{cccc}\n\t\t\t\\toprule[0.75pt]\n\t\t\tThreshold & Recall & Precision & F-score \\\\\n\t\t\t\\midrule[0.5pt] \n\t\t\tNone & 91.2 & \\textbf{85.4} & 88.2 \\\\\n\t\t\t\\textgreater0.95 & 91.7 & 84.6 & 88.0 \\\\\n\t\t\t\\textgreater0.90 & \\textbf{95.4} & 84.1 & \\textbf{89.4} \\\\\n\t\t\t\\textgreater0.85 & 94.5 & 82.5 & 88.1 \\\\\n\t\t\t\\bottomrule[0.75pt]\n\t\t\\end{tabular}\n\t\\end{table}\n\t\n\tThe way to crop the feature maps may also bring different results of the recognition modules. We compare RoIPool~\\cite{fastrcnn}, RoIAlign~\\cite{maskrcnn}, and RRoIAlign~\\cite{fots}, and the results are shown in Table~\\ref{ablations_crop}. There is no obvious difference between the first two methods, and we think the reason is that LPs are not small objects, so some slight errors will not influence much. 
RRoIAlign~\\cite{fots} can better handle the work because the recognition branch will have great progress with consideration of the rotation degrees.\n\t\n\t\\begin{table}[t]\n\t\n\t\t\\caption{Ablations on CRPD-all between different region feature extracting approaches.}\n\t\t\\label{ablations_crop}\n\t\t\\centering\n\t\t\\begin{tabular}{cccc}\n\t\t\t\\toprule[0.75pt]\n\t\t\tApproach & Recall & Precision & F-score \\\\\n\t\t\t\\midrule[0.5pt] \n\t\t\tRoIPooling & 95.0 & 80.2 & 87.0 \\\\\n\t\t\tRoIAlign & \\textbf{95.7} & 80.1 & 87.2 \\\\\n\t\t\tRRoIAlign & 95.4 & \\textbf{84.1} & \\textbf{89.4} \\\\\n\t\t\t\\bottomrule[0.75pt]\n\t\t\\end{tabular}\n\t\\end{table}\n\t\n\tIn the recognition branch, we crop the feature maps yield by different layers of the FPN~\\cite{fpn} components to explain why we utilize those from the third layer. Though it seems to utilize the feature maps from the third layer is optimal in Table~\\ref{ablations_fpnlayer}, but we consider it is because most of the LPs in the images of CRPD have sizes which match the anchors in the third layer. In the fifth or deeper layers, the anchors are mappings of some huge boxes in the original images, but most of the LPs have a small size.\n\t\n\t\\begin{table}[h]\n\t\n\t\t\\caption{Ablations on CRPD-all with feature maps from different FPN layers to train the model.}\n\t\t\\label{ablations_fpnlayer}\n\t\t\\centering\n\t\t\\begin{tabular}{cccc}\n\t\t\t\\toprule[0.75pt]\n\t\t\tFeature Map Layer & Recall & Precision & F-score \\\\\n\t\t\t\\midrule[0.5pt] \n\t\t\tP3 & \\textbf{95.4} & \\textbf{84.1} & \\textbf{89.4} \\\\\n\t\t\tP4 & 82.4 & 70.1 & 75.8 \\\\\n\t\t\tP5 & 2.6 & 2.9 & 2.7 \\\\\n\t\t\t\\bottomrule[0.75pt]\n\t\t\\end{tabular}\n\t\\end{table}\n\t\n\t\\begin{table}[h]\n\t\n\t\t\\caption{Ablations on CRPD-all with feature maps from different FPN layers to train the model.}\n\t\t\\label{ablations_deformable}\n\t\t\\centering\n\t\t\\begin{tabular}{cccc}\n\t\t\t\\toprule[0.75pt]\n\t\t\tDeformable Conv & Recall & Precision & F-score \\\\\n\t\t\t\\midrule[0.5pt]\n\t\t\t\\ding{55} & 93.0 & 83.7 & 88.1 \\\\\n\t\t\t\\ding{52} & \\textbf{95.4} & \\textbf{84.1} & \\textbf{89.4} \\\\\n\t\t\t\\bottomrule[0.75pt]\n\t\t\\end{tabular}\n\t\\end{table}\n\t\n\tFinally, to demonstrate that the deformable layers and residual blocks in the recognition branch are effective, ablations are involved, as shown in Table~\\ref{ablations_deformable}. In our metrics, only when the content matches exactly, the result will be regarded as correct. Therefore, deformable convolution brings an improvement in recognition, and the Recall and Precision are both improved.\n\t\n\t\\subsection{Comparisons}\n\t\\label{experiments_comp}\n\t\n\t\\begin{table*}[h]\n\t\n\t\t\\scriptsize\n\t\t\\caption{Comparisons on CRPD between our method and other methods. 
The methods that are based on deep-learning are trained on CRPD.}\n\t\t\\label{table_crpd_comp}\n\t\t\\centering\n\t\t\\begin{threeparttable}\n\t\t\\begin{tabular}{m{2.9cm}<{\\centering}m{0.5cm}<{\\centering}m{0.8cm}<{\\centering}m{0.9cm}<{\\centering}m{0.3cm}<{\\centering}cm{0.5cm}<{\\centering}m{0.8cm}<{\\centering}m{0.9cm}<{\\centering}m{0.3cm}<{\\centering}}\n\t\t\t\\toprule[0.75pt]\n\t\t\t\\multirow{2}{*}{Method}& \\multicolumn{4}{c}{CRPD-all} & & \\multicolumn{4}{c}{CRPD-single} \\\\\n\t\t\t\\cmidrule[0.5pt]{2-5}\\cmidrule[0.5pt]{7-10}\n\t\t\t& Recall& Precision& F-score& FPS& & Recall& Precision& F-score& FPS\\\\\n\t\t\t\\midrule[0.5pt]\n\t\t\tEasyPR& 2.0& 1.3& 1.6& 6& & 2.3& 1.3& 1.7& 6\\\\\n\t\t\tSSD512 + CRNN& \\textbf{97.8}& 27.2& 42.6& \\textbf{66}& & \\textbf{98.9}& 28.7& 44.4& \\textbf{71}\\\\\n\t\t\t YOLOv3 + CRNN& 73.0& 61.0& 66.5& 18& & 73.7& 59.4& 65.8& 18\\\\\n\t\t\t YOLOv4 + CRNN& 84.4& 60.5& 70.5& 40& & 87.3& 68.4& 76.7& 40\\\\\n\t\t\t SYOLOv4 + CRNN& 86.8& 71.0& 78.2& 35& & 90.1& 72.4& 80.3& 35\\\\\n\t\t\tFaster-RCNN + CRNN& 79.9& 73.7& 76.7& 19& & 81.4& 71.7& 76.3& 20\\\\\n\t\t\t STELA + CRNN& 88.3& 82.9& 85.5& 36& & 83.1& 73.3& 77.9& 36\\\\\n\t\t\tOurs& 95.4& \\textbf{84.1}& \\textbf{89.4}& 30& & 96.3& \\textbf{83.6}& \\textbf{89.5}& 35\\\\\n\t\t\t\\hline\n\t\t\t\\hline\n\t\t\t\\multirow{2}{*}{Method} & \\multicolumn{4}{c}{CRPD-double} & & \\multicolumn{4}{c}{CRPD-multi} \\\\\n\t\t\t\\cmidrule[0.5pt]{2-5}\\cmidrule[0.5pt]{7-10}\n\t\t\t& Recall& Precision& F-score& FPS& & Recall& Precision& F-score& FPS\\\\\n\t\t\t\\hline\n\t\t\tEasyPR& 1.5& 1.2& 1.3& 6& & 1.8& 1.7& 1.8& 6\\\\\n\t\t\tSSD512 + CRNN& \\textbf{97.5}& 27.5& 42.9& \\textbf{66}& & \\textbf{93.5}& 21.1& 34.5& \\textbf{63}\\\\\n\t\t\t YOLOv3 + CRNN& 74.6& 64.4& 69.1& 17& & 66.2& 61.3& 63.6& 17\\\\\n\t\t\t YOLOv4 + CRNN& 90.4& 41.2& 56.6& 40& & 88.9& 36.8& 52.0& 39\\\\\n\t\t\t SYOLOv4 + CRNN& 91.5& 75.9& 83.0& 35& & 91.5& 75.2& 82.5& 35\\\\\n\t\t\tFaster-RCNN + CRNN& 81.1& 77.9& 79.5& 19& & 69.3& 75.2& 72.1& 17\\\\\n\t\t\t STELA + CRNN& 84.0& 80.6& 82.3& 34& & 77.6& 82.8& 80.1& 33\\\\\n\t\t\tOurs& 95.8& \\textbf{84.5}& \\textbf{89.8}& 30& & 90.8& \\textbf{85.0}& \\textbf{87.7}& 26\\\\\n\t\t\t\\bottomrule[0.75pt]\n\t\t\\end{tabular}\n\t\t\\begin{tablenotes}\n\t\t\t\\item SYOLOv4: Scaled YOLOv4\n\t\t\\end{tablenotes}\n\t\t\\end{threeparttable}\n\t\\end{table*}\n\t\n\tTo demonstrate that our method has reached a satisfactory performance, we make some comparisons with other methods. We involve EasyPR~\\cite{easypr}, SSD~\\cite{ssd}, YOLO-v3~\\cite{yolov3}, YOLO-v4~\\cite{yolov4}, Scaled YOLO-v4~\\cite{scaledyolov4} and Faster-RCNN~\\cite{fasterrcnn}. A CRNN~\\cite{crnn} model is appended to achieve recognition. We utilize cascaded STELA~\\cite{stela} and CRNN~\\cite{crnn} as our baseline in experiments. From Table~\\ref{table_crpd_comp}, it can be observed that EasyPR~\\cite{easypr} cannot treat LPs in CRPD well because it depends on manually designed features. SSD~\\cite{ssd}, YOLOv3~\\cite{yolov3}, YOLO-v4~\\cite{yolov4}, Scaled YOLO-v4~\\cite{scaledyolov4} and Faster-RCNN~\\cite{fasterrcnn} are not specially designed for LPs or text, but the results are still competitive. Our method reaches the best on all the sub-datasets, which proves the effectiveness.\n\t\n\tBecause Xu~\\textsl{et al.}~\\cite{ccpd} evaluated their method on different circumstances, for fair comparisons, experiments of our method on these datasets are also utilized. 
And as the other methods are trained on CCPD~\\cite{ccpd}, we also utilize it as the training data of our method in this comparison. Because images in CCPD~\\cite{ccpd} only contain one LP in one image, only Precision is involved when the Recall is not considered. Following the experiments of Xu~\\textsl{et al.}~\\cite{ccpd}, Cascade classifier~\\cite{wang2007cascade}, SSD300~\\cite{ssd}, YOLO9000~\\cite{yolov2}, Faster-RCNN~\\cite{fasterrcnn} are involved as the detector with Holistic-CNN~\\cite{reid} as the recognition model. And end-to-end methods TE2E~\\cite{te2e} and RPNet~\\cite{ccpd} are utilized for comparisons. We also involve YOLO-v4~\\cite{yolov4} and Scaled YOLO-v4~\\cite{scaledyolov4} with CRNN~\\cite{crnn} to report more results. Our baseline which consists of STELA~\\cite{stela} and CRNN~\\cite{crnn} are also involved. From Table~\\ref{table_ccpd_comp}, it can be observed that our method has reached the best in most circumstances, and the efficiency is also competitive.\n\t\n\t\\begin{table*}[h]\n\t\n\t\t\\caption{Comparisons of Precision on CCPD between our method and others.}\n\t\t\\scriptsize\n\t\t\\label{table_ccpd_comp}\n\t\t\\centering\n\t\t\\begin{threeparttable}\n\t\t\\begin{tabular}{m{2.95cm}<{\\centering}m{0.3cm}<{\\centering}m{0.3cm}<{\\centering}m{0.4cm}<{\\centering}m{0.3cm}<{\\centering}m{0.4cm}<{\\centering}m{0.6cm}<{\\centering}m{0.4cm}<{\\centering}m{0.8cm}<{\\centering}m{0.9cm}<{\\centering}m{0.3cm}<{\\centering}} \n\t\t\t\\toprule[0.75pt]\n\t\t\tMethod \n\t\t\t& Size& AP& Base& DB& FN& Rotate& Tilt& Weather& Challenge& FPS\\\\\n\t\t\t\\midrule[0.5pt]\n\t\t\tCascade classifier + HC\n\t\t\t& 480& 58.9& 69.7& 67.2& 69.7& 0.1& 3.1& 52.3& 30.9& 29\\\\\n\t\t\tSSD300 + HC\n\t\t\t& 300& 95.2& 98.3& 96.6& 95.9& 88.4& 91.5& 87.3& 83.8& 35\\\\\n\t\t\tYOLO9000 + HC\n\t\t\t& 480& 93.7& 98.1& 96.0& 88.2& 84.5& 88.5& 87.0& 80.5& 36\\\\\n\t\t\tYOLOv4 + CRNN\n\t\t\t& 512& 94.7& 97.8& 94.6& 87.3& 82.9& 89.9& 83.3& 75.7& 40\\\\\n\t\t\tSYOLOv4 + CRNN\n\t\t\t& 640& 95.3& 97.8& 95.0& 88.9& 84.9& 91.5& 90.4& 77.1& 34\\\\\n\t\t\tFaster-RCNN + HC\n\t\t\t& 600& 92.8& 97.2& 94.4& 90.9& 82.9& 87.3& 85.5& 76.3& 13\\\\\n\t\t\tTE2E\n\t\t\t& 600& 94.4& 97.8& 94.8& 94.5& 87.9& 92.1& 86.8& 81.2& 3\\\\\n\t\t\tRPNet\n\t\t\t& 480& 95.5& \\textbf{98.5}& 96.9& 94.3& 90.8& 92.5& 87.9& 85.1& \\textbf{61}\\\\\n\t\t\tSTELA + CRNN\n\t\t\t& 640& 97.8& 97.9& \\textbf{98.3}& 94.5& 90.1& 91.3& 89.5& 83.6& 41\\\\\n\t\t\tOurs\n\t\t\t& 640& \\textbf{97.9}& 98.3& 98.0& \\textbf{97.2}& \\textbf{92.5}& \\textbf{93.7}& \\textbf{90.7}& \\textbf{87.9}& 30\\\\\n\t\t\t\\bottomrule[0.75pt]\n\t\t\\end{tabular}\n\t\t\\begin{tablenotes}\n\t\t\t\\item SYOLOv4: Scaled YOLOv4; HC: Holistic-CNN; AP: average percent of all the circumstances\n\t\t\\end{tablenotes}\n\t\t\\end{threeparttable}\n\t\\end{table*}\n\t\n\tFinally, some detection and recognition results on CRPD are shown in Figure~\\ref{results} and~\\ref{results_wrong}. In the illustrations, we see that most LPs that are recognized incorrectly are also hard to distinguish by humans. Our proposed method is still effective in most cases, and the performance is competitive.\n\t\n\t\\begin{figure*}[t]\n\t\t\\centering\n\t\t\\includegraphics[width=4.5in]{Fig9.eps}%\n\t\t\\caption{Spotting results of our method on CRPD. 
The green rectangles are the predicted rotated bounding boxes and the text on the top left corner of the zooming rectangle is the predicted text.}\n\t\t\\label{results}\n\t\\end{figure*}\n\t\n\t\\begin{figure*}[h]\n\t\t\\centering\n\t\t\\includegraphics[width=4.5in]{Fig10.eps}%\n\t\t\\caption{Examples of failed recognition on CRPD. The green rectangles are the predicted rotated bounding boxes and the text on the top left corner of the zooming rectangle is the description of the LP, and the red characters are the ones that are recognized incorrectly. For simplicity, only the failed examples are zoomed in.}\n\t\t\\label{results_wrong}\n\t\\end{figure*}\n\t\n\t\\section{Discussion and Conclusion}\n\t\\label{conclusion}\n\t\n\tIn this paper, we present a dataset with Chinese LP images, which is named CRPD. As a supplement to multi-LP datasets, CRPD includes three sub-datasets, CRPD-single, CRPD-double, and CRPD-multi, which are able to deal with a variety of application scenarios. And CRPD covers many kinds of vehicles and a number of environments that will be helpful to build a robust model. We also propose an end-to-end trainable network to detect and recognize LPs with high efficiency as the baseline of the dataset. The experiments demonstrate the effectiveness of our proposed components, and the performance of the network is satisfactory. In the future, we hope CRPD will become a new benchmark on multi-LP detection and recognition tasks. We also consider utilizing the network for end-to-end scene text spotting and integrating more advanced techniques to achieve better portability and adaptation capability.\n\t\n\t\\section*{Acknowledgments}\n\tThis work was partly supported by the National\u00a0Key\u00a0Research\u00a0and\u00a0Development\u00a0Program\u00a0of\u00a0China\u00a0with\u00a0ID\u00a02018AAA0103203.\n\t\n\t\n\t","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe lunar Cherenkov technique \\cite{Dagkesamanskii1989} provides a\npromising method of UHE neutrino detection since it utilises the\nlunar regolith as a detector; which has a far greater volume than\ncurrent ground-based detectors. This technique makes use of\nEarth-based radio telescopes to detect the coherent Cherenkov\nradiation emitted when a UHE neutrino interacts in the outer\nlayers of the Moon. It was first applied by Hankins, Ekers and\nO'Sullivan using the 64-m Parkes radio telescope\n\\cite{Hankins1996Asearch} and significant limits have been already\nbeen placed on the UHE neutrino flux by several collaborations\n\\citep{Hankins1996Asearch, Gorham2004Experimental, Beresnyak2005,\nJames2010MNRAS, NuMoon2010}.\n\nElectromagnetic pulses originating in the lunar surface will be\ndispersed when they arrive at Earth-based receivers due to\npropagation through the ionosphere which introduces a\nfrequency-dependent time delay. This dispersion reduces the peak\namplitude of a pulse, however, dedispersion techniques can be used\nto recover the full pulse amplitude and consequently increase the\nchances of detection. Accurate dedispersion requires an\nunderstanding of the ionospheric dispersion characteristic and\nit's effect on radio-wave propagation.\n\n\\section{Effects of Ionospheric Dispersion}\n\nThe ionosphere is a weakly ionized plasma which is formed by\nultraviolet ionizing radiation from the sun. 
Due to its\nrelationship with the sun, the ionosphere's electron density\nexperiences a strong diurnal cycle and is also dependent on the\nseason of the year, the current phase of the 11-year solar cycle\nand the geometric latitude of observation. The differential\nadditive delay, caused by pulse dispersion, is parameterised by\nthe ionospheric TEC (see Equation \\ref{eq:Derived3})\n\n\\begin{equation} \\Delta t= 0.00134 \\times\nSTEC \\times (\\nu_{\\text{lo}}^{-2}-\\nu_{\\text{hi}}^{-2}),\n\\label{eq:Derived3}\n\\end{equation}\nwhere $\\Delta t$ is the duration of the dispersed pulse in\nseconds, $STEC$ is the Slant Total Electron Content in electrons\nper cm$^{2}$ and $\\nu_{\\text{lo}}$ and $\\nu_{\\text{hi}}$ are the\nreceiver bandwidth band edges in Hz.\n\nCherenkov emission produces a sub-nanosecond pulse and therefore\ndetection requires gigahertz bandwidths to achieve the high time\nresolution needed to optimse the signal to noise. Due to\nexcessive data storage requirements, the only way to exploit such\nhigh data rates is to implement real-time dedispersion and\ndetection algorithms and to store potential events at full\nbandwidth for later processing. This requires an accurate\nknowledge of the real-time ionospheric TEC.\n\nIonospheric dispersion also offers some potential experimental\nadvantages, particularly for single dish experiments which can not\nuse array timing information to discriminate against RFI. A lunar\npulse will travel a one-way path through the ionosphere and be\ndispersed according to the current ionospheric TEC. Conversely,\nterrestrial RFI will not be dispersed at all and any Moon-bounce\nRFI will travel a two-way path through the ionosphere and be\ndispersed according to twice the current ionospheric TEC.\nTherefore performing real-time ionospheric dedispersion will\noptimise detection for lunar pulses and provide discrimination\nagainst RFI. Dispersion may also be seen to offer an increase in\ndynamic range. If triggering is performed on a dedispersed data\nstream while the raw data is buffered, any pulse clipping that\noccurs in the triggering hardware can be recovered during offline\nprocessing by reconstructing the pulse from the raw, undispersed\ndata.\n\n\\section{Dedispersion Hardware}\n\nPulse dispersion can be corrected using matched filtering\ntechniques implemented in either analog or digital technology.\nEarly LUNASKA experiments made use of the Australia Telescope\nCompact Array (ATCA) which consists of six 22-m dish antennas.\nThree antennas were fitted with custom-designed hardware for the\nneutrino detection experiments and pulse dedispersion was achieved\nthrough the use of innovative new analog dedispersion filters that\nemploy a method of planar microwave filter design based on inverse\nscattering \\cite{Roberts1995}.\n\nAs the microwave dedispersion filters have a fixed dedispersion\ncharacteristic, an estimate had to be obtained for the TEC which\nwould minimise errors introduced by temporal ionospheric\nfluctuations. The ATCA detection experiments were performed in\n2007 and 2008 during solar minimum and therefore relatively stable\nionospheric conditions. Initial observations were during the\nnights of May 5, 6 and 7, 2007 and these dates were chosen to\nensure that the Moon was at high elevation (particularly during\nthe night-time hours of ionospheric stability) and positioned such\nthat the ATCA would be sensitive to UHE particles from the\ngalactic center. 
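To get a feel for the delays involved, Equation~\ref{eq:Derived3} can be evaluated directly. The short Python sketch below assumes a slant TEC of $10^{13}$ electrons per cm$^{2}$ (roughly 10 TECU, a representative night-time value rather than a measured one) and gives a differential delay of about $5{\,}$ns across the 1.2--1.8 GHz band used at the ATCA:
\begin{verbatim}
def differential_delay(stec_cm2, f_lo_hz, f_hi_hz):
    """Differential dispersion delay (s) across the band;
    STEC in electrons per cm^2, band edges in Hz."""
    return 0.00134 * stec_cm2 * (f_lo_hz ** -2 - f_hi_hz ** -2)

stec = 1.0e13   # assumed slant TEC, ~10 TECU
print(differential_delay(stec, 1.2e9, 1.8e9) * 1e9)   # ~5.2 ns
\end{verbatim}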
The ionosphere experiences both temporal and spatial fluctuations in TEC, and therefore some signal loss is expected with a fixed dedispersion filter. A promising digital solution to overcome these losses lies in the use of high-speed Field Programmable Gate Arrays (FPGAs). An FPGA implementation allows the dedispersion characteristic to be tuned in real time to reflect temporal changes in the ionospheric TEC. A fully coherent, or predetection, dedispersion method was pioneered by Hankins and Rickett \\cite{Hankins1971, Hankins1975}, which completely eliminates the effect of dispersive smearing. This is achieved by passing the predetected signal through an inverse ionosphere filter, which can be implemented either in the time domain, as an FIR filter, or in the frequency domain.\n\nIn 2009, the LUNASKA collaboration started a series of UHE neutrino detection experiments using the 64-m Parkes radio telescope. For these experiments, dedispersion was achieved via a suite of FIR filters implemented on a Virtex-4 FPGA. As GPS TEC estimates are currently not available in real time, near real-time TEC measurements were derived from foF2 ionosonde measurements. Ionosondes measure the critical frequency of the ionospheric F2 layer (foF2), the square of which is proportional to the peak electron density of the layer and can therefore be used to estimate the TEC. A comparison to GPS data available post-experiment revealed that the foF2-derived TEC data consistently underestimated the GPS TEC measurements. This is attributed to the ground-based ionosondes probing mainly the lower ionospheric layers and not properly measuring the TEC contribution from the plasmasphere \\cite{Titheridge1972determination}.\n\n\\section{Monitoring the Ionospheric TEC}\n\nCoherent pulse dedispersion requires an accurate knowledge of the ionospheric dispersion characteristic, which is parameterised by the instantaneous ionospheric TEC.\n\nTEC measurements can be derived from dual-frequency GPS signals and are available online from NASA's CDDIS \\cite{CDDIS}; however, these values are not available in real time. The CDDIS TEC data are sampled at two-hour intervals and are currently published after a delay of at least a few days. Estimates derived from foF2 ionosonde measurements are available hourly from the Australian Ionospheric Prediction Service \\cite{IPS}. However, as discussed, there are known inaccuracies in the derivation of the foF2-based TEC estimates.\n\nBoth of these products are published as vertical TEC (VTEC) maps, which must be converted to slant TEC (STEC) estimates to obtain the true total electron content along the slanted line of sight to the Moon. To perform this conversion, the ionosphere can be modelled using a Single Layer Model (SLM) \\cite{Todorova2008}, which assumes all free electrons are concentrated in an infinitesimally thin shell and removes the need for integration through the ionosphere. Slant and vertical TEC are related via\n\n\\begin{equation}\nSTEC=F(z)VTEC,
\\label{Eq:slant}\n\\end{equation}\nwhere $F(z)$ is a slant angle factor defined as\n\n\\begin{align}\nF(z)&=\\frac{1}{\\cos(z^{\\prime})}\\\\\n&=\\frac{1}{\\sqrt{1-\\left(\\frac{R_e}{R_e+H}\\sin(z)\\right)^2}},\n\\end{align}\nwhere $R_e$ is the radius of the Earth, $z$ is the zenith angle to the source, $z^{\\prime}$ is the corresponding zenith angle at the ionospheric pierce point and $H$ is the height of the idealised layer above the Earth's surface (see Figure \\ref{Iono_model}). The CDDIS also uses an SLM ionosphere for its GPS interpolation algorithms and assumes a mean ionospheric height of 450 km.\n\n\\begin{figure}[!tbph]\n\\begin{center}\n\\includegraphics [width=0.4\\textwidth,keepaspectratio]{fig1.eps}\n\\caption{Parameters of the Ionospheric Single Layer Model.}\\label{Iono_model}\n\\end{center}\n\\end{figure}\n\n\\section{A New Method of Ionospheric Calibration}\n\nAs the solar cycle enters a more active phase, accurate pulse dedispersion is becoming an increasingly important experimental concern for the lunar Cherenkov technique. This requires methods of obtaining more accurate measurements of the ionospheric TEC.\n\nA new technique has been formulated to obtain TEC measurements that are both instantaneous and line-of-sight to the Moon. The ionospheric TEC can be deduced from Faraday rotation measurements of a polarised source combined with geomagnetic field models, which are more stable than ionospheric models (the CDDIS \\cite{CDDIS} states that its ionospheric TEC values are accurate to $\\sim$20\\% while the IGRF \\cite{IGRF} magnetic field values are accurate to better than 0.01\\%). Lunar thermal emission can be used as the polarised source since Brewster angle effects produce a nett polarisation excess in the emission from the lunar limb \\cite{Heiles1963}. This provides a method for calibrating the ionosphere directly along the line of sight to the Moon and makes the lunar Cherenkov technique extremely attractive for UHE cosmic ray and neutrino astronomy, as it allows the characteristic dispersion to be used as a powerful discriminant against terrestrial RFI whilst removing the need to search in dispersion-space.\n\nThe unique constraints of a UHE neutrino detection experiment using the lunar Cherenkov technique conflict with traditional methods of planetary synthesis imaging and polarimetry, which require a complete set of spacings and enough observing time for earth rotation. Therefore, to apply this method of ionospheric calibration to the ATCA detection experiments, innovations in the analysis of lunar polarisation observations were required. In particular, a method of obtaining lunar Faraday rotation estimates in the visibility domain (i.e., without Fourier inversion to the image plane) had to be developed.\n\nWorking in the visibility domain removes both the imaging requirement of a compact array configuration, which would increase the amount of correlated lunar noise between receivers, and the need for earth rotation, allowing measurements to be obtained in real time. This technique makes use of the angular symmetry in the planetary polarisation distribution. The intrinsic thermal radiation of a planetary object appears increasingly polarised toward the limb when viewed from a distant point such as Earth \\cite{Heiles1963, SPORT2002}. The polarised emission is radially aligned and is due to the changing angle of the planetary surface toward the limb combined with Brewster angle effects.
The angular symmetry of this distribution can be exploited by an interferometer, so that an angular spatial filtering technique may be used to obtain real-time position angle measurements directly in the visibility domain. The measured position angles are uniquely related to the corresponding $uv$ angle at the time of the observation. Comparison with the expected radial position angles, given the current $uv$ angle of the observation, gives an estimate of the Faraday rotation induced on the Moon's polarised emission. Faraday rotation estimates can be combined with geomagnetic field models to determine the associated ionospheric TEC and subsequently provide a method of calibrating the current ionospheric effects on potential Cherenkov pulses.\n\nObservations of the Moon were taken using the 22-m telescopes of the Australia Telescope Compact Array with a center frequency of 1384 MHz. At this frequency the Moon is in the near field of the array; however, investigation of the Fresnel factor in polar coordinates showed that it has no dependence on the spatial parameter which determines the polarisation distribution of a planetary body.\n\nUsing the angular spatial filtering technique, position angle estimates were calculated directly in the visibility domain of the lunar observational data. Faraday rotation estimates were obtained by comparing these angles to the instantaneous $uv$ angle, and the resultant estimates were averaged over small time increments to smooth out noise-like fluctuations. Since the polarised lunar emission received on each baseline varied in intensity over time, there were nulls during which the obtained position angle information was not meaningful. A threshold was applied to remove position angle measurements taken during these periods of low polarised intensity, and baseline averaging was considered necessary as the results on each baseline differed slightly and each was affected differently by the intensity nulls. The Faraday rotation estimates were converted to estimates of ionospheric TEC via\n\n\\begin{equation}\n\\Omega = 2.36 \\times 10^4 \\nu^{-2}\\int_{\\text{path}}N(s)B(s)\\cos(\\theta)\\, ds,\n\\label{eq:Faraday_rotation}\n\\end{equation}\nwhere $\\Omega$ is the rotation angle in radians, $\\nu$ is the signal frequency in Hz, $N$ is the electron density in m$^{-3}$, $B$ is the geomagnetic field strength in T, $\\theta$ is the angle between the direction of propagation and the magnetic field vector and $ds$ is a path element in m.\n\n\\begin{figure}\n\\begin{center}\n\\includegraphics [width=0.5\\textwidth,keepaspectratio]{fig2.eps}\n\\caption{Lunar Faraday rotation estimates converted to (\\emph{left}) ionospheric TEC values and (\\emph{right}) the differential delay across 1.2--1.8 GHz.}\\label{TECdelay}\n\\end{center}\n\\end{figure}
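\n\nAs an illustrative sketch only (not the processing pipeline used for the experiment), the Python fragment below applies the single-layer-model slant factor of Equation \\ref{Eq:slant} to a vertical TEC value and inverts Equation \\ref{eq:Faraday_rotation} under a thin-screen approximation, in which the path integral is replaced by the TEC multiplied by an assumed effective value of $B\\cos(\\theta)$ at the shell height; the shell height, zenith angle and effective field value used here are example assumptions, not values from the observations.\n\n\\begin{verbatim}\nimport math\n\nR_E = 6371.0e3       # Earth radius in metres\nH_SLM = 450.0e3      # assumed single-layer shell height (CDDIS convention)\n\n# Slant factor F(z) from the single layer model above.\ndef slant_factor(zenith_deg):\n    s = (R_E / (R_E + H_SLM)) * math.sin(math.radians(zenith_deg))\n    return 1.0 / math.sqrt(1.0 - s * s)\n\n# Thin-screen inversion of the Faraday rotation equation above:\n# TEC [el/m^2] = Omega * nu**2 / (2.36e4 * B_eff), B_eff = assumed B*cos(theta).\ndef tec_from_rotation(omega_rad, nu_hz, b_eff_tesla=4.5e-5):\n    tec_m2 = omega_rad * nu_hz**2 / (2.36e4 * b_eff_tesla)\n    return tec_m2 / 1.0e16           # in TEC units\n\nstec = slant_factor(45.0) * 10.0                      # 10 TECU vertical -> ~13.3 TECU slant\ntec = tec_from_rotation(math.radians(3.0), 1.384e9)   # ~9.4 TECU for a 3 degree rotation\n\\end{verbatim}\n\nIn practice the effective geomagnetic field along the line of sight would be taken from a model such as the IGRF rather than assumed constant.\n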
To evaluate the effectiveness of this new ionospheric calibration technique, the TEC results were compared against ionospheric TEC estimates derived from dual-frequency GPS data (Figure \\ref{TECdelay}). Slant angle factors were used to convert the GPS VTEC estimates to STEC toward the Moon for comparison with the ATCA data. Both data sets exhibited a similar general trend of symmetry around the Moon's transit. However, the ATCA data often underestimated the GPS data, particularly around 14:30--17:00 UT, where the STEC estimates may have been influenced by bad data from the shorter baselines, or by the TEC contribution from the plasmasphere, where the geomagnetic field is comparatively weak and therefore contributes little to the measured Faraday rotation \\cite{Titheridge1972determination}. These observations were taken when the TEC was very low, and therefore the relative error in the TEC estimates is large.\n\n\\section{Conclusions}\n\nAs the sun enters a more active phase, accurate correction of ionospheric pulse dispersion is becoming an increasingly important experimental concern for UHE neutrino detection using the lunar Cherenkov technique. Hardware dedispersion options rely on the accuracy of real-time ionospheric TEC measurements and, while there are a few options available for obtaining these measurements, they are currently available neither in real time nor directly along the line of sight to the Moon. A new ionospheric calibration technique has been developed. This technique uses Faraday rotation measurements of the polarised thermal radio emission from the lunar limb, combined with geomagnetic field models, to obtain estimates of the ionospheric TEC which are both instantaneous and line-of-sight to the Moon. STEC estimates obtained using this technique have been compared to dual-frequency GPS data. Both data sets exhibited similar features which can be attributed to ionospheric events; however, more observations are required to investigate this technique further.\n\n\\section{Acknowledgements}\n\nThis research was supported as a Discovery Project by the Australian Research Council. The Compact Array and Parkes Observatory are part of the Australia Telescope which is funded by the Commonwealth of Australia for operation as a National Facility managed by CSIRO.\n\n\\biboptions{square,sort&compress}\n\\bibliographystyle{elsarticle-num}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}