diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzjfdg" "b/data_all_eng_slimpj/shuffled/split2/finalzzjfdg"
new file mode 100644
--- /dev/null
+++ "b/data_all_eng_slimpj/shuffled/split2/finalzzjfdg"
@@ -0,0 +1,5 @@
+{"text":"\\section{Introduction}\n\\label{sec:introduction}\n\n{\\let\\thefootnote\\relax\\footnotetext{Author contributions: Hao Zhu designed the research; Weize Chen prepared the data, and organized data annotation; Hao Zhu and Xu Han designed the experiments; Weize Chen performed the experiments; Hao Zhu, Weize Chen and Xu Han wrote the paper; Zhiyuan Liu and Maosong Sun proofread the paper. Zhiyuan Liu is the corresponding author.}}\n\nRelations\\footnote{Sometimes relations are also called properties.}, representing various types of connections between entities or arguments, are the core of expressing relational facts in most general knowledge bases (KBs)~\\cite{suchanek2007yago,bollacker2008freebase}. Hence, identifying relations is a crucial problem for several information extraction tasks. Although considerable effort has been devoted to these tasks, some nuances between similar relations are still overlooked (\\cref{tab:similarity_example} shows an example); on the other hand, some distinct surface forms carrying the same relational semantics are mistaken for different relations. These severe problems motivate us to quantify the similarity between relations in a more effective and robust way. \n\n\\begin{table}\n\\centering\n\\resizebox{0.9\\linewidth}{!}{\n\\begin{tabular}{p{0.35\\columnwidth} p{0.69\\columnwidth}}\n\\toprule\nSentence & The crisis didn't influence his two daughters OBJ and SUBJ. \\\\ \\midrule\nCorrect & per:siblings \\\\ \\midrule\nPredicted & per:parents \\\\ \\midrule\nSimilarity Rank & 2 \\\\\n\\bottomrule\n\\end{tabular}}\n\\caption{An illustration of the errors made by relation extraction models. The sentence contains obvious patterns indicating the two persons are siblings, but the model predicts it as parents. We introduce an approach to measure the similarity between relations. Our result shows ``siblings'' is the second most similar one to ``parents''. By applying this approach, we can analyze the errors made by models and help reduce them.}\n\\label{tab:similarity_example}\n\\end{table}\n\nIn this paper, we introduce an adaptive and general framework for measuring the similarity between pairs of relations. Suppose for each relation $r$, we have obtained a conditional distribution, $P(h, t \\mid r)$ ($h,t\\in \\mathcal{E}$ are head and tail entities, and $r\\in \\mathcal{R}$ is a relation), over all head-tail entity pairs given $r$. We could quantify the similarity between a pair of relations by the divergence between the conditional probability distributions given these relations. In this paper, this conditional probability is given by a simple feed-forward neural network, which can capture the dependencies between entities conditioned on specific relations. Despite its simplicity, the proposed network is expected to cover various facts, even if the facts are not used for training, owing to the good generalizability of neural networks. For example, our network will assign a fact a higher probability if it is ``logical'': e.g., the network might prefer that an athlete have the same nationality as his\/her national team rather than some other nation. 
\n\nIntuitively, two similar relations should have similar conditional distributions over head-tail entity pairs $P(\\,h, t \\mid r\\,)$, e.g., the entity pairs associated with \\textit{be trade to} and \\textit{play for} are most likely to be athletes and their clubs, whereas those associated with \\textit{live in} are often people and locations. In this paper, we evaluate the similarity between relations based on their conditional distributions over entity pairs. Specifically, we adopt Kullback--Leibler (KL) divergence in both directions as the metric. However, computing the exact KL divergence requires iterating over the whole entity pair space $\\mathcal{E} \\times \\mathcal{E}$, which is intractable. Therefore, we further provide a sampling-based method to approximate the similarity score over the entity pair space for computational efficiency. \n\nBesides developing a framework for assessing the similarity between relations, our second contribution is a survey of its applications. We present experiments and analysis aimed at answering five questions:\n\n(1) How well does the computed similarity score correlate with human judgment about the similarity between relations? How does our approach compare to other possible approaches that define similarity based on other kinds of relation embeddings? (\\cref{sec:relationship} and \\cref{sec:human-judgment})\n\n(2) Open IE models inevitably extract many redundant relations. How can our approach help reduce such redundancy? (\\cref{sec:openie})\n\n(3) To what extent, quantitatively, do the best relational classification models make errors among similar relations? (\\cref{sec:error-analysis})\n\n(4) Could similarity be used in a heuristic method to enhance negative sampling for relation prediction? (\\cref{sec:training-guidance-relation-prediction})\n\n(5) Could similarity be used as an adaptive margin in the softmax-margin training method for relation extraction? (\\cref{sec:training-guidance-relation-extraction})\n\n\nFinally, we conclude with a discussion of valid extensions to our method and other possible applications.\n\n\\cutforspace{\nWe conduct experiments on various tasks related to relations. The experimental results show that our relation similarity computed with fact distributions significantly outperforms other baseline approaches. Moreover, an ideal quantification should be easy to understand and strongly correlate with human perception of ``similar''. Therefore, we also conduct a human evaluation under the guidance of a given grading criterion. The results clearly show that our computed scores are interpretable and match human perception. According to the experimental results, we conclude that some typical scenarios can be handled well by our similarity measure:\n\n\\paragraph{Redundant relation removal (\\cref{sec:openie})} The relation patterns extracted by OpenIE~\\cite{angeli2015leveraging,saha2017bootstrapping} are often redundant, as distinct semantic patterns can express the same meaning. For example, for (\\emph{Cricket}, \\texttt{is also played in}, \\emph{County Donegal}) and (\\emph{Football}, \\texttt{is a popular sport in}, \\emph{Macedonia}), both \\texttt{is also played in} and \\texttt{is a popular sport in} mean a sport played in a place. Therefore, these two relations should be combined into one. Eliminating redundant relations would help us enrich the knowledge base and thus benefit many downstream applications, such as question answering. 
Based on our proposed similarity quantification, we can filter out redundant relation pairs whose similarity exceeds a threshold.\n\n\n\n\\paragraph{Error analysis (\\cref{sec:error-analysis})} Having established the correlation between similarity and human perception (\\cref{sec:human-judgment}), we further use the computed score for each pair to analyze errors made in predicting relations. By comparing the similarity between the wrongly predicted relations and the ground-truth relations, we find that even the best models on these tasks still make errors among the most similar relations, which highlights the need to guide the model to focus on the nuances between these relations. \n\n\\paragraph{Training guidance (\\cref{sec:training-guidance})} Since similar relations are harder to discriminate, it is intuitive to focus training on these pairs. In this paper, we propose two methods to incorporate the similarity into negative sampling and softmax-margin methods, on relation prediction and relation extraction tasks respectively. For negative sampling, we use a similarity-based probability to sample contrastive examples, which helps learn the boundary between similar relations; for softmax-margin methods, we use similarity as a cost function to force the model to learn a relatively large margin between regions of similar relations. In both applications, we gradually reduce the temperature of the exponential function, so that the probability distribution moves from flat to peaked accordingly. This warms up our models at the beginning of training and helps them discern fine-grained differences at the end.\n}\n\n\\section{Learning Head-Tail Distribution}\n\\label{sec:fact-distribution}\nJust as introduced in \\cref{sec:introduction}, we quantify the similarity between relations by their corresponding head-tail entity pair distributions. Consider the typical case in which we have obtained a number of facts, but they are still sparse among all facts in the real world. How could we obtain a well-generalized distribution over the whole space of possible triples beyond the training facts? This section proposes a method to parameterize such a distribution.\n\n\\subsection{Formal Definition of Fact Distribution}\n\nA \\textit{fact} is a triple $(h, r, t)\\in \\mathcal{E}\\times\\mathcal{R}\\times\\mathcal{E}$, where $h$ and $t$ are called \\textit{head} and \\textit{tail} entities, $r$ is the relation connecting them, and $\\mathcal{E}$ and $\\mathcal{R}$ are the sets of entities and relations respectively. We consider a score function $F_{\\theta}:\\mathcal{E}\\times\\mathcal{R}\\times\\mathcal{E}\\rightarrow \\mathbb{R}$ that maps each triple to a scalar value. As a special case, the function can be factorized into the sum of two parts: $F_{\\theta}(\\,h,t; r\\,)\\mathrel{\\stackrel{\\mbox{\\tiny def}}{=}} u_{\\theta_1}(h; r) + u_{\\theta_2}(t; h, r)$. We use $F_{\\theta}$ to define the unnormalized probability\n\\begin{equation}\n\\small\n\\tilde{P}_{\\theta}(\\,h, t\\mid r\\,) \\mathrel{\\stackrel{\\mbox{\\tiny def}}{=}} \\exp F_{\\theta}(\\,h, t; r\\,)\n\\end{equation}\nfor every triple $(\\, h, r, t\\,)$. 
The real parameter $\\theta$ can be adjusted to obtain different distributions over facts.\n\nIn this paper, we only consider the \\textit{locally normalized} version of $F_{\\theta}$:\n\\begin{equation}\n\\label{eq:local-normalization}\n\\small\n\\begin{aligned}\nu_{\\theta_1}(h; r) &= \\log \\frac{\\exp \\tilde{u}_{\\theta_1}(h; r)}{\\sum_{h'} \\exp \\tilde{u}_{\\theta_1}(h'; r)},\\\\\nu_{\\theta_2}(t; h, r) &= \\log \\frac{\\exp \\tilde{u}_{\\theta_2}(t;h, r)}{\\sum_{t'} \\exp \\tilde{u}_{\\theta_2}(t';h, r)},\\\\\n\\end{aligned}\n\\end{equation}\nwhere $\\tilde{u}_{\\theta_1}$ and $\\tilde{u}_{\\theta_2}$ are directly parameterized by feed-forward neural networks. Through local normalization, $\\tilde{P}_{\\theta}(\\,h,t \\mid r\\,)$ is naturally a valid probability distribution, as the partition function $\\sum_{h,t} \\exp F_{\\theta}(\\,h,t;r\\,) = 1$. Therefore, $P_{\\theta}(\\,h,t\\mid r\\,)=\\tilde{P}_{\\theta}(\\,h,t\\mid r\\,)$.\n\n\\subsection{Neural architecture design}\nHere we introduce the design of our neural networks. \nFor the first and second parts, we implement the scoring functions introduced in \\cref{eq:local-normalization} as\n\\begin{equation}\n\\small\n\\begin{aligned}\n\\tilde{u}_{\\theta_1}(h; r) &= \\mathtt{MLP}_{\\theta_1}(\\bm{r})^{\\top}\\bm{h},\\\\\n\\tilde{u}_{\\theta_2}(t; h, r) &= \\mathtt{MLP}_{\\theta_2}([\\bm{h}; \\bm{r}])^{\\top}\\bm{t},\\\\\n\\end{aligned}\n\\end{equation}\nwhere each $\\mathtt{MLP}_{\\theta}$ represents a multi-layer perceptron composed of layers like $\\bm{y} = \\mathtt{relu}(\\bm{W}\\bm{x} + \\bm{b})$, $\\bm{h}, \\bm{r}$, $\\bm{t}$ are embeddings of $h,r$, $t$, and $\\theta$ includes the weights and biases in all layers.\n\n\\subsection{Training}\nNow we discuss how the model is trained. In this paper, we consider joint training. By minimizing the loss function, we compute the model parameters $\\theta^*$:\n\n\\begin{equation}\n\\small\n\\begin{aligned}\n\\theta^* &= \\argmin_{\\theta} \\mathcal{L}(G) \\\\\n&=\\argmin_{\\theta} \\sum_{(\\,h,r,t\\,)\\in G} - \\log P_{\\theta}(\\,h, t\\mid r\\,),\n\\end{aligned}\n\\end{equation}\nwhere $G \\subset \\mathcal{E}\\times\\mathcal{R}\\times\\mathcal{E}$ is a set of triples.\\footnote{In our applications, the set of triples could be a knowledge base or a set of triples in the training set, etc.} The whole set of parameters is $\\theta = \\{\\theta_1, \\theta_2, \\{\\bm{e}, \\forall e\\in \\mathcal{E}\\}, \\{\\bm{r}, \\forall r\\in \\mathcal{R}\\}\\}$. We train these parameters with the Adam optimizer \\cite{kingma2014adam}. 
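For concreteness, the following is a minimal PyTorch sketch of this locally normalized model; the class name, layer sizes, and training comment are illustrative assumptions rather than the authors' released implementation.\n\\begin{verbatim}\nimport torch\nimport torch.nn as nn\n\nclass FactDistribution(nn.Module):\n    # Locally normalized fact distribution P(h, t | r).\n    def __init__(self, n_ent, n_rel, dim):\n        super().__init__()\n        self.ent = nn.Embedding(n_ent, dim)  # entity embeddings\n        self.rel = nn.Embedding(n_rel, dim)  # relation embeddings\n        self.mlp1 = nn.Sequential(\n            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))\n        self.mlp2 = nn.Sequential(\n            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))\n\n    def log_prob(self, h, r, t):\n        # u1(h; r): log-softmax over all candidate head entities\n        s1 = self.mlp1(self.rel(r)) @ self.ent.weight.t()\n        u1 = s1.log_softmax(-1).gather(-1, h.unsqueeze(-1)).squeeze(-1)\n        # u2(t; h, r): log-softmax over all candidate tail entities\n        s2 = self.mlp2(torch.cat([self.ent(h), self.rel(r)], -1)) @ self.ent.weight.t()\n        u2 = s2.log_softmax(-1).gather(-1, t.unsqueeze(-1)).squeeze(-1)\n        return u1 + u2  # log P(h, t | r)\n\n# Training: minimize -model.log_prob(h, r, t).mean() over observed\n# triples with torch.optim.Adam, learning the embeddings jointly\n# with the MLP weights.\n\\end{verbatim}\n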
Training details are shown in \\cref{sec:training_detail}.\n\n\\begin{table*}[!t]\n\\small\n\\center\n\\begin{tabular}{c c c}\nRelation Representation & Method & Similarity Quantification \\\\ \\toprule\nVectors & TransE \\cite{bordes2013translating} & $S(r_1, r_2) = \\exp \\left(\\bm{r}_1^{\\top} \\bm{r}_2\/\\lVert \\bm{r}_1\\rVert_2\\lVert\\bm{r}_2\\rVert_2\\right)$\\\\\nVectors & DistMult \\cite{yang2014embedding} & $S(r_1, r_2) = \\exp \\left(\\bm{r}_1^{\\top} \\bm{r}_2\/\\lVert \\bm{r}_1\\rVert_2\\lVert\\bm{r}_2\\rVert_2\\right)$\\\\\nMatrices & RESCAL \\cite{nickel2011three} & $S(r_1, r_2) = \\exp (-\\lVert M_{r_1} - M_{r_2}\\rVert_F)$\\\\\nAngles & RotatE \\cite{sun2018rotate} & $S(r_1, r_2) = \\exp (- \\sum_{i=1}^n \\lvert \\bm{r}_{1,i} - \\bm{r}_{2,i} \\rvert_1)$\\\\ \\midrule\nProbability Distribution & Ours & \\cref{eq:similarity-defination} \\\\ \\bottomrule\n\\end{tabular}\n\\caption{Methods to define a similarity function with different types of relation representations.}\n\\label{tab:other-similarity}\n\\end{table*}\n\n\n\\section{Quantifying Similarity}\n\\label{sec:quantify}\nSo far, we have talked about how to use neural networks to approximate the natural distribution of facts. The central topic of our paper, quantifying similarity, will be discussed in detail in this section.\n\n\\subsection{Relations as Distributions}\nIn this paper, we provide a probabilistic view of relations by representing relation $r$ as a probability distribution $P_{\\theta^*}(\\,h, t \\mid r\\,)$. After training the neural network on a given set of triples, the model is expected to generalize well over the whole $\\mathcal{E}\\times\\mathcal{R}\\times\\mathcal{E}$ space. \n\nNote that it is very easy to calculate $P_{\\theta^*}(\\,h, t \\mid r\\,)$ in our model thanks to local normalization (\\cref{eq:local-normalization}). Therefore, we can compute it as\n\\begin{equation}\n\\small\nP_{\\theta^*}(\\,h, t \\mid r\\,) = \\exp (u_{\\theta_1}(h; r) + u_{\\theta_2}(t; h, r)).\n\\end{equation}\n\n\\subsection{Defining Similarity}\n\\label{sec:defining-similarity}\nAs the basis of our definition, we hypothesize that the similarity between the distributions $P_{\\theta^*}(\\,h, t\\mid r\\,)$ reflects the similarity between the corresponding relations.\\footnote{\\cref{sec:human-judgment} provides empirical results to corroborate this hypothesis.} For example, if the conditional distributions of two relations put mass on similar entity pairs, the two relations should be quite similar. If they emphasize different ones, the two should have some differences in meaning. \n\nFormally, we define the similarity between two relations as a function of the divergence between the distributions of the corresponding head-tail entity pairs:\n\\begin{equation}\n\\small\n\\label{eq:similarity-defination}\n\\begin{aligned}\n S(r_1, r_2) = g\\Big(&\\KLD{P_{\\theta^*}(\\,h,t\\mid r_1\\,)}{P_{\\theta^*}(\\,h,t\\mid r_2\\,)},\\\\\n &\\KLD{P_{\\theta^*}(\\,h,t\\mid r_2\\,)}{P_{\\theta^*}(\\,h,t\\mid r_1\\,)}\\Big),\n\\end{aligned}\n\\end{equation}\nwhere $\\KLD{\\cdot}{\\cdot}$ denotes Kullback--Leibler divergence, \n\\begin{equation}\n\\begin{aligned}\n\\KLD{P_{\\theta^*}(\\,h,t\\mid r_1\\,)}{P_{\\theta^*}(\\,h,t\\mid r_2\\,)}\\\\\n=\\mathbb{E}_{h, t\\sim P_{\\theta^*}(\\,h,t\\mid r_1\\,)} \\log \\frac{P_{\\theta^*}(\\,h,t\\mid r_1\\,)}{P_{\\theta^*}(\\,h,t\\mid r_2\\,)}\n\\end{aligned}\n\\end{equation}\nvice versa, and the function $g(\\cdot, \\cdot)$ is a symmetric function. 
To keep our definition coherent with the semantic meaning of ``similarity'', $g$ should be a monotonically decreasing function. Throughout this paper, we choose to use an exponential function\\footnote{We view KL divergences as energy functions.} composed with the max function, i.e., $g(x, y) = e^{-\\max(x, y)}$. Note that by taking both directions of the KL divergence into account, our definition incorporates the entity pairs with high probability under both $r_1$ and $r_2$. Intuitively, if $P_{\\theta^*}(\\,h, t\\mid r_1\\,)$ mainly concentrates on a subset of the entity pairs that $P_{\\theta^*}(\\,h, t\\mid r_2\\,)$ emphasizes, $r_1$ is only a hyponym of $r_2$. Considering both directions of the KL divergence helps the model make a more comprehensive comparison. We will talk about the advantage of this choice in detail in \\cref{sec:relationship}.\n\n\\subsection{Calculating Similarity}\nJust as introduced in \\cref{sec:introduction}, it is intractable to compute the similarity exactly, as it involves $\\mathcal{O}(|\\mathcal{E}|^2)$ computation. Hence, we consider the Monte Carlo approximation:\n\\begin{equation}\n\\small\n\\begin{aligned}\n&\\KLD{P_{\\theta^*}(\\,h, t \\mid r_1\\,)}{P_{\\theta^*}(\\,h, t\\mid r_2\\,)}\\\\\n=\\, & \\mathbb{E}_{h, t\\sim P_{\\theta^*}(\\,h, t \\mid r_1\\,)} \\log\\frac{P_{\\theta^*}(\\,h, t \\mid r_1\\,)}{P_{\\theta^*}(\\,h, t\\mid r_2\\,)}\\\\\n\\approx\\, & \\frac{1}{|\\mathcal{S}|}\\sum_{h, t\\in \\mathcal{S}} \\log\\frac{P_{\\theta^*}(\\,h, t \\mid r_1\\,)}{P_{\\theta^*}(\\,h, t\\mid r_2\\,)},\n\\end{aligned}\n\\end{equation}\nwhere $\\mathcal{S}$ is a list of entity pairs sampled from $P_{\\theta^*}(\\,h, t \\mid r_1\\,)$. We use sequential sampling\\footnote{Sampling $h$ and $t$ at the same time requires $\\mathcal{O}(|\\mathcal{E}|^2)$ computation, while sequential sampling requires only $\\mathcal{O}(|\\mathcal{E}|)$ computation.} to obtain $\\mathcal{S}$, which means we first sample $h$ given $r$ from $u_{\\theta_1}(h;r)$, and then sample $t$ given $h$ and $r$ from $u_{\\theta_2}(t;h, r)$.\\footnote{This may seem to be a non-symmetric method, and sampling from the mixture of both forward and backward directions should yield a better result. Surprisingly, in practice, sampling from a single direction works just as well as sampling from both directions.}\n
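\nA short sketch of this sampling-based estimate is given below, reusing the model sketched in the previous section; the sample size and function names are illustrative assumptions.\n\\begin{verbatim}\nimport torch\n\ndef sample_pairs(model, r, n):\n    # Sequential sampling: h ~ u1(.; r), then t ~ u2(.; h, r).\n    rs = torch.full((n,), r, dtype=torch.long)\n    p_h = (model.mlp1(model.rel(rs)) @ model.ent.weight.t()).softmax(-1)\n    h = torch.multinomial(p_h, 1).squeeze(-1)\n    p_t = (model.mlp2(torch.cat([model.ent(h), model.rel(rs)], -1))\n           @ model.ent.weight.t()).softmax(-1)\n    t = torch.multinomial(p_t, 1).squeeze(-1)\n    return h, t\n\ndef similarity(model, r1, r2, n=1000):\n    # S(r1, r2) = exp(-max(KL(P1||P2), KL(P2||P1))), with each KL\n    # term estimated by Monte Carlo over n sampled entity pairs.\n    def kl(ra, rb):\n        h, t = sample_pairs(model, ra, n)\n        ras = torch.full_like(h, ra)\n        rbs = torch.full_like(h, rb)\n        return (model.log_prob(h, ras, t) - model.log_prob(h, rbs, t)).mean()\n    with torch.no_grad():\n        return torch.exp(-torch.max(kl(r1, r2), kl(r2, r1)))\n\\end{verbatim}\n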
\n\\subsection{Relationship with other metrics}\n\\label{sec:relationship}\nPrevious work proposed various methods for representing relations as vectors \\cite{bordes2013translating, yang2014embedding}, as matrices \\cite{nickel2011three}, or even as angles \\cite{sun2018rotate}. Based on each of these representations, one could easily define various similarity quantification methods.\\footnote{Taking the widely used vector representations as an example, we can define the similarity between relations based on cosine distance, dot product distance, L1\/L2 distance, etc.} We show in \\cref{tab:other-similarity} the best of them in each category of relation representation.\n\n\n\nHere we provide two intuitive reasons for using our proposed probability-based similarity: (1) the capacity of a single fixed-size representation is limited --- some details of the fact distribution are lost during embedding; (2) directly comparing distributions yields better interpretability --- one cannot tell how two relations differ given only their embeddings, whereas our model makes it possible to study the detailed differences between the probabilities on every entity pair. \\cref{fig:head-tail-distribution} provides an example. Although the two relations talk about the same topic, they have different meanings. TransE embeds them as the two vectors closest to each other, while our model can capture the distinction between the distributions corresponding to the two relations, which can be noticed directly from the figure. \n\n\\begin{figure}[!]\n\\centering\n\\resizebox{0.8\\linewidth}{!}{\n\\includegraphics{scatter.pdf}}\n\\caption[Caption for LOF]{Head-tail entity pairs of relation ``be an unincorporated community in'' (in blue) and ``be a small city in'' (in red) sampled from our fact distribution model. The coordinates of the points are computed by t-SNE \\cite{maaten2008visualizing} on the concatenation of head and tail embeddings\\protect\\footnotemark. The two \\textbf{larger} blue and red points indicate the embeddings of these two relations.}\n\\label{fig:head-tail-distribution}\n\\end{figure}\n\n\\footnotetext{Embeddings used in this graph are from a trained TransE model.}\n\n\\begin{figure}[!t]\n\\centering\n\\resizebox{0.86\\linewidth}{!}{\n\\includegraphics[trim={0 0 0 0},clip]{correlation.pdf}}\n\\caption{Spearman correlations between human judgment and models' outputs. The inter-subject correlation is also shown as a horizontal line with its standard deviation as an error band. Our model shows the strongest positive correlation with human judgment, i.e., the smallest margin to human inter-subject agreement. Significance: ***\/**\/* := p < .001\/.01\/.05.}\n\\label{fig:correlation}\n\\end{figure}\n\n\\section{Dataset Construction}\nWe show the statistics of the datasets we use in \\cref{tab:statistics}; the construction procedures are introduced in this section.\n\\subsection{Wikidata}\n\\label{sec:construction_wikidata}\n\nIn Wikidata \\cite{vrandevcic2014wikidata}, facts can be described as (Head item\/property, \\textit{Property}, Tail item\/property). To construct a dataset suitable for our task, we only consider the facts whose head entity and tail entity are both items. We first choose the most common 202 relations and 120000 entities from Wikidata as our initial data. Considering that the facts containing the two most frequently appearing relations (\\textit{P2860: cites}, and \\textit{P31: instance of}) occupy half of the initial data, we drop these two relations to downsize the dataset and make it more balanced. Finally, we keep the triples whose head and tail both come from the selected 120000 entities and whose relation comes from the remaining 200 relations. \n\n\\subsection{ReVerb Extractions}\n\\label{sec:construction_reverb}\nReVerb \\cite{fader2011identifying} is a program that automatically identifies and extracts binary relationships from English sentences. We use the extractions from running ReVerb on Wikipedia\\footnote{http:\/\/reverb.cs.washington.edu\/}. We only keep the relations that appear more than 10 times and their corresponding triples to construct our dataset.\n\n\\subsection{FB15K and TACRED}\n\\label{sec:construction_fb15k_tacred}\nFB15K \\cite{bordes2013translating} is a subset of Freebase. TACRED \\cite{zhang2017position} is a large supervised relation extraction dataset obtained via crowdsourcing. We use these two datasets directly; no extra processing steps were applied. 
\n\n\\section{Human Judgments}\n\\label{sec:human-judgment}\nFollowing \\citet{miller1991contextual,resnik1999semantic} and the vast amount of previous work on semantic similarity, we ask nine undergraduate subjects to assess the similarity of 360 pairs of relations from a subset of Wikidata \\cite{vrandevcic2014wikidata}\\footnote{Wikidata provides detailed descriptions of properties (relations), which could help subjects understand the relations better.} that are chosen to cover similarity levels from high to low. In our experiment, subjects were asked to rate an integer similarity score from 0 (no similarity) to 4 (perfectly the same)\\footnote{The detailed instruction is attached in \\cref{sec:wikidata_annotation}.} for each pair. The inter-subject correlation, estimated by the leave-one-out method \\cite{weiss1991computer}, is r = $0.763$, standard deviation = $0.060$. This important reference value (marked in \\cref{fig:correlation}) can be seen as the highest expected performance for machines \\cite{resnik1999semantic}. \n\nTo get baselines for comparison, we consider other possible methods to define similarity functions, as shown in \\cref{tab:other-similarity}. We compute the correlation between these methods and human judgment scores. As the chosen models are the ones that work best in knowledge base completion, we do expect the similarity quantification approaches based on them to measure some degree of similarity. As shown in \\cref{fig:correlation}, the three baseline models achieve a moderate ($0.1\\text{--}0.5$) positive correlation. On the other hand, our model shows a stronger correlation ($0.63$) with human judgment, indicating that considering the probability over the whole entity pair space helps to obtain a similarity measure closer to human judgments. These results provide evidence for our claim raised in \\cref{sec:defining-similarity}. \n\n\\section{Redundant Relation Removal}\n\\label{sec:openie}\n\nOpen IE extracts concise token patterns from plain text to represent various relations between entities, e.g., (Mark Twain, \\textit{was born in}, Florida). As Open IE is significant for constructing KBs, many effective extractors have been proposed to extract triples, such as TextRunner~\\cite{yates2007textrunner}, ReVerb~\\cite{fader2011identifying}, and Stanford Open IE~\\cite{angeli2015leveraging}. However, these extractors only yield relation patterns between entities, without aggregating and clustering their results. Accordingly, there is a fair number of redundant relation patterns after extraction. Furthermore, the redundant patterns lead to some redundant relations in KBs.\n\nRecently, some efforts have been devoted to Open Relation Extraction (Open RE)~\\cite{Lin2001DIRT,Yao2011Structured,marcheggiani2016discrete,Elsahar2017Unsupervised}, which aims to cluster relation patterns into several relation types instead of keeping redundant patterns. However, these Open RE methods adopt distantly supervised labels as gold relation types, suffering from both false positive and false negative problems on the one hand; on the other hand, they still rely on the conventional similarity metrics mentioned above.\n\nIn this section, we will show that our defined similarity quantification could help Open IE by identifying redundant relations. To be specific, we set up a toy experiment to remove redundant relations in KBs for a preliminary comparison (\\cref{sec:toy-experiment}). 
Then, we evaluate our model and baselines on the real-world dataset extracted by Open IE methods (\\cref{sec:real-experiment}). Considering that the existing evaluation metrics for Open IE and Open RE rely on either labor-intensive annotations or distantly supervised annotations, we propose a metric approximating recall and precision based on a manageable amount of human annotation, balancing both efficiency and accuracy.\n\n\\begin{table}[!t]\n\\small\n\\resizebox{1.0\\linewidth}{!}{\n\\begin{tabular}{l r r r l}\nTriple Set & $|\\mathcal{R}|$& $|\\mathcal{E}|$ & \\#Fact & Section \\\\\\toprule\nWikidata & 188 & 112,946 & 426,067 & \\cref{sec:human-judgment} and \\cref{sec:toy-experiment}\\\\\nReVerb Extractions & 3,736 & 194,556 & 266,645 & \\cref{sec:real-experiment}\\\\\nFB15K & 1,345 & 14,951 & 483,142 & \\cref{sec:error-analysis-relation-prediction} and \\cref{sec:training-guidance-relation-prediction}\\\\\nTACRED & 42 & 29,943 & 68,124 & \\cref{sec:error-analysis-relation-extraction} and \\cref{sec:training-guidance-relation-extraction}\\\\ \\bottomrule\n\\end{tabular}}\n\\caption{Statistics of the triple sets used in this paper.}\n\\label{tab:statistics}\n\\end{table}\n\n\\subsection{Toy Experiment}\n\\label{sec:toy-experiment}\n\nIn this subsection, we propose a toy environment to verify our similarity-based method. Specifically, we construct a dataset from Wikidata\\footnote{The construction procedure is shown in \\cref{sec:construction_wikidata}.} and implement a Chinese restaurant process\\footnote{The Chinese restaurant process is described in \\cref{sec:Chinese_restaurant}.} to split every relation in the dataset into several sub-relations. Then, we filter out those sub-relations appearing fewer than 50 times, eventually obtaining 1165 relations. All these split relations are regarded as different ones during training, and then different relation similarity metrics are adopted to merge those sub-relations into one relation. As Figure~\\ref{fig:correlation} shows that the matrix-based approach is less effective than the other approaches, we leave it out of this experiment. The results are shown in Table~\\ref{tab:toy-environment}. \n\n\\begin{table}[!t]\n\\small\n\\center\n\\begin{tabular}{c c c c}\n Method & P & R & $F_1$ \\\\ \n \\toprule\n Vectors (TransE) &0.28 & 0.14& 0.18\\\\\n Vectors (DistMult) &0.44 & 0.41 & 0.42\\\\\n Angles & 0.48 & 0.43 & 0.45 \\\\ \n Ours & \\textbf{0.65}& \\textbf{0.50} & \\textbf{0.57}\\\\\n \\bottomrule\n\\end{tabular}\n\\caption{The experimental results on the toy dataset show that our metric based on probability distributions significantly outperforms other relation similarity metrics.}\n\\label{tab:toy-environment}\n\\end{table}\n\n\\subsection{Real World Experiment}\n\\label{sec:real-experiment}\n\nIn this subsection, we evaluate various relation similarity metrics on real-world Open IE patterns. The dataset is constructed from ReVerb extractions. Different patterns will be regarded as different relations during training, and we also adopt various relation similarity metrics to merge similar relation patterns. It is nearly impossible to annotate all pattern pairs as mergeable or not, and it is also inappropriate to take distantly supervised annotations as gold results. 
Hence, we propose a novel metric approximating recall and precision based on minimal human annotation for evaluation in this experiment.\n\n\\subsubsection*{Approximating Recall and Precision}\n\n\\paragraph{Recall} \n\nRecall is defined as the fraction of true positive instances among all real positive\\footnote{Often called \\textit{relevant} in the information retrieval field.} instances. However, we do not have annotations about which pairs of relations are synonymous. Crowdsourcing is a method to obtain a large number of high-quality annotations. Nevertheless, applying crowdsourcing is not trivial in our setting, because it is intractable to enumerate all synonymous pairs in the large space of relation (pattern) pairs $\\mathcal{O}(|\\mathcal{R}|^2)$ in Open IE. A promising method is to use rejection sampling by uniform sampling from the whole space, keeping only the synonymous pairs judged by crowdworkers. However, this is not practical either, as the synonymous pairs are sparse in the whole space, resulting in low efficiency. Fortunately, we could use normalized importance sampling as an alternative to get an unbiased estimation of recall. \n\n\\begin{theorem}\\footnote{See proof in \\cref{sec:proof_recall}}\nSuppose every sample $x \\in X$ has a label $f(x)\\in\\{0, 1\\}$, and the model to be evaluated also gives its prediction $\\hat{f}(x)\\in\\{0, 1\\}$. The recall can be written as\n\\begin{equation}\n\\small\n\\label{eq:recall-expectation}\nRecall = \\mathbb{E}_{x\\sim U} \\mathbb{I}[\\hat{f}(x)=1],\n\\end{equation}\nwhere $U$ is the uniform distribution over all samples with $f(x)=1$. If we have a proposal distribution $q(x)$ satisfying $\\forall x, f(x)=1\\wedge \\hat{f}(x)=1\\Rightarrow q(x) \\neq 0$, we get an unbiased estimation of recall:\n\\begin{equation}\n\\small\n\\label{eq:recall}\nRecall \\approx\\sum_{i=1}^{n} \\mathbb{I}[\\hat{f}(x_i)=1] \\hat{w}_i,\n\\end{equation}\nwhere $\\hat{w}_i$ is a normalized version of $w_i = \\frac{\\mathbb{I}[f(x_i)=1]}{\\tilde{q}(x_i)}$, where $\\tilde{q}$ is the unnormalized version of $q$, and $\\{x_i\\}_{i=1}^{n}$ are i.i.d. drawn from $q(x)$.\n\\end{theorem}\n\n\\paragraph{Precision} Similar to \\cref{eq:recall-expectation}, we can write the expectation form of precision:\n\\begin{equation}\n\\small\nPrecision = \\mathbb{E}_{x\\sim U'}\\mathbb{I}[f(x) = 1],\n\\end{equation}\nwhere $U'$ is the uniform distribution over all samples with $\\hat{f}(x)=1$. These samples can be found by running the model over the data, so we can simply approximate precision by Monte Carlo sampling:\n\\begin{equation}\n\\small\n\\label{eq:precision}\nPrecision \\approx \\frac{1}{n} \\sum_{i=1}^{n} \\mathbb{I}[f(x_i)=1],\n\\end{equation}\nwhere $\\{x_i\\}_{i=1}^{n} \\stackrel{i.i.d.}{\\sim} U'$.\n\nIn our setting, $x = (r_1, r_2)\\in \\mathcal{R}\\times\\mathcal{R}$, $f(x) = 1$ means that $r_1$ and $r_2$ are the same relation, and $\\hat{f}(x) = 1$ means that $S(r_1, r_2)$ is larger than a threshold $\\lambda$.\n
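\nAs a concrete illustration, a sketch of these two estimators is given below; the data structures (dictionaries of annotator labels and model scores keyed by relation pairs) are our own assumptions.\n\\begin{verbatim}\nimport random\n\ndef estimate_recall(pairs, labels, scores, threshold):\n    # pairs: i.i.d. samples from q, with q(x) proportional to scores[x].\n    # labels[x] = f(x) from annotators; prediction is scores[x] > threshold.\n    w = [labels[x] \/ scores[x] for x in pairs]   # unnormalized weights\n    total = sum(w) or 1.0\n    w_hat = [wi \/ total for wi in w]             # normalized weights\n    return sum(wh for x, wh in zip(pairs, w_hat)\n               if scores[x] > threshold)\n\ndef estimate_precision(pred_pairs, labels, n=100):\n    # Uniformly sample pairs the model predicts positive, check labels.\n    sample = random.sample(pred_pairs, min(n, len(pred_pairs)))\n    return sum(labels[x] for x in sample) \/ len(sample)\n\\end{verbatim}\n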
\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[width=0.8\\linewidth]{PR_OpenIE.pdf}\n\\caption{Precision-recall curve on the Open IE task comparing our similarity function with vector-based and angle-based similarity. Error bars represent $95\\%$ confidence intervals. Bootstrapping is used to calculate the confidence intervals.}\n\\label{fig:precision-recall-openie}\n\\end{figure}\n\n\\begin{figure*}[!t]\n\\begin{minipage}{0.6\\textwidth}\n\\begin{subfigure}{0.49\\textwidth}\n \\centering\n \\resizebox{\\linewidth}{!}{\n \\includegraphics{aveRank_rp.pdf}}\n \\caption{FB15K}\n \\label{fig:ave_rank_rp}\n\\end{subfigure}\n\\begin{subfigure}{0.49\\textwidth}\n \\centering\n\\resizebox{\\linewidth}{!}{\n\\includegraphics[trim={0 0 0 0},clip]{aveRank_re.pdf}}\n\\caption{TACRED}\n\\label{fig:ave_rank_re}\n\\end{subfigure}\n\\caption{Similarity rank distributions of distracting relations on different tasks and datasets. Most of the distracting relations have a top similarity rank. Distracting relations are, as defined previously, the relations that have a higher rank in the relation classification result than the ground truth.}\n\\label{fig:ave_rank}\n\\end{minipage}\n\\hfill\n\\begin{minipage}{0.38\\textwidth}\n\\resizebox{\\linewidth}{!}{\n\\includegraphics[trim={0 0 1.2cm 0},clip]{relation_prediction.pdf}}\n\\caption{Improvement of using similarity in a heuristic method for negative sampling. MRR denotes the mean reciprocal rank.}\n\\label{fig:relation_prediction}\n\\vspace*{45pt}\n\\end{minipage}\n\\end{figure*}\n\n\n\\subsubsection*{Results}\n\nThe results on the ReVerb Extractions dataset that we constructed are shown in \\cref{fig:precision-recall-openie}. To approximate recall, we use the similarity scores as the proposal distribution $\\tilde{q}$. 500 relation pairs are then drawn from $\\tilde{q}$. To approximate precision, we set thresholds at equal intervals. At each threshold, we uniformly sample 50 to 100 relation pairs whose similarity score given by the model is larger than the threshold. We ask 15 undergraduates to judge whether the two relations in a relation pair have the same meaning. A relation pair is considered valid only if 8 of the annotators annotate it as valid. We use the annotations to approximate recall and precision with \\cref{eq:recall} and \\cref{eq:precision}. Apart from the confidence interval of precision shown in the figure, the largest $95\\%$ confidence interval among thresholds for recall is $0.04$\\footnote{The figure is shown in \\cref{fig:recall_std}.}. From the results, we can see that our similarity measure outperforms the other models' similarity by a very large margin.\n\n\n\n\\section{Error Analysis for Relational Classification}\n\\label{sec:error-analysis}\n\nIn this section, we consider two kinds of relational classification tasks: (1) relation prediction and (2) relation extraction. Relation prediction aims at predicting the relationship between entities with a given set of triples as training data, while relation extraction aims at extracting the relationship between two entities in a sentence. \n\n\\subsection{Relation Prediction}\n\\label{sec:error-analysis-relation-prediction}\nWe hope to design a simple and clear experimental setup to conduct error analysis for relation prediction. Therefore, we consider the typical method TransE \\cite{bordes2013translating} as the subject model and FB15K~\\cite{bordes2013translating} as the dataset. 
TransE embeds entities and relations as vectors, and trains these embeddings by minimizing\n\\begin{equation}\n\\small\n\\mathcal{L} = \\sum_{(h, r, t) \\in \\mathcal{D}} [d(\\bm{h}+\\bm{r},\\bm{t}) - d(\\bm{h}'+\\bm{r}',\\bm{t}') + \\gamma]_{+},\n\\end{equation}\nwhere $\\mathcal{D}$ is the set of training triples, $d(\\cdot, \\cdot)$ is the distance function, $(h', r', t')$\\footnote{Note that only head and tail entities are changed in the original TransE when doing link prediction. But changing $r'$ results in better performance when doing relation prediction.} is a negative sample with one element different from $(h,r,t)$ uniformly sampled from $\\mathcal{E}\\times\\mathcal{R}\\times\\mathcal{E}$, and $\\gamma$ is the margin.\n\nDuring testing, for each entity pair $(h, t)$, TransE ranks relations according to $d(\\bm{h}+\\bm{r}, \\bm{t})$. For each $(h,r,t)$ in the test set, we call the relations with higher rank scores than $r$ \\textbf{\\textit{distracting relations}}. We then compare the similarity between the golden relation and the \\textbf{\\textit{distracting relations}}. Note that some entity pairs could correspond to more than one relation, in which case we do not regard those relations as distracting relations.\n\n\\subsection{Relation Extraction}\n\\label{sec:error-analysis-relation-extraction}\nFor relation extraction, we consider the supervised relation extraction setting and the TACRED dataset \\cite{zhang2017position}. As for the subject model, we use the best model on the TACRED dataset --- the position-aware neural sequence model. This method first passes the sentence through an LSTM and then calculates an attention-weighted sum of the hidden states of the LSTM, taking positional features into account. This simple and effective method achieves the best performance on the TACRED dataset. \n\n\\subsection{Results}\n\\label{sec:error_analysis_result}\n\\cref{fig:ave_rank} shows the distribution of similarity ranks of distracting relations in the above-mentioned models' outputs on both relation prediction and relation extraction tasks. From \\cref{fig:ave_rank_rp,fig:ave_rank_re}, we observe that most of the distracting relations are the most similar ones, which corroborates our hypothesis that even the best models on these tasks still make mistakes among the most similar relations. This result also highlights the importance of a heuristic method for guiding models to pay more attention to the boundary between similar relations. We also try negative sampling with relation type constraints, but we see no improvement compared with uniform sampling. 
The details of negative sampling with relation type constraints are presented in \\cref{sec:relation-type-constraints}.\n\n\n\\begin{table}[!t]\n\\center\n\\resizebox{\\linewidth}{!}{\n\\begin{tabular}{l l c c c}\n&Model & P & R & $F_1$\\\\\n\\toprule\nTraditional &Patterns & \\textbf{86.9} & 23.2 & 36.6\\\\\n&LR &73.5 &49.9 &59.4 \\\\\\midrule\nNeural&CNN & 75.6 & 47.5 & 58.3 \\\\\n&CNN-PE & 70.3 & 54.2 & 61.2\\\\\n&SDP-LSTM \\cite{xu2015classifying} & 66.3 & 52.7 & 58.7\\\\\n&LSTM & 65.7 & 59.9 & 62.7\\\\\n&PA-LSTM \\cite{zhang2017position} & 65.7 & 64.5 & 65.1\\\\ \n\\midrule\nNeural+Ours&PA-LSTM (Softmax-Margin Loss) & 68.5 & \\textbf{64.7} & \\textbf{66.6}\\\\\n\\bottomrule\n\\end{tabular}}\n\\caption{Improvement of using similarity in softmax-margin loss.}\n\\label{tab:relation_extraction}\n\\end{table}\n\n\n\\section{Similarity and Negative Sampling}\n\\label{sec:training-guidance-relation-prediction}\nBased on the observation presented in \\cref{sec:error_analysis_result}, we find that similar relations are often confusing for relation prediction models. Therefore, corrupted triples with similar relations can be used as high-quality negative samples.\n\nFor a given valid triple $(h, r, t)$, we corrupt the triple by substituting $r$ with $r'$ with the probability\n\\begin{equation}\np=\\frac{S(r,r')^{1\/\\alpha}}{\\sum_{r''\\in {\\mathcal{R}\\backslash \\{r\\}}} S(r,r'')^{1\/\\alpha}}, \n\\end{equation}\nwhere $\\alpha$ is the temperature of the exponential function: the larger $\\alpha$ is, the flatter the probability distribution is. When the temperature approaches infinity, the sampling process reduces to uniform sampling. \n\nIn training, we set the initial temperature to a high level and gradually reduce the temperature. Intuitively, this enables the model to distinguish among those obviously different relations in the early stage and gives more and more confusing negative triples as training proceeds, to help the model distinguish the similar relations. This can also be viewed as a form of curriculum learning~\\cite{bengio2009curriculum}: the data fed to the model gradually changes from simple negative triples to hard ones. \n\nWe perform the relation prediction task on FB15K with TransE. Following \\citet{bordes2013translating}, we use the ``Filtered'' setting protocol, i.e., filtering out the corrupted triples that appear in the dataset. Our sampling method is shown to improve the model's performance, especially on Hit@1 (\\cref{fig:relation_prediction}). Training details are described in \\cref{sec:training_detail}.
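\n\nA minimal sketch of this annealed, similarity-based corruption step is given below; the function names and annealing schedule are illustrative assumptions.\n\\begin{verbatim}\nimport random\n\ndef corrupt_relation(r, relations, S, alpha):\n    # Sample r' with probability proportional to S(r, r')^(1\/alpha):\n    # a large alpha gives near-uniform sampling, a small alpha\n    # concentrates on the most similar (hardest) relations.\n    candidates = [r2 for r2 in relations if r2 != r]\n    weights = [S(r, r2) ** (1.0 \/ alpha) for r2 in candidates]\n    return random.choices(candidates, weights=weights, k=1)[0]\n\n# During training, anneal alpha from a high to a low value, e.g.\n# alpha = a_max * (a_min \/ a_max) ** (epoch \/ n_epochs), so negative\n# triples shift from easy (random) to hard (similar relations).\n\\end{verbatim}\n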
\n\n\\section{Similarity and Softmax-Margin Loss}\n\\label{sec:training-guidance-relation-extraction}\nSimilar to \\cref{sec:training-guidance-relation-prediction}, we find that relation extraction models often make wrong predictions on similar relations. In this section, we use similarity as an adaptive margin in the softmax-margin loss to improve the performance of relation extraction models.\n\nAs shown in \\cite{gimpel2010softmax}, the softmax-margin loss can be expressed as\n \\begin{equation}\n \\small\n \\begin{aligned}\n \\mathcal{L}=&\\sum_{i=1}^n -\\theta^T f(x^{(i)},r^{(i)})+\\\\\n &\\log\\sum_{r\\in \\mathcal{R}(x^{(i)})}\\exp\\{\\theta^Tf(x^{(i)},r)+\\text{cost}(r^{(i)},r)\\},\n \\end{aligned}\n \\end{equation}\nwhere $\\mathcal{R}(x)$ denotes a structured output space for $x$, and $\\langle x^{(i)}, r^{(i)}\\rangle$ is the $i^{th}$ example in the training data.\n\nWe can easily incorporate similarity into the cost function $\\text{cost}(r^{(i)},r)$. In this task, we define the cost function as $\\alpha S(r^{(i)},r)$, where $\\alpha$ is a hyperparameter. \n\nIntuitively, we give a larger margin between similar relations, forcing the model to distinguish among them, and thus making the model perform better. We apply our method to the Position-aware Attention LSTM (PA-LSTM)~\\cite{zhang2017position}, and \\cref{tab:relation_extraction} shows that our method improves the performance of PA-LSTM. Training details are described in \\cref{sec:training_detail}.\n\n\\section{Related Works}\n\nIn many early works in psychology and linguistics, especially those exploring semantic similarity~\\cite{miller1991contextual,resnik1999semantic}, researchers empirically found various different categorizations of semantic relations among words and contexts. To promote research on these different semantic relations, \\newcite{bejar1991cognitive} explicitly defined these relations, and \\newcite{miller1995wordnet} further systematically organized rich semantic relations between words via a database. To identify the correlations and distinctions between different semantic relations so as to support learning semantic similarity, various methods have attempted to measure relational similarity~\\cite{turney2005measuring,turney2006similarity,zhila2013combining,pedersen2012duluth,rink2012utd,mikolov2013distributed,mikolov2013efficient}.\n\nWith the ongoing development of information extraction and effective construction of KBs~\\cite{suchanek2007yago,bollacker2008freebase,bizer2009dbpedia}, relations are further defined as various types of latent connections between objects, beyond purely semantic relations. These general relations play a core role in expressing relational facts in the real world. Hence, various methods have been proposed for discovering more relations and their facts, including open information extraction~\\cite{brin1998extracting,agichtein2000snowball,ravichandran2002learning,banko2007open,zhu2009statsnowball,etzioni2011open,saha2017bootstrapping}, relation extraction~\\cite{riedel2013relation,liu2013convolution,zeng2014relation,santos2015classifying,zeng2015distant,lin2016neural}, and relation prediction~\\cite{bordes2013translating,wang2014knowledge,lin2015learning,lin2015modeling,xie2016representation}.\n\nFor both semantic relations and general relations, identifying them is a crucial problem, requiring systems to provide a fine-grained relation similarity metric. However, the existing methods suffer from sparse data, which makes it difficult to achieve an effective and stable similarity metric. Motivated by this, we propose to measure relation similarity by leveraging their fact distributions so that we can identify nuances between similar relations, and merge those distant surface forms of the same relations, benefitting the tasks mentioned above.\n\n\\section{Conclusion and Future Work}\n\nIn this paper, we introduce an effective method to quantify the relation similarity and provide analysis and a survey of applications. We note that there is a wide range of future directions: (1) human prior knowledge could be incorporated into the similarity quantification; (2) similarity between relations could also be considered in multi-modal settings, e.g., extracting relations from images, videos, or even audio; (3) by analyzing the distributions corresponding to different relations, one can also find some ``meta-relations'' between relations, such as hypernymy and hyponymy. 
\n\n\\section*{Acknowledgements}\nThis work is supported by the National Natural Science Foundation of China (NSFC No. 61572273, 61532010) and the National Key Research and Development Program of China (No. 2018YFB1004503). Chen and Zhu are supported by the Tsinghua University Initiative Scientific Research Program, and Chen is also supported by the DCST Student Academic Training Program. Han is also supported by the 2018 Tencent Rhino-Bird Elite Training Program.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction} \\label{Introduction}\nMixed-valence or fluctuating valence behaviour is usually found in lanthanide-based compounds due to the intermixing of the \\textit{s}$-$\\textit{d} band with the localised \\textit{f} band near the Fermi level. Therefore, they exhibit unique magnetic, thermal and electrical properties \\cite{Varma1976a}. Eu cations in Eu-based compounds mostly occur in the 2\\textsuperscript{+} valence. However, in trieuropium tetroxide (Eu$_3$O$_4$) the Eu ions exhibit a mixed valence of Eu$^{2+}$ and Eu$^{3+}$.\n\nEu$_3$O$_4$ crystallises into an orthorhombic structure (space group \\textit{Pnma}) similar to CaFe\\textsubscript{2}O\\textsubscript{4} with the lattice parameters \\textit{a}$=10.085$ \\AA, \\textit{b}$=3.502$ {\\AA} and \\textit{c}$=12.054$ {\\AA} \\cite{Rau1966a,Ahn2009a}. Figure~\\ref{Eu3O4} shows the Eu$_3$O$_4$ structure, in which the Eu$^{2+}$ and Eu$^{3+}$ ions occupy the Ca$^{2+}$ and Fe$^{3+}$ sites, respectively. The oxygen ions form a six- and eight-fold coordination around the Eu$^{3+}$ and Eu$^{2+}$ ions, respectively. The coordination is then completed with two oxygen ions lying at the corner of the crystal \\cite{Rau1966a,Holmes1966a}. Therefore, Eu$_3$O$_4$ has two Eu$^{3+}$ and one Eu$^{2+}$ ions per formula unit \\cite{Holmes1966a}.\n \n\\begin{figure}[t!]\n\\centering\n\\includegraphics[width=0.5\\textwidth]{Eu3O4_colour.jpg}\n\\caption{The orthorhombic structure of Eu$_3$O$_4$, in which the purple, magenta and green spheres represent the Eu$^{3+}$, Eu$^{2+}$ and O$^{2-}$ ions, respectively. The black box represents one unit cell of the Eu$_3$O$_4$ crystal.\n\\label{Eu3O4}\n\\end{figure}\n\nThe compound Eu$_3$O$_4$ has an antiferromagnetic arrangement below 5 K. Its bulk and powder forms show a metamagnetic behaviour below the N\\u00e9el temperature (T\\textsubscript{N}), at a critical field of 2.4 kOe. Therefore, Eu$_3$O$_4$ is considered a potential material for magnetic refrigeration applications \\cite{Ahn2009a,Holmes1966,Holmes1966a}. Although Eu$_3$O$_4$ is a mixed-valence compound, its magnetic ordering is mainly determined by the Eu$^{2+}$ ions at low temperature due to the high magnetic moment of Eu$^{2+}$ (total angular momentum, \\textit{J}$=7\/2$) in comparison to the Eu$^{3+}$ ions (\\textit{J}$=0$) \\cite{Holmes1966,Holmes1966a,Ahn2009a}. It has been proposed that the nearest neighbouring Eu$^{2+}$ ions are strongly coupled by ferromagnetic interactions at low temperature, whereas the distant ions are coupled by weaker antiferromagnetic coupling, resulting in the overall antiferromagnetic state of Eu$_3$O$_4$ \\cite{Holmes1966}.\n\nGraphene is a promising material for spintronics applications due to its desirable properties such as its long spin-diffusion length and high electron mobility. So far, no study on the graphene\/Eu$_3$O$_4$ system has been reported. This may well be due to the difficulty of growing Eu$_3$O$_4$, which is the unstable high-temperature phase of Eu-oxides. 
Therefore, this study presents one of the first fundamental steps towards understanding the exchange coupling between Eu$_3$O$_4$ and graphene.\n\nIn this study, 20 nm-thick Eu$_3$O$_4$ films were grown on a Si\/SiO$_2$ substrate and on a graphene sheet supported on Si\/SiO$_2$ by molecular beam epitaxy (MBE) and capped with 5 nm of Au. Eu was deposited at high temperatures (300 - 600 $^{\\circ}$C) in an oxygen flux. The growth parameters such as the oxygen partial pressure, temperature and deposition rate were optimised to achieve a crystalline Eu$_3$O$_4$(001) phase. The\nstructural characterisation of the films was carried out by X-ray diffraction (XRD) and reflectivity (XRR), while a superconducting quantum interference device (SQUID) magnetometer was used to study their magnetic properties. The results show the successful growth of crystalline, highly-textured Eu$_3$O$_4$(001) films with a Curie temperature (\\textit{T}\\textsubscript{C}) of $\\sim\\,5$ K, which is in agreement with the value reported in Ref. [3]. Depth-profile X-ray photoelectron spectroscopy (XPS) scans were performed to prove the mixed valence of the Eu cations in Eu$_3$O$_4$. Furthermore, Raman spectroscopy measurements on the Si\/SiO$_2$\/graphene\/Eu$_3$O$_4$ sample showed that although the growth of the Eu$_3$O$_4$ film induced defects in the graphene sheet, the graphene retains its hexagonal lattice structure.\n\n\\section{Sample Preparation} \\label{SamplePrep}\n20 nm Eu$_3$O$_4$(001) films were deposited on cleaned Si\/SiO$_2$ and commercially purchased Si\/SiO$_2$\/graphene substrates by MBE with a base pressure of 4 $\\times$ 10$^{-10}$ mbar. The substrates were heated to 400$^{\\circ}$C, while the Eu was evaporated at a rate of 1.2 nm\/min. Oxygen was then introduced into the growth chamber, resulting in a partial pressure of 1.1 $\\times$ 10$^{-8}$ mbar, to deposit Eu$_3$O$_4$(001) at a rate of 1.11 nm\/min. A 5 nm film of Au was grown subsequently on the Eu$_3$O$_4$(001) films to prevent them from oxidising to the most stable oxide phase of Eu (Eu$_2$O$_3$). The Au films were deposited at 45$^{\\circ}$C, with a rate of 0.057 nm\/min at a pressure of 1.9 $\\times$ 10$^{-10}$ mbar.\n\nA quartz crystal microbalance was used during the deposition to monitor the growth rate and thus the thicknesses of the layers. Room temperature (RT) XRD scans were used to study the crystallinity of the grown films. The XRR measurements were then performed to confirm the microbalance readings and to deduce the density of the layers and the roughness of their interfaces. These acquisitions were carried out using a Bruker D8 Discover HRXRD with a Cu K$\\alpha$ monochromatic beam with a voltage of 40 kV and a current of 40 mA. The magnetic properties of the Eu$_3$O$_4$ films were studied using a Quantum Design SQUID.\n\nDepth-profile XPS scans using an Al K$\\alpha$ X-ray source (1486.68 eV, beam width of 500 $\\mu$m) were performed on the Si\/SiO$_2$\/Eu$_3$O$_4$\/Au sample. This was done to study the homogeneity of the Eu$_3$O$_4$ film, determine the atomic ratio of Eu$^{2+}$ and Eu$^{3+}$ and confirm the mixed-valence character of the grown film. Furthermore, the effect of depositing the Eu$_3$O$_4$ film on the graphene sheet was investigated by Raman spectroscopy measurements using a Renishaw InVia spectrometer (100$\\times$ objective, 10\\% laser power, spot size of $\\sim\\,1\\,\\mu$m, 0.5 s exposure time and a wavelength of 532 nm). 
However, the Au capping layer was selectively etched in KI\/I$_2$ solution before the measurements to eliminate the Au interference with the Raman measurements. The Si\/SiO$_2$\/graphene\/Eu$_3$O$_4$\/Au sample was cleaved into a $\\sim 2\\times 2$ mm square and placed in the etchant solution for 5 minutes at RT, rinsed with DI water twice, then with IPA and dried with dry N$_2$. Raman scans were then taken every $50\\,\\mu$m in a grid pattern over an area of $1000\\,\\mu$m $\\times1000\\,\\mu$m for the region coated with Eu$_3$O$_4$, and every $10\\,\\mu$m over an area of $140\\,\\mu$m $\\times140\\,\\mu$m for the bare graphene surface.\n\n\\section{Results and Discussion} \n\\subsection{X-ray Diffraction (XRD)}\n\\begin{figure}[t!]\n\\includegraphics[width=1.0\\textwidth]{XrayEu3O4monochromatic.jpg}\n\\caption{(a) The RT XRD scan of Si\/SiO$_2$\/Eu$_3$O$_4$(001)\/Au (black line) and Si\/SiO$_2$\/C\/Eu$_3$O$_4$(001)\/Au (red line) samples carried out between 20$^{\\circ}$ and 100$^{\\circ}$ using a monochromator and a 1D detector. The scan for the Si\/SiO$_2$\/C\/Eu$_3$O$_4$(001)\/Au sample is down-shifted by a factor of five for ease of comparison. (b) The XRR measurement of the Si\/SiO$_2$\/Eu$_3$O$_4$(001)\/Au sample (black line) and the corresponding fit (red line). The table lists the thickness, roughness and density of the deposited films as deduced from the fit.}\n\\label{Xray}\n\\end{figure}\nThe RT XRD scans (from 20$^{\\circ}$ to 100$^{\\circ}$) of the Eu$_3$O$_4$(001) films grown on the Si\/SiO$_2$ substrate and on graphene are shown in Figure~\\ref{Xray} (a). The XRD scans show highly textured Eu$_3$O$_4$(002) films, with no sign of other Eu-oxide phases or unreacted Eu within the detection limit of the set-up. Additional Eu$_3$O$_4$(004) and (008) peaks are observed in the scan of the Si\/SiO$_2$\/graphene\/Eu$_3$O$_4$\/Au sample, indicating that the underlying graphene layer improves the crystallinity of the Eu$_3$O$_4$ film. This is also confirmed by the smaller full-width at half-maximum (FWHM) of the Eu$_3$O$_4$ peaks of the Si\/SiO$_2$\/graphene\/Eu$_3$O$_4$\/Au sample compared to the Si\/SiO$_2$\/Eu$_3$O$_4$\/Au sample. Figure~\\ref{Xray} (b) shows the XRR scan and the corresponding fit for the Si\/SiO$_2$\/Eu$_3$O$_4$\/Au sample, whereas the deduced values for the thickness, density and roughness of the layers are listed in the inset table.\n\n\\subsection{SQUID}\nFigure~\\ref{SQUID} shows the field-cooled (FC) and zero field-cooled (ZFC) measurements of the Si\/SiO$_2$\/Eu$_3$O$_4$(001)\/Au and Si\/SiO$_2$\/graphene\/Eu$_3$O$_4$(001)\/Au samples, respectively. Both show a \\textit{T}\\textsubscript{C} of $\\sim 5.5 \\pm 0.1$ K, as can be deduced from the d\\textit{M}\/d\\textit{T} vs \\textit{T} curves (insets), which agrees with values reported in the literature \\cite{Holmes1966,Holmes1966a,Ahn2009a}. Therefore, care has to be taken to check for impurities of the Eu$_3$O$_4$ phase in EuO$_{1-x}$ thin films, which sometimes show a pronounced bump at \\textit{T}$<20$ K \\cite{Liu2012e,Liu2013}.\n\\begin{figure}[t!]\n\\includegraphics[width=1.0\\textwidth]{Eu3O4_SQUID_both_samples.jpg}\n\\caption{(a) The 20 Oe FC \\textit{M} vs \\textit{T} measurement for Si\/SiO$_2$\/Eu$_3$O$_4$\/Au. The inset presents the d\\textit{M}\/d\\textit{T} vs \\textit{T} plot used to determine the \\textit{T}\\textsubscript{C}. (b) The ZFC \\textit{M} vs \\textit{T} measurement for the Si\/SiO$_2$\/graphene\/Eu$_3$O$_4$\/Au sample. 
The inset shows the d\\textit{M}\/d\\textit{T} vs \\textit{T} graph used to deduce the \\textit{T}\\textsubscript{C} of the Eu$_3$O$_4$.}\n\\label{SQUID}\n\\end{figure}\n\\begin{figure}[t!]\n\\centering\n\\includegraphics[width=0.6\\textwidth]{ZFC_MvsH.jpg}\n\\caption{ZFC isothermal magnetisation hysteresis loops of the Si\/SiO$_2$\/graphene\/Eu$_3$O$_4$\/Au sample measured as a function of the temperature at 2 K, 5 K and 10 K. The inset highlights the virgin magnetisation curves at these temperatures.}\n\\label{SQUID2}\n\\end{figure}\n\nThe ZFC isothermal magnetisation measurements as a function of the applied magnetic field for the Si\/SiO$_2$\/graphene\/Eu$_3$O$_4$\/Au sample at 2 K, 5 K and 10 K are shown in Figure~\\ref{SQUID2}. The hysteresis curves show that the grown Eu$_3$O$_4$ films exhibit ferromagnetic behaviour with a coercive field of 22 Oe. The inset highlights the virgin magnetisation curves at these temperatures. Although the XRD scans (Figure~\\ref{Xray} (a)) and the \\textit{M} vs \\textit{T} measurements (Figure~\\ref{SQUID}) prove the growth of Eu$_3$O$_4$(001) thin films, surprisingly, the virgin \\textit{M}$-$\\textit{H} curves show no metamagnetic transition even with an applied in-plane magnetic field of 3 kOe, as was reported for crystal and powder Eu$_3$O$_4$ \\cite{Holmes1966a,Ahn2009a}. This could be attributed to strain from the substrate, which could be resolved by growing thicker films.\n\n\\subsection{XPS}\n\\begin{figure}[b!]\n\\includegraphics[width=1.0\\textwidth]{AtomicProfileSurvey.jpg}\n\\caption{(a) XPS etch profile of the Si\/SiO$_2$\/Eu$_3$O$_4$\/Au sample by Ar$^+$ plasma etching. The yellow shaded region highlights the area considered for the analysis of Eu cations, whereas the dashed vertical lines indicate $t=$210 s, 360 s, 450 s and 510 s. (b) The XPS survey spectra collected at \\textit{t}$=60$ s, \\textit{t}$=360$ s and \\textit{t}$=510$ s of the Si\/SiO$_2$\/Eu$_3$O$_4$\/Au highlighting the Au, C, Eu, Si and O peaks.}\n\\label{atomicProfile}\n\\end{figure}\n\\begin{figure}[b!]\n\\includegraphics[width=1.0\\textwidth]{XPS_Eu3d_merged.jpg}\n\\caption{The deconvoluted 3\\textit{d} XPS spectra of the Eu$_3$O$_4$ film on Si\/SiO$_2$ substrate using Al K$\\alpha$ source (1486.68 eV) after the Shirley background subtraction, measured at etching time (a) \\textit{t}=210 s, (b) \\textit{t}=360 s, (c) \\textit{t}=450 s and (d) \\textit{t}=510 s. The raw data (black line), fitting curve (red line), Eu$^{2+}$ 3\\textit{d} (blue shaded peaks), Eu$^{2+}$ multiplet satellites (yellow-shaded peaks), Eu$^{3+}$ 3\\textit{d} (green shaded peaks), Eu$^{3+}$ multiplet satellites (magenta shaded peaks) and plasmon excitations (grey shaded peaks).}\n\\label{XPS}\n\\end{figure}\nThe existence of mixed-valence Eu cations was investigated by performing depth-profile XPS scans while measuring the Eu 3\\textit{d} and 4\\textit{d} spectra simultaneously after Ar$^+$ plasma etching. Figure~\\ref{atomicProfile} (a) shows the XPS etch profile of the sample, whereas the XPS surveys collected at \\textit{t}$=$210 s, 360 s, 450 s and 510 s, highlighting the different detected elements, are shown in Figure~\\ref{atomicProfile} (b). The 4\\textit{d} XPS spectra have a complicated structure (not shown) due to the strong unfilled 4\\textit{f}$-$4\\textit{d} hole interaction, whereas the 3\\textit{d} states have a weaker multiplet splitting and broader photoexcitation cross-section. 
Therefore, the latter are usually used to analyse the Eu XPS spectra and obtain a better estimation of the Eu initial valence \\cite{Caspers2011c,EnJinCho1996,Mariscal2018,EnJinCho1995}. \n\nFigure~\\ref{XPS} (a) - (d) shows the Eu 3\\textit{d} XPS spectra after subtracting an optimised Shirley background, measured at \\textit{t}$=$210 s, 360 s, 450 s and 510 s. The peaks were deconvoluted using Gaussian-Lorentzian fitting, with the $\\chi^2$ value indicating the quality of the fit. Although the Eu$_3$O$_4$ layer was etched through completely, only these four scans were considered for the analysis of the Eu cation valency (the yellow shaded area of Figure~\\ref{atomicProfile} (a)) to minimise the effect of interdiffusion at the SiO$_2$\/Eu$_3$O$_4$ and Eu$_3$O$_4$\/Au interfaces and\nto increase the intensity of the Eu 3\\textit{d} and 4\\textit{d} peaks.\n\n\\begin{table}\n\\centering\n\\caption{The positions of the Eu$^{2+}$ and Eu$^{3+}$ 3\\textit{d} peaks, their FWHM and the positions of their corresponding multiplet satellites. The atomic ratios of Eu$^{2+}$ to Eu$^{3+}$ were deduced from the areas of the peaks. The table also lists the $\\chi^2$ values of the fits.}\n\\begin{tabular}{ P{2cm} P{1.3cm} P{1.2cm} P{2.8cm} P{1.3cm} P{1.2cm} P{2.8cm} }\n\\hline\n {Spectrum} & \\multicolumn{6}{c}{Eu$^{2+}$ (eV)} \\\\\n \\cline{2-7}\n & \\centering{3\\textit{d}$_{5\/2}$} & \\centering{FWHM} & \\centering{3\\textit{d}$_{5\/2}$ Satellites} & \\centering{3\\textit{d}$_{3\/2}$} & \\centering{FWHM} & 3\\textit{d}$_{3\/2}$ Satellites \\\\\n\\hline\n\\textit{t}$=210$ s & 1125.54 & \\centering{5.39} & \\centering{1131.26} & \\centering{1155.14} & \\centering{5.29} & 1160.16 \\\\\n\\textit{t}$=360$ s & 1124.88 & \\centering{5.17} & \\centering{1130.64} & 1154.75 & \\centering{5.76} & 1160.04 \\\\\n\\textit{t}$=450$ s & 1124.78 & \\centering{5.01} & \\centering{1131.19} & 1154.65 & \\centering{5.64} & 1159.96 \\\\\n\\textit{t}$=510$ s & 1124.56 & \\centering{4.58} & \\centering{1131.07} & 1154.21 & \\centering{5.36} & 1159.58 \\\\\n\n \\multirow{2}{2cm}{} & \\multicolumn{6}{c}{Eu$^{3+}$ (eV)} \\\\\n \\cline{2-7}\n & \\centering{3\\textit{d}$_{5\/2}$} & FWHM & \\centering{3\\textit{d}$_{5\/2}$ Satellites} & \\centering{3\\textit{d}$_{3\/2}$} & FWHM & 3\\textit{d}$_{3\/2}$ Satellites \\\\\n\\hline\n\\textit{t}$=210$ s & 1134.07 & \\centering{4.56} & \\centering{1142.39} & 1163.46 & \\centering{5.03} & 1166.92 \\\\\n\\textit{t}$=360$ s & 1133.49 & \\centering{4.69} & \\centering{1141.90} & 1162.96 & \\centering{4.85} & 1166.00 \\\\\n\\textit{t}$=450$ s & 1133.93 & \\centering{4.46} & \\centering{1142.29} & 1163.15 & \\centering{4.80} & 1166.00 \\\\\n\\textit{t}$=510$ s & 1133.83 & \\centering{4.56} & \\centering{1142.23} & 1163.15 & \\centering{5.38} & 1167.48 \\\\\n\\\\\n & \\multicolumn{2}{c}{\\centering{Atomic ratio 3\\textit{d}$_{5\/2}$}} & & \\multicolumn{2}{c}{\\centering{Atomic ratio 3\\textit{d}$_{3\/2}$}} & $\\chi^2$ \\\\\n \\cline{2-3} \\cline{5-6}\n & \\centering{Eu$^{2+}$} & \\centering{Eu$^{3+}$} & & \\centering{Eu$^{2+}$} & \\centering{Eu$^{3+}$} & \\\\\n \\hline\n\\textit{t}$=210$ s & \\centering{49.81} & \\centering{50.19} & & \\centering{21.39} & \\centering{78.61} & 9.30 \\\\\n\\textit{t}$=360$ s & \\centering{14.63} & \\centering{85.37} & & \\centering{26.84} & \\centering{73.16} & 11.30 \\\\\n\\textit{t}$=450$ s & \\centering{21.58} & \\centering{78.42} & & \\centering{31.61} & \\centering{68.39} & 18.20 \\\\\n\\textit{t}$=510$ s & \\centering{26.80} & \\centering{73.20} & & \\centering{31.96} & 
\\centering{68.04} & 20.70 \\\\\n\\cline {1-6}\nAverage & \\centering{28.20} & \\centering{71.80} & & \\centering{27.95} & \\centering{72.05} \\\\\n \\hline\n\\end{tabular}\n\\label{Table}\n\\end{table}\n\nAll spectra in Figure~\\ref{XPS} show the spin-orbit coupling (SOC) components, 3\\textit{d}$_{5\/2}$ and 3\\textit{d}$_{3\/2}$, for Eu$^{2+}$ and Eu$^{3+}$ separated by $\\Delta\\sim 29.5$ eV, which agrees with previously reported values \\cite{Mariscal2018,Caspers2011c,Orlowski2005}. They also show additional peaks at slightly higher binding energy (BE) than the SOC peaks for Eu$^{2+}$ and Eu$^{3+}$. These shake-up satellite peaks arise as a result of the multiplet structures of the 4\\textit{f}$^7-3$\\textit{d} hole in the final state \\cite{EnJinCho1995}. Furthermore, the fast 3\\textit{d} photoelectrons create plasmon excitation structures observed as broad peaks at BE $\\sim1146$ eV and $\\sim1170$ eV \\cite{Caspers2011c}. The XPS spectra shown in Figure~\\ref{XPS} prove the mixed-valency of the Eu cations, as they agree well with previous work reported for Eu$^{2+}$ and Eu$^{3+}$ \\cite{Caspers2011c,Mariscal2018,Ohno2002,Kim2019,Cho1999}. Moreover, the average atomic ratio of Eu$^{2+}$ to Eu$^{3+}$ in the 3\\textit{d}$_{5\/2}$ and 3\\textit{d}$_{3\/2}$ spectra, $\\sim\\,28:72$, is consistent with the values reported in Eu-doped ZnO \\cite{Lupan2013} and Eu-doped GaN nanowires \\cite{Faye2019a}. Table~\\ref{Table} summarises the positions of the Eu$^{2+}$ and Eu$^{3+}$ 3\\textit{d} peaks, their FWHM, their corresponding multiplet satellites, the Eu$^{2+}$\/Eu$^{3+}$ ratios and the $\\chi^2$ values of the fits for the four spectra. \n\n\\subsection{Raman Spectroscopy}\nRaman spectroscopy is a versatile and non-destructive technique widely used to study the structural and electronic properties of graphene \\cite{Wang2008,Childres2013}. A good-quality monolayer of graphene has two main characteristic Raman peaks: the \\textit{G} and 2\\textit{D} peaks at $\\sim\\,1582$ cm$^{-1}$ and $\\sim\\,2700$ cm$^{-1}$, respectively. It can also possess other disorder-induced peaks such as the \\textit{D} peak at $\\sim 1350$ cm$^{-1}$ \\cite{Allard2010,Malard2009,Shlimak2015a}. Therefore, the presence or absence of these peaks and the ratio of the intensity of the \\textit{D} peak to the intensity of the \\textit{G} peak (\\textit{I}$_D$\/\\textit{I}$_G$), which represents the defect density in the graphene structure, were used to assess the quality of our graphene underlayer \\cite{Aboljadayel2021}.\n\nFigure~\\ref{RamanOptical} (a) and (b) show optical microscope images, taken before etching the Au capping layer, highlighting different regions of the Si\/SiO$_2$\/graphene\/Eu$_3$O$_4$\/Au sample. The zoom-in image collected with a $\\times 100$ objective lens (Figure~\\ref{RamanOptical} (b)) shows that the graphene layer consists of a mixture of mono- and multilayer graphene domains rather than a continuous homogeneous monolayer, which could be either a result of the growth of the Eu$_3$O$_4$ film or of the pristine quality of the commercial graphene. Therefore, one would expect the presence of defect-induced peaks in the Raman scans \\cite{Aboljadayel2021}. \n\\begin{figure}[b!]\n\\centering\n\\includegraphics[width=0.75\\textwidth]{OpticalImages.jpg}\n\\caption{The optical microscope images of the Si\/SiO$_2$\/graphene\/Eu$_3$O$_4$\/Au sample highlighting the different regions of the sample. 
(a) the bare and coated areas of the Si\/SiO$_2$ substrate before the etching process, (b) a zoom-in view of the surface using a $\\times$100 objective lens showing the mixed domain structure of the graphene underlayer, (c) the graphene under the Eu$_3$O$_4$ layer before and (d) after etching the Au capping layer.}\n\\label{RamanOptical}\n\\end{figure}\n\nFigure~\\ref{RamanOptical} (c) and (d) show the optical microscope images of the graphene edge under the Eu$_3$O$_4$ film before and after removing the Au layer, respectively. No significant change in the contrast is observed between the two images, suggesting that the etching process did not remove or affect the graphene underlayer. \n\n\\begin{figure}[t!]\n\\centering\n\\includegraphics[width=1.0\\textwidth]{RamanScans.jpg}\n\\caption{RT Raman spectroscopy measurements for (a) the bare graphene sheet on the Si\/SiO$_2$ substrate and (b) graphene under the Eu$_3$O$_4$ film after etching the Au capping layer.} \n\\label{RamanScans}\n\\end{figure}\n\\begin{figure}[t!]\n\\centering\n\\includegraphics[width=1.0\\textwidth]{ID_IG_new.jpg}\n\\caption{The frequency distribution of the ratio of the intensity of the \\textit{D} peak to the intensity of the \\textit{G} peak (\\textit{I}$_D$\/\\textit{I}$_G$) for (a) the bare graphene and (b) the graphene under the Eu$_3$O$_4$ film. Raman spectra of the regions with no graphene signal were removed from the statistics.} \n\\label{ID_IG}\n\\end{figure}\nFour Raman scans taken at random positions on two different areas of the sample's surface (the bare graphene, and the graphene\/Eu$_3$O$_4$ region after removing the Au layer) are shown in Figure~\\ref{RamanScans} (a) and (b), respectively. The emergence of additional defect-induced peaks in the spectra of the graphene\/Eu$_3$O$_4$ area and the increase in their intensities (Figure~\\ref{RamanScans} (b)) compared to the scans of the bare graphene (Figure~\\ref{RamanScans} (a)) indicate that the growth of the Eu$_3$O$_4$ film increased the defect density in the graphene structure. This is also seen in the shift of the \\textit{I}$_D$\/\\textit{I}$_G$ ratio of the graphene under the Eu$_3$O$_4$ film towards higher values compared to the bare graphene sheet (Figure~\\ref{ID_IG}), since \\textit{I}$_D$\/\\textit{I}$_G$ is known to be small for low-defect-density graphene \\cite{Shlimak2015a,Cabrero-Vilatela2016a}. However, since the graphene underlayer is visible under the optical microscope (Figure~\\ref{RamanOptical}) and the characteristic Raman features of graphene are maintained after the growth of the Eu$_3$O$_4$ film (Figure~\\ref{RamanScans}), one would expect the graphene underlayer to retain its properties.\n\n\\section{Conclusion} \\label{conclusion}\nIn summary, we discussed the experimental work carried out to study the growth of Eu$_3$O$_4$ thin films by MBE on Si\/SiO$_2$ and on a graphene sheet. The structural and magnetic characterisations show the successful deposition of crystalline, highly-textured Eu$_3$O$_4$(001) films with a \\textit{T}\\textsubscript{C} of $\\sim 5.5\\pm0.1$ K. However, the films show no metamagnetic behaviour, which could be attributed to strain from the substrate. Furthermore, a qualitative analysis of the XPS scans confirms the mixed-valency of the Eu cations.\n\nRaman measurements show that the graphene layer retained its hexagonal lattice structure under the Eu$_3$O$_4$ film. Therefore, this study represents the first successful step towards integrating a Eu$_3$O$_4$ thin film with two widely used electronic substrates for future spintronics applications. 
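\n\nAs an illustrative aside to the Raman analysis in the previous section, the \\textit{I}$_D$\/\\textit{I}$_G$ statistics of Figure~\\ref{ID_IG} can be tabulated from the grid scans with a few lines of post-processing. The following Python sketch is ours and purely schematic; the \\texttt{spectra} array, the peak windows and the helper names are illustrative assumptions, not a description of the actual acquisition software:\n\\begin{verbatim}\nimport numpy as np\n\n# wn: wavenumber axis (cm^-1); spectra: (n_scans, n_wavenumbers) array\ndef peak_intensity(wn, spectrum, center, half_width=50):\n    window = (wn > center - half_width) & (wn < center + half_width)\n    return spectrum[window].max()\n\ndef id_ig_ratios(wn, spectra, threshold=0.0):\n    ratios = []\n    for s in spectra:\n        i_d = peak_intensity(wn, s, 1350)   # D peak, ~1350 cm^-1\n        i_g = peak_intensity(wn, s, 1582)   # G peak, ~1582 cm^-1\n        if i_g > threshold:                 # drop scans with no graphene signal\n            ratios.append(i_d \/ i_g)\n    return np.histogram(ratios, bins=20)    # frequency distribution\n\\end{verbatim}\n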
\n\n\\begin{acknowledgments}\nThe authors would like to acknowledge David Love and Pedro M S Monteiro for their support.\n\\end{acknowledgments}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Supplementary Material: Learning Embeddings from Knowledge Graphs With Numeric Edge Attributes}\n\n\\section{Preliminaries}\\label{sec:background} \n\n\\paragraph{Knowledge Graph.}\nA knowledge graph $\\mathcal{G}=\\{ (s,p,o)\\} \\subseteq \\mathcal{E} \\times \\mathcal{R} \\times \\mathcal{E}$ is a set of triples $t=(s,p,o)$, each including a subject $s \\in \\mathcal{E}$, a predicate $p \\in \\mathcal{R}$, and an object $o \\in \\mathcal{E}$. $\\mathcal{E}$ and $\\mathcal{R}$ are the sets of all entities and relation types of $\\mathcal{G}$.\n\n\n\\paragraph{Knowledge Graph with numeric-enriched triples.}\nIn a numeric-enriched knowledge graph $\\mathcal{G}$, each triple is assigned a numeric attribute $w \\in \\mathbb{R}$, leading to $\\mathcal{G}=\\{ t=(s,p,o,w)\\}$. It is worth noting that we are not tied to a specific semantics for the numeric attributes, as these numbers may encode importance, uncertainty, strength, etc. \nFor example, Figure~\\ref{fig:kg} assumes that numeric values indicate the importance of a link. The triple \\texttt{(MLEngineer, requiresSkill, Python, 0.98)} is therefore more important than \\texttt{(MLEngineer, requiresSkill, JavaScript, 0.13)}. \n\nNote that $w$ can be defined either at the predicate level or at the triple level. In this paper, we assume that $w$ is triple-specific. %\n\n\n\n\\paragraph{Knowledge Graph Embedding Models.}\nKnowledge graph embedding models (KGE) are neural architectures designed to predict missing links between entities. KGE models encode both entities $\\mathcal{E}$ and relations $\\mathcal{R}$ into low-dimensional, continuous vectors $\\in \\mathbb{R}^k$ (i.e., the embeddings). \nKnowledge graph embeddings are learned by training a neural architecture over a training knowledge graph: an input layer feeds training triples to an embedding lookup layer that retrieves the embeddings of entities and relations. \n\nA scoring layer $f(t)$ assigns plausibility scores to each triple. The scoring layer is designed to assign high scores to positive triples and low scores to negative corruptions. \nMost of the literature differs in the design rationale of $f(t)$.\nFor example, the scoring function of TransE~\\cite{bordes2013translating} computes a similarity between the embedding of the subject $\\mathbf{e}_{s}$ translated by the embedding of the predicate $\\mathbf{e}_{p}$ and the embedding of the object $\\mathbf{e}_{o}$. %\n\n\\textit{Corruptions} are synthetic negative triples generated by a corruption generation layer that follows the protocol proposed in~\\cite{bordes2013translating}:\nwe define a corruption of $t$ as $t^-=(s,p,o')$ or $t^-=(s',p,o)$ where $s', o'$ are respectively subject or object corruptions (i.e. other entities randomly selected from $\\mathcal{E}$). We generate synthetic negatives by corrupting one side of the triple at a time to comply with the local closed world assumption~\\cite{nickel2016review}.\n\nFinally, a loss layer optimizes the embeddings by maximizing the margin between positive triples $t$ and corruptions $t^-$. In other words, the goal of the optimization procedure is to learn optimal embeddings, such that at inference time the scoring function $f(t)$ assigns high scores to triples likely to be correct and low scores to triples unlikely to be true.
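\n\nAs a toy illustration of the training pipeline just described, the following Python sketch (ours, for exposition only; the array sizes and function names are assumptions rather than the AmpliGraph implementation) shows a TransE scoring layer and the corruption protocol of~\\cite{bordes2013translating}:\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\nn_entities, n_relations, k = 100, 10, 50   # toy sizes\nE = rng.normal(size=(n_entities, k))       # entity embeddings\nR = rng.normal(size=(n_relations, k))      # relation embeddings\n\ndef score_transe(s, p, o):\n    # f(t) = -|| e_s + r_p - e_o ||\n    return -np.linalg.norm(E[s] + R[p] - E[o])\n\ndef corruptions(s, p, o, eta):\n    # corrupt one side at a time (local closed world assumption)\n    out = []\n    for _ in range(eta):\n        e = rng.integers(n_entities)\n        out.append((e, p, o) if rng.random() < 0.5 else (s, p, e))\n    return out\n\\end{verbatim}\n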
\n\n\n\\textbf{Link Prediction.} \nThe task of predicting unseen triples in knowledge graphs is formalized in the literature as a learning to rank problem, where the objective is learning a scoring function $f(t=(s, p, o)): \\mathcal{E} \\times \\mathcal{R} \\times \\mathcal{E} \\rightarrow \\ensuremath{\\mathbb{R}}$ that given an input triple $t=(s,p,o)$ assigns a score $f(t) \\in \\ensuremath{\\mathbb{R}}$ proportional to the likelihood that the fact $t$ is true. \nSuch predictions are ranked against predictions from synthetic corruptions, to gauge how well the model tells positives from negatives.\n\n\\paragraph{Link Prediction with numeric-enriched triples.}\nIn this paper we predict probability estimates for unseen numeric-enhanced triples $t=(s, p, o, w)$. \nThe task is formalized as the same learning to rank problem described above for conventional link prediction.\n\n\n\\section{Conclusion}\\label{sec:conclusion}\nWe show that by plugging in an additional layer we can make a conventional KGE architecture aware of the numeric values associated to triples. This leads to models that better discriminate between high-valued and low-valued triples, regardless of the semantics of the numeric attributes, and without requiring additional out-of-band rules (unlike UKGE). \n\n\nFuture work will investigate the capability to predict numeric values associated to unseen triples. We will also extend our approach to support multiple numeric attributes associated to the same triple.\n\n\n\\section{Experiments}\\label{sec:eval}\nWe assess the predictive power of FocusE on the link prediction task with numeric-enriched triples. Experiments show that FocusE outperforms conventional KGE models and its closest direct competitor UKGE~\\cite{chen2019embedding} in discriminating low-valued triples from high-valued ones.\n\n\\textbf{Datasets.}\nWe experiment with three publicly available benchmark datasets originally proposed by~\\cite{chen2019embedding}, in which triples are associated to numeric values $w$ interpreted as uncertainty values. We also introduce a fourth dataset, where numeric values $w$ encode the \\textit{importance} of each link. \nAll datasets include triple-specific $w$ values.\nTable~\\ref{table:kgstats} shows the statistics of all the datasets used for the experiments.\n\n\n\\begin{itemize}\n \\item \\textbf{CN15K}~\\cite{chen2019embedding}. A subset of ConceptNet~\\cite{CN}, a common sense knowledge graph built to represent general human knowledge. Numeric values on triples represent uncertainty.\n \\item \\textbf{NL27K}~\\cite{chen2019embedding}. A subset of the Never-Ending Language Learning (NELL)~\\cite{mitchell2018never} dataset, which collects data from web pages. Numeric values on triples represent link uncertainty.\n \\item \\textbf{PPI5K}~\\cite{chen2019embedding}. A knowledge graph of protein-protein interactions~\\cite{PPI}. Numeric values represent the confidence of the link based on evidence from the scientific literature.\n\n \\item \\textbf{O*NET20K}\\footnote{\\texttt{\\url{https:\/\/docs.ampligraph.org\/en\/latest\/ampligraph.datasets.html}}}. \nWe introduce a subset of O*NET~\\footnote{\\texttt{\\url{https:\/\/www.onetonline.org\/}}}, a dataset that includes job descriptions, skills and labeled binary relations between such concepts. 
Each triple is labeled with a numeric value that indicates the \\textit{importance} of that link. \nUnlike the other datasets, we built a test set that includes a single predicate type connecting jobs to skills, under the assumption that we are interested in predicting only that type of link. This is done to mimic the single-target link prediction tasks adopted in applied knowledge discovery scenarios.\n\n\n\\end{itemize} \n\n\\begin{table}[t]\n \\centering\n \\footnotesize\n \\setlength{\\tabcolsep}{5pt}\n \\begin{tabular}{l cccc}\n \\toprule & O*NET20K & CN15K & NL27K & PPI5K\\\\ \n \\midrule\n Training & 461,932 & 204,984 & 149,100 & 230,929 \\\\ \n Validation$^\\ddagger$ & 138 & 3532 & 8161 & 1940 \\\\ \n \\multirow{2}{*}{\\shortstack[l]{Test \\\\(top 10\\%)$^*$}} & \\multirow{2}{*}{200} & \\multirow{2}{*}{1,929} & \\multirow{2}{*}{1,402} & \\multirow{2}{*}{2,172} \\\\ \n &&&& \\\\\n \\multirow{2}{*}{\\shortstack[l]{Test \\\\(bottom 10\\%)$^*$}} & \\multirow{2}{*}{200} & \\multirow{2}{*}{1,929} & \\multirow{2}{*}{1,402} & \\multirow{2}{*}{2,172} \\\\ \n &&&& \\\\\n\n Entities & 20,643 & 15,000 & 27,221 & 5,000 \\\\ \n Relations & 19 & 36 & 404 & 7 \\\\ \n \\bottomrule\n \\end{tabular}\n \\caption{Datasets used in experiments. ($^*$) test sets include either the top-10\\% valued triples or the bottom 10\\%. All $w$ are normalized, so that $w \\in [0-1]$. ($^\\ddagger$) validation sets only include high-valued triples where $w \\geq 0.8$.}\n \\label{table:kgstats}\n\\end{table}\n\n\\textbf{Implementation Details and Baselines.}\nFocusE and all baselines are implemented with the AmpliGraph library~\\cite{ampligraph} version 1.4.0, using TensorFlow 1.15.2 and Python 3.7. Code and experiments are available at \\texttt{\\url{https:\/\/github.com\/Accenture\/AmpliGraph}}.\nWe experiment with three popular KGE baselines: TransE, DistMult, ComplEx. For each baseline and for FocusE, we carried out an extensive grid search over the following ranges of hyperparameter values: embedding dimensionality $k=[200-600]$, with a step of 100; baseline losses=\\{negative log-likelihood, multiclass-NLL, self-adversarial\\}; synthetic negatives ratio $\\eta=\\{5, 10, 20, 30\\}$; learning rate$=\\{\\num{1e-3}, \\num{5e-3}, \\num{1e-4}\\}$; epochs$=[100-800]$, step of 100; L3 regularizer, with weight $\\gamma=\\{\\num{1e-1}, \\num{1e-2}, \\num{1e-3}\\}$.\nFor FocusE we also tuned the decay $\\lambda=[100-800]$, with increments of 100. \nThe best combinations for all experiments are reported in Appendix~\\ref{app:hyper}.\n\nFor experiments with UKGE we modified\\footnote{The original UKGE codebase does not perform a fair rank computation: the authors assign rank=1 when $t$ and $t^-$ have the same score, whereas we assign a worse (i.e. numerically larger) rank, as per agreed-upon practice in the community. For example, if we have $\\eta=1,000$ corruptions $t^-$ which are all assigned the same score as $t$, we assign $t$ a rank=1,000 whereas UKGE ranks it first. To guarantee a fair comparison, we aligned UKGE to our procedure.} the original codebase\\footnote{\\texttt{\\url{https:\/\/github.com\/stasl0217\/UKGE}}} provided by the authors to generate the learning to rank metrics required by our experimental protocol. We used the best hyperparameter configuration proposed by the authors. Note that we do not evaluate UKGE on O*NET20K, since the model requires additional logical rules, which are not available for this dataset.
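\n\nTo spell out the tie-aware ranking convention mentioned in the footnote above, the following minimal Python sketch computes the learning to rank metrics used throughout (our illustration only; the exact implementation lives in the AmpliGraph codebase and may differ in detail):\n\\begin{verbatim}\nimport numpy as np\n\ndef pessimistic_rank(pos_score, neg_scores):\n    # ties with corruptions push the positive down instead of\n    # granting it rank 1 (cf. the footnote on fair rank computation)\n    neg_scores = np.asarray(neg_scores)\n    return 1 + int(np.sum(neg_scores >= pos_score))\n\ndef mrr(ranks):\n    return float(np.mean([1.0 \/ r for r in ranks]))\n\ndef hits_at(ranks, n):\n    return float(np.mean([r <= n for r in ranks]))\n\\end{verbatim}\n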
\n\nAll experiments were run under Ubuntu 16.04 on an Intel Xeon Gold 6142, 64 GB, equipped with a Tesla V100 16GB.\n\n\n\\textbf{Evaluation protocol.}\nWe apply the standard evaluation protocol described in \\cite{bordes2013translating} to all datasets described above. We predict whether each triple $t=(s,p,o) \\in \\mathcal{T}$ is a positive fact, where $\\mathcal{T}$ is a disjoint held-out test set that includes only positive triples. \nWe cast the problem as a learning-to-rank task: for each $t=(s,p,o) \\in \\mathcal{T}$, we generate synthetic negatives $t^- \\in \\mathcal{N}_t$ by corrupting one side of the triple at a time (i.e. either the subject or the object). \nWe predict a score for each $t$ and all its negatives $t^- \\in \\mathcal{N}_t$. We then rank the only positive $t$ against all the negatives $\\mathcal{N}_t$ \\footnote{As per the standard protocol, each distinct entity is used twice (once as a subject corruption and once as an object corruption), leading to $\\sim$30k (CN15K), $\\sim$54k (NL27K), 10k (PPI5K) and $\\sim$40k (O*NET20K) synthetic negatives (Table~\\ref{table:kgstats}).}. \nWe report learning to rank metrics such as mean rank (MR), mean reciprocal rank (MRR), and Hits at $n$ (where $n=1, 10$) by filtering out spurious ground truth positives from the list of generated corruptions (i.e. ``filtered'' metrics).\n\n\n\\subsection{Predicting High-Valued Links}\\label{sec.top10}\nFirst, we assess how well FocusE predicts triples associated to \\textit{high} numeric values. This is an important task in many applicative scenarios: high numeric values usually imply low uncertainty, high strength or importance, thus leading to more valuable newly-discovered knowledge.\nTable~\\ref{table:high-p} reports, for each dataset, the best mean rank, MRR, and Hits@\\{1,10\\} computed over test sets that include only the triples associated to the top-10\\% numeric attributes. Our goal is learning to assign low ranks to test triples (rank=1 being the best outcome).\n\nResults show that FocusE brings better than or very similar MRR to traditional, numeric-unaware baselines:\non O*NET20K, FocusE increases MRR for all models, and it outperforms the best baseline by 14 base points (MRR marginally differs on CN15K, NL27K, and PPI5K).\n\nExperiments show that FocusE outperforms UKGE: on CN15K, MRR is 15 base points higher, 19 points higher on NL27K, and up to 30 points better on PPI5K (UKGE$_{rect}$ fails to provide actionable results\\footnote{Due to a problem in the UKGE codebase, UKGE's predictive power on this dataset is MRR=0, hence ``not actionable''.} on PPI5K). \nIt is worth mentioning that UKGE requires additional external rules, hence the absence of results for O*NET20K. 
FocusE achieves better predictive power, without requiring additional out-of-band rules.\n\n\\begin{table}\n \\centering\n \\footnotesize\n \\setlength\\tabcolsep{2pt}\n \\begin{tabular}{ll cccc}\n\n \\toprule \n & & \\multicolumn{4}{c}{\\textbf{MRR} (bottom 10\\%)} \\\\\n & & O*NET20K & CN15K & NL27K & PPI5K\\\\ \n \\midrule\n TransE & & .13 & .05 & .36 & .26 \\\\ \n DistMult & & .15 & .05 & .83 & .97 \\\\ \n ComplEx & & \\textbf{.12} & .08 & .81 & .95 \\\\ \n UKGE$_{rect}$ & & - & \\textbf{.02} & .28 & .00$^*$ \\\\ \n UKGE$_{logi}$ & & - & \\textbf{.02} & .36 & .56 \\\\\n \\multirow{3}{*}{\\textbf{FocusE}} \n & TransE & .14 & .04 & \\textbf{.26} & \\textbf{.08} \\\\ \n & DistMult & .15 & .05 & .70 & .72 \\\\ \n & ComplEx & .13 & .08 & .53 & .49 \\\\ \n \\bottomrule\n \\end{tabular}\n \\caption{Predictive power of FocusE on the 10\\% low-valued triples. Lower = better: low-valued triples should not rank highly. Best results in bold. ($^*$) UKGE$_{rect}$ fails to produce actionable results on PPI5K.}\n \\label{table:low-p}\n\\end{table}\n\n\n\\begin{table*}\n \\centering\n \\small\n\n \\begin{tabular}{ll @{\\extracolsep{8pt}} cccc cccc}\n & & \\multicolumn{3}{c}{\\textbf{O*NET20K} (top 10\\%)} & & & \\multicolumn{3}{c}{\\textbf{CN15K} (top 10\\%)} \\\\\n \\cline{3-6} \\cline{7-10}\n\n & & & & \\multicolumn{2}{c}{Hits@} & & & \\multicolumn{2}{c}{Hits@} \\\\\n \\cline{5-6} \\cline{9-10}\n\n & & MR & MRR & 1 & 10 & MR & MRR & 1 & 10 \\\\\n\n \\midrule\n\n \\multicolumn{2}{l}{TransE} \n & \\underline{8} & .37 & .00 & .88\n & \\underline{623} & .19 & .03 & \\textbf{.47} \\\\\n\n \\multicolumn{2}{l}{DistMult} \n & 13 & \\underline{.49} & \\underline{.33} & .82\n & 942 & .22 & .15 & .37 \\\\\n\n \\multicolumn{2}{l}{ComplEx} \n & 26 & .32 & .19 & .59\n & 863 & \\underline{.28} & \\textbf{.22} & .40 \\\\\n\n \\multicolumn{2}{l}{UKGE$_{rect}$} \n & - & - & - & -\n & 984 & .14 & .05 & .32 \\\\\n\n \\multicolumn{2}{l}{UKGE$_{logi}$} \n & - & - & - & - \n & 2214 & .13 & .08 & .22 \\\\\n \n \\midrule\n\n \\multirow{3}{*}{\\textbf{FocusE}}\n & TransE\n & \\textbf{4} & .41 & .00 & \\textbf{.94} \n & \\textbf{438} & .18 & .02 & \\textbf{.47} \\\\\n \n & DistMult\n & 102 & \\textbf{.63} & \\textbf{.52} & \\underline{.85} \n & 1552 & .24 & .17 & .37 \\\\\n \n & ComplEx\n & 9 & .46 & .23 & .83 \n & 840 & \\textbf{.29} & \\underline{.21} & \\underline{.45} \\\\\n\n \\end{tabular}\n \\caption*{}\n\n\n \\begin{tabular}{ll @{\\extracolsep{8pt}} cccc cccc}\n & & \\multicolumn{3}{c}{\\textbf{NL27K} (top 10\\%)} & & & \\multicolumn{3}{c}{\\textbf{PPI5K} (top 10\\%)} \\\\\n \\cline{3-6} \\cline{7-10}\n\n & & & & \\multicolumn{2}{c}{Hits@} & & & \\multicolumn{2}{c}{Hits@} \\\\\n \\cline{5-6} \\cline{9-10}\n\n & & MR & MRR & 1 & 10 & MR & MRR & 1 & 10 \\\\\n\n \\midrule\n\n \\multicolumn{2}{l}{TransE} \n & \\underline{91} & .56 & .38 & .86 \n & 6 & .42 & .00 & .94 \\\\\n\n \\multicolumn{2}{l}{DistMult} \n & 130 & .83 & .77 & \\underline{.93} \n & 6 & \\textbf{.97} & \\textbf{.96} & \\textbf{.99} \\\\\n\n \\multicolumn{2}{l}{ComplEx} \n & \\textbf{90} & \\textbf{.87} & \\textbf{.82} & \\textbf{.94} \n & 10 & \\textbf{.97} & \\underline{.95} & \\textbf{.99} \\\\\n \n \\multicolumn{2}{l}{UKGE$_{rect}$} \n\n & 169 & .61 & .50 & .80 \n & 4943 & .00 & .00 & .00 \\\\\n\n \\multicolumn{2}{l}{UKGE$_{logi}$} \n & 299 & .65 & .56 & .80\n & \\textbf{2} & .65 & .37 & \\textbf{.99} \\\\\n\n\n \\midrule\n\n \\multirow{3}{*}{\\textbf{FocusE}}\n & TransE\n & \\textbf{90} & .57 & .37 & .88 \n & \\underline{4} & .43 & $<.01$ & \\underline{.96} \\\\\n \n & DistMult\n & 140 & .82 & 
.77 & .92 \n & 16 & \\underline{.96} & .93 & \\textbf{.99} \\\\\n \n & ComplEx\n & 224 & \\underline{.84} & \\underline{.79} & .92 \n & 20 & .95 & .92 & \\textbf{.99} \\\\\n\n \\end{tabular}\n\n\n\\caption{Predicting high-valued links: ranking metrics computed on test sets that consist of the top-valued 10\\% triples. Filtered metrics. Best results in bold, second best underlined.}\n\\label{table:high-p}\n\\end{table*}\n\n\n\n\n\n\\begin{table*}\n \\centering\n \\small\n\n \\begin{tabular}{ll @{\\extracolsep{8pt}} ccc ccc}\n & & \\multicolumn{3}{c}{\\textbf{O*NET20K}} & \\multicolumn{3}{c}{\\textbf{CN15K}} \\\\\n \\cline{3-5} \\cline{6-8}\n\n & & \\multirow{2}{*}{\\shortstack[c]{MRR\\\\(top 10\\%)}} & \\multirow{2}{*}{\\shortstack[c]{MRR\\\\(bottom 10\\%)}} & \\multirow{2}{*}{$\\Delta$MRR} \n & \\multirow{2}{*}{\\shortstack[c]{MRR\\\\(top 10\\%)}} & \\multirow{2}{*}{\\shortstack[c]{MRR\\\\(bottom 10\\%)}} & \\multirow{2}{*}{$\\Delta$MRR}\\\\ \n &&&& \\\\\n\\midrule\n \\multicolumn{2}{l}{TransE} \n & .37 & .13 & .24 \n & .19 & .05 & \\textbf{.14} \\\\\n\n \\multicolumn{2}{l}{\\textbf{FocusE} TransE} \n & .41 & .14 & \\textbf{.27 (+13\\%)} \n & .18 & .04 & \\textbf{.14} \\\\\n \n \\midrule\n\n \\multicolumn{2}{l}{DistMult} \n & .49 & .15 & .34 \n & .22 & .05 & .17 \\\\\n \n \\multicolumn{2}{l}{\\textbf{FocusE} DistMult} \n & .63 & .15 & \\textbf{.48 (+41\\%)} \n & .24 & .05 & \\textbf{.19 (+12\\%)} \\\\\n \n \\midrule\n\n \\multicolumn{2}{l}{ComplEx} \n & .32 & .12 & .20 \n & .28 & .08 & .20 \\\\\n \n \\multicolumn{2}{l}{\\textbf{FocusE} ComplEx} \n & .46 & .13 & \\textbf{.33 (+65\\%)} \n & .29 & .08 & \\textbf{.21 (+5\\%)} \\\\\n \n\n \\midrule\n\n \\multicolumn{2}{l}{UKGE$_{rect}$} \n & - & - & - \n & .14 & .02 & .12 \\\\\n\n \\multicolumn{2}{l}{UKGE$_{logi}$} \n & - & - & - \n & .13 & .02 & .11 \\\\\n \n \\multicolumn{2}{l}{\\textbf{FocusE} ComplEx} \n & .46 & .13 & .33 \n & .29 & .08 & \\textbf{.21 (+75\\%)} \\\\\n \n\n\n\n\n \\end{tabular}\n \\caption*{}\n \n \\begin{tabular}{ll @{\\extracolsep{8pt}} ccc ccc}\n & & \\multicolumn{3}{c}{\\textbf{NL27K}} & \\multicolumn{3}{c}{\\textbf{PPI5K}} \\\\\n \\cline{3-5} \\cline{6-8}\n\n & & \\multirow{2}{*}{\\shortstack[c]{MRR\\\\(top 10\\%)}} & \\multirow{2}{*}{\\shortstack[c]{MRR\\\\(bottom 10\\%)}} & \\multirow{2}{*}{$\\Delta$MRR} \n & \\multirow{2}{*}{\\shortstack[c]{MRR\\\\(top 10\\%)}} & \\multirow{2}{*}{\\shortstack[c]{MRR\\\\(bottom 10\\%)}} & \\multirow{2}{*}{$\\Delta$MRR}\\\\ \n &&&& \\\\\n\\midrule\n \\multicolumn{2}{l}{TransE} \n & .56 & .36 & .20 \n & .42 & .26 & .16 \\\\\n \n \\multicolumn{2}{l}{\\textbf{FocusE} TransE} \n & .57 & .26 & \\textbf{.31 (+55\\%)}\n & .43 & .08 & \\textbf{.35 (+119\\%)} \\\\\n \n \\midrule\n \n \\multicolumn{2}{l}{DistMult} \n & .83 & .83 & $<.01$ \n & .97 & .97 & $<.01$ \\\\\n \n \\multicolumn{2}{l}{\\textbf{FocusE} DistMult} \n & .82 & .70 & \\textbf{.12 (+1,100\\%)} \n & .96 & .72 & \\textbf{.24 (+2,300\\%)} \\\\\n \n \\midrule\n \n \\multicolumn{2}{l}{ComplEx} \n & .87 & .81 & .06 \n & .97 & .95 & .02 \\\\\n \n \\multicolumn{2}{l}{\\textbf{FocusE} ComplEx} \n & .84 & .53 & \\textbf{.31 (+417\\%)} \n & .95 & .49 & \\textbf{.46 (+2,200\\%)} \\\\\n \n\n \\midrule\n\n \\multicolumn{2}{l}{UKGE$_{rect}$} \n & .61 & .28 & \\textbf{.33} \n & .00 & .00 & .00 \\\\\n\n \\multicolumn{2}{l}{UKGE$_{logi}$} \n & .65 & .36 & .29 \n & .65 & .56 & .09 \\\\\n \n \\multicolumn{2}{l}{\\textbf{FocusE} ComplEx} \n & .84 & .53 & .31 (-6\\%) \n & .95 & .49 & \\textbf{.46 (+411\\%)} \\\\\n \n 
\\end{tabular}\n\n\n\\caption{High-valued vs low-valued triples discriminative power: FocusE brings larger differences in MRR across the board, showing better capabilities at correctly ranking the top-10\\% high-valued triples vs the bottom 10\\%. Best results in bold.}\n\n\\label{table:deltaMRR}\n\\end{table*}\n\n\n\\subsection{Discriminating High and Low-Valued Links}\nThe primary goal of FocusE is achieving a clear separation between the scores assigned to high and low-valued triples. %\nTo assess how well FocusE can tell high-valued links from low-valued ones, we look at two complementary tasks: how the model performs on test triples with high values (top 10\\%) and on triples with low values (bottom 10\\%). We collect the respective MRR results and compute the difference $\\Delta$MRR. The higher the difference, the better. \nNote that this task differs from what is traditionally done by KGE models, which are designed to tell positives from synthetic negatives only, and are thus not able to discriminate between low and high-valued triples.\n\n\nThis shortcoming of prior art is evident from the results in Table~\\ref{table:deltaMRR}. Models trained using FocusE have a much wider $\\Delta$MRR compared to traditional baselines. \nUsing the numeric values of training triples, FocusE better discriminates triples with higher values in the test set from the ones with lower values (see also Table~\\ref{table:low-p} for details on the bottom 10\\% links). \nIn particular, on O*NET20K, NL27K and PPI5K, FocusE models perform markedly better, with $\\Delta$MRR consistently higher than traditional KGE baselines. \nFocusE also outperforms UKGE by a large margin (75\\%) on CN15K. It provides a large margin of 40 base points on PPI5K as well, i.e. a 411\\% increase (UKGE$_{rect}$ fails to provide actionable results on PPI5K).\n\nAppendix \\ref{app:violin} shows additional evidence of FocusE outperforming the baselines, by comparing the distributions of scores (logits).\n\n \n\n\n\\subsection{Decay Impact}\\label{sec:decay}\n\nThe structural influence $\\beta$ affects the impact of the numeric values $w$. If $\\beta=1$, FocusE falls back to the underlying KGE model. If $\\beta=0$, the original numeric values $w$ have maximum impact throughout the entire training. \nDuring training we decay $\\beta$ linearly, until it reaches 0. \n\nIn this experiment we vary the decay $\\lambda$, while freezing the other hyperparameters ($k = 300$, $\\eta=30$, NLL of normalized softmax scores loss (Equation~\\ref{eq:loss}), Adam optimizer with learning rate \\num{1e-3}, L3 regularizer with weight $\\gamma=\\num{1e-2}$). \nWe choose different decay epoch values $\\lambda=\\{0, 100, 200, 300, 400, 500, 600, 700, 800\\}$.\nWe report changes in MRR on NL27K's top 10\\% test set by experimenting on TransE, DistMult, and ComplEx. \nFigure~\\ref{fig:decay} shows that performance improves as $\\lambda$ increases. In most cases model performance saturates for $\\lambda > 400$ epochs. %\n\n\n\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=.7\\columnwidth]{img\/decay_impact.png}\n\\caption{Impact of $\\lambda$, the decay (in epochs) of the structural influence $\\beta$, on model performance for the NL27K dataset.}\n\\label{fig:decay}\n\\end{figure}\n\n\n\n\n\n\\section{Introduction}\\label{sec:intro}\n\nKnowledge graphs are graph-based knowledge bases whose facts are modeled as labeled, directed edges between entities. 
Whether it is a social network, a bioinformatics dataset, or retail purchase data, modelling knowledge as a graph lets organizations capture patterns that would otherwise be overlooked. Research has led to broad-scope graphs such as DBpedia~\\cite{auer2007dbpedia}, WordNet, and YAGO~\\cite{suchanek2007yago}. Countless domain-specific knowledge graphs have also been published on the web, giving birth to the so-called Web of Data~\\cite{bizer2011linked}.\n\nKnowledge graph embeddings (KGE) are a family of graph representation learning methods that learn vector representations of the nodes and edges of a knowledge graph. They are applied to graph completion, knowledge discovery, entity resolution, and link-based clustering, to cite a few applications~\\cite{nickel2016review}. \n\n\\begin{figure}\n\\centering\n\\includegraphics[width=.7\\columnwidth]{img\/kg_eg.png}\n\\caption{A knowledge graph with numeric attributes associated to triples.}\n\\label{fig:kg}\n\\end{figure}\n\n\n\n\n\nIn \\textit{multimodal} knowledge graphs, entity nodes are associated to attributes from different data modalities, such as numbers, text, images. \nIn this paper we deal with a specific flavor of multimodal knowledge graphs, that is, graphs with numeric-enriched edges, either at the predicate level (i.e. a number assigned to each predicate \\textit{type}) or specific to each triple (this latter case is shown in Figure~\\ref{fig:kg}).\nNumeric-enriched triples have a prominent role in a growing number of applicative scenarios, from protein networks to workforce knowledge bases. Such numeric values may represent edge uncertainty (i.e. uncertain knowledge graphs) as in ConceptNet~\\cite{speer2017conceptnet}; trust in the automated or semi-automated data extraction workflow used to build the graph~\\cite{mitchell2018never}; out-of-band knowledge from wet lab experiments~\\cite{PPI}; or edge importance and link strength.\n\nA number of works in the knowledge graph representation learning literature support multimodal information and leverage numeric values associated to node entities to generate better embeddings for enhanced link prediction~\\cite{garcia2017kblrn,kristiadi2019incorporating,wu2018knowledge}.\nNevertheless, such models are not designed to learn from numeric values associated to the \\textit{edges} of a knowledge graph. With the notable exception of \\cite{chen2019embedding}, which however is designed for uncertain graphs only, support for numeric edge attributes is to date still an under-researched direction.\n\nIn this work, we focus on the task of predicting probability estimates of missing links in knowledge graphs with numeric-enhanced triples. We claim that a model must take into account such numeric literals. Regardless of their semantics, we operate under the assumption that such numeric values intensify or mitigate the probability of existence of a link. That implies treating triples with low numeric values as ``pseudo-negatives''. Traditional embedding models put low-valued and high-valued triples on the same level. However, in many real-world scenarios it is important to make such a distinction (e.g. predicting interactions in protein networks~\\cite{PPI}).\n\n\n\nWe propose FocusE, an add-on layer for knowledge graph embeddings to enhance link prediction with edge-related numeric literals. FocusE works with any existing KGE model that adopts the standard negatives generation protocol~\\cite{bordes2013translating}. 
\nWe use edge numeric literals to modulate the margin between the scores of true triples and those of their corresponding negative corruptions.\nInspired by Focal Loss~\\cite{Lin2017}, which focuses training on a sparse set of hard examples by modulating the loss function, we leverage numeric literals to ``focus'' traditional KGE models on triples with higher numeric values.\n\nExperiments show that models trained using FocusE outperform numeric-unaware baselines, in particular in discriminating triples with high-numeric attributes from those associated to low values.\n\n\n\n\n\\section{FocusE}\\label{sec:method}\n\nWe present FocusE, an add-on layer for knowledge graph embedding architectures designed for link prediction with numeric-enriched triples.\nFocusE takes into account the numeric literals associated to each link. Regardless of their semantics, we operate under the assumption that numeric values intensify or mitigate the probability of existence of a link.\nFor example, given numeric values $w$ in the $[0-1]$ range, we assume that high values identify triples with higher chances of being true, low values single out weak or unlikely relations, and $w=0$ triples are considered negative samples. \n\nFocusE consists of a plug-in layer that fits between the scoring and loss layers of a conventional KGE method and is designed to be used during training (Figure~\\ref{fig:arch}).\nUnlike traditional architectures, before feeding the output of the scoring layer to the loss function, we modulate it based on the numeric values associated to triples, to obtain ``focused'' scores. \nWe leverage the numeric values associated to triples so that during training the model focuses on triples with higher numeric values. \nWe want our model to learn from training triples with high numeric values, and at the same time use edge numeric values to maximise the margin between the scores assigned to true triples and those assigned to their corruptions. This increases the loss of the model and helps it focus on triples with higher values.\nOur contribution is described below in detail.\n\n\nLet $t=(s,p,o)$ be a positive triple and $w>0$ its numeric value. We define a corruption of $t$ as $t^-=(s,p,o')$ or $t^-=(s',p,o)$, where $s', o'$ are respectively subject or object corruptions.\n\nLet $f(t)$ be the scoring function of a KGE model. 
In the case of TransE~\\cite{bordes2013translating} this is:\n\n\\begin{equation}\nf(t) = -||\\mathbf{e}_{s} + \\mathbf{r}_{p} - \\mathbf{e}_{o}||_n\n\\end{equation}\n\nwhere $\\mathbf{e}_{s}$, $\\mathbf{r}_{p}$ and $\\mathbf{e}_{o}$ are the embeddings of the subject $s$, predicate $p$, and object $o$.\n\nWe use a softplus non-linearity $\\sigma$ to make sure the scores returned by $f(t)$ are greater than or equal to zero, without introducing excessive distortion: %\n\n\\begin{equation}\ng(t) = \\sigma(f(t))= \\ln{(1+e^{f(t)})} \\geq 0\n\\end{equation}\n\n\nTo take into account the effect of the numeric values associated to triples, we define a modulating factor $\\alpha \\in \\mathbb{R}$ which is responsible for striking a balance between the influence of the structure of the graph and the impact of the numeric values associated to each triple:\n\n\\begin{equation}\n\\alpha =\n\\begin{cases}\n \\beta + (1-w)(1-\\beta) & \\mbox{if \\,} t \\\\\n \\beta + w(1-\\beta) & \\mbox{if \\,} t^-\n\\end{cases}\n\\label{eq:alpha}\n\\end{equation}\n\nwhere $\\beta \\in [0,1]$ is the \\textit{structural influence}, a hyperparameter that modulates the influence of the graph topology, and $w \\in \\mathbb{R}$ is the numeric value associated to the \\textit{positive} triple $t$. \n$\\beta$ is used to re-weight the triple value $w$. If $\\beta=0$ the original numeric values $w$ are used. If $\\beta=1$, the numeric values $w$ are ignored and the model is equivalent to a conventional KGE architecture.\nNote that positive and negative triples are assigned different $\\alpha$ equations.\nThis is done to lower the margin between the scores of triples and those of their respective corruptions when the triple numeric value is high. \n\nFinally, the FocusE layer $h(t)$ is defined as:\n\n\\begin{equation}\nh(t) = \\alpha \\, g(t)\n\\label{eq:ht}\n\\end{equation}\n\nPutting it all together, the FocusE layer $h(t)$ is then used in the loss function\\footnote{Note that the FocusE layer described in Eq.~\\ref{eq:ht} is compatible with other losses used in the KGE literature (e.g. pairwise, negative log-likelihood, etc.)} $L$. \nThis is a modified, more numerically stable version of the negative log-likelihood of normalized softmax scores proposed in \\cite{kadlec2017knowledge}:\n\n\\begin{equation}\nL = - \\sum_{t^+,t^-} \\log \\frac{e^{h(t^+)}}{e^{h(t^+)} + e^{h(t^-)}} \\label{eq:loss}\n\\end{equation}\n\nAs seen in \\eqref{eq:loss}, the modulation between structural influence and numeric values increases the margin between high-valued and low-valued triples. Hence the model learns to focus on triples with higher numeric attributes. \n\nDuring training, we decay the structural influence $\\beta$ from 1 to 0. This is controlled by the hyperparameter $\\lambda$ (decay). Initially the model gives equal importance to all training triples. When $\\beta$ decays to zero (linearly, over $\\lambda$ epochs), the model relies exclusively on the numeric attributes ($w$).\nSection~\\ref{sec:decay} shows empirical evidence of the effectiveness of $\\lambda$.
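\n\nFor concreteness, the following NumPy sketch (ours, purely illustrative; it is not the AmpliGraph implementation) wires together Equations~\\eqref{eq:alpha}, \\eqref{eq:ht} and \\eqref{eq:loss} with the linear decay of $\\beta$:\n\\begin{verbatim}\nimport numpy as np\n\ndef softplus(x):\n    return np.log1p(np.exp(x))               # g(t) = ln(1 + e^{f(t)})\n\ndef focuse_layer(f_pos, f_neg, w, beta):\n    # eq:alpha: positives are weighted by (1 - w), corruptions by w,\n    # so a high-valued triple gets a smaller margin and a larger loss\n    alpha_pos = beta + (1.0 - w) * (1.0 - beta)\n    alpha_neg = beta + w * (1.0 - beta)\n    # eq:ht: focused scores\n    return alpha_pos * softplus(f_pos), alpha_neg * softplus(f_neg)\n\ndef nll_loss(h_pos, h_neg):\n    # eq:loss, written for a single positive and one corruption\n    return -np.log(np.exp(h_pos) \/ (np.exp(h_pos) + np.exp(h_neg)))\n\ndef beta_schedule(epoch, lam):\n    # linear decay of the structural influence from 1 to 0 over lam epochs\n    return max(0.0, 1.0 - epoch \/ lam) if lam > 0 else 0.0\n\\end{verbatim}\n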
\n\n\n\\section{Related Work}\\label{sec:relatedwork}\n\n\\textbf{Knowledge Graph Embeddings.} Although a comprehensive survey is outside the scope of this work (recent surveys provide a good coverage of the landscape~\\cite{bianchi2020knowledge}), it is worth listing the most popular knowledge graph embedding models proposed to date.\nTransE~\\cite{bordes2013translating} is the forerunner of distance-based models, and inspired a number of models commonly referred to as TransX.\nThe symmetric bilinear-diagonal model DistMult~\\cite{yang2014embedding} paved the way for its asymmetric evolutions in the complex space, ComplEx~\\cite{trouillon2016complex} and RotatE~\\cite{sun2019rotate}.\nSome models such as RESCAL~\\cite{nickel2011three}, TuckER~\\cite{balavzevic2019tucker}, and SimplE~\\cite{kazemi_simplE} rely on different tensor decomposition techniques. \nModels such as ConvE~\\cite{DBLP:conf\/aaai\/DettmersMS018} or ConvKB~\\cite{nguyen2018novel} leverage convolutional layers. \nAttention is used by~\\cite{nathani2019learning}. \n\n\\textbf{Numeric-aware models (node attributes).}\nNone of the models listed above leverage numeric attributes of any kind. \nHowever, a number of recent works support multimodal knowledge graphs and learn from numeric values associated to node entities.\nLiteralE enriches node embeddings with numeric information before scoring the triples~\\cite{kristiadi2019incorporating}. KBLRN combines latent, relational and numeric features using a product-of-experts model~\\cite{garcia2017kblrn}. TransEA jointly trains a vanilla structural model that uses TransE scoring and an attribute model that regresses over the attribute values of attributed triples~\\cite{wu2018knowledge}. \nNevertheless, such models are not designed to learn from numeric values associated to \\textit{edges}. \n\n\\textbf{Numeric-aware models (edge attributes).}\nTo the best of our knowledge\\footnote{We limit ourselves to the knowledge graph embedding literature and to knowledge graphs, i.e. directed, labeled, multi graphs. Other numeric-aware models such as graph neural networks that operate on different data modalities are out of the scope of this paper (they are designed for tasks other than link prediction, most of them do not support multi-relational graphs, and they cannot currently be applied at the scale at which KGE models operate). \n}, the only work designed to deal with numeric edge attributes is UKGE~\\cite{chen2019embedding}. \nUKGE generates confidence scores for known triples by squashing numeric values into the $[0-1]$ interval. \nIt then uses probabilistic soft logic~\\cite{kimmig2012short} to predict probability estimates for unseen triples, by jointly training a model to regress over the confidence values. A limitation of this approach is that out-of-band logical rules are required as additional input. %\nIt is also worth noting that UKGE's design rationale aims at supporting uncertain knowledge graphs, i.e. graphs whose edge numeric values represent uncertainty. 
In this paper we aim at supporting generic numeric values, regardless of their semantics (uncertainty, strength, importance, etc.).\nMoreover, we claim that triples with high numeric values should have a higher impact, and that this should be explicitly modeled in the scoring or loss computation.\n\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=\\textwidth]{img\/FocusEArchitecture.png}\n\\caption{A knowledge graph embedding model architecture enhanced with FocusE. The add-on acts as an intermediate layer between the traditional scoring layer and the loss.}\n\\label{fig:arch}\n\\end{figure*}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Recollection of the canonical odd Laplacian}\n\nIn this section we review the construction of the odd Laplacian on\nhalf-densities due to~\\cite{hov:max}. See also~\\cite{hov:semi,\nhov:proclms} and \\cite{tv:laplace1}.\n\nLet $M$ be a supermanifold endowed with an odd symplectic structure,\ngiven by an odd $2$-form $\\omega$. We shall refer to such supermanifolds\nas \\textit{odd symplectic manifolds}. (We always skip the prefix\n`super-' unless required to avoid confusion.) Later we shall discuss\nthe more general case of an odd Poisson manifold. A brief definition\nof the odd Laplacian acting on half-densities on $M$ follows.\n\nConsider a cover of $M$ by Darboux charts, in which the symplectic\nform takes the canonical expression $\\omega=dx^id{\\xi}_i$. Here $x^i$,\n${\\xi}_i$ are canonically conjugate variables of opposite parity. We\nassume that the $x^i$ are even; hence the ${\\xi}_i$, odd. Let $Dy$, for\nany kind of variables $y$, stand for the Berezin volume element.\nThen half-densities on $M$ locally look like\n$\\sigma=s(x,{\\xi})(D(x,{\\xi}))^{1\/2}$. (Notice that we skip questions related\nto orientation.) We set\n\\begin{equation}\\label{eq:delta}\n \\Delta \\sigma:=\\dder{s}{x^i}{{\\xi}_i}\\,\\bigl(D(x,{\\xi})\\bigr)^{1\/2},\n\\end{equation}\nin Darboux coordinates, and call $\\Delta$ the \\textit{canonical odd\nLaplacian on half-densities}.\n\nThe simplicity of formula~\\eqref{eq:delta} is very deceptive. The\nexpression $\\der{}{x^i}\\der{f}{{\\xi}_i}$ was originally suggested by\nBatalin and Vilkovisky, and is the famous `BV operator'. However,\nthe trouble is that it is not well-defined on functions (actually,\non any objects) unless we fix a volume form, which should therefore\nenter the definition. The geometrically invariant construction for\nfunctions, using a volume form, was first given\nin~\\cite{hov:deltabest}. \\textit{There is no canonical volume form\non an odd symplectic manifold} (unlike even symplectic manifolds,\nwhich enjoy the Liouville form). In particular, the coordinate volume\nform $D(x,{\\xi})$ for Darboux coordinates is \\textbf{not} preserved by\nthe (canonical) coordinate transformations (see later). Hence the\ninvariance of the operator $\\Delta$ given by~\\eqref{eq:delta} is a deep\ngeometric fact.\n\nAs we showed in~\\cite{tv:laplace1}, on any odd Poisson (in\nparticular, odd symplectic) manifold there is a natural\n\\textit{master groupoid} of `changes of volume forms' $\\r\\mapsto\ne^S\\r$ satisfying the master equation $\\Delta_{\\r}e^{S\/2}=0$ (note $1\/2$\nin the exponent; without it there would be no groupoid). Here\n$\\Delta_{\\r}$ is the odd Laplacian on functions with respect to the\ngiven volume form $\\r$. 
It is defined by $\\Delta_{\\r}f:=\\div_{\\r}X_f$,\nwhere $X_f$ is the Hamiltonian vector field corresponding to $f$.\n(See~\\cite{hov:deltabest}; note also~\\cite{yvette:divergence} for\nanother approach.) In a similar way one can define the odd Laplacian\non densities of any weight\n--- again, depending on a chosen volume form. Now, half-densities are\ndistinguished from densities of other weights precisely by the fact\nthat for them the corresponding odd Laplacian depends only on\nthe orbit of a volume form with respect to the action of the above\ngroupoid~\\cite{tv:laplace1}. It turns out that on an odd symplectic\nmanifold, all Darboux coordinate volume forms belong to the same\norbit of the master groupoid. We can regard it as a `preferred\norbit'; hence, in the absence of an invariant volume form, the odd\nLaplacian on half-densities defined by an arbitrary Darboux\ncoordinate volume form is invariant. It is just~\\eqref{eq:delta}.\n\n\\section{Homological interpretation of the odd Laplacian}\n\nNow we are going to approach $\\Delta$ on half-densities from a very\ndifferent angle.\n\nLet $\\Omega(M)$ be the space of all pseudodifferential forms on $M$,\ni.e., functions on $\\P TM$. (As usual, $\\P$ stands for the parity\nreversion functor on vector spaces, vector bundles, etc.) In\ncoordinates such functions have the form $s=s(x,{\\xi},dx,d{\\xi})$, where\nthe differentials of coordinates are commuting variables of parity\nopposite to that of the respective coordinate. In our case $dx^i$\nare odd and $d{\\xi}_i$ are even. We do not assume that the functions\n$s(x,{\\xi},dx,d{\\xi})$ are polynomial in $d{\\xi}_i$. Of course they are\n(Grassmann) polynomial in $dx^i$, because these variables are odd.\n\n\n\nConsider the odd symplectic form $\\omega$. Since $\\omega^2=0$,\nmultiplication by $\\omega$ can be regarded as a differential. Define\nthe operator $D=d+\\omega$, where $d$ is the de Rham differential. Since\n$d\\omega=0$, it follows that $D^2=0$ and we have a `double complex'\n$\\bigl(\\Omega(M),D=\\omega+d\\bigr)$. \\textbf{Warning:} here a\n\\textit{complex} means just a ${\\mathbb Z_{2}}$-graded object.\n\nThe reader should bear in mind that since $\\omega=d\\Theta$ for some even\n$1$-form $\\Theta$, which is true globally, we have\n$D=e^{-\\Theta}\\circ d \\circ e^{\\Theta}$, and multiplication by\nthe inhomogeneous differential form $e^{\\Theta}$ sets up an isomorphism\nbetween the complexes $\\bigl(\\Omega(M), D\\bigr)$ and $\\bigl(\\Omega(M),\nd\\bigr)$. It follows that $H(\\Omega(M), D)$ is isomorphic to $H(\\Omega(M),\nd)$, which is just the de Rham cohomology of the underlying manifold\n$M_0$. (Note that the isomorphism $e^{\\Theta}$ preserves only\nparity, but not ${\\mathbb Z}$-grading, even if we restrict it to\n{differential} forms on $M$, i.e., polynomials in $dx,d{\\xi}$.)\n\n\nThe operator $D=\\omega+d$ was introduced in~\\cite{severa:originbv}. The\nidea was to consider the spectral sequence for $\\bigl(\\Omega(M),D\\bigr)$\nregarded as a double complex. We shall follow it in a form that best\nsuits our purposes and is slightly different\nfrom~\\cite{severa:originbv}. 
(In particular, we do not assume\na grading in the space of forms.)\n\nAlthough there is no ${\\mathbb Z}$-grading present, single or double, one\ncan still develop the machinery of spectral sequences as follows.\n\nWe define linear relations (see~\\cite{maclane:homology}) on $\\Omega(M)$:\n\\begin{equation*}\n \\partial_0:=\\bigl\\{(\\a,\\b) \\in \\Omega(M)\\times \\Omega(M)\\ |\\ \\omega\\a=\\b\\bigr\\},\n\\end{equation*}\nand\n\\begin{multline*}\n \\partial_r:=\\bigl\\{(\\a,\\b) \\in \\Omega(M)\\times \\Omega(M)\\ |\\ \\exists \\a_1,\\ldots, \\a_r\\in \\Omega(M) \\ :\n \\ \\omega\\a=0, \\\\\n d\\a+ \\omega\\a_1=0, \\ \\ldots,\n \\ d\\a_{r-2}+ \\omega\\a_{r-1}=0, \\ d\\a_{r-1}+ \\omega\\a_{r}=\\b\\bigr\\}\n\\end{multline*}\nfor all $r=1,2, 3, \\ldots \\ {}$ We also set $\\partial_{-1}:=\\{(\\a,0)\\}$.\nWe have subspaces $\\Ker\\partial_r$, $\\Def\\partial_r$ (the \\textit{domain of\ndefinition}), $\\Ind\\partial_r$ (the \\textit{indeterminacy}), and\n$\\Im\\partial_r$ in $\\Omega(M)$, and by a direct check\n\\begin{align*}\n \\Im \\partial_r&\\subset \\Ker \\partial_r\\,,\\\\\n\\Def \\partial_r&=\\Ker\\partial_{r-1}\\,,\\\\\n\\Ind \\partial_r&=\\Im\\partial_{r-1}\\,.\n\\end{align*}\nThat is, we have a sequence of differential relations on $\\Omega(M)$,\ndefining a spectral sequence $(E_r,d_r)$ where\n\\begin{equation*}\n E_{r}:=\\frac{\\Ker \\partial_{r-1}}{\\Im\\partial_{r-1}}=\\frac{\\Def \\partial_{r}}{\\Ind\\partial_{r}}\n\\end{equation*}\nand the homomorphism $d_r\\colon\\thinspace E_r\\to E_r$ is induced by $\\partial_r$ in the\nobvious way. (In fact, differential relations like these are the\nshortest way of defining spectral sequences,\nsee~\\cite[p.~340]{maclane:homology}.)\n\nClearly $E_0=\\Omega(M)$. The relation $\\partial_0$ is simply the graph of the\nlinear map $d_0\\colon\\thinspace \\Omega(M)\\to \\Omega(M)$, $d_0\\a=\\omega\\a$. What is $E_1$?\n\n\\begin{thm} The space $E_1$ can be naturally identified with the space\nof half-densities on $M$.\n\\end{thm}\n\nA proof consists of two independent steps. First, we find the\ncohomology of $d_0$ using algebra. Second, we identify the result\nwith a geometrical object. The first part goes as follows.\n\nThe operator $d_0=\\omega$ is a Koszul-type differential, since in an\narbitrary Darboux chart $\\omega=dx^id{\\xi}_i$. Introduce a ${\\mathbb Z}$-grading by\nthe degree in the odd variables $dx^i$. The operator $d_0$ increases\nthe degree by one. (This grading is \\textbf{not} preserved by\nchanges of coordinates.) From general theory it follows that the\ncohomology should be concentrated in the ``maximal degree''. Indeed,\nsuppose that $\\dim M=n|n$ and consider the linear operator $H$ on\npseudodifferential forms defined as follows. For\n$\\sigma=\\sigma(x,{\\xi},dx,d{\\xi})$,\n\\begin{equation*}\n H\\sigma(x,{\\xi},dx,d{\\xi}):=\\int_0^1 \\!dt\\,t^{n-1}\\, \\dder{\\sigma}{dx^i}{d{{\\xi}}_i}\n (x,{\\xi},t^{-1}dx,t\\,d{\\xi})\\,,\n\\end{equation*}\n--- notice the similarity with the $\\Delta$-operator. The operator $H$ is\nwell defined on all forms of degree less than $n$ in $dx^i$ and on\nforms of `top' degree if they vanish at $d{\\xi}_i=0$. (In both cases\nthere will be no problem with division by $t$.) For forms on which\n$H$ makes sense one can check that\n\\begin{equation*}\n (Hd_0+d_0H)\\sigma=\\sigma\\,.\n\\end{equation*}\nIn particular, if a form $\\sigma$ is $d_0$-closed and of degree less\nthan $n$ in $dx^i$, then $\\sigma=d_0 H\\sigma$. The same applies to a top\ndegree form vanishing at $d{\\xi}_i=0$. 
Hence the\n$d_0$-cohomology ``sits on'' pseudodifferential forms of degree $n$\nin $dx^i$ that do not depend on $d{\\xi}_i$:\n\\begin{equation*}\n \\sigma=s(x,{\\xi})\\,dx^1\\ldots dx^n.\n\\end{equation*}\nNo non-zero form of this type can be cohomologous to zero:\nindeed, any $d_0$-exact form, $d_0\\tau=\\omega\\tau$, vanishes at\n$d{\\xi}_i=0$.\n\n\\textit{Hence, each $d_0$-cohomology class has a unique\nrepresentative in a given Darboux coordinate system $x^i,{\\xi}_i$. It\nis obtained by taking an arbitrary form from the class, extracting\nits component of degree $n$ in $dx^i$ and evaluating at $d{\\xi}_i=0$.}\nBy applying this to the class of $dx^1\\ldots dx^n$, we immediately\narrive at\n\\begin{lm} \\label{lem1} Elements of the cohomology space $E_1=H(\\Omega(M),\\omega)$\nare represented in Darboux coordinates as classes\n\\begin{equation*}\n \\sigma=s(x,{\\xi})\\,[dx^1\\ldots dx^n],\n\\end{equation*}\nwhere under a change of Darboux coordinates\n\\begin{align*}\n x^i&=x^i(x',{\\xi}'),\\\\\n{\\xi}_i&={\\xi}_i(x',{\\xi}')\n\\end{align*}\nthe class $[dx^1\\ldots dx^n]$ transforms as follows:\n\\begin{equation*}\n [dx^1\\ldots dx^n]=\\det J_{00}\\cdot[dx^{1'}\\ldots\n dx^{n'}]\\,.\n\\end{equation*}\nHere $J_{00}=\\der{x}{x'}$ is the even-even block of the Jacobi\nmatrix $J=\\der{(x,{\\xi})}{(x',{\\xi}')}$.\n\\end{lm}\n\n\nTo better appreciate the statement, notice that\n\\begin{equation*}\n dx^i=dx^{i'}\\der{x^i}{x^{i'}}+d{\\xi}_{i'}\\der{x^i}{{\\xi}_{i'}}.\n\\end{equation*}\nHence\n\\begin{equation*}\n dx^1\\ldots dx^n=dx^{1'}\\ldots\n dx^{n'}\\cdot \\det\\left(\\der{x^i}{x^{i'}}\\right)+ \\text{terms containing\n $d{\\xi}_{i'}$}\\,.\n\\end{equation*}\nPassing to cohomology is equivalent to discarding these lower order\nterms.\n\nWhat kind of geometrical object is this?\n\n\\begin{lm} \\label{lem2} Objects of the form\n $\\sigma=s(x,{\\xi})\\,[dx^1\\ldots dx^n]$,\nin Darboux coordinates, with the transformation law given in\nLemma~\\ref{lem1} can be identified with half-densities on $M$.\n\\end{lm}\n\nThis is the crucial claim. There is a simple but fundamental fact\nfrom linear algebra behind Lemma~\\ref{lem2}, which will be proved in\nthe next section.\n\nThe transformation law for $[dx^1\\ldots dx^n]$ can be obtained from\nthe formal ``law'' $[dx^i]=[dx^{i'}]\\der{x^i}{x^{i'}}$.\nUnfortunately, this does not define a geometric object, because it\ndoes not obey the cocycle condition. In a way, it is only a\n`virtual' transformation law, which makes sense only if an extra\nstructure is imposed on $M$.\n\n\nNow that we have the space $E_1$, let us compute the differential $d_1$\non it. It is induced by the differential relation $\\partial_1$ on\n$\\Omega(M)$. Take an element $\\sigma=s(x,{\\xi})\\,[dx^1\\ldots dx^n]\\in E_1$,\ntake its representative $\\a=s(x,{\\xi})\\, dx^1\\ldots dx^n$ and consider\n$\\b\\in\\Omega(M)$ such that $d\\a+\\omega\\a_1=\\b$, for some $\\a_1\\in \\Omega(M)$. We\nwill have $[\\b]=d_1\\sigma$ for the class $[\\b]$ in $E_1$. Notice that\n$d\\a=d{\\xi}_i\\der{s}{{\\xi}_i}\\, dx^1\\ldots dx^n$ vanishes at\n$d{\\xi}_i=0$; therefore it is an $\\omega$-exact form, according to our\nprevious analysis. \\textit{Thus $d_1=0$ identically and $E_2=E_1$.}\n\nConsider $d_2$ on $E_1=E_2=H(\\Omega(M),\\omega)$. 
By definition, $d_2$ maps\nthe class $\\sigma=s(x,{\\xi})\\,[dx^1\\ldots dx^n]$, with a local\nrepresentative $\\a=s(x,{\\xi})\\, dx^1\\ldots dx^n$, to the class of\n$\\b\\in \\Omega(M)$ such that $d\\a+\\omega\\a_1=0$, $d\\a_1+\\omega\\a_2=\\b$, for some\n$\\a_1$ and $\\a_2$. We may set $\\a_1:=-Hd\\a$, where $H$ is the\nhomotopy operator defined above, and $\\b:=d\\a_1=-dHd\\a$. Directly:\n\\begin{multline*}\n Hd\\a=H\\Bigl(d{\\xi}_i\\der{s}{{\\xi}_i}\\, dx^1\\ldots dx^n\\Bigr)=\\sum (-1)^{i +\\tilde s} \\der{s}{{\\xi}_i}\\, dx^1\\ldots \\widehat{dx^i} \\ldots dx^n\n\\end{multline*}\nand\n\\begin{multline*}\n \\b=-dHd\\a=-d \\sum (-1)^{i +\\tilde s} \\der{s}{{\\xi}_i}\\, dx^1\\ldots \\widehat{dx^i} \\ldots dx^n=\\\\\n-dx^j\\der{}{x^j}\\sum (-1)^{i +\\tilde s} \\der{s}{{\\xi}_i}\\, dx^1\\ldots\n\\widehat{dx^i} \\ldots dx^n + \\text{\\ lower order terms in $dx$}=\\\\\n-\\dder{s}{x^i}{{\\xi}_i}\\,dx^1\\ldots dx^n + \\text{\\ lower order terms in\n$dx$}\\,.\n\\end{multline*}\nHence in $E_1$ we get:\n\\begin{equation*}\n d_2\\sigma=d_2 \\bigl(s(x,{\\xi})\\,[dx^1\\ldots dx^n]\\bigr)=-\\dder{s}{x^i}{{\\xi}_i}\\,[dx^1\\ldots\n dx^n]=-\\Delta\\sigma\\,,\n\\end{equation*}\nwhich is quite remarkable. What about the space $E_3$ and the\ndifferential $d_3$, and so on?\n\nIt is not hard to notice that the cohomology of the $\\Delta$-operator on\nhalf-densities on $M$ is isomorphic to the de Rham cohomology of the\nunderlying ordinary manifold $M_0$ (we shall say more about this\nlater). Locally the cohomology vanishes except for constants:\n$\\sigma=\\const\\cdot [dx^1\\ldots dx^n]$. Thus $d_3=0$, and $E_4=E_3$; the\nsame continues for $d_4=0$, $E_5=E_4=E_3$, and so on. We arrive at\nthe following statement (which was the main result\nof~\\cite{severa:originbv}):\n\n\n\\begin{thm} \\label{thm2} With the identification of the space $E_1=H(\\Omega(M),\\omega)$\nwith half-densities on $M$, the differential $d_1$ vanishes and the\nnext differential $d_2$ coincides up to a sign with the canonical\nodd Laplacian. The spectral sequence $(E_r,d_r)$ degenerates at the\nterm $E_3$, which is the cohomology of the operator $\\Delta$.\n\\end{thm}\n\n\n\nThe importance of Theorem~\\ref{thm2} is in the fact that it gives\nan alternative proof of the invariance of the odd Laplacian on\nhalf-densities $\\Delta$, by identifying it with an operator in a\nspectral sequence invariantly associated with the odd symplectic\nstructure.\n\n\n\n\\section{Berezinian of a canonical transformation}\n\nConsider a vector space $V=V_0\\oplus V_1$ with an odd symplectic\nstructure, i.e., an odd non-degenerate antisymmetric bilinear form.\n(A choice of `antisymmetric' or `symmetric' does not make any\ndifference.) Necessarily $\\dim V=n|n$. We call matrices preserving\nthis form \\textit{symplectic}. This should not cause problems; when\ncomparing them with ordinary symplectic matrices corresponding to\nan even symplectic structure, we shall make the reference to the\nparity of the bilinear form explicit.\n\n\\begin{thm} \\label{thm:block} Suppose that $J$ is a symplectic matrix for an odd symplectic space.\nLet\n\\begin{equation*}\n J=\\begin{pmatrix}\n J_{00} & J_{01} \\\\\n J_{10} & J_{11} \\\\\n \\end{pmatrix}\n\\end{equation*}\nbe its standard block decomposition. 
Then\n\\begin{equation}\n \\Ber J =(\\det J_{00})^2\\,.\n\\end{equation}\n\\end{thm}\n\n\\begin{proof} We can write the matrix of our symplectic form as\n\\begin{equation*}\n B=\\begin{pmatrix}\n 0 & 1 \\\\\n 1 & 0 \\\\\n \\end{pmatrix}\\,.\n\\end{equation*}\nThe relation for $J$ is $JBJ\\,^T=B$, where the operation of matrix\ntranspose takes into account the parities of the blocks:\n\\begin{multline*}\n \\begin{pmatrix}\n J_{00} & J_{01} \\\\\n J_{10} & J_{11} \\\\\n \\end{pmatrix}\n \\begin{pmatrix}\n 0 & 1 \\\\\n 1 & 0 \\\\\n \\end{pmatrix}\n \\begin{pmatrix}\n J_{00} & J_{01} \\\\\n J_{10} & J_{11} \\\\\n \\end{pmatrix}^T=\n\\begin{pmatrix}\n J_{00} & J_{01} \\\\\n J_{10} & J_{11} \\\\\n \\end{pmatrix}\n \\begin{pmatrix}\n 0 & 1 \\\\\n 1 & 0 \\\\\n \\end{pmatrix}\n \\begin{pmatrix}\n J_{00}^{\\,T} & J_{10}^{\\,T} \\\\\n -J_{01}^{\\,T} & J_{11}^{\\,T} \\\\\n \\end{pmatrix}.\n\\end{multline*}\nHence we obtain\n\\begin{align}\n J_{00}J_{01}^{\\,T}&=\\left(J_{00}J_{01}^{\\,T}\\right)^T, \\label{eq:symp1}\\\\\n J_{11}J_{10}^{\\,T}&=-\\left(J_{11}J_{10}^{\\,T}\\right)^T, \\label{eq:symp2}\\\\\n J_{00}J_{11}^{\\,T}+J_{01}J_{10}^{\\,T}&=1. \\label{eq:symp3}\n\\end{align}\nFrom~\\eqref{eq:symp3} we may express\n\\begin{multline*}\n J_{11}={J_{00}^{\\,T}}^{-1}+J_{10}{J_{01}^{\\,T}}{J_{00}^{\\,T}}^{-1} \\\\\n ={J_{00}^{\\,T}}^{-1}+J_{10}J_{00}^{-1}J_{01}, \\text{\\quad taking into\n account~\\eqref{eq:symp1}}.\n\\end{multline*}\nWe arrive at the identity\n\\begin{equation}\n J_{11}-J_{10}J_{00}^{-1}J_{01}={J_{00}^{\\,T}}^{-1}.\\label{eq:id}\n\\end{equation}\nTherefore\n\\begin{equation*}\n \\Ber J=\\frac{\\det\n J_{00}}{\\det\\bigl(J_{11}-J_{10}J_{00}^{-1}J_{01}\\bigr)}=\n \\frac{\\det\n J_{00}}{\\det{J_{00}^{\\,T}}^{-1}}=\\left(\\det\n J_{00}\\right)^2.\n\\end{equation*}\n\\end{proof}\n\n\n\nNotice that the steps we followed in the proof are the same as may\nbe used for proving the classical Liouville theorem, i.e., that\n$\\det J=1$ for an ordinary symplectic matrix $J\\in \\Sp(2n)$. The\ndecomposition of $V$ in that case will be the decomposition into the\nsum of Lagrangian subspaces and an identity similar to~\\eqref{eq:id}\nwill be valid. The difference will arise only when calculating the\ndeterminant: instead of the ratio of the determinants of the\nblocks, there will be the product, which will give $1$ instead of\n$(\\det J_{00})^2$.\n\nIt is easy to generalize. Let $V$ be a vector (super)space with a\nsymplectic structure, even or odd. Consider its decomposition into\nthe sum of two Lagrangian subspaces. In the even case they will have\nthe same dimensions; in the odd case, the opposite, i.e., $p|{n-p}$\nand ${n-p}|p$. Denote the chosen decomposition by $V=V_0\\oplus V_1$.\nHere the indices have nothing to do with parity. By picking\n`canonically conjugate' bases in $V_0$ and $V_1$ we arrive at a\npicture formally the same as above. The Berezinian can be calculated\nusing the corresponding block decompositions. It will be either\n\\begin{equation*}\n \\Ber J=\\Ber J_{00}\\cdot \\Ber \\bigl(J_{11}-J_{10}J_{00}^{-1}J_{01}\\bigr)\n\\end{equation*}\nor\n\\begin{equation*}\n \\Ber J=\\frac{\\Ber J_{00}}{\\Ber\n \\bigl(J_{11}-J_{10}J_{00}^{-1}J_{01}\\bigr)}\n\\end{equation*}\ndepending on the parity of the symplectic form (in the odd case the\n`formats' of the matrix blocks will be the opposite, hence\ndivision). Then the analog of the identity~\\eqref{eq:id} should be\napplied. 
For even symplectic structure we thus obtain the analog of\nLiouville's theorem, and for odd, we arrive at\n\n\n\\begin{thm} \\label{thm:lagrange} Let $J$ be a symplectic transformation of an odd symplectic space $V$.\nThen for an arbitrary decomposition into the sum of Lagrangian\nsubspaces, $V=V_0\\oplus V_1$ \\emph{(indices not indicating parity)},\nthe identity\n\\begin{equation}\\label{eq:berj}\n \\Ber J=\\left(\\Ber J_{00}\\right)^2\n\\end{equation}\nholds.\n\\end{thm}\n\n\n\\begin{rem} Theorem~\\ref{thm:block} gives, in particular, that the\nBerezinian of a symplectic matrix is a polynomial in the matrix\nentries and, moreover, a perfect square. This is somewhat masked in\nthe more general Theorem~\\ref{thm:lagrange}.\n\\end{rem}\n\n\nThere is an `abstract' argument parallel to the calculation above.\nConsider a decomposition into Lagrangian subspaces $V=V_0\\oplus\nV_1$, for an even or odd symplectic form. (This works in the same\nway for symmetric forms.) Consider the dual space $V_1^*$. We have\n$V_1^*\\cong \\Ann V_0\\subset V^*$ and, using the form, can replace\nthe annihilator by the orthogonal complement:\n\\begin{equation*}\n V_1^*\\cong V_0^{\\perp}\\subset V\n\\end{equation*}\nfor the even form or\n\\begin{equation*}\n V_1^*\\cong \\Pi V_0^{\\perp}\\subset \\Pi V\n\\end{equation*}\nfor the odd form. Recalling that $V_0$ and $V_1$ are Lagrangian\nsubspaces, we get $V_1^*\\cong V_0$ or $V_1^*\\cong \\Pi V_0$. Hence\n\\begin{align*}\n \\Ber V=\\Ber (V_0\\oplus V_1)&=\n \\Ber V_0\\otimes \\Ber V_1\\\\\n &=\\Ber\n V_0\\otimes \\Ber V_0^*=1\n \\intertext{(even symplectic form) or}\n&=\\Ber V_0\\otimes \\Ber \\Pi V_0^*=(\\Ber V_0)^{\\otimes 2}\n\\end{align*}\n(odd symplectic form). Equalities here mean natural isomorphisms.\n\nA different abstract argument based on the well-known interpretation\nof the space $\\Ber V$ as the cohomology of a Koszul complex and\njustifying the equality $\\Ber V=(\\Ber V_0)^{\\otimes 2}$ for an odd\nsymplectic space was given in~\\cite{severa:originbv}.\n\nA weak point of abstract arguments is that they do not really give\ninformation about matrices, which is necessary in applications such\nas the proof of Lemma~\\ref{lem2}.\n\n\n\\section{``Supersymmetries'' of differential forms}\n\nIn this section we change viewpoint. We would like to phrase the\nprevious constructions entirely in the language of `classical'\ndifferential-geometric objects. In this way we shall see how the\ncanonical odd Laplacian on half-densities on an odd symplectic\nmanifold can be seen as a `classical' object equipped with extra\nsymmetries.\n\nLet $M$ now stand for an arbitrary manifold or supermanifold.\nPreviously we worked with odd symplectic manifolds. It is known\nthat any odd symplectic manifold can be non-canonically\nidentified with $\\Pi T^*M$ considered with the natural odd bracket,\nfor some $M$. A change of the identifying symplectomorphism is\nequivalent to a symplectomorphism (or canonical transformation) of\nthe space $\\Pi T^*M$. Therefore, we can restrict ourselves to\nobjects on $\\Pi T^*M$, but should analyze them from the viewpoint\nof the larger supergroup of all canonical transformations of $\\Pi\nT^*M$, not just that of diffeomorphisms of $M$.\n\nWe shall consider multivector fields (and multivector densities)\nand differential forms on $M$. When $M$ is a supermanifold, we\nactually speak of pseudodifferential forms.\n\nMultivector fields on $M$ are identified with functions on $\\Pi\nT^*M$. 
In local coordinates, we have $X=X(x,x^*)$, where $x^a$ are\ncoordinates on $M$ and $x^*_a$ are the corresponding coordinates on\nthe fibers, of the opposite parity, transforming as\n$x^*_a=\\der{x^{a'}}{x^a}\\,x^*_{a'}$. There is no problem with\ncanonical transformations of $\\Pi T^*M$ acting on multivector fields\non $M$ --- it is just the pull-back of functions.\n\nMultivector densities on $M$ have the form $\\sigma=s(x,x^*)\\,Dx$ and at\nfirst glance it is not obvious how a transformation mixing $x$ and\n$x^*$ can be applied to them. However, in view of\nTheorems~\\ref{thm:block} and \\ref{thm:lagrange}, for a canonical\ntransformation $F\\colon\\thinspace \\Pi T^*M\\to \\Pi T^*M$ one can set\n\\begin{equation*}\n F^*\\sigma:=s\\bigl(y(x,x^*),y^*(x,x^*)\\bigr)\\,\\Ber\\left(\\der{y}{x}\\right)\\,Dx\n\\end{equation*}\nif in coordinates $F\\colon\\thinspace (x,x^*)\\mapsto \\bigl(y=y(x,x^*),\ny^*=y^*(x,x^*)\\bigr)$. This is a well-defined action. In other\nwords, we identify multivector densities on $M$ with half-densities\non $\\Pi T^*M$ and apply the natural action, taking into account\nidentity~\\eqref{eq:berj}. In integration theory, multivector\ndensities are known as integral forms (more precisely,\npseudointegral, if we insist on differentiating between arbitrary\nsmooth functions and polynomials). Therefore we can make a remark:\nintegral forms on an arbitrary supermanifold $M$ are the same as\nhalf-densities on the odd symplectic manifold $\\Pi T^*M$. In this\nlanguage we see that integral forms have more symmetries than the\nobvious ones given by diffeomorphisms of $M$.\n\n\nConsider now pseudodifferential forms on $M$, i.e., functions on\n$\\Pi TM$. They are related to (pseudo)integral forms, i.e.,\nmultivector densities on $M$, by the Fourier transform:\n\\begin{equation}\\label{eq:fourier}\n \\omega(x,dx)=\\int\\limits_{\\Pi T^*_xM}\\! Dx^*\\, e^{\\i dx^ax^*_a}\\,s(x,x^*)\n\\end{equation}\nand conversely\n\\begin{equation}\\label{eq:invfourier}\n s(x,x^*)=\\const\\int\\limits_{\\Pi T_xM}\\! D(dx)\\,\n e^{-\\i dx^ax^*_a}\\,\\omega(x,dx)\\,.\n\\end{equation}\nFrom here we obtain the action of the canonical transformations of\nthe odd symplectic manifold $\\Pi T^*M$ on forms on $M$ as follows:\n\\begin{multline*}\n (F^*\\omega)(x,dx)=\\const\\iint_{\\Pi T^*_xM\\times \\Pi T_yM} Dx^* D(dy)\n \\,e^{\\i \\left(dx^ax^*_a-dy^ay^*_a(x,x^*)\\right)}\\cdot \\\\\n \\omega\\bigl(y(x,x^*),dy\\bigr)\\,\\Ber\\der{y}{x}(x,x^*)\\,,\n\\end{multline*}\nwhere, as above, $F\\colon\\thinspace (x,x^*)\\mapsto \\bigl(y=y(x,x^*),\ny^*=y^*(x,x^*)\\bigr)$. In general, this action is non-local.\n\nWe shall consider the representation of the infinitesimal canonical\ntransformations of $\\Pi T^*M$ on forms and multivector densities on\n$M$. As it turns out, the description in both cases will be very\nsimple.\n\nFor the odd symplectic manifold $\\Pi T^*M$, the canonical odd\nLaplacian $\\Delta$ on half-densities on $\\Pi T^*M$ is just the familiar\ndivergence $\\delta$ of multivector densities on $M$. Indeed,\n\\begin{equation*}\n \\delta\\sigma= \\dder{s}{x^a}{x^*_a} \\,(x,x^*)\\,Dx\\,,\n\\end{equation*}\nif $\\sigma=s(x,x^*)\\,Dx$. Consider the infinitesimal canonical\ntransformation of $\\Pi T^*M$ generated by a function\n(``Hamiltonian'') $H=H(x,x^*)$. Denote by $L_H$ the corresponding\nLie derivative. Notice that from the viewpoint of $M$, the function\n$H$ is a multivector field.\n\\begin{thm} On multivector densities (= integral forms) on $M$, the\nLie derivative w.r.t. 
the infinitesimal canonical transformation of\n$\\Pi T^*M$ generated by $H$ is given by the formula:\n\\begin{equation*}\n L_H=[\\delta,H],\n\\end{equation*}\nwhere at the r.h.s. stands the commutator of the divergence operator\n$\\delta$ and multiplication by the multivector field $H$.\n\\end{thm}\n\nA proof can be given by a direct computation. In fact, the statement\nmimics a similar and more general statement concerning odd Laplace\noperators acting on densities of various weights,\nsee~\\cite{tv:laplace1}.\n\n\\begin{cor} \\label{cor:invar} The operator $\\delta$ on multivector densities on $M$ is\ninvariant under all canonical transformations of the odd symplectic\nmanifold $\\Pi T^*M$. \\emph{(At least those given by a Hamiltonian.)}\n\\end{cor}\n\\begin{proof} We need to show that $\\delta$ commutes with all Lie derivatives\n$L_H$. Indeed, $[\\delta,L_H]=[\\delta,[\\delta,H]]=[\\delta^2,H]=0$, since $\\delta^2=0$.\n\\end{proof}\n\nWe can adopt the following viewpoint. Suppose we do not know\nanything about the operator $\\Delta$ on half-densities in odd symplectic\ngeometry. Instead we concentrate on a familiar object, the operator\n$\\delta$ on multivector densities on a manifold $M$. The operator $\\delta$,\nas shown, is invariant under a much larger group of transformations\nthan just diffeomorphisms of $M$. It is invariant under\nsymplectomorphisms of $\\Pi T^*M$. We can then take $\\delta$ as the\ndefinition of $\\Delta$ for $\\Pi T^*M$. Since any odd symplectic manifold\n$N$ is symplectomorphic to some $\\Pi T^*M$, we can use this to\n\\textbf{define} $\\Delta$ on $N$. The invariance of $\\Delta=\\delta$ for $\\Pi\nT^*M$ under symplectomorphisms of $\\Pi T^*M$ shows that $\\Delta$ on $N$\nis well-defined, i.e., its action on half-densities on $N$ does not\ndepend on an arbitrary choice of the identifying symplectomorphism\n$N\\cong \\Pi T^*M$.\n\nIt may seem that there is a gap in such an argument as the\ninvariance was proved only infinitesimally or, equivalently, for\ntransformations that can be included into a Hamiltonian flow. In\nfact, there is no gap. Consider the supergroup $\\Can \\Pi T^*M$ of\nall canonical transformations. If $M$ is an ordinary manifold, the\nstructure of this supergroup was described in~\\cite{hov:semi}. It is\nthe product of three subgroups:\n\n\\begin{enumerate}\n \\item Transformations induced by diffeomorphisms of $M$;\n \\item Shifts in the fibers of $\\Pi T^*M$ of the form\n\\begin{equation*}\n F^*x^a=x^a, \\quad\n F^*x^*_a=x^*_a+\\der{\\Phi}{x^a}\\,,\n\\end{equation*}\nwhere $\\Phi=\\Phi(x)$ is an odd function on $M$;\n \\item Transformations identical on the submanifold $M\\subset \\Pi\n T^*M$.\n\\end{enumerate}\n\nSince $\\delta$ is invariant under diffeomorphisms of $M$, all that\nremains is to study transformations of types 2 and 3. Canonical\ntransformations of types 2 and 3 can be included into Hamiltonian\nflows. Indeed, for type 2 one can take the flow with the\nHamiltonian $\\Phi$. For type 3 there is also a Hamiltonian flow,\nwith the Hamiltonian of the form $\\Psi=\\Psi^{ab}(x,x^*)x^*_a x^*_b$,\nas shown in~\\cite{hov:semi}. Therefore, transformations of types 2\nand 3 are covered by the argument in the proof of\nCorollary~\\ref{cor:invar}, and this completes the proof.\n\nNow let us turn our attention to (pseudo)differential forms on $M$.\n\nUnder the Fourier\ntransform~\\eqref{eq:fourier} and \\eqref{eq:invfourier}, the divergence\noperator $\\delta$ becomes the exterior differential $d$, up to a\nmultiple of $\\i$. 
The multiplication by a multivector field\n$H=H(x,x^*)$ becomes the `convolution' (or `cap product'):\n\\begin{multline*}\n (H*\\omega)(x,dx)=\\const\\iint\n Dx^* D(\\overline{dx})\\,e^{\\i(dx^a-\\overline{dx}^a)x^*_a}\\,\n H(x,x^*)\\,\\omega(x,\\overline{dx})\\\\=\n \\int D(\\overline{dx})\\,\\check\n H(x,dx-\\overline{dx})\\,\\omega(x,\\overline{dx})\\,,\n\\end{multline*}\nwhere $\\check H=\\check H(x,dx)$ is the inverse Fourier transform of\n$H$. In other words, if we denote\n\\begin{equation*}\n i_H\\omega:=H*\\omega\\,,\n\\end{equation*}\nwe have\n\\begin{equation}\\label{eq:inter}\n i_H=H\\Bigl(x,-\\i\\der{}{dx}\\Bigr)\\,,\n\\end{equation}\nthe differential operator, w.r.t. the variables $dx^a$, with the\nsymbol $H$. It is clear that up to $\\i$'s, it is just the classical\ninterior product of a form by a multivector field, if we deal with\nordinary differential forms and multivectors on an ordinary\nmanifold.\n\nWe immediately get\n\\begin{thm} On (pseudo)differential forms on $M$, the\nLie derivative w.r.t. the infinitesimal canonical transformation of\n$\\Pi T^*M$ generated by $H$ is given by the `Cartan-like formula':\n\\begin{equation}\\label{eq:lie}\n \\i L_H=[d,i_H],\n\\end{equation}\nwhere at the r.h.s. stands the commutator of the de Rham\ndifferential and the interior product by the multivector field $H$\nas defined by~\\eqref{eq:inter}.\n\\end{thm}\n\nThis is very remarkable. Suppose $M$ is an ordinary manifold. The\noperation $i_H$, up to the imaginary unit, is the familiar interior\nproduct with a multivector field, generalizing the interior product\nwith a vector field. For Lie derivatives along vector fields one\nproves the Cartan formula $L_X=[d,i_X]$. For multivector fields, as\nopposed to vector fields, this equation is taken as the definition\nof a `Lie derivative of a differential form along a multivector\nfield'. In the classical picture it is not seen how these\nderivatives correspond to actual transformations. Now we see that\nthey are generators of odd canonical transformations acting on\ndifferential forms.\n\nNotice that in general $L_H$ is not a derivation of the algebra\n$\\Omega(M)$. Of course,\n\\begin{equation*}\n L_{{[\\![} H,G {]\\!]}}=[L_H,L_G]\n\\end{equation*}\nwhere at the l.h.s. stands the Schouten bracket of multivector\nfields. Equation~\\eqref{eq:lie} implies that the de Rham\ndifferential on $M$ is invariant under the canonical transformations\nof $\\Pi T^*M$. Again, one can see the $\\Delta$ operator as the de Rham\ndifferential considered together with these extra symmetries.\n\nSome of the arguments of this section were implicit in our earlier\nworks~\\cite{hov:deltabest, hov:max, hov:semi, hov:proclms}.\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nCarbon is one of the most important elements in materials science, chemistry and biology. However, its versatile chemical bonding presents a formidable challenge for the development of accurate and transferable atomistic simulation models. The development of interatomic potentials for carbon started in the 1980s with empirical formulations of the bond order~\\cite{abell1985empirical, tersoff_C_1988_PhysRevLett.61.2879}, and the effort culminated with the rigorous derivation of bond-order potentials (BOP) from the electronic structure \\cite{oleinik_pettifor_prl}. 
When it became possible to carry out large numbers of electronic structure calculations, mainly using density functional theory (DFT) \\cite{hohenberg_kohn_DFT_orig1,kohn_sham_DFT_orig2}, machine learning (ML) potentials superseded the classical potentials since they were able to reproduce the DFT data with minimal errors. Yet the inherent dependence on the reference data and poor extrapolative capabilities limit the transferability of ML potentials and often lead to nonphysical predictions for atomic configurations not included in the training dataset. Emerging alternatives to ML potentials are polynomial\/tensorial expansions, in particular the moment tensor potentials (MTP) \\cite{Shapeev16} and the atomic cluster expansion (ACE)~\\cite{ACE_Drautz_PhysRevB.99.014104}. By employing a mathematically complete basis of the atomic environment \\cite{Dusson22}, it was demonstrated that traditional ML frameworks like neural networks or kernel-based Gaussian process regression are not necessary for obtaining accurate interatomic potentials. In fact, ACE was shown to be superior not only in accuracy but also in computational efficiency~\\cite{PACE_Lysogorskiy2021}.\n\nThe difficulty of modeling carbon is reflected by the fact that the pioneering developments of Tersoff~\\cite{tersoff_C_1988_PhysRevLett.61.2879} and Brenner~\\cite{REBO1_PhysRevB.42.9458} followed the Finnis-Sinclair (FS) \\cite{Finnis84} and embedded atom method (EAM) \\cite{Daw84} potentials for metals. Different from the metallic potentials, which focus on atomic energies, Tersoff introduced an empirical expression for the angularly dependent bond order \\cite{Tersoff86} to model the formation of covalent directional bonds in C and Si.\nThe Tersoff potential as well as the family of reactive empirical bond order (REBO) potentials~\\cite{REBO1_PhysRevB.42.9458, REBO2_Brenner_2002} were applied widely to study properties of crystalline, amorphous and molecular carbon structures~\\cite{tersoff_application1, tersoff_application2, tersoff_application3}. Nevertheless, these early potentials had a number of limitations, such as short interaction ranges, neglect of $\\pi$ bonding or missing van der Waals (vdW) interactions~\\cite{Pastewka2012_BOP_for_fracture, marks1_transferability, marks2_graphitization}. Some of these deficiencies were remedied in subsequent modifications, for example, by introducing screening functions for a better description of bond breaking~\\cite{Pastewka_SiC_PhysRevB.87.205410,rebo_S_pastewka_PhysRevB.78.161402}, or explicit terms to capture long-range dispersion forces~\\cite{airebo_doi:10.1063\/1.481208, AIREBO_M_doi:10.1063\/1.4905549,LCBOP_PhysRevB.68.024107}. Other successful carbon potentials, such as the environment dependent interaction potential (EDIP) \\cite{edip_marks_PhysRevB.63.035401,Justo98} or ReaxFF~\\cite{reax_ff_vanDuin2001, reaxff_Srinivasan2015}, followed similar strategies of employing suitable functional forms based on physical and chemical intuition, yet still in a mostly empirical way. Rigorous derivations of the $\\sigma$ and $\\pi$ bond orders were eventually carried out by Pettifor and co-workers~\\cite{Alinaghian94,oleinik_pettifor_prb2} based on a quantum-mechanical tight-binding model~\\cite{Oleinik_pettifor_aBOP_carbon,oleinik_pettifor_prb2,oleinik_pettifor_prl,MROVEC2007230}. \n\nDespite their relatively short history, several ML potentials have already been developed for carbon. 
They can be differentiated by how the local atomic environment is sensed via descriptor functions~\\cite{musil2021physics}. To fulfill fundamental physical symmetries, these functions should be invariant under translation, rotation, inversion, and permutation of atoms of the same chemical species. For every atom, hundreds or thousands of different descriptor functions typically need to be evaluated and their numerical values form the input for an ML algorithm that predicts the atomic property. The most prominent ML approaches include neural network potentials (NNP) \\cite{NN_Behler_parinello_sym_PhysRevLett.98.146401} and kernel-based methods, in particular the Gaussian approximation potentials (GAP) \\cite{kernel_regression_GAP_PhysRevLett.104.136403}, with descriptors based on the atom-centered symmetry functions (ACSF) \\cite{ml_behler_C1CP21668F} or the smooth overlap of atomic positions (SOAP)~\\cite{SOAP_PhysRevB.87.184115}. \n\nThe first NNP models were tailored for specific applications, such as transformations between graphite and diamond or the behavior of multilayered graphene~\\cite{nnp_carbon_gradia_PhysRevB.81.100103,nnp_carbon_gradia_Khaliullin2011, NNP_Gr_PhysRevB.100.195419}. Recent NNPs aim at improved transferability by employing comprehensive reference datasets. These models include PANNA~\\cite{Shaidu2021_PaNNA2021}, which was built using an iterative self-consistent workflow, and DeepMD~\\cite{deep_MD_carbonpot_WANG20221}, which was trained on a large dataset of bulk and low-dimensional phases as well as snapshots from ab initio molecular dynamics (AIMD). In parallel to the NNPs, several GAP parametrizations for carbon were also developed. The first GAPs focused on simulations of liquid and amorphous carbon~\\cite{gap17_PhysRevB.95.094203} and pristine graphene~\\cite{GAP_graphene_PhysRevB.97.054303}. In 2020, a large number of diverse carbon structures were employed to obtain a widely transferable GAP (GAP20)~\\cite{gap20_doi:10.1063\/5.0005084}. A closely related TurboGAP parameterization used a more efficient implementation of the SOAP descriptor~\\cite{TurboGAP, soap_turbo_PhysRevB.100.024112}. These GAPs were applied to study complex phenomena, such as vapour deposition of amorphous carbon films~\\cite{GAP_application_thinfilm_PhysRevB.102.174201} or the effect of defects on the corrugation of graphene~\\cite{Thiemann_2021_corrugation_graphene}. Overall, GAPs were found to be the most accurate models among fourteen carbon potentials in predicting realistic amorphous structures~\\cite{marks1_transferability}.\n\nDifferent from ML potentials, which often employ empirical descriptor functions, ACE is built on a mathematically complete basis. This means that ACE parameterizations can be improved and converged systematically. Furthermore, the hierarchical basis enables ACE not only to represent many ML potentials~\\cite{ACE_Drautz_PhysRevB.99.014104} but also to be related to physically and chemically intuitive classical models. The physically motivated representation, which can be linear or mildly non-linear (see below), in combination with a consistent reference dataset helps to ensure that ACE attains genuine transferability and is not plagued by the reproducibility crisis of ML-based science \\cite{Kapoor22}.\n\nHere we present the first ACE parametrization for carbon, which is not only more accurate and transferable than any of the previous potentials but also significantly more computationally efficient. 
We compare it in detail to the best available ML potentials for carbon and provide performance indicators for several other potentials. The excellent transferability and predictive power of ACE are highlighted by three distinct applications: brittle crack propagation in diamond, the formation of amorphous carbon structures at different quench rates, and the nucleation and growth of fullerene clusters from the gas phase at high pressures and temperatures.\n\nThe paper is organized as follows. In the following two sections, we provide a brief theoretical overview of the ACE formalism and elucidate how common classical potentials can be understood as simplified representations of a general ACE descriptor. In section \\ref{sec:training}, we describe the training dataset and the fitting protocol we employed for the construction of our carbon model. A detailed assessment of the parametrization with respect to a test dataset is also provided to determine its general quality. In section \\ref{sec:validation}, we subject the potential to multiple validation tests that show the model's ability to predict structural, elastic, vibrational and thermodynamic properties of perfect bulk phases as well as defects. The ACE predictions are compared with those of the reference electronic structure calculations and the best available ML carbon potentials, namely GAP20, TurboGAP and PANNA. In section~\\ref{sec:applications}, we apply the model to perform three large-scale simulations to investigate the brittle crack propagation in diamond, the formation of amorphous carbon structures, and the growth of fullerene molecules from the gas phase.\n\n\n\\section{Basics of the atomic cluster expansion}\\label{sec:ace_basics}\n\nThe atomic cluster expansion provides a complete set of basis functions that span the space of local atomic environments. We summarize only the essentials of ACE here and direct interested readers to Refs.~\\cite{ACE_Drautz_PhysRevB.99.014104, dusson2019atomic, drautz2020atomic, PACE_Lysogorskiy2021, pace_bochkarev_PhysRevMaterials.6.013804}.\n\nAn atomic property $p$ that is a function of the local atomic environment of atom $i$ is expanded as\n\\begin{equation}\n\\varphi_i^{(p)} = \\sum_{{\\bm v}} \\cBB_{{\\bm v}}^{(p)} {\\bm B}_{i {\\bm v}} \\,, \\label{eq:ACE}\n\\end{equation}\nwith expansion coefficients $\\cBB_{{\\bm v}}^{(p)}$, and basis functions ${\\bm B}_{i {\\bm v}}$ with multi-indices ${{\\bm v}}$. The energy of atom $i$ can then be evaluated as\n\\begin{equation}\n E_i = \\varphi_i^{(1)} \\,,\n\\end{equation}\nfor only one atomic property ($p=1$). If more properties are used,\n\\begin{equation}\nE_i = \\mathcal{F}(\\varphi_i^{(1)}, \\varphi_i^{(2)}, \\dots, \\varphi_i^{(P)}) \\,,\n\\end{equation}\nwhere $\\mathcal{F}$ in general is a non-linear function.\nIn the present ACE model, the energy is expressed using two contributions, a linear term and a square root term,\n\\begin{equation}\nE_i = \\varphi_i^{(1)} + \\sqrt{\\varphi_i^{(2)}} \\,. \\label{eq:EFS}\n\\end{equation}\n\nThe basis functions ${\\bm B}_{i {\\bm v}}$ depend on atomic positions and are ordered hierarchically, which enables a systematic convergence of ACE by increasing the number of basis functions. The basis functions fulfill the fundamental translation, rotation, inversion and permutation (TRIP) invariances for the representation of scalar variables, or equivariances for the expansion of vectorial or tensorial quantities. 
This is achieved by taking linear combinations of basis functions, which do not necessarily fulfill any particular symmetries, as\n\\begin{equation}\n{\\bm B}_{i {\\bm v}} = \\sum_{{\\bm v}'} \\pmb{C}_{{\\bm v} {\\bm v}'} {\\bm A}_{i {\\bm v}'} \\,. \n\\end{equation}\nwhere the generalized Clebsch-Gordan coefficients $\\pmb{C}$ act as a filter and remove basis functions that are not invariant under rotation or permutation.\n\nFor numerical efficiency the basis functions ${\\bm A}$ are constructed recursively \\cite{dusson2019atomic, PACE_Lysogorskiy2021}. The order of the product $\\nu$ determines the body order of a basis function\n\\begin{equation}\n {{\\bm A}}_{i{\\bm v}} = \n \\prod_{t = 1}^{\\nu} A_{i {\\bm v}_t} \\,. \\label{eq:AA}\n\\end{equation}\nThe atomic base $A$ is obtained by projecting local basis functions on the atomic density\n\\begin{equation}\n A_{i{\\bm v}} = \\braket{\\phi_{{\\bm v}}}{\\rho_i} \\,,\n\\end{equation}\nwith the atomic density centered on atom $i$\n \\begin{equation}\n\\rho_i = \\sum_j^{j \\neq i} \\delta( \\pmb{r} - \\pmb{r}_{ji}) \\,.\n\\end{equation} \nThe local basis functions are expressed as\n\\begin{equation}\n\\phi_{{\\bm v}} = R_{nl}(r_{ji}) Y_{lm}(\\hat{{\\bm r}}_{ji}) \\,, \\label{eq:LCAO}\n\\end{equation}\nwith $r_{ji}$ being the distance from atom $i$ to $j$ that enters in the radial functions $R_{nl}$, while the spherical harmonics $Y_{lm}$ depend on the direction $\\hat{\\pmb{r}}$. The index ${\\bm v} = (nlm)$ is cumulative, where $n$ differentiates between orbitals with the same angular quantum number $l$ and $m$. \n\nThe basis functions can represent local descriptor functions that are used in ML potentials as well as density and angular functions from classical potentials \\cite{ACE_Drautz_PhysRevB.99.014104, drautz2020atomic}. If decomposed into explicit many-atom functions, the two-body basis functions are given by radial functions\n\\begin{equation}\nB_{i{\\bm v}} = R_{n0} (r_{ji}) \\,, \\label{eq:B1exp}\n\\end{equation}\nwhile the three-body terms have the form\n\\begin{equation}\nB_{i{\\bm v}} = \\frac{1 }{2l +1} \\sum_{jk} R_{n_1 l} (r_{ji}) R_{n_2 l} (r_{ki}) P_l( \\cos \\theta_{jik}) \\,, \\label{eq:B2exp}\n\\end{equation}\nwith Legendre polynomials $P_l(x)$. Expressions for higher body orders can also be obtained but are more complex~\\cite{ACE_Drautz_PhysRevB.99.014104}.\n\n\n\\section{ACE as a generalization of classical potentials}\\label{sec:ace_classical_pots}\n\n\nTo elucidate that ACE is not only a formally complete expansion but that it also can be considered as a systematic generalization of classical interatomic potentials, we sketch the link between ACE and the second-moment approximation (SMA) of electronic structure~\\cite{pettifor1995bonding, finnis2003interatomic,Friedel69,Ducastelle70}. \n\nStarting from DFT, the energy functional can be decomposed into the band energy and a double-counting contribution,\n\\begin{equation}\nE^{(\\text{DFT})} = E_{band} + E_{dc} \\,.\n\\end{equation}\nThe band energy is given by $E_{band} = \\sum_n f_n \\epsilon_n$, with occupation numbers $f_n$ and eigenvalues $\\epsilon_n$. The eigenstates can then be expanded using local orbitals $\\phi_{i \\alpha}$ [cf. Eq.~(\\ref{eq:LCAO})] as \n\\begin{equation}\n\\psi_n = \\sum_{i \\alpha} \\braket{i \\alpha}{n} \\phi_{i \\alpha} \\,.\n\\end{equation}\nWe assume the basis to be complete and for ease of notation consider the local orbitals to be orthonormal. 
\nThe band energy is then represented as\n\\begin{equation}\nE_{band} = \\sum_{i \\alpha} \\int^{\\epsilon_F} \\epsilon\\, n_{i\\alpha}(\\epsilon) \\,d\\epsilon = \\sum_{i\\alpha \\, j \\beta} \\Theta_{i \\alpha j \\beta} H_{j \\beta i \\alpha } \\,. \\label{eq:onin}\n\\end{equation}\nThe first identity is the so-called onsite representation of the band energy, where the local density of states $n_{i\\alpha}(\\epsilon)$ of orbital $\\alpha$ on atom $i$ is filled with electrons up to the Fermi level $\\epsilon_F$.\\footnote{For a discrete spectrum the local density of states is given as $n_{i\\alpha}(\\epsilon) = \\sum_n |\\braket{i \\alpha}{n}|^2 \\delta(\\epsilon - \\epsilon_n)$.}\nThe second identity, the so-called intersite representation, involves the density matrix\/bond order\n\\begin{equation}\n\\Theta_{i \\alpha j \\beta} = \\sum_n f_n \\braket{i \\alpha}{n} \\braket{n}{j \\beta} \\,,\n\\end{equation}\nand Hamiltonian matrix elements $H_{i \\alpha j \\beta} = \\braHket{i \\alpha}{\\hat{H}}{j \\beta}$. \nA formal expansion of the DFT functional with respect to charge density \\cite{harris1985simplified, sutton1988tight, Foulkes89, frauenheim2000self, finnis2003interatomic, Drautz_PhysRevB.84.214114_2011, Drautz15} forms the basis of modern tight-binding (TB) models and results in a partitioning of the energy as\n\\begin{equation}\nE^{(\\text{TB})} = E_{bond} + E_{prom} + E_{rep} \\,,\n\\end{equation}\nwhere for simplicity we neglected charge transfer. If there is no promotion of electrons and the energy scale is set such that $H_{i \\alpha i \\alpha} = 0$, the band and bond energies are identical, $E_{band} = E_{bond}$. The repulsive energy $E_{rep}$, comprising the double-counting contribution and Coulomb interactions between the atomic cores, is often approximated by a pair potential\n\\begin{equation}\nE_{rep} = \\frac{1}{2} \\sum_{ij} V(r_{ij})\\,.\n\\end{equation}\nFor the derivation of the SMA for metals, we utilize a local expansion of the band energy for an atom \n\\begin{equation}\nE_{band,i} = \\int^{\\epsilon_F} \\epsilon\\, n_{i}(\\epsilon) \\,d\\epsilon \\,\\,\\,\\,\\, \\text{with} \\,\\,\\,\\,\\, n_{i}(\\epsilon) = \\sum_{\\alpha} n_{i \\alpha}(\\epsilon) \\,. \n\\end{equation}\nWe next construct the local density of states from the information about the local atomic environment. This is achieved by the recursion method~\\cite{Ducastelle70,Friedel69,Haydock80}. If the recursion is continued with constant coefficients after the first recursion level, only information up to the second moment of the density of states enters the expansion. The second moment, given by\n\\begin{align}\n\\mu_i^{(2)} &= \\int \\epsilon^2 n_{i}(\\epsilon) \\,d\\epsilon = \\sum_{\\alpha\\, j \\beta} H_{i \\alpha j \\beta} H_{ j \\beta i \\alpha} \\, , \n\\end{align}\nis determined by Hamiltonian matrix elements that rapidly decay with increasing distance between atoms $j$ and $i$. As the zeroth moment $\\mu_i^{(0)}=1$ and the first moment was set to $\\mu_i^{(1)} = \\sum_{\\alpha} H_{i \\alpha i \\alpha} = 0$, it follows merely from appropriate scaling that in SMA~\\cite{Gupta81, pettifor1995bonding, Ackland_1988}\n\\begin{equation}\nE_{band,i}^{(\\text{SMA})} = C \\sqrt{\\mu_i^{(2)} } \\propto \\sqrt{\\mathcal{Z}_{i}} \\,, \\label{eq:2Mband}\n\\end{equation}\nwhere the pre-factor $C$ is a function of band filling and $\\mathcal{Z}_{i}$ is the coordination, the number of nearest neighbors, of atom $i$. 
The total atomic energy can then be written as \n\\begin{equation}\nE_{i}^{(\\text{SMA})} = C \\sqrt{\\mu_i^{(2)} } + \\sum_j V(r_{ij})\\,.\n\\end{equation}\nAs $\\mu_i^{(2)}$ is strictly positive, it may also be understood as an atomic density $\\rho_i$ computed from pairwise functions to neighboring atoms. In this way, one obtains immediately the FS potential~\\cite{Finnis84}. If the square root function is replaced by a general, concave embedding function, one arrives at the EAM formulation~\\cite{Daw84}. Therefore, the cohesion in metals in SMA does not increase linearly with increasing coordination and correctly reflects the unsaturated nature of the metallic bond~\\cite{pettifor1995bonding, heine1991many}.\n\n\nThe derivation of SMA expressions for covalent elements is somewhat more involved and requires explicit consideration of the angular character of atomic orbitals. By equivalence of the onsite and intersite representations of the band energy [cf. Eq.~(\\ref{eq:onin})], SMA for the band energy also implies a corresponding expression for the bond order. We assume the Slater-Koster two-center approximation \\cite{Slater54} with the $z$-axis of the coordinate system aligned along the bond $i-j$. For a d-valent atom, the Hamiltonian matrix is diagonal with the matrix elements equal to two-center distance-dependent bond integrals dd$\\sigma(r_{ij})$, dd$\\pi(r_{ij})$ and dd$\\delta(r_{ij})$.\nThe second moment is then by construction invariant under rotation and given by\n\\begin{equation}\n \\mu_i^{(2)} = \\sum_{j} \\left[ \\mathrm{dd}\\sigma(r_{ij})^2 + 2\\, \\mathrm{dd}\\pi(r_{ij})^2 + 2\\, \\mathrm{dd}\\delta(r_{ij})^2 \\right] \\,, \\label{eq:dSMA}\n\\end{equation}\nwhere the summation is over the $\\mathcal{Z}_{i}$ neighbors of atom $i$. \nThe $\\sigma$ bond order for the $i-j$ bond (with analogous expressions for the $\\pi$ and $\\delta$ bond orders) is expressed as~\\cite{pettifor1995bonding, finnis2003interatomic}\n\\begin{equation}\n| \\Theta_{ij}^{(\\sigma)} | = \n\\frac{C }{ \\sqrt{ \\mu_i^{(2)} } } \\propto \\frac{1}{\\sqrt{\\mathcal{Z}_{i} }}\\,, \\label{eq:BO}\n\\end{equation}\nand is thus inversely proportional to the square root of the number of neighbors of atom $i$. Note that the bond order is not symmetric with respect to exchange of atoms $i$ and $j$ and therefore the denominator needs to be replaced by $\\sqrt{( \\mu_i^{(2)} + \\mu_j^{(2)})\/2}$.\n\n\nNext, since the bond (or band) energy of atom $i$ \n\\begin{equation}\nE_{bond,i}^{(\\sigma)} = \\sum_j \\Theta_{ij}^{(\\sigma)} dd\\sigma(r_{ij}) \\propto \\sqrt{\\mathcal{Z}_{i}} \\, \\label{eq:ebond_dds}\n\\end{equation}\nis obtained as the sum over all $\\mathcal{Z}_{i}$ neighbors, it attains the square-root dependence on the number of neighbors as in Eq.~(\\ref{eq:2Mband}).\n\nThe derivation for sp-valent elements follows along the same lines but needs to take into account the directionality of the hybrid orbitals and the energy splitting $\\Delta E_{sp}$ of the s and p orbitals. The hybrid $\\sigma$ orbitals oriented along the $z$-axis are formed as\n\\begin{equation}\n\\ket{i\\sigma} = \\frac{1}{\\sqrt{1+\\lambda^2}} \\left ( \\ket{is} + \\lambda \\, \\ket{i p_z} \\right ) \\,,\n\\end{equation}\nwhere $\\lambda=1$, $\\sqrt{2}$ and $\\sqrt{3}$ correspond to sp, sp$^2$ and sp$^3$ hybrids, respectively. 
The bond integral of the $\\sigma$ hybrid is given as\n\\begin{equation}\nh_{\\sigma}(r_{ij}) = \\frac{ \\mathrm{ss}\\sigma(r_{ij}) - 2 \\lambda\\, \\mathrm{sp}\\sigma(r_{ij}) - \\lambda^2 \\, \\mathrm{pp}\\sigma(r_{ij}) }{1+\\lambda^2} \\, ,\n\\end{equation}\nwhere ss$\\sigma$, pp$\\sigma$ and sp$\\sigma$ are Slater-Koster two-center bond integrals.\nThe second moment takes the form \\cite{pettifor1995bonding}\n\\begin{equation}\n\\begin{split}\n \\mu^{(2)}_{i\\to j} &= c \\, \\Delta E_{sp}^2 + h_\\sigma(r_{ij})^2 + \\\\\n & \\sum_{k \\ne i,j} \\frac{1}{2} \\left( h_\\sigma(r_{ik})^2 \\, g(\\theta_{jik}) + h_\\sigma(r_{jk})^2 \\, g(\\theta_{ijk}) \\right) \\,,\n\\end{split}\n\\end{equation}\nwhich differs from Eq.~(\\ref{eq:dSMA}) due to the angular functions $g(\\theta_{jik})$ and $g(\\theta_{ijk})$, which depend on the angle $\\theta$ between bonds~\\cite{pettifor1995bonding, Alinaghian94}. Different from the d-valent case, the second moment is not rotationally invariant and the orientation of the $z$-axis along the bond $i \\to j$ has to be given explicitly.\nThe bond order retains a form analogous to Eq.~(\\ref{eq:BO}),\n\\begin{equation}\n |\\Theta_{i \\to j}^{(\\sigma)} | = \\frac{C}{ \\sqrt{ \\mu_{i\\to j}^{(2)} } } \\,.\n\\end{equation}\nThis expression for the bond order is not dissimilar from the empirical bond order introduced by Tersoff \\cite{Tersoff86}. Derivations of the $\\pi$ bond order and the promotion energy, which are important for the bond formation in carbon, can be found in Refs.~\\cite{Alinaghian94,Oleinik_pettifor_aBOP_carbon, pettifor1995bonding, finnis2003interatomic}. \n\nThe aim of this analysis was to show that the most important classical potentials for metals and covalent semiconductors can be understood from the second-moment approximation. The crucial point is that for both these materials, SMA predicts the bond energy to scale as the square root of the local atomic density. This suggests that a physically based model of the atomic energy should comprise an attractive part with a square-root dependence on the number of neighbors, and a pair-wise repulsive part which scales linearly with the number of neighbors. This expression for the energy,\n\\begin{equation}\nE_i = - A \\sqrt{\\mathcal{Z}_i} + B \\mathcal{Z}_i \\,,\n\\end{equation}\nmimics the ACE formulation in Eq.~(\\ref{eq:EFS}). If we limit ACE to two-body basis functions, the representation of the energy is closely related to FS models. If 2-body and 3-body contributions are included in the ACE basis, we expect ACE to reproduce not only the original Tersoff formulations but also Stillinger-Weber \\cite{Stillinger85}, EDIP \\cite{Justo98} and other empirical bond order potentials. Incorporation of basis functions with higher body orders then represents a systematic generalization. The higher body orders are similar to including higher moments of the density of states as required for more accurate structural differentiation~\\cite{pettifor1995bonding}. An ACE parametrization with contributions up to the body order of six was developed recently for metallic Cu \\cite{PACE_Lysogorskiy2021}. 
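To make this connection concrete, the following minimal sketch (an illustrative toy model with arbitrary parameter values, not the actual ACE or FS implementation) evaluates the second-moment energy $E_i=-A\\sqrt{\\rho_i}+B\\rho_i$, with a smooth pairwise density $\\rho_i$ in place of the discrete coordination $\\mathcal{Z}_i$:\n\\begin{verbatim}\nimport numpy as np\n\ndef sma_energy(positions, A=1.0, B=0.3, r_cut=2.0):\n    # toy Finnis-Sinclair-type energy:\n    #   E_i = -A * sqrt(rho_i) + B * rho_i\n    # with rho_i = sum_j f(r_ij), a smooth coordination\n    n = len(positions)\n    rho = np.zeros(n)\n    for i in range(n):\n        for j in range(n):\n            if i == j:\n                continue\n            r = np.linalg.norm(positions[i] - positions[j])\n            if r < r_cut:\n                # smooth pair function, zero value and slope at the cutoff\n                rho[i] += (1.0 - (r / r_cut) ** 2) ** 2\n    return np.sum(-A * np.sqrt(rho) + B * rho)\n\n# the attractive energy per bond decreases in magnitude with coordination,\n# reflecting the unsaturated nature of the bond\ndimer = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])\nprint(sma_energy(dimer))\n\\end{verbatim}\nReplacing the scalar density $\\rho_i$ by the complete set of invariant basis functions of Eq.~(\\ref{eq:ACE}), with fitted expansion coefficients, turns this toy model into the systematic expansion used in the following. 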
Here we show that this approach also works extremely well for covalent carbon, where directional bonding is much more important and delicate.\n\n\n\\section{Training of the ACE for carbon}\n\\label{sec:training}\n\n\n\\begin{figure*}\n\\centering\n \\includegraphics[width=0.99\\textwidth]{dataset_together.png} \n \n \\caption{\\label{fig:dataset} Training dataset visualized as (a) a scatter plot of energy per atom with respect to the nearest interatomic distance within each structure and (b) the distribution of cohesive energies.}\n\\end{figure*}\n\nAn accurate and consistent reference dataset that covers a large part of the phase space of atomic configurations is critical for the construction of an ACE model. Such a dataset consists of a series of atomic structures and their corresponding energies, forces and stresses, typically evaluated using electronic structure methods such as DFT. In this section, we describe the details of our DFT calculations including peculiarities related to carbon, present our strategy to generate an exhaustive and balanced set of reference structures, and show how the ACE parametrization is carried out.\n\n\n\\subsection{DFT reference and dispersion interactions}\n\\label{sec:dft_settings}\n\nThe DFT reference calculations were performed using the Vienna Ab-initio Simulation Package (VASP)~\\cite{dft1,dft2,dft3}, version 5.4.4. The exchange-correlation energy was computed using the Perdew-Burke-Ernzerhof (PBE) generalized gradient approximation (GGA)~\\cite{gga} and the core electrons were modeled by the Projector-Augmented Wave (PAW)~\\cite{paw1,paw2} method (C: s2p2). We carried out highly converged calculations with tight settings of the principal parameters in order to obtain accurate results for the energy and forces. Specifically, the energy cutoff for the plane-wave basis was set to 500 eV and the convergence threshold for the energy to 10$^{-6}$ eV. Gaussian smearing with a width of 0.1 eV was applied.\nFor periodic structures, the Brillouin zone was sampled using a dense $\\Gamma$-centered k-point mesh with a spacing between k-points of 0.125~\\AA{}$^{-1}$, while for non-periodic clusters only the $\\Gamma$ point was used. An additional support grid was employed for the calculation of forces to ensure that forces and energies are consistent.\n\nWhile the PBE functional accurately describes covalent bonds, it does not capture long-range dispersion interactions. As the vdW interaction plays a crucial role in the stabilization of many important carbon structures, such as graphite and its derivatives, PBE data alone is not suitable for the parametrization of a fully transferable carbon model. There exist various approaches to account for dispersion interactions within DFT~\\cite{vdw_d2_coorection, vdw_d3_coorection,vdw_d4_coorection,vdw_TS_correction, vdw_TS_MBD, vdw_TS_MBD2}. We decided to employ additive corrections: ACE is first parameterized to standard PBE data, which is significantly shorter ranged than the vdW interactions, and the model is then amended with a correction term in analogy to most dispersion-corrected DFT approaches~\\cite{grimme2011density}. 
This approach not only results in an efficient model with the correct description at long interatomic distances, but it also gives the flexibility to employ correction terms of different complexity or even switch off the long-range interactions if they are not needed.\n\nFinally, ACE is parametrized such that the interaction between C atoms approaches zero at infinite separation to ensure that the fitted energies correspond to cohesive energies. This is done by taking the energy of an isolated spin-unpolarized atom as the reference zero energy. \n\n\n\\begin{table*}\n\\caption{A description of datasets used in this work.}\\label{tab:dataset_description}\n\\centering\n\\begin{tabular}{lccccc}\n\\toprule\n Category & Description & \\makecell{Number\\\\of structures} & \\makecell{Number\\\\ of atoms} & \\makecell{[$E_{min}$, $E_{max}$]\\\\ (eV\/atom)} & \\makecell{NNB range\\\\ (\\AA)} \\\\\n\\midrule\n sp$^2$ structures & \\makecell{graphene, graphite, fullerenes, nanotubes, incl. defects} & 3532 & 88358 & [-9.07, 78.50] & [0.7, 4.4] \\\\[5pt] \\hline\n sp$^3$ structures & \\makecell{cubic and hexagonal diamond, \\\\ high-pressure phases (bc8, st12, m32, etc.), incl. defects} & 3407 & 84290 & [-8.93, 36.99] & [0.9, 4.9] \\\\[5pt] \\hline\n amorphous\/liquid & \\makecell{selected from available datasets;\\\\ amorphous and liquid phases \\cite{gap20_doi:10.1063\/5.0005084} \\\\ MD trajectories of multilayered graphene \\cite{NNP_Gr_PhysRevB.100.195419}} & 2642 & 146188 & [-9.06, -3.18] & [1.0, 1.7] \\\\[5pt] \\hline\ngeneral bulk & \\makecell{basic crystals; fcc, hcp, bcc, sc, A15, etc.\\\\ over broad range of volume and \\\\random displacements of atoms\/cell deformations } & 5342 & 39 126 & [-8.06, 82.17] & [0.9, 4.4] \\\\[5pt] \\hline\ngeneral clusters & \\makecell{non-periodic clusters with 2-6 atoms} & 2370 & 8801 & [-6.19, 83.28] & [0.6, 5.0] \\\\\n\\bottomrule\n\\end{tabular}\n\\end{table*}\n\n\n\n\\subsection{Reference dataset}\n\\label{sec:dataset}\n\nWe constructed an extensive reference dataset consisting of 17,293 structures with a total number of 366,763 atoms. The reference structures were chosen to sample a broad range of atomic configurations for carbon to ensure both accuracy and transferability of the ACE parametrization.\n\n\nWe divided the reference structures into five categories. The first category contains carbon structures with prototypical sp$^2$ bonding, including a variety of bulk graphite structures, 2D graphene sheets, molecular fullerenes and nanotubes. The second category contains sp$^3$ four-fold coordinated crystalline diamond structures and their high pressure variants. In both categories, we sampled the structures over a broad range of interatomic distances, shape distortions, random displacements of atoms, and incorporated point and planar defects such as vacancies, Stone-Wales defects, surfaces, interfaces, etc. These two categories were complemented by structures from MD simulations of amorphous and liquid carbon~\\cite{gap20_doi:10.1063\/5.0005084} and multilayered graphene~\\cite{NNP_Gr_PhysRevB.100.195419}. As the MD structures are highly correlated, we included only a relatively small number of atomic configurations from these two collections. \nThe last two categories, referred to as `general bulk' and `general clusters', consist of general crystal structures, e.g., fcc, bcc, sc, hcp, A15, and isolated random clusters containing up to six carbon atoms. 
These two categories serve to sample a broader region of configurational space and allow the potential to model close-packed atomic environments. A summary of the five categories is provided in Table~\\ref{tab:dataset_description}.\n\nFigure~\\ref{fig:dataset} shows the cohesive energy for the reference structures as a function of the shortest bond length within each structure (left panel) and the distribution of structures within the corresponding energy range (right panel). The standard PBE functional without any vdW correction predicts the graphene phase to have the lowest cohesive energy of $-9.12$~eV\/atom and an equilibrium bond length of 1.45~\\AA{}. The energies of most structures from the first three categories lie within 3 eV\/atom above the ground state energy, while the remaining two categories are characterized by cohesive energies mostly greater than $-6$~eV\/atom. The six-dimensional configuration space of clusters with four atoms was sampled uniformly~\\cite{ACE_Drautz_PhysRevB.99.014104} with interatomic distances ranging from $0.7$ to $3.0\\, r_s$, where $r_s$ = 1.45~\\AA{} corresponds to the nearest-neighbor bond (NNB) distance in graphene. \n\n\n\\subsection{ACE implementation and efficiency}\n\\label{sec:software}\n\nFor the parameterization of ACE we employed the software package {\\verb+pacemaker+} \\cite{pace_bochkarev_PhysRevMaterials.6.013804}. The simulations for validation and applications were carried out using {\\texttt{\\detokenize{LAMMPS}} } \\cite{LAMMPS} with the {\\texttt{\\detokenize{PACE}} } package \\cite{PACE_Lysogorskiy2021}. Both {\\verb+pacemaker+} and {\\texttt{\\detokenize{LAMMPS}} }+ {\\texttt{\\detokenize{PACE}} } can be executed on CPU and GPU architectures. The computational performance of ACE in comparison with other ML methods is shown in Fig.~\\ref{fig:walltime}. The graph displays the CPU\/GPU times for a single MD time step per atom from representative $NVT$ MD simulations of liquid carbon at 4000 K with a density of 2 g\/cm$^{3}$ (using a periodic supercell containing 1000 atoms for 1000 time steps). For TurboGAP we used the value reported in Ref.~\\cite{TurboGAP}, since it is not available in {\\texttt{\\detokenize{LAMMPS}} }. On CPU, ACE is 1--2 orders of magnitude faster than the other models, in accordance with our previous benchmarks~\\cite{PACE_Lysogorskiy2021}. On GPU, ACE reaches efficiency comparable to that of classical interatomic potentials.\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\columnwidth]{walltime_withGPU.png}\n \\caption{\n A comparison of typical CPU (AMD Ryzen 5 3600X) and GPU (Tesla V100S-PCIE 32 GB) times for ACE, TurboGAP, PANNA and GAP20 (for TurboGAP the value was taken from Ref.~\\cite{TurboGAP}). CPU times are given per core and GPU times are per device.}\n \\label{fig:walltime}\n\\end{figure}\n\n\n\n\\subsection{Training}\n\\label{sec:fitting_process}\n\nTraining a carbon potential for an optimal balance between accuracy and transferability is more challenging than for many other materials. The potential must simultaneously reproduce minute energy differences between the most stable phases and allotropes while being able to capture large energy changes associated with breaking and re-arrangement of the strong directional covalent bonds that occur during phase transformations or in the vicinity of structural defects.\n\nDuring the ACE training, we employed a hierarchical optimization strategy implemented in the {\\verb+pacemaker+} code that sequentially adds more basis functions. 
The radial basis functions, represented by exponentially scaled Chebyshev polynomials, were also included in the optimization. Structures with low cohesive energies (up to 3 eV above the ground state) were assigned higher weights in the loss function. The employed training dataset together with the input for the {\\verb+pacemaker+} code is provided in the supplementary material~\\cite{suppl}.\n\nThe presented ACE parametrization comprises 488 basis functions with terms up to the fifth body order, which is sufficient to attain an outstanding overall accuracy while remaining computationally very efficient. As shown on the feature curve in Fig.~\\ref{fig:timing_n_funcs}(a), the accuracy can be further improved by increasing the number of basis functions~\\cite{pace_bochkarev_PhysRevMaterials.6.013804}, while the scaling of the computational time, shown in Fig.~\\ref{fig:timing_n_funcs}(b), remains linear~\\cite{PACE_Lysogorskiy2021}.\n\n\n\\begin{figure}\n\\centering\n \\subfloat[]{\\includegraphics[width=0.25\\textwidth]{learning_curves.png}}\n \\subfloat[]{\\includegraphics[width=0.25\\textwidth]{time_funcs.png}} \n \\caption{(a) The feature curves showing the RMSE of energy and force as a function of the number of basis functions; (b) the scaling of ACE computational time on GPU with the number of basis functions; vertical dashed lines mark the presented ACE parametrization with 488 basis functions.}\n \\label{fig:timing_n_funcs} \n\\end{figure}\n\n\n\n\\begin{table}\n\\centering\n\\begin{tabular}{lrrrr}\n\\toprule\n Category (train\/test) & $E_\\text{train}$ & $E_\\text{test}$ & $F_\\text{train}$ & $F_\\text{test}$ \\\\\n \n\\midrule\n sp$^2$ structures (4549\/528) & 29 & 28 & 465 & 499 \\\\\n sp$^3$ structures (6130\/623) & 56 & 54 & 587 & 567 \\\\\n amorphous\/liquid (2615\/315) & 59 & 74 & 307 & 376 \\\\\n general bulk (5926\/652) & 97 & 114 & 1332 & 1420 \\\\\n general clusters (2360\/276) & 154 & 186 & 1120 & 1195 \\\\\n\\bottomrule\n\\end{tabular}\n\\caption[fit stats]{Energy and force RMSE for each category of the reference dataset. Numbers in brackets correspond to the number of structures for training and testing, respectively. Energies are in meV\/atom and forces in meV\/\\AA{}. \\label{tab:rmse_dataset}}\n\\end{table}\n\n\nThe root mean square errors (RMSE) for the different subsets are given in Table~\\ref{tab:rmse_dataset}. The fact that the RMSEs for the train and test sets are comparable demonstrates that the model is not overfitted. The optimized potential has an energy RMSE of 21 meV\/atom for the structures within 3 eV\/atom from the ground state and 166 meV\/atom for the complete dataset. The corresponding force RMSEs amount to 218 and 689 meV\/\\AA{}, respectively. Most large force errors arise from high-energy structures from the `general bulk' and `general clusters' categories (see supplementary material~\\cite{suppl}) and structures with short interatomic distances, which show forces up to 100 eV\/\\AA{}. 
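For reference, the quoted error measures follow the standard definitions, assuming per-atom energy differences and componentwise force differences; a minimal sketch (added for illustration, with hypothetical array names) reads:\n\\begin{verbatim}\nimport numpy as np\n\ndef rmse_energy(e_ref, e_pred, n_atoms):\n    # per-atom energy RMSE (eV/atom) over structures\n    de = (np.asarray(e_pred) - np.asarray(e_ref)) / np.asarray(n_atoms)\n    return np.sqrt(np.mean(de ** 2))\n\ndef rmse_forces(f_ref, f_pred):\n    # componentwise force RMSE (eV/A) over all atoms\n    df = (np.asarray(f_pred) - np.asarray(f_ref)).ravel()\n    return np.sqrt(np.mean(df ** 2))\n\\end{verbatim}\nEvaluating these functions separately for each category gives entries as listed in Table~\\ref{tab:rmse_dataset}. 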
\n\n\n\\begin{figure}\n\\centering\n \n \\begin{tabular}{@{}c@{}}\n \\includegraphics[width=1\\columnwidth]{crossval_en_test.png}\n \n \\end{tabular}\n \n \\vspace{\\floatsep}\n\n \\begin{tabular}{@{}c@{}}\n \\includegraphics[width=1\\columnwidth]{crossval_f_test.png}\n \n \\end{tabular}\n \n \n \\caption{\\label{fig:crossref} Energy and force errors for a split test dataset consisting of 1,912 structures; the dashed vertical line marks the boundary of the differently weighted data.}\n\\end{figure}\n\n\nAn overall assessment of the accuracy of the presented ACE parametrization is given in Fig.~\\ref{fig:crossref}. Figure~\\ref{fig:crossref}(a) shows the predicted ACE energies with respect to the reference PBE data for a 10\\% split test set with 1,912 structures. The effect of the higher weighting of the low-energy structures is clearly visible. Most structures with energies below $-6.2$ eV\/atom, indicated by the vertical line, match the reference very closely. For structures with higher energies and lower weights, the deviations are larger. Figure~\\ref{fig:crossref}(b) details the distribution of the energy errors for the different categories. As these are not normalized distributions, the area under the peaks corresponds to the total number of structures within each category. The standard deviations associated with the categories differ considerably, with the sp$^2$ and sp$^3$ structures having the narrowest distributions. The force cross-correlation and error distributions are plotted for all categories in Figs.~\\ref{fig:crossref}(c) and (d), respectively. \n\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.45\\textwidth]{graphite_eb.png}\n\\caption{Energy as a function of the interlayer separation in graphite. Results for other models are shown in the supplementary material~\\cite{suppl}.}\n\\label{fig:graphite_eb}\n\\end{figure}\n\n\nFigure~\\ref{fig:graphite_eb} shows the binding energy between the layers of graphite as a function of the interlayer separation. The standard PBE functional gives a negligible binding energy of 1 meV\/atom at around 4.5 \\AA{}, i.e., essentially only a short-range repulsive interaction between the graphene sheets. This PBE reference is reproduced accurately by the base ACE (referred to as ACE$_\\text{PBE}$) that was trained on the uncorrected PBE data. By adding the D2 correction with a long-range cutoff of 9 \\AA{}, ACE achieves an excellent description of the cohesion in graphite, very close to that of TurboGAP. While qualitatively similar, GAP20 shows a rather oscillatory behavior, whereas PANNA significantly underestimates the range of the vdW interactions. Additional results for the hNN\\_Gr$_x$ and DeepMD potentials are given in the supplementary material~\\cite{suppl}.
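\n\nA minimal sketch of such an interlayer-separation scan is given below; it assumes an ASE-compatible calculator object {\\verb+calc+} for the ACE model (such a calculator is, e.g., provided by the {\\verb+python-ace+} package, although the exact interface shown here is an assumption rather than a prescription). The binding energy per atom follows by referencing the resulting curve to its large-separation limit.\n\\begin{verbatim}\nimport numpy as np\nfrom ase import Atoms\n\ndef ab_graphite(a=2.46, c=6.70):\n    # AB-stacked graphite, 4 atoms per hexagonal cell\n    cell = [[a, 0, 0],\n            [-a \/ 2, a * np.sqrt(3) \/ 2, 0],\n            [0, 0, c]]\n    scaled = [(0, 0, 0.25), (0, 0, 0.75),\n              (1 \/ 3, 2 \/ 3, 0.25), (2 \/ 3, 1 \/ 3, 0.75)]\n    return Atoms('C4', scaled_positions=scaled,\n                 cell=cell, pbc=True)\n\ndef binding_curve(calc, separations):\n    energies = []\n    for d in separations:             # interlayer distance\n        atoms = ab_graphite(c=2 * d)  # two layers per cell\n        atoms.calc = calc\n        energies.append(atoms.get_potential_energy()\n                        \/ len(atoms))\n    return np.array(energies)\n\\end{verbatim}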
\n\n\n\n\\section{Validation}\n\\label{sec:validation}\n\nWe carried out multiple validation tests to assess the performance of the carbon ACE. In the following we present key tests and compare the predictions of ACE to those of the best available potentials. Further validations are provided in the supplementary material~\\cite{suppl}.\n\n\n\\subsection{Stability of bulk phases} \\label{sec:basic_tests}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.45\\textwidth]{ev_groundstate_4pots_acegaps.png}\n\\caption{Relative stability of graphite, graphene and diamond as predicted by PBE+D2, ACE and the best available models. $\\Delta E$ is given with respect to graphite as described by the respective potential. Results for other NNPs are presented in the supplementary material~\\cite{suppl}.}\n\\label{fig:ev_groundstate}\n\\end{figure}\n\n\nOne of the peculiarities of bonding in carbon is that the energy of three-fold coordinated graphite and graphene is nearly the same as that of four-fold coordinated diamond, while the energy of the carbon dimer is much higher. This illustrates the importance of strong angular bond contributions as well as of weak dispersion interactions. According to our PBE+D2 calculations, the graphite ground state is separated from diamond and graphene by only 29 and 50~meV\/atom, respectively. This energy ordering is in agreement with experimental~\\cite{wagman1945heats} as well as recent theoretical predictions using high-level coupled-cluster theory, according to which diamond lies less than 30 meV\/atom above graphite~\\cite{Popov_2019_C8CP07592A, Gruber_PhysRevX.8.021043}. These subtle energy differences are, however, not captured correctly by all DFT functionals. While our PBE+D2 results agree well with those of the PBE+MBD \\cite{mbd1_PhysRevLett.108.236402,mbd2_doi:10.1063\/1.4865104} functional, which was used to generate the TurboGAP reference data, the optB88-vdW \\cite{opt1_PhysRevB.82.081101,opt2_PhysRevB.83.195131,opt3_PhysRevLett.92.246401,opt4_Klime__2009} and rVV10 \\cite{rVV10_PhysRevB.87.041108} functionals, which were used for the GAP20 and PANNA datasets, respectively, predict diamond to have a significantly larger energy than both graphite and graphene. The consequence of the different training data is illustrated in Fig.~\\ref{fig:ev_groundstate}, which shows the relative stability of the three structures as a function of the NNB distance obtained by ACE, TurboGAP, GAP20 and PANNA. While ACE and TurboGAP match closely the PBE+D2 reference data, GAP20 and PANNA predict diamond to be substantially less stable than both sp$^2$ allotropes. \n\n\n\\begin{figure}\n\\centering\n \\includegraphics[width=0.5\\textwidth]{gradia_transformation_2.png}\n \\caption{Energy barrier associated with the graphite-to-diamond transformation; $\\Delta E$ with respect to graphite. \\label{fig:gradia}\n }\n\\end{figure}\n\nFigure~\\ref{fig:gradia} shows the energy barrier associated with the transformation between rhombohedral graphite and diamond, which proceeds by the simultaneous buckling and lateral compression of the graphene sheets~\\cite{gradia_transform_PhysRevB.34.1191}. According to PBE+D2, the barrier is 350 meV\/atom with respect to graphite. ACE reproduces the barrier within a few meV. TurboGAP, GAP20 and PANNA overestimate it by 164, 97 and 95 meV\/atom, respectively.\n\n\n\\begin{figure}\n\\centering\n \\includegraphics[width=0.44\\textwidth]{ev_all_virtical.png}\n \n \\caption{\\label{fig:ev_curves} Binding energy vs. NNB distance for various carbon structures predicted by ACE, TurboGAP, GAP20 and PANNA. Symbols in the ACE panel represent the PBE+D2 reference. Binding energies for other carbon potentials can be found in the supplementary material~\\cite{suppl}.}\n\\end{figure}\n\n\nIn Fig.~\\ref{fig:ev_curves} we compare the binding energy curves for various carbon structures obtained by ACE, TurboGAP, GAP20 and PANNA. Results for other carbon potentials are provided in the supplementary material~\\cite{suppl}. ACE describes the binding energy curves in excellent agreement with the PBE+D2 reference over the whole range of considered NNB distances. 
This may be attributed to the extensive reference dataset and the extrapolation capabilities of the ACE basis functions~\\cite{PACE_Lysogorskiy2021}. In contrast, TurboGAP, GAP20 and PANNA reveal the well-known inability of most ML potentials to extrapolate outside of the reference dataset. The three models were fitted mostly to configurations with densities close to those of equilibrium graphite and diamond, and they clearly fail to describe the low-density structures. The problem is particularly severe for GAP20, which predicts large unphysical oscillations for nearly all structures beyond an NNB distance of about 1.7 \\AA{}. For all structures there exist multiple local minima, which can lead to the occurrence of spurious phases, for instance, in the vicinity of defects or when the system is subjected to external loads. The local minima and maxima can further affect the forces and undermine the description of bond making and breaking~\\cite{rebo_S_pastewka_PhysRevB.78.161402}. Finally, even though carbon does not readily form structures like fcc, bcc or sc, it is advisable to reproduce the properties of these structures as well, as they may occur in MD simulations under non-equilibrium conditions or at high pressure. For example, atomistic models of amorphous carbon are often generated by melting a sc lattice of C atoms~\\cite{marks1_transferability,marks2_graphitization,Jana_2019}, and the sc phase is found to be stable at extreme pressures~\\cite{Oleynik_2022_c_extrem,sun_2009}. \n\nIt is worth mentioning that the energy of an isolated atom, which represents the reference for the cohesive energy, is different for the four models. For ACE, the energy of a non-magnetic free atom is subtracted from the reference data, as discussed in Sec.~\\ref{sec:dft_settings}, and therefore the energy tends to zero as the atoms are pulled apart beyond the chosen cutoff. For TurboGAP, GAP20 and PANNA, this limit is not zero (cf. Fig.~\\ref{fig:ev_curves}) but $-0.51$, $+0.94$ and $-249.96$ eV\/atom, respectively. \n\n\n\n\\subsection{Elastic and vibrational properties} \\label{sec:elc_phon}\n\n\\begin{figure}\n\\centering\n \\includegraphics[width=0.5\\textwidth]{phonons_3_phases.png}\n \\caption{Phonon band structures and densities of states for (a) graphene, (b) graphite, and (c) diamond predicted by ACE (solid lines) and PBE+D2 (dashed lines). \\label{fig:phonons} }\n\\end{figure}
\n\n\n\\begin{table*}[]\n\\caption{Elastic moduli of graphite (Gr-ite), graphene (Gr-ene) and diamond (Dia); all values are in GPa. }\\label{tab:elastic_prop}\n\\begin{tabular}{l|ccc|ccc|ccc|ccc|ccc|}\\hline\n & \\multicolumn{3}{c|}{PBE+D2} & \\multicolumn{3}{c|}{ACE} & \\multicolumn{3}{c|}{TurboGAP} & \\multicolumn{3}{c|}{GAP20} & \\multicolumn{3}{c|}{PANNA} \\\\ \n & Gr-ite & Dia & Gr-ene & Gr-ite & Dia & Gr-ene & Gr-ite & Dia & Gr-ene & Gr-ite & Dia & Gr-ene & Gr-ite & Dia & Gr-ene \\\\ \\hline\n$C_{11}$ & 1095 $\\pm$ 25 & 1042 $\\pm$ 8 & 238 $\\pm$ 12 & 1045 & 1004 & 275 & 945 & 1024 & 261 & 1022 & 924 & 273 & 1045 & 1054 & 235 \\\\\n$C_{12}$ & 179 $\\pm$ 51 & 131 $\\pm$ 6 & 39 $\\pm$ 25 & 182 & 141 & 44 & 142 & 104 & 39 & 210 & 24 & 47 & 213 & 131 & 58 \\\\\n$C_{13}$ & -5 $\\pm$ 25 & & & 13 & & & 12 & & & 24 & & & -8 & & \\\\\n$C_{33}$ & 58 $\\pm$ 10 & & & 27 & & & 186 & & & 110 & & & 33 & & \\\\\n$C_{44}$ & 1 $\\pm$ 5 & 556 $\\pm$ 23 & & 9 & 537 & & 10 & 524 & & 35 & 474 & & -9 & 461 & \\\\\n$C_{66}$ & 457 $\\pm$ 36 & & 96 $\\pm$ 18 & 431 & & 115 & 402 & & 111 & 406 & & 113 & 403 & & 89 \\\\ \\hline\n\\end{tabular}\n\\end{table*}\n\n\nThe computed elastic moduli for graphene, diamond and graphite are listed in Table~\\ref{tab:elastic_prop}, and the phonon spectra along high-symmetry directions of the Brillouin zone together with the corresponding phonon densities of states are shown in Fig.~\\ref{fig:phonons}. The ACE predictions agree closely with the PBE+D2 reference for all three structures. As pointed out by the authors of PANNA~\\cite{Shaidu2021_PaNNA2021}, long-wavelength undulations of graphene or graphite sheets are sensitive to numerical details and can induce slightly negative phonon branches close to the $\\Gamma$ point as well as slightly negative elastic moduli. Elastic and phonon properties for other bulk structures are provided in the supplementary material~\\cite{suppl}. Most of these phases show a number of elastic and phonon instabilities that are accurately captured by ACE but not by TurboGAP, GAP20 or PANNA.
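\n\nA minimal sketch of a finite-strain evaluation of an elastic modulus, here $C_{11}$ of diamond from a central difference of the stress, is given below; for the normal strains of diamond no internal relaxation is required by symmetry, and {\\verb+calc+} again denotes an assumed ASE-compatible calculator.\n\\begin{verbatim}\nimport numpy as np\nfrom ase.build import bulk\nfrom ase.units import GPa\n\ndef c11_diamond(calc, eps=1e-3):\n    atoms = bulk('C', 'diamond', a=3.57)\n    stress = []\n    for s in (-eps, eps):\n        a = atoms.copy()\n        strain = np.eye(3)\n        strain[0, 0] += s            # uniaxial strain e_xx\n        a.set_cell(np.dot(a.get_cell(), strain),\n                   scale_atoms=True)\n        a.calc = calc\n        stress.append(a.get_stress(voigt=True)[0])\n    return (stress[1] - stress[0]) \/ (2 * eps) \/ GPa\n\\end{verbatim}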
\n\n\\subsection{Point defects} \\label{sec:point_defects}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.48\\textwidth]{gra_monovac_compiled.png}\n\\caption{Three-fold $D_{3h}$ (left) and reconstructed $C_{2v}$ (right) monovacancy in graphene as predicted by ACE. Bottom panel: side view of the graphene sheet; the out-of-plane displacement of the marked atom is about 0.4~\\AA. \n}\n\\label{fig:gra_monovac_restructuring}\n\\end{figure}\n\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=0.9\\textwidth]{defects.png}\n\\caption{Point defects in graphene (left panel) and surfaces in diamond (right panel). See text for details on the DFT calculations; the empty bars for the (110) surface correspond to values reported in the original GAP20 reference~\\cite{gap20_doi:10.1063\/5.0005084}.}\n\\label{fig:defects}\n\\end{figure*}\n\nThe directional covalent bonds in carbon imply large point defect energies and significant local reconstructions. Vacancies and their clusters influence a broad range of electronic, physical and mechanical properties of graphene~\\cite{han2014graphene, lopez2015increasing, skowron2015energetics, zambudio2021, lui2019}. An unreconstructed monovacancy in graphene, formed by removing a single carbon atom, has a three-fold $D_{3h}$ symmetry with three dangling bonds. It undergoes a Jahn-Teller distortion~\\cite{skowron2015energetics} and reconstructs to a configuration with lower $C_{2v}$ symmetry consisting of 5- and 9-membered rings. In Fig.~\\ref{fig:gra_monovac_restructuring} we show that ACE correctly reproduces this reconstruction, including the out-of-plane displacement of the central atom. The energy difference between the unreconstructed and reconstructed configurations provides an upper bound for the monovacancy migration barrier, predicted to be 0.37 eV by ACE, in close agreement with a range of DFT values~\\cite{skowron2015energetics}. Most classical as well as the considered ML potentials are unable to account for this reconstruction and predict the three-fold symmetric structure as the only stable monovacancy configuration~\\cite{qian2021comprehensive}. \n\nThe removal of two neighbouring atoms in graphene allows for a better saturation of dangling bonds. There exist three stable divacancy configurations. The simplest one consists of one 8-membered ring with two adjacent 5-membered rings (5-8-5 configuration). A rotation of a pair of bonds in the 8-membered ring results in two further configurations with lower energies, the 555-777 and 5555-6-7777 configurations, with increased numbers of adjacent rings. The formation energies of these configurations are reported in the supplementary material~\\cite{suppl}. ACE correctly predicts that the 555-777 divacancy is more stable than the 5-8-5 and 5555-6-7777 configurations, by about 0.8 and 0.5 eV, respectively. Since the formation energies of the divacancies are comparable to that of the monovacancy, it is favorable for two monovacancies to coalesce.\n\nFigure~\\ref{fig:defects} (left panel) shows a comparison of the computed energies of unreconstructed vacancy defects in graphene. For a fair comparison, the DFT references are those used for the construction of the respective potentials, namely, PBE+D2 for ACE, PBE+MBD for TurboGAP, and optB88-vdW for GAP20~\\cite{gap20_doi:10.1063\/5.0005084}. The DFT data for PANNA were not provided, but we assume them to be similar to those for GAP20. It should be noted that the differences between the DFT methods are comparable to the differences between the fitted and reference values. Energies and structural reconstructions of additional point defects are provided in the supplementary material~\\cite{suppl}.\n\n\n\n\\begin{figure}[ht!]\n\\centering\n\\includegraphics[width=0.49\\textwidth]{stone_Wales_rotation.png}\n\\caption{Bond rotation for the formation of the Stone-Wales (SW) defect.}\n\\label{fig:stone_Wales_rotatation}\n\\end{figure}\n\nThe point defect in graphene with the lowest energy is the Stone-Wales (SW) defect. It is formally generated by a 90$^{\\circ}$ rotation of one C-C bond, transforming four 6-membered rings into two 5-membered and two 7-membered rings. A consistent description of the SW defect and its formation mechanism implies that a potential models local changes in hybridization correctly; this is critical for simulating more complex structures, such as the recently reported monolayer amorphous graphene~\\cite{monolayer_a_grapheneToh2020_nature}. We computed the energy barrier associated with the C-C bond rotation to be $\\sim 8$ eV and the energy of the SW defect to be $4.91$ eV, both in excellent agreement with the PBE+D2 reference, as shown in Fig.~\\ref{fig:stone_Wales_rotatation}. \n\nOverall, ACE predicts the structures and energies of the basic point defects in graphene in close agreement with the reference DFT values, and a comparable level of accuracy is obtained also for other defects~\\cite{suppl}.
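\n\nThe formation energies discussed above follow from total-energy differences of defective and ideal supercells, $E_\\text{f} = E_\\text{def} - \\frac{N-n}{N} E_\\text{ideal}$ for the removal of $n$ atoms from an $N$-atom cell. A minimal sketch for a monovacancy in diamond, assuming an ASE-compatible calculator {\\verb+calc+}, reads:\n\\begin{verbatim}\nfrom ase.build import bulk\nfrom ase.optimize import BFGS\n\ndef vacancy_formation_energy(calc, n_repeat=4):\n    ideal = bulk('C', 'diamond', a=3.57).repeat(n_repeat)\n    ideal.calc = calc\n    e_ideal = ideal.get_potential_energy()\n    defect = ideal.copy()\n    del defect[0]                    # remove one atom\n    defect.calc = calc\n    BFGS(defect, logfile=None).run(fmax=0.01)  # relax positions\n    e_defect = defect.get_potential_energy()\n    return e_defect - (len(defect) \/ len(ideal)) * e_ideal\n\\end{verbatim}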
\n\n\n\\subsection{Diamond surfaces} \\label{sec:surfaces}\n\nThe right panel of Fig.~\\ref{fig:defects} shows the energies of relaxed but unreconstructed low-index diamond surfaces. As for the point defects in the left panel, the DFT reference energies correspond to the values obtained with the exchange-correlation functional that was used for the training of each potential. The empty bars for the (110) surface correspond to the values reported in the original GAP20 publication~\\cite{gap20_doi:10.1063\/5.0005084}, which we were not able to reproduce. ACE predicts the energies of all surfaces within a 3\\% error, while the errors of TurboGAP and GAP20 are larger.\n\nReconstructions of diamond surfaces driven by the relaxation of dangling bonds induce subtle atomic displacement patterns that are a challenging test for interatomic potentials. For example, due to their dangling bonds the unreconstructed $(111)$ and $(100)$ surfaces have significantly higher energies than the $(110)$ surface. Two distinct surface terminations, with three (3db) and with one (1db) dangling bond, exist for the $(111)$ surface. We examined the surface reconstructions by relaxing the atomic positions of larger supercells to minimize the energy. The simulations were initialized such that the symmetry of the unreconstructed surfaces was broken, either manually or by short MD simulations. \n\nFigure~\\ref{fig:surface_reconstruction} shows side and top views of the ideal and reconstructed $(100)$, 3db-$(111)$ and 1db-$(111)$ surfaces. Surface atoms with low coordination tend to rearrange to achieve the more favourable sp$^2$ (green) hybridization. The 3db-$(111)$ surface undergoes the so-called Pandey-chain reconstruction~\\cite{Pandey_reconstruction_PhysRevB.25.4338}, which rearranges the surface atoms into $\\pi$-bonded chain structures. It is a delicate reconstruction, and the surface layers show a strong tendency to graphitize~\\cite{chadi_dia_111_surfs}. The 1db-$(111)$ surface instead creates $\\pi$-bonded chains between the surface atoms. All reconstructions have a dramatic effect on the surface energies, which are reduced by 2.4, 7.3 and 0.3 J\/m$^2$ for the $(100)$, 3db-$(111)$ and 1db-$(111)$ surfaces, respectively. The surface energies are summarized in Table~\\ref{tab:surface_energies}; a minimal sketch of their evaluation is given below.\n\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.45\\textwidth]{surface_reconstruction_all_view_just3.png}\n\\caption{Top views of the (a) $(100)$ and (b,c) $(111)$ surfaces with the reconstructions predicted by ACE. White, red, green and blue correspond to 1-fold, 2-fold, 3-fold and 4-fold coordinated atoms, respectively.\n}\n\\label{fig:surface_reconstruction}\n\\end{figure}
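\n\nThe surface energies are obtained from the usual slab expression $\\gamma = (E_\\text{slab} - N \\varepsilon_\\text{bulk})\/(2A)$, with two free surfaces per periodic slab. A minimal sketch for the unreconstructed $(100)$ surface, assuming an ASE-compatible calculator {\\verb+calc+} and the bulk energy per atom $\\varepsilon_\\text{bulk}$ as input (the slab thickness is illustrative):\n\\begin{verbatim}\nimport numpy as np\nfrom ase.build import diamond100\n\ndef surface_energy(calc, e_bulk, size=(2, 2, 8)):\n    slab = diamond100('C', size=size, a=3.57, vacuum=10.0)\n    slab.calc = calc\n    e_slab = slab.get_potential_energy()\n    cell = np.array(slab.get_cell())\n    area = np.linalg.norm(np.cross(cell[0], cell[1]))\n    gamma = (e_slab - len(slab) * e_bulk) \/ (2 * area)\n    return gamma * 16.0218   # eV\/A^2 -> J\/m^2\n\\end{verbatim}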
\n\n\n\\begin{table}[]\n\\begin{tabular}{lccccc}\n\\hline\nSurface & DFT & ACE & TurboGAP & GAP20 & PANNA \\\\ \\hline\n\\bf{(100)} & & \\multicolumn{1}{l}{} & \\multicolumn{1}{l}{} & \\multicolumn{1}{l}{} & \\multicolumn{1}{l}{} \\\\\n ~~Ideal & 9.33 (9.33) & 9.10 & 11.85 & 10.09 & 8.97 \\\\\n ~~Relaxed & 9.04 (9.08) & 8.78 & 10.25 & 9.13 & 5.92 \\\\\n ~~Reconstr. & 4.97 (4.83) & 6.08 & 5.45 & 4.96 & 5.92 \\\\ \\hline\n\\bf{3db-(111)} & & \\multicolumn{1}{l}{} & \\multicolumn{1}{l}{} & \\multicolumn{1}{l}{} & \\multicolumn{1}{l}{} \\\\\n ~~Ideal & 13.01 (13.02) & 12.72 & 11.37 & 13.45 & 12.81 \\\\\n ~~Relaxed & 12.93 (12.98) & 12.71 & 11.37 & 12.17 & 12.65 \\\\\n ~~Reconstr. & & 7.36 & 6.56 & 6.24 & 6.72 \\\\ \\hline\n\\bf{1db-(111)} & & \\multicolumn{1}{l}{} & \\multicolumn{1}{l}{} & \\multicolumn{1}{l}{} & \\multicolumn{1}{l}{} \\\\\n ~~Ideal & 7.25 (8.10) & 6.72 & 6.56 & 5.12 & 4.80 \\\\\n ~~Relaxed & 5.72 (6.46) & 5.28 & 4.48 & 3.68 & 3.84 \\\\\n ~~Reconstr. & 3.51 (3.76) & 4.96 & 4.32 & 3.68 & 4.00 \\\\ \\hline\n\\bf{(110)} & & \\multicolumn{1}{l}{} & \\multicolumn{1}{l}{} & \\multicolumn{1}{l}{} & \\multicolumn{1}{l}{} \\\\\n ~~Ideal & 6.77 (7.46) & 7.13 & 6.88 & 5.76 & 5.12 \\\\\n ~~Relaxed & 5.76 (6.50) & 5.73 & 4.80 & 4.16 & 3.52 \\\\ \\hline\n\\end{tabular}\n\\caption{Surface energies in J\/m$^2$ for ideal, relaxed and reconstructed low-index diamond surfaces. DFT: PBE+D2; values in brackets: B3LYP from Ref.~\\cite{delapierre_2013_doi:10.1080\/00268976.2013.829250}.}\\label{tab:surface_energies}\n\\end{table}\n\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.48\\textwidth]{aitts.png}\n\\caption{Decohesion of unrelaxed, unreconstructed diamond crystals along the $(111)$, $(110)$ and $(100)$ crystallographic orientations. \n}\n\\label{fig:aitts}\n\\end{figure}\n\nIn addition to the static surface calculations, we carried out cleavage simulations to assess the smoothness of the energy landscape during bond breaking; a minimal sketch of the rigid-separation procedure is given below. A bulk diamond cubic crystal was rigidly separated, excluding relaxations and reconstructions, in the direction normal to the surface, using periodic supercells with 32 (for the $(100)$ surface) or 48 (for the $(110)$ and $(111)$ surfaces) atoms. The corresponding variations of the energy as a function of the separation are shown for all orientations in Fig.~\\ref{fig:aitts}. The energy increase is steepest for the $(111)$ surface orientation, for which one set of the C-C bonds is oriented parallel to the loading direction. DFT (PBE+D2) predicts that the energy plateaus at a separation of $\\approx 2$ \\AA{}, with a shallow maximum between 1.0 and 1.5 \\AA{}. For the $(100)$ and $(110)$ surface orientations, with bonds inclined with respect to the loading direction, the energy increases less steeply and without any barrier. For all three orientations, ACE captures the PBE+D2 reference closely. TurboGAP and especially GAP20 exhibit oscillations, while PANNA predicts relatively smooth decohesion curves.
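\n\nIn the sketch below, the cell is elongated along the surface normal (taken as $z$) and all atoms above a cut plane are displaced rigidly, without any relaxation; {\\verb+calc+} again denotes an assumed ASE-compatible calculator, and the first entry of {\\verb+deltas+} should be zero so that the energies are referenced to the unseparated crystal.\n\\begin{verbatim}\nimport numpy as np\n\ndef decohesion_curve(calc, atoms, z_cut, deltas):\n    energies = []\n    for delta in deltas:\n        a = atoms.copy()\n        cell = np.array(a.get_cell())\n        cell[2, 2] += delta          # open one interface\n        a.set_cell(cell, scale_atoms=False)\n        pos = a.get_positions()\n        pos[pos[:, 2] > z_cut, 2] += delta\n        a.set_positions(pos)\n        a.calc = calc\n        energies.append(a.get_potential_energy())\n    return np.array(energies) - energies[0]\n\\end{verbatim}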
\n\n\n\n\\section{Applications}\n\\label{sec:applications}\n\n\\subsection{Brittle fracture of diamond} \\label{sec:decohesion_crack}\n\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=1\\columnwidth]{crack_propagation.png}\n\\caption{Relative distance between the atoms at the crack tip as a function of the applied stress intensity factor $K_\\text{I}$. }\n\\label{fig:crack_propagation_plot}\n\\end{figure}\n\n\nFracture simulations of brittle materials are very challenging, as they require transferable models that are able to describe bond-breaking processes under large and inhomogeneous stresses. At the same time, the models need to remain numerically efficient to be able to simulate large supercells with complex crack geometries at finite temperatures and over realistic time scales~\\cite{andric2018atomistic, bitzek2015atomistic}. Here we present MD simulations of the brittle cleavage of diamond performed using the ACE, GAP20 and PANNA potentials, mostly intended to compare the predictions of these three models. A more extended study of diamond fracture will be presented elsewhere.\n\nWe simulated semi-infinite cracks with periodic boundary conditions applied along the crack front. The atomic configurations were generated by applying a given stress intensity factor, $K_\\text{I}$, to model the asymptotic crack tip region of linear elastic fracture mechanics without applying traction on the outer cell boundaries~\\cite{andric2018atomistic}, using the code {\\texttt{\\detokenize{atomsk}} }~\\cite{atomsk_HIREL2015212}; a minimal sketch of the crack-seeding displacement field is given at the end of this subsection. Depending on the magnitude of the applied $K_\\text{I}$, the crack either tends to heal or to propagate during the simulation. By varying $K_\\text{I}$, one can estimate the critical stress intensity factor, $K_\\text{IC}$.\n\nWe simulated nine cracks with different magnitudes of the stress intensity factor using LAMMPS \\cite{LAMMPS}. The simulation cells contained 5280 atoms, with the crack plane normal oriented along the $\\langle 011 \\rangle$ direction and with the $\\langle 111 \\rangle$ propagation direction~\\cite{suppl}. The initial crack tip was always located in between two $(011)$ planes in the center of the simulation cell. We followed the crack evolution for 2 ps using $NVT$ MD simulations at $T=300$ K. The change of the interatomic distance $r$ between two atoms at the initial crack tip (see the supplementary material~\\cite{suppl}) was used to quantify the healing or propagation of the crack. The variation of $r\/r_0$, where $r_0$ is the initial bond length, as a function of the applied $K_\\text{I}$ is plotted for ACE, GAP20 and PANNA in Fig.~\\ref{fig:crack_propagation_plot}. We were not able to run equivalent simulations using TurboGAP, since it is not implemented in LAMMPS. \n\nOur simulations show that below the critical loading all models predict a closing of the crack along the crack plane. However, ACE is the only model that sustains brittle cleavage when the loading exceeds the critical value of about 4.2 MPa m$^{1\/2}$. GAP20 and PANNA instead show local structural transformations into graphitic structures, which lead to a blunting of the crack tip, as displayed in Fig.~\\ref{fig:compiled_crack_600}. The formation of the graphitic structures explains the sharp drop of $r\/r_0$ for both potentials visible in Fig.~\\ref{fig:crack_propagation_plot}. To the best of our knowledge, this behavior has not been observed in any theoretical or experimental studies. As discussed in Sec.~\\ref{sec:basic_tests}, the phase transformation may be related to the overestimated energy of the diamond phase with respect to those of graphene and graphite, caused by the DFT reference data employed in the construction of the GAP20 and PANNA potentials.\n\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=0.98\\textwidth]{fracture_final_snapshots.png}\n\\caption{Snapshots of the crack configurations at $K_\\text{I} = 6$ MPa m$^{1\/2}$ after 1~ps, as predicted by the different models. Green color highlights the atoms with sp$^2$ bonding. GAP20 and PANNA show local structural transformations into graphitic structures at the crack tip, while ACE demonstrates brittle cleavage.}\n\\label{fig:compiled_crack_600}\n\\end{figure*}
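\n\nWhile the simulations above employed the anisotropic near-tip solution as implemented in {\\texttt{\\detokenize{atomsk}} }, the following minimal sketch shows the simpler isotropic plane-strain mode-I (Irwin) field for illustration; all quantities must be given in a consistent unit system. The returned displacements are added to the in-plane coordinates of the atoms around the crack tip.\n\\begin{verbatim}\nimport numpy as np\n\ndef mode_I_displacements(xy, tip, K, mu, nu):\n    # Irwin near-tip field; plane strain: kappa = 3 - 4 nu\n    kappa = 3.0 - 4.0 * nu\n    dx, dy = (xy - tip).T\n    r = np.hypot(dx, dy)\n    th = np.arctan2(dy, dx)\n    pref = K \/ (2.0 * mu) * np.sqrt(r \/ (2.0 * np.pi))\n    ux = pref * np.cos(th \/ 2) * (kappa - 1\n         + 2 * np.sin(th \/ 2) ** 2)\n    uy = pref * np.sin(th \/ 2) * (kappa + 1\n         - 2 * np.cos(th \/ 2) ** 2)\n    return np.column_stack([ux, uy])\n\\end{verbatim}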
\n\n\n\\subsection{Amorphous carbon}\n\\label{sec:amorphous_C}\n\n\nThe great variability of amorphous carbon (a-C) networks, governed by competing sp, sp$^2$ and sp$^3$ hybridizations, poses another difficult challenge for atomistic simulations. Two extensive comparative studies of fourteen interatomic potentials~\\cite{marks1_transferability, marks2_graphitization} showed that there exist marked differences in the predictions of the structural and physical properties of a-C systems. Among the investigated models, GAP17~\\cite{gap17_PhysRevB.95.094203} was found to provide the most reliable description (except for some unphysical predictions of 5-fold coordinated atoms at high densities). However, it had by far the highest computational cost of all potentials, limiting its use in large-scale simulations. Even though GAP17 was successfully applied to study the deposition of thin a-C films~\\cite{Caro2018_PhysRevLett.120.166101, GAP_application_thinfilm_PhysRevB.102.174201}, larger system sizes and extended simulation times are crucial to achieve realistic amorphous networks. For instance, Jana et al.~\\cite{Jana_2019} investigated in detail the effect of quench rates on the formation and properties of a-C structures using GAP17, the screened Tersoff (Tersoff-S) potential~\\cite{Pastewka_SiC_PhysRevB.87.205410} and DFT. The simulations revealed a crucial role of the quench rate for the resulting a-C morphology. Slower cooling rates allowed the atoms to reach energetically more favourable configurations, thus giving rise to structures with lower cohesive energies, while fast cooling rates resulted in more distorted and less stable forms of a-C.\n\nWe studied the properties of bulk a-C samples prepared with the liquid-quench MD protocol of Ref.~\\onlinecite{Jana_2019}; a minimal sketch of the protocol is given below. Simple cubic supercells containing 8000 atoms were melted for 4.0 ps using $NVT$ MD at 12000 K, employing the Nos\\'e-Hoover thermostat and a time step of 1 fs. The liquid phase was then equilibrated at 8000 K for 10 ps before quenching it to 300 K by linearly decreasing the temperature. We employed three different quench rates: fast at 1000 K\/ps, medium at 100 K\/ps, and slow at 10 K\/ps. The final structures were optimized by relaxing either the atomic positions only or both the atomic positions and the cell vectors to minimize the stresses in the cells. For the latter protocol, we observed negligible changes of the densities in most samples. For each quench rate, we generated ten a-C samples with densities ranging from 1.8 to 3.5 g\/cm$^3$. This range encompasses low-density nanoporous structures, crystalline graphite ($\\rho_\\text{gra} = 2.24$ g\/cm$^3$), and diamond ($\\rho_\\text{dia} = 3.54$ g\/cm$^3$).
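\n\nThe sketch assumes a liquid-carbon box set up analogously to the benchmark script in Sec.~\\ref{sec:software} and relies on the fact that {\\verb+fix nvt temp T1 T2+} in {\\texttt{\\detokenize{LAMMPS}} } ramps the target temperature linearly over a run.\n\\begin{verbatim}\ndef melt_quench(lmp, rate):               # rate in K\/ps\n    segments = [(12000, 12000, 4.0),      # melt\n                (8000, 8000, 10.0),       # equilibrate\n                (8000, 300, (8000 - 300) \/ rate)]\n    for t1, t2, ps in segments:\n        lmp.command('fix md all nvt temp %g %g 0.1'\n                    % (t1, t2))\n        lmp.command('run %d' % int(ps * 1000))  # 1 fs step\n        lmp.command('unfix md')\n\\end{verbatim}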
\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\columnwidth]{v3_aC_strucs.png}\n\\caption{Representative a-C structures at three densities obtained using the 1000 K\/ps (left) and 10 K\/ps (right) quench rates. Shown are slices of 1 nm thickness that were cut out of the simulation cell. White, red, green and blue colors correspond to 1-fold, 2-fold, 3-fold and 4-fold coordinated atoms, respectively. Coordination numbers were determined using a 1.85 \\AA{} cutoff.}\n\\label{fig:a_C_snapshots}\n\\end{figure}\n\n\nSnapshots of representative a-C structures with three different densities generated using the fast and slow quench rates are depicted in Fig.~\\ref{fig:a_C_snapshots}. Samples generated with the fast and medium quench rates exhibit uniformly disordered structures that differ in the fraction of sp$^2$ and sp$^3$ bonded atoms, in agreement with Ref.~\\cite{Jana_2019}. The structures of lowest density ($\\rho$ = 1.8 g\/cm$^3$) are composed of highly distorted and defective graphene sheets with a considerable number of nanovoids and sp-bonded carbon chains connecting the sheets. The structures of intermediate densities ($\\rho$ = 2.2 to 2.9 g\/cm$^3$) correspond to commonly synthesized a-C structures, and the fast and medium quench rates result in disordered glassy networks with a homogeneous mixture of sp$^2$ and sp$^3$ bonded atoms. The structures with the highest density ($\\rho$ = 3.4 g\/cm$^3$) contain mostly sp$^3$ bonded, diamond-like atoms.\n\nIn the thermodynamic limit of infinitesimally slow cooling we expect graphite, diamond, or coexisting graphite and diamond at relative phase fractions that are determined by the density. At the slow cooling rate we observe the onset of phase separation and the occurrence of ordered structures with separated sp$^2$ and sp$^3$ regions. For the lowest density, slow quenching leads to more extended sheets with fewer defects. Such graphitized nanostructures have been observed in previous simulations with GAP17~\\cite{marks2_graphitization,Jana_2019} and TurboGAP~\\cite{TurboGAP}, as well as in the experimentally motivated simulations by Bhattarai et al.~\\cite{exp_aG_BHATTARAI2018168,exp_aG_C8CP02545B}. \n\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\columnwidth]{aC_sp3_atrel.png}\n\\caption{Variation of the sp$^3$ bond fraction in a-C with density for different quench rates and methods (see text for details).\n}\n\\label{fig:sp3_fraction}\n\\end{figure}\n\n\n\n\nFigure~\\ref{fig:sp3_fraction} displays how the fraction of sp$^3$ bonded atoms changes as a function of density for different quench rates and potentials; a minimal sketch of the underlying coordination analysis is given below. The results for GAP17 (4087 atoms) and DFT (216 atoms) were taken from Ref.~\\cite{Jana_2019}, and those for PANNA (216 atoms) from Ref.~\\cite{Shaidu2021_PaNNA2021}. As expected, the sp$^3$ fraction increases with density for all methods and quench rates. For the first time, we observe a clear influence of the quench rate on the a-C morphology with a non-classical potential. For the fast quench at 1000 K\/ps, which is also feasible with DFT, the sp$^3$ fraction increases almost linearly with density and leads to a homogeneous amorphous network, see Fig.~\\ref{fig:a_C_snapshots}. In contrast, at the slow quench rate of 10 K\/ps the sp$^3$ fraction is almost negligible up to $\\rho$ = 2.55 g\/cm$^3$ and then increases sharply to reach the high values of diamond-like a-C structures. In Fig.~\\ref{fig:sp3_fraction}, we mark the equilibrium densities of graphite and diamond by dashed vertical lines. The solid vertical line indicates the density at which the free energies of homogeneously compressed graphite and dilated diamond coincide at $T=0$ K.
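\n\nIn the sketch below, each atom is classified by the number of neighbours within the 1.85~\\AA{} cutoff (2: sp, 3: sp$^2$, 4: sp$^3$), using the minimum-image convention for an orthorhombic box; the $O(N^2)$ loop is kept for clarity.\n\\begin{verbatim}\nimport numpy as np\n\ndef coordination_numbers(pos, box, rcut=1.85):\n    coord = np.zeros(len(pos), dtype=int)\n    for i in range(len(pos)):\n        d = pos - pos[i]\n        d -= np.round(d \/ box) * box   # minimum image\n        r = np.linalg.norm(d, axis=1)\n        coord[i] = np.count_nonzero(r < rcut) - 1\n    return coord\n\ndef sp3_fraction(pos, box):\n    return np.mean(coordination_numbers(pos, box) == 4)\n\\end{verbatim}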
\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=\\columnwidth]{aC_vorvol_adf.png}\n \\caption{\n Distribution of (a) Voronoi atomic volumes and (b) bond angles in a-C structures at densities of 2.0, 2.9 and 3.4 g\/cm$^3$, quenched at 10 K\/ps with ACE.}\n \\label{fig:voronoi_adf}\n\\end{figure}\n\nThe distributions of the Voronoi atomic volumes for the slowly quenched samples with densities of 2.0, 2.9 and 3.4 g\/cm$^3$ are plotted in Fig.~\\ref{fig:voronoi_adf}(a). For the highest-density sample, there is a single peak (dark blue) centered about the atomic volume of diamond. In contrast, the sample with the intermediate density is characterized by two peaks (dark red). The smaller peak coincides again with the volume of diamond, while the higher peak is located in between the diamond and graphite volumes. This result is in accordance with the phase separation into sp$^2$ and sp$^3$ dominated regions observed in Fig.~\\ref{fig:a_C_snapshots}. The position of the higher peak indicates that the sp$^2$ phase is a compressed form of amorphous graphite, as it is located below the equilibrium volume of crystalline graphite. The distribution of the low-density sample (green) is broadest and skewed towards larger volumes due to the existence of nanovoids.\n\nA qualitatively similar picture emerges from the distributions of bond angles displayed in Fig.~\\ref{fig:voronoi_adf}(b). The angular distribution function (ADF) makes it possible to characterize lattice distortions by comparing the positions and widths of the ADF peaks with the bond angles of ideal graphite and diamond. The ADFs are centered around $\\phi$ = 109.5$^{\\circ}$ for the high-density sample and around $\\phi$ = 120$^{\\circ}$ for the low-density sample. For the intermediate-density sample, the distribution is significantly broader and is clearly composed of two overlapping peaks centered at the angles mentioned above. The ADFs for structures generated using the medium and fast quench rates are shown in the supplementary material~\\cite{suppl}.\n\n\\subsection{Fullerene formation}\n\n\\begin{figure*}\n\\centering\n \\includegraphics[width=0.99\\textwidth]{fuller_growth.png}\n \\caption{Formation of fullerenes from the gas phase during a combustion simulation. Small dots are Ar atoms.\\label{fig:fullerene_growth}}\n\\end{figure*}\n\nCarbon clusters are often formed during the combustion of carbon-rich materials at high temperatures and pressures. The nucleation and growth of carbon clusters have been the subject of various experimental and theoretical studies~\\cite{Greiner1988,Millicent2017,Korets2010,Pineau2008,Qin2020,Los2009}. Here, we present long-time MD simulations of the nucleation and growth of molecular fullerenes from gas-phase carbon at high pressure and temperature.\n\nAs the starting configuration, 402 C atoms and 2973 Ar atoms were arranged randomly in a cubic supercell with a side length of 47.6 \\AA{}. The Ar-Ar and Ar-C interactions were modeled using a simple Lennard-Jones potential with parameters taken from Refs.~\\cite{Pineau2008,Shashkov_1979}, while the carbon interactions were modeled using ACE. The Ar atoms serve as a proxy to exert pressure in the cell and to induce collisions between the C atoms, but do not participate in chemical reactions. The $NVT$ ensemble was used to run MD at 3000 K for 12 ns. Snapshots from the simulation are shown in Fig.~\\ref{fig:fullerene_growth}.\n\nSimilar to the findings of Pineau et al.~\\cite{Pineau2008}, early in the simulation the gas-phase carbon atoms bond together to form small buckyball molecules. As the system evolves, the buckyballs interact and coalesce into larger fullerenes. Eventually, at around 12 ns, all 402 atoms have merged into a single large sp$^2$-bonded fullerene cluster; a minimal sketch of the cluster analysis used to follow this growth is given below.
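\n\nIn the sketch below, atoms are grouped into connected components of the bond network (pairs within a cutoff), and the size of the largest component is followed over time; the same minimum-image convention as in the coordination analysis above is assumed.\n\\begin{verbatim}\nimport numpy as np\n\ndef largest_cluster(pos, box, rcut=1.85):\n    n = len(pos)\n    adj = [[] for _ in range(n)]\n    for i in range(n):                 # bond network\n        d = pos - pos[i]\n        d -= np.round(d \/ box) * box\n        r = np.linalg.norm(d, axis=1)\n        adj[i] = list(np.nonzero((r > 0) & (r < rcut))[0])\n    seen = np.zeros(n, dtype=bool)\n    best = 0\n    for i in range(n):                 # depth-first search\n        if seen[i]:\n            continue\n        stack, size = [i], 0\n        seen[i] = True\n        while stack:\n            k = stack.pop()\n            size += 1\n            for j in adj[k]:\n                if not seen[j]:\n                    seen[j] = True\n                    stack.append(j)\n        best = max(best, size)\n    return best\n\\end{verbatim}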
\n\n\n\\section{Summary and conclusions} \n\\label{sec:conclusions}\n\nWe developed a general-purpose ACE parametrization for carbon that surpasses the accuracy and transferability of state-of-the-art ML models at a fraction of their computational cost. The outstanding predictive power of ACE stems from its physically and chemically motivated formulation, the smooth extrapolative properties of the ACE basis, and carefully chosen, internally consistent training data.\n\nWe validated the potential extensively with a number of challenging tests. ACE accurately predicts the structural and thermodynamic properties of a broad range of ideal and defective carbon polytypes and captures the complex bonding of carbon, including bond distortions as well as bond breaking and making. We showed several exemplary cases where the best available ML potentials fail while the ACE predictions are correct.\n\nThe efficiency and robustness of ACE were demonstrated in three distinct applications. In simulations of diamond fracture we showed that ACE maintains brittle cleavage when the system is strained beyond the critical load. In contrast, GAP20 and PANNA both predict a graphitic phase transformation at the crack tip. Simulations of non-equilibrium amorphous carbon structures reveal that their structural morphology depends not only on the density but also strongly on the quench rate. This result could only be achieved due to the outstanding computational efficiency of ACE, which enables much slower quench rates than was possible with other ML potentials. Lastly, we examined the capability of ACE to describe the evolution of large fullerene clusters during combustion at high temperatures and pressures. The nucleation and growth of these clusters require not only long-time MD simulations but also a reliable description of bond formation under highly non-equilibrium conditions.\n\nIn summary, our carbon ACE opens new possibilities for the structural modeling of carbon at the atomic scale. It not only describes the fundamental properties of the carbon allotropes with DFT accuracy, but is also able to maintain this accuracy in large-scale simulations. If necessary, the accuracy and transferability of ACE can be improved further systematically, either by tailoring the training dataset to the required application or by extending the ACE basis. The increased complexity of the ACE parametrization increases the computational cost only linearly, which is in stark contrast to the much steeper scaling behavior of most ML models. Finally, elemental ACE models can be readily extended or combined to address multi-component systems, such as hydrocarbons or transition metal carbides.\n\n\n\n\\section*{Acknowledgements}\nThe authors would like to acknowledge valuable discussions with Lars Pastewka, Romain Perriot and Bernd Meyer. MQ acknowledges funding through a scholarship from the International Max Planck Research School for Interface Controlled Materials for Energy Conversion (IMPRS-SurMat). This work was in part supported by the German Science Foundation (DFG), projects 405621081 and 405621217.\n\n\n\n\\newpage\n\n\n\\section{D2 correction}\nThe D2 term in ACE is expressed as a simple pair potential\n\n\\begin{equation}\n E_\\text{disp} = -\\frac{1}{2} \\sum_{i} \\sum_{j \\neq i} \\frac{C_6}{r_{ij}^6} f_{d}(r_{ij}) ,\n\\end{equation}\nwhere the damping function $f_d$ is given by\n\\begin{equation}\n f_d = \\frac{s_6}{1+\\exp(-d (r_{ij}\/r_c - 1))} .\n\\end{equation}\nThe dispersion coefficient $C_{6}$, the damping factor $d$, and the vdW radius $r_c$ were parameterized for all elements up to Xe~\\cite{vdw_d2_coorection}. The global scaling parameter $s_{6}$ depends on the specific DFT functional used and is set to 0.75 for PBE.
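\n\nA direct transcription of the two equations above into a minimal reference implementation is given below; the pairwise sum counts each pair once, which absorbs the factor $1\/2$. The damping steepness $d = 20$ corresponds to Grimme's original D2 parameterization, and the 9~\\AA{} cutoff is the one used for ACE; the $C_6$ and $r_c$ values for carbon are taken as input parameters.\n\\begin{verbatim}\nimport numpy as np\n\ndef e_disp(pos, box, C6, rc, s6=0.75, d=20.0, rmax=9.0):\n    e = 0.0\n    for i in range(len(pos)):\n        for j in range(i + 1, len(pos)):\n            dv = pos[j] - pos[i]\n            dv -= np.round(dv \/ box) * box\n            r = np.linalg.norm(dv)\n            if r < rmax:\n                fd = s6 \/ (1.0 + np.exp(-d * (r \/ rc - 1.0)))\n                e -= C6 \/ r ** 6 * fd\n    return e\n\\end{verbatim}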
\n\n\n\\section{Validation of neural network models}\n\nThis section includes additional validation of the recent NN models hNN\\_Gr$_x$ and DeepMD.\n\n\\begin{figure}\n\\centering\n \\includegraphics[width=0.45\\columnwidth]{graphite_eb_nnps.png}\n \\hfill\n \\includegraphics[width=0.45\\columnwidth]{gradia_transformation_nnps.png}\n \\caption{(a) Binding of graphite as a function of the interlayer separation and (b) the transformation path from graphite to diamond, as predicted by the hNN\\_Gr$_x$ and DeepMD potentials.\\label{fig:nnp_graphite_eb} }\n\\end{figure}\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.75\\textwidth]{ev_groundstate_NNpots.png}\n\\caption{Relative stability of the three lowest-energy phases of carbon, as predicted by DFT and the NN-based models. $\\Delta E$ refers to the energy difference with respect to graphite given by the respective potential. Solid, dashed and dotted lines refer to the graphite, graphene and diamond phases, respectively.}\n\\label{fig:ev_groundstate_nnps}\n\\end{figure}\n\n\n\\clearpage\n\\section{Comparison of RMSE values for energies and forces}\n\n\\begin{figure*}\n\\centering\n\\includegraphics[height=0.9\\textheight]{all_Crossvals.png}\n\\caption{Fit metrics for the 10\\% split test datasets corresponding to the different categories. }\n\\label{fig:all_crossvalidations}\n\\end{figure*}\n\n\\begin{table}[h]\n\\caption{Comparison of RMSE values for energies and forces for the considered models, computed from a test dataset. RMSE values are calculated with respect to the PBE data. Energies are expressed as relative cohesive energies with respect to the ground state of the given potential in order to account for the different references.}\\label{tab:s_rmse_pots}\n\\begin{tabular}{lcc}\n\\hline\n Potential & E$_\\text{RMSE}$ (meV\/atom) & F$_\\text{RMSE}$ (meV\/\\AA{}) \\\\ \\hline\nACE & 102 & 539 \\\\\nTurboGAP & 238 & 883 \\\\\nDeepMD & 841 & 942 \\\\\nGAP20 & 1083 & 725 \\\\\nhNN\\_Gr$_\\text{x}$ & 1552 & 1982 \\\\\nPANNA & 1793 & 1327 \\\\ \\hline\n\\end{tabular}\n\\end{table}\n\n\n\\begin{figure}\n\\centering\n \n \\begin{tabular}{@{}c@{}}\n \n \\includegraphics[width=0.9\\columnwidth]{pots_testset_E_separated.png}\n \n \\end{tabular}\n \n \\small (a)\n \n \\vspace{\\floatsep}\n\n \\begin{tabular}{@{}c@{}}\n \\includegraphics[width=0.9\\columnwidth]{pots_testset_F_separated.png} \n \n \\end{tabular}\n \n \\small (b)\n \n \n \\caption{\\label{fig:s_pots_crossref} (a) Cohesive energy and (b) force validation plots of the considered models for a split test dataset with 1,912 structures. RMSE values are given in Table~\\ref{tab:s_rmse_pots}. Points are colored according to the legend given in Fig.~1 of the main paper.}\n\\end{figure}
\n\n\\clearpage\n\\section{Validation of additional models and properties}\\label{sec:all_Ev}\n\n\n\\begin{table}[h]\n\\caption{Formation energies (in eV) for point defects in the graphene and diamond lattices.}\n\\begin{tabular}{lcccccc}\n\\hline\n & DFT* & ACE & GAP20 & PANNA & hNN\\_Gr$_\\text{x}$ & DeepMD \\\\ \\hline\nGraphene & & & & & & \\\\ \\hline\nStone-Wales & 4.9 & 4.9 & 4.8 & 4.6 & 5.3 & 4.8 \\\\\nMonovacancy & 7.7 & 5.8 & 7.0 & 6.5 & 7.9 & 6.2 \\\\\nDivacancy (5-8-5) & 7.4 & 7.2 & 7.9 & 7.0 & 8.0 & 7.2 \\\\\nDivacancy (555-777) & 6.6 & 6.4 & 6.9 & 6.0 & 6.9 & 6.5 \\\\\nDivacancy (5555-6-7777) & 6.9 & 6.9 & 7.4 & 6.3 & 7.3 & 8.0 \\\\\nAdatom & 6.4 & 6.4 & 5.9 & 10.8 & 8.3 & 6.7 \\\\ \\hline\nDiamond & & & & & & \\\\ \\hline\nMonovacancy & 6.6 & 5.8 & 4.3 & 5.0 & & 5.7 \\\\\nDivacancy & 9.1 & 12.0 & 6.6 & 11.0 & & 11.6\\\\ \\hline\n\\end{tabular}\n\n{*The DFT reference is taken from Ref.~\\cite{gap20_doi:10.1063\/5.0005084} and is given for the optB88-vdW functional.}\n\\end{table}\n\n\n\n\\begin{figure}\n\n\\centering\n \\subfloat{\\includegraphics[width=0.33\\textwidth]{ev_pot_deepMD.png}} \n \\subfloat{\\includegraphics[width=0.33\\textwidth]{ev_pot_tb.png}}\n \\subfloat{\\includegraphics[width=0.33\\textwidth]{ev_pot_reaxff.png}}\n\n\n \\subfloat{\\includegraphics[width=0.33\\textwidth]{ev_pot_gr_nn.png}} \n \\subfloat{\\includegraphics[width=0.33\\textwidth]{ev_pot_airebo.png}}\n \\subfloat{\\includegraphics[width=0.33\\textwidth]{ev_pot_lcbop.png}}\n \n\\caption{Energy-volume curves for some bulk phases as predicted by (a) the DeepMD NNP~\\cite{deep_MD_carbonpot_WANG20221}, (b) the tight-binding model parameterized by Xu et al.~\\cite{Xu_1992}, (c) ReaxFF for hydrocarbon oxidation~\\cite{reaxff_CHO_Chenoweth2008}, (d) the NNP for multilayered graphene~\\cite{NNP_Gr_PhysRevB.100.195419}, (e) AIREBO with the Morse modification~\\cite{AIREBO_M_doi:10.1063\/1.4905549} and (f) the LCBOP potential~\\cite{LCBOP_PhysRevB.68.024107}. \\label{fig:ev_suppl} } \n\\end{figure}\n\n\n\\begin{figure}\n\\centering\n \\includegraphics[height=0.9\\textheight]{phonons_otherbulk.png}\n \\caption{Computed phonon band structures for the fcc, bcc and sc bulk phases; all three phases are dynamically unstable according to DFT.\\label{fig:other_bulk_phonons} }\n\\end{figure}\n\n\n\\clearpage\n\\section{Diamond fracture}\n\n\\begin{figure}[h]\n \\centering\n \\begin{tabular}{@{}c@{}}\n \\includegraphics[width=0.5\\linewidth]{crack_480_full.png} \\\\[\\abovecaptionskip]\n \\small (a)\n \\end{tabular}\n\n \\vspace{\\floatsep}\n\n \\begin{tabular}{@{}c@{}}\n \\includegraphics[width=0.5\\linewidth]{cracktip_480.png} \\\\[\\abovecaptionskip]\n \\small (b)\n \\end{tabular}\n\n \\caption{An example of the simulation cell consisting of 5280 atoms used to study brittle crack propagation in diamond (a), and a close-up of the crack tip region (b).}\\label{fig:crack_tip}\n\\end{figure}\n\n\\clearpage\n\\section{Amorphous carbon}\n\n\\begin{figure}[h]\n\\centering\n \\includegraphics[width=0.9\\textwidth]{adf_all.png}\n \\caption{A comparison of the angular distribution functions for a-C structures at different quench rates.\\label{fig:all_adfs} }\n\\end{figure}\n\n\\clearpage