diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzkfpi" "b/data_all_eng_slimpj/shuffled/split2/finalzzkfpi" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzkfpi" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\\label{1}\n\nEdges play an important role in many vision tasks~\\cite{YangZYPML20,wu2012strong,ramamonjisoa2020predicting,wang2015designing}. While generic edge detection~\\cite{xie2015hed,liu2017rcf,he2019bdcn,wang2017ced} has been extensively studied for decades, specific-edge detection has recently attracted increasing effort due to its practical applications concerning different types of edges, such as occlusion contours~\\cite{wang2016doc,wang2018doobnet,lu2019ofnet} or semantic boundaries~\\cite{yu2017casenet,dff19}.\n\nIn his seminal work \\cite{marr1982vision}, David Marr summarized four basic ways edges can arise: (1) \\textit{surface-reflectance discontinuity}, (2) \\textit{illumination discontinuity}, (3) \\textit{surface-normal discontinuity}, and (4) \\textit{depth discontinuity}, as shown in Fig.~\\ref{fig:fig1}. Recent studies~\\cite{YangZYPML20,wu2012strong,ramamonjisoa2020predicting,wang2015designing} have shown that the above types of edges are beneficial for downstream tasks. For example, pavement crack detection (reflectance discontinuity) is a critical task for intelligent transportation \\cite{YangZYPML20}; shadow edge (illumination discontinuity) detection is a prerequisite for shadow removal and path detection \\cite{wu2012strong}; \\cite{ramamonjisoa2020predicting} and \\cite{wang2015designing} show that depth edge and normal edge representations prompt refined normal and sharp depth estimation, respectively. Besides, \\cite{KimTO15Joint} utilizes four types of cues simultaneously to improve the performance of depth refinement.\n\nDespite their importance, fine-grained edges are still under-explored, especially when compared with generic edges. 
Generic edge detectors usually treat edges indistinguishably, while existing studies for specific edges focus on individual edge types. By contrast, the four fundamental types of edges, to the best of our knowledge, have never been explored in an integrated edge detection framework.\n\n\n\nIn this paper, for the first time, we propose to simultaneously detect the four types of edges, namely \\textit{reflectance edge} (RE), \\textit{illumination edge} (IE), \\textit{normal edge} (NE) and \\textit{depth edge} (DE). Although these edges share similar patterns of intensity variation in images, they have different physical bases. Specifically, REs and IEs are mainly related to photometric reasons -- REs are caused by changes in material appearance (\\emph{e.g.}, texture and color), while IEs are produced by changes in illumination (\\emph{e.g.}, shadows, light sources and highlights). By contrast, NEs and DEs reflect geometry changes in object surfaces or depth discontinuities. Considering the correlations and distinctions among all types of edges, we develop a CNN-based solution, named \\textit{RINDNet}, for jointly detecting the above four types of edges.\n\nRINDNet works in three stages. In stage \\uppercase\\expandafter{\\romannumeral1}, it extracts general features and spatial cues from a backbone network for all edges. Then, in stage \\uppercase\\expandafter{\\romannumeral2}, it proceeds with four separate decoders. Specifically, low-level features are first integrated under the guidance of high-level hints by the Weight Layer (WL), and then fed into the RE-Decoder and IE-Decoder to produce features for REs and IEs respectively. At the same time, the NE- and DE-Decoders take the high-level features as input and extract effective features. After that, these features and accurate spatial cues are forwarded to four decision heads in stage \\uppercase\\expandafter{\\romannumeral3} to predict the initial results. 
Finally, the attention maps obtained by the Attention Module (AM), which captures the underlying relations between all types, are aggregated with the initial results to generate the final predictions. All these components are differentiable, making RINDNet an end-to-end architecture that jointly optimizes the detection of the four types of edges.\n\n\nTraining and evaluating edge detectors for all four types of edges requires images with all such edges annotated. In this paper, we create the first known such dataset, named \\textit{BSDS-RIND}, by carefully labeling images from the BSDS~\\cite{arbelaez2010bsds} benchmark (see Fig.~\\ref{fig:fig1}). BSDS-RIND allows the first thorough evaluation of edge detection of all four types. The proposed RINDNet shows clear advantages over previous edge detectors, both quantitatively and qualitatively. The source code, dataset, and benchmark are available at \\url{https:\/\/github.com\/MengyangPu\/RINDNet}.\n\nWith the above efforts, our study is expected to stimulate further research along this line, and benefit more downstream applications with rich edge cues. Our contributions are summarized as follows: (1) We develop a novel end-to-end edge detector, RINDNet, to jointly detect the four types of edges. RINDNet is designed to effectively investigate shared information among different edges (\\emph{e.g.}, through feature sharing) and meanwhile flexibly model the distinctions between them (\\emph{e.g.}, through edge-aware attention). (2) We present the first public benchmark, BSDS-RIND, dedicated to simultaneously studying the four edge types, namely reflectance edge, illumination edge, normal edge and depth edge. 
(3) In our experiments, the proposed RINDNet shows clear advantages over the state of the art.\n\n\\section{Related Works}\n\\label{2}\n\n\n\\vspace{-2mm}\n\\paragraph{Edge Detection Algorithms.}\nEarly edge detectors~\\cite{kittler1983accuracy,canny1986computational,winnemoller2011xdog} obtain edges directly from the analysis of image gradients. By contrast, learning-based methods \\cite{martin2004learning,dollar2006supervised,lim2013sketch} exploit different low-level features that respond to characteristic changes, and then train a classifier to generate edges. CNN-based edge detectors~\\cite{kokkinos2015pushing,deng2020dscd,xu2017AMHNet,shen2015deepcontour,bertasius2015deepedge,bertasius2015hfl,liu2016rds,maninis2016cob,deng2018lpcb,kelm2019rcn,poma2020dexined} do not rely on hand-crafted features and achieve better performance.\nCombining multi-scale and multi-level features, \\cite{xie2015hed,liu2017rcf,poma2020dexined,he2019bdcn} yield outstanding progress on generic edge detection. A novel refinement architecture is also proposed in \\cite{wang2017ced}, using a top-down backward refinement pathway to generate crisp edges.\nRecent works~\\cite{zhen2020joint,acuna2019devil,yu2018simultaneous,yang2016object} pay more attention to specific types of edges. In~\\cite{hariharan2011semantic}, a generic object detector is combined with bottom-up contours to infer object contours. CASENet~\\cite{yu2017casenet} adopts a nested architecture to address semantic edge detection. For better prediction, DFF~\\cite{dff19} learns adaptive weights to generate specific features for each semantic category. For occlusion boundary detection, DOC~\\cite{wang2016doc} decomposes the task into occlusion edge classification and occlusion orientation regression, and then two sub-networks are used to separately perform the above two tasks. 
DOOBNet~\\cite{wang2018doobnet} uses an encoder-decoder structure to obtain multi-scale and multi-level features, and shares the backbone features with two branches. OFNet~\\cite{lu2019ofnet} considers the relevance and distinction between occlusion edges and orientations, and thus shares the occlusion cues between the two sub-networks.\n\n\n\\vspace{-5mm}\n\\paragraph{Edge Datasets.} \nMany datasets have been proposed for studying edges.\nBSDS \\cite{arbelaez2010bsds} is a popular dataset for detecting generic edges, containing $500$ RGB natural images. Although each image is annotated by multiple users, they usually pay attention to salient edges related to objects. BIPED~\\cite{poma2020dexined} is created to explore more comprehensive and dense edges, and contains $250$ outdoor images. NYUD~\\cite{silberman2012indoor} contains $1,449$ RGB-D indoor images, and lacks edge types pertaining to outdoor scenes. Significantly, Multicue~\\cite{mely2016multicue} considers the interaction between several visual cues (luminance, color, stereo, motion) during boundary detection.\n\nMore recently, SBD \\cite{hariharan2011semantic} was presented for detecting semantic contours, using the images from the PASCAL VOC challenge \\cite{everingham2010pascal}. Cityscapes \\cite{cordts2016cityscapes} provides object and semantic boundaries focusing on road scenes. For reasoning about occlusion relationships between objects, the dataset in \\cite{ren2006figure} consists of $200$ images, where boundaries are assigned figure\/ground labels. Moreover, PIOD~\\cite{wang2016doc} contains $10,000$ images, each with two annotations: a binary edge map denoting edge pixels and a continuous-valued occlusion orientation map. 
The recent dataset in~\\cite{ramamonjisoa2020predicting} annotates the NYUD test set for evaluating occlusion boundary reconstruction.\n\n\nOur work is inspired by the above pioneering studies, but makes novel contributions in two aspects: the proposed RINDNet, to the best of our knowledge, is the first edge detector to jointly detect all four types of edges, and the proposed BSDS-RIND is the first benchmark with all four types of edges annotated.\n\n\\section{Problem Formulation and Benchmark}\n\\label{3}\n\n\\subsection{Problem Formulation}\n\\label{3.1}\nLet $X \\in \\mathbb{R}^{3 \\times W \\times H}$ be an input image with ground-truth labels $\\mathcal{E}=\\{E^{r},E^{i},E^{n},E^{d}\\}$, where $E^{r}, E^{i}, E^{n}, E^{d} \\in\\{0,1\\}^{W \\times H}$ are binary edge maps indicating the reflectance edges (REs), illumination edges (IEs), surface-normal edges (NEs) and depth edges (DEs), respectively. Our goal is to generate the final predictions $\\mathcal{Y}=\\{Y^{r}, Y^{i}, Y^{n}, Y^{d}\\}$, where $Y^{r}, Y^{i}, Y^{n}, Y^{d}$ are the edge maps corresponding to REs, IEs, NEs and DEs, respectively. In our work, we aim to learn a CNN-based edge detector $\\psi$: \n$\\mathcal{Y} = \\psi (X)$.\n\nThe training of $\\psi$ can be done over training images by minimizing loss functions between $\\mathcal{E}$ and $\\mathcal{Y}$. Therefore, a set of images with ground-truth labels is required to learn the mapping $\\psi$. We contribute such annotations in this work, and the detailed processes are described in \\S\\ref{3.2}.\n\n\n\\subsection{Benchmark}\n\\label{3.2}\nOne aim of our work is to contribute the first public benchmark, named BSDS-RIND, over BSDS images~\\cite{arbelaez2010bsds}. The original images contain various complex scenes, which makes it challenging to jointly detect all four types of edges. 
Fig.~\\ref{fig:fig1} (Right) shows some examples of our annotations.\n\n\n\\vspace{-3mm}\n\\paragraph{Edge Definitions.}\nIt is critical to define the four types of edges for the annotation task. Below, we give the definition of each type and illustrate it with examples.\n\\begin{itemize}\n \\vspace{-1.6mm}\\item \\textbf{\\textit{Reflectance Edges} (REs)} are usually caused by changes in material appearance (\\emph{e.g.}, texture and color) of smooth surfaces. Notably, although the edges within paintings in images (see Fig.~\\ref{fig:fig1} (c)) could be classified as DEs by the human visual system, these edges are assigned as REs since there is no geometric discontinuity. \n \\vspace{-1.6mm}\\item \\textbf{\\textit{Illumination Edges} (IEs)} are produced by shadows, light sources, highlights, \\emph{etc.}\\ (as shown in Fig.~\\ref{fig:fig1}).\n \\vspace{-1.6mm}\\item \\textbf{\\textit{Normal Edges} (NEs)} mark the locations of discontinuities in surface orientation and normally arise between object parts. As shown in Fig.~\\ref{fig:fig1} (b), taking the edges between the building and the ground as an example, the change of depth across these two surfaces is continuous but not smooth, which is caused by the surface-normal discontinuity between them.\n \\vspace{-1.6mm}\\item \\textbf{\\textit{Depth Edges} (DEs)} result from depth discontinuities, and often coincide with object silhouettes. It is difficult to measure the absolute depth difference (\\emph{e.g.}, Fig.~\\ref{fig:fig1} (a)), thus the relative depth difference is used to determine whether an edge belongs to DEs. 
Although there exist depth changes between the windows and walls of the building (Fig.~\\ref{fig:fig1} (b)), the relative depth difference is small because of their long distance from the camera, so such edges are classified as REs rather than DEs.\n\\end{itemize}\n\n\\vspace{-6.5mm}\n\\paragraph{Annotation Process.}\nThe greatest effort for constructing a high-quality edge dataset is devoted to, not surprisingly, manual labeling, checking, and revision. For this task, we manually construct the annotations using ByLabel~\\cite{qin2018bylabel}.\nTwo annotators collaborate to label each image. One annotator first manually labels the edges, and the other checks the result and may supplement missing edges. Edges whose labels both annotators agree on are added directly to the final dataset. Ambiguous edges are revised by both annotators together: consistent annotations are given after discussion. After iterating several times, we obtain the final annotations. Moreover, for some edges whose main formation factors are difficult to determine, multiple labels are assigned. Only about 53k pixels (2\\%) in BSDS-RIND have multiple labels. In addition, we use the average Intersection-over-Union (IoU) score to measure the agreement between the two annotators, and obtain $0.97$, $0.92$, $0.93$ and $0.95$ for REs, IEs, NEs and DEs, respectively. These statistics show good consistency. \n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.86\\linewidth, height=.28\\linewidth]{edgeNum.pdf}\n\\caption{The distribution of pixels for each type of edges on the BSDS-RIND training set and testing set.}\n\\label{fig:edge_num}\n\\vspace{-13pt}\n\\end{figure}\n\nWith all these efforts, finally, a total of $500$ images are carefully annotated, leading to a densely annotated dataset, named BSDS-RIND. It is then split into $300$ training images and $200$ testing images. 
The total number of pixels for each edge type on the BSDS-RIND training and testing sets is reported in Fig. \\ref{fig:edge_num}. Significantly, the number of edge pixels in BSDS-RIND is twice that in BSDS. Moreover, edge detection is a pixel-wise task, thus the number of samples provided by BSDS-RIND decently supports learning-based algorithms. More examples and details are given in the supplementary material.\n\n\n\n\\begin{figure*}[!t]\n\\centering\n\\includegraphics[width=0.95\\linewidth, height=.44\\linewidth]{figure2.pdf}\n\\caption{The three-stage architecture of RINDNet. \\textbf{Stage \\uppercase\\expandafter{\\romannumeral1}}: the input image is fed into a backbone to extract features shared by all edge types. \\textbf{Stage \\uppercase\\expandafter{\\romannumeral2}}: features across different levels are fused via Weight Layers (WLs), and are forwarded to four decoders in two clusters: RE-Decoder\/IE-Decoder and NE-Decoder\/DE-Decoder. \\textbf{Stage \\uppercase\\expandafter{\\romannumeral3}}: four decision heads predict the four types of initial results. In addition, the attention maps learned by the attention module are integrated into the final prediction ($A^b$ used only in training). (Best viewed in color)}\n\\label{fig:fig2}\n\\vspace{-13pt}\n\\end{figure*}\n\n\n\n\\section{RINDNet}\n\\label{4}\n\nIn this work, we design an end-to-end network (\\S\\ref{4.1}), named \\textit{RINDNet}, to learn distinctive features and jointly optimize the detection of the four edge types. Fig.~\\ref{fig:fig2} shows an overview of the proposed RINDNet, which includes three stages of initial result inference (\\emph{i.e.}, extracting common features, preparing distinctive features, and generating initial results) and final predictions integrated by an Attention Module. 
We also explain the loss functions and training details in \\S\\ref{4.2} and \\S\\ref{4.3}.\n\n\\subsection{Methodology}\n\\label{4.1}\n\n\\paragraph{Stage \\uppercase\\expandafter{\\romannumeral1}: Extracting Common Features for All Edges.}\nWe first use a backbone to extract common features for all edges because these edges share similar patterns of intensity variation in images.\nThe backbone follows the structure of ResNet-50 \\cite{he2016deepres}, which is composed of five repetitive building blocks. Specifically, the feature maps from the five blocks of ResNet-50 \\cite{he2016deepres} are denoted as $res_1$, $res_2$, $res_3$, $res_4$ and $res_5$, respectively.\n\nThen, we generate spatial cues from the above features. It is well known that different layers of CNN features encode different levels of appearance\/semantic information, and contribute differently to different edge types. Specifically, bottom-layer feature maps $res_{1-3}$ focus more on low-level cues (\\emph{e.g.}, color, texture and brightness), while top-layer maps $res_{4-5}$ are in favor of object-aware information. Thus it is beneficial to capture multi-level spatial responses from different layers of feature maps. Given multiple feature maps $res_{1-5}$, we obtain the spatial response maps:\n\\begin{equation}\n f_{sp}^k = \\psi^{k}_{\\rm sp}(res_k), \\quad k \\in \\{1,2,3,4,5\\}\n\\end{equation}\nwhere the spatial responses $f_{sp}^k \\in \\mathbb{R}^{2 \\times W \\times H}$ are learned by the Spatial Layer $\\psi^{k}_{\\rm sp}$, which is composed of one convolution layer and one deconvolution layer.\n\n\n\\vspace{-3mm}\n\\paragraph{Stage \\uppercase\\expandafter{\\romannumeral2}: Preparing Distinctive Features for REs\/IEs and NEs\/DEs.} Afterwards, RINDNet learns particular features for each edge type separately via the corresponding decoder in stage \\uppercase\\expandafter{\\romannumeral2}. 
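As a concrete illustration of the Stage I spatial layers $\psi^{k}_{\rm sp}$ described above, the following is a minimal, framework-agnostic NumPy sketch. The conv weights, channel counts and backbone strides are hypothetical; a real implementation would use learned convolution and deconvolution layers rather than a random channel mix and nearest-neighbour upsampling.

```python
import numpy as np

def spatial_layer(res_k, out_hw, rng):
    # Stand-in for psi_sp^k: a 1x1 convolution (channel mix) down to
    # 2 channels, then nearest-neighbour upsampling in place of the
    # learned deconvolution layer.
    c, h, w = res_k.shape
    weight = rng.standard_normal((2, c)) * 0.01            # hypothetical weights
    mixed = np.tensordot(weight, res_k, axes=([1], [0]))   # (2, h, w)
    H, W = out_hw
    return mixed.repeat(H // h, axis=1).repeat(W // w, axis=2)

rng = np.random.default_rng(0)
H = W = 64
# Hypothetical backbone maps res_1..res_5 (channels and strides are examples)
res = [rng.standard_normal((c, H // s, W // s))
       for c, s in [(64, 1), (256, 1), (512, 2), (1024, 4), (2048, 8)]]
f_sp = [spatial_layer(r, (H, W), rng) for r in res]  # each f_sp^k is (2, H, W)
print([f.shape for f in f_sp])
```

Each $f_{sp}^k$ ends up with the same $2 \times W \times H$ shape regardless of the resolution of $res_k$, which is what lets the decision heads later concatenate them freely.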
Inspired by \\cite{lu2019ofnet}, we design the Decoder with two streams to recover fine location information, as shown in Fig.~\\ref{fig:fig6} (b). The two streams work collaboratively and learn more powerful features from different views in the proposed architecture. Although the four decoders have the same structure, special designs are introduced for different types of edges, as detailed below. To distinguish each type of edges reasonably and better depict our work, we next cluster the four edge types into two groups, \\emph{i.e.}, REs\/IEs and NEs\/DEs, and prepare features for them respectively.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.9\\linewidth, height=.7\\linewidth]{components.pdf}\n\\caption{The architectures of (a) Weight Layer, (b) Decoder and (c) Attention Module. $\\odot$ is the element-wise multiplication.}\n\\label{fig:fig6}\n\\vspace{-13pt}\n\\end{figure}\n\n\n\\textbf{\\textit{REs and IEs.}} In practice, the low-level features (\\emph{e.g.}, $res_{1-3}$) capture detailed intensity changes that are often reflected in REs and IEs. Besides, REs and IEs are related to the global context and surrounding objects described by the high-level features (\\emph{e.g.}, $res_{5}$). Thus, it is desirable for semantic hints to guide the perception of intensity changes before the features are forwarded to the Decoder. Moreover, it is notable that simply concatenating the low-level and high-level features may be too computationally expensive due to the increased number of parameters. We therefore propose the Weight Layer (WL) to adaptively fuse the low-level features and high-level hints in a learnable manner, without increasing the dimension of the features. 
\n\nAs shown in Fig.~\\ref{fig:fig6} (a), the WL contains two paths: the first path receives the high-level feature $res_{5}$ and recovers high resolution through a deconvolution layer, and then two $3\\times3$ convolution layers with Batch Normalization (BN) and ReLU extract adaptive semantic hints; the other path is implemented as two convolution layers with BN and ReLU, which encode the low-level features $res_{1-3}$. Afterwards, the two paths are fused by element-wise multiplication. Formally, given the low-level features $res_{1-3}$ and high-level hints $res_{5}$, we generate the fusion features for REs and IEs individually,\n\\begin{equation}\n\\begin{array}{ll}\n g^{r}= \\psi^{r}_{\\rm wl} \\big(res_5, [res_1,res_2,\\rm up (res_3)]\\big), \\\\\n g^{i}= \\psi^{i}_{\\rm wl} \\big(res_5, [res_1,res_2,\\rm up (res_3)]\\big),\n\\end{array}\n\\end{equation}\nwhere the WLs of REs and IEs are indicated as $\\psi^{r}_{\\rm wl}$ and $\\psi^{i}_{\\rm wl}$ respectively, $g^{r}$\/$g^{i}$ are the fusion features for REs\/IEs, and $[\\cdot]$ is the concatenation. 
Note that the resolution of $res_{3}$ is smaller than that of $res_{1}$ and $res_{2}$, so one up-sampling operation $\\rm up(\\cdot)$ is used on $res_{3}$ to increase its resolution before feature concatenation.\nNext, the fusion features are fed into the corresponding Decoder to generate specific features with accurate location information separately for REs and IEs,\n\\begin{equation}\n\\begin{array}{ll}\n f^{r} = \\psi^{r}_{\\rm deco}(g^{r}) , &\n f^{i} = \\psi^{i}_{\\rm deco}(g^{i}) , \\\\\n\\end{array}\n\\end{equation}\nwhere $\\psi^{r}_{\\rm deco}$ and $\\psi^{i}_{\\rm deco}$ indicate the Decoders of REs and IEs respectively, and $f^{r}$\/$f^{i}$ are the decoded feature maps for REs\/IEs.\n\n\n\\textbf{\\textit{NEs and DEs.}} Since the high-level features (\\emph{e.g.}, $res_5$) express strong semantic responses that are usually epitomized in NEs and DEs, we utilize $res_5$ to obtain the particular features for NEs and DEs,\n\\begin{equation}\n\\begin{array}{ll}\n f^{n} = \\psi^{n}_{\\rm deco}(res_5) , &\n f^{d} = \\psi^{d}_{\\rm deco}(res_5) ,\\\\\n\\end{array}\n\\end{equation}\nwhere the NE-Decoder and DE-Decoder are denoted as $\\psi^{n}_{\\rm deco}$ and $\\psi^{d}_{\\rm deco}$ respectively, and $f^{n}$\/$f^{d}$ are the decoded features of NEs\/DEs. Since DEs and NEs commonly share some relevant geometry cues, we share the weights of the second stream of the NE-Decoder and DE-Decoder to learn collaborative geometry cues. At the same time, the first stream of the NE-Decoder and DE-Decoder is responsible for learning particular features for NEs and DEs, respectively.\n\n\\vspace{-3mm}\n\\paragraph{Stage \\uppercase\\expandafter{\\romannumeral3}: Generating Initial Results.}\nWe predict the initial results for each type of edges by the respective decision head in the final stage. The features from the previous stages, containing rich location information of edges, can be used to predict edges. 
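The WL fusion described above (Fig.~\ref{fig:fig6} (a)) can be sketched in a few lines of framework-agnostic NumPy. The conv+BN+ReLU stacks are reduced to hypothetical 1x1 channel mixes, and nearest-neighbour upsampling stands in for the deconvolution; channel counts and strides are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
H = W = 64

def conv_stack(x, c_out):
    # Stand-in for a small conv+BN+ReLU stack: a 1x1 channel mix
    # with hypothetical random weights, followed by ReLU.
    w = rng.standard_normal((c_out, x.shape[0])) * 0.01
    return np.maximum(np.tensordot(w, x, axes=([1], [0])), 0.0)

def upsample(x, factor):
    # Nearest-neighbour upsampling in place of up(.) / deconvolution
    return x.repeat(factor, axis=1).repeat(factor, axis=2)

# Hypothetical backbone features (channels and strides are examples)
res1 = rng.standard_normal((64, H, W))
res2 = rng.standard_normal((256, H, W))
res3 = rng.standard_normal((512, H // 2, W // 2))
res5 = rng.standard_normal((2048, H // 8, W // 8))

low = np.concatenate([res1, res2, upsample(res3, 2)], axis=0)  # [res1, res2, up(res3)]
hints = conv_stack(upsample(res5, 8), 64)  # high-level path: recover resolution
g_r = conv_stack(low, 64) * hints          # element-wise fusion, as in Fig. 6 (a)
print(g_r.shape)  # (64, 64, 64)
```

The element-wise product keeps the fused feature at the same dimension as either path, which is the point of the WL: guidance from $res_5$ without the parameter growth of naive concatenation.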
Specifically, we concatenate the decoded features $f^{r}$\/$f^{i}$ with the spatial cues $f_{sp}^{1-3}$ to predict REs\/IEs,\n\\begin{equation}\n\\begin{array}{ll}\n O^{r} = \\psi^{r}_{\\rm h}\\big([f^{r}, f_{sp}^{1-3}]\\big) , & \n O^{i} = \\psi^{i}_{\\rm h}\\big([f^{i}, f_{sp}^{1-3}]\\big) , \n\\end{array}\n\\end{equation}\nwhere $O^{r}$\/$O^{i}$ are the initial predictions of REs\/IEs. The decision heads of REs and IEs, denoted $\\psi^{r}_{\\rm h}$ and $\\psi^{i}_{\\rm h}$ respectively, are modeled as a $3\\times3$ convolution layer followed by a $1\\times1$ convolution layer. Note that REs and IEs do not directly rely on the location cues provided by the top layers, thus the spatial cues $f_{sp}^{4-5}$ are not used for them. By contrast, all spatial cues $f_{sp}^{1-5}$ are concatenated with the decoded features to generate the initial results for NEs and DEs, respectively,\n\\begin{equation}\n\\begin{array}{ll}\n O^{n} = \\psi^{n}_{\\rm h}\\big([f^{n}, f_{sp}^{1-5}]\\big) , &\n O^{d} = \\psi^{d}_{\\rm h}\\big([f^{d}, f_{sp}^{1-5}]\\big) , \n\\end{array}\n\\end{equation}\nwhere $\\psi^{n}_{\\rm h}$ and $\\psi^{d}_{\\rm h}$ respectively indicate the decision heads of NEs and DEs, which are composed of three $1 \\times 1$ convolutional layers to integrate hints at each position. 
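The channel bookkeeping of the decision heads above (decoded features plus two-channel spatial cues per level) can be sketched as follows; the channel counts and the 1x1-conv stand-in with random weights are hypothetical, not the trained heads.

```python
import numpy as np

rng = np.random.default_rng(0)
H = W = 64

def head(x, n_convs):
    # Stand-in for a decision head: n_convs 1x1 channel mixes ending
    # in a single-channel map (weights hypothetical, no nonlinearity).
    for _ in range(n_convs - 1):
        w = rng.standard_normal((x.shape[0], x.shape[0])) * 0.01
        x = np.tensordot(w, x, axes=([1], [0]))
    w = rng.standard_normal((1, x.shape[0])) * 0.01
    return np.tensordot(w, x, axes=([1], [0]))[0]  # (H, W) initial result

f_dec = rng.standard_normal((16, H, W))                    # a decoded feature map
f_sp = [rng.standard_normal((2, H, W)) for _ in range(5)]  # spatial cues f_sp^{1-5}

O_r = head(np.concatenate([f_dec] + f_sp[:3], axis=0), 2)  # REs/IEs use f_sp^{1-3}
O_n = head(np.concatenate([f_dec] + f_sp, axis=0), 3)      # NEs/DEs use f_sp^{1-5}
print(O_r.shape, O_n.shape)
```

Either way, each head collapses its concatenated input to one single-channel initial result per edge type.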
In summary, $\\mathcal{O}=\\{O^{r},O^{i},O^{n},O^{d}\\}$ denotes the initial result set.\n\n\n\\begin{table*}\n\\caption{Quantitative comparison for REs, IEs, NEs, DEs and Average (best viewed in color: ``\\textcolor{red}{\\bf{red}}'' for best, and ``\\textcolor{blue}{\\bf{blue}}'' for second best).}\n \\centering\n \\small\n \\renewcommand\\tabcolsep{3.5pt}\n \\renewcommand\\arraystretch{0.9}\n \\begin{tabular}{|l|ccc|ccc|ccc|ccc|ccc|}\n \\hline\n \\multirow{2}{*}{Method}\n &\\multicolumn{3}{c|}{Reflectance} & \\multicolumn{3}{c|}{Illumination} & \\multicolumn{3}{c|}{Normal} &\\multicolumn{3}{c|}{Depth} &\\multicolumn{3}{c|}{Average}\\\\ \n \\cline{2-16}\n & ODS & OIS & AP & ODS & OIS & AP & ODS & OIS & AP & ODS & OIS & AP & ODS & OIS & AP\\\\\n \\hline\n HED~\\cite{xie2015hed} & 0.412 & 0.466 & 0.343 & 0.256 & 0.290 & 0.167 & 0.457 & 0.505 & \\bf{\\textcolor{blue}{0.395}} & 0.644 & 0.679 & 0.667 & 0.442 & 0.485 & \\bf{\\textcolor{blue}{0.393}}\\\\\n CED~\\cite{wang2017ced} & 0.429 & 0.473 & 0.361\t & 0.228 & 0.286 & 0.118 & 0.463 & 0.501 & 0.372 & 0.626\t& 0.655\t& 0.620 & 0.437 & 0.479 & 0.368\\\\\n RCF~\\cite{liu2017rcf} & 0.429 & 0.448 & 0.351 & 0.257 & 0.283 & \\bf{\\textcolor{red}{0.173}} & 0.444 & 0.503 & 0.362 & 0.648 & 0.679 & 0.659 & 0.445 & 0.478 & 0.386\\\\\n BDCN~\\cite{he2019bdcn} & 0.358 & 0.458 & 0.252 & 0.151 & 0.219 & 0.078 & 0.427 & 0.484 & 0.334 & 0.628 & 0.661 & 0.581 & 0.391 & 0.456 & 0.311\\\\\n DexiNed~\\cite{poma2020dexined} & 0.402 & 0.454 & 0.315\t & 0.157 & 0.199 & 0.082 & 0.444 & 0.486 & 0.364 & 0.637 & 0.673 & 0.645 & 0.410 & 0.453 & 0.352\\\\\n CASENet~\\cite{yu2017casenet} & 0.384 & 0.439 & 0.275 & 0.230 & 0.273 & 0.119 & 0.434 & 0.477 & 0.327 & 0.621 & 0.651 & 0.574 & 0.417 & 0.460 & 0.324\\\\\n DFF~\\cite{dff19} & \\bf{\\textcolor{blue}{0.447}} & 0.495 & 0.324 & \\bf{\\textcolor{red}{0.290}} & \\bf{\\textcolor{red}{0.337}} & 0.151 & \\bf{\\textcolor{blue}{0.479}} & \\bf{\\textcolor{blue}{0.512}} & 0.352 & 
\\bf{\\textcolor{blue}{0.674}} & \\bf{\\textcolor{blue}{0.699}} & 0.626 & \\bf{\\textcolor{blue}{0.473}} & \\bf{\\textcolor{blue}{0.511}} & 0.363\\\\\n *DeepLabV3+~\\cite{chen2018deeplabv3} & 0.297 & 0.338 & 0.165 & 0.103 & 0.150 & 0.049 & 0.366 & 0.398 & 0.232 & 0.535 & 0.579 & 0.449 & 0.325 & 0.366 & 0.224\\\\\n *DOOBNet~\\cite{wang2018doobnet} & 0.431 & 0.489 & 0.370 & 0.143 & 0.210 & 0.069 & 0.442 & 0.490 & 0.339 & 0.658 & 0.689 & 0.662 & 0.419 & 0.470 & 0.360\\\\\n *OFNet~\\cite{lu2019ofnet} & 0.446 & 0.483 & \\bf{\\textcolor{blue}{0.375}} & 0.147 & 0.207 & 0.071 & 0.439 & 0.478 & 0.325 & 0.656 & 0.683 & \\bf{\\textcolor{blue}{0.668}} & 0.422 & 0.463 & 0.360\\\\\n \\hline\n DeepLabV3+~\\cite{chen2018deeplabv3} & 0.444 & 0.487 & 0.356 & 0.241 & \\bf{\\textcolor{blue}{0.291}} & 0.148 & 0.456 & 0.495 & 0.368 & 0.644 &0.671 & 0.617 & 0.446 & 0.486 & 0.372\\\\\n DOOBNet~\\cite{wang2018doobnet} & 0.446 & \\bf{\\textcolor{blue}{0.503}} & 0.355 & 0.228 & 0.272 & 0.132 & 0.465 & 0.499 & 0.373 & 0.661 & 0.691 & 0.643 & 0.450 & 0.491 & 0.376\\\\\n OFNet~\\cite{lu2019ofnet} & 0.437 & 0.483 & 0.351 & 0.247 & 0.277 & 0.150 & 0.468 & 0.498 & 0.382 & 0.661 & 0.687 & 0.637 & 0.453 & 0.486 & 0.380\\\\\n \n \\hline\n RINDNet (Ours) & \\bf{\\textcolor{red}{0.478}} & \\bf{\\textcolor{red}{0.521}} & \\bf{\\textcolor{red}{0.414}} &\\bf{\\textcolor{blue}{0.280}} &\\bf{\\textcolor{red}{0.337}} &\\bf{\\textcolor{blue}{0.168}} &\\bf{\\textcolor{red}{0.489}} &\\bf{\\textcolor{red}{0.522}} &\\bf{\\textcolor{red}{0.440}} &\\bf{\\textcolor{red}{0.697}} &\\bf{\\textcolor{red}{0.724}} &\\bf{\\textcolor{red}{0.705}} &\\bf{\\textcolor{red}{0.486}} &\\bf{\\textcolor{red}{0.526}} &\\bf{\\textcolor{red}{0.432}}\\\\\n \\hline\n \\end{tabular}\n \\label{tab:tab1}\n\\vspace{-8pt}\n\\end{table*}\n\n\\begin{figure*}[!t]\n\\centering\n\\includegraphics[width=0.99\\linewidth, height=.20\\linewidth\n]{eval.pdf}\n\\caption{Evaluation results on BSDS-RIND for (a) REs, (b) IEs, (c) NEs, (d) DEs and (e) 
generic edges.}\n\\label{fig:fig3}\n\\vspace{-8pt}\n\\end{figure*}\n\n\\vspace{-3mm}\n\\paragraph{Attention Module.} Finally, RINDNet integrates the initial results with attention maps obtained by the Attention Module (AM) to generate the final results. Since different types of edges appear at different locations, it is necessary to pay more attention to the locations related to each type when predicting it. Fortunately, the edge annotations provide the label of each location. Accordingly, the proposed AM can infer the spatial relationships between multiple labels with pixel-wise supervision via the attention mechanism. The attention maps can be used to activate the responses of related locations. Formally, given the input image $X$, the AM learns spatial attention maps,\n\\begin{equation} \\label{eq:eq4.4}\n \\mathcal{A} =\\{A^{b},A^{r},A^{i},A^{n},A^{d}\\} ={\\rm softmax} \\big(\\psi_{\\rm att}(X) \\big),\\\\\n\\end{equation}\nwhere $\\mathcal{A}$ is the set of attention maps normalized by a softmax function, and $A^{b},A^{r},A^{i},A^{n},A^{d} \\in [0, 1]^{W \\times H}$ are the attention maps corresponding to background, REs, IEs, NEs and DEs respectively. Obviously, if a pixel is tagged with a label, its location should be assigned higher attention values. The AM $\\psi_{\\rm att}$ is implemented as the first building block of ResNet, four $3 \\times 3$ convolution layers (each followed by ReLU and BN operations), and one $1 \\times 1$ convolution layer, as shown in Fig. \\ref{fig:fig6} (c). 
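The per-pixel softmax normalization performed by the AM can be illustrated directly; the logits below are random stand-ins for the output of $\psi_{\rm att}(X)$, with one channel per class (background, RE, IE, NE, DE).

```python
import numpy as np

rng = np.random.default_rng(0)
H = W = 8

# Hypothetical logits standing in for psi_att(X): 5 class channels
logits = rng.standard_normal((5, H, W))

# Per-pixel softmax over the 5 channels, as in the AM equation above
e = np.exp(logits - logits.max(axis=0, keepdims=True))  # for numerical stability
A = e / e.sum(axis=0, keepdims=True)
A_b, A_r, A_i, A_n, A_d = A

# Each pixel's attention values lie in [0, 1] and sum to 1 across classes
print(np.allclose(A.sum(axis=0), 1.0))  # True
```

The softmax couples the five maps: raising the attention of one edge type at a pixel necessarily lowers the others, which matches the mutually exclusive attention labels used to supervise the AM.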
\n\nFinally, the initial results are integrated with the attention maps to generate the final results $\\mathcal{Y}$,\n\\begin{equation} \n \\mathcal{Y} = {\\rm sigmoid}\\big(\\mathcal{O} \\odot (1 + A^{\\{r,i,n,d\\}})\\big),\n\\end{equation}\nwhere $\\odot$ is the element-wise multiplication.\n\n\n\n\\subsection{Loss Function}\n\\label{4.2}\n\\paragraph{Edge Loss.}\nWe use the loss function presented in \\cite{wang2018doobnet} to supervise the training of our edge predictions:\n\\begin{equation}\n\\mathcal{L}_{\\rm e}(\\mathcal{Y},\\mathcal{E})= \\sum_{k \\in {\\left\\{r,i,n,d\\right\\}}} \\ell_{\\rm e} \\left (Y^{k},E^{k} \\right),\n\\end{equation}\n\\begin{equation}\\label{eq10}\n\\begin{split}\\ell_{\\rm e} \\left (Y,E \\right) & = - \\sum_{i,j} \\Big( E_{i,j}\\alpha_{1}\\beta^{(1-Y_{i,j})^{\\gamma_{1}}}{\\rm log}(Y_{i,j}) \\\\\n & + (1-E_{i,j})(1-\\alpha_{1})\\beta^{Y_{i,j}^{\\gamma_{1}}}{\\rm log}(1-Y_{i,j}) \\Big), \n\\end{split}\n\\end{equation}\nwhere $\\mathcal{Y}=\\{Y^{r}, Y^{i}, Y^{n}, Y^{d}\\}$ is the final prediction, $\\mathcal{E} = \\{E^{r},E^{i},E^{n},E^{d}\\}$ is the corresponding ground-truth label, and $E_{i,j}$\/$Y_{i,j}$ are the $(i,j)^{th}$ elements of the matrices $E$\/$Y$ respectively.\nMoreover, $\\alpha_{1}=|E_{-}|\/|E|$ and $1-\\alpha_{1}=|E_{+}|\/|E|$, where $E_{-}$ and $E_{+}$ denote the non-edge and edge ground-truth label sets, respectively. In addition, $\\gamma_{1}$ and $\\beta$ are hyperparameters. We drop the superscript $k$ in Eq. \\ref{eq10} and Eq. \\ref{eq13} for simplicity.\n\n\n\\vspace{-5mm}\n\\paragraph{Attention Module Loss.}\nSince the pixel-wise edge annotations provide spatial labels, it is easy to obtain the ground truth of attention.\nLet $\\mathcal{T} = \\{T^{b},T^{r},T^{i},T^{n},T^{d}\\}$ be the ground-truth label of attention, where $T^{b}$ specifies the non-edge pixels. 
$T^{b}_{i,j} = 1$ if the $(i,j)^{th}$ pixel is located on the non-edge\/background, otherwise $T^{b}_{i,j} = 0$.\n$T^{r},T^{i},T^{n},T^{d}$ indicate the attention labels of REs, IEs, NEs and DEs respectively, which are obtained from $\\mathcal{E} = \\{E^{r},E^{i},E^{n},E^{d}\\}$,\n\\begin{equation}\n\\label{eq:att_gt}\nT_{i,j}^k = \\left\\{\\begin{array}{ll}\n E_{i,j}^k, & {\\rm if} \\sum_{k} E_{i,j}^k =1, k\\in\\{r,i,n,d\\} \\\\\n 255, & {\\rm if} \\sum_{k} E_{i,j}^k > 1, k\\in\\{r,i,n,d\\} \\\\\n \\end{array} \\right.,\n\\end{equation}\nwhere $k$ denotes the type of edges, and $T_{i,j}^k$ and $E_{i,j}^k$ indicate the attention label and edge label of the $(i,j)^{th}$ pixel, respectively. The attention label equals the edge label if a pixel is assigned only one type of edge label; otherwise, a pixel with multiple types is tagged 255 and ignored during training. It should be noted that multi-labeled edges are used when training the four decision heads, and are only excluded when training the AM. The loss function $\\mathcal{L}_{\\rm att}$ of the AM is formulated as:\n\\begin{equation}\n\\mathcal{L}_{\\rm att}(\\mathcal{A},\\mathcal{T}) = \n\\sum_{k \\in {\\left\\{b,r,i,n,d\\right\\}}} \\ell_{\\rm foc} \\left (A^{k},T^{k} \\right),\n\\end{equation}\n\\begin{equation}\\label{eq13}\n\\begin{split}\n\\ell_{\\rm foc} &\\left (A,T \\right) = - \\sum_{i,j} \\big( T_{i,j}\\alpha_{2}(1-A_{i,j})^{\\gamma_{2}} {\\rm log}(A_{i,j}) \\\\\n& + (1-T_{i,j})(1-\\alpha_{2})A_{i,j}^{\\gamma_{2}} {\\rm log}(1-A_{i,j})\\big),\n\\end{split}\n\\end{equation}\nwhere $\\ell_{\\rm foc}$ indicates the Focal Loss \\cite{lin2017focal} and $\\mathcal{A}$ is the output of the Attention Module. 
Note that $\\alpha_{2}$ and $\\gamma_{2}$ are a balancing weight and a focusing parameter, respectively.\n\n\\textbf{Total Loss.} Finally, we optimize RINDNet by minimizing the total loss defined as:\n\\begin{equation}\n\\label{eq:eq4}\n\\mathcal{L} = \\lambda \\mathcal{L}_{\\rm e} + (1-\\lambda) \\mathcal{L}_{\\rm att} ,\n\\end{equation}\nwhere $\\lambda$ is the weight for balancing the two losses.\n\n\n\n\\begin{table*}\n \\caption{Ablation study to verify the effectiveness of each component in our proposed RINDNet.}\n \\label{tab:wlam}\n \\centering\n \\small\n \\renewcommand\\tabcolsep{3.7pt}\n \\renewcommand\\arraystretch{0.9}\n \\begin{tabular}{|l|ccc|ccc|ccc|ccc|ccc|}\n \\hline\n \\multirow{2}{*}{Method}\n &\\multicolumn{3}{c|}{Reflectance} & \\multicolumn{3}{c|}{Illumination} & \\multicolumn{3}{c|}{Normal} &\\multicolumn{3}{c|}{Depth} &\\multicolumn{3}{c|}{Average}\\\\ \n \\cline{2-16}\n & ODS & OIS & AP & ODS & OIS & AP & ODS & OIS & AP & ODS & OIS & AP & ODS & OIS & AP\\\\\n \\hline\n Ours & 0.478 & 0.521 & 0.414 & 0.280 & 0.337 & 0.168 & 0.489 & 0.522 & 0.440 & 0.697 & 0.724 & 0.705 & 0.486 & 0.526 & 0.432\\\\\n Ours w\/o WL & 0.422 & 0.468 & 0.357 & 0.280 & 0.321 & 0.180 & 0.476 & 0.515 & 0.425 & 0.693 & 0.713 & 0.700 & 0.468 & 0.504 & 0.416\\\\\n Ours w\/o AM & 0.443 & 0.494 & 0.338 & 0.268 & 0.327 & 0.139 & 0.473 & 0.506 & 0.378 & 0.670 & 0.699 & 0.649 & 0.464 & 0.507 & 0.376\\\\ \n Ours w\/o AM\\&WL & 0.409 & 0.460 & 0.316 & 0.277 & 0.331 & 0.178 & 0.471 & 0.507 & 0.389 & 0.677 & 0.707 & 0.662 & 0.459 & 0.501 & 0.386\\\\\n \\hline\n \\end{tabular}\n \\vspace{-13pt}\n\\end{table*}\n\n\n\\subsection{Training Details}\n\\label{4.3}\nOur network is implemented using PyTorch \\cite{paszke2017automatic} and finetuned from a ResNet-50 model pre-trained on ImageNet \\cite{DengDSLL009}. Specifically, we adopt the Stochastic Gradient Descent optimizer with momentum=$0.9$, initial learning rate=$10^{-5}$, and we decay it by the ``poly'' policy on every epoch. 
We train the model for $70$ epochs on one GPU with a batch size of $4$. Moreover, we set $\\beta=4$ and $\\gamma_{1}=0.5$ for $\\mathcal{L}_{\\rm e}$; $\\alpha_{2}=0.5$ and $\\gamma_{2}=2$ for $\\mathcal{L}_{\\rm att}$; and $\\lambda=0.1$ for total loss. In addition, following \\cite{wang2018doobnet}, we augment our dataset by rotating each image by four different angles of $\\{0,90,180,270\\}$ degrees. Each image is randomly cropped to $320\\times320$ during training while retaining the original size during testing.\n\n\n\\section{Experiment Evaluation}\n\\label{5}\nWe compare our model with $10$ state-of-the-art edge detectors. HED \\cite{xie2015hed}, RCF \\cite{liu2017rcf}, CED \\cite{wang2017ced}, DexiNed \\cite{poma2020dexined}, and BDCN \\cite{he2019bdcn} exhibit excellent performance in general edge detection; DeepLabV3+ \\cite{chen2018deeplabv3}, CASENet \\cite{yu2017casenet} and DFF \\cite{dff19} show outstanding accuracy on semantic edge detection; DOOBNet \\cite{wang2018doobnet} and OFNet \\cite{lu2019ofnet} yield competitive results for occlusion edge detection. \nAll models are trained on $300$ training images and evaluated on $200$ test images. In addition, more qualitative results are provided in the supplementary material. \nWe evaluate these models with three metrics introduced by \\cite{arbelaez2010bsds}: fixed contour threshold (ODS), per-image best threshold (OIS), and average precision (AP). Moreover, a non-maximum suppression \\cite{canny1986computational} is performed on the predicted edge maps before evaluation.\n\n\n\n\\subsection{Experiments on Four Types of Edges}\n\\vspace{-2mm}\n\n\\paragraph{Comparison with State of the Arts.} To adapt existing detectors for four edge types simultaneously, they are modified in two ways: (1) The output $Y \\!\\in\\! \\{0,1\\}^{W \\times H}$ is changed to $\\mathcal{Y} \\!\\in\\! \\{0,1\\}^{4 \\times W \\times H}$. 
In particular, since DeepLabV3+ focuses on segmentation, its output layer is replaced by an edge path (same as DOOBNet~\cite{wang2018doobnet} and OFNet~\cite{lu2019ofnet}, containing a sequence of four $3\!\times\!3$ convolution blocks and one $1\!\times\!1$ convolution layer) to predict edge maps. As shown in Table \ref{tab:tab1}, the ten compared models are denoted as HED, RCF, CED, DexiNed, BDCN, CASENet, DFF, *DeepLabV3+, *DOOBNet and *OFNet, respectively. (2) DeepLabV3+, DOOBNet and OFNet only provide one edge prediction branch without a structure suitable for multi-class prediction; thus we apply a second modification: the last edge prediction branch is expanded to four, and each branch predicts one type of edge. This modification is similar to the prediction approach of our model and aims to explore the capabilities of these models. They are denoted as DeepLabV3+, DOOBNet, and OFNet, respectively.\n\n\nTable~\ref{tab:tab1} and Fig.~\ref{fig:fig3} present the F-measure of the four types of edges and their averages. We observe that the proposed RINDNet outperforms the other detectors over most metrics across the dataset. Essentially, \cite{xie2015hed,liu2017rcf,wang2017ced,he2019bdcn,poma2020dexined,wang2018doobnet,lu2019ofnet} are designed for generic edge detection, so the specific features of different edges are not fully explored. Even when we expand the prediction branches of OFNet~\cite{lu2019ofnet}, DOOBNet~\cite{wang2018doobnet} and DeepLabV3+~\cite{chen2018deeplabv3} to four to learn specific features, the results are still unsatisfactory. DFF~\cite{dff19} can learn specific cues to a certain extent by introducing a dynamic feature fusion strategy, but its performance is still limited. 
On the contrary, our proposed RINDNet achieves promising results by extracting the corresponding distinctive features based on different edge attributes.\n\n\n\\begin{table}\n\\caption{Ablation study on the choices of features or spatial cues from different layers for the proposed RINDNet.}\n\\centering\n\\small\n\\renewcommand\\tabcolsep{4.9pt}\n\\renewcommand\\arraystretch{1.0}\n\\begin{tabular}{|c|c|c|ccc|}\n \\hline\n \\multirow{2}{*}{Reference} & \\multirow{2}{*}{RE\\&IE} & \\multirow{2}{*}{NE\\&DE} &\\multicolumn{3}{c|}{Average} \\\\\n \\cline{4-6}\n & & & ODS & OIS & AP\\\\\n \\hline\n \\hline\n \\multirow{4}{*}{\\makecell[c]{ Different-Layer \\\\ Features \\\\}} \n & $res_{1-3}$ & $res_{5}$ & 0.486 & 0.526 & 0.432\\\\\n & $res_{1-3}$ & $res_{1-3}$ & 0.467 & 0.499 & 0.422\\\\\n & $res_{5}$ & $res_{1-3}$ & 0.452 & 0.482 & 0.381\\\\ \n & $res_{5}$ & $res_{5}$ & 0.464 & 0.489 & 0.396\\\\\n \\hline\n \\hline\n \\multirow{4}{*}{Spatial Cues} \n & $f_{sp}^{1-3}$ & $f_{sp}^{1-5}$ & 0.486 & 0.526 & 0.432\\\\\n & $f_{sp}^{1-3}$ & $f_{sp}^{1-3}$ & 0.472 & 0.504 & 0.416\\\\\n & $f_{sp}^{1-5}$ & $f_{sp}^{1-5}$ & 0.478 & 0.512 & 0.418\\\\\n & w\/o $f_{sp}$ & w\/o $f_{sp}$ & 0.478 & 0.516 & 0.420\\\\\n \\hline\n\\end{tabular}\n\\label{tab:sp}\n\\vspace{-8pt}\n\\end{table}\n\n\\begin{table}\n\\caption{Ablation study to verify the effectiveness of Decoder for the proposed RINDNet. 
SW refers to ``Share Weight'', $1^{st}$ and $2^{nd}$ refer to the first stream and the second stream in Decoder.}\n\\centering\n\\small\n\\renewcommand\\tabcolsep{5.0pt}\n\\renewcommand\\arraystretch{1.1}\n\\begin{tabular}{|cc|cc|ccc|}\n \\hline\n \\multicolumn{2}{|c|}{RE\\&IE-Decoder} &\\multicolumn{2}{c|}{NE\\&DE-Decoder} &\\multicolumn{3}{c|}{Average}\\\\\n \\cline{1-7}\n $1^{st}$ & $2^{nd}$ & $1^{st}$ & $2^{nd}$ & ODS & OIS & AP\\\\\n \\hline\n $\\surd$ & $\\surd$ & $\\surd$ & w SW & \\bf{0.486} & \\bf{0.526} & \\bf{0.432}\\\\\n $\\surd$ & $\\times$ & $\\surd$ & $\\times$ & 0.457 & 0.492 & 0.398\\\\\n $\\surd$ & $\\times$ & $\\surd$ & w SW & 0.476 & 0.514 & 0.415\\\\\n $\\surd$ & $\\surd$ & $\\surd$ & w\/o SW & 0.474 & 0.517 & 0.408\\\\\n \\hline\n\\end{tabular}\n\\label{tab:decoder}\n\\vspace{-13pt}\n\\end{table}\n\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=0.975\\linewidth]{test.pdf}\n\\caption{Qualitative comparison for (a) Reflectance Edges, (b) Illumination Edges, (c) Normal Edges, (d) Depth Edges and (e) Generic Edges (best viewed in color: ``\\textcolor{green2}{\\textbf{green}}'' for true positive, ``\\textcolor{red}{\\textbf{red}}'' for false negative (missing edges), and ``\\textcolor{yellow}{\\textbf{yellow}}'' for false positive). We provide visualization results of the top 6 scores in this figure, more results can be found in the \\textbf{supplementary material}.}\n\\label{fig:fig4}\n\\vspace{-13pt}\n\\end{figure*}\n\n\\vspace{-3mm}\n\\paragraph{Ablation Study.}\nWe first conduct the ablation study to verify the role of Weight Layer (WL) and Attention Module (AM) in RINDNet. In the experiments, each module is removed separately or together to construct multiple variants for evaluation, as shown in Table~\\ref{tab:wlam} (rows 2 -- 4). Intuitively, WL plays a significant role for REs and IEs. 
Especially for REs, in terms of ODS, OIS and AP, the WL improves the performance significantly from $42.2\%$, $46.8\%$ and $35.7\%$ to $47.8\%$, $52.1\%$ and $41.4\%$, respectively. Besides, with AM, RINDNet achieves noticeable improvements for all types of edges, as shown in row 1 and row 3 of Table~\ref{tab:wlam}. This illustrates the effectiveness of the proposed AM for capturing the distinctions between different edges. Overall, the cooperation of WL and AM allows RINDNet to successfully capture the specific features of each type of edge and thus delivers a remarkable performance gain.\n\nNext, we perform an experiment to verify the effectiveness of different-layer features for detecting REs\/IEs and NEs\/DEs. As shown in Table~\ref{tab:sp} (rows 1 -- 4), the combination based on edge attributes (row 1) performs better than the other choices (rows 3 -- 4). Besides, we also explore the impact of choosing different spatial cues for REs\/IEs and NEs\/DEs, and report the quantitative results in Table~\ref{tab:sp} (rows 5 -- 7). Similarly, other combinations of spatial cues lead to drops in average performance (rows 6 -- 7). Ours (row 5) is the combination that achieves the best performance.\n\nWe also design careful ablation experiments (Table \ref{tab:decoder}) to study the effectiveness of each stream in the Decoder. The two-stream design combined with share weight (referred to as SW) for NEs\/DEs (row 1) performs better than using either stream separately (rows 3 -- 4). More detailed results are provided in the supplementary material.\n\n\n\subsection{Experiments on Generic Edges}\n\nTo fully examine the performance of RINDNet, we modify and test it in the generic-edge setting (the ground truths of the four types of edges are merged into one ground-truth edge map). 
The outputs of four decision heads are combined and fed into a final decision head $\\psi_{\\rm e}$ (one $1\\times1$ convolution layer) to predict generic edges: $P=\\psi_{\\rm e}\\big([Y^{r}, Y^{i}, Y^{n}, Y^{d}]\\big)$. Moreover, the original ground-truth labels of Attention Module (AM) are unavailable in this setting. Thus we take ground-truth labels of generic edges as the supervisions of AM, so that AM could capture the location of generic edges.\n\n\\begin{table}\n\\caption{Comparison of generic edge detection on \\textbf{BSDS-RIND}.\n}\n\\centering\n\\small\n\\renewcommand\\tabcolsep{14pt}\n\\renewcommand\\arraystretch{0.9}\n\\begin{tabular}{|l|ccc|}\n \\hline\n Method & ODS & OIS & AP \\\\\n \\hline\n HED~\\cite{xie2015hed} & 0.786 & 0.805 &\\bf{\\textcolor{red}{0.834}} \\\\\n CED~\\cite{wang2017ced} & \\bf{\\textcolor{red}{0.801}} & \\bf{\\textcolor{red}{0.814}} & \\bf{\\textcolor{blue}{0.824}} \\\\\n RCF~\\cite{liu2017rcf} & 0.771 & 0.791 & 0.800 \\\\\n BDCN~\\cite{he2019bdcn} & 0.789 & 0.803 & 0.757 \\\\\n DexiNed~\\cite{poma2020dexined} & 0.789\t& 0.805\t& 0.816 \\\\\n CASENet~\\cite{yu2017casenet} & 0.792 & 0.806 & 0.786 \\\\\n DFF~\\cite{dff19} & 0.794 & 0.806 & 0.767 \\\\\n DeepLabV3+~\\cite{chen2018deeplabv3} & 0.780 & 0.792 & 0.776 \\\\\n DOOBNet~\\cite{wang2018doobnet} & 0.790 & 0.805 & 0.809 \\\\\n OFNet~\\cite{lu2019ofnet} & 0.794 & 0.807 & 0.800 \\\\\n \\hline\n RINDNet (Ours) &\\bf{\\textcolor{blue}{0.800}} &\\bf{\\textcolor{blue}{0.811}} & 0.815 \\\\\n \\hline\n\\end{tabular}\n\\label{tab:generic}\n\\vspace{-12pt}\n\\end{table}\n\nWe report the quantitative results over generic edges in Table~\\ref{tab:generic} and Fig.~\\ref{fig:fig3} (e). Note that CED is pre-trained on HED-BSDS. In contrast, our model is trained from scratch and still achieves competitive results. This confirms the integration capability of our model, especially considering that RINDNet is not specially designed for generic edges. 
Some qualitative results on BSDS-RIND are shown in the last row in Fig. \\ref{fig:fig4}.\n\n\\section{Conclusions}\n\\label{6}\n\nIn this work, we study edge detection on four types of edges including \\textit{Reflectance Edges}, \\textit{Illumination Edges}, \\textit{Normal Edges} and \\textit{Depth Edges}. We propose a novel edge detector RINDNet that, for the first time, simultaneously detects all four types of edges. In addition, we contribute the first public benchmark with four types of edges carefully annotated. Experimental results illustrate that RINDNet yields promising results in comparison with state-of-the-art edge detection algorithms.\n\n\\vspace{-4mm}\n\\paragraph{Acknowledgements.}\nThis work is supported by Fundamental Research Funds for the Central Universities (2019JBZ104) and National Natural Science Foundation of China (61906013, 51827813). The work is partially done while Mengyang Pu was with Stony Brook University.\n\n\n{\\small\n\\bibliographystyle{ieee_fullname}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\n\nOne of the important observations in the LHC era is that the self-normalised production of hadrons increases faster than linearly as a function of the self-normalised charged-particle multiplicity \\cite{aliceHFhadrons}. This enhancement in high-multiplicity events is not fully understood, but calculations including multiple parton interactions (MPI) and colour reconnection (CR) are able to reproduce the trend. The measurement of the multiplicity-dependent production of the \\W boson, which is colourless, with associated hadron in proton--proton (pp) collisions can then provide further understanding of the MPI and CR mechanisms.\n\nIn heavy-ion collisions (HIC), electroweak bosons are valuable probes of the initial phase of the collision, on which a precise knowledge is required to disentangle QGP-induced phenomena from other nuclear effects. 
Being hard processes, the production of the \W and \Z bosons is highly sensitive to the initial state, especially the modifications of the Parton Distribution Functions (PDF) in the nucleus with respect to those in the proton. The bosons decay before the typical time of creation of the QGP and can be measured via their leptonic decays, providing a final state insensitive to the strong force. The whole process is then medium-blind, carrying the information from the initial state to the detector where it can be recorded. \\\n\nThe measurements presented here were performed using data collected with the ALICE detector~\cite{alice}. In pp collisions at \thirteen, electrons from \W-boson decays were measured at midrapidity, in the interval $|y| < 0.6$. Electrons coming from \W-boson decays typically have a large transverse momentum (\pt) and are well isolated from other particles. They were thus identified by considering electrons in the range $30 < \pt^{\rm e} < 60$~\GeVc, then defining an isolation cone surrounding the electron in which the total energy is required to be below a certain threshold. The associated hadrons, produced together with \W bosons, were detected away-side in azimuth with respect to the electron coming from the decay of the boson, applying a minimum \pt selection of 5~\GeVc.\n\nMeasurements in HIC were performed in proton--lead (p--Pb) collisions at \eightnn and lead--lead (Pb--Pb) collisions at \fivenn. The data were collected from muon-triggered events in the muon forward spectrometer, covering the rapidity interval $2.5 < y < 4$. The \Z-boson candidates were reconstructed by combining high-\pt muons ($\pt^\mu > 20$ \GeVc) in pairs of opposite charge, considering the pairs with an invariant mass in the range $60 < m_{\mu^+\mu^-} < 120$ \GeVmass. 
The number of \Wminus and \Wplus bosons was extracted via a template fit to the single-muon \pt distribution, accounting for the various contributions to the inclusive spectrum. As the low-\pt region features a very low signal-to-background ratio, the measurements were performed on muons with \pt > 10 \GeVc.\n\nThe available measurements of electroweak bosons performed by the ALICE Collaboration at forward rapidity can be found in Refs.~\cite{aliceWZ,aliceZ,aliceW}.\n\n\section{Results}\n\label{sec:results}\n\n\subsection{Measurements in pp collisions}\n\label{sec:pp}\n\nFigure~\ref{fig:pp} shows the self-normalised multiplicity-dependent yield of electrons from \W boson decays (combining \Wminus and \Wplus for increased precision), and that of hadrons associated with a \W boson, measured in pp collisions at \thirteen. The yield of electrons from \W-boson decays is consistent with a linear increase as a function of the charged-particle multiplicity, while the yield of associated hadrons shows a faster-than-linear increase. This measurement thus suggests a significant correlation of the associated hadron production with the event multiplicity, and the trend is well reproduced by PYTHIA~8~\cite{pythia8} calculations including MPI and CR mechanisms. This analysis also constitutes the first measurement of electroweak bosons at midrapidity with ALICE, and will serve as a reference for similar measurements in HIC.\n\n\begin{figure}[h]\n \centering\n \sidecaption\n \includegraphics[width=0.5\linewidth,clip]{figures\/pp13tev_multiplicity.pdf}\n \caption{Self-normalised yield of electrons from \W decays (red), and of hadrons associated with \W bosons (blue), as a function of the charged-particle multiplicity. 
The coloured lines correspond to calculations with PYTHIA~8~\cite{pythia8} including MPI and CR.}\n \label{fig:pp}\n\end{figure}\n\n\subsection{Measurements in p--Pb collisions}\n\label{sec:ppb}\n\nThe measurements of the \Z and \Wplus bosons as a function of rapidity in p--Pb collisions at \eightnn are shown in the left and right panels of Fig.~\ref{fig:ppb}, respectively. The larger number of events available for the \Wplus boson allowed splitting the acceptance into several rapidity intervals. The \Z-boson measurement is compared with predictions from the EPPS16~\cite{epps16} and nCTEQ15~\cite{ncteq15} nuclear PDFs (nPDFs), as well as predictions from the CT14~\cite{ct14} free PDF accounting for the isospin effect but without nuclear modifications. The models are in good agreement with the measurement, although no strong conclusion on the nuclear modifications can be drawn due to statistical limitations.\n\n\begin{figure}[h]\n \centering\n \includegraphics[width=0.43\linewidth,clip]{figures\/pPb8tev_Z.pdf}\n \includegraphics[width=0.55\linewidth,clip]{figures\/pPb8tev_W_xSecWplusDiff.pdf}\n \caption{\textbf{Left}: cross section of dimuons from \Z decays measured in p--Pb collisions at \eightnn, compared with theoretical calculations with and without nuclear modifications of the PDF. \textbf{Right}: cross section of muons from \Wplus decays in the same collision system, compared with model predictions from various PDF and nPDF sets.}\n \label{fig:ppb}\n\end{figure}\n\nThe \Wplus cross section shown in the right panel of Fig.~\ref{fig:ppb} is compared with the same models, as well as two recent nPDF sets: nCTEQ15WZ~\cite{ncteq15wz}, an improvement of nCTEQ15 with the addition of electroweak-boson measurements from the LHC for the nPDF determination; and nNNPDF2.0~\cite{nnnpdf2}, a new model obtained following a methodology based on machine learning. 
The models including nuclear modifications are in agreement with one another, and provide a good description of the data. Calculations without nuclear modifications, on the contrary, overestimate the production at large positive rapidity. The resulting 3.5$\sigma$ deviation is the strongest evidence of nuclear modification of the PDF obtained by the ALICE Collaboration from electroweak-boson measurements. This measurement helps constrain the nPDF models in the very low Bjorken-$x$ region (down to about 10$^{-4}$), where the available constraints are scarce. \\\n\n\begin{figure}[h]\n \centering\n \includegraphics[width=0.46\linewidth,clip]{figures\/ZPbPb5tev_IntegratedYield.pdf}\n \includegraphics[width=0.52\linewidth,clip]{figures\/PbPb5tev_W_yieldW_HGpythia.pdf}\n \caption{\textbf{Left}: \avTaa-scaled yield of dimuons from \Z-boson decays in Pb--Pb collisions at \fivenn, compared with theoretical calculations with and without nuclear modifications of the PDF. \textbf{Right}: \avTaa-scaled yield of muons from \W-boson decays as a function of centrality, in Pb--Pb at \fivenn. The measured distribution is compared with HG-PYTHIA~\cite{hg-pythia} predictions of the nuclear modification factor of hard scatterings, scaled to the measured value in 0--90\% centrality.}\n \label{fig:pbpb}\n\end{figure}\n\n\subsection{Measurements in Pb--Pb collisions}\n\label{sec:pbpb}\n\nThe results obtained in Pb--Pb collisions at \fivenn are illustrated in Fig.~\ref{fig:pbpb}. In the left panel, the yield of dimuons from \Z decays, scaled with the average nuclear overlap function \avTaa, is compared with a previously published result obtained from the 2015 data only, as well as predictions with and without nuclear modifications. The combination of the 2015 and 2018 data periods for this new measurement improved the available integrated luminosity by a factor of three, allowing for a significant reduction of the uncertainty. 
The various nPDFs provide a good description of the measured yields, while a deviation of 3.4$\sigma$ from the CT14-only calculation is observed, constituting another strong sign of nuclear modifications.\n\nIn the right panel of Fig.~\ref{fig:pbpb}, the \avTaa-scaled yield of muons from \W decays is shown as a function of the collision centrality. It is compared with HG-PYTHIA~\cite{hg-pythia} calculations of the nuclear modification factor of hard scatterings, scaled with the value measured in 0--90\% centrality. The scaled calculations are in good agreement with the data. They predict a sizeable drop for the most peripheral collisions, but statistical limitations prevent sufficient granularity in this region of the measurement, such that no experimental conclusion can be drawn. Nonetheless, this constitutes the first measurement of \W-boson production in Pb--Pb collisions at forward rapidity, where low values of Bjorken-$x$ are attained.\n\n\section{Conclusion}\n\nThe recent measurements of electroweak-boson production show their usefulness in the characterization of key features of QCD. The measurement at midrapidity in pp collisions is well described by calculations including MPI and CR. In heavy-ion collisions, the measurements are generally well described by nuclear models, while significant deviations from free-PDF calculations can be observed. These measurements can thus be used to help constrain the nPDF models.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\section{Introduction}\n\nLet $\frg$ be a finite-dimensional simple Lie algebra over $\bbC$. Fix a Cartan subalgebra $\frh$ of $\frg$.\nThe associated root system is $\Delta=\Delta(\frg, \frh)\subseteq\frh_{\bbR}^*$. 
Recall that a decomposition\n\\begin{equation}\\label{grading}\n\\frg=\\bigoplus_{i\\in \\bbZ}\\frg(i)\n\\end{equation}\nis a \\emph{$\\bbZ$-grading} of $\\frg$ if $[\\frg(i), \\frg(j)]\\subseteq \\frg(i+j)$ for any $i, j\\in\\bbZ$.\nIn particular, in such a case, $\\frg(0)$ is a Lie subalgebra of $\\frg$. Since each derivation of $\\frg$ is inner, there exists $h_0\\in\\frg(0)$ such that $\\frg(i)=\\{x\\in\\frg\\mid [h_0, x]=i x\\}$. The element $h_0$ is said to be \\emph{defining} for the grading \\eqref{grading}. Without loss of generality, one may assume that $h_0\\in\\frh$. Then $\\frh\\subseteq\\frg(0)$. Let $\\Delta(i)$ be the set of roots in $\\frg(i)$. Then we can\nchoose a set of positive roots $\\Delta(0)^+$ for $\\Delta(0)$ such that\n$$\n\\Delta^+ :=\\Delta(0)^+\\sqcup \\Delta(1)\\sqcup \\Delta(2)\\sqcup \\cdots\n$$\nis a set of positive roots of $\\Delta(\\frg, \\frh)$. Let $\\Pi$ be the\ncorresponding simple roots, and put $\\Pi(i)=\\Delta(i)\\cap \\Pi$. Note\nthat the grading \\eqref{grading} is fully determined by\n$\\Pi=\\bigsqcup_{i\\geq 0} \\Pi(i)$. We refer the reader to Ch.~3, \\S 3\nof \\cite{GOV} for generalities on gradings of Lie algebras. Each\n$\\Delta(i)$, $i\\geq 1$, inherits a poset structure from the usual\none of $\\Delta^+$. That is, let $\\alpha$ and $\\beta$ be two roots of\n$\\Delta(i)$, then $\\beta\\geq\\alpha$ if and only if $\\beta-\\alpha$ is\na nonnegative integer combination of simple roots.\n\n\n\n\n\n\nRecently, Panyushev initiated the study of the rich structure of\n$\\Delta(1)$ in \\cite{P}. In particular, he raised five\nconjectures concerning the $\\mathcal{M}$-polynomial,\n$\\mathcal{N}$-polynomial and the reverse operator of $\\Delta(1)$.\nNote that Conjectures 5.1, 5.2 and 5.12 there have been solved by\nWeng and the author \\cite{DW}. The current paper aims\nto handle conjecture 5.11 of \\cite{P}. 
Let us prepare more notation.\n\nRecall that a subset $I$ of a finite poset $(P, \\leq)$ is a\n\\emph{lower} (resp., \\emph{upper}) \\emph{ideal} if $x\\leq y$ in $P$\nand $y\\in I$ (resp. $x\\in I$) implies that $x\\in I$ (resp. $y\\in\nI$). We collect the lower ideals of $P$ as $J(P)$, which is\npartially ordered by inclusion. A subset $A$ of $(P, \\leq)$ is an\n\\emph{antichain} if any two elements in $A$ are non-comparable under\n$\\leq$. We collect the antichains of $P$ as $\\mathrm{An}(P)$. For\nany $x\\in P$, let $I_{\\leq x}=\\{y\\in P\\mid y\\leq x\\}$. Given an\nantichain $A$ of $P$, let $I(A)=\\bigcup_{a\\in A} I_{\\leq a}$. The\n\\emph{reverse operator} $\\mathfrak{X}$ is defined by\n$\\mathfrak{X}(A)=\\min (P\\setminus I(A))$. Since antichains of $P$\nare in bijection with lower (resp. upper) ideals of $P$, the reverse\noperator acts on lower (resp. upper) ideals of $P$ as well. Note\nthat the current $\\mathfrak{X}$ is inverse to the reverse operator\n$\\mathfrak{X}^{\\prime}$ in Definition 1 of \\cite{P}, see Lemma\n\\ref{lemma-inverse-reverse-operator}. Thus replacing $\\mathfrak{X}^{\\prime}$ by $\\mathfrak{X}$ does not affect our\nforthcoming discussion on orbits.\n\n\nWe say the $\\bbZ$-grading \\eqref{grading} is \\emph{extra-special} if\n\\begin{equation}\\label{extra-special}\n\\frg=\\frg(-2)\\oplus \\frg(-1) \\oplus \\frg(0) \\oplus \\frg(1)\n\\oplus \\frg(2) \\mbox{ and }\\dim\\frg(2)=1,\n\\end{equation}\nUp to conjugation, any simple Lie algebra $\\frg$ has a unique extra-special $\\bbZ$-grading. Without loss of generality, we assume that $\\Delta(2)=\\{\\theta\\}$ , where $\\theta$ is the highest root of $\\Delta^+$.\nNamely, we may assume that the grading \\eqref{extra-special} is defined by the element $\\theta^{\\vee}$, the dual root of $\\theta$. 
In such a case, we have\n\\begin{equation}\\label{Delta-one}\n\\Delta(1)=\\{\\alpha\\in\\Delta^+\\mid (\\alpha, \\theta^{\\vee})=1\\}.\n\\end{equation}\nLet $\\mathrm{ht}$ be the height function. Recall that $h:=\\mathrm{ht}(\\theta)+1$ is the \\emph{Coxeter number}\nof $\\Delta$. Let $h^*$ be the \\emph{dual Coxeter number }of\n$\\Delta$. That is, $h^*$ is the height of\n$\\theta^{\\vee}$ in $\\Delta^{\\vee}$. As noted on p.~1203 of \\cite{P},\nwe have $|\\Delta(1)|=2h^*-4$. We call a lower (resp. upper) ideal\n$I$ of $\\Delta(1)$ \\emph{Lagrangian} if $|I|=h^*-2$. Write\n$\\Delta_l$ (resp. $\\Pi_l$) for the set of \\emph{all} (resp.\n\\emph{simple}) \\emph{long} roots. In the simply-laced cases, all\nroots are assumed to be both long and short. Note that $\\theta$ is\nalways long, while $\\theta^{\\vee}$ is always short.\n\n\n\n\n\n\n\n\nNow Conjecture 5.11 of \\cite{P} is stated as follows.\n\n\n\\medskip\n\\noindent \\textbf{Panyushev conjecture.}\\quad In any extra-special\n$\\bbZ$-grading of $\\frg$, the number of\n$\\mathfrak{X}_{\\Delta(1)}$-orbits equals $|\\Pi_l|$, and each orbit\nis of size $h-1$. Furthermore, if $h$ is even (which only excludes the case $A_{2k}$ where $h=2k+1$), then each\n$\\mathfrak{X}_{\\Delta(1)}$-orbit contains a unique Lagrangian lower\nideal.\n\n\\medskip\n\nOriginally, the conjecture is stated in terms of upper ideals and\nthe reverse operator $\\mathfrak{X}^{\\prime}$. One agrees that we can\nequivalently phrase it using lower ideals and $\\mathfrak{X}$. The\nmain result of the current paper is the following.\n\n\\begin{thm}\\label{thm-main}\nPanyushev conjecture is true.\n\\end{thm}\n\nAfter collecting necessary preliminaries in Section 2, the above\ntheorem will be proven in Section 3. 
Moreover, we note that by our\ncalculations in Section 3, one checks easily that for any\nextra-special $1$-standard $\\bbZ$-grading of $\\frg$, all the\nstatements of Conjecture 5.3 in \\cite{P} hold.\n\n\n\n\n\n\n\n\\medskip\n\n\\noindent\\textbf{Notation.} Let $\\bbN =\\{0, 1, 2, \\dots\\}$, and let\n$\\mathbb{P}=\\{1, 2, \\dots\\}$. For each $n\\in\\mathbb{P}$, $[n]$\ndenotes the poset $(\\{1, 2, \\dots, n\\}, \\leq)$.\n\n\\section{Preliminaries}\n\nLet us collect some preliminary results in this section. Firstly,\nlet us compare the two reverse operators. Let $(P, \\leq)$ be any\nfinite poset. For any $x\\in P$, let $I_{\\geq x}=\\{y\\in P\\mid y\\geq\nx\\}$. For any antichain $A$ of $P$, put $I_{+}(A)=\\bigcup_{a\\in A}\nI_{\\geq a}$. Recall that in Definition 1 of \\cite{P}, the reverse\noperator $\\mathfrak{X}^{\\prime}$ is given by\n$\\mathfrak{X}^{\\prime}(A)=\\max (P\\setminus I_{+}(A))$.\n\n\\begin{lemma}\\label{lemma-inverse-reverse-operator}\nThe operators $\\mathfrak{X}$ and $\\mathfrak{X}^{\\prime}$ are\ninverse to each other.\n\\end{lemma}\n\\begin{proof}\nTake any antichain $A$ of $P$, note that\n$$I_{+}(\\min(P\\setminus\nI(A)))=P\\setminus I(A)\\mbox{ and } I(\\max(P\\setminus\nI_{+}(A)))=P\\setminus I_{+}(A).\n$$\nThen the lemma follows.\n\\end{proof}\n\n\nLet $(P_i,\\leq), i=1, 2$ be two finite posets. One can define a\nposet structure on $P_1\\times P_2$ by setting $(u_1, v_1)\\leq (u_2,\nv_2)$ if and only if $u_1\\leq u_2$ in $P_1$ and $v_1\\leq v_2$ in\n$P_2$. We simply denote the resulting poset by $P_1 \\times P_2$. The\nfollowing well-known lemma describes the lower ideals of\n$[m]\\times P$.\n\n\\begin{lemma}\\label{lemma-ideals-CnP}\nLet $P$ be a finite poset. Let $I$ be a subset of $[m]\\times P$. For\n$1\\leq i\\leq m$, denote $I_i=\\{a\\in P\\mid (i, a)\\in I\\}$. 
Then $I$\nis a lower ideal of $[m]\times P$ if and only if each $I_i$ is a\nlower ideal of $P$, and $I_m\subseteq I_{m-1}\subseteq \cdots\n\subseteq I_{1}$.\n\end{lemma}\n\n\nIn this section, by a \emph{finite graded poset} we always mean a\nfinite poset $P$ with a rank function $r$ from $P$ to the positive\nintegers $\mathbb{P}$ such that all the minimal elements have rank\n$1$, and $r(x)=r(y)+1$ if $x$ covers $y$. In such a case, let $P_i$\nbe the set of elements in $P$ with rank $i$. The sets $P_i$ are said\nto be the \emph{rank levels} of $P$. Suppose that\n$P=\bigsqcup_{j=1}^{d} P_j$. Let $P_0$ be the empty set $\emptyset$.\nPut $L_i=\bigsqcup_{j=1}^{i} P_j$ for $1\leq i\leq d$, and let $L_0$ be the empty set.\nWe call those $L_i$ \emph{rank level lower ideals}.\n\n\nLet $\mathfrak{X}$ be the reverse operator on $[m]\times P$. In\nview of Lemma \ref{lemma-ideals-CnP}, we denote by $(I_1, \cdots,\nI_m)$ a general lower ideal of $[m]\times P$, where each $I_i\in\nJ(P)$ and $I_m\subseteq \cdots \subseteq I_{1}$. We say that the\nlower ideal $(I_1, \cdots, I_m)$ is \emph{full rank} if each $I_i$\nis a rank level lower ideal of $P$. Let $\mathcal{O}(I_1, \cdots,\nI_m)$ be the $\mathfrak{X}_{[m]\times P}$-orbit of $(I_1, \cdots,\nI_m)$. The following lemma will be helpful in determining\n$\mathfrak{X}_{[m]\times P}$-orbits consisting of rank level lower\nideals.\n\n\begin{lemma}\label{lemma-operator-ideals-CmP}\nKeep the notation as above. 
Then\nfor any $n_0\\in \\bbN$, $n_i\\in\\mathbb{P}$ ($1\\leq i\\leq s$) such that $\\sum_{i=0}^{s} n_i =m$, we have\n\\begin{equation}\\label{rank-level}\n\\mathfrak{X}_{[m]\\times P}(L_d^{n_0}, L_{i_1}^{n_1}, \\cdots, L_{i_s}^{n_s})=\n(L_{i_1+1}^{n_0+1}, L_{i_2+1}^{n_1}, \\cdots, L_{i_s+1}^{n_{s-1}}, L_0^{n_s-1}),\n\\end{equation}\nwhere $0\\leq i_s<\\cdots 0$\nis even, while $(L_{n}, L_{n-1})$ is the unique ideal\nwith size $2n$ when $i=0$.\n\n\nSecondly, assume that $n$ is even and let us analyze the orbit\n$\\mathcal{O}(I_n, I_n)$. Indeed, by Lemma \\ref{lemma-operator-ideals-CmK}, we have\n\\begin{align*}\n\\mathfrak{X}(I_n, I_n)&=(I_{n^{\\prime}}, L_0),\\\\\n\\mathfrak{X}^{n-1}(I_{n^{\\prime}}, L_0)&=(I_{n}, L_{n-1}),\\\\\n\\mathfrak{X}(I_{n}, L_{n-1})&=(L_{n}, I_{n}),\\\\\n\\mathfrak{X}^{n-1}(L_{n}, I_{n})&=(L_{2n-1}, I_{n^{\\prime}}),\\\\\n\\mathfrak{X}(L_{2n-1}, I_{n^{\\prime}})&=(I_{n}, I_{n}).\n\\end{align*}\nThus the type II orbit $\\mathcal{O}(I_n, I_n)$ consists of $2n+1$ elements. Moreover,\nin this orbit, $(I_n, I_n)$ is the unique ideal with size $2n$. The\nanalysis of the orbit $\\mathcal{O}(I_{n^{\\prime}}, I_{n^{\\prime}})$\nis entirely similar.\n\n\nFinally, assume that $n$ is odd and let us analyze the orbit\n$\\mathcal{O}(I_n, I_n)$. Indeed, by Lemma \\ref{lemma-operator-ideals-CmK}, we have\n\\begin{align*}\n\\mathfrak{X}(I_n, I_n)&=(I_{n^{\\prime}}, L_0),\\\\\n\\mathfrak{X}^{n-1}(I_{n^{\\prime}}, L_0)&=(I_{n^{\\prime}}, L_{n-1}),\\\\\n\\mathfrak{X}(I_{n^{\\prime}}, L_{n-1})&=(L_{n}, I_{n^{\\prime}}),\\\\\n\\mathfrak{X}^{n-1}(L_{n}, I_{n^{\\prime}})&=(L_{2n-1}, I_{n^{\\prime}}),\\\\\n\\mathfrak{X}(L_{2n-1}, I_{n^{\\prime}})&=(I_{n}, I_{n}).\n\\end{align*}\nThus the type II orbit $\\mathcal{O}(I_n, I_n)$ consists of $2n+1$ elements. Moreover,\nin this orbit, $(I_n, I_n)$ is the unique ideal with size $2n$. 
The\nanalysis of the orbit $\\mathcal{O}(I_{n^{\\prime}}, I_{n^{\\prime}})$\nis entirely similar.\n\nTo sum up, we have verified the claim since there are $(n+2)(2n+1)$\nlower ideals in $[2]\\times K_{n-1}$ by Lemma \\ref{lemma-ideals-CnP}.\nSince $|\\Pi_{l}|=n+2$ and $h=h^*=2n+2$ for $\\frg=D_{n+2}$, one sees that Theorem\n\\ref{thm-main} holds for $D_{n+2}$.\n\n\n\n\nTheorem \\ref{thm-main} has been verified for all exceptional Lie\nalgebras using \\texttt{Mathematica}. We only present the details for\n$E_6$, where $\\Delta(1)=[\\alpha_2]$, and the Dynkin diagram is as follows.\n\n\\begin{figure}[H]\n\\centering \\scalebox{0.5}{\\includegraphics{E6-Dynkin.eps}}\n\\end{figure}\n\nNote that $|\\Pi_l|=6$,\n$h-1=11$, $h^*-2=10$. On the other hand, $\\mathfrak{X}$ has six\norbits on $\\Delta(1)$, each of which has $11$ elements. Moreover, the size of\nthe lower ideals in each orbit is distributed as follows:\n\\begin{itemize}\n\\item[$\\bullet$] $0, 1, 2, 4, 7, \\textbf{10}, 13, 16, 18, 19, 20$;\n\n\\item[$\\bullet$] $3, 4, 5, 6, 9, \\textbf{10}, 11, 14, 15, 16, 17$;\n\n\\item[$\\bullet$] $3, 4, 5, 6, 9, \\textbf{10}, 11, 14, 15, 16, 17$;\n\n\\item[$\\bullet$] $7, 7, 8, 8, 9, \\textbf{10}, 11, 12, 12, 13, 13$;\n\n\\item[$\\bullet$] $5, 6, 6, 8, 9, \\textbf{10}, 11, 12, 14, 14, 15$;\n\n\\item[$\\bullet$] $7, 7, 8, 8, 9, \\textbf{10}, 11, 12, 12, 13, 13$.\n\\end{itemize}\nOne sees that each orbit has a unique Lagrangian lower ideal.\n\n\n\n\nThis finishes the proof of Theorem \\ref{thm-main}. \\hfill\\qed\n\n\n\\medskip\n\n\\centerline{\\scshape Acknowledgements} The research is supported by\nthe National Natural Science Foundation of China (grant no.\n11571097) and the Fundamental Research Funds for the Central\nUniversities.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\nAccording to the International Diabetes Federation, the number of people affected by diabetes is expected to reach 700 million by 2045 \\cite{Saeedi2019}. 
Diabetic retinopathy (DR) affects over one-third of this population and is the leading cause of vision loss worldwide \\cite{ogurtsova2017idf}. It occurs when the retinal blood vessels are damaged by high blood sugar levels, causing swelling and leakage. In fundus retina images, lesions appear as leaking blood and fluids. Red and bright lesions are the types of lesions commonly identified during DR screening. The incidence of blindness can be reduced if DR is detected at an early stage. In clinical routine, color fundus photographs (CFP) are employed to identify the morphological changes of the retina by examining the presence of retinal lesions such as microaneurysms, hemorrhages, and soft or hard exudates. The international clinical DR severity scale includes no apparent DR, mild non-proliferative diabetic retinopathy (NPDR), moderate NPDR, severe NPDR, and proliferative diabetic retinopathy (PDR), labeled as grades 0, 1, 2, 3 (illustrated in Fig.\\ref{fig:progresion_DR}) and 4. NPDR (grades 1, 2, 3) corresponds to the early-to-middle stage of DR and is a progressive microvascular disease characterized by small vessel damage and occlusions. PDR (grade 4) corresponds to the period of potential visual loss, which is often due to a massive hemorrhage. Early identification and adequate treatment, particularly in the mild to moderate stage of NPDR, may slow the progression of DR, consequently preventing the establishment of diabetes-related visual impairments and blindness.\n\nIn the past years, deep learning has achieved great success in medical image analysis. Many supervised learning techniques based on convolutional neural networks have been proposed to tackle the automated DR grading task \\cite{PRATT2016200,QUELLEC2017178,gayathri2020lightweight}. Nevertheless, these approaches rarely take advantage\nof longitudinal information. In this direction, Yan et al. 
\\cite{Yan} proposed to exploit a Siamese network with different pre-training and fusion schemes to detect the early stage of DR using longitudinal pairs of CFP acquired from the same patient. Furthermore, self-supervised learning (SSL) holds great promise as it can learn robust high-level representations by training on pretext tasks \\cite{e24040551} before solving a supervised downstream task. Current self-supervised models are largely based on contrastive learning \\cite{liu2020self,https:\/\/doi.org\/10.48550\/arxiv.2002.05709}. However, the choice of the pretext task to learn a good representation is not straightforward, and the application of contrastive learning to medical images is relatively limited. To tackle this, a self-supervised framework using lesion-based contrastive learning was employed for automated DR grading \\cite{Lesion-based}.\n\n\n\\begin{figure}[h!]\n\\includegraphics[width=\\textwidth]{image_paper\/longitudinal_progression_DR.png}\n\\caption{Evolution from no DR to severe NPDR for a patient in the OPHDIAT \\cite{ophdiat} dataset.\n} \n\\label{fig:progresion_DR}\n\\end{figure}\n\n\n\nMore recently, a new pretext task has appeared for classification purposes in a longitudinal context. Rivail et al. \\cite{Rivail2019} proposed a longitudinal self-supervised learning Siamese model trained to predict the time interval between two consecutive longitudinal retinal optical coherence tomography (OCT) acquisitions, thus capturing the disease progression. Yang et al. \\cite{Zhao2021} proposed an auto-encoder named LSSL that takes two consecutive longitudinal scans as inputs. They added to the classic reconstruction term a time alignment term that forces the topology of the latent space to change in the direction of longitudinal changes. An extension of this principle was provided in \\cite{Ouyang}. 
To reach a smooth trajectory field, a dynamic graph in each training batch was computed to define a neighborhood in the latent space for each subject. The graph then connects nearby subjects and enforces their progression directions to be maximally aligned.\n\nIn this regard, we aim to use LSSL approaches to capture the disease progression to predict the change between no DR\/mild NPDR (grade 0 or 1) and more severe DR (grade $\\geq$2) through two consecutive follow-ups. To this end, we explore three methods incorporating current and prior examinations. Finally, a comprehensive evaluation is conducted by comparing these pipelines on the OPHDIAT dataset \\cite{ophdiat}. To the best of our knowledge, this work is the first to automatically assess the early DR severity changes between consecutive images using self-supervised learning applied in a longitudinal context.\n\n\\section{Methods}\n\nIn this work, we study the use of different longitudinal pretext tasks. We use the encoders trained with those pretext tasks as feature extractors embedded with longitudinal information. The aim is to predict the severity grade change from normal\/mild NPDR to more severe DR between a pair of follow-up CFP images. Let $\\mathcal{X}$ be the set of subject-specific image pairs for the collection of all CFP images. $\\mathcal{X}$ contains all $(x^t, x^s)$ that are from the same subject with $x^t$ scanned before $x^s$. These image pairs are then provided as inputs to an auto-encoder (AE) structure (Fig.\\ref{fig:overview}\\textit{c}). The latent representations generated by the encoder are denoted by $z^t=F(x^t)$ and $ z^s=F(x^s)$ where $F$ is the encoder. From this encoder, we can define the $\\Delta z = (z^s - z^t)$ trajectory vector and then formulate $\\Delta z^{(t,s)} = (z^s - z^t) \/ \\Delta t^{(t,s)}$ as the normalized trajectory vector where $\\Delta t^{(t,s)}$ is the time interval between the two acquisitions. 
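The trajectory vectors just defined are straightforward to compute from a pair of latent codes. The following is our own minimal sketch (toy latent codes, not code from the paper):

```python
import numpy as np

def trajectory_vector(z_t, z_s):
    """Subject-specific trajectory vector: Delta z = z^s - z^t."""
    return z_s - z_t

def normalized_trajectory_vector(z_t, z_s, delta_t):
    """Normalized trajectory vector: (z^s - z^t) / Delta t^(t,s)."""
    return (z_s - z_t) / delta_t

# toy latent codes for a pair of follow-up images (z^t scanned before z^s)
z_t = np.array([0.2, -1.0, 0.5])
z_s = np.array([0.6, -0.2, 0.5])

dz = trajectory_vector(z_t, z_s)            # direction of progression in latent space
dz_norm = normalized_trajectory_vector(z_t, z_s, delta_t=2.0)  # progression "speed" per unit time
```

Dividing by the time interval makes pairs with different follow-up delays comparable, which is why the normalized variant is the one used by the LNE method below.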
The decoder $H$ uses the latent representation to reconstruct the input images such that $\\tilde{x}^t=H(z^t)$ and $\\tilde{x}^s=H(z^s)$. $\\mathbf{E}$ denotes the expected value. In what follows, three longitudinal self-supervised learning schemes are further described.\n\n\\begin{figure}[h!]\n\\includegraphics[width=\\textwidth]{image_paper\/image_paper_LSSL_norm_delta_z_final.png}\n\\caption{Figure a) illustrates the longitudinal Siamese network, which takes as input a pair of consecutive images and predicts the time between the examinations. Figure b) represents the longitudinal self-supervised learning scheme, which is composed of two independent modules, an AE and a dense layer. The AE takes as input the pair of consecutive images and reconstructs the image pair, while the dense layer maps a dummy vector to the direction vector $\\tau$. Figure c) corresponds to the LNE, which takes as input the consecutive pairs and builds a dynamic graph to align, within a neighborhood, the subject-specific trajectory vector ($\\Delta z$) and the pooled trajectory vector ($\\Delta h$) that represents the local progression direction in latent space (green circle).}\n\\label{fig:overview}\n\\end{figure}\n\n\n\\subsection{Longitudinal Siamese} \n\nThe Siamese network takes the image pair $(x^t, x^s)$. These images are encoded into compact representations ($z^t$, $z^s$) by the encoder network, $F$. A feed-forward neural network (denoted $G$) then predicts $\\Delta t^{(t,s)}$, the time interval between the pair of CFP images (Fig.\\ref{fig:overview}\\textit{a}). The regression model is trained by minimizing the following L2 loss: $\\parallel G(z^t,z^s) - \\Delta t^{(t,s)} \\parallel_2^2$.\n\n\\subsection{Longitudinal self-supervised learning} \n\nThe longitudinal self-supervised learning (LSSL) scheme exploits a standard AE. The AE is trained with a loss that forces the trajectory vector $\\Delta z$ to be aligned with a direction $\\tau$ lying in the latent space of the AE. 
This direction is learned through a sub-network composed of a single dense layer which maps a dummy input to a vector $\\tau$ of dimension $\\Omega_{\\alpha}$, the size of the latent space. The high-level representation of the network is illustrated in Fig.\\ref{fig:overview}\\textit{b}. Forcing the AE to respect this constraint is equivalent to encouraging $\\cos\\left(\\Delta z,\\boldsymbol{\\tau}\\right)$ to be close to 1, i.e. a zero angle between $\\boldsymbol{\\tau}$ and the direction of progression in the representation space.\n\n\n\n\n\\noindent\\textbf{Objective Function.}\n\\begin{equation}\n\\mathbf{E}_{(x^t, x^s) \\sim \\mathcal{X}} \\left(\\lambda_{rec}\\cdot(\\parallel x^t - \\tilde{x}^t \\parallel_2^2 + \\parallel x^s - \\tilde{x}^s \\parallel_2^2)-\\lambda_{dir} \\cdot \\cos(\\Delta z,\\tau)\\right)\n\\label{eq:loss_1}\n\\end{equation}\n\n\n\n\n\\subsection{Longitudinal neighbourhood embedding}\n\n\nLongitudinal neighborhood embedding (LNE) is based on the LSSL framework. The main difference is that a directed graph $\\mathcal{G}$ is built in each training batch. A pair of samples ($x_{t},x_{s}$) serves as a node in the graph with node representation $\\Delta z$. For each node $i$, Euclidean distances to other nodes $j\\neq i$ are computed by $D_{i,j} = \\parallel z^t_i - z^t_j\\parallel_2$. The $N_{nb}$ (neighbour size) closest nodes of node $i$ form its 1-hop neighbourhood $\\mathcal{N}_i$, with edges connected to $i$. The adjacency matrix $A$ for the directed graph ($\\mathcal{G}$) is then defined as:\n\n\\begin{align*} \n A_{i,j} := \n \\begin{cases}\n \\exp(-\\frac{D_{i,j}^2}{2\\sigma_i^2}) & j \\in \\mathcal{N}_i\\\\\n 0, & j \\notin \\mathcal{N}_i\n \\end{cases}~. 
\\\n \\mbox{with } \\sigma_i := \\max(D_{i,j \\in \\mathcal{N}_i}) - \\min(D_{i,j \\in \\mathcal{N}_i})\n \\label{eqn:adj}\n\\end{align*}\nThis matrix regularizes each node's representation by a longitudinal neighbourhood embedding $\\Delta h$ pooled from the neighbours' representations. The neighborhood embedding for a node $i$ is computed by:\n\\begin{equation*}\n\\Delta h_i := \\sum _{j \\in \\mathcal{N}_i} A_{i,j} O^{-1}_{i,j} \\Delta z_j,\n\\end{equation*} where $O$ is the out-degree matrix of graph $\\mathcal{G}$, a diagonal matrix that describes the sum of the weights for outgoing edges at each node. They define $\\theta_{\\langle \\Delta z,\\Delta h \\rangle}$ as the angle between $\\Delta z$ and $\\Delta h$, and encourage $\\cos(\\theta_{\\langle \\Delta z,\\Delta h \\rangle}) = 1$, i.e., a zero angle between the subject-specific trajectory vector and the pooled trajectory vector that represents the local progression direction in the latent space (Fig. \\ref{fig:overview}\\textit{c}). \n\n\\noindent\\textbf{Objective Function.} \n\\begin{equation}\n \\mathbf{E}_{(x^t, x^s) \\sim \\mathcal{X}} \\left(\\lambda_{rec} \\cdot(\\parallel x^t - \\tilde{x}^t \\parallel_2^2 + \\parallel x^s - \\tilde{x}^s \\parallel_2^2)- \\lambda_{dir} \\cdot \\cos(\\theta_{\\langle \\Delta z,\\Delta h \\rangle})\\right) \\label{eq:loss_2}\n\\end{equation}\n\n\\section{Dataset}\n\nThe proposed models were trained and evaluated on OPHDIAT \\cite{ophdiat}, a large CFP database collected from the Ophthalmology Diabetes Telemedicine network, consisting of examinations acquired from 101,383 patients between 2004 and 2017. Of the 763,848 interpreted CFP images, 673,017 are assigned a DR severity grade, the others being non-gradable. Image sizes vary from 1440 $\\times$ 960 to 3504 $\\times$ 2336 pixels. Each examination has at least two images for each eye. Each subject had 2 to 16 scans with an average of 2.99 scans spanning an average time interval of 2.23 years. 
The age range of the patients is from 9 to 91. \n\n\\noindent\\textbf{Image pair selection.} The majority of patients from the OPHDIAT database have multiple images with different fields of view for both eyes. To facilitate the pairing, we propose to select a single image per eye for each examination: we select the one that best characterizes the DR severity grade, as detailed hereafter. For this purpose, we train a standard classifier on the complete dataset to predict the DR severity grade (5 grades). During the first epoch, we randomly select one image per eye and per examination for the full dataset. After the first epoch, using the learned weights of the model, we select for each examination the image that gives the highest classification probability. We repeat this process until the images selected by the model converge to a fixed set of images per examination. From the set of selected images, we construct consecutive pairs for each patient and finally obtain 100,033 pairs of images from 26,483 patients. Only 6,690 (6.7\\%) pairs have severity grade changes from grade 0 or 1 to grade $\\geq 2$, against 93,343 (93.3\\%) pairs whose severity grades remain within grades 0 and 1. The resulting dataset exhibits the following proportions in gender (Male 52\\%, Female 48\\%) and diabetes type (type 2 69\\%, type 1 31\\%). This dataset was further divided into training (60\\%), validation (20\\%), and test (20\\%) based on subjects, i.e., images of a single subject belong to the same split, in a way that preserves the same proportions of examples in each class as observed in the original dataset. \n\n\\noindent\\textbf{Image pre-processing.} Image registration is a fundamental pre-processing step for longitudinal analysis \\cite{Saha2019}. Therefore, using an affine transformation, we first conducted a registration step to align $x_{t}$ to $x_{s}$. 
Images are then adaptively cropped to the width of the field of view (i.e., the eye area in the CFP image) and resized to 256$\\times$256. To attenuate the strong intensity variations across the dataset, the background of each color channel is estimated with a Gaussian filter and subtracted from the image. Finally, the field of view is eroded by 5\\% to eliminate illumination artifacts around the edges. During training, random resized crops ([0.96, 1.0] as scale range and [0.95, 1.05] as aspect ratio range) are applied for data augmentation purposes. \n\n\n\\section{Experiments and Results}\n\n\\noindent\\textbf{Implementation Details.} As in \\cite{Zhao2021,Ouyang}, we constructed a standard AE for all the compared methods to focus only on the presented methodology and make a fair comparison between approaches, with the hope that using advanced AE structures could lead to better encoding and generalization. In our basic architecture, we employed a stack of $n$ pre-activated residual blocks, where $n$ determines the depth of that scale for the encoder. In each res-block, the residual feature map was calculated using a series of three 3$\\times$3 convolutions, the first of which always halves the number of feature maps employed at the present scale, such that the residual representations live on a lower-dimensional manifold. Our encoder comprises five levels; the first four levels are composed of two residual blocks each, and the last level of a single residual block. This provides a latent representation of size $64\\times4\\times4$. The employed decoder is a reverse structure of the encoder. The different networks were trained for 100 epochs with the AdamW optimizer, a learning rate of $5 \\times 10^{-4}$, OneCycleLR as scheduler and a weight decay of $10^{-5}$, using an A6000 GPU with the PyTorch framework. The regularization weights were set to $\\lambda_{dir}=1.0$ and $\\lambda_{rec}=5.0$. 
A batch size of 64 was used for all models, and a neighbour size $N_{nb}=5$ and $\\Delta z^{(t,s)}$ were used for the LNE, as in the original paper \\cite{Ouyang}.\n\\subsection{Comparison of the approaches on the early change detection}\n\nWe evaluate the LSSL encoders on detecting the severity grade change from normal\/mild NPDR to more severe DR between a pair of follow-up CFP images. The classifier was constructed as the concatenation of the learned backbone (feature extractor) and a multi-layer perceptron (MLP). The MLP consists of two fully connected layers of dimensions 1024 and 64 with LeakyReLU activation, followed by a last single perceptron. Receiving the flattened representation of the trajectory vector $\\Delta z$, the MLP predicts a score between 0 and 1.\nWe compared the area under the receiver operating characteristic curve (AUC) and the accuracy (Acc) in Tab.\\ref{table:table_comparaison_AUC} for different pre-training strategies (from scratch, trained on LSSL methods, encoder from a standard AE). We also pre-trained on the OPHDIAT dataset (classification of the DR severity grade) to compare the LSSL pre-training strategies with a conventional pre-training method. The statistical significance was estimated using DeLong's test \\cite{Robin2011} to analyze and compare ROC curves. The results in Tab.\\ref{table:table_comparaison_AUC} and Fig.\\ref{fig:ROC_curve} show the clear superiority of the LSSL encoder, with a statistical significance p-value < 2.2e-16. Due to class imbalance, the Longitudinal-siamese (L-siamese) has a high Acc while exhibiting a lower AUC than the baseline (trained from scratch). 
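The classifier head described above can be sketched as follows. This is our own minimal numpy illustration: random weights stand in for trained parameters, and the input size $64\times4\times4=1024$ is the flattened trajectory vector $\Delta z$ produced by the frozen encoder.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in = 64 * 4 * 4      # flattened latent representation (64 x 4 x 4)

# randomly initialised weights standing in for the trained head parameters
W1, b1 = rng.normal(0.0, 0.02, (d_in, 1024)), np.zeros(1024)
W2, b2 = rng.normal(0.0, 0.02, (1024, 64)), np.zeros(64)
w3, b3 = rng.normal(0.0, 0.02, 64), 0.0

def leaky_relu(x, slope=0.01):
    return np.where(x > 0, x, slope * x)

def change_score(dz_flat):
    """Two fully connected layers with LeakyReLU, then a single output unit."""
    h = leaky_relu(dz_flat @ W1 + b1)
    h = leaky_relu(h @ W2 + b2)
    return 1.0 / (1.0 + np.exp(-(h @ w3 + b3)))   # sigmoid score in (0, 1)

score = change_score(rng.normal(size=d_in))       # toy flattened trajectory vector
```

Because the backbone is frozen, only these head parameters would be trained, so the comparison in Tab.1 isolates the quality of each pre-trained representation.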
\n\n\n\n\n\n\\newfloatcommand{capbtabbox}{table}[][\\FBwidth]\n\n\n\n\\begin{figure}\n\\begin{floatrow}\n\\ffigbox{%\n \\includegraphics[width=6cm]{image_paper\/roc_curve.png}%\n}{%\n \\caption{ROC curve analysis of the compared methods}\\label{fig:ROC_curve}%\n}\n\\capbtabbox{%\n\\begin{adjustbox}{width=\\columnwidth,center}%\n\\begin{tabular}{l c c}\n \\hline\n Model & AUC (95\\% CI) & Acc \\tabularnewline\n \\hline\n No pretrain& 0.8758 (0.8688-0.883) & 0.8379 \\tabularnewline\n Pre-train on OPHDIAT& 0.8994 (0.8921-0.9068) & 0.8289 \\tabularnewline\n AE& 0.7724 (0.7583-0.7866) & 0.5599 \\tabularnewline\n L-siamese \\cite{Rivail2019}& 0.8253 (0.8127-0.838) & 0.9354 \\tabularnewline\n LSSL \\cite{Zhao2021}& \\textbf{0.9624} (0.9593-0.9655) & 0.8871 \\tabularnewline\n LNE \\cite{Ouyang} & \\textbf{0.9448} (0.9412-0.9486) & 0.8646 \\tabularnewline\n \\hline\n\\end{tabular}\n\\end{adjustbox}\n\n}{%\n\\caption{Comparison of the approaches on the early change detection with the frozen encoder.}\\label{table:table_comparaison_AUC}%\n}%\n\\end{floatrow}\n\\end{figure}\n\n\n\n\\subsection{Analysis of the trajectory vector norm}\n\nWe constructed different histograms in Fig.\\ref{fig:norm_plot} representing the mean value of the trajectory vector norm with respect to both diabetes type and progression type. According to Fig.\\ref{fig:norm_plot}, only the models trained with the direction alignment loss term capture the DR change in their longitudinal representation. Indeed, we observe in the histogram that the trajectory vector $(\\Delta z)$ is able to dissociate the two types of diabetes (t-test p-value < 0.01) and the two progression types (t-test p-value < 0.01). 
For the diabetes type, a specific study \\cite{Chamard2020} about the OPHDIAT dataset indicates that the DR progression is faster for \n\\begin{figure}[h!]\n\\includegraphics[width=\\textwidth]{image_paper\/norm_z_change.png}\n\\caption{Mean of the trajectory vector norm for the different self-supervised methods used} \n\\label{fig:norm_plot}\n\\end{figure}\npatients with type 1 diabetes. Based on the fact that $\\Delta z$ can be seen as a relative speed, this observation agrees with the histogram plot of the mean of the $\\Delta z$ norm represented in Fig. \\ref{fig:norm_plot}. We also observed that the norm of the $\\Delta z$ vector is lower for pairs that remain at the normal stage of DR than for pairs that progress from mild NPDR to more severe grades. This was expected because only the methods with a direction alignment term in their objective explicitly modeled longitudinal effects, resulting in a more informative $\\Delta z$. This also implies that simply computing the trajectory vector itself is not enough to force the representation to capture the longitudinal change. \n\n\\section{Discussion}\nWe applied different LSSL techniques to encode diabetic retinopathy (DR) progression. The accuracy boost, relative to a network trained from scratch or transferred from conventional tasks, demonstrates that longitudinally pre-trained self-supervised representations learn clinically meaningful information. Concerning the limitations of the LSSL methods, we first observe that the models with no time alignment loss perform poorly and provide no evidence of disease progression encoding. Also, we report for the LNE that the normalized trajectory vector for some pairs that have a large time interval between examinations is almost all zeros, which results in a non-informative representation. This could explain the difference between the LSSL and LNE prediction performances. Moreover, during the LSSL and the LNE training, we often faced a plateau with the direction alignment loss. 
Therefore, we also claim that intensive work should be done regarding the choice of the hyperparameters: constant weights for the losses, latent space size ($\\Omega_{\\alpha}$), neighbour size ($N_{nb}$). The results concerning quantifying the encoding of the disease progression from the models trained with a time direction alignment are encouraging but not totally clear. As mentioned in \\cite{Vernhet2021}, one limitation of the LSSL approach pertains to the cosine loss (the direction alignment term from equations (\\ref{eq:loss_1},\\ref{eq:loss_2})) used to encode the longitudinal progression in a specific direction in the latent space, learned while training. The loss only focuses on the correlation with the disease progression timeline but not on the disentanglement of the disease progression itself. Therefore, a more in-depth analysis of the latent space is required to evaluate whether the trajectory vector could be used to find a specific progression trajectory according to patient characteristics (diabetes type, age, DR severity). The pairing and the registration are critical steps in the longitudinal study. As previously mentioned, by using a better registration method and exploiting different fusion schemes and backbone architectures, we could obtain enriched latent representations and, thus, hopefully, better results. Also, the frozen encoders could be transferred to other types of longitudinal problems. In summary, LSSL techniques are quite promising: preliminary results are encouraging, and we expect further improvements. \\\\\n\n\n\n\\noindent\\textbf{Acknowledgements}\nThe work takes place in the framework of the ANR RHU project Evired. 
This work benefits\nfrom State aid managed by the French National Research Agency under the \"Investissement\nd'Avenir\" program bearing the reference ANR-18-RHUS-0008.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\bibliographystyle{splncs04}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nLong-period helical spin order resulting from\nDzyaloshinskii-Moriya (DM) spin-orbit\ncoupling\\cite{dzyaloshinskii58} in non-centrosymmetric\nitinerant magnets (e.g. MnSi, Fe$_x$Co$_{1-x}$Si, FeGe with crystal\nstructure B20) has been intensively studied in the\npast.\\cite{dzyaloshinskii64,ishikawa76, moriya76,nakanishi80,bak80,beille83,lebech89,ishimoto95,uchida06}\nOne material of this family, MnSi, has attracted recent interest\nbecause of its puzzling behavior under applied hydrostatic\npressure.\\cite{pfleiderer97,thessieu97} \n A state with non-Fermi liquid transport\nproperties\\cite{pfleiderer01} is obtained over a wide range of\npressures above a critical threshold $p_c$. \nBeginning at the same critical pressure, but over a smaller\npressure range, magnetic 'partial order' is observed in neutron\nscattering experiments.\\cite{yu04,pfleiderer04} While usual helical\norder gives rise to sharp Bragg peaks in neutron scattering\n(corresponding to the periodicity of the helical spin-density wave),\n'partial order' is characterized by a neutron scattering signal which\nis smeared over a wavevector sphere rather than localized at\ndiscrete points in reciprocal space.\n\nRecent theoretical work on electronic properties, critical fluctuations and\ncollective modes of helimagnets \nis presented in Refs.~\\cite{fischer04,grigoriev05,belitz06}. 
Theoretical\nproposals for the high pressure state of MnSi have invoked proximity to a\nquantum multi-critical point \\cite{schmalian04} or magnetic liquid-gas transitions.\\cite{tewari05} Closest in spirit to our approach are the skyrmion-like magnetic patterns studied in Refs.~\\cite{bogdanov05,fischer06}.\n\nRecently, we have proposed a novel kind of magnetic order, the\nhelical spin crystal, as a promising starting point for a theory of\n'partial order'.\\cite{binz06} Helical spin crystals are magnetic\npatterns obtained by superposition of several helical\nspin-density waves which propagate in different directions. They bear a substantial resemblance to multi-$k$ magnetic structures (also known as multiple-$q$ or multiple spin density wave states).\\cite{mukamel76,jo83,forgan89,longfield02,steward04} But in contrast to most other magnetic multi-$k$ systems, in helical spin crystals the ordering wavevectors are selected from an infinite number of degenerate modes lying on a sphere in reciprocal space - a process analogous to the crystallization of liquids.\n\nIn this\nwork, we present a detailed theory of such structures. The\nstability, structure and distinctive properties of such states are\ndescribed, and the consequences of \n coupling to non-magnetic disorder are discussed.\n\nThe paper is organized as follows. First, we review the standard\ntheory of helimagnetism in Section \\ref{GLsection}, finishing with a\nshort remark about more general helical magnetic states.\n Then, the theory of helical spin crystals is developed.\nThe requirements\nto energetically stabilize helical spin crystal states are\ninvestigated in Section \\ref{energetics}. The analysis works in two directions. First, we establish a phase diagram in terms of natural parameters which tune the interaction between helical modes and second, we give simple rules to construct model interactions which stabilize a large class of helical spin crystals. 
The remaining parts of the\npaper are dedicated to extracting testable consequences of these\nnovel magnetic states. In Section \\ref{structure}, we give a\ndescription of the most prominent spin crystals which emerge from\nour energetic analysis in terms of their symmetry. It is shown that the symmetry of the magnetic state may stabilize topological textures like merons and anti-vortices which are otherwise not expected to be stable in the present context, given the order parameter and dimensionality of the system.\nSymmetry also determines which higher-harmonic Bragg peaks these structures would produce.\nWe subsequently study the response of helical spin crystals to\ndifferent perturbations in Section \\ref{response}. For example, sub-leading spin-orbit coupling (crystal anisotropy) locks the magnetic crystal to the\n underlying atomic lattice and thus determines the location of magnetic Bragg peaks. \nWe also study the response to an\nexternal magnetic field which, apart from producing a uniform\nmagnetic moment, also leads to distinctive distortions of the helical\nmagnetic structure, which could be observable by neutron scattering. \nFinally, in Section \\ref{disorder} we investigate the implications of\nnon-magnetic impurities, which are expected to destroy long-range\nmagnetic order and produce diffuse scattering.\n\n\n\\section{Landau-Ginzburg theory of helimagnetism}\\label{GLsection}\n\nFor a cubic magnet without a center of inversion, the Landau-Ginzburg free energy to quadratic order in the magnetization $\\bv M(\\bv r)$ is\n\\begin{equation}\nF_2=\\left\\langle r_0 \\bv M^2 + J (\\partial_\\alpha M_\\beta)(\\partial_\\alpha M_\\beta)+2D\\bv M\\cdot(\\bv \\nabla\\times\\bv M)\\right\\rangle,\\label{F2}\n\\ee\nwhere $\\left\\langle\\ldots\\right\\rangle$ indicates sample averaging, $r_0,J,D$ are parameters ($J>0$) and Einstein summation is understood. 
The last term of Eq.~\\eref{F2} is the DM interaction, which is odd under spatial inversion and originates in spin-orbit coupling.\\cite{dzyaloshinskii58} Fourier transformation, $\\bv M(\\bv r)=\\sum_{\\bv q} \\bv m_{\\bv q}e^{i\\bv q\\cdot\\bv r}$ with $\\bv m_{-\\bv q}=\\bv m^*_{\\bv q}$, leads to\n\\begin{equation}\nF_2=\\sum _{\\bv q} \\left[\\left(r_0+Jq^2\\right)|\\bv m_{\\bv q}|^2+2D \\bv m_{\\bv q}^*\\cdot\\left(i\\bv q\\times\\bv m_{\\bv q}\\right)\\right].\n\\ee\nClearly, the energy is minimal for circularly polarized spiral modes,\n where $\\nabla\\times\\bv M$ points in the direction of $-D\\,\\bv M$. For such modes,\n\\begin{equation}\nF_2=\\sum _{\\bv q} r(q)|\\bv m_{\\bv q}|^2,\n\\ee\nwhere $r(q)=r_0-JQ^2+J(q-Q)^2$ with $Q=|D|\/J$.\nThe Gaussian theory thus determines both the chirality of low-energy helical modes and their wavelength $\\lambda=2\\pi\/Q$. The latter is typically long (between $180\\mathring{A}$ in MnSi and $2300\\mathring{A}$ in Fe$_{0.3}$Co$_{0.7}$Si), reflecting the smallness of spin-orbit coupling effects compared to exchange.\n However, no preferred spiraling direction is selected by Eq.~\\eref{F2}, since $F_2$ is rotation-invariant. Cubic anisotropy terms which break this invariance are of higher order in the spin-orbit interaction and therefore small. We neglect them for the moment and reintroduce them later.\n\nThe isotropic Gaussian theory leaves us with an infinite number of modes which become soft as $r(Q)\\to0$. They consist of helical spin-density waves with given chirality (determined by the sign of $D$), whose wave-vectors lie on a sphere $|\\bv q|=Q$ in reciprocal space.\n Each of these helical modes is determined by an amplitude and a phase. 
Hence, for each point $\\bv q$ on the sphere, we define a complex order parameter $\\psi_{\\bv q}$ (with $\\psi_{-\\bv q}=\\psi^*_{\\bv q}$) through\n\\begin{equation}\n\\bv m_{\\bv q}=\\frac12\\psi_{\\bv q}(\\uv\\epsilon'_{\\bv q}+i \\uv\\epsilon''_{\\bv q}),\\label{psis}\n\\ee\nwhere $\\uv \\epsilon'_{\\bv q}$, $\\uv \\epsilon''_{\\bv q}$, and $\\uv q$, are mutually orthogonal unit vectors (with a defined handedness, given by the sign of $D$). Obviously, changing the phase of $\\psi_{\\bv q}$ is equivalent to rotating $\\uv \\epsilon'_{\\bv q}$ and $\\uv \\epsilon''_{\\bv q}$ around $\\uv q$. The phase of $\\psi_{\\bv q}$ is thus only defined relative to some initial choice of $\\epsilon'_{\\bv q}$. The neutron scattering intensity is proportional to $|\\uv q\\times \\bv m_{\\bv q}|^2=1\/2|\\psi_{\\bv q}|^2$, independent of the phase. Changing the phase of $\\psi_{\\bv q}$ is also equivalent to translating $\\bv M(\\bv r)$ along $\\uv q$.\n\n\nIn the following, we study minima of the free energy in the ordered phase [$r(Q)<0$].\nThese depend on the interactions between degenerate modes (i.e., free energy contributions, which are quartic or higher order in $\\bv M$). We only consider interactions which, as $F_2$, have full rotation symmetry and we will include the weak crystal anisotropy last. 
The most general quartic term which has full rotation symmetry (transforming space and spin together) is of the form\n\begin{equation}\n F_4=\!\!\sum_{\bv q_1,\bv q_2,\bv q_3}\!\!U(\bv\nq_1,\bv q_2,\bv q_3)\left(\bv m_{\bv q_1}\cdot\bv m_{\bv\nq_2}\right)\left(\bv m_{\bv q_3}\cdot\bv m_{\bv\nq_4}\right),\label{F4}\n\ee\n with $\bv q_4=-(\bv q_1+\bv q_2+\bv q_3)$.\n\n\subsection{Single-spiral state}\n\nFor example, if $U(\bv q_1,\bv q_2,\bv q_3)$ is a constant, then $F_4\propto \left\langle \bv M^4\right\rangle$.\nIf the interaction depends only on the local magnetization amplitude, i.e., in general if $F=F_2+\left\langle f(\bv M^2)\right\rangle$ for some function $f$, then the absolute minimum of $F$ is given by a single-spiral state (also known as helical spin density wave) $\bv M(\bv r)=\bv m_{\bv k}e^{i\bv k\cdot\bv r}+\bv m^*_{\bv k}e^{-i\bv k\cdot\bv r}$, where a single pair of opposite momenta $\pm \bv k$ is selected. \n To prove this, we write $F$ as\n\begin{equation}\n\sum_{\bv q} [r(q)-r(Q)]\,|\bv m_{\bv q}|^2 + \left\langle r(Q)\bv M^2+f(\bv M^2)\right\rangle.\label{proof}\n\ee\nIn the single-spiral state, $\bv M^2$ is constant in space and it minimizes the first and the second term of Eq.~\eref{proof} independently. Therefore, no other magnetic state can be lower in energy.\n\nBecause $Q$ is small, the relevant wavevectors entering Eq.~\eref{F4} are also small and $U(\bv q_1,\bv q_2,\bv q_3)$ is effectively\n close to a constant. Therefore, the single-spiral state, as observed in Fe$_x$Co$_{1-x}$Si, FeGe and in MnSi at ambient pressure, is the most natural helical magnetic order from the point of view of Landau theory.\n\n\n\n\n\subsection{Linear superpositions of single-spiral states}\label{continuous}\n\nMotivated by the phenomenology of 'partial order', we will now extend the theory beyond this standard solution.
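The key step of the proof above is that $\bv M^2$ is spatially constant for a single circular spiral but not for superpositions. A minimal numerical illustration (arbitrary units; the second spiral and the sampling line are chosen for convenience):

```python
import numpy as np

Q = 1.0
z = np.linspace(0, 4 * np.pi / Q, 2001)

# single circular spiral propagating along z
M1 = np.stack([np.cos(Q * z), -np.sin(Q * z), np.zeros_like(z)])
amp1 = (M1**2).sum(axis=0)
assert np.allclose(amp1, 1.0)                # |M|^2 constant in space

# add a second spiral propagating along x (polarized in the y-z plane),
# sampled along the line x = z so that both phases vary
M2 = M1 + np.stack([np.zeros_like(z), np.cos(Q * z), -np.sin(Q * z)])
amp2 = (M2**2).sum(axis=0)
assert amp2.std() > 0.1                      # |M|^2 = 2 - sin(2Qz) varies
assert abs(amp2.mean() - 2.0) < 1e-2
```

A superposition therefore cannot minimize $\left\langle r(Q)\bv M^2+f(\bv M^2)\right\rangle$ pointwise, which is why it can only win once the interaction depends on more than the local amplitude.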
We speculate that $U(\bv q_1,\bv q_2,\bv q_3)$ is not constant, such that $F_4$ favors a linear superposition of multiple spin-spirals with different wave-vectors on the sphere of degenerate modes $|\bv q|=Q$.\n\n\nOne may first speculate about magnetic patterns whose Fourier transform\nis non-zero everywhere on the wave-vector sphere and\n peaked infinitely sharply perpendicular to the sphere, i.e.\n\begin{equation}\n|\psi_{\bv q}|^2\propto \delta(|\bv q|-Q)\label{po-literally}\n\ee\n[see Eq.~\eref{psis}]. This idea turns out to be complicated for at least two reasons.\n\nThe first complication is that there is no continuous way of attributing a finite-amplitude spiral mode to each point on the wavevector sphere. This can be seen by noting that $\uv \epsilon'_{\bv q}$ [Eq.~\eref{psis}] is a tangent vector field on the sphere. Thus, it cannot be continuous (impossibility of combing a hedgehog).\cite{mermin79} Hence, there is no ``uniform'' superposition of helical modes on the sphere. The problem of singularities can be avoided if one assumes a $\psi_{\bv q}$ with point nodes on the sphere.\n\nThe second complication is that higher harmonics would result in a broadening of the delta-function in Eq.~\eref{po-literally}. This is seen as follows. Consider three momenta $\bv q_1, \bv q_2, \bv q_3$ on the wavevector sphere and $\bv q_4$ which is off the sphere. The non-vanishing modes $\bv m_{\bv q_1},\bv m_{\bv q_2}, \bv m_{\bv q_3}$ couple linearly to $\bv m_{\bv q_4}$ via Eq.~\eref{F4} and thus induce a higher harmonic ``off-shell'' mode $\bv m_{\bv q_4}\neq0$.
Since this happens for every point away from the sphere,\n the effect is an intrinsic broadening of the peak in $|\psi_{\bv q}|^2$, in contradiction with the initial assumption of Eq.~\eref{po-literally}.\n\n\n\section{Energetics of helical spin crystals}\label{energetics}\n\nIn the following, we study magnetic structures which are superpositions of a {\em finite} number of degenerate helical modes $\psi_j$ with wavevectors $\pm\bv k_j$, $j=1,\ldots,N$. We call the resulting states helical spin crystals, because of the analogy with weak crystallization theory of the solid-liquid transition.\cite{brazovskii87}\n\n\subsection{Structure of the quartic interaction}\label{section_int}\n\nWe assume that $F_4$ is small, and that its main effect is to provide an\ninteraction between the modes which are degenerate under $F_2$.\nThus, the relevant terms of $F_4$ are those with $|\bv q_1|= |\bv q_2|=|\bv\nq_3|=|\bv q_4|=Q$. This phase-space constraint and rotational symmetry imply that the coupling function $U$ depends only on two relative angles between the momenta:\n\begin{equation}\n\left.U(\bv q_1,\bv q_2,\bv q_3)\right|_{|\bv q_i|=Q}=U(\theta,\phi),\label{effint}\n\ee\n where we have chosen the following parameterization:\n\begin{equation}\n\begin{split}\n2\,\theta&=\arccos(\uv q_1\cdot\uv q_2)\\\n\phi\/2&=\arccos\left[\frac{(\uv q_2-\uv q_1)\cdot \uv q_3}{1-\uv q_1\cdot\uv q_2}\right].\n\end{split}\label{thph}\n\ee\n Geometrically,\n$\phi\/2$ is the angle between the two planes\nspanned by $(\bv q_1, \bv q_2)$ and $(\bv q_3,\bv q_4)$ (Fig.~\ref{angles}). In the special case $\bv q_1+\bv q_2=0$, it\n becomes the angle between $\bv q_2$ and $\bv q_3$. This mapping allows $\theta$ and $\phi$ to be interpreted as the polar and azimuthal angles of a sphere and the coupling $U(\theta,\phi)$ is a function on that sphere.
Since it describes an effective coupling between modes on the wavevector sphere, the coupling $U(\theta,\phi)$ has a status similar to that of Fermi liquid parameters in the theory of metals.\n\n\n\begin{figure}\n\includegraphics[scale=0.6]{fig1.eps}\n\caption{The set of quartets $(\bv q_1,\ldots,\bv q_4)$ satisfying $|\bv q_i|=Q$ and $\bv q_1+\ldots+\bv q_4=0$, modulo global rotations, may be parameterized by two angles $\theta$ and $\phi$ as shown in this figure. \label{angles}}\n\end{figure}\n\n\nThe Landau free energy $F=F_2+F_4$ of helical spin crystal states is calculated as follows. Obviously,\n\begin{equation}\nF_2=r(Q)\sum_j|\psi_j|^2.\label{F2psi}\n\ee\nThe quartic term, Eq.~\eref{F4}, may be split into three distinct contributions $F_4=F_{s}+F_{p}+F_{\rm nt}$. The first term, $F_{s}$, is the self-interaction of each spiral mode with itself:\n\begin{equation}\nF_{s}=U_{s}\sum_j|\psi_j|^4,\label{Fs}\n\ee\nwhere $U_{s}=U\!(\theta\!=\!\pi\/2,\phi\!=\!0)$. A minimum requirement for stability of the theory is $U_{s}>0$.\n\nThe next term, $F_{p}$, stems from pairwise interactions between modes. It is of the form\n\begin{equation}\nF_{p}=2\sum_{i<j}V_{p}(\theta_{ij})\,|\psi_i|^2|\psi_j|^2,\label{Fp}\n\ee\nwhere $2\theta_{ij}$ is the angle between the wavevectors $\bv k_i$ and $\bv k_j$ and the pair potential $V_{p}(\theta)$ is determined by the coupling function $U(\theta,\phi)$.\n\n\begin{figure}\n\includegraphics[scale=0.6]{fig2.eps}\n\caption{Variational phase diagram of the quartic theory for $W>0$ and $W'=0$. In the grey region, $F_4<0$ and the quartic theory is unstable. The various phases are explained in the text. In contrast to our earlier use of these symbols,\cite{binz06} ``$\bigtriangleup$'' and ``$\square$'' denote general states with 3 and 4 helical modes, respectively.\n\label{pdiag0}}\n\end{figure}\n\n\n\begin{figure}\n\includegraphics[scale=0.6]{fig3.eps}\n\caption{Same as Fig.~\ref{pdiag0} with $Q^2W'=-0.5W$.
The point $U_{11}=U_{22}=0$ is now at the phase boundary between spiral order and simple cubic.\n\\label{pdiag-2}}\n\\end{figure}\n\n\n\\begin{figure}\n\\includegraphics[scale=0.6]{fig4.eps}\n\\caption{a) and b) For the locations $A,\\ldots,E$ in the phase diagram of Fig.~\\ref{pdiag0},\nthe ratio between the pair interaction $V_{p}(\\theta)$ and the self-interaction $U_{s}$ is plotted as a function of the angle $2\\theta$ (the angle between propagation directions of modes). A: single-spiral state; B: phase boundary between single-spiral and bcc1; C: bcc1; D: sc; E: $\\bigtriangleup$.\n\\label{Vpfig}}\n\\end{figure}\n\nAll magnetic ground states are equal-amplitude superpositions of 1,2,3,4 or 6 spiral modes.\n The body-centered cubic (bcc) states are superpositions of all $\\langle110\\rangle$-modes.\n bcc1 and bcc2 differ by the relative phases of the six interfering helical modes (see Section \\ref{bcc1}). The simple cubic (sc) crystal consists of three mutually orthogonal spirals (e.g. along all $\\langle100\\rangle$ directions). A face-centered cubic (fcc) helical spin crystal is obtained by superposing all four $\\langle111\\rangle$-modes. However, the ground state is not fcc but a small distortion of it: fcc$^*$. In fcc$^*$, the wavevectors are shifted slightly away from $\\langle111\\rangle$ in order to gain energy from the quartet-term $F_{\\rm nt}$.\\footnote{The shift\n to directions $[\\pm\\sqrt{1+\\delta}\\,\\pm\\!\\!\\sqrt{1+\\delta}\\,\\sqrt{1-2\\delta}\\,]$ is very small with $\\delta\\sim 0.02$ and even without the deformation, fcc would still beat the neighboring phases in the fcc$^*$-region.} The symbols ``$\\bigtriangleup$'' and ``$\\square$'' are used differently here than in our previous paper.\\cite{binz06} Here, ``$\\square$'' stands for a superposition of four modes with wavevectors as shown in Fig.~\\ref{angles} with $\\phi\/2=\\pi\/2$. Hence, wavevectors $\\pm k_j$ form a square cuboid and allow for one quartet-term $F_{\\rm nt}$. 
The angle $2\\theta$ changes as a function of interaction parameters within the range $0.24\\pi<2\\theta<0.38\\pi$.\nFinally, the phase ``$\\bigtriangleup$'' consists of three modes. The wavevectors $\\bv k_1,\\bv k_2,\\bv k_3$ point to the vertices of an equilateral triangle on the sphere, whose size is determined by the requirement that the mutual angle between two wavevectors, $2\\theta$, minimizes Eq.~\\eref{V}. The angle $2\\theta$ is parameter-dependent and lies in the range $0.14\\pi<2\\theta<0.24\\pi$.\n\nAs expected, a negative $W'$ [Eq.~\\eref{W'}] favors multi-mode spin crystal states with varying magnetization amplitude relative to the spiral state with constant $\\bv M^2$ (compare Figs.~\\ref{pdiag0} and \\ref{pdiag-2}). Positive $W'$ has the opposite effect, and enhances the region of the spiral phase (not shown). The term $W'$ alone (i.e., with $U_{11}=U_{22}=0$) stabilizes sc in the regime $Q^2W'<-W\/2<0$. However, a small positive $U_{22}$ is sufficient to favors bcc1 over sc.\n\nIn conclusion, we observe that two helical spin crystals, bcc1 and sc, appear adjacent to the single-spiral state and are stable at relatively small values of $Q^2W'$, $U_{11}$ and $U_{22}$.\n In the following, we study the properties of the bcc1, and sc states since they are the most likely candidates of helical spin crystals from the point of view of energetics.\n\n\n\n\\subsection{Model interactions with exact ground states}\\label{exact}\n\nIn the preceding Section, we established a variational phase diagram for ``natural'', i.e., slowly varying coupling functions $U(\\theta,\\phi)$. 
Most phases in this phase diagram (``spiral'', sc, bcc1, bcc2, fcc and ``$\bigtriangleup$'') can be shown to be the {\em exact} global minima of suitably fine-tuned model interactions, which are constructed below.\n\nLet us consider the toy model $F_{{\rm toy}}=F_2+F_{s}+F_{p}$ [Eqs.~(\ref{F2psi}-\ref{Fp})] where the quartet term $F_{\rm nt}$ is dropped and we replace $V_{p}(\theta)$ by a constant $V$. In this model, local minima with $N$ non-vanishing modes ($N\geq1$) must have equal amplitudes $|\psi_j|^2=|r|\/[2(U_{s}-V+NV)]$ and the minimum energy with $N$ modes is\n\begin{equation}\nF_{{\rm toy},N}=-\frac14\,\frac{r^2}{V+\frac{U_{s}-V}N}.\n\ee\nThere are three regimes. If $V>U_{s}$, the single-spiral state $N=1$ is the exact ground state. If $0<V<U_{s}$, the energy decreases monotonically with $N$, so that superpositions of arbitrarily many modes are favored. If $V<0$, the energy is unbounded below and the quartic theory is unstable.\n\n\subsection{Symmetry and real-space pictures}\label{symmetry}\n\n\subsubsection{Symmetry and real-space picture of the bcc states}\label{bcc1}\n\nThe bcc spin crystals are built from six modes $\psi_1,\ldots,\psi_6$ with wavevectors along the twelve $\langle110\rangle$ directions; for these states, the quartet term $F_{\rm nt}$ contains three terms $T_x$, $T_y$, $T_z$ with a common coupling constant $\lambda_{\rm nt}$. If $\lambda_{\rm nt}>0$ ($<0$), the phases of the $\psi$'s are such that $T_x$, $T_y$, $T_z$ are all negative (positive). Three out of the six phases are arbitrary due to global translation symmetry. This means that the magnetic pattern of bcc1 and bcc2 is uniquely determined up to translational and time-reversal degeneracy.\n\n\begin{table}\n\begin{ruledtabular}\n\begin{tabular}{c|cccccc|ccc}\n & $\psi_1'$ & $\psi_2'$ & $\psi_3'$ & $\psi_4'$ & $\psi_5'$ & $\psi_6'$ & $T_x'$ & $T_y'$ & $T_z'$ \\\n\hline\n$R_z$ & $\psi_2$ & $\psi_1^*$ & $\psi_6^*$ & $\psi_5^*$ & $\psi_3$ & $\psi_4$ & $T_y$ & $T_x^*$ & $T_z^*$ \\\n$R_x$ & $i\psi_5$ & $-i\psi_6^*$ & $-\psi_4^*$ & $\psi_3$ & $i\psi_2^*$ & $i\psi_1$ & $T_x^*$ & $T_z$ & $T_y^*$\n\end{tabular}\n\caption{Transformation properties of the $\psi$-variables and three quartic terms (defined in Section \ref{bcc1}) of the bcc spin crystals under rotations. $R_z$ and $R_x$, respectively, are $\pi\/2$ rotations around the $z$- and $x$-axis.
These two rotations generate the cubic point group $O$ and therefore the behavior under any rotation which maps the 12 wavevectors onto each other may be obtained by combining these two operations.\n\label{trans}}\n\end{ruledtabular}\n\end{table}\n\nThe solution for $\lambda_{\rm nt}>0$, bcc1, turns out to be the bcc structure with the highest point group symmetry.\nBy selecting the coordinate origin conveniently, we obtain \n $-\psi_1=\psi_2=-i\psi_3=i\psi_4=i\psi_5=-i\psi_6=SM_0$ for bcc1, where $S=\pm1$ is the time-reversal symmetry label and $M_0>0$ is the amplitude. From Table \ref{trans}, we deduce that $\bv M(\bv r)$ changes sign under a $\pi\/2$ rotation about the $x$, $y$ or $z$ axis. That is, the magnetic point group is $O(T)$ (international notation $\underline{4}32$) with 4-fold anti-rotation axes at $\langle 100\rangle$, 3-fold rotation axes at $\langle111\rangle$ and 2-fold anti-rotation axes at $\langle110\rangle$.\n\nThe real-space representation of the bcc1 state is\n\begin{equation}\n\bv M(\bv r)=SM_0\left(\begin{array}{c}\sqrt2\,s_x(c_y-c_z)-2\,s_ys_z\\ \sqrt2\,s_y(c_z-c_x)-2\,s_zs_x\\ \sqrt2\,s_z(c_x-c_y)-2\,s_xs_y\end{array}\right),\n\ee\n where $s_x=\sin(Qx\/\sqrt{2})$, $c_x=\cos(Qx\/\sqrt{2})$, etc.\n The resulting pattern was shown in Fig.~2 of our earlier paper.\cite{binz06} In Fig.~\ref{real-space}, we show the symmetry axes. As discussed above, the magnetization must vanish along the 4-fold anti-rotation axes, which are anti-vortices with winding number $-1$. The $x$, $y$, $z$ axes, and their translations according to the bcc periodicity, form two interpenetrating cubic lattices of such line-nodes.
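The vanishing of $\bv M$ on the 4-fold anti-rotation axes, and the bcc periodicity (cubic lattice constant $\sqrt2\,\lambda$, as follows from $s_x=\sin(Qx/\sqrt2)$), can be checked directly from the real-space expression above. A small numerical sketch with illustrative values $Q=M_0=1$, $S=1$:

```python
import numpy as np

Q, M0, S = 1.0, 1.0, 1
lam = 2 * np.pi / Q                    # helix wavelength lambda = 2 pi / Q
a = np.sqrt(2) * lam                   # cubic lattice constant of the bcc cell

def M_bcc1(r):
    """Real-space magnetization of the bcc1 state, as given in the text."""
    x, y, z = r
    s = lambda u: np.sin(Q * u / np.sqrt(2))
    c = lambda u: np.cos(Q * u / np.sqrt(2))
    return S * M0 * np.array([
        np.sqrt(2) * s(x) * (c(y) - c(z)) - 2 * s(y) * s(z),
        np.sqrt(2) * s(y) * (c(z) - c(x)) - 2 * s(z) * s(x),
        np.sqrt(2) * s(z) * (c(x) - c(y)) - 2 * s(x) * s(y),
    ])

# M vanishes along the x, y and z axes (the 4-fold anti-rotation lines) ...
for t in np.linspace(0.0, 2 * a, 7):
    assert np.allclose(M_bcc1((t, 0, 0)), 0)
    assert np.allclose(M_bcc1((0, t, 0)), 0)
    assert np.allclose(M_bcc1((0, 0, t)), 0)

# ... while generic points are magnetized, and the pattern is periodic
r = np.array([0.3, 1.1, 2.7])
assert np.linalg.norm(M_bcc1(r)) > 1e-3
assert np.allclose(M_bcc1(r), M_bcc1(r + np.array([a, 0.0, 0.0])))
```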
The cubic space diagonals are 3-fold and the red arrowed lines of Fig.~\ref{real-space} are 2-fold rotation axes.\n In the vicinity of these lines, the magnetization field is skyrmion-like (i.e., $\bv M_\perp$ has a winding number of $+1$).\n\nThe fact that the bcc1 state breaks $\mathcal{T}$ in a non-trivial way, such that time reversal cannot be undone by any translation, is manifest in the occurrence of the $\mathcal{T}$-breaking order parameter $\langle M_xM_yM_z\rangle=SM_0^3\/2\neq0$, which is a magnetic octupole. This curious property may lead to distinctive anomalous effects, e.g. in the magnetotransport.\cite{binz06b} \nOctupolar magnetic ordering has recently been discussed in different contexts.\cite{octupole}\n\n\n\begin{figure}\n\includegraphics[scale=0.6]{fig6.eps}\n\caption{(Color online). Symmetry of the bcc1 state. The figure shows a cubic unit cell of bcc1. Black lines are anti-vortex lines with 4-fold anti-rotation symmetry and vanishing magnetization. The red (dark gray) lines are 2-fold rotation axes and the arrows indicate the direction of $\bv M$. The structure has 3-fold rotation symmetry about all cubic space diagonals.\n\label{real-space}}\n\end{figure}\n\n\subsubsection{Symmetry and real-space picture of sc}\n\n\nThe simple cubic (sc) helical spin crystal consists of three modes with $\uv k_1=[100]$, $\uv k_2=[010]$ and $\uv k_3=[001]$.
It forms a periodic structure with a cubic unit cell and the lattice constant is $\lambda=2\pi\/Q$.\n The convention for $\uv \epsilon_j''$ is given by Eq.~\eref{conv}, where the unit vector $\uv z$ is replaced by $[111]$.\n\n\begin{table}\n\begin{ruledtabular}\n\begin{tabular}{c|ccccc}\n& $R_x$ & $R_y$ & $R_z$ & $R_{[111]}$ & $R_{[1\bar 1 0]}$ \\\n\hline\n $\psi_1'$ & $i\psi_1$ & $-i\psi_3$ & $\psi_2^*$ & $\psi_3$ & $-\psi_2^*$ \\\n $\psi_2'$ & $\psi_3^*$ & $i\psi_2$ & $-i\psi_1$ & $\psi_1$ & $-\psi_1^*$ \\\n $\psi_3'$ & $-i\psi_2$ & $\psi_1^*$ & $i\psi_3$ & $\psi_2$ & $-\psi_3^*$\n\end{tabular}\n\caption{Transformation properties of the $\psi$-variables of sc under rotations. $R_z$ and $R_x$ are $\pi\/2$-rotations, $R_{[111]}$ is a $2\pi\/3$-rotation and $R_{[1\bar 1 0]}$ a $\pi$-rotation around the indicated axis.\n\label{transsc}}\n\end{ruledtabular}\n\end{table}\n\n\n\n The transformation properties of the $\psi$-variables under rotations are given in Table \ref{transsc}. By choosing the origin of coordinates such that $\psi_1=\psi_2=\psi_3=iM_0$, we obtain from Table \ref{transsc} that the point group symmetry is $D_3(D_3)$ (international notation 32). That is, the chosen origin has a 3-fold rotation axis along $[111]$ and three two-fold axes along $[1\bar10]$, $[10\bar1]$ and $[01\bar1]$. Obviously, $\bv M$ must vanish at a point of such high symmetry. Hence there is a point node at the origin.\n\nSymmetry operations consisting of a rotation followed by an appropriate translation yield similar point nodes at $\frac 12\frac {3}4\frac {1}4$ (with 3-fold axis along $[\bar111]$), $\frac 14\frac {1}2\frac 34$ (3-fold axis $[1\bar11]$) and $\frac {3}4\frac {1}4\frac {1}2$ (3-fold axis $[11\bar1]$).
Finally, each of these nodes is doubled inside one unit cell\n because a translation by $(\frac \lambda2,\frac \lambda2,\frac \lambda2)$ amounts to $\bv M\to-\bv M$.\nThe 2- and 3-fold rotation axes form a complex array of skyrmion-like lines, all with winding numbers of $+1$.\nThe real-space representation is\n\begin{equation}\n\bv M(\bv r)=SM_0\left(\begin{array}{c}\n\tilde c_y-\tilde s_z\\\n\tilde c_z-\tilde s_x\\\n\tilde c_x-\tilde s_y\n\end{array}\right),\n\ee\nwhere $\tilde s_x=\sin[Q(x+\lambda\/8)]$, $\tilde c_x=\cos[Q (x+\lambda\/8)]$, etc.\n\n\n\subsection{Higher-harmonics Fourier modes}\label{higher-harmonics}\n\nAs briefly mentioned in Section \ref{continuous}, magnetic ordering in wavevectors $\pm\bv k_j$ generally induces higher harmonics in the magnetic structure. In the presence of magnetic order $\bv m_{\bv k_j}\neq0$, the Landau free energy for the modes $\bv m_{\bv q}$ with $\bv q\notin\{\pm\bv k_j\}$ is (to quartic order)\n\begin{equation}\n\Delta F=\sum_{\bv q} \tilde r_\psi(\bv q) |\bv m_{\bv q}|^2-\bv h_{\psi}(\bv q)\cdot \bv m^*_{\bv q}-\bv h^*_{\psi}(\bv q)\cdot \bv m_{\bv q}, \label{induce}\n\ee\nwith $\tilde r_\psi(\bv q)=r(q)+O\left(|\psi_j|^2\right)$ and\n\begin{equation}\n\bv h_{\psi}(\bv q)=-4\n \!\!{\sum_{\bv q_1,\bv q_2,\bv q_3}}^{\!\!\!\!\!\!\prime}\,\,U(\bv\nq_1,\bv q_2,\bv q_3)\left(\bv m_{\bv q_1}\cdot\bv m_{\bv\nq_2}\right)\,\bv m_{\bv q_3},\n\label{hpsi}\n\ee\nwhere the sum is restricted to $\bv q_1,\bv q_2,\bv q_3\in \{\pm\bv k_{j}\}$ such that $\bv q_1+\bv q_2+\bv q_3=\bv q$. The origin of the exchange field $\bv h_{\psi}$ is the coupling term Eq.~\eref{F4}. In the following, we assume that $\tilde r_\psi(\bv q)>0$.
Obviously, Eq.~\\eref{induce} then leads to induced modes\n\\begin{equation}\n\\bv m_{\\bv q}=\\frac{\\bv h_{\\psi,\\bv q}}{\\tilde r(\\bv q)}\\label{harmonics}\n\\ee\nat momenta $\\bv q=\\pm\\bv k_{j_1}\\pm\\bv k_{j_2}\\pm\\bv k_{j_3}$.\\footnote{In non-magnetic crystals, the coupling term is cubic and therefore higher harmonics are generated at all sums of {\\em two} momenta $\\bv k_{j_1}\\pm\\bv k_{j_2}$. In magnetic structures, \n higher-harmonics wavevectors are restricted to sums of {\\em three} ordering momenta.} These modes modify the detailed magnetic structure, but they do not change its symmetry, since the field $\\bv h_\\psi$ respects all the symmetries of the spin crystal.\n\nWe now briefly discuss the consequences for the three helical magnetic structures under consideration.\n\n\n A single spin-density wave involving wavevectors $\\pm\\bv k$ might create higher harmonics at $\\pm3\\bv k$ via Eq.~\\eref{harmonics}. However, in the case of spin spirals, $\\bv m_{\\bv k}^2=0$ and therefore $\\bv h_{\\psi,3\\bv k}=0$.\n Thus, there are {\\em no} higher harmonics created by a single spin spiral.\n\n\nThe sc spin structure with principal ordering wave-vectors along $\\langle001\\rangle$ with $|\\bv k_j|=Q$\n generates higher harmonics along $\\langle111\\rangle$ (with $|\\bv q|=\\sqrt{3}Q$) and\n along $\\langle012\\rangle$ (with $|\\bv q|=\\sqrt{5}Q$). Note that throughout the current and last sections, all crystal directions refer to the magnetic crystals.\nThe orientation of a magnetic crystal with respect to the atomic crystal depends on the anisotropy term $F_{\\rm a}$, which will be considered in Section \\ref{crystalanisotropy}.\n\nIn contrast to the former cases, bcc structures couple linearly to the $\\bv q=0$ mode (i.e., the uniform magnetization), since some triples of ordering vectors add to zero. This coupling will be further investigated in Section \\ref{H}. Here, we only notice that for the bcc1 and bcc2 states, $\\bv h_{\\psi,\\bv q=0}=0$. 
This result can be understood in terms of the symmetry of these states. In the case of bcc1, the point group symmetry is too high to support a non-zero axial vector $\bv h_{\psi}(\bv q=0)$. Therefore, bcc1 and bcc2 do not create a spontaneous net magnetization.\nThe next set of wavevectors which can be reached by adding three ordering vectors are along $\langle001\rangle$ (with $|\bv q|=\sqrt{2}Q$). However, for bcc1 and bcc2, direct calculation shows $\bv h_{\psi}=0$ for these modes. As before, this can be understood in terms of symmetry. Higher harmonics along $\langle001\rangle$ would have the structure of a sc spin crystal. We have seen in Section \ref{symmetry} that the point group symmetry of sc is lower than that of bcc1. Therefore, bcc1 cannot create such an exchange field $\bv h_{\psi}$. \n We conclude that bcc1 creates {\em no} secondary Bragg peaks at $(0,0,\sqrt{2}Q)$, etc. The same is true for bcc2.\nThe shortest wavevectors which are created by bcc1 or bcc2 as higher harmonics are along $\langle 112\rangle$ (with $|\bv q|=\sqrt{3}Q$). Others are at $\langle110\rangle$ ($|\bv q|=2Q$), $\langle013\rangle$ ($|\bv q|=\sqrt{5}Q$), $\langle111\rangle$ ($|\bv q|=\sqrt{6}Q$) and $\langle123\rangle$ ($|\bv q|=\sqrt{7}Q$).\n\n\section{Response to crystal anisotropy, magnetic field and disorder}\label{response}\n\n\subsection{Effect of crystal anisotropy}\n\nSo far, our free energy has\nbeen completely rotation invariant. In the magnetically ordered states, full rotation symmetry\nis spontaneously broken, but any global rotation of the spin\nstructure leaves the energy invariant.\n This degeneracy is lifted by an additional anisotropy term $F_{\rm a}$, which couples the magnetic crystal to the underlying atomic lattice.
The crystal anisotropy energy is small and may be treated as a perturbation which merely selects the directional orientation, but does not otherwise affect the magnetic state.\n\n\n\subsubsection{Single-spiral state}\label{spiral-anisotropy}\n\nIn the case of a single-spiral state, crystal anisotropy is a function $F_{\rm a}(\uv k)$, where $\uv k$ is the spiral direction.\n The function $F_{\rm a}(\uv k)$ may depend on various parameters, but it must be symmetric under the\n point group of the (atomic) crystal lattice and satisfy $F_{\rm a}(\uv k)=F_{\rm a}(-\uv k)$. For concreteness, we assume the cubic point group $T$, relevant for the $B20$ crystal structure. We further assume that $F_{\rm a}(\uv k)$ is a slowly varying function, since a singular or rapidly oscillating function in reciprocal space would translate into a (non-local) interaction between magnetic moments and the atomic crystal. Such a function $F_{\rm a}(\uv k)$ generally has its minimum at either $\left\langle100\right\rangle$ or $\langle111\rangle$, which can be shown in two different ways.\n\nThe first argument is based on combining symmetry with Morse's theory of critical points.\cite{morse34} Morse theory implies that\n\begin{equation}\n\mbox{maxima}-\mbox{saddles}+\mbox{minima}=2 \label{morse}\n\ee\nfor a function on the unit sphere. Symmetry requires that $F_{\rm a}(\uv k)$ has stationary points (points with vanishing first derivative, i.e., maxima, minima or saddles) at $\left\langle100\right\rangle$ (6 directions), $\left\langle111\right\rangle$ (8 directions) and $\left\langle110\right\rangle$ (12 directions).\nIf $F_{\rm a}(\uv k)$ is slowly varying, we suspect that these are the only stationary points, since adding more maxima, minima and saddles means that the function is more rapidly oscillating.
Under this hypothesis, it follows from Eq.~\eref{morse} that the $\left\langle110\right\rangle$ directions are saddle points and that the extrema are at $\left\langle111\right\rangle$ and $\left\langle100\right\rangle$. For $\left\langle110\right\rangle$ to be minima, $F_{\rm a}(\uv k)$ needs to have additional stationary points (e.g. saddles) at non-symmetric, parameter-dependent locations. We conclude that an anisotropy which favors $\left\langle110\right\rangle$ would need to be more rapidly oscillating than required by symmetry.\n\nThe second argument is based on an expansion of $F_{\rm a}(\uv k)$ in powers of the direction cosines $\hat k_x, \hat k_y,\hat k_z$:\n\begin{equation}\n F_{\rm a}(\uv k)=\alpha\, (\hat k_x^4+\hat k_y^4+\hat k_z^4) + \alpha' \, \hat k_x^2\hat k_y^2\hat k_z^2+\ldots,\label{Fa}\n\ee\nwhere we retained the first two terms allowed by cubic symmetry. Because of the smallness of the wavevector-sphere radius $Q$, one typically expects $|\alpha'|\ll |\alpha|$ and subsequent terms even smaller. It is easily checked that for most values of the parameters $\alpha,\alpha'$, Eq.~\eref{Fa} has its global minima at either $\left\langle100\right\rangle$ (for $\alpha<\min\{0,\alpha'\/18\}$) or $\left\langle111\right\rangle$ (for $\alpha>\max\{2\alpha'\/9,\alpha'\/18\}$).
Only in the narrow parameter regime $0<\alpha<2\alpha'\/9$ are the minima indeed at $\left\langle110\right\rangle$.\footnote{In this regime, all 26 high-symmetry points are extrema of $F_{\rm a}$ and 24 saddle points are located at $\left\langle\sqrt{2\alpha}\,\sqrt{2\alpha}\,\sqrt{\alpha'-4\alpha}\right\rangle$, in agreement with Eq.~\eref{morse}.} We conclude that crystal anisotropy which favors $\left\langle110\right\rangle$ may only appear in a narrow regime between two phases which favor $\left\langle111\right\rangle$ and $\left\langle100\right\rangle$, respectively.\n\nAccordingly, $\left\langle111\right\rangle$ or $\left\langle100\right\rangle$ are the selected spiral directions in all cubic helimagnets known so far.\cite{ishikawa76,beille83,lebech89}\nThe preferred direction in MnSi at low pressure is $\left\langle 111\right\rangle$ and in Fe$_x$Co$_{1-x}$Si, it is $\left\langle 100\right\rangle$. In FeGe, there is a phase transition between these two directions,\cite{lebech89} but no intermediate phase with $\left\langle110\right\rangle$ spiral orientation has been reported. However, the neutron scattering data\cite{pfleiderer04} in the partially ordered phase of MnSi clearly show a maximum signal along the $\left\langle110\right\rangle$ crystal directions. While it is initially tempting to interpret the partially ordered state of MnSi as a single-spiral state that has lost its orientational long-range order by some mechanism, one would still expect a maximal scattering intensity in the energetically preferred lattice direction.\n Theories of the partially ordered state in terms of disordered helical spin-density waves\cite{tewari05,grigoriev05} thus depend on a crystal anisotropy that prefers spiral directions along $\left\langle 110\right\rangle$.
As we have shown, this seems very unlikely.\n\n\n\n\n\subsubsection{Helical spin crystals}\label{crystalanisotropy}\n\n\n For multi-mode spin crystals, \n $F_{\rm a}$ is no longer determined by a single direction $\uv k$, so the arguments of Section \ref{spiral-anisotropy} do not apply. Rather, the anisotropy energy depends on three Euler angles, which rotate the full three-dimensional magnetic structure relative to the atomic crystal. In other words, $F_{\rm a}$ is a function on the rotation group $SO(3)$. Relative to some standard orientation $\bv k_j$ of the mode directions, the leading-order anisotropy term is\n\begin{equation}\nF_{\rm a}(R)=a\sum_{j}g({R\uv k_j})\,|\psi_j|^2,\n\label{crysFa}\n\ee\nwhere \n $R$ is a rotation operator and $g(\uv k)=\hat k_x^4+\hat k_y^4+\hat k_z^4$. As before, we have assumed a cubic point group symmetry.\n\nIn the case $a>0$, the modes of the bcc spin crystals get locked to the $\langle110\rangle$ directions. The orientation of sc is fourfold degenerate if $a>0$. The four minima of $F_{\rm a}$ are obtained from the standard orientation along $\langle100\rangle$ through a $\pi\/3$-rotation around any of the four space diagonals, such that the three spiral modes point along $\langle122\rangle$.\n\nIn the opposite case ($a<0$), sc is oriented along $\langle100\rangle$. This time, it is the bcc spin crystals that get rotated by $\pi\/3$ around any $\langle111\rangle$ axis to reach one of four stable orientations. Under such $\pi\/3$-rotations, three of the six modes remain along $\langle110\rangle$ and three move to $\langle114\rangle$. Each individual $\langle114\rangle$ direction appears only in one of the four solutions but each $\langle110\rangle$ direction appears in two of four solutions.\n\n\nIf there is more than one degenerate\n orientation, the sample typically breaks up into domains such that full cubic symmetry is restored in the neutron scattering signal.
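The two-parameter anisotropy $F_{\rm a}=\alpha(\hat k_x^4+\hat k_y^4+\hat k_z^4)+\alpha'\hat k_x^2\hat k_y^2\hat k_z^2$ of the single-spiral analysis can be spot-checked by brute force; the parameter choices below are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
k = rng.normal(size=(20000, 3))
k /= np.linalg.norm(k, axis=1, keepdims=True)     # random unit vectors

def Fa(kk, alpha, alpha_p):
    """Cubic anisotropy alpha*(kx^4+ky^4+kz^4) + alpha'*kx^2*ky^2*kz^2."""
    return alpha * (kk**4).sum(axis=-1) + alpha_p * (kk**2).prod(axis=-1)

dirs = {'100': np.array([0., 0., 1.]),
        '110': np.array([0., 1., 1.]) / np.sqrt(2),
        '111': np.array([1., 1., 1.]) / np.sqrt(3)}

def favored(alpha, alpha_p):
    """High-symmetry family with the lowest anisotropy energy."""
    vals = {name: Fa(d, alpha, alpha_p) for name, d in dirs.items()}
    best = min(vals, key=vals.get)
    # sanity: no scanned direction lies below the best symmetry direction
    assert Fa(k, alpha, alpha_p).min() >= vals[best] - 1e-9
    return best

assert favored( 1.0, 0.0) == '111'   # alpha > max{2a'/9, a'/18}
assert favored(-1.0, 0.0) == '100'   # alpha < min{0, a'/18}
assert favored( 0.1, 1.0) == '110'   # narrow window 0 < alpha < 2a'/9
```

The third case confirms that a $\langle110\rangle$ minimum requires the narrow window between the $\langle111\rangle$- and $\langle100\rangle$-favoring regimes.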
Table \\ref{orienttable} lists the directions of magnetic Bragg peaks for the \n different cases.\n Out of the three prominent phases in our phase diagram, the bcc1 spin crystal is the only one that can explain the neuron-scattering peaks along $\\langle110\\rangle$ in the 'partial order' phase of MnSi. It does so most naturally for the case $a>0$, which is the known sign of the anisotropy in MnSi at low pressure.\n\n\nIn order to compare the energy scale of $F_{\\rm a}$ (i.e., the locking energy) for the different magnetic states, we note the following.\n At the phase boundary between two equal-amplitude spin-crystal phases (one phase with amplitudes $|\\psi_1|=\\ldots=|\\psi_N|$ and the second phase with $|\\tilde\\psi_1|=\\ldots=|\\tilde\\psi_{\\tilde N}|$)\nthe amplitudes of the two neighboring phases are related by\n\\begin{equation}\nN|\\psi_j|^2=\\tilde N|\\tilde\\psi_{j'}|^2.\n\\ee\nIn the vicinity of the phase boundary, the anisotropy term is therefore proportional to the mode-average $1\/N\\sum_j g(R\\uv k_j)$. Using this result, we find that the effective anisotropy energy is smaller for the bcc spin crystals than for the single-spiral state by a factor of 4 - 4.5.\\footnote{Using $\\max\\{F_{\\rm a}\\}-\\min\\{F_{\\rm a}\\}$ to estimate the locking energy scale leads to a reduction factor of 4.5. Expanding $F_{\\rm a}$ around its minimum for $a>0$ leads to a factor of 4.}\nFor the sc state, the locking energy is anisotropic. Certain rotations are equally costly in energy as for the single-spiral case while some small rotations about the minimum for $a>0$ are softer than for the single-spiral by a factor of 4.5.\n\n\n\\begin{table\n\\begin{ruledtabular}\n\\begin{tabular}{c|cc|cc|cc}\n & spiral & No. & bcc & No. & sc & No. 
\\\\\n\\hline\n $a>0$ &$\\langle111\\rangle$ & 4 & $\\langle110\\rangle$ & 1 &$\\langle122\\rangle$ & 4\\\\\n $a<0$ &$\\langle100\\rangle$ & 3 & $\\langle110\\rangle$,$\\langle114\\rangle$\\footnote{When averaged over domains, Bragg peaks along $\\langle110\\rangle$ are twice as intense as peaks along $\\langle114\\rangle$.} & 4 &$\\langle100\\rangle$ & 1\n\\end{tabular}\n\\caption{Crystal directions of magnetic Bragg peaks and number of degenerate orientations for three magnetic structures, sc, bcc and single-spiral, with crystal anisotropy given by Eq.~\\eref{crysFa}.\n\\label{orienttable}}\n\\end{ruledtabular}\n\\end{table}\n\n\\subsection{Effect of magnetic field}\\label{H}\n\n\nA uniform external magnetic field $\\bv H$ couples to the $\\bv q=0$ mode of the\nmagnetization, $\\bv m=\\langle\\bv M\\rangle$, via Zeeman coupling.\nThe uniform magnetization, in turn, couples to the helical modes $\\psi_j$ through\n\\begin{equation}\n\\begin{split}\nF=&\\left.F\\right|_{\\bv m=0}+r_0\\,\\bv m^2+U(0,0,0)\\,\\bv m^4-\\bv h_{\\psi}(0)\\cdot\\bv m\\\\\n&+2\\sum_j\\left(U(\\bv k_j,-\\bv k_j,0)\\,\\bv m^2+U(\\bv k_j,0,0)\\,\\bv m^2_{\\perp,j}\\right)|\\psi_j|^2,\\label{Fm}\n\\end{split}\n\\ee\nwhere $\\bv m_{\\perp,j}=\\bv m-(\\bv m\\cdot\\uv k_j)\\uv k_j$ and we have used Eqs.~\\eref{F4} and \\eref{hpsi}. Plumer and Walker\\cite{plumer81} argued that $U(0,0,0)\\approx U(\\bv k_j,-\\bv k_j,0)\\approx U_{s}$, which we will use in the following for simplicity.\n\n\\subsubsection{Response of the single-spiral state}\n\n\nThe behavior of the single-mode spiral state under a\nmagnetic field has been studied both experimentally\\cite{ishikawa76,lebech89,ishimoto95,thessieu97,uchida06} and theoretically.\\cite{kataoka81,plumer81}\n It is characterized by a strongly anisotropic susceptibility,\ninduced by the last term in Eq.~\\eref{Fm}. 
For a fixed spiral direction $\\bv k$, the susceptibilities parallel and orthogonal to the spiral direction are given by\n\\begin{equation}\n\\begin{split}\n\\chi_\\parallel\\,\\approx&\\,\\Delta^{-1}\\\\\n\\chi_\\perp\\approx&\\left(\\Delta+2\\frac{U'}{U_{s}}|r(Q)|\\right)^{-1},\n\\end{split}\n\\ee\nwhere $\\Delta=2[r_0-r(Q)]=2JQ^2$ and $U'=U(\\bv k,0,0)$. Well below the critical ordering temperature, $\\Delta\\ll 2|r(Q)|$ and therefore $\\chi_\\parallel\\gg \\chi_\\perp$. This strong anisotropy leads to a spin reorientation transition at $H=H_{\\rm sr}$, where the spiral axis gets oriented along the field direction. The value of $H_{\\rm sr}$ depends on the anisotropy [Eq.~\\eref{Fa}] and the field direction. For $\\alpha>0$ and $\\bv H\\parallel \\langle 100\\rangle$, Plumer and Walker obtained\n\\begin{equation}\nH_{\\rm sr}^2=\\frac{4\\alpha}{\\chi_\\parallel-\\chi_\\perp}\\gtrsim 4 \\alpha \\Delta,\\label{Hsr}\n\\ee\nwhere we have used $\\chi_\\parallel\\gg \\chi_\\perp>0$. Once the spiral is oriented, the susceptibility is large (equal to $\\chi_\\parallel$).\n\nThe spiral amplitude decreases as a function of the external field and vanishes at\n\\begin{equation}\n H_{\\rm c}=|\\psi_0|\\Delta,\\label{Hc}\n\\ee\nwhere $|\\psi_0|^2=|r(Q)|\/(2U_{s})$. Above $H_{\\rm c}$, the magnetization is uniform.\n\n\n\\subsubsection{Response of the bcc1 spin crystal}\\label{bcc-and-field}\n\n In the bcc spin crystal\nstates, the linear response is isotropic, because\ntheir symmetry group does not allow for an anisotropic\nsusceptibility tensor. As a consequence, there is no orientation of the bcc state towards the magnetic field at the level of linear response (i.e., from energies up to order $\\bv H^2$). However, there is a sub-leading contribution to the energy $\\propto\\langle M_xM_yM_z\\rangle H_xH_yH_z$. 
This contribution splits the degeneracy between the $S=1$ and $S=-1$ states and it may lead to a reorientation of the bcc crystal towards the field.\n\n\nIn terms of the six $\\psi$-variables of bcc, the exchange field $\\bv h_\\psi(0)$, which enters Eq.~\\eref{Fm}, amounts to\n \\begin{equation}\n\\bv h_\\psi(0)=-\\mu \\Re{\\left[(5-i\\sqrt{2})\\bv{\\tilde h}_\\psi\\right]},\n\\ee\nwhere $\\mu=U(\\bv k_{j_1},\\bv k_{j_2},\\bv k_{j_3})$ for $\\bv k_{j_1}+\\bv k_{j_2}+\\bv k_{j_3}=0$ and\n\\begin{equation}\n\\begin{split}\n\\bv{\\tilde h}_\\psi= & \\,\\psi_1\\psi_3^*\\psi_6^*\\left(\\begin{array}{c}1\\\\1\\\\1\\end{array}\\right)\n+\\psi_1^*\\psi_4\\psi_5\\left(\\begin{array}{c}-1\\\\-1\\\\1\\end{array}\\right) \\\\\n& -\\psi_2\\psi_4^*\\psi_6\\left(\\begin{array}{c}-1\\\\1\\\\-1\\end{array}\\right)\n-\\psi_2^*\\psi_3\\psi_5^*\\left(\\begin{array}{c}1\\\\-1\\\\-1\\end{array}\\right).\n\\end{split}\n\\ee\n\nThe (isotropic) inverse spin susceptibility (see Appendix) in the bcc1 state is composed of three contributions\n\\begin{equation}\n\\chi^{-1}_{\\rm bcc1}=\\chi^{-1}_{\\rm bare}+\\chi_{\\rm phase}^{-1}+\\chi_{\\rm amp}^{-1}.\\label{chibcc1}\n\\ee\nThe first term\n\\begin{equation}\n\\chi^{-1}_{\\rm bare}=2\\left(r_0+\\frac{U_s+\\frac23 U'}{U_{\\rm bcc1}}|r(Q)|\\right),\n\\ee\nwhere $U_{\\rm bcc1}=1\/6[U_{s}+V_{p}(\\pi\/4)+4V_{p}(\\pi\/6)-\\lambda_{\\rm nt}]$, can be derived in analogy to the single-spiral case. In fact, $\\chi_{\\rm bare}$ is a ``mixture'' of $\\chi_\\parallel$ and $\\chi_\\perp$, determined geometrically by the angles between the mode directions $\\bv k_j$ and the magnetic field. 
It follows that $\\chi_{\\rm bare}\\ll \\chi_\\parallel$, provided $U_{\\rm bcc1}\\sim U_{s}$ (the two couplings are equal at the phase boundary between single-spiral and bcc1).\nThe remaining terms in Eq.~\\eref{chibcc1},\n\\begin{equation}\n\\begin{split}\n\\chi_{\\rm phase}^{-1}&=-\\,\\frac{\\mu^2\\,|r(Q)|}{3\\,U_{\\rm bcc1}\\,\\lambda_{\\rm nt}}\\\\\n\\chi_{\\rm amp}^{-1}&=-\\,\\frac{25\\,\\mu^2\\,|r(Q)|}{12\\,U_{\\rm bcc1}\\,[U_{s}-V_{p}(\\pi\/4)+\\lambda_{\\rm nt}]},\n\\end{split}\\label{chi-phase-amp}\n\\ee\nstem from the response of the bcc magnetic structure to the field. That is, they originate from the adjustments of relative phases and amplitudes, respectively, of the helical modes as a result of the term $-\\bv h_\\psi(0)\\cdot\\bv m$ in Eq.~\\eref{Fm}. The effect of $\\chi_{\\rm phase}$ and $\\chi_{\\rm amp}$, which are necessarily negative, is to increase the susceptibility of the bcc1 state.\n\nThe change in\nthe relative amplitudes and phases of the six interfering spirals as\na function of the magnetic field may be calculated (see Appendix). For example, the\nlinear response of the amplitudes of bcc1 is\n\\begin{equation}\n\\left(\\begin{array}{c}\n\\delta|\\psi_1|\\\\ \\delta|\\psi_2|\\\\ \\delta|\\psi_3|\\\\ \\delta|\\psi_4|\\\\ \\delta|\\psi_5|\\\\ \\delta|\\psi_6|\\\\\n\\end{array}\\right)=\\frac{5\\,\\mu\\, S}{4 [U_{s}-V_{p}(\\pi\/4)+\\lambda_{\\rm nt}]}\\left(\\begin{array}{r}\n-m_z\\\\ m_z\\\\ -m_x\\\\ m_x\\\\ m_y\\\\ -m_y\n\\end{array}\\right).\n\\ee\nThis response should be observable by neutron scattering if it is\npossible to prepare the sample in a single-domain state (i.e.\nwithout mixture of the two time-reversal partners). For example, a field in the $\\uv z$ direction affects $|\\psi_1|$ and $|\\psi_2|$, the amplitudes of the modes propagating orthogonally to $\\uv z$ (Fig.~\\ref{6modes}), which get enhanced and suppressed by the magnetic field, respectively. 
\n\nThe expected effects of an external magnetic field on the resistivity of the bcc spin crystals are presented elsewhere.\\cite{binz06b}\n\n\n\\section{Effect of impurities: a possible route to 'partial order'}\\label{disorder}\n\nWhile the helical spin crystal states are expected to show Bragg\nspots at particular wavevectors, a variety of effects such as\nthermal or quantum fluctuations or disorder can destroy the long\nrange order while preserving the helical spin crystal structure at\nshorter scales. Here, we investigate in more detail the effect of\nnon-magnetic disorder on helical spin crystal structures.\n\nAlthough\nthe experimentally studied helimagnets are very clean from the\nelectrical resistivity point of view, the helical magnetic\nstructures are sensitive to disorder at a much longer length scale.\nIn addition, the low energy scales required to distort them mean\nthat one needs to consider disorder effects. An observation that can\nimmediately be made is that for the physically relevant case of\nnon-magnetic disorder ($V_{dis}(\\bv r)$), the single-spiral state\nand the spin crystal states respond very differently. By symmetry,\nthe coupling of disorder to the magnetic structure is given by\n$F_{dis}=\\langle V_{dis}(\\bv r) |\\bv M(\\bv r)|^2\\rangle$. Hence,\nsingle-spiral states, which are unique in having a spatially uniform\nmagnitude of magnetization ($|\\bv M(\\bv r)|={\\rm constant}$), are\nunaffected by this coupling; in contrast, the spin-crystal states\nnecessarily have a modulated magnitude\\cite{binz06} and hence are\naffected by non-magnetic disorder. Therefore the neutron scattering\nsignal of the spin-crystal state is expected to show more diffuse\nscattering than that of the single-mode state. 
This is consistent with the\nexperimental observation that the high pressure phase has diffuse\nscattering peaked about $\\langle110\\rangle$ while the low pressure\nphase has sharper spots, consistent with identifying the two as\nspin-crystal and single-spiral states, respectively.\n\nThe effect of disorder on the\nspin-crystal state is closely related to the problem\nof the ordering of an XY model in the presence of a random external field.\nThe phase rotation symmetry of the XY model captures the\ntranslational invariance of the spin-crystal in the clean state.\nDisorder destroys this invariance and behaves like a random field\napplied to the XY system. Using the insights from the study of that\nproblem in three dimensions,\\cite{giamarchi95}\none expects that for weak\ndisorder a Bragg glass will result, where although true long range\norder is destroyed, power law divergent peaks at the Bragg\nwavevectors remain, and the elastic constants remain finite. For\nstronger disorder one expects this algebraic phase also to be\ndestroyed, leaving a short-range correlated phase without\nelasticity. Nevertheless, for the case of the bcc1 and bcc2\ncrystals, due to time reversal symmetry ($\\mathcal{T}$) breaking in\nthese states, the disordered states also spontaneously break time\nreversal symmetry, and hence a phase transition is expected on\ncooling despite the absence of long range order. It is difficult to\npredict which of these two scenarios (Bragg glass or only\n$\\mathcal{T}$ breaking) is more appropriate for MnSi. 
In the latter\ncase one may estimate the spreading of the Bragg spots due to\ndisorder by considering the energetic cost to deform the\nspin-crystal state in different ways.\n\nIgnoring elastic contributions,\nthere are two\ndistinct types of deformations: ones that involve a change in the\nmagnitude of the ordering wavevectors ($\\delta q_\\parallel$)\nand others that do not change\nthe wavevector magnitude but rotate the structure away from its preferred\norientation ($\\delta q_\\perp$). The second is expected to be low in energy because\nrotations of the structure are locked only by the crystal anisotropy\nterm, which is weak.\nFrom Eq.~\\eref{Fa}, we obtain the energy cost to shift the ordering vector by $\\delta q_\\perp$ along the sphere $|\\bv q|=Q$,\n\\begin{equation}\n\\delta F_\\perp=\\frac{4\\alpha}{3\\kappa}\\left(\\frac{\\delta q_\\perp}Q\\right)^2,\\label{Fperp}\n\\ee\nwhere $\\kappa=1$ for the single-spiral. For multi-mode spin crystals, the energy cost of rotation is reduced, as explained in Section \\ref{crystalanisotropy}. Thus, $\\kappa\\approx4$ for the bcc spin crystals.\nIn contrast, deformations that change the magnitude of the ordering\nwavevectors must contend with the DM\ninteraction scale, and hence pay a higher energy penalty\n\\begin{equation}\n\\delta F_\\parallel=\\frac12\\Delta\\cdot|\\psi_0|^2\\cdot\\left(\\frac{\\delta q_\\parallel}Q\\right)^2.\\label{Fpar}\n\\ee\nAssuming\nthe disorder couples to these deformations equally, we can estimate\nthe ratio of their amplitudes in the limit of weak deformations by equating Eqs.~\\eref{Fperp} and \\eref{Fpar}. 
It follows that\n\\begin{equation}\n\\left(\\frac{\\delta q_\\perp}{\\delta q_\\parallel}\\right)^2=\\frac{3\\kappa\\Delta|\\psi_0|^2}{8\\alpha}.\n\\ee\nUsing Eqs.~\\eref{Hsr} and \\eref{Hc}, we can relate this ratio to the experimentally known ratio between the critical magnetic fields for, respectively, reorienting and polarizing the single-spiral state,\n\\begin{equation}\n\\frac{\\delta q_\\perp}{\\delta q_\\parallel}\n\\gtrsim\\sqrt{\\frac{3\\kappa}2}\\frac{H_{\\rm c}}{H_{\\rm sr}}.\n\\ee\n\nWe can now apply these results to the case of MnSi and test the\nhypothesis that the 'partial order' state is in fact a\ndisordered bcc spin crystal. Inserting $\\kappa=4$ and the\nexperimentally measured\\cite{thessieu97} critical fields for MnSi,\n$H_{\\rm c}=0.6\\,{\\rm T}$ and $H_{\\rm sr}=0.1\\,{\\rm T}$, one obtains\n$\\delta q_{\\perp}\/\\delta q_\\parallel\\gtrsim 15$. Neutron scattering\nexperiments do indeed find that the transverse broadening is larger\nthan the longitudinal broadening, but since the latter is resolution\nlimited, this only gives us a lower bound that is consistent with\nthe estimate above: $[\\delta q_{\\perp}\/ \\delta\nq_{\\parallel}]_{\\rm expt}> 2.3$. Nevertheless, the trend that the\nwidth of the spot is greater along the equal magnitude sphere than\ntransverse to it is clearly seen in the experimental data.\n\nThus, weak non-magnetic disorder of the atomic crystal is expected to destroy magnetic long range order in multi-mode helical spin crystal states and lead to a neutron scattering signal compatible with the observations in the 'partial order' phase of MnSi.\nHowever, in the case of bcc spin crystals, time-reversal symmetry breaking is expected to persist even in the presence of disorder. The scenario of interpreting 'partial order' in MnSi as a bcc1 state disordered by impurities thus predicts quasi-static local magnetic moments\nand implies a finite temperature phase transition on cooling into this phase. 
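The arithmetic of this estimate is easy to verify. A minimal check (not part of the original analysis) simply inserts the values quoted in the text, $\\kappa=4$ together with the measured MnSi critical fields, into the bound above:

```python
import math

# Values quoted in the text: kappa = 4 for the bcc spin crystals, and the
# measured MnSi critical fields (spiral reorientation and polarization).
kappa = 4.0
H_c, H_sr = 0.6, 0.1   # in Tesla

# delta q_perp / delta q_parallel >~ sqrt(3*kappa/2) * H_c / H_sr
ratio = math.sqrt(3 * kappa / 2) * H_c / H_sr
print(round(ratio, 1))  # -> 14.7, consistent with the quoted bound of about 15
```

The single-spiral case ($\\kappa=1$) would instead give $\\sqrt{3\/2}\\cdot 6\\approx 7.3$, so the factor-of-4 softening of rotations for the bcc state roughly doubles the expected anisotropy of the broadening.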
\n\n\\section{Conclusion}\n\nWe have analyzed the magnetic properties of non-centrosymmetric weak ferromagnets subject to DM spin-orbit coupling. This problem falls into the general class of systems where the low energy excitations live on a surface in reciprocal space rather than on discrete points.\nThe addition of DM interactions to a ferromagnetic state produces a\nlarge degeneracy of magnetic states characterized by arbitrary\nsuperpositions of spin helices of a fixed helicity and fixed\nwavevector magnitude. This enormous degeneracy is broken by\ninteractions between modes, and the single-spiral state is realized\nfor slowly varying interactions, by virtue of its unique property of\nhaving a spatially uniform magnitude of magnetization. For more\ngeneral interactions, multi-mode helical spin crystal states are\nobtained. We have shown that for the model interactions considered, the\nphase diagram is largely determined just by considering the\ninteractions between pairs of modes. The phase that is eventually\nrealized may be readily deduced from the range of angles in which\nthis interaction drops below a critical value. In particular, the\nbcc structure is stabilized by virtue of the fact that its\nreciprocal lattice, fcc, is a close-packed structure. These results\nmay also be relevant in other physical situations where\ncrystallization occurs, such as the Larkin-Ovchinnikov-Fulde-Ferrell\ninstability in spin-imbalanced superconductors, which may\npotentially be realized in solid state systems,\\cite{CeCoIn5} cold\natomic gases\\cite{zwierlein} and dense nuclear matter.\\cite{Rajagopal}\n\nHelical spin crystals typically give rise to complicated real-space\nmagnetic structures, which we discussed in this paper. In particular,\ntopological textures like merons and anti-vortices can be seen about\nspecial axes in particular realizations, although these are not\nexpected to be stable given the order parameter and spatial\ndimensionality of the system. 
We show here that such topological\nstructures exist as a consequence of symmetry, which also dictates\nthe absence of certain higher Bragg reflections that a naive\nanalysis would predict.\n\nThe response of helical spin crystals to crystalline anisotropy and\napplied magnetic field is considered, with a special emphasis on the\nbcc structures, which are contrasted with the\nsingle-helix state. An unusual transfer of spectral intensity in the\npresence of an applied magnetic field, which is strongly dependent\non the direction of the applied field, is noted for the bcc structures.\nThis is a consequence of broken time reversal symmetry in the\nabsence of a net magnetization (which is symmetry forbidden). The\nunusual magnetotransport in such a state, a linear-in-field\nmagnetoresistance and a quadratic Hall effect, has been discussed\nbriefly in Ref.~\\cite{binz06} and was elaborated upon in Ref.~\\cite{binz06b}.\n\nHelical spin crystals exhibit Bragg peaks at specific\nwave-vectors, and hence are not directly consistent with the\nexperimental observation of 'partial order'. The point of view taken\nin our earlier work\\cite{binz06} is that the short distance and\nshort time properties are captured by the appropriate helical spin\ncrystal structure. Studying the properties of helical spin crystals\nwith long range order is a theoretically well defined task with\ndirect consequences for a proximate disordered phase with similar\ncorrelations up to some intermediate scale. The mechanism\nthat leads to the destruction of long range helical spin crystal\norder is unclear; in Ref.~\\cite{binz06}, this was assumed to be the coupling\nto non-magnetic disorder. 
Then, as elaborated in this paper,\nbeginning with a bcc helical spin crystal, a neutron scattering\nsignature consistent with that of 'partial order' may be obtained.\nHowever, within the simplest version of this scenario, one also\nexpects a finite temperature phase transition where time reversal\nsymmetry breaking develops, and static magnetic order, which may be\nseen in nuclear magnetic resonance or muon spin rotation\nexperiments. Other mechanisms for the destruction of long range order\nof the bcc spin crystal state, such as thermal or quantum\nfluctuations, may also be considered, but are left for future work.\n\n\n\\acknowledgments We would like to thank L. Balents, P. Fazekas, I. Fischer, D.\nHuse, J. Moore, N. Nagaosa, M.P. Ong, C. Pfleiderer, D. Podolsky, A. Rosch, T.\nSenthil, H. Tsunetsugu and C. Varma for useful and stimulating discussions and V.\nAji for an earlier collaboration on related topics. This work was\nsupported by the Swiss National Science Foundation, the A.P. Sloan\nFoundation and Grant No. DE-AC02-05CH11231.\n\n