diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzeilo" "b/data_all_eng_slimpj/shuffled/split2/finalzzeilo"
new file mode 100644
--- /dev/null
+++ "b/data_all_eng_slimpj/shuffled/split2/finalzzeilo"
@@ -0,0 +1,5 @@
+{"text":"\\section{Introduction}\\label{sec:Intro}\nTremendous advancements in sensor technology and data-driven machine learning have enabled exciting applications such as automatic health monitoring and autonomous cars. In many cases, the lack of data in certain regions of the domain reveals important structure. For instance, the sensors on a car driving through a parking lot might have dense observation points in 3D except inside pillars. Such \\emph{voids} in the data are ubiquitous across applications, whether it is a subspace of unattainable configurations for a robot~\\cite{farber2018configuration}, regions without network coverage~\\cite{Ghrist2005} or missing measurements in an experimentally-determined chemical structure~\\cite{townsend2020representation}, to name a few~\\cite{aktas2019persistence}. \n\nA standard approach to extract structural information from data proceeds by first encoding pairwise relationships in a problem via a graph and then analysing its properties. Recent advances in Graph Neural Networks have enabled practical learning in the domain of graphs and have provided approximate solutions to difficult graph problems~\\cite{hamilton2017representation}. Despite the wealth of techniques underpinned by solid graph theory, this approach is fundamentally misaligned with problems where the relationships involve multiple points, and topological \\& geometric structure must be encoded beyond pairwise interactions.\n\nFortunately, higher dimensional combinatorial structures come to the rescue in the form of simplicial complexes, the workhorse of topological data analysis~\\cite{chazal2017introduction}. Interfacing between combinatorics and geometry, simplicial complexes capture multi-scale relationships and facilitate the passage from local structure to global invariant features. These features occur in the form of homology groups, intuitively perceived as {\\em holes}, or {\\em voids}, in any desired dimension. Alas, this expressive power comes with a burden, that of high computational complexity, and difficulty in localization of said voids~\\cite{chen2011hardness}.\n\nFor every hard computational problem there seems to exist a neural network approximation~\\cite{xu2018powerful}. Nevertheless, homology and simplicial complexes have only recently started to follow suit~\\cite{bodnar2021weisfeilerMPNNS, ebli2020simplicial,bunch2020simplicial}, with inference {\\em of} homological information still lacking.\n\nThe key insight in this paper is to guide learning on simplicial complexes by flipping the conventional view of approximation. We propose a GNN model for localizing homological information in the form of a distance function from each point of a complex to the nearest homology-generating features. \nInstead of using the most-significant eigenvectors of the relevant Laplacian operator, we focus on the subspace spanned by the eigenvectors corresponding to the lowest eigenvalues. The justification is that homology-related information is contained in its null space. We implement this idea by calculating the most-significant subspace of an inverted version of the operator (see Sec.~\\ref{subsec:Ltp}). Figure~\\ref{fig:diff} shows the result of twelve diffusion iterations performed using the conventional view and compares it with our inverted operator; a minimal sketch of this comparison is given below. 
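\nThe contrast between the two operators can be reproduced with a short NumPy sketch; the toy complex below (a square cycle with a pendant edge), its boundary matrix and the step count are illustrative assumptions rather than part of our pipeline.\n\\begin{verbatim}\nimport numpy as np\n\n# Toy 1-complex: a square cycle (edges e1..e4) plus a pendant edge e5.\n# Rows of d1 are the five vertices, columns are the oriented edges.\nd1 = np.array([[-1,  0,  0,  1, -1],\n               [ 1, -1,  0,  0,  0],\n               [ 0,  1, -1,  0,  0],\n               [ 0,  0,  1, -1,  0],\n               [ 0,  0,  0,  0,  1]], dtype=float)\nL1 = d1.T @ d1                           # 1-Hodge Laplacian (no triangles)\nL1_sp = np.linalg.pinv(np.eye(5) + L1)   # shifted pseudoinverse (1 + L1)^+\n\ndef diffuse(op, n_steps=12):\n    # x_{i+1} = x_i + op @ x_i, starting from the all-ones signal.\n    x = np.ones(op.shape[0])\n    for _ in range(n_steps):\n        x = x + op @ x\n    return np.abs(x)\n\nprint(diffuse(L1))     # energy follows connectivity; cycle not highlighted\nprint(diffuse(L1_sp))  # energy concentrates on the four cycle edges\n\\end{verbatim}\nRepeated application of the shifted pseudoinverse amplifies the component of the signal lying in its top eigenspace, which is exactly the kernel of the Hodge Laplacian, i.e.\\ the cycle edges, whereas the plain Laplacian favors the best-connected edges.\n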
Although diffusion alone is insufficient to localize homology, it highlights the tendency of the inverted operator to localize cycles. \n\nThe main contributions in the paper are:\n\\begin{enumerate}\n \\item A novel way to represent simplicial complexes as computational graphs suitable for inference via GNNs, the {\\em Hodge Laplacian graphs}, focusing on the dimension of interest (see Section~\\ref{subsec:SCGNN}), and\n \\item A new homology-aware graph convolution framework operating on the proposed Hodge Laplacian graphs, taking advantage of the spectral properties of a shifted-inverted version of the Hodge Laplacian (see Section~\\ref{subsec:SFGNN}, Eq.~\\eqref{eq:SFGC}).\n\\end{enumerate}\n\nThe rest of the paper is structured as follows: in Section~\\ref{sec:Prelims} all the necessary theoretical background is presented, followed by relevant work in Section~\\ref{sec:Related}. We describe our proposed model in detail in Section~\\ref{sec:Model}, followed by thorough evaluation and discussion in Section~\\ref{sec:Experiments}.\n\n\\begin{figure*}[t]\n \\centering\n \\begin{subfigure}[b]{0.11\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/2D_ones_init\/double_torus_2D_L1_12_steps.png}\n \\caption{$\\hL_1$}\n \\label{fig:hL1}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.11\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/2D_ones_init\/double_torus_2D_L1-k3_12_steps.png}\n \\caption{$\\hL_1$ (rank-3)}\n \\label{fig:hL1_k3}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.11\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/2D_ones_init\/double_torus_2D_shiftedPinv_12_steps.png}\n \\caption{$\\Ltpd{1}$}\n \\label{fig:Ltpd1}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.11\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/2D_ones_init\/double_torus_2D_shiftedPinv-k3_12_steps.png}\n \\caption{$\\Ltpd{1}$ (rank-3)}\n \\label{fig:Ltpd1_k3}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.11\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/3D_ones_init\/double_torus_3D_L1_12_steps.png}\n \\caption{$\\hL_1$}\n \\label{fig:hL1_3D}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.11\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/3D_ones_init\/double_torus_3D_L1-k5_12_steps.png}\n \\caption{$\\hL_1$ (rank-5)}\n \\label{fig:hL1_3D_k5}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.11\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/3D_ones_init\/double_torus_3D_shiftedPinv_12_steps.png}\n \\caption{$\\Ltpd{1}$}\n \\label{fig:Ltpd1_3D}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.11\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/3D_ones_init\/double_torus_3D_shiftedPinv-k5_12_steps.png}\n \\caption{$\\Ltpd{1}$ (rank-5)}\n \\label{fig:Ltpd1_3D_k5}\n \\end{subfigure}\n \\caption{\\label{fig:diff}Twelve diffusion iterations based on the 1-dimensional Hodge Laplacian $\\hL_{1}$ and its shifted pseudoinverse $\\Ltpd{1}$, for a double annulus in 2D (four leftmost) and a double torus in 3D (four rightmost). \\subref{fig:hL1_k3}, \\subref{fig:Ltpd1_k3}, \\subref{fig:hL1_3D_k5}, and \\subref{fig:Ltpd1_3D_k5} are low-rank approximations of the respective Laplacians based on the top 3 and 5 eigenpairs corresponding to the largest magnitude eigenvalues. 
\n \n }\n \\label{fig:laplacian_diffusion}\n\\end{figure*}\n\n\\section{Preliminaries}\\label{sec:Prelims}\n\\subsection{The Simplicial Laplacian Operators}\\label{subsec:homology}\n\nAn {\\em abstract simplicial complex} is a collection $K$ of subsets of a finite set $S$ satisfying two axioms: first, for each $v$ in $S$ the singleton set $\\{v\\}$ lies in $K$, and second, whenever some $\\sigma \\subset S$ lies in $K$, every subset of $\\sigma$ must also lie in $K$. The constituent subsets $\\sigma \\subset S$ which lie in $K$ are called {\\em simplices}, and the dimension of each such $\\sigma$ is one less than its cardinality, i.e., $\\dim \\sigma = |\\sigma|-1$. By far the most familiar examples of simplicial complexes are (undirected, simple) {\\em graphs}; each graph $G = (V,E)$ forms a simplicial complex whose $0$-dimensional simplices are given by the vertex set $V$ and whose $1$-dimensional simplices constitute the edge set $E$. The passage from graphs to simplicial complexes is motivated by the compelling desire to model phenomena beyond pairwise interactions using higher-dimensional simplices.\n\n\\subsubsection{Homology Groups}\n\nTo each directed graph $G=(V,E)$ one can associate an {\\em incidence matrix}, which is best viewed as a linear map $A:\\mathbb{R}[E] \\to \\mathbb{R}[V]$ from a real vector space spanned by edges to the vector space spanned by the vertices. The entry of $A$ in the column corresponding to a directed edge $e:v \\to v'$ and the row corresponding to a vertex $u$ is prescribed by\n\\[\nA_{u,e} = \\begin{cases} -1 & \\text{if } u = v, \\\\\n 1 & \\text{ if } u = v', \\text{ and} \\\\\n 0 & \\text{ otherwise}\n \\end{cases}.\n\\]\nWriting $r$ for the rank of $A$, the numbers of connected components and independent loops in $G$ equal $|V|-r$ and $|E|-r$, respectively. Thus, one can learn the geometry of $G$ from the linear algebraic data given by its incidence matrix.\n\nThis linear algebraic success story admits a remarkable simplicial sequel. Fix a simplicial complex $K$ and write $K_d$ to indicate the set of all $d$-simplices in $K$. We seek linear maps $\\partial_d:\\mathbb{R}[K_d] \\to \\mathbb{R}[K_{d-1}]$ to play the role of the $d$-dimensional incidence matrices. To build these {\\em boundary operators}, one first orders the vertices in $K_0$ so that each $d$-simplex $\\sigma \\in K$ can be uniquely expressed as a list $\\sigma = [v_0,\\ldots,v_d]$ of vertices in increasing order. 
The desired matrix $\\partial_d$ is completely prescribed by the following action on each such $\\sigma$:\n\\begin{align}\\label{eq:sbound}\n\\partial_d(\\sigma) = \\sum_{i=0}^d (-1)^i \\cdot \\sigma_{-i}\n\\end{align}\nwhere $\\sigma_{-i} := [v_0,\\dots,\\hat{v}_i,\\ldots,v_d]$ is the $(d-1)$-simplex obtained by removing the $i$-th vertex $v_i$ from $\\sigma$.\n\nThese higher incidence operators assemble into a sequence of vector spaces and linear maps:\n\\begin{align}\\label{eq:chcomp}\n\\xymatrix{\n\\cdots\n\\ar@{->}[r]^--{\\partial_{d+1}} \n& \\mathbb{R}[K_{d}] \\ar@{->}[r]^{\\partial_{d}} & \\mathbb{R}[K_{d-1}] \\ar@{->}[r]^--{\\partial_{d-1}} & \\cdots.\n}\n\\end{align}\nIt follows from \\eqref{eq:sbound} that for each $d > 0$ the composite $\\partial_d \\circ \\partial_{d+1}$ is the zero map, so the kernel of $\\partial_d$ contains the image of $\\partial_{d+1}$ as a subspace, $\\im\\partial_{d+1} \\subseteq \\ker \\partial_{d}$.\n\nFor each $d \\geq 0$, the $d$-th {\\bf homology group} of $K$ is the quotient vector space $\\mathcal{H}_d(K) := {\\ker \\partial_d}\/{\\im \\partial_{d+1}}$ of $d$-cycles $\\mathcal{Z}_d=\\ker \\partial_d$ by $d$-boundaries $\\mathcal{B}_d=\\im \\partial_{d+1}$. A basis of $\\mathcal{H}_d(K)$ consists of equivalence classes of $d$-dimensional {\\em voids} or {\\em loops} $[g_i]$, i.e. $\\mathcal{H}_d(K)=\\text{span}\\{[g_1], \\dots, [g_k]\\}$, each $[g_i]$ describing a family of loops that cannot be contracted to a point, and cannot be continuously deformed into another family $[g_j]$, $i\\neq j$. Consequently, the dimension of $\\mathcal{H}_d(K)$ provides us with a topological invariant, namely, the $d$-th \\textbf{Betti number} $\\beta_d=\\text{rank}(\\mathcal{H}_d(K))$, which counts the number of $d$-dimensional voids in $K$.\n\nEach $d$-cycle $g \\in \\mathcal{Z}_d$ is a formal sum of $d$-simplices satisfying $\\partial_d (g)=0$. By assigning weights $w:K_d \\rightarrow \\mathbb{R}_+$ to these simplices, one can thus define the length of $g$ by adding together the weights of its constituent simplices, i.e., $\\text{len}(g)=\\sum_{\\sigma \\in g} w(\\sigma)$. An {\\em optimal} homology basis is one whose generators have minimum length among all possible bases. \n\nReplacing each matrix $\\partial_d$ in \\eqref{eq:chcomp} by its transpose $\\partial_d^T$, one similarly obtains the $d$-th {\\bf cohomology group} of $K$, denoted $\\mathcal{H}^d(K;\\mathbb{R})$. It is a straightforward consequence of the rank-nullity theorem that there are isomorphisms $\\mathcal{H}_d(K;\\mathbb{R}) \\cong \\mathcal{H}^d(K;\\mathbb{R})$ between homology and cohomology groups. \n\n\n\n\\subsubsection{Hodge Laplacians}\n\n{\\em Hodge Laplacians} \\cite{horak2013spectra} are to graph Laplacians what simplicial boundary operators are to incidence matrices. Given a simplicial complex $K$ and the corresponding sequence \\eqref{eq:chcomp}, both composites $\\mathcal{L}_d^\\text{up} := \\partial_{d+1} \\partial_{d+1}^T$ and $\\mathcal{L}_d^\\text{down} := \\partial_d^T \\partial_d$ furnish linear maps $\\mathbb{R}[K_d] \\to \\mathbb{R}[K_d]$. The $d$-th Hodge Laplacian is their sum:\n\\begin{align}\\label{eq:hodgelapl}\n\\mathcal{L}_d:=\\mathcal{L}_d^\\text{up}+\\mathcal{L}_d^\\text{down}.\n\\end{align}\n\nAn immediate consequence of this definition is that the standard graph Laplacian agrees with the 0-th Hodge Laplacian. The nullity of the graph Laplacian equals the number of connected components of the underlying graph; a toy construction of these operators is sketched below. 
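\nTo make Eqs.~\\eqref{eq:sbound} and~\\eqref{eq:hodgelapl} concrete, the following self-contained NumPy sketch assembles the boundary matrices of a toy complex (a filled triangle glued at a vertex to a hollow triangle; the complex is an illustrative assumption) and recovers $\\beta_1$ from the nullity of the 1-st Hodge Laplacian, anticipating the isomorphism stated next.\n\\begin{verbatim}\nimport numpy as np\n\ndef boundary_matrix(simplices_d, simplices_dm1):\n    # Boundary map: the face omitting the i-th vertex of a sorted\n    # simplex receives the alternating sign (-1)^i.\n    idx = {s: j for j, s in enumerate(simplices_dm1)}\n    B = np.zeros((len(simplices_dm1), len(simplices_d)))\n    for col, s in enumerate(simplices_d):\n        for i in range(len(s)):\n            B[idx[s[:i] + s[i+1:]], col] = (-1) ** i\n    return B\n\nverts = [(0,), (1,), (2,), (3,), (4,)]\nedges = [(0, 1), (0, 2), (1, 2), (2, 3), (2, 4), (3, 4)]\ntris  = [(0, 1, 2)]                     # the triangle (2,3,4) stays hollow\nd1 = boundary_matrix(edges, verts)\nd2 = boundary_matrix(tris, edges)\nassert np.allclose(d1 @ d2, 0)          # the boundary of a boundary vanishes\nL1 = d2 @ d2.T + d1.T @ d1              # up + down parts of the Hodge Laplacian\nprint(L1.shape[0] - np.linalg.matrix_rank(L1))   # nullity = beta_1 = 1\n\\end{verbatim}\n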
In the same vein, the kernel of the $d$-th Hodge Laplacian of a simplicial complex $K$ is isomorphic to the corresponding $d$-th homology group~\\cite{eckmann1944harmonische}:\n\\begin{align}\n \\ker \\mathcal{L}_d(K)\\cong \\mathcal{H}_d(K;\\mathbb{R}).\n\\end{align}\n\n\n\\subsection{Graph Neural Networks}\n\n {\\em Graph Neural Networks} (GNNs) provide a general framework for {\\em Geometric Deep Learning}~\\cite{bronstein2021geometric}, where the input domain, represented by a {\\em graph} $G=(V,E)$, is allowed to vary together with the signals that are defined on it. More concretely, the {\\em Message Passing Graph Neural Network} (MPGNN) framework generalizes the convolution operation on the edges of a graph $G$ by employing a simple message passing scheme between features of nodes $h_u, u\\in V$, and their neighbors $v\\in \\mathcal{N}_u$. \n \nThe output of each layer $\\ell$ for each node $u$ can be broadly formulated as:\n\\begin{align}\\label{eq:gnn}\nh_u^{\\ell+1}=\\phi \\left( h_u^\\ell, \\bigoplus_{v\\in \\mathcal{N}_u} w_{u,v} \\cdot \\psi \\left(h_u^\\ell,h_v^\\ell\\right) \\right),\n\\end{align}\nwith $\\bigoplus$ being a {\\em permutation invariant} aggregation, $\\phi$ and $\\psi$ learnable functions, and $w_{u,v}$ the weight of edge $(u,v)\\in E$. Under this formulation, the learnable parameters of $\\phi$ and $\\psi$ are shared across all nodes in the graph network. \n\nEach message passing layer can be described more compactly using matrix notation:\n\\begin{align}\n H^{\\ell+1}=\\phi \\left( \\tilde{L}\\psi(H^\\ell) \\right),\n\\end{align}\nwhere $\\tilde{L}=AWA^T$ is the weighted graph Laplacian matrix, and $H^0$ the $|V| \\times F$ matrix of initial node features. This formulation highlights the similarities of GNNs with Laplacian diffusion operations, a fact that we will largely exploit.\n\nThe output of a number of message passing iterations results in latent {\\em node embeddings}, largely based on the local graph topology at each node. Such embeddings can be subsequently used for node regression, node classification, or, via feature aggregation of all nodes, for graph classification and aggregation tasks.\n\n\n\\section{Related work}\\label{sec:Related}\n\n\\subsubsection{Homology Localization} \n\n\nThe {\\em minimum basis problem} in computational topology involves extracting optimal homology generators, with optimality usually expressed in terms of norm or length minimization of cycles. In dimensions exceeding one, this is an NP-hard problem~\\cite{chambers2009minimum,chen2011hardness}, whereas the 1-dimensional case succumbs to a polynomial time algorithm~\\cite{dey2010approximating}. This latter fact spawned a significant body of work examining special cases and computational improvements~\\cite{borradaile2016minimum, chen2010measuring, dey2011optimal,erickson2005greedy, Busaryev2011,chen2021decomposition}. \n\nWhile the aforementioned methods generally output sets of simplices that form optimal homology generators in their respective class, the rest of the simplices in the complex remain largely oblivious to the location of such optimal cycles in relation to themselves. In our work we attempt to characterize each simplex in the complex with respect to its nearest homology generator, while gaining in efficiency (once the model is sufficiently trained). 
Closer to our line of work,~\\cite{Ebli2019harmonicclustering} implements a homology-aware clustering method for point data.\n\n\n\\subsubsection{Topological methods in ML}\n\n\nWith the marriage of homology and ML~\\cite{Hensel2021TopMLsurvey, love2021topDL, montufar2020can, Hofer2019LearningBarcodes}, it did not take long for GNNs to meet their higher dimensional counterparts in the form of simplicial~\\cite{bodnar2021weisfeilerMPNNS, ebli2020simplicial,bunch2020simplicial}, cell~\\cite{hajij2021cell, bodnar2021weisfeilerCWNs}, hypergraph~\\cite{feng2019hypergraph}, and sheaf~\\cite{hansen2020sheaf} neural networks. Most higher dimensional extensions of GNNs aim to operate on the full complex, and redefine the convolution operation in terms of the corresponding Laplacian operator. Contrary to such generalizations, we still operate on a graph. The key difference is that our graph is derived from adjacency and Hodge Laplacian information of the complex at the dimension of interest.\n\n\n\\subsubsection{Pseudoinverse \\& Hodge Laplacians in GNNs}\n\nThe pseudoinverse and shifted versions of the Laplacian operator are not new in the context of GNNs~\\cite{klicpera2019diffusion,wu2019simplifying,alfke2021pseudoinverse}. Nevertheless, these works only consider spectral manipulations of the ``classic'' graph Laplacian, whose kernel is usually of no practical interest, as long as the graph is connected.\n\nClosest to our work,~\\cite{roddenberry2019hodgenet, schaub2018flow} consider edge flows for signal denoising, interpolation, and source localization based on the {\\em line-graph Laplacian} and the 1-dimensional down Hodge Laplacian $\\hL_1^\\text{down}$, derived from a proxy graph obtained by interchanging edges and nodes. Nevertheless, their analysis remains restricted to graph structures, disregarding any homological features. \n\n\n\n\n\\section{Dist2Cycle}\\label{sec:Model}\n\n\nHere we present a model for learning homology-aware distance functions on simplicial complexes.\n\n\\subsection{Shifted Inverted Hodge Laplacians} \\label{subsec:Ltp}\n\nThe spectral properties of the Hodge Laplacian matrices provide salient information regarding the geometry and topology of a simplicial complex, as hinted in Section~\\ref{sec:Prelims}. Furthermore, Laplacian flow dynamical systems on simplicial complexes tend to stabilize towards specific spectral regions of the Laplacian~\\cite{muhammad2006control}. Nevertheless, the choice of the Laplacian operator with which diffusion is performed greatly impacts the energy distribution on the simplices of interest. \n\nIn Figures~\\ref{fig:laplacian_diffusion}\\subref{fig:hL1},\\subref{fig:hL1_k3},\\subref{fig:hL1_3D},\\subref{fig:hL1_3D_k5} we perform 12 diffusion steps according to the Hodge Laplacian $\\hL_1$ (and its low-rank approximations using the top 3 and 5 eigenpairs, respectively), namely,\n\\begin{align} \\label{eq:diffusion}\n\\bm{x}_{i+1} =\\bm{x}_i + \\hL_1 \\bm{x}_i,\n\\end{align} \nwith $\\bm{x}_0=[1\\ 1\\ \\dots\\ 1]^T$ as the initial signal on the 1-simplices of the complex. We show the absolute value of the resulting flow vector at each simplex, for two basic examples. Energy tends to concentrate at well-connected simplices, while ignoring the homological features that we are interested in.\n\nThe Laplacian diffusion of~\\eqref{eq:diffusion} can be seen as a simplified version of the graph convolution that takes place in GNNs~\\eqref{eq:gnn}, with all nonlinearities and learnable parameters pruned, and trivial initial features. 
Thus, in order to focus our attention on optimal homology generators, we must invert the spectrum of the Hodge Laplacian $\\hL_1$, while ensuring that its kernel replaces the part of the spectrum corresponding to the top eigenvalues. For this purpose we employ a {\\em shifted inverted} version of the Hodge Laplacian\n\\begin{align*}\n\\Ltpd{d}=(\\mathbb{1}+\\hL_{d})^{+},\n\\end{align*}\nwhich makes $\\ker \\hL_d$ the prominent part of the spectrum of $\\Ltpd{d}$, with eigenvalue 1, onto which the diffusion asymptotically converges. The effect this modified Laplacian matrix has on diffusion is shown in Figure~\\ref{fig:laplacian_diffusion}\\subref{fig:Ltpd1},\\subref{fig:Ltpd1_k3},\\subref{fig:Ltpd1_3D},\\subref{fig:Ltpd1_3D_k5}.\n\n\n\n\\begin{figure*}[htbp]\n \\centering\n \\begin{tabular}{@{}c@{}@{}c@{}@{}c@{}@{}c@{}@{}c@{}@{}c@{}@{}c@{}@{}c@{}@{}}\n \n \\multicolumn{8}{c}{{Ours}} \\\\\n \\includegraphics[width=0.11\\textwidth]{images\/RESOUT\/2D\/cropped\/BEST_pts_140_filtV_1_3883159230772615_cid_6_cmplx_ours.png} & \\includegraphics[width=0.11\\textwidth]{images\/RESOUT\/2D\/pts_200_filtV_0_014619974523328298_cid_27_cmplx_ours.png} & \\includegraphics[width=0.11\\textwidth]{images\/RESOUT\/2D\/pts_180_filtV_0_07363109339738495_cid_3_cmplx_ours.png} & \\includegraphics[width=0.11\\textwidth]{images\/RESOUT\/2D\/cropped\/WORST_pts_70_filtV_0_0023980842642273175_cid_0_cmplx_ours.png} & \\includegraphics[width=0.15\\textwidth]{images\/RESOUT\/3D\/BEST_pts_120_filtV_0_018349590565120224_cid_20_cmplx_ours.png} & \\includegraphics[width=0.15\\textwidth]{images\/RESOUT\/3D\/pts_130_filtV_0_0770273102594478_cid_49_cmplx_ours.png} & \\includegraphics[width=0.15\\textwidth]{images\/RESOUT\/3D\/pts_120_filtV_0_5535622674081626_cid_1_cmplx_ours.png} & \\includegraphics[width=0.15\\textwidth]{images\/RESOUT\/3D\/WORST_pts_120_filtV_0_014305282858018277_cid_7_cmplx_ours.png}\\\\\n \n \n \\includegraphics[width=0.11\\textwidth]{images\/RESOUT\/2D\/cropped\/BEST_pts_140_filtV_1_3883159230772615_cid_6_cmplx_ref.png} & \\includegraphics[width=0.11\\textwidth]{images\/RESOUT\/2D\/pts_200_filtV_0_014619974523328298_cid_27_cmplx_ref.png} & \\includegraphics[width=0.11\\textwidth]{images\/RESOUT\/2D\/pts_180_filtV_0_07363109339738495_cid_3_cmplx_ref.png} & \\includegraphics[width=0.11\\textwidth]{images\/RESOUT\/2D\/cropped\/WORST_pts_70_filtV_0_0023980842642273175_cid_0_cmplx_ref.png} &\n \n \\includegraphics[width=0.15\\textwidth]{images\/RESOUT\/3D\/BEST_pts_120_filtV_0_018349590565120224_cid_20_cmplx_ref.png} & \\includegraphics[width=0.15\\textwidth]{images\/RESOUT\/3D\/pts_130_filtV_0_0770273102594478_cid_49_cmplx_ref.png} & \\includegraphics[width=0.15\\textwidth]{images\/RESOUT\/3D\/pts_120_filtV_0_5535622674081626_cid_1_cmplx_ref.png} & \\includegraphics[width=0.15\\textwidth]{images\/RESOUT\/3D\/WORST_pts_120_filtV_0_014305282858018277_cid_7_cmplx_ref.png}\\\\\n \\multicolumn{8}{c}{{Reference}} \n \n \\end{tabular}\n\\caption{Qualitative comparisons of selected complexes from the 2D \\texttt{TORI} (four left) and 3D (four right) test sets. The 1-simplices of the complexes are color-coded according to their distance from the nearest homology cycle, with blue indicating close proximity to an optimal homology cycle, and red indicating large distance from a homology cycle. 
} \\label{fig:complviz}\n\\end{figure*}\n \n \n\\subsection{From Simplicial Complexes to Graph Neural Networks}\\label{subsec:SCGNN}\n\\begin{figure}[h!]\n \\centering\n \\begin{subfigure}[b]{0.11\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/Laplacian_graph\/complex.png}\n \\caption{Example complex $K$.}\n \\label{fig:complex}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.11\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/Laplacian_graph\/complex_Ldown.png}\n \\caption{Laplacian graph $G_{\\hL_1^\\text{down}}$.}\n \\label{fig:complex_Ldown}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.11\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/Laplacian_graph\/complex_Lup.png}\n \\caption{Laplacian graph $G_{\\hL_1^\\text{up}}$.}\n \\label{fig:complex_Lup}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.11\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/Laplacian_graph\/complex_L.png}\n \\caption{Laplacian graph $G_{\\hL_1}$.}\n \\label{fig:complex_L}\n \\end{subfigure}\n \\caption{Laplacian graph constructions on example complex $K$ for 1-simplices (square nodes with red edges overlaid on top of the original complex).}\n \\label{fig:laplacian_graphs}\n\\end{figure}\n\nIn order to employ the GNN framework for inference on the complex, we need to express the space of $d$-simplices accordingly. Furthermore, we desire the resulting graph structure to retain the spectral properties of the Laplacian operator of interest. \n\nWe interpret the $|K_d|\\times |K_d|$ Hodge Laplacian operator $\\hL_d$ (or $\\Ltpd{d}$) as the weighted graph Laplacian $\\tilde{L}_\\text{GNN}=AWA^T$ of a computational graph $G_\\text{GNN}=(V_\\text{GNN}, E_\\text{GNN})$. Under this lens, each $d$-simplex $\\sigma$ of the original complex $K$ becomes a node of $G_\\text{GNN}$, i.e. $\\sigma \\in V_\\text{GNN}$, and weighted edges are drawn according to the adjacency information encoded in $\\hL_d$ ($\\Ltpd{d})$. Namely, a weighted edge $(\\sigma, \\tau)\\in E_\\text{GNN}$ is drawn between nodes corresponding to $d$-simplices $\\sigma$ and $\\tau$ whenever the (potentially normalized) Laplacian operator $\\hL_d$ ($\\Ltpd{d}$) contains a nonzero entry in the respective position. This entry is also used to weight the corresponding edge $w_{\\sigma, \\tau}=\\hL_{\\sigma,\\tau}$, allowing self-loops. Figure~\\ref{fig:laplacian_graphs} provides an example of the resulting graph, what we call the {\\em Hodge Laplacian graph}, when using $\\hL_1^\\text{down}$, $\\hL_1^\\text{up}$, and $\\hL_1$ for extracting adjacency relations on 1-simplices (sans self-loops, for easier visualization). A somewhat similar approach is followed in~\\cite{roddenberry2019hodgenet}, with their mapping akin to the graph in Figure~\\ref{fig:complex_Ldown} minus the 2-simplices, as they only deal with graphs.\n\n\nAs mentioned in Section~\\ref{subsec:Ltp}, we are interested in capturing the spectrum, and thus the connectivity information, of $\\Ltp$, which is in general a dense matrix and hence computationally prohibitive to work with directly.\nTo overcome this issue, we impose the sparsity structure of $\\hL$ on $\\Ltpd{}$, masking all entries of $\\Ltpd{}$ that are zero in the original, sparse, Hodge Laplacian $\\hL$. 
If we denote by $A$ the adjacency matrix encoding the connectivity of $\\hL$, with\n\\[\nA_{u,v} = \\begin{cases} 1 & \\text{ if } \\hL_{u,v}\\neq 0, \\text{ and} \\\\\n 0 & \\text{ otherwise}\n \\end{cases},\n\\]\nthis can be achieved with the Hadamard product $A \\odot \\Ltp$. The resulting graph is called the {\\bf Hodge Laplacian graph} throughout this paper.\n\n\nWhile more sophisticated methods for spectral sparsification exist~\\cite{spielman2011graph}, imposing the connectivity dictated by $\\hL$ or $\\hL^\\text{down}$ seems to preserve all important adjacency information required for the task at hand, while not annihilating important spectral information. Furthermore, in the context of learning, this approach is reminiscent of inference with missing values, which GNNs are known to handle well~\\cite{Jiaxuan2020missing}.\n\n\n\\begin{figure*}[htbp]\n \\centering\n \\begin{tabular}{@{}c@{}@{}c@{}@{}c@{}@{}c@{}@{}}\n \\includegraphics[width=0.245\\textwidth]{images\/RESOUT\/2D\/cropped\/BEST_pts_140_filtV_1_3883159230772615_cid_6_dist_inset.png} & \n \\includegraphics[width=0.245\\textwidth]{images\/RESOUT\/2D\/cropped\/pts_200_filtV_0_014619974523328298_cid_27_dist_inset.png}&\n \\includegraphics[width=0.245\\textwidth]{images\/RESOUT\/2D\/cropped\/pts_180_filtV_0_07363109339738495_cid_3_dist_inset.png}&\n \\includegraphics[width=0.245\\textwidth]{images\/RESOUT\/2D\/cropped\/WORST_pts_70_filtV_0_0023980842642273175_cid_0_dist_inset.png}\n \\\\\n \\includegraphics[width=0.245\\textwidth]{images\/RESOUT\/3D\/cropped\/BEST_pts_120_filtV_0_018349590565120224_cid_20_dist_inset.png} & \n \\includegraphics[width=0.245\\textwidth]{images\/RESOUT\/3D\/cropped\/pts_130_filtV_0_0770273102594478_cid_49_dist_inset.png}&\n \\includegraphics[width=0.245\\textwidth]{images\/RESOUT\/3D\/cropped\/pts_120_filtV_0_5535622674081626_cid_1_dist_inset.png}&\n \\includegraphics[width=0.245\\textwidth]{images\/RESOUT\/3D\/cropped\/WORST_pts_120_filtV_0_014305282858018277_cid_7_dist_inset.png}\n \\end{tabular}\n \\caption{Plots of predicted distances (blue) of the simplices sorted in increasing ground truth distance value (orange) for the \\texttt{TORI} dataset in 2D (top) and 3D (bottom). The plots show four cases (columns) ranging from the best (left) to the worst (right). }\n \\label{fig:complplots}\n\\end{figure*}\n \n \n\\subsection{Shifted Inverted Laplacian GNNs for Homology Localization} \\label{subsec:SFGNN}\n\nWe are now ready to propose a {\\em Simplicial Neural Network} model for homology localization. By following the construction described in Section~\\ref{subsec:SCGNN} we obtain a weighted computational graph $G_\\text{GNN}=(V_\\text{GNN}, E_\\text{GNN})$, with weights according to $\\Ltp$ and adjacency dictated by $\\hL$ (or $\\hL^\\text{down}$). Thus, graph convolution (message passing) on $G_\\text{GNN}$ can be summarized as:\n\\begin{align} \\label{eq:SFGC}\n H^{\\ell+1}=\\phi \\left( \\left( A \\odot \\Ltpd{d} \\right) H^\\ell W^\\ell \\right),\n\\end{align}\nwhere $A$ is the adjacency matrix describing the selected sparsification regime according to $\\hL$ (or $\\hL^\\text{down}$), $\\Ltpd{d}$ is the {\\em shifted inverted Hodge Laplacian} in dimension $d$ (Section~\\ref{subsec:Ltp}), and $\\odot$ denotes the Hadamard product. The learnable weights of the model at layer $\\ell$ are denoted as $W^\\ell$, and $\\phi$ can be any activation function, such as ReLU, Sigmoid, etc. Finally, $H^\\ell$ is the $|V_\\text{GNN}|\\times F$ feature matrix having the $F$-dimensional features of each node (i.e., $d$-simplex) as rows; a minimal sketch of such a layer is given below.
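\n\nA single layer of Eq.~\\eqref{eq:SFGC} amounts to one masked matrix product followed by a pointwise nonlinearity; the following NumPy sketch illustrates this on a toy Hodge Laplacian graph (the complex, the feature sizes, and the LeakyReLU slope are illustrative assumptions).\n\\begin{verbatim}\nimport numpy as np\n\ndef masked_shifted_pinv(L):\n    # Impose the sparsity pattern of L on (1 + L)^+ via a Hadamard mask.\n    mask = (L != 0).astype(float)       # adjacency, self-loops included\n    return mask * np.linalg.pinv(np.eye(L.shape[0]) + L)\n\ndef sfgc_layer(H, op, W, slope=0.02):\n    # One layer: H' = phi((A .* Ltp) H W) with a LeakyReLU phi.\n    Z = op @ H @ W\n    return np.where(Z > 0, Z, slope * Z)\n\n# Toy complex: a square cycle plus a pendant edge (5 edges = 5 graph nodes).\nd1 = np.array([[-1, 0, 0, 1, -1], [1, -1, 0, 0, 0], [0, 1, -1, 0, 0],\n               [0, 0, 1, -1, 0], [0, 0, 0, 0, 1]], dtype=float)\nop = masked_shifted_pinv(d1.T @ d1)\nrng = np.random.default_rng(0)\nH = rng.standard_normal((5, 9))         # e.g. link Betti numbers + embedding\nW = 0.1 * rng.standard_normal((9, 128)) # learnable layer weights\nprint(sfgc_layer(H, op, W).shape)       # (5, 128)\n\\end{verbatim}\n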
\nTo aid the task of homology localization, we encode both local and global information at each node. Locality is incorporated by computing the Betti numbers $[\\beta_0, \\dots, \\beta_{d+1}]$ of the {\\em link} at each $d$-simplex $\\sigma$ --- this is the subcomplex consisting of all simplices $\\tau$ for which $\\sigma \\cap \\tau$ is empty and $\\sigma \\cup \\tau$ is a simplex in $K$. Global features manifest in the form of spectral embeddings of the $d$-simplices in the space spanned by singular vectors corresponding to the largest $k$ singular values of $\\Ltpd{d}$. Denoting the appropriately permuted singular value decomposition (SVD) of $\\Ltpd{d}$ as $\\Ltpd{d}=U\\Sigma V^T$ with $\\Sigma$ containing on its diagonal the singular values of $\\Ltp$ in descending order, the rows of the matrix $U_{1:k}$ formed by the first $k$ singular vectors constitute coordinates of the simplices in the eigenspace of $\\Ltpd{d}$. Due to the shift-invert operation of $\\Ltp$, this scheme effectively embeds the $d$-simplices in the spectral subspace corresponding to the kernel of $\\hL_{d}$, i.e. the space encoding homological information. \n\n\n\\begin{figure*}[htbp]\n \\centering\n \\begin{tabular}{@{}c@{}c@{}c@{}c@{}}\n \\begin{subfigure}[b]{0.248\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/RESOUT\/2D\/2D-BEST_binned_MSEerrors_log.png}\n \\caption{}\n \\label{fig:2DMSE}\n \\end{subfigure} &\n \\hfill\n \\begin{subfigure}[b]{0.248\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/RESOUT\/2D\/2D-BEST_simplices_error.png}\n \\caption{}\n \\label{fig:2Dsimplices}\n \\end{subfigure}&\n \\hfill\n \\begin{subfigure}[b]{0.248\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/RESOUT\/2D\/2D-BEST_betti_error.png}\n \\caption{}\n \\label{fig:2Dbetti}\n \\end{subfigure} &\n \\hfill\n \\begin{subfigure}[b]{0.248\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/RESOUT\/2D\/2D-BEST_maxCycle_error.png}\n \\caption{}\n \\label{fig:2DmaxCycle}\n \\end{subfigure} \\\\\n \\begin{subfigure}[b]{0.248\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/RESOUT\/3D\/3D-BEST_binned_MSEerrors_log.png}\n \\caption{}\n \\label{fig:3DMSE}\n \\end{subfigure} &\n \\hfill\n \\begin{subfigure}[b]{0.248\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/RESOUT\/3D\/3D-BEST_simplices_error.png}\n \\caption{}\n \\label{fig:3Dsimplices}\n \\end{subfigure} &\n \\hfill\n \\begin{subfigure}[b]{0.248\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/RESOUT\/3D\/3D-BEST_betti_error.png}\n \\caption{}\n \\label{fig:3Dbetti}\n \\end{subfigure} &\n \\hfill\n \\begin{subfigure}[b]{0.248\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{images\/RESOUT\/3D\/3D-BEST_maxCycle_error.png}\n \\caption{}\n \\label{fig:3DmaxCycle}\n \\end{subfigure}\n \\end{tabular}\n \n \n \\caption{MSE error plots for the \\texttt{TORI} dataset in 2D (top) and 3D (bottom). The first column shows MSE error aggregates against the stratified distance values (x-axis). Green dashed line: mean, red line: median, box limits: quartiles, whiskers: error range. Columns 2, 3 and 4 are plots of MSE against the number of simplices, the homology rank count $\\beta_1$, and the maximum cycle length, respectively. 
Shaded regions indicate the standard deviation, and insets provide histograms for the respective parameters in the test set.}\n \\label{fig:2D3Derrors}\n\\end{figure*}\n\n\\section{Evaluation}\\label{sec:Experiments}\nIn this section we describe the function learned by our model, the dataset we developed to train and evaluate our model (Section~\\ref{subsec:dataset}), the experiments conducted (Section~\\ref{subsec:setting}) and an analysis of the results (Section~\\ref{subsec:results}). \n\n\\subsection{Nearest optimal homology generator}\n\nOur model approximates a function that depends on the optimal homology generators. To validate our model, we only consider 1-simplices since efficient algorithms for ground truth data only exist for $d=1$. We denote by $\\mathcal{Q}_1$ the set of optimal generators of $\\mathcal{H}_1$. For each simplex $\\sigma \\in K_1$ we seek to learn its distance from the nearest optimal $g\\in \\mathcal{Q}_1$,\n\\begin{align}\\label{eq:distTocycle}\n f(\\sigma)=\\min_{g\\in \\mathcal{Q}_1} \\hat{d}(\\sigma,g).\n\\end{align}\nAs the distance $d(\\sigma, g)$ between a $k$-simplex $\\sigma$ and a $k$-dimensional homology generator $g$, we consider the minimum number of $k$-simplices required to reach any $k$-simplex $\\rho \\in g$ participating in the $k$-cycle $g$. To keep the function complex-independent, we then normalize the distance to the range $[0,1]$, obtaining $\\hat{d}(\\cdot)$, with simplices near an optimal homology generator attaining values close to zero.\n\n \n\\subsection{\\texttt{TORI} Dataset}\\label{subsec:dataset}\nOur \\texttt{TORI} datasets consist of {\\em Alpha} complexes~\\cite{edelsbrunner2010alpha} that originate from considering ``snapshots'' of filtrations~\\cite{edelsbrunner2010computational} on points sampled from tori manifolds, in 2 and 3 dimensions. We seek to capture richness of homological information, controllability in terms of scalability in the number of simplices and homology cycles, as well as ease of visualization.\n\nWe first sampled 400 point clouds from randomly generated configurations of tori and pinched tori, with the number of ``holes'' ranging from 1 to 5, to which Gaussian noise was added. We then constructed Alpha filtrations on the collection of point clouds, i.e. sequences of simplicial complexes dictated by a monotonically increasing distance parameter $\\alpha$. Tracking homological changes in the sequence of complexes results in a {\\em barcode} representation, with one bar per homology feature, that spans a range of $\\alpha$ values. The longer the bar, the more persistent, and possibly ``significant'', a homological feature is. From these barcodes we considered the 5 most persistent features, expressed by the longest bars. The birth and death values of these features were deemed appropriate points to capture ``snapshots'' of the complexes, guaranteed to contain interesting large and small scale homological information. This pipeline yields a collection of 2000 complexes for each dataset. The generality of the datasets stems from the spurious homological features occurring while considering filtrations of noisy point clouds, as confirmed by the histogram insets describing the test sets in Figure~\\ref{fig:2D3Derrors}. The number of simplices ranges from tens to thousands, and Betti numbers, i.e. the number of homology cycles, from 0 up to 66. 
Furthermore, homology generators present in the dataset can contain from 3 up to 60 1-simplices.\n\n\nWe calculate the reference function using optimal homology generators of $\\mathcal{H}_1$ via {\\em Shortloop}~\\cite{dey2010approximating} by calculating the normalized ``hop'' distance of each simplex to its nearest optimal generator (see Eq.~\\eqref{eq:distTocycle}).\n\n\n\\subsection{Experimental settings}\\label{subsec:setting}\n\nWe used a GNN with 12 graph convolutional layers (for 2D as well as 3D), as described by Eq.~\\eqref{eq:SFGC}, and 128 hidden units. We chose LeakyReLU activations ($\\phi$ in Eq.~\\eqref{eq:gnn}) with negative slope $r=0.02$ for the layers, and a hyperbolic tangent (tanh) for the output. Neighbor activations are aggregated via a summation ($\\bigoplus$ in Eq.~\\eqref{eq:gnn}). Learnable weights undergo Kaiming uniform initialization~\\cite{he2015delving}. Finally, node features are the result of concatenating the Betti numbers describing the homology of the link at each simplex with its 5-dimensional spectral embedding.\n\nThe dataset is split into training (80\\%) and testing (20\\%) sets and the models were trained for 1000 epochs with a mini-batch size of 5 complexes, on a machine with an Intel Xeon E5-2630 v4 processor, a TITAN-X 64GB GPU and 64GB of RAM, using CUDA 10.1. The GNN model was implemented using the dgl library~\\cite{wang2019dgl} with the Torch backend~\\cite{paszke2017automatic}. All simplicial and homology computations were handled by the Gudhi library~\\cite{gudhi:urm}. \n\nWe apply a Laplacian smoothing post-processing step. Let $\\bm{x}$ be the output of the model, i.e. the inferred distances for each 1-simplex, and $\\hat{L}=D^{-1\/2}(D-A)D^{-1\/2}$ the normalized graph Laplacian of the 1-skeleton of the complex $K$, i.e. the underlying graph spanned by the 0- and 1-simplices of $K$. The signal at the simplices is smoothed using \n$\\bm{x}'=\\bm{x}-\\hat{L}\\bm{x}$.\n\n \n\\subsection{Results}\\label{subsec:results}\n\nThe main quantitative results can be found in Figure~\\ref{fig:complplots} and Figure~\\ref{fig:2D3Derrors}, with qualitative examples in Figure~\\ref{fig:complviz}. We report the mean squared error (MSE) between the predicted and reference relative distances. Since the range of $\\hat{d}(\\cdot)$ is within $[0,1]$, MSE can never exceed $1$, i.e. $100\\%$ error.\n\nIn both 2D and 3D our model learns the homology-parametrized distance function, achieving an MSE of $3.81\\% \\ (0.0381)$ (2D) and $4.47\\% \\ (0.0447)$ (3D), respectively. Figures~\\ref{fig:2DMSE} and~\\ref{fig:3DMSE} provide insights into the distribution of error at different distances (x-axis) from optimal generators. In 2D the error is monotonically increasing with the distance from the homology generators, whereas in 3D the model performs slightly better at relative distances of about a third from the optimal cycles. In both settings, areas far away from the homology cycles attain the maximum mean MSE, but never in excess of 15\\%.\n\n\\Cref{fig:2Dsimplices,fig:2Dbetti,fig:2DmaxCycle}, and \\labelcref{fig:3Dsimplices,fig:3Dbetti,fig:3DmaxCycle} investigate the scalability of the model as the number of simplices, homology features ($\\beta_1$) and maximum cycle lengths increase. The insets provide histograms of the respective parameter counts in the test sets to shed light on the standard deviation (shaded). The parameter values are non-uniformly represented in the test sets, partly explaining the larger variance towards the lower ends of the value ranges. 
Our model scales well in all three parameters, and we observe that the error decreases for larger numbers of simplices. The maximum MSE across all parameters never exceeded 15\\%. \n\nQualitative assessment of the model's performance is provided in Figure~\\ref{fig:complviz} for examples from the test sets. The comparisons are arranged from lowest error (left) to maximum error (right) within our dataset. The top row of figures visualizes the predicted distances projected onto the complex, while the bottom shows the ground truth. Cool areas indicate close proximity to an optimal homology generator. \n\nFigure~\\ref{fig:complplots} plots reference distance values (orange) and our model's output (blue) for each simplex against the rank of the simplex's distance from an optimal generator (x-axis). Ideally, the blue curve should be monotonically increasing and should closely match the orange curve.\n\nBoth in 2D and 3D the model can handle multiple homology cycles of various lengths, even lengths greater than the number of convolutional layers, which usually dictates the receptive field of each simplex. The problematic cases appear to be those where components of the complexes have trivial homology, as evident from the rightmost column in Figure~\\ref{fig:complplots}. In such cases the model attempts to detect homological structure where none exists.\n\n\\subsection{Limitations and Conclusion}\n\nOur model identifies simplices that are distant from homology cycles, aided by the shifted-inverted Hodge Laplacian based graph convolution and the simplified simplex adjacency construction. We experimented with a traditional approach of using a Hasse graph analogue of the complex, but that model failed to learn. Another advantage of our model is that both large and small scale homology cycles are consistently localized. Although we restricted our analysis to $\\mathcal{H}_1$ for ease of evaluation, the generality of our model allows direct extension to higher dimensional simplices. \n\nOne drawback of our model is that it performs poorly when components of complexes contain no higher dimensional homological information whatsoever. In such cases, it hallucinates homology cycles where none exist. \n\nAnother limitation is the variance of the inferred function on the complex, as shown in Figure~\\ref{fig:complplots}. This variance stems from two sources: first, the target function is piecewise constant; and second, the adjacency structure that we use to sparsify an otherwise complete graph (for creating the GNN computational graph) alters the spectrum of the kernel. \n\nThe choice of a spectral sparsification method with theoretical guarantees is an interesting avenue of future work. Needless to say, our results are promising even with a basic GNN architecture. We foresee exciting opportunities for improvements in the architecture. \n\n\n\\clearpage\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\n\\label{sect:intro} \nMaterial differentiation and quantification using standard single-energy computed tomography (SECT) is extremely challenging because different materials may have the same CT value\\cite{ref1}. To tackle this challenge, dual-energy CT (DECT) takes full advantage of the energy dependence of the linear attenuation coefficient by scanning the patients using two different energy spectra\\cite{ref2, ref3, ref4, ref5, ref6,lee2017feasibility,petrongolo2018single,xue2019accurate}. 
This enables DECT to provide energy- and material-selective images, and DECT has been widely used in clinical practice for many applications, such as virtual monochromatic imaging\\cite{ref7, ref8}, differentiating intracerebral hemorrhage from iodinated contrast\\cite{ref9}, automated bone removal in CT angiography\\cite{ref10, ref11, ref12, ref13}, virtual noncontrast-enhanced imaging\\cite{ref14, ref15, ref16, ref17, ref18, ref19} and urinary stone characterization\\cite{ref20, ref21, ref22, ref25}. However, clinical DECT imaging remains an open and challenging task due to complex practical implementations, proprietary patents held by major CT vendors, and the lower availability of DECT scanners compared to standard SECT scanners.\n\nSince the low- and high-energy CT images acquired from DECT scanners depict the same anatomical structures, there is substantial redundant anatomical information between the DECT images. For patients scanned using the same DECT imaging protocols, the low- and high-energy CT images are also correlated in the energy domain, resulting in energy-domain information redundancies\\cite{ref26, ref27}. Meanwhile, both DECT images are reconstructed using fully-sampled projection data, which has to meet the classical Shannon-Nyquist criterion in angular sampling to reconstruct artifact-free images. By fully exploiting the anatomical consistency and energy-domain correlation between the DECT images, it is possible to provide high-quality artifact-free DECT images using conventional SECT images together with sparsely sampled projection data at different energy levels.\n\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=0.8\\textwidth]{workflow.png}\n \\caption{The workflow of the proposed fully-sampled low-energy and single-view high-energy DECT imaging approach. During the training phase, the denoised DECT images together with the single-view dual-energy projections are used to train the projection domain convolutional neural network (CNN) and the MD-CNN. In the testing phase, the trained networks use the input low-energy images and the single-view dual-energy projections to infer the corresponding high-energy images.}\n \\label{fig:1}\n\\end{figure*}\n\nCT imaging following the as-low-as-reasonably-achievable (ALARA) principle has been commonly accepted in routine practice, and further reducing the radiation dose of CT scanning is clinically favorable and has been extensively studied for almost two decades. Deep learning (DL) has recently been proven to be a powerful tool for mapping complex relationships and incorporating existing knowledge into an inference model through feature extraction and representation learning\\cite{ref28, ref29, ref30, ref31, ref32, ref33, ref34, ref35, ref36, ref37}. 
It has also been applied in low-dose CT\\cite{ref38, ref39, ref40, ref41} and DECT imaging\\cite{ref42, ref43, ref44, ref45, ref46}.\n\nTo further significantly reduce the radiation dose of DECT imaging, in this study, we synergistically exploit the energy-domain correlation and anatomical consistency between DECT images by leveraging deep learning to seamlessly integrate the correlation and consistency into a data-driven DECT imaging process; we eventually push the sparse sampling to the limit of a single projection view and demonstrate the feasibility of high-performance DECT imaging using a deep learning approach termed fully-sampled low-energy and single-view high-energy DECT imaging (FLESH-DECT).\n\n\n\n\\section{Method}\n\\label{sect:method}\nThe flowchart of the proposed FLESH-DECT strategy is shown in Fig.~\\ref{fig:1}.\nThe input to the model is a single-view high-energy projection together with the low-energy image \\(I_{low}\\), which is reconstructed using fully-sampled low-energy projection data. The low-energy image is first denoised with a denoising network to mitigate the impact of image noise. Instead of training a network that directly maps the low-energy images to high-energy images, we use a convolutional neural network (CNN) to perform material-decomposition-type operations; it is termed the material decomposition CNN (MD-CNN). The input of the MD-CNN is the denoised low-energy image \\(I_{low}^{de}\\) while the output is a ``material component'' matrix \\(A\\). The matrix \\(A\\) has the same image size as \\(I_{low}^{de}\\) but with multiple channels, each of which corresponds to a pseudo material-specific image. The values are the percentages of the corresponding ``basis material'' at these pixels. We ensure that the sum of the percentages equals one (mass conservation) for each pixel. Furthermore, another CNN is used to pre-process the differences between the given high-energy projections and their corresponding low-energy projections. This projection-domain network\nis used to fill the gap between the denoised images and the non-denoised projection. Since CT forward projection can be regarded as linear summations of pixel values, we can use the least-squares method to solve for the corresponding CT values of each ``material'', \\(b_{dif}\\), according to the matrix \\(A\\) and the pre-processed projection difference. The estimated high-energy image is calculated as the sum of the low-energy image and the product of matrix \\(A\\) with vector \\(b_{dif}\\).\nThe detailed formula derivation is described in the following subsections.\n\n\\subsection{DECT Imaging}\nIn CT imaging, the attenuation coefficient at each position can be represented as a linear combination of basis materials' attenuation coefficients\\cite{ref50}:\n\\begin{equation}\n \\mu=\\alpha_1\\mu_1+\\alpha_2\\mu_2+\\dots+\\alpha_m\\mu_m\n \\label{eq:1}\n\\end{equation}\nwhere \\(m\\) is the number of basis materials, \\(\\alpha_i\\) is the percentage of the \\(i\\)-th basis material and \\(\\mu_i\\) is the attenuation coefficient of the \\(i\\)-th basis material. 
Since CT values in Hounsfield Units (HU) can be represented as a linear transformation of the attenuation coefficient \\(\\mu\\) via the following equation:\n\\begin{equation}\n HU=1000\\times\\frac{\\mu-\\mu_{water}}{\\mu_{water}}\n \\label{eq:2}\n\\end{equation}\nEq.\\eqref{eq:1} can also be written as:\n\\begin{equation}\n HU=\\alpha_{1}HU_{1}+\\alpha_{2}HU_{2}+\\dots+\\alpha_{m}HU_{m}\n \\label{eq:3}\n\\end{equation}\nwhere \\(HU_{i}\\) stands for the Hounsfield Unit CT value of the \\(i\\)-th basis material.\nConsidering there are \\(n_{pix}=W{\\times}H\\) pixels in an image slice, we can write Eq.\\eqref{eq:3} in the following matrix multiplication form\n\\begin{equation}\n I=A\\cdot{b}^T\n \\label{eq:4}\n\\end{equation}\nwhere \\(I\\) is the image vector sized \\(n_{pix}{\\times}1\\) containing CT values at each pixel, \\(A={[\\alpha_{ij}]}_{n_{pix}{\\times}m}\\) is the material component matrix, \\(\\alpha_{ij}\\) stands for the percentage of material \\(j\\) at pixel \\(i\\), and \\(b={[HU_i]}_{1{\\times}m}\\) is the row vector of HU values for each material.\n\nIn DECT, there are two different images \\(I_{low}\\) and \\(I_{high}\\). The material component matrix \\(A\\) remains the same for both images because pixel compositions do not change between low- and high-energy scans. Therefore, we have the following equations\n\\begin{equation}\n \\begin{cases}\n &I_{low}=A\\cdot{b}_{low}^T\\\\\n &I_{high}=A\\cdot{b}_{high}^T\n \\end{cases}\n \\label{eq:5}\n\\end{equation}\nBy subtracting the low-energy equation from the high-energy equation, we get\n\\begin{equation}\n I_{dif}=A\\cdot{b}_{dif}^T\n \\label{eq:6}\n\\end{equation}\nwhere \\(I_{dif}\\) is the difference image between \\(I_{high}\\) and \\(I_{low}\\), and \\(b_{dif}\\) is the difference between \\(b_{high}\\) and \\(b_{low}\\). Let \\(P_{high}\\) and \\(P_{low}\\) be the given high- and low-energy projection measurements, and \\(R\\) be the projection matrix sized \\(n_{ray}{\\times}n_{pix}\\) corresponding to the high-energy view. We have the following equation\n\\begin{equation}\n\\begin{split}\n R{\\cdot}A\\cdot{b}_{dif}^T & = R{\\cdot}I_{dif} \\\\\n & = R{\\cdot}I_{high}-R{\\cdot}I_{low} \\\\\n & = P_{high}-P_{low}\n \\label{eq:7}\n\\end{split}\n\\end{equation}\nFor the fully-sampled low-energy and single-view high-energy CT imaging task, the unknowns in Eq.\\eqref{eq:7} are the material component matrix \\(A\\) and the corresponding difference values \\(b_{dif}\\). Assuming that we have the material component matrix \\(A\\), let \\(M=R{\\cdot}A\\in\\Re^{n_{ray}{\\times}m}\\), \\(P_{dif}=P_{high}-P_{low}\\in\\Re^{n_{ray}{\\times}1}\\); the difference values \\(b_{dif}\\) can then be calculated by solving the equation \\(M{\\cdot}b^{T}=P_{dif}\\). In regular CT imaging, we have \\(n_{ray}\\gg m\\), so the best \\(b_{dif}\\) can be found using the least-squares method, which can be computed with a Cholesky decomposition, i.e.,\n\\begin{equation}\n\\begin{split}\n b_{dif} & =\\operatornamewithlimits{argmin}\\limits_{\\tilde{b}}{\\|M{\\cdot}\\tilde{b}^T-P_{dif}\\|}_2^2 \\\\\n & =[(M^TM)^{-1}M^TP_{dif}]^T\n \\label{eq:8}\n\\end{split}\n\\end{equation}\nThe only task now is to estimate the material component matrix \\(A\\) from the low-energy image \\(I_{low}\\).\n\n\\subsection{Material decomposition-based dual-energy CT mapping}\nDue to its ability to learn complex relationships and incorporate existing knowledge into a nonlinear mapping model, a dedicated CNN model (termed MD-CNN) is used to estimate the material component matrix \\(A\\); given such an estimate, the remaining steps reduce to the least-squares solve sketched below. 
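\nThe following NumPy sketch illustrates Eq.\\eqref{eq:8} and Eq.\\eqref{eq:10} on random stand-ins; all sizes, the random projection matrix and the Dirichlet-sampled \\(A\\) are illustrative assumptions, not our trained pipeline.\n\\begin{verbatim}\nimport numpy as np\n\ndef solve_b_dif(R, A, P_high, P_low):\n    # Least-squares solve of M b^T = P_dif with M = R A (cf. Eq. 8).\n    M = R @ A                                  # (n_ray x m)\n    b_dif, *_ = np.linalg.lstsq(M, P_high - P_low, rcond=None)\n    return b_dif                               # (m,) HU differences per channel\n\nrng = np.random.default_rng(0)\nn_pix, m, n_ray = 64 * 64, 10, 800\nA = rng.dirichlet(np.ones(m), size=n_pix)      # rows sum to one, as after softmax\nR = (rng.random((n_ray, n_pix)) < 0.01).astype(float)  # stand-in projector\nI_low = rng.uniform(-1000.0, 2000.0, n_pix)\nb_true = rng.normal(0.0, 50.0, m)\nI_high = I_low + A @ b_true                    # cf. Eq. 10 with the true b_dif\nb_dif = solve_b_dif(R, A, R @ I_high, R @ I_low)\nprint(np.allclose(b_dif, b_true))              # recovers the HU differences\n\\end{verbatim}\nIn practice, the single-view \\(P_{high}\\) is measured, \\(P_{low}\\) is obtained by forward-projecting the low-energy image, and \\(A\\) comes from the MD-CNN described next.\n\n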
When designing the MD-CNN model, a major challenge is the lack of training labels due to the unknown materials in the images and their unknown percentages. To tackle this challenge, we train the MD-CNN indirectly. We first denoise the dual-energy image pairs, and the denoised low-energy images \\(I_{low}^{de}\\) are fed into the MD-CNN to obtain material component matrices \\(A_{DL}\\). Meanwhile, we put the projection differences into another 1-D projection-domain CNN to preprocess the projection data. Then, we compute \\(b_{dif}\\) using Eq.\\eqref{eq:8}. The estimated denoised high-energy images \\(I_{high}^{dl}\\) can therefore be calculated as\n\\begin{equation}\n I_{high}^{dl}=I_{low}^{de} + A_{DL}{\\cdot}b_{dif}^T\n \\label{eq:10}\n\\end{equation}\nAn image similarity loss is calculated between the denoised high-energy image \\(I_{high}^{de}\\) and the DL-estimated image \\(I_{high}^{dl}\\), and the mean squared error (MSE) loss is used for the task:\n\\begin{equation}\n \\mathcal{L}_{high}=\\frac{1}{n}{\\|I_{high}^{de}-I_{high}^{dl}\\|}_2^2\n \\label{eq:11}\n\\end{equation}\nInstead of requiring material component labels, we focus on the target high-energy image, and it is not necessary for each channel of \\(A_{DL}\\) to represent a real material or a linear combination of different materials. Meanwhile, since matrix \\(A_{DL}\\) is supposed to be the material component matrix, it should be able to recover the input denoised low-energy image as well, resulting in the following loss function:\n\\begin{equation}\n \\mathcal{L}_{low}=\\frac{1}{n}{\\|I_{low}^{de}-A_{DL}{\\cdot}b_{low}^T\\|}_2^2\n \\label{eq:12}\n\\end{equation}\nThe same strategy as in Eq.\\eqref{eq:8} is used to calculate the HU values \\(b_{low}\\) of each ``material'' at the low energy,\n\\begin{equation}\n b_{low}=\\operatornamewithlimits{argmin}\\limits_{\\tilde{b}}{\\|A_{DL}{\\cdot}\\tilde{b}^T-I_{low}^{de}\\|}_2^2\n \\label{eq:13}\n\\end{equation}\nThe final loss function is computed as the summation of \\(\\mathcal{L}_{low}\\) and \\(\\mathcal{L}_{high}\\), i.e.,\n\\begin{equation}\n \\mathcal{L}=\\mathcal{L}_{low}+\\mathcal{L}_{high}\n \\label{eq:14}\n\\end{equation}\n\nDuring the inference phase, the low-energy images are also first denoised and then fed into the trained MD-CNN to obtain \\(A_{DL}\\). The projection-domain CNN preprocesses the projection difference \\(P_{dif}\\). The estimated difference images are then calculated from \\(A_{DL}\\) and the \\(b_{dif}\\) obtained via Eq.\\eqref{eq:8}. The difference between training and inference is that, in the inference phase, the estimated high-energy image is calculated as the sum of the original (non-denoised) low-energy image \\(I_{low}\\) and the difference image \\(I_{dif}\\).\n\n\\subsection{Network details}\n\\subsubsection{Denoising CNN}\n\nWe employ the denoising network from our previous work\\cite{ref46} to reduce the DECT image noise. The network uses a plain structure which encompasses 13 convolution layers to learn the residual between the input image and the denoised image. The first 12 layers are convolution layers with kernel size \\(3\\times3\\), and each layer is followed by a batch normalization (BN) layer and a rectified linear unit (ReLU) activation. The last layer is a convolution layer with kernel size \\(1\\times1\\) fusing the result. Fig.~\\ref{fig:2} shows the detailed structure of the denoising CNN. 
The denoised image is computed as the summation of the input image and the output from the last layer.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.48\\textwidth]{denoise_cnn.png}\n \\caption{Architecture of the fully convolutional network for image denoising. A plain structure which encompasses 13 convolution layers is applied to learn the residual between the input image and the denoised image. }\n \\label{fig:2}\n\\end{figure}\n\n\\subsubsection{MD-CNN}\n\nFor the material decomposition network, we employ a U-Net-type structure\\cite{ref51}, which has a large receptive field and is quite suitable for many medical image processing tasks. There are 10 normal \\(3\\times3\\) convolution layers in the proposed network (Fig.~\\ref{fig:3}). Each convolution layer is followed by a BN layer and a ReLU activation layer. There are 3 resolution levels in total. For down-sampling, we use a convolution layer with kernel size of \\(2\\times2\\) and stride equal to 2. Each strided convolution layer is also followed by a BN layer and a ReLU activation layer. For up-sampling, we use bilinear interpolation to double both image width and height. At the end of the network, a convolution layer with kernel size of \\(1\\times1\\) is added to fuse the channels. Since the values on each output channel are supposed to be the percentages of the corresponding ``basis material'', we apply a softmax after the \\(1\\times1\\) convolution layer to make sure that the sum of material proportions equals 1 at each pixel.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.48\\textwidth]{img_cnn2.png}\n \\caption{Structure of the MD-CNN. A much simplified UNet-like structure with 14 convolution layers is used here to estimate the percentages of each corresponding ``basis material''. Numbers under each block show\nthe number of channels of the multichannel feature maps.}\n \\label{fig:3}\n\\end{figure}\n\n\\subsubsection{Projection-domain CNN}\n\nTo enhance the robustness of the least-squares problem in Eq.\\eqref{eq:8}, the projection-domain network (Fig.~\\ref{fig:4}) is applied to slightly refine the input projection difference and make its noise level consistent with that of the denoised DECT difference image; we employ a concise 4-layer network for the task. A residual learning structure is used here for an easier startup during the early iterations. All basic convolution layers have kernels of size \\(1\\times5\\), except for the last one, which has a kernel of size \\(1\\times1\\). The first three convolution layers are followed by a BN layer and a ReLU activation layer.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.35\\textwidth]{proj_cnn2.png}\n \\caption{Structure of the projection-domain CNN. A concise 4-layer network is employed to slightly refine the input projection difference. Numbers under each block show the number of channels of the multichannel feature maps.}\n \\label{fig:4}\n\\end{figure}\n\n\\subsubsection{Network training}\nThe denoising CNN was implemented in MATLAB with the MatConvNet framework\\cite{ref47}. Denoising was performed as a data preprocessing step for all DECT images. The material decomposition network and projection-domain network were implemented in Python with the Tensorflow framework\\cite{ref48}. These two networks were trained together in an end-to-end fashion. The parameters of the networks were optimized using the ADAM algorithm\\cite{ref49} with \\(\\beta_1=0.9\\) and \\(\\beta_2=0.999\\). 
The learning rate was set to \\(10^{-3}\\) during training. The training set was randomly split into small batches in each epoch with a batch size of 8. The proposed network was trained for 200 epochs in total. We performed validation after each training epoch, and the model with the best validation loss was selected as the final model for testing. The number of materials \\(m\\) was set to 10 in our experiments. It should be noted that the model was able to achieve reasonable results even with \\(m=3\\). The networks were trained and tested on a workstation with the following configuration: an Intel(R) Xeon(R) Gold 6130 CPU @ 2.10 GHz and an NVIDIA RTX 2080 Ti GPU with 12 GB memory.\\n\\n\\subsection{Projectors for different CT geometries}\\nIn order to calculate the matrix \\(M\\) in Eq.\\eqref{eq:8}, a projector corresponding to the CT geometry is indispensable in our algorithm. There are mainly two types of 2D CT geometries, fan-beam and parallel-beam. The fan-beam geometry can be further divided into two sub-types, equiangular and equispacing. For each above-mentioned geometry, we developed a projector and trained a new model for evaluation.\\n\\subsubsection{Equiangular fan-beam}\\nEquiangular geometry is mainly implemented with arc detectors, which keep the angle between two adjacent detector pixels constant and the source-to-detector distance the same for every channel. We test our model in the anterior-posterior (AP) direction, but it can be easily extended to any other projection view.\\nTo acquire projection data using the 2D equiangular fan-beam geometry, we first calculate the intersection of each projection ray with each image row. The value at each intersection point is computed using linear interpolation. Suppose the image size is \\(W{\\times}H\\); a matrix \\(I'\\) with size of \\(n_{ray}{\\times}H\\) can then be obtained using the following rebinning equation:\\n\\begin{equation}\\n I'(\\phi, y)=I((D-y)\\tan(\\phi), y)\\n \\label{eq:15}\\n\\end{equation}\\nwhere \\(\\phi\\) is the angle between the projection ray and the central ray (as shown in Fig.~\\ref{fig:8}), and \\(y\\) is the image pixel position along the y-axis in the Cartesian coordinate system centered at the image center \\(O\\). \\(D\\) is the distance between the projection source and the rotation center, which coincides with the image center \\(O\\). After the linear interpolation, we calculate the summation of each column in matrix \\(I'\\) to obtain the ray sum $S$, which is weighted by the ray length per image row to yield the final projection $P$:\\n\\begin{equation}\\n P(\\phi)=\\frac{dy}{\\cos{\\phi}}S(\\phi)\\n \\label{eq:16}\\n\\end{equation}\\nwhere \\(dy\\) is the image spacing along the y-axis.\\nWe set \\(D=600mm\\), \\(dy=0.5mm\\), the number of detector channels \\(n_{ray}=800\\) and the angle between adjacent channels \\(ds=\\frac{7}{11000}rad\\) for all images.\\n\\n\\begin{figure}[tb]\\n \\centering\\n \\includegraphics[width=0.48\\textwidth]{arc_beam.png}\\n \\caption{Illustration of equiangular fan-beam geometry. }\\n \\label{fig:8}\\n\\end{figure}
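\\n\\nTo make the rebinning concrete, a minimal NumPy sketch of this AP-view equiangular projector reads as follows (the coordinate conventions and the assumption of square pixels, \\(dx=dy\\), are ours):\\n\\begin{verbatim}\\n
import numpy as np\\n
\\n
def equiangular_projection_ap(img, D=600.0, dy=0.5, n_ray=800,\\n
                              ds=7\/11000):\\n
    # rebinning projector of Eqs. (15)-(16); the source lies on the\\n
    # +y axis at distance D from the image center O\\n
    H, W = img.shape\\n
    phi = (np.arange(n_ray) - (n_ray - 1) \/ 2) * ds   # ray angles\\n
    y = ((H - 1) \/ 2 - np.arange(H)) * dy             # row coordinates\\n
    x_pix = np.arange(W) - (W - 1) \/ 2                # columns (pixels)\\n
    P = np.zeros(n_ray)\\n
    for i, p in enumerate(phi):\\n
        x = (D - y) * np.tan(p) \/ dy                  # Eq. (15)\\n
        vals = [np.interp(x[j], x_pix, img[j], left=0.0, right=0.0)\\n
                for j in range(H)]                    # linear interpolation\\n
        P[i] = dy \/ np.cos(p) * np.sum(vals)          # Eq. (16)\\n
    return P\\n
\\end{verbatim}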
\\n\\n\\subsubsection{Equispacing fan-beam}\\nFlat detectors with equal spacing between the adjacent channels are commonly used to implement the equispacing geometry. We use a similar strategy to implement the equispacing fan-beam projection in the AP direction.\\nIn this case, the rebinning procedure is computed with the following equation:\\n\\begin{equation}\\n I'(s, y)=I(\\frac{s(D-y)}{L}, y)\\n \\label{eq:17}\\n\\end{equation}\\nwhere \\(s\\) is the signed distance between the current channel and the detector center, and \\(L\\) is the source-to-detector distance, as shown in Fig.~\\ref{fig:9}. The rest of the implementation is the same as for the equiangular projection. For the parameters, we have \\(D=600mm\\), \\(L=1100mm\\), \\(dy=0.5mm\\), \\(n_{ray}=800\\) and the spacing between nearby channels \\(ds=0.78mm\\).\\n\\n\\begin{figure}[tb]\\n \\centering\\n \\includegraphics[width=0.48\\textwidth]{flat_beam.png}\\n \\caption{Illustration of equispacing fan-beam geometry. }\\n \\label{fig:9}\\n\\end{figure}\\n\\n\\subsubsection{Parallel-beam}\\nFor the parallel-beam scenario, we assume that the detector channels have the same spacing as the image pixels and each X-ray projects exactly through an image column in the AP direction. Therefore, the projection in the AP direction can be computed as the summation of each image column.\\n\\n\\section{Data Specification}\\n\\subsection{Training data for the denoising network}\\nThe AAPM Low-Dose CT Grand Challenge data are used to train the denoising network. This dataset consists of routine-dose CT and the corresponding simulated low-dose CT data from 10 patients. The routine-dose scanning voltage is 100 kV or 120 kV, and the X-ray tube current varies from 200 mA to 500 mA. The detector has \\(736\\times64\\) elements, and each element has a size of \\(1.2856\\times1.0947mm^2\\). The source-to-axial distance is \\(59.5cm\\) and the source-to-detector distance is \\(108.56cm\\). All the images were reconstructed with a slice thickness of \\(1.0 mm\\) and \\(512\\times512\\) pixels. The pixel size varies from \\(0.66\\times0.66mm^2\\) to \\(0.78\\times0.78mm^2\\). To simulate the low-dose CT data, Poisson noise was introduced into the routine-dose projections to mimic a noise level corresponding to \\(25\\%\\) of the routine dose, and the noisy projections were reconstructed to yield the low-dose images (a sketch of this type of simulation is given below). \\n\\n\\subsection{DECT imaging dataset}\\nClinical DECT images of 22 patients who underwent iodine contrast-enhanced DECT exams were collected for the study. All the exams were performed in Nanjing General PLA Hospital, China, with the approval of the institutional review board and with patient consent forms. The DECT images (5753 slices in total) were acquired using a SOMATOM Definition Flash DECT scanner (Siemens Healthineers, Forchheim, Germany) after administering an iodine contrast agent. The low and high energies of the DECT scans were 100 kV and 140 kV, respectively. All CT images were reconstructed using the filtered back-projection (FBP) algorithm provided by the commercial CT vendor. The dataset was randomly split into a training set, a validation set and a testing set with 16, 3 and 3 patients, respectively.
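\\n\\nAs an illustration of the kind of noise insertion described above, a common simplified model adds Poisson noise to the routine-dose line integrals (the incident photon count \\(I_0\\) and the monochromatic noise model are our assumptions; the challenge data were generated with a more detailed, validated simulation):\\n\\begin{verbatim}\\n
import numpy as np\\n
\\n
def simulate_low_dose(proj, I0=2e5, dose_fraction=0.25, seed=0):\\n
    # insert Poisson noise into routine-dose line integrals `proj`\\n
    # to mimic a scan at 25% of the routine dose\\n
    rng = np.random.default_rng(seed)\\n
    counts = rng.poisson(dose_fraction * I0 * np.exp(-proj))\\n
    counts = np.maximum(counts, 1)     # avoid log(0)\\n
    return -np.log(counts \/ (dose_fraction * I0))\\n
\\end{verbatim}\\nThe noisy line integrals are then reconstructed (e.g. with FBP) to obtain the simulated low-dose images.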
\\n\\n\\begin{figure}[t]\\n \\centering\\n \\includegraphics[width=\\linewidth]{fig2_2.png}\\n \\caption{Example results on a testing slice and the differences with respect to the corresponding real high-energy image. The first, third and fifth rows display the images in axial, sagittal and coronal views. From left to right are the real 100 kV image, the real 140 kV image and the result from the proposed method, respectively. The second, fourth and sixth rows are the corresponding differences from the real 140 kV images. The CT images are displayed with a window width of 300 HU and a center of 50 HU, while the difference images are displayed with a window width of 300 HU and a center of 0 HU.}\\n \\label{fig:5}\\n\\end{figure}\\n\\n\\section{Results}\\n\\nWe first focus on the results of the equiangular geometry; comparison results using different geometries are presented at the end of this section.\\nFig.~\\ref{fig:5} shows the original DECT images and the 140 kV images predicted using the proposed method for a testing patient. The first, second and third columns show the original 100 kV images, the original 140 kV images, and the predicted results, respectively. The first, third, and fifth rows show CT images in the transverse, sagittal, and coronal planes, respectively. The second, fourth, and sixth rows show difference images with respect to the corresponding real high-energy CT images in the transverse, sagittal, and coronal planes, respectively.\\n\\begin{table}[t]\\n \\begin{center}\\n \\caption{Quantitative comparisons between the predicted and the real high-energy CT images for the testing patients. }\\n \\begin{tabular}{ c | c c c c c }\\n \\hline\\hline\\n & MSE & PSNR & SSIM & HU error & Time(s) \\\\\\n \\hline\\n Patient1 & 875.06 & 36.80 & 0.8702 & 1.65\\(\\pm\\)1.03 & 2.65 \\\\\\n Patient2 & 887.60 & 36.99 & 0.8789 & 2.09\\(\\pm\\)1.48 & 2.31 \\\\\\n Patient3 & 927.47 & 36.89 & 0.8745 & 1.50\\(\\pm\\)0.82 & 3.34 \\\\\\n \\hline\\hline\\n \\end{tabular}\\n \\label{tab:1}\\n \\n \\end{center}\\n\\end{table}\\n\\begin{figure}[t]\\n \\centering\\n \\includegraphics[width=\\linewidth]{fig6_2.png}\\n \\caption{VNC images and iodine maps reconstructed using the original DECT, DL-DECT, and FLESH-DECT images. The first, third and fifth rows are the VNC images in axial, sagittal and coronal views, respectively, while the second, fourth and sixth rows are the corresponding iodine maps. }\\n \\label{fig:6}\\n\\end{figure}\\nAs can be seen, the proposed DL-derived high-energy images are highly consistent with the original high-energy images. There are some differences at sharp boundaries, which also appear in the difference images between the original high- and low-energy images. Those differences may be motion-induced differences between the original DECT images, because the low- and high-energy data acquired using a dual-source DECT scanner are approximately 90 degrees out of phase.
When the low-energy image is input into the model, the prediction is based on the anatomical structure of the low-energy image and cannot reflect such motion-induced changes with respect to the original high-energy image.\\n\\n\\begin{table}[b]\\n \\begin{center}\\n \\caption{Quantitative comparisons of the VNC images and the iodine maps reconstructed using the original DECT and FLESH-DECT images.}\\n \\begin{tabular}{ c c | c c | c c }\\n \\hline\\hline\\n \\multicolumn{2}{c}{\\multirow{2}{*}{}} & \\multicolumn{2}{|c|}{VNC} & \\multicolumn{2}{c}{Iodine Map} \\\\\\n & & MSE & PSNR & MSE & PSNR \\\\\\n \\hline\\n \\multirow{2}{*}{Patient1} & DL-DECT & 3882.34 & 29.03 & 0.0313 & 23.65 \\\\\\n & FLESH-DECT & \\textbf{3808.97} & \\textbf{29.11} & \\textbf{0.0308} & \\textbf{23.73} \\\\\\n \\hline\\n \\multirow{2}{*}{Patient2} & DL-DECT & 4216.36 & 28.95 & 0.0342 & 23.78 \\\\\\n & FLESH-DECT & \\textbf{4128.01} & \\textbf{29.04} & \\textbf{0.0335} & \\textbf{23.87} \\\\\\n \\hline\\n \\multirow{2}{*}{Patient3} & DL-DECT & 3508.35 & 29.71 & 0.0313 & 24.85 \\\\\\n & FLESH-DECT & \\textbf{3431.13} & \\textbf{29.81} & \\textbf{0.0306} & \\textbf{24.95} \\\\\\n \\hline\\hline\\n \\end{tabular}\\n \\label{tab:2}\\n \\end{center}\\n\\end{table}\\n\\n\\nQuantitative metrics were calculated to evaluate the accuracy of the predicted high-energy CT images. We use the well-established metrics MSE, PSNR and SSIM to assess the image similarity between the real and the predicted high-energy images. Additionally, more than 100 regions of interest (ROIs) were randomly selected in homogeneous areas (e.g. liver and stomach) for each testing volume. We calculated the mean HU value differences on those ROIs, and the results show that the average HU error between the predicted and the original high-energy images is smaller than 2.09 HU. Regarding the computation time, it takes around 2.5 seconds for the proposed method to process 300 slices. All quantitative results are shown in Table \\ref{tab:1}.
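\\n\\nThe image-similarity metrics can be computed with standard tools; a brief sketch (the choice of data range is our assumption, as the paper does not state it) is:\\n\\begin{verbatim}\\n
import numpy as np\\n
from skimage.metrics import peak_signal_noise_ratio, structural_similarity\\n
\\n
def image_metrics(pred, ref):\\n
    # MSE\/PSNR\/SSIM between predicted and real high-energy images (HU)\\n
    dr = float(ref.max() - ref.min())\\n
    return (float(np.mean((pred - ref) ** 2)),\\n
            peak_signal_noise_ratio(ref, pred, data_range=dr),\\n
            structural_similarity(ref, pred, data_range=dr))\\n
\\n
def roi_hu_error(pred, ref, roi_masks):\\n
    # mean +\/- std of |mean HU difference| over homogeneous ROIs\\n
    d = [abs(pred[m].mean() - ref[m].mean()) for m in roi_masks]\\n
    return np.mean(d), np.std(d)\\n
\\end{verbatim}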
\\nSince most of the differences come from noise, we also compared the denoised DL-predicted images with the denoised high-energy images. In this case, the DL-predicted images are calculated by adding the denoised low-energy image to the DL-estimated difference image. For those denoised images, the proposed method achieves an average MSE of 171.09, PSNR of 44.33 and SSIM of 0.9848.\\n\\nIn our previous work \\cite{ref46}, we proposed the DL-DECT method to estimate \\(I_{dif}\\) directly from \\(I_{low}\\). In this work, with the introduction of the additional high-energy single-view projection, we can further enhance the accuracy of the predicted high-energy image and eventually the accuracy of the material- and energy-specific images. To show the benefit of the additional high-energy projection, we quantitatively compared the proposed FLESH-DECT method with the DL-DECT method.\\nVirtual noncontrast (VNC) images and iodine maps are derived to demonstrate the clinical utility of the proposed method. Fig. \\ref{fig:6} depicts the VNC images and iodine maps reconstructed using the different methods in the transverse, sagittal and coronal planes. Both the DL-DECT and FLESH-DECT algorithms provide high-quality VNC and iodine images that are consistent with the images generated from the original DECT images. Quantitative metrics on the VNC and iodine images are shown in Table \\ref{tab:2}. The results demonstrate that FLESH-DECT can provide high-quality material-specific images and that it outperforms the DL-DECT method.\\n\\n\\begin{table}[t]\\n \\begin{center}\\n \\caption{Quantitative Noise Level Comparison on VNCs and Iodine Maps.}\\n \\begin{tabular}{ c c | c c }\\n \\hline\\hline\\n \\multicolumn{2}{c}{Standard Deviation on ROIs} & VNC & Iodine Map \\\\\\n \\hline\\n \\multirow{3}{*}{Patient1} & Real & 81.58 & 0.3363 \\\\\\n & DL-DECT & 29.14 & 0.1618 \\\\\\n & FLESH-DECT & \\textbf{28.09} & \\textbf{0.1499} \\\\\\n \\hline\\n \\multirow{3}{*}{Patient2} & Real & 74.69 & 0.2837 \\\\\\n & DL-DECT & 24.81 & 0.1017 \\\\\\n & FLESH-DECT & \\textbf{24.12} & \\textbf{0.0998} \\\\\\n \\hline\\n \\multirow{3}{*}{Patient3} & Real & 76.95 & 0.2875 \\\\\\n & DL-DECT & 25.97 & 0.0946 \\\\\\n & FLESH-DECT & \\textbf{25.67} & \\textbf{0.0891}\\\\\\n \\hline\\hline\\n \\end{tabular}\\n \\label{tab:3}\\n \\n \\end{center}\\n\\end{table}\\n\\nFrom the VNC images and iodine maps, we find that the DL-derived VNC images and iodine maps show a remarkably reduced noise level compared with those generated from the original DECT images. Here we also compare the noise level by calculating the standard deviation in ROIs. More than 500 ROIs were selected randomly in homogeneous areas. The mean standard deviations in the ROIs for each testing patient are provided and compared in Table \\ref{tab:3}. The mean standard deviations of the images derived using FLESH-DECT are close to those of the DL-DECT method, and both are much lower than those from the original DECT images. This result shows that the FLESH-DECT method retains the denoising property.\\n\\n\\begin{figure*}[tb]\\n \\centering\\n \\includegraphics[width=0.98\\textwidth]{result.png}\\n \\caption{Results on three testing slices for the different geometry models. From left to right are the original 100 kV images, the original 140 kV images, the parallel-beam results, the equispacing fan-beam results and the equiangular fan-beam results, respectively. }\\n \\label{fig:7}\\n\\end{figure*}\\n\\nWe tested the proposed method with several 2D CT geometries. Example results on a testing volume using the different geometries are displayed in Fig. \\ref{fig:7}. All CT images are displayed with a window width of 300 HU and a center of 50 HU, while the difference images are displayed with a window width of 300 HU and a center of 0 HU. As can be seen, all models are able to provide competitive results which are highly consistent with the original high-energy images. For better comparison, we also calculated quantitative metrics on those results (Table \\ref{tab:4}). The differences between the results obtained using the proposed method with different geometries are marginal, and all of them are superior to our previous DL-DECT method, which does not utilize the additional single-view projection.
Overall, the proposed method reduces the mean-squared error (averaged over all testing cases) from 1858.32 to 898.35 while increasing the PSNR from 33.83 to 36.89 and the SSIM from 0.8641 to 0.8744.\\n\\n\\begin{table}[b]\\n \\begin{center}\\n \\caption{Quantitative Results for Different Geometries.}\\n \\begin{tabular}{ c c | c c c }\\n \\hline\\hline\\n \\multicolumn{2}{c}{} & MSE & PSNR & SSIM \\\\\\n \\hline\\n \\multirow{4}{*}{Patient1} & DL-DECT & 2466.07 & 32.37 & 0.8558 \\\\\\n & Parallel & \\textbf{866.48} & \\textbf{36.84} & \\textbf{0.8707} \\\\\\n & Equiangular & 875.06 & 36.80 & 0.8702 \\\\\\n & Equispacing & 871.65 & 36.82 & 0.8703 \\\\\\n \\hline\\n \\multirow{4}{*}{Patient2} & DL-DECT & 1530.75 & 34.61 & 0.8723 \\\\\\n & Parallel & \\textbf{876.68} & \\textbf{37.03} & \\textbf{0.8793} \\\\\\n & Equiangular & 887.60 & 36.99 & 0.8789 \\\\\\n & Equispacing & 882.76 & 37.01 & 0.8789 \\\\\\n \\hline\\n \\multirow{4}{*}{Patient3} & DL-DECT & 1578.13 & 34.61 & 0.8660 \\\\\\n & Parallel & 927.47 & 36.89 & 0.8745 \\\\\\n & Equiangular & 932.40 & 36.88 & 0.8741 \\\\\\n & Equispacing & \\textbf{927.15} & \\textbf{36.90} & \\textbf{0.8747} \\\\\\n \\hline\\hline\\n \\end{tabular}\\n \\label{tab:4}\\n \\end{center}\\n\\end{table}\\n\\n\\n\\begin{table}[t]\\n \\begin{center}\\n \\caption{Computation Time for Different Methods in Seconds.}\\n \\begin{tabular}{ c c c c }\\n \\hline\\hline\\n & Patient1 & Patient2 & Patient3 \\\\\\n \\hline\\n Slices & 308 & 265 & 387 \\\\\\n \\hline\\n DL-DECT & 5.43 & 4.68 & 6.86 \\\\\\n Parallel & \\textbf{2.33} & \\textbf{2.04} & \\textbf{2.94} \\\\\\n Equiangular & 2.65 & 2.31 & 3.34 \\\\\\n Equispacing & 2.53 & 2.18 & 3.23 \\\\\\n \\hline\\hline\\n \\end{tabular}\\n \\label{tab:5}\\n \\vspace{-5mm}\\n \\end{center}\\n\\end{table}\\n\\nThe computation times for the different methods are displayed and compared in Table \\ref{tab:5}. Note that the computation time for the denoising CNN is not included in the results. The denoising network takes about 0.01 seconds for each slice, and the denoising time is similar for all methods. Compared to the DL-DECT method, the proposed method speeds up the computation by a factor of two, which can be attributed to the reduced number of weights and the simple network structure. There are some differences in runtime among the different geometries, which means the computational cost of the proposed method depends on the projector.\\n\\n\\n\\section{Discussion}\\nThere is substantial redundancy and correlation, in both the anatomical structure and the energy domain, between the low- and high-energy DECT images. By incorporating this redundancy and correlation into a deep learning model, it is possible to provide material- and energy-specific images using standard SECT scanners, which has the potential to alleviate the need for premium DECT scanners. In addition, compared to the standard fully sampled low- and high-energy DECT acquisition, the use of sparse sampling at the second energy level can significantly reduce the radiation dose of DECT imaging.\\n\\nOur results show that both the DL-DECT and FLESH-DECT methods achieve high-performance DECT imaging by using the input low-energy CT data, and the quantitative analysis shows that FLESH-DECT outperforms DL-DECT in terms of HU accuracy and calculation speed. The superior performance of FLESH-DECT can be attributed to the additional single-view high-energy projection.
Different from the DL-DECT method, which directly infers a high-energy image using the incorporated prior knowledge, the proposed FLESH-DECT method uses the learned knowledge to fit the measured high-energy projection. Namely, the high-energy projection introduces a penalty that constrains the projection generated by the predicted high-energy images to be consistent with the measurement, which in turn enhances the accuracy of the predicted images. The single-view high-energy projection can be obtained shortly before or after the standard SECT low-energy data acquisition. For example, one may use a prescanning scout-view image (with the corresponding high-energy kV setting) as the single-view high-energy projection, and existing SECT systems are able to implement such scanning protocols without modifying the hardware.\\n\\nFLESH-DECT is suitable for different geometries. In this study, we have tested the method using 2D geometries (fan-beam and parallel-beam). However, the method can be applied straightforwardly to 3D geometries by extending the networks and the projection operators to 3D scenarios. Since, in the 3D case, the reconstructed volume is smaller along the rotation axis than the region covered by the projection view, projections generated using the reconstructed image volume cannot match the measured projection. To solve this issue, one can use the middle part of the measured projection, whose rays do not propagate through the region beyond the image volume.\\n\\n\\begin{figure*}[tb]\\n \\centering\\n \\includegraphics[width=0.98\\textwidth]{mat_decomp.png}\\n \\caption{An example of the output of the MD-CNN. Each image represents a channel in the resulting matrix \\(A_{DL}\\). All images are displayed under the same window with center=0.5 and width=1.0.}\\n \\label{fig:10}\\n\\end{figure*}\\n\\nThe proposed method employs the MD-CNN to generate \"material-decomposition\" maps $A$ from 100 kV images. Since the basis materials in the image do not change under X-rays at different energy levels, accurate material-decomposition maps are able to provide CT images under any spectrum if combined with a projection view acquired using the corresponding spectrum (e.g. a 120 kV image from a 120 kV projection, a 150 kV image from a 150 kV projection). In our experiments, the models were trained and tested using DECT images under the 100 kV\/Sn 140 kV scanning protocol. Therefore, the generated \"material-decomposition\" maps are likely to be optimized for this specific protocol and may not apply to images acquired using a different protocol (such as 80 kV\/Sn 140 kV). An example is shown in Fig. \\ref{fig:10}. However, if the model were trained using images acquired with several different spectra, the \"material-decomposition\" maps would be much closer to the real ones, and the model would have the potential to generate images under different spectra without re-training the MD-CNN. Also, regularizers may further be introduced and applied to matrix $A$ during network training to enhance the robustness of the model.\\n\\nThere is a denoising CNN included in the flowchart of FLESH-DECT. We use this network to reduce the impact of image noise, and the results show its effectiveness. However, the denoising network is not mandatory, and it can be replaced with other image denoising techniques~\\cite{ma2011low}, such as non-local means (NLM)~\\cite{zhang2013iterative} or block-matching and 3D filtering (BM3D)~\\cite{salehjahromi2017spectral}, without severe degradation in performance.
The denoising step may also be removed when the input images are of extremely high quality.\\n\\nDespite all the advantages and potentials mentioned above, the proposed method has limitations. FLESH-DECT relies heavily on the deep neural network to perform the material-decomposition-like operation. Since the domain knowledge learned by the network highly depends on the training data, it is unlikely to provide reasonable results when there is a huge difference between the training and testing data. For example,\\nmodels trained under the 100 kV\/Sn 140 kV protocol may not generate correct results when fed low-energy images scanned under a 70 kV protocol. However, these limitations can be addressed by training different models for different protocols.\\n\\n\\n\\section{Conclusion}\\nIn this paper, we proposed a deep learning approach to perform DECT imaging using a low-energy image and a single-view high-energy projection. Compared to standard DECT imaging, the approach can provide superior material-specific images with significantly reduced noise. It also has the potential to simplify the system design and reduce the radiation dose, and it allows us to perform high-quality DECT imaging without the conventional hardware-based DECT solutions. The approach may significantly extend the usage of the widespread standard SECT scanners by providing advanced DECT clinical applications, such as urinary stone characterization and differentiating intracerebral hemorrhage from iodinated contrast, and thus lead to a new paradigm of SECT imaging.\\n\\n\\bibliographystyle{IEEEtran}\\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Abstract}\\n{\\nWe demonstrate that the absence of stable quasiparticle excitations on parts of the Fermi surface, known as the ``nodal-antinodal dichotomy'' in underdoped cuprate superconductors, can be reproduced in models of strongly correlated electrons defined via a holographic dual. We show analytically that the anisotropy of the quantum critical continuum, which is a feature of these models, may lead to washing out the quasiparticle peak in one direction while leaving it intact in the perpendicular one. The effect relies on the qualitatively different scaling of the self-energy in different directions. Using the explicit example of the anisotropic Q-lattice model, we demonstrate how this effect emerges due to specific features of the near-horizon geometry of the black hole in the dual description. \\n}\\n\\setcounter{tocdepth}{1}\\n\\tableofcontents\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\n\\section{Introduction}\\n\\nSystems of strongly correlated electrons are not always well described in terms of stable quasiparticle excitations. An experimental example is the phenomenon of the nodal-antinodal dichotomy in underdoped cuprate superconductors~\\cite{shen2005nodal}, where sharp quasiparticle peaks are observed only on some parts of the Fermi surface. Non-quasiparticle regimes have been demonstrated to exist in theoretical models of strongly correlated electrons constructed using holography, also known as the AdS\/CFT correspondence \\cite{Cubrovic:2009ye,Faulkner:2009wj,Faulkner:2011tm}. In these models the features of the near-horizon geometry of the black hole in the dual description have been shown to govern the self-energy part of the fermionic two-point function in such a way that the inverse quasiparticle lifetime would grow faster than its energy, rendering the stable quasiparticle concept practically useless.
\n\nOne may describe this holographic effect as the coupling of the would-be quasiparticles to a ``quantum critical continuum'' bath which mediates their decay, even at small frequencies where the usual elastic decay channels are kinematically forbidden~\\cite{Faulkner:2010tq}. In systems without rotational symmetry, e.g. due to the underlying crystal lattice, this quantum critical continuum may be present only in some directions in momentum space, and hence it is possible for the quasiparticle excitations to be stable only on some parts of the Fermi surface.\n\nWe sketch such a scenario in figure~\\ref{fig:cartoon}. In figure~\\ref{fig:cartoon_continuum} we show the two contributions that we may heuristically think of as combining to give the full fermion spectral function. The plot shows these contributions as functions of momentum in two directions in momentum space, which we label as \\(k_n\\) and \\(k_a\\), where the subscripts stand for (anti)nodal. The blue curve shows the anisotropic quantum critical continuum, which is only present for momentum along one direction (\\(k_n\\)). The dashed orange curves show sharp resonance peaks at the Fermi momentum. In figure~\\ref{fig:cartoon_peaks} we show the resulting spectral function. Where the peak overlaps with the quantum critical continuum, as happens along the \\(k_a\\) cut in the figure, interactions between the two broaden the peak, destroying the quasiparticle interpretation. On the other hand, where a peak does not overlap with the continuum it remains sharp and the quasiparticle interpretation is preserved. We refer to these two situations as antinodal and nodal respectively.\n\n\\begin{figure}\n \\begin{subfigure}{0.5\\textwidth}\n \\includegraphics[width=\\textwidth]{figs\/cartoon_continuum.pdf}\n \\caption{Cuts of contributions to spectral function}\n \\label{fig:cartoon_continuum}\n \\end{subfigure}\\begin{subfigure}{0.5\\textwidth}\n \\includegraphics[width=\\textwidth]{figs\/cartoon_peaks.pdf}\n \\caption{Cuts of the full spectral function}\n \\label{fig:cartoon_peaks}\n \\end{subfigure}\n \\caption{Cartoons of the fermion spectral function in the anisotropic strongly correlated systems considered in this work, as a function of momentum in two different directions in momentum space that we label as \\(k_n\\) and \\(k_a\\). \\textbf{(a):} The spectral function receives contributions from two sources. A quantum critical continuum (solid blue) which exists only for momentum in one direction, which we take to be \\(k_n\\), and resonance peaks (dashed orange). \\textbf{(b):} When these two contributions are combined any peaks that overlap the continuum are broadened, and thus lose their interpretation as stable quasiparticle excitations.\n }\n \\label{fig:cartoon}\n\\end{figure}\n\nIn this work we provide a comprehensive illustration of how to engineer the above mechanism in holography, showing that it may be realised by an appropriate choice of anisotropic infrared (IR) scaling symmetry. For simplicity we will only focus on cases that preserve a two-fold rotational symmetry, such that the nodes and antinodes occur along orthogonal directions in momentum space. We comment about a possible generalisation to the experimentally relevant case of four-fold rotational symmetry in section~\\ref{sec:Discussion}.\n\nWe start by considering analytically the general case of a holographic model with an anisotropic IR scaling geometry at zero temperature in section~\\ref{sec:anis}. 
Then we specialise to the particular model of the holographic Q-lattice. This model describes a (2+1)-dimensional quantum system with a scalar operator that has a linear profile in one direction \\(x\\). This breaks the translational symmetry in the $x$ direction as well as rotational symmetry. What is important for us is that this model has regimes in which the deep infrared geometry has the aforementioned anisotropic IR scaling behaviour. The self-energy of the quasiparticle excitation then demonstrates nodal behaviour in the \\(y\\) direction and antinodal in the \\(x\\) direction.\n\nIn section~\\ref{sec:zeroT} we consider such Q-lattices at zero temperature and demonstrate the finite width of the quasiparticle peak already at the Fermi surface in the $x$-direction, while in the perpendicular $y$-direction the peak remains sharp and the ``quantum critical'' self-energy is exponentially suppressed. We also evaluate the fermionic spectral function at the Fermi surface at finite temperature and demonstrate the absence of quasiparticle peaks in certain directions and the behaviour of the corresponding width as a function of temperature. The three appendices are devoted to the details of our analytical treatment (appendix~\\ref{app:matching}) and two different numerical methods: shooting at zero temperature, appendix~\\ref{app:zeroT_shooting} and pseudospectral relaxation at finite temperature, appendix~\\ref{app:finiteT_relaxation}. Finally, in appendix~\\ref{app:extra_plots} we show some additional plots of numerical results. \n\n\n\\section{\\label{sec:anis}Fermionic Green's functions and anisotropic quantum critical continua}\n\\input{tex\/sec_anis.tex}\n\n\n\n\\section{\\label{sec:zeroT}\\label{sec:finiteT} Explicit example: anisotropic Q-lattice}\n\\input{tex\/sec_zeroT.tex}\n\n\n\n\n\n\n\\section{\\label{sec:Discussion}Discussion}\n\nIn this work we have studied the effects of anisotropy of the quantum critical continuum in systems of strongly correlated and strongly entangled quantum matter defined via their holographic duals. In the dual gravitational description, the features of quantum criticality are encoded in the scaling behaviour of the deep IR geometry. We focus on holographic systems with anisotropic deep IR geometries, with scaling behaviour with exponents obeying the conditions~\\eqref{eq:conditions_for_dichotomy}, in particular, with a negative dynamical exponent in one direction. We show analytically that in such systems the fermion spectral function behaves at small frequencies qualitatively differently in the two different directions in momentum space, in such a way as to yield an apparent nodal-antinodal dichotomy on the Fermi surface. We check this result by an explicit numerical calculation in a concrete holographic model.\n\nThe crucial part of our finding is that the behaviour we observe is solely dictated by the features of the continuum and the associated scaling exponents (modelled by the IR geometry), and is largely independent of the features of the probe (such as the mass or charge of the probe fermion). In this way it is quite general and would emerge for almost any probe used to describe phenomenological data. We thus might hope that this mechanism for producing a nodal-antinodal dichotomy would work for any type of quantum criticality, whether it can be described by holography or not. Our main statement is therefore that the angular behaviour of the fermionic self-energy, as extracted from e.g. 
ARPES data in experiments on underdoped cuprates, may reveal important information about the anisotropy of the quantum criticality underlying the system and doesn't have to be explained in terms of fermiology, for example by resonance scattering between the different Fermi surfaces.\n\nOur probe-agnostic treatment goes even further actually. It is easy to see that the deep IR geometry affects bosonic two-point correlation functions in a similar way to fermionic ones, and so one doesn't have to restrict oneself to fermionic probes only. Our mechanism predicts that the anisotropic features of a bosonic probe, which can resolve the angular structure of the finite momentum response, will closely follow the nodal-antinodal dichotomy observed in the fermionic probes, since both are dictated by the same quantum criticality. We suggest that this prediction can be directly checked at available momentum resolved EELS experimental facilities~\\cite{vig2017measurement,kogar2017signatures} provided the angular dependence is under control. We leave a full holographic exploration of the behaviour of bosonic probes in models of the type considered here to future work.\n\nThere are a number of other interesting directions for further research. Although our mechanism for producing a nodal-antinodal dichotomy is probe agnostic in the sense that it is independent of the mass and charge of the probe fermion, the dichotomy may be destroyed by other interactions, depending on how these interactions scale in the IR. To see why, recall that the key to obtaining a quantum critical continuum in the \\(x\\) direction is that, apart from the terms proportional to frequency, the zero-derivative terms in equation~\\eqref{eq:fermion_ir_eom_main_text} decay at least as fast as \\(\\zeta^{-1}\\) in the deep IR \\(\\zeta \\to\\infty\\). If we add non-minimal interaction terms to the Dirac equation that decay slower than this, they will destroy the quantum critical continuum.\n\nOne interesting and well-motivated non-minimal interaction is a dipole coupling, proportional to \\(\\slashed{F} \\Xi\\), which can arise in top-down constructions~\\cite{Bah:2010yt,Bah:2010cu} and has been used in holographic models of Mott insulators~\\cite{Edalati:2010ww,Edalati:2010ge}. It is straightforward to show that including such an interaction adds terms to equation~\\eqref{eq:fermion_ir_eom_main_text} that decay in the deep IR as \\(\\zeta^{-(1+\\nu_A)}\\). Given the constraint \\(\\nu_A \\geq 0\\) from equation~\\eqref{eq:exponent_conditions}, this interaction decays rapidly enough as \\(\\zeta\\to\\infty\\) that it will not destroy the quantum critical continuum.\n\nOn the other hand, suppose the gravitational background contains some scalar field \\(\\Phi\\). One could add to the Dirac equation~\\eqref{eq:Dirac_equation} a Yukawa-like coupling proportional to \\(\\Phi^n \\Xi\\) \\cite{Wu:2019orq}, where \\(n\\) is some power that we assume is positive in order to avoid a singularity at \\(\\Phi=0\\). If the scalar field scales as \\(\\Phi \\propto r^{\\a_\\Phi}\\) in the IR then this adds terms to equation~\\eqref{eq:fermion_ir_eom_main_text} proportional to \\(\\zeta^{-1+\\gamma}\\) with \\(\\gamma =\\nu_\\theta + n \\a_\\Phi (2\\nu_\\theta-1)\\). If \\(\\a_\\Phi<0\\), i.e. if the scalar field diverges as a power law in the IR, then for sufficiently large \\(n\\) we will find \\(\\gamma>0\\) even when \\(\\nu_\\theta<0\\), thus destroying the quantum critical continuum. 
For example, if we choose \\(\\Phi = e^{\\phi}\\) in the Q-lattice model studied in section~\\ref{sec:zeroT} then we have \\(\\a_\\Phi = -1\/4\\) and \\(\\nu_\\theta = -1\/6\\), and consequently \\(\\gamma > 0\\) for \\(n>1\/2\\). It would be valuable to perform a complete analysis of possible couplings and their effects on the continuum.\\n\\nNote also that while in our case the anisotropy of the IR geometry was caused by the explicit Q-lattice, this is not the only way to create such geometries. In particular, holographic models with spontaneous formation of spatially inhomogeneous order would naturally affect the IR geometry in a similar way. This is relevant for various models with spontaneous charge density waves~\\cite{Baggioli:2022pyb} including the holographic model of doped Mott insulators~\\cite{Andrade:2017ghg} and its homogeneous toy-model version~\\cite{Andrade:2018gqk}.\\n\\nIn another direction, it should be noted that our simple model produces a nodal-antinodal dichotomy with two-fold rotational symmetry, where the ``nodes'' and ``anti-nodes'' have an angular separation of $\\pi\/2$. This is in contrast to known examples of the dichotomy in real materials, where the rotational symmetry is four-fold~\\cite{shen2005nodal}: the nodes are separated from the anti-nodes by $\\pi\/4$ (and are also strictly aligned with the crystal lattice). In order to reproduce this phenomenology in a holographic approach one could introduce a periodic 2-dimensional crystal lattice, which will break rotations down to a discrete group and imprint the four-fold anisotropy onto the deep IR geometry. Such periodic lattice constructions have been realised for example in refs.~\\cite{Donos:2014yya,Donos:2020viz,Balm:2022bju}. The behaviour of the low frequency, finite momentum two-point functions should then be dictated by the same principles as outlined in section~\\ref{sec:anis}. It would be interesting to find such a periodic lattice model with four-fold anisotropic scaling behaviour analogous to the conditions~\\eqref{eq:conditions_for_dichotomy}, which should then display the desired four-fold nodal-antinodal dichotomy, or its softer version, as described at the end of section~\\ref{sec:anis}. The advantage of our simplified model with two-fold rotational symmetry is the possibility for more complete analytic control, drastically improving our understanding of the underlying mechanism for the dichotomy.\\n\\n\\section*{Acknowledgements}\\nWe thank Koenraad Schalm, Jan Zaanen, Erik van Heumen and Elias Kiritsis for useful communication and insightful comments. A.K. acknowledges the hospitality of the Lorentz Institute for Theoretical Physics, where the preliminary results of this work have been discussed. The work of A.K. is supported by VR Starting Grant 2018-04542 of the Swedish Research Council. The numerical computations were enabled by resources provided by the Swedish National Infrastructure for Computing (SNIC), partially funded by the Swedish Research Council through grant agreement no. 2018-05973, at SNIC Science Cloud and PDC Center for High Performance Computing, KTH Royal Institute of Technology.
Nordita is supported in part by Nordforsk.\\n\\n\\n\\n\\n\\n\\n\\subsection[kx term dominant]{\\(\\bar{k}_x\\) term dominant}\\n\\label{sec:matching_kx_dominant}\\n\\nWe first consider the case in which for \\(\\zeta \\gg 1\\) and \\(\\bar{k}_x \\sim \\mathcal{O}(1)\\):\\n\\begin{itemize}\\n \\item \\(\\bar{k}_x \\zeta^{\\nu_x} \\gg \\bar{m} \\zeta^{\\nu_\\theta}\\), either because \\(\\bar{m} = 0\\) or because \\(\\nu_x > \\nu_\\theta\\); and\\n \\item \\(\\bar{k}_x \\zeta^{\\nu_x} \\gg \\bar{q} \\zeta^{2\\nu_\\theta-\\nu_A}\\), either because \\(\\bar{q} = 0\\) or because \\(\\nu_x > 2\\nu_\\theta-\\nu_A\\).\\n\\end{itemize}\\nThen at sufficiently large \\(\\zeta\\), the IR equations~\\eqref{eq:fermion_eom_ir} are well approximated by\\n\\begin{equation}\\n \\psi_\\pm' + i \\left(\\bar{\\omega} \\mp \\frac{\\bar{k}_x}{\\zeta^{1- \\nu_x}} \\right) \\chi_\\pm = 0,\\n \\qquad\\n \\chi_\\pm' + i \\left(\\bar{\\omega} \\pm \\frac{\\bar{k}_x}{\\zeta^{1 - \\nu_x}} \\right) \\psi_\\pm = 0.\\n \\label{eq:fermion_eom_ir_kx}\\n\\end{equation}\\nIt will be convenient to rewrite these equations by defining a rescaled radial coordinate \\(s = \\zeta \\left(\\bar{\\omega}\/\\bar{k}_x \\right)^{1\/(1-\\nu_x)}\\), in terms of which they read\\n\\begin{equation}\\n \\psi_\\pm'(s) + i \\k \\left(1 \\mp \\frac{1}{s^{1-\\nu_x}}\\right) \\chi_\\pm(s) = 0,\\n \\qquad\\n \\chi_\\pm'(s) + i \\k \\left(1 \\pm \\frac{1}{s^{1-\\nu_x}}\\right) \\psi_\\pm(s) = 0,\\n \\label{eq:fermion_eom_ir_kx_s}\\n\\end{equation}\\nwhere\\n\\begin{equation} \\label{eq:kappa_definition}\\n \\k \\equiv \\left(\\frac{\\bar{k}_x}{\\bar{\\omega}^{\\nu_x}}\\right)^{1\/(1-\\nu_x)}.\\n\\end{equation}\\nWe will be interested in approximate solutions of equation~\\eqref{eq:fermion_eom_ir_kx_s} when \\(\\bar{k}_x \\sim \\mathcal{O}(1)\\) and \\(\\bar{\\omega} \\ll 1\\). We focus on cases where \\(\\nu_x < 0\\), and thus from equation~\\eqref{eq:kappa_definition} \\(\\k \\ll 1\\).\\n\\nWe solve equation~\\eqref{eq:fermion_eom_ir_kx_s} at small \\(\\k\\) by the following matching procedure. For \\(s \\gg 1\\) we can neglect the terms proportional to \\(s^{-(1-\\nu_x)}\\), and thus the solution obeying ingoing boundary conditions is that given in equation~\\eqref{eq:fermion_solution_deep_ir}. In terms of \\(\\k\\) and \\(s\\) this solution reads\\n\\begin{equation}\\n \\psi_{\\pm,R}(s) = e^{i \\k s},\\n \\qquad\\n \\chi_{\\pm,R}(s) = - e^{i \\k s},\\n \\label{eq:approximate_solution_R_s}\\n\\end{equation}\\nwhere we have added the subscripts \\(R\\) as a reminder that this solution applies to the right of \\(s=1\\). Near \\(s \\sim \\mathcal{O}(1)\\) we expect the neglected terms in equation~\\eqref{eq:fermion_eom_ir_kx_s} proportional to \\(s^{-(1-\\nu_x)}\\) to become important. However, for \\(\\k \\ll 1\\) they only have a small effect on the solution. To see this, we write the full solution as \\(\\psi_\\pm(s) = \\psi_{\\pm,R}(s) + \\d \\psi_\\pm(s)\\), where \\(\\d \\psi_{\\pm}(s) \\to 0\\) as \\(s \\to \\infty\\), and similarly for \\(\\chi_\\pm\\).
Substituting into equation~\\eqref{eq:fermion_eom_ir_kx_s}, we then find\\n\\begin{align}\\n \\d\\psi_\\pm'(s) + i \\left(1 \\mp \\frac{1}{s^{1-\\nu_x}} \\right) \\k\\, \\d\\chi_\\pm(s) \\pm i \\k \\frac{e^{i\\k s}}{s^{1-\\nu_x}} &= 0,\\n \\nonumber \\\\\\n \\d \\chi_\\pm'(s)+ i \\left(1 \\pm \\frac{1}{s^{1-\\nu_x}} \\right) \\k\\,\\d\\psi_\\pm(s) \\pm i \\k \\frac{e^{i\\k s}}{s^{1-\\nu_x}} &= 0.\\n \\label{eq:fermion_eom_ir_kx_deviation_1}\\n\\end{align}\\nWhen \\(\\d\\psi_\\pm\\) and \\(\\d\\chi_\\pm\\) are small, the terms proportional to \\(\\k \\, \\d\\psi_\\pm\\) and \\(\\k \\, \\d\\chi_\\pm\\) are doubly small in the small-\\(\\k\\) limit, and may be neglected to leading order in this limit. The approximate solution to equation~\\eqref{eq:fermion_eom_ir_kx_deviation_1} subject to the boundary conditions \\(\\d\\psi_\\pm, \\d\\chi_\\pm \\to 0\\) as \\(s \\to \\infty\\) is then\\n\\begin{equation}\\n \\d\\psi_\\pm(s) \\approx \\d\\chi_\\pm(s) \\approx \\pm i \\k (-i \\k )^{- \\nu_x} \\Gamma(\\nu_x, - i \\k s) = \\mp \\frac{i \\k s^{\\nu_x}}{\\nu_x} + \\mathcal{O}(\\k^{1+|\\nu_x|}),\\n\\end{equation}\\nwhere \\(\\Gamma\\) is the incomplete Gamma function, and on the right-hand side we have expanded for fixed \\(s\\) and small \\(\\k\\). The right-hand side vanishes in the limit \\(\\k \\to 0\\), and thus \\(\\psi_{\\pm,R}\\) and \\(\\chi_{\\pm,R}\\) indeed provide good approximations around \\(s=1\\) at low frequencies. This also justifies a posteriori our neglect of the terms in equation~\\eqref{eq:fermion_eom_ir_kx_deviation_1} proportional to \\(\\k \\, \\d \\psi_\\pm\\) and \\(\\k \\, \\d \\chi_\\pm\\). On the other hand, this approximation breaks down at \\(s \\ll 1\\), where the incomplete gamma functions grow very large.\\n\\nFor \\(s \\ll 1\\), the terms in equation~\\eqref{eq:fermion_eom_ir_kx_s} proportional to \\(s^{-(1-\\nu_x)}\\) become dominant, and we obtain the approximate solution\\n\\begin{align}\\n \\psi_{\\pm,L}(s) &\\approx a_\\pm \\exp\\left(\\frac{\\k}{\\nu_x}s^{\\nu_x}\\right) + b_\\pm \\exp\\left(-\\frac{\\k}{\\nu_x}s^{\\nu_x}\\right),\\n \\nonumber\\\\\\n \\chi_{\\pm,L}(s) &\\approx \\mp i a_\\pm \\exp\\left(\\frac{\\k}{\\nu_x}s^{\\nu_x}\\right) \\pm i b_\\pm \\exp\\left(-\\frac{\\k}{\\nu_x}s^{\\nu_x}\\right),\\n \\label{eq:approximate_solution_L_s}\\n\\end{align}\\nwhere we have added the subscripts \\(L\\) as a reminder that this solution applies to the left of \\(s=1\\). An argument almost identical to the one we made for \\((\\psi_{\\pm,R},\\chi_{\\pm,R})\\) shows that \\((\\psi_{\\pm,L},\\chi_{\\pm,L})\\) are good approximations to the full solution of equation~\\eqref{eq:fermion_eom_ir_kx_s} near \\(s \\sim \\mathcal{O}(1)\\) when \\(\\k \\ll 1\\).\\n\\nWe fix the integration constants \\(a_\\pm\\) and \\(b_\\pm\\) in equation~\\eqref{eq:approximate_solution_L_s} by matching to equation~\\eqref{eq:approximate_solution_R_s} at some \\(s_m \\sim \\mathcal{O}(1)\\), near which both solutions provide good approximations.
Setting \\(\\psi_{\\pm,L}(s_m) = \\psi_{\\pm,R}(s_m)\\) and \\(\\chi_{\\pm,L}(s_m) = \\chi_{\\pm,R}(s_m)\\), we can solve for the integration constants to find\\n\\begin{align}\\n a_\\pm &= \\frac{1}{\\sqrt{2}} \\exp\\left(i \\k s_m - \\frac{\\k }{\\nu_x} s_m^{\\nu_x}\\mp \\frac{i \\pi}{4}\\right) = \\frac{e^{\\mp i \\pi\/4}}{\\sqrt{2}} + \\mathcal{O}(\\k),\\n \\nonumber \\\\\\n b_\\pm &= \\frac{1}{\\sqrt{2}} \\exp\\left(i \\k s_m + \\frac{\\k }{\\nu_x} s_m^{\\nu_x} \\pm i \\frac{\\pi}{4}\\right) = \\frac{e^{\\pm i \\pi\/4}}{\\sqrt{2}} + \\mathcal{O}(\\k).\\n \\label{eq:fermion_greens_function_matching_coefficients}\\n\\end{align}\\nAs a check of our matching procedure we plot the relative errors \\(|\\psi_{+,m} - \\psi_+|\/|\\psi_+|\\) and \\(|\\chi_{+,m} - \\chi_+|\/|\\chi_+|\\) for \\(\\nu_x=-1\/3\\), \\(s_m=1\\), and sample values of \\(\\k\\) in figure~\\ref{fig:example_matching}, where \\((\\psi_+,\\chi_+)\\) are a numerical solution of equation~\\eqref{eq:fermion_eom_ir_kx_s} with ingoing boundary conditions, while \\((\\psi_{+,m},\\chi_{+,m})\\) are the matched approximations, given by equation~\\eqref{eq:approximate_solution_R_s} for \\(s > 1\\) and equation~\\eqref{eq:approximate_solution_L_s} for \\(s<1\\), with the coefficients~\\eqref{eq:fermion_greens_function_matching_coefficients}. The relative error indeed becomes very small at small \\(\\k\\).\\n\\begin{figure}\\n \\begin{center}\\n \\includegraphics{figs\/example_matching_psi.pdf}\\n \\includegraphics{figs\/example_matching_chi.pdf}\\n \\end{center}\\n \\caption{\\n Demonstration of the validity of the matching procedure at small \\(\\k\\). We plot the error in \\(\\psi_+\\) and \\(\\chi_+\\) obtained from the matching calculation relative to a numerical solution of equation~\\eqref{eq:fermion_eom_ir_kx_s} with ingoing boundary conditions \\(\\psi_+ \\approx e^{i\\k s}\\) and \\(\\chi_+ \\approx -e^{i\\k s}\\) at large \\(s\\). In other words, what is plotted is the absolute value of the difference between the matching and numerical solutions, divided by the numerical solution. For concreteness we take \\(\\nu_x = -1\/3\\) (the same value used in section~\\ref{sec:zeroT}) and \\(s_m=1\\).
The relative error clearly becomes very small at small \\(\\k\\).\\n }\\n \\label{fig:example_matching}\\n\\end{figure}\\n\\nReturning to the radial coordinate \\(\\zeta = s (\\bar{k}_x\/\\bar{\\omega})^{1\/(1-\\nu_x)}\\), we have found the following approximate solution to equation~\\eqref{eq:fermion_eom_ir_kx}, obeying ingoing boundary conditions at \\(\\zeta \\to \\infty\\) and valid at small \\(\\bar{\\omega}\\) and fixed \\(\\bar{k}_x\\),\\n\\begin{align}\\n \\psi_\\pm(\\zeta) &= \\begin{cases}\\n e^{i \\bar{\\omega} \\zeta}, & \\zeta > \\zeta_m,\\\\\\n \\sqrt{2} \\cosh \\left(\\frac{\\bar{k}_x}{\\nu_x} \\zeta^{\\nu_x} \\mp \\frac{i\\pi}{4} \\right), & \\zeta < \\zeta_m.\\n \\end{cases}\\n \\nonumber \\\\\\n \\chi_\\pm(\\zeta) &= \\begin{cases}\\n - e^{i \\bar{\\omega} \\zeta}, & \\zeta > \\zeta_m,\\\\\\n \\mp i \\sqrt{2} \\sinh \\left(\\frac{\\bar{k}_x}{\\nu_x} \\zeta^{\\nu_x} \\mp \\frac{i\\pi}{4} \\right), & \\zeta < \\zeta_m.\\n \\end{cases}\\n \\label{eq:kx_dominant_matching_solution}\\n\\end{align}\\nwhere \\(\\zeta_m = s_m(\\bar{k}_x\/\\bar{\\omega})^{1\/(1-\\nu_x)}\\), and the \\(\\mp\\) signs in the arguments follow from the coefficients~\\eqref{eq:fermion_greens_function_matching_coefficients}.\\n\\nAs \\(\\zeta\\) is decreased from infinity, a point will eventually be reached where equation~\\eqref{eq:fermion_eom_ir_kx} ceases to provide a good approximation to the full equations of motion, and thus the matching solution~\\eqref{eq:kx_dominant_matching_solution} will cease to provide a good approximate solution. This is both because of the terms we have neglected from equation~\\eqref{eq:fermion_eom_ir} and because the metric functions and gauge field will depart from their scaling forms~\\eqref{equ:scaling}. At low frequencies the approximate location at which this occurs will be independent of frequency, being determined by the relative size of the neglected terms and the terms proportional to \\(\\bar{k}_x\\) in equation~\\eqref{eq:fermion_eom_ir_kx}.\\n\\nWe can thus choose to evaluate the IR Green's function \\(\\mathcal{G}_{\\pm} = - i \\chi_\\pm(\\zeta_0)\/\\psi_\\pm(\\zeta_0)\\) at some frequency-independent \\(\\zeta_0\\) satisfying \\(1 \\ll \\zeta_0 \\ll \\zeta_m\\) (note that \\(\\zeta_m\\to\\infty\\) as \\(\\bar{\\omega}\\to0\\)), where we choose \\(\\zeta_0\\) to lie within the range of validity of the matching solution. We can then use equation~\\eqref{eq:kx_dominant_matching_solution} to evaluate the IR Green's function, obtaining\\n\\begin{equation}\\n \\mathcal{G}_{\\pm}(\\omega,k_x,0) \\approx i, \\n \\qquad\\n \\bar{\\omega} \\ll 1.\\n\\end{equation}\\nRecalling that the low frequency scaling of \\(\\Im \\mathcal{G}_\\pm\\) determines the low frequency scaling of the imaginary part of the full Green's function, we find that the spectral function tends to a non-zero constant at small frequency.
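\\n\\nAs an aside, the low-frequency matching can easily be verified numerically. A minimal Python sketch (our own discretisation and parameter choices, with \\(\\nu_x=-1\/3\\) as in figure~\\ref{fig:example_matching}) integrates equation~\\eqref{eq:fermion_eom_ir_kx_s} inward from the deep IR with ingoing data and compares against the matched solution:\\n\\begin{verbatim}\\n
import numpy as np\\n
from scipy.integrate import solve_ivp\\n
\\n
nu_x, kappa, s_m = -1\/3, 1e-2, 1.0\\n
\\n
def rhs(s, y):\\n
    # equation (fermion_eom_ir_kx_s), upper sign; y = (psi_+, chi_+)\\n
    psi, chi = y\\n
    f = s ** (nu_x - 1.0)\\n
    return [-1j * kappa * (1 - f) * chi, -1j * kappa * (1 + f) * psi]\\n
\\n
# ingoing data psi = e^{i kappa s}, chi = -e^{i kappa s} at large s\\n
s_max, s_min = 200.0, 1e-3\\n
y0 = np.array([np.exp(1j * kappa * s_max), -np.exp(1j * kappa * s_max)])\\n
sol = solve_ivp(rhs, (s_max, s_min), y0, rtol=1e-10, atol=1e-12,\\n
                t_eval=np.geomspace(s_max, s_min, 400))\\n
\\n
def psi_matched(s):\\n
    X = (kappa \/ nu_x) * s ** nu_x - 1j * np.pi \/ 4\\n
    return np.where(s > s_m, np.exp(1j * kappa * s),\\n
                    np.sqrt(2) * np.cosh(X))\\n
\\n
rel_err = np.abs(psi_matched(sol.t) - sol.y[0]) \/ np.abs(sol.y[0])\\n
print(rel_err.max())   # small, and decreasing with kappa\\n
\\end{verbatim}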
\\n\\n\\subsection[q term dominant]{\\(\\bar{q}\\) term dominant}\\n\\label{sec:matching_q_dominant}\\n\\nNow we consider the case in which the first subleading terms in equation~\\eqref{eq:fermion_eom_ir} at large \\(\\zeta\\) are the ones proportional to \\(\\bar{q}\\). This will occur if \\(\\nu_x < 2\\nu_\\theta-\\nu_A\\) and \\(\\bar{m}=0\\).\\footnote{Notice that for non-zero mass and \\(\\nu_\\theta \\leq 0\\), the mass term always dominates over the charge term, \\(\\bar{m} \\zeta^{\\nu_\\theta} \\gg \\bar{q} \\zeta^{2\\nu_\\theta-\\nu_A}\\) at large \\(\\zeta\\), due to the requirement \\(\\nu_A \\geq 0\\) in equation~\\eqref{eq:exponent_conditions}.} In this case, at sufficiently large \\(\\zeta\\) the IR equations~\\eqref{eq:fermion_eom_ir} will be well approximated by\\n\\begin{equation}\\n \\psi_\\pm' + i \\left(\\bar{\\omega} + \\frac{\\bar{q}}{\\zeta^{1-2\\nu_\\theta+\\nu_A}} \\right) \\chi_\\pm = 0,\\n \\qquad\\n \\chi_\\pm' + i \\left(\\bar{\\omega} + \\frac{\\bar{q}}{\\zeta^{1-2\\nu_\\theta+\\nu_A}} \\right) \\psi_\\pm = 0.\\n \\label{eq:fermion_eom_ir_q}\\n\\end{equation}\\nThese can be solved exactly. With ingoing boundary conditions we have\\n\\begin{equation}\\n \\psi_\\pm(\\zeta) = \\exp\\left(i \\bar{\\omega} \\zeta - i \\frac{\\bar{q}\\zeta^{-(\\nu_A - 2 \\nu_\\theta)} }{\\nu_A-2\\nu_\\theta} \\right),\\n \\qquad\\n \\chi_\\pm(\\zeta) = - \\exp\\left(i \\bar{\\omega} \\zeta - i \\frac{\\bar{q}\\zeta^{-(\\nu_A - 2 \\nu_\\theta)} }{\\nu_A-2\\nu_\\theta} \\right),\\n \\label{eq:q_dominant_solution}\\n\\end{equation}\\nvalid over the region for which equation~\\eqref{eq:fermion_eom_ir_q} provides a good approximation to the full equations of motion. The eigenvalues of the IR Green's function are \\(\\mathcal{G}_\\pm = - i \\chi_\\pm(\\zeta_0)\/\\psi_\\pm(\\zeta_0)\\), where \\(\\zeta_0\\) is some large value of \\(\\zeta\\) within this region. From equation~\\eqref{eq:q_dominant_solution} we find\\n\\begin{equation}\\n \\mathcal{G}_\\pm(\\omega,k_x,0) \\approx i, \\qquad \\bar{\\omega} \\ll 1.\\n\\end{equation}\\nAgain, the fermion spectral function tends to a non-zero value at zero frequency.\\n\\n\\subsection[m term dominant]{\\(\\bar{m}\\) term dominant}\\n\\label{sec:matching_m_dominant}\\n\\nFinally, we consider the case in which for \\(\\zeta \\gg 1\\) and \\(\\bar{k}_x \\sim \\mathcal{O}(1)\\) we have \\(\\bar{m} \\zeta^{\\nu_\\theta} \\gg \\bar{k}_x \\zeta^{\\nu_x} \\). This requires \\(\\nu_\\theta > \\nu_x\\).\\nThe calculation is very similar to that of section~\\ref{sec:matching_kx_dominant}, so we will be brief with the details.
At sufficiently large \\(\\zeta\\), the IR equations~\\eqref{eq:fermion_eom_ir} are well approximated by\\n\\begin{equation}\\n \\psi_\\pm' + \\frac{\\bar{m}}{\\zeta^{1 - \\nu_\\theta}} \\psi_\\pm + i\\bar{\\omega} \\chi_\\pm = 0,\\n \\qquad\\n \\chi_\\pm' - \\frac{\\bar{m}}{\\zeta^{1 - \\nu_\\theta}} \\chi_\\pm + i \\bar{\\omega} \\psi_\\pm = 0.\\n \\label{eq:fermion_eom_ir_m}\\n\\end{equation}\\nIt will be convenient to rewrite these equations by defining a rescaled radial coordinate \\(s = \\zeta \\left(\\bar{\\omega}\/\\bar{m} \\right)^{1\/(1-\\nu_\\theta)}\\) (note that this is a different \\(s\\) from section~\\ref{sec:matching_kx_dominant}), in terms of which they read\\n\\begin{equation}\\n \\psi_\\pm'(s) + \\frac{\\mathfrak{m}}{s^{1-\\nu_\\theta}} \\psi_\\pm(s) + i \\mathfrak{m} \\chi_\\pm(s) = 0,\\n \\qquad\\n \\chi_\\pm'(s) - \\frac{\\mathfrak{m}}{s^{1-\\nu_\\theta}} \\chi_\\pm(s) + i \\mathfrak{m} \\psi_\\pm(s) = 0,\\n \\label{eq:fermion_eom_ir_m_s}\\n\\end{equation}\\nwhere\\n\\begin{equation}\\n \\mathfrak{m} = \\left(\\frac{\\bar{m}}{\\bar{\\omega}^{\\nu_\\theta}}\\right)^{1\/(1-\\nu_\\theta)} \\ll 1.\\n\\end{equation}\\n\\nAt large \\(s\\) we neglect the terms in equation~\\eqref{eq:fermion_eom_ir_m_s} proportional to \\(s^{-(1-\\nu_\\theta)}\\), obtaining the ingoing solution\\n\\begin{equation}\\n \\psi_{\\pm,R}(s) = e^{i \\mathfrak{m} s},\\n \\qquad\\n \\chi_{\\pm,R}(s) = - e^{i \\mathfrak{m} s}.\\n\\end{equation}\\nOn the other hand, at small \\(s\\) the terms in equation~\\eqref{eq:fermion_eom_ir_m_s} proportional to \\(s^{-(1-\\nu_\\theta)}\\) are dominant, and we obtain the approximate solution\\n\\begin{equation}\\n \\psi_{\\pm,L}(s) = a_\\pm e^{-\\mathfrak{m} s^{\\nu_\\theta}\/\\nu_\\theta},\\n \\qquad\\n \\chi_{\\pm,L}(s) = b_\\pm e^{\\mathfrak{m} s^{\\nu_\\theta}\/\\nu_\\theta},\\n\\end{equation}\\nwhere \\(a_\\pm\\) and \\(b_\\pm\\) are integration constants.\\n\\nAn argument similar to the one made in section~\\ref{sec:matching_kx_dominant} shows that both \\((\\psi_{\\pm,L},\\chi_{\\pm,L})\\) and \\((\\psi_{\\pm,R}, \\chi_{\\pm,R})\\) are good approximate solutions at \\(s \\sim \\mathcal{O}(1)\\) when \\(\\mathfrak{m} \\ll 1\\). We can thus fix the integration constants by setting \\(\\psi_{\\pm,L}(s_m) = \\psi_{\\pm,R}(s_m)\\) and \\(\\chi_{\\pm,L}(s_m) = \\chi_{\\pm,R}(s_m)\\). This yields \\(a_\\pm = 1\\) and \\(b_\\pm = -1\\) to leading order at small \\(\\mathfrak{m}\\) (and thus at small \\(\\bar{\\omega}\\)). In terms of the radial coordinate \\(\\zeta\\), the approximate matching solution is then\\n\\begin{equation}\\n \\psi_\\pm(\\zeta) = \\begin{cases}\\n e^{i\\bar{\\omega} \\zeta}, & \\zeta > \\zeta_m,\\n \\\\\\n e^{-\\bar{m} \\zeta^{\\nu_\\theta}\/\\nu_\\theta}, & \\zeta < \\zeta_m,\\n \\end{cases}\\n \\qquad\\n \\chi_\\pm(\\zeta) = \\begin{cases}\\n -e^{i\\bar{\\omega} \\zeta}, & \\zeta > \\zeta_m,\\n \\\\\\n -e^{\\bar{m} \\zeta^{\\nu_\\theta}\/\\nu_\\theta}, & \\zeta < \\zeta_m,\\n \\end{cases}\\n \\label{eq:m_dominant_matching_solution}\\n\\end{equation}\\nwhere \\(\\zeta_m = s_m (\\bar{m}\/\\bar{\\omega})^{1\/(1-\\nu_\\theta)}\\). The IR Green's function is \\(\\mathcal{G}_{\\pm} = - i \\chi_\\pm(\\zeta_0)\/\\psi_\\pm(\\zeta_0)\\) for some \\(\\zeta_0\\) satisfying \\(1 \\ll \\zeta_0 \\ll \\zeta_m\\).
Evaluating this using equation~\\eqref{eq:m_dominant_matching_solution}, we find that at leading order at small frequencies\\n\\begin{equation}\\n \\mathcal{G}_{\\pm}(\\omega,k_x,0) \\approx i, \\qquad \\n \\bar{\\omega} \\ll 1.\\n\\end{equation}\\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\n\\nSpoken dialogue systems (SDS) have been deployed in many gadgets such as smartphone assistants and smart speakers in the past ten years.\\nMillions of users now use them daily. In these systems, dialogue is regarded as a means for the machine to complete some task.\\nTypical tasks include command and control of the machine or application software (e.g., music player and alarm) and simple information retrieval (e.g., weather and routing).\\nThe task goals are objective and shared by the system and users.\\nThus, the dialogue should be completed as soon as possible.\\nThere is a big gap between this and human-human dialogue, in which task goals are not definite and the duration is flexible.\\n\\nConversational robots are another application of spoken dialogue systems, but they should be designed on a different principle from the ``smart'' devices mentioned above.\\nWe do not need a robot to ask for weather information or alarms.\\nInstead, conversational robots are expected to simply talk to or listen to humans, as humans do.\\nAn ultimate goal is to realize human-level dialogue, which is engaging and pleasant.\\nApart from the quality of dialogue, there is a significant difference in the style of the dialogue.\\nIn the current human-machine interface, a user is assumed to utter one sentence per turn, which corresponds to a command or a query, to which the system will respond.\\nOn the other hand, in natural human dialogue, a participant utters many sentences per turn, during which the counterpart makes backchannels.\\nThe difference can be seen as analogous to half-duplex versus full-duplex communication channels.\\nIt is weird to nod or backchannel to a machine, but a human-looking android can help realize human-level dialogue.\\n\\nWith this background, we are conducting a project to develop an autonomous android ERICA~\\cite{dylan2016roman,inoue2016sigdial,kawahara2018iwsds}, who behaves and interacts just like a human, including facial look and expression, gaze and gesture, and spoken dialogue.\\nAn ultimate criterion is to pass a Total Turing Test, that is, to convince people that ERICA's interaction is comparable to a human's, or that it is indistinguishable from a remotely operated android.\\nWe hope that through this project we can clarify what is missing or critical in natural interaction.\\nERICA is expected to take over some social roles currently performed by humans, or to be used for conversation-skill training in realistic settings.\\n\\nIn this article, the major challenges and key technologies in this autonomous android are described.\\nIn particular, the systems developed for attentive listening and job interviews are explained together with evaluations.\\n\\n\\section{Android ERICA}\\n\\nFigure~\\ref{fig:erica} shows a snapshot of ERICA.\\nShe does not move around, but she generates elaborate facial expressions and movements of her head, eyes, and body posture while sitting on a chair, so that she can behave and interact exactly like a human.\\nWe have also developed ASR (automatic speech recognition) and TTS (text-to-speech) systems as well as a human tracking system dedicated to ERICA.\\n\\n\\begin{figure}[t]\\n \\begin{center}\\n \\includegraphics[width=90mm]{erica-crop.pdf}\\n
\\end{center}\n \\caption{Android ERICA}\n \\label{fig:erica}\n\\end{figure}\n\n\\subsection{Social interaction tasks}\n\nWe have explored a number of social roles suited to ERICA that can take advantage of her human-like presence and would allow human-like dialogue.\nA majority of the tasks conducted by current SDSs, such as information services, are not adequate; they are better suited to smartphones and smart speakers.\nWhile most conventional robots are engaged in physical tasks such as moving objects or delivering goods, many kinds of communicative robots have recently been designed and introduced in public spaces.\nThey serve as an attendant~\\cite{fujie2009conversation} or a receptionist~\\cite{bohus2016models}.\nThey are effective for attracting people, but the resulting interaction is usually very shallow and short, such as following a predefined script or giving simple guidance.\nIn recent years, chatting systems have also been developed intensively, but their dialogue is likewise shallow and not very engaging. In contrast, interaction with ERICA should leverage physical presence and involve face-to-face communication.\nTherefore, we design ``social interaction'' tasks in which human-like presence matters and long, deep interaction takes place. Here, dialogue itself is the task, and the goal of the dialogue may be mutual understanding or appealing to the counterpart. We assign a realistic social role to ERICA, so that matched users are seriously engaged beyond mere chatting. Specifically, we set up the following four tasks.\nThey are compared in Table~\\ref{table:task_for_erica}.\n\n\\subsubsection{Attentive listening}\n\nIn this task, ERICA mostly listens to senior people talking about topics such as memorable travels and recent activities~\\cite{lala2017}.\nAttentive listening is recognized as effective for maintaining the communication ability of senior people, and many communicative robots have been designed for this task.\nThe role of ERICA is to encourage users to keep talking.\nIn this sense, attentive listening is similar to counseling~\\cite{devault2014simsensei}.\n\n\\subsubsection{Job interview (practice)}\n\nWhile dialogue systems have been investigated for casual interviews~\\cite{kobori2016small}, a job interview is very important both for applicants, typically students, and for the companies hiring them.\nEach side prepares extensively, including rehearsals.\nIn this setting, ERICA plays the role of the interviewer by asking questions.\nShe provides a realistic simulation, and is expected to replace a human interviewer in the future.\n\n\\subsubsection{Speed dating (practice)}\n\nSpeed dating events are widely held to give people an opportunity to find a partner.\nIn this setting, two people meet for the first time and talk freely to introduce themselves, and to see whether the counterpart would be a good match.\nThere was a study~\\cite{ranganath2009s} that analyzed a corpus of speed dating.\nIn our setting, ERICA plays the role of the female participant by talking about topics such as hobbies and favorite foods.\nShe provides a realistic simulation and gives proper feedback according to the dialogue.\n\n\\subsubsection{Lab guide}\n\nA robot is often used to introduce laboratories and museums.\nIt is expected to attract people, and needs to keep them engaged by providing proper interaction.\n\n\\begin{table}[t]\n \\caption{Dialogue tasks designed for ERICA}\n \\label{table:task_for_erica}\n \\begin{center}\n \\begin{tabular}{lcccc}\n \\hline\n & attentive listening & job interview & speed dating & lab guide \\\\\n \\hline\n role of system & 
listen & ask & all & talk \\\\\n dialogue initiative & user & system & mixed & system \\\\\n main speaker & user & user & both & system \\\\\n main listener & system & system & both & user \\\\\n turn-taking & few & explicit & complex & explicit \\\\\n \\hline\n \\end{tabular}\n \\end{center}\n\\end{table}\n\n\\begin{figure}[t]\n \\begin{center}\n \\includegraphics[width=150mm]{system-crop.pdf}\n \\end{center}\n \\caption{System configuration for android ERICA}\n \\label{fig:system}\n\\end{figure}\n\n\\subsection{System configuration} \\label{sec:system}\n\nFigure~\\ref{fig:system} depicts the whole system configuration for android ERICA.\nThe input devices are a 16-channel microphone array and a depth camera (Kinect v2).\nThese sensors are placed within the interaction environment but not physically within or on the android robot.\nThe sensors do not require any human contact or pre-dialogue calibration, so users can start and engage in dialogue naturally without being aware of any sensors (as opposed to using handheld microphones, for example).\nWe use the sensors for sound source localization, voice activity detection, and speech enhancement to obtain the segmented speech of the target user~\\cite{carlos2016iros}.\nThe enhanced speech is then fed into the ASR (automatic speech recognition) module, which uses an acoustic-to-subword end-to-end neural network model.\nSimultaneously, prosodic information such as fundamental frequency (F0) and power is extracted from the enhanced speech~\\cite{carlos2016iros}.\nThe ASR result is then fed into the dialogue manager to generate a system response, which is then played by a TTS (text-to-speech) engine designed for ERICA\\footnote{\\url{https:\/\/voicetext.jp\/news\/product\/151023\/}}.\nThe TTS engine can also produce non-linguistic utterances such as backchannels, fillers, and laughter.\nLip and head movements are controlled based on the generated TTS speech~\\cite{carlos2012iros,sakai2015roman}.\n
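\nA rough sketch of this processing flow is given below; all function names are hypothetical placeholders used for illustration, not the actual module interfaces of ERICA.\n\\begin{verbatim}\n# Minimal sketch of the processing loop described above (hypothetical\n# names). Each injected callable stands in for a real module: sound\n# source localization, voice activity detection, speech enhancement,\n# ASR, prosody extraction, dialogue management, TTS, motion control.\ndef process_turn(mic_frames, localize, detect_voice, enhance, recognize,\n                 extract_prosody, manage_dialogue, synthesize, animate):\n    direction = localize(mic_frames)                # sound source localization\n    segments = detect_voice(mic_frames, direction)  # voice activity detection\n    speech = enhance(segments, direction)           # enhanced user speech\n    text = recognize(speech)                        # end-to-end ASR\n    prosody = extract_prosody(speech)               # F0 and power\n    response = manage_dialogue(text, prosody)       # decide system response\n    audio = synthesize(response)                    # TTS (incl. backchannels)\n    animate(audio)                                  # lip and head movements\n    return audio\n\\end{verbatim}\n\n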
\\section{Technologies for human-level conversations}\n\nFor human-level conversation, linguistic processing and understanding are necessary, but non-linguistic behaviors are similarly important.\nWe briefly explain several of these components for ERICA.\n\n\\subsection{Backchannel generation}\n\nBackchannels are short utterances produced by listeners, such as ``{\\it yeah}'' in English and ``{\\it un}'' in Japanese.\nListeners utter backchannels toward speakers to stimulate further talk and also to express the listener's attention and interest in the conversation.\nIn order to generate backchannels, systems need to predict their timing, a problem which has been tackled in many studies using prosodic features~\\cite{ward2000prosodic,truong2010rule}.\nWhile existing backchannel generation systems predict backchannels only after the end of user utterances segmented into IPUs (inter-pausal units), we implement frame-wise continuous prediction of backchannel timing with a logistic regression model that predicts whether the system should utter a backchannel within the next 500 milliseconds~\\cite{lala2017}.\nWe also proposed a prediction model of backchannel form (type) based on both prosodic and linguistic features~\\cite{kawahara2016prediction}.\n
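\nA minimal, self-contained version of such a frame-wise predictor is sketched below; the synthetic data, feature set, and decision threshold are illustrative stand-ins for the real prosodic features, corpus labels, and tuning, not the actual ERICA configuration.\n\\begin{verbatim}\n# Sketch: frame-wise backchannel timing prediction (illustrative only).\n# Each row of X holds prosodic features for one frame (e.g. F0 and\n# power statistics over a short context window); y says whether a\n# backchannel occurred within the following 500 ms in the corpus.\nimport numpy as np\nfrom sklearn.linear_model import LogisticRegression\n\nrng = np.random.default_rng(0)\nn_frames, n_features = 1000, 6\nX = rng.normal(size=(n_frames, n_features))  # synthetic prosodic features\ny = rng.integers(0, 2, size=n_frames)        # synthetic binary labels\n\nmodel = LogisticRegression().fit(X, y)\n\n# At run time, evaluate the newest frame periodically and trigger a\n# backchannel when the predicted probability exceeds a tuned threshold.\nprob = model.predict_proba(X[-1:])[0, 1]\nif prob > 0.5:\n    print('emit backchannel')\n\\end{verbatim}\n\n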
\\subsection{Flexible turn-taking using TRP}\n\nFlexible and robust turn-taking is a required function for natural communication between the system and a user.\nEnd-of-turn prediction has been widely studied using linguistic and prosodic features extracted from the user's utterance~\\cite{skantze2017sigdial,masumura2017interspeech}.\nExisting models were trained with actual turn-taking behaviors in dialogue corpora, but actual turn-taking decisions are sometimes arbitrary, so model training can become unstable.\nTherefore, we take into account the concept of TRPs (transition-relevance places)~\\cite{sacks1974TRP}: points where the current turn could be completed, independent of whether the speaker actually ends the turn there.\nWe manually annotated these TRP instances in our human-robot dialogue corpus~\\cite{hara2019interspeech}.\nWe then proposed a two-step turn-taking prediction model where the system first detects a TRP and, if one is detected, predicts whether the system should actually take the turn.\nBy decomposing complex turn-taking prediction into TRP detection and subsequent turn prediction, the accuracy of turn-taking prediction was improved.\n\nFrom the perspective of live spoken dialogue systems, another required function of the turn-taking module is to predict how long the system should wait before it actually takes the turn.\nA simple approach is to set a fixed silence threshold, but this may cause false cut-ins (interruptions) or unnaturally long silences.\nOur system integrates the above-mentioned turn-taking prediction model using TRP labels with an FSTTM (Finite-State Turn-Taking Machine)~\\cite{Raux2009,Lala2018} to determine how long the system waits during a silence.\nIt can take the turn quickly when the probability of end-of-turn is high, while it can also wait longer if the user produces fillers or hesitations.\n\n\\subsection{Engagement recognition}\n\nEngagement is defined as how much a user is interested in the current dialogue, and keeping users engaged is an important factor that leads to ``long and deep'' dialogue~\\cite{oertel2020engagement}.\nTo this end, engagement recognition has been widely studied by investigating multi-modal user behaviors~\\cite{dhall2019emotiw}.\nWe proposed an engagement recognition model based on listener behaviors such as backchannels, laughs, head nods, and eye contact~\\cite{inoue2018engagement}.\nWe automatically detected these listener behaviors to implement a real-time engagement recognition system.\nThis system was applied to the laboratory guide dialogue task with ERICA to control the android robot's behaviors according to the level of user engagement~\\cite{inoue2019iwsds}.\n\n\\begin{figure}[t]\n \\begin{center}\n \\includegraphics[width=160mm]{response-list-ver2-crop.pdf}\n \\end{center}\n \\caption{List of listener responses in our attentive listening system (Examples of generated responses are shown on the right side of this figure. Underlining marks the focus word.)}\n \\label{fig:response_list}\n\\end{figure}\n\n\\begin{figure}[t]\n \\begin{center}\n \\includegraphics[width=80mm]{recording-crop.pdf}\n \\end{center}\n \\caption{Snapshot of dialogue experiment for our attentive listening system with ERICA}\n \\label{fig:experiment}\n\\end{figure}\n\n\\section{Attentive Listening System}\n\nWe implemented an attentive listening system with ERICA~\\cite{lala2017,inoue2020sigdial}.\nThe aforementioned backchannel generation model is a core component of this system.\nBesides backchannels, the system generates a variety of listener responses: partial repeats, elaborating questions, assessments, generic sentimental responses, and generic responses, as depicted in Figure~\\ref{fig:response_list}.\nPartial repeats and elaborating questions are generated based on the {\\it focus word} of user utterances; a minimal sketch of this kind of response selection is shown at the end of this section.\nAssessments and generic sentimental responses are based on the sentiment (positive or negative) of user utterances.\n\nWe have conducted dialogue experiments with a total of 40 senior people and confirmed that they could engage with ERICA for 5-7 minutes without a conversation breakdown.\nFigure~\\ref{fig:experiment} shows a snapshot of this experiment.\nWe also manually evaluated each system response and found that about 60\\% of the system responses were acknowledged as appropriate, which means that the remaining 40\\% can still be improved.\nThe novelty of our experiment is the comparison of our system with a human listener in a WOZ (Wizard-of-Oz) setting, where a hidden human operator tele-operated ERICA~\\cite{inoue2020sigdial}.\nIn the subjective evaluation, our system achieved scores comparable to the WOZ setting in basic skills of attentive listening such as {\\it encouragement to talk}, {\\it focused on the talk}, and {\\it actively listening}.\nOn the other hand, there is still a gap between our system and human listeners for more sophisticated skills such as {\\it dialogue understanding}, {\\it showing interest}, and {\\it empathy towards the user}.\nFrom this experiment we could identify the types of skills that differ between our system and human listeners.\nIn future work, we will improve the response generation modules to increase the ratio of appropriate responses and close the gap with human listeners.\n
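\nThe following toy example illustrates the response-selection logic mentioned above; the phrasing templates and the {\\tt choose\\_response} interface are invented for illustration and do not reproduce the actual trained models for focus-word detection and sentiment analysis.\n\\begin{verbatim}\n# Sketch: selection among listener response types\n# (illustrative only; the real system uses trained models).\ndef choose_response(focus_word, sentiment):\n    # Prefer an elaborating question when a focus word was detected.\n    if focus_word is not None:\n        return 'Could you tell me more about ' + focus_word + '?'\n    # Otherwise fall back to sentiment-based assessments.\n    if sentiment == 'positive':\n        return 'That sounds wonderful.'\n    if sentiment == 'negative':\n        return 'That sounds hard.'\n    # Generic response when neither cue is available.\n    return 'I see.'\n\nprint(choose_response('Kyoto', 'positive'))\n\\end{verbatim}\n\n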
\\section{Job Interview System}\n\nWe also implemented a job interview system where ERICA plays the role of an interviewer~\\cite{inoue2020icmi}.\nExisting job interview systems give the same pre-defined questions to all interviewees, so the interviews tend to be tedious and far from real human-human interviews.\nTo make the interviews more realistic, our system generates follow-up questions dynamically, based on the interviewee's initial responses, using two different approaches.\nThe first approach is based on assessing the quality of the interviewee's utterance using a checklist of items that ``{\\it should be mentioned}'' in the job interview.\nFor example, when the interviewee did not mention what he or she wants to contribute to the company, the system generates a follow-up question such as ``{\\it Which part of our company do you want to contribute to?}''. \nThe second approach is based on keyword extraction from the interviewee's response.\nFor example, when the interviewee says ``{\\it I have experience in machine learning.}'', the system extracts the keyword ``{\\it machine learning}'' and generates a follow-up question such as ``{\\it Could you explain more about machine learning?}''\n\nWe conducted a dialogue experiment with university students to compare our system with a baseline system that asked only pre-defined basic questions.\nWe found that our system was significantly better than the baseline system in the quality of the questions.\nIt was also found that the perceived presence of the android interviewer was enhanced by the follow-up questions~\\cite{inoue2020icmi}.\nAnother research question in this job interview scenario is how much the interviewer's appearance affects the above results, so we also conducted the same experiment with a virtual agent, as shown in Figure~\\ref{fig:mmd}.\nAs a result, we confirmed a similar effect of the follow-up questions even with the virtual agent interviewer, but the perceived presence of the virtual agent was not enhanced.\nThe presence of the interviewer is an important factor in creating an interview system with a realistically tense atmosphere.\nTherefore, it is more effective to implement the follow-up questions with android ERICA as the job interviewer than with a virtual agent.\n\n\\begin{figure}[t]\n \\begin{center}\n \\includegraphics[width=120mm]{mmd-crop.pdf}\n \\end{center}\n \\caption{Difference in appearance between ERICA and the virtual agent in our job interview dialogue experiment}\n \\label{fig:mmd}\n\\end{figure}\n\n\\section{Conclusions}\n\nBecause of the pervasiveness of social networking services (SNS) as communication channels, and more recently because of COVID-19, the importance of face-to-face communication has been both questioned and emphasized.\nThe android ERICA provides a testbed for investigating which factors are important for high-quality communication.\nThe android is expected to take over some social roles during and after the COVID-19 era.\n\n\\section*{Acknowledgments}\nThis work was supported by JST ERATO Grant number JPMJER1401 and JSPS KAKENHI Grant number JP19H05691.\n\n\\bibliographystyle{unsrt} \n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \n\\label{sec:intro}\nIn recent years, higher-order perturbative calculations of the quark and gluon form factors in massless Quantum Chromodynamics (QCD) have generated a great deal of interest. \nThis is primarily due to their relevance to two of the most important Large Hadron Collider processes, the production of a Drell-Yan lepton pair \\cite{Drell:1970wh}\nand the production of a Higgs boson via gluon fusion \\cite{Georgi:1977gs}. 
\nAt the most basic level, a calculation of the quark and gluon form factors at some fixed order provides, respectively, the purely virtual QCD corrections to Drell-Yan lepton production \nand gluon-fusion Higgs boson production in the infinite top-quark-mass limit~\\cite{Wilczek:1977zn,Shifman:1978zn,Ellis:1979jy,Inami:1982xt}.\nFurthermore, the cusp and collinear anomalous dimensions can be extracted from $\\epsilon^{-2}$ and $\\epsilon^{-1}$ poles of the bare form factors.\nThey determine the structure of the quark and gluon jet functions, objects which play an important role in the theory of the infrared divergences of multi-leg, massless scattering amplitudes\nand in the theory of soft-gluon resummation (see {\\it e.g.} reference \\cite{Becher:2014oda} for a recent review).\n\nEven if all real radiation is very soft relative to the scale of the hard scattering, the purely virtual corrections to a process are only one part of the story.\nUsing the eikonal approximation to treat all real radiation allows one to give an approximate cross section for a process at higher orders in QCD perturbation theory.\nIn reference \\cite{Harlander:2002wh}, this so-called soft-virtual approximation was shown\nto capture a sizable fraction of the total gluon-fusion Higgs boson production cross section at next-to-next-to-leading order.\nAlthough three-loop calculations of both form factors through the finite terms \\cite{Baikov:2009bg,Lee:2010cga,Gehrmann:2010ue} were key ingredients for recent\nthird-order, soft-virtual calculations of the cross sections for Drell-Yan lepton production and gluon-fusion Higgs boson production, much more work is required to obtain the final results. \nAs usual, it is also necessary to consider the radiation of additional partons and then combine all relevant real radiative\ncorrections with the purely virtual contributions. Only then are infrared-finite results obtained which can be convoluted with the appropriate parton distribution functions to produce meaningful numbers.\nIn a series of papers \\cite{Anastasiou:2013srw,Li:2013lsa,Duhr:2013msa,Anastasiou:2014vaa,Li:2014bfa,Ahmed:2014cla,Li:2014afw}, results were obtained which,\nin principle, could have determined both the Drell-Yan lepton production cross section and the gluon-fusion Higgs boson production cross section to levels of precision sufficient for the purposes of LHC physics.\nAll of this work culminated in a new milestone, namely two independent calculations of the parton-level cross sections for both processes in the soft-virtual\nnext-to-next-to-next-to-leading order (N$^3$LO) approximation \\cite{Anastasiou:2014vaa,Ahmed:2014cla,Li:2014afw}. \n\nA systematic inclusion of power-corrections in the soft-virtual Higgs-production\nanalysis~\\cite{Anastasiou:2014lda} shows significant contributions beyond the\nleading term. 
While the small numerical impact at high expansion order suggests that this approximate N$^3$LO prediction is sufficiently precise for the purposes of Large Hadron Collider phenomenology,\nit is still interesting to carry out the exact N$^3$LO calculation to put the analysis on more rigorous grounds.\nIt has long been known that the accuracy of predictions for the above-mentioned production processes can be improved significantly if appropriate towers of logarithms \ncoming from soft radiation near the production threshold are resummed~\\cite{Kramer:1996iq}.\nTo improve upon the current state-of-the-art, one could carry out a soft parton resummation of the next-to-next-to-next-to-leading Sudakov logarithms (N$^3$LL order) \nand then match this onto the exact fixed order N$^3$LO prediction once the result becomes available.\nAt this stage, it would also be of interest \nto include a resummation of additional terms in the virtual matrix elements which appear due to the fact that $s$-channel processes have a time-like momentum transfer \\cite{Ahrens:2008qu,Ahrens:2008nc}.\n\nIn fact, some partial progress towards this more ambitious goal has already been realized.\nGiven the impressive number of recently-obtained \nresults \\cite{Hoschele:2012xc,Anastasiou:2013mca,Kilgore:2013gba,Hoschele:2014qsa,Duhr:2014nda,Dulat:2014mda,Anastasiou:2015ema,Anastasiou:2015yha,Bonvini:2014joa,Catani:2014uta,deFlorian:2014vta,Anzai:2015wma}, \nit seems clear that the exact parton-level N$^3$LO results lie within reach and that,\nfurthermore, matching them with N$^3$LL Sudakov resummations will not be an issue once all of the required ingredients\nbecome available. Actually, almost all of the quantities required for a complete N$^3$LL resummation have already been available in the literature for quite some time, both for Drell-Yan and for Higgs. However, in order to properly carry out a \nN$^3$LL resummation, one needs the four-loop cusp anomalous dimensions. Although the three-loop cusp anomalous dimensions were calculated long ago \\cite{Moch:2004pa},\nsurprisingly little has been reported at one order higher; several interesting techniques have been \ndeveloped \\cite{Panzer:2013cha,Panzer:2014gra,Panzer:2014caa,Panzer:2015ida,vonManteuffel:2014qoa,vonManteuffel:2014ixa,Ablinger:2012ph,Ablinger:2015tua,Ruijl:2014hha,Lee:2012te,Henn:2013nsa}\nand a number of relevant master integrals (mainly propagators\\footnote{One four-loop three-point integral with off-shell external momenta was presented in \\cite[example~3.6]{Panzer:2014gra}.}) computed, \nsee \\cite{Baikov:2010hf,Smirnov:2010hd} and \\cite{Lee:2011jf,Lee:2011jt}\\footnote{Recently, it was reported that these computations can now be reproduced using the public software package\n{\\tt SummerTime} \\cite{Lee:2015eva}.}, but it is arguably the case that more progress has been made on the analogous problem\nin maximally supersymmetric gauge theory \\cite{Boels:2012ew,Boels:2015yna}.\n\nBesides the phenomenological motivation given above, a calculation of the four-loop cusp anomalous dimensions also has the potential to answer a long-standing question about the infrared structure of massless QCD. \nThrough three-loop order, it has been observed that the gluon cusp anomalous dimension can be derived from the quark cusp anomalous dimension by simply rescaling it. \nThe calculation of both the quark and the gluon cusp anomalous dimensions at four loops is therefore essential to see if this pattern continues to hold. 
\nThese four-loop calculations will provide the first non-trivial test of the putative Casimir scaling property of QCD:\nthe proposal that, to all loop orders in massless QCD, the gluon cusp anomalous dimension is the same as the quark cusp anomalous dimension up to an overall factor of $C_A\/C_F$, where $C_A = N_c$ and $C_F = (N_c^2-1)\/(2 N_c)$\nare respectively the quadratic Casimir invariants of the adjoint and the fundamental representations of the QCD gauge group.\\footnote{We assume a $SU(N_c)$ gauge group throughout this article.} \nA breakdown of Casimir scaling at four loop order would have profound implications. For example, it would mean that the four-loop infrared divergences in massless gauge theory scattering amplitudes at subleading color\ncannot straightforwardly be described in an abstract way which treats quarks and gluons on an equal footing \\cite{Becher:2009qa,Gardi:2009qi}.\n\nIn this paper we demonstrate the advantages of our approach \\cite{vonManteuffel:2014qoa} in a complete rederivation of the well-studied one-, two-, and three-loop form factors in perturbative QCD. \nWe found it particularly elegant to use a suitable basis of finite master integrals and made three main observations:\n\\begin{itemize}\n\t\\item\nThe $\\epsilon$ pole structure of the unexpanded form factors becomes absolutely explicit and has the striking feature that the most complicated integrals do not contribute to the cusp anomalous dimensions.\n\n\t\\item\nFinite integrals can be computed exactly and automatically in terms of multiple polylogarithms. In fact, our work constitutes the first complete analytical check of the weight eight, \n$\\mathcal{O}\\left(\\epsilon^2\\right)$ three-loop results published in \\cite{Gehrmann:2010tu}.\n\n\t\\item\nOn the numerical side, we obtained more than an order of magnitude improvement in both run time and precision for a fixed number of Monte Carlo integrand evaluations by employing a basis of finite integrals.\nRemarkably, we have since observed that, quite generally, one can substantially increase the reach and reliability of publicly available sector decomposition programs \\cite{Binoth:2000ps,Bogner:2007cr,Smirnov:2013eza,Borowka:2015mxa} \nif one first rotates to a basis of finite integrals. We will explore this in detail in a separate publication.\n\\end{itemize}\nTowards the end of this work, we take a first look at the computation of the four-loop cusp anomalous dimensions and explain how our method will allow for a significant reduction of the problem. \nWe illustrate our point at four loops by computing a non-trivial integral in an irreducible top-level sector. \nDue to the fact that the leading term in its $\\epsilon$ expansion consists solely of transcendental numbers of weight seven, it is not expected to contribute to the cusp anomalous dimensions.\n\nThis article is organized as follows. In Section~\\ref{sec:notation}, we define our $L$-loop bare form factors in massless QCD, introduce notation that we use,\nand state some useful facts about the structure of the form factors at one, two, and three loops. \nIn Section~\\ref{sec:method}, we describe our method of computation, focusing on its non-standard features.\nIn Sections \\ref{sec:1Lffs}, \\ref{sec:2Lffs}, and \\ref{sec:3Lffs}, we present our results for the one-, two- and three-loop bare form factors written as linear combinations of finite master integrals. 
\nWhen written in this way, the results have the remarkable feature that many of the most complicated master integrals do not contribute to the $\\epsilon^{-2}$ pole terms. \nWe explore this further in Section~\\ref{sec:cusps} and give an example of this phenomenon at the four-loop level,\nbased on the exact computation of a four-loop form factor with {\\texttt{HyperInt}}, a package for the evaluation of convergent Feynman integrals \\cite{Panzer:2014caa}.\nIn Appendix~\\ref{sec:finite}, we extend theoretical arguments given in reference \\cite{vonManteuffel:2014qoa}, showing that,\nfor scattering and decay processes which admit a Euclidean region respecting the kinematical constraints, one can always find a basis of finite loop integrals.\n\nAs a crucial supplement, ancillary files are available on arXiv.org which contain the quark and gluon form factors in terms of finite master integrals at one, two, and three loops,\nas well as all of our finite master integrals and final results $\\epsilon$-expanded to weight eight.\nWe also provide our highly-automated and parallelized setup for the computation of the $\\epsilon$-expansions using {\\texttt{HyperInt}} as described in Appendix~\\ref{sec:hyperint}.\nGiven sufficient computer resources, all of the Feynman integrals discussed in this paper may be straightforwardly reproduced by following the instructions given in the ancillary files.\n\n\\section{Notation and Conventions}\n\\label{sec:notation}\n\\begin{figure}\n\\centering\n\\includegraphics[scale=0.35]{oneloopffFeynmandiags}\n\\caption{The one-loop Feynman diagrams for the quark (upper panel) and gluon (lower panel) form factors in massless QCD. \nAt each order in the bare strong coupling constant, the effective coupling of the Higgs boson to gluons can be obtained by matching full QCD with a massive top quark onto an effective field theory in which the massive top quark is integrated out.}\n\\label{fig:samplediagrams}\n\\end{figure}\n\nIn this section, we give our definitions of the unrenormalized, massless quark and gluon form factors, $\\mathcal{F}_{\\rm bare}^q\\left(\\alpha_s^{\\rm bare}, (p_1 + p_2)^2, \\mu_\\epsilon^2, \\epsilon\\right)$ \nand $\\mathcal{F}_{\\rm bare}^g\\left(\\alpha_s^{\\rm bare}, (p_1 + p_2)^2, \\mu_\\epsilon^2, \\epsilon\\right)$. \nWe also establish some notation and state some facts about the general structure of our bare results at one, two, and three loops.\nThe $L$-loop contribution to the quark form factor is defined by the interference of the tree-level amplitude for $\\gamma^*(p_1+p_2) \\to q(p_1) \\bar{q}(p_2)$ with the $L$-loop corrections to this amplitude in massless QCD, \nnormalized to the tree-level amplitude squared. For all interferences, the sum over spin and color degrees of freedom is implied.\nSimilarly, the $L$-loop contribution to the gluon form factor is defined by the interference of the tree-level diagram for $h(p_1+p_2) \\to g(p_1) g(p_2)$ in the infinite top quark mass limit \nwith the $L$-loop corrections to this process in massless QCD, normalized to the tree-level amplitude squared (see Figure \\ref{fig:samplediagrams} for a visualization at the one-loop level). 
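In formulae, and up to the overall normalization and real-part conventions fixed by the expansions in equations~\\eqref{eq:expbareq} and \\eqref{eq:expbareg} below, these definitions read schematically\n\\begin{equation}\n \\mathcal{F}_L \\sim \\frac{\\sum_{\\rm spins,\\, colors} \\big(\\mathcal{M}^{(0)}\\big)^{\\dagger} \\, \\mathcal{M}^{(L)}}{\\sum_{\\rm spins,\\, colors} \\big|\\mathcal{M}^{(0)}\\big|^2}\\, ,\n\\end{equation}\nwhere $\\mathcal{M}^{(L)}$ denotes the $L$-loop amplitude for the decay process at hand.\n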
\n\nOur bare form factors have a perturbative expansion of the form\n\\begin{align}\n\\label{eq:expbareq}\n\\mathcal{F}_{\\rm bare}^q\\left(\\alpha_s^{\\rm bare}, (p_1 + p_2)^2, \\mu_\\epsilon^2, \\epsilon\\right) &= 1 + \\sum_{L = 1}^\\infty \\left(\\frac{\\alpha_s^{\\rm bare}}{4\\pi}\\right)^L \n\\left(\\frac{4\\pi \\mu_\\epsilon^2}{-(p_1+p_2)^2}\\right)^{L \\epsilon} \\frac{\\mathcal{F}_L^q(\\epsilon)}{\\Gamma^L(1-\\epsilon)}\n\\\\\n\\label{eq:expbareg}\n\\mathcal{F}_{\\rm bare}^g\\left(\\alpha_s^{\\rm bare}, (p_1 + p_2)^2, \\mu_\\epsilon^2, \\epsilon\\right) &= 1 + \\sum_{L = 1}^\\infty \\left(\\frac{\\alpha_s^{\\rm bare}}{4\\pi}\\right)^L \n\\left(\\frac{4\\pi \\mu_\\epsilon^2}{-(p_1+p_2)^2}\\right)^{L \\epsilon} \\frac{\\mathcal{F}_L^g(\\epsilon)}{\\Gamma^L(1-\\epsilon)},\n\\end{align}\nwhere $\\alpha_s^{\\rm bare}$ is the bare strong coupling constant, $(p_1 + p_2)^2$ is the momentum transfer squared, $\\mu_\\epsilon$ is the 't Hooft scale, and\n$\\epsilon$ is the parameter of dimensional regularization \\cite{'tHooft:1972fi}.\nThe dependence on the scale $(p_1+p_2)^2$ is trivial and may be reconstructed at any time by power counting. We therefore set\n\\begin{equation}\n(p_1+p_2)^2 = -1\n\\end{equation}\nto simplify our notation.\n\nThe master integrals for the form factors have traditionally been calculated in $d=4-2\\epsilon$ dimensions.\nIn the present paper, however, we make extensive use of dimensionally-shifted basis integrals.\nWe employ Minkowskian propagators and choose an absolute normalization of\n\\begin{equation}\n\\left(\\frac{\\Gamma(d\/2 - 1)}{i \\pi^{d\/2}}\\right)^{L}\n\\end{equation}\nfor our $L$-loop Feynman integrals defined in $d$ dimensions in order to prevent the appearance of spurious constants in our results.\nWe also consider propagators of higher multiplicity\nin our scalar Feynman integrals.\nThese we visualize by placing dots on the edges of the associated graphical representations,\nwhere a propagator power $\\nu + 1$ is represented by an edge with $\\nu$ dots on it.\nAs a concrete example, let us consider the one-loop Feynman integral which we actually use in Section \\ref{sec:1Lffs} below. It is the one-loop bubble integral in $d=6 - 2\\epsilon$ with both propagators squared:\n\\begin{align}\\label{eq:miff1a3}\n\t\\dimgraph{ff1a_2_3}{6} \n\t&= \n\t\\left. \\frac{\\Gamma(2-\\epsilon)}{i \\pi^{3-\\epsilon}}\\int\\!\\!\\frac{\\mathrm{d}^{6-2\\epsilon}k_1}{((k_1+p_1)^2)^2((k_1-p_2)^2)^2} ~\\right|_{(p_1+p_2)^2 = -1}\n\t\\nonumber\\\\\n\t&= \\frac{\\Gamma(2-\\epsilon)\\Gamma^2(1-\\epsilon)\\Gamma(1+\\epsilon)}{\\Gamma(2-2\\epsilon)}\n\t\\nonumber\\\\\n\t&= 1 + \\epsilon + 2 \\epsilon^2 + \\mathcal{O}\\left(\\epsilon^3\\right).\n\\end{align}\n\nIn Tables \\ref{tab:ff1fams}, \\ref{tab:ff2fams}, and \\ref{tab:ff3fams} below, we present the integral families that we used to uniquely parametrize all of our Feynman integrals. \nIn these tables, the loop momenta are denoted by $k_i$, $i=1,2,3$, and each momentum $q$ in the lists represents a scalar propagator $1\/q^2$.\nAt one-loop, the form factors are extremely simple and the three non-zero diagrams of Figure \\ref{fig:samplediagrams} can be covered with just one integral family, called $\\mathrm{A_1}$ in this work, see Table \\ref{tab:ff1fams}.\nThe color structure at one-loop is also extremely simple; $\\mathcal{F}_1^q(\\epsilon)$\nis directly proportional to $C_F$ and $\\mathcal{F}_1^g(\\epsilon)$ is directly proportional to $C_A$. 
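\nClosed forms like equation~\\eqref{eq:miff1a3} are straightforward to expand and check with a computer algebra system. For instance, the following short {\\tt sympy} script (illustrative, and not part of our production setup) reproduces the expansion quoted in equation~\\eqref{eq:miff1a3}:\n\\begin{verbatim}\nfrom sympy import symbols, gamma\n\nep = symbols('epsilon')\nexpr = gamma(2 - ep)*gamma(1 - ep)**2*gamma(1 + ep)\/gamma(2 - 2*ep)\n\n# should print: 1 + epsilon + 2*epsilon**2 + O(epsilon**3)\nprint(expr.series(ep, 0, 3))\n\\end{verbatim}\n\n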
\nThe two-loop form factors, on the other hand, are more complicated and have an interesting history.\nIt took three attempts to correctly calculate the two-loop quark form factor through the finite terms \\cite{Gonsalves:1983nq,Kramer:1986sg,Matsuura:1987wt}, \nand the analogous calculation for the two-loop gluon form factor was first published more than a decade later in reference \\cite{Harlander:2000mg}. \nIn reference \\cite{Gehrmann:2005pd}, the two-loop form factors were finally computed to all orders in $\\epsilon$ in terms of Gamma functions and generalized hypergeometric functions. One must use two integral families\nto cover the two-loop Feynman diagrams, chosen as $\\mathrm{A_2}$ and $\\mathrm{B_2}$ in this paper, see Table \\ref{tab:ff2fams}. Both $\\mathcal{F}_2^q(\\epsilon)$ and $\\mathcal{F}_2^g(\\epsilon)$ have three color structures,\nsome of which depend on the number of massless quarks, $N_f$. The color structures $C_F^2$, $C_F C_A$, $C_F N_f$ appear in the expression for $\\mathcal{F}_2^q(\\epsilon)$\nand the color structures $C_A^2$, $C_A N_f$, and $C_F N_f$ appear in the expression for $\\mathcal{F}_2^g(\\epsilon)$.\n\nNeedless to say, the calculation of the three-loop form factors is harder still. To obtain even approximate results at $\\mathcal{O}\\left(\\epsilon^0\\right)$ \ntook many years of work by a large number of researchers \\cite{Moch:2005tm,Gehrmann:2006wg,Heinrich:2007at,Heinrich:2009be,Baikov:2009bg}. Analytical\nresults for the three-loop form factors through the finite terms were obtained a short time later \\cite{Lee:2010cga,Gehrmann:2010ue}. \nThe three-loop form factor master integrals were first computed numerically to $\\mathcal{O}\\left(\\epsilon^2\\right)$ in reference \\cite{Lee:2010ik} using dimensional recurrence relations \\cite{Lee:2009dh} but, \nwith the help of the celebrated PSLQ algorithm \\cite{PSLQ}, the authors were able to recover the analytical solutions.\\footnote{At the time, it was conjectured that zeta and multiple zeta values of, at most, \nweight eight would appear in the higher-order results required to expand the three-loop form factors through to $\\mathcal{O}\\left(\\epsilon^2\\right)$.} \nThe explicit higher-order results for the three-loop masters are, regrettably, scattered over three different articles \\cite{Gehrmann:2005pd,Lee:2010ug,Lee:2010ik}. \nNevertheless, it should be stressed that reference \\cite{Lee:2010ik} provided the bulk of the unknown higher-order terms in the $\\epsilon$ expansions of the master integrals\nrequired for a calculation of the three-loop form factors up to and including contributions of $\\mathcal{O}\\left(\\epsilon^2\\right)$ \\cite{Gehrmann:2010tu}. \nSome time later, a subset of the higher-order results for the three-loop masters were confirmed by first solving an auxiliary system of differential equations for analogous integrals with two off-shell legs \nand then performing an asymptotic analysis on the results obtained \\cite{Henn:2013nsa}. Three integral families are needed to cover all three-loop Feynman diagrams which contribute. \nThe integral families we use coincide with those of reference \\cite{Gehrmann:2010ue} and are labeled $\\mathrm{A_3}$, $\\mathrm{B_3}$, $\\mathrm{C_3}$ in Table \\ref{tab:ff3fams}. 
\nThe color structures $C_F^3$, $C_F^2 C_A$, $C_F C_A^2$, $C_F^2 N_f$, $C_F C_A N_f$, $C_F N_f^2$, and $(d_{abc}d_{abc}\/N_c) N_{q\\gamma}$ appear in the expression for $\\mathcal{F}_3^q(\\epsilon)$ \nand the color structures $C_A^3$, $C_A^2 N_f$, $C_A C_F N_f$, $C_F^2 N_f$, $C_A N_f^2$, and $C_F N_f^2$ appear \nin the expression for $\\mathcal{F}_3^g(\\epsilon)$. Here, $N_{q\\gamma} = (1\/e_q){\\sum_{q^\\prime} e_{q^\\prime}}$ is the charge-weighted sum of the $N_f$ quark flavors normalized to the charge of the primary quark $q$\nand $d_{abc}d_{abc} = (N_c^2 - 1)(N_c^2 - 4)\/N_c$ for $SU(N_c)$.\n\n\\begin{table}\n\\centering\n\\begin{tabular}[h!]{l}\nFamily $\\mathrm{A_1}$\\\\[1mm]\n\\hlinewd{2pt}\n\\rule{0pt}{2ex}~~~$k_1+p_1$\\\\\n\\rule{0pt}{2ex}~~~$k_1-p_2$\\\\\n\\rule{0pt}{2ex}~~~$k_1$\n\\end{tabular}\n\\caption{One integral family which covers all one-loop form factor diagrams.}\n\\label{tab:ff1fams}\n\\end{table}\n\\begin{table}\n\\centering\n\\begin{tabular}[h!]{ll}\nFamily $\\mathrm{A_2}$\\hspace{5mm} & Family $\\mathrm{B_2}$\\hspace{5mm}\\\\[1mm]\n\\hlinewd{2pt}\n\\rule{0pt}{2ex}~$k_1+p_1$ & $k_1+p_1$\\\\\n\\rule{0pt}{2ex}~$k_2+p_1$ & $k_2+p_1$\\\\\n\\rule{0pt}{2ex}~$k_1-p_2$ & $k_1-p_2$\\\\\n\\rule{0pt}{2ex}~$k_2-p_2$ & $k_1-k_2-p_2$\\\\\n\\rule{0pt}{2ex}~$k_1-k_2$ & $k_1-k_2$\\\\\n\\rule{0pt}{2ex}~$k_1$ & $k_2$\\\\\n\\rule{0pt}{2ex}~$k_2$ & $k_1$\n\\end{tabular}\n\\caption{Two integral families which cover all two-loop form factor diagrams.}\n\\label{tab:ff2fams}\n\\end{table}\n\\begin{table}\n\\centering\n\\begin{tabular}[h!]{lll}\nFamily $\\mathrm{A_3}$\\hspace{5mm} & Family $\\mathrm{B_3}$\\hspace{5mm} & Family $\\mathrm{C_3}$\\hspace{5mm}\\\\[1mm]\n\\hlinewd{2pt}\n\\rule{0pt}{2ex}~$k_1$ & $k_1$ & $k_1$\\\\\n\\rule{0pt}{2ex}~$k_2$ & $k_2$ & $k_2$\\\\\n\\rule{0pt}{2ex}~$k_3$ & $k_3$ & $k_3$\\\\\n\\rule{0pt}{2ex}~$k_1-k_2$ & $k_1-k_2$ & $k_1-k_2$\\\\\n\\rule{0pt}{2ex}~$k_1-k_3$ & $k_1-k_3$ & $k_1-k_3$\\\\\n\\rule{0pt}{2ex}~$k_2-k_3$ & $k_1-k_2-k_3$ & $k_2-k_3$\\\\\n\\rule{0pt}{2ex}~$k_1-p_1$ & $k_1-p_1$ & $k_1-k_3-p_2$\\\\\n\\rule{0pt}{2ex}~$k_1-p_1-p_2$ & $k_1-p_1-p_2$ & $k_1-p_1-p_2$\\\\\n\\rule{0pt}{2ex}~$k_2-p_1$ & $k_2-p_1$ & $k_2-p_1$\\\\\n\\rule{0pt}{2ex}~$k_2-p_1-p_2$ & $k_2-p_1-p_2$ & $k_1-k_2-p_2$\\\\\n\\rule{0pt}{2ex}~$k_3-p_1$ & $k_3-p_1$ & $k_3-p_1$\\\\\n\\rule{0pt}{2ex}~$k_3-p_1-p_2$ & $k_3-p_1-p_2$ & $k_3-p_1-p_2$\n\\end{tabular}\n\\caption{Three integral families which cover all three-loop form factor diagrams.}\n\\label{tab:ff3fams}\n\\end{table}\n\n\\section{Computational Method}\n\\label{sec:method}\n\\begin{figure}\n\\centering\n\\begin{align*}\n&\\figgraph{.35}{ff1a_2_3}{6}\n\\\\\n&\\figgraph{.35}{ff2a_4_15}{6} \\quad \\figgraph{.35}{ff2a_3_22}{8} \\quad \\figgraph{.35}{ff2a_4_58}{10} \\quad \\figgraph{.35}{ff2b_6_63}{8}\n\\\\\n&\\figgraph{.35}{ff3a_4_172}{10} \\quad \\figgraph{.35}{ff3a_5_662}{8} \\quad \\figgraph{.35}{ff3a_5_158}{6} \\quad \\figgraph{.35}{ff3a_5_412}{10}\n\\\\\n&\\figgraph{.35}{ff3a_5_433}{8} \\quad \\figgraph{.35}{ff3a_6_2695}{6} \\quad \\figgraph{.35}{ff3a_6_1683}{10} \\quad \\figgraph{.35}{ff3a_6_691}{8}\n\\\\\n&\\figgraph{.35}{ff3a_6_1433}{10} \\quad \\figgraph{.35}{ff3a_6_429}{6} \\quad \\figgraph{.35}{ff3a_6_444}{6} \\quad \\figgraph{.35}{ff3b_7_1770}{6}\n\\\\\n&\\figgraph{.35}{ff3b_7_1780}{6} \\quad \\figgraph{.35}{ff3b_7_1766}{8} \\quad \\figgraph{.35}{ff3a_7_758}{6} \\quad \\figgraph{.35}{ff3b_7_1722}{4}\n\\\\\n&\\figgraph{.35}{ff3c_8_2959}{8} \\quad \\figgraph{.35}{ff3b_8_2750}{4} \\quad \\figgraph{.35}{ff3b_8_1662}{6} \\quad 
\\figgraph{.35}{ff3a_9_1790}{6}\n\\\\\n&\\figgraph{.35}{ff3b_9_1790}{6} \\quad \\figgraph{.35}{ff3c_9_1015}{6}\n\\end{align*}\n\\caption{The one-, two-, and three-loop finite form factor master integrals used in Sections \\ref{sec:1Lffs}, \\ref{sec:2Lffs}, and \\ref{sec:3Lffs}\nfor $\\mathcal{F}_1^q(\\epsilon)$, $\\mathcal{F}_1^g(\\epsilon)$, $\\mathcal{F}_2^q(\\epsilon)$, $\\mathcal{F}_2^g(\\epsilon)$, $\\mathcal{F}_3^q(\\epsilon)$, and $\\mathcal{F}_3^g(\\epsilon)$.}\n\\label{fig:basis}\n\\end{figure}\nOur calculation of the massless form factors is based on a variant of the method of dimension-shifts and dots recently introduced by us\nin reference~\\cite{vonManteuffel:2014qoa}. Its salient features can be summarized as follows.\nUsing Feynman diagrams, we express all relevant loop amplitudes in terms of scalar Feynman integrals which are then reduced to a set of basis integrals using integration-by-parts reductions.\nFor our basis integrals, we select Feynman integrals which are \\emph{finite} in the $\\epsilon \\to 0$ limit.\nStarting from the Feynman parameter representations of these integrals, we Taylor expand the integrands about $\\epsilon = 0$. Finally, we use\nmodern analytical integration techniques to integrate all expansion coefficients in terms of zeta and multiple zeta values. \nWe now proceed to describe how our calculations were carried out at a more technical level of detail.\n\nAs a first step, we generate all of the Feynman diagrams using {\\tt Qgraf} \\cite{Nogueira:1991ex} and compute the interferences required to extract the one-, two-, and three-loop form factors.\nFor this task, we use the QCD interference calculator built into the {\\tt Reduze 2} program \\cite{vonManteuffel:2012np,Studerus:2009ye,Bauer:2000cp,fermat}. \nThe result of this computation is a linear combination of scalar loop integrals whose integrands have numerator insertions up to a certain maximal rank,\nwhich are then mapped onto inverse propagators. We find that, in 't Hooft-Feynman gauge, there are up to two inverse propagators at one loop, \nup to four inverse propagators at two loops, and up to five inverse propagators at three loops.\nNext, the loop integrals are systematically reduced to a minimal set of basis integrals, the so-called master integrals.\nThese reductions are obtained from integration by parts identities in $d$ spacetime\ndimensions \\cite{Tkachov:1981wb,Chetyrkin:1981qh} with a variant of Laporta's algorithm \\cite{Laporta:2001dd}, \nas implemented in the {\\tt Reduze 2} program.\nAt this point, we employ the well-known standard integral basis of corner\nintegrals in $4-2\\epsilon$ dimensions:\none one-loop integral, four two-loop integrals, and twenty-two three-loop integrals.\n\nIn order to compute the master integrals using their Feynman parameter representations, we switch to a basis of \\emph{finite master integrals}.\\footnote{In this work, we chose to make use of scalar Feynman integrals without numerator insertions. 
\nHowever, it might be beneficial to take integrals with irreducible numerators into account, as this provides more finite integrals in a given number of dimensions.}\nIn previous work \\cite{vonManteuffel:2014qoa}, we showed that one can make use of so-called quasi-finite master integrals, master integrals which have convergent Feynman parameter integrations but, \npotentially, a trivial overall $1\/\\epsilon$ divergence.\nHere, we find it more convenient to work with truly finite integrals in order to make it manifestly visible which master integrals contribute to which $\\epsilon$ poles of the bare form factors.\nAs explained in reference \\cite{vonManteuffel:2014qoa} and Appendix~\\ref{sec:finite}, such a finite basis can always be constructed in dimensional regularization for any multi-leg, \nmulti-loop process, provided that there exists a Euclidean region which respects all kinematical constraints.\n\nFor each of the irreducible topologies, we enumerate finite integrals as described in Appendix~\\ref{sec:finite}, using an automated job in the development version of {\\tt Reduze 2}.\nOut of many possible choices, we find it convenient\nto keep the dotted graphical representations of our master integrals as symmetric as possible while aiming for masters possessing high maximal weights at leading order in $\\epsilon$.\nWe discard sets of master integrals for which the reduced interferences contain spurious poles worse than $\\epsilon^{-2 L}$ at $L$-loop order.\nIt is essential to avoid such spurious poles if one wishes to be able to read off by eye which master integrals contribute to which $\\epsilon$ poles.\nAll other things being equal, we also find it natural to pick candidates which inhabit smaller spacetime dimensions and have fewer dots.\nA systematic analysis of the first 100 finite integrals produced by our algorithm shows\nthat, for each sector, it is sufficient to use the lowest possible spacetime dimension ({\\it i.e.} the spacetime dimension at which finite integrals first appear) to determine the highest possible maximal weight at leading order in $\\epsilon$\nconsistent with our spurious pole veto.\\footnote{Although we have no proof that one cannot do better, we were unable to find a candidate to cover \nthe unique non-factorizable, three-point, eight-line integral topology which both had the highest maximal weight possible at leading order in $\\epsilon$ (five) and resulted in reduced three-loop integrands without $\\epsilon^{-7}$ poles.} \nIn a few cases we picked non-minimal spacetime dimensions in order to allow for symmetric dottings.\nThe results of our analysis are depicted in Figure \\ref{fig:basis}.\n\nThe next step is to relate the traditional integral basis to the basis of finite integrals using dimension shifts\nand integration by parts relations, see Section 2.3 of reference~\\cite{vonManteuffel:2014qoa} for details.\nFor this purpose, we must solve reduction identities for integrals besides the ones needed for the interferences.\nDespite the fact that some of our finite integrals carry a rather large number\nof dots, these reductions are moderate in computational complexity when compared\nto the reductions required for the form factor Feynman diagrams. 
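A simple one-loop example illustrates the structure of these identities: with the conventions of Section~\\ref{sec:notation}, the ordinary bubble integral in $4-2\\epsilon$ dimensions is proportional to the finite integral of equation~\\eqref{eq:miff1a3},\n\\begin{equation}\n \\left. \\frac{\\Gamma(1 - \\epsilon)}{i \\pi^{2-\\epsilon}}\\int\\!\\!\\frac{\\mathrm{d}^{4-2\\epsilon}k_1}{(k_1+p_1)^2(k_1-p_2)^2} ~\\right|_{(p_1+p_2)^2 = -1} = \\frac{1}{\\epsilon (1-\\epsilon)}\\; \\dimgraph{ff1a_2_3}{6}\\, ,\n\\end{equation}\nas may be checked from the Gamma-function evaluations of both sides. The $1\/\\epsilon$ pole is completely explicit in the rational prefactor, while the remaining integral is finite at $\\epsilon = 0$.\n\n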
For the convenience of the reader, our arXiv submission includes the relevant lists of replacement rules required to pass from the standard Reduze 2\nbases of corner integrals to the finite integral bases employed in this work (see \\file{BasisToFinite.m}).\n\nAt this stage, we have reexpressed the one-, two-, and three-loop form factors as linear combinations of master integrals whose Feynman parameter integral representations converge in the $\\epsilon \\to 0$ limit.\nIf our basis integrals satisfy certain technical requirements, we can apply modern, highly-automated, analytical integration algorithms to compute the Taylor expansion about $\\epsilon = 0$ for our master integrals.\nThe {\\texttt{HyperInt}} package \\cite{Panzer:2014caa}, written in {\\texttt{Maple}}, provides powerful routines for the evaluation of Euclidean linearly reducible Feynman integrals\\footnote{Roughly speaking, a Feynman integral \nis linearly reducible if its Feynman parametrization can be integrated out parameter-by-parameter in terms of multiple polylogarithms with rational arguments.} which possess convergent Feynman parameter representations.\nThe package implements the algorithms presented in \\cite{Brown:2008um,Brown:2009ta} and all underlying principles are discussed in \\cite{Panzer:2015ida}. In \\cite{Panzer:2014gra}, \nit was pointed out that all three-loop form factor integrals happen to be linearly reducible. This non-obvious fact implies that one can use the {\\texttt{HyperInt}} approach to compute the $\\epsilon$-expansion of the three-loop form factors. \nEven though the lower-loop results have been known to all orders in $\\epsilon$ for some time, we use {\\texttt{HyperInt}} to evaluate the $\\epsilon$-expansion of the form factors to weight eight in all cases. \nAt one loop, this requires an $\\epsilon$ expansion of the form factors up to $\\mathcal{O}\\left(\\epsilon^6\\right)$,\n at two loops, this requires an $\\epsilon$ expansion of the form factors up to $\\mathcal{O}\\left(\\epsilon^4\\right)$, \n and, at three loops, this requires an $\\epsilon$ expansion of the form factors up to $\\mathcal{O}\\left(\\epsilon^2\\right)$. 
In the following three sections, we summarize our results.\n\n\\section{One-Loop Form Factors}\n\\label{sec:1Lffs}\nFor the bare one-loop form factors we find\n\\begin{align}\n\\label{eq:1Lffq}\n\\mathcal{F}_1^q(\\epsilon) &=\\casimir{C_F} \\left\\{\\pole{\\frac{1}{\\epsilon^2}}\\left[a_1 \\dimgraph{ff1a_2_3}{6}\\right]\\right\\}\n\\end{align}\n\\begin{align}\n\\label{eq:1Lffg}\n\\mathcal{F}_1^g(\\epsilon) &=\\casimir{C_A} \\left\\{\\pole{\\frac{1}{\\epsilon^2}}\\left[b_1 \\dimgraph{ff1a_2_3}{6}\\right]\\right\\},\n\\end{align}\nwhere we have abbreviated the rational functions of $\\epsilon$ as\n\\begin{align}\n\\label{eq:1Lffqcoeff}\na_1 &= \\frac{-2 + \\epsilon-2 \\epsilon^2}{1-\\epsilon}\\\\\n\\label{eq:1Lffgcoeff}\nb_1 &= \\frac{-2 (1-3 \\epsilon+2 \\epsilon^2+\\epsilon^3)}{(1-\\epsilon)^2}.\n\\end{align}\nHere and in the following sections, the notation is such that both the graphically represented master integrals and the abbreviated rational\ncoefficient functions are finite at $\\epsilon = 0$ and possess a Taylor series expansion starting at $\\mathcal{O}\\left(\\epsilon^0\\right)$, recall the form of \\ Eq.\\ \\eqref{eq:miff1a3}.\nThis notation makes all divergences completely explicit and shows which integral topologies contribute to specific $\\epsilon$ poles of the form factors.\n\nAt the one-loop level, the Casimir scaling property is reflected in the fact that \n$a_1$ is equal to $b_1$ in the $\\epsilon \\to 0$ limit.\nExpanding Eqs.\\ \\eqref{eq:1Lffq} and \\eqref{eq:1Lffg} to $\\mathcal{O}\\left(\\epsilon^6\\right)$, \nwe find complete agreement with the results of references \\cite{Gehrmann:2010ue} and \\cite{Gehrmann:2010tu}. Further details of our analysis of the one-loop quark and gluon form factors are available online at arXiv.org \nin the ancillary files \\file{Fq.m} and \\file{Fg.m}.\n\n\\section{Two-Loop Form Factors}\n\\label{sec:2Lffs}\nFor the bare two-loop form factors we find\n\\begin{align}\n\\label{eq:2Lffq}\n\\mathcal{F}_2^q(\\epsilon) &=\\casimir{C_F^2} \\left\\{\\pole{\\frac{1}{\\epsilon^4}} \\left[c_1 \\dimgraph{ff2a_4_15}{6} + c_2 \\dimgraph{ff2a_3_22}{8}\\right]\n+\\pole{\\frac{1}{\\epsilon^3}} \\left[c_3 \\dimgraph{ff2a_4_58}{10}\\right]+\\pole{\\frac{1}{\\epsilon}} \\left[c_4 \\dimgraph{ff2b_6_63}{8}\\right]\\right\\}\n\\nonumber\\\\[-1pt] &\n+ \\casimir{C_F C_A} \\left\\{\\pole{\\frac{1}{\\epsilon^4}} \\left[c_5 \\dimgraph{ff2a_3_22}{8}+c_6 \\dimgraph{ff2a_4_58}{10}\\right]+\\pole{\\frac{1}{\\epsilon}} \\left[c_7 \\dimgraph{ff2b_6_63}{8}\\right]\\right\\}\n\\nonumber\\\\[-1pt] &\n+ \\casimir{C_F N_f} \\left\\{\\pole{\\frac{1}{\\epsilon^3}} \\left[c_8 \\dimgraph{ff2a_4_58}{10}\\right]\\right\\}\n\\end{align}\n\\begin{align}\n\\label{eq:2Lffg}\n\\mathcal{F}_2^g(\\epsilon) &=\\casimir{C_A^2} \\left\\{\\pole{\\frac{1}{\\epsilon^4}} \\left[d_1 \\dimgraph{ff2a_4_15}{6}+d_2 \\dimgraph{ff2a_3_22}{8}+d_3 \\dimgraph{ff2a_4_58}{10}\\right]\n+\\pole{\\frac{1}{\\epsilon}} \\left[d_4 \\dimgraph{ff2b_6_63}{8}\\right]\\right\\}\n\\nonumber\\\\[-1pt] &\n+ \\casimir{C_A N_f} \\left\\{\\pole{\\frac{1}{\\epsilon^3}} \\left[d_5 \\dimgraph{ff2a_3_22}{8}+d_6 \\dimgraph{ff2a_4_58}{10}\\right]+d_7 \\dimgraph{ff2b_6_63}{8}\\right\\}\n\\nonumber\\\\[-1pt] &\n+ \\casimir{C_F N_f} \\left\\{\\pole{\\frac{1}{\\epsilon^2}} \\left[d_8 \\dimgraph{ff2a_3_22}{8}+d_9 \\dimgraph{ff2a_4_58}{10}\\right]+d_{10} \\dimgraph{ff2b_6_63}{8}\\right\\}.\n\\end{align}\nAt this order in QCD perturbation theory, the Casimir scaling property can no longer be seen by eye; for example, the leading 
infrared divergences in the quark form factor exponentiate \\cite{Frenkel:1976bj}\nand the $C_F^2$ color structure can therefore not contribute to the two-loop quark anomalous dimension at all. What can be deduced from Eqs.\\ \\eqref{eq:2Lffq} and \\eqref{eq:2Lffg}, \nhowever, is that the finite two-loop non-planar form factor integral in the above\ncannot contribute to the two-loop quark and gluon cusp anomalous dimensions; it cannot contribute to the $\\epsilon^{-2}$ pole terms of either form factor because\nit is finite and has prefactors which, at worst, diverge like $\\epsilon^{-1}$ in the $\\epsilon \\to 0$ limit. Expanding Eqs.\\ \\eqref{eq:2Lffq} and \\eqref{eq:2Lffg} to $\\mathcal{O}\\left(\\epsilon^4\\right)$, \nwe find complete agreement with the results of references \\cite{Gehrmann:2010ue} and \\cite{Gehrmann:2010tu}. Further details of our analysis of the two-loop quark and gluon form factors are available online at arXiv.org \nin the ancillary files \\file{Fq.m} and \\file{Fg.m}.\n\n\\section{Three-Loop Form Factors}\n\\label{sec:3Lffs}\nFor the bare three-loop form factors we find\n\\begin{align}\n\\label{eq:3Lffq}\n\\mathcal{F}_3^q(\\epsilon) &=\n\\casimir{C_F^3} \\left\\{\\pole{\\frac{1}{\\epsilon^6}} \\left[e_1 \\dimgraph{ff3a_4_172}{10}+e_2 \\dimgraph{ff3a_5_662}{8}+e_3 \\dimgraph{ff3a_5_158}{6}+e_4 \\dimgraph{ff3a_6_2695}{6}\n\\right.\n\\right.\n\\nonumber\\\\[-1pt] &\n\\left.\n\\left.\n+e_5 \\dimgraph{ff3a_6_1433}{10}\\right]+\\pole{\\frac{1}{\\epsilon^5}} \\left[e_6 \\dimgraph{ff3a_5_412}{10}+e_7 \\dimgraph{ff3a_5_433}{8}+e_8 \\dimgraph{ff3a_6_1683}{10}\\right]\n\\right.\n\\nonumber\\\\[-1pt] &\n\\left.\n+\\pole{\\frac{1}{\\epsilon^4}} \\left[e_9 \\dimgraph{ff3a_6_429}{6}\\right]+\\pole{\\frac{1}{\\epsilon^3}} \\left[e_{10} \\dimgraph{ff3a_6_691}{8}+e_{11} \\dimgraph{ff3a_6_444}{6}+e_{12} \\dimgraph{ff3b_7_1770}{6}\n\\right.\n\\right.\n\\nonumber\\\\[-1pt] &\n\\left.\n\\left.\n+e_{13} \\dimgraph{ff3b_7_1780}{6}+e_{14} \\dimgraph{ff3b_7_1766}{8}+e_{15} \\dimgraph{ff3c_8_2959}{8}\\right]+\\pole{\\frac{1}{\\epsilon^2}} \\left[e_{16} \\dimgraph{ff3b_8_1662}{6}\\right]\n\\right.\n\\nonumber\\\\[-1pt] &\n\\left.\n+\\pole{\\frac{1}{\\epsilon}} \\left[e_{17} \\dimgraph{ff3a_7_758}{6}+e_{18} \\dimgraph{ff3b_7_1722}{4}+e_{19} \\dimgraph{ff3b_8_2750}{4}+e_{20} \\dimgraph{ff3a_9_1790}{6}+e_{21} \\dimgraph{ff3b_9_1790}{6}\n\\right.\n\\right.\n\\nonumber\\\\[-1pt] &\n\\left.\n\\left.\n+e_{22} \\dimgraph{ff3c_9_1015}{6}\\right]\\right\\}\n\\nonumber\\\\[5pt] & \\hspace{-6.2ex}\n+ \\casimir{C_F^2 C_A} \\left\\{\\pole{\\frac{1}{\\epsilon^6}} \\left[e_{23} \\dimgraph{ff3a_4_172}{10}+e_{24} \\dimgraph{ff3a_5_662}{8}+e_{25} \\dimgraph{ff3a_5_158}{6}+e_{26} \\dimgraph{ff3a_5_412}{10}\n\\right.\n\\right.\n\\nonumber\\\\[-1pt] &\n\\left.\n\\left.\n+e_{27} \\dimgraph{ff3a_6_1683}{10}+e_{28} \\dimgraph{ff3a_6_1433}{10}\\right]+\\pole{\\frac{1}{\\epsilon^5}} \\left[e_{29} \\dimgraph{ff3a_5_433}{8}\\right]+\\pole{\\frac{1}{\\epsilon^4}} \\left[e_{30} \\dimgraph{ff3a_6_429}{6}\\right]\n\\right.\n\\nonumber\\\\[-1pt] &\n\\left.\n+\\pole{\\frac{1}{\\epsilon^3}} \\left[e_{31} \\dimgraph{ff3a_6_691}{8}+e_{32} \\dimgraph{ff3a_6_444}{6}+e_{33} \\dimgraph{ff3b_7_1770}{6}+e_{34} \\dimgraph{ff3b_7_1780}{6}+e_{35} \\dimgraph{ff3b_7_1766}{8}\n\\right.\n\\right.\n\\nonumber\\\\[-1pt] &\n\\left.\n\\left.\n+e_{36} \\dimgraph{ff3c_8_2959}{8}\\right]+\\pole{\\frac{1}{\\epsilon^2}} \\left[e_{37} \\dimgraph{ff3b_8_1662}{6}\\right]+\\pole{\\frac{1}{\\epsilon}} \\left[e_{38} \\dimgraph{ff3a_7_758}{6}+e_{39} 
\\dimgraph{ff3b_7_1722}{4}\n\\right.\n\\right.\n\\nonumber\\\\[-1pt] &\n\\left.\n\\left.\n+e_{40} \\dimgraph{ff3b_8_2750}{4}+e_{41} \\dimgraph{ff3a_9_1790}{6}+e_{42} \\dimgraph{ff3b_9_1790}{6}+e_{43} \\dimgraph{ff3c_9_1015}{6}\\right]\\right\\}\n\\nonumber\\\\[5pt] & \\hspace{-6.2ex}\n+ \\casimir{C_F C_A^2} \\left\\{\\pole{\\frac{1}{\\epsilon^6}} \\left[e_{44} \\dimgraph{ff3a_4_172}{10}+e_{45} \\dimgraph{ff3a_5_662}{8}+e_{46} \\dimgraph{ff3a_5_158}{6}+e_{47} \\dimgraph{ff3a_5_412}{10}\n\\right.\n\\right.\n\\nonumber\\\\[-1pt] &\n\\left.\n\\left.\n+e_{48} \\dimgraph{ff3a_5_433}{8}+e_{49} \\dimgraph{ff3a_6_1433}{10}\\right]+\\pole{\\frac{1}{\\epsilon^4}} \\left[e_{50} \\dimgraph{ff3a_6_429}{6}\\right]+\\pole{\\frac{1}{\\epsilon^3}} \\left[e_{51} \\dimgraph{ff3a_6_444}{6}\n\\right.\n\\right.\n\\nonumber\\\\[-1pt] &\n\\left.\n\\left.\n+e_{52} \\dimgraph{ff3b_7_1770}{6}+e_{53} \\dimgraph{ff3b_7_1780}{6}+e_{54} \\dimgraph{ff3b_7_1766}{8}\\right]+\\pole{\\frac{1}{\\epsilon^2}} \\left[e_{55} \\dimgraph{ff3b_8_1662}{6}\\right]\n\\right.\n\\nonumber\\\\[-1pt] &\n\\left.\n+\\pole{\\frac{1}{\\epsilon}} \\left[e_{56} \\dimgraph{ff3a_7_758}{6}+e_{57} \\dimgraph{ff3b_7_1722}{4}+e_{58} \\dimgraph{ff3a_9_1790}{6}+e_{59} \\dimgraph{ff3b_9_1790}{6}+e_{60} \\dimgraph{ff3c_9_1015}{6}\\right]\\right\\}\n\\nonumber\\\\[5pt] & \\hspace{-6.2ex}\n+ \\casimir{C_F^2 N_f} \\left\\{\\pole{\\frac{1}{\\epsilon^5}} \\left[e_{61} \\dimgraph{ff3a_5_412}{10}+e_{62} \\dimgraph{ff3a_5_433}{8}+e_{63} \\dimgraph{ff3a_6_1683}{10}+e_{64} \\dimgraph{ff3a_6_1433}{10}\\right]\n\\right.\n\\nonumber\\\\[-1pt] &\n\\left.\n+\\pole{\\frac{1}{\\epsilon^4}} \\left[e_{65} \\dimgraph{ff3a_4_172}{10}\\right]+\\pole{\\frac{1}{\\epsilon^3}} \\left[e_{66} \\dimgraph{ff3a_6_429}{6}\\right]+\\pole{\\frac{1}{\\epsilon^2}} \\left[e_{67} \\dimgraph{ff3a_6_691}{8}\n+e_{68} \\dimgraph{ff3b_7_1770}{6}\\right]\\right\\}\n\\nonumber\\\\[5pt] & \\hspace{-6.2ex}\n+ \\casimir{C_F C_A N_f} \\left\\{\\pole{\\frac{1}{\\epsilon^5}} \\left[e_{69} \\dimgraph{ff3a_4_172}{10}+e_{70} \\dimgraph{ff3a_5_412}{10}+e_{71} \\dimgraph{ff3a_5_433}{8}+e_{72} \\dimgraph{ff3a_6_1433}{10}\\right]\n\\right.\n\\nonumber\\\\[-1pt] &\n\\left.\n+\\pole{\\frac{1}{\\epsilon^3}} \\left[e_{73} \\dimgraph{ff3a_6_429}{6}\\right]+\\pole{\\frac{1}{\\epsilon^2}} \\left[e_{74} \\dimgraph{ff3a_6_444}{6}+e_{75} \\dimgraph{ff3b_7_1770}{6}\\right]\\right\\}\n\\nonumber\\\\[5pt] & \\hspace{-6.2ex}\n+ \\casimir{C_F N_f^2} \\left\\{\\pole{\\frac{1}{\\epsilon^4}}\\left[e_{76} \\dimgraph{ff3a_6_1433}{10}\\right]\\right\\}\n\\nonumber\\\\[5pt] & \\hspace{-6.2ex}\n+ \\casimir{\\frac{d_{abc}d_{abc}}{N_c} N_{q\\gamma}} \\left\\{\\pole{\\frac{1}{\\epsilon^5}} \\left[e_{77} \\dimgraph{ff3a_4_172}{10}+e_{78} \\dimgraph{ff3a_6_1433}{10}\\right]+\\pole{\\frac{1}{\\epsilon^4}} \\left[e_{79} \\dimgraph{ff3a_5_433}{8}\\right]\n\\right.\n\\nonumber\\\\[-1pt] &\n\\left.\n+\\pole{\\frac{1}{\\epsilon^3}} \\left[e_{80} \\dimgraph{ff3a_5_662}{8}+e_{81} \\dimgraph{ff3a_5_158}{6}+e_{82} \\dimgraph{ff3a_5_412}{10}+e_{83} \\dimgraph{ff3a_6_429}{6}\\right]\n\\right.\n\\nonumber\\\\[-1pt] &\n\\left.\n+\\pole{\\frac{1}{\\epsilon^2}} \\left[e_{84} \\dimgraph{ff3a_6_444}{6}+e_{85} \\dimgraph{ff3b_7_1770}{6}+e_{86} \\dimgraph{ff3b_7_1780}{6}+e_{87} \\dimgraph{ff3b_7_1766}{8}\\right]\n\\right.\n\\nonumber\\\\[-1pt] &\n\\left.\n+\\pole{\\frac{1}{\\epsilon}} \\left[e_{88} \\dimgraph{ff3b_8_1662}{6}\\right]+\\left[e_{89} \\dimgraph{ff3b_7_1722}{4}+e_{90} \\dimgraph{ff3a_9_1790}{6}+e_{91} 
\\dimgraph{ff3b_9_1790}{6}\\right]\n\\right.\n\\nonumber\\\\[-1pt] &\n\\left.\n+\\pole{\\epsilon} \\left[e_{92} \\dimgraph{ff3a_7_758}{6}\\right]\\right\\}\n\\end{align}\n\\begin{align}\n\\label{eq:3Lffg}\n\\mathcal{F}_3^g(\\epsilon) &=\n\\casimir{C_A^3} \\left\\{\\pole{\\frac{1}{\\epsilon^6}} \\left[f_1 \\dimgraph{ff3a_4_172}{10}+f_2 \\dimgraph{ff3a_5_662}{8}+f_3 \\dimgraph{ff3a_5_158}{6}+f_4 \\dimgraph{ff3a_5_412}{10}+f_5 \\dimgraph{ff3a_5_433}{8}\n\\right.\n\\right.\n\\nonumber\\\\[-1pt] &\n\\left.\n\\left.\n+f_6 \\dimgraph{ff3a_6_2695}{6}+f_7 \\dimgraph{ff3a_6_1683}{10}+f_8 \\dimgraph{ff3a_6_1433}{10}\\right]+\\pole{\\frac{1}{\\epsilon^4}} \\left[f_9 \\dimgraph{ff3a_6_429}{6}\\right]\n\\right.\n\\nonumber\\\\[-1pt] &\n\\left.\n+\\pole{\\frac{1}{\\epsilon^3}} \\left[f_{10} \\dimgraph{ff3a_6_691}{8}+f_{11} \\dimgraph{ff3a_6_444}{6}+f_{12} \\dimgraph{ff3b_7_1770}{6}+f_{13} \\dimgraph{ff3b_7_1780}{6}+f_{14} \\dimgraph{ff3b_7_1766}{8}\n\\right.\n\\right.\n\\nonumber\\\\[-1pt] &\n\\left.\n\\left.\n+f_{15} \\dimgraph{ff3c_8_2959}{8}\\right]+\\pole{\\frac{1}{\\epsilon^2}} \\left[f_{16} \\dimgraph{ff3b_8_1662}{6}\\right]+\\pole{\\frac{1}{\\epsilon}} \\left[f_{17} \\dimgraph{ff3a_7_758}{6}+f_{18} \\dimgraph{ff3b_7_1722}{4}\n\\right.\n\\right.\n\\nonumber\\\\[-1pt] &\n\\left.\n\\left.\n+f_{19} \\dimgraph{ff3b_8_2750}{4}+f_{20}\\dimgraph{ff3a_9_1790}{6}+f_{21} \\dimgraph{ff3b_9_1790}{6}\\right]\\right\\}\n\\nonumber\\\\[5pt] & \\hspace{-6.2ex}\n+ \\casimir{C_A^2 N_f} \\left\\{\\pole{\\frac{1}{\\epsilon^5}} \\left[f_{22} \\dimgraph{ff3a_4_172}{10}+f_{23} \\dimgraph{ff3a_5_662}{8}+f_{24} \\dimgraph{ff3a_5_158}{6}+f_{25} \\dimgraph{ff3a_5_412}{10}\n\\right.\n\\right.\n\\nonumber\\\\[-1pt] &\n\\left.\n\\left.\n+f_{26} \\dimgraph{ff3a_5_433}{8}+f_{27} \\dimgraph{ff3a_6_1683}{10}+f_{28} \\dimgraph{ff3a_6_1433}{10}\\right]+\\pole{\\frac{1}{\\epsilon^3}} \\left[f_{29} \\dimgraph{ff3a_6_429}{6}\\right]\n\\right.\n\\nonumber\\\\[-1pt] &\n\\left.\n+\\pole{\\frac{1}{\\epsilon^2}} \\left[f_{30} \\dimgraph{ff3a_6_691}{8}+f_{31} \\dimgraph{ff3a_6_444}{6}+f_{32} \\dimgraph{ff3b_7_1770}{6}+f_{33} \\dimgraph{ff3b_7_1780}{6}+f_{34} \\dimgraph{ff3b_7_1766}{8}\n\\right.\n\\right.\n\\nonumber\\\\[-1pt] &\n\\left.\n\\left.\n+f_{35} \\dimgraph{ff3c_8_2959}{8}\\right]+\\pole{\\frac{1}{\\epsilon}} \\left[f_{36} \\dimgraph{ff3b_8_1662}{6}\\right]+\\left[f_{37} \\dimgraph{ff3a_7_758}{6}+f_{38} \\dimgraph{ff3b_7_1722}{4}\\right.\n\\right.\n\\nonumber\\\\[-1pt] &\n\\left.\n\\left.+f_{39} \\dimgraph{ff3b_8_2750}{4}+f_{40} \\dimgraph{ff3a_9_1790}{6}+f_{41} \\dimgraph{ff3b_9_1790}{6}+f_{42} \\dimgraph{ff3c_9_1015}{6}\\right]\\right\\}\n\\nonumber\\\\[5pt] & \\hspace{-6.2ex}\n+ \\casimir{C_A C_F N_f} \\left\\{\\pole{\\frac{1}{\\epsilon^5}} \\left[f_{43} \\dimgraph{ff3a_4_172}{10}+f_{44} \\dimgraph{ff3a_5_662}{8}+f_{45} \\dimgraph{ff3a_5_158}{6}+f_{46} \\dimgraph{ff3a_5_433}{8}\n\\right.\n\\right.\n\\nonumber\\\\[-1pt] &\n\\left.\n\\left.\n+f_{47} \\dimgraph{ff3a_6_1433}{10}\\right]+\\pole{\\frac{1}{\\epsilon^4}} \\left[f_{48} \\dimgraph{ff3a_5_412}{10}+f_{49} \\dimgraph{ff3a_6_1683}{10}\\right]+\\pole{\\frac{1}{\\epsilon^2}} \\left[f_{50} \\dimgraph{ff3a_6_691}{8}\n\\right.\n\\right.\n\\nonumber\\\\[-1pt] &\n\\left.\n\\left.\n+f_{51} \\dimgraph{ff3a_6_444}{6}+f_{52} \\dimgraph{ff3b_7_1770}{6}+f_{53} \\dimgraph{ff3b_7_1780}{6}+f_{54} \\dimgraph{ff3b_7_1766}{8}+f_{55} \\dimgraph{ff3c_8_2959}{8}\n\\right]\n\\right.\n\\nonumber\\\\[-1pt] &\n\\left.\n+\\pole{\\frac{1}{\\epsilon}} \\left[f_{56} \\dimgraph{ff3a_6_429}{6}+f_{57} 
\\dimgraph{ff3b_8_1662}{6}\\right]+\\left[f_{58} \\dimgraph{ff3a_7_758}{6}+f_{59} \\dimgraph{ff3b_7_1722}{4}+f_{60} \\dimgraph{ff3b_8_2750}{4}\\right.\n\\right.\n\\nonumber\\\\[-1pt] &\n\\left.\n\\left. +f_{61} \\dimgraph{ff3a_9_1790}{6}+f_{62} \\dimgraph{ff3b_9_1790}{6}+f_{63} \\dimgraph{ff3c_9_1015}{6}\\right]\\right\\}\n\\nonumber\\\\[5pt] & \\hspace{-6.2ex}\n+\\casimir{C_F^2 N_f} \\left\\{\\pole{\\frac{1}{\\epsilon^4}} \\left[f_{64} \\dimgraph{ff3a_5_433}{8}+f_{65} \\dimgraph{ff3a_6_1433}{10}\\right]+\\pole{\\frac{1}{\\epsilon^3}} \\left[f_{66} \\dimgraph{ff3a_4_172}{10}+f_{67} \\dimgraph{ff3a_5_158}{6}\n\\right.\n\\right.\n\\nonumber\\\\[-1pt] &\n\\left.\n\\left.\n+f_{68} \\dimgraph{ff3a_5_412}{10}\\right]+\\pole{\\frac{1}{\\epsilon^2}} \\left[f_{69} \\dimgraph{ff3a_5_662}{8}+f_{70} \\dimgraph{ff3a_6_444}{6}+f_{71} \\dimgraph{ff3b_7_1780}{6}\\right]\n\\right.\n\\nonumber\\\\[-1pt] &\n\\left.\n+\\pole{\\frac{1}{\\epsilon}} \\left[f_{72} \\dimgraph{ff3b_7_1770}{6}+f_{73} \\dimgraph{ff3b_7_1766}{8}\\right] + \\left[f_{74} \\dimgraph{ff3a_6_429}{6}+f_{75} \\dimgraph{ff3a_9_1790}{6}+f_{76} \\dimgraph{ff3c_9_1015}{6}\\right]\n\\right.\n\\nonumber\\\\[-1pt] &\n\\left.\n+\\epsilon \\left[f_{77} \\dimgraph{ff3a_7_758}{6}+f_{78} \\dimgraph{ff3b_7_1722}{4}\\right]\\right\\}\n\\nonumber\\\\[5pt] & \\hspace{-6.2ex}\n+ \\casimir{C_A N_f^2} \\left\\{\\pole{\\frac{1}{\\epsilon^4}}\\left[f_{79} \\dimgraph{ff3a_4_172}{10}+f_{80} \\dimgraph{ff3a_5_158}{6}+f_{81} \\dimgraph{ff3a_5_412}{10}\\right]+\\pole{\\frac{1}{\\epsilon^2}}\\left[f_{82} \\dimgraph{ff3a_6_1433}{10}\\right]\n\\right.\n\\nonumber\\\\[-1pt] &\n\\left.\n+\\pole{\\frac{1}{\\epsilon}}\\left[f_{83} \\dimgraph{ff3b_7_1780}{6}\\right]\\right\\}\n\\nonumber\\\\[5pt] & \\hspace{-6.2ex}\n+ \\casimir{C_F N_f^2} \\left\\{\\pole{\\frac{1}{\\epsilon^3}}\\left[f_{84} \\dimgraph{ff3a_4_172}{10}+f_{85} \\dimgraph{ff3a_5_412}{10}\\right]+\\pole{\\frac{1}{\\epsilon^2}}\\left[f_{86} \\dimgraph{ff3a_5_158}{6}\\right]\n\\right.\n\\nonumber\\\\[-1pt] &\n\\left.\n+\\pole{\\frac{1}{\\epsilon}}\\left[f_{87} \\dimgraph{ff3b_7_1780}{6}\\right]\n\\right\\}.\n\\end{align}\nIn total, six of the twenty-two finite three-loop form factor master integrals in Eqs.\\ \\eqref{eq:3Lffq} and \\eqref{eq:3Lffg} above turn out not to contribute to the three-loop cusp anomalous dimensions (see Figure \\ref{fig:noncuspintegrals}). \nThese include, in particular, all of the most complicated, nine-line, finite three-loop masters. Expanding Eqs.\\ \\eqref{eq:3Lffq} and \\eqref{eq:3Lffg} to $\\mathcal{O}\\left(\\epsilon^2\\right)$, \nwe find complete agreement with the results of references \\cite{Gehrmann:2010ue} and \\cite{Gehrmann:2010tu}. 
Further details of our analysis of the three-loop quark and gluon form factors are available online at arXiv.org \nin the ancillary files \\file{Fq.m} and \\file{Fg.m}.\n\\begin{figure}\n\\centering\n\\begin{align*}\n&\\figgraph{.35}{ff3a_7_758}{6} \\qquad \\figgraph{.35}{ff3b_7_1722}{4} \\qquad \\figgraph{.35}{ff3b_8_2750}{4}\n\\\\\n&\\figgraph{.35}{ff3a_9_1790}{6} \\qquad \\figgraph{.35}{ff3b_9_1790}{6} \\qquad \\figgraph{.35}{ff3c_9_1015}{6}\n\\end{align*}\n\\caption{The finite three-loop form factor master integrals in $\\mathcal{F}_3^q\\left(\\epsilon\\right)$ and $\\mathcal{F}_3^g\\left(\\epsilon\\right)$ (Eqs.\\ \\eqref{eq:3Lffq} and \\eqref{eq:3Lffg})\nwhich do not contribute to the three-loop cusp anomalous dimensions.}\n\\label{fig:noncuspintegrals}\n\\end{figure}\n\n\\section{Towards the Four-Loop Cusp Anomalous Dimensions}\n\\label{sec:cusps}\nAs has long been known, a convenient way to calculate the cusp anomalous dimensions to four loops is to compute the four-loop quark and gluon form factors and then extract the cusp anomalous dimensions\nfrom the first poles in the parameter of dimensional regularization, $\\epsilon$, which are not already fixed by lower orders. \nThe $\\epsilon^{-8}$ through $\\epsilon^{-3}$ poles at four loops are predicted, in terms of known lower-loop results, by the renormalization group equations satisfied by the form factors,\nand the $\\epsilon^{-2}$ poles may then be used to extract the four-loop cusp anomalous dimensions (see {\\it e.g.} reference~\\cite{Moch:2005tm} for details).\\footnote{In reference \\cite{Gehrmann:2010ue},\nthe one-, two-, and three-loop form factors are given to sufficiently high orders in the $\\epsilon$ expansion for the purposes of this analysis.}\n\nFor dimensionally-regulated $L$-loop bare amplitudes in quantum field theory which\ncan be expressed in terms of multiple zeta values or, more generally, multiple polylogarithms, one normally expects\ncontributions of at most weight $2 L + n$ in the Laurent expansion coefficient of order $n$.\\footnote{We are not aware of any proof that this will always be the case. \nHowever, to the best of our knowledge, this weight bound turns out to hold in every explicit higher-order calculation performed to date.}\nIn fact, given our experience at lower loop orders, it is natural to hope\nthat the four-loop cusp anomalous dimensions are rational linear combinations of the numbers $1$, $\\zeta_2$, $\\zeta_3$, $\\zeta_2^2$, $\\zeta_2 \\zeta_3$, $\\zeta_5$, $\\zeta_2^3$, and $\\zeta_3^2$;\nthese are precisely the monomials in $\\zeta_2$, $\\zeta_3$, and $\\zeta_5$ of weight six or less, in terms of which all multiple zeta values through weight six may be written, and weight six is the maximum the above bound allows\nin the $\\epsilon^{-2}$ coefficient for $L = 4$.\nIf we further assume that there are no cancellations of spurious constants between different master integrals, we\nconclude that integrals which contain other transcendental constants at leading order in $\\epsilon$ should not contribute to the four-loop cusp anomalous dimensions.\nFor illustration, we have chosen a finite, non-planar twelve-line integral.\\footnote{Here we use the convention $\\displaystyle \\zeta_{5,3}=\\sum_{0<m<n} \\frac{1}{n^5\\, m^3}$.}
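\n\nAs a concrete cross-check of the weight bookkeeping underlying this argument, the short Python sketch below (our own illustration, not part of the form factor computation itself; the symbol names are ours) enumerates all monomials in $\\zeta_2$, $\\zeta_3$, and $\\zeta_5$ of weight at most $2L + n = 6$ for $L = 4$ and $n = -2$. It reproduces exactly the eight constants listed above; a weight-eight constant such as $\\zeta_{5,3}$ lies outside this set.\n\\begin{verbatim}\n# Illustrative sketch: list all monomials in zeta_2, zeta_3, zeta_5\n# with total weight <= 2*L + n for L = 4 loops and n = -2 (the\n# epsilon^-2 Laurent coefficient). All multiple zeta values through\n# weight six reduce to such monomials.\nfrom itertools import product\n\nmax_weight = 2 * 4 + (-2)  # = 6\n\nmonomials = []\nfor e2, e3, e5 in product(range(4), range(3), range(2)):\n    weight = 2 * e2 + 3 * e3 + 5 * e5\n    if weight <= max_weight:\n        factors = [f'z{k}^{e}' for k, e in ((2, e2), (3, e3), (5, e5)) if e]\n        monomials.append((weight, '*'.join(factors) if factors else '1'))\n\nfor weight, name in sorted(monomials):\n    print(weight, name)\n# output: 1, z2^1, z3^1, z2^2, z2^1*z3^1, z5^1, z2^3, z3^2\n\\end{verbatim}\nThe exponent ranges in the loop are chosen so that no monomial of weight six or below is missed; any larger exponent already exceeds the bound.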