diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzhtkz" "b/data_all_eng_slimpj/shuffled/split2/finalzzhtkz"
new file mode 100644
--- /dev/null
+++ "b/data_all_eng_slimpj/shuffled/split2/finalzzhtkz"
@@ -0,0 +1,5 @@
+{"text":"\\section{Introduction}\nGraph Learning (GL), also referred to as Network Topology Inference or Graph Structure Learning, refers to the task of inferring graphs from data. Here we focus on learning undirected graphs in the supervised setting. GL has deep roots in statistics~\\citep{dempster1972covariance} with significant contributions to probabilistic graphical model selection; see e.g.~\\citep{kolaczyk2009book,GLasso2008,drton2017structure}. Approaches dubbed `latent graph learning' have been used to learn interactions among coupled dynamical systems~\\citep{kipf2018icml}, or to obtain better task-driven representations of relational data for machine learning (ML) applications~\\citep{wang2019dynamicgraphcnn,kazi2020DGM,velickovic2020pgn}; for the related task of link-prediction see \\citep{hamilton2021book}.\nGraph signal processing (GSP) has been the source of recent advances using cardinal properties of network data such as smoothness~\\citep{dong2016learning,kalofolias2016learn} and graph stationarity~\\citep{segarra2017tsipn,pasdeloup2018tsipn}, exploiting models of network diffusion~\\citep{wasserman2022GDN, daneshmand2014estimating}, or taking a signal representation approach~\\citep{dong2019learning,mateos2019spmag}. These works, referred to as Model-Based (MB) graph learning, use such data models to formulate the topology inference task as a (convex) optimization problem to be solved for each problem instance. When datasets are available, one can build on these MB approaches by unrolling an iterative solution procedure to produce a learned architecture optimized on the given data~\\citep{monga2021spmag}. Such Unrolling-Based (UB) methods offer several advantages over MB methods: they tend to be more expressive (MB methods are restricted to a small set of constraints expressible in a convex manner), require fewer layers than the corresponding iterative algorithm for a given performance requirement, and allow one to directly optimize for a (differentiable) metric of interest~\\citep{dong2019learning,wasserman2022GDN, shrivastava2019glad}. Thus, by investing time upfront to train a model that learns a distribution over such graphs, we can obtain better task-specific performance while avoiding the expense of re-solving each new problem instance.\n\n\nWhile the interdisciplinary history of GL has been a source of progress, it has also been a hindrance: these different fields use different tools (MATLAB\/Octave, R, Python, etc.) in implementing their algorithms. This has slowed the spread of new algorithms and made comparing the relative performance of different algorithms difficult. Importantly, there has been little effort to ensure implementations can leverage the increasing computational power of the GPU. Due to the reliance of GL algorithms on matrix\/vector operations, as well as the interest in handling ever-increasing network sizes, GPUs offer an ideal environment for such computation. \nThus the first aim of pyGSL is to provide standardized GPU-compatible implementations of fundamental GL methods, along with datasets to evaluate them on. \nThe second aim is lowering the barrier to use and develop such GL methods. 
As such, we provide a framework which makes it easy for researchers to extend - or build completely new - graph learning methods without needing to rebuild all the underlying software machinery. By consolidating wide-ranging GL methods into a single GPU-compatible framework, pyGSL can be seen as the most comprehensive Python library for GL to date.\n\n\n\\section{Architecture}\n\\label{sec:architecture}\n\nBoth MB and UB methods rely heavily on matrix\/vector operations, often require hyperparameter search capabilities, and are typically applied to problems with large graph sizes and\/or large sets of graphs; see Section \\ref{sec:gl_methods} for further details on these GL approaches. For these reasons, pyGSL implements all methods in PyTorch, which has a rich ecosystem of software to interface with GPUs, run and visualize hyperparameter searches via W\\&B~\\citep{wandb}, and abstract away the bug-prone training, validation and optimization logic required in gradient-based training via PyTorch-Lightning~\\citep{Falcon_PyTorch_Lightning_2019}.\n\nImplementations of UB methods share significant structure, motivating our novel \\texttt{UnrollingBase} class, which encapsulates this shared functionality; see Fig. \\ref{fig:pyGSL}. This minimizes repeated code, and makes implementing new UB methods as simple as defining the core layer-wise logic for that specific unrolling. By doing so, users can (i) cut their development time down significantly by relying on pretested methods; and (ii) automatically gain functionality such as intermediate output visualization and metric logging.\nSee Appendix \\ref{app:python_example} for a code snippet showing how pyGSL can be used.\n\n\\begin{figure}\n \\centering\n \\includegraphics[scale=.3]{figures\/pyGSL_UML_model_horizontal.png}\n \\caption{Class diagram of pyGSL's learned unrollings with a subset of variables and methods shown. \n For a specific learned unrolling, say GDN, we would replace the LearnedUnrolling class with a GDN class which simply overwrites the setup(), shared\\textunderscore step(), and configure\\textunderscore optimizers() methods from its parent class pl.LightningModule. Since the GDN class also mixes in \\texttt{UnrollingBase}, it automatically inherits functionality common to all UB methods.} \n \\label{fig:pyGSL}\n \n\\end{figure}\n\n\n\\section{A Unifying View of Graph Learning Methods}\n\\label{sec:gl_methods}\n\n\nThe GL problem is to infer a graph from data in order to uncover a latent complex information structure. Here, we focus on undirected and weighted graphs $\\mathcal{G}(\\mathcal{V},\\mathcal{E})$, where $\\mathcal{V}=\\{1,\\ldots, N\\}$ is the set of nodes (henceforth common to all graphs), and $\\mathcal{E}\\subseteq \\mathcal{V}\\times \\mathcal{V}$ collects the edges. A graph signal ${\\bm{x}} = [x_1, \\ldots, x_N] \\in \\mathbb{R}^N$ is a map ${\\bm{x}}: \\mathcal{V} \\rightarrow \\mathbb{R}$ which assigns a real value (say, a feature) to each vertex. We collect the $P$ graph signal observations together into the data matrix ${\\bm{X}} = [{\\bm{x}}^{(1)}, \\cdots, {\\bm{x}}^{(P)}]$. A similarity function $S: \\mathbb{R}^{N \\times P} \\rightarrow \\mathbb{R}^{N \\times N}$ is chosen to compute the observed direct similarity between nodes. Common choices for $S$ include the sample covariance\/correlation or the Euclidean distance matrix; a sketch of both is given below. 
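\nAs an illustration, the following is a minimal PyTorch sketch of these two similarity functions; the function names are ours for illustration and are not part of the pyGSL API.\n\\begin{verbatim}\nimport torch\n\ndef sample_covariance(X):\n    # X has shape (N, P); returns the N x N sample covariance S(X)\n    Xc = X - X.mean(dim=1, keepdim=True)\n    return Xc @ Xc.T \/ (X.shape[1] - 1)\n\ndef euclidean_distance(X):\n    # pairwise Euclidean distances between the rows of X\n    return torch.cdist(X, X)\n\\end{verbatim}\nEither output can then be handed to a GL method as the observed similarity matrix $S({\\bm{X}})$. 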
\nWe can conceptualize the output of $S({\\bm{X}})$ as the symmetric adjacency matrix of an observed graph \nfrom which we would like to recover a latent graph with symmetric adjacency matrix denoted ${\\bm{A}}_L \\in \\mathbb{R}_+^{N\\times N}$. Below we discuss three popular approaches to tackle the GL task which exemplify the link between $S({\\bm{X}})$ and ${\\bm{A}}_L$; for discussions on Deep Learning (DL) methods for GL and the tradeoffs between the three methods refer to Appendices \\ref{app:DL} and \\ref{app:tradeoffs}, respectively.\n\n\n\\begin{table}[t]\n\\caption{Model-Based methods with the associated Unrolling-Based methods they inspire.}\n\\label{table:model_based_methods}\n\\begin{center}\n\\begin{tabular}{lll}\n\\multicolumn{1}{c}{\\bf Data Model}\n&\\multicolumn{1}{c}{\\bf MB Iterative Solution Procedure}\n&\\multicolumn{1}{c}{\\bf UB Model}\n\\\\ \\hline \\\\\nGaussian \n& Alternating Minimization\n& GLAD\\\\\nSmoothness \n& Primal-Dual Splitting\n& L2G\\\\\nDiffusion \n& Proximal Gradient Descent\n& GDN\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\\textbf{Model-Based Graph Learning.}\nThese GL approaches postulate some data model relating the observed data ${\\bm{X}}$ to the latent graph $\\mathcal{G}_L$ via ${\\bm{X}} \\sim \\mathcal{F}({\\bm{A}}_L)$.\nThis model can be the result of a network process, e.g., linear network diffusion, or statistical, where ${\\bm{X}}$ follows a distribution determined by ${\\bm{A}}_L$,\ne.g., in probabilistic graphical models. Thus the GL task reduces to an attempt to invert this relation, i.e., to compute $\\mathcal{F}^{-1}({\\bm{X}})$.\nTo do so, these techniques formulate a (convex) optimization problem which can be solved via iterative optimization methods.\nThese MB GL problems have the general form\n\\begin{align}\n\\label{prob:model-based-graph-learning}\n{\\bm{A}}^{*} \\in {}& \\underset{ {\\bm{A}} \\in \\mathcal{C} } {\\text{ argmin}} \\: \\{\\mathcal{L}_{\\text{data}}({\\bm{A}},{\\bm{X}}) + \n\\mathcal{L}_{\\text{reg}}({\\bm{A}})\\},\n\\end{align}\nwhere $\\mathcal{L}_{\\text{data}}({\\bm{A}},{\\bm{X}})$ is the data fidelity term, $\\mathcal{L}_{\\text{reg}}({\\bm{A}})$ is the regularization term incorporating the structural priors (e.g., $\\norm{{\\bm{A}}}{1}$ for sparsity), and $\\mathcal{C}$ encodes a convex constraint on the optimization variable ${\\bm{A}}$, e.g., symmetry, non-negativity, or a hollow diagonal. We inject our assumptions about the generative model into the objective of the problem. Note that we parameterize the canonical problem form with an adjacency ${\\bm{A}}$ for notational convenience, but many methods use other graph shift operators such as the Laplacian ${\\bm{L}} := \\text{diag}({\\bm{A}} \\mathbf{1}) - {\\bm{A}}$ or its normalized counterparts.\n\nTo make this clear, consider the following data models and resulting optimization formulations for graph recovery, summarized in Table \\ref{table:model_based_methods}. 
When the data ${\\bm{X}}$ are assumed to be Gaussian, the GL task is to estimate the precision matrix; minimizing the sparsity-regularized negative log-likelihood yields the convex problem \n$\\arg\\min_{\\mathbf{A}\\succeq\\mathbf{0}}\\left\\{{\\textrm{Tr}(S({\\bm{X}}) \\mathbf{A})} {+\\alpha\\|\\mathbf{A}\\|_1 - \\beta\\log\\det\\mathbf{A}} \\right\\}$, where $S({\\bm{X}})$ is the sample covariance matrix~\\citep{GLasso2008}.\nWhen the data are assumed to be smooth on $\\mathcal{G}_L$, i.e., the total variation Tr$({\\bm{X}}^\\top {\\bm{L}} {\\bm{X}})$ of the signals on the graph is small, a standard GL formulation is $\\arg\\min_{\\mathbf{A} \\in \\mathcal{C}} \\left\\{ {\\norm{{\\bm{A}} \\odot S({\\bm{X}})}{1}} {- \\alpha \\mathbf{1}^\\top \\text{log}({\\bm{A}} \\mathbf{1}) + \\beta \\norm{{\\bm{A}}}{F}^2} \\right\\}$, where $S({\\bm{X}})$ is the Euclidean distance matrix~\\citep{kalofolias2016learn}. \nWhen the observed graph $S({\\bm{X}})$ satisfies a graph convolutional relationship with ${\\bm{A}}_L$, i.e. \n$S({\\bm{X}}) = \\sum_{i=0}^{K} \\alpha_i {\\bm{A}}_L^i$, as is the case in linear network diffusion where again $S({\\bm{X}})$ is the sample covariance matrix, we can pose the GL task as the non-convex problem\n$\\arg\\min_{\\mathbf{A} \\in \\mathcal{C}} \\left\\{ {\\norm{S({\\bm{X}})-\\sum_{i=0}^{K} \\alpha_i {\\bm{A}}^i}{F}^2} {+ \\beta\\norm{{\\bm{A}}}{1}} \\right\\}$~\\citep{wasserman2022GDN}.\n\nThe MB methods that admit iterative solutions take the generic form ${\\bm{A}}[i+1] = h_{\\mathbf{\\theta}}({\\bm{A}}[i], S({\\bm{X}}))$, where ${\\bm{A}}[i]$ is the output of the $i$-th iteration, $h_{\\mathbf{\\theta}}$ is the contractive update function, and $\\mathbf{\\theta}$ are the regularization parameters. We implement the function $h_{\\mathbf{\\theta}}$ using GPU-compatible operations in PyTorch and wrap it in a loop until convergence is achieved. \n\n\n\\textbf{Unrolling-Based Graph Learning.}\nAlgorithm unrolling uses iterative algorithms, like those often used in signal processing, as an inductive bias in the design of neural network architectures. The algorithm is unrolled into a deep network by associating each layer with a single iteration of the truncated algorithm, producing a finite number of stacked layers as shown in Figure \\ref{fig:generic_UB}. We transform the regularization parameters of the iterative algorithms into learnable parameters of the neural network. Using a dataset, such parameters can be optimized for a given task by choosing an appropriate loss function and backpropagating gradients; see~\\citep{monga2021spmag} for more details.\n\nAll UB models share a significant amount of code in their construction, training, and evaluation. We consolidate this code into the novel \\texttt{UnrollingBase} class. To instantiate a new UB model, one simply inherits from \\texttt{UnrollingBase} and PyTorch-Lightning's \\texttt{LightningModule}, declares each layer's learnable parameters $\\theta^{i}$, and implements the $h_{\\mathbf{\\theta}^i}({\\bm{A}}[i], S({\\bm{X}}))$ \nfunction, denoted \\texttt{shared\\_step} in the user's class; a schematic sketch is given below. 
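\nThe following is a schematic sketch of this pattern; the constructor arguments, the update rule, the import path, and the exact \\texttt{shared\\_step} signature are illustrative assumptions rather than the pyGSL interface.\n\\begin{verbatim}\nimport torch\nimport pytorch_lightning as pl\nfrom gl.models import UnrollingBase  # assumed import path\n\nclass MyUnrolling(UnrollingBase, pl.LightningModule):\n    def __init__(self, depth=5):\n        super().__init__()\n        # theta^i: a learnable step size and threshold per layer\n        self.step = torch.nn.Parameter(0.1 * torch.ones(depth))\n        self.thresh = torch.nn.Parameter(0.01 * torch.ones(depth))\n\n    def shared_step(self, A, S_X, i):\n        # one unrolled iteration h_theta: a gradient-style step\n        # toward S(X) followed by soft-thresholding\n        A = A - self.step[i] * (A - S_X)\n        return torch.sign(A) * torch.relu(A.abs() - self.thresh[i])\n\\end{verbatim}\nHere the soft-thresholding plays the role of a learnable proximal operator. 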
\nRecall that each layer of the resulting unrolled network performs the same operation, represented by $h_{\\mathbf{\\theta}}({\\bm{A}}[i], S({\\bm{X}}))$ here; thus specifying the entire unrolled network reduces to choosing a depth and defining $h$ for a single layer.\nThe \\texttt{UnrollingBase} mixin takes care of stacking the layers together, feeding the outputs of one layer in as the inputs of the next, and implementing the required interface to allow the \\texttt{LightningModule} to automate the training and optimization.\n\n\\begin{comment}\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[width=5.4in]{figures\/gdn_intermediate_outputs_adjust.png}\n\\end{center}\n\\caption{{Intermediate outputs of an Unrolling Method on a \\texttt{gl.data.diffuse} dataset.\nThis is an included plotting utility for all Unrolling-Based methods which use the \\texttt{UnrollingBase} mixin.}}\n\\end{figure}\n\\end{comment}\n\n\n\\begin{figure}[t]\n\\centering\n \\begin{minipage}[b]{0.49\\textwidth}\n \\includegraphics[width=\\textwidth]{figures\/generic_unrolling_alter_SX.png}\n \\caption{{Schematic of a generic UB method.}}\n \\label{fig:generic_UB}\n \\end{minipage}\n \n \\begin{minipage}[b]{0.49\\textwidth}\n \\includegraphics[width=\\textwidth]{figures\/gdn_intermediate_outputs_alter_SX.png}\n \\caption{{Intermediate outputs plotted via pyGSL.}}\n \\label{fig:iterative_optim}\n \\end{minipage}\n\\end{figure}\n\n\n\\section{Data}\n\\label{sec:data}\n\\textbf{Synthetics}\nAs discussed in Section \\ref{sec:gl_methods}, MB methods motivate the optimization problem through an assumed generative model on the observed data. Since UB methods are inspired by their MB counterparts, \nsynthetic datasets which match these assumed generative models are important for model validation across GL methods.\nWe thus provide (i) an efficient graph sampling interface using NetworkX~\\citep{networkx} for a broad range of random graph ensembles;\nand (ii) an ability to generate signals conforming to \nsmoothness, \nlinear network diffusion (${\\bm{x}} = \\sum_{i=0}^p \\alpha_i A_L^{i}{\\bm{w}}$, where ${\\bm{w}}$ is typically white), or \nGaussianity (${\\bm{x}} \\sim \\mathcal{N}(\\mathbf{0}, {\\bm{A}}^{-1})$). These datasets can be accessed via the \\texttt{smooth}, \\texttt{diffuse}, and \\texttt{gaussian} classes, respectively, in the \\texttt{gl.data} subdirectory; a sketch of the diffusion case is given below.\n\n\n\\textbf{Real Data} We provide real datasets to evaluate the models on, including over $1000$ Structural Connectivity (SC) and Functional Connectivity (FC) pairs extracted from the HCP-YA neuroimaging dataset~\\citep{glasser2016human}, as well as co-location and social network data\nfrom the Thiers13 dataset~\\citep{Genois2018}. We also provide the ability to create `Pseudo-synthetic' data by using the graphs from the real datasets, and sampling synthetic signals on top of them that are, e.g., smooth, diffused, or Gaussian. This provides a gentler transition to real data, which we found useful when developing and validating GL methods.\n\nSee Appendix \\ref{app:data} for a further discussion on the data and its associated class layout in pyGSL.
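\n\nTo make the generative models concrete, the following is a minimal sketch of sampling a diffused dataset with NetworkX and NumPy; it mirrors the logic of \\texttt{gl.data.diffuse} but is not its actual implementation.\n\\begin{verbatim}\nimport networkx as nx\nimport numpy as np\n\nN, P = 20, 100\nG = nx.erdos_renyi_graph(N, p=0.3)        # sample a latent graph\nA = nx.to_numpy_array(G)\nalphas = [1.0, 0.5, 0.25]                 # diffusion coefficients\nH = sum(a * np.linalg.matrix_power(A, i)  # H = sum_i alpha_i A^i\n        for i, a in enumerate(alphas))\nW = np.random.randn(N, P)                 # white excitation signals\nX = H @ W                                 # diffused observations\n\\end{verbatim}\nThe pair $({\\bm{A}}, {\\bm{X}})$ then serves as one training example for a GL method.\n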
\n\\begin{comment}\n\\section{Visualization, Logging, and Project Management}\nIn the mixin we already log all metrics and loss values to WandB. This can be easily changed to any logger of choice due to the high-level implementation in PyTorch-Lightning.\n\nShow unrolling image from GDN paper.\nShow WandB plots.\nEasy installation: Repository can be downloaded from the GitHub link. Yaml files are included for simple conda environment setup to make the project more agnostic to platform.\nCode quality: The code includes unit-tests for fundamental shared functions, e.g. computing metrics, sampling graphs, etc., using pytest and hypothesis; as of v0.3.1, test coverage is at 98\\%. \n\\end{comment}\n\n\n\\section{Related Work}\nWhile there has been an explosion of software development in adjacent fields such as geometric DL~\\citep{pytorch_geom} and network-based modeling of complex systems~\\citep{deepgraph}, there has been little in the way of comprehensive GL packages.\n\\href{https:\/\/epfl-lts2.github.io\/gspbox-html\/doc\/learn_graph\/}{GSPBox} is an (unmaintained) Matlab toolkit which focuses on traditional GSP tasks and offers a single function to perform GL in the smooth signal case~\\citep{perraudin2016gspbox}.\nThere are many algorithms for Gaussian graphical model selection, including likelihood-based~\\citep{Yuan2007biometrika, GLasso2008}, regression-based~\\citep{meinshausen2006high, peng2009partial}, and constrained \n$\\ell_1$-minimization\n~\\citep{cai2011constrained} approaches; almost all implementations are in R or Matlab. Recently~\\citet{choi2021efficient} released an R-based GPU implementation of CONCORD-PCD, a regression-based approach which uses parallelized coordinate descent for fast inference. Software in~\\citep{lassance2020mlsp} offers benchmarks for MB approaches.\nTo the best of the authors' knowledge, no encompassing framework exists for UB methods, only standalone implementations released by the respective methods' authors.\n\n\n\n\\section{Concluding remarks}\nWe introduced the pyGSL framework for fast, scalable, GPU-aware GL. We provide several synthetic and real datasets for model evaluation, as well as a novel class for UB architectures which dramatically lowers the time and expertise required to use and build such methods. It also ensures their compatibility as sub-modules in larger gradient-based learning systems. This is an active project, and we plan to add more benchmark datasets and the latest GL methods as they are published. We welcome more researchers and engineers to join, develop, maintain, and improve this toolkit to push forward the research and deployment of network topology inference algorithms.\n\n\n\n\n\\bibliographystyle{unsrtnat}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\n\\label{sec1} \\setcounter{equation}{0}\n\\setcounter{section}{1}\n\n\n\n\n\nA result of Hawking \\cite{Hawking} shows that a cross section of any connected component of the event horizon in a $4$-dimensional asymptotically flat stationary spacetime satisfying the dominant energy condition has positive Euler characteristic, and hence must be topologically a $2$-sphere. The conclusion also holds without the stationarity condition provided one replaces a cross section of the event horizon with a stable apparent horizon. These results were generalized by Galloway and Schoen \\cite{GallowaySchoen} to show that a cross section of any connected component of the event horizon in an $n$-dimensional asymptotically flat stationary spacetime is an $(n-2)$-dimensional Riemannian manifold with positive Yamabe invariant. In dimension $5$ the additional hypothesis of bi-axial symmetry restricts the possible topologies further, so that the only admissible topologies are $S^3$, $S^1\\times S^2$, and $L(p,q)$ \\cite{HollandsYazadjiev}. 
Explicit examples of stationary vacuum bi-axisymmetric solutions\nwith horizon topology $S^3$ and $S^1\\times S^2$ have been constructed by Myers-Perry (sphere) \\cite{MyersPerry},\nEmparan-Reall (singly spinning ring) \\cite{EmparanReall}, and Pomeransky-Sen'kov (doubly spinning ring) \\cite{PomeranskySenkov}.\nIn particular, stationary vacuum black holes are not determined solely by their mass and angular momenta in higher dimensions. That is, the no-hair conjecture fails, as\nthere exist black ring solutions having the same\nmass and angular momenta as a Myers-Perry black hole.\nNonetheless, the underlying result supporting the validity of the no-hair theorem in four dimensions, a uniqueness theorem for harmonic maps with prescribed singularities into a nonpositively curved target, still holds in higher dimensions. In particular any bi-axially symmetric stationary vacuum solution is determined by a finite set of parameters. It is the purpose of this paper to establish a partial converse: given any admissible set of parameters, there is a unique solution of the reduced equations. Whether this solution of the reduced equations generates a physical spacetime solution then depends on the absence of conical singularities on the axes.\n\nThe axes correspond to the locus where a closed-orbit Killing field degenerates, and in the domain $\\mathbb{R}^3$ of the harmonic map these are identified by a number of intervals on the $z$-axis called \\emph{axis rods}. The axis rods are separated by intervals corresponding to horizons, and by points which are referred to as \\emph{corners}. Note that this precludes the case of degenerate horizons, in which horizons are represented by points instead of intervals. In addition, the end points of the horizon rods are named \\emph{poles}. Denote by $\\Gamma$ the $z$-axis with the interior of all the horizon rods removed, and let $\\{p_l\\}$ represent the corners and poles. Note that there are always two semi-infinite axes, labeled north and south. We assign a pair of relatively prime integers $(m_l,n_l)$ called the \\emph{rod structure} to each axis rod $\\Gamma_l$, such that the north and south semi-infinite axes are assigned the rod structures $(1,0)$ and $(0,1)$, respectively. This pair of numbers indicates which linear combination of rotational Killing fields vanishes on the associated rod.\nIf $(m_l,n_l)$ and $(m_{l+1},n_{l+1})$ are the rod structures assigned to two consecutive axis rods separated by a corner, then the \\emph{admissibility condition} \\cite{HollandsYazadjiev} is\n\\begin{equation} \\label{admissibility0}\n\\operatorname{det}\\begin{pmatrix} m_l & n_l \\\\ m_{l+1} & n_{l+1} \\end{pmatrix} = \\pm 1.\n\\end{equation}\nThis condition is to prevent orbifold singularities at the corners \\cite{Evslin}.\nHorizon rods are assigned the rod structure $(0,0)$. Finally, assign to each axis rod $\\Gamma_l$ a constant $\\mathbf{c}_l\\in\\mathbb{R}^2$, the \\emph{potential constant}. The difference between the values of these constants on two axes adjoining a horizon rod is proportional to the angular momenta of this horizon component, as calculated by Komar integrals. A \\textit{rod data set} $\\mathcal{D}$ consists of the corners and poles $\\{p_l\\}$, the rod structures $\\{(m_l,n_l)\\}$, and the potential constants $\\{\\mathbf{c}_l\\}$, which are assumed not to vary between two consecutive rods separated by a corner. 
This data determines uniquely the prescribed singularities of the harmonic map $\\varphi\\colon\\mathbb{R}^3\\setminus\\Gamma\\to SL(3,\\mathbb{R})\/SO(3)$ as described more precisely in Section \\ref{sec4}, and will be referred to as admissible if it satisfies \\eqref{admissibility0} at each corner. For technical reasons an additional \\textit{compatibility condition} will be imposed to aid the existence result. This condition only applies when two consecutive corners are present. Let $p_{l-1}$ and $p_{l}$ be two consecutive corners with axis rods $\\Gamma_{l-1}$ above $p_{l-1}$, $\\Gamma_{l}$ between $p_{l-1}$ and $p_{l}$, and $\\Gamma_{l+1}$ below $p_{l}$.\nThen the compatibility condition states that the first components of the rod structures for $\\Gamma_{l-1}$ and $\\Gamma_{l+1}$ have opposite signs if both are nonzero\n\\begin{equation}\\label{compatibilitycondition}\nm_{l-1}m_{l+1}\\leq 0,\n\\end{equation}\nwhenever the determinants \\eqref{admissibility0} for the two corners $p_{l-1}$ and $p_{l}$ are both $+1$. Note that this latter requirement on the determinants may always be achieved by multiplying each component of the rod structures for $\\Gamma_{l-1}$ and $\\Gamma_{l}$ by $-1$ if necessary; this is an operation which does not change the properties of a rod. A worked example illustrating these conditions is given below.\n\n\nIn order to determine the physical relevance of a solution,\ndefine on each bounded axis rod $\\Gamma_l$ a function\n$b_l$ to be the logarithm of the limiting ratio between the length of the closed orbit of the Killing field degenerating on $\\Gamma_l$, and $2\\pi$ times the radius from $\\Gamma_l$ to this orbit. It turns out that $b_l$ is constant on $\\Gamma_l$. The absence of a conical singularity on $\\Gamma_l$ is the \\emph{balancing condition} $b_l=0$.\n\nAn asymptotically flat stationary vacuum spacetime will be referred to as \\textit{well-behaved} if the\norbits of the stationary Killing field are complete, the\ndomain of outer communication (DOC) is globally hyperbolic, and the DOC\ncontains an acausal spacelike connected hypersurface which is asymptotic to the canonical slice in the asymptotic end and whose boundary\nis a compact cross section of the horizon. These assumptions are consistent with those of \\cite{ChruscielCosta}, and are used for the reduction of the stationary vacuum equations. 
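\nTo illustrate the admissibility and compatibility conditions, consider three consecutive axis rods carrying the rod structures\n\\begin{equation*}\n(m_{l-1},n_{l-1})=(1,0),\\quad\\quad (m_{l},n_{l})=(0,1),\\quad\\quad (m_{l+1},n_{l+1})=(-1,n)\n\\end{equation*}\nfor some integer $n$. The determinants in \\eqref{admissibility0} at the two corners are $1\\cdot 1-0\\cdot 0=1$ and $0\\cdot n-1\\cdot(-1)=1$, so both corners are admissible with determinant $+1$, and the compatibility condition \\eqref{compatibilitycondition} holds since $m_{l-1}m_{l+1}=-1\\leq 0$.\n\n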
The main result may now be stated as follows.\n\n\n\\begin{theorem} \\label{main}\\par\n\\noindent\n\\begin{enumerate}[(i)]\n\\item\nA well-behaved 5-dimensional asymptotically flat, stationary, bi-axially symmetric solution of the vacuum Einstein equations without degenerate horizons gives rise to a harmonic map $\\varphi\\colon\\mathbb{R}^3\\setminus\\Gamma\\to SL(3,\\mathbb{R})\/SO(3)$ with prescribed singularities associated with an admissible rod data set $\\mathcal{D}$, and satisfying $b_l=0$ on all bounded axis rods.\n\\item\nConversely,\ngiven an admissible rod data set $\\mathcal{D}$ satisfying the compatibility condition \\eqref{compatibilitycondition}, there is a unique harmonic map $\\varphi\\colon\\mathbb{R}^3\\setminus\\Gamma\\to SL(3,\\mathbb{R})\/SO(3)$ with prescribed singularities on $\\Gamma$ corresponding to $\\mathcal{D}$.\n\\item\nA well-behaved 5-dimensional asymptotically flat, stationary, bi-axially symmetric solution of the vacuum Einstein equations without degenerate horizons can be constructed from $\\varphi$ if and only if the resulting metric coefficients are sufficiently smooth across $\\Gamma$ and $b_l=0$ on any bounded axis rod.\n\\end{enumerate}\n\\end{theorem}\n\nThe reduction of the Einstein vacuum equations to a harmonic map is well known \\cites{Harmark,Maison} and follows closely the 4-dimensional case. However, there are several new difficulties associated with the analysis of the resulting problem. First, even without angular momenta the problem is nonlinear, in contrast to the linear structure present in the static 4D setting. This makes the construction of a \\emph{model map} prescribing the singular behavior near $\\Gamma$\nmuch more delicate, whereas in the 4D case the superposition of Schwarzschild solutions is sufficient. Next, the target $SL(3,\\mathbb{R})\/SO(3)$ is a rank 2 symmetric space with nonpositive sectional curvature, rather than rank 1 with negative sectional curvature as in 4D. We recall that the theory of harmonic maps into rank 1 symmetric spaces, in particular real hyperbolic space $\\mathbb{H}^n$, has been extensively investigated, e.g. \\cite{Schoen, LiTam}, yet comparatively little is known for the cases of higher rank targets. These properties of the target hyperbolic space $\\mathbb{H}^2= SL(2,\\mathbb{R})\/SO(2)$ in dimension four played a central role in obtaining a priori estimates to prove existence, and without these properties in the 5D case new techniques must be developed. Furthermore, in higher dimensions there is an abundance of possible rod structures, and they must obey an admissibility condition \\eqref{admissibility0} not present in four dimensions. Finally, the study of conical singularities and their formulation as the balancing condition $b_l=0$, while similar to the 4D case, requires a more precise analysis.\n\nSeveral explicit solutions of these equations and related ones have previously been found. As mentioned above, the Myers-Perry black hole \\cite{MyersPerry} generalizes the Kerr black hole to five dimensions, and is a 3-parameter family of solutions with spherical $S^{3}$ horizon topology. Emparan and Reall \\cite{EmparanReall} found the first example with nontrivial topology, namely a family of black ring solutions with an $S^1\\times S^2$ horizon and one angular momentum. These were later generalized by Pomeransky-Sen'kov \\cite{PomeranskySenkov} to a full 3-parameter family with two angular momenta. 
A multiple horizon solution with two components consisting of an $S^3$ surrounded by an $S^1\\times S^2$, referred to as black saturn, was constructed by Elvang and Figueras \\cite{ElvangFigueras}. In this solution both the sphere and ring rotate only in one plane, which is associated with the $S^1$ direction of the ring.\nFurther multiple horizon solutions include the dipole black rings (or di-rings) \\cites{EvslinKrishnan,IguchiMishima} consisting of two concentric singly spinning rings rotating in the same plane, and the bicycling black rings (or bi-rings) \\cites{ElvangRodriguez,Izumi} consisting of two singly spinning rings rotating in orthogonal planes. In the minimal supergravity setting, Kunduri and Lucietti \\cite{LuciettiKunduri} found the first examples of regular black holes having a lens space topology $\\mathbb{RP}^3=L(2,1)$. These were generalized by Tomizawa and Nozawa to more general lens topology $L(p,1)$ in \\cite{TomizawaNozawa}. Both of these black lens solutions are supersymmetric and hence extremal. It is an important open problem to\nfind regular vacuum black holes with lens topology. In this direction, Chen and Teo \\cite{ChenTeo} found vacuum black lenses via the inverse scattering method; however, their solutions either possess conical singularities or have a naked singularity.\nA disadvantage of the methods used to construct the above examples\nis that they cannot produce all possible regular solutions. In contrast, the PDE approach used here generates all candidates with an admissible\/compatible rod structure, where the only obstruction is the possibility of conical singularities on the bounded components of the axes. Furthermore, the variety of black holes that may be constructed from admissible rod data which also satisfy the compatibility condition is vast. In particular, multiple and single component black lenses $L(p,q)$ are possible, for arbitrary relatively prime $p$ and $q$, as is shown in Proposition \\ref{lensrod} of Section \\ref{sec4}.\n\nThe existence portion of Theorem \\ref{main} may be generalized by forgoing the admissibility condition \\eqref{admissibility0}. This requires, in place of \\eqref{compatibilitycondition}, a \\textit{generalized compatibility condition}\n\\begin{equation}\\label{gcompatibilitycondition0}\nm_{l-1}m_{l+1} \\operatorname{det}\\begin{pmatrix} m_{l-1} & n_{l-1} \\\\ m_{l} & n_{l} \\end{pmatrix}\n\\operatorname{det}\\begin{pmatrix} m_l & n_l \\\\ m_{l+1} & n_{l+1} \\end{pmatrix} \\leq 0,\n\\end{equation}\nwhich is used in the construction of a model map. Note that if \\eqref{admissibility0} is satisfied then \\eqref{gcompatibilitycondition0} reduces to \\eqref{compatibilitycondition}. However, without the admissibility condition orbifold singularities at corner points will be present.\n\n\\begin{theorem}\\label{main2}\nGiven a rod data set $\\mathcal{D}$ satisfying the generalized compatibility condition \\eqref{gcompatibilitycondition0}, there is a unique harmonic map $\\varphi\\colon\\mathbb{R}^3\\setminus\\Gamma\\to SL(3,\\mathbb{R})\/SO(3)$ with prescribed singularities on $\\Gamma$ corresponding to $\\mathcal{D}$. 
From this map a well-behaved 5-dimensional asymptotically flat, stationary, bi-axially symmetric solution of the vacuum Einstein equations without degenerate horizons can be constructed having orbifold singularities at the corners if and only if the resulting metric coefficients are sufficiently smooth across $\\Gamma$ and $b_l=0$ on any bounded axis rod.\n\\end{theorem}\n\nThis result has been generalized in \\cite{KhuriWeinsteinYamada} to include the asymptotically Kaluza-Klein and asymptotically locally Euclidean cases, in which cross sections at infinity are $S^1 \\times S^2$ and quotients of $S^3$ respectively.\n\nThe organization of this paper is as follows.\nIn Section \\ref{sec2} we review the reduction of the Einstein vacuum equations, in the bi-axially symmetric stationary setting, to a harmonic map having the symmetric space $SL(3,\\mathbb{R})\/SO(3)$ as target. Relevant aspects of the geometry of this symmetric space are then discussed in Section \\ref{sec3}.\nIn Section \\ref{sec4} a detailed analysis of rod structures and the hypotheses associated with them is given. The model map is constructed in Section \\ref{sec5}, and existence and uniqueness for the harmonic map problem is proven in Section \\ref{sec7} using energy estimates established in Section \\ref{sec6}. Finally in Section \\ref{sec8} it is shown how the desired spacetime is produced from the harmonic map, and regularity issues are discussed. An appendix is included in order to give a topological characterization of corners.\n\n\n\n\n\n\\medskip\n\n\\textbf{Acknowledgements.}\nThe authors thank the Erwin Schr\\\"odinger International Institute for Mathematics and Physics and the organizers of its ``Geometry and Relativity'' program, where portions of this paper were written. The third author also thanks Koichi Kaizuka for useful conversations concerning the geometry of symmetric spaces.\n\n\\section{Dimensional Reduction of the Vacuum Einstein Equations} \\label{setup}\n\\label{sec2} \\setcounter{equation}{0}\n\\setcounter{section}{2}\n\n\n\n\nLet $\\mathcal{M}^5$ be the domain of outer communication for a well-behaved asymptotically flat, stationary vacuum, bi-axisymmetric spacetime. In particular its isometry group admits\n$\\mathbb{R}\\times U(1)^2$ as a subgroup in which the $\\mathbb{R}$-generator $\\xi$ (time translation) is timelike in the asymptotic region, and the $U(1)^2$-generators $\\eta^{(i)}$, $i=1,2$ yield spatial rotation.\nSince the three generators for the isometry subgroup commute, they may be expressed as coordinate vector fields $\\xi=\\partial_{t}$ and $\\eta^{(i)}=\\partial_{\\phi^{i}}$.\nMoreover by abusing notation so that the same symbols denote dual covectors it holds that\n\\begin{equation}\\label{aaa}\n\\star\\left(\\xi\\wedge\\eta^{(1)}\\wedge\\eta^{(2)}\\wedge d\\xi\\right)\n=\\star\\left(\\xi\\wedge\\eta^{(1)}\\wedge\\eta^{(2)}\\wedge d\\eta^{(1)}\\right)\n=\\star\\left(\\xi\\wedge\\eta^{(1)}\\wedge\\eta^{(2)}\\wedge d\\eta^{(2)}\\right)\n=0,\n\\end{equation}\nwhere $\\star$ denotes the Hodge star operation. To see this, observe that the vacuum\nequations imply that the exterior derivative of the three quantities in \\eqref{aaa} vanishes, and since these functions vanish on the axis in the asymptotically flat end they must vanish everywhere. Therefore the Frobenius theorem applies to show that the 2-plane distribution orthogonal to the three Killing vectors is integrable. 
We may then take coordinates on one of these resulting 2-dimensional orbit manifolds, and Lie drag them to get a system of coordinates such that the spacetime metric decomposes in the following way\n\\begin{equation}\ng=\\sum_{a,b=1}^{3}q_{ab}(x)dy^{a}dy^{b}+\\sum_{c,d=4}^{5}h_{cd}(x)dx^{c}dx^{d},\n\\end{equation}\nwhere $y=(\\phi^1,\\phi^2,t)$. The fiber metric may be expressed by\n\\begin{equation}\\label{fibermetric}\nq=f_{ij}(d\\phi^{i}+v^{i}dt)(d\\phi^{j}+v^{j}dt)-f^{-1}\\rho^2 dt^{2},\n\\end{equation}\nfor some functions $v^i$ where $f=\\operatorname{det} f_{ij}$ and $\\rho^2=-\\operatorname{det} q_{ab}$. It is shown in \\cites{Chrusciel,ChruscielCosta} that the determinant of the fiber metric is nonpositive, and\nthe vacuum equations imply that $\\rho$ is harmonic with respect to the metric $fh$, since\n\\begin{equation}\n\\Delta_{fh}\\rho=\\rho^{-1}R_{tt}-\\rho f^{-1}f^{ij} R_{ij}=0.\n\\end{equation}\nFrom this it can be shown \\cites{Chrusciel,ChruscielCosta} that $\\rho$ is a well-defined coordinate function on the quotient\n$\\mathcal{M}^{5}\/\\left[\\mathbb{R}\\times U(1)^2\\right]$ away from the poles, that is $|\\nabla\\rho|\\neq 0$. Since the orbit space is simply connected \\cite{HollandsYazadjiev1} there is a globally defined harmonic conjugate function $z$, which together with $\\rho$ yields an isothermal coordinate system so that\n\\begin{equation}\nfh=e^{2\\sigma}(d\\rho^2 +dz^2),\n\\end{equation}\nfor some function $\\sigma=\\sigma(\\rho,z)$. We now have the canonical Weyl-Papapetrou expression for the spacetime metric\n\\begin{equation} \\label{metric}\ng=f^{-1}e^{2\\sigma}(d\\rho^2+dz^2)-f^{-1}\\rho^2 dt^2\n+f_{ij}(d\\phi^{i}+v^{i}dt)(d\\phi^{j}+v^{j}dt).\n\\end{equation}\n\nLet\n\\begin{equation}\ng_{3}=e^{2\\sigma}(d\\rho^2+dz^2)-\\rho^2 dt^2,\\quad\\quad\\quad\nA^{(i)}=v^{i}dt,\n\\end{equation}\nthen\n\\begin{equation}\ng=f^{-1}g_{3}+f_{ij}(d\\phi^{i}+A^{(i)})(d\\phi^{j}+A^{(j)}).\n\\end{equation}\nThis represents a Kaluza-Klein reduction with 2-torus fibers. In this setting the vacuum Einstein equations yield a 3-dimensional version of Einstein-Maxwell theory, with the `Maxwell equations' given by\n\\begin{equation}\\label{maxwell}\nd(ff_{ij}\\star_{3} dA^{(j)})=0,\n\\end{equation}\nwhere $\\star_{3}$ is the Hodge star operation with respect to $g_{3}$. It follows that there exist globally defined (due to simple connectivity) twist potentials satisfying\n\\begin{equation}\\label{chi}\nd\\omega_{i}=2f f_{ij}\\star_{3}dA^{(j)}.\n\\end{equation}\nIn particular if $v^i$ are constant then the potentials $\\omega_i$ are constant, and vice versa. To explain the geometric meaning of the forms appearing on the right-hand side of \\eqref{chi}\nobserve that $\\eta^{(i)}=f_{ij}\\left(d\\phi^j+v^j dt\\right)$ is the dual 1-form to $\\partial_{\\phi^{i}}$, and according to Frobenius' theorem\nthe forms $\\eta^{(1)}\\wedge\\eta^{(2)}\\wedge d\\eta^{(i)}$\nmeasure the lack of integrability of the orthogonal complement distribution to the axisymmetric Killing fields. Moreover, it turns out that these forms are directly related to \\eqref{chi}. 
Indeed let $\\epsilon$, $\\epsilon_{3}$, and $\\star_{3}$ denote the volume forms with respect to $g$ and $g_{3}$, and the Hodge star operator with respect to $g_{3}$, respectively, then since\n\\begin{equation}\nd\\eta^{(i)}=f_{ij}d A^{(j)}+df_{ij}\\wedge\\left(f^{ja}\\eta^{(a)}\\right)\n\\end{equation}\nwe have\n\\begin{align}\\label{komar}\n\\begin{split}\n\\star(\\eta^{(1)}\\wedge\\eta^{(2)}\\wedge d\\eta^{(i)})=&\nf_{ij}\\star(\\eta^{(1)}\\wedge\\eta^{(2)}\\wedge dA^{(j)})\\\\\n=&f_{ij}\\epsilon(\\text{ }\\!\\cdot\\text{ }\\!,\\partial_{\\phi^1},\\partial_{\\phi^2},\\partial_{l},\\partial_{k})\n\\left(dA^{(j)}\\right)^{lk}\\\\\n=&f^{-1}f_{ij}\\epsilon_{3}(\\text{ }\\!\\cdot\\text{ }\\!,\\partial_{l},\\partial_{k})\\left(dA^{(j)}\\right)^{lk}\\\\\n=&ff_{ij}\\star_{3} dA^{(j)}.\n\\end{split}\n\\end{align}\nNote also that since the spacetime is vacuum and $\\eta^{(i)}$ are dual to Killing fields,\nstandard computations along with Cartan's `magic' formula show that the 1-forms\n\\begin{equation}\\label{komar1}\n\\star(\\eta^{(1)}\\wedge\\eta^{(2)}\\wedge d\\eta^{(i)})=\\iota_{\\eta^{(1)}}\n\\iota_{\\eta^{(2)}}\\star d\\eta^{(i)}\n\\end{equation}\nare closed, where $\\iota$ denotes interior product. This yields an alternate proof of \\eqref{maxwell}, and confirms that the twist potentials $\\omega_i$ agree with those associated with the Komar expression for angular momentum.\n\n\nNext, following Maison \\cite{Maison} define the following $3\\times 3$ matrix\n\\begin{equation}\\label{bigmatrix}\n\\Phi=\n\\left(\n \\begin{array}{ccc}\n f^{-1} & - f^{-1} \\omega_1 & - f^{-1} \\omega_2 \\\\\n -f^{-1} \\omega_1 & f_{11} + f^{-1} \\omega_1^2 & f_{12} + f^{-1} \\omega_1 \\omega_2 \\\\\n -f^{-1} \\omega_2 & f_{12} + f^{-1} \\omega_1 \\omega_2 & f_{22} + f^{-1} \\omega_2^2\n \\end{array}\n \\right)\n\\end{equation}\nwhich is symmetric, positive definite, and has $\\operatorname{det}\\Phi=1$.\nThe inverse matrix is\n\\begin{equation}\n\\Phi^{-1}=\n\\left(\n \\begin{array}{ccc}\n f + f^{11} \\omega_1^2 + f^{22} \\omega_2^2 + 2 f^{12} \\omega_1 \\omega_2 & f^{11} \\omega_1 + f^{21} \\omega_2 & f^{12} \\omega_1 + f^{22} \\omega_2 \\\\\n f^{11} \\omega_1 + f^{12} \\omega_2 & f^{11} & f^{12} \\\\\n f^{21} \\omega_1 + f^{22} \\omega_2 &f^{21} & f^{22} \\end{array}\n \\right)\n .\n\\end{equation}\nThis allows for a simplified expression of the 3-dimensional reduced Einstein-Hilbert action\n\\begin{equation}\\label{action}\n\\mathcal{S}=\\int_{\\mathbb{R}\\times \\left(\\mathcal{M}^{5}\/[\\mathbb{R}\\times U(1)^2]\\right)}\nR^{(3)}\\star_{3}1+\\frac{1}{4}\\mathrm{Tr}\\left(\\Phi^{-1}d\\Phi\\wedge\\star_{3}\\Phi^{-1}d\\Phi\\right).\n\\end{equation}\nThe Einstein-harmonic map system arising from this action is\n\\begin{equation}\\label{einstein}\nR^{(3)}_{kl}-\\frac{1}{2}R^{(3)}(g_3)_{kl}=T_{kl},\\quad\\quad \\operatorname{div}_{\\mathbb{R}^3}\\left(\\Phi^{-1}\n\\nabla\\Phi\\right)=0,\n\\end{equation}\nwhere the stress-energy tensor for the harmonic map is\n\\begin{equation}\nT_{kl}=\\mathrm{Tr}\\left(J_{k}J_{l}\\right)\n-\\frac{1}{2}g_{3}^{mn}\\mathrm{Tr}\\left(J_{m}J_{n}\\right)\n(g_{3})_{kl}\n\\end{equation}\nwith the current\n\\begin{equation}\nJ_{l}=\\Phi^{-1}\\partial_{l}\\Phi.\n\\end{equation}\nNote that by taking a trace the Einstein equations may be reexpressed as\n\\begin{equation}\nR^{(3)}_{kl}=\\mathrm{Tr}\\left(J_{k}J_{l}\\right).\n\\end{equation}\nFurthermore, in the $\\Phi$ portion of the action cancelations occur so that $e^{2\\sigma}$ does not appear, and this results in the divergence of 
\\eqref{einstein} with respect to\nthe Euclidean metric\n\\begin{equation}\\label{flatmetric}\n\\delta=d\\rho^2+dz^2+\\rho^2 d\\phi^2,\n\\end{equation}\nwhere $\\phi$ is an auxiliary coordinate.\nThis also implies that the stress-energy tensor is divergence free with respect to the Euclidean metric\n\\begin{equation}\\label{divergence}\n0=\\left(\\operatorname{div}_{\\mathbb{R}^3}T\\right)(\\partial_{\\rho})\n=\\partial_{\\rho}(\\rho T_{\\rho\\rho})+\\partial_{z}(\\rho T_{\\rho z}),\\quad\\quad\n0=\\left(\\operatorname{div}_{\\mathbb{R}^3}T\\right)(\\partial_{z})\n=\\partial_{\\rho}(\\rho T_{\\rho z})+\\partial_{z}(\\rho T_{z z}).\n\\end{equation}\n\nThe divergence free property of $T$ follows from the harmonic map equations. To see this in a more general harmonic map setting, consider maps $\\varphi: (M,\\mathrm{g})\\rightarrow (N,\\mathrm{h})$ with harmonic energy\n\\begin{equation}\nE=\\frac{1}{2}\\int_{M}|d\\varphi|^2 dx_{\\mathrm{g}}=\\frac{1}{2}\\int_{M}\\mathrm{g}^{\\mathrm{ij}}\n\\mathrm{h}_{\\mathrm{lk}}\\partial_{\\mathrm{i}}\\varphi^{\\mathrm{l}}\n\\partial_{\\mathrm{j}}\\varphi^{\\mathrm{k}} dx_{\\mathrm{g}}.\n\\end{equation}\nThe first variation is given by\n\\begin{equation}\n\\frac{\\delta E}{\\delta \\mathrm{g}}=\\frac{1}{2}\\int_{M}\\delta \\mathrm{g}^{\\mathrm{ij}}\\left(\\mathrm{h}_{\\mathrm{lk}}\n\\partial_{\\mathrm{i}}\\varphi^{\\mathrm{l}}\\partial_{\\mathrm{j}}\\varphi^{\\mathrm{k}}\n-\\frac{1}{2}|d\\varphi|^2 \\mathrm{g}_{\\mathrm{ij}}\\right) dx_{\\mathrm{g}},\n\\end{equation}\nand the stress-energy tensor is\n\\begin{equation}\nT_{\\mathrm{ij}}=\\langle\\partial_{\\mathrm{i}}\\varphi,\\partial_{\\mathrm{j}}\n\\varphi\\rangle_{\\mathrm{h}}\n-\\frac{1}{2}|d\\varphi|^2 \\mathrm{g}_{\\mathrm{ij}}.\n\\end{equation}\nThe harmonic map equations\n\\begin{equation}\\label{tensiondef}\n\\tau(\\varphi)=\\hat{\\nabla}^{\\mathrm{i}}\\partial_{\\mathrm{i}}\\varphi=0\n\\end{equation}\nthen imply that the stress-energy tensor is divergence free\n\\begin{align}\n\\begin{split}\n\\nabla^{\\mathrm{i}}T_{\\mathrm{ij}}=\\langle\\hat{\\nabla}^{\\mathrm{i}}\n\\partial_{\\mathrm{i}}\\varphi,\\partial_{\\mathrm{j}}\\varphi\\rangle_{\\mathrm{h}}\n+\\langle\\partial_{\\mathrm{i}}\\varphi,\\hat{\\nabla}^{\\mathrm{i}}\n\\partial_{\\mathrm{j}}\\varphi\\rangle_{\\mathrm{h}}\n-\\mathrm{g}^{\\mathrm{lm}}\\langle\\hat{\\nabla}_{\\mathrm{j}}\n\\partial_{\\mathrm{l}}\\varphi,\\partial_{\\mathrm{m}}\\varphi\\rangle_{\\mathrm{h}}=0.\n\\end{split}\n\\end{align}\nHere $\\hat{\\nabla}$ is the induced connection on the bundle $T^{*}M\\otimes \\varphi^{-1} TN$, and $\\tau(\\varphi)$ denotes the tension field which is a section of the pullback bundle $\\varphi^{-1}TN$.\n\n\nThe Einstein equations of \\eqref{einstein} may be solved via quadrature. This may be shown by computing each equation in terms of metric components. 
Recall that\n\\begin{equation}\nR_{kl}^{(3)}=\\partial_{m}\\Gamma_{kl}^{m}-\\partial_{l}\\Gamma_{km}^{m}\n+\\Gamma_{kl}^{m}\\Gamma_{nm}^{n}-\\Gamma_{kn}^{m}\\Gamma_{lm}^{n},\n\\end{equation}\nand\n\\begin{equation}\nR^{(3)}=g_3^{kl}R_{kl}^{(3)}\n=-\\rho^{-2}R_{tt}^{(3)}+e^{-2\\sigma}\\left(R_{\\rho\\rho}^{(3)}\n+R_{zz}^{(3)}\\right).\n\\end{equation}\nThe Christoffel symbols are\n\\begin{equation}\n\\Gamma_{tt}^{l}=\\delta^{l\\rho}e^{-2\\sigma}\\rho,\\quad\\quad\n\\Gamma_{ti}^{l}=\\delta_{t}^{l}\\delta_{i}^{\\rho}\\rho^{-1},\\quad\\quad\n\\Gamma_{ij}^{l}=\\delta_{j}^{l}\\partial_{i}\\sigma+\\delta_{i}^{l}\\partial_{j}\\sigma\n-\\delta_{ij}\\delta^{lm}\\partial_{m}\\sigma\\text{ }\\text{ }\\text{ for }\\text{ }\\text{ }i,j\\neq t.\n\\end{equation}\nIt follows that\n\\begin{equation}\nR_{tt}^{(3)}=R_{ti}^{(3)}=0, \\quad i\\neq t,\\quad\nR_{\\rho\\rho}^{(3)}=-\\Delta_{\\mathbb{R}^2}\\sigma+\\frac{1}{\\rho}\\partial_{\\rho}\\sigma,\\quad\nR_{zz}^{(3)}=-\\Delta_{\\mathbb{R}^2}\\sigma-\\frac{1}{\\rho}\\partial_{\\rho}\\sigma,\\quad\nR_{\\rho z}^{(3)}=\\frac{1}{\\rho}\\partial_{z}\\sigma.\n\\end{equation}\nFrom this the quadrature equations for $\\sigma$ are found to be\n\\begin{equation}\n\\partial_{\\rho}\\sigma=\\frac{\\rho}{2}\\left(R_{\\rho\\rho}^{(3)}-R_{zz}^{(3)}\\right)\n=\\frac{\\rho}{2}\\left(\\mathrm{Tr}(J_{\\rho}J_{\\rho})-\\mathrm{Tr}(J_{z}J_{z})\\right)\n=\\rho T_{\\rho\\rho}=-\\rho T_{zz},\n\\end{equation}\n\\begin{equation}\n\\partial_{z}\\sigma=\\rho R_{\\rho z}^{(3)}=\\rho T_{\\rho z},\n\\end{equation}\nwhich may be rewritten more conveniently as\n\\begin{equation}\\label{sigma}\nd\\sigma=-\\iota_{\\eta}\\ast\\iota_{\\partial_{z}}T\n\\end{equation}\nwhere $\\ast$ is the Hodge star operation with respect to the metric $\\delta$ on $\\mathbb{R}^3$, and $\\eta=\\partial_{\\phi}$. To see this let $\\varepsilon$ denote the volume form for $\\delta$, then\n\\begin{equation}\n(\\ast\\iota_{\\partial_{z}}T)_{ij}=\\varepsilon_{ijl}T^{lz}\n\\end{equation}\nand hence\n\\begin{equation}\n(\\iota_{\\eta}\\ast\\iota_{\\partial_{z}}T)_{j}=\\varepsilon_{ijl}\\eta^{i}T^{lz}\n=\\varepsilon(\\partial_{\\phi},\\partial_{j},\\partial_{\\rho})T^{\\rho z}\n+\\varepsilon(\\partial_{\\phi},\\partial_{j},\\partial_{z})T^{zz}.\n\\end{equation}\nWe then have\n\\begin{equation}\\label{333}\n\\iota_{\\eta}\\ast\\iota_{\\partial_{z}}T\n=\\rho T_{zz} d\\rho -\\rho T_{\\rho z} dz,\n\\end{equation}\nwhich confirms \\eqref{sigma}. Moreover, for later use observe that this form is closed in light of the harmonic map equations\n\\begin{equation}\nd\\left(\\iota_{\\eta}\\ast\\iota_{\\partial_{z}}T\\right)\n=-\\left(\\operatorname{div}_{\\mathbb{R}^3}T\\right)(\\partial_{z})d\\rho\\wedge dz=0.\n\\end{equation}\nNote that we also have to show that $\\sigma$ obtained from quadrature is bi-axisymmetric. However this follows easily from \\eqref{333}, since\n\\begin{equation}\n\\iota_{\\eta^{(i)}}d\\sigma=\\iota_{\\eta^{(i)}}\\iota_{\\eta}\\ast\\iota_{\\partial_{z}}T=0.\n\\end{equation}\n\n\n\n\n\n\n\\section{The Riemannian Geometry of $SL(3,\\mathbb{R})\/SO(3)$}\n\\label{sec3} \\setcounter{equation}{0}\n\\setcounter{section}{3}\n\n\n\n\n\nThe harmonic map arising from the dimensional reduction of the bi-axisymmetric stationary vacuum Einstein equations has as target space $SL(3,\\mathbb{R})\/SO(3)$. 
The geometry of this symmetric space plays an important role in the analysis of the harmonic map, and in this section the relevant aspects will be described.\n\nLet $G=SL(3,\\mathbb{R})$ then $K=SO(3)$ is a maximal compact subgroup. The quotient $\\mathbf{X}=G\/K$ is the space of equivalence classes $[A]$ in which\n\\begin{equation}\nA \\in SL(3, {\\mathbb{R}}) \\mbox{ and } A \\sim A' \\Leftrightarrow A' = A B \\mbox{ for some } B \\in SO(3).\n\\end{equation}\nIn other words $\\mathbf{X}$ is the space of left cosets of $K$ in $G$ and $G$ acts transitively on $\\mathbf{X}$ by\n\\begin{equation}\nA'K\\mapsto AA' K\\quad\\text{ }\\text{ for }\\text{ }\\quad A\\in G,\n\\end{equation}\nso that $K$ is the isotropy subgroup at $x_0=[\\mathrm{Id}]$.\nRecall now the construction of the canonical $G$-invariant Riemannian metric on the homogeneous space $\\mathbf{X}$, which yields a Riemannian symmetric space structure.\nThe Lie algebras will be denoted by\n\\begin{equation}\n\\mathfrak{g} = sl(3) = \\{Y \\in gl (3) \\,\\, | \\,\\, \\mathrm{Tr} Y = 0\\},\n\\end{equation}\nand\n\\begin{equation}\n\\mathfrak{k} = so(3) = \\{Y \\in gl(3) \\,\\, | \\,\\, Y^t = - Y\\}.\n\\end{equation}\nNote that $\\mathfrak{g}$ is semisimple since the Killing form $\\mathbf{B}:\\mathfrak{g}\\times\\mathfrak{g}\\rightarrow\\mathbb{R}$ given by\n\\begin{equation}\n\\mathbf{B}(Y, Z) = \\mathrm{Tr} (\\mathrm{ad} Y \\circ \\mathrm{ad} Z) = 6\\mathrm{Tr}(YZ)\n\\end{equation}\nis nondegenerate. Let $\\mathfrak{p}$ be the orthogonal complement of $\\mathfrak{k}$ with respect to $\\mathbf{B}$, so that we have the Cartan decomposition\n\\begin{equation}\n\\mathfrak{g} = \\mathfrak{k} \\oplus \\mathfrak{p}\n\\end{equation}\nwith\n\\begin{equation}\n\\mathfrak{p} = \\{Y \\in gl (3) \\,\\, | \\,\\, Y^t = Y, \\text{ }\\mathrm{Tr} Y = 0\\},\n\\end{equation}\nand satisfying the Cartan relations\n\\begin{equation}\n[\\mathfrak{k}, \\mathfrak{k}] \\subset \\mathfrak{k}, \\quad\\quad\n[\\mathfrak{p}, \\mathfrak{p} ] \\subset \\mathfrak{k}, \\quad\\quad\n[\\mathfrak{k},\\mathfrak{p}]\\subset\\mathfrak{p}.\n\\end{equation}\nThe Killing form $\\mathbf{B}$ is negative definite on $\\mathfrak{k}$ and positive definite on $\\mathfrak{p}$, in particular $\\mathbf{X}$ is of noncompact type.\n\nConsider the Cartan involution $\\theta: \\mathfrak{g} \\rightarrow \\mathfrak{g}$ with $\\theta|_{\\mathfrak{k}} = \\mathrm{id}, \\theta|_{\\mathfrak{p}} = - \\mathrm{id}$, where in our context $\\theta(Y) = - Y^t$. Then the quadratic form\n\\begin{equation}\n\\langle Y, Z\\rangle_{\\mathfrak{g}}=\n\\begin{cases}\n-\\frac{2}{3}\\mathbf{B}(Y,Z) & \\text{ if } Y,Z\\in\\mathfrak{k},\\\\\n- \\frac{2}{3}\\mathbf{B}(Y, \\theta(Z))& \\text{ if } Y,Z\\in\\mathfrak{p},\\\\\n0& \\text{ if } Y\\in\\mathfrak{k},\\text{ } Z\\in\\mathfrak{p},\n\\end{cases}\n\\end{equation}\nis positive definite and Ad $K$-invariant. From this the desired Riemannian metric at $x_0$ is obtained by restricting the quadratic form to $\\mathfrak{p}$ which is identified with $T_{x_0} \\mathbf{X}$, namely\n\\begin{equation}\n \\mathbf{g}_{x_0}(Y, Z) = 4 \\mathrm{Tr} (YZ^t) \\quad \\text{ for }\\quad Y, Z \\in \\mathfrak{p}.\n\\end{equation}\nThis in turn gives rise to the metric globally on $\\mathbf{X}$ via left translation. 
Let $L_B: \\mathbf{X} \\rightarrow \\mathbf{X}$ denote the left translation operator\n\\begin{equation}\nL_B(x) = L_B([A]) = [B A],\n\\end{equation}\nwhere $A,B\\in SL(3,\\mathbb{R})$ and $x=[A]$.\nSince $SL(3, {\\mathbb R})$ acts transitively on $\\mathbf{X}$, given $x \\in \\mathbf{X}$ there is a $B\\in SL(3, {\\mathbb R})$ such\nthat $L_B(x_0) = x$, and thus the $G$-invariant Riemannian metric at $x$ may be defined\nby pulling back the quadratic form at the identity\n\\begin{equation}\\label{symmetricspacemetric}\n\\mathbf{g}_x = L_{B^{-1}}^* \\mathbf{g}_{x_0}.\n\\end{equation}\nWith this metric $SL(3,\\mathbb{R})\/SO(3)$ becomes a symmetric space of noncompact type having rank 2 (see \\cite{BallmanGromovSchroeder}). In particular it has nonpositive curvature, with the sectional curvature of the plane spanned by orthonormal vectors $ Y, Z \\in \\mathfrak{p}$ given by $-\\parallel[Y,Z]\\parallel_{\\mathfrak{g}}^2$.\n\nIn order to connect the metric \\eqref{symmetricspacemetric} with the target space geometry associated to the harmonic map of the previous section, the following characterization of $\\mathbf{X}=SL(3,\\mathbb{R})\/SO(3)$ will be needed. Recall the polar decomposition for matrices, namely any $A\\in SL(3,\\mathbb{R})$ may be written uniquely as $A=PO$ where $O\\in SO(3)$ and $P\\in \\tilde{\\mathbf{X}}$ with\n\\begin{equation}\n\\tilde{\\mathbf{X}} = \\{A \\in SL(3, {\\mathbb R}) \\mid A \\text{ is symmetric and positive definite}\\}.\n\\end{equation}\nThis indicates that $\\mathbf{X}$ may be identified with $\\tilde{\\mathbf{X}}$, and in fact this is accomplished with the map $\\mathcal{I} : \\tilde{\\mathbf{X}} \\rightarrow \\mathbf{X}$ given by\n\\begin{equation}\n\\mathcal{I}(A) = [A^{1\/2}],\\quad\\quad\\quad \\mathcal{I}^{-1}([B])=BB^t.\n\\end{equation}\nObserve that $\\tilde{\\mathbf{X}}$ can be interpreted as the set of all ellipsoids in ${\\mathbb R}^3$ centered at the origin with unit volume, and is diffeomorphic to $\\mathbb{R}^5$ (hence the same is true of $\\mathbf{X}$). Moreover\n$SL(3,{\\mathbb{R}})$ acts transitively on $\\tilde{\\mathbf{X}}$ by the analogue of left translation $\\tilde{L}_B = \\mathcal{I}^{-1} \\circ L_B \\circ \\mathcal{I}$, that is\n\\begin{equation}\n\\tilde{L}_B (A) = B A B^t.\n\\end{equation}\n\n\n\nThe identification above naturally induces a pull-back metric $\\tilde{\\mathbf{g}}:= \\mathcal{I}^* \\mathbf{g}$ on $\\tilde{\\mathbf{X}}$. 
At the identity this is\n\\begin{equation}\\label{identitymetric}\n\\tilde{\\mathbf{g}}_{\\mathrm{Id}} (V, V) = \\mathbf{g}_{x_0} \\left(\\frac{V}{2}, \\frac{V}{2}\\right) = \\mathrm{Tr}(VV^t),\n\\end{equation}\nfor\n\\begin{equation}\nV\\in T_{\\mathrm{Id}} \\tilde{\\mathbf{X}} = \\{W \\in Mat_{3 \\times 3} (\\mathbb{R}) \\,\\, | \\,\\, W^t = W, \\quad \\mathrm{Tr} W = 0 \\}.\n\\end{equation}\nAs for an arbitrary point $A\\in\\tilde{\\mathbf{X}}$ and $V\\in T_{A}\\tilde{\\mathbf{X}}$,\n\\begin{align}\n\\begin{split}\n\\tilde{\\mathbf{g}}_{A}(V,V)=&\\mathbf{g}_{\\mathcal{I}(A)}\\left(d\\mathcal{I}_{A}(V),\nd\\mathcal{I}_{A}(V)\\right)\\\\\n=&L_{A^{-1\/2}}^{*}\\mathbf{g}_{x_0}\\left(d\\mathcal{I}_{A}(V),d\\mathcal{I}_{A}(V)\\right)\\\\\n=&\\mathbf{g}_{x_0}\\left(d(L_{A^{-1\/2}}\\circ\\mathcal{I})_{A}(V),\nd(L_{A^{-1\/2}}\\circ\\mathcal{I})_{A}(V)\\right)\\\\\n=&\\mathrm{Tr}\\left([(d\\tilde{L}_{A^{-1\/2}})_{A}(V)][(d\\tilde{L}_{A^{-1\/2}})_{A}(V)]^{t}\\right).\n\\end{split}\n\\end{align}\nSince\n\\begin{equation}\n(d\\tilde{L}_{A^{-1\/2}})_{A}(V)=A^{-1\/2}V(A^{-1\/2})^t,\n\\end{equation}\nit follows that\n\\begin{align}\\label{targetmetric}\n\\begin{split}\n\\tilde{\\mathbf{g}}_{A}(V,V)=&\\mathrm{Tr}\\left(A^{-1\/2}V(A^{-1\/2})^t A^{-1\/2} V(A^{-1\/2})^t\\right)\\\\\n=&\\mathrm{Tr}\\left(A^{-1\/2}VA^{-1}V(A^{-1\/2})^t\\right)\\\\\n=&\\mathrm{Tr}\\left(A^{-1}V A^{-1}V\\right).\n\\end{split}\n\\end{align}\n\n\n\nRecall from the previous section that a given 5-dimensional bi-axisymmetric stationary vacuum spacetime yields a map $\\Phi:\\mathbb{R}^3\\setminus\\Gamma\\rightarrow\\tilde{\\mathbf{X}}$, where $\\mathbb{R}^3$ is parameterized by the Weyl-Papapetrou coordinates $(\\rho,z,\\phi)$, $\\Gamma$ denotes the $z$-axis, and $\\tilde{\\mathbf{X}}$ is parameterized by $(f_{ij},\\omega_{i})$. According to \\eqref{targetmetric} the pull-back metric is then given by\n\\begin{equation}\n\\Phi^* \\tilde{\\mathbf{g}} = \\mathrm{Tr} ( \\Phi^{-1}d \\Phi \\, \\Phi^{-1} d \\Phi).\n\\end{equation}\nSince this agrees with the expression appearing in the reduced action \\eqref{action}, it follows that the bi-axisymmetric stationary vacuum Einstein equations reduce to a harmonic map problem with target space $SL(3,\\mathbb{R})\/SO(3)$.\n\n\n\n\n\n\n\\section{The Rod Structure}\n\\label{sec4} \\setcounter{equation}{0}\n\\setcounter{section}{4}\n\n\nA well-behaved asymptotically flat, stationary vacuum, bi-axisymmetric spacetime admits a global system of Weyl-Papapetrou coordinates in its domain of outer communication $\\mathcal{M}^5$, as described in Section \\ref{sec2}, in which the metric takes the form\n\\begin{equation}\\label{spacetimemetric}\ng=f^{-1}e^{2\\sigma}(d\\rho^2+dz^2)-f^{-1}\\rho^2 dt^2\n+f_{ij}(d\\phi^{i}+v^{i}dt)(d\\phi^{j}+v^{j}dt).\n\\end{equation}\nThe orbit space $\\mathcal{M}^{5}\/[\\mathbb{R}\\times U(1)^2]$ is diffeomorphic to the right-half plane $\\{(\\rho,z)\\mid \\rho>0\\}$ (see \\cite{HollandsYazadjiev1}), and\nits boundary $\\rho=0$ encodes nontrivial aspects of the topology. Let $q$ be the fiber metric \\eqref{fibermetric} consisting of the last two terms in \\eqref{spacetimemetric}. In order to avoid curvature singularities $\\mathrm{dim} \\left(\\mathrm{ker}\\text{ }\\! q(0,z)\\right)=1$ except at isolated points $p_{l}$, $l=1,\\ldots,L$ where the dimension of the kernel is 2 \\cites{Harmark,HollandsYazadjiev}. 
It follows that the $z$-axis is broken into $L+1$ intervals called rods\n\\begin{equation}\n\\Gamma_{1}=[z_{1},\\infty),\\text{ }\\Gamma_{2}=[z_2,z_1],\\text{ }\\ldots,\\text{ }\n\\Gamma_{L}=[z_{L},z_{L-1}],\\text{ }\\Gamma_{L+1}=(-\\infty,z_{L}],\n\\end{equation}\non which either $|\\partial_{t}+\\Omega_{1}\\partial_{\\phi^1}+\\Omega_{2}\\partial_{\\phi^2 }|$ vanishes (horizon rod) or $(f_{ij})$ fails to be of full rank (axis rod). Here $\\Omega_i$ denotes the angular velocity of the horizon and is given by $-v^i$ restricted to the rod. This must be a constant, as can be seen by solving for $dv^i$ from \\eqref{chi} and showing that it vanishes on the rod. The condition for an axis rod implies \\cite{HollandsYazadjiev} that for each such $\\Gamma_{l}$ there is a pair of relatively prime integers $(m_{l},n_{l})$ so that the Killing field\n\\begin{equation}\nm_{l}\\partial_{\\phi^1}+n_{l}\\partial_{\\phi^2}\n\\end{equation}\nvanishes on $\\Gamma_{l}$. Observe that $m_{l}$ and $n_{l}$ must be integers since elements of the isotropy subgroup at the axis are of the form $(e^{im_{l}\\phi},e^{in_{l}\\phi})$, $0\\leq\\phi<2\\pi$, and the isotropy subgroup forms a proper closed subgroup of $T^2=S^1\\times S^1$. That is, the isotropy subgroup yields a simple closed curve in the torus exactly when the slope of its winding is rational.\nThe pair $(m_{l},n_{l})$ is referred to as the rod structure for the rod $\\Gamma_{l}$, and $(0,0)$ serves as the rod structure for any horizon rod. Note that the rod structure encodes the same information when both of its components are multiplied by $-1$, and thus it is uniquely determined only when viewed as an element of $\\mathbb{RP}^1$.\n\nThe asymptotically flat condition is encoded by the rod structures of $\\Gamma_{1}$ and $\\Gamma_{L+1}$ by requiring them to be $(\\pm 1,0)$ and $(0,\\pm 1)$ or vice versa. This, of course, arises from the rod structure of Minkowski space $\\mathbb{R}^{4,1}$ which will now be described in order to motivate the definition of a `corner'. The Weyl-Papapetrou form of the Minkowski metric is derived from the polar coordinate expression with the help of Hopf coordinates $(\\theta,\\phi^1,\\phi^2)$, $\\phi^{i}\\in[0,2\\pi]$, $\\theta\\in[0,\\pi\/2]$ on the 3-sphere and a conformal mapping\n\\begin{align}\\label{Minkowski}\n\\begin{split}\n g_{0} = & -dt^{2}+ dr^2 + r^2 d \\omega_{S^3}^2 \\\\\n = & -dt^{2}+ dr^2 + r^2 \\left[d \\theta^2 + \\sin^2 \\theta (d \\phi^1)^2 + \\cos^2 \\theta (d \\phi^ 2)^2 \\right] \\\\\n = & q_{0} + dr^2 + r^2 d \\theta^2 \\\\\n = & q_{0} + \\frac{1}{4\\sqrt{\\rho^2 + z^2 }} (d \\rho^2 + dz^2).\n\\end{split}\n\\end{align}\nHere the conformal map in the complex plane is given by\n\\begin{equation}\n\\zeta \\mapsto \\zeta^2 \\quad :\\quad {\\mathbb R}_{\\geq 0} \\times {\\mathbb R}_{\\geq 0} \\rightarrow \\mbox{$\\rho z$-half plane},\n\\end{equation}\nor rather\n\\begin{equation}\n\\rho = r^2 \\sin 2 \\theta, \\quad\\quad z = r^2 \\cos 2 \\theta .\n\\end{equation}\nIf $x^i$ denote cartesian coordinates then the Killing fields\n\\begin{equation}\n\\partial_{\\phi^1} = -x^2 \\partial_{x^1} + x^1 \\partial_{x^2},\\quad\\quad\\quad\n\\partial_{\\phi^2} = -x^4 \\partial_{x^3} + x^3 \\partial_{x^4},\n\\end{equation}\nvanish on the rods $\\Gamma_{1}=[0,\\infty)$ and $\\Gamma_{2}=(-\\infty,0]$, respectively. Therefore the rod structures for these two rods are $(1,0)$ and $(0,1)$.
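Indeed, $\\partial_{\\phi^1}$ vanishes precisely where $x^1=x^2=0$, that is where $\\theta=0$, which under the conformal map above corresponds to the portion $\\rho=0$, $z=r^{2}>0$ of the axis; similarly $\\partial_{\\phi^2}$ vanishes where $\\theta=\\pi\/2$, corresponding to $\\rho=0$, $z=-r^{2}<0$.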
Moreover, because the origin $p_{1}$ in the $\\rho z$-plane corresponds, under the inverse conformal map, to the vertex of the quadrant ${\\mathbb R}_{\\geq 0}\\times{\\mathbb R}_{\\geq 0}$, it is referred to as a corner. For a general set of rod structures, a corner point $p_l$ is one which separates two axis rods, and a pole point is one which separates a horizon rod from an axis rod.\n\nPotential constants $\\mathbf{c}_{l}=(c_{l}^1,c_{l}^2)\\in\\mathbb{R}^2$ are prescribed on each axis rod $\\Gamma_{l}$, and are used as boundary conditions for the twist potentials\n$\\omega_{i}|_{\\Gamma_{l}}=c_{l}^{i}$. The constants may be chosen arbitrarily modulo the condition that they do not vary between adjacent rods separated by a corner. This is necessary for the construction of a model map in the next section, as well as a well-defined notion of angular momentum. In particular, the potential constants can only change after passing over a horizon rod, and this difference yields the angular momenta for each horizon component. Let $\\mathcal{S}$ denote the 3-dimensional horizon cross section component associated with a horizon rod $\\Gamma_{k}=[z_{k},z_{k-1}]$, then \\eqref{chi}, \\eqref{komar}, and \\eqref{komar1} may be used to compute the Komar angular momenta of this component by\n\\begin{equation}\n\\mathcal{J}_{i}=\\frac{1}{8\\pi}\\int_{\\mathcal{S}}\\star d\\eta^{(i)}\n=\\frac{\\pi}{2}\\int_{\\Gamma_{k}}\\iota_{\\eta^{(1)}}\\iota_{\\eta^{(2)}}\\star d\\eta^{(i)}\n=\\frac{\\pi}{4}\\int_{\\Gamma_{k}}d\\omega_{i}=\\frac{\\pi}{4}\\left[\\omega_{i}(p_{k-1})\n-\\omega_{i}(p_{k})\\right].\n\\end{equation}\nA rod data set $\\mathcal{D}$ consists of the collection of corners and poles $\\{p_l\\}$,\nrod structures $\\{(m_l,n_l)\\}$, and potential constants $\\{\\mathbf{c}_l\\}$.\n\nConsider now the topology of spacetime in a neighborhood of a corner point $p_l$ which separates axis rods $\\Gamma_{l}$ and $\\Gamma_{l+1}$ with rod structures $(m_l,n_l)$ and $(m_{l+1},n_{l+1})$. As is shown in the Appendix, new $2\\pi$-periodic coordinates $(\\bar{\\phi}^{1},\\bar{\\phi}^2)$ may be chosen so that the rod structures with respect to these coordinates are given by $(1,0)$ and $(q,p)$, $p\\neq 0$. That is, the Killing fields $\\partial_{\\bar{\\phi}^1}$ and $q\\partial_{\\bar{\\phi}^1}+p\\partial_{\\bar{\\phi}^2}$ vanish on $\\Gamma_{l}$ and $\\Gamma_{l+1}$, respectively. Next take any semicircle in the $\\rho z$-half plane (orbit space) centered at $p_l$ that connects a point on the interior of $\\Gamma_l$ to a point on the interior of $\\Gamma_{l+1}$. Note that each point on the interior of this semicircle represents a 2-torus in a constant time slice. By analyzing which 1-cycles collapse at the end points it follows that the semicircle represents a lens space $L(p,q)$. Recall that $L(1,q)\\cong S^3$, so that when $p=\\pm 1$ a neighborhood of the corner in a time slice is foliated by spheres, or rather a neighborhood of the corner in the spacetime is diffeomorphic to $\\mathbb{R}^5$. It turns out that $p=\\pm 1$ if and only if\n\\begin{equation} \\label{admissibility1}\n\\operatorname{det}\\begin{pmatrix} m_l & n_l \\\\ m_{l+1} & n_{l+1} \\end{pmatrix} = \\pm 1,\n\\end{equation}\nand therefore the spacetime has trivial topology in a neighborhood of the corner if and only if the admissibility condition \\eqref{admissibility1} holds, otherwise it has an orbifold singularity.
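For example, at the corner of the Minkowski rod structure described above the determinant associated with $(1,0)$ and $(0,1)$ is $1$, consistent with the smoothness of $\\mathbb{R}^{4,1}$ at the origin. More generally, for the normalized rod structures $(1,0)$ and $(q,p)$ we have\n\\begin{equation}\n\\operatorname{det}\\begin{pmatrix} 1 & 0 \\\\ q & p \\end{pmatrix} = p,\n\\end{equation}\nin agreement with the observation that a neighborhood of the corner is free of orbifold singularities exactly when $p=\\pm 1$.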
The admissibility condition can be interpreted as stating that the\nintersection number of the two 1-cycles that degenerate on either side of the corner is equal to $\\pm 1$.\n\nIn addition to \\eqref{admissibility1}, the main results of this paper rely on what will be referred to as the compatibility condition. This supplementary requirement is relevant only when two consecutive corners are present. As described above, let $p_l$ be a corner separating axis rods $\\Gamma_{l}$ and $\\Gamma_{l+1}$, and suppose that there is another corner $p_{l-1}$ at the top end of $\\Gamma_{l}$ connecting it to axis rod $\\Gamma_{l-1}$. Assuming that the admissibility condition \\eqref{admissibility1} holds at the two points $p_{l-1}$ and $p_{l}$, it may be arranged that these two determinants are $+1$ by multiplying each component of the rod structures by $-1$ if necessary. Observe that this operation on the rod structures does not change the information they encode, since the Killing field that vanishes at each rod is preserved up to sign.\nThe compatibility condition then states that the first components of the rod structures for $\\Gamma_{l-1}$ and $\\Gamma_{l+1}$ have opposite signs if both are nonzero\n\\begin{equation}\\label{compatibilitycondition1}\nm_{l-1}m_{l+1}\\leq 0.\n\\end{equation}\nThis technical condition is used only in the construction of the model map in the next section. Unlike the admissibility condition, it is not known whether Theorem \\ref{main} remains true without it. As mentioned in the introduction, if the admissibility condition is not assumed so that orbifold singularities are allowed then \\eqref{compatibilitycondition1} should be enhanced to the generalized compatibility condition\n\\begin{equation}\\label{gcompatibilitycondition}\nm_{l-1}m_{l+1} \\operatorname{det}\\begin{pmatrix} m_{l-1} & n_{l-1} \\\\ m_{l} & n_{l} \\end{pmatrix}\n\\operatorname{det}\\begin{pmatrix} m_l & n_l \\\\ m_{l+1} & n_{l+1} \\end{pmatrix} \\leq 0.\n\\end{equation}\nNote that the only way this quantity can vanish is if either $m_{l-1}=0$ or $m_{l+1}=0$, since for a corner the determinant is always nonzero.\n\nEach connected component cross section of the event horizon has one of the following topologies \\cite{HollandsYazadjiev1}: the sphere $S^3$, the ring $S^1\\times S^2$, or a lens space $L(p,q)$. These manifolds have a singular foliation whose leaves are 2-dimensional tori, and whose singular leaves are circles resulting from the degeneration of a 1-cycle in the torus. This can be observed geometrically from the canonical metric on each manifold as follows. The round metric on $S^3$ in Hopf coordinates is given by\n\\begin{equation}\nd \\theta^2 + \\sin^2 \\theta (d \\phi^1)^2 + \\cos^2 \\theta (d \\phi^2)^2,\n\\end{equation}\nwhere $\\theta \\in [0, \\pi\/2]$, $\\phi^i \\in [0, 2 \\pi]$. For $0 < \\theta < \\pi\/2$ the level set $\\{\\theta = \\mbox{const.} \\}$ is a flat 2-torus, and when $\\theta = 0, \\pi\/2$ the level sets degenerate to $S^1$. These singular leaves are characterized by the fact that the Killing fields $\\partial_{\\phi^1}$ and $\\partial_{\\phi^2}$ vanish at $\\theta = 0, \\pi\/2$ respectively. Thus if $\\theta$ is viewed as parameterizing a horizon rod, then the rod structure at the two poles (end points) is\n$\\{(1,0),(0,1)\\}$. For the ring $S^1 \\times S^2$ the canonical product metric is\n\\begin{equation}\n[d \\theta^2 + \\sin^2 \\theta (d \\phi^1)^2] + (d \\phi^2)^2,\n\\end{equation}\nwhere $\\theta \\in [0, \\pi]$, $\\phi^i \\in [0, 2 \\pi]$.
The torus fibers are once again the level sets of $\\theta$, and the singular leaves occur when $\\theta = 0, \\pi$ and coincide with the vanishing of the Killing field $\\partial_{\\phi^1}$, while the other Killing field $\\partial_{\\phi^2}$ never degenerates. The associated rod structure at the poles is then $\\{(1,0),(1,0)\\}$.\n\n\n\n\\begin{figure}\n\\includegraphics[width=5cm]{lensidentification.pdf}\n\\caption{Identification Space} \\label{toiletpaper}\n\\end{figure}\n\n\nConsider now the lens space $L(p, q)=S^{3}\/\\mathbb{Z}_{p}$ which inherits its canonical metric\n\\begin{equation}\nd \\theta^2 + \\sin^2 \\theta (d \\tilde{\\phi}^1)^2 + \\cos^2 \\theta (d \\tilde{\\phi}^2)^2\n\\end{equation}\nfrom the 3-sphere, where\n\\begin{equation}\n\\tilde{\\phi}^1 = \\phi^1 - \\frac{q}{p} \\phi^2, \\quad \\quad \\tilde{\\phi}^2 = \\frac1p \\phi^2,\n\\end{equation}\nwith $\\theta \\in [0, \\pi\/2]$, $\\phi^i \\in [0, 2 \\pi]$.\nSince $\\phi^2$ has period $2\\pi$, the following identifications are made\n\\begin{equation}\n\\tilde{\\phi}^1 \\sim \\tilde{\\phi}^1 + \\frac{2\\pi q}{p} , \\quad\\quad \\tilde{\\phi}^2 \\sim \\tilde{\\phi}^2 + \\frac{2\\pi}{p} .\n\\end{equation}\nThe singular leaves at $\\theta = 0, \\pi\/2$ are characterized by the vanishing of the Killing fields\n\\begin{equation}\n\\partial_{\\tilde{\\phi}^1}=\\partial_{\\phi^1},\\quad\\quad\n\\partial_{\\tilde{\\phi}^2}=q\\partial_{\\phi^1}+p\\partial_{\\phi^2},\n\\end{equation}\nrespectively, so that the associated rod structure at the poles is $\\{(1,0),(q,p)\\}$.\nRecall the model of the lens space as a quotient space of the unit sphere $S^3=\\{(z_1, z_2)\\in\\mathbb{C}^2 \\,\\, | \\,\\, |z_1|^2 + |z_2|^2 = 1 \\}$ via the equivalence relation\n\\begin{equation}\n(z_1, z_2 ) = (r_1 e^{ \\tilde{\\phi}^1 i}, r_2 e^{ \\tilde{\\phi}^2 i}) \\sim (r_1 e^{\\left( \\tilde{\\phi}^1 + 2 \\pi q\/p\\right)i}, r_2 e^{\\left( \\tilde{\\phi}^2 + 2 \\pi\/p \\right)i}).\n\\end{equation}\nHere the pair of variables $(r_1, r_2)$ corresponds to $(\\sin \\theta, \\cos \\theta)$ in the coordinates with which the lens space metric is written. A visualization of the lens space may be obtained by appropriately identifying the top, bottom, and sides of a solid cylinder as in Figure \\ref{toiletpaper}. Namely, first collapse the external cylinder $\\{\\theta =\\pi\/2\\}$ by identifying each vertical segment to a point, then identify the top and bottom discs via an orthogonal projection after performing a $2\\pi q\/p$ rotation of the top disc. The singular torus fibers occur where the action of the coordinate fields $\\partial_{\\tilde{\\phi}^1}$ and $\\partial_{\\tilde{\\phi}^2}$ degenerates, that is at $\\theta=0,\\pi\/2$.\n\n\n\nUsing a similar analysis the topology of arbitrary rod structures may be understood. In Figure \\ref{rod} four different rod structures for the orbit space are given, labeled by the topology of their horizons. Consider the first rod structure on the left in this diagram. The two semi-infinite rods are foliated by circle fibers none of which collapse, and hence they are 2-planes with an open disc removed. The finite rod has rod structure $(0,0)$ meaning that none of the rotational Killing fields vanish there. It is foliated by 2-tori such that the two generating 1-cycles of the torus degenerate at opposite poles. According to the description above, this yields a 3-sphere. Similarly, any simple curve in the $\\rho z$-plane connecting the two semi-infinite rods also produces an $S^3$.
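In general, a horizon rod whose poles carry the rod structures $\\{(1,0),(q,p)\\}$ therefore represents, by the foliation just described, the lens space $L(p,q)$; for instance the pole structures $\\{(1,0),(1,2)\\}$ yield $L(2,1)\\cong\\mathbb{RP}^3$.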
In the second and third rod structures of Figure \\ref{rod} it is clear that, by comparing with the singular foliations described above, these horizon rods represent a ring $S^1\\times S^2$ and a lens $L(p,1)$, respectively. In these two examples there is also a different type of rod not present in the first example, namely a finite rod bounded by a pole on top and a corner on the bottom. This type of rod is foliated by circles with a singular leaf at the corner, and thus it gives a topological disc. The last example in Figure \\ref{rod} has two horizon components in which the inner one is a lens $L(p,1)$ and the outer one is a ring $S^1\\times S^2$, and hence the name `Black Lens Saturn'.\n\n\n\\begin{figure}\n\\includegraphics[width=12cm]{rods.pdf}\n\\caption{Rod Structures} \\label{rod}\n\\end{figure}\n\nObserve that the rod structures of Figure \\ref{rod} satisfy the admissibility condition \\eqref{admissibility1} with $+1$ determinants, and the compatibility condition is vacuous. A natural question arises as to whether it is possible to produce a rod structure with a single horizon component having the general lens topology $L(p,q)$ without restricting to $q=1$, while at the same time satisfying the admissibility condition \\eqref{admissibility1} and compatibility condition \\eqref{compatibilitycondition1}. The following proposition answers this question affirmatively.\n\n\\begin{proposition}\\label{lensrod}\nLet $p$ and $q$ be integers satisfying $\\mathrm{gcd}(p,q)=1$ and $p>q\\geq 1$.\nThen there exists a rod structure appropriate for an asymptotically flat spacetime of the form\n\\begin{equation}\n\\{(1, 0), (0, 0), (q, p),(q_1, p_1), \\dots, (q_n, p_n), (0,\\pm 1)\\},\n\\end{equation}\nwhich has a single lens space horizon $L(p, q)$, satisfies the admissibility condition \\eqref{admissibility1} with positive determinants, and satisfies the compatibility condition \\eqref{compatibilitycondition1}.\n\\end{proposition}\n\nAs an example observe that the single lens horizon $L(9,7)$ is realized by the rod structures\n\\begin{equation}\n\\{(1,0), (0,0), (7,9), (-4,-5), (-3, -4), (1, 1), (0, 1)\\},\n\\end{equation}\nwhich clearly satisfy the admissibility condition with positive determinants as well as the compatibility condition. In order to prove Proposition \\ref{lensrod} we need a slightly modified version of Bezout's Lemma.\n\n\\begin{lemma}\\label{Bezout}\nLet $a \\neq 1$ and $b \\neq 1$ be relatively prime positive integers, then there exist integers $x$ and $y$ of the same sign such that\n\\begin{equation}\nax - by = 1,\n\\end{equation}\nwith $\\mathrm{gcd}(x, y)=1$ and $1\\leq |x| < b$, $1\\leq |y| < a$. Furthermore, if $a<b$ then $|x|\\geq |y|$.\n\\end{lemma}\n\n\\begin{proof}\nBy Bezout's identity there exist integers $\\overline{x}$ and $\\overline{y}$ such that $a\\overline{x}+b\\overline{y}=1$, and after adding a suitable integer multiple of $(b,-a)$ to $(\\overline{x},\\overline{y})$ it may be assumed that $|\\overline{x}|<b$ and $|\\overline{y}|<a$. Since $a,b>1$ we must have one of $\\overline{x}$, $\\overline{y}$ negative and the other positive. Thus there are $\\tilde{x}>0$, $\\tilde{y}>0$ so that $a \\tilde{x} - b \\tilde{y} = \\pm 1$, with $\\tilde{x} < b$ and $\\tilde{y} < a$. If $\\mathrm{gcd}(\\tilde{x},\\tilde{y})=c>1$ then $\\tilde{x}=c\\hat{x}$, $\\tilde{y}=c\\hat{y}$ and $c(a \\hat{x} - b \\hat{y}) = \\pm 1$. This, however, is impossible since $c>1$, and hence $\\mathrm{gcd}(\\tilde{x},\\tilde{y})=1$. If $a \\tilde{x} - b \\tilde{y} = 1$ then choose $(x, y)= (\\tilde{x}, \\tilde{y})$, and if $a \\tilde{x} - b \\tilde{y} = -1$\nthen choose $(x, y)=(-\\tilde{x}, -\\tilde{y})$. Lastly, neither $x$ nor $y$ may vanish as $a,b>1$.\n\nConsider now the case when $a < b$. It then follows from the equation $ax-by=1$ that either $x>y$ (when $x,y>0$) or $x\\leq y$ (when $x,y<0$).
Hence $|x| \\geq |y|$ when $a < b$.\n\\end{proof}\n\n\\begin{proof}[Proof of Proposition \\ref{lensrod}]\nIf $q=1$ then append the rod structure $(0,1)$ after $(q,p)$ to solve the problem.\nAssume now that $p$ and $q$ are relatively prime with $p>q>1$. Apply Bezout's Lemma with\n$(a, b) = (q, p)$ to find a pair $(q_1, p_1)$ of relatively prime integers satisfying\n\\begin{equation}\nqp_1-pq_1 =1\n\\end{equation}\nas well as\n\\begin{equation}\n 1 \\leq |q_{1}| < q, \\quad\\quad\\quad 1 \\leq |p_{1}| < p.\n\\end{equation}\nIf $|q_{1}|=1$, then by appending the rod structure $(0, \\pm 1)$ after $(q_1, p_1) = (\\pm1, p_{1})$ the desired result follows.\n\nConsider now the case when $|q_1|> 1$. Again apply Bezout's Lemma to find $(\\overline{q}_2, \\overline{p}_2)$ relatively prime and satisfying\n\\begin{equation}\n|q_1| \\overline{p}_2- |p_1| \\overline{q}_2 =1\n\\end{equation}\nas well as\n\\begin{equation}\n 1 \\leq |\\overline{q}_{2}| < |q_1|, \\quad\\quad\\quad 1 \\leq |\\overline{p}_{2}| < |p_1|.\n\\end{equation}\nNext define $(\\tilde{q}_2, \\tilde{p}_2)=\\pm (\\overline{q}_2, \\overline{p}_2)$ where the sign is chosen so that\n\\begin{equation}\nq_1 \\tilde{p}_2- p_1 \\tilde{q}_2 =1.\n\\end{equation}\n\nThe compatibility condition requires $q_0 q_2\\leq 0$, and since $q_0=q>0$ this can be achieved by setting $(q_2,p_2)=(\\tilde{q}_2,\\tilde{p}_2)$ if $\\tilde{q}_2<0$, and $(q_2,p_2)=(\\tilde{q}_2-|q_1|,\\tilde{p}_2-|p_1|)$ if $\\tilde{q}_2>0$. Clearly this also satisfies the admissibility condition\n\\begin{equation}\\label{DET}\nq_1 p_2- p_1 q_2 =1\n\\end{equation}\nas well as\n\\begin{equation}\n 1 \\leq |q_{2}| < |q_1|, \\quad\\quad\\quad 1 \\leq |p_{2}| < |p_1|,\n\\end{equation}\nand \\eqref{DET} implies that $q_2$ and $p_2$ are relatively prime.\nNote that if it were the case that $q_0<0$ then $(|q_1|,|p_1|)$ should be added in the last step, rather than subtracted, in order to satisfy the compatibility condition.\nThis iterative process may be continued until $|q_n|=1$. Then at that point, append the rod structure $(0,\\pm 1)$ after $(q_n,p_n)=(\\pm 1,p_n)$ in order to achieve the stated outcome.\n\\end{proof}\n\nWe end this section by noting an important property of the horizon rods, which corresponds to a well-known result in 4-dimensional spacetime \\cite{HawkingEllis}*{Proposition 9.3.1}.\nRecall that a horizon rod is defined as an interval on the $z$-axis where the matrix $(f_{ij})$ is invertible, so that the torus fibers are nondegenerate there. These fibers together with the horizon rod form a codimension 2 surface in the spacetime, which will be referred to as a horizon rod surface.\n\n\\begin{lemma}\nA horizon rod surface is a future apparent horizon, and within the $t=0$ slice it is a minimal surface.\n\\end{lemma}\n\n\\begin{proof}\nAt the beginning of this section we found that associated with a horizon rod there is a Killing field\n\\begin{equation}\n\\mathcal{K}=\\partial_{t}+\\Omega_1 \\partial_{\\phi^1}+\\Omega_2 \\partial_{\\phi^2},\\quad\\quad\\quad\n\\Omega_i\\in\\mathbb{R},\n\\end{equation}\nwhich is null on the horizon rod surface $S$. Since the tangent space to $S$ is spanned by the vector fields $\\partial_{z}$ and $\\partial_{\\phi^i}$, it easily follows from the structure of the spacetime metric \\eqref{spacetimemetric} and the values for $\\Omega_{i}$ that $\\mathcal{K}$ is normal to $S$. 
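Indeed, from \\eqref{spacetimemetric} we find that $g(\\mathcal{K},\\partial_{z})=0$ and\n\\begin{equation}\ng(\\mathcal{K},\\partial_{\\phi^{i}})=f_{ij}\\left(v^{j}+\\Omega_{j}\\right),\n\\end{equation}\nwhich vanishes on $S$ since $\\Omega_{i}=-v^{i}$ there.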
The second fundamental form of $S$ in the $\\mathcal{K}$-direction is then given by\n\\begin{equation}\nII_{ab}= g(\\nabla_{\\partial_{a}}\\mathcal{K},\\partial_{b}),\n\\end{equation}\nwhere $\\partial_{a}$ denotes a tangent vector to $S$. Since $\\mathcal{K}$ is Killing\n\\begin{equation}\ng(\\nabla_{\\partial_{a}}\\mathcal{K},\\partial_{b})\n=-g(\\nabla_{\\partial_{b}}\\mathcal{K},\\partial_{a}),\n\\end{equation}\nand hence $II_{ab}$ is antisymmetric. Let\n\\begin{equation}\n\\gamma=f^{-1}e^{2\\sigma}dz^2\n+f_{ij}d\\phi^{i}d\\phi^{j}\n\\end{equation}\nbe the induced metric on the horizon rod surface, then the future null expansion is\n\\begin{equation}\n\\theta_{+}=\\gamma^{ab}II_{ab}=0,\n\\end{equation}\nsince $\\gamma^{ab}$ is symmetric.\nBy definition, $S$ is then a future apparent horizon.\n\nIn order to show that $S$ is minimal within the $t=0$ slice, let\n\\begin{equation}\n\\nu=(\\nabla^a t)\\partial_{a}=g^{tt}\\partial_{t}+g^{t\\phi^i}\\partial_{\\phi^i}\n\\end{equation}\nbe the unnormalized normal to the slice. Then the second fundamental form of the slice is\ngiven by\n\\begin{equation}\n|\\nu|k_{cd}=g(\\nabla_{\\partial_{c}}\\nu,\\partial_d).\n\\end{equation}\nObserve that\n\\begin{equation}\n|\\nu|k(\\partial_{\\phi^i},\\partial_{\\phi^j})\n=g^{tt}g(\\nabla_{\\partial_{\\phi^i}}\\partial_{t},\\partial_{\\phi^j})\n+g^{t\\phi^l}g(\\nabla_{\\partial_{\\phi^i}}\\partial_{\\phi^l},\\partial_{\\phi^j})\n\\end{equation}\nis antisymmetric, and\n\\begin{equation}\n|\\nu|k(\\partial_{z},\\partial_{z})=g^{tt}g(\\nabla_{\\partial_{z}}\\partial_{t},\\partial_{z})\n+g^{t\\phi^l}g(\\nabla_{\\partial_{z}}\\partial_{\\phi^l},\\partial_{z})=0,\n\\end{equation}\nsince $\\partial_{t}$, $\\partial_{\\phi^i}$ are Killing. It follows that\n\\begin{equation}\n\\mathrm{Tr}_{S}k=\\gamma^{ab}k_{ab}=f e^{-2\\sigma}k(\\partial_{z},\\partial_{z})\n+f^{ij}k(\\partial_{\\phi^i},\\partial_{\\phi^j})=0.\n\\end{equation}\nLet $n$ denote the outward unit normal to $S$ within the $t=0$ slice, then\n$n+\\nu\/|\\nu|=\\psi\\mathcal{K}$ for some function $\\psi$ on $S$. We then have\n\\begin{equation}\n0=\\psi\\theta_{+}=H_{S}+\\mathrm{Tr}_{S}k=H_{S}\n\\end{equation}\nwhere $H_{S}$ denotes mean curvature, and therefore $S$ is a minimal surface within the slice.\n\\end{proof}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{The Model Map}\n\\label{sec5} \\setcounter{equation}{0}\n\\setcounter{section}{5}\n\n\n\n\n\n\nIn this section a so-called model map $\\Phi_0\\colon\\mathbb{R}^3\\setminus\\Gamma\\to \\tilde{\\mathbf{X}}\\cong SL(3,\\mathbb{R})\/SO(3)$ is constructed, which encodes the prescribed asymptotic behavior near the axis and at infinity for the desired harmonic map, and also has finite tension. It may be viewed as an approximate solution to the singular harmonic map problem near the axes and at infinity.\n\nThe construction bears some similarity to the one in \\cite{weinstein96}, but is more complex due to the abundance of rod structures, and the fact that even the non-rotating case is already nonlinear. We detail the construction in the case of a single component, but the same approach works for all rod structures satisfying the compatibility condition.
Where needed, we will point out differences required to make the approach work in the more general case.\n\n\nThe canonical Riemannian metric on $\\tilde{\\mathbf{X}}$ was constructed in Section \\ref{sec3}, and it was noted that this space is parameterized by a $2\\times 2$ symmetric positive definite matrix $F=(f_{ij})$ and a $2$-vector $\\omega=(\\omega_1,\\omega_2)^t$. If $f=\\operatorname{det} F$ then the metric in these coordinates \\cite{IdaIshibashiShiromizu} is given by\n\\begin{align}\n\\begin{split}\n\t\\tilde{\\mathbf{g}} =& \\frac14 \\frac{df^2}{f^2} +\\frac14 f^{ij}f^{kl}df_{ik}df_{jl} + \\frac12 \\frac{f^{ij} d\\omega_i d\\omega_j}{f} \\\\\n\t=& \\frac14 [\\mathrm{Tr}(F^{-1}dF)]^2 +\\frac14 \\mathrm{Tr}(F^{-1}dF\\, F^{-1} dF) + \\frac12 \\frac{d\\omega^t \\, F^{-1}\\, d\\omega}{f}.\n\\end{split}\n\\end{align}\nA computation shows that the components of the tension \\eqref{tensiondef} of a map $\\Phi_0=(F,\\omega)$ are\n\\begin{align}\\label{eulerlagrange}\n\\begin{split}\n\\tau^{f_{lj}}=&\\Delta f_{lj}-f^{km}\\nabla^{\\mu}f_{lm}\\nabla_{\\mu}f_{kj}\n+f^{-1}\\nabla^{\\mu}\\omega_{l}\\nabla_{\\mu}\\omega_{j},\\\\\n\\tau^{\\omega_{j}}=&\\Delta\\omega_{j}-f^{kl}\\nabla^{\\mu}f_{jl}\\nabla_{\\mu}\\omega_{k}\n-f^{lm}\\nabla^{\\mu}f_{lm}\\nabla_{\\mu}\\omega_{j},\n\\end{split}\n\\end{align}\nwhere $\\Delta$ is the Laplacian and $\\nabla$ the connection associated with the flat metric \\eqref{flatmetric} on $\\mathbb{R}^3$. This yields the harmonic map equations $\\tau=0$ in these coordinates. Let\n\\begin{equation}\nH=F^{-1}\\nabla F,\\quad\\quad G=f^{-1}F^{-1}\\left(\\nabla\\omega\\right)^2,\\quad\\quad\nK=f^{-1}F^{-1}\\nabla\\omega,\n\\end{equation}\nthat is\n\\begin{equation}\n H_{\\mu}{}^i{}_j= f^{ik}\\nabla_\\mu f_{kj},\\quad\\quad\n G^{i}_j=f^{-1} f^{ik} \\nabla_\\mu\\omega_k\\,\\nabla^\\mu\\omega_j,\\quad\\quad\n K_\\mu{}^i=f^{-1}f^{ij}\\nabla_\\mu\\omega_j,\n\\end{equation}\nand observe that\n\\begin{equation}\n\\left(\\operatorname{div}H+G\\right)^{i}_j=f^{il}\\tau^{f_{lj}},\\quad\\quad\n\\left(\\operatorname{div}K\\right)^{i}=f^{-1}f^{ij}\\tau^{\\omega_{j}}.\n\\end{equation}\nWe then have\n\\begin{equation}\n |\\tau|^2=\\frac14 \\left[ \\mathrm{Tr}(\\operatorname{div}H + G)\\right]^2 + \\frac14 \\mathrm{Tr} \\left[(\\operatorname{div}H + G)(\\operatorname{div}H + G) \\right]\n + \\frac12 f (\\operatorname{div}K)^t F (\\operatorname{div}K).\n\\end{equation}\n\nIn order to state the main result of this section we will say that a map $\\Phi_0=(F,\\omega)$ \\textit{respects} a rod data set $\\mathcal{D}$ if, for each axis rod $\\Gamma_l$ with rod structure $(m_l,n_l)$ and potential constant $\\mathbf{c}_l$ within $\\mathcal{D}$,\n\\begin{equation}\n(m_l,n_l)\\in\\mathrm{ker \\text{ }} F|_{\\Gamma_l},\\quad\\quad\\quad \\omega|_{\\Gamma_l}=\\mathbf{c}_l.\n\\end{equation}\n\n\n\\begin{theorem} \\label{model}\nGiven a rod data set $\\mathcal{D}$ satisfying the generalized compatibility condition\n\\eqref{gcompatibilitycondition}, there exists a model map $\\Phi_0\\colon\\mathbb{R}^3\\setminus\\Gamma\\to \\tilde{\\mathbf{X}}$ with uniformly bounded tension, having decay $|\\tau|=O(r^{-7\/2})$, which respects $\\mathcal{D}$.\n\\end{theorem}\n\n\\begin{proof}\nAs mentioned above, we give a detailed proof for the case of the rod configuration corresponding to a single lens horizon $L(p,1)$, see Figure~\\ref{domain}.
However, we will indicate below the changes required for the general case.\n\n\\begin{figure}\n\\includegraphics[width=10cm]{domain.pdf}\n\\caption{Model Map Construction} \\label{domain}\n\\end{figure}\n\n\nThe only requirement of the map $\\Phi_0$ within the white area in Figure~\\ref{domain} will be that it is a smooth extension of the map which will be defined explicitly in the gray region. This can easily be achieved since the white area remains a fixed distance away from the singular set $\\Gamma$, and this clearly implies that the tension is bounded within the white area.\n\nFor convenience, we define a pair of harmonic functions needed in the construction. For $a\\in\\mathbb{R}$ let $r_{a}$ be the Euclidean distance from the point $z=a$ on the $z$-axis, and let $\\theta_a$ be the polar angle about this center. Then set\n\\begin{equation}\n\tu_a =\\log(r_a-(z-a))=\\log\\bigl(2r_a\\sin^2(\\theta_a\/2)\\bigr), \\quad\\quad\nv_a = \\log(r_a+(z-a))=\\log\\bigl(2r_a\\cos^2(\\theta_a\/2)\\bigr).\n\\end{equation}\nIt is easy to check that these functions are harmonic. Furthermore $u_a$ behaves like $2\\log\\rho$ near the $z>a$ part of the $z$-axis and is locally bounded below on the $z<a$ part, while $v_a$ behaves like $2\\log\\rho$ near the $z<a$ part of the $z$-axis and is locally bounded below on the $z>a$ part of the $z$-axis.\n\nWe begin with the definition of $\\Phi_0$ outside a large ball. The map there is based on the Minkowski metric \\eqref{Minkowski} and is given by\n\\begin{equation}\n\tF=\\begin{pmatrix} e^{u_0-\\log2} & 0 \\\\ 0 & e^{v_0-\\log2} \\end{pmatrix}, \\qquad \\omega=\\omega(\\theta),\n\\end{equation}\nwhere $\\theta=\\theta_0$. The function $\\omega(\\theta)$ is smooth and chosen so that $\\omega$ is the appropriate constant on $[0,\\epsilon]\\cup[\\pi-\\epsilon,\\pi]$, with $0<\\epsilon<\\pi\/2$ fixed so that $\\omega$ is constant on the regions $\\mathcal N_0$ and $\\mathcal S_0$. Observe that this map is harmonic wherever $\\omega$ is constant, since $G=0$ and $\\div (F^{-1}\\nabla F)=0$. It will now be shown that the tension $|\\tau|$ decays like $O(r^{-7\/2})$, which, as will be seen later, is sufficient for the main existence and uniqueness arguments. Since the tension vanishes for $\\theta\\in[0,\\epsilon]\\cup[\\pi-\\epsilon,\\pi]$, we need only estimate $|\\tau|$ on the interval $[\\epsilon,\\pi-\\epsilon]$. An explicit calculation gives\n\\begin{align}\n\\begin{split}\n\tf (\\operatorname{div}K)^t F (\\operatorname{div}K) =&\n \\frac{4\\csc^2\\theta \\sin^2(\\theta\/2)}{r^7}\n \\left[\\csc^4(\\theta\/2) \\bigl(\\omega_1''-(\\csc\\theta+2\\cot\\theta) \\omega_1'\\bigr)^2 \\right. \\\\\n &\\left. + 4 \\csc^2\\theta \\bigl(\\omega_2''+(\\csc\\theta-2\\cot\\theta) \\omega_2'\\bigr)^2\\right]\\\\\n =& O(r^{-7}),\n\\end{split}\n\\end{align}\nand\n\\begin{equation}\n\tG=\\frac{\\csc^2(\\theta\/2)\\sec^2(\\theta\/2)}{r^5}\n\t\\begin{pmatrix}\n\t\t \\omega _1'^2\\csc^2(\\theta\/2) &\n\t\t \\omega _1'\\omega _2'\\csc^2(\\theta\/2) \\\\[1ex]\n\t\t \\omega _1'\\omega _2' \\sec^2(\\theta\/2) &\n\t\t \\omega _2'^2 \\sec^2(\\theta\/2)\n\t\\end{pmatrix}\n\t=O(r^{-5}).\n\\end{equation}\nSince $\\div H=0$, it follows that $|\\tau|=O(r^{-7\/2})$.\n\nIt remains to define the map inside the two tubular neighborhoods capped with hemispheres. Consider first the northern tubular neighborhood. Let $z=b$ indicate the location of the point $N$.
Then in this region define\n\\begin{equation}\n\tF=\\begin{pmatrix} e^{u} & 0 \\\\ 0 & e^{v} \\end{pmatrix}, \\qquad \\omega=\\mathbf{c}_1,\n\\end{equation}\nwhere\n\\begin{equation}\nu=\\lambda (u_0-\\log2) + (1-\\lambda) u_b,\\quad\\quad v=\\lambda (v_0-\\log2),\n\\end{equation}\nand $\\lambda=\\lambda(z)$ is a smooth cut-off function with $\\lambda=1$ in $\\mathcal N_0$ and $\\lambda=0$ in $\\mathcal N_1$. This leads to the correct rod structure, and the definitions outside the large ball and in $\\mathcal N_0$ agree. Moreover\n\\begin{equation}\n\t\\div H = \\begin{pmatrix} \\Delta [\\lambda(u_0-u_b)] & 0 \\\\ 0 & \\Delta [\\lambda v_0]\\end{pmatrix},\n\\end{equation}\nwhich is bounded. Indeed\n\\begin{equation}\n\t\\Delta[\\lambda(u_0-u_b)] = (u_0-u_b)\\Delta\\lambda + 2(\\partial_{z}\\lambda) \\partial_z(u_0-u_b),\n\\end{equation}\nand $\\partial_z u_a = -1\/r_a$ for $a=0, b$\nis clearly bounded in the transition region. Similarly $\\Delta[\\lambda v_0]$ is bounded since $\\partial_z v_0 = 1\/r_0=1\/r$ is bounded. It follows that $|\\tau|$ is bounded in the northern region, as $G=0$ and $K=0$ due to the constancy of $\\omega$.\n\n\nConsider now the southern tubular neighborhood. The map in $\\mathcal S_0$ is defined exactly as in $\\mathcal N_0$, that is with the same $F$ but with $\\omega=\\mathbf{c}_2$. In fact $\\omega$ is set to be the constant $\\mathbf{c}_2$ in the entire southern tubular neighborhood. Next, let the south pole $S$ and corner point $C$ be located at $z=c$ and $z=0$,\nrespectively. Then in $\\mathcal S_1$ the remainder of the map is defined by\n\\begin{equation}\n\tF= hF_0h^t = h \\begin{pmatrix} e^{u} & 0 \\\\ 0 & e^{v} \\end{pmatrix}h^t,\n\\end{equation}\nwhere\n\\begin{equation} \\label{h}\n\th = \\begin{pmatrix} 1& -p \\\\ 0 & 1 \\end{pmatrix}\n\\end{equation}\nand $v=v_0-\\log2$, $u=u_0-u_c$. As before $\\div(F_0^{-1}\\nabla F_0)=0$ and hence \\begin{equation}\n\\div(F^{-1}\\nabla F) = h^{-t} \\div(F_0^{-1}\\nabla F_0) h^t = 0,\n\\end{equation}\nwhere for notational convenience $h^{-t}:=(h^t)^{-1}$.\nIt follows that $\\Phi_0$ is a harmonic map in $\\mathcal{S}_1$. In order to verify\nthat the rod structure is correct, observe that\n\\begin{equation} \\label{structure}\n\tF\\vector{1}{0} = \\vector{e^u+p^2e^v}{-pe^v}, \\quad\\quad F\\vector01 = \\vector{-pe^v}{e^v}, \\quad\\quad F\\vector1p = \\vector{e^u}{0}.\n\\end{equation}\nFrom this it is clear that the only direction which degenerates on the disk rod (between $S$ and $C$) is $(1,p)$, and the only direction that degenerates on the south rod (below $C$) is $(0,1)$. Furthermore, since $F_0$ is nonsingular on the horizon rod the same is true of $F$.\n\nLastly, the map will be defined on the southern transition region. Recall that $\\omega$ is constant. Moreover if $F$ defined in $\\mathcal{S}_1$ can be transitioned\nto a diagonal $F$ satisfying $\\div(F^{-1}\\nabla F)=0$, then we can complete the transition in the same manner as in the northern transition region. Thus it remains to demonstrate the transition to a diagonal $F$. Set\n\\begin{equation}\n\tF = h(z) F_0 h(z)^t, \\qquad\\quad h(z) = \\begin{pmatrix} 1& -p\\lambda(z) \\\\ 0 & 1 \\end{pmatrix},\n\\end{equation}\nwhere $F_0$ is as above, and $\\lambda(z)$ is a smooth cut-off function which is equal to $1$ near $\\mathcal S_1$ and equal to $0$ near $\\mathcal S_0$.
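In the estimates below it will be useful to note that, since $h$ depends only on $z$,\n\\begin{equation}\nh^{-1}\\partial_{z}h = \\begin{pmatrix} 0 & -p\\lambda' \\\\ 0 & 0 \\end{pmatrix},\n\\quad\\quad\\quad\n\\div(h^{-1}\\nabla h) = \\begin{pmatrix} 0 & -p\\lambda'' \\\\ 0 & 0 \\end{pmatrix}.\n\\end{equation}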
To verify that $\\div(F^{-1}\\nabla F)$ is bounded in the transition region compute\n\\begin{equation}\n\tF^{-1} \\nabla F\n\t= (F_0h^t)^{-1} (h^{-1}\\nabla h) F_0h^t + h^{-t} (F_0^{-1}\\nabla F_0) h^t + h^{-t}\\nabla h^t,\n\\end{equation}\nand\n\\begin{align}\n\\begin{split}\n\\label{divFdF}\n\t\\div(F^{-1} \\nabla F) = &[\\nabla(F_0h^t)^{-1}]\\cdot (h^{-1}\\nabla h) F_0h^t + (F_0h^t)^{-1} \\div (h^{-1}\\nabla h) F_0h^t\\\\\n\t&+ (F_0h^t)^{-1} (h^{-1}\\nabla h)\\cdot \\nabla (F_0h^t)\n+ (\\nabla h^{-t}) \\cdot (F_0^{-1}\\nabla F_0) h^t\\\\\n\t&+h^{-t} \\div(F_0^{-1}\\nabla F_0) h^t + h^{-t} (F_0^{-1}\\nabla F_0)\\cdot \\nabla h^t + \\div(h^{-t}\\nabla h^t).\n\\end{split}\n\\end{align}\nEach term may now be estimated individually. First note that\nthe fifth term vanishes and the seventh term is clearly bounded.\nFurthermore\n\\begin{equation}\n\tF_0^{-1} \\nabla F_0 = \\begin{pmatrix} \\nabla u & 0 \\\\ 0 & \\nabla v \\end{pmatrix},\n\\end{equation}\nand since $h$ depends only on $z$ we may replace $\\nabla u$ and $\\nabla v$ in \\eqref{divFdF} by $\\partial_z u$ and $\\partial_z v$, respectively. As explained above these $z$-derivatives are bounded, and since $h^t$, $h^{-t}$, $\\partial_{z}h^t$ and $\\partial_{z}h^{-t}$ are bounded it follows that the fourth and sixth terms are bounded. Next observe that the second term becomes\n\\begin{equation}\n\t (F_0h^t)^{-1} \\div (h^{-1}\\nabla h) F_0h^t= pe^{v-u}\\lambda''\n\t \\begin{pmatrix} p\\lambda & -1 \\\\ p^2\\lambda^2 & -p\\lambda \\end{pmatrix},\n\\end{equation}\nwhich is bounded. Furthermore the sum of the first and third terms is\n\\begin{align}\n\\begin{split}\n\t&[\\nabla(F_0h^t)^{-1}]\\cdot (h^{-1}\\nabla h) F_0h^t\n\t+ (F_0h^t)^{-1} (h^{-1}\\nabla h)\\cdot \\nabla (F_0h^t)\\\\\n =&\n\tpe^{v-u}\\lambda'\n\t\\begin{pmatrix} p\\bigl[\\lambda (\\partial_z v-\\partial_z u)+\\lambda'\\bigr] & \\partial_z u-\\partial_z v \\\\\n\tp^2\\lambda\\bigl[\\lambda(\\partial_z v-\\partial_z u)+2\\lambda'\\bigr] & -p\\bigl[\\lambda(\\partial_z v-\\partial_z u)+\\lambda'\\bigr] \\end{pmatrix},\n\\end{split}\n\\end{align}\nwhich again is bounded. It follows that $|\\tau|$ is bounded in the southern region, and this completes the proof for the rod data set associated with a single component lens horizon $L(p,1)$.\n\n\\begin{remark} \\label{integer}\nWe note that in the argument above showing that $\\div(F^{-1}\\nabla F)$ is bounded no use was made of the fact that $p$ is an integer. This will be important in what follows.\n\\end{remark}\n\nConsider now the case of a general rod data set, in which consecutive corners may be present. In this situation the map will be defined inductively one corner at a time, with a transition region between any two consecutive corners, as well as a transition region on each of the two semi-infinite rods. The only feature which remains to be treated is the case of two consecutive corners. Suppose then that consecutive corners occur at points $C_N$ and $C_S$ along the $z$-axis, with $z=\\mathrm{a}$ and $z=\\mathrm{b}$ at $C_N$ and $C_S$ respectively. Let there be rod structures $(m,n)$ above $C_N$, $(p,q)$ between $C_N$ and $C_S$, and $(r,s)$ below $C_S$.
It will be assumed that $m\\neq 0$, $p\\neq 0$, $r\\neq 0$, and that the generalized compatibility condition is satisfied\n\\begin{equation}\\label{1256}\nmr(ps-rq)(mq-np)\\leq 0.\n\\end{equation}\nNote that this quantity is nonzero (and hence negative) since $ps-rq\\neq 0$ and $mq-np\\neq 0$ due to the fact that $C_N$ and $C_S$ are genuine corners.\n\nLet $v=u_{\\mathrm{b}}-u_{\\mathrm{a}}$ and $u=2\\log\\rho-v$ and set\n\\begin{equation}\n\tF_0=\\begin{pmatrix} e^u & 0 \\\\ 0 & e^v \\end{pmatrix},\n\\end{equation}\nso that $F_0$ gives rod structure $(1,0)$ above $C_N$ and below $C_S$, and $(0,1)$ between $C_N$ and $C_S$. Next define $F_N=h_NF_0h_N^t$ near $C_N$ and $F_S=h_SF_0h_S^t$ near $C_S$, where\n\\begin{equation} \\label{hnhs}\n\th_N=\\begin{pmatrix} -q\/p & -n\/m \\\\ 1 & 1 \\end{pmatrix}, \\qquad\n\th_S=\\begin{pmatrix} -q\/p & -s\/r \\\\ 1 & 1 \\end{pmatrix}.\n\\end{equation}\nIt is straightforward to check that the maps $F_N$ and $F_S$ yield the desired rod structures on each of the three rods in neighborhoods of $C_N$ and $C_S$ respectively, and that $(F_N,\\omega)$ and $(F_S,\\omega)$ are harmonic whenever $\\omega$ is constant. This latter property arises from the fact that although $F\\mapsto hFh^t$, $\\omega\\mapsto h\\omega$ is an isometry of $\\tilde{\\mathbf{X}}$ if and only if $\\operatorname{det} h=\\pm1$, this determinant condition is not required here for the harmonic map equations to be satisfied since $\\omega$ is constant. It remains to define $F$ in a transition region between $C_N$ and $C_S$. In order to do this first let $\\bar{F}_N=\\mathbf{k} F_0 \\mathbf{k}^t$ and $\\bar{F}_S=F_0$, where\n\\begin{equation}\\label{987}\n\t\\mathbf{k} = h_S^{-1}h_N=\\begin{pmatrix} 1 & \\frac{p(ms-nr)}{m(ps-qr)} \\\\[1ex]\n 0 & -\\frac{r(mq-np)}{m(ps-qr)} \\end{pmatrix}.\n\\end{equation}\nIf there is a smooth transition $\\mathbf{k}=\\mathbf{k}(z)$ from $h_S^{-1}h_N$ to\n\\begin{equation}\\label{jfh}\n\t \\begin{pmatrix} 1 & \\frac{p(ms-nr)}{m(ps-qr)} \\\\[1ex] 0 & 1 \\end{pmatrix},\n\\end{equation}\nthen by Remark~\\ref{integer}\nit is clear that we can further transition $\\mathbf{k}$ to the identity as in the arguments above the remark, since the only difference between \\eqref{jfh} and $h$ in \\eqref{h} is the fact that the off-diagonal element is an integer in the latter matrix.\nIt follows that $\\bar{F}$ would then be defined in the whole region encompassing both corners, having the property that it is equal to $\\bar{F}_N$ near $C_N$ and equal to $\\bar{F}_S$ near $C_S$. Finally, taking $F=h_S \\bar{F} h_S^t$ produces a map with finite tension which coincides with $F_N$ near $C_N$ and $F_S$ near $C_S$.\n\n\n\nIt remains to define the transition from \\eqref{987} to \\eqref{jfh}. Set\n\\begin{equation}\n\t\\mathbf{k}(z) = \\begin{pmatrix} 1 & \\varsigma \\\\ 0 & \\lambda(z) \\end{pmatrix},\\quad\\quad\\quad \\varsigma=\\frac{p(ms-nr)}{m(ps-qr)},\n\\end{equation}\nwhere $\\lambda(z)$ is a smooth cut-off function satisfying $\\lambda(z)=-\\frac{r(mq-np)}{m(ps-qr)}$ near $C_N$ and $\\lambda(z)=1$ for $z<(\\mathrm{a}+\\mathrm{b})\/2$.\nAccording to the generalized compatibility condition \\eqref{1256},\n$\\lambda(z)$ may be chosen strictly positive. The arguments following \\eqref{divFdF} may now be repeated to show that the tension remains bounded. In particular, the terms four through seven of \\eqref{divFdF} are bounded in the current setting. 
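Note that in the present situation, since $\\mathbf{k}$ depends only on $z$,\n\\begin{equation}\n\\mathbf{k}^{-1}\\partial_{z}\\mathbf{k} = \\frac{\\lambda'}{\\lambda}\\begin{pmatrix} 0 & -\\varsigma \\\\ 0 & 1 \\end{pmatrix},\n\\quad\\quad\\quad\n\\div(\\mathbf{k}^{-1}\\nabla \\mathbf{k}) = \\frac{\\lambda\\lambda''-\\lambda'^{2}}{\\lambda^{2}}\\begin{pmatrix} 0 & -\\varsigma \\\\ 0 & 1 \\end{pmatrix},\n\\end{equation}\nboth of which are bounded since $\\lambda$ is bounded away from zero.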
Denoting $F_{\\mathbf{k}}=F_0 \\mathbf{k}^t$, the second term becomes\n\\begin{equation}\n\tF_{\\mathbf{k}}^{-1} \\div (\\mathbf{k}^{-1}\\nabla \\mathbf{k})F_{\\mathbf{k}} = \\frac{\\lambda\\lambda''-\\lambda'^{2}}{\\lambda^{2}}\n\t\\begin{pmatrix} -\\varsigma^2 e^{v-u} & -\\varsigma\\lambda e^{v-u} \\\\[1.5ex]\n\t\\dfrac{\\varsigma\\left(\\varsigma^2 e^{v-u}+1\\right)}{\\lambda} & \\varsigma^2 e^{v-u}+1\n\t\\end{pmatrix},\n\\end{equation}\nand the sum of the first and third terms is\n\\begin{equation}\n\t\\nabla F_{\\mathbf{k}}^{-1} \\cdot(\\mathbf{k}^{-1}\\nabla \\mathbf{k}) F_{\\mathbf{k}} + F_{\\mathbf{k}}^{-1} (\\mathbf{k}^{-1}\\nabla \\mathbf{k})\\cdot \\nabla F_{\\mathbf{k}} = \\varsigma e^{v-u}\\lambda'\n\t\\begin{pmatrix} \\dfrac{\\varsigma(u_z-v_z)}{\\lambda} & u_z - v_z - \\dfrac{\\lambda'}{\\lambda} \\\\[1.5ex]\n\t\\dfrac{\\varsigma^2\\lambda(v_z-u_z)-(\\varsigma^2+e^{u-v})\\lambda'}{\\lambda^3} & \\dfrac{\\varsigma(v_z-u_z)}{\\lambda}\n\t\\end{pmatrix},\n\\end{equation}\nboth of which are bounded. Similar arguments may be used to treat the cases when one of $m$, $p$, $r$ is zero.\n\\end{proof}\n\n\n\n\\section{Energy Estimates}\n\\label{sec6} \\setcounter{equation}{0}\n\\setcounter{section}{6}\n\n\nIn the rank 1 case treated in \\cite{weinstein96}, a priori estimates for the singular harmonic map problem relied heavily on the uniformly strict negative curvature of the target spaces. In the current setting the target symmetric space $\\mathbf{X}=SL(3,\\mathbb{R})\/SO(3)$ is of rank 2, that is, the dimension of a maximal flat subspace is 2. It follows that $\\mathbf{X}$ has nonpositive, but not strictly negative, curvature and the methods of \\cite{weinstein96} break down. In order to overcome this difficulty, we will employ a generalization of horospherical coordinates from hyperbolic space so that the flat directions as well as the coordinate planes of strict negative curvature are explicitly identified, and are thus more easily exploited. Coordinate systems of the symmetric space $\\mathbf{X}=SL(3,\\mathbb{R})\/SO(3)$ have been investigated previously, as in \\cite{MazzeoVasy}, yet what we need requires a different set of properties.\n\nConsider the Iwasawa decomposition \\cite{BallmanGromovSchroeder} of $G=SL(3,\\mathbb{R})$ given by $G=KAN$ where the three subgroups are $K=SO(3)$,\n\\begin{equation}\nA = \\{ \\mathrm{diag} (\\lambda_1, \\lambda_2, \\lambda_3) \\,\\, | \\,\\, \\lambda_i > 0,\\text{ for }i=1,2,3,\\text{ } \\lambda_1 \\lambda_2 \\lambda_3 =1 \\},\n\\end{equation}\nand\n\\begin{equation}\nN=\\{\\text{upper triangular matrices with 1's on the diagonal}\\}.\n\\end{equation}\nFor each $g\\in G$ there exist unique elements $k\\in K$, $a\\in A$, and $n\\in N$ such that\n$g=kan$. Moreover by taking inverses we have $G=NAK$, and hence $\\mathbf{X}=G\/K$ may be identified with the subgroup $NA$. Let $x_0=[Id]\\in \\mathbf{X}$; then the orbit $A\\cdot x_0$ represents a maximal flat, so that it is a totally geodesic submanifold with vanishing curvature. The last property follows from the curvature formula in Section \\ref{sec3}, and the fact that the Lie algebra\n\\begin{equation}\n\\mathfrak{a} = \\{ \\mathrm{diag} (\\lambda_1, \\lambda_2, \\lambda_3) \\,\\, | \\,\\, \\sum \\lambda_i = 0\\}\n\\end{equation}\nassociated with $A$ is abelian, i.e.
$[\\alpha_1,\\alpha_2]=0$ for all $\\alpha_1, \\alpha_2\\in \\mathfrak{a}$.\nOn the other hand, the orbit $N\\cdot x_0$ is a horocycle determined by the Weyl chamber\n\\begin{equation}\n\\mathfrak{a}^+ = \\{ \\mathrm{diag} (\\lambda_1, \\lambda_2, \\lambda_3) \\,\\, | \\,\\,\n\\lambda_1>\\lambda_2>\\lambda_3,\\text{ }\\sum \\lambda_i = 0\\}\\subset\\mathfrak{a}.\n\\end{equation}\nIt is a closed submanifold with the property that every flat which is asymptotic to the Weyl chamber at infinity\n\\begin{equation}\n\\mathrm{w}^{+}:=(A^{+}\\cdot x_0)(\\infty)=\\{\\gamma(\\infty)\\mid \\gamma(s)=\\mathrm{exp}(s\\alpha^{+})\\cdot x_0,\\text{ }\\alpha^{+}\\in\\mathfrak{a}^{+}\\},\n\\end{equation}\nintersects the horocycle orthogonally in exactly one point; recall that a flat $\\mathcal{F}$ is asymptotic to a Weyl chamber $\\mathrm{w}$ at infinity if $\\mathrm{w}\\subset\\mathcal{F}(\\infty)$.\nIn particular, the horocycle $N \\cdot x_0$ and flat $\\mathcal{F}_{x_0}:=A \\cdot x_0$\nintersect orthogonally at $x_0$, as can be seen from the orthogonality between the respective Lie algebras $\\mathfrak{n}$ (all upper triangular matrices with zeros on the diagonal) and $\\mathfrak{a}$ with respect to the Riemannian metric at $x_0$ given in Section \\ref{sec3}.\n\nA foliation by flats may be constructed \\cite{BallmanGromovSchroeder} from the action of $N$. More precisely\n\\begin{equation}\n\\mathbf{X} = \\bigcup_{n \\in N} n \\cdot \\mathcal{F}_{x_0},\n\\end{equation}\nwhere $n\\cdot \\mathcal{F}_{x_0} \\cap n' \\cdot \\mathcal{F}_{x_0} = \\emptyset$ for $n \\neq n'$ and each $n \\cdot\\mathcal{F}_{x_0}$ is asymptotic to the Weyl chamber $\\mathrm{w}^+$. Since each point $x \\in \\mathbf{X}$ can be uniquely written as\n$n a \\cdot x_0$, and $a \\cdot \\mathcal{F}_{x_0} = \\mathcal{F}_{x_0}$ as sets,\nthe assignment $x \\mapsto \\mathcal{F}_x=na\\cdot\\mathcal{F}_{x_0}$ defines a smooth foliation of $\\mathbf{X}$ whose leaves are the set of totally geodesic submanifolds $\\{ n \\cdot \\mathcal{F}_{x_0}\\}_{n \\in N}$, each of which is isometric to ${\\mathbb R}^2$. By homogeneity of $\\mathbf{X} = G\/K$, the 3-dimensional horocycle $N \\cdot x$ and the 2-dimensional flat $\\mathcal{F}_x$ intersect orthogonally at (and only at) $x$. In this sense, the pair $(a, n)$ gives a horocyclic orthogonal coordinate system for $\\mathbf{X}$.\n\nLet $\\gamma_{x_0}(s)$ be an arc-length parameterized geodesic satisfying $\\gamma_{x_0}(0) = x_0$, and $\\gamma_{x_0}(\\infty) \\in \\mathrm{w}^+$. Equivalently $\\gamma_{x_0}'(0) \\in T_{x_0} \\mathbf{X}$ is an element of the Weyl chamber $\\mathfrak{a}^+$, so that $\\gamma_{x_0}$ is regular in the sense that it is contained in a unique 2-dimensional flat, namely $\\mathcal{F}_{x_0}$. Since the action by $na$ on $\\mathbf{X}$ is isometric and preserves the combinatorial structure of the Weyl chambers projected to $\\mathbf{X}(\\infty)$, it follows that $\\gamma_{x}(s):=na\\cdot \\gamma_{x_0}(s)$ is a regular geodesic contained in the flat $n\\cdot\\mathcal{F}_{x_0}$, and is asymptotic to $\\mathrm{w}^+$. In fact, the distance $d_{\\mathbf{X}}(n\\cdot \\gamma_{x_0}(s),\\gamma_{x_0}(s))$ decays exponentially as $s\\rightarrow\\infty$, and $d_{\\mathbf{X}}(na\\cdot \\gamma_{x_0}(s),\\gamma_{x_0}(s))\\rightarrow d_{\\mathbf{X}}(a\\cdot x_0, x_0)$.\n\n\nOn the flat $\\mathcal{F}_{x_0}$ there is a natural Euclidean coordinate system $r=(r_1, r_2)$, where the origin is identified with $x_0$, the $r_1$-axis coincides with the regular geodesic $\\gamma_{x_0}(s)$, and the $r_2$-axis is the orthogonal line to $\\gamma_{x_0}(s)$.
The $r_1$ axis is chosen to have the opposite orientation from that of $\\gamma_{x_0}$, so that $r_1\\rightarrow\\infty$ corresponds to $s\\rightarrow -\\infty$, and similarly for $r_2$.\nThe $(r_1, r_2)$ coordinate system may then be pushed forward to the flat $n \\cdot \\mathcal{F}_{x_0}$ where the origin is identified with $n \\cdot x_0$, the $r_1$-axis is the geodesic $\\gamma_{n \\cdot x_0}(s)$, and the $r_2$-axis is again the orthogonal line to $\\gamma_{n\\cdot x_0}(s)$ in the flat. Hence the horocyclic coordinates $(a, n)$ may be represented by $(r, n)$. Moreover, for each $n'\\in N$ there is an isometry which preserves the $r$-coordinates and for each $r'$ there is a diffeomorphism which preserves the $n$-coordinates\n\\begin{equation}\n\\Xi_{n'} : (r_1, r_2, n) \\mapsto (r_1, r_2, n' n),\\quad\\quad\n\\Xi_{r'}: (r_1, r_2, n) \\mapsto (r_1+r_{1}', r_2+r_{2}', n).\n\\end{equation}\nThe $r$-translations map horocycles to horocycles, and thus if\n$\\theta=(\\theta^1,\\theta^2,\\theta^3)$ is a system of global coordinates on $N\\cdot x_0\\cong\\mathbb{R}^3$ then they may be pushed forward to all horocycles by the action of $\\Xi_{r'}$.\nIt follows that $(r,\\theta)$ form a system of global coordinates on $\\mathbf{X}$ with the property that the coordinate fields $\\partial_{r_i}$ and $\\partial_{\\theta^j}$ are orthogonal. By combining the observations above, the $G$-invariant Riemannian metric on $\\mathbf{X}$ can be expressed in these coordinates by\n\\begin{equation}\n\\mathbf{g}= dr^2+Q(d\\theta,d\\theta) = dr_{1}^2 +dr_{2}^2 + Q_{ij}d \\theta^{i} d\\theta^{j},\n\\end{equation}\nwhere the coefficients $Q_{ij}=Q_{ij}(r,\\theta)$ are smooth functions.\n\n\nAs a demonstration of this framework in the simpler setting of rank 1, consider the hyperbolic plane $\\mathbb{H}^2$. The half plane coordinates $(U,V)$, $U>0$ may be transformed to orthogonal horocyclic coordinates $(r,\\theta)$ by $r=\\log U$ and $\\theta=V$ to find\n\\begin{equation}\n\\mathbf{g}_{-1}=\\frac{dU^2 + dV^2}{U^2} = dr^2 + e^{-2r} d\\theta^2.\n\\end{equation}\nHere the flat $\\mathcal{F}_{x_0}$ in the upper half plane model with $x_0 =(U,V)=(1,0)$ is the positive $U$-axis $\\{V=0\\}$, and the horocycle $N\\cdot x_0$ is the horizontal line $\\{U=1\\}$.\n\n\n\nFor any unit tangent vector $Z \\in T_x \\mathbf{X}$ perpendicular to $\\mathcal{F}_x$, the sectional curvature\n\\begin{equation}\n\\mathcal{K}(Z,\\gamma_{x}'(0))=\\langle R(Z, \\gamma_{x}'(0))\\gamma_{x}'(0), Z \\rangle\n\\end{equation}\nis negative, since $\\mathcal{F}_x$ is a flat of maximal dimension. Moreover, such curvatures are uniformly negative (bounded away from zero) by compactness of the set of unit normal vectors to $\\mathcal{F}_x$ and the homogeneity of $\\mathbf{X}$. The uniform (in $x$ as well as choice of 2-plane) upper and lower bounds of these curvatures will be denoted by\n\\begin{equation}\\label{curvaturebounds}\n-c^2 \\leq \\mathcal{K} \\leq -b^{2} < 0.\n\\end{equation}\n\n\\begin{lemma}\\label{lemmaimhof}\nLet $J$ be a Jacobi field perpendicular to the\nflat $\\mathcal{F}_x$ along an arc-length parameterized geodesic $\\gamma(s)\\in\\mathcal{F}_x$. Assume further that the Jacobi field is stable, in the sense that it is bounded as $s\\rightarrow -\\infty$. Then\n\\begin{equation}\ne^{bs}|J(0)| \\leq|J(s)|\\leq e^{cs}|J(0)|,\\quad\\quad\\quad s\\geq 0.\n\\end{equation}\n\\end{lemma}\n\n\n\\begin{proof}\nThis follows with slight modification from the proof of Theorem 2.4 in \\cite{HeintzeImhof}, which relies on Proposition 4.1 in \\cite{ImhofRuh}.
The key observation is that the proof of Proposition 4.1 in \\cite{ImhofRuh} does not\nuse the bounds on all sectional curvatures, but rather only those appearing in \\eqref{curvaturebounds}.\n\\end{proof}\n\n\n\n\\begin{lemma}\\label{lemmaQ}\nFor any vector $\\xi\\in\\mathbb{R}^3$ and $i=1,2$\n\\begin{equation} \\label{ineq:ab}\n2b Q(\\xi,\\xi) \\leq \\partial_{r_i} Q(\\xi,\\xi) \\leq 2c Q(\\xi,\\xi).\n\\end{equation}\n\\end{lemma}\n\n\\begin{proof}\nLet $\\psi:\\mathbb{R}^5\\rightarrow\\mathbf{X}$ denote the global coordinate patch constructed above, so that $\\psi^{-1}(x)=(r(x),\\theta(x))$. Consider the geodesics\n$\\gamma_{\\xi_0+\\varepsilon\\xi}: s\\mapsto \\psi(s,0,\\xi_0+\\varepsilon\\xi)$ where $\\varepsilon$ is a variation parameter and $\\xi_0\\in\\mathbb{R}^3$ is fixed. If $v=(0,\\xi)\\in\\mathbb{R}^5$ then $J_{\\xi}=d\\psi(v)$ is a Jacobi field along the geodesic $\\gamma_{\\xi_0}$. Moreover this Jacobi field is stable since\n$d_{\\mathbf{X}}(\\gamma_{\\xi_0+\\varepsilon\\xi}(s),\\gamma_{\\xi_0}(s))$ is bounded as $s\\rightarrow -\\infty$. Observe that\n\\begin{equation}\nQ_{x}(\\xi,\\xi)=\\mathbf{g}(d\\psi_{\\psi^{-1}(x)}(v),d\\psi_{\\psi^{-1}(x)}(v))\n=|J_{\\xi}(x)|^2,\n\\end{equation}\nso the inequalities \\eqref{ineq:ab} measure the logarithmic growth rate of stable Jacobi fields.\n\nIf $s \\leq t$ then Lemma \\ref{lemmaimhof} implies that\n\\begin{equation}\ne^{2b(t-s)}|J_{\\xi}(\\gamma_{\\xi_0}(s))|^2\n\\leq|J_{\\xi}(\\gamma_{\\xi_0}(t))|^2\\leq e^{2c(t-s)}|J_{\\xi}(\\gamma_{\\xi_0}(s))|^2.\n\\end{equation}\nThe desired result now follows for $i=1$ by taking logarithms, dividing by $t-s$, and letting $t\\rightarrow s$. Similar arguments hold for $i=2$.\n\\end{proof}\n\n\nConsider a smooth map $\\varphi:\\mathbb{R}^3\\setminus\\Gamma\\rightarrow \\mathbf{X}$ with Dirichlet energy density\n\\begin{equation}\n|d \\varphi|^2 = | \\nabla (r_1 \\circ \\varphi) |^2 + | \\nabla (r_2 \\circ \\varphi) |^2 + Q \\Big( \\nabla (\\theta \\circ \\varphi), \\nabla (\\theta \\circ \\varphi) \\Big),\n\\end{equation}\nwhere the norms are computed with respect to the Euclidean metric $\\delta$ in \\eqref{flatmetric} and\n\\begin{equation}\nQ \\Big( \\nabla (\\theta \\circ \\varphi), \\nabla (\\theta \\circ \\varphi) \\Big)\n=Q_{ij}\\left(\\partial_{\\rho}\\theta^{i}\\partial_{\\rho}\\theta^{j}\n+\\partial_{z}\\theta^{i}\\partial_{z}\\theta^{j}\\right).\n\\end{equation}\nLet $\\overline{\\Omega}\\subset\\mathbb{R}^3\\setminus\\Gamma$ be the closure of a bounded domain situated away from the axis, and define the local Dirichlet energy\n\\begin{equation}\nE_{\\Omega}(\\varphi)=\\frac{1}{2}\\int_{\\Omega}|d\\varphi|^2.\n\\end{equation}\nTwo of the harmonic map equations associated with the Dirichlet energy are\n\\begin{equation}\\label{hmequations}\n\\Delta_\\delta r_i = \\frac{1}{2}\\partial_{r_i} Q(\\nabla \\theta, \\nabla \\theta),\\quad\\quad\\quad i= 1,2.\n\\end{equation}\nSince $Q$ is positive definite, it then follows from Lemma \\ref{lemmaQ} that each $r_i$ is subharmonic.
Therefore if $\\overline{\\Omega}\\subset\\Omega'$ with $\\overline{\\Omega'}\\subset\\mathbb{R}^{3}\\setminus\\Gamma$ and $\\chi\\in C^{\\infty}_{c}(\\Omega')$ is a cut-off function with $\\chi=1$ on $\\Omega$, then multiplying by $\\chi^2 r_i$ and integrating by parts produces\n\\begin{equation}\\label{estimate1}\n\\int_{\\Omega'} \\chi^2 |\\nabla r_i|^2 \\leq 4 \\left(\\sup_{\\Omega'} r_i^2\\right) \\int_{\\Omega'} | \\nabla \\chi |^2.\n\\end{equation}\n\nNext combine \\eqref{ineq:ab} with \\eqref{hmequations} to obtain\n\\begin{equation}\n\\Delta_{\\delta}r_{i}\\geq b Q(\\nabla\\theta,\\nabla\\theta).\n\\end{equation}\nThen multiplying by $\\chi^2$, integrating by parts, and applying the Cauchy-Schwarz inequality together with \\eqref{estimate1} yields\n\\begin{equation}\\label{estimate2}\n\\int_{\\Omega'} \\chi^2 Q(\\nabla \\theta, \\nabla \\theta) \\leq \\frac{2}{b} \\int_{\\Omega'} \\chi |\\nabla \\chi| |\\nabla r_i| \\leq\n\\frac{4}{b} \\left(\\sup_{\\Omega'} |r_i|\\right) \\int_{\\Omega'}| \\nabla \\chi |^2.\n\\end{equation}\nTogether \\eqref{estimate1} and \\eqref{estimate2} give the desired local energy estimate\n\\begin{equation}\nE_{\\Omega}(\\varphi) \\leq \\Big[ 4 ( \\sup_{\\Omega'} r_1^2 + \\sup_{\\Omega'} r_2^2 ) + \\frac{2}{b} (\\sup_{\\Omega'} |r_1| + \\sup_{\\Omega'} |r_2| ) \\Big] \\int_{\\Omega'}|\\nabla \\chi|^2.\n\\end{equation}\n\n\\begin{theorem}\\label{energybound}\nLet $\\varphi:\\mathbb{R}^3 \\setminus\\Gamma\\rightarrow\\mathbf{X}$ be a harmonic map and $\\Omega\\subset\\mathbb{R}^3 \\setminus\\Gamma$ be a bounded domain. If $\\varphi(\\Omega)\\subset B_{\\mathcal{R}}(x_0)$ then\n\\begin{equation}\nE_{\\Omega}(\\varphi)\\leq C,\n\\end{equation}\nwhere the constant $C$ depends only on the radius $\\mathcal{R}$ of the geodesic ball and $\\Omega$.\n\\end{theorem}\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Existence and Uniqueness} \\label{existence}\n\\label{sec7} \\setcounter{equation}{0}\n\\setcounter{section}{7}\n\n\n\n\nIn this section, we complete the proof of Theorem \\ref{main} and prove the existence and uniqueness of a harmonic map $\\varphi\\colon\\mathbb{R}^3\\setminus\\Gamma\\to \\mathbf{X}$ asymptotic to the model map $\\varphi_0$ constructed in Section \\ref{sec5}. Now that all the ingredients are in place, the proof is the same as in \\cite{weinstein96}. Nevertheless, we include it here for the sake of completeness.\nLet $\\varepsilon>0$ and define $\\Omega_\\varepsilon=\\{y\\in\\mathbb{R}^3\\colon d_{\\mathbb{R}^3}(y,\\Gamma)>\\varepsilon, \\text{ } y\\in B_{1\/\\varepsilon}(0)\\}$. Since the target $\\mathbf{X}$ is nonpositively curved, there is a smooth harmonic map $\\varphi_\\varepsilon\\colon\\Omega_\\varepsilon\\to\\mathbf{X}$ such that $\\varphi_\\varepsilon=\\varphi_0$ on $\\partial\\Omega_\\varepsilon$. We quote the following lemma from \\cite{weinstein96}, which essentially shows that the obstruction to subharmonicity of the distance function is measured by the tension.\n\n\\begin{lemma}\nLet $\\varphi_1,\\varphi_2\\colon\\Omega\\to\\mathbf{X}$ be smooth maps into a nonpositively curved target. Then\n\\begin{equation}\n\t\\Delta\\left( \\sqrt{1 + d_{\\mathbf{X}}(\\varphi_1,\\varphi_2)^2} \\right) \\geq\n\t-\\left( |\\tau(\\varphi_1)| + |\\tau(\\varphi_2)| \\right).\n\\end{equation}\n\\end{lemma}\n\nSet $\\varphi_1=\\varphi_\\varepsilon$ and $\\varphi_2=\\varphi_0$, and note that $\\tau(\\varphi_\\varepsilon)=0$.
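The lemma then gives\n\\begin{equation}\n\t\\Delta\\left( \\sqrt{1 + d_{\\mathbf{X}}(\\varphi_{\\varepsilon},\\varphi_0)^2} \\right) \\geq -|\\tau(\\varphi_0)|.\n\\end{equation}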
The remaining tension may be controlled by a function $w$ satisfying $\\Delta w\\leq -|\\tau(\\varphi_0)|$, where $w>0$ and $w\\rightarrow 0$ at infinity in $\\mathbb{R}^3$.\nThis is possible due to the boundedness and decay of $|\\tau(\\varphi_0)|$ as given in Theorem \\ref{model}. In particular we may take $w=c(1+r^2)^{-1\/4}$ so that\n\\begin{equation}\n\\Delta w\\leq -\\frac{c}{4}(1+r^2)^{-5\/4}\\leq -|\\tau(\\varphi_0)|,\n\\end{equation}\nif the constant $c>0$ is chosen sufficiently large. It follows that\n\\begin{equation}\n\t\\Delta\\left( \\sqrt{1 + d_{\\mathbf{X}}(\\varphi_{\\varepsilon},\\varphi_0)^2} - w \\right) \\geq 0, \\quad\\quad\\quad\\quad\n\t\\sqrt{1 + d_{\\mathbf{X}}(\\varphi_{\\varepsilon},\\varphi_0)^2} - w \\leq 1 \\text{ } \\text{ on }\\text{ }\\partial\\Omega_\\varepsilon.\n\\end{equation}\nThe maximum principle then yields a uniform $L^\\infty$ bound\n\\begin{equation}\\label{c0bound}\n\\sqrt{1 + d_{\\mathbf{X}}(\\varphi_{\\varepsilon},\\varphi_0)^2} \\leq 1 + w \\text{ }\\text{ }\\text{ on }\\text{ }\\text{ } \\Omega_\\varepsilon.\n\\end{equation}\nFix a domain $\\Omega$ such that $\\overline{\\Omega}\\subset\\mathbb{R}^3\\setminus\\Gamma$ and take $\\varepsilon>0$ small enough to have $\\overline{\\Omega}\\subset\\Omega_\\varepsilon$. The $L^\\infty$ estimate combined with Theorem \\ref{energybound} produces an energy bound on $\\Omega$ independent of $\\varepsilon$. Furthermore, consider the Bochner identity\n\\begin{equation}\n\\Delta|d\\varphi_\\varepsilon|^2 = |\\hat{\\nabla} d\\varphi_\\varepsilon|^2\n\t- \\prescript{\\mathbf{X}}{}{\\operatorname{Riem}(d\\varphi_\\varepsilon,\nd\\varphi_\\varepsilon,d\\varphi_\\varepsilon,d\\varphi_\\varepsilon)}.\n\\end{equation}\nNonpositivity of the curvature shows that $|d\\varphi_{\\varepsilon}|^2$ is subharmonic. Thus a Moser iteration may be applied to find a uniform pointwise bound from the energy estimate, namely\n\\begin{equation}\n\t\\sup_{\\Omega'} |d\\varphi_\\varepsilon|^2 \\leq C \\int_\\Omega |d\\varphi_\\varepsilon|^2\\leq C'\n\\end{equation}\nwhere $\\overline{\\Omega'}\\subset\\Omega$. Finally, using the harmonic map equations combined with the pointwise gradient and $L^{\\infty}$ bounds, we may now bootstrap to obtain uniform a priori estimates for all derivatives of $\\varphi_\\varepsilon$ on $\\Omega'$.\nBy letting $\\varepsilon\\rightarrow 0$, it follows that there exists a subsequence which converges together with any number of derivatives on $\\Omega'$. In the usual way, by choosing a sequence of exhausting domains and taking a diagonal subsequence, a sequence $\\varphi_{\\varepsilon_{i}}$ is produced which converges uniformly on compact subsets as $\\varepsilon_i\\rightarrow 0$. The limit $\\varphi$ is smooth and harmonic, and satisfies the $L^\\infty$ bound so that it is also asymptotic to $\\varphi_0$.\n\nThe proof of uniqueness is straightforward.
If $\\varphi_1$ and $\\varphi_2$ are two harmonic maps asymptotic to $\\varphi_0$, then they are asymptotic to each other so that $d_{\\mathbf{X}}(\\varphi_1,\\varphi_2)\\leq C$.\nMoreover\n\\begin{equation}\n\\Delta\\left(\\sqrt{1 + d_{\\mathbf{X}}(\\varphi_1,\\varphi_2)^2} \\right)\\geq 0,\n\\end{equation}\nand since the set $\\Gamma$ on which $d_{\\mathbf{X}}(\\varphi_1,\\varphi_2)$ may not be fully regular is of codimension 2, $\\sqrt{1 + d_{\\mathbf{X}}(\\varphi_1,\\varphi_2)^2}$ is weakly subharmonic and the maximum principle applies \\cite{Weinstein}*{Lemma 8}.\nAs $\\sqrt{1 + d_{\\mathbf{X}}(\\varphi_1,\\varphi_2)^2}\\to 1$ at infinity, it follows that $\\sqrt{1 + d_{\\mathbf{X}}(\\varphi_1,\\varphi_2)^2}\\leq 1$.\nConsequently $\\varphi_1=\\varphi_2$.\n\n\n\\subsection{Rod Data for the Harmonic Map}\n\n\n\n\nHaving constructed a harmonic map asymptotic to a prescribed model map, it remains to show that the rod data set arising from the harmonic map agrees with that of the model map. Let $\\Phi=(F,\\omega):\\mathbb{R}^3\\setminus\\Gamma\\rightarrow\\tilde{\\mathbf{X}}\\cong SL(3,\\mathbb{R})\/SO(3)$ denote the characterization of the harmonic map in the space of symmetric positive definite matrices, and let $\\Phi_0=(F_0,\\omega_0)$ denote the model map to which $\\Phi$ is asymptotic. Recall that $F=(f_{ij})$ is a $2\\times 2$ symmetric positive definite matrix on $\\mathbb{R}^3\\setminus\\Gamma$ representing the fiber metric (associated with the rotational Killing directions) in a bi-axisymmetric stationary spacetime, and $\\omega=(\\omega_1,\\omega_2)^t$ are the twist potentials. The rod data associated with $\\Phi$ consists of the kernel of $F$ and the value of $\\omega$ on the axis.\n\n\\begin{theorem}\\label{rodstructharmonic}\nIf $\\Phi$ is asymptotic to $\\Phi_0$ then $\\mathrm{ker} \\text{ }F=\\mathrm{ker}\\text{ } F_0$ at each point of $\\Gamma$, and $\\omega=\\omega_0$ on each axis rod. In particular, the two maps share the same rod data set. Furthermore, if $\\Phi$ is harmonic then $d_{\\tilde{\\mathbf{X}}}(\\Phi_0,\\Phi)\\rightarrow 0$ at infinity in $\\mathbb{R}^3$.\n\\end{theorem}\n\nBefore proving this result we record several observations. Since the metric on $\\tilde{\\mathbf{X}}$ is $G$-invariant, the distance function is preserved under the action of left translation\n\\begin{equation}\nd_{\\tilde{\\mathbf{X}}}(\\Phi_0,\\Phi)=\nd_{\\tilde{\\mathbf{X}}}(Id,\\tilde{L}_{B^{-1}}\\Phi),\n\\end{equation}\nwhere $B\\in SL(3,\\mathbb{R})$ satisfies $BB^{t}=\\Phi_0$. Note that\n\\begin{equation}\n\\tilde{L}_{B^{-1}}\\Phi=B^{-1}\n\\Phi(B^{-1})^{t}=e^{W}\n\\end{equation}\nfor some symmetric $W$ with $\\mathrm{Tr}\\text{ }W=0$. Since the Riemannian exponential map and the matrix exponential coincide for $\\tilde{\\mathbf{X}}$, Hadamard's theorem applies (using the fact that $\\tilde{\\mathbf{X}}$ is complete, simply connected, with nonpositive curvature) to show that the exponential map is a diffeomorphism, and the geodesic $\\gamma(t)= e^{tW}$ is minimizing.
Therefore \\eqref{identitymetric} yields\n\\begin{equation}\nd_{\\tilde{\\mathbf{X}}}(Id,\\tilde{L}_{B^{-1}}\\Phi)=|\\gamma'(0)|=|W|\n=\\sqrt{\\mathrm{Tr}(WW^t)}\n=\\sqrt{\\mathrm{Tr}(W^2)}.\n\\end{equation}\n\nNow consider the function from the Mazur identity \\cite{IdaIshibashiShiromizu}, namely\n\\begin{align}\n\\begin{split}\n\\mathrm{Tr}\\left(\\Phi_{0}^{-1}\\Phi\\right)=&\n\\mathrm{Tr}\\left((B^{-1})^{t}B^{-1}\n\\Phi(B^{-1})^{t}B^{t}\\right)\\\\\n=&\\mathrm{Tr}\\left(B^{-1}\\Phi(B^{-1})^t\\right)\\\\\n=&\\mathrm{Tr}\\text{ } e^{W}.\n\\end{split}\n\\end{align}\nSince $e^{W}$ is symmetric and positive definite it may be diagonalized with positive eigenvalues\n$\\lambda_{i}$, $i=1,2,3$. We then have\n\\begin{equation}\n\\mathrm{Tr}\\text{ }e^{W}=\\lambda_{1}+\\lambda_{2}+\\lambda_{3},\\quad\\quad\\quad\n\\mathrm{Tr}(W^2)=(\\log\\lambda_{1})^2+(\\log\\lambda_{2})^2+(\\log\\lambda_{3})^2,\n\\end{equation}\nand since $W$ has zero trace\n\\begin{equation}\\label{jfjfj}\n\\log\\lambda_1+\\log\\lambda_2+\\log\\lambda_3=0.\n\\end{equation}\nIf $\\mathrm{Tr}\\text{ }e^{W}\\leq c$ then it is not difficult to see that \\eqref{jfjfj} implies $\\mathrm{Tr}(W^2)\\leq c_1$; indeed, each $\\lambda_{i}\\leq c$, while \\eqref{jfjfj} then forces $\\log\\lambda_{i}=-\\sum_{j\\neq i}\\log\\lambda_{j}\\geq -2\\log c$, so that $\\mathrm{Tr}(W^2)\\leq 12(\\log c)^2$ when $c\\geq 1$. Conversely if $\\mathrm{Tr}(W^2)\\leq c^2$\nthen each $|\\log\\lambda_{i}|\\leq c$, and it holds that $\\mathrm{Tr}\\text{ }e^{W}\\leq 3e^{c}$. We have thus proved the following.\n\n\\begin{lemma}\\label{lem1}\n$d_{\\tilde{\\mathbf{X}}}(\\Phi_0,\\Phi)$ is uniformly bounded if and only if\nthe Mazur quantity $\\mathrm{Tr}\\left(\\Phi_{0}^{-1}\\Phi\\right)$\nis uniformly bounded.\n\\end{lemma}\n\n\\noindent\\textit{Proof of Theorem \\ref{rodstructharmonic}.}\nIf $\\Phi$ is asymptotic to $\\Phi_0$ then $d_{\\tilde{\\mathbf{X}}}(\\Phi_0,\\Phi)\\leq c_0$, that is, the distance is uniformly bounded, in particular near $\\Gamma$.\nBy Lemma \\ref{lem1} this implies that the Mazur function is also uniformly bounded\n\\begin{equation}\n\\mathrm{Tr}\\left(\\Phi_{0}^{-1}\\Phi\\right)\\leq c.\n\\end{equation}\nMoreover this quantity may be computed in terms of $F$, $F_0$, $\\omega$, and $\\omega_0$ as\n\\begin{equation}\\label{computationFF}\n\\mathrm{Tr}\\left(\\Phi_{0}^{-1}\\Phi\\right)\n=\\frac{f_0}{f}+\\mathrm{Tr}(FF_{0}^{-1})+\\frac{1}{f}(\\omega-\\omega_0)^{t}F_{0}^{-1}\n(\\omega-\\omega_0),\n\\end{equation}\nwhere $f=\\operatorname{det} F$ and $f_0=\\operatorname{det} F_0$. Since each of the terms on the right-hand side is nonnegative and the roles of $\\Phi$ and $\\Phi_0$ may be reversed, we have\n\\begin{equation}\\label{potential}\n\\frac{f_0}{f}\\leq c,\\quad\\quad \\frac{f}{f_0}\\leq c,\\quad\\quad\n\\mathrm{Tr}(FF_{0}^{-1})\\leq c,\\quad \\quad \\frac{1}{f}(\\omega-\\omega_0)^{t}F_{0}^{-1}(\\omega-\\omega_{0})\\leq c.\n\\end{equation}\nIt follows that\n\\begin{equation}\nc^{-1}f_{0}\\leq f\\leq cf_{0}.\n\\end{equation}\n\nNext, since $F_0$ is symmetric it may be diagonalized with an orthogonal matrix $O$ so that $F_0=ODO^t$ where $D=\\mathrm{diag}(\\mu_1, \\mu_2)$. Working now at a point on an axis rod, the kernel of $F_0$ is 1-dimensional and so it may be assumed without loss of generality that $c^{-1}f_0\\leq \\mu_1\\leq cf_0$ and $0<\\mu_2\\leq c$ [\\ldots] Consider now the half-plane $\\{(\\rho,z):\\rho>0\\}$, which will serve as the orbit space for the spacetime. The spacetime metric is given by \\eqref{metric}, and it suffices to show how each coefficient in \\eqref{metric} arises from $\\Phi$.
The resulting spacetime will be asymptotically flat in light of the\ndecay of the model map $\\Phi_0$ and the fact that, by\nTheorem \\ref{rodstructharmonic}, $d_{\\tilde{\\mathbf{X}}}(\\Phi_0,\\Phi)\\rightarrow 0$ at infinity in $\\mathbb{R}^3$.\n\nFirst observe that $\\sigma$ is immediately obtained from \\eqref{sigma}, since the orbit space is simply connected and the form on the right-hand side is closed as a result of the harmonic map equations. It remains to find $A^{(i)}=v^{i}dt$, which will be derived from the harmonic map components $\\omega_{i}$. By solving for $dA^{(i)}$ in \\eqref{chi} we obtain\n\\begin{equation}\\label{0}\nd A^{(i)}=-\\frac{1}{2}f^{-1}f^{ij}\\star_{3} d\\omega_{j}.\n\\end{equation}\nObserve that from Cartan's magic formula and the fact that $\\partial_{t}$ is a Killing field\n\\begin{equation}\n\\iota_{\\partial_{t}} d A^{(i)}=-d \\iota_{\\partial_{t}}A^{(i)}=-dv^{i}.\n\\end{equation}\nIt follows that if\n\\begin{equation}\\label{1}\n\\iota_{\\partial_{t}}\\left(f^{-1}f^{ij}\\star_{3} d\\omega_{j}\\right)\n\\end{equation}\nis closed, then we may find $v^{i}$ by quadrature from the equation\n\\begin{equation}\ndv^{i}=\\frac{1}{2}\\iota_{\\partial_{t}}\\left(f^{-1}f^{ij}\\star_{3} d\\omega_{j}\\right).\n\\end{equation}\nIt turns out that the closedness of \\eqref{1} is equivalent to part of the harmonic map equations. To see this, let $\\epsilon_{3}$ denote the volume form of $g_{3}$. Then\n\\begin{equation}\n(\\star_{3} d\\omega_{j})^{ab}=\\epsilon_{3}^{abc}\\partial_{c}\\omega_{j},\n\\end{equation}\nand\n\\begin{align}\\label{222}\n\\begin{split}\n\\iota_{\\partial_{t}}\\star_{3}d\\omega_{j}\n=&\\epsilon_{3}(\\partial_{t},\\partial_{\\rho},\\partial_{c})\n\\partial^{c}\\omega_{j} d\\rho\n+\\epsilon_{3}(\\partial_{t},\\partial_{z},\\partial_{c})\\partial^{c}\\omega_{j} dz\\\\\n=&\\rho e^{2\\sigma}\\partial^{z}\\omega_{j} d\\rho\n-\\rho e^{2\\sigma}\\partial^{\\rho}\\omega_{j} dz\\\\\n=&\\rho\\partial_{z}\\omega_{j} d\\rho-\\rho\\partial_{\\rho}\\omega_{j} dz.\n\\end{split}\n\\end{align}\nTherefore\n\\begin{align}\n\\begin{split}\nd\\left(f^{-1}f^{ij}\\iota_{\\partial_{t}}\\star_{3} d\\omega_{j}\\right)=&\nd\\left(\\rho f^{-1}f^{ij}\\partial_{z}\\omega_{j} d\\rho\n-\\rho f^{-1}f^{ij}\\partial_{\\rho}\\omega_{j} dz\\right)\\\\\n=&\\left[\\partial_{z}(\\rho f^{-1}f^{ij}\\partial_{z}\\omega_{j})+\n\\partial_{\\rho}(\\rho f^{-1}f^{ij}\\partial_{\\rho}\\omega_{j}) \\right]dz\\wedge d\\rho\\\\\n=&\\operatorname{div}_{\\mathbb{R}^3}\\left(f^{-1} f^{ij}\\nabla\\omega_{j}\\right)dz\\wedge d\\rho\\\\\n=&0,\n\\end{split}\n\\end{align}\nwhere the last equality arises from the second set of harmonic map equations\nin \\eqref{eulerlagrange}.\nAnother way to obtain this calculation is to observe that\n\\begin{equation}\\label{2}\nf^{-1}f^{ij}\\iota_{\\partial_{t}}\\star_{3} d\\omega_{j}\n=\\ast\\left(f^{-1} f^{ij} d\\omega_{j}\\right)\n\\end{equation}\nand $\\operatorname{div}_{\\mathbb{R}^3}=\\ast d \\ast$,\nwhere $\\ast$ is the Hodge star operator with respect to $\\delta$ on $\\mathbb{R}^3$.\nLastly, it is clear from the equations involved that $\\sigma$ and $v^i$ are bi-axisymmetric.\n\n\n\n\n\\subsection{Regularity} \\label{conical}\n\n\n\nThe metric reconstructed above from a solution of the harmonic map problem is defined on $\\mathbb{R}\\times(\\mathbb{R}^3\\setminus\\Gamma)\\times U(1)$.
In order to extend this metric across $\\Gamma$, two steps must be completed as described below.\n\n\\subsubsection{Analytic regularity}\n\nThe metric coefficients in~\\eqref{metric} must be shown to be smooth and even in $\\rho$ up to $\\Gamma$. This was achieved in the 4D case in~\\cite{weinstein90}, and then extended to the non-axially symmetric case in~\\cite{litian}. We believe that these methods are applicable to the 5D setting as well.\n\n\n\\subsubsection{Conical singularities}\n\nIn addition to the analytic regularity mentioned above, conical singularities on axis rods must be ruled out. A conical singularity at a point on an axis rod $\\Gamma_{l}$ is measured by the angle deficiency $\\theta\\in(-\\infty,2\\pi)$ given by\n\\begin{equation}\n\\frac{2\\pi}{2\\pi-\\theta}=\\lim_{\\rho\\rightarrow 0}\\frac{2\\pi\\cdot\\mathrm{Radius}}\n{\\mathrm{Circumference}}=\\lim_{\\rho\\rightarrow 0}\n\\frac{\\int_{0}^{\\rho}\\sqrt{f^{-1}e^{2\\sigma}}}\n{\\sqrt{f_{ij}u^{i}u^{j}}}=\\lim_{\\rho\\rightarrow 0}\n\\sqrt{\\frac{\\rho^{2}f^{-1}e^{2\\sigma}}{f_{ij}u^{i}u^{j}}},\n\\end{equation}\nwhere $u=(u^1,u^2)=(m_{l},n_{l})$ is the associated rod structure so that $u$ is in the kernel of $F$ at $\\rho=0$. Absence of a conical singularity is characterized by a zero angle deficiency, that is when the right-hand side is 1; this is referred to as the balancing condition in Section \\ref{sec1}. By a standard change of coordinates from polar to Cartesian, it is straightforward to check that once analytic regularity has been established this condition is necessary and sufficient for the metric to be extendable across the axis.\n\nLet us denote by $b_l$ the value of $\\log\\left(\\frac{2\\pi}{2\\pi-\\theta}\\right)$ on the axis rod $\\Gamma_l$. Then, similarly to the 4D case, it can be shown from~\\eqref{sigma} that $b_l$ is constant on $\\Gamma_l$. Moreover asymptotic flatness implies that $b_l=0$ on the two semi-infinite axis rods, $l=1,L+1$. Thus it remains to investigate the value of $b_l$ on the bounded axis rods. In the example from Figure~\\ref{domain}, establishing regularity would only require showing that $b_3=0$, so that the angle deficit vanishes on the disk rod between points $S$ and $C$.\n\nIn 4D very few cases have been worked out, see~\\cites{weinstein94,litian91}. In the current 5D setting, it is known that some configurations without any conical singularity do exist, as mentioned in the introduction. We conjecture that many more such regular solutions can be found. These questions will be investigated in a future paper.\n\n\\begin{comment}\nSome preliminary observations are that we might have\n\\begin{equation}\n\\frac{\\rho^2}{f_{ij}v^{i}v^{j}}\\leq C\n\\end{equation}\nsince $F$ is asymptotic to the model map, and also\n\\begin{equation}\nf^{-1}e^{2\\sigma}\\leq C.\n\\end{equation}\n\\end{comment}\n\n\n\n\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\\IEEEPARstart{T}{hree-dimensional} (3D) human digitization has a wide range of applications in industries such as film, animation, games, and virtual try-on. Existing approaches to obtain high-quality 3D human reconstructions often require expensive equipment such as multiple synchronized cameras~\\cite{textured_neural_avatars} or RGB-D sensors~\\cite{selfportraits}, limiting their applicability in practical scenarios.
On the other hand, for various 3D human reconstruction approaches~\\cite{PIFu,PIFuHD,Detailed_Human_Avatars,Video_avatars,MonoClothCap}, modeling complex geometric details such as hair, glasses, and cloth wrinkles of real humans remains a challenging problem.\n\n\\par\nIn this paper, we aim to obtain photo-realistic 3D human avatars from monocular RGB videos, which are among the most accessible forms of video in daily life. Different from existing approaches based on pre-scanned human models~\\cite{AMAS} or parametric body models~\\cite{Detailed_Human_Avatars, MonoClothCap, Video_avatars}, our approach implicitly reconstructs the human geometry and appearance by generalizing Neural Radiance Fields (NeRF)~\\cite{NeRF}, which use a neural network to encode color and density as a function of location and viewing angle, and generate photo-realistic images by volume rendering. \nNeRF~\\cite{NeRF} has shown an impressive ability to reconstruct a static scene from multi-view images, and has inspired many researchers to extend NeRF to scenes with severe lighting changes~\\cite{nerf-w} and non-rigid deformations~\\cite{nerfies,NGNeRF}. However, these approaches are uncontrollable and limited to nearly static scenes with small movements, failing to deal with human subjects with large movements.\n\nTo handle dynamic humans in monocular videos, we combine neural radiance fields with the parametric body model SMPL~\\cite{SMPL}, which enables more precise modeling of human geometry and appearance, and further makes the neural radiance fields controllable. \nIn particular, our approach extends NeRF by introducing a pose-guided deformation, which unwarps the observation space near the human body to a constant canonical space through the deformation of the SMPL model.\nWe observe that even state-of-the-art SMPL estimators for monocular videos cannot obtain accurate parameters, which inevitably leads to blurry results. To address this problem, we propose to jointly optimize the NeRF and SMPL parameters via analysis-by-synthesis, which not only obtains better results but also accelerates the convergence of training. \nWe demonstrate the superiority of the proposed method on multiple datasets, with both quantitative and qualitative results on novel view synthesis, 3D human reconstruction, and novel pose synthesis.\n\nIn summary, our work has the following contributions:\n\\begin{itemize}\n \\item We propose a method that explicitly deforms sample points according to the SMPL pose to reconstruct a canonical-pose NeRF model, relaxing the static-scene requirement and preserving details such as clothing and hair.\n \\item We incorporate pose refinement into our analysis-by-synthesis approach to account for inaccurate SMPL estimates, resulting in refined SMPL poses and greatly improved reconstruction quality.\n \\item We achieve high-quality 3D human reconstruction from monocular RGB video, and can render photo-realistic images from novel views.\n \\item Due to our controllable SMPL-based geometry deformation, we can synthesize novel-pose images, showing that our learned canonical-space NeRF model is animatable.\n\\end{itemize}\n\n\n\n\\section{Related work}\n\n\\textbf{3D Human Reconstruction}. Reconstructing 3D humans has become increasingly popular in recent years.
\nVarious approaches attempt to digitize a human from a single-view image~\\cite{PIFu,PIFuHD,360degree,tex2shape,ARCH}, multi-view images~\\cite{PIFu,PIFuHD,dvv,PaMIR}, RGB videos~\\cite{Video_avatars, Detailed_Human_Avatars,lrp,MonoClothCap,DoubleFusion}, or RGB-D videos~\\cite{selfportraits,texmesh}. One stream of these approaches~\\cite{Detailed_Human_Avatars, Video_avatars, MonoClothCap} utilizes a parametric body model such as SMPL~\\cite{SMPL} to represent a human body with cloth deformations, which produces an animatable 3D model with high-quality textures but is limited in its expressiveness for complex geometries such as hair and dresses. On the other hand, PIFu-~\\cite{PIFu} and PIFuHD-based~\\cite{PIFuHD} methods use an implicit representation to reconstruct a 3D surface and achieve impressive results in handling people with complex poses, hairstyles, and clothing; however, they suffer from a blurry appearance and require further registration for animation. To handle more complex pose inputs, these methods~\\cite{ARCH,PaMIR,S3,ipnet} combine implicit representations and parametric models to obtain more robust results that are also animatable.\n \n\\textbf{Neural Representations}. Representing a scene with neural networks has achieved stunning success in recent years.\nSRN~\\cite{SRN} proposes an implicit neural representation that assigns feature vectors to 3D positions, and uses a differentiable ray marching algorithm for image generation.\nNeRF~\\cite{NeRF} represents a static scene with a neural network that maps 3D coordinates and viewing direction to density and color. These methods~\\cite{SRN,NeRF,NSVF} can render very realistic images, but they are all limited to static scenes.\nDynamic NeRFs~\\cite{nerfies,D-NeRF,NSVF,NGNeRF,NSFF,HyperNeRF} extend NeRF to dynamic scenes by introducing latent deformation fields or scene flow fields.\nThese methods, in which the deformations are learned by networks, can handle more general deformations and synthesize novel poses by interpolation in the latent space. \nHowever, it is difficult to implicitly control the complex non-rigid deformation of human body motion.\nThese works~\\cite{NerFACE,Neural_Body,A-NeRF,SCANimate,SMPLicit,LoopReg} combine scene representation networks with parametric models~\\cite{3DMM,SMPL} to reconstruct dynamic humans.\nInstead of using latent codes or expression parameters as input, we use the human body model SMPL to explicitly deform over different poses and shapes and reconstruct the human body in the canonical pose.\nAt the same time, this explicit method allows us to fine-tune the parameters of the SMPL model, which is very practical in real scenarios.\nIdeas similar to ours have been used in recent works~\\cite{Neural_Body, Anim-NeRF, Neural_Actor}, but these methods usually require multi-view images or accurately registered SMPL models.\nThe monocular setting is more challenging because of the difficulty of SMPL estimation.\n\n\\textbf{Human Motion Transfer}.\nHuman motion transfer aims to synthesize an image of a person with the appearance from a source human and the motion from a reference image. Recent advances using Generative Adversarial Networks (GANs) have shown convincing performance without recovering detailed 3D geometry.\nThese works~\\cite{monkey_net,eveybody_dance_now,LWGAN,LWGANPlusPlus} use image-to-image translation~\\cite{pix2pix,pix2pixhd} to map 2D skeleton images to rendering output.
Due to the lack of 3D reasoning, the geometry of the generated humans is usually not consistent across multiple views and motions.\nTo better preserve the appearance of the source subject, these methods~\\cite{StylePoseGAN,HumanGAN} use a UV map to transform features from screen space to UV texture space to obtain a neural texture, and then render the feature maps in the target pose with a neural rendering network.\nIn addition to these general approaches that transfer between arbitrary subjects, there are also person-specific methods~\\cite{textured_neural_avatars,NHR,smplpix}.\nTextured Neural Avatar~\\cite{textured_neural_avatars} learns a uniform neural texture from different views and poses.\nSMPLpix~\\cite{smplpix} and NHR~\\cite{NHR} project the point clouds to 2D images and then feed them into an image-to-image translation network.\nHowever, these neural rendering methods fail to generate photorealistic results for novel poses that were not seen during training.\n\n\n\\begin{figure*}[ht]\n \\begin{center}\n\t\\begin{tabular}{@{}c}\n\t\t\\includegraphics[width=0.8\\linewidth, ]{figures\/Overview.png} \n\t\\end{tabular}\n\t\\end{center}\n\t\\caption{Overview of the proposed Animatable Neural Radiance Fields. Given a video sequence, we estimate the camera $K_t$ and SMPL parameters $M(\\theta_{t}, \\beta_{t})$ of the human subject for initialization. We use volume rendering to sample points $(x_t, y_t, z_t)$ along the camera ray in observation space, and transform these points to canonical space according to the pose-guided deformation. We then input these points $(x_t^0, y_t^0, z_t^0)$ into the neural radiance field to obtain densities $\\sigma$ and colors $\\mathbf{c}$. Finally, we use Eq.~\\eqref{eq:rendering equation} to render the image, and jointly optimize the neural radiance field parameters $\\phi$ and SMPL parameters $\\theta_t,\\beta_t$ by minimizing the error $\\mathcal{L}\\left(\\tilde{I}_t, I_t\\right)$ between the rendered image $\\tilde{I}_t$ and the ground truth image $I_t$ with the mask.} \\label{fig:framework}\n\\end{figure*}\n\n\\section{Method}\nIn this section, we describe our method for creating a human avatar from a single portrait video of a person, as shown in Fig.~\\ref{fig:framework}.\nGiven an $n$-frame video sequence $\\left\\{I_t\\right\\}_{t=1}^{n}$ of a single human subject turning around before the camera and holding an A-pose or T-pose, we estimate the SMPL~\\cite{SMPL,SMPL-X} parameters $M(\\theta_{t}, \\beta_{t})$ and camera intrinsics $K_{t}$ of each frame using existing human body shape and pose estimation models~\\cite{VIBE}. \nIn order to avoid the influence of background changes caused by camera movement, we first use a segmentation network~\\cite{RP-R-CNN} to obtain the foreground human mask and set the background color to white uniformly.\nOur animatable NeRF (Sec.~\\ref{Anim-NeRF}) can be decomposed into a pose-guided deformation (Sec.~\\ref{Pose-Guided_deform}) and a neural radiance field (NeRF) defined in the canonical space. We use volume rendering (Sec.~\\ref{Volume_Render}) to render our neural radiance field. In order to avoid the negative effects of inaccurate SMPL parameters, we propose to jointly optimize the neural radiance field and SMPL parameters (Sec.~\\ref{Pose_Refine}).
We also introduce background regularization and pose regularization to improve the robustness of the optimization (Sec.~\\ref{Object_Fun}).\n\n\\subsection{Animatable Neural Radiance Fields}\\label{Anim-NeRF}\nTo model human appearance and geometry with complex non-rigid deformation, we introduce the parametric human model SMPL~\\cite{SMPL} into the neural radiance field and present the animatable neural radiance field (animatable NeRF) $F$, which maps the 3D position $\\mathbf{x}=(x, y, z)$, shape $\\mathbf{\\beta}_{t}$ and pose $\\mathbf{\\theta}_{t}$ into color $\\mathbf{c}=(r, g, b)$ and density $\\sigma$:\n\\begin{equation}\nF\\left(D\\left(\\mathbf{x}, \\mathbf{\\theta}_{t}, \\mathbf{\\beta}_{t} \\right)\\right) = \\left(\\mathbf{c},\\sigma\\right)\n\\end{equation}\nwhere $D\\left(\\mathbf{x}, \\mathbf{\\theta}_{t}, \\mathbf{\\beta}_{t} \\right)$ transforms the 3D position $\\mathbf{x}=(x, y, z)$ in the observation space to $\\mathbf{x}^{0}=(x^{0}, y^{0}, z^{0})$ in canonical space, aiming to handle human movements between different frames. The view dependence in NeRF is mainly for dealing with specular reflections of materials such as metal and glass. Since the skin and clothes of humans are mostly diffuse materials, we remove the viewing direction from the input. We discuss the impact of the viewing direction in Sec.~\\ref{impact_of_view_direction}.\n\n\\subsection{Pose-guided Deformation}\\label{Pose-Guided_deform}\nIn contrast to~\\cite{NGNeRF,nerfies}, which implicitly control the deformation of spatial points, we use the parametric body model SMPL to explicitly guide the deformation of spatial points. \nHere we define the space in which each frame is observed as the observation space, and attempt to learn a template human in the canonical space.\nThe articulated SMPL model enables the explicit transformation of the spatial points (i.e. from observation space to canonical space), which facilitates the learning of a specified, meaningful canonical space and reduces the reliance on diverse input poses for generalizing to unseen poses, so that we can learn the NeRF from dynamic scenes (containing a moving person) and animate this person after training.\nThe template pose in the canonical space is defined as the X-pose $\\theta^{0}$ (as shown in Fig.~\\ref{fig:framework}), due to the good visibility and separability of each body part in this pose.\nBy using the inverse transformation of the linear blend skinning of SMPL, the pose $\\theta_{t}$ in observation space can be transformed into the X-pose $\\theta^{0}$ in canonical space. Considering that the transformation functions are only defined on the surface vertices of the body mesh, we extend them to the space near the mesh surface based on the intuition that points in space near the mesh should move along with neighboring vertices.
Following PaMIR~\\cite{PaMIR}, we define the transformation of a point $\\mathbf{x}$ from observation space to canonical space as\n\\begin{equation}\n\\label{eq:transformation}\n\\begin{aligned}\n\\left[\\begin{matrix}\\mathbf{x}^{0} \\\\ 1\\end{matrix} \\right]&=\\mathbf{M}(\\mathbf{x}, \\beta_{t}, \\theta_{t}, \\theta^{0}) \\left[\\begin{matrix}\\mathbf{x} \\\\ 1\\end{matrix} \\right] \\\\\n\\mathbf{M}(\\mathbf{x}, \\beta_{t}, \\theta_{t}, \\theta^{0}) &=\\sum_{v_i \\in \\mathcal{N}\\left(\\mathbf{x}\\right)} \\frac{\\omega_{i}}{\\omega} \\mathbf{M}_{i}\\left(\\beta_{t}, \\theta^{0}\\right)\\left(\\mathbf{M}_{i}(\\beta_{t}, \\theta_{t})\\right)^{-1}\n\\end{aligned}\n\\end{equation}\nwhere $\\mathcal{N}\\left(\\mathbf{x}\\right)$ denotes the set of SMPL vertices near $\\mathbf{x}$ in the observation space, and Eq.~\\eqref{eq:transformation} indicates that the movement of $\\mathbf{x}$ relies on the movement of neighboring vertices. \nThe weight $\\omega_{i}$ with which vertex $v_i$ affects the point $\\mathbf{x}$ is defined as \n\n\\begin{equation}\n\\begin{aligned}\n\\omega_{i} &=\\exp \\left(-\\frac{\\left\\|\\mathbf{x}-v_{i}\\right\\| \\left\\|\\hat{\\mathbf{b}}-\\mathbf{b}_{i}\\right\\|}{2 \\sigma^{2}}\\right) \\\\\n\\omega &=\\sum_{v_i \\in \\mathcal{N}(\\mathbf{x})} \\omega_{i}\n\\end{aligned}\n\\end{equation}\nwhere $\\mathbf{b}_{i}$ is the blend skinning weight of $v_i$, $\\hat{\\mathbf{b}}$ is that of the vertex nearest to $\\mathbf{x}$, and $\\|\\mathbf{x}-v_{i}\\|$ computes the L2 distance between $\\mathbf{x}$ and $v_i$.\nConsidering that a point might be affected by different body parts, which would lead to ambiguous or even meaningless transformations, we adopt the blend skinning weight, which characterizes the movement pattern of a vertex along with the SMPL joints~\\cite{SMPL}, to strengthen the influence of the nearest neighbor. $\\omega$ is used for weight normalization.\n\nFollowing SMPL~\\cite{SMPL}, the transformation matrix $\\mathbf{M}_{i}\\left(\\beta, \\theta\\right)$ of mesh vertex $v_i$ from rest pose to $\\theta$-pose is computed by\n\\begin{equation}\n\\mathbf{M}_{i}(\\beta, \\theta) = \\left(\\sum_{j=1}^{K}b_{i,j}\\mathbf{G}_j \\right)\n\\begin{bmatrix}\n \\mathbf{I} & \\mathbf{B}_{S,i}(\\beta)+\\mathbf{B}_{P, i}(\\theta) \\\\\n \\mathbf{0}^T & 1\n\\end{bmatrix}\n\\end{equation}\nwhere $\\mathbf{G}_j\\in \\mathbb{R}^{4\\times4}$ is the world transformation of joint $j$, $b_{i,j}$ is the blend skinning weight representing how much the rotation of part $j$ affects vertex $v_i$, and $\\mathbf{B}_{P, i}(\\theta)\\in \\mathbb{R}^{3}$ and $\\mathbf{B}_{S,i}(\\beta)\\in \\mathbb{R}^{3}$ are the pose blendshape and shape blendshape of vertex $v_i$, respectively. \n\n\\subsection{Volume Rendering}\\label{Volume_Render}\n\\label{sec:vr}\nWe use the same volume rendering techniques as in NeRF~\\cite{NeRF} to render the neural radiance field into a 2D image. For a given video frame $I_t$, we first convert the camera coordinate system to the SMPL coordinate system, that is, we transfer the global rotation and translation of SMPL to the camera. Then the pixel colors are obtained by accumulating the colors and densities along the corresponding camera ray $\\mathbf{r}$.
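Each sample along the ray is first mapped back to canonical space in this way before being fed to the canonical-space NeRF. To make the computation concrete, a minimal PyTorch-style sketch of the transformation \\eqref{eq:transformation} is given below; the tensor names and shapes, and the precomputed per-vertex transforms, are illustrative assumptions rather than our exact implementation.\n\\begin{verbatim}\nimport torch\n\ndef unpose_points(x, verts_obs, blend_w, M_obs, M_can, K=4, sigma=0.1):\n    # x: (P, 3) sampled points in observation space\n    # verts_obs: (V, 3) posed SMPL vertices; blend_w: (V, J) skinning weights\n    # M_obs, M_can: (V, 4, 4) rest-to-observed \/ rest-to-canonical transforms\n    d = torch.cdist(x, verts_obs)                  # (P, V) pairwise distances\n    dist, idx = d.topk(K, dim=1, largest=False)    # K nearest vertices\n    b = blend_w[idx]                               # (P, K, J)\n    b_hat = b[:, :1]                               # weights of nearest vertex\n    w = torch.exp(-dist * (b_hat - b).norm(dim=-1) \/ (2 * sigma ** 2))\n    w = w \/ w.sum(dim=1, keepdim=True)             # omega_i \/ omega\n    # blended transform M_i(beta, theta^0) M_i(beta, theta_t)^{-1}\n    M = M_can[idx] @ torch.inverse(M_obs[idx])     # (P, K, 4, 4)\n    M = (w[..., None, None] * M).sum(dim=1)        # (P, 4, 4)\n    x_h = torch.cat([x, torch.ones_like(x[:, :1])], dim=1)\n    return (M @ x_h[..., None]).squeeze(-1)[:, :3] # canonical-space points\n\\end{verbatim}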
In practice, the continuous integral is approximated by sampling $N$ points $\\left\\{\\mathbf{x}_{k}\\right\\}_{k=1}^{N}$ between the near plane and the far plane along the camera ray $\\mathbf{r}$ as\n\\begin{equation}\n\\label{eq:rendering equation}\n\\begin{array}{c}\n\\tilde{C}_{t}(\\mathbf{r})=\\sum\\limits_{k=1}^{N}T_{k}\\left(1-\\exp \\left(-\\eta_{t}(\\mathbf{x}_{k})\\sigma_{t}\\left(\\mathbf{x}_{k}\\right) \\delta_{k}\\right)\\right) \\mathbf{c}_{t}\\left(\\mathbf{x}_{k}\\right) \\\\\n\\tilde{D}_{t}(\\mathbf{r})=\\sum\\limits_{k=1}^{N}T_{k}\\left(1-\\exp \\left(-\\eta_{t}(\\mathbf{x}_{k})\\sigma_{t}\\left(\\mathbf{x}_{k}\\right) \\delta_{k}\\right)\\right) \\\\\nT_{k}=\\exp \\left(-\\sum\\limits_{j=1}^{k-1} \\eta_{t}(\\mathbf{x}_{j})\\sigma_{t}\\left(\\mathbf{x}_{j}\\right) \\delta_{j}\\right)\n\\end{array}\n\\end{equation}\nwhere $\\delta_{k}=\\left\\|\\mathbf{x}_{k+1}-\\mathbf{x}_{k}\\right\\|_{2}$ is the distance between adjacent sampling points, and $\\eta_{t}(\\mathbf{x}_{k})$ is a prior 3D \\textit{mask} (detailed in the following) used to provide geometric prior guidance and deal with ambiguity during the pose-guided deformation.\n\nSince we only focus on modeling a single human subject, here we introduce an assumption for learning a more accurate neural radiance field: the density should be zero for points far from the surface of the human mesh. Accordingly, we set\n\\begin{equation}\n\\label{distance constraint}\n\\begin{aligned}\n\\eta_{t}(\\mathbf{x}_{k}) &= \\begin{cases} 1, & d(\\mathbf{x}_{k}) \\leq \\delta \\\\ 0, & d(\\mathbf{x}_{k}) > \\delta \\end{cases}\\\\\nd(\\mathbf{x}_{k}) &= \\sum_{v_i \\in \\mathcal{N}\\left(\\mathbf{x}_{k}\\right)} \\frac{\\omega_{i}}{\\omega} \\left\\|\\mathbf{x}_{k}-v_{i}\\right\\|\n\\end{aligned}\n\\end{equation}\nwhere $d(\\mathbf{x}_{k})$ is the weighted distance from point $\\mathbf{x}_{k}$ to the nearest neighbor vertices $\\mathcal{N}\\left(\\mathbf{x}_{k}\\right)$ in the observation space, and $\\delta$ is the threshold on the distance between a sample point and the SMPL surface in the observation space. In experiments we follow NeRF~\\cite{NeRF} to perform hierarchical volume sampling to obtain $\\tilde{C}_{t}^c(\\mathbf{r})$ and $\\tilde{C}_{t}^f(\\mathbf{r})$ with the coarse and fine networks, respectively.\n\n\\subsection{Pose Refinement via Analysis-by-Synthesis} \\label{Pose_Refine}\nOur proposed method learns an animatable NeRF for human subjects by explicitly deforming the observation space of different frames to a constant canonical space, under the guidance of SMPL transformations. Although the current state-of-the-art pose and shape estimation method~\\cite{VIBE} is adopted to obtain more stable SMPL parameters, in experiments we observe that the resulting estimates do not align well with the ground truth, especially in depth. Such inaccurate human body estimates easily lead to blurry results. To address this problem, we propose to fine-tune the SMPL parameters during training. \nSpecifically, we use VIBE~\\cite{VIBE} to estimate SMPL parameters $M(\\theta_{t}, \\beta_{t})$ for each frame $I_t$ as the initialization of variables that will be optimized during training.\nWe use the mean shape parameters $\\beta=\\frac{1}{n}\\sum_{t=1}^{n}\\beta_{t}$ across all frames.
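As a rough illustration of the joint optimization, the sketch below discretizes the rendering equation above and registers the per-frame poses as learnable parameters; the names (e.g. \\texttt{nerf} and its assumed interface) are illustrative assumptions rather than our exact implementation.\n\\begin{verbatim}\nimport torch\nimport torch.nn as nn\n\ndef render_rays(nerf, pts_can, dists, eta):\n    # pts_can: (R, N, 3) samples already mapped to canonical space\n    # dists: (R, N) spacings delta_k; eta: (R, N) binary SMPL-distance mask\n    rgb, sigma = nerf(pts_can)               # assumed NeRF interface\n    s = eta * sigma * dists\n    alpha = 1.0 - torch.exp(-s)              # per-sample opacity\n    T = torch.cumprod(torch.cat([torch.ones_like(s[:, :1]),\n                                 torch.exp(-s[:, :-1])], dim=1), dim=1)\n    w = T * alpha\n    color = (w[..., None] * rgb).sum(dim=1)  # rendered color C(r)\n    acc = w.sum(dim=1)                       # integral density D(r)\n    return color, acc\n\nclass PoseRefiner(nn.Module):\n    # per-frame SMPL poses plus one shared shape, made learnable so the\n    # photometric loss can correct the VIBE initialization\n    def __init__(self, init_poses, init_shape):\n        super().__init__()\n        self.poses = nn.Parameter(init_poses.clone())  # (n_frames, 72)\n        self.shape = nn.Parameter(init_shape.clone())  # (10,)\n\n# a single optimizer over both the radiance field and the SMPL variables:\n# opt = torch.optim.Adam(list(nerf.parameters())\n#                        + list(refiner.parameters()), lr=5e-4)\n\\end{verbatim}\nBecause the pose parameters enter the rendering only through the differentiable deformation, one backward pass refines the radiance field and the SMPL estimates jointly.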
It turns out that the refined SMPL better fits the input image and helps to obtain clearer and sharper results, as shown in Fig.~\\ref{fig:Comparision_with_different_method} and Fig.~\\ref{fig:3D_Rec_on_mutil_garment}.\n\n\n\\subsection{Objective Functions} \\label{Object_Fun}\nGiven a monocular video sequence, we learn the animatable NeRF by optimizing the following objective function\n\\begin{equation}\n\\mathcal{L}= \\mathcal{L}_{c} + \\mathcal{L}_{p} + \\lambda_{d} \\mathcal{L}_{d}\n\\end{equation}\nwhere $\\mathcal{L}_{c}$, $\\mathcal{L}_{p}$, and $\\mathcal{L}_{d}$ are the reconstruction loss, the pose regularization, and the background regularization, respectively, and $\\lambda_{d}$ balances the importance of the background regularization.\n\n{\\bf Reconstruction Loss}. The reconstruction loss aims to minimize the error between the rendered images and the corresponding observed frames, and is defined as \n\\begin{equation}\n\\mathcal{L}_{c}=\\sum_{t}\\sum_{\\mathbf{r}}\\left\\|\\tilde{C}_{t}^{c}(\\mathbf{r})-C_{t}(\\mathbf{r})\\right\\|_{2}^{2} + \\left\\|\\tilde{C}_{t}^{f}(\\mathbf{r})-C_{t}(\\mathbf{r})\\right\\|_{2}^{2}\n\\end{equation}\nwhere $\\mathbf{r}$ is a camera ray passing through the image $I_{t}$,\n$C_t(\\mathbf{r})$ is the ground truth color of the pixel intersected by ray $\\mathbf{r}$ on the observed image $I_{t}$,\nand $\\tilde{C}_{t}^{c}(\\mathbf{r})$, $\\tilde{C}_{t}^{f}(\\mathbf{r})$ are the corresponding rendered colors from the coarse and fine networks, respectively (see Sec.~\\ref{sec:vr}). \n\n\n{\\bf Pose Regularization}. \nTo obtain stable and smooth pose parameters, we add the following pose regularization term to encourage the optimized pose parameters to stay close to the initial pose, and the pose parameters of adjacent frames to be similar:\n\\begin{equation}\n\\mathcal{L}_{p}=\\lambda_1\\left\\|\\tilde{\\theta}_{t}-\\theta_{t}\\right\\|+\\lambda_2\\left\\|\\tilde{\\theta}_{t}-\\tilde{\\theta}_{t+1}\\right\\|\n\\end{equation}\nwhere $\\theta_{t}$ denotes the initial pose parameters, and $\\tilde{\\theta}_{t}$, $\\tilde{\\theta}_{t+1}$ are the optimized pose parameters of frames $t$ and $t+1$. $\\lambda_1$ and $\\lambda_2$ are the corresponding penalty weights.\n\n\n{\\bf Background Regularization}. \nWe only focus on reconstructing the human no matter what the background is, which means that, ideally, density exists only inside the human. \nTo better achieve this goal, we first set the background color to white with the help of an off-the-shelf segmentation network.\nWe then minimize the difference between the rendered integral density and the mask obtained by segmentation.\nSince the foreground (i.e.
human) region is $1$ and the background region is $0$ in the mask, this encourages the integral density to be $1$ in the human region and $0$ in the background region, resulting in a much cleaner empty-space estimate and a more solid, clearer person estimate in our canonical NeRF space.\nMathematically, our background regularization term is defined as follows,\n\\begin{equation}\n\\mathcal{L}_{d}=\\sum_{t}\\sum_{\\mathbf{r}}\\left\\|\\tilde{D}_{t}^{c}(\\mathbf{r})-D_{t}(\\mathbf{r})\\right\\| + \\left\\|\\tilde{D}_{t}^{f}(\\mathbf{r})-D_{t}(\\mathbf{r})\\right\\|\n\\end{equation}\nwhere $\\tilde{D}_{t}^{c}$ and $\\tilde{D}_{t}^{f}$ are the rendered integral densities of the coarse and fine networks for the camera ray $\\mathbf{r}$ from the image $I_{t}$, and $D_{t}(\\mathbf{r})$ is the corresponding segmentation mask, with $D_{t}(\\mathbf{r})=1$ in the foreground region and $D_{t}(\\mathbf{r})=0$ in the background region.\n\n\\begin{figure*}[t]\n \\centering\n \\subfigure[NeRF]{\n\t\\begin{minipage}[b]{0.14\\linewidth}\n\t\\includegraphics[width=1.0\\linewidth, trim=0 0 0 0,clip]{figures\/novel_view_nerf.png}\n\t\\end{minipage}\n\t}\n\t\\subfigure[SMPLpix]{\n\t\\begin{minipage}[b]{0.14\\linewidth}\n\t\\includegraphics[width=1.0\\linewidth, trim=0 0 0 0,clip]{figures\/novel_view_smplpix.png}\n\t\\end{minipage}\n\t}\n\t\\subfigure[NeuralBody]{\n\t\\begin{minipage}[b]{0.14\\linewidth}\n\t\\includegraphics[width=1.0\\linewidth, trim=0 0 0 0,clip]{figures\/novel_view_neuralbody.png}\n\t\\end{minipage}\n\t}\n\t\\subfigure[NeRF+U]{\n\t\\begin{minipage}[b]{0.14\\linewidth}\n\t\\includegraphics[width=1.0\\linewidth, trim=0 0 0 0,clip]{figures\/novel_view_nerf_u.png}\n\t\\end{minipage}\n\t}\n\t\\subfigure[Ours]{\n\t\\begin{minipage}[b]{0.14\\linewidth}\n\t\\includegraphics[width=1.0\\linewidth, trim=0 0 0 0,clip]{figures\/novel_view_ours.png}\n\t\\end{minipage}\n\t}\n\t\\subfigure[GT]{\n\t\\begin{minipage}[b]{0.14\\linewidth}\n\t\\includegraphics[width=1.0\\linewidth, trim=0 0 0 0,clip]{figures\/novel_view_gt.png}\n\t\\end{minipage}\n\t}\n \\caption{Visual comparison of different methods on novel view synthesis on People-Snapshot~\\cite{Video_avatars} (rows 1-2) and iPER~\\cite{LWGAN} (rows 3-4). \n NeRF~\\cite{NeRF} struggles to handle dynamic scenes because the movement of the subject violates the multi-view consistency requirement.
With the help of our proposed pose-guided deformation, NeRF+U (NeRF + Unpose) achieves much better results (rows 1\\&2) if the estimated SMPL poses are accurate but still produces blurry results (rows 3\\&4) if they are not.\n Further adding pose refinement (ours) greatly improves the robustness as long as the estimated SMPL pose is reasonably good.\n Compared with NeuralBody~\\cite{Neural_Body} and SMPLpix~\\cite{smplpix}, our approach can produce realistic images with well-preserved identity and cloth details.\n }\n \\label{fig:Comparision_with_different_method}\n\\end{figure*}\n\n\n\\begin{figure*}[ht]\n \\centering\n \\subfigure[Input]{\n\t\\begin{minipage}[b]{0.1116\\linewidth}\n\t\\includegraphics[width=1.0\\linewidth, trim=0 0 0 0,clip]{figures\/novel_view_iper_input.png}\n\t\\end{minipage}\n\t}\n\t\\subfigure[view 0]{\n\t\\begin{minipage}[b]{0.1116\\linewidth}\n\t\\includegraphics[width=1.0\\linewidth, trim=0 0 0 0,clip]{figures\/novel_view_iper_rec.png}\n\t\\end{minipage}\n\t}\n\t\\subfigure[view 1]{\n\t\\begin{minipage}[b]{0.10\\linewidth}\n\t\\includegraphics[width=1.0\\linewidth, trim=0 0 0 0,clip]{figures\/novel_view_iper_left.png}\n\t\\end{minipage}\n\t}\n\t\\subfigure[view 2]{\n\t\\begin{minipage}[b]{0.10\\linewidth}\n\t\\includegraphics[width=1.0\\linewidth, trim=0 0 0 0,clip]{figures\/novel_view_iper_right.png}\n\t\\end{minipage}\n\t}\n\t\\subfigure[Input]{\n\t\\begin{minipage}[b]{0.10\\linewidth}\n\t\\includegraphics[width=1.0\\linewidth, trim=0 0 0 0,clip]{figures\/novel_view_ps_input.png}\n\t\\end{minipage}\n\t}\n\t\\subfigure[view 0]{\n\t\\begin{minipage}[b]{0.10\\linewidth}\n\t\\includegraphics[width=1.0\\linewidth, trim=0 0 0 0,clip]{figures\/novel_view_ps_rec.png}\n\t\\end{minipage}\n\t}\n\t\\subfigure[view 1]{\n\t\\begin{minipage}[b]{0.10\\linewidth}\n\t\\includegraphics[width=1.0\\linewidth, trim=0 0 0 0,clip]{figures\/novel_view_ps_left.png}\n\t\\end{minipage}\n\t}\n\t\\subfigure[view 2]{\n\t\\begin{minipage}[b]{0.10\\linewidth}\n\t\\includegraphics[width=1.0\\linewidth, trim=0 0 0 0,clip]{figures\/novel_view_ps_right.png}\n\t\\end{minipage}\n\t}\n \\caption{Results of novel view synthesis on iPER (a-d) and People-Snapshot (e-h).
Our method can synthesize realistic and multi-view consistent results from different camera views while keeping the subject pose fixed.}\n \\label{fig:Novel_View}\n\\end{figure*}\n\n\n\\setlength{\\tabcolsep}{2.8pt}\n\\begin{table*}[ht]\n \\caption{Quantitative comparison of novel view synthesis on People-Snapshot~\\cite{Video_avatars} and iPER~\\cite{LWGAN}.}\n\t\\label{Albation}\n\t\\begin{center}\n\t\t\\renewcommand{\\arraystretch}{1.1}\n\t\t\\begin{tabular}{|c|ccccc|ccccc|ccccc|}\n\t\t\t\\hline\n\t\t\t\\multirow{2}{*}{Subject ID} & \\multicolumn{5}{c|} {PSNR$\\uparrow$} & \\multicolumn{5}{c|}{SSIM$\\uparrow$} &\n\t\t\t\\multicolumn{5}{c|}{LPIPS$\\downarrow$} \\\\\n\t\t\t\\cline{2-16}\n\t\t\t& NeRF & SMPLpix & NB & NeRF+U & OURS & NeRF &SMPLpix & NB & NeRF+U & OURS & NeRF &SMPLpix & NB & NeRF+U & OURS \\\\\n\t\t\t\\hline\n\t\t\tmale-3-casual & 20.64 & 23.74 & 24.94 & 23.88 & \\textbf{29.37} & .8993 & .9229 & .9428 & .9329 & \\textbf{.9703} & .1008 & .0222 & .0326 & .0438 & \\textbf{.0168} \\\\\n\t\t\tmale-4-casual & 20.29 & 22.43 & 24.71 & 23.13 & \\textbf{28.37} & .8803 & .9095 & .9469 & .9276 & \\textbf{.9605} & .1445 & .0305 & .0423 & .0554 & \\textbf{.0268} \\\\\n\t\t\tfemale-3-casual & 17.43 & 22.33 & 23.87 & 22.45 & \\textbf{28.91} & .8605 & .9288 & .9504 & .9413 & \\textbf{.9743} & .1696 & .0270 & .0346 & .0498 & \\textbf{.0215} \\\\\n\t\t\tfemale-4-casual & 17.63 & 23.35 & 24.37 & 23.13 & \\textbf{28.90} & .8578 & .9258 & .9451 & .9276 & \\textbf{.9678} & .1827 & .0239 & .0382 & .0556 & \\textbf{.0174} \\\\\n\t\t\t\\hline\n\t\t\tiper-009-4-1 & 19.54 & 20.25 & 25.46 & 21.56 & \\textbf{30.23} & .7870 & .9018 & .9378 & .8667 & \\textbf{.9466} & .2641 & \\textbf{.0293} & .0558 & .1197 & .0335 \\\\\n\t\t\tiper-023-1-1 & 17.41 & 19.48 & 25.44 & 20.25 & \\textbf{27.26} & .7623 & .8945 & .9330 & .8656 & \\textbf{.9457} & .2769 & .0442 & .0493 & .1109 & \\textbf{.0285} \\\\\n\t\t\tiper-002-1-1 & 16.01 & 19.64 & 23.06 & 18.75 & \\textbf{26.99} & .7500 & .8886 & .9394 & .8708 & \\textbf{.9502} & .3363 & .0392 & .0476 & .1205 & \\textbf{.0285} \\\\\n\t\t\tiper-026-1-1 & 17.09 & 19.03 & 23.77 & 18.48 & \\textbf{26.85} & .7580 & .8574 & .9351 & .8623 & \\textbf{.9542} & .2928 & .0494 & .0550 & .1282 & \\textbf{.0315} \\\\\n\t\t\t\\hline\n\t\t\\end{tabular}\n\t\\end{center}\n\\end{table*}\n\n\\subsection{Applications}\nThe proposed approach learns an animatable NeRF, allowing us to reconstruct an implicit neural representation of the geometry and appearance of the human body from a monocular video of a person turning around before a camera while holding the A-pose. As with the original NeRF~\\cite{NeRF}, novel view images (Sec.~\\ref{NVS}) can be rendered through volume rendering, and the surface geometry (Sec.~\\ref{3DR}) of the scene can be extracted with the Marching Cubes algorithm~\\cite{MCubes}.\nSince we have explicitly incorporated the SMPL model into the NeRF training process, we can deform the neural radiance field to desired poses for rendering through our pose-guided deformation. This makes our NeRF \\textit{animatable}, enabling a new application: synthesizing novel poses and animating the reconstructed person (Sec.~\\ref{NPS}), as shown in Fig.~\\ref{fig:motion_transfer} and Table~\\ref{novel_pose_synthesis_with_neuralbody}.\n\n\n\n\\section{Experiments}\n\n\\subsection{Implementation Details}\nFollowing NeRF~\\cite{NeRF}, we use coarse and fine networks to represent the human body, and use $64$ coarse and $64+32$ fine ray samples for all experiments.
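For reference, the fine samples can be drawn by inverse-CDF sampling of the coarse weights, as in standard hierarchical volume sampling; the sketch below uses assumed tensor shapes and is not necessarily our exact implementation.\n\\begin{verbatim}\nimport torch\n\ndef sample_fine(bins, weights, n_fine=32):\n    # bins: (R, N+1) depth bin edges; weights: (R, N) coarse sample weights\n    pdf = weights + 1e-5\n    pdf = pdf \/ pdf.sum(dim=-1, keepdim=True)\n    cdf = torch.cumsum(pdf, dim=-1)\n    cdf = torch.cat([torch.zeros_like(cdf[..., :1]), cdf], dim=-1)\n    u = torch.rand(cdf.shape[0], n_fine, device=cdf.device)\n    idx = torch.searchsorted(cdf, u, right=True).clamp(1, cdf.shape[-1] - 1)\n    lo, hi = idx - 1, idx\n    den = (cdf.gather(-1, hi) - cdf.gather(-1, lo)).clamp(min=1e-5)\n    t = (u - cdf.gather(-1, lo)) \/ den\n    return bins.gather(-1, lo) + t * (bins.gather(-1, hi) - bins.gather(-1, lo))\n\\end{verbatim}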
\nTo focus on the foreground subject, we sample $90\\%$ of the rays from the foreground and the remaining $10\\%$ from the background. \nWe set the hyper-parameters as $\\left|\\mathcal{N}(\\mathbf{x})\\right|=4$, $\\delta=0.2$, $\\lambda_1=0.001$, $\\lambda_2=0.01$ and $\\lambda_d=0.1$. \nWe use $512 \\times 512$ images in all experiments. \nFor training the model, we adopt the Adam optimizer~\\cite{Adam}; training takes about 13 hours on 2 Nvidia GeForce RTX 3090 24GB GPUs.\n\n\n\\begin{figure*}[ht]\n \\centering\n \\subfigure[Input]{\n\t\\begin{minipage}[b]{0.12\\linewidth}\n\t\\includegraphics[width=1.0\\linewidth, trim=0 0 0 0,clip]{figures\/3d_rec_input.png}\n\t\\end{minipage}\n\t}\n\t\\subfigure[NeRF]{\n\t\\begin{minipage}[b]{0.12\\linewidth}\n\t\\includegraphics[width=1.0\\linewidth, trim=0 0 0 0,clip]{figures\/3d_rec_nerf.png}\n\t\\end{minipage}\n\t}\n\t\\subfigure[NeRF+L]{\n\t\\begin{minipage}[b]{0.12\\linewidth}\n\t\\includegraphics[width=1.0\\linewidth, trim=0 0 0 0,clip]{figures\/3D_rec_nerf_latent.png}\n\t\\end{minipage}\n\t}\n\t\\subfigure[NeRF+U]{\n\t\\begin{minipage}[b]{0.12\\linewidth}\n\t\\includegraphics[width=1.0\\linewidth, trim=0 0 0 0,clip]{figures\/3D_rec_nerf_unpose.png}\n\t\\end{minipage}\n\t}\n\t\\subfigure[Ours]{\n\t\\begin{minipage}[b]{0.12\\linewidth}\n\t\\includegraphics[width=1.0\\linewidth, trim=0 0 0 0,clip]{figures\/3d_rec_ours.png}\n\t\\end{minipage}\n\t}\n\t\\subfigure[NeRF+U(GT)]{\n\t\\begin{minipage}[b]{0.12\\linewidth}\n\t\\includegraphics[width=1.0\\linewidth, trim=0 0 0 0,clip]{figures\/3d_rec_nerf+unpose_gt.png}\n\t\\end{minipage}\n\t}\n\t\\subfigure[GT]{\n\t\\begin{minipage}[b]{0.12\\linewidth}\n\t\\includegraphics[width=1.0\\linewidth, trim=0 0 0 0,clip]{figures\/3d_rec_gt.png}\n\t\\end{minipage}\n\t}\n \\caption{Visualization of 3D reconstruction on Multi-Garment. NeRF~\\cite{NeRF} and NeRF+U (NeRF + Unpose) fail to reconstruct the 3D geometry due to the movement of the subject and the inaccurate SMPL estimates. Compared with NeRF+L (NeRF + Latent), which produces over- or under-smoothed results, our results are more reasonable.
As a reference, NeRF+U(GT) uses the ground-truth SMPL and learns geometry with very high precision, demonstrating the effectiveness of our pose-guided deformation and the importance of accurate SMPL estimates for 3D reconstruction tasks.}\n \\label{fig:3D_Rec_on_mutil_garment}\n\\end{figure*}\n\n\\setlength{\\tabcolsep}{2.8pt}\n\\begin{table*}[ht]\n \\caption{Quantitative comparison of 3D reconstruction on Multi-Garment.}\n\t\\label{3D_Rec_on_mutil-garment}\n\t\\begin{center}\n\t\t\\renewcommand{\\arraystretch}{1.1}\n\t\t\\begin{tabular}{|c|ccccc|ccccc|}\n\t\t\t\\hline\n\t\t\t\\multirow{2}{*}{Subject ID} & \\multicolumn{5}{c|} {P2S$\\downarrow$} & \\multicolumn{5}{c|}{Chamfer$\\downarrow$} \\\\\n\t\t\t\\cline{2-11}\n\t\t\t&NeRF &NeRF+L &NeRF+U &OURS &NeRF+U(GT) &NeRF &NeRF+L &NeRF+U &OURS &NeRF+U(GT) \\\\\n\t\t\t\\hline\n\t\t\tpeople1 & 65.53 & 13.57 & 33.51 & \\textbf{4.09} & 0.86 & 89.32 & 13.96 & 41.81 & \\textbf{4.25} & 0.25 \\\\\n\t\t\tpeople2 & 36.26 & 11.67 & 28.50 & \\textbf{1.55} & 0.85 & 34.95 & 10.78 & 28.86 & \\textbf{0.96} & 0.25 \\\\\n\t\t\tpeople3 & 34.78 & 16.01 & 36.40 & \\textbf{4.17} & 1.17 & 33.62 & 13.83 & 38.36 & \\textbf{3.30} & 0.43 \\\\\n\t\t\tpeople4 & 33.29 & 26.84 & 32.74 & \\textbf{3.53} & 1.06 & 33.70 & 26.59 & 32.08 & \\textbf{2.68} & 0.36 \\\\\n\t\t\t\\hline\n\t\t\tAverage & 42.46 & 17.02 & 33.28 & \\textbf{3.32} & 0.99 & 47.90 & 16.29 & 34.79 & \\textbf{2.80} & 0.32 \\\\\n\t\t\t\\hline\n\t\t\\end{tabular}\n\t\\end{center}\n\\end{table*}\n\n\n\n\\subsection{Datasets and Evaluation}\n\n\\noindent \\textbf{Datasets}.\nTo evaluate the effectiveness of the proposed method, we conduct experiments on 3 different datasets, including People-Snapshot~\\cite{Video_avatars}, iPER~\\cite{LWGAN}, and Multi-Garment~\\cite{MGNet}.\nThe People-Snapshot~\\cite{Video_avatars} and iPER~\\cite{LWGAN} datasets both contain monocular RGB videos captured in real-world scenes, in which the subjects hold an A-pose and turn around before a fixed camera. \nIn addition, the iPER dataset also contains videos of the same person with random motion sequences. The Multi-Garment dataset~\\cite{MGNet} contains 3D-scanned human body models with textures and the corresponding registered SMPL+D models that can be used for animation.\nWe select 4 human body models and synthesize videos following the motion sequences of the People-Snapshot dataset, in which the subjects rotate while holding an A-pose.\nThe People-Snapshot and iPER datasets are mainly used for the evaluation of novel view synthesis and novel pose synthesis, while the synthetic data from the Multi-Garment dataset are used to evaluate the quality of the 3D reconstructions.\n\n\n\\noindent \\textbf{Evaluation}. \nIn our experiments, we use A-pose frames (2 turns) for training, the remaining A-pose frames (1 turn) for testing novel view synthesis, and random-pose frames for testing novel pose synthesis. \nSince there are depth and scale ambiguities in optimizing the SMPL parameters for monocular videos, \nwe also optimize the SMPL parameters of the test frames for quantitative evaluation. \nNote that the parameters of the neural radiance field network remain fixed. \nFor quantitative evaluation, we evaluate our method for novel view synthesis and novel pose synthesis using the following metrics: peak signal-to-noise ratio (PSNR), structural similarity index (SSIM~\\cite{SSIM}), and learned perceptual image patch similarity (LPIPS~\\cite{lpips}).
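For reference, PSNR follows directly from the mean squared error, and the Chamfer distance used for the geometry evaluation described next admits an equally small sketch (SSIM and LPIPS are computed with standard off-the-shelf implementations of \\cite{SSIM} and \\cite{lpips}):\n\\begin{verbatim}\nimport torch\n\ndef psnr(pred, gt):\n    # images scaled to [0, 1]; higher is better\n    return -10.0 * torch.log10(torch.mean((pred - gt) ** 2))\n\ndef chamfer(p, q):\n    # symmetric Chamfer distance between point sets p: (P, 3), q: (Q, 3)\n    d = torch.cdist(p, q)\n    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()\n\\end{verbatim}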
\nFor 3D reconstruction, we use the point-to-surface Euclidean distance (P2S) and the Chamfer distance~\\cite{Chamfer} (in cm) between the reconstructed and the ground truth surfaces. \nWe register our meshes to the ground truth geometry for comparison, in consideration of the scale and depth ambiguities. \nThe real-scene datasets do not have corresponding ground truth geometry, so we only provide qualitative results for them.\n\n\n\\begin{figure}[ht]\n \\centering\n \\subfigure[Input]{\n\t\\includegraphics[height=0.72\\linewidth, trim=0 0 0 0,clip]{figures\/3D_rec_with_smpld_input.png}\n\t}\n\t\\subfigure[Ours]{\n\t\\includegraphics[height=0.72\\linewidth, trim=10 0 20 0,clip]{figures\/3D_rec_with_smpld_ours.png}\n\t}\n\t\\subfigure[Video Avatars]{\n\t\\includegraphics[height=0.72\\linewidth, trim=20 0 0 0,clip]{figures\/3D_rec_with_smpld_video_avatar.png}\n\t}\n \\caption{Comparison of 3D reconstruction results on People-Snapshot with Video Avatars~\\cite{Video_avatars}. Compared with Video Avatars~\\cite{Video_avatars}, our approach generates more details such as hair and cloth wrinkles.}\n \\label{fig:3D_Rec_with_SMPLD}\n\\end{figure}\n\n\\begin{figure}[ht]\n \\centering\n \\includegraphics[width=1.0\\linewidth]{figures\/3D_rec.png}\n \\caption{Visualization of our reconstructed geometry on iPER from different views.}\n \\label{fig:3D_Rec_iPER}\n\\end{figure}\n\n\n\n\n\\begin{figure}[ht]\n \\centering\n \\includegraphics[width=1.0\\linewidth]{figures\/compare_with_neuralbody_on_unseen_pose_in_detail.png}\n \\caption{Comparison between NeuralBody~\\cite{Neural_Body} (first row) and Ours (second row) on the novel pose synthesis task. In contrast to NeuralBody, which fails to synthesize novel poses, our approach generalizes better to novel poses that are very different from the training poses.}\n \\label{compare_with_neuralbody_on_unseen_pose_in_detail}\n\\end{figure}\n\n\\subsection{Novel view synthesis}\n\\label{NVS}\nLike the original NeRF~\\cite{NeRF}, our animatable NeRF can be rendered from arbitrary views (of the same pose). \nSince the monocular video does not have corresponding novel view images, we use the first part of the A-pose video frames to train our model and the remaining frames to test the rendered novel view images\\footnote{Technically, this is not a ``novel'' view apart from the slightly different human pose.}.\nTo compare against the original NeRF, we first transform the global rotation and translation of the SMPL estimate of every video frame into a camera pose, as if we were handling a collection of multi-view images of an almost static scene, since the subject is holding an A-pose.\nIn order to verify the effectiveness of our pose fine-tuning strategy, we first use VIBE~\\cite{VIBE} to estimate the parameters of SMPL, and conduct comparative experiments between ``Ours'' (NeRF + Unpose + Pose Refinement) and ``NeRF+U'' (NeRF + Unpose) as shown in Fig.~\\ref{fig:Comparision_with_different_method}.\nWe also compare the proposed method with several state-of-the-art (SOTA) methods, including NeuralBody~\\cite{Neural_Body} (NB) and SMPLpix~\\cite{smplpix}.\nNeuralBody, which also combines SMPL and NeRF, is able to reconstruct dynamic human bodies from monocular video.\nSMPLpix takes the SMPL pose as input to generate images via a neural rendering network.\nTable~\\ref{Albation} quantitatively compares the results of different approaches on novel view synthesis on the People-Snapshot and iPER datasets.
\nAs shown in the table, our proposed approach achieves higher PSNR and SSIM scores compared to the other approaches. \nWe also provide qualitative comparisons in Fig.~\\ref{fig:Comparision_with_different_method} with person examples drawn from the People-Snapshot and iPER datasets. \nWe can see that the proposed approach produces more realistic and reliable results.\nNeRF fails to handle such dynamic scenes since the movement of the subject violates the multi-view consistency requirement. \nExperiments in Fig.~\\ref{fig:Comparision_with_different_method}(d) and Table~\\ref{Albation} (see NeRF+U) also show that inaccurate SMPL parameters have a very negative impact. \nIn contrast, after taking pose refinement into consideration, the quality of novel view synthesis is significantly improved. \nAs shown in Fig.~\\ref{fig:Comparision_with_different_method}(b)(c), SMPLpix and NeuralBody seem to overfit the training frames, while our results better preserve details such as faces and hands.\nFig.~\\ref{fig:Novel_View} shows realistic rendering results from more views of the proposed method on more persons with different dresses and hairstyles, indicating the applicability and robustness of the proposed method in real scenarios.\n\n\n\\subsection{3D human reconstruction}\n\\label{3DR}\nOn this task, we compare against the original NeRF~\\cite{NeRF} and a NeRF+L baseline. NeRF+L extends NeRF by conditioning it on a (per-frame) learnable latent deformation code to handle dynamic scenes, as shown in Fig.~\\ref{fig:3D_Rec_on_mutil_garment}(c) and Table~\\ref{3D_Rec_on_mutil-garment}.\nFor synthetic data, we also show results using ground truth SMPL parameters (NeRF+U(GT)) for the pose-guided deformation as an upper bound, as shown in Fig.~\\ref{fig:3D_Rec_on_mutil_garment}(f) and Table~\\ref{3D_Rec_on_mutil-garment}.\nA quantitative comparison of different strategies for 3D human reconstruction is shown in Table~\\ref{3D_Rec_on_mutil-garment}. \nWe can see that the proposed approach achieves much lower P2S and Chamfer distances, demonstrating its superiority in reconstructing accurate 3D geometry. \nFig.~\\ref{fig:3D_Rec_on_mutil_garment} compares the qualitative results of 3D reconstruction. \nWe can see that NeRF fails to learn reasonable 3D geometry of the human subject even with \\textit{small} movements. \nNeRF+U (see Sec.~\\ref{NVS}) also produces messy results. Compared with NeRF+L, which produces over- or under-smoothed results, the proposed approach better captures geometric details such as cloth wrinkles, faces, and hair. \nWith ground truth SMPL parameters, the P2S and Chamfer distances are much lower than those of all the other approaches, which demonstrates the necessity of obtaining accurate poses and the effectiveness of our approximated pose-guided deformation. \nIn Fig.~\\ref{fig:3D_Rec_with_SMPLD}, we compare the reconstruction results with Video Avatars~\\cite{Video_avatars}, which deforms vertices of the SMPL model to fit the 2D human silhouettes over the video sequence. \nWe can see that the implicit learning of subject geometry with the animatable NeRF generates higher-quality details, including cloth wrinkles, hair, and accessories. \nIn Fig.~\\ref{fig:3D_Rec_iPER} we show the reconstruction results of persons with varied clothes and hairstyles from the iPER dataset.
\nAlthough our pose-guided deformation does not take the deformation of clothes into account, the proposed method is capable of capturing the high-quality 3D geometry details, such as the hood (second line) and pigtail (third line), as long as the clothes do not have violent deformation.\n\n\n\\begin{figure*}[ht]\n \\centering\n \\includegraphics[width=1.0\\linewidth]{figures\/motion_transfer.png}\n \\caption{Novel pose synthesis on People-Snapshot\\cite{Video_avatars} and iPER\\cite{LWGAN}. We can feed novel SMPL pose parameters to the trained animatable NeRF to synthesize novel pose images. \n Although trained only on A-pose images, our animatable NeRF has the capability to stably render new images containing complex poses.\n }\n \\label{fig:motion_transfer}\n\\end{figure*}\n\n\\subsection{Novel pose synthesis} \n\\label{NPS}\nDue to our explicit control of deformation via SMPL, our method can synthesize images under unseen poses even with only simple A-pose sequences as input. \nTo quantitatively evaluate the capability of our method on novel pose synthesis, we train the model using A-pose videos and test it using random pose videos of the same person. \nNeuralBody\\cite{Neural_Body} is the most similar work to ours in the sense that it also combines NeRF with SMPL. \nCompared to ours, it handles complex cloth geometry (which is not modeled by SMPL) better due to its use of latent code. \nHowever, each vertex's latent code would affect a much larger region after several sparse convolution layers, resulting in unpredictable artifacts for novel pose synthesis (see Fig. \\ref{compare_with_neuralbody_on_unseen_pose_in_detail}).\nTable~\\ref{novel_pose_synthesis_with_neuralbody} shows that our method achieves much better results than NeuralBody on novel pose synthesis. \nQualitative visualizations of novel pose synthesis on People-snapshot and iPER are provided in Fig.~\\ref{fig:motion_transfer}. \nSpecifically, different poses are fed into the trained animatable NeRF to obtain the aforementioned renderings. 
\nDespite the significant differences between the test novel poses and the training poses, the results show that our method can still produce realistic images with well-preserved identity and cloth details.\n\n\\setlength{\\tabcolsep}{2.8pt}\n\\begin{table}[ht]\n \\caption{Quantitative comparison of novel pose synthesis with NeuralBody (NB)\\cite{Neural_Body} on the iPER dataset.}\n\t\\label{novel_pose_synthesis_with_neuralbody}\n\t\\begin{center}\n\t\t\\renewcommand{\\arraystretch}{1.1}\n\t\t\\begin{tabular}{|c|cc|cc|cc|}\n\t\t\t\\hline\n\t\t\t\\multirow{2}{*}{Subject ID} & \\multicolumn{2}{c|} {PSNR$\\uparrow$} & \\multicolumn{2}{c|} {SSIM$\\uparrow$} &\n\t\t\t\\multicolumn{2}{c|} {LPIPS$\\downarrow$} \\\\\n\t\t\t\\cline{2-7}\n\t\t\t& NB & OURS & NB & OURS & NB & OURS \\\\\n\t\t\t\\hline\n\t\t\tiper-009-4-2 & 20.95 & \\textbf{24.11} & \\textbf{.9035} & .8927 & .0980 & \\textbf{.0782} \\\\\n\t\t\t\\hline\n\t\t\tiper-023-1-2 & 20.28 & \\textbf{21.98} & \\textbf{.9009} & .8940 & .0870 & \\textbf{.0644} \\\\\n\t\t\t\\hline\n\t\t\tiper-026-1-2 & 17.42 & \\textbf{19.27} & \\textbf{.8795} & .8713 & .1192 & \\textbf{.0990} \\\\\n\t\t\t\\hline\n\t\t\tiper-002-1-2 & 19.07 & \\textbf{23.47} & .8957 & \\textbf{.9165} & .0749 & \\textbf{.0483} \\\\\n\t\t\t\\hline\n\t\t\\end{tabular}\n\t\\end{center}\n\\end{table}\n\n\n\\section{Discussion}\n\nIn the following, we discuss the proposed approach in detail, covering the analysis of pose refinement (Sec.~\\ref{analysis_of_pose_refinement}) and canonical poses (Sec.~\\ref{analysis_of_canonical_poses}), and the impact of background regularization (Sec.~\\ref{impact_of_background_regularization}) and view direction (Sec.~\\ref{impact_of_view_direction}).\n\n\n\\subsection{Analysis of Pose Refinement}\n\\label{analysis_of_pose_refinement}\n\nHere we discuss the impact of pose refinement on our approach. \nOur method relies on SMPL parameters for explicit deformation, so inaccurate SMPL estimation may lead to catastrophic results, as shown in Fig.~\\ref{fig:Comparision_with_different_method}(d).\nWe initialize the SMPL parameters with estimates from VIBE~\\cite{VIBE}, a state-of-the-art pose and shape estimation method. \nHowever, pose estimation from monocular videos usually suffers from depth ambiguity.\nAs shown in Fig.~\\ref{Pose_Refine_Vis}(a), although our input video shows a simple A-pose, the SMPL model estimated by VIBE is usually misaligned at the foot joints.\nAfter pose refinement, the SMPL model fits the input image better, as shown in Fig.~\\ref{Pose_Refine_Vis}(b).\n\n\\begin{figure}[ht]\n \\centering\n\t\\subfigure[VIBE est.]{\n\t\\begin{minipage}[b]{0.46\\linewidth}\n\t\\includegraphics[width=1.0\\linewidth, trim=0 0 0 0,clip]{figures\/pose_refine_vibe.png}\n\t\\end{minipage}\n\t} \n\t\\subfigure[Ours]{\n\t\\begin{minipage}[b]{0.46\\linewidth}\n\t\\includegraphics[width=1.0\\linewidth, trim=0 0 0 0,clip]{figures\/pose_refine_ours.png}\n\t\\end{minipage}\n\t} \n \\caption{Visual comparison before and after pose refinement on iPER\\cite{LWGAN}. After pose refinement, the SMPL model is better aligned with the input image (e.g. foot joints).\n \n }\n\t\\label{Pose_Refine_Vis}\n\\end{figure}\n\n\\subsection{Analysis of Canonical Poses}\n\\label{analysis_of_canonical_poses}\n\nIn this section, we discuss the effect of the choice of canonical pose on the reconstruction results. 
\nSince the pose-guided deformation is explicitly based on SMPL\\cite{SMPL}, we will get different canonical spaces with different canonical poses. \nTherefore, the choice of canonical poses has a crucial impact on the reconstruction and novel pose synthesis. Here we will discuss the reconstruction results of three different canonical poses: A-pose, T-pose, and X-pose. \nA-pose is the average pose of the body poses of the training frames, which is the closest to the poses in the training frames. \nT-pose is the SMPL model's rest pose, where the arms are far away from the body, but the legs are closer to each other. \nIn comparison, our customized X-pose offers more spread body parts (see Fig. \\ref{Canonical_pose}(d)).\n\nAs shown in Fig. \\ref{Canonical_pose}, using A-pose as the canonical pose offers the best quality for canonical space NeRF learning, while using T-pose and X-pose as canonical poses result in some artifacts under the axilla and the thighs. \nIf a point is close to two different SMPL body parts (e.g. body and arm, two legs), it is hard to decide which part the point belongs to since SMPL models unclothed human body only without taking the offset of the clothes into consideration.\n\n\\begin{figure}[ht]\n \\centering\n \\subfigure[Input]{\n\t\\begin{minipage}[b]{0.182\\linewidth}\n\t\\includegraphics[width=1.0\\linewidth, trim=0 0 0 0,clip]{figures\/canonical_input.png}\n\t\\end{minipage}\n\t} \n\t\\subfigure[A-pose]{\n\t\\begin{minipage}[b]{0.182\\linewidth}\n\t\\includegraphics[width=1.0\\linewidth, trim=0 0 0 0,clip]{figures\/canonical_A-pose.png}\n\t\\end{minipage}\n\t} \n\t\\subfigure[T-pose]{\n\t\\begin{minipage}[b]{0.24\\linewidth}\n\t\\includegraphics[width=1.0\\linewidth, trim=0 0 0 0,clip]{figures\/canonical_T-pose.png}\n\t\\end{minipage}\n\t} \n \\subfigure[X-pose]{\n\t\\begin{minipage}[b]{0.24\\linewidth}\n\t\\includegraphics[width=1.0\\linewidth, trim=0 0 0 0,clip]{figures\/canonical_X-pose.png}\n\t\\end{minipage}\n\t} \n \\caption{Visualization of different canonical NeRF spaces with different canonical poses during training on People-Snapshot\\cite{Video_avatars}.}\n\t\\label{Canonical_pose}\n\\end{figure}\n\n\\begin{figure}[ht]\n \\centering\n \\subfigure[Input]{\n\t\\begin{minipage}[b]{0.18\\linewidth}\n\t\\includegraphics[width=1.0\\linewidth, trim=0 0 0 0,clip]{figures\/novel_pose_input.png}\n\t\\end{minipage}\n\t}\n\t\\subfigure[A-pose]{\n\t\\begin{minipage}[b]{0.21\\linewidth}\n\t\\includegraphics[width=1.0\\linewidth, trim=0 0 0 0,clip]{figures\/novel_pose_A-pose.png}\n\t\\end{minipage}\n\t} \n\t\\subfigure[T-pose]{\n\t\\begin{minipage}[b]{0.21\\linewidth}\n\t\\includegraphics[width=1.0\\linewidth, trim=0 0 0 0,clip]{figures\/novel_pose_T-pose.png}\n\t\\end{minipage}\n\t} \n \\subfigure[X-pose]{\n\t\\begin{minipage}[b]{0.21\\linewidth}\n\t\\includegraphics[width=1.0\\linewidth, trim=0 0 0 0,clip]{figures\/novel_pose_X-pose.png}\n\t\\end{minipage}\n\t} \n \\caption{Novel pose synthesis with different canonical poses on People-Snapshot\\cite{Video_avatars}. X-pose is a better choice for novel pose synthesis compared to A-pose and T-pose, which produce unacceptable artifacts (e.g. multiple legs).}\n\t\\label{novel_pose_in_difference_canonical_pose}\n\\end{figure}\n\nFor reconstruction, A-pose is the best choice, but X-pose is more suitable for new pose synthesis. As shown in Fig. ~\\ref{novel_pose_in_difference_canonical_pose}. 
\nWhen using A-pose or T-pose as the canonical pose for synthesizing a pose different from the poses in the training frames, unacceptable artifacts (e.g. multiple legs) appear. This is because the different body parts are too close to each other in the canonical space of A-pose or T-pose, so that one body part (e.g. the left leg) will be deformed by the transformation of another body part (e.g. the right leg), resulting in the multiple legs in Fig.~\\ref{novel_pose_in_difference_canonical_pose}(b)(c). \n\n\n\\subsection{Impact of Background Regularization}\n\\label{impact_of_background_regularization}\n\nIn this section, we discuss the benefits of background regularization for our approach. Our approach focuses on the animatable NeRF of the human in a monocular video. \nTo avoid possible negative influence from the background, we uniformly set the background to white (with the help of an off-the-shelf segmentation network). \nHowever, noisy density regions commonly appear in the empty (i.e. non-human) space after training.\nAs shown in Fig.~\\ref{BG_Regularization}(b), although the background of the image is white, we notice some noisy non-zero density regions in the depth map. \nThis is because there is an ambiguity between the background (white in our case) and clothing that happens to be the same color as the background. \nTo deal with this problem, we introduce background regularization to encourage the density of the background region to be zero. \nWith background regularization, the artifacts in the empty space are significantly reduced, as shown in Fig.~\\ref{BG_Regularization}(c).\n\n\\begin{figure}[ht]\n \\centering\n \\subfigure[Input]{\n\t\\begin{minipage}[b]{0.18\\linewidth}\n\t\\includegraphics[width=1.0\\linewidth, trim=0 0 0 0,clip]{figures\/iper_002_input.jpg}\n\t\\end{minipage}\n\t} \n\t\\subfigure[w\/o background reg.]{\n\t\\begin{minipage}[b]{0.35\\linewidth}\n\t\\includegraphics[width=1.0\\linewidth, trim=0 0 0 0,clip]{figures\/iper_002_without_BG.png}\n\t\\end{minipage}\n\t} \n\t\\subfigure[w\/ background reg.]{\n\t\\begin{minipage}[b]{0.36\\linewidth}\n\t\\includegraphics[width=1.0\\linewidth, trim=0 0 0 0,clip]{figures\/iper_002_with_BG.png}\n\t\\end{minipage}\n\t} \n \\caption{Impact of background regularization on iPER\\cite{LWGAN}. The background regularization can effectively reduce the artifacts in the background region.}\n\t\\label{BG_Regularization}\n\\end{figure}\n\n\\begin{figure}[ht]\n \\centering\n \\subfigure[Input]{\n\t\\begin{minipage}[b]{0.22\\linewidth}\n\t\\includegraphics[width=1.0\\linewidth, trim=0 0 0 0,clip]{figures\/iper_009_input.jpg}\n\t\\end{minipage}\n\t} \n\t\\subfigure[w\/ viewing direction]{\n\t\\begin{minipage}[b]{0.33\\linewidth}\n\t\\includegraphics[width=1.0\\linewidth, trim=0 0 0 0,clip]{figures\/iper_009_with_view.png}\n\t\\end{minipage}\n\t} \n\t\\subfigure[w\/o viewing direction]{\n\t\\begin{minipage}[b]{0.33\\linewidth}\n\t\\includegraphics[width=1.0\\linewidth, trim=0 0 0 0,clip]{figures\/iper_009_without_view.png}\n\t\\end{minipage}\n\t} \n \\caption{Impact of viewing direction on novel view synthesis on iPER. 
After removing the viewing direction from the input, our model produces more consistent results across different views.}\n\t\\label{Impact_of_view_direction}\n\\end{figure}\n\n\n\\begin{figure}[ht]\n \\centering\n \\subfigure[Input]{\n\t\\includegraphics[width=0.20\\linewidth]{figures\/complex_pose_input.png}\n\t}\n\t\\subfigure[w\/o pose refinement]{\n\t\\includegraphics[width=0.34\\linewidth]{figures\/complex_pose_without_pose_refinement.png}\n\t}\n\t\\subfigure[Ours]{\n\t\\includegraphics[width=0.35\\linewidth]{figures\/complex_pose_ours.png}\n\t}\n \\caption{Visualization of novel pose synthesis on the complex pose video 009-4-2 of iPER\\cite{LWGAN}.}\n \\label{complex_pose_result}\n\\end{figure}\n\n\\subsection{Impact of Viewing Direction}\n\\label{impact_of_view_direction}\n\nUnlike NeRF\\cite{NeRF}, which maps the 3D position and viewing direction \nto color and density, our approach excludes the viewing direction from the input for robust dynamic reconstruction.\nNeRF's viewing direction is mainly used to handle specular reflections for materials such as glass and metal. \nFor dynamic scenes, it is very difficult to deal with changes of illumination, so we assume that the appearance of the subject is not view-dependent in our experiments.\nAlso, human skin and clothes are mainly diffuse reflective materials, with very few specular reflections.\nAs shown in Fig.~\\ref{Impact_of_view_direction}, the novel view synthesis results generated with the viewing direction as a condition during training show unpredictable artifacts.\n\n\n\\section{Limitations}\n\n\nOur method reconstructs a detailed 3D human body model and renders realistic images from a monocular video. \nTypically, the training videos capture subjects turning around before the camera while holding an A-pose or T-pose.\nWhen trained on a video containing complex poses, our method still obtains reasonable results (Fig.~\\ref{complex_pose_result}), but noticeable losses of detail are observed compared to the simpler training videos above.\nThe main reason is that it is more challenging to obtain sufficiently accurate SMPL estimates for videos containing complex poses.\nAnother limitation is that it is difficult for our method to handle extremely loose clothes or complex non-rigid deformations of the garments, \nbecause our explicit pose-guided deformation associates spatial points to the SMPL mesh, without explicit modeling of the garments.\nThus, the novel view synthesis results inevitably lose some details on the clothes. To get the best results, the performer should slowly turn around and hold a simple pose so that the clothes remain almost still relative to the body. \nLike all NeRF-based methods trained on a single scene, our method cannot reconstruct invisible parts, such as the underarms and the inner thighs, so the input video needs to cover the whole body of the subject as much as possible. \n\n\\section{Conclusion}\nIn this paper, we propose to learn an animatable neural radiance field from a monocular video, which allows us to perform photo-realistic novel-view synthesis, reconstruct the 3D geometry of the person with high-quality details, and animate the person with novel poses. \nTo achieve these goals, we extend the neural radiance field to dynamic scenes with human movements by introducing an explicit pose-guided deformation module and an analysis-by-synthesis pose refinement strategy. 
\nSpecifically, the pose-guided deformation attempts to deform the 3d position according to the neighboring SMPL vertices to learn a good and controllable human template in the canonical space, as well as to learn accurate 3d geometry.\nThe pose refinement strategy compensates for the negative impact of inaccurate pose estimation from existing approaches and provides more consistent\nguidance for learning better geometry (i.e. density) and appearance (i.e. RGB). \nExperiments on both synthetic data and real data demonstrate the effectiveness of the proposed approach. \n\n\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nWe give some heuristics for counting elliptic curves with certain\nproperties. In particular, we re-derive the Brumer-McGuinness\nheuristic for the number of curves with positive\/negative discriminant\nup to~$X$, which is an application of lattice-point counting.\nWe then introduce heuristics (with refinements from random matrix theory)\nthat allow us to predict how often we expect an elliptic curve~$E$\nwith even parity to have $L(E,1)=0$. It turns out that we roughly expect\nthat a curve with even parity has $L(E,1)=0$ with probability proportional\nto the square root of its real period, and, since the real period\nis very roughly~$1\/\\Delta^{1\/12}$, this leads us to the\nprediction that almost all curves with even parity should have~$L(E,1)\\neq 0$.\nBy the conjecture of Birch and Swinnerton-Dyer, this says that\nalmost all such curves have rank~0.\n\nWe then make similar heuristics when enumerating by conductor.\nThe first task here is simply to count curves with conductor up to~$X$,\nand for this we use heuristics involving how often large powers of\nprimes divide the discriminant.\nUpon making this estimate, we are then able to imitate\nthe argument we made previously,\nand thus derive an asymptotic for the number of curves\nwith even parity and $L(E,1)=0$ under the ordering by conductor.\nWe again get the heuristic that almost all curves\nwith even parity should have~$L(E,1)\\neq 0$.\n\nWe then make a few remarks regarding how often curves should\nhave nontrivial isogenies and\/or torsion under different orderings,\nand then present some data regarding average ranks. 
We conclude by\ngiving data for the Mordell-Weil lattice distribution for rank 2 curves,\nand speculating about symmetric power $L$-functions.\n\n\\section{The Brumer-McGuinness Heuristic}\n\nFirst we re-derive the Brumer-McGuinness\nheuristic~\\cite{brumer-mcguinness} for the number of elliptic\ncurves whose absolute discriminant is less than a given bound~$X$;\nthe technique here is essentially lattice-point counting, and we\nderive our estimates via the assumption that these counts are\nwell-approximated by the areas of the given regions.\n\n\\begin{conjecture}\\label{conj:bmcg}[Brumer-McGuinness]\nThe number $A_\\pm(X)$ of rational elliptic curves with a global\nminimal model (including at $\\infty$) and positive or negative\ndiscriminant whose absolute value is less than~$X$\nis asymptotically $A_\\pm(X)\\sim {\\alpha_{\\pm}\\over\\zeta(10)}X^{5\/6}$, where\n$\\alpha_{\\pm}={\\sqrt 3\\over 10}\\int_{\\pm1}^\\infty {dx\\over\\sqrt{x^3\\mp1}}$.\n\\end{conjecture}\n\nAs indicated by Brumer and McGuinness, the identity\n$\\alpha_{-}=\\sqrt{3}\\alpha_{+}$ was already known to Legendre,\nand is related to complex multiplication.\nThese constants can be expressed in terms of Beta integrals\n$B(u,v)=\\int_0^1 x^{u-1}(1-x)^{v-1}\\, dx=\n{\\Gamma(u)\\Gamma(v)\\over \\Gamma(u+v)}$ via\n$\\int_1^\\infty {dx\\over\\sqrt{x^3-1}}={1\\over 3}{B}(1\/2,1\/6)$ and\n$\\int_{-1}^\\infty {dx\\over\\sqrt{x^3+1}}={B}(1\/2,1\/3)$, so that\n$\\alpha_{+}={\\sqrt 3\\over 30}{B}(1\/2,1\/6)$ and\n$\\alpha_{-}={\\sqrt 3\\over 10}{B}(1\/2,1\/3)$.\n\nRecall that every rational elliptic curve has a unique integral\nminimal model $y^2+a_1xy+a_3y=x^3+a_2x^2+a_4x+a_6$ with\n$a_1,a_3\\in\\{0,1\\}$ and $|a_2|\\le 1$.\nFix one of the 12 choices of~$(a_1,a_2,a_3)$. Since these are all\nbounded by~1, the discriminant is thus approximately~$-64a_4^3-432a_6^2$.\nSo we wish to count the number\nof $(a_4,a_6)$-lattice-points with~$|64a_4^3+432a_6^2|\\le X$,\nnoting that Brumer and McGuinness divide the curves according to the\nsign of the discriminant. 
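\n\nThe constants themselves are easy to confirm numerically; the following quick check of the defining integrals, the Legendre identity, and the Beta expressions above is our own illustration (scipy is an assumed dependency, and the splitting of the integrals at the endpoint singularities is merely for the benefit of the quadrature routine).\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.integrate import quad\nfrom scipy.special import beta\n\nf = lambda x: 1\/np.sqrt(x**3 - 1)\ng = lambda x: 1\/np.sqrt(x**3 + 1)\n# split at the integrable endpoint singularities x = 1 and x = -1\nI_plus = quad(f, 1, 2)[0] + quad(f, 2, np.inf)[0]\nI_minus = quad(g, -1, 0)[0] + quad(g, 0, np.inf)[0]\na_plus = np.sqrt(3)\/10*I_plus     # about 0.42065\na_minus = np.sqrt(3)\/10*I_minus   # about 0.72859\nprint(a_plus, a_minus, a_minus\/a_plus)       # ratio: sqrt(3)\nprint(np.sqrt(3)\/30*beta(1\/2, 1\/6),          # matches a_plus\n      np.sqrt(3)\/10*beta(1\/2, 1\/3))          # matches a_minus\n\\end{verbatim}\n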
The lattice-point count\nfor $a_1=a_2=a_3=0$ is given by\n$$\\mathop{\\sum\\sum}_{0<-64a_4^3-432a_6^20$.\n\\end{heuristic}\n\nIn particular, note that we get the prediction\nthat almost all curves with\neven parity have $L(E,1)\\neq 0$ under this ordering.\n\n\\subsection{Random matrix theory}\nOriginally developed in mathematical statistics by Wishart \\cite{wishart}\nin the 1920s and then in mathematical physics\n(especially the spectra of highly excited nuclei)\nby Wigner \\cite{wigner}, Dyson, Mehta, and others\n(particularly~\\cite{marcenko-pastur}), random matrix theory \\cite{mehta}\nhas now found some applications in number theory,\nthe earliest being the oft-told story of\nDyson's remark to Montgomery regarding the pair-correlation of zeros of\nthe Riemann $\\zeta$-function.\nBased on substantial numerical evidence, random matrix theory appears\nto give reasonable models for the distribution of $L$-values in families,\nthough the issue of what constitutes a proper family is a delicate one\n(see particularly \\cite[\\S 3]{CFKRS}, where the notion of family comes from\nthe ability to calculate moments of $L$-functions rather than\nfrom algebraic geometry).\n\nThe family of quadratic twists of a given elliptic curve\n$E:y^2=x^3+Ax+B$ is given by $E_d:y^2=x^3+Ad^2x+Bd^3$ for squarefree~$d$.\nThe work (most significantly a monodromy computation)\nof Katz and Sarnak \\cite{katz-sarnak} regarding families of\ncurves over function fields implies that when we restrict to quadratic twists\nwith even parity, we should expect that the $L$-functions are modelled\nby random matrices with even orthogonal symmetry.\nThough we have no function field analogue in our case, we brazenly\nassume (largely from looking at the sign in the functional equation)\nthat the symmetry type is again orthogonal with even parity.\nWhat this means is that we want to model properties of the $L$-function \nvia random matrices taken from ${\\rm SO}(2M)$ with respect to Haar measure.\nHere we wish the mean density of zeros\nof the \\hbox{$L$-functions} to match the mean density of eigenvalues\nof our matrices, and so, as in~\\cite{keating-snaith},\nwe should take $2M\\approx 2\\log N$. 
We suspect that the $L$-value\ndistribution is approximately given by the distribution of the evaluations\nat~$1$ of the characteristic polynomials of our random matrices.\nIn the large, this distribution is determined entirely by\nthe symmetry type, while finer considerations are distinguished\nvia arithmetic considerations.\n\nWith this assumption, via the moment conjectures of \\cite{keating-snaith}\nand then using Mellin inversion, as $t\\rightarrow 0$ we have\n(see (21) of \\cite{random-matrix-theory}) that\n\\begin{equation}\\label{RMTprob}\n{\\rm Prob}[L(E,1)\\le t]\\sim \\alpha(E) t^{1\/2}M^{3\/8}.\n\\end{equation}\nThis heuristic is stated for fixed $M\\approx\\log N$,\nbut we shall also allow $M\\rightarrow\\infty$.\nIt is not easy to understand this probability, as both the constant\n$\\alpha(E)$ and the matrix-size $M$ depend on~$E$.\nWe can take curves with $e^M\\le N\\le e^{M+1}$ to mollify the\nimpact of the conductor, but in order to average over a set of\ncurves, we need to understand how $\\alpha(E)$ varies.\nOne idea is that $\\alpha(E)$ separates into two parts, one of which\ndepends on local structure (Frobenius traces) of the curve,\nand the other of which depends only upon the size of the conductor~$N$.\nLetting $G$ be the Barnes $G$-function (such that $G(z+1)=\\Gamma(z)G(z)$\nwith $G(1)=1$) and $M=\\lfloor\\log N\\rfloor$ we have that\n$$\\alpha(E)=\\alpha_R(M)\\cdot \\alpha_A(E)$$\n$\\text{with}\\>\\>\\alpha_R(M)\n\\rightarrow\\hat\\alpha_R=2^{1\/8}G(1\/2)\\pi^{-1\/4}\\>\\>\n\\text{as}\\>\\> M\\rightarrow\\infty$\nand\n\\begin{equation}\\label{eqn:Fp}\n\\alpha_A(E)=\\prod_p F(p)=\n\\prod_p \\biggl(1-{1\\over p}\\biggr)^{3\/8} \\biggl({p\\over p+1}\\biggr)\n\\biggl({1\\over p}+{L_p(1\/p)^{-1\/2}\\over 2}+{L_p(-1\/p)^{-1\/2}\\over 2}\\biggr)\n\\end{equation}\nwhere $L_p(X)=(1-a_pX+pX^2)^{-1}$ when $p\\nmid\\Delta$\nand $L_p(X)=(1-a_pX)^{-1}$ otherwise;\nsee (10) of \\cite{random-matrix-theory} evaluated at~$k=-1\/2$,\nthough that equation is wrong at primes that\ndivide the discriminant --- see (20) of~\\cite{CPRW},\nwhere $Q$ should be taken to be~1.\nNote that the Sato-Tate conjecture \\cite{tate} implies that\n$a_p^2$ is $p$ on average, and this implies that the above\nEuler product converges.\n\n\\subsection{Discretisation of the $L$-value distribution}\nFor precise definitions of the Tamagawa numbers, torsion group,\nperiods, and Shafarevich-Tate group, see~\\cite{silverman},\nthough below we give a brief description of some of these.\nWe let $\\tau_p(E)$ be the Tamagawa number of~$E$ at the\n(possibly infinite) prime~$p$, and write $\\tau(E)=\\prod_p \\tau_p(E)$\nfor the Tamagawa product and $T(E)$ for the size of the torsion group.\nWe also write $\\Omega_{\\rm re}(E)$ for the real\nperiod and ${\\mbox{\\textcyr{Sh}}}_{\\rm an}(E)$ for the size of the Shafarevich-Tate group\nwhen $L(E,1)\\neq 0$, with ${\\mbox{\\textcyr{Sh}}}_{\\rm an}(E)=0$ when $L(E,1)=0$.\n\nWe wish to assert that sufficiently small values of~$L(E,1)$\nactually correspond to~$L(E,1)=0$.\nWe do this via the conjectural formula of Birch and\nSwinnerton-Dyer~\\cite{BSD}, which asserts that\n$$L(E,1)=\n\\Omega_{\\rm re}(E)\\cdot {\\tau(E)\\over T(E)^2}\\cdot {\\mbox{\\textcyr{Sh}}}_{\\rm an}(E).$$\nOur discretisation\\footnote\n{The precision of this discretisation might be the most-debatable\n methodology we use. 
Indeed, we are essentially taking a ``sharp cutoff'',\n while it might be better to have a smoother transition function.\n For this reason, we do not specify the leading\n constant in our final heuristic.}\nwill be that\n$$L(E,1)<\\Omega_{\\rm re}(E)\\cdot {\\tau(E)\\over T(E)^2}\n\\quad\\text{implies}\\quad L(E,1)=0.$$\nNote that we are only using that ${\\mbox{\\textcyr{Sh}}}_{\\rm an}$ takes on integral\nvalues, and do not use the (conjectural) fact that it is square.\n\nUsing \\eqref{RMTprob}, we estimate the number of curves\nwith positive (for simplicity) discriminant less than~$X$\nand even parity and $L(E,1)=0$ via the lattice-point sum\n$$W(X)=\\mathop{\\sum\\sum}_{\\text{$c_4,c_6$ minimal}\\atop 0<\\Delta\\le X}\n{\\rm Prob}\\bigl[L(E,1)=0\\bigr].$$\nThere are 288 congruence classes $(c_4\\>\\text{mod}\\>576,c_6\\>\\text{mod}\\>1728)$\nthat can give minimal models, and so we get a factor of~$288\/(576\\cdot 1728)$,\nassuming that each congruence class has the same impact\non all the entities in the sum. Indeed, this independence\n(on average) of various quantities with respect\nto $c_4$ and $c_6$ is critical in our estimation of~$W(X)$.\nThere is also the question of non-minimal models,\\footnote\n{At $p=2,3$, non-minimality occurs when $c_4\/p^4$ and $c_6\/p^6$\n satisfy the congruences.}\nfrom which we get a factor of~$1\/\\zeta(10)$.\n\n\\begin{guess}\\label{GUESS}\nThe lattice-point sum $W(X)$ can be approximated as $X\\rightarrow\\infty$ by\n$$\\hat W(X)={288\\over (576\\cdot 1728)}{1\\over\\zeta(10)}\\cdot\n\\hat\\alpha_R\\bar\\alpha_A\\beta_\\tau\\cdot\n\\mathop{\\int\\int}_{1\\le{u_4^3-u_6^2\\over 1728}\\le X}\n\\sqrt{\\Omega_{\\rm re}(u_4,u_6)}\\,\n\\Bigl(\\log{u_4^3-u_6^2\\over 1728}\\Bigr)^{3\/8}\\,du_4\\,du_6,$$\nwhere $\\bar\\alpha_A$ and $\\beta_\\tau$ denote the average values of\n$\\alpha_A(E)$ and of the square root of the Tamagawa product\n(computed below).\n\\end{guess}\n\nTo analyse $\\hat W(X)$ we pass to the real period.\nGiven a curve $y^2=x^3+u_4x+u_6$ of positive discriminant,\nwrite $e_1>e_2>e_3$ for the roots of the cubic polynomial on the right side.\nWe have\n$$1\/\\Omega_{\\rm re}=\n{\\rm agm}\\bigl(\\sqrt{e_1-e_2},\\sqrt{e_1-e_3}\\bigr)\/\\pi.$$\nWe also have that $(e_1-e_2)(e_1-e_3)(e_2-e_3)=\\sqrt{\\Delta\/16}$\nfrom the formula for the discriminant.\nWe next write $e_1-e_2=\\Delta^{1\/6}\\lambda$ and $e_2-e_3=\\Delta^{1\/6}\\mu$\nso that we have $\\mu\\lambda(\\lambda+\\mu)=1\/4$,\nwhile $e_1={\\Delta^{1\/6}\\over 3}(\\mu+2\\lambda)$,\n$e_2={\\Delta^{1\/6}\\over 3}(\\mu-\\lambda)$,\nand $e_3=-{\\Delta^{1\/6}\\over 3}(2\\mu+\\lambda)$.\nThus we get\n$$-c_6\/864=-e_1e_2e_3=\n{\\Delta^{1\/2}\\over 27}(\\mu+2\\lambda)(\\mu-\\lambda)(2\\mu+\\lambda)$$\nand\n$$-c_4\/48=e_1e_2+e_1e_3+e_2e_3=\n-{\\Delta^{1\/3}\\over 3}(\\mu^2+\\lambda\\mu+\\lambda^2).$$\nChanging variables in the $\\hat W$-integral\ngives a Jacobian of $432\/\\Delta^{1\/6}\\sqrt{\\mu^4+\\mu}$\nso that\n$$\\hat W(X)=\\tilde c\\int_1^X\\int_0^\\infty {(\\log \\Delta)^{3\/8}\\over\n\\sqrt{\\Delta^{1\/12}\\,{\\rm agm}(\\sqrt\\lambda,\\sqrt{\\lambda+\\mu})}}\n{d\\mu\\,d\\Delta\\over\\Delta^{1\/6}\\sqrt{\\mu^4+\\mu}},$$\nwhere $\\lambda=(\\sqrt{\\mu^4+\\mu}-\\mu^2)\/2\\mu$.\nThus the variables are nicely separated, and since the $\\mu$-integral\nconverges, we do indeed get $\\hat W(X)\\sim cX^{19\/24}(\\log X)^{3\/8}$.\nA similar argument can be given for curves with negative discriminant.\nThis concludes our derivation of Heuristic~\\ref{conj:rank2disc},\nand now we turn to giving some reasons for our expectation that\nthe arithmetic factors can be mollified by taking their averages.\n\n\\subsection{Expectations for arithmetic factors on average}\nIn the next section we shall explain (among other things)\nwhy we expect that $\\log N\\approx \\log\\Delta$ for almost all curves,\nand in section~\\ref{section:torsion},\nwe shall recall the classical parametrisations of $X_1(N)$\ndue to Fricke to indicate why we expect the torsion size\nis 1 on average. 
Here we show how to compute the various averages\n(with respect to ordering by discriminant)\nof the square root of the Tamagawa product\nand the arithmetic factors~$\\alpha_A(E)$.\n\nFor both heuristics, we shall make the assumption that curves satisfying\nthe discriminant bound $|\\Delta|\\le X$ behave essentially the same as\nthose that satisfy $|c_4|\\le X^{1\/3}$ and $|c_6|\\le X^{1\/2}$.\nThat is, we approximate our region by a big box.\nWe write $D$ for the absolute value of~$\\Delta$.\nFirst we consider the Tamagawa product.\n\nWe wish to know how often a prime divides the discriminant to a high power.\nFix a prime~$p\\ge 5$ with $p$ a lot smaller than~$X^{1\/3}$.\nWe can estimate the probability that~$p^k|\\Delta$\nby considering all $p^{2k}$ choices of $c_4$ and $c_6$ modulo~$p^k$,\nthat is, by counting the number of solutions $S(p^k)$\nto $c_4^3-c_6^2=1728\\Delta\\equiv 0$ (mod~$p^k$).\nThis auxiliary curve $c_4^3=c_6^2$ is singular at $(0,0)$ over~${\\bf F}_p$,\nand has $(p-1)$ non-singular ${\\bf F}_p$-solutions\nwhich lift to $p^{k-1}(p-1)$ points modulo~$p^k$.\n\nFor $p^k$ sufficiently small, our $(c_4,c_6)$-region is so large that we can \nshow that the probability that $p^k|\\Delta$ is~$S(p^k)\/p^{2k}$.\nWe assume that big primes act (on average) in the same manner,\nwhile a similar heuristic can be given for~$p=2,3$.\nCurves with $p^4|c_4$ and $p^6|c_6$ will not be given by their\nminimal model; indeed, we want to exclude these curves, and\nthus will multiply our probabilities by $\\kappa_p=(1-1\/p^{10})^{-1}$\nto make them conditional on this criterion. For instance, the above\ncounting of points says that there is a probability of $(p^2-p)\/p^2$\nthat~$p\\nmid D$, and so upon conditioning upon minimal models we get\n$\\kappa_p(1-1\/p)$ for this probability.\n\nWhat is the probability $P_m(p,k)$ that a curve given by a minimal model\nhas multiplicative reduction at $p\\ge 5$ and $p^k\\|D$ for some~$k>0$?\nIn terms of Kodaira symbols, this is the case of I$_k$.\nFor multiplicative reduction we need that $p\\nmid c_4,c_6$.\nThese events are independent and each has a probability $(1-1\/p)$\nof occurring. 
Upon assuming these conditions and working modulo~$p^k$,\nthere are $(p^k-p^{k-1})$ such choices for each,\nand of the resulting $(c_4,c_6)$\npairs we noted above that $p^{k-1}(p-1)$ of them have~$p^k|D$.\nSo, given a curve with~$p\\nmid c_4,c_6$,\nwe have a probability of $1\/p^{k-1}(p-1)$ that~$p^k|D$,\nwhich gives $1\/p^k$ for the probability that~$p^k\\|D$.\nIn symbols, we have that (for~$p\\ge 5$ and $k\\ge 1$)\n$${\\rm Prob}\\Bigl[p^k\\|(c_4^3-c_6^2) \\Bigm| p\\nmid c_4,c_6\\Bigr]=1\/p^k.$$\nIncluding the conditional probability for minimal models, we get\n\\begin{equation}\\label{eqn:probm}\nP_m(p,k)=(1-1\/p^{10})^{-1}(1-1\/p)^2\/p^k\\quad\\text{(for $p\\ge 5$ and $k\\ge 1$).}\n\\end{equation}\nNote that summing this over $k\\ge 1$ gives $\\kappa_p(1-1\/p)\/p$ for the\nprobability for an elliptic curve to have multiplicative reduction at~$p$.\n\nWhat is the probability $P_a(p,k)$ that a curve given by a minimal\nmodel has additive reduction at $p\\ge 5$ and $p^k\\|D$ for some~$k>0$?\nWe shall temporarily ignore the factor\nof $\\kappa_p=(1-1\/p^{10})^{-1}$ from non-minimal models \nand include it at the end.\nWe must have that $p|c_4,c_6$, and thus get that~$k\\ge 2$.\nFor $k=2,3,4$, which correspond to Kodaira symbols II, III, and IV\nrespectively, the computation is not too bad:\nwe get that $p^2\\|D$ exactly when $p|c_4$ and~$p\\|c_6$,\nso that the probability is $(1\/p)\\cdot (1-1\/p)\/p=(1-1\/p)\/p^2$;\nfor $p^3\\|D$ we need $p\\|c_4$ and $p^2|c_6$\nand thus get $(1-1\/p)\/p\\cdot(1\/p^2)=(1-1\/p)\/p^3$ for the probability;\nand for $p^4\\|D$ we need $p^2|c_4$ and $p^2\\|c_6$ and so get\n$(1\/p^2)\\cdot(1-1\/p)\/p^2=(1-1\/p)\/p^4$ for the probability.\nNote that the case $k=5$ cannot occur.\nThus we have (for~$p\\ge 5$) the formula\n$P_a(p,k)=(1-1\/p^{10})^{-1}(1-1\/p)\/p^k$ for $k=2,3,4$.\n\nMore complications occur for $k\\ge 6$, where now we split into two\ncases depending upon whether additive reduction persists upon taking\nthe quadratic twist by~$p$. This occurs when $p^3|c_4$ and $p^4|c_6$,\nand we denote by $P_a^n(p,k)$ the probability that $p^k\\|D$ in this subcase.\nJust as above, we get that\n$P_a^n(p,k)=(1-1\/p^{10})^{-1}(1-1\/p)\/p^{k-1}$ for $k=8,9,10$.\nThese are respectively the cases of Kodaira symbols IV$^\\star$,\nIII$^\\star$, and~II$^\\star$.\nFor $k=11$ we have~$P_a^n(p,k)=0$, while for $k\\ge 12$ our\ncondition of minimality implies that we should take~$P_a^n(p,k)=0$.\n\nWe denote by $P_a^t(p,k)$ the probability that $p^6|D$ with either\n$p^2\\|c_4$ or $p^3\\|c_6$. First we consider curves for which $p^7|D$,\nand these have multiplicative reduction at~$p$ upon twisting.\nIn particular, these curves have $p^2\\|c_4$ and $p^3\\|c_6$,\nand the probability of this is $(1-1\/p)\/p^2\\cdot (1-1\/p)\/p^3$.\nConsider $k\\ge 7$. 
We then take $c_4\/p^2$ and $c_6\/p^3$ both modulo $p^{k-6}$,\nand get that $p^{k-6}\\|(D\/p^6)$ with probability $1\/p^{k-6}$\nin analogy with the above.\nSo we get that $P_a^t(p,k)=(1-1\/p^{10})^{-1}(1-1\/p)^2\/p^{k-1}$ for~$k\\ge 7$.\nThis corresponds to the case of I$_{k-6}^\\star$.\n\nFinally, for $p^6\\|D$ (which is the case I$_0^\\star$)\nwe get a probability of $(1\/p^2)\\cdot(1\/p^3)$ for the chance\nthat $p^2|c_4$ and $p^3|c_6$, and (since there are $p$ points mod~$p$\non the auxiliary curve $(c_4\/p^2)^3\\equiv(c_6\/p^3)^2 \\pmod{p}$)\na conditional probability of $(p^2-p)\/p^2$ that~$p^6\\|D$.\nSo we get that $P_a^t(p,6)=(1-1\/p^{10})^{-1}(1-1\/p)\/p^5$.\n\nWe now impose our current notation on the previous paragraphs,\nand naturally let $P_a^t(p,k)=0$ and $P_a^n(p,k)=P_a(p,k)$ for~$k\\le 5$.\nOur final result is that\n\\begin{equation}\\label{eqn:proba}\nP_a^n(p,k)=\n\\begin{cases}\n(1-1\/p^{10})^{-1}(1-1\/p)\/p^k & \\quad k=2,3,4\\\\\n(1-1\/p^{10})^{-1}(1-1\/p)\/p^{k-1} & \\quad k=8,9,10\n\\end{cases}\n\\end{equation}\n\\begin{equation}\\label{eqn:probd}\nP_a^t(p,k)=\n\\begin{cases}\n(1-1\/p^{10})^{-1}(1-1\/p)\/p^5 & \\quad k=6 \\\\\n(1-1\/p^{10})^{-1}(1-1\/p)^2\/p^{k-1} & \\quad k\\ge 7\n\\end{cases}\n\\end{equation}\nwith $P_a^n(p,k)$ and $P_a^t(p,k)$ equal to zero for other~$k$.\nWe conclude by defining $P_0(p,k)$ to be zero for $k>0$\nand to be the probability $(1-1\/p^{10})^{-1}(1-1\/p)$ that $p\\nmid D$ for~$k=0$.\nWe can easily check that we really do have the required probability relation\n$\\sum_{k=0}^\\infty \\bigl[P_m(p,k)+P_a^n(p,k)+P_a^t(p,k)+P_0(p,k)\\bigr]=1$,\nas: the cases of multiplicative reduction give $\\kappa_p(1-1\/p)\/p$;\nthe cases of Kodaira symbols II, III, and~IV give $\\kappa_p(1\/p^2-1\/p^5)$;\nthe cases of Kodaira symbols IV$^\\star$, III$^\\star$, and~II$^\\star$\ngive $\\kappa_p(1\/p^7-1\/p^{10})$; the cases of I$^\\star_k$ summed\nfor~$k\\ge 1$ give $\\kappa_p(1-1\/p)\/p^6$; the case of I$^\\star_0$\ngives $\\kappa_p(1-1\/p)\/p^5$; and the sum of these with\n$P_0(p,0)=\\kappa_p(1-1\/p)$ does indeed give us~1.\nWe could do a similar (more tedious) analysis for $p=2,3$,\nbut this would obscure our argument.\n\nGiven a curve of discriminant~$D$,\nwe can now compute the expectation for its Tamagawa number.\nWe consider primes~$p|D$ with $p\\ge 5$,\nand compute the local Tamagawa number~$t(p)$.\nWhen $E$ has multiplicative reduction at~$p$ and~$p^k\\|D$,\nthen $t(p)=k$ if $-c_6$ is square mod~$p$,\nand else $t(p)=1,2$ depending upon whether $k$ is odd or even.\nSo the average of $\\sqrt{t(p)}$ for this case is\n$\\epsilon_m(k)={1\\over 2}(1+\\sqrt k),{1\\over 2}(\\sqrt 2+\\sqrt k)$\nfor $k$ odd\/even respectively.\n\nWhen $E$ has potentially multiplicative reduction at~$p$ with~$p^k\\|D$,\nfor $k$ odd we have $t(p)=4,2$ depending on whether\n$(c_6\/p^3)\\cdot(\\Delta\/p^k)$ is square mod~$p$,\nand for $k$ even we have $t(p)=4,2$ depending on\nwhether $\\Delta\/p^k$ is square mod~$p$.\nIn both cases the average of $\\sqrt{t(p)}$ is~${1\\over 2}(\\sqrt 2+\\sqrt 4)$.\nIn the case of I$_0^\\star$ reduction where we have~$p^6\\|D$,\nwe have that $t(p)=1,2,4$ corresponding to whether the cubic\n$x^3-(27c_4\/p^2)x-(54c_6\/p^3)$ has $0,1,3$ roots modulo~$p$.\nSo the average of $\\sqrt{t(p)}$ is\n$${\\sqrt 1\\bigl((p-1)(p+1)\/3\\bigr)+\\sqrt 2\\bigl(p(p-1)\/2\\bigr)+\n\\sqrt 4\\bigl((p-1)(p-2)\/6\\bigr)\\over\n\\bigl((p-1)(p+1)\/3\\bigr)+\\bigl(p(p-1)\/2\\bigr)+\n\\bigl((p-1)(p-2)\/6\\bigr)}={2\\over 3}+{\\sqrt 2\\over 2}-{1\\over 3p}.$$\nin this 
case.\n\nFor the remaining cases,\nwhen $p^2\\|D$ or $p^{10}\\|D$ we have~$t(p)=1$,\nwhile when $p^3\\|D$ or $p^9\\|D$ we have~$t(p)=2$.\nFinally, when $p^4\\|D$ we have $t(p)=3,1$ depending\non whether $-6c_6\/p^2$ is square mod~$p$,\nand similarly when $p^8\\|D$ we have $t(p)=3,1$ depending\non whether $-6c_6\/p^4$ is square mod~$p$, so that the average\nof $\\sqrt{t(p)}$ in both cases is~${1\\over 2}(1+\\sqrt 3)$.\nWe get that $\\epsilon_a^n(k)=1,\\sqrt 2,{1\\over 2}(1+\\sqrt 3),\n{1\\over 2}(1+\\sqrt 3),\\sqrt 2,1$ for $k=2,3,4,8,9,10$, while\n\\begin{equation}\\label{eqn:epsilon}\n\\epsilon_m(k)=\n\\begin{cases}{1\\over 2}(1+\\sqrt k),&\\text{$k$ odd}\\\\\n{1\\over 2}(\\sqrt 2+\\sqrt k),&\\text{$k$ even}\\end{cases}\n\\quad\\text{and}\\quad\n\\epsilon_a^t(p,k)=\\begin{cases}\n{2\\over 3}+{\\sqrt 2\\over 2}-{1\\over 3p},&k=6\\\\\n{1\\over 2}(\\sqrt 2+\\sqrt 4),&k\\ge 7\\end{cases}\n\\end{equation}\nwith $\\epsilon_a^n(k)$ and $\\epsilon_a^t(p,k)$ equal to zero for other~$k$.\n\nWe define the expected square root of the Tamagawa number $K(p)$ at~$p$ by\n\\begin{equation}\\label{eqn:tama}\nK(p)=\\sum_{k=0}^\\infty\n\\bigl[\\epsilon_m(k) P_m(p,k)+\\epsilon_a^n(k) P_a^n(p,k)+\n\\epsilon_a^t(p,k) P_a^t(p,k)+P_0(p,k)\\bigr]\n\\end{equation}\nand the expected global\\footnote\n{Note that the Tamagawa number at infinity is 1 when $E$\n has negative discriminant and else is 2, the former\n occurring approximately $\\sqrt{3}\/(1+\\sqrt{3})\\approx 63.4\\%$ of the time.}\nTamagawa number to be~$\\beta_\\tau=\\prod_p K(p)$.\nThe convergence of this product follows from an analysis of the\ndominant $k=0,1,2$ terms of~\\eqref{eqn:tama}, which gives \na behaviour of~$1+O(1\/p^2)$.\nSo we get that the Tamagawa product is a constant on average,\nwhich we do not bother to compute explicitly (we would need to\nconsider $p=2,3$ more carefully to get a precise value).\n\nTo compute the average value of $\\alpha_A(E)=\\prod_p F(p)$\nin~\\eqref{eqn:Fp} we similarly assume\\footnote\n{This argumentative technique can also be used to bolster our assumption that\n using Connell's conditions should be independent of other considerations.}\nthat each prime acts independently; we then compute the\naverage value for each prime by calculating the distribution of $F(p)$\nwhen considering all the curves modulo~$p$\n(including those with singular reduction,\nand again making the slight adjustment for non-minimal models).\nThis gives some constant for the average $\\bar\\alpha_A$ of~$\\alpha_A(E)$,\nwhich we do not compute explicitly.\nNote that $\\prod_p F(p)$ converges if we assume the Sato-Tate\nconjecture \\cite{tate} since in this case we have that\n$a_p^2$ is $p$ on average.\n\n\\section{Relation between conductor and discriminant}\nWe now give heuristics for how often we expect the ratio\nbetween the absolute discriminant and the conductor to be large.\nThe main heuristic we derive is:\n\n\\begin{heuristic}\\label{conj:conductor}\nThe number $B(X)$ of rational elliptic curves whose conductor\nis less than~$X$ satisfies $B(X)\\sim cX^{5\/6}$ for an explicit\nconstant~$c>0$.\n\\end{heuristic}\n\nTo derive this heuristic, we estimate the proportion of curves with a given\nratio of (absolute) discriminant to conductor. 
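\n\nBefore doing so, we record a quick numerical check of the local computations of the preceding section; the sketch below is our own transcription of the displayed formulas into plain Python (with the $k$-sums truncated), verifying that the probabilities at each prime $p\\ge 5$ sum to~1 and evaluating the local factor $K(p)$ of~\\eqref{eqn:tama}.\n\\begin{verbatim}\nfrom math import sqrt\n\ndef local_data(p, kmax=80):\n    # (probability, averaged sqrt-Tamagawa weight) pairs for p >= 5,\n    # transcribed from the displayed formulas (minimality-corrected)\n    kap = 1\/(1 - p**-10)\n    terms = [(kap*(1 - 1\/p), 1.0)]               # P_0: p does not divide D\n    for k in range(1, kmax):                     # multiplicative, I_k\n        e = (1 + sqrt(k))\/2 if k % 2 else (sqrt(2) + sqrt(k))\/2\n        terms.append((kap*(1 - 1\/p)**2\/p**k, e))\n    for k, e in [(2, 1), (3, sqrt(2)), (4, (1 + sqrt(3))\/2),\n                 (8, (1 + sqrt(3))\/2), (9, sqrt(2)), (10, 1)]:\n        kk = k if k <= 4 else k - 1              # additive, II..II*\n        terms.append((kap*(1 - 1\/p)\/p**kk, e))\n    terms.append((kap*(1 - 1\/p)\/p**5,\n                  2\/3 + sqrt(2)\/2 - 1\/(3*p)))    # I_0*\n    for k in range(7, kmax):                     # I_{k-6}*\n        terms.append((kap*(1 - 1\/p)**2\/p**(k - 1), (sqrt(2) + 2)\/2))\n    return terms\n\nfor p in (5, 7, 11):\n    T = local_data(p)\n    print(p, sum(pr for pr, e in T), sum(pr*e for pr, e in T))\n    # middle entry should be 1 (up to truncation); the last is K(p)\n\\end{verbatim}\n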
Since the conductor\nis often the squarefree kernel of the discriminant, by way of\nexplanation we first consider the behaviour of $f(n)=n\/{\\rm sqfree}(n)$.\nThe probability that $f(n)=1$ is given by the probability that $n$\nis squarefree, which is classically known to be $1\/\\zeta(2)=6\/\\pi^2$.\nGiven a prime power~$p^m$, to have $f(n)=p^m$ says that $n=p^{m+1}u$\nwhere $u$ is squarefree and coprime to~$p$. The probability that\n$p^{m+1}\\|n$ is $(1-1\/p)\/p^{m+1}$, and given this, the conditional\nprobability that $\\bigl(n\/p^{m+1}\\bigr)$\nis squarefree is $(6\/\\pi^2)\\cdot(1-1\/p^2)^{-1}$.\nExtending this multiplicatively beyond prime powers, we get that\n$${\\rm Prob}\\bigl[n\/{\\rm sqfree}(n)=q\\bigr]=\n{6\\over\\pi^2}\\prod_{p^m\\|q} {1\/p^{(m+1)}\\over (1+1\/p)}=\n{6\\over\\pi^2}{1\\over q}\\prod_{p|q} {1\\over p+1}.$$\nIn particular, the average of $f(n)^\\gamma$ exists for~$\\gamma<1$;\nin our elliptic curve analogue, we will require such\nan average for~$\\gamma=5\/6$. We note that it appears to be\nan interesting open question to prove an asymptotic\nfor~$\\sum\\limits_{n\\le X} n\/{\\rm sqfree}(n)$.\n\n\\subsection{Derivation of the heuristic}\nWe keep the notation $D=|\\Delta|$ and wish to compute\nthe probability that $D\/N=q$ for a fixed positive integer~$q$.\nFor a prime power~$p^v$ with $p\\ge 5$, the probability that $p^v\\|(D\/N)$\nis given by: the probability that $E$ has multiplicative reduction\nat~$p$ and~$p^{v+1}\\|D$, that is~$P_m(p,v+1)$; plus the probability\nthat $E$ has additive reduction at~$p$ and~$p^{v+2}\\|D$, that is~$P_a(p,v+2)$;\nand the contribution from~$P_0(p,v)$, which is zero for $v>0$\nand for $v=0$ is the probability that $p$ does not divide~$D$.\nSo, writing $v=v_p(q)$, we get that\n(with a similar modified formula for~$p=2,3$)\n\\begin{equation}\\label{eqn:prob}\n{\\rm Prob}\\bigl[D\/N=q\\bigr]=\n\\prod_p E_p(v_p(q))=\\prod_p\\bigl[P_m(p,1+v)+P_a(p,2+v)+P_0(p,v)\\bigr].\n\\end{equation}\nIt should be emphasised that this probability is with respect to\n(as in the previous section) the ordering of the curves by discriminant.\nWe have\n\\begin{equation}\\label{eqn:sum}\n\\sum_{E: N_E\\le X} 1\n\\approx\\sum_{q=1}^\\infty\\sum_{E: D\\le qX} {\\rm Prob}\\bigl[D\/N=q\\bigr]\n\\sim\\sum_{q=1}^\\infty\\alpha (qX)^{5\/6}\\cdot{\\rm Prob}\\bigl[D\/N=q\\bigr],\n\\end{equation}\nwhere $\\alpha=\\alpha_++\\alpha_-$\nfrom the Brumer-McGuinness heuristic~\\ref{conj:bmcg}.\nIf this last sum converges, then we get Heuristic~\\ref{conj:conductor}.\n\nTo show the last sum in~\\eqref{eqn:sum} does indeed converge,\nwe upper-bound the probability in~\\eqref{eqn:prob}.\nWe have that $P_m(p,v+1)\\le 1\/p^{v+1}$ and~$P_a(p,v+2)\\le 2\/p^{v+1}$,\nwhich implies\n$$\\hat f(q)={\\rm Prob}\\bigl[D\/N=q\\bigr]=\n\\prod_p E_p(v_p(q))\\le {1\\over q}\\prod_{p|q} {3\\over p}.$$\nWe then estimate\n$$\\sum_{q=1}^\\infty q^{5\/6}\\hat f(q)\\le\n\\sum_{q=1}^\\infty {1\\over q^{1\/6}}\\prod_{p|q} {3\\over p}=\n\\prod_p\\biggl(1+\\sum_{l=1}^\\infty {3\/p\\over (p^l)^{1\/6}}\\biggr)\n\\le\\prod_p\\biggl(1+{3\/p\\over p^{1\/6}-1}\\biggr),$$\nand the last product is convergent upon comparison to~$\\zeta(7\/6)^3$.\nThus we shown that the last sum in~\\eqref{eqn:sum} converges,\nso that Heuristic~\\ref{conj:conductor} follows.\n\nWe can note that Fouvry, Nair, and Tenenbaum \\cite{FNT}\nhave shown that the number of minimal models with $D\\le X$\nis at least $cX^{5\/6}$, and that the number of curves with\n$D\\le X$ with Szpiro ratio ${\\log D\\over\\log N}\\ge \\kappa$\nis no 
more than $c_\\epsilon X^{1\/\\kappa+\\epsilon}$ for every~$\\epsilon>0$.\n\n\\subsection{Dependence of $D\/N$ and the Tamagawa product}\nWe expect that $D\/N$ should be independent of the real period,\nbut the Tamagawa product and $D\/N$ should be somewhat related.\\footnote\n{The size of the torsion subgroup should also be related to~$D\/N$,\n but in the next section we argue that curves with nontrivial\n torsion are sufficiently sparse so as to be ignored.}\nWe compute the expected square root of the Tamagawa product when~$D\/N=q$.\nAs with~\\eqref{eqn:prob} and using the $\\epsilon$ defined\nin~\\eqref{eqn:epsilon}, we find that this is given by\n$$\\eta(q)=\\prod_p\n{\\bigl[\\epsilon_m(v_1)P_m(p,v_1)+\\epsilon_a^n(v_2)P_a^n(p,v_2)+\n\\epsilon_a^t(p,v_2)P_a^t(p,v_2)+P_0(p,v)\\bigr]\n\\over \\bigl[P_m(p,v_1)+P_a(p,v_2)+P_0(p,v)\\bigr]},$$\nwhere $v_1=v+1$ and $v_2=v+2$ and $v=v_p(q)$.\n\n\\subsection{The comparison of $\\log\\Delta$ with $\\log N$}\nWe now want to compare $\\log\\Delta$ with~$\\log N$,\nand explicate the replacement therein in Guess~\\ref{GUESS}.\nIn order to bound the effect of curves with large~$D\/N$, we note that\n$${\\rm Prob}\\bigl[D\/N\\ge Y\\bigr]=\\sum_{q\\ge Y} \\hat f(q)\n\\le\\sum_{q\\ge Y} {1\\over q}\\prod_{p|q}{3\\over p},$$\nand use Rankin's trick, so that for any $0<\\alpha<1$ we have\n(using $p^\\alpha-1\\ge\\alpha\\log p$)\n\\begin{align*}\n{\\rm Prob}\\bigl[D\/N\\ge Y\\bigr]&\n\\le\\sum_{q=1}^\\infty\n\\biggl({q\\over Y}\\biggr)^{1-\\alpha}\\cdot {1\\over q}\\prod_{p|q}{3\\over p}\n={Y^\\alpha\\over Y}\\prod_p\\biggl(1+{3\\over p^{1+\\alpha}}+\n{3\\over p^{1+2\\alpha}}+\\cdots\\biggr)\\\\\n&={Y^\\alpha\\over Y}\\prod_p\\biggl(1+{3\/p\\over p^\\alpha-1}\\biggr)\n\\ll{Y^\\alpha\\over Y}\\exp\\biggl(\\sum_p{\\hat c\/p\\over \\alpha\\log p}\\biggr)\n\\ll {e^{c\\sqrt{\\log Y}}\\over Y}\n\\end{align*}\nfor some constants~$\\hat c,c$, by taking $\\alpha=1\/\\sqrt{\\log Y}$\n(this result is stronger than needed).\n\nHowever, a more pedantic derivation of Guess \\ref{GUESS} does not simply\nallow replacing $\\log N$ by~$\\log\\Delta$, but requires analysis\n(assuming $\\Omega_{\\rm re}(E)$ to be independent of~$q$) of\n$${\\hat\\alpha_R\\hat\\alpha_A\\over 3456\\,\\zeta(10)}\n\\cdot\\kern-10pt\\mathop{\\int\\int}_{\\sqrt X\\le {u_4^3-u_6^2\\over 1728}\\le X}\n\\kern-10pt\\Omega_{\\rm re}(E)\n\\cdot\\biggl[\\sum_{q<\\Delta} \\eta(q)(\\log \\Delta\/q)^{3\/8}\n\\cdot{\\rm Prob}\\bigl[D\/N=q\\bigr]\\biggr]\\, du_4\\,du_6.$$\nThe above estimate on the tail of the probability\nand a simple bound on $\\eta(q)$ in terms of the divisor function\nshows that we can truncate the $q$-sum at~$Y$\nwith an error of~$O(1\/Y^{8\/9})$, and choosing (say) $Y=e^{\\sqrt{\\log X}}$\ngives us that~$\\log(\\Delta\/q)\\sim\\log\\Delta$\n(note that we restricted to~$\\Delta>\\sqrt X$).\nSo the bracketed term becomes the desired\n$$\\sum_{q0$.\n\\end{heuristic}\n\nFrom Guess~\\ref{GUESS} we get that the number of even parity curves\nwith $0<\\Delta\\text{mod}\\>576,c_6\\>\\text{mod}\\>1728\\bigr)$.\nMany of these classes force the prime 2 to divide the discriminant,\nand thus do not produce any curves of prime conductor.\nFor each class $(\\tilde c_4,\\tilde c_6)$, we took the\n10000 parameter selections\n$$(c_4,c_6)=\\bigl(576(1000+i)+\\tilde c_4,1728(100000+j)+\\tilde c_6\\bigr)\n\\>\\text{for}\\> (i,j)\\in [1..10]\\times [1..1000],$$\nand then of these 2880000 curves,\ntook the 89913 models that had prime discriminant\n(note that all the discriminants are positive).\nThis gives us 
good distribution across congruence classes,\nand while the real period does not vary as much as it might,\nbelow we attempt to understand how this affects the average rank.\n\nIt then took a few months to compute the (suspected) analytic ranks for\nthese curves. We got about $0.937$ for the average rank. We then\ndid a similar experiment for curves with negative discriminant given by\n$$(c_4,c_6)=\\bigl(576(-883+i)+\\tilde c_4,1728(100000+j)+\\tilde c_6\\bigr)\n\\>\\text{for}\\>(i,j)\\in [1..10]\\times [1..1000],$$\ntook the subset of 89749 curves with prime conductor,\nand found the average rank to be about~$0.869$.\nThis discrepancy between positive and negative discriminant is\nalso present in the Brumer-McGuinness and Stein-Watkins datasets, and\nindeed was noted in \\cite{brumer-mcguinness}.\\footnote\n{``An interesting phenomenon was the systematic influence of the\n discriminant sign on all aspects of the arithmetic of the curve.''}\nWe do not average the results from positive and negative\ndiscriminant; the Brumer-McGuinness Conjecture \\ref{conj:bmcg}\nimplies that the split is not~50-50.\n\nIn any case, our results show a substantial drop in the average rank,\nwhich, at the very least, indicates that the average rank is not constant.\nThe alternative statistic of frequency of positive rank for curves with\neven parity also showed a significant drop. For prime positive discriminant\ncurves it was 44.1\\% for Brumer-McGuinness and 41.7\\% for Stein-Watkins,\nbut only 36.0\\% for our dataset --- for negative discriminant curves,\nthese numbers are 37.7\\%, 36.4\\%, and 31.3\\%.\n\n\\subsection{Variation of real period}\nOur random sampling of curves with prime conductor of size $10^{14}$\nmust account for various properties of the curves if our results\nare to possess legitimacy. Above we speculated that\nthe real period plays the most significant r\\^ole,\nand so we wish to understand how our choice has affected it.\n\nTo judge the effect that variation of the real period might have,\nwe did some comparisons with the Stein-Watkins database.\nFirst consider curves of positive prime discriminant,\nand write $E$ as $y^2=4x^3+b_2x^2+2b_4x+b_6$\nand $e_1>e_2>e_3$ for the real roots of the cubic.\nWe looked at curves with even parity and\nconsidered the frequency of positive rank as a function\nof the root quotient $t={e_1-e_2\\over e_1-e_3}$, noting that\\footnote\n{The calculation follows as in the previous sections;\n via calculus, we can compute that this function is maximised\n at $t\\approx .0388505246188$ with a maximum just below $4.414499094$.}\n$\\Omega_{\\rm re}\\Delta^{1\/12}={2^{1\/3}\\pi(t-t^2)^{1\/6}\\over{\\rm agm}(1,\\sqrt t)}$.\nThe curves we considered all had $t>0.617$;\nFigure~\\ref{fig:posdisc-rootquotient} shows how our curves\nare distributed as a function of~$t$.\n\n\\begin{figure}[H]\n\\begin{center}\n\\scalebox{0.97}{\\includegraphics{fig1.eps}}\n\\end{center}\n\\vskip-12pt\\noindent\n\\caption\n{$\\Delta>0$: Curve distribution as a function of~$t$\n\\label{fig:posdisc-rootquotient}}\n\\end{figure}\n\\vspace*{-1ex}\n\\begin{figure}[H]\n\\begin{center}\n\\scalebox{0.97}{\\includegraphics{fig2.eps}}\n\\end{center}\n\\vskip-12pt\\noindent\n\\caption\n{$\\Delta>0$:\\kern-0.5pt{} {}Positive rank frequency as a function of the\n\\newline\\mbox{}\\hspace{83pt}\nroot quotient~$t$, and $\\Omega_{\\rm re}\\Delta^{1\/12}$ as a function of~$t$.\n\\label{fig:posdisc-rankfreq}}\n\\end{figure}\n\nNext we plot the frequency of $L(E,1)=0$ as a function of the root quotient\nin Figure~\\ref{fig:posdisc-rankfreq}.\nSince there are only about 1000 curves in some of our bins, we do not\nget such a nice graph. 
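\n\nAs a sanity check on the closed form quoted above, the identity can be verified numerically from an arbitrary triple of real roots; the sketch below is our own illustration (mpmath is an assumed dependency, and the root values are arbitrary).\n\\begin{verbatim}\nfrom mpmath import mp, mpf, agm, pi, sqrt\n\nmp.dps = 30\ne1, e2, e3 = mpf('1.3'), mpf('0.2'), mpf('-1.1')  # any e1 > e2 > e3\nt = (e1 - e2)\/(e1 - e3)                           # root quotient\ndisc = 16*((e1 - e2)*(e1 - e3)*(e2 - e3))**2      # discriminant\nomega = pi\/agm(sqrt(e1 - e2), sqrt(e1 - e3))      # real period\nprint(omega*disc**(mpf(1)\/12))\nprint(2**(mpf(1)\/3)*pi*(t - t**2)**(mpf(1)\/6)\/agm(1, sqrt(t)))\n\\end{verbatim}\nBoth printed values agree to working precision.\n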
Note that the left-most and especially the\nrightmost dots are much below their nearest neighbors,\nthe graph slopes down in general, and drops more at the end.\nWe see no evidence that our results should be overly biased.\nIn particular, the frequency of $L(E,1)=0$\nis 41.7\\% amongst all even parity curves of\nprime discriminant in the Stein-Watkins database,\nand is 42.8\\% for the 12324 such curves with $0.6170$ the imaginary part of the conjugate pair of nonreal roots.\nLetting $\\tilde r=r+b_2\/12$ and $c=\\tilde r\/Z$ we then have\\footnote\n{This is maximised at $c\\approx -33.58515148525$,\n with the maximum a bit less than $8.82921518$.}\n\\vskip-2pt\\noindent\n$$\\Omega_{\\rm re}|\\Delta|^{1\/12}=\n{\\pi\\sqrt{2}\\over (1+9c^2\/4)^{1\/12}\n{\\rm agm}\\Bigl(1,\\sqrt{{1\\over 2}+{3c\\over 4\\sqrt{1+9c^2\/4}}}\\Bigr)}.$$\nWe renormalise via taking~$C=1\/2+\\arctan(c)\/\\pi$,\nand graph the distribution of curves versus~$C$ in\nFigure~\\ref{fig:negdisc-rootquotient}.\nThe symmetry\\footnote\n{The blotches around 0.22-0.23 and 0.77-0.78 appear to come from\n the fact that curves with $a_4$ small (in particular~$\\pm 1$)\n tend to have $C$ in these ranges (for our discriminant range),\n and this causes instability in the counting function.}\nof the graph might indicate that the coordinate transform is reasonable.\nAll our curves have~$0.5550$.\nIt could be argued that\nwe should order curves according to the conductor of the symmetric\npower $L$-function rather than that of the curve, but we do not think\nsuch concerns are that relevant to our imprecise discussion.\nIn particular, the above estimate predicts that there are\nfinitely many curves with extra vanishing when~$k\\ge 5$. It should be\nsaid that this heuristic will likely mislead us about curves with\ncomplex multiplication, for which the symmetric power $L$-function\nfactors (it is imprimitive in the sense of the Selberg class),\nwith each factor having a 50\\% chance of having odd parity. However,\neven ignoring CM curves, the data of \\cite{martin-watkins} find a handful\nof curves for which the 9th, 11th and even the 13th symmetric powers appear\n(to 12 digits of precision)\nto have a central zero of order~2. We find this surprising, and casts some\ndoubt about the validity of our methodology of modelling of vanishings.\n\n\\subsection{Quadratic twists of higher symmetric powers}\nThe techniques we used earlier in this paper have also\nbeen used to model vanishings in quadratic twist families,\nand we can extend the analyses to symmetric powers.\n\n\\subsubsection{Non-CM curves}\nWe fix a non-CM curve~$E$ and let $E_d$ be its $d$th quadratic twist,\ntaking $d$ to be a fundamental discriminant. 
From an analogue of the\nBirch--Swinnerton-Dyer conjecture we expect to get a small-denominator\nrational from the quotient\\footnote\n{The contribution from the conductor actually comes from non-integral\n Tamagawa numbers from the Bloch-Kato exponential map, and in the case\n of quadratic twists, the twisting parameter~$d$ should not appear in\n the final expression.} \n$L({\\rm Sym}^3 E_d,2)(2\\pi N_E)\/\\Omega_{\\rm im}(E_d)^3\\Omega_{\\rm re}(E_d)$.\nWe have that $\\Omega_{\\rm im}(E_d)^3\\Omega_{\\rm re}(E_d)\n\\approx \\Omega_{\\rm im}(E)^3\/d^{3\/2}\\cdot \\Omega_{\\rm re}(E)\/d^{1\/2}$\nand so we expect the number of fundamental discriminants $|d|<X$\nthat give extra vanishing of the symmetric cube to be rather sparse.\n\n\\subsubsection{CM curves}\nFor a curve with complex multiplication we write $\\psi_d$ for the\nGr\\\"ossencharacter attached to the $d$th quadratic twist;\nthe twist for $d>0$ is additionally primitive when~$8\\|d$.\nNote also that 27a and 36a have the same symmetric cube $L$-function.\n\n\\vskip-12pt\\noindent\n\\begin{table}[h]\n\\caption{Counts of double order zeros for primitive twists\\label{tbl:overview}}\n\\vskip-8pt\\noindent\n\\begin{center}\n\\begin{tabular}{|c|ccccccccccc|}\\hline\n&27a&32a&36a&49a&121a&256a&256b&361a&1849a&4489a&26569a\\\\\\hline\n3rd&59&32&-&67&78&32&21&45&28&31&1\\\\\n5th&3&1&5&2&1&2&2&0&0&0&0\\\\\n7th&0&0&2&0&1&0&0&0&0&0&0\\\\\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\\vskip-6pt\\noindent\n\nTable~\\ref{tbl:overview} lists counts\nof central double zeros (to 32 digits) for the $L$-functions\nof the 3rd, 5th, and 7th symmetric powers.\\footnote\n{We found no even twists with~$L(\\psi_d^9,5)=0$,\n and no triple zeros appeared in the data.}\nTables~\\ref{tbl:data} and \\ref{tbl:higher}\nlist the primitive discriminants that yield\nthe double zeros. The notable signedness can be\nexplained via the sign of the functional equation.\\footnote\n{The local signs at $p=2,3$ involve wild ramification and are more complicated\n (see~\\cite{whitehouse,kobayashi,dm} for a theoretical description),\n and thus there is no complete correlation in some cases.}\nWe are unable to explain the paucity of double zeros for twists of~26569a;\nLiu and Xu have the latest results~\\cite{liu-xu} on the vanishing of\nsuch $L$-functions, but their bounds are far from the observed data.\nSimilarly, the last-listed double zero for 4489a at 67260 seems quite small.\n\nThere appear to be implications vis-a-vis higher vanishings in some cases;\nfor instance, except for 27a, in the thirteen cases where $L(\\psi_d^5,s)$ has\na double zero at $s=3$, the $L$-function $L(\\psi_d,s)$ also has a double zero at~$s=1$.\nSimilarly, the 7th symmetric power for the 27365th twist of 121a has a\ndouble zero, as does the 3rd symmetric power, while the $L$-function\nof the twist itself has a triple zero.\nAlso, the 22909th twist of 36a has double zeros\nfor its first, third, and fifth powers (note that 36a does not appear\nin Table~\\ref{tbl:data} as the data are identical to that for~27a).\n\n\\begin{table}[H]\n\\vspace*{-18pt}\n\\caption{Primitive $d$ with\n $\\mathop{\\rm ord\\,}\\limits_{s=2} L(\\psi_d^3,s)=2$\n\\label{tbl:data}}\n\\vspace*{-6pt}\n\\begin{center}\n\\begin{tabular}{|r|l|}\\hline\n27a&172 524 1292 1564 1793 3016 4169 4648 6508 9149 9452 9560 10636\\\\\n&11137 12040 13784 14284 15713 17485 17884 22841 22909 22936 25729\\\\\n&27065 27628 29165 30392 34220 35749 38636 40108 41756 44221 47260\\\\\n&51512 54385 57548 58933 58936 58984 59836 59996 62353 64268 70253\\\\\n&74305 77320 77672 78572 84616 86609 86812 87013 92057 95861 96556\\\\\n&97237 99817\\\\\\hline\n32a& $-395$ $-5115$ $-17803$ $-25987$ $-58123$\\\\\n& $-60347$ $-73635$ $-79779$ $-84651$ $-99619$\\\\\n& 257 1217 2201 2465 14585 
26265 45201 82945\\\\\n&4632 5336 5720 7480 9560 30328 30360\\\\\n&31832 38936 45848 69784 71832 83512 92312\\\\\\hline\n49a&\n$-79$ $-311$ $-319$ $-516$ $-856$ $-1007$\n$-1039$ $-1243$ $-1391$ $-1507$ $-1795$\\\\\n&$-2024$ $-2392$ $-2756$ $-2923$ $-3527$\n$-3624$ $-4087$ $-4371$ $-4583$ $-4727$\\\\\n&$-5431$ $-5524$ $-5627$ $-6740$ $-7167$\n$-7871$ $-8095$ $-8283$ $-10391$ $-10628$\\\\\n& $-13407$ $-13656$ $-13780$ $-16980$ $-18091$\n$-22499$ $-27579$ $-28596$ $-30083$\\\\\n& $-30616$ $-32303$ $-32615$ $-36311$\n$-36399$ $-38643$ $-39127$ $-40127$ $-42324$\\\\\n& $-52863$ $-64031$ $-64399$ $-66091$ $-66776$\n $-66967$ $-69647$ $-70376$ $-71455$\\\\\n& $-72663$ $-73487$ $-73559$ $-77039$ $-84383$\n$-90667$ $-91171$ $-98655$ $-98927$\\\\\\hline\n$11^2$&\n12 140 632 1160 1208 1308 1704 1884 2072 2136 2380 2693 2716 3045\\\\\n&4120 4121 5052 5528 5673 5820 6572 7532 11053 11208 12277 12568\\\\\n&12949 13884 14844 15465 16136 18588 18885 19020 19884 24060 25788\\\\\n&27365 27597 28265 28668 29109 29573 32808 32828 35261 36552 37164\\\\\n&38121 38297 44232 44873 49512 49765 50945 52392 54732 55708 56076\\\\\n&56721 58460 59340 65564 66072 66833 71688 72968 79557 80040 80184\\\\\n&83388 84504 84620 84945 86997 87576 92460 95241\\\\\\hline\n\\kern-1.5pt 256a&\n401 497 2513 3036 3813 6933 6941 9596 9932 11436 14721 17133 17309\\\\\n&18469 21345 21749 26381 26933 28993 29973 30461 33740 51469 53084\\\\\n&62556 63980 67721 69513 73868 76241 81164 87697\\\\\\hline\n\\kern-1.5pt 256b&\n73 345 3521 5133 6693 7293 21752 25437 27113 34657 38485 41656\\\\\n&42433 44088 46045 75581 79205 83480 89737 93624 96193\\\\\\hline\n$19^2$&\n44 60 1429 1793 3297 3340 3532 3837 3880 4109 5228 5628 7761 8808\\\\\n&9080 9388 12280 12313 12545 13373 13516 13897 19164 22204 23241\\\\\n&25036 25653 41205 41480 42665 43429 44121 44285 44508 45660 48828\\\\\n&50584 52989 64037 74585 75324 76921 81885 85036 96220\\\\\\hline\n$43^2$&\n88 152 440 2044 4268 5852 6376 7880 8908 9880 14252\\\\\n&15681 17864 20085 20353 28492 29477 45368 55948 56172\\\\\n&57409 60177 68136 79916 84524 85580 86853 96216\\\\\\hline\n$67^2$&\n17 57 869 1612 1628 3260 6380 6385 7469 8328 11017 13772\\\\\n&14152 14268 14552 15901 22513 24605 24664 27992 29676 33541\\\\\n&33789 36344 36588 38028 40280 43041 49884 62353 67260 \\\\\\hline\n\\kern-1.5pt$163^2$&30720\\\\\\hline\n\\end{tabular}\n\\end{center}\n\\vspace*{-7.5pt}\n\\caption{Primitive $d$ with\n $\\mathop{\\rm ord\\,}\\limits_{s=k} L(\\psi_d^{2k-1},s)=2$\n for some $k\\ge 3$\\label{tbl:higher}}\n\\vspace*{-2pt}\n\\begin{center}\n\\begin{tabular}{|r|l|r|l|}\\hline\n27a&5th: $-13091$ 4040 18044&49a&5th: 437 19317\\\\\n32a&5th: 1704&121a&5th: $-183$\\quad 7th: 27365\\\\\n36a&5th: $-856$ $-2104$ $-31592$ $-88580$ 22909&256a&5th: $-79$ $-21252$\\\\\n36a&7th: $-95$ 2488&256b&5th: $-511$ 89320\\\\\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\\goodbreak\n\n\\subsubsection{Comparison between the CM and non-CM cases}\nFor the twist computations for the symmetric powers,\nwe can go much further (about 20 times as far) in the CM case\nbecause the conductors do not grow as rapidly.\\footnote\n{In \\cite[\\S 8]{RVZ} Rodriguez Villegas and Zagier\n mention the possibility of a Waldspurger-type formula\n for the twists of the Hecke Gr\\\"ossencharacters,\n but it does not seem that such a formula has ever appeared.\n Similarly, one might hope to extend the work of Coates and Wiles~\\cite{cw}\n and\/or Gross and Zagier~\\cite{gz} to powers of Gr\\\"ossencharacters;\n there is some early work (among 
others) of Damerell \\cite{damerell}\n in this regard, while Guo \\cite{guo} shows partial results toward the\n Bloch-Kato conjecture.}\nFor the 3rd symmetric power, the crude prediction is that we should have\n(asymptotically) many more extra vanishings for twists in the CM case\nthan in the non-CM case, but this is not borne out by the data.\nAdditionally, we have no triple zeros in the CM case (where the\ndataset is almost 100 times as large),\nwhile we already have six for the non-CM curves.\nThis is directly antithetical to our suspicion that there should\nbe more extra vanishings in the CM case.\nAs before, this might cast some doubt on our methodology for\nmodelling vanishings.\n\n\\section{Acknowledgements}\nThe author was partially supported by\nEngineering and Physical Sciences Research Council (EPSRC)\ngrant GR\/T00658\/01 (United Kingdom).\nHe thanks N.~D.~Elkies, H.~A.~Helfgott, and A.~Venkatesh for useful comments,\nand N.~P.~Jones for the reference~\\cite{duke}.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{Sec:Intro}\n\nMaximum entropy models are an important class of statistical models for biology. \nFor instance, they have been found to be a good model for protein folding \\cite{russ2005,Socolich2005},\nantibody diversity \\cite{Mora2010}, neural population activity \\cite{Schneidman2006,shlens2006,Tkacik2006,tang2008,bethge2008,yu2008,shlens2009}, and \nflock behavior \\cite{Bialek2012}. In this paper we develop a general framework for studying maximum entropy distributions on weighted graphs, extending recent work of Chatterjee, Diaconis, and Sly~\\cite{Chatterjee}. The development of this theory is partly motivated by the problem of sensory coding in neuroscience.\n\nIn the brain, information is represented by discrete electrical pulses, called \\textit{action potentials} or \\textit{spikes} \\cite{rieke1999}. This includes neural representations of sensory stimuli which can take on a continuum of values. For instance, large photoreceptor arrays in the retina respond to a range of light intensities in a visual environment, but the brain does not receive information from these photoreceptors directly. Instead, retinal ganglion cells must convey this detailed input to the visual cortex using only a series of binary electrical signals. Continuous stimuli are therefore converted by networks of neurons to sequences of spike times. \n\nAn unresolved controversy in neuroscience is whether information is contained in the precise timings of these spikes or only in their ``rates\" (i.e., counts of spikes in a window of time). Early theoretical studies \\cite{mackay1952} suggest that information capacities of timing-based codes are superior to those that are rate-based (also see \\cite{hopfield1995} for an implementation in a simple model). 
Moreover, a number of scientific articles have appeared suggesting that precise spike timing \\cite{abeles82, bair1996, neuenschwander1996, victor1996, liu2001variability, butts2007, Maldonado, nemenman2008, Desbordes2008, Kilavik} and synchrony \\cite{uhlhaas2009} are important for various computations in the brain.\\footnote{It is well-known that precise spike timing is used for time-disparity computation in animals \\cite{carr1993}, for instance when owls track prey with binaural hearing or when electric fish use electric fields around their bodies to locate objects.} Here, we briefly explain a possible scheme for encoding continuous vectors with spiking neurons that takes advantage of precise spike timing and the mathematics of maximum entropy distributions.\n\nConsider a network of $n$ neurons in one region of the brain which transmits a continuous vector $\\theta \\in \\mathbb{R}^n$ using sequences of spikes to a second receiver region. We assume that this second region contains a number of coincidence detectors that measure the absolute difference in spike times between pairs of neurons projecting from the first region. We imagine three scenarios for how information can be obtained by these detectors. In the first, the detector measures only synchrony between spikes; that is, it assigns a 0 to a nonzero timing difference and a 1 to a coincidence of spikes. In another scenario, timing differences between projecting neurons can assume an infinite but countable number of possible values. Finally, in the third scenario, we allow these differences to take on any nonnegative real values. We further assume that neuronal output and thus spike times are stochastic variables. A basic question now arises: How can the first region encode $\\theta$ so that it can be recovered robustly by the second? \n\nWe answer this question by first asking the symmetric one: How can the second region recover a real vector transmitted by an unknown sender region from spike timing measurements? We propose the following possible solution to this problem. Fix one of the detector mechanisms described above, and set $a_{ij}$ to be the measurement of the absolute timing difference between spikes from projecting neurons $i$ and $j$. We assume that the receiver population can compute the (local) sums $d_i = \\sum_{j \\neq i} a_{ij}$ efficiently. The values $\\mathbf{a} = (a_{ij})$ represent a weighted graph $G$ on $n$ vertices, and we assume that $a_{ij}$ is randomly drawn from a distribution on timing measurements $(A_{ij})$. Making no further assumptions, a principle of Jaynes \\cite{jaynes1957} suggests that the second region model the timing differences as drawn from the (unique) maximum entropy distribution \\cite{shannon48, coverthomas} over weighted graphs whose expected degree sums $\\sum_{j \\neq i} A_{ij}$ equal the observed vector $\\mathbf{d} = (d_1,\\ldots,d_n)$. Depending on which of the three scenarios described above is true for the coincidence detector, this prescription produces one of three different maximum entropy distributions.\n\nConsider the third scenario above (the other cases are also subsumed by our results). 
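To make the proposed scheme concrete, the following small Python simulation (ours, and not part of the original development) encodes a vector $\\theta$ as exponentially distributed pairwise timing differences, which is the maximum entropy form derived in Section~\\ref{Sec:Cont} for the third scenario, and then decodes it from the degree sums by a naive coordinate-wise bisection on the moment-matching equations displayed next. It is intended only as a sanity check, and no claims are made about its rate of convergence.\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(seed=0)\nn = 30\ntheta = rng.uniform(0.5, 1.5, size=n)    # hidden vertex parameters\n\n# Encoding: the timing difference between neurons i and j is\n# exponential with rate theta_i + theta_j (continuous scenario).\nrates = theta[:, None] + theta[None, :]\nA = rng.exponential(scale=1.0 / rates)\nA = np.triu(A, k=1)\nA = A + A.T                              # symmetric, zero diagonal\nd = A.sum(axis=1)                        # local degree sums d_i\n\ndef expected_degree(th, i):\n    s = (1.0 / (th[i] + th)).sum()\n    return s - 1.0 / (2.0 * th[i])       # drop the j = i term\n\n# Decoding: solve d_i = sum over j of 1/(th_i + th_j), one\n# coordinate at a time; the search is restricted to th_i > 0.\nth = np.ones(n)\nfor sweep in range(100):\n    for i in range(n):\n        lo, hi = 1e-6, 1e6\n        for _ in range(60):\n            th[i] = 0.5 * (lo + hi)\n            if expected_degree(th, i) > d[i]:\n                lo = th[i]               # degree decreasing in th_i\n            else:\n                hi = th[i]\n\nprint(np.abs(th - theta).max())          # small for moderate n\n\\end{verbatim}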
As we shall see in Section~\\ref{Sec:Cont}, the distribution determined in this case is parameterized by a real vector $\\theta = (\\theta_1,\\ldots,\\theta_n)$, and finding the maximum likelihood estimator (MLE) for these parameters using $\\mathbf{d}$ as sufficient statistics boils down to solving the following set of $n$ algebraic equations in the $n$ unknowns $\\hat{\\theta}_1,\\ldots,\\hat{\\theta}_n$:\n\\begin{equation}\\label{Eq:RetinaEqs}\nd_i = \\sum_{j \\neq i} \\frac{1}{\\hat{\\theta}_i+\\hat{\\theta}_j} \\quad \\text{ for } i = 1,\\dots,n.\n\\end{equation}\nGiven our motivation, we call the system of equations~\\eqref{Eq:RetinaEqs} the \\textit{retina equations} for theoretical neuroscience, and note that they have been studied in a more general context by Sanyal, Sturmfels, and Vinzant~\\cite{SturmEntDisc} using matroid theory and algebraic geometry. Remarkably, a solution $\\hat{\\theta}$ to~\\eqref{Eq:RetinaEqs} has the property that with high probability, it is arbitrarily close to the original parameters $\\theta$ for sufficiently large network sizes $n$ (in the scenario of binary measurements, this is a result of~\\cite{Chatterjee}). In particular, it is possible for the receiver region to reliably recover a continuous vector $\\theta$ from a \\textit{single} cycle of neuronal firing emanating from the sender region. \n\nWe now know how to answer our first question: \\textit{The sender region should arrange spike timing differences to come from a maximum entropy distribution}. We remark that this conclusion is consistent with modern paradigms in theoretical neuroscience and artificial intelligence, such as the Boltzmann machine \\cite{ackley1985}, a stochastic network whose (zero-temperature) deterministic limit is the Little-Hopfield network \\cite{little1974,hopfield1982}.\n\n\n\\paragraph{Organization.}\nThe organization of this paper is as follows. In Section~\\ref{Sec:General}, we lay out the general theory of maximum entropy distributions on weighted graphs. In Section~\\ref{Sec:Specific}, we specialize the general theory to three classes of weighted graphs. For each class, we provide an explicit characterization of the maximum entropy distributions and prove a generalization of the Erd\\H{o}s-Gallai criterion for weighted graphical sequences. We also establish the consistency of the MLE of the vertex parameters from one graph sample. Section~\\ref{Sec:Proofs} provides the proofs of the main technical results presented in Section~\\ref{Sec:Specific}. Finally, in Section~\\ref{Sec:Discussion} we discuss the results in this paper and some future research directions.\n\n\n\\paragraph{Notation.}\nIn this paper we use the following notation. Let $\\mathbb{R}_+ = (0, \\infty)$, $\\mathbb{R}_0 = [0,\\infty)$, $\\mathbb{N} = \\{1,2,\\dots\\}$, and $\\mathbb{N}_0 = \\{0,1,2,\\dots\\}$. We write $\\sum_{\\{i,j\\}}$ and $\\prod_{\\{i,j\\}}$ for the summation and product, respectively, over all $\\binom{n}{2}$ pairs $\\{i,j\\}$ with $i \\neq j$. For a subset $C \\subseteq \\mathbb{R}^n$, $C^\\circ$ and $\\overline{C}$ denote the interior and closure of $C$ in $\\mathbb{R}^n$, respectively. For a vector $x = (x_1,\\dots,x_n) \\in \\mathbb{R}^n$, $\\|x\\|_1 = \\sum_{i=1}^n |x_i|$ and $\\|x\\|_\\infty = \\max_{1 \\leq i \\leq n} |x_i|$ denote the $\\ell_1$ and $\\ell_\\infty$ norms of $x$. 
For an $n \\times n$ matrix $J = (J_{ij})$, $\\|J\\|_\\infty$ denotes the matrix norm induced by the $\\|\\cdot\\|_\\infty$-norm on vectors in $\\mathbb{R}^n$, that is,\n\\begin{equation*}\n\\|J\\|_\\infty = \\max_{x \\neq 0} \\frac{\\|Jx\\|_\\infty}{\\|x\\|_\\infty} = \\max_{1 \\leq i \\leq n} \\sum_{j=1}^n |J_{ij}|.\n\\end{equation*}\n\n\n\n\n\n\\section{General theory via exponential family distributions}\n\\label{Sec:General}\n\nIn this section we develop the general machinery of maximum entropy distributions on graphs via the theory of exponential family distributions~\\cite{Jordan}, and in subsequent sections we specialize our analysis to some particular cases of weighted graphs.\n\n\nConsider an undirected graph $G$ on $n \\geq 3$ vertices with edge $(i,j)$ having weight $a_{ij} \\in S$, where $S \\subseteq \\mathbb{R}$ is the set of possible weight values. We will later consider the following specific cases:\n\\begin{enumerate}\n \\item {\\em Finite discrete weighted graphs,} with edge weights in $S = \\{0,1,\\dots,r-1\\}$, $r \\geq 2$.\n \\item {\\em Infinite discrete weighted graphs,} with edge weights in $S = \\mathbb{N}_0$.\n \\item {\\em Continuous weighted graphs,} with edge weights in $S = \\mathbb{R}_0$.\n\\end{enumerate}\nA graph $G$ is fully specified by its \\textit{adjacency matrix} $\\mathbf{a} = (a_{ij})_{i,j=1}^n$, which is an $n \\times n$ symmetric matrix with zeros along its diagonal. For fixed $n$, a probability distribution over graphs $G$ corresponds to a distribution over adjacency matrices $\\mathbf{a} = (a_{ij}) \\in S^{\\binom{n}{2}}$. Given a graph with adjacency matrix $\\mathbf{a} = (a_{ij})$, let $\\deg_i(\\mathbf{a}) = \\sum_{j \\neq i} a_{ij}$ be the degree of vertex $i$, and let $\\deg(\\mathbf{a}) = (\\deg_1(\\mathbf{a}), \\dots, \\deg_n(\\mathbf{a}))$ be the degree sequence of $\\mathbf{a}$. \n\n\n\\subsection{Characterization of maximum entropy distribution}\n\nLet $\\mathcal{S}$ be a $\\sigma$-algebra over the set of weight values $S$, and assume there is a canonical $\\sigma$-finite probability measure $\\nu$ on $(S,\\mathcal{S})$. Let $\\nu^{\\binom{n}{2}}$ be the product measure on $S^{\\binom{n}{2}}$, and let $\\mathfrak{P}$ be the set of all probability distributions on $S^{\\binom{n}{2}}$ that are absolutely continuous with respect to $\\nu^{\\binom{n}{2}}$. Since $\\nu^{\\binom{n}{2}}$ is $\\sigma$-finite, these probability distributions can be characterized by their density functions, i.e.\\ the Radon-Nikodym derivatives with respect to $\\nu^{\\binom{n}{2}}$. Given a sequence $\\mathbf{d} = (d_1, \\dots, d_n) \\in \\mathbb{R}^n$, let $\\mathfrak{P}_\\mathbf{d}$ be the set of distributions in $\\mathfrak{P}$ whose expected degree sequence is equal to $\\mathbf{d}$,\n\\begin{equation*}\n\\mathfrak{P}_\\mathbf{d} = \\{ \\P \\in \\mathfrak{P} \\colon \\mathbb{E}_\\P[\\deg(A)] = \\mathbf{d}\\},\n\\end{equation*}\nwhere in the definition above, the random variable $A = (A_{ij}) \\in S^{\\binom{n}{2}}$ is drawn from the distribution $\\P$. Then the distribution $\\P^\\ast$ in $\\mathfrak{P}_\\mathbf{d}$ with maximum entropy is precisely the exponential family distribution with the degree sequence as sufficient statistics~\\cite[Chapter~3]{Jordan}. 
Specifically, the density of $\\P^\\ast$ at $\\mathbf{a} = (a_{ij}) \\in S^{\\binom{n}{2}}$ is given by\\footnote{We choose to use $-\\theta$ in the parameterization~\\eqref{Eq:MaxEntDist}, instead of the canonical parameterization $p^\\ast(\\mathbf{a}) \\propto \\exp(\\theta^\\top \\deg(\\mathbf{a}))$, because it simplifies the notation in our later presentation.}\n\\begin{equation}\\label{Eq:MaxEntDist}\np^\\ast(\\mathbf{a}) = \\exp \\big( -\\theta^\\top \\deg(\\mathbf{a}) - Z(\\theta) \\big),\n\\end{equation}\nwhere $Z(\\theta)$ is the \\emph{log-partition function},\n\\begin{equation*}\nZ(\\theta) = \\log \\int_{S^{\\binom{n}{2}}} \\exp\\big( -\\theta^\\top \\deg(\\mathbf{a}) \\big) \\; \\nu^{\\binom{n}{2}}(d\\mathbf{a}),\n\\end{equation*}\nand $\\theta = (\\theta_1, \\dots, \\theta_n)$ is a parameter that belongs to the \\emph{natural parameter space}\n\\begin{equation*}\n\\Theta = \\{\\mathbf{\\theta} \\in \\mathbb{R}^n \\colon Z(\\mathbf{\\theta}) < \\infty\\}.\n\\end{equation*}\nWe will also write $\\P^\\ast_\\theta$ if we need to emphasize the dependence of $\\P^\\ast$ on the parameter $\\theta$.\n\nUsing the definition $\\deg_i(\\mathbf{a}) = \\sum_{j \\neq i} a_{ij}$, we can write\n\\begin{equation*}\n\\exp \\big( -\\theta^\\top \\deg(\\mathbf{a}) \\big)\n= \\exp \\Big( -\\sum_{\\{i,j\\}} (\\theta_i+\\theta_j) a_{ij} \\Big)\n= \\prod_{\\{i,j\\}} \\exp \\big(-(\\theta_i+\\theta_j) a_{ij} \\big).\n\\end{equation*}\nHence, we can express the log-partition function as\n\\begin{equation}\\label{Eq:DecompositionLogPartition}\nZ(\\theta) = \\log \\prod_{\\{i,j\\}} \\int_S \\exp \\big(-(\\theta_i+\\theta_j) a_{ij} \\big) \\; \\nu(da_{ij}) = \\sum_{\\{i,j\\}} Z_1(\\theta_i+\\theta_j),\n\\end{equation}\nin which $Z_1(t)$ is the marginal log-partition function\n\\begin{equation*}\nZ_1(t) = \\log \\int_S \\exp (-ta) \\: \\nu(da).\n\\end{equation*}\nConsequently, the density in~\\eqref{Eq:MaxEntDist} can be written as\n\\begin{equation*}\np^\\ast(\\mathbf{a}) = \\prod_{\\{i,j\\}} \\exp \\big( -(\\theta_i + \\theta_j) a_{ij} - Z_1(\\theta_i + \\theta_j) \\big).\n\\end{equation*}\nThis means the edge weights $A_{ij}$ are independent random variables, with $A_{ij} \\in S$ having distribution $\\P_{ij}^\\ast$ with density\n\\begin{equation*}\np_{ij}^\\ast(a) = \\exp \\big( -(\\theta_i + \\theta_j) a - Z_1(\\theta_i + \\theta_j) \\big).\n\\end{equation*}\nIn particular, the edge weights $A_{ij}$ belong to the same exponential family distribution but with different parameters that depend on $\\theta_i$ and $\\theta_j$ (or rather, on their sum $\\theta_i + \\theta_j$). The parameters $\\theta_1, \\dots, \\theta_n$ can be interpreted as the potential at each vertex that determines how strongly the vertices are connected to each other. Furthermore, we can write the natural parameter space $\\Theta$ as\n\\begin{equation*}\n\\Theta = \\{\\theta \\in \\mathbb{R}^n \\colon Z_1(\\theta_i+\\theta_j) < \\infty \\; \\text{ for all } \\; i \\neq j \\}. 
\n\\end{equation*}\n\n\n\n\\subsection{Maximum likelihood estimator and moment-matching equation}\n\nUsing the characterization of $\\P^\\ast$ as the maximum entropy distribution in $\\mathfrak{P}_\\mathbf{d}$, the condition $\\P^\\ast \\in \\mathfrak{P}_\\mathbf{d}$ means we need to choose the parameter $\\theta$ for $\\P^\\ast_\\theta$ such that $\\mathbb{E}_\\theta[\\deg(A)] = \\mathbf{d}$.\\footnote{Here we write $\\mathbb{E}_\\theta$ in place of $\\mathbb{E}_{\\P^\\ast}$ to emphasize the dependence of the expectation on the parameter $\\theta$.} This is an instance of the moment-matching equation, which, in the case of exponential family distributions, is well-known to be equivalent to finding the maximum likelihood estimator (MLE) of $\\theta$ given an empirical degree sequence $\\mathbf{d} \\in \\mathbb{R}^n$.\n\nSpecifically, suppose we draw graph samples $G_1,\\dots,G_m$ i.i.d.\\ from the distribution $\\P^\\ast$ with parameter $\\theta^\\ast$, and we want to find the MLE $\\hat \\theta$ of $\\theta^\\ast$ based on the observations $G_1,\\dots,G_m$. Using the parametric form of the density~\\eqref{Eq:MaxEntDist}, this is equivalent to solving the maximization problem\n\\begin{equation*}\n\\max_{\\theta \\in \\Theta} \\: \\mathcal{F}(\\theta) \\equiv -\\theta^\\top \\mathbf{d} - Z(\\theta),\n\\end{equation*}\nwhere $\\mathbf{d}$ is the average of the degree sequences of $G_1,\\dots,G_m$. Setting the gradient of $\\mathcal{F}(\\theta)$ to zero reveals that the MLE $\\hat \\theta$ satisfies\n\\begin{equation}\\label{Eq:MLEEquation}\n-\\nabla Z(\\hat \\theta) = \\mathbf{d}.\n\\end{equation}\nRecall that the gradient of the log-partition function in an exponential family distribution is equal to the expected sufficient statistics. In our case, we have $-\\nabla Z(\\hat \\theta) = \\mathbb{E}_{\\hat \\theta}[\\deg(A)]$, so the MLE equation~\\eqref{Eq:MLEEquation} recovers the moment-matching equation $\\mathbb{E}_{\\hat \\theta}[\\deg(A)] = \\mathbf{d}$.\n\nIn Section~\\ref{Sec:Specific} we study the properties of the MLE of $\\theta$ from a {\\em single} sample $G \\sim \\P^\\ast_\\theta$. In the remainder of this section, we address the question of the existence and uniqueness of the MLE with a given empirical degree sequence $\\mathbf{d}$.\n\nDefine the \\emph{mean parameter space} $\\mathcal{M}$ to be the set of expected degree sequences from all distributions on $S^{\\binom{n}{2}}$ that are absolutely continuous with respect to $\\nu^{\\binom{n}{2}}$,\n\\begin{equation*}\n\\mathcal{M} = \\{ \\mathbb{E}_\\P[\\deg(A)] \\colon \\P \\in \\mathfrak{P} \\}.\n\\end{equation*}\nThe set $\\mathcal{M}$ is necessarily convex, since a convex combination of probability distributions in $\\mathfrak{P}$ is also a probability distribution in $\\mathfrak{P}$. Recall that an exponential family distribution is {\\em minimal} if there is no linear combination of the sufficient statistics that is constant almost surely with respect to the base distribution. This minimality property clearly holds for $\\P^\\ast$, for which the sufficient statistics are the degree sequence. We say that $\\P^\\ast$ is \\emph{regular} if the natural parameter space $\\Theta$ is open. 
By the general theory of exponential family distributions~\\cite[Theorem~3.3]{Jordan}, in a regular and minimal exponential family distribution, the gradient of the log-partition function maps the natural parameter space $\\Theta$ to the interior of the mean parameter space $\\mathcal{M}$, and this mapping\\footnote{The mapping is $-\\nabla Z$, instead of $\\nabla Z$, because of our choice of the parameterization in~\\eqref{Eq:MaxEntDist} using $-\\theta$.}\n\\begin{equation*}\n-\\nabla Z \\colon \\Theta \\to \\mathcal{M}^\\circ\n\\end{equation*}\nis bijective. We summarize the preceding discussion in the following result.\n\n\\begin{proposition}\\label{Prop:RegularMinimal}\nAssume $\\Theta$ is open. Then there exists a solution $\\theta \\in \\Theta$ to the MLE equation $\\mathbb{E}_{\\theta}[\\deg(A)] = \\mathbf{d}$ if and only if $\\mathbf{d} \\in \\mathcal{M}^\\circ$, and if such a solution exists then it is unique.\n\\end{proposition}\n\nWe now characterize the mean parameter space $\\mathcal{M}$. We say that a sequence $\\mathbf{d} = (d_1, \\dots, d_n)$ is \\emph{graphic} (or a {\\em graphical sequence}) if $\\mathbf{d}$ is the degree sequence of a graph $G$ with edge weights in $S$, and in this case we say that $G$ \\emph{realizes} $\\mathbf{d}$. It is important to note that whether a sequence $\\mathbf{d}$ is graphic depends on the weight set $S$, which we consider fixed for now.\n\n\\begin{proposition}\\label{Prop:MConvW}\nLet $\\mathcal{W}$ be the set of all graphical sequences, and let $\\text{conv}(\\mathcal{W})$ be the convex hull of $\\mathcal{W}$. Then $\\mathcal{M} \\subseteq \\text{conv}(\\mathcal{W})$. Furthermore, if $\\mathfrak{P}$ contains the Dirac delta measures, then $\\mathcal{M} = \\text{conv}(\\mathcal{W})$.\n\\end{proposition}\n\\begin{proof}\nThe inclusion $\\mathcal{M} \\subseteq \\text{conv}(\\mathcal{W})$ is clear, since any element of $\\mathcal{M}$ is of the form $\\mathbb{E}_\\P[\\deg(A)]$ for some distribution $\\P$ and $\\deg(A) \\in \\mathcal{W}$ for every realization of the random variable $A$. Now suppose $\\mathfrak{P}$ contains the Dirac delta measures $\\delta_B$ for each $B \\in S^{\\binom{n}{2}}$. Given $\\mathbf{d} \\in \\mathcal{W}$, let $B$ be the adjacency matrix of the graph that realizes $\\mathbf{d}$. Then $\\mathbf{d} = \\mathbb{E}_{\\delta_B}[\\deg(A)] \\in \\mathcal{M}$, which means $\\mathcal{W} \\subseteq \\mathcal{M}$, and hence $\\text{conv}(\\mathcal{W}) \\subseteq \\mathcal{M}$ since $\\mathcal{M}$ is convex.\n\\end{proof}\n\nAs we shall see in Section~\\ref{Sec:Specific}, the result above allows us to conclude that $\\mathcal{M} = \\text{conv}(\\mathcal{W})$ for the case of discrete weighted graphs. On the other hand, for the case of continuous weighted graphs we need to prove $\\mathcal{M} = \\text{conv}(\\mathcal{W})$ directly since $\\mathfrak{P}$ in this case does not contain the Dirac measures.\n\n\\begin{remark}\nWe emphasize the distinction between a \\emph{valid} solution $\\theta \\in \\Theta$ and a \\emph{general} solution $\\theta \\in \\mathbb{R}^n$ to the MLE equation $\\mathbb{E}_\\theta[\\deg(A)] = \\mathbf{d}$. As we saw from Proposition~\\ref{Prop:RegularMinimal}, we have a precise characterization of the existence and uniqueness of the valid solution $\\theta \\in \\Theta$, but in general, there are multiple solutions $\\theta$ to the MLE equation. 
In this paper we shall be concerned only with the valid solution; Sanyal, Sturmfels, and Vinzant study some algebraic properties of general solutions~\\cite{SturmEntDisc}.\n\\end{remark}\n\nWe close this section by discussing the symmetry of the valid solution to the MLE equation. Recall the decomposition~\\eqref{Eq:DecompositionLogPartition} of the log-partition function $Z(\\theta)$ into the marginal log-partition functions $Z_1(\\theta_i+\\theta_j)$. Let $\\text{Dom}(Z_1) = \\{t \\in \\mathbb{R} \\colon Z_1(t) < \\infty\\}$, and let $\\mu \\colon \\text{Dom}(Z_1) \\to \\mathbb{R}$ denote the (marginal) \\emph{mean function}\n\\begin{equation*}\n\\mu(t) = \\int_S a \\: \\exp \\big(-ta - Z_1(t) \\big) \\: \\nu(da).\n\\end{equation*}\nObserving that we can write\n\\begin{equation*}\n\\mathbb{E}_\\theta[A_{ij}] = \\int_S a \\: \\exp \\big( -(\\theta_i + \\theta_j) a - Z_1(\\theta_i + \\theta_j) \\big) \\: \\nu(da) = \\mu(\\theta_i+\\theta_j),\n\\end{equation*}\nthe MLE equation $\\mathbb{E}_\\theta[\\deg(A)] = \\mathbf{d}$ then becomes\n\\begin{equation}\\label{Eq:MLE-Sym}\nd_i = \\sum_{j \\neq i} \\mu(\\theta_i+\\theta_j) \\quad \\text{ for } i = 1, \\dots, n.\n\\end{equation}\n\n\n\nIn the statement below, $\\text{sgn}$ denotes the sign function: $\\text{sgn}(t) = t\/|t|$ if $t \\neq 0$, and $\\text{sgn}(0) = 0$.\n\n\\begin{proposition}\\label{Prop:SymSoln}\nLet $\\mathbf{d} \\in \\mathcal{M}^\\circ$, and let $\\theta \\in \\Theta$ be the unique solution to the system of equations~\\eqref{Eq:MLE-Sym}. If $\\mu$ is strictly increasing, then\n\\begin{equation*}\n\\text{sgn}(d_i-d_j) = \\text{sgn}(\\theta_i-\\theta_j) \\quad \\text{ for all } i \\neq j,\n\\end{equation*}\nand similarly, if $\\mu$ is strictly decreasing, then\n\\begin{equation*}\n\\text{sgn}(d_i-d_j) = \\text{sgn}(\\theta_j-\\theta_i) \\quad \\text{ for all } i \\neq j.\n\\end{equation*}\n\\end{proposition}\n\\begin{proof}\nGiven $i \\neq j$,\n\\begin{equation*}\n\\begin{split}\nd_i-d_j\n&= \\Big( \\mu(\\theta_i+\\theta_j) + \\sum_{k \\neq i,j} \\mu(\\theta_i+\\theta_k) \\Big)\n- \\Big( \\mu(\\theta_j+\\theta_i) + \\sum_{k \\neq i,j} \\mu(\\theta_j+\\theta_k) \\Big) \\\\\n&= \\sum_{k \\neq i,j} \\big( \\mu(\\theta_i+\\theta_k) - \\mu(\\theta_j+\\theta_k) \\big).\n\\end{split}\n\\end{equation*}\nIf $\\mu$ is strictly increasing, then $\\mu(\\theta_i+\\theta_k)-\\mu(\\theta_j+\\theta_k)$ has the same sign as $\\theta_i-\\theta_j$ for each $k \\neq i,j$, and thus $d_i-d_j$ also has the same sign as $\\theta_i-\\theta_j$. Similarly, if $\\mu$ is strictly decreasing, then $\\mu(\\theta_i+\\theta_k)-\\mu(\\theta_j+\\theta_k)$ has the opposite sign of $\\theta_i-\\theta_j$, and thus $d_i-d_j$ also has the opposite sign of $\\theta_i-\\theta_j$.\n\\end{proof}\n\n\n\n\n\n\n\\section{Analysis for specific edge weights}\n\\label{Sec:Specific}\n\nIn this section we analyze the maximum entropy random graph distributions for several specific choices of the weight set $S$. For each case, we specify the distribution of the edge weights $A_{ij}$, the mean function $\\mu$, the natural parameter space $\\Theta$, and characterize the mean parameter space $\\mathcal{M}$. We also study the problem of finding the MLE $\\hat \\theta$ of $\\theta$ from one graph sample $G \\sim \\P^\\ast_\\theta$ and prove the existence, uniqueness, and consistency of the MLE. Along the way, we derive analogues of the Erd\\H{o}s-Gallai criterion of graphical sequences for weighted graphs. 
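Before specializing, it may help to collect the three cases in one place. The following summary table is ours, compiled from the formulas derived in Sections~\\ref{Sec:FiniteDiscrete}, \\ref{Sec:Cont}, and \\ref{Sec:InfiniteDisc} below; here $\\mu$ is the marginal mean function on its domain, $\\Theta$ is the natural parameter space, and ``truncated geometric'' is our shorthand for the law $p(a) \\propto \\exp(-ta)$ on $\\{0,1,\\dots,r-1\\}$.\n\\begin{center}\n\\begin{tabular}{|c|c|c|c|}\\hline\n$S$ & edge weight law & $\\mu(t)$ & $\\Theta$\\\\\\hline\n$\\{0,1,\\dots,r-1\\}$ & truncated geometric & $\\frac{1}{\\exp(t)-1}-\\frac{r}{\\exp(rt)-1}$ ($t \\neq 0$) & $\\mathbb{R}^n$\\\\\n$\\mathbb{R}_0$ & exponential & $1\/t$ & $\\{\\theta \\colon \\theta_i+\\theta_j>0, \\, i \\neq j\\}$\\\\\n$\\mathbb{N}_0$ & geometric & $\\frac{1}{\\exp(t)-1}$ & $\\{\\theta \\colon \\theta_i+\\theta_j>0, \\, i \\neq j\\}$\\\\\\hline\n\\end{tabular}\n\\end{center}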
We defer the proofs of the results presented here to Section~\\ref{Sec:Proofs}.\n\n\n\n\n\\subsection{Finite discrete weighted graphs}\n\\label{Sec:FiniteDiscrete}\n\nWe first study weighted graphs with edge weights in the finite discrete set $S = \\{0,1,\\dots,r-1\\}$, where $r \\geq 2$. The case $r = 2$ corresponds to unweighted graphs, and our analysis in this section recovers the results of Chatterjee, Diaconis, and Sly~\\cite{Chatterjee}. The proofs of the results in this section are provided in Section~\\ref{Sec:ProofFiniteDisc}.\n\n\n\\subsubsection{Characterization of the distribution}\n\nWe take $\\nu$ to be the counting measure on $S$. Following the development in Section~\\ref{Sec:General}, the edge weights $A_{ij} \\in S$ are independent random variables with density\n\\begin{equation*}\np_{ij}^\\ast(a) = \\exp\\big(-(\\theta_i+\\theta_j)a - Z_1(\\theta_i+\\theta_j)\\big), \\quad 0 \\leq a \\leq r-1,\n\\end{equation*}\nwhere the marginal log-partition function $Z_1$ is given by\n\\begin{equation*}\nZ_1(t) = \\log \\sum_{a = 0}^{r-1} \\exp(-at) =\n\\begin{cases}\n\\log \\frac{1-\\exp(-rt)}{1-\\exp(-t)} \\quad & \\text{ if } t \\neq 0,\\\\\n\\log r &\\text{ if } t = 0.\n\\end{cases}\n\\end{equation*}\n\nSince $Z_1(t) < \\infty$ for all $t \\in \\mathbb{R}$, the natural parameter space $\\Theta = \\{\\theta \\in \\mathbb{R}^n \\colon Z_1(\\theta_i+\\theta_j) < \\infty, i \\neq j\\}$ is given by $\\Theta = \\mathbb{R}^n$. The mean function is given by\n\\begin{equation}\\label{Eq:MeanFuncFiniteDiscrete}\n\\mu(t) = \\sum_{a=0}^{r-1} a \\exp(-at - Z_1(t))\n= \\frac{\\sum_{a=0}^{r-1} a \\: \\exp(-at)}{\\sum_{a=0}^{r-1} \\exp(-at)}.\n\\end{equation}\nAt $t = 0$ the mean function takes the value\n\\begin{equation*}\n\\mu(0) = \\frac{\\sum_{a=0}^{r-1} a}{r} = \\frac{r-1}{2},\n\\end{equation*}\nwhile for $t \\neq 0$, the mean function simplifies to\n\\begin{equation}\\label{Eq:MeanFuncFiniteDiscreteAlt}\n\\mu(t)\n= -\\left(\\frac{1-\\exp(-t)}{1-\\exp(-rt)}\\right) \\cdot \\frac{d}{dt} \\sum_{a=0}^{r-1} \\exp(-at)\n= \\frac{1}{\\exp(t)-1}-\\frac{r}{\\exp(rt)-1}.\n\\end{equation}\nFigure~\\ref{Fig:FiniteDiscMean} shows the behavior of the mean function $\\mu(t)$ and its derivative $\\mu'(t)$ as $r$ varies.\n\n\\begin{figure}[h]\n\\begin{center}\n\\begin{tabular}{cc}\n\\widgraph{0.42\\textwidth}{mu_plot} &\n\\widgraph{0.42\\textwidth}{muprime_plot} \\\\\n(a) & (b)\n\\end{tabular}\n\\caption{Plot of the mean function $\\mu(t)$ (left) and its derivative $\\mu'(t)$ (right) as $r$ varies.}\n\\label{Fig:FiniteDiscMean}\n\\end{center}\n\\end{figure}\n\n\n\\begin{remark}\nFor $r = 2$, the edge weights $A_{ij}$ are independent Bernoulli random variables with\n\\begin{equation*}\n\\P^\\ast(A_{ij} = 1) = \\mu(\\theta_i+\\theta_j) = \\frac{\\exp(-\\theta_i-\\theta_j)}{1 + \\exp(-\\theta_i - \\theta_j)} = \\frac{1}{1+\\exp(\\theta_i+\\theta_j)}.\n\\end{equation*}\nAs noted above, this is the model recently studied by Chatterjee, Diaconis, and Sly~\\cite{Chatterjee} in the context of graph limits. When $\\theta_1 = \\theta_2 = \\dots = \\theta_n = t$, we recover the classical Erd\\H{o}s-R\\'enyi model with edge emission probability $p = 1\/(1+\\exp(2t))$.\n\\end{remark}\n\n\n\n\\subsubsection{Existence, uniqueness, and consistency of the MLE}\n\nConsider the problem of finding the MLE of $\\theta$ from one graph sample. Specifically, let $\\theta \\in \\Theta$ and suppose we draw a sample $G \\sim \\P^\\ast_\\theta$. 
Then, as we saw in Section~\\ref{Sec:General}, the MLE $\\hat \\theta$ of $\\theta$ is a solution to the moment-matching equation $\\mathbb{E}_{\\hat \\theta}[\\deg(A)] = \\mathbf{d}$, where $\\mathbf{d}$ is the degree sequence of the sample graph $G$. As in~\\eqref{Eq:MLE-Sym}, the moment-matching equation is equivalent to the following system of equations:\n\\begin{equation}\\label{Eq:MLEEqFiniteDisc}\nd_i = \\sum_{j \\neq i} \\mu(\\hat \\theta_i + \\hat \\theta_j), \\quad i = 1,\\dots,n.\n\\end{equation}\n\nSince the natural parameter space $\\Theta = \\mathbb{R}^n$ is open, Proposition~\\ref{Prop:RegularMinimal} tells us that the MLE $\\hat \\theta$ exists and is unique if and only if the empirical degree sequence $\\mathbf{d}$ belongs to the interior $\\mathcal{M}^\\circ$ of the mean parameter space $\\mathcal{M}$.\n\nWe also note that since $\\nu^{\\binom{n}{2}}$ is the counting measure on $S^{\\binom{n}{2}}$, all distributions on $S^{\\binom{n}{2}}$ are absolutely continuous with respect to $\\nu^{\\binom{n}{2}}$, so $\\mathfrak{P}$ contains all probability distributions on $S^{\\binom{n}{2}}$. In particular, $\\mathfrak{P}$ contains the Dirac measures, and by Proposition~\\ref{Prop:MConvW}, this implies $\\mathcal{M} = \\text{conv}(\\mathcal{W})$, where $\\mathcal{W}$ is the set of all graphical sequences.\n\nThe following result characterizes when $\\mathbf{d}$ is a degree sequence of a weighted graph with edge weights in $S$; we also refer to such $\\mathbf{d}$ as a {\\em (finite discrete) graphical sequence}. The case $r = 2$ recovers the classical Erd\\H{o}s-Gallai criterion~\\cite{ErdosGallai}.\n\n\\begin{theorem}\\label{Thm:GraphicalFiniteDisc}\nA sequence $(d_1,\\dots,d_n) \\in \\mathbb{N}_0^n$ with $d_1 \\geq d_2 \\geq \\dots \\geq d_n$ is the degree sequence of a graph $G$ with edge weights in the set $S = \\{0,1,\\dots,r-1\\}$, if and only if $\\sum_{i=1}^n d_i$ is even and\n\\begin{equation}\\label{Eq:GraphicalFiniteDisc}\n\\sum_{i=1}^k d_i \\leq (r-1)k(k-1) + \\sum_{j=k+1}^n \\min\\{d_j, (r-1)k\\} \\quad \\text{ for } k = 1,\\dots,n.\n\\end{equation}\n\\end{theorem}\n\nAlthough the result above provides a precise characterization of the set of graphical sequences $\\mathcal{W}$, it is not immediately clear how to characterize the convex hull $\\text{conv}(\\mathcal{W})$, or how to decide whether a given $\\mathbf{d}$ belongs to $\\mathcal{M}^\\circ = \\text{conv}(\\mathcal{W})^\\circ$. Fortunately, in practice we can circumvent this issue by employing the following algorithm to compute the MLE. The case $r = 2$ recovers the iterative algorithm proposed by Chatterjee et al.~\\cite{Chatterjee} in the case of unweighted graphs.\n\n\\begin{theorem}\\label{Thm:MLEAlgFiniteDisc}\nGiven $\\mathbf{d} = (d_1, \\dots, d_n) \\in \\mathbb{R}_+^n$, define the function $\\varphi \\colon \\mathbb{R}^n \\to \\mathbb{R}^n$ by $\\varphi(\\mathbf{x}) = (\\varphi_1(\\mathbf{x}), \\dots, \\varphi_n(\\mathbf{x}))$, where\n\\begin{equation}\\label{Eq:VarphiFiniteDisc}\n\\varphi_i(\\mathbf{x}) = x_i + \\frac{1}{r-1} \\left( \\log \\sum_{j \\neq i} \\mu(x_i+x_j) - \\log d_i\\right).\n\\end{equation}\nStarting from any $\\theta^{(0)} \\in \\mathbb{R}^n$, define\n\\begin{equation}\\label{Eq:MLEAlgFiniteDisc}\n\\theta^{(k+1)} = \\varphi(\\theta^{(k)}), \\quad k \\in \\mathbb{N}_0.\n\\end{equation}\nSuppose $\\mathbf{d} \\in \\text{conv}(\\mathcal{W})^\\circ$, so the MLE equation~\\eqref{Eq:MLEEqFiniteDisc} has a unique solution $\\hat \\theta$. 
Then $\\hat \\theta$ is a fixed point of the function $\\varphi$, and the iterates~\\eqref{Eq:MLEAlgFiniteDisc} converge to $\\hat \\theta$ geometrically fast: there exists a constant $\\beta \\in (0,1)$ that only depends on $(\\|\\hat \\theta\\|_\\infty, \\|\\theta^{(0)}\\|_\\infty)$, such that\n\\begin{equation}\\label{Eq:MLERateOfConvFiniteDisc}\n\\|\\theta^{(k)}-\\hat\\theta\\|_\\infty \\leq \\beta^{k-1} \\: \\|\\theta^{(0)}-\\hat\\theta\\|_\\infty, \\quad k \\in \\mathbb{N}_0.\n\\end{equation}\nConversely, if $\\mathbf{d} \\notin \\text{conv}(\\mathcal{W})^\\circ$, then the sequence $\\{\\theta^{(k)}\\}$ has a divergent subsequence.\n\\end{theorem}\n\n\\begin{figure}[h]\n\\begin{center}\n\\begin{tabular}{ccc}\n\\widgraph{0.38\\textwidth}{alg_conv} &\n\\widgraph{0.28\\textwidth}{scatter_r2} &\n\\widgraph{0.28\\textwidth}{scatter_r5} \\\\\n(a) & (b) & (c)\n\\end{tabular}\n\\caption{(a) Plot of $\\log \\|\\theta^{(t)} - \\hat\\theta\\|_\\infty$ for various values of $r$, where $\\hat\\theta$ is the final value of $\\theta^{(t)}$ when the algorithm converges; (b) Scatter plot of the estimate $\\hat \\theta$ vs.\\ the true parameter $\\theta$ for $r = 2$; (c) Scatter plot for $r = 5$.}\n\\label{Fig:FiniteDisc}\n\\end{center}\n\\end{figure}\n\nFigure~\\ref{Fig:FiniteDisc} demonstrates the performance of the algorithm presented above. We set $n = 200$ and sample $\\theta \\in [-1,1]^n$ uniformly at random. Then for each $2 \\leq r \\leq 10$, we sample a graph from the distribution $\\P^\\ast_\\theta$, compute the empirical degree sequence $\\mathbf{d}$, and run the iterative algorithm starting with $\\theta^{(0)} = 0$ until convergence. The left panel (Figure~\\ref{Fig:FiniteDisc}(a)) shows the rate of convergence (on a logarithmic scale) of the algorithm for various values of $r$. We observe that the iterates $\\{\\theta^{(t)}\\}$ indeed converge geometrically fast to the MLE $\\hat \\theta$, but the rate of convergence decreases as $r$ increases. By examining the proof of Theorem~\\ref{Thm:MLEAlgFiniteDisc} in Section~\\ref{Sec:ProofFiniteDisc}, we see that the term $\\beta$ has the expression\n\\begin{equation*}\n\\beta^2 = 1-\\frac{1}{(r-1)^2} \\: \\left(\\min \\left\\{\\frac{\\exp(2K)-1}{\\exp(2rK)-1}, \\; -\\frac{\\mu'(2K)}{\\mu(-2K)} \\right\\}\\right)^2,\n\\end{equation*}\nwhere $K = 2\\|\\hat\\theta\\|_\\infty + \\|\\theta^{(0)}\\|_\\infty$. This shows that $\\beta$ is an increasing function of $r$, which explains the empirical decrease in the rate of convergence as $r$ increases.\n\n\nFigures~\\ref{Fig:FiniteDisc}(b) and (c) show the plots of the estimate $\\hat \\theta$ versus the true $\\theta$. Notice that the points lie close to the diagonal line, which suggests that the MLE $\\hat \\theta$ is very close to the true parameter $\\theta$. Indeed, the following result shows that $\\hat \\theta$ is a consistent estimator of $\\theta$. Recall that $\\hat \\theta$ is {\\em consistent} if $\\hat \\theta$ converges in probability to $\\theta$ as $n \\to \\infty$.\n\n\\begin{theorem}\\label{Thm:ConsistencyFiniteDisc}\nLet $M > 0$ and $k > 1$ be fixed. Given $\\theta \\in \\mathbb{R}^n$ with $\\|\\theta\\|_\\infty \\leq M$, consider the problem of finding the MLE $\\hat \\theta$ of $\\theta$ based on one graph sample $G \\sim \\P^\\ast_\\theta$. 
Then for sufficiently large $n$, with probability at least $1-2n^{-(k-1)}$ the MLE $\\hat \\theta$ exists and satisfies\n\\begin{equation*}\n\\|\\hat \\theta - \\theta\\|_\\infty \\leq C \\sqrt{\\frac{k \\log n}{n}},\n\\end{equation*}\nwhere $C$ is a constant that only depends on $M$.\n\\end{theorem}\n\n\n\n\\subsection{Continuous weighted graphs}\n\\label{Sec:Cont}\n\nIn this section we study weighted graphs with edge weights in $\\mathbb{R}_0$. The proofs of the results presented here are provided in Section~\\ref{Sec:ProofCont}.\n\n\n\\subsubsection{Characterization of the distribution}\n\nWe take $\\nu$ to be the Lebesgue measure on $\\mathbb{R}_0$. The marginal log-partition function is\n\\begin{equation*}\nZ_1(t) = \\log \\int_{\\mathbb{R}_0} \\exp(-ta) \\: da =\n\\begin{cases}\n\\log(1\/t) & \\text{ if } t > 0 \\\\\n\\infty \\quad & \\text{ if } t \\leq 0.\n\\end{cases}\n\\end{equation*}\nThus $\\text{Dom}(Z_1) = \\mathbb{R}_+$, and the natural parameter space is\n\\begin{equation*}\n\\Theta = \\{(\\theta_1, \\dots, \\theta_n) \\in \\mathbb{R}^n \\colon \\theta_i+\\theta_j > 0 \\text{ for } i \\neq j\\}.\n\\end{equation*}\nFor $\\theta \\in \\Theta$, the edge weights $A_{ij}$ are independent exponential random variables with density\n\\begin{equation*}\np_{ij}^\\ast(a) = (\\theta_i+\\theta_j) \\: \\exp\\big(-(\\theta_i+\\theta_j) \\: a\\big) \\quad \\text{ for } a \\in \\mathbb{R}_0\n\\end{equation*}\nand mean parameter $\\mathbb{E}_\\theta[A_{ij}] = 1\/(\\theta_i+\\theta_j)$. The corresponding mean function is given by\n\\begin{equation*}\n\\mu(t) = \\frac{1}{t}, \\quad t > 0.\n\\end{equation*}\n\n\n\\subsubsection{Existence, uniqueness, and consistency of the MLE}\n\nWe now consider the problem of finding the MLE of $\\theta$ from one graph sample $G \\sim \\P^\\ast_\\theta$. As we saw previously, the MLE $\\hat \\theta \\in \\Theta$ satisfies the moment-matching equation $\\mathbb{E}_{\\hat \\theta}[\\deg(A)] = \\mathbf{d}$, where $\\mathbf{d}$ is the degree sequence of the sample graph $G$. Equivalently, $\\hat \\theta \\in \\Theta$ is a solution to the system of equations\n\\begin{equation}\\label{Eq:MLEEqCont}\nd_i = \\sum_{j \\neq i} \\frac{1}{\\hat{\\theta}_i + \\hat{\\theta}_j}, \\quad i = 1,\\dots,n.\n\\end{equation}\n\n\\begin{remark}\nThe system~\\eqref{Eq:MLEEqCont} is a special case of a general class that Sanyal, Sturmfels, and Vinzant~\\cite{SturmEntDisc} study using algebraic geometry and matroid theory (extending the work of Proudfoot and Speyer~\\cite{proudfootspeyer}). Define\n\\begin{equation*}\n\\chi(t) = \\sum_{k=0}^n \\left(\\stirlingtwo{n}{k} +n \\stirlingtwo{n-1}{k} \\right)(t-1)_k^{(2)},\n\\end{equation*}\nin which $\\stirlingtwo{n}{k}$ is the Stirling number of the second kind and $(x)_{k+1}^{(2)} = x(x-2)\\cdots (x-2k)$ is a generalized falling factorial. Then, there is a polynomial $H(\\mathbf{d})$ in the $d_i$ such that for $\\mathbf{d} \\in \\mathbb R^n$ with $H(\\mathbf{d}) \\neq 0$, the number of solutions $\\theta \\in \\mathbb R^n$ to~\\eqref{Eq:MLEEqCont} is $(-1)^n \\chi(0)$. Moreover, the polynomial $H(\\mathbf{d})$ has degree $2(-1)^n(n \\chi(0) + \\chi'(0))$ and characterizes those $\\mathbf{d}$ for which the equations above have multiple roots. 
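As a quick sanity check (our computation): for $n = 3$ the Stirling numbers give\n\\begin{equation*}\n\\chi(t) = 4(t-1) + 6(t-1)(t-3) + (t-1)(t-3)(t-5),\n\\end{equation*}\nso that $\\chi(0) = -4 + 18 - 15 = -1$ and $(-1)^3 \\chi(0) = 1$, consistent with the explicit $n = 3$ example below, in which the system reduces to linear equations in the reciprocals $1\/(\\hat{\\theta}_i+\\hat{\\theta}_j)$ and has exactly one solution.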
We refer to \\cite{SturmEntDisc} for more details.\n\\end{remark}\n\nSince the natural parameter space $\\Theta$ is open, Proposition~\\ref{Prop:RegularMinimal} tells us that the MLE $\\hat \\theta$ exists and is unique if and only if the empirical degree sequence $\\mathbf{d}$ belongs to the interior $\\mathcal{M}^\\circ$ of the mean parameter space $\\mathcal{M}$. We characterize the set of graphical sequences $\\mathcal{W}$ and determine its relation to the mean parameter space $\\mathcal{M}$.\n\nWe say $\\mathbf{d} = (d_1, \\dots, d_n)$ is a {\\em (continuous) graphical sequence} if there is a graph $G$ with edge weights in $\\mathbb{R}_0$ that realizes $\\mathbf{d}$. The finite discrete graphical sequences from Section~\\ref{Sec:FiniteDiscrete} have combinatorial constraints because there are only finitely many possible edge weights between any pair of vertices, and these constraints translate into a set of inequalities in the generalized Erd\\H{o}s-Gallai criterion in Theorem~\\ref{Thm:GraphicalFiniteDisc}. In the case of continuous weighted graphs, however, we do not have these constraints because every edge can have as much weight as possible. Therefore, the criterion for a continuous graphical sequence should be simpler than in Theorem~\\ref{Thm:GraphicalFiniteDisc}, as the following result shows.\n\n\\begin{theorem}\\label{Thm:GraphicalCont}\nA sequence $(d_1, \\dots, d_n) \\in \\mathbb{R}_0^n$ is graphic if and only if\n\\begin{equation}\\label{Eq:GraphicalCont}\n\\max_{1 \\leq i \\leq n} d_i \\leq \\frac{1}{2} \\sum_{i=1}^n d_i.\n\\end{equation}\n\\end{theorem}\n\nWe note that condition~\\eqref{Eq:GraphicalCont} is implied by the case $k = 1$ in the conditions~\\eqref{Eq:GraphicalFiniteDisc}. This is to be expected, since any finite discrete weighted graph is also a continuous weighted graph, so finite discrete graphical sequences are also continuous graphical sequences.\n\nGiven the criterion in Theorem~\\ref{Thm:GraphicalCont}, we can write the set $\\mathcal{W}$ of graphical sequences as\n\\begin{equation*}\n\\mathcal{W} = \\Big\\{(d_1, \\dots, d_n) \\in \\mathbb{R}_0^n \\colon \\max_{1 \\leq i \\leq n} d_i \\leq \\frac{1}{2} \\sum_{i=1}^n d_i \\Big\\}.\n\\end{equation*}\nMoreover, we can also show that the set of graphical sequences coincide with the mean parameter space.\n\n\\begin{lemma}\\label{Lem:W-Convex}\nThe set $\\mathcal{W}$ is convex, and $\\mathcal{M} = \\mathcal{W}$.\n\\end{lemma}\n\nThe result above, together with the result of Proposition~\\ref{Prop:RegularMinimal}, implies that the MLE $\\hat \\theta$ exists and is unique if and only if the empirical degree sequence $\\mathbf{d}$ belongs to the interior of the mean parameter space, which can be written explicitly as\n\\begin{equation*}\n\\mathcal{M}^\\circ = \\Big\\{(d_1', \\dots, d_n') \\in \\mathbb{R}_+^n \\colon \\max_{1 \\leq i \\leq n} d_i' < \\frac{1}{2} \\sum_{i=1}^n d_i' \\Big\\}.\n\\end{equation*}\n\n\n\\begin{example}\nLet $n = 3$ and $\\mathbf{d} = (d_1,d_2,d_3) \\in \\mathbb{R}^n$ with $d_1 \\geq d_2 \\geq d_3$. 
It is easy to see that the system of equations~\\eqref{Eq:MLEEqCont} gives us\n\\begin{equation*}\n\\begin{split}\n\\frac{1}{\\hat{\\theta}_1+\\hat{\\theta}_2} &= \\frac{1}{2}(d_1+d_2-d_3), \\\\\n\\frac{1}{\\hat{\\theta}_1+\\hat{\\theta}_3} &= \\frac{1}{2}(d_1-d_2+d_3), \\\\\n\\frac{1}{\\hat{\\theta}_2+\\hat{\\theta}_3} &= \\frac{1}{2}(-d_1+d_2+d_3),\n\\end{split}\n\\end{equation*}\nfrom which we obtain a unique solution $\\hat{\\theta} = (\\hat{\\theta}_1,\\hat{\\theta}_2,\\hat{\\theta}_3)$. Recall that $\\hat{\\theta} \\in \\Theta$ means $\\hat{\\theta}_1+\\hat{\\theta}_2 > 0$, $\\hat{\\theta}_1+\\hat{\\theta}_3 > 0$, and $\\hat{\\theta}_2+\\hat{\\theta}_3 > 0$, so the equations above tell us that $\\hat{\\theta} \\in \\Theta$ if and only if $d_1 < d_2+d_3$. In particular, this also implies $d_3 > d_1-d_2 \\geq 0$, so $\\mathbf{d} \\in \\mathbb{R}_+^3$. Hence, there is a unique solution $\\hat{\\theta} \\in \\Theta$ to the system of equations~\\eqref{Eq:MLEEqCont} if and only if $\\mathbf{d} \\in \\mathcal{M}^\\circ$, as claimed above.\n\\end{example}\n\nFinally, we prove that the MLE $\\hat \\theta$ is a consistent estimator of $\\theta$.\n\n\\begin{theorem}\\label{Thm:ConsistencyCont}\nLet $M \\geq L > 0$ and $k \\geq 1$ be fixed. Given $\\theta \\in \\Theta$ with $L \\leq \\theta_i + \\theta_j \\leq M$, $i \\neq j$, consider the problem of finding the MLE $\\hat \\theta \\in \\Theta$ of $\\theta$ from one graph sample $G \\sim \\P^\\ast_\\theta$. Then for sufficiently large $n$, with probability at least $1-2n^{-(k-1)}$ the MLE $\\hat \\theta \\in \\Theta$ exists and satisfies\n\\begin{equation*}\n\\|\\hat \\theta - \\theta\\|_\\infty \\leq \\frac{100M^2}{L} \\sqrt{\\frac{k \\log n}{\\gamma n}},\n\\end{equation*}\nwhere $\\gamma > 0$ is a universal constant.\n\\end{theorem}\n\n\n\n\n\n\n\\subsection{Infinite discrete weighted graphs}\n\\label{Sec:InfiniteDisc}\n\nWe now turn our focus to weighted graphs with edge weights in $\\mathbb{N}_0$. The proofs of the results presented here can be found in Section~\\ref{Sec:ProofInfiniteDisc}.\n\n\n\\subsubsection{Characterization of the distribution}\n\nWe take $\\nu$ to be the counting measure on $\\mathbb{N}_0$. In this case the marginal log-partition function is given by\n\\begin{equation*}\nZ_1(t) = \\log \\sum_{a = 0}^\\infty \\exp(-at) = \n\\begin{cases}\n-\\log\\big( 1-\\exp(-t) \\big) & \\text{ if } t > 0, \\\\\n\\infty \\quad & \\text{ if } t \\leq 0.\n\\end{cases}\n\\end{equation*}\nThus, the domain of $Z_1$ is $\\text{Dom}(Z_1) = (0,\\infty)$, and the natural parameter space is\n\\begin{equation*}\n\\Theta = \\{(\\theta_1,\\dots,\\theta_n) \\in \\mathbb{R}^n \\colon \\theta_i+\\theta_j > 0 \\text{ for } i \\neq j\\},\n\\end{equation*}\nwhich is the same natural parameter space as in the case of continuous weighted graphs in the preceding section. 
Given $\\theta \\in \\Theta$, the edge weights $A_{ij}$ are independent geometric random variables with probability mass function\n\\begin{equation*}\n\\P^\\ast(A_{ij}=a) = \\big(1-\\exp(-\\theta_i-\\theta_j)\\big) \\: \\exp\\big( -(\\theta_i+\\theta_j) \\: a \\big), \\quad a \\in \\mathbb{N}_0.\n\\end{equation*}\nThe mean parameters are\n\\begin{equation*}\n\\mathbb{E}_{\\P^\\ast}[A_{ij}] = \\frac{\\exp(-\\theta_i-\\theta_j)}{1-\\exp(-\\theta_i-\\theta_j)} = \\frac{1}{\\exp(\\theta_i+\\theta_j)-1},\n\\end{equation*}\ninduced by the mean function\n\\begin{equation*}\n\\mu(t) = \\frac{1}{\\exp(t)-1}, \\quad t > 0.\n\\end{equation*}\n\n\n\n\\subsubsection{Existence, uniqueness, and consistency of the MLE}\n\nConsider the problem of finding the MLE of $\\theta$ from one graph sample $G \\sim \\P^\\ast_\\theta$. Let $\\mathbf{d}$ denote the degree sequence of $G$. Then the MLE $\\hat \\theta \\in \\Theta$, which satisfies the moment-matching equation $\\mathbb{E}_{\\hat \\theta}[\\deg(A)] = \\mathbf{d}$, is a solution to the system of equations\n\\begin{equation}\\label{Eq:MLEEqInfiniteDisc}\nd_i = \\sum_{j \\neq i} \\frac{1}{\\exp(\\hat{\\theta}_i + \\hat{\\theta}_j)-1}, \\quad i = 1,\\dots,n.\n\\end{equation}\n\nWe note that the natural parameter space $\\Theta$ is open, so by Proposition~\\ref{Prop:RegularMinimal}, the MLE $\\hat \\theta$ exists and is unique if and only if $\\mathbf{d} \\in \\mathcal{M}^\\circ$, where $\\mathcal{M}$ is the mean parameter space. Since $\\nu^{\\binom{n}{2}}$ is the counting measure on $\\mathbb{N}_0^{\\binom{n}{2}}$, the set $\\mathfrak{P}$ contains all the Dirac measures, so we know $\\mathcal{M} = \\text{conv}(\\mathcal{W})$ from Proposition~\\ref{Prop:MConvW}. Here $\\mathcal{W}$ is the set of all {\\em (infinite discrete) graphical sequences}, namely, the set of degree sequences of weighted graphs with edge weights in $\\mathbb{N}_0$. The following result provides a precise criterion for such graphical sequences. Note that condition~\\eqref{Eq:GraphicalInfiniteDisc} below is implied by the limit $r \\to \\infty$ in Theorem~\\ref{Thm:GraphicalFiniteDisc}.\n\n\\begin{theorem}\\label{Thm:GraphicalInfiniteDisc}\nA sequence $(d_1, \\dots, d_n) \\in \\mathbb{N}_0^n$ is graphic if and only if $\\sum_{i=1}^n d_i$ is even and\n\\begin{equation}\\label{Eq:GraphicalInfiniteDisc}\n\\max_{1 \\leq i \\leq n} d_i \\leq \\frac{1}{2} \\sum_{i=1}^n d_i.\n\\end{equation}\n\\end{theorem}\n\nThe criterion in Theorem~\\ref{Thm:GraphicalInfiniteDisc} allows us to write an explicit form for the set of graphical sequences $\\mathcal{W}$,\n\\begin{equation*}\n\\mathcal{W} = \\Big\\{(d_1, \\dots, d_n) \\in \\mathbb{N}_0^n \\colon \\sum_{i=1}^n d_i \\text{ is even and } \\max_{1 \\leq i \\leq n} d_i \\leq \\frac{1}{2} \\sum_{i=1}^n d_i \\Big\\}.\n\\end{equation*}\nNow we need to characterize $\\text{conv}(\\mathcal{W})$. 
Let $\\mathcal{W}_1$ denote the set of all continuous graphical sequences from Theorem~\\ref{Thm:GraphicalCont}, when the edge weights are in $\\mathbb{R}_0$,\n\\begin{equation*}\n\\mathcal{W}_1 = \\Big\\{(d_1, \\dots, d_n) \\in \\mathbb{R}_0^n \\colon \\max_{1 \\leq i \\leq n} d_i \\leq \\frac{1}{2} \\sum_{i=1}^n d_i \\Big\\}.\n\\end{equation*}\nIt turns out that when we take the convex hull of $\\mathcal{W}$, we essentially recover $\\mathcal{W}_1$.\n\n\\begin{lemma}\\label{Lem:ConvWInfiniteDisc}\n$\\overline{\\text{conv}(\\mathcal{W})} = \\mathcal{W}_1$.\n\\end{lemma}\n\nRecalling that a convex set and its closure have the same interior points, the result above gives us\n\\begin{equation*}\n\\mathcal{M}^\\circ = \\text{conv}(\\mathcal{W})^\\circ = \\big(\\overline{\\text{conv}(\\mathcal{W})}\\big)^\\circ = \\mathcal{W}_1^\\circ =\n\\Big\\{(d_1, \\dots, d_n) \\in \\mathbb{R}_+^n \\colon \\max_{1 \\leq i \\leq n} d_i < \\frac{1}{2} \\sum_{i=1}^n d_i \\Big\\}.\n\\end{equation*}\n\n\\begin{example}\nLet $n = 3$ and $\\mathbf{d} = (d_1,d_2,d_3) \\in \\mathbb{R}^n$ with $d_1 \\geq d_2 \\geq d_3$. It can be easily verified that the system of equations~\\eqref{Eq:MLEEqInfiniteDisc} gives us\n\\begin{equation*}\n\\begin{split}\n\\hat{\\theta}_1+\\hat{\\theta}_2 &= \\log\\left( 1+\\frac{2}{d_1+d_2-d_3} \\right), \\\\\n\\hat{\\theta}_1+\\hat{\\theta}_3 &= \\log\\left( 1+\\frac{2}{d_1-d_2+d_3} \\right), \\\\\n\\hat{\\theta}_2+\\hat{\\theta}_3 &= \\log\\left( 1+\\frac{2}{-d_1+d_2+d_3} \\right),\n\\end{split}\n\\end{equation*}\nfrom which we can obtain a unique solution $\\hat{\\theta} = (\\hat{\\theta}_1,\\hat{\\theta}_2,\\hat{\\theta}_3)$. Recall that $\\hat{\\theta} \\in \\Theta$ means $\\hat{\\theta}_1+\\hat{\\theta}_2>0$, $\\hat{\\theta}_1+\\hat{\\theta}_3>0$, and $\\hat{\\theta}_2+\\hat{\\theta}_3>0$, so the equations above tell us that $\\hat\\theta \\in \\Theta$ if and only if $2\/(-d_1+d_2+d_3) > 0$, or equivalently, $d_1 < d_2+d_3$. This also implies $d_3 > d_1-d_2 \\geq 0$, so $\\mathbf{d} \\in \\mathbb{R}_+^3$. Thus, the system of equations~\\eqref{Eq:MLEEqInfiniteDisc} has a unique solution $\\hat{\\theta} \\in \\Theta$ if and only if $\\mathbf{d} \\in \\mathcal{M}^\\circ$, as claimed above.\n\\end{example}\n\nFinally, we prove that with high probability the MLE $\\hat \\theta$ exists and converges to $\\theta$.\n\n\\begin{theorem}\\label{Thm:ConsistencyInfiniteDisc}\nLet $M \\geq L > 0$ and $k \\geq 1$ be fixed. Given $\\theta \\in \\Theta$ with $L \\leq \\theta_i + \\theta_j \\leq M$, $i \\neq j$, consider the problem of finding the MLE $\\hat \\theta \\in \\Theta$ of $\\theta$ from one graph sample $G \\sim \\P^\\ast_\\theta$. Then for sufficiently large $n$, with probability at least $1-3n^{-(k-1)}$ the MLE $\\hat \\theta \\in \\Theta$ exists and satisfies\n\\begin{equation*}\n\\|\\hat \\theta - \\theta\\|_\\infty \\leq \\frac{8 \\: \\exp(5M)}{L} \\: \\sqrt{\\frac{k \\log n}{\\gamma n}},\n\\end{equation*}\nwhere $\\gamma > 0$ is a universal constant.\n\\end{theorem}\n\n\n\n\n\\section{Proofs of main results}\n\\label{Sec:Proofs}\n\nIn this section we provide proofs for the technical results presented in Section~\\ref{Sec:Specific}. 
The proofs of the characterization of weighted graphical sequences (Theorems~\\ref{Thm:GraphicalFiniteDisc},~\\ref{Thm:GraphicalCont}, and~\\ref{Thm:GraphicalInfiniteDisc}) are inspired by the constructive proof of the classical Erd\\H{o}s-Gallai criterion by Choudum~\\cite{Choudum}.\n\n\n\\subsection{Preliminaries}\n\nWe begin by presenting several results that we will use in this section. We use the definition of sub-exponential random variables and the concentration inequality presented in~\\cite{vershynin}.\n\n\n\\subsubsection{Concentration inequality for sub-exponential random variables}\n\nWe say that a real-valued random variable $X$ is {\\em sub-exponential} with parameter $\\kappa > 0$ if\n\\begin{equation*}\n\\mathbb{E}[|X|^p]^{1\/p} \\leq \\kappa p \\quad \\text{ for all } p \\geq 1.\n\\end{equation*}\nNote that if $X$ is a $\\kappa $-sub-exponential random variable with finite first moment, then the centered random variable $X-\\mathbb{E}[X]$ is also sub-exponential with parameter $2 \\kappa $. This follows from the triangle inequality applied to the $p$-norm, followed by Jensen's inequality for $p \\geq 1$:\n\\begin{equation*}\n\\begin{split}\n\\mathbb{E}\\big[\\big|X-\\mathbb{E}[X]\\big|^p\\big]^{1\/p}\n&\\leq \\mathbb{E}[|X|^p]^{1\/p} + \\big|\\mathbb{E}[X]\\big|\n\\leq 2\\mathbb{E}[|X|^p]^{1\/p}.\n\\end{split}\n\\end{equation*}\nSub-exponential random variables satisfy the following concentration inequality.\n\n\\begin{theorem}[{\\cite[Corollary~5.17]{vershynin}}]\\label{Thm:ConcIneqSubExp}\nLet $X_1, \\dots, X_n$ be independent centered random variables, and suppose each $X_i$ is sub-exponential with parameter $\\kappa_i$. Let $\\kappa = \\max_{1 \\leq i \\leq n} \\kappa_i$. Then for every $\\epsilon \\geq 0$,\n\\begin{equation*}\n\\P\\left( \\left| \\frac{1}{n} \\sum_{i=1}^n X_i \\right| \\geq \\epsilon \\right) \\leq 2\\exp\\left[-\\gamma \\, n \\cdot \\min\\Big(\\frac{\\epsilon^2}{\\kappa^2}, \\: \\frac{\\epsilon}{\\kappa} \\Big) \\right],\n\\end{equation*}\nwhere $\\gamma > 0$ is an absolute constant.\n\\end{theorem}\n\nWe will apply the concentration inequality above to exponential and geometric random variables, which are the distributions of the edge weights of continuous weighted graphs (from Section~\\ref{Sec:Cont}) and infinite discrete weighted graphs (from Section~\\ref{Sec:InfiniteDisc}).\n\n\\begin{lemma}\\label{Lem:SubExp-Exp}\nLet $X$ be an exponential random variable with $\\mathbb{E}[X] = 1\/\\lambda$. Then $X$ is sub-exponential with parameter $1\/\\lambda$, and the centered random variable $X-1\/\\lambda$ is sub-exponential with parameter $2\/\\lambda$.\n\\end{lemma}\n\\begin{proof}\nFor any $p \\geq 1$, we can evaluate the moment of $X$ directly:\n\\begin{equation*}\n\\mathbb{E}[|X|^p] = \\int_0^\\infty x^p \\cdot \\lambda \\, \\exp(-\\lambda x) \\: dx\n= \\frac{1}{\\lambda^p} \\: \\int_0^\\infty y^p \\: \\exp(-y) \\: dy\n= \\frac{\\Gamma(p+1)}{\\lambda^p},\n\\end{equation*}\nwhere $\\Gamma$ is the gamma function, and in the computation above we have used the substitution $y = \\lambda x$. 
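The next step of the proof invokes the elementary bound $\\Gamma(p+1) \\leq p^p$ for $p \\geq 1$; for completeness we record one quick justification (an added remark). Since $\\Gamma(p) \\leq 1$ on $[1,2]$, we have $\\Gamma(p+1) = p\\,\\Gamma(p) \\leq p \\leq p^p$ for $p \\in [1,2]$; inductively, if the bound holds for $p \\in [k,k+1]$, then for $p \\in [k+1,k+2]$,\n\\begin{equation*}\n\\Gamma(p+1) = p \\, \\Gamma(p) \\leq p \\, (p-1)^{p-1} \\leq p \\cdot p^{p-1} = p^p.\n\\end{equation*}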
It can be easily verified that $\\Gamma(p+1) \\leq p^p$ for $p \\geq 1$, so\n\\begin{equation*}\n\\mathbb{E}[|X|^p]^{1\/p} = \\frac{\\big(\\Gamma(p+1)\\big)^{1\/p}}{\\lambda} \\leq \\frac{p}{\\lambda}.\n\\end{equation*}\nThis shows that $X$ is sub-exponential with parameter $1\/\\lambda$.\n\\end{proof}\n\n\\begin{lemma}\\label{Lem:SubExp-Geo}\nLet $X$ be a geometric random variable with parameter $q \\in (0,1)$, so\n\\begin{equation*}\n\\P(X = a) = (1-q)^a \\, q, \\quad a \\in \\mathbb{N}_0.\n\\end{equation*}\nThen $X$ is sub-exponential with parameter $-2\/\\log(1-q)$, and the centered random variable $X - (1-q)\/q$ is sub-exponential with parameter $-4\/\\log(1-q)$.\n\\end{lemma}\n\\begin{proof}\nFix $p \\geq 1$, and consider the function $f \\colon \\mathbb{R}_0 \\to \\mathbb{R}_0$, $f(x) = x^p (1-q)^x$. One can easily verify that $f$ is increasing for $0 \\leq x \\leq \\lambda$ and decreasing on $x \\geq \\lambda$, where $\\lambda = -p\/\\log(1-q)$. In particular, for all $x \\in \\mathbb{R}_0$ we have $f(x) \\leq f(\\lambda)$, and\n\\begin{equation*}\n\\begin{split}\nf(\\lambda) &= \\lambda^p (1-q)^\\lambda\n= \\left( \\frac{p}{-\\log (1-q)} \\cdot (1-q)^{-1\/\\log(1-q)} \\right)^p\n= \\left( \\frac{p}{-e \\cdot \\log (1-q)} \\right)^p.\n\\end{split}\n\\end{equation*}\n\nNow note that for $0 \\leq a \\leq \\lfloor \\lambda \\rfloor - 1$ we have $f(a) \\leq \\int_a^{a+1} f(x) \\: dx$, and for $a \\geq \\lceil \\lambda \\rceil + 1$ we have $f(a) \\leq \\int_{a-1}^a f(x) \\: dx$. Thus, we can bound\n\\begin{equation*}\n\\begin{split}\n\\sum_{a=0}^\\infty f(a)\n&= \\sum_{a=0}^{\\lfloor \\lambda \\rfloor - 1} f(a) + \\sum_{a=\\lfloor \\lambda \\rfloor}^{\\lceil \\lambda \\rceil} f(a) + \\sum_{a = \\lceil \\lambda \\rceil + 1}^\\infty f(a) \\\\\n&\\leq \\int_0^{\\lfloor \\lambda \\rfloor} f(x) \\: dx + 2f(\\lambda) + \\int_{\\lceil \\lambda \\rceil}^\\infty f(x) \\: dx \\\\\n&\\leq \\int_0^\\infty f(x) \\: dx + 2f(\\lambda).\n\\end{split}\n\\end{equation*}\nUsing the substitution $y = -x \\log(1-q)$, we can evaluate the integral to be\n\\begin{equation*}\n\\begin{split}\n\\int_0^\\infty f(x) \\: dx\n&= \\int_0^\\infty x^p \\exp\\left(x \\cdot \\log (1-q) \\right) \\: dx\n= \\frac{1}{(-\\log(1-q))^{p+1}} \\: \\int_0^\\infty y^p \\: \\exp(-y) \\: dy \\\\\n&= \\frac{\\Gamma(p+1)}{(-\\log(1-q))^{p+1}}\n\\leq \\frac{p^p}{(-\\log(1-q))^{p+1}},\n\\end{split}\n\\end{equation*}\nwhere in the last step we have again used the relation $\\Gamma(p+1) \\leq p^p$.\nWe use the result above, along with the expression of $f(\\lambda)$, to bound the moment of $X$:\n\\begin{equation*}\n\\begin{split}\n\\mathbb{E}[|X|^p] &= \\sum_{a=0}^\\infty a^p \\cdot (1-q)^a \\: q\n= q \\: \\sum_{a=0}^\\infty f(a) \\\\\n&\\leq q \\: \\int_0^\\infty f(x) \\: dx + 2q \\: f(\\lambda) \\\\\n&\\leq \\left(\\frac{q^{1\/p} \\: p}{(-\\log(1-q))^{1+1\/p}} \\right)^p + \\left( \\frac{(2q)^{1\/p} \\: p}{-e \\cdot \\log (1-q)} \\right)^p \\\\\n&\\leq \\left( \\frac{q^{1\/p} \\: p}{(-\\log(1-q))^{1+1\/p}} + \\frac{(2q)^{1\/p} \\: p}{-e \\cdot \\log (1-q)} \\right)^p,\n\\end{split}\n\\end{equation*}\nwhere in the last step we have used the fact that $x^p + y^p \\leq (x+y)^p$ for $x,y \\geq 0$ and $p \\geq 1$. 
This gives us\n\\begin{equation*}\n\\begin{split}\n\\frac{1}{p} \\: \\mathbb{E}[|X|^p]^{1\/p}\n&\\leq \\frac{q^{1\/p}}{(-\\log(1-q))^{1+1\/p}} + \\frac{2^{1\/p} \\: q^{1\/p}}{-e \\cdot \\log (1-q)} \\\\\n&= \\frac{1}{-\\log(1-q)} \\: \\left(\\left(\\frac{q}{-\\log(1-q)}\\right)^{1\/p} + \\frac{2^{1\/p} \\: q^{1\/p}}{e}\\right).\n\\end{split}\n\\end{equation*}\nNow note that $q \\leq -\\log(1-q)$ for $0 < q < 1$, so $(-q\/\\log(1-q))^{1\/p} \\leq 1$. Moreover, $(2q)^{1\/p} \\leq 2^{1\/p} \\leq 2$. Therefore, for any $p \\geq 1$, we have\n\\begin{equation*}\n\\begin{split}\n\\frac{1}{p} \\: \\mathbb{E}[|X|^p]^{1\/p}\n&\\leq \\frac{1}{-\\log(1-q)} \\: \\left(1 + \\frac{2}{e}\\right)\n< \\frac{2}{-\\log(1-q)}.\n\\end{split}\n\\end{equation*}\nThus, we conclude that $X$ is sub-exponential with parameter $-2\/\\log(1-q)$.\n\\end{proof}\n\n\n\n\\subsubsection{Bound on the inverses of diagonally-dominant matrices}\n\nAn $n\\times n$ real matrix $J$ is {\\em diagonally dominant} if\n\\begin{equation*}\n\\Delta_i(J) := |J_{ii}| - \\sum_{j \\neq i} |J_{ij}| \\geq 0, \\quad \\text{for } i = 1,\\dots,n.\n\\end{equation*}\nWe say that $J$ is {\\em diagonally balanced} if $\\Delta_i(J) = 0$ for $i = 1,\\dots,n$. We have the following bound from~\\cite{HLW} on the inverses of diagonally dominant matrices. This bound is independent of $\\Delta_i$, so it is also applicable to diagonally balanced matrices. We will use this result in the proofs of Theorems~\\ref{Thm:ConsistencyCont} and~\\ref{Thm:ConsistencyInfiniteDisc}.\n\n\\begin{theorem}[{\\cite[Theorem~1.1]{HLW}}]\\label{Thm:Main}\nLet $n \\geq 3$. For any symmetric diagonally dominant matrix $J$ with $J_{ij} \\geq \\ell > 0$, we have\n\\begin{equation*}\n\\|J^{-1}\\|_\\infty \\leq \\frac{3n-4}{2\\ell(n-2)(n-1)}.\n\\end{equation*}\n\\end{theorem}\n\n\n\n\n\n\\subsection{Proofs for the finite discrete weighted graphs}\n\\label{Sec:ProofFiniteDisc}\n\n\nIn this section we prove the results presented in Section~\\ref{Sec:FiniteDiscrete}.\n\n\\subsubsection{Proof of Theorem~\\ref{Thm:GraphicalFiniteDisc}}\n\n\nWe first prove the necessity of~\\eqref{Eq:GraphicalFiniteDisc}. Suppose $\\mathbf{d} = (d_1,\\dots,d_n)$ is the degree sequence of a graph $G$ with edge weights $a_{ij} \\in S$. Then $\\sum_{i=1}^n d_i = 2\\sum_{\\{i,j\\}} a_{ij}$ is even. Moreover, for each $1 \\leq k \\leq n$, $\\sum_{i=1}^k d_i$ counts the total edge weight emanating from the vertices $1,\\dots,k$. The total edge weight among these $k$ vertices themselves is at most $(r-1)k(k-1)$, and for each vertex $j \\notin \\{1,\\dots,k\\}$, the total edge weight from these $k$ vertices to vertex $j$ is at most $\\min\\{d_j,(r-1)k\\}$, so by summing over $j \\notin \\{1,\\dots,k\\}$ we get~\\eqref{Eq:GraphicalFiniteDisc}.\n\nTo prove the sufficiency of~\\eqref{Eq:GraphicalFiniteDisc} we use induction on $s := \\sum_{i=1}^n d_i$. The base case $s = 0$ is trivial. Assume the statement holds for $s-2$, and suppose we have a sequence $\\mathbf{d}$ with $d_1 \\geq d_2 \\geq \\dots \\geq d_n$ satisfying~\\eqref{Eq:GraphicalFiniteDisc} with $\\sum_{i=1}^n d_i = s$. Without loss of generality we may assume $d_n \\geq 1$, for otherwise we can proceed with only the nonzero elements of $\\mathbf{d}$. Let $1 \\leq t \\leq n-1$ be the smallest index such that $d_t > d_{t+1}$, with $t = n-1$ if $d_1 = \\cdots = d_n$.
Define $\\mathbf{d}' = (d_1,\\dots,d_{t-1},d_t-1,d_{t+1},\\dots,d_{n-1},d_n-1)$, so we have $d_1' = \\cdots = d_{t-1}' > d_t' \\geq d_{t+1}' \\geq \\cdots \\geq d_{n-1}' > d_n'$ and $\\sum_{i=1}^n d_i' = s-2$.\n\nWe will show that $\\mathbf{d}'$ satisfies~\\eqref{Eq:GraphicalFiniteDisc}. By the inductive hypothesis, this means $\\mathbf{d}'$ is the degree sequence of a graph $G'$ with edge weights $a_{ij}' \\in \\{0,1,\\dots,r-1\\}$. We now modify $G'$ to obtain a graph $G$ whose degree sequence is equal to $\\mathbf{d}$. If the weight $a_{tn}'$ of the edge $(t,n)$ is less than $r-1$, then we can obtain $G$ by increasing $a_{tn}'$ by $1$, since the degree of vertex $t$ is now $d_t'+1 = d_t$, and the degree of vertex $n$ is now $d_n'+1 = d_n$. Otherwise, suppose $a_{tn}' = r-1$. Since $d_t' = d_1'-1$, there exists a vertex $u \\neq n$ such that $a_{tu}' < r-1$. Since $d_u' > d_n'$, there exists another vertex $v$ such that $a_{uv}' > a_{vn}'$. Then we can obtain the graph $G$ by increasing $a_{tu}'$ and $a_{vn}'$ by $1$ and reducing $a_{uv}'$ by $1$, so that now the degrees of vertices $t$ and $n$ are each increased by $1$, and the degrees of vertices $u$ and $v$ are preserved.\n\nIt now remains to show that $\\mathbf{d}'$ satisfies~\\eqref{Eq:GraphicalFiniteDisc}. We divide the proof into several cases for different values of $k$. We will repeatedly use the fact that $\\mathbf{d}$ satisfies~\\eqref{Eq:GraphicalFiniteDisc}, as well as the inequality $\\min\\{a,b\\}-1 \\leq \\min\\{a-1,b\\}$.\n\\begin{enumerate}\n \\item For $k = n$:\n\\begin{equation*}\n\\sum_{i=1}^n d_i' = \\sum_{i=1}^n d_i-2 \\leq (r-1)n(n-1)-2 < (r-1)n(n-1).\n\\end{equation*}\n \\item For $t \\leq k \\leq n-1$:\n\\begin{equation*}\n\\begin{split}\n\\sum_{i=1}^k d_i' &= \\sum_{i=1}^k d_i-1 \\leq (r-1)k(k-1) + \\sum_{j=k+1}^n \\min\\{d_j,(r-1)k\\} - 1 \\\\\n&\\leq (r-1)k(k-1) + \\sum_{j=k+1}^{n-1} \\min\\{d_j,(r-1)k\\} + \\min\\{d_n-1,(r-1)k\\} \\\\\n&= (r-1)k(k-1) + \\sum_{j=k+1}^n \\min\\{d_j',(r-1)k\\}.\n\\end{split}\n\\end{equation*}\n \\item For $1 \\leq k \\leq t-1$: first suppose $d_n \\geq 1+(r-1)k$. Then for all $j$ we have\n\\begin{equation*}\n\\min\\{d_j',(r-1)k\\} = \\min\\{d_j,(r-1)k\\} = (r-1)k,\n\\end{equation*}\nso\n\\begin{equation*}\n\\begin{split}\n\\sum_{i=1}^k d_i' &= \\sum_{i=1}^k d_i \\leq (r-1)k(k-1) + \\sum_{j=k+1}^n \\min\\{d_j,(r-1)k\\} \\\\\n&= (r-1)k(k-1) + \\sum_{j=k+1}^n \\min\\{d_j',(r-1)k\\}.\n\\end{split}\n\\end{equation*}\n \\item For $1 \\leq k \\leq t-1$: suppose $d_1 \\geq 1 + (r-1)k$, and $d_n \\leq (r-1)k$. We claim that $\\mathbf{d}$ satisfies~\\eqref{Eq:GraphicalFiniteDisc} at $k$ with a strict inequality. If this claim is true, then, since $d_t = d_1$ and $\\min\\{d_t',(r-1)k\\} = \\min\\{d_t,(r-1)k\\} = (r-1)k$,\n\\begin{equation*}\n\\begin{split}\n\\sum_{i=1}^k d_i' &= \\sum_{i=1}^k d_i\n\\leq (r-1)k(k-1) + \\sum_{j=k+1}^n \\min\\{d_j,(r-1)k\\}-1 \\\\\n&= (r-1)k(k-1) + \\sum_{j=k+1}^{n-1} \\min\\{d_j',(r-1)k\\} + \\min\\{d_n,(r-1)k\\}-1 \\\\\n&\\leq (r-1)k(k-1) + \\sum_{j=k+1}^{n-1} \\min\\{d_j',(r-1)k\\} + \\min\\{d_n-1,(r-1)k\\} \\\\\n&= (r-1)k(k-1) + \\sum_{j=k+1}^n \\min\\{d_j',(r-1)k\\}.\n\\end{split}\n\\end{equation*}\nNow to prove the claim, suppose, on the contrary, that $\\mathbf{d}$ satisfies~\\eqref{Eq:GraphicalFiniteDisc} at $k$ with equality. Let $t+1 \\leq u \\leq n$ be the smallest integer such that $d_u \\leq (r-1)k$.
Then, from our assumption,\n\\begin{equation*}\n\\begin{split}\nk d_k &= \\sum_{i=1}^k d_i = (r-1)k(k-1) + \\sum_{j=k+1}^n \\min\\{d_j,(r-1)k\\} \\\\\n&\\geq (r-1)k(k-1) + (u-k-1)(r-1)k + \\sum_{j=u}^n d_j \\\\\n&= (r-1)k(u-2) + \\sum_{j=u}^n d_j.\n\\end{split}\n\\end{equation*}\nTherefore, since $d_{k+1} = d_k = d_1$,\n\\begin{equation*}\n\\begin{split}\n\\sum_{i=1}^{k+1} d_i &= (k+1)d_k\n\\geq (r-1)(k+1)(u-2) + \\frac{k+1}{k} \\sum_{j=u}^n d_j \\\\\n&> (r-1)(k+1)k + (r-1)(k+1)(u-k-2) + \\sum_{j=u}^n d_j \\\\\n&\\geq (r-1)(k+1)k + \\sum_{j=k+2}^n \\min\\{d_j, (r-1)(k+1)\\},\n\\end{split}\n\\end{equation*}\nwhich contradicts the fact that $\\mathbf{d}$ satisfies~\\eqref{Eq:GraphicalFiniteDisc} at $k+1$. Thus, we have proved that $\\mathbf{d}$ satisfies~\\eqref{Eq:GraphicalFiniteDisc} at $k$ with a strict inequality.\n \\item For $1 \\leq k \\leq t-1$: suppose $d_1 \\leq (r-1)k$. In particular, we have $\\min\\{d_j,(r-1)k\\} = d_j$ and $\\min\\{d_j',(r-1)k\\} = d_j'$ for all $j$. First, if we have\n\\begin{equation}\\label{Eq:ErdosGallaiProofCond}\nd_{k+2} + \\cdots + d_n \\geq 2,\n\\end{equation}\nthen we are done, since\n\\begin{equation*}\n\\begin{split}\n\\sum_{i=1}^k d_i' &= \\sum_{i=1}^k d_i\n= (k-1)d_1 + d_{k+1} \\\\\n&\\leq (r-1)k(k-1) + d_{k+1} + d_{k+2} + \\cdots + d_n - 2 \\\\\n&= (r-1)k(k-1) + \\sum_{j=k+1}^n d_j' \\\\\n&= (r-1)k(k-1) + \\sum_{j=k+1}^n \\min\\{d_j',(r-1)k\\}.\n\\end{split}\n\\end{equation*}\nCondition~\\eqref{Eq:ErdosGallaiProofCond} is obvious if $d_n \\geq 2$ or $k+2 \\leq n-1$ (since there are $n-k-1$ terms in the summation and each term is at least $1$). Otherwise, assume $k+2 \\geq n$ and $d_n = 1$, so in particular, we have $k = n-2$ (since $k \\leq t-1 \\leq n-2$), $t = n-1$, and $d_1 \\leq (r-1)(n-2)$. Note that we cannot have $d_1 = (r-1)(n-2)$, for then $\\sum_{i=1}^n d_i = (n-1)d_1 + d_n = (r-1)(n-1)(n-2) + 1$ would be odd, so we must have $d_1 < (r-1)(n-2)$. Similarly, $n$ must be even, for otherwise $\\sum_{i=1}^n d_i = (n-1)d_1 + 1$ would be odd. Thus, since $1 \\leq d_1 < (r-1)(n-2)$, we must have $n \\geq 3$; combined with $n$ being even, this gives $n \\geq 4$. Therefore,\n\\begin{equation*}\n\\begin{split}\n\\sum_{i=1}^k d_i' &= (n-2)d_1 = (n-3)d_1 + d_{n-1} \\\\\n&\\leq (r-1)(n-2)(n-3) - (n-3) + d_{n-1} \\\\\n&\\leq (r-1)(n-2)(n-3) + (d_{n-1}-1) + (d_n-1) \\\\\n&= (r-1)(n-2)(n-3) + \\sum_{j=k+1}^n \\min\\{d_j',(r-1)k\\}.\n\\end{split}\n\\end{equation*}\n\\end{enumerate}\nThis shows that $\\mathbf{d}'$ satisfies~\\eqref{Eq:GraphicalFiniteDisc} and finishes the proof of Theorem~\\ref{Thm:GraphicalFiniteDisc}.\n\n\n\n\n\n\\subsubsection{Proof of Theorem~\\ref{Thm:MLEAlgFiniteDisc}}\n\nWe follow the outline of the proof of~\\cite[Theorem~1.5]{Chatterjee}. We first present the following properties of the mean function $\\mu(t)$ and the Jacobian matrix of the function $\\varphi$~\\eqref{Eq:VarphiFiniteDisc}. We then combine these results at the end of this section into a proof of Theorem~\\ref{Thm:MLEAlgFiniteDisc}.\n\n\n\\begin{lemma}\\label{Lem:PropertyMeanFunction}\nThe mean function $\\mu(t)$ is positive and strictly decreasing, with $\\mu(-t) + \\mu(t) = r-1$ for all $t \\in \\mathbb{R}$, and $\\mu(t) \\to 0$ as $t \\to \\infty$. Its derivative $\\mu'(t)$ is increasing for $t \\geq 0$, with the properties that $\\mu'(t) < 0$, $\\mu'(t) = \\mu'(-t)$ for all $t \\in \\mathbb{R}$, and $\\mu'(0) = -(r^2-1)\/12$.\n\\end{lemma}\n\\begin{proof}\nIt is clear from~\\eqref{Eq:MeanFuncFiniteDiscrete} that $\\mu(t)$ is positive.
From the alternative representation~\\eqref{Eq:MeanFuncFiniteDiscreteAlt} it is easy to see that $\\mu(-t) + \\mu(t) = r-1$, and $\\mu(t) \\to 0$ as $t \\to \\infty$. Differentiating expression~\\eqref{Eq:MeanFuncFiniteDiscrete} yields the formula\n\\begin{equation*}\n\\mu'(t)\n= \\frac{-(\\sum_{a=0}^{r-1} a^2 \\, \\exp(-at))(\\sum_{a=0}^{r-1} \\exp(-at)) + (\\sum_{a=0}^{r-1} a \\: \\exp(-at))^2}{(\\sum_{a=0}^{r-1} \\exp(-at))^2},\n\\end{equation*}\nand substituting $t = 0$ gives us $\\mu'(0) = -(r^2-1)\/12$. The Cauchy-Schwarz inequality applied to the numerator above tells us that $\\mu'(t) < 0$, where the inequality is strict because the vectors $(a \\exp(-at\/2))_{a=0}^{r-1}$ and $(\\exp(-at\/2))_{a=0}^{r-1}$ are linearly independent. Thus, $\\mu(t)$ is strictly decreasing for all $t \\in \\mathbb{R}$.\n\nThe relation $\\mu(-t) + \\mu(t) = r-1$ gives us $\\mu'(-t) = \\mu'(t)$. Furthermore, by differentiating the expression~\\eqref{Eq:MeanFuncFiniteDiscreteAlt} twice, one can verify that $\\mu''(t) \\geq 0$ for $t \\geq 0$, which means $\\mu'(t)$ is increasing for $t \\geq 0$. See also Figure~\\ref{Fig:FiniteDiscMean} for the behavior of $\\mu(t)$ and $\\mu'(t)$ for different values of $r$.\n\\end{proof}\n\n\\begin{lemma}\\label{Lem:LowerBoundRatio}\nFor all $t \\in \\mathbb{R}$, we have\n\\begin{equation*}\n\\frac{\\mu'(t)}{\\mu(t)} \\geq -r+1 + \\frac{1}{\\sum_{a=0}^{r-1} \\exp(at)} > -r+1.\n\\end{equation*}\n\\end{lemma}\n\\begin{proof}\nMultiplying the numerator and denominator of~\\eqref{Eq:MeanFuncFiniteDiscrete} by $\\exp((r-1)t)$, we can write\n\\begin{equation*}\n\\mu(t)\n= \\frac{\\sum_{a=0}^{r-1} a \\: \\exp((r-1-a)t)}{\\sum_{a=0}^{r-1} \\exp((r-1-a)t)}\n= \\frac{\\sum_{a=0}^{r-1} (r-1-a) \\exp(at)}{\\sum_{a=0}^{r-1} \\exp(at)}.\n\\end{equation*}\nTherefore,\n\\begin{equation*}\n\\begin{split}\n\\frac{\\mu'(t)}{\\mu(t)} &= \\frac{d}{dt} \\log \\mu(t)\n= \\frac{d}{dt} \\left( \\log \\sum_{a=0}^{r-1} (r-1-a)\\exp(at) - \\log \\sum_{a=0}^{r-1} \\exp(at) \\right) \\\\\n&= \\frac{\\sum_{a=0}^{r-1} a(r-1-a)\\exp(at)}{\\sum_{a=0}^{r-1} (r-1-a)\\exp(at)} - \\frac{\\sum_{a=0}^{r-1} a\\exp(at)}{\\sum_{a=0}^{r-1} \\exp(at)} \\\\\n&\\geq - \\frac{\\sum_{a=0}^{r-1} a\\exp(at)}{\\sum_{a=0}^{r-1} \\exp(at)} \\\\\n&= - \\frac{\\sum_{a=0}^{r-1} (r-1)\\exp(at) - \\sum_{a=0}^{r-1} (r-1-a)\\exp(at)}{\\sum_{a=0}^{r-1} \\exp(at)} \\\\\n&= -r+1 + \\frac{\\sum_{a=0}^{r-1} (r-1-a)\\exp(at)}{\\sum_{a=0}^{r-1} \\exp(at)} \\\\\n&\\geq -r+1+ \\frac{1}{\\sum_{a=0}^{r-1} \\exp(at)}.\n\\end{split}\n\\end{equation*}\n\\end{proof}
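\n\nAs an informal numerical check of the two lemmas above (and of the value $\\mu'(0) = -(r^2-1)\/12$), the following Python sketch evaluates $\\mu$ via~\\eqref{Eq:MeanFuncFiniteDiscrete}, with weights $a = 0,\\dots,r-1$, and tests the identities at a few points; the finite-difference step and the test points are ad hoc choices.\n\\begin{verbatim}\nimport math\n\ndef mu(t, r):\n    # Mean function: weighted average of a = 0,...,r-1.\n    den = sum(math.exp(-a * t) for a in range(r))\n    return sum(a * math.exp(-a * t) for a in range(r)) \/ den\n\ndef dmu(t, r, h=1e-6):\n    # Central-difference approximation of mu'(t).\n    return (mu(t + h, r) - mu(t - h, r)) \/ (2 * h)\n\nfor r in (2, 3, 5):\n    for t in (-1.5, -0.2, 0.4, 2.0):\n        assert abs(mu(t, r) + mu(-t, r) - (r - 1)) < 1e-9  # mu(-t)+mu(t) = r-1\n        lower = -(r - 1) + 1 \/ sum(math.exp(a * t) for a in range(r))\n        assert dmu(t, r) \/ mu(t, r) > lower - 1e-6         # ratio lower bound\n    assert abs(dmu(0.0, r) + (r * r - 1) \/ 12) < 1e-5      # mu'(0) = -(r^2-1)\/12\n\\end{verbatim}\n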
We recall the following definition and result from~\\cite{Chatterjee}. Given $\\delta > 0$, let $\\mathcal{L}_n(\\delta)$ denote the set of $n \\times n$ matrices $A = (a_{ij})$ with $\\|A\\|_\\infty \\leq 1$, $a_{ii} \\geq \\delta$, and $a_{ij} \\leq -\\delta\/(n-1)$, for each $1 \\leq i \\neq j \\leq n$.\n\n\\begin{lemma}[{\\cite[Lemma~2.1]{Chatterjee}}]\\label{Lem:LnDelta}\nIf $A,B \\in \\mathcal{L}_n(\\delta)$, then\n\\begin{equation*}\n\\|AB\\|_\\infty \\leq 1-\\frac{2(n-2)\\delta^2}{(n-1)}.\n\\end{equation*}\nIn particular, for $n \\geq 3$,\n\\begin{equation*}\n\\|AB\\|_\\infty \\leq 1-\\delta^2.\n\\end{equation*}\n\\end{lemma}\n\nGiven $\\theta,\\theta' \\in \\mathbb{R}^n$, let $J(\\theta,\\theta')$ denote the $n \\times n$ matrix whose $(i,j)$-entry is\n\\begin{equation}\\label{Eq:JacobianMLEDef}\nJ_{ij}(\\theta,\\theta') = \\int_0^1 \\frac{\\partial \\varphi_i}{\\partial \\theta_j} (t\\theta + (1-t)\\theta') \\: dt.\n\\end{equation}\n\n\\begin{lemma}\\label{Lem:NormJInfinity}\nFor all $\\theta,\\theta' \\in \\mathbb{R}^n$, we have $\\|J(\\theta,\\theta')\\|_\\infty = 1$.\n\\end{lemma}\n\\begin{proof}\nThe partial derivatives of $\\varphi$~\\eqref{Eq:VarphiFiniteDisc} are\n\\begin{equation}\\label{Eq:PartialVarphi}\n\\frac{\\partial \\varphi_i(\\mathbf{x})}{\\partial x_i} = 1 + \\frac{1}{(r-1)}\\frac{\\sum_{j \\neq i} \\mu'(x_i+x_j)}{\\sum_{j \\neq i} \\mu(x_i+x_j)},\n\\end{equation}\nand for $i \\neq j$,\n\\begin{equation}\\label{Eq:PartialVarphiMixed}\n\\frac{\\partial \\varphi_i(\\mathbf{x})}{\\partial x_j} = \\frac{1}{(r-1)}\\frac{\\mu'(x_i+x_j)}{\\sum_{k \\neq i} \\mu(x_i+x_k)} < 0,\n\\end{equation}\nwhere the last inequality follows from $\\mu'(x_i + x_j) < 0$. Using the result of Lemma~\\ref{Lem:LowerBoundRatio} and the fact that $\\mu$ is positive, we also see that\n\\begin{equation*}\n\\begin{split}\n\\frac{\\partial \\varphi_i(\\mathbf{x})}{\\partial x_i}\n&= 1 + \\frac{1}{(r-1)} \\frac{\\sum_{j \\neq i} \\mu'(x_i+x_j)}{\\sum_{j \\neq i} \\mu(x_i+x_j)}\n> 1 + \\frac{1}{(r-1)} \\frac{\\sum_{j \\neq i} (-r+1) \\mu(x_i+x_j)}{\\sum_{j \\neq i} \\mu(x_i+x_j)}\n= 0.\n\\end{split}\n\\end{equation*}\nSetting $\\mathbf{x} = t\\theta + (1-t)\\theta'$ and integrating over $0 \\leq t \\leq 1$, we also get that $J_{ij}(\\theta,\\theta') < 0$ for $i \\neq j$, and $J_{ii}(\\theta,\\theta') = 1 + \\sum_{j \\neq i} J_{ij}(\\theta,\\theta') > 0$. This implies $\\|J(\\theta,\\theta')\\|_\\infty = 1$, as desired.\n\\end{proof}\n\n\n\\begin{lemma}\\label{Lem:JInLnDelta}\nLet $\\theta,\\theta' \\in \\mathbb{R}^n$ with $\\|\\theta\\|_\\infty \\leq K$ and $\\|\\theta'\\|_\\infty \\leq K$ for some $K > 0$. Then $J(\\theta,\\theta') \\in \\mathcal{L}_n(\\delta)$, where\n\\begin{equation}\\label{Eq:DeltaBoundJ}\n\\delta = \\frac{1}{(r-1)} \\: \\min \\left\\{\\frac{\\exp(2K)-1}{\\exp(2rK)-1}, \\; -\\frac{\\mu'(2K)}{\\mu(-2K)} \\right\\}.\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nFrom Lemma~\\ref{Lem:NormJInfinity} we already know that $J \\equiv J(\\theta,\\theta')$ satisfies $\\|J\\|_\\infty = 1$, so to show that $J\\in \\mathcal{L}_n(\\delta)$ it remains to show that $J_{ii} \\geq \\delta$ and $J_{ij} \\leq -\\delta\/(n-1)$ for $i \\neq j$. In particular, it suffices to show that for each $0 \\leq t \\leq 1$ we have $\\partial \\varphi_i(\\mathbf{x})\/\\partial x_i \\geq \\delta$ and $\\partial \\varphi_i(\\mathbf{x})\/\\partial x_j \\leq -\\delta\/(n-1)$, where $\\mathbf{x} \\equiv \\mathbf{x}(t) = t\\theta + (1-t)\\theta'$.\n\nFix $0 \\leq t \\leq 1$.
Since $\\|\\theta\\|_\\infty \\leq K$ and $\\|\\theta'\\|_\\infty \\leq K$, we also know that $\\|\\mathbf{x}\\|_\\infty \\leq K$, so $-2K \\leq x_i+x_j \\leq 2K$ for all $1 \\leq i,j \\leq n$. Using the properties of $\\mu$ and $\\mu'$ from Lemma~\\ref{Lem:PropertyMeanFunction}, we have\n\\begin{equation*}\n0 < \\mu(2K) \\leq \\mu(x_i+x_j) \\leq \\mu(-2K)\n\\end{equation*}\nand\n\\begin{equation*}\n\\mu'(0) \\leq \\mu'(x_i+x_j) \\leq \\mu'(2K) < 0.\n\\end{equation*}\nThen from~\\eqref{Eq:PartialVarphiMixed} and using the definition of $\\delta$,\n\\begin{equation*}\n\\frac{\\partial \\varphi_i(\\mathbf{x})}{\\partial x_j} \\leq \\frac{\\mu'(2K)}{(n-1)(r-1)\\mu(-2K)} \\leq -\\frac{\\delta}{n-1}.\n\\end{equation*}\nFurthermore, by Lemma~\\ref{Lem:LowerBoundRatio} we have\n\\begin{equation*}\n\\frac{\\mu'(x_i+x_j)}{\\mu(x_i+x_j)} \\geq -r+1 + \\frac{\\exp(x_i+x_j)-1}{\\exp(r(x_i+x_j))-1}\n\\geq -r+1 + \\frac{\\exp(2K)-1}{\\exp(2rK)-1}.\n\\end{equation*}\nSo from~\\eqref{Eq:PartialVarphi}, we also get\n\\begin{equation*}\n\\begin{split}\n\\frac{\\partial \\varphi_i(\\mathbf{x})}{\\partial x_i}\n&\\geq 1 + \\frac{1}{(r-1)} \\frac{\\sum_{j \\neq i} (-r+1+\\frac{\\exp(2K)-1}{\\exp(2rK)-1}) \\mu(x_i+x_j)}{\\sum_{j \\neq i} \\mu(x_i+x_j)}\n= \\frac{1}{(r-1)} \\left( \\frac{\\exp(2K)-1}{\\exp(2rK)-1} \\right)\n\\geq \\delta,\n\\end{split}\n\\end{equation*}\nas required.\n\\end{proof}\n\n\n\nWe are now ready to prove Theorem~\\ref{Thm:MLEAlgFiniteDisc}.\n\n\\begin{proof_of}{Theorem~\\ref{Thm:MLEAlgFiniteDisc}}\nBy the mean-value theorem for vector-valued functions~\\cite[p.~341]{Lang}, for any $\\theta,\\theta' \\in \\mathbb{R}^n$ we can write\n\\begin{equation*}\n\\varphi(\\theta) - \\varphi(\\theta') = J(\\theta, \\theta') (\\theta-\\theta'),\n\\end{equation*}\nwhere $J(\\theta,\\theta')$ is the Jacobian matrix defined in~\\eqref{Eq:JacobianMLEDef}. Since $\\|J(\\theta,\\theta')\\|_\\infty = 1$ (Lemma~\\ref{Lem:NormJInfinity}), this gives us\n\\begin{equation}\\label{Eq:FiniteDiscMVTBoundOneIter}\n\\|\\varphi(\\theta) - \\varphi(\\theta')\\|_\\infty \\leq \\|\\theta-\\theta'\\|_\\infty.\n\\end{equation}\n\nFirst suppose there is a solution $\\hat \\theta$ to the system of equations~\\eqref{Eq:MLEEqFiniteDisc}, so $\\hat \\theta$ is a fixed point of $\\varphi$. Then setting $\\theta = \\theta^{(k)}$ and $\\theta' = \\hat \\theta$ in the inequality above, we obtain\n\\begin{equation}\\label{Eq:VarphiContraction}\n\\|\\theta^{(k+1)}-\\hat\\theta\\|_\\infty \\leq \\|\\theta^{(k)}-\\hat\\theta\\|_\\infty.\n\\end{equation}\nIn particular, this shows that $\\|\\theta^{(k)}\\|_\\infty \\leq K$ for all $k \\in \\mathbb{N}_0$, where $K := 2\\|\\hat\\theta\\|_\\infty + \\|\\theta^{(0)}\\|_\\infty$. By Lemma~\\ref{Lem:JInLnDelta}, this implies $J(\\theta^{(k)},\\hat \\theta) \\in \\mathcal{L}_n(\\delta)$ for all $k \\in \\mathbb{N}_0$, where $\\delta$ is given by~\\eqref{Eq:DeltaBoundJ}.
Another application of the mean-value theorem gives us\n\\begin{equation*}\n\\begin{split}\n\\theta^{(k+2)}-\\hat \\theta \n&= J(\\theta^{(k+1)}, \\hat\\theta) \\: J(\\theta^{(k)}, \\hat\\theta) \\: (\\theta^{(k)} - \\hat\\theta),\n\\end{split}\n\\end{equation*}\nso by Lemma~\\ref{Lem:LnDelta},\n\\begin{equation*}\n\\begin{split}\n\\|\\theta^{(k+2)}-\\hat \\theta\\|_\\infty\n&\\leq \\|J(\\theta^{(k+1)}, \\hat\\theta) \\: J(\\theta^{(k)}, \\hat\\theta)\\|_\\infty \\: \\|\\theta^{(k)} - \\hat\\theta\\|_\\infty\n\\leq \\left(1-\\delta^2\\right) \\|\\theta^{(k)} - \\hat\\theta\\|_\\infty.\n\\end{split}\n\\end{equation*}\nUnrolling the recursive bound above and using~\\eqref{Eq:VarphiContraction} gives us\n\\begin{equation*}\n\\|\\theta^{(k)}-\\hat\\theta\\|_\\infty \\leq (1-\\delta^2)^{\\lfloor k\/2 \\rfloor} \\|\\theta^{(0)}-\\hat\\theta\\|_\\infty \\leq (1-\\delta^2)^{(k-1)\/2} \\|\\theta^{(0)}-\\hat\\theta\\|_\\infty,\n\\end{equation*}\nwhich proves~\\eqref{Eq:MLERateOfConvFiniteDisc} with $\\tau = \\sqrt{1-\\delta^2}$.\n\nNow suppose the system of equations~\\eqref{Eq:MLEEqFiniteDisc} does not have a solution, and suppose, on the contrary, that the sequence $\\{\\theta^{(k)}\\}$ does not have a divergent subsequence. This means $\\{\\theta^{(k)}\\}$ is a bounded sequence, so there exists $K > 0$ such that $\\|\\theta^{(k)}\\|_\\infty \\leq K$ for all $k \\in \\mathbb{N}_0$. Then by Lemma~\\ref{Lem:JInLnDelta}, $J(\\theta^{(k)},\\theta^{(k+1)}) \\in \\mathcal{L}_n(\\delta)$ for all $k \\in \\mathbb{N}_0$, where $\\delta$ is given by~\\eqref{Eq:DeltaBoundJ}. In particular, by the mean-value theorem and Lemma~\\ref{Lem:LnDelta}, we get for all $k \\in \\mathbb{N}_0$,\n\\begin{equation*}\n\\|\\theta^{(k+3)}-\\theta^{(k+2)}\\|_\\infty \\leq (1-\\delta^2) \\|\\theta^{(k+1)}-\\theta^{(k)}\\|_\\infty.\n\\end{equation*}\nThis implies $\\sum_{k=0}^\\infty \\|\\theta^{(k+1)}-\\theta^{(k)}\\|_\\infty < \\infty$, which means $\\{\\theta^{(k)}\\}$ is a Cauchy sequence. Thus, the sequence $\\{\\theta^{(k)}\\}$ converges to a limit, say $\\hat \\theta$, as $k \\to \\infty$. This limit $\\hat \\theta$ is necessarily a fixed point of $\\varphi$, as well as a solution to the system of equations~\\eqref{Eq:MLEEqFiniteDisc}, contradicting our assumption. Hence we conclude that $\\{\\theta^{(k)}\\}$ must have a divergent subsequence.\n\\end{proof_of}
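\n\nBefore moving on, we note that the iteration analyzed above is straightforward to run in practice. The following Python sketch is a minimal illustration, not a reference implementation: it assumes the fixed-point map takes the form $\\varphi_i(\\mathbf{x}) = x_i + \\frac{1}{r-1} \\big( \\log \\sum_{j \\neq i} \\mu(x_i+x_j) - \\log d_i \\big)$, which is consistent with the partial derivatives~\\eqref{Eq:PartialVarphi}--\\eqref{Eq:PartialVarphiMixed} and with the identity $\\theta_i - \\varphi_i(\\theta) = \\frac{1}{r-1} \\big( \\log d_i - \\log \\sum_{j \\neq i} \\mu(\\theta_i+\\theta_j) \\big)$ used in the proof of Theorem~\\ref{Thm:ConsistencyFiniteDisc} below; the random instance and the iteration count are arbitrary choices.\n\\begin{verbatim}\nimport math, random\n\ndef mu(t, r):\n    den = sum(math.exp(-a * t) for a in range(r))\n    return sum(a * math.exp(-a * t) for a in range(r)) \/ den\n\ndef phi(x, d, r):\n    # One step of the fixed-point map, as reconstructed above.\n    n = len(x)\n    return [x[i] + (math.log(sum(mu(x[i] + x[j], r)\n                                 for j in range(n) if j != i))\n                    - math.log(d[i])) \/ (r - 1) for i in range(n)]\n\nrandom.seed(0)\nn, r = 8, 4\nw = [[0] * n for _ in range(n)]\nfor i in range(n):\n    for j in range(i + 1, n):\n        w[i][j] = w[j][i] = random.randrange(1, r)  # weights in {1,...,r-1},\nd = [sum(row) for row in w]                         # so every degree is > 0\n\ntheta = [0.0] * n\nfor _ in range(300):              # geometric convergence when the MLE exists\n    theta = phi(theta, d, r)\nresid = max(abs(sum(mu(theta[i] + theta[j], r) for j in range(n) if j != i)\n                - d[i]) for i in range(n))\nprint(resid)                      # should be tiny if the iterates converged\n\\end{verbatim}\n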
A little computation based on the proof above gives us the following result, which will be useful in the proof of Theorem~\\ref{Thm:ConsistencyFiniteDisc}.\n\n\\begin{proposition}\\label{Prop:FiniteDiscBoundFirstIter}\nAssume the same setting as in Theorem~\\ref{Thm:MLEAlgFiniteDisc}, and assume the MLE equation~\\eqref{Eq:MLEEqFiniteDisc} has a unique solution $\\hat \\theta$. Then\n\\begin{equation*}\n\\|\\theta^{(0)}-\\hat\\theta\\|_\\infty \\leq \\frac{2}{\\delta^2} \\|\\theta^{(0)}-\\theta^{(1)}\\|_\\infty,\n\\end{equation*}\nwhere $\\delta$ is given by~\\eqref{Eq:DeltaBoundJ} with $K = 2\\|\\hat\\theta\\|_\\infty + \\|\\theta^{(0)}\\|_\\infty$.\n\\end{proposition}\n\\begin{proof}\nWith the same notation as in the proof of Theorem~\\ref{Thm:MLEAlgFiniteDisc}, by applying the mean-value theorem twice and using the bound in Lemma~\\ref{Lem:LnDelta}, for each $k \\geq 0$ we have\n\\begin{equation*}\n\\|\\theta^{(k+3)}-\\theta^{(k+2)}\\|_\\infty \\leq (1-\\delta^2) \\|\\theta^{(k+1)}-\\theta^{(k)}\\|_\\infty.\n\\end{equation*}\nTherefore, since $\\{\\theta^{(k)}\\}$ converges to $\\hat \\theta$,\n\\begin{equation*}\n\\begin{split}\n\\|\\theta^{(0)}-\\hat\\theta\\|_\\infty &\\leq \\sum_{k=0}^\\infty \\|\\theta^{(k)}-\\theta^{(k+1)}\\|_\\infty\n\\leq \\frac{1}{\\delta^2} \\big( \\|\\theta^{(0)}-\\theta^{(1)}\\|_\\infty + \\|\\theta^{(1)}-\\theta^{(2)}\\|_\\infty \\big)\n\\leq \\frac{2}{\\delta^2} \\|\\theta^{(0)}-\\theta^{(1)}\\|_\\infty,\n\\end{split}\n\\end{equation*}\nwhere the last inequality follows from~\\eqref{Eq:FiniteDiscMVTBoundOneIter}.\n\\end{proof}\n\n\n\n\n\\subsubsection{Proof of Theorem~\\ref{Thm:ConsistencyFiniteDisc}}\n\nOur proof of Theorem~\\ref{Thm:ConsistencyFiniteDisc} follows the outline of the proof of Theorem~1.3 in~\\cite{Chatterjee}. Recall that $\\mathcal{W}$ is the set of graphical sequences, and the MLE equation~\\eqref{Eq:MLEEqFiniteDisc} has a unique solution $\\hat \\theta \\in \\mathbb{R}^n$ if and only if $\\mathbf{d} \\in \\text{conv}(\\mathcal{W})^\\circ$. We first present a few preliminary results. We will also use the properties of the mean function $\\mu$ as described in Lemma~\\ref{Lem:PropertyMeanFunction}.\n\nThe following property is based on~\\cite[Lemma~4.1]{Chatterjee}.\n\\begin{lemma}\\label{Lem:FiniteDiscBoundMLE}\nLet $\\mathbf{d} \\in \\text{conv}(\\mathcal{W})$ with the properties that\n\\begin{equation}\\label{Eq:LemFiniteDiscBoundMLE1}\nc_2 (r-1)(n-1) \\leq d_i \\leq c_1 (r-1)(n-1), \\quad i = 1,\\dots,n,\n\\end{equation}\nand\n\\begin{equation}\\label{Eq:LemFiniteDiscBoundMLE}\n\\min_{\\substack{B \\subseteq \\{1,\\dots,n\\},\\\\|B| \\geq c_2^2 (n-1)}} \\left\\{ \\sum_{j \\notin B} \\min\\{d_j, (r-1)|B|\\} + (r-1)|B| (|B|-1) - \\sum_{i \\in B} d_i \\right\\} \\geq c_3 n^2,\n\\end{equation}\nwhere $c_1, c_2 \\in (0,1)$ and $c_3 > 0$ are constants.
Then the MLE equation~\\eqref{Eq:MLEEqFiniteDisc} has a solution $\\hat \\theta$ with the property that $\\|\\hat\\theta\\|_\\infty \\leq C$, where $C \\equiv C(c_1,c_2,c_3)$ is a constant that only depends on $c_1,c_2,c_3$.\n\\end{lemma}\n\\begin{proof}\nFirst assume $\\hat \\theta$ exists, so $\\hat \\theta$ and $\\mathbf{d}$ satisfy\n\\begin{equation*}\nd_i = \\sum_{j \\neq i} \\mu(\\hat \\theta_i + \\hat \\theta_j), \\quad i = 1,\\dots,n.\n\\end{equation*}\nLet\n\\begin{equation*}\nd_{\\max} = \\max_{1 \\leq i \\leq n} d_i, \\quad d_{\\min} = \\min_{1 \\leq i \\leq n} d_i, \\quad \\hat{\\theta}_{\\max} = \\max_{1 \\leq i \\leq n} \\hat \\theta_i, \\quad \\hat \\theta_{\\min} = \\min_{1 \\leq i \\leq n} \\hat \\theta_i,\n\\end{equation*}\nand let $i^*, j^* \\in \\{1,\\dots,n\\}$ be such that $\\hat \\theta_{i^*} = \\hat \\theta_{\\max}$ and $\\hat \\theta_{j^*} = \\hat \\theta_{\\min}$.\n\nWe begin by observing that since $\\mu$ is a decreasing function and we have the assumption~\\eqref{Eq:LemFiniteDiscBoundMLE1},\n\\begin{equation*}\nc_2(r-1) \\leq \\frac{d_{\\min}}{n-1} \\leq \\frac{d_{i^\\ast}}{n-1} = \\frac{1}{(n-1)} \\sum_{j \\neq i^\\ast} \\mu(\\hat \\theta_{\\max} + \\hat \\theta_j) \\leq \\mu(\\hat \\theta_{\\max} + \\hat \\theta_{\\min}),\n\\end{equation*}\nso\n\\begin{equation*}\n\\hat \\theta_{\\max} + \\hat \\theta_{\\min} \\leq \\mu^{-1}\\big(c_2(r-1)\\big).\n\\end{equation*}\nThus, if we can bound $\\hat \\theta_{\\min}$ from below by a constant, then we also get a constant upper bound on $\\hat \\theta_{\\max}$, and we are done.\n\nWe now proceed to prove the lower bound $\\hat \\theta_{\\min} \\geq -C$. If $\\hat \\theta_{\\min} \\geq 0$, then there is nothing to prove, so let us assume that $\\hat \\theta_{\\min} < 0$. We claim the following property.\n\n\\paragraph{Claim.} \\textit{If $\\hat \\theta_{\\min}$ satisfies $\\mu(\\hat \\theta_{\\min}\/2) \\geq c_1(r-1)$ and $\\mu(\\hat \\theta_{\\min}\/4) \\geq (r-1)\/(1+c_2)$, then the set $A = \\{i \\colon \\hat \\theta_i \\leq \\hat \\theta_{\\min}\/4\\}$ has $|A| \\geq c_2^2 (n-1)$.}\n\\vspace{2mm}\\\\\n\\begin{proof_of}{claim}\nLet $S = \\{i \\colon \\hat \\theta_i < -\\hat \\theta_{\\min}\/2\\}$ and $m = |S|$. Note that $j^\\ast \\in S$ since $\\hat \\theta_{j^*} = \\hat \\theta_{\\min} < 0$, so $m \\geq 1$. Then using the property that $\\mu$ is a decreasing function and the assumption on $\\mu(\\hat \\theta_{\\min}\/2)$, we obtain\n\\begin{equation*}\n\\begin{split}\nc_1(r-1)(n-1) &\\geq d_{\\max} \\geq d_{j^\\ast}\n= \\sum_{i \\neq j^\\ast} \\mu(\\hat \\theta_{\\min} + \\hat \\theta_i) \\\\\n&\\geq \\sum_{i \\in S \\setminus \\{j^*\\}} \\mu(\\hat \\theta_{\\min} + \\hat \\theta_i)\n> (m-1) \\: \\mu\\left(\\frac{\\hat \\theta_{\\min}}{2} \\right)\n\\geq c_1(r-1)(m-1).\n\\end{split}\n\\end{equation*}\nThis implies $m < n$, which means there exists $i \\notin S$, so $\\hat \\theta_i \\geq -\\hat \\theta_{\\min}\/2 > 0$.\n\nLet $S_i = \\{j \\colon j \\neq i, \\hat \\theta_j > -\\hat \\theta_i\/2\\}$, and let $m_i = |S_i|$.
Then, using the properties that $\\mu$ is decreasing and bounded above by $r-1$, and using the assumption on $\\mu(\\hat \\theta_{\\min}\/4)$, we get\n\\begin{equation*}\n\\begin{split}\nc_2(r-1)(n-1) &\\leq d_{\\min} \\leq d_i\n= \\sum_{j \\in S_i} \\mu(\\hat \\theta_i + \\hat \\theta_j) + \\sum_{j \\notin S_i, j \\neq i} \\mu(\\hat \\theta_i + \\hat \\theta_j) \\\\\n&< m_i \\: \\mu\\left(\\frac{\\hat \\theta_i}{2}\\right) + (n-1-m_i)(r-1) \\\\\n&= (n-1)(r-1)-m_i \\left(r-1-\\mu\\left(\\frac{\\hat \\theta_i}{2}\\right) \\right) \\\\\n&= (n-1)(r-1)-m_i \\: \\mu\\left(-\\frac{\\hat \\theta_i}{2}\\right) \\\\\n&\\leq (n-1)(r-1) - m_i \\: \\mu\\left(\\frac{\\hat \\theta_{\\min}}{4}\\right) \\\\\n&\\leq (n-1)(r-1) - \\frac{m_i(r-1)}{1+c_2}.\n\\end{split}\n\\end{equation*}\nRearranging the last inequality above gives us $m_i \\leq (1-c_2^2)(n-1)$.\n\nNote that for every $j \\notin S_i$, $j \\neq i$, we have $\\hat \\theta_j \\leq -\\hat \\theta_i\/2 \\leq \\hat \\theta_{\\min}\/4$. Therefore, if $A = \\{j \\colon \\hat \\theta_j \\leq \\hat \\theta_{\\min} \/4\\}$, then we see that $S_i^c \\setminus \\{i\\} \\subseteq A$, so\n\\begin{equation*}\n|A| \\geq |S_i^c \\setminus \\{i\\}| = n-m_i-1 \\geq c_2^2 (n-1),\n\\end{equation*}\nas desired.\n\\end{proof_of}\n\\vspace{4mm}\n\n\nNow assume\n\\begin{equation*}\n\\hat \\theta_{\\min} \\leq \\min \\left\\{ 2\\mu^{-1}(c_1(r-1)), \\; 4\\mu^{-1}\\left(\\frac{r-1}{1+c_2}\\right), \\; -16\\right\\},\n\\end{equation*}\nfor otherwise we are done. Then $\\mu(\\hat \\theta_{\\min}\/2) \\geq c_1(r-1)$ and $\\mu(\\hat \\theta_{\\min}\/4) \\geq (r-1)\/(1+c_2)$, so by the claim above, the size of the set $A = \\{i \\colon \\hat \\theta_i \\leq \\hat \\theta_{\\min}\/4\\}$ is at least $c_2^2 (n-1)$. Let\n\\begin{equation*}\nh = \\sqrt{-\\hat \\theta_{\\min}} > 0,\n\\end{equation*}\nand for integers $0 \\leq k \\leq \\lceil h\/16 \\rceil - 1$, define the set\n\\begin{equation*}\nD_k = \\left\\{i \\colon -\\frac{1}{8} \\hat \\theta_{\\min} + kh \\leq \\hat \\theta_i < -\\frac{1}{8} \\hat \\theta_{\\min} + (k+1)h \\right\\}.\n\\end{equation*}\nSince the sets $\\{D_k\\}$ are disjoint, by the pigeonhole principle we can find an index $0 \\leq k^\\ast \\leq \\lceil h\/16 \\rceil - 1$ such that\n\\begin{equation*}\n|D_{k^\\ast}| \\leq \\frac{n}{\\lceil h\/16 \\rceil} \\leq \\frac{16n}{h}.\n\\end{equation*}\nFix $k^*$, and consider the set\n\\begin{equation*}\nB = \\left\\{ i \\colon \\hat \\theta_i \\leq \\frac{1}{8} \\hat \\theta_{\\min} - \\left(k^* + \\frac{1}{2} \\right) h \\right\\}.\n\\end{equation*}\nNote that $\\hat \\theta_{\\min}\/4 \\leq \\hat \\theta_{\\min}\/8 - (k^* + 1\/2)h$, which implies $A \\subseteq B$, so $|B| \\geq |A| \\geq c_2^2 (n-1)$.
For $1 \\leq i \\leq n$, define\n\\begin{equation*}\nd_i^B = \\sum_{j \\in B\\setminus\\{i\\}} \\mu(\\hat \\theta_i + \\hat \\theta_j),\n\\end{equation*}\nand observe that\n\\begin{equation}\\label{Eq:DjBRelation}\n\\sum_{j \\notin B} d_j^B = \\sum_{j \\notin B} \\sum_{i \\in B} \\mu(\\hat \\theta_i + \\hat \\theta_j)\n= \\sum_{i \\in B} (d_i - d_i^B).\n\\end{equation}\n\nWe note that for $i \\in B$ we have $\\hat \\theta_i \\leq \\hat \\theta_{\\min}\/8$, so\n\\begin{equation}\\label{Eq:SumIinB}\n\\begin{split}\n(r-1)|B|(|B|-1) - \\sum_{i \\in B} d_i^B\n&= \\sum_{i \\in B} \\sum_{j \\in B \\setminus \\{i\\}} \\left(r-1-\\mu(\\hat \\theta_i + \\hat \\theta_j)\\right) \\\\\n&\\leq |B|(|B|-1)\\left(r-1-\\mu\\left(\\frac{\\hat\\theta_{\\min}}{4}\\right) \\right) \\\\\n&= |B|(|B|-1) \\: \\mu\\left(-\\frac{\\hat \\theta_{\\min}}{4}\\right)\n\\leq n^2 \\mu\\left(\\frac{h^2}{4}\\right),\n\\end{split}\n\\end{equation}\nwhere in the last inequality we have used the definition $h^2 = -\\hat \\theta_{\\min} > 0$.\nNow take $j \\notin B$, so $\\hat \\theta_j > \\hat \\theta_{\\min}\/8 - (k^*+1\/2)h$. We consider three cases:\n\\begin{enumerate}\n \\item If $\\hat \\theta_j \\geq -\\hat \\theta_{\\min}\/8 + (k^*+1)h$, then for every $i \\notin B$, we have $\\hat \\theta_i + \\hat \\theta_j \\geq h\/2$, so\n\\begin{equation*}\n\\min\\{d_j, \\: (r-1)|B|\\} - d_j^B\n\\leq d_j - d_j^B\n= \\sum_{i \\notin B, i \\neq j} \\mu(\\hat \\theta_j + \\hat \\theta_i)\n\\leq n \\mu\\left(\\frac{h}{2}\\right).\n\\end{equation*}\n \\item If $\\hat \\theta_j \\leq -\\hat \\theta_{\\min}\/8 + k^* h$, then for every $i \\in B$, we have $\\hat \\theta_i + \\hat \\theta_j \\leq -h\/2$, so\n\\begin{equation*}\n\\begin{split}\n\\min\\{d_j, \\: (r-1)|B|\\} - d_j^B\n&\\leq (r-1)|B| - \\sum_{i \\in B} \\mu(\\hat \\theta_j + \\hat \\theta_i) \\\\\n&\\leq (r-1)|B| - |B| \\: \\mu\\left(-\\frac{h}{2}\\right)\n= |B| \\: \\mu\\left(\\frac{h}{2}\\right)\n\\leq n \\mu\\left(\\frac{h}{2}\\right).\n\\end{split}\n\\end{equation*}\n \\item If $-\\hat \\theta_{\\min}\/8 + k^* h \\leq \\hat \\theta_j \\leq -\\hat \\theta_{\\min}\/8 + (k^*+1)h$, then $j \\in D_{k^*}$, and in this case\n\\begin{equation*}\n\\min\\{d_j, \\: (r-1)|B|\\} - d_j^B \\leq (r-1)|B| \\leq n(r-1).\n\\end{equation*}\n\\end{enumerate}\nThere are at most $n$ such indices $j$ in the first and second cases combined, and there are at most $|D_{k^*}| \\leq 16n\/h$ such indices $j$ in the third case.
Therefore,\n\\begin{equation*}\n\\sum_{j \\notin B} \\big( \\min\\{d_j, \\: (r-1)|B|\\} - d_j^B \\big) \\leq n^2 \\mu\\left(\\frac{h}{2}\\right) + \\frac{16n^2(r-1)}{h}.\n\\end{equation*}\nCombining this bound with~\\eqref{Eq:SumIinB} and using~\\eqref{Eq:DjBRelation} gives us\n\\begin{equation*}\n\\sum_{j \\notin B} \\min\\{d_j, (r-1)|B|\\} + (r-1)|B| (|B|-1) - \\sum_{i \\in B} d_i \\leq n^2 \\mu\\left(\\frac{h}{2}\\right) + \\frac{16n^2(r-1)}{h} + n^2 \\mu\\left(\\frac{h^2}{4}\\right).\n\\end{equation*}\nAssumption~\\eqref{Eq:LemFiniteDiscBoundMLE} tells us that the left hand side of the inequality above is bounded below by $c_3 n^2$, so we obtain\n\\begin{equation*}\n\\mu \\left(\\frac{h}{2}\\right) + \\frac{16(r-1)}{h} + \\mu\\left(\\frac{h^2}{4}\\right) \\geq c_3.\n\\end{equation*}\nThe left hand side is a decreasing function of $h > 0$, so the bound above tells us that $h \\leq C(c_3)$ for a constant $C(c_3)$ that only depends on $c_3$ (and $r$), and so $\\hat \\theta_{\\min} = -h^2 \\geq -C(c_3)^2$, as desired.\n\n\\paragraph{Showing existence of $\\hat \\theta$.}\nNow let $\\mathbf{d} \\in \\text{conv}(\\mathcal{W})$ satisfy~\\eqref{Eq:LemFiniteDiscBoundMLE1} and~\\eqref{Eq:LemFiniteDiscBoundMLE}. Let $\\{\\mathbf{d}^{(k)}\\}_{k \\geq 0}$ be a sequence of points in $\\text{conv}(\\mathcal{W})^\\circ$ converging to $\\mathbf{d}$, so by Proposition~\\ref{Prop:RegularMinimal}, for each $k \\geq 0$ there exists a solution $\\hat \\theta^{(k)} \\in \\mathbb{R}^n$ to the MLE equation~\\eqref{Eq:MLEEqFiniteDisc} with $\\mathbf{d}^{(k)}$ in place of $\\mathbf{d}$. Since $\\mathbf{d}$ satisfies~\\eqref{Eq:LemFiniteDiscBoundMLE1} and~\\eqref{Eq:LemFiniteDiscBoundMLE}, and $\\mathbf{d}^{(k)} \\to \\mathbf{d}$, for all sufficiently large $k$ the point $\\mathbf{d}^{(k)}$ also satisfies~\\eqref{Eq:LemFiniteDiscBoundMLE1} and~\\eqref{Eq:LemFiniteDiscBoundMLE} with some constants $c_1', c_2', c_3'$ depending on $c_1, c_2, c_3$. The preceding analysis then shows that $\\|\\hat \\theta^{(k)}\\|_\\infty \\leq C$ for all sufficiently large $k$, where $C \\equiv C(c_1',c_2',c_3') = C(c_1,c_2,c_3)$ is a constant depending on $c_1, c_2, c_3$. This means $\\{\\hat \\theta^{(k)}\\}_{k \\geq 0}$ is a bounded sequence, so it contains a convergent subsequence $\\{\\hat \\theta^{(k_i)}\\}_{i \\geq 0}$, say $\\hat \\theta^{(k_i)} \\to \\hat \\theta$. Then $\\|\\hat \\theta\\|_\\infty \\leq C$, and since $\\hat \\theta^{(k_i)}$ is a solution to the MLE equation~\\eqref{Eq:MLEEqFiniteDisc} for $\\mathbf{d}^{(k_i)}$, $\\hat \\theta$ is necessarily a solution to~\\eqref{Eq:MLEEqFiniteDisc} for $\\mathbf{d}$, and we are done.\n\\end{proof}\n\n\n\nWe are now ready to prove Theorem~\\ref{Thm:ConsistencyFiniteDisc}.\n\n\\begin{proof_of}{Theorem~\\ref{Thm:ConsistencyFiniteDisc}}\nLet $\\mathbf{d}^\\ast = (d_1^*, \\dots, d_n^*)$ denote the expected degree sequence under $\\P^*_\\theta$, so $d_i^* = \\sum_{j \\neq i} \\mu(\\theta_i + \\theta_j)$. Since $-2M \\leq \\theta_i + \\theta_j \\leq 2M$ and $\\mu$ is a decreasing function, we see that \n\\begin{equation}\\label{Eq:ProofConsistencyFiniteDisc1}\n(n-1) \\: \\mu(2M) \\leq d_i^* \\leq (n-1) \\: \\mu(-2M), \\quad i = 1,\\dots,n.\n\\end{equation}\nFor $B \\subseteq \\{1,\\dots,n\\}$, let\n\\begin{equation*}\ng(\\mathbf{d}^*, B) = \\sum_{j \\notin B} \\min\\{d_j^*, \\: (r-1)|B|\\} + (r-1) |B| (|B|-1) - \\sum_{i \\in B} d_i^*,\n\\end{equation*}\nand similarly for $g(\\mathbf{d},B)$.
Using the notation $(d_j^*)^B$ as introduced in the proof of Lemma~\\ref{Lem:FiniteDiscBoundMLE}, we notice that for $j \\notin B$,\n\\begin{equation*}\nd_j^* = \\sum_{i \\neq j} \\mu(\\theta_j + \\theta_i)\n\\geq \\sum_{i \\in B} \\mu(\\theta_j + \\theta_i)\n= (d_j^*)^B,\n\\end{equation*}\nand similarly,\n\\begin{equation*}\n(r-1)|B| \\geq \\sum_{i \\in B} \\mu(\\theta_j + \\theta_i) = (d_j^*)^B.\n\\end{equation*}\nTherefore, using the relation~\\eqref{Eq:DjBRelation}, we see that\n\\begin{equation*}\n\\begin{split}\ng(\\mathbf{d}^*, B)\n&\\geq \\sum_{j \\notin B} (d_j^*)^B + (r-1)|B|(|B|-1) - \\sum_{i \\in B} d_i^* \\\\\n&= (r-1)|B|(|B|-1) - \\sum_{i \\in B} (d_i^*)^B \\\\\n&= \\sum_{i \\in B} \\sum_{j \\in B \\setminus \\{i\\}} \\big(r-1-\\mu(\\theta_i + \\theta_j)\\big) \\\\\n&\\geq |B|(|B|-1) \\: \\big(r-1-\\mu(-2M)\\big) \\\\\n&= |B|(|B|-1) \\: \\mu(2M).\n\\end{split}\n\\end{equation*}\n\nWe now recall that the edge weights $(A_{ij})$ are independent random variables taking values in $\\{0,1,\\dots,r-1\\}$, with $\\mathbb{E}_\\theta[A_{ij}] = \\mu(\\theta_i + \\theta_j)$. By Hoeffding's inequality~\\cite{hoeffding}, for each $i = 1,\\dots,n$ we have\n\\begin{equation*}\n\\begin{split}\n\\P\\left(|d_i-d_i^*| \\geq (r-1) \\sqrt{\\frac{k n \\log n}{2}} \\right)\n&\\leq \\P\\left(|d_i-d_i^*| \\geq (r-1) \\sqrt{\\frac{k (n-1) \\log n}{2}} \\right) \\\\\n&= \\P\\left(\\left|\\frac{1}{n-1} \\sum_{j \\neq i} (A_{ij} - \\mu(\\theta_i + \\theta_j)) \\right| \\geq (r-1) \\sqrt{\\frac{k \\log n}{2(n-1)}} \\right) \\\\\n&\\leq 2\\exp\\left( -\\frac{2(n-1)}{(r-1)^2} \\cdot \\frac{(r-1)^2 k \\log n}{2(n-1)} \\right) \\\\\n&= \\frac{2}{n^k}.\n\\end{split}\n\\end{equation*}\nTherefore, by the union bound, with probability at least $1-2\/n^{k-1}$ we have $\\|\\mathbf{d}-\\mathbf{d}^*\\|_\\infty \\leq (r-1) \\sqrt{kn \\log n\/2}$. Assume we are in this situation. Then from~\\eqref{Eq:ProofConsistencyFiniteDisc1} we see that for all $i = 1,\\dots,n$,\n\\begin{equation*}\n(n-1) \\: \\mu(2M) - (r-1) \\sqrt{\\frac{k n \\log n}{2}} \\leq d_i \\leq (n-1) \\: \\mu(-2M) + (r-1) \\sqrt{\\frac{k n \\log n}{2}}.\n\\end{equation*}\nThus, for sufficiently large $n$, we have\n\\begin{equation*}\nc_2 (r-1) (n-1) \\leq d_i \\leq c_1(r-1)(n-1), \\quad i = 1,\\dots,n,\n\\end{equation*}\nwith\n\\begin{equation*}\nc_1 = \\frac{3\\mu(-2M)}{2(r-1)}, \\qquad\nc_2 = \\frac{\\mu(2M)}{2(r-1)}.\n\\end{equation*}\nMoreover, it is easy to see that for every $B \\subseteq \\{1,\\dots,n\\}$ we have $|g(\\mathbf{d},B)-g(\\mathbf{d}^*,B)| \\leq \\sum_{i=1}^n |d_i-d_i^*| \\leq n\\|\\mathbf{d}-\\mathbf{d}^*\\|_\\infty$. Since we already know that $g(\\mathbf{d}^*,B) \\geq |B|(|B|-1) \\: \\mu(2M)$, this gives us\n\\begin{equation*}\ng(\\mathbf{d},B) \\geq g(\\mathbf{d}^*,B) - n\\|\\mathbf{d}-\\mathbf{d}^*\\|_\\infty \\geq |B|(|B|-1) \\mu(2M) - (r-1) \\sqrt{\\frac{kn^3 \\log n}{2}}.\n\\end{equation*}\nThus, for $|B| \\geq c_2^2(n-1)$ and for sufficiently large $n$, we have $g(\\mathbf{d},B) \\geq c_3n^2$ with $c_3 = \\frac{1}{2} c_2^4 \\: \\mu(2M)$.\n\nWe have shown that $\\mathbf{d}$ satisfies the properties~\\eqref{Eq:LemFiniteDiscBoundMLE1} and~\\eqref{Eq:LemFiniteDiscBoundMLE}, so by Lemma~\\ref{Lem:FiniteDiscBoundMLE}, the MLE $\\hat \\theta$ exists and satisfies $\\|\\hat \\theta\\|_\\infty \\leq C$, where the constant $C$ only depends on $M$ (and $r$).
Assume further that $C \\geq M$, so $\\|\\theta\\|_\\infty \\leq C$ as well.\n\nTo bound the deviation of $\\hat \\theta$ from $\\theta$, we use the convergence rate in the iterative algorithm to compute $\\hat \\theta$. Set $\\theta^{(0)} = \\theta$ in the algorithm in Theorem~\\ref{Thm:MLEAlgFiniteDisc}, so by Proposition~\\ref{Prop:FiniteDiscBoundFirstIter}, we have\n\\begin{equation}\\label{Eq:ProofConsistencyFiniteDisc2}\n\\|\\hat \\theta-\\theta\\|_\\infty \\leq \\frac{2}{\\delta^2} \\|\\theta-\\varphi(\\theta)\\|_\\infty,\n\\end{equation}\nwhere $\\delta$ is given by~\\eqref{Eq:DeltaBoundJ} with $K = 2\\|\\hat\\theta\\|_\\infty + \\|\\theta\\|_\\infty \\leq 3C$. From the definition of $\\varphi$~\\eqref{Eq:VarphiFiniteDisc}, we see that for each $1 \\leq i \\leq n$,\n\\begin{equation*}\n\\theta_i - \\varphi_i(\\theta) = \\frac{1}{r-1} \\left( \\log d_i - \\log \\sum_{j \\neq i} \\mu(\\theta_i + \\theta_j) \\right)\n= \\frac{1}{r-1} \\log \\frac{d_i}{d_i^*}.\n\\end{equation*}\nNoting that $(y-1)\/y \\leq \\log y \\leq y-1$ for $y > 0$, we have $|\\log (d_i\/d_i^*)| \\leq |d_i-d_i^*| \/ \\min\\{d_i, d_i^*\\}$. Using the bounds on $\\|\\mathbf{d}-\\mathbf{d}^*\\|_\\infty$ and $d_i, d_i^*$ that we have developed above, we get\n\\begin{equation*}\n\\begin{split}\n\\|\\theta-\\varphi(\\theta)\\|_\\infty\n&\\leq \\frac{\\|\\mathbf{d}-\\mathbf{d}^*\\|_\\infty}{\\min\\{ \\min_i d_i, \\; \\min_i d_i^*\\}}\n\\leq (r-1) \\sqrt{\\frac{kn \\log n}{2}} \\cdot \\frac{2}{\\mu(2M) \\: (n-1)}\n\\leq \\frac{2(r-1)}{\\mu(2M)} \\sqrt{\\frac{k \\log n}{n}}.\n\\end{split}\n\\end{equation*}\nPlugging this bound into~\\eqref{Eq:ProofConsistencyFiniteDisc2} gives us the desired result.\n\\end{proof_of}\n\n\n\n\n\n\n\n\\subsection{Proofs for the continuous weighted graphs}\n\\label{Sec:ProofCont}\n\nIn this section we prove the results presented in Section~\\ref{Sec:Cont}.\n\n\n\\subsubsection{Proof of Theorem~\\ref{Thm:GraphicalCont}}\n\nClearly if $(d_1, \\dots, d_n) \\in \\mathbb{R}_0^n$ is a graphical sequence, then so is $(d_{\\pi(1)}, \\dots, d_{\\pi(n)})$, for any permutation $\\pi$ of $\\{1,\\dots,n\\}$. Thus, without loss of generality we can assume $d_1 \\geq d_2 \\geq \\cdots \\geq d_n$, and in this case condition~\\eqref{Eq:GraphicalCont} reduces to\n\\begin{equation}\\label{Eq:ErdosGallai-R2}\nd_1 \\leq \\sum_{i=2}^n d_i.\n\\end{equation}\n\nFirst suppose $(d_1, \\dots, d_n) \\in \\mathbb{R}_0^n$ is graphic, so it is the degree sequence of a graph with adjacency matrix $\\mathbf{a} = (a_{ij})$.
Then condition~\\eqref{Eq:ErdosGallai-R2} is satisfied since\n\\begin{equation*}\nd_1 = \\sum_{i=2}^n a_{1i} \\leq \\sum_{i=2}^n \\sum_{j \\neq i} a_{ij} = \\sum_{i=2}^n d_i.\n\\end{equation*}\nFor the converse direction, we first note the following easy properties of weighted graphical sequences:\n\\begin{enumerate}[(i)]\n \\item\\label{p:Cha} The sequence $(c, c, \\dots, c) \\in \\mathbb{R}_0^n$ is graphic for any $c \\in \\mathbb{R}_0$, realized by the ``cycle graph'' with weights $a_{i,i+1} = c\/2$ for $1 \\leq i \\leq n-1$, $a_{1n} = c\/2$, and $a_{ij} = 0$ otherwise.\n\n \\item\\label{p:Eq} A sequence $\\mathbf{d} = (d_1, \\dots, d_n) \\in \\mathbb{R}_0^n$ satisfying~\\eqref{Eq:ErdosGallai-R2} with equality is graphic, realized by the ``star graph'' with weights $a_{1i} = d_i$ for $2 \\leq i \\leq n$ and $a_{ij} = 0$ otherwise.\n\n \\item\\label{p:Ext} If $\\mathbf{d} = (d_1, \\dots, d_n) \\in \\mathbb{R}_0^n$ is graphic, then so is $\\overline{\\mathbf{d}} = (d_1, \\dots, d_n, 0, \\dots, 0) \\in \\mathbb{R}_0^{n'}$ for any $n' \\geq n$, realized by inserting $n'-n$ isolated vertices into the graph that realizes $\\mathbf{d}$.\n \n \\item\\label{p:Sum} If $\\mathbf{d}^{(1)}, \\mathbf{d}^{(2)} \\in \\mathbb{R}_0^n$ are graphic, then so is $\\mathbf{d}^{(1)} + \\mathbf{d}^{(2)}$, realized by the graph whose edge weights are the sum of the edge weights of the graphs realizing $\\mathbf{d}^{(1)}$ and $\\mathbf{d}^{(2)}$.\n\\end{enumerate}\n\nWe now prove the converse direction by induction on $n$. For the base case $n = 3$, it is easy to verify that $(d_1,d_2,d_3)$ with $d_1 \\geq d_2 \\geq d_3 \\geq 0$ and $d_1 \\leq d_2 + d_3$ is the degree sequence of the graph $G$ with edge weights\n\\begin{equation*}\na_{12} = \\frac{1}{2}(d_1 + d_2 - d_3) \\geq 0, \\qquad\na_{13} = \\frac{1}{2}(d_1 - d_2 + d_3) \\geq 0, \\qquad\na_{23} = \\frac{1}{2}(-d_1 + d_2 + d_3) \\geq 0.\n\\end{equation*}\nAssume that the claim holds for $n-1$; we will prove it also holds for $n$. So suppose we have a sequence $\\mathbf{d} = (d_1, \\dots, d_n)$ with $d_1 \\geq d_2 \\geq \\cdots \\geq d_n \\geq 0$ satisfying~\\eqref{Eq:ErdosGallai-R2}, and let\n\\begin{equation*}\nK = \\frac{1}{n-2} \\left( \\sum_{i=2}^n d_i - d_1 \\right) \\geq 0.\n\\end{equation*}\nIf $K = 0$ then~\\eqref{Eq:ErdosGallai-R2} is satisfied with equality, and by property~\\eqref{p:Eq} we know that $\\mathbf{d}$ is graphic. Now assume $K > 0$. We consider two possibilities.\n\\begin{enumerate}\n \\item Suppose $K \\geq d_n$. Then we can write $\\mathbf{d} = \\mathbf{d}^{(1)} + \\mathbf{d}^{(2)}$, where\n\\begin{equation*}\n\\mathbf{d}^{(1)} = (d_1-d_n, \\: d_2-d_n, \\: \\dots, \\: d_{n-1}-d_n, \\: 0) \\in \\mathbb{R}_0^n\n\\end{equation*}\nand\n\\begin{equation*}\n\\mathbf{d}^{(2)} = (d_n, d_n, \\dots, d_n) \\in \\mathbb{R}_0^n.\n\\end{equation*}\nThe assumption $K \\geq d_n$ implies $d_1-d_n \\leq \\sum_{i=2}^{n-1} (d_i-d_n)$, so $(d_1-d_n, d_2-d_n, \\dots, d_{n-1}-d_n) \\in \\mathbb{R}_0^{n-1}$ is a graphical sequence by the induction hypothesis. Thus, $\\mathbf{d}^{(1)}$ is also graphic by property~\\eqref{p:Ext}. Furthermore, $\\mathbf{d}^{(2)}$ is graphic by property~\\eqref{p:Cha}, so $\\mathbf{d} = \\mathbf{d}^{(1)} + \\mathbf{d}^{(2)}$ is also a graphical sequence by property~\\eqref{p:Sum}.\n\n \\item Suppose $K < d_n$.
Then write $\\mathbf{d} = \\mathbf{d}^{(3)} + \\mathbf{d}^{(4)}$, where\n\\begin{equation*}\n\\mathbf{d}^{(3)} = (d_1-K, \\: d_2-K, \\: \\dots, \\: d_n-K) \\in \\mathbb{R}_0^n\n\\end{equation*}\nand\n\\begin{equation*}\n\\mathbf{d}^{(4)} = (K, K, \\dots, K) \\in \\mathbb{R}_0^n.\n\\end{equation*}\nBy construction, $\\mathbf{d}^{(3)}$ satisfies $d_1-K = \\sum_{i=2}^n (d_i-K)$, so $\\mathbf{d}^{(3)}$ is a graphical sequence by property~\\eqref{p:Eq}. Since $\\mathbf{d}^{(4)}$ is also graphic by property~\\eqref{p:Cha}, we conclude that $\\mathbf{d} = \\mathbf{d}^{(3)} + \\mathbf{d}^{(4)}$ is graphic by property~\\eqref{p:Sum}.\n\\end{enumerate}\nThis completes the induction step and finishes the proof of Theorem~\\ref{Thm:GraphicalCont}.\n\n\n\n\n\\subsubsection{Proof of Lemma~\\ref{Lem:W-Convex}}\n\nWe first prove that $\\mathcal{W}$ is convex. Given $\\mathbf{d} = (d_1, \\dots, d_n)$ and $\\mathbf{d}' = (d_1', \\dots, d_n')$ in $\\mathcal{W}$, and given $0 \\leq t \\leq 1$, we note that\n\\begin{equation*}\n\\begin{split}\n\\max_{1 \\leq i \\leq n} \\big( t d_i + (1-t) d_i' \\big)\n&\\leq t \\max_{1 \\leq i \\leq n} d_i + (1-t) \\max_{1 \\leq i \\leq n} d_i' \\\\\n&\\leq \\frac{1}{2} t \\sum_{i=1}^n d_i + \\frac{1}{2} (1-t) \\sum_{i=1}^n d_i' \\\\\n&= \\frac{1}{2} \\sum_{i=1}^n \\big( t d_i + (1-t) d_i' \\big),\n\\end{split}\n\\end{equation*}\nwhich means $t\\mathbf{d} + (1-t) \\mathbf{d}' \\in \\mathcal{W}$.\n\nNext, recall that we already have $\\mathcal{M} \\subseteq \\text{conv}(\\mathcal{W}) = \\mathcal{W}$ from Proposition~\\ref{Prop:MConvW}, so to conclude $\\mathcal{M} = \\mathcal{W}$ it remains to show that $\\mathcal{W} \\subseteq \\mathcal{M}$. Given $\\mathbf{d} \\in \\mathcal{W}$, let $G$ be a graph that realizes $\\mathbf{d}$ and let $\\mathbf{w} = (w_{ij})$ be the edge weights of $G$, so that $d_i = \\sum_{j \\neq i} w_{ij}$ for all $i = 1,\\dots,n$. Consider a distribution $\\P$ on $\\mathbb{R}_0^{\\binom{n}{2}}$ that assigns each edge weight $A_{ij}$ to be an independent exponential random variable with mean $w_{ij}$ (degenerate at $0$ if $w_{ij} = 0$); when all the $w_{ij}$ are positive, $\\P$ has density\n\\begin{equation*}\np(\\mathbf{a}) = \\prod_{\\{i,j\\}} \\frac{1}{w_{ij}} \\exp\\left(-\\frac{a_{ij}}{w_{ij}}\\right),\n\\quad \\mathbf{a} = (a_{ij}) \\in \\mathbb{R}_0^{\\binom{n}{2}}.\n\\end{equation*}\nThen by construction, we have $\\mathbb{E}_\\P[A_{ij}] = w_{ij}$ and\n\\begin{equation*}\n\\mathbb{E}_\\P[\\deg_i(A)] = \\sum_{j \\neq i} \\mathbb{E}_\\P[A_{ij}] = \\sum_{j \\neq i} w_{ij} = d_i, \\quad i = 1, \\dots, n.\n\\end{equation*}\nThis shows that $\\mathbf{d} \\in \\mathcal{M}$, as desired.\n\n\n\n\\subsubsection{Proof of Theorem~\\ref{Thm:ConsistencyCont}}\n\nWe first prove that the MLE $\\hat \\theta$ exists almost surely. Recall from the discussion in Section~\\ref{Sec:Cont} that $\\hat \\theta$ exists if and only if $\\mathbf{d} \\in \\mathcal{M}^\\circ$. Clearly $\\mathbf{d} \\in \\mathcal{W}$ since $\\mathbf{d}$ is the degree sequence of the sampled graph $G$. Since $\\mathcal{M} = \\mathcal{W}$ (Lemma~\\ref{Lem:W-Convex}), we see that the MLE $\\hat \\theta$ does not exist if and only if $\\mathbf{d} \\in \\partial \\mathcal{M} = \\mathcal{M} \\setminus \\mathcal{M}^\\circ$, where\n\\begin{equation*}\n\\partial \\mathcal{M} = \\left\\{ \\mathbf{d}' \\in \\mathcal{M} \\colon \\min_{1 \\leq i \\leq n} d_i' = 0 \\; \\text{ or } \\; \\max_{1 \\leq i \\leq n} d_i' = \\frac{1}{2} \\sum_{i=1}^n d_i' \\right\\}.\n\\end{equation*}\nIn particular, note that $\\partial \\mathcal{M}$ has Lebesgue measure $0$. Since the distribution $\\P^\\ast_\\theta$ on the edge weights $A = (A_{ij})$ is continuous (being a product of exponential distributions) and $\\mathbf{d}$ is a continuous function of $A$, we conclude that $\\P^\\ast_\\theta(\\mathbf{d} \\in \\partial \\mathcal{M}) = 0$, as desired.
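\n\nAs an informal sanity check of this almost-sure existence (a quick Monte Carlo sketch, with arbitrary choices of $\\theta$, $n$, and seed), one can sample the edge weights from the product of exponential distributions and verify that the degree sequence always lands in $\\mathcal{M}^\\circ$:\n\\begin{verbatim}\nimport random\n\nrandom.seed(1)\nn = 10\ntheta = [0.5 + 1.5 * random.random() for _ in range(n)]  # a point of Theta\n\ndef degrees(theta):\n    # Sample A_ij ~ Exp(rate = theta_i + theta_j) and return the degrees.\n    n = len(theta)\n    d = [0.0] * n\n    for i in range(n):\n        for j in range(i + 1, n):\n            a = random.expovariate(theta[i] + theta[j])\n            d[i] += a\n            d[j] += a\n    return d\n\nfor _ in range(1000):\n    d = degrees(theta)\n    # Interior of M: every d_i positive and max d_i < (1\/2) sum d_i.\n    assert min(d) > 0 and max(d) < 0.5 * sum(d)\n\\end{verbatim}\n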
We now prove the consistency of $\\hat \\theta$. Recall that $\\theta$ is the true parameter that we wish to estimate, and that the MLE $\\hat \\theta$ satisfies $-\\nabla Z(\\hat \\theta) = \\mathbf{d}$. Let $\\mathbf{d}^\\ast = -\\nabla Z(\\theta)$ denote the expected degree sequence of the maximum entropy distribution $\\P^\\ast_\\theta$. By the mean-value theorem for vector-valued functions~\\cite[p.~341]{Lang}, we can write\n\\begin{equation}\\label{Eq:ConsistencyContMVT}\n\\mathbf{d} - \\mathbf{d}^\\ast = \\nabla Z(\\theta) - \\nabla Z(\\hat \\theta) = J(\\theta - \\hat \\theta).\n\\end{equation}\nHere $J$ is a matrix obtained by integrating (element-wise) the Hessian $\\nabla^2 Z$ of the log-partition function on intermediate points between $\\theta$ and $\\hat \\theta$:\n\\begin{equation*}\nJ = \\int_0^1 \\nabla^2 Z(t \\theta + (1-t) \\hat \\theta) \\: dt.\n\\end{equation*}\n\nRecalling that $-\\nabla Z(\\theta) = \\mathbb{E}_\\theta[\\deg(A)]$, at any intermediate point $\\xi \\equiv \\xi(t) = t \\theta + (1-t) \\hat \\theta$, we have\n\\begin{equation*}\n\\big(\\nabla Z(\\xi)\\big)_i = -\\sum_{j \\neq i} \\mu(\\xi_i + \\xi_j) = -\\sum_{j \\neq i} \\frac{1}{\\xi_i + \\xi_j}.\n\\end{equation*} \nTherefore, the Hessian $\\nabla^2 Z$ is given by\n\\begin{equation*}\n\\big( \\nabla^2 Z(\\xi) \\big)_{ij} = \\frac{1}{(\\xi_i + \\xi_j)^2}, \\quad i \\neq j,\n\\end{equation*}\nand\n\\begin{equation*}\n\\big( \\nabla^2 Z(\\xi) \\big)_{ii} = \\sum_{j \\neq i} \\frac{1}{(\\xi_i+\\xi_j)^2} = \\sum_{j \\neq i} \\big( \\nabla^2 Z(\\xi) \\big)_{ij}.\n\\end{equation*}\nSince $\\theta, \\hat \\theta \\in \\Theta$ and we assume $\\theta_i+\\theta_j \\leq M$, it follows that for $i \\neq j$,\n\\begin{equation*}\n0 < \\xi_i + \\xi_j \\leq \\max\\{\\theta_i+\\theta_j, \\: \\hat \\theta_i + \\hat \\theta_j\\} \\leq \\max\\{M, 2\\|\\hat\\theta\\|_\\infty\\} \\leq M + 2\\|\\hat \\theta\\|_\\infty.\n\\end{equation*}\nTherefore, the Hessian $\\nabla^2 Z$ is a {\\em diagonally balanced} matrix with off-diagonal entries bounded below by $1\/(M + 2\\|\\hat \\theta\\|_\\infty)^2$.\nIn particular, $J$ is also a symmetric, diagonally balanced matrix with off-diagonal entries bounded below by $1\/(M + 2\\|\\hat \\theta\\|_\\infty)^2$, being an average of such matrices. By Theorem~\\ref{Thm:Main}, $J$ is invertible and its inverse satisfies the bound\n\\begin{equation*}\n\\|J^{-1}\\|_\\infty \\leq \\frac{(M+2\\|\\hat\\theta\\|_\\infty)^2(3n-4)}{2(n-1)(n-2)} \\leq \\frac{2}{n} \\: (M + 2\\|\\hat \\theta\\|_\\infty)^2,\n\\end{equation*}\nwhere the last inequality holds for $n \\geq 7$. Inverting $J$ in~\\eqref{Eq:ConsistencyContMVT} and applying the bound on $\\|J^{-1}\\|_\\infty$ gives \n\\begin{equation}\\label{Eq:Consistency-R1}\n\\|\\theta-\\hat\\theta\\|_\\infty\n\\leq \\|J^{-1}\\|_\\infty \\: \\|\\mathbf{d} - \\mathbf{d}^\\ast\\|_\\infty\n\\leq \\frac{2}{n} \\: (M + 2\\|\\hat \\theta\\|_\\infty)^2 \\: \\|\\mathbf{d} - \\mathbf{d}^\\ast\\|_\\infty.\n\\end{equation}\n\nLet $A = (A_{ij})$ denote the edge weights of the sampled graph $G \\sim \\P^\\ast_\\theta$, so $d_i = \\sum_{j \\neq i} A_{ij}$ for $i = 1,\\dots,n$.
Moreover, since $\\mathbf{d}^\\ast$ is the expected degree sequence from the distribution $\\P^\\ast_\\theta$, we also have $d_i^\\ast = \\sum_{j \\neq i} 1\/(\\theta_i+\\theta_j)$. Recall that $A_{ij}$ is an exponential random variable with rate $\\lambda = \\theta_i + \\theta_j \\geq L$, so by Lemma~\\ref{Lem:SubExp-Exp}, $A_{ij} - 1\/(\\theta_i + \\theta_j)$ is sub-exponential with parameter $2\/(\\theta_i+\\theta_j) \\leq 2\/L$. For each $i = 1,\\dots,n$, the random variables $(A_{ij} - 1\/(\\theta_i+\\theta_j), j \\neq i)$ are independent sub-exponential random variables, so we can apply the concentration inequality in Theorem~\\ref{Thm:ConcIneqSubExp} with $\\kappa = 2\/L$ and\n\\begin{equation*}\n\\epsilon = \\left(\\frac{4k \\log n}{\\gamma (n-1) L^2} \\right)^{1\/2}.\n\\end{equation*}\nAssume $n$ is sufficiently large such that $\\epsilon\/\\kappa = \\sqrt{k \\log n \/ \\gamma (n-1)} \\leq 1$. Then by Theorem~\\ref{Thm:ConcIneqSubExp}, for each $i = 1,\\dots,n$ we have\n\\begin{equation*}\n\\begin{split}\n\\P\\left(|d_i - d_i^\\ast| \\geq \\sqrt{\\frac{4k n \\log n}{\\gamma L^2}} \\right)\n&\\leq \\P\\left(|d_i - d_i^\\ast| \\geq \\sqrt{\\frac{4 k (n-1) \\log n}{\\gamma L^2}} \\right) \\\\\n&= \\P\\left(\\Bigg|\\frac{1}{n-1}\\sum_{j \\neq i} \\left(A_{ij}-\\frac{1}{\\theta_i+\\theta_j}\\right)\\Bigg| \\geq \\sqrt{\\frac{4k \\log n}{\\gamma (n-1) L^2}} \\right) \\\\\n&\\leq 2\\exp\\left(-\\gamma \\: (n-1) \\cdot \\frac{L^2}{4} \\cdot \\frac{4k \\log n}{\\gamma (n-1) L^2}\\right) \\\\\n&= \\frac{2}{n^k}.\n\\end{split}\n\\end{equation*}\nBy the union bound,\n\\begin{equation*}\n\\begin{split}\n\\P\\Bigg(\\|\\mathbf{d} - \\mathbf{d}^\\ast\\|_\\infty \\geq \\sqrt{\\frac{4k n \\log n}{\\gamma L^2}} \\; \\Bigg)\n&\\leq \\sum_{i=1}^n \\P\\left(|d_i - d_i^\\ast| \\geq \\sqrt{\\frac{4k n \\log n}{\\gamma L^2}}\\right)\n\\leq \\frac{2}{n^{k-1}}.\n\\end{split}\n\\end{equation*}\n\nAssume for the rest of this proof that $\\|\\mathbf{d} - \\mathbf{d}^\\ast\\|_\\infty \\leq \\sqrt{4kn \\log n\/(\\gamma L^2)}$, which happens with probability at least $1-2\/n^{k-1}$. From~\\eqref{Eq:Consistency-R1} and using the triangle inequality, we get\n\\begin{equation*}\n\\|\\hat \\theta\\|_\\infty\n\\leq \\|\\theta-\\hat\\theta\\|_\\infty + \\|\\theta\\|_\\infty\n\\leq \\frac{4}{L} \\: \\sqrt{\\frac{k \\log n}{\\gamma n}} \\: (M + 2\\|\\hat \\theta\\|_\\infty)^2 + M.\n\\end{equation*}\nWhat we have shown is that for sufficiently large $n$, $\\|\\hat \\theta\\|_\\infty$ satisfies the inequality $G_n(\\|\\hat \\theta\\|_\\infty) \\geq 0$, where $G_n(x)$ is the quadratic function\n\\begin{equation*}\nG_n(x) = \\frac{4}{L} \\: \\sqrt{\\frac{k \\log n}{\\gamma n}} \\: (M + 2x)^2 - x + M.\n\\end{equation*}\nIt is easy to see that for sufficiently large $n$ we have $G_n(2M) < 0$ and $G_n(\\log n) < 0$. Thus, $G_n(\\|\\hat \\theta\\|_\\infty) \\geq 0$ means either $\\|\\hat \\theta\\|_\\infty < 2M$ or $\\|\\hat \\theta\\|_\\infty > \\log n$. We claim that for sufficiently large $n$ we always have $\\|\\hat \\theta\\|_\\infty < 2M$. Suppose, on the contrary, that there are infinitely many $n$ for which $\\|\\hat \\theta\\|_\\infty > \\log n$, and consider one such $n$. Since $\\hat \\theta \\in \\Theta$ we know that $\\hat \\theta_i + \\hat \\theta_j > 0$ for each $i \\neq j$, so there can be at most one index $i$ with $\\hat \\theta_i < 0$. We consider the following two cases:\n\\begin{enumerate}\n \\item \\textbf{Case 1:} suppose $\\hat \\theta_i \\geq 0$ for all $i = 1,\\dots,n$.
Let $i^\\ast$ be an index with $\\hat \\theta_{i^\\ast} = \\|\\hat \\theta\\|_\\infty > \\log n$. Then, using the fact that $\\hat \\theta$ satisfies the system of equations~\\eqref{Eq:MLEEqCont} and $\\hat \\theta_{i^\\ast} + \\hat \\theta_j \\geq \\hat \\theta_{i^\\ast}$ for $j \\neq i^\\ast$, we see that\n\\begin{equation*}\n\\begin{split}\n\\frac{1}{M} &\\leq \\frac{1}{n-1} \\sum_{j \\neq i^\\ast} \\frac{1}{\\theta_{i^\\ast} + \\theta_j} \\\\\n&\\leq \\frac{1}{n-1} \\left| \\sum_{j \\neq i^\\ast} \\frac{1}{\\theta_{i^\\ast}+\\theta_j} - \\sum_{j \\neq i^\\ast} \\frac{1}{\\hat \\theta_{i^\\ast}+\\hat \\theta_j} \\right| + \\frac{1}{n-1} \\sum_{j \\neq i^\\ast} \\frac{1}{\\hat \\theta_{i^\\ast} + \\hat \\theta_j} \\\\\n&= \\frac{1}{n-1} \\left| d_{i^\\ast}^\\ast - d_{i^\\ast} \\right| + \\frac{1}{n-1} \\sum_{j \\neq i^\\ast} \\frac{1}{\\hat \\theta_{i^\\ast} + \\hat \\theta_j} \\\\\n&\\leq \\frac{1}{n-1} \\| \\mathbf{d}^\\ast - \\mathbf{d} \\|_\\infty + \\frac{1}{\\|\\hat \\theta\\|_\\infty} \\\\\n&\\leq \\frac{1}{n-1} \\: \\sqrt{\\frac{4k n \\log n}{\\gamma L^2}} + \\frac{1}{\\log n},\n\\end{split}\n\\end{equation*}\nwhich cannot hold for sufficiently large $n$, as the last expression tends to $0$ as $n \\to \\infty$.\n\n \\item \\textbf{Case 2:} suppose $\\hat \\theta_i < 0$ for some $i = 1,\\dots,n$, so $\\hat \\theta_j > 0$ for $j \\neq i$ since $\\hat \\theta \\in \\Theta$. Without loss of generality assume $\\hat \\theta_1 < 0 < \\hat \\theta_2 \\leq \\cdots \\leq \\hat \\theta_n$, so $\\hat \\theta_n = \\|\\hat\\theta\\|_\\infty > \\log n$. Following the same chain of inequalities as in the previous case (with $i^\\ast = n$), we obtain\n\\begin{equation*}\n\\begin{split}\n\\frac{1}{M} &\\leq \\frac{1}{n-1} \\| \\mathbf{d}^\\ast - \\mathbf{d} \\|_\\infty + \\frac{1}{n-1} \\left(\\frac{1}{\\hat \\theta_n + \\hat \\theta_1} + \\sum_{j = 2}^{n-1} \\frac{1}{\\hat \\theta_j + \\hat \\theta_n} \\right) \\\\\n&\\leq \\frac{1}{n-1} \\: \\sqrt{\\frac{4k n \\log n}{\\gamma L^2}} + \\frac{1}{(n-1)(\\hat \\theta_n + \\hat \\theta_1)} + \\frac{n-2}{(n-1)\\|\\hat \\theta\\|_\\infty} \\\\\n&\\leq \\frac{1}{n-1} \\: \\sqrt{\\frac{4k n \\log n}{\\gamma L^2}} + \\frac{1}{(n-1)(\\hat \\theta_n + \\hat \\theta_1)} + \\frac{1}{\\log n}.\n\\end{split}\n\\end{equation*}\nSo for sufficiently large $n$,\n\\begin{equation*}\n\\frac{1}{\\hat \\theta_1 + \\hat \\theta_n} \\geq (n-1)\\left(\\frac{1}{M} - \\frac{1}{n-1} \\: \\sqrt{\\frac{4k n \\log n}{\\gamma L^2}} - \\frac{1}{\\log n}\\right) \\geq \\frac{n}{2M},\n\\end{equation*}\nand thus $\\hat \\theta_1 + \\hat \\theta_i \\leq \\hat \\theta_1 + \\hat \\theta_n \\leq 2M\/n$ for each $i = 2,\\dots,n$. However, then\n\\begin{equation*}\n\\begin{split}\n\\sqrt{\\frac{4k n \\log n}{\\gamma L^2}} &\\geq \\|\\mathbf{d}^\\ast - \\mathbf{d} \\|_\\infty\n\\geq |d_1^\\ast - d_1|\n\\geq -\\sum_{j=2}^n \\frac{1}{\\theta_1 + \\theta_j} +\\sum_{j=2}^n \\frac{1}{\\hat\\theta_1 + \\hat\\theta_j}\n\\geq - \\frac{(n-1)}{L} + \\frac{n(n-1)}{2M},\n\\end{split}\n\\end{equation*}\nwhich cannot hold for sufficiently large $n$, since the right-hand side grows like $n^2$ while the left-hand side grows only like $\\sqrt{n \\log n}$.\n\\end{enumerate}\nThe analysis above shows that $\\|\\hat \\theta\\|_\\infty < 2M$ for all sufficiently large $n$. 
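In particular, once $\\|\\hat \\theta\\|_\\infty < 2M$, the factor $(M + 2\\|\\hat \\theta\\|_\\infty)^2$ appearing in~\\eqref{Eq:Consistency-R1} is at most $(5M)^2$. 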
Plugging this result into~\\eqref{Eq:Consistency-R1}, we conclude that for sufficiently large $n$, with probability at least $1-2n^{-(k-1)}$ we have the bound\n\\begin{equation*}\n\\|\\theta-\\hat\\theta\\|_\\infty \\leq \\frac{2}{n} \\: (5M)^2 \\: \\sqrt{\\frac{4k n \\log n}{\\gamma L^2}} = \\frac{100M^2}{L} \\sqrt{\\frac{k \\log n}{\\gamma n}},\n\\end{equation*}\nas desired.\n\n\n\n\n\n\n\n\\subsection{Proofs for the infinite discrete weighted graphs}\n\\label{Sec:ProofInfiniteDisc}\n\nIn this section we prove the results presented in Section~\\ref{Sec:InfiniteDisc}.\n\n\n\\subsubsection{Proof of Theorem~\\ref{Thm:GraphicalInfiniteDisc}}\n\nWithout loss of generality we may assume $d_1 \\geq d_2 \\geq \\dots \\geq d_n$, so condition~\\eqref{Eq:GraphicalInfiniteDisc} becomes $d_1 \\leq \\sum_{i=2}^n d_i$. Necessity is easy: if $(d_1,\\dots,d_n)$ is a degree sequence of a graph $G$ with edge weights $a_{ij} \\in \\mathbb{N}_0$, then $\\sum_{i=1}^n d_i = 2\\sum_{\\{i,j\\}} a_{ij}$ is even, and since $a_{1j} \\leq d_j$ for each $j$, the total weight coming out of vertex $1$ satisfies $d_1 = \\sum_{j=2}^n a_{1j} \\leq \\sum_{i=2}^n d_i$. For the converse direction, we proceed by induction on $s = \\sum_{i=1}^n d_i$. The statement is clearly true for $s = 0$ and $s = 2$. Assume the statement is true for some even $s \\in \\mathbb{N}_0$, and suppose we are given $\\mathbf{d} = (d_1,\\dots,d_n) \\in \\mathbb{N}_0^n$ with $d_1 \\geq \\dots \\geq d_n$, $\\sum_{i=1}^n d_i = s+2$, and $d_1 \\leq \\sum_{i=2}^n d_i$. Without loss of generality we may assume $d_n \\geq 1$, for otherwise we can proceed with only the nonzero elements of $\\mathbf{d}$. Let $1 \\leq t \\leq n-1$ be the smallest index such that $d_t > d_{t+1}$, with $t = n-1$ if $d_1 = \\dots = d_n$, and let $\\mathbf{d}' = (d_1, \\dots, d_{t-1}, d_t-1, d_{t+1}, \\dots, d_n-1)$. We will show that $\\mathbf{d}'$ is graphic. This will imply that $\\mathbf{d}$ is graphic, because if $\\mathbf{d}'$ is realized by the graph $G'$ with edge weights $a_{ij}'$, then $\\mathbf{d}$ is realized by the graph $G$ with edge weights $a_{tn} = a_{tn}'+1$ and $a_{ij} = a_{ij}'$ otherwise.\n\nNow for $\\mathbf{d}' = (d_1', \\dots, d_n')$ given above, we have $d_1' \\geq \\dots \\geq d_n'$ and $\\sum_{i=1}^n d_i' = \\sum_{i=1}^n d_i-2 = s$ is even. So it suffices to show that $d_1' \\leq \\sum_{i=2}^n d_i'$, for then we can apply the induction hypothesis to conclude that $\\mathbf{d}'$ is graphic. If $t = 1$, then $d_1' = d_1-1 \\leq \\sum_{i=2}^n d_i -1 = \\sum_{i=2}^n d_i'$. If $t > 1$ then $d_1 = d_2$, so $d_1 < \\sum_{i=2}^n d_i$ since $d_n \\geq 1$. In particular, since $\\sum_{i=1}^n d_i$ is even, $\\sum_{i=2}^n d_i - d_1 = \\sum_{i=1}^n d_i - 2d_1$ is also even, hence $\\sum_{i=2}^n d_i - d_1 \\geq 2$. Therefore, $d_1' = d_1 \\leq \\sum_{i=2}^n d_i-2 = \\sum_{i=2}^n d_i'$. This finishes the proof of Theorem~\\ref{Thm:GraphicalInfiniteDisc}.\n\n\n\n\n\n\n\\subsubsection{Proof of Lemma~\\ref{Lem:ConvWInfiniteDisc}}\n\nClearly $\\mathcal{W} \\subseteq \\mathcal{W}_1$, so $\\overline{\\text{conv}(\\mathcal{W})} \\subseteq \\mathcal{W}_1$ since $\\mathcal{W}_1$ is closed and convex, by Lemma~\\ref{Lem:W-Convex}. Conversely, let $\\mathbb{Q}$ denote the set of rational numbers. We will first show that $\\mathcal{W}_1 \\cap \\mathbb{Q}^n \\subseteq \\text{conv}(\\mathcal{W})$ and then proceed by a limit argument. Let $\\mathbf{d} \\in \\mathcal{W}_1 \\cap \\mathbb{Q}^n$, so $\\mathbf{d} = (d_1, \\dots, d_n) \\in \\mathbb{Q}^n$ with $d_i \\geq 0$ and $\\max_{1 \\leq i \\leq n} d_i \\leq \\frac{1}{2} \\sum_{i=1}^n d_i$. 
Choose $K \\in \\mathbb{N}$ large enough such that $Kd_i \\in \\mathbb{N}_0$ for all $i = 1,\\dots,n$. Observe that $2K \\mathbf{d} = (2Kd_1, \\dots, 2Kd_n) \\in \\mathbb{N}_0^n$ has the property that $\\sum_{i=1}^n 2Kd_i \\in \\mathbb{N}_0$ is even and $\\max_{1 \\leq i \\leq n} 2Kd_i \\leq \\frac{1}{2} \\sum_{i=1}^n 2Kd_i$, so $2K\\mathbf{d} \\in \\mathcal{W}$ by definition. Since $0 = (0,\\dots,0) \\in \\mathcal{W}$ as well, all elements along the segment joining $0$ and $2K\\mathbf{d}$ lie in $\\text{conv}(\\mathcal{W})$, so in particular, $\\mathbf{d} = (2K\\mathbf{d})\/(2K) \\in \\text{conv}(\\mathcal{W})$. This shows that $\\mathcal{W}_1 \\cap \\mathbb{Q}^n \\subseteq \\text{conv}(\\mathcal{W})$, and hence $\\overline{\\mathcal{W}_1 \\cap \\mathbb{Q}^n} \\subseteq \\overline{\\text{conv}(\\mathcal{W})}$.\n\nTo finish the proof it remains to show that $\\overline{\\mathcal{W}_1 \\cap \\mathbb{Q}^n} = \\mathcal{W}_1$. On the one hand we have \n\\begin{equation*}\n\\overline{\\mathcal{W}_1 \\cap \\mathbb{Q}^n} \\subseteq \\overline{\\mathcal{W}_1} \\cap \\overline{\\mathbb{Q}^n} = \\mathcal{W}_1 \\cap \\mathbb{R}^n = \\mathcal{W}_1. \n\\end{equation*}\nFor the other direction, given $\\mathbf{d} \\in \\mathcal{W}_1$, choose $\\mathbf{d}_1, \\dots, \\mathbf{d}_n \\in \\mathcal{W}_1$ such that $\\mathbf{d}, \\mathbf{d}_1, \\dots, \\mathbf{d}_n$ are in general position, so that the convex hull $C$ of $\\{\\mathbf{d}, \\mathbf{d}_1, \\dots, \\mathbf{d}_n\\}$ is full dimensional. This can be done, for instance, by noting that the following $n+1$ points in $\\mathcal{W}_1$ are in general position:\n\\begin{equation*}\n\\{0, \\; \\mathbf{e}_1 + \\mathbf{e}_2, \\; \\mathbf{e}_1 + \\mathbf{e}_3, \\; \\cdots, \\; \\mathbf{e}_1 + \\mathbf{e}_n, \\; \\mathbf{e}_1 + \\mathbf{e}_2 + \\cdots + \\mathbf{e}_n\\},\n\\end{equation*}\nwhere $\\mathbf{e}_1, \\dots, \\mathbf{e}_n$ are the standard basis vectors of $\\mathbb{R}^n$. For each $m \\in \\mathbb{N}$ and $i = 1, \\dots, n$, choose $\\mathbf{d}_i^{(m)}$ on the line segment between $\\mathbf{d}$ and $\\mathbf{d}_i$ such that the convex hull $C_m$ of $\\{\\mathbf{d}, \\mathbf{d}_1^{(m)}, \\dots, \\mathbf{d}_n^{(m)}\\}$ is full dimensional and has diameter at most $1\/m$. Since $C_m$ is full dimensional we can choose a rational point $\\mathbf{r}_m \\in C_m \\subseteq C \\subseteq \\mathcal{W}_1$. Thus we have constructed a sequence of rational points $(\\mathbf{r}_m)$ in $\\mathcal{W}_1$ converging to $\\mathbf{d}$, which shows that $\\mathcal{W}_1 \\subseteq \\overline{\\mathcal{W}_1 \\cap \\mathbb{Q}^n}$.\n\n\n\n\n\\subsubsection{Proof of Theorem~\\ref{Thm:ConsistencyInfiniteDisc}}\n\nWe first address the issue of the existence of $\\hat \\theta$. Recall from the discussion in Section~\\ref{Sec:InfiniteDisc} that the MLE $\\hat \\theta \\in \\Theta$ exists if and only if $\\mathbf{d} \\in \\mathcal{M}^\\circ$. Clearly $\\mathbf{d} \\in \\mathcal{W}$ since $\\mathbf{d}$ is the degree sequence of the sampled graph $G$, and $\\mathcal{W} \\subseteq \\text{conv}(\\mathcal{W}) = \\mathcal{M}$ from Proposition~\\ref{Prop:MConvW}. 
Therefore, the MLE $\\hat \\theta$ does not exist if and only if $\\mathbf{d} \\in \\partial \\mathcal{M} = \\mathcal{M} \\setminus \\mathcal{M}^\\circ$, where the boundary $\\partial \\mathcal{M}$ is explicitly given by\n\\begin{equation*}\n\\partial \\mathcal{M} = \\left\\{ \\mathbf{d}' \\in \\mathbb{R}_0^n \\colon \\min_{1 \\leq i \\leq n} d_i' = 0 \\; \\text{ or } \\; \\max_{1 \\leq i \\leq n} d_i' = \\frac{1}{2} \\sum_{i=1}^n d_i' \\right\\}.\n\\end{equation*}\nUsing the union bound and the fact that the edge weights $A_{ij}$ are independent geometric random variables, we have\n\\begin{equation*}\n\\begin{split}\n\\P(d_i = 0 \\text{ for some } i)\n\\leq \\sum_{i=1}^n \\P(d_i = 0)\n&= \\sum_{i=1}^n \\P(A_{ij} = 0 \\text{ for all } j \\neq i) \\\\\n&= \\sum_{i=1}^n \\prod_{j \\neq i} \\left(1-\\exp(-\\theta_i-\\theta_j)\\right)\n\\leq n\\left(1-\\exp(-M)\\right)^{n-1}.\n\\end{split}\n\\end{equation*}\nFurthermore, again by the union bound,\n\\begin{equation*}\n\\P\\left(\\max_{1 \\leq i \\leq n} d_i = \\frac{1}{2} \\sum_{i=1}^n d_i \\right)\n= \\P\\left(d_i = \\sum_{j \\neq i} d_j \\text{ for some } i \\right)\n\\leq \\sum_{i=1}^n \\P\\left(d_i = \\sum_{j \\neq i} d_j\\right).\n\\end{equation*}\nNote that we have $d_i = \\sum_{j \\neq i} d_j$ for some $i$ if and only if the edge weights $A_{jk} = 0$ for all $j,k \\neq i$. This occurs with probability\n\\begin{equation*}\n\\P\\left(A_{jk} = 0 \\text{ for } j,k \\neq i \\right)\n= \\prod_{\\substack{j < k\\\\j,k \\neq i}}\\left(1-\\exp(-\\theta_j-\\theta_k)\\right)\n\\leq \\left(1-\\exp(-M)\\right)^{\\binom{n-1}{2}}.\n\\end{equation*}\nTherefore,\n\\begin{equation*}\n\\begin{split}\n\\P(\\mathbf{d} \\in \\partial \\mathcal{M})\n&\\leq \\P(d_i = 0 \\text{ for some } i) + \\P\\left(\\max_{1 \\leq i \\leq n} d_i = \\frac{1}{2} \\sum_{i=1}^n d_i \\right) \\\\\n&\\leq n\\left(1-\\exp(-M)\\right)^{n-1} + n\\left(1-\\exp(-M)\\right)^{\\binom{n-1}{2}} \\\\\n&\\leq \\frac{1}{n^{k-1}},\n\\end{split}\n\\end{equation*}\nwhere the last inequality holds for sufficiently large $n$. This shows that for sufficiently large $n$, the MLE $\\hat \\theta$ exists with probability at least $1-1\/n^{k-1}$.\n\nWe now turn to proving the consistency of $\\hat \\theta$. For the rest of this proof, assume that the MLE $\\hat \\theta \\in \\Theta$ exists, which occurs with probability at least $1-1\/n^{k-1}$. The proof of the consistency of $\\hat \\theta$ follows the same outline as in the proof of Theorem~\\ref{Thm:ConsistencyCont}. Let $\\mathbf{d}^\\ast = -\\nabla Z(\\theta)$ denote the expected degree sequence of the distribution $\\P^\\ast_\\theta$, and recall that the MLE $\\hat \\theta$ satisfies $\\mathbf{d} = -\\nabla Z(\\hat \\theta)$. 
By the mean value theorem~\\cite[p.~341]{Lang}, we can write\n\\begin{equation}\\label{Eq:Consistency-MVT-Disc}\n\\mathbf{d} - \\mathbf{d}^\\ast = \\nabla Z(\\theta) - \\nabla Z(\\hat \\theta) = J(\\theta - \\hat\\theta),\n\\end{equation}\nwhere $J$ is the matrix obtained by integrating the Hessian of $Z$ between $\\theta$ and $\\hat \\theta$,\n\\begin{equation*}\nJ = \\int_0^1 \\nabla^2 Z(t\\theta + (1-t)\\hat \\theta) \\: dt.\n\\end{equation*}\n\nLet $0 \\leq t \\leq 1$, and note that at the point $\\xi = t\\theta + (1-t) \\hat \\theta$ the gradient $\\nabla Z$ is given by\n\\begin{equation*}\n\\big( \\nabla Z(\\xi) \\big)_i = -\\sum_{j \\neq i} \\frac{1}{\\exp(\\xi_i+\\xi_j)-1}.\n\\end{equation*}\nThus, the Hessian $\\nabla^2 Z$ is\n\\begin{equation*}\n\\big( \\nabla^2 Z(\\xi) \\big)_{ij} = \\frac{\\exp(\\xi_i+\\xi_j)}{(\\exp(\\xi_i+\\xi_j)-1)^2} \\quad i \\neq j,\n\\end{equation*}\nand\n\\begin{equation*}\n\\big( \\nabla^2 Z(\\xi) \\big)_{ii} = \\sum_{j \\neq i} \\frac{\\exp(\\xi_i+\\xi_j)}{(\\exp(\\xi_i+\\xi_j)-1)^2} = \\sum_{j \\neq i} \\big( \\nabla^2 Z(\\xi) \\big)_{ij}.\n\\end{equation*}\nSince $\\theta, \\hat \\theta \\in \\Theta$ and we assume $\\theta_i+\\theta_j \\leq M$, for $i \\neq j$ we have\n\\begin{equation*}\n0 < \\xi_i + \\xi_j \\leq \\max\\{\\theta_i + \\theta_j, \\; \\hat \\theta_i + \\hat \\theta_j \\} \\leq \\max\\{M, \\; 2\\|\\hat \\theta\\|_\\infty\\} \\leq M + 2\\|\\hat \\theta\\|_\\infty.\n\\end{equation*}\nSince the function $x \\mapsto \\exp(x)\/(\\exp(x)-1)^2$ is decreasing on $(0,\\infty)$, this means $J$ is a symmetric, diagonally balanced matrix with off-diagonal entries bounded below by $\\exp(M + 2\\|\\hat\\theta\\|_\\infty)\/(\\exp(M + 2\\|\\hat\\theta\\|_\\infty)-1)^2$, being an average of such matrices. Then by Theorem~\\ref{Thm:Main}, we have the bound\n\\begin{equation*}\n\\|J^{-1}\\|_\\infty\n\\leq \\frac{(3n-4)}{2(n-2)(n-1)} \\: \\frac{(\\exp(M + 2\\|\\hat\\theta\\|_\\infty)-1)^2}{\\exp(M + 2\\|\\hat\\theta\\|_\\infty)}\n\\leq \\frac{2}{n} \\: \\frac{(\\exp(M + 2\\|\\hat\\theta\\|_\\infty)-1)^2}{\\exp(M + 2\\|\\hat\\theta\\|_\\infty)},\n\\end{equation*}\nwhere the second inequality holds for $n \\geq 7$. By inverting $J$ in~\\eqref{Eq:Consistency-MVT-Disc} and applying the bound on $\\|J^{-1}\\|_\\infty$ above, we obtain\n\\begin{equation}\\label{Eq:Consistency-R2}\n\\|\\theta-\\hat\\theta\\|_\\infty \\leq \\|J^{-1}\\|_\\infty \\: \\|\\mathbf{d} - \\mathbf{d}^\\ast\\|_\\infty \\leq \\frac{2}{n} \\: \\frac{(\\exp(M + 2\\|\\hat\\theta\\|_\\infty)-1)^2}{\\exp(M + 2\\|\\hat\\theta\\|_\\infty)} \\: \\|\\mathbf{d} - \\mathbf{d}^\\ast\\|_\\infty.\n\\end{equation}\n\nLet $A = (A_{ij})$ denote the edge weights of the sampled graph $G \\sim \\P^\\ast_\\theta$, so $d_i = \\sum_{j \\neq i} A_{ij}$ for $i = 1,\\dots,n$. Since $\\mathbf{d}^\\ast$ is the expected degree sequence from the distribution $\\P^\\ast_\\theta$, we also have $d_i^\\ast = \\sum_{j \\neq i} 1\/(\\exp(\\theta_i+\\theta_j)-1)$. Recall that $A_{ij}$ is a geometric random variable with success probability\n\\begin{equation*}\nq = 1-\\exp(-\\theta_i-\\theta_j) \\geq 1-\\exp(-L),\n\\end{equation*}\nso by Lemma~\\ref{Lem:SubExp-Geo}, $A_{ij} - 1\/(\\exp(\\theta_i + \\theta_j)-1)$ is sub-exponential with parameter $-4\/\\log(1-q) \\leq 4\/L$. 
For each $i = 1,\\dots,n$, the random variables $(A_{ij} - 1\/(\\exp(\\theta_i+\\theta_j)-1), j \\neq i)$ are independent sub-exponential random variables, so we can apply the concentration inequality in Theorem~\\ref{Thm:ConcIneqSubExp} with $\\kappa = 4\/L$ and\n\\begin{equation*}\n\\epsilon = \\left(\\frac{16k \\log n}{\\gamma (n-1) L^2} \\right)^{1\/2}.\n\\end{equation*}\nAssume $n$ is sufficiently large such that $\\epsilon\/\\kappa = \\sqrt{k \\log n\/(\\gamma (n-1))} \\leq 1$. Then by Theorem~\\ref{Thm:ConcIneqSubExp}, for each $i = 1,\\dots,n$ we have\n\\begin{equation*}\n\\begin{split}\n\\P\\left(|d_i - d_i^\\ast| \\geq \\sqrt{\\frac{16 kn \\log n}{\\gamma L^2}} \\right)\n&\\leq \\P\\left(|d_i - d_i^\\ast| \\geq \\sqrt{\\frac{16 k (n-1) \\log n}{\\gamma L^2}} \\right) \\\\\n&= \\P\\left(\\Bigg|\\frac{1}{n-1}\\sum_{j \\neq i} \\left(A_{ij}-\\frac{1}{\\exp(\\theta_i+\\theta_j)-1}\\right)\\Bigg| \\geq \\sqrt{\\frac{16 k \\log n}{\\gamma (n-1) L^2}} \\right) \\\\\n&\\leq 2\\exp\\left(-\\gamma \\: (n-1) \\cdot \\frac{L^2}{16} \\cdot \\frac{16 k \\log n}{\\gamma (n-1) L^2} \\right) \\\\\n&= \\frac{2}{n^k}.\n\\end{split}\n\\end{equation*}\nThe union bound then gives us\n\\begin{equation*}\n\\P\\left(\\|\\mathbf{d} - \\mathbf{d}^\\ast\\|_\\infty \\geq \\sqrt{\\frac{16 kn \\log n}{\\gamma L^2}} \\right)\n\\leq \\sum_{i=1}^n \\P\\left(|d_i - d_i^\\ast| \\geq \\sqrt{\\frac{16 kn \\log n}{\\gamma L^2}} \\right)\n\\leq \\frac{2}{n^{k-1}}.\n\\end{equation*}\n\nAssume now that $\\|\\mathbf{d} - \\mathbf{d}^\\ast\\|_\\infty \\leq \\sqrt{16 kn \\log n\/(\\gamma L^2)}$, which happens with probability at least $1-2\/n^{k-1}$. From~\\eqref{Eq:Consistency-R2} and using the triangle inequality, we get\n\\begin{equation*}\n\\|\\hat \\theta\\|_\\infty\n\\leq \\|\\theta-\\hat\\theta\\|_\\infty + \\|\\theta\\|_\\infty\n\\leq \\frac{8}{L} \\: \\sqrt{\\frac{k\\log n}{\\gamma n}} \\: \\frac{(\\exp(M + 2\\|\\hat \\theta\\|_\\infty)-1)^2}{\\exp(M + 2\\|\\hat \\theta\\|_\\infty)} + M.\n\\end{equation*}\nThis means $\\|\\hat \\theta\\|_\\infty$ satisfies the inequality $H_n(\\|\\hat \\theta\\|_\\infty) \\geq 0$, where $H_n(x)$ is the function\n\\begin{equation*}\nH_n(x) = \\frac{8}{L} \\: \\sqrt{\\frac{k\\log n}{\\gamma n}} \\: \\frac{(\\exp(M + 2x)-1)^2}{\\exp(M + 2x)} - x + M.\n\\end{equation*}\nOne can easily verify that $H_n$ is a convex function (indeed, $(\\exp(M+2x)-1)^2\/\\exp(M+2x) = \\exp(M+2x) - 2 + \\exp(-M-2x)$ is convex in $x$), so $H_n$ assumes the value $0$ at most twice, and moreover, $H_n(x) \\to \\infty$ as $x \\to \\infty$. It is also easy to see that for all sufficiently large $n$, we have $H_n(2M) < 0$ and $H_n(\\frac{1}{4} \\log n) < 0$: at $x = 2M$ the first term of $H_n$ tends to $0$, while at $x = \\frac{1}{4} \\log n$ it grows only like $\\sqrt{\\log n}$, which is dominated by $x$. Therefore, $H_n(\\|\\hat \\theta\\|_\\infty) \\geq 0$ implies either $\\|\\hat \\theta\\|_\\infty < 2M$ or $\\|\\hat \\theta\\|_\\infty > \\frac{1}{4} \\log n$. We claim that for sufficiently large $n$ we always have $\\|\\hat \\theta\\|_\\infty < 2M$. Suppose, on the contrary, that there are infinitely many $n$ for which $\\|\\hat \\theta\\|_\\infty > \\frac{1}{4} \\log n$, and consider one such $n$. Since $\\hat \\theta_i + \\hat \\theta_j > 0$ for each $i \\neq j$, there can be at most one index $i$ with $\\hat \\theta_i < 0$. We consider the following two cases:\n\\begin{enumerate}\n \\item \\textbf{Case 1:} suppose $\\hat \\theta_i \\geq 0$ for all $i = 1,\\dots,n$. Let $i^\\ast$ be an index with $\\hat \\theta_{i^\\ast} = \\|\\hat \\theta\\|_\\infty > \\frac{1}{4} \\log n$. 
Then, since $\\hat \\theta_{i^\\ast} + \\hat \\theta_j \\geq \\hat \\theta_{i^\\ast}$ for $j \\neq i^\\ast$,\n\\begin{equation*}\n\\begin{split}\n\\frac{1}{\\exp(M)-1} &\\leq \\frac{1}{n-1} \\sum_{j \\neq i^\\ast} \\frac{1}{\\exp(\\theta_{i^\\ast} + \\theta_j)-1} \\\\\n&\\leq \\frac{1}{n-1} \\left| \\sum_{j \\neq i^\\ast} \\frac{1}{\\exp(\\theta_{i^\\ast}+\\theta_j)-1} - \\sum_{j \\neq i^\\ast} \\frac{1}{\\exp(\\hat \\theta_{i^\\ast}+\\hat \\theta_j)-1} \\right|\n + \\frac{1}{n-1} \\sum_{j \\neq i^\\ast} \\frac{1}{\\exp(\\hat \\theta_{i^\\ast} + \\hat \\theta_j)-1} \\\\\n&\\leq \\frac{1}{n-1} \\| \\mathbf{d} - \\mathbf{d}^\\ast \\|_\\infty + \\frac{1}{\\exp(\\|\\hat \\theta\\|_\\infty)-1} \\\\\n&\\leq \\frac{1}{n-1} \\sqrt{\\frac{16 kn \\log n}{\\gamma L^2}} + \\frac{1}{n^{1\/4}-1},\n\\end{split}\n\\end{equation*}\nwhich cannot hold for sufficiently large $n$, as the last expression tends to $0$ as $n \\to \\infty$.\n\n \\item \\textbf{Case 2:} suppose $\\hat \\theta_i < 0$ for some $i = 1,\\dots,n$, so $\\hat \\theta_j > 0$ for $j \\neq i$. Without loss of generality assume $\\hat \\theta_1 < 0 < \\hat \\theta_2 \\leq \\cdots \\leq \\hat \\theta_n$, so $\\hat \\theta_n = \\|\\hat \\theta\\|_\\infty > \\frac{1}{4} \\log n$. Following the same chain of inequalities as in the previous case (with $i^\\ast = n$), we obtain\n\\begin{equation*}\n\\begin{split}\n\\frac{1}{\\exp(M)-1} &\\leq \\frac{1}{n-1} \\| \\mathbf{d} - \\mathbf{d}^\\ast \\|_\\infty + \\frac{1}{n-1} \\left(\\frac{1}{\\exp(\\hat \\theta_n + \\hat \\theta_1)-1} + \\sum_{j = 2}^{n-1} \\frac{1}{\\exp(\\hat \\theta_j + \\hat \\theta_n)-1} \\right) \\\\\n&\\leq \\frac{1}{n-1} \\sqrt{\\frac{16 kn \\log n}{\\gamma L^2}} + \\frac{1}{(n-1)(\\exp(\\hat \\theta_n + \\hat \\theta_1)-1)} + \\frac{n-2}{(n-1)(\\exp(\\|\\hat \\theta\\|_\\infty)-1)} \\\\\n&\\leq \\frac{1}{n-1} \\sqrt{\\frac{16 kn \\log n}{\\gamma L^2}} + \\frac{1}{(n-1)(\\exp(\\hat \\theta_n + \\hat \\theta_1)-1)} + \\frac{1}{n^{1\/4}-1}.\n\\end{split}\n\\end{equation*}\nThis implies\n\\begin{equation*}\n\\begin{split}\n\\frac{1}{\\exp(\\hat \\theta_1 + \\hat \\theta_n)-1}\n&\\geq (n-1)\\left(\\frac{1}{\\exp(M)-1} - \\frac{1}{n-1} \\sqrt{\\frac{16 kn \\log n}{\\gamma L^2}} - \\frac{1}{n^{1\/4}-1}\\right)\n\\geq \\frac{n}{2(\\exp(M)-1)},\n\\end{split}\n\\end{equation*}\nwhere the last inequality assumes $n$ is sufficiently large. Therefore, for $i = 2,\\dots,n$,\n\\begin{equation*}\n\\frac{1}{\\exp(\\hat \\theta_1 + \\hat \\theta_i)-1} \\geq \\frac{1}{\\exp(\\hat \\theta_1 + \\hat \\theta_n)-1} \\geq \\frac{n}{2(\\exp(M)-1)}.\n\\end{equation*}\nHowever, this implies\n\\begin{equation*}\n\\begin{split}\n\\sqrt{\\frac{16 kn \\log n}{\\gamma L^2}} \\geq \\|\\mathbf{d} - \\mathbf{d}^\\ast \\|_\\infty\n\\geq |d_1-d_1^\\ast|\n&\\geq -\\sum_{j=2}^n \\frac{1}{\\exp(\\theta_1 + \\theta_j)-1} +\\sum_{j=2}^n \\frac{1}{\\exp(\\hat\\theta_1 + \\hat\\theta_n)-1} \\\\\n&\\geq - \\frac{(n-1)}{\\exp(L)-1} + \\frac{n(n-1)}{2(\\exp(M)-1)},\n\\end{split}\n\\end{equation*}\nwhich cannot hold for sufficiently large $n$, as the right hand side in the last expression grows faster than the left hand side on the first line.\n\\end{enumerate}\n\nThe analysis above shows that we have $\\|\\hat \\theta\\|_\\infty < 2M$ for all sufficiently large $n$. 
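In particular, once $\\|\\hat \\theta\\|_\\infty < 2M$, the factor $(\\exp(M + 2\\|\\hat\\theta\\|_\\infty)-1)^2\/\\exp(M + 2\\|\\hat\\theta\\|_\\infty)$ appearing in~\\eqref{Eq:Consistency-R2} is at most $(\\exp(5M)-1)^2\/\\exp(5M)$, since $x \\mapsto (\\exp(x)-1)^2\/\\exp(x)$ is increasing for $x > 0$. 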
Plugging this result into~\\eqref{Eq:Consistency-R2} gives\n\\begin{equation*}\n\\begin{split}\n\\|\\theta-\\hat\\theta\\|_\\infty\n&\\leq \\frac{2}{n} \\: \\frac{(\\exp(5M)-1)^2}{\\exp(5M)} \\: \\sqrt{\\frac{16 kn \\log n}{\\gamma L^2}}\n\\leq \\frac{8 \\: \\exp(5M)}{L} \\: \\sqrt{\\frac{k \\log n}{\\gamma n}}.\n\\end{split}\n\\end{equation*}\nFinally, taking into account the issue of the existence of the MLE, we conclude that for sufficiently large $n$, with probability at least\n\\begin{equation*}\n\\left(1-\\frac{1}{n^{k-1}}\\right)\\left(1-\\frac{2}{n^{k-1}}\\right) \\geq 1-\\frac{3}{n^{k-1}},\n\\end{equation*}\nthe MLE $\\hat \\theta \\in \\Theta$ exists and satisfies\n\\begin{equation*}\n\\|\\theta-\\hat\\theta\\|_\\infty\n\\leq \\frac{8 \\: \\exp(5M)}{L} \\: \\sqrt{\\frac{k \\log n}{\\gamma n}},\n\\end{equation*}\nas desired. This finishes the proof of Theorem~\\ref{Thm:ConsistencyInfiniteDisc}.\n\n\n\n\n\n\\section{Discussion and future work}\n\\label{Sec:Discussion}\n\nIn this paper, we have studied the maximum entropy distribution on weighted graphs with a given expected degree sequence. In particular, we focused our study on three classes of weighted graphs: the finite discrete weighted graphs (with edge weights in the set $\\{0,1,\\dots,r-1\\}$, $r \\geq 2$), the infinite discrete weighted graphs (with edge weights in the set $\\mathbb{N}_0$), and the continuous weighted graphs (with edge weights in the set $\\mathbb{R}_0$). We have shown that the maximum entropy distributions are characterized by the edge weights being independent random variables with exponential family distributions parameterized by the vertex potentials. We also studied the problem of finding the MLE of the vertex potentials, and we proved the remarkable property that the MLE is consistent given only a single graph sample.\n\nIn the case of finite discrete weighted graphs, we also provided a fast, iterative algorithm for finding the MLE with a geometric rate of convergence. Finding the MLE in the case of continuous or infinite discrete weighted graphs can be performed via standard gradient-based methods (see the sketch at the end of this section), and the bounds that we proved on the inverse Hessian of the log-partition function can also be used to provide a rate of convergence for these methods. However, it would be interesting to develop an efficient iterative algorithm for computing the MLE, similar to the one for finite discrete weighted graphs.\n\nAnother interesting research direction is to explore the theory of maximum entropy distributions when we impose additional structure on the underlying graph. We can start with an arbitrary graph $G_0$ on $n$ vertices, for instance a lattice graph or a sparse graph, and consider the maximum entropy distributions on the subgraphs $G$ of $G_0$. By choosing different types of underlying graphs $G_0$, we can incorporate additional prior information from the specific applications we are considering.\n\nFinally, given our initial motivation for this project, we would also like to apply the theory developed in this paper to problems in neuroscience, in particular to modeling the early-stage computations that occur in the retina. There are also other problem domains where our theory is potentially useful, including clustering, image segmentation, and modularity analysis.\n
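\nAs a concrete illustration of the gradient-based approach mentioned above for the continuous case, the following is a minimal sketch in Python with NumPy; it is an illustration on our part, not an algorithm specified in this paper. It performs gradient ascent on the concave log-likelihood $\\ell(\\theta) = \\sum_{i<j} \\log(\\theta_i+\\theta_j) - \\sum_i \\theta_i d_i$, whose stationarity condition is exactly the moment equation $d_i = \\sum_{j \\neq i} 1\/(\\hat\\theta_i+\\hat\\theta_j)$. The function name, the $O(1\/n)$ step-size heuristic (motivated by the diagonal entries of the Hessian computed above growing linearly in $n$), and the backtracking safeguard are our own choices.\n\\begin{verbatim}\nimport numpy as np\n\ndef mle_continuous_weights(d, max_iter=20000, tol=1e-9):\n    # Illustrative sketch (not from the paper): gradient ascent on\n    # l(theta) = sum_{i<j} log(theta_i + theta_j) - sum_i theta_i * d_i.\n    d = np.asarray(d, dtype=float)\n    n = d.size\n    theta = np.ones(n)                # interior starting point\n    step = 0.5 \/ n                    # heuristic step size\n    for _ in range(max_iter):\n        S = theta[:, None] + theta[None, :]  # pairwise sums theta_i + theta_j\n        np.fill_diagonal(S, np.inf)          # drop the i == j terms\n        grad = (1.0 \/ S).sum(axis=1) - d     # gradient of the log-likelihood\n        step_t = step\n        theta_new = theta + step_t * grad\n        # backtrack so all pairwise sums stay positive (theta stays in Theta)\n        while True:\n            T = theta_new[:, None] + theta_new[None, :]\n            np.fill_diagonal(T, np.inf)\n            if T.min() > 0:\n                break\n            step_t \/= 2.0\n            theta_new = theta + step_t * grad\n        if np.max(np.abs(theta_new - theta)) < tol:\n            return theta_new\n        theta = theta_new\n    return theta\n\n# Sanity check: recover known vertex potentials from one sampled graph.\nrng = np.random.default_rng(0)\nn = 200\ntheta_true = rng.uniform(0.5, 1.5, size=n)\nrates = theta_true[:, None] + theta_true[None, :]\nA = rng.exponential(1.0 \/ rates)     # A_ij ~ Exp(theta_i + theta_j)\nA = np.triu(A, 1)\nA = A + A.T                          # symmetric weights, zero diagonal\nd = A.sum(axis=1)                    # observed degree sequence\ntheta_hat = mle_continuous_weights(d)\nprint(np.max(np.abs(theta_hat - theta_true)))  # error of order sqrt(log n \/ n)\n\\end{verbatim}\nThe same template would apply to the infinite discrete case after replacing $1\/(\\theta_i+\\theta_j)$ with $1\/(\\exp(\\theta_i+\\theta_j)-1)$ in the gradient.\n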