\section{Introduction}
Network models are nowadays ubiquitous in the natural, information, social, and engineering sciences. The last 15 years or so have seen the emergence of the vast, multidisciplinary field of Network Science, with contributions from a wide array of researchers including physicists, mathematicians, computer scientists, engineers, biologists, and social scientists \cite{Linked,EstradaBook,NewmanBook}. Applications of Network Science range from biology to public health, from social network analysis to homeland security, from economics to the humanities, from marketing to information retrieval. Network analysis is also an essential ingredient in the design of information, communication, and transportation networks, as well as in energy-related disciplines such as power grid maintenance, control, and optimization \cite{Pinar2010}. Graph theory and linear algebra provide abstractions and quantitative tools that can be employed in the analysis and design of large and complex network models.

Real-world networks are characterized by structural properties that make them very different from both regular graphs on one hand, and completely random graphs on the other. Real networks frequently exhibit a highly skewed degree distribution (often following a power law), small diameter, and high clustering coefficient (the last two properties together are often referred to as the {\em small world} property), as well as the presence of motifs, communities, and other signatures of complexity.

Some of the basic questions in network analysis concern node and edge centrality, community detection, communicability, and diffusion \cite{Brandes,EstradaBook,NewmanBook}. Related to these are the important notions of network robustness (or its opposite, vulnerability) and connectivity \cite{Cohen}. These latter properties refer to the degree of resiliency displayed by the network in the face of random accidental failures or deliberate, targeted attacks, which can be modeled in terms of edge or node removal. Generally speaking, it is desirable to design networks that are at the same time highly sparse (in order to reduce costs) and highly connected, meaning that disconnecting or disrupting the network would require the removal of a large number of edges. Such networks should not contain bottlenecks, and they should allow for the rapid exchange of communication between nodes. Expander graphs \cite{E06b,HLW06} are an important class of graphs with such properties.

In this paper we describe some techniques that can be brought to bear on the problems described above and related questions. Our approach is based on the notion of {\em total communicability} of a network, introduced in \cite{Benzi2013} on the basis of earlier work by Estrada and Hatano \cite{EH08,EHB12}. Total communicability, defined as the (normalized) sum of the entries in the exponential of the adjacency matrix of the network, provides a global measure of how well the nodes in a graph can exchange information. Communicability is based on the number and length of graph walks connecting pairs of nodes in the network.
Pairs of nodes $(i,j)$ with high communicability correspond to large entries $[e^{A}]_{ij}$ in the matrix exponential of $A$, the adjacency matrix of the network.

Total network communicability can also be used to measure the connectivity of the network as a whole. For instance, given two alternative network designs (with a similar ``budget'' in terms of number of candidate edges), one can compare the two designs by computing the respective total communicabilities and pick the network with the highest one, assuming that a well-connected network with high node communicability is the desired goal. It is important to stress that the total communicability of a network can be efficiently computed or estimated, even for large networks, using Lanczos- or Arnoldi-based algorithms, without having to compute any individual entry of $e^A$ (only the ability to perform matrix-vector products with $A$ is required).

In this paper we consider three different problems. Let $G=(V,E)$ be a connected, undirected and sparse graph. The {\it downdating problem} consists of selecting an edge $(i,j)$ to be removed from the network so as to minimize the decrease in its total communicability while preserving its connectedness.

The goal when tackling the {\it updating problem}, on the other hand, is to select a pair of nodes $i\neq j$ with $(i,j)\not\in E$ such that adding the edge $(i,j)$ maximizes the increase in the total communicability of the network.

Finally, the {\it rewiring problem} has the same goal as the updating problem, but it requires the selection of two modifications, a downdate followed by an update.

The importance of the first two problems for network analysis and design is obvious. We note that an efficient solution to the second problem would also suggest how to proceed if the goal were to identify existing edges whose removal would {\em maximize} the decrease in communicability, which could be useful, e.g., in planning anti-terrorism operations or public health policies (see, e.g., \cite{Tong, VM2011}). The third problem is motivated by the observation that for transportation networks (e.g., flight routes) it is sometimes desirable to redirect edges in order to improve performance (i.e., increase the number of travellers) without increasing costs too much. Hence, in such cases, one wants to eliminate a route used by only a few travellers and to add a route that is likely to be used by many people.

The above problems may arise not only in the design of infrastructural networks (such as telecommunication or transportation networks), but also in other contexts. For instance, in social networks the addition of a friendship/collaboration tie may dramatically change the structure of the network, leading to a more cohesive group, and hence preventing the splitting of the community into smaller subgroups.

The work is organized as follows. Section \ref{sec:background} contains some basic facts from linear algebra and graph theory, and introduces the modifications of the adjacency matrix we will perform.
There we also provide further justification for the use of the total network communicability as the objective function. In section \ref{sec:bounds} we describe bounds for the total communicability via the Gauss--Radau quadrature rule, and we show how these bounds change when a rank-two modification of the adjacency matrix is performed. Section \ref{sec:modification} is devoted to the methods used to modify the graph in a controlled way so as to adjust the value of its total communicability. Numerical studies assessing the effectiveness and performance of the techniques introduced are provided in section \ref{sec:test_tc}, for both synthetic and real-world networks. In section \ref{sec:nc+gap} we discuss the evolution of a popular measure of network connectivity, known as the {\em free energy} (or {\em natural connectivity}), when the same modifications are performed. This section provides further evidence motivating the use of the total communicability as a measure of connectivity. Finally, in section \ref{sec:conclusions} we draw conclusions and describe future directions.

\section{Background and definitions}
\label{sec:background}
In this section we provide some basic definitions, notations, and properties associated with graphs.

A {\itshape graph} or {\itshape network} $G=(V,E)$ is defined by a set $V$ of $n$ nodes (vertices) and a set $E=\{(i,j)\,|\,i,j\in V\}$ of $m$ edges between the nodes. An edge is said to be {\itshape incident} to a vertex $i$ if there exists a node $j\neq i$ such that either $(i,j)\in E$ or $(j,i)\in E$. The {\itshape degree} of a vertex $i$, denoted by $d_i$, is the number of edges incident to $i$ in $G$. The graph is said to be {\itshape undirected} if the edges are formed by unordered pairs of vertices. A {\itshape walk} of length $k$ in $G$ is a sequence of nodes $i_1, i_2,\ldots,i_k, i_{k+1}$ such that $(i_l,i_{l+1})\in E$ for all $1\leq l\leq k$. A {\itshape closed walk} is a walk for which $i_1=i_{k+1}$. A {\itshape path} is a walk with no repeated nodes. A graph is {\itshape connected} if there is a path connecting every pair of nodes. A graph with unweighted edges, no self-loops (edges from a node to itself), and no multiple edges is said to be {\itshape simple}. Throughout this work, we will consider undirected, simple, and connected networks.

Every graph can be represented as a matrix $A=\left(a_{ij}\right)\in\mathbb{R}^{n\times n}$, called the {\itshape adjacency matrix} of the graph.
The entries of the adjacency matrix of an unweighted graph $G=(V,E)$ are
\begin{equation*}
a_{ij}=\left\{
\begin{array}{ll}
1 & \mbox{if } (i,j)\in E\\
0 & \mbox{otherwise}
\end{array}
\right.\qquad \forall i,j\in V.
\end{equation*}
If the network is simple, the diagonal elements of the adjacency matrix are all equal to zero. In the special case of an undirected network, the associated adjacency matrix is symmetric, and thus its eigenvalues are real.

We label the eigenvalues in non-increasing order: $\lambda_1\geq\lambda_2\geq\cdots \geq \lambda_n$. Since $A$ is a real-valued, symmetric matrix, we can decompose $A$ into $A=Q\Lambda Q^T$, where $\Lambda$ is a diagonal matrix containing the eigenvalues of $A$ and $Q=[\mathbf{q}_1,\ldots,\mathbf{q}_n]$ is orthogonal, with $\mathbf{q}_i$ an eigenvector associated with $\lambda_i$. Moreover, if $G$ is connected, $A$ is irreducible, and from the Perron--Frobenius Theorem \cite[Chapter 8]{Meyer00} we deduce that $\lambda_1>\lambda_2$ and that the leading eigenvector $\mathbf{q}_1$, sometimes referred to as the {\itshape Perron vector}, can be chosen such that its components $q_1(i)$ are positive for all $i\in V$.

We can now introduce the basic operations which will be performed on the adjacency matrix $A$ associated with the network $G=(V,E)$. We define the {\itshape downdating} of the edge $(i,j)\in E$ as the removal of this edge from the network. The resulting graph $\widehat{G}=(V,\widehat{E})$, which may be disconnected, has adjacency matrix
\begin{equation*}
\widehat{A}=A-UW^T, \qquad U=[\mathbf{e}_i,\mathbf{e}_j],
\quad W=[\mathbf{e}_j,\mathbf{e}_i],
\end{equation*}
where here and in the rest of this work the vectors $\mathbf{e}_i$, $\mathbf{e}_j$ represent the $i$th and $j$th vectors of the standard basis of $\mathbb{R}^n$, respectively.

Similarly, let $(i,j)\in\overline{E}$ be an element of the complement of $E$. We will call this element a {\itshape virtual edge} for the graph $G$. We can construct a new graph $\tilde{G}=(V,\tilde{E})$ obtained from $G$ by adding the virtual edge $(i,j)$ to the graph. This procedure will be referred to as the {\itshape updating} of the virtual edge $(i,j)$. The adjacency matrix of the resulting graph is
\begin{equation*}
\tilde{A}=A+UW^T, \qquad U=[\mathbf{e}_i,\mathbf{e}_j],
\quad W=[\mathbf{e}_j,\mathbf{e}_i].
\end{equation*}
Hence, these two operations can both be described as rank-two modifications of the adjacency matrix of the original graph.

The operation of downdating an edge and successively updating a virtual edge will be referred to as {\itshape rewiring}.
\begin{rem}
{\rm These operations are all performed in a symmetric fashion, since in this paper we consider exclusively undirected networks.}
\end{rem}

\subsection{Centrality and total communicability}
\label{subsec:centrality}
One of the main goals when analyzing a network is to identify the most influential nodes in the network. Over the years, various measures of the importance, or centrality, of nodes have been developed \cite{Brandes,EstradaBook,NewmanBook}.
In particular, the {\itshape (exponential) subgraph centrality} of a node $i$ (see \cite{Estrada2005}) is defined as the $i$th diagonal element of the matrix exponential \cite{Higham2008}:
$$e^A=I+A+\frac{A^2}{2!}+\ldots=\sum_{k=0}^\infty\frac{A^k}{k!},$$
where $I$ is the $n\times n$ identity matrix. As is well known in graph theory, given the adjacency matrix $A$ of an unweighted network and $k\in\mathbb{N}$, the element $\left(A^k\right)_{ij}$ counts the total number of walks of length $k$ starting from node $i$ and ending at node $j$. Therefore, the subgraph centrality of node $i$ counts the total number of closed walks centered at node $i$, weighting walks of length $k$ by a factor $\frac{1}{k!}$, hence giving more importance to shorter walks. Subgraph centrality thus accounts for the returnability of information to the node that was its source. Likewise, the off-diagonal entries $\left(e^A\right)_{ij}$ of the matrix exponential (the {\itshape subgraph communicability} of nodes $i$ and $j$) account for the ability of nodes $i$ and $j$ to exchange information \cite{EH08,EHB12}.

Starting from these observations, and with the aim of reducing the cost of computing the rankings, it was suggested in \cite{Benzi2013} to use as a centrality measure the \emph{total communicability of a node} $i$, defined as the $i$th entry of the vector $e^A\mathbf{1}$, where $\mathbf{1}$ denotes the vector of all ones:
\begin{equation}\label{tnc}
TC(i):= [e^A\mathbf{1}]_i = \sum_{j=1}^n \left[e^A\right]_{ij}.
\end{equation}
This measure of centrality is given by a weighted sum of the walks between node $i$ and every node in the network (including node $i$ itself), and thus quantifies both the ability of a node to spread information across the network and the returnability of the information to the node itself.

The value resulting from summing these quantities over all the nodes can be interpreted as a global measure of how effectively communication takes place across the whole network. This index is called the {\itshape total (network) communicability} \cite{Benzi2013} and can be written as
\begin{equation}\label{tc_spec}
TC(A):=\mathbf{1}^Te^A\mathbf{1}=\sum_{i=1}^n\sum_{j=1}^n(e^A)_{ij} =
\sum_{k=1}^n e^{\lambda_k}({\bf q}_k^T{\bf 1})^2.
\end{equation}
This value can be efficiently computed, e.g., by means of a Krylov method as implemented in S.~G\"uttel's Matlab toolbox \texttt{funm\_kryl} (see \cite{Krylov1,Guettel}), or by Lanczos-based techniques as discussed below. The toolbox \cite{Guettel} implements an efficient algorithm for evaluating $f(A)\mathbf{v}$; with this method the vector $e^A\mathbf{1}$ can be constructed in roughly $O(n)$ operations (note that the prefactor can vary for different types of networks), and the total communicability is easily derived.
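To make this computation concrete, the following minimal sketch estimates the normalized total communicability by computing the action of the matrix exponential on the all-ones vector, without ever forming $e^A$. The sketch is in Python and uses SciPy's \texttt{expm\_multiply} as a stand-in for the Matlab codes mentioned above; it is an illustration under these assumptions, not the implementation used in our experiments.
\begin{verbatim}
# Illustration (Python/SciPy): estimate TC(A)/n = (1^T e^A 1)/n by
# computing the action of e^A on the all-ones vector; e^A itself is
# never formed, only matrix-vector products with A are needed.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import expm_multiply

def normalized_total_communicability(A):
    n = A.shape[0]
    tc_nodes = expm_multiply(A, np.ones(n))  # vector e^A 1 of node TCs
    return tc_nodes.sum() / n

# Toy example: path graph on 4 nodes.
rows = [0, 1, 1, 2, 2, 3]; cols = [1, 0, 2, 1, 3, 2]
A = sp.csr_matrix((np.ones(6), (rows, cols)), shape=(4, 4))
print(normalized_total_communicability(A))
\end{verbatim}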
As is clear from its definition, the value of $TC(A)$ may be very large. Several normalizations have been proposed; the simplest is normalization by the number of nodes $n$ (see \cite{Benzi2013}), which we will use throughout the paper. It is easy to prove that the normalized total communicability satisfies
\begin{equation}\label{coarse_bounds}
\frac{1}{n}\sum_{i=1}^n\left(e^A\right)_{ii}\leq \frac{TC(A)}{n}\leq e^{\lambda_1},
\end{equation}
where the lower bound is attained by the graph with $n$ nodes and no edges and the upper bound is attained by the complete graph with $n$ nodes.

\begin{rem}\label{rem:lambda1}
{\rm The last equality in equation \eqref{tc_spec} shows that the main contribution to the value of $TC(A)$ is likely to come from the term $e^{\lambda_1}\|{\bf q}_1\|_1^2$.}
\end{rem}

\subsection{Rationale for targeting the total communicability} \label{why_TM}
As already mentioned, the total communicability provides a good measure of how efficiently information (in the broad sense of the term) is diffused across the network. Typically, very high values of $TC(A)$ are observed for highly optimized infrastructure networks (such as airline routes or computer networks) and for highly cohesive social and information networks (like certain types of collaboration networks). Conversely, the total network communicability is relatively low for spatially extended, grid-like networks (such as many road networks) or for networks that consist of two or more communities with poor communication between them (such as the Zachary network).\footnote{Numerical values of the normalized total network communicability for a broad collection of networks are reported in the experimental sections of this paper, in the Supplementary Material, and in \cite{Benzi2013}.} As a further example, reduced values of the communicability between different brain regions have been detected in stroke patients compared to healthy individuals \cite{CH09}. We refer to \cite{EHB12} for an extensive survey on communicability, including applications for which it has been found to be useful.

Another reason in support of the use of the total communicability as an objective function is that it is closely related to the {\em natural connectivity} (or {\em free energy}) of the network, while being dramatically easier to compute; see section \ref{sec:nc+gap} below. Sparse networks with high values of $TC(A)$ are very well connected and thus less likely to be disrupted by either random failures or targeted attacks leading to the loss of edges. This justifies trying to design sparse networks with high values of the total communicability.

An important observation is that the total network communicability $TC(A)$ can be interpreted in at least two different ways. Since it is given by the sum of all the pairwise communicabilities $C(i,j)=[e^A]_{ij}$, it is a global measure of the ability of the network to diffuse information. However, recalling the definition (\ref{tnc}) of total node communicability, the normalized total communicability can also be seen as the ``average total communicability'' of the nodes in the network:
$$\frac{TC(A)}{n} = \frac{1}{n}\sum_{i=1}^n TC(i).$$
Since the total node communicability is a centrality measure \cite{Benzi2013}, our goal can then be rephrased as the problem of constructing sparse networks having high average node centrality, where the node centrality is given by the total node communicability.
Since this is merely one of a large number of centrality measures proposed in the literature, it is a legitimate question to ask why the total node communicability should be used instead of a different centrality index. In other words, given any node centrality function $f:V \longrightarrow \mathbb R_+$, we could consider instead the problem of, say, adding a prescribed number of edges so as to maximize the increase in the global average centrality
$$\bar f = \frac{1}{n}\sum_{i=1}^n f(i).$$

As it turns out, most other centrality indices are either computationally too expensive to work with (at least for large networks), or lead to objective functions which do not make much sense. The following is a brief discussion of some of the most popular centrality indices used in the field of network science.

\vspace{0.1in}

\begin{enumerate}
\item {\bf Degree:} Consider first the simplest centrality index, the degree. Obviously, adding $K$ edges according to {\em any} criterion will produce exactly the same variation in the average degree of a network. Hence, one may as well add edges at random. Doing so, however, cannot be expected to be greatly beneficial if the goal is to improve the robustness or efficiency of the network.
\item {\bf Eigenvector centrality:} Let ${\bf q}_1$ be the principal eigenvector of $A$, normalized so that $\|{\bf q}_1\|_2 = 1$. The eigenvector centrality of node $i\in V$ is the $i$th component of ${\bf q}_1$, denoted by $q_1(i)$. It is straightforward to see that the problem of maximizing the average eigenvector centrality
$$\frac{q_1(1) + q_1(2) + \cdots + q_1(n)}{n}$$
subject to the constraint $\|{\bf q}_1\|_2 = 1$ has as its only solution
$$q_1(1) = q_1(2) = \cdots = q_1(n) = \frac{1}{\sqrt n}.$$
This implies that $A$ has constant row sums or, in other words, that the graph is regular --- every node in $G$ has the same degree. Hence, any heuristic aimed at maximizing the average eigenvector centrality will result in graphs that are close to being regular. However, regular graph topologies are not, {\em per se}, endowed with any especially good properties when it comes to diffusing information or being robust: think of a cycle graph, for example. Regular graphs {\em can} be very well connected and robust (as is the case for expander graphs), but there is no reason to think that simply making the degree distribution of a given network more regular will improve its expansion properties.
\item {\bf Subgraph centrality}: The average subgraph centrality of a network is known in the literature as the normalized {\em Estrada index}:
$$\frac{1}{n}EE(A) = \frac{1}{n}\,\mathrm{Tr}(e^A) =
\frac{1}{n}\sum_{i=1}^n [e^A]_{ii} = \frac{1}{n}\sum_{i=1}^n
e^{\lambda_i}.$$
It can also be interpreted as the average self-communicability of the nodes. As we mentioned, this is a lower bound for the average total communicability. Evaluation of this quantity requires knowledge of all $n$ diagonal entries of $e^A$, or of all the eigenvalues of $A$, and is therefore much more expensive to compute. The heuristics we derive in this paper have a similar effect on $TC(A)$ and on the Estrada index, as we demonstrate in section \ref{sec:nc+gap}.
So, using subgraph centrality instead of total communicability centrality would lead to exactly the same heuristics and results, with the disadvantage that evaluating the objective function, if necessary, would be much more expensive.
\item {\bf Katz centrality}: The Katz centrality of node $i\in V$ is defined as the $i$th row sum of the matrix resolvent $(I - \alpha A)^{-1}$, where the parameter $\alpha$ is chosen in the interval $(0,\frac{1}{\lambda_1})$, so that the power series expansion
$$(I - \alpha A)^{-1} = I + \alpha A + \alpha^2 A^2 + \cdots$$
is convergent \cite{Katz}. Since this centrality measure can be interpreted in terms of walks, using it instead of the total communicability would lead to the same heuristics and very similar results, especially when $\alpha$ is sufficiently close to $\frac{1}{\lambda_1}$ or if the spectral gap $\lambda_1 - \lambda_2$ is large; see \cite{Benzi2015}. Using Katz centrality, however, requires the careful selection of the parameter $\alpha$, which leads to some complications. For example, after each update one needs to recompute the dominant eigenvalue of the adjacency matrix in order to check whether the value of $\alpha$ used is still within the range of permissible values or whether it has to be reduced, making this approach computationally very expensive. This problem does not arise if the matrix exponential is used instead of the resolvent.
\item {\bf Other centrality measures}: So far we have only discussed centrality measures that can be expressed in terms of the adjacency matrix $A$. These centrality measures are all connected to the notion of a walk in a graph, and they can often be understood in terms of spectral graph theory. Other popular centrality measures, such as betweenness centrality and closeness centrality (see, e.g., \cite{NewmanBook}), do not have a simple formulation in terms of matrix properties. They are based on the assumption that all communication in a graph tends to take place along shortest paths, which is not always the case (this was a major motivation for the introduction of walk-based measures, which postulate that communication between nodes can take place along walks of any length, with a preference towards shorter ones). A further disadvantage is that they are quite expensive to compute, although randomized approximations can bring the cost down to acceptable levels \cite{Brandes}. For these reasons we do not consider them in this paper, where the focus is on linear algebraic techniques. It remains an open question whether heuristics for manipulating graph edges so as to tune some global average of these centrality measures can lead to networks with desirable connectivity and robustness properties.
\end{enumerate}

\vspace{0.1in}

Finally, in view of the bounds (\ref{coarse_bounds}), the evolution of the total communicability under network modifications is closely tied to the evolution of the dominant eigenvalue $\lambda_1$. This quantity plays a crucial role in network analysis, for example in the definition of the {\em epidemic threshold}; see, for instance, \cite[p.~664]{NewmanBook} and \cite{VM2011}. In particular, a decrease in the total network communicability can be expected to lead to an increase in the epidemic threshold.
Thus, edge modification techniques developed for tuning $TC(A)$ can potentially be used to alter epidemic dynamics.

\section{Bounds via quadrature rules}
\label{sec:bounds}
In the previous section we saw the simple bounds (\ref{coarse_bounds}) on the normalized total network communicability. More refined bounds for this index can be obtained by means of quadrature rules, as described in \cite{Benzi2010,Benzi1999,Golub2010,Fenu2013}.

The following theorem contains our bounds for the normalized total communicability.
\begin{theorem}\label{thm:bounds}
Let $A$ be the adjacency matrix of an unweighted and undirected network. Then
\begin{equation*}
\Phi\left(\beta,\omega_1+\frac{\gamma_1^2}{\omega_1-\beta}\right)\leq
\frac{TC(A)}{n}\leq\Phi\left(\alpha,\omega_1+\frac{\gamma_1^2}{\omega_1-\alpha}\right)
\end{equation*}
where $[\alpha,\beta]$ is an interval containing the spectrum of $-A$ (i.e., $\alpha \le -\lambda_1$ and $\beta\ge -\lambda_n$), $\omega_1=-\mu=-\frac{1}{n}\sum_{i=1}^nd_i$ is the negative of the mean degree, $\gamma_1=\sigma=\sqrt{\frac{1}{n}\sum_{k=1}^n(d_k-\mu)^2}$ is the standard deviation of the degrees, and
\begin{equation}\label{eq:bound}
\Phi(x,y)=\frac{c \left(e^{-x}-e^{-y}\right)+xe^{-y}-ye^{-x}}{x-y}, \qquad c=\omega_1.
\end{equation}
\end{theorem}

A proof of this result can be found in the Supplementary Materials accompanying the paper.

Analogous bounds can be found for the adjacency matrix of the graph after performing a downdate or an update. These results are summarized in the following corollaries.

\begin{corollary}[Downdating]\label{Dwdt}
Let $\widehat{A}=A-UW^T$, where $U=[\mathbf{e}_i,\mathbf{e}_j]$ and $W=[\mathbf{e}_j,\mathbf{e}_i]$, be the adjacency matrix of the unweighted and undirected network obtained after downdating the edge $(i,j)$ from the matrix $A$. Let $\omega_1=-\mu=-\frac{1}{n}\sum_{i=1}^nd_i$ and $\gamma_1=\sigma=\sqrt{\frac{1}{n}\sum_{k=1}^n(d_k-\mu)^2}$, where $d_i$ is the degree of node $i$ in the original graph. Then
\begin{equation*}
\Phi\left(\beta_{-},\omega_{-}+\frac{\gamma_{-}^2}{\omega_{-}-\beta_{-}}\right)
\leq\frac{TC(\widehat{A})}{n}\leq
\Phi\left(\alpha_-,\omega_-+\frac{\gamma_-^2}{\omega_--\alpha_-}\right)
\end{equation*}
where
\begin{equation*}
\left\{
\begin{array}{l}
\omega_{-}=\omega_1+\frac{2}{n},\\[6pt]
\gamma_{-}=\sqrt{\gamma_1^2-\frac{2}{n}\left(d_i+d_j-1+2\omega_1+\frac{2}{n}\right)},
\end{array}
\right.
\end{equation*}
$\alpha_-$ and $\beta_-$ are approximations of the smallest and largest eigenvalues of $-\widehat{A}$, respectively, and $\Phi$ is defined as in equation \eqref{eq:bound} with $c=\omega_-$.
\end{corollary}

Note that if bounds $\alpha$ and $\beta$ for the extremal eigenvalues of $-A$ are known, we can then use $\alpha_-=\alpha$ and $\beta_-=\beta+1$. Indeed, if we order the eigenvalues of $\widehat{A}$ in non-increasing order $\widehat{\lambda}_1>\widehat{\lambda}_2\geq \cdots \geq\widehat{\lambda}_n$, we obtain, as a consequence of Weyl's Theorem (see \cite[Section 4.3]{Horn}), that
\begin{equation*}
\alpha-1 \leq -\lambda_1-1 < -\widehat{\lambda}_1 <
-\widehat{\lambda}_2 \leq \cdots \leq -\widehat{\lambda}_n < -\lambda_n+1 \leq \beta+1.
\end{equation*}

Furthermore, the Perron--Frobenius Theorem ensures that, when performing a downdate, the largest eigenvalue of the adjacency matrix cannot increase; hence, we deduce the more stringent bounds $\alpha\leq-\widehat{\lambda}_1\leq -\widehat{\lambda}_2\leq\cdots\leq -\widehat{\lambda}_n\leq \beta +1.$

Similarly, we can derive bounds for the normalized total communicability of the matrix $\tilde{A}$ obtained from the matrix $A$ after performing the update of the virtual edge $(i,j)$.

\begin{corollary}[Updating]\label{Updt}
Let $\tilde{A}=A+UW^T$, where $U=[\mathbf{e}_i,\mathbf{e}_j]$ and $W=[\mathbf{e}_j,\mathbf{e}_i]$, be the adjacency matrix of the unweighted and undirected network obtained after updating the virtual edge $(i,j)$ in the matrix $A$. Let $\omega_1=-\mu=-\frac{1}{n}\sum_{i=1}^nd_i$ and $\gamma_1=\sigma=\sqrt{\frac{1}{n}\sum_{k=1}^n(d_k-\mu)^2}$, where $d_i$ is the degree of node $i$ in the original graph. Then
\begin{equation*}
\Phi\left(\beta_{+},\omega_{+}+\frac{\gamma_{+}^2}{\omega_{+}-\beta_{+}}\right)\leq
\frac{TC(\tilde{A})}{n}
\leq\Phi\left(\alpha_+,\omega_++\frac{\gamma_+^2}{\omega_+-\alpha_+}\right)
\end{equation*}
where
\begin{equation*}
\left\{
\begin{array}{l}
\omega_{+}=\omega_1-\frac{2}{n},\\[6pt]
\gamma_{+}=\sqrt{\gamma_1^2+\frac{2}{n}\left(d_i+d_j+1+2\omega_1-\frac{2}{n}\right)},
\end{array}
\right.
\end{equation*}
$\alpha_+$ and $\beta_+$ are bounds for the smallest and largest eigenvalues of $-\tilde{A}$, respectively, and $\Phi$ is defined as in equation \eqref{eq:bound} with $c=\omega_+$.
\end{corollary}

Notice that again, if bounds $\alpha$ and $\beta$ for the extremal eigenvalues of $-A$ are known, we can then take $\alpha_+=\alpha-1$ and $\beta_+=\beta$. In fact, the spectrum of each of the rank-two symmetric perturbations $UW^T$ and $-UW^T$ is $\{\pm 1,0\}$, and hence we can use Weyl's Theorem as before and then improve the upper bound using the Perron--Frobenius Theorem.

In the next section we will see how the new bounds can be used to guide the updating and downdating process.

\section{Modifications of the adjacency matrix}
\label{sec:modification}
In this section we develop techniques that allow us to tackle the following problems.

\begin{itemize}
\item[(P1)] Downdate: select $K$ edges that can be removed from the network without disconnecting it, while causing the smallest possible drop in the total communicability of the graph;
\item[(P2)] Update: select $K$ edges to be added to the network (without creating self-loops or multiple edges) so as to increase the total communicability of the graph as much as possible;
\item[(P3)] Rewire: select $K$ edges to be rewired in the network so as to increase the value of $TC(A)$ as much as possible. The rewiring process must not disconnect the network or create self-loops or multiple edges in the graph.
\end{itemize}

As we will show below, (P3) can be solved using combinations of methods developed to solve (P1) and (P2). Hence, we first focus on the downdate and the update separately. Note that to decrease the total communicability as little as possible when removing an edge, it would suffice to select $(i^*,j^*)\in E$ so as to minimize the quantities
$$\mathbf{1}^TA^k\mathbf{1} -
\mathbf{1}^T(A-UW^T)^k\mathbf{1}\qquad \forall k=1,2,\ldots ,$$
since $TC(A)=\sum_{k=0}^\infty\frac{\mathbf{1}^TA^k\mathbf{1}}{k!}$. Similarly, to increase $TC(A)$ as much as possible by adding a virtual edge, it would suffice to select $(i^*,j^*)\in\overline{E}$ maximizing the differences
$$\mathbf{1}^T(A+UW^T)^k\mathbf{1}-\mathbf{1}^TA^k\mathbf{1}\qquad \forall k=1,2,\ldots $$
However, it is easy to show that in general one cannot find a choice of $(i^*,j^*)$ that works for all such $k$. Indeed, numerical experiments on small synthetic graphs (not shown here) show that in general the optimal edge selection for $k=2$ differs from that for $k=3$. Because of this, it is unlikely that one can find a simple ``closed form solution'' to the problem, and we need to develop approximation techniques.

The majority of the heuristics we will develop are based on new edge centrality measures. The idea underlying these measures is that an edge is more likely to be used as a communication channel if its endpoints have a large amount of information to spread. We thus introduce three new centrality measures for edges based on this principle: edges connecting important nodes are themselves important.
\begin{definition}\label{def:subgraph}
For any $i,j\in V$ we define the {\rm edge subgraph centrality} of an existing/virtual edge $(i,j)$ as
\begin{equation}\label{eq:edge_subgraph}
^eSC(i,j)=\left(e^A\right)_{ii}\left(e^A\right)_{jj}.
\end{equation}
\end{definition}

This definition, based on the subgraph centrality of nodes, exploits the fact that the matrix exponential is symmetric positive definite and hence $(e^A)_{ii}(e^A)_{jj}>(e^A)_{ij}^2$. Therefore, the diagonal elements of $e^A$ control, to some extent, its off-diagonal entries, and they may contain enough information to infer the ``payload'' of the edges or virtual edges of interest.

\begin{definition}\label{def:tc_edge}
For any $i,j\in V$ we define the {\rm edge total communicability centrality} of an existing/virtual edge $(i,j)$ as
\begin{equation}
^eTC(i,j) = [e^A{\bf 1}]_i [e^A{\bf 1}]_j.
\end{equation}
\end{definition}

It is important to observe that when the spectral gap $\lambda_1-\lambda_2$ is ``large enough'', the subgraph centrality $\left(e^A\right)_{ii}$ and the total communicability centrality $[e^A{\bf 1}]_i$ are essentially determined by $e^{\lambda_1}q_1(i)^2$ and $e^{\lambda_1}q_1(i)\|{\bf q}_1\|_1$, respectively (see, e.g., \cite{Benzi2013,Benzi2015,E06b}); it follows that in this case the two centrality measures just introduced, as well as any centrality measure based on the eigenvector centrality of the nodes, can be expected to provide similar rankings. This is especially true when attention is restricted to the top edges (or nodes). This observation motivates the introduction of the following edge centrality measure.

\begin{definition}\label{def:eigenvector}
For any $i,j\in V$ we define the {\rm edge eigenvector centrality} of an existing/virtual edge $(i,j)$ as
\begin{equation}\label{eq:edge_eigenvector}
^eEC(i,j)=q_1(i)q_1(j).
\end{equation}
\end{definition}
As a further justification for this definition, note that
$$\lambda_1-2 \left( ^eEC(i,j)\right)\leq\widehat{\lambda}_1\leq\lambda_1,\qquad
\tilde{\lambda}_1\geq\lambda_1+2 \left( ^eEC(i,j)\right),$$
where $\widehat{\lambda}_1$ is the leading eigenvalue of the matrix $\widehat{A}$ and $\tilde{\lambda}_1$ is the leading eigenvalue of the matrix $\tilde{A}$, as defined in section \ref{sec:background}. These inequalities show that the edge eigenvector centrality of an existing/virtual edge $(i,j)$ is closely connected to the change in the leading eigenvalue of the adjacency matrix, which in turn influences the evolution of the total communicability when we modify $A$ (see Remark \ref{rem:lambda1}).

\begin{rem}
{\rm The edge eigenvector centrality has been used in \cite{Tong, VM2011} to devise edge removal techniques aimed at significantly reducing $\lambda_1$, so as to increase the {\em epidemic threshold} of networks.}
\end{rem}

Note that we have defined these centrality measures for both existing and virtual edges (as in \cite{Berry2013}). The reason for this, as well as the justification for these definitions, will become clear in the next subsections.

We now discuss how to use these definitions to tackle the problems previously described. The computational aspects concerning the implementation of the heuristics we are about to introduce, and the derivation of their computational costs, are described in the Supplementary Materials.
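As an illustration of how the three edge centrality measures just defined can be assembled from standard sparse linear algebra primitives, consider the following Python sketch. It is illustrative only: the diagonal of $e^A$ is computed densely here, whereas for large networks these quantities would be estimated by Gauss-type quadrature (as with the \texttt{mmq} toolbox used in section \ref{sec:test_tc}) or by Krylov methods; the function and variable names are ours.
\begin{verbatim}
# Illustration (Python/SciPy): the three edge centrality scores defined
# above, for a list of node pairs (existing or virtual edges).
import numpy as np
from scipy.linalg import expm                      # dense e^A: small n only
from scipy.sparse.linalg import eigsh, expm_multiply

def edge_centralities(A, pairs):
    n = A.shape[0]
    sc = np.diag(expm(A.toarray()))                # (e^A)_{ii}, subgraph centrality
    tc = expm_multiply(A, np.ones(n))              # [e^A 1]_i, node total comm.
    _, q = eigsh(A.asfptype(), k=1, which='LA')    # principal eigenvector
    q = np.abs(q[:, 0])                            # Perron vector, sign-fixed
    return {(i, j): (sc[i] * sc[j],                # ^eSC(i,j)
                     tc[i] * tc[j],                # ^eTC(i,j)
                     q[i] * q[j])                  # ^eEC(i,j)
            for (i, j) in pairs}
\end{verbatim}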
\n\n\\subsection*{(P1) Downdate}\nThe downdate of any edge in the network will result in a reduction of \nits total communicability. \nNote that since we are focusing on the case of connected networks, we \nwill only perform downdates that keep the resulting graph connected.\nIn practice, it is desirable to further\nrestrict the choice of downdates to a subset of\nall existing edges, on the basis of criteria to be discussed shortly.\n\nAn ``optimal\" approach would select at each step of the downdating \nprocess a candidate edge corresponding to the minimum decrease of \ncommunicability.\\footnote{Strictly speaking, this would correspond to\na greedy algorithm which is only locally optimal. In general, this\nis unlikely to result in ``globally optimal\" network communicability.\nIn this paper, the term ``optimal\" will be understood in this limited\nsense only.} \nNote that for large networks this method is too costly to be practical.\nFor this reason we aim to develop inexpensive techniques that will hopefully\ngive close--to--optimal results.\nNevertheless, for small networks we will use the\n``optimal\" approach (where we systematically try all feasible\nedges and delete the one causing the least drop in total communicability) \nas a baseline method against which we compare the various\nalgorithms discussed below. This method will be henceforth\nreferred to as {\\tt optimal}.\n\nThe next methods we introduce perform the downdate of the lowest ranked existing edge according to \nthe edge centrality measures previously introduced whose removal does not disconnect the network. \nWe will refer to these methods as {\\tt subgraph}, {\\tt nodeTC}, and {\\tt eigenvector}, which are \nbased on definitions \\ref{def:subgraph}, \\ref{def:tc_edge}, and \\ref{def:eigenvector}, respectively. \nFrom the point of view of the communicability,\nthese methods downdate an edge connecting two nodes which are peripheral (i.e., have low centrality)\nand therefore are not expected to \ngive a large contribution to the spread of information along the network.\nHence, the selected edge is connecting two nodes whose ability to\nexchange information is already very low, and\nwe do not expect the total communicability to suffer too much from this edge removal.\nThis observation also suggests that such downdates \ncan be repeatedly applied without the need to recompute the ranking\nof the edges after each downdate. \nAs long as the number of downdates performed remains small compared to the total number of edges, \nwe expect good results at a greatly reduced total cost. 
Note also that such downdates can be performed simultaneously rather than sequentially. We will refer to these variants as {\tt subgraph.no}, {\tt nodeTC.no}, and {\tt eigenvector.no}.

Finally, we consider a technique motivated by the bounds obtained via quadrature rules in section \ref{sec:bounds}. From the expression for the function $\Phi$ in the special case of the downdate (cf.~Corollary \ref{Dwdt}), we infer that a potentially good choice may be to remove the edge whose incident nodes $i,j$ minimize the sum $d_i+d_j$, provided its removal does not disconnect the network. Indeed, this choice reduces the upper bound only slightly, and the total communicability may mirror this behavior. Another way to justify this strategy is to observe that it is indeed the optimal strategy if we approximate $e^A$ in the definition of total communicability with its second-order approximation $I + A + \frac{1}{2}A^2$. This technique will henceforth be referred to as {\tt degree}. We note that a related measure, namely the average of the out-degrees $\frac{d_i+d_j}{2}$, was proposed in \cite{Berry2013} as a measure of the centrality of an edge $(i,j)$ in directed graphs.

\subsection*{(P2) Update}
Most real-world networks are characterized by low average degree. As a consequence, the adjacency matrices of such networks are sparse ($m=O(n)$). For the purpose of selecting a virtual edge to be updated, this implies that we have approximately $\frac{1}{2}\left(n^2-cn\right)$ possible choices if we want to avoid the formation of multiple edges or self-loops (here $c$ is a moderate constant). Each one of these possible updates will result in an increase of the total communicability of the network, but not every one of them will result in a significant increase.

One natural updating technique is to connect two nodes having high centralities, i.e., to add the virtual edge with the highest ranking according to the corresponding edge centrality measure. Its incident nodes, being quite central, can be expected to play an important role in the spreading of information along the network; on the other hand, the communication between them may be relatively poor (think, for example, of the case where the two nodes sit in two distinct communities). Hence, giving them a preferential communication channel, such as an edge between them, should result in a better spread of information along the whole network. Again, we will use the labels {\tt subgraph}, {\tt nodeTC}, and {\tt eigenvector} to describe these updating strategies. As before, in order to reduce the computational cost, we also test the effectiveness of these techniques without recomputing the ranking of the virtual edges after each update. These variants (referred to as {\tt subgraph.no}, {\tt nodeTC.no}, and {\tt eigenvector.no}) are expected to return good results as well, since the selected update should not radically change the ranking of the edges. Indeed, such updates make central nodes even more central, and the ranking of the edges consequently remains almost unchanged. Note again that these updates can be performed simultaneously rather than sequentially.

As for the case of downdating, the bounds via quadrature rules derived in section \ref{sec:bounds} suggest an updating technique, namely adding the virtual edge $(i,j)$ for which $d_i+d_j$ is maximal.
Indeed, such a choice would maximize the lower bound on the total communicability (see Corollary \ref{Updt}). Again, this choice can also be justified by noting that it is optimal if $e^A$ is replaced by its quadratic Maclaurin approximant. We will again use the label {\tt degree} to refer to this updating strategy.

All these techniques will be compared with the {\tt optimal} one, based on systematically trying all feasible virtual edges and selecting at each step the one resulting in the largest increase of the total communicability. Due to the very high cost of this brute-force approach, we will use it only on small networks.

\begin{table}
\footnotesize
\centering
\caption{Brief description of the techniques introduced in the paper.}
\label{tab:description}
\begin{tabular}{lcc}
\hline
 Method & Downdate: $(i,j)\in E$ & Update: $(i,j)\not\in E$\\
\hline
 {\tt optimal} & $\arg\min\{TC(A)-TC(\widehat{A})\}$ & $\arg\max\{TC(\tilde{A})-TC(A)\}$ \\
 {\tt subgraph(.no)} & $\arg\min\{^eSC(i,j)\}$ & $\arg\max\{^eSC(i,j)\}$\\
 {\tt eigenvector(.no)} & $\arg\min\{^eEC(i,j)\}$ & $\arg\max\{^eEC(i,j)\}$\\
 {\tt nodeTC(.no)} & $\arg\min\{^eTC(i,j)\}$ & $\arg\max\{^eTC(i,j)\}$\\
 {\tt degree} & $\arg\min\{d_i+d_j\}$ & $\arg\max\{d_i+d_j\}$ \\
\hline
\end{tabular}
\end{table}

The heuristics introduced to tackle (P1) and (P2) are summarized in Table \ref{tab:description}.

\subsection*{(P3) Rewire}
\label{subsec:rewire}
As we have already noted, there are situations in which rewiring an edge may be preferable to adding a new one. There are various possible choices for the rewiring strategy to follow. Most of those found in the literature are variants of random rewiring (see for example \cite{Beygelzimer2005,Louzada2013}). In this paper, on the other hand, we are interested in devising mathematically informed rewiring strategies. For comparison purposes, however, we also include the random rewiring method, {\tt random}, which downdates an edge (chosen uniformly at random among all edges whose removal does not disconnect the network) and then updates a virtual edge, also chosen uniformly at random.

Combining the various downdating and updating methods previously introduced, we obtain different rewiring strategies based on the centralities of edges and on the bounds for the total communicability. Concerning the methods based on the edge subgraph, eigenvector, and total communicability centralities, we note that since a single downdate does not dramatically change the communication capability of the network, we do not need to recompute the centralities and the ranking of the edges after each downdating step, at least as long as the number of rewired edges remains relatively small (numerical experiments not shown here support this claim). On the other hand, after each update we may or may not recalculate the edge centralities. As before, we use {\tt subgraph}/{\tt subgraph.no}, {\tt eigenvector}/{\tt eigenvector.no} and {\tt nodeTC}/{\tt nodeTC.no} to refer to these three variants of rewiring. Additionally, we introduce another rewiring strategy, henceforth referred to as {\tt node}, based on the subgraph centrality of the nodes.
In this method we disconnect the most central node from the least central node among its immediate neighbors; then we connect it to the most central node among those it is not linked to. It is worth emphasizing that this strategy is philosophically different from the previous ones, which are based on the edge subgraph centrality in the downdating phase (the updating step is the same). In fact, in those methods we use information on the nodes in order to deduce some information on the edges connecting them; on the other hand, the {\tt node} algorithm does not take into account the potentially high ``payload'' of the edges involved, whose removal may result in a dramatic drop in the total communicability.

\section{Numerical studies}
\label{sec:test_tc}
In this section we discuss the results of numerical studies performed in order to assess the effectiveness and efficiency of the proposed techniques. The tests have been performed on both synthetic and real-world networks, as described below. We refer to the Supplementary Materials for the results of computations performed on four small social networks, aimed at comparing our heuristics with {\tt optimal}. These results show that for these small networks, the resulting total communicabilities are essentially identical to those obtained with the {\tt optimal} strategy.

\subsection{Real-world networks}
\begin{table}[t]
\footnotesize
\centering
\caption{Description of the data set.}
\label{tab:Datasets}
\begin{tabular}{cccccc}
\hline
NAME & $n$ & $m$ & $\lambda_1$ & $\lambda_2$ & $\lambda_1-\lambda_2$ \\
\hline
Minnesota & 2640 & 3302 & 3.2324 & 3.2319 & 0.0005 \\
USAir97 & 332 & 2126 & 41.233 & 17.308 & 23.925 \\
as--735 & 6474 & 12572 & 46.893 & 27.823 & 19.070 \\
Erd\"os02 & 5534 & 8472 & 25.842 & 12.330 & 13.512 \\
ca--HepTh & 8638 & 24806 & 31.034 & 23.004 & 8.031 \\
as--22july06 & 22963 & 48436 & 71.613 & 53.166 & 18.447 \\
usroad--48 & 126146 & 161950 & 3.911 & 3.840 & 0.071 \\
\hline
\end{tabular}
\end{table}

All the networks used in the tests can be found in the University of Florida Sparse Matrix Collection \cite{Davis} under different ``groups''. The USAir97 and Erd\"os02 networks are from the Pajek group. The USAir97 network describes the US Air flight routes in 1997, while the Erd\"os02 network represents the Erd\"os collaboration network, Erd\"os included. The network as--735, from the SNAP group, is the communication network of a group of autonomous systems (ASs), measured over 735 days between November 8, 1997 and January 2, 2000. Communication occurs when routers from two ASs exchange information. The Minnesota network from the Gleich group represents the Minnesota road network. These latter three networks are not connected; therefore, the tests were performed on their largest connected components. We point out that the largest connected component of the network as--735 has 1323 ones on the main diagonal, which were retained in our tests. The network ca--HepTh is from the SNAP group and represents the collaboration network of arXiv High Energy Physics Theory; the network as--22july06 is from the Newman group and represents the (symmetrized) structure of Internet routers as of July 22, 2006. Finally, the network usroad--48, which is from the Gleich group, represents the continental US road network.
For each network, Table \ref{tab:Datasets} reports the number of nodes ($n$), the number of edges ($m$), the two largest eigenvalues, and the spectral gap. We use the first four networks to test all the methods described in the previous section (except for {\tt optimal}, which is only applied to the four smallest networks; see the Supplementary Materials) and the last three to illustrate the performance of the most efficient among the methods tested.

\begin{figure}[t]
\centering
\caption{Evolution of the normalized total communicability vs.~number of downdates, updates and rewires for the networks Minnesota and as--735.}
\label{fig:Minnesota_as735}
\includegraphics[width=.9\textwidth]{Minnesota_as735.eps}
\end{figure}

\begin{figure}[h]
\centering
\caption{Evolution of the normalized total communicability vs.~number of downdates, updates and rewires for the networks USAir97 and Erd\"os02.}
\label{fig:USAir97_Erdos02}
\includegraphics[width=.9\textwidth]{USAir97_Erdos02.eps}
\end{figure}

We first consider the networks Minnesota, as--735, USAir97, and Erd\"os02, for which we perform $K=50$ modifications. For these networks the set $\overline{E}$ (the complement of the set $E$ of edges) is large enough that performing an extensive search for the edge to be updated is expensive. Hence, we form the set $S$ containing the top $10\%$ of the nodes ordered according to the eigenvector centrality, and we restrict our search to virtual edges incident to these nodes only. An exception is the network USAir97, where we have used the set $S$ corresponding to the top $20\%$ of the nodes, since in the case of $10\%$ this set contained only 52 virtual edges. In Figures \ref{fig:Minnesota_as735} and \ref{fig:USAir97_Erdos02} we show results for the methods {\tt eigenvector}, {\tt eigenvector.no}, {\tt subgraph}, {\tt subgraph.no} and {\tt degree}. Before commenting on these, we want to stress the poor performance of {\tt node} when tackling (P3); this shows that the use of edge centrality measures (as opposed to node centralities alone) is indispensable in this framework. The results for these networks clearly show the effectiveness of the {\tt eigenvector} and {\tt subgraph} algorithms and of their less expensive variants {\tt eigenvector.no} and {\tt subgraph.no} in nearly all cases; similar results were obtained with {\tt nodeTC} and {\tt nodeTC.no} (not shown). The only exception is in the downdating of the Minnesota network, where the eigenvector-based techniques give slightly worse results. This fact is easily explained in view of the tiny spectral gap characterizing this and similar networks\footnote{Small spectral gaps are typical of large, grid-like networks such as road networks or the graphs corresponding to uniform triangulations or discretizations of physical domains.} (see Table \ref{tab:Datasets}). Because of this property, eigenvector centrality is a poor approximation of subgraph centrality and cannot be expected to give results similar to those obtained with {\tt subgraph} and {\tt subgraph.no}.

The results for the downdate show that the inexpensive {\tt degree} method does not perform as well on these networks, except perhaps on Minnesota.
The relatively poor performance of this method is due to the fact that the information it uses to select an edge for downdating is too local.

Note, however, the scale on the vertical axis in Figures \ref{fig:Minnesota_as735}--\ref{fig:USAir97_Erdos02}, suggesting that for these networks (excluding perhaps Minnesota) all the edge centrality-based methods perform well, with only very small relative differences between the resulting total communicabilities.

Overall, these results indicate that the edge centrality-based methods, especially the inexpensive {\tt eigenvector.no} and {\tt nodeTC.no} variants, are an excellent choice in almost all cases and for all three problems. When downdating networks with small spectral gaps, {\tt subgraph.no} may be preferable, but at a higher cost.

The behavior of the {\tt degree} method depends strongly on the network on which it is used. Our tests indicate that it behaves well in some cases (for example, (P2) for Erd\"os02) but poorly in others ((P2) for Minnesota). We speculate that this method may perform adequately when tackling (P2) on scale-free networks (such as Erd\"os02), where a high degree is an indication of centrality in spreading information across the network.

Some comments on the difference between the results for updating and those for rewiring (downdating followed by updating) are in order. Recall that our downdating strategies aim to keep the decrease in the value of the total communicability as small as possible, whereas the updating techniques aim to increase this index as much as possible. With this in mind, it is not surprising to see that the trends in the evolution of the total communicability after rewiring reflect those obtained with the updating strategies. The values obtained using the updates are in general higher than those obtained using the rewiring strategies, since updating implies the addition of edges, whereas in rewiring the number of edges remains the same. Experiments not reported here indicate that the methods based on the edge eigenvector and total communicability centralities are more stable than the others under rewiring and tend to dampen the effect of the downdates.

In Figures \ref{fig:large_down}--\ref{fig:large_up} we show results for the three largest networks in our data set (ca--HepTh, as--22july06 and usroad--48). In the case of updating, we have selected the virtual edges among those in the subgraph containing the top $1\%$ of nodes ranked according to the eigenvector centrality; a sketch of this restriction is given below. We compare the following methods: {\tt eigenvector}, {\tt eigenvector.no}, {\tt nodeTC}, {\tt nodeTC.no}, {\tt subgraph.no} and {\tt degree}; random downdating was also tested and found to give poor results.
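The restriction of the search space just mentioned can be sketched as follows (Python, illustrative only; the retained fraction of nodes is a parameter, set to $1\%$ for these three networks):
\begin{verbatim}
# Illustration: build the candidate set of virtual edges among the top
# fraction of nodes ranked by eigenvector centrality.
import numpy as np
from scipy.sparse.linalg import eigsh

def candidate_virtual_edges(A, fraction=0.01):
    _, q = eigsh(A.asfptype(), k=1, which='LA')     # principal eigenvector
    q = np.abs(q[:, 0])
    top = np.argsort(q)[-max(2, int(fraction * A.shape[0])):]
    return [(i, j) for a, i in enumerate(top) for j in top[a + 1:]
            if A[i, j] == 0]                        # keep virtual edges only
\end{verbatim}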
Note that the network usroad--48 behaves similarly to Minnesota; this is not surprising in view of the fact that these are both road networks with a tiny spectral gap. Looking at the scale on the vertical axis, however, it is clear that the decrease in total communicability is negligible with all the methods tested here. The results on these networks confirm the general trend observed so far; in particular, we note the excellent behavior of {\tt nodeTC} and {\tt nodeTC.no}.

\begin{figure}[t]
\centering
\caption{Downdates for large networks: normalized total communicability vs.~number of modifications.}
\includegraphics[width=.9\textwidth]{Large_down_new.eps}
\label{fig:large_down}
\end{figure}

\begin{figure}[t]
\centering
\caption{Updates for large networks: normalized total communicability vs.~number of modifications.}
\includegraphics[width=.9\textwidth]{Large_up_new.eps}
\label{fig:large_up}
\end{figure}

\subsection{Synthetic networks}
The synthetic examples used in the tests were produced using the CONTEST toolbox for Matlab (see \cite{Contest,Taylor2009}). We tested two types of graphs: the preferential attachment (Barab\'asi--Albert) model and the small world (Watts--Strogatz) model.

\begin{figure}[t]
\centering
\caption{Evolution of the total communicability when 50 downdates, updates or rewires are performed on two synthetic networks with $n=1000$ nodes.}
\includegraphics[width=.9\textwidth]{synthetic.eps}
\label{fig:synthetic}
\end{figure}

The preferential attachment model \cite{prefattach} was designed to produce networks with scale-free degree distributions as well as the small world property, characterized by short average path length and relatively high clustering coefficient. In CONTEST, preferential attachment networks are constructed using the command \texttt{pref(n,d)}, where $n$ is the number of nodes and $d\geq 1$ is the number of edges each new node is given when it is first introduced to the network. The network is created by adding nodes one by one (each new node with $d$ edges). The edges of the new node connect to nodes already in the network with a probability proportional to the degree of the already existing nodes. This results in a scale-free degree distribution.

The second class of synthetic test matrices used in our experiments corresponds to Watts--Strogatz small world networks. The small world model was developed as a way to impose a high clustering coefficient onto classical random graphs \cite{Watts1998}. The function used to build these matrices takes the form \texttt{smallw(n,k,p)}. Here $n$ is the number of nodes in the network, originally arranged in a ring and connected to their $k$ nearest neighbors. Then each node is considered independently and, with probability $p$, an edge is added between the node and one of the other nodes in the graph, chosen uniformly at random (self-loops and multiple edges are not allowed). In our tests, we have used matrices with $n=1000$ nodes which were built using the default values for the functions previously described. We used $d=2$ in the Barab\'asi--Albert model and $k=2$, $p=0.1$ in the Watts--Strogatz model.

The results for our tests are presented in Figure \ref{fig:synthetic}. These results agree with what we have seen previously on real-world networks.
\nInterestingly, {\\tt degree} does not perform well for downdating when \nworking on the preferential attachment model; \nthis behavior reflects what we have seen for the networks USAir97, as--735, \nand Erd\\\"os02,\nwhich are indeed scale--free networks.\n\n\\begin{figure}[t]\n\\centering\n\\caption{Timings in seconds for scale-free graphs of increasing size (500 modifications).} \n\\includegraphics[width=.9\\textwidth]{time_nest_new2.eps}\n\\label{fig:timings_nest}\n\\end{figure}\n\n\n\n\n\\subsection{Timings for synthetic networks}\\label{sec:ltime_synth}\nWe have performed some experiments with synthetic networks of increasing\nsize in order to assess the scalability of the various methods introduced\nin this paper. A sequence of seven adjacency matrices corresponding to\nBarab\\'asi--Albert scale-free graphs was generated using the CONTEST\ntoolbox. The order of the matrices ranges from 1000 to 7000; the average\ndegree is kept constant at 5. A fixed number of modifications ($K=500$)\nwas carried out on each network.\nAll experiments were performed using Matlab Version 7.12.0.635 (R2011a) \non an IBM ThinkPad running Ubuntu 12.04.5 LTS, with a 2.5 GHz Intel Core i5 processor and \n3.7 GiB of RAM. \nWe used the built-in Matlab function {\\tt eigs} (with the default settings) to\napproximate the dominant eigenvector of the adjacency matrix $A$, \nthe Matlab toolbox {\\tt mmq} \\cite{mmq} to estimate the diagonal \nentries of $e^A$ (with a fixed number of five nodes in the Gauss--Radau\nquadrature rule, hence five Lanczos steps per estimate),\nand the toolbox {\\tt funm\\_kryl} to compute the vector \n$e^A{\\bf 1}$ of total communicabilities, \nalso with the default parameter settings.\n\nThe results are shown in Figure \\ref{fig:timings_nest}. The approximate\n(asymptotic) linear scaling\nbehavior of the various methods (in particular of {\\tt nodeTC.no} and {\\tt eigenvector.no},\nwhich are by far the fastest; see the insets) is clearly displayed \nin these plots. \n\n\n\n\\subsection{Timings for larger networks}\\label{sec:large}\n\nIn Tables \\ref{tab:timings_down}--\\ref{tab:timings_up} we \nreport the timings for various methods \nwhen $K=2000$ downdates and updates are selected for the three largest networks listed \nin Table \\ref{tab:Datasets}. \n\nThe timings presented refer to the selection of the edges to be downdated or updated, which\ndominates the computational effort. For the method {\\tt subgraph.no} in the case\nof downdates, we restricted the search of\ncandidate edges to a subset of $E$ in order to reduce\ncosts. For the three test networks we used $40\\%$, $45\\%$ and $15\\%$ of the nodes,\nrespectively,\nchosen by taking those with the lowest eigenvector centrality, and the corresponding\nedges. We found the results to be very close to those obtained working with the\ncomplete set $E$, but at a significantly lower cost (especially for the largest\nnetwork).\n\nThese results clearly show that algorithms\n{\\tt nodeTC.no} and {\\tt eigenvector.no} are orders of magnitude\nfaster than the other methods; method {\\tt subgraph.no}, while significantly\nmore expensive, is still\nreasonably \nefficient\\footnote{It is worth mentioning that in principle it is possible to \ngreatly reduce the cost of this method using parallel processing, since each \nsubgraph centrality can be computed independently of the others.}\nand can be expected to give better results in \nsome cases (e.g.,\non networks with a very small spectral gap).
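\n\nAll of the timed methods rely on a handful of standard sparse-matrix kernels, for which equivalents exist outside of Matlab as well. As a rough illustration (a sketch in Python, with \\texttt{scipy} taking the place of {\\tt eigs} and {\\tt funm\\_kryl}; the function names are ours), the quantities driving the two fastest methods can be computed as follows:\n\\begin{verbatim}\nimport numpy as np\nimport scipy.sparse.linalg as spla\n\ndef node_total_communicability(A):\n    # Vector exp(A)*1 of node total communicabilities, computed with a\n    # Krylov-based method; only matrix-vector products with A are needed.\n    return spla.expm_multiply(A, np.ones(A.shape[0]))\n\ndef total_communicability(A):\n    # Total network communicability: the sum of the entries of exp(A)*1.\n    return node_total_communicability(A).sum()\n\ndef eigenvector_centrality(A):\n    # Dominant (Perron) eigenvector of the symmetric adjacency matrix.\n    vals, vecs = spla.eigsh(A, k=1, which='LA')\n    return np.abs(vecs[:, 0])\n\\end{verbatim}\nEdge scores for ranking candidate downdates or updates can then be formed from the entries of these vectors at the two endpoints of each edge, in the spirit of the {\\tt nodeTC} and {\\tt eigenvector} criteria.\n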
The {\\tt degree} algorithm, by contrast, cannot be recommended in general since it gives somewhat inferior results.\nThe remaining methods {\\tt eigenvector}, {\\tt nodeTC} and {\\tt subgraph} (not\nshown here) are prohibitively expensive for large networks, at least when the\nnumber $K$ of modifications is high (as it is here).\n\n\\begin{table}[t]\n\\footnotesize\n\\centering\n\\caption{Timings in seconds for $K=2000$ downdates performed on the \nthree largest networks in Table \\ref{tab:Datasets}.}\n\\label{tab:timings_down}\n\\begin{tabular}{cccc}\n\\hline\n & ca--HepTh & as--22july06 & usroad--48 \\\\\n\\hline\n{\\tt eigenvector} & 278.13 & 599.83 & 11207.39 \\\\\n{\\tt eigenvector.no}& 0.07 & 1.79 & 4.08 \\\\\n{\\tt nodeTC} & 553.04 & 1234.49 & 2634.27 \\\\\n{\\tt nodeTC.no} & 0.34 & 0.83 & 1.34 \\\\\n{\\tt subgraph.no} & 107.36 & 383.34 & 1774.07 \\\\\n{\\tt degree} & 29.67 & 53.42 & 153.52 \\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\n\\begin{table}[t]\n\\footnotesize\n\\centering\n\\caption{Timings in seconds for $K=2000$ updates performed on the \nthree largest networks in Table \\ref{tab:Datasets}.}\n\\label{tab:timings_up}\n\\begin{tabular}{cccc}\n\\hline\n & ca--HepTh & as--22july06 & usroad--48 \\\\\n\\hline\n{\\tt eigenvector} & 192.8 & 436.9 & 1599.5 \\\\\n{\\tt eigenvector.no}& 0.19 & 0.33 & 5.85 \\\\\n{\\tt nodeTC} & 561.9 & 1218.8 & 2932.0 \\\\\n{\\tt nodeTC.no} & 0.30 & 0.55 & 1.59 \\\\\n{\\tt subgraph.no} & 3.13 & 7.20 & 121.4 \\\\\n{\\tt degree} & 11.1 & 12.4 & 175.8 \\\\\n\\hline\n\\end{tabular}\n\\end{table}\n\nWe also observe that downdating is generally a more\nexpensive process than updating, since in the latter case the edges are to be\nchosen among a fairly small subset of all virtual edges, whereas in the downdating process\nwe work on the whole set $E$ of existing edges (or on a large subset of $E$). \nFor some methods the difference in cost becomes significant when\nthe networks are sufficiently large and the number of modifications to be\nperformed is high.\n\nSummarizing, the method labelled {\\tt nodeTC.no} is the fastest and gives excellent\nresults, quite close to those of the more expensive methods, and therefore we can\nrecommend its use for the type of problems considered here. The methods labelled\n{\\tt eigenvector.no} and {\\tt subgraph.no} are also effective and\nmay prove useful in some settings, especially for updating.\n\n\n\\section{Evolution of other connectivity measures}\n\\label{sec:nc+gap}\nIn this section we want to highlight another facet of the methods we \nhave introduced for (approximately) optimizing the total communicability.\nIn particular, we\nlook at the evolution of other network properties under our updating strategies.\nWhen building or modifying a network, there are various \nfeatures that one may want to achieve.\nTypically, there are two main desirable properties: first, the network should\ndo a good job at spreading information, i.e., have a high total communicability;\nsecond, the network should be robust under targeted attacks or random failure, which \nis equivalent to the requirement that it should be difficult to ``isolate'' \nparts of the network, i.e., the network should be ``well connected''.\nThis latter property can be measured by means of various indices. \nOne such measure is the spectral gap $\\lambda_1 - \\lambda_2$. As a consequence\nof the Perron--Frobenius Theorem, adding an edge to a connected network\ncauses the dominant eigenvalue $\\lambda_1$ of $A$ to increase.
Test results\n(not shown here) indicate that when a network is updated using one of our\ntechniques, the first eigenvalue increases rapidly with the number of updates.\nOn the other hand, the second eigenvalue $\\lambda_2$ tends to change little\nwith each update and it may even decrease (recall that the matrix\n$UW^T = {\\bf e}_i{\\bf e}_j^T + {\\bf e}_j{\\bf e}_i^T$ being added to $A$\nin an update is indefinite). Therefore, the spectral gap $\\lambda_1 - \\lambda_2$\nwidens rapidly with the number of updates.\\footnote{This fact, incidentally,\nmay serve as further justification for the effectiveness\nof algorithms like {\\tt nodeTC.no}\nand {\\tt eigenvector.no}.}\n It has been pointed out\nby various\n authors (see, e.g., \\cite{E06b,Puder2013}) that a large spectral gap is typical\nof complex networks with good expansion properties. \n\nHere we focus on a related measure, the so-called {\\em free energy}\n(also known in the literature as {\\em natural connectivity}) of the network.\nIn particular, we investigate the effect of our proposed methods of network updating\non the evolution of this index.\n\n\\subsection{Tracking the free energy (or natural connectivity)}\nIn \\cite{Jun2010} the authors discuss a measure of network connectivity which \nis based on an intuitive notion of robustness and\nwhose analytical expression has a clear meaning and can be derived from the \neigenvalues of $A$; they refer to it as the {\\em natural connectivity}\nof the network (see also \\cite{Wu2012}).\nThe idea underlying this index is that a network is more robust if there exists \nmore than one route to get from one node to another; this property \nensures that if a route becomes unusable, there is an alternative way to get \nfrom the source of information to the target.\nTherefore, intuitively a network is more robust if it has many (apparently) \nredundant routes connecting its vertices \nor, equivalently, if each of its nodes is involved in many closed walks.\nThe natural connectivity aims at quantifying\nthis property by using an existing measure for\nthe total number of closed walks in a graph, namely, the \n{\\em Estrada index} \\cite{Estrada2000}.\nThis index, denoted by $EE(G)$, \nis defined as the trace of the matrix exponential $e^A$.\nNormalizing this value and taking the natural logarithm, \none obtains the {\\itshape natural connectivity} \nof the graph:\n$$\\overline{\\lambda}(A)=\\ln\\left(\\frac{1}{n}\\sum_{i=1}^ne^{\\lambda_i}\\right)=\\ln(EE(G))-\\ln(n).$$\nA compact numerical sketch of this index is given below.\n\n\nIt turns out, however, that essentially the same index was already\npresent in the literature. Indeed, the natural connectivity is only one \nof the possible interpretations\nthat can be given to the logarithm of the (normalized) Estrada index.\nAnother, earlier interpretation was given in \\cite{Estrada2007}, where the authors\nrelated this quantity to the Helmholtz free energy of the network $F=-\\ln\\left(EE(G)\\right)$.\nTherefore, since $\\overline{\\lambda}=-F-\\ln(n)$, the behavior of $F$ completely describes that of\n$\\overline{\\lambda}$ (and conversely) as the graph is modified by adding or removing links.\n\n\nThe natural connectivity has recently been used (see \\cite{Chan2014}) to derive manipulation \nalgorithms that directly optimize this robustness measure. \nIn particular, the updating algorithm introduced in \\cite{Chan2014} appears to be \nsuperior to existing heuristics, such as those proposed in \n\\cite{Beygelzimer2005,Frank1970,Shargel2003}.
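\n\nSince only the spectrum of $A$ is involved, the index is straightforward to evaluate for networks of moderate size. The following Python fragment (ours, for illustration only; it forms the full dense spectrum and is therefore not suited to large networks) computes the natural connectivity directly from its definition:\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.linalg import eigvalsh\n\ndef natural_connectivity(A):\n    # ln( (1\/n) * sum_i exp(lambda_i) ) = ln EE(G) - ln n, where the\n    # lambda_i are the eigenvalues of the (dense) adjacency matrix A.\n    lam = eigvalsh(A)\n    lmax = lam.max()  # shift for numerical stability of the exponentials\n    return lmax + np.log(np.exp(lam - lmax).sum()) - np.log(len(lam))\n\\end{verbatim}\nFor large networks one would instead estimate the trace of $e^A$ stochastically, as discussed at the end of this section.\n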
\nThe algorithm of \\cite{Chan2014}, which costs $O(mt+Kd_{max}^2t+Knt^2)$ where $d_{max}=\\max_{i\\in V}d_i$ \nand $t$ is the (user-defined) number of leading eigenpairs, \nselects $K$ edges to be added to the network\nby maximizing a quantity that involves the elements of the leading $t$ \neigenpairs of $A$.\\footnote{A description of the \nalgorithm can be found in the Supplementary Material.} \n\nWe have compared our updating techniques with that described in \\cite{Chan2014}.\nResults for four representative networks are shown in Figure \\ref{fig:TCeNC_big}.\nIn our tests, we use the value $t=50$ (as in \\cite{Chan2014}), and we select $K=500$ edges. \nNote that, when $K$ is large, the authors recommend recomputing the set of $t$ \nleading eigenpairs every $l$ iterations. \nThis operation requires an additional effort that our faster methods do not need.\nSince the authors of \\cite{Chan2014} show numerical experiments in which the methods \nwith and without the recomputation return \nalmost exactly the same results, we did not recompute the eigenpairs after \nany of the updates. \n\nFigure \\ref{fig:TCeNC_big} displays the results for both the evolution of the natural \nconnectivity and of the normalized total communicability, where\nthe latter is plotted on a semi--logarithmic scale.\nA total of 500 updates have been performed.\nThe method labelled {\\tt Chan} selects the edges according to\nthe algorithm described in \\cite{Chan2014}, choosing from all the virtual edges of the graph. \nFor our methods we used, as before, the virtual edges in the subgraph obtained \nby selecting the top $10\\%$ or $20\\%$ of nodes ranked according \nto the eigenvector centrality.\nAs one can easily see, our methods generally outperform the algorithm proposed in \n\\cite{Chan2014}. In particular, {\\tt nodeTC.no} and {\\tt eigenvector.no} \ngive generally better results than {\\tt Chan} and\nare much faster in practice. For instance, the execution time\nwith {\\tt Chan} on the network ca--HepTh was over 531 seconds, and \nmuch higher for the two larger networks.\nWe recall (see Table \\ref{tab:timings_up})\n that the execution times for {\\tt nodeTC.no} and {\\tt eigenvector.no}\nare about three orders of magnitude smaller.\n\n\\begin{figure}[t]\n\\caption{Evolution of the natural connectivity and of the normalized total \ncommunicability (on a semi--logarithmic scale) when up to 500 updates are \nperformed on four real-world networks.}\n\\centering\n\\includegraphics[width=.9\\textwidth]{TCeNC.eps}\n\\label{fig:TCeNC_big}\n\\end{figure}\n\nIt is striking to see how closely the evolution of the natural connectivity \nmirrors the behavior of the normalized total communicability.
This is likely\ndue to the fact that both indices depend on the eigenvalues of $A$\n(with a large contribution coming from the terms containing $\\lambda_1$,\nsee (\\ref{tc_spec}) and the subsequent remark), and all the updating strategies used here\ntend to make $\\lambda_1$ appreciably larger.\n\nReturning to the interpretation in terms of statistical physics, \nfrom Figure \\ref{fig:TCeNC_big} we deduce that the free energy of the\ngraph decreases as we add edges to the network.\nIn particular, this means that the network is evolving toward a more stable\nconfiguration and, in the limit, toward equilibrium, which is the configuration\nwith maximum entropy.\\footnote{The relation between the free energy and the \nGibbs entropy is described in more detail in the Supplementary Material.}\n\n\nThese findings indicate\nthat the normalized total communicability is as effective an index as\nthe natural connectivity\n(equivalently, the free energy) for the purpose of \ncharacterizing network connectivity.\nSince the network total communicability can be computed very fast (in $O(n)$ time),\nwe believe that the normalized total communicability should be used instead\nof the natural connectivity, especially for large networks. \n\nIndeed, computing the natural connectivity requires evaluating the trace of \n$e^A$; even when stochastic trace estimation is used \\cite{AT11}, this is \nseveral times more expensive, for large networks, than the total communicability. \n\n\\section{Conclusions and future work}\n\\label{sec:conclusions}\nIn this paper we have introduced several techniques that can be used \nto modify an existing network so as to obtain networks that are highly \nsparse, and yet have \na large total communicability.\n\nThese heuristics make use of various measures of edge \ncentrality, a few of which have been introduced in this work. \nFar from being {\\em ad hoc}, these heuristics are widely\napplicable and mathematically justified.\nAll our techniques can be implemented using well-established tools from numerical\nlinear algebra: algorithms for eigenvector computation, Gauss-based quadrature\nrules for estimating quadratic forms, and Krylov subspace methods for computing\nthe action of a matrix function on a vector. At bottom, the Lanczos algorithm\nis the main player. High-quality, public-domain software exists to perform these\ncomputations efficiently.\n\nAmong all the methods introduced here, the best results are obtained by \nthe {\\tt nodeTC.no} and {\\tt eigenvector.no} algorithms, \nwhich are based on the edge total communicability\nand eigenvector centrality, respectively. These methods are extremely fast \nand return excellent results in virtually all \nthe tests we have performed.
For updating networks characterized by a small\nspectral gap, a viable alternative is the algorithm {\\tt subgraph.no}.\nWhile more expensive than {\\tt nodeTC.no} and {\\tt eigenvector.no}, this\nmethod scales linearly with the number of nodes and yields consistently\ngood results.\n\nFinally, we have shown that the total communicability can be effectively \nused as a measure of network connectivity, which plays an important role\nin designing robust networks.\n Indeed, the total communicability does a very good job at quantifying \ntwo related properties of networks: \nthe ease of spreading information, and the extent to which the network is \n``well connected''. Our results show that the total communicability\nbehaves in a manner very similar to the natural connectivity (or free\nenergy) under network\nmodifications, while it can be computed much more quickly.\n\nFuture work should include the extension of these techniques\nto other types of networks,\nsuch as directed and weighted ones.\n\n\n\\section*{Acknowledgements}\nWe are grateful to Ernesto Estrada for providing some of the networks \nused in the numerical experiments and for pointing out some useful references.\nThe first author would like to thank Emory University for the hospitality \noffered in 2014, when this work was completed.\nWe also thank two anonymous referees for helpful suggestions.\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction\\label{sec:intro}}\nWith the advent of modern sky surveys that map out significant contiguous fractions of the observable Universe in ever greater detail~(e.g.,~\\cite{6dFGS,BOSS,DES,DESI,EBOSS,EUCLID,LSST,SDSS,SPHEREX,VIPERS,WFIRST}), it has become possible to investigate its least luminous and most extended constituents: cosmic voids, vast regions of relatively empty space. Voids are not only fascinating objects in their own right, they may also hold the keys to resolving some of today's open problems in cosmology, a fact that has come into focus only recently (see references~\\cite{vdWeygaert2009,vdWeygaert2016,Pisani2019} for an overview). Cosmic voids can be thought of as pocket universes in which dark energy became important much earlier than elsewhere in the cosmos~\\cite{Bos2012,Pisani2015a,Verza2019}. Making up the bulk of large-scale structure, they play a major role in the formation of its web-like pattern~\\cite{Gregory1978,Zeldovich1982,deLapparent1986,Sheth2004,vdWeygaert2009}. This pattern contains a wealth of information on the fundamental properties of the Universe and voids have been shown to be sensitive probes thereof, such as its initial conditions~\\cite{vdWeygaert1993,Chan2019}, its matter~\\cite{Peebles2001,Nusser2005,Park2007,Lavaux2010,Sutter2012b,Sutter2014b,Hamaus2016,Mao2017} and energy components~\\cite{Granett2008,Biswas2010,Ilic2013,Cai2014,Planck2014,Kovacs2019}. Moreover, not only cosmology, but the very nature of gravity can be investigated with voids~\\cite{Clampitt2013,Zivick2015,Cai2015,Achitouv2016,Falck2018,Paillas2019,Perico2019}, because it is gravity that gives rise to their formation and evolution in the first place. This happens via gravitational collapse of initially over-dense regions in the mass distribution into sheets, filaments, and clusters where galaxies form. The remaining space is occupied by voids that are characterized by the coherent flow of (predominantly dark) matter~\\cite{Shandarin2011,Abel2012,Hahn2015}.
Baryonic matter is even more scarce inside voids~\\cite{Paillas2017,Pollina2017}, implying a significant advantage in the attempt to model their evolution when compared to the other structure types. This opens up the opportunity to use voids as laboratories for the physics of dark matter~\\cite{Yang2015,Reed2015,Baldi2018} and other elusive particles, such as neutrinos, that freely permeate their interiors~\\cite{Massara2015,Banerjee2016,Kreisch2019,Schuster2019}.\n\nOn the whole, cosmic voids offer radically novel avenues towards probing the fundamental laws of physics that govern our Universe. General Relativity (GR) relates the distribution of matter and energy to the geometry of spacetime via Einstein's field equations. Consequently, observations of the cosmic expansion history allow constraining the material components of the Universe. In this manner supernova distance measurements have inferred the existence of dark energy (in the form of a cosmological constant $\\Lambda$) that dominates the cosmic energy budget today and is responsible for the observed accelerated expansion~\\cite{Riess1998,Perlmutter1999}. Yet, the fundamental nature of dark energy remains mysterious and further efforts are necessary towards explaining its origin. This has been attempted in studying the expansion history by employing standard rulers, such as the Baryon Acoustic Oscillation~(BAO) feature imprinted in the spatial distribution of galaxies on scales of $\\sim105h^{-1}{\\rm Mpc}$~\\cite{Eisenstein2005}. Because the physics of recombination is well understood, the BAO feature can be modeled from first principles and therefore provides a scale of known extent: a standard ruler. Observations of the BAO in the pairwise distribution of galaxies have been successful in constraining the expansion history and so far consistently confirmed the $\\Lambda$CDM paradigm (e.g.,~\\cite{Alam2017,SanchezA2017,Beutler2017a}).\n\nA similar approach can be adopted for objects of known shape: standard spheres. Both methods are based on the cosmological principle, stating the Universe obeys statistical isotropy and homogeneity. A relatively novel technique is the use of cosmic voids in this context. After averaging over all orientations, their shape obeys spherical symmetry, even though individual voids may not~\\cite{Park2007,Platen2008}. Therefore, stacked voids can be considered as standard spheres~\\cite{Ryden1995,Lavaux2012,Sutter2012b,Sutter2014b,Hamaus2015,Hamaus2016,Mao2017} with sizes typically ranging from $10h^{-1}{\\rm Mpc}$ to $100h^{-1}{\\rm Mpc}$. This means that in a finite survey volume one can find a substantially larger number of such spheres than rulers in the form of BAO, allowing a significant reduction of statistical uncertainties and to probe a wider range of scales. Standard spheres can be used to constrain the expansion history: only if the fiducial cosmological model in converting redshifts to distances is correct, stacked voids appear spherically symmetric, a technique known as the Alcock-Paczynski (AP) test~\\cite{Alcock1979}. In principle this test merely involves a trivial rescaling of coordinates. However, in observational data the spherical symmetry is broken by redshift-space distortions (RSD), which are caused by the peculiar motions of galaxies along the line of sight. Therefore, a successful application of the AP test to constrain cosmological parameters from voids crucially relies on the ability to robustly model their associated RSD~\\cite{Ryden1996}. 
The latter are notoriously complex and difficult to model in the clustering statistics of galaxies, especially on intermediate and small scales, where non-linear clustering and shell crossing occur. It has been shown that these limitations can be mitigated in voids, which are dominated by a laminar, single-stream flow of matter that is well described even by linear theory~\\cite{Paz2013,Hamaus2014b,Hamaus2014c,Pisani2015b,Hamaus2015,Hamaus2016}. This, and the additional virtue of enabling constraints on the growth rate of structure, has sparked the recent interest in void RSD in the literature~\\cite{Cai2016,Chuang2017,Achitouv2017a,Hawken2017,Hamaus2017,Correa2019,Achitouv2019,Nadathur2019a,Nadathur2019b,Hawken2020}.\n\nIn this paper we present a first cosmological analysis of voids from the combined galaxy sample of the final BOSS~\\cite{BOSS} data. Our model self-consistently accounts for RSD and the AP effect, without the need for any external inputs from simulations or mock catalogs. The detailed derivation of the underlying theory is outlined in section~\\ref{sec:theory}, along with a definition of all relevant observables. Section~\\ref{sec:analysis} presents the observed and simulated data sets considered and our method for the identification and characterization of voids therein. Our analysis pipeline is then validated based on mock data in the first part of section~\\ref{sec:analysis}; the second part is devoted to processing the real data. We demonstrate that the AP test with voids offers cosmological constraints that are competitive with other large-scale structure probes. Section~\\ref{sec:discussion} is used to summarize our constraints and to discuss them in the light of previous works on voids (see figure~\\ref{fig:comparison}); they represent the strongest such constraints in the literature. Finally, we draw our conclusions in section~\\ref{sec:conclusion}.\n\n\n\\section{Theory \\label{sec:theory}}\n\n\\subsection{Dynamic distortion \\label{subsec:dynamic}}\nIn cosmology, our observables are the redshifts $z$ and angular sky coordinates $\\boldsymbol{\\theta}=(\\vartheta,\\varphi)$ of an astronomical object. The comoving distance of this object is defined as\n\\begin{equation}\n\\chi_\\parallel(z)=\\int_0^z\\frac{c}{H(z')}\\mathrm{d}z'\\;,\n\\label{chi_par}\n\\end{equation}\nwhere $H(z)$ is the Hubble rate and $c$ the speed of light. The observed redshift $z$ can contain contributions from many different physical effects, but the most important ones are the cosmological Hubble expansion $z_h$ and the Doppler effect $z_d$. The total observed redshift $z$ is then given by~\\cite{Davis2014}\n\\begin{equation}\n1+z = (1+z_h)(1+z_d)\\;. \\label{z_tot}\n\\end{equation}\nThe Doppler effect is caused by peculiar motions along the line of sight, $z_d=v_\\parallel\/c$.
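\n\nFor orientation, equation~(\\ref{chi_par}) amounts to a one-dimensional numerical integral. A minimal Python sketch for a flat $\\Lambda$CDM cosmology (see equation~(\\ref{H(z)}) below; the parameter values are merely illustrative) reads:\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.integrate import quad\n\nC_KMS = 299792.458  # speed of light in km\/s\n\ndef chi_parallel(z, Om=0.307, H0=100.0):\n    # Comoving distance for flat LCDM; choosing H0 = 100 km\/s\/Mpc\n    # expresses distances in units of Mpc\/h.\n    E = lambda zp: np.sqrt(Om * (1.0 + zp)**3 + 1.0 - Om)\n    integral, _ = quad(lambda zp: 1.0 \/ E(zp), 0.0, z)\n    return C_KMS \/ H0 * integral\n\\end{verbatim}\n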
Because $z_d$ is typically small compared to $z_h$, we can write\n\\begin{equation}\n\\chi_\\parallel(z)\\simeq\\chi_\\parallel(z_h) + \\frac{c(1+z_h)}{H(z_h)}z_d\\;.\n\\label{chi_rsd}\n\\end{equation}\nThe transverse comoving distance for an observed angle $\\theta\\equiv|\\boldsymbol{\\theta}|$ on the sky is defined as\n\\begin{equation}\n\\chi_\\perp(z) = D_\\mathrm{A}(z)\\,\\theta\\;,\n\\label{chi_per}\n\\end{equation}\nwhere the comoving angular diameter distance is given by\n\\begin{equation} D_\\mathrm{A}(z) = \\frac{c}{H_0\\sqrt{-\\Omega_\\mathrm{k}}}\\sin\\left(\\frac{H_0\\sqrt{-\\Omega_\\mathrm{k}}}{c}\\chi_\\parallel(z)\\right)\\;,\n\\label{D_A}\n\\end{equation}\nwith the Hubble constant $H_0\\equiv H(z=0)$ and present-day curvature parameter $\\Omega_\\mathrm{k}$. In a flat universe with $\\Omega_\\mathrm{k}=0$, equation~(\\ref{D_A}) reduces to $D_\\mathrm{A}(z)=\\chi_\\parallel(z)$. Now, given the observed coordinates $(z,\\vartheta,\\varphi)$, we can transform to the comoving space vector $\\mathbf{x}$ via\n\\begin{equation}\n\\mathbf{x}(z,\\vartheta,\\varphi) = D_\\mathrm{A}(z)\\begin{pmatrix}\\cos\\vartheta\\cos\\varphi\\\\\\sin\\vartheta\\cos\\varphi\\\\\\sin\\varphi\\end{pmatrix}\\;,\n\\label{x_comoving}\n\\end{equation}\nwhere $D_\\mathrm{A}(z)\\simeq D_\\mathrm{A}(z_h)+cz_d(1+z_h)\/H(z_h)$, analogously to equation~(\\ref{chi_rsd}). Hence, using $z_d=v_\\parallel\/c$, we can write\n\\begin{equation}\n\\mathbf{x}(z) \\simeq \\mathbf{x}(z_h) + \\frac{1+z_h}{H(z_h)}\\mathbf{v}_\\parallel\\;,\n\\label{x_rsd}\n\\end{equation}\nwhere $\\mathbf{v}_\\parallel$ is the component of the velocity vector $\\mathbf{v}$ along the line-of-sight direction. We describe the location and motion of tracers by vectors in comoving space, upper-case letters are used for void centers, lower-case letters for galaxies. The observer's location is chosen to be at the origin of our coordinate system, the void center position is denoted by $\\mathbf{X}$ with redshift $Z$ and the galaxy position by $\\mathbf{x}$ with redshift $z$. The redshift $Z$ of the void center is not a direct observable, but it is constructed via the redshifts of all the surrounding galaxies that define it (see section~\\ref{subsec:voids}). Moreover, we pick the direction of the void center as our line of sight, i.e. $\\mathbf{X}\/|\\mathbf{X}|$, and adopt the distant-observer approximation, assuming that $\\mathbf{x}$ and $\\mathbf{X}$ are parallel.\n\n\\begin{figure}[t]\n\t\\centering\n\t\\resizebox{\\hsize}{!}{\n\t\t\\includegraphics[trim= 0 0 0 40]{fig\/VoidStretch.pdf}}\n\t\\caption{Separation vector between the comoving void center location $\\mathbf{X}$ and the galaxy location $\\mathbf{x}$ in real space ($\\mathbf{r}$, left) and in redshift space ($\\mathbf{s}$, right). The peculiar line-of-sight velocity $\\mathbf{v}_\\parallel$ of every galaxy that defines the void can be decomposed into the peculiar velocity of the void center $\\mathbf{V}_\\parallel$ and the galaxy's relative velocity $\\mathbf{u}_\\parallel$ with respect to this center. For simplicity, the illustration displays $\\mu$ instead of $\\cos^{-1}(\\mu)$ to indicate line-of-sight angles and shows velocity displacements in units of $(1+z_h)\/H(z_h)$. This yields the relation $\\mathbf{s}=\\mathbf{r}+\\mathbf{u}_\\parallel$ between real- and redshift-space separations.}\n\t\\label{fig:voidstretch}\n\\end{figure}\n\nLet us first consider real space, where the Doppler effect is neglected ($z_d=0$) and hence $\\mathbf{x}(z)=\\mathbf{x}(z_h)$. 
The vector $\\mathbf{r}\\equiv\\mathbf{x}-\\mathbf{X}$ connects the two positions at a comoving distance of $r=|\\mathbf{r}|$. Similarly, we define the relative velocity $\\mathbf{u}$ between a void center of velocity $\\mathbf{V}$ and a galaxy of velocity $\\mathbf{v}$ as $\\mathbf{u}\\equiv\\mathbf{v}-\\mathbf{V}$. Now, if we consider redshift space with $z_d\\ne0$ and use equation~(\\ref{x_rsd}), the separation vector between void center and galaxy becomes\n\\begin{equation}\n\\mathbf{x}(z)-\\mathbf{X}(Z) \\simeq \\mathbf{x}(z_h)-\\mathbf{X}(Z_h) + \\frac{1+z_h}{H(z_h)}\\left(\\mathbf{v}_\\parallel-\\mathbf{V}_\\parallel\\right) = \\mathbf{r} + \\frac{1+z_h}{H(z_h)}\\mathbf{u}_\\parallel \\equiv \\mathbf{s}\\;.\n\\label{s(r)}\n\\end{equation}\nHere we have used the approximation $z_h\\simeq Z_h$ for the Doppler term, which is accurate to $\\mathcal{O}(10^{-3})$ on scales of $r\\sim\\mathcal{O}(10^{1})h^{-1}{\\rm Mpc}$ and velocities of $u_\\parallel\\sim\\mathcal{O}(10^2)\\,{\\rm km\\,s}^{-1}$. This approximation becomes exact when we consider the average of this vector over many different voids -- a so-called void stack. In that case the cosmological principle ensures statistical homogeneity and isotropy such that $\\langle\\mathbf{V}\\rangle=\\langle\\mathbf{V}_\\parallel\\rangle=0$, since there is no preferred observer (or void, for that matter) in the Universe. In other words: the average void moves with the background Hubble flow. Thus, the vector $\\mathbf{s}$, which connects void centers and galaxies in redshift space, does not depend on the individual motions of galaxies or void centers, but only on their relative velocity $\\mathbf{u}_\\parallel$ along the line of sight. Of course this only applies to those galaxies that are part of the same void, not to the relative velocities of galaxies that are associated with distinct voids at larger separation (see~\\cite{Hamaus2014a,Chan2014,Hamaus2014c,Liang2016,Chuang2017,Voivodic2020} for an account on large-scale void-galaxy cross-correlations and~\\cite{Sutter2014c,Ruiz2015,Lambas2016,Ceccarelli2016,Wojtak2016,Lares2017b} on the motions and pairwise velocity statistics of voids). An illustration of this is shown in figure~\\ref{fig:voidstretch}: voids experience a translation and a deformation between real and redshift space, but the translational component does not enter the separation vector $\\mathbf{s}$ for galaxies that belong to the void. As long as voids can be considered as coherent extended objects, their centers move along with the galaxies that define them from real to redshift space. This property distinguishes voids from galaxies, which are typically treated as point-like particles in the context of large-scale structure. The same reasoning applies to galaxy clusters, the overdense counterparts of voids: the center of mass is defined by the cluster member galaxies, which are observed in redshift space.
Their relative (virial) motion with respect to the center of mass results in an elongated stacked cluster shape, irrespective of the movement of the entire cluster~\\cite{Croft1999,Zu2013,Marulli2017,Farahi2016,Cai2017b}.\n\n\\subsection{Geometric distortion \\label{subsec:geometric}}\nGiven the observed angular sky-coordinates and redshifts of galaxies and void centers, the comoving distance $s = (s_\\parallel^2 + s_\\perp^2)^{1\/2}$ in redshift space between any void-galaxy pair is determined by equations~(\\ref{chi_par}) and~(\\ref{chi_per}),\n\\begin{equation}\ns_\\parallel = \\frac{c}{H(z)}\\delta z\\quad\\mathrm{and}\\quad s_\\perp = D_\\mathrm{A}(z)\\delta\\theta\\;, \\label{comoving}\n\\end{equation}\nwhere $\\delta z$ and $\\delta\\theta$ are the redshift and angular separation of the pair, respectively. This transformation requires the Hubble rate $H(z)$ and the comoving angular diameter distance $D_\\mathrm{A}(z)$ as functions of redshift, so a particular cosmological model has to be adopted. We assume a flat $\\Lambda$CDM cosmology with\n\\begin{equation}\nH(z) = H_0\\sqrt{\\Omega_\\mathrm{m}(1+z)^3+\\Omega_\\Lambda}\\;, \\label{H(z)}\n\\end{equation}\nwhere $\\Omega_\\mathrm{m}$ and $\\Omega_\\Lambda=1-\\Omega_\\mathrm{m}$ are today's density parameters for matter and a cosmological constant, respectively. For the low redshift range considered in this paper, we can neglect the radiation density for all practical purposes. Given a cosmology with parameters $\\boldsymbol{\\Omega}$ we can now convert angles and redshifts into comoving distances according to equation~(\\ref{comoving}). However, the precise value of these parameters is unknown, as is the underlying cosmology, so we can only estimate the true $H(z)$ and $D_\\mathrm{A}(z)$ with a fiducial model and parameter set $\\boldsymbol{\\Omega}'$, resulting in the estimated distances\n\\begin{equation}\ns_\\parallel' = \\frac{H(z)}{H'(z)}s_\\parallel\\equiv q_\\parallel^{-1} s_\\parallel\\;\\mathrm{,}\\qquad s_\\perp' = \\frac{D_\\mathrm{A}'(z)}{D_\\mathrm{A}(z)}s_\\perp\\equiv q_\\perp^{-1} s_\\perp\\;,\n\\end{equation}\nwhere the primed quantities are evaluated in the fiducial cosmology and $(q_\\parallel, q_\\perp)$ are defined as the ratios between true and fiducial values of $H^{-1}(z)$ and $D_\\mathrm{A}(z)$, respectively~\\cite{SanchezA2017}. Therefore, both magnitude and direction of the separation vector $\\mathbf{s}$ may differ from the truth when a fiducial cosmology is assumed. Defining the cosine of the angle between $\\mathbf{s}$ and the line of sight $\\mathbf{X}\/|\\mathbf{X}|$ as\n\\begin{equation}\n\\mu_s\\equiv\\frac{\\mathbf{s}\\cdot\\mathbf{X}}{|\\mathbf{s}||\\mathbf{X}|}=\\frac{s_\\parallel}{s}\\;,\n\\label{mu_s}\n\\end{equation}\none can obtain the true $s$ and $\\mu_s$ from the fiducial $s'$ and $\\mu_s'$ via\n\\begin{gather}\ns = \\sqrt{q_\\parallel^2 s_\\parallel'^2+q_\\perp^2s_\\perp'^2} = s'\\mu_s'q_\\parallel\\sqrt{1+\\varepsilon^2(\\mu_s'^{-2}-1)}\\;,\n\\label{s_fid}\n\\\\\n\\mu_s = \\frac{\\mathrm{sgn}(\\mu_s')}{\\sqrt{1+\\varepsilon^2(\\mu_s'^{-2}-1)}}\\;,\n\\label{mu_s_fid}\n\\end{gather}\nwhere\n\\begin{equation}\n\\varepsilon \\equiv \\frac{q_\\perp}{q_\\parallel} = \\frac{D_\\mathrm{A}(z)H(z)}{D_\\mathrm{A}'(z)H'(z)}\\;.\n\\label{epsilon}\n\\end{equation}\nIf the fiducial cosmology agrees with the truth, $\\varepsilon=q_\\parallel=q_\\perp=1$ and $s=s'$, $\\mu_s=\\mu_s'$. 
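\n\nIn practice this mapping amounts to a few lines of code. A minimal sketch of equations~(\\ref{s_fid}) and~(\\ref{mu_s_fid}) (ours; vectorized over void-galaxy pairs with nonzero $\\mu_s'$) is:\n\\begin{verbatim}\nimport numpy as np\n\ndef true_coordinates(s_fid, mu_fid, q_par, q_perp):\n    # Map fiducial (s', mu_s') to true (s, mu_s).\n    eps = q_perp \/ q_par\n    stretch = np.sqrt(1.0 + eps**2 * (mu_fid**(-2.0) - 1.0))\n    s = s_fid * np.abs(mu_fid) * q_par * stretch\n    mu = np.sign(mu_fid) \/ stretch\n    # For eps = 1 the angles are unchanged and s = q_par * s';\n    # q_par = q_perp = 1 recovers the identity mapping.\n    return s, mu\n\\end{verbatim}\n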
Conversely, if one of these parameters is measured to be different from unity, one may iteratively vary the fiducial cosmology until the true parameter values $\\boldsymbol{\\Omega}$ are found. As apparent from equations~(\\ref{s_fid}) and~(\\ref{mu_s_fid}), absolute distances $s$ in redshift space depend on both $q_\\parallel$ and $q_\\perp$, whereas angles $\\mu_s$ merely depend on their ratio~$\\varepsilon$. Exploiting the spherical symmetry of stacked voids via the AP effect therefore constrains $\\varepsilon$, but $q_\\parallel$ and $q_\\perp$ remain degenerate without calibration of $s$ with a known scale (such as the BAO scale, for example). However, void-centric distances are typically expressed in units of the effective void radius $R$, which is defined via the cubic root of the void volume in redshift space (see section~\\ref{subsec:voids}). The observed volume is proportional to $s'_\\parallel s'^2_\\perp$, implying $R=q_\\parallel^{1\/3}q_\\perp^{2\/3}R'$ to relate true with fiducial void radii. Then, the ratio $s\/R$ only depends on $\\varepsilon$, as it is the case for $\\mu_s$,\n\\begin{equation}\n\\frac{s}{R}=\\frac{s'}{R'}\\mu_s'\\varepsilon^{-2\/3}\\sqrt{1+\\varepsilon^2(\\mu_s'^{-2}-1)}\\;.\n\\label{s_R_fid}\n\\end{equation} \n\n\n\\subsection{Void-galaxy cross-correlation function \\label{subsec:correlation}}\nThe probability of finding a galaxy at comoving distance $r$ from a void center in real space is given by $1+\\xi(r)$, where $\\xi(r)$ is the void-galaxy cross-correlation function. Due to statistical isotropy, it only depends on the magnitude of the separation vector $\\mathbf{r}$, not its orientation. This is no longer the case in redshift space, where peculiar motions break isotropy via the Doppler effect. However, since this causes RSD exclusively along the line-of-sight direction, we can eliminate their impact by projecting the correlation function onto the plane of the sky. This yields the projected correlation function $\\xi_p$,\n\\begin{equation}\n1+\\xi_p(r_\\perp) = \\frac{\\int\\left[1+\\xi(r)\\right]\\mathrm{d}r_\\parallel}{\\int\\mathrm{d}r_\\parallel} = \\frac{\\int\\left[1+\\xi^s(\\mathbf{s})\\right]\\mathrm{d}s_\\parallel}{\\int\\mathrm{d}s_\\parallel} = 1+\\xi^s_p(s_\\perp)\\;,\n\\label{xi_p}\n\\end{equation}\nwhere $r=(r_\\parallel^2+r_\\perp^2)^{1\/2}$ and $\\xi^s(\\mathbf{s})$ is the redshift-space correlation function, which can now be expressed as\n\\begin{equation}\n1+\\xi^s(\\mathbf{s}) = \\left[1+\\xi(r)\\right]\\frac{\\mathrm{d}r_\\parallel}{\\mathrm{d}s_\\parallel}\\;.\n\\label{xi^s}\n\\end{equation}\nEquation~(\\ref{s(r)}) provides the relation between $\\mathbf{s}$ and $\\mathbf{r}$. In particular, its line-of-sight component can be obtained via taking the dot product with $\\mathbf{X}\/|\\mathbf{X}|$,\n\\begin{equation}\ns_\\parallel = r_\\parallel + \\frac{1+z_h}{H(z_h)}u_\\parallel\\;,\n\\label{s_par(r_par)}\n\\end{equation}\nand hence\n\\begin{equation}\n\\frac{\\mathrm{d}r_\\parallel}{\\mathrm{d}s_\\parallel} = \\left(1 + \\frac{1+z_h}{H(z_h)}\\,\\frac{\\mathrm{d}u_\\parallel}{\\mathrm{d}r_\\parallel}\\right)^{-1}\\;.\n\\label{dr_par\/ds_par}\n\\end{equation}\nThe relative peculiar velocity $\\mathbf{u}$ between void centers and their surrounding galaxies can be derived by imposing local mass conservation. 
At linear order in the matter-density contrast~$\\delta$ and assuming spherical symmetry in real space, the velocity field is given by~\\cite{Peebles1980}\n\\begin{equation}\n\\mathbf{u}(\\mathbf{r}) = -\\frac{f(z_h)}{3}\\frac{H(z_h)}{1+z_h}\\Delta(r)\\,\\mathbf{r}\\;,\n\\label{u(r)}\n\\end{equation}\nwhere $f(z)\\equiv-\\frac{\\mathrm{d}\\!\\ln D(z)}{\\mathrm{d}\\!\\ln(1+z)}$ is the linear growth rate, defined as the logarithmic derivative of the linear growth factor $D(z)$ with respect to the scale factor. In $\\Lambda$CDM, the linear growth rate is well approximated by\n\\begin{equation}\nf(z)=\\left[\\frac{\\Omega_\\mathrm{m}(1+z)^3}{H^2(z)\/H_0^2}\\right]^\\gamma,\n\\label{growth_rate}\n\\end{equation}\nwith a growth index of $\\gamma\\simeq0.55$~\\cite{Lahav1991,Linder2005}. Furthermore, $\\Delta(r)$ is the average matter-density contrast inside a spherical region of comoving radius~$r$,\n\\begin{equation}\n\\Delta(r) = \\frac{3}{r^3}\\int_0^r\\delta(r')r'^2\\,\\mathrm{d}r'\\;. \\label{Delta(r)}\n\\end{equation}\nAlthough the matter-density contrast in the vicinity of void centers is not necessarily in the linear regime (i.e., $|\\delta|\\ll1$), it is bounded from below by the value of $-1$, contrary to over-dense structures (such as galaxy clusters and their dark matter halos). In simulations it has been shown that equation~(\\ref{u(r)}) provides an extremely accurate description of the local velocity field in and around most voids~\\cite{vdWeygaert1993,Hamaus2014b}. While peculiar velocities at the void boundaries can be due to very non-linear structures, spherical averaging over large sample sizes helps to restore the validity of linear theory to a high degree. Only the smallest and most underdense voids exhibit a non-linear behavior close to their centers and may in fact collapse anisotropically under their external mass distribution~\\cite{vdWeygaert1993,Sheth2004}. We can now evaluate the derivative term in equation~(\\ref{dr_par\/ds_par}) as\n\\begin{equation}\n\\frac{1+z_h}{H(z_h)}\\frac{\\mathrm{d}u_\\parallel}{\\mathrm{d}r_\\parallel} = -\\frac{f(z_h)}{3}\\Delta(r)-f(z_h)\\mu_r^2\\left[\\delta(r)-\\Delta(r)\\right]\\;,\n\\label{du_par\/dr_par}\n\\end{equation}\nwhere $\\mu_r=r_\\parallel\/r$ and the identity $\\frac{\\mathrm{d}\\Delta(r)}{\\mathrm{d}r} = \\frac{3}{r}\\left[\\delta(r)-\\Delta(r)\\right]$ was used. Plugging this back into equation~(\\ref{xi^s}) we obtain\n\\begin{equation}\n1+\\xi^s(\\mathbf{s}) = \\frac{1+\\xi(r)}{1-\\frac{f}{3}\\Delta(r)-f\\mu_r^2\\left[\\delta(r)-\\Delta(r)\\right]}\\;.\n\\label{xi^s_nonlin}\n\\end{equation}\nIn order to evaluate this equation at a given observed separation $\\mathbf{s}$, we make use of equations~(\\ref{s_par(r_par)}) and (\\ref{u(r)}),\n\\begin{equation}\nr_\\parallel = \\frac{s_\\parallel}{1 - \\frac{f}{3}\\Delta(r)}\\;,\n\\label{r_par(s_par)}\n\\end{equation}\nand calculate $r=(r_\\parallel^2+r_\\perp^2)^{1\/2}$ with $r_\\perp=s_\\perp$. However, equation~(\\ref{r_par(s_par)}) already requires knowledge of $r$ in the argument of $\\Delta(r)$, so it can only be evaluated by iteration. We therefore start by using $\\Delta(s)$ as an initial step, and iteratively calculate $r_\\parallel$ and $\\Delta(r)$ until convergence is reached. In practice we find $5$ iterations to be fully sufficient for that purpose.
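\n\nThe following Python fragment (ours; \\texttt{Delta} stands for any callable implementing equation~(\\ref{Delta(r)}), and the same scheme applies with $f\\rightarrow f\/b$ and $\\Delta\\rightarrow\\overline{\\xi}$ further below) illustrates this fixed-point iteration:\n\\begin{verbatim}\nimport numpy as np\n\ndef real_space_los(s_par, s_perp, Delta, f, n_iter=5):\n    # Fixed-point iteration for r_par = s_par \/ (1 - f\/3 * Delta(r)),\n    # with r = sqrt(r_par^2 + r_perp^2) and r_perp = s_perp; the first\n    # pass evaluates Delta at r = s, as described in the text.\n    r_par = s_par\n    for _ in range(n_iter):\n        r = np.sqrt(r_par**2 + s_perp**2)\n        r_par = s_par \/ (1.0 - f \/ 3.0 * Delta(r))\n    return r_par\n\\end{verbatim}\n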
The former can be obtained via deprojection of equation~(\\ref{xi_p}),\n\\begin{equation}\n\\xi(r) = -\\frac{1}{\\pi}\\int_r^\\infty\\frac{\\mathrm{d}\\xi^s_p(s_\\perp)}{\\mathrm{d}s_\\perp}\\frac{\\mathrm{d}s_\\perp}{\\sqrt{s_\\perp^2-r^2}}\\;,\n\\label{xi_d}\n\\end{equation}\nmaking use of the inverse Abel transform~\\cite{Pisani2014,Hawken2017}. The latter function $\\delta(r)$, also referred to as the void density profile, is not directly observable. However, it has been investigated in $N$-body simulations and can be inferred via the gravitational lensing effect in imaging surveys~\\cite{Krause2013,Higuchi2013,Melchior2014}. The parametric form suggested in reference~\\cite{Hamaus2014b} (HSW profile) has been shown to accurately describe both simulated~\\cite{Sutter2014a,Hamaus2015,Barreira2015,Falck2018,Pollina2017,Perico2019}, as well as observational data~\\cite{Hamaus2016,SanchezC2017,Pollina2019,Fang2019},\n\\begin{equation}\n\\delta_{\\scriptscriptstyle\\mathrm{HSW}}(r) = \\delta_c\\frac{1-(r\/r_s)^\\alpha}{1+(r\/R)^\\beta}\\;. \\label{HSW}\n\\end{equation}\nHere $R$ is the effective void radius, the scale radius $r_s$ determines where $\\delta_{\\scriptscriptstyle\\mathrm{HSW}}(r_s)=0$, the central underdensity is defined as $\\delta_c\\equiv\\delta_{\\scriptscriptstyle\\mathrm{HSW}}(r=0)$, and the power-law indices $\\alpha$ and $\\beta$ control the inner and outer slopes of the profile. In equation~(\\ref{xi^s_nonlin}) these quantities can then be included as free parameters to be constrained by the observed $\\xi^s(\\mathbf{s})$. This approach has been pursued in the framework of the Gaussian streaming model (GSM)~\\cite{Hamaus2015,Hamaus2016}, which incorporates an additional parameter for the velocity dispersion $\\sigma_v$ of galaxies. In the limit of $\\sigma_v\\rightarrow0$, the GSM recovers the result of equation~(\\ref{xi^s_nonlin}) at linear order in $\\delta$~\\cite{Cai2016}. We note that equation~(\\ref{HSW}) describes the spherically averaged density profile with respect to the void center, but a similar parametrization exists for profiles centered on the void boundary~\\cite{Cautun2016}.\n\nAnother option to constrain the void density profile $\\delta(r)$ is through its relation to $\\xi(r)$, which is equivalent to the void density profile in galaxies. Both simulations~\\cite{Pollina2017,Ronconi2019,Contarini2019} and observational approaches~\\cite{Pollina2019,Fang2019} have established robust evidence for the relationship between $\\delta(r)$ and $\\xi(r)$ to be a linear one, such that\n\\begin{equation}\n\\xi(r) = b\\delta(r)\\;,\n\\label{xi(delta)}\n\\end{equation}\nwith a single proportionality constant $b$. This is similar to the relation between the overdensity of tracers and the underlying matter distribution on large scales, where $|\\delta|\\ll1$~\\cite{Desjacques2018}. That condition is not necessarily satisfied in the interiors and immediate surroundings of voids, and the large-scale linear bias does not coincide with the value of $b$ in general, even for the same tracer population. However, the two bias values approach each other for voids of increasing size, and converge in the limit of large effective void radius $R$~\\cite{Pollina2017,Pollina2019,Contarini2019}. 
Using equation~(\\ref{xi(delta)}) for $\\delta(r)$, we can simply exchange it with $\\xi(r)$ by making the replacements $f\\rightarrow f\/b$ and $\\Delta(r)\\rightarrow\\overline{\\xi}(r)$, with\n\\begin{equation}\n\\overline{\\xi}(r) = \\frac{3}{r^3}\\int_0^r\\xi(r')r'^2\\mathrm{d}r'\\;.\n\\label{xibar}\n\\end{equation}\nNow equation~(\\ref{xi^s_nonlin}) can be written as\n\\begin{equation}\n1+\\xi^s(\\mathbf{s}) = \\frac{1+\\xi(r)}{1-\\frac{1}{3}\\frac{f}{b}\\overline{\\xi}(r)-\\frac{f}{b}\\mu_r^2\\left[\\xi(r)-\\overline{\\xi}(r)\\right]}\\;.\n\\label{xi^s_nonlin2}\n\\end{equation}\nMoreover, we can expand this to linear order in $\\delta$ (or equivalently, $\\xi$) for consistency with the perturbative level of the mass conservation equation~(\\ref{u(r)})~\\cite{Cai2016,Hamaus2017},\n\\begin{equation}\n\\xi^s(\\mathbf{s}) \\simeq \\xi(r) + \\frac{1}{3}\\frac{f}{b}\\overline{\\xi}(r) + \\frac{f}{b}\\mu_r^2\\left[\\xi(r)-\\overline{\\xi}(r)\\right]\\;.\n\\label{xi^s_lin}\n\\end{equation}\nThe function $\\xi^s(\\mathbf{s})$ can be decomposed into independent multipoles via\n\\begin{equation}\n\\xi^s_\\ell(s) = \\frac{2\\ell+1}{2}\\int\\limits_{-1}^1\\xi^s(s,\\mu_s)\\mathcal{L}_\\ell(\\mu_s)\\mathrm{d}\\mu_s\\;,\n\\label{multipoles}\n\\end{equation}\nwith the Legendre polynomials $\\mathcal{L}_\\ell(\\mu_s)$ of order $\\ell$. For equation~(\\ref{xi^s_lin}) the integral can be performed analytically and the only non-vanishing multipoles at linear order in $\\xi$ and $\\overline{\\xi}$ are the monopole ($\\ell=0$) and quadrupole ($\\ell=2$) with\n\\begin{eqnarray}\n\\xi^s_0(s) &=& \\left(1+\\frac{f\/b}{3}\\right)\\xi(r)\\;,\\label{xi_0} \\\\\n\\xi^s_2(s) &=& \\frac{2f\/b}{3}\\left[\\xi(r)-\\overline{\\xi}(r)\\right]\\;.\\label{xi_2}\n\\end{eqnarray}\nThis can be recast into the following form~\\cite{Cai2016,Hamaus2017},\n\\begin{equation}\n\\xi^s_0(s) - \\overline{\\xi}^s_0(s) = \\xi^s_2(s)\\frac{3+f\/b}{2f\/b}\\;,\n\\label{xi_0_2}\n\\end{equation}\nproviding a direct link between monopole and quadrupole in redshift space without reference to any real-space quantity. However, note that equations~(\\ref{xi_0}),~(\\ref{xi_2}) and (\\ref{xi_0_2}) only hold for the case of $\\varepsilon=1$, and multipoles of higher order can be generated via geometric distortions when assuming a fiducial cosmology that is different from the truth, as discussed in section~\\ref{subsec:geometric}.\n\n\n\\section{Analysis \\label{sec:analysis}}\n\n\\subsection{BOSS galaxies and mocks \\label{subsec:galaxies}}\nWe consider galaxy catalogs from the final data release 12 (DR12) of the SDSS-III~\\cite{Eisenstein2011} Baryon Oscillation Spectroscopic Survey (BOSS)~\\cite{Dawson2013}. In particular, we make use of the combined sample of the individual target selections denoted as LOWZ and CMASS~\\cite{Reid2016}. With a total sky area of about $10\\,000$ square degrees from both the northern and southern Galactic hemispheres, the sample contains $1\\,198\\,006$ galaxies in a redshift range of $0.20<z<0.75$. The survey geometry is sampled by a corresponding random catalog, and the entire analysis is repeated on a suite of \\textsc{patchy} mock catalogs that mimic the clustering properties of the data.\n\n\\subsection{\\textsc{vide} voids \\label{subsec:voids}}\nWe identify voids with the watershed-based \\textsc{vide} toolkit and only retain those above a minimum effective radius,\n\\begin{equation}\nR > N_\\mathrm{s}\\left[\\frac{4\\pi}{3}n(z)\\right]^{-1\/3}\\;,\n\\label{ats}\n\\end{equation}\nwhere $n(z)$ is the number density of tracers at redshift $z$ and $N_\\mathrm{s}$ determines the minimum considered void size in units of the average tracer separation. The smaller $N_\\mathrm{s}$, the larger the contamination by spurious voids that may arise from Poisson fluctuations~\\cite{Neyrinck2008,Cousinou2019}. This cut also preferentially removes voids that may have been misidentified due to RSDs~\\cite{Pisani2015b}.
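\n\nThe cut itself is a one-liner; for a rough sense of scale, with an illustrative (not the actual) tracer density:\n\\begin{verbatim}\nimport numpy as np\n\ndef min_void_radius(n_z, N_s=4):\n    # Minimum effective void radius N_s * [ (4*pi\/3) * n(z) ]**(-1\/3),\n    # i.e. N_s times the average tracer separation at redshift z.\n    return N_s * (4.0 * np.pi \/ 3.0 * n_z)**(-1.0 \/ 3.0)\n\nR_min = min_void_radius(3e-4)  # roughly 37 Mpc\/h for n = 3e-4 (h\/Mpc)^3\n\\end{verbatim}\n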
As a default we assume a conservative value of $N_\\mathrm{s}=4$, which yields a minimum effective void radius of $R=34.9h^{-1}{\\rm Mpc}$ in our catalog. We note that this criterion depends on the specific tracer sample considered for void identification. It is known that Poisson point processes exhibit highly non-Gaussian Voronoi cell volume distributions~\\cite{vdWeygaert2009}, which can cause spurious void detections. Reference~\\cite{Cousinou2019} finds a very low contamination fraction of spurious voids in the BOSS DR12 CMASS sample based on a multivariate analysis of void properties in training and test samples. This is attributed to the relatively high clustering bias of CMASS galaxies, which is very similar to that of the combined BOSS sample used here.\n\nThe top of figure~\\ref{fig:box} presents a three-dimensional view of the selected void centers from the northern (right) and southern (left) Galactic hemispheres in comoving space, with the observer located at the origin. Below it, a narrow slice of about one degree in declination within the northern Galactic hemisphere visualizes the distribution of void centers together with their tracer galaxies. Despite the sparsity of the BOSS DR12 combined sample, intricate features of the cosmic web-like structure become apparent. Note that due to the extended three-dimensional geometry of voids, their centers do not necessarily intersect with the slice, leaving some seemingly empty regions without associated void center. The left panel of figure~\\ref{fig:nz} shows the redshift distribution of galaxies, randoms, and voids from the data. For visibility, we have rescaled the total number of randoms to the number of galaxies (by a factor of $50$). Voids are roughly two orders of magnitude scarcer than galaxies, but their redshift distribution follows a similar trend. This is because higher tracer densities allow the identification of smaller voids, as expected from simulation studies~\\cite{Jennings2013,Sutter2014a,Chan2014,Wojtak2016}. The right panel of figure~\\ref{fig:nz} shows the distribution of effective void radii with Poisson error bars, also known as the void-size function. The latter is a quantity of interest for cosmology on its own~\\cite{Pisani2015a,Ronconi2019,Contarini2019,Verza2019}, but in this paper we do not investigate it for that purpose any further; it is shown here only as supplementary information. We repeat the void finding procedure on each of the PATCHY mocks, allowing us to scale down all statistical uncertainties by roughly a factor of $\\sqrt{N_\\mathrm{m}}$, where $N_\\mathrm{m}$ is the number of mock catalogs considered.\n\\begin{figure}[t]\n\t\\centering\n\t\\resizebox{\\hsize}{!}{\n\t\t\\includegraphics[trim=0 -3 0 0]{fig\/nz.pdf}\n\t\t\\includegraphics{fig\/n_rv.pdf}}\n\t\\caption{LEFT: Number density of galaxies, randoms (scaled to the galaxy density), and \\textsc{vide} voids as a function of redshift in the BOSS DR12 combined sample. RIGHT: Number density of all \\textsc{vide} voids as a function of their effective radius (void-size function, only shown for illustration).}\n\t\\label{fig:nz}\n\\end{figure}\n\n\\subsection{Estimators \\label{subsec:estimators}}\nWe need to define an estimator to measure the observed void-galaxy cross-correlation function $\\xi^s(\\mathbf{s})$ in redshift space. In order to take into account the survey geometry, we make use of a random catalog that samples the masked survey volume without intrinsic clustering.
We adopt the expression derived in reference~\\cite{Hamaus2017},\n\\begin{equation}\n\\xi^s(\\mathbf{s}) = \\frac{\\langle\\mathbf{X},\\mathbf{x}\\rangle(\\mathbf{s})}{\\langle\\mathbf{X}\\rangle\\langle\\mathbf{x}\\rangle} - \\frac{\\langle\\mathbf{X},\\mathbf{x}_r\\rangle(\\mathbf{s})}{\\langle\\mathbf{X}\\rangle\\langle\\mathbf{x}_r\\rangle}\\;,\n\\label{estimator}\n\\end{equation}\nwith the void center, galaxy, and random positions $\\mathbf{X}$, $\\mathbf{x}$, and $\\mathbf{x}_r$, respectively. Here, the angled brackets with two vectors represent the average pair count between objects at separation $\\mathbf{s}$, whereas the brackets with only one vector indicate the mean number count of an object in the corresponding redshift slice. This estimator is a limiting case of Landy \\& Szalay~\\cite{Landy1993} and has been validated using mock data on the relevant scales for voids in BOSS data~\\cite{Hamaus2017}. We will consider its two-dimensional variant $\\xi^s(s_\\parallel,s_\\perp)$, with explicit dependence on distances along and perpendicular to the line of sight, as well as its multipoles $\\xi^s_\\ell(s)$ with $\\ell=(0,2,4)$, i.e. monopole, quadrupole and hexadecapole. Calculating the multipoles is particularly simple with this estimator, because it allows the direct application of equation~(\\ref{multipoles}) on the pair counts by using Legendre polynomials as weights in the integral without the need to define bins in $\\mu_s$~\\cite{Hamaus2017}. The mean number counts in the denominator of equation~(\\ref{estimator}) can be pulled outside the integral, as they do not depend on $\\mathbf{s}$. This way the estimation of multipoles becomes more accurate, especially at small separations $s$ and when $\\mu_s$ approaches one, where the binning in both quantities inevitably results in a coarse spatial resolution~\\cite{Cai2016}. We have explicitly compared this estimator with other common choices in the literature~\\cite{Vargas2013} and refer the reader to section~\\ref{subsec:systematics} for more details on this.\n\nWatershed voids exhibit an angle-averaged density profile of universal character, irrespective of their absolute size~\\cite{Hamaus2014b,Sutter2014a,Cautun2016}. In order to capture this unique characteristic in the two-point statistics of a void sample with a broad range of effective radii, it is beneficial to express all separations in units of $R$ (see section~\\ref{subsec:comparison}). When this is done for every void individually in the estimation of $\\xi^s$, it is commonly denoted as a void stack. Like the majority of void-related studies in the literature, we adopt this approach in our analysis and refer to $\\xi^s(\\mathbf{s}\/R)$ as the stacked void-galaxy cross-correlation function. For simplicity, we will omit this explicit notation in the following and bear in mind that all separations $\\mathbf{s}$ are expressed in units of~$R$.\n\nThe degree of uncertainty in a measurement of the void-galaxy cross-correlation function can be quantified by its covariance matrix. It is defined as the covariance of $\\xi^s$ at separations $\\mathbf{s}_i$ and $\\mathbf{s}_j$ from $N$ independent observations,\n\\begin{equation}\n\\mathbf{C}_{ij} = \\bigl<\\bigl(\\xi^s(\\mathbf{s}_i)-\\langle\\xi^s(\\mathbf{s}_i)\\rangle\\bigr)\\bigl(\\xi^s(\\mathbf{s}_j)-\\langle\\xi^s(\\mathbf{s}_j)\\rangle\\bigr)\\bigr>\\;,\n\\label{covariance}\n\\end{equation}\nwhere the angled brackets indicate averages over the sample size. 
Although we can only observe a single universe, we have a large sample of voids at our disposal that enables an estimate of the covariance matrix as well. Note that we are considering mutually exclusive voids, each of which provides an independent patch of large-scale structure. As we are primarily interested in $\\xi^s$ on scales up to the void extent, as opposed to inter-void scales, we can employ a jackknife resampling strategy to estimate $\\mathbf{C}_{ij}$~\\cite{Hamaus2017}. For this we simply remove one void at a time in the estimator of $\\xi^s$ from equation~(\\ref{estimator}), which provides $N_\\mathrm{v}$ jackknife samples in total. These samples can then be used in equation~(\\ref{covariance}) to calculate $\\mathbf{C}_{ij}$, albeit with an additional factor of $(N_\\mathrm{v}-1)$ to account for the statistical weight of the jackknife sample size. We use the square root of the diagonal elements of the covariance matrix to quote error bars on our measurements of $\\xi^s$. The identical procedure can be applied to the multipoles $\\xi^s_\\ell$; in that case one can use equation~(\\ref{covariance}) to additionally calculate the covariance between multipoles of different order.\n\nThere are several advantages of this jackknife technique over other common methods for covariance estimation, which typically rely on simulated mock catalogs. Most importantly, it is based on the observed data itself and does not involve prior model assumptions about cosmology, structure formation, or galaxy evolution. In addition, a statistically reliable estimation of $\\mathbf{C}_{ij}$ requires large sample sizes, which are expensive in terms of numerical resources when considering realistic mocks from $N$-body simulations. Our void catalog already provides $\\mathcal{O}(10^3)$ spatially independent samples at no additional cost. It has been shown that the jackknife technique provides covariance estimates that are consistent with those obtained from independent mock catalogs in the limit of large jackknife sample sizes~\\cite{Favole2020}.\n\n\n\\subsection{Likelihood \\label{subsec:likelihood}}\nEquipped with the theory from section~\\ref{subsec:correlation} we can now define the likelihood $L(\\hat{\\xi}^s|\\boldsymbol{\\Omega})$ of the measurement given a model, which we approximate to be of Gaussian form,\n\\begin{equation}\n\\ln L(\\hat{\\xi}^s|\\boldsymbol{\\Omega}) = -\\frac{1}{2N_\\mathrm{m}}\\sum\\limits_{i,j}\\Bigl(\\hat{\\xi}^s(\\mathbf{s}_i)-\\xi^s(\\mathbf{s}_i,\\boldsymbol{\\Omega})\\Bigr)\\,\\hat{\\mathbf{C}}_{ij}^{-1}\\Bigl(\\hat{\\xi}^s(\\mathbf{s}_j)-\\xi^s(\\mathbf{s}_j,\\boldsymbol{\\Omega})\\Bigr)\\;.\n\\label{likelihood}\n\\end{equation}\nThe hat symbols indicate a measured quantity to be distinguished from the model, which explicitly depends on the parameters $\\boldsymbol{\\Omega}$. Here we have dropped the normalization term involving the determinant of $\\hat{\\mathbf{C}}_{ij}$, since it only adds a constant. The form of equation~(\\ref{likelihood}) can be applied to either the two-dimensional void-galaxy cross-correlation function $\\xi^s(s_\\parallel,s_\\perp)$, or its multipoles $\\xi^s_\\ell(s)$. We use $\\xi^s(s_\\parallel,s_\\perp)$, which contains the information from the multipoles of all orders. However, we have verified that only including the multipoles of orders $\\ell=(0,2,4)$ yields consistent results.
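In code, the jackknife covariance and the likelihood evaluation described above take only a few lines. The following is a minimal sketch, assuming \\texttt{xi\\_jack} is a hypothetical array of shape $(N_\\mathrm{v},N_\\mathrm{bin})$ holding the leave-one-void-out measurements of the data vector:
\\begin{verbatim}
import numpy as np

def jackknife_covariance(xi_jack):
    # equation (covariance) applied to N_v leave-one-void-out samples,
    # multiplied by (N_v - 1) for the jackknife statistical weight
    n_v = xi_jack.shape[0]
    diff = xi_jack - xi_jack.mean(axis=0)
    return (n_v - 1) / n_v * diff.T @ diff

def log_likelihood(xi_hat, xi_model, cov, n_mocks=1):
    # Gaussian form of equation (likelihood); for mock analyses the
    # covariance is effectively scaled by the number of mocks n_mocks
    res = xi_hat - xi_model
    return -0.5 / n_mocks * res @ np.linalg.solve(cov, res)

def reduced_chi2(lnL, n_bins, n_pars, n_mocks=1):
    # equation (chi2) with N_dof = N_bin - N_par degrees of freedom
    return -2 * n_mocks * lnL / (n_bins - n_pars)
\\end{verbatim}
The same functions apply unchanged to the multipoles, where the data vector simply concatenates the $\\ell=(0,2,4)$ measurements.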
When analyzing mock catalogs we scale their covariance in equation~(\\ref{likelihood}) by the number of mock samples $N_\\mathrm{m}$ used, allowing us to validate the statistical constraining power of the data. When analyzing the data itself, we set $N_\\mathrm{m}=1$. We vary $\\boldsymbol{\\Omega}$ until the likelihood attains its global maximum, which defines the best-fit parameter set. The quality of the fit can be assessed by evaluating the reduced chi-square statistic,\n\\begin{equation}\n\\chi^2_\\mathrm{red} = -\\frac{2N_\\mathrm{m}}{N_\\mathrm{dof}}\\ln L(\\hat{\\xi}^s|\\boldsymbol{\\Omega}) \\;,\n\\label{chi2}\n\\end{equation}\nfor $N_\\mathrm{dof}=N_\\mathrm{bin}-N_\\mathrm{par}$ degrees of freedom, where $N_\\mathrm{bin}$ is the number of bins for the data and $N_\\mathrm{par}$ the number of parameters. Moreover, we explore the likelihood surface in the neighborhood of the global maximum using the Markov chain Monte Carlo (MCMC) sampler \\textsc{emcee}~\\cite{Foreman-Mackey2019}, which enables us to access the posterior probability distribution of the model parameters.\n\n\\subsection{Parameters \\label{subsec:parameters}}\nInstead of using the fundamental cosmological parameters, we express $\\boldsymbol{\\Omega}$ in terms of derived parameters that directly affect the void-galaxy cross-correlation function, namely the linear growth rate to bias ratio $f\/b$ and the AP parameter $\\varepsilon$. To account for potential systematics in the data that can be caused by discreteness noise or selection effects, we further allow for two additional nuisance parameters $\\mathcal{M}$ and $\\mathcal{Q}$. The parameter $\\mathcal{M}$ adjusts for possible inaccuracies arising in the deprojection technique, as well as for a contamination of the void sample by Poisson fluctuations, which can attenuate the amplitude of the monopole~\\cite{Cousinou2019}. The parameter $\\mathcal{Q}$, on the other hand, accounts for potential selection effects when voids are identified in anisotropic redshift space~\\cite{Pisani2015b,Nadathur2019b}. A physical origin of this can be violent shell-crossing and virialization events that change the topology of void boundaries~\\cite{Hahn2015}, causing a so-called Finger-of-God (FoG) effect~\\cite{Jackson1972,Hamilton1998,Scoccimarro2004}. It appears around compact overdensities, such as galaxy clusters, generating elongated features along the line of sight that extend over several $h^{-1}{\\rm Mpc}$~\\cite{Peacock2001,Zehavi2011,Pezzotta2017}. A similar effect can be caused by cluster infall regions, leading to closed caustics in redshift space that may be misinterpreted as voids~\\cite{Kaiser1987,Hamilton1998}. Therefore, this can have a non-trivial impact on the identification of voids with diameters of comparable size~\\cite{Stanonik2010,Kreckel2011a,vdWeygaert2011b}, although the tracer sample we use consists of massive luminous red galaxies that typically reside in the centers of clusters and do not sample the entire cluster profile in its outer parts. $\\mathcal{M}$ (monopole-like) is used as a free amplitude of the deprojected correlation function $\\xi(r)$ in real space, and $\\mathcal{Q}$ (quadrupole-like) is a free amplitude for the quadrupole term proportional to $\\mu_r^2$.
Hence, equations~(\\ref{xi^s_nonlin2}) and (\\ref{xi^s_lin}) can be extended by the replacements $\\xi(r)\\rightarrow\\mathcal{M}\\xi(r)$ and $\\mu_r^2\\rightarrow\\mathcal{Q}\\mu_r^2$, which results in the following form for the final parametrization of our model at linear perturbation order\n\\begin{equation}\n\\xi^s(\\mathbf{s}) = \\mathcal{M}\\left\\{\\xi(r) + \\frac{1}{3}\\frac{f}{b}\\overline{\\xi}(r) + \\frac{f}{b}\\mathcal{Q}\\mu_r^2\\left[\\xi(r)-\\overline{\\xi}(r)\\right]\\right\\}\\;,\n\\label{xi^s_lin2}\n\\end{equation}\ntogether with the equivalent replacements in equation~(\\ref{r_par(s_par)}) for the mapping from the observed separation $\\mathbf{s}$ to $\\mathbf{r}$: $r_\\parallel = s_\\parallel\/[1-\\frac{1}{3}\\frac{f}{b}\\mathcal{M}\\overline{\\xi}(r)]$. One can think of equation~(\\ref{xi^s_lin2}) as an adaptive template that attempts to extract those anisotropic distortions that match the radial and angular shape as predicted by linear theory. For the nuisance parameters we assign default values of $\\mathcal{M}=\\mathcal{Q}=1$, as they are not known a priori. Note that this extension does not introduce any fundamental parameter degeneracies, due to the different functional forms of $\\xi(r)$ and $\\overline{\\xi}(r)$. For both our model and nuisance parameters we assume uniform prior ranges of $\\left[-10,+10\\right]$, given that each of their expected values is of order unity. We checked that an extension of these boundaries has no impact on our results.\n\n\\subsection{Model validation \\label{subsec:validation}}\nThe theory model derived in section~\\ref{subsec:correlation} requires knowledge of the void density profile $\\delta(r)$ and the void-galaxy cross-correlation function $\\xi(r)$ in real space for equation~(\\ref{xi^s_nonlin}). Its linear version, equation~(\\ref{xi^s_lin}), only requires $\\xi(r)$. $\\delta(r)$ can be accurately modeled with the HSW profile (cf. equation~(\\ref{HSW})), at the price of including additional parameters to be constrained by the data, an approach that has already been successfully applied to BOSS data~\\cite{Hamaus2016}.\n\n\\begin{figure}[t]\n\t\\centering\n\t\\resizebox{\\hsize}{!}{\n\t\t\\includegraphics{fig\/Xip_test.pdf}\n\t\t\\includegraphics{fig\/Xip_mock.pdf}}\n\t\\caption{LEFT: Projection (dashed red) and deprojection (dotted green) of a model void-galaxy cross-correlation function (solid orange) based on the HSW profile from equation~(\\ref{HSW}), using the Abel transform. RIGHT: Projected void-galaxy cross-correlation function (red wedges, dashed line) of voids from $30$ PATCHY mock catalogs in redshift space, and its real-space counterpart after deprojection (green triangles, dotted line). The redshift-space monopole in the mocks (blue dots) follows the same functional form, in agreement with the linear model (blue solid line) from equation~(\\ref{xi_0}).}\n\t\\label{fig:xi_p_mock}\n\\end{figure}\nIn this paper we instead choose a more data-driven approach and use the real-space profile $\\xi(r)$ obtained by deprojecting the measured projected void-galaxy cross-correlation function $\\xi^s_p(s_\\perp)$ using the inverse Abel transform, equation~(\\ref{xi_d}). We first test this procedure based on the model template from equation~(\\ref{HSW}) and use equation~(\\ref{xi(delta)}) to define $\\xi_{\\scriptscriptstyle\\mathrm{HSW}}(r)\\equiv b\\delta_{\\scriptscriptstyle\\mathrm{HSW}}(r)$. 
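The deprojection test described here can be reproduced numerically with a few lines of code. The sketch below is illustrative only: it assumes the standard HSW form $\\delta(r)=\\delta_c\\bigl[1-(r\/r_s)^\\alpha\\bigr]\/\\bigl[1+(r\/R)^\\beta\\bigr]$ with separations in units of $R$, tabulates the forward Abel projection (equation~(\\ref{xi_p2}) below) on a grid, and inverts it via equation~(\\ref{xi_d}); in both integrals a change of variables removes the integrable singularity at the lower limit.
\\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.interpolate import InterpolatedUnivariateSpline

def delta_hsw(r, r_s=0.82, d_c=-0.36, alpha=1.6, beta=9.1):
    # HSW void density profile with r in units of the effective radius R
    return d_c * (1 - (r / r_s)**alpha) / (1 + r**beta)

xi_in = lambda r: 2.2 * delta_hsw(r)  # xi_HSW(r) = b * delta_HSW(r), b = 2.2

# forward Abel transform: substitute r^2 = s_perp^2 + t^2 to avoid
# the singularity of the integrand at r = s_perp
def project(s_perp, t_max=20.0):
    return 2 * quad(lambda t: xi_in(np.sqrt(s_perp**2 + t**2)), 0, t_max)[0]

s_grid = np.linspace(1e-3, 20.0, 400)
xi_p = InterpolatedUnivariateSpline(s_grid, [project(s) for s in s_grid])
dxi_p = xi_p.derivative()

# inverse Abel transform with the substitution s_perp^2 = r^2 + t^2,
# which turns ds_perp / sqrt(s_perp^2 - r^2) into dt / s_perp
def deproject(r):
    t_max = np.sqrt(s_grid[-1]**2 - r**2)
    return -quad(lambda t: dxi_p(np.sqrt(r**2 + t**2))
                 / np.sqrt(r**2 + t**2), 0, t_max)[0] / np.pi

r_test = np.linspace(0.1, 3.0, 30)
print(max(abs(deproject(r) - xi_in(r)) for r in r_test))  # small residual
\\end{verbatim}
With noiseless input, the recovered profile matches the original to within the numerical integration accuracy, mirroring the overlapping curves in the left panel of figure~\\ref{fig:xi_p_mock}.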
The left panel of figure~\\ref{fig:xi_p_mock} shows $\\xi_{\\scriptscriptstyle\\mathrm{HSW}}$ with the parameter values $(r_s\/R,\\delta_c,\\alpha,\\beta)\\simeq(0.82,-0.36,1.6,9.1)$ and $b=2.2$. These values have been chosen to match the mock data reasonably well (see below). We then use the forward Abel transform to obtain the projected correlation function,\n\\begin{equation}\n\\xi^s_p(s_\\perp) = 2\\int_{s_\\perp}^\\infty\\xi(r)\\frac{r\\mathrm{d}r}{\\sqrt{r^2-s_\\perp^2}}\\;.\n\\label{xi_p2}\n\\end{equation}\nFinally, we apply the inverse Abel transform of equation~(\\ref{xi_d}) to infer the original void-galaxy cross-correlation function $\\xi_{\\scriptscriptstyle\\mathrm{HSW}}$. As evident from the perfectly overlapping lines in the plot, this procedure works extremely well with noiseless data. In reality, however, we have to measure correlation functions with an estimator, which is unavoidably associated with a finite covariance. In order to estimate $\\xi^s_p$ from the data, we adopt equation~(\\ref{estimator}) for pairs on the plane of the sky, which can be achieved by replacing the three-dimensional position $\\mathbf{x}$ of an object with $x_\\perp=(|\\mathbf{x}|^2-x_\\parallel^2)^{1\/2}$ and counting pairs at a given projected separation $s_\\perp$ over the redshift range. We restrict the line-of-sight projection range to $s_\\parallel=\\pm3R$ on the near and far sides of the void center, where $\\xi^s$ has converged to zero. The right panel of figure~\\ref{fig:xi_p_mock} shows the result when stacking $N_\\mathrm{v}\\simeq2\\times10^5$ voids from $N_\\mathrm{m}=30$ PATCHY mock catalogs. In this case the situation is very similar to the test case from the left panel. The deprojection via the inverse Abel transform results in a smooth curve; only close to the void center do we observe mild fluctuations away from the expected shape, due to larger statistical uncertainties~\\cite{Pisani2014}. We verified that reprojecting our result using equation~(\\ref{xi_p2}) agrees well with the original $\\xi^s_p$. We also plot the measured monopole $\\xi^s_0$ from the same PATCHY mocks, including its best-fit model from equation~(\\ref{xi^s_lin2}) using the deprojected $\\xi^s_p$. It provides excellent agreement with the mock data and confirms the predicted proportionality between $\\xi^s_0(s)$ and $\\xi(r)$ of equation~(\\ref{xi_0}).\n\nHaving validated the deprojection procedure to obtain $\\xi(r)$ from the data, we are now ready to test our model for the void-galaxy cross-correlation function in redshift space.\n\\begin{figure}[h]\n\t\\centering\n\t\\resizebox{0.86\\hsize}{!}{\\includegraphics[trim=-40 0 0 10]{fig\/Xi2d_mock.pdf}}\n\t\\resizebox{0.86\\hsize}{!}{\\includegraphics[trim=0 10 0 10]{fig\/Xi_ell_mock.pdf}}\n\t\\caption{TOP: Estimation of the stacked void-galaxy cross-correlation function $\\xi^s(s_\\parallel\/R,s_\\perp\/R)$ from voids in 30 PATCHY mock catalogs (color scale with black contours) and the best-fit model (white contours) from equation~(\\ref{xi^s_lin2}). BOTTOM: Monopole (blue dots), quadrupole (red triangles) and hexadecapole (green wedges) from the same mock data with corresponding model fits (solid, dashed, dotted lines).
The mean redshift and effective radius of the void sample are shown at the top.}\n\t\\label{fig:xi_mock}\n\\end{figure}\n\\begin{figure}[h]\n\t\\centering\n\t\\resizebox{0.7\\hsize}{!}{\\includegraphics{fig\/triangle_mock.pdf}}\n\t\\caption{Posterior probability distribution of the model parameters that enter in equation~(\\ref{xi^s_lin2}), obtained via MCMC from the PATCHY mock data shown in figure~\\ref{fig:xi_mock}. Dark and light shaded areas show $68\\%$ and $95\\%$ confidence regions with a cross marking the best fit, dashed lines indicate fiducial values of the RSD and AP parameters $(f\/b=0.344,\\,\\varepsilon=1)$, and default values for the nuisance parameters $(\\mathcal{M}=\\mathcal{Q}=1)$. The top of each column states the mean and standard deviation of the 1D marginal distributions.}\n\t\\label{fig:triangle_mock}\n\\end{figure}\nFigure~\\ref{fig:xi_mock} presents the corresponding measurement from voids in 30 PATCHY mock catalogs, both its 2D variant with separations along and perpendicular to the line of sight (top panel), as well as the multipoles of order $\\ell=(0,2,4)$ (bottom panel). For the former we use $18$ bins per direction, resulting in $N_\\mathrm{bin}=18^2=324$, whereas for the multipoles we use $25$ radial bins, which yields $N_\\mathrm{bin}=3\\times25=75$ in total. We apply the linear model from equation~(\\ref{xi^s_lin2}) to fit this data (omitting the innermost radial bin) using the $N_\\mathrm{par}=4$ parameters $\\boldsymbol{\\Omega}=(f\/b,\\varepsilon,\\mathcal{M},\\mathcal{Q})$. As apparent from figure~\\ref{fig:xi_mock}, this yields a very good fit to the mock data, with a reduced chi-square value of $\\chi^2_\\mathrm{red}=1.86$. We note that this value corresponds to the statistical power of $30$ mock observations, so it is entirely satisfactory for the purpose of validating our model for a single BOSS catalog. An increase in the number of mock samples marginally affects our summary statistics, which exhibit a negligible amount of statistical noise compared to the real data. The anisotropy of the void-galaxy cross-correlation function is well captured close to the void center, as well as on the void boundaries at $s\\simeq R$. The flattened contours around the void interior are a result of the quadrupole term in equation~(\\ref{xi^s_nonlin}), or its linear version~(\\ref{xi^s_lin})~\\cite{Cai2016}. Outside the void boundaries, where the correlation function declines again, this results in elongated contours, as necessary in order to restore spherical symmetry in the limit of large separations. It is worth noting that the coordinate transformation from equation~(\\ref{r_par(s_par)}) causes a line-of-sight elongation from $r_\\parallel$ to $s_\\parallel$ for negative $\\Delta(r)$, acting in the opposite direction. However, this coordinate effect merely accounts for a small correction to the flattening caused by the quadrupole, which we explicitly checked in our analysis. Finally, we emphasize the strong evidence for a vanishing hexadecapole on all scales, in agreement with the theory prediction from section~\\ref{subsec:correlation}.\n\nWe then run an MCMC to sample the posterior distribution of the model parameters. The result is presented in figure~\\ref{fig:triangle_mock} using the \\textsc{getdist} software package~\\cite{Lewis2019}. We recover the fiducial values of the cosmologically relevant parameters $f\/b$ and $\\varepsilon$ to within the $68\\%$ confidence regions, which validates the theory model we use.
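For concreteness, the following sketch shows how such an MCMC can be set up with \\textsc{emcee} for the multipole version of equation~(\\ref{xi^s_lin2}), with $\\mu_r\\simeq\\mu_s$ and the AP parameter fixed to $\\varepsilon=1$ (the full analysis includes the coordinate mapping of equation~(\\ref{r_par(s_par)})). All inputs are toy stand-ins generated on the spot, so the example is self-contained; in the real analysis the deprojected $\\xi(r)$, the measured data vector, and the jackknife covariance take their place.
\\begin{verbatim}
import numpy as np
import emcee
from scipy.integrate import cumulative_trapezoid
from scipy.interpolate import InterpolatedUnivariateSpline

def xi_r(r):  # stand-in for the deprojected real-space xi(r), r in units of R
    b, d_c = 2.2, -0.36
    return b * d_c * (1 - (r / 0.82)**1.6) / (1 + r**9.1)

# cumulative average xibar(r) = (3 / r^3) * int_0^r xi(q) q^2 dq, splined
grid = np.linspace(1e-4, 4.0, 2048)
cum = cumulative_trapezoid(xi_r(grid) * grid**2, grid, initial=0.0)
xibar_r = InterpolatedUnivariateSpline(grid, 3.0 * cum / grid**3)

def model_multipoles(theta, s):
    # monopole and quadrupole of equation (xi^s_lin2) with mu_r ~ mu_s
    # and epsilon fixed to 1 (the AP mapping is omitted in this sketch)
    f_b, M, Q = theta
    xi, xibar = xi_r(s), xibar_r(s)
    xi0 = M * (xi + f_b / 3 * xibar + f_b * Q / 3 * (xi - xibar))
    xi2 = 2 / 3 * M * f_b * Q * (xi - xibar)
    return np.concatenate([xi0, xi2])

s = np.linspace(0.1, 3.0, 25)
truth = (0.35, 1.0, 1.0)
cov = np.diag(np.full(2 * len(s), 0.01**2))  # toy diagonal covariance
data = model_multipoles(truth, s) + 0.01 * np.random.randn(2 * len(s))

def log_posterior(theta, s, data, icov):
    if np.any(np.abs(theta) > 10):           # uniform priors over [-10, +10]
        return -np.inf
    res = data - model_multipoles(theta, s)
    return -0.5 * res @ icov @ res

nwalkers, ndim = 32, 3
p0 = np.array(truth) + 1e-3 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior,
                                args=(s, data, np.linalg.inv(cov)))
sampler.run_mcmc(p0, 3000)
samples = sampler.get_chain(discard=500, flat=True)
\\end{verbatim}
The flattened chain can then be passed to \\textsc{getdist} to produce contour plots like the one in figure~\\ref{fig:triangle_mock}.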
Moreover, the parameter contours reveal a nearly Gaussian shape of the posterior; only the nuisance parameter $\\mathcal{Q}$ exhibits slightly non-Gaussian behavior. While $\\mathcal{Q}$ is marginally consistent with unity to within $1\\sigma$, we find clear evidence that the parameter $\\mathcal{M}$ exceeds unity by roughly $14\\%$. This could be attributed to the overdispersion beyond Poisson noise that is used in the PATCHY algorithm to calibrate the clustering statistics of galaxies in BOSS~\\cite{Kitaura2016a}, which results in a higher amplitude of random fluctuations inside voids. We also note a strong anti-correlation between $\\mathcal{M}$ and $f\/b$, which can be understood from equation~(\\ref{xi_2}) for the quadrupole, where both parameters enter via multiplication of $\\xi(r)$ and $\\overline{\\xi}(r)$. However, the monopole in equation~(\\ref{xi_0}) breaks their degeneracy.\n\nAs a next step we investigate the dependence on void redshift $Z$. To this end we split our catalog into subsets that contain $50\\%$ of all voids with redshifts below or above their median value. We will refer to these subsets as ``low-z'' and ``high-z'', respectively. Figure~\\ref{fig:xi_mock_Z} presents the corresponding correlation function statistics, revealing characteristic redshift trends. Note that the effective radii $R$ in our catalog are somewhat correlated with $Z$ due to the variation of tracer density $n(z)$ with redshift, as shown in figure~\\ref{fig:nz}. When $n(z)$ decreases, fewer small voids can be identified, which means that at the high-redshift end our voids tend to be larger in size. On average, smaller voids are emptier and exhibit higher compensation walls at their edges~\\cite{Sheth2004,vdWeygaert2011a,Hamaus2014b,vdWeygaert2016}, which in turn induces a higher amplitude of both monopole and quadrupole~\\cite{Hamaus2017}. A similar trend is manifest in the evolution from high to low redshift, which reveals the growth of structure on the void boundaries over time while the void core continuously deepens~\\cite{Sheth2004,vdWeygaert2011a,Hamaus2014b,vdWeygaert2016}. Both effects are supported by the mock data shown in figure~\\ref{fig:xi_mock_Z}.\n\\begin{figure}[t]\n\t\\centering\n\t\\resizebox{\\hsize}{!}{\n\t\t\\includegraphics[trim=0 30 0 5, clip]{fig\/Xi2d_mock_Z1.pdf}\n\t\t\\includegraphics[trim=0 30 0 5, clip]{fig\/Xi_ell_mock_Z1.pdf}}\n\t\\resizebox{\\hsize}{!}{\n\t\t\\includegraphics[trim=0 10 0 5, clip]{fig\/Xi2d_mock_Z2.pdf}\n\t\t\\includegraphics[trim=0 10 0 5, clip]{fig\/Xi_ell_mock_Z2.pdf}}\n\t\\caption{As figure~\\ref{fig:xi_mock} after splitting the PATCHY void sample at its median redshift of $Z=0.51$ into $50\\%$ lowest-redshift (``low-z'', top row) and $50\\%$ highest-redshift voids (``high-z'', bottom row).}\n\t\\label{fig:xi_mock_Z}\n\\end{figure}\n\\begin{figure}[h]\n\t\\centering\n\t\\resizebox{\\hsize}{!}{\n\t\t\\includegraphics{fig\/triangle_mock_Z1.pdf}\n\t\t\\includegraphics{fig\/triangle_mock_Z2.pdf}}\n\t\\caption{As figure~\\ref{fig:triangle_mock} for the ``low-z'' (left) and ``high-z'' (right) PATCHY void sample in figure~\\ref{fig:xi_mock_Z}.}\n\t\\label{fig:triangle_mock_Z}\n\\end{figure}\n\nFinally, we repeat the model fits for the subsets in void redshift and run a full MCMC for each of them. The parameter posteriors are shown in figure~\\ref{fig:triangle_mock_Z}. As for the full void sample before, we retrieve the input cosmology of the PATCHY mocks to within the $68\\%$ confidence levels for $f\/b$ and $\\varepsilon$.
The posteriors of the nuisance parameters $\\mathcal{M}$ and $\\mathcal{Q}$ also look similar to those for the full void sample shown in figure~\\ref{fig:triangle_mock}. However, we notice a very mild increase of $\\mathcal{M}$ and a decrease of $\\mathcal{Q}$ towards higher redshifts. Although the shifts remain well within the $1\\sigma$ confidence intervals for these parameters, they may indicate a slightly lower contamination by Poisson noise, but a slightly stronger anisotropic selection effect for the smaller voids at lower redshift. This would indeed agree with previous simulation results suggesting that the impact of RSDs on void identification is more severe for voids with smaller effective radii~\\cite{Pisani2015b}. Moreover, as large-scale structures become more non-linear over time, we do expect the FoG effect to have a stronger impact on voids at lower redshift.\n\n\\subsection{Data analysis \\label{subsec:fitting}}\nThe successful model validation from the previous section now enables us to perform model fits on the real BOSS data. To this end we simply repeat the analysis steps that have already been performed on the PATCHY mocks above. We first measure the projected void-galaxy cross-correlation function $\\xi^s_p(s_\\perp)$ and apply the deprojection technique using the inverse Abel transform to obtain $\\xi(r)$. The result is shown in figure~\\ref{fig:xi_p_data}, along with the redshift-space monopole $\\xi^s_0(s)$ and its best-fit model. We observe very similar trends to those in the mocks, albeit with larger error bars, as expected from the smaller sample size of voids available in the BOSS data.\n\\begin{figure}[t]\n\t\\centering\n\t\\resizebox{0.86\\hsize}{!}{\\includegraphics[trim=0 10 0 10]{fig\/Xip_data.pdf}}\n\t\\caption{Measurement of the projected void-galaxy cross-correlation function (red wedges, dashed line) from the BOSS DR12 combined sample in redshift space, and its real-space counterpart after deprojection (green triangles, dotted line). The measured redshift-space monopole (blue dots) follows the same functional form, in agreement with the linear model (blue solid line) from equation~(\\ref{xi_0}).}\n\t\\label{fig:xi_p_data}\n\\end{figure}\n\\begin{figure}[h]\n\t\\centering\n\t\\resizebox{0.86\\hsize}{!}{\\includegraphics[trim=-40 0 0 10]{fig\/Xi2d_data.pdf}}\n\t\\resizebox{0.86\\hsize}{!}{\\includegraphics[trim=0 10 0 10]{fig\/Xi_ell_data.pdf}}\n\t\\caption{TOP: Measurement of the stacked void-galaxy cross-correlation function $\\xi^s(s_\\parallel\/R,s_\\perp\/R)$ from voids in the BOSS DR12 combined sample (color scale with black contours) and the best-fit model (white contours) from equation~(\\ref{xi^s_lin2}). BOTTOM: Monopole (blue dots), quadrupole (red triangles) and hexadecapole (green wedges) from the same data with corresponding model fits (solid, dashed, dotted lines). The mean redshift and effective radius of the void sample are shown at the top.}\n\t\\label{fig:xi_data}\n\\end{figure}\nFigure~\\ref{fig:xi_data} presents the two-point statistics for the void-galaxy cross-correlation function and its multipoles. Apart from the larger impact of statistical noise due to the substantially smaller sample size (by a factor of $N_\\mathrm{m}=30$), the results are in excellent agreement with the mock data. Both the amplitude and shape of $\\xi^s(s_\\parallel,s_\\perp)$, as well as $\\xi^s_\\ell(s)$, are very consistent in comparison with figure~\\ref{fig:xi_mock}.
A mild but noticeable difference can be seen very close to the void center, where the contours appear more flattened in the data. One can also perceive stronger fluctuations of the quadrupole and hexadecapole in this regime, but those are simply due to the sparser statistics of galaxies near the void center and thus fully consistent with the error bars. This is further supported by the accurate model fit to the data, resulting in a reduced chi-square value of $\\chi^2_\\mathrm{red}=1.12$.\n\\begin{figure}[t]\n\t\\centering\n\t\\resizebox{0.7\\hsize}{!}{\n\t\t\\includegraphics{fig\/triangle_data.pdf}}\n\t\\caption{Posterior probability distribution of the model parameters that enter in equation~(\\ref{xi^s_lin2}), obtained via MCMC from the BOSS DR12 data shown in figure~\\ref{fig:xi_data}. Dark and light shaded areas show $68\\%$ and $95\\%$ confidence regions with a cross marking the best fit, dashed lines indicate fiducial values of the RSD and AP parameters $(f\/b=0.409,\\,\\varepsilon=1)$, and default values for the nuisance parameters $(\\mathcal{M}=\\mathcal{Q}=1)$. The top of each column states the mean and standard deviation of the 1D marginal distributions.}\n\t\\label{fig:triangle_data}\n\\end{figure}\n\nThe full posterior parameter distribution obtained from the BOSS data is shown in figure~\\ref{fig:triangle_data}, which qualitatively resembles the mock results from figure~\\ref{fig:triangle_mock}. However, a few important differences are apparent. Firstly, the value of $f\/b$ from the data is significantly higher than in the mocks, which is partly driven by the lower value of $b=1.85$ in the BOSS data, compared to $b=2.20$ in the mocks. Further, the nuisance parameters $\\mathcal{M}$ and $\\mathcal{Q}$ both take on lower values in the data than in the mocks. In particular, $\\mathcal{M}$ is consistent with unity to within $68\\%$ confidence, which could indicate that voids in the BOSS data are less affected by discreteness noise than expected from the PATCHY mocks. At the same time, $\\mathcal{Q}$ is consistent with unity only at the $95\\%$ confidence level (from below), suggesting an attenuation of the quadrupole amplitude compared to the mocks. This could be caused by systematics in the BOSS data that have not been taken into account at the same level of complexity in the mocks. One such example is the foreground contamination by stars~\\cite{Reid2016}.\n\n\\begin{figure}[t]\n\t\\centering\n\t\\resizebox{\\hsize}{!}{\n\t\t\\includegraphics[trim=0 30 0 5, clip]{fig\/Xi2d_data_Z1.pdf}\n\t\t\\includegraphics[trim=0 30 0 5, clip]{fig\/Xi_ell_data_Z1.pdf}}\n\t\\resizebox{\\hsize}{!}{\n\t\t\\includegraphics[trim=0 10 0 5, clip]{fig\/Xi2d_data_Z2.pdf}\n\t\t\\includegraphics[trim=0 10 0 5, clip]{fig\/Xi_ell_data_Z2.pdf}}\n\t\\caption{As figure~\\ref{fig:xi_data} after splitting the BOSS void sample at its median redshift of $Z=0.52$ into $50\\%$ lowest-redshift (``low-z'', top row) and $50\\%$ highest-redshift voids (``high-z'', bottom row).}\n\t\\label{fig:xi_data_Z}\n\\end{figure}\n\\begin{figure}[h]\n\t\\centering\n\t\\resizebox{\\hsize}{!}{\n\t\t\\includegraphics{fig\/triangle_data_Z1.pdf}\n\t\t\\includegraphics{fig\/triangle_data_Z2.pdf}}\n\t\\caption{As figure~\\ref{fig:triangle_data} for the ``low-z'' (left) and ``high-z'' (right) BOSS void sample in figure~\\ref{fig:xi_data_Z}.}\n\t\\label{fig:triangle_data_Z}\n\\end{figure}\n\nAs done for the PATCHY mocks, we further investigate the redshift evolution of our constraints with the BOSS data.
To this end we split our catalog into two equally sized low- and high-redshift bins again (``low-z'' and ``high-z'' samples); the resulting clustering statistics are shown in figure~\\ref{fig:xi_data_Z}. We observe the same trends as before, namely a deepening of void interiors and an increase of the quadrupole towards lower redshift. Given the lower statistical power of these bins the data evidently look noisier, but the linear model still provides a good fit overall. Note that we neglect any uncertainties in our theory model, which relies on a measurement of the projected correlation function $\\xi^s_p(s_\\perp)$. Thus, especially for noisy data, this may result in an underestimation of the full covariance and hence a higher reduced chi-square. Nevertheless, our $\\chi^2_\\mathrm{red}$ values are still reasonably close to unity. Figure~\\ref{fig:triangle_data_Z} presents the parameter posteriors of the model fit. Evidently, even these subsets of voids can still provide interesting constraints with good accuracy. We find our best-fit values for $f\/b$ and $\\varepsilon$ to be in agreement with the fiducial Planck cosmology to within the $68\\%$ confidence levels. Moreover, we notice that the low amplitude of the nuisance parameter $\\mathcal{Q}$ is driven by the high-redshift bin; otherwise the best-fit values for both $\\mathcal{M}$ and $\\mathcal{Q}$ are consistent with unity to within the $68\\%$ contours.\n\nSo far our analysis has exclusively been based on the observed data without using any prior information. The model ingredients $\\xi(r)$, $\\mathcal{M}$, and $\\mathcal{Q}$ have been derived from this data self-consistently. One may argue, however, that these quantities are already available from the survey mocks with much higher accuracy (see section~\\ref{subsec:validation}). Hence, making use of the mocks to calibrate those model ingredients allows us to avoid marginalization over nuisance parameters and to use the statistical power of the data solely to constrain cosmology. We implement this calibration approach by simply using the $30$ PATCHY mocks to estimate $\\xi(r)$ via equation~(\\ref{xi_d}) and fixing the nuisance parameters $\\mathcal{M}$ and $\\mathcal{Q}$ to their best-fit values from the corresponding void sample in the mock analysis. This leaves us with only two remaining free parameters, $f\/b$ and $\\varepsilon$, for which we repeat the MCMC runs.\n\nThe results are presented in figure~\\ref{fig:triangle_data_cal}, showing their posterior distribution for each of our void samples. Evidently, the mock-calibrated analysis (calib.) significantly improves upon the constraints obtained without calibration (free). While the accuracy of the AP parameter $\\varepsilon$ exhibits mild improvements of about $10\\%$ to $30\\%$, the error on $f\/b$ shrinks by roughly a factor of~$4$. This is mainly due to the considerable anti-correlation between $f\/b$ and $\\mathcal{M}$ apparent in figures~\\ref{fig:triangle_data} and~\\ref{fig:triangle_data_Z}, which is removed when $\\mathcal{M}$ is fixed to a fiducial value. We note, however, that the best-fit values for $\\mathcal{M}$ in our uncalibrated analysis differ significantly between the observed data and the mocks. In particular, we found the values of $\\mathcal{M}$ in the mocks to be higher than in the data by about $10\\%$.
This may be partly due to the higher bias parameter and the level of overdispersion in the PATCHY mocks as compared to the data, but more fundamentally the mismatch reveals that not all aspects of the data are understood precisely enough to be fully represented by the mocks.\n\\begin{figure}[b]\n\t\\centering\n\t\\resizebox{\\hsize}{!}{\n\t\t\\includegraphics{fig\/triangle_data_cal_Z1.pdf}\n\t\t\\includegraphics{fig\/triangle_data_cal_Z2.pdf}\n\t\t\\includegraphics{fig\/triangle_data_cal.pdf}}\n\t\\caption{As figures~\\ref{fig:triangle_data} and~\\ref{fig:triangle_data_Z}, but focused on the RSD and AP parameters $f\/b$ and $\\varepsilon$. The red contours show constraints when the theory model is calibrated with the PATCHY mocks to determine $\\xi(r)$ and the values of the nuisance parameters $\\mathcal{M}$ and $\\mathcal{Q}$. The blue contours show the original constraints when $\\xi(r)$, $\\mathcal{M}$ and $\\mathcal{Q}$ are left free to be jointly estimated from the BOSS data. The top of each column states the mean and standard deviation of the calibrated constraints; the corresponding void sample is indicated above the figure legend of each panel.}\n\t\\label{fig:triangle_data_cal}\n\\end{figure}\n\nTherefore, we caution against the use of mocks for model calibration, as such an approach is prone to cause biased constraints on cosmology. This is evident from the significant shifts of the posteriors in figure~\\ref{fig:triangle_data_cal} after performing the calibration. Another consequence is the underestimation of parameter uncertainties, which is caused by mistaking prior information from the mocks for the truth. The mocks merely represent many realizations of a single cosmological model with one fiducial parameter set and one fixed prescription of how dark matter halos are populated by galaxies (halo occupation distribution). A realistic model must therefore either take into account the dependence on these ingredients including their uncertainty, or constrain them from the data directly. Our approach follows the philosophy of relying exclusively on the observed data to obtain the most robust constraints.\n\n\n\\section{Discussion\\label{sec:discussion}}\n\n\\subsection{Parameter constraints \\label{subsec:constraints}}\n\\begin{table}[b]\n\t\\centering\n\t\\caption{Constraints on RSD and AP parameters (mean values with $1\\sigma$ errors) from \\textsc{vide} voids in the final BOSS data (top rows). The middle rows show corresponding constraints after model calibration on the PATCHY mocks, and the bottom rows provide Planck 2018~\\cite{Planck2018} results as reference values, assuming a flat $\\Lambda$CDM model. All constraints on $\\Omega_\\mathrm{m}$ in the last column assume flat $\\Lambda$CDM as well.}\\vspace{10pt}\n\t\\label{tab:constraints}\n\t\\centerline{\n\t\t\\begin{tabular}{lccccc}\n\t\t\t\\toprule\n\t\t\tSample ($\\bar{Z}$) & $f\/b$ & $f\\sigma_8$ & $\\varepsilon$ & $D_\\mathrm{A} H\/c$ & $\\Omega_\\mathrm{m}$\\\\\n\t\t\t\\midrule\n\t\t\tlow-z ($0.43$) & $0.493\\pm0.105$ & $0.590\\pm0.125$ & $0.9996\\pm0.0081$ & $0.485\\pm0.004$ & $0.306\\pm0.027$\\\\[3pt]\n\t\t\thigh-z ($0.58$) & $0.538\\pm0.146$ & $0.594\\pm0.162$ & $1.0100\\pm0.0111$ & $0.702\\pm0.008$ & $0.334\\pm0.030$\\\\[3pt]\n\t\t\tall ($0.51$) & $0.540\\pm0.091$ & $0.621\\pm0.104$ & $1.0017\\pm0.0068$ & $0.588\\pm0.004$ & $0.312\\pm0.020$\\\\[3pt]\n\t\t\t\\midrule\n\t\t\tlow-z calib. & $0.390\\pm0.025$ & $0.554\\pm0.036$ & $1.0134\\pm0.0075$ & $0.492\\pm0.004$ & $0.353\\pm0.026$\\\\[3pt]\n\t\t\thigh-z calib.
& $0.288\\pm0.033$ & $0.379\\pm0.043$ & $0.9953\\pm0.0084$ & $0.691\\pm0.006$ & $0.295\\pm0.022$\\\\[3pt]\n\t\t\tall calib. & $0.347\\pm0.023$ & $0.474\\pm0.031$ & $1.0011\\pm0.0060$ & $0.588\\pm0.003$ & $0.310\\pm0.017$\\\\[3pt]\n\t\t\t\\midrule\n\t\t\tlow-z ref. & $0.398\\pm0.003$ & $0.476\\pm0.006$ & $1.0025\\pm0.0022$ & $0.487\\pm0.001$ & $0.315\\pm0.007$\\\\[3pt]\n\t\t\thigh-z ref. & $0.425\\pm0.003$ & $0.470\\pm0.005$ & $1.0031\\pm0.0028$ & $0.697\\pm0.002$ & $0.315\\pm0.007$\\\\[3pt]\n\t\t\tall ref. & $0.412\\pm0.003$ & $0.474\\pm0.006$ & $1.0028\\pm0.0025$ & $0.589\\pm0.001$ & $0.315\\pm0.007$\\\\[3pt]\n\t\t\t\\bottomrule\n\t\\end{tabular}}\n\\end{table}\n\nThe final parameter constraints from our \\textsc{vide} void samples found in the BOSS DR12 data are summarized in table~\\ref{tab:constraints}. We distinguish between uncalibrated and mock-calibrated (calib.) samples, both for the subsets at low and high redshift (low-z, high-z) and for the full redshift range (all). The table presents the measured mean void redshift $\\bar{Z}$, the RSD parameter $f\/b$, and the AP parameter $\\varepsilon$. Furthermore, it provides derived constraints on $f\\sigma_8$, $D_\\mathrm{A} H$, and $\\Omega_\\mathrm{m}$. For $f\\sigma_8$ we multiply our constraint on $f\/b$ by $b\\sigma_8$, with $b=1.85$ and $\\sigma_8=0.8111$ from Planck 2018~\\cite{Planck2018}. For the calibrated case we use $b=2.20$ from the mocks.\n\nIn principle, the parameter combination $f\\sigma_8$ could be constrained from voids directly, but only if the theory model can explicitly account for the dependence of the void-galaxy cross-correlation function $\\xi(r)$ on $\\sigma_8$. Sometimes the assumption $\\xi(r)\\propto\\sigma_8$ is used without further justification, but evidently this must fail in the non-linear regime~\\cite{Juszkiewicz2010}, where $\\xi(r)$ approaches values close to $-1$, due to the restriction $\\xi(r)>-1$. The same argument applies to the density profiles of dark matter halos or galaxy clusters, which do not simply scale linearly with $\\sigma_8$~\\cite{Brown2020}. Moreover, while the value of $\\sigma_8$ controls the amplitude of matter fluctuations and thus the formation of halos, its effect on voids identified in the distribution of galaxies that populate those halos is far from trivial. Another approach is to directly estimate $b\\sigma_8$ via an integral over the projected galaxy auto-correlation function~\\cite{Hawken2017}. However, the result contains non-linear contributions from small scales that must be accounted for, which again involves assumptions about a particular cosmological model. The covariance between measurements of $f\/b$ and $b\\sigma_8$ remains inaccessible to that approach as well.\n\n\\begin{figure}[t]\n\t\\centering\n\t\\resizebox{0.51\\hsize}{!}{\n\t\t\\includegraphics[trim=10 10 10 10]{fig\/triangle_data_fs8_Om.pdf}}\n\t\\caption{Calibration-independent constraints on the cosmological parameters $f\\sigma_8$ and $\\Omega_\\mathrm{m}$ from our full \\textsc{vide} void sample in the final BOSS data. They are converted from the RSD and AP parameter posteriors shown in figure~\\ref{fig:triangle_data}, as described in section~\\ref{subsec:constraints}.
A white cross indicates our best fit and dashed lines show mean parameter values from the Planck 2018 baseline results as a reference~\\cite{Planck2018}.}\n\t\\label{fig:triangle_data_fs8_Om}\n\\end{figure}\n\nFinally, measurements involving the parameter $\\sigma_8$ commonly ignore its implicit dependence on the Hubble parameter $h$ via the choice of $8h^{-1}{\\rm Mpc}$ as reference scale, and therefore underestimate the uncertainty. Reference~\\cite{Sanchez2020} argued for using $\\sigma_{12}$ instead, with a reference scale of $12\\,{\\rm Mpc}$, which yields about the same value as $\\sigma_8$ for a Planck-constrained value of $h$. For these reasons we decided to follow the simpler procedure described above to derive constraints on $f\\sigma_8$, allowing us to compare existing results across the literature. The constraint on $D_\\mathrm{A} H$ can be obtained via equation~(\\ref{epsilon}) by multiplying $\\varepsilon$ and its error with $D_\\mathrm{A}'H'$ from our fiducial flat $\\Lambda$CDM cosmology from section~\\ref{subsec:galaxies}. In this case the only free cosmological parameter in the product $D_\\mathrm{A} H$ is $\\Omega_\\mathrm{m}$, so we can numerically invert this function to obtain the full posterior on $\\Omega_\\mathrm{m}$. Its mean and standard deviation are shown in the last column of table~\\ref{tab:constraints}. Lastly, we present our main result for the converted parameter constraints on $f\\sigma_8$ and $\\Omega_\\mathrm{m}$ in figure~\\ref{fig:triangle_data_fs8_Om}. It originates from the calibration-independent analysis of our full void sample in the final BOSS data at mean redshift $\\bar{Z}=0.51$.\n\n\n\\subsection{Systematics tests\\label{subsec:systematics}}\n\n\\subsubsection{Fiducial cosmology}\nIn order to affirm the robustness of our results, we have performed a number of systematics tests on our analysis pipeline. One potential systematic can be a residual dependence on the fiducial cosmology we assumed in section~\\ref{subsec:galaxies} when converting angular sky coordinates and redshifts into comoving space via equation~(\\ref{x_comoving}). This conversion preserves the topology of large-scale structure, but in the presence of statistical noise due to sparse sampling of tracers it can have an impact on void identification~\\cite{Mao2017}. We investigate how a change of the fiducial cosmology affects our final constraints on cosmological parameters by shifting the fiducial value of $\\Omega_\\mathrm{m}'$ from $0.307$ to $0.247$. This shift amounts to three times the standard deviation we obtain from the posterior on $\\Omega_\\mathrm{m}=0.312\\pm0.020$ in the uncalibrated analysis of all voids (see table~\\ref{tab:constraints}). We then repeat our entire analysis, including the void-finding procedure, and sample the posterior on $f\/b$ and $\\Omega_\\mathrm{m}$ assuming the new value for $\\Omega_\\mathrm{m}'$. The result is presented in the left panel of figure~\\ref{fig:triangle_sys}, showing a very mild impact of the fiducial cosmology on the posterior parameter distribution.
The resulting shifts of the posterior mean values are well within the $68\\%$ credible regions of both cases and their relative accuracies practically remain unchanged, suggesting that the assumed fiducial cosmology contributes only a marginal systematic effect to our final constraints.\n\\begin{figure}[b]\n\t\\centering\n\t\\resizebox{\\hsize}{!}{\n\t\t\\includegraphics{fig\/triangle_data_Om.pdf}\n\t\t\\includegraphics{fig\/triangle_mock_bias.pdf}}\n\t\\caption{LEFT: Impact of the fiducial parameter $\\Omega_\\mathrm{m}'$ on the final posterior distribution for $f\/b$ and $\\Omega_\\mathrm{m}$ from all voids in the BOSS DR12 data. The top of each column states the mean and standard deviation obtained for the assumed value of $\\Omega_\\mathrm{m}'=0.247$ and dashed lines indicate fiducial values of the default cosmology with $\\Omega_\\mathrm{m}'=0.307$. RIGHT: Impact of the bias of the galaxy sample used in the cross-correlation with all voids from the PATCHY mocks on the derived posterior for $f\\sigma_8$ and $\\varepsilon$. The top of each column states the mean and standard deviation obtained for the new value $b=1.93$.}\n\t\\label{fig:triangle_sys}\n\\end{figure}\n\n\\subsubsection{Galaxy bias}\nThe bias of the galaxy sample we use to estimate the cross-correlation with voids can contribute another systematic effect to our final parameters. This is especially so for the derived combination $f\\sigma_8$, which we obtain by multiplying $f\/b$ with the average bias $b$ of the galaxy sample and the Planck-constrained value of $\\sigma_8$ (see section~\\ref{subsec:constraints}). The BOSS data does not readily allow us to define sub-samples of galaxies with known bias values that differ from the sample average. However, the PATCHY mocks provide a bias parameter for every object in the catalog, so we can investigate its influence on our analysis pipeline. As a simple test, we selected $50\\%$ of all PATCHY galaxies with a bias value below the median, which amounts to an average of $b=1.93$. Because the galaxy bias follows its own redshift evolution, we had to re-sample the random catalog in order for it to follow the same density-redshift trend as the selected galaxy sample. We then cross-correlate it with our original PATCHY void sample used in section~\\ref{subsec:validation} and compare its posterior on $f\\sigma_8$ and $\\varepsilon$ to the original one from figure~\\ref{fig:triangle_mock} in the right panel of figure~\\ref{fig:triangle_sys}. The two constraints are very consistent with each other, suggesting that the final result on $f\\sigma_8$ does not depend on the bias of the galaxy sample used for the cross-correlation.\n\n\\subsubsection{Estimator}\nThe main advantage of our clustering estimator from equation~(\\ref{estimator}) is its simplicity, allowing a fast and precise evaluation of the void-galaxy cross-correlation function and its multipoles without angular binning.
In order to assess its accuracy, we have compared it with the more common Landy-Szalay estimator~\\cite{Landy1993}\n\\begin{equation}\n\\xi^s(\\mathbf{s}) = \\left(\\frac{\\langle\\mathbf{X},\\mathbf{x}\\rangle(\\mathbf{s})}{\\langle\\mathbf{X}\\rangle\\langle\\mathbf{x}\\rangle} -\\frac{\\langle\\mathbf{X},\\mathbf{x}_r\\rangle(\\mathbf{s})}{\\langle\\mathbf{X}\\rangle\\langle\\mathbf{x}_r\\rangle} -\\frac{\\langle\\mathbf{X}_r,\\mathbf{x}\\rangle(\\mathbf{s})}{\\langle\\mathbf{X}_r\\rangle\\langle\\mathbf{x}\\rangle} +\\frac{\\langle\\mathbf{X}_r,\\mathbf{x}_r\\rangle(\\mathbf{s})}{\\langle\\mathbf{X}_r\\rangle\\langle\\mathbf{x}_r\\rangle}\\right)\\left\/\n\\left(\\frac{\\langle\\mathbf{X}_r,\\mathbf{x}_r\\rangle(\\mathbf{s})}{\\langle\\mathbf{X}_r\\rangle\\langle\\mathbf{x}_r\\rangle}\\right.\\right)\\;,\n\\label{LS_estimator}\n\\end{equation}\nwhich additionally involves the random void-center positions $\\mathbf{X}_r$. From the PATCHY mocks we generate a sample of such void randoms by assigning the angular and redshift distribution of their voids to a randomly generated set of points with $50$ times the number of objects (in analogy to the galaxy randoms, see section~\\ref{subsec:galaxies}). We also assign an effective radius to each random void, with the same distribution as the one obtained in the mocks. This guarantees a consistent stacking procedure, as described in section~\\ref{subsec:estimators}. We find that the additional terms $\\langle\\mathbf{X}_r,\\mathbf{x}\\rangle(\\mathbf{s})\/\\langle\\mathbf{X}_r\\rangle\\langle\\mathbf{x}\\rangle$ and $\\langle\\mathbf{X}_r,\\mathbf{x}_r\\rangle(\\mathbf{s})\/\\langle\\mathbf{X}_r\\rangle\\langle\\mathbf{x}_r\\rangle$ in the stacked void-galaxy correlation estimator from equation~(\\ref{LS_estimator}) are independent of the direction and magnitude of~$\\mathbf{s}$, in agreement with the findings of reference~\\cite{Hamaus2017}. However, we notice that the amplitude of both terms exceeds unity by roughly $20\\%$, while their ratio remains very close to one, with deviations of the order of $10^{-3}$. This results in a different overall normalization between equations~(\\ref{estimator}) and~(\\ref{LS_estimator}), making their amplitudes differ by a constant factor of about $1.2$. When we rescale one of the void-galaxy correlation functions by this number, we find the results from these two estimators to be virtually indistinguishable. Because we use the same estimator for $\\xi^s(s_\\parallel,s_\\perp)$ and the projected correlation function $\\xi^s_p(s_\\perp)$, which is used to infer the real-space $\\xi(r)$ via equation~(\\ref{xi_d}), any such normalization constant gets absorbed on both sides of equation~(\\ref{xi^s_lin}) and therefore has no effect on our model parameters.\n\n\n\\subsubsection{Covariance matrix}\n\\begin{figure}[t]\n\t\\centering\n\t\\resizebox{\\hsize}{!}{\n\t\t\\includegraphics[trim=5 10 5 5, clip]{fig\/cov_data.pdf}\n\t\t\\includegraphics[trim=5 10 5 5, clip]{fig\/cov_mock.pdf}}\n\t\\caption{Covariance matrix (normalized by its diagonal components) of the stacked void-galaxy cross-correlation function $\\xi^s(s_\\parallel,s_\\perp)$ from the BOSS DR12 data (left) and the PATCHY mocks (right).}\n\t\\label{fig:covariance}\n\\end{figure}\nAs a last consistency test we investigate the impact of the covariance matrix on our results.
The left panel of figure~\\ref{fig:covariance} shows the covariance matrix estimated using the jackknife technique on the BOSS data, as described in section~\\ref{subsec:estimators}, normalized by its diagonal components (i.e., the correlation matrix $\\mathbf{C}_{ij}\/\\sqrt{\\mathbf{C}_{ii}\\mathbf{C}_{jj}}\\,$). Note that this matrix contains $N_\\mathrm{bin}^2=(18^2)^2$ elements for the covariance of the two-dimensional correlation function $\\xi^s(s_\\parallel,s_\\perp)$. In order to overcome the statistical noise in the data covariance, we can measure the same quantity for all voids in our $N_\\mathrm{m}=30$ independent mock catalogs. The result is shown in the right panel of figure~\\ref{fig:covariance}, featuring a very similar structure to that of the real data. In our main analysis we have used the data covariance for the sake of maintaining an entirely calibration-independent approach. However, when replacing it with the mock covariance in our likelihood from equation~(\\ref{likelihood}), we obtain posteriors that are consistent with our previous results, which is why we do not show them again. This suggests that the estimation of the covariance matrix from the data itself is sufficiently precise, allowing us to obtain fully calibration-independent constraints on cosmology from the observed sample of voids.\n\n\\subsection{Comparison to previous work \\label{subsec:comparison}}\n\\begin{figure}[t]\n\t\\centering\n\t\\resizebox{\\hsize}{!}{\n\t\t\\includegraphics[trim=30 50 60 80, clip]{fig\/fs8_DAH.pdf}}\n\t\\caption{Comparison of constraints on $f\\sigma_8$ and $D_\\mathrm{A} H$ (mean values with $68\\%$ confidence regions) obtained from cosmic voids in the literature; references are ordered chronologically in the figure legend. To improve readability, $D_\\mathrm{A} H$ is normalized by its reference value $D_\\mathrm{A}'H'$ in the Planck 2018 flat $\\Lambda$CDM cosmology~\\cite{Planck2018} (gray line with shaded error band). Filled markers indicate growth rate measurements without consideration of the AP effect, while open markers include the AP test. The line style of error bars indicates various degrees of model assumptions made: model-independent (solid), calibrated on simulations (dashed), calibrated on mocks (dotted), calibrated on simulations and mocks (dash-dotted). All the employed simulations and mocks assume a flat $\\Lambda$CDM cosmology. Data points at similar redshifts have been slightly shifted horizontally to avoid overlap.}\n\t\\label{fig:comparison}\n\\end{figure}\nThe observational AP test with cosmic voids already has some history: it was first proposed by Lavaux and Wandelt in 2009~\\cite{Lavaux2010} and first measured by Sutter et al. in 2012~\\cite{Sutter2012b}. The early measurements of the AP effect did not yet account for RSDs with a physically motivated model, but calibrated their impact using simulations~\\cite{Sutter2014b,Mao2017}. The first joint RSD and AP analysis from observed cosmic voids was published in 2016, based on BOSS DR11 data~\\cite{Hamaus2016}. It demonstrated for the first time that a percent-level accuracy on $\\varepsilon$ can be achieved with voids, making them the prime candidate for observational AP tests.
Since then, a number of papers have appeared that either focused on the RSD analysis of voids exclusively in order to constrain the growth rate~\\cite{Achitouv2017a,Hawken2017,Hamaus2017,Achitouv2019,Hawken2020}, or performed further joint analyses including the AP effect~\\cite{Nadathur2019b}.\n\nWe summarize the constraints on $f\\sigma_8$ and $D_\\mathrm{A} H$ that have been obtained from voids throughout the literature in figure~\\ref{fig:comparison}, including the results from this paper. Evidently, the different analysis techniques have progressed over time and achieved significant improvements in accuracy. Moreover, spectroscopic data from a number of surveys covering different redshift ranges have been exploited to this end, including 6dFGS~\\cite{6dFGS}, BOSS~\\cite{BOSS}, eBOSS~\\cite{EBOSS}, SDSS~\\cite{SDSS}, and VIPERS~\\cite{VIPERS}. All of the published results are consistent with a flat $\\Lambda$CDM cosmology, in agreement with the measurements by Planck~\\cite{Planck2018}. However, some of the analyses have been calibrated using simulations and\/or mocks to determine unknown model ingredients. If such external information has been used and has not been marginalized over, we indicate the calibrated results in figure~\\ref{fig:comparison} by different line styles of error bars, as described in the caption. A comparison on equal terms can only be made by taking this essential fact into account. In addition, there are a number of analysis choices that differ among the published results. In the following we provide a list of aspects that we have investigated in more detail and found to be relevant for our results.\n\n\\subsubsection{Void finding in real vs. redshift space}\nLike most papers on the topic of void RSD in the literature, we define voids in observable redshift space. The recent analysis of reference~\\cite{Nadathur2019b} advocates the use of reconstruction techniques to identify voids in real space instead. Their centers in real space are then correlated with the original galaxy positions in redshift space to estimate a hybrid real\/redshift space void-galaxy cross-correlation function. We have investigated this approach using halo catalogs from $N$-body simulations and calculated the resulting two-point statistics. We confirm that this results in a more elongated shape of $\\xi^s(s_\\parallel,s_\\perp)$ along the line of sight and a change of sign in its quadrupole at small void-centric separations. This can be readily understood from the illustration in figure~\\ref{fig:voidstretch}: the separation vector between a void center in real space and one of the void's galaxies in redshift space is now given by the gray dashed line, which is more elongated along the line of sight because it contains a contribution from the velocity of the void center, $\\tilde{\\mathbf{s}}=\\mathbf{r}+\\mathbf{u}_\\parallel+\\mathbf{V}_\\parallel$ (with velocities in units of $(1+z_h)\/H$). Another consequence of this approach is that for void velocities with $|\\mathbf{V}_\\parallel|\\gtrsim R$, a significant number of galaxies from neighboring voids in redshift space will be closer to the void center in real space than the void's own member galaxies. As a result, the void-galaxy cross-correlation function contains different contributions from galaxies of the same and neighboring voids, depending on the magnitude of $\\mathbf{V}_\\parallel$.
As this effect is difficult to model from first principles, reference~\\cite{Nadathur2019b} resorts to the use of mock catalogs to calibrate the form of this hybrid correlation function, and restricts its analysis to the largest $50\\%$ of all voids.\n\nThe motivation for velocity field reconstruction was grounded in the claim that the velocities $\\mathbf{V}$ of void centers cannot be accounted for in any of the existing models for the void-galaxy cross-correlation function in redshift space~\\cite{Nadathur2019b}. This presumption is unfounded, as these models have actually been derived assuming local mass conservation relative to the motion of the void center~\\cite{Paz2013,Hamaus2015,Cai2016}. While it is true that absolute void velocities $\\mathbf{V}$ are difficult to predict, the same holds for the absolute galaxy velocities $\\mathbf{v}$ in the vicinity of void centers. Both $\\mathbf{V}$ and $\\mathbf{v}$ contain bulk-flow contributions sourced by density fluctuations on scales beyond the void's extent. However, local mass conservation provides a very good prediction for their difference $\\mathbf{u}$, as discussed in section~\\ref{subsec:dynamic}. A consequence of this is a vanishing hexadecapole $\\xi^s_4(s)$, as explained in reference~\\cite{Cai2016} and confirmed by our analysis. We further note that the galaxy velocity field $\\mathbf{v}$ is anisotropic around void centers in redshift space, as expected from figure~\\ref{fig:voidstretch}. The model derived in section~\\ref{subsec:correlation} merely assumes statistical isotropy of the field $\\mathbf{u}$ in real space, which follows from the cosmological principle. Reference~\\cite{Nadathur2019a} speculated about a potential selection effect in favor of voids with higher outflow velocities and hence lower observed central densities in redshift space. As explained in section~\\ref{subsec:voids}, our void finder operates on local minima and their surrounding watershed basins, irrespective of any absolute density threshold. Such a selection effect therefore cannot affect voids identified with~\\textsc{vide} or~\\textsc{zobov}.\n\nAnother argument for the use of reconstruction is based on the impact of redshift-space distortions on the void-size function~\\cite{Nadathur2019a}. We note that the effective radii for voids of any size are expected to change between real and redshift space due to dynamic distortions, as evident from figure~\\ref{fig:voidstretch}. However, we only use the observed effective void radii as units to express all separations in either space, which leaves the mapping between $\\mathbf{r}$ and $\\mathbf{s}$ unchanged. A problematic impact of this mapping can be the destruction of voids by catastrophic redshift-space distortions, such as the FoG effect, or by shot noise due to the sparsity of tracers, which may change the topology of watershed basins. Because the FoG effect is limited to scales of a few $h^{-1}{\\rm Mpc}$ and smaller voids are defined by fewer tracers, this problem becomes more relevant for voids of relatively small size. We account for this potential systematic via marginalization over the nuisance parameters introduced in section~\\ref{subsec:parameters}.\n\nIn conclusion, velocity field reconstruction is not required to account for the dynamic distortions of voids, as evident from this paper and numerous earlier works~\\cite{Paz2013,Hamaus2016,Cai2016,Achitouv2017a,Hawken2017,Hamaus2017,Correa2019,Achitouv2019,Hawken2020}.
The velocity field reconstruction technique merely offers an alternative approach to model RSDs around voids, in addition to the existing models. If reconstruction is used in conjunction with another RSD model, dynamic distortions are unnecessarily taken into account twice. The disadvantages of reconstruction include its dependence on a smoothing scale, assumptions on tracer bias and growth rate relations, as well as its sensitivity to survey edges and shot noise~(e.g., \\cite{Sherwin2019,Philcox2020}). Last but not least, reconstruction makes the data a function of the theory model. Vice-versa, calibration of the theory model on survey mocks that are informed by the data generates an inverse dependence. If information from the mocks is used in the model, theory and data are intertwined to a degree that makes a rigorous likelihood analysis much more involved. Moreover, this practice forfeits the criteria necessary for an independent model validation. The authors of reference~\\cite{Nadathur2019b} claim their analysis to be ``free of systematic errors'', but neglect its systematic dependence on the assumed mock cosmology.\n\n\\subsubsection{Void center definition}\nBecause voids are aspherical by nature, the definition of their centers is not unique. In observations, which typically provide the 3D locations (but not the 3D velocities) of tracers that outline each void, there are in practice two options: the point of minimum tracer density inside the void, or the geometric center defined by the void boundary. Minimum-density centers can be defined as maximally extended empty spheres in a tracer distribution~\\cite{Zhao2016}, without requiring the sophistication of a watershed algorithm. The optimal choice of center definition depends on the specific type of application, so it is not possible to make general statements about this. However, for the sake of measuring geometric distortions via the AP effect it is desirable to enhance the amplitude of tracer fluctuations around their background density to increase the signal-to-noise ratio of anisotropic clustering measurements. As described in section~\\ref{subsec:voids}, the geometric center (barycenter) retains information about the void boundary and thereby generates a pronounced compensation wall in its cross-correlation with galaxies at a separation of one effective void radius $R$. On the other hand, the minimum-density center produces a stronger negative amplitude of the void-galaxy cross-correlation function at small separations. The number of tracers in a shell of width $\\mathrm{d}s$ grows as $s^2$ for a constant tracer density, and even faster for increasing density with $s$, as is the case inside voids. Therefore the coherent compensation walls around void barycenters serve as a lever arm to provide significantly higher signal-to-noise ratios for measurements of anisotropic clustering and hence the AP effect. We have checked this explicitly by repeating our analysis using minimum-density centers, which results in less pronounced compensation walls in $\\xi^s(s_\\parallel,s_\\perp)$, a lower amplitude of its quadrupole, and an uncertainty on the AP parameter $\\varepsilon$ of roughly double the size.\n\n\\subsubsection{Void stacking}\nThe method of void stacking is related to the previous aspect, as it affects the void-galaxy cross-correlation function in a similar way. 
Because voids are objects of finite extent, the correlation of their centers with tracers inside or outside their boundaries is qualitatively different~\\cite{Hamaus2014a,Chan2014,Cai2016,Voivodic2020}. This is analogous to the halo model, which ascribes two different contributions to the clustering properties of matter particles, those inside the same halo, and those among different halos~\\cite{Seljak2000,Peacock2000}. Therefore, in order to capture the characteristic clustering properties of tracers inside a sample of differently sized voids, one typically rescales the tracer coordinates by the effective radius of their respective host void, a method referred to as void stacking. This guarantees that the void boundaries coherently overlap at a separation of $s=R$, and thus creates a strong compensation wall feature in the stacked void-galaxy cross-correlation function. Without the rescaling procedure, compensation walls of different-size voids do not aggregate, which results in a smeared-out correlation function with almost no feature remaining at $s=R$. This smearing in turn is disadvantageous for measurements of AP distortions, following the arguments discussed above. A similar effect can be caused by stacking voids of different evolutionary stages from a wide range of redshifts. For example, the properties of void galaxies are expected to be redshift-dependent~\\cite{Kreckel2011a,Kreckel2011b}. This can be accounted for by splitting the void sample into redshift bins. However, in the BOSS DR12 data and the PATCHY mocks we find a very mild evolution with redshift, allowing us to average over the full void sample.\n\n\\subsubsection{Correlation function estimation}\nIt is common practice to estimate correlation functions via counts in shells, i.e., by counting the number of tracers and randoms inside a spherical shell of width $\\mathrm{d}s$ at separation $s$. In the interiors of voids, however, the density of tracers is low by construction, which can result in shells with too few tracers to reliably estimate correlation functions that are intended to infer properties of the density field (such as its growth rate $f$). The previously discussed method of void stacking helps in this respect, as one may collect the tracers that fall into a given shell from all rescaled voids of the entire sample. The convergence of the estimator can then be assessed by increasing the void sample size, as we have done using mocks in section~\\ref{subsec:validation}. Within our approach we find no dependence of the estimators on sample size, in support of the conclusion that our correlation function statistics have converged. However, if shells with very few or no tracers are encountered in every single void, the counts-in-shell estimator yields biased results, even in the limit of infinite sample size~\\cite{Nadathur2015}.\n\nThis is particularly relevant for shells in the vicinity of the minimum-density center, which exhibits no nearby tracers by construction. As a consequence, the counts-in-shell estimator yields a value of $\\xi=-1$ for all empty shells, regardless of the nature of the tracer distribution. In fact, this is even the case for empty shells in a random distribution of tracers, an example that reveals the limitations of this estimator most clearly. As its name suggests, the counts-in-shell estimator is only defined for non-zero object counts, but returns meaningless results otherwise. 
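\n\nThis failure mode is easy to reproduce. The following Python sketch (a deliberately idealized toy with a single center in a uniform random point field; all numbers are illustrative) makes it explicit:\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(42)\nnbar = 0.05                            # mean tracer density (toy value)\nL = 50.0                               # box size\npos = rng.uniform(0.0, L, size=(int(nbar*L**3), 3))\ncenter = np.array([L\/2, L\/2, L\/2])\nd = np.linalg.norm(pos - center, axis=1)\n\nedges = np.linspace(0.5, 10.0, 20)\nfor lo, hi in zip(edges[:-1], edges[1:]):\n    counts = np.sum((d >= lo) & (d < hi))\n    expected = nbar*4\/3*np.pi*(hi**3 - lo**3)\n    print(lo, hi, counts\/expected - 1) # counts-in-shells estimate of xi\n\\end{verbatim}\nAlthough the true correlation function of this random field vanishes everywhere, the innermost shells are typically empty and the estimator returns $\\xi=-1$ there.\n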
Towards larger scales, as soon as the first tracers are encountered in a shell at separation $s_\\mathrm{d}$, the correlation estimate abruptly jumps to a value significantly higher than $\\xi=-1$, and finally converges to a smooth curve at separations with sufficient tracer counts~\\cite{Nadathur2019b}. If voids are not rescaled by their effective radius, this results in a kink in the average correlation function at $s=s_\\mathrm{d}$ even for arbitrarily large sample sizes. A similar behavior can be observed when estimating the radial velocity profile of the tracers~\\cite{Nadathur2019a}. The resulting bias in the counts-in-shell estimator on small scales breaks the validity of equations~(\\ref{u(r)}) and~(\\ref{xi(delta)}), which only apply in the continuous limit of high tracer counts, and can be misinterpreted as evidence for an intrinsic non-linearity or stochasticity of the tracer density field.\n\nThe shell at separation $s_\\mathrm{d}$ indicates the discreteness limit of the tracer distribution. It is determined by the average density of tracers and is therefore unrelated to cosmologically induced clustering statistics that can be measured on larger scales. Moreover, discreteness artifacts are notoriously difficult to model from first principles due to their unphysical nature, which leaves no option other than to calibrate them via mock catalogs. In reference~\\cite{Nadathur2019a} it is argued that the void-galaxy cross-correlation function exhibits a ``feature'' both in its monopole and quadrupole at separation $s_\\mathrm{d}$, which is calibrated on mocks to be used for AP distortion measurements in a later publication~\\cite{Nadathur2019b}. It remains to be demonstrated whether such features at the discreteness limit of an estimator are of any use for the AP test. They similarly arise in scale-free Poisson distributions, which are insensitive to geometric distortions due to their lack of spatial correlations and therefore satisfy the condition $\\varepsilon=1$ in any coordinate system. Consequently, in such a scenario the AP test necessarily returns the fiducial cosmology and therefore becomes dominated by confirmation bias. Changing the fiducial cosmology provides a useful sanity check to exclude the presence of confirmation bias, as we have shown in section~\\ref{subsec:systematics}.\n\n\\subsubsection{RSD model}\nWe have performed extensive tests to compare the existing RSD models for voids that are available in the literature. This essentially concerns the GSM for voids as proposed by Paz et al.~\\cite{Paz2013}, the linear model of Cai et al.~\\cite{Cai2016} used in this paper, and variants thereof. We find consistent results with the GSM, albeit with slightly weaker constraints on $f\/b$ and $\\varepsilon$ due to marginalization over the additional velocity dispersion parameter~$\\sigma_v$. Moreover, as the GSM requires an integration over the pairwise velocity probability distribution function in every bin of $\\xi^s(s_\\parallel,s_\\perp)$, it significantly slows down the model evaluation. We find that velocity dispersion only marginally affects our fits to the data, so we settled on the simpler linear model from equation~(\\ref{xi^s_lin2}).\n\nFurthermore, we explored extensions of the linear model, such as the full non-linear expression~(\\ref{xi^s_nonlin2}). We also tested the model extension proposed in equation (14) of reference~\\cite{Nadathur2019a}, which contains terms of linear and second order in $\\delta$. 
Note that in this model, every occurrence of the parameter $f$ is multiplied by a factor of $1+\\xi(r)$, unfolding an additional degeneracy between the growth rate and the amplitude of $\\xi(r)$, which depends on $b$ and $\\sigma_8$. Moreover, that model requires the void mass density profile $\\delta(r)$ from simulations in addition to $\\xi(r)$ from the mocks, making it even more dependent on the calibration input. However, none of these extensions improves our fits to either data or mocks. We suspect that a rigorous model at the non-linear level must additionally involve an extension of the linear mass conservation relation that leads to equation~(\\ref{u(r)}), as suggested by reference~\\cite{Achitouv2017b}. Nevertheless, our analysis of the final BOSS data does not indicate any limitations of the simplest linear model from equation~(\\ref{xi^s_lin}), in agreement with previous analyses~\\cite{Hamaus2017,Achitouv2019}.\n\n\n\\section{Conclusion\\label{sec:conclusion}}\nWe have presented a comprehensive cosmological analysis of the geometric and dynamic distortions of cosmic voids in the final BOSS dataset. The extracted information is condensed into constraints on two key quantities, the RSD parameter $f\/b$, and the AP parameter $\\varepsilon$. When calibrated on survey mocks, our analysis provides a relative accuracy of $6.6\\%$ on $f\/b$ and $0.60\\%$ on $\\varepsilon$ (at $68\\%$ confidence level) from the full void sample at a mean redshift of $0.51$. This represents the tightest growth rate constraint obtained from voids in the literature. The AP result even represents the tightest constraint of its kind. However, as these results are calibrated by mock catalogs from a fixed fiducial cosmology, they need to be taken with a grain of salt. Without calibration we are still able to self-consistently model the data, obtaining a relative accuracy of $16.9\\%$ on $f\/b$ and $0.68\\%$ on $\\varepsilon$. While the weaker AP constraint still remains unrivaled, the degradation in the uncertainty on $f\/b$ is mainly due to its strong anti-correlation with the amplitude of the real-space void-galaxy cross-correlation function $\\xi(r)$, which we jointly infer from the data. We emphasize that these uncalibrated constraints are entirely independent from any prior model assumptions on cosmology, or structure formation involving baryonic components, and do not rely on mocks or simulations in any way. They exclusively emerge from the observed data and a linear-theory model that is derived from first principles. With the additional validation of this model on external survey mocks with much higher statistical power, these constraints are robust. The quality of the BOSS data even allows us to analyze sub-samples of voids in two redshift bins. This decreases the mean accuracy per bin by roughly a factor of $\\sqrt{2}$, as expected for statistically independent samples.\n\nWe account for potential systematics in our analysis via a marginalization strategy. To this end we include two nuisance parameters in the model: $\\mathcal{M}$ for modulating the amplitude of the deprojected real-space void-galaxy cross-correlation function $\\xi(r)$, and $\\mathcal{Q}$ for adjusting the quadrupole amplitude. The first parameter accounts for inaccuracies in the deprojection technique and a possible contamination of our void sample by random Poisson fluctuations that can be mistaken as voids with a shallow density profile. 
The second parameter accounts for anisotropic selection effects due to catastrophic RSDs (such as the FoG effect) or shot noise, that can affect the identification of voids. In the PATCHY mocks we find significant evidence for $\\mathcal{M}>1$, and very marginal evidence for $\\mathcal{Q}>1$, while in the BOSS data both parameters are consistent with unity (to within $68\\%$ confidence) in our low-redshift bin. At higher redshift we observe a mild indication of $\\mathcal{M}>1$ and $\\mathcal{Q}<1$, but only at the significance level of $2\\sigma$ at best. This observation leads us to draw the following conclusion: survey mocks do not necessarily account for all aspects of, and systematics in, the data. They typically represent different realizations drawn from one and the same set of cosmological and astrophysical parameters. Therefore, using the mocks for model calibration may lead to biased constraints on cosmology. This is particularly relevant in situations where model extensions from the fiducial mock cosmology are explored, such as curvature, the properties of dark energy, or massive neutrinos. In addition, this practice underestimates parameter uncertainties, by up to a factor of $4$ for constraints on growth from RSDs. \n\nAs a final remark, we emphasize the important role that cosmic voids will play in the cosmological analysis of future datasets with a much larger scope. Both observatories from the ground~\\cite{DESI,LSST} and from space~\\cite{EUCLID,SPHEREX,WFIRST} will soon provide an unprecedented coverage of large-scale structure in the local Universe, increasing the available sample sizes of voids to at least an order of $10^5$ per survey. This implies that currently achievable error bars on RSD and AP parameters will be further reduced by a factor of a few, potentially allowing us to perform precision cosmology at the level of sub-percent accuracy that could bring about deep ramifications concerning the standard model of cosmology. In this work we have merely investigated flat $\\Lambda$CDM, which exhibits no signs of inconsistency with the final BOSS data.\n\n\n\\begin{acknowledgments}\nWe thank David Spergel for reading over our manuscript and providing pertinent feedback and suggestions. NH would like to thank Kerstin Paech for exchange on coding best practices, Giorgia Pollina for useful suggestions to improve the quality of figures, and Enzo Branchini, Chia-Hsun Chuang, Carlos Correa, Luigi Guzzo, Martha Lippich, Nelson Padilla, Ariel S\\'anchez, Ravi Sheth, Sergey Sibiryakov, Rien van de Weygaert, and Simon White for inspiring discussions about voids at the MIAPP 2019 workshop on ``Dynamics of Large Scale Structure Formation'' in Garching. NH and JW are supported by the Excellence Cluster ORIGINS, which is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy -- EXC-2094 -- 390783311. AP is supported by NASA grant 15-WFIRST15-0008 to the Nancy Grace Roman Space Telescope Science Investigation Team ``Cosmology with the High Latitude Survey''. GL and BDW acknowledge financial support from the ANR BIG4 project, under reference ANR-16-CE23-0002. The Center for Computational Astrophysics is supported by the Simons Foundation.\n\nFunding for SDSS-III~\\cite{SDSS} has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, and the U.S. Department of Energy Office of Science. 
SDSS-III is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS-III Collaboration including the University of Arizona, the Brazilian Participation Group, Brookhaven National Laboratory, Carnegie Mellon University, University of Florida, the French Participation Group, the German Participation Group, Harvard University, the Instituto de Astrofisica de Canarias, the Michigan State\/Notre Dame\/JINA Participation Group, Johns Hopkins University, Lawrence Berkeley National Laboratory, Max Planck Institute for Astrophysics, Max Planck Institute for Extraterrestrial Physics, New Mexico State University, New York University, Ohio State University, Pennsylvania State University, University of Portsmouth, Princeton University, the Spanish Participation Group, University of Tokyo, University of Utah, Vanderbilt University, University of Virginia, University of Washington, and Yale University.\n\\end{acknowledgments}\n\n\\section{Introduction}\n\n\n\nThe nature of dark matter and dark energy remains one of the greatest mysteries of modern cosmology. Dark matter is responsible for the flat rotation curves of the galaxies and dark energy is responsible for the accelerated expansion of the Universe. It is found that dark energy represents about $70\\%$ of the energy content of the present Universe while the proportions of dark matter and baryonic matter are $25\\%$ and $5\\%$ respectively.\n\n\n\nIn a previous paper \\cite{epjp} (see also \\cite{lettre,jcap}) we have introduced a new cosmological model that we called the logotropic model. In this model, there is no dark matter and no dark energy. There is just a single dark fluid. What we call ``dark matter'' actually corresponds to its rest-mass energy and what we call ``dark energy'' corresponds to its internal energy.\\footnote{Many models try to unify dark matter and dark energy. They are called unified dark energy and dark matter (UDE\/M) models. However, the interpretation of dark matter and dark energy that we give in Refs. \\cite{epjp,lettre} is new and original.}\n\nOur model does not contain any arbitrary parameter so that it is totally constrained. It involves a fundamental constant $\\Lambda$ which is the counterpart of Einstein's cosmological constant \\cite{einsteincosmo} in the $\\Lambda$CDM (cold dark matter) model and which turns out to have the same value. Still, the logotropic model is fundamentally different from the $\\Lambda$CDM model.\n\n\nOn the large (cosmological) scales, the logotropic model is indistinguishable from the $\\Lambda$CDM model up to the present epoch \\cite{epjp,lettre,jcap}. The two models will differ in the far future, in about $25\\, {\\rm Gyr}$, after which the logotropic model will become phantom (the energy density will increase as the Universe expands) and present a Little Rip (the energy density and the scale factor will become infinite in infinite time), contrary to the $\\Lambda$CDM model in which the energy density tends towards a constant (de Sitter era).\n\nOn the small (galactic) scales, the logotropic model is able to solve some of the problems encountered by the $\\Lambda$CDM model \\cite{epjp,lettre}. In particular, it is able to account, without any free parameter, for the constant surface density of the dark matter halos, for their mass-radius relation, and for the Tully-Fisher relation.\n\nIn this paper, we explore other consequences of this model. 
By advocating a form of ``strong cosmic coincidence'', stating that the present value of the dark energy density $\\rho_{\\rm de,0}$ is equal to the fundamental constant $\\rho_\\Lambda$ appearing in the logotropic model, we predict that the present proportion of dark energy in the Universe is $\\Omega_{\\rm de,0}=e\/(1+e)=0.731$, which is close to the observed value $0.691$ \\cite{planck2016}. The consequences of this result, which implies that our epoch is very special in the history of the Universe, are intriguing and related to a form of anthropic cosmological principle \\cite{barrow}.\n\nWe also remark that the universal surface density of dark matter halos (found from the observations \\cite{donato} and predicted by our model \\cite{epjp,lettre}) and the surface density of the Universe are of the same order of magnitude as the surface density of the electron. This makes a curious connection between cosmological and atomic scales. Exploiting this coincidence, we can relate the Hubble constant, the electron mass and the electron charge to the cosmological constant $\\Lambda$. We also argue that the famous numbers $137$ (fine-structure constant) and $123$ (logotropic constant) may actually represent the same thing. This may be a hint towards a theory unifying microphysics and cosmophysics. Speculations are made in the Appendices to try to relate these interconnections to a form of holographic principle \\cite{bousso} stating that the entropy of the electron, the entropy of dark matter halos, and the entropy of the Universe scale like their area, as in the case of the entropy of black holes \\cite{bekenstein,hawking}.\n\n\n\n\n\\section{The logotropic model}\n\\label{sec_lm}\n\n\\subsection{Unification of dark matter and dark energy}\n\nThe Friedmann equations for a flat universe without cosmological constant are \\cite{weinbergbook}:\n\\begin{equation}\n\\frac{d\\epsilon}{dt}+3\\frac{\\dot a}{a}(\\epsilon+P)=0,\\quad H^2=\\left (\\frac{\\dot a}{a}\\right )^2=\\frac{8\\pi G}{3c^2}\\epsilon,\n\\label{lm1}\n\\end{equation}\nwhere $\\epsilon(t)$ is the energy density of the Universe, $P(t)$ is the pressure, $a(t)$ is the scale factor, and $H=\\dot a\/a$ is the Hubble parameter.\n\n\n\nFor a relativistic fluid experiencing an adiabatic evolution such that $Td(s\/\\rho)=0$, the first law of thermodynamics reduces to \\cite{weinbergbook}:\n\\begin{equation}\nd\\epsilon=\\frac{P+\\epsilon}{\\rho}d\\rho,\n\\label{lm2}\n\\end{equation}\nwhere $\\rho$ is the rest-mass density of the Universe. Combined with the equation of continuity (\\ref{lm1}), we get\n\\begin{equation}\n\\frac{d\\rho}{dt}+3\\frac{\\dot a}{a}\\rho=0 \\Rightarrow \\rho=\\frac{\\rho_0}{a^3},\n\\label{lm3}\n\\end{equation}\nwhere $\\rho_0$ is the present value of the rest-mass density (the present value of the scale factor is taken to be $a=1$). This equation, which expresses the conservation of the rest-mass, is valid for an arbitrary equation of state.\n\n\nFor an equation of state specified in the form $P=P(\\rho)$, Eq. (\\ref{lm2}) can be integrated to obtain the relation between the energy density $\\epsilon$ and the rest-mass density. We obtain \\cite{epjp}:\n\\begin{equation}\n\\epsilon=\\rho c^2+\\rho\\int^{\\rho}\\frac{P(\\rho')}{{\\rho'}^2}\\, d\\rho'=\\rho c^2+u(\\rho).\n\\label{lm4}\n\\end{equation}\nWe note that $u(\\rho)$ can be interpreted as an internal energy density \\cite{epjp}. 
Therefore, the\nenergy density $\\epsilon$ is the sum of the rest-mass energy $\\rho c^2$ and the\ninternal energy $u(\\rho)$.\n\n\n\n\\subsection{The logotropic dark fluid}\n\\label{sec_ldf}\n\nWe assume that the Universe is filled with a single dark fluid described by\nthe\nlogotropic equation of state \\cite{epjp}:\n\\begin{equation}\nP=A\\ln\\left (\\frac{\\rho}{\\rho_P}\\right ),\n\\label{lm5}\n\\end{equation}\nwhere $\\rho_P=c^5\/\\hbar G^2=5.16\\times 10^{99}\\, {\\rm g\\, m^{-3}}$ is the Planck\ndensity and $A$ is a new fundamental constant\nof physics, with the dimension of an energy density, which is the\ncounterpart of the cosmological constant $\\Lambda$ in the $\\Lambda$CDM model\n(see below).\nUsing Eqs. (\\ref{lm4}) and (\\ref{lm5}), the relation\nbetween the energy density and the rest-mass density is\n\\begin{equation}\n\\epsilon=\\rho c^2-A\\ln \\left (\\frac{\\rho}{\\rho_P}\\right )-A=\\rho c^2+u(\\rho).\n\\label{lm6}\n\\end{equation}\nThe energy density is the\nsum of two\nterms: a rest-mass energy term $\\rho c^2=\\rho_0c^2\/ a^{3}$ that mimics the\nenergy density $\\epsilon_{\\rm m}$ of dark matter and\nan internal energy term $u(\\rho)=-A\\ln \\left ({\\rho}\/{\\rho_P}\\right )-A\n=-P(\\rho)-A=3A\\ln a-A\\ln(\\rho_0\/\\rho_P)-A$\nthat mimics the\nenergy density $\\epsilon_{\\rm de}$ of dark energy. This\ndecomposition\nleads to a natural, and physical, unification of dark matter and dark energy and\nelucidates their\nmysterious nature. \n\nSince, in our model, the rest-mass energy of the dark fluid mimics dark matter,\nwe\nidentify $\\rho_0c^2$ with the present energy density of dark matter. We thus\nset $\\rho_0c^2=\\Omega_{\\rm m,0}\\epsilon_0$,\nwhere $\\epsilon_0\/c^2={3H_0^2}\/{8\\pi G}$ is the\npresent energy density of the Universe and \n$\\Omega_{\\rm m,0}$ is the present fraction of dark matter (we also include\nbaryonic\nmatter). As a result, the present internal energy of the dark\nfluid, $u_0=\\epsilon_0-\\rho_0c^2$, is identified with the present\ndark energy density $\\epsilon_{\\rm de,0}=\\Omega_{\\rm de,0}\\epsilon_0$\nwhere\n$\\Omega_{\\rm de,0}=1-\\Omega_{\\rm m,0}$ is the present\nfraction of dark energy. Applying Eq. (\\ref{lm6}) at the present epoch ($a=1$),\nwe\nobtain the identity\n\\begin{equation}\nA=\\frac{\\epsilon_{\\rm de,0}}{\\ln\\left(\\frac{\\rho_Pc^2}{\\epsilon_{\\rm\nde,0}}\\right\n)+\\ln\\left (\\frac{\\Omega_{\\rm de,0}}{1-\\Omega_{\\rm de,0}}\\right )-1}.\n\\label{lm7}\n\\end{equation}\nAt that stage, we can have two points of view. We can consider that this\nequation determines the constant $A$ as a function of\n$\\epsilon_{0}$ and $\\Omega_{\\rm de,0}$ that are both obtained from the\nobservations \\cite{planck2016}. This allows us to determine the value of $A$.\nThis is the\npoint of view that we have adopted in our previous papers \\cite{epjp,lettre} and\nthat we adopt in Sec. \\ref{sec_B} below. However, in the following section,\nwe present another point of view leading to an intriguing result.\n\n\n\n\\subsection{Strong cosmic coincidence and\nprediction of $\\Omega_{\\rm de,0}$}\n\nLet us recall that, in our model, $A$ is considered as a fundamental constant\nwhose value is fixed by Nature. As a result, Eq. (\\ref{lm7}) relates\n$\\Omega_{\\rm de,0}$ to $\\epsilon_{0}$ for a given value of $A$. A priori, we\nhave two unknowns for just one equation. However, we can\nobtain the value of $\\Omega_{\\rm de,0}$ by the following argument. 
\n\n\nWe can always write the constant $A$ under the form\n\\begin{equation}\nA=\\frac{\\rho_{\\Lambda}c^2}{\\ln\\left(\\frac{\\rho_P}{\\rho_{\\Lambda}}\\right )}.\n\\label{lm8}\n\\end{equation}\nThis is just a change of notation. Eq. (\\ref{lm8}) defines a new constant, the cosmological density $\\rho_{\\Lambda}$, in place of $A$. From the cosmological density $\\rho_{\\Lambda}$, we can define an effective cosmological constant $\\Lambda$ by\\footnote{We stress that our model is different from the $\\Lambda$CDM model so that $\\Lambda$ is fundamentally different from Einstein's cosmological constant \\cite{einsteincosmo}. However, it is always possible to introduce from the constant $A$ an effective cosmological density $\\rho_{\\Lambda}$ and an effective cosmological constant $\\Lambda$ by Eqs. (\\ref{lm8}) and (\\ref{lm9}).}\n\\begin{equation}\n\\rho_{\\Lambda}=\\frac{\\Lambda}{8\\pi G}.\n\\label{lm9}\n\\end{equation}\nAgain, this is just a change of notation. Therefore, the fundamental constant of our model is either $A$, $\\rho_{\\Lambda}$ or $\\Lambda$ (equivalently). We now advocate a form of ``strong cosmic coincidence''. We assume that the present value of the dark energy density is equal to $\\rho_{\\Lambda}c^2$, i.e.,\n\\begin{equation}\n\\epsilon_{\\rm de,0}=\\rho_{\\Lambda}c^2.\n\\label{lm10}\n\\end{equation}\nSince, in the $\\Lambda$CDM model, $\\epsilon_{\\rm de}$ is a constant usually measured at the present epoch, our postulate implies that $\\rho_{\\Lambda}c^2$ coincides with the cosmological density in the $\\Lambda$CDM model and that $\\Lambda$, as defined by Eq. (\\ref{lm9}), coincides with the ordinary cosmological constant. This is why we have used the same notations. Now, comparing Eqs. (\\ref{lm7}), (\\ref{lm8}) and (\\ref{lm10}), we obtain $\\ln\\left \\lbrack \\Omega_{\\rm de,0}\/(1-\\Omega_{\\rm de,0})\\right \\rbrack-1=0$, i.e., $\\Omega_{\\rm de,0}\/(1-\\Omega_{\\rm de,0})=e$, which determines $\\Omega_{\\rm de,0}$. We find that\n\\begin{equation}\n\\Omega_{\\rm de,0}^{\\rm th}=\\frac{e}{1+e}\\simeq 0.731,\n\\label{lm11}\n\\end{equation}\nwhich is close to the observed value $\\Omega_{\\rm de,0}^{\\rm obs}=0.691$ \\cite{planck2016}. This agreement is puzzling. It relies on the ``strong cosmic coincidence'' of Eq. (\\ref{lm10}), implying that our epoch is very special. This is a form of anthropic cosmological principle \\cite{barrow}. This may also correspond to a fixed point of our model. In order to avoid philosophical issues, in the following, we adopt the more conventional point of view discussed at the end of Sec. \\ref{sec_ldf}.\n\n\n\n\\subsection{The logotropic constant $B$}\n\\label{sec_B}\n\n\n\nWe can rewrite Eq. (\\ref{lm8}) as\n\\begin{equation}\nA=B\\rho_{\\Lambda}c^2\\qquad {\\rm with}\\qquad B=\\frac{1}{\\ln\\left({\\rho_P}\/{\\rho_{\\Lambda}}\\right )}.\n\\label{lm12}\n\\end{equation}\nAgain, this is just a change of notation defining the dimensionless number $B$. We shall call it the logotropic constant since it is equal to the inverse of the logarithm of the cosmological density normalized by the Planck density (see Appendix \\ref{sec_const}). We note that $A$ can be expressed in terms of $B$ (see below) so that the fundamental constant of our model is either $A$, $\\rho_{\\Lambda}$, $\\Lambda$, or $B$. 
In the following, we shall express all the results in terms of $B$.\nFor example, the relation (\\ref{lm6}) between the energy density and the scale\nfactor can be rewritten as \n\\begin{equation}\n\\frac{\\epsilon}{\\epsilon_0}=\\frac{\\Omega_{\\rm\nm,0}}{a^3}+(1-\\Omega_{\\rm m,0})(1+3B\\ln\na).\n\\label{lm13}\n\\end{equation}\nCombined with the Friedmann equation (\\ref{lm1}) this equation determines the\nevolution of the scale factor $a(t)$ of the Universe in the logotropic model.\nThis evolution has been studied in detail\nin \\cite{epjp,lettre,jcap}. \n\n\n\n{\\it Remark:} Considering Eq. (\\ref{lm13}), we see that the\n$\\Lambda$CDM model is recovered for $B=0$.\nAccording to Eq. (\\ref{lm12}) this implies that\n$\\rho_P\\rightarrow +\\infty$, i.e., $\\hbar\\rightarrow 0$. Therefore, the \n$\\Lambda$CDM model corresponds to the semiclassical limit of the logotropic\nmodel. The fact that $B$ is intrinsically nonzero implies that\nquantum mechanics ($\\hbar\\neq 0$) plays some role in our model in addition to\ngeneral relativity. This may\nsuggest a link with a theory of quantum gravity. \n\n\n\n\\subsection{The value of $B$ from the\nobservations}\n\\label{sec_Bobs}\n\n\n\nThe fundamental constant ($A$, $\\rho_{\\Lambda}$, $\\Lambda$,\nor $B$) appearing in our model can be determined from the\nobservations by using Eq. (\\ref{lm7}). We take $\\Omega_{\\rm de,0}=0.6911$ and\n $H_0=2.195\\times 10^{-18}\\, {\\rm\ns^{-1}}$ \\cite{planck2016}. This implies $\\epsilon_0\/c^2=3H_0^2\/8\\pi\nG=8.62\\times 10^{-24}\\, {\\rm g\\, m^{-3}}$ and $\\epsilon_{\\rm\nde,0}\/c^2=\\Omega_{\\rm\nde,0}\\epsilon_0\/c^2=5.96\\times 10^{-24}\\, {\\rm g\\, m^{-3}}$. Since $\\ln\\left\n\\lbrack \\Omega_{\\rm de,0}\/(1-\\Omega_{\\rm de,0})\\right \\rbrack-1=-0.195$\nis small as\ncompared to $\\ln(\\rho_Pc^2\/\\epsilon_{\\rm de,0})=283$, we can write in very\ngood approximation $A$ as in Eq. (\\ref{lm8}) with $\\rho_{\\Lambda}\\simeq\n\\epsilon_{\\rm de,0}\/c^2$ as in Eq. (\\ref{lm10}). Therefore, \n\\begin{equation}\n\\rho_{\\Lambda}=\\frac{3\\Omega_{\\rm\nde,0}H_0^2}{8\\pi G}=5.96\\times 10^{-24}\\, {\\rm g\\, m^{-3}}\n\\label{lm14a}\n\\end{equation}\nand\n\\begin{equation}\n\\Lambda= 3\\Omega_{\\rm\nde,0}H_0^2=1.00\\times 10^{-35}\\, {\\rm s^{-2}}\n\\label{lm14b}\n\\end{equation}\nare approximately equal to\nthe cosmological density and to the cosmological constant in the $\\Lambda$CDM\nmodel. From Eq. (\\ref{lm12}) we get\n\\begin{equation}\nB=\\frac{1}{\\ln(\\rho_P\/\\rho_{\\Lambda})}\\simeq\n\\frac{1}{123\\ln(10)}\\simeq 3.53\\times 10^{-3}.\n\\label{lm15}\n\\end{equation}\nAs discussed in our previous papers \\cite{epjp,lettre,jcap}, $B$ is\nessentially the inverse of\nthe\nfamous number $123$ (see Appendix \\ref{sec_const}). Finally,\n\\begin{equation}\nA=B\\,\\rho_{\\Lambda}c^2=1.89\\times 10^{-9}\n\\, {\\rm g}\\, {\\rm m}^{-1}\\, {\\rm s}^{-2}. \n\\label{lm16}\n\\end{equation}\n\n\n\nFrom now on, we shall view $B$ given by Eq. (\\ref{lm15}) as the fundamental\nconstant of the theory. 
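\n\nThese values are straightforward to reproduce. As a simple numerical cross-check (a minimal sketch, not part of the model itself; it merely evaluates the chain of definitions above in SI units), one may run:\n\\begin{verbatim}\nimport numpy as np\n\nG = 6.674e-11      # gravitational constant (SI)\nc = 2.998e8        # speed of light (m\/s)\nhbar = 1.055e-34   # reduced Planck constant (J s)\nH0 = 2.195e-18     # Hubble constant (1\/s)\nOmega_de = 0.6911\n\nrho_P = c**5\/(hbar*G**2)              # Planck density (kg\/m^3)\nrho_L = 3*Omega_de*H0**2\/(8*np.pi*G)  # Eq. (lm14a), in kg\/m^3\nLam = 3*Omega_de*H0**2                # Eq. (lm14b), in 1\/s^2\nB = 1\/np.log(rho_P\/rho_L)             # Eq. (lm15)\nA = B*rho_L*c**2                      # Eq. (lm16), in kg\/(m s^2)\n\nprint(rho_L*1e3)  # ~5.96e-24 g\/m^3\nprint(Lam)        # ~1.00e-35 s^-2\nprint(B)          # ~3.53e-3\nprint(A*1e3)      # ~1.89e-9 g m^-1 s^-2\n\\end{verbatim}\nUp to the rounding of the input constants, this recovers the numbers quoted above.\n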
Therefore, everything should be expressed in terms of $B$ and the other fundamental constants of physics defining the Planck scales. First, we have\n\\begin{equation}\n\\frac{\\rho_{\\Lambda}}{\\rho_{P}}=\\frac{G\\hbar\\Lambda}{8\\pi c^5}=e^{-1\/B}=1.16\\times 10^{-123}.\n\\label{lm18}\n\\end{equation}\nThen,\n\\begin{equation}\n\\frac{A}{\\rho_{P}c^2}=Be^{-1\/B}=4.08\\times 10^{-126}.\n\\label{lm17}\n\\end{equation}\nThe logotropic equation of state (\\ref{lm5}) can be written as $P\/\\rho_Pc^2=Be^{-1\/B}\\ln(\\rho\/\\rho_P)$. Using Eq. (\\ref{lm10}) and $\\epsilon_{\\rm de,0}=\\Omega_{\\rm de,0}\\epsilon_0$, we get\n\\begin{equation}\n\\frac{\\epsilon_0}{\\rho_{P}c^2}=\\frac{1}{\\Omega_{\\rm de,0}}e^{-1\/B}=1.67\\times 10^{-123}.\n\\label{lm19}\n\\end{equation}\nFinally, using Eq. (\\ref{lm1}),\n\\begin{equation}\nt_P H_0=\\left (\\frac{8\\pi}{3\\Omega_{\\rm de,0}}\\right )^{1\/2}e^{-1\/2B}=1.18\\times 10^{-61},\n\\label{lm20}\n\\end{equation}\nwhere $t_P=(\\hbar G\/c^5)^{1\/2}=5.391\\times 10^{-44}\\, {\\rm s}$ is the Planck time. In the last two expressions, we can either consider that $\\Omega_{\\rm de,0}$ is ``predicted'' by Eq. (\\ref{lm11}) or take its measured value. To the order of accuracy that we consider, this does not change the numerical values.\n\n\n\n\n\\section{Previous predictions of the logotropic model}\n\nThe interest of the logotropic model becomes apparent when it is applied to dark matter halos \\cite{epjp,lettre}. We assume that dark matter halos are described by the logotropic equation of state of Eq. (\\ref{lm5}) with $A=1.89\\times 10^{-9} \\, {\\rm g}\\, {\\rm m}^{-1}\\, {\\rm s}^{-2}$ (or $B=3.53\\times 10^{-3}$). At the galactic scale, we can use Newtonian gravity.\n\n\n\\subsection{Surface density of dark matter halos}\n\n\n\nIt is an empirical fact that the surface density of galaxies has the same value\n\\begin{equation}\n\\Sigma_0^{\\rm obs}\\equiv \\rho_0 r_h\\simeq 295\\, {\\rm g\\, m^{-2}}\\simeq 141\\, M_{\\odot}\/{\\rm pc^2}\n\\label{lm21}\n\\end{equation}\neven if their sizes and masses vary by several orders of magnitude (up to $14$ orders of magnitude in luminosity) \\cite{donato}. Here, $\\rho_0$ is the central density (not to be confused with the present rest-mass density of Sec. \\ref{sec_lm}) and $r_h$ is the halo radius at which the density has decreased by a factor of $4$. The logotropic model predicts that the surface density of the dark matter halos is the same for all the halos (because $A$ is a universal constant) and that it is given by \\cite{epjp,lettre}:\n\\begin{equation}\n\\Sigma_0^{\\rm th}=\\left (\\frac{A}{4\\pi G}\\right )^{1\/2}\\xi_h= \\left (\\frac{B}{32}\\right )^{1\/2}\\frac{\\xi_h}{\\pi}\\frac{c\\sqrt{\\Lambda}}{G},\n\\label{lm22}\n\\end{equation}\nwhere $\\xi_h=5.8458...$ is a pure number arising from the Lane-Emden equation of index $n=-1$ expressing the condition of hydrostatic equilibrium of logotropic spheres.\\footnote{The logotropic spheres \\cite{epjp,lettre}, like the isothermal spheres \\cite{chandra}, have an infinite mass. This implies that the logotropic equation of state cannot describe dark matter halos at infinitely large distances. Nevertheless, it may describe the inner region of dark matter halos and this is sufficient to determine their surface density. The stability of bounded logotropic spheres has been studied in \\cite{logo} by analogy with the stability of bounded isothermal spheres and similar results have been obtained. 
In particular, bounded logotropic spheres are stable provided that the density contrast is not too large.} Numerically,\n\\begin{equation}\n\\Sigma_0^{\\rm th}= 278\\, {\\rm g\\, m^{-2}}\\simeq 133\\, M_{\\odot}\/{\\rm pc^2},\n\\label{lm23}\n\\end{equation}\nwhich is very close to the observational value (\\ref{lm21}). The fact that the surface density of dark matter halos is determined by the effective cosmological constant $\\Lambda$ (usually related to the dark energy) tends to confirm that dark matter and dark energy are just two manifestations of the {\\it same} dark fluid, as we have assumed in our model.\n\n{\\it Remark:} The dimensional term $c\\sqrt{\\Lambda}\/G$ in Eq. (\\ref{lm22}) can be interpreted as representing the surface density of the Universe (see Appendix \\ref{sec_w}). We note that this term alone, $c\\sqrt{\\Lambda}\/G=14200\\, {\\rm g\\, m^{-2}}=6800\\, M_{\\odot}\/{\\rm pc}^2$, is too large to account precisely for the surface density of dark matter halos, so that the prefactor $(B\/32)^{1\/2}(\\xi_h\/\\pi)=0.01955$ is necessary to reduce this number. It is interesting to remark that the term $c\\sqrt{\\Lambda}\/G$ arises from classical general relativity, while the prefactor $\\propto B^{1\/2}$ has a quantum origin, as discussed at the end of Sec. \\ref{sec_B}. Actually, we will see that it is related to the fine-structure constant $\\alpha$ [see Eq. (\\ref{lm30}) below].\n\n\n\\subsection{Mass-radius relation}\n\n\n\nThere are interesting consequences of the preceding result. For logotropic halos, the mass of the halos calculated at the halo radius $r_h$ is given by \\cite{epjp,lettre}:\n\\begin{equation}\nM_h=1.49\\Sigma_0 r_h^2.\n\\label{lm24a}\n\\end{equation}\nThis determines the mass-radius relation of dark matter halos. On the other hand, the circular velocity at the halo radius is $v_h^2=GM_h\/r_h=1.49\\Sigma_0 G r_h$. Since the surface density of the dark matter halos is constant, we obtain\n\\begin{equation}\n\\frac{M_h}{M_{\\odot}}=198 \\left (\\frac{r_h}{{\\rm pc}}\\right )^2,\\qquad \\left (\\frac{v_h}{{\\rm km}\\, {\\rm s}^{-1}}\\right )^2=0.852\\, \\frac{r_h}{\\rm pc}.\n\\label{lm24}\n\\end{equation}\nThe scalings $M_h\\propto r_h^2$ and $v_h^2\\propto r_h$ (and also the prefactors) are consistent with the observations.\n\n\n\n\\subsection{The Tully-Fisher relation}\n\n\nCombining the previous equations, the logotropic model leads to the Tully-Fisher \\cite{tf} relation $v_h^4\\propto M_h$ or, more precisely,\n\\begin{equation}\n\\left (\\frac{M_b}{v_h^4}\\right )^{\\rm th}=\\frac{f_b}{1.49\\Sigma_0^{\\rm th} G^2}=46.4\\, M_{\\odot}{\\rm km}^{-4}{\\rm s}^4,\n\\label{lm25}\n\\end{equation}\nwhere $f_b=M_b\/M_h\\sim 0.17$ is the cosmic baryon fraction \\cite{mcgaugh}. The predicted value from Eq. (\\ref{lm25}) is close to the observed one, $\\left ({M_b}\/{v_h^4}\\right )^{\\rm obs}=47\\pm 6\\, M_{\\odot}{\\rm km}^{-4}{\\rm s}^4$ \\cite{mcgaugh}.\n\n\n\n{\\it Remark:} The Tully-Fisher relation is sometimes justified by the MOND (Modified Newtonian dynamics) theory \\cite{mond}, which predicts a relation of the form $v_h^4=Ga_0 M_b$ between the asymptotic circular velocity and the baryon mass, where $a_0$ is a critical acceleration. Our results imply $a_0^{\\rm th}=1.62\\times 10^{-10}\\, {\\rm m}\\, {\\rm s}^{-2}$, which is close to the value $a_0^{\\rm obs}=(1.3\\pm 0.3)\\times 10^{-10}\\, {\\rm m}\\, {\\rm s}^{-2}$ obtained from the observations \\cite{mcgaugh}. Combining Eqs. 
(\\ref{lm24a}) and (\\ref{lm25}), we first get $a_0^{\\rm th}=(1.49\/f_b)\\Sigma_0^{\\rm th}G=GM_h\/(f_br_h^2)$, which shows that $a_0$ can be interpreted as the surface gravity of the galaxies $G\\Sigma_0$ (which corresponds to Newton's acceleration $GM_h\/r_h^2$) or in terms of the surface density of the Universe (see Appendix \\ref{sec_abh}). Then, using Eqs. (\\ref{lm14b}) and (\\ref{lm22}), we obtain $a_0^{\\rm th}=({1.49}\/{f_b})({B}\/{32})^{1\/2}({\\xi_h}\/{\\pi})c\\sqrt{\\Lambda}\\simeq H_0 c\/4$, which explains why $a_0$ is of the order of $H_0 c$. We emphasize, however, that we do not use the MOND theory in our approach and that the logotropic model assumes the existence of a dark fluid.\n\n\n\n\\subsection{The mass $M_{300}$}\n\n\n\n\n\nThe logotropic equation of state also explains the observation of Strigari {\\it et al.} \\cite{strigari} that all the dwarf spheroidals (dSphs) of the Milky Way have the same total dark matter mass $M_{300}$ contained within a radius $r_u=300\\, {\\rm pc}$, namely $M_{300}^{\\rm obs}\\simeq 10^7\\, M_{\\odot}$. The logotropic model predicts the value \\cite{epjp,lettre}:\n\\begin{equation}\nM_{300}^{\\rm th}=\\frac{4\\pi \\Sigma_0^{\\rm th} r_u^2}{\\xi_h\\sqrt{2}}=1.82\\times 10^{7}\\, M_{\\odot},\n\\label{lm26}\n\\end{equation}\nwhich is in very good agreement with the observational value.\n\n\n\\section{A curious connection between atomic and cosmological scales}\n\\label{sec_curious}\n\n\n\n\\subsection{The surface density of the electron}\n\\label{sec_sde}\n\n\n\nThe classical radius of the electron $r_e$ can be obtained qualitatively by writing that the electrostatic energy of the electron, $e^2\/r_e$, is equal to its rest-mass energy $m_e c^2$. Recalling the value of the charge of the electron, $e=4.80\\times 10^{-13}\\, {\\rm g^{1\/2}\\, m^{3\/2}\\, s^{-1}}$, and its mass, $m_e=9.11\\times 10^{-28}\\, {\\rm g}$, we obtain $r_e=e^2\/(m_ec^2)=2.82\\times 10^{-15}\\, {\\rm m}$. As a result, the surface density of the electron is\\footnote{We note that the Thomson cross-section $\\sigma=(8\\pi\/3)(e^2\/m_e c^2)^2$ can be written as $\\sigma=(8\\pi\/3)r_e^2$, giving a physical meaning to the classical electron radius $r_e$. We also note that $r_e$ can be written as $r_e=\\alpha\\hbar\/(m_ec)$, where $\\lambda_C=\\hbar\/(m_e c)$ is the Compton wavelength of the electron and $\\alpha$ is the fine-structure constant [see Eq. (\\ref{const1})]. Similarly, $\\Sigma_e=(1\/\\alpha^2)m_e^3c^2\/\\hbar^2$.}\n\\begin{equation}\n\\Sigma_e=\\frac{m_e}{r_e^2}=\\frac{m_e^3c^4}{e^4}=115\\, {\\rm g\/m^2}= 54.9\\, M_{\\odot}\/{\\rm pc^2},\n\\label{lm27}\n\\end{equation}\nwhich is of the same order of magnitude as the surface density of dark matter halos from Eq. (\\ref{lm21}). This coincidence is amazing in view of the different scales (atomic versus cosmological) involved. More precisely, we find $\\Sigma_e=\\sigma\\Sigma_0^{\\rm th}$ with $\\sigma\\simeq 0.413$. Of course, the value of $\\sigma$ depends on the precise manner used to define the surface density of the electron, or its radius, but the important point is that this number is of order unity. 
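\n\nThis comparison can be condensed into a few lines. The following Python sketch (illustrative only; the inputs are the values quoted above, converted to SI) evaluates the three surface densities at once:\n\\begin{verbatim}\nimport numpy as np\n\nG = 6.674e-11; c = 2.998e8\nm_e = 9.109e-31    # electron mass (kg)\nr_e = 2.818e-15    # classical electron radius (m)\nLam = 1.00e-35     # effective cosmological constant (1\/s^2)\nA = 1.89e-12       # logotropic constant A, in kg\/(m s^2)\nxi_h = 5.8458\n\nSigma_e = m_e\/r_e**2                   # electron, Eq. (lm27)\nSigma_0 = np.sqrt(A\/(4*np.pi*G))*xi_h  # dark matter halos, Eq. (lm22)\nSigma_U = c*np.sqrt(Lam)\/G             # the Universe\n\nprint(Sigma_e, Sigma_0, Sigma_U)       # ~0.115, ~0.278, ~14.2 kg\/m^2\nprint(Sigma_e\/Sigma_0)                 # sigma ~ 0.41\n\\end{verbatim}\nDespite spanning atomic to cosmological scales, the three values agree to within about two orders of magnitude, and the ratio $\\Sigma_e\/\\Sigma_0^{\\rm th}$ yields the number $\\sigma$ of order unity quoted above.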
\n\n\n\n\\subsection{Relation between $\\alpha$ and $B$}\n\\label{sec_Ba}\n\n\n\nBy matching the two formulae (\\ref{lm22}) and (\\ref{lm27}),\nwriting $\\Sigma_e=\\sigma\\Sigma_0^{\\rm th}$, we get\n\\begin{equation}\n\\Lambda=\\frac{32\\pi^2}{B\\xi_h^2\\sigma^2}\n\\frac{m_e^6c^6G^2}{e^8}=\\frac{32\\pi^2}{B\\xi_h^2\\sigma^2\\alpha^4}\n\\frac{m_e^6c^2G^2}{\\hbar^4},\n\\label{lm28}\n\\end{equation}\nwhere we have introduced the fine-structure constant $\\alpha$ in the\nsecond equality (see Appendix \\ref{sec_const}).\nThis expression provides a curious relation between the cosmological constant,\nthe mass of the electron and its charge. This relation is similar to\nWeinberg's empirical relation (see Appendix\n\\ref{sec_w}) which can be written as [combining Eqs. (\\ref{lm14b}) and\n(\\ref{w4})]\n\\begin{equation}\n\\Lambda=192\\pi^2\\mu^2\\Omega_{\\rm de,0}\\frac{m_e^6c^6G^2}{e^8},\n\\label{w4b}\n\\end{equation}\nwhere $\\mu\\simeq 3.42$. Note that in our formula (\\ref{lm28}),\n$\\Lambda$ appears two times: on the left hand side and in $B$ (which depends\nlogarithmically on $\\Lambda$). This will have important consequences in the\nfollowing.\n\n\nB\\\"ohmer and Harko \\cite{bhLambda}, by a completely different approach, found a\nsimilar relation\\footnote{A closely related formula, involving the Hubble\nconstant instead of the cosmological constant, was first found by Stewart\n\\cite{stewart} in 1931 by trial and error.}\n\\begin{equation}\n\\Lambda=\\nu \\frac{\\hbar^2 G^2 m_e^6 c^8}{e^{12}}=\\frac{\\nu}{\\alpha^6} \\frac{G^2\nm_e^6 c^2}{\\hbar^{4}},\n\\label{lm29}\n\\end{equation}\nwhere $\\nu\\simeq 0.816$ is of order unity. Their result can be\nobtained as follows. They first introduce a minimum\nmass $m_{\\Lambda}\\sim\\hbar\\sqrt{\\Lambda}\/c^2$ interpreted as being the mass of\nthe elementary particle of dark energy, called the cosmon. Then, they define a\nradius $R$ by the relation $m_{\\Lambda}\\sim \\rho_{\\Lambda} R^3$ where\n$\\rho_{\\Lambda}= \\Lambda\/8\\pi G$ is the cosmological density considered as being\nthe lowest density in the Universe. Finally, they remark that $R$ has\ntypically the same value as the classical radius of the electron \n$r_e=e^2\/m_ec^2$. Matching $R$ and $r_e$ leads to the scaling of Eq.\n(\\ref{lm29}). We have then added a prefactor $\\nu$ and adjusted its\nvalue\nin order to exactly obtain the measured value of the cosmological constant\n\\cite{planck2016}.\nSince the approach of B\\\"ohmer and Harko \\cite{bhLambda} is essentially\nqualitative, and depends on the precise manner used to define the radius of\nthe\nelectron, their result can be at best valid up to a constant of order unity.\n\nWe would like now to compare the estimates from Eqs. (\\ref{lm28}) and\n(\\ref{lm29}). At that\nstage, we can have two points of view. If we consider that comparing the\nprefactors is meaningless because our approach can only provide ``rough'' orders\nof\nmagnitude, we conclude that Eqs. (\\ref{lm28}) and\n(\\ref{lm29}) are\nequivalent, and that they are also equivalent to Weinberg's empirical\nrelation (\\ref{w4}). 
Alternatively, if we take the prefactors seriously into\naccount (in particular the presence of $B$ which depends on $\\Lambda$) and match\nthe formulae (\\ref{lm28}) and\n(\\ref{lm29}), we find an interesting relation between the\nfine-structure constant\n$\\alpha$ and the logotropic constant $B$:\n\\begin{equation}\n\\alpha=\\left (\\frac{\\nu}{32}\\right\n)^{1\/2}\\frac{\\xi_h\\sigma}{\\pi}\\sqrt{B}\\simeq 0.123 \\sqrt{B}.\n\\label{lm30}\n\\end{equation}\nTherefore, the fine-structure constant (electron charge normalized by the\nPlanck charge) is determined by the logotropic constant $B$\n(cosmological density normalized by the Planck density) by a relation of the\nform $\\alpha\\propto B^{1\/2}$. This makes a connection between atomic scales and\ncosmological scales. This also suggests that the famous numbers\n$137$ and $123$\n(see Appendix \\ref{sec_const}) are related to each other, or may even represent\nthe same thing. From Eq. (\\ref{lm30}), we have\\footnote{We note that the\nprefactors in\nEqs. (\\ref{lm30}) and (\\ref{lm31}) appear to be close to $123\/1000$ and\n$123\/10$, where the number $123$ appears again (!). We do not know whether this\nis\nfortuitous or if this bears a deeper significance than is apparent at first\nsight.} \n\\begin{equation}\n137\\simeq 12.3 \\sqrt{123}.\n\\label{lm31}\n\\end{equation}\n\n\n{\\it Remark:} the logotropic constant $B$ is related to the effective\ncosmological constant $\\Lambda$ by [see Eq.\n(\\ref{lm18})]\n\\begin{equation}\nB=\\frac{1}{\\ln\\left (\\frac{8\\pi c^5}{G\\hbar\\Lambda}\\right )}.\n\\label{lm31b}\n\\end{equation}\nUsing Eqs. (\\ref{lm30}) and (\\ref{lm31b}), we can express the fine-structure\nconstant $\\alpha$ as a function of the effective cosmological\nconstant $\\Lambda$ or, using Eq. (\\ref{lm20}), as a function of the age of the\nUniverse $t_{\\Lambda}=1\/H_0$ as\n\\begin{equation}\n\\alpha=\\frac{0.123}{\\ln\\left (\\frac{8\\pi c^5}{G\\hbar\\Lambda}\\right\n)^{1\/2}}=\\frac{0.123}{\\sqrt{2}\\ln\\left \\lbrack \\left (\\frac{8\\pi}{3\\Omega_{\\rm\nde,0}}\\right )^{1\/2}\\frac{t_{\\Lambda}}{t_P}\\right\\rbrack^{1\/2}}.\n\\label{lm31c}\n\\end{equation}\nWe emphasize the scaling $1\/\\alpha\\propto (\\ln t_{\\Lambda})^{1\/2}$. It is\ninteresting to note that similar relations have been introduced in the past\nfrom pure numerology (see \\cite{kragh}, P. 428). These relations suggest\nthat the fundamental constants may change with time as argued by Dirac\n\\cite{dirac1,dirac2}. \n\n\n\\subsection{The mass and the charge of the electron\nin terms of $B$}\n\nUsing Eqs. 
(\\ref{lm9}), (\\ref{lm18}), (\\ref{lm28}) and (\\ref{lm30}), we find\nthat the mass and the charge of the electron\nare determined by the logotropic\nconstant $B$ according to\n\\begin{eqnarray}\n\\frac{m_e}{M_P}=\\left (\\frac{8\\pi}{\\nu}\\right )^{1\/6}\\left\n(\\frac{\\nu}{32}\\right\n)^{1\/2}\\frac{\\xi_h\\sigma}{\\pi}\\sqrt{B}e^{-1\/(6B)}\\nonumber\\\\\n=0.217\\sqrt{B}e^{\n-1\/(6B) } =4.18\\times 10^{-23},\n\\label{lm32}\n\\end{eqnarray}\n\\begin{equation}\n\\frac{e^2}{q_P^2}=\\left (\\frac{\\nu}{32}\\right\n)^{1\/2}\\frac{\\xi_h\\sigma}{\\pi}\\sqrt{B}=0.123\\sqrt{B}=7.29\\times 10^{-3},\n\\label{lm33}\n\\end{equation}\nwhere $M_P=(\\hbar c\/G)^{1\/2}=2.18\\times 10^{-5}\\, {\\rm g}$ is the\nPlanck mass and $q_P=(\\hbar c)^{1\/2}=5.62\\times 10^{-12}\\, {\\rm\ng^{1\/2}\\, m^{3\/2}\\, s^{-1}}$ is the Planck charge.\nThese relations suggest that the mass and the charge of the electron (atomic\nscales) are determined by the effective cosmological constant\n$\\Lambda$ or $B$ (cosmological scales). We emphasize the presence of\nthe exponential factor $e^{-1\/(6B)}$ in Eq. (\\ref{lm32}) explaining why the\nelectron mass is much smaller than the Planck mass while the electron charge is\ncomparable to the Planck charge. \n\n\\subsection{A prediction of $B$}\n\n\nIf we match Eqs. \n(\\ref{lm22}) and (\\ref{w3}), or equivalently Eqs. \n(\\ref{lm28}) and (\\ref{w4b}), we obtain\n\\begin{equation}\nB^{\\rm app}=\\frac{1}{6\\lambda^2\\xi_h^2\\Omega_{\\rm\nde,0}}.\n\\label{w5}\n\\end{equation}\nTaking $\\lambda^{\\rm app}=1$ (since we cannot predict its value) and\n$\\Omega_{\\rm de,0}^{\\rm th}=e\/(1+e)$ [see Eq. (\\ref{lm11})], we get $B^{\\rm\napp}=6.67\\times 10^{-3}$ instead of $B=3.53\\times\n10^{-3}$. We recall\nthat the value of $B$ was obtained in Sec. \\ref{sec_Bobs} from the\nobservations.\nOn the other hand, Eq. (\\ref{w5}) gives the correct order of magnitude of $B$\nwithout any reference to observations, up to a dimensionless constant\n$\\lambda\\simeq 1.41$ of order unity.\nConsidering that $B$ is predicted by Eq.\n(\\ref{w5}) implies that we can predict the values of $\\Lambda$, $H_0$, $\\alpha$,\n$m_e$ and $e$ without reference to observations, up to dimensionless constants\n$\\lambda\\simeq 1.41$, $\\nu\\simeq 0.816$ and $\\sigma\\simeq 0.413$\nof order unity. We note, however, that even if these dimensionless\nconstants ($\\lambda$, $\\nu$, $\\sigma$) are of order unity, their precise values\nare of importance since $B$ usually\nappears in exponentials like in Eqs. (\\ref{lm18}), (\\ref{lm20}) and\n(\\ref{lm32}). \n\n\n\n\\section{Conclusion}\n\nIn this paper, we have developed the logotropic model introduced in\n\\cite{epjp,lettre}. In this model, dark matter corresponds to the rest mass\nenergy of a dark fluid and dark energy corresponds to its internal energy. The\n$\\Lambda$CDM model may be interpreted as the semiclassical limit\n$\\hbar\\rightarrow 0$ of the logotropic model. We have first recalled that the \nlogotropic model is able to predict (without free parameter) the universal value\nof the surface density of dark matter halos $\\Sigma_0$, their mass-radius\nrelation\n$M_h-r_h$, the Tully-Fisher relation $M_b\\sim v_h^4$ and the value of the mass\n$M_{300}$ of dSphs. Then, we have argued that it also predicts the value of the\npresent fraction of dark energy $\\Omega_{\\rm de,0}$. This arises from a\nsort of ``strong cosmic coincidence'' but this could also correspond to a fixed\npoint of the model. 
Finally, we have observed that the surface density of the dark matter halos $\\Sigma_0$ is of the same order as the surface density of the Universe $\\Sigma_\\Lambda$ and of the same order as the surface density of the electron $\\Sigma_e$. This makes an empirical connection between atomic physics and cosmology. From this connection, we have obtained a relation between the fine-structure constant $\\alpha\\sim 1\/137$ and the logotropic constant $B\\sim 1\/123$. We have also expressed the mass $m_e$ and the charge $-e$ of the electron as a function of $B$ (or as a function of the effective cosmological constant $\\Lambda$). Finally, we have obtained a prediction of the order of magnitude of $B$ independent of the observations. In a sense, our approach, which expresses the mass and the charge of the electron in terms of the cosmological constant, is a continuation of the program initiated by Eddington \\cite{eddington} in his quest for a `{\\it Fundamental Theory}' of the physical world, in which the basic interaction strengths and elementary particle masses would be predicted entirely combinatorially by simple counting processes \\cite{barrow}. In the Appendices, we try to relate these interconnections to a form of holographic principle \\cite{bousso} (of course not known at the time of Eddington) stating that the entropy of the electron, of dark matter halos, and of the Universe scales like their area, as in the case of black holes \\cite{bekenstein,hawking}.\n\n\nThis paper has demonstrated that physics is full of ``magic'' and mysterious relations that are still not fully understood (one of them being the empirical Weinberg relation). Hopefully, a contribution of this paper is to reveal these ``mysteries'' and to propose some leads that may stimulate further research towards their elucidation.\n\n\n\\section{Introduction}\nThe isolation and control of the number of carriers in single and few layer graphene flakes\\cite{Netal04t,Netal05t} has led to a large research activity exploring all aspects of these materials\\cite{NGPNG09}. Among others, the application of graphene to spintronic devices\\cite{HGNSB06t,CCF07,NG07,HJPJW07t,TTVJJvW08t,WPLCWSK08t,Petal09} and to spin qubits\\cite{TBLB07,FTL09,WYMK09} is being intensively studied. The understanding of these devices requires knowledge of the electronic spin-orbit interaction. In principle, this interaction turns single layer graphene into a topological insulator\\cite{KM05}, which shows a bulk gap and edge states at all boundaries. The magnitude of the spin-orbit coupling in single layer graphene has been studied\\cite{HGB06,HMING06t,YYQZF07,GKEAF09}. The calculated couplings are small, typically below 0.1 K. The observed spin relaxation\\cite{TTVJJvW08t,Jetal09} suggests the existence of stronger mechanisms which lead to the precession of the electron spins, like impurities or lattice deformations\\cite{NG09,HGB09,EKGF09}.\n\nBilayer graphene is interesting because, among other properties, a gap can be induced by electrostatic means, leading to new ways for the confinement of electrons\\cite{MF06}. The spin-orbit interactions which exist in single layer graphene modulate the gap of a graphene bilayer\\cite{GM09}. 
The unit cell of bilayer graphene contains four carbon atoms, and there are more possible spin-orbit couplings than in single layer graphene.\n\nIn the following, we analyze the intrinsic and extrinsic spin-orbit couplings in bilayer graphene, using a tight binding model and describing the relativistic effects responsible for the spin-orbit interaction by a $\\vec{{\\bf L}}\\vec{{\\bf S}}$ intraatomic coupling. We use the similarities between the electronic bands of a graphene bilayer and the bands of three dimensional graphite with Bernal stacking to generalize the results to the latter.\n\n\\section{The model}\nWe describe the electronic bands of a graphene bilayer using a tight binding model, with four orbitals, the $2s$ and the three $2p$ orbitals, per carbon atom. We consider hoppings between nearest neighbors in the same plane, and between nearest neighbors and next nearest neighbors in adjacent layers, see \\cite{CLM09}. The coupling between each pair of atoms is parametrized by four hoppings, $V_{ss}, V_{sp}, V_{pp \\pi}$ and $V_{pp \\sigma}$. The model also includes two intraatomic levels, $\\epsilon_s$ and $\\epsilon_p$, and the intraatomic spin-orbit coupling\n\\begin{align}\n{\\cal H}_{so} &\\equiv \\Delta_{so} \\sum_i \\vec{\\bf L}_i \\vec{\\bf S}_i\n\\end{align}\nThe parameters used to describe the $\\pi$ bands of graphite\\cite{M57,SW58}, $\\gamma_0 , \\gamma_1 , \\gamma_2 , \\gamma_3 , \\gamma_4 , \\gamma_5$ and $\\Delta$, can be derived from this set of parameters. We neglect the difference between the various hoppings between atoms which are next nearest neighbors in adjacent layers, which are responsible for the difference between the parameters $\\gamma_3$ and $\\gamma_4$. We also set the difference in onsite energies between the two inequivalent atoms, $\\Delta$, to zero. The parameters $\\gamma_2$ and $\\gamma_5$ are related to hoppings between next nearest neighbor layers, and they do not play a role in the description of the bilayer. The total number of parameters is 15, although, without loss of generality, we set $\\epsilon_p = 0$. We do not consider hoppings and spin-orbit interactions which include $d$ levels, although they can contribute to the total magnitude of the spin-orbit couplings\\cite{MY62,GKEAF09}. The effects mediated by $d$ orbitals do not change the order of magnitude of the couplings in single layer graphene, and their contribution to interlayer effects should be small.\n\nThe main contribution to the effective spin-orbit coupling at the Fermi level due to the interlayer coupling comes from the hoppings between $p$ orbitals in next nearest neighbor atoms in different layers. This interaction gives rise to the parameters $\\gamma_3$ and $\\gamma_4$ in the parametrization of the bands of graphite. For simplicity, we will neglect couplings between $s$ and $p$ orbitals in neighboring layers. The nonzero hoppings used in this work are listed in Table~\\ref{hoppings}.\n\n\\begin{table}\n\\begin{tabular}{||c|c||} \\hline \\hline\n$\\epsilon_s$ &-7.3 \\\\ \\hline $t^0_{ss}$ & 2.66 \\\\ \\hline $t^0_{sp}$ & 4.98 \\\\ \\hline $t^0_{pp \\sigma}$ &2.66 \\\\ \\hline $t^0_{pp \\pi}$ &-6.38 \\\\ \\hline $t^1_{pp \\pi}$ &0.4 \\\\ \\hline $t^2_{pp \\sigma}$ &0.4 \\\\ \\hline $t^2_{pp \\pi}$ &-0.4 \\\\ \\hline $\\Delta_{so}$ &0.02 \\\\ \\hline \\hline\n\\end{tabular}\n\\caption{Nonzero tight binding parameters, in eV, used in the model. 
The hoppings are taken from\\cite{TL87,TS91}, and the spin-orbit coupling from\\cite{SCR00}. Superscripts 0, 1, and 2 correspond to atoms in the same layer, nearest neighbors in different layers, and next nearest neighbors in different layers.}\n\\label{hoppings}\n\\end{table}\n\nThe hamiltonian can be written as a $32 \\times 32$ matrix for each lattice wavevector. We define an effective hamiltonian acting on the $\\pi$, or $p_z$, orbitals, by projecting out the rest of the orbitals:\n\\begin{align}\n{\\cal H}_{\\pi}^{eff} &\\equiv {\\cal H}_{\\pi} + {\\cal H}_{\\pi \\sigma} \\left( \\omega - {\\cal H}_{\\sigma \\sigma} \\right)^{-1} {\\cal H}_{\\sigma \\pi} \\label{heff}\n\\end{align}\nWe isolate the effect of the spin-orbit coupling by defining:\n\\begin{align}\n{\\cal H}_{\\pi}^{so} \\left( \\vec{\\bf k} \\right) &\\equiv {\\cal H}_{\\pi}^{eff} ( \\Delta_{so} ) - {\\cal H}_{\\pi}^{eff} ( \\Delta_{so}= 0 )\n\\end{align}\nNote that ${\\cal H}_{\\pi}^{so}$ depends on the energy, $\\omega$.
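\n\nAs an illustration, the projection in Eq.~(\\ref{heff}) can be carried out numerically as in the following minimal NumPy sketch; the matrix, the index list of $\\pi$ orbitals and the energy passed in are generic placeholders, not the specific parametrization used in this work.\n\\begin{verbatim}\nimport numpy as np\n\ndef effective_pi_hamiltonian(H, pi_idx, omega):\n    # H_eff = H_pp + H_ps (omega - H_ss)^(-1) H_sp,\n    # where p labels the pi (p_z) orbitals and s the\n    # remaining sigma orbitals that are projected out.\n    n = H.shape[0]\n    s_idx = [i for i in range(n) if i not in pi_idx]\n    H_pp = H[np.ix_(pi_idx, pi_idx)]\n    H_ps = H[np.ix_(pi_idx, s_idx)]\n    H_sp = H[np.ix_(s_idx, pi_idx)]\n    H_ss = H[np.ix_(s_idx, s_idx)]\n    G = np.linalg.inv(omega * np.eye(len(s_idx)) - H_ss)\n    return H_pp + H_ps @ G @ H_sp\n\\end{verbatim}\n\n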
We analyze ${\\cal H}_{\\pi}^{so}$ at the $K$ and $K'$ points. The two matrices have a total of 16 entries, which can be labeled by specifying the sublattice, layer, spin, and valley. We define operators which modify each of these degrees of freedom using the Pauli matrices $\\hat{\\sigma} , \\hat{\\mu} , \\hat{s}$, and $\\hat{\\tau}$. The unit cell is described in Fig.~\\ref{bilayer_lattice_so}.\n\\begin{figure}\n\\includegraphics[width=5cm]{lattice_bilayer_so.pdf}\n\\caption[fig]{(Color online). Unit cell of a graphene bilayer. Labels A and B define the two sublattices in each layer, while subscripts 1 and 2 define the layers.} \\label{bilayer_lattice_so}\n\\end{figure}\n\nThe hamiltonian has inversion and time reversal symmetry, and it is also invariant under rotations by $120^\\circ$. These symmetries are defined by the operators:\n\\begin{align}\n{\\cal I} &\\equiv \\sigma_x \\mu_x \\tau_x \\nonumber \\\\\n{\\cal T} &\\equiv i s_y \\tau_x {\\cal K} \\nonumber \\\\\n{\\cal C}_{120^\\circ} &\\equiv \\left( - \\frac{1}{2} + i \\frac{\\sqrt{3}}{2} s_z \\right) \\times \\left( - \\frac{1}{2} - i \\frac{\\sqrt{3}}{2} \\tau_z \\mu_z \\right) \\times \\nonumber \\\\ &\\times \\left( - \\frac{1}{2} + i \\frac{\\sqrt{3}}{2} \\tau_z \\sigma_z \\right)\n\\end{align}\nwhere ${\\cal K}$ is complex conjugation.\n\nThe possible spin dependent terms which respect these symmetries were listed in\\cite{DD65}, in connection with the equivalent problem of three dimensional Bernal graphite (see below). In the notation described above, they can be written as\n\\begin{align}\n{\\cal H}_{\\pi}^{so} &= \\lambda_1 \\sigma_z \\tau_z s_z + \\lambda_2 \\mu_z \\tau_z s_z + \\lambda_3 \\mu_z \\left( \\sigma_y s_x - \\tau_z \\sigma_x s_y \\right) + \\nonumber \\\\ &+ \\lambda_4 \\sigma_z \\left( \\mu_y s_x + \\tau_z \\mu_x s_y \\right) \\label{hamilso}\n\\end{align}\nThe first term describes the intrinsic spin-orbit coupling in single layer graphene. The other three, which involve the matrices $\\mu_i$, are specific to bilayer graphene. The term proportional to $\\lambda_3$ can be viewed as a Rashba coupling with opposite signs in the two layers.\n\n\\section{Results}\n\\subsection{Bilayer graphene}\n\\begin{figure}\n\\includegraphics[width=8cm]{couplings_bilayer_so_E.pdf}\n\\caption[fig]{(Color online). Dependence on energy of the spin-orbit couplings, as defined in eq.~\\ref{hamilso}.}\n\\label{couplings_bilayer_so_E}\n\\end{figure}\n\nThe energy dependence of the four couplings in eq.~\\ref{hamilso} is shown in Fig.~\\ref{couplings_bilayer_so_E}. The values of the couplings scale linearly with $\\Delta_{so}$. This dependence can be understood by treating the next nearest neighbor interlayer coupling and the intraatomic spin-orbit coupling as a perturbation. The spin-orbit coupling splits the spin up and spin down states of the $\\sigma$ bands in the two layers. The interlayer couplings couple the $\\pi$ band in one layer to the $\\sigma$ band in the other layer. Their value is of order $\\gamma_3$. The $\\pi$ states are shifted by:\n\\begin{align}\n\\delta \\epsilon_{\\pi \\pm} &\\sim - \\frac{\\gamma_3^2}{\\left| \\epsilon_{\\sigma \\pm}\\right|} \\propto \\mp \\Delta_{so} \\left( \\frac{\\gamma_3}{ \\epsilon_{\\sigma}^0 } \\right)^2\n\\label{spin_hopping}\n\\end{align}\nwhere $\\epsilon_\\sigma^0$ is an average value of a level in the $\\sigma$ band.\n\nThe model gives for the only intrinsic spin-orbit coupling in single layer graphene the value\n\\begin{align}\n\\left| \\lambda_1^{SLG} \\right| &= 0.0065 {\\rm meV} \\label{lambdaSLG}\n\\end{align}\nThis coupling depends quadratically on $\\Delta_{so}$, $\\delta \\epsilon_{\\pi \\pm} \\sim \\pm \\Delta_{so}^2 \/ \\epsilon_\\sigma^0$\\cite{HGB06}.\n\nThe band dispersion of bilayer graphene at low energies, in the absence of spin-orbit couplings, is given by four Dirac cones, because of trigonal warping effects associated with $\\gamma_3$\\cite{MF06}. Hence, we must consider the couplings for wavevectors $\\vec{\\bf k}$ slightly away from the $K$ and $K'$ points. We have checked that the dependence of the couplings $\\lambda_i$ on momentum, in the range where trigonal warping is relevant, is comparable to the changes with energy shown in Fig.~\\ref{couplings_bilayer_so_E}.\n\n\\begin{figure}\n\\includegraphics[width=8cm]{couplings_bilayer_so_Egap.pdf}\n\\caption[fig]{(Color online). Dependence on interlayer gap, $E_g$, of the spin-orbit couplings, as defined in eq.~\\ref{hamilso}.}\n\\label{couplings_bilayer_so_Egap}\n\\end{figure}\n\nA gap, $E_g$, between the two layers breaks inversion symmetry, and can lead to new couplings. The calculations show no new coupling greater than $10^{-6}$meV for gaps in the range $-0.1 {\\rm eV} \\le E_g \\le 0.1 {\\rm eV}$. The dependence of the couplings on the value of the gap is shown in Fig.~\\ref{couplings_bilayer_so_Egap}. This calculation considers only the effect of the shift of the electrostatic potential between the two layers. The presence of an electric field will also mix the $p_z$ and $s$ orbitals within each atom, leading to a Rashba term similar to the one induced in single layer graphene\\cite{HGB06,HMING06t}.\n\nThe effect of $\\lambda_1$ is to open a gap of opposite sign in the two valleys, for each value of $s_z$. The system will become a topological insulator\\cite{H88,KM05}. The number of edge states is two, that is, even. The spin Hall conductivity is equal to two quantum units of conductance. A perturbation which preserves time reversal invariance can hybridize the edge modes and open a gap. Such a perturbation should be of the form $\\tau_x s_y$.\n\nThe terms with $\\lambda_3$ and $\\lambda_4$ describe spin flip hoppings which involve a site coupled to the other layer by the parameter $\\gamma_1$.
The amplitude of the wavefunctions at these sites is suppressed at low energies\\cite{MF06}. The shifts induced by $\\lambda_3$ and $\\lambda_4$ in the low energy electronic levels will be of order $\\lambda_3^2 \/ \\gamma_1 , \\lambda_4^2 \/ \\gamma_1$.\n\n\\begin{figure}\n\\includegraphics[width=8cm]{couplings_bilayer_so_kz.pdf}\n\\caption[fig]{(Color online). Dependence on momentum perpendicular to the layers in Bernal graphite of the spin-orbit couplings, as defined in eq.~\\ref{hamilso}.} \\label{couplings_bilayer_so_kz}\n\\end{figure}\n\\subsection{Bulk graphite}\nThe hamiltonian of bulk graphite with Bernal stacking can be reduced to a set of bilayer hamiltonians with interlayer hoppings which depend on the momentum along the direction perpendicular to the layers, $k_z$. We neglect in the following the (small) hoppings between next nearest neighbor layers, $\\gamma_2$ and $\\gamma_5$, and the energy shift $\\Delta$ between atoms in different sublattices. At the $K$ and $K'$ points of the three dimensional Brillouin Zone ($2 k_z c = 0$, where $c$ is the interlayer distance) the hamiltonian is that of a single bilayer where the value of all interlayer hoppings is doubled. At the $H$ and $H'$ points, where $2 k_z c = \\pi$, the hamiltonian reduces to two decoupled layers, and in the intermediate cases the interlayer couplings are multiplied by $| 2 \\cos (k_z c) |$. Carrying out the calculations described in the previous section, $k_z$ dependent effective couplings, $\\lambda_i ( k_z )$, can be defined. These couplings are shown in Fig.~\\ref{couplings_bilayer_so_kz}. The results for bilayer graphene correspond to $k_z c = 2 \\pi \/ 3 , 4 \\pi \/ 3$. The layers are decoupled for $k_z c = \\pi$. In this case, the only coupling is $\\lambda_1$, which gives the coupling for a single layer, given in eq.~\\ref{lambdaSLG}.\n\nThe significant dispersion as a function of the momentum perpendicular to the layers shown in Fig.~\\ref{couplings_bilayer_so_kz} implies the existence of spin dependent hoppings between layers in different unit cells. This is consistent with the analysis which showed that the spin-orbit coupling in a bilayer has a contribution from interlayer hopping, see eq.~\\ref{spin_hopping}.\n\nThe spin-orbit couplings can be larger in bulk graphite than in a graphene bilayer. The bands in Bernal graphite do not have electron-hole symmetry. The shift in the Fermi energy with respect to the Dirac energy is about $E_F \\approx 20 {\\rm meV} \\gg \\lambda_1 , \\lambda_3$\\cite{DM64}. Hence, the spin-orbit coupling is not strong enough to open a gap throughout the entire Fermi surface, and graphite will not become an insulator.\n\n\\begin{figure}\n\\includegraphics[width=8cm]{couplings_bilayer_so_ky_ortho.pdf}\n\\caption[fig]{(Color online). Dependence on wavevector, $2 k_y $, of the spin-orbit couplings for orthorhombic graphite, as defined in eq.~\\ref{couplings_ortho}. The point $k_x = 0 , k_y a \\sqrt{3} = 4 \\pi \/3$ corresponds to the $K$ point ($a$ is the distance between carbon atoms in the plane).} \\label{couplings_bilayer_so_ky_ortho}\n\\end{figure}\n\nA similar analysis applies to orthorhombic graphite, which is characterized by the stacking sequence $ABCABC \\cdots$\\cite{M69}. The electronic structure of this allotrope at low energies differs markedly from Bernal graphite\\cite{GNP06,AG08}, and it can be a model for stacking defects\\cite{BCP88,GNP06,AG08}.
If hoppings beyond nearest neighbor layers are neglected, the hamiltonian can be reduced to an effective one layer hamiltonian where all sites are equivalent. The effective hamiltonian which describes the $K$ and $K'$ valleys contains eight entries, which can be described using the matrices $\\sigma_i , s_i$, and $\\tau_i$. Orthorhombic graphite is not invariant under inversion, and a Rashba like spin-orbit coupling is allowed. The spin-orbit coupling takes the form:\n\\begin{align}\n{\\cal H}_{ortho}^{so} &\\equiv \\lambda_1^{ortho} \\sigma_z s_z \\tau_z + \\lambda_2^{ortho} \\left( \\sigma_y s_x - \\tau_z \\sigma_x s_y \\right) \\label{ortho}\n\\end{align}\nAs in the case of Bernal stacking, the couplings have a significant dependence on the momentum perpendicular to the layers, $k_z$, and interlayer hopping terms are induced. For $\\omega = 0, \\vec{\\bf k} = 0$ and $k_z = 0$, we find:\n\\begin{align}\n\\lambda_1^{ortho} &= 0.134 {\\rm meV} \\nonumber \\\\\n\\lambda_2^{ortho} &= 0.275 {\\rm meV} \\label{couplings_ortho}\n\\end{align}\nIn orthorhombic graphite the Fermi level is away from the $K$ and $K'$ points, in the vicinity of a circle defined by $| \\vec{\\bf k} | = \\gamma_1 \/ v_F$\\cite{GNP06,AG08}. The variation of the couplings as a function of wavevector is shown in Fig.~\\ref{couplings_bilayer_so_ky_ortho}.\n\n\\section{Conclusions}\nWe have studied the intrinsic spin-orbit interactions in a graphene bilayer and in graphite. We assume that the origin of the couplings is the intraatomic $\\vec{\\bf L} \\vec{\\bf S}$ interaction, and we use a tight binding model which includes the $2s$ and $2p$ atomic orbitals.\n\nThe intrinsic spin-orbit couplings in a graphene bilayer and in graphite are about one order of magnitude larger than in single layer graphene, due to the mixing between the $\\pi$ and $\\sigma$ bands by interlayer hoppings. Still, these couplings are typically of order $0.01 - 0.1$meV, that is, $0.1 - 1$K.\n\nBilayer graphene becomes an insulator with an even number of edge states. These states can be mixed by perturbations which do not break time reversal symmetry. These perturbations can only arise from local impurities with strong spin-orbit coupling, as a spin flip process and intervalley scattering are required.\n\nThe interplay of spin-orbit coupling and interlayer hopping leads to spin dependent hopping terms. The spin-orbit interactions are largest in orthorhombic graphite, which does not have inversion symmetry.\n\n\\section{Acknowledgements}\nFunding from MICINN (Spain), through grants FIS2008-00124 and CONSOLIDER CSD2007-00010 is gratefully acknowledged.\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\chapter{INTRODUCTION}\n\nAutonomous vehicles are a rapidly expanding market and research field. In recent years, the automotive industry has increased its projects and investments in this area in order to reduce the number of deaths caused by crashes \\cite{1_importance}. Furthermore, according to \\cite{1_economic}, the autonomous vehicle market is expected to grow by 36.9\\% between 2017 and 2027 and reach a market value of \\$126.8 billion. The size of this market also enables the field to continuously renew itself. In this direction, studies on smart and autonomous automobile design are being carried out. 
According to World Health Organization research, road traffic collisions are the seventh leading cause of mortality across all age categories, accounting for about 1.35 million deaths in 2016, with cyclists, motorcyclists, and pedestrians making up more than half of the deaths \\cite{1_who}. Considering the complexity of traffic situations and the increase in traffic accidents due to carelessness, misbehavior, fatigue or distraction, the improvements seen at different levels of vehicle autonomy are motivated by the goal of reducing the injuries and deaths caused by complex traffic scenarios and human error. Injuries and deaths caused by traffic accidents can be reduced with more accurate Advanced Driver Assistance System (ADAS) applications. \n\n\n\nIn light of all of this, path planning algorithms have been developed for decades in order to prevent crashes and to navigate around obstacles. Path planning algorithms are basically used to enable autonomous vehicles to move in environments with obstacles. Also, as mobile robots have recently started to work in dynamic environments where people are present, path planning algorithms have gained even more importance so that autonomous vehicles can operate safely. To achieve autonomous driving skills, different path planning algorithms use different approaches. While algorithms like RRT (Rapidly-Exploring Random Tree), RRT* (Rapidly-Exploring Random Tree (star)) and PRM (Probabilistic Roadmap) use probabilistic methods for path planning, algorithms like APF use geometric approaches.\n\n\n\n\\clearpage\n\n\nWhen it comes to real-world applications, such as autonomous vehicle deployment, testing the boundaries of safety and performance on real cars is expensive, inaccessible, and dangerous. Commercially available mobile systems such as the Jackal UGV (Unmanned Ground Vehicle) \\cite{1_jackal} and the TurtleBot2 \\cite{1_turtle} have been developed to address these concerns. Several small-scale robotic platforms have been built in recent years to further research, particularly in the area of autonomous driving. Many of these platforms are built on a small-scale racecar with a mechanical framework to support the electronic components, generally 1\/10 the size of an actual vehicle. In 2014, Mike Boulet, Owen Guldner, Michael Lin, and Sertac Karaman developed RACECAR (Rapid Autonomous Complex-Environment Competing Ackermann steering Robot), which was the first mobile robot with a strong graphics processing unit (GPU) \\cite{1_sertac}. By offering realistic car dynamics with Ackermann steering and drive trains, the RACECAR platform serves as a robust robotic platform for research and education. It is built on completely open-source and standardized systems that use ROS and its related libraries.\n\n\n\\section{Purpose of The Thesis}\n\nThis project mainly focuses on implementing and testing different path planning algorithms on the MIT RACECAR platform. The algorithms are expected to avoid obstacles while following a high-curvature road. The aim is to determine the path planning algorithm that handles this scenario most successfully and to create a starting point for the solution of more complex problems, such as overtaking and escape maneuvers, using this algorithm in the period after the thesis. A lane finding algorithm based on image processing had to be developed at the same time in order to implement the scenario. Within this project, the necessary tools will be developed and the path planning algorithms will be evaluated in a scenario environment. 
\n\n\\clearpage\n\n\n\\section{Literature Review}\n\n\nSince path planning is a subject that has been studied for a long time, many algorithms related to path planning have been developed. DWA is one of the most popular algorithms: Fox proposed the DWA algorithm back in 1997, but it is still frequently used. Also, algorithms that rely on geometric relations, like APF, are very successful with a low computational cost \\cite{1_apf}. Furthermore, algorithms based on the optimization of dynamic parameters, like TEB, were proposed by R\u00f6smann, and it has been shown that they can navigate complex environments \\cite{3_teb_2}. Several vision-based lane detection algorithms have been presented in the literature to reliably detect lanes. These approaches may be divided into two types: feature-based and model-based. Detecting the evident properties of lanes that distinguish them from the rest of the objects in the lane image is the basis of the feature-based technique. The task of feature detection can be split into two parts: edge detection and line detection. To identify lane edges and model lane borders, edge-based techniques have been developed. The accuracy and computing needs of more sophisticated implementations of these models rise, while the resilience to noise decreases. For feature extraction, the Canny edge detection method is presented, which offers an exact match to lane lines and is adaptable to a complex road environment \\cite{1_canny}. Improvements to Canny edge detection can successfully deal with the various sources of noise in the road environment, according to \\cite{1_canny2}. Furthermore, defining the targeted region of the picture where lane borders are located, referred to as the region of interest (ROI), enhances performance efficiency \\cite{1_roi}. To ease the identification procedure, the region of interest can be split into left and right sub-regions based on the width of the lane.\n\n\n\\chapter{Platform Overview}\n\n\\begin{figure}[h!] \n \\centerline{\\includegraphics[width=.6\\linewidth]{2.racecar_platform\/figures\/racecar.png}}\n \\vspace{5mm}\n \\caption{MIT Racecar Platform}\n \\label{fig:racecar}\n\\end{figure}\n\nRACECAR is a full-stack robotics system that provides high computing power with a reliable mechanical structure. The RACECAR platform serves as a strong robotic platform for research and education by providing realistic vehicle dynamics with Ackermann steering and drive trains. It is designed based on fully open-source and standardized systems that take advantage of ROS and its associated libraries \\cite{2_racecar}. The MIT RACECAR has the following sensors:\n\n\\begin{itemize}\n \\item StereoLabs ZED Stereo Camera\n \\item RPLidar A2 Lidar\n \\item Sparkfun 9DoF Razor IMU\n\\end{itemize}\n\n\n\nBecause of its numerous sensors and well documented software, the MIT RACECAR was chosen as the robot on which the trajectory generation algorithms will be tested in this project. \n\n\n\\clearpage\n\n\n\\section{Hardware}\n\n\\subsection{Chassis}\n\nThe chassis of the MIT RACECAR platform is the Slash 4\u00d74 Platinum Truck model from Traxxas. This chassis provides adequate performance and has enough space for the installation of the other equipment. The chassis is able to drive at 40 mph with its default Velineon 3500 brushless motor, but in the VESC software the maximum speed is limited to 5 mph for safety reasons. The power required by the BLDC motor is provided by a 2S 4200 mAh LiPo battery \\cite{2_traxxas}. 
\n\n\n\\subsection{NVIDIA Jetson TX2}\nThe NVIDIA Jetson TX2 is a high-performance embedded computer with a powerful graphics processing unit, widely used in robotics applications. It has enough processing power for trajectory planning and other tasks. The Jetson TX2 has an ARM Cortex-A57 CPU (Central Processing Unit) and a 256-core CUDA-enabled GPU with 8 GB of RAM (Random Access Memory) \\cite{2_jetson}.\n\n\n\\begin{figure}[h] \n \\centerline{\\includegraphics[width=.5\\linewidth]{2.racecar_platform\/figures\/jetson.png}}\n \\caption{NVIDIA Jetson TX2 \\cite{2_jetson}}\n \\label{fig:jetson}\n\\end{figure}\n\n\n\\subsection{StereoLabs ZED Stereo Camera}\nThe StereoLabs ZED Stereo Camera is one of the environmental awareness sensors of the MIT RACECAR. The ZED camera comes with dual 4 MP sensors that provide a 110\u00b0 FOV (Field of View). The ZED camera provides real-time pointcloud data in addition to capturing 1080p HD video at 30 FPS or WVGA at 100 FPS (Frames Per Second), thanks to its SDK (Software Development Kit) \\cite{2_zed}.\n\n\\clearpage\n\n\\begin{figure}[h!] \n \\centerline{\\includegraphics[height=3cm]{2.racecar_platform\/figures\/zed.png}}\n \\caption{StereoLabs ZED Stereo Camera \\cite{2_zed}}\n \\label{fig:zed}\n\\end{figure}\n\n\n\n\\subsection{RPLidar A2 Lidar}\n\n\nThe RPLidar A2 is a low-cost 2D lidar developed by SLAMTEC that gives us the obstacle distance information necessary for trajectory generation. It can measure distances with a 360 degree field of view and a maximum range of 16 meters \\cite{2_lidar}.\n\n\n\\begin{figure}[h] \n \\centerline{\\includegraphics[width=.7\\linewidth]{2.racecar_platform\/figures\/rplidar.png}}\n \\vspace{5mm}\n \\caption{RPLidar A2 Lidar \\cite{2_lidar}}\n \\label{fig:rplidar}\n\\end{figure}\n\n\n\\subsection{Sparkfun 9DoF Razor IMU}\n\nThe Sparkfun 9DoF Razor IMU is a reprogrammable IMU (Inertial Measurement Unit) with an MPU-9250 9DoF (9 Degrees of Freedom) sensor. The Sparkfun IMU is an easy-to-use IMU with various connection types, such as UART, I2C and USB. In the MIT RACECAR platform it is fully integrated via a USB connection, but in this project it is not actively used \\cite{2_imu}.\n\n\n\\subsection{VESC Electronic Speed Controller}\n\nThe BLDC (brushless DC) motor and the steering servo of the MIT RACECAR are controlled by the Vedder Electronic Speed Controller (VESC). Unlike conventional ESCs, the VESC also provides useful data over the USB connection, such as current and voltage data for each motor phase and wheel odometry data. The VESC is capable of successfully controlling the speed and the steering angle of the MIT RACECAR. The low-level controllers of the VESC allow researchers to focus on developing solutions to higher-level problems.\n\n\n\n\\section{Software}\n\nThe MIT RACECAR software relies on JetPack and ROS as its two main components. JetPack is special software for NVIDIA Jetson computers that tailors Ubuntu to them. Along with the specialized OS for Jetson, it also provides libraries, APIs, samples and developer tools for AI applications and GPU computing \\cite{2_jetpack}. \n\n\nAs the other main component of the MIT RACECAR software, the RACECAR ROS package provides robotic capabilities on ROS and consists of sub-packages such as \\textit{\\textbf{ackermann\\_cmd\\_mux, joy\\_node, rplidar\\_ros, sparkfun-9dof-razor-imu, zed\\_ros\\_wrapper}}. These packages are required for collecting sensor data from the vehicle, applying commanded control signals, and making the necessary transformations between different frames. 
In addition to these packages, the \\textit{\\textbf{rosbag}} package is used for data collection and logging. The rosbag package allows us to save the sensor data as time series and then play them back in real time. The \\textit{\\textbf{rosbag}} package also allows us to measure the performance of the developed algorithms. Also, for visualization purposes, another ROS package, \\textit{\\textbf{rqt}}, is used. \n\n\n\\subsection{Robot Operating System (ROS)}\n\nROS is an open-source framework that mainly handles communication between nodes and creates a baseline for researchers. Since it also has useful packages for tasks like transformation calculations, it is very helpful for beginners in robotics programming and accelerates prototyping for researchers.\n\n\n\\begin{figure}[h!] \n \\centerline{\\includegraphics[width=.7\\linewidth]{2.racecar_platform\/figures\/ros-master-node.jpg}}\n \\vspace{5mm}\n \\caption{ROS Topics Communication System}\n \\label{fig:ros_communication}\n\\end{figure}\n\n\\clearpage\n\nThe communication architecture of ROS is based on a publish-subscribe messaging model between nodes. This communication is only possible with a \\textbf{\\textit{master}}, which serves as an XML-RPC based server. The \\textbf{\\textit{master}} establishes the communication between nodes through addresses named \\textbf{\\textit{topics}}. \\textbf{\\textit{Topics}} are special messaging addresses with a specific data type and name. \n\nThe ROS nodes of the MIT RACECAR can be seen in Table \\ref{ros_node_t}.\n\n\n\\begin{table}[]\n\\centering\n\\begin{tabular}{@{}\n>{\\columncolor[HTML]{EFEFEF}}c \n>{\\columncolor[HTML]{EFEFEF}}c \n>{\\columncolor[HTML]{EFEFEF}}c @{}}\n\\toprule\n\\textbf{NODE NAME} &\n \\textbf{PACKAGE} &\n \\textbf{DESCRIPTION} \\\\ \\midrule\n\\begin{tabular}[c]{@{}c@{}}ackermann\\_cmd\\\\ \\_mux\\end{tabular} &\n \\begin{tabular}[c]{@{}c@{}}ackermann\\_cmd\\\\ \\_mux\\end{tabular} &\n \\begin{tabular}[c]{@{}c@{}}Ackermann velocity command inputs are \\\\ multiplexed in this node. It monitors \\\\ the incoming ackermann command topics \\\\ and, based on priority, allows \\\\ one topic command at a time.\\end{tabular} \\\\ \\midrule\njoy\\_node &\n joy &\n \\begin{tabular}[c]{@{}c@{}}This node provides a generic joystick \\\\ interface to ROS. It publishes a joy message \\\\ that includes all possible states \\\\ of the joystick buttons and axes.\\end{tabular} \\\\ \\midrule\nrplidarNode &\n rplidar\\_ros &\n \\begin{tabular}[c]{@{}c@{}}Driver for the RPLIDAR sensor that converts \\\\ messages that come from the lidar sensor \\\\ to laser scan messages \\\\ and publishes them to a topic.\\end{tabular} \\\\ \\midrule\n\\begin{tabular}[c]{@{}c@{}}zed\\_camera\\\\ \\_node\\end{tabular} &\n zed-ros-wrapper &\n \\begin{tabular}[c]{@{}c@{}}Driver for the ZED camera that outputs the pointcloud, \\\\ the camera left and right images and the pose \\\\ information of the robot.\\end{tabular} \\\\ \\midrule\nrosbag\\_node &\n rosbag &\n \\begin{tabular}[c]{@{}c@{}}A tool for recording the messages published on \\\\ topics to bag files, which are specially formatted files \\\\ that store timestamped ROS messages.\\\\ It is also used for replaying the recorded topics.\\end{tabular} \\\\ \\midrule\nrqt\\_node &\n rqt &\n \\begin{tabular}[c]{@{}c@{}}This node is used for visualization purposes. 
\\\\ It provides tools to easily plot data in real time,\\\\ display rostopic messages, etc.\\end{tabular} \\\\ \\bottomrule\n\\end{tabular}\n\\caption{The ROS nodes of the MIT RACECAR software package}\n\\label{ros_node_t}\n\\end{table}\n\n\\clearpage\n\n\n\\section{Vehicle Kinematic Model}\n\nAckermann-steered vehicles like the MIT RACECAR are commonly described by the bicycle kinematic model. The bicycle kinematic model for Ackermann-steered vehicles assumes that the front wheels and the rear wheels are each lumped into a single wheel, which results in a two-wheel bicycle \\cite{2_bicycle}. In addition, the bicycle kinematic model assumes that the vehicle only moves on the XY plane, and thanks to these assumptions the model reduces to a simple geometric relationship between the steering angle and the curvature that the rear axle of the vehicle will follow.\n\n\n\\begin{figure}[h] \n \\centerline{\\includegraphics[height=.6\\linewidth]{2.racecar_platform\/figures\/bicycle.png}}\n \\caption{Simple Bicycle Model \\cite{2_bicycle}}\n \\label{fig:bicycle}\n\\end{figure}\n\n\\begin{equation}\n \\tan(\\gamma) = \\frac{L}{R}\n\\label{eq_bicycle}\n\\end{equation}\n\nThe geometric relationship can be written as equation \\ref{eq_bicycle}, where $\\gamma$ is the steering angle, $L$ is the wheelbase and $R$ is the turning radius of the vehicle. It should be noted that this geometric relationship gives acceptable results only for low speeds and moderate steering angles.\n\n\\clearpage\n\n\n\n\\chapter{PATH PLANNING}\n\nPath planning algorithms are used to enable mobile robots to perform their most basic task, namely the ability to move. These algorithms mainly consist of two parts: path planning and trajectory planning. While path planning is responsible for the higher-level motion of the robot, trajectory planning algorithms calculate the required actuator behavior for following the via-points generated by path planning \\cite{3_global_local}. This project focuses on trajectory planning algorithms. In the ROS methodology, path planning corresponds to the global planner and trajectory planning corresponds to the local planner. \n\n\n\\begin{figure}[h!] \n \\centering\n \\centerline{\\includegraphics[width=\\linewidth, height=14cm]{3.trajectory_planning\/figures\/planners.png}}\n \\caption{An example view of local and global planners \\cite{3_global_local_fig}.}\n \\label{fig:costmap}\n\\end{figure}\n\n\\clearpage\n\n\nIn this project, different trajectory planning algorithms, namely DWA, TEB and APF, will be compared by their performance. In the realization phase of the project, the algorithms will be tested on the RACECAR platform on a curvy road. The algorithms are implemented in the ROS environment. Thanks to the useful features of ROS, implementation and testing of the algorithms could be done easily. \n\n\n\\section{Main Elements of Path Planning}\n\nPath planning can become a complex process. In order to make path planning more organized and less complex, some auxiliary ROS packages will be used. Before diving into the path planning algorithms, these auxiliary packages will be examined. \n\n\n\\subsection{Global Planner}\n\nGlobal planners are responsible for creating a collision-free path for the robot. Global planners generate this path around the obstacles with a geometric approach, and they do not take vehicle dynamics into account. Unlike local planners, they generally plan for a larger map. A*, Dijkstra's algorithm, RRT, $RRT^*$, and PRM can be counted as examples of global planning algorithms. 
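\n\nAs an illustration of how such a planner produces waypoints, the following is a minimal sketch of a Dijkstra-style global planner on a 2D occupancy grid; the grid representation, the unit step cost and the 4-connected neighborhood are simplifying assumptions, not the configuration used in this project.\n\\begin{verbatim}\nimport heapq\nimport math\n\ndef dijkstra_grid(grid, start, goal):\n    # grid: 2D list, 0 = free cell, 1 = obstacle\n    # returns the list of (row, col) waypoints from start to goal\n    rows, cols = len(grid), len(grid[0])\n    dist = {start: 0.0}\n    parent = {}\n    queue = [(0.0, start)]\n    while queue:\n        d, cell = heapq.heappop(queue)\n        if cell == goal:\n            path = [cell]\n            while cell in parent:        # walk back to the start\n                cell = parent[cell]\n                path.append(cell)\n            return path[::-1]\n        if d > dist.get(cell, math.inf):\n            continue\n        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):\n            nr, nc = cell[0] + dr, cell[1] + dc\n            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:\n                nd = d + 1.0             # uniform step cost\n                if nd < dist.get((nr, nc), math.inf):\n                    dist[(nr, nc)] = nd\n                    parent[(nr, nc)] = cell\n                    heapq.heappush(queue, (nd, (nr, nc)))\n    return None                          # goal unreachable\n\\end{verbatim}\n\n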
The output of a global planner is a roadmap or waypoints to the target point. This roadmap provides the local planner with the points to follow. \n\n\nIn this project, the \\textbf{\\textit{base\\_global\\_planner}} package from the ROS navigation stack will be used as the global planner. This package provides a good baseline that can be configured to use different algorithms for global planning. Since global planning is not one of the main tasks of this project, the package is configured to use Dijkstra's algorithm, which calculates the shortest (lowest-cost) path without including the robot kinematic parameters. Dijkstra's algorithm was chosen mainly for its simplicity and its low computational cost. Global planners are basic algorithms for choosing the path with minimum cost; however, their performance heavily depends on the configuration of the global costmap. Since there is no mapping in this project, we are using our global planner with respect to the base\\_link frame. \n\n\n\\clearpage\n\n\\subsection{Local Planner}\n\nThe local planner is the planner that calculates the short-term plan and executes it. Unlike the global planner, the local planner includes the robot kinematic parameters and calculates the plan according to these parameters. This extra calculation comes with a computational cost. In order to decrease the computational cost of local planners, the boundaries of the local planner are generally kept small, and the local planner aims at the furthest point of the global plan that lies inside these boundaries, not the real target point. \n\nGlobal planners produce waypoints or roadmaps as output, and local planners take the output of the global planners as input and calculate the required control signal to follow these waypoints. This control signal is generally a ROS message of type \\textbf{\\textit{\"\/geometry\\_msgs\/Twist\"}}. The ROS message contains the commanded linear and angular velocity data, and another node takes this message and controls the vehicle with respect to these commanded velocities.\n\n\n\\begin{figure}[h!] \n \\centerline{\\includegraphics[width=.8\\linewidth]{3.trajectory_planning\/figures\/local_plan.png}}\n \\vspace{5mm}\n \\caption{Local Planner trajectory generation \\cite{3_local_planner}.}\n \\vspace{5mm}\n \\label{fig:local_plan}\n\\end{figure}\n\n\n\\subsection{Global and Local Costmaps}\n\nCostmaps are basically maps that store the cost of every point on the map, calculated based on sensor measurements or provided by a known map. 2D and 3D costmaps can be constructed, but for ground vehicles like the MIT RACECAR 2D costmaps are more suitable, since the movement along the z-axis can be neglected. \n\n\\clearpage\n\nMainly, the costs of the cells in a costmap are calculated based on whether there is an obstacle in that cell. In addition to this, another source of cost is the distance of the cell to an obstacle. This \"distance\" is named \\textbf{\\textit{inflation}} in the ROS Navigation stack. The inflation is calculated from parameters called \\textbf{\\textit{inflation\\_radius}} and \\textbf{\\textit{cost\\_scaling\\_factor}}. These parameters are required for deciding whether a cell will be considered near an obstacle or not, and for the decay rate of the inflation cost. The effect of the inflation parameters on the cost of a cell can be seen in Figure \\ref{fig:inflation}.\n 
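\n\nAs a small illustration of this decay, the following sketch follows the exponential decay formula documented for the inflation layer of the ROS \\textbf{\\textit{costmap\\_2d}} package; the parameter values passed in are placeholders to be tuned, and this is not the actual C++ implementation.\n\\begin{verbatim}\nimport math\n\nINSCRIBED_COST = 253   # costmap_2d cost inside the inscribed radius\n\ndef inflation_cost(distance, inscribed_radius,\n                   inflation_radius, cost_scaling_factor):\n    # the cell cost decays exponentially with distance to the obstacle\n    if distance <= inscribed_radius:\n        return INSCRIBED_COST\n    if distance > inflation_radius:\n        return 0\n    decay = math.exp(-cost_scaling_factor\n                     * (distance - inscribed_radius))\n    return int((INSCRIBED_COST - 1) * decay)\n\\end{verbatim}\n\n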
\\begin{figure}[h!] \n \\centering\n \\includegraphics[width=\\linewidth]{3.trajectory_planning\/figures\/inflation.png}\n \\vspace{5mm}\n \\caption{The effect of inflation parameters on cell cost \\cite{3_ros_costmap}}\n \\vspace{5mm}\n \\label{fig:inflation}\n\\end{figure}\n\n\nOther important parameters of costmaps are \\textbf{\\textit{cost\\_factor}} and \\textbf{\\textit{neutral\\_cost}}. These parameters determine the smoothness and the curvature of the calculated path. The effects of these parameters can be seen in Figure \\ref{fig:cf_effect} and Figure \\ref{fig:nc_effect}.\n\n\n\\clearpage\n \n\n\\begin{figure}[h!] \n \\centerline{\\includegraphics[width=\\linewidth]{3.trajectory_planning\/figures\/cf_0.01_0.55_3.55.png}}\n \\vspace{5mm}\n \\caption{The effect of cost\\_factor on the generated path (cost\\_factor=0.01, cost\\_factor=0.55, cost\\_factor=3.55) \\cite{3_tuning_guide}.}\n \\vspace{5mm}\n \\label{fig:cf_effect}\n\\end{figure}\n\n\\begin{figure}[h!] \n \\centerline{\\includegraphics[width=\\linewidth]{3.trajectory_planning\/figures\/nc_1_66_233.png}}\n \\vspace{5mm}\n \\caption{The effect of neutral\\_cost on the generated path (neutral\\_cost=1, neutral\\_cost=66, neutral\\_cost=233) \\cite{3_tuning_guide}.}\n \\vspace{5mm}\n \\label{fig:nc_effect}\n\\end{figure}\n\n\nLastly, it should be noted that, as the names suggest, global costmaps are used by global planners and local costmaps by local planners. Although there is in fact no structural difference between global and local costmaps, their parameter configurations differ. The map sizes and the sensors used are the main elements that can differ between them.\n\n\n\n\\clearpage\n\n\n\n\n\\section{Trajectory Planning Methods}\n\nIn this project, three different trajectory planning algorithms are considered and applied on the MIT RACECAR platform. These algorithms are the Dynamic Window Approach (DWA), the Time Elastic Band (TEB) and the Artificial Potential Field (APF).\n\n\\subsection{Dynamic Window Approach}\n\nDWA is a proven concept that has been used in robotics for a long time. The method was first proposed in 1997 \\cite{3_dwa}, and it is still a popular method for mobile robots. DWA provides a controller between the global path and the robot. It evaluates a cost function for different control inputs and searches for the maximum-scoring trajectory to follow. Thanks to its simplicity and easy implementation, DWA is a good choice as the first algorithm for trajectory planning. The basic working principle of DWA can be expressed as follows.\n\n\\begin{enumerate}\n \\item Sample dx, dy and dtheta values from the control space. \n \n \\item Predict the next states from the current states based on the sampled dx, dy and dtheta.\n \n \\item Score each predicted trajectory using the distance to obstacles, the distance to the goal, the distance to the global path, and the speed, and remove the trajectories that collide with obstacles.\n \n \\item Choose the trajectory with the highest score and send it to the robot.\n \n \\item Repeat until the goal is reached.\n\\end{enumerate}\n\n\nIn order to use DWA effectively, it needs to be configured properly. DWA has parameters that configure the robot, the goal tolerance, the forward simulation, the trajectory scoring and the global plan. One of the important parameters of DWA is the sim time parameter. Sim time is essentially the time length that DWA plans for, and this parameter heavily affects the computation time of the trajectory; a schematic version of the sampling loop is sketched below. 
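\n\nThe following is a simplified, illustrative sketch of the sampling-and-scoring cycle listed above; the velocity ranges, sample counts, scoring weights and obstacle radius are arbitrary placeholder values, and this is not the actual \\textbf{\\textit{dwa\\_local\\_planner}} implementation.\n\\begin{verbatim}\nimport numpy as np\n\ndef dwa_step(x, y, th, goal, obstacles, dt=0.1, sim_time=3.0):\n    # sample (v, w) pairs, roll each arc forward for sim_time,\n    # discard colliding arcs and return the best-scoring command\n    best_score, best_cmd = -np.inf, (0.0, 0.0)\n    for v in np.linspace(0.1, 1.0, 5):         # translational samples\n        for w in np.linspace(-0.8, 0.8, 11):   # rotational samples\n            xs, ys, ths, ok = x, y, th, True\n            for _ in range(int(sim_time \/ dt)):  # forward simulation\n                xs += v * np.cos(ths) * dt\n                ys += v * np.sin(ths) * dt\n                ths += w * dt\n                if any(np.hypot(xs - ox, ys - oy) < 0.3\n                       for ox, oy in obstacles):\n                    ok = False                 # collides: discard\n                    break\n            if not ok:\n                continue\n            goal_term = -np.hypot(goal[0] - xs, goal[1] - ys)\n            clearance = min([np.hypot(xs - ox, ys - oy)\n                             for ox, oy in obstacles] + [1.0])\n            score = goal_term + 0.5 * clearance + 0.2 * v\n            if score > best_score:\n                best_score, best_cmd = score, (v, w)\n    return best_cmd   # (linear velocity, yaw rate)\n\\end{verbatim}\n\n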
In tests, it was seen that when the sim time is set to low values, like 2.0 seconds or less, the performance of DWA is not sufficient for passing through complex passages, because it cannot see what will happen after the sim time. This results in a suboptimal trajectory. It should also be noted that, since all trajectories generated by DWA are simple arcs, setting the sim time to a high value, like 5 seconds, will result in long curves that are not very flexible. Thus, in order to achieve good performance with DWA, setting the sim time to an optimal value is a must.\n\n\n\\begin{figure}[h!] \n \\centerline{\\includegraphics[width=\\linewidth]{3.trajectory_planning\/figures\/sim_time.png}}\n \\caption{The effect of simulation time on the generated trajectory (sim\\_time=1.5, sim\\_time=4.0) \\cite{3_tuning_guide}}\n \\label{fig:sim_time}\n\\end{figure}\n \n \nAside from sim time, there are a few other factors to consider. vx sample and vy sample are two parameters that determine how many translational velocity samples should be taken in the x and y directions, respectively, while vth sample controls the number of rotational velocity samples. The number of samples you want to take is determined by your computer's processing power. Because turning is generally more complicated than moving straight ahead, it is preferable to set vth samples higher than the translational velocity samples in most cases. Since our vehicle is non-holonomic, there is no need for velocity samples in the y direction. Sim granularity is the step size between points on a trajectory; it essentially determines how often the points on the trajectory are examined. A lower value indicates a higher frequency, which requires more processing power. \n\nLastly, in addition to all of these parameters, it should be noted that DWA can plan both for holonomic and non-holonomic robots, but it does not support Ackermann-drive robots such as the MIT RACECAR. Still, DWA was implemented on the MIT RACECAR and acceptable results were obtained. \n\n\n\n\\clearpage\n\n\n\\subsection{Time Elastic Band}\n\nThe Time Elastic Band (TEB) planner was proposed by R\u00f6smann as an improved version of the elastic band algorithm \\cite{3_teb}. The classic elastic band algorithm is based on optimizing a global path for the shortest path length, whereas TEB optimizes the path with a time-optimal objective function. Also, while the elastic band does not take dynamic constraints into account, TEB considers kino-dynamic constraints in the trajectory planning. Additionally, the TEB planner supports non-holonomic car-like robots.\n\nThe TEB planner is essentially a solution to a sparse, scalarized multi-objective optimization problem. The multi-objective problem includes constraints like the maximum velocity, the maximum acceleration, the minimum turning radius, etc. Since this optimization problem does not always have a unique solution, TEB can get stuck in locally optimal points. Thus, sometimes the robot cannot pass an obstacle even if there is a feasible trajectory that the robot could follow. In order to solve this local minimum problem, an improved version of TEB was proposed \\cite{3_teb_2}. With this improved version, TEB optimizes a globally optimal trajectory in parallel to the multi-objective optimization problem that it already solves. The algorithm switches to this new globally optimal solution when necessary, and in this way the local minimum problem of TEB is solved. A toy sketch of the underlying elastic band idea is given below. 
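\n\nTo give an intuition for the elastic band underlying TEB, the following toy sketch deforms a sequence of waypoints with internal smoothing forces and external repulsive forces; the gains and the influence distance are illustrative, and the actual \\textbf{\\textit{teb\\_local\\_planner}} instead solves the full kino-dynamic optimization problem described above.\n\\begin{verbatim}\nimport numpy as np\n\ndef elastic_band_step(path, obstacles, k_int=0.4, k_ext=0.8, d0=0.6):\n    # path: (N, 2) float array of waypoints\n    # obstacles: list of (2,) arrays with obstacle positions\n    new_path = path.copy()\n    for i in range(1, len(path) - 1):\n        # internal force: pulls each waypoint towards its neighbours\n        internal = path[i - 1] + path[i + 1] - 2.0 * path[i]\n        external = np.zeros(2)\n        for obs in obstacles:\n            diff = path[i] - obs\n            d = np.linalg.norm(diff)\n            if 1e-9 < d < d0:\n                # external force: pushes the band away from obstacles\n                external += (d0 - d) * diff \/ d\n        new_path[i] = path[i] + k_int * internal + k_ext * external\n    return new_path\n\\end{verbatim}\n\n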
Since there are numerous kino-dynamic constraints for a car-like robot, TEB has a weight for each constraint, and the effect of each weight on the generated trajectory depends on the other weights. For this reason, optimizing the TEB planner parameters requires a good understanding of the concept, attention and controlled experiments. In this project, in order to simplify this process, the initial values of the TEB planner were optimized in a simulation environment and the final configuration was fine-tuned in real-time tests. The results showed that while the MIT RACECAR can avoid obstacles smoothly with the TEB planner, the planner still requires more tuning to get better results while driving in narrow areas. \n\n\n\n\n\\begin{figure}[h!] \n \\centerline{\\includegraphics[width=\\linewidth]{3.trajectory_planning\/figures\/teb_chart.png}}\n \\vspace{5mm}\n \\caption{Flowchart of TEB Algorithm \\cite{3_teb}}\n \\vspace{5mm}\n \\label{fig:teb_chart}\n\\end{figure}\n\n\n\\clearpage\n\n\n\n\n\\section{Artificial Potential Field Method}\n\nThe Artificial Potential Field (APF) method is a basic approach to trajectory planning that is widely used in both industrial and academic applications. APF creates artificial attractive vectors that pull the vehicle towards the goal position and repulsive vectors that push the vehicle away from obstacles. The addition of these attractive and repulsive vectors results in a target direction and speed for the vehicle. \n\n\n\n\\begin{equation}\n U(q) = U_{attractive}(q) + U_{repulsive}(q)\n\\end{equation}\n\n\n\nAdditionally, the magnitudes of these vectors are defined by the user. By tuning these magnitudes, the user can adjust how closely the robot will navigate past obstacles. With the help of this simple calculation, APF determines how much the vehicle should deviate from its current heading.\n\nIn addition to determining the target orientation of the vehicle, the speed of the vehicle can be calculated with the APF method from the density of obstacles. Simply put, the density of obstacles determines the target velocity for the vehicle. \n\n\n\n\\begin{equation}\n V = V_{max} - K_{gain} \\cdot n_{obstacles}\n\\end{equation}\n\n\n\n\\begin{figure}[h!] \n \\centerline{\\includegraphics[width=.4\\linewidth]{3.trajectory_planning\/figures\/apf.jpg}}\n \\vspace{5mm}\n \\caption{Basic trajectory generation of APF Method}\n \\label{fig:apf}\n\\end{figure}\n\n\nThe APF method has a very basic approach to trajectory planning, and this approach provides a simple solution to the problem with a low computational cost. The main reason for choosing this method is this direct approach, but it has some problems that we will discuss in the next section; a minimal sketch of the force computation is given below. 
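\n\nThe following is a minimal sketch of this force computation, with a standard gradient-style repulsive term; the gains \\textit{k\\_att} and \\textit{k\\_rep} and the influence radius \\textit{d0} are illustrative placeholders to be tuned.\n\\begin{verbatim}\nimport numpy as np\n\ndef apf_command(pos, goal, obstacles, k_att=1.0, k_rep=0.5, d0=1.0):\n    # attractive force pulls the vehicle towards the goal\n    pos = np.asarray(pos, dtype=float)\n    force = k_att * (np.asarray(goal, dtype=float) - pos)\n    for obs in obstacles:\n        diff = pos - np.asarray(obs, dtype=float)\n        d = np.linalg.norm(diff)\n        if 1e-9 < d < d0:\n            # repulsive force grows as the obstacle gets closer\n            force += k_rep * (1.0 \/ d - 1.0 \/ d0) \/ d**2 * (diff \/ d)\n    # the resultant vector gives the target heading and a speed measure\n    return np.arctan2(force[1], force[0]), np.linalg.norm(force)\n\\end{verbatim}\n\n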
\\subsection{Problems of APF Method}\n\n\nThe main problem with the APF method is the local minimum problem. Similar to the earlier version of the TEB algorithm, APF can get stuck in local minimum points. As can be seen in the figure below, when the repulsive forces from the obstacles cancel each other, the resultant vector drives the vehicle straight towards the obstacle. This can result in a collision. Another problem of the APF method is the Goal Non-Reachable with Obstacles Nearby (GNRON) problem. In cases where the goal point is near an obstacle, the repulsive force from the obstacle can prevent the robot from reaching the goal point by creating a local minimum close to the goal point.\n\nThese are the two primary problems of the APF method, and various solutions have been proposed as modified APF algorithms, such as Evolutionary APF \\cite{3_apf_evo} and Fuzzy APF \\cite{3_apf_fuzzy}. \n\n\n\\begin{figure}[h!] \n \\centerline{\\includegraphics[width=.7\\linewidth]{3.trajectory_planning\/figures\/gnron.png}}\n \\caption{Goal points with GNRON and local minimum problem \\cite{3_apf_fig}}\n \\label{fig:gnron}\n\\end{figure}\n\n\nWhile implementing APF on the MIT RACECAR, these problems were solved by implementing a local minimum detection method. This method simply checks the calculated repulsive and attractive forces, and if it determines that the robot is stuck in a local minimum point, it adds a vector that diverts the vehicle from heading towards the obstacle.\n\n\n\n\n\n\n\n\\chapter{Testing and Results}\n\nReal-life experimentation with trajectory planning algorithms on the MIT RACECAR requires additional work. Up to this point, the MIT RACECAR platform has been examined in terms of its hardware and software, several helper packages of ROS have been described, and it has been shown how trajectory planning algorithms can be applied. For the real-time testing of the different trajectory planning algorithms, there should be a measure of how well an algorithm works. For that purpose, an environment was set up in the Artificial Intelligence and Intelligent Systems (AI2S) laboratory. As the scenario, a double-lane curvy road with obstacles in different locations was chosen. The MIT RACECAR was expected to stay in its lane and, when it encounters an obstacle, to pass the obstacle by changing lanes. \n\nAs a result of the chosen scenario, the lanes on the road need to be detected. To achieve this, the ZED camera was used for image processing. Image processing in this project is carried out with the OpenCV library. An image processing algorithm that detects lanes and returns goal points was designed in Python. Goal points are generated from the detected lanes with respect to a look-ahead distance that is determined dynamically by the velocity of the vehicle.\n\nAnother aspect of the project is obstacle detection. Obstacle detection is done with the help of the RPLidar A2 2D lidar. RPLidar provides users with a ROS package that handles the communication between the computer and the lidar and publishes the obstacle information as a ROS topic. Lastly, all of these outputs are given to the path planning algorithm as target points and obstacles.\n\n\n\n\n\\section{Test Environment}\n\nIn order to create the required environment, a double-lane curvy road was drawn on the floor of the AI2S Laboratory with electrical tape. While making the road for the MIT RACECAR, constraints such as the width of the vehicle and the minimum turning radius of the vehicle were considered. Also, environmental effects such as glare on the floor caused by the lights were avoided as much as possible: since the reference is obtained from image processing, glare degrades the quality of the reference signal and causes the vehicle to go out of the lane. \n\n\nAlso, the turning radius is an important constraint in such small environments, especially when obstacle avoidance is required.
Thus, the curvature of the road was kept gentle enough for the vehicle to be able to turn, but also sharp enough to test the limits of the algorithms.\n\nThe double-lane structure of the road was chosen so that the vehicle can change lanes when it encounters an obstacle. This structure also provides good use cases, such as lane changing and overtaking problems, for future work. Besides, the middle lane of the road was chosen to be red in order to provide easier recognition of the lane. Since the color red is easy to recognize with color filters such as HSV, this provides a good starting point for the lane detection and leaves more time to focus on the trajectory planning algorithms. The most painful part of the generated scenario for this study was that the changing light conditions in the environment had a serious effect on the lane recognition algorithm's performance.\n\n\\begin{figure}[h!] \n \\centerline{\\includegraphics[width=\\linewidth]{4.testing_results\/figures\/4_environment.png}}\n \\vspace{5mm}\n \\caption{Testing environment}\n \\label{fig:testing_env}\n\\end{figure}\n\n\\clearpage\n\n\n\\section{Lane Detection Algorithm}\n\nThe lane detection algorithm, which is a crucial part of the scenario chosen for the project, is explained in this section. The lane detection algorithm consists of OpenCV functions that will not be explained in detail, since they are out of the scope of the project. The algorithm is designed in two main parts so that it is modular and easy to understand.\n\n\n\\begin{figure}[h!] \n \\vspace{5mm}\n \\centerline{\\includegraphics[width=\\linewidth]{4.testing_results\/figures\/main_flow.png}}\n \\vspace{5mm}\n \\caption{Main structure of the lane detection algorithm}\n\\end{figure}\n\n\nThe first part of the algorithm, the pre-processing part, is responsible for extracting the lane information from the camera input by eliminating everything except the lanes themselves from the image. The first version of the algorithm was based on the idea of extracting the black lanes and the red lane individually. For this purpose, a complex image processing algorithm, which can be seen in Figure \\ref{fig:pre_1}, was proposed. However, the computational cost of this proposed method was not affordable for the Jetson TX2. As a result of this problem, the feedback loop of the system was updated only 5-8 times per second on average, which resulted in a difficult control problem and a slow-response system. \n\n\\clearpage\n\n\n\\begin{figure}[h!] \n \\centerline{\\includegraphics[width=\\linewidth]{4.testing_results\/figures\/pre_1.png}}\n \\vspace{5mm}\n \\caption{Flow of the first proposed image pre-processing algorithm}\n \\label{fig:pre_1}\n\\end{figure}\n\n\nIn order to overcome these problems, a new, simpler algorithm, which can be seen in Figure \\ref{fig:pre_2}, was proposed. The idea of the new algorithm is that it is not necessary to find all the lanes individually; only the red lane, which is comparatively easier to find, needs to be detected. As a result, the newly proposed method is less accurate but more effective and faster, and this loss of accuracy is negligible.\n\n\n\\begin{figure}[h!] \n \\centerline{\\includegraphics[width=\\linewidth]{4.testing_results\/figures\/pre_2.png}}\n \\vspace{5mm}\n \\caption{Flow of the final image pre-processing algorithm}\n \\label{fig:pre_2}\n\\end{figure}\n\n\n\n\\clearpage\n\n\n\nThe pre-processing part of the algorithm mainly uses HSV masking, morphological operations and contour filtering based on size and area; a minimal sketch of these steps is given below. 
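\n\nThe following is an illustrative OpenCV sketch of the red-lane mask and of the second-degree polynomial fit used to obtain the target point, which is described in the next part; the HSV thresholds, the minimum contour area and the look-ahead coordinate are placeholder values, not the tuned parameters of this project.\n\\begin{verbatim}\nimport cv2\nimport numpy as np\n\ndef red_lane_mask(bgr):\n    # HSV threshold on both red hue bands, then morphological clean-up\n    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)\n    lower = cv2.inRange(hsv, (0, 90, 60), (10, 255, 255))\n    upper = cv2.inRange(hsv, (170, 90, 60), (180, 255, 255))\n    mask = cv2.bitwise_or(lower, upper)\n    kernel = np.ones((5, 5), np.uint8)\n    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)\n    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)\n    # keep only contours large enough to be a lane segment\n    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,\n                                   cv2.CHAIN_APPROX_SIMPLE)\n    clean = np.zeros_like(mask)\n    for c in contours:\n        if cv2.contourArea(c) > 200:\n            cv2.drawContours(clean, [c], -1, 255, -1)\n    return clean\n\ndef lane_target_point(mask, lookahead_y):\n    # fit x = a*y^2 + b*y + c to the lane pixels (bird-eye view)\n    ys, xs = np.nonzero(mask)\n    a, b, c = np.polyfit(ys, xs, 2)\n    return a * lookahead_y**2 + b * lookahead_y + c, lookahead_y\n\\end{verbatim}\n\n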
The flow of the algorithm with an example input image can be inspected in Figure \\ref{fig:pre_example}. \n\n\n\\begin{figure}[h!] \n \\centerline{\\includegraphics[width=\\linewidth]{4.testing_results\/figures\/pre_example.png}}\n \\vspace{5mm}\n \\caption{Example output of pre-processing image with stages}\n \\vspace{5mm}\n \\label{fig:pre_example}\n\\end{figure}\n\n\nThe second part of the algorithm takes as input the binary image that is the output of the pre-processing part. This part of the algorithm first applies a perspective transformation of the lane image to the bird-eye view. After this transformation, the contour information of the image is extracted and filtered once more. After all of these processes, a basic second-degree polynomial is fitted to the lane. This is required because in some cases only a small part of the lane is visible, while the coordinates of the target point at the look-ahead distance are required for path planning. Additionally, some coordinate transformations are applied to the target point, and the point is passed as the output of the lane detection algorithm. The flow of the entire algorithm can be seen in Figure \\ref{fig:entire_flow}, and an example output of the lane detection algorithm can be seen in Figure \\ref{fig:detect_out}. \n\n\n\n\\clearpage\n\n\\begin{landscape}\n\n\\begin{figure}[h!] \n \\centerline{\\includegraphics[width=\\linewidth]{4.testing_results\/figures\/entire_flow.png}}\n \\vspace{5mm}\n \\caption{The flow of the entire lane detection algorithm}\n \\label{fig:entire_flow}\n \\vspace{5mm}\n\\end{figure}\n\n\n\\clearpage\n\n\\end{landscape}\n\n\n\n\n\n\\begin{figure}[h!] \n \\centerline{\\includegraphics[width=\\linewidth]{4.testing_results\/figures\/detect_out.png}}\n \\vspace{5mm}\n \\caption{An example output of the lane detection algorithm}\n \\vspace{5mm}\n \\label{fig:detect_out}\n\\end{figure}\n\n\n\\clearpage\n\n\n\\section{Testing and Results}\n\n\nIn this chapter, an overall qualitative assessment of the trajectory planning algorithms is given. This assessment relies on how successfully the vehicle followed the lanes, how many obstacles the vehicle passed without collision, and some specific comments on the algorithm behavior. Since there is no global position data source in the AI2S laboratory environment, an overall numeric error cannot be calculated. \n\n\nWhile testing the algorithms, the environment was kept the same for all the algorithms as far as possible; nevertheless, some environmental circumstances, such as the lighting conditions, may have changed. During the testing, Rviz was used, as can be seen in Figure \\ref{fig:rviz}, for visualizing the vehicle state from the vehicle's perspective, the global plan, and the local plan manipulated by the trajectory algorithm. \n\n\n\\begin{figure}[h!] \n \\centerline{\\includegraphics[width=\\linewidth]{4.testing_results\/figures\/global_local.png}}\n \\vspace{5mm}\n \\caption{An example Rviz view with global plan (red) and local plan (purple)}\n \\vspace{5mm}\n \\label{fig:rviz}\n\\end{figure}\n\n\n\nIn addition, how the trajectory planning manipulates the target points determined by the lane detection can be seen in Figure \\ref{fig:manipulator}. The top side of the figure shows how the vehicle sees the environment, and below it the image from the camera and the lane detection algorithm can be seen. 
While the red arrow is the output of the lane detection algorithm, the purple arrows indicate the planned trajectory that avoids the obstacle without leaving the road.\n\n\n\\clearpage\n\n\n\n\\begin{figure}[h!] \n \\centerline{\\includegraphics[width=\\linewidth, height=23cm]{4.testing_results\/figures\/manipulator.png}}\n \\vspace{5mm}\n \\caption{An example view while trajectory planning is running (the red arrow is the output of the lane detection, the purple arrows are the planned trajectory)}\n \\label{fig:manipulator}\n\\end{figure}\n\n\n\\clearpage\n\n\nThe final assessment of the trajectory planning algorithms can be inspected in Table \\ref{tab:assessment}. It should be said, however, that this assessment is only valid for the chosen scenario; it does not establish which trajectory planning algorithm is better than the others in general, since that depends on the application scenario. In conclusion, the artificial potential field algorithm gave the best results with respect to the project requirements, and it was chosen to continue the project with the APF algorithm from now on: it provides reasonable performance at a low computational cost, which matters for the possible future development of the project.\n\n\\begin{table}[h!]\n\\centering\n\\begin{tabular}{@{}\n>{\\columncolor[HTML]{EFEFEF}}c \n>{\\columncolor[HTML]{EFEFEF}}c \n>{\\columncolor[HTML]{EFEFEF}}c @{}}\n\\toprule\n\\textbf{\\begin{tabular}[c]{@{}c@{}}Trajectory Planning\\\\ Algorithm\\end{tabular}} &\n \\textbf{Obstacles Avoided} &\n \\textbf{Comments} \\\\ \\midrule\n\\begin{tabular}[c]{@{}c@{}}Dynamic Window\\\\ Approach\\end{tabular} &\n 5 out of 7 &\n \\begin{tabular}[c]{@{}c@{}}The DWA planner is easy to implement and tune. \\\\ Its performance was acceptable and stable. \\\\ Also, while avoiding obstacles, \\\\ it could stay on the course but, sometimes, \\\\ its effort was not enough to avoid obstacles. \\\\ It also gets stuck in complex situations\\\\ like narrow passages.\\end{tabular} \\\\ \\midrule\n\\begin{tabular}[c]{@{}c@{}}Time-Elastic Band \\\\ Planner\\end{tabular} &\n 7 out of 7 &\n \\begin{tabular}[c]{@{}c@{}}The TEB planner is very successful at modeling the \\\\ vehicle, and it is suitable for complex \\\\ environments. But it is very hard to tune \\\\ effectively, and sometimes having so many\\\\ parameters to tune turns into \\\\ a disadvantage instead of an advantage. \\\\ Also, since it heavily relies on \\\\ an optimization problem, the computational \\\\ cost gets very high, especially \\\\ in complex environments.\\end{tabular} \\\\ \\midrule\n\\begin{tabular}[c]{@{}c@{}}Artificial Potential\\\\ Field\\end{tabular} &\n 5 out of 7 &\n \\begin{tabular}[c]{@{}c@{}}APF is the easiest to implement and tune \\\\ by far. Thanks to its basic logic based \\\\ on simple math, it provides an acceptable \\\\ result with low cost. And also, \\\\ in line with the application scenario \\\\ selected for this project, it does not \\\\ deviate the vehicle too much from the road.\\end{tabular} \\\\ \\bottomrule\n\\end{tabular}\n\\caption{The assessment of the experiments with the trajectory generation algorithms}\n\\label{tab:assessment}\n\\end{table}\n\\chapter{Conclusion and Future Works}\n\n\nPath planning algorithms are gaining more and more importance as autonomous vehicles become more widespread nowadays. 
There are many types of path planning applications, used as the navigation unit of autonomous vehicles or for safety purposes in semi-autonomous vehicles. In this project, which aims to implement and test a few of the path planning applications in real time, a testing environment was first established, and an image processing-based lane tracking algorithm was developed for this environment to provide the reference signal to the vehicle. After the test environment was created, the DWA, TEB and APF methods were implemented and tested separately. Since problems such as overtaking will be studied in the later parts of the project, it is of great importance to choose an algorithm that is predictable and low in cost. For this reason, it was deemed appropriate to continue the next stages of the project with APF. However, this does not mean that APF is better than the others; all path planning approaches have advantages and disadvantages relative to each other. \n\nIn addition, in the next stages of the project, it is aimed to establish an end-to-end neural network structure and to carry out end-to-end control with this structure. The aim is to give sensor data, such as camera and lidar data, as input to the neural network and to obtain control signals, such as the vehicle speed and the steering angle, as output. \n\n\n\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}