diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzjpog" "b/data_all_eng_slimpj/shuffled/split2/finalzzjpog" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzjpog" @@ -0,0 +1,5 @@ +{"text":"\n\\section{Introduction}\n\nIn mathematical imaging the problem of segmentation refers to the process of automatically detecting regions, objects or patterns of interest. This is of particular importance in biomedical imaging for cell or organ quantification as well as in materials science and engineering. The main goal of this work is to present a new multiscale segmentation method combining variational inverse scale-space methods with nonlinear spectral analysis.\n\n\n\\textit{Image segmentation.} The task of image segmentation can sometimes be addressed by simple methods which are solely based on intensity or histogram thresholds. However those methods quickly fail in more complex experimental scenarios, where challenges in terms of different sizes, intensities, contrasts or uncertainties like noise occur. Thus, a class of commonly used mathematical techniques to overcome some of the difficulties are nonlinear variational methods. In general there are two different ways of describing a region, either by its edge or by the interior of the region. Therefore two main concepts of addressing segmentation by an energy minimization problem evolved in the previous decades: edge-based and region-based segmentation. While edge-based segmentation separates regions based on discontinuity information \\cite{Kass1988,Kichenassamy1995,Caselles1997}, region-based segmentation separates by similarity measures within the regions. In this paper we focus on a region-based approach.\n\n\\begin{figure}[t]\n \\centering \n \\subfigure[Sizes]{\\includegraphics[height=0.15\\textwidth]{results_new\/balls_size_nonoise\/bregman_cv_1.png}}\\qquad\n \\subfigure[Intensities]{\\includegraphics[height=0.15\\textwidth]{results_new\/balls_int_nonoise\/bregman_cv_1.png}}\\qquad\n \\subfigure[Shapes]{\\includegraphics[height=0.15\\textwidth]{results_new\/varying_shapes_nonoise\/bregman_cv_1.png}}\\\\\n \\caption{Minimal examples where multiscale image segmentation could offer added value}\n\\label{fig:overview_goal_segmentation}\n\\end{figure}\n\nThe proposed method in this work is mainly built upon the Chan-Vese (CV) model \\cite{Chan2001}. For a given image function $f:\\Omega \\rightarrow \\mathbb{R}$ the domain $\\Omega \\subset \\mathbb{R}^d$ should be separated into two regions $\\Omega_1$ and $\\Omega_2 := \\Omega \\setminus \\Omega_1$ via the following variational energy minimization method\n\\vspace{-2pt}\n\\begin{equation}\\label{eq:CVorig}\n\\int_{\\Omega_1} (f(x) - c_1)^2 \\mathrm{d}x + \\int_{\\Omega_2} (f(x) - c_2)^2 \\mathrm{d}x + \\alpha \\cdot Per(C)\\longrightarrow \\min_{C,c_1,c_2}\n\\end{equation}\n\\vspace{-2pt}\nwhere $C$ is the desired contour separating $\\Omega_1$ and $\\Omega_2$ and $c_i$ stands for a desired average intensity value within region $\\Omega_i$. $Per(C)$ denotes the length of the contour $C$ separating $\\Omega_1$ and $\\Omega_2$ and can be controlled via a regularization parameter $\\alpha > 0$. An easy example is the image $f$ given in Figure \\ref{fig:overview_goal_segmentation}(a) where the domain $\\Omega$ can be segmented with respect to foreground and background. In contrast to simple histogram thresholding methods this holds true even in the case of strong noise if the parameter $\\alpha$ is chosen adequately. 
A generalization of this model was introduced in \\cite{Vese2002} and in \\cite{Chan2000} the model has also been extended to vector-valued images like color images. Recently in \\cite{zosso2015} an extension of the CV model that can handle artifacts and illumination bias in images has been proposed.\n\n\n\\textit{Challenges\/Questions.} The CV model is very useful in segmenting regions of interest which have very similar intensity values, e.g. Figure \\ref{fig:overview_goal_segmentation}(a). However, automatically detecting single objects based on their size is more challenging. Even with a varying parameter $\\alpha$ controlling the contour length (forward scale-space), it is for example not possible to detect the smallest object as a singleton. A similar challenge occurs when segmenting separate objects due to their intensity values, e.g. Figure \\ref{fig:overview_goal_segmentation}(b). Increasing the number of constants $c_i$ to four is suboptimal because we usually a-priori don't know the number of objects. Varying a threshold or parameter $\\alpha$ could lead to a correct segmentation but the estimated intensity constants $c_1$, $c_2$ will likely be incorrect. \\textit{Hence, is it possible to automatically detect multiple scales in a nonlinear variational image segmentation model, for instance with respect to different object sizes or object intensities? Can the segmentation of an image automatically be decomposed with respect to those scales?}\n\nMany region-based segmentation methods only use constraints on the contour length or curvature as regularization. However, in view of shape optimization and dictionary learning an approach that could also automatically separate objects with respect to their shape (cf. Figure \\ref{fig:overview_goal_segmentation}(c)) would be very interesting. \\textit{Hence, what is the role of geometric shapes in a multiscale segmentation approach?}\n\n\n\\textit{Scale-space methods.}\nIn the previous decades there has been a continuous interest in the analysis of different scales and the construction of scale spaces in imaging. In general it is desired to automatically detect all scales present in an image and simultaneously determine which scales are informative and contribute most to the image. For segmentation this problem is addressed in the fundamental works by Witkin and Koenderink \\cite{Witkin1983,Koenderink1984}. In relation to those, several methods to detect and analyze interesting scales have been proposed, see for example \\cite{Lifshitz1990,Vincken1997,Tabb1997,Lindeberg1998,Florack2000,Letteboer2004}. The underlying scale-space that is examined is defined by a linear diffusion process. A drawback of those approaches is that linear diffusion smoothes edge information and is therefore in general not suitable for applications where one is interested in retaining sharp edge information. Especially in biomedical image applications this is often the case. Therefore those theories were extended to non-linear diffusion processes, see \\cite{Niessen1997,Niessen1999,Dam2000}. A drawback of these approaches is that their analysis of scales is not fully automatic and can only be used in a forward approach, thus going from fine to coarse scales and then trying to find a backward relation. In this work we concentrate on nonlinear diffusion processes for segmentation where scale automatically relates to intensity and size.\n\nA prominent example of a variational method for nonlinear diffusion is the ROF model \\cite{Osher2005}. 
With increasing regularization parameter $\\alpha$ a sequence of functionals generates a nonlinear forward scale space flow that filters signals from fine to coarse. However in this process the total variation regularization functional is known to lead to a systematic contrast loss in the filtered image $u$ \\cite{Meyer2001}, whereas the main discontinuities in the signal remain at their position in the domain. To tackle the problem of intensity loss Osher et al. proposed in \\cite{Osher2005} an iterative contrast enhancement procedure based on Bregman distances. This approach is known to generate a nonlinear inverse scale space flow generating filtered signals from coarse to fine and with improved quality. This idea was successfully applied to more general inverse problems. \\cite{Burger2007,Brune2011,Benning2013}\n\n\\textit{Spectral methods.} Recently Gilboa \\cite{Gilboa2013,Gilboa2014} developed a framework to detect scales based on the nonlinear total variation diffusion process. The total variation is known to retain edge information while smoothing the signal apart from the edges. In this framework scales are detected based on a spectral decomposition of the given image into TV eigenfunctions \\cite{Meyer2001}. This concept does not only hold true for higher-order regularization functionals \\cite{Papafitsoros2015511,Poeschl2015,Benning2013} but more generally for convex, one-homogenous functionals $J$ with corresponding nonlinear eigenvalue equations \\cite{Burger2015,Burger2016} of the following form\n\\vspace{-5pt}\n\\begin{equation*}\n\t\\lambda u \\in \\partial J(u) \\ \n\\end{equation*}\nwhere $\\partial J(\\cdot)$ denotes the subdifferential of $J$. When minimizing for instance the total variation $J$, those eigenfunctions $u$ simply loose contrast whereas the overall structure of the function remains the same. The magnitude of contrast loss is related to the eigenvalue $\\lambda$. The eigenfunctions shape is determined by the chosen norm in the TV functional which can be adapted to the application of interest. In this way signals are not linearly smoothed to overcome scales but are step-by-step transformed to a composition of nonlinear eigenfunctions at coarser scales. A spectral response function can be used to examine which scales have a strong contribution to the original signal and to design filters of certain scales. Moreover, such spectral method can be combined with forward and inverse scale space approaches \\cite{Burger2016}.\\newpage\n\n\n\\textit{Main contribution.} In this paper we will extend the idea of (inverse) scale space methods known for nonlinear diffusion processes to segmentation and shape detection problems.\n\\begin{itemize}\n\\setlength{\\itemsep}{3pt}\n\t\\setlength{\\itemindent}{-.2in}\n\t\\item First goal: A novel inverse multiscale segmentation method based on Bregman iterations\n\t\\item Second goal: An adaptive regularization parameter strategy for $\\alpha$ (independent of $c_1$, $c_2$)\n\t\\item Third goal: A spectral analysis for segmentation regarding shapes of eigenfunctions\n\\end{itemize}\nA TV-based forward scale space for segmentation can easily be derived from the CV model via an increasing regularization parameter. We extend this framework by an inverse scale space for segmentation, still based on the CV model and therefore on a nonlinear diffusion process. For this purpose, we make use of Bregman iterations, among others well-known for improving total variation denoising results. 
The relation between the forward scale space and the inverse scale space is examined. Both iterative strategies are accomplished by a spectral transform and response function, which are used to easily examine scales and to filter certain scales. Since our method uses the total variation as a nonlinear diffusion process, we can make use of relatively easy and fast numerical and parallel implementation schemes developed in recents years.\n\n\\textit{Organization.} This work is organized as follows.\nIn Section \\ref{sec:CVmodel} we start with a revision of the segmentation model by Chan and Vese including its convexification. Together with a revision of the iterative denoising strategy using Bregman iterations in Section \\ref{ref:bregmandnoising}, the combination of those two concepts forms the first ingredient of our new inverse scale space method for multiscale segmentation introduced in Section \\ref{sec:bregman-cv}. An interesting interpretation of the method as an adaptive regularization method is presented in Section \\ref{sec:reg-param}. \nSection \\ref{sec:spec-analysis} deals with nonlinear spectral methods and contains the second main ingredient of our new approach. In \\ref{sec:genTV} we start with a brief summary of generalized nonlinear total variation functionals addressing different eigenshapes and continue in \\ref{sec:spec-analysisdenoising} with recent works by Gilboa et al. solving related nonlinear eigenvalue problems in imaging. In Section \\ref{sec:spec-analysissegm} we extend those ideas from nonlinear image denoising to image segmentation.\nIn Section \\ref{sec:numrealization} we describe the numerical realization of our approach using primal-dual convex optimization methods. \nIn Section \\ref{sec:results} we first illustrate the strengths and limitations of our multiscale segmentation method by studies on synthetic datasets with a certain focus on eigenfunctions and shapes. Moreover, we underline the potential and wide applicability by three different biomedical imaging applications. Its reliable performance is demonstrated on real fluorescence microscopy images that contain Circulation Tumor Cells, with various shapes and sizes, among white blood cells and debris in \\ref{sec:resultsCTC}. Besides, we present results on electron microscopy images suffering from inhomogeneous backgrounds in \\ref{sec:resultsEM} and interesting results on network-like shapes representing vascular systems in \\ref{sec:resultsnetwork}.\nWe end with a conclusion and an outlook to future possible perspectives in Section \\ref{sec:conl}.\n\\section{Modeling Segmentation with Inverse Scale Spaces}%\nIn the following section we will shortly describe the model by Chan and Vese \\cite{Chan2001} for segmentation and the adaption of the ROF model for denoising using Bregman distances \\cite{Osher2005}. Afterwards we will introduce our novel Bregman-CV model for segmentation and show some advantages of our model. \n\\subsection{Globally Convex Segmentation}\\label{sec:CVmodel}\nThe idea of the CV model has originally been derived from the more general model for image segmentation introduced by Mumford and Shah (\\cite{Mumford1989}). 
Here, one seeks for a solution of the variational energy\n\\begin{equation*}\nJ^{\\text{MS}}(u,C) = \\int_{\\Omega} |f(x) - u(x)|^2 \\mathrm{d}x + \\alpha \\cdot Per(C) + \\beta \\cdot \\int_{\\Omega\\setminus C} |\\nabla u(x)| ^2 \\mathrm{d}x \\longrightarrow \\min_{u,C}\n\\end{equation*}\nwhere $u$ is a differentiable function that is allowed to be discontinuous on $C$. $C$ describes the union of the boundaries and thereby represents the contour defining the segmentation. Thus, $u$ is a smooth approximation of the original image $f$ and is composed of several regions $\\Omega_i$. Within each region $\\Omega_i$, $u$ is smooth. If we restrict this model so that $u$ is composed of only two regions $\\Omega_1$ and $\\Omega_2$ we derive with $\\beta \\rightarrow \\infty$ the CV model for segmentation\n\\begin{equation*}\nJ^{\\text{CV}}(c_1,c_2,C) = \\int_{\\Omega_1} (f(x) - c_1)^2 \\mathrm{d}x + \\int_{\\Omega_2} (f(x) - c_2)^2 \\mathrm{d}x + \\alpha \\cdot Per(C)\\longrightarrow \\min_{C,c_1,c_2}.\n\\end{equation*}\nHere, $c_i$ is the intensity value of $u$ within the corresponding region $\\Omega_i$. These values need to be determined together with contour the $C$. Note that in comparison to the original model we omit the area regularization term (cf. \\eqref{eq:CVorig}).\n\nThe contour $C$ can be indirectly represented via a level-set function $\\Phi$ (\\cite{Osher1988}) with\n\\begin{equation*} \nC = \\{x \\in \\Omega: \\Phi(x) = 0\\} \n\\end{equation*} \nand $\\Phi(x)$ being positive if and only if $x \\in \\Omega_1$. Together with the Heaviside function\n\\begin{equation*}\n\tH(\\Phi(x)) = \\begin{cases}1 &\\text{ if }\\Phi(x)\\geq 0 \\\\ 0 &\\text{ if }\\Phi(x)< 0\\end{cases}\n\\end{equation*}\nand its regularized version $H_{\\epsilon}$ this results in\n\\begin{align*}\nJ^{\\text{CV2}}(c_1,c_2,\\Phi) = \\int_{\\Omega} (f(x) - c_1)^2 H_{\\epsilon}(\\Phi(x)) \\mathrm{d}x & + \\int_{\\Omega} (f(x) - c_2)^2 (1 - H_{\\epsilon}(\\Phi(x)))\\mathrm{d}x \\\\&+ \\alpha \\cdot \\int_{\\Omega}|\\nabla H_{\\epsilon}(\\Phi(x))|\\mathrm{d}x~\\longrightarrow~ \\min_{\\Phi,c_1,c_2}.\n\\end{align*}\nThe contour $C$ evolves during minimization until it reaches a minimum which, in the ideal case, describes the object boundaries. Besides the original minimization strategy by gradient descent, several minimization methods to solve the CV model have been developed, see for example \\cite{He2007,Zehiry2007,Badshah2008,Bae2009}. One disadvantage of the model is its non-convexity which makes the solution depending on the used initialization. 
With a badly chosen initialization the minimization might get stuck in a local minimum that corresponds to a bad or meaningless segmentation.\n\nFor a better understanding of the relation between the nonlinear denoising model by Rudin, Osher and Fatemi (\\cite{Rudin1992}) and the CV segmentation model we use the total variation defined as\n\\begin{equation}\\label{eq:TV}\nTV(u) := \\sup\\limits_{\\substack{\\varphi \\in C_0^{\\infty}(\\Omega; \\mathbb{R}^2)\\\\ ||\\varphi||_{\\infty} <1}} \\int_{\\Omega} u \\nabla\\cdot \\varphi \\;\\mathrm{d}\\mu \\text {\\ \\ with \\ \\ } BV(\\Omega) := \\{ u \\in L^{1}(\\Omega)|TV(u) < \\infty\\}.\n\\end{equation}\nThe total variation of a characteristic function $u(x) = \\begin{cases} 1 &\\text{\\ if \\ } x \\in \\Omega_1 \\cup C\\\\ 0 &\\text{ if } x \\in \\Omega_2\\end{cases}$ corresponds to the contour length $|C|$, which can be shown by the co-area formula.\nTherefore we can formulate the segmentation problem as\n\\begin{equation} \\label{eq:CV3} J^{CV3}(c_1,c_2,u) = \\int_{\\Omega} u\\left((f(x)-c_1)^2 - (f(x) - c_2)^2\\right)\\mathrm{d}x + \\alpha \\; TV(u)\n\\longrightarrow \\min_{\\substack{u\\in BV(\\Omega),c_1,c_2\\\\u(x)\\in \\{0,1\\}}}.\n\\end{equation}\nFor fixed $c_1, c_2$ the solution of \\eqref{eq:CV3} corresponds to the solution of an ROF problem with binary constraint (\\cite{Burger2012}):\n\\begin{equation}\\label{eq:CV-ROF}\n \\min_{\\substack{u\\in BV(\\Omega)\\\\u(x)\\in \\{0,1\\}}}\\frac{1}{2}|| u(x) - r(x)||_2^2 + \\alpha TV(u)\n \\end{equation}\nwith $r(x) = (f(x)-c_2)^2 \\!-\\! (f(x) - c_1)^2 \\!-\\!\\frac{1}{2}$.\n\nThe regularization parameter $\\alpha$ in the segmentation model \\eqref{eq:CV3} has the role of a scale parameter, meaning that $\\alpha$ determines the scale of the objects that are segmented. The CV model describes a forward scale approach, thus a small parameter $\\alpha$ corresponds to small scales that are segmented. An increased regularization parameter results in a solution where the smaller scales are not segmented but only larger ones. The meaning of scale is determined by the regularization functional, in this case the total variation. The total variation encodes a measure of the contour length as well as the height of piecewise constant areas. One disadvantage is that due to the 1-homogeneity of $TV$ our method cannot distinguish between height and contour length. Thus, a small object with a bright intensity can have the same scale as a large object with a less bright intensity. For more details see section \\ref{sec:results}.\n\\\n\\paragraph{Convexification}\nThe CV segmentation model \\eqref{eq:CV3} as well as the binary ROF model \\eqref{eq:CV-ROF} are both not convex. Even for fixed values of $c_1$ and $c_2$ both models are non-convex due to the binary constraint on $u$. As mentioned before this might result in local instead of global minima. Approaches to overcome this difficulty and find global minima of the CV model are presented for example in \\cite{Chan2006,Bresson2007,Goldstein2010,Brown2012}. In \\cite{Chan2006} the authors showed that global minimizers of \\eqref{eq:CV3} for any given fixed $c_1,c_2 \\in \\mathbb{R}$ can be found by solving\n \\begin{equation}\\label{eq:convCV}\nJ^{CV3}(c_1,c_2,v) = \\int_\\Omega v ((f(x)-c_1)^2 - (f(x)-c_2)^2) dx + \\alpha~TV(v) \\longrightarrow \\min_{v \\in BV(\\Omega),~v(x) \\in [ 0,1 ]}\n\\end{equation}\nand defining $u(x) := \\mathbbm{1}_{\\{v^{\\ast}(x) \\geq \\mu\\}}(x)$, i.e. by thresholding the minimizer $v^{\\ast}$ at a level $\\mu$, for a.e. 
$\\mu \\in [0,1]$.\n\nThus, the binary constraint can be relaxed and combined with a thresholding. Here, the variational model to solve is convex, though not strictly convex. One should bear in mind that the found solution is therefore not unique. Yet solutions of \\eqref{eq:convCV} are close to binary even if the constraint is relaxed. Thus for most choices of $\\mu$ we derive the same solution which means that the choice of $\\mu$ has only a very limited impact on our method. Therefore we don't see any disadvantages when choosing the global but not unique minimum $u(x) := v^{\\ast}(x) \\geq 0.5$. A fully convex formulation (including the constants $c_1$ and $c_2$) of problem \\eqref{eq:CV3} can be found in \\cite{Brown2012}. This method is computationally less efficient and currently we don't think that, in our method, the advantages of the full convexity outweigh the increased computational time. \n\\subsection{Inverse Scale Space for TV-Denoising}\\label{ref:bregmandnoising}\nBefore introducing our new segmentation model in the following subsection we will first recall some properties of the well-known ROF model \\cite{Rudin1992} and its extension by Bregman distances introduced in \\cite{Osher2005}. To denoise an image corrupted by additive Gaussian noise, \\cite{Rudin1992} proposed to solve the nonlinear variational problem \n\\begin{equation}\\label{eq:ROF}\n\\frac{1}{2}||u - f||_2^2 + \\alpha \\ TV(u) \\longrightarrow \\min_{u \\in BV(\\Omega)}\n\\end{equation}\nreferred to as the ROF model. \nSimilar to the CV model this generates a forward scale space flow regarding the scale parameter $\\alpha$. An increased parameter $\\alpha$ leads again to a solution $u$ where fine scales are removed and vice versa. The total variation regularization functional is known to lead to a systematic contrast loss in the denoised image $u$ \\cite{Meyer2001}. To tackle this problem Osher et al. proposed in \\cite{Osher2005} an iterative contrast enhancement procedure based on Bregman distances. Instead of using the total variation regularization functional as before, information about the solution $u$ that we gained from a prior solution of problem \\eqref{eq:ROF} is included. Therefore, problem \\eqref{eq:ROF} is replaced by a sequence of variational problems \n\\begin{equation}\\label{eq:Bregman-ROF}\nu_{k+1} = \\argmin_{u\\in BV(\\Omega)} \\frac{1}{2}||u - f||_2^2 + \\alpha\\ D^{p_k}_{TV}(u,u_k).\n \\end{equation}\nThe regularization $D^{p_k}_{TV}(u,u_k):=TV(u)-TV(u_k)-\\langle p_k,u-u_k\\rangle$ is the Bregman distance of $u$ to the previous iterate $u_k$ with respect to the total variation. $p_k \\in \\partial TV(u_{k})$ is an element in the subdifferential of the total variation of the prior solution $u_{k}$. Although this subdifferential might be multivalued, the iterative regularization algorithm automatically selects a unique subgradient based on the optimality condition. For $k = 0$ we set $u_0 = p_0 = 0$.\nThe iterative strategy of this model is as follows: We start with a large parameter $\\alpha$ that results in an oversmoothed solution $u_1$ that consists of only large scales. In every iteration step finer scales are added back to the solution. Thus, the scale parameter that determines the range of the scales present in $u$ is the iteration parameter $k$. In contrast to the forward approaches presented before, a small $k$ corresponds to coarse scales and a large $k$ to very fine scales. Therefore, the Bregman-ROF denoising approach is an inverse scale space approach. 
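A minimal numerical sketch of the inverse scale space iteration \eqref{eq:Bregman-ROF} is given below. It uses the equivalent ``add back the residual'' formulation of the Bregman iteration and, purely as an assumption for illustration, the TV denoiser of scikit-image as a black-box ROF solver; its \texttt{weight} parameter plays the role of $\alpha$ only up to the solver's internal scaling, so the sketch is illustrative rather than a faithful reimplementation.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def bregman_rof(f, alpha, num_iter=10):
    """Inverse scale space TV denoising via Bregman iterations.

    Equivalent 'add back the residual' form:
        u_{k+1} = argmin_u 1/2 ||u - (f + v_k)||^2 + alpha * TV(u)
        v_{k+1} = v_k + f - u_{k+1},   v_0 = 0
    Coarse structures appear first; finer scales are added back iteratively.
    """
    v = np.zeros_like(f)
    iterates = []
    for _ in range(num_iter):
        u = denoise_tv_chambolle(f + v, weight=alpha)
        v += f - u
        iterates.append(u)
    return iterates

# Usage sketch: a noisy piecewise constant image.
rng = np.random.default_rng(0)
f = np.zeros((64, 64))
f[16:48, 16:48] = 1.0
f += 0.1 * rng.standard_normal(f.shape)
us = bregman_rof(f, alpha=0.5, num_iter=8)
```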
The authors showed that this strategy leads to enhanced contrast of the final solution $u_{k_{\\text{max}}}$ compared to the solution of \\eqref{eq:ROF}. Hence, solving \\eqref{eq:Bregman-ROF} instead of \\eqref{eq:ROF} with increasing $\\alpha$ is not only an inverted way of detecting scales. This method rather allows for a detection of solutions which cannot be obtained by an adequate choice of $\\alpha$ in the original ROF model.\\\\\n\n\\subsection{Bregman-CV Segmentation Model}\\label{sec:bregman-cv}\nIn the following section we will introduce our new inverse scale space approach for segmentation. It is based on the similarity of the ROF functional and the CV functional shown in \\eqref{eq:CV-ROF}. Similar to the Bregman-ROF denoising problem we replace the total variation regularization by an iterative regularization based on Bregman distances. Thus, the resulting novel segmentation model is given by\n\\begin{equation*}\nu_{k+1} = \\argmin_{\\substack{u\\in BV(\\Omega)\\\\u(x)\\in [0,1]}} \\int_{\\Omega} u\\left((f-c_1)^2 - (f-c_2)^2\\right)\n+ \\alpha \\ D^{p_k}_{TV}(u,u_k).\n\\end{equation*}\nBy inserting the definition of the Bregman distance $D^{p_k}_{TV}(u,u_k):=TV(u)-TV(u_k)-\\langle p_k,u-u_k\\rangle$ and ignoring the parts independent of $u$ we derive the following model.\n\\begin{equation}\\label{eq:Bregman-CV}\nu_{k+1} = \\argmin_{u\\in BV(\\Omega)} \\int_{\\Omega} u\\left((f-c_1)^2 - (f-c_2)^2\\right) + \\chi_{[0,1]}(u)\n+ \\alpha \\ \\left(TV(u)-\\langle p_k,u\\rangle\\right)\n\\end{equation}\nwith $p_k \\in \\partial TV(u_k)$, $p_0 = 0$ and $ \\chi_{[0,1]}(u) = 0$ if $u(x) \\in [0,1]$ and equal to infinity elsewhere. The range of the scales present in $u_{k+1}$ is again determined by the iteration index $k$. This model is an inverse scale space approach, thus a small $k$ corresponds to a large scale segmentation and vice versa. \n\nBy definition of the Bregman distance we have $p_k \\in \\partial TV(u_k)$, where the subdifferential may be multivalued. Therefore we need to determine a rule to choose $p_k$. One way is to derive an update strategy based on the optimality condition of \\eqref{eq:Bregman-CV} (cf. \\cite{Osher2005}):\n\\begin{equation*}\n0 = \\left((f-c_1)^2 - (f-c_2)^2\\right) + q_{k+1} + \\alpha p_{k+1} - \\alpha p_k\\text{\\ \\ \\ (opt.cond.)}.\n\\end{equation*}\nHere, $q_{k+1}$ is an element in the subdifferential of the characteristic function $\\chi_{[0,1]}(u_{k+1})$. The subdifferential of a characteristic function is a normal cone and is in our case given by \n\\begin{equation*}\nq_k(x) \\in \\begin{cases} (-\\infty,0] &\\mbox{if} \\quad u_k(x) = 0 \\\\ \\{ 0 \\} &\\mbox{if} \\quad 0 < u_k(x) < 1 \\\\ [0,\\infty) &\\mbox{if}\\quad u_k(x) = 1\\end{cases}.\n\\end{equation*}\nThus, we can choose $q_k = 0$ and neglect it from here on. The update strategy for $p_{k+1}$ is then given by\n\\begin{equation}\np_{k+1} = p_k - \\frac{1}{\\alpha}\\left((f-c_1)^2 - (f-c_2)^2\\right) = -\\frac{k+1}{\\alpha}\\left((f-c_1)^2 - (f-c_2)^2\\right)\\label{eq:update_p}.\n\\end{equation}\nThis update is independent of $u_k$, thus it is a pointwise constant update in every iteration.\n\n\\subsection{Interpretation as an Adaptive Regularization Approach}\\label{sec:reg-param}\nOne important question is whether solutions of this model are in some sense improved compared to the solutions of the original CV model. We mentioned before that Bregman iterations lead to a contrast enhancement when applied to the ROF functional and that solving the CV model corresponds to solving a binary ROF. 
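The update \eqref{eq:update_p} derived above is trivial to implement, since it is a pointwise, closed-form expression. The following short sketch (with hypothetical variable names and fixed estimates of $c_1$, $c_2$) shows the subgradient that enters the next subproblem of \eqref{eq:Bregman-CV}.

```python
import numpy as np

def bregman_cv_subgradient(f, c1, c2, alpha, k):
    """Pointwise subgradient p_k after k Bregman steps (eq. update_p).

    p_k = -(k / alpha) * ((f - c1)**2 - (f - c2)**2); it does not depend on
    the previous segmentations u_1, ..., u_k.
    """
    r = (f - c1) ** 2 - (f - c2) ** 2
    return -(k / alpha) * r

# The data term of the (k+1)-st subproblem then reads r - alpha * p_k = (k + 1) * r,
# i.e. the Bregman update only rescales the data term pointwise.
f = np.random.rand(32, 32)
p3 = bregman_cv_subgradient(f, c1=0.9, c2=0.1, alpha=50.0, k=3)
```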
Yet, one should bear in mind that a contrast enhancement is meaningless in the case of a binary image since the contrast is already determined by the binary constraint. This is supported by the following observation: when inserting \\eqref{eq:update_p} into \\eqref{eq:Bregman-CV} we get\n\\begin{align*}\nu_{k+1} & = \\argmin_{\\substack{u\\in BV(\\Omega)\\\\u(x)\\in [0,1]}}\\int_{\\Omega} u\\left((f-c_1)^2 - (f-c_2)^2\\right) + \\alpha \\left(TV(u)-\\langle p_k,u\\rangle\\right)\\\\\n& = \\argmin_{\\substack{u\\in BV(\\Omega)\\\\u(x)\\in [0,1]}} \\int_{\\Omega} u\\left((f-c_1)^2 - (f-c_2)^2\\right) + \\frac{\\alpha}{(k+1)} \\ TV(u).\n\\end{align*}\nWith this, it is straightforward to see that all solutions $u \\in BV(\\Omega)$ derived by the Bregman-CV model can also be found by the original CV model. Nevertheless, there are advantages of using the iterative update strategy.\n\\begin{figure}[t]\n \\centering \n \\subfigure[Linearly spaced $\\alpha$'s from 50 to 1.]{\\includegraphics[width=0.35\\textwidth]{img\/normal_alphas.png}}\\label{fig:reg_params_linear}\\quad \n \\subfigure[\"$\\alpha$'s\" resulting from 50 Bregman steps with $\\alpha = 50$.]{\\includegraphics[width=0.35\\textwidth]{img\/bregman_alphas.png}}\\label{fig:reg_params_bregman}\\quad \n \\caption{Comparison between linearly spaced regularization parameters and the automatically chosen parameters in the Bregman-CV model.}\n\\label{fig:reg_params} \n\\end{figure}\nIn Figure \\ref{fig:reg_params}(b) the resulting values $\\tilde\\alpha = \\frac{\\alpha}{k+1}$ for $\\alpha = 50$ and 50 Bregman iterations are shown. It is obvious that in the first iterations the decrease of the regularization parameter is much larger than in later iterations. This is reasonable since first the large scales are reconstructed and in later Bregman iterations the finer scales are incorporated in the result. Taking large steps in $\\alpha$ while the coarse scales are being reconstructed and successively smaller steps afterwards is therefore a reasonable strategy. A large decrease in the later iterations would probably miss some scales in between, while very small steps in the first iterations are unnecessary. With this strategy the problem of automatically choosing the regularization parameter is less severe. By choosing a large $\\alpha$ and performing multiple Bregman iterations, a broad spectrum of scales is detected. Yet one should bear in mind that a strategy to automatically detect important scales is needed for a fully automated framework. \n\\section{A Spectral Method for Multiscale Segmentation}\\label{sec:spec-analysis} \nThe analysis of eigenvalues and spectral decomposition is a well-known theory in the field of linear signal and image processing (see e.g. \\cite{MarpleJr1987} or \\cite{Stoica2005} for a more recent overview). Since nonlinear regularization methods have become popular in recent years, there is a growing interest in generalizing this theory to nonlinear operators. In \\cite{Benning2012} Benning et al. examined singular values for nonlinear, convex regularization functionals and in \\cite{Gilboa2013,Gilboa2014} Gilboa transferred the idea of spectral decompositions to the nonlinear total variation functional and related operators. The general idea is to examine solutions of the nonlinear eigenvalue problem \n\\begin{equation}\\label{eq:eigenval}\n\\lambda u \\in \\partial J(u)\n\\end{equation}\nwhere $\\partial J$ denotes the subdifferential of a (one-homogeneous) convex functional usually representing regularization in inverse imaging problems. 
\nBy transferring solutions of \\eqref{eq:eigenval} to sparse peaks in a spectral domain, advanced filters enhancing or suppressing certain image components can easily be designed. This concept was first introduced for the total variation functional and later generalized to one-homogenous functions and analyzed in \\cite{Burger2015}, \\cite{Gilboa2015} and \\cite{Burger2016}. \n\n\n\\begin{figure}[t]\n \\centering \n \\subfigure[Solution of the ROF model for different $\\alpha$.]{\\includegraphics[height=0.22\\textheight]{img\/tv_block_1d.png}}\\label{fig:TVblock }\\quad \n \\subfigure[Solution of the inverse Bregman-ROF model for different Bregman iterations.]{\\includegraphics[height=0.22\\textheight]{img\/bregmantv_block_1d.png}}\\label{fig:BregmanTVblock}\\quad \n \\caption{Solutions of the ROF and Bregman-ROF model in case the signal is a TV eigenfunction. Different regularization parameters $\\alpha$ in the forward scale (left) and Bregman steps in the inverse scale space (right).}\n\\label{fig:1dEF} \n\\end{figure}\nIn the case of a one dimensional signal the eigenfunction of the total variation corresponds to a single block with constant height, see Figure \\ref{fig:1dEF}. When increasing the regularization parameter in the ROF model \\eqref{eq:ROF}, i.e. when minimizing the total variation of this block signal, the block looses it height but the edges remain at the same position (cf. Figure \\ref{fig:1dEF} (a)). Therefore all signals can be seen as the original signal multiplied by a scalar and are eigenfunction of the total variation. The same holds true for solutions of the Bregman-ROF model \\eqref{eq:Bregman-ROF}. The only difference is that now eigenfunctions of the TV functional (blocks) are reshaped with increasing number of Bregman iterations instead of removed (see Figure \\ref{fig:1dEF}(b)). This reflects the inverse scale approach of the Bregman-ROF model. Different to our novel approach, the Bregman updates in the Bregman-ROF model cannot be reformulated as an adaptive regularization parameter choice. As far as we know it is not entirely clear how the forward and inverse TV flow for denoising relate to each other.\n\n\n\\subsection{Generalized Definition of the Total Variation}\\label{sec:genTV}\nIn the previous paragraph we mentioned that in the one dimensional case the eigenfunction of the total variation functional is a block with constant height. If we want to proceed to the two dimensional case the eigenfunctions are not as clearly defined as in the 1D case. Different than before we now have the freedom to choose the norm body that is used within the infinity norm in the TV definition \\eqref{eq:TV}. This choice determines the shape of the eigenfunctions. 
To reflect this dependence on the chosen norm in the TV definition we introduce a generalized version of the TV definition:\n\\begin{equation}\\label{eq:TVgen}\nTV_\\gamma(u) := \\sup\\limits_{\\substack{\\varphi \\in C_C^{1}(\\Omega; \\mathbb{R}^d)\\\\ \\varphi(x) \\cdot n <\\gamma(n) \\forall n\\in\\mathbb{R}^{d}}}\\ -\\int_{\\Omega} u \\nabla\\cdot \\varphi \\mathrm{d}x.\n\\end{equation}\nHere, $\\gamma : \\mathbb{R}^d \\rightarrow \\mathbb{R} $ is a convex, positively 1-homogeneous function such that $\\gamma (x) > 0$ for $x \\neq 0$.\nIf $u$ is a function in $\\mathcal{W}^{1,1}(\\Omega)$ the primal definition of equation \\eqref{eq:TVgen} is given by\n\\begin{equation*}\nTV_\\gamma(u) := \\int_{\\Omega} \\gamma(\\nabla u) \\mathrm{d}x.\n\\end{equation*}\nIn both definitions the choice of $\\gamma$ determines the shape of the eigenfunctions. We refer to \n \\begin{equation*}\n \\mathcal{F}_{\\gamma} := \\large\\{ z \\in \\mathbb{R}^{d} : \\gamma(z) \\leq 1\\large\\}\n \\end{equation*}\nas the Frank diagram and the corresponding Wulff shape is defined as\n\\begin{align*}\\mathcal{W}_{\\gamma} :=& \\large\\{ z \\in \\mathbb{R}^{d} : z \\cdot x \\leq \\gamma(x) \\text{ for all } x\\in \\mathbb{R}^{d} \\large\\}\\\\\n= &\\large\\{ z \\in \\mathbb{R}^{d} : \\gamma^{\\ast}(x) := \\sup\\limits_{x\\in \\mathbb{R}^{d}} \\ \\frac{z \\cdot x}{ \\gamma(x)} \\leq 1 \\large\\}.\\end{align*}\nNote that from definition \\eqref{eq:TVgen} together with the definition of the Wulff shape we can conclude that $\\varphi(x) \\in \\mathcal{W}_{\\gamma}$. Therefore, for every choice of $\\gamma$, functions with the same shape as the Wulff shape are eigenfunctions of $TV_\\gamma$ (see \\cite{esedoglu2004} Theorem 4.1). The Frank diagram $\\mathcal{F}_{\\gamma}$ and the Wulff shape $\\mathcal{W}_{\\gamma}$ for the three most common choices of $\\gamma$ are presented in Table \\ref{tab:shapes}. For simplicity we will omit the subscript $\\gamma$ in $TV_\\gamma$ from hereon, but we will come back to it in the numerical results in section \\ref{sec:results}.\n\\begin{table}\n\\centering\n\\begin{tabular}{|c|c|c|}\n\\hline\n$\\gamma(x)$ & $\\mathcal{F}_{\\gamma}$ & $\\mathcal{W}_{\\gamma} $\\\\[7pt]\n\\hline\n$\\gamma = \\| \\cdot \\|_{1}$ & \\includegraphics[height = 0.05\\textheight]{img\/square_rot.png} & \\includegraphics[height = 0.05\\textheight]{img\/square.png}\\\\[6pt]\n\\hline\n$\\gamma = \\| \\cdot \\|_{2}$& \\includegraphics[height = 0.05\\textheight]{img\/circle.png} & \\includegraphics[height = 0.05\\textheight]{img\/circle.png}\\\\[6pt]\n\\hline\n $\\gamma= \\| \\cdot \\|_{\\infty}$ &\\includegraphics[height = 0.05\\textheight]{img\/square.png} & \\includegraphics[height = 0.05\\textheight]{img\/square_rot.png}\\\\\n\\hline\n\\end{tabular}\n\\caption{Examples for $\\gamma$ and the corresponding Frank diagrams $\\mathcal{F}_{\\gamma}$ and Wulff shapes $\\mathcal{W}_{\\gamma}$. The Wulff shape corresponds to the shape of the eigenfunctions of the $TV_{\\gamma}$ functional.}\n\\label{tab:shapes}\n\\end{table}\n\n\\subsection{Spectral Analysis for Nonlinear Functionals}\\label{sec:spec-analysisdenoising}\nIn the following section we will first review the basic ideas of spectral TV analysis before transferring those ideas to segmentation in the next section. The idea to decompose an image based on the basic TV elements was presented by Gilboa in \\cite{Gilboa2013,Gilboa2014}. 
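As a small illustration of Table \ref{tab:shapes}, the sketch below rasterizes the Wulff shapes $\mathcal{W}_{\gamma}$ for the three choices of $\gamma$, i.e. the indicator functions that act as prototypical eigenfunctions of $TV_{\gamma}$. Grid size and radius are arbitrary illustrative choices of our own.

```python
import numpy as np

def wulff_shape_indicator(gamma, size=129, radius=0.6):
    """Binary image of the Wulff shape W_gamma for gamma in {'l1', 'l2', 'linf'}.

    The Wulff shape of gamma = ||.||_p is the unit ball of the dual norm,
    e.g. gamma = ||.||_1 yields an axis-aligned square (the l_inf ball).
    """
    x = np.linspace(-1, 1, size)
    xx, yy = np.meshgrid(x, x)
    dual_norm = {
        "l1": np.maximum(np.abs(xx), np.abs(yy)),   # dual of l1 is l_inf
        "l2": np.hypot(xx, yy),                     # l2 is self-dual
        "linf": np.abs(xx) + np.abs(yy),            # dual of l_inf is l1
    }[gamma]
    return (dual_norm <= radius).astype(float)

square, disk, diamond = (wulff_shape_indicator(g) for g in ("l1", "l2", "linf"))
```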
These basic TV elements, called eigenfunctions of the total variation functional, are all functions $u \\in BV(\\Omega)$ that solve the nonlinear eigenvalue problem\n\\begin{equation}\\label{eq:eigenvalProb}\n\\lambda u \\in \\partial TV(u).\n\\end{equation}\nIn the previous section we already presented some examples of TV eigenfunctions (see e.g. Figure \\ref{fig:1dEF} and Table \\ref{tab:shapes}). A more general description of these eigenfunctions in case $\\gamma$ is isotropic is given in \\cite{Bellettini2002}. Bellettini et al. showed that all indicator functions $ \\mathbbm{1}_C(x)$ of a convex and connected set $C$ with finite perimeter $Per(C)$ which admit\n\\begin{equation}\\label{eq:l2eig}\n\\esssup_{p \\in \\partial C} \\kappa (p) \\leq \\frac{Per(C)}{Area(C)}\n\\end{equation}\nwhere $Area(C)$ denotes the area of $C$ and $\\kappa$ the curvature of $\\partial C \\in C^{1,1}$ are solutions of \\eqref{eq:eigenvalProb} and therefore eigenfunctions. Obtaining an analog condition for anisotropic, smooth and strictly convex choices of $\\gamma$ is straightforward but more challenging in the case of non-smooth or non-strictly convex choices of $\\gamma$. Candidates of these shapes are presented in \\cite{bellettini2001}. In \\cite{esedoglu2004} Esedoglu and Osher gave an example of a TV eigenfunction for $\\gamma = \\| \\cdot \\|_{1}$ that is not a Wulff shape (see section \\ref{sec:resultsshapes} for more details).\n\nIn order to detect eigenshapes at different scales in a given signal $f$ a scale space approach is needed. One way to define a scale space based on the total variation is given by the total variation flow \\cite{Andreu2001,Andreu2002,Bellettini2002,Burger2007,Steidl2004}. The TV flow arises when minimizing the total variation with steepest descent method and is defined as \n\\begin{equation}\\label{eq:tvflow}\n\\begin{aligned}\nu_t(t,x) &= -p(t,x) \\hspace{15pt} \\text{for } p(t,x) \\in \\partial TV(u(t,x))\\\\\nu(0,x) &= f(x)\n\\end{aligned}\n\\end{equation}\nwith Neumann boundary conditions. For $f(x) = \\mathbbm{1}_C(x)$, with $C$ defined as above, the unique solution of \\eqref{eq:tvflow} is $u(t,x) = (1 - \\lambda_C t) ^{+} \\mathbbm{1}_C(x)$ with $\\lambda_C = \\frac{Per(C)}{Area(C)}$ (cf. \\cite{Bellettini2002}). Hence, the time derivative $u_t(t,x)$ is given by the original signal multiplied with a scalar and $u$ is an eigenfunction. To obtain a suitable framework to decompose or filter images based on these eigenfunction Gilboa proposed to define a spectral framework that transforms eigenfunctions to peaks in the spectral domain. If $u(t,x)$ is a solution of \\eqref{eq:tvflow} the TV spectral transform and spectral response can be defined as\n\\begin{equation}\\label{eq:tvtransform}\n\\phi (t,x) = u_{tt}(t,x)\\cdot t \\quad\\text{and}\\quad S(t) = || \\phi(t,x) ||_{L^{1}(\\Omega)}.\n\\end{equation}\nNote, that there are alternative definitions of the spectral response presented in \\cite{Burger2015}. \n\nAnother approach to construct an forward scale space is based on the variational ROF problem. Instead of solving \\eqref{eq:tvflow} the ROF model\n\\begin{equation}\\label{eq:ROFscalespace}\n \\min_{u\\in BV(\\Omega)}\\frac{1}{2}|| u(x) - f(x)||_2^2 + t \\cdot TV(u).\n\\end{equation}\nis solved for different regularization parameters. Hence, $t$ is the (artificial) time variable that determines the scale comparable to $t$ in the TV-flow approach \\eqref{eq:tvflow}. One drawback of this approach is that there is no clear rule for the choice of different $t$'s. 
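Numerically, the forward scale space can be sampled by solving \eqref{eq:ROFscalespace} on a grid of scales $t_i$ and approximating \eqref{eq:tvtransform} by finite differences in $t$. The sketch below follows this recipe; the equidistant grid and the use of the scikit-image TV denoiser as a stand-in ROF solver (whose weight corresponds to $t$ only up to the solver's scaling) are our own assumed choices, since, as noted above, there is no canonical rule for selecting the $t_i$.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def tv_spectral_response(f, t_grid):
    """Approximate phi(t,x) = t * u_tt(t,x) and S(t) = ||phi(t,.)||_1
    from ROF solutions u(t,x) computed on a uniform grid of scales."""
    u = np.stack([denoise_tv_chambolle(f, weight=t) for t in t_grid])
    dt = t_grid[1] - t_grid[0]
    u_tt = np.gradient(np.gradient(u, dt, axis=0), dt, axis=0)  # second derivative in t
    phi = t_grid[:, None, None] * u_tt
    S = np.abs(phi).reshape(len(t_grid), -1).sum(axis=1)
    return phi, S

# Two piecewise constant structures of different scale.
f = np.zeros((64, 64))
f[8:24, 8:24] = 1.0          # small, bright square
f[32:60, 32:60] = 0.5        # larger, fainter square
t_grid = np.linspace(0.01, 0.5, 40)
phi, S = tv_spectral_response(f, t_grid)
```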
The spectral transform and response function can be equivalently defined as in \\eqref{eq:tvtransform}. \n\nA third, but significantly different, scale space approach is an inverse scale approach. The inverse scale space flow is defined as\n\\begin{equation}\\label{eq:inverseflow}\n\\begin{aligned}\np_s(s,x) &= f(x) - u(s,x) \\hspace{15pt} \\text{for } p(s,x) \\in \\partial TV(u(s,x))\\\\\nu(0,x) &= p(0,x) = 0.\n\\end{aligned}\n\\end{equation}\nThus, the flow is now defined on the dual variable $p \\in \\partial TV(u)$ and $s$ is the time variable determining the scale. Note that in \\cite{Burger2005} it was shown that the iterative Bregman-L2-TV model \\eqref{eq:Bregman-ROF} can be associated with a discretization of \\eqref{eq:inverseflow} for $\\frac{1}{\\alpha} \\rightarrow 0$.\nAs \\eqref{eq:inverseflow} is an inverse approach the time variable $t$ can be associated with $\\frac{1}{s}$. In this case the spectral transform and response functions are defined as\n\\begin{equation}\\label{eq:tvtransform2}\n\\phi (s,x) = u_{s}(s,x) \\quad\\text{and}\\quad S(s) = || \\phi(s,x) ||_{L^{1}(\\Omega)},\n\\end{equation}\nwhere $u(s,x)$ is the solution of \\eqref{eq:inverseflow}. Note, that for small $s$ $\\phi(s,x)$ now measures changes in the coarse scales. See \\cite{Burger2015} for more details.\n\nWith all three approaches we are able to transform a signal to the spectral domain and detect different scales based on TV eigenfunctions. If we assume that $\\phi(t,x)$ is integrable over time, the original signal $f$ can be reconstruct via\n\\begin{equation*}\nf(x) = \\int_0^{\\infty} \\phi(t,x) dt + \\bar{f}\n\\end{equation*}\nwhere $\\bar{f}$ is the average of $f$. Filters can be defined with $\\phi_{H}(t,x) = H(t)\\phi(t,x)$ via \n\\begin{equation*}\nf_{H}(x) = \\int_0^{\\infty} \\phi_{H}(t,x) dt + H(\\infty)\\bar{f}.\n\\end{equation*}\n\n\n\\subsection{Spectral Response of Multiscale Segmentation}\\label{sec:spec-analysissegm} \nIn section \\ref{sec:bregman-cv} we presented two variational models to detect segmentations of a given image $f$ at different scales. To decompose the segmentation into different scales and detect important scales or clusters of scales in the segmentation we want to transfer the idea of spectral analysis based on the total variation (cf. sec. \\ref{sec:spec-analysisdenoising}) to segmentation. Therefore we need to find a suitable transformation of the segmentation $u$ to the spectral domain and vice versa. Note that our goal is not to reconstruct the original signal $f$ or filtered versions of $f$ but the reconstructed function should be a segmentation itself. To do so we make use of the idea that eigenfunctions of the TV functional should be transformed to peaks in the spectral domain. In the following we will derive this spectral transform function for a forward scale space and an inverse scale space approach. To represent the forward scale space we associate the regularization parameter $\\alpha$ in the convex version of the original model by Chan and Vese \\eqref{eq:convCV} with the artificial time variable $t$. That means, we solve\n \\begin{equation}\\label{eq:convCV-scale}\n\\int_\\Omega u ((f(x)-c_1)^2 - (f(x)-c_2)^2) dx + t \\cdot TV(u) \\longrightarrow \\min_{u \\in BV(\\Omega),~u(x) \\in [ 0,1 ]}\n\\end{equation}\nfor different $t$.This is comparable to the variational approach in \\eqref{eq:ROFscalespace}. So far, we could not find a forward scale space representation that can be associated with a flow on $u$ comparable to the TV-flow. 
One difficulty is that the optimality condition of this model, i.e. $0 = (f-c_1)^2 - (f-c_2)^2 + \\alpha p$ with $p \\in TV(u)$, has no direct dependence on $u$. The inverse scale space representation is based on the Bregman-CV model we introduced in \\eqref{eq:Bregman-CV}. The optimality condition in each step of the iterative Bregman-CV strategy is given as\n \\begin{equation*}\n 0 = \\left((f-c_1)^2 - (f-c_2)^2\\right) + \\alpha \\left( p_k - p_{k-1}\\right) \\text{ with } p_k \\in TV(u_k) \\ \\forall \\ k \\\\.\n \\end{equation*}\nThe resulting equation\n\\begin{equation*}\n\\frac{p_k - p_{k-1}}{\\frac{1}{\\alpha}} = (f-c_2)^2 - (f-c_1)^2\n \\end{equation*}\ncan be interpreted as a discretization with stepsize $\\frac{1}{\\alpha}$ of\n\\begin{equation}\\label{eq:invsegslow}\n\\begin{aligned}\np_s(s,x) &= (f(x)-c_2)^2 - (f(x)-c_1)^2\\hspace{15pt} \\text{with } p(s,x) \\in \\partial TV(u(s,x))\\\\\np(0,x) &= 0.\n\\end{aligned}\n\\end{equation}\nAgain, $s$ is the time variable that is inverse to the time variable $t$ in \\eqref{eq:convCV-scale}. We refer to this flow as the inverse scale space segmentation flow. Note that within this flow description there is no direct dependence on $u$ but $u$ is only indirectly given by $p \\in \\partial TV(u)$. Yet, when looking for solutions of this flow at different times $s$, we solve the Bregman-CV model for multiple Bregman iterations and there the corresponding $u$ is available. \n\n\\begin{figure}[t]\n \\centering \n \\subfigure[Evolution of $u$ in \\eqref{eq:convCV-scale}.]{\\includegraphics[width=0.4\\textwidth]{img\/peak_segm.png}}\\quad \\quad\n \\subfigure[Evolution of $u$ in \\eqref{eq:invsegslow}.]{\\includegraphics[width=0.4\\textwidth]{img\/peak_segm_bregman.png}}\n \\caption{Illustration of the evolution of $u\\in\\{0,1\\}$ over time at a fixed point when $f$ is a TV eigenfunction shown for the forward (a) and inverse scale space approach (b). Note, due to the binary constraint on u the evolution of $u$ is already a step function, i.e. $u(x)$ is either part of the segmentation (u(x)=1) or not (u(x)=0).}\n\\label{fig:peaks} \n\\end{figure}\n\nTo transform the eigenfunctions to peaks in the spectral domain we define the spectral transform function as follows:\n\\begin{equation}\\label{eq:spectransseg}\n\\phi (t,x) =\\begin{cases} -u_{t}(t,x)& \\text{ (forward case)}\\\\\n\\ \\ \\ u_t(t,x)& \\text{ (inverse case)}\\end{cases}\\\\[3pt]\n\\end{equation}\nThis definition is motivated by Figure \\ref{fig:peaks} where the evolution of TV eigenfunctions over time in \\eqref{eq:convCV-scale} and \\eqref{eq:invsegslow} is illustrated. We can see, that the first derivative (in a distributional sense) is a peak at that time point where the eigenfunction vanishes or appears respectively. For the spectral response function we use the definition\n\\begin{equation}\n\tS(t) = || \\phi(t,x) ||_{L^{1}(\\Omega)}\n\t\\label{eq:spectralResponse}\n\\end{equation}\nintroduced by Gilboa (\\cite{Gilboa2013,Gilboa2014}). Here, the influence of certain scales to the segmentation is encoded. Using this function $\\phi$ and assuming integrability over time we can get the following backtransformation:\n\\begin{equation*}\nf^{\\text{seg}}(x) = \\int_0^{\\infty} \\phi(t,x) dt,\n\\end{equation*}\nwhere $f^{\\text{seg}}(x):= (f - c_2)^2 - (f-c_1)^2 + \\frac{1}{2} < \\frac{1}{2}$ is the results of the simple clustering problem\n$\\min_{u(x)\\in \\{0,1\\}} \\int_{\\Omega} u\\left((f(x)-c_1)^2 - (f(x) - c_2)^2\\right)\\mathrm{d}x$. 
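In the discrete setting the transform \eqref{eq:spectransseg} and the response \eqref{eq:spectralResponse} reduce to differences between consecutive segmentations and their $L^1$ norms. The following sketch computes both from an already available stack of segmentations $u_k$ (e.g. the iterates of the Bregman-CV model); the small synthetic stack at the end is only a hypothetical illustration.

```python
import numpy as np

def segmentation_spectral_response(u_stack, inverse=True):
    """Discrete spectral transform and response for a sequence of segmentations.

    u_stack : array of shape (K, H, W) with binary segmentations ordered in
              'time' (Bregman index k in the inverse case, scale t otherwise).
    Returns phi (difference images) and the response S per time step.
    """
    du = np.diff(u_stack, axis=0)          # discrete time derivative u_t
    phi = du if inverse else -du           # sign convention of eq. (spectransseg)
    S = np.abs(phi).reshape(phi.shape[0], -1).sum(axis=1)
    return phi, S

# Hypothetical example: coarse object present from the start, fine object enters later.
u_stack = np.zeros((3, 32, 32))
u_stack[:, 4:20, 4:20] = 1            # coarse scale, segmented from the first step
u_stack[2, 24:28, 24:28] = 1          # finer object enters at the last Bregman step
phi, S = segmentation_spectral_response(u_stack, inverse=True)
```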
By choosing a function $H$ with $H(t) \\in \\{ 0,1 \\}$ certain scales of interest can be filtered via\n\\begin{equation}\\label{eq:filterseg}\nf^{\\text{seg}}_{H}(x) = \\int_0^{\\infty} \\phi_{H}(t,x) dt \\ \\ \\text{ with } \\ \\ \\phi_{H}(t,x) = H(t)\\phi(t,x).\n\\end{equation}\nWith this framework we can easily segment an image and simultaneously detect important scales in this segmentation instead of using the spectral TV analysis as in Section \\ref{sec:spec-analysisdenoising} and segmenting separately. Moreover, using the filtering approach we can easily construct segmentations of only certain scales by filtering (cf. \\eqref{eq:filterseg}). Examples are shown in section \\ref{sec:results}.%\n\\section{Numerical Methods}\\label{sec:numrealization}\nOur novel approach introduced in the two previous sections consists of two parts. First we have to solve the Bregman-CV model \\eqref{eq:Bregman-CV} introduced in section \\ref{sec:bregman-cv}. Afterwards we can analyze the detected scales using the spectral response function \\eqref{eq:spectralResponse} introduced in section \\ref{sec:spec-analysissegm}. Therefore an efficient solution of the Bregman-CV model is required. In the following section we will give a very brief introduction into primal-dual optimization schemes and then show how this can be used to solve \\eqref{eq:Bregman-CV}. We close the section by a speed comparison between a Matlab and a parallelized C\/Mex implementation of our code. \n\n\\subsection{Primal-Dual Optimization Methods}\nTo solve nonlinear problems of the form\n\\begin{equation}\\label{eq:primalprob}\n \\min_{x \\in X} \\ F(Kx) + G(x)\n \\end{equation}\nwith $F$ and $G$ being proper, convex, lower-semicontinuous functions, primal-dual optimization methods became very popular in the last years. Instead of minimizing the primal problem \\eqref{eq:primalprob} they make use of the primal-dual formulation of this problem given by\n\\begin{equation}\\label{eq:primaldualprob}\n\\min_{x\\in X} \\max_{y\\in Y}\\ \\langle Kx,y\\rangle + G(x) - F^{\\ast}(y)\n\\end{equation}\nwith $G: X \\rightarrow [0, +\\infty]$ and $F^{\\ast}:Y \\rightarrow [0, +\\infty]$ being the convex-conjugate of $F$. By updating in every iteration step a primal and a dual variable these methods are able to avoid some difficulties that arise when working on the purely primal or dual problem. One example is the minimization of variational methods with TV regularization. If the gradient is zero, the TV functional is not differentiable which leads to problems in purely primal minimization schemes like gradient descent. Some examples for primal-dual minimization algorithms are the PDHG algorithm \\cite{Zhu2008}, a generalization of PDHG by Esser et al. \\cite{Esser2010}, the Split Bregman algorithm by Goldstein and Osher \\cite{Goldstein2009}, Bregman iterative algorithms \\cite{Yin2008}, the second-order CGM algorithm \\cite{Chan1999} and inexact Uzawa methods \\cite{Zhang2011}.\n\nIn \\cite{Chambolle2011} Chambolle and Pock proposed an algorithm which can be seen as a generalization of PDHG as well. This algorithm was originally proposed by Pock et al. in \\cite{Pock2009} to minimize a convex relaxation of the Mumford-Shah functional. The efficient first-order primal-dual algorithm to minimize general problems of the form \\eqref{eq:primaldualprob} is presented in Algorithm \\ref{alg:cp}. 
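For illustration, a compact sketch of the iteration stated in Algorithm \ref{alg:cp} below is given here, instantiated for the ROF problem \eqref{eq:ROF}, for which both resolvents have simple closed forms (a quadratic proximal step and a pointwise projection onto the dual ball). The discrete gradient, its adjoint and the step sizes are standard choices of our own and are not prescribed by this paper.

```python
import numpy as np

def grad(u):
    """Forward-difference gradient with Neumann boundary (the operator K)."""
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return np.stack((gx, gy))

def grad_adj(p):
    """Adjoint K^T of the forward-difference gradient (equals minus divergence)."""
    gx, gy = p
    ax = np.zeros_like(gx); ay = np.zeros_like(gy)
    ax[0, :] = -gx[0, :]; ax[1:-1, :] = gx[:-2, :] - gx[1:-1, :]; ax[-1, :] = gx[-2, :]
    ay[:, 0] = -gy[:, 0]; ay[:, 1:-1] = gy[:, :-2] - gy[:, 1:-1]; ay[:, -1] = gy[:, -2]
    return ax + ay

def chambolle_pock_rof(f, alpha, tau=0.25, sigma=0.25, theta=1.0, n_iter=200):
    """Primal-dual iteration for G(u) = 1/2||u - f||^2 and F(Ku) = alpha*TV(u)."""
    u = np.zeros_like(f); u_bar = u.copy()
    p = np.zeros((2,) + f.shape)
    for _ in range(n_iter):
        # Dual step: resolvent of F* is the pointwise projection onto {|p| <= alpha}.
        p = p + sigma * grad(u_bar)
        p = p / np.maximum(1.0, np.sqrt(p[0] ** 2 + p[1] ** 2) / alpha)
        # Primal step: resolvent of the quadratic fidelity term.
        u_new = (u - tau * grad_adj(p) + tau * f) / (1.0 + tau)
        # Extrapolation step.
        u_bar = u_new + theta * (u_new - u)
        u = u_new
    return u

# Usage sketch on a noisy square (tau*sigma*||K||^2 <= 1 is satisfied for ||K||^2 <= 8).
f = np.zeros((64, 64)); f[20:44, 20:44] = 1.0
f += 0.1 * np.random.randn(64, 64)
u = chambolle_pock_rof(f, alpha=0.2)
```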
\n$ (I + \\sigma \\partial F^{\\ast})^{-1}$ and $(I + \\tau \\partial G)^{-1}$ are the resolvent operators of $F^{\\ast}$ and $G$ respectively, which are defined through \n\\eq{y = (I + \\tau \\partial G)^{-1}(x) = \\argmin_{y} \\left\\{ \\frac{\\| y-x\\|^2}{2\\tau} + G(y) \\right\\}.}\n\\begin{algorithm}\n\t\\caption{First-order primal-dual algorithm by Chambolle and Pock (\\cite{Chambolle2011})}\n\t\\label{alg:cp}\n\t{\\fontsize{10}{10}\\selectfont\n\t\\begin{algorithmic}\n\t\t\\State \\textbf{Parameters:} $\\tau, \\sigma > 0$, $\\theta \\in [0,1]$\n\t\t\\State \\textbf{Initialization:} $n=0,\\ x^0 \\in X,\\ y^0 \\in Y, \\bar{x}^0 = x^0$\\\\\n\t\t\\State \\textbf{Iteration:}\\\\\n \\State \\textbf{for } $(n\\geq 0)$\\ \\textbf{ do }\n\t\t\\begin{enumerate}\\itemsep5pt\n\t\t\t\\item $y^{n+1} = (I + \\sigma \\partial F^{\\ast})^{-1}(y^n + \\sigma K \\bar{x}^n)$.\n\t\t\t\\item $x^{n+1} = (I + \\tau \\partial G)^{-1}(x^n - \\tau K^{\\ast} y^{n+1})$.\n\t\t\t\\item $\\bar{x}^{n+1} = x^{n+1} + \\theta (x^{n+1} - x^{n})$.\n\t\t\t\\item Set $n=n+1$.\n\t\t\\end{enumerate}\\\\\n \n\t\t\\State \\textbf{end for}\\\\\n\t\t\\State \\Return $x^n$\n\t\\end{algorithmic}\n\t}\n\\end{algorithm}\n\n\n\\subsection{Numerical Realization Bregman-CV}\nIn order to use Algorithm \\ref{alg:cp} to solve the constrained problem \\eqref{eq:Bregman-CV} we reformulate the problem as\n\\begin{equation}\\label{eq:primBregCV}\n \\int_{\\Omega} u\\left((f-c_1)^2 - (f-c_2)^2\\right)\n+ \\text{id}_{[0,1]}(u) + \\alpha \\ \\left(TV(u)-\\langle p_k,u\\rangle\\right) \\longrightarrow \\min_{u\\in BV(\\Omega)}\n\\end{equation}\nwith $p_k \\in \\partial TV(u_k)$ and $p_0 = 0$. $\\text{id}_{[0,1]}(u)$ is the indicator function of the interval $[0,1]$ defined as 0 if $u \\in [0,1]$ and $\\infty$ otherwise. To derive a minimization strategy based on Algorithm \\ref{alg:cp} we set $x = u$, $K(u) = \\grad u$ and\n\\begin{equation*}\nF(u) = \\| u\\|_{1} \\text{ and } G(u) = \\text{id}_{[0,1]}(u) + \\int_{\\Omega} u\\left( (f - c_1)^2 - (f - c_2)^2 - \\alpha p_k\\right).\n\\end{equation*}\nThe convex-conjugate of $F(u) = \\| u\\|_{1}$ is given by $F^{\\ast}(p) = \\delta_{P}(p)$ with $P = \\left\\{p: \\| p\\|_{\\infty} \\leq 1\\right\\}$ and\n\\begin{equation}\n\t\\delta_{P}(p) = \\begin{cases} 0 &\\mbox{if } p \\in P\\\\ \\infty &\\mbox{if } p\\notin P\\end{cases} .\n\\end{equation}\nTogether with \\eqref{eq:primaldualprob} we derive the primal-dual variant of \\eqref{eq:primBregCV}:\n\\eqn{\\label{eq:dualBregCV}\\langle \\grad u,p\\rangle + \\text{id}_{[0,1]}(u) + \\int_{\\Omega} u\\left[ (f - c_1)^2 - (f - c_2)^2 - \\alpha p_k\\right] - \\alpha \\delta_{P}(p) \\longrightarrow \\min_{u}\\max_{p}.}\nThe resolvent operators for $G$ and $F^{\\ast}$ are defined through\n\\begin{align}\nu = (I + \\tau \\partial G)^{-1}(\\tilde{u}) &= \\text{Proj}_{[0,1]}\\left[\\tilde{u} - \\tau\\left((f - c_1)^2 - (f - c_2)^2 - \\alpha p_k\\right)\\right]\\notag\\\\\n& = \\max\\left(0,\\min\\left(1, \\tilde{u} - \\tau\\left((f - c_1)^2 - (f - c_2)^2 - \\alpha p_k\\right)\\right)\\right)\\label{eq:proxprimal}\n\\end{align}\nand\n\\begin{equation}\np = (I + \\sigma \\partial F^{\\ast})^{-1}(\\tilde{p}) = \\text{Proj}_{\\left\\{p: \\| p\\|_{\\infty} \\leq 1\\right\\}}\\left(\\tilde{p}\\right).\\label{eq:proxdual}\n\\end{equation}\nNote that the $L^{\\infty}$ norm $\\| p\\|_{\\infty}$ is in the discrete setting defined as $\\| p\\|_{\\infty} = \\max_{i,j} \\large\\{\\gamma^{\\ast}(p_{i,j})\\large\\}$ where the choice of $\\gamma$ determines the shape of the 
eigenfunctions of the TV functional. For $\\gamma = \\| \\cdot \\|_{\\ell^{p}}$ with $p = 1$ the unit ball defined by $$\\left\\{ (x,y) \\in \\Omega | \\max\\{|x|,|y|\\} \\leq 1\\right\\}$$ is an TV eigenfunction, for $p = 2$ the unit ball defined by $$\\left\\{ (x,y) \\in \\Omega | \\sqrt{|x|^2+|y|^2} \\leq 1\\right\\}$$ and for $p=\\infty$ the unit ball defined by $$\\left\\{ (x,y) \\in \\Omega | (|x|+|y|) \\leq 1\\right\\}.$$ However these are not the only eigenfunctions.\nWith \\eqref{eq:proxprimal} and \\eqref{eq:proxdual}, we derive the primal-dual algorithm presented in Algorithm \\ref{alg:cpforcv} to minimize \\eqref{eq:Bregman-CV}. Note that we are not updating the constants $c_1$ and $c_2$, but start with a good estimate and leave it fixed. To a certain extend the varying regularization parameter can compensate for an error in those constants. Some examples are presented in Section \\ref{sec:results}.\n\\begin{algorithm}\n\t\\caption{First-order primal-dual algorithm to solve \\eqref{eq:Bregman-CV}.}\n\t\\label{alg:cpforcv}\n\t{\\fontsize{10}{10}\\selectfont\n\t\\begin{algorithmic}%\n\t\t\\State \\textbf{Parameters:} data $f$, reg. param. $\\alpha \\geq 0$, $\\tau, \\sigma > 0$, $\\theta \\in [0,1]$, \t\t\t $maxIts \\in \\mathbb{N}, \\ maxBregIts \\in \\mathbb{N}$\n\t\t\\State \\textbf{Initialization:} $l=1,\\ u^0_1=0, \\ p_0:=0, \\bar{u}^0 = u^0$\\\\\n\t\t\\State \\textbf{Iteration:}\\\\\n\t\t\\State \\textbf{while } \\big($k$ 100 mCrab) and $\\sim 0.2-0.04$ \\% in faint neutron stars (10-100 mCrab), taking into account expected observing times and prior knowledge of the orbits. A 10 m$^2$ instrument would also be able to detect oscillations in individual Type I X-ray bursts down to amplitudes of 0.6\\% rms (for a 1s exposure and a typical burst of brightness of 4 Crab); by stacking bursts sensitivity would improve. \n\n\\subsubsection{Spin distribution and evolution}\n\nMapping the spin distribution more fully, so that the accreting neutron star sample is no longer limited by small number statistics, is also extremely valuable. One of the big open questions in stellar evolution is how precisely the recycling scenario progresses - and whether it does indeed account for the formation of the entire MSP population. The discovery of the first accreting millisecond X-ray pulsar by \\citet{Wijnands98}, and the recent detection of transitional pulsars, that switch from radio pulsars to accreting X-ray sources \\citep{Archibald09,Papitto13,Bassa14,Patruno14,Stappers14,Bogdanov14,Bogdanov15}, seems to confirm the basic picture. However key details of the evolutionary process, in particular the specifics of mass transfer and magnetic field decay, remain to be resolved \\citep[see for example the discussion in][]{Tauris12}. Comparison of the spin distributions of the MSPs and the accreting neutron stars is a vital part of that effort. \n\nThe torques that operate on rapidly spinning accreting neutron stars also remain an important topic of investigation. Accretion torques, mediated by the interaction between the star's magnetic field and the accretion flow \\citep[first explored in detail by][]{Ghosh78,Ghosh79a,Ghosh79b}, clearly play a very large role \\cite[see][for a review of more recent work]{Patruno12}. 
There are also several mechanisms, such as core r-modes \\citep[a global oscillation of the fluid, restored by the Coriolis force, see][for a recent review]{Haskell15} and crustal mountains \\citep[see][for a recent review]{Chamel13}, that may generate gravitational waves and hence a spin-down torque. These mechanisms are expected to depend in part on the EOS \\citep[see, for example,][]{Ho11,Moustakidis15}. In addition there are potential interactions between internal magnetic fields and an unstable r-mode that may be important \\citep[see for example][]{Mendell01}, and the physics of the weak interaction at high densities also becomes relevant, since weak interactions control the viscous processes that are an integral part of the gravitational wave torque mechanisms \\citep{Alford12}. \n\nTorque mechanisms can be probed in two ways: firstly, by examining the maximum spin reached, which may be below theoretical break-up rates, since both magnetic torques and gravitational wave torques may act to halt spin-up \\citep{Bildsten98,Lamb05,Andersson05}; and secondly by high precision tracking of spin evolution, enabled by increased sensitivity to pulsations. Whilst extracting EOS information from the spin distribution and spin evolution will clearly be more challenging than the clean constraint that would come from the detection of a single rapid spin, it is nonetheless an important part of the models and one that can be tested. Ultimately, more and better quality timing data are needed to confirm if it is, indeed, the magnetic field that regulates the spin of the fastest observed accreting neutron stars or if additional torques are needed. On the one hand, it has been argued that the spin-evolution during and following an accretion outburst of IGR~J00291+5934 is consistent with the `standard' magnetic accretion model \\cite{Falanga05,Patruno10,Hartman11}. On the other hand, the results are not quite consistent and there is still room for refinements and\/or additional torques \\cite{Andersson14,Ho14}. Whether this means that there is scope for a gravitational-wave element or not remains unclear \\cite{Ho11,Haskell12}, but a large area X-ray instrument should take us much closer to the answer.\n\nPrecision ephemerides from X-ray timing are very important enablers for simultaneous gravitational wave searches, since one has to fold long periods of data to detect the weak signals, and the gravitational wave frequency depends on spin rate in both mountain and r-mode mechanisms. Without such ephemerides, the number of templates that must be searched makes detection of continuous wave emission from sources like Sco X-1 very difficult \\citep{Watts08b}. This is very clear when one compares the limits currently obtained for continuous wave gravitational wave searches where ephemerides are known \\citep[the radio pulsars,][]{Abbott07,Abbott08} to those obtained for systems where the spin is not known (non-pulsing systems like Cas A, \\citealt{Abadie10} and Sco X-1, \\citealt{Aasi14}). A direct detection of gravitational waves from such a system would of course have immediate consequences for potential gravitational wave emission mechanisms, and any EOS dependence. \n\n\n\\subsection{Asteroseismology}\n\nAsteroseismology is now firmly established as a precision technique for the study of the interiors of normal stars. As such the detection of seismic vibrations in neutron stars was one of RXTE's most exciting discoveries. 
They were found in magnetars, young, highly magnetized neutron stars that emit bursts of hard X-ray\/gamma-rays powered by decay of the strong magnetic field \\citep[see][for a review]{Woods06}. What triggers the flares remains unknown, but most likely involves either starquakes or magnetospheric instabilities. Rapid reconfiguration\/reconnection powers the electromagnetic burst: however the events are so powerful that it had already been suggested that they might set the star vibrating \\citep{Duncan98}. These vibrations, which manifest as Quasi-Periodic Oscillations (QPOs) in hard X-ray emission, were first detected in the several hundred second long tails of the most energetic giant flares from two magnetars \\citep{Israel05,Strohmayer05,Strohmayer06a,Watts06}. Similar QPOs have since been discovered during storms of short, low fluence bursts from several magnetars \\citep{Huppenkothen13,Huppenkothen14a,Huppenkothen14b}. The QPOs have frequencies that range from 18 to 1800 Hz. \n\nSeismic vibrations offer us a unique way to explore the interiors of neutron stars. The QPOs were initially tentatively identified with torsional shear modes of the neutron star crust, and torsional Alfv\\'en modes of the highly magnetized fluid core. These identifications were based on the expected mode frequencies, which are set by both the size of the resonant volume (determined by the star's radius) and the relevant wave speed. The fact that the oscillations must be computed in a relativistic framework introduces additional dependences, and for this reason they can be used to diagnose $M$ and $R$ (see for example \\citealt{Samuelsson07} for relativistic crust modes, and \\citealt{Sotani08} for relativistic core Alfv\\'en modes). Seismic vibrations also take us beyond the simple $M$-$R$ relation, constraining the non-isotropic components of the stress tensor of supranuclear density material.\n\nIn fact, for a star with a magnetar strength magnetic field, crustal vibrations and core vibrations should couple together on very short timescales \\citep{Levin07}. The current viewpoint is that the QPOs are associated with global magneto-elastic axial (torsional) oscillations of the star \\citep{Glampedakis06,Lee08,Andersson09,Steiner09,vanHoven11,vanHoven12,Colaiuda11,Colaiuda12,Gabler12,Gabler13a, Passamonti13,Passamonti14,Asai14,Glampedakis14}. Since coupled oscillations depend on the same physics, they have frequencies in the same range as the natural frequencies of the isolated elements. \n\nCurrent magneto-torsional oscillation models can in principle easily explain the presence of oscillations at 155 Hz and below. Until recently there was a significant problem with the higher frequency QPOs, which appeared to persist much longer than the models predicted, but this has now been resolved \\citep{Huppenkothen14c}. Issues currently being addressed include questions of emission \\citep{Timokhin08,DAngelo12,Gabler14}, excitation \\citep{Link14}, coupling to polar Alfv\\'en modes \\citep{Lander10,Lander11,Colaiuda12}, and resonances between the crust and core that might develop as a result of superfluid effects \\citep{Gabler13b,Passamonti14}. The latter in particular can have a large effect on the characteristics of the mode spectrum, and since superfluidity is certainly present in neutron stars, mode models must start to take this into account properly before we can make firm mode identifications. 
What is now clear is that mode frequencies depend not only on $M$ and $R$, but also on magnetic field strength\/geometry, superfluidity, and crust composition.\n\nSeveral papers have specifically explored EOS dependencies in neutron star asteroseismology \\citep{Strohmayer05,Strohmayer06a,Watts07,Samuelsson07,Sotani08,Steiner09,Gabler12}. Figure \\ref{seis} illustrates the constraints that result when one models the QPOs detected in the SGR 1806--20 hyperflare as torsional shear oscillations of the neutron star crust \\citep{Samuelsson07}. This model is simple, in that it does not include magnetic coupling between crust and core. However it gives some idea of the types of constraints on $M$ and $R$ that can result from the detection of several frequencies in a single event, where having multiple simultaneous frequencies assists mode identification (the burst storm identifications discussed above involve combining data from multiple bursts, so are less useful in this regard). \n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.49\\textwidth]{seis.png}\n\\caption{$M$-$R$ diagram showing the seismological constraints for the soft gamma-ray repeater SGR 1806--20 using the relativistic torsional crust oscillation model of \\citet{Samuelsson07}, in which the 29 Hz QPO is identified as the fundamental and the 625 Hz QPO as the first radial overtone. The neutron star lies in the box where the constraints from the two frequency bands overlap. Once QPOs are detected, frequency measurement errors are negligible for this purpose. This model is very simple (it does not include crust-core coupling), but it gives some idea of the type of constraints that might result from the detection of a harmonic sequence of seismic vibrations. More sophisticated models that take into account coupling and the other relevant physical effects are under development.}\n\\label{seis}\n\\end{figure}\n\nSadly giant flares are rare, occurring only every $\\sim$ 10 years. Ideally therefore we would like the ability to make similar detections in the more frequent but less bright events. Intermediate flares, which are detected roughly once per year, have similar peak fluxes and spectra to the tails of the giant flares, but are too brief ($\\sim$ 1 s) to permit detection of similar QPOs with current instrumentation. A $\\sim 10$ m$^2$ hard X-ray timing instrument would be sensitive to QPOs in intermediate flares with fractional amplitudes similar to those observed in the tails of giant flares, provided that the collimator permitted the transmission of higher energy (above 30 keV) photons. The latter is important since intermediate flares are unpredictable and likely to be observed off-axis, although one can increase the odds of capturing them by scheduling pointed observations during periods of high burst activity \\citep{Israel08}. Theoretically the expectation of similar fractional amplitudes is justified: mode excitation at substantial amplitude even by events releasing energies typical of intermediate flares is feasible \\citep{Duncan98}. Empirically, QPOs in giant flares tend to appear rather late in the tails, when luminosities are similar to those in intermediate flares, and given that they appear and disappear multiple times in these tails, may be triggered by magnetic starquakes at these ``low'' fluxes \\citep{Strohmayer06a}. The development of similar fractional amplitude QPOs in intermediate flares is thus considered plausible. 
This idea has also been given a boost by the discovery of QPOs in short burst storms from two different magnetars \\citep{Huppenkothen13,Huppenkothen14a,Huppenkothen14b} including one that had also shown QPOs in a giant flare, since individually these bursts are much less energetic than the intermediate flares. The amplitudes at which the oscillations were detected in the burst storms are comparable to those of the detections in the giant flares. Upper limits on the presence of QPOs in the intermediate flares observed by current instruments, however, are above this level. \n\n\n\\section{Summary}\n\nNeutron stars are unique testing grounds for fundamental nuclear physics, the only place where one can study the equation of state of cold matter in equilibrium, at up to ten times normal nuclear densities. The stable gravitational confinement permits the formation of matter which is extremely neutron rich, and which may involve matter with strange quarks. The relativistic stellar structure equations show that there is a one to one mapping between the bulk properties of neutron stars, in particular their mass and radius, and the dense matter EOS. Efforts to measure neutron star properties for this purpose are being made by both electromagnetic and gravitational wave astronomers. In this Colloquium, we explored the techniques available using hard X-ray timing instruments: waveform fitting, spin measurements, and asteroseismology. Hard X-ray timing offers unique advantages in terms of the numbers of known sources, and the potential for cross-checks using independent techniques and source classes. \n\nThe previous generation of hard X-ray timing telescopes, in particular RXTE (a 0.6 m$^2$ telescope which operated from 1995 to 2012), uncovered many of the phenomena described in this Colloquium. To exploit them to measure the EOS, however, requires larger area instruments, and various mission concepts are now being proposed. These have included the 3 m$^2$ Advanced X-ray Timing Array \\citep[AXTAR:][]{Ray10}, and the 8.5-10 m$^2$ Large Observatory for X-ray Timing, LOFT \\citep[][see also the LOFT ESA M3 Yellow Book \\texttt{http:\/\/sci.esa.int\/loft\/53447-loft-yellow-book}]{Feroci12,Feroci14}. None have yet been successful in securing a launch slot. However the advantages that such a telescope would offer in terms of measuring the dense matter equation of state are sufficiently highly compelling that mission concept development continues apace. \n\n\\begin{acknowledgments}\nThe authors would like to thank all of the members of the LOFT Consortium, in particular the members of the LOFT Dense Matter Working Group, for useful discussions. ALW acknowledges support from NWO Vidi Grant 639.042.916, NWO Vrije Competitie Grant 614.001.201, and ERC Starting Grant 639217 CSINEUTRONSTAR. The work of KH and AS is supported by ERC Grant No. 307986 STRONGINT and the DFG through Grant SFB 634. MF, GI, and LS acknowledge support from the Italian Space Agency (ASI) under contract I\/021\/12\/0. LT acknowledges support from the Ramon y Cajal Research Programme and from Contracts No. FPA2010-16963 and No. FPA2013-43425-P of Ministerio de Economia y Competitividad, from FP7-PEOPLE-2011-CIG under Contract No. PCIG09-GA-2011-291679 as well as NewCompStar (COST Action MP1304). SMM acknowledges support from NSERC. JP acknowledges the Academy of Finland grant 268740. AP acknowledges support from NWO Vidi Grant 639.042.319. AWS was supported by the U.S. 
Department of Energy Office of Nuclear Physics.\n\\end{acknowledgments}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\n\n\nThe field of gravitational lensing has seen exponential growth\nin its physical \\cite{SEF,KSW} and mathematical \\cite{PLW,PWreview,Petters} infrastructure,\nyielding diverse applications in astronomy and cosmology.\nIn this paper we address gravitational lensing in the setting of one of the\nmost important non-spherically symmetric and non-static solutions\nof the Einstein equations, namely, Kerr black holes. This has already been the focus of many studies. Indeed, several authors have explored the Kerr's caustic structure, as well as Kerr black hole lensing in the\nstrong deflection limit, focusing on leading order effects in light\npassing close to the region of photon capture\n(e.g., Rauch \\& Blandford \\cite{RB}, Bozza \\cite{Bozza,Bozza2,Bozza3}, V\\'asquez \\& Esteban \\cite{VqzE}, \nBozza, De Luca, Scarpetta, and Sereno \\cite{Betal}, Bozza, De Luca, and Scarpetta \\cite{Betal2}, and Bozza \\& Scarpetta \\cite{BS2}).\n\nStudies of Kerr lensing have also been undertaken in the weak deflection limit. In particular, Sereno \\& De Luca \\cite{SerenoDeLuca,SerenoDeLuca2} gave an analytic treatment of caustics and two lensing observables for Kerr lensing in the weak deflection limit, while Iyer \\& Hansen \\cite{Iyer1,Iyer2} found an asymptotic expression for the equatorial bending angle. Werner \\& Petters \\cite{WernerPetters} used magnification relations for weak-deflection Kerr lensing to address the issue of Cosmic Censorship (for lensing and Cosmic Censorship in the spherically symmetric case, see Virbhadra \\& Ellis \\cite{VE2}). \n\nIn Papers I and II of our series, we are developing a comprehensive analytic framework for Kerr black\nhole lensing, with a focus on regimes\nbeyond the weak deflection limit (but not restricted to the strong\ndeflection limit). In three earlier papers \\cite{KP1,KP2,KP3}, Keeton \\& Petters\nstudied lensing by static, spherically symmetric compact objects in\ngeneral geometric theories of gravity. In \\cite{KP1,KP2}, the authors found\nuniversal relations among lensing observables for Post-Post-Newtonian (PPN) models that\nallowed them to probe the PPN spacetime geometry beyond the\nquasi-Newtonian limit. In \\cite{KP3} they considered braneworld gravity,\nwhich is an example of a model outside the standard PPN family as it\nhas an additional independent parameter arising from an extra\ndimension of space. They developed a wave optics theory (attolensing)\nto test braneworld gravity through its signature in interference\npatterns that are accessible with the Fermi Gamma-ray Space\nTelescope. Papers I and II present a similar analysis of Kerr black hole lensing beyond the weak deflection limit.\n\nThe outline of this paper is as follows. 
In Section~\\ref{sec:gen-lenseq} we present\na new, general lens equation and magnification formula governing lensing by a thin deflector.\nBoth equations are applicable for non-equatorial observers and \nassume that the source and observer are in the asymptotically\nflat region.\nIn addition, our lens equation incorporates \nthe displacement for a general setting that Bozza \\& Sereno \\cite{BS} \nintroduced for the case of a spherically symmetric deflector.\nThis occurs when the light ray's tangent lines\nat the source and observer do not intersect on the lens plane.\nSection~\\ref{sec:lenseq-Kerr} gives an explicit expression for this\ndisplacement when the observer is in the equatorial plane\nof a Kerr black hole as well as\nfor the case of spherical symmetry. The lens equation itself is derived in Appendix~\\ref{app:Kerr-null-geodesics}.\n\nIn Paper II we solve our lens equation perturbatively to obtain analytic\nexpressions for five lensing observables (image positions, magnifications,\ntotal unsigned magnification, centroid, and time delay) for the regime of\nquasi-equatorial lensing.\n\n\n\n\n\\section{General Lens Equation with Displacement}\n\\label{sec:gen-lenseq}\n\n\n\\subsection{Angular Coordinates on the Observer's Sky}\n\\label{sec:obs-coords}\n\nLet us define Cartesian coordinates $(x,y,z)$ centered on the\ncompact object and oriented such that the observer lies on the\npositive $x$-axis. We assume that gravitational lensing will take place outside the photon region, which is outside the ergosphere, so that $(x,y,z)$ are always spatial coordinates. We also point out that we will not be considering distances more than a Hubble time, so that our formalism ignores the expansion of the universe.\n\nAssume that the observer in the asymptotically flat region is at\nrest relative to the $(x,y,z)$ coordinates. {\\em All equations\nderived in this section are relative to the asymptotically flat\ngeometry of such an observer.}\nThe natural coordinates for the observer to use in gravitational\nlensing are angles on the sky. To describe these angles, we\nintroduce ``spherical polar\" coordinates\ndefined with respect to the observer and the optical axis (from the observer to\nthe lens), and the $yz$-planes at the deflector and the light source. The vector to the image position is then described by the\nangle $\\vth$ it makes with the optical axis, and an azimuthal\nangle $\\vphi$. Similarly, the vector to the source position is\ndescribed by the angle $\\cb$ it makes with the optical axis and\nby an azimuthal angle $\\chi$. These angles are shown in\n\\reffig{ObsCoords}. Note that the optical axis is the $x$-axis.\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[scale=.64]{ObsCoords.eps}\n\\end{center}\n\\caption{\nAngles on the observer's sky.\nAn image's position is determined by $(\\vth, \\vphi)$. 
The source's position is given by $(\\cb,\\chi)$.}\n\\label{fig:ObsCoords}\n\\end{figure}\nWe adopt the usual convention for spherical polar coordinates:\nthe image position has $\\vth > 0$ and $0 \\le \\vphi < 2\\pi$,\nwhile the source position has $\\cb \\ge 0$ and $0 \\le \\chi < 2\\pi$.\nIn fact, since we only need to consider the ``forward'' hemisphere from the observer\nwe can limit $\\vth$ to the interval $(0,\\pi\/2)$ and $\\cb$ to the interval $[0,\\pi\/2)$.\n\nThe\n``lens plane'' is the plane perpendicular to the optical axis\ncontaining the lens, and the ``source plane'' is the plane\nperpendicular to the optical axis containing the source; these are also shown in \\reffig{ObsCoords}. Define\nthe distances $d_L$ and $d_S$ to be the perpendicular distances\nfrom the observer to the lens plane and source plane, respectively,\nwhile $d_{LS}$ is the perpendicular distance from the lens plane\nto the source plane. Some investigators define $d_S$ to be the\ndistance from the observer to the source itself, as opposed to\nthe shortest distance to the source plane. We shall comment on\nthis distinction in Section~\\ref{sec:spherical}.\n\n\n\n\n\\subsection{General Lens Equation via Asymptotically Flat Geometry}\n\\label{sec:gen-lens-eqn}\n\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[scale=.67]{LensGeom.eps}\n\\end{center}\n\\caption{\nA lensing scenario demonstrating that the tangent line to the segment of the ray arriving at the observer and the tangent line of the ray at the source need not intersect on the lens plane; i.e., $A' \\neq B'$ in general. The angles $\\cb$ and $\\vartheta$ are as in \\reffig{ObsCoords} (or rather, they are their projections onto the $xz$-plane), $\\hat{\\alpha}$ is the ``bending angle,\" and $d_L$, $d_{S}$, and $d_{LS}$ are the perpendicular distances between the lens plane and observer, the source plane and observer, and the lens and source planes, respectively.}\n\\label{fig:LensGeom}\n\\end{figure}\n\nConsider the lensing geometry shown in \\reffig{LensGeom}. With respect to the light ray being lensed, there are two tangent lines of interest: the tangent line to the segment of the ray arriving at the observer and the tangent line to the ray emanating from the source. \nAs first emphasized in \\cite{BS}, {\\it it is important to realize that\nthese two tangent lines need not intersect.} If they do intersect (as\nfor a spherical lens, since in that case the tangent lines are\ncoplanar), the intersection point need not lie in the lens plane.\nThis effect has often been neglected, and while it may be small in the\nweak deflection limit (see Section~\\ref{sec:spherical} below) we should include it for greater generality.\nA simple way to capture this displacement is to\nconsider the points where the two tangent lines cross the lens\nplane, namely, the points $A'$ and $B'$ in \\reffig{LensGeom}.\nIf the tangent lines do intersect in the lens\nplane, then $A' = B'$. Otherwise, as can be seen in greater detail in \\reffig{Lens-Geom3}, there is a displacement on the lens plane that\nwe quantify by defining\n\\beq \\label{eq:disp-def}\n\\cd_y = B'_y - A'_y\\,, \\qquad \\cd_z = B'_z - A'_z\\,.\n\\eeq\nNote from \\reffig{Lens-Geom3} that the\ntangent line to the segment of the ray arriving at the observer \nis determined by $(\\vth,\\vphi)$. The tangent line to the ray\nemanating from the source can likewise be described by the angles\n$(\\vth_S,\\vphi_S)$, where $-\\pi\/2 < \\vth_S < \\pi\/2$ and $0 \\leq \\vphi_S < 2\\pi$. 
As shown in \\reffig{Lens-Geom3}, $\\vth_S$ has vertex $B'$ and is measured from the line joining the points $B'$ and $B''$, which runs parallel to the optical axis. We adopt the following sign convention for $\\vth_S$: if $\\vth_S$ goes {\\it toward} the optical axis, then it will be positive; otherwise it is negative (e.g., the $\\vth_S$ shown in \\reffig{Lens-Geom3} is positive).\nWe will obtain the general lens equation by considering the coordinates\nof the points $A'$ and $B'$ in \\reffig{Lens-Geom3}. Using the\nasymptotically flat geometry of the observer, we have\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[scale=.62]{LensGeom-without-nu.eps}\n\\end{center}\n\\caption{\nA detailed diagram of lensing with displacement. The tangent line to the segment of the ray arriving at the observer is determined by $(\\vth,\\vphi)$ and intersects the lens plane at $A'$, while the tangent line to the ray emanating from the source is determined by $(\\vth_S,\\vphi_S)$ and intersects the lens plane at $B'$. The distance between these two points is quantified by the displacement amplitude $\\cd$, whose horizontal and vertical components we denote by $\\cd_y$ and $\\cd_z$, respectively. The deflector could be a Kerr black hole and the light ray may dip below the $xy$-plane.}\n\\label{fig:Lens-Geom3}\n\\end{figure}\n\\beqa\nA'_x &=& 0\\,, \\nonumber\\\\\nA'_y &=& d_L \\tan\\vth \\, \\cos\\vphi\\,, \\label{eq:Acoordsy}\\\\\nA'_z &=& d_L \\tan\\vth \\, \\sin\\vphi\\,,\n \\label{eq:Acoords}\\\\\nB'_x &=& 0\\,, \\nonumber\\\\\nB'_y &=& d_S \\tan\\cb \\, \\cos\\chi + d_{LS} \\tan\\vth_S \\, \\cos(\\pi-\\vphi_S)\\,,\n \\\\\nB'_z &=& d_S \\tan\\cb \\, \\sin\\chi - d_{LS} \\, \\tan \\vth_S \\, \\sin (\\pi-\\vphi_S)\\,.\n \\label{eq:Bcoords}\n\\eeqa\nSubstituting eqns.~(\\ref{eq:Acoordsy})--(\\ref{eq:Bcoords}) into eqn.~(\\ref{eq:disp-def}) yields\n\\beqa\nd_S \\tan\\cb \\, \\cos\\chi & = & d_L \\tan\\vth \\, \\cos\\vphi\n \\ + \\ d_{LS} \\tan \\vth_S \\, \\cos \\vphi_S\n \\ + \\ \\cd_y\\,,\n \\label{eq:le-h} \\\\\nd_S \\tan\\cb \\, \\sin\\chi & = & d_L \\tan\\vth \\, \\sin\\vphi\n \\ + \\ d_{LS} \\tan \\vth_S \\, \\sin \\vphi_S\n \\ + \\ \\cd_z\\,.\n \\label{eq:le-v}\n\\eeqa\nThe left-hand sides involve only the source position, while the\nright-hand sides involve only the image position.\nIn other words, {\\em this pair of equations\nis the general form of the gravitational lens equation for\nsource and observer in the asymptotically flat region, for a general isolated compact object.} Note that apart from the asymptotic flatness assumption, these equations use no properties specific to Kerr black holes; and if the deflector was a Kerr black hole, then neither the observer nor the source has been assumed to be equatorial.\nWe shall refer to eqns.~(\\ref{eq:le-h}) and (\\ref{eq:le-v}), respectively, as the\n``horizonal'' and ``vertical'' components of the lens equation\ndue to the cosine\/sine dependence on $\\chi$.\n\nConsider now the case when the light ray and\nits tangent lines lie in a plane which contains the optical axis. This forces $\\chi = \\vphi$ or $\\chi = \\vphi+\\pi$ depending on whether\nthe source is on the same or opposite side of the lens as the image. To distinguish these two cases, it is useful to define\n$\\snq = \\cos(\\chi-\\vphi)$ to be a sign that indicates whether the\nsource is on the same side of the lens as the image ($\\snq=+1$)\nor on the opposite side ($\\snq=-1$). 
The condition $A' \\neq B'$ will still hold in general, but the line in the lens plane from the origin to the point $B'$ will now make the same angle with respect to the $y$-axis as the point $A'$, namely, the angle $\\vphi$ (see Fig.~\\ref{fig:Lens-Geom3}). As a result, the line in the source plane from the origin to the point $B''$ will also make the angle $\\vphi$ with respect to the $y$-axis. Thus we will have $\\vphi_S = \\vphi+\\pi$. Given these conditions,\neqns.~(\\ref{eq:le-h}) and (\\ref{eq:le-v}) reduce to the single lens\nequation\n\\beq \\label{eq:le-sph}\n d_S\\,\\snq\\,\\tan\\cb = d_L \\tan\\vth - d_{LS} \\tan(\\hat{\\alpha}-\\vth) + \\cd \\,,\n\\eeq\nwhere the displacement amplitude is $\\cd = \\cd_y\/\\!\\cos\\vphi = \\cd_z\/\\!\\sin\\vphi$ (in the case of planar rays),\nand to connect with traditional descriptions of gravitational\nlensing we have introduced the light bending angle\n$\\hat{\\alpha} \\equiv \\vth + \\vth_S$. (If desired, the sign $\\snq$ can be\nincorporated into the tangent so that the left-hand side is\nwritten as $\\tan(\\snq\\cb)$, where we think of $\\snq\\cb$ as the\nsigned source position.) {\\it Eqn.~(\\ref{eq:le-sph}) is the general form of the lens equation in the case of planar rays.} If the displacement $\\cd$ is ignored,\nthen eqn.~(\\ref{eq:le-sph}) matches the spherical lens equation given\nby \\cite{virbetal2}. We consider the displacement term in\n\\refsec{spherical}.\n\n\n\n\\subsection{General Magnification Formula}\n\\label{sec:gen-mag}\n\nThe magnification of a small source is given by the ratio of the\nsolid angle subtended by the image to the solid angle subtended\nby the source\n(e.g.,\n\\cite[p.~82]{PLW}). As measured by the observer, if $\\ell$ is the\ndistance to the image (as opposed to the perpendicular distance),\nthen the small solid angle subtended by the image is\n\\beqa\n d\\Omega_I = \\frac{|(\\ell \\, d \\vth)\\, (\\ell \\sin\\vth \\, d\\vphi)|}{\\ell^2}\n=|\\sin\\vth\\ d\\vth\\ d\\vphi| = |d(\\cos\\vth)\\ d\\vphi| .\\nonumber\n\\eeqa\nSimilarly, the small solid angle subtended by the source is\n\\beqa\n d\\Omega_S = |\\sin\\cb\\ d\\cb\\ d\\chi| = |d(\\cos\\cb)\\ d\\chi| .\\nonumber\n\\eeqa\nWe then have the absolute magnification\n\\beqa\n |\\mu| = \\frac{d\\Omega_I}{d\\Omega_S} = |\\det J|^{-1} ,\\nonumber\n\\eeqa\nwhere $J$ is the Jacobian matrix\n\\beqa\n J = \\frac{\\partial (\\cos \\cb, \\chi)}{\\partial(\\cos\\vth, \\vphi)}\n = \\left[\\matrix{\n \\frac{\\partial\\cos\\cb}{\\partial\\cos\\vth}\n & \\frac{\\partial\\cos\\cb}{\\partial\\vphi } \\cr\n \\frac{\\partial\\chi }{\\partial\\cos\\vth}\n & \\frac{\\partial\\chi }{\\partial\\vphi }\n }\\right].\\nonumber\n\\eeqa\nWriting out the determinant and dropping the absolute value in\norder to obtain the signed magnification, we get\n\\beq \\label{eq:mu-general}\n \\mu = \\left[ \\frac{\\sin\\cb}{\\sin\\vth} \\left(\n \\frac{\\partial\\cb}{\\partial\\vth }\\ \\frac{\\partial\\chi}{\\partial\\vphi}\n - \\frac{\\partial\\cb}{\\partial\\vphi}\\ \\frac{\\partial\\chi}{\\partial\\vth }\n \\right) \\right]^{-1} .\n\\eeq\nIn the case of spherical symmetry, the image and source lie in\nthe same plane, so $\\partial\\cb\/\\partial\\vphi=0$ and\n$\\partial\\chi\/\\partial\\vphi=1$, recovering the familiar result\n\\beqa\n \\mu = \\left( \\frac{\\sin\\cb}{\\sin\\vth}\\ \\frac{\\partial\\cb}{\\partial\\vth}\n \\right)^{-1} .\\nonumber\n\\eeqa\n\n\n\n\n\n\\section{Lens Equation for Kerr Black Holes}\n\\label{sec:lenseq-Kerr}\n\n\n\\subsection{Kerr 
Metric}\n\\label{sec:Kerr-metric}\n\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[scale=.62]{BHCoords2.eps}\n\\end{center}\n\\caption{\nCartesian $(X,Y,Z)$ and spherical polar $(r,\\kpolar,\\kazym)$\ncoordinates centered on the black hole, where $\\kpolar = \\pi\/2 - \\wp$ with $\\wp$ the polar angle; note that $-\\pi\/2 \\leq \\kpolar \\leq \\pi\/2$. The black hole spins about the $Z$-axis, which corresponds to\n$\\kpolar=\\pi\/2$, in the direction of\nincreasing $\\kazym$. The equatorial plane\nof the black hole corresponds to $\\kpolar = 0$ or the\n$(X,Y)$-plane.}\n\\label{fig:BHCoords}\n\\end{figure}\n\nNow let the deflector in Fig.~\\ref{fig:Lens-Geom3} be a Kerr black hole. The Kerr metric is the unique axisymmetric,\nstationary, asymptotically flat, vacuum solution of the Einstein\nequations describing a stationary black hole with mass $\\bhpm$\nand spin angular momentum $\\bhpJ$ (see, e.g., \\cite[pp.~322-324]{wald}).\nConsider the Kerr metric in Boyer-Lindquist coordinates\n$(t,r,\\wp,\\kazym)$, where $\\wp$ is the polar angle\nand $\\kazym$ the azimuthal angle. For our purposes, it is\nconvenient to transform $\\wp$ to $\\kpolar = \\pi\/2-\\wp$\nand work with the slightly modified Boyer-Lindquist coordinates\n$(t,r,\\kpolar,\\kazym)$; note that $-\\pi\/2 \\leq \\kpolar \\leq \\pi\/2$. The spatial coordinates are shown in\n\\reffig{BHCoords}.\n\nThe metric takes the form\n\\beqa\n\\label{eq:app:kerr-metric}\n ds^2 = g_{tt}\\, dt^2 + g_{rr}\\,dr^2 \n + g_{\\kpolar \\kpolar}\\,d\\kpolar^2\n + g_{\\kazym \\kazym}\\,d\\kazym^2 \n + 2\\,g_{t \\kazym}\\,dt\\,d\\kazym\\,,\\nonumber\n\\eeqa\nwhere $t = c {\\tt t}$ with ${\\tt t}$ being physical time. The\nmetric coefficients are\n\\beqa \\label{eq:app:kerr-components}\n g_{tt} &=& - \\frac{r(r-2\\gravr) + \\bha^2 \\sin^2\\kpolar}\n {r^2 + \\bha^2 \\sin^2\\kpolar}\\ , \\label{gtt}\\\\\n g_{rr} &=& \\frac{r^2 + \\bha^2 \\sin^2\\kpolar}{r(r-2\\gravr) + \\bha^2}\\ , \\label{grr}\\\\\n g_{\\kpolar \\kpolar} &=& r^2 + \\bha^2 \\sin^2\\kpolar\\,, \\label{gss}\\\\\n g_{\\kazym \\kazym} &=& \\frac{(r^2+\\bha^2)^2\n - \\bha^2(\\bha^2+r(r-2\\gravr))\\cos^2\\kpolar}\n {r^2 + \\bha^2 \\sin^2\\kpolar}\\ \\cos^2\\kpolar\\,, \\label{gphiphi}\\\\\n g_{t \\kazym} &=& - \\frac{2 \\gravr \\bha \\, r \\cos^2\\kpolar}\n {r^2 + \\bha^2 \\sin^2\\kpolar}\\ .\\label{gtphi}\n\\eeqa\nThe parameter $\\gravr$ is the gravitational radius, and $\\bha$ \nis the angular momentum per unit mass:\n\\beqa\n \\gravr = \\frac{G \\bhpm}{c^2}\\ , \\qquad\n \\bha = \\frac{\\bhpJ}{c \\bhpm}\\ .\\nonumber\n\\eeqa\nNote that both $\\gravr$ and $\\bha$ have dimensions of length.\nIt is convenient to define a dimensionless spin parameter:\n\\beqa\n\\label{eq:ahat}\n \\ahat = \\frac{\\bha}{\\gravr}\\ .\\nonumber\n\\eeqa\nUnless stated to the contrary, the black hole's spin is\nsubcritical; i.e., $\\ahat^2 < 1$.\n\n\n\n\\subsection{Lens Equation for an Equatorial Observer}\n\\label{sec:Kerr-lens-eqn}\n\n{\\em We now specialize to the case when the observer lies\nin the equatorial plane of the Kerr black hole,} so the coordinates\n$(x,y,z)$ in \\reffig{Lens-Geom3} coincide with the coordinates\n$(X,Y,Z)$ in \\reffig{BHCoords}. Note that we still consider\ngeneral source positions.\n\nIn \\refapp{Kerr-null-geodesics} we carefully analyze null\ngeodesics seen by an observer in the equatorial plane. 
By\nconsidering constants of the motion, we derive the following\nlens equation:\n\\beqa\n d_S \\tan\\cb \\cos\\chi &=& d_{LS} \\tan\\vth_S \\cos\\vphi_S\n \\ + \\ d_L\\ \\frac{\\sin\\vth \\cos\\vphi}{\\cos\\vth_S}\\ ,\n \\label{eq:le-h-Kerr} \\\\\n d_S \\tan\\cb \\sin\\chi &=& d_{LS} \\tan\\vth_S \\sin\\vphi_S\n \\ + \\ \\frac{d_L \\sin\\vth}{1 - \\sin^2\\vth_S \\sin^2\\vphi_S} \\times\n \\label{eq:le-v-Kerr} \\\\\n &&\\qquad \\left[ \\cos\\vphi \\sin\\vth_S \\tan\\vth_S \\sin\\vphi_S \\cos\\vphi_S\n + \\left( \\sin^2\\vphi - \\sin^2\\vth_S \\sin^2\\vphi_S \\right)^{1\/2} \\right] .\n \\nonumber\n\\eeqa\n{\\it This is the general form of the lens equation for an equatorial\nobserver in the Kerr metric for observer and source in the\nasymptotically flat region.} \nIt is valid for all light rays, whether they loop\naround the black hole or not, as long as they lie outside the\nregion of photon capture. No small-angle approximation is required.\n\n\n\nNote that eqns. (\\ref{eq:le-h}) and (\\ref{eq:le-v}) represent the general\nform of the lens equation, with the displacement terms explicitly\nwritten, while eqns. (\\ref{eq:le-h-Kerr}) and (\\ref{eq:le-v-Kerr}) give\nthe exact lens equation for an equatorial Kerr observer, with\nthe displacement terms implicitly included. Demanding that these\ntwo pairs of equations be equivalent allows us to identify the\ndisplacement terms for an equatorial Kerr observer:\n\\beqa\n \\cd_y &=& d_L \\sin\\vth \\cos\\vphi \\left( \\frac{1}{\\cos\\vth_S}\n - \\frac{1}{\\cos\\vth} \\right) , \\label{eq:dispy}\\\\\n \\cd_z &=& - d_L \\tan\\vth \\sin\\vphi \\ + \\ \n \\frac{d_L \\sin\\vth}{1 - \\sin^2\\vth_S \\sin^2\\vphi_S} \\times \\label{eq:dispz}\\nonumber\\\\\n &&\\qquad \\left[ \\cos\\vphi \\sin\\vth_S \\tan\\vth_S \\sin\\vphi_S \\cos\\vphi_S\n + \\left( \\sin^2\\vphi - \\sin^2\\vth_S \\sin^2\\vphi_S \\right)^{1\/2} \\right].\\label{eq:dispz}\n\\eeqa\n\n\n\n\\subsection{Schwarzschild Case}\n\\label{sec:spherical}\n\nIn the case of a spherically symmetric lens we have $\\vphi_S = \\vphi + \\pi$, and either $\\chi = \\vphi$\nor $\\chi = \\vphi+\\pi$, depending on whether the source lies on the\nsame or opposite side of the lens as the image. Once again, we define $\\snq = \\cos(\\chi-\\vphi)$ to be\na sign that distinguishes these two cases. With these conditions\neqns.~(\\ref{eq:le-h-Kerr}) and (\\ref{eq:le-v-Kerr}) combine to form the\nsingle lens equation with displacement for a Schwarzschild black hole:\n\\beqa \\label{eq:le-sph-Kerr}\n d_S\\,\\snq\\,\\tan\\cb = d_L\\ \\frac{\\sin\\vth}{\\cos\\vth_S} \\ - \\ \n d_{LS}\\,\\tan\\vth_S\\,.\n\\eeqa\nTwo comments are in order. First, our spherical lens equation\n(\\ref{eq:le-sph-Kerr}) is equivalent to the spherical lens\nequation recently derived by Bozza \\& Sereno \\cite{BS,Bozza2} (up to the sign $\\snq$,\nwhich was not discussed explicitly in \\cite{BS,Bozza2}).\nThe second comment refers to the amplitude of the displacement.\nBy comparing our general planar-ray lens equation (\\ref{eq:le-sph}) with eqn.~(\\ref{eq:le-sph-Kerr}),\nwe can identify the displacement\n\\beqa\n\\label{eq:disp-sph}\n \\cd = d_L \\sin\\vth \\left[ \\frac{1}{\\cos(\\alpha-\\vth)} - \\frac{1}{\\cos\\vth}\n \\right]\\,,\n\\eeqa\nwhere we have switched from $\\vth_S$ to the bending angle\n$\\alpha = \\vth+\\vth_S$. Now let\n$\\delta\\alpha = \\alpha \\, {\\rm mod} \\, 2 \\pi$, and assume that\n$\\vth$ and $\\delta\\alpha$ are small. 
(Note that we need not\nassume $\\alpha$ itself is small, only that $\\delta\\alpha$ is small.\nThis means that our analysis applies to all light rays, including those that loop\naround the lens.) Taylor expanding the displacement in $\\vth$\nand $\\alpha$ yields\n\\beqa\n \\cd = \\frac{d_L}{2} \\, (\\vth \\ \\delta \\alpha) \n (\\delta \\alpha - 2 \\vth) \\ + \\ {\\cal O}(4).\\nonumber\n\\eeqa\n\n\n\n\n\n\\section{Conclusions}\n\\label{sec:conclusions}\n\nRecently, a lens equation was developed for Schwarzschild lensing with displacement (see Section~\\ref{sec:spherical} above), which occurs when the light ray's tangent lines\nat the source and observer do not meet on the lens plane. In this paper we found a new generalization of the lens equation with displacement for axisymmetric lenses that extends the previous work to a fully three-dimensional setting. Our formalism assumes that the source and observer are in the asymptotically flat region, and does not require a small angle approximation. Furthermore, we found a new magnification formula applicable to this more general context. Our lens equation is thus applicable to non-spherically symmetric compact bodies, such as Kerr black holes. We gave explicit formulas for the\ndisplacement when the observer is in the equatorial plane\nof a Kerr black hole and \nfor the situation of spherical symmetry. \n\n\n\n\n\\begin{acknowledgments}\n\nABA and AOP would especially like to thank Marcus C. Werner for helpful discussions. AOP acknowledges the support of NSF Grant DMS-0707003.\n\n\\end{acknowledgments}\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nLanguage and facial expression are strong indicators for behavior analysis. A large body of research has tried to solve the emotion recognition problem based on these data. However, non-linguistic vocalizations are understudied even though they carry very useful information. Analyzing and applying these signals are interesting topics and require more attention from researchers. The A-VB competition was conducted for that reason and is expected to foster advances in emotion science. The competition \\cite{b2} consists of 4 individual tasks, as described below:\n\n\\begin{itemize}\n\\item High-dimension task (A-VB-HIGH): a multi-output regression task generating 10 values in the range of [0,1] corresponding to levels of Awe, Excitement, Amusement, Awkwardness, Fear, Horror, Distress, Triumph, Sadness, and Surprise.\n\\item Two-dimension task (A-VB-TWO): a multi-output regression task generating 2 values in the range of [0,1] corresponding to levels of Valence and Arousal.\n\\item Culture task (A-VB-CULTURE): a multi-output regression task generating 40 values in the range of [0,1] corresponding to emotion levels for each of the cultures China, the US, South Africa, and Venezuela, combined with each of the emotions Awe, Excitement, Amusement, Awkwardness, Fear, Horror, Distress, Triumph, Sadness, and Surprise.\n\\item Type task (A-VB-TYPE): a multi-class classification task predicting the type of expressive vocalization among Gasp, Laugh, Cry, Scream, Grunt, Groan, Pant, and Other.\n\\end{itemize}\n\nThe evaluation metric for three regression tasks is the Concordance Correlation Coefficient (CCC) and the metric for the categorical task is the Unweighted Average Recall (UAR). 
The details are listed below:\n\\begin{itemize}\n\\item A-VB-HIGH: The metric is the average CCC score of 10 emotions.\n\\item A-VB-TWO: The metric is the average CCC score of valence-arousal.\n\\item A-VB-CULTURE: The metric is the average CCC score of 40 culture-emotion levels.\n\\item A-VB-TYPE: The metric is the UAR score of 8 classes of vocalizations.\n\\end{itemize}\n\nAll of the above metrics are in the range of [0,1]. The greater the score is, the better the model performs. We should also note that all the results in this paper are reported as percentages. \n\nIn this paper, we propose a straightforward approach using a pre-trained Wav2vec network to address the problem. The model achieves a noticeable improvement compared to the baseline provided by the organizers. Because of its simplicity, our model can be considered a new baseline for all tasks in the competition.\n\n\\section{Related work}\nIn the baseline paper \\cite{b2}, the authors introduce two different approaches, which are feature-based and end-to-end approaches. In the feature-based option, the OpenSMILE toolkit \\cite{b11} is leveraged to extract the COMPARE (COMputational PARalinguistics ChallengE \\cite{b12}) feature and EGEMAPS (The extended Geneva Minimalistic Acoustic Parameter Set \\cite{b13}) feature from an input sample. The features are then fed to a 3-layer fully-connected neural network. Mean Squared Error (MSE) loss is used for regression tasks while the classification task applies the Cross-entropy (CE) loss function.\n\nIn the end-to-end approach, the baseline model uses End2You \\cite{b14}, a multimodal profiling toolkit that is capable of training and evaluating models from raw input. In particular, the Emo-18 architecture \\cite{b15} is chosen for the competition. The model includes 1-D Convolutional Neural Network (CNN) layers to derive the features from audio frames and a Recurrent Neural Network to learn the temporal information.\n\nFor the speaker recognition task, Nik Vaessen and David A. van Leeuwen \\cite{b4} fine-tuned the Wav2vec2 model using a shared fully-connected layer. Their model and ours have one thing in common: exploiting the pre-trained Wav2vec. However, there are considerable differences between the two methods. Basically, speaker recognition is a classification problem so the authors optimize the model with CE or Binary Cross-entropy (BCE) loss. In our method, we consider two loss options for the regression tasks, which are MSE and CCC loss. Additionally, besides using a shared fully-connected layer, we also take advantage of the RNN to explore the temporal behavior.\n\\section{Method}\nThe sequence embeddings are obtained from the waveform signal by the audio extractor. They are then fed into an RNN to enrich the sequence information. Afterward, a fully connected layer changes the embeddings' dimension to the output sizes depending on the particular task. Finally, a pooling layer is used to reduce the variable-length sequence embeddings to a fixed-size speaker embedding. The dimension of the final prediction would be 2, 10, 40, or 8 corresponding to A-VB-TWO, A-VB-HIGH, A-VB-CULTURE or A-VB-TYPE task, respectively. Figure~\\ref{fig:model} describes the architecture of our method.\n\n\\begin{figure}[htbp]\n\\centerline{\\includegraphics[height=8cm]{model.PNG}}\n\\caption{Block diagram of our proposed model.}\n\\label{fig:model}\n\\end{figure}\n\n\\subsection{Audio extractor}\n\nWe take advantage of the Pre-trained Wav2vec 2.0 models \\cite{b3} provided by Pytorch. 
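\nAs a rough illustration of this step, pre-trained checkpoints can be loaded through the \\texttt{torchaudio.pipelines} module along the following lines (the bundle names correspond to the four variants listed below; exact names and APIs may differ between torchaudio versions):\n\\begin{verbatim}\nimport torch\nimport torchaudio\n\n# one of: WAV2VEC2_BASE, WAV2VEC2_LARGE, WAV2VEC2_LARGE_LV60K, WAV2VEC2_XLSR53\nbundle = torchaudio.pipelines.WAV2VEC2_XLSR53\nextractor = bundle.get_model().eval()\n\nwaveform, sr = torchaudio.load('sample.wav')   # 16 kHz processed audio\nassert sr == bundle.sample_rate\n\nwith torch.no_grad():\n    # returns one feature tensor (batch, time, dim) per transformer layer\n    features, _ = extractor.extract_features(waveform)\n\\end{verbatim}\nDuring fine-tuning the extractor is of course not kept in evaluation mode; the snippet only shows how the sequence embeddings are obtained.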
These models are trained with large unlabeled audio corpora so they can effectively capture the audio features. We conducted experiments with 4 versions of the Wav2vec2 model described below:\n\\begin{itemize}\n\\item BASE: use the base configuration of transformer trained with 960 hours of unlabeled audio from LibriSpeech \\cite{b5}.\n\\item LARGE: use the large configuration of transformer trained with 960 hours of unlabeled audio from LibriSpeech \\cite{b5}.\n\\item LARGE-LV60K: use the large configuration of transformer trained with 60,000 hours of unlabeled audio from Libri-Light \\cite{b6}.\n\\item XLSR53: use the base configuration of transformer \\cite{b10} trained with 56,000 hours of unlabeled audio from multiple datasets (Multilingual LibriSpeech \\cite{b7}, CommonVoice \\cite{b8} and BABEL \\cite{b9}).\n\\end{itemize}\n\n\\subsection{Pooling Method}\nInspired by the model of \\cite{b4}, we use 4 pooling options to fix the length of the embedding: first (take the first sequence embedding to be the speaker embedding), last (take the last sequence embedding to be the speaker embedding), max, and average pooling. The performance of models with various types of pooling layers is recorded to study their impact on the result. The operation of the pooling can be described as:\n\\begin{equation}\ns_{i} = Pooling(e_1, e_2, ..., e_{m_i})\n\\end{equation}\nwhere $s_{i}$ is the speaker embedding of the $i^{th}$ sample; $e_1, e_2, ..., e_{m_i}$ are the temporal embeddings and $m_i$ is the sequence length of the corresponding sample.\n\n\n\\subsection{Loss function}\nWe use the CE loss for the classification task. In the remaining tasks, we want to test the effect of the loss function on the performance of the model so we did the experiments and compared the result of the model using Mean Square Error (MSE) and Concordance Correlation Coefficient (CCC) loss. The CCC loss is formulated as below:\n\\begin{equation}\n\\mathcal{L} = 1 - CCC = 1 - \\frac{2s_{xy}}{s^{2}_x + s^{2}_y + \\left(\\overline{x}-\\overline{y}\\right)^2}\n\\end{equation}\nwhere $\\overline{x}$ and $\\overline{y}$ are the mean values of ground truth and predicted values, respectively, $s^{2}_x$ and $s^{2}_y$ are the corresponding variances and $s_{xy}$ is the covariance value.\n\n\\section{Dataset and Experiments}\n\\subsection{Dataset}\nThe database used for the competition is the HUME-VB dataset \\cite{b1} which consists of 59201 audio files and is split into 3 sets (training, validation, and test) of similar size. Each file has 53 labels corresponding to the 4 tasks; one label is a categorical label that is used in the classification task and the remaining ones are values in the range [0,1] representing the emotional level. The organizers provide 2 versions of the HUME-VB dataset: the raw version sampled at 48kHz and the processed version where audio files are converted to 16kHz and normalized to -3 decibels. In our experiments, we take the processed version as the input of our model.\n\n\n\\subsection{Experiments}\nOur model was implemented using the Pytorch framework. The experiments were conducted on a machine with an NVIDIA RTX 2080 Ti GPU. All scenarios were run for 20 epochs, and the model with the best performance on the validation set was recorded. The batch size is 16 and the initial learning rate is $1e-4$. We used the Adam optimizer with a weight decay coefficient of $0.0625$.\n\nIn our setting, we take the output from the $12^{th}$ layer of the Wav2vec network to be the sequence embeddings. 
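\nFor concreteness, a minimal PyTorch sketch of the regression head and the CCC loss described above is given below; it follows the configuration reported in this section (sequence embeddings from the pre-trained extractor, two LSTM layers with hidden size 512, last pooling, and a task-dependent output size), while all names are illustrative and minor details of our implementation may differ:\n\\begin{verbatim}\nimport torch\nimport torch.nn as nn\n\nclass RegressionHead(nn.Module):\n    def __init__(self, feat_dim=1024, hidden=512, n_outputs=10):\n        # feat_dim: 768 for the base configuration, 1024 for the large one\n        super().__init__()\n        self.rnn = nn.LSTM(feat_dim, hidden, num_layers=2, batch_first=True)\n        self.fc = nn.Linear(hidden, n_outputs)\n\n    def forward(self, feats):\n        # feats: (batch, time, feat_dim) sequence embeddings from Wav2vec2\n        out, _ = self.rnn(feats)\n        out = self.fc(out)          # (batch, time, n_outputs)\n        return out[:, -1, :]        # "last" pooling over the time axis\n\ndef ccc_loss(pred, target, eps=1e-8):\n    # L = 1 - CCC, computed per output dimension over the batch and averaged\n    mx, my = pred.mean(dim=0), target.mean(dim=0)\n    vx = pred.var(dim=0, unbiased=False)\n    vy = target.var(dim=0, unbiased=False)\n    sxy = ((pred - mx) * (target - my)).mean(dim=0)\n    ccc = 2 * sxy / (vx + vy + (mx - my) ** 2 + eps)\n    return (1.0 - ccc).mean()\n\\end{verbatim}\nSince the CCC is symmetric in its two arguments, the order in which predictions and ground truth are passed to the loss does not matter.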
The input size of the RNN network depends on the configuration of the pre-trained audio extractor, which is 768 for the base configuration and 1024 for the large configuration. It includes 2 LSTM layers and the hidden size is fixed to 512.\n\n\\section{Discussion}\nWe tested the performance of the model with various audio extractors to explore their effect. Table~\\ref{tab:result} shows the performance on four tasks of the competition with 4 pre-trained Wav2vec extractors. As a result, the XLSR53 pre-trained model achieves the best performance in A-VB-TWO and A-VB-TYPE, while LARGE-LV60K attains the highest scores in A-VB-HIGH and A-VB-CULTURE. Meanwhile, the BASE model produces the lowest score in A-VB-TWO and A-VB-TYPE due to its simple architecture.\n\n\\begin{table}\n \\caption{Evaluation score on the HUME-VB validation set with different extractors. Experimented with CCC loss and Last Pooling}\n\\begin{center}\n \\centering\n \\begin{tabular}{|l|l|l|l|l|}\n \\hline\n Model & TWO & HIGH & CULTURE & TYPE \\\\\n \\hline\n \\textit{End2You Baseline} & \\textit{49.88} & \\textit{56.38} & \\textit{43.59} & \\textit{41.66} \\\\\n \\hline\n BASE & 54.65 & 58.69 & 47.18 & 41.65 \\\\\n \\hline\n LARGE & 55.42 & 58.00 & 47.12 & 43.96 \\\\\n \\hline\n LARGE-LV60K & 61.59 & \\textbf{65.41} & \\textbf{53.39} & 47.22 \\\\\n \\hline\n XLSR53 & \\textbf{61.94} & 65.32 & 52.50 & \\textbf{49.89} \\\\\n \\hline\n \\end{tabular}\n \\label{tab:result}\n\\end{center}\n\\end{table}\n\nRegarding the pooling methods, we examined their influence on the results in A-VB-HIGH. As shown in Table~\\ref{tab:pool}, in both the LARGE-LV60K and XLSR53 models, the Last pooling outperforms the other options while the First pooling gets the lowest CCC score among the four methods. The result of Avg pooling is slightly better than Max pooling in both LARGE-LV60K and XLSR53 scenarios. It can be inferred that the last embedding of the sequence contains the most useful information for the prediction, while using other embeddings or combining them may degrade the accuracy.\n\n\\begin{table}\n \\caption{Evaluation score on the HUME-VB validation set with different pooling methods. Experimented on A-VB-HIGH with CCC loss}\n\\begin{center}\n \\centering\n \\begin{tabular}{|l|l|l|}\n \\hline\n Pooling & LARGE-LV60K & XLSR53 \\\\\n \\hline\n First & 53.56 & 58.20 \\\\\n \\hline\n Max & 60.08 & 61.41 \\\\\n \\hline\n Avg & 61.49 & 62.40 \\\\\n \\hline\n Last & \\textbf{65.41} & \\textbf{65.32} \\\\\n \\hline\n \\end{tabular}\n \\label{tab:pool}\n\\end{center}\n\\end{table}\nNext, we conducted the training processes with MSE and CCC to explore their advantage. As a result, the model trained with CCC loss gives better performance on the validation set compared to the one trained with MSE loss. The details are shown in Table~\\ref{tab:loss}.\n\\begin{table}\n \\caption{Evaluation score on the HUME-VB validation set with different loss functions. Experimented on A-VB-HIGH with Last pooling}\n\\begin{center}\n \\centering\n \\begin{tabular}{|l|l|l|}\n \\hline\n Loss function & LARGE-LV60K & XLSR53 \\\\\n \\hline\n MSE & 63.77 & 63.94 \\\\\n \\hline\n CCC & \\textbf{65.41} & \\textbf{65.32} \\\\\n \\hline\n \\end{tabular}\n \\label{tab:loss}\n\\end{center}\n\\end{table}\n\nIn addition, we carried out an ablation study to analyze the role of the RNN. According to Table~\\ref{tab:rnn}, using the RNN can significantly boost the accuracy of the model in all four tasks. 
This can be explained by the LSTM's capability of learning temporal information, which enhances the overall operation of the model.\n\n\\begin{table}\n \\caption{Evaluation score on the HUME-VB validation set with and without RNN. Experimented on A-VB-HIGH with LARGE-LV60K model, CCC loss, and Last pooling}\n\\begin{center}\n \\begin{tabular}{|l|l|l|l|l|}\n \\hline\n Model & TWO & HIGH & CULTURE & TYPE \\\\\n \\hline\n Without RNN & 47.38 & 50.70 & 40.20 & 39.10 \\\\\n \\hline\n With RNN & \\textbf{61.59} & \\textbf{65.41} & \\textbf{53.39} & \\textbf{47.22} \\\\\n \\hline\n \\end{tabular}\n \\label{tab:rnn}\n\\end{center}\n\\end{table}\n\nAfter conducting the above experiments, we concluded that the best configuration of our model combines either the LARGE-LV60K or the XLSR53 pre-trained model with the last pooling method and the CCC loss. This setting was used to train a separate model for each task in order to obtain an unbiased evaluation on the test set. We chose LARGE-LV60K for A-VB-HIGH and A-VB-CULTURE, and XLSR53 for A-VB-TWO and A-VB-TYPE. This time we trained each model for 50 epochs and applied early stopping by monitoring the validation result. Our best models and their evaluations on the test and validation sets are listed in Table~\\ref{tab:test}.\n\n\\begin{table}\n\\caption{Evaluation score on the HUME-VB validation and test sets. Experimented with CCC loss and Last pooling for 50 epochs}\n\\begin{center}\n\\begin{tabular}{|c|c|c|c|}\n\\hline\n Task & Pre-trained &\\multicolumn{2}{|c|}{Performance} \\\\\n\\cline{3-4} \n name & Audio Extractor & Val set & Test set \\\\\n\\hline\n TWO & XLSR53 & 61.94 & 62.02 \\\\\n\\hline\n HIGH & LARGE-LV60K & 66.76 & 66.77 \\\\\n\\hline\n CULTURE & LARGE-LV60K & 54.93 & 54.95 \\\\\n\\hline\n TYPE & XLSR53 & 49.89 & 49.70 \\\\\n\\hline\n\\end{tabular}\n\\label{tab:test}\n\\end{center}\n\\end{table}\n\n\\section{Conclusion}\nThis paper presents our proposed method for all sub-challenges of the A-VB competition. In particular, we fine-tuned the pre-trained Wav2vec and combined it with basic neural networks and a proper pooling method. The CCC loss and Last pooling show the best performance among the considered options on all four tasks. Our model outperforms the organizers' baseline on the test set, with a CCC score of 62.02 for A-VB-TWO, 66.77 for A-VB-HIGH, 54.95 for A-VB-CULTURE and a UAR of 49.70 for A-VB-TYPE.\n\n\\section*{Acknowledgment}\n\nThis work was supported by a National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (NRF-2020R1A4A1019191).\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nMassive multiple-input multiple-output (MIMO), also known as\nlarge-scale or very-large MIMO, is a promising technology to meet\nthe ever-growing demands for higher throughput and better\nquality-of-service of next-generation wireless communication\nsystems \\cite{RusekPersson13,LarssonEdfors14,ChenSun16}. Massive\nMIMO systems are those that are equipped with a large number of\nantennas at the base station (BS) simultaneously serving a much\nsmaller number of single-antenna users sharing the same\ntime-frequency slot. 
By exploiting the asymptotic orthogonality\namong channel vectors associated with different users, massive\nMIMO systems can achieve almost perfect inter-user interference\ncancelation with a simple linear precoder and receive combiner\n\\cite{Marzetta10}, and thus have the potential to enhance the\nspectrum efficiency by orders of magnitude.\n\n\n\nDespite all these benefits, massive MIMO systems pose new\nchallenges for system design and hardware implementation. Due to\nthe large number of antennas at the BS, the hardware cost and\npower consumption could become prohibitively high if we still\nemploy expensive and power-hungry high-resolution\nanalog-to-digital convertors (ADCs) \\cite{ChenZhao14}. To address\nthis obstacle, recent studies (e.g.\n\\cite{RisiPersson14,FanJin15,ZhangDai16,JacobssonDurisi15,WangLi15,WenWang16,ChoiMo16})\nconsidered the use of low-resolution ADCs (e.g. 1-3 bits) for\nmassive MIMO systems. It is known that the hardware complexity and\npower consumption grow exponentially with the resolution (i.e. the\nnumber of bits per sample) of the ADC. Therefore lowering the\nresolution of the ADC can effectively reduce the hardware cost and\npower consumption. In particular, for the extreme one-bit case,\nthe ADC becomes a simple analog comparator. Also, automatic gain\ncontrol (AGC) is no longer needed when one-bit ADCs are used,\nwhich further simplifies the hardware complexity.\n\n\n\nMassive MIMO with low-resolution ADCs has attracted much attention\nover the past few years. Great efforts have been made to\nunderstand the effects of low-resolution ADCs on the performance\nof MIMO and massive MIMO systems. Specifically, by assuming full\nknowledge of channel state information (CSI), the capacity at both\nfinite and infinite signal-to-noise ratio (SNR) was derived in\n\\cite{MoHeath15} for one-bit MIMO systems. For massive MIMO\nsystems with low-resolution ADCs, the spectral efficiency and the\nuplink achievable rate were investigated in\n\\cite{RisiPersson14,FanJin15,ZhangDai16,LiangZhang16} under\ndifferent assumptions. The theoretical analyses suggest that the\nuse of the low cost and low-resolution ADCs can still provide\nsatisfactory achievable rates and spectral efficiency.\n\n\n\n\n\n\n\n\n\nIn this paper, we consider the problem of channel estimation for\nuplink multiuser massive MIMO systems, where one-bit ADCs are used\nat the BS in order to reduce the cost and power consumption.\nChannel estimation is crucial to support multi-user MIMO operation\nin massive MIMO systems\n\\cite{AdhikaryNam13,ChoiLove14,SunGao15,GaoDai15,FangLi17}. To\nreach the full potential of massive MIMO, accurate downlink CSI is\nrequired at the BS for precoding and other operations. Most\nliterature on massive MIMO systems, e.g.\n\\cite{Marzetta10,RusekPersson13,YinGesbert13,MullerCottatellucci14},\nassumes a time division duplex (TDD) mode in which the downlink\nCSI can be immediately obtained from the uplink CSI by exploiting\nchannel reciprocity. Nevertheless, channel estimation for massive\nMIMO systems with one-bit ADCs is challenging since the magnitude\nand phase information about the received signal are lost or\nseverely distorted due to the coarse quantization. It was shown in\n\\cite{RisiPersson14} that one-bit massive MIMO systems require an\nexcessively long training sequence (e.g. approximately 50 times\nthe number of users) to achieve an acceptable performance. 
The\nwork \\cite{JacobssonDurisi15} showed that for one-bit massive MIMO\nsystems, a least-squares channel estimation scheme and a\nmaximum-ratio combining scheme are sufficient to support both\nmultiuser operation and the use of high-order constellations.\nNevertheless, a long training sequence is still a requirement. To\nalleviate this issue, a Bayes-optimal joint channel and data\nestimation scheme was proposed in \\cite{WenWang16}, in which the\nestimated payload data are utilized to aid channel estimation. In\n\\cite{ChoiMo16}, a maximum likelihood channel estimator, along\nwith a near maximum likelihood detector, were proposed for uplink\nmassive MIMO systems with one-bit ADCs.\n\n\n\nDespite these efforts, channel estimation using one-bit quantized\ndata still incur much larger estimation errors as compared with\nusing the original unquantized data, and require considerably\nhigher training overhead to attain an acceptable estimation\naccuracy. To address this issue, in this paper, we study one-bit\nquantizer design and examine the impact of the choice of\nquantization thresholds on the estimation performance.\nSpecifically, the optimal design of quantization thresholds as\nwell as the training sequences is investigated. Note that one-bit\nquantization design is an interesting and important issue but\nlargely neglected by existing massive MIMO channel estimation\nstudies. In fact, most channel estimation schemes, e.g.\n\\cite{RisiPersson14,JacobssonDurisi15,WenWang16,ChoiMo16}, assume\na fixed, typically zero, quantization threshold. The optimal\nchoice of the quantization threshold was considered in\n\\cite{KochLapidoth13,Verdu02}, but addressed from an\ninformation-theoretic perspective. Our theoretical results reveal\nthat, given that the quantization thresholds are optimally\ndevised, using one-bit ADCs can achieve an estimation error close\nto (with an increase only by a factor of $\\pi\/2$) the minimum\nachievable estimation error attained by using infinite-precision\nADCs. The optimal quantization thresholds, however, are dependent\non the unknown channel parameters. To cope with this difficulty,\nwe propose an adaptive quantization (AQ) scheme by which the\nthresholds are dynamically adjusted in a way such that the\nthresholds converge to the optimal thresholds, and a random\nquantization (RQ) scheme which randomly generates a set of\nnon-identical thresholds based on some statistical prior knowledge\nof the channel. Simulation results show that our proposed schemes,\nbecause of their wisely devised quantization thresholds, present a\nsignificant performance improvement over the fixed quantization\nscheme that use a fixed (say, zero) quantization threshold. In\nparticular, the AQ scheme, even with a moderate number of pilot\nsymbols (about 5 times the number of users), can provide an\nachievable rate close to that of the perfect CSI case.\n\n\nThe rest of the paper is organized as follows. The system model\nand the problem of channel estimation using one-bit ADCs are\ndiscussed in Section \\ref{sec:system-model}. In Section\n\\ref{sec:MLE-CRB}, we develop a maximum likelihood estimator and\ncarry out a Cram\\'{e}r-Rao bound analysis of the one-bit channel\nestimation problem. The optimal design of quantization thresholds\nand the pilot sequences is studied in Section\n\\ref{sec:optimal-design}. In Section \\ref{sec:AQ-RQ}, we develop\nan adaptive quantization scheme and a random quantization scheme\nfor practical threshold design. 
Simulation results are provided in\nSection \\ref{sec:experiments}, followed by concluding remarks in\nSection \\ref{sec:conclusion}.\n\n\n\n\n\n\\section{System Model and Problem Formulation} \\label{sec:system-model}\nConsider a single-cell uplink multiuser massive MIMO system, where\nthe BS equipped with $M$ antennas serves $K$ ($M\\gg K$)\nsingle-antenna users simultaneously. The channel is assumed to be\nflat block fading, i.e. the channel remains constant over a\ncertain amount of coherence time. The received signal at the BS\ncan be expressed as\n\\begin{align}\n\\boldsymbol{Y}=\\boldsymbol{H}\\boldsymbol{X}+\\boldsymbol{W}\n\\label{data-model}\n\\end{align}\nwhere $\\boldsymbol{X}\\in\\mathbb{C}^{K\\times L}$ is a training\nmatrix and its row corresponds to each user's training sequence\nwith $L$ pilot symbols, $\\boldsymbol{H}\\in\\mathbb{C}^{M\\times K}$\ndenotes the channel matrix to be estimated, and\n$\\boldsymbol{W}\\in\\mathbb{C}^{M\\times L}$ represents the additive\nwhite Gaussian noise with its entries following a circularly\nsymmetric complex Gaussian distribution with zero mean and\nvariance $2\\sigma^2$.\n\nTo reduce the hardware cost and power consumption, we consider a\nmassive MIMO system which uses one-bit ADCs at the BS to quantize\nthe received signal. Specifically, at each antenna, the real and\nimaginary components of the received signal are quantized\nseparately using a pair of one-bit ADCs. Thus in total $2M$\none-bit ADCs are needed. The quantized output of the received\nsignal, $\\boldsymbol{B}\\triangleq [b_{m,l}]$, can be written as\n\\begin{align}\n\\boldsymbol{B}=\\mathcal{Q}(\\boldsymbol{Y})\n\\label{conventional-quantizer}\n\\end{align}\nwhere $\\mathcal{Q}(\\boldsymbol{Y})$ is an element-wise operation\nperformed on $\\boldsymbol{Y}$, and for each element of\n$\\boldsymbol{Y}$, $y_{m,l}$, we have\n\\begin{align}\n\\mathcal{Q}(y_{m,l})=\\text{sgn}(\\Re(y_{m,l}))+j\\text{sgn}(\\Im(y_{m,l}))\n\\end{align}\nin which $\\Re(y)$ and $\\Im(y)$ denote the real and imaginary\ncomponents of $y$, respectively, and the sign function\n$\\text{sgn}(\\cdot)$ is defined as\n\\begin{align}\n\\text{sgn}(y) \\triangleq \\left\\{ \\begin{array}{ll}\n1 & \\textrm{if $y\\ge 0$}\\\\\n-1 & \\textrm{otherwise}\n\\end{array} \\right.\n\\end{align}\nTherefore the quantized output belongs to the set\n\\begin{align}\nb_{m,l}\\in \\{1+j,-1+j,1-j,-1-j\\}\\quad \\forall m,l\n\\end{align}\nNote that in (\\ref{conventional-quantizer}), we implicitly assume\na zero threshold for one-bit quantization. Nevertheless, using\nidentically a zero threshold for all measurements is not\nnecessarily optimal, and it is interesting to analyze the impact\nof the quantization thresholds on the channel estimation\nperformance. Such an issue (i.e. choice of quantization\nthresholds), albeit important, was to some extent neglected by\nmost existing studies. To examine this problem, let\n$\\boldsymbol{T}\\triangleq [\\tau_{m,l}]$ denote the thresholds used\nfor one-bit quantization. 
The quantized output of the received\nsignal, $\\boldsymbol{B}$, is now given as\n\\begin{align}\n\\boldsymbol{B}=\\mathcal{Q}(\\boldsymbol{Y}-\\boldsymbol{T})\n\\label{quantizer}\n\\end{align}\n\nTo facilitate our analysis, we first convert (\\ref{data-model})\ninto a real-valued form as follows\n\\begin{align}\n\\boldsymbol{\\widetilde{Y}}=\\boldsymbol{\\widetilde{A}}\\boldsymbol{\\widetilde{H}}+\\boldsymbol{\\widetilde{W}}\n\\end{align}\nwhere\n\\begin{align}\n\\boldsymbol{\\widetilde{Y}}\\triangleq & [ \\Re(\\boldsymbol{Y}) \\\n\\Im(\\boldsymbol{Y})]^T \\nonumber\\\\\n\\boldsymbol{\\widetilde{H}} \\triangleq & [ \\Re(\\boldsymbol{H}) \\\n\\Im(\\boldsymbol{H})]^T \\nonumber\\\\\n\\boldsymbol{\\widetilde{W}} \\triangleq & [ \\Re(\\boldsymbol{W}) \\\n\\Im(\\boldsymbol{W})]^T \\nonumber\n\\end{align}\nand\n\\begin{align}\n\\boldsymbol{\\widetilde{A}} \\triangleq \\left[\\begin{array}{ccc}\n\\Re(\\boldsymbol{X}) & \\Im(\\boldsymbol{X}) \\\\\n-\\Im(\\boldsymbol{X}) & \\Re(\\boldsymbol{X})\n\\end{array}\\right]^T \\label{A-X-relationship}\n\\end{align}\nVectorizing the real-valued matrix $\\boldsymbol{\\widetilde{Y}}$,\nthe received signal can be expressed as a real-valued vector form\nas\n\\begin{align}\n\\boldsymbol{y}=\\boldsymbol{A}\\boldsymbol{h}+\\boldsymbol{w}\n\\label{data-model-vector}\n\\end{align}\nwhere\n$\\boldsymbol{y}\\triangleq\\text{vec}(\\boldsymbol{\\widetilde{Y}})$,\n$\\boldsymbol{A}\\triangleq\\boldsymbol{I}_{M}\\otimes\\boldsymbol{\\widetilde{A}}$,\n$\\boldsymbol{h}\\triangleq\\text{vec}(\\boldsymbol{\\widetilde{H}})$,\nand\n$\\boldsymbol{w}\\triangleq\\text{vec}(\\boldsymbol{\\widetilde{W}})$.\nIt can be easily verified $\\boldsymbol{y}\\in\\mathbb{R}^{2ML}$,\n$\\boldsymbol{A}\\in\\mathbb{R}^{2ML\\times 2MK}$, and\n$\\boldsymbol{h}\\in\\mathbb{R}^{2MK}$. Accordingly, the one-bit\nquantized data can be written as\n\\begin{align}\n\\boldsymbol{b}=\\text{sgn}(\\boldsymbol{y}-\\boldsymbol{\\tau})\n\\label{quantized-data-model-vector}\n\\end{align}\nwhere $\\boldsymbol{\\tau}\\triangleq \\text{vec}([\n\\Re(\\widetilde{\\boldsymbol{T}}) \\\n\\Im(\\widetilde{\\boldsymbol{T}})]^T)$ and\n$\\boldsymbol{\\tau}\\in\\mathbb{R}^{2ML}$. For simplicity, we define\n$N\\triangleq 2ML$. \n\nOur objective in this paper is to estimate the channel\n$\\boldsymbol{h}$ based on the one-bit quantized data\n$\\boldsymbol{b}$, examine the best achievable estimation\nperformance and investigate the optimal thresholds\n$\\boldsymbol{\\tau}$ as well as the optimal training sequences\n$\\boldsymbol{X}$. To this objective, in the following, we first\ndevelop a maximum likelihood (ML) estimator and carry out a\nCram\\'{e}r-Rao bound (CRB) analysis. The optimal choice of the\nquantization thresholds as well as the training sequences is then\nstudied based on the CRB matrix of the unknown channel parameter\nvector $\\boldsymbol{h}$.\n\n\n\n\n\\section{ML Estimator and CRB Analysis} \\label{sec:MLE-CRB}\n\\subsection{ML Estimator}\nBy combining (\\ref{data-model-vector}) and\n(\\ref{quantized-data-model-vector}), we have\n\\begin{align}\nb_n=\\text{sgn}(y_n-\\tau_n)\n=\\text{sgn}(\\boldsymbol{a}_n^{T}\\boldsymbol{h}+w_n-\\tau_n), \\quad\n\\forall n\n\\end{align}\nwhere, by allowing a slight abuse of notation, we let $b_n$,\n$y_n$, $\\tau_n$, and $w_n$ denote the $n$th entry of\n$\\boldsymbol{b}$, $\\boldsymbol{y}$, $\\boldsymbol{\\tau}$, and\n$\\boldsymbol{w}$, respectively; and $\\boldsymbol{a}_n^T$ denotes\nthe $n$th row of $\\boldsymbol{A}$. 
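To fix ideas before the likelihood of $b_n$ is derived, the following small NumPy sketch builds $\\boldsymbol{A}=\\boldsymbol{I}_{M}\\otimes\\boldsymbol{\\widetilde{A}}$ from a randomly drawn pilot matrix and generates one-bit data according to (\\ref{quantized-data-model-vector}). The problem sizes, the random seed and all variable names are illustrative choices made here and are not part of the system model.
\\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
M, K, L, sigma = 4, 2, 8, 1.0   # toy sizes; sigma^2 is the per-real-part noise variance

# complex pilots X (K x L) and channel H (M x K), drawn at random for illustration
X = (rng.standard_normal((K, L)) + 1j*rng.standard_normal((K, L))) / np.sqrt(2)
H = (rng.standard_normal((M, K)) + 1j*rng.standard_normal((M, K))) / np.sqrt(2)

# real-valued reformulation of the system model
A_tilde = np.block([[X.real, X.imag], [-X.imag, X.real]]).T   # 2L x 2K
A = np.kron(np.eye(M), A_tilde)                               # 2ML x 2MK
h = np.hstack([H.real, H.imag]).T.flatten(order="F")          # vec([Re(H) Im(H)]^T)

N = 2*M*L
w = sigma*rng.standard_normal(N)        # real Gaussian noise of variance sigma^2
tau = np.zeros(N)                       # zero thresholds, i.e. the conventional quantizer
y = A @ h + w
b = np.where(y - tau >= 0, 1.0, -1.0)   # one-bit data b = sgn(y - tau)
\\end{verbatim}
With the model in place, the distribution of each entry $b_n$ can be written down directly.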
It is easy to derive that\n\\begin{align}\nP(b_n=1;\\boldsymbol{h}) & =P(w_n\\geq-(\\boldsymbol{a}_n^{T}\\boldsymbol{h}-\\tau_n);\\boldsymbol{h}) \\nonumber \\\\\n& =F_{w}(\\boldsymbol{a}_n^{T}\\boldsymbol{h}-\\tau_n)\n\\end{align}\nand\n\\begin{align}\nP(b_n=-1;\\boldsymbol{h}) & =P(w_n < -(\\boldsymbol{a}_n^{T}\\boldsymbol{h}-\\tau_n);\\boldsymbol{h}) \\nonumber \\\\\n& =1-F_{w}(\\boldsymbol{a}_n^{T}\\boldsymbol{h}-\\tau_n)\n\\end{align}\nwhere $F_{w}(\\cdot)$ denotes the cumulative density function (CDF)\nof $w_n$, and $w_n$ is a real-valued Gaussian random variable with\nzero-mean and variance $\\sigma^2$. Therefore the probability mass\nfunction (PMF) of $b_n$ is given by\n\\begin{align}\np(b_n;\\boldsymbol{h})=& [1-F_{w}(\\boldsymbol{a}_n^{T}\\boldsymbol{h}-\\tau_n)]^{(1-b_n)\/2} \\nonumber \\\\\n&\\cdot[F_{w}(\\boldsymbol{a}_n^{T}\\boldsymbol{h}-\\tau_n)]^{(1+b_n)\/2}\n\\end{align}\nSince $\\{b_n\\}$ are independent, the log-PMF or log-likelihood\nfunction can be written as\n\\begin{align}\nL(\\boldsymbol{h}) & \\triangleq \\log p(b_1,\\dots,b_N;\\boldsymbol{h}) \\nonumber \\\\\n& = \\sum_{n=1}^{N} \\bigg\\{ \\frac{1-b_n}{2}\\log [1-F_{w}(\\boldsymbol{a}_n^{T}\\boldsymbol{h}-\\tau_n)] \\nonumber \\\\\n& \\qquad \\quad + \\frac{1+b_n}{2} \\log\n[F_{w}(\\boldsymbol{a}_n^{T}\\boldsymbol{h}-\\tau_n)] \\bigg\\}\n\\label{log-PMF}\n\\end{align}\nThe ML estimate of $\\boldsymbol{h}$, therefore, is given as\n\\begin{align}\n\\hat{\\boldsymbol{h}}=\\arg\\max_{\\boldsymbol{h}} \\ L(\\boldsymbol{h})\n\\label{MLE}\n\\end{align}\nIt can be proved that the log-likelihood function\n$L(\\boldsymbol{h})$ is a concave function. Hence computationally\nefficient search algorithms can be used to find the global\nmaximum. The proof of the concavity of $L(\\boldsymbol{h})$ is\ngiven in Appendix \\ref{appA}.\n\n\n\n\n\n\n\\subsection{CRB}\nWe now carry out a CRB analysis of the one-bit channel estimation\nproblem (\\ref{quantized-data-model-vector}). The CRB results help\nunderstand the effect of different system parameters, including\nquantization thresholds as well as training sequences, on the\nestimation performance. We first summarize our derived CRB results\nin the following theorem.\n\n\n\\newtheorem{theorem}{Theorem}\n\\begin{theorem} \\label{theorem1}\nThe Fisher information matrix (FIM) for the estimation problem\n(\\ref{quantized-data-model-vector}) is given as\n\\begin{align}\n\\boldsymbol{J}(\\boldsymbol{h})=\\sum_{n=1}^{N}\ng(\\tau_n,\\boldsymbol{a}_n)\\boldsymbol{a}_n\\boldsymbol{a}_n^T\n\\end{align}\nwhere $g(\\tau_n,\\boldsymbol{a}_n)$ is defined as\n\\begin{align}\ng(\\tau_n,\\boldsymbol{a}_n) \\triangleq \\frac {f_{w}^2\n(\\boldsymbol{a}_n^{T}\\boldsymbol{h}-\\tau_n)}\n{F_{w}(\\boldsymbol{a}_n^{T}\\boldsymbol{h}-\\tau_n)(1-F_{w}(\\boldsymbol{a}_n^{T}\\boldsymbol{h}-\\tau_n))}\n\\label{g-function}\n\\end{align}\nin which $f_{w}(\\cdot)$ denotes the probability density function\n(PDF) of $w_n$. Accordingly, the CRB matrix for the estimation\nproblem (\\ref{quantized-data-model-vector}) is given by\n\\begin{align}\n\\text{CRB}(\\boldsymbol{h})=\\boldsymbol{J}^{-1}(\\boldsymbol{h}) =\n\\left( \\sum_{n=1}^{N} g(\\tau_n,\\boldsymbol{a}_n) \\boldsymbol{a}_n\n\\boldsymbol{a}_n^T \\right)^{-1} \\label{CRB}\n\\end{align}\n\\end{theorem}\n\\begin{proof}\nSee Appendix \\ref{appB}.\n\\end{proof}\n\nAs is well known, the CRB places a lower bound on the estimation\nerror of any unbiased estimator \\cite{Kay93} and is asymptotically\nattained by the ML estimator. 
Specifically, the covariance matrix\nof any unbiased estimate satisfies:\n$\\text{cov}(\\hat{\\boldsymbol{h}})-\\text{CRB}(\\boldsymbol{h})\n\\succeq \\boldsymbol{0}$. Also, the variance of each component is\nbounded by the corresponding diagonal element of\n$\\text{CRB}(\\boldsymbol{h})$, i.e., $\\text{var}(\\hat{h}_i) \\ge\n[\\text{CRB}(\\boldsymbol{h})]_{ii}$.\n\nWe observe from (\\ref{CRB}) that the CRB matrix of\n$\\boldsymbol{h}$ depends on the quantization thresholds\n$\\boldsymbol{\\tau}$ as well as the matrix $\\boldsymbol{A}$ which\nis constructed from training sequences $\\boldsymbol{X}$ (cf.\n(\\ref{A-X-relationship})). Naturally, we wish to optimize\n$\\boldsymbol{\\tau}$ and $\\boldsymbol{A}$ (i.e. $\\boldsymbol{X}$)\nby minimizing the trace of the CRB matrix, i.e. the overall\nestimation error asymptotically achieved by the ML estimator. The\noptimization therefore can be formulated as follows\n\\begin{align}\n\\min_{\\boldsymbol{X},\\boldsymbol{\\tau}}\\quad &\n\\text{tr}\\left\\{\\text{CRB}(\\boldsymbol{h})\\right\\} = \\mathrm{tr}\n\\left\\{ \\left( \\sum_{n=1}^{N} g(\\tau_n,\\boldsymbol{a}_n)\n\\boldsymbol{a}_n \\boldsymbol{a}_n^T \\right)^{-1} \\right\\}\n\\nonumber\\\\\n\\text{s.t.} \\quad &\n\\boldsymbol{A}=\\boldsymbol{I}_M\\otimes\\boldsymbol{\\widetilde{A}}\n\\nonumber\\\\\n& \\boldsymbol{\\widetilde{A}} \\triangleq \\left[\\begin{array}{ccc}\n\\Re(\\boldsymbol{X}) & \\Im(\\boldsymbol{X}) \\\\\n-\\Im(\\boldsymbol{X}) & \\Re(\\boldsymbol{X})\n\\end{array}\\right]^T \\nonumber\\\\\n& \\text{tr}(\\boldsymbol{X}\\boldsymbol{X}^H)\\leq P\n \\label{opt1}\n\\end{align}\nwhere $\\text{tr}(\\boldsymbol{X}\\boldsymbol{X}^H)\\leq P$ is a\ntransmit power constraint imposed on the pilot signals. Such an\noptimization is examined in the following section, where it is\nshown that the optimization of $\\boldsymbol{X}$ can be decoupled\nfrom the optimization of the threshold $\\boldsymbol{\\tau}$.\n\n\n\n\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[width=3.5in]{g-function.eps}\n\\caption{The function value of $g(\\tau_n,\\boldsymbol{a}_n)$ vs.\n$(\\boldsymbol{a}_n^T\\boldsymbol{h}-\\tau_n)$, where $\\sigma^2=1$.}\n\\label{fig1}\n\\end{figure}\n\n\n\n\n\\section{Optimal Design and Performance Analysis} \\label{sec:optimal-design}\n\\subsection{Optimal Quantization Thresholds and Pilot\nSequences} Before proceeding, we first introduce the following\nresult.\n\n\\newtheorem{proposition}{Proposition}\n\\begin{proposition} \\label{proposition1}\nFor the Gaussian random variable $w_n$,\n$g(\\tau_n,\\boldsymbol{a}_n)$ defined in (\\ref{g-function}) is a\npositive and symmetric function attaining its maximum when\n$\\tau_n=\\boldsymbol{a}_n^{T}\\boldsymbol{h}$ (see Fig. \\ref{fig1}).\n\\end{proposition}\n\\begin{proof}\nPlease see Appendix \\ref{appE}.\n\\end{proof}\n\nHence, given a fixed $\\boldsymbol{A}$ (i.e. $\\boldsymbol{X}$), the\noptimal quantization thresholds conditional on $\\boldsymbol{A}$\nare given by\n\\begin{align}\n\\tau_n^{\\star}=\\boldsymbol{a}_n^{T}\\boldsymbol{h}, \\quad \\forall\nn\\in\\{1,\\ldots,N\\} \\label{optimumthreshold}\n\\end{align}\nThe result (\\ref{optimumthreshold}) comes directly by noting that\n\\begin{align}\n\\sum_{n=1}^N\ng_n(\\tau_n^{\\star},\\boldsymbol{a}_n)\\boldsymbol{a}_n\\boldsymbol{a}_n^T-\\sum_{n=1}^N\ng_n(\\tau_n,\\boldsymbol{a}_n)\\boldsymbol{a}_n\\boldsymbol{a}_n^T\\succeq\\mathbf{0}\n\\end{align}\nand resorting to the convexity of $\\text{tr}(\\boldsymbol{P}^{-1})$\nover the set of positive definite matrix, i.e. 
for any\n$\\boldsymbol{P}\\succ\\mathbf{0}$, $\\boldsymbol{Q}\\succ\\mathbf{0}$,\nand $\\boldsymbol{P}-\\boldsymbol{Q}\\succeq \\mathbf{0}$, the\nfollowing inequality\n$\\text{tr}(\\boldsymbol{P}^{-1})\\leq\\text{tr}(\\boldsymbol{Q}^{-1})$\nholds (see \\cite{BoydVandenberghe03}).\n\nWe see that the optimal choice of the quantization threshold\n$\\tau_n$ is dependent on the unknown channel $\\boldsymbol{h}$. To\nfacilitate our analysis, we, for the time being, suppose\n$\\boldsymbol{h}$ is known. Substituting (\\ref{optimumthreshold})\ninto (\\ref{opt1}) and noting that\n\\begin{align}\ng(\\tau_n^{\\star},\\boldsymbol{a}_n)=\\frac {f_{w}^2 (0)}\n{F_{w}(0)(1-F_{w}(0))} = \\frac{2}{\\pi\\sigma^2} \\quad \\forall n\n\\end{align}\nthe optimization (\\ref{opt1}) reduces to\n\\begin{align}\n\\min_{\\boldsymbol{X}} \\quad & \\frac{\\pi\\sigma^2}{2} \\text{tr}\n\\left\\{ \\left( \\boldsymbol{A}^T \\boldsymbol{A} \\right)^{-1}\n\\right\\}\n\\nonumber\\\\\n\\text{s.t.} \\quad &\n\\boldsymbol{A}=\\boldsymbol{I}_M\\otimes\\boldsymbol{\\widetilde{A}}\n\\nonumber\\\\\n& \\boldsymbol{\\widetilde{A}} \\triangleq \\left[\\begin{array}{ccc}\n\\Re(\\boldsymbol{X}) & \\Im(\\boldsymbol{X}) \\\\\n-\\Im(\\boldsymbol{X}) & \\Re(\\boldsymbol{X})\n\\end{array}\\right]^T \\nonumber\\\\\n& \\text{tr}(\\boldsymbol{X}\\boldsymbol{X}^H)\\leq P \\label{opt2}\n\\end{align}\nwhich is now independent of $\\boldsymbol{h}$. We have the\nfollowing theorem regarding the solution to the optimization\n(\\ref{opt2}).\n\n\n\n\\begin{theorem} \\label{theorem2}\nThe minimum achievable objective function value of (\\ref{opt2}) is\ngiven by $(\\pi\\sigma^2 MK^2)\/P$ and can be attained if the pilot\nmatrix $\\boldsymbol{X}$ satisfies\n\\begin{align}\n\\boldsymbol{X}\\boldsymbol{X}^H = (P\/K) \\boldsymbol{I}\n\\label{theorem2:eqn1}\n\\end{align}\n\\end{theorem}\n\\begin{proof}\nSee Appendix \\ref{appC}.\n\\end{proof}\n\nTheorem \\ref{theorem2} reveals that, for one-bit massive MIMO\nsystems, users should employ orthogonal pilot sequences in order\nto minimize channel estimation errors. Although it is a convention\nto use orthogonal pilots to facilitate channel estimation for\nconventional massive MIMO systems, to our best knowledge, its\noptimality in one-bit massive MIMO systems has not been\nestablished before.\n\n\n\\subsection{Performance Analysis}\nWe now investigate the estimation performance when the optimal\nthresholds are employed, and its comparison with the performance\nattained by a conventional massive MIMO system which assumes\ninfinite-precision ADCs. Substituting the optimal thresholds\n(\\ref{optimumthreshold}) into the CRB matrix (\\ref{CRB}), we have\n\\begin{align}\n\\text{CRB}_{\\text{OQ}}(\\boldsymbol{h})=\\frac{\\pi\\sigma^2}{2}\n\\left( \\boldsymbol{A}^T \\boldsymbol{A} \\right)^{-1} \\label{CRB-Q}\n\\end{align}\nwhere for clarity, we use the subscript OQ to represent the\nestimation scheme using optimal quantization thresholds. On the\nother hand, when the unquantized observations $\\boldsymbol{y}$ are\navailable, it can be readily verified that the CRB matrix is given\nas\n\\begin{align}\n\\text{CRB}_{\\text{NQ}}(\\boldsymbol{h})=\\sigma^2 \\left(\n\\boldsymbol{A}^T \\boldsymbol{A} \\right)^{-1} \\label{CRB-NQ}\n\\end{align}\nwhere we use the subscript NQ to represent the scheme which has\naccess to the unquantized observations. 
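As a quick numerical illustration (continuing the toy setup of the earlier sketch, and again purely illustrative), the per-sample Fisher weight $g(\\tau_n,\\boldsymbol{a}_n)$ of (\\ref{g-function}) and the two bounds (\\ref{CRB-Q}) and (\\ref{CRB-NQ}) can be evaluated as follows; the threshold grid, the selected row and the helper names are our own choices.
\\begin{verbatim}
from scipy.stats import norm

def g(tau_n, a_n, h, sigma):
    # Fisher weight of a single one-bit measurement (the function g of Theorem 1)
    z = a_n @ h - tau_n
    Fw = norm.cdf(z, scale=sigma)
    return norm.pdf(z, scale=sigma)**2 / (Fw * (1.0 - Fw))

a0 = A[0]                                 # first row of A from the earlier sketch
grid = a0 @ h + np.linspace(-3, 3, 301)   # thresholds around the value a0 @ h
vals = [g(t, a0, h, sigma) for t in grid]
print(max(vals), 2/(np.pi*sigma**2))      # the peak is attained at tau = a0 @ h

crb_nq = sigma**2 * np.linalg.inv(A.T @ A)             # unquantized observations
crb_oq = (np.pi*sigma**2/2) * np.linalg.inv(A.T @ A)   # optimal one-bit thresholds
print(np.trace(crb_oq), np.trace(crb_nq))
\\end{verbatim}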
Comparing (\\ref{CRB-Q})\nwith (\\ref{CRB-NQ}), we can see that if optimal thresholds are\nemployed, then using one-bit ADCs for channel estimation incurs\nonly a mild performance loss relative to using infinite-precision\nADCs, with the CRB increasing by only a factor of $\\pi\/2$, i.e.\n\\begin{align}\n\\text{CRB}_{\\text{OQ}}(\\boldsymbol{h})=\\frac{\\pi}{2}\n\\text{CRB}_{\\text{NQ}}(\\boldsymbol{h})\n\\end{align}\nWe also take a glimpse of the estimation performance as the\nthresholds deviate from their optimal values. For simplicity, let\n$\\tau_n=\\tau_n^{\\star}+\\delta=\\boldsymbol{a}_n^{T}\\boldsymbol{h}+\\delta,\n\\forall n$, in which case the CRB matrix is given by\n\\begin{align}\n\\text{CRB}_{\\text{Q}}(\\boldsymbol{h})=\\frac\n{F_{w}(\\delta)(1-F_{w}(\\delta))}{f_{w}^2 (\\delta)}\\left(\n\\boldsymbol{A}^T \\boldsymbol{A} \\right)^{-1}\n\\end{align}\nSince $(F_{w}(\\delta)(1-F_{w}(\\delta)))\/f_{w}^2(\\delta)$ is the\nreciprocal of $g(\\tau_n,\\boldsymbol{a}_n)$, from Fig. \\ref{fig1},\nwe know that the function value\n$(F_{w}(\\delta)(1-F_{w}(\\delta)))\/f_{w}^2(\\delta)$ grows\nexponentially as $|\\delta|$ increases. This indicates that a\ndeviation of the thresholds from their optimal values results in a\nsubstantial performance loss.\n\n\nIn summary, the above results have important implications for the\ndesign of one-bit massive MIMO systems. It points out that a\ncareful choice of quantization thresholds can help improve the\nestimation performance significantly, and help achieve an\nestimation accuracy close to an ideal estimator which has access\nto the raw observations $\\boldsymbol{y}$.\n\n\n\nThe problem lies in that the optimal thresholds\n$\\boldsymbol{\\tau}$ are functions of $\\boldsymbol{h}$, as\ndescribed in (\\ref{optimumthreshold}). Since $\\boldsymbol{h}$ is\nunknown and to be estimated, the optimal thresholds\n$\\boldsymbol{\\tau}$ are also unknown. To address this difficulty,\nwe, in the following, propose an adaptive quantization (AQ) scheme\nby which the thresholds are dynamically adjusted from one\niteration to another, and a random quantization (RQ) schme which\nrandomly generates a set of nonidentical thresholds based on some\nstatistical prior knowledge of the channel.\n\n\n\n\n\n\n\n\n\\section{Practical Threshold Design Strategies} \\label{sec:AQ-RQ}\n\\subsection{Adaptive Quantization}\nOne strategy to overcome the above difficulty is to use an\niterative algorithm in which the thresholds are iteratively\nrefined based on the previous estimate of $\\boldsymbol{h}$.\nSpecifically, at iteration $i$, we use the current quantization\nthresholds $\\boldsymbol{\\tau}^{(i)}$ to generate the one-bit\nobservation data $\\boldsymbol{b}^{(i)}$. Then a new estimate\n$\\hat{\\boldsymbol{h}}^{(i)}$ is obtained from the ML estimator\n(\\ref{MLE}). This estimate is then plugged in\n(\\ref{optimumthreshold}) to obtain updated quantization\nthresholds, i.e. $\\boldsymbol{\\tau}^{(i+1)}=\\boldsymbol{A}\n\\hat{\\boldsymbol{h}}^{(i)}$, for subsequent iteration. When\ncomputing the ML estimate $\\hat{\\boldsymbol{h}}^{(i)}$, not only\nthe quantized data from the current iteration but also from all\nprevious iterations can be used. The ML estimator (\\ref{MLE}) can\nbe easily adapted to accommodate these quantized data since the\ndata are independent across different iterations. Due to the\nconsistency of the ML estimator for large data records, this\niterative process will asymptotically lead to optimal quantization\nthresholds, i.e. 
$\\boldsymbol{\\tau}^{(i)} \\stackrel{i \\to\n\\infty}{\\longrightarrow} \\boldsymbol{A} \\boldsymbol{h}$. In fact,\nour simulation results show that the adaptive quantization scheme\nyields quantization thresholds close to the optimal values within\nonly a few iterations.\n\nFor clarity, we summarize the adaptive quantization (AQ) scheme as\nfollows.\n\n\\begin{center}\n\\textbf{Adaptive Quantization Scheme}\n\\end{center}\n\\vspace{0cm} \\noindent\n\\begin{tabular}{lp{7.7cm}}\n\\hline 1.& Select an initial quantization threshold\n$\\boldsymbol{\\tau}^{(0)}$ and the maximum number of iterations $i_{\\text{max}}$. \\\\\n2.& At iteration $i=1,2,\\ldots$: Based on $\\boldsymbol{y}$ and\n$\\boldsymbol{\\tau}^{(i)}$, calculate the new binary data\n$\\boldsymbol{b}^{(i)}=\\text{sgn}(\\boldsymbol{y}-\\boldsymbol{\\tau}^{(i)})$. \\\\\n3.& Compute a new estimate of $\\boldsymbol{h}$,\n$\\hat{\\boldsymbol{h}}^{(i)}$,\nvia (\\ref{MLE}). \\\\\n4.& Calculate new thresholds according to $\\boldsymbol{\\tau}^{(i+1)}=\\boldsymbol{A}\\hat{\\boldsymbol{h}}^{(i)}$. \\\\\n5.& Go to Step 2 if $i < i_{\\text{max}}$. \\\\\n\\hline\n\\end{tabular}\n\n\nNote that during the iterative process, the channel\n$\\boldsymbol{h}$ is assumed constant over time. Thus the AQ scheme\ncan be used to estimate channels that are unchanged or slowly\ntime-varying across a number of consecutive frames. For example,\nfor the scenario where the relative speeds between the mobile\nterminals and the base station are slow, say, 2 meters per second,\nthe channel coherence time could be up to tens of milliseconds,\nmore precisely, about 60 milliseconds if the carrier frequency is\nset to 1GHz, according to the Clarke's model\n\\cite{TseViswanath05}. Suppose the time duration of each frame is\n10 milliseconds which is a typical value for practical LTE\nsystems. In this case, the channel remains unchanged across 6\nconsecutive frames. We can use the AQ scheme to update the\nquantization thresholds at each frame based on the channel\nestimate obtained from the previous frame. In this way, we can\nexpect that the quantization thresholds will come closer and\ncloser to the optimal values from one frame to the next, and as a\nresult, a more and more accurate channel estimate can be obtained.\n\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[width=3.5in]{AQ.eps}\n\\caption{An off-line implementation of the AQ scheme.}\n\\label{fig2}\n\\end{figure}\n\nThe above scheme assumes a static or slowly time-varying channel\nacross multiple frames. Another way of implementing the AQ scheme\nrequires no such an assumption, but at the expense of increased\nhardware complexity. The idea is to use a number of\nsample-and-hold (S\/H) circuits to sample the analog received\nsignals and to store their values for subsequent offline\nprocessing. Specifically, each antenna\/RF chain is followed by\n$2L$ S\/H circuits which are equally divided into two groups to\nsample and store the real and imaginary components, respectively\n(see Fig. \\ref{fig2}). Through a precise timing control, we ensure\nthat at each antenna, say, the $m$th antenna, the $l$th S\/H\ncircuit pair in the two groups are controlled to store the real\nand imaginary components of the $l$th received pilot symbol, i.e.\n$\\Re(y_{m,l})$ and $\\Im(y_{m,l})$, respectively. Also, to avoid\nusing a one-bit ADC for each S\/H circuit, a switch can be used to\nconnect a single one-bit ADC with multiple S\/H circuits. 
Once the\nanalog signals $\\boldsymbol{y}$ have been stored, the AQ scheme\ncan be implemented in an offline manner. Clearly, this offline\napproach can be implemented on a single frame basis, and thus no\nlonger requires a static channel assumption. Nevertheless, such an\nimplementation requires a number of S\/H circuits as well as\nprecise timing control for sampling and quantization. Also, this\noffline processing may cause a latency issue which should be taken\ncare of in practical systems.\n\n\n\n\n\n\n\n\n\\subsection{Random Quantization}\nThe AQ scheme requires the channel to be (approximately)\nstationary, or needs to be implemented with additional hardware\ncircuits. Here we propose a random quantization (RQ) scheme that\ndoes not involve any iterative procedure and is simple to\nimplement. The idea is to randomly generate a set of non-identical\nthresholds based on some statistical prior knowledge of\n$\\boldsymbol{h}$, with the hope that some of the thresholds are\nclose to the unknown optimal thresholds. For example, suppose each\nentry of $\\boldsymbol{h}$ follows a Gaussian distribution with\nzero mean and variance $\\sigma_h^2$. Note that different entries\nof $\\boldsymbol{h}$ may have different variances due to the reason\nthat they may correspond to different users. Nevertheless, we\nassume the same variance for all entries for simplicity. We\nrandomly generate $N$ different realizations of $\\boldsymbol{h}$,\ndenoted as $\\{\\boldsymbol{\\tilde{h}}_n\\}$, following this known\ndistribution. The $N$ quantization thresholds are then devised\naccording to\n\\begin{align}\n\\tau_n=\\boldsymbol{a}_n^{T}\\boldsymbol{\\tilde{h}}_n, \\quad \\forall\nn\\in\\{1,\\ldots,N\\} \\label{multi-thresholding}\n\\end{align}\nOur simulation results suggest that this RQ scheme can achieve a\nconsiderable performance improvement over the conventional fixed\nquantization scheme which uses a fixed (typically zero) threshold.\nThe reason is that the thresholds produced by\n(\\ref{multi-thresholding}) are more likely to be close to their\noptimal values.\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Simulation Results} \\label{sec:experiments}\nWe now carry out experiments to corroborate our theoretical\nanalysis and to illustrate the performance of our proposed one-bit\nquantization schemes, i.e. the AQ and the RQ schemes. We compare\nour schemes with the conventional fixed quantization (FQ) scheme\nwhich employs a fixed zero threshold for one-bit quantization, and\na no-quantization scheme (referred to as NQ) which uses the\noriginal unquantized data for channel estimation. For the NQ\nscheme, it can be easily verified that its ML estimate is given by\n\\begin{align}\n\\boldsymbol{\\hat{h}}=(\\boldsymbol{A}^T\\boldsymbol{A})^{-1}\\boldsymbol{A}^T\\boldsymbol{y}\n\\end{align}\nand its associated CRB is given by (\\ref{CRB-NQ}). For other\nschemes such as the RQ and the FQ, although a close-form\nexpression is not available, the ML estimate can be obtained by\nsolving the convex optimization (\\ref{MLE}). In our simulations,\nwe assume independent and identically distributed (i.i.d.)\nrayleigh fading channels, i.e. all elements of $\\boldsymbol{H}$\nfollow a circularly symmetric complex Gaussian distribution with\nzero mean and unit variance. Training sequences $\\boldsymbol{X}$\nwhich satisfy (\\ref{theorem2:eqn1}) are randomly generated. 
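For concreteness, one possible way to generate pilots satisfying (\\ref{theorem2:eqn1}), to draw the RQ thresholds of (\\ref{multi-thresholding}), and to compute the one-bit ML estimate (\\ref{MLE}) with a generic quasi-Newton solver is sketched below (NumPy/SciPy, continuing the earlier sketch). The solver choice, the default prior variance and all helper names are our own; any method that maximizes the concave log-likelihood would serve equally well.
\\begin{verbatim}
from scipy.optimize import minimize
from scipy.stats import norm

def orthogonal_pilots(K, L, P, rng):
    # QR of a random L x K complex matrix gives orthonormal columns Q,
    # so X = sqrt(P/K) * Q^H satisfies X X^H = (P/K) I
    G = rng.standard_normal((L, K)) + 1j*rng.standard_normal((L, K))
    Q, _ = np.linalg.qr(G)
    return np.sqrt(P/K) * Q.conj().T

def one_bit_mle(A, b, tau, sigma, h0=None):
    # maximize the concave one-bit log-likelihood; entries of b are +1 or -1
    def nll(h):
        return -np.sum(norm.logcdf(b * (A @ h - tau) / sigma))
    h0 = np.zeros(A.shape[1]) if h0 is None else h0
    return minimize(nll, h0, method="BFGS").x

def rq_thresholds(A, rng, var_h=0.5):
    # random quantization: tau_n = a_n^T h_tilde_n with prior draws of the channel
    draws = np.sqrt(var_h) * rng.standard_normal(A.shape)
    return np.einsum("ni,ni->n", A, draws)
\\end{verbatim}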
The\nsignal-to-noise ratio (SNR) is defined as\n\\begin{align}\n\\text{SNR}=\\frac{P}{KL\\sigma^2}\n\\end{align}\n\n\n\\begin{figure}[!t]\n \\centering\n\\begin{tabular}{c}\n\\includegraphics[width=3.5in]{msevsitr1}\\\\\n(a). $K=8$, $L=32$ and $\\text{SNR}=15$ dB. \\\\\n\\includegraphics[width=3.5in]{msevsitr2}\\\\\n(b). $K=16$, $L=40$ and $\\text{SNR}=15$ dB.\n\\end{tabular}\n \\caption{MSEs of the AQ scheme as a function of the number of iterations.}\n \\label{fig3}\n\\end{figure}\n\n\nWe first examine the estimation performance of our proposed AQ\nscheme which adaptively adjusts the thresholds based on the\nprevious estimate of the channel. Fig. \\ref{fig3} plots the\nmean-squared errors (MSEs) vs. the number of iterations for the AQ\nscheme, where we set $K=8$, $L=32$ for Fig. (a) and $K=16$, $L=40$\nfor Fig. (b). The SNR is set to 15dB. The MSE is calculated as\n\\begin{align}\n\\text{MSE}=\\frac{1}{K M}\n\\|\\boldsymbol{H}-\\boldsymbol{\\hat{H}}\\|_F^2\n\\end{align}\nTo better illustrate the effectiveness of the AQ scheme, we also\ninclude the CRB results in Fig. \\ref{fig3}. in which the CRB-OQ,\ngiven by (\\ref{CRB-Q}), represents the theoretical lower bound on\nthe estimation errors of any unbiased estimator using optimal\nthresholds for one-bit quantization, and the CRB-NQ, given by\n(\\ref{CRB-NQ}), represents the lower bound on the estimation\nerrors of any unbiased estimator which has access to the original\nobservations. From Fig. \\ref{fig3}, we see that our proposed AQ\nscheme approaches the theoretical lower bound CRB-OQ within only a\nfew (say, 5) iterations, and achieves performance close to the CRB\nassociated with the NQ scheme. This result demonstrates the\neffectiveness of the AQ scheme in searching for the optimal\nthresholds. In the rest of our simulations, we set the maximum\nnumber of iterations, $i_{\\text{max}}$, equal to 5 for the AQ\nscheme.\n\n\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[width=3.5in]{msevsL}\n\\caption{MSEs vs. number of pilot symbols, where $K=8$ and\n$\\text{SNR}=15$ dB.} \\label{fig4}\n\\end{figure}\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[width=3.5in]{msevsSNR}\n\\caption{MSEs vs. SNR(dB), where $K=8$ and $L=96$.} \\label{fig5}\n\\end{figure}\n\n\n\nWe now compare the estimation performance of different schemes.\nFig. \\ref{fig4} plots the MSEs of respective schemes as a function\nof the number of pilot symbols, $L$, where we set $K=8$ and\n$\\text{SNR}=15\\text{dB}$. The corresponding CRBs of these schemes\nare also included. Note that the CRBs for the FQ and the RQ\nschemes can be obtained by substituting the thresholds into\n(\\ref{CRB}). Results are averaged over $10^3$ independent runs,\nwith the channel and the pilot sequences randomly generated for\neach run. From Fig. \\ref{fig4}, we can see that the proposed AQ\nscheme outperforms the FQ and RQ schemes by a big margin. This\nresult corroborates our analysis that an optimal choice of the\nquantization thresholds helps achieve a substantial performance\nimprovement. In particular, the AQ scheme needs less than 30 pilot\nsymbols to achieve a decent estimation accuracy with a MSE of\n0.01, while the FQ and RQ schemes require a much larger number of\npilot symbols to attain a same estimation accuracy. On the other\nhand, we should note that although the AQ scheme has the potential\nto achieve performance close to the NQ scheme, the implementation\nof the AQ is more complicated since it involves an iterative\nprocess to learn the optimal thresholds. 
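For completeness, the iterative AQ update can be written as a short simulation loop that reuses one_bit_mle from the earlier sketch. Here each iteration receives a fresh noisy copy of the pilots for a static channel, mimicking the multi-frame operation described above; the function name, the defaults and the simulation-style interface (the true channel is passed in only to synthesize received signals) are our own choices.
\\begin{verbatim}
def adaptive_quantization(A, h_true, sigma, rng, n_iter=5):
    # AQ steps: quantize with the current thresholds, re-estimate the channel
    # from all one-bit data collected so far, then refine tau = A h_hat
    tau = np.zeros(A.shape[0])
    A_all, b_all, tau_all = [], [], []
    h_hat = np.zeros(A.shape[1])
    for _ in range(n_iter):
        y = A @ h_true + sigma*rng.standard_normal(A.shape[0])  # fresh received pilots
        b = np.where(y - tau >= 0, 1.0, -1.0)
        A_all.append(A); b_all.append(b); tau_all.append(tau)
        h_hat = one_bit_mle(np.vstack(A_all), np.concatenate(b_all),
                            np.concatenate(tau_all), sigma, h0=h_hat)
        tau = A @ h_hat                                          # threshold refinement
    return h_hat
\\end{verbatim}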
In contrast, our proposed\nRQ scheme is as simple as the FQ scheme to implement, meanwhile it\npresents a clear performance advantage over the FQ scheme. We can\nsee from Fig. \\ref{fig4} that the RQ requires about 100 symbols to\nachieve a MSE of 0.1, whereas the FQ needs about 250 pilot symbols\nto reach a same estimation accuracy. The reason why the RQ\nperforms better than the FQ is that some of the thresholds\nproduced according to (\\ref{multi-thresholding}) are likely to be\nclose to the optimal thresholds. In Fig. \\ref{fig5}, we plot the\nMSEs of respective schemes under different SNRs, where we set\n$K=8$ and $L=96$. Similar conclusions can be made from Fig.\n\\ref{fig5}.\n\n\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[width=3.5in]{servsL}\n\\caption{SERs vs. number of pilot symbols, where $K=8$, $M=64$ and\n$\\text{SNR}=5\\text{dB}$.} \\label{fig6}\n\\end{figure}\n\n\\begin{figure}[!t]\n\\centering\n\\includegraphics[width=3.5in]{ratevsL}\n\\caption{Achievable rates vs. number of pilot symbols, where\n$K=8$, $M=64$ and $\\text{SNR}=5\\text{dB}$.} \\label{fig7}\n\\end{figure}\n\n\n\n\nNext, we examine the effect of channel estimation accuracy on the\nsymbol error rate (SER) performance. For each scheme, after the\nchannel is estimated, a near maximum likelihood detector\n\\cite{ChoiMo16} developed for one-bit massive MIMO is adopted for\nsymbol detection. For a fair comparison, in the symbol detection\nstage, the quantization thresholds are all set equal to zero, as\nassumed in \\cite{ChoiMo16}. In our experiments, QPSK symbols are\ntransmitted by all users. Fig. \\ref{fig6} plots the SERs of\nrespective schemes vs. the number of pilot symbols, where we set\n$K=8$, $M=64$, and $\\text{SNR}=5\\text{dB}$. Results are averaged\nover all $K$ users. The SER performance obtained by assuming\nperfect channel knowledge is also included. It can be seen that\nthe SER performance improves as the number of pilot symbols\nincreases, which is expected since a more accurate channel\nestimate can be obtained when more pilot symbols are available for\nchannel estimation. We also observe that the AQ scheme, using a\nmoderate number (about 120 symbols that is only 15 times the\nnumber of users) of pilot symbols, can achieve SER performance\nclose to that attained by assuming perfect channel knowledge.\nMoreover, the SER results, again, demonstrate the superiority of\nthe RQ over the FQ scheme. In order to attain a same SER, say,\n$10^{-3}$, the RQ requires about 60 pilot symbols, whereas the FQ\nrequires about 100 pilot symbols.\n\nIn Fig. \\ref{fig7}, the achievable rates of respective schemes vs.\nthe number of pilot symbols are depicted, where we set $K=8$,\n$M=64$, and $\\text{SNR}=5\\text{dB}$. The achievable rate for the\n$k$th user is calculated as \\cite{MollenChoi17}\n\\begin{align}\nR_k \\triangleq \\log_2 \\left( 1+ \\frac{\n|\\mathbb{E}\\left[s_k^*(t)\\hat{s}_k(t)\\right]|^2 }\n{\\mathbb{E}\\left[|\\hat{s}_k(t)|^2\\right] -\n|\\mathbb{E}\\left[s_k^*(t)\\hat{s}_k(t)\\right]|^2} \\right)\n\\end{align}\nwhere $s_k(t)$ is the transmit symbol of the $k$th user at time\n$t$, $()^{*}$ denotes the conjugate, and $\\hat{s}_k(t)$ is the\nestimated symbol of $s_k(t)$, which is obtained via the near\nmaximum likelihood detector by using the channel estimated by\nrespective schemes. The achievable rate we plotted is averaged\nover all $K$ users. 
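A sample-based evaluation of the above rate expression is straightforward; the short sketch below (function and variable names are ours) computes it for one user from arrays of transmitted and detected symbols, leaving the symbol detection itself to the near maximum likelihood detector of \\cite{ChoiMo16}.
\\begin{verbatim}
def achievable_rate(s, s_hat):
    # sample estimate of R_k: s and s_hat hold the transmitted and the detected
    # symbols of user k over many channel uses
    c = np.mean(np.conj(s) * s_hat)   # estimate of E[s_k^*(t) s_hat_k(t)]
    p = np.mean(np.abs(s_hat)**2)     # estimate of E[|s_hat_k(t)|^2]
    return np.log2(1.0 + np.abs(c)**2 / (p - np.abs(c)**2))
\\end{verbatim}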
It can be seen that, even with a moderate\nnumber of pilot symbols (about 5 times the number of users), the\nAQ scheme can provide an achievable rate close to that of the\nperfect CSI case, whereas the achievable rates attained by the\nother two schemes are far below the level of the AQ scheme.\nCompared to the FQ, the RQ scheme achieves an increase of about 30\npercent in the achievable rate.\n\n\n\n\n\n\\section{Conclusions} \\label{sec:conclusion}\nAssuming one-bit ADCs at the BS, we studied the problem of one-bit\nquantization design and channel estimation for uplink multiuser\nmassive MIMO systems. Specifically, based on the derived CRB\nmatrix, we examined the impact of quantization thresholds on the\nchannel estimation performance. Our theoretical analysis revealed\nthat using one-bit ADCs can achieve an estimation error close to\nthat attained by using infinite-precision ADCs, given that the\nquantization thresholds are optimally set. Our analysis also\nsuggested that the optimal quantization thresholds are dependent\non the unknown channel parameters. We developed two practical\nquantization design schemes, namely, an adaptive quantization\nscheme which adaptively adjusts the thresholds such that the\nthresholds converge to the optimal thresholds, and a random\nquantization scheme which randomly generates a set of\nnon-identical thresholds based on some statistical prior knowledge\nof the channel. Simulation results showed that the proposed\nquantization schemes achieved a significant performance\nimprovement over the fixed quantization scheme that uses a fixed\n(typically zero) quantization threshold, and thus can help\nsubstantially reduce the training overhead in order to attain a\nsame estimation accuracy target.\n\n\n\n\n\n\n\n\n\\useRomanappendicesfalse\n\\appendices\n\n\\section{Proof of Concavity of The Log-Likelihood Function (\\ref{log-PMF})} \\label{appA}\nIt can be easily verified that\n$f_{w}(\\boldsymbol{a}_n^{T}\\boldsymbol{h}-\\tau_n)$ is log-concave\nin $\\boldsymbol{h}$ since the Hessian matrix of $\\log\nf_{w}(\\boldsymbol{a}_n^{T}\\boldsymbol{h}-\\tau_n)$, which is given\nby\n\\begin{align}\n\\frac {\\partial^2 \\log\nf_{w}(\\boldsymbol{a}_n^{T}\\boldsymbol{h}-\\tau_n)} {{\\partial\n\\boldsymbol{h} \\partial \\boldsymbol{h}^T}} = -\n\\frac{\\boldsymbol{a}_n \\boldsymbol{a}_n^{T}} {\\sigma^2}\n\\end{align}\nis negative semidefinite. Consequently the corresponding\ncumulative density function (CDF) and complementary CDF (CCDF),\nwhich are integrals of the log-concave function\n$f_{w}(\\boldsymbol{a}_n^{T}\\boldsymbol{h}-\\tau_n)$ over convex\nsets $(-\\infty,\\tau_n)$ and $(\\tau_n,\\infty)$ respectively, are\nalso log-concave, and their logarithms are concave. 
Since\nsummation preserves concavity, $L(\\boldsymbol{h})$ is a concave\nfunction of $\\boldsymbol{h}$.\n\n\n\\section{Proof of Theorem \\ref{theorem1}} \\label{appB}\nDefine a new variable $z_n\\triangleq\n\\boldsymbol{a}_n^{T}\\boldsymbol{h}$ and define\n\\begin{align}\nl(z_n) & \\triangleq \\frac{1-b_n}{2} \\mathrm{log} [1-F_{w}(z_n-\\tau_n)] \\nonumber \\\\\n& \\quad + \\frac{1+b_n}{2} \\mathrm{log} [F_{w}(z_n-\\tau_n)].\n\\end{align}\nThe first and second-order derivative of $L(\\boldsymbol{h})$ are\ngiven by\n\\begin{align}\n\\frac {\\partial L(\\boldsymbol{h})} {\\partial \\boldsymbol{h}} =\n\\sum_{n=1}^{N} \\frac {\\partial l(z_n)} {\\partial z_n} \\frac\n{\\partial z_n} {\\partial \\boldsymbol{h}} = \\sum_{n=1}^{N} \\frac\n{\\partial l(z_n)} {\\partial z_n} \\boldsymbol{a}_n\n\\end{align}\nand\n\\begin{align}\n\\frac {\\partial^2 L(\\boldsymbol{h})} {\\partial \\boldsymbol{h}\n\\partial \\boldsymbol{h}^T}\n&= \\sum_{n=1}^{N} \\boldsymbol{a}_n \\frac {\\partial^2 l(z_n)}\n{\\partial z_n^2}\n\\frac {\\partial z_n} {\\partial \\boldsymbol{h}^T} \\nonumber \\\\\n&= \\sum_{n=1}^{N} \\frac {\\partial^2 l(z_n)} {\\partial z_n^2}\n\\boldsymbol{a}_n \\boldsymbol{a}_n^T .\n\\end{align}\nwhere\n\\begin{align}\n\\frac {\\partial l(z_n)} {\\partial z_n} &= \\frac{1-b_n}{2} \\frac{f_{w}(z_n-\\tau_n)}{F_{w}(z_n-\\tau_n)-1} \\nonumber \\\\\n& \\quad + \\frac{1+b_n}{2}\n\\frac{f_{w}(z_n-\\tau_n)}{F_{w}(z_n-\\tau_n)} \\label{deriv1}\n\\end{align}\nand\n\\begin{align}\n\\frac {\\partial^2 l(z_n)} {\\partial z_n^2} =& \\frac{1-b_n}{2}\n\\bigg[\n\\frac{f'_{w}(z_n-\\tau_n)}{F_{w}(z_n-\\tau_n)-1} \\nonumber \\\\\n& -\\frac{f_{w}^2 (z_n-\\tau_n)}{(F_{w}(z_n-\\tau_n)-1)^2} \\bigg] + \\frac{1+b_n}{2} \\nonumber \\\\\n& \\cdot \\bigg[ \\frac{f'_{w}(z_n-\\tau_n)}{F_{w}(z_n-\\tau_n)} -\n\\frac{f_{w}^2 (z_n-\\tau_n)}{F_{w}^2 (z_n-\\tau_n)} \\bigg]\n\\label{deriv2}\n\\end{align}\nwhere $f_{w}(x)$ denotes the probability density function (PDF) of\n$w_n$, and $f'_{w}(x)\\triangleq\\frac{\\partial f_{w}(x)}{\\partial\nx}$.\n\nTherefore, the Fisher information matrix (FIM) of the estimation\nproblem is given as\n\\begin{align}\nJ(\\boldsymbol{h}) & = -E \\left[\\frac {\\partial^2\nL(\\boldsymbol{h})} {\\partial \\boldsymbol{h} \\partial\n\\boldsymbol{h}^T} \\right]\n= - \\sum_{n=1}^{N} E_{b_n} \\left[ \\frac {\\partial^2 l(z_n)} {\\partial z_n^2} \\right]\n\\boldsymbol{a}_n \\boldsymbol{a}_n^T \\nonumber \\\\\n& \\stackrel {(a)}{=} \\sum_{n=1}^{N} \\frac {f_{w}^2\n(\\boldsymbol{a}_n^{T}\\boldsymbol{h}-\\tau_n)}\n{F_{w}(\\boldsymbol{a}_n^{T}\\boldsymbol{h}-\\tau_n)(1-F_{w}(\\boldsymbol{a}_n^{T}\\boldsymbol{h}-\\tau_n))}\n\\boldsymbol{a}_n \\boldsymbol{a}_n^T\n\\end{align}\nwhere $E_{b_n}[\\cdot]$ denotes the expectation with respect to the\ndistribution of $b_n$, and $(a)$ follows from the fact that\n$b_{n}$ is a binary random variable with\n$P(b_{n}=1|\\tau_n,z_n)=F_{w}(z_n-\\tau_n)$ and\n$P(b_{n}=-1|\\tau_n,z_n)=1-F_{w}(z_n-\\tau_n)$. This completes the\nproof.\n\n\n\n\n\n\n\n\\section{Proof of Proposition \\ref{proposition1}} \\label{appE}\nBefore proceeding, we first introduce the following lemma.\n\\newtheorem{lemma}{Lemma}\n\\begin{lemma} \\label{lemma2}\nFor $x\\ge 0$, define\n\\begin{align}\n\\bar{F}(x) \\triangleq \\int_{0}^{x} f(u)\\mathrm{d}u\n\\end{align}\nwhere $f(\\cdot)$ denotes the PDF of a real-valued Gaussian random\nvariable with zero-mean and unit variance. 
We have $\\bar{F}(x)$\nupper bounded by\n\\begin{align}\n\\bar{F}(x) \\le \\frac{1}{2} \\sqrt{ 1-e^{-\\frac{2x^2}{\\pi}} } .\n\\end{align}\n\\end{lemma}\n\\begin{proof}\nSee Appendix \\ref{appF}.\n\\end{proof}\n\n\nDefine the function\n\\begin{align}\n\\bar{g}(x) \\triangleq \\frac {f^2 (x)} {F(x)(1-F(x))}\n\\end{align}\nwhere $f(\\cdot)$ and $F(\\cdot)$ denote the PDF and CDF of a\nreal-valued Gaussian random variable with zero-mean and unit\nvariance, respectively. Invoking Lemma \\ref{lemma2}, we have\n\\begin{align}\n\\bar{g}(x) =\\frac {f^2 (x)} {\\frac{1}{4}-\\bar{F}^2(x)} \\le\n\\frac{2}{\\pi} e^{-(1-\\frac{2}{\\pi})x^2} \\le \\frac{2}{\\pi},\n\\end{align}\nand $\\bar{g}(x) =\\frac{2}{\\pi}$ if and only if $x=0$. Noting that\n\\begin{align}\n\\frac{1}{\\sigma^2} \\bar{g} \\left(\n\\frac{\\boldsymbol{a}_n^{T}\\boldsymbol{h}-\\tau_n}{\\sigma} \\right) =\ng(\\tau_n,\\boldsymbol{a}_n) ,\n\\end{align}\nwe conclude that $g(\\tau_n,\\boldsymbol{a}_n)$ attains its maximum when\n$\\tau_n=\\boldsymbol{a}_n^{T}\\boldsymbol{h}$. This completes the proof.\n\n\n\\section{Proof of Lemma \\ref{lemma2}} \\label{appF}\nDefine two i.i.d. Gaussian random variables with zero-mean and\nunit variance, namely, $X$ and $Y$. The joint distribution\nfunction of $X$ and $Y$ is $f_{XY}(x,y)=f(x)f(y)$. Define two\nregions $D_1 \\triangleq \\{(u,v)\\mid 0 \\le u \\le x , 0 \\le v \\le x\n\\}$ and $D_2 \\triangleq \\{(u,v)\\mid u \\ge 0 , v \\ge 0, u^2+v^2\\le\n\\frac{4x^2}{\\pi} \\}$. Obviously, the areas of $D_1$ and $D_2$ are\nthe same, i.e., $\\mu(D_1)=\\mu(D_2)$, where $\\mu(\\cdot)$ denotes the\narea of a region. The probabilities of $(X,Y)$ lying in these\ntwo regions can be computed as\n\\begin{align}\nP((X,Y)\\in D_1) &= \\iint_{D_1} f_{XY}(u,v) \\mathrm{d}u\\mathrm{d}v \\nonumber \\\\\n&= \\bar{F}^2(x) \\\\\nP((X,Y)\\in D_2) &= \\iint_{D_2} f_{XY}(u,v) \\mathrm{d}u\\mathrm{d}v \\nonumber \\\\\n&= \\frac{1}{4} \\left(1-e^{-\\frac{2x^2}{\\pi}}\\right)\n\\end{align}\nLet $S_1\\setminus S_2$ denote the set obtained by excluding\n$S_2\\cap S_1$ from $S_1$. 
Clearly, we have\n\\begin{align}\n\\mu(D_1 \\setminus D_2)=\\mu(D_2 \\setminus D_1) \\label{appF:eqn1}\n\\end{align}\nAlso, according to the definition of $D_1$ and $D_2$, we have\n\\begin{align}\nf_{XY}(u,v)&\\le \\frac{1}{2\\pi} e^{-\\frac{2x^2}{\\pi}}, \\quad (u,v)\\in D_1 \\setminus D_2 \\\\\nf_{XY}(u,v)&\\ge \\frac{1}{2\\pi} e^{-\\frac{2x^2}{\\pi}}, \\quad\n(u,v)\\in D_2 \\setminus D_1 \\label{appF:eqn2}\n\\end{align}\nCombining (\\ref{appF:eqn1})--(\\ref{appF:eqn2}), we arrive at\n\\begin{align}\n\\iint_{D_1 \\setminus D_2} f_{XY}(u,v) \\mathrm{d}u\\mathrm{d}v \\le\n\\iint_{D_2 \\setminus D_1} f_{XY}(u,v) \\mathrm{d}u\\mathrm{d}v\n\\end{align}\nFrom the above inequality, we have $P((X,Y)\\in D_1)\\le P((X,Y)\\in\nD_2)$, i.e.\n\\begin{align}\n\\bar{F}^2(x)\\leq\\frac{1}{4}\n\\left(1-e^{-\\frac{2x^2}{\\pi}}\\right)\\Rightarrow \\bar{F}(x) \\le\n\\frac{1}{2} \\sqrt{ 1-e^{-\\frac{2x^2}{\\pi}} }\n\\end{align}\nThis completes the proof.\n\n\n\n\n\n\n\n\n\n\n\n\\section{Proof of Theorem \\ref{theorem2}} \\label{appC}\nNote that from the constraint\n$\\text{tr}(\\boldsymbol{X}\\boldsymbol{X}^H)\\leq P$, we can easily\nderive that\n\\begin{align}\n\\text{tr}(\\boldsymbol{A}^T \\boldsymbol{A})\\leq 2M P\n\\end{align}\nTo prove Theorem \\ref{theorem2}, let us first consider a new\noptimization that has the same objective function as (\\ref{opt2})\nwhile with a relaxed constraint:\n\\begin{align}\n\\min_{\\boldsymbol{A}} \\quad & \\frac{\\pi\\sigma^2}{2}\\text{tr}\n\\left\\{ \\left( \\boldsymbol{A}^T \\boldsymbol{A} \\right)^{-1}\n\\right\\}\n\\nonumber\\\\\n\\text{s.t.} \\quad & \\text{tr}(\\boldsymbol{A}^T\\boldsymbol{A})\\leq\n2M P \\label{appC:opt1}\n\\end{align}\nClearly, the feasible region defined by the constraints in\n(\\ref{opt2}) is a subset of that defined by (\\ref{appC:opt1}).\nSince $\\text{tr}(\\boldsymbol{Z}^{-1})$ is convex over the set of\npositive definite matrix, the optimization (\\ref{appC:opt1}) is\nconvex. Its optimum solution is given as follows.\n\\begin{lemma} \\label{lemma1}\nConsider the following optimization problem\n\\begin{align}\n\\min_{\\boldsymbol{Z}}\\quad &\\text{tr}(\\boldsymbol{Z}^{-1})\n\\nonumber\\\\\n\\text{s.t.}\\quad & \\text{tr}(\\boldsymbol{Z})\\leq P_0\n\\label{appC:opt2}\n\\end{align}\nwhere $\\boldsymbol{Z}\\in\\mathbb{R}^{p\\times p}$ is positive\ndefinite. The optimum solution to (\\ref{appC:opt2}) is given by\n$\\boldsymbol{Z}=(P_0\/p)\\boldsymbol{I}$ and the minimum objective\nfunction value is $p^2\/P_0$.\n\\end{lemma}\n\\begin{proof}\nSee Appendix \\ref{appD}.\n\\end{proof}\n\nFrom Lemma \\ref{lemma1}, we know that any $\\boldsymbol{A}$\nsatisfying\n\\begin{align}\n\\boldsymbol{A}^T \\boldsymbol{A} = (P\/K) \\boldsymbol{I}\n\\label{appC:eqn1}\n\\end{align}\nis an optimal solution to (\\ref{appC:opt1}). Note that the set of\nfeasible solutions (\\ref{appC:opt1}) subsumes the feasible\nsolution set of (\\ref{opt2}). Hence, if the optimal solution to\n(\\ref{appC:opt1}) is meanwhile a feasible solution of\n(\\ref{opt2}), then this solution is also an optimal solution to\n(\\ref{opt2}). It is easy to verify that if (\\ref{theorem2:eqn1})\nholds valid, the resulting $\\boldsymbol{A}$ satisfies\n(\\ref{appC:eqn1}) and is thus an optimal solution to\n(\\ref{appC:opt1}). As a consequence, it is also an optimal\nsolution to (\\ref{opt2}). 
This completes the proof.\n\n\n\n\n\n\n\n\\section{Proof of Lemma \\ref{lemma1}} \\label{appD}\nLet $\\boldsymbol{Z}=\\boldsymbol{U}\\boldsymbol{D}\\boldsymbol{U}^T$\ndenote the eigenvalue decomposition of $\\boldsymbol{Z}$, where\n$\\boldsymbol{U}\\in\\mathbb{R}^{p\\times p}$ is orthogonal and\n$\\boldsymbol{D}\\in\\mathbb{R}^{p\\times p}$ is diagonal. Since the trace is\ninvariant under this similarity transform, both the objective\n$\\text{tr}(\\boldsymbol{Z}^{-1})=\\text{tr}(\\boldsymbol{D}^{-1})$ and the\nconstraint $\\text{tr}(\\boldsymbol{Z})=\\text{tr}(\\boldsymbol{D})$ depend on\n$\\boldsymbol{Z}$ only through its eigenvalues. Hence the optimization\n(\\ref{appC:opt2}) reduces to determining the diagonal matrix\n$\\boldsymbol{D}\\triangleq \\text{diag}(d_1,\\dots,d_{p})$:\n\\begin{align}\n\\min_{\\{d_i\\}} \\ & \\sum_{i=1}^{p} \\frac{1}{d_i}\n\\nonumber\\\\\n\\text{s.t.} \\ \\,\n& \\sum_{i=1}^{p} {d_i} \\leq P_0 \\nonumber\\\\\n& d_i > 0, \\qquad \\forall i\\in\\{1,\\dots,p\\}\\label{opt5}\n\\end{align}\nThe Lagrangian function associated with (\\ref{opt5}) is given by\n\\begin{align}\nL(d_i;\\lambda;\\nu_i)=\\sum_{i=1}^{p} \\frac{1}{d_i} + \\lambda \\left(\n\\sum_{i=1}^{p} {d_i} - P_0 \\right) - \\sum_{i=1}^{p} {\\nu_i d_i}\n\\end{align}\nwith the KKT conditions \\cite{BoydVandenberghe03} given as\n\\begin{align}\n-\\frac{1}{d_i^2}+\\lambda-\\nu_i=0 & , \\quad \\forall i \\nonumber\\\\\n\\lambda\\left(\\sum_{i=1}^{p} {d_i} - P_0\\right) =0 & \\nonumber\\\\\n\\lambda\\geq 0 & \\nonumber\\\\\n\\nu_i d_i=0 & ,\\quad \\forall i \\nonumber\\\\\nd_i >0 & , \\quad \\forall i \\nonumber\\\\\n\\nu_i\\ge 0 & , \\quad \\forall i \\nonumber\n\\end{align}\nFrom the last three conditions, $\\nu_i d_i=0$ together with $d_i>0$ forces\n$\\nu_i=0$, $\\forall i$.\nThen from the first condition we have\n\\begin{align}\n\\lambda=\\frac{1}{d_i^2}>0 \\label{lambda}\n\\end{align}\nand\n\\begin{align}\nd_1=d_2=\\dots=d_{p}.\n\\end{align}\nFrom (\\ref{lambda}) and the second condition, we have\n$\\sum_{i=1}^{p} {d_i} - P_0 =0$, from which $d_i$ can be readily\nsolved as $d_i=P_0\/p$, $\\forall i$, i.e., the optimal\n$\\boldsymbol{D}$ is given by $\\boldsymbol{D}^{\\star}= (P_0\/p)\n\\boldsymbol{I}$. Consequently we have $\\boldsymbol{Z}^{\\star}=\n(P_0\/p) \\boldsymbol{I}$. This completes the proof.\n\n\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}