diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzmfxh" "b/data_all_eng_slimpj/shuffled/split2/finalzzmfxh" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzmfxh" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\\label{sec:intro}\nIn this paper we consider efficient computational approaches to compute approximate\nsolutions of a linear inverse problem,\n\\begin{equation}\nb = Ax_\\text{true} + \\eta, \\quad \\quad A\\in\\mathbb{R}^{m\\times n},\\label{eq:inverse_problem}\n\\end{equation}\nwhere $A$ is a known matrix, \nvector $b$ represents known acquired data, $\\eta$ represents noise, and vector $x_\\text{true}$ \nrepresents the unknown quantity that needs to be approximated.\nWe are particularly interested in imaging applications where $x_\\text{true} \\geq 0$ and $Ax_\\text{true} \\geq 0$.\nAlthough this basic problem has been studied extensively \n(see, for example, \\cite{Engl2000Regularization,Hansen2010Discrete,Mueller2012Linear,Vogel2002Computational}\nand the references therein), the noise is typically assumed to come from a single source (or to be\nrepresented by a single statistical distribution) and the data to contain no outliers.\nIn this paper we focus on a practical situation that arises in many imaging applications, and for which relatively\nlittle work has been done, namely when the noise is comprised of a mixture of Poisson and\nGaussian components {\\em and}\nwhen there are outliers in the measured data. While some research has been done\non the two topics separately (i.e., mixed Poisson--Gaussian noise models {\\em or} \noutliers in measured data), to our knowledge no work has been done when the measured\ndata contains both issues. 
In the following, we review some of the approaches used to handle each of the issues.\n\subsubsection*{Poisson--Gaussian noise}\nA Poisson--Gaussian statistical model for the measured data takes the form\n\begin{equation}\nb_i = n_\text{obj}(i) + g(i), \ i = 1,\ldots,m, \quad \ n_\text{obj}(i) \sim \text{Pois}([Ax_\text{true}]_i), \ g(i) \sim \mathcal{N}(0,\sigma^2), \label{eq:noise}\n\end{equation}\nwhere $b_i$ is the $i$th component of the vector $b$ and $[Ax_\text{true}]_i$ the $i$th component of the true noise-free \ndata $Ax_\text{true}$. We assume that the two random variables $n_\text{obj}(i)$ and $g(i)$ are independent. \nThis mixed noise model arises in many important applications, such as when using charge-coupled device (CCD) arrays,\nx-ray detectors, and infrared \nphotometers \cite{Bardsley2003nonnegatively,Gravel2004method,Luisier2011Image,Makitalo2013Optimal,Snyder1993Image}.\nThe Poisson part (sometimes referred to as shot noise) can arise\nfrom the accumulation of photons over a detector, and the Gaussian part is usually due to {\em read-out} noise from\na detector, which can be generated by thermal fluctuations in interconnected electronics.\n\nSince the log-likelihood function for the mixed Poisson--Gaussian model (\ref{eq:noise}) has an infinite series representation \cite{Snyder1993Image}, we assume a simplified model, in which both random variables have the same type of distribution. \nThere are two main approaches one can take to generate a simplified model.\nThe first approach is to add $\sigma^2$ to each component of the vector $b$, and from (\ref{eq:noise}) it then follows that\n\begin{equation}\n\mathbb{E}(b_i + \sigma^2) = [Ax_\text{true}]_i + \sigma^2 \quad \text{and} \quad \text{var}(b_i + \sigma^2) = [Ax_\text{true}]_i + \sigma^2. 
\label{eq:poiss_approx_noise}\n\end{equation}\nFor large $\sigma$, the Gaussian random variable $g(i) + \sigma^2$ is well approximated by a Poisson random variable with the Poisson parameter $\sigma^2$, and therefore $b_i + \sigma^2$ is also well approximated by a Poisson random variable with the Poisson parameter $[Ax_\text{true}]_i + \sigma^2$. The data fidelity function corresponding to the negative Poisson log-likelihood then has the form\n\begin{equation}\n\sum_{i=1}^m ([Ax]_i + \sigma^2) - (b_i +\sigma^2)\,\log([Ax]_i + \sigma^2);\label{eq:poisson}\n\end{equation}\nsee also \cite{Snyder1993Image}. An alternative approach is to approximate the true negative log-likelihood by a weighted least-squares function, where the weights correspond to the measured data, i.e.,\n\begin{equation}\n \sum_{i=1}^m \frac{1}{2}\left(\frac{[Ax]_i - b_i}{\sqrt{b_i + \sigma^2}}\right)^2;\label{eq:wls_basic}\n\end{equation}\nsee \cite[Sec. 1.3]{Hansen2013Least}. \nA more accurate approximation can be achieved by replacing the measured data by the computed data \n(which depends on $x$), i.e., by replacing the fidelity function (\ref{eq:wls_basic}) with\n\begin{equation}\n \sum_{i=1}^m \frac{1}{2}\left(\frac{[Ax]_i - b_i}{\sqrt{[Ax]_i + \sigma^2}}\right)^2;\label{eq:wls}\n\end{equation}\nsee \cite{Stagliano2011Analysis} for more details. \nAdditional additive Poisson noise (e.g., background emission) can be incorporated into the model in a straightforward way. 
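To make the two simplified models concrete, the fidelity functions \eqref{eq:poisson} and \eqref{eq:wls} can be evaluated as in the following minimal NumPy sketch (the function names are ours; this is an illustration, not the implementation used in the experiments):

```python
import numpy as np

def poisson_fidelity(Ax, b, sigma):
    """Shifted-Poisson negative log-likelihood, cf. eq. (eq:poisson):
    sum_i (Ax_i + sigma^2) - (b_i + sigma^2) * log(Ax_i + sigma^2)."""
    s2 = sigma**2
    return np.sum((Ax + s2) - (b + s2) * np.log(Ax + s2))

def wls_fidelity(Ax, b, sigma):
    """Weighted least squares with solution-dependent weights, cf. eq. (eq:wls)."""
    return 0.5 * np.sum((Ax - b)**2 / (Ax + sigma**2))
```

Both functions take the computed data $Ax$ and the measured data $b$; note that \eqref{eq:wls} rescales each residual by the model-based standard deviation $\sqrt{[Ax]_i + \sigma^2}$.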
\n\subsubsection*{Outliers}\nFor data corrupted solely with Gaussian noise, i.e., \n\begin{equation}\nb_i = [Ax]_i + g(i), \ i = 1,\ldots,m, \quad g(i) \sim \mathcal{N}(0,\sigma^2), \label{eq:gausssian_noise}\n\end{equation} employing the negative log-likelihood leads to the standard least-squares functional\n\begin{equation}\n \sum_{i=1}^m \frac{1}{2}\left([Ax]_i - b_i\right)^2.\label{eq:ls}\n\end{equation} \nIt is well known, however, that a computed solution based on least squares is not robust with respect to outliers, meaning that even a small number of components with gross errors can cause a severe deterioration of the estimate. Robustness of the least-squares fidelity function can be achieved by replacing the loss function $\frac{1}{2}t^2$ used in \eqref{eq:ls} by a function $\rho(t)$ as\n\begin{equation}\n \sum_{i=1}^m \rho\left([Ax]_i - b_i\right),\label{eq:robust}\n\end{equation}where the function $\rho$ is less stringent towards gross errors and satisfies the following conditions:\n\begin{enumerate}\n\item $\rho(t) \geq 0$;\n\item $\rho(t) = 0 \Leftrightarrow t =0$;\n\item $\rho(-t) = \rho(t)$;\n\item $\rho(t^\prime) \geq \rho(t)$, for $t^\prime \geq t \geq 0$;\n\end{enumerate}\nsee also \cite[Sec. 1.5]{Hansen2013Least}. A list of the eight most commonly used loss functions $\rho$ can be found in \cite{Coleman1980system} or in MATLAB under \texttt{robustfit}; some of them are discussed in Section~\ref{sec:conv_anal}. Each of these functions also depends on a parameter $\beta$ (see Section~\ref{sec:beta}) defining the trade-off between robustness and efficiency. Note that by using this robust regression approach to reduce the influence of possible outliers, we always sacrifice some statistical efficiency of the model.\n\n\bigskip\n\nIn this paper, we focus on combining these two approaches to suppress the influence of outliers for data with mixed noise \eqref{eq:noise}. 
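As an illustration of \eqref{eq:robust} and conditions 1--4, the following sketch implements the well-known Huber loss, one of the functions listed in \cite{Coleman1980system} (names and interface are ours):

```python
import numpy as np

def huber(t, beta):
    """Huber loss: quadratic near zero, linear growth for |t| > beta,
    which limits the influence of large residuals (outliers)."""
    t = np.abs(t)
    return np.where(t <= beta, 0.5 * t**2, beta * t - 0.5 * beta**2)

def robust_fidelity(Ax, b, rho, **kw):
    """Robust data-fidelity sum_i rho([Ax]_i - b_i), cf. eq. (eq:robust)."""
    return np.sum(rho(Ax - b, **kw))
```

For $|t|\leq\beta$ the Huber loss coincides with the standard loss $t^2\/2$, while for large residuals it grows only linearly, so gross errors contribute far less to the fit.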
Our work has been motivated by O'Leary \cite{OLeary1990Robust} and by more recent work of Calef \cite{Calef2013Iteratively}. The initial ideas of our work were first outlined in the \nconference paper \cite{Kubinova2015Iteratively}.\n\nThe paper is organized as follows. In Section~\ref{sec:robust} we introduce a data-fidelity function suitable for data corrupted both with mixed Poisson--Gaussian noise and outliers. In Section~\ref{sec:reg_param} we propose a regularization parameter choice method for the regularization of the resulting inverse problem, and in Section~\ref{sec:optim} we focus on the optimization algorithm and the solution of the linear subproblems. Section~\ref{sec:num_exp} demonstrates the performance of the resulting method on image deblurring problems with various types of outliers.\n\nThroughout the paper, $D$ (or $D$ with an accent) denotes a general real diagonal matrix, and $e_i$ denotes the $i$th column of the identity matrix of a suitable size.\n\n\n\section{Data-fidelity function}\label{sec:robust}\nIn Section \ref{sec:intro}, we reviewed the fidelity functions \eqref{eq:poisson}, \eqref{eq:wls_basic}, and \eqref{eq:wls}, commonly used for problems with mixed Poisson--Gaussian noise, as well as the robust formulation \eqref{eq:robust} used to handle problems with Gaussian noise and outliers. Since we need to deal with both issues simultaneously here, we propose combining both approaches. 
More specifically, we combine a robust loss function with the weighted least-squares problem \eqref{eq:wls}, so that the data fidelity function becomes\n\begin{equation}\n J(x) = \sum_{i=1}^m \rho\left(\frac{[Ax]_i - b_i}{\sqrt{[Ax]_i + \sigma^2}}\right).\n\label{eq:robust_wls}\n\end{equation}\nIn the remainder of this section, we investigate the properties of the proposed data-fidelity function \eqref{eq:robust_wls} and the choice of the robustness parameter $\beta$, which is defined in the next subsection.\n\subsection{Choice of the loss function -- convexity analysis} \label{sec:conv_anal}\nFor ordinary least squares, the loss functions known under the names Huber, logistic, Fair, and Talwar, shown in Figure~\ref{fig:loss_functions}, lead to an interval-wise convex data fidelity function, i.e., one with a positive-semidefinite Hessian, which is favorable for Newton-type minimization algorithms; see \cite{OLeary1990Robust}. This, however, does not always hold in our case, where the weighted least-squares formulation \eqref{eq:robust_wls} has solution-dependent weights. \n\nTo see this, let us begin by denoting the components of the residual as $r_i \equiv [Ax]_i - b_i$ and the \nsolution-dependent weights as $w_i \equiv \frac{1}{\sqrt{[Ax]_i + \sigma^2}}$. Then the gradient and the Hessian of \eqref{eq:robust_wls} can be written as\n\n\begin{align}\n\text{grad}_J(x) &= A^Tz, & z_i &= \left(w^\prime_ir_i + w_i\right)\rho^{\prime}(w_ir_i);\\\n\text{Hess}_J(x) &= A^TDA, & \quad D_{ii} &= (w^{\prime\prime}_ir_i + 2w_i^{\prime})\rho^{\prime}(w_ir_i) + (w^{\prime}_i r_i + w_i)^2\rho^{\prime\prime}(w_ir_i). \label{eq:hess_and_grad}\n\end{align}\nWe investigate the entries $D_{ii}$ in order to examine the positive semi-definiteness of the Hessian $\text{Hess}_J(x)$. 
Recall that $A^TDA$ is positive semi-definite if $D_{ii}\geq 0$ for all $i$.\n\nAssuming $\rho^{\prime\prime}\geq 0$, the signs of the diagonal entries $D_{ii}$ in \eqref{eq:hess_and_grad} are\n\begin{equation}\n\text{sign}(D_{ii}) = \left(\circlearound{+}\cdot\text{sign}(r_i) + 2\circlearound{-}\right)\cdot\text{sign}(\rho^{\prime}(w_ir_i)) + \circlearound{+}\circlearound{+}, \label{eq:dii_sign}\n\end{equation}\nwhere we have replaced some of the quantities in the\nexpression for $D_{ii}$ shown in equation \eqref{eq:hess_and_grad} with the symbol $\circlearound{-}$ when the\nvalue it replaces is always a negative number and with $\circlearound{+}$ when the value it replaces is always nonnegative. We will now investigate all possible cases with respect to $\text{sign}(\rho^{\prime}(w_ir_i))$:\n\begin{itemize}\n\item \textit{Case 1: $\rho^\prime(w_ir_i)< 0$} \\\n$\rho^\prime(w_ir_i)< 0$ yields $r_i< 0$, and therefore $D_{ii} > 0$.\n\item \textit{Case 2: $\rho^\prime(w_ir_i)= 0$} \\\n$\rho^{\prime}(w_ir_i) = 0 $ yields $D_{ii} = 0$.\n\item \textit{Case 3: $\rho^\prime(w_ir_i)> 0$} \\\n Substituting for $w_i$ and $r_i$ in \eqref{eq:hess_and_grad}, we obtain\n\begin{align*}\n D_{ii} &= \left(\frac{3}{4}([Ax]_i-b_i)([Ax]_i+\sigma^2)^{-5\/2} - ([Ax]_i+\sigma^2)^{-3\/2}\right)\rho^{\prime}(w_ir_i)\\ \n & \quad + \left(-\frac{1}{2}([Ax]_i-b_i)([Ax]_i+\sigma^2)^{-3\/2} + ([Ax]_i+\sigma^2)^{-1\/2}\right)^2\rho^{\prime\prime}(w_ir_i).\n\end{align*} \nFor $[Ax]_i\gg b_i+\sigma^2$, to achieve $D_{ii}\geq 0$, the condition\n\[\n\sqrt{[Ax]_i}\cdot\rho^{\prime\prime}(\sqrt{[Ax]_i})\gtrsim \rho^{\prime}(\sqrt{[Ax]_i})\n\]\nmust hold. 
This corresponds to \n\[\n\rho^{\prime}(t) \gtrsim t \quad \text{yielding} \quad \rho(t) \gtrsim t^2\/2,\n\]\ni.e., for large $[Ax]_i$, the loss function $\rho$ has to grow at least quadratically.\n\end{itemize}\n\nIn conclusion, for large $t$, the loss function $\rho(t)$ has to be either constant or grow at least quadratically, which is in contradiction with the idea of robust regression. Therefore, considering the functions from \cite{Coleman1980system}, the only loss function $\rho$ for which the data fidelity function (\ref{eq:robust_wls}) has a positive-semidefinite Hessian is Talwar:\n\begin{equation}\n\rho(t) = \left\{\begin{array}{ll}\nt^2\/2, & |t|\leq\beta,\\\n\beta^2\/2, & |t| > \beta.\n\end{array} \right.\label{eq:talwar}\n\end{equation} \n\subsection{Selection of the robustness parameter}\label{sec:beta}\nParameters $\beta$ yielding $95\%$ asymptotic efficiency with respect to the standard loss function $\frac{1}{2}t^2$ when the disturbances come from the unit normal distribution can again be found in \cite{Coleman1980system}. For Talwar, the 95\% efficiency tuning parameter is \n\begin{equation}\n\beta_{95} = 2.795.\label{eq:beta_opt}\n\end{equation} \nNote that in our specific case, the random variable inside the function $\rho$ in (\ref{eq:robust_wls}) is already rescaled to have unit variance and therefore approximately unit normal distribution. We may therefore apply the parameter $\beta_{95}$ without any further rescaling based on estimated variance, which is usually required in the case of ordinary least squares with unknown noise variance. The Talwar function with $\beta = \beta_{95}$ is shown in Figure~\ref{fig:rho_talwar}. 
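A sketch of the Talwar loss \eqref{eq:talwar} with $\beta = \beta_{95}$ and of the resulting data-fidelity function \eqref{eq:robust_wls} (function names are ours; an illustration rather than the implementation used in the experiments):

```python
import numpy as np

BETA_95 = 2.795  # 95% asymptotic-efficiency tuning parameter, eq. (eq:beta_opt)

def talwar(t, beta=BETA_95):
    """Talwar loss, eq. (eq:talwar): quadratic for |t| <= beta, constant beyond."""
    return np.where(np.abs(t) <= beta, 0.5 * t**2, 0.5 * beta**2)

def robust_wls_fidelity(Ax, b, sigma, beta=BETA_95):
    """Proposed data-fidelity J(x), eq. (eq:robust_wls), with the Talwar loss."""
    return np.sum(talwar((Ax - b) / np.sqrt(Ax + sigma**2), beta))
```

Residuals whose standardized magnitude exceeds $\beta_{95}$ contribute only the constant $\beta_{95}^2\/2$, so they are effectively ignored by the fit.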
\n\begin{figure}\n\centering\n\begin{subfigure}[b]{.4\textwidth}\n\centering\n\includegraphics[width = .8\textwidth]{loss_function_fair} \n\caption{Fair}\n\end{subfigure}\n\begin{subfigure}[b]{.4\textwidth}\n\centering\n\includegraphics[width = .8\textwidth]{loss_function_huber} \n\caption{Huber}\n\end{subfigure}\n\n\begin{subfigure}[b]{.4\textwidth}\n\centering\n\includegraphics[width = .8\textwidth]{loss_function_logistic} \n\caption{logistic}\n\end{subfigure}\n\begin{subfigure}[b]{.4\textwidth}\n\centering\n\includegraphics[width = .8\textwidth]{loss_function_talwar} \n\caption{Talwar}\label{fig:rho_talwar}\n\end{subfigure}\n\n\caption{Loss functions Fair, Huber, logistic, and Talwar for the tuning parameter $\beta$ corresponding to 95\% efficiency (solid line), together with the standard loss function $t^2\/2$ (dashed line).\n}\label{fig:loss_functions}\n\end{figure}\n\n\subsection{Non-negativity constraints}\nIn many applications, such as imaging, the reconstruction benefits from taking into account prior information about the component-wise non-negativity of the true solution $x_\text{true}$. Here, however, imposing non-negativity is not just a question of visual appeal; it also guarantees that the two estimates (\ref{eq:poisson}) and (\ref{eq:wls}) of the negative log-likelihood will provide similar results; see \cite{Stagliano2011Analysis}. Therefore, the component-wise non-negativity constraint is an integral part of the resulting optimization problem. However, enforcing the non-negativity constraint requires more sophisticated optimization tools. 
One such algorithm is discussed in Section~\ref{sec:optim}.\n\n\section{Regularization and selection of the regularization parameter}\label{sec:reg_param}\nAs a consequence of noise and the ill-posedness of the inverse problem (\ref{eq:inverse_problem}), some form of regularization needs to be employed in order to achieve a reasonable approximation of the true solution $x_\text{true}$. For computational convenience, we use Tikhonov regularization with a quadratic penalization term, i.e., we minimize a functional of the form\n\begin{equation}\nJ_\lambda(x) \equiv \sum_{i=1}^m \rho\left(\frac{[Ax]_i - b_i}{\sqrt{[Ax]_i + \sigma^2}}\right) + \frac{\lambda}{2}\|Lx\|^2, \qquad x\geq 0.\label{eq:functional}\n\end{equation}\nWe assume that a good regularization parameter $\lambda$ with respect to $L$ is used, so that the penalty term is reasonably close to the prior and the residual is therefore close to the noise. In the case of robust regression, it is particularly important not to over-regularize, since over-regularization leads to large residuals, and too many components of the data $b$ would then be considered outliers. Methods for\nchoosing $\lambda$ are discussed in this section.\n\n\subsection{Morozov's discrepancy principle}\nSince the residual components are scaled so that, for data without outliers, the expected value satisfies\n \begin{equation}\n\mathbb{E}\left\{\frac{1}{m}\sum_{i=1}^{m}\frac{([Ax]_i - b_i)^2}{[Ax]_i + \sigma^2}\right\} = 1,\label{eq:discrepancy_estimate}\n\end{equation}\nan obvious choice would be to use Morozov's discrepancy principle \cite{Morozov1966solution,Vogel2002Computational}. However, as reported in \cite{Stagliano2011Analysis}, even without outliers, the discrepancy principle based on \eqref{eq:discrepancy_estimate} tends to provide unsatisfactory reconstructions for problems with large signal-to-noise ratio. 
Therefore, we will not consider this approach further.\n\n\subsection{Generalized cross validation}\label{sec:gcv}\nThe generalized cross validation (GCV) method \cite{Golub1979Generalized}, \cite[Chap. 7]{Vogel2002Computational} is derived from standard leave-one-out cross validation. To apply this method to linear Tikhonov regularization, one selects the regularization parameter $\lambda$ that minimizes the GCV functional\n\begin{equation}\n\text{GCV}(\lambda) = \frac{m\|r_\lambda\|^2}{(\text{trace}(I-A_\lambda))^2},\label{eq:gcv_fun}\n\end{equation}\nwhere $r_\lambda = Ax_\lambda - b = (A_\lambda - I)b$ is the residual, $m$ is its length, and the influence matrix $A_\lambda$ takes the form $A_\lambda = A(A^TA + \lambda L^TL)^{-1}A^T$. Here, due to the non-negativity constraints and the weights, the residual and the influence matrix have a more complicated form. An approximation of the influence matrix for problems with mixed noise, but without outliers, has been proposed in \cite{Bardsley2009Regularization}. There, the numerator of the GCV functional\ntakes the form $m\|Wr_\lambda\|^2$ and the approximate influence matrix is\n\begin{equation}\nA_\lambda = WA(D_\lambda(A^TW^2A + \lambda L^TL)D_\lambda)^{\dagger}D_\lambda A^TW,\label{eq:gcv_fun_new} \n\end{equation}\nwhere $W$ and $D_\lambda$ are diagonal matrices\n\begin{align}\nW_{ii} &=\n\frac{1}{\sqrt{[Ax_\lambda]_i + \sigma^2}};\n\label{eq:W_ii_old}\\\n[D_\lambda]_{ii} &= \left\{\n\begin{array}{cc}\n1 & [x_\lambda]_i > 0, \\ \n0 &\text{otherwise,}\n\end{array} \right. \nonumber\n\end{align}\nand \,${}^\dagger$ denotes the Moore--Penrose pseudoinverse. The matrix $D_\lambda$ only handles the non-negativity constraints \nand can therefore be adopted directly. The matrix $W$ needs a special adjustment, due to the change of the loss function to Talwar. 
The aim is to construct a matrix $W$ satisfying\n\[\|Wr_\lambda\|^2 = 2\sum_{i=1}^m \rho\left(\frac{[Ax_\lambda]_i - b_i}{\sqrt{[Ax_\lambda]_i + \sigma^2}}\right),\]\ni.e., a constant multiple of the data-fidelity function (the constant factor does not change the minimizer of the GCV functional). Substituting for $\rho$ from the definition of the Talwar function \eqref{eq:talwar}, we redefine the scaling matrix as\n\begin{align}\nW_{ii} &\equiv \left\{\n\begin{array}{cc}\n\frac{1}{\sqrt{[Ax_\lambda]_i + \sigma^2}} & \quad \left\vert\frac{[Ax_\lambda]_i - b_i}{\sqrt{[Ax_\lambda]_i + \sigma^2}}\right\vert\leq \beta, \\ \n\frac{\beta}{[Ax_\lambda]_i - b_i} &\text{otherwise.}\n\end{array} \right.\label{eq:W_ii}\n\end{align}\n\nIn order to make the evaluation of \eqref{eq:gcv_fun_new} feasible for large-scale problems, we approximate the trace of a matrix using random trace estimation \cite{Hutchinson1990stochastic,Vogel2002Computational} as $\text{trace}(M)\approx v^TMv$, where the entries of $v$ take values $\pm 1$ with equal probability. Applying the random trace estimation to \eqref{eq:gcv_fun_new}, we obtain\n\[\n(\text{trace}(I-A_\lambda))^2 \approx (v^Tv - v^TA_\lambda v)^2.\n\]\nFinally, $A_\lambda v$ is approximated by $WAy$, with $y$ obtained by applying a truncated conjugate gradient iteration to\n\begin{equation}\n(D_\lambda(A^TW^2A + \lambda L^TL)D_\lambda)y = D_\lambda A^TWv.\label{eq:gcv_lin_syst}\n\end{equation}\n\n\section{Minimization problem}\label{sec:optim}\n\nIn this section we discuss numerical methods to compute a minimum of \eqref{eq:functional}.\nWe consider how to incorporate the non-negativity constraint and how to solve the linear subproblems, including\na proposed preconditioner. \n\n\subsection{Projected Newton's method}\nVarious methods for constrained optimization have been developed over the years; some related to image deblurring can be found in \cite{Bardsley2003nonnegatively,Bonettini2009scaled,Hanke2000Quasi,More1991solution,Nagy2000Enforcing}. 
\nFor our computations, we chose a projected Newton's method\footnote{In \cite{Haber2015Computational}, the method was derived as the Projected Gauss--Newton method. Here, since the evaluation of the Hessian does not represent a computational difficulty, we use it as a variant of Newton's method. Therefore, in the remainder of the text, the method is referred to as the Projected Newton's Method.}, combined with projected PCG to compute the search direction in each step; see \cite[sec. 6.4]{Haber2015Computational}. The convenience of this method lies in the fact that projected PCG does not require any special form of the preconditioner, and a generic conjugate gradient preconditioner can be employed. Besides lower bounds, upper bounds on the reconstruction can also be enforced. For completeness, we include the projected Newton method in Algorithm~\ref{alg:projGNCG}, and projected PCG in\nAlgorithm~\ref{alg:projPCG}.\n\n\n\begin{algorithm}\n\caption{Projected Newton's method \cite{Haber2015Computational}}\n\label{alg:projGNCG}\n\begin{algorithmic}\n\STATE{$k = 0$}\n\WHILE{not converged} \n\STATE{$\text{Active} = (x^{(k)} \leq 0)$} \n\STATE{$g = \text{grad}_{J_\lambda}(x^{(k)})$}\n\STATE{$H = \text{Hess}_{J_\lambda}(x^{(k)})$}\n\STATE{$M = \texttt{prec}(H)$} \COMMENT{setup preconditioner for the Hessian}\n\STATE{$s = \texttt{projPCG}(H,-g,\text{Active},M)$} \COMMENT{compute the search direction for inactive cells}\n\STATE{$g_a = g(\text{Active})$} \n\IF{$\max(\text{abs}(g_a)) > \max(\text{abs}(s))$} \n\STATE {$g_a = g_a\cdot\max(\text{abs}(s))\/\max(\text{abs}(g_a))$} \COMMENT{rescaling needed} \n\ENDIF\n\STATE{$s(\text{Active}) = g_a$} \COMMENT{take gradient direction in active cells}\n\STATE{$x^{(k+1)} = \texttt{linesearch}(s,x^{(k)},J_\lambda,\text{grad}_{J_\lambda})$}\n\STATE{$k = k+1$}\n\ENDWHILE\n\RETURN $x^{(k)}$\n\end{algorithmic}\n\end{algorithm}\n\n\begin{algorithm}[H]\n\caption{Projected PCG 
\cite{Haber2015Computational}}\n\label{alg:projPCG}\n\begin{algorithmic}\n\STATE{input: $A$, $b$, Active, $M$}\n\STATE{$x_0 = 0$}\n\STATE{$D_\mathcal{I} = \text{diag}(1-\text{Active})$} \COMMENT{projection onto inactive set}\n\STATE{$r_0 = D_\mathcal{I}b$}\n\STATE{$z_0 = D_\mathcal{I}(M^{-1}r_0)$}\n\STATE{$p_0 = z_0$}\n\STATE{$k = 0$}\n\WHILE{not converged} \n\STATE{$\alpha_k = \frac{r_k^Tz_k}{p_k^TD_\mathcal{I}Ap_k}$} \n\STATE{$x_{k+1} = x_k + \alpha_kp_k $}\n\STATE{$r_{k+1} = r_k - \alpha_kD_\mathcal{I}Ap_k $}\n\STATE{$z_{k+1} = D_\mathcal{I}(M^{-1}r_{k+1})$}\n\STATE{$\beta_{k+1} = \frac{z_{k+1}^Tr_{k+1}}{z_{k}^Tr_{k}}$}\n\STATE{$p_{k+1} = z_{k+1} + \beta_{k+1}p_k$}\n\STATE{$k = k+1$}\n\ENDWHILE\n\RETURN $x_{k}$\n\end{algorithmic}\n\end{algorithm}\n\n\n\subsection{Solution of the linear subproblems}\label{sec:lin_systems}\nEach step of the projected Newton method (Algorithm \ref{alg:projGNCG}) \nrequires solving a linear system with the Hessian:\n\begin{align}\n\text{Hess}_{J_\lambda}(x^{(k)})s &= -\text{grad}_{J_\lambda}(x^{(k)})\nonumber \\\n(A^TD^{(k)}A + \lambda L^TL)s &= -\left(A^Tz^{(k)} + \lambda L^TLx^{(k)}\right).\label{eq:lin_system}\n\end{align}\nFor the objective functional \eqref{eq:functional}, the diagonal matrix $D^{(k)}$ and the vector $z^{(k)}$ have the form:\n\begin{align}\nz_{i} &= \left\{\begin{array}{ll}\n\frac{1}{2}\left(1 - \frac{\left(b_i + \sigma^2\right)^2}{\left([Ax]_i + \sigma^2\right)^2}\right), & \left\vert\frac{[Ax]_i - b_i}{\sqrt{[Ax]_i + \sigma^2}}\right\vert\leq \beta,\\\n0, & \text{otherwise}.\n\end{array}\n\right.\\\nD_{ii} &= \left\{\begin{array}{ll}\n\frac{\left(b_i + \sigma^2\right)^2}{\left([Ax]_i + \sigma^2\right)^3}, & \left\vert\frac{[Ax]_i - b_i}{\sqrt{[Ax]_i + \sigma^2}}\right\vert\leq \beta,\\\n0, & \text{otherwise}.\label{eq:row_scaling}\n\end{array}\n\right.\n\end{align}\nNote that in the case of constant weights, 
robust regression represents extra computational cost in comparison with standard least squares, because it leads to a sequence of weighted least-squares problems, while standard least-squares problems are solved in one step. In our setting, the weights in \eqref{eq:wls} themselves have to be updated, and therefore employing a different loss function does not change the type of the problem we need to solve. \n\nWithout preconditioning, the convergence of projected PCG can be rather slow, and\nit is therefore important to consider preconditioning. The idea behind many preconditioners, such as the constraint \cite{Keller2000Constraint,Dollar2007Using}, constraint-type \cite{Dollar2007Constraint}, or Hermitian and skew-Hermitian \cite{Benzi2006Preconditioned} preconditioners, is based on the fact that in many cases it is possible to efficiently solve the \nlinear system in \eqref{eq:lin_system} if the diagonal matrix $D^{(k)}$ is the identity matrix; that is, if the linear system\ninvolves the matrix\n\begin{equation}\nA^TA + \lambda L^TL \label{eq:without_D}.\n\end{equation}\nFor example, in the case of image deblurring, it is well known that linear systems involving the matrix\n\eqref{eq:without_D} can be solved efficiently using fast trigonometric or \nfast Fourier transforms (FFT).\n\nAlthough the constraint-type and the Hermitian and skew-Hermitian preconditioners seem to perform well for problems with a random \nmatrix $D^{(k)}$ (i.e., a random row scaling), see \cite{Benzi2006Preconditioned}, they performed unsatisfactorily for problems of the form \eqref{eq:lin_system}, \eqref{eq:row_scaling}.\n\nA preconditioner based on a similar idea of fast computations with matrices of the type \eqref{eq:without_D} for imaging problems was proposed in \cite{Fessler1999Conjugate}. 
In this case, the row scaling is approximated by a column scaling; that is, \nwe find $\hat{D}^{(k)}$ such that \n\begin{equation}\nA^TD^{(k)}A \approx \hat{D}^{(k)}(A^TA)\hat{D}^{(k)},\label{eq:pullout_prec}\n\end{equation}\nwhere\n\begin{equation}\n\hat{D}_{ii}^{(k)} \equiv \sqrt{\frac{e_i^T(A^TD^{(k)}A)e_i}{e_i^T(A^TA)e_i}}.\label{eq:out_diag_def}\n\end{equation}\nNote that for $\hat{D}^{(k)}$ defined in \eqref{eq:out_diag_def}, the diagonals of the matrices on the two sides of the approximation \eqref{eq:pullout_prec} are exactly equal. \n\nSince, for large-scale problems, the matrix $A$ is typically not formed explicitly, exact evaluation of the entries of $\hat{D}^{(k)}$ might become too expensive. To get around this restriction, note that\n\begin{equation}\ne_i^T(A^TD^{(k)}A)e_i = ((A^T)\,.^{2}\,\text{diag}(D^{(k)}))_i \quad \text{and} \quad e_i^T(A^TA)e_i =((A^T)\,.^2\,\mathbf{1})_i,\n\label{eq:AT_squared}\n\end{equation}\nwhere $\mathbf{1}$ is a vector of all ones, and \nwe use the MATLAB notation $.^2$ to mean component-wise squaring.\nIn some cases it may be relatively easy to compute both the entries of $(A^T).^2$ and the vector\n$(A^T).^2\mathbf{1}$; this is the case for image deblurring, and is\ndiscussed in more detail in Section~\ref{sec:num_exp}.\n\n\nUsing \eqref{eq:pullout_prec}, we define the preconditioner for the linear system \eqref{eq:lin_system} as\n\begin{equation}\nM \equiv\hat{D}^{(k)} \left( A^TA + \hat{\lambda} L^TL \right) \hat{D}^{(k)},\label{eq:prec}\n\end{equation}\nwith \n\[\hat{\lambda} \equiv \lambda\/\text{mean}\left(\text{diag}(\hat{D}^{(k)})\right)^2 .\]\nMore details on the computational costs involved in constructing and applying the preconditioner\nin the case of image deblurring are provided in Section~\ref{sec:num_exp}.\n\n\n\section{Numerical tests}\label{sec:num_exp}\n\nThe Poisson--Gaussian model arises naturally in imaging applications, so in this section\nwe present numerical 
examples from image deblurring. Specifically, we consider the \ninverse problem \eqref{eq:inverse_problem} with data model \eqref{eq:noise}, where the\nvector $b$ is an observed image that is corrupted by blur and noise, the matrix $A$ models\nthe blurring operation, the vector $x_\text{true}$ is the true image, and $\eta$ is noise.\nAlthough an image is naturally represented as an array of pixel values, when we \nrefer to `vector' representations, we assume the pixel values have been reordered\nas vectors. For example, a $p \times p$ image of pixel values can\nbe stored in a vector of length $n = p^2$ by lexicographical ordering\nof the pixel values.\n\nIn many practical image deblurring applications, the blurring is spatially invariant,\nand $A$ is a structured matrix defined by a {\em point spread function} (PSF).\nIn this situation, image deblurring is also referred to as image deconvolution,\nbecause the operation $Ax_\text{true}$ is the convolution of $x_\text{true}$ and the PSF.\nAlthough the PSF may be given as an actual function, the more common situation is\nto compute estimates of it by imaging point source objects. Thus, the PSF can be \nrepresented as an image; we typically display the PSF as a mesh plot, which makes\nit easier to visualize how a point in an image is spread to its neighbors because of\nthe blurring operation.\nThe precise structure of the matrix $A$ depends on the imposed boundary condition;\nsee \cite{Hansen2006Deblurring} for details. In this section we impose periodic boundary\nconditions, so that $A$ and $L$ are both diagonalizable by FFTs.\n\nSo far we have only described what we refer to as the {\em single-frame} situation, where\n$b$ is a single observed image. It is often the case, especially in astronomical imaging,\nthat we have multiple observed images of the same object, each with\na different blurring matrix associated with it. 
We refer to this as the\nmulti-frame image deblurring problem. Here, $b$ represents all observed images, stacked\none on top of another, and similarly $A$ is formed by stacking the various blurring matrices.\n\nBefore describing the test problems used in this section, we first summarize the computational\ncosts.\nFrom the discussion around equation \eqref{eq:AT_squared}, to construct the preconditioner we need to \nbe able to efficiently square all entries of the matrix $A^T$, or equivalently those of $A$; this can\neasily be approximated by squaring the point spread function component-wise before forming the operator, i.e.,\n\begin{equation}\n(A_\text{PSF}).^2 \approx A_{\text{PSF}.^2}\, .\label{eq:PSF_squaring}\n\end{equation}\nUsing this approximation, in each Newton step we only need to perform one multiplication by a matrix,\none component-wise multiplication, and one component-wise square root to obtain the entries of the diagonal matrix \eqref{eq:out_diag_def}. \nWith the assumption that $A$ and $L$ are both diagonalizable by FFTs, \nefficient multiplication by the Hessian \eqref{eq:lin_system} involves two forward and two inverse \ntwo-dimensional FFTs, which we refer to as \texttt{fft2} and \texttt{ifft2}, respectively. \nSolving systems with the matrix \eqref{eq:prec} involves only one \texttt{fft2} and one \texttt{ifft2}. \nIn addition to the \texttt{fft2} requirements, multiplication by the Hessian \eqref{eq:lin_system} involves 4 pixel-wise multiplications and 1 addition. Solving systems with the preconditioner \eqref{eq:prec} involves 3 pixel-wise multiplications (component-wise reciprocals are assumed to be computed only once at the beginning). 
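Under periodic boundary conditions, the squared-PSF approximation \eqref{eq:PSF_squaring} makes the diagonal \eqref{eq:out_diag_def} computable by FFT-based circular correlations. A NumPy sketch under these assumptions (names are ours; \texttt{psf} is assumed to be a full-size kernel already wrapped for circular convolution, and \texttt{d} holds the diagonal of $D^{(k)}$ as an image):

```python
import numpy as np

def prec_diagonal(psf, d, eps=1e-12):
    """Entries of the diagonal scaling in eq. (eq:out_diag_def), using
    (A_PSF).^2 ~ A_{PSF.^2} (eq. (eq:PSF_squaring)) and periodic boundary
    conditions, so that (A^T).^2 v is a circular correlation with PSF.^2."""
    otf2 = np.fft.fft2(psf**2)  # transfer function of the squared PSF

    def At2(v):
        # ((A^T).^2) v via circular correlation (conjugate transfer function)
        return np.real(np.fft.ifft2(np.conj(otf2) * np.fft.fft2(v)))

    num = At2(d)                # stacks e_i^T (A^T D A) e_i, cf. (eq:AT_squared)
    den = At2(np.ones_like(d))  # stacks e_i^T (A^T A) e_i
    return np.sqrt(np.maximum(num, 0.0) / (den + eps))
```

For $D^{(k)} = I$ the ratio is identically one, consistent with \eqref{eq:out_diag_def}.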
The total counts for each operation are shown in Table \\ref{tab:op_counts}.\n\\begin{table}\n\\caption{Operation counts for single-frame case.}\\label{tab:op_counts}\n\\centering\n{\\small\n\\begin{tabular}{llcccc}\n\\toprule\n& operation& \\texttt{fft2} & \\texttt{ifft2} & mults & adds \\\\ \n\\midrule\nHessian \\eqref{eq:lin_system} &multiply & 2 & 2 & 4 & 1 \\\\ \npreconditioner \\eqref{eq:prec} &solve & 1 & 1 & 3 & 0 \\\\ \n\\bottomrule\n\\end{tabular}} \n\\end{table}\n\n\n\n\n\nThe robustness and the efficiency of the proposed method are demonstrated on two test problems adopted from \\cite{Nagy2004Iterative}:\n\\paragraph{Satellite}\nAn atmospheric seeing problem with spatially invariant atmospheric blur (moderate seeing conditions with the Fried parameter 30). We also consider a multi-frame case, where the same object is blurred by three different PSFs. \nThese PSFs are generated by transposing and flipping the first PSF. \nThe setting is shown in Figures \\ref{fig:satellite} and \\ref{fig:PSF_mesh_1}.\n\\paragraph{Carbon ash}\nAn image deblurring problem with spatially invariant non-separable Gaussian blur, where the PSF has the\nfunctional definition\n\\begin{equation}\n\\label{eq:GaussianPSF}\n \\mbox{PSF}(s,t) = \\frac{1}{2\\pi \\sqrt{\\gamma_1^2 \\gamma_2^2 - \\tau^4}} \\exp\\left\\{\n -\\frac{1}{2} \\left[ \\begin{array}{cc} s & t \\end{array} \\right] C^{-1}\n \\left[ \\begin{array}{c} s \\\\ t \\end{array} \\right] \\right\\}\\, ,\n\\end{equation}\nwhere \n$$\n C = \\left[ \\begin{array}{cc} \\gamma_1^2 & \\tau^2 \\\\[3pt] \\tau^2 & \\gamma_2^2 \\end{array} \\right]\\,, \\quad\n \\mbox{and} \\quad \\gamma_1^2 \\gamma_2^2 - \\tau^4 > 0\\,.\n$$\nThe shape of the Gaussian PSF depends on the parameters $\\gamma_1$, $\\gamma_2$, and $\\tau$;\nwe use\n$\\gamma_1 = 4$, $\\gamma_2 = 2$, and $\\tau = 2$. We also consider a multi-frame case, where the same object is blurred by three different PSFs. 
The other two PSFs are Gaussian blurs with parameters $\\gamma_1 = 4$, $\\gamma_2 = 2$, $\\tau = 0$, and $\\gamma_1 = 4$, $\\gamma_2 = 2$, $\\tau = 0$. The setting is shown in Figures \\ref{fig:carbon_ash} and \\ref{fig:PSF_mesh_2}.\n\nAs previously mentioned, in the multi-frame case, the vector $b$ in (\\ref{eq:inverse_problem}) is the concatenation of the vectorized blurred noisy images and the matrix $A$ is the concatenation of the blurring operators, i.e., $A\\in\\mathbb{R}^{3n\\times n}$. For the test problems all true images are $256 \\times 256$ arrays of pixels (with intensities scaled to $[0,255]$), and thus\n$n = 65536$.\n\nComputation was performed in MATLAB R2015b. Noise is generated artificially using the MATLAB functions \\texttt{poissrnd} and \\texttt{randn}. Unless specified otherwise, the standard deviation $\\sigma$ is set to $5$. We use the discretized Laplacian, see \\cite[p. 95]{Hansen2006Deblurring}, as the regularization matrix $L$. The Projected Newton method (Algorithm \\ref{alg:projGNCG}) is terminated when the relative size of the projected gradient\n\\begin{equation}\n\\mathcal{P}(\\text{grad}_{J_{\\lambda}}(x^{(k)})), \\quad \\text{where} \\quad \\mathcal{P}(v) \\equiv v.*(1 - \\text{Active}) + \\text{Active}.*\\min(v, 0),\n\\label{eq:proj_grad_def}\n\\end{equation}\nreaches the tolerance $10^{-4}$ or after 40 iterations. We use MATLAB notation $.*$ to mean component-wise multiplication; the minimum $\\min(v,0)$ is also taken component-wise. Projected PCG (Algorithm \\ref{alg:projPCG}) is terminated when the relative size of the projected residual (denoted in Algorithm \\ref{alg:projPCG} by $r_i$) reaches $10^{-1}$, or the number of iterations reaches $100$. \nWe use the preconditioner given in \\eqref{eq:prec} as the default preconditioner. 
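For illustration, the projection $\\mathcal{P}$ in \\eqref{eq:proj_grad_def} can be implemented in a few lines (a NumPy sketch of ours; the experiments use the MATLAB implementation):

```python
import numpy as np

# Minimal sketch (ours) of the projected gradient defined above: on the
# active set (x == 0) only the negative gradient components are kept, so the
# projected gradient vanishes at a point satisfying the first-order
# optimality conditions for the constraint x >= 0.
def projected_gradient(grad, x):
    active = (x == 0)                          # the set "Active" above
    return np.where(active, np.minimum(grad, 0.0), grad)

x = np.array([0.0, 0.0, 1.5, 2.0])             # two active components
g = np.array([0.3, -0.2, 0.1, -0.4])
print(projected_gradient(g, x))                # [ 0.  -0.2  0.1 -0.4]
```

On the active set the projection discards positive components, which would push iterates out of the feasible region $x \\geq 0$.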
Given a search direction $s_k$, we apply a projected backtracking linesearch, with the initial step length equal to 1, which we terminate when \n\\[J_\\lambda(x^{(k+1)}) < J_\\lambda(x^{(k)}).\\] \n\\begin{figure}[!th]\n\\centering\n\\includegraphics[width=.2\\textwidth]{SatelliteSingle_true}\n\\hspace*{.5cm}\n\\includegraphics[width=.2\\textwidth]{SatelliteMulti_noisy_1}\n\\includegraphics[width=.2\\textwidth]{SatelliteMulti_noisy_2}\n\\includegraphics[width=.2\\textwidth]{SatelliteMulti_noisy_3}\n\\caption{Test problem Satellite: true image (left) together with three blurred noisy images (right). \n}\\label{fig:satellite}\n\\end{figure}\n\\begin{figure}[!th]\n\\centering\n\\includegraphics[width=.2\\textwidth]{CarbonAshSingle_true}\n\\hspace*{.5cm}\n\\includegraphics[width=.2\\textwidth]{CarbonAshMulti_noisy_1}\n\\includegraphics[width=.2\\textwidth]{CarbonAshMulti_noisy_2}\n\\includegraphics[width=.2\\textwidth]{CarbonAshMulti_noisy_3}\n\\caption{Test problem Carbon ash: true image (left) together with three blurred noisy images (right). \n}\\label{fig:carbon_ash}\n\\end{figure}\n\n\\begin{figure}[!th]\n\\centering\n\\begin{subfigure}[b]{.45\\textwidth}\n\\includegraphics[width=.95\\textwidth]{SatelliteSingle_PSFmesh_1}\n\\caption{Satellite}\\label{fig:PSF_mesh_1}\n\\end{subfigure}\n\\begin{subfigure}[b]{.45\\textwidth}\n\\includegraphics[width=.95\\textwidth]{CarbonAshSingle_PSFmesh_1}\n\\caption{Carbon ash}\\label{fig:PSF_mesh_2}\n\\end{subfigure}\n\\caption{Point-spread functions for the first frame of each test problem.\n}\\label{fig:setting}\n\\end{figure}\n\n\n\n\\subsection{Robustness with respect to various types of outliers}\n\nIn this section, we consider several types of outliers, whose choice was motivated by \\cite{Calef2013Iteratively}, and demonstrate the robustness of the proposed method. 
Note that one difference between \\cite{Calef2013Iteratively} and the proposed approach is that in \\cite{Calef2013Iteratively} an approximation of the solution is computed in order to update the outer (robust) weights associated with the components of the residual, whereas here the weights are represented by the loss function $\\rho$ and are updated implicitly in each Newton step; our approach therefore does not involve any outer iteration.\n\n\\subsubsection*{Random corruptions}\nFirst we consider the simplest type of outliers -- a given percentage of pixels is corrupted at random. These corruptions are generated artificially by adding a value randomly chosen between $0$ and $\\max(Ax_\\text{true})$ to the given percentage of pixels. The locations of these pixels are also chosen randomly. Figures \\ref{fig:sliding_curves_1}, \\ref{fig:sliding_curves_2}, \\ref{fig:sliding_curves_3}, and \\ref{fig:sliding_curves_4} show semiconvergence curves\\footnote{For ill-posed problems,\nthe relative error of an iterative method generally does not decrease monotonically. Instead, unless the problem is highly over-regularized, the relative errors decrease in the early iterations, but at later iterations the noise and\nother errors tend to corrupt the approximations. This behavior, where the relative errors decrease to a certain\nlevel and then increase at later iterations, is referred to as {\\em semiconvergence}; for\nmore information, we refer readers to \\cite{Engl2000Regularization,Hansen2010Discrete,Mueller2012Linear,Vogel2002Computational}.}, representing the dependence of the error on the regularization parameter $\\lambda$, when we increase the percentage of corrupted pixels. 
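The corruption procedure described above can be sketched as follows (an illustrative NumPy translation of ours, not the experiment code):

```python
import numpy as np

# Illustrative sketch (ours) of the random corruptions: a given fraction of
# randomly located pixels receives an additive value drawn uniformly from
# [0, max(A x_true)].
def corrupt(b, ax_max, fraction, rng):
    b = b.copy()
    k = int(round(fraction * b.size))
    idx = rng.choice(b.size, size=k, replace=False)   # random pixel locations
    b[idx] += rng.uniform(0.0, ax_max, size=k)        # random corruption values
    return b

rng = np.random.default_rng(1)
b = np.zeros(256 * 256)                # stand-in for a blurred noisy image
bc = corrupt(b, 255.0, 0.10, rng)      # corrupt 10% of the pixels
print(np.count_nonzero(bc != b))       # number of corrupted pixels (10% of 65536)
```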
\n\\begin{figure}[!ht]\n\\centering\n\\begin{subfigure}[b]{\\textwidth}\n\\includegraphics[width=.31\\textwidth]{semiconv_curve_SatelliteSingle_0}\n\\hspace*{.2cm}\n\\includegraphics[width=.31\\textwidth]{semiconv_curve_SatelliteSingle_2}\n\\hspace*{.2cm}\n\\includegraphics[width=.31\\textwidth]{semiconv_curve_SatelliteSingle_10}\n\\caption{Satellite single-frame}\\label{fig:sliding_curves_1}\n\\end{subfigure}\n\n\\vspace*{.3cm}\n\n\\begin{subfigure}[b]{\\textwidth}\n\\includegraphics[width=.31\\textwidth]{semiconv_curve_SatelliteMulti_0}\n\\hspace*{.2cm}\n\\includegraphics[width=.31\\textwidth]{semiconv_curve_SatelliteMulti_2}\n\\hspace*{.2cm}\n\\includegraphics[width=.31\\textwidth]{semiconv_curve_SatelliteMulti_10}\n\\caption{Satellite multi-frame}\\label{fig:sliding_curves_2}\n\\end{subfigure}\n\n\\vspace*{.3cm}\n\n\\begin{subfigure}[b]{\\textwidth}\n\\includegraphics[width=.31\\textwidth]{semiconv_curve_CarbonAshSingle_0}\n\\hspace*{.2cm}\n\\includegraphics[width=.31\\textwidth]{semiconv_curve_CarbonAshSingle_2}\n\\hspace*{.2cm}\n\\includegraphics[width=.31\\textwidth]{semiconv_curve_CarbonAshSingle_10}\n\\caption{Carbon ash single-frame}\\label{fig:sliding_curves_3}\n\\end{subfigure}\n\n\\vspace*{.3cm}\n\n\\begin{subfigure}[b]{\\textwidth}\n\\includegraphics[width=.31\\textwidth]{semiconv_curve_CarbonAshMulti_0}\n\\hspace*{.2cm}\n\\includegraphics[width=.31\\textwidth]{semiconv_curve_CarbonAshMulti_2}\n\\hspace*{.2cm}\n\\includegraphics[width=.31\\textwidth]{semiconv_curve_CarbonAshMulti_10}\n\\caption{Carbon ash multi-frame}\\label{fig:sliding_curves_4}\n\\end{subfigure}\n\\caption{Semiconvergence curves -- dependence of the relative error of the reconstruction on the size of the regularization parameter $\\lambda$ for various percentages of outliers: Talwar \\eqref{eq:robust_wls} - \\eqref{eq:beta_opt} (solid line) and the standard data fidelity function \\eqref{eq:wls} (dashed line). 
\n}\n\\end{figure}\nIt is no surprise that when outliers occur, more regularization is needed in order to obtain a reasonable approximation of the true image $x_\\text{true}$. This is, however, not the case if we use the Talwar loss function, for which the semiconvergence curve remains essentially the same as the percentage of outliers increases, and therefore no adjustment of the regularization parameter is needed. In Figures \\ref{fig:reconstructions_satellite} and \\ref{fig:reconstructions_carbonash}, we show the reconstructions corresponding to $10 \\%$ outliers. The regularization parameter is chosen close to the optimal regularization parameter for the same problem with no outliers. Note that Figures \\ref{fig:reconstructions_satellite} and \\ref{fig:reconstructions_carbonash} show only one frame for illustration. In the multi-frame case, the corruptions look similar for all frames, except that the random locations of the outliers are different. For random outliers like this, robust regression is clearly superior to standard weighted least squares. The influence of the outliers in the multi-frame case is less severe, due to the intrinsic regularization of the overdetermined system (\\ref{eq:inverse_problem}). A more comprehensive comparison of the standard and robust approaches is shown in Table \\ref{tab:percent}, giving the percentage of cases in which the robust approach provides a better reconstruction. The robust approach provides a better reconstruction in all cases except for the test problem Satellite with no outliers, where the standard approach sometimes gave slightly better reconstructions. 
However, even in these cases we observed that the difference between the errors of the reconstructions is rather small, about $3\\%$.\n\\begin{figure}[!th]\n\\centering\n\\begin{subfigure}[b]{.18\\textwidth}\\captionsetup{justification=centering}\n\\includegraphics[width=.98\\textwidth]{data_SatelliteSingleRand10} \n\\caption{data\\newline}\n\\end{subfigure}\n\\hspace*{.1cm}\n\\begin{subfigure}[b]{.18\\textwidth}\\captionsetup{justification=centering}\n\\includegraphics[width=.98\\textwidth]{reconstruction_standard_SatelliteSingleRand10} \n\\caption{single-frame\\\\ standard}\n\\end{subfigure}\n\\begin{subfigure}[b]{.18\\textwidth}\\captionsetup{justification=centering}\n\\includegraphics[width=.98\\textwidth]{reconstruction_robust_SatelliteSingleRand10} \n\\caption{single-frame\\\\ robust}\n\\end{subfigure}\n\\hspace*{.1cm}\n\\begin{subfigure}[b]{.18\\textwidth}\\captionsetup{justification=centering}\n\\includegraphics[width=.98\\textwidth]{reconstruction_standard_SatelliteMultiRand10} \n\\caption{multi-frame\\\\ standard}\n\\end{subfigure}\n\\begin{subfigure}[b]{.18\\textwidth}\\captionsetup{justification=centering}\n\\includegraphics[width=.98\\textwidth]{reconstruction_robust_SatelliteMultiRand10} \n\\caption{multi-frame\\\\ robust}\n\\end{subfigure}\n\\caption{Random corruptions: (a) blurred noisy image with $10 \\%$ corrupted pixels (only first frame is shown); (b) - (e) reconstructions corresponding to $\\lambda = 10^{-4}$. 
\n}\\label{fig:reconstructions_satellite}\n\\end{figure}\n\\begin{figure}[!th]\n\\centering\n\\begin{subfigure}[b]{.18\\textwidth}\\captionsetup{justification=centering}\n\\includegraphics[width=.98\\textwidth]{data_CarbonAshSingleRand10} \n\\caption{data\\newline}\n\\end{subfigure}\n\\hspace*{.1cm}\n\\begin{subfigure}[b]{.18\\textwidth}\\captionsetup{justification=centering}\n\\includegraphics[width=.98\\textwidth]{reconstruction_standard_CarbonAshSingleRand10} \n\\caption{single-frame\\\\ standard}\n\\end{subfigure}\n\\begin{subfigure}[b]{.18\\textwidth}\\captionsetup{justification=centering}\n\\includegraphics[width=.98\\textwidth]{reconstruction_robust_CarbonAshSingleRand10} \n\\caption{single-frame\\\\ robust}\n\\end{subfigure}\n\\hspace*{.1cm}\n\\begin{subfigure}[b]{.18\\textwidth}\\captionsetup{justification=centering}\n\\includegraphics[width=.98\\textwidth]{reconstruction_standard_CarbonAshMultiRand10} \n\\caption{multi-frame\\\\ standard}\n\\end{subfigure}\n\\begin{subfigure}[b]{.18\\textwidth}\\captionsetup{justification=centering}\n\\includegraphics[width=.98\\textwidth]{reconstruction_robust_CarbonAshMultiRand10} \n\\caption{multi-frame\\\\ robust}\n\\end{subfigure}\n\\caption{Random corruptions: (a) blurred noisy image with $10 \\%$ corrupted pixels (only first frame is shown); (b) - (e) reconstructions corresponding to $\\lambda = 10^{-3}$. \n}\\label{fig:reconstructions_carbonash}\n\\end{figure}\n\\begin{table}[!th]\n\\centering\n\\caption{Comparison of the quality of reconstruction for the standard vs. the robust approach. For each test problem and each percentage of outliers, the results are averaged over 100 independent realizations of the positions and sizes of the random corruptions. Regularization parameters are chosen identically to those in Figures \\ref{fig:reconstructions_satellite} and \\ref{fig:reconstructions_carbonash}. Reconstructions are considered to be of the same quality if the difference between the corresponding relative errors is smaller than 1\\%. 
\n}\\label{tab:percent}\n{\\small \\begin{tabular}{lcccc} \\toprule \n\\multicolumn{5}{c}{better reconstruction: robust\/same\/standard} \\\\ \nproblem\/\\% outliers & \\multicolumn{1}{c}{0\\%} & \\multicolumn{1}{c}{1\\%} & \\multicolumn{1}{c}{2\\%} & \\multicolumn{1}{c}{5\\%} \\\\ \\midrule\nSatellite single-frame& 0\/93\/7& 100\/0\/0& 100\/0\/0& 100\/0\/0\\\\ \nSatellite multi-frame& 0\/94\/6& 100\/0\/0& 100\/0\/0& 100\/0\/0\\\\ \nCarbon ash single-frame& 0\/100\/0& 100\/0\/0& 100\/0\/0& 100\/0\/0\\\\ \nCarbon ash multi-frame& 0\/100\/0& 100\/0\/0& 100\/0\/0& 100\/0\/0\\\\ \n\\bottomrule\\end{tabular}}\n\\end{table}\n\n\n\\subsubsection*{Added object with different blurring}\nWe also consider a situation in which a small object appears in the scene but is blurred by a different PSF than the main object (satellite). The aim is to recover the main object, while suppressing the influence of the added one. In our case, the added object is a small satellite in the upper left corner that is blurred by a small motion blur. In the multi-frame case, the small satellite is added to the first frame only. The difference between the reconstructions using the standard and robust approaches is shown in Figure \\ref{fig:alien}. For the single-frame problem, the reconstruction obtained using the standard loss function is fully dominated by the small added object. For the multi-frame situation, the influence of the outlier is partially compensated by the two frames without outliers. 
In both cases, however, robust regression provides a better reconstruction, comparable to the reconstruction obtained from data without outliers.\n\\begin{figure}[!th]\n\\centering\n\\begin{subfigure}[b]{.18\\textwidth}\\captionsetup{justification=centering}\n\\includegraphics[width=.98\\textwidth]{data_SatelliteSingleAlien} \n\\caption{data\\newline}\n\\end{subfigure}\n\\hspace*{.1cm}\n\\begin{subfigure}[b]{.18\\textwidth}\\captionsetup{justification=centering}\n\\includegraphics[width=.98\\textwidth]{reconstruction_standard_SatelliteSingleAlien} \n\\caption{single-frame\\\\ standard}\n\\end{subfigure}\n\\begin{subfigure}[b]{.18\\textwidth}\\captionsetup{justification=centering}\n\\includegraphics[width=.98\\textwidth]{reconstruction_robust_SatelliteSingleAlien} \n\\caption{single-frame\\\\ robust}\n\\end{subfigure}\n\\hspace*{.1cm}\n\\begin{subfigure}[b]{.18\\textwidth}\\captionsetup{justification=centering}\n\\includegraphics[width=.98\\textwidth]{reconstruction_standard_SatelliteMultiAlien} \n\\caption{multi-frame\\\\ standard}\n\\end{subfigure}\n\\begin{subfigure}[b]{.18\\textwidth}\\captionsetup{justification=centering}\n\\includegraphics[width=.98\\textwidth]{reconstruction_robust_SatelliteMultiAlien} \n\\caption{multi-frame\\\\ robust}\n\\end{subfigure}\n\\caption{Added object: (a) blurred noisy image with a small object added to the first frame (only first frame is shown); (b) - (e) reconstructions corresponding to $\\lambda = 10^{-4}$. \n}\\label{fig:alien}\n\\end{figure}\n\n\n\\subsubsection*{Outliers introduced by boundary conditions}\nThe choice of boundary conditions plays an important role in solving image deblurring problems. As is well known, see e.g. \\cite{Hansen2006Deblurring}, unless some strong a priori information about the scene outside the borders is available, any choice of the boundary conditions may lead to artifacts around edges in the reconstruction. 
As in \\cite{Calef2013Iteratively}, we may expect that the robust objective functional (\\ref{eq:functional}) can to some extent compensate for these edge artifacts, i.e., for the outliers represented by the `incorrectly' imposed boundary conditions. In our model we assume periodic boundary conditions, which are computationally very appealing, since they allow the multiplication by $A$ to be evaluated very efficiently using the fast Fourier transform. However, if any of the objects in the scene is close to the boundary, these boundary conditions will most likely cause artifacts. In order to demonstrate the ability of the proposed scheme to eliminate the influence of this type of outlier, we shifted the satellite to the right edge of the image. All other settings remain unchanged. Reconstructions using the standard and robust approaches are shown in Figure \\ref{fig:bc}. We see that, although the improvement is not dramatic, robust regression can reduce the artifacts caused by incorrectly imposed boundary conditions and therefore provide a better reconstruction of the true image. 
Quantitative results for this and all the previous types of outliers are shown in Tables~\\ref{tab:2a} and \\ref{tab:2b}.\n\\begin{figure}[!th]\n\\centering\n\\begin{subfigure}[b]{.18\\textwidth}\\captionsetup{justification=centering}\n\\includegraphics[width=.98\\textwidth]{data_SatelliteSingleBC} \n\\caption{data\\newline}\n\\end{subfigure}\n\\hspace*{.1cm}\n\\begin{subfigure}[b]{.18\\textwidth}\\captionsetup{justification=centering}\n\\includegraphics[width=.98\\textwidth]{reconstruction_standard_SatelliteSingleBC} \n\\caption{single-frame\\\\ standard}\n\\end{subfigure}\n\\begin{subfigure}[b]{.18\\textwidth}\\captionsetup{justification=centering}\n\\includegraphics[width=.98\\textwidth]{reconstruction_robust_SatelliteSingleBC} \n\\caption{single-frame\\\\ robust}\n\\end{subfigure}\n\\hspace*{.1cm}\n\\begin{subfigure}[b]{.18\\textwidth}\\captionsetup{justification=centering}\n\\includegraphics[width=.98\\textwidth]{reconstruction_standard_SatelliteMultiBC} \n\\caption{multi-frame\\\\ standard}\n\\end{subfigure}\n\\begin{subfigure}[b]{.18\\textwidth}\\captionsetup{justification=centering}\n\\includegraphics[width=.98\\textwidth]{reconstruction_robust_SatelliteMultiBC} \n\\caption{multi-frame\\\\ robust}\n\\end{subfigure}\n\\caption{Incorrectly imposed periodic boundary conditions: (a) blurred noisy image with the object close to the edge (only first frame is shown); (b) - (e) reconstructions corresponding to $\\lambda = 10^{-4}$. \n}\\label{fig:bc}\n\\end{figure}\n\n\\begin{table}[!th]\n\\caption{Comparison of the standard and robust approaches in terms of the relative error of the reconstruction. Each row contains results for both the standard and the robust approach. The abbreviation `\\# it' stands for the number of Newton steps performed before the relative size of the projected gradient reached the tolerance $10^{-4}$. Corresponding reconstructions are shown in Figures~\\ref{fig:reconstructions_satellite}--\\ref{fig:bc}. 
\n}\n\\centering\n{\\small\n\\begin{subtable}[h]{\\textwidth}\n\\subcaption{single-frame}\n\\centering\n\\begin{tabular}{lcccc}\\toprule \n & \\multicolumn{2}{c}{standard} & \\multicolumn{2}{c}{robust} \\\\ \n\\multicolumn{1}{c}{problem} & \\multicolumn{1}{c}{\\# it} & \\multicolumn{1}{c}{rel. error} & \\multicolumn{1}{c}{\\# it} & \\multicolumn{1}{c}{rel. error} \\\\ \\midrule\nSatellite& 15& 3.40$\\times 10^{-1}$& 16& 3.42$\\times 10^{-1}$\\\\ \nSatellite random corr. 10\\%& 14& 6.78$\\times 10^{-1}$& 14& 3.57$\\times 10^{-1}$\\\\ \nCarbon ash& 10& 3.10$\\times 10^{-1}$& 11& 3.08$\\times 10^{-1}$\\\\ \nCarbon ash random corr. 10\\% & 11& 3.80$\\times 10^{-1}$& 14& 3.10$\\times 10^{-1}$\\\\ \nSatellite added object& 15& 4.72$\\times 10^{-1}$& 15& 3.43$\\times 10^{-1}$\\\\ \nSatellite boundary conditions& 15& 5.48$\\times 10^{-1}$& 25& 4.51$\\times 10^{-1}$\\\\ \n\\bottomrule\\end{tabular}\\label{tab:2a}\n\\end{subtable}\n\n\\medskip\n\n\\begin{subtable}[h]{1\\textwidth}\n\\subcaption{multi-frame}\n\\centering\n\\begin{tabular}{lcccc} \\toprule\n & \\multicolumn{2}{c}{standard} & \\multicolumn{2}{c}{robust} \\\\ \n\\multicolumn{1}{c}{problem} & \\multicolumn{1}{c}{\\# it} & \\multicolumn{1}{c}{rel. error} & \\multicolumn{1}{c}{\\# it} & \\multicolumn{1}{c}{rel. error} \\\\ \\midrule\nSatellite& 12& 2.89$\\times 10^{-1}$& 11& 2.89$\\times 10^{-1}$\\\\ \nSatellite random corr. 10\\%& 11& 6.45$\\times 10^{-1}$& 13& 3.00$\\times 10^{-1}$\\\\ \nCarbon ash& 12& 3.07$\\times 10^{-1}$& 11& 3.05$\\times 10^{-1}$\\\\ \nCarbon ash random corr. 
10\\%& 9& 3.70$\\times 10^{-1}$& 19& 3.06$\\times 10^{-1}$\\\\ \nSatellite added object& 13& 3.33$\\times 10^{-1}$& 11& 2.90$\\times 10^{-1}$\\\\ \nSatellite boundary conditions& 14& 5.26$\\times 10^{-1}$& 14& 4.27$\\times 10^{-1}$\\\\ \n\\bottomrule\\end{tabular}\\label{tab:2b}\n\\end{subtable}}\n\\end{table} \n\n\\subsection{Generalized cross-validation}\\label{sec:gcv_tests}\nFor the remainder of this section we consider only the robust approach, i.e., functional \\eqref{eq:functional} with the Talwar loss function. In Section~\\ref{sec:gcv} we described a regularization parameter selection rule based on leave-one-out cross validation. Since GCV is a standard method, we focus here mainly on the influence of the outliers on its reliability. To obtain various noise levels, we scale the original true scene (with maximum intensity 255) by 10 and by 100, which decreases the relative size of the Poisson noise. The standard deviation $\\sigma$ for the additive Gaussian noise is scaled accordingly by $\\sqrt{10}$ and $10$. We compute the resulting signal-to-noise ratio as the reciprocal of the coefficient of variation, i.e.,\n\\[\n\\text{SNR} = \\frac{\\|Ax\\|}{\\sqrt{\\sum_{i=1}^n([Ax]_i + \\sigma^2)}}.\n\\]\n\nFor our computations, we use CG to solve \\eqref{eq:gcv_lin_syst}, which we terminate if the relative size of the residual reaches $10^{-4}$ or if the number of iterations reaches 150. \nTo minimize the GCV functional, we use the MATLAB built-in function \\texttt{fminbnd}, for which we set the lower bound to $0$ and the upper bound to $10^{-1}$, $10^{-2}$, or $10^{-4}$, depending on the maximum intensity of the image. The tolerance \\texttt{TolX} was set to $10^{-8}$. \n\nFor test problem Satellite, we show the semiconvergence curves including the minimum error and the error obtained using GCV in Figure~\\ref{fig:GCV_satellite}. 
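The scaling of the noise levels is consistent with this definition of SNR: multiplying the scene by a factor $c$ while scaling $\\sigma$ by $\\sqrt{c}$ multiplies the SNR by exactly $\\sqrt{c}$, which can be verified with a short NumPy sketch (ours; the data vector is a random stand-in):

```python
import numpy as np

# Sketch (ours) of the SNR formula above: reciprocal of the coefficient of
# variation for the mixed Poisson--Gaussian model.
def snr(Ax, sigma):
    return np.linalg.norm(Ax) / np.sqrt(np.sum(Ax + sigma**2))

rng = np.random.default_rng(2)
Ax = rng.uniform(0.0, 255.0, size=256 * 256)   # stand-in for the exact data
# Scene scaled by 10, sigma scaled by sqrt(10), as in the experiments:
r = snr(10 * Ax, np.sqrt(10) * 5.0) / snr(Ax, 5.0)
print(r)                                        # sqrt(10), approx. 3.1623
```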
Quantitative results (averaged over 10 realizations of outliers) for both test problems are shown in Table~\\ref{tab:gcv}. We observe that the proposed rule is rather stable with respect to an increasing number of outliers and generally performs better for the Carbon ash problem than for the Satellite problem. As expected, the method provides a better approximation of the optimal regularization parameter for smaller noise levels (larger $Ax_\\text{true}$), where the functional \\eqref{eq:wls} better approximates the maximum likelihood functional for the mixed Poisson--Gaussian model. Occasionally, GCV provides a slightly worse reconstruction for the highest percentage (10\\%) of outliers.\n\\begin{figure}[!ht]\n \\centering\n \\begin{subfigure}[b]{\\textwidth}\n \\includegraphics[width=.28\\textwidth]{GCV_SatelliteSingle_Laplacian_1_wls_talwar} \n \\quad\n\t\\includegraphics[width=.28\\textwidth]{GCV_SatelliteSingle_Laplacian_5_wls_talwar} \n \\quad\n\t\\includegraphics[width=.28\\textwidth]{GCV_SatelliteSingle_Laplacian_10_wls_talwar} \n \t\\caption{Satellite single-frame, max. intensity 255 (SNR = 5).}\\end{subfigure}\n \n\\vspace*{.3cm}\n\n\n \\begin{subfigure}[b]{\\textwidth}\n \\includegraphics[width=.28\\textwidth]{GCV_SatelliteSingle2550_Laplacian_1_wls_talwar} \n \\quad\n\t\\includegraphics[width=.28\\textwidth]{GCV_SatelliteSingle2550_Laplacian_5_wls_talwar}\n \\quad\n\t\\includegraphics[width=.28\\textwidth]{GCV_SatelliteSingle2550_Laplacian_10_wls_talwar} \n \t\\caption{Satellite single-frame, max. intensity 2550 (SNR = 17).}\\end{subfigure}\n \n\\vspace*{.3cm}\n\n\n \\begin{subfigure}[b]{\\textwidth}\n \\includegraphics[width=.28\\textwidth]{GCV_SatelliteSingle25500_Laplacian_1_wls_talwar} \n \\quad\n\t\\includegraphics[width=.28\\textwidth]{GCV_SatelliteSingle25500_Laplacian_5_wls_talwar}\n \\quad\n\t\\includegraphics[width=.28\\textwidth]{GCV_SatelliteSingle25500_Laplacian_10_wls_talwar} \n \t\\caption{Satellite single-frame, max. 
intensity 25500 (SNR = 52).}\\end{subfigure}\n \n\\vspace*{.3cm}\n\n \t\n \\begin{subfigure}[b]{\\textwidth}\n \\includegraphics[width=.28\\textwidth]{GCV_SatelliteMulti_Laplacian_1_wls_talwar} \n \\quad\n\t\\includegraphics[width=.28\\textwidth]{GCV_SatelliteMulti_Laplacian_5_wls_talwar}\n \\quad\n\t\\includegraphics[width=.28\\textwidth]{GCV_SatelliteMulti_Laplacian_10_wls_talwar} \n \t\\caption{Satellite multi-frame, max. intensity 255 (SNR = 5).}\\end{subfigure}\n \n \t\\caption{GCV for data with outliers. \n} \\label{fig:GCV_satellite}\n\\end{figure}\n\n\n\\begin{landscape}\n\\begin{table}\n\\centering\n\\caption{Relative errors of the reconstruction: optimal $\\lambda$ vs. $\\lambda$ obtained by minimizing the GCV functional \\eqref{eq:gcv_fun}. \n}\\label{tab:gcv}\n\\begin{subtable}[h]{1.5\\textwidth}\n\\subcaption{Satellite}\n\\centering\n{\\scriptsize\\begin{tabular}{llcllclcclc} \\toprule \n\\multicolumn{2}{r}{}& \\multicolumn{4}{r}{error min} & \\multicolumn{4}{l}{($\\lambda$ optimal)} \\\\ \n\\multicolumn{2}{r}{} & \\multicolumn{4}{r}{error GCV} & \\multicolumn{4}{l}{($\\lambda$ GCV)} \\\\ \n\\multicolumn{1}{r}{} & \\multicolumn{9}{c}{\\% outliers}\\\\ \nproblem & max. int. 
& \\multicolumn{2}{c}{1\\%} & & \\multicolumn{2}{c}{5\\%} & & \\multicolumn{2}{c}{10\\%} \\\\ \\midrule \n\\multirow{2}{*}{single-frame} & \\multirow{2}{*}{255}& $3.39\\times 10^{-1}$ & ($\\lambda_\\text{opt} = 2.1\\times 10^{-4}$) & & $3.43\\times 10^{-1}$ & ($\\lambda_\\text{opt} = 2.5\\times 10^{-4}$) & & $3.49\\times 10^{-1}$ & ($\\lambda_\\text{opt} = 3.2\\times 10^{-4}$)\\\\ \n & & $3.60\\times 10^{-1}$ & ($\\lambda_\\text{GCV} = 2.1\\times 10^{-3}$) & & $3.68\\times 10^{-1}$ & ($\\lambda_\\text{GCV} = 3.1\\times 10^{-3}$) & & $3.80\\times 10^{-1}$ & ($\\lambda_\\text{GCV} = 5.6\\times 10^{-3}$)\\\\ \\addlinespace[.2cm] \n\\multirow{2}{*}{single-frame} & \\multirow{2}{*}{2550}& $2.46\\times 10^{-1}$ & ($\\lambda_\\text{opt} = 1.5\\times 10^{-6}$) & & $2.48\\times 10^{-1}$ & ($\\lambda_\\text{opt} = 1.5\\times 10^{-6}$) & & $2.50\\times 10^{-1}$ & ($\\lambda_\\text{opt} = 1.7\\times 10^{-6}$)\\\\ \n & & $2.48\\times 10^{-1}$ & ($\\lambda_\\text{GCV} = 1.5\\times 10^{-6}$) & & $2.57\\times 10^{-1}$ & ($\\lambda_\\text{GCV} = 4.5\\times 10^{-6}$) & & $2.69\\times 10^{-1}$ & ($\\lambda_\\text{GCV} = 8.4\\times 10^{-6}$)\\\\ \\addlinespace[.2cm] \n\\multirow{2}{*}{single-frame} & \\multirow{2}{*}{25500}& $1.78\\times 10^{-1}$ & ($\\lambda_\\text{opt} = 1.0\\times 10^{-8}$) & & $1.79\\times 10^{-1}$ & ($\\lambda_\\text{opt} = 1.0\\times 10^{-8}$) & & $1.81\\times 10^{-1}$ & ($\\lambda_\\text{opt} = 1.0\\times 10^{-8}$)\\\\ \n & & $1.79\\times 10^{-1}$ & ($\\lambda_\\text{GCV} = 8.8\\times 10^{-9}$) & & $1.81\\times 10^{-1}$ & ($\\lambda_\\text{GCV} = 1.5\\times 10^{-8}$) & & $2.42\\times 10^{-1}$ & ($\\lambda_\\text{GCV} = 7.5\\times 10^{-6}$)\\\\ \\addlinespace[.2cm] \n\\multirow{2}{*}{multi-frame} & \\multirow{2}{*}{255}& $2.89\\times 10^{-1}$ & ($\\lambda_\\text{opt} = 1.8\\times 10^{-4}$) & & $2.92\\times 10^{-1}$ & ($\\lambda_\\text{opt} = 1.8\\times 10^{-4}$) & & $2.97\\times 10^{-1}$ & ($\\lambda_\\text{opt} = 1.8\\times 10^{-4}$)\\\\ \n & & $3.16\\times 
10^{-1}$ & ($\\lambda_\\text{GCV} = 1.6\\times 10^{-3}$) & & $3.13\\times 10^{-1}$ & ($\\lambda_\\text{GCV} = 1.2\\times 10^{-3}$) & & $3.49\\times 10^{-1}$ & ($\\lambda_\\text{GCV} = 5.8\\times 10^{-3}$) \\\\ \\addlinespace[.2cm] \n\\bottomrule\\end{tabular}\n}\n\\end{subtable}\n\n\\begin{subtable}[h]{1.5\\textwidth}\n\\subcaption{Carbon ash}\n\\centering\n{\\scriptsize\\begin{tabular}{llcllclcclc} \\toprule \n\\multicolumn{2}{r}{}& \\multicolumn{4}{r}{error min} & \\multicolumn{4}{l}{($\\lambda$ optimal)} \\\\ \n\\multicolumn{2}{r}{} & \\multicolumn{4}{r}{error GCV} & \\multicolumn{4}{l}{($\\lambda$ GCV)} \\\\ \n\\multicolumn{1}{r}{} & \\multicolumn{9}{c}{\\% outliers}\\\\ \nproblem & max. int. & \\multicolumn{2}{c}{1\\%} & & \\multicolumn{2}{c}{5\\%} & & \\multicolumn{2}{c}{10\\%} \\\\ \\midrule \n\\multirow{2}{*}{single-frame} & \\multirow{2}{*}{255}& $3.08\\times 10^{-1}$ & ($\\lambda_\\text{opt} = 1.8\\times 10^{-3}$) & & $3.09\\times 10^{-1}$ & ($\\lambda_\\text{opt} = 1.8\\times 10^{-3}$) & & $3.10\\times 10^{-1}$ & ($\\lambda_\\text{opt} = 1.7\\times 10^{-3}$)\\\\ \n& & $3.12\\times 10^{-1}$ & ($\\lambda_\\text{GCV} = 8.9\\times 10^{-4}$) & & $3.11\\times 10^{-1}$ & ($\\lambda_\\text{GCV} = 1.5\\times 10^{-3}$) & & $3.11\\times 10^{-1}$ & ($\\lambda_\\text{GCV} = 2.2\\times 10^{-3}$) \\\\ \\addlinespace[.2cm] \n\\multirow{2}{*}{single-frame} & \\multirow{2}{*}{2550}& $2.91\\times 10^{-1}$ & ($\\lambda_\\text{opt} = 1.9\\times 10^{-5}$) & & $2.92\\times 10^{-1}$ & ($\\lambda_\\text{opt} = 2.1\\times 10^{-5}$) & & $2.92\\times 10^{-1}$ & ($\\lambda_\\text{opt} = 1.9\\times 10^{-5}$)\\\\ \n& & $2.97\\times 10^{-1}$ & ($\\lambda_\\text{GCV} = 9.7\\times 10^{-6}$) & & $2.95\\times 10^{-1}$ & ($\\lambda_\\text{GCV} = 1.2\\times 10^{-5}$) & & $2.93\\times 10^{-1}$ & ($\\lambda_\\text{GCV} = 2.2\\times 10^{-5}$)\\\\ \\addlinespace[.2cm] \n\\multirow{2}{*}{single-frame} & \\multirow{2}{*}{25500}& $2.77\\times 10^{-1}$ & ($\\lambda_\\text{opt} = 3.0\\times 10^{-7}$) 
& & $2.78\\times 10^{-1}$ & ($\\lambda_\\text{opt} = 2.7\\times 10^{-7}$) & & $2.79\\times 10^{-1}$ & ($\\lambda_\\text{opt} = 2.6\\times 10^{-7}$)\\\\ \n& & $3.00\\times 10^{-1}$ & ($\\lambda_\\text{GCV} = 5.2\\times 10^{-8}$) & & $2.84\\times 10^{-1}$ & ($\\lambda_\\text{GCV} = 9.4\\times 10^{-8}$) & & $2.82\\times 10^{-1}$ & ($\\lambda_\\text{GCV} = 1.3\\times 10^{-7}$)\\\\ \\addlinespace[.2cm] \n\\multirow{2}{*}{multi-frame} & \\multirow{2}{*}{255}& $3.03\\times 10^{-1}$ & ($\\lambda_\\text{opt} = 1.8\\times 10^{-3}$) & & $3.04\\times 10^{-1}$ & ($\\lambda_\\text{opt} = 1.8\\times 10^{-3}$) & & $3.04\\times 10^{-1}$ & ($\\lambda_\\text{opt} = 1.8\\times 10^{-3}$)\\\\ \n & & $3.14\\times 10^{-1}$ & ($\\lambda_\\text{GCV} = 6.8\\times 10^{-4}$) & & $3.07\\times 10^{-1}$ & ($\\lambda_\\text{GCV} = 9.8\\times 10^{-4}$) & & $3.05\\times 10^{-1}$ & ($\\lambda_\\text{GCV} = 1.9\\times 10^{-3}$)\\\\ \\addlinespace[.2cm] \n\\bottomrule\\end{tabular}\n}\n\\end{subtable}\n\\end{table} \n\\end{landscape}\n\n\n\n\\subsection{Linear subproblems}\n\nAs mentioned earlier, various types of preconditioners have been developed to speed up convergence of iterative methods applied to systems of type \\eqref{eq:lin_system} or its saddle-point counterpart \n\\begin{equation}\n\\begin{pmatrix}\n D^{-1} & A \\\\\n A^T & -\\lambda L^TL\\\\\n \\end{pmatrix} \n \\begin{pmatrix}\n u \\\\\n s\\\\\n \\end{pmatrix} = \n \\begin{pmatrix}\n -D^{-1}z \\\\\n \\lambda L^TLx\\\\\n \\end{pmatrix},\\label{eq:saddle_point}\n\\end{equation}\nwhere eliminating the auxiliary vector $u = -z - DAs$ recovers the system \\eqref{eq:lin_system} for $s$.\nThe Hermitian and skew-Hermitian splitting (HSS) preconditioner, as well as the constraint preconditioner, are among the best-known preconditioners for this type of linear system. Both were incorporated in GMRES and tested on deblurring problems with random diagonal scaling $D$ in \\cite{Benzi2006Preconditioned}. With a random $D$, they indeed accelerate convergence in our case as well, as shown in Figure \\ref{fig:prec1}. However, our preconditioner \\eqref{eq:prec} provides a much better speedup. 
Moreover, for realistic computations, e.g., when the matrix $D$ is actually generated during the Projected Newton computation, the HSS and constraint preconditioners did not perform well, and even slowed down the convergence; see Figure~\\ref{fig:prec2}. This is fortunately not the case for \nour proposed preconditioner. In this experiment, we did not apply the projection onto the nonnegative orthant. Moreover, since in \\eqref{eq:saddle_point} we need to evaluate $D^{-1}$, whenever some component $D_{ii}=0$, we replaced it by $2\\sqrt{\\epsilon_\\text{mach}}$; see also \\cite{Mastronardi2008Fast}.\nWe also did not incorporate any outliers in these initial experiments with the preconditioners; \nthese results are intended to show that \nour proposed preconditioning for these problems\noften performs much better than the well-known standard preconditioners.\nIn fact, we see that the behavior of the constraint and HSS preconditioners depends heavily on the actual setting of the problem. In the remainder of this section we therefore focus on the preconditioner given in \\eqref{eq:prec}.\n\\begin{figure}[!ht]\n \\centering\n \\begin{subfigure}[b]{0.85\\textwidth}\n \t\\includegraphics[width=.47\\textwidth]{preconditioners_unconstrained_CG_random} \n \t\\hspace*{.1cm}\n \t\\includegraphics[width=.47\\textwidth]{preconditioners_unconstrained_GMRES_random} \n \t\\caption{Satellite single-frame, random diagonal}\n \t\\end{subfigure}\n \t\\caption{Performance of the preconditioner defined in \\eqref{eq:prec}, the constraint preconditioner (CP), and the Hermitian and skew-Hermitian splitting preconditioner (HSSP) for $(A^TDA + \\lambda L^TL)s = -A^Tb$, where $A$ and $b$ are taken from the test problem Satellite, and $D$ is a diagonal matrix with random entries uniformly distributed in $(0,1)$. 
\n}\\label{fig:prec1}\n\\end{figure}\n\\begin{figure}[!ht]\n \\centering\n \\begin{subfigure}[b]{0.85\\textwidth}\n \t\\includegraphics[width=.47\\textwidth]{preconditioners_unconstrained_CG_real} \n \t\\hspace*{.1cm}\n \t\\includegraphics[width=.47\\textwidth]{preconditioners_unconstrained_GMRES_real} \n \t\\caption{Satellite single-frame, Newton it = 3}\n \t\\end{subfigure}\n \t\\caption{Performance of the preconditioner defined in \\eqref{eq:prec}, the constraint preconditioner (CP), and the Hermitian\/skew-Hermitian splitting preconditioner (HSSP) for $(A^TD^{(k)}A + \\lambda L^TL)s = -\\left(A^Tz^{(k)} + \\lambda L^TLx^{(k)}\\right)$. \n}\\label{fig:prec2} \n\\end{figure}\n\nIn Figure \\ref{fig:prec_levels}, we investigate the overall speedup of the convergence by plotting the number of projected PCG steps needed in each Newton iteration to reach the desired tolerance on the relative size of the projected gradient. Even for the most generous tolerance $10^{-1}$, the preconditioner \\eqref{eq:prec} significantly reduces the number of projPCG iterations. Note that in this experiment, the linear subproblems solved in each Newton iteration are generally not identical, since the subproblems are not solved exactly and therefore the approximations $x^{(k)}$ are not the same. We set the outer tolerance to $0$ in order to always perform at least $15$ Newton iterations. 
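As a minimal illustration of how preconditioning changes the iteration counts discussed here, the following sketch runs plain and preconditioned CG on a small dense stand-in for the SPD system $(A^TDA + \\lambda L^TL)s = \\text{rhs}$. The Jacobi (diagonal) preconditioner and all problem data below are illustrative assumptions only; they are not the preconditioner \\eqref{eq:prec} nor the deblurring operators used in our experiments, and the projection step of projPCG is omitted.

```python
import numpy as np

def pcg(matvec, b, apply_prec, tol=1e-6, maxit=500):
    """Preconditioned CG for S s = b, with S symmetric positive definite.
    matvec(v) returns S @ v; apply_prec(r) returns M^{-1} r (identity -> plain CG).
    Stops on the relative residual, mirroring the projPCG stopping rule."""
    s = np.zeros_like(b)
    r = b - matvec(s)
    z = apply_prec(r)
    p = z.copy()
    rz = r @ z
    nb = np.linalg.norm(b)
    for k in range(1, maxit + 1):
        Sp = matvec(p)
        alpha = rz / (p @ Sp)
        s += alpha * p
        r -= alpha * Sp
        if np.linalg.norm(r) <= tol * nb:
            return s, k
        z = apply_prec(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return s, maxit

# Small stand-in for (A^T D A + lambda L^T L) s = rhs with a random diagonal D,
# in the spirit of the experiment of Figure 1 (sizes and data are illustrative).
rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n))
D = np.diag(rng.uniform(0.0, 1.0, n))
lam = 1e-2
S = A.T @ D @ A + lam * np.eye(n)          # L = I for simplicity
rhs = rng.standard_normal(n)

s_plain, it_plain = pcg(lambda v: S @ v, rhs, lambda r: r)
d = np.diag(S)                              # Jacobi preconditioner (illustrative)
s_prec, it_prec = pcg(lambda v: S @ v, rhs, lambda r: r / d)
```

In the algorithm of the paper, the iterates are additionally restricted to the free variables of the nonnegativity constraint, and the preconditioner is applied via fast Fourier transforms rather than a stored dense matrix.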
\n\\begin{figure}[!ht]\n \\centering\n \\begin{subfigure}[b]{\\textwidth}\n \\includegraphics[width=.28\\textwidth]{precond_levels_SatelliteSingle_5_1} \n \\quad\n\t\\includegraphics[width=.28\\textwidth]{precond_levels_SatelliteSingle_5_2} \n \\quad\n\t\\includegraphics[width=.28\\textwidth]{precond_levels_SatelliteSingle_5_3} \n \t\\caption{Satellite single-frame}\\end{subfigure}\n \n \n \\vspace*{.3cm} \t\n \n \\begin{subfigure}[b]{\\textwidth}\n \\includegraphics[width=.28\\textwidth]{precond_levels_SatelliteMulti_5_1} \n \\quad\n\t\\includegraphics[width=.28\\textwidth]{precond_levels_SatelliteMulti_5_2}\n \\quad\n\t\\includegraphics[width=.28\\textwidth]{precond_levels_SatelliteMulti_5_3} \n \t\\caption{Satellite multi-frame}\\end{subfigure}\n \n \n \\vspace*{.3cm} \t\n \n \\begin{subfigure}[b]{\\textwidth}\n \\includegraphics[width=.28\\textwidth]{precond_levels_CarbonAshSingle_5_1} \n \\quad\n\t\\includegraphics[width=.28\\textwidth]{precond_levels_CarbonAshSingle_5_2} \n \\quad\n\t\\includegraphics[width=.28\\textwidth]{precond_levels_CarbonAshSingle_5_3} \n \t\\caption{Carbon Ash single-frame}\\end{subfigure}\n \n \\vspace*{.3cm} \t\n \n \\begin{subfigure}[b]{\\textwidth}\n \\includegraphics[width=.28\\textwidth]{precond_levels_CarbonAshMulti_5_1} \n \\quad\n\t\\includegraphics[width=.28\\textwidth]{precond_levels_CarbonAshMulti_5_2} \n \\quad\n\t\\includegraphics[width=.28\\textwidth]{precond_levels_CarbonAshMulti_5_3} \n \t\\caption{Carbon Ash multi-frame}\\end{subfigure}\n \n \n \t\\caption{The effect of preconditioning by preconditioner defined in \\eqref{eq:prec}: number of projPCG iterations performed in each Newton iteration to achieve the desired tolerance. 
5 \\% outliers.\n} \\label{fig:prec_levels}\n\\end{figure}\nThe choice of the projPCG tolerance is a difficult question, but from the average numbers of Newton iterations\/projPCG iterations\/fast Fourier transforms shown in Table \\ref{tab:numit}, we observe that raising the tolerance does not considerably increase the number of Newton steps we need to perform here. Therefore a larger tolerance, here $10^{-1}$, leads to a smaller total number of projPCG iterations. This is independent of the percentage of outliers. For each setting, the number of projPCG iterations is significantly smaller for the preconditioned version. This is not always the case for the total count of fast Fourier transforms, since we need to perform 6 \\texttt{fft2}\/\\texttt{ifft2} calls in each iteration vs. 4 for the unpreconditioned iterations; see Table \\ref{tab:op_counts}. For large-scale problems, however, the computational complexity of the fast Fourier transform, which is $\\mathcal{O}(n\\log n)$, is comparable to that of the other operations performed in projPCG, such as the inner products, whose complexity is $\\mathcal{O}(n)$, and therefore the number of projPCG iterations seems to be the more important indicator of the efficiency of the preconditioner. Recall here that\n$n$ is the number of pixels in the image, so if we have a $256 \\times 256$ array of pixels, then $n = 65536$.\n\\begin{table}[!ht]\n\\centering\n\\caption{Average number of Newton iterations, projPCG iterations, and (inverse) 2D Fourier transforms for projPCG with and without preconditioning, and two tolerances on the relative size of the projPCG residual. Results are averaged over 10 independent realizations of noise and outliers. 
\n}\\label{tab:numit}\n\\begin{subtable}[h]{\\textwidth}\n\\subcaption{projPCG tol = $10^{-1}$}\n\\centering\n{\\small\\begin{tabular}{lcccc} \\toprule \n \\multicolumn{5}{c}{average count: Newton\/CG\/\\texttt{fft2}s } \\\\ \n& & \\multicolumn{3}{c}{\\% outliers} \\\\ \n\\cmidrule(r){3-5}\n\\multicolumn{1}{c}{problem} & precond & \\multicolumn{1}{c}{0\\%} & \\multicolumn{1}{c}{2\\%} & \\multicolumn{1}{c}{10\\%} \\\\ \\midrule\n\\multirow{2}{*}{Satellite single-frame}& no& 14\/290\/1383& 14\/274\/1329& 14\/283\/1374\\\\ \n & yes& 15\/161\/1280& 16\/172\/1362& 14\/158\/1252\\\\ \n\\addlinespace[.2cm]\n\\multirow{2}{*}{Satellite multi-frame}& no& 12\/250\/2398& 12\/216\/2104& 13\/241\/2364\\\\ \n& yes& 12\/107\/1545& 11\/107\/1535& 12\/103\/1507\\\\ \n\\addlinespace[.2cm]\n\\multirow{2}{*}{Carbon ash single-frame}& no& 11\/190\/939& 10\/179\/891& 11\/184\/915\\\\ \n& yes& 10\/71\/641& 11\/72\/654& 13\/82\/753\\\\ \n\\addlinespace[.2cm]\n\\multirow{2}{*}{Carbon ash multi-frame}& no& 14\/221\/2200& 14\/219\/2179& 16\/254\/2542\\\\ \n& yes& 16\/88\/1510& 15\/85\/1419& 17\/99\/1654\\\\ \n\\bottomrule\n\\end{tabular}}\n\\end{subtable}\n\n\\vspace*{.5cm}\n\n\\begin{subtable}[h]{1\\textwidth}\n\\subcaption{projPCG tol = $10^{-2}$}\n\\centering\n{\\small\\begin{tabular}{lcccc} \\toprule \n \\multicolumn{5}{c}{average count: Newton\/CG\/\\texttt{fft2}s } \\\\ \t\n& & \\multicolumn{3}{c}{\\% outliers} \\\\ \n\\cmidrule(r){3-5}\n\\multicolumn{1}{c}{problem} & precond & \\multicolumn{1}{c}{0\\%} & \\multicolumn{1}{c}{2\\%} & \\multicolumn{1}{c}{10\\%} \\\\ \\midrule\n\\multirow{2}{*}{Satellite single-frame}& no& 13\/536\/2359& 13\/539\/2373& 14\/641\/2819\\\\ \n& yes& 14\/284\/2001& 15\/302\/2117& 14\/296\/2091\\\\ \n\\addlinespace[.2cm] \n\\multirow{2}{*}{Satellite multi-frame}& no& 11\/457\/4082& 11\/460\/4130& 12\/499\/4511\\\\ \n& yes& 11\/201\/2536& 11\/197\/2499& 12\/213\/2747\\\\ \n\\addlinespace[.2cm]\n\\multirow{2}{*}{Carbon ash single-frame}& no& 11\/393\/1754& 
11\/432\/1912& 13\/526\/2315\\\\ \n& yes& 10\/121\/934& 11\/129\/1004& 13\/155\/1201\\\\ \n\\addlinespace[.2cm]\n\\multirow{2}{*}{Carbon ash multi-frame}& no& 13\/426\/3813& 14\/461\/4127& 16\/498\/4467\\\\ \n& yes& 13\/144\/1954& 14\/150\/2038& 16\/173\/2359\\\\ \n\\bottomrule\n\\end{tabular}}\n\\end{subtable}\n\n\\end{table} \n\n\\section{Conclusion}\nWe have presented an efficient approach to compute approximate\nsolutions of a linear inverse problem that is contaminated with mixed Poisson--Gaussian noise and \nwhose measured data contain outliers. \nWe investigated the convexity properties of various robust regression functions and\nfound that the Talwar function was the best option. We proposed a preconditioner,\nand illustrated that it was more effective than other\nstandard preconditioning approaches on the types of problems studied in this paper. Moreover, we showed that a variant of the GCV method\ncan perform well in estimating regularization parameters in robust regression.\nA detailed discussion of the computational\ncosts and extensive numerical experiments illustrate that the approach proposed in this paper is effective and efficient on\nimage deblurring problems.\n\n\\section{Acknowledgment}\nThe authors would like to thank Lars Ruthotto for pointing them to the Projected PCG algorithm and providing them with a Matlab code. The first author would like to thank Emory University for the hospitality offered in academic year 2014-2015, when part of this work was completed.\n\n\n\\FloatBarrier\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe phenomenon of influence propagation through social networks has attracted a great body of research. A key function of an online social network (OSN), besides sharing, is that it enables users to express their personal opinions about a product or a news trend by means of posts, likes\/dislikes, etc. 
Such opinions are propagated to other users and might have a significant influence on them, either positive or negative. \n \nThe real world is full of imprecision and uncertainty, and this fact necessarily impacts OSN data. In fact, social interactions cannot always be precise and certain; moreover, OSNs allow only limited access to their data, which generates further imprecision and uncertainty. If we ignore this imperfection, we may be confronted with erroneous analysis results. In such situations, the theory of belief functions \\cite{Dempster67a,Shafer76} has been widely applied. Furthermore, this theory has been used for analyzing social networks \\cite{Jendoubi14a,Jendoubi2015,Zhou2015}. \n\n\nInfluence maximization (IM) is the problem of finding a set of $k$ seed nodes that are able to influence the maximum number of nodes in the social network. In the literature, we find many solutions for the IM problem. \\textit{Kempe et al.} \\cite{Kempe03} propose two propagation simulation models, namely the \\textit{Linear Threshold Model (LTM)} and the \\textit{Independent Cascade Model (ICM)}. Besides, the credit distribution (CD) model \\cite{Goyal12} is a data-based approach that investigates past propagation to detect influencers. However, these solutions do not consider the user's opinion. Zhang et al. \\cite{Zhang2013} propose an opinion-based cascading model that considers the user's opinion about the product. However, their work does not use real-world data to estimate the user's opinion and influence. \n\nIn this paper, we propose a new data-based model for influence maximization\nin online social networks that seeks to detect influential users\nwho adopt a positive opinion about the product. \nThe proposed model is data based because we use past propagation to estimate the influence, and users' messages to estimate the opinion. Besides, it uses the theory of belief functions in the influence estimation to deal with\ndata imprecision and uncertainty. 
To the best of our knowledge, the proposed model\nis the first evidential data-based model that maximizes the influence\non an OSN, detects influential users having a positive opinion about\nthe product and uses the theory of belief functions to handle the\ndata imperfection.\n\nThe remainder of this paper is organized as follows: section 2 introduces the proposed model\nfor maximizing the positive opinion influence, section 3\nshows the performance of our model through some relevant\nexperiments. Finally, the paper is concluded in section 4.\n\n\\vspace{-0.05cm}\n\n\\section{Maximizing positive opinion influence}\n\nIn this section, we present our positive opinion influence measure and the proposed influence maximization\nalgorithm.\n\n\n\\subsection{Influence measure}\n\nWe are given a social network $G=\\left(V,E\\right)$, a frame of discernment expressing opinion $\\Theta=\\left\\{ Pos,\\, Neg,\\, Obj\\right\\} $, $Pos$ for positive, $Neg$ for negative and $Obj$ for objective, a frame of discernment expressing influence and passivity $\\Omega=\\left\\{ I,P\\right\\} $, $I$ for influencer and $P$ for passive user, a probability distribution $\\Pr^{\\Theta}\\left(u\\right)$ defined on $\\Theta$ that expresses the opinion of the user $u\\in V$ about the product, and a basic belief assignment (BBA) function~\\cite{Shafer76}, $m^{\\Omega}\\left(u,v\\right)$, defined on $\\Omega$, that expresses the influence that the user $u$ exerts on the user $v$. The first step of the influence maximization process is to measure the influence of each user in the network. To this end, we propose an influence measure that estimates the positive influence of each user in the network. \n\nThe mass value $m^{\\Omega}\\left(u,v\\right)\\left(I\\right)$ measures the influence of $u$ on $v$ but without considering the opinion of $u$ about the product. 
We define the positive opinion influence of $u$ on $v$ as the positive proportion of $m^{\\Omega}\\left(u,v\\right)\\left(I\\right)$, and we measure this proportion as: \n\\begin{equation}\nInf_{Pos}\\left(u,v\\right)=\\textrm{Pr}^{\\Theta}\\left(u\\right)\\left(Pos\\right)\\cdot m^{\\Omega}\\left(u,v\\right)\\left(I\\right)\n\\end{equation}\n\nNext, we define the amount of influence given to a set of nodes $S\\subseteq V$\nfor influencing a user $v\\in V$. We estimate the influence of $S$\non a user $v$ as follows:\n\n\\begin{equation}\nInf_{Pos}\\left(S,v\\right)=\\begin{cases}\n1 & \\textrm{if} \\, v\\in S \\\\\n{\\displaystyle \\sum_{u\\in S}\\sum_{x\\in D_{IN}\\left(v\\right)\\cup\\left\\{ v\\right\\} }Inf_{Pos}\\left(u,x\\right)\\cdot Inf_{Pos}\\left(x,v\\right)} & \\textrm{otherwise}\n\\end{cases}\n\\end{equation}\nsuch that $Inf_{Pos}\\left(v,v\\right)=1$ and $D_{IN}\\left(v\\right)$ is\nthe set of in-neighbors of $v$. Finally, we define the influence\nspread $\\sigma\\left(S\\right)$ under the evidential model as the total\ninfluence given to $S\\subseteq V$ from all nodes in the social network,\nthat is, $\\sigma\\left(S\\right)=\\sum_{v\\in V}Inf_{Pos}\\left(S,v\\right)$. In the\nspirit of the IM problem, as defined by \\textit{Kempe et al.} \\cite{Kempe03}, $\\sigma\\left(S\\right)$\nis the objective function to be maximized.\n\n\n\\subsection{Influence maximization}\n\nIn this section, we present the evidential positive opinion influence\nmaximization model. Its purpose is to find a set of nodes $S$ that\nmaximizes the objective function $\\sigma\\left(S\\right)$. Given a\ndirected social network $G=\\left(V,\\, E\\right)$ and an integer $k\\leq |V|$, the goal is to find a set of users $S\\subseteq V,$ $|S|=k$,\nthat maximizes $\\sigma\\left(S\\right)$. We proved that $\\sigma\\left(S\\right)$\nis monotone and submodular, and that influence maximization under\nthe proposed model is NP-Hard. 
However, the page limit\nprevents us from presenting the proofs in detail. \n\nSince influence maximization under the evidential positive opinion influence\nmaximization model is NP-Hard, the greedy algorithm provides a\ngood approximation of the optimal solution, especially when we use\nit with the formula \n\\begin{equation}\n\\label{eq:mg}\n\\sigma\\left(S\\cup\\left\\{ x\\right\\} \\right)-\\sigma\\left(S\\right)=1+\\sum_{v\\in V\\setminus S\\,}\\sum_{a\\in D_{IN}\\left(v\\right)\\cup\\left\\{ v\\right\\} }Inf_{Pos}\\left(x,a\\right)\\cdot Inf_{Pos}\\left(a,v\\right)\n\\end{equation}\nthat computes the marginal gain of a candidate node $x$. We choose\nthe cost-effective lazy-forward algorithm (CELF) \\cite{Leskovec07b},\nwhich is a two-pass modified greedy algorithm. It exploits the submodularity\nproperty of the objective function and it is about 700 times faster\nthan the basic greedy algorithm.\nThe CELF-based evidential influence maximization algorithm starts by estimating the marginal gain of all users in the network and sorting them according to their marginal gain; then it selects the user that has the maximum marginal gain and adds it to the seed set $S$. After that, the algorithm iterates on the following steps until $|S| = k$: 1) choose the next user in the list, 2) update its marginal gain (formula (\\ref{eq:mg})), and 3) if the chosen node keeps its position in the list (it is still the maximum),\nthen add it to $S$.\n\n\n\\section{Experiments}\n\nIn this section, we conduct some experiments on real-world data. We used the library Twitter4j\\footnote{http:\/\/twitter4j.org\/en\/index.html}, which is a Java implementation of the Twitter API, to collect Twitter data. We crawled the Twitter network for the period between 08\/09\/2014 and 03\/11\/2014, and we filtered our data by keeping only tweets that talk about smartphones and users that have at least one tweet in the database. 
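Before turning to the results, the CELF-based selection of the previous section can be sketched as follows. The influence values and the tiny four-node network below are illustrative assumptions, not the Twitter data of our experiments; `sigma` implements the spread $\\sigma(S)=\\sum_{v\\in V}Inf_{Pos}(S,v)$, and the marginal gains are handled lazily as in CELF.

```python
import heapq

def inf_pos(inf, u, x):
    # Positive-opinion influence Inf_Pos(u, x), with the convention Inf_Pos(v, v) = 1.
    return 1.0 if u == x else inf.get((u, x), 0.0)

def sigma(S, V, inf, in_nbrs):
    # Influence spread: sigma(S) = sum over v in V of Inf_Pos(S, v).
    total = 0.0
    for v in V:
        if v in S:
            total += 1.0
        else:
            total += sum(inf_pos(inf, u, x) * inf_pos(inf, x, v)
                         for u in S
                         for x in in_nbrs.get(v, set()) | {v})
    return total

def celf(V, inf, in_nbrs, k):
    # CELF lazy greedy: re-evaluate a candidate's gain only when it tops the heap.
    S, sigma_S = set(), 0.0
    heap = [(-sigma({v}, V, inf, in_nbrs), v, 0) for v in V]
    heapq.heapify(heap)
    n_selected = 0
    while len(S) < k and heap:
        neg_gain, v, computed_at = heapq.heappop(heap)
        if computed_at == n_selected:      # gain is up to date w.r.t. current S
            S.add(v)
            sigma_S += -neg_gain
            n_selected += 1
        else:                              # stale: recompute the gain, push back
            gain = sigma(S | {v}, V, inf, in_nbrs) - sigma_S
            heapq.heappush(heap, (-gain, v, n_selected))
    return S

# Illustrative four-node directed network with positive-opinion influences.
V = {1, 2, 3, 4}
inf = {(1, 2): 0.8, (1, 3): 0.6, (2, 3): 0.5, (3, 4): 0.7}
in_nbrs = {2: {1}, 3: {1, 2}, 4: {3}}
seeds = celf(V, inf, in_nbrs, k=2)
```

On this toy instance the lazy evaluation already pays off: after the first seed is chosen, only the candidates that reach the top of the heap have their marginal gain recomputed.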
To estimate the opinion polarity of each tweet in our data set, we first used the Java library ``Stanford POS Tagger''\\footnote{http:\/\/nlp.stanford.edu\/software\/tagger.shtml} with the model ``GATE Twitter part-of-speech tagger''\\footnote{https:\/\/gate.ac.uk\/wiki\/twitter-postagger.html}, which was designed for tweets. This step assigns a tag (verb, noun, etc.) to each word in the tweet. Afterwards, we estimated the opinion polarity of each tweet using the SentiWordNet 3.0\\footnote{http:\/\/sentiwordnet.isti.cnr.it\/} dictionary and the tags from the first step. We estimated $m^{\\Omega}\\left(u,v\\right)$ using the network structure and past propagation between $u$ and $v$. First, we calculated the number of common neighbors between $u$ and $v$, the number of tweets where $u$ mentions $v$ and the number of tweets where $v$ retweets from $u$. Then we used the process defined by \\textit{Wei et al.} \\cite{Wei13} to estimate a BBA for each defined variable. Finally, we combined the resulting BBAs to obtain $m^{\\Omega}\\left(u,v\\right)$. In this section, we call belief model our model in which we use $Inf\\left(u,v\\right)=m^{\\Omega}\\left(u,v\\right)\\left(I\\right)$ as the influence measure, CD model the credit distribution model, and opinion model the proposed positive opinion based model.\n\nThe goal of the first experiment is to show that the proposed model\ndetects influential spreaders well. To examine the quality of the selected\nseeds, we fixed four comparison criteria: the number of\nfollowers, \\#Follow, the number of tweets, \\#Tweet, and the number of\ntimes the user was mentioned and retweeted, \\#Mention and \\#Retweet.\nIn fact, we assume that an influencer on Twitter is necessarily\nvery active, so he has a lot of tweets, he is followed by many users\nin the network, he is frequently mentioned and his tweets are retweeted several times. 
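The combination step for the three per-link BBAs (common neighbors, mentions, retweets) is not spelled out above; one standard possibility is Dempster's rule of combination on the frame $\\Omega=\\left\\{ I,P\\right\\} $, sketched below with made-up mass values (the rule and the numbers are illustrative assumptions, not the estimates obtained from the Twitter data).

```python
def dempster(m1, m2):
    """Dempster's rule of combination on the frame Omega = {'I', 'P'}.
    BBAs are dicts mapping frozensets (focal elements) to mass values."""
    combined, conflict = {}, 0.0
    for B, mB in m1.items():
        for C, mC in m2.items():
            A = B & C
            if A:                                   # non-empty intersection
                combined[A] = combined.get(A, 0.0) + mB * mC
            else:                                   # conflicting mass
                conflict += mB * mC
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    # Normalize by 1 - K, where K is the total conflicting mass.
    return {A: v / (1.0 - conflict) for A, v in combined.items()}

I, P = frozenset({'I'}), frozenset({'P'})
IP = frozenset({'I', 'P'})

# Illustrative BBAs for a link (u, v), one per structural cue:
m_neighbors = {I: 0.5, P: 0.2, IP: 0.3}   # from common neighbors
m_mentions  = {I: 0.6, P: 0.1, IP: 0.3}   # from mentions of v by u
m_retweets  = {I: 0.4, P: 0.3, IP: 0.3}   # from retweets of u by v

m_uv = dempster(dempster(m_neighbors, m_mentions), m_retweets)
```

The combined mass `m_uv[frozenset({'I'})]` then plays the role of $m^{\\Omega}\\left(u,v\\right)\\left(I\\right)$ in the influence measure.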
In Figure\n(\\ref{fig:Comparison}), we compare the maximization results of the proposed opinion model with the CD model and the belief model according to the fixed criteria. The figure shows the performance\nof the proposed model against the CD model and the belief model. In fact, we see that\nthe proposed opinion model detects influencers that \nhave many followers (more than 8000 for 50 influencers), many tweets (over 250 for 50 users), many mentions (about 1200) and many retweets (about 800).\nHowever, users detected using the belief model satisfy only two criteria, \\textit{i.e.} \\#Follow (over 8000 followers for 50 users) and \\#Tweet (over 150 tweets for 50 users), and the CD model does not satisfy any criterion. This shows that the opinion model is the best in detecting influencers.\n\n\\begin{figure}\n\\begin{centering}\n\\includegraphics[scale=0.45]{IphoneDataBaseFollowCompare.png} \\includegraphics[scale=0.45]{IphoneDataBaseMentionCompare.png}\n\\par\\end{centering}\n\n\\begin{centering}\n\\includegraphics[scale=0.45]{IphoneDataBaseRetweetCompare.png} \\includegraphics[scale=0.45]{IphoneDataBaseTweetCompare.png}\n\\par\\end{centering}\n\n\\caption{Comparison between the opinion model, the belief model and the CD model according to \\#Follow,\n\\#Mention, \\#Retweet and \\#Tweet\\label{fig:Comparison}}\n\\end{figure}\n\n\nIn a second experiment, we calculated the mean positive opinion of the first 100 influencers. The proposed model performed well by selecting influencers that have a positive opinion about the product. In fact, it gives a mean positive opinion equal to 0.89 ($\\pm0.04$, $95\\%$ confidence interval), whereas the belief model gives 0.34 ($\\pm0.05$) and the CD model gives only 0.09 ($\\pm0.04$). 
These results show the strength of the proposed model in selecting influential users that have a positive opinion about the product, in contrast to the belief and CD models, which do not.\n\n\n\\section{Conclusion}\n\nIn this paper, we proposed a new influence measure that estimates the positive opinion\ninfluence of OSN users. We used the theory of belief functions to deal with the problem of data imperfection. In future work, we will seek to improve the proposed influence maximization model by considering other parameters like the user's profile and the propagation time.\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nNearly five decades since the publication in 1974 of Allen Schwenk's article \\cite{Schwenk1974}, the determination\nof the spectrum and characteristic polynomial of the generalized composition $H[G_1, \\dots, G_p]$ (recently\ndesignated the $H$-join of $\\mathcal{G}=\\{G_1, \\dots, G_p\\}$ \\cite{Cardoso_et_al2013}), in terms of the spectra\n(and characteristic polynomials) of the graphs in $\\mathcal{G}$ and an associated matrix, where all graphs\nare undirected, simple and finite, was limited to families of regular graphs. Very recently in\n\\cite{SaravananMuruganArunkumar2020}, as an application of a new generalization of Fiedler's Lemma, the\ncharacteristic polynomial of the universal adjacency matrix of the $H$-join of a family of arbitrary graphs\nis determined in terms of the characteristic polynomial and a related rational function of each component, and\nthe determinant of an associated matrix. In this work, using a distinct approach, the determination of the spectrum\n(and characteristic polynomial) of the $H$-join, in terms of its components and an associated matrix, is extended to\nfamilies of arbitrary graphs (which should be undirected, simple and finite).\\\\\n\nThe generalized composition $H[G_1, \\dots, G_p]$, introduced in \\cite[p. 
167]{Schwenk1974} was rediscovered in\n\\cite{Cardoso_et_al2013} under the designation of the $H$-join of a family of graphs $\\mathcal{G}=\\{G_1, \\dots, G_p\\}$,\nwhere $H$ is a graph of order $p$. In \\cite[Th. 7]{Schwenk1974}, assuming that $G_1, \\dots, G_p$ are all regular graphs\nand taking into account that $\\{V(G_1), \\dots, V(G_p)\\}$ is an equitable partition $\\pi$, the characteristic\npolynomial of $H[G_1, \\dots, G_p]$ is determined in terms of the characteristic polynomials of the graphs\n$G_1, \\dots, G_p$ and the matrix associated to $\\pi$. Using a generalization of a result of Fiedler\n\\cite[Lem. 2.2]{Fiedler1974} obtained in \\cite[Th. 3]{Cardoso_et_al2013}, the spectrum of the $H$-join of a family of\nregular graphs (not necessarily connected) is determined in \\cite[Th. 5]{Cardoso_et_al2013}.\n\nWhen the graphs of the family $\\mathcal{G}$ are all isomorphic to a fixed graph $G$, the $H$-join of $\\mathcal{G}$\nis the same as the lexicographic product (also called the composition) of the graphs $H$ and $G$, which is denoted as\n$H[G]$ (or $H \\circ G$). The lexicographic product of two graphs was introduced by Harary in \\cite{harary1} and\nSabidussi in \\cite{sabidussi} (see also \\cite{harary2, hammack_et_al}). From the definition, it is immediate that\nthis graph operation is associative but not commutative.\n\nIn \\cite{abreu_et_al}, as an application of the $H$-join spectral properties, the lexicographic powers of a graph\n$H$ were considered and their spectra determined when $H$ is regular. The $k$-th lexicographic power of $H$, $H^k$,\nis the lexicographic product of $H$ by itself $k$ times (then $H^2=H[H], H^3=H[H^2]=H^2[H], \\dots$). As an example,\nin \\cite{abreu_et_al}, the spectrum of the $100$-th lexicographic power of the Petersen graph, which has a googol\nnumber (that is, $10^{100}$) of vertices, was determined. 
Using these powers, $H^k$, the\nlexicographic polynomials were introduced in \\cite{Cardoso_et_al2017} and their spectra determined, for connected regular graphs $H$, in terms\nof the spectrum of $H$ and the coefficients of the polynomial.\n\nOther particular $H$-join graph operations appear in the literature under different designations, as is the case\nof the mixed extension of a graph $H$ studied in \\cite{haemers_et_al}, where special attention is given to the mixed\nextensions of $P_3$. The mixed extension of a graph $H$, with vertex set $V(H)=\\{1, \\dots, p\\}$, is the $H$-join of\na family of graphs $\\mathcal{G}=\\{G_1, \\dots, G_p\\}$, where each graph $G_i \\in \\mathcal{G}$ is a complete graph or\nits complement. From the $H$-join spectral properties, we may conclude that the mixed extensions of a graph $H$ of\norder $p$ have at most $p$ eigenvalues unequal to $0$ and $-1$.\\\\\n\nThe remaining part of the paper is organized as follows. The focus of Section~\\ref{sec_2} is the preliminaries,\nnamely, the notation and basic definitions, the main spectral results of the $H$-join graph operation and the more\nrelevant properties, in the context of this work, of the main characteristic polynomial and walk-matrix of a graph.\nIn Section~\\ref{sec_3}, the main result of this article, the determination of the spectrum of the $H$-join\nof a family of arbitrary graphs, is deduced. Section~\\ref{sec_4} includes some final remarks, namely on\nparticular cases of the $H$-join, such as the lexicographic product, and on the determination of the eigenvectors of the\nadjacency matrix of the $H$-join in terms of the eigenvectors of the adjacency matrices of the components and an\nassociated matrix.\n\n\\section{Preliminaries}\\label{sec_2}\n\n\\subsection{Notation and basic definitions}\n\nThroughout the text we consider undirected, simple and finite graphs, which are just called graphs. 
The vertex set\nand the edge set of a graph $G$ are denoted by $V(G)$ and $E(G)$, respectively. The order of $G$ is the cardinality\nof its vertex set and when it is $n$ we consider that $V(G) = \\{1, \\dots, n\\}$. The eigenvalues of the adjacency\nmatrix of a graph $G$, $A(G)$, are also called the eigenvalues of $G$.\nFor each distinct eigenvalue $\\mu$ of $G$, ${\\mathcal E}_G(\\mu)$ denotes the eigenspace of $\\mu$, whose dimension is\nequal to the algebraic multiplicity of $\\mu$, $m(\\mu)$. The spectrum of a graph $G$ of order $n$ is denoted by\n$\\sigma(G)=\\{\\mu^{[m_1]}_1,\\dots,\\mu^{[m_s]}_s,\\mu^{[m_{s+1}]}_{s+1}, \\dots, \\mu_t^{[m_t]}\\}$, where\n$\\mu_1,\\dots,\\mu_s,\\mu_{s+1}, \\dots, \\mu_t$ are the distinct eigenvalues of $G$, $\\mu_i^{[m_i]}$ means that\n$m(\\mu_i)=m_i$ and then $\\sum_{j=1}^{t}{m_j}=n$. When we say that $\\mu$ is an eigenvalue of $G$ with zero multiplicity\n(that is, $m(\\mu)=0$) it means that $\\mu \\not \\in \\sigma(G)$. The distinct eigenvalues of $G$ are indexed in such a way\nthat the eigenspaces ${\\mathcal E}_G(\\mu_i)$, for $1 \\le i \\le s$, are not orthogonal to ${\\bf j}_n$, the all-$1$\nvector with $n$ entries (sometimes we simply write ${\\bf j}$). The eigenvalues $\\mu_i$, with $1 \\le i \\le s$, are\ncalled main eigenvalues of $G$ and the remaining distinct eigenvalues non-main. The concept of main (non-main)\neigenvalue was introduced in \\cite{cvetkov70} and further investigated in several publications. As is well known,\nthe largest eigenvalue of a connected graph $G$ is main and, when $G$ is regular, all its remaining distinct\neigenvalues are non-main \\cite{cds79}. 
A survey on main eigenvalues was published in \\cite{rowmain}.\n\n\\subsection{The $H$-join operation}\n\nNow we recall the definition of the $H$-join of a family of graphs \\cite{Cardoso_et_al2013}.\n\n\\begin{definition}\\label{def_h-join}\nConsider a graph $H$ with vertex set $V(H)=\\{1, \\dots, p\\}$ and a family of graphs\n$\\mathcal{G} = \\{G_1, \\dots, G_p\\}$ such that $|V(G_1)|=n_1, \\dots , |V(G_p)|=n_p$.\nThe $H$-join of $\\mathcal{G}$ is the graph\n$$\nG = \\bigvee_{H}{\\mathcal{G}}\n$$\nin which $V(G) = \\bigcup_{j=1}^{p}{V(G_j)}$ and\n$E(G) = \\left(\\bigcup_{j=1}^{p}{E(G_j)}\\right) \\cup \\left(\\bigcup_{rs \\in E(H)}{E(G_r \\vee G_s)}\\right)$,\nwhere $G_r \\vee G_s$ denotes the join of $G_r$ and $G_s$.\n\\end{definition}\n\n\\begin{theorem}\\label{H-Join_Spectra} \\cite{Cardoso_et_al2013}\nLet $G$ be the $H$-join as in Definition~\\ref{def_h-join}, where $\\mathcal{G}$ is a family of regular graphs\nsuch that $G_1$ is $d_1$-regular, $G_2$ is $d_2$-regular, $\\dots$ and $G_p$ is $d_p$-regular. Then\n\\begin{equation}\\label{h-join-spectra}\n\\sigma(G) = \\left(\\bigcup_{j=1}^{p}{\\left(\\sigma(G_j) \\setminus \\{d_j\\}\\right)}\\right) \\cup \\sigma(\\widetilde{C}),\n\\end{equation}\nwhere the matrix $\\widetilde{C}$ has order $p$ and is such that\n\\begin{equation}\\label{matrix_c}\n\\left(\\widetilde{C}\\right)_{rs} = \\left\\{\\begin{array}{ll}\n d_r & \\hbox{if } r=s,\\\\\n \\sqrt{n_rn_s} & \\hbox{if } rs \\in E(H),\\\\\n 0 & \\hbox{otherwise,} \\\\\n \\end{array}\\right.\n\\end{equation}\nand the set operations in \\eqref{h-join-spectra} are done considering possible repetitions of elements of the multisets.\n\\end{theorem}\n\nFrom the above theorem, if there is $G_i \\in \\mathcal{G}$ which is disconnected, with $q$ components, then its\nregularity $d_i$ appears $q$ times in the multiset $\\sigma(G_i)$. 
Therefore, according to \\eqref{h-join-spectra},\n$d_i$ remains as an eigenvalue of $G$ with multiplicity $q-1$.\n\nFrom now on, given a graph $H$, we consider the following notation:\n$$\n\\delta_{i,j}(H) = \\left\\{\\begin{array}{ll}\n 1 & \\hbox{if } ij \\in E(H), \\\\\n 0 & \\hbox{otherwise.}\n \\end{array}\n \\right.\n$$\n\nBefore the next result, it is worth observing the following. Considering a graph $G$, it is always possible to extend\na basis of the eigensubspace associated to a main eigenvalue $\\mu_j$, ${\\mathcal E}_G(\\mu_j) \\cap {\\bf j}^{\\perp}$,\nto one of ${\\mathcal E}_G(\\mu_j)$ by adding an eigenvector, $\\hat{\\bf u}_{\\mu_j}$, which is orthogonal to\n${\\mathcal E}_G(\\mu_j) \\cap {\\bf j}^{\\perp}$ and uniquely determined, up to multiplication by a\nnonzero scalar. The eigenvector $\\hat{\\bf u}_{\\mu_j}$ is called the main eigenvector of $\\mu_j$. The subspace with\nbasis $\\{\\hat{\\bf u}_{\\mu_1}, \\dots, \\hat{\\bf u}_{\\mu_s}\\}$ is the main subspace of $G$ and is denoted as $Main(G)$.\nNote that for each main eigenvector $\\hat{\\bf u}_{\\mu_j}$ of the basis of $Main(G)$,\n$\\hat{\\bf u}_{\\mu_j}^T{\\bf j} \\ne 0$.\n\n\\begin{lemma}\\label{main_and_nom-main_eigenvalues-join}\nLet $G$ be the $H$-join as in Definition~\\ref{def_h-join} and $\\mu_{i,j} \\in \\sigma(G_i)$. Then\n$\\mu_{i,j} \\in \\sigma(G)$ with multiplicity\n$$\n\\left\\{\\begin{array}{ll}\n m(\\mu_{i,j}) & \\hbox{if $\\mu_{i,j}$ is a non-main eigenvalue of } G_i, \\\\\n m(\\mu_{i,j})-1 & \\hbox{if $\\mu_{i,j}$ is a main eigenvalue of } G_i.\n \\end{array}\\right.\n$$\n\\end{lemma}\n\n\\begin{proof}\nDenoting $\\delta_{i,j} = \\delta_{i,j}(H)$, the matrix $\\delta_{i,j} {\\bf j}_{n_i}{\\bf j}^T_{n_j}$ is an $n_i \\times n_j$\nmatrix whose entries are 1 if $ij \\in E(H)$ and $0$ otherwise. 
Then the adjacency matrix of $G$ has the form\n$$\nA(G) = \\left(\\begin{array}{ccccc}\n A(G_1) & \\delta_{1,2}{\\bf j}_{n_1} {\\bf j}^T_{n_2} & \\cdots & \\delta_{1,p-1}{\\bf j}_{n_1} {\\bf j}^T_{n_{p-1}}&\\delta_{1,p}{\\bf j}_{n_1} {\\bf j}^T_{n_p}\\\\\n \\delta_{2,1}{\\bf j}_{n_2} {\\bf j}^T_{n_1} & A(G_2) & \\cdots & \\delta_{2,p-1}{\\bf j}_{n_2} {\\bf j}^T_{n_{p-1}}&\\delta_{2,p}{\\bf j}_{n_2} {\\bf j}^T_{n_p}\\\\\n \\vdots & \\vdots & \\ddots & \\vdots & \\vdots \\\\\n \\delta_{p-1,1}{\\bf j}_{n_{p-1}}{\\bf j}^T_{n_1} &\\delta_{p-1,2}{\\bf j}_{n_{p-1}} {\\bf j}^T_{n_2} & \\cdots & A(G_{p-1})&\\delta_{p-1,p}{\\bf j}_{n_{p-1}} {\\bf j}^T_{n_p}\\\\\n \\delta_{p,1}{\\bf j}_{n_p}{\\bf j}^T_{n_1} &\\delta_{p,2}{\\bf j}_{n_p} {\\bf j}^T_{n_2}& \\cdots &\\delta_{p,p-1}{\\bf j}_{n_p} {\\bf j}^T_{n_{p-1}} & A(G_p)\\\\\n \\end{array}\\right).\n$$\nLet $\\hat{\\bf u}_{i,j}$ be an eigenvector of $A(G_i)$ associated with an eigenvalue $\\mu_{i,j}$ the sum of whose\ncomponents is zero (so that $\\mu_{i,j}$ is either non-main or main with multiplicity greater than one). 
Then,\n\\begin{equation}\\label{eigenvalue-equation}\nA(G) \\left(\\begin{array}{c}\n 0 \\\\\n \\vdots \\\\\n 0 \\\\\n \\hat{\\bf u}_{i,j} \\\\\n 0 \\\\\n \\vdots \\\\\n 0\n \\end{array}\\right) = \\left(\\begin{array}{c}\n \\delta_{1,i}\\left({\\bf j}^T_{n_i}\\hat{\\bf u}_{i,j}\\right){\\bf j}_{n_1} \\\\\n \\vdots \\\\\n \\delta_{i-1,i}\\left({\\bf j}^T_{n_i}\\hat{\\bf u}_{i,j}\\right){\\bf j}_{n_{i-1}} \\\\\n A(G_i)\\hat{\\bf u}_{i,j} \\\\\n \\delta_{i+1,i}\\left({\\bf j}^T_{n_i}\\hat{\\bf u}_{i,j}\\right){\\bf j}_{n_{i+1}} \\\\\n \\vdots \\\\\n \\delta_{p,i}\\left({\\bf j}^T_{n_i}\\hat{\\bf u}_{i,j}\\right){\\bf j}_{n_p}\n \\end{array}\\right).\n\\end{equation}\nSince ${\\bf j}^T_{n_i}\\hat{\\bf u}_{i,j} = 0$, the right-hand side reduces to the vector whose only nonzero block is\n$A(G_i)\\hat{\\bf u}_{i,j} = \\mu_{i,j}\\hat{\\bf u}_{i,j}$, and so the extended vector is an eigenvector of $A(G)$\nassociated with $\\mu_{i,j}$.\nIt should be noted that when $\\mu_{i,j}$ is main, there are $m(\\mu_{i,j})-1$ linearly independent eigenvectors\nbelonging to ${\\mathcal E}_G(\\mu_{i,j}) \\cap {\\bf j}^{\\perp}$.\n\\end{proof}\n\n\\subsection{The main characteristic polynomial and the walk-matrix}\nIf $G$ has $s$ distinct main eigenvalues $\\mu_1, \\dots, \\mu_s$, then the main characteristic polynomial of $G$\nis the polynomial of degree $s$ \\cite{rowmain}\n\\begin{eqnarray}\nm_G(x) &=& \\prod_{i=1}^{s}{(x-\\mu_i)} \\nonumber\\\\\n &=& x^s - c_{0} - c_{1}x - \\cdots - c_{s-2} x^{s-2} - c_{s-1} x^{s-1}. \\label{mcpG}\n\\end{eqnarray}\nAs noted in \\cite{rowmain} (see also \\cite{CvetkovicPetric84}), if $\\mu$ is a main eigenvalue of $G$, so is its\nalgebraic conjugate $\\mu^*$, and hence the coefficients of $m_G(x)$ are integers. Furthermore, it is worth recalling\nthe next result, which follows from \\cite[Th. 2.5]{Teranishi2001} (see also \\cite{rowmain}).\n\n\\begin{theorem}\\cite[Prop. 2.1]{rowmain}\\label{minimal_p}\nFor every polynomial $f(x) \\in \\mathbb{Q}[x]$, $f(A(G)){\\bf j}={\\bf 0}$ if and only if $m_G(x)$ divides $f(x)$.\n\\end{theorem}\n\nIn particular, it is immediate that $m_G(A(G)){\\bf j}={\\bf 0}$. 
Therefore,\n\\begin{equation}\\label{main_polynomial}\nA^s(G){\\bf j} = c_{0}{\\bf j} + c_{1}A(G){\\bf j} + \\cdots + c_{s-2} A^{s-2}(G){\\bf j} + c_{s-1} A^{s-1}(G){\\bf j}.\n\\end{equation}\n\nGiven a graph $G$ of order $n$, let us consider the $n \\times k$ matrix \\cite{harscwk2, posu}\n$$\n{\\bf W}_{G;k} = \\left({\\bf j}, A(G) {\\bf j}, A^{2}(G){\\bf j}, \\ldots, A^{k-1}(G){\\bf j} \\right ).\n$$\nThe vector space spanned by the columns of ${\\bf W}_{G;k}$ is denoted as $ColSp{\\bf W}_{G;k}$. The matrix\n${\\bf W}_{G;k}$ with the largest integer $k$ such that the dimension of $ColSp{\\bf W}_{G;k}$ is equal to $k$, that is,\nsuch that its columns are linearly independent, is called the walk-matrix of $G$ and is simply denoted\n${\\bf W}_G$. Then, as a consequence of Theorem~\\ref{minimal_p} and equality \\eqref{main_polynomial}, we obtain the\nfollowing theorem, which appears in \\cite{hagos1}.\n\n\\begin{theorem}\\cite[Th. 2.1]{hagos1} \\label{hagos}\nThe rank of ${\\bf W}_G$ is equal to the number of main eigenvalues of the graph $G$.\n\\end{theorem}\n\nFrom Theorem~\\ref{hagos}, we may conclude that the number of distinct main eigenvalues is\n$s = \\max \\{k: \\{{\\bf j}, A(G){\\bf j}, A^2(G){\\bf j}, \\ldots, A^{k-1}(G){\\bf j}\\} \\text{ is linearly\nindependent}\\}.$ \\\\\n\nThe equality \\eqref{main_polynomial} also implies the next corollary.\n\n\\begin{corollary} \\label{ap}\nThe $s$-th column of $A(G){\\bf W}_G$ is $ A^{s}(G){\\bf j} = {\\bf W}_G\\left(\\begin{array}{c}\n c_{0} \\\\\n \\vdots \\\\\n c_{s-2} \\\\\n c_{s-1} \\\\\n \\end{array} \\right ),$\nwhere $c_j$, for $0 \\le j \\le s-1$, are the coefficients of the main characteristic polynomial $m_G$,\ngiven in \\eqref{mcpG}.\n\\end{corollary}\n\nThis corollary allows the determination of the coefficients of the main characteristic polynomial, $m_G$, by solving\nthe linear system ${\\bf W_G \\hat{x}} = A^{s}(G){\\bf j}$.\\\\\n\nFrom \\cite[Th. 
2.4]{rowmain} we may conclude the following theorem.\n\n\\begin{theorem}\\label{colspw}\nLet $G$ be a graph with adjacency matrix $A(G)$. Then $ColSp{\\bf W_G}$ coincides with $Main(G)$. Moreover $Main(G)$\nand the vector space spanned by the vectors orthogonal to $Main(G)$, $\\left(Main(G)\\right)^{\\perp}$, are both\n$A(G)$--invariant.\n\\end{theorem}\n\nFrom the above definitions, if $G$ is a $r$-regular graph of order $n$, since its largest eigenvalue, $r$,\nis the unique main eigenvalue, then $m_G(x) = x - r$ and $W_G = \\left( {\\bf j}_n \\right)$.\n\n\\section{The spectrum of the $H$-join of a family of arbitrary graphs}\\label{sec_3}\n\nBefore the main result of this article, we need to define a special matrix ${\\bf \\widetilde{W}}$ which will be called\nthe $H$-join associated matrix.\n\n\\begin{definition}\\label{main_def}\nLet $G$ be the $H$-join as in Definition~\\ref{def_h-join} and denote $\\delta_{i,j}=\\delta_{i,j}(H)$. For each\n$G_i \\in \\mathcal{G}$, consider the main characteristic polynomial \\eqref{mcpG},\n$m_{G_i}(x)=x^{s_i} - c_{i,0} - c_{i,1}x - \\cdots - c_{i,s_i-1} x^{s_i-1}$ and its walk-matrix ${\\bf W}_{G_i}$.\nThe $H$-join associated matrix is the $s \\times s$ matrix, with $s= \\sum_{i=1}^{p}{s_i}$,\n$$\n\\widetilde{\\bf W} = \\left(\\begin{array}{ccccc}\n {\\bf C}(m_{G_1}) & \\delta_{1,2}{\\bf M}_{1,2}& \\dots & \\delta_{1,p-1}{\\bf M}_{1,p-1}& \\delta_{1,p}{\\bf M}_{1,p} \\\\\n \\delta_{2,1}{\\bf M}_{2,1}& {\\bf C}(m_{G_2}) & \\dots & \\delta_{2,p-1}{\\bf M}_{2,p-1}& \\delta_{2,p}{\\bf M}_{2,p} \\\\\n \\vdots & \\vdots &\\ddots & \\vdots & \\vdots \\\\\n \\delta_{p,1}{\\bf M}_{p,1}& \\delta_{p,2}{\\bf M}_{p,2}& \\dots & \\delta_{p,p-1}M_{p,p-1}& {\\bf C}(m_{G_p})\n \\end{array}\\right), \\text{ where}\n$$\n${\\bf C}(m_{G_i}) = \\left(\\begin{array}{ccccc}\n 0 & 0 &\\dots & 0 & c_{i,0} \\\\\n 1 & 0 &\\dots & 0 & c_{i,1} \\\\\n 0 & 1 &\\dots & 0 & c_{i,2} \\\\\n \\vdots &\\vdots &\\ddots&\\vdots &\\vdots\\\\\n 0 & 0 &\\dots & 1 
&c_{i,s_i-1} \\\\\n \\end{array}\\right)$ and ${\\bf M}_{i,j}=\\left(\\begin{array}{c}\n {\\bf j}^T_{n_j}{\\bf W}_{G_j} \\\\\n 0 \\; \\dots \\; 0 \\\\\n \\vdots \\; \\ddots\\; \\vdots\\\\\n 0 \\; \\dots \\; 0 \\\\\n \\end{array}\\right)$, for $1 \\le i,j \\le p$.\n\\end{definition}\n\nNote that ${\\bf C}(m_{G_i})$ is the Frobenius companion matrix of the main characteristic polynomial $m_{G_i}$\nand ${\\bf M}_{i,j}$ is an $s_i \\times s_j$ matrix whose first row is\n${\\bf j}_{n_j}^T{\\bf W}_{G_j} = (N_0^j, N_1^j, \\dots, N_{s_j-1}^j)$, where $N_k^j$ is the number of walks of length $k$ in\n$G_j$, for $0 \\le k \\le s_j-1$ (with $N^j_0=n_j$), and whose remaining rows have all entries equal to zero.\n\n\\begin{theorem}\\label{main_theorem}\nLet $G$ be the $H$-join as in Definition~\\ref{def_h-join}, where $\\mathcal{G}$ is a family of arbitrary graphs.\nIf for each graph $G_i$, with $1 \\le i \\le p$,\n\\begin{equation}\\label{spectrum_Gi}\n\\sigma(G_i)=\\{\\mu_{i,1}^{[m_{i,1}]}, \\dots, \\mu_{i,s_i}^{[m_{i,s_i}]}, \\mu_{i,s_i+1}^{[m_{i,s_i+1}]}, \\dots, \\mu_{i,t_i}^{[m_{i,t_i}]}\\},\n\\end{equation}\nwhere $m_{i,j}=m(\\mu_{i,j})$ and $\\mu_{i,1}, \\dots, \\mu_{i,s_i}$ are the distinct main eigenvalues of $G_i$, then\n\\begin{eqnarray}\n\\sigma(G) &=& \\bigcup_{i=1}^{p}{\\{\\mu_{i,1}^{[m_{i,1}-1]}, \\dots, \\mu_{i,s_i}^{[m_{i,s_i}-1]}\\}} \\cup\n \\bigcup_{i=1}^{p}{\\{\\mu_{i,s_i+1}^{[m_{i,s_i+1}]}, \\dots, \\mu_{i,t_i}^{[m_{i,t_i}]}\\}} \\cup\n \\sigma({\\bf \\widetilde{W}}), \\label{G_spectrum}\n\\end{eqnarray}\nwhere the union of multisets is considered with possible repetitions.\n\\end{theorem}\n\n\\begin{proof}\nFrom Lemma~\\ref{main_and_nom-main_eigenvalues-join} it is immediate that\n$$\n\\bigcup_{i=1}^{p}{\\{\\mu_{i,1}^{[m_{i,1}-1]}, \\dots, \\mu_{i,s_i}^{[m_{i,s_i}-1]}\\}} \\cup\n \\bigcup_{i=1}^{p}{\\{\\mu_{i,s_i+1}^{[m_{i,s_i+1}]}, \\dots, \\mu_{i,t_i}^{[m_{i,t_i}]}\\}} \\subseteq \\sigma(G).\n$$\nSo it just remains to prove that 
$\\sigma(\\widetilde{{\\bf W}}) \\subseteq \\sigma(G)$.\\\\\n\nLet us define the vector\n\\begin{eqnarray}\n\\hat{\\bf v} &=& \\left(\\begin{array}{c}\n \\hat{\\bf v}_{1} \\\\\n \\vdots \\\\\n \\hat{\\bf v}_{p} \\\\\n \\end{array}\\right), \\text{ such that } \\label{vector_v}\\\\\n\\hat{\\bf v}_{i} &=& \\sum_{k=0}^{s_i-1}{\\alpha_{i,k}A^k(G_i){\\bf j}_{n_i}}\n = {\\bf W}_{G_i}\\hat{\\mathbf{\\alpha}}_{i},\\label{main_vector_Gi}\n\\end{eqnarray}\nwhere $\\hat{\\mathbf{\\alpha}}_{i} = \\left(\\begin{array}{c}\n \\alpha_{i,0} \\\\\n \\alpha_{i,1} \\\\\n \\vdots \\\\\n \\alpha_{i,s_i-1} \\\\\n \\end{array} \\right ), \\text{ for } 1 \\le i \\le p.$\nFrom \\eqref{main_vector_Gi}, each $\\hat{\\bf v}_{i} \\in Main(G_i)$ and then all vectors $\\hat{\\bf v}$ defined in\n\\eqref{vector_v} are orthogonal to the eigenvectors of $A(G)$ in \\eqref{eigenvalue-equation}. Moreover,\n\n\\begin{equation}\\label{main_subspace}\nA(G_i)\\hat{\\bf v}_{i} = A(G_i){\\bf W}_{G_i}\\hat{\\mathbf{\\alpha}}_{i} = \\sum_{k=0}^{s_i-1}{\\alpha_{i,k}A^{k+1}(G_i){\\bf j}_{n_i}}, \\text{ for } 1 \\le i \\le p.\n\\end{equation}\n\nTherefore,\n\n\\begin{eqnarray}\nA(G)\\hat{\\bf v} &=& \\left(\\begin{array}{cccc}\nA(G_1) & \\delta_{1,2}{\\bf j}_{n_1} {\\bf j}^T_{n_2}& \\cdots & \\delta_{1,p}{\\bf j}_{n_1} {\\bf j}^T_{n_p}\\\\\n\\delta_{2,1}{\\bf j}_{n_2} {\\bf j}^T_{n_1} & A(G_2) & \\cdots & \\delta_{2,p}{\\bf j}_{n_2} {\\bf j}^T_{n_p}\\\\\n \\vdots & \\vdots & \\ddots & \\vdots \\\\\n\\delta_{p,1}{\\bf j}_{n_p}{\\bf j}^T_{n_1} &\\delta_{p,2}{\\bf j}_{n_p} {\\bf j}^T_{n_2} & \\cdots & A(G_p)\\\\\n \\end{array}\\right) \\left(\\begin{array}{c}\n \\hat{\\bf v}_{1}\\\\\n \\hat{\\bf v}_{2}\\\\\n \\vdots \\\\\n \\hat{\\bf v}_{p}\n \\end{array}\\right) \\nonumber \\\\\n&=& \\left(\\begin{array}{c}\n A(G_1)\\hat{\\bf v}_{1} + \\left(\\sum_{k \\in [p] \\setminus \\{1\\}}{\\delta_{1,k}{\\bf j}^T_{n_k}\\hat{\\bf v}_{k}}\\right){\\bf j}_{n_1}\\\\\n A(G_2)\\hat{\\bf v}_{2} + \\left(\\sum_{k \\in [p] \\setminus 
\\{2\\}}{\\delta_{2,k}{\\bf j}^T_{n_k}\\hat{\\bf v}_{k}}\\right){\\bf j}_{n_2}\\\\\n \\vdots \\\\\n A(G_p)\\hat{\\bf v}_{p} + \\left(\\sum_{k \\in [p] \\setminus \\{p\\}}{\\delta_{p,k}{\\bf j}^T_{n_k}\\hat{\\bf v}_{k}}\\right){\\bf j}_{n_p}\n \\end{array}\\right) \\label{walk_1}\\\\\n&=& \\left(\\begin{array}{c}\n A(G_1)\\hat{\\bf v}_{1} + \\left(\\sum_{k \\in [p] \\setminus \\{1\\}}{\\delta_{1,k}{\\bf j}^T_{n_k}{\\bf W}_{G_k}\\hat{\\mathbf{\\alpha}}_{k}}\\right){\\bf j}_{n_1}\\\\\n A(G_2)\\hat{\\bf v}_{2} + \\left(\\sum_{k \\in [p] \\setminus \\{2\\}}{\\delta_{2,k}{\\bf j}^T_{n_k}{\\bf W}_{G_k}\\hat{\\mathbf{\\alpha}}_{k}}\\right){\\bf j}_{n_2}\\\\\n \\vdots \\\\\n A(G_p)\\hat{\\bf v}_{p} + \\left(\\sum_{k \\in [p] \\setminus \\{p\\}}{\\delta_{p,k}{\\bf j}^T_{n_k}{\\bf W}_{G_k}\\hat{\\mathbf{\\alpha}}_{k}}\\right){\\bf j}_{n_p}\n \\end{array}\\right), \\label{walk_2}\n\\end{eqnarray}\nwhere \\eqref{walk_2} is obtained applying \\eqref{main_vector_Gi} in \\eqref{walk_1}. Defining\n\\begin{equation*}\n\\beta_{i,0}=\\sum_{k \\in [p] \\setminus \\{i\\}}{\\delta_{i,k}{\\bf j}^T_{n_k}{\\bf W}_{G_k}\\hat{\\mathbf{\\alpha}}_{k}},\n \\text{ for } 1 \\le i \\le p\n\\end{equation*}\nand taking into account \\eqref{main_subspace}, the $i$-th row of \\eqref{walk_2} can be written as\n{\\small \\begin{eqnarray}\n\\hspace{-0.5cm}\\beta_{i,0}{\\bf j}_{n_i} + A(G_i)\\hat{\\bf v}_{i} &=& \\left(\\underbrace{\\sum_{k \\in [p]\\setminus\\{i\\}}{\\delta_{i,k}{\\bf j}^T_{n_k}{\\bf W}_{G_k}\\hat{\\bf \\alpha}_{k}}}_{\\beta_{i,0}}\\right){\\bf j}_{n_i} + \\sum_{k=0}^{s_i-1}{\\alpha_{i,k}A^{k+1}(G_i){\\bf j}_{n_i}} \\nonumber\\\\\n\\hspace{-0.5cm} &=& \\beta_{i,0}{\\bf j}_{n_i} + \\sum_{k=1}^{s_i-1}{\\alpha_{i,k-1}A^k(G_i){\\bf j}_{n_i}}\n + \\alpha_{i,s_i-1}A^{s_i}(G_i){\\bf j}_{n_i} \\label{ith-row_1}\\\\\n\\hspace{-0.5cm} &=& \\beta_{i,0}{\\bf j}_{n_i} + \\sum_{k=1}^{s_i-1}{\\alpha_{i,k-1}A^k(G_i){\\bf j}_{n_i}}\n + \\alpha_{i,s_i-1}{\\bf W}_{G_i}\\left(\\begin{array}{c}\n c_{i,0} 
\\\\\n c_{i,1} \\\\\n \\vdots \\\\\n c_{i,s_i-1} \\\\\n \\end{array} \\right) \\label{ith-row_2}\n\\end{eqnarray}}\n\\begin{eqnarray}\n\\qquad \\qquad &=& {\\bf W}_{G_i}\\left(\\begin{array}{c}\n \\beta_{i,0} + \\alpha_{i,s_i-1}c_{i,0} \\\\\n \\alpha_{i,0} + \\alpha_{i,s_i-1}c_{i,1}\\\\\n \\vdots \\\\\n \\alpha_{i,s_i-2} + \\alpha_{i,s_i-1}c_{i,s_i-1} \\\\\n \\end{array}\\right). \\label{ith-row_3}\n\\end{eqnarray}\nObserve that \\eqref{ith-row_2} is obtained applying Corollary~\\ref{ap} to \\eqref{ith-row_1}. Taking into account the\ndefinition of $\\beta_{i,0}$, \\eqref{ith-row_3} can be replaced by the expression\n\n{\\tiny\n$$\n\\hspace{-1.5cm}{\\bf W}_{G_i}\\underbrace{\\left(\\begin{array}{ccccccccccc}\n\\overbrace{\\delta_{i,1}{\\bf j}^T_{n_1}{\\bf W}_{G_1}}^{s_1\\text{ columns}}&\\cdots&\\overbrace{\\delta_{i,{i-1}}{\\bf j}^T_{n_{i-1}}{\\bf W}_{G_{i-1}}}^{s_{i-1}\\text{ columns}}& 0 & 0 &\\cdots& 0 & c_{i,0} & \\overbrace{\\delta_{i,{i+1}}{\\bf j}^T_{n_{i+1}}{\\bf W}_{G_{i+1}}}^{s_{i+1}\\text{ columns}} &\\cdots&\\overbrace{\\delta_{i,p}{\\bf j}^T_{n_p}{\\bf W}_{G_p}}^{s_p\\text{ columns}}\\\\\n {\\bf 0} &\\cdots& {\\bf 0}& 1 & 0 &\\cdots& 0 & c_{i,1} &{\\bf 0}&\\cdots& {\\bf 0} \\\\\n {\\bf 0} &\\cdots&{\\bf 0}& 0 & 1 &\\cdots& 0 & c_{i,2} &{\\bf 0}&\\cdots& {\\bf 0} \\\\\n \\vdots &\\ddots&\\vdots& \\vdots &\\vdots&\\ddots&\\vdots& \\vdots &\\vdots&\\ddots& \\vdots \\\\\n {\\bf 0} &\\cdots&{\\bf 0}& 0 & 0 &\\cdots& 1 &c_{i,s_i-1} &{\\bf 0}&\\cdots& {\\bf 0}\n\\end{array}\\right)}_{\\widetilde{\\bf W}_i}\\left(\\begin{array}{c}\n \\hat{\\bf \\alpha}_{1}\\\\\n \\vdots \\\\\n \\hat{\\bf \\alpha}_{i-1}\\\\\n \\hat{\\bf \\alpha}_{i}\\\\\n \\hat{\\bf \\alpha}_{i+1}\\\\\n \\vdots \\\\\n \\hat{\\bf \\alpha}_{p}\n \\end{array}\\right)\n$$}\nwhich is equivalent to the expression\n$$\n{\\bf W}_{G_i}\\underbrace{\\left(\\begin{array}{ccccccc}\n \\delta_{i,1}{\\bf M}_{i,1} & \\dots & \\delta_{i,i-1}{\\bf M}_{i,i-1} & {\\bf C}(m_{G_i}) & \\delta_{i,i+1}{\\bf M}_{i,i+1} 
& \\dots & \\delta_{i,p}{\\bf M}_{i,p}\\\\\n \\end{array}\\right)}_{\\widetilde{\\bf W}_i}\\left(\\begin{array}{c}\n \\hat{\\bf \\alpha}_{1}\\\\\n \\vdots \\\\\n \\hat{\\bf \\alpha}_{i-1}\\\\\n \\hat{\\bf \\alpha}_{i}\\\\\n \\hat{\\bf \\alpha}_{i+1}\\\\\n \\vdots \\\\\n \\hat{\\bf \\alpha}_{p}\n \\end{array}\\right).\n$$\nFrom the above analysis\n\\begin{eqnarray*}\nA(G)\\hat{\\bf v} & = & \\left(\\begin{array}{cccc}\n {\\bf W}_{G_1} & {\\bf 0} & \\cdots & {\\bf 0} \\\\\n {\\bf 0} & {\\bf W}_{G_2} & \\cdots & {\\bf 0} \\\\\n \\vdots & \\vdots & \\ddots & \\vdots \\\\\n {\\bf 0} & {\\bf 0} & \\cdots & {\\bf W}_{G_p}\n \\end{array}\\right) \\left(\\begin{array}{c}\n {\\bf \\widetilde{W}}_1\\\\\n {\\bf \\widetilde{W}}_2\\\\\n \\vdots\\\\\n {\\bf \\widetilde{W}}_p\n \\end{array}\\right)\\left(\\begin{array}{c}\n \\hat{\\bf \\alpha}_{1}\\\\\n \\hat{\\bf \\alpha}_{2}\\\\\n \\vdots \\\\\n \\hat{\\bf \\alpha}_{p}\n \\end{array}\\right)\n\\end{eqnarray*}\nand, according to \\eqref{main_vector_Gi},\n\\begin{eqnarray*}\n\\hat{\\bf v} &=& \\left(\\begin{array}{cccc}\n {\\bf W}_{G_1} & {\\bf 0} & \\cdots & {\\bf 0} \\\\\n {\\bf 0} & {\\bf W}_{G_2} & \\cdots & {\\bf 0} \\\\\n \\vdots & \\vdots & \\ddots & \\vdots \\\\\n {\\bf 0} & {\\bf 0} & \\cdots & {\\bf W}_{G_p}\n \\end{array}\\right)\\left(\\begin{array}{c}\n \\hat{\\bf \\alpha}_{1}\\\\\n \\hat{\\bf \\alpha}_{2}\\\\\n \\vdots \\\\\n \\hat{\\bf \\alpha}_{p}\n \\end{array}\\right). 
\\label{vecto_v}\n\\end{eqnarray*}\nTherefore, $A(G)\\hat{\\bf v} = \\rho \\hat{\\bf v}$ if and only if\n\\begin{eqnarray}\n\\underbrace{\\left(\\begin{array}{cccc}\n {\\bf W}_{G_1} & {\\bf 0} & \\cdots & {\\bf 0} \\\\\n {\\bf 0} & {\\bf W}_{G_2} & \\cdots & {\\bf 0} \\\\\n \\vdots & \\vdots & \\ddots & \\vdots \\\\\n {\\bf 0} & {\\bf 0} & \\cdots & {\\bf W}_{G_p}\n \\end{array}\\right)}_{{\\bf (*)}}\\left(\\left(\\begin{array}{c}\n {\\bf \\widetilde{W}}_1\\\\\n {\\bf \\widetilde{W}}_2\\\\\n \\vdots \\\\\n {\\bf \\widetilde{W}}_p\n \\end{array}\\right) - \\rho I_s\\right)\\left(\\begin{array}{c}\n \\hat{\\bf \\alpha}_{1}\\\\\n \\hat{\\bf \\alpha}_{2}\\\\\n \\vdots \\\\\n \\hat{\\bf \\alpha}_{p}\n \\end{array}\\right) &=& {\\bf 0}. \\label{main_equality}\n\\end{eqnarray}\nIt is immediate that ${\\bf \\widetilde{W}} = \\left(\\begin{array}{c}\n {\\bf \\widetilde{W}}_1\\\\\n {\\bf \\widetilde{W}}_2\\\\\n \\vdots \\\\\n {\\bf \\widetilde{W}}_p\n \\end{array}\\right)$ and since the columns of each matrix\n${\\bf W}_{G_i}$ are linearly independent, the columns of the matrix $(*)$ are also linearly independent. 
Consequently,\n\\eqref{main_equality} is equivalent to\n\\begin{equation}\n\\left({\\bf \\widetilde{W}} - \\rho I_s\\right)\\hat{\\bf \\alpha} = {\\bf 0}, \\label{w_eigenvetor}\n\\end{equation}\nwhere $\\hat{\\bf \\alpha} = \\left(\\begin{array}{c}\n \\hat{\\bf \\alpha}_{1}\\\\\n \\vdots \\\\\n \\hat{\\bf \\alpha}_{p}\n \\end{array}\\right).$\nFinally, we may conclude that $(\\rho,\\hat{\\bf v})$ is an eigenpair of $A(G)$ if and only if $(\\rho, \\hat{\\bf \\alpha})$\nis an eigenpair of the $H$-join associated matrix ${\\bf \\widetilde{W}}$.\n\\end{proof}\n\nBefore the next corollary of Theorem~\\ref{main_theorem}, it is convenient to introduce the notation\n$\\phi(G)$ and $\\phi({\\bf A})$, which, from now on, will be used for the characteristic polynomial of a graph $G$\nand of a matrix ${\\bf A}$, respectively.\n\n\\begin{corollary}\\label{cor_charact_poly}\nLet $G$ be the $H$-join as in Definition~\\ref{def_h-join} with associated matrix ${\\bf \\widetilde{W}}$. Assuming\nthat $\\mathcal{G}=\\{G_1, \\dots, G_p\\}$ is a family of arbitrary graphs for which $\\sigma(G_1), \\dots, \\sigma(G_p)$\nare defined as in \\eqref{spectrum_Gi} and $m_{G_1}, \\dots, m_{G_p}$ are their main characteristic polynomials, then\n$$\n\\phi(G) = \\left(\\prod_{i=1}^{p}{\\frac{\\phi(G_i)}{m_{G_i}}}\\right)\\phi({\\bf \\widetilde{W}}).\n$$\n\\end{corollary}\n\n\\begin{proof}\nSince, for $1 \\le i \\le p$,\n$\\sigma(G_i)=\\{\\mu_{i,1}^{[m_{i,1}]}, \\dots, \\mu_{i,s_i}^{[m_{i,s_i}]}, \\mu_{i,s_i+1}^{[m_{i,s_i+1}]}, \\dots, \\mu_{i,t_i}^{[m_{i,t_i}]}\\}$,\nwhere the first $s_i$ eigenvalues are main, and the roots of $m_{G_i}$ are exactly the distinct main eigenvalues\nof $G_i$, each occurring as a simple root, it is immediate that $\\phi(G_i)=m_{G_i}\\phi'(G_i)$, where the roots of\nthe polynomial $\\phi'(G_i)$ are the eigenvalues of $G_i$,\n$$\n\\{\\mu_{i,1}^{[m_{i,1}-1]}, \\dots, \\mu_{i,s_i}^{[m_{i,s_i}-1]}, \\mu_{i,s_i+1}^{[m_{i,s_i+1}]}, \\dots, \\mu_{i,t_i}^{[m_{i,t_i}]}\\}.\n$$\nTherefore, according to \\eqref{G_spectrum}, the 
roots of $\\prod_{i=1}^{p}{\\frac{\\phi(G_i)}{m_{G_i}}}$ are the\neigenvalues in $\\sigma(G) \\setminus \\sigma({\\bf \\widetilde{W}})$.\n\\end{proof}\n\n\n\\begin{example}\nConsider the graph $H \\cong P_3$, the path with three vertices, and the graphs\n$K_{1,3}$, $K_2$ and $P_3$ depicted in the Figure~\\ref{figura_1}. Then\n$\\sigma(K_{1,3})=\\{\\sqrt{3},-\\sqrt{3},0^{[2]}\\},$ $\\sigma(K_2)=\\{1,-1\\}$,\n$\\sigma(P_3)=\\{\\sqrt{2},-\\sqrt{2},0\\}$\nand their main characteristic polynomials are $m_{K_{1,3}}(x) = x^2 - 3$, $m_{K_2}(x) = x - 1$\nand $m_{P_3}(x) = x^2 - 2$, respectively.\n\n\\begin{figure}[h]\n\\begin{center}\n\\unitlength=0.25 mm\n\\begin{picture}(400,120)(60,60)\n\\put(50,110){\\line(2,1){50}} \n\\put(50,110){\\line(2,-1){50}}\n\\put(50,110){\\line(1,0){50}} \n\\put(150,85){\\line(0,1){50}} \n\\put(225,110){\\line(0,1){25}}\n\\put(225,110){\\line(0,-1){25}\n\\put(50,110){\\circle*{5.7}} \n\\put(100,135){\\circle*{5.7}}\n\\put(100,110){\\circle*{5.7}}\n\\put(100,85){\\circle*{5.7}} \n\\put(150,135){\\circle*{5.7}}\n\\put(150,85){\\circle*{5.7}} \n\\put(225,135){\\circle*{5.7}}\n\\put(225,110){\\circle*{5.7}}\n\\put(225,85){\\circle*{5.7}}\n\\put(40,110){\\makebox(0,0){\\footnotesize 1}}\n\\put(100,145){\\makebox(0,0){\\footnotesize 2}}\n\\put(110,110){\\makebox(0,0){\\footnotesize 3}}\n\\put(100,75){\\makebox(0,0){\\footnotesize 4}}\n\\put(75,50){\\makebox(0,0){\\footnotesize $K_{1,3}$}}\n\\put(150,145){\\makebox(0,0){\\footnotesize 5}}\n\\put(150,75){\\makebox(0,0){\\footnotesize 6}}\n\\put(150,50){\\makebox(0,0){\\footnotesize $K_2$}}\n\\put(225,145){\\makebox(0,0){\\footnotesize 7}}\n\\put(235,110){\\makebox(0,0){\\footnotesize 8}}\n\\put(225,75){\\makebox(0,0){\\footnotesize 9}}\n\\put(225,50){\\makebox(0,0){\\footnotesize $P_3$}}\n\\put(300,110){\\line(2,1){50}} \n\\put(300,110){\\line(2,-1){50}}\n\\put(300,110){\\line(1,0){50}} 
\n\\put(400,85){\\line(0,1){50}}\n\\put(300,110){\\line(4,1){100}}\n\\put(300,110){\\line(4,-1){100}\n\\put(350,135){\\line(1,0){125}}\n\\put(350,135){\\line(1,-1){50}}\n\\put(350,110){\\line(2,1){50}}\n\\put(350,110){\\line(2,-1){50}}\n\\put(350,85){\\line(1,0){125}} \n\\put(350,85){\\line(1,1){50}} \n\\put(400,135){\\line(3,-1){75}}\n\\put(400,135){\\line(3,-2){75}}\n\\put(400,85){\\line(3,1){75}} \n\\put(400,85){\\line(3,2){75}} \n\\put(475,135){\\line(0,-1){50}}\n\\put(300,110){\\circle*{5.7}}\n\\put(350,135){\\circle*{5.7}}\n\\put(350,110){\\circle*{5.7}} \n\\put(350,85){\\circle*{5.7}} \n\\put(400,135){\\circle*{5.7}\n\\put(400,85){\\circle*{5.7}}\n\\put(475,135){\\circle*{5.7}}\n\\put(475,110){\\circle*{5.7}}\n\\put(475,85){\\circle*{5.7}} \n\\put(295,110){\\makebox(0,0){\\footnotesize 1}}\n\\put(350,145){\\makebox(0,0){\\footnotesize 2}}\n\\put(360,110){\\makebox(0,0){\\footnotesize 3}}\n\\put(350,75){\\makebox(0,0){\\footnotesize 4}}\n\\put(400,145){\\makebox(0,0){\\footnotesize 5}}\n\\put(400,75){\\makebox(0,0){\\footnotesize 6}}\n\\put(475,145){\\makebox(0,0){\\footnotesize 7}}\n\\put(485,110){\\makebox(0,0){\\footnotesize 8}}\n\\put(475,75){\\makebox(0,0){\\footnotesize 9}}\n\\put(400,50){\\makebox(0,0){{\\footnotesize $G=\\bigvee_{P_3}{\\{K_{1,3}, K_2, P_3\\}}$}}}\n\\end{picture}\n\\end{center}\n\\caption{The $P_3$-join of the family of graphs $K_{1,3}$, $K_2$ and $P_3$.}\\label{figura_1}\n\\end{figure}\nSince\n$$\n\\begin{array}{lclcl}\n{\\bf \\widetilde{W}}_1 &=& \\left(\\begin{array}{ccccc}\n 0 & c_{1,0} & \\delta_{1,2}2 & \\delta_{1,3}3 & \\delta_{1,3}4 \\\\\n 1 & c_{1,1} & 0 & 0 & 0 \\\\\n \\end{array}\\right) & = & \\left(\\begin{array}{ccccc}\n 0 & 3 & 2 & 0 & 0 \\\\\n 1 & 0 & 0 & 0 & 0 \\\\\n \\end{array}\\right),\\\\\n{\\bf \\widetilde{W}}_2 &=& \\left(\\begin{array}{ccccc}\n \\delta_{2,1}4 & \\delta_{2,1}6 & c_{2,0} & \\delta_{2,3}3 & \\delta_{2,3}4\\\\\n \\end{array}\\right) & = & \\left(\\begin{array}{ccccc}\n 4 & 6 & 1 & 3 & 4\\\\\n 
\\end{array}\\right),\\\\\n{\\bf \\widetilde{W}}_3 &=& \\left(\\begin{array}{ccccc}\n \\delta_{3,1}4 & \\delta_{3,1}6 & \\delta_{3,2}2 & 0 & c_{3,0} \\\\\n 0 & 0 & 0 & 1 & c_{3,1}\n \\end{array}\\right) & = & \\left(\\begin{array}{ccccc}\n 0 & 0 & 2 & 0 & 2 \\\\\n 0 & 0 & 0 & 1 & 0\n \\end{array}\\right),\n\\end{array}\n$$\nit follows that\n\\begin{equation*}\n{\\bf \\widetilde{W}} = \\left(\\begin{array}{c}\n {\\bf \\widetilde{W}}_1\\\\\n {\\bf \\widetilde{W}}_2\\\\\n {\\bf \\widetilde{W}}_3\n \\end{array}\\right) = \\left(\\begin{array}{ccccc}\n 0 & 3 & 2 & 0 & 0 \\\\\n 1 & 0 & 0 & 0 & 0 \\\\\n 4 & 6 & 1 & 3 & 4 \\\\\n 0 & 0 & 2 & 0 & 2 \\\\\n 0 & 0 & 0 & 1 & 0\n \\end{array}\\right).\n\\end{equation*}\nTherefore, the characteristic polynomial of ${\\bf \\widetilde{W}}$ is the polynomial\n$$\n\\phi({\\bf \\widetilde{W}}) = -42 - 40 x + 15 x^2 + 19 x^3 + x^4 - x^5\n$$\nand, applying Corollary~\\ref{cor_charact_poly}, we obtain the characteristic polynomial of $G$,\n$$\n\\phi(G) = x^3(x+1)\\phi({\\bf \\widetilde{W}}) = x^3(x+1)(-42 - 40 x + 15 x^2 + 19 x^3 + x^4 - x^5).\n$$\n\\end{example}\n\n\\section{Final remarks}\\label{sec_4}\n\nWhen all graphs of the family $\\mathcal{G}$ are regular, that is, $G_1$ is $d_1$-regular, $G_2$ is $d_2$-regular,\n$\\dots$, $G_p$ is $d_p$-regular, the walk matrices are ${\\bf W}_{G_1}=\\left({\\bf j}_{n_1}\\right)$,\n${\\bf W}_{G_2}=\\left({\\bf j}_{n_2}\\right)$, $\\dots$, ${\\bf W}_{G_p}=\\left({\\bf j}_{n_p}\\right)$, respectively.\nConsequently, the main polynomials are $m_{G_1}(x) = x - d_1$, $m_{G_2}(x) = x - d_2$, $\\dots$, $m_{G_p}(x) = x - d_p$.\nAs direct\nconsequence, for this particular case, the $H$-join associated matrix is\n{\\small $$\n{\\bf\\widetilde{W}}=\\left(\\begin{array}{cccc}\n d_1 & \\delta_{1,2}{\\bf j}_{n_2}^T{\\bf W}_{G_2} & \\cdots & \\delta_{1,p}{\\bf j}_{n_p}^T{\\bf W}_{G_p}\\\\\n \\delta_{2,1}{\\bf j}_{n_1}^T{\\bf W}_{G_1} & d_2 & \\cdots & \\delta_{2,p}{\\bf j}_{n_p}^T{\\bf W}_{G_p}\\\\\n \\vdots 
&\\vdots & \\ddots & \\vdots \\\\\n \\delta_{p,1}{\\bf j}_{n_1}^T{\\bf W}_{G_1} & \\delta_{p,2}{\\bf j}_{n_2}^T{\\bf W}_{G_2} & \\cdots &d_p \\\\\n \\end{array}\\right) = \\left(\\begin{array}{cccc}\n d_1 & \\delta_{1,2}n_2 & \\cdots & \\delta_{1,p}n_p \\\\\n \\delta_{2,1}n_1 & d_2 & \\cdots & \\delta_{2,p}n_p \\\\\n \\vdots &\\vdots & \\ddots & \\vdots \\\\\n \\delta_{p,1}n_1 &\\delta_{p,2}n_2 & \\cdots & d_p \\\\\n \\end{array}\\right).\n$$}\nTherefore, it is immediate that when all the graphs of the family $\\mathcal{G}$ are regular, the matrix\n${\\bf\\widetilde{W}}$ and the matrix $\\widetilde{C}$ in \\eqref{matrix_c} are similar matrices. Note that\n$\\widetilde{C} = D {\\bf\\widetilde{W}}D^{-1}$, where\n$D = \\text{diag}\\left(\\sqrt{n_1}, \\sqrt{n_2}, \\dots, \\sqrt{n_p}\\right)$ and thus\n${\\bf\\widetilde{W}}$ and $\\widetilde{C}$ are cospectral matrices as it should be.\\\\\n\nIn the particular case of the lexicographic product $H[G]$, which is the $H$-join of a family of graphs\n$\\mathcal{G}$, where all the graphs in $\\mathcal{G}$ are isomorphic to a fixed graph $G$, consider that the graph $H$\nhas order $p$ and the graph $G$ has order $n$. Let\n$\\sigma(G)=\\{\\mu_1^{[m_1]}, \\dots, \\mu_{s}^{[m_s]}, \\mu_{s+1}^{[m_{s+1}]}, \\dots, \\mu_{t}^{[m_t]}\\}$, where\n$\\mu_1, \\dots, \\mu_s$ are the distinct main eigenvalues of $G$ and $\\sum_{i=1}^{t}{m_{i}}=n$. 
Then, according to\nDefinition~\\ref{main_def}, the $H$-join associated matrix is\n$$\n{\\bf \\widetilde{W}} = \\left(\\begin{array}{ccccc}\n {\\bf C}(m_{G}) & \\delta_{1,2}{\\bf M} & \\dots & \\delta_{1,p-1}{\\bf M} & \\delta_{1,p}{\\bf M} \\\\\n \\delta_{2,1}{\\bf M} & {\\bf C}(m_{G})& \\dots & \\delta_{2,p-1}{\\bf M} & \\delta_{2,p}{\\bf M} \\\\\n \\vdots & \\vdots &\\ddots & \\vdots & \\vdots \\\\\n \\delta_{p,1}{\\bf M} & \\delta_{p,2}{\\bf M} & \\dots & \\delta_{p,p-1}{\\bf M} & {\\bf C}(m_{G})\n \\end{array}\\right),\n$$\nwhere ${\\bf C}(m_{G})$ is the Frobenius companion matrix of $m_G$ and\n${\\bf M} = \\left(\\begin{array}{c}\n {\\bf j}_n^{\\top}{\\bf W}_G\\\\\n {\\bf 0}\\\\\n \\end{array}\\right)$ (both are $s \\times s$ matrices). Applying Theorem~\\ref{main_theorem}, we obtain\n\\begin{eqnarray*}\n\\sigma(H[G]) &=& p{\\{\\mu_{1}^{[m_{1}-1]}, \\dots, \\mu_{s}^{[m_{s}-1]}\\}} \\cup\n p{\\{\\mu_{s+1}^{[m_{s+1}]}, \\dots, \\mu_{t}^{[m_{t}]}\\}} \\cup\n \\sigma({\\bf \\widetilde{W}}),\n\\end{eqnarray*}\nwhere $pX$ denotes the union of $p$ copies of the multiset $X$. Therefore, from\nCorollary~\\ref{cor_charact_poly}, the characteristic polynomial of $H[G]$ is\n\\begin{equation}\n\\phi(H[G]) = \\left(\\frac{\\phi(G)}{m_G}\\right)^p \\phi({\\bf \\widetilde{W}}). \\label{charact_poly_lex}\n\\end{equation}\n\nIn \\cite[Th. 2.4]{WangWong2018} a different expression for $\\phi(H[G])$ is determined. 
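As a quick numerical illustration of Theorem~\ref{main_theorem} and of \eqref{charact_poly_lex}, the following sketch builds the walk matrix, the coefficients of the main characteristic polynomial, and the $H$-join associated matrix for one small instance. It assumes Python with numpy is available; the choice $H=K_2$ with $G_1=G_2=P_3$ (so that the join is $K_2[P_3]$) is ours, purely for illustration.

```python
import numpy as np

# Illustrative instance (not from the paper): H = K_2 and G_1 = G_2 = P_3,
# so the H-join is the lexicographic product K_2[P_3].
A_G = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)   # adjacency matrix of the path P_3
n = A_G.shape[0]
j = np.ones(n)

# Walk matrix W_G: columns j, A j, A^2 j, ... until the rank stops growing.
cols = [j]
while np.linalg.matrix_rank(np.column_stack(cols + [A_G @ cols[-1]])) > len(cols):
    cols.append(A_G @ cols[-1])
W = np.column_stack(cols)
s = W.shape[1]            # number of distinct main eigenvalues (rank of W_G)

# Coefficients c_0, ..., c_{s-1} of the main characteristic polynomial,
# obtained by solving W_G x = A^s j (Corollary on the s-th column of A(G) W_G).
c = np.linalg.lstsq(W, np.linalg.matrix_power(A_G, s) @ j, rcond=None)[0]

# Frobenius companion matrix C(m_G) and the block M (first row j^T W_G).
C = np.zeros((s, s))
C[1:, :-1] = np.eye(s - 1)
C[:, -1] = c
M = np.zeros((s, s))
M[0, :] = j @ W

# H-join associated matrix for H = K_2 (its two vertices are adjacent).
Wt = np.block([[C, M], [M, C]])

# Direct adjacency matrix of the K_2-join of {P_3, P_3}.
J = np.ones((n, n))
A = np.block([[A_G, J], [J, A_G]])

# P_3 has simple main eigenvalues +-sqrt(2) and non-main eigenvalue 0, so the
# theorem predicts sigma(A) = {0, 0} union sigma(Wt).
lhs = np.sort(np.linalg.eigvals(A).real)
rhs = np.sort(np.concatenate([[0.0, 0.0], np.linalg.eigvals(Wt).real]))
print(s, np.round(c, 6), np.allclose(lhs, rhs, atol=1e-6))
```

Here the two zeros account for the non-main eigenvalue $0$ of each copy of $P_3$, while the simple main eigenvalues $\pm\sqrt{2}$ contribute no eigenvalues of the join; the remaining four eigenvalues come from $\sigma({\bf \widetilde{W}})$.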
The expression in \\cite[Th. 2.4]{WangWong2018} involves not\nonly $H$ and $G$ but also eigenspaces of the adjacency matrix of $G$.\\\\\n\nFrom the obtained results, we are able to determine all the eigenvectors of the adjacency matrix of the $H$-join of\na family of arbitrary graphs $G_1, \\dots, G_p$ in terms of the eigenvectors of the adjacency matrices $A(G_i)$, for\n$1 \\le i \\le p$, and the eigenvectors of the $H$-join associated matrix ${\\bf \\widetilde{W}}$, as follows.\n\\begin{enumerate}\n\\item Let $G$ be the $H$-join as in Definition~\\ref{def_h-join}, where $\\mathcal{G}=\\{G_1, G_2, \\dots, G_p\\}$ is a\n family of arbitrary graphs.\n\\item For $1 \\le i \\le p$, consider $\\sigma(G_i)$ as defined in \\eqref{spectrum_Gi}. For each eigenvalue\n $\\mu_{i,j} \\in \\sigma(G_i)$, every eigenvector\n $\\hat{\\bf u}_{i,j} \\in \\mathcal{E}_{G_i}(\\mu_{i,j}) \\cap {\\bf j}_{n_i}^{\\perp}$ defines an eigenvector of\n $A(G)$ as in \\eqref{eigenvalue-equation}.\n\\item The remaining eigenvectors of $A(G)$ are the vectors $\\hat{\\bf v}$ defined in \\eqref{vector_v}-\\eqref{main_vector_Gi}\n from the eigenvectors of ${\\bf \\widetilde{W}}$, $\\hat{\\bf \\alpha}$, obtained as linearly independent solutions\n of \\eqref{w_eigenvetor}, for each $\\rho \\in \\sigma({\\bf \\widetilde{W}})$.\n\\end{enumerate}\n\n\\medskip\\textbf{Acknowledgments.}\nThis work is supported by the Center for Research and Development in Mathematics and Applications (CIDMA) through\nthe Portuguese Foundation for Science and Technology (FCT - Funda\u00e7\u00e3o para a Ci\u00eancia e a Tecnologia), reference\nUIDB\/04106\/2020.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nIn this erratum to \\cite{F}, we point out that there is a gap in the proof of\nTheorem 1 of that paper. 
Specifically, the estimate for $\\sup_{x\\in E_{n}%\n}\\left\\Vert \\overline{f}_{n}^{\\prime}\\left( x\\right) -F_{n}^{\\prime}\\left(\nx\\right) \\right\\Vert $ in \\cite{F} does not hold (as the inductive proof\nfails here), and as a consequence the conclusion of Theorem 1 does not follow.\nNevertheless, using a construction from \\cite{J}, techniques from\n\\cite{AFGJL}, and a proof similar to the original one, we are able to\nestablish all the results of \\cite{F} under the additional assumption that the\nsubset $Y\\subset X$ is convex (see Theorem 1 below).\n\nWe note that the main motivation for this work was to find an analogous result\nto that of \\cite{AFM} for not necessarily bounded functions. Let us recall that\nin \\cite{AFM} it was shown, in particular, that for a separable Banach space $X$\nadmitting a Lipschitz, $C^{p}$ smooth bump function, given $\\varepsilon\n>0$ and a bounded, uniformly continuous function $f:X\\rightarrow\\mathbb{R}$,\nthere exists a Lipschitz, $C^{p}$ smooth function $K$ with $\\left\\vert\nf-K\\right\\vert <\\varepsilon$ on $X.$ We remark that to establish our result\nhere, we need to further assume that our Banach space $X$ has an unconditional\nbasis. However, in addition to relaxing the boundedness condition on $f$, when\n$f$ is also Lipschitz, unlike the result of \\cite{AFM}, we are able to find\nLipschitz, $C^{p}$ smooth approximations $K$ whose Lipschitz constants do\nnot depend on the $\\varepsilon$-degree of precision in the approximation. We\nalso note that the results of \\cite{AFM} are restricted to real-valued maps.\n\n$\\smallskip$\n\nThe notation we employ is standard, with $X,Y,$ etc. 
denoting Banach spaces.\nWe write the closed unit ball of $X$ as $B_{X}.$ The G\\^{a}teaux derivative of\na function $f$ at $x$ in the direction $h$ will be denoted $D_{h}f\\left(\nx\\right) ,$ while the Fr{\\'{e}}chet derivative of $f$ at $x$ on $h$ is\nwritten $f^{\\prime}\\left( x\\right) \\left( h\\right) .$ We note that a\n$C^{p}$-smooth function is necessarily Fr{\\'{e}}chet differentiable (see e.g.,\n\\cite{BL}.)\n\n$\\smallskip$\n\nA $C^{p}$\\textbf{-smooth bump function} $b$ on $X$ is a $C^{p}$-smooth,\nreal-valued function on $X$ with bounded, non-empty support, where\n\\[\n\\text{support}\\left( b\\right) =\\overline{\\left\\{ x\\in X:b\\left( x\\right)\n\\neq0\\right\\} }.\n\\]\nIf $f:X\\rightarrow Y$ is Lipschitz with constant $\\eta,$ we will say that $f $\nis $\\eta$-Lipschitz. Most additional notation is explained as it is introduced\nin the sequel. For any unexplained terms we refer the reader to \\cite{DGZ} and\n\\cite{FHHMPZ}. For further historical context see the introduction in \\cite{F}.\n\n\\section{Main Results}\n\nWe first introduce some notation which will be used throughout the paper. 
Let\n$\\left\\{ e_{j},e_{j}^{\\ast}\\right\\} _{j=1}^{\\infty}$ be an unconditional\nSchauder basis on $X,$ and $P_{n}:X\\rightarrow X$ the canonical projections\ngiven by $P_{n}\\left( x\\right) =P_{n}\\left( \\sum_{j=1}^{\\infty}x_{j}%\ne_{j}\\right) =\\sum_{j=1}^{n}x_{j}e_{j},$ and where we set $P_{0}=0.$ By\nrenorming, we may assume that the unconditional basis constant is $1.$ In\nparticular, $\\left\\Vert P_{n}\\right\\Vert \\leq1$ for all $n.$ We put\n$E_{n}=P_{n}\\left( X\\right) ,$ and $E^{\\infty}=\\cup_{n}E_{n},$ noting that\n$\\dim E_{n}=n,$ $E_{n}\\subset E_{n+1},$ and $E^{\\infty}\\ $is a dense subspace\nof $X.$ It will be convenient to denote the closed unit ball of $E_{n}$ by\n$B_{E_{n}}.$\n\n$\\smallskip$\n\nThe proof of our main theorem is a modification of some techniques found in\n\\cite{M} and \\cite{AFGJL}, where $C^{p}$-fine approximation on Banach spaces\nis considered. We also rely on the main construction from \\cite{J}. We follow\nthe original proof of \\cite{F} closely, and have decided to reproduce the\ndetails so that this note is self contained.\n\n\\begin{theorem}\nLet $X$ be a Banach space with unconditional basis which admits a Lipschitz,\n$C^{p}$-smooth bump function. Let $Y\\subset X$ be a convex subset and\n$f:Y\\rightarrow\\mathbb{R}$ a uniformly continuous map. 
Then for each\n$\\varepsilon>0$ there exists a Lipschitz, $C^{p}$-smooth function\n$K:X\\rightarrow\\mathbb{R}$ such that for all $y\\in Y,$%\n\n\\[\n\\left\\vert f(y)-K(y)\\right\\vert <\\varepsilon.\n\\]\n\n\n$\\smallskip$\n\nIf $Z\\ $is any Banach space, $Y\\subset X$ is any subset, and $f:X\\rightarrow\nZ$ (respectively $f:Y\\rightarrow\\mathbb{R}$) is Lipschitz with constant\n$\\eta,$ then we can choose $K:X\\rightarrow Z$ (respectively $K:X\\rightarrow\n\\mathbb{R}$) to have Lipschitz constant no larger than $C_{0}\\eta,$ where\n$C_{0}>1$ is a constant depending only on $X$ (in particular, $C_{0}$ is\nindependent of $\\varepsilon.$)\n\\end{theorem}\n\n\\medskip\n\n\\textbf{Proof\\ \\ }As noted before, the main idea of the proof is a\nmodification of the proof of \\cite[Lemma 5]{AFGJL} using ideas from \\cite{J}.\n\n\\medskip\n\n\\noindent We will need to use the following result, and refer the reader to\n\\cite[Proposition II.5.1]{DGZ} and \\cite{L} for a proof.\n\n\\medskip\n\n\\begin{proposition}\nLet $Z$ be a Banach space. The following assertions are equivalent.\n\n(a).$\\ Z$ admits a $C^{p}$-smooth, Lipschitz bump function.\n\n\\smallskip\n\n(b). 
There exist numbers $a,b>0$ and a Lipschitz function $\\psi:Z\\rightarrow\n\\lbrack0,\\infty)$ which is $C^{p}$-smooth on $Z\\setminus\\{0\\}$, homogeneous\n(that is $\\psi(tx)=|t|\\psi(x)$ for all $t\\in\\mathbb{R},x\\in Z$), and such that\n$a\\Vert\\cdot\\Vert\\leq\\psi\\leq b\\Vert\\cdot\\Vert$.\n\\end{proposition}\n\n\\medskip\n\nFor such a function $\\psi$, the set $A=\\{z\\in Z:\\psi(z)\\leq1\\}$ is what we\ncall a $C^{p}$-smooth, Lipschitz \\textbf{starlike body}, and the Minkowski\nfunctional of this body, $\\mu_{A}(z)=\\inf\\{t>0:(1\/t)z\\in A\\}$, is precisely\nthe function $\\psi$ (see \\cite{AD} and the references therein for further\ninformation on starlike bodies and their Minkowski functionals).\n\nWe will denote the open ball of center $x$ and radius $r$, with respect to the\nnorm $\\Vert\\cdot\\Vert$ of $X$, by $B(x,r).$ If $A$ is a bounded starlike body\nof $X$, we define the \\textbf{open}\\textit{\\ }$A$\\textit{-}\\textbf{pseudoball}\nof center $x$ and radius $r$ as%\n\n\\[\nB_{A}(x,r):=\\{y\\in X:\\mu_{A}(y-x)<r\\},\n\\]\nand we note that $B(x,r\/b)\\subseteq B_{A}(x,r)\\subseteq B(x,r\/a)$ for every\n$x\\in X$ and every $r>0$. This fact will sometimes be used implicitly in what follows.\n\nFor the proof, we shall first define a function $\\overline{f}:E^{\\infty\n}\\rightarrow\\mathbb{R},$ then a map $\\Psi:X\\rightarrow E^{\\infty},$ and\nfinally our desired function $K$ will be given by $K=\\overline{f}\\circ\\Psi. $\n\nTo begin the proof, first note that as $f$ is real-valued and $Y$ is convex,\nby \\cite[Proposition 2.2.1 (i)]{BL} $f$ can be uniformly approximated by a\nLipschitz map, and so we may and do suppose that $f$ is Lipschitz with\nconstant $\\eta$.
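\\medskip\n\n\\noindent For completeness, we note that the identity $\\mu_{A}=\\psi$ used above is an\nimmediate consequence of the homogeneity of $\\psi$: for every $z\\in Z,$%\n\\[\n\\mu_{A}(z)=\\inf\\{t>0:\\psi(z\/t)\\leq1\\}=\\inf\\{t>0:\\psi(z)\\leq t\\}=\\psi(z),\n\\]\nsince $\\psi(z\/t)=\\psi(z)\/t$ for $t>0.$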
Using an infimal convolution, we extend $f$ to a Lipschitz\nmap $F$ on $X$ with the same constant $\\eta$ by defining, $F\\left( x\\right)\n=\\inf\\left\\{ f\\left( y\\right) +\\eta\\left\\Vert x-y\\right\\Vert :y\\in\nY\\right\\} .$\n\nLet $\\varepsilon>0$ and $r\\in\\left( 0,\\varepsilon\/3M\\eta\\right) .$ We shall\nrequire the main construction from \\cite{J} (see also \\cite{FWZ}), and for the\nsake of completeness we outline this here. Let $\\left\\{ h_{i}\\right\\}\n_{i=1}^{\\infty}$ be a dense sequence in $B_{X},$ and $\\varphi_{i}\\in\nC^{\\infty}\\left( \\mathbb{R},\\mathbb{R}^{+}\\right) $ with $\\int_{\\mathbb{R}}\\varphi_{i}=1$ and support$\\left( \\varphi_{i}\\right) \\subseteq\\left[\n-\\frac{\\varepsilon}{6\\eta2^{i}},\\frac{\\varepsilon}{6\\eta2^{i}}\\right] .$\n\n\\medskip\n\n\\noindent Now we define functions $g_{n}:X\\rightarrow\\mathbb{R}$ by,%\n\n\\[\ng_{n}\\left( x\\right) =\\int_{\\mathbb{R}^{n}}F\\left( x-\\sum_{i=1}^{n}t_{i}h_{i}\\right) \\prod_{i=1}^{n}\\varphi_{i}\\left( t_{i}\\right) dt,\n\\]\n\n\n\\noindent where the integral is taken with respect to $n$-dimensional Lebesgue measure.\n\n\\medskip\n\n\\noindent It is proven in \\cite{J} that the following hold:\n\n\\medskip\n\n\\begin{enumerate}\n\\item There exists $g$ with $g_{n}\\rightarrow g$ uniformly on $X,$\n\n\\item $\\left\\vert g-F\\right\\vert <\\varepsilon\/3$ on $X,$\n\n\\item The map $g$ is $\\eta$-Lipschitz,\n\n\\item The map $g$ is uniformly G\\^{a}teaux differentiable.\n\\end{enumerate}\n\n\\medskip\n\n\\noindent Next, following \\cite[Lemma 5]{AFGJL}, let $\\varphi:\\mathbb{R}\\rightarrow\\left[ 0,1\\right] $ be a $C^{\\infty}$-smooth function such that\n$\\varphi\\left( t\\right) =1$ if $\\left\\vert t\\right\\vert <1\/2$,\n$\\varphi\\left( t\\right) =0$ if $\\left\\vert t\\right\\vert >1,$ $\\varphi\n^{\\prime}([0,\\infty))\\subseteq\\left[ -3,0\\right] ,$ and $\\varphi(-t)=\\varphi(t)$.\n\n\\medskip\n\n\\noindent Let us define a function, G\\^{a}teaux differentiable on $X,$
and\n$C^{p}$-smooth on $E_{n},$ by%\n\n\\[\nF_{n}\\left( x\\right) =\\frac{(a_{n})^{n}}{c_{n}}\\int_{E_{n}}g(x-y)\\varphi\n(a_{n}\\mu_{A}\\left( y\\right) )dy\n\\]\nwhere\n\\[\nc_{n}=\\int_{E_{n}}\\varphi\\left( \\mu_{A}\\left( y\\right) \\right) dy,\n\\]\nand (keeping in mind (2.1) and $\\left( 3\\right) $) we have chosen the\nconstants $a_{n}$ large enough that\n\\begin{equation}\n\\sup_{x\\in E_{n}}\\left\\vert F_{n}\\left( x\\right) -g\\left( x\\right)\n\\right\\vert <\\frac{\\varepsilon}{6}2^{-n}.\n\\end{equation}\n\n\n\\noindent As pointed out to us by P. H\\'{a}jek, since $g$ is Lipschitz and\nuniformly G\\^{a}teaux differentiable, by \\cite[Lemma 4]{HJ} for each $h$ the\nmap $x\\rightarrow D_{h}g\\left( x\\right) $ is uniformly continuous. From\nthis, the Lipschitzness of $g,$ and compactness of $B_{E_{n}},$ we can choose\nthe $a_{n}$ larger if need be so that for all $h\\in B_{E_{n}}$ we have,%\n\n\\begin{equation}\n\\sup_{x\\in E_{n}}\\left\\vert D_{h}F_{n}\\left( x\\right) -D_{h}g\\left(\nx\\right) \\right\\vert <\\frac{\\eta}{2}2^{-n}.\n\\end{equation}\n\n\n\\smallskip\n\n\\noindent Note that for any $x,x^{\\prime}\\in X$,%\n\n\\begin{align*}\n\\left\\vert F_{n}\\left( x\\right) -F_{n}\\left( x^{\\prime}\\right)\n\\right\\vert & \\leq\\frac{(a_{n})^{n}}{c_{n}}\\int_{E_{n}}\\left\\vert\ng(x-y)-g\\left( x^{\\prime}-y\\right) \\right\\vert \\varphi(a_{n}\\mu_{A}\\left(\ny\\right) )dy\\\\\n& \\\\\n& \\leq\\eta\\left\\Vert x-x^{\\prime}\\right\\Vert \\frac{(a_{n})^{n}}{c_{n}}%\n\\int_{E_{n}}\\varphi(a_{n}\\mu_{A}\\left( y\\right) )dy=\\eta\\left\\Vert\nx-x^{\\prime}\\right\\Vert ,\n\\end{align*}\n\n\n\\noindent that is, $F_{n}$ is $\\eta$-Lipschitz.\n\n\\medskip\n\n\\noindent We next define a sequence of G\\^{a}teaux differentiable functions\n$\\overline{f}_{n}:X\\rightarrow\\mathbb{R},$ $C^{p}$-smooth on $E_{n},$ as\nfollows. 
Put $\\bar{f}_{0}=f\\left( 0\\right) ,$ and supposing that\n$\\overline{f}_{0},...,\\overline{f}_{n-1}$ have been defined, we set\n\n\\medskip%\n\n\\[\n\\bar{f}_{n}\\left( x\\right) =F_{n}\\left( x\\right) +\\bar{f}_{n-1}\\left(\nP_{n-1}\\left( x\\right) \\right) -F_{n}\\left( P_{n-1}\\left( x\\right)\n\\right) .\n\\]\n\n\n\\medskip\n\n\\noindent One can verify by induction, using $\\left\\Vert P_{n}\\right\\Vert\n\\leq1,\\ \\left( 2.2\\right) $ and $\\left( 2.3\\right) ,$ that,\n\n\\medskip\n\n(i). The $\\bar{f}_{n}$ are G\\^{a}teaux differentiable, the restriction of\n$\\bar{f}_{n}$ to $E_{n}$ is $C^{p}$-smooth, and $\\bar{f}_{n}$ extends $\\bar\n{f}_{n-1}$,\n\n\\medskip\n\n(ii).\\ $\\sup_{x\\in E_{n}}\\left\\vert \\bar{f}_{n}\\left( x\\right) -g\\left(\nx\\right) \\right\\vert <\\frac{\\varepsilon}{3}\\left( 1-\\frac{1}{2^{n}}\\right)\n$,\n\n\\medskip\n\n(iii). $\\sup_{x\\in E_{n}}\\left\\vert D_{h}\\bar{f}_{n}\\left( x\\right)\n-D_{h}g\\left( x\\right) \\right\\vert \\leq\\eta\\left( 1-\\frac{1}{2^{n}}\\right)\n$, for all $h\\in B_{E_{n}}.$\n\n\\medskip\n\n\\noindent We now define the map $\\bar{f}:E^{\\infty}\\rightarrow\\mathbb{R}$ by\n\n\\medskip%\n\n\\[\n\\bar{f}\\left( x\\right) =\\lim_{n\\rightarrow\\infty}\\bar{f}_{n}\\left(\nx\\right) .\n\\]\n\n\n\\noindent For $x\\in E^{\\infty}=\\cup_{n}E_{n},$ define $n_{x}\\equiv\\min\\left\\{\nn:x\\in E_{n}\\right\\} ,$ and note that we have for any $m\\geq n_{x}$,%\n\n\\begin{equation}\n\\bar{f}\\left( x\\right) =\\lim_{n\\rightarrow\\infty}\\bar{f}_{n}\\left(\nx\\right) =\\bar{f}_{m}\\left( x\\right) .\n\\end{equation}\n\n\n\\medskip\n\n\\noindent In particular, for any $n,$ $\\overline{f}\\mid_{E_{n}}=\\overline\n{f}_{n}.$ One can verify using $\\left( 2.4\\right) ,$ (i), (ii), and (iii)\nabove that $\\bar{f}$ has the following properties:\n\n\\medskip\n\n(i)$^{\\prime}$. The restriction of $\\bar{f}$ to every subspace $E_{n}$ is\n$C^{p}$-smooth,\n\n\\medskip\n\n(ii)$^{\\prime}$. 
$\\sup_{x\\in E^{\\infty}}\\left\\vert \\bar{f}\\left( x\\right)\n-g\\left( x\\right) \\right\\vert \\leq\\frac{\\varepsilon}{3}.$\n\n\\medskip\n\n(iii)$^{\\prime}$. $\\sup_{x\\in E_{n}}\\left\\vert D_{h}\\bar{f}\\left( x\\right)\n-D_{h}g\\left( x\\right) \\right\\vert \\leq\\eta$, for all $h\\in B_{E_{n}}.$\n\n\\medskip\n\n\\noindent The proof now closely follows \\cite[Lemma 5]{AFGJL}, and we provide\nsome of the details for the sake of completeness.\n\n\\medskip\n\n\\noindent Next let $x=\\sum_{n}x_{n}e_{n}\\in X$ and define the maps\n\\[\n\\chi_{n}\\left( x\\right) =1-\\varphi\\left[ \\frac{\\mu_{A}\\left(\nx-P_{n-1}\\left( x\\right) \\right) }{r}\\right] ,\n\\]\nand\n\\[\n\\Psi\\left( x\\right) =\\sum_{n}\\chi_{n}\\left( x\\right) x_{n}e_{n}.\n\\]\n\n\nFor any $x_{0},$ because $P_{n}\\left( x_{0}\\right) \\rightarrow x_{0}$ and\nthe $\\Vert P_{n}\\Vert$ are uniformly bounded, there exist a neighbourhood\n$N_{0}$ of $x_{0}$ and an $n_{0}=n_{x_{0}}$ so that $\\chi_{n}\\left( x\\right)\n=0$ for all $x\\in N_{0}$ and $n\\geq n_{0}$ and so $\\Psi\\left( N_{0}\\right)\n\\subset E_{n_{0}}.$ Thus, $\\Psi:X\\rightarrow E^{\\infty}$ is a $C^{p}$-smooth\nmap whose range is locally contained in the finite dimensional subspaces\n$E_{n}$. 
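\\medskip\n\n\\noindent For the reader's convenience, we indicate the inductive step behind estimate\n(ii) above: assuming $\\sup_{x\\in E_{n-1}}\\left\\vert \\bar{f}_{n-1}\\left( x\\right)\n-g\\left( x\\right) \\right\\vert <\\frac{\\varepsilon}{3}\\left( 1-\\frac{1}{2^{n-1}}\\right) ,$\nwe have for $x\\in E_{n},$ using $\\left( 2.2\\right) $ twice (note that\n$P_{n-1}\\left( x\\right) \\in E_{n-1}\\subset E_{n}$),%\n\\begin{align*}\n\\left\\vert \\bar{f}_{n}\\left( x\\right) -g\\left( x\\right) \\right\\vert &\n\\leq\\left\\vert F_{n}\\left( x\\right) -g\\left( x\\right) \\right\\vert\n+\\left\\vert \\bar{f}_{n-1}\\left( P_{n-1}\\left( x\\right) \\right) -g\\left(\nP_{n-1}\\left( x\\right) \\right) \\right\\vert +\\left\\vert g\\left(\nP_{n-1}\\left( x\\right) \\right) -F_{n}\\left( P_{n-1}\\left( x\\right)\n\\right) \\right\\vert \\\\\n& <\\frac{\\varepsilon}{6}2^{-n}+\\frac{\\varepsilon}{3}\\left( 1-\\frac{1}{2^{n-1}}\\right)\n+\\frac{\\varepsilon}{6}2^{-n}=\\frac{\\varepsilon}{3}\\left( 1-\\frac{1}{2^{n}}\\right) .\n\\end{align*}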
Using the fact that $\\left\\{ e_{n}\\right\\} $ is unconditional with\nconstant $C=1,$ one can show that (see \\cite[Fact 7]{AFGJL})\n\\begin{equation}\n\\left\\Vert x-\\Psi\\left( x\\right) \\right\\Vert \\leq Mr.\n\\end{equation}\n\n\n\\noindent We now set $K=\\overline{f}\\circ\\Psi.$ Since $\\Psi$ is $C^{p}$-smooth and locally\ntakes its values in the subspaces $E_{n},$ on which $\\overline{f}$ is $C^{p}$-smooth\nby (i)$^{\\prime}$, the map $K$ is $C^{p}$-smooth on $X.$ Next, for $y\\in Y$ we have\n$F\\left( y\\right) =f\\left( y\\right) ,$ and hence, using $\\left\\vert g-F\\right\\vert\n<\\varepsilon\/3$ on $X,$ the $\\eta$-Lipschitzness of $g,$ (ii)$^{\\prime}$, and our choice\n$r<\\varepsilon\/3M\\eta,$%\n\\begin{align*}\n\\left\\vert f(y)-K(y)\\right\\vert & \\leq\\left\\vert f(y)-g(y)\\right\\vert\n+\\left\\vert g(y)-g\\left( \\Psi\\left( y\\right) \\right) \\right\\vert\n+\\left\\vert g\\left( \\Psi\\left( y\\right) \\right) -\\overline{f}\\left(\n\\Psi\\left( y\\right) \\right) \\right\\vert \\\\\n& <\\frac{\\varepsilon}{3}+\\eta Mr+\\frac{\\varepsilon}{3}<\\varepsilon.\n\\end{align*}\nFinally, (iii)$^{\\prime}$ shows that $\\overline{f}$ is $2\\eta$-Lipschitz on $E^{\\infty},$\nand since $\\Psi$ is Lipschitz with a constant depending only on $X$ (see\n\\cite[Lemma 5]{AFGJL}), the map $K$ is $C_{0}\\eta$-Lipschitz, where $C_{0}$ depends\nonly on $X.$ A similar argument establishes the second statement of the theorem,\ncompleting the proof. $\\blacksquare$\n\n\\medskip\n\n\\begin{corollary}\nLet $X$ be a Banach space with an unconditional basis. The following assertions\nare equivalent.\n\n\\begin{enumerate}\n\\item $X$ admits a Lipschitz, $C^{p}$-smooth bump function.\n\n\\item For every convex subset $Y\\subset X,$ uniformly continuous function\n$f:Y\\rightarrow\\mathbb{R},$ and $\\varepsilon>0,$ there exists a Lipschitz,\n$C^{p}$-smooth map $K:X\\rightarrow\\mathbb{R}$ with $\\left\\vert f-K\\right\\vert\n<\\varepsilon$ on $Y.$\n\n\\item For every subset $Y\\subset X,$ Lipschitz function $f:Y\\rightarrow\n\\mathbb{R},$ and $\\varepsilon>0,$ there exists a Lipschitz, $C^{p}$-smooth map\n$K:X\\rightarrow\\mathbb{R}$ with $\\left\\vert f-K\\right\\vert <\\varepsilon$ on\n$Y.$\n\n\\item For every Banach space $Z,$ Lipschitz map $f:X\\rightarrow Z,$ and\n$\\varepsilon>0,$ there exists a Lipschitz, $C^{p}$-smooth map $K:X\\rightarrow\nZ$ with $\\left\\Vert f-K\\right\\Vert <\\varepsilon$ on $X.$\n\\end{enumerate}\n\\end{corollary}\n\n\\begin{proof}\nThat $\\left( 1\\right) \\Rightarrow\\left( 2\\right) ,\\left( 3\\right) ,$ and\n$\\left( 4\\right) $ is Theorem 1. For $\\left( 2\\right) \\Rightarrow\\left(\n1\\right) ,$ choose $Y=X,$ and $f=\\left\\Vert \\cdot\\right\\Vert .$ Let\n$K:X\\rightarrow\\mathbb{R}$ be a $C^{p}$-smooth, Lipschitz map with $\\left\\vert\nf-K\\right\\vert <1$ on $X.$ Let $\\xi:\\mathbb{R}\\rightarrow\\mathbb{R}$ be $C^{\\infty}$-smooth and Lipschitz with $\\xi\\left( t\\right) =1$ if $t\\leq1$ and\n$\\xi\\left( t\\right) =0$ if $t\\geq2.$ Then $b=\\xi\\circ K$ is a $C^{p}$-smooth, Lipschitz map with $b\\left( 0\\right) =1$ and $b\\left( x\\right)\n=0$ when $\\left\\Vert x\\right\\Vert \\geq3.$ The remaining implications are similar.\n\\end{proof}\n\n\\medskip\n\n\\textbf{Remark }The Lipschitz constant of $K$ obtained for the second\nstatement of Theorem 1 is not the best possible.
By using better derivative\nestimates, one can show that for any $\\delta>0,$ we may arrange $\\left\\Vert\nK^{\\prime}\\right\\Vert \\leq\\left( \\eta+\\delta\\right) \\left( 2\\left(\n2+\\delta\\right) M^{2}+1\\right) .$ This should be compared with the recent\nresult in \\cite{AFLR}, where it is shown in particular that for separable\nHilbert spaces $X$, any Lipschitz, real-valued function on $X$ can be\nuniformly approximated by $C^{\\infty}$-smooth functions with Lipschitz\nconstants arbitrarily close to the Lipschitz constant of the given function. It is open\nwhether such a result holds outside the Hilbert space setting.\n\n\\medskip\n\n{\\small Acknowledgement\\ \\ The author wishes to thank D. Azagra and J.\nJaramillo for bringing to his attention the problems addressed in this note\nduring the RSME-AMS 2003 conference in Sevilla. We also wish to thank P.\nH\\'{a}jek for pointing out \\cite[Lemma 4]{HJ}, which enabled us to correct and\nsimplify an earlier version of this corrigendum. }\n\n\\medskip\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThere is increasing interest in seeking novel electro-optic\nmaterials analogous to the famous Sr$_{x}$Ba$_{1-x}$Nb$_{2}$O$_6$\n(SBN, $0.32\\leq x\\leq 0.75$), which belongs to the\n(A$_1)_2$(A$_2)_4$(B$_1)_2$(B$_2)_8$O$_{30}$ tetragonal tungsten\nbronze (TTB) structure with the non-centrosymmetric space group P4{\\it bm} at\nroom temperature (see Fig.\\protect\\ref{Fig0})\n\\protect\\cite{Podlozhenov,Chernaya1,Chernaya2} and shows excellent\nelectro-optic and pyroelectric properties \\protect\\cite{Glass} as well as typical\nrelaxor behaviors.\\protect\\cite{Kleemann1,Kleemann2} Recently,\nCa$_{x}$Ba$_{1-x}$Nb$_{2}$O$_6$ (CBN, $0.2\\leq x\\leq 0.4$) single\ncrystals have received considerable attention due to their\nexcellent electro-optic properties and their higher working temperature\nin comparison with SBN.\\protect\\cite{Ebser,Burianek,Muehlberg,Qi,Song} The\nferroelectricity of CBN was first
discovered by Smolenskii et\nal.\\protect\\cite{Smolenskii} CBN alloys also have the ferroelectric TTB\nstructure at room temperature; however, contrary to SBN, where\nSr ions randomly occupy both of the large A$_1$ and A$_2$ sites in\nthe tungsten bronze framework of NbO$_6$ octahedra, the smaller Ca ions\nin CBN occupy only the A$_1$ sites in the structure. It has been\ndemonstrated experimentally that the smaller A$_1$ site is almost\nexclusively occupied by Ca while the larger A$_2$ site is predominantly\noccupied by Ba in the CBN single crystal with the congruent melting composition\n($x$=0.28).\\protect\\cite{Graetsch1,Graetsch2} In ferroelectric CBN alloys,\nniobium atoms are displaced from the centers of their coordination\npolyhedra and shifted along the tetragonal $c$-axis, which is\nconsidered to be the origin of the spontaneous electric\npolarization.\\protect\\cite{Chernaya2, Abrahams}\n\nCurrently, most investigations of CBN alloys are focused on the\ncrystal growth,\\protect\\cite{Ebser,Song} dielectric,\\protect\\cite{Qi,Niemer}\nferroelectric,\\protect\\cite{Qi} optical,\\protect\\cite{Ebser2,Heine,Sheng} and elastic\n\\protect\\cite{Pandey,Pandey2,Pandey3,Suzuki} properties of the CBN single\ncrystal with the congruent melting composition $x$=0.28. Despite these\ninvestigations, there is still a lack of full understanding\nof the basic issues in CBN alloys.
One important question is whether CBN\nalloys are relaxors (like their isostructural compounds, the SBN\nalloys\\protect\\cite{Kleemann2,Blinc,Banys,Miga}), which are characterized by\na broad peak of the dielectric susceptibility with strong frequency\ndispersion in the radio-frequency range over a large temperature range\n\\protect\\cite{Miga,Levstik,Fu1,Fu2} and a smeared ferroelectric phase\ntransition\\protect\\cite{Fu1,Taniguchi} without an obvious heat-capacity\npeak,\\protect\\cite{Moriya} or whether they are normal ferroelectrics (such as\nBaTiO$_3$), which have a well-defined\npara-ferroelectric thermal phase transition at the Curie point $T_{\\rm\nc}$ \\protect\\cite{Fu3} but show polarization precursor dynamics before the\ntransition into the ferroelectric\nphase.\\protect\\cite{Burns,Tai,Ziebinska,Ko,Dulkin,Pugachev} Since CBN is an\nisostructural compound of SBN, in which typical relaxor phenomena have\nbeen clearly demonstrated,\\protect\\cite{Banys,Miga} it is natural to expect\nthat CBN should show relaxor behaviors such as a broad peak of the\ndielectric susceptibility with strong frequency dispersion due to\nthe dynamics of polar nano-regions (PNRs) occurring in the\nparaelectric mother phase. Following this scenario, several attempts\nhave been made to confirm the relaxor behaviors in\nCBN.\\protect\\cite{Pandey,Pandey2,Pandey3,Suzuki} From investigations of the\nlattice strain and thermal expansion of the CBN single crystal with\nthe congruent melting composition $x$=0.28, Pandey et al. found a deviation\nfrom the linear temperature dependence of the lattice strain\nand an anomalous thermal expansion in this single crystal.\\protect\\cite{Pandey} They\nattribute these anomalous elastic behaviors to relaxor phenomena\noccurring in the CBN crystal.
They further suggest the Burns\ntemperature \\protect\\cite{Burns} $T_{\\rm B}$, which\ncharacterizes the initiation of dynamic PNRs, to be 1100 K, and the intermediate\ntemperature $T^*$, which indicates the beginning of\nPNR freezing,\\protect\\cite{Roth,Dkhil} to be 800 K, for CBN with $x=0.28$.\\protect\\cite{Pandey}\nOn the other hand, in a recent Brillouin scattering study, $T_{\\rm\nB}$ and $T^*$ are proposed to be approximately 790 K and 640 K,\nrespectively, for CBN with the same composition.\\protect\\cite{Suzuki} The\ndifference between the $T_{\\rm B}$ and $T^*$ values estimated by\ndifferent techniques for the same compound is surprisingly large.\nApparently, the existence and exact values of $T_{\\rm B}$ and $T^*$\nin CBN need to be clarified in further investigations.\n\nIt should be noticed that the existence of $T_{\\rm B}$ and $T^*$ is\nnot a property unique to relaxors. In a normal\nferroelectric such as BaTiO$_3$, the existence of these two\ncharacteristic temperatures has been clearly\ndemonstrated.\\protect\\cite{Burns,Dulkin} In\nearly investigations, Burns et al.\\protect\\cite{Burns} demonstrated that\nthe temperature dependence of the optic index of refraction, $n(T)$,\ndeviates from the high-temperature extrapolated value between\n$T_{\\rm c}$ and $T_{\\rm c} + 180$ K in the paraelectric phase of\nBaTiO$_3$ crystal. This deviation is commonly accepted to be due to\nlocal polarization precursor dynamics before the\npara-ferroelectric phase transition in BaTiO$_3$; thus the\ntemperature where this deviation occurs is generally called the Burns\ntemperature $T_{\\rm B}$, which is widely used to characterize the\nappearance of PNRs in the paraelectric phase of a ferroelectric or\nrelaxor.
In addition to the optical\nmeasurements,\\protect\\cite{Burns,Ziebinska,Pugachev} various other\nexperiments such as pulsed X-ray laser\nmeasurements,\\protect\\cite{Tai,Namikawa} acoustic emission\\protect\\cite{Dulkin} and\nBrillouin light scattering\\protect\\cite{Ko} all show the existence of local\npolarization in BaTiO$_3$ before the transition into the ferroelectric\nphase. Theoretically, it has been proposed that local dynamical\nprecursor domains are common to perovskite ferroelectrics, and these\npolar domains grow upon approaching $T_{\\rm c}$ to coalesce into a\nhomogeneously polarized state at $T_{\\rm\nc}$.\\protect\\cite{Bussmann-Holder1,Bussmann-Holder2} Similarly, PNRs develop\nfrom $T_{\\rm B}$ in relaxors, and cause the very strong frequency\ndispersion of the dielectric susceptibility. In contrast to relaxors,\nprecursor domains in perovskite ferroelectrics such as BaTiO$_3$\nneed not give rise to a frequency-dependent dielectric\nresponse in the radio-frequency range.\\protect\\cite{Bussmann-Holder1}\n\nFurthermore, a sharp peak of the heat capacity has been observed at the\npara-ferroelectric phase transition in a CBN crystal with\n$x=0.311$.\\protect\\cite{Muehlberg} Also, in the dielectric measurements on\nthe CBN single crystal with the congruent melting composition $x$=0.28, it\nseems that the para-ferroelectric phase transition temperature does\nnot depend on frequency.\\protect\\cite{Qi} All these findings in CBN are\nin sharp contrast to the characteristic properties of relaxors, such\nas a smeared phase transition \\protect\\cite{Fu1,Taniguchi} without an evident heat-capacity\npeak,\\protect\\cite{Moriya} and a very strong frequency dependence\nof the dielectric response in the radio-frequency range around the temperature,\n$T_{\\rm m}$, where the maximum of the dielectric response occurs.
These\nresults suggest that CBN alloys may be classified as ferroelectrics\nwith a thermal phase transition associated with\npolarization precursor dynamics, rather than as relaxors.\n\n\nIn this study, we first determined the solid solution limit of CBN\nferroelectric alloys. We then investigated the para-ferroelectric\nphase transition in these alloys by dielectric measurements and\ndifferential scanning calorimetry (DSC). Contrary to the relaxor\npicture that has been expected in previous\ninvestigations,\\protect\\cite{Pandey,Pandey2,Pandey3} we clearly demonstrated that CBN\nalloys can be classified as ferroelectrics with a first-order thermal\nphase transition associated with precursor dynamics over a large\ntemperature range above $T_{\\rm c}$, as in\nBaTiO$_3$.\\protect\\cite{Burns,Ziebinska,Pugachev} We also found that the\nlocal polarizations grow exponentially in the temperature range\n$T_{\\rm c}<T<T_{\\rm B}$.\n\nIn the x-ray diffraction patterns, additional peaks from secondary\nphases were observed for $x<0.19$ and for $x>0.32$.\nThese facts indicate that a single phase of CBN ferroelectric alloys\nis available only in the composition range of $0.19\\leq x\\leq0.32$.\n\nThe lattice parameters of CBN alloys at room temperature were then\ndetermined by the method of least squares using twelve reflections\nwith 2$\\theta>50$ degrees. The results are shown in Fig.\\protect\\ref{fig2}. In\nCBN alloys with the ferroelectric TTB structure, the $a$-axis lattice constant\nis nearly unchanged with composition within the error range. In contrast, the polar\n$c$-axis lattice constant shortens with an increase in Ca concentration,\nwhich results in a similar decrease of the unit cell volume. This result\nis expected because the ionic radii of Ca and Ba are 1.34 {\\AA}\nand 1.61 {\\AA},\\protect\\cite{Shannon} respectively, and Ca has the smaller\nionic radius. In spite of the large change in composition $x$ ($\\Delta\nx=0.13$), the corresponding change in the unit cell volume is\nsurprisingly small (approximately 0.65\\%).
This situation differs\nsignificantly from the case of substitution of Ca for Ba\nin Ba$_{1-x}$Ca$_x$TiO$_3$ perovskite oxides, which shows an\napproximately 3.5\\% reduction in the unit cell volume at the same level\nof Ca substitution.\\protect\\cite{Fu3} This fact seems to suggest that the\nstacking structure in the TTB-type CBN crystal is mainly determined by the\nframework of NbO$_6$ octahedra, as shown in Fig.\\protect\\ref{Fig0}.\n\n\n\\subsection{Thermal phase transition}\n\nTo investigate the para-ferroelectric phase transition in CBN\nalloys, we observed the variation of the dielectric susceptibility as a\nfunction of temperature in a frequency range from 100 Hz to 100 kHz.\nThe results are shown in Fig.\\protect\\ref{fig3} and Fig.\\protect\\ref{fig4},\nrespectively. The dielectric loss for lower frequencies ($\\leq$ 1 kHz) was\nlarge at temperatures higher than approximately 500 K, which is\nvery likely due to the thermal activation of vacancies in the\nsamples, and is therefore not shown here. The large dielectric loss made it\ndifficult to determine reliable values of the dielectric susceptibility at these\nlow frequencies at the high temperatures. However, as shown in Fig.\\protect\\ref{fig4}, we\nindeed observed that (1) the Curie point does not depend on frequency in the range from 100 Hz to 100 kHz,\nand (2) the dielectric susceptibility measured at frequencies between\n10 kHz and 100 kHz is nearly independent of frequency in the\ntemperature range of 300 K - 870 K.
We thus consider that the\ndielectric susceptibility measured at frequencies between 10 kHz and 100 kHz is free from\nthe effects of vacancies and reflects the intrinsic dielectric\nresponse of CBN alloys, and these data will be used in the later analysis.\nApparently, the frequency and temperature dependence of the dielectric\nresponse in CBN alloys is in sharp contrast to that observed in\nrelaxors.\\protect\\cite{Fu1} Instead, the dielectric behaviors in CBN are\nsimilar to those observed in normal ferroelectrics such as\nBaTiO$_3$.\\protect\\cite{Fu3}\n\nThe temperature dependence of the dielectric susceptibility shown in\nFig.\\protect\\ref{fig4} clearly indicates that the para-ferroelectric phase\ntransition has a thermal hysteresis. The Curie points on heating and\ncooling have different values. The difference in $T_{\\rm c}$\nbetween heating and cooling is 12.4 K for $x=0.19$, and it increases\nto 25.2 K for $x=0.32$. This indicates that the substitution of Ba\nwith Ca in CBN enhances the thermal hysteresis of the para-ferroelectric\nphase transition. On the other hand, this substitution lowers the\nCurie point $T_{\\rm c}$ of CBN alloys, as demonstrated clearly in\nFig.\\protect\\ref{fig3}.\n\nTo further confirm the nature of the thermal phase transition in CBN\nferroelectric alloys, we also observed the change in enthalpy during\nthe phase transition by DSC measurements. The results are shown\nin Fig.\\protect\\ref{fig5}. Although our equipment does not allow us to\ndetermine the heat capacity at the phase transition, we did\nobserve a change in enthalpy during the phase transition. The DSC\nmeasurements also clearly indicate that the phase transition has a thermal\nhysteresis. This result is in good agreement with that observed in\nthe dielectric measurements shown in Fig.\\protect\\ref{fig3}.
Our result\nalso accords well with that reported for the CBN crystal with $x=0.31$,\nwhich shows a sharp peak of the heat capacity at the phase\ntransition.\\protect\\cite{Muehlberg}\n\nAll the facts shown above indicate that CBN alloys undergo a thermal\nphase transition. It can be concluded that the para-ferroelectric\nphase transition in CBN alloys is of first order and has a thermal\nhysteresis of 12.4 K to 25.2 K depending on the Ca concentration.\n\n\n\\subsection{Precursor behaviors}\n\nIn a recent investigation on the CBN single crystal with the congruent\nmelting composition $x$=0.28, birefringence has been\ndemonstrated to occur in a temperature region above $T_{\\rm\nc}$, suggesting the existence of polarization precursors in the\nparaelectric phase. To shed light on the precursor dynamics at $T>T_{\\rm c}$, we performed an analysis of the\ntemperature dependence of the dielectric susceptibilities of CBN alloys\nwithin the solid solution limit. As mentioned in the above section,\nhere we only used the data obtained at 100 kHz, since these data are\nconsidered to be free from the effects of vacancies on the\ndielectric response. As shown in Fig.\\protect\\ref{fig6}(b), the dielectric\nsusceptibility $\\chi'$ obeys the Curie law,\n\\begin{equation}\\label{eq2}\n \\chi'=\\pm C\/(T-T_0)\\;\\; {\\rm for}\\;\\; T<T_{\\rm c}\\;\\;{\\rm or}\\;\\; T>T_{\\rm\n B},\n\\end{equation}\nwhere $C$ is the Curie constant and $T_0$ is the Curie-Weiss temperature.\nUpon cooling, the dielectric susceptibility deviates from the Curie\nlaw at a characteristic temperature. Since the existence of the Burns\ntemperature $T_{\\rm B}$ and the intermediate temperature $T^*$ is\nnot well established for CBN,\\protect\\cite{Pandey,Suzuki} we\ntentatively identify this characteristic temperature with the Burns\ntemperature $T_{\\rm B}$. The values of $T_{\\rm B}$ were estimated to\nbe approximately $T_{\\rm c}$+88 K for $x=0.19$ and $T_{\\rm c}$+143 K\nfor $x=0.32$, respectively, and $T_{\\rm B}$ showed a nearly linear change\nwith composition, as shown\nin the phase diagram (Fig.\\protect\\ref{fig9}(a)).
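The criterion used here to locate $T_{\\rm B}$ can be written compactly as follows\n(a schematic notation introduced for clarity; $\\Delta\\chi'$ denotes the excess\nsusceptibility over the Curie-law background):\n\\begin{equation*}\n\\Delta\\chi'(T)\\equiv\\chi'(T)-\\frac{C}{T-T_{0}}\\;\n\\begin{cases}\n=0, & T>T_{\\rm B},\\\\\n>0, & T_{\\rm c}<T<T_{\\rm B},\n\\end{cases}\n\\end{equation*}\nso that $T_{\\rm B}$ is identified as the highest temperature at which\n$\\Delta\\chi'$ becomes nonzero on cooling.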
The fitting parameters\nobtained from the Curie law are summarized in Fig.\\protect\\ref{fig7}. It should\nbe noticed that the Curie constant in the paraelectric phase ($T>T_{\\rm\nB}$) of CBN alloys has a magnitude similar to that of a BaTiO$_3$\nsingle crystal ($1.5 \\times10^5$ K).\\protect\\cite{Shiozaki}\n\n\n\nFor temperatures higher than $T_{\\rm B}$, the dielectric\nsusceptibility of CBN alloys exactly follows the Curie law, which\nindicates that the dielectric response at these high temperatures\ncan be considered to be due to the lattice dynamics. However, on\ncooling, a deviation from the Curie law was observed from $T\\approx\nT_{\\rm B}$. As mentioned above, this deviation of the dielectric\nsusceptibility from the Curie law can be reasonably attributed to the\npolarization precursor dynamics occurring before the para-ferroelectric\nphase transition in CBN alloys. Therefore, it can be considered that\nthe total dielectric susceptibility is contributed by both the lattice\nand the precursor dynamics for $T_{\\rm c}<T<T_{\\rm B}$, and by the\nlattice dynamics alone for $T>T_{\\rm B}$. On cooling to $T\\approx T_{\\rm\n B}$, polarization precursors emerge in the paraelectric mother\n phase of CBN alloys. On further cooling, these local\n polarizations grow exponentially with decreasing temperature before the\n transition into a ferroelectric phase with a non-centrosymmetric tetragonal structure at\n $T=T_{\\rm c}$. This ferroelectric thermal phase transition is of\n first order, and the transition temperature upon heating is\n $12.4\\sim 25.2$ K higher than that on cooling. As shown in the\n phase diagram, an increase in the Ca concentration leads to a larger thermal\n hysteresis together with a lowering of $T_{\\rm c}$ and $T_{B}$. On the\n other hand, the temperature range of precursor existence becomes\n larger with increasing substitution of the smaller Ca ions for Ba\n ions.
It is still unclear why Ca substitution enlarges the precursor growth region, the\n local polarization developed at $T_{\\rm c}^+$, and the thermal\n hysteresis of the phase transition in CBN alloys.\n A likely reason is related to the increase of vacancies on the\n larger A$_2$ site in the TTB structure due to the increasing substitution of\n Ca for Ba. Because Ca almost exclusively occupies the smaller A$_1$\nsite in the CBN TTB structure, an increase of the Ca concentration naturally\nresults in an increase of vacancies on the\n A$_2$ site initially occupied by Ba.\\protect\\cite{Graetsch1,Graetsch2} An increase of vacancies on the larger A$_2$ site\nwith $x$ may give rise to larger fluctuations of the local lattice\ndistortion in CBN alloys, which are thought to be the source of the\nlocal polarizations. This suggestion is supported by recent\ninvestigations of the variation of lattice strain with $x$ in CBN\nalloys, in which the thermal expansion along the polar axis becomes\nlarger with increasing Ca concentration.\\protect\\cite{Pandey3}\n\nOn the other hand, the substitution of Ca for Ba leads to a nearly\nlinear reduction of $T_{\\rm c}$ and $T_{B}$ with $x$. As is well known\nfor many normal ferroelectrics, pressure can significantly reduce the\nferroelectric transition temperature. In a solid solution, pressure\ncan arise from the chemical substitution of smaller ions for the original\nlarger ones, and this kind of pressure is normally called chemical\npressure. Chemical-pressure effects on the reduction of $T_{\\rm\nc}$ have been well studied in many ferroelectric alloys with the\nperovskite structure, such as\n(Ba$_{1-x}$Sr$_x$)TiO$_3$,\\protect\\cite{Shiozaki} in which the $T_{\\rm\nc}$ reduction is generally considered to be due to the reduction of the\nunit cell volume by the chemical pressure effects.
As mentioned in\nthe above section, in CBN alloys the substitution of smaller Ca ions\nfor Ba ions leads to only an extremely small change in the unit cell\nvolume ($\\Delta=(V(x)-V(0.19))\/V(0.19)$), with $\\Delta$ less than\n0.65\\% within the solid solution limit. Thus, it can be considered\nthat the chemical-pressure effect on the reduction of $T_{\\rm c}$\nis vanishingly small in CBN alloys.\n\nThe spontaneous polarization in CBN has been reported to\noriginate from the Nb displacement along the $c$-axis in the TTB\nstructure.\\protect\\cite{Graetsch1} It is therefore expected that the lattice\ndistortion along the $c$-axis influences the Nb displacement.\nFigure~\\protect\\ref{fig9}(b) shows the variation of $T_{\\rm c}$ with the\n$c$-axis lattice constant. Indeed, one can see that $T_{\\rm c}$ is\nreduced as the $c$-axis lattice constant shrinks. This fact indicates\nthat the shrinkage of the $c$-axis lattice should reduce the space\navailable for the Nb displacement, thus weakening the\nferroelectricity and lowering $T_{\\rm c}$. However, since the reduction\nin the $c$-axis lattice constant is only approximately 0.03~${\\AA}$\nwithin the solid solution, this lattice shrinkage is not enough to\ncompletely explain the large reduction of $T_{\\rm c}$ in CBN alloys.\nOther mechanisms may therefore exist for the large reduction of\n$T_{\\rm c}$ with Ca concentration. The exact reason for this large reduction of $T_{\\rm c}$ with Ca substitution in CBN alloys remains to be\nclarified by further structural investigations.
For $T>T_{\\rm B}$, CBN\nalloys obey the classical Curie law and can be considered to be\nparaelectric. On further cooling toward $T_{\\rm c}$, local\npolarizations occur in the paraelectric mother phase, and these\npolarization precursors grow exponentially as the temperature is lowered.\nFinally, a ferroelectric phase transition is realized at $T_{\\rm c}$. This thermal\nferroelectric phase transition is essentially of first order. A\nphase diagram was then established for CBN ferroelectric alloys. These\nfindings provide new insights into the underlying\nphysics of CBN ferroelectric alloys.\n\n\n\n\n\n\n\\newpage\n\n\n\\section{Introduction}\n\nThis is the author's guide to \\revtex~4, the preferred submission\nformat for all APS journals. This guide is intended to be a concise\nintroduction to \\revtex~4. The documentation has been separated out\ninto smaller units to make it easier to locate essential\ninformation.\n\nThe following documentation is also part of the APS \\revtex~4\ndistribution. Updated versions of these will be maintained at\nthe \\revtex~4 homepage located at \\url{http:\/\/publish.aps.org\/revtex4\/}.\n\\begin{itemize}\n\\item \\textit{APS Compuscript Guide for \\revtex~4}\n\\item \\textit{\\revtex~4 Command and Options Summary}\n\\item \\textit{\\revtex~4 Bib\\TeX\\ Guide}\n\\item \\textit{Differences between \\revtex~4 and \\revtex~3}\n\\end{itemize}\nThis guide assumes a working \\revtex~4\ninstallation. Please see the installation guide included with the\ndistribution.\n\nThe \\revtex\\ system for \\LaTeX\\ began its development in 1986 and has\ngone through three major revisions since then. All versions prior to\n\\revtex~4 were based on \\LaTeX2.09 and, until now, \\revtex\\ did not\nkeep pace with the advances of the \\LaTeX\\ community and thus became\ninconvenient to work with. 
\\revtex~4 is designed to remedy this by\nincorporating the following design goals:\n\n\\begin{itemize}\n\\item\nMake \\revtex\\ fully compatible with \\LaTeXe; it is now a \\LaTeXe\\\ndocument class, similar in function to the standard\n\\classname{article} class.\n\n\\item\nRely on standard \\LaTeXe\\ packages for common tasks, e.g.,\n\\classname{graphicx},\n\\classname{color}, and\n\\classname{hyperref}.\n\n\\item\nAdd or improve macros to support translation to tagged formats such as\nXML and SGML. This added markup will be key to enhancing the\npeer-review process and lowering production costs.\n\n\\item\nProvide a closer approximation to the typesetting style used in\n\\emph{Physical Review}.\n\n\\item\nIncorporate new features, such as hypertext, to make \\revtex\\ a\nconvenient and desirable e-print format.\n\n\\item\nRelax the restrictions in \\revtex\\ that had only been necessary for\ntypesetting journal camera-ready copy.\n\\end{itemize}\n\nTo meet these goals, \\revtex~4 is a complete rewrite with an emphasis\non maintainability so that it will be easier to provide enhancements.\n\nThe \\revtex~4 distribution includes both a template\n(\\file{template.aps}) and a sample document (\\file{apssamp.tex}).\nThe template is a good starting point for a manuscript. 
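\n\nAs an illustration only, a minimal manuscript skeleton in the spirit of the template might look like the following (the class options and placeholder text here are an example, not the actual contents of \\file{template.aps}):\n\\begin{verbatim}\n\\documentclass[prl,twocolumn]{revtex4}\n\\begin{document}\n\\title{Sample title}\n\\author{An Author}\n\\affiliation{An Institution}\n\\begin{abstract}\nText of abstract.\n\\end{abstract}\n\\maketitle\nBody of the paper.\n\\end{document}\n\\end{verbatim}\nEach of these elements is described in the sections that follow.\n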
In the\nfollowing sections are instructions that should be sufficient for\ncreating a paper using \\revtex~4.\n\n\\subsection{Submitting to APS Journals}\n\nAuthors using \\revtex~4 to prepare a manuscript for submission to\n\\textit{Physical Review} or \\textit{Reviews of Modern Physics} \nmust also read the companion document \\textit{APS Compuscript Guide\nfor \\revtex~4}\ndistributed with \\revtex\\ and follow the guidelines detailed there.\n\nFurther information about the compuscript program of the American\nPhysical Society may be found at \\url{http:\/\/publish.aps.org\/ESUB\/}.\n\n\\subsection{Contact Information}\\label{sec:resources}%\nAny bugs, problems, or inconsistencies should be reported to\n\\revtex\\ support at \\verb+revtex@aps.org+.\nReports should include information on the error and a \\textit{small}\nsample document that manifests the problem if possible (please don't\nsend large files!).\n\n\\section{Some \\LaTeXe\\ Basics}\nA primary design goal of \\revtex~4 was to make it as compatible with\nstandard \\LaTeXe\\ as possible so that authors may take advantage of all\nthat \\LaTeXe\\ offers. In keeping with this goal, much of the special\nformatting that was built into earlier versions of \\revtex\\ is now\naccomplished through standard \\LaTeXe\\ macros or packages. The books\nin the bibliography provide extensive coverage of all topics\npertaining to preparing documents under \\LaTeXe. They are highly recommended.\n\nTo accomplish its goals, \\revtex~4 must sometimes patch the underlying\n\\LaTeX\\ kernel. This means that \\revtex~4 requires a fairly recent version of\n\\LaTeXe. Versions prior to 1996\/12\/01 may not work\ncorrectly. 
\\revtex~4 will be maintained to be compatible with future\nversions of \\LaTeXe.\n\n\\subsection{Useful \\LaTeXe\\ Markup}\n\\LaTeXe\\ markup is the preferred way to accomplish many basic tasks.\n\n\\subsubsection{Fonts}\n\nBecause \\revtex~4 is based upon \\LaTeXe, it inherits all of the\nmacros used for controlling fonts. Of particular importance are the\n\\LaTeXe\\ macros \\cmd{\\textit}, \\cmd{\\textbf}, \\cmd{\\texttt} for changing to\nan italic, bold, or typewriter font, respectively. One should always\nuse these macros rather than the lower-level \\TeX\\ macros \\cmd{\\it},\n\\cmd{\\bf}, and \\cmd{\\tt}. The \\LaTeXe\\ macros offer\nimprovements such as better italic correction and scaling in super-\nand subscripts. Table~\\ref{tab:fonts}\nsummarizes the font selection commands in \\LaTeXe.\n\n\\begin{table}\n\\caption{\\label{tab:fonts}\\LaTeXe\\ font commands}\n\\begin{ruledtabular}\n\\begin{tabular}{ll}\n\\multicolumn{2}{c}{\\textbf{Text Fonts}}\\\\\n\\textbf{Font command} & \\textbf{Explanation} \\\\\n\\cmd\\textit\\marg{text} & Italics\\\\\n\\cmd\\textbf\\marg{text} & Boldface\\\\\n\\cmd\\texttt\\marg{text} & Typewriter\\\\\n\\cmd\\textrm\\marg{text} & Roman\\\\\n\\cmd\\textsl\\marg{text} & Slanted\\\\\n\\cmd\\textsf\\marg{text} & Sans Serif\\\\\n\\cmd\\textsc\\marg{text} & Small Caps\\\\\n\\cmd\\textmd\\marg{text} & Medium Series\\\\\n\\cmd\\textnormal\\marg{text} & Normal Series\\\\\n\\cmd\\textup\\marg{text} & Upright Series\\\\\n &\\\\\n\\multicolumn{2}{c}{\\textbf{Math Fonts}}\\\\\n\\cmd\\mathit\\marg{text} & Math Italics\\\\\n\\cmd\\mathbf\\marg{text} & Math Boldface\\\\\n\\cmd\\mathtt\\marg{text} & Math Typewriter\\\\\n\\cmd\\mathsf\\marg{text} & Math Sans Serif\\\\\n\\cmd\\mathcal\\marg{text} & Calligraphic\\\\\n\\cmd\\mathnormal\\marg{text} & Math Normal\\\\\n\\cmd\\bm\\marg{text}& Bold math for Greek letters\\\\\n & and other symbols\\\\\n\\cmd\\mathfrak\\marg{text}\\footnotemark[1] & 
Fraktur\\\\\n\\cmd\\mathbb\\marg{text}\\footnotemark[1] & Blackboard Bold\\\\\n\\end{tabular}\n\\end{ruledtabular}\n\\footnotetext[1]{Requires \\classname{amsfonts} or \\classname{amssymb} class option}\n\\end{table}\n\n\\subsubsection{User-defined macros}\n\\LaTeXe\\ provides several macros that enable users to easily create new\nmacros for use in their manuscripts:\n\\begin{itemize}\n\\footnotesize\n\\item \\cmd\\newcommand\\marg{\\\\command}\\oarg{narg}\\oarg{opt}\\marg{def} \n\\item \\cmd\\newcommand\\verb+*+\\marg{\\\\command}\\oarg{narg}\\oarg{opt}\\marg{def}\n\\item \\cmd\\renewcommand\\marg{\\\\command}\\oarg{narg}\\oarg{opt}\\marg{def}\n\\item \\cmd\\renewcommand\\verb+*+\\marg{\\\\command}\\oarg{narg}\\oarg{opt}\\marg{def}\n\\item \\cmd\\providecommand\\marg{\\\\command}\\oarg{narg}\\oarg{opt}\\marg{def}\n\\item \\cmd\\providecommand\\verb+*+\\marg{\\\\command}\\oarg{narg}\\oarg{opt}\\marg{def}\n\\end{itemize}\nHere \\meta{\\\\command} is the name of the macro being defined,\n\\meta{narg} is the number of arguments the macro takes,\n\\meta{opt} are optional default values for the arguments, and\n\\meta{def} is the actual macro definition. \\cmd\\newcommand\\ creates a\nnew macro, \\cmd\\renewcommand\\ redefines a previously defined macro,\nand \\cmd\\providecommand\\ will define a macro only if it hasn't\nbeen defined previously. The *-ed versions are an optimization that\nindicates that the macro arguments will always be ``short'' arguments. This is\nalmost always the case, so the *-ed versions should be used whenever\npossible.\n\nThe use of these macros is preferred over using plain \\TeX's low-level\nmacros such as\n\\cmd\\def{}, \\cmd\\edef{}, and \\cmd\\gdef{}. APS authors must follow the\n\\textit{APS Compuscript Guide for \\revtex~4} when defining macros.\n\n\\subsubsection{Symbols}\n\n\\LaTeXe\\ has added some convenient commands for some special symbols\nand effects. These are summarized in Table~\\ref{tab:special}. 
See\n\\cite{Guide} for details.\n\n\\begin{table}\n\\caption{\\label{tab:special}\\LaTeXe\\ commands for special symbols and effects}\n\\begin{ruledtabular}\n\\begin{tabular}{lc}\nCommand & Symbol\/Effect\\\\\n\\cmd\\textemdash & \\textemdash\\\\\n\\cmd\\textendash & \\textendash\\\\\n\\cmd\\textexclamdown & \\textexclamdown\\\\\n\\cmd\\textquestiondown & \\textquestiondown\\\\\n\\cmd\\textquotedblleft & \\textquotedblleft\\\\\n\\cmd\\textquotedblright & \\textquotedblright\\\\\n\\cmd\\textquoteleft & \\textquoteleft\\\\\n\\cmd\\textquoteright & \\textquoteright\\\\\n\\cmd\\textbullet & \\textbullet\\\\\n\\cmd\\textperiodcentered & \\textperiodcentered\\\\\n\\cmd\\textvisiblespace & \\textvisiblespace\\\\\n\\cmd\\textcompwordmark & Break a ligature\\\\\n\\cmd\\textcircled\\marg{char} & Circle a character\\\\\n\\end{tabular}\n\\end{ruledtabular}\n\\end{table}\n\n\\LaTeXe\\ also removed some symbols that were previously automatically\navailable in \\LaTeX 2.09. These symbols are now contained in a\nseparate package \\classname{latexsym}. To use these symbols, include\nthe package using:\n\\begin{verbatim}\n\\usepackage{latexsym}\n\\end{verbatim}\n\n\\subsection{Using \\LaTeXe\\ packages with \\revtex}\\label{sec:usepackage}%\n\nMany \\LaTeXe\\ packages are available, for instance, on CTAN at\n\\url{ftp:\/\/ctan.tug.org\/tex-archive\/macros\/latex\/required\/}\nand at\n\\url{ftp:\/\/ctan.tug.org\/tex-archive\/macros\/latex\/contrib\/}\nor may be available on other distribution media, such as the \\TeX\\\nLive CD-ROM \\url{http:\/\/www.tug.org\/texlive\/}. Some of these packages\nare automatically loaded by \\revtex~4 when certain class options are\ninvoked and are, thus, ``required''. They will either be distributed\nwith \\revtex\\ or are already included with a standard \\LaTeXe\\\ndistribution.\n\nRequired packages are automatically loaded by \\revtex\\ on an as-needed\nbasis. Other packages should be loaded using the\n\\cmd\\usepackage\\ command. 
To load the\n\\classname{hyperref} package, the document preamble might look like:\n\\begin{verbatim}\n\\documentclass{revtex4}\n\\usepackage{hyperref}\n\\end{verbatim}\n\nSome common (and very useful) \\LaTeXe\\ packages are \\textit{a priori}\nimportant enough that \\revtex~4 has been designed to be specifically\ncompatible with them. \nA bug stemming from the use of one of these packages in\nconjunction with any of the APS journals may be reported by contacting\n\\revtex\\ support.\n\\begin{description}\n\\item[\\textbf{AMS packages}] \\revtex~4 is compatible with and depends\n upon the AMS packages\n\\classname{amsfonts},\n\\classname{amssymb}, and\n\\classname{amsmath}. In fact, \\revtex~4 requires use of these packages\nto accomplish some common tasks. See Section~\\ref{sec:math} for more.\n\\revtex~4 requires version 2.0 or higher of the AMS-\\LaTeX\\ package.\n\n\\item[\\textbf{array and dcolumn}]\nThe \\classname{array} and \\classname{dcolumn} packages are part of\n\\LaTeX's required suite of packages. \\classname{dcolumn} is required\nto align table columns on decimal points (and it in turn depends upon\nthe \\classname{array} package).\n\n\\item[\\textbf{longtable}]\n\\file{longtable.sty} may be used for large tables that will span more than one\npage. \\revtex~4 dynamically applies patches to \\file{longtable.sty} so that\nit will work in two-column mode.\n\n\\item[\\textbf{hyperref}] \\file{hyperref.sty} is a package by Sebastian Rahtz that is\nused for putting hypertext links into \\LaTeXe\\ documents.\n\\revtex~4 has hooks to allow e-mail addresses and URLs to become\nhyperlinks if \\classname{hyperref} is loaded.\n\\end{description}\n\nOther packages will conflict with \\revtex~4 and should be\navoided. Usually such a conflict arises because the package adds\nenhancements that \\revtex~4 already includes. 
Here are some common\npackages that clash with \\revtex~4:\n\\begin{description}\n\\item[\\textbf{multicol}] \\file{multicol.sty} is a package by Frank Mittelbach\nthat adds support for multiple columns. In fact, early versions of\n\\revtex~4 used \\file{multicol.sty} for precisely this. However, to\nimprove the handling of floats, \\revtex~4 now has its own macros for\ntwo-column layout. Thus, it is not necessary to use \\file{multicol.sty}.\n\n\\item[\\textbf{cite}] Donald Arseneau's \\file{cite.sty} is often used to provide\nsupport for sorting a \\cmd\\cite\\ command's arguments into numerical\norder and to collapse consecutive runs of reference numbers. \\revtex~4\nhas this functionality built-in already via the \\classname{natbib} package.\n\n\\item[\\textbf{endfloat}] The same functionality can be accomplished\nusing the \\classoption{endfloats} class option.\n\n\\item[\\textbf{float}] \\revtex~4 already contains a lot of this\nfunctionality.\n\\end{description}\n\n\\section{The Document Preamble}\n\nThe preamble of a \\LaTeX\\ document is the set of commands that precede\nthe \\envb{document} line. It contains a\n\\cmd\\documentclass\\ line to load the \\revtex~4 class (\\textit{i.~e.},\nall of the \\revtex~4 macro definitions), \\cmd\\usepackage\\ macros to\nload other macro packages, and other macro definitions.\n\n\\subsection{The \\emph{documentclass} line}\nThe basic formatting of the manuscript is controlled by setting\n\\emph{class options} using\n\\cmd\\documentclass\\oarg{options}\\aarg{\\classname{revtex4}}.\nThe macro \\cmd\\documentclass\\ \nreplaces the \\cmd\\documentstyle\\ macro of \\LaTeX2.09. The optional\narguments that appear in the square brackets control the layout of the\ndocument. 
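\nFor example, a \\cmd\\documentclass\\ line selecting a journal style and layout might read as follows (the particular options shown are only an illustration):\n\\begin{verbatim}\n\\documentclass[preprint,pra]{revtex4}\n\\end{verbatim}\n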
At this point, one only needs to choose a journal style\n(\\classoption{pra}, \\classoption{prb},\n\\classoption{prc}, \\classoption{prd},\n\\classoption{pre}, \\classoption{prl}, \\classoption{prstab},\nand \\classoption{rmp}) and either \\classoption{preprint} or\n\\classoption{twocolumn}. Usually, one would want to use\n\\classoption{preprint} for draft papers. \\classoption{twocolumn} gives\nthe \\emph{Physical Review} look and feel. Paper size options are\navailable as well. In particular, \\classoption{a4paper} is available,\nalong with the rest of the standard \\LaTeX\\ paper sizes. A\nfull list of class options is given in the \\textit{\\revtex~4 Command\nand Options Summary}.\n\n\\subsection{Loading other packages}\nOther packages may be loaded into a \\revtex~4 document by using the\nstandard \\LaTeXe\\ \\cmd\\usepackage\\ command. For instance, to load\nthe \\classoption{graphics} package, one would use\n\\verb+\\usepackage{graphics}+.\n\n\\section{The Front Matter}\\label{sec:front}\n\nAfter choosing the basic look and feel of the document by selecting\nthe appropriate class options and loading in whatever other macros are\nneeded, one is ready to move on to creating a new manuscript. After\nthe preamble, be sure to put in a \\envb{document} line (and put\nin an \\enve{document} as well). This section describes the macros\n\\revtex~4 provides for formatting the front matter of the\narticle. The behavior and usage of these macros can be quite\ndifferent from those provided in either \\revtex~3 or \\LaTeXe. See the\nincluded document \\textit{Differences between \\revtex~4 and \\revtex~3} for an\noverview of these differences.\n\n\\subsection{Setting the title}\n\nThe title of the manuscript is simply specified by using the\n\\cmd\\title\\aarg{title} macro. A \\verb+\\\\+ may be used to put a line\nbreak in a long title.\n\n\\subsection{Specifying a date}%\n\nThe \\cmd\\date\\marg{date} command outputs the date on the\nmanuscript. 
Using \\cmd\\today\\ will cause \\LaTeX{} to insert the\ncurrent date whenever the file is run:\n\\begin{verbatim}\n\\date{\\today}\n\\end{verbatim}\n\n\\subsection{Specifying authors and affiliations}\n\nThe macros for specifying authors and their affiliations have\nchanged significantly for \\revtex~4. They have been improved to save\nlabor for authors and in production. Authors and affiliations are\narranged into groupings called, appropriately enough, \\emph{author\ngroups}. Each author group is a set of authors who share the same set\nof affiliations. Author names are specified with the \\cmd\\author\\\nmacro while affiliations (or addresses) are specified with the\n\\cmd\\affiliation\\ macro. Author groups are specified by sequences of\n\\cmd\\author\\ macros followed by \\cmd\\affiliation\\ macros. An\n\\cmd\\affiliation\\ macro applies to all previously specified\n\\cmd\\author\\ macros which don't already have an affiliation supplied.\n\nFor example, if Bugs Bunny and Roger Rabbit are both at Looney Tune\nStudios, while Mickey Mouse is at Disney World, the markup would be:\n\\begin{verbatim}\n\\author{Bugs Bunny}\n\\author{Roger Rabbit}\n\\affiliation{Looney Tune Studios}\n\\author{Mickey Mouse}\n\\affiliation{Disney World}\n\\end{verbatim}\nThe default is to display this as \n\\begin{center}\nBugs Bunny and Roger Rabbit\\\\\n\\emph{Looney Tune Studios}\\\\\nMickey Mouse\\\\\n\\emph{Disney World}\\\\\n\\end{center}\nThis layout style for displaying authors and their affiliations is\nchosen by selecting the class option\n\\classoption{groupedaddress}. This option is the default for all APS\njournal styles, so it does not need to be specified explicitly.\nThe other major way of displaying this\ninformation is to use superscripts on the authors and\naffiliations. This can be accomplished by selecting the class option\n\\classoption{superscriptaddress}. 
To achieve the display\n\\begin{center}\nBugs Bunny,$^{1}$ Roger Rabbit,$^{1,2}$ and Mickey Mouse$^{2}$\\\\\n\\emph{$^{1}$Looney Tune Studios}\\\\\n\\emph{$^{2}$Disney World}\\\\\n\\end{center}\none would use the markup\n\\begin{verbatim}\n\\author{Bugs Bunny}\n\\affiliation{Looney Tune Studios}\n\\author{Roger Rabbit}\n\\affiliation{Looney Tune Studios}\n\\affiliation{Disney World}\n\\author{Mickey Mouse}\n\\affiliation{Disney World}\n\\end{verbatim}\n\nNote that \\revtex~4 takes care of the commas and \\emph{and}'s that join\nthe author names together, the font selection, and any\nsuperscript numbering. Only the author names and affiliations should\nbe given within their respective macros.\n\nThere is a third class option, \\classoption{unsortedaddress}, for\ncontrolling author\/affiliation display. The default\n\\classoption{groupedaddress} will actually sort authors into the\nappropriate author groups if one chooses to specify an affiliation for\neach author. The markup:\n\\begin{verbatim}\n\\author{Bugs Bunny}\n\\affiliation{Looney Tune Studios}\n\\author{Mickey Mouse}\n\\affiliation{Disney World}\n\\author{Roger Rabbit}\n\\affiliation{Looney Tune Studios}\n\\end{verbatim}\nwill result in the same display as for the first case given\nabove even though Roger Rabbit is specified after Mickey Mouse. To\navoid Roger Rabbit being moved into the same author group as Bugs\nBunny, use the\n\\classoption{unsortedaddress} option instead. In general, it is safest\nto list authors in the order they should appear and specify\naffiliations for multiple authors rather than one at a time. This will\nafford the most independence for choosing the display option. Finally,\nit should be mentioned that the affiliations for the\n\\classoption{superscriptaddress} are presented and numbered \nin the order that they are encountered. This means that the order\nwill usually follow the order of the authors. 
An alternative ordering\ncan be forced by including a list of \\cmd\\affiliation\\ commands before\nthe first \\cmd{\\author} in the desired order. Then use the exact same\ntext for each affiliation when specifying them for each author.\n\nIf an author doesn't have an affiliation, the \\cmd\\noaffiliation\\\nmacro may be used in the place of an \\cmd\\affiliation\\ macro.\n\n\n\\subsubsection{Collaborations}\n\nA collaboration name can be specified with the \\cmd\\collaboration\\\nmacro. This is very similar to the \\cmd\\author\\ macro, but it can only\nbe used with the class option \\classoption{superscriptaddress}. The\n\\cmd\\collaboration\\ macro should appear at the end of the list of\nauthors. The collaboration name will appear centered in parentheses\nbetween the list of authors and the list of\naffiliations. Because collaborations\ndon't normally have affiliations, one needs to follow the\n\\cmd\\collaboration\\ with \\cmd\\noaffiliation.\n\n\\subsubsection{Footnotes for authors, collaborations, affiliations or title}\\label{sec:footau}\n\nOften one wants to specify additional information associated with an\nauthor, collaboration, or affiliation, such as an e-mail address, an\nalternate affiliation, or some other ancillary information. \n\\revtex~4 introduces several new macros just for this purpose. They\nare:\n\\begin{itemize}\n\\item\\cmd\\email\\oarg{optional text}\\aarg{e-mail address}\n\\item\\cmd\\homepage\\oarg{optional text}\\aarg{URL}\n\\item\\cmd\\altaffiliation\\oarg{optional text}\\aarg{affiliation}\n\\item\\cmd\\thanks\\aarg{miscellaneous text}\n\\end{itemize}\nIn the first three, the \\emph{optional text} will be prepended before the\nactual information specified in the required argument. \\cmd\\email\\ and\n\\cmd\\homepage\\ each have a default text for their optional arguments\n(`Electronic address:' and `URL:' respectively). The \\cmd\\thanks\\\nmacro should only be used if none of the other three applies. 
Any\nauthor name can have multiple occurrences of these four macros. Note\nthat unlike the\n\\cmd\\affiliation\\ macro, these macros only apply to the \\cmd\\author\\\nthat directly precedes them. Any \\cmd\\affiliation\\ \\emph{must} follow\nthe other author-specific macros. A typical usage might be as follows:\n\\begin{verbatim}\n\\author{Bugs Bunny}\n\\email[E-mail me at: ]{bugs@looney.com}\n\\homepage[Visit: ]{http:\/\/looney.com\/}\n\\altaffiliation[Permanent address: ]\n {Warner Brothers}\n\\affiliation{Looney Tunes}\n\\end{verbatim}\nThis would result in the footnote ``E-mail me at: \\texttt{bugs@looney.com},\nVisit: \\texttt{http:\/\/looney.com\/}, Permanent address: Warner\nBrothers'' being attached to Bugs Bunny. Note that:\n\\begin{itemize}\n\\item Only an e-mail address, URL, or affiliation should go in the\nrequired argument in the curly braces.\n\\item The font is automatically taken care of.\n\\item An explicit space is needed at the end of the optional text if one is\ndesired in the output.\n\\item Use the optional arguments to provide customized\ntext only if there is a good reason to.\n\\end{itemize}\n\nThe \\cmd\\collaboration, \\cmd\\affiliation, or even \\cmd\\title\\ can\nalso have footnotes attached via these commands. If any ancillary data\n(\\cmd\\thanks, \\cmd\\email, \\cmd\\homepage, or\n\\cmd\\altaffiliation) are given in the wrong context (e.g., before any\n\\cmd\\title, \\cmd\\author, \\cmd\\collaboration, or \\cmd\\affiliation\\\ncommand has been given), then a warning is given in the \\TeX\\ log, and\nthe command is ignored.\n\nDuplicate sets of ancillary data are merged, giving rise to a single\nshared footnote. However, this only applies if the ancillary data are\nidentical: even the order of the commands specifying the data must be\nidentical. 
Thus, for example, two authors can share a single footnote\nindicating a group e-mail address.\n\nDuplicate \\cmd\\affiliation\\ commands may be given in the course of the\nfront matter, without the danger of producing extraneous affiliations\non the title page. However, ancillary data should be specified for\nonly the first instance of any particular institution's\n\\cmd\\affiliation\\ command; a later instance with different ancillary\ndata will result in a warning in the \\TeX\\ log.\n\nIt is preferable to arrange authors into\nsets. Within each set all the authors share the same group of\naffiliations. For each author, give the \\cmd\\author\\ (and appropriate\nancillary data), then follow this author group with the needed group\nof \\cmd\\affiliation\\ commands.\n\nIf affiliations have been listed before the first\n\\cmd\\author\\ macro to ensure a particular ordering, be sure\nthat any later \\cmd\\affiliation\\ command for the given institution is\nan exact copy of the first, and also ensure that no ancillary data is\ngiven in these later instances.\n\n\nEach APS journal has a default behavior for the placement of these\nancillary information footnotes. The \\classoption{prb} option puts all\nsuch footnotes at the start of the bibliography while the other\njournal styles display them on the first page. One can override a\njournal style's default behavior by specifying explicitly the class\noption\n\\classoption{bibnotes} (puts the footnotes at the start of the\nbibliography) or \\classoption{nobibnotes} (puts them on the first page).\n\n\\subsubsection{Specifying first names and surnames}\n\nMany APS authors have names in which either the surname appears first\nor in which the surname is made up of more than one name. To ensure\nthat such names are accurately captured for indexing and other\npurposes, the \\cmd\\surname\\ macro should be used to indicate which portion\nof a name is the surname. 
Similarly, there is a \\cmd\\firstname\\ macro\nas well, although usage of \\cmd\\surname\\ should be sufficient. If an\nauthor's surname is a single name and written last, it is not\nnecessary to use these macros. These macros do nothing but indicate\nhow a name should be indexed. Here are some examples:\n\\begin{verbatim}\n\\author{Andrew \\surname{Lloyd Weber}}\n\\author{\\surname{Mao} Tse-Tung}\n\\end{verbatim}\n\n\\subsection{The abstract}\nAn abstract for a paper is specified by using the \\env{abstract}\nenvironment:\n\\begin{verbatim}\n\\begin{abstract}\nText of abstract\n\\end{abstract}\n\\end{verbatim}\nNote that in \\revtex~4 the abstract must be specified before the\n\\cmd\\maketitle\\ command and there is no need to embed it in an explicit\nminipage environment.\n\n\\subsection{PACS codes}\nAPS authors are asked to supply suggested PACS codes with their\nsubmissions. The \\cmd\\pacs\\ macro is provided as a way to do this:\n\\begin{verbatim}\n\\pacs{23.23.+x, 56.65.Dy}\n\\end{verbatim}\nThe actual display of the PACS numbers below the abstract is\ncontrolled by two class options: \\classoption{showpacs} and\n\\classoption{noshowpacs}. In particular, this is now independent of\nthe \\classoption{preprint} option. \\classoption{showpacs} must be\nexplicitly included in the class options to display the PACS codes.\n\n\\subsection{Keywords}\nA \\cmd\\keywords\\ macro may also be used to indicate keywords for the\narticle. \n\\begin{verbatim}\n\\keywords{nuclear form; yrast level}\n\\end{verbatim}\nThis will be displayed below the abstract and PACS (if supplied). Like\nPACS codes, the actual display of the keywords is controlled by\ntwo class options: \\classoption{showkeys} and\n\\classoption{noshowkeys}. An explicit \\classoption{showkeys} must be\nincluded in the \\cmd\\documentclass\\ line to display the keywords.\n\n\\subsection{Institutional report numbers}\nInstitutional report numbers can be specified using the \\cmd\\preprint\\\nmacro. 
These will be displayed in the upper left-hand corner of the\nfirst page. Multiple \\cmd\\preprint\\ macros may be supplied (space is\nlimited though, so three or fewer may actually fit). \n\n\\subsection{maketitle}\nAfter specifying the title, authors, affiliations, abstract, PACS\ncodes, and report numbers, the final step for formatting the front\nmatter of the manuscript is to execute the \\cmd\\maketitle\\ macro by\nsimply including it:\n\\begin{verbatim}\n\\maketitle\n\\end{verbatim}\nThe \\cmd\\maketitle\\ macro must follow all of the macros listed\nabove. The macro will format the front matter in accordance with the various\nclass options that were specified in the\n\\cmd\\documentclass\\ line (either implicitly through defaults or\nexplicitly).\n\n\\section{The body of the paper}\n\nFor typesetting the body of a paper, \\revtex~4 relies heavily on\nstandard \\LaTeXe\\ and other packages (particularly those that are part\nof AMS-\\LaTeX). Users unfamiliar with these packages should read the\nfollowing sections carefully. 
\n\n\\subsection{Section headings}\n\nSection headings are input as in \\LaTeX.\nThe output is similar, with a few extra features.\n\nFour levels of headings are available in \\revtex{}:\n\\begin{quote}\n\\cmd\\section\\marg{title text}\\\\\n\\cmd\\subsection\\marg{title text}\\\\\n\\cmd\\subsubsection\\marg{title text}\\\\\n\\cmd\\paragraph\\marg{title text}\n\\end{quote}\n\nUse the starred form of the command to suppress the automatic numbering; e.g.,\n\\begin{verbatim}\n\\section*{Introduction}\n\\end{verbatim}\n\nTo label a section heading for cross referencing, best practice is to\nplace the \\cmd\\label\\marg{key} within the argument specifying the heading:\n\\begin{verbatim}\n\\section{\\label{sec:intro}Introduction}\n\\end{verbatim}\n\nIn some journal substyles, such as those of the APS,\nall text in the \\cmd\\section\\ command is automatically set uppercase.\nIf a lowercase letter is needed, use \\cmd\\lowercase\\aarg{x}.\nFor example, to use ``He'' for helium in a \\cmd\\section\\marg{title text} command, type\n\\verb+H+\\cmd\\lowercase\\aarg{e} in \\marg{title text}.\n\nUse \\cmd\\protect\\verb+\\\\+ to force a line break in a section heading.\n(Fragile commands must be protected in section headings, captions, and\nfootnotes, and \\verb+\\\\+ is a fragile command.)\n\n\\subsection{Paragraphs and General Text}\n\nParagraphs always end with a blank input line. Because \\TeX\\\nautomatically calculates linebreaks and word hyphenation in a\nparagraph, it is not necessary to force linebreaks or hyphenation. Of\ncourse, compound words should still be explicitly hyphenated, e.g.,\n``author-prepared copy.''\n\nUse directional quotes for quotation marks around quoted text\n(\\texttt{``xxx''}), not straight double quotes (\\texttt{\"xxx\"}).\nFor opening quotes, use one or two backquotes; for closing quotes,\nuse one or two forward quotes (apostrophes).\n\n\\subsection{One-column vs. 
two-column}\\label{sec:widetext}\n\nOne of the hallmarks of \\textit{Physical Review} is its two-column\nformatting and so one of the \\revtex~4 design goals is to make it easier to\nacheive the \\textit{Physical Review} look and feel. In particular, the\n\\classoption{twocolumn} option will take care of formatting the front matter\n(including the abstract) as a single column. \\revtex~4 has its own\nbuilt-in two-column formatting macros to provide well-balanced columns\nas well as reasonable control over the placement of floats in either\none- or two-column modes.\n\nOccasionally it is necessary to change the formatting from two-column to\none-column to better accomodate very long equations that are more\neasily read when typeset to the full width of the page. This is\naccomplished using the \\env{widetext} environment:\n\\begin{verbatim}\n\\begin{widetext}\nlong equation goes here\n\\end{widetext}\n\\end{verbatim}\nIn two-column mode, this will temporarily return to one-column mode,\nbalancing the text before the environment into two short columns, and\nreturning to two-column mode after the environment has\nfinished. \\revtex~4 will also add horizontal rules to guide the\nreader's eye through what may otherwise be a confusing break in the\nflow of text. The\n\\env{widetext} environment has no effect on the output under the \n\\classoption{preprint} class option because this already uses\none-column formatting.\n\nUse of the \\env{widetext} environment should be restricted to the bare\nminimum of text that needs to be typeset this way. However short pieces\nof paragraph text and\/or math between nearly contiguous wide equations\nshould be incorporated into the surrounding wide sections.\n\nLow-level control over the column grid can be accomplished with the\n\\cmd\\onecolumngrid\\ and \\cmd\\twocolumngrid\\ commands. Using these, one\ncan avoid the horizontal rules added by \\env{widetext}. These commands\nshould only be used if absolutely necessary. 
Wide figures and tables\nshould be accommodated using the proper \verb+*+ environments.\n\n\subsection{Cross-referencing}\label{sec:xrefs}\n\n\revtex{} inherits the \LaTeXe\ features for labeling and cross-referencing\nsection headings, equations, tables, and figures. This section\ncontains a simplified explanation of these cross-referencing features.\nThe proper usage in the context of section headings, equations,\ntables, and figures is discussed in the appropriate sections.\n\nCross-referencing depends upon the use of ``tags,'' which are defined by\nthe user. The \cmd\label\marg{key} command is used to identify tags for\n\revtex. Tags are strings of characters that serve to label section\nheadings, equations, tables, and figures, replacing explicit,\nby-hand numbering.\n\nFiles that use cross-referencing (and almost all manuscripts do)\nneed to be processed through \revtex\ at least twice to\nensure that the tags have been properly linked to appropriate numbers.\nIf any tags are added in subsequent editing sessions, \n\LaTeX{} will display a warning message in the log file that ends with\n\texttt{... Rerun to get cross-references right}.\nRunning the file through \revtex\ again (possibly more than once) will\nresolve the cross-references. If the error message persists, check\nthe labels; the same \marg{key} may have been used to label more than one\nobject.\n\nAnother \LaTeX\ warning is \texttt{There were undefined references},\nwhich indicates the use of a key in a \cmd\ref\ without ever\ndefining it in a \cmd\label\ statement.\n\n\revtex{} performs autonumbering exactly as in standard \LaTeX.\nWhen the file is processed for the first time,\n\LaTeX\ creates an auxiliary file (with the \file{.aux} extension) that \nrecords the value of each \meta{key}. Each subsequent run retrieves\nthe proper number from the auxiliary file and updates the auxiliary\nfile. 
At the end of each run, any change in the value of a \meta{key}\nproduces a \LaTeX\ warning message.\n\nNote that with footnotes appearing in the bibliography, extra passes\nof \LaTeX\ may be needed to resolve all cross-references. For\ninstance, putting a \cmd\cite\ inside a \cmd\footnote\ will require at\nleast three passes.\n\nUsing the \classname{hyperref} package to create hyperlinked PDF files\nwill cause reference ranges to be expanded to list every\nreference in the range. This behavior can be avoided by using the\n\classname{hypernat} package available from \url{www.ctan.org}.\n\n\subsection{Acknowledgments}\nUse the \env{acknowledgments} environment for an acknowledgments\nsection. Depending on the journal substyle, this element may be\nformatted as an unnumbered section title \textit{Acknowledgments} or\nsimply as a paragraph. Please note the spelling of\n``acknowledgments''.\n\begin{verbatim}\n\begin{acknowledgments}\nThe authors would like to thank...\n\end{acknowledgments}\n\end{verbatim}\n\n\subsection{Appendices}\nThe \cmd\appendix\ macro is used to mark the start of the appendices;\nsections that follow it are formatted and numbered as appendices.\n\section{Introduction}\nThis document gives a brief summary of how \revtex~4 is different from\nwhat authors may already be familiar with. The two primary design\ngoals for \revtex~4 are to 1) move to \LaTeXe\ and 2) improve the\nmarkup so that information can be more reliably extracted for the\neditorial and production processes. Both of these goals require that\nauthors comfortable with earlier versions of \revtex\ change their\nhabits. In addition, authors may already be familiar with the standard\n\classname{article.cls} in \LaTeXe. \revtex~4 differs in some\nimportant ways from this class as well. For more complete\ndocumentation on \revtex~4, see the main \textit{\revtex~4 Author's\nGuide}. The most important changes are in the markup of the front\nmatter (title, authors, affiliations, abstract, etc.). 
Please see\nSec.~\ref{sec:front}.\n\n\section{Version of \LaTeX}\nThe most obvious difference between \revtex~4 and \revtex~3 is that\n\revtex~4 works solely with \LaTeXe; it is not usable as a \LaTeX2.09 package.\nFurthermore, \revtex~4 requires an up-to-date \LaTeX\ installation\n(1996\/06\/01 or later); its use under older versions is not supported.\n\n\section{Class Options and Defaults}\nMany of the class options in \revtex~3 have been retained in\n\revtex~4. However, the default behavior for these options can be\ndifferent from that in \revtex~3. Currently, there is only one society\noption, \classoption{aps}, and this is the default. Furthermore, the\nselection of a journal (such as \classoption{prl}) will automatically\nset the society as well (this will be true even after other societies\nare added).\n\nIn \revtex~3, it was necessary to invoke the \classoption{floats} option, but\nthis is the default for \classoption{aps} journals in\n\revtex~4. \revtex~4 introduces two new class options,\n\classoption{endfloats} and \classoption{endfloats*} for moving floats\nto the end of the paper.\n\nThe preamble commands \cmd{\draft} and \cmd{\tighten} have been replaced\nwith new class options \classoption{draft} and\n\classoption{tightenlines}, respectively. The \cmd{\preprint} command\nis now used only for specifying institutional report numbers (typeset\nin the upper-righthand corner of the first page); it no longer\ninfluences whether PACS numbers are displayed below the abstract. PACS\ndisplay is controlled by the \classoption{showpacs} and\n\classoption{noshowpacs} (default) class options.\n\nPaper size options (\classoption{letter}, \classoption{a4paper}, etc.)\nwork in \revtex~4. 
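For example, a typical \revtex~4 document class line combining several of\nthe options discussed in this section might read as follows (the journal\nand options shown here are chosen purely for illustration):\n\begin{verbatim}\n\documentclass[aps,prl,twocolumn,showpacs]{revtex4}\n\end{verbatim}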
The text ``Typeset by \revtex'' no longer appears\nby default; the option \classoption{byrevtex} will place this text in\nthe lower-lefthand corner of the first page.\n\n\section{One- and Two-column formatting}\n\n\revtex~4 has excellent support for achieving the two-column\nformatting in the \textit{Physical~Review} and \textit{Reviews of\nModern Physics} styles. It will balance the columns\nautomatically. Whereas \revtex~3 had the \cmd{\widetext} and\n\cmd{\narrowtext} commands for switching between one- and two-column\nmodes, \revtex~4 simply has a \env{widetext} environment,\n\envb{widetext} \dots \enve{widetext}. One-column formatting can be\nspecified by choosing either the \classoption{onecolumn} or\n\classoption{preprint} class option (the \revtex~3 option\n\classoption{manuscript} no longer exists). Two-column formatting is\nthe default for most journal styles, but can be specified with the\n\classoption{twocolumn} option. Note that the spacing for\n\classoption{preprint} is now set to 1.5, rather than full\ndouble-spacing. The \classoption{tightenlines} option can be used to\nreduce this to single spacing.\n\n\n\section{Front Matter Markup}\n\label{sec:front}\n\n\revtex~4 has substantially changed how the front matter for an article\nis marked up. These are the most significant differences between\n\revtex~4 and other systems for typesetting manuscripts. It is\nessential that authors new to \revtex~4 be familiar with these changes.\n\n\subsection{Authors, Affiliations, and Author Notes}\n\revtex~4 has substantially changed the markup of author names,\naffiliations, and author notes (footnotes giving additional\ninformation about the author such as a permanent address or an email\naddress).\n\begin{itemize}\n\item Each author name should appear separately in\nindividual \cmd\author\ macros. 
\n\n\\item Email addresses should be marked up using the \\cmd\\email\\ macro.\n\n\\item Alternative affiliation information should be marked up using\nthe \\cmd\\altaffiliation\\ macro.\n\n\\item URLs for author home pages can be specified with a\n\\cmd\\homepage\\ macro.\n\n\\item The \\cmd\\thanks\\ macro should only be used if one of the above\ndon't apply.\n\n\\item \\cmd{\\email}, \\cmd{\\homepage}, \\cmd{\\altaffiliation}, and\n\\cmd{\\thanks} commands are grouped together under a single footnote for\neach author. These footnotes can either appear at the bottom of the\nfirst page of the article or as the first entries in the\nbibliography. The journal style controls this placement, but it may be\noverridden by using the class options \\classoption{bibnotes} and\n\\classoption{nobibnotes}. Note that these footnotes are treated\ndifferently than the other footnotes in the article.\n\n\\item The grouping of authors by affiliations is accomplished\nautomatically. Each affiliation should be in its own\n\\cmd{\\affiliation} command. Multiple \\cmd{\\affiliation},\n\\cmd{\\email}, \\cmd{\\homepage}, \\cmd{\\altaffiliation}, and \\cmd{\\thanks}\ncommands can be applied to each author. The macro \\cmd\\and\\ has been\neliminated.\n\n\\item \\cmd\\affiliation\\ commmands apply to all previous authors that\ndon't have an affiliation already declared. 
Furthermore, for any\nparticular author, the \cmd\affiliation\ must follow any \cmd{\email},\n\cmd{\homepage}, \cmd{\altaffiliation}, or \cmd{\thanks} commands for\nthat author.\n\n\item Footnote-style associations of authors with affiliations should\nnot be done via explicit superscripts; rather, the class option\n\classoption{superscriptaddress} should be used to accomplish this\nautomatically.\n\n\item A collaboration for a group of authors can be given using the\n\cmd\collaboration\ command.\n\n\end{itemize}\n\nTable~\ref{tab:front} summarizes some common pitfalls in moving from\n\revtex~3 to \revtex~4.\n\begin{table*}\n\begin{ruledtabular}\n\begin{tabular}{lll}\n\textbf{\revtex~3 Markup} & \textbf{\revtex~4 Markup} & \textbf{Explanation}\\\n& & \\\n\verb+\author{Author One and Author Two}+ & \verb+\author{Author One}+ & One name per\\\n& \verb+\author{Author Two}+ & \verb+\author+ \\\n& & \\\n\verb+\author{Author One$^{1}$}+ & \verb+\author{Author One}+& Use \classoption{superscriptaddress}\\\n\dots &\dots & class option \\\n\verb+\address{$^{1}$APS}+ &\verb+\affiliation{APS}+ & \\\n& & \\\n\verb+\thanks{Permanent address...}+ & \verb+\altaffiliation{}+& Use most\nspecific macro \\\n\verb+\thanks{Electronic address: user@domain.edu}+ &\n\verb+\email{user@domain.edu}+& available\\\n\verb+\thanks{http:\/\/publish.aps.org\/}+ &\n\verb+\homepage{http:\/\/publish.aps.org\/}+& \\\n\end{tabular}\n\end{ruledtabular}\n\caption{Common mistakes in marking up the front matter}\n\label{tab:front}\n\end{table*}\n\n\n\subsection{Abstracts}\n\revtex~4, like \revtex~3, uses the \env{abstract} environment\n\envb{abstract} \dots \enve{abstract} for the abstract. The\n\env{abstract} environment must appear before the \cmd{\maketitle}\ncommand in \revtex~4. The abstract will be formatted\nappropriately for either one-column (preprint) or two-column\nformatting. 
In particular, in the two-column case, the abstract will\nautomatically be placed in a single column that spans the width of the\npage. It is unnecessary to use a \cmd{\minipage} or any other macro to\nachieve this result.\n\n\n\section{Citations and References}\n\n\revtex~4 uses the same \cmd{\cite}, \cmd{\ref}, and \cmd{\bibitem}\ncommands as standard \LaTeX\ and \revtex~3. Citation handling is\nbased upon Patrick Daly's \classname{natbib} package. The\n\env{references} environment is no longer used. Instead, use the\nstandard \LaTeXe\ environment \env{thebibliography}.\n\nTwo new \BibTeX\ files have been included with \revtex~4,\n\file{apsrev.bst} and \file{apsrmp.bst}. These will format references\nin the style of \textit{Physical Review} and \textit{Reviews of Modern\nPhysics}, respectively. In addition, these \BibTeX\ styles\nautomatically apply a special macro \cmd{\bibinfo} to each element of the\nbibliography to make it easier to extract information for use in the\neditorial and production processes. Authors are strongly urged to use\n\BibTeX\ to manage their bibliographies so that the \cmd{\bibinfo}\ndirectives will be automatically included. Other bibliography styles\ncan be specified by using the \cmd\bibliographystyle\ command, but\nunlike standard \LaTeXe, you must give this command \emph{before} the\n\envb{document} statement.\n\nPlease note that the package \classname{cite.sty} is not needed with\n\revtex~4 and is incompatible.\n\n\section{Footnotes and Tablenotes}\n\label{sec:foot}\n\n\revtex~4 uses the standard \cmd{\footnote} macro for\nfootnotes. Footnotes can either appear on the bottom of the page on\nwhich they occur or they can appear as entries at the end of the\nbibliography. 
As with author notes, the journal style option controls\nthe placement; however, this can be overridden with the class options\n\classoption{footinbib} and \classoption{nofootinbib}.\n\nWithin a table, the \cmd{\footnote} command behaves differently. Footnotes\nappear at the bottom of the table. \cmd{\footnotemark} and\n\cmd{\footnotetext} are also available within the table environment so\nthat multiple table entries can share the same footnote text. There\nis no longer a need to use the \cmd{\tablenote}, \cmd{\tablenotemark},\nand \cmd{\tablenotetext} macros.\n\n\section{Section Commands}\n\nThe title in a \cmd\section\marg{title} command will be automatically\nuppercased in \revtex~4. To prevent a particular letter from being\nuppercased, enclose it in curly braces.\n\n\section{Figures}\n\nFigures should be enclosed within either a \env{figure} or \env{figure*}\nenvironment (the latter will cause the figure to span the full width\nof the page in two-column mode). \LaTeXe\ has two convenient packages\nfor including the figure file itself: \classname{graphics} and\n\classname{graphicx}. These two packages both define a macro\n\cmd{\includegraphics} which calls in the figure. They differ in how\narguments for rotation, translation, and scaling are specified. The\npackage \classname{epsfig} has been re-implemented to use the\n\classname{graphicx} package. The package \classname{epsfig} provides\nan interface similar to that under the \revtex~3 \classoption{epsf}\nclass option. Authors should use these standard\n\LaTeXe\ packages rather than some other alternative.\n\n\section{Tables}\n\nShort tables should be enclosed within either a \env{table} or \env{table*}\nenvironment (the latter will cause the table to span the full width\nof the page in two-column mode). The heart of the table is the\n\env{tabular} environment. This will behave for the most part as in\nstandard \LaTeXe. 
Note that \\revtex~4 no longer automatically adds\ndouble (Scotch) rules around tables. Nor does the \\env{tabular}\nenvironment set various table parameters as before. Instead, a new\nenvironment \\env{ruledtabular} provides this functionality. This\nenvironment should surround the \\env{tabular} environment:\n\\begin{verbatim}\n\\begin{table}\n\\caption{...}\n\\label{tab:...}\n\\begin{ruledtabular}\n\\begin{tabular}\n...\n\\end{tabular}\n\\end{ruledtabular}\n\\end{table}\n\\end{verbatim}\n\nUnder \\revtex~3, tables automatically break across pages. \\revtex~4\nprovides some of this functionality. However, this requires adding the\ntable a float placement option of [H] (meaning put the table\n``here'') to the \\envb{table} command.\n\nLong tables are more robustly handled by using the\n\\classname{longtable.sty} package included with the standard \\LaTeXe\\\ndistribution (put \\verb+\\usepackage{longtable}+ in the preamble). This\npackage gives precise control over the layout of the table. \\revtex~4\ngoes out of its way to provide patches so that the \\env{longtable}\nenvironment will work within a two-column format. A new\n\\env{longtable*} environment is also provided for long tables that are\ntoo wide for a narrow column. (Note that the \\env{table*} and\n\\env{longtable*} environments should always be used rather than\nattempting to use the \\env{widetext} environment.)\n\nTo create tables with columns of numbers aligned on decimal points,\nload the standard \\LaTeXe\\ \\classname{dcolumn} package and use the\n\\verb+d+ column specifier. The content of each cell in the column is\nimplicitly in math mode: Use of math delimiters (\\verb+$+) is unnecessary\nin a \\verb+d+ column.\n\nFootnotes within a table can be specified with the\n\\cmd{\\footnote} command (see Sec.~\\ref{sec:foot}). 
\n\n\\section{Font selection}\n\nThe largest difference between \\revtex~3 and \\revtex~4 with respect to\nfonts is that \\revtex~4 allows one use the \\LaTeXe\\ font commands such\nas \\cmd{\\textit}, \\cmd{\\texttt}, \\cmd{\\textbf} etc. These commands\nshould be used in place of the basic \\TeX\/\\LaTeX\\ 2.09 font commands\nsuch as \\cmd{\\it}, \\cmd{\\tt}, \\cmd{\\bf}, etc. The new font commands\nbetter handle subtleties such as italic correction and scaling in\nsuper- and subscripts.\n\n\\section{Math and Symbols}\n\n\\revtex~4 depends more heavily on packages from the standard \\LaTeXe\\\ndistribution and AMS-\\LaTeX\\ than \\revtex~3 did. Thus, \\revtex~4 users\nshould make sure their \\LaTeXe\\ distributions are up to date and they\nshould install AMS-\\LaTeX\\ 2.0 as well. In general, if any fine control of\nequation layout, special math symbols, or other specialized math\nconstructs are needed, users should look to the \\classname{amsmath}\npackage (see the AMS-\\LaTeX\\ documentation).\n\n\\revtex~4 provides a small number of additional diacritics, symbols,\nand bold parentheses. 
Table~\\ref{tab:revsymb} summarizes this.\n\n\\begin{table}\n\\caption{Special \\revtex~4 symbols, accents, and boldfaced parentheses \ndefined in \\file{revsymb.sty}}\n\\label{tab:revsymb}\n\\begin{ruledtabular}\n\\begin{tabular}{ll|ll}\n\\cmd\\lambdabar & $\\lambdabar$ &\\cmd\\openone & $\\openone$\\\\\n\\cmd\\altsuccsim & $\\altsuccsim$ & \\cmd\\altprecsim & $\\altprecsim$ \\\\\n\\cmd\\alt & $\\alt$ & \\cmd\\agt & $\\agt$ \\\\\n\\cmd\\tensor\\ x & $\\tensor x$ & \\cmd\\overstar\\ x & $\\overstar x$ \\\\\n\\cmd\\loarrow\\ x & $\\loarrow x$ & \\cmd\\roarrow\\ x & $\\roarrow x$ \\\\\n\\cmd\\biglb\\ ( \\cmd\\bigrb ) & $\\biglb( \\bigrb)$ &\n\\cmd\\Biglb\\ ( \\cmd\\Bigrb )& $\\Biglb( \\Bigrb)$ \\\\\n& & \\\\\n\\cmd\\bigglb\\ ( \\cmd\\biggrb ) & $\\bigglb( \\biggrb)$ &\n\\cmd\\Bigglb\\ ( \\cmd\\Biggrb\\ ) & $\\Bigglb( \\Biggrb)$ \\\\\n\\end{tabular}\n\\end{ruledtabular}\n\\end{table}\n\nHere is a partial list of the more notable changes between \\revtex~3\nand \\revtex~4 math:\n\\begin{itemize}\n\\item Bold math characters should now be handle via the standard\n\\LaTeXe\\ \\classname{bm} package (use \\cmd{\\bm} instead of \\cmd{\\bbox}).\n\\cmd{\\bm} will handle Greek letters and other symbols.\n\n\\item Use the class options \\classoption{amsmath},\n\\classoption{amsfonts} and \\classoption{amssymb} to get even more math\nfonts and symbols. \\cmd{\\mathfrak} and \\cmd{\\mathbb} will, for instance, give\nFraktur and Blackboard Bold symbols.\n\n\\item Use the \\classoption{fleqn} class option for making equation\nflush left or right. 
\\cmd{\\FL} and \\cmd{\\FR} are no longer provided.\n\n\\item In place of \\cmd{\\eqnum}, load the \\classname{amsmath} package\n[\\verb+\\usepackage{amsmath}+] and use \\cmd{\\tag}.\n\n\\item In place of \\cmd{\\case}, use \\cmd{\\textstyle}\\cmd{\\frac}.\n\n\\item In place of the \\env{mathletters} environment, load the\n\\classname{amsmath} package and use \\env{subequations} environment.\n\n\\item In place of \\cmd{\\slantfrac}, use \\cmd{\\frac}.\n\n\\item The macros \\cmd{\\corresponds}, \\cmd{\\overdots}, and\n\\cmd{\\overcirc} have been removed. See Table~\\ref{tab:obsolete}.\n\n\\end{itemize}\n\n\\section{Obsolete \\revtex~3.1 commands}\n\nTable~\\ref{tab:obsolete} summarizes more differences between \\revtex~4\nand \\revtex~3, particularly which \\revtex~3 commands are now obsolete.\n\n\\begin{table*}\n\\caption{Differences between \\revtex~3.1 and \\revtex~4\nmarkup}\\label{tab:diff31}\n\\label{tab:obsolete}\n\\begin{ruledtabular}\n\\begin{tabular}{lp{330pt}}\n\\textbf{\\revtex~3.1 command}&\\textbf{\\revtex~4 replacement}\n\\lrstrut\\\\\n\\cmd\\documentstyle\\oarg{options}\\aarg{\\classname{revtex}}&\\cmd\\documentclass\\oarg{options}\\aarg{\\classname{revtex4}}\n\\\\\noption \\classoption{manuscript}& \\classoption{preprint}\n\\\\\n\\cmd\\tighten\\ preamble command & \\classoption{tightenlines} class option\n\\\\\n\\cmd\\draft\\ preamble command & \\classoption{draft} class option\n\\\\\n\\cmd\\author & \\cmd\\author\\marg{name} may appear\nmultiple times; each signifies a new author name.\\\\\n & \\cmd\\collaboration\\marg{name}:\nCollaboration name (should appear after last \\cmd\\author)\\\\\n & \\cmd\\homepage\\marg{URL}: URL for preceding author\\\\\n & \\cmd\\email\\marg{email}: email\naddress for preceding author\\\\\n & \\cmd{\\altaffiliation}: alternate\naffiliation for preceding \\cmd\\author\\\\\n\\cmd\\thanks & \\cmd\\thanks, but use only for\ninformation not covered by \\cmd{\\email}, \\cmd{\\homepage}, or 
\\cmd{\\altaffilitiation}\\\\\n\\cmd\\and & obsolete, remove this command\\\\\n\\cmd\\address & \\cmd\\affiliation\\marg{institution}\\ gives the affiliation for the group of authors above\\\\\n & \\cmd\\affiliation\\oarg{note} lets you specify a footnote to this institution\\\\\n & \\cmd\\noaffiliation\\ signifies that the above authors have no affiliation\\\\\n\n\\cmd\\preprint & \\cmd\\preprint\\marg{number} can appear multiple times, and must precede \\cmd\\maketitle\\\\\n\\cmd\\pacs & \\cmd\\pacs\\ must precede \\cmd\\maketitle\\\\\n\\env{abstract} environment & \\env{abstract} environment must precede \\cmd\\maketitle\\\\\n\\cmd\\wideabs & obsolete, remove this command\\\\\n\\cmd\\maketitle & \\cmd\\maketitle\\ must follow\n\\emph{all} front matter data commands\\\\\n\\cmd\\narrowtext & obsolete, remove this command\\\\\n\\cmd\\mediumtext & obsolete, remove this command\\\\\n\\cmd\\widetext & obsolete, replace with \\env{widetext} environment\\\\\n\\cmd\\FL & obsolete, remove this command\\\\\n\\cmd\\FR & obsolete, remove this command\\\\\n\\cmd\\eqnum & replace with \\cmd\\tag, load \\classname{amsmath}\\\\\n\\env{mathletters} & replace with \\env{subequations}, load\n\\classname{amsmath}\\\\\n\\env{tabular} environment & No longer puts in doubled-rules. 
Enclose \\env{tabular} in \\env{ruledtabular} to get old behavior.\\\\\n\\env{quasitable} environment & obsolete, \\env{tabular} environment no longer\nputs in rules\\\\\n\\env{references} environment & replace with \\env{thebibliography}\\verb+{}+\\\\\n\\cmd\\case & replace with \\cmd\\textstyle\\cmd\\frac\\\\\n\\cmd\\slantfrac & replace with \\cmd\\frac\\\\\n\\cmd\\tablenote & replace with \\cmd\\footnote\\\\\n\\cmd\\tablenotemark & replace with \\cmd\\footnotemark\\\\\n\\cmd\\tablenotetext & replace with \\cmd\\footnotetext\\lrstrut\\\\\n\\cmd\\overcirc & Use standard \\LaTeXe\\ \\cmd\\mathring\\ \\\\\n\\cmd\\overdots & Use \\cmd\\dddot\\ with \\classoption{amsmath}\\\\\n\\cmd\\corresponds & Use \\cmd\\triangleq\\ with \\classoption{amssymb}\\\\\n\\classoption{epsf} class option & \\verb+\\usepackage{epsfig}+\\\\\n\\end{tabular}\n\\end{ruledtabular}\n\\end{table*}\n\n\n\\section{Converting a \\revtex~3.1 Document to \\revtex~4}\\label{sec:conv31}%\n\n\\revtex~3 documents can be converted to \\revtex~4 rather\nstraightforwardly. The following checklist covers most of the major\nsteps involved.\n\n\\begin{itemize}\n\\item Change \\cmd\\documentstyle\\verb+{revtex}+ to\n\\cmd\\documentclass\\verb+{revtex4}+, and run the document under\n\\LaTeXe\\ instead of \\LaTeX2.09.\n\n\\item\nReplace the \\cmd\\draft\\ command with the \\classoption{draft} class option.\n\n\\item\nReplace the \\cmd\\tighten\\ command with the \\classoption{tightenlines}\nclass option.\n\n\\item\nFor each \\cmd\\author\\ command, split the multiple authors into\nindividual \\cmd\\author\\ commands. 
Remove any instances of \cmd\and.\n\n\item For superscript-style associations between authors and\naffiliations, remove explicit superscripts and use the\n\classoption{superscriptaddress} class option.\n\n\item\nUse \cmd\affiliation\ instead of \cmd\address.\n\n\item\nPut \cmd\maketitle\ after the \env{abstract} environment and any\n\cmd\pacs\ commands.\n\n\item If double-ruled table borders are desired, enclose \env{tabular}\nenvironments in \env{ruledtabular} environments.\n\n\item\nConvert long tables to \env{longtable}, and load the\n\classname{longtable} package. Alternatively, give the \env{table}\nan [H] float placement parameter so that the table will break automatically.\n\n\item\nReplace any instances of the \cmd\widetext\ and \cmd\narrowtext\\ncommands with the \env{widetext} environment.\nUsually, the \envb{widetext} statement will replace the \cmd\widetext\\ncommand, and the \enve{widetext} statement replaces the matching\n\cmd\narrowtext\ command.\n\nNote in this connection that due to a curious feature of \LaTeX\\nitself, \revtex~4 having a \env{widetext} environment means that it\nalso has a definition for the \cmd\widetext\ command, even though the\nlatter command is not intended to be used in your document.\nTherefore, it is particularly important to remove\nall \cmd\widetext\ commands when converting to \revtex~4.\n\n\item\nRemove all obsolete commands: \cmd\FL, \cmd\FR, \cmd\narrowtext, and\n\cmd\mediumtext\ (see Table~\ref{tab:diff31}).\n\n\item\nReplace \cmd\case\ with \cmd\frac. If a fraction needs to be set\nin text style despite being in a display equation, use the\nconstruction \cmd\textstyle\cmd\frac. 
Note that \\cmd\\frac\\ does not\nsupport the syntax \\cmd\\case\\verb+1\/2+.\n\n\\item\nReplace \\cmd\\slantfrac\\ with \\cmd\\frac.\n\n\\item\nChange \\cmd\\frak\\ to \\cmd\\mathfrak\\marg{char}\\index{Fraktur} and\n\\cmd\\Bbb\\ to \\cmd\\mathbb\\marg{char}\\index{Blackboard Bold}, and invoke\none of the class options \\classoption{amsfonts} or\n\\classoption{amssymb}.\n\n\\item\nReplace environment \\env{mathletters} with environment\n\\env{subequations} and load the \\classname{amsmath} package.\n\n\\item\nReplace \\cmd\\eqnum\\ with \\cmd\\tag\\ and load the \\classname{amsmath} package.\n\n\\item\nReplace \\cmd\\bbox\\ with \\cmd\\bm\\ and load the \\classname{bm} package.\n\n\\item\nIf using the \\cmd\\text\\ command, load the \\classname{amsmath} package.\n\n\\item\nIf using the \\verb+d+ column specifier in \\env{tabular} environments,\nload the \\classname{dcolumn} package. Under \\classname{dcolumn}, the\ncontent of each \\verb+d+ column cell is implicitly in math mode:\nremove any \\verb+$+ math delimiters appearing in cells in a \\verb+d+\ncolumn.\n\n\\item\nReplace \\cmd\\tablenote\\ with \\cmd\\footnote, \\cmd\\tablenotemark\\ with\n\\cmd\\footnotemark, and \\cmd\\tablenotetext\\ with \\cmd\\footnotetext.\n\n\\item\nReplace \\envb{references} with \\envb{thebibliography}\\verb+{}+;\n\\enve{references} with \\enve{thebibliography}.\n\\end{itemize}\n\\end{document}\n\n\\section{}+, \\verb+\\subsection{}+,\n\\verb+\\subsubsection{}+ & Start a new section or\nsubsection.\\\\\n\\verb+\\section*{}+ & Start a new section without a number.\\\\\n\\verb+\n\\section{\\label{sec:level1}First-level heading:\\protect\\\\ The line\nbreak was forced \\lowercase{via} \\textbackslash\\textbackslash}\n\nThis sample document demonstrates proper use of REV\\TeX~4 (and\n\\LaTeXe) in mansucripts prepared for submission to APS\njournals. 
Further information can be found in the REV\TeX~4\ndocumentation included in the distribution or available at\n\url{http:\/\/publish.aps.org\/revtex4\/}.\n\nWhen commands are referred to in this example file, they are always\nshown with their required arguments, using normal \TeX{} format. In\nthis format, \verb+#1+, \verb+#2+, etc. stand for required\nauthor-supplied arguments to commands. For example, in\n\verb+\section{#1}+ the \verb+#1+ stands for the title text of the\nauthor's section heading, and in \verb+\title{#1}+ the \verb+#1+\nstands for the title text of the paper.\n\nLine breaks in section headings at all levels can be introduced using\n\textbackslash\textbackslash. A blank input line tells \TeX\ that the\nparagraph has ended. Note that top-level section headings are\nautomatically uppercased. If a specific letter or word should appear in\nlowercase instead, you must escape it using \verb+\lowercase{#1}+ as\nin the word ``via'' above.\n\n\subsection{\label{sec:level2}Second-level heading: Formatting}\n\nThis file may be formatted in both the \texttt{preprint} and\n\texttt{twocolumn} styles. \texttt{twocolumn} format may be used to\nmimic final journal output. Either format may be used for submission\npurposes; however, for peer review and production, APS will format the\narticle using the \texttt{preprint} class option. Hence, it is\nessential that authors check that their manuscripts format acceptably\nunder \texttt{preprint}. Manuscripts submitted to APS that do not\nformat correctly under the \texttt{preprint} option may be delayed in\nboth the editorial and production processes.\n\nThe \texttt{widetext} environment will make the text the width of the\nfull page, as on page~\pageref{eq:wideeq}. (Note the use of the\n\verb+\pageref{#1}+ to get the page number right automatically.) The\nwidth-changing commands only take effect in \texttt{twocolumn}\nformatting. 
They have no effect if \texttt{preprint} formatting is chosen\ninstead.\n\n\subsubsection{\label{sec:level3}Third-level heading: References and Footnotes}\nReference citations in text use the commands \verb+\cite{#1}+ or\n\verb+\onlinecite{#1}+. \verb+#1+ may contain letters and numbers.\nThe reference itself is specified by a \verb+\bibitem{#1}+ command\nwith the same argument as the \verb+\cite{#1}+ command.\n\verb+\bibitem{#1}+ commands may be crafted by hand or, preferably,\ngenerated by using Bib\TeX. REV\TeX~4 includes Bib\TeX\ style files\n\verb+apsrev.bst+ and \verb+apsrmp.bst+ appropriate for\n\textit{Physical Review} and \textit{Reviews of Modern Physics},\nrespectively. REV\TeX~4 will automatically choose the style\nappropriate for the journal specified in the document class\noptions. This sample file demonstrates the basic use of Bib\TeX\\nthrough the use of the \verb+\bibliography+ command which references the\n\verb+apssamp.bib+ file. Running Bib\TeX\ (typically \texttt{bibtex\napssamp}) after the first pass of \LaTeX\ produces the file\n\verb+apssamp.bbl+ which contains the automatically formatted\n\verb+\bibitem+ commands (including extra markup information via\n\verb+\bibinfo+ commands). If not using Bib\TeX, the\n\verb+thebibliography+ environment should be used instead.\n\nTo cite bibliography entries, use the \verb+\cite{#1}+ command. Most\njournal styles will display the corresponding number(s) in square\nbrackets: \cite{feyn54,witten2001}. To avoid the square brackets, use\n\verb+\onlinecite{#1}+: Refs.~\onlinecite{feyn54} and\n\onlinecite{witten2001}. REV\TeX\ ``collapses'' lists of\nconsecutive reference numbers where possible. We now cite everyone\ntogether \cite{feyn54,witten2001,epr}, and once again\n(Refs.~\onlinecite{epr,feyn54,witten2001}). 
Note that the references
were also sorted into the correct numerical order.

When the \verb+prb+ class option is used, the \verb+\cite{#1}+ command
displays the reference's number as a superscript rather than using
square brackets. Note that the location of the \verb+\cite{#1}+
command should be adjusted for the reference style: the superscript
references in \verb+prb+ style must appear after punctuation;
otherwise the reference must appear before any punctuation. This
sample was written for the regular (non-\texttt{prb}) citation style.
The command \verb+\onlinecite{#1}+ in the \texttt{prb} style also
displays the reference on the baseline.

Footnotes are produced using the \verb+\footnote{#1}+ command. Most
APS journal styles put footnotes into the bibliography. REV\TeX~4 does
this as well, but instead of interleaving the footnotes with the
references, it lists them at the end of the references\footnote{This
may be improved in future versions of REV\TeX.}. Because the correct
numbering of the footnotes must occur after the numbering of the
references, an extra pass of \LaTeX\ is required to get the
numbering correct.

\section{Math and Equations}
Inline math may be typeset using the \verb+$+ delimiters. Bold math
symbols may be achieved using the \verb+bm+ package and the
\verb+\bm{#1}+ command it supplies. For instance, a bold $\alpha$ can
be typeset as \verb+$\bm{\alpha}$+ giving $\bm{\alpha}$. Fraktur and
Blackboard (or open face or double struck) characters should be
typeset using the \verb+\mathfrak{#1}+ and \verb+\mathbb{#1}+ commands
respectively. Both are supplied by the \texttt{amssymb} package. For
example, \verb+$\mathbb{R}$+ gives $\mathbb{R}$ and
\verb+$\mathfrak{G}$+ gives $\mathfrak{G}$.

In \LaTeX\ there are many different ways to display equations, and a
few preferred ways are noted below. Displayed math will center by
default.
Use the class option \verb+fleqn+ to flush equations left.

Below we have numbered single-line equations; this is the most common
type of equation in \textit{Physical Review}:
\begin{eqnarray}
\chi_+(p)\alt{\bf [}2|{\bf p}|(|{\bf p}|+p_z){\bf ]}^{-1/2}
\left(
\begin{array}{c}
|{\bf p}|+p_z\\
p_x+ip_y
\end{array}\right)\;,
\\
\left\{%
 \openone234567890abc123\alpha\beta\gamma\delta1234556\alpha\beta
 \frac{1\sum^{a}_{b}}{A^2}%
\right\}%
\label{eq:one}.
\end{eqnarray}
Note the open one in Eq.~(\ref{eq:one}).

Not all numbered equations will fit within a narrow column this
way. The equation number will move down automatically if it cannot fit
on the same line with a one-line equation:
\begin{equation}
\left\{
 ab12345678abc123456abcdef\alpha\beta\gamma\delta1234556\alpha\beta
 \frac{1\sum^{a}_{b}}{A^2}%
\right\}.
\end{equation}

When the \verb+\label{#1}+ command is used [cf. input for
Eq.~(\ref{eq:one})], the equation can be referred to in text without
knowing the equation number that \TeX\ will assign to it. Just
use \verb+\ref{#1}+, where \verb+#1+ is the same name used in
the \verb+\label{#1}+ command.

Unnumbered single-line equations can be typeset
using the \verb+\[+, \verb+\]+ format:
\[g^+g^+ \rightarrow g^+g^+g^+g^+ \dots ~,~~q^+q^+\rightarrow
q^+g^+g^+ \dots ~. \]

\subsection{Multiline equations}

Multiline equations are obtained by using the \verb+eqnarray+
environment.
Use the \\verb+\\nonumber+ command at the end of each line\nto avoid assigning a number:\n\\begin{eqnarray}\n{\\cal M}=&&ig_Z^2(4E_1E_2)^{1\/2}(l_i^2)^{-1}\n\\delta_{\\sigma_1,-\\sigma_2}\n(g_{\\sigma_2}^e)^2\\chi_{-\\sigma_2}(p_2)\\nonumber\\\\\n&&\\times\n[\\epsilon_jl_i\\epsilon_i]_{\\sigma_1}\\chi_{\\sigma_1}(p_1),\n\\end{eqnarray}\n\\begin{eqnarray}\n\\sum \\vert M^{\\text{viol}}_g \\vert ^2&=&g^{2n-4}_S(Q^2)~N^{n-2}\n (N^2-1)\\nonumber \\\\\n & &\\times \\left( \\sum_{i