diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzenke" "b/data_all_eng_slimpj/shuffled/split2/finalzzenke"
new file mode 100644
--- /dev/null
+++ "b/data_all_eng_slimpj/shuffled/split2/finalzzenke"
@@ -0,0 +1,5 @@
+{"text":"\\section{Introduction}\n\nThe problem of inverse inference has long been one of the\nmain issues in neural network\nanalysis~\\cite{Bialek2006,Bialek2007,Bolle2009}. Given a number of\nstimuli, one measures the activity of some local components, such as\nspike trains of the neurons, to identify which connections between\nthem, i.e. which synapses, are active, and, possibly, their\nstrength~\\cite{Amit,KappenRodriguez1997}. More recently, the problem\nof inverse inference has manifested itself in other branches of\nbiology. For example, in structural biology the problem comes down to\ndetermining the probability distribution of amino acid strings by\nobserving the way in which proteins naturally\nfold~\\cite{Socolich.etal-2005}, or, in systems biology, it consists of\nrecovering structural details of protein-protein interactions from\nprimary sequence information of gene regulatory\nnetworks~\\cite{Szallasi-1999,Weigt-PNAS2009}. A solution to these\ntypes of problems usually comes about in the following way: one\nproposes a model capturing the characteristics of the network under\nstudy, and eventually develops methods to retrieve the structural\ncharacteristics, which make it possible to disprove the underlying\nhypothesis when the findings are not consistent with the observed\nbehavior.\n\nThe pairwise Ising spin glass has been widely used as a starting point\nfor analyzing the above problems. The fact that these problems have a\nnumber of common features has made it possible to develop several\nalgorithms addressing the corresponding inverse inference\nproblem~\\cite{SessakMonasson2008,MezardMora2008,Roudi-Hertz2009,Roudi-Aurell2009}.\nHowever, as more accurate methods come along, new issues regarding the\nvalidation of the underlying hypothesis have been raised. For\ninstance, it was pointed out in \\cite{Roudi-Nirenberg2009} that by\nonly observing a subset of the nodes composing a neural network,\nreconstructing the original couplings according to a pairwise model\ndoes not necessarily lead to any added information, as there exists a\none-to-one relation between the couplings and the (normalized) second\norder correlation coefficient from a certain fraction of hidden\nvariables onward. Thus, in this case one needs to consider higher\norder couplings. Moreover, binning spikes of the neurons, and\naccordingly representing their status by Ising-like variables, can\noversimplify the actual observations. Similarly, determining the\npossible states in protein folding needs at least Potts-like\nvariables. Note also that while for protein networks it is natural to\nassume a pairwise interaction model, in neural networks this is not\nnecessarily the case; and though in theory any network can always be\ntransformed into one which contains only pairwise\ninteractions~\\cite{YedidiaFreemanWeiss-2002}, the practical way to go\nabout this might not always be obvious.\n\nIn this paper, we do not address the interesting cases of the presence\nof hidden variables, of higher order interactions or of Potts-like\nvariables. 
Rather, we question a more basic issue: given a situation\nwhere the pairwise Ising spin glass correctly describes the structure of\nthe network under study, how much information can be gathered from an\nexperimental input about the values of the couplings of the model? In\nother words, given all two-point correlations, or equivalently, given\nthe susceptibilities, up to which point can we say something about the\nreconstructed couplings? Does this information allow us to\nreconstruct the original model or are there intrinsic uncertainties to\nthis inverse inference problem? How does the quality of the\nreconstruction depend on the original distribution of couplings or on\nthe size of the system? The main question can be phrased as: to\nwhat extent do the statistical errors that affect the measurements\nprevent us from reconstructing the original model? If the original\nobservations are incomplete or noisy, up to which point does it make\nsense to try and reconstruct the data? Ideally, one would like to\nanswer the above questions from a theoretical viewpoint. Here,\nhowever, we start by numerically investigating several of the above\nproblems by means of a message passing algorithm, first introduced\nin~\\cite{MezardMora2008}, which is currently among those delivering\nthe best results~\\cite{SessakMonasson2008}. We will use this\nalgorithm to analyze the reconstruction of various types of networks\ngiven their first- and second-order local, possibly noisy,\nobservables.\n\nWe want to analyze some basic features of the\nreconstruction process, by spotting some relevant weaknesses and by\ntrying to focus on systematic trends that can be relevant in\nexploiting this approach and, consequently, in trying to devise improvements \nthat could lead to better performance. We consider these results as a\ntoolbox, spelling out and clarifying a number of facts that can be useful\nfor better understanding and improving this class of methods.\n\nIn Section~\\ref{S:MET} we introduce the message passing method, and\ndefine some relevant quantities. We describe in detail the iterative\nrules in the presence of a memory term that allows convergence to a\nfixed point. In Section~\\ref{S:DIS} we analyze the reconstruction\nprocedure for different distributions of the couplings: we look at\nbinary random couplings and at Gaussian couplings. In\nSection~\\ref{S:SYN} we introduce synthetic random errors, by randomly\nmodifying the exact values of the susceptibility, and we try to understand\nhow a larger uncertainty affects the quality of the reconstruction of\nthe couplings. We analyze both the case of an additive error and\nthat of a multiplicative error. In Section~\\ref{S:MC} we analyze data\nobtained by a Monte Carlo simulation, and we study the quality of the\nreconstruction as a function of the accuracy of the measurements. We\ndraw our conclusions in Section~\\ref{S:CON}.\n\n\\section{Susceptibility propagation and the inverse Ising spin glass\\label{S:MET}}\n\nWhile message passing algorithms have been widely used to solve the\ndirect inference problem where the characteristics of the underlying\nnetwork are given and one wants to derive experimentally observable\nquantities~\\cite{MezardParisi2000}, their adaptation to tackle the\ninverse problem is relatively recent. Here we consider the inverse\nIsing spin glass, which assumes that the basic constituent agents of the\nnetwork interact only in a pairwise, symmetric way with the other\nagents. 
In other words, we assume the problem is described by the\nfollowing partition function:\n\\begin{equation}\nZ=\\sum_{\\mathbf{\\sigma}}\\exp \\left[-\\frac{1}{T} \n\\left(\\sum_{i=1}^N h_i \\sigma_i + \\sum_{i<j} J_{ij} \\sigma_i \\sigma_j\\right)\\right].\n\\end{equation}\n\n[...]\n\nFor $T>T^*(p)$ the information we gather is hidden by\nthe insufficient precision $p$, and the reconstruction quality does\nnot improve. All the results we will discuss in the following have been\nobtained with $12$ bytes wide variables.\n\n\\section{Synthetic noisy data\\label{S:SYN}}\n\nThe susceptibilities (or the correlation functions) one obtains as the\noutput of an experiment are far from exact. The error can either be\ndue to the limitations of the experimental set-up, thus imposing some\nabsolute error on the measured data, or to statistical \nfluctuations that can originate from \na number of different causes. In case the\nobservables are averages of successive experiments, the two-point\ncorrelation functions are affected by some relative errors which can\npossibly be reduced by performing more experiments. We will discuss\nhere how these different types of error can affect the reconstruction\nof the couplings.\n\nThe case of an error that is on average constant in magnitude\n(independently of the size of the observable we are considering) and\nthe one where its ratio to the signal is constant in magnitude are\nindeed very different. In the case of an error that is constant on\naverage, small correlation functions will not give any significant\namount of information: if, for example, for a model endowed with a\nEuclidean distance $d$ we expect an exponential decay\nwith $d$, only the first, larger contributions will be of use in our\nreconstruction, while the smaller ones will be completely hidden by\nthe noise.\n\nWe start again from exact values of the susceptibility that we compute\nby exact enumeration, summing all the contributions of the $2^N$ spin\nconfigurations. We simulate the presence of an absolute error by\nadding a noise term to the exact observables: the\nsusceptibilities $\\chi_{ij}^{A}$ used for reconstruction are given\nhere by the exact susceptibilities $\\chi_{ij}$ with the addition of a\nnoise term $r_{\\eta}$, uniformly drawn from the interval\n$[-\\eta,+\\eta]$, with $\\eta > 0$: $\\chi_{ij}^{A}=\\chi_{ij}+r_{\\eta}\\;,\\;\\;\n\\forall\\; ij$.\n\n\\begin{figure}[!ht]\n\\centering\n\\includegraphics[angle=270, width=0.9\\textwidth]\n{coupling_sqrtJbimSK_h0.0_J1.0_ADDITIVEnoise.eps}\n\\caption{\\label{fig:ADDnoise} $\\Delta$ as a function of $T$. Here the\n susceptibilities used as input for the \n reconstruction of the couplings are only\n approximate due to an additive noise. From bottom to top:\n $\\eta=10^{-8}, 10^{-7}, 10^{-6}, 10^{-5}$ for $N=10$ ($+$) and\n $N=20$ ($\\square$).}\n\\end{figure}\n\nWe show our results for the reconstructed couplings in\nFig.~\\ref{fig:ADDnoise}. We show the values obtained for different\nchoices of $\\eta$. The effect of this random noise is irrelevant for the\nlow $T$ range where the reconstruction is possible, but becomes large\nwhen $T$ increases. The larger $\\eta$ is, the smaller the $T$ range\nwhere the reconstruction remains reliable. 
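\n\nFor concreteness, the two synthetic perturbations used in this section can be sketched in a few lines of Python (a minimal illustration, assuming the exact susceptibilities are stored in a dictionary \\texttt{chi} indexed by the pairs $ij$, with the noise drawn independently for every pair); the additive case corresponds to $\\chi_{ij}^{A}$ above, and the multiplicative case, introduced next, to $\\chi_{ij}^{M}$:\n\\begin{verbatim}\nimport random\n\ndef additive_noise(chi, eta):\n    # chi^A_ij = chi_ij + r_eta, with r_eta uniform in [-eta, +eta]\n    return {ij: x + random.uniform(-eta, eta) for ij, x in chi.items()}\n\ndef multiplicative_noise(chi, eps):\n    # chi^M_ij = r_eps * chi_ij, with r_eps uniform in [1-eps, 1+eps]\n    return {ij: x * random.uniform(1.0 - eps, 1.0 + eps)\n            for ij, x in chi.items()}\n\\end{verbatim}\n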
In the presence of this kind\nof noise the quality of the reconstruction worsens when $T$ increases:\nthis can be an interesting observation when trying to optimize a\nreconstruction scheme for experimental data.\nIt is interesting to note that the error on the reconstructed couplings\nis several orders of magnitude larger than the additive error on the\nsusceptibilities. This is due to the fact that the order of magnitude\nof the susceptibilities is large at small temperatures, and much\nsmaller at high temperatures. Therefore, especially at high\ntemperatures, additive noise terms are very damaging.\n\nIn Fig.~\\ref{fig:MULnoise} we show the effect of a multiplicative\nnoise term on the susceptibilities. More precisely, these\nreconstructed couplings are computed starting from the approximated\nsusceptibilities $\\chi_{ij}^{M}$, which were obtained from the\noriginal susceptibilities by multiplying them by a factor\n$r_{\\epsilon}$, which was drawn uniformly from the interval\n$[1-\\epsilon,1+\\epsilon]$, with $\\epsilon>0$:\n$\\chi_{ij}^{M}=r_{\\epsilon}\\chi_{ij}\\;,\\;\\;\\forall\\; ij$. \n\n\\begin{figure}[!ht]\n\\centering\n\\includegraphics[angle=270, width=0.9\\textwidth]\n{coupling_sqrtJbimSK_h0.0_J1.0_MULTIPLICATIVEnoise.eps}\n\\caption{\\label{fig:MULnoise} $\\Delta$ as a function of $T$. Here the\n susceptibilities used as input to the reconstruction scheme are only\n approximate due to the presence of a multiplicative noise. From\n bottom to top: $\\epsilon=10^{-8}, 10^{-7}, 10^{-6}, 10^{-5},\n 10^{-4}, 10^{-3}$ for $N=10$ ($+$) and $N=20$ ($\\square$).}\n\\end{figure}\n\nAgain, the error on the observables severely influences the\nreconstruction of the couplings at higher temperatures. Here there is\na clearer threshold effect than in the former case, and there is a\nclear dependence of the ``breaking point'' $T^*(\\epsilon)$ on\n$\\epsilon$. The situation is very similar to the one that we have\ndiscussed in the previous section, where we were using ``short''\nvariables with a finite, small width (down to four bytes).\n\n\\section{Monte Carlo noisy data\\label{S:MC}}\n\nA Monte Carlo numerical experiment is one of the best proxies for a real\nexperiment. One gets sets of data that are asymptotically distributed\naccording to a certain probability function. These data are affected\nby statistical errors, as would happen in an experiment. We\nanalyze here how the reconstruction works when starting from Monte Carlo\ndata obtained under variable accuracy requirements: this is an issue\nof paramount interest, since we need to know if a given real\nexperiment, with a given level of accuracy, will give information\nthat can lead to a useful coupling reconstruction.\n\nSo here we do not start from exact data, but from data obtained by a \nusual Monte Carlo simulation, with a local, accept-reject Metropolis\nupdating scheme, and we use our inverse algorithm to get the couplings\nfrom these data. We first lead the system to equilibrium (and discard\ndata obtained during this thermalization phase of the simulation), and\neventually collect data for a number of Monte Carlo steps. 
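\n\nAs an illustration of this measurement procedure, the following minimal Python sketch generates susceptibilities (here identified with the connected correlations) from a single-spin-flip Metropolis simulation of the model defined in Section~\\ref{S:MET}. It is a simplified stand-in for our production code, with all names and parameters chosen for illustration only:\n\\begin{verbatim}\nimport math, random\n\ndef metropolis_sweep(sigma, J, h, T):\n    # One sweep of single-spin-flip Metropolis updates for the weight\n    # exp[-E\/T], E = sum_i h_i s_i + sum_{i<j} J_ij s_i s_j; J is a\n    # full symmetric matrix with zero diagonal.\n    N = len(sigma)\n    for i in range(N):\n        local = h[i] + sum(J[i][j] * sigma[j] for j in range(N) if j != i)\n        dE = -2.0 * sigma[i] * local  # energy change if spin i flips\n        if dE <= 0.0 or random.random() < math.exp(-dE \/ T):\n            sigma[i] = -sigma[i]\n\ndef measure_chi(J, h, T, n_meas, n_therm=1000):\n    # Estimate chi_ij = <s_i s_j> - <s_i><s_j> from n_meas sweeps,\n    # after discarding n_therm thermalization sweeps.\n    N = len(h)\n    sigma = [random.choice((-1, 1)) for _ in range(N)]\n    for _ in range(n_therm):\n        metropolis_sweep(sigma, J, h, T)\n    m = [0.0] * N\n    c = [[0.0] * N for _ in range(N)]\n    for _ in range(n_meas):\n        metropolis_sweep(sigma, J, h, T)\n        for i in range(N):\n            m[i] += sigma[i]\n            for j in range(i + 1, N):\n                c[i][j] += sigma[i] * sigma[j]\n    return {(i, j): c[i][j] \/ n_meas - (m[i] \/ n_meas) * (m[j] \/ n_meas)\n            for i in range(N) for j in range(i + 1, N)}\n\\end{verbatim}\nIncreasing \\texttt{n\\_meas} plays the role of increasing the duration of the experiment discussed below.\n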
We show in\nFig.~\\ref{fig:MCnoise} the error $\\Delta$ on the reconstruction of the\ncouplings, given the values $\\chi_{ij}^{MC}$ of the susceptibilities,\nobtained by sampling the configuration space with a Monte Carlo Markov\nchain.\n\n\\begin{figure}[!ht]\n\\centering\n\\includegraphics[angle=270, width=0.9\\textwidth]\n{DETAILcoupling_sqrtJbimSK_h0.0_J1.0_N16_MCvsEXACT.eps}\n\\caption{\\label{fig:MCnoise} The error $\\Delta$ as a \n function of $T$ for \n fully connected graphs of size $N=16$ ($\\circ$)\n and $N=128$ ($\\Diamond$), with binary couplings. \n The different curves represent\n reconstructions starting from the exact susceptibilities for the\n $N=16$ system (continuous line),\n and starting from approximations of the susceptibilities\n generated from $10^4$, $10^5$, $10^6$, $10^7$ and $10^9$ MC data\n (from top to bottom) for $N=16$, and $10^5$, $10^6$ and $10^7$ MC\n data (from top to bottom) for $N=128$.}\n\\end{figure}\n\nGiven a fixed number of Monte Carlo measurements, the error on the\nsusceptibilities increases with the temperature, resulting in a less\nprecise reconstruction of the couplings. However, by increasing the\nduration of the experiments, i.e. increasing the number of independent\nobservations, the relative error can be drastically reduced, as can be\nseen from Fig.~\\ref{fig:MCnoise}. The pattern is, as one would have\nexpected, very similar to that of a statistical error of constant\naverage size. Indeed, this is exactly what happens here, where we\nestimate all correlation functions by adding numbers of order one\n(the individual values of the correlations, which can be $\\pm 1$).\nFor each level of the error, the reconstruction works as if the correlations\nwere exact up to a given $T$ value, beyond which its accuracy does not\nincrease anymore with $T$ but, on the contrary, starts decreasing\nwith $T$. \n\nWe also analyzed the low-temperature limit down to which the couplings\ncan be reconstructed starting from approximated two-point\ncorrelations. While the exact correlations in general allow us to\nreconstruct the couplings down to a temperature as low as $T=1.7$, no\nsolution could be found, for example, starting from susceptibilities\nobtained from only $10^4$ independent MC data at this same\ntemperature: the susceptibility propagation algorithm can be\nadditionally limited by an inaccurate original data set.\n\nWe have also tried to understand how the performance of the \nsusceptibility propagation algorithm varies when we increase the\nnumber of elements of the system. In the Monte Carlo case we have\nstudied the two cases $N=16$ and $N=128$, where the second system is\neight times larger than the first one: we show both sets of data in\nFig.~\\ref{fig:MCnoise}. Larger systems require more experiments\nto get a reconstruction of the same quality as for the smaller systems:\nthe $N=128$ curves overlap with \n$N=16$ curves obtained with one tenth of the statistics. \nAfter accounting for this rescaling,\nour data clearly show that the reconstruction procedure also works\nvery well even when we greatly increase the size of the system. Let\nus look carefully at ``low'' values of $T$. For example, when starting \nfrom observables with a good precision, at $T=2$ the\n$N=128$ reconstruction clearly improves in quality with respect to that \nfor $N=16$ (the\nsame phenomenon can already be observed, on a smaller scale, in\nFig.~\\ref{fig:exact_distributiondependence} when comparing $N=10$ and\n$N=20$). 
Reconstruction of large systems is possible and\nreliable even if the ``temperature'' of the system is not so far from\ncriticality, which certainly is good and useful news.\n\n\\section{Conclusions\\label{S:CON}}\n\nWe have analyzed a number of features of the inverse Ising spin glass\nproblem, by using the susceptibility propagation algorithm, first\nintroduced in~\\cite{MezardMora2008}. In a very large temperature\nwindow, this algorithm is able to reconstruct the individual couplings\nand, consequently, their overall distributions with a remarkable\nprecision. If the system is ``at high temperature'' (or, in other,\nmaybe more physical terms, if the (zero average) disorder does not\nfluctuate too much), the quality of the reconstruction is basically\nlimited only by the precision with which the experimental input is\nknown, and by the precision used when implementing the susceptibility\npropagation algorithm.\n\nFor smaller temperatures approaching the critical temperature, or\nequivalently, for distributions of the couplings characterized by a\nlarge variance, the reconstruction is less accurate and eventually the\nalgorithm fails to find any solution to the problem. This is \ndue to the fact that it does not take the possibility of multiple states into account, which is exactly what happens in the spin-glass phase.\n\nAll algorithms currently available suffer from this same problem. However,\nthe message passing algorithm used in this paper could possibly be\nimproved by using the probability distributions of the observables as\nbasic working ingredients, rather than the observables themselves, to\nobtain a type of survey propagation algorithm for which the exchanged\nmessages do not contain information on the couplings, but rather on\nthe probability distribution of each individual coupling.\nFurthermore, the nature of the susceptibility propagation algorithm\nsuggests it could be easily adapted to include the case of Potts-like\nvariables, making it possible to treat problems in structural\nbiology~\\cite{Weigt-PNAS2009}.\n\nWhile the overall reconstruction of the pairwise model is quite\nprecise when the original data set is accurate, the results can\ndeteriorate quickly if the data are affected by a statistical error. The\nnumber of experiments that have to be used to obtain the average\ntwo-point correlations needs to be increasingly large for increasing\nsample size. Also, at large temperatures, where the values of the\nsusceptibilities are small, this error on the reconstruction of the\ncouplings becomes more pronounced. For the same reason, an absolute\nerror on the two-point correlations is increasingly damaging at higher\ntemperatures.\n\nAltogether, we feel that our conclusions lead to an optimistic\nscenario. Even in the presence of a large statistical or systematic \nuncertainty the reconstruction is possible and can be effective. Large\nsamples still allow for a good reconstruction quality, under the\ncondition that the statistical inaccuracy that affects the data\nis lowered to a reasonable level.\n\n\\section*{Acknowledgments\\label{S:ACK}}\nWe thank Thierry Mora for describing to us the use of the\n$\\varepsilon$ term in this context. We acknowledge interesting\nconversations with Federico Ricci-Tersenghi. 
\n\n\\section*{References}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section*{Acknowledgements}\n\nWe thank Noga Alon for referring us to his paper \\cite{DBLP:conf\/stoc\/AlonMS12} on construction of Ruzsa-Szemer\u00e9di graphs and discussing its implications which were extremely insightful. We are in addition thankful to Hamed Saleh for fruitful discussions and also to anonymous STOC reviewers for helpful suggestions.\n\\section{The Algorithm and Basic Definitions}\\label{sec:analysissetup}\n\nThe algorithm that we analyze is formally stated as Algorithm~\\ref{alg:sampling}.\n\n\\begin{tboxalg}[ (\\cite{soda19})]{A sampling-based non-adaptive algorithm for stochastic matching.}\\label{alg:sampling}\n\t\\textbf{Parameter:} $R$, which controls the maximum degree of $Q$.\n\t\\vspace{0.1cm}\n\t\n\tTake $R$ realizations $\\mc{G}_1, \\ldots, \\mc{G}_R$ of $G$ independently where each realization $\\mc{G}_i$ includes each edge $e$ independently with probability $p_e$. Return subgraph $Q = \\MM{\\mc{G}_1} \\cup \\ldots \\cup \\MM{\\mc{G}_R}$.\n\\end{tboxalg}\n\nIn the algorithm above, $\\MM{\\mc{G}_i}$ returns a maximum matching of $\\mc{G}_i$. It will be convenient for the analysis to assume $\\MM{\\cdot}$ is a deterministic maximum matching algorithm.\n\nIn order to analyze Algorithm~\\ref{alg:sampling}, we will make the following assumption which will simplify many of our arguments.\n\n\\begin{assumption}\\label{ass:optlarge}\n\t$\\opt \\geq 0.1\\epsilon n$.\n\\end{assumption}\n\nAssumption~\\ref{ass:optlarge} comes w.l.o.g. due to a reduction of Assadi~\\etal{}~\\cite{AKL16}. The reduction is roughly as follows: If $n \\gg \\opt$, randomly put nodes of $G$ into $O(\\frac{\\opt}{\\epsilon})$ buckets and contract the nodes within each bucket. The resulting graph will have only $O(\\frac{\\opt}{\\epsilon})$ nodes but its expected maximum realized matching will be as large as $(1-O(\\epsilon))\\opt$. Solving this modified graph will then solve the original graph $G$ as well. We provide further details in Appendix~\\ref{app:optlarge} and note that for the reduction to work, it is important that our algorithm can handle different edge realization probabilities.\n\n\n\\subsection{A Crucial\/Non-crucial Decomposition}\n\nFor each edge $e$ define $q_e := \\Pr[e \\in \\MM{\\mc{G}}]$ where $\\MM{\\cdot}$ is the same matching algorithm used in Algorithm~\\ref{alg:sampling}. Since we assumed $\\MM{\\cdot}$ is deterministic, the probability is taken only over the randomization of the realization $\\mc{G}$. Having this definition, for any vertex $v$ we denote $q_v := \\sum_{e \\ni v} q_e$ and for any subset $E' \\subseteq E$ denote $q(E') := \\sum_{e \\in E'} q_e$. The following statements immediately follow from the definition:\n\n\\begin{observation}\n\t$q(E) = \\opt$.\n\\end{observation}\n\\begin{observation}\\label{obs:qv}\n\tFor any vertex $v$, $q_v$ denotes the probability that $v$ is matched in $\\MM{\\mc{G}}$.\n\\end{observation}\n\nWe will fix two thresholds $0 < \\tau_- < \\tau_+ < 1$ that both depend only on $\\epsilon$ and $p$. Next, for any edge $e$, we say $e$ is {\\em crucial} if $q_e \\geq \\tau_+$, {\\em non-crucial} if $q_e \\leq \\tau_-$, and {\\em ignored} if $q_e \\in (\\tau_-, \\tau_+)$. We denote the crucial edges by $C := \\{ e \\in E \\mid \\text{$e$ is crucial} \\}$, and the non-crucial edges by $N := \\{ e \\in E \\mid \\text{$e$ is non-crucial} \\}$. Furthermore, we denote their realizations by $\\mc{C} := C \\cap \\mc{E}$ and $\\mc{N} := N \\cap \\mc{E}$. 
When there is no risk of confusion, we may use $C$ to denote graph $(V, C)$ instead of merely the edge-subset. The same also naturally generalizes to $N$, $\\mc{C}$, and $\\mc{N}$. We will further use $\\Delta_C$ to denote the maximum degree in graph $C$. Moreover, for any vertex $v$ we use $c_v$ (resp. $n_v$) to denote the probability that $v$ is matched via a crucial (resp. non-crucial) edge in $\\MM{\\mc{G}}$.\n\n\\begin{observation}\\label{obs:crucialdegree}\n\t$\\Delta_C \\leq 1\/\\tau_+$.\n\\end{observation}\n\\begin{proof}\n\tEach edge $e \\in C$ has $q_e \\geq \\tau_+$ by definition. Thus, if there were a vertex $v$ of degree larger than $1\/\\tau_+$ in $C$, we would have $q_v > 1\/\\tau_+ \\times \\tau_+ = 1$, which contradicts Observation~\\ref{obs:qv}.\n\\end{proof}\n\n\\subsection{Setting the Thresholds $\\tau_-$ and $\\tau_+$}\n\nTo describe how we set the values of $\\tau_-$ and $\\tau_+$, we state a lemma that we prove in Section~\\ref{sec:proofs}.\n\n\n\\begin{lemma}\\label{lem:gap}\n\tFix any arbitrary function $f(x)$ such that $0 < f(x) < x$ for any $0 < x < 1$. There is a choice of $0 < \\tau_- < \\tau_+ < 1$ such that: (1) $\\tau_- = f(\\tau_+)$. (2) $q(N) + q(C) \\geq (1-\\epsilon)\\opt$. (3) Both $\\tau_-$ and $\\tau_+$ depend only on $\\epsilon$ and $p$. And finally, (4) $\\tau_+ \\leq (\\epsilon p)^{50}$.\n\\end{lemma}\n\nThe lemma above essentially shows that we can have any desirably large gap between $\\tau_+$ and $\\tau_-$ and still ensure that $q(N)+q(C) \\geq (1-\\epsilon)\\opt$. That is, the ignored edges in expectation constitute at most $\\epsilon \\opt$ edges of $\\MM{\\mc{G}}$. While this may sound counter-intuitive, it follows roughly speaking from the fact that by iteratively reducing the threshold $\\tau_+$ by a sufficient amount, all the previously ignored edges become crucial. Thus it cannot continue to hold that there is still a significant mass of the matching on the ignored edges after sufficiently many iterations. See Section~\\ref{sec:proofs} for the proof.\n\nHaving Lemma~\\ref{lem:gap}, we set our thresholds and the parameter $R$ of Algorithm~\\ref{alg:sampling} as follows:\n\n\\begin{highlighttechnical}\n\t\\textbf{Setting $\\tau_-, \\tau_+,$ and $R$:}\n\t\n\t\\vspace{0.4cm}\n\t\n\tDefine function $f(x) := x^{10g(x)}$ where $g(x) := \\epsilon^{-20}\\log\\frac{1}{x}$.\n\t\n\t\\smallskip\n\t\n\tWe plug this function $f$ into Lemma~\\ref{lem:gap} and define $\\tau_-$ and $\\tau_+$ accordingly. We also set $R = \\frac{1}{2\\tau_-}$.\n\\end{highlighttechnical}\n\nNote that function $f$ as defined above satisfies $0 < f(x) < x$ for any $0 < x < 1$ since clearly $g(x) \\geq 1$ so long as $0 < x < 1$. Therefore, we can indeed plug $f$ into Lemma~\\ref{lem:gap}. This results in the following properties:\n\n\\begin{corollary}\\label{cor:thresholds}\n\tIt holds that: (1) $\\tau_- = (\\tau_+)^{10g}$ where $g = \\epsilon^{-20}\\log\\frac{1}{\\tau_+}$. (2) $q(N) + q(C) \\geq (1-\\epsilon)\\opt$. (3) Both $\\tau_-$ and $\\tau_+$ depend only on $\\epsilon$ and $p$ and thus $R=O_{\\epsilon, p}(1)$. (4) $\\tau_- < \\tau_+ \\leq (\\epsilon p)^{50}$.\n\\end{corollary}\n\nThe next observation shows that $R$ is set such that Algorithm~\\ref{alg:sampling} samples (almost) all crucial edges.\n\n\\begin{observation}\\label{obs:samplealmostallcrucial}\n\tFor every edge $e \\in C$, $\\Pr[e \\in Q] \\geq 1-\\epsilon$.\n\\end{observation}\n\\begin{proof}\n\tNote that $e \\in Q$ if and only if there is at least one $i \\in [R]$ where $e \\in \\MM{\\mc{G}_i}$. 
The probability that $e \\in \\MM{\\mc{G}_i}$ for any fixed $i$ is precisely $q_e$. Since realizations $\\mc{G}_1, \\ldots, \\mc{G}_R$ are independent, it holds that $\\Pr[e \\not\\in Q] = (1-q_e)^{R}$. On the other hand $q_e \\geq \\tau_+$ since $e$ is crucial. Also $R = \\frac{1}{2\\tau_-} > \\ln \\epsilon^{-1}\/\\tau_+$ where the latter inequality follows easily from Corollary~\\ref{cor:thresholds} part (1). Combining all of these gives: \n\t$$\n\t\\Pr[e \\not\\in Q] = (1-q_e)^R < (1-\\tau_+)^{\\ln \\epsilon^{-1}\/\\tau_+} < e^{-\\ln \\epsilon^{-1}} = \\epsilon.\n\t$$\n\tTherefore indeed $\\Pr[e \\in Q] \\geq 1-\\epsilon$.\n\\end{proof}\n\n\\subsection{The Vertex-Independent Matching Lemma}\n\nAs discussed before, a key technical contribution of this work that allows getting an arbitrarily good approximation-factor is a ``vertex-independent matching'' lemma that we state here. The proof of this lemma is involved and thus we defer it to Section~\\ref{sec:independentmatching}. In Section~\\ref{sec:analysisviavertexindependent}, we show how Lemma~\\ref{lem:independentmatching} can be used to analyze Algorithm~\\ref{alg:sampling} and prove Theorem~\\ref{thm:main}.\n\n\n\\newcommand{\\independentmatching}[0]{There is a randomized algorithm that constructs an integral matching $Z$ of $\\mc{C}$ (the realized subgraph of $C$) such that defining $X_v$ as the indicator random variable for $v \\in V(Z)$, we get:\n\t\\begin{enumerate}[itemsep=0.2pt,topsep=5pt]\n\t\t\\item $\\E[|Z|] \\geq q(C) - 30\\epsilon\\opt$.\n\t\t\\item For every vertex $v$, $\\Pr[X_v] \\leq \\max\\{c_v - \\epsilon^2, 0\\}$, where recall that $c_v$ is the probability that vertex $v$ is matched via a crucial edge in $\\MM{\\mc{G}}$.\n\t\t\\item The matching $Z$ is independent of the realization of non-crucial edges in $G$.\n\t\t\\item Let $\\lambda := \\epsilon^{-20}\\log\\Delta_C$. For every $k$ and every $\\{v_1, v_2, \\ldots, v_k \\} \\subseteq V$ such that $d_C(v_i, v_j) \\geq \\lambda$ for all $v_i \\not= v_j$, random variables $X_{v_1}, \\ldots, X_{v_k}$ are independent.\n\t\\end{enumerate}\nWe emphasize that $\\E[|Z|]$ and $X_v$ are both defined with respect to both the randomization in the realization of $C$ and the randomization of the algorithm constructing $Z$.\n\t}\n\\begin{lemma}[Vertex-Independent Matching Lemma]\\label{lem:independentmatching}\n\t\\independentmatching{}\n\\end{lemma}\n\n\\begin{observation}\\label{obs:ggtlambda}\n\tLet $g$ be as defined in Corollary~\\ref{cor:thresholds} and $\\lambda$ be as defined in Lemma~\\ref{lem:independentmatching}. Then it holds that $g \\geq \\lambda$.\n\\end{observation}\n\\begin{proof}\n\tSince $\\lambda = \\epsilon^{-20}\\log \\Delta_C$ by definition and $\\Delta_C \\leq 1\/\\tau_+$ by Observation~\\ref{obs:crucialdegree}, we get that $\\lambda \\leq \\epsilon^{-20}\\log \\frac{1}{\\tau_+}$. On the other hand $g = \\epsilon^{-20}\\log \\frac{1}{\\tau_+}$. Therefore, $g \\geq \\lambda$.\n\\end{proof}\n\\section{Concentration of the Maximum Realized Matching's Size}\\label{sec:concentration}\n\nIn this section, we prove that the random variable $\\mu(\\mc{G})$, i.e. the size of the maximum realized matching of $G$, is highly concentrated around its mean $\\E[\\mu(\\mc{G})] = \\opt$. A similar concentration bound was previously proved also in the works of \\cite{DBLP:conf\/soda\/BlumCHPPV17,DBLP:conf\/soda\/AssadiBBMS19}. 
Nonetheless, we provide the full proof in this section for the sake of self-containment.\n\n\\begin{lemma}\\label{lem:concentration}\n\tFor every $0 < t \\leq \\opt$, $\\Pr[|\\mu(\\mc{G}) - \\opt| \\geq t] \\leq \\exp\\left(-\\frac{t^2}{2\\opt + 2t\/3}\\right) < \\exp\\left(-\\frac{t^2}{3\\opt}\\right)$.\n\\end{lemma}\n\n\\begin{corollary}\\label{cor:highprobability}\n\tLet $Q$ be a subgraph of $G$ obtained via a deterministic algorithm and suppose that $\\opt = \\omega(1)$. If $\\E[\\mu(\\mc{Q})]\/\\E[\\mu(\\mc{G})] \\geq \\alpha$ then with high probability $\\mu(\\mc{Q})\/\\mu(\\mc{G}) \\geq (1-o(1))\\alpha$.\n\\end{corollary}\n\\begin{proof}\n\tLemma~\\ref{lem:concentration} implies that w.h.p. $\\mu(\\mc{Q}) = (1\\pm o(1))\\E[\\mu(\\mc{Q})]$ and $\\mu(\\mc{G}) = (1\\pm o(1))\\E[\\mu(\\mc{G})]$. Therefore, w.h.p. $\\mu(\\mc{Q})\/\\mu(\\mc{G}) = (1\\pm o(1)) \\E[\\mu(\\mc{Q})]\/\\E[\\mu(\\mc{G})] \\geq (1-o(1))\\alpha$.\n\\end{proof}\n\nWe note that our construction of subgraph $Q$ in Algorithm~\\ref{alg:sampling} is randomized, thus the corollary above cannot be used as a black-box to imply a high probability bound. However, we remark that a proof similar to that of Lemma~\\ref{lem:concentration}, which we give below, shows that $\\mu(\\mc{Q})$ in our algorithm is concentrated around its mean even considering the randomization of Algorithm~\\ref{alg:sampling}. Therefore, our algorithm also guarantees a high probability bound for the approximation-factor.\n\n\nIn order to prove this lemma, we use the concentration of ``self-bounding'' functions. See Sections~3.3 and 6.7 of the book \\cite{DBLP:books\/daglib\/0035704} by Boucheron, Lugosi and Massart for a thorough discussion on this concentration inequality and its proof.\n\n\\begin{definition}[{\\cite[Section~6.7]{DBLP:books\/daglib\/0035704}}]\\label{def:selfbounding}\n\tA function $f: \\mc{X}^m \\to \\mathbb{R}$ is ``self-bounding'' if for every $i \\in [m]$ there is a function $f_i: \\mc{X}^{m-1} \\to \\mathbb{R}$ such that for all $x=(x_1, \\ldots, x_m) \\in \\mathcal{X}^m$,\n\t\\begin{enumerate}\n\t\t\\item $0 \\leq f(x) - f_i(x^{(i)})\\leq 1$ for all $i \\in [m]$, and\n\t\t\\item $\\sum_{i=1}^m (f(x)-f_i(x^{(i)})) \\leq f(x)$,\n\t\\end{enumerate}\n\twhere $x^{(i)} = (x_1, \\ldots, x_{i-1}, x_{i+1}, \\ldots, x_m)$.\n\\end{definition}\n\n\\begin{lemma}[{\\cite[Theorem~6.12]{DBLP:books\/daglib\/0035704}}]\\label{lem:selfbounding}\n\tIf $X_1, \\ldots, X_m$ are independent random variables taking values in $\\mathcal{X}$ and $Z = f(X_1, \\ldots, X_m)$ is self-bounding, then for every $0 < t \\leq \\E Z$,\n\t$$\n\t\t\\Pr[|Z - \\E Z| \\geq t] \\leq \\exp \\left(- \\frac{t^2}{2\\E Z + 2t\/3} \\right).\n\t$$\n\\end{lemma}\n\nHaving this inequality, we can prove Lemma~\\ref{lem:concentration} as follows.\n\n\\begin{proof}[Proof of Lemma~\\ref{lem:concentration}]\n\tLet $X_e$ for each edge $e$ in graph $G$ be the indicator of the event that $e$ is realized. We can use vector $X = (X_{e_1}, \\ldots, X_{e_m})$ to represent a realization of $G$ where $e_1, \\ldots, e_m$ are all edges in $G$. With a slight abuse of notation, we use $\\mu(X)$ to denote the size of the maximum matching in realization $X$. We first prove that function $\\mu(X)$ is self-bounding. For each $i \\in [m]$, define\n\t$$\n\t\t\\mu_i(X^{(i)}) = \\mu(X_{e_1}, \\ldots, X_{e_{i-1}}, 0, X_{e_{i+1}}, \\ldots, X_{e_m}).\n\t$$\n\tIn words, $\\mu_i(X^{(i)})$ is the maximum matching size in realization $X$ if we regard edge $e_i$ as unrealized. 
We need to show that the two conditions of Definition~\\ref{def:selfbounding} hold. First, we have to show that\n\t$$\n\t\t0 \\leq \\mu(X) - \\mu_i(X^{(i)}) \\leq 1 \\qquad \\text{for all $i \\in [m]$ and all realizations $X$.}\n\t$$\n\tObserve that removing a realized edge cannot increase the maximum realized matching size, thus clearly $\\mu(X) - \\mu_i(X^{(i)}) \\geq 0$. Moreover, removing each edge decreases the maximum matching size by at most 1. Thus $\\mu(X) - \\mu_i(X^{(i)}) \\leq 1$, proving the first condition. For the second condition, we have to show that\n\t$$\n\t\t\\sum_{i=1}^m \\left(\\mu(X) - \\mu_i(X^{(i)}) \\right) \\leq \\mu(X).\n\t$$\n\tTo see this, fix a maximum realized matching $M$ in realization $X$. For any edge $e_i$ outside this matching, we have $\\mu(X) - \\mu_i(X^{(i)}) = 0$. For the rest, as discussed above $\\mu(X) - \\mu_i(X^{(i)}) \\leq 1$. Therefore indeed $\\sum_{i=1}^m \\left(\\mu(X) - \\mu_i(X^{(i)}) \\right) \\leq |M| = \\mu(X)$.\n\t\n\tWe proved that $\\mu(X)$ is self-bounding. Since the edges are realized independently, we can plug this into Lemma~\\ref{lem:selfbounding} and immediately obtain Lemma~\\ref{lem:concentration}.\n\\end{proof}\n\n\\subsection{Construction of an Expected Fractional Matching $x$ on $\\mc{Q}$}\\label{sec:constructfractional}\n\nIn this section, we describe an algorithm that constructs an ``expected fractional matching'' $x$ on $\\mc{Q}$ as defined below.\n\n\\begin{definition}\\label{def:expfractional}\n\tLet $\\mathcal{A}$ be a random process that assigns a fractional value $x_e \\in [0, 1]$ to each edge $e$ of a graph $G(V, E)$. We say $x$ is an expected fractional matching if:\n\t\\begin{enumerate}[itemsep=0pt,topsep=5pt]\n\t\t\\item For each vertex $v$, defining $x_v := \\sum_{e \\ni v} x_e$ we have $\\E[x_v] \\leq 1$.\n\t\t\\item For all subsets $U \\subseteq V$ with $|U|\\leq 1\/\\epsilon$, $x(U) \\leq \\lfloor \\frac{|U|}{2} \\rfloor$ with probability 1, where $x(U) := \\sum_{e = \\{u, v\\}: u, v \\in U} x_e$.\n\t\\end{enumerate}\n\\end{definition}\n\nWe emphasize that the definition only requires $\\E[x_v] \\leq 1$, thus depending on the coin tosses of the process, it may occur that $x_v > 1$, violating the constraints of a normal fractional matching. We will later argue that in our construction, the values of $x_v$'s are sufficiently concentrated around their mean and thus we can turn our expected fractional matching into an actual fractional matching of (almost) the same size.\n\nAs described before, we construct an expected fractional matching $x$ on the edges of graph $\\mc{Q}$. Note that here the graph $\\mc{Q}$ itself is also stochastic. In the construction, we treat crucial and non-crucial edges completely differently.\n\n\\smparagraph{Crucial edges.} On the crucial edges, we first construct an integral matching $Z$ using the algorithm of Lemma~\\ref{lem:independentmatching}. Once we have $Z$, we define $x$ on crucial edges as follows.\n\n\\begin{highlighttechnical}\n\\vspace{-0.4cm}\n\\begin{flalign}\\label{eq:crucialxe}\n\t\\text{For every crucial edge $e$, } \\qquad\\qquad x_e := \\begin{cases}\n 1,& \\text{if $e \\in Z$ and $e \\in Q$,}\\\\\n 0, & \\text{otherwise}.\n \\end{cases}\n\\end{flalign}\n\\end{highlighttechnical}\n\nNote from Observation~\\ref{obs:samplealmostallcrucial} that each crucial edge belongs to $Q$ with probability at least $1-\\epsilon$. 
Therefore the construction above (roughly speaking) sets $x_e = 1$ for most of the edges $e$ in $Z$.\n\n\\smparagraph{Non-crucial edges.} For defining $x$ on the non-crucial edges, we start with a number of useful definitions. For any edge $e$, define $t_e$ to be the number of matchings $\\MM{\\mc{G}_1}, \\ldots, \\MM{\\mc{G}_R}$ that include $e$. Then based on that, define \n\\begin{equation}\\label{eq:deff}\n\tf_e := \\begin{cases}\n\t\t\\frac{t_e}{R}, & \\text{if $\\frac{t_e}{R} \\leq \\frac{1}{\\sqrt{\\epsilon R}}$ and $e$ is non-crucial,}\\\\\n\t\t0, & \\text{otherwise.}\n\t\\end{cases}\n\\end{equation}\nNote that $f_e$ depends only on the randomization of Algorithm~\\ref{alg:sampling}, i.e. it is independent of the realization. Also note that, as desired, $f_e$ is non-zero only on the edges that belong to graph $Q$. Having defined $f_e$, we define $x_e$ on the non-crucial edges as follows.\n\\begin{highlighttechnical}\nFor every non-crucial edge $e$, define\n\\begin{flalign}\\label{eq:noncrucialxe}\n\tx_e = \\begin{cases}\n \\frac{f_e}{p_e(1-\\Pr[X_v])(1-\\Pr[X_u])}, & \\text{if $e$ is realized, $u, v \\not\\in V(Z)$, and $d_C(u, v) \\geq \\lambda$,}\\\\\n 0, & \\text{otherwise}.\n \\end{cases}\n\\end{flalign}\n\\end{highlighttechnical}\n\nWe note that $\\lambda$ in the definition above is the number defined in Lemma~\\ref{lem:independentmatching} and that $X_v$ is the indicator random variable for the event $v \\in V(Z)$.\n\nBefore concluding this section, let $f_v := \\sum_{e \\in N : v \\in e} f_e$ for each vertex $v$. We note the following properties of $f$, which can be derived directly from the definition above. The proof is given in Section~\\ref{sec:proofs}.\n\n\\begin{claim}\\label{cl:frange}\n\tIt holds that:\n\t\\begin{enumerate}\n\t\t\\item For every non-crucial edge $e$, $\\E[f_e] \\leq q_e$.\n\t\t\\item For every non-crucial edge $e$, $\\E[f_e] \\geq (1-\\epsilon)q_e$.\n\t\t\\item For every vertex $v$, it always holds that $\\sum_{e \\ni v} f_e \\leq 1$.\n\t\t\\item For every vertex $v$, $\\Pr[f_v > n_v + 0.1\\epsilon] \\leq (\\epsilon p)^{10}$, where recall that $n_v$ is the probability that $v$ is matched via a non-crucial edge in $\\MM{\\mc{G}}$.\n\t\\end{enumerate}\n\\end{claim}\n\nConsider a non-crucial edge $e = \\{u, v\\}$ between two nodes $u$ and $v$ with $d_C(u, v) \\geq \\lambda$. The probability that $x_e$ is non-zero is $p_e(1-\\Pr[X_v])(1-\\Pr[X_u])$: Both $u$ and $v$ should be unmatched in $Z$ and $e$ should be realized, and further all these events are independent. This intuitively explains why we set $x_e = \\frac{f_e}{p_e(1-\\Pr[X_v])(1-\\Pr[X_u])}$ if all these conditions hold: We want the denominator to cancel out with this probability so that we get $\\E[x_e \\mid f_e] = f_e$. We will formalize this intuition in Section~\\ref{sec:analysisfractional} where we prove the expected size of $x$ is large.\n\n\\subsection{Validity of $x$}\\label{sec:validityofx}\n\nIn this section, we prove that $x$ is indeed an expected fractional matching of $\\mc{Q}$.\n\nFirst, we prove that $x$ is non-zero only on the edges of $\\mc{Q}$. This simply follows from the construction of $x$.\n\n\\begin{claim}\n\tAny edge $e$ with $x_e > 0$ belongs to $\\mc{Q}$. That is, $x$ is only non-zero on the set of edges queried by Algorithm~\\ref{alg:sampling} that are also realized.\n\\end{claim}\n\\begin{proof}\n\tFor any crucial edge $e$, we either have $x_e = 1$ or $x_e = 0$. By definition, if $x_e = 1$ then $e \\in Z \\cap Q$. 
By Lemma~\\ref{lem:independentmatching}, $Z$ is a matching of {\\em realized} crucial edges, i.e. $e \\in Z$ implies $e \\in \\mc{E}$. Therefore, $e \\in Z \\cap Q$ implies $e \\in \\mc{E} \\cap Q = \\mc{Q}$ as desired.\n\t\n\tFor any non-crucial edge $e$, if $e \\not\\in Q$, then $f_e = 0$ by definition of $f_e$. Therefore, if $x_e > 0$, then $f_e > 0$ which implies $e \\in Q$. Moreover, by (\\ref{eq:noncrucialxe}), $x_e > 0$ implies $e$ is realized. Combining these two, we get that if $x_e>0$ then $e \\in \\mc{Q}$.\n\\end{proof}\n\nNext, we prove condition (1) of Definition~\\ref{def:expfractional}.\n\n\\begin{claim}\\label{cl:expxvlt1}\n\tFor every vertex $v$, $\\E[x_v] \\leq 1$.\n\\end{claim}\n\\begin{proof}\n\tSuppose at first that there is an edge $e$ incident to $v$ that belongs to matching $Z$. Then we either have $x_e = 1$ or $x_e = 0$ (depending on whether $e \\in Q$ or not). For all other edges $e'$ connected to $v$ (crucial or non-crucial) we have $x_{e'} = 0$ by (\\ref{eq:crucialxe}) and (\\ref{eq:noncrucialxe}). Therefore, if such an edge $e$ exists, we indeed have $x_v \\leq 1$. For the rest of the proof, we condition on the event that no such edge $e$ exists, i.e. $v \\not\\in V(Z)$, and prove the claim.\n\t\n\tLet $u_1, u_2, \\ldots, u_r$ be the neighbors of $v$ in graph $G$ such that for all $i \\in [r]$: (1) edge $\\{v, u_i\\}$ is non-crucial, (2) $d_C(v, u_i) \\geq \\lambda$. Let $e_i := \\{v, u_i\\}$; we claim that conditioned on $v \\not\\in V(Z)$, we have\n\t\\begin{equation}\\label{eq:21410238471098412}\n\tx_v = x_{e_1} + x_{e_2} + \\ldots + x_{e_r}.\n\t\\end{equation}\n\tTo see this, fix an edge $e = \\{v, u\\}$ for some $u \\not\\in \\{u_1, \\ldots, u_r\\}$. We show that $x_e = 0$, which suffices to prove (\\ref{eq:21410238471098412}). First, if $e$ is crucial, then $e \\not\\in Z$ given that $v \\not\\in V(Z)$; thus according to (\\ref{eq:crucialxe}) we set $x_e = 0$. Moreover, if $e$ is non-crucial, the assumption $u \\not\\in \\{u_1, \\ldots, u_r\\}$ implies $d_C(v, u) < \\lambda$ by definition of the set. In this case also, we set $x_e = 0$ according to (\\ref{eq:noncrucialxe}), concluding the proof of (\\ref{eq:21410238471098412}).\n\t\n\tBy linearity of expectation applied to (\\ref{eq:21410238471098412}), we get\n\t\\begin{equation}\\label{eq:612903247}\n\t\\E[x_v \\mid v \\not\\in V(Z)] = \\sum_{i=1}^r \\E[x_{e_i} \\mid v \\not\\in V(Z)].\n\t\\end{equation}\n\tMoreover, for any arbitrary $i \\in [r]$ we have\n\t\\begin{align}\n\t\t\\nonumber\\E[x_{e_i} \\mid v\\not\\in V(Z)] &= \\Pr[u_i \\not\\in V(Z), e_i \\text{ realized} \\mid v \\not\\in V(Z)] \\times \\frac{\\E[f_{e_i}]}{p_{e_i}(1-\\Pr[X_v])(1-\\Pr[X_{u_i}])}\\\\\n\t\t&= p_{e_i}(1-\\Pr[X_{u_i}]) \\times \\frac{\\E[f_{e_i}]}{p_{e_i}(1-\\Pr[X_v])(1-\\Pr[X_{u_i}])} = \\frac{\\E[f_{e_i}]}{1-\\Pr[X_v]}.\\label{eq:7128312098}\n\t\\end{align}\n\tThe second equality above follows from the fact that the event of $e_i$ being realized is independent of $u_i$ or $v$ being in $V(Z)$, as indicated by Lemma~\\ref{lem:independentmatching} part 3, and from the fact that $u_i \\not\\in V(Z)$ and $v \\not\\in V(Z)$ are independent from each other due to Lemma~\\ref{lem:independentmatching} part 4 combined with the assumption that $d_C(u_i, v) \\geq \\lambda$. 
We also note that we have used $\\E[f_{e_i}]$ instead of $\\E[f_{e_i} \\mid v\\not\\in V(Z)]$ in the equation above since $f_{e_i}$ depends only on the randomization used in Algorithm~\\ref{alg:sampling}, whereas the matching $Z$ is constructed in Lemma~\\ref{lem:independentmatching} independently of the outcome of Algorithm~\\ref{alg:sampling}.\n\t\n\tCombining (\\ref{eq:612903247}) and (\\ref{eq:7128312098}), we get\n\t\\begin{equation}\\label{eq:421610986201363}\n\t\t\\E[x_v \\mid v\\not\\in V(Z)] = \\sum_{i = 1}^r \\frac{\\E[f_{e_i}]}{1-\\Pr[X_v]} = \\frac{1}{1-\\Pr[X_v]}\\sum_{i=1}^r\\E[f_{e_i}].\n\t\\end{equation}\t\n\tFrom Claim~\\ref{cl:frange} part 1, we know $\\E[f_{e_i}] \\leq q_{e_i}$. Replacing this into the equality above, we get\n\t$$\n\t\t\\E[x_v \\mid v\\not\\in V(Z)] \\leq \\frac{1}{1-\\Pr[X_v]} \\sum_{i=1}^r q_{e_i} \\leq \\frac{n_v}{1-\\Pr[X_v]}.\n\t$$\n\t\n\tLemma~\\ref{lem:independentmatching} part (2) guarantees that $\\Pr[X_v] \\leq c_v$, which implies $1-\\Pr[X_v] \\geq 1-c_v$. On the other hand, $c_v + n_v$ is upper bounded by the probability that $v$ is matched in $\\MM{\\mc{G}}$, thus $c_v + n_v \\leq 1$, implying $n_v \\leq 1-c_v$. These, combined with the equation above, give\n\t$$\n\t\t\\E[x_v \\mid v\\not\\in V(Z)] \\leq \\frac{n_v}{1-\\Pr[X_v]} \\leq \\frac{1-c_v}{1-c_v} = 1.\n\t$$\n\tRecalling also that $\\E[x_v \\mid v\\in V(Z)] \\leq 1$ as described at the start of the proof, this concludes the proof of the claim that $\\E[x_v] \\leq 1$.\n\\end{proof}\n\nNext, we show that condition (2) of Definition~\\ref{def:expfractional} also holds for our construction.\n\n\\begin{claim}\\label{cl:xblossom}\n\tFor all subsets $U \\subseteq V$ with $|U|\\leq 1\/\\epsilon$, $x(U) \\leq \\lfloor \\frac{|U|}{2} \\rfloor$ with probability 1.\n\\end{claim}\n\\begin{proof}\n\tBy definition of $x$, the value of $x_e$ on crucial edges is either 1 or 0. Moreover, the definition also implies that if a vertex $v$ is incident to a crucial edge $e$ with $x_e = 1$, for all other edges $e'$ incident to $v$ we have $x_{e'} = 0$. Call all such vertices {\\em integrally matched}. Fix a subset $U$ and let $U'$ be the subset of $U$ excluding its integrally matched vertices. One can easily confirm that if $x(U) > \\lfloor |U|\/2 \\rfloor$, then also $x(U') > \\lfloor |U'|\/2 \\rfloor$. Therefore, either the claim holds, or there should exist a subset with no integrally matched vertices that violates it. Let $U$ be the smallest such subset and observe that $|U| \\leq 1\/\\epsilon$ (otherwise $U$ does not contradict the claim's statement).\t\n\t\n\tSince $U$ has no integrally matched vertex, for every crucial edge $e$ inside $U$ we have $x_e = 0$ and for every non-crucial edge $e$ inside $U$ by definition (\\ref{eq:noncrucialxe}) we have\n\t$\n\t\tx_e \\leq \\frac{f_e}{p_e (1-\\Pr[X_u]) (1-\\Pr[X_v])}.\n\t$\n\tBy definition of $f_e$, it holds that $f_e \\leq 1\/\\sqrt{\\epsilon R}$ and by Lemma~\\ref{lem:independentmatching} part 2, $\\Pr[X_u], \\Pr[X_v] \\leq 1-\\epsilon^2$. Replacing these into the bound above, we get\n\t$\n\t\tx_e \\leq \\frac{1}{p \\times \\epsilon^2 \\times \\epsilon^2 \\sqrt{\\epsilon R}}.\n\t$ Noting from Corollary~\\ref{cor:thresholds} part 4 that $\\tau_- < (\\epsilon p)^{50}$ and that $R = \\frac{1}{2\\tau_-}$, we get $R > \\frac{1}{2(\\epsilon p)^{50}}$. 
Replacing this into the previous upper bound on $x_e$, we get that $x_e$ is much smaller than, say, $\\epsilon^3$.\n\t\n\tNow, since $|U| \\leq 1\/\\epsilon$, there are at most $\\binom{|U|}{2} < 1\/\\epsilon^2$ edges $e$ inside $U$ that can have non-zero $x_e$. For each of these, as discussed above, $x_e < \\epsilon^3$. Thus we have $x(U) < \\epsilon^3 \\times 1\/\\epsilon^2 < 1$, which cannot be larger than $\\lfloor |U|\/2 \\rfloor$ if $|U| \\geq 2$ (if $|U| \\leq 1$, then there are no edges with both endpoints in $U$ and thus clearly $x(U) = 0$). This contradicts the assumption that $x(U) > \\lfloor |U|\/2 \\rfloor$, implying that there is no such subset. \n\\end{proof}\n\n\\subsection{The Expected Size of $x$}\\label{sec:analysisfractional}\n\nIn this section we prove the following.\n\n\\begin{lemma}\\label{lem:sizeofx}\n\tIt holds that $\\E\\left[|x| \\right] \\geq (1-34\\epsilon)\\opt$.\n\\end{lemma}\n\nWe start by analyzing the size of $x$ on the crucial edges. This is a simple consequence of Lemma~\\ref{lem:independentmatching} part 1 which guarantees $\\E[|Z|]\\geq q(C)-30\\epsilon \\opt$ and Observation~\\ref{obs:samplealmostallcrucial} which guarantees each crucial edge belongs to $Q$ with probability at least $1-\\epsilon$.\n\n\\begin{claim}\\label{cl:sizeofxcrucial}\n\tIt holds that $\\E\\left[\\sum_{e \\in C} x_e \\right] \\geq q(C)-31\\epsilon \\opt$.\n\\end{claim}\n\\begin{proof}\n\tDenoting $x(C) = \\sum_{e \\in C} x_e$, we have\n\t\\begin{equation*}\n\t\t\\E[x(C)] = \\E \\Big[ \\sum_{e \\in C} x_e \\Big] = \\sum_{e \\in C} \\E[x_e] = \\sum_{e \\in C} \\Pr[e \\in Q \\text{ and } e \\in Z].\n\t\\end{equation*}\n\tObserve that $Z$ and $Q$ are picked independently as Lemma~\\ref{lem:independentmatching} is essentially unaware of $Q$. Therefore, for any crucial edge $e$ we get \n\t$$\n\t\\Pr[e \\in Q \\text{ and } e \\in Z] = \\Pr[e \\in Q] \\times \\Pr[e \\in Z] \\geq (1-\\epsilon)\\Pr[e \\in Z],\n\t$$\n\twhere the latter inequality comes from Observation~\\ref{obs:samplealmostallcrucial}. Replacing this into the equality above gives\n\t$$\n\t\\E[x(C)] \\geq (1-\\epsilon)\\sum_{e \\in C} \\Pr[e \\in Z] = (1-\\epsilon)\\E[|Z|] \\stackrel{\\text{Lemma~\\ref{lem:independentmatching} part 1}}{\\geq} (1-\\epsilon) (q(C)-30\\epsilon \\opt) \\geq q(C) - 31\\epsilon \\opt,\n\t$$\n\tcompleting the proof of the claim.\n\\end{proof}\n\nTo analyze the size of $x$ on the non-crucial edges, we first define $N'$ to be the subset of non-crucial edges $\\{u, v\\}$ such that $d_C(u, v) \\geq \\lambda$ and define $q(N') := \\sum_{e \\in N'} q_e$ and $x(N') := \\sum_{e \\in N'} x_e$. The definition of $N'$ is useful since, recall from (\\ref{eq:noncrucialxe}), for any $\\{u, v\\} \\in N$ with $d_C(u, v) < \\lambda$ (i.e. $\\{u, v\\} \\not\\in N'$) we set $x_e = 0$. Therefore only the edges in $N$ that also belong to $N'$ have non-zero $x_e$, implying $x(N) = x(N')$.\n\n\\begin{claim}\\label{cl:nplarge}\n\tIt holds that $q(N') \\geq q(N)-\\epsilon q(C)$.\n\\end{claim}\n\\begin{proof}\n\tFor any edge $e = \\{u, v\\}$ in $N \\setminus N'$, we choose an arbitrary shortest path $P$ between $u$ and $v$ in graph $C$ and charge the edges of this path. Note that by definition of $N'$, such a path between $u$ and $v$ exists and has length less than $\\lambda$. Now, take a crucial edge $f$. We denote by $\\Phi(f)$ the set of edges in $N \\setminus N'$ for which we charge a path containing $f$. 
Below, we argue that\n\t\\begin{equation}\\label{eq:612398273497}\n\t\t|\\Phi(f)| \\leq 4(1\/\\tau_+)^{2\\lambda} \\qquad \\forall f \\in C.\n\t\\end{equation}\n\t\n\tFix a crucial edge $f$ and an edge $\\{u, v\\} \\in \\Phi(f)$. As discussed above, there should be a path of length less than $\\lambda$ between $u$ and $v$ in graph $C$ that passes through $f$. This means that $d_C(u, f) < \\lambda$ and $d_C(v, f) < \\lambda$. Therefore, both $u$ and $v$ are at distance at most $\\lambda$ from $f$ in graph $C$. \n\t\n\tObserve that there are at most $2(\\Delta_C)^{\\lambda}$ vertices in the $\\lambda$-neighborhood of $f$ in graph $C$. Thus, there are at most $2(\\Delta_C)^{\\lambda} \\times 2(\\Delta_C)^{\\lambda} = 4(\\Delta_C)^{2\\lambda}$ pairs of vertices that can potentially charge $f$, proving $|\\Phi(f)| \\leq 4(\\Delta_C)^{2\\lambda} \\leq 4(1\/\\tau_+)^{2\\lambda}$ where the latter inequality comes from Observation~\\ref{obs:crucialdegree} that $\\Delta_C \\leq 1\/\\tau_+$. This concludes the proof of (\\ref{eq:612398273497}).\n\t\n\tAs discussed above, each edge $e \\in N \\setminus N'$ charges a path in $C$, thus belongs to $\\Phi(f)$ of at least one crucial edge $f$. Therefore, we get\n\t\\begin{equation}\\label{eq:10234817293478}\n\t\t|N \\setminus N'| \\leq \\sum_{f \\in C} |\\Phi(f)|.\n\t\\end{equation}\n\tEvery edge $e$ in $N \\setminus N'$ is non-crucial, i.e. $q_e \\leq \\tau_-$. Thus:\n\t\\begin{equation}\\label{eq:7873241712304912348}\n\t\\sum_{e \\in N \\setminus N'} q_e \\leq \\tau_-|N \\setminus N'| \\stackrel{(\\ref{eq:10234817293478})}{\\leq} \\tau_- \\sum_{f \\in C} |\\Phi(f)| \\stackrel{(\\ref{eq:612398273497})}{\\leq} 4\\tau_- |C|(1\/\\tau_+)^{2\\lambda} \\leq 4\\tau_- q(C)(1\/\\tau_+)^{2\\lambda+1},\n\t\\end{equation}\n\twhere the last inequality comes from the fact that $q(C) \\geq |C| \\tau_+$ as for every edge $e \\in C$, $q_e \\geq \\tau_+$.\n\t\n\tFrom Corollary~\\ref{cor:thresholds} we have $\\tau_- = (\\tau_+)^{10g}$ and we have $g \\geq \\lambda > 1$ by Observation~\\ref{obs:ggtlambda}. Thus:\n\t$$\n\t 4\\tau_- (1\/\\tau_+)^{2\\lambda+1} = 4 (\\tau_+)^{10g} (1\/\\tau_+)^{2\\lambda+1} = 4 (\\tau_+)^{10g - (2\\lambda + 1)} < 4 \\tau_+ < \\epsilon.\n\t$$\n\tReplacing it into inequality (\\ref{eq:7873241712304912348}), we get\n\t$$\n\t\\sum_{e \\in N \\setminus N'} q_e \\leq \\epsilon q(C).\n\t$$\n\tThis concludes the proof since\n\t$$\n\t\tq(N') = \\sum_{e \\in N'} q_e = \\sum_{e \\in N \\setminus (N \\setminus N')} q_e \\geq \\sum_{e \\in N}q_e - \\sum_{e \\in N \\setminus N'} q_e \\geq q(N) - \\epsilon q(C)\n\t$$\n\tas desired.\n\\end{proof}\n\n\n\\begin{claim}\\label{cl:xnpgtqnp}\n\tIt holds that $\\E[x(N')] \\geq (1-\\epsilon) q(N')$.\n\\end{claim}\n\\begin{proof}\n\tBy linearity of expectation, we have \n\t\\begin{equation}\\label{eq:16234421340}\n\t\\E[x(N')] = \\E \\Big[ \\sum_{e \\in N'} x_e \\Big] = \\sum_{e \\in N'} \\E[x_e].\n\t\\end{equation}\n\tWe emphasize that the expectation here is taken over the randomization in Algorithm~\\ref{alg:sampling}, the randomization in matching $Z$, and the randomization in the realization of non-crucial edges. Specifically, we write $\\E_{\\alg, Z, \\mc{N}}[x_e]$ to emphasize this.\n\t\n\tThe randomization of Algorithm~\\ref{alg:sampling} determines the value of $f_e$ which is used in defining $x_e$. Let us first condition on $f_e$ and compute $\\E_{Z, \\mc{N}}[x_e \\mid f_e]$. 
We have\n\t\\begin{equation}\\label{eq:89123}\n\t\t\\E_{Z, \\mc{N}}[x_e \\mid f_e] = \\Pr[e \\in \\mc{E} \\text{ and } u, v \\not\\in V(Z) \\mid f_e] \\times \\frac{f_e}{p_e(1-\\Pr[X_u])(1-\\Pr[X_v])}.\n\t\\end{equation}\n\tWe claim that \n\t\\begin{equation}\\label{eq:5123674182374}\n\t\t\\Pr[e \\in \\mc{E} \\text{ and } u, v \\not\\in V(Z) \\mid f_e] = p_e(1-\\Pr[X_u])(1-\\Pr[X_v]).\n\t\\end{equation}\n\tTo see this, first observe that the value of $f_e$ is determined solely by the random realizations taken by Algorithm~\\ref{alg:sampling}. In particular, the events $e \\in \\mc{E}$, and $u, v \\not\\in V(Z)$ are completely independent of the outcome of Algorithm~\\ref{alg:sampling}. This allows us to remove the condition on $f_e$ from the left hand side of (\\ref{eq:5123674182374}). Moreover, by Lemma~\\ref{lem:independentmatching} part 3, the matching $Z$ is chosen independently from the realization of non-crucial edges, thus events $e \\in \\mc{E}$ and $u, v \\not\\in V(Z)$ are independent. \tFinally, the assumption that $e \\in N'$, by definition of $N'$, implies that $d_C(u, v) \\geq \\lambda$. Therefore, by Lemma~\\ref{lem:independentmatching} part 4, events $v \\in V(Z)$ and $u \\in V(Z)$ (and for that matter their complements) are independent. Thus, indeed:\n\t\\begin{align*}\n\t\\Pr[e \\in \\mc{E} \\text{ and } u, v \\not\\in V(Z) \\mid f_e] &= \\Pr[e \\in \\mc{E}] \\times \\Pr[v \\not\\in V(Z)] \\times \\Pr[u \\not\\in V(Z)]\\\\\n\t&= p_e(1-\\Pr[X_u])(1-\\Pr[X_v]).\n\t\\end{align*}\n\tReplacing (\\ref{eq:5123674182374}) into (\\ref{eq:89123}) we get\n\t\\begin{equation*}\n\t\t\\E_{Z, \\mc{N}}[x_e \\mid f_e] = p_e(1-\\Pr[X_u])(1-\\Pr[X_v]) \\times \\frac{f_e}{p_e(1-\\Pr[X_u])(1-\\Pr[X_v])} = f_e.\n\t\\end{equation*}\n\tTaking expectation over $\\alg$ from both sides, we get\n\t\\begin{equation}\\label{eq:72313409}\n\t\\E_{\\alg}[\\E_{Z, \\mc{N}}[x_e \\mid f_e]] = \\E_{\\alg}[f_e].\n\t\\end{equation}\n\tThe left hand side equals $\\E_{\\alg, Z, \\mc{N}}[x_e]$. For the right hand side, by Claim~\\ref{cl:frange} we have $\\E[f_e] \\geq (1-\\epsilon)q_e$. \tReplacing both the left hand side and right hand side of (\\ref{eq:72313409}) by these bounds, we get\n\t\\begin{equation}\n\t\t\\E_{\\alg, Z, \\mc{N}}[x_e] \\geq (1-\\epsilon) q_e.\n\t\\end{equation}\n\tCombining this with (\\ref{eq:16234421340}) we get\n\t\\begin{equation*}\n\t\\E[x(N')] = \\sum_{e \\in N'} \\E[x_e] \\geq (1-\\epsilon) \\sum_{e \\in N'} q_e = (1-\\epsilon) q(N'),\n\t\\end{equation*}\n\tcompleting the proof.\n\\end{proof}\n\nWe are now ready to prove Lemma~\\ref{lem:sizeofx}.\n\n\\begin{proof}[Proof of Lemma~\\ref{lem:sizeofx}]\n\tWe have\n\t$$\n\t\\E\\Big[\\sum_{e}x_e\\Big] = \\E\\Big[\\sum_{e \\in C} x_e\\Big] + \\E\\Big[\\sum_{e \\in N} x_e\\Big] \\stackrel{\\text{Claim~\\ref{cl:sizeofxcrucial}}}{\\geq} q(C) - 31\\epsilon \\opt + \\E\\Big[\\sum_{e \\in N} x_e\\Big].\n\t$$\n\tAlso note that for $e \\in N$, $x_e \\not= 0$ iff $e \\in N'$ by construction of $x$. 
Thus,\n\t$$\n\t\E\Big[\sum_{e \in N} x_e\Big] = \E\Big[\sum_{e \in N'} x_e\Big] = \E[x(N')] \stackrel{\text{Claim~\ref{cl:xnpgtqnp}}}{\geq} (1-\epsilon)q(N') \stackrel{\text{Claim~\ref{cl:nplarge}}}{\geq} (1-\epsilon)(q(N)-\epsilon q(C)).\n\t$$\n\tCombining the two equations above, we get\n\t\begin{align*}\n\t\t\E\Big[\sum_{e}x_e\Big] &\geq q(C) - 31\epsilon \opt + (1-\epsilon)(q(N)-\epsilon q(C)) > q(C) + q(N) - 33\epsilon \opt\\\n\t\t&\stackrel{\text{Lemma~\ref{lem:gap} part (2)}}{\geq} (1-\epsilon)\opt - 33\epsilon \opt \geq (1-34\epsilon)\opt,\n\t\end{align*}\n\tconcluding the proof.\n\end{proof}\n\subsection{From the Expected Fractional Matching to an Actual Fractional Matching}\label{sec:turntofracmatching}\n\nWe showed that $x$ is an expected fractional matching satisfying $\E[x_v] \leq 1$ for every vertex $v$. However, as mentioned before, there is still a possibility that $x_v > 1$ depending on the coin tosses of the algorithms and the realization. This should never occur in a valid fractional matching. Thus, we define the following scaled fractional matching $y$ based on $x$, which zeroes out the fractional matching around vertices that deviate significantly from their expectation.\n\n\begin{equation}\label{eq:defy}\n\t\text{For any edge $e=\{u, v\}$,} \qquad\qquad y_e = \begin{cases}\n\t\t\tx_e\/(1+\epsilon) & \text{if $x_v, x_u \leq 1+\epsilon$,}\\\n\t\t\t0 & \text{otherwise.}\n\t\end{cases}\n\end{equation}\n\n\begin{observation}\label{obs:yblossom}\n\tBy the definition above, $y$ is a valid fractional matching, i.e. $y_v \leq 1$ for all $v \in V$. In addition, since $y_e \leq x_e$ for all edges $e$, Claim~\ref{cl:xblossom} implies that for all $U \subseteq V$ with $|U| \leq 1\/\epsilon$, $y(U) \leq \lfloor \frac{|U|}{2} \rfloor$. That is, $y$ also satisfies all blossom inequalities of size up to $1\/\epsilon$.\n\end{observation}\n\nIt remains to prove that while turning the expected fractional matching $x$ into an actual fractional matching $y$, we do not significantly reduce the matching's size. 
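As an aside, the scaling step in (\ref{eq:defy}) is straightforward to compute; the following is a minimal Python sketch, assuming $x$ is stored as a dictionary mapping edges to their fractional values (all names here are illustrative only, not from the rest of the paper):\n\begin{verbatim}\ndef scale_to_fractional_matching(x, eps):\n    # x: dict mapping edges (u, v) to fractional values x_e.\n    # First compute x_v = sum of x_e over edges incident to v.\n    xv = {}\n    for (u, v), val in x.items():\n        xv[u] = xv.get(u, 0.0) + val\n        xv[v] = xv.get(v, 0.0) + val\n    # Then apply the scaling rule: scale down by (1 + eps), and\n    # zero out any edge with an over-matched endpoint.\n    y = {}\n    for (u, v), val in x.items():\n        if xv[u] <= 1 + eps and xv[v] <= 1 + eps:\n            y[(u, v)] = val / (1 + eps)\n        else:\n            y[(u, v)] = 0.0\n    return y\n\end{verbatim}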
We address this in the lemma below.\n\n\begin{lemma}\label{lem:ylarge}\n\t$\E[|y|] \geq (1-55\epsilon)\opt$.\n\end{lemma}\n\nThe main ingredient in proving Lemma~\ref{lem:ylarge} is the following claim.\n\n\begin{claim}\label{cl:6123719801923}\n\tFor every vertex $v$, $\Pr[x_v > 1+\epsilon] \leq \epsilon^6p$.\n\end{claim}\n\nLet us first see how Claim~\ref{cl:6123719801923} suffices to prove Lemma~\ref{lem:ylarge} and then prove it.\n\n\begin{proof}[Proof of Lemma~\ref{lem:ylarge}]\n\tWe have\n\t\begin{flalign*}\n\t\t\sum_{e} y_e &= \sum_{e = \{u, v\}} \mathbbm{1}(x_u \leq 1+\epsilon \text{ and } x_v \leq 1+\epsilon) \frac{x_e}{1+\epsilon} && \text{By definition of $y_e$ in (\ref{eq:defy}).}\\\n\t\t&\geq \sum_{e = \{u, v\}} (1-\mathbbm{1}(x_u > 1+\epsilon)-\mathbbm{1}(x_v > 1+\epsilon)) \frac{x_e}{1+\epsilon} && \text{Union bound.}\\\n\t\t&= \sum_{e} \frac{x_e}{1+\epsilon} - 2\sum_{v : x_v > 1+\epsilon} \sum_{e \ni v} \frac{x_e}{1+\epsilon} = \sum_{e} \frac{x_e}{1+\epsilon} - 2\sum_{v : x_v > 1+\epsilon} \frac{x_v}{1+\epsilon}.\n\t\end{flalign*}\n\tTaking expectation from both sides, we get\n\t\begin{flalign}\n\t\t\nonumber\E\Big[ \sum_e y_e \Big] &\geq \E\Big[\sum_{e} \frac{x_e}{1+\epsilon} - 2\sum_{v : x_v > 1+\epsilon} \frac{x_v}{1+\epsilon}\Big] = \frac{1}{1+\epsilon}\left(\E\Big[\sum_{e} x_e\Big] - 2\E\Big[\sum_{v : x_v > 1+\epsilon} x_v \Big]\right)\\\n\t\t\nonumber&\geq \frac{1}{1+\epsilon}\left((1-34\epsilon)\opt - 2\E\Big[\sum_{v : x_v > 1+\epsilon} x_v \Big]\right) \qquad\qquad \text{By Lemma~\ref{lem:sizeofx}.}\\\n\t\t\nonumber &\geq (1-35\epsilon)\opt - 2\sum_{v} \Pr[x_v > 1+\epsilon]\E[x_v \mid x_v > 1+\epsilon]\\\n\t\t&\geq (1-35\epsilon)\opt - 2\sum_{v} \epsilon^6 p \E[x_v \mid x_v > 1+\epsilon] \qquad\qquad \text{By Claim~\ref{cl:6123719801923}.}\label{eq:98912308}\n\t\end{flalign}\n\tWe will soon prove that for every vertex $v$, it \underline{deterministically} holds that $x_v \leq \frac{1}{p\epsilon^4}$. Plugging this into the last inequality above gives the desired bound that\n\t\begin{flalign*}\n\t\t\E\Big[\sum_e y_e \Big] &\geq (1-35\epsilon)\opt - 2\sum_{v} \epsilon^6 p \frac{1}{p \epsilon^4} \geq (1-35\epsilon)\opt - 2\epsilon^2 n \stackrel{\text{Assumption~\ref{ass:optlarge}}}{\geq} (1-35\epsilon)\opt - 20\epsilon \opt \\\n\t\t&= (1-55\epsilon)\opt.\n\t\end{flalign*}\n\tLet us now see why $x_v \leq \frac{1}{p\epsilon^4}$. Observe from the definition of $x$ that if $v \in V(Z)$ then $x_v \leq 1$ and otherwise\n\t$$\n\t\tx_v = \sum_{e = \{v, u\}} x_e \leq \sum_{e = \{v, u\}} \frac{f_e}{p(1-\Pr[X_u])(1-\Pr[X_v])} \leq \frac{1}{p \epsilon^4} \sum_{e=\{v, u\}} f_e.\n\t$$\n\tThe last inequality above comes from the fact that for every vertex $w$, $\Pr[X_w] \leq 1-\epsilon^2$ due to Lemma~\ref{lem:independentmatching} part 2, which means $1-\Pr[X_w] \geq \epsilon^2$. \n\t\n\tNow recall from Claim~\ref{cl:frange} part 3 that $\sum_{e \ni v} f_e \leq 1$. 
Thus we get our desired upper bound that $x_v \leq \frac{1}{p\epsilon^4}$. As described above, this completes the proof that $\E[\sum_e y_e] \geq (1-55\epsilon)\opt$.\n\end{proof}\n\nWe now turn to prove Claim~\ref{cl:6123719801923} that $\Pr[x_v > 1+\epsilon] \leq \epsilon^6 p$ for all $v$.\n\n\newcommand{\eventvz}[0]{\ensuremath{A}}\n\n\begin{proof}[Proof of Claim~\ref{cl:6123719801923}]\n\tIf an edge incident to $v$ belongs to matching $Z$, i.e. if $X_v = 1$ (as defined in Lemma~\ref{lem:independentmatching}), then one can easily confirm from the definition of $x$ in (\ref{eq:crucialxe}) and (\ref{eq:noncrucialxe}) that either $x_v = 1$ or $x_v = 0$, implying that $\Pr[x_v > 1+\epsilon \mid X_v = 1] = 0$. As such, for the rest of the proof, we simply condition on the event that $X_v = 0$.\n\t\n\tSimilar to the proof of Claim~\ref{cl:expxvlt1}, let $u_1, u_2, \ldots, u_r$ be the neighbors of $v$ such that for each $i \in [r]$, (1) edge $e_i = \{v, u_i\}$ is non-crucial, and (2) $d_C(v, u_i) \geq \lambda$. Recall from (\ref{eq:21410238471098412}) that given event $X_v = 0$, it holds that\n\t\t\begin{equation*}\n\t\t\tx_v = x_{e_1} + x_{e_2} + \ldots + x_{e_r}.\n\t\t\end{equation*}\n\t\n\t\tLet $f'_v := \sum_{i=1}^r f_{e_i}$ and note that $f'_v \leq f_v$ since $f_v$ is the sum of $f_e$ over all non-crucial edges $e$ connected to $v$. Claim~\ref{cl:frange} part 4 proves that $\Pr[f_v \geq n_v + 0.1\epsilon] \leq (\epsilon p)^{10}$. Therefore, it also holds that $\Pr[f'_v \geq n_v + 0.1\epsilon] \leq (\epsilon p)^{10}$ since $f'_v \leq f_v$. For the rest of the proof, we regard the $f_{e_i}$'s as (adversarially) fixed with the only assumption that $f'_v < n_v + 0.1\epsilon$, which happens with probability at least $1 - (\epsilon p)^{10}$. We denote this event, as well as the event that $X_v = 0$, by $\eventvz$ and prove\n\t\t\begin{equation}\label{eq:980989825627}\n\t\t\Pr[x_v > 1 + \epsilon \mid \eventvz] \leq 0.5\epsilon^6p,\n\t\t\end{equation}\n\t\twhich is clearly sufficient for proving the claim.\n\t\n\t\tWe do this by proving a concentration bound using the second moment method. Consider the variance of $x_v$ conditioned on $\eventvz$:\n\t\t\begin{flalign*}\n\t\t\t\Var[x_v \mid \eventvz] = \sum_{i=1}^r \sum_{j=1}^r \Cov(x_{e_i}, x_{e_j} \mid \eventvz).\n\t\t\end{flalign*}\n\t\tNow that the $f_e$'s are fixed, $x_v$ depends only on (1) the randomization used in Lemma~\ref{lem:independentmatching} for obtaining matching $Z$, and (2) the realization of non-crucial edges.\n\t\t\n\t\tIn what follows we identify a condition under which the covariance of $x_{e_i}$ and $x_{e_j}$ becomes $0$. We will use this later to upper bound $\Var[x_v \mid \eventvz]$. \n\t\t\n\t\t\begin{observation}\label{obs:cov0}\n\t\t\tLet $i, j \in [r]$ be such that $d_C(u_i, u_j) \geq \lambda$. Then $\Cov(x_{e_i}, x_{e_j} \mid \eventvz) = 0$.\n\t\t\end{observation}\n\t\t\begin{proof}\n\t\t\tWe already had $d_C(v, u_i) \geq \lambda$ and $d_C(v, u_j) \geq \lambda$ by definition of $u_i, u_j$. Combined with assumption $d_C(u_i, u_j) \geq \lambda$ and using Lemma~\ref{lem:independentmatching} part 4, we get that $X_v, X_{u_i}, X_{u_j}$ are independent. The realizations of $e_i$ and $e_j$ are also independent, even given $\eventvz$. 
This is because these are non-crucial edges and thus are realized independently of both $Z$ (according to Lemma~\ref{lem:independentmatching} part 3) and the values of $f$, which are derived from Algorithm~\ref{alg:sampling}.\n\t\t\t\n\t\t\tBy definition (\ref{eq:noncrucialxe}), the value of $x_{e_i}$ conditioned on $\eventvz$ is fully determined once we know $X_{u_i}$ and whether $e_i$ is realized. Similarly, the value of $x_{e_j}$ conditioned on $\eventvz$ is fully determined once we know $X_{u_j}$ and whether $e_j$ is realized. These, as discussed above, are independent. Hence $x_{e_i}$ and $x_{e_j}$, conditioned on $\eventvz$, are independent and thus their covariance is 0.\n\t\t\end{proof}\n\t\t\n\t\tNow consider two vertices $u_i$ and $u_j$ (possibly $u_i = u_j$) where $d_C(u_i, u_j) < \lambda$. Here, the covariance may not be 0. But we can still upper bound it as follows:\n\t\t\begin{flalign}\n\t\t\nonumber \Cov(x_{e_i}, x_{e_j} \mid \eventvz) &= \E[x_{e_i}x_{e_j} \mid \eventvz] - \E[x_{e_i} \mid A]\E[x_{e_j} \mid A] \leq \E[x_{e_i}x_{e_j} \mid \eventvz]\\\n\t\t\nonumber &\leq \frac{f_{e_i}}{p(1-\Pr[X_v])(1-\Pr[X_{u_i}])} \times \frac{f_{e_j}}{p(1-\Pr[X_v])(1-\Pr[X_{u_j}])}\\\n\t\t&\leq \frac{f_{e_i}f_{e_j}}{p^2 \epsilon^8},\label{eq:3819123987}\n\t\t\end{flalign}\n\t\twhere the last inequality follows from Lemma~\ref{lem:independentmatching} part 2 that states for all vertices $w$, $\Pr[X_w] < 1-\epsilon^2$ and thus $1-\Pr[X_w] \geq \epsilon^2$.\n\t\t\n\t\tNow, for each $i \in [r]$, let $D_i := \{j : d_C(u_i, u_j) < \lambda \}$. Since $C$ is a graph of max degree $\Delta_C$, the $(\lambda-1)$-neighborhood of each vertex $u_i$ in $C$ includes $\leq (\Delta_C)^{\lambda-1}$ vertices. Thus:\n\t\t\begin{equation}\label{eq:87193107123897}\n\t\t\t|D_i| \leq (\Delta_C)^{\lambda-1} \qquad\qquad \text{for every $i \in [r]$.}\n\t\t\end{equation}\n\t\tHaving these, we obtain that\n\t\t\begin{flalign*}\n\t\t\t\Var[x_v \mid \eventvz] &= \sum_{i=1}^r \sum_{j=1}^r \Cov(x_{e_i}, x_{e_j} \mid \eventvz) \stackrel{\text{Obs~\ref{obs:cov0}}}{=} \sum_{i = 1}^r \sum_{j \in D_i} \Cov(x_{e_i}, x_{e_j} \mid \eventvz) \stackrel{(\ref{eq:3819123987})}{\leq} \sum_{i = 1}^r \sum_{j \in D_i} \frac{f_{e_i}f_{e_j}}{p^2\epsilon^8} \\\n\t\t\t& = \frac{1}{p^2 \epsilon^8}\sum_{i = 1}^r \Big(f_{e_i}\sum_{j \in D_i} f_{e_j}\Big) \stackrel{f_{e_j} \leq \frac{1}{\sqrt{\epsilon R}} \text{ by (\ref{eq:deff})}}{\leq} \frac{1}{p^2\epsilon^8} \sum_{i = 1}^r \Big(f_{e_i} |D_i| \frac{1}{\sqrt{\epsilon R}}\Big)\\\n\t\t\t& \stackrel{(\ref{eq:87193107123897})}{\leq} \frac{(\Delta_C)^{\lambda-1}}{p^2 \epsilon^8\sqrt{\epsilon R}} \sum_{i = 1}^r f_{e_i} \stackrel{\text{Claim~\ref{cl:frange} part 3}}{\leq} \frac{(\Delta_C)^{\lambda-1}}{p^2 \epsilon^8\sqrt{\epsilon R}} \stackrel{\text{Obs~\ref{obs:crucialdegree}}}{\leq} \frac{(1\/\tau_+)^{\lambda-1}}{p^2\epsilon^{8.5}\sqrt{R}}.\n\t\t\end{flalign*}\n\t\tReplacing $R$ with $\frac{1}{2\tau_-}$, so that $1\/\sqrt{R} = \sqrt{2\tau_-} \leq 2\sqrt{\tau_-}$, and noting that $\tau_- = (\tau_+)^{10 g}$, we get that\n\t\t\begin{flalign*}\n\t\t\Var[x_v \mid A] \leq \frac{2(1\/\tau_+)^{\lambda}(\tau_+)^{5g}}{p^2 \epsilon^{8.5}} &= \frac{2}{p^2 \epsilon^{8.5}}(\tau_+)^{5g - \lambda}\\\n\t\t&< \frac{2\tau_+}{p^2 \epsilon^{8.5}} && \text{By Observation~\ref{obs:ggtlambda} $g \geq \lambda > 1$ and $\tau_+ < 1$.}\\\n\t\t&< \frac{2 (\epsilon p)^{50}}{p^2 \epsilon^{8.5}} && 
\\text{Corrolary~\\ref{cor:thresholds} part 4.}\\\\\n\t\t&= 2 \\epsilon^{41.5} p^{48} < 0.1 \\epsilon^8 p.\n\t\t\\end{flalign*}\n\t\tWith this upper bound on the variance, we can use Chebyshev's inequality to get\n\t\t\\begin{equation}\\label{eq:cheb18023}\n\t\t\t\\Pr\\Big[|x_v - \\E[x_v \\mid \\eventvz]| > 0.5\\epsilon \\,\\Big\\vert\\, \\eventvz\\Big] \\leq \\frac{\\Var[x_v \\mid \\eventvz]}{(0.5\\epsilon)^2} \\leq \\frac{0.1 \\epsilon^8 p}{0.25 \\epsilon^2} < 0.5\\epsilon^6 p.\n\t\t\\end{equation}\n\t\tNext, recall from (\\ref{eq:421610986201363}) in the proof of Claim~\\ref{cl:expxvlt1} that $\\E[x_v \\mid v\\not\\in V(Z)] \\leq \\frac{\\sum_{i=1}^r \\E[f_{e_i}]}{1-\\Pr[X_v]} = \\frac{f'_v}{1-\\Pr[X_v]}$. Event $\\eventvz$ in addition to $v \\not\\in V(Z)$ also fixes the value of $f'_v$. But recall that event $\\eventvz$ (as we defined it) guarantees $f'_v \\leq n_v + 0.5\\epsilon$. Therefore, we get\n\t\t\\begin{equation}\\label{eq:74777748123}\n\t\t\t\\E[x_v \\mid \\eventvz] \\leq \\frac{n_v + 0.5\\epsilon}{1-\\Pr[X_v]} \\stackrel{\\Pr[X_v] < c_v}{\\leq} \\frac{n_v + 0.5\\epsilon}{1-c_v} \\stackrel{n_v \\leq 1-c_v}{\\leq} \\frac{1-c_v+0.5\\epsilon}{1-c_v} \\leq 1 + 0.5\\epsilon.\n\t\t\\end{equation}\n\t\tCombining (\\ref{eq:cheb18023}) and (\\ref{eq:74777748123}) we get the claimed inequality of (\\ref{eq:980989825627}) that \n\t\t$$\n\t\t\\Pr[x_v > 1+\\epsilon \\mid \\eventvz] \\leq \\Pr[|x_v - \\E[x_v \\mid A]| > 0.5\\epsilon \\mid \\eventvz] \\leq 0.5\\epsilon^6p,\n\t\t$$\n\t\twhich as described before suffices to prove $\\Pr[x_v > 1+\\epsilon] \\leq \\epsilon^6p$.\n\\end{proof}\n\n\n\n\\subsection{Lemma~\\ref{lem:independentmatching} Property 1: The Matching's Size}\\label{sec:p1}\n\\input{matchingsize}\n\n\n\\subsection{Lemma~\\ref{lem:independentmatching} Property 2: Matching Probabilities}\\label{sec:p2}\n\\input{matchingprobs}\n\n\\subsection{Lemma~\\ref{lem:independentmatching} Property 4: Matching Independence}\\label{sec:p4}\n\\input{matchingindependence}\n\n\\section{Proof of the Vertex-Independent Matching Lemma}\\label{sec:independentmatching}\n\n\\newcommand{\\neighbors}[1]{\\ensuremath{\\mathsf{Neighbors}(#1)}}\n\\newcommand{\\isrealized}[1]{\\ensuremath{\\mathsf{IsRealized}(#1)}}\n\\newcommand{\\dependent}[0]{\\ensuremath{\\mathcal{D}}}\n\nIn this section we turn to prove Lemma~\\ref{lem:independentmatching} restated below.\n\n\\restate{Lemma~\\ref{lem:independentmatching}}{\n\t\\independentmatching{}\n}\n\n\\subsection{Overview of the Algorithm}\\label{sec:crucialoverview}\n\\input{ind-intuitions}\n\n\\subsection{The Formal Algorithm}\\label{sec:formalcrucialalg}\n\\input{ind-algorithm}\n\n\n\n\\section{Introduction}\\label{sec:intro}\n{\n\nWe study the following {\\em stochastic matching} problem. An arbitrary graph $G=(V, E)$ is given, then each edge $e \\in E$ is retained (or to be consistent with the literature {\\em realized}) independently with some given probability $p \\in (0, 1]$. 
The goal is to pick a subgraph $Q$ of $G$ without knowing the edge realizations such that:\n\begin{enumerate}[itemsep=0pt,topsep=5pt]\n\t\item The expected size of the maximum matching among the realized edges of $Q$ approximates the expected size of the maximum matching among the realized edges in $G$.\n\t\item The maximum degree in $Q$ is bounded by a function that may depend on $p^{-1}$ but must be independent of the size of $G$.\footnote{In this paper, we solve a generalization of this problem where each edge $e$ has its own realization probability $p_e$ and the degree of $Q$ is allowed to depend on $p = \min_e p_e$. See Section~\ref{sec:prelim} for the formal setting.}\n\end{enumerate}\nIt is useful to think of $p$ as some constant while $n := |V| \to \infty$. Then the second condition translates to $Q$ having $O(1)$ maximum degree. In other words, the subgraph $Q$ should provide a good approximation while having $O(n)$ edges, in contrast to $G$ which may have up to $\Omega(n^2)$ edges.\n\n\smparagraph{Applications.} The setting is mainly motivated by applications in which the process of determining an edge realization (referred to as {\em querying} the edge) is considered time-consuming or expensive. For such applications, one can, instead of querying every edge of $G$, query only the edges of its much sparser subgraph $Q$ and still find a large realized matching in $G$. Kidney exchange and online labor markets are major examples of such applications. For more details on the role of the stochastic matching problem in these applications, see \cite{arXivblumetal,blumetal,AKL16,AKL17,BR18} (particularly \cite[Section~1.2]{arXivblumetal}) for kidney exchange and \cite{BR18,soda19,sagt19} for online labor markets. Another natural application of the model is that this subgraph $Q$ can be used as a {\em matching sparsifier} for $G$ which approximately preserves its maximum matching size under random edge failures \cite{sosa19}.\n\n\smparagraph{Related work.} The problem has received significant attention \cite{blumetal,AKL16,AKL17,YM18,BR18,soda19,sosa19,sagt19} after the pioneering work of Blum~\etal{}~\cite{blumetal} who proved that it admits a $(\frac{1}{2}-\epsilon)$-approximation. Early follow-up works revolved around the half-approximation barrier until it was first broken by Assadi~\etal{}~\cite{AKL16}. This was followed by a $0.6568$-approximation by Behnezhad~\etal{}~\cite{soda19} and eventually a $(\frac{2}{3}-\epsilon)$-approximation by Assadi and Bernstein \cite{sosa19}, which is the state-of-the-art. See also \cite{YM18,BR18,soda19,YM19} for various natural generalizations of the problem.\n\n\smparagraph{Our result.} In this work, we improve the approximation-factor all the way up to $(1-\epsilon)$:\n\n\begin{highlighttechnical}\n\t\begin{theorem}\label{thm:main}\n\t\tFor any $\epsilon > 0$, there is an algorithm that picks an $O_{\epsilon, p}(1)$-degree subgraph $Q$ of $G$ such that the expected size of the maximum realized matching in $Q$ is at least $(1-\epsilon)$ times the expected size of the maximum realized matching in $G$.\n\t\end{theorem}\n\end{highlighttechnical}\n\nTo get a $(1-\epsilon)$-approximation, the dependence of the maximum degree of $Q$ on both $\epsilon$ and $p$ is necessary. Particularly, a simple lower bound shows that even when $G$ is a clique, to avoid too many singleton vertices in a realization of $Q$, the maximum degree in $Q$ must be $\Omega(\frac{\ln \epsilon^{-1}}{p})$ \cite{AKL16}. 
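To see where this dependence comes from, the following back-of-the-envelope calculation may help; it is a sketch of the intuition only, not the formal construction of \cite{AKL16}. A vertex whose degree in $Q$ is $d$ retains none of its incident edges in the realization of $Q$ with probability $(1-p)^d$, and when $G$ is a clique, essentially every vertex must be matched in an optimal realized matching. Keeping the expected fraction of such isolated vertices below $\epsilon$ thus requires\n$$\n(1-p)^d \leq \epsilon \iff d \geq \frac{\ln \epsilon^{-1}}{\ln \frac{1}{1-p}} \geq \frac{(1-p)\ln \epsilon^{-1}}{p},\n$$\nwhere the last step uses $\ln \frac{1}{1-p} \leq \frac{p}{1-p}$; for $p$ bounded away from $1$ this is $\Omega(\frac{\ln \epsilon^{-1}}{p})$.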
The same lower bound also shows that a $(1-o(1))$ approximation is not achievable unless the maximum degree of $Q$ is $\omega(1)$, meaning that our approximation-factor is essentially the best one can hope for.\n\n\begin{remark}\n\tThe $O_{\epsilon, p}(1)$ term in Theorem~\ref{thm:main} is of the order \n\t$\exp\left(\exp\left(\exp\left(O\left(\epsilon^{-1} \right)\right) \times \log \log p^{-1}\right)\right)$. We do not believe this dependence is optimal and leave it as an open problem to improve it. Particularly, we conjecture that the same algorithm that is analyzed in this work (see Algorithm~\ref{alg:sampling}) should obtain a $(1-\epsilon)$-approximation even by picking only a $\poly(1\/\epsilon p)$-degree subgraph.\n\end{remark}\n\n\smparagraph{The algorithm.} Many different constructions of $Q$ have been studied in the literature. A well-studied algorithm, first considered by Blum~\etal{}~\cite{blumetal} and further analyzed (modulo minor differences and generalizations) in the subsequent works of \cite{AKL16,AKL17,BR18,YM18,YM19}, is as follows: Iteratively pick a maximum matching $M_i$ from $G$, remove its edges, and finally let $Q = M_1 \cup \ldots \cup M_R$ for some parameter $R$ that controls the maximum degree in $Q$. Despite the positive results proved for this algorithm, it was already shown in \cite{blumetal} that its approximation-factor is not better than $5\/6$. Thus, to obtain a $(1-\epsilon)$-approximation, one has to use a different algorithm.\n\nWe focus on an algorithm proposed previously by Behnezhad~\etal{}~\cite{soda19}, which they proved obtains at least a $0.6568$-approximation. The algorithm is equally simple, but subtly different: Draw $R$ independent realizations $\mc{G}_1, \ldots, \mc{G}_R$ of $G$ and let $Q = \MM{\mc{G}_1} \cup \ldots \cup \MM{\mc{G}_R}$ where $\MM{\mc{G}_i}$ is a maximum matching of $\mc{G}_i$. Our main result is obtained via a different analysis of this algorithm. Within the next two paragraphs, we discuss how our analysis differs substantially from the previous approaches and in particular from the analysis of \cite{soda19}.\n\n\smparagraph{The analysis and the Ruzsa-Szemerédi barrier.} A major barrier to overcome in order to prove the existence of a $(1-\epsilon)$-approximate subgraph was already discussed in the work of Assadi, Khanna, and Li \cite[Section~6]{AKL16} based on Ruzsa-Szemerédi graphs \cite{ruzsa1978triple,DBLP:conf\/stoc\/FischerLNRRS02,DBLP:conf\/soda\/GoelKK12,DBLP:conf\/stoc\/AlonMS12} which we henceforth call the ``RS-barrier''. Consider an extension of the stochastic matching setting where the realization of edges in a single a-priori known matching $M$ of $G$ can be correlated while other edges are still realized independently. An implication of the RS-barrier is that in this extended model, no algorithm can obtain $(1-\epsilon)$-approximation (or even beat $\frac{2}{3}$-approximation\footnote{The original proof of \cite{AKL16} rules out $>\frac{6}{7}$-approximation. A similar instance can rule out $\frac{2}{3}$-approximation using a more efficient construction of RS-graphs \cite{DBLP:conf\/soda\/GoelKK12} and allowing a subset of edges of $G$ to have realization probability 1.}) unless $Q$ has maximum degree $n^{\Omega(1\/\log\log n)} = \omega(\polylog n)$. 
Put differently, this proves that in order to beat $\frac{2}{3}$-approximation, the analysis has to use the fact that {\em every} edge around a vertex is realized independently. This explains why the previous arguments fell short of bypassing $\frac{2}{3}$-approximation: They can all (to our knowledge) be adapted to tolerate adversarial realization of one edge per vertex.\n\n\smparagraph{``Vertex-independent matchings'' to the rescue.} We overview our analysis soon in Section~\ref{sec:techniques}. However, here we briefly mention our key analytical tool in bypassing the RS-barrier. It is an algorithm (Lemma~\ref{lem:independentmatching}) for constructing a matching $Z$ on the realized {\em crucial} edges (roughly, an edge is crucial if it has a sufficiently high probability of being part of an optimal realized matching). The algorithm constructs $Z$ such that, among other useful properties, it guarantees that each vertex is matched independently of all but $O(1)$ other vertices. Here the independence is with regard to both the randomization of the algorithm in constructing $Z$ and, importantly, \underline{the edge realizations of $G$}. This independence property is the key that separates the stochastic matching model from the extended model of the RS-barrier: Due to the added correlations in the edge realizations, such vertex-independent matchings essentially do not exist in the model of the RS-barrier. Using this independence, we show that $Z$ can be well-augmented by the rest of the realized edges in $Q$. See Section~\ref{sec:techniques} for a more detailed overview of our analysis and how the independence property helps.\n\nOur method of bypassing the RS-barrier via vertex-independent matchings sheds more light on the limitations imposed by Ruzsa-Szemerédi type graphs. These graphs are known to be notoriously hard examples in various other areas such as property testing, streaming algorithms, communication complexity, and additive combinatorics among others \cite{DBLP:conf\/soda\/Kapralov13,DBLP:conf\/soda\/GoelKK12,DBLP:conf\/stoc\/AlonMS12,ruzsa1978triple,DBLP:conf\/stoc\/FischerLNRRS02,gowers2001some}. As such, we believe that this method may find applications beyond the stochastic matching problem.\n\n\smparagraph{Organization of the paper.} In Section~\ref{sec:techniques} we provide an informal overview of our analysis. In Section~\ref{sec:prelim} we formally state the problem and the notations used throughout the paper. In Section~\ref{sec:analysissetup} we describe the algorithm and basic definitions that we will use throughout the analysis. In Section~\ref{sec:analysisviavertexindependent} we prove how the vertex-independent matching lemma leads to a $(1-\epsilon)$-approximation, and in Section~\ref{sec:independentmatching} we prove the vertex-independent matching lemma. Finally, Section~\ref{sec:proofs} contains the proofs of (less important) statements that are deferred.\n\n}\n\n\n\n\n\n\section{On Generality of Assumption~\ref{ass:optlarge}}\label{app:optlarge}\n\nIn this section, we prove that Assumption~\ref{ass:optlarge} comes without loss of generality. Precisely, we show that solving the problem for any input graph $G$ can be reduced to solving it for a graph $H$ with $O(\opt\/\epsilon)$ vertices and $\E[\mu(\mc{H})] \geq (1-\epsilon)\opt$ where $\mc{H}$ is a realization of $H$. To do this, we use a ``vertex sparsification'' idea of Assadi~\etal{}~\cite{AKL16}. 
Our reduction is slightly different since we do not want parallel edges in the graph, but the main idea is essentially the same. It is also worth noting that for the reduction to work, it is crucial that our algorithm works for different edge realization probabilities. We provide the full proof for completeness.\n\nWe note that throughout the proof we may assume that $\opt$ is larger than the constant $3\epsilon^{-3}$, and remark that the problem is otherwise trivial.\n\n\smparagraph{Construction of $H$ from $G$.} We construct graph $H=(U, F)$ as follows. For $k = \frac{8\opt}{\epsilon}$, define $k$ {\em buckets} $U = \{u_1, \ldots, u_k\}$. Each of these buckets $u_i$ will correspond to a node in $H$. Assign each vertex $v$ of graph $G$ to a bucket $b(v) \in \{u_1, \ldots, u_k\}$ picked independently and uniformly at random. Then for any edge $\{v_1, v_2\}$ in graph $G$, we add an edge $\{b(v_1), b(v_2)\}$ to $F$. Finally, we turn $H$ into a simple graph by removing self-loops and merging parallel edges.\n\nNow we need to set the realization probability $p_e$ of every edge $e \in F$ as well. For any $e \in F$, let us denote by $E(e)$ the set of edges in the original graph $G$ that are mapped to $e$. We set\n$$\n\tp_e := 1 - \prod_{e' \in E(e)} (1-p_{e'}).\n$$\nWe note that $p_e$ is defined such that it precisely equals the probability that at least one edge in $E(e)$ is realized.\n\n\begin{claim}\label{cl:largeinH}\n\tFix any matching $M$ in $G$ satisfying $|M| \leq 2\opt$. Then $\E[\mu(H)] \geq (1-\epsilon)|M|$, where the expectation is taken over the randomization of the algorithm in constructing $H$. \n\end{claim}\n\begin{proof}\n\tLet $V(M)$ be the vertex-set of matching $M$ in graph $G$ and define \n\t$$X := \{v \in V(M) \mid \exists u \in V(M) \text{ s.t. } v\not= u \text{ and } b(v)=b(u)\},\n\t$$\n\twhich is the set of vertices in $V(M)$ whose bucket is not unique with regard to the others in $V(M)$.\n\t\n\tWe first claim that $\mu(H) \geq |M| - |X|$. Call an edge $\{u, v\} \in M$ {\em good} if $u \not\in X$, $v \not\in X$, and {\em bad} otherwise. Each bad edge has at least one endpoint in $X$, thus there are at least $|M| - |X|$ good edges in $M$. One can easily confirm that the set of corresponding edges of all good edges in $M$ forms a matching in $H$. Thus $\mu(H) \geq |M|-|X|$.\n\t\n\tTo conclude, we prove that $\E[|X|] \leq \epsilon |M|$, which proves $\E[\mu(H)] \geq |M| - \epsilon |M| = (1-\epsilon)|M|$. To see why $\E[|X|] \leq \epsilon |M|$, fix any vertex $v \in V(M)$ and suppose that we have adversarially fixed the bucket $b(u)$ of all other vertices $u \in V(M)$. Since the bucket of $v$ is picked uniformly at random from $8\opt\/\epsilon$ buckets and $|V(M)| \leq 2|M| \leq 4\opt$, the probability of $v$ choosing a bucket already chosen by another vertex in $V(M)$ would be $\leq \frac{4\opt}{8\opt\/\epsilon} \leq \epsilon\/2$. By linearity of expectation over $2|M|$ vertices in $V(M)$, we get $\E[|X|] \leq \epsilon |M|$, concluding the proof.\n\end{proof}\n\n\begin{claim}\label{cl:713713}\n\tIt holds that $\E[\mu(\mc{H})] \geq (1-3\epsilon)\opt$. Here the expectation is taken over both the randomization in the construction of $H$ and the randomization in the realization $\mc{H}$ of $H$.\n\end{claim}\n\begin{proof}\n\tWe first map each realization $\mc{G}$ of $G$ to a realization $\mc{H}$ of $H$. 
To do so, we say an edge $e \in F$ is realized in $\mc{H}$ if and only if at least one edge $e' \in E(e)$ is realized in $\mc{G}$. We argue that this mapping preserves the independence of edge realizations in $H$ and their realization probabilities. First, since for any two edges $e_1, e_2 \in F$ it holds that $E(e_1) \cap E(e_2) = \emptyset$, the realization of an edge $e \in F$ gives no information regarding the realization of other edges. Moreover, observe that each edge $e \in F$ will be realized with probability precisely $p_e$, as intended in the definition of $p_e$ above.\n\t\n\tLet $M$ be the maximum realized matching of $G$. By Lemma~\ref{lem:concentration}, $\Pr[||M|-\opt| \geq \epsilon \opt] < \exp(-\frac{(\epsilon \opt)^2}{3\opt}) = \exp( - \frac{\epsilon^2 \opt}{3}) < \epsilon$ where the last inequality follows from the assumption that $\opt > 3\epsilon^{-3}$. This means that with probability at least $1-\epsilon$, $|M| \in [(1-\epsilon)\opt, (1+\epsilon)\opt]$. Let us suppose that this event holds and denote it by $A$. Note that event $A$ concerns only the realization of $G$ and reveals no information about the algorithm to construct $H$. Now plugging matching $M$ into Claim~\ref{cl:largeinH}, we get that $\E[\mu(\mc{H}) \mid A] \geq (1-\epsilon)|M| \geq (1-\epsilon)(1-\epsilon)\opt \geq (1-2\epsilon)\opt$. Incorporating also the probability that event $A$ holds, which as described is at least $1-\epsilon$, we get $\E[\mu(\mc{H})] \geq (1-\epsilon)(1-2\epsilon)\opt \geq (1-3\epsilon)\opt$, concluding the proof.\n\end{proof}\n\n\smparagraph{The reduction.} We are now ready to give the full reduction. Suppose we are given an $n$-vertex graph $G$ with $\opt = \E[\mu(\mc{G})]$ and assume that $\opt < 0.1 \epsilon n$ (otherwise Assumption~\ref{ass:optlarge} holds). We first construct graph $H$ as described. Note that $H$ has at most $n' = \frac{8\opt}{\epsilon}$ nodes by the construction and that $\E[\mu(\mc{H})] \geq (1-3\epsilon)\opt$ by Claim~\ref{cl:713713}. Replacing $\opt$ with $\epsilon n'\/8$, we get $\E[\mu(\mc{H})] \geq (1-3\epsilon)\frac{\epsilon n'}{8}$. Assuming $\epsilon < 0.05$ (recall that we can assume $\epsilon$ to be smaller than any needed constant), this implies $\E[\mu(\mc{H})] \geq \frac{\epsilon n'}{10}$ and thus Assumption~\ref{ass:optlarge} holds for graph $H$.\n\nLet $Q$ be the result of running Algorithm~\ref{alg:sampling} on graph $H$. Since Assumption~\ref{ass:optlarge} holds for $H$, it leads to a $(1-\epsilon)$-approximation. That is, we get $\E[\mu(\mc{Q})] \geq (1-\Omega(\epsilon))\E[\mu(\mc{H})]$. We use this subgraph $Q$ to pick a bounded-degree subgraph $Q'$ of $G$ that provides a $(1-\epsilon)$-approximation: For each edge $e \in Q$, let us {\em pick} $\min\{p^{-1} \log \epsilon^{-1}, |E(e)|\}$ arbitrary edges from $E(e)$ and put them in $Q'$. We argue that this subgraph $Q'$ has maximum degree $O_{\epsilon, p}(1)$ and that $\E[\mu(\mc{Q}')] \geq (1-\Omega(\epsilon))\opt$.\n\n\begin{claim}\n\t$Q'$ has maximum degree $O_{\epsilon, p}(1)$.\t\n\end{claim}\n\begin{proof}\n\tObserve that an edge $e'$ incident to a vertex $v \in V$ is in $Q'$ only if its corresponding edge $e$ in graph $H$ is in $Q$. Since $e$ corresponds to $e'$, it must be incident to the bucket $b(v)$ of $v$ by the construction of $H$. 
Moreover, since $b(v)$ has maximum degree $O_{\epsilon, p}(1)$ in $Q$, and for each edge incident to $b(v)$ in $Q$ we put at most $O(p^{-1}\log \epsilon^{-1})$ edges in $Q'$, the degree of $v$ in $Q'$ is bounded by $O_{\epsilon, p}(1) \times O(p^{-1}\log \epsilon^{-1}) = O_{\epsilon, p}(1)$. This bounds the maximum degree of $Q'$ by $O_{\epsilon, p}(1)$.\n\end{proof}\n\n\begin{claim}\n\t$\E[\mu(\mc{Q}')] \geq (1-\Omega(\epsilon))\opt$.\n\end{claim}\n\begin{proof}\n\tFor any edge $e \in Q$, define $p'_e$ to be the probability that at least one of the edges in $G$ picked for $e$ is realized. We first argue that $p'_e \geq (1-\epsilon)p_e$. To see this, note that if $|E(e)| \leq p^{-1}\log \epsilon^{-1}$, then all the edges in $E(e)$ will be picked. Thus, by the definition of $p_e$, we have $p'_e = p_e$. On the other hand, if $|E(e)| > p^{-1}\log \epsilon^{-1}$, we pick exactly $p^{-1}\log \epsilon^{-1}$ edges for $e$. Since each of these edges has realization probability at least $p$, the probability that at least one of them is realized is at least\n$$\n\t1-(1-p)^{p^{-1}\log \epsilon^{-1}} \geq 1-\epsilon \geq (1-\epsilon)p_e.\n$$\n\nNow let $M$ be any matching in $Q$. For each edge $e \in M$, choose one arbitrary edge in $E(e)$. From the construction of $H$ from $G$, one can confirm that the set of these chosen edges will form a matching of size $|M|$ in $G$. This concludes the proof: For each edge $e \in Q$, there is a probability at least $(1-\epsilon)p_e$ that one picked edge in $Q'$ is realized, thus $\E[\mu(\mc{Q}')] \geq (1-\epsilon)\E[\mu(\mc{Q})]$. Since it was previously shown that $\E[\mu(\mc{Q})] \geq (1-\Omega(\epsilon))\opt$, we conclude that $\E[\mu(\mc{Q}')] \geq (1-\Omega(\epsilon))\opt$.\n\end{proof}\n\n\n\section{Preliminaries}\label{sec:prelim}\n\n\paragraph{General notations.} We denote the maximum matching size of any graph $G$ by $\mu(G)$. For a matching $M$, we use $V(M)$ to denote the set of vertices matched in $M$. For any two nodes $u$ and $v$ in a graph $G$, we use $d_G(u, v)$ to denote their distance, i.e. the number of edges in their shortest path. Furthermore, the distance $d_G(u, e)$ between an edge $e$ and a node $u$ is the minimum distance between an endpoint of $e$ and $u$. We use $\mathbbm{1}(A)$ as the {\em indicator} of an event $A$, i.e. $\mathbbm{1}(A) = 1$ if event $A$ occurs and $\mathbbm{1}(A) = 0$ otherwise. Also, we may use $[k] := \{1, 2, \ldots, k\}$ for any integer $k \geq 1$.\n\nThroughout the paper, we define various functions of the form $x : E \to [0, 1]$ that map each edge $e \in E$ to a real number in $[0, 1]$. Given such a function $x$, for any vertex $v$ we define $x_v := \sum_{e \ni v} x_e$, for any edge subset $F$ we define $x(F) := \sum_{e \in F} x_e$, and for any vertex subset $U$ we define $x(U) := \sum_{e=\{u, v\}: u, v \in U} x_e$. We also denote $|x| = \sum_e x_e$.\n\n\smparagraph{The setting.} We consider a generalized variant of the standard stochastic matching problem studied in the literature where each edge $e$ has a realization probability $p_e$ that may be different from that of other edges. We then let $p = \min_e p_e$, which is the parameter that the degree of subgraph $Q$ may depend on. 
This generalization will actually help in solving the original model of the literature defined in Section~\\ref{sec:intro} which coincides with the case where $p_e = p$ for every edge $e$.\n\nWe denote realizations by script font; for instance, we use $\\mc{G}=(V, \\mc{E})$ to denote the realized subgraph of the input graph $G$, which includes each edge $e$ independently with probability $p_e$. Similarly, we use $\\mc{Q}$ to denote the realized subgraph of $Q$. The same notation also naturally extends to denote realization of other subgraphs of $G$ that we may later define.\n\nAs discussed in Section~\\ref{sec:intro}, the goal is to pick a sparse subgraph $Q$ of $G$ such that the ratio $\\E[\\mu(\\mc{Q})]\/\\E[\\mu(\\mc{G})]$, known as the approximation-factor, is large. Here the expectations are taken over the realizations $\\mc{Q}$ and $\\mc{G}$, and possibly the randomization of the algorithm in constructing subgraph $Q$. For brevity, we use $\\opt$ to denote $\\E[\\mu(\\mc{G})]$. Note that $\\opt$ is just a number.\n\nWe note that the {\\em expected} approximation-factor defined above can automatically be turned into {\\em high-probability} due to a simple concentration bound. See Appendix~\\ref{sec:concentration}.\n\n\n\\section{Deferred Proofs}\\label{sec:proofs}\n\n\\begin{proof}[Proof of Lemma~\\ref{lem:gap}]\n\tLet $t_0 = (\\epsilon p)^{50}$ and for any $i \\geq 1$ let $t_i = f(t_{i-1})$. Note that $t_0 > t_1 > t_2 > \\ldots$ by the assumption of the lemma that $0 < f(x) < x$ for all $0 < x < 1$. For any $i \\geq 1$ define $q_i = \\sum_{e \\in E: q_e \\in (t_i, t_{i-1}]} q_e$ and let $j$ be the smallest number where $q_j \\leq \\epsilon \\opt$. We will soon prove existence of such $j$ and also prove that $j = O(1\/\\epsilon)$. We claim that setting $\\tau_+ = t_{j-1}$ and $\\tau_- = t_j$ satisfies the conditions of the lemma.\n\t\n\t\\textbf{Condition} (1): This condition holds trivially since $\\tau_- = t_{j} = f(t_{j-1}) = f(\\tau_+)$. \n\t\n\t\\textbf{Condition} (2): Let us define $X := \\{ e \\mid \\tau_- < q_e < \\tau_+ \\}$. Recall that crucial and non-crucial edges are defined based on $\\tau_+$ and $\\tau_-$. That is, an edge $e$ is crucial (i.e. $e \\in C$) if $q_e \\geq \\tau_+$, and is non-crucial (i.e. $e \\in N$) if $q_e \\leq \\tau_-$. This implies that the remaining edges that are neither crucial nor non-crucial belong to $X$. Therefore,\n\t$$\n\t\t\\opt = q(E) = q(C) + q(N) + q(X).\n\t$$\n\tTo obtain $q(N)+q(C) \\geq (1-\\epsilon)\\opt$ it thus suffices to show $q(X) \\leq \\epsilon \\opt$. Noting that $\\tau_+ = t_{j-1}$ and $\\tau_- = t_j$ and also noting the definition of $q_j$ above, we get $q(X) \\leq q_j$. Recall that we chose $j$ such that $q_j \\leq \\epsilon \\opt$. Therefore we indeed get that $q(X) \\leq \\epsilon \\opt$.\n\t\n\t\\textbf{Condition} (3): We defined $t_0 = (\\epsilon p)^{50}$ and recursively defined $t_i = f(t_{i-1})$. Since $f(\\cdot)$ is only a function of its input, we get via a simple induction that both $t_j$ and $t_{j-1}$ are also functions of only $\\epsilon$ and $p$. (Recall that $j = O(1\/\\epsilon)$.)\n\t\n\t\\textbf{Condition} (4): We defined $t_0 = (\\epsilon p)^{50}$ and recall that we showed $t_0 > t_1 > t_2 > \\ldots$; this implies clearly that $\\tau_+ = t_{j-1} \\leq (\\epsilon p)^{50}$.\n\t\n\t\\textbf{Existence of $j$.} It only remains to prove that there exists a choice of $j$ satisfying $q_j \\leq \\epsilon \\opt$ and that this $j$ is not too large. Precisely, we show that $j = O(1\/\\epsilon)$. 
Since intervals $(t_1, t_0], (t_2, t_1], (t_3, t_2], \ldots$ are disjoint, it holds that for each edge $e$ there is at most one $i$ for which $q_e \in (t_i, t_{i-1}]$. This means that $\sum_{i=1}^\infty q_i \leq \sum_{e \in E} q_e = \opt$. It thus has to hold that $j \leq \lceil 1\/\epsilon \rceil + 1$, as otherwise \n\t$$\n\t\sum_{i=1}^{j-1} q_i \geq \sum_{i=1}^{\lceil 1\/\epsilon \rceil + 1} \epsilon \opt = (\lceil 1\/\epsilon \rceil + 1) \epsilon \opt > \opt\n\t$$ would hold (recall that $q_i > \epsilon \opt$ for every $i < j$ by the minimality of $j$), contradicting the previous statement. This concludes the proof of the lemma.\n\end{proof}\n\n\n\n\begin{proof}[Proof of Claim~\ref{cl:frange}]\n\tWe prove the parts one by one.\n\t\n\t\smparagraph{Part 1.} The upper bound $\E[f_e] \leq q_e$ is simple to prove. Consider the random variable $f'_{e} = t_e\/R$ and note that $f'_e \geq f_e$. We have\n\t$$\n\t\E[f'_e] = \E\left[\frac{t_e}{R}\right] = \frac{1}{R} \E[t_e] = \frac{1}{R} \left(\sum_{i=1}^R \Pr[e \in \MM{\mc{G}_i}]\right) = \frac{1}{R} (R \times \Pr[e \in \MM{\mc{G}_1}]) = q_e.\n\t$$\n\tSince $f_e \leq f'_e$, we get $\E[f_e]\leq\E[f'_e] = q_e$, concluding the proof of part 1.\n\t\n\t\smparagraph{Part 2.} Next we turn to prove the lower bound $\E[f_e] \geq (1-\epsilon)q_e$. Let $X_i$ be the indicator random variable for $e \in \MM{\mc{G}_i}$. We have $t_e = X_1 + \ldots + X_R$, $\E[X_i] = q_e$, and $\E[t_e] = Rq_e$. Note also that the $X_i$'s are independent since graphs $\mc{G}_1, \ldots, \mc{G}_R$ are drawn independently. Therefore, $\Var[t_e] = \sum_{i=1}^R \Var[X_i] = R(q_e - q_e^2)$. \n\t\n\tNoting that $R = 0.5\/\tau_-$ and that $q_e < \tau_-$ since $e$ is non-crucial, we get $R q_e < 1$. This means that if $t_e \geq a + 1$, then $|t_e - Rq_e| \geq a$, which implies $\Pr[t_e \geq a + 1] \leq \Pr[|t_e - Rq_e| \geq a]$. Therefore, by setting $a = \sqrt{R\/\epsilon}$ and also using Chebyshev's inequality, we get\n\t\begin{equation}\label{eq:1234123489172346}\n\t\t\Pr\left[t_e \geq \sqrt{R\/\epsilon}+1\right] \leq \Pr\left[|t_e - \E[t_e]| \geq \sqrt{R\/\epsilon}\right] \leq \frac{\Var[t_e]}{(\sqrt{R\/\epsilon})^2} = \frac{R(q_e-q_e^2)}{(\sqrt{R\/\epsilon})^2} = \epsilon (q_e - q_e^2) \leq \epsilon q_e.\n\t\end{equation}\n\tFinally, we have\n\t\begin{align*}\n\t\E\left[\frac{t_e}{R}\right] &= \underbrace{\Pr\left[\frac{t_e}{R} \leq \frac{1}{\sqrt{\epsilon R}}\right] \E\left[\frac{t_e}{R} \mid \frac{t_e}{R} \leq \frac{1}{\sqrt{\epsilon R}} \right]}_{=\E[f_e]} + \Pr\left[\frac{t_e}{R} > \frac{1}{\sqrt{\epsilon R}}\right] \underbrace{\E\left[\frac{t_e}{R} \mid \frac{t_e}{R} > \frac{1}{\sqrt{\epsilon R}} \right]}_{\leq 1 \text{ since by definition, $t_e \leq R$.}}\n\t\end{align*}\n\tRearranging the terms and replacing the bounds specified, we get\n\t$$\n\t\E[f_e] \geq \E\left[\frac{t_e}{R}\right] - \Pr\left[t_e \geq \sqrt{R\/\epsilon} + 1 \right] \stackrel{(\ref{eq:1234123489172346})}{\geq} \frac{1}{R} \times R q_e - \epsilon q_e = (1-\epsilon)q_e,\n\t$$\n\tconcluding the proof of part 2.\n\t\n\t\smparagraph{Part 3.} Note that $f_e \leq t_e\/R$ by definition. Thus, we have $\sum_{e \ni v} f_e \leq \sum_{e \ni v} t_e\/R = R^{-1} \sum_{e \ni v} t_e$. 
Since each $\\MM{\\mc{G}_i}$ includes at most one incident edge of $v$ for being a matching, it holds that $\\sum_{e \\ni v} t_e \\leq R$, thus indeed $\\sum_{e \\ni v} f_e \\leq R^{-1} R = 1$.\n\t\n\t\\smparagraph{Part 4.} Let $X_i$ be the event that $v$ is matched in $\\MM{\\mc{G}_i}$ via a non-crucial edge and define $X := \\sum_{i=1}^R X_i$. Furthermore, define for each edge $e$,\n\t$$\n\t\tf'_e := \\begin{cases}\n\t\t\\frac{t_e}{R}, & \\text{if $e$ is non-crucial,}\\\\\n\t\t0, & \\text{otherwise.}\n\t\\end{cases}\n\t$$\n\tNote that $f'_e$ is very similar to the value of $f_e$ except for the case where $t_e\/R > 1\/\\sqrt{\\epsilon R}$. In this case, $f_e = 0$ but $f'_e$ remains to be the ratio $t_e\/R$. This implies that $f'_e \\geq f_e$. Now let $f'_v = \\sum_{e \\ni v} f'_e$. Since $f_e \\leq f'_e$ for all edges, we have $f_v \\leq f'_v$. Therefore, instead of proving $\\Pr[f_v > n_v + 0.1\\epsilon] \\leq (\\epsilon p)^{10}$, it suffices to prove $\\Pr[f'_v > n_v + 0.1\\epsilon] \\leq (\\epsilon p)^{10}$.\n\t\n\t\n\tIt holds from the definition that\n\t$$\n\tf'_v = \\sum_{e: e \\in N, v \\in e} \\frac{t_e}{R} = \\frac{1}{R} \\sum_{e: e \\in N, v \\in e} t_e = \\frac{1}{R} \\times (X_1 + \\ldots + X_R) = X\/R.\n\t$$\n\tReplacing this into $\\Pr[f'_v > n_v + 0.1\\epsilon] \\leq (\\epsilon p)^{10}$, we thus have to prove \n\t$\n\t\t\\Pr\\left[X\/R > n_v + 0.1\\epsilon \\right] \\leq (\\epsilon p)^{10},\n\t$\n\tor equivalently:\n\t$$\n\t\t\\Pr[X > R n_v + 0.1 R \\epsilon] \\leq (\\epsilon p)^{10}.\n\t$$\n\tTo prove this we use a concentration bound on $X$. Note that the $X_i$'s are independent since graphs $\\mc{G}_1, \\ldots, \\mc{G}_R$ are drawn independently. Moreover, for each $i \\in [R]$, we have $\\E[X_i] = n_v$ since recall $X_i = 1$ iff $v$ is matched via a non-crucial edge in $\\MM{\\mc{G}_i}$ and this has probability $\\sum_{e: e\\in N, v \\in e} q_e = n_v$. Thus $\\E[X] = Rn_v$. While we can use Chernoff's bound here since all $X_i$'s are independent, even the second-moment method is enough for our desired inequality. The variance of $X$ can be bounded as follows:\n\t$$\n\t\\Var[X] = \\sum_{i=1}^R \\Var[X_i] = \\sum_{i=1}^R E[X_i^2] - \\E[X_i]^2 = R (n_v - n_v^2).\n\t$$\n\tBy Chebyshev's inequality, we get\n\t$$\n\t\t\\Pr[X > Rn_v + 0.1R\\epsilon] \\leq \\frac{R(n_v - n_v^2)}{(0.1 R \\epsilon)^2} = \\frac{100(n_v - n_v^2)}{R \\epsilon^2} \\leq \\frac{100}{R\\epsilon^2}.\n\t$$\n\tSince $R = 1\/2\\tau_-$ and $\\tau_- < (\\epsilon p)^{50}$ by Corrolary~\\ref{cor:thresholds}, we get \n\t$$\n\t\\Pr[X > Rn_v + R\\epsilon] \\leq \\frac{100}{R\\epsilon^2} < \\frac{200(\\epsilon p)^{50}}{\\epsilon^2} < (\\epsilon p)^{10},\n\t$$\n\twhich as described above concludes the proof.\n\\end{proof}\n\n\\begin{proof}[Proof of Observation~\\ref{obs:samedist}]\nFirst note that realizations $\\mc{C}_1, \\ldots, \\mc{C}_\\alpha$ are all drawn precisely from the same distribution that realization $\\mc{C} = \\mc{C}_0$ is drawn from. Thus due to symmetry, matchings $M_0, \\ldots, M_\\alpha$ are all derived from the same distribution. Matchings $M'_0, \\ldots, M'_\\alpha$ are then the result of applying the augmenting-hyperwalks $I$ found by $\\apxMIS{H, \\epsilon}$ on graph $H$. Construction of graph $H$ is symmetrical w.r.t. matchings $M_0, \\ldots, M_\\alpha$. The only remaining component of the algorithm where this symmetry may break is in algorithm $\\apxMIS{H, \\epsilon}$ that may be biased towards picking augmenting-hyperwalks depending on which matching $M_i$ they would augment. 
This can be avoided by using an algorithm for $\apxMIS{H, \epsilon}$ that is oblivious to the indices of matchings $M_0, \ldots, M_\alpha$ used to construct graph $H$. That is, suppose, e.g., that we pick the IDs of the nodes in $H$ randomly before feeding it into $\apxMIS{H, \epsilon}$. This guarantees that the obtained matchings $M'_0, \ldots, M'_\alpha$ will all have the same distribution due to their symmetry.\n\end{proof}\n\n\section{The Analysis via the Vertex-Independent Matching Lemma}\label{sec:analysisviavertexindependent}\n\nIn this section, given the correctness of Lemma~\ref{lem:independentmatching}, we prove Theorem~\ref{thm:main}. In what follows we give the outline of the proof by referring to the needed lemmas that will be proved in subsequent Sections~\ref{sec:constructfractional}, \ref{sec:validityofx}, \ref{sec:analysisfractional}, and \ref{sec:turntofracmatching}.\n\n\smparagraph{Proof Outline for Theorem~\ref{thm:main}.} \n\tLet $Q$ be the output of Algorithm~\ref{alg:sampling}, where parameter $R$ is set as described above. We show that one can construct a matching of expected size at least $(1-56\epsilon)\opt$ on the realized subgraph $\mc{Q}$ of $Q$. This implies that $\E[\mu(\mc{Q})] \geq (1-56\epsilon)\opt = (1-56\epsilon)\E[\mu(\mc{G})]$. In other words, this proves that the approximation-factor of the algorithm is at least $(1-56\epsilon)$. (Note this is equivalent to a $(1-\epsilon)$-approximation since one can choose $\epsilon$ to be an arbitrarily small constant.)\n\t\n\tIn order to construct a matching of expected size at least $(1-56\epsilon)\opt$ on $\mc{Q}$, we first describe how to construct an ``expected fractional matching'' (see Definition~\ref{def:expfractional}) $x$ on $\mc{Q}$ in Sections~\ref{sec:constructfractional}, \ref{sec:validityofx}, and \ref{sec:analysisfractional}. Later on, we show in Section~\ref{sec:turntofracmatching} how to turn $x$ into a fractional matching $y$ on $\mc{Q}$ such that $\E[|y|] \geq (1-55\epsilon)\opt$ (see Lemma~\ref{lem:ylarge}). Finally, to turn $y$ into an {\em integral} matching, we show (Observation~\ref{obs:yblossom}) that the so-called ``blossom inequalities'' of size up to $1\/\epsilon$ also hold for $y$. That is, we show that for all vertex subsets $U \subseteq V$ with $|U| \leq 1\/\epsilon$, we have $y(U) \leq \lfloor \frac{|U|}{2}\rfloor$. By Edmonds' celebrated theorem \cite{edmonds1965maximum,schrijver2003combinatorial} on the matching polytope, this means that there is an integral matching of size at least $\frac{1}{1+\epsilon}|y| \geq (1-\epsilon)|y|$ in $\mc{Q}$. As described, $\E[|y|] \geq (1-55\epsilon)\opt$, thus indeed $\E[\mu(\mc{Q})] \geq (1-\epsilon)(1-55\epsilon)\opt \geq (1-56\epsilon)\opt$ as desired.\n \n\input{expfracmatching}\n\n\input{exptoactualfracmatching}\n\section{Our Techniques}\label{sec:techniques}\n\n\nAs previously described, we consider the following algorithm for constructing subgraph $Q$ (see also Algorithm~\ref{alg:sampling}): Draw $R$ realizations $\mc{G}_1, \ldots, \mc{G}_R$ of graph $G$, then pick a maximum matching $\MM{\mc{G}_i}$ from each realization, and finally set $Q = \MM{\mc{G}_1} \cup \ldots \cup \MM{\mc{G}_R}$. In this section, we give an informal overview of our analysis for this algorithm. \n\nNote that these realizations $\mc{G}_i$ are part of the randomization of the algorithm and may be very different from the actual realization $\mc{G}$ of $G$. 
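For concreteness, the following is a minimal Python sketch of this construction; it is an illustration only, assuming the realization probabilities are stored as an edge attribute ``p'' and using networkx's maximum-cardinality matching as a stand-in for the (fixed) rule $\MM{\cdot}$:\n\begin{verbatim}\nimport random\nimport networkx as nx\n\ndef sample_subgraph(G, R, p_default=0.5):\n    # Union of maximum matchings of R independently drawn realizations.\n    Q = nx.Graph()\n    Q.add_nodes_from(G.nodes())\n    for _ in range(R):\n        Gi = nx.Graph()\n        Gi.add_nodes_from(G.nodes())\n        # Draw a realization: keep each edge independently w.p. p_e.\n        Gi.add_edges_from(e for e in G.edges()\n                          if random.random() < G.edges[e].get("p", p_default))\n        # Stand-in for MM(G_i): a maximum-cardinality matching.\n        Q.add_edges_from(nx.max_weight_matching(Gi, maxcardinality=True))\n    return Q\n\end{verbatim}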
In fact, in expectation, only a $p$ fraction of the edges of each matching $\MM{\mc{G}_i}$ are realized in $\mc{G}$. Thus, we have to argue that the realized edges of these matchings can be used to augment each other and form a large matching in the realized subgraph $\mc{Q}$ of $Q$. In order to do this, we will give a ``procedure'' to construct a matching in $\mc{Q}$. To get a handle on the dependencies involved, the procedure carefully decides how the realizations of the edges in $Q$ are revealed and which edges are chosen to be in the matching. We emphasize that this procedure is merely an analytical tool for analyzing the approximation-factor. Thus, no matter how intricate it is, the algorithm for constructing $Q$ remains the simple Algorithm~\ref{alg:sampling} described above.\n\n\smparagraph{A crucial\/non-crucial decomposition.} Similar to \cite{soda19} (and also implicitly \cite{AKL17}), we consider a partitioning of the edges of $G$ into what we call {\em crucial} and {\em non-crucial} edges. For each edge $e$, define $q_e := \Pr[e \in \MM{\mc{G}}]$ where $\MM{\cdot}$ is the same matching algorithm used to construct $Q$. We further assume that $\MM{\cdot}$ is deterministic, so the probability is taken only over the realization $\mc{G}$. For two thresholds $0 < \tau_- < \tau_+ < 1$ that we fix later, we define:\n\begin{itemize}[itemsep=0pt]\n\t\item The crucial edges as $C := \{ e \in E \mid q_e \geq \tau_+\}$.\n\t\item The non-crucial edges as $N = \{ e \in E \mid q_e \leq \tau_-\}$.\n\end{itemize}\nNote that in the decomposition above, edges $e$ with $q_e \in (\tau_-, \tau_+)$ are neither crucial nor non-crucial. We will essentially ``ignore'' these edges in the analysis but ensure that we choose $\tau_-$ and $\tau_+$ such that there are few ignored edges. \n\nIn our procedure to construct a matching on $\mc{Q}$, we treat crucial and non-crucial edges differently. We start with the crucial edges and (in Lemma~\ref{lem:independentmatching}) construct a matching $Z$ on them whose expected size is (almost) as large as the expected number of crucial edges in the optimal maximum realized matching of $G$. We then show that this matching $Z$ can be augmented via the non-crucial edges to eventually form a matching whose expected size is arbitrarily close to $\opt := \E[|\MM{\mc{G}}|]$.\n\n\smparagraph{The procedure for crucial edges.} In addition to the lower bound on the expected size of $Z$, we make sure that no vertex tends to be ``over-matched'' in $Z$. More formally, the probability of any vertex $v$ being matched in $Z$ should not be larger than the probability that $v$ is matched via a crucial edge in $\MM{\mc{G}}$. Both of these conditions can actually be satisfied by a very simple randomized procedure: Reveal the whole realization $\mc{C}$ of $C$, also draw a random realization $\mc{N}'$ of the non-crucial edges, and let $Z$ be the crucial edges in matching $\MM{\mc{C} \cup \mc{N}'}$. \n\nUnfortunately, the matching constructed via the above-mentioned procedure is hard to augment via the non-crucial edges as we have no control over the correlations. To get around this, we need an extra ``independence'' property. Let $X_v$ be the indicator of the event that vertex $v$ is matched in $Z$. The independence property requires random variables $X_{v_1}, X_{v_2}, \ldots, X_{v_n}$ to be (almost) independent, where $\{v_1, \ldots, v_n\}$ is the vertex-set of $G$. 
Clearly, perfect independence cannot be achieved: Given the event that a vertex $v$ is matched in $Z$, we derive that at least one of its neighbors in $C$ is also matched. What we prove can be achieved, though, is that each $X_{v}$ is independent of $X_u$ for all vertices $u$ outside a small local neighborhood of $v$ in graph $C$. (See Lemma~\ref{lem:independentmatching} part~4 for the formal statement.)\n\nIn order to satisfy the independence property described above, we will not reveal the whole realization $\mc{C}$ outright and then construct $Z$ based on it as was done in the simple procedure described above. Instead, we present a different algorithm (Algorithm~\ref{alg:crucial}) for constructing this matching $Z$. To prove the independence property, we show that this algorithm can be simulated locally. In other words, for each vertex $v$, the value of $X_v$ can be determined uniquely from the realization of the edges in a small local neighborhood of $v$. Thus, if two vertices $u$ and $v$ are sufficiently far from each other in graph $C$, then $X_v$ and $X_u$ are independent. \n\n\smparagraph{Augmenting $Z$ via non-crucial edges.} We noted above that $\E[|Z|]$ is (almost) as large as the expected number of crucial edges in $\MM{\mc{G}}$. Therefore, in order to construct a matching of $\mc{Q}$ with expected size arbitrarily close to $\opt$, we have to augment $Z$ via the non-crucial edges. To do this, we only use non-crucial edges $\{u, v\}$ in $Q$ such that $X_u$ and $X_v$ are independent. Describing how exactly we construct the matching on these non-crucial edges requires a number of definitions, which we give in Section~\ref{sec:constructfractional}. However, to convey the key intuition, here we only mention how and why the independence of $X_u$ and $X_v$ plays an important role in using a non-crucial edge $e = \{u, v\}$ to augment $Z$. Suppose that $\Pr[X_u] = \Pr[X_v] = 1\/2$. Note that it is only when {\em both} $u$ and $v$ are unmatched in $Z$ that we can use edge $e$ to augment $Z$. If $X_u$ and $X_v$ are independent, there is a relatively large probability $(1-\Pr[X_u])(1-\Pr[X_v]) = \frac{1}{4}$ that this occurs. However, if $X_u$ and $X_v$ can be correlated, it may be the case that with probability one half $X_u = 1$ and $X_v = 0$, and with probability one half $X_u = 0$ and $X_v = 1$. In this case, the probability of both $u$ and $v$ being unmatched in $Z$ would be zero and thus we would never be able to use $e$ to augment $Z$. We remark that this is precisely the type of correlation introduced in the RS-barrier of \cite{AKL16}, which the independence property allows us to bypass.\n\n\n\n\n\section{Approximate MIS}\label{app:weakmis}\n\nIn this section we describe how Lemma~\ref{lem:apxMIS} can be derived as a corollary of the algorithm of \cite{DBLP:conf\/soda\/Ghaffari19}. Theorem~1.1 of \cite{DBLP:conf\/soda\/Ghaffari19} gives a randomized \local{} independent-set (IS) algorithm which guarantees that for each node $v$, the probability that $v$ ``has not made its decision'' after $O(\log \deg(v) + \log \frac{1}{\delta})$ rounds is at most $\delta$. The decision of $v$ is finalized if it is in the IS or it has a neighbor that is in the IS (implying that $v$ cannot be in the IS). \n\nTo achieve Lemma~\ref{lem:apxMIS} we set $\delta = \frac{\epsilon}{10\Delta}$. 
Let $I$ denote the independent set returned by the algorithm after $O(\log \deg(v) + \log \frac{10\Delta}{\epsilon}) = O(\log \frac{\Delta}{\epsilon})$ rounds and let $U$ and $D$ respectively denote the set of undecided and decided vertices. We have\n$$\n\E[|U|] = \E\Big[ \sum_{v} \mathbbm{1}(\text{$v$ is undecided}) \Big] = \sum_v \Pr[\text{$v$ is undecided}] \leq \sum_v \frac{\epsilon}{10\Delta} = \frac{\epsilon}{10\Delta}n,\n$$\nand thus $\E[|D|] = n - \E[|U|] \geq (1-\frac{\epsilon}{10\Delta})n \geq 0.9n$. There is at least one IS node among the at most $\Delta + 1$ inclusive neighbors of any decided vertex; thus $\E[|I|] \geq \frac{\E[|D|]}{\Delta+1} \geq \frac{0.9n}{\Delta+1} \geq \frac{0.9n}{2\Delta} = 0.45 \frac{n}{\Delta}$. On the other hand, let $I'$ be the MIS obtained by greedily adding the undecided nodes to $I$ until they form an MIS. We have $|I'| \leq |I| + |U|$. Therefore, we indeed get that\n$$\n\frac{\E[|I|]}{\E[|I'|]} \geq \frac{\E[|I|]}{\E[|I|] + \E[|U|]} \geq \frac{0.45\frac{n}{\Delta}}{0.45\frac{n}{\Delta} + \frac{\epsilon}{10\Delta}n} = \frac{0.45\frac{n}{\Delta}}{(0.45 + 0.1 \epsilon) \frac{n}{\Delta}} = \frac{0.45}{0.45 + 0.1 \epsilon} > 1-\epsilon,\n$$ \nconcluding the proof.\n\n\section{Introduction}\n\nThe active galactic nucleus (AGN) is one of the most luminous classes of objects \nin the Universe, whose huge radiative energy is released through mass \naccretion onto the supermassive black hole (SMBH). The mass of SMBHs \n($M_{\rm BH}$) is tightly correlated with the mass or the stellar velocity \ndispersion of their host galaxies \citep[e.g.,][]{1998AJ....115.2285M, \n2003ApJ...589L..21M,2013ARA&A..51..511K, 2015ApJ...813...82R}, \nimplying that SMBHs and \ngalaxies have evolved while closely interacting with each other (the so-called \nco-evolution of SMBHs and galaxies). However, the physics behind the \nco-evolution is still unclear. To understand the full picture of the \nco-evolution, examining the scaling relations for AGNs in the early phase of the \nco-evolution is an interesting approach since different theoretical models predict \ndifferent redshift dependences of scaling relations \citep[e.g.,][]\n{2003ApJ...583...85K, 2010MNRAS.405...29L}. One simple strategy to explore \nthe early phase of the co-evolution is measuring the scaling relations at high \nredshifts, where the typical age of AGNs is much younger than that of low-redshift \nAGNs. Many attempts have been made to measure the scaling relations for \nhigh-redshift AGNs \citep[e.g.,][]{2008A&A...478..311S, 2010ApJ...714..699W, \n2013A&A...559A..29C}, and a higher $M_{\rm BH}$ with respect to the mass \nor velocity dispersion of host galaxies has sometimes been reported \n\citep[e.g.,][]{2010ApJ...714..699W}. On the other hand, there are some reports \nclaiming that such a possible evolution in the scaling relation is a result of \nobservational bias through the sample selection \citep[e.g.,][]\n{2011A&A...535A..87S}. Measuring the properties of AGN host galaxies at high \nredshift is generally very challenging, which prevents us from assessing the scaling \nrelations at high redshifts.\n\nAnother possible approach to study the early phase of the co-evolution is \nfocusing on young AGNs at low redshifts, where detailed observations are much \neasier than at high redshifts. 
In this context, low-metallicity (i.e., chemically young) \nAGNs in the low-redshift Universe are particularly interesting. However, the \ntypical metallicity of AGNs inferred for broad-line regions (BLRs) and narrow-line\nregions (NLRs) is high ($Z \\gtrsim 2 Z_{\\odot}$; e.g., \n\\citealt{2006A&A...447..157N, 2009A&A...503..721M}) and low-metallicity AGNs\nare very rare \\citep[e.g.,][]{2008ApJ...687..133I}. \\citet{2006MNRAS.371.1559G}\nproposed a method to search for AGNs with a low-metallicity NLR, which utilizes \nan optical emission-line diagnostic diagram that consists of the flux ratios of\n[N~{\\sc ii}]$\\lambda$6584\/H$\\alpha$$\\lambda$6563 and\n[O~{\\sc iii}]$\\lambda$5007\/H$\\beta$$\\lambda$4861. This diagnostic diagram\nwas originally investigated for classifying emission-line galaxies into star-forming\ngalaxies and Seyfert 2 galaxies (BPT diagram, \\citealt{1981PASP...93....5B}).\n\\citet[hereafter Ke01]{2001ApJ...556..121K} established the ``maximum'' \nstarburst line in the BPT diagram by combining stellar population synthesis \nmodels and photoionization models. On the other hand, \\citet[hereafter Ka03]\n{2003MNRAS.346.1055K} derived empirical classification criteria for star-forming \ngalaxies while \\citet[hereafter Ke06]{2006MNRAS.372..961K} derived empirical \nclassification criteria for low-ionization nuclear emission-line regions (LINERs;\n\\citealt{1980A&A....87..152H}), using emission-line data taken from the database \nof the Sloan Digital Sky Survey (SDSS; \\citealt{2000AJ....120.1579Y}).\n\n\\citet{2006MNRAS.371.1559G} pointed out that AGNs with a low-metallicity \nNLR (i.e., characterized by solar or sub-solar metallicity) should have a flux ratio of \n[O~{\\sc iii}]$\\lambda$5007\/H$\\beta$$\\lambda$4861 as high as usual AGNs\n($\\sim 10^{0.5}-10^1$) but an intermediate flux ratio of\n[N~{\\sc ii}]$\\lambda$6584\/H$\\alpha$$\\lambda$6563 between usual AGNs and\nlow-mass (i.e., low-metallicity) star-forming galaxies ($\\sim 10^{-1}-10^{-0.5}$). \nThis is because the relative nitrogen abundance is proportional to the metallicity, \nowing to its nature as a secondary element \\citep[e.g.,][]{1998ApJ...497L...1V}. In \nthe BPT diagram, there are only a few objects located in the region characterized \nby a high flux ratio of [O~{\\sc iii}]$\\lambda$5007\/H$\\beta$$\\lambda$4861 and \nan intermediate flux ratio of [N~{\\sc ii}]$\\lambda$6584\/H$\\alpha$$\\lambda$6563\n(hereafter ``BPT valley''; see Figure~\\ref{BPT_diagram}). \n\\citet{2006MNRAS.371.1559G} specifically focused on AGNs with a low-mass\nhost galaxy (i.e., $M_{\\rm host} < 10^{10} M_\\odot$), and then they selected \nlow-metallicity AGNs using another diagnostic\ndiagram that consists of the [N~{\\sc ii}]$\\lambda$6584\/[O~{\\sc ii}]$\\lambda$3727 and\n[O~{\\sc iii}]$\\lambda$5007\/[O~{\\sc ii}]$\\lambda$3727 flux ratios. However, it is not\nclear whether low-metallicity AGNs should always be found in a sample of AGNs with a\nlow-mass host galaxy. Also, the method adopted by \\citet{2006MNRAS.371.1559G}\nrequires a wide wavelength coverage ($\\lambda_{\\rm rest} \\sim 3700-6600$ \\AA), \nwhich is not convenient for future applications extending the search for low-metallicity\nAGNs toward the high-redshift Universe. 
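\n\nFor illustration, the position of a source relative to the Ka03 and Ke01 demarcation curves (given explicitly in Section 2), together with the BPT-valley cut adopted in this paper, can be evaluated with a short script (a minimal Python sketch; the example input ratios are arbitrary, and the Ke06 [O~{\\sc i}]-based LINER rejection is omitted for brevity):\n\\begin{verbatim}\nimport numpy as np\n\ndef ka03(x):   # Ka03 empirical star-forming boundary\n    return 0.61 \/ (x - 0.05) + 1.3\n\ndef ke01(x):   # Ke01 maximum-starburst line\n    return 0.61 \/ (x - 0.47) + 1.19\n\ndef bpt_class(n2_ha, o3_hb):\n    # x = log [N II]6584\/Ha, y = log [O III]5007\/Hb\n    x, y = np.log10(n2_ha), np.log10(o3_hb)\n    if x < 0.05 and y < ka03(x):\n        return 'star-forming'\n    if x < 0.47 and y < ke01(x):\n        return 'composite'\n    # AGN branch; the BPT valley further requires x < -0.5\n    return 'BPT-valley candidate' if x < -0.5 else 'AGN'\n\nprint(bpt_class(0.25, 5.0))   # -> 'BPT-valley candidate'\n\\end{verbatim}\n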
\n\nTherefore, we focus on the BPT-valley selection (requiring a moderately narrow \nwavelength coverage; $\\lambda_{\\rm rest} \\sim 4800-6600$ \\AA) to select \nlow-metallicity AGNs without any host-mass cut.\nHowever, there is a potentially serious problem in the BPT-valley selection for \nidentifying low-metallicity AGNs. \nThat is, low-metallicity AGNs are not the only objects located in the BPT valley. \nAs \\citet{2013ApJ...774..100K} showed, star-forming galaxies with \na very hard radiation field or high-density H~{\\sc ii} regions are expected \nto be seen in the BPT valley (see also, e.g., \\citealt[][]{2014ApJ...787..120S}). \nAlso, star-forming galaxies with a high ionization parameter \n\\citep[e.g.,][]{2014ApJ...795..165S, 2015PASJ...67...80H}, a high nitrogen-to-oxygen \nabundance ratio (N\/O; e.g., \\citealt[][]{2014ApJ...785..153M, 2015ApJ...801...88S, 2015PASJ...67..102Y, \n2016arXiv160503436K}), \nor shocks \\citep[e.g.,][]{2014ApJ...781...21N} are also expected to be seen in the BPT valley. \nIn addition to star-forming galaxies, AGNs with a high electron density or a high ionization parameter \n(i.e., not characterized by a low metallicity) could also be seen in the BPT valley \n\\citep[e.g.,][]{2001ApJ...546..744N}. \nTherefore, it is not completely clear whether the BPT-valley objects are really low-metallicity AGNs \nand whether the BPT diagram is a useful tool to search for low-metallicity AGNs. \nThis problem prevents us from selecting chemically-young AGNs observationally.\n\nIn this paper, we investigate the optical spectra of BPT-valley objects to \nexamine whether most of the emission-line galaxies in the BPT valley are really \nlow-metallicity AGNs or not. Through this examination, it will be tested whether \nthe optical BPT diagram is an efficient and appropriate method to search for \nlow-metallicity AGNs. In Section 2, we present our selection procedure of the \nBPT-valley sample. In Section 3, we show how we identify BPT-valley AGNs to \navoid contamination by star-forming galaxies in the BPT valley. In Section 4, we \ninvestigate gas properties of the selected BPT-valley AGNs, such as the electron \ndensity and ionization parameter, to examine whether the BPT-valley AGNs\nare characterized by a low metallicity or not. \nIn Section 5, we discuss physical properties of the BPT-valley AGNs.\nSection 6 summarizes this work.\n\n\n\\section{Sample}\n\nIn order to select the BPT-valley objects, we use the Max-Planck-Institute for \nAstrophysics (MPA)-Johns Hopkins University (JHU) SDSS Data Release 7 \n(\\citealt{2009ApJS..182..543A}) galaxy catalog\\footnote[1]\n{http:\/\/www.mpa-garching.mpg.de\/SDSS\/}. \nThe MPA-JHU DR7 catalog of spectral measurements contains various\nspectral properties, such as emission-line fluxes and their errors, based on\nthe analysis of 927,552 objects that do not show dominant broad Balmer lines \n(i.e., star-forming galaxies, composite galaxies, LINERs, and type-2 Seyfert galaxies) \nin the SDSS DR7. \nOur sample selection is based on the following procedure (the flow chart of our \nsample selection process is shown in Figure~\\ref{flow_chart}).\n\nFirst, we select the initial sample according to the following criteria. \nWe require a reliable redshift measurement (i.e., $z_{\\rm warning} = 0$) and also $z>$ 0.02. 
\nThis redshift limit is required to cover [O~{\\sc ii}]$\\lambda$3727.\nThis results in 906,761 galaxies.\nThen we require a signal-to-noise ratio (S\/N) $>$ 3 for some key emission lines, i.e., \nH$\\beta$$\\lambda$4861, [O~{\\sc iii}]$\\lambda$5007, \n[O~{\\sc i}]$\\lambda$6300, H$\\alpha$$\\lambda$6563, \n[N~{\\sc ii}]$\\lambda$6584 and [S~{\\sc ii}]$\\lambda \\lambda$6717, 31 (212,866 galaxies).\n\nNext, we classify these 212,866 galaxies and extract the BPT-valley sample \naccording to the following steps. \n\\begin{enumerate}\n\\item \n Using the \\citetalias{2003MNRAS.346.1055K} empirical line,\n \\begin{eqnarray}\n \\log \\left( \\frac{\\rm [O\\ {\\scriptstyle III}]}{\\rm H\\beta} \\right) \n > \\frac{0.61}{\\rm log ([N\\ {\\scriptstyle II}]\/H\\alpha) -0.05}+1.3,\n \\end{eqnarray}\n for removing usual star-forming galaxies (56,217 galaxies).\n\\item \n Using the \\citetalias{2001ApJ...556..121K} maximum starburst line,\n \\begin{eqnarray}\n \\log \\left( \\frac{\\rm [O\\ {\\scriptstyle III}]}{\\rm H\\beta} \\right) > \n \\frac{0.61}{\\rm log ([N\\ {\\scriptstyle II}]\/H\\alpha) -0.47}+1.19,\n \\end{eqnarray}\n for removing so-called composite galaxies (22,865 galaxies).\n\\item \n Using the \\citetalias{2006MNRAS.372..961K} empirical criterion,\n \\begin{eqnarray}\n \\log \\left( \\frac{\\rm [O\\ {\\scriptstyle III}]}{\\rm H\\beta} \\right) > \n 1.36\\log \\left( \\frac{\\rm [O\\ {\\scriptstyle I}]}{\\rm H\\alpha} \\right) + 1.4, \n \\end{eqnarray}\n for obtaining the Seyfert sample by removing LINERs (14,253 galaxies).\n\\item \n Adopting the following criterion,\n \\begin{eqnarray}\n \\log \\left( \\frac{\\rm [N\\ {\\scriptstyle II}]}{\\rm H\\alpha} \\right) < -0.5,\n \\label{BPT_valley}\n \\end{eqnarray}\n for finally selecting the BPT-valley sample (71 galaxies).\n\\end{enumerate}\nNote that one object among the 71 BPT-valley entries was observed twice and is \nduplicated in this sample, i.e., the final BPT-valley sample consists of 70 objects. \nThe BPT-valley criterion (Equation~\\ref{BPT_valley}) is determined empirically, \nby taking into account the frequency distribution of the [N~{\\sc ii}]$\\lambda6584$\/H$\\alpha$$\\lambda6563$ \nflux ratio of Seyfert galaxies. \nFigure~\\ref{NII_Ha} shows the [N~{\\sc ii}]$\\lambda6584$\/H$\\alpha$$\\lambda6563$ frequency distribution of \nSeyfert galaxies, \nwhere the average and standard deviation of the logarithmic [N~{\\sc ii}]$\\lambda6584$\/H$\\alpha$$\\lambda6563$ \nflux ratio are $-0.058$ and $0.145$, respectively.\nAccordingly, the $3\\ \\sigma$ deviation below the average value is $-0.493$, \nand therefore we adopt the threshold described by Equation~\\ref{BPT_valley} \nto categorize BPT-valley objects.\nFigure~\\ref{BPT_diagram} shows the finally selected 70 BPT-valley objects \nin the BPT diagram that consists of\n[N~{\\sc ii}]$\\lambda$6584\/H$\\alpha$$\\lambda$6563 versus \n[O~{\\sc iii}]$\\lambda$5007\/H$\\beta$$\\lambda$4861.\nTable 1 shows the basic properties of the selected BPT-valley objects.\n\n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=8.5cm]{figure_1.eps}\n \\caption{Flow chart of the selection of BPT-valley objects. Numbers shown in the chart denote \n the numbers of objects at each selection stage. 
\n The number shown in the parenthesis denotes the number of objects after removing the duplication.}\n \\label{flow_chart} \n\\end{figure}\n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=8.5cm]{figure_2.eps}\n \\caption{\n Histogram of the [N~{\\sc ii}]$\\lambda$6584\/H$\\alpha$$\\lambda$6563 flux ratio of Seyfert galaxies.\n Dashed line denotes the average of log ([N~{\\sc ii}]$\\lambda6584$\/H$\\alpha$$\\lambda6563$) $= -0.058$, \n while dotted line denotes the threshold of log ([N~{\\sc ii}]$\\lambda6584$\/H$\\alpha$$\\lambda6563$) $= -0.5$ \n to select BPT-valley objects.\n }\n \\label{NII_Ha} \n\\end{figure}\n\n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=8.5cm]{figure_3s.eps}\n \\caption{The BPT diagram ([N~{\\sc ii}]$\\lambda$6584\/H$\\alpha$$\\lambda$6563 versus \n [O~{\\sc iii}]$\\lambda$5007\/H$\\beta$$\\lambda$4861), showing the BPT-valley sample \n (red square) among the SDSS DR7 emission-line objects. \n The green solid line is the Ke01 extreme starburst criterion, \n while the red solid line denotes the criterion for separating star-forming galaxies and \n composite galaxies (Ka03). \n The violet solid line is the BPT valley criterion which is defined in this paper.\n The numbers of various galaxy populations are shown in the parenthesis \n in the lower-left box.}\n \\label{BPT_diagram} \n\\end{figure}\n\n\n\n\\begin{deluxetable*}{ccrrrrrrrrc}\n\\tabletypesize{\\scriptsize}\n\\tablecaption{The BPT-valley sample}\n\\tablewidth{0pt}\n\\tablehead{\n\\colhead{ID} & \\colhead{SDSS Name} & \\colhead{Plate} & \\colhead{MJD} & \\colhead{Fiber} &\n\\colhead{$z$} & \\colhead{H$\\beta$$\\lambda$4861} & \\colhead{[O~{\\sc iii}]$\\lambda$5007} &\n\\colhead{[O~{\\sc i}]$\\lambda$6300} & \\colhead{H$\\alpha$$\\lambda$6563} &\n\\colhead{[N~{\\sc ii}]$\\lambda$6584}\\\\\n\\colhead{(1)} & \\colhead{(2)} & \\colhead{(3)} & \\colhead{(4)} & \\colhead{(5)} & \n\\colhead{(6)} & \\colhead{(7)} & \\colhead{(8)} & \\colhead{(9)} & \\colhead{(10)} & \n\\colhead{(11)}\n}\n\\startdata\n1.......... & SDSS J102310.97$-$002810.8 & 0272 & 51941 & 0238 & 0.11274 & 253.97 & 2279.29 & 125.19 & 881.10 & 235.20 \\\\\n2.......... & SDSS J111006.26$-$010116.5 & 0278 & 51900 & 0096 & 0.10949 & 382.92 & 1837.07 & 129.44 & 1484.61 & 438.76 \\\\\n3.......... & SDSS J230321.73+011056.4 & 0380 & 51792 & 0565 & 0.18136 & 235.38 & 1692.71 & 12.29 & 816.35 & 247.33 \\\\\n4.......... & SDSS J024825.26$-$002541.4 & 0409 & 51871 & 0150 & 0.02467 & 10.35 & 41.33 & 4.71 & 47.65 & 14.92 \\\\\n5.......... & SDSS J073506.37+393300.8 & 0432 & 51884 & 0316 & 0.03479 & 29.46 & 163.57 & 8.55 & 109.51 & 21.85 \\\\\n6.......... & SDSS J023310.78$-$074813.4 & 0455 & 51909 & 0388 & 0.03097 & 97.71 & 707.20 & 12.83 & 433.29 & 104.88 \\\\\n7.......... & SDSS J092907.78+002637.3 & 0475 & 51965 & 0205 & 0.11732 & 259.86 & 2602.18 & 56.12 & 1173.36 & 274.70 \\\\\n8.......... & SDSS J090613.76+561015.1 & 0483 & 51902 & 0016 & 0.04668 & 188.36 & 1144.93 & 73.96 & 621.46 & 189.46 \\\\\n9.......... & SDSS J144328.78+044022.0 & 0587 & 52026 & 0374 & 0.11411 & 36.11 & 300.13 & 3.37 & 137.53 & 32.98 \\\\\n10......... & SDSS J114825.71+643545.0 & 0598 & 52316 & 0189 & 0.04169 & 349.08 & 1347.75 & 60.17 & 1302.50 & 392.24 \\\\\n11......... & SDSS J213439.57$-$071641.9 & 0641 & 52199 & 0487 & 0.06377 & 215.60 & 2079.35 & 78.83 & 800.93 & 182.94 \\\\\n12......... & SDSS J001050.35$-$010257.4 & 0686 & 52519 & 0020 & 0.11299 & 223.89 & 1669.57 & 109.74 & 940.89 & 275.90 \\\\\n13......... 
& SDSS J124738.52+621243.1 & 0781 & 52373 & 0076 & 0.12112 & 171.82 & 915.80 & 33.03 & 697.15 & 162.91 \\\\\n14......... & SDSS J131659.37+035319.9 & 0851 & 52376 & 0219 & 0.04541 & 127.05 & 1399.96 & 80.91 & 698.40 & 136.73 \\\\\n15......... & SDSS J115908.55+525823.1 & 0881 & 52368 & 0623 & 0.06644 & 392.77 & 2812.16 & 52.13 & 1218.45 & 379.53 \\\\\n16......... & SDSS J104632.21+543559.7 & 0906 & 52368 & 0169 & 0.14475 & 83.71 & 938.64 & 19.52 & 375.92 & 101.46 \\\\\n17......... & SDSS J104600.36+061632.0 & 1000 & 52643 & 0035 & 0.18447 & 146.67 & 637.74 & 31.28 & 586.86 & 177.72 \\\\\n18......... & SDSS J101945.65+520608.6 & 1008 & 52707 & 0378 & 0.06492 & 29.60 & 115.17 & 6.75 & 92.29 & 28.00 \\\\\n19......... & SDSS J205111.11+000913.2 & 1023 & 52818 & 0393 & 0.06644 & 7.37 & 44.11 & 4.92 & 37.11 & 9.68 \\\\\n20......... & SDSS J214930.43+010509.4 & 1031 & 53172 & 0369 & 0.11399 & 28.14 & 210.35 & 3.43 & 109.29 & 31.13 \\\\\n21......... & SDSS J081653.27+285423.1 & 1206 & 52670 & 0530 & 0.05740 & 246.45 & 1610.19 & 131.99 & 1145.34 & 203.60 \\\\\n22......... & SDSS J095319.42+422912.2 & 1217 & 52672 & 0349 & 0.22343 & 207.67 & 1835.79 & 83.29 & 698.87 & 115.09 \\\\\n23......... & SDSS J110504.94+101623.5 & 1221 & 52751 & 0359 & 0.02076 & 104.02 & 466.39 & 29.14 & 481.20 & 151.62 \\\\\n24......... & SDSS J114440.53+102429.3 & 1226 & 52734 & 0435 & 0.12688 & 51.67 & 304.73 & 39.93 & 241.38 & 67.83 \\\\\n25......... & SDSS J124110.10+104143.7 & 1233 & 52734 & 0611 & 0.15613 & 214.49 & 1004.84 & 29.37 & 703.65 & 188.71 \\\\\n26......... & SDSS J092620.42+352250.3 & 1274 & 52995 & 0147 & 0.24729 & 118.98 & 868.18 & 81.24 & 513.89 & 75.74 \\\\\n27......... & SDSS J131756.07+491531.3 & 1282 & 52759 & 0390 & 0.09231 & 215.30 & 2050.51 & 105.88 & 711.65 & 176.30 \\\\\n28......... & SDSS J090107.41+085459.2 & 1300 & 52973 & 0335 & 0.08380 & 172.61 & 1116.69 & 153.30 & 822.23 & 210.37 \\\\\n29......... & SDSS J120134.05+581421.1 & 1313 & 52790 & 0527 & 0.04636 & 28.83 & 136.20 & 5.78 & 87.43 & 20.49 \\\\\n30......... & SDSS J152723.47+334919.1 & 1354 & 52814 & 0044 & 0.09116 & 44.63 & 173.46 & 24.18 & 254.04 & 79.05 \\\\\n31......... & SDSS J112314.89+431208.7 & 1365 & 53062 & 0119 & 0.08005 & 56.52 & 226.87 & 33.20 & 183.08 & 52.19 \\\\\n32......... & SDSS J152328.09+313655.6 & 1387 & 53118 & 0210 & 0.06850 & 382.47 & 1707.84 & 67.32 & 1300.98 & 322.89 \\\\\n33......... & SDSS J120900.89+422830.9 & 1448 & 53120 & 0075 & 0.02364 & 100.07 & 614.24 & 24.50 & 320.70 & 61.56 \\\\\n34......... & SDSS J121839.40+470627.6 & 1451 & 53117 & 0190 & 0.09389 & 478.35 & 5046.81 & 101.68 & 1861.68 & 380.45 \\\\\n35......... & SDSS J005231.29$-$011525.2 & 1496 & 52883 & 0089 & 0.13485 & 251.99 & 2400.52 & 55.68 & 826.34 & 254.44 \\\\\n36......... & SDSS J011341.11+010608.5 & 1499 & 53001 & 0522 & 0.28090 & 191.99 & 2118.06 & 38.27 & 779.72 & 162.79 \\\\\n37......... & SDSS J001901.52+003931.8 & 1542 & 53734 & 0375 & 0.09669 & 58.06 & 296.69 & 15.99 & 242.69 & 58.75 \\\\\n38......... & SDSS J034019.39+002530.6 & 1632 & 52996 & 0467 & 0.35296 & 40.58 & 375.40 & 10.78 & 167.65 & 46.21 \\\\\n39......... & SDSS J032224.64+401119.8 & 1666 & 52991 & 0048 & 0.02608 & 121.08 & 1388.24 & 50.70 & 428.76 & 130.77 \\\\\n40......... & SDSS J135855.82+493414.1 & 1670 & 53438 & 0061 & 0.11592 & 56.67 & 385.56 & 14.74 & 189.76 & 50.29 \\\\\n41......... & SDSS J160452.78+344540.4 & 1682 & 53173 & 0201 & 0.05493 & 87.33 & 437.20 & 37.34 & 364.99 & 111.94 \\\\\n42......... 
& SDSS J132011.71+125940.9 & 1698 & 53146 & 0327 & 0.11398 & 25.43 & 174.93 & 4.92 & 92.99 & 24.68 \\\\\n43......... & SDSS J143523.42+100704.1 & 1711 & 53535 & 0306 & 0.03122 & 128.18 & 530.61 & 15.31 & 473.30 & 123.16 \\\\\n44......... & SDSS J072637.94+394557.8 & 1733 & 53047 & 0326 & 0.11141 & 505.82 & 3357.26 & 23.23 & 1744.03 & 120.42 \\\\\n45......... & SDSS J095914.76+125916.4 & 1744 & 53055 & 0385 & 0.03432 & 1298.59 & 8418.94 & 361.09 & 4396.86 & 1088.11 \\\\\n46......... & SDSS J113714.22+145917.2 & 1755 & 53386 & 0463 & 0.03484 & 74.19 & 364.47 & 25.99 & 276.40 & 77.15 \\\\\n47......... & SDSS J120847.79+135906.7 & 1764 & 53467 & 0013 & 0.29030 & 136.33 & 659.31 & 28.11 & 554.94 & 137.92 \\\\\n48......... & SDSS J135429.05+132757.2 & 1777 & 53857 & 0076 & 0.06332 & 312.80 & 3422.56 & 111.50 & 1005.37 & 306.94 \\\\\n49......... & SDSS J130431.99+061616.7 & 1794 & 54504 & 0046 & 0.06283 & 184.84 & 1202.46 & 41.38 & 702.29 & 217.24 \\\\\n50......... & SDSS J134316.52+101440.1 & 1804 & 53886 & 0433 & 0.08132 & 186.67 & 960.51 & 170.90 & 635.68 & 198.21 \\\\\n51......... & SDSS J160032.89+052608.8 & 1822 & 53172 & 0012 & 0.11653 & 281.02 & 1630.88 & 69.22 & 1204.67 & 280.09 \\\\\n52......... & SDSS J081212.84+541539.8 & 1871 & 53384 & 0060 & 0.04417 & 93.10 & 809.73 & 34.28 & 326.93 & 94.82 \\\\\n53......... & SDSS J084038.99+245101.6 & 1931 & 53358 & 0396 & 0.04334 & 137.37 & 770.11 & 39.75 & 579.80 & 151.50 \\\\\n54......... & SDSS J122451.88+360535.4 & 2003 & 53442 & 0112 & 0.15094 & 25.95 & 148.22 & 11.11 & 126.62 & 35.77 \\\\\n55......... & SDSS J134237.37+273251.3 & 2017 & 53474 & 0127 & 0.04947 & 12.67 & 97.69 & 10.27 & 60.02 & 17.32 \\\\\n56......... & SDSS J140952.03+244334.6 & 2128 & 53800 & 0358 & 0.05215 & 45.42 & 220.76 & 12.29 & 198.30 & 41.19 \\\\\n57......... & SDSS J142535.21+314027.1 & 2129 & 54252 & 0618 & 0.03324 & 91.61 & 362.11 & 42.31 & 323.51 & 95.49 \\\\\n58......... & SDSS J145505.97+211121.1 & 2148 & 54526 & 0122 & 0.06751 & 82.30 & 441.80 & 12.31 & 437.94 & 126.61 \\\\\n59......... & SDSS J083200.51+191205.8 & 2275 & 53709 & 0472 & 0.03753 & 549.64 & 6069.28 & 42.46 & 15422.57 & 419.58 \\\\\n60......... & SDSS J103731.01+280626.9 & 2356 & 53786 & 0468 & 0.04263 & 54.57 & 447.10 & 24.55 & 216.95 & 65.48 \\\\\n61......... & SDSS J104403.52+282628.3 & 2356 & 53786 & 0618 & 0.16286 & 225.46 & 1047.00 & 17.43 & 794.20 & 193.30 \\\\\n62......... & SDSS J104724.40+204433.5 & 2478 & 54097 & 0541 & 0.26515 & 102.51 & 751.70 & 37.17 & 391.58 & 66.16 \\\\\n63......... & SDSS J160635.22+142201.9 & 2524 & 54568 & 0498 & 0.03245 & 162.66 & 621.83 & 41.72 & 517.08 & 160.44 \\\\\n64......... & SDSS J171901.28+643830.8 & 2561 & 54597 & 0345 & 0.08954 & 152.42 & 709.97 & 21.69 & 586.95 & 174.32 \\\\\n65......... & SDSS J084658.44+111457.5 & 2574 & 54084 & 0382 & 0.06296 & 130.82 & 638.15 & 41.04 & 557.91 & 161.29 \\\\\n66......... & SDSS J095745.49+152350.6 & 2584 & 54153 & 0442 & 0.05183 & 96.42 & 702.55 & 24.58 & 514.46 & 117.37 \\\\\n67......... & SDSS J133014.91+242153.9 & 2665 & 54232 & 0388 & 0.07151 & 50.49 & 244.84 & 15.74 & 284.46 & 76.98 \\\\\n68......... & SDSS J135007.07+164227.2 & 2742 & 54233 & 0551 & 0.13043 & 99.01 & 903.05 & 38.82 & 495.52 & 137.00 \\\\\n69......... & SDSS J153941.67+171421.9 & 2795 & 54563 & 0509 & 0.04583 & 157.66 & 758.85 & 14.76 & 500.33 & 119.40 \\\\\n70......... & SDSS J143730.46+620649.4 & 2947 & 54533 & 0227 & 0.21862 & 66.97 & 290.54 & 22.39 & 219.25 & 65.39 \n\n\\enddata\n\\tablecomments{Col. 
(1): Identification number assigned in this paper. \nCol. (2): Object name. \nCol. (3)--(5): Plate-MJD-Fiber ID of the SDSS observations for the analyzed spectra. \nCol. (6): Redshift measured by the SDSS pipeline.\nCol. (7)--(11): Emission-line fluxes in units of $10^{-17}$ $\\rm erg\\ s^{-1}\\ cm^{-2}$.\n}\n\\end{deluxetable*}\n\n\n\n\\section{Selection of secure-AGN sample}\n\nAs described in Section 1, the BPT-valley sample potentially includes star-forming galaxies with \nspecial gas properties, not only AGNs. \nThus we first select objects showing secure evidence of an AGN from the BPT-valley sample. \nSpecifically, we regard objects showing at least one of the following two features \nin their SDSS spectra as secure AGNs: (1) a broad H$\\alpha$$\\lambda$6563 emission line, \nand (2) a He~{\\sc ii}$\\lambda 4686$ emission line. \nDetails of the selection procedure of secure AGNs are given below.\n\n\n\\subsection{Broad H$\\alpha$$\\lambda$6563 emission line}\n\nThe velocity profile of recombination lines is a powerful tool to examine \nthe presence of AGNs, since star-forming galaxies never show a velocity \nwidth wider than $\\sim$1000 km~s$^{-1}$ in full-width at half maximum (FWHM). \nGenerally, the optical spectra of type-1 AGNs show broad permitted lines emitted \nfrom BLRs, whose velocity width is $\\gtrsim$ 2000 km~s$^{-1}$. \nThe origin of recombination lines with $\\rm FWHM \\sim 1000 - 2000$ km~s$^{-1}$ is \nnot very clear, since such lines may arise in BLRs of so-called narrow-line \nSeyfert 1 galaxies (NLS1s; e.g., Osterbrock \\& Pogge 1985) or \nin NLRs of type-2 AGNs with a relatively large velocity width \n(such as NGC 1068 and NGC 1275; see, e.g., \\citealt{1984ApJ...281..525H, 2000ApJ...532L.101C}). \nHowever, in either case, the detection of recombination lines with \n$\\rm FWHM > 1000$ km~s$^{-1}$ strongly suggests the presence of an AGN. \nTherefore we search for the broad H$\\alpha$$\\lambda$6563 component \nin the optical spectra of the BPT-valley objects. Here we do not search for \nthe broad component of the H$\\beta$$\\lambda$4861 emission, \nsince it is intrinsically fainter than that of the H$\\alpha$$\\lambda$6563 emission \nand it is sometimes affected significantly by the Fe~{\\sc ii} multiplet emission\n\\citep[e.g.,][]{2001AJ....122..549V}.\n\nWe use the IRAF routine {\\tt specfit} \\citep{1994ASPC...61..437K} to find the broad \nH$\\alpha$$\\lambda$6563 component.\nSpecifically, we fit the SDSS optical spectrum of each BPT-valley object \nin the range of $\\lambda_{\\rm rest} = 6200-6800\\ \\rm \\AA$ with and \nwithout the broad H$\\alpha$$\\lambda$6563 component, and examine \nwhether the addition of the broad component improves the spectral fit significantly. \nThe details of the fitting procedure are as follows. \nFirst, we fit the optical spectrum with a linear continuum component and \nsingle-Gaussian emission-line components for [O~{\\sc i}]$\\lambda$6300, \n[O~{\\sc i}]$\\lambda$6363, [N~{\\sc ii}]$\\lambda$6548, H$\\alpha$$\\lambda$6563, \n[N~{\\sc ii}]$\\lambda$6584, [S~{\\sc ii}]$\\lambda$6717, and [S~{\\sc ii}]$\\lambda$6731 \n(hereafter ``nobroad fitting''). \nHere we assume that the velocity width of all emission lines is the same, \nand the relative separation of the emission lines is fixed to be the same as \nthat of their laboratory wavelengths. 
\nThe flux ratios of [O~{\\sc i}]$\\lambda$6300 to [O~{\\sc i}]$\\lambda$6363 and \n[N~{\\sc ii}]$\\lambda$6584 to [N~{\\sc ii}]$\\lambda$6548 are fixed to be 3.00 and 2.96, \nrespectively \\citep{1983IAUS..103..143M}, while the flux ratios among the remaining \nemission lines are left free. \nThen, we add a broad component for the H$\\alpha$$\\lambda$6563 emission to \nthe nobroad fit, where the flux, wavelength center and width of this additional \ncomponent are left free (hereafter ``broad fitting''). \nHere we recognize that the additional broad component significantly improves \nthe fit by the following criterion:\n\\begin{eqnarray}\n\\frac{\\tilde{\\chi}^2_{\\rm nobroad}-\\tilde{\\chi}^2_{\\rm broad}}{\\tilde{\\chi}^2_{\\rm nobroad}}>0.4,\n\\label{chi}\n\\end{eqnarray}\nwhere $\\tilde{\\chi}^2_{\\rm nobroad}$ and $\\tilde{\\chi}^2_{\\rm broad}$ are the reduced \nchi-squares of the nobroad fitting and broad fitting, respectively. \nNote that the threshold, 0.4, is determined empirically, so that the result \nbecomes consistent with visual inspection. \nAs a result, 13 BPT-valley objects with a broad component are identified among the 70 BPT-valley objects.\nFigures~\\ref{broad_HeII_1} and~\\ref{broad_noHeII} show the SDSS spectra with the best-fit results \nfor the BPT-valley objects with a broad H$\\alpha$$\\lambda$6563 component. \nFigure~\\ref{ID_48} shows an example of an object (ID = 48) whose fitting result does not satisfy the \ncriterion defined by Equation~\\ref{chi} (in this case, the fractional improvement of the fit, 0.32, \nis slightly below the threshold).\nNote that we regard object ID = 8 as an object with a broad H$\\alpha$ component, \nthough the FWHM of the broad H$\\alpha$ component is less than \n$1000\\ {\\rm km\\ s^{-1}}$ (Figure~\\ref{ID_8}). \nThis is because this object shows [Fe~{\\sc vii}]$\\lambda$6087 and [Fe~{\\sc x}]$\\lambda$6374 \nlines, which are seen only when an AGN is present. Note that such high-ionization forbidden \nemission lines are preferentially seen in type-1 AGNs \n\\citep[e.g.,][]{1998ApJ...497L...9M, 2000AJ....119.2605N}. \nNote that the [Fe~{\\sc vii}]$\\lambda$6087 line is seen in 8 objects \nwhile the [Fe~{\\sc x}]$\\lambda$6374 line is seen in 3 objects \n(including ID = 8; two objects in addition to ID = 8 show both [Fe~{\\sc vii}]$\\lambda$6087 \nand [Fe~{\\sc x}]$\\lambda$6374). \nThe spectral properties of the BPT-valley objects with a broad H$\\alpha$$\\lambda$6563 \ncomponent are summarized in Table 2.\nOnly one BPT-valley object (ID = 25) among the 13 showing a broad H$\\alpha$ component \nalso shows a broad H$\\beta$ component (see Figure~\\ref{broad_HeII_1}).\n\n\\subsection{He~{\\sc ii}$\\lambda$4686 emission line}\nThe presence of a He~{\\sc ii}$\\lambda4686$ emission line indicates the existence of \nhard ionizing radiation, since the ionization potential of \n$\\rm He^{+}$ is 54.4 eV. \nThis hard radiation is naturally produced by AGNs. \nTherefore, the He~{\\sc ii}$\\lambda4686$ emission line is a good indicator of AGNs. \nWe examine whether the SDSS optical spectra of the BPT-valley objects show \nthe He~{\\sc ii}$\\lambda$4686 line by visual inspection, \nsince He~{\\sc ii}$\\lambda$4686 measurements are not given in the MPA-JHU database. \nAs a result, 38 BPT-valley objects with the He~{\\sc ii} emission line are identified among \nthe 70 BPT-valley objects. 
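\n\nFor reference, the essence of the model comparison of Section 3.1 can be reproduced outside IRAF with standard least-squares tools (a minimal Python sketch, not the actual {\\tt specfit} procedure; the spectrum arrays and initial guesses are placeholders, and only the H$\\alpha$+[N~{\\sc ii}] components are included for brevity):\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.optimize import curve_fit\n\ndef gauss(lam, lam0, flux, sig):\n    amp = flux \/ (sig * np.sqrt(2 * np.pi))\n    return amp * np.exp(-0.5 * ((lam - lam0) \/ sig)**2)\n\ndef narrow_model(lam, a, b, dv, sig, f_ha, f_n2):\n    # linear continuum + narrow Ha + [N II] doublet; common width and\n    # velocity shift, [N II]6584\/6548 fixed to 2.96\n    s = 1.0 + dv \/ 3.0e5\n    m = a + b * lam\n    m = m + gauss(lam, 6563.0 * s, f_ha, sig)\n    m = m + gauss(lam, 6584.0 * s, f_n2, sig)\n    m = m + gauss(lam, 6548.0 * s, f_n2 \/ 2.96, sig)\n    return m\n\ndef broad_model(lam, a, b, dv, sig, f_ha, f_n2, fb, cb, sb):\n    return narrow_model(lam, a, b, dv, sig, f_ha, f_n2) + gauss(lam, cb, fb, sb)\n\ndef red_chi2(y, yfit, err, npar):\n    return np.sum(((y - yfit) \/ err)**2) \/ (y.size - npar)\n\n# lam, flx, err = ...  rest-frame spectrum over 6200-6800 A (placeholders)\n# p1, _ = curve_fit(narrow_model, lam, flx, sigma=err, p0=guess1)\n# p2, _ = curve_fit(broad_model,  lam, flx, sigma=err, p0=guess2)\n# c1 = red_chi2(flx, narrow_model(lam, *p1), err, len(p1))\n# c2 = red_chi2(flx, broad_model(lam, *p2),  err, len(p2))\n# accept the broad component if (c1 - c2) \/ c1 > 0.4\n\\end{verbatim}\n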
\nSome of the SDSS spectra of BPT-valley objects with the He~{\\sc ii} detection are shown \nin Figure~\\ref{broad_HeII_1}, while those without the He~{\\sc ii} detection are shown \nin Figure~\\ref{broad_noHeII}.\n\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=17cm]{figure_4.eps}\n \\vspace{2mm}\n \\caption{Spectra of the BPT-valley objects showing a broad H$\\alpha$ component and He~{\\sc ii} emission. \n Best-fit models are plotted in red, while the narrow-H$\\alpha$+[N~{\\sc ii}] Gaussian components, \n broad H$\\alpha$ component, and continuum are plotted in green, violet, and orange, respectively. \n Residuals are plotted in blue. Reduced chi-square values are given at the upper-right side in the \n right panels (the value before adding the broad H$\\alpha$ component is given in parentheses).\n }\n \\label{broad_HeII_1} \n \\end{figure*}\n\n\\setcounter{figure}{3}\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=17cm]{figure_5.eps}\n\\caption{(Continued)}\n \\label{broad_HeII_2} \n\\end{figure*}\n\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=17.cm]{figure_6.eps}\n \\vspace{2mm}\n \\caption{Same as Figure~\\ref{broad_HeII_1} but for objects showing the broad H$\\alpha$ emission \n but without the He~{\\sc ii} line.}\n \\label{broad_noHeII} \n \\end{figure*}\n\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=17.cm]{figure_7.eps}\n \\vspace{2mm}\n \\caption{\n Same as Figure~\\ref{broad_HeII_1} but for an example of an object whose fitting result \n does not satisfy Equation~\\ref{chi}.\n }\n \\label{ID_48} \n \\end{figure*}\n\n\\begin{figure}[h]\n \\centering\n \\hspace{0cm}\n \\vspace{0.cm}\n \\includegraphics[width=8.5cm]{figure_8.eps}\n \\vspace{0cm}\n \\caption{Same as Figure~\\ref{broad_HeII_1}, but for object ID = 8, whose broad \n H$\\alpha$ component has a FWHM of less than $1500\\ {\\rm km\\ s^{-1}}$ (see Table 2). \n High-ionization forbidden emission lines of [Fe~{\\sc vii}]$\\lambda$6087 and \n [Fe~{\\sc x}]$\\lambda$6374 are clearly detected.\n Note that the fitting range is from 6200 to 6800 $\\rm \\AA$ in the rest frame; \n the continuum fit is extrapolated below 6200 $\\rm \\AA$.}\n \\label{ID_8} \n\\end{figure}\n\n\\subsection{Classification result of the BPT-valley sample}\n\n The results of the classification of the BPT-valley objects are summarized in Table 3. \n Among the 70 BPT-valley objects, 8 objects show both a broad H$\\alpha$ component and \n a He~{\\sc ii} emission line, and are thus confirmed to be AGNs. \n There are 5 objects showing the broad H$\\alpha$ component but no He~{\\sc ii} emission line, \n which are also regarded as AGNs. \n The non-detection of the He~{\\sc ii} line is likely due to an insufficient signal-to-noise ratio, \n since the He~{\\sc ii} line is very weak. In addition, 30 objects show the He~{\\sc ii} line \n but no broad H$\\alpha$ component, and are thought to be typical type-2 AGNs. \n Here we should mention that the stellar absorption lines (mainly H$\\alpha$) are not \n considered in our fitting procedure. \n Though the stellar H$\\alpha$ absorption line could impact the narrow component of the H$\\alpha$ \n emission, the absorption effect is negligible for examining the presence of the broad H$\\alpha$ \n component. 
\n This is because the equivalent width of the detected broad H$\\alpha$ component is higher than \n $20\\ {\\rm \\AA}$ (the median value of $\\rm EW_{rest}(H\\alpha)_b$ is $44.74\\ {\\rm \\AA}$, Table 2) \n while the typical equivalent width of the stellar H$\\alpha$ absorption is $\\sim 2-3\\ {\\rm \\AA}$ \n in nearby galaxies \\citep[e.g.,][]{1997ApJS..112..315H}.\n Note that the detected He~{\\sc ii} line is not caused by Wolf-Rayet stars, \n because the typical velocity width of the detected He~{\\sc ii} line is not \n broad ($\\lesssim 1000\\ \\rm km\\ s^{-1}$). \n Therefore, at least 43 of the 70 BPT-valley objects are regarded as AGNs. \n There may be some additional AGNs in the remaining 27 objects, possibly \n owing to insufficient S\/N to detect any AGN indicators in their spectra. \n Conversely, some of those 27 objects could be non-AGNs, i.e., star-forming \n galaxies with a relatively high N\/O ratio or fast shocks. \n We do not discuss those 27 objects further, since the main interest \n of this work is in the BPT-valley AGN sample. \n Accordingly, we conclude that at least 43 objects of the BPT-valley sample \n (or $\\sim$ 60\\%, but probably more) are confirmed to be AGNs.\n \n As described in Section 3.1, at least one of the [Fe~{\\sc vii}]$\\lambda$6087 and \n [Fe~{\\sc x}]$\\lambda$6374 lines is seen in 9 BPT-valley objects. \n Interestingly, a large fraction of the objects showing both the broad H$\\alpha$ component and He~{\\sc ii} \n emission show such high-ionization iron lines (5 among 8 objects). \n On the other hand, objects showing neither the broad H$\\alpha$ component nor He~{\\sc ii} emission \n never show those high-ionization iron lines. \n Meanwhile, only a few objects in the remaining two classes show high-ionization iron lines (4 among 35 objects). \n This may indicate that our classification traces the presence of an AGN well, \n while the absence of high-ionization iron lines could be simply due to a low S\/N ratio of the spectra. \n \n Figure~\\ref{BPT_classification} shows how the various populations of galaxies classified in this work \n are distributed in the BPT diagram. \n There is no significant segregation, except for two BPT-valley objects whose \n [N~{\\sc ii}]$\\lambda6584$\/H$\\alpha$$\\lambda6563$ flux ratio is very low, $< 0.1$. \n Neither of these two galaxies shows a broad H$\\alpha$ component or a He~{\\sc ii} line, \n which is consistent with the idea that these two objects are not low-metallicity AGNs \n but somewhat extreme low-metallicity galaxies, probably characterized by a very \n high ionization parameter and\/or very hard ionizing radiation.\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=8.5cm]{figure_9s.eps}\n\\caption{The BPT diagram ([N~{\\sc ii}]$\\lambda$6584\/H$\\alpha$$\\lambda$6563 versus \n [O~{\\sc iii}]$\\lambda$5007\/H$\\beta$$\\lambda$4861) with the classification result of the \n BPT valley sample among the SDSS DR7 emission-line objects. 
\n The numbers of the various galaxy populations are shown in parentheses in the lower-right box.\n}\n\\label{BPT_classification} \n\\end{figure}\n\n\n\\begin{deluxetable*}{lrrrrr}\n\\tablewidth{0pt}\n\\tablecolumns{6} \n\\tablecaption{Broad-line AGNs in the BPT valley}\n\\tablehead{\n\\colhead{ID} & \\colhead{$f(\\rm H\\alpha)_n$} & \\colhead{$f(\\rm H\\alpha)_b$} & \\colhead{$\\rm FWHM_{H\\alpha}$} & \\colhead{$\\rm FWHM_{[S\\ {\\scriptscriptstyle II}]}$} & \\colhead{$\\rm EW_{\\rm rest}(H\\alpha)_b$}\\\\\n\\colhead{(1)} & \\colhead{(2)} & \\colhead{(3)} & \\colhead{(4)} & \\colhead{(5)} & \\colhead{(6)}\n}\n\\startdata\n\\multicolumn{6}{c}{broad H$\\alpha$ and He~{\\sc ii}}\\\\\n\\tableline\n3......... & 991.96 & 1834.96 & 7162.24 & 245.05 & 88.10\\\\\n8......... & 414.20 & 309.70 & 875.81\\tablenotemark{1} & 247.31 & 27.02\\\\\n12........ & 524.87 & 830.66 & 1682.70 & 324.79 & 57.25\\\\\n13........ & 748.78 & 728.55 & 2033.36 & 223.68 & 72.83\\\\\n25......... & 607.56 & 930.70 & 1974.28 & 248.66 & 159.88\\\\\n34........ & 2072.41 & 797.10 & 2083.37 & 377.39 & 33.06\\\\\n65........ & 573.27 & 761.34 & 4830.02 & 263.52 & 44.61\\\\\n66........ & 448.38 & 768.76 & 2430.79 & 221.86 & 47.03\\\\\n\\tableline\n\\multicolumn{6}{c}{broad H$\\alpha$ and noHe~{\\sc ii}}\\\\\n\\tableline\n17........ & 719.07 & 392.29 & 3396.57 & 270.94 & 44.74\\\\\n21........ & 1014.89 & 1145.11 & 2440.19 & 337.97 & 44.13\\\\\n47........ & 515.66 & 726.01 & 3569.90 & 273.99 & 246.77\\\\\n58........ & 390.35 & 801.51 & 4347.42 & 307.51 & 28.83\\\\\n67........ & 276.51 & 335.10 & 2372.32 & 276.41 & 25.14 \n\\enddata\n\\tablenotetext{1}{Classified as an object with a broad H$\\alpha$ component though the FWHM \nof the additional H$\\alpha$ component is less than 1000 km s$^{-1}$ (see the main text).}\n\\tablecomments{\nCol. (1): Identification number assigned in this paper. \nCol. (2): Flux of the narrow component of H$\\alpha$ in units of $10^{-17}$ $\\rm erg\\ s^{-1}\\ cm^{-2}$.\nCol. (3): Flux of the broad component of H$\\alpha$ in units of $10^{-17}$ $\\rm erg\\ s^{-1}\\ cm^{-2}$.\nCol. (4): FWHM of the broad component of H$\\alpha$ in units of km $\\rm s^{-1}$.\nCol. (5): FWHM of the [S~{\\sc ii}]$\\lambda$6717 (i.e., narrow component) in units of km $\\rm s^{-1}$.\nCol. (6): Rest-frame equivalent width of the broad component of H$\\alpha$ in units of $\\rm \\AA$.\n}\n\n\\end{deluxetable*}\n\n\\begin{deluxetable}{lcc}\n\\tablecolumns{3} \n\\tabletypesize{\\scriptsize}\n\\tablewidth{0pc}\n\\tablecaption{Classification result of the BPT-valley sample}\n\\tablehead{ \n\\colhead{} & \\colhead{broad} & \\colhead{nobroad} \n}\n\\startdata\n He II & 8 & 30 \\\\\n noHe II & 5 & 27 \n\\enddata\n\\end{deluxetable}\n\n\n\\section{Selection of Low-Metallicity AGNs}\n\nThe 43 BPT-valley objects confirmed to be AGNs are not necessarily low-metallicity AGNs, \nbecause AGNs with a very high electron density or a very high ionization parameter are \nalso expected to populate the BPT valley, as mentioned in Section 1. \nMore specifically, the [N~{\\sc ii}]$\\lambda$6584 emission in AGNs with a density higher than \nthe critical density of the [N~{\\sc ii}]$\\lambda$6584 transition ($\\sim$8.7 $\\times 10^4$ cm$^{-3}$) \nis significantly suppressed due to the collisional de-excitation effect. 
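\n\nThis density dependence can be illustrated with the PyNeb package (a minimal Python sketch; we assume PyNeb's default [N~{\\sc ii}] atomic data, which may differ slightly from the values quoted above):\n\\begin{verbatim}\nimport pyneb as pn\n\nN2 = pn.Atom('N', 2)            # the N+ ion emitting [N II]6584\ntem = 1.0e4                     # K, fiducial NLR temperature\nfor den in (1e2, 1e3, 1e4, 1e5, 1e6):   # cm^-3\n    em = N2.getEmissivity(tem=tem, den=den, wave=6584)\n    # the emissivity per ion per electron drops once den exceeds\n    # the critical density, reflecting collisional de-excitation\n    print(den, em)\n\\end{verbatim}\n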
\nOn the other hand, a very high ionization parameter results in a higher relative ionic abundance of \nN$^{2+}$ (i.e., a lower relative ionic abundance of N$^{+}$), which results in a weaker \n[N~{\\sc ii}]$\\lambda$6584 emission.\nTherefore, in this section, we examine whether the 43 BPT-valley AGNs are characterized \nby a very high electron density or a very high ionization parameter, and thereby test whether \nthe AGNs in the BPT valley are characterized by low-metallicity gas.\n\n\\subsection{Electron density}\nThe emission-line flux ratios of [S~{\\sc ii}]$\\lambda$6717\/$\\lambda$6731 and \n[O~{\\sc ii}]$\\lambda$3729\/$\\lambda$3726 are well-known indicators of the electron density \n\\citep[e.g.,][]{1989agna.book.....O}.\nIn this work, we use the [S~{\\sc ii}]$\\lambda$6717\/$\\lambda$6731 line ratio to estimate the electron density, \nbecause the wavelength separation of the [O~{\\sc ii}] doublet is too small to be well resolved with \nthe SDSS spectral resolution.\nWe use the IRAF routine {\\tt temden} to derive the electron density from the \n[S~{\\sc ii}]$\\lambda$6717\/$\\lambda$6731 ratio, assuming an electron temperature of 10,000 K. \nHere we derive the electron density for objects whose [S~{\\sc ii}]$\\lambda$6717\/$\\lambda$6731 ratio is \nwithin the range 0.5--1.4. \nNo BPT-valley objects show a [S~{\\sc ii}]$\\lambda$6717\/$\\lambda$6731 ratio lower than 0.5 \n(i.e., the high-density limit), while 11 among the 70 BPT-valley objects show a flux ratio \nhigher than 1.4 (i.e., the low-density limit). \nAmong the 14,252 Seyfert galaxies, only 12 objects show a [S~{\\sc ii}]$\\lambda$6717\/$\\lambda$6731 \nratio lower than 0.5, while 2,880 objects show a flux ratio higher than 1.4.\n\nFigure~\\ref{SII_electron} shows the frequency distribution of the inferred gas density for objects whose \n[S~{\\sc ii}]$\\lambda$6717\/$\\lambda$6731 ratio is within the range 0.5--1.4; i.e., 41 BPT-valley AGNs \n(showing a broad H$\\alpha$ component and\/or He~{\\sc ii} emission), 59 BPT-valley objects \n(including objects without any AGN signatures), and 11,360 Seyfert galaxies. \nHere we show the histograms for both BPT-valley AGNs and BPT-valley objects, because some of the \nBPT-valley objects without any AGN signatures could also be AGNs (see Section 3.3). \nThe median densities of the BPT-valley AGNs, BPT-valley objects, and Seyfert galaxies are \n210 cm$^{-3}$, 210 cm$^{-3}$, and 270 cm$^{-3}$, respectively. \nIn order to investigate whether the frequency distribution of the gas density is statistically different \namong the samples, we apply the Kolmogorov-Smirnov (K-S) statistical test with the null hypothesis that \nthe gas-density distributions of two classes of objects come from \nthe same underlying population. \nThe derived K-S probability for the BPT-valley AGNs and Seyferts is 0.207, while that for \nthe BPT-valley objects and Seyferts is 0.146. 
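\n\nFor reference, the same density estimate can be reproduced outside IRAF, e.g. with the PyNeb package, and the K-S comparison with SciPy (a minimal Python sketch; the array names are placeholders, and PyNeb's atomic data may differ slightly from those used by {\\tt temden}):\n\\begin{verbatim}\nimport pyneb as pn\nfrom scipy.stats import ks_2samp\n\nS2 = pn.Atom('S', 2)\n\ndef ne_from_sii(ratio, tem=1.0e4):\n    # electron density from [S II]6717\/6731, for 0.5 < ratio < 1.4\n    return S2.getTemDen(int_ratio=ratio, tem=tem, wave1=6717, wave2=6731)\n\nprint(ne_from_sii(1.10))        # a few x 10^2 cm^-3\n\n# K-S test between two density samples (placeholder arrays):\n# stat, prob = ks_2samp(ne_bptvalley, ne_seyfert)\n\\end{verbatim}\n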
\nThese K-S results strongly suggest that the BPT-valley sample is not characterized by a higher gas density \nthan the Seyfert sample.\n\n\\begin{figure}[htbp]\n \\centering\n \\includegraphics[width=8.5cm]{figure_10.eps}\n \\caption{Histograms of the electron density of the BPT-valley AGNs (filled green), \n BPT-valley objects (open red), and Seyfert galaxies (open gray), normalized by their peak count.\n Dashed lines denote the range of the electron density measurable through the [S~{\\sc ii}] doublet ratio.} \n \\label{SII_electron} \n\\end{figure}\n\n\\subsection{Ionization parameter}\nThe ionization parameter is the ratio of the number density of hydrogen-ionizing photons \nto that of hydrogen atoms. \nTo investigate the ionization parameter, the [O~{\\sc iii}]$\\lambda$5007\/[O~{\\sc ii}]$\\lambda$3727 \nflux ratio is a useful indicator, because this ratio is not significantly affected by the chemical properties of \nthe gas in either AGNs or star-forming galaxies \n\\citep[see, e.g.,][]{1997A&A...323...31K, 2002ApJ...567...73N, 2014MNRAS.442..900N}.\nNote that this flux ratio is sensitive also to the gas density if the density is higher than \nthe critical density of [O~{\\sc ii}] ($\\sim$$10^{3.5}$ cm$^{-3}$), \nbut the typical density of NLRs inferred from the [S~{\\sc ii}] doublet ratio is much lower than \nthis, as described in Section 4.1.\nThough dust reddening was not corrected for in the BPT diagram, owing to the small wavelength \nseparation of the emission-line pairs used there (Section 2), we must correct for the reddening \neffect to investigate the [O~{\\sc iii}]$\\lambda5007$\/[O~{\\sc ii}]$\\lambda3727$ flux ratio. \nFor this correction, we assume $R_V = A_V\/E(B-V) = 3.1$ and an intrinsic flux ratio \nof H$\\alpha$$\\lambda6563$\/H$\\beta$$\\lambda4861$ = 3.1, and adopt the reddening curve \nof \\cite{1989ApJ...345..245C}.\n\nFigure~\\ref{OIII_OII} shows the histogram of the [O~{\\sc iii}]$\\lambda$5007\/\n[O~{\\sc ii}]$\\lambda$3727 line \nratio of the BPT-valley AGNs, BPT-valley objects, and Seyferts, with S\/N([O~{\\sc ii}]$\\lambda$3727) $>$ 3.\nHere it should be noted that the BPT-valley objects show \nlog([O~{\\sc iii}]$\\lambda5007$\/H$\\beta$$\\lambda4861$) $>$ 0.5 by definition, \nwhile the Seyfert galaxies could have much lower [O~{\\sc iii}]$\\lambda5007$\/H$\\beta$$\\lambda4861$ flux ratios, \ndown to $\\sim -0.2$. \nThis may introduce a selection effect in the sense that strong [O~{\\sc iii}] emitters could be selectively \nincluded in the BPT-valley sample. \nTherefore, to reduce this selection effect, only objects with \nlog([O~{\\sc iii}]$\\lambda5007$\/H$\\beta$$\\lambda4861$) $>$ 0.5 are examined for assessing \nthe ionization parameter. \nAfter adopting this additional criterion, the numbers of the BPT-valley AGNs, BPT-valley objects, and \nSeyferts examined in Figure~\\ref{OIII_OII} are 42, 69, and 8,500, respectively. \nThis figure shows that the BPT-valley samples seem to show systematically higher \n[O~{\\sc iii}]$\\lambda5007$\/[O~{\\sc ii}]$\\lambda3727$ flux ratios than the Seyfert sample. \nThe median values of the logarithmic [O~{\\sc iii}]$\\lambda5007$\/[O~{\\sc ii}]$\\lambda3727$ flux ratios of \nthe BPT-valley AGNs, \nBPT-valley objects, and Seyferts are 0.67, 0.65, and 0.46, respectively. 
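\n\nThe reddening correction applied here can be written compactly as follows (a minimal Python sketch; the extinction-curve coefficients $k({\\rm H}\\beta) \\approx 3.61$, $k({\\rm H}\\alpha) \\approx 2.53$, $k(5007) \\approx 3.47$ and $k(3727) \\approx 4.75$ are approximate values for the \\cite{1989ApJ...345..245C} curve with $R_V = 3.1$):\n\\begin{verbatim}\nimport numpy as np\n\nK_HB, K_HA = 3.61, 2.53         # approx. k(lambda) at Hb and Ha\nK_O3, K_O2 = 3.47, 4.75         # approx. k(lambda) at 5007 A and 3727 A\n\ndef ebv_from_balmer(ha_hb_obs, intrinsic=3.1):\n    # colour excess from the observed Balmer decrement\n    return 2.5 \/ (K_HB - K_HA) * np.log10(ha_hb_obs \/ intrinsic)\n\ndef deredden_o3o2(o3_o2_obs, ebv):\n    # intrinsic [O III]5007\/[O II]3727 from the observed ratio\n    return o3_o2_obs * 10**(0.4 * ebv * (K_O3 - K_O2))\n\nebv = ebv_from_balmer(4.0)      # example observed decrement\nprint(ebv, deredden_o3o2(3.0, ebv))\n\\end{verbatim}\n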
\nIn order to investigate whether or not the distributions of the \n[O~{\\sc iii}]$\\lambda$5007\/[O~{\\sc ii}]$\\lambda$3727 line ratio are \nstatistically different between the BPT-valley and Seyfert samples, we apply the K-S statistical test. \nThe K-S probability that the two samples are drawn from the same underlying \ndistribution is $3.925\\times 10^{-6}$ for the BPT-valley AGNs and Seyferts, \nand $1.803 \\times 10^{-5}$ for the BPT-valley objects and Seyferts. \nThese results suggest that the BPT-valley samples have statistically higher \n[O~{\\sc iii}]$\\lambda5007$\/[O~{\\sc ii}]$\\lambda3727$ flux ratios, i.e., higher ionization parameters, \nthan the Seyfert sample. \nNote that it is well known that low-metallicity galaxies are generally characterized by a relatively \nhigh ionization parameter, at least for star-forming galaxies \\citep[e.g.,][]{2006A&A...459...85N}. \nIt may be interesting that the BPT-valley objects show a clear edge at the lower side of the \n[O~{\\sc iii}]$\\lambda5007$\/[O~{\\sc ii}]$\\lambda3727$ distribution in Figure~\\ref{OIII_OII}.\nHowever, this feature is probably not statistically significant, because the number of BPT-valley objects \nis not large enough to probe the tail of the frequency distribution of the \n[O~{\\sc iii}]$\\lambda5007$\/[O~{\\sc ii}]$\\lambda3727$ flux ratio.\n\nIn the next subsection, we will examine whether or not this difference in the ionization parameter can be \nresponsible for the lower [N~{\\sc ii}]$\\lambda6584$\/H$\\alpha$$\\lambda6563$ ratio observed in the \nBPT-valley samples with respect to the Seyfert sample.\n\n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=8.5cm]{figure_11.eps}\n \\caption{Same as Figure~\\ref{SII_electron} but for the [O~{\\sc iii}]$\\lambda$5007\/[O~{\\sc ii}]$\\lambda$3727 \n flux ratio.}\n \\label{OIII_OII} \n\\end{figure}\n\n\\subsection{Model calculations}\nAs shown in Section 4.2, the ionization parameter of the BPT-valley sample is higher than that of \nthe Seyfert sample. \nSince it is important to determine whether the BPT-valley AGNs are characterized by a low metallicity or \nmerely by a high ionization parameter, we perform photoionization model calculations.\n\nWe perform photoionization model calculations to simulate the NLRs of AGNs, \nusing the code CLOUDY version 13.03 \\citep{1998PASP..110..761F}. \nThe main parameters for the CLOUDY calculations are as follows \n\\citep[see][for more details]{2001ApJ...546..744N}:\n\\begin{enumerate}\n \\item The hydrogen density of the cloud ($n_{\\rm H}$).\n \\item The ionization parameter ($U$).\n \\item The chemical composition of the gas.\n \\item The shape of the input SED.\n\\end{enumerate}\nWe calculate photoionization models covering the following ranges of parameters:\n$10^1\\ {\\rm cm^{-3}} \\leq n_{\\rm H} \\leq 10^6\\ {\\rm cm^{-3}}$ and\n$10^{-4} \\leq U \\leq 10^{-1}$. \nWe set the gas-phase elemental abundance ratios to be the solar ones. \nThe adopted solar abundances relative to hydrogen are taken from \\cite{1989AIPC..183....1G} \nwith extensions by \\citet{1993oee..conf...15G}.\nThe adopted metallicity (i.e., the solar one) is not typical for usual Seyfert galaxies \n(whose NLR metallicity is generally higher than the solar metallicity), nor possibly for BPT-valley AGNs \n(which could have sub-solar metallicity). 
\nHowever, as described below, it is useful to fix the metallicity to examine whether the ionization \nparameter alone can account for the difference in the emission-line flux ratios between \nBPT-valley objects and Seyferts. \nFor the input SED, we adopt the following form:\n\\begin{eqnarray}\nf_{\\nu}={\\nu}^{\\rm \\alpha_{UV}} \\exp \\left( -\\frac{h\\nu}{kT_{\\rm BB}}\\right) \\exp \\left( -\\frac{kT_{\\rm IR}}{h\\nu}\\right) \n+ a{\\nu}^{\\alpha_{\\rm X}}\n\\end{eqnarray}\nas a typical spectrum of AGNs (see \\citealt{1996hbic.book.....F}). \n$kT_{\\rm IR}$ is the infrared cutoff of the big-blue bump, and we adopt \n$kT_{\\rm IR}=0.01$ ryd \\citep[see][]{1996hbic.book.....F}. \n$\\alpha_{\\rm UV}$ is the slope of the low-energy side of the big-blue bump.\nWe adopt $\\alpha_{\\rm UV} = 0.5$, which is typical for AGNs \n\\citep{1996hbic.book.....F}. \n$\\alpha_{\\rm ox}$ is the UV--to--X-ray spectral slope, which determines the parameter $a$ in equation (6).\nWe adopt $\\alpha_{\\rm ox}=-1.35$, which is the average value of nearby Seyfert 1 galaxies\n\\citep[see][]{1993A&A...274..105W}.\n$\\alpha_{\\rm x}$ is the X-ray slope, and we adopt $\\alpha_{\\rm x}=-0.85$ (see \\citealt{2001ApJ...546..744N}).\n$T_{\\rm BB}$ is the temperature characterizing the shape of the big-blue bump, and we adopt 490,000 K \n(see \\citealt{2001ApJ...546..744N}).\nThe calculations end at the depth where the temperature falls to 3,000 K, \nbelow which gas does not contribute significantly to the flux of optical emission lines.\n\nFigure~\\ref{cloudy_1} shows the results of the photoionization model calculations, overlaid \non the BPT diagram. \nThough the density effect is not significant over most of the range \n$10^1$ cm$^{-3}$ $<$ $n_{\\rm H}$ $<$ $10^5$ cm$^{-3}$, \nthe effect of collisional de-excitation starts to appear at $n_{\\rm H}$ $>$ $10^4$ cm$^{-3}$. \nHowever, this figure suggests that the difference in the [N~{\\sc ii}]$\\lambda$6584\/H$\\alpha$$\\lambda$6563 \nflux ratio is more easily explained by the difference in the ionization parameter than by the difference \nin the gas density. \nMore specifically, a higher ionization parameter by 0.5--1 dex in the BPT-valley objects with respect to \nthe Seyfert sample is required to explain the lower [N~{\\sc ii}]$\\lambda$6584\/H$\\alpha$$\\lambda$6563 \nflux ratio of the BPT-valley objects. \n\nTo examine whether the BPT-valley objects have a higher ionization parameter than the Seyfert sample, \nwe investigate another diagnostic diagram that consists of the emission-line flux ratios of \n[O~{\\sc iii}]$\\lambda$5007\/[O~{\\sc ii}]$\\lambda$3727 and [O~{\\sc i}]$\\lambda$6300\/[O~{\\sc iii}]$\\lambda$5007 \n(Figure~\\ref{cloudy_2}). \nThis diagram is useful to examine the effect of the ionization parameter without suffering from \nthe metallicity effect, because only oxygen lines are used and the ratios are thus less sensitive to the metallicity. \nFigure~\\ref{cloudy_2} shows that the BPT-valley sample and Seyfert sample have a similar gas density, \nwhich is consistent with our analysis presented in Section 4.1. \nMore interestingly, Figure~\\ref{cloudy_2} shows that the BPT-valley sample shows a systematically \nhigher ionization parameter than the Seyfert sample, but the inferred difference in the ionization \nparameters is less than 0.5 dex. 
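\n\nFor concreteness, the input SED defined above can be evaluated numerically as follows (a minimal Python sketch; we assume that the normalization $a$ is fixed by $\\alpha_{\\rm ox}$ between 2500 \\AA\\ and 2 keV, which is the standard convention, although we have not verified that this matches the CLOUDY implementation in every detail):\n\\begin{verbatim}\nimport numpy as np\n\nRYD = 3.2898e15                  # 1 Ryd in Hz\nH_OVER_K = 4.7992e-11            # h\/k_B in K s\nA_UV, A_X, A_OX = 0.5, -0.85, -1.35\nT_BB, KT_IR = 4.9e5, 0.01        # K, Ryd\n\ndef bbb(nu):                     # big-blue-bump term of equation (6)\n    cut_hi = np.exp(-H_OVER_K * nu \/ T_BB)\n    cut_lo = np.exp(-KT_IR * RYD \/ nu)\n    return nu**A_UV * cut_hi * cut_lo\n\nnu_uv = 3.0e18 \/ 2500.0          # Hz at 2500 A\nnu_x = 2.0e3 \/ 4.1357e-15        # Hz at 2 keV (E\/h, h in eV s)\na = bbb(nu_uv) * (nu_x \/ nu_uv)**A_OX \/ nu_x**A_X\n\ndef f_nu(nu):\n    return bbb(nu) + a * nu**A_X\n\\end{verbatim}\n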
\nThe smallness of this inferred difference strongly suggests that the lower \n[N~{\\sc ii}]$\\lambda$6584\/H$\\alpha$$\\lambda$6563 flux ratio \nin the BPT-valley sample with respect to the Seyfert sample is not explained by the ionization parameter \n(nor by the gas density, as described in Section 4.1). \nTherefore we conclude that the BPT-valley AGNs are characterized by a systematically lower metallicity \nthan the Seyfert sample, as originally proposed by \\citet{2006MNRAS.371.1559G}.\n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=8.5cm]{figure_12s.eps}\n \\caption{Same as Figure~\\ref{BPT_classification} (without the inset panel in \n Figure~\\ref{BPT_classification}) but with grids of photoionization models overlaid. \n Different colors of lines denote different parameters adopted in the calculations, \n as shown in the inset panels.\n }\n \\label{cloudy_1} \n\\end{figure}\n\n\n\\begin{figure}[h]\n\\centering\n \\includegraphics[width=8.5cm]{figure_13.eps}\n \\caption{Diagnostic diagram of [O~{\\sc iii}]$\\lambda$5007\/[O~{\\sc ii}]$\\lambda$3727 versus \n [O~{\\sc i}]$\\lambda$6300\/[O~{\\sc iii}]$\\lambda$5007. \n The symbols and lines are the same as in Figure~\\ref{cloudy_1}. \n Note that only the BPT valley objects with S\/N $>$ 5 in the [O~{\\sc ii}]$\\lambda$3727, \n [O~{\\sc iii}]$\\lambda$5007 and [O~{\\sc i}]$\\lambda$6300 lines are plotted.}\n \\label{cloudy_2} \n\\end{figure}\n\n\\section{Discussion}\nAs mentioned in Section 1, low-metallicity AGNs are interesting for studying the early phase of \nAGN evolution.\nHowever, low-metallicity AGNs are very rare, and consequently little has been reported on \nthe physical properties of low-metallicity AGNs. \nIn this section, we present some basic properties of the BPT-valley objects, which are expected \nto be low-metallicity AGNs.\n\n\\subsection{Stellar mass}\nNaively, the stellar mass of low-metallicity AGNs is expected to be \nrelatively low, as suggested by the mass-metallicity relation seen in star-forming galaxies \n\\citep[e.g.,][]{2004ApJ...613..898T, 2006ApJ...647..970L}.\nAccordingly, \\citet[][]{2006MNRAS.371.1559G} introduced a mass criterion \n(i.e., $M_{*} < 10^{10}\\ {\\rm M_{\\odot}}$) to select low-metallicity AGNs. \nHowever, it is not clear whether low-metallicity AGNs should always be found in \na sample of AGNs with a low-mass host galaxy. \nTherefore, in this paper, we select low-metallicity AGNs without any stellar-mass cut and \ninvestigate the mass distribution of the host galaxies of low-metallicity AGNs. \nHere the stellar masses have been measured and are given in the MPA-JHU DR7 catalog \n\\cite[see also][]{2003MNRAS.341...33K}.\nAmong the 43 BPT-valley AGNs and 70 BPT-valley objects, the host mass is \navailable for 39 and 64 objects, respectively. \nFigure~\\ref{mass} shows the histograms of the stellar masses of the 39 BPT-valley AGNs, \n64 BPT-valley objects and 13,662 Seyferts.\nThe median stellar masses of the BPT-valley AGNs, BPT-valley objects \nand Seyferts are $10^{10.15}\\ {\\rm M_{\\odot}}$, $10^{10.07}\\ {\\rm M_{\\odot}}$ and \n$10^{10.77}\\ {\\rm M_{\\odot}}$, respectively. \nThis result clearly shows that the stellar mass of the BPT-valley AGNs is systematically lower \nthan that of Seyferts. \nHowever, interestingly, a substantial fraction of the BPT-valley AGNs (23 among 39 objects) \nare actually hosted by galaxies with $M_{*} > 10^{10}\\ {\\rm M_{\\odot}}$, \nsuggesting that low-metallicity AGNs are not necessarily hosted by low-mass galaxies. 
\nNote that such low-metallicity AGNs with a relatively massive host galaxy cannot be selected \nby the criteria of \\citet[][]{2006MNRAS.371.1559G}, due to the mass criterion of \n$M_{*} < 10^{10}\\ {\\rm M_{\\odot}}$. \nSuch low-metallicity AGNs hosted by a relatively massive host galaxy may be realized by \ntaking into account the inflow of low-metallicity gas from the surrounding environment \n\\citep[e.g.,][]{2011A&A...535A..72H}. \n\n\n\\begin{figure}[h]\n\\centering\n \\includegraphics[width=8.5cm]{figure_14.eps}\n \\caption{\n Same as Figure~\\ref{SII_electron} but for the stellar mass.\n }\n \\label{mass} \n\\end{figure}\n\n\n\n\n\\subsection{Electron temperature}\nConsidering the effect of metal cooling, \nlow-metallicity AGNs are expected to be characterized by a higher electron temperature.\nHence we investigate the [O~{\\sc iii}]$\\lambda \\lambda(4959+5007)$\/[O~{\\sc iii}]$\\lambda4363$ line \nratio, which is very sensitive to the gas temperature.\nHere it should be mentioned that the [O~{\\sc iii}]$\\lambda \\lambda(4959+5007)$\/[O~{\\sc iii}]$\\lambda4363$ line \nratio also depends on the electron density \\citep[see, e.g.,][]{2001ApJ...549..155N}. \nTherefore we investigate the [O~{\\sc iii}]$\\lambda \\lambda(4959+5007)$\/[O~{\\sc iii}]$\\lambda4363$ and \n[S~{\\sc ii}]$\\lambda6717$\/[S~{\\sc ii}]$\\lambda6731$ line ratios simultaneously in Figure~\\ref{temperature_density}. \nThis figure shows the emission-line flux ratios of BPT-valley objects and Seyferts, but only for \nobjects with a significant detection of the [O~{\\sc iii}]$\\lambda4363$ line (S\/N $> 3$).\nAs described in Section 4.2, only objects with log ([O~{\\sc iii}]$\\lambda$5007\/H$\\beta$) $> 0.5$ are used \n(this results in 9,043 Seyferts and 70 BPT-valley objects).\nNote that the [O~{\\sc iii}]$\\lambda \\lambda(4959+5007)$\/[O~{\\sc iii}]$\\lambda4363$ line ratio is corrected \nfor the reddening effect in the same way as in Section 4.2.\nThe median values of log ([S~{\\sc ii}]$\\lambda6717$\/[S~{\\sc ii}]$\\lambda6731$) of the BPT-valley AGNs, \nBPT-valley objects and Seyferts with an [O~{\\sc iii}]$\\lambda4363$ detection are 0.088, 0.088 and 0.055, \nrespectively.\nTherefore the electron density of the BPT-valley sample is slightly lower than that of the Seyferts, \nas already mentioned in Section 4.1. \nThe medians of log ([O~{\\sc iii}]$\\lambda \\lambda(4959+5007)$\/[O~{\\sc iii}]$\\lambda4363$) of \nthe BPT-valley AGNs, BPT-valley objects and Seyferts are 1.77, 1.77 and 1.79, respectively.\nThis result suggests that the electron temperature of the BPT-valley objects is not significantly higher \nthan that of the Seyferts. \nHowever, the fraction of objects showing a significant (S\/N $> 3$) [O~{\\sc iii}]$\\lambda$4363 emission \nis very different between the Seyferts and BPT-valley objects. \nMore specifically, 44 among the 70 BPT-valley objects ($\\sim 63\\ \\%$) show the [O~{\\sc iii}]$\\lambda$4363 \nemission while only 1,516 among the 9,043 Seyferts ($\\sim 17\\ \\%$) show the [O~{\\sc iii}]$\\lambda4363$ line. \nThis difference suggests that the gas temperature of the NLR in BPT-valley objects is generally so high that \nthe [O~{\\sc iii}]$\\lambda$4363 line is detected in most cases, \nwhile the typical gas temperature of the NLR in Seyferts may be lower than that in BPT-valley objects, \nso that only highly biased objects with a relatively high temperature in the Seyfert sample show the \n[O~{\\sc iii}]$\\lambda4363$ line. 
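\n\nThe implied temperatures can again be estimated with PyNeb (a minimal Python sketch; the adopted density of 210 cm$^{-3}$ is the median value from Section 4.1, and the example ratios are the median values quoted above):\n\\begin{verbatim}\nimport pyneb as pn\n\nO3 = pn.Atom('O', 3)\n\ndef te_from_oiii(ratio, den=210.0):\n    # T_e from [O III] (4959+5007)\/4363 at fixed density (cm^-3)\n    return O3.getTemDen(int_ratio=ratio, den=den,\n                        to_eval='(L(4959) + L(5007)) \/ L(4363)')\n\nfor logr in (1.77, 1.79):        # BPT valley vs. Seyferts\n    print(logr, te_from_oiii(10**logr))\n\\end{verbatim}\n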
\nThis difference in detection fractions is consistent with our expectation that the BPT-valley objects are \nindeed characterized by a relatively high gas temperature, owing to their low gas metallicity. \n\n\n\\begin{figure}[h]\n\\centering\n \\includegraphics[width=8.5cm]{figure_15.eps}\n \\caption{\n Diagnostic diagram of [O~{\\sc iii}]$\\lambda$$\\lambda$(4959+5007)\/\n [O~{\\sc iii}]$\\lambda$4363 versus [S~{\\sc ii}]$\\lambda$6717\/[S~{\\sc ii}]$\\lambda$6731. \n The symbols are the same as in Figure~\\ref{BPT_classification}. \n Note that only the BPT valley objects with S\/N $>$ 3 in the [O~{\\sc iii}]$\\lambda$4363 line are plotted. \n }\n \\label{temperature_density} \n\\end{figure}\n\n\n\\section{Conclusions}\n\nIn this paper, we focus on low-metallicity AGNs ($Z_{\\rm NLR}$ $\\lesssim$ $1\\ Z_{\\odot}$), \nwhich are very rare but important since they are in the early phase of the galaxy-SMBH co-evolution.\nSpecifically, in this work it is examined whether the BPT-valley selection is an effective and reliable \nway to identify low-metallicity AGNs, as proposed by \\citet{2006MNRAS.371.1559G}. \nThe main results are as follows:\n\\begin{itemize}\n \\item We select a sample of 70 BPT-valley objects, which are expected to be low-metallicity AGNs, from \nthe 14,253 Seyfert galaxies in the MPA-JHU SDSS DR7 galaxy catalog.\n \\item Out of the 70 BPT-valley objects, 43 objects show clear evidence of an AGN, based on \n the detection of a broad H$\\alpha$ component and\/or He~{\\sc ii}$\\lambda$4686 emission.\n \\item The typical gas density of the BPT-valley sample ($\\sim$210 cm$^{-3}$) is not higher than that of \n the Seyfert sample ($\\sim$270 cm$^{-3}$), suggesting that the lower \n [N~{\\sc ii}]$\\lambda$6584\/H$\\alpha$$\\lambda$6563 ratio in the BPT-valley AGNs with respect to \n the Seyfert sample is not caused by the collisional de-excitation effect. \n \\item The higher [O~{\\sc iii}]$\\lambda$5007\/[O~{\\sc ii}]$\\lambda$3727 ratio in the BPT-valley sample \n ($\\sim$4.5) with respect to that in the Seyfert sample ($\\sim$2.9) suggests a typically higher \n ionization parameter of the BPT-valley sample; however, photoionization models suggest that \n the inferred difference in the ionization parameter between the BPT-valley sample and \n Seyfert sample is not enough to explain the observed lower \n [N~{\\sc ii}]$\\lambda$6584\/H$\\alpha$$\\lambda$6563 ratio of the BPT-valley sample. \n \\item The BPT-valley selection for identifying low-metallicity AGNs is thus confirmed to be a useful method; \n in our analysis, more than 60\\% of the BPT-valley sample are low-metallicity AGNs \n ($Z_{\\rm NLR}$ $\\lesssim$ $1\\ Z_{\\odot}$). \n\n\\end{itemize}\n\n\n\\acknowledgments\n\nWe would like to thank the anonymous referee for her\/his\ncareful reading of this paper and useful suggestions, and also\nMasaru Kajisawa and Kazuyuki Ogura for their useful comments.\nTN is financially supported by JSPS grants Nos. 25707010, 16H01101, and 16H03958. \nKM is also supported by JSPS grant No. 14J01811. \nFunding for the SDSS and SDSS-II has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, the U.S. Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, the Max Planck Society, and the Higher Education Funding Council for England. The SDSS Web Site is http:\/\/www.sdss.org\/.\nThe SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions. 
The Participating Institutions are the American Museum of Natural History, Astrophysical Institute Potsdam, University of Basel, University of Cambridge, Case Western Reserve University, University of Chicago, Drexel University, Fermilab, the Institute for Advanced Study, the Japan Participation Group, Johns Hopkins University, the Joint Institute for Nuclear Astrophysics, the Kavli Institute for Particle Astrophysics and Cosmology, the Korean Scientist Group, the Chinese Academy of Sciences (LAMOST), Los Alamos National Laboratory, the Max-Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA), New Mexico State University, Ohio State University, University of Pittsburgh, University of Portsmouth, Princeton University, the United States Naval Observatory, and the University of Washington.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{INTRODUCTION}\nEven though General Relativity (GR) is extensively tested in the weak-field regime, it is only recently that we have started constraining it in the strong field regime. Gravitational-wave observations \\cite{Abbott:2016blz,TheLIGOScientific:2017qsa,Abbott:2020niy} will soon number in the hundreds, providing us with enough data to accurately confront many of the proposed strong-gravity deviations from GR. Increased precision in observations will allow us to determine whether compact objects, which are associated with extremely large curvatures, have properties different from those predicted by GR. \n\nThe phenomenon of \\textit{spontaneous scalarization} provides perhaps the most promising framework in which we can investigate the manifestation of a strong gravity process that remains dormant in low curvature regimes. Spontaneous scalarization was initially proposed in the case of neutron stars by Damour and Esposito-Far\\`ese (DEF) \\cite{Damour:1992kf,Damour:1993hw}. According to it, a scalar field coupled to gravity in a suitable manner might acquire a non-trivial structure only in the strong field regime of neutron stars, while remaining trivial and undetected in the weak field regime. In the DEF model black holes do not exhibit scalarization unless it is induced by matter in their vicinity \\cite{Hawking:1972qk,Sotiriou:2011dz,Cardoso:2013opa,Cardoso:2013fwa,Palenzuela:2013hsa}. However, recently, a different class of models in which there is scalarization of both black holes and neutron stars has been receiving a lot of attention: scalar-Gauss-Bonnet theories (\\textit{e.g.} \\cite{Silva:2017uqg,Doneva:2017bvd,Doneva:2017duq}).\n\nScalarization of both black holes and neutron stars has been scrutinized in various works concerning many different modifications (bare mass, self-interactions, different field content, \\textit{etc.} \\cite{Ramazanoglu:2016kul,Blazquez-Salcedo:2018jnn,Macedo:2019sem,Herdeiro:2018wub,Ramazanoglu:2017xbl,Ramazanoglu:2018hwk}). Scalarization can be thought of as triggered by a curvature-induced tachyonic instability of the scalar field. In more recent works, it has been shown that this instability can be triggered by spin \\cite{Dima:2020yac} and lead to black holes that are scalarized only when rapidly rotating \\cite{Herdeiro:2020wei,Berti:2020kgk}. 
It should be noted that scalarization models differ from certain hairy black hole models ({\\em e.g.}~\\cite{Sotiriou:2013qea,Sotiriou:2014pfa,Antoniou:2017acq,Antoniou:2017hxj}) in that, in the latter all black holes carry a non-trivial scalar configuration, whereas in the former only black holes with certain mass or spin characteristics deviate from the Kerr metric.\n\nThe onset of the tachyonic instability that triggers scalarization is controlled by linear terms (although Ref.~\\cite{Doneva:2021tvn} also examined what happens if linear terms are absent from the potential) but eventually this instability is quenched by non-linearities, which control the end-state.\nIn \\cite{Andreou:2019ikc}, all terms that can affect the onset of the instability in the framework of Horndeski theory were listed. However, one of these terms, namely the coupling to the Ricci scalar, has not received much attention in many of the aforementioned works. This is mostly due to the fact that,\nin the black-hole scenario, the onset of scalarization is only controlled by the Gauss-Bonnet invariant, since the Ricci scalar evaluates to zero for GR black holes. Nonetheless, including the Ricci term does seem to provide us with several advantages. To begin with, as discussed in \\cite{Antoniou:2020nax}, the Ricci term is crucial if one wants to retrieve a late-time attractor to GR in a cosmological scenario. Additionally, it was shown in \\cite{Ventagli:2020rnx} that the Ricci term can help in suppressing the scalarization of neutron stars, which would otherwise tend to place significant constraints. Finally, Ref.~\\cite{Antoniou:2021zoy} showed that this term has very interesting effects on the properties of scalarized black holes. Even though the Ricci coupling does not affect the onset of black hole scalarization (being zero in a GR black hole background), it affects the properties of the scalarized solutions and, consequently, observables. For certain values of the Ricci coupling ---~which happen to be consistent with the ones associated with a late-time attractor behaviour~--- the presence of this operator is expected to render black holes radially stable, without the need to introduce self-interaction terms.\n\nFor the reasons presented above, it is of great interest to examine how the combination of Ricci and Gauss-Bonnet couplings affects neutron star properties. We present the analytic and numerical setup of our study in Sec.~\\ref{sec:setup}. The numerical results are presented in Sec.~\\ref{sec:results}. In Sec.~\\ref{Sec:parameterSpace}, we determine over which region of the parameter space scalarized solutions exist, for three different stellar scenarios. In Sec.~\\ref{Sec:betaNeg} and \\ref{Sec:betaPos}, we discuss the properties of the scalarized solutions, in particular their scalar charges and masses. Section \\ref{Sec:instabilityLines} investigates in more detail the solutions that always exist near the scalarization thresholds, while Sec.~\\ref{Sec:EffMass} explains how, already at the level of the GR solution, a given scalar profile may be favored. 
We conclude with a discussion in Sec.~\\ref{sec:discussion}.\n\n\n\n\n\n\n\\section{SETUP}\n\\label{sec:setup}\n\nIt has been shown in \\cite{Andreou:2019ikc} that, in the framework of Horndeski theories, the minimal action containing all the terms that can affect the onset of a tachyonic instability is\n\\begin{equation}\n\\begin{split}\\label{eq:ActionCaseI}\nS&=\\int\\mathrm{d}^4x\\sqrt{-g}\\bigg\\{\\dfrac{R}{2\\kappa}+X+ \\gamma\\, G^{\\mu\\nu}\\nabla_\\mu\\phi\\,\\nabla_\\nu\\phi\n\\\\\n&\\quad -\\left(m_\\phi^2+\\dfrac{\\beta}{2} R-\\alpha\\mathscr{G}\\right)\\dfrac{\\phi^2}{2}\\bigg\\}\n +S_\\mathrm{M},\n\\end{split}\n\\end{equation}\nwhere $X=-\\nabla_\\mu\\phi\\nabla^\\mu\\phi\/2$, $\\kappa=8\\pi G\/c^4$ and $\\mathscr{G}$ is the Gauss-Bonnet invariant\n\\begin{equation}\n\\mathscr{G}=R^2-4R_{\\mu\\nu}R^{\\mu\\nu}+R_{\\mu\\nu\\rho\\sigma}R^{\\mu\\nu\\rho\\sigma}.\n\\end{equation}\n $S_\\text{M}$ is the matter action, where matter is assumed to couple minimally to the metric; in other words, we are working in the so-called Jordan frame. $m_\\phi$ is the bare mass of the scalar field, and $\\alpha$, $\\beta$ and $\\gamma$ parametrize the deviations from GR. Note that $\\beta$ is dimensionless, whereas $\\gamma$ and $\\alpha$ have the dimension of a length squared. $\\beta$ is defined such that it matches the notation of the (linearized) DEF model (see~\\cite{Andreou:2019ikc} for a detailed discussion on the relation to the original DEF model). For the purpose of this paper we set $\\gamma=0$ and $m_\\phi=0$. If a bare mass is included it needs to be tuned to rather small values else it can prevent scalarization altogether \\cite{Ramazanoglu:2016kul,Ventagli:2020rnx}, while $\\gamma$ has a very limited effect on the threshold of tachyonic scalarization \\cite{Ventagli:2020rnx}. Note that by setting these two parameters to zero, we retrieve the action studied in \\cite{Antoniou:2021zoy} in the context of spontaneously scalarized black holes. The modified Einstein equation is\n\n \\begin{equation}\\label{eq:grav_eq}\n G_{\\mu\\nu}=\\kappa T^\\phi_{\\mu\\nu}+\\kappa T^\\text{M}_{\\mu\\nu},\n \\end{equation}\n\nwhere \n\\begin{equation}\n\\begin{split}\\label{eq:StressEnScal}\nT^{\\phi}_{\\mu\\nu} & =\\nabla_\\mu\\nabla_\\nu\\phi-\\frac{1}{2}g_{\\mu\\nu}\\nabla_\\lambda\\phi\\nabla^\\lambda\\phi\\\\\n& +\\frac{1}{2}\\beta\\left(G_{\\mu\\nu}-\\nabla_\\mu\\nabla_\\nu+g_{\\mu\\nu}\\nabla_\\lambda\\nabla^\\lambda \\right)\\phi^2\\\\\n& +2\\alpha\\big[R\\big(\\nabla_\\mu\\nabla_\\nu-g_{\\mu\\nu}\\nabla_\\lambda\\nabla^\\lambda\\big)\\phi^2\\\\\n& +2\\big(R_{\\mu\\nu}\\nabla_\\lambda\\nabla^\\lambda-2R_{(\\mu\\lambda}\\nabla_{\\nu)}\\nabla^\\lambda\\\\\n& +4g_{\\mu\\nu}R_{\\lambda\\sigma}\\nabla^\\lambda\\nabla^\\sigma\\big)\\phi^2-2R_{\\mu\\lambda\\nu\\sigma}\\nabla^\\lambda\\nabla^\\sigma\\phi^2\\big]\n\\end{split}\n\\end{equation}\ncomes from the variation of the $\\phi$-dependent part of the action with respect to the metric, and $T^\\mathrm{M}_{\\mu\\nu}=-(2\/\\sqrt{-g})(\\delta S_\\mathrm{M}\/\\delta g^{\\mu\\nu})$ is the matter stress-energy tensor. 
The scalar field equation reads\n\\begin{equation}\\label{eq:scal_eq}\n \\DAlembert \\phi =m_\\text{eff}^2\\phi,\n\\end{equation}\nwhere the effective scalar mass is given by\n\\begin{equation}\\label{eq:eff_masss}\n m_\\text{eff}^2=\\frac{\\beta}{2}R-\\alpha \\mathscr{G}.\n\\end{equation}\nA configuration with a sufficiently\\footnote{Any negative effective mass squared will cause an instability in Minkowski spacetime, but a curved spacetime is destabilized only if a certain threshold is exceeded.} negative effective mass squared will suffer from a tachyonic instability, triggering spontaneous scalarization. For the purpose of this paper, we restrict our analysis to static and spherically symmetric spacetimes:\n\\begin{equation}\\label{eq:metric}\n\\text{d}s^2= - e^{\\Gamma(r)}\\text{d}t^2+e^{\\Lambda(r)}\\text{d}r^2+r^2 \\text{d}\\Omega^2,\n\\end{equation}\nand we assume matter to be described by a perfect fluid with $T^\\text{M}_{\\mu\\nu}=(\\epsilon+p)u_\\mu u_\\nu+p\\,g_{\\mu\\nu}$, where $\\epsilon$, $p$ and $u_\\mu$ are respectively the energy density, the pressure and the 4-velocity of the fluid. The pressure is directly related to the energy density through the equation of state. The field equations then take the form of coupled ordinary differential equations for $\\Gamma$, $\\Lambda$, $\\epsilon$ and $\\phi$, see Appendix. We can solve algebraically the $(rr)$ component of the modified Einstein equation for $e^\\Lambda$. The result is\n\\begin{equation}\\label{eq:ExpLambda}\ne^\\Lambda=\\frac{-B+\\delta\\sqrt{B^2-4\\,A\\,C}}{4 A},\\,\\,\\delta=\\pm 1\n\\end{equation}\nwhere\n\\begin{equation}\n\\begin{split}\n& A=1+\\kappa\\,r^2p-\\frac{1}{2}\\,\\beta\\kappa\\phi^2,\\\\\n& B=-2+\\beta\\kappa\\,\\phi^2-2\\,r\\Gamma'+r\\beta\\kappa\\,\\phi^2\\Gamma'+4\\,r\\beta\\kappa\\,\\phi\\phi'\\\\\n&\\qquad -8\\,\\alpha\\kappa\\,\\phi\\Gamma'\\phi'+r^2\\beta\\kappa\\phi\\Gamma'\\phi'+\\kappa\\,r^2\\phi'^2,\\\\\n& C=48\\,\\alpha\\kappa\\,\\phi\\,\\Gamma'\\phi'.\n\\end{split}\n\\end{equation}\nFor the $\\delta=-1$ branch of solutions we do not retrieve GR in the limit $\\alpha\\to 0$ and $\\beta \\to 0$, henceforth we will assume $\\delta=1$. By substituting Eq.~\\eqref{eq:ExpLambda} in the remaining differential equations, we can reduce our problem to an integration in three variables: $\\Gamma$, $\\phi$ and $\\epsilon$. \n\n\\subsection{Expansion for $r\\to 0$}\\label{Sec:Exp0}\nClose to the center of the star, we can perform an analytic expansion of the form \n\\begin{equation}\\label{eq:smallr}\nf(r)=\\sum_{n=0}^\\infty f_n r^n\n\\end{equation}\nfor the functions $\\Gamma$, $\\Lambda$, $\\epsilon$, $p$ and $\\phi$.\nPlugging these expansions in the field equations, we can solve order by order to determine the boundary conditions at the origin. At this point, there are essentially three quantities that one can freely fix: the central density $\\epsilon_0$, the value of the scalar field at the center $\\phi_0$, and the value of the time component of the metric at the center, determined by $\\Gamma_0$. On the other hand, $\\Lambda_0$ has to vanish in order to avoid a conical singularity at the center, while $p_0$ is directly related to $\\epsilon_0$ by the equation of state. All higher order quantities $\\{\\Gamma_i, ...,\\phi_i\\}$, $i\\geq1$ can be determined in terms of the three quantities $\\{\\epsilon_0,\\Gamma_0,\\phi_0\\}$. 
We will require that spacetime is asymptotically flat, with a trivial scalar field at spatial infinity, which fixes uniquely $\\Gamma_0$ and $\\phi_0$, or rather restricts $\\phi_0$ to a discrete set of values, each corresponding to a different mode; technically, these values are found through a numerical shooting method. Therefore, for given parameters $\\alpha$ and $\\beta$, a solution is eventually fully determined by the central density $\\epsilon_0$. Different choices of $\\epsilon_0$ will translate into different masses. \n\nWe must underline the difference with the black hole case, studied in \\cite{Antoniou:2021zoy}. For black holes, the equations are scale invariant up to a redefinition of the couplings. Practically, this means that it is enough to explore the full space of parameters $\\alpha$ and $\\beta$ for a \\textit{fixed} mass. One can then deduce all solutions, of arbitrary mass, by an appropriate rescaling. For neutron stars this scaling symmetry is broken by the equation of state that relates $p$ and $\\epsilon$. Therefore, one \\textit{a priori} has to explore a 3-dimensional space of parameters ($\\epsilon_0$, $\\alpha$ and $\\beta$) in the case of neutron stars. In order to keep this exploration tractable, as it was already done in \\cite{Ventagli:2020rnx}, we will focus our study on a selection of central densities and equations of state. We pick these in order to cover very diverse solutions, typically corresponding to the lightest\/heaviest observed stars in general relativity. We then explore a wide range of the $(\\alpha,\\beta)$ parameter space for these fixed densities and equations of state.\n\nTo complete this section, let us note that solving order by order the field equations for the higher order coefficients in the expansion \\eqref{eq:smallr} does not always yield solutions. All first order coefficients in this expansion have to vanish; one can express $\\Gamma_2$, $\\epsilon_2$, $p_2$ and $\\phi_2$ in terms of $\\Lambda_2$; however, $\\Lambda_2$ itself is determined by the following equation:\n\\begin{widetext}\n\\begin{equation}\\label{eq:Lambda2}\n\\begin{split}\n& \\Lambda_2^4(512\\,\\alpha^3\\kappa\\,\\phi_0^2-256\\,\\alpha^3\\beta\\kappa^2\\phi_0^4)+\\Lambda_2^3(512\\,p_0\\alpha^3\\kappa^2\\phi_0^2-64\\,\\alpha^2\\beta\\kappa\\phi_0^2+32\\,\\alpha^2\\beta^2\\kappa^2\\phi_0^4)+\\Lambda_2^2(12\\,\\alpha\\beta^3\\kappa^2\\phi_0^4-24\\,\\alpha\\beta^2\\kappa\\phi_0^2\\\\\n& -192\\,p_0\\alpha^2\\beta\\kappa^2\\phi_0^2)+\\Lambda_2\\left(2\\,\\beta-\\frac{16}{3}\\alpha\\epsilon_0\\kappa-2\\,\\beta^2\\kappa\\,\\phi_0^2+3\\,\\beta^3\\kappa\\,\\phi_0^2+24\\,p_0\\alpha\\beta^2\\kappa^2\\phi_0^2+\\frac{8}{3}\\alpha\\beta\\epsilon_0\\kappa^2\\phi_0^2+\\frac{16}{3}\\alpha\\beta^2\\epsilon_0\\kappa^2\\phi_0^2\\right.\\\\\n&\\left.+\\frac{1}{2}\\beta^3\\kappa^2\\phi_0^4-\\frac{3}{2}\\beta^4\\kappa^2\\phi_0^4\\right)-\\frac{2}{3}\\beta\\epsilon_0\\kappa+\\frac{16}{9}\\alpha\\epsilon_0^2\\kappa^2-p_0\\beta^3\\kappa^2\\phi_0^2+\\frac{1}{3}\\beta^2\\epsilon_0\\kappa^2\\phi_0^2-\\frac{2}{3}\\beta^3\\epsilon_0\\kappa^2\\phi_0^2=0.\n\\end{split}\n\\end{equation}\n\\end{widetext}\nEquation \\eqref{eq:Lambda2} is a fourth order equation in $\\Lambda_2$. Such an equation does not necessarily possess real solutions. Therefore, for any choice of parameters $(\\alpha,\\beta)$ and initial values $(\\epsilon_0,\\phi_0)$, we need to check that a real solution to Eq.~\\eqref{eq:Lambda2} exists. 
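\nNumerically, this check amounts to a quartic root search; a minimal sketch in Python (the coefficient list is assumed to be assembled from Eq.~\\eqref{eq:Lambda2}; as a heuristic GR reference for selecting the connected root we use the small-$r$ TOV value $\\kappa\\epsilon_0\/3$, since Eq.~\\eqref{eq:Lambda2} itself degenerates in the exact GR limit):\n\\begin{verbatim}\nimport numpy as np\n\ndef lambda2_root(coeffs, eps0, kappa):\n    # coeffs = [c4, c3, c2, c1, c0], built from Eq. (Lambda2)\n    roots = np.roots(coeffs)\n    ok = [x.real for x in roots\n          if abs(x.imag) < 1e-12 and x.real > 0]\n    if not ok:\n        return None              # no acceptable Lambda_2 here\n    gr = kappa * eps0 \/ 3.0      # small-r TOV (GR) reference\n    return min(ok, key=lambda x: abs(x - gr))\n\\end{verbatim}\n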
In particular, we need to check this when implementing the shooting method that will allow us to find the values of $\\phi_0$ such that the scalar field is trivial at spatial infinity. Such values might actually not exist in the domain where Eq.~\\eqref{eq:Lambda2} possesses real solutions.\nIn practice, we make sure that each choice of parameters that we consider guarantees not only that Eq.~\\eqref{eq:Lambda2} has a positive\\footnote{An acceptable solution to Eq.~\\eqref{eq:Lambda2} must be positive, otherwise $g_{rr}$ diverges at a finite radius, and consequently the pressure and the energy density diverge as well.} real solution, but that such a solution is connected to the GR one. We discard all other parameter combinations that do not respect such criteria.\n\n\\subsection{Expansion at spatial infinity}\n\nWe now analyze the asymptotic behaviour of the solutions at spatial infinity. This time, we expand the metric and scalar functions in inverse powers of $r$, and solve order by order.\nWe impose that the asymptotic value of the scalar field vanishes, that is $\\phi(r\\to\\infty)\\equiv\\phi_\\infty=0$, and that $\\Gamma(r\\to\\infty)=0$. The asymptotic solution then reads\n\\begin{widetext}\n\\begin{align}\n\\begin{split}\\label{eq:Asymptotic1}\ne^{-\\Lambda}&= 1-\\frac{2M}{r}+\\frac{1}{2}\\frac{Q^2\\kappa}{r^{2}}(1-2\\,\\beta\\kappa)+\\frac{1}{2}\\frac{MQ^2\\kappa}{r^{3}}(1-3\\,\\beta)+\\frac{1}{12}\\frac{Q^2\\kappa}{r^{4}}\\left[ M^2(8-28\\,\\beta)+Q^2\\beta\\kappa(1-5\\,\\beta+12\\,\\beta^2)\\right]\\\\\n& \\quad +\\frac{1}{48}\\frac{MQ^2\\kappa}{r^{5}}\\left[ 768\\,\\alpha+8\\,M^2(6-23\\,\\beta)-Q^2\\kappa(1-18\\,\\beta+77\\,\\beta^2-156\\beta^3)\\right]+O(r^{-6}),\n\\end{split}\n\\\\\n\\begin{split}\\label{eq:Asymptotic2}\ne^\\Gamma&=1-\\frac{2M}{r}+\\frac{1}{2}\\frac{Q^2\\beta\\kappa}{r^2}+\\frac{1}{6}\\frac{MQ^2\\kappa}{r^3}(1-3\\,\\beta)+\\frac{1}{r^4}\\left[ 4\\,M^4-\\frac{1}{3}M^2Q^2\\kappa(1+3\\,\\beta)+\\frac{1}{8}Q^4\\beta^2\\kappa^2 \\right]\\\\\n& \\quad -\\frac{1}{r^5}\\left\\{ 8\\,M^5-\\frac{1}{30}M^3Q^2\\kappa(58-75\\beta)-\\frac{1}{80}M Q^2\\kappa\\left[ 512\\,\\alpha-Q^2\\kappa(3+10\\,\\beta-85\\,\\beta^2+60\\,\\beta^3) \\right] \\right\\}+O(r^{-6}),\n\\end{split}\n\\\\\n\\begin{split}\\label{eq:Asymptotic3}\n\\phi&= \\frac{Q}{r}+\\frac{MQ}{r^2}+\\frac{1}{12}\\frac{Q}{r^3}\\left[ 16\\,M^2-Q^2\\kappa(1-2\\,\\beta+3\\,\\beta^2) \\right]+\\frac{1}{r^4}\\left[ 2\\,M^3Q-\\frac{1}{12}MQ^3\\kappa(4-9\\,\\beta+9\\,\\beta^2) \\right]\\\\\n& \\quad +\\frac{1}{480}\\frac{Q}{r^5}\\big\\{Q^4\\kappa^2(9-40\\,\\beta+86\\,\\beta^2-144\\,\\beta^3+117\\,\\beta^4) -8M^2\\left[ 144\\,\\alpha+Q^2\\kappa(58-140\\,\\beta+105\\,\\beta^2) \\right] \\\\\n& \\quad + 1536\\,M^4 \\big\\} +O(r^{-6}).\\vphantom{\\dfrac{Q}{r}}\n\\end{split}\n\\end{align}\n\\end{widetext}\nwhere $M$ and $Q$ are free. We identify $M$ as the ADM mass and $Q$ as the scalar charge, in the sense that it dictates the fall-off of the scalar field far away. As one can see from Eqs.~\\eqref{eq:Asymptotic1}--\\eqref{eq:Asymptotic3}, the contribution from the Ricci coupling dominates the asymptotic behaviour of the solutions over the Gauss-Bonnet coupling. Indeed, terms proportional to $\\beta$ enter the expansion already at order $r^{-2}$, whereas $\\alpha$-dependent terms arise only at order $r^{-5}$. This expansion is in fact entangled with the boundary conditions at the center of the star, as we already mentioned. 
For fixed parameters $\\alpha$ and $\\beta$, the freedom in $M$ directly relates to the freedom in the central density $\\epsilon_0$. On the other hand, the fact that only discrete values of $\\phi_0$ yield a vanishing scalar field at infinity means that the scalar profile is actually fixed once a central density (or a mass) is chosen. Therefore, $Q$ is fixed as a function of $M$, and does not constitute a free charge; this is sometimes referred to as secondary hair.\n\nThe scalar charge constitutes probably the most direct channel to test the theory through observations. Indeed, binaries of compact objects endowed with an asymmetric charge will emit dipolar radiation. This enhances the gravitational-wave emission of such systems: in a Post-Newtonian (PN) expansion, dipolar radiation contributes to the energy flux at order -1PN with respect to the usual quadrupolar GR flux. Generically, this dipolar emission is controlled by the sensitivities of the compact objects, defined as\\footnote{The factor of $1\/\\sqrt{4\\pi}$ is added to match the standard definition of the sensitivity in the literature, where a different normalization for the scalar field is generally used.}\n\\begin{equation}\n \\alpha_I=\\dfrac{1}{\\sqrt{4\\pi}}\\,\\dfrac{\\partial\\text{ln}M_I}{\\partial\\phi_0},\n\\end{equation} \n$M_I$ being the mass of the component $I$, and $\\phi_0$ the value of the scalar field at infinity. The observation of various binary pulsars, notably the PSR~J1738+0333 system, allows one to set the following constraint:\n\\begin{equation}\n | \\alpha_A-\\alpha_B|\\lesssim2\\times10^{-3},\n \\label{eq:DEFbound}\n\\end{equation}\nwhere $A$ and $B$ label the two components of the system \\cite{Shao:2017gwu,Wex:2020ald}. We can then relate the sensitivity to the scalar charge $Q$, using the generic arguments of \\cite{Damour:1992we}. We have\n\\begin{equation}\n Q_I=-\\dfrac{1}{4\\pi}\\,\\dfrac{\\partial M_I}{\\partial\\phi_0}.\n \\label{eq:DEFcharge}\n\\end{equation} \nIf there is no accidental coincidence in the charge of the two components of the binary, Eqs.~\\eqref{eq:DEFbound}-\\eqref{eq:DEFcharge} translate as\n\\begin{equation}\n\\left|\\dfrac{Q}{M}\\right|\\lesssim6\\times10^{-4}\n\\label{eq:boundQ}\n\\end{equation}\nfor the solutions we consider. Only solutions satisfying this bound on the charge to mass ratio are relevant. It is however a non-trivial task to map this bound onto the parameters of the Lagrangian \\eqref{eq:ActionCaseI}. We will do so by exploring the parameter space in Sec.~\\ref{sec:results}.\n\n\\subsection{Numerical implementation}\n\nWe solve the system of three differential equations for the three independent functions $\\Gamma$, $\\phi$ and $\\epsilon$ by starting our integration from $r_0=10^{-5}~\\text{km}$. We fix the parameters of the theory $\\alpha$ and $\\beta$, and the central density $\\epsilon_0$, typically to values of order $10^{17}$~kg\/m$^3$. Then, we give an initial guess for $\\phi_0$, and determine boundary conditions as explained in Sec.~\\ref{Sec:Exp0}. The integration will generically give a solution; however, we also demand that the scalar field vanishes at infinity, that is $\\phi_\\infty=0$. Only a discrete set of $\\phi_0$ values will yield $\\phi_\\infty=0$. Each value corresponds to a different number of nodes of the scalar field in the radial direction. In practice, we integrate up to distances $r_\\text{max}=300\\, \\text{km}$ and we implement a shooting method to select the solutions with $\\phi_\\infty=0$. 
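\nSchematically, the shooting step can be summarized as in the following minimal sketch in Python (using a generic SciPy integrator, rather than the Mathematica routines employed for the actual computations; here \\texttt{rhs} and \\texttt{init\\_state} are hypothetical helpers encoding the field equations and the small-$r$ boundary conditions of Sec.~\\ref{Sec:Exp0}, with the state ordered as $(\\Gamma,\\phi,\\phi',\\epsilon)$):\n\\begin{verbatim}\nfrom scipy.integrate import solve_ivp\n\nR0, RMAX = 1e-5, 300.0             # integration range in km\n\ndef phi_far(phi0, rhs, init_state):\n    sol = solve_ivp(rhs, (R0, RMAX), init_state(phi0),\n                    rtol=1e-10, atol=1e-12)\n    return sol.y[1, -1]            # phi(r_max)\n\ndef shoot(rhs, init_state, lo, hi, tol=1e-2, itmax=60):\n    # bisect on phi0, assuming phi_far changes sign on [lo, hi]\n    f_lo = phi_far(lo, rhs, init_state)\n    for _ in range(itmax):\n        mid = 0.5 * (lo + hi)\n        f_mid = phi_far(mid, rhs, init_state)\n        if abs(f_mid \/ mid) <= tol:\n            return mid             # accepted decaying solution\n        if f_lo * f_mid < 0:\n            hi = mid\n        else:\n            lo, f_lo = mid, f_mid\n    return None\n\\end{verbatim}\nThe tolerance $10^{-2}$ on $\\phi(r_\\text{max})\/\\phi_0$ matches the acceptance criterion quoted below.\n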
Generally, we use Mathematica's built-in function FindRoot.\n\nHowever, in some cases FindRoot fails to find the right solutions, even if one gives it a limited range $(\\phi_{0,\\:\\text{min}},\\phi_{0,\\:\\text{max}})$ in which to look. When this happens, we resort to bisection instead. In this latter case, we require that $\\phi(r_\\text{max})\/\\phi_0 \\leq 10^{-2}$. \n\nAt each stage of the shooting method, we must check that Eq.~\\eqref{eq:Lambda2} gives a real positive solution for $\\Lambda_2$ that is connected to the GR solution. In some cases, we reach the limit of the region of the parameter space where these criteria are fulfilled before reaching $\\phi_\\infty=0$. When this is the case, there is no solution associated with the given choice of $\\alpha$, $\\beta$ and $\\epsilon_0$. Note also that, given a set of $\\alpha$, $\\beta$ and $\\epsilon_0$, there is a maximum number of nodes that the solution can have, and consequently a maximum number of suitable choices of $\\phi_0$ (typically up to three modes in the regions we explore). Solutions with more nodes are encountered only for higher values of the parameters $\\alpha$ and $\\beta$, or at higher curvatures (that is, at higher $\\epsilon_0$).\n\nGiven a solution, we extract the value of the ADM mass $M$ and the scalar charge $Q$, as defined in the asymptotic expansion \\eqref{eq:Asymptotic1}--\\eqref{eq:Asymptotic3}. We then have\n\\begin{equation}\n \\begin{split}\n & M = -\\left(\\frac{1}{2}r^2\\Lambda'\\,e^{-\\Lambda}\\right) \\bigg|_{r_\\text{max}},\\\\\n & Q = -\\left(r^2 \\phi'\\right)\\big|_{r_\\text{max}}.\n \\end{split}\n\\end{equation}\n\n\n\\section{NUMERICAL RESULTS}\n\\label{sec:results}\n\n\\subsection{Existence regions of scalarized solutions}\n\\label{Sec:parameterSpace}\n\nIn this section, we will study the regions where scalarized solutions exist in the $(\\alpha,\\beta)$ parameter space. We analyze three different neutron star scenarios, which correspond to the three cases studied in \\cite{Ventagli:2020rnx}.\n\n\\subsubsection{Light star with SLy EOS}\n\\label{sec:lightSLy}\n\nFirst, we consider a neutron star described by the SLy equation of state \\cite{Haensel:2004nu}, with a central energy density of $\\epsilon_0=8.1\\times 10^{17}~\\text{kg}\/\\text{m}^3$, so that its gravitational mass in GR is $M_{\\text{GR}}=1.12 M_\\odot$. The results are summarized in Fig.~\\ref{fig:Sly112}, where we relate our new results to the previous study of the scalarization thresholds \\cite{Ventagli:2020rnx}.\n\\begin{figure}[ht]\n\t\\includegraphics[width=1\\linewidth]{Sly112n1.pdf}%\n\t\\caption{\n Regions of existence of scalarized solutions in the $(\\alpha,\\beta)$ space, for the SLy EOS with $\\epsilon_0=8.1\\times 10^{17}~\\text{kg}\/\\text{m}^3$. The red (respectively blue) region is the region where scalarized solutions with 0 (respectively 1) node exist. We superimposed the grey contours obtained in Ref.~\\cite{Ventagli:2020rnx}, which represent the lines beyond which GR solutions with the same density are unstable to scalar perturbations with 0, 1, 2, \\textit{etc.} nodes. We see that the region where there exist scalarized solutions with $n$ nodes is included in the region where the GR solutions are unstable to scalar perturbations with $n$ nodes, but is much smaller. The dashed boundary for the blue region corresponds to a breakdown of the integration inside the star. 
In GR, a star with this choice of $\\epsilon_0$ and EOS has a light mass, $M_\\text{GR}=1.12 M_\\odot$.}\n\t\\label{fig:Sly112} \n\\end{figure}\nThe white area corresponds to the region of the parameter space where the GR solution is stable. When cranking up the parameters $\\alpha$ or $\\beta$, a new unstable mode appears every time one crosses a black line. The first mode has 0 nodes, the second 1 node, \\textit{etc}. We will refer to these black lines as \\textit{instability lines}. Any point in the parameter space that lies within some grey region corresponds to a configuration where the GR solution is unstable. \nThe red (respectively blue) area corresponds to the region where scalarized solutions with $n=0$ (respectively $n=1$) nodes exist. We do not include the equivalent regions for higher $n$, so as not to complicate the analysis further. The region where a scalarized solution does exist\nis considerably reduced with respect to the region where the GR solution is unstable.\n\nOne of our main results is that the parameters $(\\alpha,\\beta)$ corresponding to the grey areas that are not covered by the colored regions must be excluded. Indeed, there, scalarized solutions do not exist while the GR solution itself is unstable. Therefore, neutron stars in these theories, when they reach a critical mass, will be affected by a tachyonic instability, but there does not exist a fixed point (a static scalarized solution) where the growth could halt. This would imply that neutron stars with this mass and EOS do not exist for the corresponding parameters of the theory \\eqref{eq:ActionCaseI}. Considering that the properties of the scalarized star are sensitive to nonlinearities, adding further nonlinear interaction terms to the action, \\textit{e.g.} self-interactions in a scalar potential, as was proposed in \\cite{Macedo:2019sem}, or non-linear terms in the coupling functions \\cite{Doneva:2017bvd,Silva:2018qhn}, can potentially change this result.\n\nIn Fig.~\\ref{fig:Sly112}, the regions where scalarized solutions exist are delimited by \\textit{existence lines}, represented by a curve of the respective color. The plain lines correspond to boundaries beyond which it is no longer possible to find a value of $\\phi_0$ that allows a suitable solution to Eq.~\\eqref{eq:Lambda2}, while providing $\\phi_\\infty=0$. Beyond dashed lines, on the other hand, nothing special occurs at the center of the star, but the numerical integration breaks down at a finite radius inside the star. We do not know whether, when crossing these dashed lines, our integration is affected by numerical problems, or whether the divergence corresponds to an actual singularity of the solutions. \nIt could be that this singularity emerges as an artifact of the method we employ. Indeed, in our analysis, we keep the central density $\\epsilon_0$ fixed while pushing the couplings $\\alpha$ and $\\beta$ to larger and larger values. However, for each pair of parameters $(\\alpha,\\beta)$, there probably exists a maximal central density beyond which star solutions do not exist, or equivalently it becomes impossible to sustain such a high central density. The dashed line could correspond to this saturation, where we try to push all the parameters beyond values that can actually be sustained by the model.\n\nA surprising feature, which is not visible in Fig.~\\ref{fig:Sly112}, is that scalarized solutions always exist in a very narrow range along the instability lines. 
For example, when crossing the black instability line that delimits the white region where the GR solution is stable, from the light-grey region where it is unstable against $n=0$ scalar perturbations, there exists a very narrow band (within the grey region) where scalarized solutions with zero node exist. We observed similar behaviours along each instability line, also in the scenarios discussed in the next paragraphs. We further investigate these particular solutions in Sec.~\\ref{Sec:instabilityLines}.\n\n\\subsubsection{Light star with MPA1 EOS}\n\nWe next consider a stellar model described by the MPA1 equation of state~\\cite{Gungor:2011vq}. We choose a central energy density of $\\epsilon_0=6.3\\times 10^{17}\\,\\text{kg}\/\\text{m}^3$, such that it corresponds to the same GR mass as in the previous case, that is $M_{\\text{GR}}=1.12 M_\\odot$. We report the results in Fig.~\\ref{fig:MPA1}.\n\\begin{figure}[ht]\n\t\\includegraphics[width=1\\linewidth]{MPA1n1.pdf}%\n\t\\caption{Regions of existence of scalarized solutions in the $(\\alpha,\\beta)$ space, for the MPA1 EOS with $\\epsilon_0=6.3\\times 10^{17}~\\text{kg}\/\\text{m}^3$. The conventions are the same as in Fig.~\\ref{fig:Sly112}. In GR, a star with this choice of $\\epsilon_0$ and EOS is again light, with $M_\\text{GR}=1.12 M_\\odot$.}\n\t\\label{fig:MPA1}\n\\end{figure}\nAs one can see, changing the EOS has only mild effects on the region of existence of scalarized solutions. The analysis of the parameter space is qualitatively the same as for the SLy EOS. The main difference is that, for the range of parameters we considered, no numerical divergences (associated with dashed lines) appear with the MPA1 EOS.\n\n\\subsubsection{Heavy star with SLy EOS}\n\nLastly, we consider a denser neutron star described by the SLy EOS, with $\\epsilon_0=3.4\\times 10^{18}\\,\\text{kg}\/\\text{m}^3$. It corresponds to an increased mass in GR of $M_{\\text{GR}}=2.04 M_\\odot$. The results are shown in Fig.~\\ref{fig:SLy204}.\n\\begin{figure}[ht]\n\t\\subfloat{\\includegraphics[width=\\linewidth]{SLy204.pdf}%\n\t}\n\t\\\\\n\t\t\\subfloat{%\n\t\\includegraphics[width=\\linewidth]{SLy204Zoom.pdf}%\n\t}\n\t\\caption{Regions of existence of scalarized solutions in the $(\\alpha,\\beta)$ space, for the SLy EOS with $\\epsilon_0=3.4\\times 10^{18}~\\text{kg}\/\\text{m}^3$. The conventions are the same as in Fig.~\\ref{fig:Sly112}. In GR, a star with this choice of $\\epsilon_0$ and EOS is the heaviest possible, $M_\\text{GR}=2.04 M_\\odot$. The bottom panel is simply a zoom of the upper one.}\n\t\\label{fig:SLy204}\n\\end{figure}\nIn this case, positive values of $\\beta$ can also lead to scalarized solutions. Already in \\cite{Mendes:2014ufa,Palenzuela:2015ima,Mendes:2016fby,Ventagli:2020rnx}, it was shown that, in GR, dense neutron stars possess a negative Ricci scalar towards the center, which allows for scalarization to be triggered even when $\\beta>0$. As before, a dashed line signals the appearance of divergences, which in this case show up already for the $n=0$ node.\n\nIn the lower panel of Fig.~\\ref{fig:SLy204}, we zoom in on the region of small couplings, in order to better understand what happens for natural values of the Ricci coupling $\\beta$. In the absence of the Gauss-Bonnet coupling, scalarization can occur either if $\\beta<-8.55$, or $\\beta>11.5$. 
Let us concentrate on the $\\beta>0$ scenario, which is motivated by the results of Ref.~\\cite{Antoniou:2020nax}, where it was shown that positive values of $\\beta$ make GR a cosmological attractor. We recall that black hole scalarization (at least for non-rotating black holes) occurs for $\\alpha>0$. Hence, we see that there exists an interesting region in the $\\alpha>0,~\\beta>0$ quadrant where even very compact stars do not scalarize, while black holes do. Such models can therefore \\textit{a priori} pass all binary pulsar tests, while being testable with black hole observations. On the other hand, for $\\beta\\gtrsim11.5$, the red region where GR solutions are replaced by scalarized solutions spreads very fast in the $\\alpha$ direction, and one has to be careful, when considering black hole scalarization, that such models are not already excluded by neutron star observations.\n\nSo far, we have established the regions where scalarized solutions exist in the parameter space. In the next two sections, we will discuss the properties of these solutions, in particular their scalar charge and their mass. We separate this study into two cases: $\\beta<0$ (Sec.~\\ref{Sec:betaNeg}) and $\\beta>0$ (Sec.~\\ref{Sec:betaPos}); indeed, these two situations have different motivations and observational interests.\n\n\\subsection{Mass and scalar charge of the $\\beta<0$ solutions}\n\\label{Sec:betaNeg}\n\nWe now focus on the scenario where $\\beta<0$. This corresponds to the original situation studied by Damour and Esposito-Far\\`ese. Typically, scalarized solutions with $\\beta<0$ and $\\alpha=0$ are extremely constrained by binary pulsar observations \\cite{Freire:2012mg,Antoniadis:2013pzd,Shao:2017gwu}. A particular motivation to study solutions with $\\beta<0$ is therefore to determine whether the addition of a non-zero Gauss-Bonnet coupling can improve their properties. We will consider three different choices of the Ricci coupling: $\\beta=-5.5,-10$ and $-100$. The first two choices are relevant astrophysically: $\\beta=-5.5$ is approximately the value where scalarization is triggered for small Gauss-Bonnet couplings, while $\\beta=-10$ corresponds to a region where neutron stars are scalarized, but with rather small deviations with respect to GR. The third choice, $\\beta=-100$, is certainly disfavored observationally, but it will allow us to illustrate an interesting behaviour concerning different scalar modes.\n\nLet us start with the comparison between the cases $\\beta=-5.5$ and $-10$. The results are summarized in Fig.~\\ref{fig:smallCoup}.\n\\begin{figure*}[ht]\n\\begin{center}\n\t\\subfloat{%\n\t\\includegraphics[width=0.4\\linewidth]{dMalpha1v1N.pdf}%\n\t}\n\t\\subfloat{%\n\t\\includegraphics[width=0.4\\linewidth]{dMalpha2v1N.pdf}%\n\t}\n\t\\\\\n\t\\subfloat{%\n\t\\includegraphics[width=0.4\\linewidth]{QMalpha1v1.pdf}%\n\t}\n\t\\subfloat{%\n\t\\includegraphics[width=0.4\\linewidth]{QMalpha2v1N.pdf}%\n\t}\n\t\\caption{Mass difference and scalar charge of scalarized solutions for $\\beta<0$. The two left (respectively right) panels show how these quantities evolve when varying $\\alpha$ at fixed $\\beta=-5.5$ (respectively $-10$). The scalar charge $Q$ (bottom panels) is normalized to the total mass of the solutions, $M$. For all curves, the mass difference $\\delta M$ (upper panels) is computed with respect to a GR star with the same central density and EOS. 
Plain curves correspond to a GR mass of $1.12~M_\\odot$, using the SLy EOS; dashed curves to the same GR mass, but the MPA1 EOS; and dotted-dashed curves to a GR mass of $2.04~M_\\odot$, using the SLy EOS. In this region of the parameter space, only solutions with 0 nodes for the scalar field exist. A generic feature of lighter stars (plain and dashed curves) is that the charge decreases when $\\alpha$ increases, \\textit{a priori} offering a way to evade the stringent bound of Eq.~\\eqref{eq:boundQ} when increasing $\\alpha$. However, it is only for values of $\\beta$ very close to the DEF threshold ($\\beta=-5.5$) that we can obtain scalar charges compatible with observations.\n\t}\n\t\\label{fig:smallCoup}\n\\end{center}\n\\end{figure*}\nThis figure shows two properties of scalarized stars. First, the mass deficit (or excess) of scalarized stars with respect to GR stars with the same central density and EOS: \n$\\delta M=M-M_{GR}$. \nSecond, the scalar charge of the scalarized solutions, $Q$. We compare the results for the three different stellar models considered in Sec.~\\ref{Sec:parameterSpace}, for the two values of $\\beta$. \nAll curves extend only over a finite range of $\\alpha$. Indeed, past a certain value of $\\alpha$, we exit the red region on the $\\beta<0$ side of Figs.~\\ref{fig:Sly112}, \\ref{fig:MPA1} and \\ref{fig:SLy204} (moving vertically, since $\\beta$ is fixed to $-5.5$ or $-10$). Scalarized solutions do not exist outside of this region. \n\nFigure \\ref{fig:smallCoup} shows that the choice of EOS has little effect on the properties of the scalarized solutions.\nHowever, increasing the density drastically modifies these properties. In particular, at higher densities, there exist solutions with $\\delta M>0$. This can appear problematic at first. Indeed, one expects that, in a scalarization process, energy is stored in the scalar field distribution. Hence, the ADM mass, which constitutes a measure of the gravitational energy, should decrease in the process. \nHowever, we stress that we are not studying a dynamical process. Indeed, the stars for which we are computing the mass difference $\\delta M$ have, by construction, the same central energy density $\\epsilon_0$. In the scalarization process of a GR neutron star, the central energy density will not remain fixed. Hence, our results do not necessarily mean that a star will gain mass when undergoing scalarization.\n\nPerhaps more interestingly for observations, Fig.~\\ref{fig:smallCoup} also shows the behaviour of the scalar charge. For the light neutron stars, the scalar charge always decreases when $\\alpha$ increases. Therefore, the constraint on the scalar charge, Eq.~\\eqref{eq:boundQ}, disfavors the solutions with $\\alpha<0$ with respect to standard DEF ($\\alpha=0$) solutions. On the contrary, one could hope that a positive Gauss-Bonnet coupling could help evade these constraints even for $\\beta<-5.5$, by quenching the charge. Effectively, there will be a direction in the $\\alpha>0$ and $\\beta<0$ quadrant where the effects of the two operators, Ricci and Gauss-Bonnet, combine to yield a small scalar charge.\nThis interesting possibility is moderated by what happens in the case of denser stars (dotted-dashed line in Fig.~\\ref{fig:smallCoup}). For large negative values of the Ricci coupling ($\\beta=-10$), the scalar charge does not have a monotonic behaviour with $\\alpha$. 
In particular, as shown in the bottom-right panel of Fig.~\\ref{fig:smallCoup}, $Q$ starts increasing for positive values of $\\alpha$. Even at the point where $Q$ is minimal, its value ($Q\/M\\simeq8\\times10^{-3}$) already exceeds the bound of Eq.~\\eqref{eq:boundQ}. Therefore, it is only for values of $\\beta$ that are very close to the DEF threshold $\\beta\\simeq-5.5$ that the addition of the Gauss-Bonnet coupling can help to reduce the scalar charge, and to pass the stringent binary pulsar tests.\n\nTo conclude the study of the $\\beta<0$ region, we consider a significantly more negative Ricci coupling, namely $\\beta=-100$. To illustrate what happens at these large negative values of $\\beta$, it is enough to consider one scenario, for example the one of lighter neutron stars with the SLy EOS. For such negative values of $\\beta$, there exist several scalarized solutions, with different numbers of nodes. We can then compare the mass differences of these solutions with one another. Figure \\ref{fig:betaNeg100} shows that, for $\\alpha>\\alpha_\\text{c}\\simeq350\\, \\text{km}^2$, scalarized solutions with 1 node become lighter than scalarized solutions with 0 node.\n\\begin{figure}[ht]\n\t\\includegraphics[width=0.75\\linewidth]{dMalpha6v1N.pdf}\n\t\\caption{Mass difference $\\delta M$ vs $\\alpha$ at $\\beta=-100$. The EOS considered here is the SLy one, with $\\epsilon_0=8.1\\times 10^{17}\\,\\text{kg}\/\\text{m}^3$, which in GR corresponds to $M_\\text{GR}=1.12~M_\\odot$. The color and dashing conventions are the same as in Fig.~\\ref{fig:smallCoup}. We have more modes in this region of parameter space, which we represent as dotted-dashed (for $n=1$ node) and dashed (for $n=2$ nodes) curves. For $\\alpha\\gtrsim350\\, \\text{km}^2$, solutions with 1 node start having a smaller mass than solutions with 0 node, which may indicate that solutions with 1 node are energetically favored.}\n\t\\label{fig:betaNeg100}\n\\end{figure}\nThis is a hint that, for $\\alpha>\\alpha_c$, the one node solution will be energetically preferred over the zero node solution. We cannot conclude definitively on this issue, as the ADM mass does not take into account the energy stored in the scalar distribution (which is non-zero for the two scalarized solutions). However, in the regime where this inversion happens, the mass difference with respect to GR, $\\delta M$, is rather small. If our interpretation in terms of energetic preference is correct, the transition from a preferred solution with zero node to a solution with one node is interesting. Indeed, the scalarized solution with zero node is associated with the fundamental mode of the GR background instability. At the perturbative level, all the other modes of instability have higher energies. It would then be natural to expect that, at the non-linear level of scalarized solutions, this energy hierarchy is respected. This is the case up to $\\alpha=\\alpha_\\text{c}$, but no longer beyond. In Sec.~\\ref{Sec:EffMass}, we provide a putative explanation for this inversion: that for $\\alpha>\\alpha_\\text{c}$, the profile of the effective mass over the GR background tends to favor the growth of scalar field solutions with one node, rather than zero.\n\n\\subsection{Mass and scalar charge of the $\\beta>0$ solutions}\\label{Sec:betaPos}\n\nWe now consider the case of positive $\\beta$. Such solutions are less constrained by observations than their $\\beta<0$ counterparts. 
They are also very interesting from a cosmological perspective, where $\\beta>0$ allows a consistent history throughout different epochs \\cite{Antoniou:2020nax}. We have seen in Sec.~\\ref{Sec:parameterSpace} that, among the three different possible neutron star configurations we focus on, only the denser one leads to scalarized solutions for $\\beta>0$. In Fig.~\\ref{fig:50M204}, we show the mass difference $\\delta M$ and scalar charge $Q$ as functions of $\\alpha$ when $\\beta=50$.\n \\begin{figure}[ht]\n\t\\subfloat{\\includegraphics[width=0.75\\linewidth]{dMalpha4v1N.pdf}}\n\t\\\\\n\t\\subfloat{\\includegraphics[width=0.75\\linewidth]{QMalpha4v1N.pdf}} \n\t\\caption{Mass difference and scalar charge of scalarized solutions for $\\beta>0$ ($\\beta=50$ here). Among the three neutron star scenarios that we considered throughout the paper, only the heavier star ($\\epsilon_0=3.4\\times 10^{18}$~kg\/m$^3$, $M_\\text{GR}=2.04~M_\\odot$, SLy EOS) possesses some scalarized solutions in this region. The dashing convention is the same as in Fig.~\\ref{fig:betaNeg100}. Solutions that correspond to the interval of $\\alpha$ centered on 0 are interesting observationally, as they yield very small scalar charges, compatible with Eq.~\\eqref{eq:boundQ}.}\n\t\\label{fig:50M204}\n\\end{figure}\nNote that scalarized solutions with zero node exist over two disconnected ranges of $\\alpha$ ($-44~\\text{km}^2<\\alpha<57~\\text{km}^2$ and $174~\\text{km}^2<\\alpha<522~\\text{km}^2$). In the gap, GR solutions are stable and no scalarized solutions exist. This is obvious from Fig.~\\ref{fig:SLy204}, taking a cut along the vertical line $\\beta=50$. \n \nOver the first interval, $\\alpha$ is rather small and the scalarization process is dominated by the negative Ricci scalar. For strictly vanishing $\\alpha$, the scalarization phenomenon with $\\beta>0$ has already been examined in \\cite{Mendes:2014ufa,Palenzuela:2015ima,Mendes:2016fby}. Here, we find that, in the interval of small values of $\\alpha$, the scalar charges of the $n=0$ solutions (as well as of the $n=1$ solutions) are very small. Typically, $Q\/M \\simeq 10^{-4}-10^{-5}$, compatible with Eq.~\\eqref{eq:boundQ}. Hence, all solutions with $\\beta>0$ and rather small values of $\\alpha$ are interesting observationally: they display either no scalarization effects for neutron stars (for $\\beta\\lesssim 11.5$) or very mild scalar charges (for $\\beta\\gtrsim 11.5$). At the same time, they allow for a consistent cosmological history; finally, together with positive values of $\\alpha$, they will generically give rise to black hole scalarization, as studied in detail in \\cite{Antoniou:2021zoy}. In this region of parameter space, we can therefore hope to discover, in future gravitational-wave signals of binary black holes, scalarization effects that are either absent or suppressed in the case of neutron stars.\n \nOver the second interval ($174~\\text{km}^2<\\alpha<522~\\text{km}^2$), the contribution of the Gauss-Bonnet invariant tends to dominate, and the scalar charges are more significant, as one can immediately notice in Fig.~\\ref{fig:SLy204}. 
Such setups are not compatible with Eq.~\\eqref{eq:boundQ}, and therefore less interesting phenomenologically.\n\n\\subsection{Scalarized solutions along the instability lines}\\label{Sec:instabilityLines}\n\nAs we mentioned at the end of Sec.~\\ref{sec:lightSLy}, a generic feature that is not observable in Figs.~\\ref{fig:Sly112}, \\ref{fig:MPA1} and \\ref{fig:SLy204} is that scalarized solutions are present in a tiny band close to each instability line.\nLet us illustrate this with the light star model (with the SLy EOS), \\textit{i.e.} the one corresponding to Fig.~\\ref{fig:Sly112}. For simplicity, we also restrict our study to solutions with $\\beta=0$ (\\textit{i.e.}, we take a cut along the vertical axis in Fig.~\\ref{fig:Sly112}). The characteristics of the solutions are shown in Fig.~\\ref{fig:beta0}.\n\\begin{figure}[ht]\n\t\\subfloat{\\includegraphics[width=0.75\\linewidth]{dMalpha5v1N.pdf}}\n\t\\\\\n\t\\subfloat{\\includegraphics[width=0.75\\linewidth]{QMalpha5v1N.pdf}} \n\t\\caption{Mass difference and scalar charge of the scalarized solutions along the instability lines, for $\\beta=0$. The scenario considered here corresponds to $\\epsilon_0=8.1\\times 10^{17}~\\text{kg}\/\\text{m}^3$ ($M_\\text{GR}=1.12M_\\odot$) together with the SLy EOS. Solutions with zero node acquire a significant charge and mass difference, and are apparently disconnected from GR when they appear while increasing $\\alpha$ towards positive values. Solutions with $n=1$ node are very close to GR, with a small charge and mass difference. Since they extend only over a small range of $Q$ and $\\delta M$, they are difficult to spot. They lie at the upper left (respectively lower left) of the top (respectively bottom) panel.\n\t}\n\t\\label{fig:beta0}\n\\end{figure}\nScalarized solutions with zero nodes (the ones lying close to the $n=0$ instability line of the GR solution) have a characteristic mass difference and scalar charge which are not particularly small. They are of the same order as for the solutions we previously examined (Figs.~\\ref{fig:smallCoup}--\\ref{fig:50M204}). They also exhibit a surprising behaviour: when increasing $\\alpha$ progressively from 0 towards positive values, the mass and scalar charge suddenly deviate from GR, instead of being smoothly connected; as $\\alpha$ increases further, $\\delta M$ and $Q$ then tend to decrease. This behaviour is significantly different from what we could observe in Figs.~\\ref{fig:smallCoup}--\\ref{fig:50M204}.\n \n Solutions with more nodes ($n=1$, 2, 3...) exhibit a clear feature: they deviate very slightly from GR in terms of mass, and acquire only a small scalar charge (typically $\\delta M < 10^{-2}$ and $Q\/M < 10^{-4}$). We verified this behaviour for all admitted higher modes; however, for simplicity, in Fig.~\\ref{fig:beta0} we show only the case $n=1$. This feature can be understood as follows: close to some instability line (on the unstable side), an unstable mode of the effective potential associated with the GR solution has just appeared. A very small deformation of the potential can therefore easily restore the equilibrium. This deformation can be caused by the back-reaction of the scalar onto the metric: the instability is triggered, the scalar field starts growing, but it immediately back-reacts on the potential, making it shallower and suppressing the instability. 
Clearly, such a behaviour can only happen close to instability lines, where a specific mode is on the edge of stability.\n\n\\subsection{Predicting the scalar profile of scalarized stars from GR solutions}\\label{Sec:EffMass}\n\nWe will conclude this study by arguing that, already at the perturbative level of the GR solution, we can anticipate the profile of the scalar field in the fully scalarized solution. To this end, let us focus on the effective mass given in Eq.~\\eqref{eq:eff_masss}, $ m_\\text{eff}^2=\\beta R\/2-\\alpha \\mathscr{G}$. This is a radially dependent quantity, and the scalar field is most likely to grow at radii where $m_\\text{eff}^2$ is most negative. In particular, it is natural to expect that, if $m_\\text{eff}^2$ has a minimum at $r=0$, this will favor a monotonic profile for the scalar field, and hence an $n=0$ type of solution. On the contrary, if $m_\\text{eff}^2$ has a minimum at $r>0$, this favors a peaked profile for the scalar field, which is more common in $n\\geq1$ solutions. Let us illustrate this with a concrete example. We will\nconsider the scenario that corresponds to $M_\\text{GR}=1.12 M_\\odot$, together with the SLy EOS, and two choices of $\\beta$: $\\beta=-10$ and $\\beta=-100$. In the first case, only solutions with 0 node exist; in the second case, we can construct solutions with 0 or 1 node.\n\nWe first focus on the case $\\beta=-10$. The Ricci scalar is everywhere positive over the background we consider, with a maximum at $r=0$; hence, $\\beta R$ contributes negatively to the squared mass, favouring the growth of the scalar field close to the center. The Gauss-Bonnet scalar, on the other hand, is negative in the central region of the star, and becomes positive towards the surface. Therefore, $-\\alpha\\mathscr{G}$ reinforces the effect of $\\beta R$ if $\\alpha<0$, while counterbalancing it if $\\alpha>0$. This is illustrated in the top panel of Fig.~\\ref{fig:EffMass2}.\n\\begin{figure}[ht]\n\t\\subfloat{\\includegraphics[width=0.7\\linewidth]{mEff2.pdf}}\n\t\\\\%\n\t\\subfloat{\\includegraphics[width=0.7\\linewidth]{phi2N.pdf}}\n\t\\caption{Upper panel: radial profile of the effective mass squared over the GR background, using the SLy EOS and a central density $\\epsilon_0=8.1\\times 10^{17}\\,\\text{kg}\/\\text{m}^3$ (yielding $M_{\\text{GR}}=1.12 M_{\\odot}$), for $\\beta=-10$ and $\\alpha=\\pm200$~km$^2$; Lower panel: radial profile of the scalar field, this time in the fully scalarized solution with the same EOS, central density, and Lagrangian parameters. The radial coordinate is normalized by $R_\\text{s}$, the radius of the star surface. In the lower panel, the scalar field is normalized to its central value for $\\alpha=-200\\, \\text{km}^2$. When the minimum of $m_\\text{eff}^2$ is shifted to $r>0$, so is the peak of $\\phi$.}\n\t\\label{fig:EffMass2}\n\\end{figure}\nThe bottom panel shows the scalar profile of the fully scalarized solutions associated with the same parameters. In this range of parameters, only solutions with 0 node are allowed (as one can check in Fig.~\\ref{fig:Sly112}); hence, pushing the minimum of $m_\\text{eff}^2$ away from the center cannot favour $n=1$ solutions, which do not exist. Still, we notice that positive $\\alpha$ values, which have the effect of displacing the minimum of $m_\\text{eff}^2$ to $r>0$, also displace the peak of the scalar field to $r>0$. The peak of the scalar field is located approximately at the minimum of $m_\\text{eff}^2$. 
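\nIn practice, this criterion is inexpensive to evaluate: given the radial profiles of $R$ and $\\mathscr{G}$ on the GR background, one simply locates the minimum of $m_\\text{eff}^2$. A minimal sketch in Python (the profile arrays are assumed to be extracted from a GR stellar solution):\n\\begin{verbatim}\nimport numpy as np\n\ndef m_eff_minimum(r, ricci, gb, alpha, beta):\n    # m_eff^2 = beta*R\/2 - alpha*G on the GR background;\n    # a minimum at r = 0 favors n = 0 profiles, r > 0 peaked ones\n    m2 = 0.5 * beta * ricci - alpha * gb\n    i = np.argmin(m2)\n    return r[i], m2[i]\n\\end{verbatim}\n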
Again, one must be careful in the comparison of the two panels, as one of them corresponds to a GR star while the other one corresponds to a scalarized star. However, our analysis seems to capture what happens during the transition from the GR to the scalarized branch.\n\nTo better illustrate the transition between $n=0$ and $n=1$ solutions, let us now consider the case $\\beta=-100$. \nThe qualitative discussion about the effect of $\\beta R$ and $-\\alpha\\mathscr{G}$ over the effective mass is exactly the same as in the previous case. We will therefore consider again a large negative and a large positive value of $\\alpha$, as well as an intermediate one: $\\alpha=-2000, \\,350$ and 1500~km$^2$. Note that the intermediate value corresponds to $\\alpha_\\text{c}$ in Sec.~\\ref{Sec:betaNeg}, the critical value at which scalarized stars with $n=0$ node become more massive (and hence probably less stable) than those with $n=1$ node. We show the results in Fig.~\\ref{fig:EffMass3}.\n\\begin{figure}[ht]\n\t\\subfloat{\\includegraphics[width=0.735\\linewidth]{mEff3.pdf}}\n\t\t\\\\%\n\t\\subfloat{\\includegraphics[width=0.735\\linewidth]{phi3n0N.pdf}}\n\t \\\\%\n\t\\subfloat{\\includegraphics[width=0.735\\linewidth]{phi3n1N.pdf}}\n\t\\caption{Upper panel: radial profile of the effective mass squared over the GR background, using the SLy EOS and a central density $\\epsilon_0=8.1\\times 10^{17}\\,\\text{kg}\/\\text{m}^3$ (yielding $M_{\\text{GR}}=1.12 M_{\\odot}$), for $\\beta=-100$ and $\\alpha=-2000$, 350 or 1500~km$^2$; Center (respectively lower) panel: radial profile of the scalar field solution with 0 (respectively 1) node in the fully scalarized solution with the same EOS, central density, and Lagrangian parameters. The normalization is similar to the one of Fig.~\\ref{fig:EffMass2}. When increasing $\\alpha$, the minimum of $m_\\text{eff}^2$ is progressively shifted from $r=0$ to a finite radius, favoring in turn the growth of $n=0$ and $n=1$ solutions.}\n\t\\label{fig:EffMass3}\n\\end{figure}\nThe top panel shows the profile of the effective scalar mass. It behaves exactly as in the case $\\beta=-10$, with a minimum at $r=0$ for negative values of $\\alpha$, which is progressively shifted to larger radii when we increase $\\alpha$. This time, for the parameters we chose, solutions with both zero and one node exist. In the center (respectively bottom) panel of Fig.~\\ref{fig:EffMass3}, we show the $n=0$ (respectively $n=1$) solutions. In Sec.~\\ref{Sec:betaNeg}, we argued that for $\\alpha<\\alpha_c$ the zero node solution is expected to be energetically preferred over the one node solution, and vice-versa for $\\alpha>\\alpha_c$. The profiles of the effective mass squared give a complementary argument that strengthens this expectation. Indeed, for $\\alpha=-2000\\,\\text{km}^2\\ll\\alpha_c$ the shape of $m_\\text{eff}^2$ favours a scalar solution with a maximum at the center of the star, which decays monotonically with $r$, \\textit{i.e.} an $n=0$ solution. For $\\alpha=1500\\,\\text{km}^2\\gg\\alpha_c$, the tachyonic instability is still triggered inside the star, but away from the center. Thus, we expect that a solution with one node will be favoured. The transition between a minimum at $r=0$ and $r>0$ indeed seems to occur around $\\alpha_\\text{c}$.\n\n\n\\section{Conclusions}\n\\label{sec:discussion}\n\nWe have explored scalarized neutron stars when couplings between the scalar field and both the Ricci and the Gauss-Bonnet invariants are present. 
This completes the analysis initiated in~\\cite{Andreou:2019ikc,Ventagli:2020rnx}, where all the terms contributing to the onset of scalarization were identified, and continued in~\\cite{Antoniou:2021zoy} with the study of scalarized black holes in this minimal setup.\n\n\nWe have identified the regions of parameter space where solutions exist, considering three different stellar scenarios which correspond to different central densities and EOS. Although we have considered only a limited number of different central densities, we have selected the ones that correspond to the lowest\/largest neutron star mass in GR, in order to cover very different setups. The regions where scalarized solutions exist are systematically smaller than the ones where the GR branch is tachyonically unstable. The complementary regions, where the GR solution is unstable while no scalarized solution exists, should be excluded.\n\nWe then investigated in detail the physical characteristics of the scalarized solutions. In general, large parameters ($|\\beta|\\gg1$ or $|\\alpha|\\gg L^2$, where $L\\simeq10$~km is the typical curvature scale) lead to scalar charges that would be in conflict with binary pulsar constraints. However, it is interesting to notice that solutions with $\\beta>0$ and reasonably small $\\alpha$ (typically $|\\alpha|\\lesssim 50$~km$^2$) lead either to stable GR configurations, or to scalarized stars with small charges. Remarkably, this is the region of the $(\\alpha,\\beta)$ parameter space for which GR is a cosmological attractor \\cite{Antoniou:2020nax} and black hole scalarization can take place \\cite{Antoniou:2021zoy}. Therefore, it is possible to construct scalarization models that are consistent with current observations, while still having interesting strong field phenomenology. It is worth noting that future gravitational-wave observations, such as the observation of extreme mass-ratio inspirals by LISA~\\cite{Maselli:2020zgv, Maselli:2021men}, will reach the precision required to measure small scalar charges for neutron stars and black holes.\n\nWe have also discovered that scalarized solutions systematically exist near the thresholds that delimit the stability of the GR solutions, and provided a putative explanation for this. Finally, we have shown that the profile of the effective mass at the GR level can foster the growth of certain modes with respect to others.\n\nAn obvious continuation of the present work is the stability analysis of the scalarized solutions, both the neutron stars presented here and the black holes investigated in~\\cite{Antoniou:2021zoy}.\nIt will also be interesting to combine the bounds coming from neutron star and black hole observations with the theoretical constraints that relate to the requirement that scalarization models have a well-posed initial value problem \\cite{Ripley:2020vpk}. So far, the combined theory with both Ricci and Gauss-Bonnet couplings has not been studied in detail from the initial value problem perspective. Finally, rotation is known to have important effects on black hole scalarization with a Gauss-Bonnet coupling, either quenching it (for $\\alpha>0$ \\cite{Cunha:2019dwb,Collodel:2019kkx}) or triggering it (for $\\alpha<0$ \\cite{Dima:2020yac}). The effect of rotation on neutron star scalarization was investigated in the framework of the DEF model \\cite{Doneva:2013qva}. It would be interesting to extend this analysis to combined Ricci\/Gauss-Bonnet couplings, or to pure Gauss-Bonnet ones.\n\n\n\\begin{acknowledgments}\n G.A. 
acknowledges partial support from\nthe Onassis Foundation.\nThis project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 101007855.\nA.L. thanks FCT for financial support through Project~No.~UIDB\/00099\/2020.\nA.L. acknowledges financial support provided by FCT\/Portugal through grants PTDC\/MAT-APL\/30043\/2017 and PTDC\/FIS-AST\/7002\/2020.\nT.P.S. acknowledges partial support from the STFC Consolidated Grants No. ST\/T000732\/1 and No. ST\/V005596\/1. \nWe also acknowledge networking support by the GWverse COST Action\nCA16104, ``Black holes, gravitational waves and fundamental physics.''\n\\end{acknowledgments}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec1}\n\nIn {\\it Harry Potter and the Deathly Hallows} \\cite{rowling:HarryPotter:2007},\nJ.~K.~Rowling tells of the origin of the famous piece of cloaking cloth that\nallows Harry Potter, the renowned young wizard apprentice, to hide from unwanted company.\nThis is just one example taken from modern literature of how the idea of optical cloaking and\ninvisibility, the capability to hide objects (people in this case) from sight, has permeated popular culture\nin recent times, although there are many more.\nFor instance, as is well known, a number of optical illusions popular during the XIXth and XXth centuries were\nbased on the reflection properties of mirrors and window glasses \\cite{steinmeyer-bk,zepf:physteach:2004}.\nIt is perhaps because of this capability to both fascinate and amaze at the same time\nthat the scientific community has also shown much interest in the cloaking phenomenon and its possible applications\n\\cite{azanna:optica:2018}, from stealth technology (widely used in the military\nindustry in aircraft or warships, for example) to the design and fabrication of metamaterials\n\\cite{engheta:PRE:2005,engheta:PRE:2006,smith:Science:2006,shalaev:NatPhot:2007}.\n\nIn general, (optical) cloaking conditions are related to a particular geometrical design of the\nobject to be hidden or of the cloaking device that is going to be used to hide the object.\nIn turn, such a design strongly depends on the level at which cloaking is required.\nThis can be done either at the highly sophisticated level of electromagnetic optics\nor at the more elementary level of geometrical optics.\nAt the level of electromagnetic optics, this implies fabricating specific arrangements\nthat produce a direct, local influence on the phase of the incident light.\nThis is the basic principle behind the production of cloaking metamaterials, of much\ninterest at present, although beyond our scope here.\nNevertheless, just to provide a quick glimpse of the issue, it is worth mentioning that\nin these new materials cloaking emerges from the collective behavior of repetitive\npatterns of assemblies of tiny units, smaller than the typical wavelengths they intend to\ninfluence, rather than from the material itself that such units are made of.\nBy means of a specific design of such assemblies, the electric and magnetic properties\nof the material can be controlled, achieving narrow electromagnetic spectral bands,\ninfinite phase velocities, or negative refractive indices.\nAt a pedagogical level, there have been some proposals to introduce the physics of\nmetamaterials in simple terms with the aid of inexpensive experimental setups, based\non elements available in any undergraduate 
teaching laboratory and the use of\nmicrowaves \\cite{marques:AJP:2011,fleming:AJP:2017} or sound waves \\cite{gennaro:AJP:2016}.\nOther authors have approached the issue, particularly cloaking properties, from a more\ntheoretical perspective, through easy-to-implement numerical tools \\cite{thompson:AJP:2008} or with\njust some elementary theory \\cite{longhi:AJP:2017}, also appropriate at the undergraduate level.\n\nAt the level of geometrical optics, on the other hand, the cloaking effect can\nbe achieved in a far simpler fashion, involving either multiple reflections \\cite{choi:ApplOpt:2014,howell:video}\nor the imaging properties of lens systems \\cite{choi:OptExp:2014,choi:OptExp:2015,howell:video2}.\nIn either case, the working principle is the same: the light coming from an object is redirected in a way\nthat it reaches the observer's eyes without being affected by the presence of an interposed element,\nwhich remains unnoticeable for the observer.\nThis is the case, for instance, of the optical cloaking device (OCD) proposed in Ref.~\\cite{choi:OptExp:2014},\nwhich is based on the very basic contracting\/expanding property of a two-lens system.\nMore specifically, the waist of an incident light beam coming from the object is reduced\nby a telescope-like two-lens configuration and, then, after the light has traveled a certain\ndistance, the inverted version of the same two-lens configuration expands the beam waist again\nto its initial size, which is eventually what the observer is looking at.\nIn the middle part of the OCD, between the reducer and the expander two-lens systems, the narrowness\nof the beam makes it possible to accommodate around it whatever object we wish to hide, because its presence\nwill not be noticed by the observer.\nA nice video demonstration of the experiment can be seen in \\cite{howell:video2}.\nThe same experiment can also be performed with standard lenses, available in any teaching\noptics laboratory.\nAnalogous results can also be obtained with cheaper versions, such as the one specially prepared\nin our teaching laboratory for a popular science TV show \\cite{orbita-laika}.\n\nBecause of its simplicity, it is clear that for an optics instructor the aforementioned\nexperiment is very appealing at an elementary teaching level, as a way to introduce the concept\nof optical cloaking in the classroom in simple terms: just a simple arrangement of a few lenses\nand basic theory illustrates very nicely an application of the principles of geometrical optics\nin paraxial form, beyond other conventional examples that have been used in optics\nlaboratories for decades to demonstrate the functioning of telescopes, microscopes,\nphoto-cameras, the eye, or other optical instruments.\nFor undergraduate students, on the other hand, these experiments would constitute a beneficial\nfirst approach to the phenomenon, since it is introduced at a very basic level, without getting into\nvery sophisticated mathematics or the use of specific optics software.\nNow, the question here is how to design an experiment in a way that is quantitatively meaningful,\nis easy to describe and analyze theoretically, and, very importantly, does not require expensive\n(professional) laboratory material, relying only on what one usually has at hand in typical teaching\nlaboratories.\nThe setup proposed in \\cite{choi:OptExp:2014,choi:OptExp:2015,howell:video2} produces\nvery spectacular results (just what everyone expects from this kind of experience), but our\nexperience in the laboratory with 
this kind of OCD is that students lose track very quickly of the\nphysics behind it, because in the end everything rests on apparent sizes, semi-qualitative analyses\nand, of course, the amusing cloaking effect when they interpose objects (or even their hands and\nfaces).\nHence, instead, we considered an observer-independent alternative setup, better suited\nto quantitative measurements and to studying the physics involved (and less prone to\ndistractions), where cloaking is not directly observed by looking through the device, but is analyzed in terms\nof the effects the hidden object has on the image of the observed object.\n\nHere, we report on theory and measurements performed with the above-mentioned OCD.\nThe work combines a preliminary theoretical\nanalysis with a subsequent experimental development of a simple OCD.\nThe theoretical analysis is based on the transfer matrix method applied to optics \\cite{pedrotti-bk},\nwhich allows the student to understand the physics of the optical cloaking phenomenon by\ninvestigating imaging through a lens system in a relatively simple fashion (no advanced knowledge\non the issue is actually required, just simple matrix algebra).\nThis analysis allows one to determine optimal cloaking conditions.\nAccordingly, the simpler, optimal configuration consists of two sets of lenses with different\nfocal lengths arranged in the form of two Kepler-type telescopes faced by their eyepieces.\nThe analysis also provides the distance that should be considered between those ``eyepieces''.\nWith these data, the device is then mounted with cheap resources (as we have confirmed,\nthere is no need for high-quality, expensive lenses to observe the phenomenon) available in\nany teaching laboratory.\nIn this regard, we have noticed that, even with the lowest-quality lenses available in our laboratory\n(formerly used to study the effects of aberrations), the amount of cloaking obtained is already\nremarkable.\nFurthermore, the presence of spherical and chromatic aberrations is exploited to advantage\nto determine unambiguously the limits of our optimal cloaking conditions (beyond these limits,\naberration effects quickly start to dominate the observed image).\n\nThe work has been organized as follows.\nTo be self-contained, in Sec.~\\ref{sec2} the basic theory on the transfer matrix method is\nintroduced, which is based on Gaussian paraxial optics (small-angle approximation) and\ntherefore does not require expert-level knowledge.\nAn analysis and discussion of the application of this approach to cloaking with systems with an\nincreasing number of lenses is also presented, in order to lay down the theory that the experiment is\nbased on.\nThe experimental setup considered here is described in Sec.~\\ref{sec3}, discussing some of\nthe main aspects that have been taken into consideration for its implementation as well as\nthe main outputs.\nTo conclude, a series of final remarks are summarized in Sec.~\\ref{sec4}.\n\n\n\\section{Theoretical analysis}\n\\label{sec2}\n\nThe OCD implemented in Ref.~\\cite{choi:OptExp:2014} worked in a very simple manner:\nwhen looking through the OCD, any object observed behind it should be seen as if there\nwas nothing between such an object and our eyes, even if we introduce another object\ninside the OCD.\nThis behavior can be summarized in terms of the following two conditions:\n\\begin{enumerate}\n \\item The image of the observed object must be direct, virtual, and 
with unitary\n magnification.\n\n \\item The OCD has some extension, henceforth denoted with $L$, so that the hidden object\n can be accommodated somewhere inside it.\n\\end{enumerate}\nAccordingly, the rays leaving the object will reach our eyes with minimal influence\nfrom the hidden object or the OCD itself.\n\nThe variant proposed here works in a similar way, although instead of a virtual image of a\nbackground object, we are going to focus on the real image produced by an illuminated object.\nThat is, instead of analyzing cloaking by direct observation, we are going to analyze it by\nobserving a projected image, although the theoretical analysis is equally applicable to both.\nConsequently, for optimal cloaking conditions, such an image should be unaffected by the\npresence of the OCD itself or an object hidden inside it.\n\nIt should also be mentioned that the above condition (i) is not essential regarding\ncloaking, unless the OCD is also required to be hidden from sight, i.e., we do not wish\nto notice the presence of the OCD, but only the background as it appears when there is\nnothing in front of it (along the direct line of sight).\nThis is worth mentioning, because it opens up the possibility of constructing alternative\nOCDs in a way that, even if their configuration is not exactly the same as the one\nreported in \\cite{choi:OptExp:2014}, still the cloaking phenomenon can be observed.\n\n\n\\subsection{Elementary aspects of the transfer matrix method}\n\\label{sec21}\n\nIn Gaussian paraxial optics, when dealing with simple lens systems, imaging is typically determined\nby means of ray tracing.\nAn efficient way to tackle the issue when the number of optical elements increases (which, from a\npractical point of view, essentially means considering more than two or three lenses) is by making\nuse of the so-called transfer matrix method \\cite{pedrotti-bk}.\nThis is an easy-to-handle input\/output method based on the linear relationship between the object\n(input) and its conjugate image (output), independently of the number of optical elements (lenses\nand mirrors) placed between them.\nSuch a relationship is given in terms of the so-called $ABCD$ matrix,\n\\begin{equation}\n \\mathbb{M} =\n \\left( \\begin{array}{cc}\n A & B \\\\ C & D\n \\end{array} \\right) ,\n \\label{eqA}\n\\end{equation}\nwhere each element is directly related to a property of the optical system itself, if the matrix\nis computed between its two boundary surfaces, or of the imaging process, if it is defined from the\nobject plane to the image one.\n\nTo better understand this basic concept, consider an object point $P_O$ and its conjugate image\npoint $P_I$.\nThe point $P_O$ is at a height $h_O$ off the optical axis and $P_I$ is at $h_I$.\nBoth points can be joined by a swarm of rays, all taking the same time to go from one to the\nother, according to Fermat's principle of least time.\nLet us consider one of such rays.\nThis ray leaves $P_O$ at an angle $\\alpha_O$ with respect to the direction of the optical axis,\nand reaches $P_I$ with an angle $\\alpha_I$ (also with respect to the optical axis).\nAlthough it is not shown here (it is not difficult to prove either), $h$ and $\\alpha$ are\nthe only two parameters we need to characterize the imaging process in paraxial optics.\nThe relationship between the input (object) properties, $(h_O,\\alpha_O)$, and the output (image)\nones, $(h_I,\\alpha_I)$, is described by a linear matrix transformation, $\\mathbb{M}$, which\ntransfers the former to the 
latter.\nIf these properties are recast in vector form, we have\n\\begin{equation}\n {\\bf p}_I = \\left( \\begin{array}{c} h_I \\\\ \\alpha_I \\end{array} \\right)\n = \\left( \\begin{array}{cc} A & B \\\\ C & D \\end{array} \\right)\n \\left( \\begin{array}{c} h_O \\\\ \\alpha_O \\end{array} \\right)\n = \\mathbb{M} {\\bf p}_O .\n \\label{eqC}\n\\end{equation}\n\nAccording to Eq.~(\\ref{eqC}), the height and inclination of the image point are given,\nrespectively, by\n\\begin{eqnarray}\n h_I & = & A h_O + B \\alpha_O ,\n \\label{eqE} \\\\\n \\alpha_I & = & C h_O + D \\alpha_O ,\n \\label{eqF}\n\\end{eqnarray}\nfrom which it is readily seen that $A$ and $D$ are dimensionless parameters,\nwhile $B$ and $C$ have length and inverse-length dimensions, respectively.\nThe dimensionality of these matrix elements can easily be understood by noting that\n$A$ is related to the linear magnification of the image with respect to the object\n(perpendicularly measured from the optical axis, i.e., the ratio $h_I\/h_O$), while $D$\nis related to the angular magnification, which describes the apparent size with respect\nto the object (i.e., $\\alpha_I\/\\alpha_O$).\nRegarding the elements $B$ and $C$, they are associated with the positions of the focal\nplanes of the optical system, through the ratios $h_I\/\\alpha_O$ and $\\alpha_I\/h_O$,\nrespectively, taking its input and output planes as a reference.\n\n\n\\subsection{Transfer matrix for an $N$-lens system}\n\\label{sec22}\n\nIn the particular case we are going to deal with here, only two types of matrices are needed, namely a matrix\ndescribing the passage of light through a single thin lens, which in paraxial form reads as\n\\begin{equation}\n \\mathbb{L} =\n \\left( \\begin{array}{cc}\n 1 & 0 \\\\ - 1\/f & 1\n \\end{array} \\right) ,\n \\label{eq1}\n\\end{equation}\nwith $f$ being the lens effective focal length, and a translation matrix,\n\\begin{equation}\n \\mathbb{T} =\n \\left( \\begin{array}{cc}\n 1 & d \\\\ 0 & 1\n \\end{array} \\right) ,\n \\label{eq2}\n\\end{equation}\naccounting for the transit of ray bundles through an empty space of length $d$\n(this can be the space between two consecutive lenses, or just the distance between\nthe object and the first lens and the distance from the last lens to the image, as will\nbe seen below).\n\nFrom the above considerations, (i) and (ii), an ideal OCD should behave analogously to a single\ntranslation matrix with $d = L$, i.e.,\n\\begin{eqnarray}\n h_I & = & h_O + L \\alpha_O ,\n \\label{eqEocd} \\\\\n \\alpha_I & = & \\alpha_O ,\n \\label{eqFocd}\n\\end{eqnarray}\nso that the image has the same size and orientation as the object when looked at from any\ndirection (that is, any $\\alpha_O \\neq 0$).\nThus, let us consider a system of $N$ lenses with their centers aligned along the system\noptical axis.\nIn this system, $f_n$ is the effective focal length of the $n$th lens and $d_{n-1}$ denotes\nthe distance between the $n$th and $(n-1)$th lenses, with $d_0 \\equiv 0$.\nImaging in this system is determined by the matrix product\n\\begin{equation}\n \\mathbb{M}_N = \\mathbb{L}_N \\mathbb{T}_{N-1} \\mathbb{L}_{N-1}\n \\cdots \\mathbb{L}_2 \\mathbb{T}_1 \\mathbb{L}_1\n = \\overleftarrow{\\Pi}_{n=1}^N \\mathbb{S}_n ,\n \\label{eq3}\n\\end{equation}\nwith\n\\begin{equation}\n \\mathbb{S}_n \\equiv \\mathbb{L}_n \\mathbb{T}_{n-1} ,\n \\label{eq4}\n\\end{equation}\nand where the arrow over the product symbol ($\\Pi_n$) denotes that each new\nproduct element $n$ has to be added to the left instead of to the 
right.\nNotice that for $n=1$, we have $\\mathbb{T}_0 = \\mathbb{I}$, since $d_0 = 0$.\n\nWhen the explicit form of the matrices (\\ref{eq1}) and (\\ref{eq2}) is\nsubstituted into (\\ref{eq4}), with $f_n$ and $d_{n-1}$ instead of $f$ and $d$,\nrespectively, the product matrix (\\ref{eq3}) reads as\n\\begin{equation}\n \\mathbb{M}_N =\n \\overleftarrow{\\Pi}_{n=1}^N\n \\left( \\begin{array}{cc}\n 1 & d_{n-1} \\\\ - 1\/f_n & 1 - d_{n-1}\/f_n\n \\end{array} \\right) ,\n \\label{eq5}\n\\end{equation}\nwhich is of the form\n\\begin{equation}\n \\mathbb{M}_N =\n \\left( \\begin{array}{cc}\n A_N & B_N \\\\ C_N & D_N\n \\end{array} \\right) .\n \\label{eq6}\n\\end{equation}\nThis matrix is to be compared with the total translation matrix that represents\nthe ideal OCD, i.e.,\n\\begin{equation}\n \\mathbb{M}_N =\n \\left( \\begin{array}{cc}\n 1 & L \\\\ 0 & 1\n \\end{array} \\right) ,\n \\label{eq7}\n\\end{equation}\nwith $L = \\sum_{n=1}^N d_{n-1}$.\nBy comparing matrices (\\ref{eq6}) and (\\ref{eq7}) element by element, we obtain the\nset of equations\n\\begin{eqnarray}\n A_N & = & 1 ,\n \\label{eq8a}\n \\\\\n B_N & = & L ,\n \\label{eq8b}\n \\\\\n C_N & = & 0 ,\n \\label{eq8c}\n \\\\\n D_N & = & 1 ,\n \\label{eq8d}\n\\end{eqnarray}\nwhich are used to design the OCD.\nIn compliance with Eqs.~(\\ref{eqEocd}) and (\\ref{eqFocd}), the fact that $A_N$ and $D_N$ are both unitary means that the image\nproduced by the device of an object located to its right must also have\nunitary lateral and angular magnification (image equal to object) even if\nthere is a cloaked object inside it.\nThat is, the picture collected by lens 1 is directly transferred to lens $N$,\nwithout any further optical operation, as inferred from the fact that\n$B_N = L$.\nMoreover, like a telescope, the system is afocal, since $C_N = 0$.\n\nNext we are going to consider systems with $N$ ranging from 2 to 4 in order\nto better understand how the cloaking property works.\nThe case $N=1$ is not considered, because it is the trivial one given by (\\ref{eq1})\nwhen instead of a lens we have a very thin plate (with negligible thickness).\nObviously, this can never be an OCD, because there is no room to hide an object\ninside it.\nWe thus need to take into account that the conditions specified by Eqs.~(\\ref{eq8a})\nto (\\ref{eq8d}) are necessary for cloaking, but not sufficient.\n\n\n\\subsection{Two-lens system}\n\\label{sec23}\n\nFor a two-lens system, after proceeding with the product of matrices, the matrix\nelements entering conditions (\\ref{eq8a}) to (\\ref{eq8d}) read as\n\\begin{eqnarray}\n A_2 & = & 1 - \\frac{d_1}{f_1} ,\n \\label{eq9a}\n \\\\\n B_2 & = & d_1 ,\n \\label{eq9b}\n \\\\\n C_2 & = & - \\frac{1}{f_1} - \\frac{1}{f_2} + \\frac{d_1}{f_1 f_2} ,\n \\label{eq9c}\n \\\\\n D_2 & = & 1 - \\frac{d_1}{f_2} .\n \\label{eq9d}\n\\end{eqnarray}\nThis system is analogous to a thick lens with thickness $t = d_1$ and refractive\nindex $n_L = 1$.\nThe term $-C_2$ provides us with the equivalent power of the system, also known as Gullstrand's\nequation \\cite{milton-bk}, from which the system focal length is readily obtained: $f = -1\/C_2$.\n\nIf we apply the condition (\\ref{eq8c}) to Eq.~(\\ref{eq9c}), we\nfind\n\\begin{equation}\n f_1 + f_2 = d_1 ,\n \\label{eq10}\n\\end{equation}\ni.e., for the system to be afocal, the distance between the two lenses must be equal to\nthe sum of their respective focal lengths.\nThis is precisely the condition that makes a telescope afocal.\n
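This is easy to check numerically. The short Python sketch below (a minimal illustration with hypothetical helper names; focal lengths and distances in cm) composes the matrices of Eqs.~(\\ref{eq1}) and (\\ref{eq2}) as prescribed by Eq.~(\\ref{eq3}), and confirms that a two-lens system satisfying Eq.~(\\ref{eq10}) is indeed afocal ($C_2 = 0$), although its magnification elements are far from unity:\n\\begin{verbatim}\nimport numpy as np\n\ndef lens(f):\n    # Thin-lens matrix in paraxial form.\n    return np.array([[1.0, 0.0], [-1.0 \/ f, 1.0]])\n\ndef translation(d):\n    # Free-space translation over a distance d.\n    return np.array([[1.0, d], [0.0, 1.0]])\n\ndef system_matrix(focals, gaps):\n    # M_N = L_N T_{N-1} ... L_2 T_1 L_1; new factors multiply on the left.\n    M = np.eye(2)\n    for n, f in enumerate(focals):\n        if n > 0:\n            M = translation(gaps[n - 1]) @ M\n        M = lens(f) @ M\n    return M\n\n# Two lenses separated by d1 = f1 + f2: afocal (C = 0), but the\n# magnifications are A = 1 - d1\/f1 = -0.25 and D = 1 - d1\/f2 = -4.\nf1, f2 = 20.0, 5.0\nprint(system_matrix([f1, f2], [f1 + f2]))\n\\end{verbatim}\nHere the two focal lengths anticipate the values used in the experimental implementation of Sec.~\\ref{sec3}.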
However, contrary to a telescope, where we are interested in large magnification factors,\nin the case of the OCD we are looking for unitary magnification.\nThus, in order to ensure that the magnification elements (\\ref{eq9a}) and (\\ref{eq9d}) are\nnearly unitary, both focal lengths, $f_1$ and $f_2$, must be much larger than $d_1$, so that\n$d_1\/f_1 \\ll 1$ and $d_1\/f_2 \\ll 1$.\nWhen this condition is satisfied, we find that, although there is room to place an object\ninside this two-lens device, the situation is again analogous to the above single-lens\ncase: cloaking conditions lead to a non-cloaking device formed by two plane-parallel\nplates.\nTherefore, this is another example where conditions (\\ref{eq8a}) to (\\ref{eq8d}) are\nnecessary for cloaking, but not sufficient.\n\n\n\\subsection{Three-lens system}\n\\label{sec24}\n\nIn the case of three lenses, the elements of the matrix $\\mathbb{M}_3$ describing\nthe system read as\n\\begin{eqnarray}\n A_3 & = & 1 - \\frac{L}{f_1} - \\left( 1 - \\frac{d_1}{f_1} \\right) \\frac{d_2}{f_2} ,\n \\label{eq11a}\n \\\\\n B_3 & = & L - \\frac{d_1 d_2}{f_2} ,\n \\label{eq11b}\n \\\\\n C_3 & = & - \\frac{1}{f_1} - \\frac{1}{f_2} - \\frac{1}{f_3}\n + \\left( \\frac{d_1}{f_1} + \\frac{d_2}{f_3} \\right) \\frac{1}{f_2}\n \\nonumber \\\\ & &\n + \\left( L - \\frac{d_1 d_2}{f_2} \\right) \\frac{1}{f_1 f_3} ,\n \\label{eq11c}\n \\\\\n D_3 & = & 1 - \\frac{L}{f_3} - \\left( 1 - \\frac{d_2}{f_3} \\right) \\frac{d_1}{f_2} ,\n \\label{eq11d}\n\\end{eqnarray}\nwith $L = d_1 + d_2$.\nAs before, we find that $f_2$ must be very large in order for condition (\\ref{eq8b})\nto be satisfied, which means that the second lens essentially behaves as a thin glass layer that\ndoes not affect the other components of the system.\nActually, this leads to a two-lens system analogous to the previous one, for which the\nsame equations hold after replacing $f_2$ by $f_3$ and $d_1$ by $L$ in Eqs.~(\\ref{eq9a})--(\\ref{eq9d}).\n\nThere is, however, an alternative non-trivial solution.\nBy inspecting the magnification terms, $A_3$ and $D_3$, we notice that they display\nsome symmetry when $f_1$ and $f_3$ are exchanged, and also when the same is done with\n$d_1$ and $d_2$.\nIf we apply conditions (\\ref{eq8a}) and (\\ref{eq8d}), we obtain the following relation\n\\begin{equation}\n \\frac{f_1}{f_3} = \\frac{d_1}{d_2} .\n \\label{eq12}\n\\end{equation}\nThis means that, if the cloaking device is designed with inversion symmetry (it should\nnot matter whether we look through the front or through the back), then we can consider\nas a convenient working hypothesis that $d_1 = d_2 = L\/2$, which leads to $f_1 = f_3 = f$.\nMaking the corresponding substitutions in either Eq.~(\\ref{eq11a}) or Eq.~(\\ref{eq11d}),\nwith the cloaking conditions (\\ref{eq8a}) or (\\ref{eq8d}), and solving for $f_2$, we obtain\na non-vanishing value for this focal length:\n\\begin{equation}\n f_2 = \\frac{L - 2f}{4} .\n \\label{eq13}\n\\end{equation}\nIt can readily be noticed that, if this condition is substituted into (\\ref{eq11c}),\nthis matrix element vanishes.\nThe resulting matrix then reads as\n\\begin{equation}\n \\mathbb{M}_3 =\n \\left( \\begin{array}{cc}\n 1 & L\/(1 - L\/2f) \\\\ 0 & 1\n \\end{array} \\right) ,\n \\label{eq14}\n\\end{equation}\nwhich, for $2f \\gg L$, can be recast as\n\\begin{equation}\n \\mathbb{M}_3 \\approx\n \\left( \\begin{array}{cc}\n 1 & L \\left( 1 + L\/2f \\right) \\\\ 0 & 1\n \\end{array} \\right) .\n \\label{eq15}\n\\end{equation}\nIf the term $L\/2f$ can be neglected ($L \\ll f$), then the matrix (\\ref{eq15}) acquires\nthe form of (\\ref{eq7}) and, in principle, this condition might allow cloaking.\nMoreover, when applied to Eq.~(\\ref{eq13}), this assumption implies that $f_2$ must\nbe a negative (divergent) lens, since $f_2 \\approx - f\/2$, unlike $f_1$ and $f_3$, which are both\npositive (convergent).\nTherefore, we find that the first non-trivial OCD can be achieved by playing with\nlenses with relatively large focal lengths and different vergence.\n
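As a quick consistency check of this three-lens solution, we can reuse the system_matrix helper sketched above (again a hypothetical snippet, with lengths in cm):\n\\begin{verbatim}\n# Three-lens check: f1 = f3 = f, d1 = d2 = L\/2, f2 = (L - 2f)\/4.\nf, L = 100.0, 20.0                 # f >> L, as required\nf2 = (L - 2.0 * f) \/ 4.0           # = -45 cm: a divergent lens\nprint(system_matrix([f, f2, f], [L \/ 2.0, L \/ 2.0]))\nprint(L \/ (1.0 - L \/ (2.0 * f)))   # B = L\/(1 - L\/2f) ~ 22.2 cm\n\\end{verbatim}\nThe printed matrix has $A_3 = D_3 = 1$ and $C_3 = 0$, but $B_3 \\approx 22.2$~cm rather than $L = 20$~cm, which illustrates why the condition $L \\ll f$ is needed for the device to mimic Eq.~(\\ref{eq7}).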
\\subsection{Four-lens system}\n\\label{sec25}\n\nLet us now consider a setup consisting of four lenses.\nProceeding as before, the elements of the corresponding $\\mathbb{M}_4$\nmatrix read as\n\\begin{eqnarray}\n\\fl A_4 & = & 1 - \\frac{L}{f_1} - \\left( 1 - \\frac{d_1}{f_1} \\right)\n \\left( d_2 + d_3 - \\frac{d_2 d_3}{f_3} \\right) \\frac{1}{f_2}\n - \\left[ 1 - \\frac{(L - d_3)}{f_1} \\right] \\frac{d_3}{f_3} ,\n \\label{eq16a}\n \\\\\n\\fl B_4 & = & L - \\frac{\\left( L - d_1 \\right) d_1}{f_2}\n - \\frac{\\left( L - d_3 \\right) d_3}{f_3} + \\frac{d_1 d_2 d_3}{f_2 f_3} ,\n \\label{eq16b}\n \\\\\n\\fl C_4 & = & - \\frac{1}{f_1} - \\frac{1}{f_2} - \\frac{1}{f_3} - \\frac{1}{f_4} + \\frac{L}{f_1 f_4}\n + \\left( \\frac{1}{f_2} + \\frac{1}{f_3} \\right) \\left( \\frac{d_1}{f_1} + \\frac{d_3}{f_4} \\right)\n \\nonumber \\\\\n\\fl & & + \\left( \\frac{1}{f_1 f_3} + \\frac{1}{f_2 f_4} + \\frac{1}{f_2 f_3} \\right) d_2\n - \\left[ \\frac{\\left( L - d_1 \\right) d_1}{f_2} + \\frac{\\left( L - d_3 \\right) d_3}{f_3} \\right]\n \\frac{1}{f_1 f_4}\n \\nonumber \\\\\n\\fl & & - \\left( \\frac{d_1}{f_1} + \\frac{d_3}{f_4} - \\frac{d_1 d_3}{f_1 f_4} \\right) \\frac{d_2}{f_2 f_3} ,\n \\label{eq16c}\n \\\\\n\\fl D_4 & = & 1 - \\frac{L}{f_4} - \\left( 1 - \\frac{d_3}{f_4} \\right)\n \\left( d_1 + d_2 - \\frac{d_1 d_2}{f_2} \\right) \\frac{1}{f_3}\n - \\left[ 1 - \\frac{(L - d_1)}{f_4} \\right] \\frac{d_1}{f_2} ,\n \\label{eq16d}\n\\end{eqnarray}\nwith $L = d_1 + d_2 + d_3$.\nAgain, we can notice here a certain symmetry under the exchange of indices (both in focal lengths\nand in inter-lens distances) that can be exploited to determine\noptimal cloaking conditions.\n\nThus, as before, let us assume the OCD satisfies inversion symmetry.\nThis means that the focal lengths of the outermost lenses are equal ($f_1 = f_4 = f_\\alpha$),\nand the same holds for the innermost lenses ($f_2 = f_3 = f_\\beta$).\nMoreover, in order to preserve such symmetry, it is also required that the distances $d_1$\nand $d_3$ be equal.\nHence, from now on, $d_1 = d_3 = d_\\alpha$ and $d_2 = d_\\beta$.\nWith this, the matrix elements specified by Eqs.~(\\ref{eq16a}) to (\\ref{eq16d}) can be recast as\n\\begin{eqnarray}\n\\fl A_4 & = & 1 - \\left( \\frac{1}{f_\\alpha} + \\frac{1}{f_\\beta} \\right) L\n + \\frac{2 d_\\alpha (L - d_\\alpha)}{f_\\alpha f_\\beta}\n + \\frac{d_\\alpha d_\\beta}{f_\\beta^2} - \\frac{d_\\alpha^2 d_\\beta}{f_\\alpha f_\\beta^2} ,\n \\label{eq17a}\n \\\\\n\\fl B_4 & = & L - \\left( L - d_\\alpha \\right) \\frac{2 d_\\alpha}{f_\\beta} + \\frac{d_\\alpha^2 d_\\beta}{f_\\beta^2} ,\n \\label{eq17b}\n \\\\\n\\fl C_4 & = & - \\frac{2}{f_\\alpha} - \\frac{2}{f_\\beta} + \\frac{L}{f_\\alpha^2} + \\frac{d_\\beta}{f_\\beta^2}\n + \\frac{2 L}{f_\\alpha f_\\beta} - \\frac{2 \\left( L - d_\\alpha \\right) d_\\alpha}{f_\\alpha^2 f_\\beta}\n - \\frac{2 d_\\alpha d_\\beta}{f_\\alpha f_\\beta^2}\n + \\frac{d_\\alpha^2 d_\\beta}{f_\\alpha^2 f_\\beta^2} ,\n \\label{eq17c}\n \\\\\n\\fl D_4 & = & 1 - \\left( \\frac{1}{f_\\alpha} + \\frac{1}{f_\\beta} \\right) L\n + 
\\frac{2 d_\\alpha (L - d_\\alpha)}{f_\\alpha f_\\beta}\n + \\frac{d_\\alpha d_\\beta}{f_\\beta^2} - \\frac{d_\\alpha^2 d_\\beta}{f_\\alpha f_\\beta^2} .\n \\label{eq17d}\n\\end{eqnarray}\nFrom the application of (\\ref{eq8b}) to (\\ref{eq17b}), we obtain\n\\begin{equation}\n f_\\beta = \\frac{d_\\alpha d_\\beta}{2(L - d_\\alpha)} ,\n \\label{eq18}\n\\end{equation}\nwhich avoids making further assumptions on the relative size of $f_\\beta$ with respect to $L$,\nas in the previous cases, and hence allows some freedom of choice.\nWith this result and applying (\\ref{eq8a}) to (\\ref{eq17a}), we find the value of the other focal length,\n\\begin{equation}\n f_\\alpha = \\frac{d_\\alpha L}{2(L - d_\\alpha)} .\n \\label{eq19}\n\\end{equation}\nAs can easily be noticed, these two focal lengths satisfy the relation\n\\begin{equation}\n d_\\alpha = f_\\alpha + f_\\beta .\n \\label{eq20}\n\\end{equation}\nThis means that the configuration of the OCD is such that the set of lenses 1 and 2, on the\none hand, and the set of lenses 3 and 4, on the other hand, each form a telescope, one in\nfront of the other.\nWithin this configuration, lenses 1 and 4 play the role of the objectives, while lenses 2 and 3\nplay that of the eyepieces, since $f_\\alpha > f_\\beta$, as inferred from the relation\n\\begin{equation}\n \\frac{f_\\alpha}{f_\\beta} = \\frac{L}{d_\\beta} .\n \\label{eq21}\n\\end{equation}\nThis relation also gives the magnification of each telescope.\nThus, we can see that the magnified image of a given object placed before the first telescope\nis reversed by the other, so that the total magnification becomes unitary.\nThis is precisely the counterpart in geometrical optics of the invisibility recipe in terms\nof transfer matrices found by S\\'anchez-Soto and coworkers for general electromagnetic fields\n\\cite{sanchezsoto:EJP:2008,sanchezsoto:PhysRep:2012}.\nFurthermore, it should also be noticed that the condition leading to the cancellation of the\nelement $C_4$, in compliance with the functional form (\\ref{eq7}), is precisely (\\ref{eq20}),\nwhich can easily be shown by direct substitution.\n\nTaking into account that $L = 2d_\\alpha + d_\\beta$ in (\\ref{eq21}) and the value of $d_\\alpha$,\ngiven by (\\ref{eq20}), we can now obtain the value for $d_\\beta$, which reads as\n\\begin{equation}\n d_\\beta = \\frac{2 \\left( f_\\alpha + f_\\beta \\right) f_\\beta}{f_\\alpha - f_\\beta} .\n \\label{eq22}\n\\end{equation}\nThis provides us with an exact solution (condition) for optical cloaking if we have two\nsets of two lenses with focal lengths $f_\\alpha$ and $f_\\beta$.\nThe total size of the cloaking device will be\n\\begin{equation}\n L = 2 d_\\alpha + d_\\beta\n = \\frac{2 \\left( f_\\alpha + f_\\beta \\right) f_\\alpha}{f_\\alpha - f_\\beta} .\n \\label{eq23}\n\\end{equation}\n
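These design formulas can be checked with the same numerical machinery as before (a hypothetical sketch; lengths in cm). With $f_\\alpha = 20$~cm and $f_\\beta = 5$~cm, the values used in the experimental implementation below, Eqs.~(\\ref{eq20}), (\\ref{eq22}) and (\\ref{eq23}) give $d_\\alpha = 25$~cm, $d_\\beta \\approx 16.7$~cm and $L \\approx 66.7$~cm:\n\\begin{verbatim}\n# Four-lens OCD built from two facing telescopes.\nfa, fb = 20.0, 5.0                     # focal lengths (cm)\nda = fa + fb                           # d_alpha = 25 cm\ndb = 2.0 * (fa + fb) * fb \/ (fa - fb)  # d_beta ~ 16.7 cm\nprint(2.0 * da + db)                   # total length L ~ 66.7 cm\nprint(system_matrix([fa, fb, fb, fa], [da, db, da]))\n\\end{verbatim}\nThe printed matrix is exactly $((1, L), (0, 1))$, i.e., the pure translation of Eq.~(\\ref{eq7}), with no small-parameter approximation involved, unlike in the three-lens case.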
\\section{Experimental implementation}\n\\label{sec3}\n\n\n\\subsection{General aspects}\n\\label{sec31}\n\nIn the analysis presented in the previous section, cloaking has been investigated within an ideal scenario based on the\nfollowing assumptions:\n\\begin{itemize}\n \\item Paraxial conditions are always guaranteed.\n \\item Lenses are aberration-free.\n \\item There are no aperture effects associated with the finite diameter of the lenses considered.\n\\end{itemize}\nObviously, these are ideal conditions that simplify the theoretical analysis, as we have seen above, but that have to be taken\ninto account when considering sets of standard lenses, as is the case here, where we actually were not so much concerned\nabout getting a high degree of cloaking as about constructing a relatively simple and cheap device that would allow us to study this\nphenomenon.\nIn any case, all these limitations are advantageous from a teaching perspective, since they can be used to better characterize\nthe cloaking conditions.\nFurthermore, this is the reason why a projective OCD has been chosen instead of a more appealing direct-sight OCD.\n\nThe projective OCD prepared here is based on some basic optical properties.\nConsider that we illuminate a transparent slide and project its image on a somewhat distant wall (or projection screen if the optical\nbench is long enough).\nWhen the OCD is inserted between the object and its image, if it has been properly implemented, its presence should not\naffect the image, at least not to a great extent, except for a reduction of luminosity due to the many lenses\ninvolved in the setup.\nMoreover, if an additional object is inserted inside the OCD (the ``hidden object''), its presence should not affect\nthe image projected on the wall either.\nIn spite of the difference in its performance with respect to a direct-sight OCD, notice that the theory introduced in the\nprevious section is still applicable, since the operation principle is the same (i.e., it does not matter whether we look through\nthe OCD or we make the light from an object cross it and form an image beyond it).\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.85\\textwidth]{figure1.png}\n \\caption{\\label{Fig1}\n Full OCD experimental setup used here.\n In order to better appreciate the cloaking effect, a ``twin'' imaging system is accommodated side by side,\n so that the images produced by both can be compared.\n As can be seen, the setup consists of four basic elements: a light source with the object (a transparent\n slide with different shapes depicted), the projective lens system (see Sec.~\\ref{sec32}), the OCD itself,\n and a series of diaphragms with variable diameter.}\n\\end{figure}\n\nA photograph of the full experimental setup implemented here to investigate optical cloaking, as it has been used\nduring the different experiments carried out, is displayed in Fig.~\\ref{Fig1}.\nIt consists of two standard optical benches with a length of about one meter and a half, a set of simple teaching lenses\nwith different focal lengths, several iris-type diaphragms, and two halogen lamps connected to 6~V\/12~V DC power\nsupplies.\nThe setup essentially consists of the object light sources (surrounded by a blue dashed line), the projective system\n(orange dashed line), and the OCD (enclosed by a white dashed line).\nThe hidden objects (plates with iris-type diaphragms) are all on display (their positions denoted with green arrows),\nalthough depending on the experiment performed they could all be mounted, or only one of them.\n\nWe have considered two optical benches, because the OCD is going to be mounted on one of them, while the other\nis just an idler partner.\nThe purpose of the idler bench is to produce a reference image that is not affected by any of the effects due to\neither the OCD or the hidden object.\nThese two benches were aligned side-by-side very close together, such that the image produced with the OCD\ncould be easily compared with the idler one by direct sight (it was difficult, though, to get an optimal photograph\nof them, as can be inferred from Fig.~\\ref{Fig5}).\nThus, when the OCD is not mounted, the two optical systems produce 
exactly the same image (see Sec.~\\ref{sec32});\nwhen the OCD is mounted, the presence of the idler image allows us to determine to what extent cloaking is achieved\nand its quality (particularly, to detect any change in the size of the image, presence of aberrations or decrease in the\nluminosity).\n\nThe lenses considered have focal lengths of $+5$~cm, $+10$~cm, and $+20$~cm, all of them with a diameter\nof 3.5~cm and mounted on opaque $10\\times 11$~cm$^2$ rectangular frame plates.\nThese lenses are used to construct both the projective system and the OCD (see below).\nAs mentioned above, the role of hidden object is played here by a series of iris-type diaphragms mounted on plates with\nthe same features as those for the lenses.\nThese diaphragms all have maximum and minimum diameters of 3~cm and 0.1~cm, respectively.\nThey allow us to determine the maximum area that can be covered inside the OCD, while still observing the image\nwithout much distortion (aberration effects) or remarkable losses of luminosity.\nOr, in other words, the regions around the transferred bundle of rays where an object can be hidden without noticing\nits presence.\nIt is worth stressing the fact that, because of the negligible thickness of the stop blades (and even the frame\nwhere they are accommodated), this working method is ideal for determining the optimal distances or longitudinal ranges\n(along the bench) where neither the position nor the diameter of the diaphragm (or diaphragms) affects too much\nthe projected image, that is, the tolerance ranges of the OCD to hide objects (see the discussions in this respect\nin Sec.~\\ref{sec33} regarding the different experiments carried out and the corresponding tolerance ranges found).\n\n\n\\subsection{The projective imaging system}\n\\label{sec32}\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=\\textwidth]{figure2.png}\n \\caption{\\label{Fig2}\n Ray-tracing diagram showing the effects on imaging due to the size limitation of the diameter of the lenses\n used in the projective system.\n As can be seen, due to such limitation, not every point of the illuminated object has an image.\n The original object is denoted with the shaded blue arrow on the left and its expected image with the inverted\n shaded blue arrow on the right.\n The final object and image are represented with the dark blue arrows (left and right, respectively).\n Limiting rays for the on-axis object point are displayed with orange dashed lines, while those for off-axis points are denoted\n with green and red dashed lines (arising from the top and bottom parts of the object, respectively).}\n\\end{figure}\n\nPrior to the practical implementation of the OCD, it is necessary to prepare the projective system that ensures a real\nimage with unitary linear magnification.\nTo that end, we have considered two lenses with the same focal length, $f = + 10$~cm.\nOne lens is positioned approximately at the focal distance away from the object (10~cm), while the other is at about the\nsame distance from the wall.\nThis particular arrangement, where both object and image are accommodated at the focal planes of the lenses\n(front and rear, respectively), ensures the production of a real image with the same size as the object, although\ninverted.\nThis can easily be seen from the ray diagram displayed in Fig.~\\ref{Fig2}, by inspecting the green and red rays (dashed\nlines) leaving, respectively, the top and bottom points of the object (dark blue arrow on the left).\nNevertheless, in terms of the transfer matrix 
method, if we construct the matrix from the object plane to the image one,\nwe have:\n\\begin{eqnarray}\n\\fl\n \\mathbb{M}_{IO} & = &\n \\left( \\begin{array}{cc} 1 & f \\\\ 0 & 1 \\end{array} \\right)\n \\left( \\begin{array}{cc} 1 & 0 \\\\ - 1\/f & 1 \\end{array} \\right)\n \\left( \\begin{array}{cc} 1 & L \\\\ 0 & 1 \\end{array} \\right)\n \\left( \\begin{array}{cc} 1 & 0 \\\\ - 1\/f & 1 \\end{array} \\right)\n \\left( \\begin{array}{cc} 1 & f \\\\ 0 & 1 \\end{array} \\right) \\nonumber \\\\\n\\fl\n & = & \\left( \\begin{array}{cc} - 1 & 0 \\\\ - 2\/f + L\/f^2 & - 1 \\end{array} \\right) ,\n \\label{eqIO}\n\\end{eqnarray}\nwhich is a symmetric matrix (it displays the same functional form going from the object $O$ to the image $I$\nas going the other way around).\nAs can be noticed, the lateral magnification (element $A$) and the angular one (element $D$) are both equal\nto $-1$, which denotes the fact that the image is inverted with respect to the object, although the size is the\nsame.\nRegarding the element $C$, its magnitude corresponds to the equivalent power for two identical lenses separated a distance\n$L$, according to Gullstrand's equation \\cite{milton-bk}, while a vanishing element $B$ denotes the fact that\nthe object and image planes are conjugate, with the input plane corresponding to the front (object) focal plane of the\nsystem and the output plane to the rear (image) focal plane.\n
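The matrix product of Eq.~(\\ref{eqIO}) can be reproduced with the numerical sketch of Sec.~\\ref{sec22} (again hypothetical helper names; lengths in cm):\n\\begin{verbatim}\n# Projective system: object at the front focal plane of the first lens,\n# image at the rear focal plane of the second one (f = 10, L = 84.7).\nf, Lp = 10.0, 84.7\nM_IO = (translation(f) @ lens(f) @ translation(Lp)\n        @ lens(f) @ translation(f))\nprint(M_IO)   # [[-1, 0], [L\/f^2 - 2\/f, -1]]: inverted, same-size image\n\\end{verbatim}\nThe value $L = 84.7$~cm anticipates the distance between the two projective lenses used below.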
In Fig.~\\ref{Fig2} it can also be noticed that the full object $O$, denoted with the shaded blue arrow on the left,\nis not going to produce an image $I$ (shaded blue arrow on the right).\nThis arises as a consequence of the aforementioned size limitation of the lenses, which prevents rays\ncoming from every point on the object from passing through the two-lens system and producing a full image.\nTypically, ray tracing in paraxial optics assumes that the extension of the object is relatively small compared to the diameter\nof the lens, by virtue of which sine and tangent functions can be approximated by the value of their arguments.\nIn realistic optical systems, like the one we are dealing with here, where the object is a rectangular transparent slide\nseveral centimeters wide and high, while the diameter of the lenses is smaller, the approximation works fine only for\nobject points off the system optical axis but still close to it; as object points move further away from the optical\naxis, the approximation breaks down and additional considerations are required in order to explain or determine the imaging\nprocess.\nIn principle, this would lead us to introduce more advanced technical notions, such as aperture and field stops.\nHowever, in order to keep the discussion here at the simplest level, which is one of the main purposes of the work,\nwe are going to further exploit the diagram of Fig.~\\ref{Fig2} and extract such information directly from it.\n\nThus, consider again the object denoted with the left shaded blue arrow.\nAny bundle of rays leaving any point along this object will be able to pass through the front lens.\nIf the object is accommodated on the front focal plane of the lens, then ray bundles leaving the same point will\nemerge parallel from the lens.\nIn the case of the on-axis object point, this is illustrated by the two orange dashed lines.\nAs can be seen, after reaching the rear lens, the corresponding bundle of parallel rays will merge into the on-axis\nimage point, at the back focal plane of such a lens.\nNow, if such ray bundles are also required to pass through a second lens, namely the rear lens here, this constitutes\na severe restriction, because not all the parallel ray bundles leaving the front lens will be able to reach, totally or even\npartially, the rear lens.\nThere will be ray bundles with such an inclination that, after having traveled the distance $L$, they will fall outside the diameter\nof the rear lens.\nFor example, in the diagram of Fig.~\\ref{Fig2} we notice that, compared to the on-axis object point, only half of the\nray bundle leaving the front lens can reach the rear one if the rays come either from the top of the dark blue arrow\n(see green dashed lines) or the bottom (red dashed lines).\nWhen these bundles cross the rear lens, they merge respectively into the bottom and top off-axis image points, denoted\nwith the dark blue arrow on the right.\nIt is clear that, because only half of the initial bundle that penetrated the front lens is going to reach the focal plane of\nthe rear lens, the luminosity of the image at those points will be lower than at points closer to the optical axis.\nThe same can be applied to object points further away from the optical axis, with a relatively quick loss of luminosity in\nthe corresponding image points.\n\nIn our particular case, taking also into account the points for which the ray bundle reaching the rear lens reduces to\na half, and considering a lens diameter of 3.5~cm and a distance between both lenses of $L = 84.7$~cm, a simple\ntrigonometry-based calculation renders an estimate for the size of the image of about 0.41~cm.\nThis is the same as saying that only an effective circular spot of such a diameter in the object is going to form a clear image,\neven if the illuminated area of the object is much larger.\nNonetheless, in practice we have noted that the image is a bit larger, namely a spot of about 0.7~cm (see Fig.~\\ref{Fig5}),\nwhich means that ray bundles coming from upper or lower points in the object are also going to contribute, although with\nsmaller luminosity and significantly affected by spherical and chromatic aberrations.\nFor instance, if we consider the limit of the off-axis object points for which only one of the corresponding outgoing rays is\ngoing to pass through both lenses, we find an estimate of the spot diameter of about 0.83~cm, although the borders of\nsuch a spot will be relatively dark.\nBy averaging with the previous value, we obtain a spot size of 0.62~cm, which is closer to the value observed experimentally.\nThis means that even object points contributing with less than about a quarter of the ray bundle that passes through the front\nlens are going to be significant.\nIn any case, these values are fine, because what we have used as a test object\/image is a picture of two parallel straight\nsegments of 0.2~cm length.\nNotice that with a smaller OCD, $L$ would also be smaller and therefore the projected image would be larger.\n
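The numbers quoted above follow from elementary trigonometry, as the following few lines summarize (a hypothetical sketch; all lengths in cm):\n\\begin{verbatim}\n# An object point at height h yields, after the front lens, a parallel\n# bundle tilted by h\/f; after traveling L, its center is displaced by\n# L*h\/f. Half the bundle misses the rear lens (diameter D) when this\n# displacement equals D\/2, and almost all of it when it equals D.\nD, f, Lp = 3.5, 10.0, 84.7\nd_half = D * f \/ Lp          # ~0.41 cm: spot where at least half survives\nd_edge = 2.0 * D * f \/ Lp    # ~0.83 cm: only one limiting ray survives\nprint(d_half, d_edge, (d_half + d_edge) \/ 2.0)   # average ~0.62 cm\n\\end{verbatim}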
Regarding this projective system, it is also worth mentioning that, if the light source and the two projective lenses are removed,\nand in the place of the original image on the wall we put a picture, when looking through the OCD we can see (although\naffected by some amount of aberration) the image of such a picture with exactly the same size and orientation.\nThis experiment was performed in order to confirm the conditions (i) and (ii) by direct sight, although the result was not\nas spectacular as in Ref.~\\cite{choi:OptExp:2014} and it was not possible to obtain any good quality photograph to be\nreported here.\nFurthermore, also notice that analogous size-limitation effects are going to be associated with the lenses of the OCD itself,\nalthough to a lesser extent, as will be seen.\n\n\n\\subsection{Experiments with the projective OCD}\n\\label{sec33}\n\nFigure~\\ref{Fig3} shows a top view (a) and a side view (b) of the full experimental setup\nused here.\nThe top view in panel (a) shows how the OCD almost occupies the full length between the lenses\nof the projective system (compare the lower bench, where the OCD is mounted, with the idler,\nwhich is the upper one), as well as the proximity between both benches.\nThe side view, in panel (b), gives an idea of the side-by-side alignment, very beneficial for\na direct comparison of the images $I$ (with the OCD) and $I'$ (idler).\n\nFor an easier identification of the different elements in Fig.~\\ref{Fig3}, the projective system lenses are denoted with $\\ell_O$ for\nthe front lens and $\\ell_I$ for the rear one (the same for the idler companion, but with primes).\nThe lenses constituting the OCD are denoted as $\\ell_i$, with $i = 1, 2, 3, 4$.\nThis criterion follows the same labeling used in the theoretical section; the lens closest to the observer there\nis precisely the one closest to the image in our case here.\nAccordingly, we have also labeled the three spaces generated by every two consecutive lenses as 1, 2 and 3,\nincreasing from the image to the object.\nHence, the distances spanned by these spaces are $d_1$, $d_2$ and $d_3$, with the total length of the OCD being\n$L = d_1 + d_2 + d_3$.\nAs for the diaphragms, to keep a consistent notation, they are referred to as $A_1$, $A_2$ and $A_3$ ($A$ for\naperture), which refers to the space where they are accommodated, although they are all identical.\nBecause of the decrease of luminosity in the image when the OCD is introduced, in the corresponding setup\na voltage of 12~V is supplied to the halogen bulb, while only 6~V was required in the idler companion;\notherwise, the image spot was too bright under the darkness conditions used for a better performance\nof the experiment (the photos of Figs.~\\ref{Fig1} and \\ref{Fig3} were taken under daylight conditions).\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=\\textwidth]{figure3.png}\n \\caption{\\label{Fig3}\n Top view (a) and side view (b) of the experimental setup used as OCD, where $\\ell_i$ ($i = 1, 2, 3, 4$) denote\n the positions of the lenses used and $A_j$ denotes the positions of three ($j = 1, 2, 3$) iris-type diaphragms.\n The distances between lenses $d_k$ (with $k = 1, 2, 3$) and the total length $L$ of the device are also shown.\n Notice in panel (a) that both projective systems, test and OCD, are replicas of one another.}\n\\end{figure}\n\nFigure~\\ref{Fig4} illustrates by means of a simple ray diagram the working principle of the OCD:\nthe rays leaving the on-axis object point (orange solid lines) pass through the device as if it\nwas not there (dashed lines).\nThis transit takes place by diverting the rays incident onto the OCD in the way indicated by\nthe red solid lines.\nNotice that this ray diversion is theoretically described by the matrix found in Sec.~\\ref{sec25}, which would\nreplace the central translation matrix in Eq.~(\\ref{eqIO}), in compliance with the fact that the effect of the four-lens\ntransfer matrix should be equivalent to having nothing along the path $L$ traveled by the incoming object rays.\nSpecifically, the lenses selected to build the OCD have focal lengths of 
$+20$~cm (for $\\ell_1$ and $\\ell_4$)\nand $+5$~cm ($\\ell_2$ and $\\ell_3$).\nAccordingly, from Eq.~(\\ref{eq20}), the distance between them is $d_1 = d_3 = 25$~cm, while\n$d_2 \\approx 16.7$~cm, from Eq.~(\\ref{eq22}).\nThe OCD was mounted in such a way that $\\ell_4$ was at 5.4~cm from $\\ell_O$ and $\\ell_1$ at 12.6~cm from $\\ell_I$.\nAs mentioned above, cloaking conditions have been investigated by using iris-type diaphragms.\nIt is clear that, as the diaphragm diameter is decreased, the bundle of rays will also decrease, which has\nan observable effect on the image.\nTherefore, by conveniently choosing a relatively narrow aperture that still allows the full bundle\nof rays to pass through, it is possible to determine a range of positions of the stop along which its\npresence will not alter the image.\nIn other words, the presence of the diaphragm will be cloaked unless its diameter is so small that it starts\naffecting the projected image.\nSeveral experiments were carried out with analogous results.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=\\textwidth]{figure4.png}\n \\caption{\\label{Fig4}\n Ray diagram illustrating the working principle of the OCD implemented here: the rays\n going from the object to the image (orange solid lines) pass through the device as if\n it was not there (orange dashed lines) by deviating them (red solid lines).}\n\\end{figure}\n\nThe first experiment performed consisted in determining the optimal cloaking conditions\nfor each diaphragm individually considered in the setup.\nNotice that, if the diaphragm diameter is reduced by a significant amount, by moving it\nalong the corresponding section of the OCD we should obtain a spatial range for which\nthe projected image is not affected.\nThis range will be the optimal cloaking region, that is, an observer at the position of\nthe projected image will not be able to perceive the presence of any object accommodated\nwithin such a range beyond the boundary defined by the corresponding iris diameter.\nWith this in mind, we selected an aperture of 0.5~cm for all three diaphragms, which\ncorresponds to about 2.8\\% of their maximum area when they are fully open.\nAccordingly, we have found the following optimal distances (tolerance ranges):\n\\begin{itemize}\n \\item For $A_1$, with a range going from 11.7~cm to 13.2~cm measured from $\\ell_1$, no important\n effects were observed in the projected image, such as loss of luminosity or appearance of\n chromatic aberrations (in the form of light color rings surrounding the image).\n This range lies around the center of the section (at 12.5~cm from $\\ell_1$).\n\n \\item For $A_2$, the range goes from 6.7~cm to 8.9~cm, measured from $\\ell_2$, which is also\n around the center of the section ($\\approx 8.4$~cm from $\\ell_2$).\n\n \\item For $A_3$, the range was between 9.7~cm and 11.6~cm, measured from $\\ell_3$, close to\n (although still below) the center of the section (12.5~cm from $\\ell_3$).\n\\end{itemize}\nA photograph of what can be seen projected onto the wall during the performance of\nthe experiment is displayed in Fig.~\\ref{Fig5}.\nThe two images, the one produced with the OCD (left) and the idler one (right), are shown\ntogether for comparison in panel (a).\nDue to the small size of the illuminated spots, the distance between them seems to be\nrelatively large, although there are only 10~cm between their centers.\nActually, the distortion observed (the kind of oval shape that they display) is due to\nthe perspective introduced by the camera.\nIn panel (b) we 
show only the image produced with the OCD, where the colored halo due to the\nincipient effects of chromatic aberration can be seen.\nNevertheless, it is worth mentioning that the effect appears much stronger than it\nactually is, because of the treatment of the image, which requires a high contrast for\na better visualization.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=0.95\\textwidth]{figure5.png}\n \\caption{\\label{Fig5}\n (a) Photograph of the illuminated spots that can be observed during the performance of\n the experiments reported here.\n The left spot corresponds to the bench with the OCD, while the right spot is the idler one.\n Although they are circular, here they present an oval shape due to the perspective introduced\n by the camera.\n (b) Enlargement of the left spot showing the colored halo produced by chromatic aberration\n in the limit of the optimal cloaking region (it appears much more enhanced than it actually is,\n because the photograph has received a high-contrast treatment in order to better\n visualize it).}\n\\end{figure}\n\nOnce cloaking ranges are set on each section of the OCD, one might wonder about the maximum\ncloaking achievable with this device.\nBy inspecting Fig.~\\ref{Fig4}, given that the ray bundle crossing the OCD is going to get\nnarrower between $\\ell_2$ and $\\ell_3$, we are going to focus on this section.\nBy moving the diaphragm $A_2$ from one of these lenses to the other and making its\ndiameter narrower, we finally found such optimal cloaking conditions at 7.7~cm from $\\ell_2$ (near the\ncentral value found in the previous experiment), for a diameter of 0.2~cm, i.e., the diaphragm\nopening is only about 0.4\\% of its maximum opening.\nFor these conditions, we noticed that:\n\\begin{itemize}\n \\item If the diameter was decreased at this position, there was a remarkably quick\n loss of luminosity in the projected image.\n\n \\item If the diaphragm was slightly displaced backwards, towards $\\ell_2$, vignetting-related\n effects became noticeable.\n\n \\item If the diaphragm was slightly displaced forward, towards $\\ell_3$, the effects of chromatic\n aberrations also appeared immediately (blue spots inside the image).\n\\end{itemize}\n\nOne might also wonder why, instead of the central section of the OCD, the first or\nthird sections were not considered in the regions where the rays cross the foci of their lenses.\nThe reason is very simple: although there is a minimum (zero) waist there for incident rays\nthat are parallel to the optical axis, rays incident with any other inclination will not\npass through such points, which constitutes an important inconvenience.\nNevertheless, this led us to consider a third experiment to test the robustness of the cloaking\ncondition rendered by the previous experiment.\nThat is, without changing either the diameter or the position of the diaphragm $A_2$, we decided\nto determine where inside the other sections of the OCD an object could be hidden without noticing\nits presence.\nThus, we selected a diameter of 0.4~cm for the other two diaphragms, namely $A_1$ and $A_3$.\nWith this diameter we ensured a reduced loss of luminosity in the projected image when all three\ndiaphragms were used.\nThe optimal positions for these diaphragms were 12.1~cm for $A_1$, measured from $\\ell_1$, and\n10.7~cm for $A_3$, measured from $\\ell_3$.\nNotice that none of them is close to the corresponding focus, but they are closer to the center\nof the corresponding section of the OCD.\nFurthermore, also notice that 
\section{Concluding remarks}
\label{sec4}

The main purpose of this work has been to provide startup tools to introduce the
optical cloaking phenomenon in the classroom, both at the theoretical level and at
the level of the teaching experiment.
This has been done by exploring the performance of what we have called a projective
optical cloaking device (OCD), where, instead of observing directly through the device
itself, as is the case in Ref.~\cite{choi:OptExp:2014}, the phenomenon is studied
and analyzed by inspecting the projected image of an illuminated object.
The main advantage of this setup is that it can be built with material currently
available in any optics teaching laboratory (organic teaching lenses), without the
need to rely on high-quality material, such as good-quality glass research lenses.
A series of experiments (these are just some examples, but many others
can be devised) has served to detect the phenomenon and also to determine optimal
cloaking regimes for the setup considered.
In this regard, we have advantageously used the fact that the lenses were not
aberration-free, which has allowed us to establish appropriate boundaries for the
cloaking regimes.
These regimes have been found to appear in all three sections of the OCD, with
cloaking ranges of about 2~cm within each section, although in
principle one would expect to detect cloaking only along the central section, where
the incident ray bundle gets narrower, as illustrated in Fig.~\ref{Fig3}.
In this sense, we have also seen how placing a number of lenses one after the other
becomes very important regarding cloaking, not only because of a remarkable reduction of
luminosity, but also because of aperture issues, exemplified by means of Fig.~\ref{Fig2}.

The OCD here has been built taking into account a theoretical analysis based on the transfer
matrix formulation of Gaussian paraxial optics.
The main reason why we have chosen this method instead of more conventional ones
based on ray tracing is that it stresses in a nice manner the input/output relationship
enabled by the analyzed system.
Although the transfer matrix method is not widely known, it is worth introducing at this
level, because the construction of the system matrix provides a general view of
the path followed by the rays, from the moment they depart from the object until they reach
the projection wall where the image is formed, without restricting ourselves to a limited set
of rays.
Nevertheless, as can be seen, deep knowledge of the method is not necessary,
but just a few aspects; the rest is just standard matrix algebra.
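To make this point explicit, the sketch below shows how the system matrix of a simple
three-lens train can be composed and applied with a few lines of general-purpose Python
code, in the same simple-tools spirit advocated here.
The focal lengths, separations, and object and wall distances are placeholder values
chosen only for illustration, not the parameters of the OCD described above.

\begin{verbatim}
import numpy as np

def T(d):   # translation over a distance d (cm)
    return np.array([[1.0, d], [0.0, 1.0]])

def L(f):   # thin lens of focal length f (cm)
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

# Placeholder parameters (NOT the values of the OCD described above):
f1, f2, f3 = 20.0, 10.0, 20.0   # focal lengths (cm), assumed
d12, d23 = 30.0, 15.0           # lens separations (cm), assumed
s_obj, s_img = 40.0, 60.0       # object and wall distances (cm), assumed

# System matrix, composed right to left in the order the ray meets each
# element: object-to-lens gap, lens 1, gap, lens 2, gap, lens 3, wall gap.
M = T(s_img) @ L(f3) @ T(d23) @ L(f2) @ T(d12) @ L(f1) @ T(s_obj)

# Propagate a ray leaving the object at height y0 with angle u0 (paraxial).
y0, u0 = 0.5, 0.01
y_wall, u_wall = M @ np.array([y0, u0])
print(f"ray reaches the wall at height {y_wall:.2f} cm, "
      f"angle {u_wall:.4f} rad")
\end{verbatim}

Once $M$ is built, any ray of the bundle is imaged by a single matrix-vector product,
which is precisely the input/output view that motivates the choice of this formulation.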
On the other hand, the appealing feature of ray-tracing constructions is not lost
either, because they have also been used here, in particular to illustrate the passage of rays
through the projective system and the OCD, as shown respectively in Figs.~\ref{Fig2} and
\ref{Fig3}.
In general terms, we have found that, in the same way that the experimental OCD is
simple, the theory considered here can also be presented at a simple level, in order to
make it suitable for undergraduate optics courses.
In this regard, the introduction of the more sophisticated, purpose-built software typically
used in this kind of analysis, either to proceed with the ray tracing or to compute and
manipulate matrices, has been skipped, because the main idea is to tackle the problem just
with simple tools.
For the same reason, a more refined and explicit analysis of the problem based on the role
of lenses as aperture and field stops has also been omitted, because it requires
more advanced knowledge of the issue, which is not really necessary at this stage
to explain imaging, as we have shown.


\ack

Financial support from the Spanish MINECO (Grant No.\ FIS2016-76110-P)
is acknowledged.


\section*{References}


\providecommand{\newblock}{}