diff --git "a/data_all_eng_slimpj/shuffled/split2/finalzzeugc" "b/data_all_eng_slimpj/shuffled/split2/finalzzeugc" new file mode 100644--- /dev/null +++ "b/data_all_eng_slimpj/shuffled/split2/finalzzeugc" @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nGravitational lensing is the process in which light from background galaxies is deflected as it travels towards us. The deflection is a result of the gravitation of the intervening mass.\nMeasuring the deformations in a large sample of galaxies offers a direct probe of the matter distribution in the Universe (including dark matter) and can thus be directly compared to theoretical models of structure formation. The statistical properties of the weak-lensing field can be assessed by a statistical analysis of either the shear field or the convergence field.\nOn the one hand, convergence is a direct tracer of the total matter distribution integrated along the line of sight, and is therefore directly linked with the theory. On the other hand, the shear (or more exactly, the reduced shear) is a direct observable and usually preferred for simplicity reasons.\n\nAccordingly, the most common method for characterising the weak-lensing field distribution is the shear two-point correlation function. It is followed very closely by the mass-aperture two-point correlation functions, which are the result of convolving the shear two-point correlation functions by a compensated filter \\citep{2pcf:schneider02} that is able to separate the E and B modes of the two-point correlation functions \\citep{wl:crittenden02}.\nHowever, gravitational clustering is a non-linear process, and in particular, the mass distribution is highly non-Gaussian at small scales. For this reason, several estimators of the three-point correlation functions have been proposed, either in the shear field \\citep{wl:bernardeau02,wl:benabed06} or using the mass-aperture filter \\citep{map:kilbinger05}. The three-point correlation functions are the lowest order statistics to quantify non-Gaussianity in the weak-lensing field and thus provide additional information on structure formation models.\n\nThe convergence field can also be used to measure the two- and three-point correlation functions and other higher-order statistics. When we assume that the mass inversion (the \ncomputation of the convergence map from the measured shear field) is properly conducted, the shear field contains the same information as the convergence maps \\citep[e.g.][]{2pcf:schneider02,3pcf:shi11}. While it carries the same information, the lensing signal is more compressed in the convergence maps than in the shear field, which makes it easier to extract and computationally less expensive.\nThe convergence maps becomes a new tool that might bring additional constraints complementary to those that we can obtain from the shear field.\nHowever, the weak-lensing signal being highly non-Gaussian at small scales, mass-inversion methods using smoothing or de-noising to regularise the problem are not optimal.\n\nReconstructing convergence maps from weak lensing is a difficult task because of shape noise, irregular sampling, complex survey geometry, and the fact that the shear is not a direct observable.\nThis is an ill-posed inverse problem and requires regularisation to avoid pollution from spurious B modes.\nSeveral methods have been derived to reconstruct the projected mass distribution from the observed shear field. 
The first non-parametric mass reconstruction was proposed by \\cite{wl:kaiser93} and was further improved by \\cite{wl:bartelmann95}, \\cite{wl:kaiser95}, \\cite{wl:schneider95}, and \\cite{wl:squires96}. These linear inversion methods are based on smoothing with a fixed kernel, which acts as a regularisation of the inverse problem. Non-linear reconstruction methods were also proposed using different sets of priors and noise-regularisation techniques \\citep{wl:bridle98,wl:seitz98,wl:marshall02,wl:pires09,wl:jullo14, wl:lanusse16}.\nConvergence mass maps have been built from many surveys, including\nthe COSMOS Survey \\citep{cosmos:massey07}, \nthe Canada France Hawa\\\"i Telescope Lensing Survey CFHTLenS \\citep{cfhtlens:vanwaerbeke13}, the CFHT\/MegaCam Stripe-82 Survey \\citep{cs82:shan14}, \nthe Dark Energy Survey Science Verification DES SV \\citep{dessv:chang15,dessv:vikram15,dessv:jeffrey18}, \nthe Red Cluster Sequence Lensing Survey RCSLenS \\citep{rcslens:hildebrandt16}, and the Hyper SuprimeCam Survey \\citep{hsc:oguri18}.\nWith the exception of \\cite{dessv:jeffrey18}, who used the non-linear reconstruction proposed by \\cite{wl:lanusse16}, all these methods are based on the standard Kaiser \\& Squires method.\n \n \nIn the near future, several wide and deep weak-lensing surveys are planned: Euclid \\citep{euclid:laureijs11}, Large Synoptic Survey Telescope LSST \\citep{lsst:abell09}, and Wide Field Infrared Survey Telescope WFIRST \\citep{wfirst:green12}.\nIn particular, the Euclid satellite will survey 15 000 deg$^2$ of the sky to map the geometry of the dark Universe.\nOne of the goals of the Euclid mission is to produce convergence maps for non-Gaussianity studies and constrain cosmological parameters.\nTo do this, two different mass inversion methods are being included into the official Euclid data processing pipeline. \nThe first method is the standard Kaiser \\& Squires method (hereafter KS). Although it is well known that the KS method has several shortcomings, it is taken as the reference for cross-checking the results. The second method is a new non-linear mass-inversion method (hereafter KS+) based on the formalism developed in \\cite{wl:pires09}. The KS+ method aims at performing the mass inversion with minimum information loss. This is done by performing the mass inversion with no other regularisation than binning while controlling systematic effects. \n \n\nIn this paper, the performance of these two mass-inversion methods is investigated using the Euclid Flagship mock galaxy catalogue (version 1.3.3, Castander F. et al. in prep) with realistic observational effects (i.e. shape noise, missing data, and the reduced shear). The effect of intrinsic alignments is not studied in this paper because we lack simulations that would properly model intrinsic alignments.\nHowever, intrinsic alignments also need to be considered seriously because they affect second- and higher-order statistics. A contribution of several percent is expected to two-point statistics \\citep[see e.g.][]{ia:joachimi13}.\n\nWe compare the results obtained with the KS+ method to those obtained with a version of the KS method in which no smoothing step is performed other than binning. \nWe quantify the quality of the reconstruction using two-point correlation functions and moments of the convergence.\nOur tests illustrate the efficacy of the different mass-inversion methods in preserving the second-order statistics and higher-order moments.\n\n\n\n\nThe paper is organised as follows.\nIn Sect. 
2 we present the weak-lensing mass-inversion problem and the standard KS method.\nSection 3 presents the KS+ method we used to correct for the different systematic effects.\nIn Sect. 4 we explain the method with which we compared these two mass-inversion methods.\nIn Sect. 5 we use the Euclid Flagship mock galaxy catalogue with realistic observational effects such as shape noise and complex survey geometry and consider the reduced shear to investigate the performance of the two mass-inversion methods. First, we derive simulations including only one issue at a time to test each systematic effect independently. Then we derive realistic simulations that include them all and study the systematic effects simultaneously.\nWe conclude in Sect. 6.\n\n\\section{Weak-lensing mass inversion}\n\\label{inversion}\n\n\\subsection{Weak gravitational lensing formalism}\n\\label{formalism}\nIn weak-lensing surveys, the shear field $\\gamma({\\vec{\\theta}})$ is derived from the ellipticities of the background galaxies at position {\\vec{\\theta}} in the image. The two components of the shear can be written in terms of the lensing potential $\\psi({\\vec{\\theta}})$ as \\citep[see e.g.][]{wl:bartelmann01}\n\\begin{eqnarray}\n\\label{eq:gamma_psi} \n\\gamma_1 & = & \\frac{1}{2}\\left( \\partial_1^2 - \\partial_2^2 \\right) \\psi, \\nonumber \\\\\n\\gamma_2 & = & \\partial_1 \\partial_2 \\psi,\n\\end{eqnarray}\nwhere the partial derivatives $\\partial_i$ are with respect to the angular coordinates $\\theta_i$, $i = 1,2$ representing the two dimensions of sky coordinates. \nThe convergence $\\kappa({\\vec{\\theta}})$ can also be\nexpressed in terms of the lensing potential as\n\\begin{eqnarray}\n\\label{eq:kappa_psi}\n\\kappa = \\frac{1}{2}\\left(\\partial_1^2 + \\partial_2^2 \\right) \\psi.\n\\end{eqnarray}\nFor large-scale structure lensing, assuming a spatially flat Universe, the convergence at a sky position ${\\vec{\\theta}}$ from sources at comoving distance $r$ is defined by \n\\begin{eqnarray}\n\\label{eq:kappa_r}\n\\kappa(\\vec{\\theta}, r) =\\frac{3H^2_0\\Omega_{\\rm m}}{2 c^2}\\int_0^r {\\rm d}r' \\frac{r'(r-r')}{r} \\frac{\\delta(\\vec{\\theta}, r')}{a(r')},\n\\end{eqnarray}\nwhere $H_0$ is the Hubble constant, $\\Omega_{\\rm m}$ is the matter density, $a$ is the scale factor, and $\\delta \\equiv (\\rho-\\bar\\rho)\/\\bar\\rho$ is the density contrast (where $\\rho$ and $\\bar\\rho$ are the 3D density and the mean 3D density, respectively).\nIn practice, the expression for $\\kappa$ can be generalised to sources with a distribution in redshift, or equivalently, in comoving distance $f(r)$, yielding\n\\begin{eqnarray}\n\\label{eq:kappa}\n\\kappa(\\vec{\\theta}) =\\frac{3H^2_0\\Omega_{\\rm m}}{2 c^2}\\int_0^{r_{\\rm H}} {\\rm d}r'p(r')r' \\frac{\\delta(\\vec{\\theta}, r')}{a(r')},\n\\end{eqnarray}\nwhere $r_{\\rm H}$ is the comoving distance to the horizon.\nThe convergence map reconstructed over a region on the sky gives us the integrated mass-density fluctuation weighted by the lensing-weight function $p(r')$,\n\\begin{eqnarray}\n\\label{eq:kappa_r}\np(r') =\\int_{r'}^{r_{\\rm H}} {\\rm d}r f(r)\\frac{r-r'}{r}.\n\\end{eqnarray}\n\n\n\n\n\\subsection{Kaiser \\& Squires method (KS)} \n\\label{ks}\n\n\n\\subsubsection{KS mass-inversion problem} \n\n\nThe weak lensing mass inversion problem consists of reconstructing the convergence $\\kappa$ from the measured shear field $\\gamma$. 
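As an aside, the lensing weight $p(r')$ defined above can be tabulated numerically for any assumed source distribution $f(r)$. The following is a minimal sketch (illustrative \\texttt{numpy} code with a hypothetical, arbitrarily chosen source distribution; it is not meant to reproduce the Flagship redshift distribution):
\\begin{verbatim}
import numpy as np

# Minimal sketch of the lensing weight p(r') for an assumed source
# distribution f(r).  All distances are comoving; the horizon distance
# and the shape of f(r) are placeholders chosen purely for illustration.
r_H = 7000.0                           # comoving distance to the horizon (placeholder)
r = np.linspace(1.0, r_H, 4000)        # comoving distance grid
dr = r[1] - r[0]

f = r**2 * np.exp(-(r / 1500.0)**1.5)  # hypothetical source distribution
f /= f.sum() * dr                      # normalise so that int f(r) dr = 1

def lensing_weight(r_prime):
    """p(r') = int_{r'}^{r_H} dr f(r) (r - r') / r."""
    sel = r >= r_prime
    return np.sum(f[sel] * (r[sel] - r_prime) / r[sel]) * dr

p = np.array([lensing_weight(rp) for rp in r[::40]])
\\end{verbatim}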
We can use complex notation to represent the shear field, $\\gamma = \\gamma_1 + {\\rm i} \\gamma_2$, and the convergence field, $\\kappa = \\kappa_{\\rm E} + {\\rm i} \\kappa_{\\rm B}$, with $\\kappa_{\\rm E}$ corresponding to the curl-free component and $\\kappa_{\\rm B}$ to the gradient-free component of the field, called E and B modes by analogy with the electromagnetic field.\nThen, from Eq.~(\\ref{eq:gamma_psi}) and Eq.~(\\ref{eq:kappa_psi}), we can derive the relation between the shear field $\\gamma$ and the convergence field $\\kappa$.\nFor this purpose, we take the Fourier transform of these equations and obtain\n\\begin{eqnarray}\n\\label{eq:kappa2gamma}\n\\hat \\gamma = \\hat P \\, \\hat \\kappa,\n\\end{eqnarray}\nwhere the hat symbol denotes Fourier transforms, $\\hat P = \\hat P_1 + {\\rm i} \\hat P_2$,\n\\begin{eqnarray}\n\\hat{P_1}(\\vec{\\ell}) & = & \\frac{\\ell_1^2 - \\ell_2^2}{\\ell^2}, \\nonumber \\\\\n\\hat{P_2}(\\vec{\\ell}) & = & \\frac{2 \\ell_1 \\ell_2}{\\ell^2},\n\\end{eqnarray}\nwith $\\ell^2 \\equiv \\ell_1^2 + \\ell_2^2$ and $\\ell_i$ the wave numbers corresponding to the angular coordinates $\\theta_i$. \n\n$\\hat P$ is a unitary operator. The inverse operator is its complex conjuguate $\\hat P^* = \\hat P_1 - {\\rm i} \\hat P_2$ , as shown by \\cite{wl:kaiser93},\n\\begin{eqnarray}\n\\label{eq:gamma2kappa}\n\\hat \\kappa = \\hat P^* \\, \\hat \\gamma.\n\\end{eqnarray}\nWe note that to recover $\\kappa$ from $\\gamma,$ there is a degeneracy when $\\ell_1 = \\ell_2 = 0$. Therefore the mean value of $\\kappa$ cannot be recovered if only shear information is available. This is the so-called mass-sheet degeneracy \\citep[see e.g.][for a discussion]{wl:bartelmann95}.\nIn practice, we impose that the mean convergence vanishes across the survey by setting the reconstructed $\\ell = 0$ mode to zero. This is a reasonable assumption for large-field reconstruction \\citep[e.g.][]{wl:seljak98}.\n\n\nWe can easily derive an estimator of the E-mode and B-mode convergence in the Fourier domain,\n\\begin{eqnarray}\n\\label{eq:fourier}\n\\hat{\\tilde \\kappa}_{\\rm E} &=& \\hat P_1 \\hat \\gamma_1 + \\hat P_2 \\hat \\gamma_2,\\\\ \\nonumber \n\\hat{\\tilde \\kappa}_{\\rm B} &=& - \\hat P_2 \\hat \\gamma_1 + \\hat P_1 \\hat \\gamma_2.\n\\end{eqnarray}\nBecause the weak lensing arises from a scalar potential (the lensing potential $\\psi$), it can be shown that weak lensing only produces E modes. However, intrinsic alignments and imperfect corrections of the point spread function (PSF) generally generate both E and B modes. The presence of B modes can thus be used to test for residual systematic effects in current weak-lensing surveys.\n\n\\subsubsection{Missing-data problem in weak lensing}\nThe shear is only sampled at the discrete positions of the galaxies where the ellipticity is measured. \nThe first step of the mass map-inversion method therefore is to bin the observed ellipticities of galaxies on a regular pixel grid to create what we refer to as the observed shear maps $\\gamma^{\\rm{obs}}$.\nSome regions remain empty because various masks were applied to the data, such as the masking-out of bright stars or camera CCD defects. In such cases, the shear is set to zero in the original KS method,\n\\begin{eqnarray}\n\\label{eq:mask}\n\\gamma^{\\rm{obs}} &=& M \\gamma^{\\rm n},\n\\end{eqnarray}\nwith $M$ the binary mask (i.e. 
$M = 1$ when we have information at the pixel, $M = 0$ otherwise) and $\\gamma^{\\rm n}$ the noisy shear maps.\nAs the shear at any sky position is non-zero in general, this introduces errors in the reconstructed convergence maps.\nSome specific methods address this problem by discarding masked pixels at the noise-regularisation step \\cite[e.g.][]{cfhtlens:vanwaerbeke13}. However, as explained previously, this intrinsic filtering results in subtantial signal loss at small scales. Instead, inpainting techniques are used in the KS+ method to fill the masked regions (see Appendix A).\n\n\n\n\\subsubsection{Weak-lensing shape noise}\n\\label{sect_shape_noise}\nThe gravitational shear is derived from the ellipticities of the background galaxies.\nHowever, the galaxies are not intrinsically circular, therefore their measured ellipticity is a combination of their intrinsic ellipticity and the gravitational lensing shear. \nThe shear is also subject to measurement noise and uncertainties in the PSF correction. All these effects can be modelled as an additive noise, $N = N_1 + {\\rm i} N_2$,\n\\begin{eqnarray}\n\\gamma^{\\rm n} &=& \\gamma+ N\n\\label{eq:noise1}\n\\end{eqnarray}\nThe noise terms $N_1$ and $N_2$ are assumed to be Gaussian and uncorrelated with zero mean and standard deviation,\n\\begin{eqnarray}\n\\sigma_{\\rm n}^i = \\frac{\\sigma_{\\rm \\epsilon}}{\\sqrt{N_{\\rm g}^i}}, \n\\label{eq:noise2}\n \\end{eqnarray}\nwhere $N_{\\rm g}^i$ is the number of galaxies in pixel $i$.\nThe root-mean-square shear dispersion per galaxy, $\\sigma_{\\rm \\epsilon}$, arises both from the measurement uncertainties and the intrinsic shape dispersion of galaxies. The Gaussian assumption is a reasonable assumption, and $\\sigma_{\\rm \\epsilon}$ is set to 0.3 for each component as is generally found in weak-lensing surveys \\citep[e.g.][]{sigmae:leauthaud07, sigmae:schrabback15, sigmae:schrabback18}. The surface density of usable galaxies is expected to be around $n_{\\rm g} = 30$ gal. arcmin$^{-2}$ for the Euclid Wide survey \\citep{euclid:cropper13}.\n\n\nThe derived convergence map is also subject to an additive noise,\n\\begin{eqnarray}\n\\hat{\\tilde \\kappa}^{\\rm n} = \\hat P^* \\, \\hat{ \\gamma}^{\\rm n} = \\hat \\kappa + \\hat P^* \\, \\hat{N}.\n\\label{kappan}\n\\end{eqnarray}\nIn particular, the E component of the convergence noise is\n\\begin{eqnarray}\nN_{\\rm E}= N_1* P_1 + N_2 * P_2 , \n\\label{eq:noise3}\n\\end{eqnarray}\nwhere the asterisk denotes the convolution operator, and $P_1$ and $P_2$ are the inverse Fourier transforms of $\\hat{P_1}$ and $\\hat{P_2}$.\nWhen the shear noise terms $N_1$ and $N_2$ are Gaussian, uncorrelated, and with a constant standard deviation across the field, the convergence noise is also Gaussian and uncorrelated.\nIn practice, the number of galaxies varies slightly across the field. The variances of $N_1$ and $N_2$ might also be slightly different, which can be modelled by different values of $\\sigma_{\\epsilon}$ for each component. These effects introduce noise correlations in the convergence noise maps, but they were found to remain negligible compared to other effects studied in this paper.\n\n\nIn the KS method, a smoothing by a Gaussian filter is frequently applied to the background ellipticities before mass inversion to regularise the solution.\nAlthough performed in most applications of the KS method, this noise regularisation step is not mandatory. 
It was introduced to avoid infinite noise and divergence at very small scales.\nHowever, the pixelisation already provides an intrinsic regularisation. This means that there is no need for an additional noise regularisation prior to the inversion. Nonetheless, for specific applications that require denoising in any case, the filtering step can be performed before or after the mass inversion.\n\n\n\\section{Improved Kaiser \\& Squires method (KS+)} \n\\label{iks}\n\nSystematic effects in mass-inversion techniques must be fully controlled in order to use convergence maps as cosmological probes for future wide-field weak-lensing experiments such as \\textit{Euclid}\\xspace. \nWe introduce the KS+ method based on the formalism developed in \\cite{wl:pires09} and \\cite{wl:jullo14}, which integrates the necessary corrections for imperfect and realistic measurements.\nWe summarise its improvements over KS in this section and evaluate its performance in Sect. \\ref{results_1}.\n\n\nIn this paper, the mass-mapping formalism is developed in the plane. \nThe mass inversion can also be performed on the sphere, as proposed in \\cite{sphere:pichon09} and \\cite{sphere:chang18}, and the extension of the KS+ method to the curved sky is being investigated. \nHowever, the computation time and memory required to process the spherical mass inversion means limitations in terms of convergence maps resolution and\/or complexity of the algorithm.\nThus, planar mass inversions remain important for reconstructing convergence maps with a good resolution and probing the non-Gaussian features of the weak-lensing field (e.g. for peak-count studies).\n\n\\subsection{Missing data}\n\\label{KS+}\n\nWhen the weak-lensing shear field $\\gamma$ is sampled on a grid of $N \\times N$ pixels, we can describe the complex shear and convergence fields by their respective matrices. In the remaining paper, the notations $\\bm \\gamma$ and $\\bm \\kappa$ stand for the matrix quantities.\n\n\n\n\nIn the standard version of the KS method, the pixels with no galaxies are set to zero.\nFig.~\\ref{shear_mask} shows an example of simulated shear maps without shape noise derived from the Euclid Flagship mock galaxy catalogue (see Sect. \\ref{sect_simu} for more details). The upper panels of Fig.~\\ref{shear_mask} show the two components of the shear with zero values (displayed in black) corresponding to the mask of the missing data.\nThese zero values generate an important leakage during the mass inversion.\n\\begin{figure*}[h]\n\\vbox{\n\\centerline{\n\\hbox{\n\\psfig{figure=g1_mask,height=6.cm,width=7.cm,clip=}\n\\hspace{0.2cm}\n\\psfig{figure=g2_mask,height=6.cm,width=7.cm,clip=}\n}}\n\\vspace{0.3cm}\n\\centerline{\n\\hbox{\n\\psfig{figure=g1_inp_mask,height=6.cm,width=7.cm,clip=}\n\\hspace{0.2cm}\n\\psfig{figure=g2_inp_mask,height=6.cm,width=7.cm,clip=}\n}}\n}\n\\caption{Simulated shear maps with missing data covering a field of $5^\\circ \\times 5 ^\\circ$. The left panels show the first component of the shear $\\gamma_1$ , and the right panels present the second component of the shear $\\gamma_2$. The upper panels show the incomplete shear maps, where the pixels with no galaxies are set to zero (displayed in black). The lower panels show the result of the inpainting method that allows us to fill the gaps judiciously.}\n\\label{shear_mask}\n\\end{figure*}\n\n\nWith KS+, the problem is reformulated by including additional assumptions to regularise the problem. 
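Before detailing these assumptions, it is useful to make the baseline explicit. A minimal sketch of the standard KS inversion of Eq.~(\\ref{eq:fourier}), applied to shear maps binned on a regular grid (with empty pixels set to zero), could read as follows. This is illustrative \\texttt{numpy} code only, not the \\textit{Euclid}\\xspace pipeline implementation; the helper functions are reused in the sketches below.
\\begin{verbatim}
import numpy as np

def ks_kernels(shape):
    """Fourier kernels P1(l), P2(l) entering the KS operator."""
    l1 = np.fft.fftfreq(shape[0])[:, None]
    l2 = np.fft.fftfreq(shape[1])[None, :]
    l_sq = l1**2 + l2**2
    l_sq[0, 0] = 1.0                    # avoid 0/0; the l=0 mode is set by hand below
    return (l1**2 - l2**2) / l_sq, 2.0 * l1 * l2 / l_sq

def shear_to_kappa(g1, g2):
    """KS estimator: kappa_hat = P* gamma_hat (E and B modes)."""
    p1, p2 = ks_kernels(g1.shape)
    g1_hat, g2_hat = np.fft.fft2(g1), np.fft.fft2(g2)
    kE_hat = p1 * g1_hat + p2 * g2_hat
    kB_hat = -p2 * g1_hat + p1 * g2_hat
    kE_hat[0, 0] = kB_hat[0, 0] = 0.0   # mass-sheet degeneracy: zero-mean convergence
    return np.fft.ifft2(kE_hat).real, np.fft.ifft2(kB_hat).real

def kappa_to_shear(kE, kB):
    """Forward operator: gamma_hat = P kappa_hat."""
    p1, p2 = ks_kernels(kE.shape)
    kE_hat, kB_hat = np.fft.fft2(kE), np.fft.fft2(kB)
    g1 = np.fft.ifft2(p1 * kE_hat - p2 * kB_hat).real
    g2 = np.fft.ifft2(p2 * kE_hat + p1 * kB_hat).real
    return g1, g2

# Usage: bin the observed ellipticities onto a grid (no smoothing other
# than the binning); the per-pixel noise level is sigma_eps / sqrt(N_g).
# kappa_E, kappa_B = shear_to_kappa(g1_binned, g2_binned)
\\end{verbatim}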
\nThe convergence $\\bm{\\kappa}$ can be analysed using a transformation $\\mathbf \\Phi, $ which yields a set of coefficients $\\bm \\alpha = \\mathbf \\Phi^{\\rm T} \\bm{\\kappa}$ ($\\mathbf \\Phi$ is an orthogonal matrix operator, and $\\mathbf \\Phi^{\\rm T}$ represents the transpose matrix of $\\mathbf \\Phi$).\nIn the case of the Fourier transformation, $\\mathbf \\Phi^{\\rm T}$ would correspond to the discrete Fourier transform (DFT) matrix, and $\\bm \\alpha$ would be the Fourier coefficients of $\\bm{\\kappa}$.\nThe KS+ method uses a prior of sparsity, that is, it assumes that there is a transformation $\\mathbf \\Phi$ where the convergence $\\bm{\\kappa}$ can be decomposed into a set of coefficients $\\bm \\alpha$, where most of its coefficients are close to zero. \nIn this paper, $\\mathbf \\Phi$ was chosen to be the discrete cosine transform (DCT) following \\cite{wl:pires09}. The DCT expresses a signal in terms of a sum of cosine functions with different frequencies and amplitudes. It is similar to the DFT, but uses smoother boundary conditions. This provides a sparser representation. Hence the use of the DCT for JPEG compression.\n\nWe can rewrite the relation between the observed shear $\\bm{ \\gamma}^{\\rm{obs} }$ and the noisy convergence $\\bm{\\kappa}^{\\rm n}$ as\n\\begin{eqnarray}\n\\bm{\\gamma}^{\\rm{obs}} =\\mathbf M \\mathbf{P} \\bm{\\kappa}^{\\rm n},\n\\label{miss}\n\\end{eqnarray}\nwith $\\mathbf M$ being the mask operator and $\\mathbf P$ the KS mass-inversion operator.\nThere is an infinite number of convergence $\\bm{\\kappa}^{\\rm n}$ that can fit the observed shear $\\bm \\gamma^{\\rm{obs}}$.\nWith KS+, we first impose that the mean convergence vanishes across the survey, as in the KS method. \nThen, among all possible solutions, KS+ searches for the sparsest solution $\\tilde{\\bm{\\kappa}}^{\\rm n}$ in the DCT $\\mathbf \\Phi$ (i.e. the convergence $\\bm{\\kappa}^{\\rm n}$ that can be represented with the fewest large coefficients). \nThe solution of this mass-inversion problem is obtained by solving\n\\begin{equation}\n\\min_{\\bm{\\tilde{\\kappa}}^{\\rm n}} \\| \\mathbf \\Phi^{\\rm T} \\bm{\\tilde{\\kappa}}^{\\rm n} \\|_0 \\textrm{ subject to } \\parallel \\bm{\\gamma}^{\\rm{obs}} - \\mathbf M \\mathbf P \\bm{\\tilde{\\kappa}}^{\\rm n} \\parallel^2 \\le \\sigma^2,\n\\label{eq1}\n\\end{equation}\nwhere $|| z ||_0$ the pseudo-norm, that is, the number of non-zero entries in $z$, $|| z ||$ the classical $l_2$ norm (i.e. $|| z || =\\smash{\\sqrt{ \\sum_{k}(z_{k})^2}}$), and $\\sigma$ stands for the standard deviation of the input shear map measured outside the mask.\nThe solution of this optimisation task can be obtained through an iterative thresholding algorithm called morphological component analysis (MCA), which was introduced by \\cite{mca:elad05} and was adapted to the weak-lensing problem in \\cite{wl:pires09}. \n\n\\cite{wl:pires09} used an additional constraint to force the B modes to zero. This is optimal when the shear maps have no B modes. However, any real observation has some residual B modes as a result of intrinsic alignments, imperfect PSF correction, etc. The B-mode power is then transferred to the E modes, which degrades the E-mode convergence reconstruction. We here instead let the B modes free, and an additional constraint was set on the power spectrum of the convergence map. 
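The overall structure of the resulting iterative scheme can be sketched as follows (a deliberately simplified illustration using \\texttt{scipy}'s DCT and the helper functions of the previous sketch; the actual KS+ algorithm, including its threshold-decrease law and stopping criterion, is described in Appendix A):
\\begin{verbatim}
import numpy as np
from scipy.fft import dctn, idctn

def ks_plus_sketch(g1_obs, g2_obs, mask, n_iter=100):
    """Toy iterative-thresholding inpainting in the spirit of KS+.
    mask = 1 where shear data exist, 0 inside the gaps.  Reuses the
    shear_to_kappa / kappa_to_shear helpers defined above."""
    kE, kB = shear_to_kappa(g1_obs, g2_obs)          # first guess
    lam_max = np.abs(dctn(kE, norm='ortho')).max()
    for i in range(n_iter):
        lam = lam_max * (1.0 - i / float(n_iter))    # decreasing threshold
        # data fidelity: observed shear in the data, current model in the gaps
        g1_mod, g2_mod = kappa_to_shear(kE, kB)
        g1 = mask * g1_obs + (1 - mask) * g1_mod
        g2 = mask * g2_obs + (1 - mask) * g2_mod
        kE, kB = shear_to_kappa(g1, g2)
        # sparsity prior: hard-threshold the DCT coefficients of kappa_E
        alpha = dctn(kE, norm='ortho')
        kE = idctn(np.where(np.abs(alpha) >= lam, alpha, 0.0), norm='ortho')
    return kE, kB
\\end{verbatim}
In the full algorithm, an additional step inside this loop constrains the power spectrum of the convergence within the gaps.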
\nTo this end, we used a wavelet transform to decompose the convergence maps into a set of aperture mass maps using the starlet transform algorithm \\citep{starck:book98,starck:book02}. \nThen, the constraint consists of renormalising the standard deviation (or equivalently, the variance) of each aperture mass map inside the mask regions to the values measured in the data, outside the masks, and then reconstructing the convergence through the inverse wavelet transform.\nThe variance per scale corresponding to the power spectrum at these scales allows us to constrain a broadband power spectrum of the convergence $\\bm \\kappa$ inside the gaps.\n\n\nAdding the power spectrum constraints yields the final sparse optimisation problem,\n\\begin{equation}\n\\min_{\\bm{\\tilde{\\kappa}}^{\\rm n}} \\| \\mathbf \\Phi^{\\rm T} \\bm{\\tilde{\\kappa}}^{\\rm n} \\|_0 \\textrm{ s.t. } \\parallel \\bm{\\gamma}^{\\rm{obs}} - \\mathbf M \\mathbf P \\mathbf{W^{\\rm T}} \\mathbf Q \\mathbf{W} \\bm{\\tilde{\\kappa}}^{\\rm n} \\parallel^2 \\le \\sigma^2,\n\\label{eq2}\n\\end{equation}\nwhere $\\mathbf{W}$ is the forward wavelet transform and $\\mathbf{W^{\\rm T}}$ its inverse transform, and $\\mathbf Q$ is the linear operator used to impose the power spectrum constraint.\nMore details about the KS+ algorithm are given in Appendix A.\n\nThe KS+ method allows us to reconstruct the in-painted convergence maps and the corresponding in-painted shear maps, where the empty pixels are replaced by non-zero values. These interpolated values preserve the continuity of the signal and reduce the signal leakage during the mass inversion (see lower panels of Fig.~\\ref{shear_mask}). \nThe quality of the convergence maps reconstruction with respect to missing data is evaluated in Sect. \\ref{results_1}.\nAdditionally, the new constraint allows us to use the residual B modes of the reconstructed maps to test for the presence of residual systematic effects and possibly validate the shear measurement processing chain.\n\n\n\n\n\\subsection{Field border effects}\n\\label{border}\n\n\n\n\\begin{figure*}\n\\vbox{\n\\centerline{\n\\hbox{\n\\psfig{figure=g1_border,height=6.cm,width=7.cm,clip=}\n\\hspace{0.2cm}\n\\psfig{figure=g2_border,height=6.cm,width=7.cm,clip=}\n}}\n\\vspace{0.3cm}\n\\centerline{\n\\hbox{\n\\psfig{figure=g1_inp_border,height=6.cm,width=7cm,clip=}\n\\hspace{0.2cm}\n\\psfig{figure=g2_inp_border,height=6.cm,width=7cm,clip=}\n}}\n}\n\\caption{Upper panels: Simulated shear maps covering a field of $5^\\circ \\times 5 ^\\circ$ , extended to a field of $10^\\circ \\times 10^\\circ$ by zero padding (zero values are displayed in black). Lower panels: Result of the inpainting method that allows us to extrapolate the shear on the borders. The left panels show the first component of the shear $\\gamma_1$, and the right panels present the second component of the shear $\\gamma_2$.}\n\\label{shear_border}\n\\end{figure*}\n\n\n\nThe KS and KS+ mass-inversion methods relate the convergence and the shear fields in Fourier space.\nHowever, the discrete Fourier transform implicitly assumes that the image is periodic along both dimensions. Because there is no reason for opposite borders to be alike, the periodic image generally presents strong discontinuities across the frame border. These discontinuities cause several artefacts at the borders of the reconstructed convergence maps. 
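As described below, the KS+ method mitigates this by enlarging the support of the maps and treating the added borders as gaps to be inpainted; schematically (hypothetical helper code, with the border size used in this paper):
\\begin{verbatim}
import numpy as np

def extend_field(g1, g2, mask, pad=512):
    """Embed the observed field in a larger grid; the added borders are
    flagged as missing data and filled by the same inpainting as the gaps."""
    def grow(m):
        return np.pad(m, pad, mode='constant', constant_values=0)
    return grow(g1), grow(g2), grow(mask)

# e.g. a 1024 x 1024 field with pad=512 is inpainted on a 2048 x 2048 grid;
# after the mass inversion the added borders are simply cropped away.
\\end{verbatim}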
The field border effects can be addressed by removing the image borders, which throws away a large fraction of the data.\nDirect finite-field mass-inversion methods have also been proposed \\citep[e.g.][]{wl:seitz96,wl:seitz01}. Although unbiased, convergence maps reconstructed using these methods are noisier than those obtained with the KS method.\nIn the KS+ method, the problem of borders is solved by taking larger support for the image and by considering the borders as masked regions to be in-painted.\nThe upper panels of Fig.~\\ref{shear_border} show the two components of a shear map covering $5^{\\circ} \\times 5^{\\circ}$ degrees and extending to a field of $10^{\\circ} \\times 10^{\\circ}$.\nThe inpainting method is then used to recover the shear at the field boundaries, as shown in the lower panels of Fig.~\\ref{shear_border}. \nAfter the mass inversion is performed, the additional borders are removed. This technique reduces the field border effects by pushing the border discontinuities farther away. \n\n\n\\subsection{Reduced shear}\n\\label{reduced}\nIn Sect. \\ref{ks} we assumed knowledge of the shear, in which case the mass inversion is linear.\nIn practice, the observed galaxy ellipticity is not induced by the shear $\\gamma,$ but by the reduced shear $g$ that depends on the convergence $\\kappa$ corresponding to that particular line of sight,\n\\begin{eqnarray}\ng \\equiv \\frac{\\gamma}{1-\\kappa}.\n\\label{reducedshear}\n\\end{eqnarray}\nWhile the difference between the shear $\\gamma$ and the reduced shear $g$ is small in the regime of cosmic shear ($\\kappa \\ll 1$), neglecting it might nevertheless cause a measurable bias at small angular scales \\citep[see e.g.][]{wl:white05, wl:shapiro09}.\nIn the standard version of KS, the Fourier estimators are only valid when the convergence is small ($\\kappa \\ll 1$),\nand they no longer hold near the centre of massive galaxy clusters.\nThe mass-inversion problem becomes non-linear, and it is therefore important to properly account for reduced shear.\n\nIn the KS+ method, an iterative scheme is used to recover the E-mode convergence map, as proposed in \\cite{wl:seitz95}. The method consists of solving the linear inverse problem iteratively (see Eq.~\\ref{eq:fourier}), using at each iteration the previous estimate of the E-mode convergence to correct the reduced shear using Eq.~(\\ref{reducedshear}). \nEach iteration then provides a better estimate of the shear. This iterative algorithm was found in \\cite{wl:jullo14} to quickly converge to the solution (about three iterations). The KS+ method uses the same iterative scheme to correct for reduced shear, and we find that it is a reasonable assumption in the case of large-scale structure lensing.\n\n\\subsection{Shape noise}\n\n\\label{section:shearnoise}\n\n\n\n\\begin{figure*}\n\\vbox{\n\\centerline{\n\\hbox{\n\\psfig{figure=kappa_512,height=6.cm,width=7cm,clip=}\n\\hspace{0.2cm}\n\\psfig{figure=kappa_noise,height=6.cm,width=7.cm,clip=}\n}}\n\\vspace{0.3cm}\n\\centerline{\n\\hbox{\n\\psfig{figure=kappa_g5,height=6.cm,width=7cm,clip=}\n\\hspace{0.2cm}\n\\psfig{figure=kappa_mrlens,height=6.cm,width=7cm,clip=}\n}}\n\n}\n\\caption{Shape-noise effect. The upper panels show the original E-mode convergence $\\kappa$ map (left) and the noisy convergence map with $n_{\\rm g} = 30$ gal. arcmin$^{-2}$ (right). 
The lower panels show the reconstructed maps using a linear Gaussian filter with a kernel size of $\\sigma = 3\\arcmin$ (left) and the non-linear MRLens filtering using $\\alpha_{\\rm FDR} = 0.05$ (right). The field is $5^\\circ \\times 5^\\circ$ downsampled to $512 \\times 512$ pixels.}\n\n\\label{kappa_noise}\n\\end{figure*}\n\n\nIn the original implementation of KS, the shear maps are first regularised with a smoothing window (i.e. a low-pass filter) to obtain a smoothed version of the shear field. Then, Eq.~(\\ref{eq:fourier}) is applied to derive the convergence maps. \nIn contrast, the KS+ method aims at producing very general convergence maps for many applications. In particular, it produces noisy maps with minimum information loss. \n\nHowever, for specific applications (e.g. galaxy cluster detection and characterisation), it can be useful to add an additional de-noising step, using any of the many regularisation techniques that have been proposed \\citep{wl:bridle98,wl:seitz98,wl:marshall02, wl:starck06, wl:lanusse16}. \nTo compare the results of the KS and KS+ methods on noisy maps, we used a linear Gaussian\nand the non-linear MRLens filter \\citep{wl:starck06} for noise suppression.\nFig.~\\ref{kappa_noise} illustrates the effect of shape noise on reconstructing the convergence map.\nThe upper panels show one E-mode convergence map reconstructed from noise-free (left) and noisy (right) shear data. The convergence map is dominated by the noise. The lower panels show the results of the Gaussian filter (left) and MRLens filter (right). The Gaussian filter gives a smoothed version of the noisy convergence map, whose level of smoothness is set by the width of the Gaussian ($\\sigma$). Thus, the amplitude of the over-densities (in blue) are systematically lowered by the Gaussian filter. \nIn contrast, \nthe MRLens filter uses a prior of sparsity to better recover the amplitude of the structures and uses a parameter, the false-discovery rate ($\\alpha_{\\rm FDR}$), to control the average fraction of false detections (i.e. the number of pixels that is truly inactive, declared positive) made over the total number of detections \\citep{benjamini95}. For some other applications (e.g. 
two- or three-point correlation), the integrity of the reconstructed noisy convergence maps might be essential and this denoising step can be avoided.\n\n\n\\section{Method}\n\\label{method}\n\n\n\n\\subsection{Comparing second-order statistics}\n\\label{2pcf}\nThe most common tools for constraining cosmological parameters in weak-lensing studies are the shear two-point correlation functions.\nFollowing \\cite{wl:bartelmann01}, they are defined by considering pairs of positions $\\vec{\\vartheta}$ and $\\vec{\\theta+\\vartheta}$, and defining the tangential and cross-component of the shear $\\gamma_{\\rm t}$ and $\\gamma_{\\times}$ at position $\\vec{\\vartheta}$ for this pair as\n\\begin{eqnarray}\n\\gamma_{\\rm t} &=& -\\operatorname{\\mathcal{R}e}(\\gamma \\operatorname{e}^{-2{\\rm i}\\varphi}),\\\\\n\\gamma_{\\times} &=& -\\operatorname{\\mathcal{I}m}(\\gamma \\operatorname{e}^{-2{\\rm i}\\varphi}),\n\\end{eqnarray}\nwhere $\\varphi$ is the polar angle of the separation vector $\\vec{\\theta}$.\nThen we define the two independent shear correlation functions\n\\begin{eqnarray}\n\\xi_\\pm(\\theta) &:=& \\langle \\gamma_{\\rm t} \\gamma_{\\rm t}' \\rangle \\pm \\langle \\gamma_\\times \\gamma_\\times' \\rangle \\\\\n&=& \\frac{1}{2\\pi} \\int_0^{\\infty} d\\ell \\, \\ell \\, P_{\\rm \\kappa}(\\ell) \\,{\\rm J}_{0,4}(\\ell \\theta) ,\n\\end{eqnarray}\nwhere the Bessel function ${\\rm J}_0$ $({\\rm J}_4)$ corresponds to the plus (minus) correlation function, $P_{\\kappa}(\\ell)$ is the power spectrum of the projected matter density, and $\\ell$ is the Fourier variable on the sky.\nWe can also compute the two-point correlation functions of the convergence ($\\kappa = \\kappa_{\\rm E} + \\rm i \\kappa_{\\rm B}$), defined as\n\\begin{eqnarray}\n\\xi_{\\kappa_{\\rm E}}(\\theta) = \\langle \\kappa_{\\rm E} \\kappa_{\\rm E}' \\rangle,\\nonumber \\\\\n\\xi_{\\kappa_{\\rm B}}(\\theta) = \\langle \\kappa_{\\rm B} \\kappa_{\\rm B}' \\rangle.\n\\end{eqnarray}\nWe can verify that these two quantities agree \\citep{2pcf:schneider02}:\n\\begin{eqnarray}\n \\xi_+(\\theta) = \\xi_{\\kappa_{\\rm E}}(\\theta) + \\xi_{\\kappa_{\\rm B}}(\\theta).\n \\end{eqnarray}\nWhen the B modes in the shear field are consistent with zero, the two-point correlation of the shear ($\\xi_+$) is equal to the two-point correlation of the convergence $(\\xi_{\\kappa_{\\rm E}})$. Then the differences between the two are due to the errors introduced by the mass inversion to go from shear to convergence.\n\nWe computed these two-point correlation functions using the tree code \\texttt{athena} \\citep{athena:kilbinger14}. The shear two-point correlation functions were computed by averaging over pairs of galaxies of the mock galaxy catalogue, whereas the convergence two-point correlation functions were computed by averaging over pairs of pixels in the convergence map. The convergence two-point correlation functions can only be computed for separation vectors $\\vec{\\theta}$ allowed by the binning of the convergence map. \n\n\n\\subsection{Comparing higher-order statistics}\n\\label{hos}\n\nTwo-point statistics cannot fully characterise the weak-lensing field at small scales where it becomes non-Gaussian \\citep[e.g.][]{pt:bernardeau02}. Because the small-scale features carry important cosmological information, we computed the third-order moment, $\\langle \\kappa_{\\rm E}^3 \\rangle$, and the fourth-order moment, $\\langle \\kappa_{\\rm E}^4\\rangle$, of the convergence. 
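Schematically, these scale-dependent moments are obtained by filtering the convergence map at a set of aperture scales and taking the third and fourth moments of each filtered map; a minimal sketch (using a Gaussian aperture purely for illustration, whereas the analysis below relies on wavelet-based aperture-mass maps) is:
\\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter

def moments_per_scale(kappa_E, scales_pix):
    """Third- and fourth-order moments of the E-mode convergence,
    after smoothing at each scale given in pixels."""
    moments = []
    for s in scales_pix:
        k_s = gaussian_filter(kappa_E, s)
        k_s = k_s - k_s.mean()
        moments.append((np.mean(k_s**3), np.mean(k_s**4)))
    return moments

# e.g. moments_per_scale(kappa_E, scales_pix=[2, 4, 8, 16, 32, 64])
\\end{verbatim}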
Computations were performed on the original convergence maps provided by the Flagship simulation, as well as on the convergence maps reconstructed from the shear field with the KS and KS+ methods. \nWe evaluated the moments of convergence at various scales by computing aperture mass maps \\citep{map:schneider96, map:schneider97}. Aperture mass maps are typically obtained by convolving the convergence maps with a filter function of a specific scale (i.e. aperture radii). We performed this here by means of a wavelet transform using the starlet transform algorithm \\citep{starck:book98,starck:book02}, which simultaneously produces a set of aperture mass maps on dyadic (powers of two) scales (see Appendix A for more details). Leonard et al. (2012) demonstrated that the aperture mass is formally identical to a wavelet transform at a specific scale and the aperture mass filter corresponding to this transform is derived. The wavelet transform offers significant advantages over the usual aperture mass algorithm in terms of computation time, providing speed-up factors of about 5 to 1200 depending on the scale.\n\n\n\\subsection{Numerical simulations}\n\\label{sect_simu}\n\n\n\nWe used the Euclid Flagship mock galaxy catalogue version 1.3.3\n(Castander F. et al., in prep) derived from N-body cosmological simulation \\citep{flagship:potter17} with parameters $\\Omega_{\\rm m}=0.319$, $\\Omega_{\\rm b} = 0.049$, $\\Omega_{\\Lambda} = 0.681$, $\\sigma_8=0.83$, $n_{\\rm s}=0.96$, $h=0.67$, and the particle mass was \\smash{$m_{\\rm p} \\sim 2.398 \\times10^9\\,\\text{\\ensuremath{\\textup{M}_{\\odot}}} h^{-1}$}. The galaxy light-cone catalogue contains 2.6 billion galaxies over $5000\\,\\deg^2$ , and it extends up to $z=2.3$. It has been built using a hybrid halo occupation distribution and halo abundance matching (HOD+HAM) technique, whose galaxy-clustering properties were discussed in detail in \\cite{flagship:crocce15}.\nThe lensing properties were computed using the Born approximation and projected mass density maps (in \\texttt{HEALPix} format with $N_{\\rm side}=8192$) generated from the particle light-cone of the Flagship simulation.\nMore details on the lensing properties of the Flagship mock galaxy catalogue can be found in \\cite{flagship:fosalba15,flagship:fosalba18}.\n\nIn order to evaluate the errors introduced by the mass-mapping methods, we extracted ten contiguous shear and convergence fields of $10^\\circ \\times 10^\\circ$ from the Flagship mock galaxy catalogue, yielding a total area of 1000 deg$^2$. The fields correspond to galaxies that lie in the range of \\smash{$15^\\circ < \\alpha < 75^\\circ$} and \\smash{$15^\\circ < \\delta < 35^\\circ$} , where $\\alpha$ and $\\delta$ are the right ascension and declination, respectively.\nIn order to obtain the density of 30 galaxies per arcmin$^2$ foreseen for the Euclid Wide survey, we randomly selected one quarter of all galaxies in the catalogue. \nThen projected shear and convergence maps were constructed by combining all the redshifts of the selected galaxies.\nMore sophisticated selection methods based on galaxy magnitude would produce slightly different maps. However, they would not change the performances of the two methods we studied here.\nThe fields were down-sampled to $1024 \\times 1024$ pixels, which corresponds to a pixel size of about \\ang[angle-symbol-over-decimal]{;0.6;}. Throughout all the paper, the shaded regions stand for the uncertainties on the mean estimated from the total 1000 deg$^2$ of the ten fields. 
Because the Euclid Wide survey is expected to be 15 000 deg$^2$, the sky coverage will be 15 times larger than the current mock. Thus, the uncertainties will be smaller by a factor of about 4.\n\n\n\\begin{figure*}\n\\vbox{\n\\centerline{\n\\hbox{\n\\psfig{figure=diff_kappa_mask_ks_m20,height=6.cm,width=7cm,clip=}\n\\hspace{0.2cm}\n\\psfig{figure=diff_kappa_mask_ki_m20,height=6.cm,width=7cm,clip=}\n}}\n}\n\\caption{Missing data effects: Pixel difference outside the mask between the original E-mode convergence $\\kappa$ map and the map reconstructed from the incomplete simulated noise-free shear maps using the KS method (left) and the KS+ method (right). The field is $10^\\circ \\times 10^\\circ$ downsampled to $1024 \\times 1024$ pixels. The missing data represent roughly 20\\% of the data.}\n\\label{kappa_missing}\n\\end{figure*}\n\n\n\\subsection{Shear field projection}\n\\label{projections}\n\nWe considered fields of $10^\\circ \\times10^\\circ$. The fields were taken to be sufficiently small to be approximated by a tangent plane. \nWe used a gnomonic projection to project the points of the celestial sphere onto a tangent plane, following \\cite{cmb:pires12}, who found that this preserves the two-point statistics. We note, however, that higher-order statistics may behave differently under different projections.\n\nThe shear field projection is obtained by projecting the galaxy positions from the sphere ($\\alpha$, $\\delta$) in the catalogue onto a tangent plane ($x$, $y$).\nThe projection of a non-zero spin field such as the shear field requires a projection of both the galaxy positions and their orientations.\nProjections of the shear do not preserve the spin orientation, which can generate substantial B modes (depending on the declination) if not corrected for. \nTwo problems must be considered because of the orientation. First, the projection of the meridians are not parallel, so that north is not the same everywhere in the same projected field of view. Second, the projection of the meridians and great circles is not perpendicular, so that the system is locally non-Cartesian.\nBecause we properly correct for the other effects (e.g. shape noise, missing data, or border effects) and consider large fields of view ($10^{\\circ} \\times 10^{\\circ}$) possibly at high latitudes, these effects need to be considered.\nThe first effect is dominant and generates substantial B modes (increasing with latitude) if not corrected for. This can be easily corrected \nfor by measuring the shear orientation with respect to local north. We find that this correction is sufficient for the residual errors due to projection to become negligible compared to errors due to other effects.\n\n\n\n\n\\section{Systematic effects on the mass-map inversion}\n\\label{results_1}\nIn this section, we quantify the effect of field borders, missing data, shape noise, and the approximation of shear by reduced shear on the KS and KS+ mass-inversion methods. The quality of the reconstruction is assessed by comparing the two-point correlation functions, third- and fourth-order moments.\n\n\n\\subsection{Missing data effects}\n\\label{sect_missing}\n\nWe used the ten noise-free shear fields of $10^\\circ \\times 10^\\circ$ described in Sect.~\\ref{sect_simu} and the corresponding noise-free convergence maps.\nWe converted the shear fields into planar convergence maps using the KS and KS+ methods, masking 20\\% of the data as expected for the \\textit{Euclid}\\xspace survey. 
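The reconstruction error is quantified below by the pixel difference between the true and the reconstructed E-mode convergence, restricted to the unmasked area; schematically (hypothetical helper code):
\\begin{verbatim}
import numpy as np

def residual_stats(kappa_true, kappa_rec, mask, bins=100):
    """Standard deviation and normalised histogram (PDF) of the
    reconstruction residuals, computed outside the mask
    (mask = 1 where shear data exist)."""
    res = (kappa_rec - kappa_true)[mask == 1]
    pdf, edges = np.histogram(res, bins=bins, density=True)
    return res.std(), pdf, edges
\\end{verbatim}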
\nThe mask was derived from the Data Challenge 2 catalogues produced by the Euclid collaboration using the code \\texttt{FLASK} \\citep{flask:xavier16}.\n\n\\begin{figure}\n\\centerline{\n\\psfig{figure=Residual_PDF_mask.pdf,height=6.5cm,clip=}\n}\n\\caption{Missing data effects: PDF of the residual errors between the original E-mode convergence map and the reconstructed maps using KS (blue) and KS+ (red), measured outside the mask.}\n\\label{missing_residual_PDF}\n\\end{figure}\n\n\n\nFig.~\\ref{kappa_missing} compares the results of the KS and KS+ methods in presence of missing data.\nThe figure shows the residual maps, that is, the pixel difference between the original E-mode convergence map and the reconstructed maps.\nThe amplitude of the residuals is larger with the KS method. Detailed investigation shows that the excess error is essentially localised around the gaps. \nBecause the mass inversion operator $\\mathbf P$ is intrinsically non-local, it generates artefacts around the gaps.\nIn order to quantify the average errors,\nFig.~\\ref{missing_residual_PDF} shows the probability distribution function (PDF) of the residual maps, estimated outside the mask. \nThe standard deviation is 0.0080 with KS and 0.0062 with KS+. The residual errors obtained with KS are then 30\\% larger than those obtained with KS+.\n\n\n\\begin{figure}[h!]\n\\vbox{\n\\centerline{\n\\psfig{figure=correlation_missing_m20.pdf,width=9.cm,clip=}\n}\n\\caption{Missing data effects: Mean shear two-point correlation function $\\xi_+$ (black) and corresponding mean convergence two-point correlation function $\\xi_{\\kappa_{\\rm E}}$ reconstructed using the KS method (blue) and using the KS+ method (red) from incomplete shear maps. The estimation is only made outside the mask $M$. The shaded area represents the uncertainties on the mean estimated on 1000 deg$^{2}$. The lower panel shows the relative two-point correlation errors introduced by missing data effects, that is, the normalised difference between the upper curves.}\n\\label{missing_corr}\n}\n\\end{figure}\n\n\n\n\n\\begin{figure}\n\\vbox{\n\\centerline{\n\\hbox{\n\\psfig{figure=skewness_mask.pdf,width=9.cm,clip=}\n}}\n\\vspace{0.3cm}\n\\centerline{\n\\hbox{\n\\psfig{figure=kurtosis_mask.pdf,width=9.cm,clip=}\n}}\n}\n\\caption{Missing data effects: Third-order (upper panel) and fourth-order (lower panel) moments estimated on seven wavelet bands of the original E-mode convergence map (black) compared to the moments estimated on the KS (blue) and KS+ (red) convergence maps at the same scales. The KS and KS+ convergence maps are reconstructed from incomplete noise-free shear maps. The estimation of the third- and fourth-order moments is made outside the mask. The shaded area represents the uncertainties on the mean estimated on 1000 deg$^{2}$. 
The lower panel shows the relative higher-order moment errors introduced by missing data effects, that is, the normalised difference between the upper curves.}\n\\label{missing_hos}\n\\end{figure}\n\n\n\nThe quality of the mass inversion at different scales can be estimated using the two-point correlation function and higher-order moments computed at different scales.\nFig.~\\ref{missing_corr} compares the two-point correlation functions computed on the convergence and shear maps outside the mask.\nBecause the B mode is consistent with zero in the simulations, we expect that these two quantities are equal within the precision of the simulations (see Sect.~\\ref{2pcf}).\nThe KS method systematically underestimates the original two-point correlation function by a factor of about 2 on arcminute scales, but can reach factors of 5 at larger scales.\nThe mass-inversion operator $\\mathbf P$ being unitary, the signal energy is conserved by the transformation (i.e. $\\sum(\\gamma_1^2+ \\gamma_2^2) = \\sum(\\kappa_{\\rm E}^2+ \\kappa_{\\rm B}^2)$, where the summation is performed over all the pixels of the maps). We found that about 10\\% of the total energy leaks into the gaps and about 15\\% into the B-mode component.\nIn contrast, the errors of the KS+ method are of the order of a few percent at scales smaller than $1^\\circ$. At any scale, the KS+ errors are about 5-10 times smaller than the KS errors, remaining in the $1\\sigma$ uncertainty of the original two-point correlation function.\n\nFig.~\\ref{missing_hos} shows the third-order (upper panel) and fourth-order (lower panel) moments estimated at six different wavelet scales ( \\ang[angle-symbol-over-decimal]{;2.34;}, \\ang[angle-symbol-over-decimal]{;4.68;}, \\ang[angle-symbol-over-decimal]{;9.37;}, \\ang[angle-symbol-over-decimal]{;18.75;}, \\ang[angle-symbol-over-decimal]{;37.5;}, and \\ang[angle-symbol-over-decimal]{;75.0;}) using the KS and KS+methods. For this purpose, the pixels inside the mask were set to zero in the reconstructed convergence maps. The aperture mass maps corresponding to each wavelet scale were computed, and the moments were calculated outside the masks.\n\nThe KS method systematically underestimates the third- and fourth-order moments at all scales.\nBelow 10$\\arcmin$, the errors on the moments remain smaller than 50\\%, and they increase with scale up to a factor 3. In comparison, the KS+ errors remain much smaller at all scales, and remain within the 1$\\sigma$ uncertainty.\n\n\n\\subsection{Field border effects}\nFig.~\\ref{kappa_borders} compares the results of the KS (left) and KS+ (right) methods for border effects. It shows the residual error maps corresponding to the pixel difference between the original E-mode convergence map and the reconstructed maps.\nWith KS, as expected, the pixel difference shows errors at the border of the field. With KS+, there are also some low-level boundary effects, but these errors are considerably reduced and do not show any significant structure at the field border.\nIn KS+, the image is extended to reduce the border effects. The effect of borders decreases when the size of the borders increases. A border size of 512 pixels has been selected for \\textit{Euclid}\\xspace as a good compromise between precision and computational speed. It corresponds to extending the image to be in-painted to 2048 $\\times$ 2048 pixels.\nAgain, the PDF of these residuals can be compared to quantify the errors. 
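In practice, this simply amounts to splitting the residual map into a border strip and a central region before computing the statistics; a schematic helper (hypothetical code) is:
\\begin{verbatim}
import numpy as np

def border_centre_split(shape, width=100):
    """Boolean masks selecting a border strip of the given width (in
    pixels) and the complementary central region of the map."""
    border = np.zeros(shape, dtype=bool)
    border[:width, :] = True
    border[-width:, :] = True
    border[:, :width] = True
    border[:, -width:] = True
    return border, ~border

# border, centre = border_centre_split(res.shape, width=100)
# std_border, std_centre = res[border].std(), res[centre].std()
\\end{verbatim}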
For the two methods, Fig.~\\ref{border_residual_PDF} shows the residuals PDFs computed at the boundaries (as dotted lines) and in the remaining central part of the image (as solid lines). The border width used to compute the residual PDF is 100 pixels, which corresponds to about one degree. \nWith the KS method, the standard deviation of the residuals in the centre of the field is 0.0062.\n In the outer regions, the border effect causes errors of 0.0076 (i.e. 25\\% larger than at the centre). Away from the borders, the KS+ method gives results similar to the KS method (0.0060). However, it performs much better at the border, where the error only reaches 0.0061. The small and uniform residuals of the KS+ method show how efficiently it corrects for borders effects. \n \n\n\\begin{figure*}\n\\centerline{\n\\hbox{\n\\psfig{figure=diff_kappa_nomask_ks,height=6.cm,width=7cm,clip=}\n\\hspace{0.2cm}\n\\psfig{figure=diff_kappa_nomask_ki,height=6.cm,width=7cm,clip=}\n}}\n\\caption{Field border effects: Pixel difference between the original E-mode convergence $\\kappa$ map and the map reconstructed from the corresponding simulated shear maps using the KS method (left) and the KS+ method (right). The field is $10^\\circ \\times 10^\\circ$ downsampled to $1024 \\times 1024$ pixels.}\n\\label{kappa_borders}\n\\end{figure*}\n\n\n\\begin{figure}\n\\centerline{\n\\psfig{figure=Residual_PDF_nomask.pdf,height=6.5cm,clip=}\n}\n\\caption{Field border effects: PDF of the residual errors between the original E-mode convergence map and the convergence maps reconstructed using KS (blue) and KS+ (red). The dotted lines correspond to the PDF of the residual errors measured at the boundaries of the field, and the solid lines show the PDF of the residual errors measured in the centre of the field. The borders are 100 pixels wide.}\n\\label{border_residual_PDF}\n\\end{figure}\n\n\n\n\\begin{figure}\n\\vbox{\n\\centerline{\n\\psfig{figure=correlation_border.pdf,width=9.cm,clip=}\n}\n\\caption{Field border effects: Mean shear two-point correlation function $\\xi_+$ (black) compared to the corresponding mean convergence two-point correlation function $\\xi_{\\kappa_{\\rm E}}$ reconstructed using the KS method (blue) and the KS+ method (red). The shaded area represents the uncertainties on the mean estimated on 1000 deg$^{2}$. The lower panel shows the relative two-point correlation error introduced by border effects.}\n\\label{border_corr}\n}\n\\end{figure}\n\nAs before, the scale dependence of the errors can be estimated using the two-point correlation function and higher-order moments computed at different scales. \nFig.~\\ref{border_corr} shows the two-point correlation functions.\nFor both methods, the errors increase with angular scale because the fraction of pairs of pixels that include boundaries increase with scale. The loss of amplitude at the image border is responsible for significant errors in the two-point correlation function of the KS convergence maps. \nIn contrast, the errors are about five to ten times smaller with the KS+ method and remain in the $1\\sigma$ uncertainty range of the original two-point correlation function.\n\nFig.~\\ref{border_hos} shows field borders effects on the third-order (upper panel) and fourth-order (lower panel) moments of the convergence maps at different scales.\nAs was observed earlier for the two-point correlation estimation, the KS method introduces errors at large scales on the third- and fourth-order moment estimation. 
With KS+, the discrepancy is about $1\\%$ and within the $1\\sigma$ uncertainty.\n\nWhen the two-point correlation functions and higher-order moments are computed far from the borders, the errors of the KS method decrease, as expected. In contrast, we observe no significant improvement when the statistics are computed similarly on the KS+ maps, indicating that KS+ corrects for borders properly.\n\n\n\\begin{figure}\n\\vbox{\n\\centerline{\n\\hbox{\n\\psfig{figure=skewness_border.pdf,width=9.cm,clip=}\n}}\n\\vspace{0.3cm}\n\\centerline{\n\\hbox{\n\\psfig{figure=kurtosis_border.pdf,width=9.cm,clip=}\n}}}\n\\caption{Field border effects: Third-order (upper panel) and fourth-order (lower panel) moments estimated on seven wavelet bands of the original convergence (black) compared to the moments estimated on the KS (blue) and KS+ (red) convergence maps reconstructed from noise-free shear maps. The shaded area represents the uncertainties on the mean estimated on 1000 deg$^{2}$. The lower panel shows the relative higher-order moment errors introduced by border effects.}\n\\label{border_hos}\n\\end{figure}\n\n\n\n\\subsection{Reduced shear}\nIn this section we quantify the errors due to the approximation of shear ($\\gamma$) by the reduced shear ($g$).\nTo this end, we used the noise-free shear fields described in Sect.~\\ref{sect_simu} and computed the reduced shear fields using Eq.~(\\ref{reducedshear}) and the convergence provided by the catalogue. We then derived the reconstructed convergence maps using the KS and KS+ methods.\n\nFor both methods, the errors on the convergence maps are dominated by field border effects. \nWe did not find any estimator able to separate these two effects and then identify the reduced shear effect in the convergence maps. \nThe errors introduced by the reduced shear can be assessed by comparing the shear and reduced shear two-point correlation functions (see Fig.~\\ref{reduced_corr}), however. While the differences are negligible at large scales, they reach the percent level on arcminute scales \\citep[in agreement with][]{wl:white05}, where they become comparable or larger than the KS+ errors due to border effects.\n\n\n\\begin{figure}\n\\vbox{\n\\centerline{\n\\psfig{figure=correlation_reduced_2.pdf,width=9.cm,clip=}\n}\n\\caption{Reduced shear effects: Relative two-point correlation error between the mean two-point correlation functions \\smash{$\\xi_+^{\\gamma}$} estimated from the shear fields and corresponding mean two-point correlation function \\smash{$\\xi_+^{\\rm g}$} estimated from the reduced shear fields without any correction.}\n\\label{reduced_corr}\n}\n\\end{figure}\n\n\n\\subsection{Shape noise}\n\nIn this section we study the effect of the shape noise on convergence maps.\nWe derived noisy shear maps, assuming a Gaussian noise ($\\sigma_{\\epsilon} = 0.3$).\nThen, we compared the two mass-inversion methods. \nThe pixel difference cannot be used in this case because the convergence maps are noise dominated (see Fig.~\\ref{kappa_noise}, upper right panel).\n\\begin{figure}\n\\vbox{\n\\centerline{\n\\psfig{figure=correlation_border_noise.pdf,width=9.cm,clip=}\n}\n\\caption{Shape noise effects: Mean shear two-point correlation function $\\xi_+$ (black) and corresponding mean convergence two-point correlation function $\\xi_{\\kappa_{\\rm E}}$ estimated from complete noisy shear fields. The convergence maps have been estimated using the KS method (blue) and using the KS+ method (red). 
The shaded area represents the uncertainties on the mean estimated on 1000 deg$^{2}$. The lower panel shows the relative two-point correlation error introduced by shape noise.}\n\\label{border_noise_corr}\n}\n\\end{figure}\nHowever, we can still assess the quality of the convergence maps using two-point correlation functions because the ellipticity correlation is an unbiased estimate of the shear correlation, and similarly, the convergence two-point correlation function is not biased by the shape noise.\n\n \nFig.~\\ref{border_noise_corr} compares the results of the KS and KS+ methods when shape noise is included.\nCompared to Fig.~\\ref{border_corr}, the two-point correlation of the noisy maps is less smooth because the noise fluctuations do not completely average out. However, the amplitude of the errors introduced by the mass inversion remains remarkably similar to the errors computed without shape noise for the KS and KS+ methods. The same conclusions then hold: the errors are about five times smaller with the KS+ method.\n\n\n\n\\begin{figure}\n\\vbox{\n\\centerline{\n\\hbox{\n\\psfig{figure=skewness_border_noise.pdf,width=9.cm,clip=}\n}}\n\\vspace{0.3cm}\n\\centerline{\n\\hbox{\n\\psfig{figure=kurtosis_border_noise.pdf,width=9.cm,clip=}\n}}}\n\\caption{Shape noise effects: Third-order (upper panel) and fourth-order (lower panel) moments estimated on seven wavelet bands of the original convergence with realistic shape noise (black) compared to the moments estimated on the KS (blue) and KS+ (red) convergence reconstructed from noisy shear maps. The shaded area represents the uncertainties on the mean estimated on 1000 deg$^{2}$. The lower panel shows the relative higher-order moment errors introduced by shape noise.}\n\\label{noise_border_hos}\n\\end{figure}\n\nMoments of noisy maps are biased and potentially dominated by the shape noise contribution. For instance, the total variance in the noisy convergence map is expected to be the sum of the variance in the noise-free convergence map and the noise variance. Therefore, the moments of the noisy KS and KS+ convergence maps cannot be directly compared to moments of the original noise-free convergence maps. \nInstead, Fig.~\\ref{noise_border_hos} compares them to the moments of the original convergence maps where noise was added with properties similar to the noise expected in the convergence maps. For this purpose, we generated noise maps $N_1$ and $N_2$ for each field using Eq.~(\\ref{eq:noise1}) and (\\ref{eq:noise2}), and we derived the noise to be added to the convergence using Eq.~(\\ref{eq:noise3}).\n\nThe comparison of Fig.~\\ref{noise_border_hos} to Fig.~\\ref{border_hos} shows that the third-order moment of the convergence is not affected by shape noise. In contrast, the fourth-order moment is biased for scales smaller than 10$\\arcmin$. The two methods slightly underestimate the third- and fourth-order moments at large scales.
However, with KS+, the errors are reduced by a factor of 2 and remain roughly within the $1\\sigma$ uncertainty.\n\n\n\n\n\n\n\\subsection{All systematic effects taken into account simultaneously}\nIn this section, we assess the performance of KS and KS+ for realistic data sets by combining the effects of shape noise, reduced shear, borders, and missing data.\n\n\n\\begin{figure*}\n\\vbox{\n\\centerline{\n\\hbox{\n\\psfig{figure=kappa_1024_m20,height=6.cm,width=6.7cm,clip=}\n\\hspace{0.6cm}\n\\psfig{figure=mask_m20,height=6.cm,width=6.4cm,clip=}\n}}\n\\vspace{0.3cm}\n\\centerline{\n\\hbox{\n\\psfig{figure=kappa_mask_g5_m20,height=6.cm,width=7cm,clip=}\n\\hspace{0.2cm}\n\\psfig{figure=kappa_mask_mrlens_m20,height=6.cm,width=7cm,clip=}\n}}\n\n}\n\\caption{All systematic effects: The upper panels show the original E-mode convergence $\\kappa$ map (left) and the mask that is applied to the shear maps (right). The lower panels show the convergence map reconstructed from an incomplete noisy shear field using the KS method (left) and using the KS+ method (right) applying a nonlinear MRLens filtering with $\\alpha_{\\rm FDR} = 0.05$. The field is $10^\\circ \\times 10^\\circ$ downsampled to $1024 \\times 1024$ pixels.}\n\n\\label{kappa_complete}\n\\end{figure*}\n\n\nFig.~\\ref{kappa_complete} compares the results of the KS method and the KS+ method combined with a filtering step to correct for all systematic effects in one field.\nWe used the nonlinear MRLens filter to reduce the noise in the KS and KS+ convergence maps because it is particularly well suited for the detection of isotropic structures \\citep{stat:pires09a, wl:pires12, wl:lin16}. Again, KS+ better recovers the over-densities because it reduces the signal leakage during the mass inversion compared to KS.\n\n\\begin{figure}\n\\vbox{\n\\centerline{\n\\psfig{figure=correlation_missing_noise_m20.pdf,width=9.cm,clip=}\n}\n\\caption{All systematic effects: Mean shear two-point correlation function $\\xi_+$ (black) and corresponding mean convergence two-point correlation function $\\xi_{\\kappa_{\\rm E}}$ estimated from incomplete noisy shear fields. The convergence maps have been estimated using KS (blue) and KS+ (red). The convergence two-point correlations were estimated outside the mask. The shaded area represents the uncertainties on the mean estimated on 1000 deg$^{2}$. The lower panel shows the normalised difference between the two upper curves.}\n\\label{missing_noise_corr} \n}\n\\end{figure}\n\n\n\nFig.~\\ref{missing_noise_corr} shows the two-point correlation computed with the two methods. \nThe masked regions were excluded from the two-point correlation computation, resulting in fewer pairs and higher noise than in Fig.~\\ref{border_noise_corr}.\nAgain, the strong leakage due to missing data is clearly observed with the KS method. 
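\n\nAs an illustration of how such a masked measurement can be set up, the sketch below (in Python, with purely illustrative names) estimates the azimuthally averaged convergence two-point correlation while restricting the average to pairs of unmasked pixels; it assumes periodic boundaries, which a zero-padded version would avoid.\n\\begin{verbatim}\nimport numpy as np\n\ndef masked_two_point(kappa, mask, n_bins=20):\n    # Autocorrelation of the masked map, normalised by the autocorrelation\n    # of the mask so that only pairs of unmasked pixels contribute.\n    m = mask.astype(float)\n    k = (kappa - kappa[m > 0].mean()) * m\n    num = np.fft.fftshift(np.fft.ifft2(np.abs(np.fft.fft2(k)) ** 2).real)\n    den = np.fft.fftshift(np.fft.ifft2(np.abs(np.fft.fft2(m)) ** 2).real)\n    ny, nx = kappa.shape\n    y, x = np.indices((ny, nx))\n    r = np.hypot(x - nx // 2, y - ny // 2)   # pair separation in pixels\n    edges = np.linspace(0.0, r.max() / 2.0, n_bins + 1)\n    xi = np.zeros(n_bins)\n    for i in range(n_bins):\n        sel = (r >= edges[i]) & (r < edges[i + 1]) & (den > 0.5)\n        xi[i] = num[sel].sum() / den[sel].sum()\n    return 0.5 * (edges[1:] + edges[:-1]), xi\n\\end{verbatim}\nThe separations are returned in pixels and can be multiplied by the pixel scale (about 0.6 arcmin for a $10^\\circ$ field sampled on $1024$ pixels) for comparison with Fig.~\\ref{missing_noise_corr}.\n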
\nThe results obtained with the KS+ method reduce the errors in the mean convergence two-point correlation function by a factor of about 5, and the errors remain roughly within the 1$\\sigma$ uncertainty.\n\n\n\n\n\\begin{figure}\n\\vbox{\n\\centerline{\n\\hbox{\n\\psfig{figure=skewness_mask_noise.pdf,width=9.cm,clip=}\n}}\n\\vspace{0.3cm}\n\\centerline{\n\\hbox{\n\\psfig{figure=kurtosis_mask_noise.pdf,width=9.cm,clip=}\n}}}\n\\caption{All systematic effects: Third-order (upper panel) and fourth-order (lower panel) moments estimated on seven wavelet bands of the original convergence with realistic noise (black) compared to the moments estimated using KS (blue) and KS+ (red) obtained from incomplete noisy shear maps. The third- and fourth-order moments are estimated outside the mask. The shaded area represents the uncertainties on the mean estimated on 1000 deg$^{2}$. The lower panel shows the relative higher-order moment errors.}\n\\label{noise_missing_hos}\n\\end{figure}\n\n\n\n\nIn Fig.~\\ref{noise_missing_hos} we test the efficacy of the mass-inversion methods in preserving higher-order moments of the convergence maps in a realistic setting. As before, realistic noise was added to the original convergence maps for comparison.\nAs was observed earlier in the noise-free case, the KS method systematically underestimates the third- and fourth-order moments at all scales. \nWith KS+, the errors are significantly reduced, by a factor of about 2 in the third-order moment and by a factor of about 10 in the fourth-order moment estimation, at all scales.\nAlthough reduced, the errors of the KS+ method on the third-order moment cannot be neglected.\nThese errors might result from noise correlations introduced by the inpainting method in the shear maps. Inside the gaps, the noise is indeed correlated because it is interpolated from the remaining data. These noise correlations propagate into the convergence maps and can explain the bias in the moment estimation.\n\nWe note that the two-point correlation functions and higher-order moments are here only used to probe the accuracy of the reconstruction methods. For specific applications, the small residuals of the KS+ method can be reduced even more using additional treatment such as down-weighting the region around the mask when the moments are computed \\citep[e.g.][]{cfhtlens:vanwaerbeke13}.\n\n\n\n\n\n\\section{Conclusion}\n\\label{sect_cl}\n\nThis paper was motivated by the use of convergence maps in \\textit{Euclid}\\xspace to constrain cosmological parameters and to assess other physical constraints. \nConvergence maps encode the lensing information in a different manner, allowing more optimised computations than shear.\nHowever, the mass-inversion process is subject to field border effects, missing data, reduced shear, intrinsic alignments, and shape noise. This requires accurate control of the systematic effects during the mass inversion to reduce the information loss as much as possible. 
\nWe presented and compared the two mass-inversion methods that are included in the official Euclid data-processing pipeline: the standard Kaiser \\& Squires (KS) method, and an improved Kaiser \\& Squires (KS+) mass-inversion technique that integrates corrections for the mass-mapping systematic effects.\n The systematic effects on the reconstructed convergence maps were studied using the Euclid Flagship mock galaxy catalogue.\n\n\nIn a first step, we analysed and quantified one by one the systematic effects on reconstructed convergence maps using two-point correlation functions and moments of the convergence.\nIn this manner, we quantified the contribution of each effect to the error budget to better understand the error distribution in the convergence maps. With KS, missing data are the dominant effect at all scales. Field border effects are also significant, but only close to the map borders. These two effects are significantly reduced with KS+. The reduced shear is the smallest effect in terms of contribution and only affects small angular scales. \nThe study also showed that pixellisation provides an intrinsic regularisation and that no additional smoothing step is required to avoid infinite noise in the convergence maps.\n\n\nIn a second step, we quantified the errors introduced by the KS and KS+ methods in a realistic setting that included the systematic effects. \nWe showed that the KS+ method reduces the errors on the two-point correlation functions and on the moments of the convergence compared to the KS method. \nThe errors introduced by the mass inversion on the two-point correlation of the convergence maps are reduced by a factor of about 5. \nThe errors on the third-order and fourth-order moment estimates are reduced by factors of about 2 and 10, respectively.\nSome errors remain in the third-order moment, although they stay within the $2\\sigma$ uncertainty. They might result from noise correlations introduced by the inpainting method inside the gaps.\n\nOur study was conducted on a mock of 1000 deg$^2$ divided into ten fields of 10$^{\\circ}$ $\\times$ 10$^{\\circ}$ to remain in the flat-sky approximation. \\textit{Euclid}\\xspace will observe a field of 15 000 deg$^2$. As long as KS+ has not been extended to the curved sky, it is not possible to apply the method to larger fields without introducing significant projection effects. However, the \\textit{Euclid}\\xspace survey can be divided into small fields, which allows reducing the uncertainties in the statistics that are estimated on the convergence maps. Moreover, we can expect that part of the errors will average out.\n\nRecent studies have shown that combining the shear two-point statistics with higher-order statistics of the convergence such as higher-order moments \\citep{hos:vicinanza18}, Minkowski functionals \\citep{hos:vicinanza19}, or peak counts \\citep{hos:liu15,hos:martinet18} allows breaking common degeneracies. \nThe precision of the KS+ mass inversion makes the E-mode convergence maps a promising tool for such cosmological studies.\nIn future work, we plan to propagate these errors into cosmological parameter constraints using higher-order moments and peak counts.\n\n\n\\section*{Acknowledgments}\n{This study has been carried out within the Mass Mapping Work Package of the Weak Lensing Science Working Group of the \\textit{Euclid}\\xspace project to better understand the impact of the mass inversion systematic effects on the convergence maps.
\nThe authors would like to thank the referees and editors for their valuable comments, which helped to improve the manuscript.\nS. Pires thanks F. Sureau, J. Bobin, M. Kilbinger, A. Peel and J.-L. Starck for useful discussions. \n\\AckEC\n\n\\section*{Appendix A: KS+ inpainting algorithm}\n\n\nThis appendix describes the KS+ method presented in Sect.~\\ref{iks} in more detail.\nThe solution of the KS+ mass inversion is obtained through the iterative algorithm described in Algorithm~\\ref{algo}.\n\nThe outer loop starting at step 5 is used to correct for the reduced shear using the iterative scheme described in Sect.~\\ref{reduced}. The inner loop starting at step 7 is used to solve the optimisation problem defined by Eq.~(\\ref{eq2}). \n$\\bm \\Phi$ is the discrete cosine transform operator matrix.\nIf the convergence ${\\bm \\kappa}$ is sparse in $\\bm \\Phi$, most of the signal is contained in the strongest DCT coefficients. The smallest coefficients result from missing data, border effects, and shape noise. Thus, the algorithm is an iterative thresholding scheme whose threshold decreases exponentially (at each iteration) from a maximum value to zero, following the decreasing law $F$ described in \\cite{wl:pires09}. \nBy including more and more high DCT coefficients at each iteration, the gaps in $\\bm{\\tilde \\gamma}$ fill up steadily, and the power of the spurious B modes due to the gaps decreases.\nThe algorithm uses the fast Fourier transform at each iteration to compute the shear maps $\\bm \\gamma$ from the convergence maps $\\bm \\kappa$ (step 14) and to apply the inverse relation (step 16). \n\n\n\nA data-driven power spectrum prior is introduced at steps 11-13. To do so, the KS+ algorithm uses the undecimated isotropic wavelet transform that decomposes an image $\\bm \\kappa$ into a set of coefficients $\\{ \\bm{w_1}, \\bm{w_2}, ..., \\bm{w_J}, \\bm{c_J}\\}$, as a superposition of the form\n\\begin{eqnarray}\n\\bm \\kappa[i_1, i_2]= \\bm{c_{J}}[i_1, i_2] + \\sum_{j=1}^{J} \\bm{w_{j}}[i_1,i_2], \n \\label{wavelet}\n \\end{eqnarray}\nwhere $\\bm{c_{J}}$ is a smoothed version of the image $\\bm \\kappa$, and the $\\bm{w_{j}}$ are a set of aperture mass maps (usually called wavelet bands) at scale $\\theta = 2^{j}$. Then, we estimate the variance in each wavelet band $\\bm{w_j}$. The variance per scale estimated in this way can be directly compared to the power spectrum. This provides a way to estimate a broadband power spectrum of the convergence $\\bm \\kappa$ from incomplete data. \nThe power spectrum is then enforced by multiplying each wavelet coefficient by the factor \\smash{$\\sigma_j^{\\rm{out}}\/\\sigma_j^{\\rm{in}}$} inside the gaps, where \\smash{$\\sigma_j^{\\rm{in}}$} and \\smash{$\\sigma_j^{\\rm{out}}$} are the standard deviations estimated in the wavelet band $\\bm{w_j}$ inside and outside the mask, respectively.\nThis normalisation can be described by a linear operator $\\mathbf Q$ as used in Eq.~(\\ref{eq2}). \nThe constraint is applied to the E- and B-mode components before reconstructing the convergence $\\bm \\kappa$ by backward wavelet transform.\n\\begin{algorithm}[H]\n\\caption{KS+ algorithm} \n\\label{algo} \n\\begin{enumerate}\n\\item[1.] Project the shear from the celestial sphere onto a tangent plane by projecting the galaxy positions and applying a local rotation to the shear field.\n\\item[2.] Bin the projected shear onto a grid and define $\\bm{\\tilde{\\gamma}}$ as the average shear in each pixel.\n\\item[3.] 
Set the mask $\\mathbf M$: $M[i_1, i_2] = 1$ for pixels where we have information and $M[i_1, i_2] = 0$ for pixels with no galaxies, and take a support twice larger for the shear maps and include the borders in the masked region (see Fig.~\\ref{shear_mask}).\n\\item[4.] Set the maximum number of iterations to $I_{\\rm max}=100$, the maximum threshold $\\lambda_{\\rm max} = \\max(\\mid \\bm \\Phi^{\\rm T} \\mathbf P^* \\bm{\\tilde{\\gamma}} \\mid),$ and the minimum threshold $\\lambda_{\\rm min} = 0$.\n\\item[5.] Set $k = 0$, $\\bm{\\kappa_{\\rm E}^{k}}=0$ and iterate:\n\\begin{enumerate}\n\\item[6.] Update the shear $\\bm{\\tilde{\\gamma}^{k}} = \\bm{\\tilde{\\gamma}} \\, (1-\\bm{\\kappa_{\\rm E}^{k}})$ and initialise the solution to $\\bm{\\kappa^{k}} = \\mathbf P^* \\bm{\\tilde{\\gamma}^{k}}$.\n\\item[7.] Set $i = 0$, $\\lambda^{0}=\\lambda_{\\rm max}$, $\\bm{\\kappa^{i}}=\\bm{\\kappa^{k}}$ and iterate: \n \\begin{enumerate}\n \\item[8.] Compute the forward transform: $\\bm \\alpha = \\bm{\\Phi^{\\rm T}} \\bm{\\kappa^{i}}$.\n \\item[9.] Compute $\\bm{\\tilde \\alpha}$ by setting to zero the coefficients $\\bm \\alpha$ below the threshold $\\lambda^{i}$.\n \\item[10.] Reconstruct $\\bm{\\kappa^{i}}$ from $\\bm{\\tilde \\alpha}$: $\\bm{\\kappa^{i}} = \\bm{\\Phi} \\bm{ \\tilde\\alpha}$.\n \\item[11.] Decompose $\\bm{\\kappa^{i}}$ into its wavelet coefficients $\\{ \\bm{w_1}, \\bm{w_2}, ..., \\bm{w_J}, \\bm{c_J}\\}$.\n \\item[12.] Renormalise the wavelet coefficients $\\bm{w_j}$ by a factor $\\sigma_j^{\\rm{out}}\/\\sigma_j^{\\rm{in}}$ inside the gaps.\n \\item[13.] Reconstruct $\\bm{\\kappa^{i}}$ by performing the backward wavelet transform from the normalised coefficients. \n \\item[14.] Perform the inverse mass relation: $\\bm{\\gamma^{i}} = \\mathbf P \\bm{\\kappa^{i}}$.\n \\item[15.] Enforce the observed shear $\\bm{\\tilde\\gamma}$ outside the gaps:\n $\\bm{\\gamma^{i}} = (1-\\mathbf M) \\, \\bm{\\gamma^{i}} + \\mathbf M \\bm{\\tilde{ \\gamma}^{k}}$.\n \\item[16.] Perform the direct mass inversion: $\\bm{\\kappa^{i}} = \\mathbf P^* \\bm{\\gamma^{i}}$.\n \\item[17.] Update the threshold: $\\lambda^i = F(i, \\rm \\lambda_{min}, \\lambda_{max})$.\n \\item[18.] Set $i=i+1$. If $i>}[d]^f \\\\\nP \\ar@{-->}[ur] \\ar@{->}[r] & Y. \\\\\n}\n\\]\nDually, the injectives and $\\cat{I}$-monic maps determine each other via the extension property\n\\[\n\\xymatrix{\nX \\ar[r] \\ar@{ >->}[d]_f & I \\\\\nY . \\ar@{-->}[ur] & \\\\\n}\n\\]\nThis is part of the equivalent definition of a projective (resp.\\ injective) class described in \\cite{Christensen98}*{Proposition 2.4}. \n\\end{rem}\n\n\n\n\\begin{convention}\\label{co:SuspensionIso}\nWe will implicitly use the natural isomorphism $\\cat{T}(A,B) \\cong \\cat{T}(\\Sigma^k A, \\Sigma^k B)$ sending a map $f$ to $\\Sigma^k f$. \n\\end{convention}\n\n\\begin{defn}\\label{def:AdamsResol}\nAn \\textbf{Adams resolution} of an object $X$ in $\\cat{T}$ with respect to a projective class $(\\cat{P}, \\cat{N})$ is a diagram\n\\begin{equation} \\label{eq:AdamsResolProj}\n\\cxymatrix{\n\\mathllap{X =\\ }X_0 \\ar[rr]^{i_0} & & X_1 \\circar[dl]^{\\delta_0} \\ar[rr]^{i_1} & & X_2 \\circar[dl]^{\\delta_1} \\ar[rr]^{i_2} & & X_3 \\circar[dl]^{\\delta_2} \\ar[r] & \\cdots \\\\\n& P_0 \\ar@{->>}[ul]^{p_0} & & P_1 \\ar@{->>}[ul]^{p_1} & & P_2 \\ar@{->>}[ul]^{p_2} & & \\\\\n}\n\\end{equation}\nwhere every $P_s$ is projective, every map $i_s$ is in $\\cat{N}$, and every triangle $P_s \\ral{p_s} X_s \\ral{i_s} X_{s+1} \\ral{\\delta_s} \\Sigma P_s$ is distinguished. 
Here the arrows $\\delta_s \\colon X_{s+1} {\\ooalign{$\\longrightarrow$\\cr\\hidewidth$\\circ$\\hidewidth\\cr}} P_{s}$ denote degree-shifting maps, namely, maps $\\delta_s \\colon X_{s+1} \\to \\Sigma P_{s}$.\n\nDually, an \\textbf{Adams resolution} of an object $Y$ in $\\cat{T}$ with respect to an injective class $(\\cat{I}, \\cat{N})$ is a diagram\n\\begin{equation}\\label{eq:AdamsResolInj}\n\\cxymatrix{\n\\mathllap{Y =\\ }Y_0 \\ar@{ >->}[dr]_{p_0} & & Y_1 \\ar@{ >->}[dr]_{p_1} \\ar[ll]_{i_0} & & Y_2 \\ar@{ >->}[dr]_{p_2} \\ar[ll]_{i_1} & & Y_3 \\ar[ll]_{i_2} & \\cdots \\ar[l] \\\\\n& I_0 \\circar[ur]_{\\delta_0} & & I_1 \\circar[ur]_{\\delta_1} & & I_2 \\circar[ur]_{\\delta_2} & & \\\\\n}\n\\end{equation}\nwhere every $I_s$ is injective, every map $i_s$ is in $\\cat{N}$, and every triangle $\\Sigma^{-1} I_s \\ral{\\Sigma^{-1} \\delta_s} Y_{s+1} \\ral{i_s} Y_s \\ral{p_s} I_s$ is distinguished.%\n\\end{defn}\nFrom now on, fix a triangulated category $\\cat{T}$ and a (stable) injective class $(\\cat{I}, \\cat{N})$ in $\\cat{T}$. \n\n\\begin{lem}\nEvery object $Y$ of $\\cat{T}$ admits an Adams resolution.\n\\end{lem}\n\nGiven an object $X$ and an Adams resolution of $Y$, applying $\\cat{T}(X,-)$ yields an exact couple\n\\[\n\\xymatrix{\n\\bigoplus_{s,t} \\cat{T}(\\Sigma^{t-s} X, Y_s) \\ar[rr]^-{i = \\oplus (i_s)_*} & & \n\\bigoplus_{s,t} \\cat{T}(\\Sigma^{t-s} X, Y_s) \\ar[dl]^-{p = \\oplus (p_s)_*} \\\\\n& \\bigoplus_{s,t} \\cat{T}(\\Sigma^{t-s} X, I_s) \\ar[ul]^-{\\delta = \\oplus (\\delta_s)_*}\n}\n\\]\nand thus a spectral sequence with $E_1$ term\n\\[\nE_1^{s,t} %\n = \\cat{T}\\left( \\Sigma^{t-s} X, I_s \\right)\n \\cong \\cat{T}\\left( \\Sigma^t X, \\Sigma^s I_s \\right)\n\\]\nand differentials\n\\[\nd_r \\colon E_r^{s,t} \\to E_r^{s+r, t+r-1}\n\\]\ngiven by $d_r = p \\circ i^{-(r-1)} \\circ \\delta$, where $i^{-1}$ means choosing an $i$-preimage.\nThis is called the \\textbf{Adams spectral sequence} with respect to the injective class $\\cat{I}$\nabutting to $\\cat{T}(\\Sigma^{t-s} X, Y)$.\n\n\n\\begin{lem}\nThe $E_2$ term is given by\n\\[\nE_2^{s,t} = \\Ext_{\\cat{I}}^{s,t}(X,Y) := \\Ext_{\\cat{I}}^{s}(\\Sigma^t X,Y)\n\\]\nwhere $\\Ext_{\\cat{I}}^{s}(X,Y)$ denotes the $s^{\\text{th}}$ derived functor of $\\cat{T}(X,-)$ (relative to the injective class $\\cat{I}$) applied to the object $Y$.\n\\end{lem}\n\n\\begin{proof}\nThe Adams resolution of $Y$ yields an $\\cat{I}$-injective resolution of $Y$\n\\begin{equation}\\label{eq:InjResol}\n\\xymatrix @C=3.3pc {\n0 \\ar[r] & Y \\ar[r]^-{p_0} & I_0 \\ar[r]^-{(\\Sigma p_1) \\delta_0} & \\Sigma I_1 \\ar[r]^-{(\\Sigma^2 p_2) (\\Sigma \\delta_1)} & \\Sigma^2 I_2 \\ar[r] & \\cdots \\\\\n} \\qedhere\n\\end{equation}\n\\end{proof}\n\n\\begin{rem}\\label{re:NotGenerate}\nWe do not assume that the injective class $\\cat{I}$ generates, \ni.e., that every non-zero object $X$ admits a non-zero map $X \\to I$ to an injective. \nHence, we do not expect the Adams spectral sequence to be conditionally convergent in general; c.f.~\\cite{Christensen98}*{Proposition~4.4}.\n\\end{rem}\n\n\n\n\n\n\n\n\n\n\\begin{ex}\\label{ex:EBased}\nLet $E$ be a commutative (homotopy) ring spectrum. \nA spectrum is called \\textbf{$E$-injective} if it is a retract of $E \\sm W$ for some $W$ \\cite{HoveyS99}*{Definition 2.22}. \nA map of spectra $f \\colon X \\to Y$ is called \\textbf{$E$-monic} if the map $E \\sm f \\colon E \\sm X \\to E \\sm Y$ is a split monomorphism. \nThe $E$-injective objects and $E$-monic maps form an injective class in the stable homotopy category. 
\nThe Adams spectral sequence associated to \nthis injective class\nis the \\emph{Adams spectral sequence based on $E$-homology}, as described in \\cite{Ravenel04}*{Definition~2.2.4}, also called the \\emph{unmodified Adams spectral sequence} in \\cite{HoveyS99}*{\\S 2.2}. Further assumptions are needed in order to identify the $E_2$ term as $\\Ext$ groups in $E_*E$-comodules.\n\\end{ex}\n\n\n\n\n\\begin{defn}\nThe \\textbf{$\\cat{I}$-cohomology} of an object $X$ is the family of abelian groups\\break $H^I(X) := \\cat{T}(X,I)$ indexed by the injective objects $I \\in \\cat{I}$.\n\nA \\textbf{primary operation} in $\\cat{I}$-cohomology is a natural transformation $H^I(X) \\to H^J(X)$ of functors $\\cat{T}^{\\mathrm{op}} \\to \\mathrm{Ab}$. Equivalently, by the (additive) Yoneda lemma, a primary operation is a map $I \\to J$ in $\\cat{T}$.\n\\end{defn}\n\n\\begin{ex}\nThe differential $d_1$ is given by primary operations. More precisely, let $x \\in E_1^{s,t}$ be a map $x \\colon \\Sigma^{t-s} X \\to I_s$. Then $d_1(x) \\in E_1^{s+1,t}$ is the composite\n\\[\n\\xymatrix{\n\\Sigma^{t-s} X \\ar[r]^-{x} & I_s \\ar[r]^-{\\delta_s} & \\Sigma Y_{s+1} \\ar[r]^-{\\Sigma p_{s+1}} & \\Sigma I_{s+1}. \\\\\n}\n\\]\nIn other words, $d_1(x)$ is obtained by applying the primary operation $d_1 := (\\Sigma p_{s+1}) \\delta_s \\colon I_s \\to \\Sigma I_{s+1}$ to $x$.\n\\end{ex}\n\n\\begin{prop}\nA primary operation $\\theta \\colon I \\to J$ appears as $d_1 \\colon I_s {\\ooalign{$\\longrightarrow$\\cr\\hidewidth$\\circ$\\hidewidth\\cr}} I_{s+1}$ in some Adams resolution if and only if $\\theta$ admits an $\\cat{I}$-epi -- $\\cat{I}$-mono factorization.\n\\end{prop}\n\n\\begin{proof}\nThe condition is necessary by construction. In the factorization $d_1 = (\\Sigma p_{s+1}) \\delta_s$, the map $\\delta_s$ is $\\cat{I}$-epic while $p_{s+1}$ is $\\cat{I}$-monic.\n\nTo prove sufficiency, assume given a factorization $\\theta = iq \\colon I \\to W \\to J$, where $q \\colon I \\twoheadrightarrow W$ is $\\cat{I}$-epic and $i \\colon W \\hookrightarrow J$ is $\\cat{I}$-monic. Taking the fiber of $q$ twice yields the distinguished triangle\n\\[\n\\xymatrix{\n\\Sigma^{-1} W \\ar[r] & Y_0 \\ar@{ >->}[r] & I \\ar@{->>}[r]^q & W \\\\\n}\n\\]\nwhich we relabel\n\\[\n\\xymatrix{\nY_1 \\ar[r]^-{i_0} & Y_0 \\ar@{ >->}[r]^-{p_0} & I \\ar@{->>}[r]^-{\\delta_0} & \\Sigma Y_1. \\\\\n}\n\\]\nRelabeling the given map $i \\colon W \\hookrightarrow J$ as $\\Sigma p_1 \\colon \\Sigma Y_1 \\hookrightarrow \\Sigma I_1$, we can continue the usual construction of an Adams resolution of $Y_0$ as illustrated in Diagram~\\eqref{eq:AdamsResolInj}, in which $\\theta = iq$ appears as the composite $(\\Sigma p_1) \\delta_0$. Note that by the same argument, for any $s \\geq 0$, $\\theta$ appears as $d_1 \\colon I_s {\\ooalign{$\\longrightarrow$\\cr\\hidewidth$\\circ$\\hidewidth\\cr}} I_{s+1}$ in some (other) Adams resolution.\n\\end{proof}\n\n\\begin{ex}\nNot every primary operation appears as $d_1$ in an Adams resolution. For example, consider the stable homotopy category with the projective class $\\cat{P}$ generated by the sphere spectrum $S = S^0$, that is, $\\cat{P}$ consists of retracts of wedges of spheres. The $\\cat{P}$-epis (resp. $\\cat{P}$-monos) consist of the maps which are surjective (resp. injective) on homotopy groups. 
The primary operation $2 \\colon S \\to S$ does \\emph{not} admit a $\\cat{P}$-epi -- $\\cat{P}$-mono factorization.\n\nIndeed, assume that $2 = iq \\colon S \\twoheadrightarrow W \\hookrightarrow S$ is such a factorization. We will show that this implies $\\pi_2 (S\/2) = \\mathbb{Z}\/2 \\oplus \\mathbb{Z}\/2$, contradicting the known fact $\\pi_2 (S\/2) = \\mathbb{Z}\/4$. Here $S\/2$ denotes the mod $2$ Moore spectrum, sitting in the cofiber sequence $S \\ral{2} S \\to S\/2$.\n\nBy the octahedral axiom applied to the factorization $2 = iq$, there is a diagram\n\\[\n\\xymatrix{\nS \\ar@{=}[d] \\ar@{->>}[r]^-{q} & W \\ar@{ >->}[d]^{i} \\ar[r] & C_q \\ar[d]^{\\alpha} \\ar@{ >->}[r]^{\\delta'} & S^1 \\ar@{=}[d] \\\\\nS \\ar[r]^2 & S \\ar@{->>}[d]^{j} \\ar[r] & S\/2 \\ar[d]^{\\beta} \\ar[r]^{\\delta} & S^1 \\\\\n& C_i \\ar@{=}[r] & C_i & \\\\\n}\n\\]\nwith distinguished rows and columns. The long exact sequence in homotopy yields $\\pi_n C_q = \\mbox{}_2 \\pi_{n-1} S$, \nwhere the induced map $\\pi_n(\\delta') \\colon \\pi_n C_q \\to \\pi_n S^1$ corresponds to the inclusion $\\mbox{}_2 \\pi_{n-1} S \\hookrightarrow \\pi_{n-1} S$. Likewise, we have $\\pi_n C_i = \\left( \\pi_{n} S \\right) \/ 2$, \nwhere the induced map $\\pi_n(j) \\colon \\pi_n S \\to \\pi_n C_i$ corresponds to the quotient map $\\pi_{n} S \\twoheadrightarrow \\left( \\pi_{n} S \\right) \/ 2$. The defining cofiber sequence $S \\ral{2} S \\to S\/2$ yields the exact sequence\n\\[\n\\xymatrix{\n\\pi_n S \\ar[r]^2 & \\pi_n S \\ar[r] & \\pi_n (S\/2) \\ar[r]^{\\pi_n \\delta} & \\pi_{n-1} S \\ar[r]^2 & \\pi_{n-1} S \\\\\n}\n\\] \nwhich in turn yields the short exact sequence\n\\begin{equation*}%\n\\xymatrix{\n0 \\ar[r] & \\left( \\pi_n S \\right) \/ 2 \\ar[r] & \\pi_n (S\/2) \\ar[r]^{\\pi_n \\delta} & \\mbox{}_2 \\pi_{n-1} S \\ar[r] & 0. \\\\\n}\n\\end{equation*}\nThe map $\\pi_n (\\alpha) \\colon \\mbox{}_2 \\pi_{n-1} S \\to \\pi_n(S\/2)$ is a splitting of this sequence, because of the equality $\\pi_n(\\delta) \\pi_n (\\alpha) = \\pi_n(\\delta \\alpha) = \\pi_n(\\delta')$. However, the short exact sequence does not split in the case $n=2$, by the isomorphism $\\pi_2(S\/2) = \\mathbb{Z}\/4$. \nFor references, see~\\cite{Schwede12}*{Proposition II.6.48}, \\cite{Schwede10}*{Proposition 4},\nand~\\cite{MO100272}.\n\\end{ex}\n\n\n\\section{\\texorpdfstring{$3$}{3}-fold Toda brackets}\\label{se:3-fold-Toda-brackets}\n\n\nIn this section, we review different constructions of $3$-fold Toda brackets and some of their properties.\n\n\\enlargethispage{3pt}\n\\begin{defn} \\label{def:TodaBracket}\nLet $X_0 \\ral{f_1} X_1 \\ral{f_2} X_2 \\ral{f_3} X_3$ be a diagram in a triangulated category $\\cat{T}$. 
We define subsets of $\\cat{T}(\\Sigma X_0, X_3)$ as follows.\n\\begin{itemize}\n\\item The \\textbf{iterated cofiber Toda bracket} $\\left\\langle f_3, f_2, f_1 \\right\\rangle_{\\cc} \\subseteq \\cat{T}(\\Sigma X_0, X_3)$ consists of all maps $\\psi \\colon \\Sigma X_0 \\to X_3$ that appear in a commutative diagram\n\\begin{equation} \\label{eq:CofCof}\n\\cxymatrix{\nX_0 \\ar@{=}[d] \\ar[r]^-{f_1} & X_1 \\ar@{=}[d] \\ar[r] & C_{f_1} \\ar[d]^{\\varphi} \\ar[r] & \\Sigma X_0 \\ar[d]^{\\psi} \\\\\nX_0 \\ar[r]^-{f_1} & X_1 \\ar[r]^-{f_2} & X_2 \\ar[r]^-{f_3} & X_3 \\\\\n}\n\\end{equation}\nwhere the top row is distinguished.\n\\item The \\textbf{fiber-cofiber Toda bracket} $\\left\\langle f_3, f_2, f_1 \\right\\rangle_{\\fc} \\subseteq \\cat{T}(\\Sigma X_0, X_3)$ consists of all composites $\\beta \\circ \\Sigma \\alpha \\colon \\Sigma X_0 \\to X_3$, where $\\alpha$ and $\\beta$ appear in a commutative diagram\n\\begin{equation} \\label{eq:FibCof}\n\\vcenter{\n\\xymatrix @C=3.3pc {\nX_0 \\ar[d]_-{\\alpha} \\ar[r]^-{f_1} & X_1 \\ar@{=}[d] & & \\\\\n\\Sigma^{-1} C_{f_2} \\ar[r] & X_1 \\ar[r]^-{f_2} & X_2 \\ar@{=}[d] \\ar[r] & C_{f_2} \\ar[d]^-{\\beta} \\\\\n& & X_2 \\ar[r]^-{f_3} & X_3 \\\\\n}\n}\n\\end{equation}\nwhere the middle row is distinguished.\n\\item The \\textbf{iterated fiber Toda bracket} $\\left\\langle f_3, f_2, f_1 \\right\\rangle_{\\ff} \\subseteq \\cat{T}(\\Sigma X_0, X_3)$ consists of all maps $\\Sigma \\delta \\colon \\Sigma X_0 \\to X_3$ where $\\delta$ appears in a commutative diagram\n\\begin{equation} \\label{eq:FibFib}\n\\vcenter{\n\\xymatrix @C=3.3pc {\nX_0 \\ar[d]_{\\delta} \\ar[r]^-{f_1} & X_1 \\ar[d]_{\\gamma} \\ar[r]^-{f_2} & X_2 \\ar@{=}[d] \\ar[r]^-{f_3} & X_3 \\ar@{=}[d] \\\\\n\\Sigma^{-1} X_3 \\ar[r] & \\Sigma^{-1} C_{f_3} \\ar[r] & X_2 \\ar[r]^-{f_3} & X_3 \\\\\n}\n}\n\\end{equation}\nwhere the bottom row is distinguished.\n\\end{itemize}\n\\end{defn}\n\n\\begin{rem}\\label{re:3-fold-negation}\nIn the literature, there are variations of these definitions, which sometimes\ndiffer by a sign.\nWith the notion of cofiber sequence implicitly used in~\\cite{Toda62},\nour definitions agree with Toda's.\nThe Toda bracket also depends on the choice of triangulation.\nGiven a triangulation, there is an associated negative triangulation whose\ndistinguished triangles are those triangles whose negatives are distinguished\nin the original triangulation (see~\\cite{Balmer02}).\nNegating a triangulation negates the $3$-fold Toda brackets.\nDan Isaksen has pointed out to us that in the stable homotopy category\nthere are $3$-fold Toda brackets which are not equal to their own \nnegatives.\nFor example, Toda showed in~\\cite{Toda62}*{Section~VI.v, and Theorems~7.4\nand~14.1} that the Toda bracket $\\left\\langle 2 \\sigma, 8, \\nu \\right\\rangle$\nhas no indeterminacy and contains an element $\\zeta$ of order $8$.\nWe give another example in Example~\\ref{ex:negative}.\n\\end{rem}\n\nThe following proposition can be found in \\cite{Sagave08}*{Remark 4.5 and Figure 2} and was kindly pointed out by Fernando Muro. It is also proved in \\cite{Meier12}*{\\S 4.6}. We provide a different proof more in the spirit of what we do later. In the case of spaces, it was originally proved by Toda \\cite{Toda62}*{Proposition 1.7}. \n\n\\begin{prop} \\label{TodaBracketsAgree}\nThe iterated cofiber, fiber-cofiber, and iterated fiber definitions of Toda brackets coincide. 
More precisely, for any diagram $X_0 \\ral{f_1} X_1 \\ral{f_2} X_2 \\ral{f_3} X_3$ in $\\cat{T}$, the following subsets of $\\cat{T}(\\Sigma X_0, X_3)$ are equal:\n\\[\n\\left\\langle f_3, f_2, f_1 \\right\\rangle_{\\cc} = \\left\\langle f_3, f_2, f_1 \\right\\rangle_{\\fc} = \\left\\langle f_3, f_2, f_1 \\right\\rangle_{\\ff}.\n\\]\n\\end{prop}\n\n\\begin{proof}\nWe will prove the first equality; the second equality is dual.\n\n($\\supseteq$) Let $\\beta (\\Sigma \\alpha) \\in \\left\\langle f_3, f_2, f_1 \\right\\rangle_{\\fc}$ be obtained from maps $\\alpha$ and $\\beta$ as in Diagram \\eqref{eq:FibCof}. Now consider the diagram with distinguished rows\n\\[\n\\xymatrix{\nX_0 \\ar[d]^{\\alpha} \\ar[r]^-{f_1} & X_1 \\ar@{=}[d] \\ar[r] & C_{f_1} \\ar@{-->}[d]^{\\varphi} \\ar[r] & \\Sigma X_0 \\ar[d]^{\\Sigma \\alpha} \\\\\n\\Sigma^{-1} C_{f_2} \\ar[r] & X_1 \\ar[r]^-{f_2} & X_2 \\ar@{=}[d] \\ar[r] & C_{f_2} \\ar[d]^{\\beta} \\\\\n& & X_2 \\ar[r]^-{f_3} & X_3 \\\\\n}\n\\]\nwhere there exists a filler $\\varphi \\colon C_{f_1} \\to X_2$. The commutativity of the tall rectangle on the right exhibits the membership $\\beta (\\Sigma \\alpha) \\in \\left\\langle f_3, f_2, f_1 \\right\\rangle_{\\cc}$.\n\n($\\subseteq$) Let $\\psi \\in \\left\\langle f_3, f_2, f_1 \\right\\rangle_{\\cc}$ be as in Diagram \\eqref{eq:CofCof}. The octahedral axiom comparing the cofibers of $q_1$, $\\varphi$, and $\\varphi \\circ q_1 = f_2$ yields a commutative diagram\n\\[\n\\xymatrix @C=1.1pc @R=0.88pc {\n&& && \\Sigma^{-1} C_{\\varphi} \\ar[dd]_{- \\Sigma^{-1} \\iota} \\ar@{=}[rr] & & \\Sigma^{-1} C_{\\varphi} \\ar[dd]_{- \\Sigma^{-1} \\eta} & \\\\ \\\\\nX_0 \\ar[dd]_{\\alpha} \\ar[rr]^-{f_1} && X_1 \\ar@{=}[dd] \\ar[rr]^-{q_1} && C_{f_1} \\ar[dd]_{\\varphi} \\ar[rr]^-{\\iota_1} & & \\Sigma X_0 \\ar[dd]_{\\Sigma \\alpha} \\ar@\/_1pc\/[dddl]_(0.35){\\psi} \\ar[rr]^-{- \\Sigma f_1} && \\Sigma X_1 \\ar@{=}[dd] \\\\ \\\\\n\\Sigma^{-1} C_{f_2} \\ar[rr]^(0.52){- \\Sigma^{-1} \\iota_2} && X_1 \\ar[rr]^-{f_2} && X_2 \\ar[dd]_{q} \\ar[dr]_{f_3} \\ar[rr]^(0.4){q_2} & & C_{f_2} \\ar[dd]^{\\xi} \\ar@{-->}[dl]^{\\!\\beta} \\ar[rr]^{\\iota_2} && \\Sigma X_1 \\\\\n&& && & X_3 & & \\\\\n&& && C_{\\varphi} \\ar@{-->}[ur]^-{\\theta} \\ar@{=}[rr] & & C_{\\varphi}, & \\\\\n}\n\\]\nwhere the rows and columns are distinguished. By exactness of the sequence\n\\[\n\\xymatrix @C=3.3pc {\n\\cat{T}(C_{f_2}, X_3) \\ar[r]^-{(\\Sigma \\alpha)^*} & \\cat{T}(\\Sigma X_0, X_3) \\ar[r]^-{(- \\Sigma^{-1} \\eta)^*} & \\cat{T}(\\Sigma^{-1} C_{\\varphi}, X_3)\n}\n\\]\nthere exists a map $\\beta \\colon C_{f_2} \\to X_3$ satisfying $\\psi = \\beta (\\Sigma \\alpha)$ if and only if the restriction of $\\psi$ to the fiber $\\Sigma^{-1} C_{\\varphi}$ of $\\Sigma \\alpha$ is zero. That condition does hold: one readily checks the equality $\\psi (- \\Sigma^{-1} \\eta) = 0$.\nThe chosen map $\\beta \\colon C_{f_2} \\to X_3$ might \\emph{not} satisfy the equation $\\beta q_2 = f_3$, but we will correct it to another map $\\beta'$ which does. The error term $f_3 - \\beta q_2$ is killed by restriction along $\\varphi$, %\nand therefore factors through the cofiber of $\\varphi$, i.e., there exists a factorization\n\\[\nf_3 - \\beta q_2 = \\theta \\iota\n\\]\nfor some $\\theta \\colon C_{\\varphi} \\to X_3$. The corrected map $\\beta' := \\beta + \\theta \\xi \\colon C_{f_2} \\to X_3$ satisfies $\\beta' q_2 = f_3$. 
\nMoreover, this corrected map $\\beta'$ still satisfies $\\beta' (\\Sigma \\alpha) = \\psi = \\beta (\\Sigma \\alpha)$, since the correction term satisfies $\\theta \\xi (\\Sigma \\alpha) = 0$. %\n\\end{proof}\n\nThanks to the proposition, we can write $\\left\\langle f_3, f_2, f_1 \\right\\rangle$ if we \ndo not need to specify a particular definition of the Toda bracket.\n\nWe also recall this well-known fact, and leave the proof as an exercise:\n\n\\begin{lem}\\label{le:indeterminacy}\nFor any diagram $X_0 \\ral{f_1} X_1 \\ral{f_2} X_2 \\ral{f_3} X_3$ in $\\cat{T}$,\nthe subset $\\left\\langle f_3, f_2, f_1 \\right\\rangle$ of $\\cat{T}(\\Sigma X_0, X_3)$ is a coset of\nthe subgroup\n\\[\n (f_3)_* \\, \\cat{T}(\\Sigma X_0, X_2) + (\\Sigma f_1)^* \\, \\cat{T}(\\Sigma X_1, X_3) .\n\\vspace{-18pt}\n\\]\n\\qed\n\\end{lem}\n\nThe displayed subgroup is called the \\textbf{indeterminacy}, and when\nit is trivial, we say that the Toda bracket \\textbf{has no indeterminacy}.\n\n\\begin{lem}\\label{le:MoveAround}\nConsider maps $X_0 \\ral{f_1} X_1 \\ral{f_2} X_2 \\ral{f_3} X_3 \\ral{f_4} X_4$. Then the following inclusions of subsets of $\\cat{T}(\\Sigma X_0, X_4)$ hold.\n\\begin{enumerate}\n\\item\n\\[\nf_4 \\left\\langle f_3, f_2, f_1 \\right\\rangle \\subseteq \\left\\langle f_4 f_3, f_2, f_1 \\right\\rangle\n\\]\n\\item\n\\[\n\\left\\langle f_4, f_3, f_2 \\right\\rangle f_1 \\subseteq \\left\\langle f_4, f_3, f_2 f_1 \\right\\rangle\n\\]\n\\item\n\\[\n\\left\\langle f_4 f_3, f_2, f_1 \\right\\rangle \\subseteq \\left\\langle f_4, f_3 f_2, f_1 \\right\\rangle\n\\]\n\\item\n\\[\n\\left\\langle f_4, f_3, f_2 f_1 \\right\\rangle \\subseteq \\left\\langle f_4, f_3 f_2, f_1 \\right\\rangle.\n\\]\n\\end{enumerate}\n\\end{lem}\n\n\\begin{proof}\n(1)-(2) These inclusions %\nare straightforward.\n\n(3)-(4) Using the iterated cofiber definition, the subset $\\left\\langle f_4 f_3, f_2, f_1 \\right\\rangle_{\\cc}$ consists of the maps $\\psi \\colon \\Sigma X_0 \\to X_4$ appearing in a commutative diagram\n\\[\n\\xymatrix{\nX_0 \\ar@{=}[d] \\ar[r]^-{f_1} & X_1 \\ar@{=}[d] \\ar[r] & C_{f_1} \\ar[d]^{\\varphi} \\ar[rr] & & \\Sigma X_0 \\ar[d]^{\\psi} \\\\\nX_0 \\ar[r]^-{f_1} & X_1 \\ar[r]^-{f_2} & X_2 \\ar[r]^-{f_3} & X_3 \\ar[r]^-{f_4} & X_4 \\\\\n}\n\\]\nwhere the top row is distinguished. Given such a diagram, the diagram\n\\[\n\\xymatrix{\nX_0 \\ar@{=}[d] \\ar[r]^-{f_1} & X_1 \\ar@{=}[d] \\ar[r] & C_{f_1} \\ar[dr]^{f_3 \\varphi} \\ar[rr] & & \\Sigma X_0 \\ar[d]^{\\psi} \\\\\nX_0 \\ar[r]^-{f_1} & X_1 \\ar[r]^-{f_2} & X_2 \\ar[r]^-{f_3} & X_3 \\ar[r]^-{f_4} & X_4 \\\\\n}\n\\]\nexhibits the membership $\\psi \\in \\left\\langle f_4, f_3 f_2, f_1 \\right\\rangle_{\\cc}$. A similar argument can be used to prove the inclusion $\\left\\langle f_4, f_3, f_2 f_1 \\right\\rangle_{\\ff} \\subseteq \\left\\langle f_4, f_3 f_2, f_1 \\right\\rangle_{\\ff}$.\n\\end{proof}\n\n\\begin{ex}\nThe inclusion $\\left\\langle f_4 f_3, f_2, f_1 \\right\\rangle \\subseteq \\left\\langle f_4, f_3 f_2, f_1 \\right\\rangle$ need not be an equality. For example, consider the maps $X \\ral{0} Y \\ral{1} Y \\ral{0} Z \\ral{1} Z$. 
The Toda brackets being compared are\n\\begin{align*}\n\\left\\langle 1_Z 0, 1_Y, 0 \\right\\rangle &= \\left\\langle 0, 1_Y, 0 \\right\\rangle \\\\\n&= \\left\\{ 0 \\right\\} \\\\\n\\left\\langle 1_Z, 0 1_Y, 0 \\right\\rangle &= \\left\\langle 1_Z, 0, 0 \\right\\rangle \\\\\n&= \\cat{T}(\\Sigma X, Z).\n\\end{align*}\n\\end{ex}\n\n\\begin{defn}\nIn the setup of Definition \\ref{def:TodaBracket}, the \\textbf{restricted Toda brackets} are the subsets of the Toda bracket\n\\begin{align*}\n&\\left\\langle f_3, \\stackrel{\\alpha}{f_2, f_1} \\right\\rangle_{\\fc} \\subseteq \\left\\langle f_3, f_2, f_1 \\right\\rangle_{\\fc} \\\\\n&\\left\\langle \\stackrel{\\beta}{f_3, f_2}, f_1 \\right\\rangle_{\\fc} \\subseteq \\left\\langle f_3, f_2, f_1 \\right\\rangle_{\\fc}\n\\end{align*}\nconsisting of all composites $\\beta (\\Sigma \\alpha) \\colon \\Sigma X_0 \\to X_3$, where $\\alpha$ and $\\beta$ appear in a commutative diagram \\eqref{eq:FibCof} where the middle row is distinguished, with the prescribed map $\\alpha \\colon X_0 \\to \\Sigma^{-1} C_{f_2} $ (resp. $\\beta \\colon C_{f_2} \\to X_3$).\n\\end{defn}\n\n\n\n\nThe lift to the fiber $\\alpha \\colon X_0 \\to \\Sigma^{-1} C_{f_2}$ is a witness of the equality $f_2 f_1 = 0$. Dually, the extension to the cofiber $\\beta \\colon C_{f_2} \\to X_3$ is a witness of the equality $f_3 f_2 = 0$.\n\n\\begin{rem} \\label{ComposeWitness}\nLet $X_1 \\ral{f_2} X_2 \\ral{q_2} C_{f_2} \\ral{\\iota_2} \\Sigma X_1$ be a distinguished triangle. By definition, we have equalities of subsets\n\\begin{align*}\n&\\left\\langle f_3, \\stackrel{\\alpha}{f_2, f_1} \\right\\rangle_{\\fc} = \\left\\langle f_3, \\stackrel{1}{f_2, {-}}\\!\\! \\Sigma^{-1} \\iota_2 \\right\\rangle_{\\fc} (\\Sigma \\alpha) \\\\\n&\\left\\langle \\stackrel{\\beta}{f_3, f_2}, f_1 \\right\\rangle_{\\fc} = \\beta \\left\\langle \\stackrel{1}{q_2, f_2}, f_1 \\right\\rangle_{\\fc}.\n\\end{align*}\n\\end{rem}\n\n\n\n\n\n\\section{Adams \\texorpdfstring{$d_2$}{d2} in terms of \\texorpdfstring{$3$}{3}-fold Toda brackets}\\label{se:AdamsD2}\n\nIn this section, we show that the Adams differential $d_r$ can be expressed in several ways \nusing $3$-fold Toda brackets. One of these expressions is as a secondary cohomology operation.\n\nGiven an injective class $\\cat{I}$,\nan Adams resolution of an object $Y$ as in Diagram~\\eqref{eq:AdamsResolInj}, and an object $X$, consider a class $[x] \\in E_2^{s,t}$ represented by a cycle $x \\in E_1^{s,t} = \\cat{T}(\\Sigma^{t-s} X, I_s)$. Recall that $d_2 [x] \\in E_2^{s+2,t+1} $ is obtained as illustrated in the diagram \n\\begin{equation*}\n\\xymatrix{\n\\cdots & Y_s \\ar[l] \\ar@{ >->}[dr]_{p_s} & & Y_{s+1} \\ar@{ >->}[dr]_{p_{s+1}} \\ar[ll]_{i_s} & & Y_{s+2} \\ar@{ >->}[dr]_{p_{s+2}} \\ar[ll]_{i_{s+1}} & & Y_{s+3} \\ar[ll]_{i_{s+2}} & \\cdots \\ar[l] \\\\\n& & I_s \\circar[ur]_{\\delta_s} & & I_{s+1} \\circar[ur]_{\\delta_{s+1}} & & I_{s+2} \\circar[ur]_{\\delta_{s+2}} & & \\\\\n& & \\Sigma^{t-s} X \\ar[u]_x \\ar@\/^0.5pc\/@{-->}[uurrr]_(0.4){\\widetilde{x}} \\ar@\/_0.5pc\/[urrrr]_{d_2(x)} & & & & & & \\\\\n}\n\\end{equation*}%\nExplicitly, since $x$ satisfies $d_1(x) = (\\Sigma p_{s+1}) \\delta_s x = 0$, we can choose a lift $\\widetilde{x} \\colon \\Sigma^{t-s} X {\\ooalign{$\\longrightarrow$\\cr\\hidewidth$\\circ$\\hidewidth\\cr}} \\Sigma Y_{s+2}$ of $\\delta_s x$ to the fiber of $\\Sigma p_{s+1}$. 
Then the differential $d_2$ is given by \n\\[\nd_2 [x] = \\left[ (\\Sigma p_{s+2}) \\widetilde{x} \\right].\n\\]\n\nFrom now on, we will unroll the distinguished triangles and keep track of the suspensions. %\nFollowing Convention \\ref{co:SuspensionIso}, we will use the identifications\n\\[\nE_1^{s+2,t+1} = \\cat{T}(\\Sigma^{t-s-1} X, I_{s+2}) \\cong \\cat{T}(\\Sigma^{t-s} X, \\Sigma I_{s+2}) \\cong \\cat{T}(\\Sigma^{t-s+1} X, \\Sigma^2 I_{s+2}).\n\\]\n\n\\begin{prop} \\label{pr:DifferentD2}\nDenote by $d_2 [x] \\subseteq E_1^{s+2,t+1}$ the subset of all representatives of the class $d_2 [x] \\in E_2^{s+2,t+1}$. Then the following equalities hold:\n\\begin{enumerate}\n\\item\\label{it:d1pdex}\n\\begin{align*}\nd_2 [x] &= \\left\\langle \\stackrel{\\Sigma^2 p_{s+2}}{\\Sigma d_1,\\strut \\Sigma p_{s+1}}, \\delta_s x \\right\\rangle_{\\fc} \\\\\n&= \\left\\langle \\Sigma d_1, \\Sigma p_{s+1}, \\delta_s x \\right\\rangle\n\\end{align*}\n\\item\n\\begin{align*}\nd_2 [x] &= (\\Sigma^2 p_{s+2}) \\left\\langle \\stackrel{\\!\\!1}{\\Sigma \\delta_{s+1}, \\Sigma p_{s+1}}, \\delta_s x \\right\\rangle_{\\fc} \\\\\n&= (\\Sigma^2 p_{s+2}) \\left\\langle \\Sigma \\delta_{s+1}, \\Sigma p_{s+1}, \\delta_s x \\right\\rangle\n\\end{align*}\n\\item\\label{it:d1d1x}\n\\[\nd_2 [x] = \\left\\langle \\stackrel{\\ \\beta}{\\Sigma d_1, d_1}, x \\right\\rangle_{\\fc} ,\n\\]\nwhere $\\beta$ is the composite $C \\ral{\\widetilde{\\beta}} \\Sigma^{2} Y_{s+2} \\ral{\\Sigma^2 p_{s+2}} \\Sigma^{2} I_{s+2}$ and $\\widetilde{\\beta}$ is obtained from the octahedral axiom applied to the factorization $d_1 = (\\Sigma p_{s+1}) \\delta_{s} \\colon I_s \\to \\Sigma Y_{s+1} \\to \\Sigma I_{s+1}$.\n\\end{enumerate}\n\\end{prop}\n\nIn~\\eqref{it:d1d1x}, $\\beta$ is a witness to the fact that the composite $(\\Sigma d_1) d_1$ \nof primary operations is zero, and so the restricted Toda bracket is a secondary operation.\n\n\\begin{proof}\nNote that $t$ plays no role in the statement, so we will assume without loss of generality that $t=s$ holds.\n\n(1) The first equality holds by definition of $d_2 [x]$, namely choosing a lift of $\\delta_s x$ to the fiber of $\\Sigma p_{s+1}$. \nThe second equality follows from the fact that $\\Sigma^2 p_{s+2}$ is the \\emph{unique} extension of $\\Sigma d_1 = (\\Sigma^2 p_{s+2}) (\\Sigma \\delta_{s+1})$ to the cofiber of $\\Sigma p_{s+1}$. Indeed, $\\Sigma \\delta_{s+1}$ is $\\cat{I}$-epic and $\\Sigma I_{s+2}$ is injective, so that the restriction map\n\\[\n(\\Sigma \\delta_{s+1})^* \\colon \\cat{T}(\\Sigma^2 Y_{s+2}, \\Sigma^2 I_{s+2}) \\to \\cat{T}(\\Sigma I_{s+1}, \\Sigma^2 I_{s+2})\n\\]\nis injective.\n\n(2) The first equality holds by Remark \\ref{ComposeWitness}. The second equality holds because $\\Sigma \\delta_{s+1}$ is $\\cat{I}$-epic and $\\Sigma I_{s+2}$ is injective, as in part (1).\n\n\n(3) The map $d_1 \\colon I_s \\to \\Sigma I_{s+1}$ is the composite $I_s \\ral{\\delta_s} \\Sigma Y_{s+1} \\ral{\\Sigma p_{s+1}} \\Sigma I_{s+1}$. 
The octahedral axiom applied to this factorization yields the dotted arrows in a commutative diagram\n\\[\n\\cxymatrix{\nI_s \\ar@{=}[d] \\ar[r]^-{\\delta_s} & \\Sigma Y_{s+1} \\ar[d]_{\\Sigma p_{s+1}} \\ar[r]^-{\\Sigma i_s} & \\Sigma Y_s \\ar@{-->}[d]^{\\widetilde{\\alpha}} \\ar[r]^-{-\\Sigma p_s} & \\Sigma I_s \\ar@{=}[d] \\\\\nI_s \\ar[r]^-{d_1} & \\Sigma I_{s+1} \\ar[d]_{\\Sigma \\delta_{s+1}} \\ar@{-->}[r]^-{q} & C_{d_1} \\ar@{-->}[d]^{\\widetilde{\\beta}} \\ar@{-->}[r]^-{\\iota} & \\Sigma I_s \\\\\n& \\Sigma^2 Y_{s+2} \\ar[d]_{-\\Sigma^{2} i_{s+1}} \\ar@{=}[r] & \\Sigma^2 Y_{s+2} \\ar[d] & \\\\\n& \\Sigma^2 Y_{s+1} \\ar[r]^{\\Sigma^{2} i_s} & \\Sigma^2 Y_{s} & \\\\\n}\n\\]\nwhere the rows and columns are distinguished and the equation $(-\\Sigma^2 i_{s+1}) \\widetilde{\\beta} = (\\Sigma \\delta_s) \\iota$ holds. The restricted bracket $\\left\\langle \\stackrel{\\ \\beta}{\\Sigma d_1, d_1}, x \\right\\rangle_{\\fc}$ consists of the maps $\\Sigma X \\to \\Sigma^2 I_{s+2}$ appearing as downward composites in the commutative diagram\n\\[\n\\xymatrix@C-7pt@R-3pt{\n& & & & \\Sigma X \\ar@{-->}[d]_-{\\Sigma \\alpha} \\ar[rr]^{- \\Sigma x} & & \\Sigma I_s \\ar@{=}[d] \\\\\nI_s \\ar[rr]^-{d_1} & & \\Sigma I_{s+1} \\ar@{=}[dd] \\ar[rr]^-{q} & & C_{d_1} \\ar[dl]_(0.55){\\widetilde{\\beta}\\!\\!} \\ar[dd]^{\\beta} \\ar[rr]^-{\\iota} && \\Sigma I_s \\\\\n& & & \\Sigma^2 Y_{s+2} \\ar[dr]^-{\\!\\Sigma^2 p_{s+2}} & & \\\\\n& & \\Sigma I_{s+1} \\ar[rr]_-{\\Sigma d_1} \\ar[ur]^-{\\Sigma \\delta_{s+1}\\!} & & \\Sigma^2 I_{s+2} & \\\\\n}\n\\]\n\n($\\supseteq$) Let $\\beta (\\Sigma \\alpha) \\in \\left\\langle \\stackrel{\\beta}{d_1, d_1}, x \\right\\rangle_{\\fc}$. By definition of $\\beta$, we have $\\beta (\\Sigma \\alpha) = (\\Sigma^2 p_{s+2}) \\widetilde{\\beta} (\\Sigma \\alpha)$. Then $\\widetilde{\\beta} (\\Sigma \\alpha) \\colon \\Sigma X \\to \\Sigma^{2} Y_{s+2}$ is a valid choice of the lift $\\widetilde{x}$ in the definition of $d_2[x]$:\n\\begin{align*}\n(\\Sigma^2 i_{s+1}) \\widetilde{\\beta} (\\Sigma \\alpha) &= -(\\Sigma \\delta_s) \\iota (\\Sigma \\alpha) \\\\%By the equation -i_{s+1} \\widetilde{\\beta} = \\delta_s \\iota in the octahedron\n&= - (\\Sigma \\delta_s) (-\\Sigma x) \\\\\n&= \\Sigma (\\delta_s x).\n\\end{align*}\n($\\subseteq$) Given a representative $(\\Sigma p_{s+2}) \\widetilde{x} \\in d_2 [x]$, let us show that $\\Sigma \\widetilde{x} \\colon \\Sigma X \\to \\Sigma^2 Y_{s+2}$ factors as $\\Sigma X \\ral{\\Sigma \\alpha} C_{d_1} \\ral{\\widetilde{\\beta}} \\Sigma^{2} Y_{s+2}$ for some $\\Sigma \\alpha$, yielding a factorization of the desired form\n\\begin{align*}\n(\\Sigma^2 p_{s+2}) (\\Sigma \\widetilde{x}) &= (\\Sigma^2 p_{s+2}) \\widetilde{\\beta} (\\Sigma \\alpha) \\\\\n&= \\beta (\\Sigma \\alpha).\n\\end{align*}\nBy construction, the map $(\\Sigma^2 i_s) (-\\Sigma^2 i_{s+1}) \\colon \\Sigma^2 Y_{s+2} \\to \\Sigma^2 Y_{s}$ is a cofiber of $\\widetilde{\\beta}$. The condition\n\\[\n(\\Sigma^2 i_s) (\\Sigma^2 i_{s+1}) (\\Sigma \\widetilde{x}) = (\\Sigma^2 i_s) \\Sigma (\\delta_s x) = 0\n\\]\nguarantees the existence of some lift $\\Sigma \\alpha \\colon \\Sigma X \\to C_{d_1}$ of $\\Sigma \\widetilde{x}$. The chosen lift $\\Sigma \\alpha$ might \\emph{not} satisfy $\\iota (\\Sigma \\alpha) = - \\Sigma x$, but we will correct it to a lift $\\Sigma \\alpha'$ which does. 
The two sides of the equation become equal after applying $-\\Sigma \\delta_s$, i.e., $(-\\Sigma \\delta_s) (-\\Sigma x) = (-\\Sigma \\delta_s) \\iota (\\Sigma \\alpha)$ holds.\nHence, the error term factors as\n\\[\n-\\Sigma x - \\iota \\Sigma \\alpha = (-\\Sigma p_s)(\\Sigma \\theta)\n\\]\nfor some $\\Sigma \\theta \\colon \\Sigma X \\to \\Sigma Y_s$, since $-\\Sigma p_s$ is a fiber of $-\\Sigma \\delta_s$. The corrected map $\\Sigma \\alpha' := \\Sigma \\alpha + \\widetilde{\\alpha} (\\Sigma \\theta) \\colon \\Sigma X \\to C_{d_1}$ satisfies $\\iota (\\Sigma \\alpha') = - \\Sigma x$ \nand still satisfies $\\widetilde{\\beta} (\\Sigma \\alpha') = \\widetilde{\\beta} (\\Sigma \\alpha) = \\Sigma \\widetilde{x}$, since the correction term $\\widetilde{\\alpha} (\\Sigma \\theta)$ satisfies $\\widetilde{\\beta} \\widetilde{\\alpha} (\\Sigma \\theta) = 0$. \n\\end{proof}\n\n\\begin{prop}\\label{pr:inclusions}\nThe following inclusions of subsets hold in $E_1^{s+2,t+1}$: %\n\\[\nd_2 [x] \\subseteq (\\Sigma^2 p_{s+2}) \\left\\langle \\Sigma \\delta_{s+1}, d_1, x \\right\\rangle \\subseteq \\left\\langle \\Sigma d_1, d_1, x \\right\\rangle .\n\\]\n\\end{prop}\n\n\\begin{proof}\nThe first inclusion is \n\\[\nd_2 [x] = (\\Sigma^2 p_{s+2}) \\left\\langle \\Sigma \\delta_{s+1}, \\Sigma p_{s+1}, \\delta_s x \\right\\rangle \\subseteq (\\Sigma^2 p_{s+2}) \\left\\langle \\Sigma \\delta_{s+1}, (\\Sigma p_{s+1}) \\delta_s, x \\right\\rangle,\n\\]\nwhereas the second inclusion is\n\\[\n(\\Sigma^2 p_{s+2}) \\left\\langle \\Sigma \\delta_{s+1}, d_1, x \\right\\rangle \\subseteq \\left\\langle (\\Sigma^2 p_{s+2}) (\\Sigma \\delta_{s+1}), d_1, x \\right\\rangle,\n\\] \nboth using Lemma~\\ref{le:MoveAround}.\n\\end{proof}\n\n\\begin{prop}\\label{pr:proper-inclusion}\nThe inclusion $(\\Sigma^2 p_{s+2}) \\left\\langle \\Sigma \\delta_{s+1}, d_1, x \\right\\rangle \\subseteq \\left\\langle \\Sigma d_1, d_1, x \\right\\rangle$ need \\emph{not} be an equality in general.\n\\end{prop}\n\nIt was pointed out to us by Robert Bruner that this can happen in principle.\nWe give an explicit example in Proposition~\\ref{pr:proper-inclusion-C4}.\n\n\n\n\\section{Higher Toda brackets}\\label{se:HigherBrackets}\n\n\n\n\n\n\nWe saw in Section~\\ref{se:3-fold-Toda-brackets} that there are several equivalent ways\nto define $3$-fold Toda brackets.\nFollowing the approach given in~\\cite{McKeown12}, we show that\nthe fiber-cofiber definition generalizes nicely to $n$-fold Toda brackets.\nThere are $(n-2)!$ ways to make this generalization, and we prove\nthat they are all the same up to a specified sign.\nWe also show that this Toda bracket is self-dual.\n\nOther sources that discuss higher Toda brackets in triangulated categories\nare~\\cite{Shipley02}*{Appendix A}, \\cite{Gelfand03}*{IV \\S 2} and~\\cite{Sagave08}*{\\S 4},\nwhich all give definitions that follow Cohen's approach for spectra or spaces~\\cite{Cohen68}.\nWe show that our definition agrees with those of~\\cite{Shipley02} and~\\cite{Sagave08}.\n(We believe that it sometimes differs in sign from~\\cite{Cohen68}. 
We have not compared carefully with~\\cite{Gelfand03}.)\n\n\\begin{defn}\\label{def:TodaFamily}\nLet $X_0 \\ral{f_1} X_1 \\ral{f_2} X_2 \\ral{f_3} X_3$ be a diagram\nin a triangulated category $\\cat{T}$.\nWe define the \\textbf{Toda family}\nof this sequence to be the collection $\\mathrm{T}(f_3, f_2, f_1)$\nconsisting of all pairs $(\\beta, \\Sigma \\alpha)$, where $\\alpha$ and $\\beta$ appear in a commutative diagram\n\\[\n\\xymatrix @C=3.3pc {\nX_0 \\ar[d]_-{\\alpha} \\ar[r]^-{f_1} & X_1 \\ar@{=}[d] & & \\\\\n\\Sigma^{-1} C_{f_2} \\ar[r] & X_1 \\ar[r]^-{f_2} & X_2 \\ar@{=}[d] \\ar[r] & C_{f_2} \\ar[d]^-{\\beta} \\\\\n& & X_2 \\ar[r]^-{f_3} & X_3 \\\\\n}\n\\]\nwith distinguished middle row.\nEquivalently,\n\\[\n\\xymatrix @C=3.3pc {\n& & & \\Sigma X_0 \\ar[d]_-{\\Sigma \\alpha} \\ar[r]^-{-\\Sigma f_1} & \\Sigma X_1 \\ar@{=}[d] \\\\\n& X_1 \\ar[r]^-{f_2} & X_2 \\ar@{=}[d] \\ar[r] & C_{f_2} \\ar[d]^-{\\beta} \\ar[r] & \\Sigma X_1 \\\\\n& & X_2 \\ar[r]^-{f_3} & X_3 , \\\\\n}\n\\]\nwhere the middle row is again distinguished. (The negative of $\\Sigma f_1$\nappears, since when a triangle is rotated, a sign is introduced.)\nNote that the maps in each pair form a composable sequence\n$\\Sigma X_0 \\ral{\\Sigma \\alpha} C_{f_2} \\ral{\\beta} X_3$,\nwith varying intermediate object,\nand that the collection of composites $\\beta \\circ \\Sigma \\alpha$ is exactly the\nToda bracket $\\langle f_3, f_2, f_1 \\rangle$, using the fiber-cofiber definition\n(see Diagram~\\eqref{eq:FibCof}).\n(Also note that the Toda family is generally a proper class,\nbut this is only because the intermediate object can be varied up to isomorphism,\nand so we will ignore this.)\n\nMore generally, if $S$ is a set of composable triples of maps,\nstarting at $X_0$ and ending at $X_3$, we define $\\mathrm{T}(S)$ to\nbe the union of $\\mathrm{T}(f_3, f_2, f_1)$ for each triple\n$(f_3, f_2, f_1)$ in $S$.\n\\end{defn}\n\n\\begin{defn}\\label{def:HigherToda}\nLet\n$X_0 \\ral{f_1} X_1 \\ral{f_2} X_2 \\ral{f_3} \\cdots \\ral{f_n} X_n$\nbe a diagram in a triangulated category $\\cat{T}$.\nWe define the \\textbf{Toda bracket} $\\langle f_n, \\ldots, f_1 \\rangle$\ninductively as follows.\nIf $n = 2$, it is the set consisting of just the composite $f_2 f_1$.\nIf $n > 2$, it is the union of the sets\n$\\langle \\beta, \\Sigma \\alpha, \\Sigma f_{n-3}, \\ldots, \\Sigma f_1 \\rangle$,\nwhere $(\\beta, \\Sigma \\alpha)$ is in $T(f_n, f_{n-1}, f_{n-2})$.\n\\end{defn}\n\n\nIn fact, there are $(n-2)!$ such definitions, depending on a\nsequence of choices of which triple of consecutive maps to apply\nthe Toda family construction to.\nIn Theorem~\\ref{th:n-fold} we will enumerate these choices\nand show that they all agree up to sign.\n\n\\begin{ex}\\label{ex:4FoldBracket}\nLet us describe $4$-fold Toda brackets in more detail. We have\n\\[\n\\left\\langle f_4, f_3, f_2, f_1 \\right\\rangle\n = \\bigcup_{\\beta, \\alpha} \\left\\langle \\beta, \\Sigma \\alpha, \\Sigma f_1 \\right\\rangle\n = \\bigcup_{\\beta, \\alpha} \\bigcup_{\\beta', \\alpha'} \\{ \\beta' \\circ \\Sigma \\alpha' \\}\n\\]\nwith $(\\beta, \\Sigma \\alpha) \\in T(f_4, f_3, f_2)$ and $(\\beta', \\Sigma \\alpha') \\in T(\\beta, \\Sigma \\alpha, \\Sigma f_1)$. 
These maps fit into a commutative diagram\n\\[\n \\xymatrix{\n \\Sigma^2 X_0 \\ar[r]^-{\\Sigma \\alpha'} &\tC_{\\Sigma \\alpha} \\ar[r] \\ar[ddr]^(0.3){\\beta'} &\t\\Sigma^2 X_1 & \\text{row = $\\mathrlap{-\\Sigma^2 f_1}$} \\\\\n \\Sigma X_1 \\ar[r]^{\\Sigma \\alpha} &\tC_{f_3} \\ar[r] \\ar[dr]_(0.45){\\beta} \\ar[u] &\t\\Sigma X_2 &\t\\text{row = $\\mathrlap{-\\Sigma f_2}$} \\\\\n X_2 \\ar[r]^{f_3} &\tX_3 \\ar[u] \\ar[r]_{f_4} &\tX_4 \\\\\n & 0 \\ar[u] \\\\\n }\n\\]\nwhere the horizontal composites are specified as above, and each ``snake''\n\\[\n\\xymatrix{\n& \\cdot \\ar[r] & \\cdot \\\\\n\\cdot \\ar[r] & \\cdot \\ar[u] & \\\\\n}\n\\]\nis a distinguished triangle.\nThe middle column is an example of a \\emph{$3$-filtered object} as defined below.\n\\end{ex}\n\nNext, we will show that Definition \\ref{def:HigherToda} coincides with the definitions of higher Toda brackets in~\\cite{Shipley02}*{Appendix A} and~\\cite{Sagave08}*{\\S 4}, which we recall here.\n\n\\begin{defn}\\label{def:NFiltered}\nLet $n \\geq 1$ and consider a diagram in $\\cat{T}$\n\\[\n\\xymatrix{\nY_0 \\ar[r]^-{\\lambda_1} &\tY_1 \\ar[r]^-{\\lambda_2} &\tY_2 \\ar[r] &\t\\cdots \\ar[r]^-{\\lambda_{n-1}} &\tY_{n-1} \\\\\n} \n\\]\nconsisting of $n-1$ composable maps. An \\textbf{$n$-filtered object} $Y$ based on $(\\lambda_{n-1}, \\ldots, \\lambda_1)$ consists of a sequence of maps\n\\[\n\\xymatrix{\n0 = F_0 Y \\ar[r]^-{i_{0}} &\tF_{1} Y \\ar[r]^-{i_{1}} &\t\\cdots \\ar[r]^-{i_{n-1}} &\tF_n Y = Y \\\\ \n}\n\\]\ntogether with distinguished triangles\n\\[\n\\xymatrix{\nF_{j} Y \\ar[r]^-{i_j} &\tF_{j+1} Y \\ar[r]^-{q_{j+1}} & \\Sigma^{j} Y_{n-1-j} \\ar[r]^-{e_j} &\t\\Sigma F_j Y \\\\\t\n}\n\\]\nfor $0 \\leq j \\leq n-1$, such that for $1 \\leq j \\leq n-1$, the composite\n\\[\n\\xymatrix{\n\\Sigma^j Y_{n-1-j} \\ar[r]^-{e_j} &\t\\Sigma F_j Y \\ar[r]^-{\\Sigma q_j} &\t\\Sigma^{j} Y_{n-j} \\\\\n}\n\\]\nis equal to $\\Sigma^j \\lambda_{n-j}$. 
In particular, the $n$-filtered object $Y$ comes equipped with maps\n\\[\n\\sigma'_Y \\colon Y_{n-1} \\cong F_1 Y \\to Y \n\\]\n\\[\n\\sigma_Y \\colon Y = F_n Y \\to \\Sigma^{n-1} Y_0.\n\\]\n\\end{defn}\n\n\\begin{defn}\\label{def:HigherTodaSS}\nLet $X_0 \\ral{f_1} X_1 \\ral{f_2} X_2 \\ral{f_3} \\cdots \\ral{f_n} X_n$\nbe a diagram in a triangulated category $\\cat{T}$.\nThe \\textbf{Toda bracket} in the sense of Shipley--Sagave $\\langle f_n, \\ldots, f_1 \\rangle_{\\Ship} \\subseteq \\cat{T}(\\Sigma^{n-2} X_0, X_n)$ is the set of all composites appearing in the middle row of a commutative diagram\n\\[\n\\xymatrix{\n& X_{n-1} \\ar[d]_{\\sigma'_X} \\ar[dr]^-{f_n} & \\\\\n\\Sigma^{n-2} X_0 \\ar[dr]_{\\Sigma^{n-2} f_1} \\ar@{-->}[r] & X \\ar[d]^{\\sigma_X} \\ar@{-->}[r] & X_n \\\\\n& \\Sigma^{n-2} X_1 , & \\\\\n}\n\\]\nwhere $X$ is an $(n-1)$-filtered object based on $(f_{n-1}, \\ldots, f_3, f_2)$.\n\\end{defn}\n\n\\begin{ex}\nFor a $3$-fold Toda bracket $\\left\\langle f_3, f_2, f_1 \\right\\rangle_{\\Ship}$, a $2$-filtered object $X$ based on $f_2$ amounts to a cofiber of $-f_2$, more precisely, a distinguished triangle\n\\[\n\\xymatrix{\nX_2 \\ar[r]^-{\\sigma'_X} & X \\ar[r]^-{\\sigma_X} & \\Sigma X_1 \\ar[r]^-{\\Sigma f_2} & \\Sigma X_2.\n}\n\\]\nUsing this, one readily checks the equality $\\left\\langle f_3, f_2, f_1 \\right\\rangle_{\\Ship} = \\left\\langle f_3, f_2, f_1 \\right\\rangle_{\\fc}$, as noted in \\cite{Sagave08}*{Definition 4.5}.\n\\end{ex}\n\n\\begin{ex}\\label{ex:4FoldBracketSS}\nFor a $4$-fold Toda bracket $\\left\\langle f_4, f_3, f_2, f_1 \\right\\rangle_{\\Ship}$, a $3$-filtered object $X$ based on $(f_3, f_2)$ consists of the data displayed in the diagram\n\\[\n \\xymatrix{\n & F_3 X = X \\ar[r]^-{q_3 = \\sigma_X} & \\Sigma^2 X_1 & \\\\\n \\Sigma X_1 \\ar[r]^-{- \\Sigma^{-1} e_2} & F_2 X \\ar[r]^-{q_2} \\ar[u]_{i_2} & \\Sigma X_2 & \\text{row = $\\mathrlap{-\\Sigma f_2}$} \\\\\n X_2 \\ar[r]^-{- \\Sigma^{-1} e_1} & F_1 X \\ar[u]_{i_1} \\ar[r]^-{q_1}_-{\\cong} & X_3 & \\text{row = $\\mathrlap{-f_3}$} \\\\\n & F_0 X = 0 , \\ar[u]_{i_0} \\\\\n }\n\\]\nwhere the two snakes are distinguished. The bracket consists of the maps $\\Sigma^2 X_0 \\to X_4$ appearing as composites of the dotted arrows in a commutative diagram\n\\[\n \\xymatrix{\n \\Sigma^2 X_0 \\ar@{-->}[r] & X \\ar[r]^-{\\sigma_X} \\ar@\/^1pc\/@{-->}[ddr] & \\Sigma^2 X_1 & \\text{row = $\\mathrlap{\\Sigma^2 f_1}$} \\\\\n \\Sigma X_1 \\ar[r]^-{- \\Sigma^{-1} e_2} & F_2 X \\ar[r]^-{q_2} \\ar[u] & \\Sigma X_2 & \\text{row = $\\mathrlap{-\\Sigma f_2}$} \\\\\n X_2 \\ar[r]^-{- f_3} & X_3 \\ar[u] \\ar[r]^-{f_4} & X_4 & \\\\\n & 0 , \\ar[u] \\\\\n }\n\\]\nwhere the two snakes are distinguished. By negating the first and third map in each snake,\nthis recovers the description in Example \\ref{ex:4FoldBracket}, thus proving the equality of subsets\n\\[\n\\left\\langle f_4, f_3, f_2, f_1 \\right\\rangle_{\\Ship} = \\left\\langle f_4, f_3, f_2, f_1 \\right\\rangle.\n\\]\n\\end{ex}\n\n\\begin{prop}\nDefinitions \\ref{def:HigherToda} and \\ref{def:HigherTodaSS} agree. 
In other words, we have the equality\n\\[\n\\left\\langle f_n, \\ldots, f_1 \\right\\rangle_{\\Ship} = \\left\\langle f_n, \\ldots, f_1 \\right\\rangle\n\\]\nof subsets of $\\cat{T}(\\Sigma^{n-2} X_0, X_n)$.%\n\\end{prop}\n\n\\begin{proof}\nThis is a straightforward generalization of Example \\ref{ex:4FoldBracketSS}.\n\\end{proof}\n\nWe define the \\textbf{negative} of a Toda family $T(f_3, f_2, f_1)$\nto consist of pairs $(\\beta, -\\Sigma \\alpha)$ for $(\\beta, \\Sigma \\alpha) \\in T(f_3, f_2, f_1)$.\n(Since changing the sign of two maps in a triangle doesn't affect\nwhether it is distinguished, it would be equivalent to put the\nminus sign with the $\\beta$.)\n\n\\begin{lem}\\label{le:four-fold}\nLet\n$X_0 \\ral{f_1} X_1 \\ral{f_2} X_2 \\ral{f_3} X_3 \\ral{f_4} X_4$\nbe a diagram in a triangulated category $\\cat{T}$.\nThen the two sets of pairs\n$T(T(f_4, f_3, f_2), \\Sigma f_1)$ and\n$T(f_4, T(f_3, f_2, f_1))$ are negatives of each other.\n\\end{lem}\n\nThis is stronger than saying that the two ways of computing the Toda bracket\n$\\langle f_4, f_3, f_2, f_1 \\rangle$ are negatives, and the stronger statement\nwill be used inductively to prove Theorem~\\ref{th:n-fold}.\n\n\\begin{proof}\nWe will show that the negative of\n$T(T(f_4, f_3, f_2), \\Sigma f_1)$\nis contained in the family\n$T(f_4, T(f_3, f_2, f_1))$.\nThe reverse inclusion is proved dually.\n\nSuppose $(\\beta, \\Sigma \\alpha)$ is in $T(T(f_4, f_3, f_2), \\Sigma f_1)$,\nthat is, $(\\beta, \\Sigma \\alpha)$ is in $T(\\beta', \\Sigma \\alpha', \\Sigma f_1)$\nfor some $(\\beta', \\Sigma \\alpha')$ in $T(f_4, f_3, f_2)$.\nThis means that we have the following commutative diagram\n\\[\n \\xymatrix@!@C=-0.2ex@R=-0.2ex{\n && && \\Sigma X_1 \\ar[rr]^{-\\Sigma f_2} \\ar@{-->}[dd]^{\\Sigma \\alpha'} && \\Sigma X_2 \\ar@{=}[dd] \\\\\n \\\\\nX_2 \\ar[rr]^{f_3} && X_3 \\ar[dr]_-{f_4} \\ar[rr] && C_{f_3} \\ar@{-->}[dl]^{\\!\\beta'} \\ar[rr] \\ar[dd] && \\Sigma X_2 \\\\\n && & X_4 \\\\\n && && C_{\\Sigma \\alpha'} \\ar@{-->}[ul]_{\\!\\beta} \\ar[dd] \\\\\n && & \\Sigma^2 X_0 \\ar@{-->}[ur]^{\\Sigma \\alpha} \\ar[dr]_-{- \\Sigma^2 f_1} \\\\\n && && \\Sigma^2 X_1 && \\\\\n }\n\\]\nin which the long row and column are distinguished triangles.\n\nUsing the octahedral axiom, there exists a map $\\delta : C_{f_2} \\to X_3$\nin the following diagram making the two squares commute\nand such that the diagram can be extended as shown,\nwith all rows and columns distinguished:\n\\[\n \\xymatrix@!@C=-0.7ex@R=-0.7ex{\n && & \\Sigma X_0 \\ar@{-->}[dl]_{\\gamma\\!} \\ar[dr]^{\\!-\\Sigma f_1}\\\\\nX_2 \\ar[rr] \\ar@{=}[dd] && C_{f_2} \\ar[rr] \\ar@{-->}[dd]^{\\delta} && \\Sigma X_1 \\ar[rr]^{-\\Sigma f_2} \\ar@{-->}[dd]^{\\Sigma \\alpha'} && \\Sigma X_2 \\ar@{=}[dd] \\\\\n \\\\\nX_2 \\ar[rr]^{f_3} && X_3 \\ar[dr]^-{f_4} \\ar[rr] \\ar[dd] && C_{f_3} \\ar@{-->}[dl]^{\\!\\!\\beta'} \\ar[rr] \\ar[dd] && \\Sigma X_2 \\\\\n && & X_4 \\\\\n && C_{\\delta} \\ar@{=}[rr] \\ar[dd] && C_{\\Sigma \\alpha'} \\ar@{-->}[ul]_{\\!\\!\\beta} \\ar[dd] && \\\\\n && & \\Sigma^2 X_0 \\ar@{-->}[ur]^(0.4){\\Sigma \\alpha\\!} \\ar[dr]_(0.4){-\\Sigma^2 \\!f_1\\!\\!\\!\\!} \\ar@{-->}[dl]_{\\Sigma \\gamma\\!\\!\\!} \\\\\n && \\Sigma C_{f_2} \\ar[rr] && \\Sigma^2 X_1 . 
&& \\\\\n }\n\\]\nDefine $\\Sigma \\gamma$ to be the composite \n$\\Sigma^2 X_0 \\to C_{\\Sigma \\alpha'} = C_{\\delta} \\to \\Sigma C_{f_2}$, where the first map is $\\Sigma \\alpha$.\nThen the small triangles at the top and bottom of the last diagram commute as well.\nTherefore, $(\\delta, \\gamma)$ is in $T(f_3, f_2, f_1)$.\nMoreover, this diagram shows that\n$(\\beta, - \\Sigma \\alpha)$ is in $T(f_4, \\delta, \\gamma)$,\ncompleting the argument.\n\\end{proof}\n\n\nTo concisely describe different ways of computing higher Toda\nbrackets, we introduce the following notation.\nFor $0 \\leq j \\leq n-3$, write $T_j(f_n, f_{n-1}, \\ldots, f_1)$ for the set of tuples\n\\[\n \\{ (f_n, f_{n-1}, \\ldots, f_{n-j+1}, \\beta, \\Sigma \\alpha, \\Sigma f_{n-j-3}, \\ldots, \\Sigma f_1) \\},\n\\]\nwhere $(\\beta, \\Sigma \\alpha)$ is in $T(f_{n-j}, f_{n-j-1}, f_{n-j-2})$.\n(There are $j$ maps to the left of the three\\break used for the Toda family.)\nIf $S$ is a set of $n$-tuples of composable maps, we define\\break\n$T_j(S)$ to be the union of the sets $T_j(f_n, f_{n-1}, \\ldots, f_1)$\nfor $(f_n, f_{n-1}, \\ldots, f_1)$ in $S$.\nWith this\\break notation, the standard Toda bracket $\\left\\langle f_n, \\ldots, f_1 \\right\\rangle$ consists of the composites of all the pairs\\break occurring in the iterated Toda family\n\\[\n\\mathrm{T}(f_n, \\ldots, f_1) := T_0(T_0(T_0(\\cdots T_0(f_n, \\ldots, f_1) \\cdots ))).\n\\]\nA general Toda bracket is of the form \n$T_{j_1}(T_{j_2}(T_{j_3}(\\cdots T_{j_{n-2}}(f_n, \\ldots, f_1) \\cdots )))$,\nwhere\\break $j_1, j_2, \\ldots, j_{n-2}$ is a sequence of natural numbers\nwith $0 \\leq j_i < i$ for each $i$.\nThere are $(n-2)!$ such sequences.\n\n\\begin{rem}\nThere are six ways to compute the five-fold Toda bracket \n$\\langle f_5, f_4, f_3, f_2, f_1 \\rangle$, as the set of composites\nof the pairs of maps in one of the following sets:\n\\begin{align*}\n&T_0(T_0(T_0(f_5, f_4, f_3, f_2, f_1))) = T(T(T(f_5, f_4, f_3), \\Sigma f_2), \\Sigma^2 f_1),\\\\\n&T_0(T_0(T_1(f_5, f_4, f_3, f_2, f_1))) = T(T(f_5, T(f_4, f_3, f_2)), \\Sigma^2 f_1),\\\\\n&T_0(T_1(T_1(f_5, f_4, f_3, f_2, f_1))) = T(f_5, T(T(f_4, f_3, f_2), \\Sigma f_1)),\\\\\n&T_0(T_1(T_2(f_5, f_4, f_3, f_2, f_1))) = T(f_5, T(f_4, T(f_3, f_2, f_1))),\\\\\n&T_0(T_0(T_2(f_5, f_4, f_3, f_2, f_1))), \\quad\\text{and}\\\\\n&T_0(T_1(T_0(f_5, f_4, f_3, f_2, f_1))).\n\\end{align*}\nThe last two cannot be expressed directly just using $T$.\n\\end{rem}\n\nNow we can prove the main result of this section.\n\n\\begin{thm}\\label{th:n-fold}\nThe Toda bracket computed using the sequence $j_1, j_2, \\ldots, j_{n-2}$\nequals the standard Toda bracket up to the sign $(-1)^{\\sum j_i}$.\n\\end{thm}\n\n\\begin{proof}\nLet $j_1, j_2, \\ldots, j_{n-2}$ be a sequence with $0 \\leq j_i < i$ for each $i$.\nLemma~\\ref{le:four-fold} tells us that if we replace consecutive entries \n$k, k+1$ with $k, k$ in any such sequence, the two Toda brackets agree up to a sign.\nTo begin with, we ignore the signs.\nWe will prove by induction on $\\ell$ that the initial portion\n$j_1, \\ldots, j_\\ell$ of such a sequence can be converted into\nany other sequence, using just the move allowed by Lemma~\\ref{le:four-fold} and its inverse,\nand without changing $j_i$ for $i > \\ell$.\nFor $\\ell = 1$, there is only one sequence $0$.\nFor $\\ell = 2$, there are two sequences, $0, 0$ and $0, 1$, and Lemma~\\ref{le:four-fold} applies.\nFor $\\ell > 2$, suppose our goal is to produce the sequence $j'_1, \\ldots, j'_\\ell$.\nWe break the argument into 
three cases:\n\\medskip\n\n\\noindent\nCase 1: $j'_\\ell = j_\\ell$. We can directly use the induction hypothesis to\nadjust the entries in the first $\\ell - 1$ positions.\n\\medskip\n\n\\noindent\nCase 2: $j'_\\ell > j_\\ell$. By induction, we can change the first $\\ell - 1$\nentries in the sequence $j$ so that the entry in position $\\ell - 1$ is $j_\\ell$,\nsince $j_{\\ell} < j'_{\\ell} \\leq \\ell - 1$.\nThen, using Lemma~\\ref{le:four-fold}, we can change the entry in position $\\ell$ to $j_\\ell + 1$.\nContinuing in this way, we get $j'_\\ell$ in position $\\ell$, and then we\nare in Case 1.\n\\medskip\n\n\\noindent\nCase 3: $j'_\\ell < j_\\ell$. Since the moves are reversible, this is equivalent to Case 2.\n\nTo handle the sign, first note that signs propagate through the Toda family construction.\nMore precisely, suppose $S$ is a set of $n$-tuples of maps, and let $S'$ be a set obtained\nby negating the $k^{\\text{th}}$ map in each $n$-tuple for some fixed $k$.\nThen $T_j(S)$ has the same relationship to $T_j(S')$, possibly for a different value of $k$.\n\nAs a result, applying the move of Lemma~\\ref{le:four-fold} changes the resulting\nToda bracket by a sign.\nThat move also changes the parity of $\\sum_i j_i$.\nSince we get a plus sign when each $j_i$ is zero, it follows that the\ndifference in sign in general is $(-1)^{\\sum_i j_i}$.\n\\end{proof}\n\nAn animation of this argument is available at~\\cite{Canim}.\nIt was pointed out by Dylan Wilson that the combinatorial part of the above proof \nis equivalent to the well-known fact that if a binary operation is associative on triples,\nthen it is associative on $n$-tuples.\n\n\\medskip\n\nIn order to compare our Toda brackets to the Toda brackets in the opposite\ncategory, we need one lemma.\n\n\\begin{lem}\\label{le:suspension-of-Toda-family}\nLet\n$X_0 \\ral{f_1} X_1 \\ral{f_2} X_2 \\ral{f_3} X_3$\nbe a diagram in a triangulated category $\\cat{T}$.\nThen the Toda family $T(\\Sigma f_3, \\Sigma f_2, \\Sigma f_1)$ is the negative\nof the suspension of $T(f_3, f_2, f_1)$.\nThat is, it consists of $(\\Sigma \\beta, -\\Sigma^{2} \\alpha)$ for $(\\beta, \\Sigma \\alpha)$ \nin $T(f_3, f_2, f_1)$.\n\\end{lem}\n\n\\begin{proof}\nGiven a distinguished triangle $\\Sigma^{-1} C_{f_2} \\ral{k} X_1 \\ral{f_2} X_2 \\ral{\\iota} C_{f_2}$,\na distinguished triangle involving $\\Sigma f_2$ is\n\\[\nC_{f_2} \\ral{-\\Sigma k} \\Sigma X_1 \\ral{\\Sigma f_2} \\Sigma X_2 \\ral{\\Sigma \\iota} \\Sigma C_{f_2} .\n\\]\nBecause of the minus sign at the left, the maps that arise in the Toda\nfamily based on this triangle are $-\\Sigma^2 \\alpha$ and $\\Sigma \\beta$,\nwhere $\\Sigma \\alpha$ and $\\beta$ arise in the Toda family based on the starting triangle.\n\\end{proof}\n\nGiven a triangulated category $\\cat{T}$, the opposite category $\\cat{T}^{\\mathrm{op}}$\nis triangulated in a natural way. 
The suspension in $\\cat{T}^{\\mathrm{op}}$ is $\\Sigma^{-1}$\nand a triangle\n\\[\n\\xymatrix{Y_0 \\ar[r]^{g_1} & Y_1 \\ar[r]^{g_2} & Y_2 \\ar[r]^-{g_3} & \\Sigma^{-1} Y_0}\n\\]\nin $\\cat{T}^{\\mathrm{op}}$ is distinguished if and only if the triangle\n\\[\n\\xymatrix{\\Sigma \\Sigma^{-1} Y_0 & Y_1 \\ar[l]_-{g_1'} & Y_2 \\ar[l]_-{g_2} & \\Sigma^{-1} Y_0 \\ar[l]_-{g_3} }\n\\]\nin $\\cat{T}$ is distinguished, where $g_1'$ is the composite of $g_1$ with\nthe natural isomorphism $Y_0 \\cong \\Sigma \\Sigma^{-1} Y_0$.\n\n\\begin{cor}\\label{co:SelfDual}\nThe Toda bracket is self-dual up to suspension.\nMore precisely, let\n$X_0 \\ral{f_1} X_1 \\ral{f_2} X_2 \\ral{f_3} \\cdots \\ral{f_n} X_n$\nbe a diagram in a triangulated category $\\cat{T}$.\nThen the subset\n\\[\n \\left\\langle f_1, \\ldots, f_n \\right\\rangle^{\\cat{T}^{\\mathrm{op}}} \\subseteq \\cat{T}^{\\mathrm{op}}(\\Sigma^{-(n-2)} X_n, X_0)\n= \\cat{T}(X_0, \\Sigma^{-(n-2)} X_n)\n\\]\ndefined by taking the Toda bracket in $\\cat{T}^{\\mathrm{op}}$ is sent to the subset\n\\[\n \\left\\langle f_n, \\ldots, f_1 \\right\\rangle^{\\cat{T}} \\subseteq \\cat{T}(\\Sigma^{n-2} X_0, X_n)\n\\]\ndefined by taking the Toda bracket in $\\cat{T}$ under the bijection\n$\\Sigma^{n-2} : \\cat{T}(X_0, \\Sigma^{-(n-2)} X_n) \\to \\cat{T}(\\Sigma^{n-2} X_0, X_n)$.\n\\end{cor}\n\n\\begin{proof}\nFirst we compare Toda families in $\\cat{T}$ and $\\cat{T}^{\\mathrm{op}}$.\nIt is easy to see that the Toda family\n$T^{\\cat{T}^{\\mathrm{op}}}(f_1, f_2, f_3)$\ncomputed in $\\cat{T}^{\\mathrm{op}}$ consists of the pairs\n$(\\alpha, \\Sigma^{-1} \\beta)$ for $(\\Sigma \\alpha, \\beta)$ in the Toda family\n$T^{\\cat{T}}(f_3, f_2, f_1)$ computed in $\\cat{T}$.\nIn short, one has to desuspend and transpose the pairs.\n\nUsing this, one can see that the iterated Toda family\n\\[\nT^{\\cat{T}^{\\mathrm{op}}}(T^{\\cat{T}^{\\mathrm{op}}} \\cdots T^{\\cat{T}^{\\mathrm{op}}}(f_1, f_2, f_3), \\ldots, \\Sigma^{-(n-3)} f_n)\n\\]\nis equal to the transpose of\n\\[\n\\Sigma^{-1} T^{\\cat{T}}(\\Sigma^{-(n-3)} f_n, \\Sigma^{-1} T^{\\cat{T}}(\\Sigma^{-(n-4)} f_{n-1}, \\Sigma^{-1} T^{\\cat{T}} \\cdots \\Sigma^{-1} T^{\\cat{T}}(f_3, f_2, f_1) \\cdots ))\n\\]\nBy Lemma~\\ref{le:suspension-of-Toda-family}, the desuspensions pass\nthrough all of the Toda family constructions, introducing an overall\nsign of $(-1)^{1+2+3+\\cdots+(n-3)}$, and producing\n\\[\n\\Sigma^{-(n-2)} T^{\\cat{T}}(f_n, T^{\\cat{T}}(f_{n-1}, T^{\\cat{T}} \\cdots T^{\\cat{T}}(f_3, f_2, f_1) \\cdots ))\n\\]\nBy Theorem~\\ref{th:n-fold}, composing the pairs gives the usual\nToda bracket up to the sign\\break $(-1)^{0+1+2+\\cdots+(n-3)}$.\nThe two signs cancel, yielding the result.\n\\end{proof}\n\nWe do not know a direct proof of this corollary.\nTo summarize, our insight is that\nby generalizing the corollary to all $(n-2)!$ methods of computing the\nToda bracket, we were able to reduce the argument to the $4$-fold case (Lemma~\\ref{le:four-fold}) and some combinatorics.\n\n\\begin{rem}\nAs with the $3$-fold Toda brackets (see Remark~\\ref{re:3-fold-negation}),\nthe higher Toda brackets depend on the triangulation.\nIf the triangulation is negated, the $n$-fold Toda brackets change\nby the sign $(-1)^n$.\n\\end{rem}\n\n\n\n\\section{Higher order operations determine \\texorpdfstring{$d_r$}{dr}}\\label{se:AdamsDr}\n\nIn this section, we show that the higher Adams differentials can be\nexpressed in terms of higher Toda brackets, in two ways.\nOne of these expressions is as an $r^{\\text{th}}$ order cohomology 
operation.\n\nGiven an injective class $\\cat{I}$,\nan Adams resolution of an object $Y$ as in Diagram~\\eqref{eq:AdamsResolInj}, and an object $X$, consider a class $[x] \\in E_r^{s,t}$ represented by an element $x \\in E_1^{s,t} = \\cat{T}(\\Sigma^{t-s} X, I_s)$.\nThe class $d_r[x]$ is the set of all $(\\Sigma p_{s+r}) \\widetilde{x}$, where $\\widetilde x$\nruns over lifts of $\\delta_s x$ through the $(r-1)$-fold composite $\\Sigma(i_{s+1} \\cdots i_{s+r-1})$\nwhich appears across the top edge of the Adams resolution.\n\nOur first result will be a generalization of\nProposition~\\ref{pr:DifferentD2}\\eqref{it:d1pdex},\nexpressing $d_r$ in terms of an $(r+1)$-fold Toda bracket.\n\n\\begin{thm}\\label{th:d1pdex}\nAs subsets of $E_1^{s+r,t+r-1}$, we have\n\\[\nd_r [x] = \\left\\langle \\Sigma^{r-1} d_1 , \\ldots , \\Sigma^2 d_1, \\Sigma d_1 , \\Sigma p_{s+1} , \\delta_s x \\right\\rangle .\n\\]\n\\end{thm}\n\n\\begin{proof}\nWe compute the Toda bracket, applying the Toda family construction starting from\nthe right, which introduces a sign of $(-1)^{1+2+\\cdots+(r-2)}$, by Theorem~\\ref{th:n-fold}.\nWe begin with the Toda family $T(\\Sigma d_1, \\Sigma p_{s+1}, \\delta_s x)$.\nThere is a distinguished triangle\n\\[\n \\cxymatrix{\\Sigma Y_{s+2} \\ar[r]^-{\\Sigma i_{s+1}} & \\Sigma Y_{s+1} \\ar[r]^-{\\Sigma p_{s+1}} & \\Sigma I_{s+1} \\ar[r]^-{\\Sigma \\delta_{s+1}} & \\Sigma^2 Y_{s+2},}\n\\]\nwith no needed signs.\nThe map $\\Sigma d_1$ factors through $\\Sigma \\delta_{s+1}$ as $\\Sigma^2 p_{s+2}$, and this\nfactorization is unique because $\\Sigma \\delta_{s+1}$ is $\\cat{I}$-epic and $\\Sigma^2 I_{s+2}$ is injective.\nThe other maps in the Toda family are $\\Sigma x_1$, where $x_1$ is a lift \nof $\\delta_s x$ through $\\Sigma i_{s+1}$.\nSo \n\\[\n T(\\Sigma d_1, \\Sigma p_{s+1}, \\delta_s x) = \\{ (\\Sigma^2 p_{s+2} , \\, \\Sigma x_1) \\mid x_1 \\text{ a lift of $\\delta_s x$ through $\\Sigma i_{s+1}$} \\}.\n\\]\n(The Toda family also includes $(\\Sigma^2 p_{s+2} \\, \\phi, \\, \\phi^{-1} (\\Sigma x_1))$, where $\\phi$\nis any isomorphism, but these contribute nothing additional to the later computations.)\nThe composites of such pairs give $d_2[x]$, up to suspension, recovering\nProposition~\\ref{pr:DifferentD2}\\eqref{it:d1pdex}.\n\nContinuing, for each such pair we compute\n\\[\n\\begin{aligned}\n T(\\Sigma^2 d_1, \\Sigma^2 p_{s+2}, \\Sigma x_1)\n&= -\\Sigma T(\\Sigma d_1, \\Sigma p_{s+2}, x_1) \\\\\n&= -\\Sigma \\{ (\\Sigma^2 p_{s+3} , \\, \\Sigma x_2) \\mid x_2 \\text{ a lift of $x_1$ through $\\Sigma i_{s+2}$} \\}.\n\\end{aligned}\n\\]\nThe first equality is Lemma~\\ref{le:suspension-of-Toda-family}, and the second reuses\nthe work done in the previous paragraph, with $s$ increased by $1$.\nComposing these pairs gives $-d_3[x]$.\nThe sign which is needed to produce the standard Toda bracket is $(-1)^1$,\nand so the signs cancel.\n\nAt the next step, we compute\n\\[\n\\begin{aligned}\n T(\\Sigma^3 d_1, \\Sigma^3 p_{s+3}, -\\Sigma^2 x_2)\n&= -\\Sigma^2 T(\\Sigma d_1, \\Sigma p_{s+3}, x_2) \\\\\n&= -\\Sigma^2 \\{ (\\Sigma^2 p_{s+4} , \\, \\Sigma x_3) \\mid x_3 \\text{ a lift of $x_2$ through $\\Sigma i_{s+3}$} \\}.\n\\end{aligned}\n\\]\nAgain, the composites give $-d_4[x]$.\nSince it was a double suspension that passed through the Toda family, no additional\nsign was introduced.\nSimilarly, the sign to convert to the standard Toda bracket is $(-1)^{1+2}$,\nand since $2$ is even, no additional sign was introduced.\nTherefore, the signs still cancel.\n\nThe pattern continues. 
\nIn total, there are $1+2+\\cdots+(r-2)$ suspensions that pass through the Toda\nfamily, and the sign to convert to the standard Toda bracket is also based on\nthat number, so the signs cancel.\n\\end{proof}\n\n\\begin{rem}\nTheorem~\\ref{th:d1pdex} can also be proved using the definition Toda\nbrackets based on $r$-filtered objects, \nas in Definitions~\\ref{def:NFiltered} and~\\ref{def:HigherTodaSS}.\nHowever, one must work in the opposite category $\\cat{T}^{\\mathrm{op}}$.\nIn that category, there is a unique $r$-filtered object, up to isomorphism,\nbased on the maps in the Toda bracket.\nOne of the dashed arrows in the diagram from Definition~\\ref{def:HigherTodaSS}\nis also unique, and the other corresponds naturally to the choice of lift\nin the Adams differential.\n\\end{rem}\n\n\\medskip\n\nIn the remainder of this section, we describe the analog of\nProposition~\\ref{pr:DifferentD2}\\eqref{it:d1d1x}.\nWe begin by defining restricted higher Toda brackets, in terms of\nrestricted Toda families.\n\nConsider a Toda family $T(g h_1, g_1 h_0, g_0 h)$, where the maps\nfactor as shown, there are distinguished triangles\n\\begin{equation}\\label{eq:tri}\n \\cxymatrix{Z_i \\ar[r]^{g_i} & J_i \\ar[r]^{h_i} & Z_{i+1} \\ar[r]^{k_i} & \\Sigma Z_i}\n\\end{equation}\nfor $i = 0, 1$,\nand $g$ and $h$ are arbitrary maps $Z_2 \\to A$ and $B \\to Z_0$, respectively.\nThis information determines an essentially unique element of the Toda family in the following way.\nThe octahedral axiom applied to the factorization $g_1 h_0$\nyields the dotted arrows in a commutative diagram\n\\[\n\\cxymatrix{\nJ_0 \\ar@{=}[d] \\ar[r]^-{h_0} & Z_1 \\ar[d]_(0.45){g_1} \\ar[r]^-{k_0} & \\Sigma Z_0 \\ar@{-->}[d]^{\\alpha_2} \\ar[r]^-{-\\Sigma g_0} & \\Sigma J_0 \\ar@{=}[d] \\\\\nJ_0 \\ar[r]^-{g_1 h_0} & J_1 \\ar[d]_{h_1} \\ar@{-->}[r]^-{q} & W_2 \\ar@{-->}[d]^{\\beta_2} \\ar@{-->}[r]^-{\\iota} & \\Sigma J_0 \\\\\n& Z_2 \\ar[d]_{k_1} \\ar@{=}[r] & Z_2 \\ar[d]^{\\gamma_2} & \\\\\n& \\Sigma Z_1 \\ar[r]^{\\Sigma k_0} & \\Sigma^2 Z_0 , & \\\\\n}\n\\]\nwhere the rows and columns are distinguished and $\\gamma_2 := (\\Sigma k_0) k_1$.\nIt is easy to see that $-\\Sigma(g_0 h)$ lifts through $\\iota$ as $\\alpha_2 (\\Sigma h)$,\nand that $g h_1$ extends over $q$ as $g \\beta_2$.\nWe define the \\textbf{restricted Toda family} to be the set \n$T(g h_1 \\overset{!\\hspace*{1.5pt}}{,\\rule{0pt}{3pt}} g_1 h_0 \\overset{!\\hspace*{1.5pt}}{,\\rule{0pt}{3pt}} g_0 h)$ consisting of the pairs $(g \\beta_2, \\, \\alpha_2 (\\Sigma h))$\nthat arise in this way.\nSince $\\alpha_2$ and $\\beta_2$ come from a distinguished triangle involving a fixed map $\\gamma_2$,\nsuch pairs are unique up to the usual ambiguity of replacing\nthe pair with $(g \\beta_2 \\phi, \\, \\phi^{-1} \\alpha_2 (\\Sigma h))$, where $\\phi$ is an isomorphism.\nSimilarly, given any map $x : B \\to J_0$,\nwe define $T(g h_1 \\overset{!\\hspace*{1.5pt}}{,\\rule{0pt}{3pt}} g_1 h_0 , x)$ to be the set \nconsisting of the pairs $(g \\beta_2, \\, \\Sigma \\alpha)$,\nwhere $\\beta_2$ arises as above and $\\Sigma \\alpha$ is any lift of $-\\Sigma x$ through $\\iota$.\n\n\\begin{defn}\nGiven distinguished triangles as in Equation~\\eqref{eq:tri}, for $i = 1, \\ldots, n-1$,\nand maps $g : Z_n \\to A$ and $x : B \\to J_1$, we define the \\textbf{restricted Toda bracket}\n\\[\n\\left\\langle g h_{n-1} \\overset{!\\hspace*{1.5pt}}{,\\rule{0pt}{3pt}} g_{n-1} h_{n-2} \\overset{!\\hspace*{1.5pt}}{,\\rule{0pt}{3pt}} \\ldots \\overset{!\\hspace*{1.5pt}}{,\\rule{0pt}{3pt}} g_3 h_2 
\\overset{!\\hspace*{1.5pt}}{,\\rule{0pt}{3pt}} g_2 h_1 , x \\right\\rangle\n\\]\ninductively as follows:\nIf $n = 2$, it is the set consisting of just the composite $g h_1 x$.\nIf $n = 3$, it is the set of composites of the pairs in $T(g h_2 \\overset{!\\hspace*{1.5pt}}{,\\rule{0pt}{3pt}} g_2 h_1 , x)$.\nIf $n > 3$, it is the union of the sets\n\\[\n\\left\\langle g \\beta_2 \\overset{!\\hspace*{1.5pt}}{,\\rule{0pt}{3pt}}\\, \\alpha_2 (\\Sigma h_{n-3}) \\overset{!\\hspace*{1.5pt}}{,\\rule{0pt}{3pt}}\\, \\Sigma (g_{n-3} h_{n-4}) \\overset{!\\hspace*{1.5pt}}{,\\rule{0pt}{3pt}} \\ldots , \\Sigma x \\right\\rangle ,\n\\]\nwhere $(g \\beta_2, \\alpha_2 (\\Sigma h_{n-3}))$ is in $T(g h_{n-1} \\overset{!\\hspace*{1.5pt}}{,\\rule{0pt}{3pt}} g_{n-1} h_{n-2} \\overset{!\\hspace*{1.5pt}}{,\\rule{0pt}{3pt}} g_{n-2} h_{n-3})$.\n\\end{defn}\n\n\\begin{rem}\nDespite the notation, we want to make it clear that these restricted\nToda families and restricted Toda brackets depend on the choice of\nfactorizations and on the distinguished triangles in Equation~\\eqref{eq:tri}.\nMoreover, the elements of the restricted Toda families are not simply pairs,\nbut also include the factorizations of the maps in those pairs, and\nthe distinguished triangle involving $\\alpha_2$ and $\\beta_2$.\nThis information is used in the $(n-1)$-fold restricted Toda bracket\nthat is used to define the $n$-fold restricted Toda bracket.\n\\end{rem}\n\nRecall that the maps $d_1$ are defined to be $(\\Sigma p_{s+1}) \\delta_s$, and that we\nhave distinguished triangles\n\\[\n \\cxymatrix{Y_s \\ar[r]^{p_s} & I_s \\ar[r]^-{\\delta_s} & \\Sigma Y_{s+1} \\ar[r]^{\\Sigma i_s} & \\Sigma Y_s}\n\\]\nfor each $s$.\nThe same holds for suspensions of $d_1$, with the last map\nchanging sign each time it is suspended. 
\nThus for $x : \\Sigma^{t-s} X \\to I_s$ in the $E_1$ term, the $(r+1)$-fold restricted Toda bracket\n$\\left\\langle \\Sigma^{r-1} d_1 \\overset{!\\hspace*{1.5pt}}{,\\rule{0pt}{3pt}} \\ldots \\overset{!\\hspace*{1.5pt}}{,\\rule{0pt}{3pt}} \\Sigma d_1 \\overset{!\\hspace*{1.5pt}}{,\\rule{0pt}{3pt}} d_1 , x \\right\\rangle$\nmakes sense for each $r$, where we are implicitly using the defining factorizations\nand the triangles from the Adams resolution.\n\n\\begin{thm}\\label{th:AdamsDrCohomOp}\nAs subsets of $E_1^{s+r,t+r-1}$, we have\n\\[\nd_r [x] = \\left\\langle \\Sigma^{r-1} d_1 \\overset{!\\hspace*{1.5pt}}{,\\rule{0pt}{3pt}} \\ldots \\overset{!\\hspace*{1.5pt}}{,\\rule{0pt}{3pt}} \\Sigma d_1 \\overset{!\\hspace*{1.5pt}}{,\\rule{0pt}{3pt}} d_1 , x \\right\\rangle .\n\\]\n\\end{thm}\n\nThis is a generalization of Proposition~\\ref{pr:DifferentD2}\\eqref{it:d1d1x}.\nThe data in the Adams resolution is the witness that the composites of\nthe primary operations are zero in a sufficiently coherent way to permit\nan $r^{\\text{th}}$ order cohomology operation to be defined.\n\n\\begin{proof}\nThe restricted Toda bracket $\\left\\langle \\Sigma^{r-1} d_1 \\overset{!\\hspace*{1.5pt}}{,\\rule{0pt}{3pt}} \\ldots \\overset{!\\hspace*{1.5pt}}{,\\rule{0pt}{3pt}} \\Sigma d_1 \\overset{!\\hspace*{1.5pt}}{,\\rule{0pt}{3pt}} d_1 , x \\right\\rangle$\nis defined recursively, working from the left.\nEach of the $r-2$ doubly restricted Toda families has essentially one element.\nThe first one involves maps $\\alpha_2$, $\\beta_2$ and $\\gamma_2$ that form a distinguished\ntriangle, and $\\gamma_2$ is equal to $[(-1)^r \\Sigma^r i_{s+r-2}][-(-1)^r \\Sigma^r i_{s+r-1}]$.\nWe will denote the corresponding maps in the following octahedra $\\alpha_k$, $\\beta_k$ and $\\gamma_k$,\nwhere each $\\gamma_k$ equals $[(-1)^r \\Sigma^r i_{s+r-k}] \\, \\gamma_{k-1}$,\nand so $\\gamma_k = -(-1)^{rk} \\Sigma^r (i_{s+r-k} \\cdots i_{s+r-1})$.\nOne is left to compute the singly restricted Toda family\n$\\left\\langle \\Sigma^r p_{s+r} \\beta_{r-1} \\overset{!\\hspace*{1.5pt}}{,\\rule{0pt}{3pt}}\\, \\alpha_{r-1} \\Sigma^{r-2} \\delta_s ,\\, \\Sigma^{r-2} x \\right\\rangle$,\nwhere $\\alpha_{r-1}$ and $\\beta_{r-1}$ fit into a distinguished triangle\n\\[\n \\cxymatrix{\\Sigma^{r-1 }Y_{s+1} \\ar[r]^-{\\alpha_{r-1}} & W_{r-1} \\ar[r]^-{\\beta_{r-1}} & \\Sigma^{r} Y_{s+r} \\ar[r]^-{\\gamma_{r-1}} & \\Sigma^r Y_{s+1} ,}\n\\]\nand $\\gamma_{r-1} = - \\Sigma^r (i_{s+1} \\cdots i_{s+r-1})$.\nThus, to compute the last restricted Toda bracket, one uses the following diagram,\nobtained as usual from the octahedral axiom:\n\\[\n\\xymatrix@C+24pt{\n& & & \\Sigma^{t-s+r-1} X \\ar[d]^(0.45){-\\Sigma^{r-1} x} \\\\\n\\Sigma^{r-2} I_{s} \\ar@{=}[d] \\ar[r]^-{\\Sigma^{r-2} \\delta_{s}} & \\Sigma^{r-1} Y_{s+1} \\ar[d]_{\\alpha_{r-1}} \\ar[r]^-{\\!(-1)^r \\, \\Sigma^{r-1} i_{s}} & \\Sigma^{r-1} Y_{s} \\ar@{-->}[d]^{\\alpha_r} \\ar[r]^-{- \\Sigma^{r-1} p_{s}} & \\Sigma^{r-1} I_{s} \\ar@{=}[d] \\\\\n\\Sigma^{r-2} I_{s} \\ar[r]^-{} & W_{r-1} \\ar[d]_{\\beta_{r-1}} \\ar@{-->}[r]^-{q_{r-1}} & W_r \\ar@{-->}[d]^{\\beta_r} \\ar@{-->}[r]^-{\\iota_{r-1}} & \\Sigma^{r-1} I_{s} \\\\\n\\Sigma^r I_{s+r} & \\Sigma^r Y_{s+r} \\ar[l]_{\\Sigma^r p_{s+r}} \\ar[d]_{\\gamma_{r-1}} \\ar@{=}[r] & \\Sigma^r Y_{s+r} \\ar[d]^{\\gamma_r} & \\\\\n& \\Sigma^r Y_{s+1} \\ar[r]^{(-1)^r \\, \\Sigma^r i_{s}} & \\Sigma^r Y_{s} . 
& \\\\\n}\n\\]\nUp to suspension, both $d_r[x]$ and the last restricted Toda bracket are computed by\ncomposing certain maps $\\widetilde{x} : \\Sigma^{t-s+r-2} X \\to \\Sigma^r Y_{s+r}$ with $\\Sigma^r p_{s+r}$.\nFor $d_r[x]$, the maps $\\widetilde{x}$ must lift $\\Sigma^{r-1} (\\delta_s x)$ through $- \\gamma_{r-1}$.\nFor the last bracket, the maps $\\widetilde{x}$ are of the form $\\beta_r y$,\nwhere $y : \\Sigma^{t-s+r-1} X \\to W_r$ is a lift of $-\\Sigma^{r-1} x$ through $\\iota_{r-1}$.\nAs in the proof of Proposition~\\ref{pr:DifferentD2}\\eqref{it:d1d1x}, one can\nsee that the possible choices of $\\widetilde{x}$ coincide.\n\\end{proof}\n\nWe next give a description of $d_r[x]$ using higher Toda brackets defined\nusing filtered objects, as in Definitions~\\ref{def:NFiltered} and~\\ref{def:HigherTodaSS}.\nThe computation of the restricted Toda bracket above produces a sequence \n\\begin{equation}\\label{eq:W}\n \\cxymatrix{0 = W_0 \\ar[r]^-{q_0} & W_1 \\ar[r]^{q_1} & \\cdots \\ar[r]^{q_{r-1}} & W_r , }\n\\end{equation}\nwhere $W_k$ is the fibre of the $k$-fold composite $\\Sigma^r (i_{s+r-k} \\cdots i_{s+r-1})$.\n(The map $\\gamma_k$ may differ in sign from this composite, but that doesn't affect the fibre.)\nFor each $k$, we have a distinguished triangle\n\\[\n \\xymatrix@C+3pt{W_k \\ar[r]^-{q_k} & W_{k+1} \\ar[r]^-{\\iota_k}\n & \\Sigma^{r-1} I_{s+r-k-1} \\ar[rrr]^-{-(\\Sigma \\alpha_k)(\\Sigma^{r-1} \\delta_{s+r-k-1})} &&& \\Sigma W_k ,}\n\\]\nwhere we extend downwards to $k=0$ by defining $W_1 = \\Sigma^{r-1} I_{s+r-1}$\nand using the non-obvious triangle\n\\[\n \\xymatrix@C+7pt{W_0 \\ar[r]^-{q_0 = 0} & W_{1} \\ar[r]^-{\\iota_0 = -1}\n & \\Sigma^{r-1} I_{s+r-1} \\ar[r]^-{0} & \\Sigma W_0 .}\n\\]\nOne can check that \n\\[\n(\\Sigma \\iota_{k-1})(-\\Sigma \\alpha_k)(\\Sigma^{r-1} \\delta_{s+r-k-1}) = (\\Sigma^r p_{s+r-k})(\\Sigma^{r-1} \\delta_{s+r-k-1})\n= \\Sigma^{r-1} d_1 = \\Sigma^k (\\Sigma^{r-k-1} d_1) ,\n\\]\nwhere $\\Sigma^{r-k-1} d_1$ is the map appearing in the $(k+1)$st spot of the Toda bracket.\nIn other words, the sequence~\\eqref{eq:W} is an $r$-filtered object based on\n$(\\Sigma^{r-2} d_1, \\ldots, d_1)$.\n\nThe natural map $\\sigma_W : W_r \\to \\Sigma^{r-1} I_s$ is $\\iota_{r-1}$,\nand the natural map $\\sigma_W' : \\Sigma^{r-1} I_{s+r-1} \\cong W_1 \\to W_r$ is the composite\n$q_{r-1} \\cdots q_1 \\iota_0 = -q_{r-1} \\cdots q_1$.\nThe Toda bracket computed using the filtered object $W$ consists of all\ncomposites appearing in the middle row of this commutative diagram:\n\\begin{equation}\\label{eq:Wlift}\n\\cxymatrix{\n& \\Sigma^{r-1} I_{s+r-1} \\ar[d]_{\\sigma'_W} \\ar[dr]^-{\\Sigma^{r-1} d_1} & \\\\\n\\Sigma^{t-s+r-1} X \\ar[dr]_{\\Sigma^{r-1} x} \\ar@{-->}[r]^-a & W_r \\ar[d]^{\\sigma_W} \\ar@{-->}[r]_-b & \\Sigma^r I_{s+r} \\\\\n& \\Sigma^{r-1} I_s . 
& \\\\\n}\n\\end{equation}\nWe claim that there is a natural choice of extension $b$.\nSince $\\Sigma^{r-1} d_1 = (\\Sigma^r p_{s+r})(\\Sigma^{r-1} \\delta_{s+r-1})$, it suffices\nto extend $\\Sigma^{r-1} \\delta_{s+r-1}$ over $\\sigma_W'$.\nWell, $\\beta_2$ by definition is an extension of $\\Sigma^{r-1} \\delta_{s+r-1}$ over $q_1$,\nand each subsequent $\\beta_k$ gives a further extension.\nBecause $\\iota_0 = -1$, $-(\\Sigma^r p_{s+r}) \\beta_r$ is a valid choice for $b$.\n\nOn the other hand, as described at the end of the previous proof,\nthe lifts $a$ of $\\Sigma^{r-1} x$ through $\\sigma_W = \\iota_{r-1}$, when \ncomposed with $-(\\Sigma^r p_{s+r}) \\beta_r$, give exactly\nthe Toda bracket computed there.\n\nIn summary, we have:\n\n\\begin{thm}\nGiven an Adams resolution of $Y$ and $r \\geq 2$, there is an associated\n$r$-filtered object $W$ and a choice of a map $b$ in Diagram~\\eqref{eq:Wlift},\nsuch that for any $X$ and class $[x] \\in E_r^{s,t}$,\nwe have\n\\[\nd_r [x] = \\left\\langle \\Sigma^{r-1} d_1 , \\ldots , \\Sigma d_1 , d_1 , x \\right\\rangle ,\n\\]\nwhere the Toda bracket is computed only using the $r$-filtered object $W$\nand the chosen extension $b$.\n\\end{thm}\n\n\n\\section{Sparse rings of operations}\\label{se:sparse}\n\nIn this section, we focus on injective and projective classes which \nare generated by an object with a ``sparse'' endomorphism ring.\nIn this context, we can give conditions under which the restricted Toda bracket\nappearing in Theorem~\\ref{th:AdamsDrCohomOp} is equal to the unrestricted Toda bracket,\nproducing a cleaner correspondence between Adams differentials and Toda brackets.\nWe begin in Subsection~\\ref{ss:sparse-injective} by giving the results in the case\nof an injective class, and then briefly summarize the dual results in Subsection~\\ref{ss:sparse-projective}.\nSubsection~\\ref{ss:sparse-examples} gives examples.\n\nLet us fix some notation and terminology, also discussed in \\cite{Sagave08}, \\cite{Patchkoria12}, \n\\cite{SchwedeS03}*{\\S 2}, \nand \\cite{BensonKS04}.\n\n\\begin{defn}\nLet $N$ be a natural number. A graded abelian group $R_*$ is \\textbf{$N$-sparse} if $R_*$ is concentrated in degrees which are multiples of $N$, i.e., $R_i = 0$ whenever $i \\not\\equiv 0 \\pmod{N}$.\n\\end{defn}\n\n\n\\subsection{Injective case}\\label{ss:sparse-injective}\n\n\\begin{nota}\nLet $E$ be an object of the triangulated category $\\cat{T}$. Define the \\textbf{$E$-cohomology} of an object $X$ to be the graded abelian group $E^*X$ given by $E^n X := \\cat{T}(X,\\Sigma^n E)$. Postcomposition makes $E^*X$ into a left module over the graded endomorphism ring $E^*E$.%\n\\end{nota}\n\n\\begin{assum}\nFor the remainder of this subsection, we assume the following.\n\\begin{enumerate}\n\\item The triangulated category $\\cat{T}$ has infinite %\nproducts.\n\\item The graded ring $E^* E$ is $N$-sparse for some $N \\geq 2$.%\n\\end{enumerate}\n\\end{assum}\n\nLet $\\cat{I}_E$ denote the injective class generated by $E$, as in Example~\\ref{ex:InjClass}. Explicitly, $\\cat{I}_E$ consists of retracts of (arbitrary) products $\\prod_i \\Sigma^{n_i} E$.\n\n\\begin{lem}\\label{le:SparseInj}\nWith this setup, we have:\n\\begin{enumerate}\n\\item Let $I$ be an injective object such that $E^* I$ is $N$-sparse. Then $I$ is a retract of a product $\\prod_i \\Sigma^{m_i N} E$.\n\\item If, moreover, $W$ is an object such that $E^*W$ is $N$-sparse, then we have $\\cat{T}(W,\\Sigma^t I) = 0$ for $t \\not\\equiv 0 \\pmod{N}$. 
\n\\end{enumerate}\n\\end{lem}\n\n\\begin{proof}\n(1) $I$ is a retract of a product $P = \\prod_i \\Sigma^{n_i} E$, with a map $\\iota \\colon I \\hookrightarrow P$ and retraction $\\pi \\colon P \\twoheadrightarrow I$. Consider the subproduct $P' = \\prod_{N \\mid n_i} \\Sigma^{n_i} E$, with inclusion $\\iota' \\colon P' \\hookrightarrow P$ (via the zero map into the missing factors) and projection $\\pi' \\colon P \\twoheadrightarrow P'$. Then the equality\n\\[\n\\iota' \\pi' \\iota = \\iota \\colon I \\to P\n\\]\nholds, using the fact that $E^*I$ is $N$-sparse. %\nTherefore, we obtain $\\pi \\iota' \\pi' \\iota = \\pi \\iota = 1_I$, so that $I$ is a retract of $P'$. %\n\n(2) By the first part, $\\cat{T}(W,\\Sigma^t I)$ is a retract of\n\\begin{align*}\n\\cat{T}(W,\\Sigma^t \\prod_i \\Sigma^{m_i N} E) &= \\cat{T}(W,\\prod_i \\Sigma^{m_i N+ t} E) \\\\\n&= \\prod_i \\cat{T}(W,\\Sigma^{m_i N+ t} E) \\\\ \n&= \\prod_i E^{m_i N+ t}W \\\\\n&= 0 ,\n\\end{align*}\nusing the assumption that $E^*W$ is $N$-sparse.\n\\end{proof}\n\n\n\\begin{lem}\\label{le:SparseBracket}\nLet $I_0 \\ral{f_1} I_1 \\ral{f_2} I_2 \\to \\cdots \\ral{f_r} I_r$ be a diagram\nin $\\cat{T}$, with $r \\leq N+1$. Assume that each object $I_j$ is injective and \nthat each $E^*(I_j)$ is $N$-sparse. Then the iterated Toda family $\\mathrm{T}(f_r, f_{r-1}, \\ldots, f_1)$ is either empty or consists of a single composable pair $\\Sigma^{r-2} I_0 \\to C \\to I_r$, up to automorphism of $C$.\n\\end{lem}\n\n\\begin{proof}\nIn the case $r=2$, there is nothing to prove, so we may assume $r \\geq 3$. The iterated Toda family is obtained by $r-2$ iterations of the $3$-fold Toda family construction. The first iteration computes the Toda family of the diagram\n\\[\n\\xymatrix{\nI_{r-3} \\ar[r]^-{f_{r-2}} & I_{r-2} \\ar[r]^-{f_{r-1}} & I_{r-1} \\ar[r]^-{f_{r}} & I_{r}. \\\\\n}\n\\]\nChoose a cofiber of $f_{r-1}$, i.e., a distinguished triangle $I_{r-2} \\ral{f_{r-1}} I_{r-1} \\to C_1 \\to \\Sigma I_{r-2}$.\nA lift of $f_{r-2}$ to the fiber $\\Sigma^{-1} C_1$, if it exists, is determined up to\n\\[\n\\cat{T}(I_{r-3}, \\Sigma^{-1} I_{r-1}) = \\cat{T}(\\Sigma I_{r-3}, I_{r-1}),\n\\]\nwhich is zero by Lemma~\\ref{le:SparseInj}(2).\nLikewise, an extension of $f_{r}$ to the cofiber $C_1$, if it exists, is determined up to\n\\[\n\\cat{T}(\\Sigma I_{r-2}, I_{r}) = 0.\n\\]\nHence, $\\mathrm{T}(f_r, f_{r-1}, f_{r-2})$ is either empty or consists of a single pair $(\\beta_1, \\Sigma \\alpha_1)$,\nup to automorphisms of $C_1$.\nIt is easy to see that the object $C_1$ has the following property:\n\\begin{equation}\\label{eq:SparseCohom1}\n\\text{If $E^*W = 0$ for $\\ast \\equiv 0,1 \\;(\\bmod\\; N)$, then $\\cat{T}(W,C_1) = 0$.}\n\\end{equation}\nFor $r \\geq 4$, the next iteration computes the Toda family of the diagram\n\\[\n\\xymatrix{\n\\Sigma I_{r-4} \\ar[r]^-{\\Sigma f_{r-3}} & \\Sigma I_{r-3} \\ar[r]^-{\\Sigma \\alpha_1} & C_1 \\ar[r]^-{\\beta_1} & I_{r}. 
\\\\\n}\n\\]\nThe respective indeterminacies are\n\\[\n\\cat{T}(\\Sigma^2 I_{r-4}, C_1),\n\\]\nwhich is zero by Property~\\eqref{eq:SparseCohom1}, and \n\\[\n\\cat{T}(\\Sigma^2 I_{r-3}, I_{r}),\n\\]\nwhich is zero by Lemma~\\ref{le:SparseInj}(2), since $N \\geq 3$ in this case.\nHence, $\\mathrm{T}(\\beta_1, \\Sigma \\alpha_1, \\Sigma f_{r-3})$ is either empty or consists of a single pair $(\\beta_2, \\Sigma \\alpha_2)$,\nup to automorphism of the cofiber $C_2$ of $\\Sigma \\alpha_1$.\nRepeating the argument inductively, the successive iterations compute the Toda family of a diagram\n\\[\n\\xymatrix @C=3.3pc {\n\\Sigma^j I_{r-3-j} \\ar[r]^-{\\Sigma^j f_{r-2-j}} & \\Sigma^j I_{r-2-j} \\ar[r]^-{\\Sigma \\alpha_j} & C_j \\ar[r]^-{\\beta_j} & I_{r} \\\\\n}\n\\]\nfor $0 \\leq j \\leq r-3$, where $C_j$ has the following property:\n\\begin{equation}\\label{eq:SparseCohomj}\n\\text{If $E^*W = 0$ for $\\ast \\equiv 0,1,\\ldots,j \\;(\\bmod\\; N)$, then $\\cat{T}(W,C_j) = 0$.}\n\\end{equation}\nThe indeterminacies $\\cat{T}(\\Sigma^{j+1} I_{r-3-j}, C_j)$ and $\\cat{T}(\\Sigma^{j+1} I_{r-2-j}, I_{r})$\nagain vanish.\nHence,\\break $\\mathrm{T}(\\beta_j, \\Sigma \\alpha_j, \\Sigma^j f_{r-2-j})$ is either empty or consists of a single pair \n$(\\beta_{j+1}, \\Sigma \\alpha_{j+1})$, up to automorphism of $C_{j+1}$.\nNote that the argument works until the last iteration $j = r-3$, by the assumption $r-2 < N$.\n\\end{proof}\n\nWe will need the following condition on an object $Y$:\n\\begin{condition}\\label{co:InjSparse}\n$Y$ admits an $\\cat{I}_E$-Adams resolution $Y_{\\bullet}$ (see \\eqref{eq:AdamsResolInj}) such that for each injective $I_j$ in the resolution,\n$E^* (\\Sigma^j I_j)$ is $N$-sparse.\n\\end{condition}\n\n\\pagebreak[2]\n\\begin{rem}\\leavevmode\n\\begin{enumerate}\n\\item Condition~\\ref{co:InjSparse} implies that $E^* Y$ is itself $N$-sparse, because of the surjection $E^* I_0 \\twoheadrightarrow E^* Y$.\n\\item The condition can be generalized to: there is an integer $m$ such that for each $j$, $E^* (\\Sigma^j I_j)$ is concentrated in degrees $\\ast \\equiv m \\pmod{N}$. We take $m=0$ for notational convenience.\n\\item We will see in Propositions~\\ref{pr:CohomProduct} and~\\ref{pr:CoherentRing} situations in which\nCondition~\\ref{co:InjSparse} holds.\n\\end{enumerate}\n\\end{rem}\n\n\\begin{thm}\\label{th:SparseDr}\nLet $X$ and $Y$ be objects in $\\cat{T}$ and consider the Adams spectral sequence abutting to $\\cat{T}(X,Y)$ with respect to the injective class $\\cat{I}_E$. Assume that $Y$ satisfies Condition~\\ref{co:InjSparse}. 
Then for all $r \\leq N$, the Adams differential is given, as subsets of $E_1^{s+r,t+r-1}$, by\n\\[\nd_r [x] = \\left\\langle \\Sigma^{r-1} d_1 , \\ldots, \\Sigma d_1, d_1 , x \\right\\rangle.\n\\]\nIn other words, the restricted bracket appearing in Theorem~\\ref{th:AdamsDrCohomOp} coincides with the full Toda bracket.\n\\end{thm}\n\n\\begin{proof}\nWe will show that \n\\[\n \\left\\langle \\Sigma^{r-1} d_1 \\overset{!\\hspace*{1.5pt}}{,\\rule{0pt}{3pt}} \\ldots \\overset{!\\hspace*{1.5pt}}{,\\rule{0pt}{3pt}} \\Sigma d_1 \\overset{!\\hspace*{1.5pt}}{,\\rule{0pt}{3pt}} d_1 , x \\right\\rangle = \\left\\langle \\Sigma^{r-1} d_1 , \\ldots, \\Sigma d_1, d_1 , x \\right\\rangle.\n\\]\nConsider the diagram \n\\[\n\\xymatrix{\nI_s \\ar[r]^-{d_1} & \\Sigma I_{s+1} \\ar[r]^-{\\Sigma d_1} & \\Sigma^2 I_{s+2} \\ar[r] & \\cdots \\ar[r] & \\Sigma^{r-1} I_{r-1} \\ar[r]^-{\\Sigma^{r-1} d_1} & \\Sigma^{r} I_{s+r} \\\\\nX \\ar[u]^{x} & & & & & \\\\\n}\n\\]\nwhose Toda bracket is being computed. The corresponding Toda family is\n\\[\n\\mathrm{T}(\\Sigma^{r-1} d_1 , \\ldots, \\Sigma d_1, d_1 , x) = \\mathrm{T} \\left( \\mathrm{T}(\\Sigma^{r-1} d_1 , \\ldots, \\Sigma d_1, d_1), \\Sigma^{r-2} x \\right).\n\\]\nWe know that\n\\[\n \\mathrm{T}(\\Sigma^{r-1} d_1 \\overset{!\\hspace*{1.5pt}}{,\\rule{0pt}{3pt}} \\ldots \\overset{!\\hspace*{1.5pt}}{,\\rule{0pt}{3pt}} \\Sigma d_1 \\overset{!\\hspace*{1.5pt}}{,\\rule{0pt}{3pt}} d_1) \\subseteq \\mathrm{T}(\\Sigma^{r-1} d_1 , \\ldots, \\Sigma d_1, d_1).\n\\]\nBy Lemma~\\ref{le:SparseBracket}, %\nthe Toda family on the right has at most one element, up to automorphism.\nBut fully-restricted Toda families are always non-empty, so the inclusion must be an equality.\nWrite $\\Sigma^{r-2} I_{s} \\ral{f} C \\ral{g} \\Sigma^r I_{s+r}$ for an element of these families.\nIt remains to show that the inclusion\n\\[\n \\left\\langle g \\overset{!\\hspace*{1.5pt}}{,\\rule{0pt}{3pt}} f, \\Sigma^{r-2} x \\right\\rangle \\subseteq \\left\\langle g, f, \\Sigma^{r-2} x \\right\\rangle \n\\]\nis an equality, i.e., that the extension of $g$ to the cofiber of $f$ is unique.\nThis follows from the equality $\\cat{T}(\\Sigma^{r-1} I_{s}, \\Sigma^r I_{s+r}) = 0$, which uses the assumption on the injective objects $I_j$ and that $r-1 < N$.\n\\end{proof}\n\nNext, we describe situations in which Theorem~\\ref{th:SparseDr} applies.\n\n\\begin{prop}\\label{pr:CohomProduct}\nAssume that every product of the form $\\prod_i \\Sigma^{m_i N} E$ has cohomology\\break $E^* \\left( \\prod_i \\Sigma^{m_i N} E \\right)$ which is $N$-sparse. Then every object $Y$ such that $E^* Y$ is $N$-sparse also satisfies Condition~\\ref{co:InjSparse}.\n\\end{prop}\n\n\\begin{proof}\nLet $(y_i)$ be a set of non-zero generators of $E^* Y$ as an $E^* E$-module. %\nThen the corresponding map $Y \\to \\prod_i \\Sigma^{\\abs{y_i}} E$ is $\\cat{I}_E$-monic into an injective object;\nwe take this map as the first step $p_0 \\colon Y_0 \\to I_0$, with cofiber $\\Sigma Y_1$. By our assumption on $Y$, each degree $\\abs{y_i}$ is a multiple of $N$, and thus $E^* I_0$ is $N$-sparse, by the assumption on $E$. The distinguished triangle $Y_1 \\to Y_0 \\ral{p_0} I_0 \\to \\Sigma Y_1$ induces a long exact sequence on $E$-cohomology which implies that the map $I_0 \\to \\Sigma Y_1$ is injective on $E$-cohomology. It follows that $E^*(\\Sigma Y_1)$ is $N$-sparse as well. 
Repeating this process, we obtain an $\\cat{I}_E$-Adams resolution of $Y$ such that for every $j$, $E^* (\\Sigma^j Y_j)$ and $E^* (\\Sigma^j I_j)$ are $N$-sparse.\n\\end{proof}\n\nThe condition on $E$ is discussed in Example \\ref{ex:Compact}.\n\n\\begin{prop}\\label{pr:CoherentRing}\nAssume that the ring $E^* E$ is left coherent, and that $E^* Y$ is $N$-sparse and finitely presented as a left $E^*E$-module. Then $Y$ satisfies Condition~\\ref{co:InjSparse}.\n\\end{prop}\n\n\\begin{proof}\nSince $E^*Y$ is finitely generated over $E^*E$, the map $p_0 \\colon Y \\to I_0$ can be chosen so that $I_0 = \\prod_i \\Sigma^{m_i N} E \\cong \\oplus_i \\Sigma^{m_i N} E$ is a finite product. \nIt follows that $E^* I_0$ is $N$-sparse and finitely presented.\nWe have that $E^{*-1}Y_1 = \\ker \\left( p_0^* \\colon E^*I_0 \\twoheadrightarrow E^*Y \\right)$.\nThis is $N$-sparse, since $E^* I_0$ is, and is finitely presented over $E^*E$, since both $E^* I_0$ and $E^*Y$ are, and $E^*E$ is coherent \\cite{Bourbaki98}*{\\S I.2, Exercises 11--12}. %\nRepeating this process, we obtain an $\\cat{I}_E$-Adams resolution of $Y$ such that for every $j$, $\\Sigma^j I_j$ is a finite product of the form $\\prod_i \\Sigma^{m_i N} E$.\n\\end{proof}\n\n\n\n\\subsection{Projective case}\\label{ss:sparse-projective}\n\nThe main applications of Theorem~\\ref{th:SparseDr} are to projective classes instead of injective classes. For future reference, we state here the dual statements of the previous subsection and adopt a notation inspired from stable homotopy theory.\n\n\\begin{nota}\nLet $R$ be an object of the triangulated category $\\cat{T}$. Define the \\textbf{homotopy} (with respect to $R$) of an object $X$ as the graded abelian group $\\pi_* X$ given by $\\pi_n X := \\cat{T}(\\Sigma^n R,X)$. Precomposition makes $\\pi_*X$ into a right module over the graded endomorphism ring $\\pi_* R$.%\n\\end{nota}\n\n\\begin{assum}\nFor the remainder of this subsection, we assume the following.\n\\begin{enumerate}\n\\item The triangulated category $\\cat{T}$ has infinite %\ncoproducts.\n\\item The graded ring $\\pi_* R$ is $N$-sparse for some $N \\geq 2$.\n\\end{enumerate}\n\\end{assum}\n\nLet $\\cat{P}_R$ denote the stable projective class spanned by $R$, as in Example~\\ref{ex:GhostSphere}. Explicitly, $\\cat{P}_R$ consists of retracts of (arbitrary) coproducts $\\oplus_i \\Sigma^{n_i} R$.\n\n\\begin{condition}\\label{co:ProjSparse}\n$X$ admits an $\\cat{P}_R$-Adams resolution $X_{\\bullet}$ as in Diagram~\\eqref{eq:AdamsResolProj} such that each projective $P_j$ satisfies that $\\pi_* (\\Sigma^{-j} P_j)$ is $N$-sparse.\n\\end{condition}\n\n\\begin{thm}\\label{th:SparseDrProj}\nLet $X$ and $Y$ be objects in $\\cat{T}$ and consider the Adams spectral sequence abutting to $\\cat{T}(X,Y)$ with respect to the projective class $\\cat{P}_R$. Assume that $X$ satisfies Condition~\\ref{co:ProjSparse}. Let $[y] \\in E_r^{s,t}$ be a class represented by $y \\in E_1^{s,t} = \\cat{T}(\\Sigma^{t-s} P_s, Y)$. Then for all $r \\leq N$, the Adams differential is given, as subsets of $E_1^{s+r,t+r-1}$, by\n\\[\nd_r [y] = \\left\\langle y, d_1, \\Sigma^{-1} d_1, \\ldots, \\Sigma^{-(r-1)} d_1 \\right\\rangle.\n\\]\n\\end{thm}\n\nNote that we used Corollary~\\ref{co:SelfDual} to ensure that the equality holds as stated, not merely up to sign.\n\n\\begin{prop}\\label{pr:HomotCoproduct}\nAssume that every coproduct of the form $\\oplus_i \\Sigma^{m_i N} R$ has homotopy\\break $\\pi_* \\left( \\oplus_i \\Sigma^{m_i N} R \\right)$ which is $N$-sparse. 
Then every object $X$ such that $\\pi_* X$ is $N$-sparse also satisfies Condition~\\ref{co:ProjSparse}.\n\\end{prop}\n\nRecall the following terminology:\n\n\\begin{defn}\nAn object $X$ of $\\cat{T}$ is \\textbf{compact} if the functor $\\cat{T}(X,-)$ preserves infinite coproducts.%\n\\end{defn}\n\n\\begin{ex}\\label{ex:Compact}\nIf $R$ is compact in $\\cat{T}$, then $R$ satisfies the assumption of Proposition~\\ref{pr:HomotCoproduct}. This follows from the isomorphism\n\\[\n\\pi_* \\left( \\oplus_i \\Sigma^{m_i N} R \\right) \\cong \\bigoplus_i \\pi_* (\\Sigma^{m_i N} R) = \\bigoplus_i \\Sigma^{m_i N} \\pi_* R \n\\]\nand the assumption that $\\pi_* R$ is $N$-sparse. The same argument works if $R$ is a retract of a coproduct of compact objects.\n\nDually, if $E$ is cocompact in $\\cat{T}$, then $E$ satisfies the assumption of Proposition~\\ref{pr:CohomProduct}. \nThis holds more generally if $E$ is a retract of a product of cocompact objects.\n\\end{ex}\n\n\\begin{rem}\nSome of the related literature deals with compactly generated triangulated categories. As noted in Remark~\\ref{re:NotGenerate}, we do \\emph{not} assume that the object $R$ is a generator, i.e., that the condition $\\pi_* X = 0$ implies $X=0$.\n\\end{rem}\n\n\\begin{prop}\\label{pr:CoherentRingProj}\nAssume that the ring $\\pi_* R$ is right coherent, and that $\\pi_* X$ is $N$-sparse and finitely presented as a right $\\pi_* R$-module. Then $X$ satisfies Condition~\\ref{co:ProjSparse}.\n\\end{prop}\n\nThe following is a variant of \\cite{Patchkoria12}*{Lemma 2.2.2}, where we do not assume that $R$ is a generator.\nIt identifies the $E_2$ term of the spectral sequence associated to the projective class $\\cat{P}_R$.\nThe proof is straightforward.\n\n\\begin{prop}\\label{pr:ExtEndoRing}\nAssume that the object $R$ is compact.\n\\begin{enumerate}\n\\item Let $P$ be in the projective class $\\cat{P}_R$. Then the map of abelian groups\n\\[ %\n\\cat{T}(P,Y) \\to \\Hom_{\\pi_* R} (\\pi_* P, \\pi_* Y)\n\\] %\nis an isomorphism for every object $Y$.\n\\item There is an isomorphism\n\\[\n\\Ext^{s}_{\\cat{P}_R}(X,Y) \\cong \\Ext^{s}_{\\pi_* R}(\\pi_* X, \\pi_* Y)\n\\]\nwhich is natural in $X$ and $Y$. \n\\end{enumerate}\n\\end{prop}\n\n\n\n\n\\subsection{Examples}\\label{ss:sparse-examples}\n\nTheorem~\\ref{th:SparseDrProj} applies to modules over certain ring spectra. We describe some examples, along the lines of \\cite{Patchkoria12}*{Examples 2.4.6 and 2.4.7}.\n\n\\begin{ex}\\label{ex:RingSpectrum}\nLet $R$ be an $A_{\\infty}$ ring spectrum, and let $h\\Mod{R}$ denote the homotopy category of the stable model category of (right) $R$-modules %\n\\cite{SchwedeS03}*{Example 2.3(ii)} \n\\cite{EKMM97}*{\\S III}. Then $R$ itself, the free $R$-module of rank $1$, is a compact generator for $h\\Mod{R}$. The $R$-homotopy of an $R$-module spectrum $X$ is the usual homotopy of $X$, as suggested by the notation:\n\\[\nh\\Mod{R}(\\Sigma^n R, X) \\cong h\\Mod{S}(S^n, X) = \\pi_n X.\n\\]\nIn particular, the graded endomorphism ring $\\pi_* R$ is the usual coefficient ring of $R$.\n\nThe projective class $\\cat{P}_R$ is the ghost projective class \\cite{Christensen98}*{\\S 7.3}, generalizing Example~\\ref{ex:GhostSphere}, where $R$ was the sphere spectrum $S$. The Adams spectral sequence relative to $\\cat{P}_R$ is \nthe universal coefficient spectral sequence\n\\[\n\\Ext_{\\pi_* R}^{s}(\\Sigma^t \\pi_* X, \\pi_* Y) \\Ra h\\Mod{R}(\\Sigma^{t-s} X,Y)\n\\]\nas described in \\cite{EKMM97}*{\\S IV.4} and~\\cite{Christensen98}*{Corollary 7.12}. 
%\nWe used Proposition~\\ref{pr:ExtEndoRing} to identify the $E_2$ term.\n\nSome $A_{\\infty}$ ring spectra $R$ with sparse homotopy $\\pi_* R$ are discussed in \\cite{Patchkoria12}*{\\S 4.3, 5.3, 6.4}. In view of Proposition~\\ref{pr:ExtEndoRing}, the Adams spectral sequence in $h\\Mod{R}$ collapses at the $E_2$ page if $\\pi_* R$ has (right) global dimension less than $2$. %\n\nThe Johnson--Wilson spectrum $E(n)$ has coefficient ring\n\\[\n\\pi_* E(n) = \\mathbb{Z}_{(p)}[v_1, \\ldots, v_n, v_n^{-1}], \\quad \\abs{v_i} = 2(p^i - 1),\n\\]\nwhich has global dimension $n$ and is $2(p-1)$-sparse. Hence, Theorem~\\ref{th:SparseDrProj} applies in this case to the differentials $d_r$ with $r \\leq 2(p-1)$, while $d_r$ is zero for $r > n$.\nLikewise, connective complex $K$-theory $ku$ has coefficient ring\n\\[\n\\pi_* ku = \\mathbb{Z}[u], \\quad \\abs{u} = 2,\n\\]\nwhich has global dimension $2$ and is $2$-sparse.\n\\end{ex}\n\n\\begin{ex}\\label{ex:DGA}\nLet $R$ be a differential graded (\\emph{dg} for short) algebra over a commutative ring $k$, and consider the category of dg $R$-modules $\\dgMod{R}$. The homology $H_* X$ of a dg $R$-module is a (graded) $H_* R$-module. %\nThe derived category $D(R)$ is defined as the localization of $\\dgMod{R}$ with respect to quasi-isomorphisms. \nThe free dg $R$-module $R$ is a compact generator of $D(R)$. The $R$-homotopy of an object $X$ of $D(R)$ is its homology $\\pi_* X = H_* X$. In particular, the graded endomorphism ring of $R$ in $D(R)$ is the graded $k$-algebra $H_* R$.\n\nThe Adams spectral sequence relative to $\\cat{P}_R$ is \nan Eilenberg--Moore spectral sequence\n\\[\n\\Ext_{H_* R}^{s} \\left( \\Sigma^t H_* X, H_* Y \\right) \\Ra D(R)(\\Sigma^{t-s} X, Y)\n\\]\nfrom ordinary $\\Ext$ to differential $\\Ext$, as described in \\cite{BarthelMR14}*{\\S 8, 10}. See also \\cite{KrizM95}*{\\S III.4}, \\cite{HoveyPS97}*{Example 10.2(b)}, \nand \\cite{EilenbergM66}.\n\\end{ex}\n\n\\begin{rem}\nExample~\\ref{ex:DGA} can be viewed as a special case of Example~\\ref{ex:RingSpectrum}. Letting $HR$ denote the Eilenberg--MacLane spectrum associated to $R$, the categories $\\Mod{HR}$ and $\\dgMod{R}$ are Quillen equivalent, by \\cite{SchwedeS03}*{Example 2.4(i)} \\cite{Shipley07HZ}*{Corollary 2.15}, yielding a triangulated equivalence $h\\Mod{HR} \\cong D(R)$. The generator $HR$ corresponds to the generator $R$ via this equivalence.\n\\end{rem}\n\n\\begin{ex}\\label{ex:Ring}\nLet $R$ be a ring, viewed as a dg algebra concentrated in degree $0$. Then Example~\\ref{ex:DGA} yields the ordinary derived category $D(R)$. The graded endomorphism ring of $R$ in $D(R)$ is $H_* R$, which is $R$ concentrated in degree $0$.\nThis is $N$-sparse for any $N \\geq 2$. 
\n\nThe Adams spectral sequence relative to $\\cat{P}_R$ is the hyperderived functor spectral sequence\n\\[\n\\Ext_{H_* R}^{s} \\left( \\Sigma^t H_* X, H_* Y \\right) = \\prod_{i \\in \\mathbb{Z}} \\Ext_{R}^{s} \\left( H_{i-t} X, H_{i} Y \\right) \\Ra D(R)(\\Sigma^{t-s}X, Y) = \\mathbf{Ext}_{R}^{s-t}(X,Y)\n\\]\nfrom ordinary $\\Ext$ to hyper-$\\Ext$, as described in \\cite{Weibel94}*{\\S 5.7, 10.7}.%\n\\end{ex}\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nRogers \\cite{Rog58} introduced the notion of computable numbering\n(of all partial computable functions) and the notion of reducibility.\nHe showed that the set of all equivalence classes of computable numberings\nforms an upper semilattice with respect to reducibility which is called\nthe \\emph{Rogers semilattice}. He asked whether this semilattice is\na lattice, and if not, whether any two elements have a lower bound.\nFriedberg \\cite{Fri58} constructed an injective computable numbering\n(called a Friedberg numbering) by a finite-injury priority argument.\nPour-El \\cite{Pou64} showed that every Friedberg numberings of is\nminimal. She also showed that there are two incomparable Friedberg\nnumberings through modifying Friedberg's construction. These numberings\nare non-equivalent minimal elements, and therefore they have no lower\nbound. Thus Rogers' questions were negatively answered.\n\nShen \\cite{She12} gave some examples of game-theoretic proofs of\ntheorems in computability theory and algorithmic information theory.\nIn particular, he gave the game-theoretic proof of the theorem of\nFriedberg. The game representation of Friedberg's construction is\nclear and intuitional, and can be used to prove other existence theorems\nof Friedberg numberings as we will demonstrate in the paper.\n\nIn \\prettyref{sec:Friedberg's construction} we present Shen's proof\nto use later. We provide game proofs of certain well-known existence\ncriteria of Friedberg numberings for general classes of partial computable\nfunctions. In \\prettyref{sec:Modifications}, we give two proofs of\nthe theorem of Pour-El using two games. Also we give the proof of\nthe existence of an infinite c.e. sequence and an independent sequence\nof Friedberg numberings. These are essentially modifications of Shen's\nproof.\n\n\n\\section{Notations and Definitions}\n\nWe denote by $\\mathcal{P}^{\\left(1\\right)}$ the set of all partial\ncomputable functions from $\\mathbb{N}$ to $\\mathbb{N}$. $\\braket{\\cdot,\\cdot}$\nis a computable pairing function which is a computable bijection between\n$\\mathbb{N}^{2}$ and $\\mathbb{N}$. Let $\\mathcal{A}$ be any set.\nA surjective map $\\nu:\\mathbb{N}\\to\\mathcal{A}$ is called a \\emph{numbering}\nof $\\mathcal{A}$. Let $\\nu$ and $\\mu$ be numberings of $\\mathcal{A}$.\nWe say that $\\nu$ is \\emph{reducible to} $\\mu$, denoted by $\\nu\\leq\\mu$,\nif there is a total computable function $f:\\mathbb{N}\\to\\mathbb{N}$\nsuch that $\\nu=\\mu\\circ f$. We say that $\\nu$ and $\\mu$ are \\emph{equivalent}\nif they are reducible to each other. We say that $\\nu$ and $\\mu$\nare \\emph{incomparable} if they are not reducible to each other. In\nthis paper, we often identify a numbering $\\nu$ of a set of partial\nmaps from $X$ to $Y$ with the partial map $\\nu\\left(i,x\\right)=\\nu\\left(i\\right)\\left(x\\right)$\nfrom $\\mathbb{N}\\times X$ to $Y$. 
A numbering $\\nu$ of a subset\nof $\\mathcal{P}^{\\left(1\\right)}$ is said to be \\emph{computable}\nif it is computable as a partial function from $\\mathbb{N}^{2}$ to\n$\\mathbb{N}$. A computable injective numbering is called a \\emph{Friedberg\nnumbering}. A sequence $\\set{\\nu_{i}}_{i\\in\\mathbb{N}}$ of numberings\nof a subset of $\\mathcal{P}^{\\left(1\\right)}$ is said to be \\emph{uniformly\nc.e.} if it is uniformly c.e. as a sequence of partial functions from\n$\\mathbb{N}^{2}$ to $\\mathbb{N}$, or equivalently, if it is computable\nas a partial function from $\\mathbb{N}^{3}$ to $\\mathbb{N}$. We\nsay that a sequence $\\set{\\nu_{i}}_{i\\in\\mathbb{N}}$ of numberings\nof a set $\\mathcal{A}$ is \\emph{independent} if $\\nu_{i}\\nleq\\bigoplus_{j\\neq i}\\nu_{j}$\nfor all $i\\in\\mathbb{N}$, where $\\bigoplus_{i\\in\\mathbb{N}}\\nu_{j}$\nis the direct sum of $\\set{\\nu_{i}}_{i\\in\\mathbb{N}}$ defined by\n$\\bigoplus_{i\\in\\mathbb{N}}\\nu_{i}\\left(\\braket{j,k}\\right)=\\nu_{j}\\left(k\\right)$.\n\n\n\\section{\\label{sec:Friedberg's construction}Friedberg's construction and\nthe infinite game with two boards}\n\\begin{thm}[{Friedberg \\cite[Corollary to Theorem 3]{Fri58}}]\n\\label{thm:Fri58-Corollary-to-Theorem3}$\\mathcal{P}^{\\left(1\\right)}$\nhas a Friedberg numbering.\\end{thm}\n\\begin{proof}[Proof (Shen \\cite{She12})]\nFirst, we consider an infinite game $\\mathcal{G}_{0}$ and prove\nthat the existence of a computable winning strategy of $\\mathcal{G}_{0}$\nfor one of the players implies the existence of a Friedberg numbering\nof $\\mathcal{P}^{\\left(1\\right)}$. The game $\\mathcal{G}_{0}$ is\nas follows:\n\\begin{description}\n\\item [{Players}] Alice, Bob.\n\\item [{Protocol}] FOR $s=0,1,2,\\ldots$:\n\n\nAlice announces a finite partial function $A_{s}:\\mathbb{N}^{2}\\rightharpoonup\\mathbb{N}$.\n\n\nBob announces a finite partial function $B_{s}:\\mathbb{N}^{2}\\rightharpoonup\\mathbb{N}$.\n\n\\item [{Collateral duties}] $A_{s}\\subseteq A_{s+1}$ and $B_{s}\\subseteq B_{s+1}$\nfor all $s\\in\\mathbb{N}$.\n\\item [{Winner}] Let $A=\\bigcup_{s\\in\\mathbb{N}}A_{s}$ and $B=\\bigcup_{s\\in\\mathbb{N}}B_{s}$.\nBob wins if\n\n\\begin{enumerate}\n\\item for each $i\\in\\mathbb{N}$, there is a $j\\in\\mathbb{N}$ such that\n$A\\left(i,\\cdot\\right)=B\\left(j,\\cdot\\right)$;\n\\item for any $i,j\\in\\mathbb{N}$, if $i\\neq j$, then $B\\left(i,\\cdot\\right)\\neq B\\left(j,\\cdot\\right)$.\n\\end{enumerate}\n\\end{description}\n\nWe consider $A$ and $B$ as two boards, $A$-table and $B$-table.\nEach board is a table with an infinite number of rows and columns.\nEach player plays on its board. At each move player can fill finitely\nmany cells with any natural numbers. The collateral duties prohibit\nplayers from erasing cells.\n\n\nA strategy is a map that determines the next action based on the previous\nactions of the opponent. Since any action in this game is a finitary\nobject, we can define the computability of strategies via g\\\"odelization.\nSuppose that there is a computable winning strategy for Bob. Let Alice\nfill $A$-table with the values of some computable numbering of $\\mathcal{P}^{\\left(1\\right)}$\nby using its finite approximation, and let Bob use some computable\nwinning strategy. Clearly $B$ is a Friedberg numbering of $\\mathcal{P}^{\\left(1\\right)}$.\n\n\nSecond, we consider an infinite game $\\mathcal{G}_{1}$, which is\na simplified version of $\\mathcal{G}_{0}$, and describe a computable\nwinning strategy of $\\mathcal{G}_{1}$. 
The game $\\mathcal{G}_{1}$\nis as follows:\n\\begin{description}\n\\item [{Players}] Alice, Bob.\n\\item [{Protocol}] FOR $s=0,1,2,\\ldots$:\n\n\nAlice announces a finite partial function $A_{s}:\\mathbb{N}^{2}\\rightharpoonup\\mathbb{N}$.\n\n\nBob announces a finite partial function $B_{s}:\\mathbb{N}^{2}\\rightharpoonup\\mathbb{N}$\nand a finite set $K_{s}\\subseteq\\mathbb{N}$.\n\n\\item [{Collateral duties}] $A_{s}\\subseteq A_{s+1}$, $B_{s}\\subseteq B_{s+1}$\nand $K_{s}\\subseteq K_{s+1}$ for all $s\\in\\mathbb{N}$.\n\\item [{Winner}] Let $A=\\bigcup_{s\\in\\mathbb{N}}A_{s}$, $B=\\bigcup_{s\\in\\mathbb{N}}B_{s}$\nand $K=\\bigcup_{s\\in\\mathbb{N}}K_{s}$. Bob wins if\n\n\\begin{enumerate}\n\\item for each $i\\in\\mathbb{N}$, there is a $j\\in\\mathbb{N}\\setminus K$\nsuch that $A\\left(i,\\cdot\\right)=B\\left(j,\\cdot\\right)$;\n\\item for any $i,j\\in\\mathbb{N}\\setminus K$, if $i\\neq j$, then $B\\left(i,\\cdot\\right)\\neq B\\left(j,\\cdot\\right)$.\n\\end{enumerate}\n\\end{description}\n\nWe consider that in this game Bob can \\emph{invalidate} some rows\nand that we ignore invalid rows when we decide the winner. Bob cannot\nvalidate invalid rows again.\n\n\nTo win this game, Bob hires a countable number of assistants who guarantee\nthat each of the rows in $A$-table appears in $B$-table exactly\nonce. At each move, the assistants work one by one. The $i$-th assistant\nstarts working at move $i$. She can reserve a row in $B$-table exclusively,\nfill her reserved row, and invalidate her reserved row. The instruction\nfor the $i$-th assistant: \\textit{if you have no reserved row, reserve\na new row. Let $k$ be the number of rows such that you have already\ninvalidated. If in the current state of $A$-table the first $k$\npositions of the $i$-th row are identical to the first $k$ positions\nof some previous row, invalidate your reserved row. If you have a\nreserved row, copy the current contents of the $i$-th row of $A$-table\ninto your reserved row.} These instructions guarantee in the limit\nthat\n\\begin{itemize}\n\\item if the $i$-th row in $A$-table is identical to some previous row,\nthen the $i$-th assistant invalidates her reserved row infinitely\nmany times, so she has no permanently reserved row;\n\\item if not, the $i$-th assistant invalidates her reserved row only finitely\nmany times, so she has a permanently reserved row.\n\\end{itemize}\n\n\\noindent In the second case, she faithfully copies the contents of\nthe $i$-th row of $A$-table into her permanently reserved row. We\ncan assume that each of the rows in $B$-table has been reserved or\ninvalidated in the limit: when some assistant reserves a row let her\nselect the first unused row. Then Bob wins the simplified game. Now\nwe prove the above properties. Suppose that the $i$-th row in $A$-table\nis not identical to any previous row in the limit. For each of the\nprevious rows, select some column witnessing that this row is not\nidentical to the $i$-th row. Let $k$ be the maximum of the selected\ncolumns. Wait for convergence of the rectangular area $\\left[0,i\\right]\\times\\left[0,k\\right]$\nof $A$-table. After that, the first $k$ positions of the $i$-th\nrow in $A$-table are not identical to the first $k$ positions of\nany previous row, and hence the $i$-th assistant invalidates her\nreserved row at most $k$ times. Conversely, suppose that the $i$-th\nassistant invalidates her reserved row only finitely many times. Let\n$k$ be the number of invalidations. 
After the $k$-th invalidation,\nthe $i$-th row in $A$-table is not identical to any previous row,\nand the same is true in the limit.\n\n\nFinally, we describe a computable winning strategy of $\\mathcal{G}_{0}$\nthrough modifying the winning strategy of $\\mathcal{G}_{1}$ described\nabove. We say that a row is odd if it contains a finite odd number\nof non-empty cells. We can assume without loss of generality that\nodd rows never appear in $A$-table: if Alice fills some cells in\na row making this row odd, Bob ignores one of these cells until Alice\nfills other cells in this row. We replace invalidation to \\emph{odd-ification}:\ninstead of invalidating a row, fill some cells in this row making\nit new and odd. Bob consider that odd-ified rows in $B$-table are\ninvalid. This modification guarantees that each of the non-odd rows\nof $A$-table appears in $B$-table exactly once. Bob hires an additional\nassistant who guarantees that each odd row appears in $B$-table exactly\nonce. At each move, the additional assistant reserves some row exclusively\nand fills some cells in this row making it new and odd so that all\nodd rows are exhausted in the limit. Thus Bob wins this game, and\nthe theorem is proved.\n\n\\end{proof}\nKummer \\cite{Kum90a} gave a priority-free proof of the existence\nof a Friedberg numbering of $\\mathcal{P}^{\\left(1\\right)}$. The key\npoint of his proof is to split $\\mathcal{P}^{\\left(1\\right)}$ into\n$\\mathcal{P}^{\\left(1\\right)}\\setminus\\mathcal{O}$ and $\\mathcal{O}$,\nwhere $\\mathcal{O}$ is the set of all \\emph{odd} partial functions.\nObserve that $\\mathcal{P}^{\\left(1\\right)}\\setminus\\mathcal{O}$ has\na computable numbering, $\\mathcal{O}$ has a Friedberg numbering,\nand any finite subfunction of a partial function in $\\mathcal{P}^{\\left(1\\right)}\\setminus\\mathcal{O}$\nhas infinitely many extensions in $\\mathcal{O}$. He provided the\nfollowing useful criterion.\n\\begin{cor}[{Kummer \\cite[Extension Lemma]{Kum89}}]\n\\label{cor:Kum89-Extension-Lemma}Let $\\mathcal{A}$ and $\\mathcal{B}$\nbe disjoint subsets of $\\mathcal{P}^{\\left(1\\right)}$. If $\\mathcal{A}$\nhas a computable numbering, $\\mathcal{B}$ has a Friedberg numbering,\nand every finite subfunction of a member of $\\mathcal{A}$ has infinitely\nmany extensions in $\\mathcal{B}$, then $\\mathcal{A}\\cup\\mathcal{B}$\nhas a Friedberg numbering.\\end{cor}\n\\begin{proof}[Proof sketch]\nLet us play the game $\\mathcal{G}_{0}$ where Alice fills $A$-table\nwith the values of some computable numbering of $\\mathcal{P}^{\\left(1\\right)}$,\nand Bob uses the strategy which is obtained by modifying the winning\nstrategy of $\\mathcal{G}_{0}$ as follows. In this strategy, we do\nnot assume that odd rows never appear in $A$-table, and Bob does\nnot ignore cells in $A$-table. Assistants use partial functions in\n$\\mathcal{B}$ instead of odd partial functions. Replace odd-ification\nto \\emph{$\\mathcal{B}$-ification}: instead of odd-ificating a row,\nfill some cells in this row making it an unused member of $\\mathcal{B}$\nin the limit. The additional assistant guarantees that each member\nof $\\mathcal{B}$ appears in $B$-table exactly once. These actions\nare possible since $\\mathcal{B}$ has a Friedberg numbering. 
Then,\nBob wins, and $B$ becomes a Friedberg numbering of $\\mathcal{A}\\cup\\mathcal{B}$.\\end{proof}\n\\begin{cor}[{Pour-El and Putnam \\cite[Theorem 1]{PP65}}]\nLet $\\mathcal{A}$ be a subset of $\\mathcal{P}^{\\left(1\\right)}$\nand $f$ be a member of $\\mathcal{P}^{\\left(1\\right)}$ with an infinite\ndomain. If $\\mathcal{A}$ has a computable numbering, then there is\na subset $\\mathcal{B}$ of $\\mathcal{P}^{\\left(1\\right)}$ such that\n\\begin{enumerate}\n\\item $\\mathcal{A}\\subseteq\\mathcal{B}$,\n\\item the domain of every member of $\\mathcal{B}\\setminus\\mathcal{A}$ is\nfinite,\n\\item for any $g\\in\\mathcal{B}\\setminus\\mathcal{A}$, there is an $h\\in\\mathcal{A}$\nwith $g\\subseteq f\\cup h$,\n\\item $\\mathcal{B}$ has a Friedberg numbering.\n\\end{enumerate}\n\\end{cor}\n\\begin{proof}[Proof sketch]\nLet us play the game $\\mathcal{G}_{0}$ where Alice fills $A$-table\nwith the values of some computable numbering of $\\mathcal{P}^{\\left(1\\right)}$,\nand Bob uses the strategy which is obtained by modifying the winning\nstrategy of $\\mathcal{G}_{0}$ as follows. When some assistant fills\na cell in the $j$-th column making this row odd, she must use $f\\left(j\\right)$\nfor filling. The instruction for the additional assistant: \\textit{if\nthere is an odd row $i$ in $A$-table such that this row is not identical\nto any row in $B$-table, reserve a new row, copy the current contents\nof the $i$-th row of $A$-table into your reserved row, and release\nyour reserved row. }Released rows cannot be used forever. Note that\nthe additional assistant exceptionally does not ignore cells in $A$-table.\nThen, Bob wins, $\\mathcal{B}=\\set{B\\left(i,\\cdot\\right)|i\\in\\mathbb{N}}$\nhas the desired properties, and $B$ becomes a Friedberg numbering\nof $\\mathcal{B}$.\n\\end{proof}\n\n\\section{\\label{sec:Modifications}Modifications}\n\\begin{thm}[{Pour-El \\cite[Theorem 2]{Pou64}}]\n\\label{thm:Pou64-Theorem2}There are two incomparable Friedberg numberings\nof $\\mathcal{P}^{\\left(1\\right)}$.\n\\end{thm}\nThe first proof is obtained from the proof of \\prettyref{thm:Fri58-Corollary-to-Theorem3}\nthrough modifying in the same way done by Pour-El.\n\\begin{proof}[Proof (asymmetric version)]\nWe consider the following game $\\mathcal{G}_{2}$:\n\\begin{description}\n\\item [{Players}] Alice, Bob.\n\\item [{Protocol}] FOR $s=0,1,2,\\ldots$:\n\n\nAlice announces a finite partial function $A_{s}:\\mathbb{N}^{2}\\rightharpoonup\\mathbb{N}$.\n\n\nBob announces a finite partial function $B_{s}:\\mathbb{N}^{2}\\rightharpoonup\\mathbb{N}$.\n\n\\item [{Collateral duties}] $A_{s}\\subseteq A_{s+1}$ and $B_{s}\\subseteq B_{s+1}$\nfor all $s\\in\\mathbb{N}$.\n\\item [{Winner}] Let $A=\\bigcup_{s\\in\\mathbb{N}}A_{s}$ and $B=\\bigcup_{s\\in\\mathbb{N}}B_{s}$.\nBob wins if\n\n\\begin{enumerate}\n\\item for each $i\\in\\mathbb{N}$, there is a $j\\in\\mathbb{N}$ such that\n$A\\left(i,\\cdot\\right)=B\\left(j,\\cdot\\right)$;\n\\item for any $i,j\\in\\mathbb{N}$, if $i\\neq j$, then $B\\left(i,\\cdot\\right)\\neq B\\left(j,\\cdot\\right)$;\n\\item for each $i\\in\\mathbb{N}$, if $A\\left(i,\\cdot\\right)$ is total,\nthen there is a $j\\in\\mathbb{N}$ such that $B\\left(A\\left(i,j\\right),\\cdot\\right)\\neq A\\left(j,\\cdot\\right)$.\n\\end{enumerate}\n\\end{description}\n\nSuppose that there is a computable winning strategy of $\\mathcal{G}_{2}$\nfor Bob. 
Let Alice fill $A$-table with the values of some Friedberg\nnumbering of $\\mathcal{P}^{\\left(1\\right)}$, and let Bob use some\ncomputable winning strategy. Then, $A$ is a Friedberg numbering of\n$\\mathcal{P}^{\\left(1\\right)}$, and $B$ is a Friedberg numbering\nof $\\mathcal{P}^{\\left(1\\right)}$ to which $A$ is not reducible.\nSince $A$ is minimal, $B$ is also not reducible to $A$.\n\n\nWe describe a computable winning strategy of $\\mathcal{G}_{2}$. Bob\nuses the winning strategy of $\\mathcal{G}_{0}$ described in the proof\nof \\prettyref{thm:Fri58-Corollary-to-Theorem3}, which guarantees\nthat the first two winning conditions are satisfied. For the third\nwinning condition, Bob adds the following instruction for the $i$-th\nassistant: \\textit{if the $i$-th row $i$-th column in $A$-table\nhas been filled, and you have reserved the $A\\left(i,i\\right)$-th\nrow, then odd-ify the $A\\left(i,i\\right)$-th row.} Each of these\ninstructions is done at most once because after doing this the corresponding\nrequirement is permanently satisfied. Hence they do not interrupt\nsatisfying the first two winning conditions. It remains to show that\nthe third winning condition is also satisfied. Suppose that $A\\left(i,\\cdot\\right)$\nis total. We can assume without loss of generality that $i$ is the\nleast index of $A\\left(i,\\cdot\\right)$, i.e., there is no $j1, & \\end{array} \\right .\n\\end{equation}\nNote that in all cases, the number variance of a hyperuniform\npoint pattern grows more slowly than $R^d$.\n\n\\subsection{Order metrics}\n\nThe local bond-orientational-order metric $q_6$ is defined as\n\\cite{ref25}\n\\begin{equation}\nq_6 = \\left |{\\frac{1}{N_b}\\sum_j\\sum_k \\exp(6i\n\\theta_{jk})}\\right |,\n\\end{equation}\nwhere $j$ runs over all cells in the system, $k$ runs over all\nneighbors of cell $j$, $\\theta_{jk}$ is the angle between some\nfixed reference axis in the system and the bond connecting the\ncenters of cells $j$ and $k$, and $N_b$ is the total number of\nsuch bonds in the system. This quantity indicates the degree of\norientational order in the local arrangement of the immediate\nneighbors of a cell and it is maximized (i.e., $q_6=1$) for the\nperfect hexagonal arrangement.\n\nTo characterize translational order of a configuration, we use the\nfollowing translation order metric $T$ introduced in Ref.\n\\cite{order_T} and further applied in Ref. \\cite{ref26},\n\\begin{equation}\nT = \\frac{1}{\\eta_c}\\int_0^{\\eta_c}|g_2(r)-1|dr =\n\\frac{1}{\\eta_c}\\int_0^{\\eta_c}|h(r)|dr\n\\end{equation}\nwhere $g_2(r)$ is the pair correlation function, $h(r) = g_2(r)-1$\nis the total correlation function and $\\eta_c$ is a numerical\ncutoff determined by the linear size of the system. The\ntranslational order metric measures the deviation of the spatial\narrangement of cell centers in a pattern from that of a totally\ndisordered system (i.e., a Poisson distribution of points). The\ngreater the deviation from zero, the more ordered is the point\nconfiguration.\n\n\n\\section{Structural properties of experimentally obtained photoreceptor\npatterns}\n\nThe chicken retina contains five different cone cell types of\ndifferent sizes: violet, blue, green, red and double. Each cell\ntype of this {\\it multicomponent} system is maximally sensitive to\nvisible light of a different wavelength. The spatial coordinates\nof each cell can be determined by the presence of a colored oil\ndroplet in the cell's inner segment (Fig. \\ref{fig1_cone}). 
Since\nthe oil droplets used to identify the locations of individual\nphotoreceptors are not always in exactly the same plane\n\\cite{ref12}, pairs of real photoreceptors sometimes appear to be\ncloser to one another than they are in actuality and in the\nsimulations. In addition, the original slightly curved retina\nepithelium was flattened for imaging purposes \\cite{ref12}. These\neffects introduce small errors in the intercell small-distance\nbehavior but do not affect the overall statistics, especially on\nlarge length scales. The spatial coordinate datasets of post-hatch\nday 15 chicken (Gallus gallus) cone photoreceptors were obtained\nfrom a published study \\cite{ref12}. Each dataset contains\napproximately 4430 photoreceptors, and the average numbers of\nviolet, blue, green, red and double species are respectively 350,\n590, 880, 670 and 1840. To clearly illustrate the photoreceptor\npatterns of different species, only a portion of the entire system\nis shown in Fig. \\ref{fig_cellpacking}. We compute a variety of\nthe associated statistical structural descriptors and order\nmetrics to quantify the degree of spatial regularity (or disorder)\nof the cell arrangements.\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[width=10.5cm,keepaspectratio]{fig3.eps}\\\\\n\\end{center}\n\\caption{Experimentally obtained configurations representing the\nspatial arrangements of chicken cone photoreceptors. Upper panels:\nThe configurations shown from left to right respectively\ncorrespond to violet, blue, green species. Lower panels: The\nconfigurations shown from left to right respectively correspond to\nred, double species and the overall pattern.}\n\\label{fig_cellpacking}\n\\end{figure*}\n\n\n\\subsection{Disordered Hyperuniformity}\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[height=10cm,keepaspectratio]{fig4.eps} \\\\\n\\end{center}\n\\caption{Structure factors $S(k)$ of the experimentally obtained\npoint configurations representing the spatial arrangements of\nchicken cone photoreceptors. The experimental data were obtained\nby averaging 14 independent patterns. The estimated values of\n$S(k=0)$ by extrapolation for violet, blue, green, red, double and\nthe overall population in the actual pattern are respectively\ngiven by $2.11\\times 10^{-3}$, $6.10\\times10^{-4}$,\n$1.06\\times10^{-3}$, $5.72\\times10^{-4}$, $1.38\\times10^{-4}$,\n$1.13\\times10^{-3}$.} \\label{fig_Sk}\n\\end{figure*}\n\n\nAs discussed in Sec. IIB, a point pattern is hyperuniform if the\nnumber variance $\\sigma^2(R)$ within a spherical sampling window\nof radius $R$ (in $d$ dimensions) grows more slowly than the\nwindow volume for large $R$, i.e., more slowly than $R^d$\n\\cite{ref13}. The property of hyperuniformity can also be\nascertained from the small wavenumber behavior of the structure\nfactor, i.e., $S(k=0)=0$ of the pattern \\cite{ref13}, which\nencodes information about large-scale spatial correlations (see\nSec. IIB for details). We find that $S(k)$ for the cell\nconfigurations associated with both the total population and the\nindividual photoreceptor species are hyperuniform and each of\nthese structure factors vanishes linearly with k as k tends to\nzero, i.e., $S(k) \\sim k$ ($k \\rightarrow 0$) (see Fig.\n\\ref{fig_Sk}). As discussed in Sec. IIB [cf. Eq. 
(\\ref{eq_S0})],\nsuch a linear behavior indicates a power-law decay for large $r$\nvalues in the pair correlation function (i.e., $g_2(r)-1 \\sim\n-1\/r^{3}$) instead of an exponential decay and therefore\nquasi-long-range correlations in the system. We will elaborate on\nthis point in the ensuing discussion.\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[height=8.5cm,keepaspectratio]{fig5.eps} \\\\\n\\end{center}\n\\caption{The number variance $\\sigma^2(R)$ associated with the\nphotoreceptor patterns in chicken retina as well as the associated\nfitting function of the form $\\sigma^2(R) = A R^2 + B R\\ln(R) + C\nR$. We found that the values of the parameter $A$ are several\norders of magnitude smaller than the other two parameters,\nindicating that the associated patterns are effectively\nhyperuniform. Also shown in each plot is the ``surface term'' $C\nR$ for purposes of comparison. The window radius $R$ is normalized\nwith respect to the mean nearest neighbor distance $d_0$ of the\ncorresponding point configurations.} \\label{fig_sigma}\n\\end{figure*}\n\n\nWe have directly computed the number variance $\\sigma^2(R)$ for\nthe individual and overall patterns and verified that they are\nalso consistent with hyperuniformity, i.e., the ``volume term'' in\n$\\sigma^2(R)$ is several orders of magnitude smaller than the\nother terms [c.f. Eq.~\\eqref{numasymp}]. Specifically, for each\n$R$ value, 2500 windows are randomly placed in the system without\noverlapping the system boundary. The finite system size $L$\nimposes an upper limit on the largest window size, which is chosen\nto be $R_{max} = L\/2$ here. Figure \\ref{fig_sigma} shows the\nexperimental data as well as the associated fitting functions of\nthe form\n\\begin{equation}\n\\sigma^2(R) = A R^2 + B R\\ln(R) + C R,\n\\end{equation}\nwhere $A = S(k=0)$ and $B, C>0$. Note that in the plots, the\nwindow size $R$ is normalized by the corresponding\nnearest-neighbor distance $d_0$ for each species. Also shown in\neach plot is the corresponding ``surface term'' $C R$ for purposes\nof comparison. The numerical values of the fitting parameters for\nboth the overall pattern and the individual species are given in\nTable \\ref{tab_1}. It can be clearly seen that the values of the\nparameter $A$ are several orders of magnitude smaller than the\nother two parameters, indicating that the associated patterns are\neffectively hyperuniform. 
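As an aside, the windowed estimate of $\\sigma^2(R)$ just described is straightforward to reproduce numerically. The sketch below (with illustrative variable names and window counts; it is not the exact code used to produce Fig.~\\ref{fig_sigma} or Table~\\ref{tab_1}) counts points in randomly placed circular windows that avoid the system boundary and then fits the form $A R^2 + B R\\ln R + C R$ by linear least squares.
\\begin{verbatim}
import numpy as np

def number_variance(points, box, radii, n_windows=2500, seed=0):
    """Estimate sigma^2(R): points is an (N, 2) array of coordinates in
    [0, box]^2, radii is a 1-D array of window radii with R <= box/2."""
    rng = np.random.default_rng(seed)
    variances = []
    for R in radii:
        # window centres chosen so that the window never crosses the boundary
        centres = rng.uniform(R, box - R, size=(n_windows, 2))
        d2 = ((points[None, :, :] - centres[:, None, :]) ** 2).sum(-1)
        counts = (d2 <= R * R).sum(axis=1)
        variances.append(counts.var())
    return np.asarray(variances)

def fit_asymptotics(radii, variances):
    """Least-squares fit of sigma^2(R) = A R^2 + B R ln(R) + C R."""
    X = np.column_stack([radii**2, radii * np.log(radii), radii])
    (A, B, C), *_ = np.linalg.lstsq(X, variances, rcond=None)
    return A, B, C
\\end{verbatim}
(The loop over windows and points is vectorised for brevity; for large samples it can be chunked over windows to limit memory.)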
These values are also consistent with\nthe numerical values of $S(k=0)$ obtained by directly fitting\n$S(k)$ for small $k$ values \\cite{footnote0}.\n\n\\begin{table*}[h]\n\\caption{The numerical values of the fitting parameters for both\nthe overall pattern and the individual species.}\n\\begin{tabular}{c|@{\\hspace{0.35cm}}c@{\\hspace{0.35cm}}c@{\\hspace{0.35cm}}c@{\\hspace{0.35cm}}c@{\\hspace{0.35cm}}c@{\\hspace{0.35cm}}c}\n\\hline\n& Violet & Blue & Green & Red & Double & Overall \\\\\n\\hline $A$ & $2.53\\times 10^{-4}$ & $9.24\\times 10^{-4}$ &\n$1.07\\times 10^{-3}$ & $1.77\\times 10^{-3}$ & $4.46\\times 10^{-3}$\n& $1.93\\times 10^{-3}$ \\\\\n$B$ & 0.203 & 0.198 & 0.169 & 0.146 & 0.122 & 0.127\\\\\n$C$ & 1.22 & 1.14 & 1.03 & 1.09 & 1.17 & 1.06 \\\\\n\\hline\n\\end{tabular}\n\\label{tab_1}\n\\end{table*}\n\nThe fact that the photoreceptor patterns display both overall\nhyperuniformity and homotypic hyperuniformity implies that if any\nsubpopulation of the individual species is removed from the\noverall population, the remaining pattern is still hyperuniform.\nWe term such patterns {\\it multi-hyperuniform} because distinct\nmultiple subsets of the overall point pattern are themselves\nhyperuniform. These are highly unusual and unique structural\nattributes. Until now, the property of \\textit{overall}\nhyperuniformity was identified only in a special subset of\ndisordered physical systems \\cite{ref15, ref16, ref18, berthier,\nweeks, aleksPRL, jiaoPRE, helium, plasma,universe, fermion,\nref17}. The chicken photoreceptor patterns provides the first\nexample of a disordered hyperuniform biological system. In\naddition, the photoreceptor patterns possess quasi-long-range\n(QLR) correlations as indicated by the linear small-$k$ behavior\nin $S(k)$. We will elaborate on these points in Sec. V.\n\n\\subsection{Pair Correlation Functions}\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[height=10cm,keepaspectratio]{fig6.eps} \\\\\n\\end{center}\n\\caption{Pair correlation functions $g_2(r)$ of the experimentally\nobtained point configurations representing the spatial\narrangements of chicken cone photoreceptors. The experimental data\nwere obtained by averaging 14 independent patterns. The distance\nis rescaled by the average nearest neighbor distance $d_n$ in the\nsystem.} \\label{fig_g2}\n\\end{figure*}\n\nWe find that each cell is associated with an effective exclusion\nregion (i.e., an area in 2D) with respect to any other cells,\nregardless of the cell types. The size of these exclusion regions\nroughly corresponds to the size of the cells themselves\n\\cite{ref12}. In addition, cells belonging to the same subtype\n(i.e., like-cells) are found to be mutually separated from one\nanother almost as far as possible, leading to a larger effective\nexclusion region associated with like-cells of each species. The\nexclusion effects are quantitatively captured by the associated\npair-correlation functions (Fig. \\ref{fig_g2}). The hard-core\nexclusion effect is manifested in $g_2(r)$ as an interval of $r$\nfor which $g_2(r) = 0$ (i.e., an ``exclusion gap'') and $g_2(r)$\napproaches its large-$r$ asymptotic value of unity very quickly,\nindicating the absence of any long-range spatial ordering. 
This is\nto be contrasted with ordered systems, such as crystals, whose\npair correlation functions are composed of individual Dirac delta\nfunctions at specific $r$ values.\n\n\\subsection{Order Metrics}\n\n\\begin{table*}[h]\n\\caption{Bond-orientational and translational order metrics, $q_6$\nand $T$, respectively, of the chicken photoreceptor patterns. The\nexperimental data were obtained by averaging 14 independent\npatterns.}\n\\begin{tabular}{@{\\vrule height 10.5pt depth4pt width0pt}c|c|c}\n\\hline\n~Species~ & ~$q_6$~ & ~$T$~ \\\\\n\\hline\nViolet & ~0.150~ & ~0.304~ \\\\\nBlue & ~0.158~ & ~0.411~ \\\\\nGreen & ~0.130~ & ~0.278~ \\\\\nRed & ~0.147~ & ~0.254~ \\\\\nDouble & ~0.184~ & ~0.390~ \\\\\nAll & ~0.058~ & ~0.096~ \\\\\n\\hline\n\\end{tabular}\n\\label{tab_2}\n\\end{table*}\n\n\nA bond-orientational order metric $q_6$ \\cite{ref25} and a\ntranslational order metric $T$ \\cite{ref26} were used next to\nquantify the degree of spatial regularity in the photoreceptor\npatterns (see Tab. \\ref{tab_2}), each of which are maximized by\nthe triangular lattice and minimized by a spatially uncorrelated\npoint pattern. Interestingly, the $q_6$ and $T$ values for the\ntotal population are close to the corresponding values for\npolydisperse hard-disk packings we obtained, implying that local\ncell exclusion effect plays a primary role in determining the\noverall pattern. In contrast, the higher $q_6$ and $T$ values for\nindividual cell species suggest that like-cells interact with one\nanother on a length scale larger than the size of a single cell,\nwhich tends to increase the degree of order in the arrangements of\nlike-cells.\n\nFrom a functional point of view, photoreceptor cells of a given\ntype maximize their sampling efficiency when arranged on an\nordered triangular lattice, as in the case of the compound eye of\ninsects \\cite{ref5, ref6}. Importantly, the triangular lattice has\nbeen shown to be the most hyperuniform pattern \\cite{ref13}, i.e.,\nit minimizes the large-scale density fluctuations among all 2D\npatterns. However, this most hyperuniform pattern may not be\nachieved if other constraints (e.g., cell size polydispersity)\nare operable. We therefore hypothesize that the disordered\nhyperuniformity of avian photoreceptor patterns represents a\ncompromise between the tendency of the individual cell types to\nmaximize their spatial regularity and the countervailing effects\nof packing heterotypic cell types within a single epithelium,\nwhich inhibits the spatial regularity of the individual cell\ntypes. In other words, the avian photoreceptors are driven to\nachieve the most ``uniform'' spatial distribution subject to\nheterotypic cell packing constraints.\n\n\n\\section{Computational Model That Yields Multi-Hyperuniform Patterns}\n\nOur initial attempt to model the avian photoreceptor cell patterns\nemployed classic packing models of polydisperse hard disks that\nare driven to their ``jammed states'' \\cite{ref27}. However, these\nmodels failed to generate patterns with multi-hyperuniformity. 
Such standard\njamming models involving interactions on a single length scale are\ninsufficient to represent the two competing effects leading to the\nphotoreceptor patterns, which motivated us to develop a unique\nmultiscale packing model as described below.\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[width=10.5cm,keepaspectratio]{fig7.eps}\n\\end{center} \\caption{Illustration of the hard-core and soft-core\ninteractions in a two-species system containing black and red\ncells. The left panel shows the exclusion regions (circular disks\nwith two distinct sizes) associated with the two types of cells,\nwhich are proportional to the actual sizes of the cells. The black\ncells have a larger exclusion region than the red cells. The\nmiddle panel illustrates the soft-core repulsive interaction\n(large concentric overlapping circles of the solid black disks)\nbetween the black cells. Such a repulsive interaction will drive\nthe black cells to arrange themselves in a perfect triangular\nlattice in the absence of other species. The right panel\nillustrates the soft-core repulsive interaction (large concentric\noverlapping circles of the solid red disks) between the red\ncells.} \\label{fig_packing}\n\\end{figure*}\n\nIn the experimental data representing the spatial arrangements of\nchicken cone photoreceptors, each cell is represented by a point.\nWe refer to these points as ``cell centers'', although they may\nnot correspond to the actual geometrical centers of the cells.\n\nTo go beyond a simple hard-core interaction, we consider two\ntypes of effective cell-cell interactions: isotropic short-range\nhard-core repulsions between any pair of cells and isotropic\nlong-range soft-core repulsions between pairs of like-cells (i.e.,\ncells of the same subtype). The multiscale nature of the model\nresults from the multiple length scales involved in these\ninteractions for different species, as we discuss now. The\nstrength of the hard-core repulsion is characterized by the radius\n$R^i_h$ of a hard-disk exclusion region associated with a cell\ntype $i$. This interaction imposes a nonoverlap constraint such\nthat the distance between the cells $i$ and $j$ cannot be smaller\nthan $(R^i_h + R^j_h)$, which mimics the physical cell packing\nconstraint. In this regard, $R^i_h$ will also be referred to as\nthe radius of a cell $i$ in the ensuing discussions. The relative\nmagnitudes of $R^i_h$ are estimated from an electron micrograph\nshowing photoreceptor cell packing at the level of the inner\nsegment (see discussion below) \\cite{ref11}. The characteristic\nradius $R_s$ of the soft-core repulsion is associated with the\nmean nearest-neighbor distance of the cells of the same type.\nSpecifically, the pair potential between two like-cells is given\nby\n\\begin{equation}\n\\label{potential} E(r) =\n\\left\\{{\\begin{array}{c@{\\hspace{0.8cm}}c}\n\\displaystyle{\\frac{\\alpha}{\\beta+1}}(2R_s-r)^{\\beta+1} & r\\le\n2R_s,\n\\\\\\\\ 0 & r>2R_s, \\end{array}} \\right .\n\\end{equation}\nwhere the parameters $\\alpha>0$ and $\\beta>0$ set the scale of the\ninteraction energy \\cite{footnote2}. In our simulations, we\nrequire that the value of $R_s$ be uniquely determined by the\nassociated cell number density $\\rho$, i.e., $R_s =\n\\frac{1}{2}\\sqrt{2\/(\\sqrt{3}\\rho)}$. 
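To make the origin of this expression explicit (a short consistency check rather than part of the original derivation): a triangular lattice of number density $\\rho$ has one point per primitive cell of area $\\sqrt{3}\\,a^{2}\/2$, where $a$ is the nearest-neighbor spacing, so
\\[
\\rho = \\frac{2}{\\sqrt{3}\\,a^{2}}, \\qquad a = \\sqrt{\\frac{2}{\\sqrt{3}\\,\\rho}}, \\qquad R_s = \\frac{a}{2} = \\frac{1}{2}\\sqrt{\\frac{2}{\\sqrt{3}\\,\\rho}} .
\\]
With this choice, $2R_s$ coincides with the triangular-lattice nearest-neighbor spacing at density $\\rho$, so every bond of that lattice sits exactly at the cutoff of the potential in Eq.~(\\ref{potential}) and contributes zero energy.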
This implies that a system\ncomposed of cells of the same type (i.e., a single-component\nsystem) interacting via a pair potential given by Eq.\n\\eqref{potential} at number density $\\rho$ (i.e., the number of\ncells per unit area) possesses the triangular-lattice ground\nstate, i.e., an arrangement associated with a minimal total energy\n(sum of the total interaction energy between any pairs of\nlike-cells). In other words, when the total energy in a\nsingle-component system is reduced to its minimal value (e.g.,\nzero), sufficiently slowly from an arbitrary initial\nconfiguration, the cells will reorganize themselves into a\ntriangular-lattice arrangement.\n\n\n\nWhen the system contains multiple species, the hard-core and\nsoft-core interactions represent two competing effects in\ndetermining the packing arrangement of the cells; see Fig.\n\\ref{fig_packing}. Specifically, the polydisperse hard-disk\nexclusion regions induce geometrical frustration in the packing,\ni.e., in this five-component system, it is not possible for the\nsubset of disks with the same size, surrounded by disks with\ndifferent sizes to be arranged on a perfect triangular lattice. On\nthe other hand, the long-range soft interaction between like\nspecies tends to drive the cells of the same type to arrange\nthemselves on a perfect triangular lattice. Note that although the\nrelative magnitudes of $R^i_h$ for different species (i.e., the\nratio between any two $R^i_h$) are fixed, the actual values of\n$R^i_h$ are variable and used as a tuning parameter in our model.\nAs stated above, the ratios between $R^i_h$ are estimated from a\npreviously published study \\cite{ref11}. Specifically, the\nrelative sizes of the violet, blue, green, red and double species\nare 1.00, 1.19, 1.13, 1.06 and 1.50, respectively. Given the\nnumber of cells of each species, the values of $R^i_h$ can be\nuniquely determined from the packing fraction $\\phi$ of the cells\n(i.e., the fraction of space covered by the cells) and vice versa,\n\\begin{equation}\n\\label{eq_phi}\n\\phi = \\displaystyle{\\frac{1}{A}\\sum_i N_i \\pi (R^i_h)^2},\n\\end{equation}\nwhere $N_i$ is the number of cells of species $i$ and $A$ is the\narea of the system.\n\nOur Monte Carlo algorithm, which involves iterating ``growth'' and ``relaxation'' steps,\nworks as follows:\n\n\\begin{itemize}\n\n\\item{(1) Initialization. In the beginning of the simulation, cell centers of each species are generated in a\nsimulation box using the random-sequential-addition (RSA) process\n\\cite{ref27}. Specifically, for each species $i$, $N_i$ cell\ncenters are randomly generated such that these cell centers are\nmutually separated by a minimal distance $\\mu R_s$ $(0<\\mu<1)$. In addition,\nthe newly added cell cannot overlap any existing cells in the box\n(determined by the hard-core radius $R_s$), regardless of cell types. The initial covering fraction $\\phi_I$\nassociated with the hard-core exclusion regions is determined by\n$R^i_h$ via Eq. (\\ref{eq_phi}), and is about $80\\%$ of the RSA saturation density \\cite{ref27}.}\n\n\n\\item{(2) Growth step. At each stage $n$, the cells are allowed to randomly move a prescribed\nmaximal distance ($\\sim 0.25 R^i_h$) and direction such that no\npairs of cells overlap. After a certain number ($\\approx$1,000) of\nsuch random movements for each cell, the radius $R^i_h$ of each\ncell is increased by a small amount such that the size ratios of\nthe cells remain the same. 
This leads to an increase of the\npacking fraction $\\phi_n$ at this stage by an amount of about $1\\% - 3\\%$.\nNote that in this ``growth'' step, the long-range\nsoft interactions between the like-cells are turned off.}\n\n\\item{(3) Relaxation step. At the end of the ``growth'' step,\nthe soft interactions are then turned on, and the cells are\nallowed to relax from their current positions to reduce the total\nsystem energy subject to the nonoverlap condition. The steepest\ndecent method is used to drive the system to the closest local\nenergy minimum (i.e., the inherent structure \\cite{ref27}) associated with the starting configuration. This is\nreferred to as the ``relaxation'' process.}\n\n\n\\item{(4) Statistics. After the relaxation process, structural statistics of the\nresulting configuration of cell centers are obtained and compared\nto the corresponding experimental data. To ensure that the simulations\nmatch the data for the pair statistics to the best extent possible,\nwe introduce a deviation metric $\\Delta$. Specifically, $\\Delta$ is the\nnormalized sums of the squared differences between the simulated and experimental $S(k)$\nand $g_2(r)$ associated with the simulated and actual patterns, i.e.,\n\\begin{equation}\n\\label{eq_delta}\n\\Delta = \\frac{1}{n_S}\\sum_i^{n_S}\\sum_r[g^{(i)}_2(r) - \\bar{g}^{(i)}_2(r)]^2\n +\\frac{1}{n_S}\\sum_i^{n_S}\\sum_k[S^{(i)}(k) - \\bar{S}^{(i)}(k)]^2,\n\\end{equation}\nwhere $n_S = 6$ is the total number of species including both the 5 individual species\nand the overall pattern, $g_2^{(i)}(r)$ and $S^{(i)}(k)$ are the simulated functions associated with\nspecies $i$, and $\\bar{g}_2^{(i)}(r)$ and $\\bar{S}^{(i)}(k)$ are the corresponding\nexperimentally measured functions.}\n\n\n\\item{(5) The growth and relaxation steps described in the bullet items (2) and (3),\nrespectively, are repeated until $\\phi_n$ reaches a prescribed\nvalue $\\phi_F$. Specifically, the configuration obtained by relaxation at stage $n$ is used\nas the starting point for the growth step at stage $n+1$. The best simulated pattern (i.e., that with the smallest\ndeviation metric $\\Delta_{min}$) and the associated $\\phi^*$ value are\nthen identified.}\n\n\\end{itemize}\n\nAt a given packing fraction $\\phi$ (or equivalently a set of\n$R^i_h$), the polydispersity of the exclusion regions associated\nwith different species and the resulting nonoverlap constraints\nfrustrate the spatial order in the system. For example, the\nlong-range soft interaction drives a single-species system to the\ntriangular-lattice arrangement in the absence of other species. On\nthe other hand, for any $\\phi >0$, it is impossible for cells of a\nparticular species, surrounded by cells of other species to sit on\na perfect triangular lattice \\cite{ref12}. Therefore, the\ndisordered point configurations obtained by minimizing the energy\nassociated with the soft repulsive interactions subject to the\nhard-core packing constraints are the local energy minima (i.e., inherent structures)\nof the system. 
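For concreteness, a deliberately simplified sketch of the growth--relaxation iteration described above is given below. The periodic box, the move sizes, the growth factor, the values of $\\alpha$ and $\\beta$, and the use of random downhill moves in place of a true steepest-descent relaxation are all illustrative assumptions, not the settings used to produce the results reported here.
\\begin{verbatim}
import numpy as np

# pos: (N, 2) positions, species: (N,) integer labels,
# Rh: (N,) hard-core radii per cell, Rs: (n_species,) soft-core radii
rng = np.random.default_rng(0)
L = 1.0                      # box side; periodic boundaries are assumed for simplicity
ALPHA, BETA = 1.0, 2.0       # soft-core energy scale and exponent (placeholders)

def Rs_from_density(rho):
    """R_s = (1/2) * sqrt(2 / (sqrt(3) * rho)) for one species of density rho."""
    return 0.5 * np.sqrt(2.0 / (np.sqrt(3.0) * rho))

def pair_distances(pos):
    """All pairwise distances with the minimum-image convention."""
    d = pos[:, None, :] - pos[None, :, :]
    d -= L * np.round(d / L)
    r = np.sqrt((d**2).sum(-1))
    np.fill_diagonal(r, np.inf)
    return r

def has_overlap(pos, Rh):
    """Hard-core check: no pair may be closer than the sum of its exclusion radii."""
    return np.any(pair_distances(pos) < Rh[:, None] + Rh[None, :])

def soft_energy(pos, species, Rs):
    """Like-species energy, E(r) = ALPHA/(BETA+1) * (2 R_s - r)^(BETA+1) for r <= 2 R_s."""
    r = pair_distances(pos)
    same = species[:, None] == species[None, :]
    gap = np.clip(2.0 * Rs[species][:, None] - r, 0.0, None)
    e = ALPHA / (BETA + 1.0) * gap ** (BETA + 1.0)
    return 0.5 * np.sum(np.where(same, e, 0.0))

def growth_step(pos, Rh, n_moves=1000, grow=1.02):
    """Random non-overlapping moves of at most ~0.25 R_h, then inflate all radii together."""
    for _ in range(n_moves):
        i = rng.integers(len(pos))
        trial = pos.copy()
        trial[i] = (trial[i] + 0.25 * Rh[i] * rng.uniform(-1.0, 1.0, 2)) % L
        if not has_overlap(trial, Rh):
            pos = trial
    return pos, grow * Rh      # a common factor keeps the size ratios fixed

def relaxation_step(pos, species, Rh, Rs, n_moves=2000, step=1e-2):
    """Crude minimisation: accept single-cell moves only if they lower the energy
    and respect the hard-core constraints (a stand-in for steepest descent)."""
    energy = soft_energy(pos, species, Rs)
    for _ in range(n_moves):
        i = rng.integers(len(pos))
        trial = pos.copy()
        trial[i] = (trial[i] + step * rng.uniform(-1.0, 1.0, 2)) % L
        if has_overlap(trial, Rh):
            continue
        e_new = soft_energy(trial, species, Rs)
        if e_new < energy:
            pos, energy = trial, e_new
    return pos
\\end{verbatim}
Iterating growth and relaxation steps until the target packing fraction is reached then mimics, in a schematic way, the procedure outlined in items (2)--(5) above.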
The extent to which the structure deviates from that of a\nperfect triangular lattice (i.e., global energy minimum) is\ndetermined by the parameter $\\phi$ (or, equivalently, $R^i_h$).\nTherefore, by tuning this parameter in our algorithm, one can, in\nprinciple, generate a continuous spectrum of configurations of\ncell centers with varying degrees of spatial order (see Appendix).\nNote that in the limit $R^i_h \\rightarrow 0$, triangular-lattice\narrangements for individual species are accessible again and the\nresulting configuration is a superposition of five\ntriangular-lattice arrangements of the cell centers.\n\n\nWe note that the order of the aforementioned growth and relaxation\nsteps can be interchanged without affecting the final\nconfiguration. In addition, instead of starting from a disordered\nRSA arrangement of cell centers as described above, we have also\nused ordered initial configurations (i.e., superposition of\ntriangular-lattice arrangements), leading to the same\nconfiguration at a given number density $\\rho$. However, the\ninitial packing density $\\phi_I$ associated with ordered initial\nconfigurations is very low and thus, it is computationally\ninefficient to start from such initial configurations. By tuning\nthe ``strength'' of the hard-core interactions via the packing\nfraction associated with the exclusion regions, our multiscale\npacking model enables us to produce disordered point configuration\nwith various degrees of hyperuniformity, examples of which are\nprovided in the Appendix for a three-component system for\nillustrative purposes.\n\n\n\\subsection{Modeling Avian Photoreceptor System via Multiscale\nParticle Packing}\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[width=11.5cm,keepaspectratio]{fig8.eps} \\\\\n\\end{center}\n\\caption{Left panel: The bond-orientational order metric $q_6$ of\nthe individual species as a function of the packing fraction\n$\\phi$ associated with the exclusion regions. Right panel: The\ntranslational order metric $T$ of the individual species as a\nfunction of the packing fraction $\\phi$ associated with the\nexclusion regions.} \\label{fig_order}\n\\end{figure*}\n\nBy using the multiscale packing model, we were able to accurately\nreproduce the unique features of the native avian photoreceptors.\nWe modeled the aforementioned two competing effects as two types\nof effective interactions between the cells: a long-range\nsoft-core repulsion between the cells of the same type (that would\nlead to an ordered triangular-lattice arrangement in the absence\nof packing constraints) and a short-range hard-core repulsion\n(with polydisperse exclusion regions associated with different\ncell species) between any pair of cells that frustrates spatial\nordering in the system. Given the sizes of the hard-core exclusion\nregions associated with each cell species (or equivalently the\npacking fraction $\\phi$ of the exclusion regions), the system is\nallowed to relax to a state that is a local energy minimum for the\nlong-range soft-core repulsive interactions between like-species.\nSuch long-range interactions would drive each of the five cell\nspecies in the multicomponent system to the associated\ntriangular-lattice arrangement (global energy minimum) in the\nabsence of the hard-core repulsions. As we increase the strength\nof the hard-core repulsions by increasing $\\phi$, the degree of\norder in the system, which is quantified by the order metrics\n$q_6$ and $T$, decreases (see Fig. \\ref{fig_order}). 
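For reference, the bond-orientational metric $q_6$ shown in Fig.~\\ref{fig_order} can be estimated from a point pattern as in the following sketch; the use of a Delaunay triangulation to define nearest neighbors is an assumption made here for illustration.
\\begin{verbatim}
import numpy as np
from scipy.spatial import Delaunay

def q6(points):
    """q6 = |(1/N_b) * sum over bonds of exp(6 i theta)| with Delaunay bonds."""
    tri = Delaunay(points)
    bonds = set()
    for simplex in tri.simplices:          # each triangle contributes its three edges
        a, b, c = sorted(int(v) for v in simplex)
        bonds.update({(a, b), (a, c), (b, c)})
    i, j = np.array(sorted(bonds)).T
    theta = np.arctan2(points[j, 1] - points[i, 1], points[j, 0] - points[i, 0])
    # exp(6 i theta) is unchanged under theta -> theta + pi, so bond orientation is immaterial
    return np.abs(np.exp(6j * theta).mean())
\\end{verbatim}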
It is\nimportant to emphasize that these disordered hyperuniform avian\nphotoreceptor patterns are {\\it not} simple random perturbations\nof a triangular-lattice pattern. Statistically equivalent\ndisordered hyperuniform patterns have also been obtained from\ndisordered initial configurations (e.g., RSA packings). Thus, the\nunique structural features in these patterns are not attributed to\nparticular initial configurations but rather arise from the two\ncompeting effects, which are well captured by our multiscale\npacking model.\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[width=10.5cm,keepaspectratio]{fig9.eps}\\\\\n\\end{center}\n\\caption{Simulated point configurations representing the spatial\narrangements of chicken cone photoreceptors. Upper panels: The\nconfigurations shown from left to right respectively correspond to\nviolet, blue, green species. Lower panels: The configurations\nshown from left to right respectively correspond to red, double\nspecies and the overall pattern. The simulated patterns for\nindividual photoreceptor species are virtually indistinguishable\nfrom the actual patterns obtained from experimental measurements.}\n\\label{fig_simupacking}\n\\end{figure*}\n\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[height=10cm,keepaspectratio]{fig10.eps} \\\\\n\\end{center}\n\\caption{Comparison of the structure factors $S(k)$ of the\nexperimentally obtained and simulated point configurations\nrepresenting the spatial arrangements of chicken cone\nphotoreceptors. The simulation data were obtained by averaging 50\nindependent configurations.} \\label{fig_simuSk}\n\\end{figure*}\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[height=10cm,keepaspectratio]{fig11.eps} \\\\\n\\end{center}\n\\caption{Comparison of the pair correlation functions $g_2(r)$ of\nthe experimentally obtained and simulated point configurations\nrepresenting the spatial arrangements of chicken cone\nphotoreceptors. The simulation data were obtained by averaging 50\nindependent configurations. The distance is rescaled by the\naverage nearest neighbor distance $d_n$ in the system.}\n\\label{fig_simug2}\n\\end{figure*}\n\n\n\\begin{table*}[h]\n\\caption{Comparison of the bond-orientational and translational\norder metrics, $q_6$ and $T$, of the experimental and simulated\npoint configurations. The simulation data were obtained by\naveraging 50 independent configurations.}\n\\begin{tabular}{@{\\vrule height 10.5pt depth4pt width0pt}c|c|c|c|c }\n\\hline\n& \\multicolumn{2}{|c|}{$q_6$} & \\multicolumn{2}{|c}{$T$} \\\\\n~Species~& ~Exp.~ & ~Sim.~ & ~Exp.~ & ~Sim.~ \\\\\n\\hline\nViolet & 0.150 & 0.148 & 0.304 & 0.327 \\\\\nBlue & 0.158 & 0.164 & 0.411 & 0.395 \\\\\nGreen & 0.130 & 0.134 & 0.278 & 0.266 \\\\\nRed & 0.147 & 0.149 & 0.254 & 0.263 \\\\\nDouble & 0.184 & 0.189 & 0.390 & 0.363\\\\\nAll & 0.058 & 0.063 & 0.096 & 0.108 \\\\\n\\hline\n\\end{tabular}\n\\label{tab_simu}\n\\end{table*}\n\n\n\nThe simulation box contains 2600 cell centers and the\nnumbers of violet, blue, green, red and double species are\nrespectively 210, 355, 530, 405, and 1100. The\nrelative sizes of the violet, blue, green, red and double species\nare 1.00, 1.19, 1.13, 1.06 and 1.50, respectively.\nThe initial packing fraction associated with the hard cores\nis $\\phi_I = 0.45$ and the simulation stops at $\\phi_F = 0.7$.\nAt $\\phi \\approx 0.58$, the resulting\nconfigurations (see Fig. 
\\ref{fig_simupacking}) are virtually\nindistinguishable from the actual photoreceptor patterns, as\nquantified using a variety of descriptors. Specifically, the\nassociated structure factors (see Fig. \\ref{fig_simuSk}) and pair\ncorrelation functions (see Fig. \\ref{fig_simug2}) match the\nexperimental data very well, as quantified by the minimum\ndeviation metric value of $\\Delta_{min} \\approx 0.4$ [c.f. Eq.(\\ref{eq_delta})].\nWe note that the major contributions to $\\Delta_{min}$ are the large\nfluctuations in the experimental data due to a limited number of samples.\n(The initial value of $\\Delta$ is roughly $3.16$.)\nThe order metrics $q_6$ and $T$ of the simulated pattern also match\nthose of the experimental data very well (see Tab. \\ref{tab_simu}).\nThis is a stringent test for the simulations to pass. The success\nof the simulations strongly suggests that the disordered\nhyperuniform photoreceptor patterns indeed arise from the\ncompetition between cell packing constraints and the tendency to\nmaximize the degree of regularity for efficient light sampling,\nsuggesting that the individual photoreceptor types are as uniform\nas they can be, given the packing constraints within the\nphotoreceptor epithelium.\n\n\n\\section{Conclusions and Discussion}\n\nBy analyzing the chicken cone photoreceptor patterns using a\nvariety of sensitive microstructural descriptors arising in\nstatistical mechanics and particle-packing theory, we found that\nthese disordered patterns display both overall and homotypic\nhyperuniformity, i.e., the system is multi-hyperuniform. This\nsingular property implies that if any subset of the individual\nspecies is removed from the overall population, the remaining\npattern is still hyperuniform. Importantly, it is highly\nnontrivial to devise an algorithm that would remove a large\nfraction of the points from a disordered hyperuniform system while\nleaving the remaining point pattern hyperuniform, and yet Nature\nhas found such a design.\n\nUntil now, the property of \\textit{overall} hyperuniformity was\nidentified only in a special subset of disordered physical\nsystems, including ground-state liquid helium \\cite{helium},\none-component plasmas \\cite{plasma}, Harrison-Zeldovich power\nspectrum of the density fluctuations of the early Universe\n\\cite{universe}, fermionic ground states \\cite{fermion}, classical\ndisordered ground states \\cite{ref17}, and maximally random jammed\npackings of equal-sized hard particles \\cite{aleksPRL, jiaoPRE}.\nAll of these examples involve single-component systems. More\nrecently, disordered multicomponent physical systems such as\nmaximally random jammed (MRJ) hard-particle packings \\cite{ref16,\nref18, ref15} have been identified that possess an appropriately\ngeneralized hyperuniformity property ascertained from the local\nvolume fraction fluctuations. However, the multicomponent\nphotoreceptor avian system pattern, which represents the first\nexample of a disordered hyperuniform system in a living organism,\nis singularly different from any of these hyperuniform physical\nsystems in that in the pattern each species and the total\npopulation are hyperuniform, i.e., the avian patterns are\nmulti-hyperuniform. Although it is not very difficult to construct\na overall hyperuniform system by superposing subsystems that are\nindividually hyperuniform, the reverse process (i.e., decomposing\na hyperuniform system into individually hyperuniform subsets) is\nhighly nontrivial. 
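In practice, both homotypic and overall hyperuniformity can be assessed with the same numerical estimate of $S(k)$, applied to each species separately and to the full pattern. A minimal sketch of such an estimator for a square box of side $L$ is given below; the variable names and the small-$k$ extrapolation cutoff are illustrative.
\\begin{verbatim}
import numpy as np

def structure_factor(points, box, nmax=40):
    """S(k) = |sum_j exp(-i k . r_j)|^2 / N at the wavevectors allowed by a square box."""
    N = len(points)
    kvals, Svals = [], []
    for n1 in range(0, nmax + 1):
        for n2 in range(-nmax, nmax + 1):
            if n1 == 0 and n2 <= 0:
                continue                   # skip k = 0 and count each +/-k pair only once
            k = (2.0 * np.pi / box) * np.array([n1, n2])
            rho_k = np.exp(-1j * (points @ k)).sum()
            kvals.append(np.hypot(k[0], k[1]))
            Svals.append(np.abs(rho_k)**2 / N)
    kvals, Svals = np.asarray(kvals), np.asarray(Svals)
    order = np.argsort(kvals)
    return kvals[order], Svals[order]

# Schematic multi-hyperuniformity check: extrapolate S(k) to k -> 0 for every subset.
# 'species_points' is a hypothetical dict mapping species name to an (N_i, 2) array.
# for name, pts in list(species_points.items()) + [("overall", all_points)]:
#     k, S = structure_factor(pts, box=L)
#     print(name, np.polyfit(k[k < k_cut], S[k < k_cut], 1)[1])  # intercept ~ S(0)
\\end{verbatim}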
It will be of interest to identify other\ndisordered hyperuniform biological systems. It is likely that some\nother epithelial tissues and phyllotactic systems \\cite{ref27}\npossess such attributes. Interestingly, it has been shown that the\nlarge-scale number-density fluctuations associated with the\nmalignant cells in brain tumors are significantly suppressed,\nalthough the cell patterns in such brain tumors are not\nhyperuniform \\cite{plos}.\n\n\nIn addition, the photoreceptor patterns possess quasi-long-range\n(QLR) correlations as indicated by the linear small-$k$ behavior\nin $S(k)$. Such QLR correlations are also observed in the\nground-state liquid helium \\cite{helium}, the density fluctuations\nof the early Universe \\cite{universe}, fermionic ground states\n\\cite{fermion} and MRJ packings of hard particles \\cite{ref16,\nref18, ref15}. In the MRJ particle packings, it is believed that\nthe QLR correlations arise from the competition between the\nrequirement of jamming and maximal disorder in the system\n\\cite{ref16, ref18, ref15}. As we showed by employing the\nmultiscale packing model, the multicomponent avian system that is\nboth homotypic and overall hyperuniform, i.e., multi-hyperuniform,\ncan result from two competing interactions between the\nphotoreceptors.\n\n\\begin{figure*}\n\\begin{center}\n\\includegraphics[width=10.5cm,keepaspectratio]{fig2.eps}\\\\\n\\end{center}\n\\caption{Left panel: A random-sequential-addition (RSA) packing of\nhard, identical circular disks in two dimensions with a packing\nfraction $\\phi = 0.54$, which is close to the saturation state.\nRight panel: An equilibrium system of hard, identical disks at\n$\\phi = 0.54$. The fact that neither of these systems is\nhyperuniform, as discussed in the text, indicates that hard-core\nexclusion effects alone are not sufficient to induce\nhyperuniformity.} \\label{fig2_rsa}\n\\end{figure*}\n\nIt is noteworthy that while hard-core exclusion and high density\nin a disordered particle packing are necessary conditions to\nachieve a hyperuniform state, these are not sufficient conditions.\nFigure \\ref{fig2_rsa} shows a nonequilibrium\nrandom-sequential-addition (RSA) packing of hard circular disks in\ntwo dimensions with a packing fraction $\\phi = 0.54$ (left panel),\nwhich is generated by randomly and sequentially placing hard disks\nin a domain without overlapping existing disks, until there is no\nroom for additional disks \\cite{rsa}. The right panel of Fig.\n\\ref{fig2_rsa} shows an equilibrium system of hard disks at $\\phi\n= 0.54$. The structure factor values at $k=0$ for\nthe RSA and equilibrium systems are respectively given by $S(0)=\n0.059$ \\cite{rsa} and $S(0) = 0.063$ \\cite{salbook, newref43,\nfootnote}. Although hard-core exclusion plays a central role in\nthese two distinct high-density packings, neither packing is\nhyperuniform, as indicated by the relatively large positive values\nof the corresponding $S(0)$.\n\n\n\nTo understand the origin of the unique spatial features of the\navian photoreceptor patterns, we have devised a unique multiscale\ncell packing model that suggests that photoreceptor types interact\nwith both short- and long-ranged repulsive forces and that the\nresultant competition between the types gives rise to the singular\ncell patterns. 
The fact that a disordered hyperuniform pattern\ncorresponds to a local optimum associated with the multiscale\npacking problem indicates that such a pattern may represent the\nmost uniform sampling arrangement attainable in the avian system,\ninstead of the theoretical optimal solution of a regular hexagonal\narray. Specifically, our studies show how fundamental physical\nconstraints can change the course of a biological optimization\nprocess. Although it is clear that physical cell packing\nconstraints are the likely cause of the short-range hard-core\nrepulsion, the origin of the effective longer-range soft-core\nrepulsion is less obvious. We hypothesize that repulsive forces of\nthis type occur during retinal development and may be secondary to\ncell-cell interactions during photoreceptor neurogenesis. However,\na comprehensive test of this hypothesis is beyond the scope of\nthis investigation, and therefore its resolution represents a\nfascinating avenue for future research.\n\n\nRecent studies have shown that disordered hyperuniform materials\ncan be created that possess unique optical properties, such as\nbeing ``stealthy'' (i.e., transparent to incident radiation at\ncertain wavelengths) \\cite{ref17}. Moreover, such disordered\nhyperuniform point patterns have been employed to design isotropic\ndisordered network materials that possess complete photonic band\ngaps (blocking all directions and polarizations of light)\ncomparable in size to those in photonic crystals \\cite{ref28,\nman13}. While the physics of these systems are not directly\nrelated to the avian photoreceptor patterns, such investigations\nand our present findings demonstrate that a class of disordered\nhyperuniform materials are endowed with novel photonic properties.\n\nBesides capturing the unusual structural features of photoreceptor\npatterns, our multiscale packing model represents a unique\nalgorithm that allows one to generate multi-hyperuniform\nmulticomponent systems with varying degrees of order by tuning the\npacking fraction $\\phi$ of the hard-core exclusion regions (see\nAppendix for additional examples). This knowledge could now be\nexploited to produce multi-hyperuniform disordered structures for\napplications in condensed matter physics and materials science. For example, it would be of\ninterest to explore whether colloidal systems can be synthesized\nto have such repulsive interactions in order to self assemble\ninto the aforementioned unique disordered arrangements and to study\nthe resulting optical properties. It is noteworthy that it has\nalready been demonstrated that three-dimensional disordered hyperuniform\npolymer networks can be fabricated for photonic applications using direct\nlaser writing \\cite{polymer}.\n\n\n\n\n\n\n\\begin{acknowledgments}\nThe authors are grateful to Paul Steinhardt for useful\ndiscussions. Y. J. and S. T. were supported by the National Cancer\nInstitute under Award NO. U54CA143803 and by the Division of\nMathematical Sciences at the National Science Foundation under\nAward No. DMS-1211087. J.C.C. was supported by NIH grants\n(EY018826, HG006346 and HG006790) and J.C.C., M.M.-H. and H.H.\nwere supported by a grant from the Human Frontier Science Program.\nH.H. also acknowledges the support of the German Research\nFoundation (DFG) within the Cluster of Excellence, ``Center for\nAdvancing Electronics Dresden''. This work was partially supported\nby a grant from the Simons Foundation (Grant No. 
231015 to\nSalvatore Torquato).\n\\end{acknowledgments}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}